PiML Toolbox

A low-code interpretable machine learning toolbox in Python

Installation | Examples | Usage | Citations

PiML (or π·ML, /ˈpaɪ·ˈem·ˈel/) is a new Python toolbox for interpretable machine learning model development and validation. Through low-code automation and high-code programming, PiML supports various machine learning models, including:

  • Inherently interpretable models:
  1. EBM: Explainable Boosting Machine (Nori, et al. 2019; Lou, et al. 2013)
  2. GAMI-Net: Generalized Additive Model with Structured Interactions (Yang, Zhang and Sudjianto, 2021)
  3. ReLU-DNN: Deep ReLU Networks using Aletheia Unwrapper and Sparsification (Sudjianto, et al. 2020)
  • Arbitrary machine learning models, e.g.
  1. Tree-ensembles: RF, GBM, XGBoost, LightGBM, ...
  2. DNNs: MLP, ResNet, CNN, Attention, ...
  3. Kernel methods: SVM, Gaussian Process, ...

Installation

pip install PiML  
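To check that the installation works, import the toolbox's main entry point used throughout this README (a minimal sketch; the __version__ attribute is an assumption and may be absent in some releases):

import piml
from piml import Experiment  # the entry point used in the examples below

print(getattr(piml, "__version__", "version attribute not available"))  # attribute may not exist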

Low-code Examples

Click the ipynb links to run examples in Google Colab:

  1. BikeSharing data: ipynb
  2. CaliforniaHousing data: ipynb
  3. TaiwanCredit data: ipynb
  4. Upload custom data in two ways: ipynb

Begin your own PiML journey with this demo notebook.

Low-code Usage on Google Colab

Stage 1: Initialize an Experiment, Load and Prepare Data

from piml import Experiment

exp = Experiment(platform="colab")  # set up a new experiment for Google Colab
exp.data_loader()     # load or upload a dataset through the widget

exp.data_summary()    # summary statistics of the loaded data

exp.data_prepare()    # choose target variable, task type, and train/test split

exp.eda()             # exploratory data analysis plots
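Each call above opens an interactive widget. The same stage also runs in scripted (high-code) form; a minimal sketch, assuming data_loader takes a built-in dataset name and data_prepare takes target/task/split keywords as in the PiML documentation:

from piml import Experiment

exp = Experiment(platform="colab")
exp.data_loader(data="BikeSharing")  # assumed: load a built-in demo dataset by name
exp.data_prepare(target="cnt", task_type="regression", test_ratio=0.2)  # assumed keyword names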

Stage 2: Train Interpretable Models

exp.model_train()  # choose and train inherently interpretable models through the widget

Stage 3: Explain and Interpret

exp.model_explain()    # post-hoc explainability: feature importance, PDP, and more

exp.model_interpret()  # inherent interpretation of the trained glass-box models

Stage 4: Diagnose and Compare

exp.model_diagnose()   # model diagnostics: accuracy, robustness, reliability, ...

exp.model_compare()    # compare all registered models side by side
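Both panels can also be driven programmatically by naming registered models and the panel to display; a minimal sketch with assumed keyword names (model, models, show) following the PiML documentation:

exp.model_diagnose(model="EBM", show="accuracy_table")  # assumed: tabular accuracy metrics for one model
exp.model_compare(models=["EBM", "GAMI-Net"], show="accuracy_plot")  # assumed: side-by-side accuracy plot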

Arbitrary Black-Box Modeling

For example, train a complex LightGBM with depth 7 and register it to the experiment:

from lightgbm import LGBMRegressor

pipeline = exp.make_pipeline(LGBMRegressor(max_depth=7))  # wrap the model in a PiML pipeline
pipeline.fit()                                            # fit on the prepared train split
exp.register(pipeline=pipeline, name='LGBM')              # register it for explain/diagnose/compare

Then, compare it to inherently interpretable models (e.g. EBM and GAMI-Net):

exp.model_compare()
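The same pattern extends to any scikit-learn-compatible estimator. For instance, an XGBoost model can be registered alongside LGBM (a sketch: XGBRegressor comes from the xgboost package; everything else mirrors the LightGBM example above):

from xgboost import XGBRegressor

pipeline = exp.make_pipeline(XGBRegressor(max_depth=5))  # wrap like the LightGBM example
pipeline.fit()
exp.register(pipeline=pipeline, name='XGB')

exp.model_compare()  # the comparison now includes the interpretable models, LGBM, and XGB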

Citations

PiML, ReLU-DNN Aletheia and GAMI-Net

"PiML: A Python Toolbox for Interpretable Machine Learning Model Development and Validation" (A. Sudjianto, A. Zhang, Z. Yang, Y. Su, N. Zeng and V. Nair, 2022)

@article{sudjianto2022piml,
  title={PiML: A Python Toolbox for Interpretable Machine Learning Model Development and Validation},
  author={Sudjianto, Agus and Zhang, Aijun and Yang, Zebin and Su, Yu and Zeng, Ningzhou and Nair, Vijay},
  journal={To appear},
  year={2022}
}

"Designing Inherently Interpretable Machine Learning Models" (A. Sudjianto and A. Zhang, 2021) arXiv link

@article{sudjianto2021designing,
  title={Designing Inherently Interpretable Machine Learning Models},
  author={Sudjianto, Agus and Zhang, Aijun},
  journal={arXiv preprint:2111.01743},
  year={2021}
}

"Unwrapping The Black Box of Deep ReLU Networks: Interpretability, Diagnostics, and Simplification" (A. Sudjianto, W. Knauth, R. Singh, Z. Yang and A. Zhang, 2020) arXiv link

@article{sudjianto2020unwrapping,
  title={Unwrapping the black box of deep ReLU networks: interpretability, diagnostics, and simplification},
  author={Sudjianto, Agus and Knauth, William and Singh, Rahul and Yang, Zebin and Zhang, Aijun},
  journal={arXiv preprint:2011.04041},
  year={2020}
}

"GAMI-Net: An Explainable Neural Network based on Generalized Additive Models with Structured Interactions" (Z. Yang, A. Zhang, and A. Sudjianto, 2021) arXiv link

@article{yang2021gami,
  title={GAMI-Net: An explainable neural network based on generalized additive models with structured interactions},
  author={Yang, Zebin and Zhang, Aijun and Sudjianto, Agus},
  journal={Pattern Recognition},
  volume={120},
  pages={108192},
  year={2021}
}

EBM and GA2M

"InterpretML: A Unified Framework for Machine Learning Interpretability" (H. Nori, S. Jenkins, P. Koch, and R. Caruana, 2019)

@article{nori2019interpretml,
  title={InterpretML: A Unified Framework for Machine Learning Interpretability},
  author={Nori, Harsha and Jenkins, Samuel and Koch, Paul and Caruana, Rich},
  journal={arXiv preprint:1909.09223},
  year={2019}
}

"Accurate intelligible models with pairwise interactions" (Y. Lou, R. Caruana, J. Gehrke, and G. Hooker, 2013)

@inproceedings{lou2013accurate,
  title={Accurate intelligible models with pairwise interactions},
  author={Lou, Yin and Caruana, Rich and Gehrke, Johannes and Hooker, Giles},
  booktitle={Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining},
  pages={623--631},
  year={2013},
  organization={ACM}
}
