Hi there 👋 I'm Chandan, a Senior Researcher at Microsoft Research working on interpretable machine learning. Homepage / Twitter / Google Scholar / LinkedIn
imodels Interpretable and accurate predictive modeling, sklearn-compatible (JOSS 2021). Contains FIGS (PNAS 2022) and HSTree (ICML 2022); see the usage sketch after this list
imodelsX Interpretability for text. Contains Aug-imodels (Nature Communications 2023), Tree-Prompt (EMNLP 2023), iPrompt (ICLR workshop 2023), SASC (NeurIPS workshop 2023), and QA-Embs (NeurIPS 2024)
adaptive-wavelets Adaptive, interpretable wavelets (NeurIPS 2021)
Utilities for trustworthy data science (JOSS 2021)
deep-explanation-penalization Penalizing neural-network explanations (ICML 2020)
hierarchical-dnn-interpretations Hierarchical interpretations for neural network predictions (ICLR 2019)
transformation-importance Feature importance for transformations (ICLR Workshop 2020)
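As a quick illustration of the sklearn-compatible API in imodels, here is a minimal usage sketch. It assumes the FIGSClassifier estimator and its max_rules argument behave like standard sklearn estimators; check the imodels documentation for the exact interface.

```python
# Minimal sketch of fitting an interpretable model with imodels (sklearn-style API).
# FIGSClassifier and max_rules are assumed names -- verify against the imodels docs.
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from imodels import FIGSClassifier  # sum-of-trees model from the FIGS paper

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = FIGSClassifier(max_rules=10)  # cap the number of rules to keep the model small
model.fit(X_train, y_train)

print(model)  # the fitted model is meant to be inspected directly as a set of trees/rules
preds = model.predict(X_test)
print("accuracy:", accuracy_score(y_test, preds))
```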
covid19-severity-prediction Extensive COVID-19 data + forecasting for counties and hospitals (HDSR 2021)
clinical-rule-vetting General pipeline for deriving clinical decision rules
iai-clinical-decision-rule Clinical decision rules for predicting intra-abdominal injury (PLOS Digital Health 2022)
molecular-partner-prediction Predicting successful CME events using only clathrin markers
gan-vae-pretrained-pytorch Pretrained GANs + VAEs + classifiers for MNIST/CIFAR in pytorch
gpt-paper-title-generator Generating paper titles with GPT-2
disentangled-attribution-curves Attribution curves for interpreting tree ensembles (arXiv 2019)
matching-with-gans Matching in GAN latent space for better bias benchmarking (CVPR workshop 2021)
data-viz-utils Functions for easily making publication-quality figures with matplotlib
mdl-complexity Revisiting complexity and the bias-variance tradeoff (JMLR 2021)
pasta Post-hoc Attention Steering for LLMs (ICLR 2024), led by Qingru Zhang
meta-tree Learning a Decision Tree Algorithm with Transformers (TMLR 2024), led by Yufan Zhuang
explanation-consistency-finetuning Consistent Natural-Language Explanations (COLING 2025), led by Yanda Chen
induction-gram Interpretable Language Modeling via Induction-head Ngram Models (arXiv 2024), led by Eunji Kim & Sriya Mantena
Major: autogluon, big-bench, nl-augmenter
Minor: conference-acceptance-rates, iterative-random-forest, interpretable-ml-book, awesome-interpretable-machine-learning, awesome-machine-learning-interpretability, awesome-llm-interpretability, executable-books, deep-fMRI-dataset
hummingbird-tracking, imodels-experiments, cookiecutter-ml-research, nano-descriptions, news-title-bias, java-mini-games, imodels-data, news-balancer, arxiv-copier, dnn-experiments, max-activation-interpretation-pytorch, acronym-generator, hpa-interp, sensible-local-interpretations, global-sports-analysis, mouse-brain-decoding, ...