Interpretable Language Modeling via Induction-head Ngram Models (Kim*, Mantena* et al. 2024).
Recent large language models (LLMs) have excelled across a wide range of tasks, but their use in high-stakes and compute-limited settings has intensified the demand for interpretability and efficiency. We address this need by proposing Induction-head ngram models (Induction-Gram), a method that builds an efficient, interpretable LM by bolstering modern ngram models with a hand-engineered "induction head". This induction head uses a custom neural similarity metric to efficiently search the model's input context for potential next-word completions. This process enables Induction-Gram to provide ngram-level grounding for each generated token. Moreover, experiments show that this simple method significantly improves next-word prediction over baseline interpretable models (up to 26%p) and can be used to speed up LLM inference for large models through speculative decoding. We further study Induction-Gram in a natural-language neuroscience setting, where the goal is to predict the next fMRI response in a sequence. It again provides a significant improvement over interpretable models (20% relative increase in the correlation of predicted fMRI responses), potentially enabling deeper scientific investigation of language selectivity in the brain.
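At its core, the induction head follows a simple pattern: scan the input context for earlier occurrences of the current suffix and propose the tokens that followed them. The sketch below illustrates this idea with exact suffix matching; it is a simplification, not the repo's implementation, since Induction-Gram's actual induction head scores matches with a learned neural similarity metric (allowing fuzzy matches) and backs off to a modern ngram model when the context yields no match. All names here are illustrative.

```python
# Minimal sketch of the induction-head idea (not the paper's implementation):
# find earlier occurrences of the context's suffix and propose the token that
# followed them. Induction-Gram replaces the exact-match comparison below with
# a neural similarity metric and falls back to an ngram model when no match
# is found.
from collections import Counter

def induction_head_predict(context_tokens, max_suffix=8, min_suffix=1):
    """Return candidate next tokens by matching the context's suffix
    against earlier positions in the same context."""
    for n in range(min(max_suffix, len(context_tokens) - 1), min_suffix - 1, -1):
        suffix = context_tokens[-n:]
        candidates = Counter()
        # Look for earlier occurrences of this suffix (exact match here;
        # the paper uses fuzzy matching via a neural similarity metric).
        for i in range(len(context_tokens) - n):
            if context_tokens[i:i + n] == suffix:
                candidates[context_tokens[i + n]] += 1
        if candidates:
            # Longest matching suffix wins; the matched positions and counts
            # give an interpretable ngram-level grounding for the prediction.
            return candidates.most_common()
    return []  # no in-context match: fall back to the ngram model

print(induction_head_predict("the cat sat on the mat and the cat".split()))
# -> [('sat', 1)] : the earlier occurrence of "the cat" was followed by "sat"
```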
- Clone the repo and run `pip install -e .` to install the `alm` package locally.
- Set the paths/env variables in `alm/config.py` to point to the correct directories for storing data (see the sketch after this list).
- Experiments
  - For language modeling experiments, please refer to `experiments/readme.md`.
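The variable names below are hypothetical; check `alm/config.py` in the repo for the actual ones. The idea is simply to point the package at directories where data and results should live:

```python
# alm/config.py -- hypothetical sketch; the real file defines its own names.
import os

# Where downloaded/preprocessed data is stored (assumed variable name).
DATA_DIR = os.environ.get("ALM_DATA_DIR", os.path.expanduser("~/alm_data"))

# Where experiment results are saved (assumed variable name).
SAVE_DIR = os.environ.get("ALM_SAVE_DIR", os.path.join(DATA_DIR, "results"))
```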
- `alm` is the main package
  - `alm/config.py` is the configuration file that points to where things should be stored
  - `alm/data` contains data utilities
  - `alm/models` contains source for models
- `data` contains code for preprocessing data
- `experiments` contains code for language modeling experiments
If you find this work useful, please cite:

```bibtex
@misc{kim2024inductiongram,
    title={Interpretable Language Modeling via Induction-head Ngram Models},
    author={Eunji Kim and Sriya Mantena and Weiwei Yang and Chandan Singh and Sungroh Yoon and Jianfeng Gao},
    year={2024},
    eprint={2411.00066},
    archivePrefix={arXiv},
    primaryClass={cs.CL},
    url={https://arxiv.org/abs/2411.00066},
}
```