MuCoT: Multilingual Contrastive Training for Question-Answering in Low-resource Languages

MBZUAI - ML701 Course Project (Machine Learning by Dr. Salman Khan)

🎉 Accepted for oral presentation at ACL 2022 Workshop on Speech and Language Technologies for Dravidian Languages


Accuracy of English-language Question Answering (QA) systems has improved significantly in recent years with the advent of Transformer-based models (e.g., BERT). These models are pre-trained in a self-supervised fashion with a large English text corpus and further fine-tuned with a massive English QA dataset (e.g., SQuAD). However, QA datasets on such a scale are not available for most of the other languages. Multi-lingual BERT-based models (mBERT) are often used to transfer knowledge from high-resource languages to low-resource languages. Since these models are pre-trained with huge text corpora containing multiple languages, they typically learn language-agnostic embeddings for tokens from different languages. However, directly training an mBERT-based QA system for low-resource languages is challenging due to the paucity of training data. In this work, we augment the QA samples of the target language using translation and transliteration into other languages and use the augmented data to fine-tune an mBERT-based QA model, which is already pre-trained in English. Experiments on the Google ChAII dataset show that fine-tuning the mBERT model with translations from the same language family boosts the question-answering performance, whereas the performance degrades in the case of cross-language families. We further show that introducing a contrastive loss between the translated question-context feature pairs during the fine-tuning process prevents such degradation with cross-lingual family translations and leads to marginal improvement. The code for this work is available at https://github.com/gokulkarthik/mucot.

TL;DR: We use contrastive loss between the translated pairs during fine-tuning to improve multilingual BERT for question answering in low-resource languages.
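A minimal sketch of the kind of contrastive objective described above, assuming a symmetric InfoNCE-style loss over mean-pooled mBERT features of original and translated question-context pairs. The function names (encode_pooled, contrastive_loss), the pooling choice, and the temperature value are illustrative assumptions, not the exact implementation in this repository:

import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
encoder = AutoModel.from_pretrained("bert-base-multilingual-cased")

def encode_pooled(questions, contexts):
    # Encode question+context pairs with mBERT and mean-pool the token features.
    batch = tokenizer(questions, contexts, padding=True, truncation=True,
                      max_length=384, return_tensors="pt")
    hidden = encoder(**batch).last_hidden_state          # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1)          # (B, T, 1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)   # (B, H)

def contrastive_loss(z_orig, z_trans, temperature=0.1):
    # Symmetric InfoNCE: matching (original, translated) pairs are positives,
    # all other pairs in the batch act as negatives.
    z_orig = F.normalize(z_orig, dim=-1)
    z_trans = F.normalize(z_trans, dim=-1)
    logits = z_orig @ z_trans.t() / temperature           # (B, B)
    labels = torch.arange(logits.size(0))
    return 0.5 * (F.cross_entropy(logits, labels)
                  + F.cross_entropy(logits.t(), labels))

# During fine-tuning, this auxiliary loss would be added (with some weight)
# to the standard span-prediction QA loss computed on the augmented samples.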

[ArXiv Preprint] [ACL Slides] [ACL Video] [ACL Anthology]

Bibtex:

@inproceedings{kumar-etal-2022-mucot,
    title = "{M}u{C}o{T}: Multilingual Contrastive Training for Question-Answering in Low-resource Languages",
    author = "Kumar, Gokul Karthik  and
      Gehlot, Abhishek  and
      Mullappilly, Sahal Shaji  and
      Nandakumar, Karthik",
    booktitle = "Proceedings of the Second Workshop on Speech and Language Technologies for Dravidian Languages",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.dravidianlangtech-1.3",
    doi = "10.18653/v1/2022.dravidianlangtech-1.3",
    pages = "15--24",
}


Idea

Results

Datasets

Chaii

  • Chaii Original - Kaggle
  • Chaii Translated & Transliterated - Kaggle

Run data/chaii_split.py from the root directory to make the train-val-test splits.
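
For example, from the repository root (a hypothetical invocation, assuming the script needs no additional arguments):

python data/chaii_split.py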

Models

Pretrained:
