# multilingual-evaluation

Here are 6 public repositories matching this topic...


We introduce MKQA, an open-domain question answering evaluation set comprising 10k question-answer pairs aligned across 26 typologically diverse languages (260k question-answer pairs in total). The goal of this dataset is to provide a challenging benchmark for question answering quality across a wide set of languages. Please refer to our paper for details.

  • Updated Jun 16, 2022
  • Python
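For orientation, here is a minimal sketch of how one might load and inspect MKQA with the Hugging Face `datasets` library. The Hub id `apple/mkqa` and the `queries`/`answers` field names are assumptions based on the public mirror of the dataset, not something stated on this page.

```python
# Minimal sketch: load MKQA and look at one aligned example.
# Assumptions (not from this page): the dataset is mirrored on the Hugging Face Hub
# as "apple/mkqa", has a single "train" split of ~10k examples, and exposes
# per-language "queries" and "answers" fields.
from datasets import load_dataset

# Depending on the datasets version, trust_remote_code=True may be required
# for script-based datasets.
mkqa = load_dataset("apple/mkqa", split="train")

example = mkqa[0]
# Each record aligns one question (and its answers) across the 26 languages.
print(example["queries"]["en"])
print(example["queries"]["ja"])
print(example["answers"]["en"])
```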
