---
dataset_info:
  features:
    - name: question
      dtype: string
    - name: options
      sequence: string
    - name: rationale
      dtype: string
    - name: label
      dtype: string
    - name: label_idx
      dtype: int64
    - name: dataset
      dtype: string
  splits:
    - name: train
      num_bytes: 203046319
      num_examples: 200000
    - name: validation
      num_bytes: 264310
      num_examples: 519
  download_size: 122985245
  dataset_size: 203310629
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: validation
        path: data/validation-*
license: apache-2.0
task_categories:
  - multiple-choice
language:
  - en
size_categories:
  - 100K<n<1M
---

# MNLP M2 MCQA Dataset

A unified multiple-choice question answering (MCQA) benchmark on STEM subjects combining samples from OpenBookQA, SciQ, MMLU-auxiliary, AQUA-Rat, and MedMCQA.

## Dataset Summary

This dataset merges five existing science and knowledge-based MCQA datasets into one standardized format:

| Source     | Train samples |
|------------|--------------:|
| OpenBookQA | 4,900         |
| SciQ       | 10,000        |
| MMLU-aux   | 85,100        |
| AQUA-Rat   | 50,000        |
| MedMCQA    | 50,000        |
| **Total**  | **200,000**   |

## Supported Tasks and Leaderboards

- **Task:** Multiple-Choice Question Answering (`multiple-choice-question-answering`)
- **Metrics:** Accuracy (see the scoring sketch below)
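
As a minimal scoring sketch, the `evaluate` library's accuracy metric can be applied to predicted answer indices against the gold `label_idx` column (the `preds` values here are placeholders, not real model output):

```python
import evaluate
from datasets import load_dataset

ds = load_dataset("NicoHelemon/MNLP_M2_mcqa_dataset", split="validation")

# Placeholder predictions: one zero-based answer index per example.
preds = [0] * len(ds)

metric = evaluate.load("accuracy")
print(metric.compute(predictions=preds, references=ds["label_idx"]))
# e.g. {'accuracy': 0.25}
```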

## Languages

- English

## Dataset Structure

Each example has the following fields:

| Name        | Type           | Description |
|-------------|----------------|-------------|
| `question`  | `string`       | The question stem. |
| `options`   | `list[string]` | The 4–5 answer choices. |
| `label`     | `string`       | The correct answer letter, e.g. `"A"` or `"a"`. |
| `label_idx` | `int`          | Zero-based index of the correct answer (0–4). |
| `rationale` | `string`       | (Optional) Supporting fact or rationale text. |
| `dataset`   | `string`       | Source dataset name (`openbookqa`, `sciq`, etc.). |
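
`label` and `label_idx` encode the same answer, so one can be derived from the other. A small illustration (assuming, as the examples above suggest, that labels are single letters A–E in either case):

```python
def letter_to_idx(label: str) -> int:
    # "A"/"a" -> 0, "B"/"b" -> 1, ..., "E"/"e" -> 4
    return ord(label.upper()) - ord("A")

assert letter_to_idx("A") == 0
assert letter_to_idx("c") == 2
```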

### Splits

```
DatasetDict({
    train: Dataset(num_rows=200000),
    validation: Dataset(num_rows=519),
})
```
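
A single split can also be loaded directly via the `split` argument:

```python
from datasets import load_dataset

# Load just the validation split (519 examples).
val = load_dataset("NicoHelemon/MNLP_M2_mcqa_dataset", split="validation")
print(val.num_rows)  # 519
```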

## Dataset Creation

1. **Source datasets**
   - OpenBookQA (`allenai/openbookqa`)
   - SciQ (`allenai/sciq`)
   - MMLU-auxiliary (`cais/mmlu`, config `all`)
   - AQUA-Rat (`deepmind/aqua_rat`)
   - MedMCQA (`openlifescienceai/medmcqa`)
2. **Sampling.** We sample each training split down to a fixed size (4,900–85,100 examples; see the table above). Validation examples are sampled per source by first computing each dataset's original validation-to-train ratio (`len(validation) / len(train)`), taking the minimum of these ratios and 5%, and then holding out that fraction from each source (see the sketch after the upload snippet).
3. **Unification.** All examples are mapped to a common schema (`question`, `options`, `label`, …) with minimal preprocessing.
4. **Push to Hub.**

```python
from datasets import DatasetDict, load_dataset, concatenate_datasets

# after loading, sampling, and mapping…
ds = DatasetDict({"train": combined, "validation": val_combined})
ds.push_to_hub("NicoHelemon/MNLP_M2_mcqa_dataset", private=False)
```
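
Step 2's hold-out might look roughly like the sketch below. This is not the actual build script: `sources` (name → original `DatasetDict`) and `sampled_train` (name → downsampled train split) are hypothetical names, and the exact rounding and seeding are assumptions.

```python
from datasets import concatenate_datasets

# `sources` maps each source name to its original DatasetDict; `sampled_train`
# maps each name to its already-downsampled train split (hypothetical names).
ratios = [len(d["validation"]) / len(d["train"]) for d in sources.values()]
frac = min(min(ratios), 0.05)  # smallest original val/train ratio, capped at 5%

val_parts = []
for name, d in sources.items():
    # Hold out the same fraction, scaled to each source's sampled train size.
    n_val = round(frac * len(sampled_train[name]))
    val_parts.append(d["validation"].shuffle(seed=42).select(range(n_val)))

combined = concatenate_datasets(list(sampled_train.values()))
val_combined = concatenate_datasets(val_parts)
```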

## Usage

```python
from datasets import load_dataset

ds = load_dataset("NicoHelemon/MNLP_M2_mcqa_dataset")
print(ds["train"][0])
# {
#   "question": "What can genes do?",
#   "options": ["Give a young goat hair that looks like its mother's hair", ...],
#   "label": "A",
#   "label_idx": 0,
#   "rationale": "Key fact: genes are a vehicle for passing inherited…",
#   "dataset": "openbookqa"
# }
```
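
Because every example records its origin in the `dataset` column, a single source can be pulled out with a filter:

```python
from datasets import load_dataset

ds = load_dataset("NicoHelemon/MNLP_M2_mcqa_dataset")

# Keep only the SciQ-derived examples.
sciq_only = ds["train"].filter(lambda ex: ex["dataset"] == "sciq")
print(sciq_only.num_rows)
```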

## Licensing

This collection is released under the Apache-2.0 license. The original source datasets may carry their own licenses; please check and cite them appropriately.

## Citation

If you use this dataset, please cite:

```bibtex
@misc{helemon2025m2mcqa,
  title        = {MNLP M2 MCQA Dataset},
  author       = {Nicolas Gonzalez},
  year         = {2025},
  howpublished = {\url{https://huggingface.co/datasets/NicoHelemon/MNLP_M2_mcqa_dataset}},
}
```

And please also cite the original datasets:

```bibtex
@misc{mihaylov2018suitarmorconductelectricity,
  title         = {Can a Suit of Armor Conduct Electricity? A New Dataset for Open Book Question Answering},
  author        = {Todor Mihaylov and Peter Clark and Tushar Khot and Ashish Sabharwal},
  year          = {2018},
  eprint        = {1809.02789},
  archivePrefix = {arXiv},
  primaryClass  = {cs.CL},
  url           = {https://arxiv.org/abs/1809.02789},
}

@misc{welbl2017crowdsourcingmultiplechoicescience,
  title         = {Crowdsourcing Multiple Choice Science Questions},
  author        = {Johannes Welbl and Nelson F. Liu and Matt Gardner},
  year          = {2017},
  eprint        = {1707.06209},
  archivePrefix = {arXiv},
  primaryClass  = {cs.HC},
  url           = {https://arxiv.org/abs/1707.06209},
}

@misc{hendrycks2021measuringmassivemultitasklanguage,
  title         = {Measuring Massive Multitask Language Understanding},
  author        = {Dan Hendrycks and Collin Burns and Steven Basart and Andy Zou and Mantas Mazeika and Dawn Song and Jacob Steinhardt},
  year          = {2021},
  eprint        = {2009.03300},
  archivePrefix = {arXiv},
  primaryClass  = {cs.CY},
  url           = {https://arxiv.org/abs/2009.03300},
}

@misc{ling2017programinductionrationalegeneration,
  title         = {Program Induction by Rationale Generation: Learning to Solve and Explain Algebraic Word Problems},
  author        = {Wang Ling and Dani Yogatama and Chris Dyer and Phil Blunsom},
  year          = {2017},
  eprint        = {1705.04146},
  archivePrefix = {arXiv},
  primaryClass  = {cs.AI},
  url           = {https://arxiv.org/abs/1705.04146},
}

@misc{pal2022medmcqalargescalemultisubject,
  title         = {MedMCQA: A Large-scale Multi-Subject Multi-Choice Dataset for Medical domain Question Answering},
  author        = {Ankit Pal and Logesh Kumar Umapathi and Malaikannan Sankarasubbu},
  year          = {2022},
  eprint        = {2203.14371},
  archivePrefix = {arXiv},
  primaryClass  = {cs.CL},
  url           = {https://arxiv.org/abs/2203.14371},
}
```