Balepur, Nishant; Ravichander, Abhilasha; Rudinger, Rachel
Artifacts or Abduction: How Do LLMs Answer Multiple-Choice Questions Without the Question? Miscellaneous
In-progress preprint, 2024.
Abstract | Links | BibTeX | Keywords: artificial intelligence, KI, large language models, LLM, multiple choice, O
@misc{balepur2024artifacts,
title = {Artifacts or Abduction: How Do LLMs Answer Multiple-Choice Questions Without the Question?},
author = {Nishant Balepur and Abhilasha Ravichander and Rachel Rudinger},
url = {https://doi.org/10.48550/arXiv.2402.12483},
doi = {10.48550/arXiv.2402.12483},
year = {2024},
date = {2024-01-01},
urldate = {2024-01-01},
abstract = {Multiple-choice question answering (MCQA) is often used to evaluate large language models (LLMs). To see if MCQA assesses LLMs as intended, we probe if LLMs can perform MCQA with choices-only prompts, where models must select the correct answer only from the choices. In three MCQA datasets and four LLMs, this prompt bests a majority baseline in 11/12 cases, with up to 0.33 accuracy gain. To help explain this behavior, we conduct an in-depth, black-box analysis on memorization, choice dynamics, and question inference. Our key findings are threefold. First, we find no evidence that the choices-only accuracy stems from memorization alone. Second, priors over individual choices do not fully explain choices-only accuracy, hinting that LLMs use the group dynamics of choices. Third, LLMs have some ability to infer a relevant question from choices, and surprisingly can sometimes even match the original question. We hope to motivate the use of stronger baselines in MCQA benchmarks, the design of robust MCQA datasets, and further efforts to explain LLM decision-making.},
howpublished = {In-progress preprint},
keywords = {artificial intelligence, KI, large language models, LLM, multiple choice, O},
pubstate = {published},
tppubtype = {misc}
}
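The choices-only probe described in this abstract is easy to reproduce in outline. Below is a minimal sketch, assuming a hypothetical `query_llm` callable standing in for whatever model API is used; the prompt wording, the majority-baseline computation, and all names are illustrative, not the paper's exact setup.

```python
from collections import Counter

def choices_only_prompt(choices):
    """Build a prompt showing only the answer choices, withholding the question."""
    letters = "ABCD"
    lines = [f"{letters[i]}. {c}" for i, c in enumerate(choices)]
    return ("Select the correct answer from the choices below. "
            "Respond with a single letter.\n" + "\n".join(lines) + "\nAnswer:")

def majority_baseline(gold_labels):
    """Accuracy from always guessing the most frequent gold label (e.g. 'A')."""
    _, count = Counter(gold_labels).most_common(1)[0]
    return count / len(gold_labels)

def evaluate_choices_only(dataset, query_llm):
    """Accuracy of a model answering from the choices alone.

    `dataset` is a list of (choices, gold_letter) pairs; `query_llm` is a
    hypothetical callable wrapping a model API and returning one letter.
    """
    correct = sum(query_llm(choices_only_prompt(choices)) == gold
                  for choices, gold in dataset)
    return correct / len(dataset)

if __name__ == "__main__":
    # Toy data with a trivial stand-in "model" that always answers "A".
    data = [(["Paris", "Rome", "Oslo", "Bern"], "A"),
            (["4", "5", "6", "7"], "B")]
    print("majority baseline:", majority_baseline([g for _, g in data]))
    print("choices-only accuracy:", evaluate_choices_only(data, lambda p: "A"))
```

The paper's headline comparison is exactly this pairing: if `evaluate_choices_only` beats `majority_baseline` (as it did in 11 of 12 dataset/model combinations), the model is exploiting something in the choices themselves rather than answering the question.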
Foster, David; Miller, Harold L.
A new format for multiple-choice testing: Discrete-Option Multiple-Choice. Results from early studies Journal Article
In: Psychology Science Quarterly, vol. 51, no. 4, pp. 355–369, 2009, ISSN: 1866-6140.
Abstract | Links | BibTeX | Keywords: computerized testing, Discrete-Option Multiple-Choice, fairness, multiple choice, self-assessment, test security
@article{Foster2009,
title = {A new format for multiple-choice testing: Discrete-Option Multiple-Choice. Results from early studies},
author = {David Foster and Harold L. Miller},
url = {https://doaj.org/article/9851131c12144827a1369f195773d083},
issn = {1866-6140},
year = {2009},
date = {2009-04-01},
urldate = {2018-06-13},
journal = {Psychology Science Quarterly},
volume = {51},
number = {4},
pages = {355–369},
abstract = {The standard multiple-choice format has remained relatively unchanged for nearly 100 years, even over the past 25 years as multiple-choice tests have been computerized. We introduce a unique version of the multiple-choice format that has the potential to improve a test’s measurement and security properties, along with other advantages. We summarize our research with college students on course-level exams to demonstrate these benefits and to establish the Discrete-Option Multiple-Choice (DOMC) format as not only a viable way to measure skills and content knowledge, but an essential one.},
keywords = {computerized testing, Discrete-Option Multiple-Choice, fairness, multiple choice, self-assessment, test security},
pubstate = {published},
tppubtype = {article}
}
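The abstract does not spell out the DOMC mechanics, but as commonly described in Foster and colleagues' work, options are presented one at a time in random order and the examinee accepts or rejects each one; under the basic scoring rule, the item ends correct when the key is accepted, and incorrect when the key is rejected or a distractor is accepted. Below is a minimal simulation sketch under those assumptions; the function names, the toy content, and the exact stopping rule are illustrative, not the authors' specification.

```python
import random

def administer_domc(options, key_index, respond, rng=random.Random()):
    """Simulate one Discrete-Option Multiple-Choice (DOMC) item.

    Options are shown one at a time in random order; `respond(option)`
    returns True (accept as the answer) or False (reject). Assumed basic
    scoring: accepting the key is correct; rejecting the key or accepting
    a distractor is incorrect.
    """
    order = list(range(len(options)))
    rng.shuffle(order)
    for i in order:
        accepted = respond(options[i])
        if i == key_index:
            return accepted   # accepting the key is correct; rejecting it is not
        if accepted:
            return False      # accepted a distractor before seeing the key
    return False              # safety fallback; the loop always reaches the key

if __name__ == "__main__":
    options = ["mitochondrion", "ribosome", "nucleus", "lysosome"]  # toy item
    knows_answer = lambda opt: opt == "mitochondrion"  # examinee who recognizes the key
    print(administer_domc(options, key_index=0, respond=knows_answer))  # True
```

The security and fairness claims in the abstract follow from this flow: because the examinee never sees the full option set at once, whole items are harder to memorize and share, and test-wise elimination strategies across options are blunted.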