Sensitivities of Multiple-Choice and Multi-Item Mirror Question Tests in Determining Misassessment Risk Among First-Year Medical Students in Foundation Biology and Physics at Levy Mwanawasa Medical University

Ephraim Chongo, Mirriam Kaona, Mathias Chamatwa Zulu

Abstract


This study examined the extent to which traditional Number-Right Multiple-Choice Question (MCQ) scoring misassesses students’ true knowledge compared with the Multi-Item Mirror Question Test (MIMQT) model, which uses a Knowledge Equivalence Scoring (KES) system that awards equal value for identifying what applies and what does not apply within a concept. A comparative quasi-experimental design was employed, involving 119 first-year Foundation Biology and Physics students at Levy Mwanawasa Medical University, randomly assigned to either a traditional MCQ test (n = 58) or an equivalent-content MIMQT model (n = 61). Performance scores and student perceptions were collected, with the latter measured through a reliable questionnaire (α = .876). Results showed significantly higher performance under the MIMQT model in both Physics (Mean = 7.62 vs. 6.38) and Biology (Mean = 11.16 vs. 8.45). An independent samples t-test confirmed a statistically significant difference between scoring models, t(94) = –3.58, p = .001. In this study, traditional MCQs mismeasured 17% of Physics and 24% of Biology knowledge, meaning that the MIMQT model improved measurement by the same percentages. Perception data revealed a strong preference for the MIMQT model across fairness, accuracy, motivation, and reduced misassessment, with 75%–95% of respondents agreeing on key components. Overall, the findings indicate that the MIMQT model provides a more accurate, equitable, and diagnostically sensitive measure of true knowledge than traditional MCQs by assessing bidirectional understanding with KES rather than unidirectional single-answer recognition under an all-or-nothing scoring system. The findings show that traditional Number-Right MCQs do not fully capture true knowledge because they rely on selecting a single best answer and ignore the knowledge demonstrated by correctly identifying incorrect options.
Since these formats operate as disguised True/False systems without awarding credit for knowing what does not apply, they provide a narrow, unidirectional judgment of understanding. In contrast, the MIMQT model, which scores both forms of identification with equal value, offers a more comprehensive measurement of true knowledge (including peripheral knowledge) by recognising bidirectional understanding of a concept. Under Number-Right scoring, by contrast, only “true” responses earn credit, while correct recognition of what is “false” earns no points and is dismissed as non-knowledge.
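The scoring contrast described above can be sketched in code. This is an illustrative example only, not the authors’ implementation: the option labels, the student response format, and the normalisation of the KES score are all assumptions made for demonstration. It shows how Number-Right scoring discards partial knowledge that a KES-style rule, which credits correct judgments in both directions, would capture.

```python
def number_right_score(chosen: str, key: str) -> int:
    """Traditional all-or-nothing scoring: 1 point only if the single
    best answer is selected, 0 otherwise."""
    return 1 if chosen == key else 0


def kes_score(judgments: dict, answer_key: dict) -> float:
    """KES-style scoring (illustrative): each option is judged as
    'applies' (True) or 'does not apply' (False), and every correct
    judgment earns equal credit. Normalised to the range [0, 1]."""
    correct = sum(1 for opt, ans in judgments.items() if answer_key[opt] == ans)
    return correct / len(answer_key)


# Hypothetical four-option item: the keyed answer is A. The student
# wrongly picks C, but correctly recognises that B and D do not apply.
answer_key = {"A": True, "B": False, "C": False, "D": False}
student = {"A": False, "B": False, "C": True, "D": False}

print(number_right_score("C", "A"))  # 0 -- all partial knowledge is lost
print(kes_score(student, answer_key))  # 0.5 -- credit for two correct rejections
```

In this hypothetical case, Number-Right scoring records zero knowledge, while the KES-style rule credits the student for the two correct rejections, which is the bidirectional sensitivity the abstract attributes to the MIMQT model.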

Keywords: MIMQT Model, Knowledge Equivalence Scoring, Multiple-Choice Questions, Misassessment Risk, True Knowledge Measurement, Partial Knowledge, MCQ Sensitivity, Number-Right Scoring

DOI: 10.7176/JEP/17-4-10

Publication date: April 30th 2026



ISSN (Paper): 2222-1735; ISSN (Online): 2222-288X
