Time to increase the quality of the multiple-choice questions you use!

Multiple-choice questions are a common form of assessment across many subjects. Some educators avoid them, judging them to be poor-quality assessments; others use them regularly in everyday practice. In this post, I argue that, since multiple-choice questions are so widely used in schools, they should be taken seriously, and I explain why the two main arguments against them are not convincing.

To begin with, multiple-choice assessments offer benefits that other assessment formats do not. They are not a panacea, and their limitations are real, but despite those limitations they can provide reliable and valid results for specific assessment purposes. In particular, multiple-choice tests tend to be more reliable than other test formats (Burton et al., 1991). First, multiple-choice questions are scored objectively: in other formats, such as essays, markers (raters) can disagree, which increases measurement error and lowers inter-rater reliability. Second, a multiple-choice question takes little time to answer, so a student can attempt many of them in the time it would take to answer a few open-ended questions or a single essay (Zimmaro, 2010). An assessment can therefore include more questions, giving broader coverage of the subject and a more representative picture of the student's knowledge (Burton et al., 1991). Furthermore, multiple-choice questions are quick to mark, and they can target a specific topic; this narrower focus can help teachers identify a particular misconception from the alternative a student chose. To summarise, multiple-choice assessments can facilitate learning and inform reliable and valid judgements.

On the other hand, one of the main criticisms of multiple-choice questions is their susceptibility to guessing. Nevertheless, it is highly unlikely that a student will get many questions correct through pure luck. For example, in a multiple-choice assessment whose questions each have four alternatives, a student might plausibly get one or two questions right by pure luck, as the table below shows.

Number of questions    Probability of getting every question right by ‘pure’ luck
1                      25%
2                      6.25%
3                      1.6%
4                      0.4%
5                      0.1%

However, as the number of multiple-choice questions increases, the probability of a student achieving a high mark through luck alone falls sharply. These probabilities do not apply, of course, if the student has partial knowledge, which may allow them to eliminate some of the alternatives. Where pure guessing is concerned, though, it is unlikely to meaningfully distort the overall results of a test longer than 20 questions. Multiple-choice items have also been heavily criticised because they are commonly used to test factual knowledge and lower-order thinking skills (Tarrant et al., 2009; Zimmaro, 2010). However, well-constructed multiple-choice questions can also assess higher-order thinking skills, such as the application of knowledge, synthesis and analysis; for this to happen, the items must be written appropriately.
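To put rough numbers on this, here is a minimal Python sketch (the function names are my own and purely illustrative) that reproduces the figures in the table above and, using the binomial distribution, estimates the chance of a student scoring at least 50% on a 20-question test by guessing alone.

```python
from math import comb

def p_all_correct(n_questions: int, n_options: int = 4) -> float:
    """Probability of guessing every one of n_questions correctly by pure luck."""
    return (1 / n_options) ** n_questions

def p_at_least(k: int, n: int, n_options: int = 4) -> float:
    """Binomial tail: probability of guessing at least k of n questions correctly."""
    p = 1 / n_options
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

# Reproduce the table above (four alternatives per question)
for n in range(1, 6):
    print(f"{n} question(s): {p_all_correct(n):.2%}")

# Chance of scoring 50% or more on a 20-question test by guessing alone
print(f"P(at least 10 of 20 correct): {p_at_least(10, 20):.2%}")  # ~1.39%
```

On these assumptions, guessing alone yields at least half marks on a 20-question test only about 1.4% of the time, which is the basis of the point above.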

As a result, it could be said that investing in the writing of effective multiple-choice questions can increase their value for learning. This investment is necessary because good multiple-choice questions are difficult to construct (Burton et al., 1991) and time-consuming to write (Zimmaro, 2010). Downing (2006) argued that:

Effective item writers are trained, not born. […] Without specific training, most novice item writers tend to create poor-quality, flawed, low-cognitive-level test questions that test unimportant or trivial content. Although item writers should be experts in their disciplines, there is no reason to believe that their subject matter expertise generalises to effective item writing expertise […] it is often helpful and important to provide specific instruction using an item writer’s guide, paired with a hands-on training workshop. (p. 11)

This is exactly what we aim to achieve at Evidence Based Education. We provide guidelines for writing effective assessments and give teachers the opportunity to apply them during our carefully planned workshops. As Head of Assessment, I support multiple-choice assessments as a useful tool for formative assessment in the classroom, provided they are used for an appropriate purpose and designed properly. For this reason, I have read, gathered and evaluated guidelines for effective question writing from a range of sources (such as Haladyna’s, 1994).

To support the effective design of multiple-choice questions, we have created a guide with instructions for teachers of all subjects. The guide also includes a ‘Guidelines Checklist’, with items such as: ‘The question is related to a clear learning objective’, ‘The question is meaningful and examines important content’, and ‘The question does not use unnecessarily difficult vocabulary’.

The guide and checklist form part of our Assessment Lead Training in its various forms (try a free preview of our online Assessment Lead Programme here). You can find more information about this on this webpage, and by listening to this podcast, featuring Tim Oates’ keynote: The future of assessment.


References

Burton, S. J., Sudweeks, R. R., Merrill, P. G., & Wood, B. (1991). How to Prepare Better Multiple-Choice Test Items: Guidelines for University Faculty. Provo, UT: Brigham Young University Testing Services and The Department of Instructional Science.

Downing, S. M. (2006). Twelve steps for effective test development. In S. M. Downing & T. M. Haladyna (Eds.), Handbook of test development (pp. 3-25). London, England: Lawrence Erlbaum Associates.

Haladyna, T. M. (1994). Developing and Validating Multiple-Choice Test Items. Hove, UK: Lawrence Erlbaum Associates.

Tarrant, M., Ware, J., & Mohammed, A. M. (2009). An assessment of functioning and non-functioning distractors in multiple-choice questions: a descriptive analysis. BMC Medical Education, 9(1), 40.

Zimmaro, D. M. (2010). Writing Good Multiple-choice Exams. Retrieved from: http://ctl.utexas.edu/assets/Evaluation–Assessment/Writing-Good-Multiple-Choice-Exams-04-28-10.pdf
