By Associate Professor Kevin Lapp
This post originally appeared on EvidenceProf Blog.
Multiple choice testing is a popular assessment format in Evidence courses, more popular in my experience than in other courses. Some professors use it exclusively, while others mix multiple choice questions with essay questions on their exams. There's good reason for using multiple choice testing in Evidence courses. For one, the MBE portion of the bar exam contains multiple choice Evidence questions, so including them on a final exam helps prepare students for the bar exam format. In addition, multiple choice testing has been around for a long time and is widely accepted as a credible format for assessing student knowledge. Evidence is also a heavily rule-based class that lends itself to an assessment format requiring students to identify a single correct answer. Finally, multiple choice questions allow professors to assess more topics than can be squeezed into an essay question, reducing the chances that a student performs well on an exam simply because he happened to know the issues covered by the essay questions.
But there can be a large gap between good multiple choice questions and bad ones. This post is about how those of us who use multiple choice questions can know whether we are doing it in a way that makes for good assessment. The credibility of our multiple choice questions as sound assessment tools is particularly important given the high stakes testing that goes on in so many law school classrooms. When the great bulk, if not the entire portion, of a student's grade hinges on a single three- or four-hour exam, it is our duty to take advantage of the available tools to ensure that our exams measure what we intend them to measure.
A great resource for me as I set about the task of writing my first multiple-choice section of an exam was Susan Case & Beth Donahue's article, "Developing High-Quality Multiple-Choice Questions for Assessment in Legal Education." (58 J. Legal Educ. 372 (2008)). In it, Case and Donahue note that multiple choice questions have several advantages over other formats, including content coverage, grading ease and consistency, and reliability of scores. But the reliability of the scores (as a reflection of learning) depends on the quality of the questions. Poorly written questions fail to assess what is intended and fail to encourage the desired learning. The article includes a very helpful discussion of how to draft high-quality multiple choice questions (and answer choices) that I won't recount here but strongly recommend.
Another indispensable resource for those who use multiple choice questions on exams was provided to me by my school. At Loyola, professors receive a detailed report with the results of the objective portion of a final exam. This report includes means, modes, and variances for the exam as a whole, as well as information on each individual question. The information on individual questions includes the percentage of students who selected each possible answer, the performance on the question of the top and bottom 27% of the class, and something called the question's "point biserial" (a number that reflects how well a particular question discriminates amongst the students). With this information, a professor can see whether everyone in the class got #1 correct (fine as an easy warm-up, though meaningless for discriminating amongst students with regard to knowledge), whether the answers to another question were distributed evenly amongst the four answer choices (a sign that the question isn't working), and whether those who scored in the top 27% of the class performed better on individual questions than those who scored in the bottom 27% (a sign of a good question, because it discriminates between low-performing and high-performing students).
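For readers curious about what goes into those numbers, here is a rough sketch of the calculations in Python. The data layout (a list of per-student answer dictionaries plus an answer key) and all of the names are my own illustrative assumptions, not anything drawn from Loyola's actual report.

```python
# A sketch of the item statistics described above, assuming responses are a
# list of dicts mapping question IDs to the letter each student chose, and
# answer_key maps question IDs to the correct letter. Illustrative only.
from collections import Counter
from statistics import mean, pstdev


def item_analysis(responses, answer_key, question):
    """Answer distribution, top/bottom 27% comparison, and point biserial
    for a single question."""
    # Each student's total score across the whole objective section
    scores = [
        sum(1 for q, right in answer_key.items() if student.get(q) == right)
        for student in responses
    ]
    correct = answer_key[question]

    # 1. Percentage of students selecting each answer choice
    choices = Counter(student.get(question) for student in responses)
    distribution = {c: n / len(responses) for c, n in choices.items()}

    # 2. Performance of the top and bottom 27%, ranked by total score
    ranked = sorted(zip(scores, responses), key=lambda pair: pair[0], reverse=True)
    k = max(1, round(0.27 * len(ranked)))
    top_pct = mean(1 if s.get(question) == correct else 0 for _, s in ranked[:k])
    bottom_pct = mean(1 if s.get(question) == correct else 0 for _, s in ranked[-k:])

    # 3. Point biserial: r = ((M1 - M0) / sd) * sqrt(p * (1 - p)), where
    # M1 and M0 are the mean total scores of students who answered the item
    # right and wrong, sd is the standard deviation of total scores, and p
    # is the proportion who answered it correctly.
    got_it = [1 if student.get(question) == correct else 0 for student in responses]
    p = mean(got_it)
    sd = pstdev(scores)
    if sd == 0 or p in (0, 1):
        r_pb = 0.0  # no variation, so the item cannot discriminate
    else:
        m1 = mean(s for s, g in zip(scores, got_it) if g == 1)
        m0 = mean(s for s, g in zip(scores, got_it) if g == 0)
        r_pb = (m1 - m0) / sd * (p * (1 - p)) ** 0.5

    return distribution, top_pct, bottom_pct, r_pb
```

One caveat on the sketch: it counts the question itself in each student's total score, which slightly inflates the point biserial; real item-analysis reports often exclude the item from the total before computing the correlation.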
Despite doing research and finding things like the Case and Donahue article ahead of time, I was not confident in my ability to write a meaningful multiple choice assessment tool the first time I taught Evidence. Happily, the post-exam report suggested that the great majority of my questions were good questions. More importantly, the report allowed me (with the help of folks who could interpret the data) to get rid of, or try to fix, the questions that weren't credibly assessing learning, making my exam the next time around a more valid and reliable instrument. If your school doesn't provide this kind of data, I'd encourage you to ask for it.
And as the calendar turns to November and I start drafting some new questions for this semester's exam, I'd also be happy to hear other suggestions and strategies for creating good multiple choice questions.