Caches as an Example of Machine-gradable Exam Questions for Complex Engineering Systems (Innovative Practice Full Paper)
University of Illinois at Urbana-Champaign
This Innovative Practice Full Paper presents a framework for generating computer-based exams for complex engineering systems (such as cache memories) that can be machine graded while still offering partial credit to students. Complex, multi-faceted engineering systems often require long, multi-part problems to fully assess students' understanding of those systems. Cache memories represent one such system in computer architecture courses. Traditionally, we assessed students' understanding of caches using comprehensive, multi-part questions in a paper-based exam. Grading these exams was time-consuming and relied on subjective judgment. To cope with rising enrollment, we sought to address these issues by developing machine-administered, machine-gradable exams that did not rely heavily on multiple-choice questions or exact numerical responses. Additionally, this system needed to provide partial credit, a common expectation of our students.

We developed a cache simulator to use as a back-end for our questions. We used the simulator to develop exam questions and new homework assignments to help students practice cache memory concepts. To give students access to fair partial credit, we allowed multiple submissions for the exam questions with limited feedback. We also awarded partial credit for answers within a certain tolerance of the correct answer; the credit awarded decreased as the deviation from the correct answer increased. Consequently, students could correct minor mistakes or propagating errors, which are common reasons for awarding partial credit.

To evaluate the effect of the switch from paper-based to computerized exams, we ported questions from one of our paper-based exams to a computerized exam. We evaluated the differences in student performance on the paper-based and computerized versions of the questions and found mixed results, with students performing comparably to or better on the computer-based exam than on the paper-based one.
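The tolerance-based partial-credit scheme described above could be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the linear decay, and the 10% default tolerance are all assumptions.

```python
def partial_credit(answer: float, correct: float, tolerance: float = 0.10) -> float:
    """Return a credit fraction in [0, 1] for a numerical answer.

    Full credit for an exact answer; credit decays (here, linearly --
    an assumed policy) as the relative deviation grows, reaching zero
    once the deviation meets or exceeds the tolerance.
    """
    if correct == 0:
        # Relative deviation is undefined; require an exact match.
        return 1.0 if answer == 0 else 0.0
    deviation = abs(answer - correct) / abs(correct)
    if deviation >= tolerance:
        return 0.0
    # Credit falls linearly from 1.0 (exact) to 0.0 (at the tolerance).
    return 1.0 - deviation / tolerance
```

A grader built this way rewards a student whose answer is off by a small propagated error (e.g. 105 instead of 100) with half credit at the default tolerance, while an answer outside the tolerance earns nothing.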
We also surveyed students about their experience with the computer-based exam. Students overwhelmingly indicated a preference for the computer-based exam. We believe that ideas from our work can be used to automate generation, administration, and grading of complex multi-part questions in engineering disciplines beyond computer architecture.
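The cache simulator used as a question back-end is not detailed in the abstract; as a rough illustration of the kind of model such a back-end might build on, a minimal direct-mapped cache could look like the sketch below. All class and parameter names are hypothetical.

```python
class DirectMappedCache:
    """Minimal direct-mapped cache model that classifies a stream of
    byte addresses as hits or misses (illustrative sketch only)."""

    def __init__(self, num_lines: int = 16, block_size: int = 4):
        self.num_lines = num_lines
        self.block_size = block_size
        self.tags = [None] * num_lines  # one stored tag per cache line

    def access(self, address: int) -> str:
        block = address // self.block_size       # block number
        index = block % self.num_lines           # which line it maps to
        tag = block // self.num_lines            # tag to compare
        if self.tags[index] == tag:
            return "hit"
        self.tags[index] = tag                   # fill the line on a miss
        return "miss"
```

Given a randomly generated address trace, a back-end like this can produce the reference hit/miss sequence (or hit rate) against which student answers are machine graded.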