This research investigates the effectiveness of randomized computer-based exams administered in Carnegie Mellon's Introduction to Computer Systems course. Each student received an exam of 7-8 questions, with each question drawn randomly from a pool within its category. The exam system collects data on how each student progresses through the exam: the order in which questions are viewed and answered, the time spent viewing each question, and the running score at any point during the exam. Our analysis of student scores quantifies and validates the fairness of these exams. Further analysis explores how higher-level student behaviors (e.g., the order in which questions are solved) correlate with ability and score. This research supports the value of administering computer-based exams and informs future exam design.
Nicolas Diaz, Saunak Roy, Jonathan Beltran
University of Maryland, College Park
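
As an illustration of the kind of behavioral correlation analysis described in the abstract, the following is a minimal sketch, not the authors' code: it correlates a hypothetical order-of-solving metric (how closely a student follows the presented question order) with final exam score using Spearman's rank correlation. All data, field names, and the metric itself are illustrative assumptions.

```python
# Minimal sketch (hypothetical data and metric, not the study's actual code):
# correlate a simple behavioral metric -- adherence to the presented question
# order -- with final exam score, using Spearman's rank correlation.
from scipy.stats import spearmanr

# Hypothetical per-student records: the sequence of presented-position
# indices in the order the student answered them, plus the final score.
students = [
    {"answer_order": [0, 1, 2, 3, 4, 5, 6], "score": 92},
    {"answer_order": [2, 0, 1, 4, 3, 6, 5], "score": 78},
    {"answer_order": [6, 5, 4, 3, 2, 1, 0], "score": 85},
]

def order_adherence(order):
    """Fraction of adjacent answer pairs taken in the presented order."""
    in_order = sum(1 for a, b in zip(order, order[1:]) if b == a + 1)
    return in_order / (len(order) - 1)

adherence = [order_adherence(s["answer_order"]) for s in students]
scores = [s["score"] for s in students]

rho, p_value = spearmanr(adherence, scores)
print(f"Spearman rho = {rho:.3f}, p = {p_value:.3f}")
```

A rank correlation is a natural fit here because both the behavioral metric and the scores are ordinal in character, and it makes no linearity assumption about their relationship.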