Accelerating Accurate Assignment Authoring Using Solution-Generated Autograders
Students learning to program benefit from access to large numbers of practice problems. Autograders are commonly used to support large problem banks by providing quick feedback on submissions. But authoring accurate autograders remains challenging. Autograders are frequently implemented by enumerating test cases—a tedious process that can produce inaccurate autograders that reject correct submissions or accept incorrect ones.
We present solution-generated autograding: a faster and more accurate way to create autograders. Our approach leverages a key difference between software testing and autograding—specifically, that the question author can provide a solution. Starting from a solution eliminates the need to write test cases, and also lets us validate the autograder’s accuracy and evaluate aspects of submission quality beyond correctness. We describe Problematic, an implementation of solution-generated autograding for Java and Kotlin, and share experiences from four years using Problematic to support a large CS1 course: authoring nearly 800 questions used by thousands of students, resulting in millions of submissions.
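To make the idea concrete, here is a minimal sketch of solution-generated grading: rather than hand-enumerating test cases, the grader compares a submission against the author's reference solution across a generated input domain. All names here (`solution`, `grade`) are illustrative assumptions, not Problematic's actual API.

```java
import java.util.function.IntUnaryOperator;

public class SolutionGradedExample {
    // Author-provided reference solution: square an integer.
    // (Hypothetical question; Problematic's real interface may differ.)
    static int solution(int n) {
        return n * n;
    }

    // Grade a submission by checking it against the reference solution
    // over a generated input range—no hand-written test cases required.
    static boolean grade(IntUnaryOperator submission) {
        for (int input = -100; input <= 100; input++) {
            if (submission.applyAsInt(input) != solution(input)) {
                return false; // first mismatch marks the submission incorrect
            }
        }
        return true;
    }

    public static void main(String[] args) {
        IntUnaryOperator correct = n -> n * n;
        IntUnaryOperator buggy = n -> n * n + (n == 7 ? 1 : 0); // wrong at n = 7

        System.out.println(grade(correct)); // prints true
        System.out.println(grade(buggy));   // prints false
    }
}
```

Because the reference solution defines correct behavior, the same comparison that grades submissions can also be run against the solution itself, or against deliberately incorrect variants, to validate the autograder's accuracy before students ever see the question.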