
Grading computer graphics programming assessments and generating formative and summative feedback can require significant effort from human experts. Because these assessments produce visual outputs, which may be static or animated, judging correctness can be subjective. For feedback to be effective, it must also be delivered promptly. This is challenging in introductory computer graphics courses, where cohorts can be large, errors in visual output can be subtle, and the causes of those errors are often not obvious.

In this paper, we explore the feasibility of an automated system that marks visual output and provides implementation feedback to learners in an introductory computer graphics-based design course, across three short programming assessments covering both static and animated scenes. To assess the effectiveness of our approach, we compare the marks generated by our tool with those assigned by a human expert.
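For illustration, agreement between automated and human marks can be summarised with simple statistics such as mean absolute difference and correlation. The sketch below is a minimal example under the assumption that marks are available as plain lists of scores; it is not the analysis pipeline itself.

```python
# Minimal sketch: summarising agreement between automated and human marks.
# The metric choices and sample values are illustrative assumptions only.
import numpy as np

def agreement(tool_marks: list[float], human_marks: list[float]) -> dict[str, float]:
    """Summarise how closely automated marks track a human expert's marks."""
    tool = np.asarray(tool_marks, dtype=np.float64)
    human = np.asarray(human_marks, dtype=np.float64)
    return {
        "mean_absolute_difference": float(np.mean(np.abs(tool - human))),
        "pearson_r": float(np.corrcoef(tool, human)[0, 1]),
    }

if __name__ == "__main__":
    # Hypothetical marks for four submissions, tool vs. human.
    print(agreement([85, 70, 92, 60], [82, 75, 90, 58]))
```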

We show that marking can be automated, providing both a grade based on the visual output and formative feedback on source code in the style of a human marker. This can improve consistency and grade reproducibility while reducing marking time, enabling a course to scale to large cohorts without additional human marking resources. We describe lessons learnt and potential pitfalls to help educators introduce automated marking in their own courses. Finally, we identify areas for future refinement and development of our automated system.
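As a minimal sketch of what grading from visual output can look like, the example below compares a student's rendered frame against a reference image using per-pixel error. The file names, tolerance, and scoring scheme are assumptions for illustration and are not the method described in the paper.

```python
# Minimal sketch: scoring a rendered frame against a reference image.
# File names, tolerance, and the scoring curve are illustrative assumptions.
import numpy as np
from PIL import Image

def mark_frame(submission_path: str, reference_path: str,
               tolerance: float = 0.02) -> float:
    """Return a score in [0, 1] based on per-pixel RMSE against a reference render."""
    submitted = np.asarray(Image.open(submission_path).convert("RGB"), dtype=np.float32) / 255.0
    reference = np.asarray(Image.open(reference_path).convert("RGB"), dtype=np.float32) / 255.0
    if submitted.shape != reference.shape:
        return 0.0  # wrong resolution: treat the output as incorrect
    rmse = float(np.sqrt(np.mean((submitted - reference) ** 2)))
    # Full marks within the tolerance band, decreasing linearly to zero at 10x tolerance.
    if rmse <= tolerance:
        return 1.0
    return max(0.0, 1.0 - (rmse - tolerance) / (9 * tolerance))

if __name__ == "__main__":
    print(mark_frame("student_render.png", "reference_render.png"))
```

An animated scene could be handled the same way by sampling several frames at fixed timestamps and averaging their scores.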