Given the popularity of LLMs, there have been proposals to replace human Teaching Assistants (TAs) with LLM-based AI agents for providing feedback to CS1 students. In this paper, we investigate a new hybrid model for CS1 feedback in which human TAs are given AI-generated feedback that they can verify and edit. We present the results of a large-scale randomized intervention trial with 185 CS1 undergraduate students that compares the efficacy of this hybrid approach against fully manual interventions and direct AI-generated feedback.
While we expected that augmenting TAs with AI-generated feedback would improve their efficiency, our results are mixed. Similarly, while we expected human TAs to catch and eliminate incorrect feedback generated by LLMs, this assumption did not hold in practice. On the contrary, there is evidence that AI augmentation can lead to complacency among TAs. In other words, augmenting human tutors with AI does not always directly improve teaching outcomes. We still believe that an AI-augmented hybrid model is a promising approach for providing feedback, but more work is needed to ensure it is truly effective.