Improving AI in CS1: Leveraging Human Feedback for Better Learning
In 2023, we developed and deployed AI-based tools in CS1 at our university with the goal of offering students 24/7 interactive assistance and approximating a 1:1 teacher-to-student ratio. These tools provide students with code explanations, style suggestions, and responses to course-related inquiries—all designed to emulate human educator responses and encourage critical thinking. Given the rise of AI tutors, ensuring that such tools consistently deliver quality aligned with educators' intentions is challenging, especially under constant model updates. To address this challenge, we propose continuous evaluation and improvement of LLM-based systems through a collaborative human-in-the-loop approach. We have experimented with few-shot prompting and explored dynamic system prompts to ensure our AI tools adopt pedagogically focused teaching styles. These strategies have also been shown to significantly reduce the system's token usage and costs. Additionally, we have introduced a feedback mechanism that allows students to give immediate feedback on AI responses. We introduce a framework for evaluating LLMs in CS education, which includes a model-evaluation back end that teaching assistants periodically review. This setup ensures that our AI system remains effective and aligned with teaching goals. This paper offers insights into our methods and the impact of these AI tools on CS1, and contributes to the discourse on AI in education by showcasing scalable, personalized learning enhancements.
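To make the prompting strategies concrete, the sketch below shows one plausible way to combine a dynamic system prompt with a small few-shot block in an OpenAI-style chat message list. The function and variable names (`build_messages`, `FEW_SHOT`) and the specific prompt wording are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of few-shot prompting with a dynamic system prompt.
# All names and prompt text here are illustrative, not the deployed system.

# A small few-shot block demonstrating the desired tutoring style:
# guide the student with questions instead of handing over the fix.
FEW_SHOT = [
    {"role": "user",
     "content": "Why does my loop never end?\ni = 0\nwhile i < 10:\n    print(i)"},
    {"role": "assistant",
     "content": "Look at what changes between iterations. "
                "Does anything inside the loop body update i?"},
]

def build_messages(question: str, topic: str) -> list[dict]:
    """Assemble the chat messages, inserting a topic-specific system
    prompt so the tutor stays pedagogically focused for the current unit."""
    system = (
        "You are a CS1 tutor. Never give complete solutions; "
        f"ask guiding questions instead. Current course topic: {topic}."
    )
    # Swapping in a short, topic-specific system prompt plus a compact
    # few-shot block keeps each request small, which is one way such a
    # setup can reduce token usage and cost.
    return [{"role": "system", "content": system},
            *FEW_SHOT,
            {"role": "user", "content": question}]
```

A request would then be sent by passing `build_messages(student_question, current_topic)` to the chat-completion endpoint of whichever model the course uses.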