
Novice programmers often struggle to develop code comprehension skills, and although effective teaching strategies exist, implementing them at scale remains challenging. In this study, we investigate the use of Large Language Models (LLMs) to provide scalable feedback that teaches students code comprehension strategies. In qualitative think-aloud interviews, 17 introductory programming students used an LLM-based chatbot, a PythonTutor-style debugger, and code execution tools to solve 'Explain in Plain English' (EiPE) questions.
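For readers unfamiliar with EiPE questions, the following is a minimal illustrative sketch in the spirit of such tasks; the snippet is our own invention, not drawn from the study materials. Students are shown a short piece of code and asked to describe its purpose in plain English, at a level of abstraction above a line-by-line trace.

```python
# A hypothetical EiPE prompt: "Explain in plain English what this function does."
def mystery(values):
    result = values[0]
    for v in values:
        if v > result:
            result = v
    return result

# A good answer operates at the right level of abstraction:
# "returns the largest value in the list",
# rather than narrating each iteration of the loop.
```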

Our findings revealed both successes and challenges in using the chatbot. Successful interactions included the chatbot guiding students in using the debugging and code execution tools (for example, breaking code into smaller pieces and selecting diverse inputs, as sketched below), prompting them to revisit tools they had not used effectively, and helping them refine vague questions. We also observed some students applying these strategies independently in subsequent tasks. Challenges included students seeking direct answers, pasting feedback without applying the suggested strategies, and being unsure how to ask productive questions. These issues were often resolved when the interviewer encouraged students to engage more deeply with the chatbot's feedback.
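As a concrete sketch of the "selecting diverse inputs" strategy, a student probing the hypothetical `mystery` function above might execute it on deliberately varied inputs to test a hypothesis about its behavior. These particular probes are our own illustration, not taken from the interview transcripts.

```python
def mystery(values):           # same hypothetical function as in the EiPE example above
    result = values[0]
    for v in values:
        if v > result:
            result = v
    return result

# Probes with deliberately diverse inputs:
print(mystery([3, 1, 2]))      # 3  -> consistent with "largest value"
print(mystery([-5, -2, -9]))   # -2 -> negatives rule out "largest absolute value"
print(mystery([7]))            # 7  -> single-element edge case
try:
    mystery([])                # empty list exposes the values[0] assumption
except IndexError:
    print("fails on an empty list")
```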

Our results suggest that LLM-based tools should encourage students to articulate their problem-solving processes and provide assistance tailored to specific misunderstandings. Such tools should also guide students to use debuggers and code execution methods and to choose diverse inputs for testing. Finally, we recommend that instructors promote these tools as beneficial for both learning and problem-solving efficiency.