On Teaching Novices Computational Thinking by Utilizing Large Language Models Within Assessments
Novice programmers often face difficulties in learning code comprehension skills, and although effective teaching strategies exist, implementing them at scale remains challenging. In this study, we investigate the use of Large Language Models (LLMs) to provide scalable feedback that teaches students code comprehension strategies. In qualitative think-aloud interviews, 17 introductory programming students used an LLM-based chatbot, a PythonTutor-style debugger, and code execution tools to solve 'Explain in Plain English' (EiPE) questions.
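To illustrate the task format (this example is ours and is not drawn from the study's question bank), an EiPE question presents a short snippet and asks the student to describe its overall purpose in plain English rather than trace it line by line:

    def mystery(values):
        result = values[0]
        for v in values[1:]:
            if v > result:
                result = v
        return result

    print(mystery([3, 7, 2, 9, 4]))  # prints 9

A strong plain-English answer names the purpose ("it returns the largest value in the list") rather than paraphrasing each statement.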
Our findings revealed both successes and challenges in using the chatbot. In successful instances, the chatbot guided students in using the debugging and code execution tools, for example by breaking code into smaller parts and selecting diverse inputs, prompted them to revisit tools they had not used effectively, and helped them refine vague questions. We also observed that some students applied these strategies independently in subsequent tasks. However, challenges included students seeking direct answers, pasting feedback without applying the suggested strategies, and being unsure how to ask productive questions. These issues were often resolved when the interviewer encouraged students to engage more deeply with the chatbot's feedback.
Our results suggest that LLM-based tools should encourage students to articulate their problem-solving processes and provide tailored assistance for specific misunderstandings. In addition, such tools should guide students to use debuggers and code execution, and to choose diverse inputs for testing. Finally, we recommend that instructors promote the use of these tools as beneficial for both learning and problem-solving efficiency.
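As a sketch of the "diverse inputs" strategy recommended above (the function and test cases here are illustrative, not taken from the study), a student using code execution might probe a snippet with varied inputs to pin down exactly what it does:

    def mystery(s):
        count = 0
        for ch in s:
            if ch in "aeiou":
                count += 1
        return count

    print(mystery("hello"))   # 2
    print(mystery("rhythm"))  # 0 -> no vowels at all
    print(mystery("AEIOU"))   # 0 -> reveals that only lowercase vowels are counted
    print(mystery(""))        # 0 -> empty input is handled

Here the third case exposes a detail (case sensitivity) that a single typical input would have hidden, which is the kind of observation the chatbot was meant to nudge students toward.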
Thu 27 Feb (displayed time zone: Eastern Time, US & Canada)
Session: 15:45 - 17:00

15:45 (18 min, Talk): Evaluating Language Models for Generating and Judging Programming Feedback (Global Papers).
Charles Koutcheme (Aalto University), Nicola Dainese (Aalto University), Sami Sarsa (University of Jyväskylä), Arto Hellas (Aalto University), Juho Leinonen (Aalto University), Syed Ashraf (Aalto University), Paul Denny (The University of Auckland)

16:03 (18 min, Talk): Exploring Student Reactions to LLM-Generated Feedback on Explain in Plain English Problems (Papers).
Chris Kerslake (Simon Fraser University), Paul Denny (The University of Auckland), David Smith (University of Illinois at Urbana-Champaign), Brett Becker (University College Dublin), Juho Leinonen (Aalto University), Andrew Luxton-Reilly (The University of Auckland), Stephen MacNeil (Temple University)

16:22 (18 min, Talk): On Teaching Novices Computational Thinking by Utilizing Large Language Models Within Assessments (Papers).
Mohammed Hassan (University of Illinois at Urbana-Champaign), Yuxuan Chen (University of Illinois at Urbana-Champaign), Paul Denny (The University of Auckland), Craig Zilles (University of Illinois at Urbana-Champaign)

16:41 (18 min, Talk): Large Language Models in Computer Science Education: A Systematic Literature Review (Papers). Pre-print available.
Nishat Raihan (George Mason University), Mohammed Latif Siddiq (University of Notre Dame), Joanna C. S. Santos (University of Notre Dame), Marcos Zampieri (George Mason University)