Large Language Models in Computer Science Education: A Systematic Literature Review
This program is tentative and subject to change.
Large language models (LLMs) are becoming increasingly capable across a wide range of Natural Language Processing (NLP) tasks, such as text generation and understanding. Recently, these models have extended their capabilities to coding tasks, bridging the gap between natural languages (NL) and programming languages (PL). Foundational models such as the Generative Pre-trained Transformer (GPT) and LLaMA series have set strong baseline performance on a variety of NL and PL tasks. Additionally, several models have been fine-tuned specifically for code generation, showing significant improvements in code-related applications. Both foundational and fine-tuned models are increasingly used in education, helping students write, debug, and understand code. We present a comprehensive systematic literature review examining the impact of LLMs in computer science and computer engineering education. We analyze their effectiveness in enhancing the learning experience, supporting personalized education, and aiding educators in curriculum development. We address five research questions to uncover insights into how LLMs contribute to educational outcomes, identify challenges, and suggest directions for future research.
Thu 27 Feb. Displayed time zone: Eastern Time (US & Canada).
Session: 15:45 - 17:00

15:45 | Talk (18m) | Evaluating Language Models for Generating and Judging Programming Feedback | Papers (Global)
Charles Koutcheme (Aalto University), Nicola Dainese (Aalto University), Sami Sarsa (University of Jyväskylä), Arto Hellas (Aalto University), Juho Leinonen (Aalto University), Syed Ashraf (Aalto University), Paul Denny (The University of Auckland)

16:03 | Talk (18m) | Exploring Student Reactions to LLM-Generated Feedback on Explain in Plain English Problems | Papers
Chris Kerslake (Simon Fraser University), Paul Denny (The University of Auckland), David Smith (University of Illinois at Urbana-Champaign), Brett Becker (University College Dublin), Juho Leinonen (Aalto University), Andrew Luxton-Reilly (The University of Auckland), Stephen MacNeil (Temple University)

16:22 | Talk (18m) | On Teaching Novices Computational Thinking by Utilizing Large Language Models Within Assessments | Papers
Mohammed Hassan (University of Illinois at Urbana-Champaign), Yuxuan Chen (University of Illinois at Urbana-Champaign), Paul Denny (The University of Auckland), Craig Zilles (University of Illinois at Urbana-Champaign)
16:41 | Talk (18m) | Large Language Models in Computer Science Education: A Systematic Literature Review | Papers
Nishat Raihan (George Mason University), Mohammed Latif Siddiq (University of Notre Dame), Joanna C. S. Santos (University of Notre Dame), Marcos Zampieri (George Mason University)
Pre-print available