This program is tentative and subject to change.
Debugging is an essential yet often under-emphasized skill in programming education. In the era of code-generating large language models (LLMs), students' ability to reason about code and identify errors is increasingly important. However, students frequently resort to trial-and-error methods to resolve bugs without truly understanding the underlying issues. Developing the ability to identify and hypothesize the cause of bugs is crucial but can be time-consuming to teach effectively through traditional means. This paper introduces BugSpotter, an innovative tool that leverages an LLM to generate buggy code from a problem description and verifies the synthesized bugs via a test suite. Students interact with BugSpotter by designing failing test cases, where the buggy code's output differs from the expected result as defined by the problem specification. This provides opportunities for students not only to enhance their debugging skills, but also to practice reading and understanding problem specifications. We deployed BugSpotter in a large classroom setting and compared the debugging exercises it generated to exercises hand-crafted by an instructor for the same problems. We found that the LLM-generated exercises produced by BugSpotter varied in difficulty and were well-matched to the problem specifications. Importantly, the LLM-generated exercises were comparable to those manually created by instructors with respect to student performance, suggesting that BugSpotter could be an effective and efficient aid for learning debugging.
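As a rough illustration of the verification step described in the abstract (the authors' implementation is not shown here), the Python sketch below checks whether a student-designed test case is a valid failing test case, i.e. one on which the buggy code's output differs from the expected output defined by the problem specification. All function and variable names are hypothetical assumptions, not BugSpotter's actual API.

    # Hypothetical sketch: a student's test case "fails" when the buggy code's
    # output differs from the expected output defined by the specification.

    def is_failing_test_case(buggy_program, reference_solution, test_input):
        """Return True if the test input exposes the synthesized bug."""
        expected = reference_solution(test_input)  # behaviour defined by the problem specification
        actual = buggy_program(test_input)         # behaviour of the LLM-generated buggy code
        return actual != expected

    # Toy example for a "sum of the even numbers" problem:
    reference = lambda xs: sum(x for x in xs if x % 2 == 0)
    buggy = lambda xs: sum(x for x in xs if x % 2 == 1)  # synthesized bug: sums the odd numbers

    print(is_failing_test_case(buggy, reference, [1, 2, 3, 4]))  # True: outputs 4 vs. expected 6
    print(is_failing_test_case(buggy, reference, []))            # False: both output 0, bug not exposed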
Fri 28 Feb (displayed time zone: Eastern Time, US & Canada)
13:45 - 15:00
13:45 | 18m | Talk | BugSpotter: Automated Generation of Code Debugging Exercises | Papers | Victor-Alexandru Padurean (Max Planck Institute for Software Systems), Paul Denny (The University of Auckland), Adish Singla (Max Planck Institute for Software Systems)
14:03 | 18m | Talk | Compiler-Integrated, Conversational AI for Debugging CS1 Programs | Papers | Jake Renzella (University of New South Wales, Sydney), Alexandra Vassar (UNSW), Lorenzo Lee Solano (University of New South Wales, Sydney), Andrew Taylor (The University of New South Wales, Sydney)
14:22 | 18m | Talk | “Debugging: From Art to Science” A Case Study on a Debugging Course and Its Impact on Student Performance and Confidence | Papers | G. Aaron Wilkin (Rose-Hulman Institute of Technology)
14:41 | 18m | Talk | How Effective and Efficient are Student-Written Software Tests? | Papers | Amanda Showler (Ontario Tech University), Michael Miljanovic (Ontario Tech University), Jeremy Bradbury (Ontario Tech University)