Personalized Parsons Puzzles as Scaffolding Enhance Practice Engagement Over Just Showing LLM-Powered Solutions
As generative AI products can seamlessly generate code and assist students with programming, integrating AI into programming education has attracted considerable attention. However, one emerging concern is that students might obtain answers from LLM-generated content without actually learning. In this work, we deployed LLM-powered personalized Parsons puzzles as scaffolding for write-code practice in an introductory Python classroom (PC condition) and conducted an 80-minute randomized between-subjects study. Both conditions received the same practice problems; the only difference was that, when students requested help, the control condition showed a complete AI-generated solution as scaffolding (CC condition), simulating traditional LLM output. Results indicated that students who received personalized Parsons puzzles as scaffolding engaged in practice significantly longer while maintaining the same high level of performance as those who received complete AI-generated solutions as scaffolding.