Exploring LLMs' Impact on Student-Created User Stories and Acceptance Testing in Software Development
In Agile software development, a user story describes a new feature or functionality from an end user's perspective. A user story may also incorporate acceptance criteria, which can be developed through negotiation with users. When creating stories from user feedback, a software engineer can maximize their usefulness by attending to story attributes such as scope, independence, negotiability, and testability. This study investigates how large language models (LLMs), used with guided instructions, affect undergraduate software engineering students' ability to transform user feedback into user stories. Working individually, students were asked to analyze user feedback comments, group related items appropriately, and create user stories following the principles of INVEST (Independent, Negotiable, Valuable, Estimable, Small, Testable), a framework for assessing user stories. Students may be drawn to the speed with which LLMs can generate user stories, but how does using LLMs affect quality? We found that LLMs can enhance certain aspects of user stories, particularly by helping students develop valuable stories with well-defined acceptance criteria. However, students tended to perform better without LLMs when creating user stories with an appropriate scope.