Prompt-Engineering Strategies for Minimizing Bias in Large Language Model Outputs: Applications in Computing Education
Sat 1 Mar 2025 11:45 - 11:55 at Meeting Rooms 408-410 - Lightning Talks #3
We present our preliminary work on developing data-driven strategies to mitigate bias in automated content generation by Large Language Models (LLMs) in computer science (CS) education. We consider a list of fairness-aware prompt-engineering strategies and explore their impact on educational content generation. We seek both empirical insights into fair prompt formulation and actionable takeaways that can be leveraged not only for automated generation of educational content but also for potential incorporation into the evolving curriculum of Ethics in Artificial Intelligence (AI). LLMs have the potential to significantly change the CS education landscape, and they have already begun reshaping the software development activities of learners and practitioners alike. Amid all these developments, adding foundational and applied knowledge of LLM usage to the educator's armamentarium is both pragmatic and proactive. This work serves as the starting point of an ongoing inquiry into principled language generation technologies and their democratization, with CS educators at the forefront.
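As a rough illustration of how fairness-aware prompt engineering can be operationalized, the sketch below prepends bias-mitigation instructions to an educational-content generation task before it would be sent to an LLM. The strategy names and instruction wording here are hypothetical assumptions for illustration only, not the strategies evaluated in this work.

```python
# Illustrative only: the strategy catalog and wording below are invented
# for this sketch and do not reflect the paper's actual strategy list.
FAIRNESS_STRATEGIES = {
    "diverse_names": "Use names and personas drawn from diverse cultures; avoid gendered defaults.",
    "balanced_examples": "When examples involve people, balance roles and attributes across groups.",
    "no_stereotypes": "Do not associate abilities, professions, or behaviors with any demographic group.",
}

def fairness_aware_prompt(task: str, strategies: list[str]) -> str:
    """Prepend the selected fairness instructions to a content-generation task."""
    bullet_lines = "\n".join(f"- {FAIRNESS_STRATEGIES[s]}" for s in strategies)
    header = "You are generating CS educational content. Follow these constraints:\n"
    return f"{header}{bullet_lines}\n\nTask: {task}"

prompt = fairness_aware_prompt(
    "Write a word problem introducing Python dictionaries.",
    ["diverse_names", "no_stereotypes"],
)
print(prompt)
```

A wrapper like this makes the mitigation strategies composable, so their individual and combined effects on generated content can be compared empirically.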