Artificial intelligence (AI) is ubiquitous in K-12 youths’ everyday lives. However, it is increasingly well documented that AI can cause harm by reflecting and amplifying societal biases. While many youth are not currently empowered to engage in broader responsible AI discourse and processes, there is great potential for them to do so. Foundational to engaging in these critical conversations is the ability to critique AI. We present the RAD framework, designed to scaffold critique of AI in three steps: Recognize (harms of AI), Analyze (societal aspects of AI harms), and Deliberate (what more responsible AI could be). We ran a workshop study with racially diverse middle school girls (N = 21) to investigate the framework’s effectiveness. We found that, when scaffolded by the framework, the learners could articulate biases they saw in an AI scenario and consider how those biases might affect different stakeholders. They could then reflect on how different stakeholders held varying degrees of power in the AI scenario and what that meant for creating more responsible AI systems and processes. After participating in the study, the girls felt more strongly about voicing their opinions on AI with others. The RAD framework and its activities work toward emboldening youths’ engagement in critical discourse about AI.