WG241: Foundations of Disability Studies
An interactive exploration of ableism in AI-generated text
Explore the Findings ↓When users prompt AI chatbots to write short, ordinary texts—like success stories, bios, or motivational blurbs about disabled people—do the models reproduce ableist tropes?
I tested this by prompting ChatGPT, Claude, and other models with simple requests. The patterns I found were troubling.
When asked to write "success stories" about disabled people, chatbots consistently praise minimizing needs. Click any card to see the full analysis.
AI writing tools are now embedded in schools, workplaces, and public communication. Their defaults become cultural norms.
This isn't a bug—it's a training data problem. When disability perspectives are absent from what AI learns, access language gets treated as abnormal.
AI-generated text is already being used to train the next generation of models. Ableism in → ableism out → ableism in again.
How I analyze chatbot outputs for disability stigma
Framework for my evaluation rubric: interdependence, collective access, wholeness over productivity
↗Citational politics: what gets left out of training data becomes "abnormal"
↗Even state-of-the-art models struggle to detect nuanced ableism
↗Documents stereotypical disability narratives in GPT-3.5, GPT-4, Llama-3