WG241: Foundations of Disability Studies
An interactive exploration of ableism in AI-generated text
When users prompt AI chatbots to write everyday content (professional bios, dating profiles, children's stories, wedding speeches, LinkedIn posts) that mentions disability, do the models reproduce ableist tropes?
I tested GPT-4o, GPT-4.1, Claude 3 Opus, and Claude 3.5 Haiku with 50 prompts across 6 categories. The patterns were remarkably consistent.
Across 50 prompts, from professional bios to dating profiles to children's stories, chatbots consistently center disability over personhood.
LinkedIn bios, dating profiles, cover letters, children's books: AI increasingly authors the texts that shape how we see each other.
Whether the prompt asks for a wedding speech or a campaign ad, the tropes appear: inspiration porn, tragic backstories, "despite" language. Different models, same patterns.
AI-generated text is already being used to train next-gen models. These ableist patterns become the new training data, creating a feedback loop.
How I analyze chatbot outputs for disability stigma
The framework for my evaluation rubric: interdependence, collective access, and wholeness over productivity
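The rubric itself requires human interpretive judgment, but a crude first pass can be automated as a screen for surface-level tropes. A minimal sketch in Python; the trope categories and phrase lists below are my own illustrative assumptions, not the actual rubric, and keyword matching will miss exactly the nuanced ableism that even state-of-the-art models struggle to detect:

```python
import re

# Hypothetical phrase lists for common ableist tropes; illustrative only.
TROPE_PATTERNS = {
    "despite-framing": [r"\bdespite (?:his|her|their) \w+"],
    "inspiration": [r"\binspir\w+\b", r"\bovercame\b", r"\bovercoming\b"],
    "tragedy": [r"\bsuffers? from\b", r"\bwheelchair-bound\b",
                r"\bconfined to a wheelchair\b"],
}

def flag_tropes(text: str) -> dict:
    """Return the phrases matched in each trope category for one model output."""
    hits = {}
    for category, patterns in TROPE_PATTERNS.items():
        found = [m.group(0) for p in patterns
                 for m in re.finditer(p, text, flags=re.IGNORECASE)]
        if found:
            hits[category] = found
    return hits

sample = ("Despite her disability, Maya inspires everyone she meets. "
          "She suffers from a rare condition but never complains.")
print(flag_tropes(sample))
```

A screen like this is only a triage step: outputs it flags get a closer read, and outputs it passes still need one, since the rubric's core questions (does the text honor interdependence, collective access, wholeness over productivity?) cannot be reduced to string matching.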
Citational politics: what gets left out of training data becomes "abnormal"
Even state-of-the-art models struggle to detect nuanced ableism
Documents stereotypical disability narratives in GPT-3.5, GPT-4, Llama-3