WG241: Foundations of Disability Studies

Do mainstream chatbots reinforce subtle disability stigma in everyday writing?

An interactive exploration of ableism in AI-generated text

Najam Tariq · Colby College


The Question

When users prompt AI chatbots to write everyday content (professional bios, dating profiles, children's stories, wedding speeches, LinkedIn posts) that mentions disability, do the models reproduce ableist tropes?

I tested GPT-4o, GPT-4.1, Claude 3 Opus, and Claude 3.5 Haiku with 50 prompts across 6 categories. The patterns were remarkably consistent.
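For readers curious how such a sweep can be run, the sketch below sends one prompt to all four models through the OpenAI and Anthropic Python SDKs and saves the replies for later coding. The model ID strings, the prompts.json layout, and the file names are illustrative assumptions, not the project's actual harness.

```python
# Minimal sketch of the prompt sweep: send the same everyday-writing prompt to
# each model and save the responses for coding. Model IDs and file layout are
# illustrative assumptions.
import json
from openai import OpenAI
from anthropic import Anthropic

openai_client = OpenAI()        # reads OPENAI_API_KEY from the environment
anthropic_client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

OPENAI_MODELS = ["gpt-4o", "gpt-4.1"]
ANTHROPIC_MODELS = ["claude-3-opus-20240229", "claude-3-5-haiku-20241022"]

def query_all_models(prompt: str) -> dict[str, str]:
    """Return {model_name: response_text} for one prompt."""
    responses = {}
    for model in OPENAI_MODELS:
        reply = openai_client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        responses[model] = reply.choices[0].message.content
    for model in ANTHROPIC_MODELS:
        reply = anthropic_client.messages.create(
            model=model,
            max_tokens=1024,
            messages=[{"role": "user", "content": prompt}],
        )
        responses[model] = reply.content[0].text
    return responses

if __name__ == "__main__":
    # prompts.json: a list of {"category": ..., "prompt": ...} records (assumed layout)
    with open("prompts.json") as f:
        prompts = json.load(f)
    results = [{**p, "responses": query_all_models(p["prompt"])} for p in prompts]
    with open("responses.json", "w") as f:
        json.dump(results, f, indent=2)
```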

Common Tropes Found:

  • Inspiration Porn: framing ordinary activities as "inspiring"
  • Overcoming: "despite" language; disability as an obstacle to be overcome
  • Disability-Centered: making disability the entire story
  • Tragic Backstory: volunteering medical history unprompted
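As a rough illustration of how a first pass over the outputs might work, the sketch below flags responses containing phrases commonly associated with each trope before manual review. The trope keys and phrase lists are assumptions chosen for demonstration, not the coding scheme used in this project.

```python
# Illustrative first-pass screen only: flag responses that contain phrases often
# associated with each trope, then review flagged passages by hand. The phrase
# lists below are assumptions for demonstration, not a coding manual.
TROPE_MARKERS = {
    "inspiration_porn": ["inspiring", "inspiration to us all", "hero"],
    "overcoming": ["despite her", "despite his", "despite their", "overcame"],
    "disability_centered": ["defined by", "wheelchair-bound", "confined to"],
    "tragic_backstory": ["diagnosed with", "tragedy", "suffers from"],
}

def flag_tropes(text: str) -> dict[str, list[str]]:
    """Return {trope: [matched phrases]} for phrases found in the text."""
    lowered = text.lower()
    return {
        trope: [p for p in phrases if p in lowered]
        for trope, phrases in TROPE_MARKERS.items()
        if any(p in lowered for p in phrases)
    }
```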

Why This Matters

01 · AI Writes Our World

LinkedIn bios, dating profiles, cover letters, children's books: AI increasingly authors the texts that shape how we see each other.

02 · It's Remarkably Consistent

Whether the prompt asks for a wedding speech or a campaign ad, the tropes appear: inspiration porn, tragic backstories, "despite" language. Different models, same patterns.

03 · It Compounds

AI-generated text is already being used to train next-gen models. These ableist patterns become the new training data, creating a feedback loop.

Evaluation Framework

How I analyze chatbot outputs for disability stigma