WG241: Foundations of Disability Studies

Do mainstream chatbots reinforce subtle disability stigma in everyday writing?

An interactive exploration of ableism in AI-generated text

Najam Tariq · Colby College


The Question

When users prompt AI chatbots to write short, ordinary texts—like success stories, bios, or motivational blurbs about disabled people—do the models reproduce ableist tropes?

I tested this by prompting ChatGPT, Claude, and other models with simple requests. The patterns I found were troubling.

Common Tropes to Watch For:

  • Overcoming: Framing disability as something to "beat" or "conquer"
  • Refusing Help: Praising the rejection of accommodations
  • Hyper-Independence: Celebrating self-sufficiency above all
  • Productivity = Worth: Equating value with output
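As a rough illustration of how outputs might be screened for the tropes above, here is a minimal keyword sketch in Python. The phrase lists are hypothetical examples I supplied for demonstration; they are not the project's evaluation framework or a validated lexicon.

```python
# Minimal sketch: flag chatbot output for phrases loosely associated with
# each trope. Phrase lists below are illustrative guesses, not a real lexicon.

TROPE_PHRASES = {
    "overcoming": ["beat the odds", "conquered", "overcame her disability"],
    "refusing help": ["never asked for help", "refused accommodations"],
    "hyper-independence": ["relied on no one", "did it all on her own"],
    "productivity = worth": ["proved her worth", "valuable employee despite"],
}

def flag_tropes(text: str) -> dict[str, list[str]]:
    """Return, per trope, the phrases found in the text (case-insensitive)."""
    lowered = text.lower()
    return {
        trope: [p for p in phrases if p in lowered]
        for trope, phrases in TROPE_PHRASES.items()
        if any(p in lowered for p in phrases)
    }

sample = "She beat the odds and never asked for help."
print(flag_tropes(sample))
```

A keyword pass like this only surfaces candidates for human review; framing-level stigma (tone, emphasis, what the text omits) still requires the qualitative analysis this project describes.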

Why This Matters

01. AI is Everywhere

AI writing tools are now embedded in schools, workplaces, and public communication. Their defaults become cultural norms.

02. It's Structural

This isn't a one-off bug; it's a training-data problem. When disability perspectives are absent from what a model learns, language about access and accommodation gets treated as abnormal.

03. It Compounds

AI-generated text is already being used to train the next generation of models. Ableism in → ableism out → ableism in again.

Evaluation Framework

How I analyze chatbot outputs for disability stigma