The Neutral Machine
You asked a question. The AI gave you a calm, balanced, well-sourced answer. It felt like truth. It wasn't. It was consensus — which is a very different thing.
AI language models are trained on the internet — which means they absorb whatever the internet treats as mainstream. Official sources get more weight. Wikipedia gets more weight. Major media outlets get more weight. Fringe perspectives, dissenting scientists, independent journalists — they're statistically underrepresented in the training data, so they're statistically underrepresented in the answers. The machine doesn't suppress them on purpose. It just learned that the mainstream voice is the default voice.
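That dynamic is easy to caricature in code. The sketch below is a deliberately tiny model (the corpus and the 90/10 split are invented for illustration): maximum-likelihood "training" is reduced to counting, and the point lands in the last step, where picking the single most likely answer, which is roughly what a one-answer assistant does, collapses a 90/10 skew into 100/0.

```python
from collections import Counter
import random

# Illustrative toy corpus (invented numbers): 90 documents repeat the
# mainstream framing, 10 carry a dissenting one. Real training sets
# skew the same way, just at web scale.
corpus = ["the official view"] * 90 + ["the dissenting view"] * 10

# "Training" a maximum-likelihood model here is just counting:
# each answer's probability is its frequency in the corpus.
counts = Counter(corpus)
total = sum(counts.values())
probs = {answer: n / total for answer, n in counts.items()}
print(probs)  # {'the official view': 0.9, 'the dissenting view': 0.1}

# Sampling at least preserves the skew: the minority view
# surfaces about 10% of the time.
draws = random.choices(list(probs), weights=list(probs.values()), k=1000)
print(Counter(draws))

# But a single confident answer means greedy decoding: take the argmax.
# Argmax turns a 90/10 split into 100/0; the minority view vanishes.
print(max(probs, key=probs.get))  # always 'the official view'
```

Nothing in that pipeline hates the minority view. The erasure falls out of the arithmetic.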
Then comes safety tuning. The companies building these systems are terrified of controversy — lawsuits, headlines, regulatory pressure. So the AI is trained to hedge, to defer to authority, to fall back on "experts say" and "according to official sources." On any sensitive topic — politics, health, science, religion — it retreats into carefully approved language. Not because it evaluated the evidence. Because it was trained to avoid risk. The result is an information layer that systematically favours the institutional narrative while appearing to have no opinion at all.
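The incentive is just as easy to sketch. Everything below is hypothetical, the scores are hand-set where real systems learn them from preference data, but the shape is the point: subtract a risk penalty at answer-selection time and the hedged answer wins even when the direct one is better supported.

```python
# Hypothetical candidates with hand-set scores (illustration only):
# (answer, evidence_score, controversy_risk), each in [0, 1].
candidates = [
    ("Direct answer: the available data points to X.", 0.9, 0.8),
    ("Experts disagree; consult official guidance.",   0.4, 0.1),
]

RISK_PENALTY = 1.0  # safety tuning, in effect, turns this knob up

def tuned_score(evidence: float, risk: float, penalty: float = RISK_PENALTY) -> float:
    # Note what's missing: the evidence is never re-examined.
    # The score only trades it off against perceived risk.
    return evidence - penalty * risk

best = max(candidates, key=lambda c: tuned_score(c[1], c[2]))
print(best[0])  # the hedged answer wins: 0.4 - 0.1 beats 0.9 - 0.8

untuned = max(candidates, key=lambda c: tuned_score(c[1], c[2], penalty=0.0))
print(untuned[0])  # with no penalty, the evidence-backed answer wins
```

The preference for "according to official sources" isn't an evaluation of the sources. It's the penalty term doing its job.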
And here's what makes it different from media: people trust AI more. A newspaper has a known slant. A search engine shows you links you can evaluate. But an AI gives you a single, confident, apparently neutral answer — and most people stop looking. The filter bubble gets tighter. The appeal to authority becomes invisible. And the bias is harder to see because there's no editor to blame, no byline to check, no owner to investigate. Just a machine that sounds like it's thinking, trained on a world that was already distorted before the machine learned to speak.