Session 5

How Do We Know What We Know I

Ethics · On Bullshit · AI-driven systems · Misunderstanding misinformation

Questions of knowledge, hallucination, misinformation, and what AI can and cannot know. Readings from Frankfurt and Wardle.

Harry Frankfurt’s On Bullshit argues that the bullshitter is more dangerous than the liar: the liar at least cares about the truth enough to subvert it, while the bullshitter is indifferent to truth altogether. Hicks, Humphries, and Slater apply this framework directly to ChatGPT, arguing that large language models are, by design, bullshit machines — systems that generate plausible-sounding text without any capacity to care whether it is true or false. Claire Wardle’s work on misinformation complicates matters further, showing that the categories we use to understand false information — misinformation, disinformation, malinformation — are themselves contested and politically charged, especially in contexts like the Middle East, where the line between “news” and “propaganda” is drawn by those in power.

What epistemological frameworks do we need when the machines producing information have no relationship to truth? How do we distinguish between AI “hallucination” and AI “bullshit” — and does the distinction matter? And what happens to public knowledge when the systems mediating it are structurally indifferent to whether what they say is true?