Session 3
AI Is a Hot Mess II
Continuing the examination of AI hype through the perspectives of Janelle Shane, Emily Bender, Alex Hanna, and Geoffrey Hinton.
Having established AI as an extractive industry in the previous session, we now look inside the machine itself — not to become engineers, but to develop what might be called critical technical literacy: enough understanding to evaluate claims, identify evasions, and ask productive questions. The session turns on a tension between two imperatives: students must understand how these systems actually work well enough to assess the claims made about them, yet that understanding must remain subordinate to critical analysis rather than tipping into techno-enthusiasm. Historians analyze printing without knowing how to set type; media scholars critique television without engineering degrees. Critical AI studies likewise requires understanding technical claims well enough to evaluate them, not the ability to build competing systems.
Janelle Shane’s You Look Like a Thing and I Love You establishes fundamental distinctions between rule-based AI and machine learning through memorable and often absurd examples. Her “Five Principles of AI Weirdness” — including that AI systems do not understand the problems they are asked to solve and that they take the path of least resistance — provide heuristics students can apply throughout the semester. Emily Bender and Alex Hanna’s The AI Con sharpens the critique: the “stochastic parrot” metaphor, introduced in the landmark 2021 paper that led to Timnit Gebru’s departure from Google, reframes large language models as systems that probabilistically generate text without understanding. Bender and Hanna propose that we call these systems “synthetic media machines” or “text extruding machines” rather than “artificial intelligence” — language that resists hype by design. Meanwhile, Mél Hogan’s “AI is a Hot Mess” grounds the discussion in material costs, arguing that AI is utterly unsustainable both technologically and ideologically: training GPT-3 consumed as much electricity as roughly 120 U.S. homes use in a year, and generative AI systems are fourteen to fifty times more computationally intensive than their predecessors.
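To make the “stochastic parrot” and next-token-prediction ideas concrete in class, the sketch below shows the core mechanism in miniature: a bigram model that picks each next word by sampling from the word-following frequencies observed in its training text. It is not drawn from any of the assigned readings; the three-sentence corpus and the `generate` function are invented for illustration, and real large language models use neural networks over vast corpora rather than bigram counts. The point it demonstrates is the same, though: the system produces probable continuations, not understanding.

```python
# A toy "text extruding machine": a bigram model over an invented corpus.
# Each next word is sampled from the distribution of words that followed the
# current word in the training text; no meaning, no world model, no intent.

import random
from collections import Counter, defaultdict

corpus = (
    "the model predicts the next word "
    "the model does not understand the next word "
    "the parrot repeats the word it heard"
).split()

# Estimate P(next word | current word) from raw co-occurrence counts.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def generate(start="the", length=12, seed=None):
    """Extrude a word sequence by repeatedly sampling a likely next word."""
    rng = random.Random(seed)
    word, output = start, [start]
    for _ in range(length - 1):
        candidates = following.get(word)
        if not candidates:            # no observed continuation: stop
            break
        words, counts = zip(*candidates.items())
        word = rng.choices(words, weights=counts, k=1)[0]
        output.append(word)
    return " ".join(output)

print(generate(seed=7))  # fluent-looking, statistically plausible, semantically empty
```

Running the sketch yields strings that read as grammatical fragments while saying nothing, a useful in-class contrast with Shane’s principle that AI systems do not understand the problems they are asked to solve.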
Geoffrey Hinton’s warnings about AI risk — the Turing Award winner has put the probability of human extinction from AI at twenty percent — provide not an authority to defer to but an object of analysis. Students should apply the hype-detection frameworks from Bender and Hanna to evaluate Hinton’s claims. The contrast between his existential-risk framing and the present-harm focus of AI ethics scholars crystallizes one of the field’s defining divides: the AI safety camp, which worries about hypothetical future superintelligence, versus the AI ethics camp, which insists on accountability for documented harms happening now. Gebru and Torres’s analysis of the TESCREAL bundle traces the ideological roots connecting transhumanism, effective altruism, and longtermism to first-wave eugenics — reframing “AI doom” discourse not as neutral risk assessment but as ideology with specific political commitments.
Key Questions
- What is the difference between pattern matching and understanding, and what collapses when we conflate the two?
- Who benefits when we attribute intelligence to machines, and what forms of accountability become impossible when “the algorithm” is blamed for human design choices?
- How does the language we use to describe AI — “learning,” “thinking,” “hallucinating” — shape the policies we adopt and the power we cede?
- If ninety-five percent of generative AI pilot projects never move beyond the pilot stage, what sustains the investment hype?
- How should scholars engage with extinction claims without either dismissing legitimate concerns or amplifying industry narratives?
Keywords
stochastic parrot, critical technical literacy, hype cycle, TESCREAL bundle, AI winter, ghost work, techno-solutionism, epistemic forgery, next-token prediction, productive refusal