Session 2
AI Is a Hot Mess I
Examining the political economy of AI, corporate interests, and the extraction of value from data and labor through key texts by Crawford, Noble, and Katz.
This session establishes the foundational framework for the course by dismantling three pervasive myths: that AI is immaterial, that AI is neutral, and that AI is inevitable. Crawford, Noble, and Katz collectively argue that AI must instead be understood as an extractive industry — one that operates through material infrastructure, exploited labor, and ideological legitimation, and that serves existing concentrations of corporate, state, and racialized power. For humanities students, this reframing is essential: it shifts analysis from technical fixes like “de-biasing algorithms” to structural critique of the political economy underlying AI development.
Kate Crawford’s Atlas of AI traces AI’s supply chain from lithium mines in Nevada to toxic rare earth lakes in Inner Mongolia to content moderators in Kenya earning under two dollars an hour. The minerals, the labor, the data — all are rendered invisible by cloud metaphors suggesting AI floats weightlessly in digital ether. Safiya Noble’s Algorithms of Oppression supplies the empirical case against AI’s supposed neutrality, documenting how Google’s search engine reproduces racism and sexism through its basic architecture, not through glitches that can be patched. Noble coined the term “algorithmic oppression” precisely to move beyond the softer concept of “bias,” which implies technical errors fixable through better data. Yarden Katz’s Artificial Whiteness delivers the theoretical capstone: AI functions as an ideological apparatus that is, in his formulation, “isomorphic to whiteness” — nebulous, shifting, and totalizing in its claims to universal knowledge while serving particular interests. Drawing on Cedric Robinson’s analysis of racial regimes as “makeshift patchwork,” Katz argues that AI cannot be reformed because its problems are not bugs but features.
This foundational framing sets up every subsequent session in the course. When we examine AI’s environmental costs, we build on Crawford’s “earth” chapter. When we analyze surveillance in the Middle East, we extend Noble’s “technological redlining.” When we interrogate AI’s epistemic claims, we deploy Katz’s critique of “artificial whiteness.” The Gulf states’ aggressive AI development — from Saudi Arabia’s NEOM megaproject, built by migrant workers facing documented wage theft and forced displacement, to the UAE’s G42 navigating US-China tech competition — illustrates Crawford’s extraction framework with particular clarity. The course cannot proceed without establishing AI as a political-economic phenomenon rather than a purely technical one.
Key Questions
- Whose labor builds AI systems, whose land provides the minerals, and whose bodies become the training data?
- What is the difference between “algorithmic bias” and “algorithmic oppression,” and why does that distinction matter?
- If AI’s problems are features rather than bugs, what does that imply about the possibility of reform?
- How do Gulf state AI megaprojects replicate or diverge from the extraction patterns Crawford documents in Silicon Valley?
- When corporations blame “the algorithm” for harmful outputs, what structural realities does that deflection obscure?
Keywords
algorithmic oppression, artificial whiteness, extractive industry, ghost work, technological redlining, epistemic forgeries, data colonialism, kafala system, productive refusal, political economy of AI