Session 1

Introduction + Framing

Ethics beyond the hype: critical frameworks; media, knowledge, and power dynamics

This opening session establishes the intellectual framework for the semester, introducing key questions about AI, power, and knowledge production in Middle Eastern contexts. We begin not with how AI works, but with what is at stake — politically, culturally, and epistemologically — when we adopt these systems without interrogating them. Chatbots hallucinate one-third of the time when responding to news queries, yet institutions treat them as authoritative. Facial recognition systems that fail to detect darker-skinned faces are deployed in hiring and policing. The corporations building these systems are valued at hundreds of billions of dollars while the workers who train them — labeling toxic content for under two dollars an hour — remain invisible. This collision of epistemic unreliability, structural bias, and concentrated power makes the case for critical humanities engagement more compellingly than any abstraction about “ethics” ever could.

The course traces historical parallels between AI and past technological transformations, from the printing press to social media, using the Middle East and North Africa as a critical lens. This is not a region studies course that happens to mention AI, nor a technology course that occasionally references the Middle East. It is a sustained inquiry into how power operates through technological systems, and how those systems are experienced differently depending on where you sit. When the same question posed in Arabic versus English yields dramatically different answers from the same chatbot — different framings, different silences, different assumptions about what counts as knowledge — we are no longer dealing with a technical glitch. We are confronting an epistemological infrastructure that encodes whose languages matter and whose experiences are legible.

The Gulf states have each pledged roughly one hundred billion dollars toward AI development, reordering who controls these systems and whose values shape them. Only half a percent of natural language processing research focuses on Arabic, a language spoken by nearly five hundred million people. AI-generated video can fabricate news footage and election content within minutes. Meanwhile, the laborers who build these systems — content moderators in Nairobi, data labelers in Lahore, construction workers at desert megaprojects — remain invisible behind the polished interfaces. The extraction economy has not shifted from oil to data; it has expanded to include both, and the communities bearing the costs remain largely the same.

Key Questions

  • How do we develop frameworks for thinking critically about technologies that are changing faster than our ability to regulate them?
  • If AI speaks different “truths” depending on the language of the query, what does that reveal about the training data — and the world it reflects?
  • Who should be studying AI critically — engineers alone, or people trained to analyze power, representation, and the production of knowledge?
  • What happens when the communities most affected by algorithmic systems are the least represented in their design?

Keywords

algorithmic epistemology, extraction economy, critical AI studies, techno-solutionism, data colonialism, ghost work, hallucination, epistemic infrastructure, decolonial AI, kafala system