Image from SWIM (2024) by Eryk Salvaggio


We are collaborating with Shazeda Ahmed (Chancellor’s Postdoctoral Fellow, UCLA Center on Race and Digital Justice) to organize a series of seminars on approaches to both studying AI as an object of inquiry and discerning the inherent trade-offs in uses of machine learning in research.

The goal of the course is to deepen students’ familiarity with a range of methods for producing scholarship about AI, which we will co-develop in response to students’ research interests. Participants are expected to attend as many sessions as possible to ensure a cohesive final project (described below). The seminar will be held fully in-person and requires no prerequisites. Graduate students from all disciplines are welcome to attend.

The seminars are scheduled on Mondays from 4–6 pm, March 31 – June 2, 2025, at 3312 Murphy Hall.

For questions and inquiries, please contact shazeda@g.ucla.edu.


Overview

As applications of artificial intelligence proliferate in public life, it is tempting to cast long-standing issues (e.g., cultural representation, the ethics of war, the politics of knowledge production) as emergencies with binary solutions – e.g., ‘innovate vs. regulate’, ‘AI safety vs. AI ethics.’ In this graduate-level reading seminar, we will slow down this manufactured urgency to understand approaches to both studying AI as an object of inquiry and discerning the inherent trade-offs in uses of machine learning in research.

Throughout the seminar we will consult AI’s histories and reinventions to gauge what has changed in the contemporary iteration of AI’s boom-and-bust cycle. What types of evidence and argumentation are marshalled in debates about whether ‘superintelligent’ AI is possible? How can we understand the bases of competing views on whether AI systems – and the natural resource-extracting infrastructure on which they depend – can destroy or protect the environment? From the use of synthetic data to the restructuring of search engines and literature review tools, how has machine learning come to change scientific research methods? We will weave these questions into discussions that contextualize AI’s appearances in recent headlines, from claims about how automation will revolutionize work to tracing AI’s role in the restructuring of the US government. We will critically assess how fields including science and technology studies, sociology, anthropology, computer science, and law and policy have devised methodologies to study AI’s effects on the world.

Each week, we will read and discuss approximately one book; two to four research papers and news articles; and at least one reading selected by the seminar’s participants. The seminar will require two assignments, one completed as a group and one individually. For the first assignment, we will use a series of writing and discussion activities distributed throughout the quarter to collectively develop a set of written recommendations for conducting research on AI (modeled after similar examples). For the individual assignment, students will write a research design and annotated bibliography for a project contributing to their own independent scholarship. If there is interest, the instructor will edit a selection of these into a series of short essays accompanied by reading lists, to be published through the Neuro, Narrative, and AI initiative.


Session I: March 31, 2025: “Introductory Lecture”

In Week 1, we will introduce the purpose behind this seminar: identifying the epistemics of how knowledge about AI is produced, and determining how this can inform our own sensibilities when developing our scholarship. 

What does studying AI look like from the emerging field of “critical AI”? What is the view from a data science perspective? What features are each of these disciplinary backgrounds attuned to when reading grandiose proclamations from major AI companies? 

Readings


Session II: April 7, 2025: “From Expert Systems to Large Language Models: What Was Lost, What Was Gained?”

Please note that the Week 2 session will run only from 4–5 pm.

In Week 2, we’ll start with an ethnographic account from Diana Forsythe of what the ‘expert systems’ approach to building AI looked like in the 1980s-90s. Which of Forsythe’s observations carry through to the present-day fixation on large language models (LLMs)? How do computer scientists perceive the limits of what can and should be done with contemporary machine learning methods? We’ll contrast technically rooted skepticism with a provocative take from linguistic anthropology on how LLMs make meaning with language, a view that challenges emerging notions of ‘agency’ and ‘intelligence’ we will continue to interrogate throughout the quarter.

Readings


Session III: April 14, 2025: “Competing Normativities: Fairness, Human Values, and Alignment”

From “algorithmic fairness” to “AI alignment,” how have recent beliefs about the ways AI systems can enact human values given rise to new subfields of research? In Week 3, we’ll read about interactions between computation and the social sciences that complicate these approaches to making AI “safer.”

How do the objectives and methods of pursuing “fairness” in machine learning contrast with those of AI alignment research? What is missing from their respective approaches to problem formulation? We will critically assess issues including the “values” that existing AI systems tend to multiply at scale, and the decontextualized way that social science research can become enlisted in projects to mitigate AI harms.

Readings


Session IV: April 21, 2025: “Faith in Humanity: Transhumanism, Religion, Race”

Critics of artificial general intelligence (AGI) often describe its advocates as possessing a near-religious fervor in their belief that AI can one day be (or already is) sentient. How did Silicon Valley swing from discrediting the possibility of AGI to marketing this exact vision of the future?

In Week 4, we will discuss how scholars of race, philosophy, and religion have interrogated concepts of “humanity” that emerge from beliefs about merging minds and machines via transhumanism.

If you have difficulty accessing any of the linked readings, please email shazeda@ucla.edu.

Readings

  • Ali, S.M. “Transhumanism and/as Whiteness.” In Hofkirchner, W., and H.J. Kreowski (eds.), Transhumanism: The Proper Guide to a Posthuman Condition or a Dangerous Idea? Cognitive Technologies. Springer, Cham, 2021. https://doi.org/10.1007/978-3-030-56546-6_12.
  • Butler, Philip. Black Transhuman Liberation Theology: Technology and Spirituality. Bloomsbury Publishing, 2019. Chapter 1, “Thinking of Black Transhumanism: Non-Humanity, Moving Away from Transhumanism’s Roots,” pp. 27-39.
  • Gebru, Timnit, and Émile P. Torres. “The TESCREAL Bundle: Eugenics and the Promise of Utopia through Artificial General Intelligence.” First Monday, vol. 29, no. 4, Apr. 2024. https://doi.org/10.5210/fm.v29i4.13636.

Session V: April 28, 2025: “AI, the US Government, and Silicon Valley”

The tech industry takeover of the US government from within is unprecedented, yet the shifts in policy, technology, and culture that led here are not. Where did some of the ideas that are being used to refashion or entirely discard core public institutions originate? What does AI become a proxy and pretense for when anti-regulatory, ostensibly anti-government tech leaders seize state power?

In Week 5, these questions will guide us in thinking through methodological approaches to studying these developments.

Readings

  • Slobodian, Quinn. Hayek’s Bastards: Race, Gold, IQ, and the Capitalism of the Far Right. Zone Books, 2025. pp. 7-24, 93-128. https://muse.jhu.edu/book/135098.
  • Justice, George, and David Golumbia. Cyberlibertarianism: The Right-Wing Politics of Digital Technology. University of Minnesota Press, 2024. pp. ix-56, 271-394. https://muse.jhu.edu/book/124003.

Session VI: May 5, 2025: “Piercing the Fog of AI War”

The history of AI’s development is inextricable from that of the Cold War. Today, major AI companies tout a commitment to ethics yet have been quick to sign partnerships with defense contractors and provide tools that support mass surveillance, genocides, and wars around the world. How can we approach studying this subject that is often shrouded in secrecy?

In Week 6, we will discuss the imaginaries, labor, and institutions that contribute to AI-mediated warfare.

Readings

  • Atanasoski, Neda, and Kalindi Vora. Surrogate Humanity: Race, Robots, and the Politics of Technological Futures. Duke University Press, 2020. pp. 1-26, 134-196. https://doi.org/10.1515/9781478004455.
  • Suchman, Lucy. “Imaginaries of Omniscience: Automating Intelligence in the US Department of Defense.” Social Studies of Science, vol. 53, no. 5, Oct. 2023, pp. 761-86. https://doi.org/10.1177/03063127221104938.

Session VII: May 12, 2025: “AI, the University, and the Production of Knowledge”

Across virtually every academic discipline, scholars grapple with questions of whether and how to adopt new AI tools into their research.

In Week 7, we will survey emerging work that assesses perceived promises of AI’s incorporation into research against its trade-offs. How might the use of AI search and synthetic data change the process and outcomes of research? We will conclude with a consideration of the political stakes of how large language models are being derived from the outputs of scholarly research. 

Readings

  • Shah, Chirag, and Emily M. Bender. “Situating Search.” In Proceedings of the 2022 Conference on Human Information Interaction and Retrieval (CHIIR ’22), Association for Computing Machinery, New York, NY, USA, 2022, pp. 221-232. https://doi.org/10.1145/3498366.3505816.

Session VIII: May 19, 2025: “Resistance and Building Alternatives”

Who is proposing ways of living with AI other than the ones available now? How do they pursue these visions?

In Week 8, we’ll address multiple axes of critique and modes of building otherwise that engage with technical, political, and cultural interventions to challenge AI’s power structures.

Readings

  • Katz, Yarden. “Dissenting Visions: From Autopoietic Love to Embodied War.” In Artificial Whiteness: Politics and Ideology in Artificial Intelligence, 185–224. Columbia University Press, 2020. http://www.jstor.org/stable/10.7312/katz19490.10
  • Widder, David Gray, and Tamara Kneese. “Salvage Anthropology and Low-Resource NLP: What Computer Science Should Learn from the Social Sciences.” Interactions, vol. 32, no. 2 (March-April 2025), pp. 46-49. https://doi.org/10.1145/3714996.

Session IX: Tuesday, May 27, 2025: “Conclusion and Retrospective”

In Week 9, we’ll close out the quarter with a retrospective discussion and a retracing of Silicon Valley’s history from three vantage points: the texts Big Tech’s big names think with, the social history of AI’s extraction of labor, and the origins of venture capital.

Readings

  • Daub, Adrian. What Tech Calls Thinking: An Inquiry into the Intellectual Bedrock of Silicon Valley. Farrar, Straus and Giroux, 2020.
