An interview with Jennifer Rhee

On Thursday, April 5, 2018, Jennifer Rhee (Virginia Commonwealth University) delivered a lecture titled “Drone Warfare, Drone Art, and the Limits of Identification.” This follow-up interview with Rhee, author of The Robotic Imaginary: The Human and the Price of Dehumanized Labor, was conducted by Max Larson, then doctoral candidate in English and predoctoral fellow at the Penn State Center for Humanities and Information (now postdoctoral lecturer in English at Penn State), by email during spring 2019.

Zakiyyah Iman Jackson powerfully critiques posthuman calls to move “beyond the human,” which only reaffirm the narrowness of this category, particularly in relation to race. The stakes of interrogating the purportedly universal human are nothing short of life and death in its many forms: slow death, social death, death in life.

ML: Since The Robotic Imaginary is a book about robots, readers might expect to find theoretical discussions of the posthuman, nonhuman, transhuman, or some other turn away from anthropos. From the outset, however, you explicitly state that you want to retain the human as an analytical concept, and you draw upon Édouard Glissant, Sylvia Wynter, and other thinkers who examine humanization and dehumanization as co-constitutive processes. While The Robotic Imaginary is largely a work of media studies, thinkers such as Glissant and Wynter are not usually regarded as media theorists — at least not in the way that posthumanists such as Katherine Hayles or Friedrich Kittler have been canonized as media theorists. What do you see as the current status of the human and the posthuman in media studies? Do media scholars need to rethink their approach to humanism?

JR: I see the category of the human as a productive site of contestation in media studies. I’ve learned a lot from theorizations of the posthuman and the nonhuman, particularly Katherine Hayles’ How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Part of what I find so useful about Hayles’ engagement with the posthuman is that it opens up the category of the human for interrogation. Karen Barad’s posthumanism is also incredibly productive; for Barad, the posthuman designates the constructedness of the category of the human and the power relations that shape and police this construction (my version of media studies also draws significantly from feminist science studies, which you may have gleaned from my citing Barad here).1 So, while my book insists on “staying with the human,” to paraphrase Donna Haraway, I don’t see discourses of the human and of the posthuman in media studies as necessarily oppositional. I’m interested in what a work opens up. I’m particularly interested in work that opens up further conversations about the power relations and the dehumanizing erasures and exclusions that produce the category of the human. Some of this work has been done under the name of the posthuman, some under the name of the human. For example, Hayles and Bernard Stiegler place the human at the center of technology and media studies in ways that I find very useful, because they demonstrate the fundamental elasticity of the category of the human.2 Wendy Chun’s important media studies work also offers an incisive interrogation of the human, particularly as it’s been constructed through race. Chun’s scholarship refuses the human as an ahistorical and universal category and instead offers the human as a category whose history and present are enmeshed with race, racism, colonialism, and capitalism.3

This may seem counterintuitive, but my turn back to the anthropos in The Robotic Imaginary finds common cause with [Françoise] Vergès’ turn away from the anthropos and “the anthropocene.”

In addition to what a work opens up, I also like to think in terms of the possibility of shared stakes. Zakiyyah Iman Jackson powerfully critiques posthuman calls to move “beyond the human,” which only reaffirm the narrowness of this category, particularly in relation to race.4 The stakes of interrogating the purportedly universal human are nothing short of life and death in its many forms: slow death, social death, death in life. In thinking about these stakes, your question brings to mind debates around the term “anthropocene,” which I’m thinking about for my next book on digital counting technologies and race. In addition to examining the racial biases of digital counting and big data, from digital redlining and biometric surveillance technologies to predictive policing software, I’m also looking at the environmental costs of digital counting. As numerous thinkers observe, among them Françoise Vergès and Jason Moore, the term anthropocene turns on the concept of a universal human; in doing so the term erases the unevenly distributed harms of climate change and environmental pollution, which largely affect poorer communities in the Global South that are also the least responsible for these environmental harms.5 Thus the term anthropocene, in presuming an undifferentiated and universal human as the agent of this new geological era, erases important racial, geographic, and economic differences when it comes to environmental harm, responsibility, and possibilities for repair. Vergès offers instead the term racial capitalocene to note the unevenness of the environmental harms and to situate contemporary environmental shifts within larger historical and socioeconomic contexts. This may seem counterintuitive, but my turn back to the anthropos in The Robotic Imaginary finds common cause with Vergès’ turn away from the anthropos and “the anthropocene.” Following scholars like Sylvia Wynter, Édouard Glissant, Chun, and Jackson, as well as Frantz Fanon and Lisa Lowe, who identify the construction of the modern human through the dehumanization, exploitation, and violent erasure of enslaved, colonized, and indigenous people, my book underscores the history of differentiated claims to the human. In this way, I understand the concept of the human as itself a technology of racial and gender differentiation.

Your thoughtful question also points to the richness of media studies’ interdisciplinary possibilities and the broader discussions that these interdisciplinary conversations engage. My engagement with this interdisciplinarity looks to feminist science studies, feminist and gender theory, and critical race and ethnic studies as a way to rethink and critique the human from within media studies. I’m really excited about the scrutiny that the concept of the human is under in media studies. Here, I’d like to point to Neda Atanasoski and Kalindi Vora’s excellent new book, Surrogate Humanity: Race, Robots, and the Politics of Technological Futures. Their scholarship on robots, AI, and race has been really important to my thinking, and I’m eager to continue thinking with their work. And Simone Browne’s Dark Matters: On the Surveillance of Blackness is just an exemplary work of scholarship in many ways, including in its insistence on a broader historical and political scope when thinking about technology and race.

For me, it was important to examine the ways robots emerge from longer histories of dehumanization without further inscribing this dehumanization.

ML: I’d like to turn the previous question on its head. As I already suggested, some of your main theoretical touchstones, such as Glissant and Wynter, are not usually regarded as media theorists. Instead, these thinkers examine processes of dehumanization through the experiences of actual, living, historical persons. In The Robotic Imaginary, you convincingly argue that humanoid robots — which are not human, but are modeled upon humans — also provide an important site for interrogating processes of dehumanization. While the analytical value of the “robotic imaginary” is clear, I’m wondering if you can make a more comparative claim. When we study dehumanization, what is the difference between studying robots as opposed to studying actual living persons? Are there any particular challenges or benefits?

JR: I’m delighted that you found the robotic imaginary’s analytic value to be clear! For me, it was important to examine the ways robots emerge from longer histories of dehumanization without further inscribing this dehumanization. Lived experience, in many ways, makes all the difference. Christina Sharpe’s powerful discussion of the ongoing dehumanization of Black lives in her book In the Wake: On Blackness and Being is particularly illuminating in this context. While I think the figure of the robot can productively reflect whose lives are valued in society, there’s obviously no comparison between the lived experience of politically marginalized people and robots. This is why my book, and my concept of the robotic imaginary, is not at all in dialogue with ongoing conversations in robot ethics about whether robots should have rights (for example, in 2017, Saudi Arabia granted citizenship to a robot named Sophia). My book, as I mention in my introduction, is first and foremost about the human. Specifically, I’m concerned with the centrality of dehumanizing practices in defining the category of the human. The human is nothing without dehumanization. Robots across technology and culture reflect and reinscribe these dehumanizing practices in salient ways, and there’s an urgency to attending to robots and AIs in this context, especially considering the rapid and ongoing growth of AI and robotics technologies, a growth that maintains, if not exacerbates and accelerates, these dehumanizing logics and projects.

What I hope I’ve done is to make clear that the robot in art and technology is useful as a reflection of how society values human lives, without effecting a collapse where we lose sight of the humans whose lives are materially affected by their historic and ongoing dehumanization and devaluation. Not taking seriously the ways that the cultural imaginary heavily shapes how technologies are developed, applied, and used risks reinscribing these values, with all of their biases, through these technologies. For example, in literature, the robot often operates as a metaphor for politically marginalized and disenfranchised people, particularly in relation to race, gender, and sexuality. This is, to hint a bit at my response to your fourth question, part of why literature is so methodologically important for me in this project. Fictional robots operate as metaphors, which can powerfully reflect larger societal issues with great nuance, and yet as metaphors they are always metaphors for something else or someone else. This is an incredibly important and useful insight to have when analyzing technological robots, in part to remind us that these technologies, and the cultural narratives that inform them, are enmeshed with devastating dehumanizing projects that have historically (though not just historically) governed and policed who is considered human and who isn’t.

ML: The cover of your book contains an arresting image of Nam June Paik’s Robot K-456. At the end of the first chapter, you discuss Paik’s performance piece, The First Accident of the 21st Century, in which a taxi cab crashes into Robot K-456 on Manhattan’s Madison Avenue. The Robotic Imaginary contains many examples similar to this one, where robots are designed to challenge our typical relationships with anthropomorphized technologies. Most of these examples, however, seem to be confined to the world of robotics laboratories and performance art. What about our more quotidian, day-to-day interactions with mass-produced technology? For instance, most of my own personal experiences with humanoid robots have been with Siri and Alexa. What would an ethics of care, opacity, and vulnerability look like in this context — not at an art exhibit, but at Amazon or Google, on our iPhones and our smart speakers? Under conditions of global capitalism, is such an ethics possible?

JR: My answer lies in your concluding query, “Under conditions of global capitalism…” While art is no less embedded in global capitalism, it is embedded in very different ways than are Siri and Alexa, which are not just products designed to serve the consumer in very gendered ways but also, as has been increasingly revealed, tools of data surveillance for Apple, Amazon, and the state agencies these corporations share this data with. That doesn’t mean that people can’t have important, meaningful interactions with these technologies. Artist Stephanie Dinkins has talked poignantly about this possibility with Alexa. But these individual meaningful encounters are also set against the backdrop of these technologies’ (perhaps primary) purpose as data surveillance and capture for corporate profit and policing purposes. And these corporate and policing purposes overwhelmingly disadvantage or threaten those people already disproportionately targeted by the police and the government, including Black and brown people, queer and trans people, and undocumented people. When juxtaposing these multiple scales of operation, questions of who owns these technologies and who is owned through these technologies come starkly into relief.

Another way to approach this question is to look at the difference between Siri and Samantha, the fictional operating system in Spike Jonze’s Her. As I discuss in my chapter on care labor and conversational AI, Samantha works in the multiple ways your question lays out: meaningful personal interaction, corporate surveillance, and data capture. I read the film as ultimately unable to imagine an ethics because of the narrow scope of the narrative, which focuses pretty exclusively on the romantic relationship between Samantha and one of her owners, Theodore. Samantha is both owned commodity and romantic partner (though these two positions are complicated in productive ways that point to the ways heteronormative relationships have historically worked in the service of capitalism). And the central conflict in the film is about the purported untenability of these dual positions, as embodied in Samantha. However, I argue that the film’s focus on Samantha and Theodore’s romantic relationship reveals the dangers of a narrow scope that erases the significant complexities of what a technology is. To think ethically about a technology requires inhabiting multiple scales and temporalities. The relation between individual user and technology is, of course, an important scale, but staying only at this scalar register, as the film does, erases the multiple and overlapping urgent stakes of thinking ethically about technology.

As my students and I discuss, technology is not reducible to a single technological object or a single application or function. Technology also includes the materials used to manufacture and support the technology, the environmental costs of manufacturing, sustaining, and disposing of a technology (from mineral mining to managing e-waste), and the often exploitative and toxic conditions of human labor concentrated in the Global South that go into producing and supporting the technology. For me, an ethics of care and shared vulnerability would include this expansive conceptualization of a technological object. Under global capitalism, we are deeply interconnected. Here I’m thinking about a concept of interconnection that explicitly places our capacity to harm and be harmed by each other at the center of ethical considerations, something akin to Wendy Chun’s incisive writings on the “leakiness” of social media as the starting point for rethinking our ethical relations with these technologies and with each other.6 Part of what I’m pointing to here is something that the humanities is expert in: embracing and navigating complexity, and specifically thinking about our relationships to technology in all their complexities. Our technological objects imbricate us even further with other humans, for example, the humans who make possible these technological objects and who often do so in horrific and toxic conditions, as the suicides at Foxconn factories demonstrate. In thinking about technology, somewhat analogous to thinking about global capitalism, the question of ethics is one of how to apprehend ourselves and others simultaneously in multiple temporalities, relations, and geographies.

In engaging these themes, literature foregrounds the ethical dimensions of how and why some humans are separated from other humans; for me, this is an important starting point for interrogating AI and robotics technologies.

ML: Can you say something about the role of interdisciplinarity in your work? I’m thinking in particular about your clear commitment to literature. Why is it important to consider novels alongside — to borrow some examples from The Robotic Imaginary — robots, AI expert systems, and the United States drone program?

JR: The broader answer to your question lies in the co-evolution of culture and technoscience, which takes place in and is shaped by shared social, economic, and political contexts. In terms of your specific question about the importance of literature in my project, my commitment to literature in part stems from robot fictions’ significant early and ongoing contributions to shaping the robotic imaginary. As I mention in my book’s introduction, the word “robot” first appeared in Karel Čapek’s influential play R.U.R.: Rossum’s Universal Robots (1920). The word “robot,” suggested to Čapek by his brother Josef, is derived from the Czech word robota, meaning forced labor or drudgery, and from robotník, the Czech word for serf. And the word “roboticist” first appeared in Isaac Asimov’s short story “Strange Playfellow” (1940). Roboticists themselves identify childhood encounters with fictional robots as important formative moments that connect to their technological work. Isaac Asimov, whose work I don’t examine closely in my book, was an incredibly important early thinker of robots. Asimov’s influential robot stories, like Philip K. Dick’s android stories and novels, are preoccupied with robots’ similarities to humans, particularly robots’ similarities to humans who are marginalized and dehumanized by society. Asimov’s, Dick’s, and Čapek’s influential robot imaginings offer the robot not as innately different from humans; rather, the differences between humans and robots come down to their treatment by other humans and by society, particularly in relation to issues of labor and freedom. In this way, as Isiah Lavender and Despina Kakoudaki have insightfully argued, robots have been linked to enslavement from their fictional origins.7 In engaging these themes, literature foregrounds the ethical dimensions of how and why some humans are separated from other humans; for me, this is an important starting point for interrogating AI and robotics technologies.

Also, as my chapter on emotional robots and Philip K. Dick’s android novels demonstrates, literature doesn’t just reflect what’s happening in technology. Drawing on metaphor, narrative, and other literary devices and techniques, literature possesses a unique aesthetic capacity to depict significant complexities and nuances about technology, such as how a technology reflects existing social norms and biases, and how a technology can be reimagined to work against these norms and biases. I’m making an argument here for writing and reading literature, which are practices that I value very highly. I’m also making an argument for the related practice of literary analysis, the close reading of literary texts joined with the deep study of their historical and political contexts in all their complexities. As my students demonstrate so impressively every semester, literary analysis allows us to apprehend important complexities: to analyze them, to present them clearly, cogently, and without reduction or distortion, and to develop new knowledge from within these complexities. So, for example, it’s not just about how Philip K. Dick’s novels influenced the robotic visions of many roboticists, but also how these novels bring to the foreground the exclusionary power relations in which humans and robots alike operate. And practices associated with literary analysis provide an important means of identifying, analyzing, and communicating the centrality of power relations in producing the human and the dehumanized as depicted in Dick’s works.


1 Karen Barad, “Posthumanist Performativity: Toward an Understanding of How Matter Comes to Matter,” Signs 28, no. 3 (2003): 801–831.

2 I’m drawing on Diana Fuss here, who defines the human as “one of our most elastic fictions.” Diana Fuss, “Introduction,” in Human, All Too Human, ed. Diana Fuss (New York: Routledge, 1996), 1.

3 Wendy Hui Kyong Chun, “Race and/as Technology,” Camera Obscura 24, no. 1 (2009): 7–34.

4 Zakiyyah Iman Jackson, “Outer Worlds: The Persistence of Race in Movement ‘Beyond the Human’,” GLQ: A Journal of Lesbian and Gay Studies 21, nos. 2–3 (2015): 215.

5 Françoise Vergès, “Racial Capitalocene: Is the Anthropocene Racial?” in Futures of Black Radicalism, eds. Gaye Theresa Johnson and Alex Lubin (New York: Verso, 2017); Jason W. Moore, “Anthropocene or Capitalocene? Nature, History, and the Crisis of Capitalism,” in Anthropocene or Capitalocene? Nature, History, and the Crisis of Capitalism, ed. Jason W. Moore (Oakland: PM Press, 2016), 1–13.

6 Wendy Hui Kyong Chun, Updating to Remain the Same: Habitual New Media (Cambridge, MA: MIT Press, 2016), 103–128.

7 Isiah Lavender III, Race in American Science Fiction (Bloomington: Indiana University Press, 2011), 60–62; Despina Kakoudaki, Anatomy of a Robot: Literature, Cinema, and the Cultural Work of Artificial People (New Brunswick: Rutgers University Press, 2014), 115–72.

Lecture: Olivia Banner, “Technopsyence and Afro-Surrealism’s Cripistemologies”

Thursday, March 28, 2019
3:30 PM
Grucci Room, 102 Burrowes Building

Lecture sponsored by the Digital Culture and Media Initiative (Department of English) and the Rock Ethics Institute

On Thursday, March 28, 2019, Olivia Banner (University of Texas at Dallas) will deliver a lecture titled “Technopsyence and Afro-Surrealism’s Cripistemologies.”

Description of presentation

New psychiatric research leashes mobile device data to neuroscientific and genetic research for the purpose of resolving weaknesses in psychiatric nosologies. Digital psychiatric treatment tools, implemented under neoliberal austerity frameworks, generate new data associated with mental health. I name this interlinked assemblage “technopsyence” to indicate that the psy-ences, like other domains of 21st century biomedicine, operate in tandem with the technology industries that they fuel. With the risk industries incorporating this new data into their calculations, technopsyence serves as another big data industry by which populations are capacitated and debilitated. In this, I argue, technopsyence reproduces extractive racial capitalism.

Using a crip theoretical approach, I explore a recent Afro-Surrealist text about Black cyborgs and Black mental distress to consider its reimaginings of the relationship among digital technologies, bodyminds, and extractive racial capitalism. An aesthetic that challenges Enlightenment epistemologies, Afro-Surrealism questions the models for knowledge about bodyminds that undergird technopsyence. The text I examine offers a historical materialist cripistemology of digital media. It presses us to imagine care in the digital era outside of racial capitalism.

Speaker bio

Olivia Banner is Assistant Professor of Critical Media Studies at the University of Texas at Dallas. Her recent book Communicative Biocapitalism: The Voice of the Patient in Digital Health and the Health Humanities (University of Michigan Press, 2017) examines how gender, race, and disability inform the value that biocapitalism locates in “the voice of the patient.” Her second book project, Screening “Madness,” 1949–2020, constructs a genealogy of screen media’s incorporation into the psychiatric disciplines to reveal those media’s centrality to the disciplines’ racialization and gendering of pathologization.

Lecture: Aden Evens, “Ontological Limits of the Digital”

Thursday, March 14, 2019
3:30 PM
Grucci Room, 102 Burrowes Building

On Thursday, March 14, 2019, Aden Evens (Dartmouth College) will deliver a lecture titled “Ontological Limits of the Digital.”

Description of presentation

Digital technologies are nearly ubiquitous and serve a great many purposes, but this very heterogeneity discourages an analysis of universal characteristics of the digital, including consideration of possible fundamental limits on what the digital can do. Instead of drawing conclusions about the digital by surveying its applications, this talk examines the ontological foundations of digital technology, especially the ontology of the bit, in an attempt to construct a general theory of what the digital does. How do bits underpin digital operation, giving the digital its vast and broad reach? What aspects of bits, and the digital structures built from them, carry over into the human-machine interface and so also into the cognition and behavior of those who engage with digital technologies? Recognizing that digital ontology is in important respects unlike the ontology of the material world, this talk attempts to articulate the ontology of the digital, identify its distinctive modalities, and speculate on that basis about its unassailable limitations.

Speaker bio

Aden Evens is Associate Professor and Vice-Chair of English and Creative Writing at Dartmouth College. His extradisciplinary research explores the ways in which formal systems influence individuals and cultures. His early career work on music, sound, and associated technologies led to the publication of the book Sound Ideas: Music, Machines, and Experience (University of Minnesota Press, 2005). Since then, he has been writing and teaching about the digital, perplexed at how few people seem to share his sense of alarm at the increasing hegemony of this underexamined facet of our lives. His second book, Logic of the Digital (Bloomsbury Academic, 2015), offered a sober look at the digital’s underlying principles.

Cover of Evens, Logic of the Digital