An interview with Adrienne Shaw

On March 19, 2015, Adrienne Shaw delivered a lecture for the DCMI speaker series in Critical Media and Digital Studies titled “Gaming at the Edge: Gender, Race, and Sexuality in Video Games.” This follow-up interview was conducted by Brian Lennon by email during May 2015. The interview covers topics including questions of intersectionality and representation, audience research and game studies, qualitative research methodologies, ideology critique, and the influence of Stuart Hall.

Real ethical analysis of qualitative data requires taking what people say seriously and really trying to wrap your head around why they say things the way they do.

Brian Lennon: In reading Gaming at the Edge: Sexuality and Gender at the Margins of Gamer Culture I couldn’t help constructing a narrative of intellectual history in which the analytic sophistication of theories of intersectionality, for example, had yet to find its way anywhere near the underdeveloped discourses on political, social, and aesthetic representation that prevail in the game industry itself. Of course your point in the book’s first chapter is that academic game studies has done no better when it comes to integrating the perspectives and insights generated by theories of intersectionality and by related concepts that also have roots in postcolonial studies (strategic essentialism, for example). It’s easier to be disappointed here, because scholars should know better and try harder, and because academic game studies wasn’t born yesterday — and so on (I could go on). But I find your book a very hopeful one, in the end, on these issues and others, and in its imagination of “a future free of dickwolves” in the broadest sense. In the best of all possible worlds, where do you think we’re going when it comes to bringing that kind of analytic sophistication to game studies as a field of academic research, in particular?

Adrienne Shaw: I have actually been quite heartened by the fact that since I first began the book, the amount of games research dealing with nuance around questions of intersectionality and representation has been increasing exponentially. I think that a lot of early work focused on gender, because it had to. Even if the scholars doing that work knew that gender is more complicated than binary, Western-centric portrayals (and a lot of their work hints at this), they were speaking to audiences that did not necessarily get that. Based on my conversations with people who have been in game studies longer than I have, they were just trying to get other scholars to take seriously the fact that gender was an issue in games in the first place. I think feminist and progressive scholars take for granted that everyone “gets it,” but whether you are in humanities, social sciences, or any of the interdisciplines, there are people out there who don’t know a thing about feminism or anti-racist politics beyond the caricatures they see commonly represented. When a field is new, particularly, sometimes you do have to reinvent the wheel and say those issues people have been talking about for years are issues here too. I am actually quite lucky in the fact that game studies was coalescing as a field when I began grad school, so by the time my book came out I could point back to that early work and say: “so it is clear, gender is a problem here, but now that we know that let’s start talking about it with a bit more complexity.” I feel like people are really taking up that call, some of whom started doing so before my book came out (I think we’ve all recognized that the groundwork had been laid, and now we could take the debate elsewhere). I think text, audience, and industry studies are all becoming more nuanced in how they address questions of identity and access. In a perfect world I think the big thing I would like to see game scholars address next is class. So much of how and if and why people engage with games, game production, and game culture is defined by material resources. That doesn’t mean people who are poor do not engage with games, but that their approach to games is different and largely has been invisible in game studies. Class there most assuredly intersects with gender, race, and sexuality, but it needs to be addressed with much more nuance.

Brian Lennon: The phrase “nice when it happens” emerges from a remark by your interviewee Carol and comes to serve as an epigrammatic condensation of your general argument in Gaming at the Edge (which I’ve tried to paraphrase accurately in the following question). You reflect at some length in your Conclusion on how the word “nice” and its concept “niceness” function there. In many ways, the statement as a whole marks a player’s habit of taking the problem of representation on a case-by-case basis, which mirrors your methodological approach in the book. But it also expresses a kind of general (if certainly not final or totalizing) theory about the importance of representation: something like “nice when it happens, and when it doesn’t happen, not a one-dimensional problem.” What would you say is the main challenge in writing disciplined academic prose that reports and reflects on such verbal formulations of interviewees?

Adrienne Shaw: Well, I think the first thing is to be very careful about not putting those words into people’s mouths. When I noted in my coding of the interviews that these references to “niceness” were happening again and again, I went back to make sure it wasn’t in the question. Lucky me, it wasn’t. A word to the wise for future interviewers out there: never use phrases from early interviews in later interviews, or you might muddle your data. Second, it is important to look at those kinds of phrases in context. How did they express it? Where did those phrases pop up? It is hard to communicate in academic prose, but “nice when it happens” was usually articulated at the same moment everyone made a face like they were grasping at an idea that was just out of reach and hard to put into words. That moment for me captured the heart of the complexity of representation (and I think you articulate that takeaway quite well in your question). It always came towards the end of the interview, and it was always clear that interviewees were wrestling between what they thought they were supposed to say and how they actually felt. By the end — it’s not that people were more honest — they were getting closer to how they wanted to say what they thought. I think in staying true to interviewees’ perspectives, you have to be attentive to the process of the interview, and to how those verbal formulations evolve and unfold over its course — that’s probably the biggest challenge too. Circling back to my first point, interviewees aren’t just there to provide nice illustrative quotes for what the researcher wants to say. Quotes should never simply be for texture. Real ethical analysis of qualitative data requires taking what people say seriously and really trying to wrap your head around why they say things the way they do.

In my book I try to push back against the marketing arguments often given for why representation matters (i.e. that women play games and that’s why women should be in games), but I am also pushing back against a focus on effects arguments (i.e. that games make people sexist, and therefore women in games should be represented better). I think both arguments are too deterministic and do not honor decades of audience research.

Brian Lennon: The main argument in Gaming at the Edge, as I read it, is in many ways deeply counter-intuitive, and that’s one reason it’s so thought-provoking. You argue that the best reason for game designers to develop more diverse games is not that the players on whom you focus in your research identify exclusively, or even often, with their player avatars — because they don’t, often not even at all. Rather, the best reason for game designers to develop more diverse games is that, as you put it, “representation does not matter to players in many games” (219, emphasis in original). There is, in other words, nothing really holding designers back, at least at the level of argument and ideas (rather than, for example, hiring and other employment practices in the game industry itself and the structures of identification they transmit to game development). Gaming at the Edge does a great job of elaborating and supporting this argument, but if we (unwisely) abstract it from the qualitative research that led you to this conclusion, it still runs up against the more and more powerful invitations to strict or canalized identification that seem to accompany every new advance in game graphics and networking in particular (I’m thinking especially of contemporary military-themed first-person shooters that integrate the characteristic features of adventure and multiplayer RPGs). That makes me wonder if there is still a role for ideology critique — that is, critique of design choices, in advance of reception — in the analysis of games as cultural forms. Your remarks on Custer’s Revenge, and other remarks in the book, suggest there is — but I suppose I’m wondering where we ought to separate games that can and should be critiqued up front, based on their invitations to inhabit an abhorrent world view, from games whose players deserve the qualitative research attention you’ve given them in Gaming at the Edge.

Adrienne Shaw: This is a question that I think hearkens back to some critiques of early active audience research. On the one hand, we have decades of research discussing the impact of media on promoting particular ideologies. On the other hand, we have a great deal of research, particularly since the 1980s, showing that audiences are not merely passive recipients of ideology. If we look to Stuart Hall’s work, though (and Hall is probably the biggest influence on my research), we see that these two perspectives are not mutually exclusive. Media texts do help reinforce ideologies, even as audiences are not mindless dupes obeying the commands of their television. An ideological critique shows us something about how society is structured; audience analysis shows us something about how people live within that structure. In my book I try to push back against the marketing arguments often given for why representation matters (i.e. that women play games and that’s why women should be in games), but I am also pushing back against a focus on effects arguments (i.e. that games make people sexist, and therefore women in games should be represented better). I think both arguments are too deterministic and do not honor decades of audience research. Instead, I hope that the book makes the case that games simply should be more diverse and that audiences, I think, are ready for it. It would be a new project, but I suspect that even hetero, cis, white players are skilled at “making do” with media in the same way marginalized groups have been. At the same time, though, there will always be a place for ideological critiques of game content. Those are in fact crucial to unpacking what forms of diversity have or have not been included in games. I just think that ideological critiques can be made in a way that does not presume effects, and that presumed effects need not be the reason they matter. If anything, the replication of ideology in games is an effect of hierarchical social structures. It shows us what dominant values are, and in turn provides us with new entry points towards changing that structure.

I think it is important that, whatever the pressures to do bigger, faster projects, we scholars always remember to take the time to think through what we are trying to do and how we are going to do it, and to take the time to figure out if we’ve really found what we think we’ve found.

Brian Lennon: You describe Gaming at the Edge as a book that focuses on “solitary gaming” (49), though from my own point of view there’s a great deal in your qualitative research approach, and in other aspects of your project, that compensates up front for the possible limitations of such a focus, as you describe them in that passage of the book. It’s clear, for example, that “solitary” doesn’t necessarily mean only chosen solitude, and that study of the “game play that often takes place behind closed doors” is also the study of a social space in relation to other social spaces. Still, at a historical moment when we hear over and over that the unprecedented scale of available research data (of all kinds) requires modes of analysis that both meet data with computation and scale analysis up to a new precedent, your interview-based research preserves a vital tension between the social context of gaming and the computational form of a video game. I think that tension is valuable, but I wanted to ask you about what kinds of pressure, if any, our historical moment (or merely our research funding climate?) now exerts on game studies scholars to abandon that tension and return to the neo-formalism of some of the earliest studies in the field — perhaps now scaled up using software packages for data analysis and visualization, for example.

Adrienne Shaw: Well, even in qualitative research there is a fetishization of size and duration. Decades in the field or tens of thousands of respondents are neither necessary nor sufficient for good research findings, but they always do sound impressive when you report your findings. As I tell my methods students, it always comes back to what question you want to answer and how much data you need to make the claims you do. Sometimes those questions do take decades of field work and tens of thousands of survey respondents, but sometimes they don’t. I hope it is clear in the book that this was a modestly sized project and, if anything, should serve as a call to investigate all of these issues more deeply. It was meant to dig into personal experiences in a way that showed the multiplicity of game experiences rather than search for any universals. I approached category saturation pretty quickly, even with a relatively small number of interviewees, which in qualitative research is a good way to let you know that there is a there there. There are people who have used larger surveys to get at similar findings that I detail in the book and other work (e.g. connections to avatars or gamer identity), and for the field it is actually really valuable that both at the micro scale and the macro scale some of these things are consistent. More than the pressure to scale up studies, I think, there is pressure to move through these studies quickly. The thoughtfulness of the design process gets lost when people are just churning out grant proposals to get money to study the utility of new analytic tools. The thoughtfulness of the analysis process gets lost in the rhetoric that data is old if it is not published as soon as the study is over. I think it is important that, whatever the pressures to do bigger, faster projects, we scholars always remember to take the time to think through what we are trying to do and how we are going to do it, and to take the time to figure out if we’ve really found what we think we’ve found.

An interview with Andrew L. Russell

On January 29, 2015, Andrew L. Russell delivered a lecture for the DCMI speaker series in Critical Media and Digital Studies titled “The Open Internet: An Exploration in Network Archaeology.” This follow-up interview was conducted by Brian Lennon by email during February and March 2015. The interview covers topics including overly tidy Internet histories, the Internet’s autocratic origin, Internet history as an episode in the history of capitalism, Silicon Valley cyberlibertarianism, and the history and ideological function of terms like “open” and “neutrality.”

…by the mid-1990s, Internet advocates appropriated the keyword “open” from the ruins of OSI, and convinced Internet users and the general public that the Internet had been open all along. This appropriation cloaked the Internet’s autocratic origins — a deception that persists to this day.

Brian Lennon: In Open Standards and the Digital Age: History, Ideology, and Networks you write that “[i]n a strange plot twist, the autocratic Internet emerged as a symbol of open systems and an exemplar of a new style of open-systems standardization, as the [International Organization for Standardization (ISO)] project titled Open Systems Interconnection crumbled under the weight of its democratic structure and process” (232). The story you tell in the book itself suggests that this “plot twist” is not so strange after all, and maybe not really such a twist or swerve or detour, either, if one looks as closely as you have at the particulars of the social and social-institutional history of what we call the Internet. I read the phrase “strange plot twist” as a bit mischievous in that respect: it not only marks an apparent irony but suggests that we oughtn’t to be all that surprised. Am I off base here?

Andrew Russell: You’re quite right, and it’s an astute reading of that passage, although I don’t think I’m being overly mischievous. Most people active in computer networking in the 1980s expected Open Systems Interconnection to be the dominant design for their field for the foreseeable future. They also complained that the standardization process for OSI was a little slow, too bureaucratic, and so on — even though they knew that a slow pace was the necessary cost of the consensus-building process. Additionally, they were making real progress trying to develop and stabilize technologies that obviously were not (and still are not) ready to become a secure, robust, and reliable infrastructure for global communications and business transactions.

It’s important to keep in mind here that “openness” was more than a label or marketing term for OSI. “Openness” also was a strategy for assembling an alliance that could undermine powerful incumbents at IBM and the national telecom monopolies. Then, as now, the conventional wisdom was that openness, inclusion, and democratic participation were procedural virtues that would ensure just outcomes. And, in this case, the just outcome was to prevent the wonderful new possibilities of computer internetworking from being stifled by giant, undemocratic, self-interested institutions.

This vision of justice failed. To understand why, it’s important to highlight three aspects of the story — perhaps they are twists or swerves, perhaps not. First, if we take this failure on its own terms, I think it’s fair to call it a surprise. With OSI, democracy and openness failed during an historical moment when so many people believed they would succeed — indeed, during the same moment when even the Soviets embraced glasnost. Second, where a project (OSI) committed to democracy and openness failed, a closed and undemocratic institution succeeded: not IBM, not the telecom monopolies, but the American Department of Defense. The DoD, flush with taxpayer dollars, was able to design computer networking standards to its own specifications and needs, and then seed the market with these TCP/IP standards. Third, by the mid-1990s, Internet advocates appropriated the keyword “open” from the ruins of OSI, and convinced Internet users and the general public that the Internet had been open all along. This appropriation cloaked the Internet’s autocratic origins — a deception that persists to this day.

Without a doubt, this outcome surprised engineers who had been active in computer networking since the 1960s or 1970s. But your point is that in 2015 we should perhaps be less surprised. You might be right — especially about readers who have spent more time thinking about the creepy, sinister, and morally bankrupt activities that the Internet enables and that Internet-centric institutions (Google, NSA, Facebook, etc.) perpetuate. If readers already see the Internet as a mixed bag, and if they believe that our concepts of “openness” and “democracy” have deep problems, then my account of the Internet’s history might not surprise them.

I began my book in this way to demonstrate (rather than merely assert) an overarching point: partisans invoked the values of “openness” and “neutrality” well before the Internet (let alone “open source,” “open data,” etc.) came into existence. Seen in this light, squabbles over “net neutrality” and the “open Internet” are simply recent iterations of longstanding disputes about power and centralized control.

Brian Lennon: Why do you think conversations about concepts like “openness” (not only in “open standards” but also “open source,” “open access,” “open data,” and so on) and “neutrality” (for example, in “net neutrality”) can become so politically difficult, both in the historical period you deal with in Open Standards and the Digital Age and today? What makes these concepts, and the words attached to them, so volatile and so receptive to carrying such different values from one context or conversation to the next?

Andrew Russell: The terminology of “open” and “neutral” projects clear-cut moral judgments onto discussions that are more complex and ambiguous. That’s the point, of course: the people who introduced the language of openness and neutrality were politically sophisticated, and knew that they would benefit by recasting complex arguments into simple terms. I begin my book with a discussion of the American “Open Door” policy of the late 19th and early 20th centuries, in part to demonstrate that this discursive trick has a long history. The salient point there is that American diplomats used the language of openness to portray their geopolitical rivals in a negative light. It’s no coincidence that many of these diplomats identified as political “Progressives” (what kind of monster could be against Progress?). A cartoon from the turn of the century portrays Uncle Sam in front of a door that opens up to China. Uncle Sam is grinning and holding a key that says “American Diplomacy,” and he stands between the open door to China and a bunch of European imperialists — each about half as tall as Uncle Sam. I began my book in this way to demonstrate (rather than merely assert) an overarching point: partisans invoked the values of “openness” and “neutrality” well before the Internet (let alone “open source,” “open data,” etc.) came into existence. Seen in this light, squabbles over “net neutrality” and the “open Internet” are simply recent iterations of longstanding disputes about power and centralized control.

To answer your question more directly: conversations about “openness” and “neutrality” are volatile and politically difficult because someone has framed them to be difficult. This volatility, by design, diminishes the possibility of opposition, and therefore advances an exclusionary and divisive political vision.

Brian Lennon: A passage in Open Standards and the Digital Age that I find especially interesting appears in the conclusion to Chapter Five, titled “Critiques of Centralized Control, 1930s-1970s.” There, you mention the “fusion between the hacker critique of centralized control and a libertarian strain of individual freedom and empowerment” in the corporate cultures of Silicon Valley. “It would be oversimplifying matters,” you remark, “to reduce the critiques of centralized control that matured in the 1960s and 1970s to some sort of irresistible triumph of a populist or democratic control over technology” (157). You appear to feel that other historians (Paul Edwards, Ted Friedman, and Fred Turner are mentioned in this passage) have already accomplished a great deal in analyzing the issues here. And yet one might suggest that Silicon Valley “cyberlibertarianism” has never been as visible and influential in public life in the United States as it is today, and has been since the mid-2000s — and at the same time, or perhaps for just that reason, that its political character remains fundamentally both confusing and confused. I myself might say that as good as earlier research and analysis has been, far from having provided us with a broad and durable political understanding of Silicon Valley corporate culture, it has simply not had sufficient impact, and what Richard Barbrook and Andy Cameron called “the Californian ideology” is today quite literally out of control, emboldened by national political deadlock and indecision, to say nothing of outright fear and ignorance. Do you have any further thoughts on this?

Andrew Russell: Let me first make a few comments, and then circle back to “cyberlibertarianism.”

I’m glad you pointed out this passage, because this is one part of the book where I’m afraid I buried the lede. My goal was to push back against prevailing interpretations. On one extreme, a popular account of the Internet’s origins (and the origins of personal computing more generally) emphasizes some fusion of the California counterculture and a hands-on, entrepreneurial spirit that took material form in Californian garages. The heroes of this story are Stewart Brand, Steve Jobs, etc. On the other extreme, a competing account of computing and Internet history emphasizes the role of the American military — the “biggest angel of them all,” to borrow a phrase that my PhD advisor, Bill Leslie, used in his account of the origins of Silicon Valley. One could try to synthesize these competing views by featuring researchers or counterculture types who used defense money to do their work — indeed, this is a compelling sub-theme of Hafner and Lyon’s book Where Wizards Stay Up Late. Such a synthesis would recapitulate a major point of contention in the historiography of Cold War science and technology, namely: who’s using whom? Did military funds seduce graduate students and scientists into doing the work of the American military-industrial-academic complex? Or, did these students and scientists trick the American military into supporting projects that the scientists wanted to do anyway?

My view — and here again I have buried the lede — is that Internet history is fundamentally an episode in the history of capitalism. My story is not a story about the triumph of the commons, or of the free market; it’s a story about industrial capitalism in the United States and the world. In this sense, it’s no coincidence that my book is published in a series called “Cambridge Studies of the Emergence of Global Enterprise.”

These are important questions, and one’s answers might depend on whether she or he sees networked computing as a technology of freedom or as the foundation of a digital surveillance society. However, the point I make at the end of Chapter 5 is that this debate, interesting as it is, misses something essential: the familiar dynamics of supply, demand, and competition in American business were the key drivers of computer networking and internetworking in the postwar era. Digital networking and internetworking matured because there was market demand, and there were industrial firms such as IBM, AT&T, Digital, and Honeywell that mobilized the organizational capabilities to meet that demand. The decisive critiques of centralized control came not from California hippies, but from engineers and regulators who sought to inject competition and entrepreneurship into markets that had been dominated by IBM and AT&T. My view — and here again I have buried the lede — is that Internet history is fundamentally an episode in the history of capitalism. My story is not a story about the triumph of the commons, or of the free market; it’s a story about industrial capitalism in the United States and the world. In this sense, it’s no coincidence that my book is published in a series called “Cambridge Studies of the Emergence of Global Enterprise.”

As you say: “cyberlibertarianism” continues to have deep influence, both within and beyond Silicon Valley, despite the work of many scholars who have pointed out its contradictions and shortcomings (David Golumbia most cogently; Astra Taylor most productively; Evgeny Morozov most bombastically; Andrew Keen most recently). Like you, I’m disappointed their critiques have not made a bigger impact in public opinion or political discourse — although I can’t say I’m too surprised, since measured scholarship rarely disrupts irrational ideologues.

I would point out two contributions that I make to the work of these critics of cyberlibertarianism. First, my account makes it clear that governments played a variety of roles — military funding, antitrust prosecution, direct and indirect support for standardization — in the creation and development of markets for digital computers and networks. One can’t write a history of capitalism that omits government, as scholars such as Mariana Mazzucato continue to demonstrate. Second, I find no historical support for the cyberlibertarian notion that innovation and entrepreneurship will cure all societal ills. In many cases, the best results come from large organizations that move slowly and cautiously, which can allow them to develop expertise in sophisticated realms and to incorporate a broad spectrum of interests. This latter point also has been emphasized by Richard John, whose book Network Nation is a masterful history of the political economy of American telegraphy and telephony.

Brian Lennon: In your abstract for the lecture you recently delivered here at Penn State, you observed that “[h]istorians of technology know better than to accept […] tidy narratives, but so far they have failed to convince scholars, policymakers, and the public to see nuance and contingency (rather than the bold march of progress) in the Internet’s history.” Am I right to think that scholars are included in that category for a reason? How might scholars be as susceptible as policymakers and the public to the comforts of “tidy narratives” of progress?

Andrew Russell: The fundamental problem — and I don’t mean to be glib here — is that it’s very difficult for non-expert audiences to grasp the subtle details of complicated phenomena. When I tell people that I study the history of the Internet, they often say things like “can something so recent have a history?” or “you mean that thing that Al Gore invented?” If this frustrates me, I can only imagine how ecologists feel when they see that large swaths of the public believe that climate change is a hoax, or how immunologists feel when they meet parents who think they shouldn’t vaccinate their kids.

So while this isn’t a problem specific to historians like me, there is certainly a long track record in the history of science, medicine, and technology where simple and heroic narratives have greater traction than accounts that emphasize complexity and moral ambiguity. (By the way, the core problem here is similar to the problem that John Oliver attacked in his rant about infrastructure, which is annoying in places but culminates brilliantly with a trailer for an imaginary blockbuster film called Infrastructure.)

I included “scholars” alongside policymakers and the public in the abstract for my talk at Penn State because I have seen many cases where law professors and social scientists (among others) propagate an oversimplified version of the Internet’s history. In a way, I don’t blame them since there aren’t many nuanced accounts of this history available. At the same time, however, we should and do expect more from scholars — and I have faint hopes that my book will be a resource for those who sense that there’s something unsatisfying about the version of Internet history relayed by, say, Walter Isaacson’s The Innovators.

A related problem is more specific to the Internet: its history is difficult to summarize in a succinct or linear narrative. In fact, my own view is that the phrase “the history of the Internet” contains a category error; we would do better to talk about multiple “histories of networking.” Very few individuals within these histories — Baran, Licklider, Cerf, Kahn, Pouzin, Berners-Lee — are household names.

I can’t say that I’ve made much of an effort to popularize my interpretation. Instead, my bias has been to make my case legible to my fellow specialists in academia, and to remain faithful to the source material that I have used. In a way, this is the real problem: the topic is an acronym-filled minefield. For readers to recite my interpretation of Internet history accurately, they would need to know the meaning of and interactions between ARPANET, RFC, INWG, TCP, ISO-OSI, TCP/IP, DCA, NSF, IAB, IETF, CERN, and W3C. Compare this to Steven Johnson’s feel-good summary, published in The New York Times Magazine: “The Internet? We Built That.” Can I really blame anyone for preferring Johnson’s interpretation to mine?