On January 29, 2015, Andrew L. Russell delivered a lecture for the DCMI speaker series in Critical Media and Digital Studies titled “The Open Internet: An Exploration in Network Archaeology.” This follow-up interview was conducted by Brian Lennon by email during February and March 2015. The interview covers topics including overly tidy Internet histories, the Internet’s autocratic origin, Internet history as an episode in the history of capitalism, Silicon Valley cyberlibertarianism, and the history and ideological function of terms like “open” and “neutrality.”
Brian Lennon: In Open Standards and the Digital Age: History, Ideology, and Networks you write that “[i]n a strange plot twist, the autocratic Internet emerged as a symbol of open systems and an exemplar of a new style of open-systems standardization, as the [International Organization for Standardization (ISO)] project titled Open Systems Interconnection crumbled under the weight of its democratic structure and process” (232). The story you tell in the book itself suggests that this “plot twist” is not so strange after all, and maybe not really such a twist or swerve or detour, either, if one looks as closely as you have at the particulars of the social and social-institutional history of what we call the Internet. I read the phrase “strange plot twist” as a bit mischievous in that respect: it not only marks an apparent irony but suggests that we oughtn’t to be all that surprised. Am I off base here?
Andrew Russell: You’re quite right, and it’s an astute reading of that passage, although I don’t think I’m being overly mischievous. Most people active in computer networking in the 1980s expected Open Systems Interconnection to be the dominant design for their field for the foreseeable future. They also complained that the standardization process for OSI was a little slow, too bureaucratic, and so on — even though they knew that a slow pace was the necessary cost of the consensus-building process. They were also making real progress in developing and stabilizing technologies that obviously were not (and still are not) ready to serve as a secure, robust, and reliable infrastructure for global communications and business transactions.
It’s important to keep in mind here that “openness” was more than a label or marketing term for OSI. “Openness” also was a strategy for assembling an alliance that could undermine powerful incumbents at IBM and the national telecom monopolies. Then, as now, the conventional wisdom held that openness, inclusion, and democratic participation are procedural virtues that ensure just outcomes. And, in this case, the just outcome was to prevent the wonderful new possibilities of computer internetworking from being stifled by giant, undemocratic, self-interested institutions.
This vision of justice failed. To understand why, it’s important to highlight three aspects of the story — perhaps they are twists or swerves, perhaps not. First, if we take this failure on its own terms, I think it’s fair to call it a surprise. With OSI, democracy and openness failed during an historical moment when so many people believed they would succeed — indeed, during the same moment when even the Soviets embraced glasnost. Second, where a project (OSI) committed to democracy and openness failed, a closed and undemocratic institution succeeded: not IBM, not the telecom monopolies, but the American Department of Defense. The DoD, flush with taxpayer dollars, was able to design computer networking standards to its own specifications and needs, and then seed the market with these TCP/IP standards. Third, by the mid-1990s, Internet advocates appropriated the keyword “open” from the ruins of OSI, and convinced Internet users and the general public that the Internet had been open all along. This appropriation cloaked the Internet’s autocratic origins — a deception that persists to this day.
Without a doubt, this outcome surprised engineers who had been active in computer networking since the 1960s or 1970s. But your point is that in 2015 we should perhaps be less surprised. You might be right — especially about readers who have spent more time thinking about the creepy, sinister, and morally bankrupt activities that the Internet enables and that Internet-centric institutions (Google, NSA, Facebook, etc.) perpetuate. If readers already see the Internet as a mixed bag, and if they believe that our concepts of “openness” and “democracy” have deep problems, then my account of the Internet’s history might not surprise them.
Brian Lennon: Why do you think conversations about concepts like “openness” (not only in “open standards” but also “open source,” “open access,” “open data,” and so on) and “neutrality” (for example, in “net neutrality”) can become so politically difficult, both in the historical period you deal with in Open Standards and the Digital Age and today? What makes these concepts, and the words attached to them, so volatile and so receptive to carrying such different values from one context or conversation to the next?
Andrew Russell: The terminology of “open” and “neutral” projects clear-cut moral judgments onto discussions that are more complex and ambiguous. That’s the point, of course: the people who introduced the language of openness and neutrality were politically sophisticated, and knew that they would benefit by recasting complex arguments into simple terms. I begin my book with a discussion of the American “Open Door” policy of the late 19th and early 20th centuries, in part to demonstrate that this discursive trick has a long history. The salient point there is that American diplomats used the language of openness to portray their geopolitical rivals in a negative light. It’s no coincidence that many of these diplomats identified as political “Progressives” (what kind of monster could be against Progress?). A cartoon from the turn of the century portrays Uncle Sam in front of a door that opens up to China. Uncle Sam is grinning and holding a key that says “American Diplomacy,” and he stands between the open door to China and a bunch of European imperialists — each about half as tall as Uncle Sam. I began my book in this way to demonstrate (rather than merely assert) an overarching point: partisans invoked the values of “openness” and “neutrality” well before the Internet (let alone “open source,” “open data,” etc.) came into existence. Seen in this light, squabbles over “net neutrality” and the “open Internet” are simply recent iterations of longstanding disputes about power and centralized control.
To answer your question more directly: conversations about “openness” and “neutrality” are volatile and politically difficult because someone has framed them to be difficult. This volatility, by design, diminishes the possibility of opposition, and therefore advances an exclusionary and divisive political vision.
Brian Lennon: A passage in Open Standards and the Digital Age that I find especially interesting appears in the conclusion to Chapter Five, titled “Critiques of Centralized Control, 1930s-1970s.” There, you mention the “fusion between the hacker critique of centralized control and a libertarian strain of individual freedom and empowerment” in the corporate cultures of Silicon Valley. “It would be oversimplifying matters,” you remark, “to reduce the critiques of centralized control that matured in the 1960s and 1970s to some sort of irresistible triumph of a populist or democratic control over technology” (157). You appear to feel that other historians (Paul Edwards, Ted Friedman, and Fred Turner are mentioned in this passage) have already accomplished a great deal in analyzing the issues here. And yet one might suggest that Silicon Valley “cyberlibertarianism” has never been as visible and influential in public life in the United States as it is today, and has been since the mid-2000s — and at the same time, or perhaps for just that reason, that its political character remains fundamentally both confusing and confused. I myself might say that as good as earlier research and analysis has been, far from having provided us with a broad and durable political understanding of Silicon Valley corporate culture, it has simply not had sufficient impact, and what Richard Barbrook and Andy Cameron called “the Californian ideology” is today quite literally out of control, emboldened by national political deadlock and indecision, to say nothing of outright fear and ignorance. Do you have any further thoughts on this?
Andrew Russell: Let me first make a few comments, and then circle back to “cyberlibertarianism.”
I’m glad you pointed out this passage, because this is one part of the book where I’m afraid I buried the lede. My goal was to push back against prevailing interpretations. On one extreme, a popular account of the Internet’s origins (and the origins of personal computing more generally) emphasizes some fusion of the California counterculture and a hands-on, entrepreneurial spirit that took material form in Californian garages. The heroes of this story are Stewart Brand, Steve Jobs, etc. On the other extreme, a competing account of computing and Internet history emphasizes the role of the American military — the “biggest angel of them all,” to borrow a phrase that my PhD advisor, Bill Leslie, used in his account of the origins of Silicon Valley. One could try to synthesize these competing views by featuring researchers or counterculture types who used defense money to do their work — indeed, this is a compelling sub-theme of Hafner and Lyon’s book Where Wizards Stay Up Late. Such a synthesis would recapitulate a major point of contention in the historiography of Cold War science and technology, namely: who’s using whom? Did military funds seduce graduate students and scientists into doing the work of the American military-industrial-academic complex? Or, did these students and scientists trick the American military into supporting projects that the scientists wanted to do anyway?
These are important questions, and one’s answers might depend on whether she or he sees networked computing as a technology of freedom or as the foundation of a digital surveillance society. However, the point I make at the end of Chapter 5 is that this debate, interesting as it is, misses an essential point: the familiar dynamics of supply, demand, and competition in American business were the key drivers of computer networking and internetworking in the postwar era. Digital networking and internetworking matured because there was market demand, and there were industrial firms such as IBM, AT&T, Digital, Honeywell, and so on that mobilized the organizational capabilities to meet that demand. The decisive critiques of centralized control came not from California hippies, but from engineers and regulators who sought to inject competition and entrepreneurship into markets that had been dominated by IBM and AT&T. My view — and here again I have buried the lede — is that Internet history is fundamentally an episode in the history of capitalism. My story is not a story about the triumph of the commons, or of the free market; it’s a story about industrial capitalism in the United States and the world. In this sense, it’s no coincidence that my book is published in a series called “Cambridge Studies in the Emergence of Global Enterprise.”
As you say: “cyberlibertarianism” continues to have deep influence, both within and beyond Silicon Valley, despite the work of many scholars who have pointed out its contradictions and shortcomings (David Golumbia most cogently; Astra Taylor most productively; Evgeny Morozov most bombastically; Andrew Keen most recently). Like you, I’m disappointed that their critiques have not made a bigger impact on public opinion or political discourse — although I can’t say I’m too surprised, since measured scholarship rarely disrupts irrational ideologues.
I would point out two contributions that I make to the work of these critics of cyberlibertarianism. First, my account makes it clear that governments played a variety of roles — military funding, antitrust prosecution, direct and indirect support for standardization — in the creation and development of markets for digital computers and networks. One can’t write a history of capitalism that omits government, as scholars such as Mariana Mazzucato continue to demonstrate. Second, I find no historical support for the cyberlibertarian notion that innovation and entrepreneurship will cure all societal ills. In many cases, the best results come from large organizations that move slowly and cautiously, which can allow them to develop expertise in sophisticated realms and to incorporate a broad spectrum of interests. This latter point has also been emphasized by Richard John, whose book Network Nation is a masterful history of the political economy of American telegraphy and telephony.
Brian Lennon: In your abstract for the lecture you recently delivered here at Penn State, you observed that “[h]istorians of technology know better than to accept […] tidy narratives, but so far they have failed to convince scholars, policymakers, and the public to see nuance and contingency (rather than the bold march of progress) in the Internet’s history.” Am I right to think that scholars are included in that category for a reason? How might scholars be as susceptible as policymakers and the public to the comforts of “tidy narratives” of progress?
Andrew Russell: The fundamental problem — and I don’t mean to be glib here — is that it’s very difficult for non-expert audiences to grasp the subtle details of complicated phenomena. When I tell people that I study the history of the Internet, they often say things like “can something so recent have a history?” or “you mean that thing that Al Gore invented?” If this frustrates me, I can only imagine how ecologists feel when they see that large swaths of the public believe that climate change is a hoax, or how immunologists feel when they meet parents who think they shouldn’t vaccinate their kids.
So while this isn’t a problem specific to historians like me, there is certainly a long track record in the history of science, medicine, and technology where simple and heroic narratives have greater traction than accounts that emphasize complexity and moral ambiguity. (By the way, the core problem here is similar to the problem that John Oliver attacked in his rant about infrastructure, which is annoying in places but culminates brilliantly with a trailer for an imaginary blockbuster film called Infrastructure.)
I included “scholars” alongside policymakers and the public in the abstract for my talk at Penn State because I have seen many cases where law professors and social scientists (among others) propagate an oversimplified version of the Internet’s history. In a way, I don’t blame them since there aren’t many nuanced accounts of this history available. At the same time, however, we should and do expect more from scholars — and I have faint hopes that my book will be a resource for those who sense that there’s something unsatisfying about the version of Internet history relayed by, say, Walter Isaacson’s The Innovators.
There is a related problem that is more specific to the Internet, which is that its history is difficult to summarize in a succinct or linear narrative. In fact, my own view is that the phrase “the history of the Internet” contains a category error; we would do better to talk about multiple “histories of networking.” Very few individuals within these histories — Baran, Licklider, Cerf, Kahn, Pouzin, Berners-Lee — are household names.
I can’t say that I’ve made much of an effort to popularize my interpretation. Instead, my bias has been to make my case legible to my fellow specialists in academia, and to remain faithful to the source material that I have used. In a way, this is the real problem: the topic is a minefield of acronyms. To retell my interpretation of Internet history accurately, readers would need to know the meaning of and interactions between ARPANET, RFC, INWG, TCP, ISO-OSI, TCP/IP, DCA, NSF, IAB, IETF, CERN, and W3C. Compare this to Steven Johnson’s feel-good summary, published in The New York Times Magazine: “The Internet? We Built That.” Can I really blame anyone for preferring Johnson’s interpretation to mine?