
What we talk about when we talk about cybersecurity: security in internet governance debates


This paper is part of 'Doing internet governance: practices, controversies, infrastructures, and institutions', a Special issue of the Internet Policy Review.

Introduction

"Despite widespread use of 'security' by scholars and politicians during the last forty years, not much attention has been devoted to explicating the concept," Baldwin (1997) argues in his discussion of security as an ambiguous and inadequately explored idea. While the problem of “security” being insufficiently explicated may seem largely academic and theoretical, the lack of clarity surrounding this term has become of significant and immediate practical importance among the participants vying for control in the multistakeholder forums of internet governance. This paper explores how the ambiguous nature of security, discussed and debated in the literature of security studies, is amplified and enacted in current discussions of online security due to the multistakeholder model of internet governance.

Security has been a recurring theme in the ongoing debates about internet governance, especially as a tool for national governments seeking to claim greater authority in the multistakeholder system. In preparation for the December 2012 World Conference on International Telecommunications (WCIT), for instance, several nations submitted proposals to revise the International Telecommunication Regulations (ITRs) treaty to include more language about security. These changes were intended to broaden the treaty's scope and, accordingly, to expand the purview of the International Telecommunication Union (ITU), the United Nations agency that convened the WCIT, to include issues related to internet security. The previous version of the ITRs, negotiated and signed in 1988, made no mention of telecommunications security, but the most recently revised ITRs, signed by 89 nations at the 2012 WCIT, make several references to security, including a new article on the "security and robustness of networks" (International Telecommunication Regulations, 2012).

The broad language about security used in the ITRs does not clarify what it would mean to ensure the security and robustness of networks, much less how governments ought to go about doing this. This confusion is not unique to international governance bodies; many actors, from private firms to individual government agencies, define their roles in contributing to online security in far-reaching and ambiguous terms. In the evolving and contested internet governance landscape, however, the ambiguity and conflation of security issues is especially striking: participants at internet governance meetings are generally willing to agree that improving security is an important goal for the internet, yet these conversations rarely yield much consensus about how to achieve that outcome. This is not a new phenomenon; Wolfers (1952) points out that "The term 'security' covers a range of goals so wide that highly divergent policies can be interpreted as policies of security". But the broad diversity of participant groups that the multistakeholder model specifically seeks to foster can exacerbate this problem. Having many different stakeholders at the table, each with their own goals and notions of security, can produce even greater divergence around ideas of security than in more traditional governance models.

This paper considers conflicting constructions of security by stakeholders in three internet governance controversies: the proposals to the ITU, addressed at the 2012 WCIT, to enable states to restrict how their internet traffic is routed; the debate within the Internet Corporation for Assigned Names and Numbers (ICANN) over the creation of dotless domains; and the ICANN discussions of revising the WHOIS policy governing the privacy of domain registration information. For each of these cases, we explore how the underlying controversies were cast as "security" issues by parties on all sides of the debates, and how each stakeholder group's perspective on what constituted a secure internet shaped its use of security-related rhetoric. Finally, we discuss how these conflicting notions of security continue to shape emerging controversies in the internet governance space, and how they abstract away some of the sharpest differences of opinion between stakeholder groups by conflating very different definitions of security into a single, shared vocabulary that represents incompatible visions of what a more secure internet should look like.

Section 1: Definitional issues in security studies and information security

The challenges associated with defining security predate computers and discussions of cybersecurity. The field of security studies has long engaged with related questions, dating back to Wolfers' (1952) work on the nature of national security as an "ambiguous symbol". He argued, "the term 'security' covers a range of goals so wide that highly divergent policies can be interpreted as policies of security". Others in the field have suggested that the notion of security may be an "essentially contested concept", an idea "so value-laden that no amount of argument or evidence can ever lead to agreement on a single version as the correct or standard use" (Baldwin, 1997). Still others have traced a gradual broadening in the definitions of security over time to include greater emphasis on individuals, private organisations, and international systems, not just nation states (Rothschild, 1995).

As the ongoing discussions around these issues would suggest, there are multiple definitions of national security even within the field of security studies. For instance, Wolfers (1952) defines a nation as secure "to the extent to which it is not in danger of having to sacrifice core values, if it wishes to avoid war, and is able, if challenged, to maintain them by victory in such a war". Ullman (1983), in an effort to provide a definition that also covers the potential for non-military threats to security, such as natural disasters, offers a variation on Wolfers’ definition in which a threat to national security is defined more generally as:

An action or sequence of events that (1) threatens drastically and over a relatively brief span of time to degrade the quality of life for the inhabitants of a state, or (2) threatens significantly to narrow the range of policy choices available to the government of a state or to private, nongovernmental entities (persons, groups, corporations) within the state.

The common thread in these definitions is the ability to maintain the status quo of a nation’s government, values, and population.

Definitions of security drawn from the field of computer science and information security echo some of this same emphasis on maintaining the status quo (in a technical system, rather than a nation state) but in a notably different manner. The best-known and most widespread framework for information security is the "CIA triad", the notion that a network is secure when the confidentiality, integrity, and availability of its information are assured. This definition is commonly used in technical contexts, such as ISO 17799, an information security management standard published by the International Organization for Standardization (ISO), and the United States National Institute of Standards and Technology (NIST) Special Publication 800-53, Security and Privacy Controls for Federal Information Systems and Organizations, to define the high-level goals of information security. However, it has also been criticised as incomplete, and other concepts, including authentication, non-repudiation, and control, are sometimes added to the initial list of three (Beautement & Pym, 2010). For instance, the ISO and International Electrotechnical Commission (IEC) publication 7498-2, "Information processing systems—Open Systems Interconnection—Basic Reference Model—Part 2: Security Architecture", defines the crucial elements of security for information processing systems as identification and authentication, access control, data integrity, data confidentiality, data availability, auditability, and non-repudiation. Parker (1998) also expands on the CIA triad by adding utility, authenticity, and possession to his definition of a secure computer system. Another influential definition of security among technical stakeholders and engineers holds that a computer is secure only "if you can depend on it and its software to behave as you expect" (Garfinkel & Spafford, 2003). This definition lends itself to an interpretation of security in which technical mechanisms are not manipulated or interfered with in unpredictable ways, but it says nothing about the social consequences or harms that might result from security lapses, or even about which specific characteristics (confidentiality, integrity, and availability, for instance) a computer system should be expected to exhibit.
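To make the contrast concrete, the technical framing treats security as a conjunction of positive properties that can be checked one by one. The following minimal sketch (in Python, with invented names; an illustration of the CIA framing rather than a reference implementation of any standard) expresses an assessment in those terms:

```python
from dataclasses import dataclass

@dataclass
class SystemAssessment:
    """Hypothetical assessment of one system against the CIA triad."""
    confidentiality: bool  # information is protected from unauthorised disclosure
    integrity: bool        # information is protected from unauthorised modification
    availability: bool     # information is reachable by authorised users when needed

def secure_under_cia(a: SystemAssessment) -> bool:
    # Under the CIA triad, "secure" is the conjunction of all three properties;
    # extended frameworks (Parker, ISO/IEC 7498-2) simply add further conjuncts
    # such as authenticity, non-repudiation, or possession.
    return a.confidentiality and a.integrity and a.availability

print(secure_under_cia(SystemAssessment(True, True, False)))  # False: availability fails
```

Nothing comparable exists for the national security definitions discussed above, which is precisely the gap between the two vocabularies: one names checkable properties of a bounded system, the other names threats to values and ways of life.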

The disagreements within the security studies community and the technical community about the appropriate definitions of security are not insignificant, but they pale in comparison to the differences across these communities. Definitions of national security, like those proposed by Wolfers and Ullman, emphasise the ability to resist change. Technical definitions of information security focus instead on the positive attributes a computer system must exhibit to be considered secure, rather than on the absence of threats or dramatic changes. The specificity of technical definitions of security also distinguishes them from definitions of national security, in part because information security definitions have a much smaller scope: they are confined by the boundaries of a computer system, rather than the boundaries of a nation. Cavelty (2010) points out that the apparent parallels between definitions of national security and information security can be misleading, writing, "The terminology in information security is often seemingly congruent with the terminology in national security discourses: it is about threats, agents, vulnerabilities, etc. However, the terms have very specific meanings so that seemingly clear analogies must be used with care". Nowhere is this care more critical than in the multistakeholder forums that bring together stakeholders who espouse each of these very different views of security. In the context of these organisations, the definitional differences around security are not just theoretical: they lead to very concrete disputes as distinct definitions collide in a single forum.

Section 2: Security and the multistakeholder model

Existing literature has closely examined the "multistakeholder" governance model of internet governance forums such as ICANN and the IGF (Mueller, 2010; DeNardis, 2014). One consequence of the multistakeholder models espoused by these organisations is that each stakeholder group has its own distinct ideas and perspectives on how the internet should function and what desirable outcomes for its future would be. There is perhaps no area of governance where these views diverge more starkly than the realm of security. Notably, stakeholder groups involved in internet governance, including government officials, representatives of private industry, and members of civil society, do not just disagree on what steps should be taken to help secure the internet; they also disagree on what it would mean to have a secure internet in the first place.

Private industry stakeholders, many of whom represent technical firms, tend to hold a view of security closest to that of the technical computer science definition, seeking to build systems that operate as expected, with strong protections for confidentiality, integrity, and availability. For government representatives and political stakeholders, the scope of the system they aim to control (and protect) is much broader, so security is less likely to be focused on whether computers behave as expected and more likely to mean protection of a country’s core values and status quo. For these stakeholders, a secure internet or computer is, correspondingly, more likely to be one that cannot easily be used to cause harm to people or governments. For many government stakeholders, definitional differences and nuances are disappearing as the notion of internet security is increasingly used as a proxy for national security (DeNardis, 2014). Meanwhile, civil society representatives, and especially political activists, engaged in internet governance forums often present their concerns about security as issues of personal and individual security, tied to anonymity and privacy protections, rather than national or technical security. Their notion of a secure internet is one in which it is difficult for governments—or corporations, or indeed, anyone—to identify online users’ real identities.

Even within these stakeholder-specific definitions, not all members of a given stakeholder group agree about what constitutes a secure internet or how it is best achieved. National governments hold different views, for example, on whether empowering a government to shut down internet connectivity within its borders, in emergency circumstances, would provide more or less security to their citizens. These differences speak to the strong political implications of the definition of security adopted by individual stakeholders, and to the extent to which the security actions a group or government would most like to see taken often give rise to the definition it promotes, rather than the other way around.

Because consensus building is such a crucial component of multistakeholder internet governance processes, however, these differences of opinion are largely hidden through broad language about "security" that effectively abstracts away the concrete controversies underlying the general principles. Wolfers (1952) identifies this phenomenon in the field of security more generally, writing that, "while appearing to offer guidance and a basis for broad consensus [notions like national security] may be permitting everyone to label whatever policy he favors with an attractive and possibly deceptive name". For example, Article 5A of the revised ITRs states that "Member States shall individually and collectively endeavour to ensure the security and robustness of international telecommunication networks". This language appears to foster cooperation among the member nations of the ITU by articulating a principle most governments feel comfortable affirming, but it does so not by disambiguating the ideas of network security and robustness, but by abstracting them, leaving sufficient ambiguity for every signatory to interpret those words according to its own opinions and priorities. Thus, security facilitates superficial cooperation among different stakeholders, up to a point, without forcing them to confront the profound differences of opinion underlying their interpretations of what a secure internet would look like, who and what it would be secure from, and who and what it would be secure for.

The security studies literature makes clear that using ambiguous definitions of security to foster superficial consensus is neither new nor unique to online security and internet governance. However, the multistakeholder model of governance is particularly susceptible to these problems, given its emphasis on bringing together representatives of different segments of society and on consensus-building processes. In this context, participants often start out with very different views on issues and are then encouraged, even pressured, to find common ground on controversial issues, leading to considerable variation in how they frame and define those issues (Epstein, Ross, & Baumer, 2014). Mueller (2010) notes that "in internet governance, the term security now encompasses a host of problems, perhaps too many to fit properly under one word."

Section 3: Methodology and case selection

To elucidate how the multistakeholder model exacerbates definitional differences in the meaning of security across communities, this paper presents three case studies in chronological order: proposals to enable greater national control of internet routing at the WCIT in 2012, Google's proposal to ICANN to create a dotless search domain in 2013, and proposals to alter WHOIS database policies, debated at ICANN in 2015. Each case is analysed through the lens of how participants characterise the central issues as security concerns in the documents, proposals, and statements they filed in relation to the case. This analysis of individual cases, based on close reading of formal documents, touches on only a small portion of the security debates in the internet governance arena and is intended to offer a starting point for further, more thorough discussion of and research into these issues.

The corpus of documents analysed included working drafts of policy statements produced by these multistakeholder forums, formal written comments addressed to these forums in response to policy proposals, and finalised statements of policy that resulted from these deliberations. Since each governance group has a different process for producing policy and soliciting comment, access to these documents varied; 18 documents were analysed for the WCIT case, nine for the dotless domains case, and 13 for the WHOIS case. ICANN makes all policies and comments publicly available on its website, while the ITU operates in a more closed fashion, publishing only the final, signed version of the ITRs. However, there were sufficient leaks of draft proposals, public commentary, and responses in the lead-up to the 2012 WCIT that it was still possible to assemble a significant corpus of documents. References to security in these documents were analysed and coded according to which stakeholders they indicated were being secured and from which types of threats those stakeholders would be protected.
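To illustrate the coding procedure, the sketch below reconstructs it as a simple tagging scheme; the field names and example entries are hypothetical reconstructions for exposition, not the codebook actually used in this study.

```python
# Hypothetical reconstruction of the document-coding scheme described above:
# each reference to security is tagged with a referent (who is being secured)
# and a threat (what they are being secured from).
from collections import Counter

codings = [
    {"case": "WCIT routing",    "referent": "nation state",     "threat": "foreign interception"},
    {"case": "WCIT routing",    "referent": "individual user",  "threat": "state surveillance"},
    {"case": "dotless domains", "referent": "technical system", "threat": "unexpected behaviour"},
    {"case": "WHOIS",           "referent": "individual user",  "threat": "harassment"},
    {"case": "WHOIS",           "referent": "general public",   "threat": "fraud"},
]

# Tally which referents each case's security rhetoric claims to protect.
tally = Counter((c["case"], c["referent"]) for c in codings)
for (case, referent), n in sorted(tally.items()):
    print(f"{case:16s} {referent:16s} {n}")
```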

By focusing on cases that centre on very specific, concrete changes to existing internet infrastructure, this analysis aims to ground the broad, often vague discussions of security that are common in internet governance forums in the clearer, actionable proposals that force stakeholders to confront their definitional differences, giving rise to real disagreements. The cases were selected to highlight the clashes in opinions about security across the three primary groups of stakeholders involved in internet governance: governments, private industry, and civil society. Each case centres on a conflict between two of those groups: the internet routing case highlights differences of opinion between government representatives and private industry, the dotless domain case illustrates differences between private industry and civil society in conceptions of security, and the WHOIS database proposal is a case in which government ideas about national security conflict with civil society's conceptions of individual security. The ITU, unlike ICANN, is not a multistakeholder forum, in the sense that only government delegations were permitted to vote at the WCIT. It is nevertheless used here as an instance of private industry conflicting with national governments because the delegations from governments opposing the routing proposal were heavily populated by industry representatives who, in many cases, led the opposition to these proposals. For instance, the US delegation to the WCIT consisted of 95 people, 60 of whom came from private industry and other non-governmental organisations, including Amazon, AT&T, Cisco, Facebook, Google, Microsoft, and Verizon. In fact, the fights at the WCIT around security stemmed in part from the decision by such nations to model their national delegations as miniature multistakeholder forums, even in the context of an explicitly governmental organisation.

Section 4: Internet traffic routing proposals at WCIT

The ITU, which convened the WCIT, is not a multistakeholder body, nor has it traditionally considered security to be within the scope of its mission to "enable the growth and sustained development of telecommunications and information networks, and to facilitate universal access so that people everywhere can participate in, and benefit from, the emerging information society and global economy". However, as a governmental governance organisation, it is perhaps not surprising that the ITU chose security as one of the linchpins of its effort to claim authority over internet governance issues. Ensuring people's security and protection from harm is typically the domain of governments, and it may be appropriate and advantageous for governments to intervene in areas where market forces do not appear to have brought about adequate levels of protection from malicious actors for internet users (Charney, 2002). As Mueller (2010) puts it, "Security more often than not is associated with efforts to reassert hierarchy and control. If anything can reanimate the desire for the nation-state, for traditional government, surely it is the demand for security." Framing internet governance issues as security matters is therefore strategically useful for government actors seeking to assert greater control over multistakeholder governance processes.

But while most stakeholders might be willing to concede the role of governments in ensuring security in the abstract, government stakeholders' notions of security often clash with those of other stakeholders involved in the internet governance process. Perhaps nowhere was this tension more apparent than in the months leading up to the WCIT, when the Arab states regional group submitted a proposal to amend the ITRs to include an article stating that "A Member State shall have the right to know through where its traffic has been routed, and should have the right to impose any routing regulations in this regard, for purposes of security and countering fraud" (Llansó, 2012). The proposal reportedly stemmed from concerns on the part of Arab states that their online traffic might be routed through Israel, thereby facilitating espionage efforts (Mueller, 2012). It was a proposal driven, in other words, by concerns about national security and the protection of nation states' communications from interception by their foreign enemies. In the context of the Arab states' proposal on routing, however, government stakeholders' emphasis on national security priorities clashed with technical design features of the internet that were considered critical by technical and network operator stakeholders. Specifically, the requirement to inform nations of where their internet traffic was being routed and to restrict routing paths would have required significant alterations to the existing internet infrastructure.

The proposal was criticised for its technical ramifications, with non-government stakeholders expressing concern: "If the Arab States proposal were applied to all Internet communications, the requirement that countries be able to 'know' how every IP packet is routed to its destination would necessitate extensive network engineering changes, not only creating huge new costs, but also threatening the performance benefits and network efficiency of the current system" (Llansó, 2012). The government and industry stakeholders' views on security came into conflict here precisely because fulfilling the Arab states' vision of a secure internet, in which they could control the countries their packets flowed through, would have required implementing exactly the kind of network behaviour that technicians and operators would deem unexpected and insecure: packets respecting national borders rather than being routed along the most efficient or least congested pathways. Additionally, of course, the proposal would likely have been hugely expensive and time-consuming for industry operators to implement.
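The engineering objection can be made concrete with a toy routing model. In the sketch below, the topology, country assignments and link costs are invented for illustration; the point is only that constraining paths to avoid transit through a given country either leaves the best route unchanged, forces a costlier detour, or disconnects the endpoints entirely.

```python
import heapq

# Toy topology: node -> (country, {neighbour: link cost}). Invented for illustration.
NET = {
    "A": ("X", {"B": 1, "C": 4}),
    "B": ("Y", {"A": 1, "C": 1, "D": 5}),
    "C": ("Z", {"A": 4, "B": 1, "D": 1}),
    "D": ("X", {"B": 5, "C": 1}),
}

def cheapest_path(src, dst, banned_country=None):
    """Dijkstra's algorithm, optionally refusing transit through one country."""
    heap, seen = [(0, src, [src])], set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, weight in NET[node][1].items():
            if banned_country and NET[nbr][0] == banned_country and nbr != dst:
                continue  # skip any intermediate hop located in the banned country
            heapq.heappush(heap, (cost + weight, nbr, path + [nbr]))
    return None  # endpoints disconnected under the constraint

print(cheapest_path("A", "D"))                      # (3, ['A', 'B', 'C', 'D'])
print(cheapest_path("A", "D", banned_country="Z"))  # (6, ['A', 'B', 'D']): costlier detour
```

Real interdomain routing (BGP) selects paths on policy and reachability rather than a single global cost metric, so the actual engineering burden of the proposal would have been far larger than this simplified sketch suggests.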

While industry stakeholders pushed back against the proposal to give governments greater control over routing paths, civil society stakeholders also objected to the proposal, likewise in the name of security but on very different grounds. Privacy and security activists argued that providing governments with information about how IP packets were routed might help countries keep track of what their citizens were doing online and with whom they were communicating, and that such practices could be detrimental to individuals' online security. Such regulations could allow governments to block certain IP addresses or types of traffic: "These types of regulations, which could be legitimized if the Arab States proposal is adopted, could threaten user rights to privacy and freedom of expression on the Internet" (Llansó, 2012). The proposal was ultimately not included in the revised version of the ITRs assembled at the 2012 WCIT, in part because of considerable lobbying by technical industry stakeholders, who successfully persuaded several national governments that such a proposal would do more to harm security than to help it.

Section 5: ICANN and dotless domains

While industry and civil society were largely aligned in their perspectives on the security of the WCIT routing proposal, Google's 2013 proposal to purchase a "dotless" domain from ICANN gave rise to a conflict between competing views of security, and also competing views about industry competition, between private industry and civil society. In a letter to ICANN (Falvey, 2013), Google requested that it be permitted to operate the .search top-level domain as a dotless domain, so that users who typed "search" without a fully qualified domain name would be automatically directed to the .search domain. In its request, the company wrote: "Google intends to operate a redirect service on the 'dotless' .search domain (http://search/) that, combined with a simple technical standard will allow a consistent query interface across firms that provide search functionality, and will enable users to easily conduct searches with firms that provide the search functionality that they designate as their preference" (Falvey, 2013). This proposal, likely driven by business and economic factors, was quickly recast as an issue of security and stability by technical stakeholders involved in ICANN and internet governance. In Google's request, the idea is framed not as an effort to assert Google's dominance in the online search market (in fact, the letter explicitly states that users will be able to select their search function of preference and not be forced to use Google's), but rather as a matter of providing users with a "consistent query interface", that is, an interface that will behave as expected (or, securely) across a variety of different search firms.

The Internet Architecture Board (IAB), a body composed of technical experts, weighed in on the matter with a statement warning against issuing such domains due to concerns about security and stability. The IAB (2013) wrote:

Since dotless domains will not behave consistently across various locations (and applications and platforms that may have different search list configuration mechanisms), they have the potential to confuse users and erode the stability of the global DNS. By attempting to change expected behavior, dotless domains introduce potential security vulnerabilities. These include causing traffic intended for local services to be directed onto the global Internet (and vice-versa), which can enable a number of attacks, including theft of credentials and cookies, cross-site scripting attacks, etc.

This notion of security adheres closely to the technical definition of a secure computer system as one that behaves as expected. The IAB was not concerned about the national security or social implications of dotless domains (at least, not directly) but rather about the potential for these domains to "change expected behavior". The ICANN Security and Stability Advisory Committee (SSAC), another body representing neither industry nor governments, also issued a report (2012) advising against issuing dotless domains as they could lead to unexpected, and potentially malicious, outcomes (Zusman et al., 2013).
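The inconsistency the IAB describes stems from how resolvers complete single-label names using locally configured search suffixes. The simplified sketch below (the suffix list and zone contents are hypothetical, and real resolvers' search-list handling is considerably more involved) shows the same name "search" resolving to different hosts in different environments:

```python
# Hypothetical zone data: a corporate-internal host named 'search' and a
# (counterfactual) dotless TLD of the same name in the global DNS.
ZONES = {
    "search.corp.example.com": "10.0.0.5",   # internal service on a company network
    "search": "203.0.113.7",                 # the dotless domain, if it existed
}

def resolve(name, search_suffixes):
    """Complete a single-label name with local search suffixes before trying it bare."""
    if "." not in name:
        for suffix in search_suffixes:
            candidate = f"{name}.{suffix}"
            if candidate in ZONES:
                return candidate, ZONES[candidate]
    return name, ZONES.get(name)

# The same URL, http://search/, reaches different hosts on different networks,
# which is the change in "expected behavior" the IAB warns about:
print(resolve("search", ["corp.example.com"]))  # ('search.corp.example.com', '10.0.0.5')
print(resolve("search", []))                    # ('search', '203.0.113.7')
```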

Following these recommendations, ICANN adopted a resolution prohibiting dotless domains at an August 2013 meeting. Notably, the civil society stakeholders who were most vocal about their security concerns over the implementation of dotless domains enjoyed considerable support in this debate from government stakeholders and even some industry stakeholders, especially those who viewed Google as a competitor. The ICANN Governmental Advisory Committee (GAC) also voiced objections to Google's proposal and supported the position taken by the IAB and SSAC. The GAC's willingness to go along with the civil society stakeholders' recommendations on this matter suggests that there are, in fact, concrete points of agreement within the internet governance community about what types of security are desirable, especially from a technical standpoint. It also suggests that, more than competing versions of security, this case exhibited an underlying tension between market competition and security. Often, however, it is harder for stakeholders to reach consensus when dealing with notions of security that derive not from the technical world, but from the government and civil society stakeholder groups.

Section 6: WHOIS database policies

Arguments about security were also at the heart of the controversy over a 2015 proposal to ICANN to alter the privacy policy governing the WHOIS database, which contains information on the people and organisations who own and operate domain names. The 2015 proposal would limit access to privacy and proxy services that conceal domain owners’ personal information in the publicly accessible WHOIS database. Under the proposal, the owners of any website that includes commercial or transactional services of any kind (including donations, sales, etc.) would be required to keep their contact information, including name, address, phone number, and email, publicly available in WHOIS. The proposal led to a clash between government and civil society representatives in the multistakeholder process, with both supporters and critics of the controversial proposal couching their reasoning in terms of security concerns.
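The practical effect of the privacy and proxy services at issue can be sketched as a substitution over record fields: the registrant's contact details are replaced by the provider's, so the public record no longer identifies the owner. The field names and values below are illustrative stand-ins, not the actual WHOIS schema.

```python
# Illustrative sketch of privacy/proxy masking in a WHOIS-style record.
record = {
    "domain": "example-fundraiser.org",
    "registrant_name": "Jane Doe",
    "registrant_email": "jane@example.org",
    "registrant_phone": "+1-555-0100",
}

def apply_privacy_proxy(rec, provider="Example Privacy Service Ltd."):
    """Replace registrant contact fields with the proxy provider's details."""
    masked = dict(rec)
    for field in ("registrant_name", "registrant_email", "registrant_phone"):
        masked[field] = f"REDACTED ({provider})"
    return masked

print(apply_privacy_proxy(record)["registrant_name"])  # REDACTED (Example Privacy Service Ltd.)
# Under the 2015 proposal, domains engaged in commercial transactions could not
# use such masking: the original contact fields would have to remain public.
```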

Supporters of the WHOIS proposal included the GAC Public Safety Working Group (PSWG), which stated in a report (2016): "In order to promote transparency and consumer safety and trust, the PSWG recommends against permitting websites actively engaged in commercial transactions—meaning the collection of money for a good or service—to hide their identities using Privacy/Proxy (P/P) Services." The government stakeholders in the GAC and PSWG viewed the disclosure of personal information about people undertaking online commercial transactions as a matter of national security and safety. "The public is entitled to know the true identity of those with whom they are doing business," they wrote, emphasising the need for public safety authorities and law enforcement officers to be able to identify and track down individuals from their online activity. "To the extent privacy services are used to hide the actors responsible for malicious activities or obscure other pertinent information, there must be reasonable mechanisms in place for public safety authorities to unmask bad actors and obtain necessary information," the GAC PSWG concluded (2016), affirming their national security perspective on the issue.

Civil society stakeholders, meanwhile, offered a different assessment of the proposal to limit privacy and proxy services for WHOIS focused on individual security rather than national security and public safety. Among privacy and security advocates, the proposal was widely criticised for making it more difficult for online users to avoid harassment. Jeong and Albert (2015) argue, "For many, particularly those who become the targets of online harassment, WHOIS proxy or privacy protections are vital for their safety". At issue here are two very different conceptions of who is being secured from what: are the innocent online fundraisers and entrepreneurs being protected from harassers and political retribution, or are innocent internet users being protected from online crooks and website scams? Government representatives were hewing to notions of national security and public safety that emphasised the latter, while civil society representatives were embracing a notion of individual or human security that highlighted the former.

Conclusion

The tensions that arise around issues of security among different groups of internet governance stakeholders speak to the many tangled notions, espoused by participants in multistakeholder governance forums, of what online security is and whom it is meant to protect. What makes these debates significant and unique in the context of internet governance is not that the different stakeholders often disagree (indeed, that is a common occurrence), but rather that they disagree while all using the same vocabulary of security to support their respective stances. Government stakeholders advocate for limitations on WHOIS privacy/proxy services in order to aid law enforcement and protect their citizens from crime and fraud. Civil society stakeholders advocate against those limitations in order to aid activists and minorities and protect those online users from harassment. Both sides would claim that their position promotes a more secure internet and a more secure society, and in a sense both would be right, except that each promotes a differently secure internet and society, protecting different classes of people and behaviour from different threats.

While vague notions of security may be sufficiently universally accepted to appear in official documents and treaties, the specific details of individual decisions, such as the implementation of dotless domains, changes to the WHOIS database privacy policy, and proposals to grant governments greater authority over how their internet traffic is routed, require stakeholders to disentangle the many different ideas embedded in that language. For the idea of security to truly foster cooperation and collaboration as a boundary object in internet governance circles, the participating stakeholders will have to agree more concretely on what their vision of a secure internet is and how it will balance the different ideas of security espoused by different groups. Alternatively, internet governance stakeholders may find it more useful to limit their discussion of security as a whole and instead focus on more specific threats and issues within that space, as a means of avoiding a façade of agreement that leaves the sources of disagreement lingering just below the surface.

The intersection of multistakeholder internet governance and definitional issues of security is striking because of the way that the multistakeholder model both reinforces and takes advantage of the ambiguity surrounding the idea of security explored in the security studies literature. That ambiguity is a crucial component of maintaining a functional multistakeholder model of governance because it lends itself well to high-level agreements and discussions, contributing to the sense of consensus building across stakeholders. At the same time, gathering those different stakeholders together to decide specific issues related to the internet and its infrastructure brings to the fore the vast variety of definitions of security they employ and forces them into security-versus-security fights, with each trying to promote their own particular notion of security. Security has long been a contested concept, but rarely do these contestations play out as directly and dramatically as in the multistakeholder arena of internet governance, where all parties face off over what really constitutes security in a digital world.

References

GAC Public Safety Working Group. (2016, March 9). Comments to Initial Report on the Privacy & Proxy Services Accreditation Issues Policy Development Process. Available from https://gacweb.icann.org/display/GACADV/2016-03-09+Privacy+and+Proxy+Services+Accreditation+Issues?preview=/41943982/41943981/ANNEX%20A%20-%20PSWG%2BGAC%20comments%20proxy%20privacy%20accreditation%20issues.pdf

Proposals for the Work of the Conference. (2012, December 3-14). Submitted by Russia, UAE, China, Saudi Arabia, Algeria, Sudan, and Egypt, World Conference on International Telecommunications (WCIT-12). Available from http://files.wcitleaks.org/public/Merged%20UAE%20081212.pdf

Baldwin, D. A. (1997). The concept of security. Review of International Studies, 23(1), 5-26.

Beautement, A., & Pym, D. (2010). Structured systems economics for security management. In Proceedings of the Ninth Workshop on the Economics of Information Security, Cambridge, MA, USA.

Cavelty, M. D. (2010). Cyber-security. In J. P. Burgess (Ed.), The Routledge Handbook of New Security Studies (pp. 154-162). London: Routledge.

Charney, S. (2002). Transition between law enforcement and national defense. In R. F. Bennett (Ed.), Security in the Information Age: New Challenges, New Strategies. Available from http://www.iwar.org.uk/cip/resources/senate-2002/security.pdf

DeNardis, L. (2014). The global war for internet governance. New Haven, CT: Yale University Press.

Epstein, D., Ross, M., & Baumer, E. (2014). It's the definition, stupid! Framing of online privacy in the Internet Governance Forum debates. Journal of Information Policy, 4, 144-172.

Falvey, S. (2013, April 6). Re: Update on amendments to four of Charleston Road Registry's applications. Letter to Christine Willett, General Manager, ICANN gTLD Program. Available from http://assets.sbnation.com/assets/2455295/falvey-to-willett-06apr13-en.pdf

Garfinkel, S., & Spafford, G. (2003). Practical UNIX and Internet security. Sebastopol, CA: O'Reilly Media.

ICANN Security and Stability Advisory Committee. (2012, February 23). SSAC report on dotless domains. Available from https://www.icann.org/en/groups/ssac/documents/sac-053-en.pdf

International Telecommunication Regulations. (2012). Final Acts: World Conference on International Telecommunications, Dubai. Available from http://www.itu.int/en/wcit-12/Documents/final-acts-wcit-12.pdf

Internet Architecture Board. (2013, July 10). IAB statement: Dotless domains considered harmful. Available from https://www.iab.org/documents/correspondence-reports-documents/2013-2/iab-statement-dotless-domains-considered-harmful/

Jeong, S., & Albert, K. (2015, July 2). An unassuming web proposal would make harassment easier. Wired. Available from http://www.wired.com/2015/07/unassuming-web-proposal-make-harassment-easier/

Llansó, E. (2012, September 6). Security proposals to the ITU could create more problems not solutions. Center for Democracy and Technology. Available from https://citizenlab.org/cybernorms2012/CDT2012.pdf

Mueller, M. L. (2010). Networks and states: The global politics of internet governance. Cambridge, MA: MIT Press.

Mueller, M. L. (2012, June 21). Threat analysis of the WCIT part 4: The ITU and cybersecurity. Internet Governance Project. Available from http://www.internetgovernance.org/2012/06/21/threat-analysis-of-the-wcit-4-cybersecurity/

Parker, D. B. (1998). Fighting computer crime: A new framework for protecting information. New York: Wiley.

Rothschild, E. (1995). What is security? Daedalus, 124(3), 53-98.

Ullman, R. H. (1983). Redefining security. International Security, 8(1), 129-153.

Wolfers, A. (1952). "National security" as an ambiguous symbol. Political Science Quarterly, 67(4), 481-502.

Zusman, M., Allen, J., & Umadas, R. (2013, July 29). Dotless domain name security and stability study. Available from https://www.icann.org/en/groups/ssac/documents/dotless-domain-study-29jul13-en.pdf

Declaration of novelty and no competing interests:

By submitting this manuscript I declare that this manuscript and its essential content have not been published elsewhere and are not under consideration for publication in another outlet.

No competing interests exist that have influenced or can be perceived to have influenced the text.


Multistakeholder governance processes as production sites: enhanced cooperation "in the making"


This paper is part of 'Doing internet governance: practices, controversies, infrastructures, and institutions', a Special issue of the Internet Policy Review.

Introduction

The last decade witnessed rapidly growing interest among scholars from different disciplines in the new forms of participatory governance and multistakeholder deliberation that have emerged around the coordination and regulation of the internet. Most scholarly research examines these new forms by focusing on institutions of internet governance, the role and interests of stakeholders and their influence on the output of policy debates. Less common are analyses centring on the interactions and processes of multistakeholder governance, aiming to assess what is actually happening inside the black box of the newly created institutions and their complex deliberation and coordination mechanisms (Epstein, 2013; Flyverbom, 2010; Gasser, Budish, & West, 2015).

Using the toolbox provided by science and technology studies (STS), this research paper seeks to contribute to the second stream of scholarly work by exploring the inner workings of multistakeholder arrangements in the field of internet governance (IG). The work builds on existing research that traces processes of social ordering at the intersection of multistakeholder settings and intergovernmental institutions (Epstein, 2012; Flyverbom, 2011) and on theoretical considerations of multistakeholder and coordination mechanisms in IG (Hofmann, 2016; Hofmann, Katzenbach, & Gollatz, 2016). The paper adds a new perspective to the existing literature because it focuses primarily on processes of discursive production in internet governance. By combining concepts of actor-network theory (ANT) and interpretative policy analysis (IPA), the work explores how actors translate ideas, shape meaning and compete over the inscription of discourse into policy outcomes. By framing the discursive struggles in IG policy processes as struggles over power and influence, the paper seeks to open the black box of multistakeholder policymaking and to apply the conceptual instruments of ANT and IPA to retrace how actors position themselves within discursive production processes. By focusing on processes rather than outcomes, it is possible to identify how and which actors are able to exert influence, and to show that, despite a lack of binding policy outcomes, multistakeholder arrangements in IG can produce valid results.

Once the conceptual ideas have been introduced and the theoretical challenges discussed, they are illustrated using selected and emblematic examples from recent deliberations within the UN Working Group on Enhanced Cooperation (WGEC). The objective of this multistakeholder group was to overcome controversies on the role of governments in IG, which have persisted since the World Summit on the Information Society (WSIS). With its aim of inscribing meaning into the discursive artefact of "enhanced cooperation" (a diplomatic term introduced into IG by the WSIS outcome document in 2005), the WGEC lends itself particularly well to an empirical analysis which, rather than centring on policy outcome, concentrates on unravelling the processes of discursive interaction within a restricted network of actors. The empirical research builds on desk research, online observation, document analysis and interviews with some of the WGEC's members and observers, conducted in 2014. Beyond the presentation of empirical examples, the main objective of this paper is to introduce the conceptual marriage of ANT and IPA and, in so doing, to contribute to the existing body of literature on STS and IG. In its concluding section, the paper discusses the conceptual notions introduced earlier in light of the frequent criticism that multistakeholder arrangements are unproductive.

STS in internet governance: retracing processes of discursive production

The recent interest in integrating concepts and tools from STS into the theoretical and empirical research on internet governance has resulted primarily in a focus on the internet's materiality and the complex practices through which actors shape and use the internet infrastructure for governance purposes (DeNardis, 2014; Musiani, 2015; Musiani, Cogburn, DeNardis, & Levinson, 2015). While equally drawing on STS literature, the aim of this short research piece is to open another of the many black boxes of internet governance by exploring the performative effects of multistakeholder deliberations and the conflictual co-production of discourse in policy debates. To this end, the paper proposes to combine selected tools from actor-network theory with interpretative policy analysis, another tradition which, with some notable exceptions (Epstein, 2011, 2012; Pickard, 2007), is still underrepresented in IG research.

In keeping with IPA, the underlying assumption of this paper is the conception of policymaking as "a constant discursive struggle" (Fischer & Forester, 1993, p. 2), as well as the understanding that, more than anything else, it is the ideas and language exchanged in policy debates that shape the definition of policy problems and their solutions (Fischer, 2003; Fischer & Gottweis, 2012). Accordingly, interpretative analyses are marked by a focus on policy discourse and its creation through the meaning-making capacities of actors involved in policymaking. Yet, IPA goes beyond a static or purely linguistic understanding of "discourse" and, instead, proposes to study its importance for policymaking by assessing processes of "arguing" and the exchanges between actors in policy debates rather than the language of the final outcome documents (Hajer, 1993, 2002; Münch, 2016).

To construe policymaking as a struggle over and a process of meaning-making implies three important consequences for policy research. First, it moves policy analysis away from its established teleological perspective. What is of greatest interest is not the input and output of policymaking and their causal interrelation, but rather the processes of policymaking, including all deliberations, negotiations, and decision-making. Second, IPA assesses not only the language or ideas of the actors involved in policy debates, but also the actors themselves, their social practices and their interactions. Hence, interpretative approaches add the aspect of "agency" to the analysis of discourse; in other words, they attempt to "reintegrate the subject" into the study of policy (Zittoun, 2009, p. 67). Third, since IPA regards policy as "the outcome of joint productions of meanings among various policy actors [emphasis added]" (Mottier, 2005, p. 256), it concerns the interactions of actors in the policy process and, accordingly, assesses how various actors, jointly but antagonistically (Marres, 2007, p. 773), engage in meaning-making and the production of common discourse during policy debates.

When combining these three aspects, researchers interested in meaning-making in IG and elsewhere need to consider not only the arguments exchanged during policy debates, but also the actors, their practices and the dynamic (power) relations between them. Vivien Schmidt interprets this as a departure from a purely postmodernist understanding of discourse in which only the content of ideas has a centre-stage position. By focusing on the actual processes of arguing and discussing, the creation and origins of the ideas thus produced, as well as the material reality outside of the linguistic utterances, are reintegrated into the analysis (Schmidt, 2008, p. 305). Given this relational focus and the interest in the materiality of policy debates, it is surprising that IPA has not been combined more often with ANT, an approach developed for analysing complex processes of co-creation in science and technology (Callon, 1986; Latour & Callon, 1981). Despite many conceptual and methodological differences, ANT shares IPA's focus on social ordering and the relational production of semantics and materials, and concurs with regard to the three important aspects of IPA mentioned above. But unlike discourse analysis, ANT also provides the tools for retracing production processes on the micro-level.

A shared interest of both approaches is to observe processes rather than outcomes. Since ANT-inspired research is interested in how relations are established, it considers analysis of actions to be a more fruitful path than static situational observation. Accordingly, ANT research focusses on production processes as a way to view an object or meaning "in the making" (Latour, 1993, p. 265). Further, ANT concurs with IPA in its emphasis on actors and their interactions. Seeking to rigorously "follow the actor", ANT attempts to map various elements of the creation process, as well as actors’ behaviour and interactions (Law & Callon, 1988, p. 285). Lastly, ANT overlaps with and, at the same time, goes beyond interpretative approaches in its objective to thoroughly assess the concrete ways through which social order is created contingent on the relationship between actors.

ANT observes the dynamic processes of social ordering by focusing attention on what it calls "translations", meaning the interactions through which actors build relations, influence each other and the objects they produce. When combined with a focus on discursive production, the ANT term "translation" refers to all processes of deliberation, negotiation, and intrigue which allow actors to construct common definitions or narratives and build coalitions. Since it is through these efforts that actors mobilise other actors to share their political, social, cultural or economic interests, the process of translation can be considered as the creation of alignment in interest (Rutland & Aylett, 2008, p. 635). Hence, by following closely the actors and their translation processes, the combination of IPA and ANT makes it possible to better understand the "plurality of processes, formal and informal, where actors, with different degrees of power and autonomy, intervene" (Raboy & Padovani, 2010, p. 151) and thus to unravel the often chaotic and irrational internal workings inside the black box of policy deliberations in IG.

Despite the fruitful correlations between ANT and IPA, the two approaches differ with regard to some underlying epistemological assumptions. While ANT recognises that discourse can generally have an impact on production processes, it considers discourse merely one influential element amongst many others. In fact, ANT acknowledges the role of discourse or a linguistic utterance only if it leaves a trace in the policy process that can be observed by the researcher, for instance in the form of a text and/or a detectable shift in the position of other actors. This is in contrast to IPA and similar approaches, for which discourse and language have priority over other influences such as economic interests or material structures. In addition, policy analysis inspired by discourse-analytical approaches often draws on Foucault's understanding of discourse, which is strongly connected to the notion of power. Thus, discourse is often considered to simultaneously express, reinforce and reproduce overarching power structures. Conversely, ANT scholars commonly reject the idea of power structures as external forces which act upon the processes under scrutiny. In ANT, power relations are always contingent, created on the micro-level through the interaction and practices of actors; they only have significance if they leave a trace (Law, 1992, p. 388). Nevertheless, these epistemological differences (whose full exploration would go beyond the scope of this paper) do not preclude the combining of ANT and IPA on a conceptual and empirical level. For the analysis of discursive production in IG, using selected ANT tools allows us to examine concretely how power relations and discourses are created in a multistakeholder context, without needing to consider or justify them by appealing to external economic forces, political interests or geopolitical tensions.

Opening the black box of discursive production in multistakeholder arrangements

Why is the combination of ANT and IPA particularly suitable for studying discursive production processes in multistakeholder arrangements? There are two main reasons. First, multistakeholder arrangements are hybrid spaces in which heterogeneous actors engage in processes of social ordering. Second, multistakeholder arrangements, especially in IG, are discursive spaces because they serve primarily as venues for dialogue and coordination.

The "hybrid space" aspect refers to the character of multistakeholder processes per se, which can be defined as governance settings that incorporate "representatives from multiple groups in discussions and decision making" (Gasser et al., 2015, p. 2) – for instance, from governments, the private and technical sector or civil society. 8 From an ANT perspective, these settings are spaces in which heterogeneous actors from different backgrounds, with diverging social practices come together and engage in processes of social ordering (Callon, Lascoumes, & Barthe, 2009, p. 18; see also Flyverbom, 2011, p. ix). Because of its expanded definition of actors, ANT allows researchers to assess multistakeholder policy processes in a way that differs significantly from traditional policy or institutional analysis, taking the particularities of multistakeholder arrangements into consideration.

In accordance with the ANT principle of "generalised symmetry", the term "actor" does not refer to someone "who wishes to grab power[,] makes a network of allies and extend[s] his power"; rather, it is a semiotic definition comprising many sorts of "actants", all of which can have an observable impact on the processes under consideration (Latour, 1996, p. 374). This makes it possible to account not only for human, but also non-human actors like animals, material objects or technology. It also means that organisational settings, rules and procedures in production processes can be regarded as actors if they affect other actors and influence their practices. This is particularly useful when considering multistakeholder settings, in which there are strong relational ties between human actors and material arrangements like the rules of participation, which may be set by the actors themselves and re-negotiated during the processes to which these rules apply. While human actors delineate rules, procedures and organisational settings, these arrangements, in turn, delimit who is allowed to participate. Because the arrangements thus grant agency to others, they are, from an ANT perspective, actors themselves.

By the same token, texts and documents can be considered actors in the processes under scrutiny because they are the result of "a material operation of creating order" (Latour & Woolgar, 1986, p. 245; see also Nimmo, 2011, pp. 114ff). Multistakeholder processes in IG often contain public consultations and the submission of stakeholder comments in order to include an even larger number of different voices. Moreover, like intergovernmental processes, multistakeholder processes tend to draw on existing policy documents in order to refer to already agreed language. From an ANT perspective, the comments, documents and language do not simply represent input into the processes; rather, they influence the role, position and ideas of actors and, thus, act upon other actors and their endeavours to make meaning. Accordingly, the merit of ANT for studying multistakeholder processes in IG (as well as other fields) is that it allows researchers to consider simultaneously the agency of humans, materials and semiotics as equally important elements in a network of heterogeneous actors.

The second aspect that makes the combination of ANT and interpretative approaches fruitful for IG research concerns the meaning-making capacities of actors and their importance for multistakeholder governance. Both theory and practice suggest that the value of multistakeholder processes in internet governance lies in facilitating dialogue and coordination rather than in producing tangible outcomes (Hofmann et al., 2016, pp. 10ff; Pavan, 2012, pp. xxixff). This is prominently visible in the context of the Internet Governance Forum (IGF), which constitutes an emblematic case for the institutionalisation of the multistakeholder policy dialogue in IG. Although there has always been disagreement regarding the exact mandate of the IGF, it is generally understood that the forum should not negotiate policy texts but "ensure an open and inclusive discourse on all policy issues potentially relevant for Internet governance" (Hofmann, 2016, p. 12). 10 Therefore, it is possible to conceptualise the IGF and other multistakeholder venues that provide for non-binding policy deliberations as "discursive spaces" (Epstein, 2012, p. 29). 11 Similarly, Mikkel Flyverbom (2011, p. 167) emphasises the "discursive power" of multistakeholder arrangements and the role of discourses for social ordering, as it is through the production of dominant discourses that some ideas, problem perceptions and policy options become conceivable while others are rendered out of the question.

The combination of ANT and IPA provides the tools for tracing how actors use their own discursive power to contribute to joint meaning-making by translating their ideas and opinions. An actor's translation of an idea or discourse is successful when others adopt it and start to promote the same. 12 But this success is often only temporary. From an ANT perspective, every order is always unstable and precarious and requires continuous reordering so as to be maintained (Latour, 2004, p. 63). One way to stabilise the results of translations for longer periods is to fix them in the most durable material, a process in ANT called "inscription". 13 In multistakeholder arrangements, inscription occurs, for instance, when a discourse is not simply communicated orally but transformed into a more solid, material form such as written text or – better still – an organisational setting or procedure. Through this effort, the discourse becomes institutionalised: "If a discourse solidifies in particular institutional arrangements [...] then we speak of discourse institutionalization" (Hajer, 2005, p. 303). 14 From an ANT perspective, documents and organisational settings can themselves be actors. Thus, institutionalisations have a strong performative function because the inscribed discourses potentially impact other actors and their production processes. 15 But even discourse institutionalisations are not permanent because written text can easily be ignored and procedures can be abolished. As a consequence, assessing discursive production in IG through an ANT-inspired perspective implies that we cannot consider institutional structures as given. Instead, the analysis needs to centre on the continuous processes of ordering and reordering through which actors make sense of and build the world around them.

In sum, multistakeholder arrangements in IG can be conceptualised as hybrid sites of discursive production in which heterogeneous actors (including documents and organisational settings) engage in translation processes. During these processes, all actors seek to stabilise their own positions by jointly producing discourse and inscribing it into the materiality of the arrangements. In the following section, the merits of ANT-inspired analyses of discursive production processes in IG are illustrated using empirical examples. The examples used derive from an analysis of the Working Group on Enhanced Cooperation, a multistakeholder group set up by the United Nations, which convened four times in 2013-2014. Because of the conceptual focus of this paper, it is not the intention here to provide the full picture of the group’s deliberations and interactions; therefore, the empirical findings are presented selectively. 16

Enhanced cooperation: a mediator "in the making"

The multistakeholder concept is associated with a wide range of approaches and processes in IG. In particular, the IGF and its discursive role have been analysed often and thoroughly. In contrast, much less attention has been paid to "the other track of dialogue created by the WSIS" (Radu & Chenou, 2014, p. 10), the process of "enhanced cooperation". This second stream of deliberations is particularly interesting for a study that combines an ANT-inspired perspective with IPA since the concept of enhanced cooperation is itself a discursive artefact. The expression is borrowed from European Union law, where it is used in various treaties to refer "to the coexistence of different rhythms and depths in institutional integration in different policy areas" (Rioux, Adam, & Company Pérez, 2014, p. 41). In the context of IG, the term was introduced in 2005 in the Tunis Agenda, one of the four official outcome documents of WSIS, in an attempt to overcome fundamental discrepancies regarding the role of governments in the technical, operational and policymaking matters of the internet (Brown, 2014; Kleinwächter, 2013).

Inscribed into paragraphs 69 and 71 of the Tunis Agenda, "the process towards enhanced cooperation" was supposed to result in a future mechanism to "enable governments, on equal footing, to carry out their roles and responsibilities, in international public policy issues pertaining to the internet […]" without interfering in the day-to-day technical and operational IG matters. By situating the achievement of enhanced cooperation in the future, the documents' authors acknowledged that such a mechanism did not yet exist, nor was there any clearly defined cooperation mechanism that included governments and had an uncontested role with regard to internet policymaking. Thus the concept of enhanced cooperation was a purely discursive artefact which – through its inscription in the Tunis Agenda – gained material form and a durable link to IG. 17

However, the inscription of enhanced cooperation into the Tunis Agenda did not go as far as to specify what the notion meant precisely or how it should be implemented or measured. Since enhanced cooperation was introduced in an attempt to find a compromise, it was intentionally left with a "creative ambiguity" and "much room for interpretation" (Kummer, 2007, p. 9). Therefore, this discursive artefact, enhanced cooperation, can be interpreted as what ANT refers to as a "mediator". Mediators are artefacts produced by actors and circulated during the processes of translation and inscription. They can be immaterial – like services or notions – or material – like texts or other physical objects. It is not their form that makes them mediators, but the fact that they "transform, translate, distort and modify the meaning or the elements they are supposed to carry"; hence, "their input is never a good predictor of their output; their specificity has to be taken into account every time" (Latour, 2005, p. 39).

The actors involved in WSIS recognised immediately that enhanced cooperation was a mediator which could be interpreted in different ways to create social ordering in IG: "'Enhanced cooperation' is one of the code words in Internet governance discussions and means different things to different people" (Kummer, 2012). During WSIS, some actors – in particular, those from the governmental side – invoked enhanced cooperation to translate their call for more multilateral decision-making in IG through a new organisation under the auspices of the UN, whereas others used it to justify their wish to strengthen the multistakeholder approach. 18 Ever since, translation processes surrounding enhanced cooperation have persisted as actors continuously seek to inscribe their ideas into the discursive artefact itself. Accordingly, enhanced cooperation has remained a mediator "in the making", an artefact whose meaning has yet to be defined concretely. In 2012, the UN decided to consolidate the diverging interpretations and, eventually, provide a stable meaning to the ambiguous notion of enhanced cooperation. Yet this undertaking proved difficult. 19

Ordering through discourse: the Working Group on Enhanced Cooperation

Following a resolution adopted by the UN General Assembly in 2012 (UN, 2012), the Working Group on Enhanced Cooperation on Public Policy Issues Pertaining to the Internet (WGEC) was created in 2013. Operating under the auspices of the UN Commission on Science and Technology for Development (CSTD), this multistakeholder group was mandated to develop recommendations on how to implement the WSIS mandate regarding the cooperation of stakeholders in IG. 20

Identifying the actors

Officially the WGEC comprised 42 members: 22 government representatives and five representatives from each of the currently recognised stakeholder groups (international organisations, private sector, academia/technical community and civil society). But from a perspective inspired by the ANT principle of generalised symmetry, those who acted upon the processes within the WGEC (and, accordingly, who count as actors within these processes) differed from the official members.

First, only a small number of the official government representatives actually attended the WGEC meetings or intervened in a way that impacted the group's translation processes (Kaspar, 2014; Kovacs, 2014). 21 Second, although the CSTD had decided upfront on the WGEC’s official configuration, the modalities of the working procedures were altered during the group's first meeting so that observers could also attend. Moreover, video streaming and live transcripts were made available to allow members and observers to join the discussions remotely, while the last ten minutes of each session were reserved for interventions by the observers (Dickinson, Dutton, Maciel, Miloshevic, & Radunovic, 2014, pp. 18ff). Several governmental and non-governmental observers made use of these opportunities and actively contributed to the deliberations by drafting statements, producing room documents and reporting via Twitter. 22 Third, after its inception, the WGEC introduced another change to the procedures that altered the configuration of actors and discourses in the group as it agreed to hold public consultations via a survey on the implementation and potential operationalisation of enhanced cooperation. The 69 survey responses, resulting in over 500 pages of text, became an important working tool for the group and many of its initial debates revolved around these responses and their categorisation. The same was the case for the various documents which the group produced, with the support of some active observers, in order to consolidate and structure the survey responses.

In short, while not all group members intervened in the WGEC's deliberations, the organisational settings, the documents received and produced by the group and the ideas inscribed into their materiality left observable traces on the group’s interactions. Thus these elements can be considered as acting agents which shaped the discursive production of the group and which can help to unravel the WGEC's inner workings. The merit of this approach is revealed if we follow the actors, their interactions and discursive production closely.

Follow the actors and their practices

The WGEC's mandate was to propose recommendations for implementing enhanced cooperation in existing and future IG mechanisms and procedures. Therefore, most of the actors' practices involved the translation of ideas and their inscription into draft recommendations. Between May 2013 and May 2014, the WGEC convened in Geneva four times. Since the categorisation of the survey responses could not be accomplished within the limited timeframe of the official meetings, a correspondence group was charged with completing this task between meetings. But despite this effort, there was no mechanism to ensure that all responses categorised by the correspondence group were eventually reflected in the proposed draft recommendations. Some group members sought to take this material input into account in their proposals and thereby allowed the discourses inscribed into it to act upon the WGEC’s deliberations. Others, by contrast, ignored the comments provided by other stakeholders and, instead, tried to inscribe discourses into the draft recommendations that were in line with their own or their government's policy agenda (Doria, 2014b; Kovacs, 2014). 23 In the end, the agency that official group members had initially accorded outsiders and written texts, by inviting public comment, was subsequently reduced; these potential actors were prevented from acting upon the process of social ordering within the WGEC.

Actors' translation strategies also showed interesting variations. While all WGEC members were supposed to collaborate "on equal footing" – and in fact were treated equally by the chairman – some government representatives, for instance, the Iranian delegate, made more use of their speaking rights than others (Kaspar, 2014). The most interesting translation strategies, however, were chosen by some of the observers. While they officially had a less important voice – they only had a short speaking slot in each session – they frequently approached WGEC members outside the official meetings in order to translate their comments and voice concerns. Two civil society observers established themselves as "obligatory passage points" 24 for stakeholder input by assuming the lead of the correspondence group whose goal was to consolidate the survey responses into a manageable number of items. 25 In so doing, these civil society observers tried to use the variety of discourses and ideas inscribed in the various comments to influence WGEC deliberations; at the same time, they ordered and shaped these discourses and ideas, thereby changing the form and the impact they had on the WGEC’s deliberation processes.

Despite various efforts by different actors, none of these endeavours eventually led to the institutionalisation of specific discourses in the form of recommendations. When the WGEC could not reach a consensus on some controversial issues and lacked the time to discuss others, an additional meeting was scheduled. 26 But even during the final session, divergences on a number of issues persisted, so that the group eventually decided not to submit recommendations to the CSTD (Doria, 2014b; CSTD, 2014).

Retracing processes of discursive production

When considering interactions within the WGEC and the difficulties it encountered from an IPA perspective – that is, by assessing the discursive exchanges and their content – it is striking that the actors' practices and positions did not appear to be primarily determined by their stakeholder categories. Like many other aspects of IG, stakeholder groups need to be considered as artefacts created in an attempt to bring order to the messy environment of global governance processes. Stakeholders are not given and stable entities but emerge through categorisations (Flyverbom, 2011, p. 38). Consequently, within the WGEC, the positions of actors did not split simply along the lines of stakeholder categories; instead, important conflicts emerged within and across stakeholder groups. As a matter of fact, when the conflicting discourses are linked back to the actors expressing them, it becomes apparent that the WGEC’s inability to reach consensus was primarily due to divergences between "discourse coalitions" formed by actors in the debates.

The concept of "discourse coalition" commonly refers to a group of actors that shares the usage of a particular discourse over a longer period of time (Hajer, 2006, p. 70; see also Schmidt, 2012, p. 101). To form a discourse coalition, actors do not need to agree on everything, coordinate their actions or share the same values or interests (which is rarely the case in multistakeholder groups involving a large number of heterogeneous actors). For Hajer, identifying discourse coalitions is the real challenge of IPA because it combines "the analysis of the discursive production of reality with the analysis of the (extradiscursive) social practices from which social constructs emerge and in which the actors that make these statements engage" (1993, p. 45). In the WGEC, we can identify at least three discourse coalitions: the first comprised actors who agreed that enhanced cooperation has not been implemented at all since no official structures have been installed for governments to formulate internet-related public policies. 27 Positioned at the other extreme, the second discourse coalition was united by the claim that enhanced cooperation has been implemented in the form of multistakeholder arrangements, notably the IGF. 28 Although there were many nuances in between these two extreme positions (Kaspar, 2014), it is possible to identify a third discourse coalition around the acknowledgment that some progress had been achieved since WSIS, but that enhanced cooperation had not yet been fully realised (Aguerre, 2013; Brown, 2014). 29

By examining the arguments closely, it becomes clear that there was little compatibility between the discourses and the coalitions behind them, mainly because those actors positioning themselves at either of the two extremes were not willing to move towards a middle ground. Because these actors sought to inscribe incompatible discourses into the group’s recommendations, it is not surprising that the WGEC encountered difficulties in reaching a consensus on the main issues, namely, the definitions of "enhanced cooperation" and "equal footing". As a result, after four meetings and one year of deliberation and consultation, the WGEC was not able to meet its objective and submit draft recommendations to the CSTD. During all their discursive and extra-discursive interactions, no discourse coalition was able to prevail over the others, to translate its interests and ideas more successfully than the others or to institutionalise its discourses. Due to the group’s pre-defined set-up, its efforts at social ordering were interrupted before any kind of power balance and discursive order – even if only an unstable and temporary one – could be achieved.

Conclusion: discourses, social ordering and internet governance

In May 2016, two years after the working group’s final session, the CSTD announced the appointment of a new WGEC. Continuing with the same stakeholder composition but different individual members, it is charged with the same task as its predecessor, thereby "taking into consideration the work that has been done on this matter so far". By doing so, the CSTD has not only launched a second attempt to finally infuse the mediator, enhanced cooperation, with clearly defined meaning, but it has also acknowledged that the first effort did result in some achievement, albeit not in the form of recommendations. Although the WGEC was unable to reach consensus on which of the many interpretations of enhanced cooperation should become institutionalised through its inscription into an official UN document, it would not do the group justice to consider all of its work a failure. In fact, by following the WGEC’s actors and retracing their discursive interactions, one recognises that, regarding some questions, the group was able to move beyond the destructive binary logic of diametrically opposed positions. In particular, concerning the wish of some governments to create a UN mechanism for IG, alternative scenarios were discussed positively. 30 Thus, in a few instances, incompatible viewpoints did give way "to a more inclusive acknowledgement of diverse views and diverse options for the way forward" (Liddicoat, 2014). This acknowledgment can be considered a discursive achievement which the successor group can now build on.

The importance of discursive achievements, even if they are merely temporary, can only be recognised through a focus on the inner workings of a multistakeholder arrangement rather than on its outcome. An approach that emphasises the role of discourse and language in policymaking and, at the same time, accounts for the multiple practices of all involved as well as the (power) relations that emerge from this interaction can thus provide new insights, as can be seen in the case of the WGEC. On the one hand, new light is shed on the performativity of multistakeholder arrangements: instead of resulting in binding or non-binding policy texts, the settings, procedures and actors in IG or other multistakeholder processes all contribute to the joint (though frequently contentious) production of discourse and a shared understanding of the issues at stake. On the other hand, studying the inner workings of multistakeholder groups provides some justification for the existence and purpose of such arrangements by uncovering their discursive achievements. In fact, because of their primary function as discursive spaces for dialogue and coordination, multistakeholder processes in IG are frequently criticised as being unproductive (DeNardis, 2010, p. 3; Malcolm, 2015; Pavan, 2012, pp. 79ff); however, by meticulously retracing the production processes of particular multistakeholder arrangements and identifying what exactly they may have achieved in lieu of official policy texts, it is possible in some instances to counter this general criticism.

Overall, the production processes within multistakeholder groups can be considered as attempts at social ordering in IG because these processes generate discourses and create institutions which add to the shape and materiality of IG rules and procedures. The WGEC’s deliberations had the particular characteristic that they touched the heart of the controversies which have accompanied IG processes since WSIS in 2003-05, namely, the collaboration of stakeholders and, more particularly, the role of governments in public policy-making related to the internet. Accordingly, the multistakeholder working group not only tried to attach meaning to the discursive artefact of enhanced cooperation, but also negotiated its own legitimacy as well as the influence that its various members can or should have on global IG processes. As a result, the WGEC contributed fundamentally to the governance of the internet, understood as the processes of reflexive coordination through which actors "question and redefine the rules of the game" of IG in general (Hofmann et al., 2016, p. 10). In this case, social ordering was only partially achieved because, although new joint discourses were produced, none of them was ultimately institutionalised through inscription in a consensual text or formal policy recommendation. But from an ANT perspective, "ordering is always partial and in-the-making, and all attempts to act on the world must compete with other, equally possible modes of ordering" (Flyverbom, 2011, p. 137). Opening the black box of multistakeholder arrangements through a focus on discursive production processes can provide us with valuable insights in this regard.

References

Aguerre, C. (2013, November 26). Enhancing Stakeholder Cooperation [Blog]. Retrieved from http://teamarin.net/2013/11/26/enhancing-stakeholder-cooperation-guest-blog/

Akrich, M. (1994). The description of technical objects. In W. E. Bijker & J. Law (Eds.), Shaping Technology/Building Society: Studies in Sociotechnical Change (pp. 205–224). Cambridge, MA: MIT Press.

Ang, P. H., & Pang, N. (2010). Going beyond talk: Can international internet governance work? Presented at the 5th Annual Symposium of the Global Internet Governance Academic Network (GigaNet), Vilnius, Lithuania.

Brown, D. (2014, February 24). Spotlight on Internet Governance 2014: Part Two U.N. Working Group on Enhanced Cooperation. Retrieved from https://www.accessnow.org/blog/2014/02/24/spotlight-on-internet-governance-2014-un-working-group-on-enhanced-cooperat

Brown, S. D. (2002). Michel Serres: Science, Translation and the Logic of the Parasite. Theory, Culture & Society, 19(3), 1–27.

Callon, M. (1986). Some elements of a sociology of translation: domestication of the scallops and the fishermen of St Brieuc Bay. In J. Law (Ed.), Power, action and belief: a new sociology of knowledge? (pp. 196–223). London: Routledge.

Cressman, D. (2009). A Brief Overview of Actor-Network Theory: Punctualization, Heterogeneous Engineering & Translation. Retrieved from http://www.sfu.ca/cprost/docs/A%20Brief%20Overview%20of%20ANT.pdf

CSTD. (2014, May). Report on CSTD Working Group on Enhanced Cooperation (E/CN.16/2014/CRP.3). Retrieved from http://unctad.org/meetings/en/SessionalDocuments/ecn162014crp3_en.pdf

DeNardis, L. (2010). The Emerging Field of Internet Governance (Working paper No. ID 1678343). Yale: American University. Retrieved from http://papers.ssrn.com/abstract=1678343

DeNardis, L. (2014). The Global War for Internet Governance. New Haven: Yale University Press.

Dickinson, S., Dutton, W. H., Maciel, M. F., Miloshevic, D., & Radunovic, V. (2014). Enhanced Cooperation in Governance (SSRN Scholarly Paper No. ID 2376807). Rochester, NY: Social Science Research Network. Retrieved from http://papers.ssrn.com/abstract=2376807

Doria, A. (2014a, March 1). Five days in the Enhanced Cooperation box. Retrieved from http://avri.doria.org/post/78199466296/five-days-in-the-enhanced-cooperation-box

Doria, A. (2014b, July 3). [Phone interview].

Epstein, D. (2011). Manufacturing Internet policy language: The inner workings of the discourse construction at the Internet Governance Forum. Presented at the TPRC 2011 - Research Conference on Communication, Information and Internet Policy, Arlington, VA: George Mason University School of Law.

Epstein, D. (2012). The Duality of Information Policy Debates: The Case of the Internet Governance Forum (PhD thesis). Cornell University, Cornell.

Epstein, D. (2013). The making of institutions of information governance: the case of the Internet Governance Forum. Journal of Information Technology, 28(2), 137–149. http://doi.org/10.1057/jit.2013.8

Fischer, F. (2003). Reframing Public Policy: Discursive Politics and Deliberative Practices. Oxford; New York: Oxford University Press.

Fischer, F., & Forester, J. (Eds.). (1993). The Argumentative Turn in Policy Analysis and Planning. Durham; London: Duke University Press.

Fischer, F., & Gottweis, H. (Eds.). (2012). The argumentative turn revisited: public policy as communicative practice. Durham; London: Duke University Press.

Flyverbom, M. (2010). Hybrid networks and the global politics of the digital revolution – a practice-oriented, relational and agnostic approach. Global Networks, 10(3), 424–442. http://doi.org/10.1111/j.1471-0374.2010.00296.x

Flyverbom, M. (2011). The Power of Networks - Organizing the Global Politics of the Internet. Cheltenham: Edward Elgar.

Gasper, D., & Apthorpe, R. (1996). Introduction: Discourse Analysis and Policy Discourse. European Journal of Development Research, 8(1), 1–15.

Gasser, U., Budish, R., & West, S. M. (2015). Multistakeholder as Governance Groups: Observations from Case Studies (Berkman Center Research Publication No. No. 2015-1). Cambridge, MA: Berkman Center for Internet and Society. Retrieved from http://papers.ssrn.com/abstract=2549270

Hajer, M. (1993). Discourse Coalitions and the Institutionalization of Practice: The Case of Acid Rain in Britain. In F. Fischer & J. Forester (Eds.), The Argumentative Turn in Policy Analysis and Planning (pp. 43–76). Durham; London: Duke University Press.

Hajer, M. (2002). Discourse Analysis and the Study of Policy Making. Eur Polit Sci, 2, 61–65.

Hajer, M. (2005). Coalitions, Practices, and Meaning in Environmental Politics: from Acid Rain to BSE. In J. Torfing & D. Howarth (Eds.), Discourse Theory in European Politics (pp. 297–315). Basingstoke: Palgrave Macmillan.

Hajer, M. (2006). Doing Discourse Analysis: Coalitions, Practices, Meaning. In M. van den Brink & T. Metze (Eds.), Words matter in policy and planning : Discourse Theory and Methods in the Social Sciences (pp. 65–74). Utrecht: KNAG/Nethur.

Hofmann, J. (2016). Multi-stakeholderism in Internet governance: putting a fiction into practice. Journal of Cyber Policy, 1(1). Online first: http://doi.org/10.1080/23738871.2016.1158303

Hofmann, J., Katzenbach, C., & Gollatz, K. (2016). Between coordination and regulation: Finding the governance in Internet governance. New Media & Society. Online first: http://doi.org/10.1177/1461444816639975

Kaspar, L. (2014, May 20). [Phone interview].

Kleinwächter, W. (2012, December 17). WCIT and Internet Governance: Harmless Resolution or Trojan Horse? [Blog]. Retrieved from http://www.circleid.com/posts/20121217_wcit_and_internet_governance_harmless_resolution_or_trojan_horse/

Kleinwächter, W. (2013, November 12). Enhanced Cooperation in Internet Governance: From Mystery to Clarity? [Blog]. Retrieved from http://www.circleid.com/posts/20131112_enhanced_cooperation_in_internet_governance_mystery_to_clarity/

Kovacs, A. (2014, July 3). [Phone interview].

Kummer, M. (2007). The debate on Internet governance: From Geneva to Tunis and beyond. Information Polity, 12(1–2), 5–13.

Kummer, M. (2012, July 2). Internet Governance: What is Enhanced Cooperation? Retrieved from http://www.internetsociety.org/blog/2012/07/internet-governance-what-enhanced-cooperation

Latour, B. (1993). An Interview with Bruno Latour. Configurations, 1(2), 247–268.

Latour, B. (1996). On actor-network theory: A few clarifications. Soziale Welt, 47(4), 369–381.

Latour, B. (2004). On using ANT for studying information systems: a (somewhat) Socratic dialogue. In C. Avgerou, C. Ciborra, & F. Land (Eds.), The Social Study of Information and Communication Technology: Innovation, Actors, and Contexts (pp. 62–76). Oxford; New York: Oxford University Press.

Latour, B. (2005). Reassembling the Social: An Introduction to Actor-Network-Theory. Oxford; New York: Oxford University Press.

Latour, B., & Callon, M. (1981). Unscrewing the Big Leviathan: how actors macro-structure reality and how sociologists help them to do so. In K. Knorr-Cetina & A. V. Cicourel (Eds.), Advances in Social Theory and Methodology: Towards and integration of Micro- and Macro-Sociologies of Knowledge? (pp. 277–303). London: Routledge and Kegan Paul.

Latour, B., & Woolgar, S. (1986). Laboratory Life: The Construction of Scientific Facts (2nd edition). Princeton: Princeton University Press.

Law, J. (1992). Notes on the theory of the actor-network: Ordering, strategy, and heterogeneity. Systemic Practice and Action Research, 5(4), 379–393.

Law, J., & Callon, M. (1988). Engineering and Sociology in a Military Aircraft Project: A Network Analysis of Technological Change. Social Problems, 35(3), 284–297.

Liddicoat, J. (2014). Working Group on Enhanced Cooperation: The next enthralling episode. APC. Retrieved from https://www.apc.org/en/blog/working-group-enhanced-cooperation-next-enth...

Liddicoat, J., Doria, A., & Kaspar, L. (2013). The UN Working Group on Enhanced Cooperation: Report on the second meeting. APC. Retrieved from http://www.apc.org/en/blog/un-working-group-enhanced-cooperation-report-secon

Malcolm, J. (2008). Multi-Stakeholder Governance And The Internet Governance Forum. Perth: Terminus Press.

Malcolm, J. (2015). Criteria of meaningful stakeholder inclusion in internet governance. Internet Policy Review, 4(4). http://doi.org/10.14763/2015.4.391

Marres, N. (2007). The Issues Deserve More Credit: Pragmatist Contributions to the Study of Public Involvement in Controversy. Social Studies of Science, 37(5), 759–780. http://doi.org/10.1177/0306312706077367

Mikus, M. (2009). Strategies, meanings and actor-networks: community-based biodiversity conservation and sustainable development in the Comoros (Final thesis MSc Anthropology and Development). London School of Economics, London.

Mottier, V. (2005). From Welfare to Social Exclusion: Eugenic Social Policies and the Swiss National Order. In D. Howarth & J. Torfing (Eds.), Discourse Theory in European Politics: Identity, Policy and Governance (pp. 255–274). Hampshire and New York: Palgrave Macmillan.

Mueller, M. (2012, December 18). ITU Phobia: Why WCIT was derailed. Retrieved from http://www.internetgovernance.org/2012/12/18/itu-phobia-why-wcit-was-derailed/

Münch, S. (2016). Interpretative Policy-Analyse: eine Einführung. Wiesbaden: Springer VS.

Musiani, F. (2015). Practice, Plurality, Performativity, and Plumbing: Internet Governance Research Meets Science and Technology Studies. Science, Technology & Human Values, 40(2), 272–286. http://doi.org/10.1177/0162243914553803

Musiani, F., Cogburn, D. L., DeNardis, L., & Levinson, N. S. (Eds.). (2015). The Turn to Infrastructure in Internet Governance. New York, NY: Palgrave Macmillan.

Nimmo, R. (2011). Actor-network theory and methodology: social research in a more-than-human world. Methodological Innovations Online, 6(3), 108–119.

Pavan, E. (2012). Frames and Connections in the Governance of Global Communications: A Network Study of the Internet Governance Forum. Lanham, MD: Lexington Books.

Phillips, N., Lawrence, T. B., & Hardy, C. (2004). Discourse and Institutions. The Academy of Management Review, 29(4), 635–652.

Pickard, V. (2007). Neoliberal Visions and Revisions in Global Communications Policy From NWICO to WSIS. Journal of Communication Inquiry, 31(2), 118–139.

Pohle, J. (2014). ‘Mapping the WSIS+10 Review Process’. Research report on the 10-year review process of the World Summit on the Information Society. Brussels: Vrije Universiteit Brussel. Retrieved from http://www.globalmediapolicy.net/sites/default/files/Pohle_Report%20WSIS+10_final.pdf

Raboy, M., & Padovani, C. (2010). Mapping Global Media Policy: Concepts, Frameworks, Methods. Communication, Culture & Critique, 3(2), 150–169. http://doi.org/10.1111/j.1753-9137.2010.01064.x

Radu, R., & Chenou, J.-M. (2014). Global Internet Policy: a Fifteen-Year Long Debate. In R. Radu, J.-M. Chenou, & R. H. Weber (Eds.), The Evolution of Global Internet Governance: Principles and Policies in the Making (pp. 1–19). Heidelberg, New York, Dordrecht, London: Springer.

Raymond, M., & DeNardis, L. (2015). Multistakeholderism: anatomy of an inchoate global institution. International Theory, 7(3), 572–616. http://doi.org/10.1017/S1752971915000081

Rioux, M., Adam, N., & Company Pérez, B. (2014). Competing Institutional Trajectories for Global Regulation - Internet in a Fragmented World. In R. Radu, J.-M. Chenou, & R. H. Weber (Eds.), The Evolution of Global Internet Governance: Principles and Policies in the Making (pp. 37–55). Heidelberg, New York, Dordrecht, London: Springer.

Rutland, T., & Aylett, A. (2008). The work of policy: actor networks, governmentality, and local action on climate change in Portland, Oregon. Environment and Planning D: Society and Space, 26(4), 627–646.

Schmidt, V. A. (2008). Discursive Institutionalism: The Explanatory Power of Ideas and Discourse. Annual Review of Political Science, 11(1), 303–326.

Schmidt, V. A. (2010). Taking ideas and discourse seriously: explaining change through discursive institutionalism as the fourth ‘new institutionalism’. European Political Science Review, 2(1), 1-25. http://doi.org/10.1017/S175577390999021X

Schmidt, V. A. (2012). Discursive Institutionalism. Scope, Dynamics, and Philosophical Underpinnings. In F. Fischer & H. Gottweis (Eds.), The Argumentative Turn Revisited. Public Policy as Communicative Practice (pp. 85–113). Durham; London: Duke University Press.

Schmidt, V. A., & Radaelli, C. M. (2004). Policy Change and Discourse in Europe: Conceptual and Methodological Issues. West European Politics, 27(2), 183–210.

Scott, W. R. (2008). Institutions and Organizations: Ideas and Interests. Los Angeles: SAGE Publications.

UN. (2012). Resolution A/RES/67/195. United Nations General Assembly, 67th session.

Zittoun, P. (2009). Understanding Policy Change as a Discursive Problem. Journal of Comparative Policy Analysis: Research and Practice, 11(1), 65–82.

Footnotes

1. This idea builds on Jeanette Hofmann's recent framing of multistakeholderism as a "discursive artefact that aims to smooth contradictory and messy practices into a coherent story about collaborative transnational policymaking" (Hofmann, 2016, p. 16).

2. Borrowed from the field of natural sciences and technology, "black box" stands for a device whose complex internal workings need not be known in order to predict its outputs. STS authors use the term to refer to a material object, a situation or process which has become self-evident and obvious to the observer: "A black box contains that which no longer needs to be reconsidered, those things whose contents have become a matter of indifference" (Latour & Callon, 1981, p. 285). For more details about the use of the concept in STS, see also Cressman (2009, p. 6).

3. Building on Hajer's (1993, p. 45) basic definition, policy discourse can be conceptualised as a set of ideas, concepts, frames and definitions that gives meaning to a real-world phenomenon, structuring it as a concrete policy problem and proposing solutions. Because of its structuring function, the production and reproduction of discourse is perceived as an iterative process: by addressing the problem and its potential solutions in official policy texts, the world-view behind the discourse is stabilised and reproduced in common policy thinking. For an overview on the different understandings of the term "discourse" in relation to policy and policymaking, see Gasper & Apthorpe (1996, pp. 2ff).

4. Interpretative policy analysis is also referred to as argumentative or deliberative policy analysis. Although there are small conceptual and methodological differences between the different approaches, this paper subsumes them all under the most commonly used category of "interpretative" approaches to policy analysis (for more details, see Münch, 2016, pp. 3ff).

5. A notable exception is Marek Mikus' work on strategies, meanings and actor-networks in sustainable development (2009). ANT has also been used to study issue formation and discursive processes of public involvement in participatory democracy. For examples and criticism, see Marres (2007, pp. 361ff).

6. Initiated in the 1980s by a group of French sociologists, most of ANT's notions and methodological tools were developed out of these scholars' empirical case studies in scientific laboratories. Over the years, ANT has become a popular approach for observing the creation of knowledge in science and technology as well as processes of social ordering and the creation of meaning in many different contexts.

7. ANT's originators borrowed the term "translation" from the French philosopher Michel Serres, in whose work "translation appears as the process of making connections, of forging a passage between two domains, or simply as establishing communication" (Brown, 2002).

8. Since the term "multistakeholder" was coined in the 1990s, its characteristics have been defined in a multitude of ways. For an overview on the use of the multistakeholder concept within and outside of internet governance research, see also Hofmann (2016).

9. There are many other ways in which ANT's principle of "generalised symmetry" impacts the analysis of multistakeholder governance processes. Mikkel Flyverbom, for instance, regards multistakeholder arrangements as "contingent assemblages under constant (re)construction", meaning a hybrid network of heterogeneous actors ranging from technologies to badges, emails, activists and many more actors (2011, p. 8). A full assessment of the potential implications of ANT's enlarged understanding of actors would go beyond the scope of this paper.

10. The long-held disagreement revolves around the interpretation of article 72(g) of the Tunis Agenda, which requests the IGF to "identify emerging issues, [...] and, where appropriate, make recommendations"; this, in turn, can be understood as a call for the production of outcome documents. For more details about this debate, see Malcolm (2008, pp. 355ff).

11. Due to its focus on dialogue rather than output, the IGF has often been described as a "talk shop" (Ang & Pang, 2010, p. 1). Many scholars and practitioners emphasise the discrepancy between IG discourse and praxis since most studies on multistakeholderism in IG centre on who contributes to discussions rather than who contributes to the actual practices of IG (Raymond & DeNardis, 2015, p. 588).

12. If a successful translation starts to structure the discourse of a larger group of actors, Maarten Hajer speaks of "discourse structuration", which "occurs when a discourse starts to dominate the way a given social unit [...] conceptualizes the world" (Hajer, 2005, p. 303).

13. The concept of "inscription" was first used by ANT scholar Madeleine Akrich to describe how engineers, inventors, manufacturers or designers "inscribe" their vision into the design of an object or the materiality of a technical artefact (Akrich, 1994).

14. In institutional theory, institutionalisation is considered the process through which institutions are (re)produced. The idea that institutions are constructed and transformed through discourse is developed in great detail in Phillips, Lawrence, & Hardy (2004) and Schmidt (2010).

15. This self-reinforcing function of discourses and institutions has been described frequently by organisational theory and neo-institutionalism (for instance, Schmidt & Radaelli, 2004; Scott, 2008, p. 149).

16. For a more detailed description of the inner workings and discursive exchanges of the WGEC and its relation to the ten-year review of WSIS, see Pohle (2014, pp. 25ff) and the dedicated mapping section on the Global Media Policy Platform.

17. Concerning the development of IG as a new field of political action during WSIS, it would be interesting to retrace the process through which the concept of enhanced cooperation was inscribed into the Tunis Agenda. But since the WSIS outcome documents were only negotiated by governments and not through a multistakeholder process, such an analysis would not comply with the conceptual objective of this research paper.

18. In the years following WSIS, the discrepancy between these two supposedly irreconcilable positions has not been overcome, as recurrent disputes on the issue in international forums like the ITU's 2012 World Conference on International Telecommunications (WCIT) have shown (Kleinwächter, 2012; Mueller, 2012).

19. For more details about several other attempts since 2006, in which the UN sought agreement on the controversial topic of enhanced cooperation, see Aguerre (2013).

20. Official details and meeting summaries of the WGEC are available on the UNCTAD website. The creation of the WGEC was preceded by a one-day open consultation, organised by the CSTD on 18 May 2012, and a long and controversial discussion during the CSTD session in May 2012, which ended with the proposal to mandate a working group (Kummer, 2012).

21. According to observers, the most active government representatives were from the USA, Sweden, Nigeria, Japan, Iran, Russia, Saudi Arabia and India (Kaspar, 2014).

22. The most active observers from civil society and the technical community were Anja Kovacs, Samantha Dickinson, Matthew Shears, Joana Varon, Lea Kaspar, Deborah Brown and Richard Hill (Doria, 2014b; Liddicoat, 2014). In addition, there were observers from governments not represented in the group, such as the UK and Canada.

23. The impact of different governments on the group's work also depended on whether they sent representatives from their capitals, who were experts on the topics discussed, or simply relied on their official delegates in Geneva, who did not necessarily have the same specific expertise and could, therefore, not influence the deliberations in the same way (Kaspar, 2014).

24. The concept of "obligatory passage points" (OPP) was introduced into ANT literature by Michel Callon, who describes it as one of different moments of translation. To become an OPP, an actor has to create a situation in which all actors have to interact with him in order to achieve their goal. As a result, the actor is in a privileged position as he can seek to translate his interests through these interactions (Callon, 1986).

25. The two observers who did most of the correspondence group's work were Lea Kaspar and Samantha Dickinson, while two group members, Joy Liddicoat and Phil Rushton, served as its official chairs. The correspondence group was open to everyone who had responded to a public call for interest. In addition, all members of the WGEC were automatically members of the group, although most of them did not engage in the work (Doria, 2014b; Kaspar, 2014).

26. For details about the deliberations during the WGEC’s third meeting and the difficulties encountered, see Doria (2014a), Liddicoat (2014) and Liddicoat, Doria, & Kaspar (2013).

27. This position was, for instance, taken by the delegate from Saudi Arabia but also by other actors, including from civil society.

28. This was, for instance, the position of the delegates from Japan and Finland but also of representatives from the technical community and from civil society.

29. This position was expressed, for example, by the delegates from Brazil but also by the civil society representative from the Association for Progressive Communications (APC).

30. One of the options considered by the WGEC was the creation of a platform, possibly under the auspices of the CSTD, through which governments could share information and resources (Kovacs, 2014).

Governing the internet in the privacy arena

This paper is part of 'Doing internet governance: practices, controversies, infrastructures, and institutions', a Special issue of the Internet Policy Review.

Introduction

For quite a while now, the spread of digital networking practices has fuelled discourses that render problematic the way privacy is destabilised by informational means (e.g. Schaar, 2009). The global surveillance disclosures triggered by Edward Snowden in 2013 and the involvement of prominent political actors (e.g., Merkel, Rousseff) and institutions (e.g., intelligence services, governments) have further boosted these discourses and the public re-negotiation of privacy. In this article we will deal with these controversial processes. Taking the 2013 disclosures as a starting point from which to follow the controversy (Pinch & Leuenberger, 2006), we focus on the "Struggles and Negotiations to Define What is Problematic and What is Not" (Callon, 1980). We hold that in answer to Snowden’s revelations numerous social worlds began to publicly specify problem definitions, and to propose solutions accordingly; some of the problem/solution packages were incommensurable and some were compatible, but all of them constituted what we call, in the style of Anselm Strauss (1978) and Adele Clarke (1991), the privacy arena: the virtual place where social worlds gather to argue and struggle around privacy, i.e., where they define the initial situation and the actors involved, specify the problem, and put forward diverse solutions.

Before specifying this approach in detail (1) we would like to point out that by focusing on controversies we take up a radically agnostic stance (Callon, 1986) towards privacy: we will completely abstain from specifying any a priori understanding of the concept and its normative weight. We know very well that such specifications fill enormous bookshelves, and elsewhere we have contributed to further filling them (e.g., Ochs & Ilyes, 2014; Büttner et al., 2016). Yet, here we will bracket our knowledge and focus exclusively on segments of the public renegotiation of privacy that emerged in answer to the surveillance disclosures. We will analyse two such segments: the Schengen/National Routing (SNR) proposal (2) and the German Parliamentary Committee investigating the NSA surveillance disclosures (NSA-Untersuchungsausschuss) (3). As will be explained, in the negotiations encountered in these segments privacy is generally set in relation to a whole web of values, interests, routines, distinctions etc. In this sense, what is at issue in the controversies is the sociotechnical set-up and governance of the internet at large. As our analysis reveals there are two oscillating governance styles to be identified in the privacy arena (as far as we have investigated), i.e. two ways of (more or less democratically) dealing with the issue. Their interplay results in an obstruction of the democratic search for appropriate problem definitions and according solutions. We will finally summarise and provide an analytic diagnosis concerning possible paths future developments within the privacy arena may take if the blockade remains (4).

Section 1: Methodological preliminaries

Our ultimate interest in this article is to demonstrate the validity of our methodology for studying the public renegotiations of privacy as processes pertaining to "technical democracy" (Callon, Lascoumes, & Barthe, 2011). Accordingly, our goal is to flesh out a framework that allows us a) to follow the controversies and renegotiations concerning privacy, and b) to analyse the democratic style of these struggles.

To do so, we take up a classic science and technology studies (STS) approach, namely the "Theory/Methods Package" (Clarke & Leigh Star, 2008) provided by social worlds/arenas theory. The latter goes back to Anselm Strauss who holds that contemporary social formations consist of a multitude of social worlds. These worlds are constituted by specific core activities differentiating a social world from the rest of the world; core activities are in turn based on material-symbolic techniques carried out by human organisms and their material contemporaries, and they unfold at (perhaps virtual) places (Strauss, 1978: 122). Thus, a social world is characterised by what is done there (core activity), how it is done (technique), and where it is done (place). In the course of establishing and stabilising a social world some type of organisation may emerge and processes of authentication and legitimation occur: actors negotiate definitions pertaining to the elements and practices making up the given world (Strauss, 1978: 122-126; 1982: 172-173). Thus, insofar as the building blocks of social formations (read: social worlds) are conceived as contested settings from the outset, it is collective processes of negotiating practices and sociotechnical order that are at the very heart of social worlds theory. However, when turning the lens from a single social world towards the wider set-up it is located within, the struggles and negotiations among social worlds appear; these constitute arenas, i.e. those sites where diverse social worlds gather around specific issues so as to engage in disputes, negotiations and struggles about the legitimate composition of the world, etc. (Strauss, 1993: 225-232).

In the case that interests us here, the issue of privacy constitutes an arena where social worlds renegotiate privacy’s status. The overall privacy arena is composed of various segments that break down the issue into specific sub-issues and treat the overall issue accordingly. Our research question concerns the democratic character of such negotiations. It is important to note that by using the term "democracy" we refer neither to a specific form of institutionalised government nor to political regimes disposing of specific institutional procedures. Instead, we use the term in the sense of John Dewey (1946) to denote societal learning processes. These involve the building of issue-centred publics and may feature several phases of defining groups and their interests, of building associations, naming experts, determining representatives, of problematising and devising solutions, of trial and error etc. Asking about the democratic character of the negotiations encountered in the privacy arena thus amounts to analysing the political features of the corresponding learning processes in a broader way than pursued in classic political science, insofar as the approach that we follow directs attention to public arguments that may or may not involve the conventional institutions of political (democratic) systems. 1

In what follows we present a "methodological showcase": we will provide brief analyses of two different segments of the privacy arena where specific problematisations/solutions are negotiated. As our ultimate interest lies in showing that the approach promoted here allows for specifying the democratic character of the arena negotiations, we will only go as deep into the case studies as is required to prove the validity of the methodology; and we will restrict the analysis to the minimum number of cases to be compared when following the comparative method (Glaser & Strauss, 1998).

Section 2: Schengen/National Routing (SNR)

The global surveillance disclosures have shown the general public quite plainly the dimensions of the digital crisis of privacy. What are the democratic response patterns emerging in reaction? To tackle this question we successively chose cases promising to feature analytically differing characteristics. 2 As a start, we selected the Schengen/National Routing (SNR) discourse as a segment of the privacy arena. The SNR problem/solution package came up as a direct reaction to the Snowden revelations (Dönni, Machado, Tsiaras, & Stiller, 2015). The proposal focuses on routing data packages in a territorially framed way, either within the Schengen area or within the nation state. Hence, it aims at providing a technical fix (routing) for a social problem (surveillance); we therefore expected to come across a constellation where the sociotechnical dimension becomes easily visible – a readily analysable STS case.

To see what the SNR proposal results in, we first have to understand that the internet as a "network of networks" is composed of so-called "autonomous systems" (AS) 3 run by private or public corporations (e.g. commercial Internet Service Providers (ISPs) or universities). When a data file is sent via the internet, the file is broken down into a number of data packages (IP packages). Those packages include information concerning their origin and the target address, and they are sent independently from each other (Tiermann & Goldacker, 2015: 14-15). When a file is sent from a device, its constituent IP packages are first routed through the AS the device is connected to; at some point an IP package transits into another AS with which the "original" provider (ISP or public entity) has a peering agreement (big carriers agree to mutually route each other’s traffic) or a transit contract (small providers pay large carriers for routing their IP packages).
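To make the mechanism concrete, the following minimal Python sketch models a file being broken down into independently addressed packages. It is purely schematic: the class IPPackage and its fields are our own shorthand, not the actual layout of an IP header.

```python
from dataclasses import dataclass

@dataclass
class IPPackage:
    src: str        # origin address
    dst: str        # target address
    seq: int        # position of this fragment within the original file
    payload: bytes  # the fragment itself

def packetize(data: bytes, src: str, dst: str, size: int = 1024) -> list[IPPackage]:
    """Break a file into packages; each carries its own addressing
    information, so none of them depends on the others while in transit."""
    return [
        IPPackage(src, dst, seq, data[offset:offset + size])
        for seq, offset in enumerate(range(0, len(data), size))
    ]
```

Because every package is self-addressed, each one can take its own route through the network – which is precisely what the routing rules discussed next determine.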

Thus, when travelling through the internet, the IP packages composing a file are likely to pass through a multitude of further AS, and they may thereby take different routes (Dierichs & Pohlmann, 2008): which way a package takes is not predetermined a priori, and there is no central navigation. Instead, packages are sent in stages, from one router to the next. Routing protocols define the way a package is sent on: within AS’ there are so-called Interior Gateway Protocols routing the data flow, such as the "Open Shortest Path First" protocol (OSPF); Exterior Gateway Protocols govern how data packages are sent on between AS’. When IP packages pass from router to router, the latter decide where to send a package next according to the criteria (speed, distance, efficiency) of the algorithms inscribed into the routing devices (Dierichs & Pohlmann, 2008), and according to routing tables indicating which networks can be reached via which paths (Tiermann & Goldacker, 2015: 15). It is here that the rules determining how data packages travel through the internet materialise: inscribed into protocols, routers and routing tables.
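As a rough illustration of how such criteria, once inscribed into a protocol, determine where packages travel, consider the following Python sketch of OSPF-style shortest-path selection. It is a strong simplification – real OSPF routers flood link-state advertisements and precompute entire routing tables – and the toy topology and router names are our own:

```python
import heapq

def shortest_path(graph: dict[str, dict[str, float]], src: str, dst: str) -> list[str]:
    """Dijkstra's shortest-path search over link costs, the algorithm that
    OSPF uses (in essence) to fill a router's routing table within an AS."""
    dist = {src: 0.0}
    prev: dict[str, str] = {}
    queue: list[tuple[float, str]] = [(0.0, src)]
    visited: set[str] = set()
    while queue:
        d, node = heapq.heappop(queue)
        if node in visited:
            continue
        visited.add(node)
        if node == dst:
            break
        for neighbour, cost in graph.get(node, {}).items():
            nd = d + cost
            if nd < dist.get(neighbour, float("inf")):
                dist[neighbour] = nd
                prev[neighbour] = node
                heapq.heappush(queue, (nd, neighbour))
    if dst != src and dst not in prev:
        return []  # destination unreachable
    # Reconstruct the path; the first hop after src is the routing-table entry.
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

# Toy topology: link costs between routers (all names purely illustrative).
links = {
    "A": {"B": 1.0, "C": 4.0},
    "B": {"C": 1.0, "D": 5.0},
    "C": {"D": 1.0},
    "D": {},
}
print(shortest_path(links, "A", "D"))  # ['A', 'B', 'C', 'D']; next hop from A is B
```

The point of the sketch is that the route is fully determined by the cost metric inscribed into the algorithm; change the metric – or, as in the SNR proposal below, constrain the set of admissible routers – and the packages travel differently.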

In the wake of the global surveillance disclosures it was proposed to transform established routing practices: "The idea was to restrict the routing of data between two systems located in country A to systems that are also located in country A. By never crossing into a second judicial territory, your information will be protected by the same privacy laws for its entire journey, bypassing possible snooping attempts from the outside. This concept can be easily expanded from a country to a number of countries" (Pohlmann, Sparenberg, Siromaschenko & Kilden, 2014: 156). The discursive rise of SNR in Germany began when René Obermann – at the time CEO of the German telecommunications company Deutsche Telekom – took the "Snowden revelations" as an opportunity to present national routing to the public as an easy-to-implement technical solution to a whole range of problems triggered by intelligence practices, among them the "privacy problem" (FAZ.net, 2013). In November 2013, Deutsche Telekom gained a strong ally for its proposal: the newly formed government coalition explicitly endorsed national routing in its coalition agreement (CDU/CSU/SPD, 2013, p. 147f.). Only a couple of months thereafter, the Federal Minister of Transport and Digital Infrastructure also recommended keeping data streams within the borders of the Schengen region (Welt.de, 2013). The alliance between the former state-run monopolist Deutsche Telekom and parts of the state seems natural enough, as the proposal allows both worlds to translate their interests into one shared overall interest. SNR at this point of the story had become an obligatory passage point (OPP). According to Callon (1986), the latter occurs in a network of relationships between all kinds of heterogeneous elements when an entity manages to position itself so as to redirect the interests of all other entities through its own interest: other entities’ interests are translated into one overall interest, the OPP. Once established, all entities henceforth have to pass through the OPP to pursue their own interests. This grants the entities controlling the latter a great deal of power.
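Read against the routing sketch above, the quoted proposal amounts to adding a territorial predicate to path selection. A hypothetical rendering, building directly on the shortest_path function and links topology from the previous sketch (the country codes are, again, our own illustration):

```python
def snr_filter(
    graph: dict[str, dict[str, float]],
    territory: dict[str, str],
    allowed: frozenset[str] = frozenset({"DE"}),
) -> dict[str, dict[str, float]]:
    """Remove every router and link outside the allowed territory before
    ordinary shortest-path selection runs -- a toy rendering of the SNR rule."""
    return {
        node: {n: c for n, c in neighbours.items() if territory[n] in allowed}
        for node, neighbours in graph.items()
        if territory[node] in allowed
    }

# Suppose router "C" from the earlier toy topology sits outside Germany.
territory = {"A": "DE", "B": "DE", "C": "US", "D": "DE"}
print(shortest_path(snr_filter(links, territory), "A", "D"))
# ['A', 'B', 'D']: the cheaper route via C is barred on territorial grounds
```

Whatever its political merits, the sketch makes visible what inscription means here: a single additional predicate in the routing logic would re-materialise the national border inside the infrastructure.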

In the case at hand, at the point where Deutsche Telekom and the German Federal Ministry of Transport and Digital Infrastructure (BMVI) managed to establish SNR as a provisional OPP, they were able to claim that all entities with an interest in the preservation of privacy had to consent to the SNR solution. SNR was a rather convenient OPP for both parties, for it allowed them to reproduce the entrenched routines of the worlds of industry and state: fencing data flows into the territory of the nation state amounted to reproducing the national container of modern society by infrastructural means, and promised to reinstate Deutsche Telekom's monopoly position. Large infrastructural projects such as this one are traditional undertakings of industrial modernity, which is why representatives of state and industry were able to capitalise on established contacts and habits.

We call the governance mode that we come across here democratic protectionism. Again, note that we use this term to characterise the style of negotiating privacy in the SNR segment. What is typical for this mode, firstly, is a strong tendency to continue with, and thus reproduce, institutionalised routines. It locates the threats to privacy and democracy outside the well-established and institutionalised routines of the domestic state and its industrial players. There is no reflexive questioning of domestic institutions, and the public is only called upon to nod the proposal through; the whole constellation does not consider giving the public a voice of its own to define the problem or specify the solution: the well-functioning state and its former monopolist will take care of the problem. The "don't worry, we'll take care of it" mentality of the proposal mirrors, secondly, protectionism's lack of transparency: the issue is settled in ministries and boardrooms.

The resistance that the proposal aroused is quite telling. Small and non-German providers took the view that a law prescribing SNR might harm them and hamper competition; the centralised solution was deemed tantamount to a re-launch of Deutsche Telekom's former monopoly. The conflict furthermore played out in Germany's main IT industry association BITKOM, which is constituted by German companies as well as by global players with subsidiaries in Germany. When BITKOM (2013) set out to compose a position paper in reaction to the surveillance disclosures in 2013, Deutsche Telekom pressed for the inclusion of a passage explicitly pleading for SNR. While the paper was still being discussed internally, however, US-based companies succeeded in attenuating the claim; the final, published version of the paper contains only a recommendation to examine SNR (Wirtschaftswoche, 2014). The conflict mirrors the schism between the modern routines and institutions pertaining to the nation state on the one hand, and globalised infrastructures and economic competition on the other.

Yet it seems that democratic protectionism has profound deficiencies in coming to terms with digitally transformed conditions. Quite in contrast to the BMVI's energetic endorsement, the Federal Ministry for Economic Affairs and Energy (BMWi) and the Federal Ministry of the Interior (BMI) raised concerns about the cost-benefit ratio of the proposal, and in some instances even opted against legal regulation. A press release by the BMWi explicitly pitted the "open and free Internet" against the "legal prescription" of SNR (BMWi, 2014, para. 2). The argument went that it was impossible to have "openness" within an SNR system. As matters stood, the algorithmic rules governing routers' decisions to forward a given IP packet had so far not based those decisions on whether or not the next possible router was located within national or Schengen territory. While the strategy of the SNR advocates implied inscribing this rule into the routing system, those who turned against it, although collectively referring to "openness", did so for very different reasons. Regardless of whether these opponents of SNR had a strategic, instrumental or moral interest in "openness", they could not accept SNR as an OPP and started turning against it. As a result, a rather improbable alliance of opponents emerged, including competition-minded German companies, the global players of the digital economy, the BMWi and BMI - and the Net-Community ("Netzgemeinde"), i.e. the social world constituted by the core practices of those internet users who establish a reflexive relationship to their own practices. For members of the Net-Community, internet usage is not (only) instrumental but meaningful, in that it partakes in members' conscious self-constitution. The Net-Community's main concern was that SNR might result in a fragmentation of the internet. Thus, whereas there was no agreement on what "openness" actually meant (competition, non-fragmentation), there was agreement on the way routers were not supposed to make decisions when transmitting IP packets: on grounds of national or Schengen territory. These adverse winds were already too much for the SNR proposal. The odds were stacked against it, and as a result the proposal did not appear in the Digitale Agenda 2014-2017, the German government's central strategy document on digitisation.

The point that we would like to drive home is that the SNR proposal was so indissolubly tied up with a democratic protectionist style of negotiation that, together, the proposal and the style of negotiation did not allow for the translation of a sufficient number of (diverse) interests, and therefore failed. The proposed solution was rather non-transparent, and it stipulated a whole set-up of roles for all those involved, including that of an "external threat" to the otherwise well-functioning democratic system. For the proposal to have been successful, the location of the enemy "out there" would have needed to be mirrored in the materiality of the routing system: inscribed into the routing tables, algorithms etc. governing the transmission of IP packets. SNR supporters would consequently have needed a manifold of allies to join the extensive task of re-engineering the technical structure of the routing system; yet the negotiation style of protectionism, which excludes from the outset, hardly seems apt to win such allies over. Under these conditions it is not easy to maintain the routines of the modern nation state, nor does it seem possible to neatly sort "external threats" from "internal shelter". Democratic protectionism has essential difficulties in governing the internet due to the non-reflexive premises it sets out from: we stay the same, while problematic agencies out there have to (be) change(d). 4

However, if it is the non-reflexive character of democratic protectionism that is responsible for its failure, the question arises whether there are arena segments featuring more reflexive modes of negotiation. To address this question, we next turn to a segment promising "more reflexivity".

Section 3: The German Parliamentary Committee investigating the "NSA spying scandal" (NSA-Untersuchungsausschuss)

Pursuing a comparative research strategy, we looked for a contrasting segment that promised to take up the surveillance disclosures from the angle of the domestic state's internal democratic system. We were also looking for a segment featuring a governance mode that scrutinises such routines before a wider public.

We opted to analyse the German Parliamentary Committee investigating the NSA spying scandal (NSA-PIC). Of course, parliamentary investigation committees in general form part of established democratic routines. The NSA-PIC in particular, by setting out from the NSA's activities, additionally seemed to shift the problem to the outside. Yet a closer look reveals that such a view is mistaken since, theoretically speaking, the role of investigation committees is precisely to reflect on (perhaps dysfunctional) institutionalised routines, especially those of government. In this spirit they imply the democratic system's ability not only to register institutional problems but also to fix them by initiating processes of self-transformation (e.g. Wissenschaftlicher Dienst des Bundestags, 2009, para. 2). Such committees are thus supposed to feature reflexivity and, insofar as the investigation is accomplished in the public gaze, transparency. The setting-up of the NSA-PIC mirrors how the perceived "external threat" triggered the whole investigation, but results in reflexive monitoring. This is already inscribed into the first sentence of the NSA-PIC's mandate, which states that the committee investigates the data collection activities of the so-called "Five Eyes" and the role of German authorities (governmental agencies, intelligence services, the Federal Office for Information Security) therein. There thus seems to be great potential in the NSA-PIC to overcome protectionism's non-transparent persistence in routines.

Specifying the social worlds involved in the arena, we may first note that the nomination request for the NSA-PIC was jointly issued by all parliamentary parties, those representing the government (conservatives and social democrats) as well as those in opposition (the Left Party and the Greens). The committee was likewise composed of members of all parties. Hence, the NSA-PIC is constituted by (I.) the social world of governmental parliamentarians (Regierungsfraktion) and (II.) the social world of oppositional parliamentarians; at the same time, (III.) the social world of government, i.e. the executive body of the state (Regierungsapparat), is the object of the investigation. The same goes for (IV.) the social world of the intelligence services, whose members are called upon to act as witnesses, whereas members of (V.) the social world of jurisdiction (constitutional law experts) are heard as experts. The social world of the Net-Community (VI.), meanwhile, acts as observer.

To what extent was this arena setting able to overcome protectionism, i.e. to induce reflexive change and provide for transparency? The NSA-PIC at first seemed to keep its promises, in that it addressed time and again the involvement of the German Federal Intelligence Service (BND) and other German authorities in the "Five Eyes'" surveillance activities (Deutscher Bundestag, 2014a, para. B. I.). The NSA-PIC's explicit mandate is not only to investigate authorities' illegitimate participation in NSA operations, but also to identify the BND's and governmental bodies' own transgressions. The NSA-PIC in fact did so. For example, the collaboration between the NSA and the BND under the code name Eikonal attracted considerable attention and press coverage. Initially unveiled by the media, the operation is publicly investigated in the NSA-PIC to this day (SZ.de, 2014). Reports stated that, due to the BND's inability to guarantee perfect filtering of internet data streams, data sets that might very well have included data on German citizens were passed on to the NSA. Additionally, the BND reportedly used highly questionable means to obtain permission for this operation from the responsible parliamentary control commission (Deutscher Bundestag, 2014c, p. 75f.). It is transgressions such as these that were disclosed to the public.

Moreover, the whole process effectively induced reflexive change, too. For instance, in November 2015 the government coalition came to an agreement on the reform of the BND, including a strengthening of parliamentary control over the intelligence service (Götschenberg, 2015). At this point of the analysis, the NSA-PIC seemed genuinely to overcome democratic protectionism: institutional routines were called into question via the system's own remedy procedures. Instead of aiming to reproduce past structures (territorial society) under contemporary conditions (transnational data flows) by technical means (routing), there was a strongly constitution-bound mode of identifying problems and solving issues. This is exemplified by a group of legal experts who, when providing a statement before the committee, were quite explicit about the need to modify the law, including basic rights. One of these experts, former Constitutional Court judge Hoffmann-Riem (2014: 55-56), explicitly stated in a paper that territory-bound jurisdiction reaches its limits, given that the routing of data packets is highly contingent on factors other than territory. However, the experts did not conclude that data flows were to be pushed back into the boundaries of the nation state; instead, the latter's legal basis was to change. Again, the NSA-PIC arena's potential to induce reflexive change in a transparent way becomes visible, and it is this potential that fundamentally differs from the mode of democratic protectionism.

For us, the occurrence of this potential indicates that a different governance mode is at work in the NSA-PIC arena. We call this mode democratic constitutionalism. It strongly appeals to normative democratic principles (e.g. fundamental rights), not only to render the NSA-PIC legitimate (Deutscher Bundestag, 2014b, 1821 A), but also to bring internal problems to the table without discarding the established system as a whole; instead, its core values (whatever they might be in a given instance) are reflexively applied.

This finding is not surprising, as the governance mode of democratic constitutionalism is by and large in line with the way the NSA-PIC is formally set up. What is striking, however, is that this mode does not manage to dominate the segment but is massively hampered by the protectionist mode that re-emerges here as well. Protectionist governance practices and discourses in the NSA-PIC include the treatment of the internet as an external cause and as an issue to be dealt with not by changing oneself but by protecting oneself (Deutscher Bundestag, 2014b, 1816 D). Of still more relevance is the fact that, subsequent to the official statement of government spokesman Steffen Seibert (2015) that the BND in fact had "technical and organisational deficiencies", a discourse emerged that called for strengthening the BND's independence from the NSA. As a consequence, some even demanded that the BND be equipped with more financial resources to expand the institution. And while we cannot provide evidence that this was indeed triggered by the "independence-from-the-NSA" discourse, the staff of the BND and other intelligence services was increased by 500 between June 2013 (the Snowden revelations) and November 2015 (Netzpolitik.org, 2015).

Our interpretation of these events is that the negotiation of privacy in the NSA-PIC oscillates between the modes of democratic protectionism and constitutionalism. Connecting this diagnosis back to the social worlds analysis, we can see that, as a result of this oscillation, there are committee members who are torn in two directions at once: those who belong to the governing coalition are a) part of the forces that strive to render events transparent and induce reflexive change, while, as members of the very social world that is bound to come under scrutiny (government), they simultaneously form b) part of the antipodal forces. While the social world of the Net-Community does not act as a political pressure group, but mainly observes and registers, the social world of jurisdiction might appeal to political decision makers - but this is insufficient to tip the balance in favour of the constitutionalist forces. 5 In this sense, what the analysis reveals are the limits of constitutionalism: procedures in the investigation committee remain, in one way or another, bound to the routines of the established institutions pertaining to the territorial state. Constitutionalist governance time and again gets stuck: while this mode can radically call institutionalised governmental routines into question, it is not able to also substantially modify these routines. Part of the problem is that if constitutionalism did so, it would potentially threaten its own conditions of existence.

Thus, while some potential for reflexive, transparent change can be detected in the NSA-PIC segment of the privacy arena, the segment still seems to be bound too strongly to the routines of the nation state. This raises a question for future research: are there arena segments that feature comparable reflexivity and transparency while being less closely tied to the nation state?

Conclusion

We would like to make a case for the methodology applied here by briefly summarising the main points made above. First of all, the methodology seems appropriate for studying the public renegotiation of privacy as a way of doing internet governance, for it allowed us to identify key parameters of the democratic styles shaping these negotiations: transparency vs. opaqueness, and persistence in routines vs. embracing reflexive change. While social worlds/arenas theory enables one to consider technical, legal, political and other governance "solutions" on a level playing field, the comparative strategy also permits contrasting cases according to this set of parameters.

Future research might continue the search for arenas that promise transparency and reflexivity without being as hampered by persistence in the routines of the nation state. Drawing on the parameters in a more analytical vein, however, also helps to speculate systematically on further governance modes to be encountered within the overall privacy arena. If democratic protectionism (non-transparency plus persistence in routines) and constitutionalism (transparency plus persistence in routines) continue to generate obstruction, two future paths logically remain. If actors not bound to democratic routines (e.g. economic ones) step in by non-transparently negotiating backroom decisions with an enfeebled politics, negotiations may acquire a post-democratic character (non-transparency plus non-boundedness to democratic routines). The more optimistic option would be the rise of an experimental governance mode (transparency plus non-boundedness) that neither starts from fixed problem definitions nor provides ready-made solutions. Which modes will prevail or mix in the future, only time will tell; the methodology presented here, however, will enable us to understand the trajectories of the privacy arena.

References:

Bitkom (2013, October 31). Positionspapier zu Abhörmaßnahmen der Geheimdienste und Sicherheitsbehörden, Datenschutz und Datensicherheit. Retrieved from https://www.bitkom.org/Publikationen/2013/Positionen/Positionspapier-zu-Abhoermassnahmen/BITKOM-Positionspapier-Abhoermassnahmen.pdf

BMWi (2014, June 13). Staatssekretär Kapferer: Offenes und freies Internet erhalten, Pressemitteilung vom 13.06.2014. Retrieved from http://www.bmwi.de/DE/Presse/pressemitteilungen,did=642114.html

Büttner, B., Geminn, C., Hagendorff, T., Lamla, J., Ledder, S., Ochs, C., & Pittroff, F. (2016). Die Reterritorialisierung des Digitalen: Zur Reaktion nationaler Demokratie auf die Krise der Privatheit nach Snowden. Kassel: Kassel University Press. Retrieved from http://www.uni-kassel.de/upress/online/OpenAccess/978-3-86219-106-2.OpenAccess.pdf

BMWi/BMI/BMVI (2014, August 20). Digitale Agenda 2014-2017 (English version). Retrieved from http://www.digitale-agenda.de/Content/DE/_Anlagen/2014/08/2014-08-20-digitale-agenda-engl.pdf

Callon, M. (1980). Struggles and negotiations to define what is problematic and what is not. The socio-logic of translation. In K.D. Knorr-Cetina, R. Krohn & R. D. Whitley (Eds.), The Social Process of Scientific Investigation: Sociology of the Sciences Yearbook (pp. 197-220). Dordrecht, Holland: Reidel.

Callon, M. (1986). Some Elements of a Sociology of Translation: Domestication of the Scallops and the Fishermen of St. Brieuc Bay. In J. Law (Ed.), Power, Action, and Belief: A New Sociology of Knowledge? (pp. 196-233). London, England: Routledge & Kegan Paul.

Callon, M., Lascoumes, P. & Barthe, Y. (2011). Acting in an Uncertain World. An Essay on Technical Democracy. Cambridge, MA/London: MIT Press.

CDU/CSU/SPD (2013). Deutschlands Zukunft gestalten. Koalitionsvertrag zwischen CDU, CSU und SPD, 18. Legislaturperiode. Retrieved from https://www.bundesregierung.de/Content/DE/Anlagen/2013/2013-12-17-koalitionsvertrag.pdf?_blob=publicationFile&v=2

Clarke, A. (1991). Social Worlds Theory as Organizational Theory. In D. Maines (Ed.), Social Organization and Social Process: Essays in Honour of Anselm Strauss (pp. 17-42). Hawthorne, NY: Aldine de Gruyter.

Clarke, A., & Leigh Star, S. (2008). The Social Worlds Framework: A Theory/Methods Package. In E.J. Hackett, O. Amsterdamska, M. Lynch & J. Wajcman (Eds.), The Handbook of Science and Technology Studies (3rd ed., pp. 113-137). Cambridge, MA/London: MIT Press.

Deutscher Bundestag (2014a). Drucksache 18/843, 18. Wahlperiode. Antrag der Fraktionen CDU/CSU, SPD, DIE LINKE und BÜNDNIS 90/DIE GRÜNEN: Einsetzung eines Untersuchungsausschusses. Retrieved from http://dip.bundestag.de/btd/18/008/1800843.pdf

Deutscher Bundestag (2014b). Plenarprotokoll 18/23. Stenografischer Bericht. 23. Sitzung. Rede: Untersuchungsausschuss zur Überwachungsaffäre, Plenarsitzung. Retrieved from http://dipbt.bundestag.de/dip21/btp/18/18023.pdf

Deutscher Bundestag (2014c). Transcript: Bundestag Committee of Inquiry into the National Security Agency [Untersuchungsausschuss ("NSA")], Session 24. WikiLeaks release: 12 May 2015. Retrieved from https://wikileaks.org/bnd-nsa/sitzungen/2401/WikiLeaksTranscriptSession2401fromGermanNSA_Inquiry.pdf

Dewey, J. (1946). The Public and its Problems. An Essay in Political Inquiry. Denver: Swallow.

Dierichs, S. & Pohlmann, N. (2008). So funktioniert Internet-Routing. Retrieved from http://www.heise.de/netze/artikel/So-funktioniert-Internet-Routing-221495.html?view=print

Dönni, D., Machado, G.S., Tsiaras, C., & Stiller, B. (2015). Schengen Routing: A Compliance Analysis. In S. Latré, M. Charalambides, J. Francois, C. Schmitt & B. Stiller (Eds.), Intelligent Mechanisms for Network Configuration and Security: Proceedings (pp. 100-112). Heidelberg et al.: Springer.

Faz.net (2013, November). Im Gespräch: René Obermann und Frank Rieger. Snowdens Enthüllungen sind ein Erdbeben. Frankfurter Allgemeine Zeitung. Retrieved from http://www.faz.net/aktuell/feuilleton/debatten/ueberwachung/im-gespraech-rene-obermann-und-frank-rieger-snowdens-enthuellungen-sind-ein-erdbeben-12685829.html

Glaser, B. G., & Strauss, A. L. (1998). Grounded Theory: Strategien qualitativer Forschung. Bern, Switzerland: Hans Huber.

Götschenberg, M. (2015). Einigung auf Geheimdienstreform. Koalition nimmt BND an die Leine. Tagesschau. Retrieved from https://www.tagesschau.de/inland/bnd-reform-101.html

Hoffmann-Riem, W. (2014). Freiheitsschutz in den globalen Kommunikationsinfrastrukturen. Juristen-Zeitung, 69, 53-63.

Lamla, J. (2013). Arenen des Demokratischen Experimentalismus. Zur Konvergenz von nordamerikanischem und französischem Pragmatismus. Berliner Journal für Soziologie, 23(3-4), 345-365.

Netzpolitik.org (2015). 500 neue Stellen für BND, Verfassungsschutz & Co. Retrieved from https://netzpolitik.org/2015/500-neue-stellen-fuer-bnd-verfasungsschutz-co/

Ochs, C., & Ilyes, P. (2014). Sociotechnical Privacy: Mapping the Research Landscape. Tecnoscienza. Italian Journal of Science & Technology Studies, 4(2), 73-91.

Pinch, T., & Leuenberger, C. (2006). Studying Scientific Controversy from the STS Perspective. Paper presented at the EASTS Conference "Science Controversy and Democracy". Retrieved from https://www.researchgate.net/publication/265245795_Studying_Scientific_Controversy_from_the_STS_Perspective

Pohlmann, N., Sparenberg, M., Siromaschenko, I., & Kilden, K. (2014). Secure Communication and Digital Sovereignty in Europe. In H. Reimer, N. Pohlmann & W. Schneider (Eds.), ISSE 2014 Securing Electronic Business Processes: Highlights of the Information Security Solutions Europe 2014 Conference (pp. 155-169). Heidelberg et al.: Springer.

Schaar, P. (2009). Das Ende der Privatsphäre. Der Weg in die Überwachungsgesellschaft. München: Goldmann.

Seibert, S. (2015). Fernmeldeaufklärung des Bundesnachrichtendienstes, Press release. Retrieved from https://www.bundesregierung.de/Content/DE/Pressemitteilungen/BPA/2015/04/2015-04-23-bnd.html

Strauss, A. L. (1978). A Social World Perspective. Studies in Symbolic Interaction, 1, 119–128.

Strauss, A. L. (1982). Social Worlds and Legitimation Processes. Studies in Symbolic Interaction, 4, 171-190.

Strauss, A.L. (1993). Continual Permutations of Action. Hawthorne, NY: de Gruyter.

Süddeutsche Zeitung (2014, October 4). Codewort Eikonal - der Albtraum der Bundesregierung. Retrieved from http://www.sueddeutsche.de/politik/geheimdienste-codewort-eikonal-der-albtraum-der-bundesregierung-1.2157432

Tiermann, J., & Goldacker, G. (2015). Vernetzung als Infrastruktur – Ein Internet Modell. Berlin: Fraunhofer FOKUS.

Welt.de (2013). Deutschland muss eine Aufholjagd starten, Interview mit Alexander Dobrindt. Retrieved from http://www.welt.de/politik/deutschland/article123773626/Deutschland-muss-eine-Aufholjagd-starten.html

WirtschaftsWoche (2014). Echte Zerreißprobe. WirtschaftsWoche, 8/2014, 52-54.

Wissenschaftlicher Dienst des Bundestags (2009). Aktueller Begriff. Untersuchungsausschüsse. Retrieved from https://www.bundestag.de/blob/190568/ce3840e6f7dbfe7052aa62debf812326/untersuchungsausschuesse-data.pdf

Footnotes

1. The framework can only be sketched here in general terms. For a detailed blueprint see Lamla (2013). Readers familiar with the STS literature may note that this approach falls into line with pragmatist-minded STS investigations of the relation between technoscience, the public and democratic politics, as accomplished by Callon, Latour, Marres and others.

2. What we present here is work in progress; while we limit our presentation to two cases we have also analysed a third one, the European General Data Protection Regulation.

3. In 2008 Dierichs and Pohlmann estimated that the internet consisted of about 110,000 AS (Dierichs & Pohlmann, 2008).

4. Interestingly, the basic strategy that aims to maintain the sovereignty of the nation state under digitised circumstances has not entirely disappeared, but was somewhat shifted. SNR may be understood as an attempt to reterritorialise information flows that threaten to exceed certain territories, and while the routing strategy was discredited, digital sovereignty remains one of the goals the government strives to achieve in the Digitale Agenda (BMWi, BMI, and BMVI, 2014: 4). In this sense, we might say that the strategy of reterritorialisation managed to survive in a new guise once it was no longer tied to routing (for more information, see Büttner et al., 2016: 149-151).

5. Note that, as "constitutionalism" refers to a governance mode, it may not be identified with one particular social world. Accordingly, it is not only the judges who foster constitutionalist forces, but also, say, the green party (opposition) member of parliament Konstantin von Notz who frequently argues in a constitutionalist style.

The problem of future users: how constructing the DNS shaped internet governance


This paper is part of 'Doing internet governance: practices, controversies, infrastructures, and institutions', a Special issue of the Internet Policy Review.

Introduction

Like so many engineers building the Advanced Research Projects Agency Network (ARPANET), Elizabeth "Jake" Feinler struggled each day to get the network into some working order. Feinler was head of the Network Information Center (NIC) at Stanford Research Institute (SRI). Because the NIC functioned as the administrative clearinghouse for ARPANET and the early internet, Feinler had to keep track of everything, but without a standardised addressing system to help. In the archives at the Computer History Museum (CHM) in Mountain View, California, I found a collection of printer paper that Feinler stapled together in June 1973. This hand-written directory, which Feinler titled "Changes or Reverifications", lists what sites were online, the point of contact at each site, and even minutiae such as the current phone numbers of the liaisons for the contacts (SRI ARC/NIC Records, Lot X3578-2006). One can imagine how unwieldy this task would become, as institutions connected sites to ARPANET at a rapid pace, often installing multiple computers, each of which required a unique identifier. Feinler's desk reference, a historical precursor of the Domain Name System (DNS), is evidence of a basic quandary that all early network designers faced. As ARPANET's ill-conceived addressing schema fueled frustration among the networking community, designers started doing the work of internet governance to solve a fundamental problem of design: the need to address future users.

Through a critical reading of documents circulated among ARPANET and early internet engineers, this article shows 1) how "the problem of future users" motivated the social construction of the DNS, and 2) how this historical process itself constitutes the preformation phase of internet governance. To do this, I draw from two theoretical approaches, showing how a social constructivist critique can inform path dependent theories of technological and organisational lock-in. On the one hand, social constructivists “claim that technological artifacts are open to sociological analysis, not just in their usage but especially with respect to their design and technical ‘content’.” (Bijker, Hughes, and Pinch, 1987, p. 4). On the other hand, path dependence theory “stresses the importance of past events for future action or, in a more focused way, of foregoing decisions for current and future decision making” (Sydow, Schreyögg, and Koch, 2009, p. 690). Whereas social constructivists are often concerned with issues of ideology, theorists of path dependence are concerned with self-reinforcing social and economic mechanisms that guide technologies and organisations toward “increasing stability and lock-in” (Dobusch and Schüßler, 2012, p. 618). Despite their differences, both approaches regard historical evidence as “process data”, which “consist largely of stories about what happened and who did what when—that is, events, activities, and choices ordered over time” (Langley, 1999, p. 692). This conceptual dovetail opens a window, allowing one to consider how ideology—understood as values supported by material relations—can itself become a self-reinforcing mechanism of path dependence, setting constraints for the DNS, for ICANN, or for any other technological or organisational development.

In considering the social construction of the DNS as the preformation phase of internet governance, I show how Feinler’s mundane task of ordering the network by hand marks a catalyst in the development of design priorities and management functions that ICANN would inherit. Following an "identity crisis" that emerged during the shift from ARPANET protocol to the internet’s TCP/IP suite, designers needed to construct a standardised addressing schema. Initially, they did not solve the problem of future users by calling for the outright establishment of governmental institutions. First, they suppressed the visibility of numerical addresses, thereby hiding the historically contingent development of core infrastructure. Next, they harnessed the power of extensibility, choosing top-level domain names associated with generic social categories. These choices ushered new tasks of ordering the network into a preexisting discourse of social bureaucracy, which structured everyday work relations, emerging institutional affiliations, and the future ideology of internet governance.

ARPANET’s identity crisis: names, numbers, and initial constraints

The installation of ARPANET marks the triggering event in the development of universal digital addressing as embodied in the DNS, and as such constitutes the preformation phase of governance functions related to ICANN. Jörg Sydow, Georg Schreyögg, and Jochen Koch write that "history matters in the Preformation Phase", because in “organizations initial choices and actions are embedded in routines and practices” and “reflect the heritage [. . .] making up those institutions” (2009, p. 692). ARPANET became operational late in 1969, with sites at the University of California, Los Angeles (UCLA), the Stanford Research Institute (SRI), UC Santa Barbara (UCSB), and the University of Utah (UTAH). ARPANET was designed according to a two-layer architecture, allowing engineers to update, debug, or completely replace entire sections of the network without crashing the system. Even though it afforded designers much needed flexibility, this two-layer architecture established the material conditions for what I think of as “ARPANET’s identity crisis”, a debate among the engineering community about how best to order numerical identification in relation to site-specific names.

At first, designers assigned network addresses to sites according to the order in which machines were installed, setting a precedent that would become problematic as ARPANET grew. The first site, UCLA, had the address 1, while the fourth site, UTAH, had the address 4. The fact that UCLA was assigned address 1 had no overarching design rationale. Janet Abbate (1999) explains that ARPA chose it as the first site because Leonard Kleinrock and his students at UCLA were experimenting with "a mathematical tool called queuing theory to analyze network systems" (p. 58). This historical accident is exemplary of the fact that, as Sydow et al. write, “Since organizations are social systems and not markets or natural entities, triggering events in organizations are likely to prove to be not so innocent, random, or ‘small’” (2009, p. 693). Even though it was random and erased from internet infrastructure, UCLA-1 set a precedent that would soon come to annoy many in the ARPANET community and motivate an ideological reorientation.

Designers did not find the numerical identification of the host layer satisfactory. In 1973, Feinler’s colleague at the NIC, Jim White, wrote that "the fact that [. . .] Network software employs numbers to designate hosts, is purely an artifact of the existing implementation of the Network, and is something that the human user should NEVER see or even know about" (Gee Host Names and Numbers are Swell, May 11, 1973, SRI ARC/NIC Records, Lot X3578-2006, CHM). An addressing schema based solely upon the order in which machines were installed could not help but emphasise the historical contingency of ARPANET’s initial design philosophy.

During these early years, the somewhat coincidental process by which ARPANET was assembled also drove heated debates involving site-specific naming conventions. Peggy Karp of the MITRE Corporation proposed a set of standardised names in 1971, citing problems related to the fact that each site "employs their own special list" of host mnemonics (Standardization of Host Mnemonics, Request for Comments 226), as evidenced by the hand-written desk reference introducing this article. Karp (1971) proposed a list of standardised host names, suggesting, for example, that host 1 remain "UCLA" and host 2 become "SRIARC" (ARC standing for the NIC's original and at that time still official department name, the Augmentation Research Center). Karp's proposal limited site names to their institutional affiliation, specifying neither the type of computer running at each address, nor each site's often more popular nickname.

Karp's proposed site names generated a flurry of discussions throughout 1971, prompting Robert Braden of UCLA to declare, "Please, let's not perpetrate systems programmers' midnight decisions on all future Network users!" (Host Mnemonics Proposed in RFC 226, Request for Comments 239). He objected to "UCLA" because it does not specify the host computer, suggesting instead "UCLAS7 or UCLANM", because UCLA's Network Measurement Center (NMC) ran a Sigma 7 computer. Braden also writes that "SRIARC" is "a poor choice[,]" because "everybody calls it the NIC," and so suggests the name "SRINIC". Even though Braden's own proposals read as a programmer's midnight decisions, he is right to point out that mnemonics based upon the installation of ARPANET infrastructure were not "fully satisfactory", writing, "It is a set of historical accidents, and shows it." Braden ends up recommending that names be standardised according to codes at the NIC, based on its ability to function as an institutional reference.

Designers reached consensus around this idea, and the NIC accepted the task of standardising host names, which would fortify its institutional function of ordering the network into the foreseeable future. Writing on behalf of the NIC, Richard W. Watson (1971) emphasised the need to "recognize the expanding character of the Network, with the potential eventually of several hundred sites" (More on Standard Host Names, Request for Comments 273). The NIC standardised official site mnemonics based upon the rough structure of “institution name-computer” as initially proposed by Jon Postel (Standard Host Names, Request for Comments 236, 1971), then a graduate student at UCLA.

From late 1973 into early 1974, as the NIC secured centralised authority over the official host name list, a new project rumored to be underway stirred anxieties across the network. Up to this point, the work of ordering numerical addresses with site names functioned relative to ARPANET alone. Realising this might have been short-sighted, one concerned designer wrote, "There has been no general discussion of multi-network addressing—although there is apparently an unpublicized Internetworking Protocol experiment in progress—and some other convention may be more desirable" (L.P. Deutsch, Host Names On-Line, Request for Comments 606, 1973). This "unpublicized Internetworking Protocol experiment" brought the entire ARPANET identity schema into question. In dealing with ARPANET's identity crisis, network designers realised that core infrastructural development must be suppressed from the user interface, an insight that would direct the early work of doing internet governance through its influence on the design philosophy of the DNS.

Constructing the DNS: three mechanisms of positive governmental feedback

The two-layer architecture of ARPANET could not accommodate network growth, offering designers a crash course in how to avoid negative governmental feedback. The idea of feedback or self-reinforcement is central to path dependence theory. Dobusch and Schüßler write, "Specifically, we argue that the mechanisms of positive feedback or self-reinforcement can be specified as a necessary condition for path dependence" (p. 618). In short, lock-in could not occur without such self-reinforcing mechanisms. Moreover, Dobusch and Schüßler argue that as a conceptual construct, feedback "can act as an integrating factor—as a conceptual bridge to other theories that explain evolutionary processes characterized by increasing stability and lock-in" (p. 618). One such concept for which feedback can act as a bridge is ideology, itself a way social constructivists describe the self-reinforcing relationship between social values and work relations. Before the DNS could become locked-in as the internet's social interface, designers had to renegotiate their values in order to foster positive governmental feedback.

Facing a future of exponential growth, designers adopted a specific orientation toward the past. To solve the problem of future users, designers not only needed to organise numbers in relation to names; they had to build a new layer of internet infrastructure, one that would not be deemed historically accidental in an ever-shifting internetwork landscape. Working in response to the constraints of ARPANET infrastructure, designers constructed the DNS by negotiating three mechanisms of positive governmental reinforcement, implementing: 1) extensible field addressing; 2) domains of shared cognition; and 3) a hierarchical authority. The social construction of the DNS shows how the initial phase of path dependence is never fully open, and is often restricted by the self-conscious goal of making a technology adoptable before others have been able to adopt it.

1. Extensible field addressing

In 1977, Jon Postel, who had become a researcher at the University of Southern California (USC), proposed a solution to the addressing problem by finding a way to order numbers and names hierarchically, unlike the schema associated with ARPANET. Postel (1977) wrote, "The addressing scheme should be expandable to increase in scope when interconnections are made between complex systems[,]" and concluded that the best solution to the problem "is to always represent the address by fields" (Extensible Field Addressing, Request for Comments 730). Fields are discrete categories that structure a database. The organisation of fields in a database indicates how categories of data relate to each other. Databases can accommodate a specific instantiation of data by positioning it in its proper field. For example, Postel (1977) proposed this addressing schema: Network / Switching Machine / Host / Message-Identifier. The original address for UCSB, the third node on the ARPANET, would read: ARPANET / 3 / 3 / [message-id]. Postel (1977) considered this hierarchical structure "a natural way" of addressing, because "the most general field should come first with additional fields specifying more and more details" (Extensible Field Addressing, Request for Comments 730), it seeming more natural, I suspect, in comparison to the prior ARPANET addressing, a non-hierarchical schema that nobody liked.
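A minimal sketch may help to see what "representing the address by fields" buys. The dict-based representation below and the extra "mailbox" field are our own illustration, not part of RFC 730:

```python
def make_address(**fields):
    """Python dicts preserve insertion order, so fields passed from the
    most general to the most specific keep their hierarchy."""
    return dict(fields)

# UCSB, the third ARPANET node, in the schema Postel proposed:
ucsb = make_address(network="ARPANET", switching_machine=3, host=3,
                    message_id="[message-id]")

# Extensibility: a later, more detailed layer is simply one more field
# appended at the specific end of the hierarchy.
ucsb["mailbox"] = "user"  # hypothetical field, not in RFC 730
print("/".join(str(value) for value in ucsb.values()))
# -> ARPANET/3/3/[message-id]/user
```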

Extensibility refers to the ability to accommodate future infrastructural development seamlessly at the user interface. An extensible field model of address afforded designers the opportunity to layer over the physical history of network design and all the accidents that came with it. Designers could choose what categories of data each field would embody. They could label fields according to named concepts such as "network", “host”, or even “domain”, and put unique numerical identification into its place. Extensible field addressing facilitated the creation of a database that allowed designers the choice of how network entities could be represented at the user interface.

By embodying the value of extensibility, the DNS was designed to interpellate all future users, ushering new sites into their respective social categories like so many Matryoshka nesting dolls. In the year leading up to the installation of the master table, Paul Mockapetris (1983), who outlined the technical specifications of the DNS, wrote that while the DNS "database will initially be proportional to the number of hosts using the system", it "will eventually grow to be proportional to the number of users on those hosts as mailboxes and other information are added to the domain system" (Domain Names—Implementation and Specification, Request for Comments 883). This database would itself become the discursive body of network infrastructure, structured by domains of shared cognition.

2. Domains of shared cognition

Designers developed a new addressing schema based upon the extensible field model, reaching consensus around the concept of "Internet Name Domains". Deciding what a domain actually was, however, required much discussion. D.L. Mills first proposed this system. Mills (1981) writes that since "every internet host is uniquely identified by one or more 32-bit internet addresses and that the entire system is fully connected[,]" a "hierarchical name-space partitioning can easily be devised to deal with this problem" (Internet Name Domains, Request for Comments 799). Mills discussed this schema in relation to email. He suggests the structure "<user>.<host>@<domain>", with specific network mnemonics, such as ARPA or COMSAT, placed in the domain field. Like Postel's proposal, this domain model also positions networks at the top of the address hierarchy. Even though this model suppresses the visibility of core infrastructure, it still maintains a site-specific, historical reference through the "host" field.

Considering the rapid growth of internetworking, David D. Clark of MIT offered a rather contemplative response. Clark (1982) begins, "It has been said that the principal function of an operating system is to define a number of different names for the same object, so that it can busy itself keeping track of the relationship between all of the different names" (Names, Addresses, Ports, and Routes, Request for Comments 814). He goes on to argue that network protocols such as TCP/IP are no different. He suggests that the "scope of the problem" had not yet been accurately judged, writing,

One of the first questions one can ask about a naming mechanism is how many names one can expect to encounter. In order to answer this, it is necessary to know something about the expected maximum size of the internet. Currently, the internet is fairly small. It contains no more than 25 active networks, and no more than a few hundred hosts. This makes it possible to install tables which exhaustively list all of these elements. However, any implementation undertaken now should be based on an assumption of a much larger internet. The guidelines currently recommended are an upper limit of about 1,000 networks. If we imagine an average number of 25 hosts per net, this would suggest a maximum number of 25,000 hosts.

Even with what we now know to have been low estimates, Clark argues that the potential breadth of the internet requires the complete suppression of infrastructural fields, such as "<network>", at the directory interface in order to implement an acceptable management strategy.

Having come to understand core infrastructure as historically accidental to the user interface, designers recuperated domains by redefining them according to abstract concepts of network governance rather than according to site installation. Postel and Zaw-Sing Su (1982) of SRI defined a domain as "a region of jurisdiction for name assignment and of responsibility for name-to-address translation" (The Domain Naming Convention for Internet User Applications, Request for Comments 819). The intent of a domain-based addressing schema, they wrote, “is that the Internet names be used to form a tree-structured administrative dependent, rather than a strictly topology dependent, hierarchy.” In defining domains as spaces of administration and jurisdiction, engineers opened a way to organise the directory interface according to categories of bureaucratic discourse. In other words, defining domains as spaces of network governance allowed a new set of names to restructure existing sites, by occupying the top of the extensible field hierarchy.

While TCP/IP became the universally adopted protocol suite in January 1983, the DNS became operational on 15 December 1984, when the NIC acquired the master table of top-level domain names and their associated servers (Postel, Domain Name System Implementation Schedule—Revised, Request for Comments 921). Designers reached consensus around five top-level domains: GOV, EDU, COM, MIL, and ORG. (Initially, ARPA itself was a sixth top-level domain, although it was restricted to network experimentation.) While this decision had no direct technical rationale, Postel and Reynolds wrote that the intention of the system was "to provide an organization name [. . .] free of undesirable semantics" (Domain Requirements, Request for Comments 920, 1984). Names indicating what might become historical accidents of network design were avoided: UCLA, for example, could not reside in a top-level field. A specific institution would reside within its conceptual category, 'Education' or 'Government' or 'Commerce', and so on. The philosophy of extensible field addressing allowed designers to position institutional modes of social identification at the top of the hierarchy, ushering the DNS into a preexisting discourse of governmental functions that exist independently of the internet itself.
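How these categories make room for future sites can be sketched as a tree in which registration always descends from a generic category; the nested structure and the hosts below are invented examples, not the NIC's actual master table.

```python
# The five 1984 top-level categories, with invented institutions inside.
NAME_TREE = {"GOV": {}, "EDU": {"UCLA": {}}, "COM": {}, "MIL": {}, "ORG": {}}

def register(tree, labels):
    """Walk from the most general label downwards, creating nodes as
    needed, so every new site lands inside an existing category."""
    node = tree
    for label in labels:  # e.g. ["EDU", "MIT", "AI1"]
        node = node.setdefault(label, {})
    return tree

register(NAME_TREE, ["EDU", "MIT", "AI1"])
# An institution cannot occupy a top-level field; it lives inside one:
print("UCLA" in NAME_TREE["EDU"], "MIT" in NAME_TREE["EDU"])  # -> True True
```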

Because the DNS has social concepts at the top of its extensible field hierarchy, the system could both reflect the society into which it was implemented, while simultaneously making room for future users outside the ARPANET community. With networks addressed according to governmental concepts rather than institutions themselves, designers made what they perceived to be a pragmatic decision: build a flexible, layered network organised by a hierarchy of institutional signifiers that already exist in the world. The DNS provided common terms through which the general public could understand how the internet is organised in relation to society, proving to be a solution to ARPANET’s identity crisis. With the DNS, designers had a stable addressing system in place. Now all they had to do was to find a way to make it work on an everyday basis, and into the future.

3. Hierarchical authority

In order to govern future users, Paul Mockapetris introduced the concept of authority. He writes, "Although we want to have the potential of delegating the privileges of name space management at every node, we don't want such delegation to be required" (Mockapetris, Domain Names—Concepts and Facilities, RFC 882, 1983). If such delegation were required, each network or specific institution would have final authority over its users, leading the system back into the realm of "historical accident" that designers needed to avoid. Instead, Mockapetris recommended investing authority in a name server, which would have "authority over all of its domain until it delegates authority for a subdomain to some other name server". He proposed principles of authority that require a name server administrator to "register with the parent administrator of domains" and also to "identify a responsible person[,]" someone "associated with each domain to be a contact point for questions about the domain, to verify and update the domain related information, and to resolve any problems (e.g. protocol violations) with hosts in the domain" (Mockapetris, Domain Names—Concepts and Facilities, RFC 882, 1983).
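The principle that authority covers a whole domain until it is explicitly delegated can be rendered in a few lines. The zones and contact names below are invented, and actual DNS delegation involves name servers rather than a flat lookup table; this is only a sketch of the logic.

```python
# Invented zones: a contact is authoritative for its whole domain until
# a more specific delegation exists.
ZONES = {
    "EDU": "edu-admin",             # authoritative for all of EDU ...
    "UCLA.EDU": "ucla-hostmaster",  # ... except where it has delegated
}

def authority_for(name: str) -> str:
    """Return the contact of the most specific zone enclosing the name."""
    labels = name.split(".")
    for i in range(len(labels)):
        zone = ".".join(labels[i:])
        if zone in ZONES:
            return ZONES[zone]
    raise LookupError("no authority found for " + name)

print(authority_for("CS1.UCLA.EDU"))   # -> "ucla-hostmaster" (delegated)
print(authority_for("MATH.MIT.EDU"))   # -> "edu-admin" (no delegation yet)
```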

Mockapetris borrowed the term "responsible person" from Jon Postel. In order to establish a domain, Postel (1981) wrote, “There must be a responsible person to serve as a coordinator for domain related questions” (The Domain Names Plan and Schedule, Request for Comments 881). He goes so far as to cordon off a special section in order to define “responsible person” precisely:

An individual must be identified who has authority for the administration of the names within the domain, and who takes responsibility for the behaviour of the hosts in the domain in their interactions with hosts outside the domain.

[. . .]

If some host in a domain somehow misbehaves in interactions with hosts outside the domain (e.g. consistently violates protocols), the responsible person for the domain must be able to take action to eliminate the problem.

Postel conceives the "responsible person" not simply as a steward, but as a potential disciplinary authority, someone who has the power to decide what constitutes “misbehavior” and then to “eliminate the problem” accordingly. That Mockapetris adopted this term, too, suggests that in terms of internetwork administration, responsibility meant something very specific. A responsible person seems to be someone who shares values akin to those of internet designers. An irresponsible person, someone who “consistently violates protocols”, for instance, is someone who does not share values akin to designers, someone who might very well warrant an administrative elimination.

Organisational representatives who used the internet for specific projects or business activities initially filled the role of "Responsible Persons", although this ended up contributing to the administrative difficulties it was intended to resolve. Archival documents from the NIC show that the "Responsible Person" (RP) model was not effective in managing network access, largely due to the definition of RPs as organisational figureheads. Even though she no longer had to order the network by hand, Feinler still had to order RPs, which proved just as difficult. In a hand-written memo, Feinler (1985) wrote,

The Responsible Persons are the wrong people to track who has permission to use the network. They are people such as very important PIs or Vice Presidents of companies and the like—people who deal in concepts and macro mgt; not administrative minutia. They either forget or outright refuse to do the job and yet they are listed as contacts. (Memo on Responsible Persons, SRI ARC/NIC Records, Lot X3578-2006, Computer History Museum)

When someone sought a password to access network services, "Responsible Persons" were the ones charged with managing this. However, due to the fact that many RPs were “very important PIs or Vice Presidents”, this sort of “administrative minutia” fell through the cracks. This problem was compounded because, in this early system, “passwords [were] invalidated after 6 months[,]” at which point users had “to get permission again from the RP”. Feinler (1985) wrote, “Unfortunately the RPs usually let their own passwords expire and can’t reactivate their users” (Memo on Responsible Persons, SRI ARC/NIC Records, Lot X3578-2006, CHM). In defining “Responsible Persons” according to their institutional status in disparate organisations, the NIC was often unable to help users effectively get network access.

Another administrative problem emerged regarding the fact that multiple "Responsible Persons" were often affiliated with a single host computer. Users tended to think that host sites facilitated network access, not realising that “Responsible Persons” of specific organisations or projects served that function. In an email to Feinler, Bob Baker (1985) wrote,

There has been a lot of confusion caused by people failing to understand that the registration [. . .] is organization oriented and not host or site oriented. Thus some people have the mistaken idea that to get registered they should contact someone connected with the host they use, instead of the "responsible person" for the organization they belong to. (How to Announce TACACS, January 2, 1985, SRI ARC/NIC Records, Lot X3578-2006, CHM.)

In another memo to Feinler pointing out the main problems of the "Responsible Person" model, Johanna Landsbergen (1986) wrote, “When the password for the Responsible Person of an Org expires and he/she does not remember their old password”, no one at the organisation has the ability to “get a new password” (ARPANET TACACS TOOL PROBS, January 15, 1986, SRI ARC/NIC Records, Lot X3578-2006, CHM). Feinler (1986) raised these issues at multiple NIC meetings, her notes indicating that “Responsible Persons” are “administratively confusing”, for it was difficult to find the correct “RP if there are 4 at one org” (Notes, SRI ARC/NIC Records, Lot X3578-2006, CHM). With the DNS having signified network topology according to institutional concepts, it was difficult to clearly identify “Responsible Persons” as stable points of contact.

The association of "Responsible Persons" with the organisations they represent also led to ambiguities in the construction of network domain databases. In the draft of a proposed "User Database Host Schema" from Bolt Beranek and Newman (BBN), John V. DelSignore includes a glossary that attempts to distinguish the terms "user", "person", and "organization". He writes,

The word "user" is generally used to indicated [sic] a person that is or has logged into the database tool and is performing or performed a certain act or command. Also ‘user’ indicates the real-life person associated with a person record.

[. . .]

occasionally the words "person" or "organization" appear in sentences such as "The user created the person". We realize the users create "person records" and not "persons" per se. The terms "person" and "organization" are often used interchangeably with "person record" and "organization record" for the sake of brevity. (John V. DelSignore, Jr., ARPANET TAC Access Control User Database Host Schema, September 16, 1986, SRI ARC/NIC Records, Lot X3578-2006, CHM)

Feinler (1986) was right to conclude "that as a registration scheme it is an administrative nightmare" (Notes, SRI ARC/NIC Records, Lot X3578-2006, CHM), and that the RP model of internet governance could not hold.

Much like numbered host identification that developed in an historically contingent manner, the "Responsible Persons" model of network administration could not efficiently accommodate future users by virtue of the fact that RPs were associated with specific projects at specific organisations at specific moments in time. By the end of 1986, designers abandoned the RP model, instead situating host administrators as points of contact for network users to register in the DNS and acquire network access. The RPs still functioned as gatekeepers; however, they no longer had to manage passwords, directly correspond with users seeking access, or maintain records of host activity. The host administrator took on this task, working as a liaison between users, organisational representatives, and the NIC. The introduction of the host administrator role brought the division of labour in line with the DNS addressing schema. To replace the historical accident of RPs, as well as to manage existing and future sites as if they had always been expected, designers created a new responsibility—one of maintaining the internet’s order—abstracted from specific projects.

As early network designers introduced the concept of authority through the social construction of the DNS, they catalysed the development of bureaucratically independent internet governance functions like IANA, which paved the way for later institutions like ICANN. More than a material base, the DNS provided the conceptual structure for a hierarchical regime of internet governance that centralised administrative power within discursive categories inherited from historically naturalised social categories. Through the social construction of the DNS, early network designers better ensured that future users would themselves maintain the DNS as a foundation for governing the global internet.

Conclusion

By using social constructivist historical analysis in tandem with path dependence theory, this article shows how early network designers built the DNS through harnessing three modes of positive governmental feedback: 1) extensibility, which afforded ways to hide the contingent development of network infrastructure; 2) domains of shared cognition, which allowed non-expert users to navigate the internet in a socially legible manner; and 3) hierarchical authority, which established the initial structure for "internet governance" as an institutionalised function, and ensured that future users could themselves maintain the system and extend it further. After the installation of the DNS, Feinler continued as head of the NIC until 1989, the NIC transformed into InterNIC in 1993, and ICANN finally assumed all responsibilities of InterNIC with its foundation in 1998.

The lock-in of ICANN as the internet’s primary governing body is indeed related to macro-level forces in the global political economy of the 1990s. In his article "ICANN between technical mandate and political challenges" (2000), Wolfgang Kleinwächter argues that ICANN became locked-in with its incorporation due to four macro-level problems: 1) “the need to demonopolise” (p. 556) registrars during the dot-com boom; 2) the need to settle “disputes between trademark holders and domain name holders” (p. 558), which led to ICANN’s Uniform Dispute Resolution Policy (UDRP); 3) the need to recognise country-code top-level domains (cc-TLDs), codified in ICANN’s Governmental Advisory Committee (GAC); and 4) the need to create new generic top-level domains (gTLDs), which motivated ICANN to work in tandem with the World Intellectual Property Organization (WIPO). Kleinwächter persuasively articulates the immediate historical context of ICANN’s global lock-in, but such macro-level forces are themselves historically related to the micro-level decisions of the people who built the DNS toward governmental ends. In a way, Kleinwächter himself reveals how ICANN’s incorporation solved the problem of future users as it re-emerged in the 1990s, though from the vantage of macro-level policy analysis.

As more people consider "internet governance" beyond its institutional focus, an idea promoted by scholars including Laura DeNardis (2012) and Francesca Musiani (2014), finding new ways to conceptualise the term will become more important. Understanding the prehistory of institutional bodies is one way to consider how people “do” governance in response to everyday working pressures. This article shows how internet governance has never been given, and has always been done. Internet governance is not a product or result of organisations like ICANN. Even though it serves an important governance function, ICANN is itself based on the historical contingencies of ordering the early internet. ICANN must also constantly respond to how people use the internet in increasingly varied ways, and as more users understand the political significance of “doing” internet governance, complexities related to the problem of future users will only compound, as users themselves attempt to create—or alter—control structures of the internet.

In designing and implementing the DNS, early designers paved the way for new technocratic functions that ICANN would inherit, although they did not initially call for the institution of governance bodies per se. Rather, they worked through technical issues of the finest complexity, and in so doing developed perspectives on issues such as the nature of historical contingency, the jurisdiction of virtual space, and the concept of authority itself. In learning how to recognise the historical contingency of network design, embracing extensibility, and reifying a new division of labour, early network designers ensured that future users could not only navigate the internet, but could also keep the system in working order on an everyday basis. The technocratic relations that rigidified around the DNS fostered values related to a particular brand of universality, one supported through the potentially infinite extension of social genera, as evidenced today in ICANN’s slogan: "One world. One Internet."

References

Abbate, J. (1999). Inventing the Internet. Cambridge: MIT Press.

Baker, B. (1985). How to Announce TACACS (SRI ARC/NIC Records, Lot X3578-2006, Computer History Museum).

Bijker, W. E., Hughes, T. P., & Pinch, T. (1987). The Social Construction of Technological Systems: New Directions in the Sociology and History of Technology. Cambridge: MIT Press.

Braden, R. (1971). Host Mnemonics Proposed in RFC 226 (Request for Comments 239).

Clark, D.D. (1982). Names, Addresses, Ports, and Routes (Request for Comments 814).

DelSignore, J.V. Jr. (1986). ARPANET TAC Access Control User Database Host Schema (SRI ARC/NIC Records, Lot X3578-2006, Computer History Museum).

DeNardis, L. (2012). Hidden Levers of Internet Control: An Infrastructure-based Theory of Internet Governance. Information, Communication and Society 15(5): 720-38.

Deutsch, L.P. (1973). Host Names On-Line (Request for Comments 606).

Dobusch, L. & E. Schüßler. (2012). Theorizing path dependence: a review of positive feedback mechanisms in technology markets, regional clusters, and organizations. Industrial and Corporate Change 22(2): 617-647.

Feinler, E. (1973). Changes and Reverifications (SRI ARC/NIC Records, Lot X3578-2006, Computer History Museum).

Feinler E. (1985). Memo on Responsible Persons (SRI ARC/NIC Records, Lot X3578-2006, Computer History Museum).

Feinler, E. (1986). Notes (SRI ARC/NIC Records, Lot X3578-2006, Computer History Museum).

Karp, P. (1971). Standardization of Host Mnemonics (Request for Comments 226).

Kleinwächter, W. (2000). ICANN between technical mandate and political challenges. Telecommunications Policy 24: 553-563.

Landsbergen, J. (1986). ARPANET TACACS TOOL PROBS (SRI ARC/NIC Records, Lot X3578-2006, Computer History Museum).

Langley, A. (1999). Strategies for Theorizing from Process Data. The Academy of Management Review 24(4): 691-710.

Mills, D.L. (1981). Internet Name Domains (Request for Comments 799).

Mockapetris, P. (1983). Domain Names—Implementation and Specification (Request for Comments 883).

Musiani, F. (2014). Practice, Plurality, Performativity, and Plumbing: Internet Governance Research Meets Science and Technology Studies. Science, Technology & Human Values 40(2): 272-286.

Postel, J. (1984). Domain Name System Implementation Schedule—Revised (Request for Comments 921).

Postel, J. (1983). The Domain Names Plan and Schedule (Request for Comments 881).

Postel, J. (1977). Extensible Field Addressing (Request for Comments 730).

Postel, J. (1971). Standard Host Names (Request for Comments 236).

Postel, J. & J. Reynolds. (1984). Domain Requirements (Request for Comments 920).

Postel, J. & Z.S. Su. (1982). The Domain Naming Convention for Internet User Applications (Request for Comments 819).

Sydow, J., G. Schreyögg & J. Koch. (2009). Organizational Path Dependence: Opening the Black Box. The Academy of Management Review 34(4): 689-709.

Watson, R.W. (1971). More on Standard Host Names (Request for Comments 273).

Westheimer, E. (1971). Network Host Status (Request for Comments 252).

White, J. (1973). Gee Host Names and Numbers are Swell (SRI ARC/NIC Records, Lot X3578-2006, Computer History Museum).

Acknowledgments

The author wishes to thank Andrea Hackl, Leonard Dobusch, and Francesca Musiani for their generous guidance in the revision process. The author also wishes to thank Sara Lott, Senior Archives Manager at the Computer History Museum, who helped in the early selection of materials.

Disclosing and concealing: internet governance, information control and the management of visibility


This paper is part of 'Doing internet governance: practices, controversies, infrastructures, and institutions', a Special issue of the Internet Policy Review.

Introduction

The ubiquity of digital technologies and the datafication of many domains of social life raise important questions about governance. Not so long ago, digital technologies were mainly seen as ‘devices of information’ and not ‘agencies of order’ (Durham Peters, 2015), but this has certainly changed over the last decade. As processes of digitalisation and ‘datafication’ (Mayer-Schönberger and Cukier, 2013) come to shape most societal domains, it makes less and less sense to think of digital technologies as tools or as separate (cyber)spaces. Digital transformations increasingly make headlines, define political agendas and shape research priorities. In research circles, the nexus between digital technologies and governance – whether in the shape of (technical) coordination, (political) regulation or (social) ordering (Hofmann, Katzenbach, and Gollatz, 2016) – has emerged as a key concern and laid the foundation for the field of internet governance and more recent discussions of the role of data and algorithms in social and political affairs (Boyd and Crawford, 2012; Gillespie, 2012).

Scholarly work on internet governance has had a rapid and remarkable trajectory in trying to keep up with technological and political developments in this area. This research emerged as a set of reflections on technology and ideology offered from within the engineering labs and close-knit communities developing the technological innovations that we have come to know as the internet (Hafner and Lyon, 1996). As these technological developments spilled into more public, global contexts, internet governance research became occupied with questions about international agreements, participation, rights and related questions, primarily by engaging insights from international relations and political science (DeNardis, 2014; Mueller, 2010). As this paper suggests, we stand at the threshold of yet another major transformation when it comes to the role of digital technologies in societal and political affairs, which requires that internet governance scholars once again calibrate the conceptual tools and analytical approaches used to guide their work. At this point, questions about the entanglement of technology and social practices and the ordering effects of processes of digitalisation and datafication deserve more attention, and this requires that we extend the emerging engagement with insights from sociology and science and technology studies (STS). In particular, I suggest that internet governance research needs to explore how digital, datafied infrastructures afford and condition ordering through information control, the management of visibilities and the guidance of attention. Articulating the central role of visibility practices, such as transparency, surveillance, secrecy and leakages, in the digital age, this paper sets an agenda for internet governance research that makes processes of seeing, knowing and ordering central. To this end, the paper suggests a conceptual vocabulary for studying information control and managed visibilities as forms of ordering, and provides some empirical illustrations of such studies.

The paper makes two contributions to the emergent engagement with STS and sociological perspectives in internet governance research and work on the societal and political ramifications of digital technologies in political science more broadly. The first contribution is an overview of developments in the internet domain that make it pertinent to push beyond existing orientations and theoretical approaches. This takes the shape of a historical overview of the trajectory of internet governance research, with particular focus on the underlying assumptions about the internet, the primary objects of study, and the conceptions of governance reflecting different theoretical and disciplinary foundations.

In the field of internet governance, most work has explored governance arrangements, institutional developments and the effects of interactions among public and private actors in the emergence of the internet as a matter of concern in global politics (DeNardis, 2009; 2014; Deibert et al., 2010; Mueller, 2010). For anyone trying to understand the public, political and scholarly significance of the internet, these works have been ground-breaking and central to the emergence of this field of research, as well as the public understanding of the importance of the issues. But to push our field forward, we need more theories, analytical vocabularies and empirical orientations that take into account how digital and datafied infrastructures are ingrained in and shape social and cultural practices that go beyond what is normally associated with internet governance (such as regulatory bodies, standard-setters and technical communities) and are central to a much wider range of ordering processes (for similar arguments, see for instance Flyverbom, 2011; Mansell, 2012; Franklin, 2013; Musiani, 2015).

The second contribution of the paper is to articulate what science and technology studies and, in particular, sociologies of knowledge and visibility (Shapiro, 2003; Brighenti, 2007; Rubio and Baert, 2012; Flyverbom et al., 2016) have to offer when it comes to investigating how digital technologies relate to governance. It suggests, in particular, that a focus on the dynamics and effects of information control and visualisation is a valuable starting point, and provides some empirical illustrations of what may be gained by approaching internet governance issues in this manner. Focusing on information control and the management of visibilities may open up new avenues for research and make different objects of analysis central, in particular when it comes to understanding the role of digital infrastructures in the shaping of social and political realities. As suggested by Gillespie (2016), we still lack "the language to capture the kind of power" that digital infrastructures involve, so we need to explore alternative conceptualisations and analytical vocabularies that can invigorate studies and theories of digital technologies in new and exciting ways.

Early trajectories of internet governance research

Even though early discussions of the internet often considered it to be a separate space outside the reach of traditional forms of governmental regulation - for instance by referring to it as ‘cyberspace’ - important scholarship has shown that it has always been subjected to multiple forms of governance (Lessig, 1999; Goldsmith and Wu, 2006; Musiani et al., 2016). Studies of internet governance emerged alongside the technological developments they set out to investigate and were entangled with the people, organisations and ideologies shaping this domain. That is, many of those pursuing research on internet governance were also so-called ‘inter-nauts’, i.e., members of the technical communities building and coordinating the internet and/or directly involved in policy development in this area. Reflecting this symbiosis, early scholarly discussions were focused on technical and operational issues (Ziewitz and Pentzold, 2014), and often made technical arguments for policy approaches, such as the need for the governance of technological networks itself to be networked (Klein, 2002; Kleinwächter, 2000). Most of this research focused on a narrow set of organisations, in particular the Internet Corporation for Assigned Names and Numbers (ICANN), and other bodies directly involved in standard-setting, coordination and other operational matters (Klein, 2002; Mueller, 2002).

These early discussions of internet governance had a strong focus on showing the uniqueness of the internet and how the (libertarian) logics underpinning its development conditioned particular forms of governance. These approaches suggested that the internet has a distinct decentralised technological architecture, which makes it difficult to govern, and focused on the tension between a decentralised, global technological innovation and established forms of regulation based on national boundaries and sovereignty. Such discussions often articulated a resistance to top-down control and governmental interventions, and a focus on open standards and other technical features that allow for interoperability, peer production, innovation and unimpeded data flows. To sum up, these early approaches had a primary focus on technical coordination, individual organisations and the (libertarian) ideologies shaping the area, and conceptualised internet governance mainly as a matter of technical coordination to be kept separate and safe from statist regulation. These features served the purpose of establishing the internet as different from other technological developments, and of separating it from ordinary social and political life.

With commercialisation, intense political struggles, and the growing importance of the internet as an infrastructure for trade and socio-cultural formations, these orientations are no longer clear-cut or even pervasive, but they still form an important foundation for what remains a controversial and problematic relation between the internet and established, (inter)governmental approaches to governance.

As discussions of the internet and its consequences for economic, political and cultural developments grabbed the attention of the public, scholars and policymakers, work on internet governance also took on new challenges and themes. In particular, questions about processes of institutionalisation, inter-governmental arrangements and stakeholder participation, as well as policy issues such as privacy, security and rights, became more central.

The growing focus on the internet as a phenomenon with wide-reaching societal consequences was also reflected in the understandings of governance underpinning research in this emergent field. Moving beyond the focus on technical forms of coordination and operational bodies allowed for issues like the intersections between established statist and intergovernmental forms of regulation and more controversial multi-stakeholder approaches to be addressed (Anderson, Dean and Lovink, 2006; Mueller, 2010; Flyverbom, 2011). Also, this research highlighted the global nature of internet governance as an issue area with ramifications for a wide range of more established policy concerns (DeNardis, 2009). This involved linking the internet to questions of inclusion, development, rights and security (Chadwick, 2006; Jørgensen, 2006). In terms of how governance was arranged, this research highlighted that there was no institutionalised regulatory system in place, and few established authorities or international agreements like those we see in other areas. It also stressed the complexity of internet governance, where some parts are steered by a myriad of technical, private, standards-based and other ad hoc forms of regulation, some parts are handled by established international organisations and others are addressed through more informal governance arrangements, such as multi-stakeholder dialogues without negotiation or decision-making power.

Reflecting the maturation of the internet and the growing focus on its ‘governability’ (Hofmann, Katzenbach and Gollatz, 2016), much of this research focused on the sites and organisations where internet governance was addressed, such as the World Summit on the Information Society and related bodies, and questions about participation and inclusion (Mueller, 2010; Singh and Flyverbom, 2016). Largely, internet governance research was born and bred in disciplinary fields with a focus on institutionalisation (what institutions and governance regimes are emerging to handle the global governance and politics of the internet?), the state (how do state and non-state actors coordinate or clash in this area and what are possible effects of public or private forms of governance?) and pragmatic politics (how is the internet emerging as a key asset and object of regulation?). This was also reflected in the theories underpinning this work, where most insights and conceptual approaches were adopted from the field of international relations and addressed issues such as the role of public and private actors, intergovernmental processes, networks and institutional developments.

A similar point is made by Hofmann, Katzenbach, and Gollatz (2016), who argue that the work we normally associate with internet governance has focused on regulation, which can be understood as institutionalised, deliberate and goal-oriented interventions by public or private actors seeking to shape behaviour, solve policy problems and implement rules. This is in some ways odd, since very influential work such as Lessig’s stressed precisely the need to understand the regulation of the internet as an interplay among such different forces as laws, norms, markets, and architecture or technical codes (Lessig, 1999). This is partly due to disciplinary differences in theoretical and empirical orientation. But it is still puzzling that so little research has captured the relations among these four forms of governance, or offered analytical frameworks that may help us understand their entanglement (exceptions include Bowrey, 2005; Flichy, 2007; Mansell, 2012).

Taken together, these discussions articulated the need for intergovernmental negotiations and multi-stakeholder dialogues about the internet, and brought up important questions about inclusion, institutionalisation, and rights. Still, most work held on to the idea that the internet should be thought of as a separate space with a need for novel governance arrangements rather than extensions of statist approaches. But these discussions served the important purpose of showing the importance of the internet for political affairs and the need for thorough research in this area. Building on these foundations while acknowledging their limitations is central as we move forward in attempts to grasp emerging developments and develop new vocabularies for the study of governance of and by digital technologies.

Digital infrastructures and ordering - in search of new approaches

With the emergence of ubiquitous digitalisation and datafication (Mayer-Schönberger and Cukier, 2013), digital technologies have become infrastructures for large parts of social life and an increasing number of human activities take a digital form or leave extensive digital traces. By using digital technologies, we control global value chains and production processes, engage in politics and connect with friends and family. The infrastructures making all this possible consist of multiple digital platforms, tracking systems and other largely invisible ways of sourcing and aggregating data, as well as advanced algorithms and visualisation techniques. As digital technologies become ubiquitous, it seems that we need research that picks up new kinds of issues and discussions beyond those we normally associate with internet governance research. My suggestion is that we need to shift from the focus on how to govern digital transformations, ‘the internet’ or ‘cyberspace’ to the question of how these govern. The internet is not just an object in need of governance, but itself constitutive of governance – a means of ordering (Flyverbom, 2011; Ziewitz and Pentzold, 2014; Hofmann, Katzenbach, and Gollatz, 2016).

For scholars interested in the intersection of digital technologies and governance, basic sociological questions about the individual, organisational and societal ramifications of these developments should be central. That is, how do digital transformations shape fundamental issues and mundane practices, such as how we produce knowledge, how we decide what is important, and how we work and think? But most public discussions focus on more spectacular issues - the increasing financial resources of internet companies, concerns about states tracking and profiling citizens, and the effects of digital disruption on traditional industries and institutions. As a result, not enough work addresses the materiality and the possibilities for action offered by digital infrastructures and platforms. To the degree that we even think of their existence, such infrastructures come across as neutral or innocent, and we are more concerned with the interests and aims of the companies and other actors building and taking advantage of them. This focus means that we refrain from studying a wide range of issues that could be considered relevant for, and part of, internet governance. Also, from within the field, a number of scholars have called for more comprehensive and fine-grained accounts of the relations between digital technologies and governance, and the complex entanglements of public and private actors, humans and technologies (DeNardis, 2012; Musiani, 2015; Hofmann, Katzenbach, and Gollatz, 2016). At the core of this critique is an emergent realisation that governance involves mundane activities and forms of ordering that are overlooked if we focus too much on the role of formal institutions and deliberate attempts to regulate.

One way to rethink the meaning of internet governance is to conceptualise governance in terms of ordering, not regulation. To this end, insights from Foucauldian and related sociologies of governance are a useful starting point (Dean, 1999; Law, 2003). Such analytical vocabularies are more agnostic when it comes to explanations about causes and structures, more focused on addressing relational interactions, and more practice-oriented than traditional work on internet governance (see Flyverbom, 2011 for a more elaborate discussion). Broadly sociological approaches of this kind increasingly mark discussions about uses, design, digital infrastructures and materiality in accounts of digital transformations. Engaging with these more encompassing research agendas could help establish links and conversations across disciplines and phenomena of relevance to our field. The point is not only that we need to open up the concept of governance to include more subtle and emergent forms, but also that more attention to social practices and ordering processes highlights a set of discussions that have been marginal in previous work. A range of sociological perspectives and themes, like the ones discussed above, are relevant for this purpose.

As digital technologies and data become ubiquitous and infrastructural, so that it makes less sense to think of ‘cyberspace’ as a separate and independent space, we have to shift our attention to the more subtle and intricate ways they shape individual, organisational and societal possibilities for action. To this end, we need more accounts of what digital technologies are, afford and do when it comes to shaping practices, interactions and visibilities. These more subtle forms of ordering that digital technologies create are also forms of internet governance and need to be included in our conceptual approaches (Hofmann, Katzenbach, and Gollatz, 2016: 7).

Studies of ordering: information control and the management of visibilities

Insights from sociology and science and technology studies are useful starting points if we want to reinvigorate studies of internet governance. What I aim to do here is to stress how sociological accounts of visibility (Brighenti, 2007; 2010) have a lot to offer when it comes to articulating how digital technologies facilitate and constrain our possibilities for action. Visibility, information control and knowledge are central aspects of power and governance, and deserve more scrutiny, particularly in the age of big data, autonomic computing and radical transparency. Novel studies of ordering could start exploring how digital transformations shape the way we see, know and govern. This extended research agenda for internet governance studies would make questions about information control and visibility management central, and study how processes of digitalisation and datafication contribute to ordering by making certain phenomena and practices visible, and others invisible, in ways that come to guide our attention and contribute to social and political ordering. Drawing on insights from science and technology studies, affordance theory and sociology, such approaches help us grasp how digital technologies afford and condition ordering through the production of visibilities and the guidance of attention. The argument that there is an intimate relationship between seeing, knowing and governing (Foucault, 1988; Brighenti, 2007) deserves further scrutiny because digitalisation and datafication fundamentally shape how we make things visible or invisible, knowable or unknowable and governable or ungovernable. Some work on the use of digital technology has engaged with questions about visibilities and invisibilities (Treem and Leonardi, 2012) and with questions of transparency (Weber, 2008), but without considering it as a part of the broader governance effects of digital technologies. I suggest that a more extensive focus on visibilities invites us to explore how digital technologies condition ordering, and how our attention is guided as a result of these dynamics. That is, systems of governance or forms of ordering always revolve around particular ways of seeing and perceiving, involve distinctive ways of thinking and questioning and work through concrete practical rationalities and techniques of intervention (Foucault, 1988; Dean, 1999).

All types of knowledge production and visualisation techniques have implications for what we see as important and possible to govern, and to unpack these we can rely on conceptual discussions of affordances and the material foundations of knowledge production (Hutchby, 2001; Leonardi, 2012; Hansen and Flyverbom, 2015). Such approaches invite us to engage with questions about the material infrastructures and sources of data that are used for purposes of governance, about the political rationalities that digital technologies help institutionalise, and the patterns of exclusion and inclusion involved when social processes and phenomena are made ‘algorithm-ready’ (Gillespie, 2012; Madsen et al., 2016). The affordances of digital technologies when it comes to ordering can be explored at the individual, organisational and societal level, and the following section offers three examples.

Governing through visibilities

Having articulated the conceptual argument, let me offer some illustrations of the possible shape of such studies. As suggested by Walters (2012: 52), we need to explore the "new territories of power" associated with “the entanglement of the digital, the informational and the governmental”. As stressed above, there are many valuable ways to explore such encompassing questions about how digital technologies govern and are governed, and how ordering plays out as a result of digital transformations. Even if we focus on information control and visibilities, the list of possible topics is extensive, and cuts across individual, organisational, material and societal levels of analysis. In this context, I can only hint at a few of these, and the following three suggestions are in no way exhaustive. But I hope they illustrate some of what could be explored by engaging these ideas about information control and visibilities in future research.

Transparency reports

One question is how our understanding of the phenomenon ‘internet governance’ is conditioned by the kinds of information and disclosures that make it visible and knowable in the first place. As noted above, internet governance plays out in a bewildering range of settings, involves multiple actors and encompasses both intergovernmental, private and technical forms of governance. But we rarely think about how these processes are about managing visibilities in ways that condition particular forms of ordering. One emergent form of internet governance is what internet and telecommunications companies refer to as ‘transparency reports’ and related attempts to show how powerful actors seek to control digital spaces. These reports disclose what data companies compile, the requests for information that states make, and how states filter and sometimes shut off the internet. Such reports thereby respond to an increased focus on transparency when it comes to data aggregation, covert uses of data, as well as filtering, surveillance and censorship in digital infrastructures. But they also distract our attention from the roles and responsibilities of internet companies. Transparency reports may list the number of requests made by individual governments, but they do not provide insight into the agreements or relationships between states and internet companies. They are also a very particular kind of reporting, which may cater to demands for openness and disclosure about government surveillance and censorship, but provide a very specific response in a preformatted and selective shape. What is particularly significant in this context is that transparency reports seek to articulate the value of numbers-based approaches to governance, and challenge (what internet companies consider to be) the overly emotional reactions that policymakers often rely on (Flyverbom, forthcoming). Attempts to make digital technologies governable by use of data visualisations – such as transparency reports – are important to investigate because they select and visualise information in ways that are neither natural nor innocent, and thus manage visibilities and guide our attention.

The result of these disclosures in the name of transparency is that the public gaze is directed to particular parts of the problem – for instance that some governments make a lot of requests for information to be taken down or made available for their use. But it is also important to remember that some states are not even part of such reports because they refuse to share this information. Also, we must not forget that internet companies are involved in other forms of data control and data sharing that they do not talk about publicly, and we can think of transparency reports as strategic ways of guiding our attention. For instance, it was only after the Snowden revelations that Google made it clear that its transparency reports had not disclosed information on how the company feeds information about users to the National Security Agency (NSA). As Google mentioned somewhat apologetically in a blog post: "U.S. law does not allow us to share information about some national security requests that we might receive. Specifically, the U.S. government argues that we cannot share information about the requests we receive (if any) under the Foreign Intelligence Surveillance Act. But you deserve to know" (Google official blog, 2013, para. 3). Because transparency reports are voluntary and initiated by companies themselves, the content and format can be selective enough to allow for such limitations to stay out of sight. As a result, it is often not clear what data is selected and omitted when these reports are compiled, and we are rarely given insight into the contexts and conditions of their production. Transparency reports are also a form of obfuscation and strategic opacity (Stohl, Stohl, and Leonardi, 2016). But my argument is not simply that such reports should be more inclusive and deliver more actual transparency, but also that all kinds of disclosures guide our attention and must be understood as managed visibilities that could be different. That is, they invite us to understand internet companies and governance issues in certain ways. This is also what my second illustration highlights.
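
To make the selectivity of this reporting format concrete, consider a minimal sketch of the tabulation a transparency report typically performs; the figures and country names below are entirely hypothetical and not drawn from any actual report.

```python
# Hypothetical takedown-request figures, in the preformatted shape a
# transparency report discloses them: counts per government, nothing
# about the underlying agreements or data-sharing arrangements.
requests = {"Country A": 1243, "Country B": 310, "Country C": 17}

# The report directs the public gaze toward the heaviest requesters...
for country, count in sorted(requests.items(), key=lambda kv: -kv[1]):
    print(f"{country}: {count} requests")

# ...while everything outside the schema stays invisible: states that are
# not listed at all, requests that may not legally be disclosed, and the
# reporting company's own data practices.
```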

Internet platforms, humans and machines

Internet platforms like Google, Twitter and Facebook are often perceived as different from traditional companies, and they curate this position quite carefully, for instance by stressing organisational values like dialogue, transparency and innovation (Flyverbom, 2015). But most of the time, we only know and engage with these companies through the services they provide - search, connecting with friends or possibilities for discussion. With no products of their own and their focus on facilitating interactions and sharing, they come across as utilities and platforms rather than normal companies. This position serves an important purpose and is actively maintained by their owners and directors. To the degree that such platforms are seen as technical utilities, not complex organisations full of people and engaged in strategic attempts to shape political agendas and cultural formations, they are in a better position to stay off the radar when it comes to regulation and oversight.

Recent discussions of how Facebook Trending relies not only on neutral and consistent algorithms, but also on human curators who seemingly highlight some news stories and political views over others, have shown what happens if we start to think of internet companies as similar to news conglomerates. We have a long history of regulating the latter very strictly, and falling into a similar category would put a company like Facebook in a very different situation than at present. The strategic positioning as utilities involves issues such as human labour, how digital data is organised and edited and how internet companies relate to culture and politics. My point is that these issues should be part of our focus when we investigate how the internet is governed and shapes governance. The task is mainly to establish the links between internet governance and emergent and important research on, for instance, how human labour is invisible on internet platforms, how digital technologies condition particular forms of knowledge production, how identities and personal information are curated in digital spaces and how algorithmic operations edit, sort and shape realities. Starting points could be work on the societal implications of algorithms and data (Gillespie, 2012; Flyverbom and Madsen, 2015; Pasquale, 2015), and studies of digital labour and the entanglement of human and technical operations at work on internet platforms (Irani, 2015; Roberts, 2016). These may seem to be only remotely relevant to internet governance studies, but the links are important to explore. As I have suggested in this section, what is made visible by and on internet platforms has consequences for how they are perceived and regulated, and how we think of digital transformations more broadly. But questions of visibilities and their relation to ordering are important to explore not only at the organisational level, but also as they shape individual conduct and create the foundation for how we govern societal affairs.

Data doubles

Digital technologies and data also play important roles in the production of visualisations that we use as the basis for decisions and governance. At the individual level, an example is what Ruppert (2011) and others have referred to as ‘data doubles’, i.e. the sum of digital traces we leave. As data doubles come to function as complete representations of us in the context of governance, we see the emergence of potentially worrying scenarios, including the possibility of predictive policing and other forms of governance that no longer rely on situated encounters with the subjects they seek to govern (Hansen and Flyverbom, 2015). Beyond the level of the individual, digital transformations also shape areas like urban governance (Kitchin, 2014), the prevention of terrorism (Morozov, 2014b), control of financial transactions (Hansen and Flyverbom, 2015), and international development (Hilbert, 2013). Digitalisation and datafication have implications for how we approach societal challenges, such as terrorism, development or tax evasion. A focus on the management of visibilities invites us to consider how such regulatory or political issues come to look different as a result of digital transformations. In the case of development or tax evasion, the reliance on digital, datafied infrastructures means that established ways of producing knowledge are challenged and supplemented by algorithmic forms of calculation and scrutiny. That is, whereas development agencies usually rely on national statistics or household surveys, the use of digital traces as indicators of food crises or epidemics produces rather different types of visualisation and knowledge, directs our attention to new issues, and leads to alternative ways of dealing with governance issues (Flyverbom and Madsen, 2015). The point is not that big data produce more accurate ‘truths’, but rather that we need to explore how such forms of knowledge production condition different and sometimes problematic approaches to governance (Madsen et al., 2016). Morozov (2014b) uses the example of terrorism. In the past, and using more traditional forms of knowledge and visualisation, this was considered a problem with strong ties to history and foreign policy. But if we approach terrorism by use of digital technologies and the aggregation of digital traces, terrorism takes the shape of an ‘information problem’ – a matter of picking up enough signals to pre-emptively strike against a (soon to become) terrorist. Morozov’s focus is on the problematic, technocratic effects of these forms of what he terms ‘algorithmic regulation’ based on ‘Silicon valley logics’. Even if we do not share Morozov’s worries, it is important to explore how digital technologies and datafication unsettle "key questions about the constitution of knowledge, the processes of research, how we should engage with information, and the nature and categorization of reality" (boyd and Crawford, 2012: 665). In particular, we need to consider how political controversies and complex governance issues are re-articulated as administrative or technical matters, and to reflect on the consequences of such ‘post-political’ forms of governance (Garsten and Jacobsson, 2013). The example of terrorism suggests how ubiquitous digital technologies and processes of datafication create new conditions for how we see, know and govern the world around us.
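
As a purely illustrative sketch (all trace types and values below are hypothetical), a data double is in effect nothing more than an aggregation of whatever traces happen to have been captured, standing in for the person:

```python
from collections import defaultdict

# Hypothetical digital traces left by one person across several platforms.
traces = [
    ("alice", "search", "symptoms of flu"),
    ("alice", "location", "city centre, 08:14"),
    ("alice", "purchase", "train ticket"),
]

# The 'data double': whatever was captured, aggregated under one key.
# Governance that acts on this profile needs no situated encounter with
# Alice herself, and silently omits everything that left no trace.
double = defaultdict(list)
for person, kind, value in traces:
    double[person].append((kind, value))

print(dict(double))
```
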
With this, I have sought to illustrate that digital technologies come to shape the way we manage visibilities and produce knowledge, and that these formations have consequences for how we make the world around us knowable and governable. Such questions are not foreign to sociological and STS-inspired accounts of digital transformations, but are rarely considered part of the field of internet governance.

Conclusion

This paper has suggested that contemporary developments in the digital domain invite us to extend and reinvigorate studies of internet governance by giving more attention to questions of managed visibilities and their relation to processes of ordering. Through encounters with sociology, science and technology studies and similar approaches, we have seen a growing interest in more encompassing approaches to governance, extending far beyond Lessig’s (1999) call for approaches to internet governance that address both legal and technical forms of governance. In contrast to the focus on regulation in most internet governance studies, such accounts approach governance by focusing on the forms of ordering (Flyverbom, 2011) and mundane coordination activities (Hofmann, Katzenbach, and Gollatz, 2016) involved, the ‘relevance’ of algorithms for social and political formations (Gillespie, 2012; Ziewitz, 2016) and the role of infrastructures and architectures in the shaping of conduct (DeNardis, 2012). These approaches allow for far more elaborate and fine-grained investigations of how digital technologies and datafication processes become woven into the fabric of social life. But the digital realm also involves other subtle forms of governance that deserve attention. In particular, I have sought to articulate how discussions of the relation between information control, visibilities and governance could move the field forward, and the concern with seeing, knowing and governing could pave the way for novel studies of internet governance. To this end, the concept of managed visibilities is a starting point that invites us to explore how digitalisation and datafication condition particular forms of information control and the guidance of attention. The conceptual and illustrative discussions of ordering through the management of visibilities show both how the increasingly ubiquitous and infrastructural nature of digital technologies shapes societal and political transformations, and how such theoretical approaches may contribute to the opening up of exciting new avenues for research in and beyond the field of internet governance studies. These contributions are important because they may help us reflect on the largely invisible ways in which digital infrastructures and architectures institutionalise and normalise particular forms of seeing, knowing and governing.

References

Anderson, J., Dean, J. and Lovink, G. (2006) Reformatting Politics: Information Technology and Global Civil Society. London, Routledge

Bowrey, K. (2005) Law and internet cultures. Cambridge University Press

Boyd, D. and Crawford, K. (2012) Critical questions for big data: Provocations for a cultural, technological, and scholarly phenomenon. Information, Communication & Society. 15(5)

Brighenti, A. M. (2007). Visibility as sociological category. Current Sociology, 55, 323.

Chadwick, A. (2006) Internet Politics: States, Citizens, and New Communication Technologies. Oxford: Oxford University Press

Clinton, B. (1990): http://www.techlawjournal.com/cong106/pntr/20000308sp.htm

Dean, M. (1999) Governmentality: Power and Rule in Modern Society, London, Sage

Deibert, R., Palfrey, J., Rohozinski, R. and Zittrain, J. (eds) (2008) Access Denied: The Practice and Policy of Global Internet Filtering, MIT Press

Deibert, R., Palfrey, J., Rohozinski, R. and Zittrain, J. (eds) (2010) Access Controlled: The Shaping of Power, Rights and Rule in Cyberspace. MIT Press

DeNardis, L. (2009) Protocol Politics: The Globalization of Internet Governance, MIT Press

DeNardis, L. (2012) Hidden levers of Internet control. Information, Communication & Society, 15(5): 720–738.

DeNardis, L. (2014) The Global War for Internet Governance, New Haven: Yale University Press

Durham Peters, J. (2015) The Marvelous Clouds: Toward a Philosophy of Elemental Media, University of Chicago Press

Flichy, P. (2007) The Internet Imaginaire, MIT Press

Flyverbom, M. (2011) The Power of Networks: Organizing the Global Politics of the Internet, Cheltenham: Edward Elgar

Flyverbom, M. (2015) Sunlight in cyberspace?: On Transparency as a Form of Ordering, European Journal of Social Theory, Vol. 18, No. 2, 2015, p. 168-184

Flyverbom, M. (forthcoming) Corporate advocacy in the internet domain: shaping policy through data visualizations, in Political Affairs, edited by Garsten, C. and Soderbom, A., Cheltenham: Edward Elgar

Flyverbom, M. and Madsen, A.K. (2015) Sorting data out – unpacking big data value chains and algorithmic knowledge production, in Die Gesellschaft der Daten: Über die digitale Transformation der sozialen Ordnung, edited by Süssenguth, F. Bielefeld: Transcript Verlag, p. 123-144

Flyverbom, M., Leonardi, P., Stohl, C. and Stohl, M. (2016) The Management of Visibilities in the Digital Age: Introduction to special issue, International Journal of Communication, 10

Franklin, M. I. (2013) Digital Dilemmas: Power, Resistance, and the Internet. Oxford: Oxford University Press

Garsten, C. and Jacobsson, K. (2013) Post-political regulation: soft power and post-political visions in global governance. Critical Sociology 39(3): 421–7.

Gillespie, T. (2012) The relevance of algorithms, in Media Technologies: Essays on Communication, Materiality, and Society, edited by Gillespie, Boczkowski and Foot, Cambridge, MA, MIT Press

Hafner, K. and Lyon, M. (1996) Where Wizards Stay Up Late: The Origins of the Internet. New York: Simon & Schuster

Hansen, H. K., and Flyverbom, M. (2015). The politics of transparency and the calibration of knowledge in the digital age. Organization. 22(6), 872–889

Hutchby, I. (2001) ‘Texts, Technologies and Affordances’, Sociology 35(2): 441–56.

Foucault, M. (1988). Power/knowledge: Selected interviews and other writings, 1972–1977. Brighton, UK: Harvester Press.

Hilbert, M. (2013). Big Data for development: From information- to knowledge societies, http://ssrn.com/abstract=2205145.

Hofmann, J., Katzenbach, C. and Gollatz, K. (2016) Between coordination and regulation: Finding the governance in Internet governance, New Media & Society, DOI: 10.1177/1461444816639975

Irani, L. (2015) Difference and Dependence Among Digital Workers: The Case of Amazon Mechanical Turk. South Atlantic Quarterly, 114(1).

Jørgensen, R.F. (ed.) (2006) Human Rights in the Global Information Society, Boston MA: MIT Press

Kitchin, R. (2014) The real-time city? Big data and smart urbanism, GeoJournal, 79(1): 1-14

Klein, H. (2002) ICANN and Internet Governance: Leveraging Technical Coordination to Realize Global Public Policy, The Information Society, 18:3, 193-207

Kleinwächter, W. (2000) ICANN between technical mandate and political challenges, Telecommunications Policy 24: 553-563

Law, J. (2003) Ordering and Obduracy. Lancaster: Centre for Science Studies, Lancaster University. Available at: http://www.comp.lancs.ac.uk/sociology/papers/Law-Ordering-and-Obduracy.pdf

Leonardi, P. M. (2012). Materiality, sociomateriality, and socio-technical systems: What do these terms mean? How are they different? Do we need them? In P. M. Leonardi, B. A. Nardi, and J. Kallinikos (Eds.), Materiality and organizing: Social interaction in a technological world (pp. 25–48). Oxford, UK: Oxford University Press.

Lessig, L. (1999) Code and Other Laws of Cyberspace, New York: Basic Books

Madsen, A., Flyverbom, M., Hilbert, M. and Ruppert, E. (2016) Big Data: Issues for an International Political Sociology of Data Practices, International Political Sociology, Vol. 10, No. 3, p. 275-296

Mansell, Robin (2012) Imagining the internet: communication, innovation, and governance, Oxford: Oxford University Press

Mayer-Schönberger, V. and K. Cukier (2013), Big Data: A Revolution That Will Transform How We Live, Work, and Think, Eamon Dolan/Houghton Mifflin Harcourt: Boston

Morozov, E. (2014a) To save everything, click here: the folly of technological solutionism, PublicAffairs

Morozov, E. (2014b) The rise of data and the death of politics. The Guardian, https://www.theguardian.com/technology/2014/jul/20/rise-of-data-death-of-politics-evgeny-morozov-algorithmic-regulation

Mueller, M. (2002) Ruling the Root: Internet Governance and the Taming of Cyberspace, MIT Press

Mueller, M. (2010) Networks and States: The Global Politics of Internet Governance, MIT Press

Musiani, F. (2015) Practice, Plurality, Performativity, and Plumbing: Internet Governance Research Meets Science and Technology Studies, Science, Technology, & Human Values, 40(2), 272-286

Musiani, F., Cogburn, D.L., DeNardis, L. and Levinson, N.S. (eds) (2016) The Turn to Infrastructure in Internet Governance, Palgrave Macmillan

Pasquale, F. (2015) The Black Box Society: The Secret Algorithms That Control Money and Information, Harvard University Press

Roberts, S.T. (2016) Commercial Content Moderation: Digital Laborers' Dirty Work, in Noble and Tynes (eds) Intersectional Internet: Race, Sex, Class and Culture Online

Ruppert, E. (2011). Population Objects: Interpassive Subjects. Sociology, 45(2) pp. 218–233.

Shapiro, G. (2003) Archaeologies of Vision: Foucault and Nietzsche on Seeing and Saying, Chicago, University of Chicago Press.

Singh, J.P. and Flyverbom, M. (2016) Representing participation in ICT4D projects, Telecommunications Policy, Vol. 40, No. 7, 2016, p. 692-703

Stohl, C., Stohl, M., and Leonardi, P. (2016) Managing Opacity: Information Visibility and the Paradox of Transparency in the Digital Age, International Journal of Communication, 10

Treem, J. and Leonardi, P. (2012) Social Media Use in Organizations: Exploring the Affordances of Visibility, Editability, Persistence, and Association; Communication Yearbook, Vol. 36, pp. 143-189

Walters, W. (2012) Governmentality: Critical Encounters. London: Routledge

Weber, R. (2008) Transparency and the governance of the internet, Computer Law & Security Report, 24, pp. 342-348

Ziewitz, M. (2016). Governing Algorithms: Myth, Mess, and Methods. Science, Technology & Human Values 41(1): 3–16. doi:10.1177/0162243915608948.

Ziewitz, M. and Pentzold, C. (2014) In search of internet governance: Performing order in digitally networked environments, New Media & Society, 16(2), 306-322

Instability and internet design


This paper is part of 'Doing internet governance: practices, controversies, infrastructures, and institutions', a Special issue of the Internet Policy Review.

Where convergence was the orienting issue for communication policy-makers in the second half of the 20th century, in the 21st it is resilience in the face of instability, whether from human or natural causes, that has come to the fore (see, e.g., Manzano et al., 2013; Smith, 2014; Sterbenz et al., 2014; Tipper, 2014). Instability is defined here as unpredictable but constant change in one’s environment and in the means with which one interacts with it; problems rooted in such instability underlie many of today’s internet policy issues.

Among those who must be considered policy-makers for the internet are the computer scientists and electrical engineers responsible for the technical decision-making that brings the network into being and sustains it through constant transformations, expansions, and ever-increasing complexity. The instabilities faced by early internet designers - those who worked on the problem from when it was first funded by DARPA in 1969 through the close of 1979 - were myriad in number and form. They arose on both sides of this sociotechnical infrastructure, appearing technically in software and hardware, and socially in interpersonal and institutional relations. This was a difficult working situation not only because instabilities were pervasive and unpredictable, but also because the sources of instability and their manifestations were themselves constantly refreshed, unrelenting.

It is these policy-makers who are the focus of this article, which asks: how did technical decision-makers for what we now call the internet carry on their work in the face of unpredictable but pervasive and ongoing instability in what they were building and what they had to build it with? It addresses this question by inductively mining the technical document series that served as both medium for internet design and a record of that history (Abbate, 1999).

The analysis is based on a reading of the almost 750 documents in the Internet Requests for Comments (RFCs, www.ietf.org/rfc.html) series that were published during the first decade of the design process (1969-1979). Coping techniques developed during this early period remain important almost 50 years later, both because such a wide range of types and sources of instability appeared during that period and because the decisions, practices, and norms of that decade were path determinative for internet decision-making going forward. The document series records a conversation among those responsible for the technical side of the sociotechnical network, but during the first 20 years of the process in particular the discussion included a great deal of attention to social, economic, cultural, legal, and governance issues. Thinking about the design process through the lens of what it took to conceptualise the network and bring it into being under conditions of such instability increases yet again one's appreciation of what was accomplished.
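
As a rough indication of how such a corpus can be assembled, the early RFCs remain retrievable in plain text from the RFC Editor. The following is a minimal sketch of one way to do this, not a reconstruction of the author's actual procedure; the numeric cut-off is only an approximation of the 1969-1979 window.

```python
import urllib.request
import urllib.error

# Fetch the plain-text RFCs from roughly the first decade of the design
# process (approximately RFC 1 through RFC 750). Some numbers were never
# issued, so missing documents are simply skipped.
BASE = "https://www.rfc-editor.org/rfc/rfc{num}.txt"

corpus = {}
for num in range(1, 751):
    try:
        with urllib.request.urlopen(BASE.format(num=num), timeout=10) as resp:
            corpus[num] = resp.read().decode("utf-8", errors="replace")
    except urllib.error.HTTPError:
        continue  # number never issued or not available as plain text

print(f"retrieved {len(corpus)} RFCs")
```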

The focus here is on those types of instability that are particularly important for large-scale sociotechnical infrastructure rather than those that appear with any type of endeavour. In bridge-building, for example, it is not likely that the technologies and materials being used will change constantly over the course of the project, but this is a common problem for those working with large-scale sociotechnical infrastructure. Such instability remains a central problem for internet designers today; a draft book on possible future network architectures by David Clark (2016), who has been involved with internet design since the mid-1970s, devotes significant attention to problems of this kind. Other ubiquitous and inevitable decision-making problems, such as value differences among those involved and frustration over time lags between steps of development and implementation processes, were also experienced by internet designers but are beyond the scope of this piece.

Mechanisms developed to cope with instabilities are rarely discussed in scholarly literature. The closest work, although it addresses a qualitatively different type of problem, comes from those in science, technology, and society studies (STS) who examine ways in which scientists transform various types of messiness in the laboratory into the clean details reported as scientific findings (importantly, in the work by Latour & Woolgar [1986], and Star [1989]), and into public representation of those efforts (Bowker, 1994). The research agenda going forward should look in addition at what can be learned from psychology and anthropology.

Internet designer efforts to cope with instabilities began with determining just what constituted stability - in essence, designing the problem itself in the sense of learning to perceive it and frame it in ways that helped solve it. They went on to include figuring out the details (conceptual labour), getting along (social practices), and making it work (technical approaches).

Defining the problem as a technique for its cure

Discerning the parameters of instability is an epistemological problem requiring those involved in addressing it to figure out just how to know when the system is stable enough for normal operations to proceed. Internet designers have, from the beginning, required a consensus on the concepts fundamental to such problems. 1 The techniques used to achieve consensus on just what distinguished stability from instability included drawing the line between the two, distinguishing among different types of change for differential treatment within protocol (standard) setting processes, and resolving tensions between the global and the local, the universal and the specific.

Although the subject of what internet designers knew empirically about how the network was actually functioning is beyond the scope of this article, it is worth noting that comprehending and responding to the sources of instability was made even more problematic by a lack of information:

[E]ven those of us presumably engaged in ‘computer science’ have not found it necessary to confirm our hypotheses about network operation by experiment an [sic] to improve our theories on the basis of evidence (RFC 550, 1973, p. 2).

Indeed, fixing the design at its source was explicitly preferred over empirical workarounds:

If there are problems using this approach, please don’t ‘code around’ the problem or treat your [network interconnection node] as a ‘black box’ and extrapolate its characteristics from a series of experiments. Instead, send your comments and problems to . . . BBN, and we will fix the . . . system (RFC 209, 1971, p. 1).

Stability vs instability

For analytical and pragmatic purposes, instability as understood here - unpredictable but constant change in one’s environment, including the ways in which one interacts with and is affected by it whether directly or indirectly - can usefully be distinguished from other concepts commonly used in discussions of the internet. Instability is not the same thing as ignorance (lack of knowledge about something specific), uncertainty (lack of knowledge about the outcome of processes subject to contingency or opacity, or otherwise unknowable), or ambiguity (lack of clarity regarding either empirical realities or intentions). Indeed, instability differs from all of these other terms in an important way: ignorance, uncertainty, and ambiguity are about what is known by those doing the design work, the maker. Instability, on the other hand, is about unpredictable mutability in that which is being made and the tools and materials available to make it.

For designers of what we now call the internet, goals regarding network stability during the first decade of the design process were humble. They sought protocols that could last for at least a couple of years, fearing that if this level of stability could not be achieved it would be hard to convince others to join in the work (RFC 164, 1971). It was considered a real improvement when the network crashed only every day or two (RFC 153, 1971), a rate not, however, widely or commonly achieved. According to RFC 369 (1972), no one who responded to a survey had reported a mean-time-between-failure of more than two hours, and the average percentage of time with trouble-free operation was 35%.

Network designers defined stability operationally, not theoretically. The network is unstable when it isn’t functional or when one can’t count on it to be functional in future barring extraordinary events. Concepts long used in the security domain to think about those forces that can make a system unstable can be helpful in thinking about instabilities and the internet design process. Those involved with national security distinguish between system sensitivity and vulnerability. Sensitivity involves system perturbations that may be annoying and perhaps costly but are survivable; hacking into the Democratic National Committee information systems (Sanger & Schmitt, 2016) was a perturbation, but hasn’t brought the country down (as of the time of writing). Vulnerability entails those disturbances to a system that undermine its survival altogether; if malware such as Conficker (Kirk, 2015) were used to shut down the entire electrical network of the United States, it would generate a serious crisis for the country. Vulnerability has long been important to the history of telecommunications networks, being key to stimulating the growth of a non-British international telecommunications network early in the 20th century (Blanchard, 1986; Headrick, 1990); the push for greater European computational capacity and intelligent networks in the 1980s (Nora & Minc, 1980; Tengelin, 1981); and in discussions of arms control (Braman, 1991) and cybersecurity (Braman, 2014). Factors that cause network instability are those that present possible vulnerabilities.

Technical change

The phenomenon of fundamental and persistent change was explicitly discussed by those involved in the early years of designing what we refer to today as the internet. The distinction between incremental and radical change was of particular importance because of the standard-setting context.

It can be difficult for those of us who have been online for decades and/or who were born "digital natives" to appreciate the extent of the intellectual and group decision-making efforts required to achieve agreement upon the most fundamental building blocks of the internet. Even the definition of a byte was once the subject of an RFC and there was concern that noncompliance with the definition by one user would threaten the stability of the entire network (RFC 176, 1971).

For the early internet, everything was subject to change, all the time: operating systems, distinctions among network layers, programming languages, software, hardware, network capacity, users, user practices, and so on. Everyone was urged to take into account the possibility that even command codes and distinctions among network layers could be redefined (RFC 292, 1972). Those who were wise and/or experienced expected operational failures when ideas were first tried under actual network conditions (RFC 72, 1970). Operating by consensus was highly valued, but it was also recognised that a consensus once achieved might still have to be thrown out in response to experience or the introduction of new ideas or protocols. Instituting agreed-upon changes was itself a source of difficulty because use of the network was constant and maintenance breaks would therefore be experienced as instability (RFC 381, 1972), a condition ultimately mitigated but not solved by regular scheduling of shutdowns.

Looking back from 2016, early perceptions of the relative complexity and scale of the problem are poignant:

Software changes at either site can cause difficulties since the programs are written assuming that things won't change. Anyone who has ever had a program that works knows what system changes or intermittent glitches can do to foul things up. With two systems and a Network things are at least four times as difficult. (RFC 525, 1973, p. 5)

RFC 525 (1973) also repeats the point that changes by a user at a local site can cause difficulties for the network as a whole. RFC 528 (1973) makes the opposite point: changes in the network could impede or make it impossible for processes at local user sites to continue operating as they had (RFC 559, 1973; RFC 647, 1974); one author complained about the possibility of a situation in which servers behave erratically when they suddenly find their partner speaking a new language (RFC 722, 1976). Interdependencies among the technologies and systems involved in internet design were complex, often requiring delay in implementation of seemingly minor changes because each would require so many concomitant alterations of the protocols with which they interact that all are better left until they can be a part of a major overhaul package (RFC 103, 1971).

Incremental vs radical

A particularly difficult problem during the early years of the internet design process was determining when what was being proposed should be considered something new (a radical change) or a modification (incremental change) (RFC 435, 1973). The difference matters because systems respond differently to the two. Both types of change were rife during the internet design process, manifested in explicit discussions about whether something being discussed in an RFC should be treated as an official change or a modification if ultimately agreed upon and put into practice. As the question was put in RFC 72 (1970), what constitutes official change to a protocol, given that ideas about protocols go through many modifications before reaching solutions acceptable to all?

Translation of value differences into an objective framework was one means used to try to avoid tensions over whether something involved an incremental or radical change. Describing the design of algorithms as a "touchy" subject, a “Gordian knot”, for example, one author proposing a graphics protocol notes, “There are five or ten different criteria for a ‘best’ algorithm, each criterion different in emphasis” (RFC 292, 1972, p. 4). The coping technique used in response to this problem in RFC 292 was to simply order the commands by level and number them. If several commands at the same level came into conflict, some attempt would be made to encode variations of meanings in terms of bit configurations.
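
What such an order-and-number scheme might look like can be sketched briefly in Python; the command names, the two variant flags, and the bit layout below are hypothetical illustrations of the technique, not the actual assignments of RFC 292:

```python
# Illustrative sketch (not RFC 292's actual codes): commands ordered by
# level and numbered, with variations of meaning encoded as bit flags.

# Hypothetical graphics command set, numbered in order of level.
COMMANDS = {
    0x00: "NULL",
    0x01: "ERASE",
    0x02: "DRAW_POINT",
    0x03: "DRAW_LINE",
}

# Hypothetical variant flags packed into the high bits of a command byte.
FLAG_RELATIVE = 0b1000_0000   # coordinates relative to current position
FLAG_DASHED   = 0b0100_0000   # dashed rather than solid rendering

def encode(opcode: int, relative: bool = False, dashed: bool = False) -> int:
    """Pack a command number and its meaning variants into one byte."""
    byte = opcode
    if relative:
        byte |= FLAG_RELATIVE
    if dashed:
        byte |= FLAG_DASHED
    return byte

def decode(byte: int):
    """Recover the command name and variant flags from a command byte."""
    opcode = byte & 0b0011_1111
    return COMMANDS[opcode], bool(byte & FLAG_RELATIVE), bool(byte & FLAG_DASHED)

print(decode(encode(0x03, relative=True)))  # ('DRAW_LINE', True, False)
```

Encoding variants as bit configurations turns a dispute over which meaning is "best" into the mechanical question of which flags are set.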

Macro vs micro

There are two dimensions along which distinctions between macro-level and micro-level approaches were important in network design: the global vs the local, and general function vs specific function. These two can be aligned with each other, as with the local and specific treatment of a screen pixel trigger in an early graphics protocol that was determined to be so particular to a given configuration of technologies that it should not be included in internet protocols (RFC 553, 1973). The two dimensions of globality and generality, however, need not operate in tandem. In one example, sufficient universality on the network side was ensured by insisting that it could deal with all local variations encountered (e.g., RFC 184, 1971; RFC 529, 1973).

Global vs local

The tension between the universal and the local is fundamental to the nature of infrastructural systems. Indeed, as Star and Ruhleder (1996, p. 114) put it, infrastructure - however global - only comes into being in its local instances. The relationship between the two has long been important to telecommunications networks. In the 1880s, long-time AT&T president Theodore Vail and chief engineer J. J. Carty, who designed the company's monopoly-like and, for the era, ubiquitous network, encountered it:

’No one knows all the details now,’ said Theodore Vail. ’Several days ago I was walking through a telephone exchange and I saw something new. I asked Mr. Carty to explain it. He is our chief engineer; but he did not understand it. We called the manager. He didn’t know, and called his assistant. He didn’t know, and called the local engineer, who was able to tell us what it was.’ (Casson, 1910, p. 167)

Early internet designers phrased the problem this way: "Should a PROTOCOL such as TELNET provide the basis for extending a system to perform functions that go beyond the normal capacity of the local system" (RFC 139, 1971, p. 11). Discussion of ways in which a single entity might provide functions for everyone on the network that most other hosts would be unable to provide for themselves reads much like ruminations on a political system characterised by federalism (in the US) or subsidiarity (in Europe): “. . . to what extent should such extensions be thought of as Network-wide standards as opposed to purely local implementations” (Ibid.). The comparison with political thinking is not facile; a tension between geopolitical citizenship and what can be called “network citizenship” runs throughout the RFCs (Braman, 2013).

Drawing, or finding, the line between the universal and the local could be problematic. Decisions that incorporated that line included ensuring that special-purpose technology- or user-specific details could be sent over the network (RFC 184, 1971), treating transfer of incoming mail to a user's alternate mailbox as a feature rather than a protocol (RFC 539, 1973), and setting defaults in the universal position so that they serve as many users as possible (RFC 596, 1973). Interestingly, there was a consensus that users needed to be able to reconnect, but none on just where the reconnection capacity should be located (RFC 426, 1973).

General purpose vs specific purpose

The industrial machines for which laws and policies were historically crafted were either single-purpose or general-purpose. As this affected network policy a century ago, antitrust (competition) law was applied to the all-private US telecommunications network because, it was argued, being general purpose - serving more than one function, carrying both data and voice - was legally problematic as unfair competition. The resulting Kingsbury Commitment separated the two functions into two separate companies and networks that could interconnect but not be the same (Horwitz, 1989).

The internet, though, was experienced as a fresh start in network design. When the distinction between general and special purpose machines came up in the RFCs, it was with pride about having transformed what had previously been the function of a special purpose process into one available for general purpose use:

With such a backbone, many of the higher level protocols could be designed and implemented more quickly and less painfully -- conditions which would undoubtedly hasten their universal acceptance and availability (RFC 435, 1973, p. 5).

It was a basic design criterion - what can be considered, in essence, a constitutional principle for network design - that the network should serve not only all kinds of uses and all kinds of users, but also be technologically democratic. The network, that is, needed to be designed in such a way that it served not only those with the most sophisticated equipment and the fastest networks, but also those with the most simple equipment and the slowest networks (Braman, 2011). 2

With experience, internet designers came to appreciate that the more general purpose the technologies at one layer, the faster and easier it is to design and build higher level protocols upon them. Thus it was emphasised, for example, that TELNET needed to find all commands "interesting" and worthy of attention, whether or not they were of kinds or from sources previously known (RFC 529, 1973, p. 9). In turn, as higher level and more specialised protocols are built upon general purpose protocols, acceptance of (and commitment to) those protocols and to design of the network as general purpose are reinforced (RFC 435, 1973).

Standardisation was key. It was understood that a unified approach would be needed for data and file transfer protocols in order to meet existing and anticipated network needs (RFC 309, 1972). Designing for general purpose also introduced new criteria into decision-making. Programming languages and character sets were to be maximised for flexibility (RFC 435, 1973), for example, even though that meant including characters in the ASCII set that were not needed by the English language users who then dominated the design process (RFC 318, 1972).

Figuring out the details

The importance of the conceptual labour involved in the internet design process cannot be overstated, beginning with the need to define a byte discussed above through the most ambitious visions of globally distributed complex systems of diverse types serving a multitude of users and uses. Coping techniques in this category include the art of drawing distinctions itself as well as techniques for ambiguity reduction.

Conceptual distinctions

Early recognition that not all information received was meant to be a message spurred efforts to distinguish between bit flows intended as communications or information transfer, and those that were, instead, errors, spurious information, manifestations of hardware or software idiosyncrasies, or failures (RFC 46, 1970; RFC 48, 1970). Other distinctions had to be drawn between data and control information and among data pollution, synchronicity, and network "race" problems (when a process races, it won't stop) (RFC 82, 1970).

The need for distinctions could get very specific. A lack of buffer space, for example, presented a very different type of problem from malfunctioning user software (e.g., RFC 54, 1970; RFC 57, 1970). Distinctions were drawn in ways perhaps more diverse than expected: people experienced what we might call ghost communications when BBN, the consulting firm developing the technology used to link computers to the network during the early years, would test equipment before delivery by sending messages received by others as from or about nodes they didn't think existed (RFC 305, 1972). And there were programmes that were perceived as having gone "berserk" (RFC 553, 1973).

Identifying commonalities that can then become the subject of standardisation is a critically important type of conceptual labour. The use of numerous ad hoc techniques for transmitting data and files across ARPANET was considered unworkable for the most common situations and designers knew it would become more so (RFC 310, 1972). Thus it was considered important to identify common elements across processes for standardisation. One very basic example of this was discussion of command and response as something that should be treated with a standard discipline across protocols despite a history of having previously been discussed only within each specific use or process context (RFC 707, 1975). The use of a single access point is another example of the effort to identify common functions across processes that could be standardised for all purposes (RFC 552, 1973).

Drawing conceptual distinctions is a necessary first step for many of the other coping techniques. It is required before the technical labour of unbundling processes or functions into separate functions for differential treatment, one of the technical tools discussed below, for example, and is evident in other techniques as well.

Ambiguity reduction

Reducing ambiguity was highly valued as a means of coping with instability. One author even asserted this as a principle: "words which are so imprecise as to require quotation marks should never appear in protocol specifications" (RFC 513, 1973, p. 1). Quotation marks, of course, are used to identify a word as a neologism or a term being used with an idiosyncratic and/or novel meaning. This position resonates with the principle in US constitutional law that a law so vague two or more reasonable adults cannot agree on its meaning is unconstitutional and void.

Concerns about ambiguity often arose in the course of discussions about what human users need in contrast to what was needed for the non-human, or daemon users such as software, operating systems, and levels of the network, for which the network was also being designed (Braman, 2011). It was pointed out, for example, that the only time mail and file transfer protocols came into conflict was in naming conventions that needed to serve human as well as daemon users (RFC 221, 1971).

Getting along

The history of the internet design process as depicted in the internet RFCs provides evidence of the value of social capital, interpersonal relationships, and community in the face of instability. Valuing friendliness, communication, living with ambiguity, humour, and reflexivity about the design process were all social tools for coping with instability visible in the RFCs from the first decade. Collectively, we can refer to such tools as "getting along".

Friendliness

In addition to the normative as well as discursive emphasis on community consensus-building discussed elsewhere (Braman, 2011), the concept of friendliness was used explicitly. Naming sites in ways that made mnemonic sense to humans was deemed usefully user-friendly, allowing humans to identify the sources of incoming messages (RFC 237, 1971). Friendliness was a criterion used to evaluate host sites, both by network administrators concerned also about reliability and response time (RFC 369, 1972) and by potential users who might have been discouraged by a network environment that seemed alien (RFC 707, 1975). Interpersonal relations - rapport among members of the community (RFC 33, 1970) - were appreciated as a coping technique. The effects of one’s actions on others were to be considered: "A system should not try to simulate a facility if the simulation has side effects" (RFC 520, 1973, p. 3).

The sociotechnical nature of the effort, interestingly, shines through even when discussing interpersonal relations:

The resulting mixture of ideas, discussions, disagreements, and resolutions has been highly refreshing and beneficial to all involved, and we regard the human interaction as a valuable by-product of the main effect. (RFC 33, 1970, p. 3)

At the interface between the network and local sites, internet designers learned through experience about the fundamental importance of the social side of a sociotechnical system. After discussing how network outsiders inevitably become insiders in the course of getting their systems online, one author noted,

[I]f personnel from the several Host[s] [sic] are barred from active participation in attaching to the network there will be natural (and understandable) grounds for resentment of the intrusion the network will appear to be; systems programmers also have territorial emotions, it may safely be assumed. (RFC 675, 1974)

The quality of relations between network designers and those at local sites mattered because if the network were perceived as an intruder, compliance with protocols was less likely (RFC 684, 1975).

Communication

Constant communication was another technique used in the attempt to minimise sources of instability. Rules were set for documentation genres and schedules (RFC 231, 1971). Using genre categories provided a means of announcing to users how relatively fixed, or not, a particular design decision or proposal was and when actual changes to protocols might be expected - both useful as means of dealing with instability. Today, the Internet Engineering Task Force (IETF), which hosts the RFCs online, still uses genre distinctions among such categories as Internet Standard, Draft Standard, and Proposed Standard, as well as genres for Best Current Practices and others that are Informational, Historic, or Experimental. 3

Users were admonished to keep the RFCs and other documentation together because the RFCs would come faster and more regularly than would user guides. Still, as was highlighted, it was impossible for users to keep up with changes in the technologies: "It is almost inevitable that the TUG [TIP User Guide] revisions follow actual system changes" (RFC 386, 1972, p. 1, emphasis added). Simplicity and clarity in communication were valued; one author’s advice was to write as if explaining something both to a secretary and to a corporation president - that is, to both the naive and the sophisticated (RFC 569, 1973).

Living with ambiguity

Although eager to reduce ambiguity wherever possible, early network designers also understood that some amount of ambiguity due to error and other factors was inevitable (RFC 203, 1971). In those instances, the goal was to learn to distinguish among causal factors, and to develop responses to each that at least satisficed even if that meant simply ignoring errors (RFC 746, 1978).

Humour

Humour is a technique used to cope with instability, as well as with ignorance, uncertainty, and ambiguity, in many environments. Within the internet design process, it served these functions while simultaneously supporting the development of a real sense of community. In RFC 468 (1973), for example, there is an amusing description of just how long it took to define something during the course of internet design. There was an ongoing tradition of humorous RFCs (beware of any published on 1 April, April Fool’s Day) (Limoncelli & Salus, 2007).

Reflexivity about the design process

The final social technique for adapting to instability evident early on was sustaining communal reflexivity about the nature of the design process itself. RFC 451 (1973) highlighted the importance of regularly questioning whether or not things should continue being done as they were being done. It was hoped that practices developed within the network design community would diffuse into those of programmers at the various sites linking into the network (RFC 684, 1975).

Making it work

Many of the coping techniques described above are social. Some are technical, coming into play as the design principles that are, in essence, policy for the internet design process (Braman, 2011). A final set of techniques is also technical, coming into use as specific design decisions intended to increase adaptive capacity by working with characteristics of the technologies themselves. Approaches to solving specific technical problems in the face of instability included designing in adaptive capacity, tying genre specifications tightly to the machines they describe, putting things off (delay), and the reverse of delay - making something happen as a demonstration.

Adaptive capacity

General purpose machines begin by being inherently flexible enough to adapt to many situations, but it is possible to go further in enhancing adaptive capacity. The general goal of such features was captured in RFC 524 (1973):

The picture being painted for the reader is one in which processes cooperate in various ways to flexibly move and manage Network mail. The author claims . . . that the picture will in future get yet more complicated, but that the proposal specified here can be conveniently enlarged to handle that picture too (p. 3).

The problem of adaptation came up initially with the question of what to do with software that had been designed before its possible use in a network environment had been considered. RFC 80 (1970) argued that resolving this incompatibility should get as much attention as developing new hardware by those seeking to expand the research capacity of network users. Another such mechanism was the decision to require the network to adapt to variability in input/output mechanisms rather than requiring programmes to conform with the network (RFC 138, 1971). Taking this position did not preclude establishing standards for software programmes that interact with the network and making clear that using those standards is desirable (RFC 166, 1971).

Beginning with recuperation of lost messages, and irrespective of the source of error, redundancy has long been a technique for coping with network instability issues. When satellites became available for use in international communications, for example, the US Federal Communications Commission (FCC) required every network provider to continue to invest as much in underseas cables as it invested in satellites (Horwitz, 1989). The early RFCs discuss redundancy in areas as disparate as message transmission (RFC 65, 1970) and the siting of the network directory (RFC 625, 1974). Redundancy in databases was understood as an access issue (RFC 677, 1975).

There are other ways adaptation was technically designed into the early network as a means of coping with instability. RFC 435 (1973) looks at how to determine whether or not a server has an echoing mode during a period in which many hosts either echoed or did not, but could not switch between the two. Requiring fixed socket offsets until a suitable network-wide solution could be found to the problem of identity control at connection points between computers and the ARPANET (RFC 189, 1971) is another example.

There were situations for which reliance on ad hoc problem solving was the preferred approach (RFC 247, 1971). At their best, ad hoc environments could be used for experimentation, as was done with the mail facility (RFC 724, 1977). A "level 0" protocol was a more formal attempt to define an area in which experimentation could take place; successes there could ultimately be embedded in later protocols for the network itself (RFC 549, 1973). Maintaining a “wild west” zone for experimentation as a policy tool is familiar to those who know the history of radio regulation in the United States, where amateur (“ham”) radio operators have long been given spectrum space at the margins of what was usable. Regulators understood that these typically idiosyncratic individuals were persistent and imaginative inventors interested in pressing the limits of what they could do - and that their tinkering had yielded technical solutions that then made it possible to open up those wavelengths to commercial use over and over again.

Reliance on probabilities was another long familiar technique for situations involving instability as well as uncertainty. RFC 60 (1970) describes a technique apparently used by many larger facilities connected to the network to gain flexibility managing traffic and processing loads. They would falsely report their buffer space, relying on the probability that they would not get into logistical trouble doing so and assuming that statistics would keep them out of trouble should any difficulties occur. The use of fake errors was recommended as a means of freeing up buffer space, a measure considered a last resort but powerful enough to control any emergency.
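
A minimal simulation suggests why the bet usually paid off. All of the parameters here - the buffer counts, the usage probability, the number of trials - are assumptions chosen for illustration, not figures from RFC 60:

```python
# Illustrative simulation: a host advertises more buffer space than it has,
# betting that senders rarely fill all of their allocations at once.
import random

REAL_BUFFERS = 80          # buffers the host actually has
ADVERTISED = 100           # buffers the host claims to have
P_USE = 0.6                # assumed probability a granted allocation is used
TRIALS = 100_000

overcommitted = 0
for _ in range(TRIALS):
    # Each of the ADVERTISED allocations is independently used with P_USE.
    demand = sum(random.random() < P_USE for _ in range(ADVERTISED))
    if demand > REAL_BUFFERS:
        overcommitted += 1   # the host would need a "fake error" to recover

print(f"risk of running out: {overcommitted / TRIALS:.4%}")
```

Under these assumed numbers the risk of actually running out is vanishingly small, which is precisely the statistical cushion the technique relied on; the fake-error mechanism covered the rare occasions when the bet failed.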

Genre specifications

Working with the genre requirements described above offered another set of opportunities for coping with instability. The RFC process was begun as an intentionally informal conversation but, over time, became much more formal regarding gatekeeping, genre classification, and genre requirements specific to stages of decision-making. Concomitantly, the tone and writing style of the documents became more formal as well. It is because of these two changes to the RFC publishing process that discussions of social issues within the design conversation declined so significantly after the first couple of decades.

For any RFC dealing with a protocol, what had not been articulated simply didn't exist (RFC 569, 1973). This put a lot of weight on the need both to provide documentation and to keep a technology operating in exactly the manner described in that documentation (RFC 209, 1971). This was not a naive position; in discussion of the interface between the network and host computers, it was admitted that specifications were neither complete nor correct, but the advice was to hold the vendor responsible for technical characteristics as described. In a related vein, RFC authors were advised not to describe something still under experimentation in such a manner that others will believe the technology is fixed (RFC 549, 1973).

This position does, however, create a possible golem problem, in reference to the medieval story about a human-type figure created out of clay to do work for humans, always resulting in disaster because instructions were never complete or specific enough. From this perspective, the expectation of an unambiguous, completely specified mapping between commands and responses may be a desirable ideal (RFC 722, 1976), but could not realistically be achieved.

Putting things off

The network design process was, by definition, ongoing, but this fundamental fact itself created instabilities: "Thus each new suggestion for change could conceivably retard program development in terms of months" (RFC 72, 1970, p. 2).

Because interdependencies among protocols and the complexity of individual protocols made it difficult to accomplish what were otherwise incremental changes without also requiring so much perturbation of protocols that wholesale revision would be needed (RFC 167, 1971), it was often necessary to postpone improvements that solved current problems until an overhaul took place. This happened with accounting and access controls (Ibid.) and basic bit stream and byte stream decisions for a basic protocol (RFC 176, 1971). As the network matured, it became easier to deal with many of these issues (RFC 501, 1973).

There were a number of occasions when the approach to a problem was to start by distinguishing steps of a process that had previously been treated as a single step - unbundling types of information processing, that is, in the way that vendors or regulators sometimes choose or are required to do with service or product bundles. It was realised, for example, that treating "hide your input" and “no echo” as two separate matters usefully permitted differential treatment of each (RFC 435, 1973). Similarly, the official FTP process was broken down into separate commands for data transfer and for file transfer, with the option of further distinguishing subsets within each (RFC 486, 1973). If we think of unbundling the steps of a single process as one way of making conceptual distinctions that provide support for continuing to work in the face of instability as a vertical matter, we might call it horizontal unbundling when distinctions among types of processing involved in a single step are drawn. By 1973 (RFC 520, 1973) it had already been found that having three digits for codes to distinguish among types of replies was insufficient, so a move to five digits was proposed as a short-term fix.
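
The digit-code technique itself can be sketched as follows; the field meanings below are modelled loosely on later FTP-style conventions rather than on the specific 1973 proposal:

```python
# Illustrative sketch: positional reply codes, where each digit carries a
# distinct piece of meaning. Field meanings loosely follow later FTP
# practice and are not taken from the 1973 proposal itself.
STATUS = {1: "preliminary", 2: "success", 3: "intermediate",
          4: "transient failure", 5: "permanent failure"}
CATEGORY = {0: "syntax", 1: "information", 2: "connections",
            3: "authentication", 5: "file system"}

def parse_reply(code: int) -> dict:
    """Unpack a three-digit reply code into its positional fields."""
    hundreds, tens, units = code // 100, (code // 10) % 10, code % 10
    return {"status": STATUS.get(hundreds, "unknown"),
            "category": CATEGORY.get(tens, "unknown"),
            "detail": units}

print(parse_reply(550))
# {'status': 'permanent failure', 'category': 'file system', 'detail': 0}
```

Moving from three digits to five, as proposed, extends capacity simply by adding positions, without disturbing the underlying convention.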

Demonstration

There were some instances in which designers foresaw a potential problem but could not convince others in the community that it was likely and serious. One technique used in such instances was to actualise the potential - to make the problem happen in order to demonstrate its nature and seriousness so vividly that the community would turn to addressing the issue. In 1970, for example, one designer - acting on an insight he had had about a potential type of problem in 1967 - deliberately flooded the network in order to convince his colleagues of the lock-up that results when that happens because of errors in message flow (RFC 635, 1974). This technique is familiar to those who know the literature on the diffusion of innovations. In Rogers’ (2003) synthesis of what has been learned from thousands of studies of the diffusion of many different types of technologies in a wide range of cultural settings around the world, trialability and observability are among the five factors that significantly affect the willingness of individuals and groups to take up the use of new technologies and practices.

Conclusions

In today's digital, social, and natural worlds, instability is a concern of increasing importance to all of us as individuals and as communities. Those responsible for designing, building, and operating the infrastructures upon which all else depends - during times of instability just as during times of calm and slow change - confront particular difficulties of enormous importance that may be technical in nature but are of social, political, economic, and cultural importance as well. Discussions in the Requests for Comments (RFCs) technical document series during the first decade of work on what we now call the internet (1969-1979) about how designers coped with instability provide insights into coping techniques of potential use in the design, building, and operation of any large-scale sociotechnical infrastructure. The toolkit developed by network designers engaged with all facets of what makes a particular system sociotechnical rather than "just" social or technical: negotiating the nature of the issue, undertaking the conceptual labour involved in figuring out the details, learning how to get along with all of those involved, and incorporating adaptive techniques into the infrastructure itself.

Many of those involved with "ethics in engineering," including the relatively recent subset of that community that refers to itself as studying “values in design,” often start from theory and try to induce new behaviours among computer scientists and engineers in the course of design practice with the hope of stimulating innovations in content, design, or architecture. Here, instead, the approach has been to learn from the participants in the design process themselves, learning from these highly successful technical decision-makers - de facto policy-makers for the internet - about how to cope with instabilities in a manner that allowed productive work to go forward.

References

Abbate, J. (1999). Inventing the Internet. Cambridge, MA: MIT Press.

Below, A. (2012). The genre of guides as a means of structuring technology and community. Unpublished MA Thesis, University of Wisconsin-Milwaukee.

Blanchard, M. A. (1986). Exporting the First Amendment. New York: Longman.

Bowker, G. C. (1994). Science on the run: Information management and industrial geophysics at Schlumberger, 1920-1940. Cambridge, MA: MIT Press.

Braman, S. (2014). Cyber security ethics at the boundaries: System maintenance and the Tallinn Manual. In L. Glorioso & A.-M. Osula (Eds.), Proceedings: 1st Workshop on Ethics of Cyber Conflict, pp. 49-58. Tallinn, Estonia: NATO Cooperative Cyber Defence Centre of Excellence.

Braman, S. (2013). The geopolitical vs. the network political: Governance in early Internet design, International Journal of Media & Cultural Politics, 9(3), 277-296.

Braman, S. (2011). The framing years: Policy fundamentals in the Internet design process, 1969-1979, The Information Society, 27(5), 295-310.

Braman, S. (1990). Information policy and security. Presented to the 2nd Europe Speaks to Europe Conference, Moscow, USSR.

Casson, H. N. (1910). The history of the telephone. Chicago, IL: A. C. McClurg & Co.

Clark, D. D. (2016). Designs for an internet. Available at http://groups.csail.mit.edu/ana/People/DDC/archbook.

Headrick, D. R. (1990). The invisible weapon: Telecommunications and international relations, 1851-1945. New York/Oxford: Oxford University Press.

Horwitz, R. B. (1989). The irony of regulatory reform: The deregulation of American telecommunications. New York/Oxford: Oxford University Press.

Kirk, J. (2015, Aug. 3). Remember Conficker? It’s still around, Computerworld, http://www.computerworld.com/article/2956312/malware-vulnerabilities/remember-conficker-its-still-around.html, accessed Sept. 6, 2016.

Latour, B. & Woolgar, S. (1986). Laboratory life: The construction of scientific facts, 2d ed. Princeton, NJ: Princeton University Press.

Limoncelli, T. A. & Salus, P. H. (Eds.) (2007). The complete April Fools' Day RFCs. Peer-to-Peer Communications.

Manzano, M., Calle, E., Torres-Padrosa, V., Segovia, J., & Harle, D. (2013). Endurance: A new robustness measure for complex networks under multiple failure scenarios, Computer Networks, 57, 3641-3653.

Nora, S. & Minc, A. (1980). The computerization of society. Cambridge, MA: MIT Press.

Rogers, E. M. (2003). Diffusion of Innovations, 5th ed. New York: Free Press.

Sanger, D. E. & Schmitt, E. (2016, July 26). Spy agency consensus grows that Russia hacked D.N.C., The New York Times, http://www.nytimes.com/2016/07/27/us/politics/spy-agency-consensus-grows-that-russia-hacked-dnc.html, accessed Sept. 6, 2016.

Smith, P. (2014). Redundancy, diversity, and connectivity to achieve multilevel network resilience, survivability, and disruption tolerance, Telecommunications Systems, 56, 17-31.

Star, S. L. (1989). Regions of the mind: Brain research and the quest for scientific certainty. Stanford, CA: Stanford University Press.

Star, S. L. & Ruhleder, K. (1996). Steps toward an ecology of infrastructure: Design and access for large information spaces, Information Systems Research, 7(1), 111-134.

Sterbenz, J. P. G., Hutchison, D., Çetinkaya, E. K., Jabbar, A., Rohrer, J. P., Schöller, M., & Tipper, D. (2014). Resilient network design: Challenges and future directions, Telecommunications Systems, 56, 5-16.

Tengelin, V. (1981). The vulnerability of the computerised society. In H. P. Gassmann (Ed.), Information, computer and communication policies for the 80s, pp. 205-213. Amsterdam, The Netherlands: North-Holland Publishing Co.

RFCs Cited

RFC 33, New Host-Host Protocol, S. D. Crocker, February 1970.

RFC 46, ARPA Network Protocol Notes, E. Meyer, April 1970.

RFC 48, Possible Protocol Plateau, J. Postel, S. D. Crocker, April 1970.

RFC 54, Official Protocol Proffering, S.D. Crocker, J. Postel, J. Newkirk, M. Kraley, June 1970.

RFC 57, Thoughts and Reflections on NWG/RFC 54, M. Kraley, J. Newkirk, June 1970.

RFC 60, Simplified NCP Protocol, R. B. Kalin, July 1970.

RFC 65, Comments on Host/Host Protocol Document #1, D.C. Walden, August 1970.

RFC 72, Proposed Moratorium on Changes to Network Protocol, R. D. Bressler, September 1970.

RFC 80, Protocols and Data Formats, E. Harslem, J. Heafner, December 1970.

RFC 82, Network Meeting Notes, E. Meyer, December 1970.

RFC 103, Implementation of Interrupt Keys, R. B. Kalin, February 1971.

RFC 138, Status Report on Proposed Data Reconfiguration Service, R.H. Anderson, V.G. Cerf, E. Harslem, J.F. Heafner, J. Madden, R.M. Metcalfe, A. Shoshani, J.E. White, D.C.M. Wood, April 1971.

RFC 139, Discussion of Telnet Protocol, T. C. O'Sullivan, May 1971.

RFC 153, SRI ARC-NIC Status, J.T. Melvin, R.W. Watson, May 1971.

RFC 164, Minutes of Network Working Group Meeting, 5/16 through 5/19/71, J. F. Heafner, May 1971.

RFC 166, Data Reconfiguration Service: An Implementation Specification, R.H. Anderson, V.G. Cerf, E. Harslem, J.F. Heafner, J. Madden, R.M. Metcalfe, A. Shoshani, J.E. White, D.C.M. Wood, May 1971.

RFC 167, Socket Conventions Reconsidered, A.K. Bhushan, R. M. Metcalfe, J. M. Winett, May 1971.

RFC 176, Comments on 'Byte Size for Connections', A.K. Bhushan, R. Kanodia, R. M. Metcalfe, J. Postel, June 1971.

RFC 184, Proposed Graphic Display Modes, K.C. Kelley, July 1971.

RFC 189, Interim NETRJS Specifications, R.T. Braden, July 1971.

RFC 203, Achieving Reliable Communication, R.B. Kalin, August 1971.

RFC 209, Host/IMP Interface Documentation, B. Cosell, August 1971.

RFC 221, Mail Box Protocol: Version 2, R. W. Watson, 1971.

RFC 231, Service center standards for remote usage: A user's view, J.F. Heafner, E. Harslem, September 1971.

RFC 237, NIC View of Standard Host Names, R.W. Watson, October 1971.

RFC 247, Proffered Set of Standard Host Names, P.M. Karp, October 1971.

RFC 292, Graphics Protocol: Level 0 Only, J. C. Michener, I.W. Cotton, K.C. Kelley, D.E. Liddle, E. Meyer, January 1972.

RFC 305, Unknown Host Numbers, R. Alter, February 1972.

RFC 309, Data and File Transfer Workshop Announcement, A. K. Bhushan, March 1972.

RFC 310, Another Look at Data and File Transfer Protocols, A.K. Bhushan, April 1972.

RFC 318, Telnet Protocols, J. Postel, April 1972.

RFC 369, Evaluation of ARPANET Services January-March, 1972, J.R. Pickens, July 1972.

RFC 381, Three Aids to Improved Network Operation, J.M. McQuillan, July 1972.

RFC 386, Letter to TIP Users-2, B. Cosell, D.C. Walden, August 1972.

RFC 426, Reconnection Protocol, R. Thomas, January 1973.

RFC 435, Telnet Issues, B. Cosell, D.C. Walden, January 1973.

RFC 451, Tentative Proposal for a Unified User Level Protocol, M. A. Padlipsky, February 1973.

RFC 468, FTP Data Compression, R.T. Braden, March 1973.

RFC 486, Data Transfer Revisited, R.D. Bressler, March 1973.

RFC 501, Un-muddling 'Free File Transfer', K.T. Pogran, May 1973.

RFC 513, Comments on the New Telnet Specifications, W. Hathaway, May 1973.

RFC 520, Memo to FTP Group: Proposal for File Access Protocol, J.D. Day, June 1973.

RFC 524, Proposed mail protocol, J.E. White, June 1973.

RFC 525, MIT-MATHLAB meets UCSB-OLS -- an example of resource sharing. W. Parrish, J.R. Pickens, June 1973.

RFC 528, Software checksumming in the IMP and network reliability, J.M. McQuillan, June 1973.

RFC 529, Note on Protocol Synch Sequences, A.M. McKenzie, R. Thomas, R.S. Tomlinson, K.T. Pogran, June 1973.

RFC 539, Thoughts on the Mail Protocol Proposed in RFC 524, D. Crocker, J. Postel, July 1973.

RFC 549, Minutes of Network Graphics Group Meeting, 15-17 July 1973, J.C. Michener, July 1973.

RFC 552, Single Access to Standard Protocols, A.D. Owen, July 1973.

RFC 553, Draft Design for a Text/Graphics Protocol, C.H. Irby, K. Victor, July 1973.

RFC 559, Comments on the New Telnet Protocol and its Implementation, A.K. Bhushan, August 1973.

RFC 569, NETED: A Common Editor for the ARPA Network, M.A. Padlipsky, October 1973.

RFC 596, Second thoughts on Telnet Go-Ahead, E.A. Taft, December 1973.

RFC 625, On-line hostnames service, M.D. Kudlick, E.J. Feinler, March 1974.

RFC 635, Assessment of ARPANET protocols, V. Cerf, April 1974.

RFC 647, Proposed protocol for connecting host computers to ARPA-like networks via front end processors, M.A. Padlipsky, November 1974.

RFC 675, _____. 1974.

RFC 677, Maintenance of duplicate databases, P.R. Johnson, R. Thomas, January 1975.

RFC 684, Commentary on procedure calling as a network protocol, R. Schantz, April 1975.

RFC 707, High-level framework for network-based resource sharing, J.E. White, December 1975.

RFC 722, Thoughts on Interactions in Distributed Services, J. Haverty, September 1976.

RFC 724, Proposed official standard for the format of ARPA Network messages, D. Crocker, K.T. Pogran, J. Vittal, D.A. Henderson, May 1977.

RFC 746, SUPDUP graphics extension, R. Stallman, March 1978.

Footnotes

1. Of course, the extent to which this was true shouldn’t be overstated. Jon Postel, while still a graduate student, famously simply announced himself to be the "naming czar".

2. In contrast to technological democracy, network neutrality involves the regulatory treatment of vendors’ efforts to differentiate the speed of service provision to, and access by, users through pricing mechanisms - sometimes, though not always, driven by relations between service and content providers that are themselves subject to competition (antitrust) law.

3. Other genre distinctions have been found useful by those conducting research on the RFCs. Below (2012), for example, analysed all of the documents identifiable as "guides" by those in the field of technical communication for the ways in which they were used for community-building in a valuable case study for that community of scholars and practitioners.

The invisible politics of Bitcoin: governance crisis of a decentralised infrastructure

This paper is part of 'Doing internet governance: practices, controversies, infrastructures, and institutions', a Special issue of the Internet Policy Review.

Introduction

Since its inception in 2008, the grand ambition of the Bitcoin project has been to support direct monetary transactions among a network of peers, by creating a decentralised payment system that does not rely on any intermediaries. Its goal is to eliminate the need for trusted third parties, particularly central banks and governmental institutions, which it regards as prone to corruption.

Recently, the community of developers, investors and users of Bitcoin has experienced what can be regarded as an important governance crisis – a situation whereby diverging interests have run the risk of putting the whole project in jeopardy. This governance crisis is revealing of the limitations of excessive reliance on technological tools to solve issues of social coordination and economic exchange. Taking the Bitcoin project as a case study, we argue that online peer-to-peer communities involve inherently political dimensions, which cannot be dealt with purely on the basis of protocols and algorithms.

The first part of this paper exposes the specificities of Bitcoin, presents its underlying political economy, and traces the short history of the project from its inception to the crisis. The second part analyses the governance structure of Bitcoin, which can be understood as a two-layered construct: an infrastructure seeking to govern user behaviour via a decentralised, peer-to-peer network on the one hand, and an open source community of developers designing and architecting this infrastructure on the other. We explore the challenges faced at both levels, the solutions adopted to ensure the sustainability of the system, and the unacknowledged power structures they involve. In a third part, we expose the invisible politics of Bitcoin, with regard to both the implicit assumptions embedded in the technology and the highly centralised and largely undemocratic development process it relies on. We conclude that the overall system displays a highly technocratic power structure, insofar as it is built on automated technical rules designed by a minority of experts with only limited accountability for their decisions. Finally, drawing on the wider framework of internet governance research and practice, we argue that some form of social institution may be needed to ensure accountability and to preserve the legitimacy of the system as a whole – rather than relying on technology alone.

I. Bitcoin in theory and practice

A. The Bitcoin project: political economy of a trustless peer-to-peer network

Historically, money has taken many different forms. Far from being an exclusively economic tool, money is closely associated with social and political systems as a whole – which Nigel Dodd refers to as the social life of money (Dodd, 2014). Indeed, money has often been presented as an instrument which can be leveraged to shape society in certain ways and, as Dodd has shown, this includes powerful utopian dimensions: for sociologist Georg Simmel, for instance, an ideal social order hinged upon the definition of a “perfect money” (Simmel, 2004). In the wake of economic crises in particular, it is not uncommon to witness the emergence of alternative money or exchange frameworks aimed at establishing different social relations between individuals – more egalitarian, or less prone to accumulation and speculation (North, 2007). On the other hand, however, ideals of self-regulating markets have often sought to detach money from existing social relations, resulting in a progressive “disembedding” of commercial interactions from their social and cultural context (Polanyi, 2001 [1944]).

Since it first appeared in 2009, the decentralised cryptocurrency Bitcoin has raised high hopes for its potential to reshuffle not only the institutions of banking and finance, but also more generally power relations within society. The potential consequences of this innovation, however, are profoundly ambivalent. On the one hand, Bitcoin can be presented as a neoliberal project insofar as it radicalises Friedrich Hayek’s and Milton Friedman’s ambition to end the monopoly of nation-states (via their central banks) on the production and distribution of money (Hayek, 1990), or as a libertarian dream which aims at reducing the control of governments on the economy (De Filippi, 2014). On the other hand, it has also been framed as a solution for greater social justice, by undermining oligopolistic and anti-democratic arrangements between big capital and governments, which are seen to favour economic crises and inequalities. Both of these claims hinge on the fact that as a socio-technical assemblage, Bitcoin seems to provide a solution for “governing without governments”, which appeals to liberal sentiments both from the left and from the right. Its implicit political project can therefore be understood as effectively getting rid of politics by relying on technology.

More generally, distributed networks have long been associated with a redistribution of power relations, due to the elimination of single points of control. This was one of the main interpretations of the shift in telecommunications routing methods from circuit switching to packet switching in the 1960s and the later deployment of the internet protocol suite (TCP/IP) from the 1970s onwards (Abbate, 1999), as well as the adoption of the end-to-end principle – which proved to be a compelling but also partly misleading metaphor (Gillespie, 2006). The idea was that information could flow through multiple and unfiltered channels, thus circumventing any attempts at controlling or censoring it, and providing a basis for more egalitarian social relations as well as stronger privacy. In practice however, it became clear that network design is much more complex and that additional software, protocols and hardware, at various layers of the network, could (and did) provide alternate forms of re-centralisation and control and that, moreover, the internet was not structurally immune to other modes of intervention such as law and regulation (Benkler, 2016).

However, there have been numerous attempts at re-decentralising the network, most of which have adopted peer-to-peer architectures as opposed to client-server alternatives, with the underlying assumption that such technical solutions provide both individual freedom and “a promise of equality” (Agre, 2003) 1. Other technologies have also been adopted in order to add features relating to user privacy for instance, which involve alternative routing methods (Dingledine, Mathewson, & Syverson, 2004) and cryptography (which predates computing, see e.g. Kahn 1996). In particular, such ideas were strongly advocated starting from the late 1980s by an informal collective of hackers, mathematicians, computer scientists and activists known as cypherpunks, who saw strong cryptography as a means of achieving greater privacy and security of interpersonal communications, especially in the face of perceived excesses and abuses on the part of governmental authorities. 2 Indeed, all of these solutions pursue implicit or explicit goals, in terms of their social or political consequences, which can be summed up as enabling self-organised direct interactions between individuals, without relying on a third party for coordination, and also preventing any form of surveillance or coercion.

Yet cryptography is not only useful to protect the privacy of communications; it can also serve as a means to promote further decentralisation and disintermediation when combined with a peer-to-peer architecture. In 2008, a pseudonymous entity named Satoshi Nakamoto released a white paper on the Cryptography Mailing list (metzdowd.com) describing the idea of a decentralised payment system relying on a distributed ledger with cryptographic primitives (Nakamoto, 2008a). One year later, a first implementation of the ideas defined in the white paper was released and the Bitcoin network was born. It introduces its own native currency (or unit of account) with a fixed supply, whose issuance is regulated only and exclusively by technological means. The Bitcoin network can therefore be used to replace at least some of the key functions played by central banks and other financial institutions in modern societies: the issuance of money on the one hand, and, on the other hand, the fiduciary functions of banks and other centralised clearing houses.

Supported by many self-proclaimed libertarians, Bitcoin is often presented as an alternative monetary system, capable of bypassing most of the state-backed financial institutions – with all of their shortcomings and vested interests which have become so obvious in the light of the financial crisis of 2008. Indeed, as opposed to traditional centralised economies, Bitcoin’s monetary supply is not controlled by any central authority, but is rather defined (in advance) by the Bitcoin protocol – which precisely stipulates the total amount of bitcoins that will ever come into being (21 million) and the rate at which they will be issued over time. A certain number of bitcoins are generated, on average, every ten minutes and assigned as a reward to those who lend their computational resources to the Bitcoin network in order to both operate and secure the network. In this sense, Bitcoin can be said to mimic the characteristics of gold. Just as gold cannot be created out of thin air, but rather needs to be extracted from the earth (through mining), Bitcoin also requires a particular kind of computational effort – also known as mining – in order for the network protocol to generate new bitcoins (and just as gold progressively becomes harder to find as the stock gets depleted, so too the number of bitcoins generated through mining decreases over time).
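
Because the schedule is fixed in the protocol, the 21 million cap can be verified directly from the publicly documented parameters: an initial block reward of 50 bitcoins, halved every 210,000 blocks, accounted in indivisible satoshis (10^8 per bitcoin). A short Python sketch reproduces the arithmetic:

```python
# Bitcoin's fixed issuance schedule, computed from the publicly known
# protocol parameters: a 50 BTC block reward, halved every 210,000 blocks,
# with rewards counted in integer satoshis (10^8 per bitcoin).
HALVING_INTERVAL = 210_000
subsidy = 50 * 10**8          # initial reward, in satoshis

total = 0
while subsidy > 0:
    total += subsidy * HALVING_INTERVAL
    subsidy //= 2             # integer halving, as in the reference client

print(f"total supply: {total / 10**8:,.8f} BTC")  # just under 21 million
```

The loop terminates once the integer reward rounds down to zero, reporting a total just under 21 million.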

The establishment and maintenance of a currency has traditionally been regarded as a key prerogative of the state, as well as a central institution of democratic societies. Controlling the money supply, by different means, is one of the main instruments that can be leveraged to shape the economy, both domestically and in the context of international trade. Yet, regardless of whether one believes that the state has the right (or duty) to intervene in order to regulate the market economy, monetary policies have sometimes been instrumentalised by governments using inflation as a means of financing government spending (as in the case of the Argentine great depression of 1998-2002). Perhaps most critical is the fact that, with the introduction of fractional-reserve banking, commercial banks acquired the ability to (temporarily) increase the money supply by giving out loans which are not backed by actual funds (Ferguson, 2008). 3 The fractional-reserve banking system (and the tendency of commercial banks to create money at unsustainable rates) is believed to be one of the main factors leading to the global financial crisis of 2008 – which has brought the issue of private money issuance back into the public debate (Quinn, 2009).

Although there have been many attempts at establishing alternative currencies, and cryptocurrencies had been discussed for a long time, the creation of the Bitcoin network was in large part a response to the social and cultural contingencies that emerged during the global financial crisis of 2008. As explicitly stated by Satoshi Nakamoto in various blog posts and forums, Bitcoin aimed at eradicating corruption from the realm of currency issuance and exchange. Given that governments and central banks could no longer be trusted to secure the value of fiat currency and other financial instruments, Bitcoin was designed to operate as a trustless technology, which relies only on maths and cryptography. 4 The paradox is that this trustless technology is precisely what is needed to build a new form of “distributed trust” (Mallard, Méadel, & Musiani, 2014).

Trust management is a classic issue in peer-to-peer computing, and can be understood as the confidence a peer has that it will be treated fairly and securely when interacting with other peers – for example, during transactions or file downloads – especially through the prevention of malicious operations and collusion schemes (Zhu, Jajodia, & Kankanhalli, 2006). To address this issue, Bitcoin brought two fundamental innovations which, together, provide for the self-governability and self-sustainability of the network. The first innovation is the blockchain, which relies on public-private key encryption and hashing algorithms to create a decentralised, append-only and tamper-proof database. The second innovation is Proof-of-Work, a decentralised consensus protocol using cryptography and economic incentives to encourage people to operate and simultaneously secure the network. Accordingly, the Bitcoin protocol represents an elegant, but purely technical, solution to the issue of social trust – which is normally resolved by relying on trusted authorities and centralised intermediaries. With the blockchain, to the extent that trust is delegated to the technology, individuals who do not know (and therefore do not necessarily trust) each other can now transact with one another on a peer-to-peer basis, without the need for any intermediary.

Hence Bitcoin uses cryptography not as a way to preserve the secrecy of transactions, but rather to create a trustless infrastructure for financial transactions. In this context, cryptography is merely used as a discrete notational system (DuPont, 2014) designed to promote the autonomy of the system, which can operate independently of any centralised third party. 5 It relies on simple cryptographic primitives or building blocks (SHA256 hash functions and public-key cryptography) to resolve, in a decentralised manner, the double-spending problem 6 found in many virtual currencies. The scheme used by Bitcoin (Proof-of-Work) relies on a peer-to-peer network of validators (or miners) who commit their computational resources (hashing power) to the network in order to record all valid transactions into a decentralised public ledger (a.k.a. the blockchain) in chronological order. All valid transactions are recorded into a block, which incorporates a reference (or hash) to the previous block – so that any attempt at tampering with the order or the content of any past transaction will necessarily result in an apparent discontinuity in the chain of blocks.
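
To make this chaining mechanism concrete, the following minimal sketch (deliberately far simpler than Bitcoin’s actual block format) shows how each block commits to its predecessor through a SHA256 hash, so that altering any past record breaks every subsequent link:

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's entire contents, including its reference to the previous block."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, transactions):
    prev = block_hash(chain[-1]) if chain else "0" * 64   # genesis block has no parent
    chain.append({"prev_hash": prev, "transactions": transactions})

chain = []
append_block(chain, ["alice pays bob 1"])
append_block(chain, ["bob pays carol 1"])

# Tampering with a past transaction makes the discontinuity apparent: the
# prev_hash stored in the next block no longer matches the recomputed hash.
chain[0]["transactions"] = ["alice pays mallory 1"]
assert chain[1]["prev_hash"] != block_hash(chain[0])
```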

By combining a variety of existing technologies with basic cryptographic primitives, Bitcoin has created a system that is provably secure, practically incorruptible and probabilistically unattackable 7 – all this without resorting to any centralised authority in charge of policing the network. Bitcoin relies on a fully open and decentralised network, designed in such a way that anyone is free to use the network and contribute to it, without the need for any kind of prior identification. Yet, contrary to popular belief, Bitcoin is neither anonymous nor privacy-friendly. Quite the contrary: anyone with a copy of the blockchain can see the history of all Bitcoin transactions. Decentralised verification requires, indeed, that every transaction be made available for validation to all nodes in the network and that every transaction ever done on the Bitcoin network can be traced back to its origin. 8

In sum, Bitcoin embodies in its very protocols a profoundly market-driven approach to social coordination, premised on strong assumptions of rational choice (Olson, 1965) and game-theoretical principles of non-cooperation (von Neumann & Morgenstern, 1953 [1944]). The (self-)regulation of the overall system is primarily achieved through reliance on perfect information (the blockchain), combined with a consensus protocol and incentive mechanism (Proof-of-Work), to govern the mutually adjusting interests of all involved actors. Other dimensions of social trust and coordination (such as loyalty, coercion, etc.) are seemingly expunged from a system which expressly conforms to Hayek’s ideals of catallactic organisation (Hayek, 1976, p. 107ff).

B. From inception to crisis

1. A short history of Bitcoin

The history of Bitcoin – albeit very short – consists of an intense series of events, which have led to the decentralised cryptocurrency becoming one of the most widely used forms of digital cash. The story began in October 2008, with the release of the Bitcoin white paper (Nakamoto, 2008a). In January 2009, the Bitcoin software was published and the first block of the Bitcoin blockchain was created (the so-called Genesis block) with a release of 50 bitcoins. Shortly after, the first Bitcoin transaction took place between Satoshi Nakamoto and Hal Finney – a well-known cryptographer and prominent figure of the cypherpunk movement in the 1990s. It was not until a few months later that Bitcoin finally acquired an equivalent value in fiat currency 9 and slowly made its way into the commercial realm, as it started being accepted by a small number of merchants. 10

In the early days, Satoshi Nakamoto was actively contributing to the source code and collaborating with many of the early adopters. Yet, he was always very careful never to disclose any personal details, so as to keep his identity secret. To date, in spite of the various theories that have been put forward, 11 the real identity of Satoshi Nakamoto remains unknown. In a way, the pseudonymity of Satoshi Nakamoto perfectly mirrors that of his brainchild, Bitcoin – a technology designed to substitute technology for trust, thus rendering the identification of transacting parties unnecessary.

Over the next few months, Bitcoin adoption continued to grow, slowly but steadily. Yet, the real spike in the popularity of Bitcoin was not due to increased adoption by commercial actors, but rather to the establishment in January 2011 of Silk Road – an online marketplace (mostly used for the trading of illicit drugs) relying on Tor and Bitcoin to preserve the anonymity of buyers and sellers. Silk Road paved the way for Bitcoin to enter the mainstream, but also led many governmental agencies to raise concerns that Bitcoin could be used to create black markets, evade taxation, facilitate money laundering and even support the financing of terrorist activities.

In April 2011, to the surprise of many, Satoshi Nakamoto announced on a public mailing list that he would no longer work on Bitcoin. “I’ve moved on to other things”, he said, before disappearing without further justification. Yet, before doing so, he transferred control over the source code repository of the Bitcoin client to Gavin Andresen, one of the main contributors to the Bitcoin code. Andresen, however, did not want to become the sole leader of such a project, and thus granted control over the code to four other developers – Pieter Wuille, Wladimir van der Laan, Gregory Maxwell, and Jeff Garzik. Those entrusted with these administration rights for the development of the Bitcoin project became known as the core developers.

As the popularity of Bitcoin continued to grow, so did the commercial opportunities and regulatory concerns. However, with the exit of Satoshi Nakamoto, Bitcoin was left without any leading figure or institution that could speak on its behalf. This is what justified the creation, in September 2012, of the Bitcoin Foundation – an American lobbying group focused on standardising, protecting and promoting Bitcoin. With a board comprising some of the biggest names in the Bitcoin space (including Gavin Andresen himself), the Bitcoin Foundation was intended to do for Bitcoin what the Linux Foundation had done for open source software: pay developers to work full-time on the project, establish best practices and, most importantly, bring legitimacy and build trust in the Bitcoin ecosystem. And yet, concerns were raised regarding the legitimacy of this self-selected group of individuals – many of whom had dubious connections or were allegedly related to specific Bitcoin scams 12 – to act as the referent and public face of Bitcoin. Beyond the irony of a decentralised virtual currency like Bitcoin being represented by a centralised profit-driven organisation, it soon became clear that the Bitcoin Foundation was actually unable to take on that role. Plagued by a series of financial and management issues, with some of its former board members under criminal investigation and most of its funds depleted, the Bitcoin Foundation has today lost much of its credibility.

But even the fall of the Bitcoin Foundation did not seem to significantly affect Bitcoin – probably because the Foundation was merely a facade that never had the ability to effectively control the virtual currency. Bitcoin adoption has continued to grow over the past few years, eventually reaching a market capitalisation of almost US$7 billion. Bitcoin still has no public face and no actual institution that can represent it. Yet, people continue to use it, to maintain its protocol, and to rely on its technical infrastructure for an increasing number of commercial (and non-commercial) operations. And although a few Bitcoin-specific regulations have been enacted thus far (see e.g. the NY State BitLicense), regulators around the world have, for the most part, refrained from regulating Bitcoin in a way that would significantly impinge upon it (De Filippi, 2014).

Bitcoin thus continues to operate, and continues to be regarded (by many) as an open source software platform that relies on a decentralised peer-to-peer network governed by distributed consensus. Yet, if one looks at the underlying reasons why Bitcoin was created in the first place, and the ways in which it has eventually been adopted by different categories of people, it becomes clear that the original conception of Bitcoin as a decentralised platform for financial disruption has progressively been compromised by the social and cultural context in which the technology operates.

Following the first wave of adoption by the cypherpunk community, computer geeks and crypto-libertarians, a second (larger) wave of adoption followed the advent of Silk Road in 2011. But what actually brought Bitcoin to the mainstream were the new opportunities for speculation that emerged around the cryptocurrency, as investors from all over the world started to accumulate bitcoins (either by purchasing them or by mining) with the sole purpose of generating profits through speculation. This trend is a clear reflection of the established social, economic and political order of a society driven by the capitalistic values of accumulation and profit maximisation. Accordingly, even a decentralised technology specifically designed to promote disintermediation and financial disruption may prove unable to protect itself from the inherent tendencies of modern capitalist society to concentrate wealth and centralise power in the hands of a few (Kostakis & Bauwens, 2014).

The illusion of Bitcoin as a decentralised global network had already been challenged in the past, with the advent of large mining pools, mostly based in China, which nowadays control over 75% of the network. But this is only one part of the problem. It took a simple – yet highly controversial – protocol issue to reveal that, in spite of the open source nature of the Bitcoin platform, the governance of the platform itself is also highly centralised.

2. The block size dispute

To many outside observers, the contentious issue may seem surprisingly specific. As described earlier, the blockchain underpinning the Bitcoin network is composed of a series of blocks listing the totality of transactions executed so far. For a number of reasons (mainly related to preserving the security and stability of the system, as well as ensuring easy adoption), the size of these blocks was initially set at 1 megabyte. In practice, however, this technical specification also restricts the number of transactions which the blockchain can handle in a given time frame. Hence, as the adoption of Bitcoin grew, along with the number of transactions to be processed, this arbitrary limitation (which was originally perceived as innocuous) became the source of heated discussions – on several internet forums, blogs, and at conferences – leading to an important dispute within the Bitcoin community (Rizzo, 2016). Some argued that the one megabyte cap was effectively preventing Bitcoin from scaling and was thus a crucial impediment to its growth. Others claimed that many workarounds could be found (e.g. off-chain solutions that would take load off the main Bitcoin blockchain) to resolve this problem without increasing the block size. They insisted that maintaining the cap was necessary both for security and for ideological reasons, and was a precondition to keeping the system more inclusive and decentralised.
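
The practical consequence of the cap is easy to quantify. Assuming, for the sake of illustration, an average transaction size of around 250 bytes (a commonly cited ballpark figure, not a protocol constant), a back-of-the-envelope calculation yields a throughput in the order of seven transactions per second:

```python
# Back-of-the-envelope throughput implied by the 1 MB cap. The 250-byte
# average transaction size is an assumption, not a protocol value.
block_size_bytes = 1_000_000      # 1 megabyte cap per block
avg_tx_bytes = 250                # assumed average transaction size
block_interval_s = 600            # one block every ~10 minutes

tx_per_block = block_size_bytes // avg_tx_bytes    # 4,000 transactions per block
tx_per_second = tx_per_block / block_interval_s    # ~6.7 transactions per second
print(tx_per_block, round(tx_per_second, 1))
```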

On 15 August 2015, after the community failed to reach any form of consensus over the issue of block sizes, a spinoff project was proposed. Frustrated by the reluctance of the other Bitcoin developers to officially raise the block size limit (Hearn, 2015), two core developers, Gavin Andresen and Mike Hearn, released a new version of the Bitcoin client software (Bitcoin XT) with the latent capacity of accepting and producing an increased block size of eight megabytes. This client constitutes a particular kind of fork of the original software or reference client (called Bitcoin Core). Bitcoin XT was released as a soft fork, 13 with the possibility of turning into a hard fork if and when a particular set of conditions were met. Initially, the software would remain identical to the Bitcoin Core software, with the exception that all the blocks mined with the Bitcoin XT software would be “signed” by XT. This signature serves as a proxy for a poll: starting from 11 January 2016, in the event that at least 75% of the most recent 1,000 blocks had been signed by XT, the software would start accepting and producing blocks with a maximum block size of eight megabytes – with the cap increasing so as to double every two years. This would mark the beginning of an actual hard fork, leading to the emergence of two blockchain networks featuring two different and incompatible protocols.
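
The activation rule just described amounts to a few lines of logic. The sketch below is a hypothetical reconstruction (not the actual Bitcoin XT source code) of the 75% supermajority poll over the most recent 1,000 blocks:

```python
# Hypothetical reconstruction of Bitcoin XT's activation poll (not actual XT code).
WINDOW = 1_000       # number of most recent blocks considered
THRESHOLD = 750      # 75% supermajority required for activation

def xt_activates(recent_blocks):
    """recent_blocks: list of booleans, True if a block was 'signed' by XT."""
    window = recent_blocks[-WINDOW:]
    return len(window) == WINDOW and sum(window) >= THRESHOLD

# Example: if 80% of the last 1,000 blocks signal XT, the larger blocks activate.
print(xt_activates([True] * 800 + [False] * 200))   # True
```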

The launch of Bitcoin XT proved highly controversial. It generated a considerable amount of debate among the core developers, and eventually led to a full-blown conflict which has been described as a civil war within the Bitcoin community (Hearn, 2016). Among the Bitcoin core developers, Gregory Maxwell in particular was a strong proponent of maintaining the 1 megabyte cap. According to him, increasing the block size cap would constitute a risky change to the fundamental rules of the system, and would inherently push Bitcoin towards more centralisation – because it would mean that less powerful machines (such as home computers) could no longer handle the blockchain, thus making the system more prone to being overrun by a small number of big computers and mining pools. Similarly, Nick Szabo – a prominent cryptographer involved since the early days in the cypherpunk community – declared that increasing the block size so rapidly would constitute a huge security risk that could jeopardise the whole network. Finally, another argument raised against the Bitcoin XT proposal was that increasing the block size could lead to variable and delayed confirmation times (as larger blocks may fail to be confirmed every ten minutes).

Within the broader Bitcoin community, the conflict gave rise to copious flame wars in the various online forums that represent the main sources of information for the Bitcoin community (Reddit, Bitcoin Info, Bitcoin.org, etc.). Many accused the proponents of Bitcoin XT of using populist arguments and alarmist strategies to win people over to their side. Others claimed that, by promoting a hard fork, Bitcoin XT developers were doing exactly what the Bitcoin protocol was meant to prevent: they were creating a situation whereby people on each side of the network would be able to spend the same bitcoins twice. In some cases, the conflict eventually resulted in outright censorship and the banning of Bitcoin XT supporters from the most popular Bitcoin websites. 14 Most critically, the conflict also led to a variety of personal attacks on Bitcoin XT proponents, and several online operators who expressed support for Bitcoin XT experienced Distributed Denial of Service (DDoS) attacks.

In the face of these events, and given the low rate of adoption of Bitcoin XT by the Bitcoin community at large, 15 Mike Hearn, one of the core developers and key instigators of Bitcoin XT, decided to resign from the development of Bitcoin – which he believed was on the brink of technical collapse. Hearn condemned the emotionally charged reactions to the block size debate, and pointed to major disagreements among the appointed Bitcoin core developers in the interpretation of Nakamoto’s legacy.

But the conflict did not end there. Bitcoin XT was only the first of a series of improvements subsequently proposed to the Bitcoin protocol. As Bitcoin XT failed to gain mass adoption, it was eventually abandoned on 23 January 2016. New suggestions were made to resolve the block size problem (see e.g. Bitcoin Unlimited, Bitcoin Classic, BitPay Core). The most popular today is probably Bitcoin Classic, which proposes to increase the block size cap to 2 megabytes (instead of 8) by following the same scheme as Bitcoin XT (i.e. activating once 75% of Bitcoin miners have endorsed the new format). One interesting aspect of Bitcoin Classic is that it also plans to set up a specific governance structure intended to promote more democratic decision-making with regard to code changes, by means of a voting process that will account for the opinions of the broader community of miners, users, and developers. Bitcoin Classic has received support from relevant players in the Bitcoin community, including Gavin Andresen himself, and currently accounts for 25% of the Bitcoin network’s nodes.

It is, at this moment in time, quite difficult to predict where Bitcoin is heading. Some may think that the Bitcoin experiment has failed and that it is not going anywhere; 16 others may think that Bitcoin will continue to grow in underserved and inaccessible markets as a gross settlement network for payment obligations and safe haven assets; 17 while many others believe that Bitcoin is still heading to the moon and that it will continue to surprise us as time goes on. 18 One thing is sure though: regardless of the robustness and technical viability of the Bitcoin protocol, this governance crisis and failure in conflict resolution have highlighted the fragility of the current decision-making mechanisms within the Bitcoin project. They have also emphasised the tension between the (theoretically) decentralised nature of the Bitcoin network and the highly centralised governance model that has emerged around it, which ultimately relied on the goodwill and aligned interests of only a handful of people.

II. Bitcoin governance and its challenges

Governance structures are set up in order to adequately pursue collective goals, maintain social order, channel interests and keep power relations in check, while ensuring the legitimacy of actions taken collectively. They are therefore closely related to the issue of trust, which is a key aspect of social coordination and which online socio-technical systems address by combining informal interpersonal relations, formal rules and technical solutions in different ways (Kelty, 2005). In the case of online peer-production communities, two essential features are decisive in shaping their governance structure: the fact that they are volunteer-driven and that they seek to self-organise (Benkler, 2006). Thus, compared to more traditional forms of organisation such as firms and corporations, they often need to implement alternative means of coordination and incentivisation (Demil & Lecocq, 2006).

Nicolas Auray has shown that, although the nature of online peer-production communities can be very different (ranging from Slashdot to Wikipedia and Debian), they all face three key challenges which they need to address in order to thrive (Auray, 2012):

  • definition and protection of community borders;

  • establishment of incentives for participation and acknowledgment of the status of contributors;

  • and, finally, pacification of conflicts.

Understanding how each of these challenges is addressed in the case of the Bitcoin project is particularly difficult, since Bitcoin is composed of two separate, but highly interdependent layers, which involve very different coordination mechanisms. On the one hand, there is the infrastructural layer: a decentralised payment system based on a global trustless peer-to-peer network which operates according to a specific set of protocols. On the other hand, there is the layer of the architects: a small group of developers and software engineers who have been entrusted with key roles for the development of this technology.

The Bitcoin project can thus be said to comprise at least two different types of communities – each with their own boundaries and protection mechanisms, rewards or incentive systems, and mechanisms for conflict resolution. One is the community of nodes within the network, which includes both passive users merely using the network to transfer money around, and “active” users (or miners) contributing their own computational resources to the network in order to support its operations. The other is the community of developers, who contribute code to the Bitcoin project with a view to maintaining or improving its functionalities. What the crisis described above has revealed is the difficulty of establishing a governance structure which would properly interface both of these dimensions. As a consequence, a small number of individuals became responsible for the long-term sustainability of a large collective open source project, and the project rapidly fell prone to interpersonal conflict once consensus could no longer be reached among them.

This section will describe the specificities of the two-layered structure of the Bitcoin project and the mechanisms put in place to address these key challenges, in order to better understand any shortcomings they may display.

A. The Bitcoin network: governance by infrastructure

As described earlier, the Bitcoin network purports to be both self-governing and self-sustaining. 19 As a trustless infrastructure, it seeks to function independently of any social institutions. The rules governing the platform are not enforced by any single entity; instead, they are embedded directly into the network protocol that every user must abide by. 20

Given the open and decentralised nature of the Bitcoin network, its community borders are extremely flexible and dynamic, in that everyone is free to participate and contribute to the network – either as a passive user or as an active miner. The decentralised character of the network, however, creates significant challenges when it comes to its protection, mainly due to the lack of a centralised authority in charge of policing it. Bitcoin thus implemented a technical solution to protect the network against malicious attacks (e.g. so-called sybil attacks) through the Proof-of-Work mechanism, designed to make it economically expensive to cheat the network. Yet, while the protocol has proved successful thus far, it remains subject to considerable criticism. Beyond the problems related to the high computational costs of Proof-of-Work, 21 the Bitcoin network can also be co-opted by capital: if one or more colluding actors were to control at least 51% of the network’s hashing power, they would be able to arbitrarily censor transactions by validating certain blocks at the expense of others (the so-called 51% attack).

With regard to status recognition, the Bitcoin protocol eliminates the problem at the root by creating a trustless infrastructure where the identity of the participating nodes is entirely irrelevant. In Bitcoin, there is no centralised authority in charge of assigning a network identifier (or account) to each individual node. The notions of identity and status are thus eradicated from the system, and the only thing that matters – ultimately – is the amount of computational resources that every node provides to the network.
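
This absence of a naming authority can be illustrated with a small sketch: any participant can mint an identifier locally, with no registration step. The snippet below is a loose simplification; real Bitcoin addresses are derived from ECDSA keypairs via SHA256, RIPEMD-160 and Base58Check encoding, whereas here a random secret and a plain SHA256 digest stand in for that machinery.

```python
import hashlib
import os

def new_pseudonym():
    """Mint a network identifier entirely offline, with no central registry.

    Loose simplification: a random 32-byte secret stands in for an ECDSA
    keypair, and a truncated SHA256 digest stands in for Bitcoin's actual
    address derivation (SHA256 + RIPEMD-160 + Base58Check).
    """
    secret = os.urandom(32)
    return hashlib.sha256(secret).hexdigest()[:40]

# A user can create as many independent identities as they wish.
print(new_pseudonym())
print(new_pseudonym())
```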

Conversely, the reward system represents one of the constitutive elements of the Bitcoin network. The challenge has been resolved in a purely technical manner by the Bitcoin protocol, through the notion of mining. In addition to providing a protection mechanism, the Proof-of-Work algorithm introduces a series of economic incentives to reward those who contribute to maintaining and securing the network with their computational resources (or hashing power). The mining algorithm is such that the first to find the solution to a hard mathematical problem (whose difficulty increases over time) 22 is able to register a new block into the blockchain and earns a specific amount of bitcoins as a reward (the reward was initially set at 50 bitcoins and is designed to be halved every four years). From a game-theoretical perspective, this creates an interesting incentive for all network participants to provide more and more resources to the network, so as to increase their chances of being rewarded bitcoins. 23 Bitcoin’s incentive mechanism is thus a complicated, albeit mathematically elegant, way of bringing a decentralised network of self-interested actors to collaborate and contribute to the operations of the Bitcoin network by relying exclusively on mathematical algorithms and cryptography. Over time, however, the growing difficulty of mining due to the increasing amount of computational resources engaged in the network, combined with the decreasing rewards awarded by the network, has led to a progressive concentration of hashing power into a few mining pools, which today control a large majority of the Bitcoin network – thereby making it more vulnerable to a 51% attack. 24 Hence, in spite of its original design as a fully decentralised network ruled by distributed consensus, in practice the Bitcoin network has evolved into a highly centralised network ruled by an increasingly oligopolistic market structure.
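
The computational puzzle at the heart of this incentive scheme can be rendered as a toy example. The sketch below searches for a nonce that drives a block header’s SHA256 hash below a target; the difficulty used here is minuscule compared to Bitcoin’s, and the real protocol hashes a structured 80-byte header with double SHA256, but the lottery-like structure of the search is the same.

```python
import hashlib

def mine(block_header: bytes, difficulty_bits: int = 16):
    """Toy Proof-of-Work: find a nonce such that SHA256(header + nonce) < target.

    Bitcoin's real difficulty is vastly higher and self-adjusts so that the
    network as a whole finds one solution roughly every ten minutes.
    """
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(block_header + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce, digest.hex()
        nonce += 1

nonce, digest = mine(b"example block header")
print(nonce, digest)   # expected work: ~2**16 hash attempts on average
```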

Finally, with regard to the issue of conflict resolution, it is first important to determine what constitutes a conflict at the level of the Bitcoin infrastructure. If the purpose of the Bitcoin protocol is for a decentralised network of peers to reach consensus as to the right set of transactions (or block) that should be recorded into the Bitcoin blockchain, then a conflict arises whenever two alternative blocks (which are both valid from a purely mathematical standpoint) are registered by different network participants in the same blockchain – thus creating two competing versions (or forks) of the same blockchain. Given that there is no way of deciding objectively which blockchain should be favoured over the other, the Bitcoin protocol implements a specific fork-choice strategy stipulating that, if there is a conflict somewhere on the network, the longest chain shall win. 25 Again, as with the two previous mechanisms, the longest-chain rule is a simple and straightforward mechanism for resolving conflicts within the Bitcoin network by relying – solely and exclusively – on technological means.
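
A minimal rendering of this fork-choice rule is given below. As a simplification it compares competing chains by block count; production clients actually compare the total accumulated Proof-of-Work, for which “the longest chain” is the customary shorthand.

```python
def choose_fork(chains):
    """Toy fork-choice rule: prefer the chain with the most blocks.

    Simplification: real clients prefer the chain with the most accumulated
    Proof-of-Work, not simply the greatest number of blocks.
    """
    return max(chains, key=len)

chain_a = ["genesis", "b1", "b2"]
chain_b = ["genesis", "b1", "b2'", "b3'"]   # a competing fork, one block ahead
print(choose_fork([chain_a, chain_b]))       # all nodes converge on chain_b
```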

It is clear from this description that the objective of Satoshi Nakamoto and the early Bitcoin developers was to create a decentralised payment system that is both self-sufficient and self-contained. Perhaps naively, they thought it possible to create a new technological infrastructure that would be able to govern itself – through its own protocols and rules – and that would not require any third-party intervention in order to sustain itself. And yet, in spite of the mathematical elegance of the overall system, once introduced into a particular socio-economic context, technological systems often evolve in unforeseen ways and may fall prey to unexpected power relations.

In the short history of Bitcoin, indeed, there have been significant tensions related to border protection, reward systems and conflict resolution. Some of these issues are inherent in the technological infrastructure and design of the Bitcoin protocol. Perhaps one of the most revealing of the possible ways of subverting the system is the notion of selfish mining, whereby miners can increase their potential returns by refusing to cooperate with the rest of the network. 26 While this does not constitute a technical threat to the Bitcoin protocol per se, it can nonetheless be regarded as an economic attack, which potentially reduces the security of the Bitcoin network by changing its inherent incentive structure. 27 Other issues emerged as a result of more exogenous factors, such as the Mt. Gox scandal 28 of 2014 – which led to the loss of 774,000 bitcoins (worth more than US$450 million at the time) – as well as many other scams and thefts that have occurred on the Bitcoin network over the years. 29 Most of these were not due to an actual flaw in the Bitcoin protocol, but were mostly the result of ill-intentioned individuals and poor security measures in centralised platforms built on top of the Bitcoin network (Trautman, 2014).

Accordingly, it might be worth considering whether – independently of the technical soundness of the Bitcoin protocol – the Bitcoin network can actually do away with any form of external regulation and/or sanctioning bodies, or whether, in order to ensure the proper integration (and assimilation) of such a technological artefact within the social, economic and cultural contexts of modern societies, it might require some form of surveillance and arbitration mechanisms (either internal or external to the system) to preserve legitimate market dynamics, as well as to guarantee a proper level of consumer protection and financial stability.


B. The Bitcoin architects: governance of infrastructure

Just like many other internet protocols, Bitcoin was initially released as open source software, encouraging people to review the code and spontaneously contribute to it. Despite their formal emphasis on openness, different open source software projects and communities feature very different social and organisational structures. The analysis of communication patterns among various open source projects has shown tendencies ranging from highly distributed exchanges between core developers and active users, to high degrees of centralisation around a single developer (Crowston & Howison, 2005). Moreover, different open source communities enjoy a more or less formalised governance structure, which often evolves as the project matures. Broadly speaking, open source communities have been categorised into two main types or configurations: “democratic-organic” versus “autocratic-mechanistic” (de Laat, 2007). The former display a highly structured and meritocratic governance system (such as the Debian community, most notably), whereas the latter feature less sophisticated and more implicit governance systems, such as the Linux community, where most of the decision-making power has remained in the hands of Linus Torvalds – often referred to as the “benevolent dictator”. Bitcoin definitely falls into the second category.

Indeed, since its inception, Satoshi Nakamoto was the main person in charge of managing the project, as well as the only person with the right to commit code into the official Bitcoin repository. It was only at a later stage, when Satoshi began to disengage from the Bitcoin project, that this power was eventually transferred to a small group of ‘core developers’. Hence, just like many other open source projects, there is a discrepancy between those who can provide input to the project (the community at large) and those who have the final say as to where the project is going. Indeed, while anyone is entitled to submit changes to the software (such as bug fixes, incremental improvements, etc.), only a small number of individuals (the core developers) have the power to decide which changes shall be incorporated into the main branch of the software. This is justified partly by the high level of technical expertise needed to properly assess the proposed changes, but also – more implicitly – by the fact that the core developers have been entrusted with the responsibility of looking after the project, on the grounds of their involvement with (and, to some extent, shared ideology with) the original concept of Satoshi Nakamoto.

With this in mind, we can now provide a second perspective on the three key challenges facing Bitcoin, and analyse how they are being dealt with from the side of its architects: the Bitcoin developers.

The definition and protection of community boundaries, and of the work produced collectively, is a key issue in open source collectives. It classically finds a solution through the setting up of an alternative intellectual property regime and licensing scheme – copyleft – which ensures that the work will be preserved as a common pool resource, but also enforces a number of organisational features and rules intended to preserve some control over the project (O'Mahony, 2003; Schweik & English, 2007). In the case of Bitcoin, community borders are – at least in theory – quite clearly defined. Just like many other open source software projects, there exists a dividing line between the community of users and developers at large, who can provide input and suggest modifications to the code (by making a pull request, for instance), and the core developers, who are in charge of preserving the quality and the functionality of the code, and who are the only ones with the power to accept (or refuse) the proposed modifications (e.g. by merging pull requests into the main branch of the code). However, the distinction between these two communities is not as clear-cut as it may seem, since the community at large also has an important (albeit indirect) influence on the decisions concerning the code.

Specifically, consensus formation among the Bitcoin core developers has been formalised through a process known as Bitcoin Improvement Proposals (BIPs), 30 which builds heavily on the process in place for managing the Python programming language (PEPs, or Python Enhancement Proposals). Historically, both of these processes share similarities with (and sometimes explicitly refer to) what can be considered the “canonical” approach to consensus formation for designing and documenting network protocols: the RFC, or Request For Comments, used to create and develop the internet protocol suite (Flichy, 2007, p. 35ff). The BIP process requires that all source code and documentation be released and made available to anyone, so that a multiplicity of individuals can discuss and improve them. Yet, the final call as to whether a change will be implemented ultimately rests with the core developers, who assess the degree of public support which a proposal has built and seek a consensus among themselves:

We are fairly liberal with approving BIPs, and try not to be too involved in decision making on behalf of the community. The exception is in very rare cases of dispute resolution when a decision is contentious and cannot be agreed upon. In those cases, the conservative option will always be preferred. Having a BIP here does not make it a formally accepted standard until its status becomes Active. For a BIP to become Active requires the mutual consent of the community. Those proposing changes should consider that ultimately consent may rest with the consensus of the Bitcoin users. 31

This description provides a concise overview of the structures of legitimacy and accountability which govern the relationship between the Bitcoin architects (or core developers) and the Bitcoin users. While the community is open for anyone to participate, decision-making is delegated to a small number of people who try to keep intervention to a minimum. Yet, ultimately, the sovereignty of the overall project rests with the people – i.e. the Bitcoin users and miners. If the core developers were to make a modification to the code that the community disagrees with (the miners, in particular), the community might simply refuse to run the new code. This can be regarded as a form of “vetoing power” 32 or “market-based governance” 33 which guarantees that the legitimacy of the code ultimately rests with the users.

Regarding the acknowledgment of status, the challenge is to balance rewards for the most active and competent contributors while promoting and maintaining the collective character of the overall endeavour. Indeed, open source developers are acutely aware of the symbolic rewards which they can acquire by taking part in a given project, and also monitor other contributors in order to assess their position within communities which display a strongly meritocratic orientation (Stewart, 2005). Some communities rank individuals by resorting to systems of marks which provide a quantitative metric for reputation; others rely on much less formalised forms of evaluation. In the case of Bitcoin, some measure of reputation can be derived from the platform used to manage the versioning of the software – GitHub – which includes metrics for users’ activities (such as number of contributions, number of followers, etc.). However, the reputation of the core developers is on a completely different scale, and is mostly derived from their actual merit or technical expertise, as well as a series of less easily defined individual qualities which can be understood as a form of charisma.

Finally, conflict management is probably the most difficult issue to deal with in consensus-oriented communities, since it requires a way to avoid both paralysing deadlocks and divisive fights. Taking Wikipedia as an example, the community relies on specific mechanisms of mutual surveillance as the most basic way of managing conflicts; however, additional regulatory procedures of mediation and sanctions have been established and can be resorted to if needed (Auray, 2012, p. 225). The Debian community is also well known for its sophisticated rules and procedures (Lazaro, 2008). Though not immune to deadlocks and fighting, these communities have managed to scale while maintaining some degree of inclusivity, by shifting contentious issues from substantive to procedural grounds – thus limiting the opportunities for personal disputes and ad hominem attacks.

Obviously, the Bitcoin community lacks any such conflict management procedures. As described above, failure to reach consensus among the core developers concerning the block size dispute led to an actual forking of the Bitcoin project. Forking is a process whereby two (or more) software alternatives are provided to the user base, who will therefore need to make a choice: the adoption rate will ultimately determine which branch of the project wins the competition, or whether both will evolve as two separate branches of the same software. Forking is standard practice in free/libre and open source software development, and although it can be seen as a last-resort solution which can sometimes put the survival of a project at risk (Robles & González-Barahona, 2012), it can also be considered a key feature of its governance mechanisms. For Nyman and Lindman: “The right to fork code is built into the very definition of what it means to be an open source program – it is a reminder that developers have the essential freedom to take the code wherever they want, and this freedom also functions as a looming threat of division that binds the developer community together” (Nyman & Lindman, 2013).

In sum, it can be stressed that, at all three levels (defining borders, acknowledging status, and managing conflicts), the governance of the Bitcoin project relies almost exclusively on its leaders, lending credit to the view that peer production can often lead to the formation of oligarchic organisational forms (Shaw & Hill, 2014). More specifically, in classic Weberian terms – and as can often be observed in online communities – Bitcoin governance consists in a form of domination based on charismatic authority (O'Neil, 2014), largely founded on presumed technical expertise. The recent crisis experienced by the Bitcoin community revealed the limits of consensus formation between individuals driven by sometimes diverging political and commercial interests, and underlined the discrepancies between the overall goals of the project (a self-regulating, decentralised virtual currency and payment system) and the excessively centralised and technocratic elites in charge of the project.

III. The invisible politics of Bitcoin

Vires in Numeris (Latin for “strength in numbers”) was the motto printed on the first physical Bitcoin wallets 34 – perhaps as an ironic reference to the “In God we Trust” motto printed on US dollar bills. In the early days, the political objectives of Bitcoin were clearly and explicitly stated through the desire to change existing power dynamics between individuals and the state. 35 Yet, while some people use Bitcoin as a vehicle for expressing their political views (e.g. the community of so-called cypherpunks and crypto-libertarians), others believe that there is no real political ideology expressed within the technology itself. 17 Indeed, if asked, many will say that one of the core benefits of Bitcoin is that it operates beyond the scope of governments, politics, and central banks. 36 But it does not take much of a stretch to realise that this desire to remain apolitical constitutes a political dimension in and of itself (Kostakis & Giotitsas, 2014).

Decentralisation inherently affects political structures by removing a control point. In Bitcoin, decentralisation is achieved through a peer-to-peer payment system that operates independently of any (trusted) third party. As a result, not only does Bitcoin question one of the main prerogatives of the state – that of money issuance and regulation – it also casts doubt on the need for (and, therefore, the legitimacy of) existing financial institutions. On the one hand, as a decentralised platform for financial transactions, Bitcoin sets a limit on the power of central banks and other financial institutions to define the terms and conditions, and control the execution, of financial transactions. On the other hand, by enabling greater disintermediation, the Bitcoin blockchain provides new ways for people to coordinate themselves without relying on a centralised third party or trusted authority, thus potentially promoting individual freedoms and emancipation. 37 More generally, the blockchain is now raising high hopes as a solution which, beyond a payment system, could support many forms of direct interactions between free and equal individuals – with the implicit assumption that this would contribute to furthering democratic goals by promoting a more horizontal and self-organising social structure (Clippinger & Bollier, 2014).

As Bitcoin evolves – and in the event that it gets more broadly adopted – it will need to face a growing number of technical challenges (e.g. related to blockchain scalability), but it will also encounter a variety of social and political challenges, as the technology continues to impinge upon existing social and governmental institutions, ushering in an increasingly divergent mix of political positions.

The mistake of the Bitcoin community was to believe that, once technical governance had been worked out, the need to rely on government institutions and centralised organisations in order to manage and regulate social interactions would eventually disappear (Atzori, 2015; Scott, 2014). Politics would progressively give way to new forms of technologically-driven protocols for social coordination (Abramowicz, 2015) – regarded as a more efficient way for individuals to cooperate towards the achievement of a collective goal while preserving their individual autonomy.

Yet, one cannot get rid of politics through technology alone, because the governance of a technology is – itself – inherently tied to a wide range of power dynamics. As Yochai Benkler elegantly puts it, there are no spaces of perfect freedom from all constraints, only different sets of constraints that one must necessarily choose from (Benkler, 2006). Bitcoin as a trustless technology might perhaps escape the existing political framework of governmental and market institutions; yet, it remains subject to the (invisible) politics of a handful of individuals – the programmers who are in charge of developing the technology and, to a large extent, deciding upon its functionalities.

Implicit in the governance structure of Bitcoin is the idea that the Bitcoin core developers (together with a small number of technical experts) are – by virtue of their technical expertise – the most likely to come up with the right decision as to the specific set of technical features that should be implemented in the platform. Such a technocratic approach to governance is problematic in that it runs counter to the original conception of the Bitcoin project. There exists, therefore, an obvious discrepancy between the libertarian vision of Bitcoin as a decentralised infrastructure that cannot be regulated by any third-party institution, and the actual governance structure that dictates the technological development of Bitcoin – which, in spite of its open source nature, is highly centralised and undemocratic. While the (a)political dimension of the former has been praised, or at least acknowledged, by many, the latter has long remained invisible to the public: the technical decisions to be taken by the Bitcoin developers were not presented as political decisions, and were therefore never debated as such.

The block size debate is a good illustration of this tendency. Although the debate was framed as a value-neutral technical discussion, most of the arguments in favour or against increasing the size of a block were, in fact, part of a hidden political debate. Indeed, except for the few arguments concerning the need to preserve the security of the system, most of the arguments that animated the discussion were, ultimately, concerned with the socio-political implications of such a technical choice (e.g. supporting a larger amount of financial transactions versus preserving the decentralised nature of the network). Yet, insofar as the problem was presented as if it involved only rational and technical choices, the political dimensions which these choices might involve were not publicly acknowledged.

Moreover, if one agrees that all artefacts have politics (Winner, 1980) and that technology frames social practice (Kallinikos, 2011), it follows that the design and features of the Bitcoin platform must be carefully thought through by taking into account not only its impact on the technology as such (i.e. security and scalability concerns), but also its social and political implications on society at large.

Politics exist because, in many cases, consensus is hard to achieve, especially when issues pertaining to social justice need to be addressed. Social organisations are thus faced with the difficult challenge of accommodating incompatible and often irreconcilable interests and values. The solutions found by modern-day liberal democracies involve strong elements of publicity and debate. The underlying assumption is that the only way to ensure the legitimacy of collective decisions is by making conflicts apparent and by discussing and challenging ideas within the public sphere (Habermas, 1989). Public deliberation and argumentation are also necessary to achieve a greater degree of rationality in collective decisions, as well as to ensure full transparency and accountability in the ways these decisions are both made and put into practice. But the antagonistic dimensions of social life constantly undermine the opportunities for consensus formation. A truly democratic approach needs, therefore, to acknowledge – and, ideally, to balance or compromise – these spaces of irreconcilable dissent, which are the most revealing of embedded power relations (Mouffe & Laclau, 2001; Mouffe, 1993).

This is perhaps even more crucial for technologies such as the internet or Bitcoin, which seek to implement a global and shared infrastructure for new forms of coordination and exchange. Bitcoin as an information infrastructure must be understood here as a means of introducing and shaping a certain type of social relations (Star, 1999; Bowker et al., 2010). Yet, just like many other infrastructures, Bitcoin is mostly an invisible technology that operates in the background (Star & Strauss, 1999). It is, therefore, all the more important to make the design choices lying behind its technical features more visible, in order to shed light on the politics which are implicit in the technological design.

It should be clear, by now, that the politics of a technology cannot be resolved only and exclusively by technological means. While technology can be used to steer and mediate many kinds of social interactions, it should not (and cannot) be the sole and main driver of social change. As Bitcoin has shown, it is unrealistic to believe that human organisations can be governed by relying exclusively on algorithmic rules. In order to ensure the long-term sustainability of these organisations, it is necessary to incorporate, on top of the technical framework, a specific governance structure that enables people to discuss and coordinate among themselves in an authentically democratic way, but also – and perhaps more importantly – to engage with, and come to, decisions as to how the technology should evolve. In that regard, one should always ensure that the decision-making process involves not only those who are building the technology (i.e. developers and software engineers) but also all those who will ultimately be affected by these decisions (i.e. the users of that technology).

Different dimensions of the internet have already been analysed from such a perspective within the broader framework of internet governance (DeNardis, 2012; Musiani et al., 2016), providing important insights about the performative dimensions of the underlying software and protocols, and the ways they have been put to use. These could prove useful in better understanding and formulating a novel governance structure for the Bitcoin project – one that is mediated (rather than dictated) by technological rules.

Conclusion: Bitcoin within the wider frame of internet governance

The internet, understood as a complex and heterogeneous socio-technical construct, combines many different types of arrangements – involving social norms, legal rules and procedures, market practices and technological solutions – which, taken together, constitute its overall governance and power structures (Brousseau, Marzouki, & Méadel, 2012). Most of the research on internet governance has focused on the interplay between infrastructures on the one hand, and superstructures or institutions on the other – particularly those which have emerged on top of the network during the course of its history (such as ICANN or IETF), sometimes generating conflictual relationships with existing national and international legal frameworks, private corporations, or even civil society at large (Mueller, 2002; Mueller, 2010; Mathiason, 2009; DeNardis, 2009; Bygrave & Bing, 2009). 38

Internet governance has been fraught with many frictions, controversies and disputes over the years – an international fight to control the basic rules and protocols of the internet described by some as a global war (DeNardis, 2014). Even the much praised governance model of the internet protocol suite – based on the IETF’s (deceptively simple) rule of “rough consensus and running code” – effectively involved, at certain points, fair amounts of power struggles and even autocratic design (Russell, 2014). The idea that consensus over technical issues can be reached more easily because it only involves objective criteria and factual observations (i.e. something either works or doesn’t) neglects the reality that “stories about standards are necessarily about power and control – they always either reify or change existing conditions and are always conscious attempts to shape the future in specific ways” (Russell, 2012).

Set within the wider frame and history of internet governance, the Bitcoin case is particularly instructive insofar as it draws on a number of new, but also already existing, practices to promote some of the ideals which have been associated with the internet since its inception: furthering individual autonomy and supporting collective self-organisation (Loveluck, 2015). As we have seen, Bitcoin can be understood as a dual-layered construct, composed of a global network infrastructure on the one hand, and a small community of developers on the other. Although the trustlessness of the network seeks to obviate the need for a central control point, in practice, as soon as a technology is deployed, new issues emerge from unanticipated uses of the technology – which ultimately require the setting up of social institutions in order to protect or regulate it. These institutions can be more or less attuned to the overall aims of the technology, and can steer it in different directions. For instance, while the IETF managed to implement a relatively decentralised and bottom-up process for establishing standards, the Domain Name System (DNS) has shown that even a distributed network might, at some point, need to rely on a centralised control point to administer scarce resources (such as domain names). This has led to the emergence of centralised – and somewhat contested – institutions, such as, most notably, ICANN: a US-based non-profit corporation in charge of coordinating all of the internet’s unique identifiers.

The lessons from the past – taking account of both the success stories and failures of internet governance – can serve as useful indications as to what should be attempted or, on the contrary, avoided in terms of Bitcoin governance. In particular, it should be acknowledged that socio-technical systems cannot – by virtue of their embeddedness into a social and cultural context – ensure their own self-governance and self-sustainability through technology alone. Any technology will eventually fall prey to the social, cultural and political pressures of the context in which it operates, which will very probably make it grow and evolve in unanticipated directions (Akrich, 1989; MacKenzie & Wajcman, 1999).

The Bitcoin project has evolved significantly over the years, for reasons which are both endogenous and exogenous to the system. From a small experiment run by a few crypto-libertarians and computer geeks eager to explore a new liberation technology (Diamond, 2010), Bitcoin quickly scaled into a global network that is struggling to meet the new demands and expectations of its growing user base and stakeholders.

The block size debate created an actual schism within the Bitcoin community – and, by doing so, ultimately stressed the need for a more democratic governance system. Drawing on the many different arrangements that have been tried at different levels of internet governance, each with its own distinctive forms of deliberation and decision-making procedures (Badouard et al., 2012), the Bitcoin development process could perhaps be improved by introducing an alternative governance structure – one that would better account for the technology’s many dimensions other than the technical, especially its social, economic and political implications for society at large.

The Bitcoin Foundation was a first attempt in this direction, though it never managed to establish itself as a standardisation body, precisely because of a lack of legitimacy and accountability in its own governance process. A centralised governance body (similar to ICANN) in charge of ensuring the legitimacy and accountability of future developments of the Bitcoin project would obviously fail to obtain any kind of legitimacy from within the Bitcoin community – since eliminating the need for fiduciary institutions or other centralised authorities was the very purpose of the Bitcoin network. The technologically-driven approach currently endorsed by the Bitcoin project – aiming to create a governance structure that is solely and exclusively dictated by technological means (governance by infrastructure) – is also bound to fail, since a purely technological system cannot fully account for the whole spectrum (and complexity) of social interactions. In this regard, one of the main limitations of the Bitcoin protocol is that it is based on algorithmically quantifiable and verifiable actions (i.e. how much computing resources people are investing in the network), and it is therefore unable to reward those who contribute to the network in ways other than hashing power.

A more interesting approach would involve using the underlying technology – the blockchain – not as a regulatory technology that technologically enforces a particular set of predefined protocols and rules (as Bitcoin does), but rather as a platform on which people might encode their own sets of rules and procedures defining a particular system of governance – one that benefits from the distinctive characteristics of the blockchain (in terms of transparency, traceability, accountability, and incorruptibility) while leaving room for an institutional framework operating on top of that (decentralised) network. This would ensure that technology remains a tool of empowerment for people, who would use it to enable and support new models of governance, rather than the opposite.

Given the experimental nature and current lack of maturity of the technology, it is difficult to predict, at this specific point in time, what the best strategy would be to ensure that the Bitcoin project evolves in accordance with the interests of all relevant stakeholders. Yet, regardless of the approach taken, it is our belief that a proper governance structure for Bitcoin can only be achieved by publicly acknowledging its political dimensions, and by replacing the current technocratic power structure of the Bitcoin project with an institutional framework capable of understanding (and accommodating) the politics inherent in each of its technical features.

References

Abbate, J. (1999), Inventing the Internet, Cambridge, MA: MIT Press.

Abramowicz, M.B. (2015), Peer-to-peer law, built on Bitcoin, Legal Studies Research Paper, GWU Law School, http://scholarship.law.gwu.edu/faculty_publications/1109/

Agre, P.E. (2003), P2P and the promise of Internet equality, Communications of the ACM 46(2), pp. 39-42.

Akrich, M. (1989), La construction d'un système socio-technique. Esquisse pour une anthropologie des techniques, Anthropologie et Sociétés 13(2), pp. 31-54.

Atzori, M. (2015), Blockchain technology and decentralized governance: is the State still necessary?, working paper, Available at SSRN, http://papers.ssrn.com/sol3/Papers.cfm?abstract_id=2709713

Auray, N. (2012), Online communities and governance mechanisms, in E. Brousseau, M. Marzouki & C. Méadel (eds.), Governance, Regulation and Powers on the Internet. Cambridge and New York: Cambridge University Press, pp. 211-231.

Badouard, R. et al (2012), Towards a typology of Internet governance sociotechnical arrangements, in F. Massit-Folléa, C. Méadel & L. Monnoyer-Smith (eds.), Normative Experience in Internet Politics Paris: Transvalor/Presses des Mines, pp. 99-124.

Benkler, Y. (2006), The Wealth of Networks. How Social Production Transforms Markets and Freedom. New Haven, CT: Yale University Press.

Benkler, Y. (2016), Degrees of freedom, dimensions of power, Daedalus 145(1), pp. 18-32.

Bimber, B. (1994), Three faces of technological determinism, in M.R. Smith & L. Marx (eds.), Does Technology Drive History? The Dilemma of Technological Determinism. Cambridge, MA and London: MIT Press, pp. 79-100.

Bowker, G.C. et al (2010), Toward Information Infrastructure Studies: ways of knowing in a networked environment, in J. Hunsinger, L. Klastrup & M. Allen (eds.), International Handbook of Internet Research. Dordrecht and London: Springer, pp. 97-117.

Brousseau, E., Marzouki, M., & Méadel, C. eds. (2012), Governance, Regulation and Powers on the Internet. Cambridge and New York: Cambridge University Press.

Bygrave, L.A. & Bing, J. eds. (2009), Internet Governance. Infrastructure and Institutions. Oxford and New York: Oxford University Press.

Clippinger, J.H. & Bollier, D. eds. (2014), From Bitcoin to Burning Man and Beyond. The Quest for Autonomy and Identity in a Digital Society, Boston, MA and Amherst, MA: ID3 and Off the Common.

Crowston, K. & Howison, J. (2005), The social structure of free and open source software development, First Monday [online] 10(2), http://firstmonday.org/ojs/index.php/fm/article/view/1207/1127

David, M. (2010), Peer to Peer and the Music Industry. The Criminalization of Sharing. London, Thousand Oaks, CA, New Delhi and Singapore: Sage.

Demil, B. & Lecocq, X. (2006), Neither market nor hierarchy nor network: the emergence of bazaar governance, Organization Studies 27(10), pp. 1447-1466.

DeNardis, L. (2009), Protocol Politics. The Globalization of Internet Governance. Cambridge, MA: MIT Press.

DeNardis, L. (2012), Hidden levers of Internet control. An infrastructure-based theory of Internet governance, Information, Communication & Society 15(5), pp. 720-738.

DeNardis, L. (2014), The Global War for Internet Governance. New Haven, CT: Yale University Press.

Diamond, L. (2010), Liberation technology, Journal of Democracy 21(3), pp. 69-83.

Dingledine, R., Mathewson, N. & Syverson, P. (2004), Tor: the second-generation onion router, Proceedings of the 13th USENIX Security Symposium, San Diego, CA.

Dodd, N. (2014), The Social Life of Money, Princeton, NJ: Princeton University Press.

DuPont, Q. (2014), "The politics of cryptography: Bitcoin and the ordering machines", Journal of Peer Production (4), http://peerproduction.net/issues/issue-4-value-and-currency/peer-reviewed-articles/the-politics-of-cryptography-bitcoin-and-the-ordering-machines/

Eyal, I. & Sirer, E.G. (2014), "Majority is not enough: Bitcoin mining is vulnerable", in Financial Cryptography and Data Security, Springer, pp. 436-454.

Ferguson, N. (2008), The Ascent of Money. A Financial History of the World, London: Penguin.

De Filippi, P. (2014), "Bitcoin: a regulatory nightmare to a libertarian dream", Internet Policy Review 3(2), http://policyreview.info/articles/analysis/bitcoin-regulatory-nightmare-libertarian-dream

Flichy, P. (2007), The Internet Imaginaire, Cambridge, MA: MIT Press.

Gillespie, T. (2006), "Engineering a principle: ‘end-to-end’ in the design of the internet", Social Studies of Science 36(3), pp. 427-457.

Habermas, J. (1989), The Structural Transformation of the Public Sphere. An Inquiry into a Category of Bourgeois Society, Cambridge: Polity Press.

Hayek, F.A. (1976), Law, Legislation and Liberty. Vol. 2, The Mirage of Social Justice, London: Routledge & Kegan Paul.

Hayek, F.A. (1990), The Denationalization of Money: The Argument Refined, 3rd edition, London: The Institute of Economic Affairs.

Hearn, M. (2015), "Why is Bitcoin forking?", Medium, https://medium.com/faith-and-future/why-is-bitcoin-forking-d647312d22c1. Accessed 15 April 2016.

Hearn, M. (2016), "The resolution of the Bitcoin experiment", Medium, https://medium.com/@octskyward/the-resolution-of-the-bitcoin-experiment-dabb30201f7. Accessed 15 April 2016.

Hughes, E. (1993), "A Cypherpunk's Manifesto", http://www.activism.net/cypherpunk/manifesto.html. Accessed 24 March 2011.

Kahn, D. (1996), The Codebreakers. The Story of Secret Writing, 2nd edition, New York: Scribner.

Kallinikos, J. (2011), Governing Through Technology. Information Artefacts and Social Practice, Basingstoke and New York: Palgrave Macmillan.

Kelty, C. (2005), "Trust among the algorithms: ownership, identity, and the collaborative stewardship of information", in R.A. Ghosh (ed.), Code. Collaborative Ownership and the Digital Economy, Cambridge, MA: MIT Press, pp. 127-152.

Kostakis, V. & Bauwens, M. (2014), "Distributed capitalism", in Network Society and Future Scenarios for a Collaborative Economy, Basingstoke and New York: Palgrave Macmillan, pp. 30-34.

Kostakis, V. & Giotitsas, C. (2014), "The (a)political economy of bitcoin", tripleC 12(2), pp. 431-440, http://triplec.at/index.php/tripleC/article/view/606.

de Laat, P.B. (2007), "Governance of open source software: state of the art", Journal of Management & Governance 11(2), pp. 165-177.

Lazaro, C. (2008), La Liberté logicielle. Une ethnographie des pratiques d'échange et de coopération au sein de la communauté Debian, Louvain-la-Neuve: Bruylant-Academia.

Levy, S. (2001), Crypto. How the Code Rebels Beat the Government—Saving Privacy in the Digital Age, New York: Viking.

Loveluck, B. (2015), Réseaux, libertés et contrôle. Une généalogie politique d'internet, Paris: Armand Colin.

MacKenzie, D. & Wajcman, J. eds. (1999), The Social Shaping of Technology, 2nd edition, Buckingham: Open University Press.

Mallard, A., Méadel, C. & Musiani, F. (2014), "The paradoxes of distributed trust: peer-to-peer architecture and user confidence in Bitcoin", Journal of Peer Production (4), http://peerproduction.net/issues/issue-4-value-and-currency/peer-reviewe...

Mathiason, J. (2009), Internet Governance. The New Frontier of Global Institutions, London and New York: Routledge.

McLeay, M., Radia, A. & Thomas, R. (2014), "Money creation in the modern economy", Bank of England Quarterly Bulletin, pp. 14-27.

Mouffe, C. (1993), The Return of the Political, London and New York: Verso.

Mouffe, C. & Laclau, E. (2001), Hegemony and Socialist Strategy. Towards a Radical Democratic Politics, 2nd edition, London: Verso.

Mueller, M. (2002), Ruling the Root. Internet Governance and the Taming of Cyberspace, Cambridge, MA: MIT Press.

Mueller, M. (2010), Networks and States. The Global Politics of Internet Governance, Cambridge, MA: MIT Press.

Musiani, F. et al eds. (2016), The Turn to Infrastructure in Internet Governance, Basingstoke and New York: Palgrave Macmillan.

Nakamoto, S. (2008a), "Bitcoin: a peer-to-peer electronic cash system", Bitcoin.org, https://bitcoin.org/bitcoin.pdf. Accessed 20 February 2014.

Nakamoto, S. (2008b), "Re: Bitcoin P2P e-cash paper", The Cryptography Mailing List, http://www.mail-archive.com/cryptography@metzdowd.com/msg09971.html. Accessed 4 May 2016.

Nakamoto, S. (2009), "Bitcoin open source implementation of P2P currency", P2P Foundation, http://p2pfoundation.ning.com/forum/topics/bitcoin-open-source. Accessed 15 April 2016.

North, P. (2007), Money and Liberation. The Micropolitics of Alternative Currency Movements, Minneapolis, MN: University of Minnesota Press.

Nyman, L. & Lindman, J. (2013), "Code forking, governance, and sustainability in open source software", Technology Innovation Management Review 3(1), p. 7.

Olson, M. (1965), The Logic of Collective Action. Public Goods and the Theory of Groups, Cambridge, MA: Harvard University Press.

O'Mahony, S. (2003), "Guarding the commons: how community managed software projects protect their work", Research Policy 32(7), pp. 1179-1198.

O'Neil, M. (2014), "Hacking Weber: legitimacy, critique, and trust in peer production", Information, Communication & Society 17(7), pp. 872-888.

Oram, A. ed. (2001), Peer-to-Peer. Harnessing the Power of Disruptive Technologies, Sebastopol, CA: O'Reilly.

Palmer, D. (2016), "Scalability debate continues as Bitcoin XT proposal stalls", CoinDesk, http://www.coindesk.com/scalability-debate-bitcoin-xt-proposal-stalls. Accessed 15 April 2016.

Polanyi, K. (2001 [1944]), The Great Transformation. The Political and Economic Origins of Our Time, Boston, MA: Beacon Press.

Quinn, B.J. (2009), "The failure of private ordering and the financial crisis of 2008", New York University Journal of Law and Business 5(2), pp. 549-615.

Rizzo, P. (2016), "Making sense of Bitcoin's divisive block size debate", CoinDesk, http://www.coindesk.com/making-sense-block-size-debate-bitcoin/. Accessed 15 April 2016.

Robles, G. & González-Barahona, J.M. (2012), "A comprehensive study of software forks: dates, reasons and outcomes", in I. Hammouda et al (eds.), Open Source Systems. Long-Term Sustainability, Berlin: Springer, pp. 1-14.

Russell, A.L. (2012), "Standards, networks, and critique", IEEE Annals of the History of Computing 34(3), pp. 78-80.

Russell, A.L. (2014), Open Standards and the Digital Age. History, Ideology, and Networks, Cambridge and New York: Cambridge University Press.

Schweik, C.M. & English, R. (2007), "Tragedy of the FOSS commons? Investigating the institutional designs of free/libre and open source software projects", First Monday [online] 12(2), http://firstmonday.org/htbin/cgiwrap/bin/ojs/index.php/fm/article/view/1619/1534

Scott, B. (2014), "Visions of a techno-Leviathan: the politics of the Bitcoin blockchain", E-International Relations, http://www.e-ir.info/2014/06/01/visions-of-a-techno-leviathan-the-politics-of-the-bitcoin-blockchain/. Accessed 2 May 2016.

Shaw, A. & Hill, B.M. (2014), "Laboratories of oligarchy? How the iron law extends to peer production", Journal of Communication 64(2), pp. 215-238.

Simmel, G. (2004), The Philosophy of Money, 3rd enlarged edition, London and New York: Routledge.

Star, S.L. (1999), "The ethnography of infrastructure", American Behavioral Scientist 43(3), pp. 377-391.

Star, S.L. & Strauss, A. (1999), "Layers of silence, arenas of voice: the ecology of visible and invisible work", Computer Supported Cooperative Work (CSCW) 8(1-2), pp. 9-30.

Stewart, D. (2005), "Social status in an open-source community", American Sociological Review 70(5), pp. 823-842.

The Economist (2016), "Craig Steven Wright claims to be Satoshi Nakamoto. Is he?", http://www.economist.com/news/briefings/21698061-craig-steven-wright-claims-be-satoshi-nakamoto-bitcoin. Accessed 2 May 2016.

Trautman, L.J. (2014), "Virtual currencies; Bitcoin & what now after Liberty Reserve, Silk Road, and Mt. Gox?", Richmond Journal of Law and Technology 20(4).

von Neumann, J. & Morgenstern, O. (1953 [1944]), Theory of Games and Economic Behavior, 3rd edition, Princeton, NJ: Princeton University Press.

Winner, L. (1980), "Do artifacts have politics?", Daedalus 109(1), pp. 121-136.

Wright, A. & De Filippi, P. (2015), "Decentralized blockchain technology and the rise of lex cryptographia", Available at SSRN, http://ssrn.com/abstract=2580664

Zhu, B., Jajodia, S. & Kankanhalli, M.S. (2006), "Building trust in peer-to-peer systems: a review", International Journal of Security and Networks 1(1-2), pp. 103-112.

Footnotes

1. See also Oram 2001. The case of file-sharing and its effects on copyright law has been particularly salient (David, 2010).

2. See Hughes, 1993; Levy, 2001.

3. In a fractional-reserve banking system, commercial banks are entitled to generate credit, by making loans or investments, while holding reserves which only account for a fraction of their deposit liabilities – thereby effectively creating money out of thin air. A report from the Bank of England estimates that, as of December 2013, only 3% of the money in circulation in the global economy was represented by physical cash (issued by the central bank), whereas the remaining 97% was made up of loans and co-existent deposits created by private or commercial banks (McLeay, Radia, & Thomas, 2014).

4. “[Bitcoin is] completely decentralized, with no central server or trusted parties, because everything is based on crypto proof instead of trust. The root problem with conventional currency is all the trust that’s required to make it work. The central bank must be trusted not to debase the currency, but the history of fiat currencies is full of breaches of that trust. Banks must be trusted to hold our money and transfer it electronically, but they lend it out in waves of credit bubbles with barely a fraction in reserve. We have to trust them with our privacy, trust them not to let identity thieves drain our accounts… With e-currency based on cryptographic proof, without the need to trust a third party middleman, money can be secure and transactions effortless.” (Nakamoto, 2009).

5. On 7 November 2008, Satoshi Nakamoto explained on the Cryptography mailing list that “[we will not find a solution to political problems in cryptography,] but we can win a major battle in the arms race and gain a new territory of freedom for several years. Governments are good at cutting off the heads of a centrally controlled network like Napster, but pure P2P networks like Gnutella and Tor seem to be holding their own” (Nakamoto, 2008b).

6. The double-spending problem is common to many digital cash systems: because a digital token can be trivially duplicated, the same token can be spent twice. It is usually solved by introducing a centralised (trusted) third party, which is in charge of verifying that every transaction is valid before authorising it.
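
To make the role of this trusted third party concrete, here is a minimal sketch (in Python, with hypothetical names) of a centralised verifier that authorises a payment only if the token has never been seen before:

```python
# Minimal sketch of a centralised double-spend check (illustrative only).
class CentralVerifier:
    def __init__(self):
        self.spent = set()  # identifiers of tokens that were already spent

    def authorise(self, token_id: str) -> bool:
        """Authorise a payment only if the token has never been spent."""
        if token_id in self.spent:
            return False  # double-spend attempt: same token presented twice
        self.spent.add(token_id)
        return True

verifier = CentralVerifier()
assert verifier.authorise("token-42") is True   # first spend is accepted
assert verifier.authorise("token-42") is False  # the duplicate is rejected
```

Bitcoin’s innovation was to replace this single ledger-keeper with a network-wide consensus on which transactions have taken place, and in which order.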

7. Unless one or more colluding parties control over 51% of the network. See below for a more detailed explanation of the Bitcoin security model.

8. Of course, a variety of tools can be used to reduce the degree of transparency inherent in the blockchain. Just as public-key encryption has enabled more secure communications on top of the internet, specific cryptographic techniques (such as homomorphic encryption and zero-knowledge proofs) can be used to conceal the content of blockchain-based transactions without reducing their verifiability. The most popular of these technologies is Zerocash, a privacy-preserving blockchain which relies on zero-knowledge proofs to enable people to transact on a public blockchain without disclosing the origin, the destination, or the amount of the transaction.

9. In October 2009, the New Liberty Standard published the first known Bitcoin exchange rate – 1,309 BTC for 1 USD – calculated according to the cost of the electricity that had to be incurred in order to generate bitcoins at the time.

10. The first commercial Bitcoin transaction known to date is the purchase by a Florida-based programmer, Laszlo Hanyecz, of a pizza from Papa John’s (ordered on his behalf by a volunteer) for a face value of 10,000 BTC.

11. Over the years, several people have been outed as being Satoshi Nakamoto – these include: Michael Clear (Irish graduate student at Trinity College); Neal King, Vladimir Oksman and Charles Bry (who filed a patent application for updating and distributing encryption keys just a few days before the registration of the bitcoin.org domain name); Shinichi Mochizuki (Japanese mathematician); Jed McCaleb (founder of the first Bitcoin exchange Mt. Gox); Nick Szabo (author of the bit gold paper and strong proponent of the notion of “smart contract”); Hal Finney (a well-known cryptographer who was the recipient of the first Bitcoin transaction); and Dorian Nakamoto (an unfortunate case of homonymy). Most recently, Craig Steven Wright (an Australian computer scientist and businessman) claimed to be Satoshi Nakamoto, without however being able to provide proper evidence to support his claim (The Economist, 2016). To date, all of these claims have been dismissed and the real identity of Satoshi Nakamoto remains a mystery.

12. The Bitcoin Foundation has been heavily criticised due to the various scandals its board members were associated with. These include: Charlie Shrem, who was involved in aiding and abetting the operations of the online marketplace Silk Road; Peter Vessenes and Mark Karpeles, who were deeply involved in the scandals of the now defunct Bitcoin exchange Mt. Gox; and Brock Pierce, whose election – in spite of his questionable history in the virtual currency space – created huge controversy within the Bitcoin Foundation, eventually leading to the resignation of nine members.

13. In general, forks can be categorised into soft and hard forks: the former retains some compatibility or interoperability with the original software, whereas the latter involves a clear break or discontinuity with the preceding system.

14. For instance, Coinbase, one of the largest US Bitcoin wallet and exchange companies, was removed from Bitcoin.org upon announcing that it would be experimenting with Bitcoin XT.

15. As of 11 January 2016, only about 10% of the blocks in the Bitcoin network had been signed by XT nodes (Palmer, 2016).

16. Mike Hearn, interview with the authors, April 2016.

17. Patrick Murck, interview with the authors, April 2016.

18. Peter Todd and Pindar Wong, interview with the authors, April 2016.

19. See supra, part I.A.

20. This reveals a significant bias of the Bitcoin community towards technological determinism – a vision whereby technological artefacts can influence both culture and society, without the need for any social intervention or assimilation (Bimber, 1994).

21. As the name indicates, the Proof-of-Work algorithm used by Bitcoin requires a certain amount of work to be done before one can record a new set of transactions (a block) into Bitcoin’s distributed transaction database (the blockchain). In Bitcoin, the work consists in finding a particular nonce to be embedded into the current block, so that processing the block with a particular hash function (SHA-256) will result in a string with a certain number of leading zeros. The first one to find this nonce will be able to register the block and will therefore be rewarded with a specific number of bitcoins (Nakamoto, 2008a). The amount of work to be done depends on the number of leading zeros necessary to register a block – this number may increase or decrease depending on the amount of computational resources (or hashing power) currently available in the network, so as to ensure that a new block is registered, on average, every 10 minutes. While this model was useful, in the earlier stages of the network, as an incentive for people to contribute computational resources to maintain the network, the Proof-of-Work algorithm creates a competitive game which encourages people to invest more and more hashing power into the network (so as to be rewarded more bitcoins), ultimately resulting in a growing consumption of energy.
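
The search can be illustrated with a toy sketch in Python. Note the simplifications: it uses a single SHA-256 pass and a “leading zero hex digits” test, whereas Bitcoin hashes the block header twice and compares the result against a numeric target; the structure of the search, however, is the same.

```python
import hashlib

def mine(block_data: bytes, difficulty: int) -> int:
    """Search for a nonce such that SHA-256(block_data + nonce) starts
    with `difficulty` leading zero hex digits (a toy stand-in for
    Bitcoin's target comparison)."""
    nonce = 0
    prefix = "0" * difficulty
    while True:
        digest = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce  # proof found: cheap to verify, costly to produce
        nonce += 1

# Each additional leading zero makes the search ~16 times harder on average.
print(mine(b"example block header", difficulty=4))
```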

22. The difficulty of said mathematical problem is dynamically set by the network: it increases with the amount of computational resources engaged in the network, so as to ensure that one new block is registered in the blockchain, on average, every 10 minutes.
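
The adjustment rule can be sketched as follows (a simplification: the actual client operates on integer “targets” rather than a floating-point difficulty, but it recalibrates in the same proportional way every 2,016 blocks, clamping each correction to a factor of four):

```python
TARGET_BLOCK_TIME = 600   # seconds: one block every ~10 minutes
RETARGET_INTERVAL = 2016  # blocks between difficulty adjustments

def retarget(old_difficulty: float, actual_timespan: float) -> float:
    """Scale difficulty so that blocks again arrive every ~10 minutes:
    if the last 2,016 blocks came too fast, difficulty rises; if too
    slow, it falls."""
    expected = TARGET_BLOCK_TIME * RETARGET_INTERVAL
    ratio = expected / actual_timespan
    ratio = max(0.25, min(4.0, ratio))  # Bitcoin clamps each adjustment
    return old_difficulty * ratio

# Blocks arrived twice as fast as intended -> difficulty doubles.
print(retarget(1000.0, actual_timespan=TARGET_BLOCK_TIME * RETARGET_INTERVAL / 2))
```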

23. In the early days, given the limited number of participants in the network, mining could easily be achieved by anyone with a personal computer or laptop. Subsequently, as Bitcoin’s adoption grew and the virtual currency acquired a greater market value, the economic incentives of mining grew to the point that people started to build specialised hardware (ASICs) created for the sole purpose of mining, making it difficult to mine without such equipment. Note that such an evolution had actually been anticipated by Satoshi Nakamoto himself, who wrote already in 2008 that, even if “at first, most users would run network nodes, [...] as the network grows beyond a certain point, [mining] would be left more and more to specialists with server farms of specialized hardware.”

24. Bitcoin mining pools are a mechanism allowing Bitcoin miners to pool their resources and share their hashing power, splitting the reward proportionally to the shares each contributed to solving a block. Mining pools constitute a threat to the decentralised nature of Bitcoin. Already in 2014, one mining pool (GHash) was found to control more than half of Bitcoin’s hashing power, and was thus able to decide by itself which transactions should be regarded as valid or invalid – the so-called 51% attack. Today, most of the hashing power is distributed among a few mining pools, which together hold over 75% of the network, and could potentially collude in order to take over the network.

25. Note that the “longest” chain is determined by the total amount of proof-of-work it embodies (its cumulative difficulty), rather than by the raw number of blocks. The reason for this choice is that the chain that required the greatest amount of computational resources is – probabilistically – the least likely to have been falsified or tampered with (e.g. by someone willing to censor or alter the content of former transactions).
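
In a minimal sketch (Python, using per-block difficulty as a stand-in for the client’s exact per-block work computation), the selection rule looks like this:

```python
def chain_work(chain: list[dict]) -> float:
    """Total proof-of-work embodied in a chain: the sum of each block's
    difficulty, a proxy for the hashing effort the chain required."""
    return sum(block["difficulty"] for block in chain)

def select_chain(candidates: list[list[dict]]) -> list[dict]:
    """Adopt the fork with the most cumulative work, not the most blocks."""
    return max(candidates, key=chain_work)

short_heavy = [{"difficulty": 500.0}] * 3  # 3 blocks, 1,500 units of work
long_light = [{"difficulty": 100.0}] * 5   # 5 blocks,   500 units of work
assert select_chain([short_heavy, long_light]) is short_heavy
```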

26. Selfish mining is the process whereby a miner (or mining pool) does not broadcast a validated block as soon as the solution to the mathematical problem for that block has been found, but instead continues to mine the next block in order to benefit from a first-mover advantage in finding the solution for that block. By releasing validated blocks with a delay, ill-intentioned miners can attempt to secure the block rewards for all subsequent blocks in the chain, since – unless the network manages to catch up with them – their fork of the blockchain will always be the longest one (and thus the one that required the most Proof-of-Work), and will therefore ultimately be adopted by the network (Eyal & Sirer, 2014).

27. Selfish miners encourage honest but profit-maximising nodes to join the coalition of non-cooperating nodes, eventually making the network more vulnerable to a 51% attack.

28. Mt. Gox was one of the largest Bitcoin exchanges, handling over 70% of all bitcoin transactions as of April 2013. Regulatory issues led to Mt. Gox being cut off from the US banking system, making it harder for US customers to withdraw funds into their bank accounts. On 7 February 2014, Mt. Gox halted all bitcoin withdrawals, claiming that it had encountered issues due to the “transaction malleability” bug in the Bitcoin software (which made it possible to alter a transaction’s identifier before confirmation, so that a completed withdrawal appeared to have failed and the client was induced to issue an additional transaction). On 24 February, the Mt. Gox website went offline and an (allegedly leaked) internal document was released showing that Mt. Gox had lost 744,408 bitcoins in an (allegedly unnoticed) theft that had been going on for years. On 28 February, Mt. Gox filed for bankruptcy, reporting a loss of US$473 million in bitcoin.

29. These include, amongst others, the Bitcoin Savings and Trust Ponzi scheme; the hacking of exchanges such as Bitcoinica, BitFloor, Flexcoin, Poloniex and Bitcurex; and of online Bitcoin wallet services such as Inputs.io and BIPS.

30. BIP stands for Bitcoin Improvement Proposal. “A BIP is a design document providing information to the Bitcoin community, or describing a new feature for Bitcoin or its processes or environment. The BIP should provide a concise technical specification of the feature and a rationale for the feature. We intend BIPs to be the primary mechanisms for proposing new features, for collecting community input on an issue, and for documenting the design decisions that have gone into Bitcoin. The BIP author is responsible for building consensus within the community and documenting dissenting opinions.” (https://github.com/bitcoin/bips/blob/master/bip-0001.mediawiki)

31. https://github.com/bitcoin/bips/blob/master/README.mediawiki

32. “Bitcoin governance is mainly dominated by veto power, in the sense that many parties can choose to stop a change; we haven't seen much use of power to push through changes. The main shortcoming is users have, in practice, less veto power than they should due to coercion.” (Peter Todd, interview with the authors, April 2016).

33. “If multiple competing implementations of the Bitcoin protocol exist, mining pool operators and wallet providers must decide which code to run. Their decision is disciplined and constrained by market forces. For mining pool operators, poor policy decisions can lead miners to withdraw hashing power from the pool. Wallet providers may find users shift their keys to another provider and exchange services may find liquidity moves to other providers. This structure favors stability, resilience and a conservative development process. It also makes the development and standards setting process resilient to political forces.” (Patrick Murck, interview with the authors, April 2016).

34. The first kinds of physical Bitcoin wallets consisted of a pre-loaded Bitcoin account whose private key was stored in the shape of physical coins that people could hold.

35. As detailed above in Part I.A.

36. Mike Hearn, Pindar Wong, and Patrick Murck, interview with the authors, April 2016.

37. Peter Todd, interview with the authors, April 2016.

38. For instance, what happens when the freedom of expression made possible by the network impinges on country-specific laws? And who should decide (and on what grounds) whether the new .amazon generic Top Level Domain (gTLD) should be attributed to the US company which has trademarked the name, or to the Brazilian government which lays claim to a geographical area?

Governing the internet in the privacy arena


This paper is part of 'Doing internet governance: practices, controversies, infrastructures, and institutions', a Special issue of the Internet Policy Review.

Introduction

For quite a while now, the spread of digital networking practices has fuelled discourses that render problematic the way privacy is destabilised by informational means (e.g. Schaar, 2009). The global surveillance disclosures triggered by Edward Snowden in 2013 and the involvement of prominent political actors (e.g., Merkel, Rousseff) and institutions (e.g., intelligence services, governments) have further boosted these discourses and the public re-negotiation of privacy. In this article we will deal with these controversial processes. Taking the 2013 disclosures as a starting point from which to follow the controversy (Pinch & Leuenberger, 2006), we focus on the "Struggles and Negotiations to Define What is Problematic and What is Not" (Callon, 1980). We hold that in answer to Snowden’s revelations numerous social worlds began to publicly specify problem definitions, and to propose solutions accordingly; some of the problem/solution packages were incommensurate and some were compatible, but all of them constituted what we call, in the style of Anselm Strauss (1978) and Adele Clarke (1991), the privacy arena: the virtual place where social worlds gather to argue and struggle around privacy, i.e., where they define the initial situation and the actors involved, specify the problem, and put forward diverse solutions.

Before specifying this approach in detail (1) we would like to point out that by focusing on controversies we take up a radically agnostic stance (Callon, 1986) towards privacy: we will completely abstain from specifying any a priori understanding of the concept and its normative weight. We know very well that such specifications fill enormous bookshelves, and elsewhere we have contributed to further filling them (e.g., Ochs & Ilyes, 2014; Büttner et al., 2016). Yet, here we will bracket our knowledge and focus exclusively on segments of the public renegotiation of privacy that emerged in answer to the surveillance disclosures. We will analyse two such segments: the Schengen/National Routing (SNR) proposal (2) and the German Parliamentary Committee investigating the NSA surveillance disclosures (NSA-Untersuchungsausschuss) (3). As will be explained, in the negotiations encountered in these segments privacy is generally set in relation to a whole web of values, interests, routines, distinctions etc. In this sense, what is at issue in the controversies is the sociotechnical set-up and governance of the internet at large. As our analysis reveals, there are two oscillating governance styles to be identified in the privacy arena (as far as we have investigated), i.e. two ways of (more or less democratically) dealing with the issue. Their interplay results in an obstruction of the democratic search for appropriate problem definitions and corresponding solutions. We will finally summarise and provide an analytic diagnosis concerning possible paths future developments within the privacy arena may take if the blockade remains (4).

Section 1: Methodological preliminaries

Our ultimate interest in this article is to prove the validity of our methodology for studying the public renegotiations of privacy as processes pertaining to "technical democracy" (Callon, Lascoumes, & Barthe, 2011). Having said this, our goal is to flesh out a framework that allows us a) to follow the controversies and renegotiations concerning privacy, and b) to analyse the democratic style of these struggles.

To do so, we take up a classic science and technology studies (STS) approach, namely the "Theory/Methods Package" (Clarke & Leigh Star, 2008) provided by social worlds/arenas theory. The latter goes back to Anselm Strauss who holds that contemporary social formations consist of a multitude of social worlds. These worlds are constituted by specific core activities differentiating a social world from the rest of the world; core activities are in turn based on material-symbolic techniques carried out by human organisms and their material contemporaries, and they unfold at (perhaps virtual) places (Strauss, 1978: 122). Thus, a social world is characterised by what is done there (core activity), how it is done (technique), and where it is done (place). In the course of establishing and stabilising a social world some type of organisation may emerge and processes of authentication and legitimation occur: actors negotiate definitions pertaining to the elements and practices making up the given world (Strauss, 1978: 122-126; 1982: 172-173). Thus, insofar as the building blocks of social formations (read: social worlds) are conceived as contested settings from the outset, it is collective processes of negotiating practices and sociotechnical order that are at the very heart of social worlds theory. However, when turning the lens from a single social world towards the wider set-up it is located within, the struggles and negotiations among social worlds appear; these constitute arenas, i.e. those sites where diverse social worlds gather around specific issues so as to engage in disputes, negotiations and struggles about the legitimate composition of the world, etc. (Strauss, 1993: 225-232).

In the case that interests us here the issue of privacy constitutes an arena where social worlds renegotiate privacy’s status. The overall privacy arena is composed of various segments that break down the issue into specific sub-issues and treat the overall issue accordingly. Our research question concerns the democratic character of such negotiations. It is important to note that by using the term "democracy" we refer neither to a specific form of institutionalised government nor to political regimes disposing of specific institutional procedures. Instead, we use the term in the sense of John Dewey (1946) to denote societal learning processes. These involve the building of issue-centered publics and may feature several phases of defining groups and their interests, of building associations, naming experts, determining representatives, of problematising and devising solutions, of trial and error etc. Asking about the democratic character of the negotiations encountered in the privacy arena thus amounts to analysing the political features of the corresponding learning processes in a broader way than pursued in classic political science, insofar as the approach that we follow directs attention to public arguments that may or may not involve the conventional institutions of political (democratic) systems. 1

In what follows we present a "methodological showcase": we will provide brief analyses of two different segments of the privacy arena where specific problematisations/solutions are negotiated. As our ultimate interest lies in showing that the approach promoted here allows for specifying the democratic character of the arena negotiations, we will only go as deep into the case studies as is required to prove the validity of the methodology; and we will restrict the analysis to the minimum number of cases to be compared when following the comparative method (Glaser & Strauss, 1998).

Section 2: Schengen/National Routing (SNR)

The global surveillance disclosures have shown the general public quite plainly the dimensions of the digital crisis of privacy. What are the democratic response patterns emerging in reaction? To tackle this question we successively chose cases promising to feature analytically differing characteristics. 2 As a start, we selected the Schengen/National Routing (SNR) discourse as segment of the privacy arena. The SNR problem/solution package came up as a direct reaction to the Snowden revelations (Dönni, Machado, Tsiaras, & Stiller, 2015). The proposal focuses on routing data packages in a territorially framed way, either within the Schengen area or within the nation state. Hence, it aims at providing a technical fix (routing) for a social problem (surveillance); we therefore presumed to come across a constellation where the sociotechnical dimension becomes visible easily – a readily analysable STS case.

To see what the SNR proposal results in, we first have to understand that the internet as a "network of networks" is composed of so-called “autonomous systems” (AS) 3 run by private or public organisations (e.g. commercial Internet Service Providers (ISPs) or universities). When a data file is sent via the internet, the file is broken down into a number of data packages (IP packages). Those packages include information concerning their origin and the target address, and they are sent independently from each other (Tiermann & Goldacker, 2015: 14-15). When a file is sent from a device, its constituent IP packages are first routed through the AS the device is connected to; at some point each IP package transits into another AS with which the “original” provider (ISP or public entity) has a peering agreement (big carriers agree to mutually route each other’s traffic) or a transit contract (small providers pay large carriers for routing their IP packages).

Thus, when travelling through the internet, the IP packages composing a file are likely to pass through a multitude of further AS, and they may thereby take different routes (Dierichs & Pohlmann, 2008): which way a package takes is not predetermined a priori, and there is no central navigation. Instead, packages are sent in stages, from one router to the next. Routing protocols define the way a package is sent on: within AS’ there are so-called Interior Gateway Protocols routing the data flow, such as the "Open Shortest Path First" protocol (OSPF); Exterior Gateway Protocols govern how data packages are sent on between AS’. When IP packages pass from router to router, the latter decide where to send a package next according to the criteria (speed, distance, efficiency) of the algorithms inscribed into the routing devices (Dierichs & Pohlmann, 2008), and according to routing tables indicating which networks can be reached via which paths (Tiermann & Goldacker, 2015: 15). It is here that the rules determining how data packages travel through the internet materialise: inscribed into protocols, routers and routing tables.
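
To make these forwarding decisions concrete, here is a deliberately naive sketch (Python, with hypothetical table entries) of the core routing-table lookup; real routers combine such longest-prefix matching with the protocol metrics mentioned above:

```python
import ipaddress

# Hypothetical routing table: network prefix -> next-hop router.
ROUTING_TABLE = {
    ipaddress.ip_network("10.0.0.0/8"): "router-A",
    ipaddress.ip_network("10.1.0.0/16"): "router-B",  # more specific prefix
}

def next_hop(destination: str) -> str:
    """Pick the most specific (longest) prefix containing the address -
    the core of how a router decides where to send an IP package next."""
    addr = ipaddress.ip_address(destination)
    matches = [net for net in ROUTING_TABLE if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return ROUTING_TABLE[best]

print(next_hop("10.1.2.3"))  # router-B: the longest matching prefix wins
print(next_hop("10.9.9.9"))  # router-A
```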

In the wake of the global surveillance disclosures it was proposed to transform established routing practices: "The idea was to restrict the routing of data between two systems located in country A to systems that are also located in country A. By never crossing into a second judicial territory, your information will be protected by the same privacy laws for its entire journey, bypassing possible snooping attempts from the outside. This concept can be easily expanded from a country to a number of countries" (Pohlmann, Sparenberg, Siromaschenko & Kilden, 2014: 156). The discursive rise of SNR in Germany began when René Obermann – at the time CEO of German telecommunications company Deutsche Telekom – took the “Snowden revelations” as an opportunity to present national routing to the public as an easy-to-implement technical solution to a whole set of problems triggered by intelligence practices, among them the “privacy problem” (FAZ.net, 2013). In November 2013, Deutsche Telekom gained a strong ally for its proposal: the newly formed government coalition explicitly endorsed national routing in its coalition agreement (CDU/CSU/SPD, 2013, p. 147f.). Only a couple of months thereafter the Federal Minister of Transport and Digital Infrastructure also recommended keeping data streams within the borders of the Schengen region (Welt.de, 2013). The alliance between the former state-run monopolist Deutsche Telekom and parts of the state seems natural enough, as the proposal allows both worlds to translate their interests into one shared overall interest. SNR at this point of the story had become an obligatory passage point (OPP). The latter occurs, according to Callon (1986), in a network of relationships between all kinds of heterogeneous elements when an entity manages to position itself in such a way as to redirect the interests of all other entities through its own interest: other entities’ interests are translated into one overall interest, the OPP. Once established, all entities henceforth have to pass through the OPP to pursue their own interests. This grants the entities controlling the latter a great deal of power.
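
Technically, the proposal amounts to adding a jurisdiction test to every forwarding decision. A hedged sketch (hypothetical router names and country tags; an actual deployment would require inscribing the rule into routing tables and inter-AS contracts across the whole system):

```python
# Hypothetical next-hop candidates, tagged with their jurisdiction.
CANDIDATES = [
    {"router": "peer-1", "country": "DE", "cost": 5},
    {"router": "peer-2", "country": "US", "cost": 2},
    {"router": "peer-3", "country": "FR", "cost": 4},
]
SCHENGEN = {"DE", "FR", "NL", "AT"}  # abbreviated for the sketch

def snr_next_hop(candidates: list) -> dict:
    """Cheapest next hop that keeps the package within the Schengen area -
    the territorial constraint the SNR proposal would add to routing."""
    allowed = [c for c in candidates if c["country"] in SCHENGEN]
    if not allowed:
        raise RuntimeError("no compliant route: the package cannot be sent")
    return min(allowed, key=lambda c: c["cost"])

print(snr_next_hop(CANDIDATES))  # peer-3: the cheaper US route is excluded
```

The sketch also exposes the rub: where no compliant next hop exists, the package cannot be forwarded at all – precisely the fragmentation worry that, as discussed below, the proposal’s opponents raised.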

In the case at hand, at the point where Deutsche Telekom and the German Federal Ministry of Transport and Digital Infrastructure (BMVI) managed to establish SNR as a provisional OPP, they were able to claim that all entities with an interest in the preservation of privacy had to consent to the SNR solution. SNR was a rather convenient OPP for both parties, for it allowed them to reproduce the entrenched routines of the worlds of industry and state: fencing data flows back into the territory of the nation state amounts to reproducing the national container of modern society by infrastructural means, and promised to reinstate Deutsche Telekom’s monopoly position. Large infrastructural projects such as this one can be considered traditional undertakings in industrial modernity, which is why representatives of these groups were able to capitalise on established contacts and habits.

We call the governance mode that we come across here democratic protectionism. Again, note that we use this term to characterise the style of negotiating privacy in the SNR segment. What is typical for this mode, firstly, is that it features a strong tendency to continue with, and thus reproduce, institutionalised routines. It locates the threats to privacy and democracy outside the well-established and institutionalised routines of the domestic state and its industrial players. There is no reflexive questioning of domestic institutions, and the public is only called upon to nod the proposal through; the whole constellation does not consider giving the public a voice of its own to define the problem or specify the solution: the well-functioning state and its former monopolist will take care of it. The "don’t worry, we’ll take care of it" mentality of the proposal mirrors, secondly, protectionism’s lack of transparency: the issue is settled in ministries and boardrooms.

The resistance that the proposal aroused is quite telling. Small and non-German providers’ take on SNR was that a law prescribing it might harm them and hamper competition; the centralised solution was deemed tantamount to a re-launch of Deutsche Telekom’s former monopoly. The conflict furthermore played out in Germany’s main IT industry association BITKOM, which is constituted by German companies as well as global players with subsidiaries in Germany. When BITKOM (2013) set out to compose a position paper in reaction to the surveillance disclosures in 2013, Deutsche Telekom pressed for including a passage explicitly pleading for SNR. US-based companies, however, succeeded in attenuating the claim while the paper was still being discussed internally and had not yet been published. In the final, published version of the paper, there is only a recommendation to examine SNR (Wirtschaftswoche, 2014). The conflict mirrors the schism between the modern routines and institutions pertaining to the nation state on the one hand, and globalised infrastructures and economic competition on the other.

Yet it seems that democratic protectionism has profound deficiencies in coming to terms with digitally transformed conditions. Quite in contrast to BMVI’s energetic endorsement, the Federal Ministry for Economic Affairs and Energy (BMWi) and the Federal Ministry of the Interior (BMI) raised concerns about the cost-benefit ratio of the proposal, and in some instances even opted against legal regulation. A press release of the BMWi explicitly positioned the ‘open and free Internet’ against the ‘legal prescription’ of SNR (BMWi, 2014, para. 2). The argument went that it was impossible to have "openness" within a SNR system. As matters stood, the algorithmic rules governing routers’ decisions to transmit a given IP package had so far not based the decision on whether or not the next possible router was located within national or Schengen territory. While the strategy of the SNR advocates implied inscribing this rule into the routing system, those who turned against it, although collectively referring to “openness”, did so for very different reasons. Regardless of whether these opponents of SNR had a strategic, instrumental or moral interest in “openness”, they could not accept SNR as an OPP and started turning against it. As a result, a rather improbable alliance of opponents emerged, including competition-minded German companies, the global players of the digital economy, the BMWi and BMI – and the Net-Community (“Netzgemeinde”), i.e. the social world constituted by the core practices of those internet users who establish a reflexive relationship to their own practices. For members of the Net-Community internet usage is not (only) instrumental but meaningful, in that it partakes in members’ conscious self-constitution. The Net-Community’s main concern was that SNR might result in a fragmentation of the internet. Thus, whereas there was no agreement on what “openness” actually meant (competition, non-fragmentation), there was nevertheless agreement on the way routers were not supposed to make decisions when it came to the transmission of IP packages: on grounds of national or Schengen territory. This was already too much of a headwind for the SNR proposal. The odds were stacked against SNR, and as a result the proposal did not appear in the Digitale Agenda 2014-2017, the German government’s central strategy document on digitisation.

The point that we would like to drive home is that the SNR proposal was so indissolubly tied up with a democratic protectionist style of negotiation that the proposal and the style of negotiation together did not allow for the translation of a sufficient number of (diverse) interests, and therefore failed. The proposed solution was rather non-transparent, and stipulated a whole set-up of roles for all those involved, including an "external threat" to an otherwise well-functioning democratic system. For the proposal to have been successful, the location of the enemy “out there” would have needed to be mirrored in the materiality of the routing system: inscribed into the routing tables, algorithms etc. governing the transmission of IP packages. Whereas SNR supporters consequently would have needed a manifold of allies joining the extensive task of re-engineering the current technical structure of the routing system, the negotiation style of protectionism, as it excludes from the outset, does not seem apt to win those allies over. Having said this, it is not easy to maintain the routines of the modern nation-state, nor does it seem possible to neatly sort “external threats” from “internal shelter”. Democratic protectionism has essential difficulties in governing the internet due to the non-reflexive premises it sets out from: we stay the same, while problematic agencies out there have to (be) change(d). 4

However, if it is the non-reflexive character of democratic protectionism that is responsible for its failure, the question arises whether there are arena segments featuring more reflexive modes of negotiation. To deal with this question we next turn to a segment promising "more reflexivity".

Section 3: The German Parliamentary Committee investigating the "NSA spying scandal" (NSA-Untersuchungsausschuss)

Pursuing a comparative research strategy, we looked for a contrasting segment that promised to take up the surveillance disclosures from the angle of the domestic state’s internal democratic system. Also, we were looking for a segment featuring a governance mode that scrutinises such routines before a wider public.

We opted for analysing the German Parliamentary Committee investigating the NSA spying scandal (NSA-PIC). Of course, parliamentary investigation committees in general form part of established democratic routines. The NSA-PIC in particular, by setting out from the NSA’s activities, additionally seemed to shift the problem to the outside. Yet a closer look reveals that such a view is mistaken since, theoretically speaking, the role of investigation committees is precisely to reflect on (perhaps dysfunctional) institutionalised routines, especially those of government. In this spirit they not only imply the ability of the democratic system to register institutional problems but also to fix them by initiating processes of self-transformation (e.g. Wissenschaftlicher Dienst des Bundestags, 2009, para. 2). Thus, such committees are supposed to feature reflexivity and, insofar as the investigation is accomplished in the public gaze, transparency. The setting-up of the NSA-PIC mirrors how the perceived "external threat" triggered the whole investigation, but results in reflexive monitoring. This is already inscribed into the first sentence of the NSA-PIC’s mandate, which states that the committee investigates the data collection activities of the so-called “Five Eyes” and the role of German authorities (governmental agencies, intelligence services, the Federal Office for Information Security) in them. Thus, there seems to be great potential in the NSA-PIC to overcome protectionism’s non-transparent persistence in routines.

Specifying the social worlds involved in the arena, we may first note that the nomination request for the NSA-PIC was jointly issued by all parliamentary parties: those representing the government (conservatives and social democrats) as well as the opposition (the left and green parties). The committee was likewise composed of members of all parties. Hence, the NSA-PIC is constituted by (I.) the social world of governmental parliamentarians (Regierungsfraktion) and (II.) the social world of oppositional parliamentarians; at the same time (III.) the social world of government, i.e. the executive body of the state (Regierungsapparat), is the object of the investigation. The same goes for (IV.) the social world of intelligence services, whose members are called upon to act as witnesses, whereas members of (V.) the social world of jurisdiction (constitutional law experts) are heard as experts. The social world of the Net-Community (VI.) meanwhile acts as observer.

To what extent was this arena setting able to overcome protectionism, i.e. to induce reflexive change and provide for transparency? The NSA-PIC at first seemed to keep its promises in that it addressed time and again the involvement of the German Federal Intelligence Service (BND) and other German authorities in the "Five Eyes’" surveillance activities (Deutscher Bundestag, 2014a, para. B. I.). Not only is it the NSA-PIC’s explicit mandate to investigate authorities’ illegitimate participation in NSA operations, but also to identify the BND’s and governmental bodies’ own transgressions. The NSA-PIC in fact did so. For example, the collaboration between the NSA and the BND under the code name Eikonal attracted considerable attention and press coverage. Initially unveiled by the media, the operation has been publicly investigated in the NSA-PIC to this day (SZ.de, 2014). Reports stated that due to the BND’s inability to guarantee perfect filtering of internet data streams, data sets were passed on to the NSA which might very well have included data regarding German citizens. Additionally, the BND reportedly used highly questionable means to obtain permission for this operation from the responsible parliamentary control commission (Deutscher Bundestag, 2014c, p. 75f.). It is transgressions such as these which were disclosed to the public.

Moreover, the whole process effectively induced reflexive change, too. For instance, in November 2015 the government coalition came to an agreement regarding the reform of the BND, including the strengthening of parliamentary control of the intelligence service (Götschenberg, 2015). At this point of the analysis the NSA-PIC seemed genuinely to overcome democratic protectionism: institutional routines were called into question via the system’s own remedy procedures. Instead of aiming to reproduce past structures (territorial society) under contemporary conditions (transnational data flows) by technical means (routing), there was a strongly constitution-bound mode of identifying problems and solving issues. This is exemplified by a group of legal experts who, when providing a statement before the committee, were quite explicit about the need to modify the law, including basic rights. One of these experts, former Constitutional Court judge Hoffmann-Riem (2014: 55-56), explicitly stated in a paper that territory-bound jurisdiction reaches its limits, given that the routing of data packages is highly contingent on factors other than territory. However, the experts did not conclude that data flows were to be pushed back into the boundaries of the nation-state; instead, the latter’s legal basis was to change. Again the NSA-PIC arena’s potential to induce reflexive change in a transparent way becomes visible, and it is this potential which fundamentally differs from the mode of democratic protectionism.

For us, the occurrence of this potential indicates that there is a different governance mode at work in the NSA-PIC arena. We call this mode democratic constitutionalism. The latter strongly appeals to normative democratic principles (e.g. fundamental rights) not only to render the NSA-PIC legitimate (Deutscher Bundestag, 2014b, 1821 A), but also to bring internal problems to the table without discarding the established system as a whole; instead its core values (whatever they might be in this instance) are reflexively applied.

This finding is not surprising, as the governance mode of democratic constitutionalism is by and large in line with the way the NSA-PIC is set up formally. What is striking, however, is that it does not manage to dominate the segment but is massively hampered by the protectionist mode that re-emerges here as well. Protectionist governance practices and discourses in the NSA-PIC include the treatment of the internet as an external cause and as an issue to be dealt with not by changing oneself but by protecting oneself (Deutscher Bundestag, 2014b, 1816 D). Of still more relevance is the fact that, subsequent to the official statement of government spokesman Steffen Seibert (2015) that the BND had in fact "technical and organisational deficiencies", a discourse emerged that called for strengthening the BND's independence from the NSA. As a consequence, some even demanded that the BND be equipped with more financial resources to expand the institution. And while we cannot provide evidence that this was indeed triggered by the "independence-from-the-NSA" discourse, the staff of the BND and other intelligence services was increased by 500 between June 2013 (Snowden revelations) and November 2015 (Netzpolitik.org, 2015).

Our interpretation of these events is that the negotiation of privacy in the NSA-PIC oscillates between the modes of democratic protectionism and constitutionalism. Connecting this diagnosis back to the social worlds analysis, we can see that as a result of this oscillation there are committee members who are torn in two directions at the same time: those who belong to the governing coalition are simultaneously a) part of the forces that strive to render events transparent and to induce reflexive change, and b) part of antipodal forces, since they also belong to the very social world that is bound to come under scrutiny (government). While the social world of the Net-Community does not act as a political pressure group, but mainly observes and registers, the social world of jurisdiction might appeal to political decision makers – but this is insufficient to tip the balance in favour of the constitutionalist forces. 5 In this sense, what the analysis reveals is the limits of constitutionalism: procedures in the investigation committee are, in one way or another, still bound to the routines of the established institutions pertaining to the territorial state. Constitutionalist governance time and again gets stuck: while this mode can radically call institutionalised governmental routines into question, it is unable to substantially modify those routines. Part of the problem is that if constitutionalism did so, it would potentially threaten its own conditions of existence.

Thus, while some potential for reflexive, transparent change can be detected in the NSA-PIC segment of the privacy arena, the segment still seems to be bound too strongly to the routines of the nation state. This raises a question for future research: are there arena segments that feature comparable reflexivity and transparency while being less closely tied to the nation state?

Conclusion

We would like to make a case for the methodology applied here by briefly summarising the main points made above. First of all, the methodology seems appropriate for studying the public renegotiation of privacy as a way of doing internet governance, for it allowed us to identify key parameters of the democratic styles shaping these negotiations: transparency vs. opaqueness and persistence in routines vs. embracing reflexive change. While social worlds/arenas theory enables one to focus on technical, legal, political and other governance "solutions" on a level playing field, the comparative strategy also permits contrasting cases according to the set of parameters named above.

Future research might continue the search for arenas that promise transparency and reflexivity without being as hampered by persistence in the routines of the nation state. However, drawing on the parameters in a more analytical vein also helps to systematically speculate on further governance modes to be encountered within the overall privacy arena. If democratic protectionism (non-transparency plus persistence in routines) and constitutionalism (transparency plus persistence in routines) continue to generate obstruction, logically two future paths remain: if actors not bound to democratic routines (e.g. economic ones) step in by non-transparently negotiating backroom decisions with an enfeebled political sphere, negotiations may acquire a post-democratic character (non-transparency plus non-boundedness to democratic routines). The more optimistic option would be the rise of an experimental governance mode (transparency plus non-boundedness) that neither starts from fixed problem definitions nor provides ready-made solutions. Which modes will prevail or mix in the future only time will tell; the methodology presented here, however, will enable us to understand the trajectories of the privacy arena.

References:

Bitkom (2013, October 31). Positionspapier zu Abhörmaßnahmen der Geheimdienste und Sicherheitsbehörden, Datenschutz und Datensicherheit. Retrieved from https://www.bitkom.org/Publikationen/2013/Positionen/Positionspapier-zu-Abhoermassnahmen/BITKOM-Positionspapier-Abhoermassnahmen.pdf

BMWi (2014, June 13). Staatssekretär Kapferer: Offenes und freies Internet erhalten, Pressemitteilung vom 13.06.2014. Retrieved from http://www.bmwi.de/DE/Presse/pressemitteilungen,did=642114.html

Büttner, B., Geminn, C., Hagendorff, T., Lamla, J., Ledder, S. Ochs, C., & Pittroff, F. (2016). Die Reterritorialisierung des Digitalen: Zur Reaktion nationaler Demokratie auf die Krise der Privatheit nach Snowden. Kassel: Kassel University Press. Retrieved from http://www.uni-kassel.de/upress/online/OpenAccess/978-3-86219-106-2.OpenAccess.pdf

BMWi/BMI/BMVI (2014, August 20). Digitale Agenda 2014-2017 (English version). Retrieved from http://www.digitale-agenda.de/Content/DE/_Anlagen/2014/08/2014-08-20-digitale-agenda-engl.pdf

Callon, M. (1980). Struggles and negotiations to define what is problematic and what is not. The socio-logic of translation. In K.D. Knorr-Cetina, R. Krohn & R. D. Whitley (Eds.), The Social Process of Scientific Investigation: Sociology of the Sciences Yearbook (pp. 197-220). Dordrecht, Holland: Reidel.

Callon, M. (1986). Some Elements of a Sociology of Translation: Domestication of the Scallops and the Fishermen of St. Brieuc Bay. In J. Law (Ed.), Power, Action, and Belief: A New Sociology of Knowledge? (pp. 196-233). London, England: Routledge & Kegan Paul.

Callon, M., Lascoumes, P. & Barthe, Y. (2011). Acting in an Uncertain World. An Essay on Technical Democracy. Cambridge, MA/London: MIT Press.

CDU/CSU/SPD (2013). Deutschlands Zukunft gestalten, Koalitionsvertrag zwischen CDU, CSU und SPD, 18. Legislaturperiode. Retrieved from https://www.bundesregierung.de/Content/DE/_Anlagen/2013/2013-12-17-koalitionsvertrag.pdf

Clarke, A. (1991). Social Worlds Theory as Organizational Theory. In D. Maines (Ed.), Social Organization and Social Process: Essays in Honour of Anselm Strauss (pp. 17-42). Hawthorne, NY: Aldine de Gruyter.

Clarke, A. & Leigh Star, S. (2008). The Social Worlds Framework: A Theory/Methods Package. In E.J. Hackett, O. Amsterdamska, M. Lynch & J. Wajcman (Eds.), The Handbook of Science and Technology Studies (3rd ed., pp. 113-137). Cambridge, MA/London: MIT Press.

Deutscher Bundestag (2014a). Drucksache 18/843, 18. Wahlperiode. Antrag der Fraktionen CDU/CSU, SPD, DIE LINKE und BÜNDNIS 90/DIE GRÜNEN. Einsetzung eines Untersuchungsausschusses. Retrieved from http://dip.bundestag.de/btd/18/008/1800843.pdf

Deutscher Bundestag (2014b). Plenarprotokoll 18/23. Stenografischer Bericht. 23. Sitzung. Rede: Untersuchungsausschuss zur Überwachungsaffäre, Plenarsitzung. Retrieved from http://dipbt.bundestag.de/dip21/btp/18/18023.pdf

Deutscher Bundestag (2014c). Transcript: Bundestag Committee of Inquiry into the National Security Agency [Untersuchungsausschuss ("NSA")], Session 24. WikiLeaks release: 12 May 2015. Retrieved from https://wikileaks.org/bnd-nsa/sitzungen/2401/WikiLeaksTranscriptSession2401fromGermanNSA_Inquiry.pdf

Dewey, J. (1946). The Public and its Problems. An Essay in Political Inquiry. Denver: Swallow.

Dierichs, S. & Pohlmann, N. (2008). So funktioniert Internet-Routing. Retrieved from http://www.heise.de/netze/artikel/So-funktioniert-Internet-Routing-221495.html?view=print

Dönni, D., Machado, G.S., Tsiaras, C. & Stiller, B. (2015). Schengen Routing: A Compliance Analysis. In S. Latré, M. Charalambides, J. Francois, C. Schmitt & B. Stiller (Eds.), Intelligent Mechanisms for Network Configuration and Security: Proceedings (pp. 100-112). Heidelberg et al.: Springer.

Faz.net (2013, November). Im Gespräch: René Obermann und Frank Rieger. Snowdens Enthüllungen sind ein Erdbeben. Frankfurter Allgemeine Zeitung. Retrieved from http://www.faz.net/aktuell/feuilleton/debatten/ueberwachung/im-gespraech-rene-obermann-und-frank-rieger-snowdens-enthuellungen-sind-ein-erdbeben-12685829.html

Glaser, B. G., & Strauss, A. L. (1998). Grounded Theory: Strategien qualitativer Forschung. Bern, Switzerland: Hans Huber.

Götschenberg, M. (2015). Einigung auf Geheimdienstreform. Koalition nimmt BND an die Leine. Tagesschau. Retrieved from https://www.tagesschau.de/inland/bnd-reform-101.html

Hoffmann-Riem, W. (2014). Freiheitsschutz in den globalen Kommunikationsinfrastrukturen. Juristen-Zeitung, 69, 53-63.

Lamla, J. (2013). Arenen des Demokratischen Experimentalismus. Zur Konvergenz von nordamerikanischem und französischem Pragmatismus. Berliner Journal für Soziologie, 23(3-4), 345-365.

Netzpolitik.org (2015). 500 neue Stellen für BND, Verfassungsschutz & Co. Retrieved from https://netzpolitik.org/2015/500-neue-stellen-fuer-bnd-verfasungsschutz-co/

Ochs, C., & Ilyes, P. (2014). Sociotechnical Privacy: Mapping the Research Landscape. Tecnoscienza. Italian Journal of Science & Technology Studies, 4(2), 73-91.

Pinch, T., & Leuenberger, C. (2006). Studying Scientific Controversy from the STS Perspective. Paper presented at the EASTS Conference "Science Controversy and Democracy". Retrieved from https://www.researchgate.net/publication/265245795_Studying_Scientific_Controversy_from_the_STS_Perspective

Pohlmann, N., Sparenberg, M., Siromaschenko, I. & Kilden, K. (2014). Secure Communication and Digital Sovereignty in Europe. Highlights of the Information Security Solutions Europe 2014 Conference. In H. Reimer, N. Pohlmann & W. Schneider (Eds.), ISSE 2014 Securing Electronic Business Processes (pp. 155-169). Heidelberg et al.: Springer.

Schaar, P. (2009). Das Ende der Privatsphäre. Der Weg in die Überwachungsgesellschaft. München: Goldmann.

Seibert, S. (2015). Fernmeldeaufklärung des Bundesnachrichtendienstes, Press release. Retrieved from https://www.bundesregierung.de/Content/DE/Pressemitteilungen/BPA/2015/04/2015-04-23-bnd.html

Strauss, A. L. (1978). A Social World Perspective. Studies in Symbolic Interaction, 1, 119–128.

Strauss, A. L. (1982). Social Worlds and Legitimation Processes. Studies in Symbolic Interaction, 4, 171-190.

Strauss, A.L. (1993). Continual Permutations of Action. Hawthorne, NY: de Gruyter.

Süddeutsche Zeitung (2014, October 4). Codewort Eikonal - der Albtraum der Bundesregierung. Retrieved from http://www.sueddeutsche.de/politik/geheimdienste-codewort-eikonal-der-albtraum-der-bundesregierung-1.2157432

Tiemann, J. & Goldacker, G. (2015). Vernetzung als Infrastruktur – Ein Internet Modell. Berlin: Fraunhofer FOKUS.

Welt.de (2013). Deutschland muss eine Aufholjagd starten, Interview mit Alexander Dobrindt. Retrieved from http://www.welt.de/politik/deutschland/article123773626/Deutschland-muss-eine-Aufholjagd-starten.html

WirtschaftsWoche (2014). Echte Zerreißprobe. WirtschaftsWoche, 8/2014, 52-54.

Wissenschaftlicher Dienst des Bundestags (2009). Aktueller Begriff. Untersuchungsausschüsse. Retrieved from https://www.bundestag.de/blob/190568/ce3840e6f7dbfe7052aa62debf812326/untersuchungsausschuesse-data.pdf

Footnotes

1. The framework can only be sketched here in general terms. For a detailed blueprint see Lamla (2013). Readers familiar with the STS literature may note that this approach falls into line with pragmatist minded STS investigations of the relation between technoscience, the public and democratic politics as accomplished by Callon, Latour, Marres and others.

2. What we present here is work in progress; while we limit our presentation to two cases we have also analysed a third one, the European General Data Protection Regulation.

3. In 2008 Dierichs and Pohlmann estimated that the internet consisted of about 110,000 AS (Dierichs & Pohlmann, 2008).

4. Interestingly, the basic strategy that aims to maintain the sovereignty of the nation state under digitised circumstances has not entirely disappeared, but was somehow shifted. SNR may be understood as an attempt to reterritorialise information flows that threaten to exceed certain territories, and while the routing strategy was discredited, in the Digitale Agenda digital sovereignty is still one of the goals the government strives to achieve (BMWi, BMI, and BMVI, 2014: 4). In this sense, we might say that the strategy of reterritorialisation managed to survive in a new guise, once it was no longer tied to routing (for more information, see Büttner et al., 2016: 149-151).

5. Note that, as "constitutionalism" refers to a governance mode, it may not be identified with one particular social world. Accordingly, it is not only the judges who foster constitutionalist forces, but also, say, the green party (opposition) member of parliament Konstantin von Notz who frequently argues in a constitutionalist style.


The problem of future users: how constructing the DNS shaped internet governance


This paper is part of 'Doing internet governance: practices, controversies, infrastructures, and institutions', a Special issue of the Internet Policy Review.

Introduction

Like so many engineers building the Advanced Research Projects Agency Network (ARPANET), Elizabeth "Jake" Feinler struggled each day to get the network into some working order. Feinler was head of the Network Information Center (NIC) at Stanford Research Institute (SRI). Because the NIC functioned as the administrative clearinghouse for ARPANET and the early internet, Feinler had to keep track of everything, but without a standardised addressing system to help. In the archives at the Computer History Museum (CHM) in Mountain View, California, I found a collection of printer paper that Feinler stapled together in June 1973. This hand-written directory, which Feinler titled "Changes or Reverifications", lists what sites were online, the point of contact at each site, and even minutiae such as the current phone numbers of the liaisons for the contacts (SRI ARC/NIC Records, Lot X3578-2006). One can imagine how unwieldy this task would become, as institutions connected sites to ARPANET at a rapid pace, often installing multiple computers, each of which required a unique identifier. Feinler's desk reference, a historical precursor of the Domain Name System (DNS), is evidence of a basic quandary that all early network designers faced. As ARPANET's ill-conceived addressing schema fueled frustration among the networking community, designers started doing the work of internet governance to solve a fundamental problem of design: the need to address future users.

Through a critical reading of documents circulated among ARPANET and early internet engineers, this article shows 1) how "the problem of future users" motivated the social construction of the DNS, and 2) how this historical process itself constitutes the preformation phase of internet governance. To do this, I draw from two theoretical approaches, showing how a social constructivist critique can inform path dependent theories of technological and organisational lock-in. On the one hand, social constructivists “claim that technological artifacts are open to sociological analysis, not just in their usage but especially with respect to their design and technical ‘content’.” (Bijker, Hughes, and Pinch, 1987, p. 4). On the other hand, path dependence theory “stresses the importance of past events for future action or, in a more focused way, of foregoing decisions for current and future decision making” (Sydow, Schreyögg, and Koch, 2009, p. 690). Whereas social constructivists are often concerned with issues of ideology, theorists of path dependence are concerned with self-reinforcing social and economic mechanisms that guide technologies and organisations toward “increasing stability and lock-in” (Dobusch and Schüßler, 2012, p. 618). Despite their differences, both approaches regard historical evidence as “process data”, which “consist largely of stories about what happened and who did what when—that is, events, activities, and choices ordered over time” (Langley, 1999, p. 692). This conceptual dovetail opens a window, allowing one to consider how ideology—understood as values supported by material relations—can itself become a self-reinforcing mechanism of path dependence, setting constraints for the DNS, for ICANN, or for any other technological or organisational development.

In considering the social construction of the DNS as the preformation phase of internet governance, I show how Feinler’s mundane task of ordering the network by hand marks a catalyst in the development of design priorities and management functions that ICANN would inherit. Following an "identity crisis" that emerged during the shift from ARPANET protocol to the internet’s TCP/IP suite, designers needed to construct a standardised addressing schema. Initially, they did not solve the problem of future users by calling for the outright establishment of governmental institutions. First, they suppressed the visibility of numerical addresses, thereby hiding the historically contingent development of core infrastructure. Next, they harnessed the power of extensibility, choosing top-level domain names associated with generic social categories. These choices ushered new tasks of ordering the network into a preexisting discourse of social bureaucracy, which structured everyday work relations, emerging institutional affiliations, and the future ideology of internet governance.

ARPANET’s identity crisis: names, numbers, and initial constraints

The installation of ARPANET marks the triggering event in the development of universal digital addressing as embodied in the DNS, and as such constitutes the preformation phase of governance functions related to ICANN. Jörg Sydow, Georg Schreyögg, and Jochen Koch write that "history matters in the Preformation Phase", because in “organizations initial choices and actions are embedded in routines and practices” and “reflect the heritage [. . .] making up those institutions” (2009, p. 692). ARPANET became operational late in 1969, with sites at the University of California, Los Angeles (UCLA), the Stanford Research Institute (SRI), UC Santa Barbara (UCSB), and the University of Utah (UTAH). ARPANET was designed according to a two-layer architecture, allowing engineers to update, debug, or completely replace entire sections of the network without crashing the system. Even though it afforded designers much needed flexibility, this two-layer architecture established the material conditions for what I think of as “ARPANET’s identity crisis”, a debate among the engineering community about how best to order numerical identification in relation to site-specific names.

At first, designers assigned network addresses to sites according to the order in which machines were installed, setting a precedent that would become problematic as ARPANET grew. The first site, UCLA, had the address 1, while the fourth site, UTAH, had the address 4. The fact that UCLA was assigned address 1 had no overarching design rationale. Janet Abbate (1999) explains that ARPA chose it as the first site because Leonard Kleinrock and his students at UCLA were experimenting with "a mathematical tool called queuing theory to analyze network systems" (p. 58). This historical accident is exemplary of the fact that, as Sydow et al. write, “Since organizations are social systems and not markets or natural entities, triggering events in organizations are likely to prove to be not so innocent, random, or ‘small’” (2009, p. 693). Even though it was random and erased from internet infrastructure, UCLA-1 set a precedent that would soon come to annoy many in the ARPANET community and motivate an ideological reorientation.

Designers did not find the numerical identification of the host layer satisfactory. In 1973, Feinler’s colleague at the NIC, Jim White, wrote that "the fact that [. . .] Network software employs numbers to designate hosts, is purely an artifact of the existing implementation of the Network, and is something that the human user should NEVER see or even know about" (Gee Host Names and Numbers are Swell, May 11, 1973, SRI ARC/NIC Records, Lot X3578-2006, CHM). An addressing schema based solely upon the order in which machines were installed could not help but emphasise the historical contingency of ARPANET’s initial design philosophy.

During these early years, the somewhat coincidental process by which ARPANET was assembled also drove heated debates involving site-specific naming conventions. Peggy Karp of the MITRE Corporation proposed a set of standardised names in 1971, citing problems related to the fact that each site "employs their own special list" of host mnemonics (Standardization of Host Mneumonics, Request for Comments 226), as evidenced by the hand-written desk reference introducing this article. Karp (1971) proposed a list of standardised host names, suggesting, for example, that host 1 remain “UCLA” and host 2 become “SRIARC” (ARC standing for the NIC’s original and at that time still official department name, the Augmentation Research Center). Karp’s proposal limited site names to their institutional affiliation, specifying neither the type of computer running at each address, nor each site’s often more popular nickname.

Karp’s proposed site names generated a flurry of discussions throughout 1971, prompting Robert Braden of UCLA to declare, "Please, let's not perpetrate systems programmers' midnight decisions on all future Network users!" (Host Mnemonics Proposed in RFC 226, Request for Comments 239). He objected to “UCLA” because it does not specify the host computer, suggesting instead “UCLAS7 or UCLANM” because UCLA ran an NMC Sigma 7 computer. Braden also writes that “SRIARC” is “a poor choice[,]” because “everybody calls it the NIC,” and so suggests the name “SRINIC.” Even though Braden’s own proposals read as a programmer’s midnight decisions, he is right to point out that mnemonics based upon the installation of ARPANET infrastructure were not “fully satisfactory”, writing, “It is a set of historical accidents, and shows it.” Braden ends up recommending that names be standardised according to codes at the NIC, based on its ability to function as an institutional reference.

Designers reached consensus around this idea, and the NIC accepted the task of standardising host names, which would fortify its institutional function of ordering the network into the foreseeable future. Writing on behalf of the NIC, Richard W. Watson (1971) emphasised the need to "recognize the expanding character of the Network, with the potential eventually of several hundred sites" (More on Standard Host Names, Request for Comments 273). The NIC standardised official site mnemonics based upon the rough structure of “institution name-computer” as initially proposed by Jon Postel (Standard Host Names, Request for Comments 236, 1971), then a graduate student at UCLA.

During the end of 1973 into early 1974, as the NIC secured centralised authority over the official host name list, a new project rumored to be underway stirred anxieties across the network. Up to this point, the work of ordering numerical addresses with site names functioned relative to ARPANET alone. Realising this might have been short-sighted, one concerned designer wrote, "There has been no general discussion of multi-network addressing—although there is apparently an unpublicized Internetworking Protocol experiment in progress—and some other convention may be more desirable" (L.P. Deutsch, Host Names On-Line, Request for Comments 606, 1973). This "unpublicized Internetworking Protocol experiment" brought the entire ARPANET identity schema into question. In dealing with ARPANET's identity crisis, network designers realised that core infrastructural development must be suppressed from the user interface, an insight that would direct the early work of doing internet governance through its influence on the design philosophy of the DNS.

Constructing the DNS: three mechanisms of positive governmental feedback

The two-layer architecture of ARPANET could not accommodate network growth, offering designers a crash course in how to avoid negative governmental feedback. The idea of feedback or self-reinforcement is central to path dependence theory. Dobusch and Schüßler write, "Specifically, we argue that the mechanisms of positive feedback or self-reinforcement can be specified as a necessary condition for path dependence" (p. 618). In short, lock-in could not occur without such self-reinforcing mechanisms. Moreover, Dobusch and Schüßler argue that as a conceptual construct, feedback "can act as an integrating factor—as a conceptual bridge to other theories that explain evolutionary processes characterized by increasing stability and lock-in" (p. 618). One such concept for which feedback can act as a bridge is ideology, itself a way social constructivists describe the self-reinforcing relationship between social values and work relations. Before the DNS could become locked in as the internet's social interface, designers had to renegotiate their values in order to foster positive governmental feedback.

Facing a future of exponential growth, designers adopted a specific orientation toward the past. To solve the problem of future users, designers not only needed to organise numbers in relation to names; they had to build a new layer of internet infrastructure, one that would not be deemed historically accidental in an ever-shifting internetwork landscape. Working in response to the constraints of ARPANET infrastructure, designers constructed the DNS by negotiating three mechanisms of positive governmental reinforcement, implementing: 1) extensible field addressing; 2) domains of shared cognition; and 3) a hierarchical authority. The social construction of the DNS shows how the initial phase of path dependence is never fully open, being often restricted by the self-conscious goal of making a technology adoptable, even when others had not yet been able to adopt it.

1. Extensible field addressing

In 1977, Jon Postel, who had become a researcher at the University of Southern California (USC), proposed a solution to the addressing problem by finding a way to order numbers and names hierarchically, unlike the schema associated with ARPANET. Postel (1977) wrote, "The addressing scheme should be expandable to increase in scope when interconnections are made between complex systems[,]" and concluded that the best solution to the problem "is to always represent the address by fields" (Extensible Field Addressing, Request for Comments 730). Fields are discrete categories that structure a database. The organisation of fields in a database indicates how categories of data relate to each other. Databases can accommodate a specific instantiation of data by positioning it in its proper field. For example, Postel (1977) proposed this addressing schema: Network / Switching Machine / Host / Message-Identifier. The original address for UCSB, the third node on the ARPANET, would read: ARPANET / 3 / 3 / [message-id]. Postel (1977) considered this hierarchical structure "a natural way" of addressing, because "the most general field should come first with additional fields specifying more and more details" (Extensible Field Addressing, Request for Comments 730). It seemed more natural, I suspect, in comparison with the prior ARPANET addressing, a non-hierarchical schema that nobody liked.
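To make the logic of field-based addressing concrete, the following sketch models an address as an ordered list of fields running from most general to most specific. It is a minimal illustration of the idea rather than the RFC's actual format; the class and method names are my own.

```python
# A minimal sketch of the extensible field idea in RFC 730, assuming a
# simple list-of-fields representation; the class and method names are
# illustrative and not drawn from the RFC.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ExtensibleAddress:
    # Fields run from most general to most specific, e.g.
    # network / switching machine / host / message-identifier.
    fields: List[str] = field(default_factory=list)

    def extend(self, value: str) -> "ExtensibleAddress":
        # A more specific field can be appended without invalidating
        # any existing, more general address.
        return ExtensibleAddress(self.fields + [value])

    def __str__(self) -> str:
        return " / ".join(self.fields)

# UCSB, the third ARPANET node, in Postel's proposed schema:
ucsb = ExtensibleAddress(["ARPANET", "3", "3"])
print(ucsb.extend("[message-id]"))  # ARPANET / 3 / 3 / [message-id]
```

The point of the sketch is that extension never disturbs the more general fields, which is what allowed designers to layer new infrastructure beneath a stable interface.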

Extensibility refers to the ability to accommodate future infrastructural development seamlessly at the user interface. An extensible field model of address afforded designers the opportunity to layer over the physical history of network design and all the accidents that came with it. Designers could choose what categories of data each field would embody. They could label fields according to named concepts such as "network", “host”, or even “domain”, and put unique numerical identification into its place. Extensible field addressing facilitated the creation of a database that allowed designers the choice of how network entities could be represented at the user interface.

By embodying the value of extensibility, the DNS was designed to interpellate all future users, ushering new sites into their respective social categories like so many Matryoshka nesting dolls. In the year leading up to the installation of the master table, Paul Mockapetris (1983), who outlined the technical specifications of the DNS, wrote that while the DNS "database will initially be proportional to the number of hosts using the system", it "will eventually grow to be proportional to the number of users on those hosts as mailboxes and other information are added to the domain system" (Domain Names—Implementation and Specification, Request for Comments 883). This database would itself become the discursive body of network infrastructure, structured by domains of shared cognition.

2. Domains of shared cognition

Designers developed a new addressing schema based upon the extensible field model, reaching a consensus around the concept of "Internet Name Domains". Deciding what a domain actually was, however, required much discussion. D.L. Mills first proposed this system. Mills (1981) writes that since "every internet host is uniquely identified by one or more 32-bit internet addresses and that the entire system is fully connected[,]" a "hierarchical name-space partitioning can easily be devised to deal with this problem" (Internet Name Domains, Request for Comments 799). Mills discussed this schema in relation to email, suggesting the structure "<user> . <host> @ <domain>", with specific network mnemonics, such as ARPA or COMSAT, placed in the domain field. Like Postel's proposal, this domain model also positions networks at the top of the address hierarchy. Even though this model suppresses the visibility of core infrastructure, it still maintains a site-specific, historical reference through the "host" field.
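A toy parser makes the shape of Mills's schema easier to see. The splitting logic and the sample address below are my own illustration, assuming well-formed input; they should not be read as the RFC's specification.

```python
# A toy rendering of Mills's "<user>.<host>@<domain>" schema (RFC 799).
# The parsing rules and the sample address are illustrative assumptions.
def parse_name_domain(address: str) -> dict:
    local, domain = address.rsplit("@", 1)  # the domain field (e.g. ARPA) tops the hierarchy
    user, host = local.split(".", 1)        # the host field keeps a site-specific reference
    return {"user": user, "host": host, "domain": domain}

print(parse_name_domain("feinler.sri-nic@ARPA"))
# {'user': 'feinler', 'host': 'sri-nic', 'domain': 'ARPA'}
```

Note how the host field survives in the middle of the address: the schema hides the network's physical history less thoroughly than the eventual DNS would.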

Considering the rapid growth of internetworking, David D. Clark of MIT offered a rather contemplative response. Clark (1982) begins, "It has been said that the principal function of an operating system is to define a number of different names for the same object, so that it can busy itself keeping track of the relationship between all of the different names" (Names, Addresses, Ports, and Routes, Request for Comments 814). He goes on to argue that network protocols such as TCP/IP are no different. He suggests that the "scope of the problem" had not yet been accurately judged, writing,

One of the first questions one can ask about a naming mechanism is how many names one can expect to encounter. In order to answer this, it is necessary to know something about the expected maximum size of the internet. Currently, the internet is fairly small. It contains no more than 25 active networks, and no more than a few hundred hosts. This makes it possible to install tables which exhaustively list all of these elements. However, any implementation undertaken now should be based on an assumption of a much larger internet. The guidelines currently recommended are an upper limit of about 1,000 networks. If we imagine an average number of 25 hosts per net, this would suggest a maximum number of 25,000 hosts.

Even with what we now know to have been low estimates, Clark argues that the potential breadth of the internet requires the complete suppression of infrastructural fields, such as "<host>", at the directory interface in order to implement an acceptable management strategy.

Having come to understand core infrastructure as historically accidental to the user interface, designers recuperated domains by redefining them according to abstract concepts of network governance rather than according to site installation. Postel and Zaw-Sing Su (1982) of SRI defined a domain as "a region of jurisdiction for name assignment and of responsibility for name-to-address translation" (The Domain Naming Convention for Internet User Applications, Request for Comments 819). The intent of a domain-based addressing schema, they wrote, “is that the Internet names be used to form a tree-structured administrative dependent, rather than a strictly topology dependent, hierarchy.” In defining domains as spaces of administration and jurisdiction, engineers opened a way to organise the directory interface according to categories of bureaucratic discourse. In other words, defining domains as spaces of network governance allowed a new set of names to restructure existing sites, by occupying the top of the extensible field hierarchy.

While TCP/IP became the universally adopted protocol suite in January 1983, the DNS became operational on 15 December 1984, when the NIC acquired the master table of top-level domain names and their associated servers (Postel, Domain Name System Implementation Schedule—Revised, Request for Comments 921). Designers reached consensus around five top-level domains: GOV, EDU, COM, MIL, and ORG. (Initially, ARPA itself was a sixth top-level domain, although it was restricted to network experimentation.) While this decision had no direct technical rationale, Postel and Reynolds wrote that the intention of the system was "to provide an organization name [. . .] free of undesirable semantics" (Domain Requirements, Request for Comments 920, 1984). Names indicating what might become historical accidents of network design were avoided: UCLA, for example, could not reside in a top-level field. A specific institution would reside within its conceptual category, 'Education' or 'Government' or 'Commerce', and so on. The philosophy of extensible field addressing allowed designers to position institutional modes of social identification at the top of the hierarchy, ushering the DNS into a preexisting discourse of governmental functions that exist independently of the internet itself.
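As a schematic illustration of what such a master table implies, the sketch below nests institutions under the generic top-level domains and resolves names from the most general field downward. The host entries and addresses are invented for illustration; they are not archival records.

```python
# A schematic, invented master table: institutions nest under generic
# top-level domains rather than appearing at the top level themselves.
# Hosts and addresses here are illustrative, not archival records.
master_table = {
    "EDU": {"ucla": "10.1.0.1", "mit": "10.2.0.1"},
    "GOV": {"nsf": "10.3.0.1"},
    "COM": {"bbn": "10.4.0.1"},
    "MIL": {"darpa": "10.5.0.1"},
    "ORG": {"nic": "10.6.0.1"},
}

def resolve(name: str) -> str:
    # Resolution walks from the most general field (the TLD) down to the
    # specific institution, mirroring the extensible field hierarchy.
    label, tld = name.rsplit(".", 1)
    return master_table[tld.upper()][label.lower()]

print(resolve("ucla.edu"))  # 10.1.0.1
```

The design choice the sketch foregrounds is that UCLA appears only as an entry under EDU, never as a top-level key: the installation history of the network is invisible at the interface.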

Because the DNS has social concepts at the top of its extensible field hierarchy, the system could reflect the society into which it was implemented while simultaneously making room for future users outside the ARPANET community. With networks addressed according to governmental concepts rather than institutions themselves, designers made what they perceived to be a pragmatic decision: build a flexible, layered network organised by a hierarchy of institutional signifiers that already exist in the world. The DNS provided common terms through which the general public could understand how the internet is organised in relation to society, proving to be a solution to ARPANET's identity crisis. With the DNS, designers had a stable addressing system in place. Now all they had to do was find a way to make it work on an everyday basis, and into the future.

3. Hierarchical authority

In order to govern future users, Paul Mockapetris introduced the concept of authority. He writes, "Although we want to have the potential of delegating the privileges of name space management at every node, we don't want such delegation to be required" (Mockapetris, Domain Names—Concepts and Facilities, Request for Comments 882, 1983). If such delegation were required, each network or specific institution would have final authority over its users, leading the system back into the realm of "historical accident" that designers needed to avoid. Instead, Mockapetris recommended investing authority in a name server, which would have "authority over all of its domain until it delegates authority for a subdomain to some other name server". He proposed principles of authority that require a name server administrator to "register with the parent administrator of domains" and also to "identify a responsible person[,]" someone "associated with each domain to be a contact point for questions about the domain, to verify and update the domain related information, and to resolve any problems (e.g. protocol violations) with hosts in the domain" (Mockapetris, Domain Names—Concepts and Facilities, Request for Comments 882, 1983).
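The following sketch illustrates this principle of optional delegation: every zone may hand authority for a subdomain to another server, but it retains authority wherever it does not. The class design, names, and contact strings are my own assumptions, not Mockapetris's specification.

```python
# A minimal sketch of optional delegation as described in RFC 882: every
# zone *may* hand authority for a subdomain to another server, but none
# is required to. Class design and contact strings are assumptions.
class Zone:
    def __init__(self, name: str, responsible_person: str):
        self.name = name
        self.responsible_person = responsible_person  # contact point for the domain
        self.delegations = {}                         # subdomain label -> Zone

    def delegate(self, label: str, responsible_person: str) -> "Zone":
        # Delegation transfers authority for a subdomain to another zone.
        child = Zone(f"{label}.{self.name}", responsible_person)
        self.delegations[label] = child
        return child

    def authority_for(self, label: str) -> "Zone":
        # Absent delegation, this zone retains authority over the subtree.
        return self.delegations.get(label, self)

edu = Zone("edu", "hostmaster@nic")      # hypothetical contacts
edu.delegate("ucla", "admin@ucla")
print(edu.authority_for("ucla").name)    # ucla.edu
print(edu.authority_for("utah").name)    # edu (no delegation required)
```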

Mockapetris borrowed the term "responsible person" from Jon Postel. In order to establish a domain, Postel (1981) wrote, “There must be a responsible person to serve as a coordinator for domain related questions” (The Domain Names Plan and Schedule, Request for Comments 881). He goes so far as to cordon off a special section in order to define “responsible person” precisely:

An individual must be identified who has authority for the administration of the names within the domain, and who takes responsibility for the behaviour of the hosts in the domain in their interactions with hosts outside the domain.

[. . .]

If some host in a domain somehow misbehaves in interactions with hosts outside the domain (e.g. consistently violates protocols), the responsible person for the domain must be able to take action to eliminate the problem.

Postel conceives the "responsible person" not simply as a steward, but as a potential disciplinary authority, someone who has the power to decide what constitutes “misbehavior” and then to “eliminate the problem” accordingly. That Mockapetris adopted this term, too, suggests that in terms of internetwork administration, responsibility meant something very specific. A responsible person seems to be someone who shares values akin to those of internet designers. An irresponsible person, someone who “consistently violates protocols”, for instance, is someone who does not share values akin to designers, someone who might very well warrant an administrative elimination.

Organisational representatives who used the internet for specific projects or business activities initially filled the role of "Responsible Persons", although this ended up contributing to the administrative difficulties it was intended to resolve. Archival documents from the NIC show that the "Responsible Person" (RP) model was not effective in managing network access. This was largely due to the definition of RPs as organisational figureheads. Even though she no longer had to order the network by hand, Feinler still had to order RPs, which proved just as difficult. In a hand-written memo, Feinler (1985) wrote,

The Responsible Persons are the wrong people to track who has permission to use the network. They are people such as very important PIs or Vice Presidents of companies and the like—people who deal in concepts and macro mgt; not administrative minutia. They either forget or outright refuse to do the job and yet they are listed as contacts. (Memo on Responsible Persons, SRI ARC/NIC Records, Lot X3578-2006, Computer History Museum)

When someone sought a password to access network services, "Responsible Persons" were the ones charged with managing this. However, because many RPs were "very important PIs or Vice Presidents", this sort of "administrative minutia" fell through the cracks. The problem was compounded because, in this early system, "passwords [were] invalidated after 6 months[,]" at which point users had "to get permission again from the RP". Feinler (1985) wrote, "Unfortunately the RPs usually let their own passwords expire and can't reactivate their users" (Memo on Responsible Persons, SRI ARC/NIC Records, Lot X3578-2006, CHM). In defining "Responsible Persons" according to their institutional status in disparate organisations, the NIC was often unable to help users get network access effectively.

Another administrative problem emerged from the fact that multiple "Responsible Persons" were often affiliated with a single host computer. Users tended to think that host sites facilitated network access, not realising that "Responsible Persons" of specific organisations or projects served that function. In an email to Feinler, Bob Baker (1985) wrote,

There has been a lot of confusion caused by people failing to understand that the registration [. . .] is organization oriented and not host or site oriented. Thus some people have the mistaken idea that to get registered they should contact someone connected with the host they use, instead of the "responsible person" for the organization they belong to. (How to Announce TACACS, January 2, 1985, SRI ARC/NIC Records, Lot X3578-2006, CHM.)

In another memo to Feinler pointing out the main problems of the "Responsible Person" model, Johanna Landsbergen (1986) wrote, “When the password for the Responsible Person of an Org expires and he/she does not remember their old password”, no one at the organisation has the ability to “get a new password” (ARPANET TACACS TOOL PROBS, January 15, 1986, SRI ARC/NIC Records, Lot X3578-2006, CHM). Feinler (1986) raised these issues at multiple NIC meetings, her notes indicating that “Responsible Persons” are “administratively confusing”, for it was difficult to find the correct “RP if there are 4 at one org” (Notes, SRI ARC/NIC Records, Lot X3578-2006, CHM). With the DNS having signified network topology according to institutional concepts, it was difficult to clearly identify “Responsible Persons” as stable points of contact.

The association of "Responsible Persons" with the organisations they represented also led to ambiguities in the construction of network domain databases. In the draft of a proposed "User Database Host Schema" from Bolt Beranek and Newman (BBN), John V. DelSignore includes a glossary that attempts to distinguish the terms "user", "person", and "organization". He writes,

The word "user" is generally used to indicated [sic] a person that is or has logged into the database tool and is performing or performed a certain act or command. Also ‘user’ indicates the real-life person associated with a person record.

[. . .]

occasionally the words "person" or "organization" appear in sentences such as "The user created the person". We realize the users create "person records" and not "persons" per se. The terms "person" and "organization" are often used interchangeably with "person record" and "organization record" for the sake of brevity. (John V. DelSignore, Jr., ARPANET TAC Access Control User Database Host Schema, September 16, 1986, SRI ARC/NIC Records, Lot X3578-2006, CHM)

Feinler (1986) was right to conclude "that as a registration scheme it is an administrative nightmare" (Notes, SRI ARC/NIC Records, Lot X3578-2006, CHM), and that the RP model of internet governance could not hold.

Much like numbered host identification that developed in an historically contingent manner, the "Responsible Persons" model of network administration could not efficiently accommodate future users by virtue of the fact that RPs were associated with specific projects at specific organisations at specific moments in time. By the end of 1986, designers abandoned the RP model, instead situating host administrators as points of contact for network users to register in the DNS and acquire network access. The RPs still functioned as gatekeepers; however, they no longer had to manage passwords, directly correspond with users seeking access, or maintain records of host activity. The host administrator took on this task, working as a liaison between users, organisational representatives, and the NIC. The introduction of the host administrator role brought the division of labour in line with the DNS addressing schema. To replace the historical accident of RPs, as well as to manage existing and future sites as if they had always been expected, designers created a new responsibility—one of maintaining the internet’s order—abstracted from specific projects.

As early network designers introduced the concept of authority through the social construction of the DNS, they catalysed the development of bureaucratically independent internet governance functions like IANA, which paved the way for later institutions like ICANN. More than a material base, the DNS provided the conceptual structure for a hierarchical regime of internet governance that centralised administrative power within discursive categories inherited from historically naturalised social categories. Through the social construction of the DNS, early network designers better ensured that future users would themselves maintain the DNS as a foundation for governing the global internet.

Conclusion

By using social constructivist historical analysis in tandem with path dependence theory, this article shows how early network designers built the DNS through harnessing three modes of positive governmental feedback: 1) extensibility, which afforded ways to hide the contingent development of network infrastructure; 2) domains of shared cognition, which allowed non-expert users to navigate the internet in a socially legible manner; and 3) hierarchical authority, which established the initial structure for "internet governance" as an institutionalised function, and ensured that future users could themselves maintain the system and extend it further. After the installation of the DNS, Feinler continued as head of the NIC until 1989, the NIC transformed into InterNIC in 1993, and ICANN finally assumed all responsibilities of InterNIC with its foundation in 1998.

The lock-in of ICANN as the internet's primary governing body is indeed related to macro-level forces in the global political economy of the 1990s. In his article "ICANN between technical mandate and political challenges" (2000), Wolfgang Kleinwächter argues that ICANN became locked in with its incorporation due to four macro-level problems: 1) "the need to demonopolise" (p. 556) registrars during the dot-com boom; 2) the need to settle "disputes between trademark holders and domain name holders" (p. 558), which led to ICANN's Uniform Dispute Resolution Policy (UDRP); 3) the need to recognise country-code top-level domains (ccTLDs), codified in ICANN's Governmental Advisory Committee (GAC); and 4) the need to create new generic top-level domains (gTLDs), which motivated ICANN to work in tandem with the World Intellectual Property Organization (WIPO). Kleinwächter persuasively articulates the immediate historical context of ICANN's global lock-in; but such macro-level forces are themselves historically related to the micro-level decisions of the people who built the DNS toward governmental ends. In a way, Kleinwächter himself reveals how ICANN's incorporation solved the problem of future users as it emerged again in the 1990s, but at the macro level of policy analysis.

As more people consider "internet governance" beyond its institutional focus, an idea promoted by scholars including Laura DeNardis (2012) and Francesca Musiani (2014), finding new ways to conceptualise the term will become more important. Understanding the prehistory of institutional bodies is one way to consider how people “do” governance in response to everyday working pressures. This article shows how internet governance has never been given, and has always been done. Internet governance is not a product or result of organisations like ICANN. Even though it serves an important governance function, ICANN is itself based on the historical contingencies of ordering the early internet. ICANN must also constantly respond to how people use the internet in increasingly varied ways, and as more users understand the political significance of “doing” internet governance, complexities related to the problem of future users will only compound, as users themselves attempt to create—or alter—control structures of the internet.

In designing and implementing the DNS, early designers paved the way for new technocratic functions that ICANN would inherit, although they did not initially call for the institution of governance bodies per se. Rather, they worked through technical issues of the finest complexity, and in so doing developed perspectives on issues such as the nature of historical contingency, the jurisdiction of virtual space, and the concept of authority itself. In learning how to recognise the historical contingency of network design, embracing extensibility, and reifying a new division of labour, early network designers ensured that future users could not only navigate the internet, but could also keep the system in working order on an everyday basis. The technocratic relations that rigidified around the DNS fostered values related to a particular brand of universality, one supported through the potentially infinite extension of social genera, as evidenced today in ICANN’s slogan: "One world. One Internet."

References

Abbate, J. (1999). Inventing the Internet. Cambridge: MIT Press.

Baker, B. (1985). How to Announce TACACS (SRI ARC/NIC Records, Lot X3578-2006, Computer History Museum).

Bijker, W. E., T. P. Hughes & T. Pinch. (1987). The Social Construction of Technological Systems: New Directions in the Sociology and History of Technology. Cambridge: MIT Press.

Braden, R. (1971). Host Mnemonics Proposed in RFC 226 (Request for Comments 239).

Clark, D.D. (1982). Names, Addresses, Ports, and Routes (Request for Comments 814).

DelSignore, J.V. Jr. (1986). ARPANET TAC Access Control User Database Host Schema (SRI ARC/NIC Records, Lot X3578-2006, Computer History Museum).

DeNardis, L. (2012). Hidden Levers of Internet Control: An Infrastructure-based Theory of Internet Governance. Information, Communication and Society 15(5): 720-38.

Deutsch, L.P. (1973). Host Names On-Line (Request for Comments 606).

Dobusch, L. & E. Schüßler. (2012). Theorizing path dependence: a review of positive feedback mechanisms in technology markets, regional clusters, and organizations. Industrial and Corporate Change 22(2): 617-647.

Feinler, E. (1973). Changes and Reverifications (SRI ARC/NIC Records, Lot X3578-2006, Computer History Museum).

Feinler, E. (1985). Memo on Responsible Persons (SRI ARC/NIC Records, Lot X3578-2006, Computer History Museum).

Feinler, E. (1986). Notes (SRI ARC/NIC Records, Lot X3578-2006, Computer History Museum).

Karp, P. (1971). Standardization of Host Mneumonics (Request for Comments 226).

Kleinwächter, W. (2000). ICANN between technical mandate and political challenges. Telecommunications Policy 24: 553-563.

Landsbergen, J. (1986). ARPANET TACACS TOOL PROBS (SRI ARC/NIC Records, Lot X3578-2006, Computer History Museum).

Langley, A. (1999). Strategies for Theorizing from Process Data. The Academy of Management Review 24(4): 691-710.

Mills, D.L. (1981). Internet Name Domains (Request for Comments 799).

Mockapetris, P. (1983). Domain Names—Concepts and Facilities (Request for Comments 882).

Mockapetris, P. (1983). Domain Names—Implementation and Specification (Request for Comments 883).

Musiani, F. (2014). Practice, Plurality, Performativity, and Plumbing: Internet Governance Research Meets Science and Technology Studies. Science, Technology & Human Values 40(2): 272-286.

Postel, J. (1984). Domain Name System Implementation Schedule—Revised (Request for Comments 921).

Postel, J. (1981). The Domain Names Plan and Schedule (Request for Comments 881).

Postel, J. (1977). Extensible Field Addressing (Request for Comments 730).

Postel, J. (1971). Standard Host Names (Request for Comments 236).

Postel, J. & J. Reynolds. (1984). Domain Requirements (Request for Comments 920).

Postel, J. & Z.S. Su. (1982). The Domain Naming Convention for Internet User Applications (Request for Comments 819).

Sydow, J., G. Schreyögg & J. Koch. (2009). Organizational Path Dependence: Opening the Black Box. The Academy of Management Review 34(4): 689-709.

Watson, R.W. (1971). More on Standard Host Names (Request for Comments 273).

Westheimer, E. (1971). Network Host Status (Request for Comments 252).

White, J. (1973). Gee Host Names and Numbers are Swell (SRI ARC/NIC Records, Lot X3578-2006, Computer History Museum).

Acknowledgments

The author wishes to thank Andrea Hackl, Leonard Dobusch, and Francesca Musiani for their generous guidance in the revision process. The author also wishes to thank Sara Lott, Senior Archives Manager at the Computer History Museum, who helped in the early selection of materials.

Disclosing and concealing: internet governance, information control and the management of visibility


This paper is part of 'Doing internet governance: practices, controversies, infrastructures, and institutions', a Special issue of the Internet Policy Review.

Introduction

The ubiquity of digital technologies and the datafication of many domains of social life raise important questions about governance. Not so long ago, digital technologies were mainly seen as 'devices of information' and not 'agencies of order' (Durham Peters, 2015), but this has certainly changed over the last decade. As processes of digitalisation and 'datafication' (Mayer-Schönberger and Cukier, 2013) come to shape most societal domains, it makes less and less sense to think of digital technologies as tools or as separate (cyber)spaces. Digital transformations increasingly make headlines, define political agendas and shape research priorities. In research circles, the nexus between digital technologies and governance – whether in the shape of (technical) coordination, (political) regulation or (social) ordering (Hofmann, Katzenbach, and Gollatz, 2016) – has emerged as a key concern and laid the foundation for the field of internet governance and for more recent discussions of the role of data and algorithms in social and political affairs (Boyd and Crawford, 2012; Gillespie, 2012).

Scholarly work on internet governance has had a rapid and remarkable trajectory in trying to keep up with technological and political developments in this area. This research emerged as a set of reflections on technology and ideology offered from within the engineering labs and close-knit communities developing the technological innovations that we have come to know as the internet (Hafner and Lyon, 1996). As these technological developments spilled into more public, global contexts, internet governance research became occupied with questions about international agreements, participation, rights and related issues, primarily by engaging insights from international relations and political science (DeNardis, 2014; Mueller, 2010). As this paper suggests, we stand at the threshold of yet another major transformation when it comes to the role of digital technologies in societal and political affairs, one that requires internet governance scholars to once again calibrate the conceptual tools and analytical approaches used to guide their work. At this point, questions about the entanglement of technology and social practices and the ordering effects of processes of digitalisation and datafication deserve more attention, and this requires that we extend the emerging engagement with insights from sociology and science and technology studies (STS). In particular, I suggest that internet governance research needs to explore how digital, datafied infrastructures afford and condition ordering through information control, the management of visibilities and the guidance of attention. Articulating the central role of visibility practices, such as transparency, surveillance, secrecy and leakages, in the digital age, this paper sets an agenda for internet governance research that makes processes of seeing, knowing and ordering central. To this end, the paper suggests a conceptual vocabulary for studying information control and managed visibilities as forms of ordering, and provides some empirical illustrations of such studies.

The paper makes two contributions to the emergent engagement with STS and sociological perspectives in internet governance research and work on the societal and political ramifications of digital technologies in political science more broadly. The first contribution is an overview of developments in the internet domain that make it pertinent to push beyond existing orientations and theoretical approaches. This takes the shape of a historical overview of the trajectory of internet governance research, with particular focus on the underlying assumptions about the internet, the primary objects of study, and the conceptions of governance reflecting different theoretical and disciplinary foundations.

In the field of internet governance, most work has explored governance arrangements, institutional developments and the effects of interactions among public and private actors in the emergence of the internet as a matter of concern in global politics (DeNardis, 2009; 2014; Deibert et al., 2010; Mueller, 2010). For anyone trying to understand the public, political and scholarly significance of the internet, these works have been ground-breaking and central to the emergence of this field of research, as well as to public understanding of the importance of the issues. But to push our field forward, we need more theories, analytical vocabularies and empirical orientations that take into account how digital and datafied infrastructures are ingrained in and shape social and cultural practices that go beyond what is normally associated with internet governance (such as regulatory bodies, standard-setters and technical communities) and are central to a much wider range of ordering processes (for similar arguments, see for instance Flyverbom, 2011; Mansell, 2012; Franklin, 2013; Musiani, 2015).

The second contribution of the paper is to articulate what science and technology studies and, in particular, sociologies of knowledge and visibility (Shapiro, 2003; Brighenti, 2007; Rubio and Baert, 2012; Flyverbom et al., 2016) have to offer when it comes to investigating how digital technologies relate to governance. It suggests, in particular, that a focus on the dynamics and effects of information control and visualisation is a valuable starting point, and provides some empirical illustrations of what may be gained by approaching internet governance issues in this manner. Focusing on information control and the management of visibilities may open up new avenues for research and make different objects of analysis central, in particular when it comes to understanding the role of digital infrastructures in the shaping of social and political realities. As suggested by Gillespie (2016), we still lack "the language to capture the kind of power" that digital infrastructures involve, so we need to explore alternative conceptualisations and analytical vocabularies that can invigorate studies and theories of digital technologies in new and exciting ways.

Early trajectories of internet governance research

Even though early discussions of the internet often considered it to be a separate space outside the reach of traditional forms of governmental regulation - for instance by referring to it as ‘cyberspace’ - important scholarship has shown that it has always been subjected to multiple forms of governance (Lessig, 1999; Goldsmith and Wu, 2006; Musiani et al., 2016). Studies of internet governance emerged alongside the technological developments they set out to investigate and were entangled with the people, organisations and ideologies shaping this domain. That is, many of those pursuing research on internet governance were also so-called ‘internauts’, i.e., members of the technical communities building and coordinating the internet and/or directly involved in policy development in this area. Reflecting this symbiosis, early scholarly discussions focused on technical and operational issues (Ziewitz and Pentzold, 2014), and often made technical arguments for policy approaches, such as the need for the governance of technological networks to itself be networked (Klein, 2002; Kleinwächter, 2000). Most of this research focused on a narrow set of organisations, in particular the Internet Corporation for Assigned Names and Numbers (ICANN), and other bodies directly involved in standard-setting, coordination and other operational matters (Klein, 2002; Mueller, 2002).

These early discussions of internet governance had a strong focus on showing the uniqueness of the internet and how the (libertarian) logics underpinning its development conditioned particular forms of governance. These approaches suggested that the internet has a distinct, decentralised technological architecture, which makes it difficult to govern, and focused on the tension between a decentralised, global technological innovation and established forms of regulation based on national boundaries and sovereignty. Such discussions often articulated a resistance to top-down control and governmental interventions, and a focus on open standards and other technical features that allow for interoperability, peer production, innovation and unimpeded data flows. To sum up, these early approaches focused primarily on individual organisations and the (libertarian) ideologies shaping the area, and conceptualised internet governance mainly as a matter of technical coordination to be kept separate and safe from statist regulation. These features served the purpose of establishing the internet as different from other technological developments, and of separating the internet from ordinary social and political life.

With commercialisation, intense political struggles, and the growing importance of the internet as an infrastructure for trade and socio-cultural formations, these orientations are no longer clear-cut or even pervasive, but they still form an important foundation for what remains a controversial and problematic relation between the internet and established, (inter)governmental approaches to governance.

As discussions of the internet and its consequences for economic, political and cultural developments grabbed the attention of the public, scholars and policymakers, work on internet governance also took on new challenges and themes. In particular, questions about processes of institutionalisation, inter-governmental arrangements and stakeholder participation, as well as policy issues such as privacy, security and rights, became more central.

The growing focus on the internet as a phenomenon with wide-reaching societal consequences was also reflected in the understandings of governance underpinning research in this emergent field. Moving beyond the focus on technical forms of coordination and operational bodies allowed for issues like the intersections between established statist and intergovernmental forms of regulation and more controversial multi-stakeholder approaches to be addressed (Anderson, Dean and Lovink, 2006; Mueller, 2010; Flyverbom, 2011). Also, this research highlighted the global nature of internet governance as an issue area with ramifications for a wide range of more established policy concerns (DeNardis, 2009). This involved linking the internet to questions of inclusion, development, rights and security (Chadwick, 2006; Jørgensen, 2006). In terms of how governance was arranged, this research highlighted that there was no institutionalised regulatory system in place, and few established authorities or international agreements like those we see in other areas. It also stressed the complexity of internet governance, where some parts are steered by a myriad of technical, private, standards-based and other ad hoc forms of regulation, some parts are handled by established international organisations, and others are addressed through more informal governance arrangements, such as multi-stakeholder dialogues without negotiating or decision-making power.

Reflecting the maturation of the internet and the growing focus on its ‘governability’ (Hofmann, Katzenbach and Gollatz, 2016), much of this research focused on the sites and organisations where internet governance was addressed, such as the World Summit on the Information Society and related bodies, and questions about participation and inclusion (Mueller, 2010; Singh and Flyverbom, 2016). Largely, internet governance research was born and bred in disciplinary fields with a focus on institutionalisation (what institutions and governance regimes are emerging to handle the global governance and politics of the internet?), the state (how do state and non-state actors coordinate or clash in this area and what are possible effects of public or private forms of governance?) and pragmatic politics (how is the internet emerging as a key asset and object of regulation?). This was also reflected in the theories underpinning this work, where most insights and conceptual approaches were adopted from the field of international relations and addressed issues such as the role of public and private actors, intergovernmental processes, networks and institutional developments.

A similar point is made by Hofmann, Katzenbach, and Gollatz (2016), who argue that the work we normally associate with internet governance has focused on regulation, which can be understood as institutionalised, deliberate and goal-oriented interventions by public or private actors seeking to shape behaviour, solve policy problems and implement rules. This is in some ways odd, since very influential work such as Lessig’s stressed precisely the need to understand the regulation of the internet as an interplay among such different forces as laws, norms, markets, and architecture or technical codes (Lessig, 1999). This is partly due to disciplinary differences in theoretical and empirical orientation. But it is still puzzling that so little research has captured the relations among these four forms of governance, or offered analytical frameworks that may help us understand their entanglement (exceptions include Bowrey, 2005; Flichy, 2007; Mansell, 2012).

Taken together, these discussions articulated the need for intergovernmental negotiations and multi-stakeholder dialogues about the internet, and brought up important questions about inclusion, institutionalisation, and rights. Still, most work held on to the idea that the internet should be thought of as a separate space requiring novel governance arrangements rather than extensions of statist approaches. These discussions nonetheless served the important purpose of demonstrating the importance of the internet for political affairs and the need for thorough research in this area. Building on these foundations while acknowledging their limitations is central as we move forward in attempts to grasp emerging developments and develop new vocabularies for the study of governance of and by digital technologies.

Digital infrastructures and ordering - in search of new approaches

With the emergence of ubiquitous digitalisation and datafication (Mayer-Schönberger and Cukier, 2013), digital technologies have become infrastructures for large parts of social life, and an increasing number of human activities take a digital form or leave extensive digital traces. By using digital technologies, we control global value chains and production processes, engage in politics and connect with friends and family. The infrastructures making all this possible consist of multiple digital platforms, tracking systems and other largely invisible ways of sourcing and aggregating data, as well as advanced algorithms and visualisation techniques. As digital technologies become ubiquitous, it seems that we need research that picks up new kinds of issues and discussions than those we normally associate with internet governance research. My suggestion is that we need to shift from the focus on how to govern digital transformations, ‘the internet’ or ‘cyberspace’ to the question of how these govern. The internet is not just an object in need of governance, but is itself constitutive of governance – a means of ordering (Flyverbom, 2011; Ziewitz and Pentzold, 2014; Hofmann, Katzenbach, and Gollatz, 2016).

For scholars interested in the intersection of digital technologies and governance, basic sociological questions about the individual, organisational and societal ramifications of these developments should be central: how do digital transformations shape fundamental issues and mundane practices, such as how we produce knowledge, how we decide what is important, and how we work and think? But most public discussions focus on more spectacular issues - the increasing financial resources of internet companies, concerns about states tracking and profiling citizens, and the effects of digital disruption on traditional industries and institutions. As a result, not enough work addresses the materiality and the possibilities for action offered by digital infrastructures and platforms. To the degree that we even think of their existence, such infrastructures come across as neutral or innocent, and we are more concerned with the interests and aims of the companies and other actors building and taking advantage of them. This focus means that we refrain from studying a wide range of issues that could be considered relevant for, and part of, internet governance. Also, from within the field, a number of scholars have called for more comprehensive and fine-grained accounts of the relations between digital technologies and governance, and the complex entanglements of public and private actors, humans and technologies (DeNardis, 2012; Musiani, 2015; Hofmann, Katzenbach, and Gollatz, 2016). At the core of this critique is an emergent realisation that governance involves mundane activities and forms of ordering that are overlooked if we focus too much on the role of formal institutions and deliberate attempts to regulate.

One way to rethink the meaning of internet governance is to conceptualise governance in terms of ordering, not regulation. To this end, insights from Foucauldian and related sociologies of governance are a useful starting point (Dean, 1999; Law, 2003). Such analytical vocabularies are more agnostic when it comes to explanations about causes and structures, more focused on addressing relational interactions, and more practice-oriented than traditional work on internet governance (see Flyverbom, 2011 for a more elaborate discussion). Such broadly sociological approaches increasingly mark discussions about uses, design, digital infrastructures and materiality in accounts of digital transformations. Engaging with these more encompassing research agendas could help establish links and conversations across disciplines and phenomena of relevance to our field. The point is not only that we need to open up the concept of governance to include more subtle and emergent forms, but also that more attention to social practices and ordering processes highlights a set of discussions that have been marginal in previous work. A range of sociological perspectives and themes, like the ones discussed above, are relevant for this purpose.

As digital technologies and data become ubiquitous and infrastructural, so that it makes less sense to think of ‘cyberspace’ as a separate and independent space, we have to shift our attention to the more subtle and intricate ways they shape individual, organisational and societal possibilities for action. To this end, we need more accounts of what digital technologies are, afford and do when it comes to shaping practices, interactions and visibilities. These more subtle forms of ordering that digital technologies create are also forms of internet governance and need to be included in our conceptual approaches (Hofmann, Katzenbach, and Gollatz, 2016: 7).

Studies of ordering: information control and the management of visibilities

Insights from sociology and science and technology studies are useful starting points if we want to reinvigorate studies of internet governance. What I aim to do here is to stress how sociological accounts of visibility (Brighenti, 2007; 2010) have a lot to offer when it comes to articulating how digital technologies facilitate and constrain our possibilities for action. Visibility, information control and knowledge are central aspects of power and governance, and deserve more scrutiny, particularly in the age of big data, autonomic computing and radical transparency. Novel studies of ordering could start exploring how digital transformations shape the way we see, know and govern. This extended research agenda for internet governance studies would make questions about information control and visibility management central, and study how processes of digitalisation and datafication contribute to ordering by making certain phenomena and practices visible, and others invisible, in ways that come to guide our attention and contribute to social and political ordering. Drawing on insights from science and technology studies, affordance theory and sociology, such approaches help us grasp how digital technologies afford and condition ordering through the production of visibilities and the guidance of attention. The argument that there is an intimate relationship between seeing, knowing and governing (Foucault, 1988; Brighenti, 2007) deserves further scrutiny because digitalisation and datafication fundamentally shape how we make things visible or invisible, knowable or unknowable and governable or ungovernable. Some work on the use of digital technology has engaged with questions about visibilities and invisibilities (Treem and Leonardi, 2012) and with questions of transparency (Weber, 2008), but without considering these as part of the broader governance effects of digital technologies. I suggest that a more extensive focus on visibilities invites us to explore how digital technologies condition ordering, and how our attention is guided as a result of these dynamics. That is, systems of governance or forms of ordering always revolve around particular ways of seeing and perceiving, involve distinctive ways of thinking and questioning and work through concrete practical rationalities and techniques of intervention (Foucault, 1988; Dean, 1999).

All types of knowledge production and visualisation techniques have implications for what we see as important and possible to govern, and to unpack these we can rely on conceptual discussions of affordances and the material foundations of knowledge production (Hutchby, 2001; Leonardi, 2012; Hansen and Flyverbom, 2015). Such approaches invite us to engage with questions about the material infrastructures and sources of data that are used for purposes of governance, about the political rationalities that digital technologies help institutionalise, and the patterns of exclusion and inclusion involved when social processes and phenomena are made ‘algorithm-ready’ (Gillespie, 2012; Madsen et al., 2016). The affordances of digital technologies when it comes to ordering can be explored at the individual, organisational and societal level, and the following section offers three examples.

Governing through visibilities

Having articulated the conceptual argument, let me offer some illustrations of the possible shape of such studies. As suggested by Walters (2012: 52), we need to explore the “new territories of power” associated with “the entanglement of the digital, the informational and the governmental”. As stressed above, there are many valuable ways to explore such encompassing questions about how digital technologies govern and are governed, and how ordering plays out as a result of digital transformations. Even if we focus on information control and visibilities, the list of possible topics is extensive, and cuts across individual, organisational, material and societal levels of analysis. In this context, I can only hint at a few of these, and the following three suggestions are in no way exhaustive. But I hope they illustrate some of what could be explored by engaging these ideas about information control and visibilities in future research.

Transparency reports

One question is how our understanding of the phenomenon ‘internet governance’ is conditioned by the kinds of information and disclosures that make it visible and knowable in the first place. As noted above, internet governance plays out in a bewildering range of settings, involves multiple actors and encompasses both intergovernmental, private and technical forms of governance. But we rarely think about how these processes are about managing visibilities in ways that condition particular forms of ordering. One emergent form of internet governance is what internet and telecommunications companies refer to as ‘transparency reports’ and related attempts to show how powerful actors seek to control digital spaces. These reports disclose what data companies compile, the requests for information that states make, and how states filter and sometimes shut off the internet. Such reports thereby respond to an increased focus on transparency when it comes to data aggregation, covert uses of data, as well as filtering, surveillance and censorship in digital infrastructures. But they also distract our attention from the roles and responsibilities of internet companies. Transparency reports may list the number of requests made by individual governments, but they do not provide insight into the agreements or relationships between states and internet companies. They are also a very particular kind of reporting, which may cater to demands for openness and disclosure about government surveillance and censorship, but provide a very specific response in a preformatted and selective shape. What is particularly significant in this context is that transparency reports seek to articulate the value of numbers-based approaches to governance, and challenge (what internet companies consider to be) the overly emotional reactions that policymakers often rely on (Flyverbom, forthcoming). Attempts to make digital technologies governable by use of data visualisations – such as transparency reports – are important to investigate because they select and visualise information in ways that are neither natural nor innocent, and thus manage visibilities and guide our attention.

The result of these disclosures in the name of transparency is that the public gaze is directed to particular parts of the problem – for instance that some governments make a lot of requests for information to be taken down or made available for their use. But it is also important to remember that some states are not even part of such reports because they refuse to share this information. Also, we must not forget that internet companies are involved in other forms of data control and data sharing that they do not talk about publicly, and we can think of transparency reports as strategic ways of guiding our attention. For instance, it was only after the Snowden revelations that Google made it clear that its transparency reports had not disclosed information on how the company feeds information about users to the National Security Agency (NSA). As Google mentioned somewhat apologetically in a blog post: "U.S. law does not allow us to share information about some national security requests that we might receive. Specifically, the U.S. government argues that we cannot share information about the requests we receive (if any) under the Foreign Intelligence Surveillance Act. But you deserve to know" (Google official blog, 2013, para. 3). Because transparency reports are voluntary and initiated by companies themselves, the content and format can be selective enough to allow for such limitations to stay out of sight. As a result, it is often not clear what data are selected and omitted when these reports are compiled, and we are rarely given insight into the contexts and conditions of their production. Transparency reports are thus also a form of obfuscation and strategic opacity (Stohl, Stohl, and Leonardi, 2016). But my argument is not simply that such reports should be more inclusive and deliver more actual transparency, but also that all kinds of disclosures guide our attention and must be understood as managed visibilities that could be different. That is, they invite us to understand internet companies and governance issues in certain ways. This is also what my second illustration highlights.

Internet platforms, humans and machines

Internet platforms like Google, Twitter and Facebook are often perceived as different from traditional companies, and they curate this position quite carefully, for instance by stressing organisational values like dialogue, transparency and innovation (Flyverbom, 2015). But most of the time, we only know and engage with these companies through the services they provide - search, connecting with friends or possibilities for discussion. With no products of their own and their focus on facilitating interactions and sharing, they come across as utilities and platforms rather than normal companies. This position serves an important purpose and is actively maintained by their owners and directors. To the degree that such platforms are seen as technical utilities, not complex organisations full of people and engaged in strategic attempts to shape political agendas and cultural formations, they are in a better position to stay off the radar when it comes to regulation and oversight.

Recent discussions of how Facebook’s Trending feature relies not only on neutral and consistent algorithms, but also on human curators who seemingly highlight some news stories and political views over others, have shown what happens if we start to think of internet companies as similar to news conglomerates. We have a long history of regulating the latter very strictly, and falling into a similar category would put a company like Facebook in a very different situation than at present. The strategic positioning as utilities involves issues such as human labour, how digital data is organised and edited, and how internet companies relate to culture and politics. My point is that these issues should be part of our focus when we investigate how the internet is governed and shapes governance. The task is mainly to establish the links between internet governance and emergent and important research on, for instance, how human labour is rendered invisible on internet platforms, how digital technologies condition particular forms of knowledge production, how identities and personal information are curated in digital spaces and how algorithmic operations edit, sort and shape realities. Starting points could be work on the societal implications of algorithms and data (Gillespie, 2012; Flyverbom and Madsen, 2015; Pasquale, 2015), and studies of digital labour and the entanglement of human and technical operations at work on internet platforms (Irani, 2015; Roberts, 2016). These may seem only remotely relevant to internet governance studies, but the links are important to explore. As I have suggested in this section, what is made visible by and on internet platforms has consequences for how they are perceived and regulated, and how we think of digital transformations more broadly. But questions of visibilities and their relation to ordering are important to explore not only at the organisational level, but also as they shape individual conduct and create the foundation for how we govern societal affairs.

Data doubles

Digital technologies and data also play important roles in the production of visualisations that we use as the basis for decisions and governance. At the individual level, an example is what Ruppert (2011) and others have referred to as ‘data doubles’, i.e. the sum of digital traces we leave. As data doubles come to function as complete representations of us in the context of governance, we see the emergence of potentially worrying scenarios, including the possibility of predictive policing and other forms of governance that no longer rely on situated encounters with the subjects they seek to govern (Hansen and Flyverbom, 2015). Beyond the level of the individual, digital transformations also shape areas like urban governance (Kitchin, 2014), the prevention of terrorism (Morozov, 2014b), the control of financial transactions (Hansen and Flyverbom, 2015), and international development (Hilbert, 2013). Digitalisation and datafication have implications for how we approach societal challenges, such as terrorism, development or tax evasion. A focus on the management of visibilities invites us to consider how such regulatory or political issues come to look different as a result of digital transformations. In the case of development or tax evasion, the reliance on digital, datafied infrastructures means that established ways of producing knowledge are challenged and supplemented by algorithmic forms of calculation and scrutiny. That is, whereas development agencies usually rely on national statistics or household surveys, the use of digital traces as indicators of food crises or epidemics produces rather different types of visualisation and knowledge, directs our attention to new issues, and leads to alternative ways of dealing with governance issues (Flyverbom and Madsen, 2015). The point is not that big data produce more accurate ‘truths’, but rather that we need to explore how such forms of knowledge production condition different and sometimes problematic approaches to governance (Madsen et al., 2016). Morozov (2014b) uses the example of terrorism. In the past, and using more traditional forms of knowledge and visualisation, this was considered a problem with strong ties to history and foreign policy. But if we approach terrorism by use of digital technologies and the aggregation of digital traces, terrorism takes the shape of an ‘information problem’ – a matter of picking up enough signals to pre-emptively strike against a (soon-to-become) terrorist. Morozov’s focus is on the problematic, technocratic effects of these forms of what he terms ‘algorithmic regulation’ based on ‘Silicon Valley logics’. Even if we do not share Morozov’s worries, it is important to explore how digital technologies and datafication unsettle "key questions about the constitution of knowledge, the processes of research, how we should engage with information, and the nature and categorization of reality" (boyd and Crawford, 2012: 665). In particular, we need to consider how political controversies and complex governance issues are re-articulated as administrative or technical matters, and to reflect on the consequences of such ‘post-political’ forms of governance (Garsten and Jacobsson, 2013). The example of terrorism suggests how ubiquitous digital technologies and processes of datafication create new conditions for how we see, know and govern the world around us.
With this, I have sought to illustrate that digital technologies come to shape the way we manage visibilities and produce knowledge, and that these formations have consequences for how we make the world around us knowable and governable. Such questions are not foreign to sociological and STS-inspired accounts of digital transformations, but are rarely considered part of the field of internet governance.

Conclusion

This paper has suggested that contemporary developments in the digital domain invite us to extend and reinvigorate studies of internet governance by giving more attention to questions of managed visibilities and their relation to processes of ordering. Through encounters with sociology, science and technology studies and similar approaches, we have seen a growing interest in more encompassing approaches to governance, extending far beyond Lessig’s (1999) call for approaches to internet governance that address both legal and technical forms of governance. In contrast to the focus on regulation in most internet governance studies, such accounts approach governance by focusing on the forms of ordering (Flyverbom, 2011) and mundane coordination activities (Hofmann, Katzenbach, and Gollatz, 2016) involved, the ‘relevance’ of algorithms for social and political formations (Gillespie, 2012; Ziewitz, 2016) and the role of infrastructures and architectures in the shaping of conduct (DeNardis, 2012). These approaches allow for far more elaborate and fine-grained investigations of how digital technologies and datafication processes become woven into the fabric of social life. But the digital realm also involves other subtle forms of governance that deserve attention. In particular, I have sought to articulate how discussions of the relation between information control, visibilities and governance could move the field forward, and how the concern with seeing, knowing and governing could pave the way for novel studies of internet governance. To this end, the concept of managed visibilities is a starting point that invites us to explore how digitalisation and datafication condition particular forms of information control and the guidance of attention. The conceptual and illustrative discussions of ordering through the management of visibilities show both how the increasingly ubiquitous and infrastructural nature of digital technologies shapes societal and political transformations, and how such theoretical approaches may contribute to the opening up of exciting new avenues for research in and beyond the field of internet governance studies. These contributions are important because they may help us reflect on the largely invisible ways in which digital infrastructures and architectures institutionalise and normalise particular forms of seeing, knowing and governing.

References

Anderson, J., Dean, J. and Lovink, G. (2006) Reformatting Politics: Information Technology and Global Civil Society. London: Routledge

Bowrey, K. (2005) Law and internet cultures. Cambridge University Press

boyd, d. and Crawford, K. (2012) Critical questions for big data: Provocations for a cultural, technological, and scholarly phenomenon. Information, Communication & Society, 15(5), 662-679

Brighenti, A. M. (2007) Visibility: A category for the social sciences. Current Sociology, 55(3), 323-342

Brighenti, A. M. (2010) Visibility in Social Theory and Social Research. Basingstoke: Palgrave Macmillan

Chadwick, A. (2006) Internet Politics: States, Citizens, and New Communication Technologies. Oxford: Oxford University Press

Clinton, B. (2000) Speech on China and permanent normal trade relations, 8 March. Available at: http://www.techlawjournal.com/cong106/pntr/20000308sp.htm

Dean, M. (1999) Governmentality: Power and Rule in Modern Society, London, Sage

Deibert, R., Palfrey, J., Rohozinski, R. and Zittrain, J. (eds) (2008) Access Denied: The Practice and Policy of Global Internet Filtering. MIT Press

Deibert, R., Palfrey, J., Rohozinski, R. and Zittrain, J. (eds) (2010) Access Controlled: The Shaping of Power, Rights and Rule in Cyberspace. MIT Press

DeNardis, L. (2009) Protocol Politics: The Globalization of Internet Governance, MIT Press

DeNardis, L. (2012) Hidden levers of Internet control. Information, Communication & Society, 15(5), 720-738

DeNardis, L. (2014) The Global War for Internet Governance, New Haven: Yale University Press

Durham Peters, J. (2015) The Marvelous Clouds: Toward a Philosophy of Elemental Media, University of Chicago Press

Flichy, P. (2007) The Internet Imaginaire, MIT Press

Flyverbom, M. (2011) The Power of Networks: Organizing the Global Politics of the Internet, Cheltenham: Edward Elgar

Flyverbom, M. (2015) Sunlight in cyberspace? On transparency as a form of ordering. European Journal of Social Theory, 18(2), 168-184

Flyverbom, M. (forthcoming) Corporate advocacy in the internet domain: shaping policy through data visualizations, in Political Affairs, edited by Garsten, C. and Soderbom, A., Cheltenham: Edward Elgar

Flyverbom, M. and Madsen, A. K. (2015) Sorting data out – unpacking big data value chains and algorithmic knowledge production, in Süssenguth, F. (ed.) Die Gesellschaft der Daten: Über die digitale Transformation der sozialen Ordnung. Bielefeld: Transcript Verlag, pp. 123-144

Flyverbom, M., Leonardi, P., Stohl, C. and Stohl, M. (2016) The Management of Visibilities in the Digital Age: Introduction to special issue, International Journal of Communication, 10

Foucault, M. (1988) Power/Knowledge: Selected Interviews and Other Writings, 1972-1977. Brighton: Harvester Press

Franklin, M. I. (2013) Digital Dilemmas: Power, Resistance, and the Internet. Oxford: Oxford University Press

Garsten, C. and Jacobsson, K. (2013) Post-political regulation: soft power and post-political visions in global governance. Critical Sociology 39(3): 421–7.

Gillespie, T. (2012) The relevance of algorithms, in Gillespie, T., Boczkowski, P. and Foot, K. (eds) Media Technologies: Essays on Communication, Materiality, and Society. Cambridge, MA: MIT Press

Goldsmith, J. and Wu, T. (2006) Who Controls the Internet? Illusions of a Borderless World. New York: Oxford University Press

Hafner, K. and Lyon, M. (1996) Where Wizards Stay Up Late: The Origins of the Internet. New York: Simon & Schuster

Hansen, H. K. and Flyverbom, M. (2015) The politics of transparency and the calibration of knowledge in the digital age. Organization, 22(6), 872-889

Hilbert, M. (2013) Big Data for development: From information- to knowledge societies. Available at: http://ssrn.com/abstract=2205145

Hofmann, J., Katzenbach, C. and Gollatz, K. (2016) Between coordination and regulation: Finding the governance in Internet governance. New Media & Society. doi:10.1177/1461444816639975

Hutchby, I. (2001) Texts, technologies and affordances. Sociology, 35(2), 441-456

Irani, L. (2015) Difference and Dependence Among Digital Workers: The Case of Amazon Mechanical Turk. South Atlantic Quarterly, 114(1)

Jørgensen, R.F. (ed.) (2006) Human Rights in the Global Information Society, Boston MA: MIT Press

Kitchin, R. (2014) The real-time city? Big data and smart urbanism, GeoJournal, 79(1): 1-14

Klein, H. (2002) ICANN and Internet Governance: Leveraging Technical Coordination to Realize Global Public Policy. The Information Society, 18(3), 193-207

Kleinwächter, W. (2000) ICANN between technical mandate and political challenges. Telecommunications Policy, 24, 553-563

Law J (2003) Ordering and Obduracy. Lancaster: Centre for Science Studies, Lancaster University. Available at: http://www.comp.lancs.ac.uk/sociology/papers/Law-Orderingand-Obduracy.pdf

Leonardi, P. M. (2012). Materiality, sociomateriality, and socio-technical systems: What do these terms mean? How are they different? Do we need them? In P. M. Leonardi, B. A. Nardi, and J. Kallinikos (Eds.), Materiality and organizing: Social interaction in a technological world (pp. 25–48). Oxford, UK: Oxford University Press.

Lessig, L. (1999) Code and Other Laws of Cyberspace, New York: Basic Books

Madsen, A., Flyverbom, M., Hilbert, M. and Ruppert, E. (2016) Big Data: Issues for an International Political Sociology of Data Practices, International Political Sociology, Vol. 10, No. 3, p. 275-296

Mansell, R. (2012) Imagining the Internet: Communication, Innovation, and Governance. Oxford: Oxford University Press

Mayer-Schönberger, V. and Cukier, K. (2013) Big Data: A Revolution That Will Transform How We Live, Work, and Think. Boston: Eamon Dolan/Houghton Mifflin Harcourt

Morozov, E. (2014a) To save everything, click here: the folly of technological solutionism, PublicAffairs

Morozov, E. (2014b) The rise of data and the death of politics. The Guardian, 20 July. Available at: https://www.theguardian.com/technology/2014/jul/20/rise-of-data-death-of-politics-evgeny-morozov-algorithmic-regulation

Mueller, M. (2002) Ruling the Root: Internet Governance and the Taming of Cyberspace. MIT Press

Mueller, M. (2010) Networks and States: The Global Politics of Internet Governance, MIT Press

Musiani, F. (2015) Practice, Plurality, Performativity, and Plumbing: Internet Governance Research Meets Science and Technology Studies. Science, Technology, & Human Values, 40(2), 272-286

Musiani, F., Cogburn, D.L., DeNardis, L. and Levinson, N.S. (eds) (2016) The Turn to Infrastructure in Internet Governance, Palgrave Macmillan

Pasquale, F. (2015) The Black Box Society: The Secret Algorithms That Control Money and Information, Harvard University Press

Roberts, S.T. (2016) Commercial Content Moderation: Digital Laborers' Dirty Work, in Noble and Tynes (eds) Intersectional Internet: Race, Sex, Class and Culture Online

Rubio, F. D. and Baert, P. (eds) (2012) The Politics of Knowledge. London: Routledge

Ruppert, E. (2011) Population Objects: Interpassive Subjects. Sociology, 45(2), 218-233

Shapiro, G. (2003) Archaeologies of Vision: Foucault and Nietzsche on Seeing and Saying. Chicago: University of Chicago Press

Singh, J.P. and Flyverbom, M. (2016) Representing participation in ICT4D projects, Telecommunications Policy, Vol. 40, No. 7, 2016, p. 692-703

Stohl, C., Stohl, M., and Leonardi, P. (2016) Managing Opacity: Information Visibility and the Paradox of Transparency in the Digital Age, International Journal of Communication, 10

Treem, J. and Leonardi, P. (2012) Social Media Use in Organizations: Exploring the Affordances of Visibility, Editability, Persistence, and Association. Communication Yearbook, 36, 143-189

Walters, W. (2012) Governmentality: Critical Encounters. London: Routledge

Weber, R. (2008) Transparency and the governance of the internet, Computer Law & Security Report, 24, pp. 342-348

Ziewitz, M. (2016). Governing Algorithms: Myth, Mess, and Methods. Science, Technology & Human Values 41(1): 3–16. doi:10.1177/0162243915608948.

Ziewitz, M. and Pentzold, C. (2014) In search of internet governance: Performing order in digitally networked environments. New Media & Society, 16(2), 306-322

Instability and internet design

This paper is part of 'Doing internet governance: practices, controversies, infrastructures, and institutions', a Special issue of the Internet Policy Review.

Where convergence was the orienting issue for communication policy-makers in the second half of the 20th century, in the 21st it is resilience in the face of instability, whether from human or natural causes, that has come to the fore (see, e.g., Manzano et al., 2013; Smith, 2014; Sterbenz et al., 2014; Tipper, 2014). With instability defined here as unpredictable but constant change in one’s environment and in the means with which one interacts with it, instability-based problems underlie many of today’s internet policy issues.

Among those who must be considered policy-makers for the internet are the computer scientists and electrical engineers responsible for the technical decision-making that brings the network into being and sustains it through constant transformations, expansions, and ever-increasing complexity. The instabilities faced by early internet designers - those who worked on the problem from when it was first funded by DARPA in 1969 through the close of 1979 - were myriad in number and form. They arose on both sides of this sociotechnical infrastructure, appearing technically in software and hardware, and socially in interpersonal and institutional relations. This was a difficult working situation not only because instabilities were pervasive and unpredictable, but also because the sources of instability and their manifestations were themselves constantly refreshed, unrelenting.

It is these policy-makers who are the focus of this article, which asks: how did technical decision-makers for what we now call the internet carry on their work in the face of unpredictable but pervasive and ongoing instability in what they were building and in what they had to build it with? It addresses this question by inductively mining the technical document series that served as both a medium for internet design and a record of that history (Abbate, 1999).

The analysis is based on a reading of the almost 750 documents in the Internet Requests for Comments (RFCs, www.ietf.org/rfc.html) series that were published during the first decade of the design process (1969-1979). Coping techniques developed during this early period remain important almost 50 years later, both because such a wide range of types and sources of instability appeared during that period and because the decisions, practices, and norms of that decade were path-determinative for internet decision-making going forward. The document series records a conversation among those responsible for the technical side of the sociotechnical network, but during the first 20 years of the process in particular the discussion included a great deal of attention to social, economic, cultural, legal, and governance issues. Thinking about the design process through the lens of what it took to conceptualise the network and bring it into being under conditions of such instability increases yet again one's appreciation of what was accomplished.

The focus here is on those types of instability that are particularly important for large-scale sociotechnical infrastructure rather than those that appear with any type of endeavour. In bridge-building, for example, it is not likely that the technologies and materials being used will change constantly over the course of the project, but this is a common problem for those working with large-scale sociotechnical infrastructure. Such instability remains a central problem for internet designers today; a draft book on possible future network architectures by David Clark (2016), who has been involved with internet design since the mid-1970s, devotes significant attention to problems of this kind. Other ubiquitous and inevitable decision-making problems, such as value differences among those involved and frustration over time lags between steps of development and implementation processes, were also experienced by internet designers but are beyond the scope of this piece.

Mechanisms developed to cope with instabilities are rarely discussed in scholarly literature. The closest work, although it addresses a qualitatively different type of problem, comes from those in science, technology, and society studies (STS) who examine ways in which scientists transform various types of messiness in the laboratory into the clean details reported as scientific findings (importantly, in the work by Latour & Woolgar [1986], and Star [1989]), and into public representation of those efforts (Bowker, 1994). The research agenda going forward should look in addition at what can be learned from psychology and anthropology.

Internet designer efforts to cope with instabilities began with determining just what constituted stability - in essence, designing the problem itself in the sense of learning to perceive it and frame it in ways that helped solve it. They went on to include figuring out the details (conceptual labour), getting along (social practices), and making it work (technical approaches).

Defining the problem as a technique for its cure

Discerning the parameters of instability is an epistemological problem requiring those involved in addressing it to figure out just how to know when the system is stable enough for normal operations to proceed. Internet designers have, from the beginning, required a consensus on the concepts fundamental to such problems. The techniques of particular importance in achieving consensus regarding just what distinguished stability from instability included drawing the line between stability and instability, distinguishing among different types of change for differential treatment within protocol (standard) setting processes, and resolving tensions between the global and the local, the universal and the specific.

Although the subject of what internet designers knew empirically about how the network was actually functioning is beyond the scope of this article, it is worth noting that comprehending and responding to the sources of instability was made even more problematic by a lack of information:

[E]ven those of us presumably engaged in ‘computer science’ have not found it necessary to confirm our hypotheses about network operation by experiment an [sic] to improve our theories on the basis of evidence (RFC 550, 1973, p. 2).

Indeed, design force was explicitly preferred over empirical knowledge:

If there are problems using this approach, please don’t ‘code around’ the problem or treat your [network interconnection node] as a ‘black box’ and extrapolate its characteristics from a series of experiments. Instead, send your comments and problems to . . . BBN, and we will fix the . . . system (RFC 209, 1971, p. 1).

Stability vs instability

For analytical and pragmatic purposes, instability as understood here - unpredictable but constant change in one’s environment, including the ways in which one interacts with and is affected by it whether directly or indirectly - can usefully be distinguished from other concepts commonly used in discussions of the internet. Instability is not the same thing as ignorance (lack of knowledge about something specific), uncertainty (lack of knowledge about the outcome of processes subject to contingency or opacity, or otherwise unknowable), or ambiguity (lack of clarity regarding either empirical realities or intentions). Indeed, instability differs from all of these other terms in an important way: ignorance, uncertainty, and ambiguity are about what is known by those doing the design work, the maker. Instability, on the other hand, is about unpredictable mutability in that which is being made and the tools and materials available to make it.

For designers of what we now call the internet, goals regarding network stability during the first decade of the design process were humble. They sought protocols that could last for at least a couple of years, fearing that if this level of stability could not be achieved it would be hard to convince others to join in the work (RFC 164, 1971). It was considered a real improvement when the network crashed only every day or two (RFC 153, 1971), a rate neither widely nor commonly experienced. According to RFC 369 (1972), no one who responded to a survey had reported a mean time between failures of more than two hours, and the average percentage of time with trouble-free operation was 35%.

Network designers defined stability operationally, not theoretically. The network is unstable when it isn’t functional or when one can’t count on it to be functional in future barring extraordinary events. Concepts long used in the security domain to think about the forces that can make a system unstable can be helpful in thinking about instabilities and the internet design process. Those involved with national security distinguish between system sensitivity and vulnerability. Sensitivity involves system perturbations that may be annoying and perhaps costly but are survivable; hacking into the Democratic National Committee information systems (Sanger & Schmitt, 2016) was a perturbation, but it has not brought the country down (as of the time of writing). Vulnerability entails those disturbances to a system that undermine its survival altogether; if malware such as Conficker (Kirk, 2015) were used to shut down the entire electrical network of the United States, it would generate a serious crisis for the country. Vulnerability has long been important to the history of telecommunications networks, being key to stimulating the growth of a non-British international telecommunications network early in the 20th century (Blanchard, 1986; Headrick, 1990), to the push for greater European computational capacity and intelligent networks in the 1980s (Nora & Minc, 1980; Tengelin, 1981), and to discussions of arms control (Braman, 1991) and cybersecurity (Braman, 2014). Factors that cause network instability are those that present possible vulnerabilities.

Technical change

The phenomenon of fundamental and persistent change was explicitly discussed by those involved in the early years of designing what we refer to today as the internet. The distinction between incremental and radical change was of particular importance because of the standard-setting context.

It can be difficult for those of us who have been online for decades and/or who were born "digital natives" to appreciate the extent of the intellectual and group decision-making efforts required to achieve agreement upon the most fundamental building blocks of the internet. Even the definition of a byte was once the subject of an RFC and there was concern that noncompliance with the definition by one user would threaten the stability of the entire network (RFC 176, 1971).

For the early internet, everything was subject to change, all the time: operating systems, distinctions among network layers, programming languages, software, hardware, network capacity, users, user practices, and so on. Everyone was urged to take into account the possibility that even command codes and distinctions among network layers could be redefined (RFC 292, 1972). Those who were wise and/or experienced expected operational failures when ideas were first tried under actual network conditions (RFC 72, 1970). Operating by consensus was highly valued, but it was also recognised that a consensus once achieved might still have to be thrown out in response to experience or the introduction of new ideas or protocols. Instituting agreed-upon changes was itself a source of difficulty because use of the network was constant and maintenance breaks would therefore be experienced as instability (RFC 381, 1972), a condition ultimately mitigated but not solved by the regular scheduling of shutdowns.

Looking back from 2016, early perceptions of the relative complexity and scale of the problem are poignant:

Software changes at either site can cause difficulties since the programs are written assuming that things won't change. Anyone who has ever had a program that works knows what system changes or intermittent glitches can do to foul things up. With two systems and a Network things are at least four times as difficult. (RFC 525, 1973, p. 5)

RFC 525 (1973) also repeats the point that changes by a user at a local site can cause difficulties for the network as a whole. RFC 528 (1973) makes the opposite point: changes in the network could impede or make it impossible for processes at local user sites to continue operating as they had (RFC 559, 1973; RFC 647, 1974); one author complained about the possibility of a situation in which servers behave erratically when they suddenly find their partner speaking a new language (RFC 722, 1976). Interdependencies among the technologies and systems involved in internet design were complex, often requiring delay in implementation of seemingly minor changes because each would require so many concomitant alterations of the protocols with which they interact that all are better left until they can be a part of a major overhaul package (RFC 103, 1971).

Incremental vs radical

A particularly difficult problem during the early years of the internet design process was determining when what was being proposed should be considered something new (a radical change) or a modification (incremental change) (RFC 435, 1973). The difference matters because systems respond differently to the two. Both types of change were rife during the internet design process, manifested in explicit discussions about whether something being discussed in an RFC should be treated as an official change or a modification if ultimately agreed upon and put into practice. As the question was put in RFC 72 (1970), what constitutes official change to a protocol, given that ideas about protocols go through many modifications before reaching solutions acceptable to all?

Translation of value differences into an objective framework was one means used to try to avoid tensions over whether something involved an incremental or a radical change. Describing the design of algorithms as a “touchy” subject, a “Gordian knot”, for example, one author proposing a graphics protocol notes, “There are five or ten different criteria for a ‘best’ algorithm, each criterion different in emphasis” (RFC 292, 1972, p. 4). The coping technique used in response to this problem in RFC 292 was simply to order the commands by level and number them. If several commands at the same level came into conflict, some attempt would be made to encode variations of meanings in terms of bit configurations, as the sketch below suggests.
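To make the flavour of this coping technique concrete, the following is a minimal sketch, purely illustrative rather than drawn from RFC 292 itself: the field widths, level and number values, and variant bits are all hypothetical. It orders commands by level and number within a single octet, and encodes variant meanings as bit configurations in a second octet:

```python
# Illustrative sketch only; RFC 292's actual command tables differ.
# Commands are ordered by level and numbered within each level;
# variant meanings are encoded as bit flags in a separate options octet.

def encode_command(level: int, number: int, variant_bits: int = 0) -> bytes:
    """Pack a hypothetical graphics-protocol command into two octets:
    one identifying the command (level << 4 | number), one for variants."""
    if not (0 <= level <= 15 and 0 <= number <= 15 and 0 <= variant_bits <= 255):
        raise ValueError("field out of range")
    return bytes([(level << 4) | number, variant_bits])

def decode_command(octets: bytes) -> tuple:
    """Recover (level, number, variant_bits) from the two-octet encoding."""
    ident, variants = octets
    return ident >> 4, ident & 0x0F, variants

# Example: a level-2 command, number 5, with one variant flag set.
assert decode_command(encode_command(2, 5, 0b0000_0001)) == (2, 5, 1)
```

The appeal of such a scheme for the designers' purposes is that disagreements about which variant is "best" are displaced into an enumerable, inspectable encoding rather than fought out as competing protocol revisions.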

Macro vs micro

There are two dimensions along which distinctions between macro-level and micro-level approaches were important in network design: the global vs the local, and general function vs specific function. These two can be aligned with each other, as with the local and specific treatment of a screen pixel trigger in an early graphics protocol that was determined to be so particular to a given configuration of technologies that it should not be included in internet protocols (RFC 553, 1973). The two dimensions of globality and generality, however, need not operate in tandem. In one example, sufficient universality on the network side was ensured by insisting that it could deal with all local variations encountered (e.g., RFC 184, 1971; RFC 529, 1973).

Global vs local

The tension between the universal and the local is fundamental to the nature of infrastructural systems. Indeed, as Star and Ruhleder (1996, p. 114) put it, infrastructure - however global - only comes into being in its local instances. The relationship between the two has long been important to telecommunications networks. In the 1880s, long-time AT&T president Theodore Vail and chief engineer J. J. Carty, who designed the company's monopoly-like and, for the era, ubiquitous network, encountered it:

'No one knows all the details now,' said Theodore Vail. 'Several days ago I was walking through a telephone exchange and I saw something new. I asked Mr. Carty to explain it. He is our chief engineer; but he did not understand it. We called the manager. He didn't know, and called his assistant. He didn't know, and called the local engineer, who was able to tell us what it was.' (Casson, 1910, p. 167)

Early internet designers phrased the problem this way: "Should a PROTOCOL such as TELNET provide the basis for extending a system to perform functions that go beyond the normal capacity of the local system" (RFC 139, 1971, p. 11). Discussion of ways in which a single entity might provide functions for everyone on the network that most other hosts would be unable to provide for themselves reads much like ruminations on a political system characterised by federalism (in the US) or subsidiarity (in Europe): “. . . to what extent should such extensions be thought of as Network-wide standards as opposed to purely local implementations” (Ibid.). The comparison with political thinking is not facile; a tension between geopolitical citizenship and what can be called “network citizenship” runs throughout the RFCs (Braman, 2013).

Drawing, or finding, the line between the universal and the local could be problematic. Decisions that incorporated that line included ensuring that special-purpose technology- or user-specific details could be sent over the network (RFC 184, 1971), treating transfer of incoming mail to a user's alternate mailbox as a feature rather than a protocol (RFC 539, 1973), and setting defaults in the universal position so that they serve as many users as possible (RFC 596, 1973). Interestingly, there was a consensus that users needed to be able to reconnect, but none on just where the reconnection capacity should be located (RFC 426, 1973).

General purpose vs specific purpose

The industrial machines for which laws and policies were historically crafted were either single-purpose or general-purpose. As this affected network policy a century ago, antitrust (competition) law was applied to the all-private US telecommunications network because, it was argued, being general purpose - serving more than one function by carrying both data and voice - constituted unfair competition. The resulting Kingsbury Commitment separated the two functions into two separate companies and networks that could interconnect but not be the same (Horwitz, 1989).

The internet, though, was experienced as a fresh start in network design. When the distinction between general and special purpose machines came up in the RFCs, it was with pride about having transformed what had previously been the function of a special purpose process into one available for general purpose use:

With such a backbone, many of the higher level protocols could be designed and implemented more quickly and less painfully -- conditions which would undoubtedly hasten their universal acceptance and availability (RFC 435, 1973, p. 5).

It was a basic design criterion - what can be considered, in essence, a constitutional principle for network design - that the network should serve not only all kinds of uses and all kinds of users, but also be technologically democratic. The network, that is, needed to be designed in such a way that it served not only those with the most sophisticated equipment and the fastest networks, but also those with the most simple equipment and the slowest networks (Braman, 2011). 2

With experience, internet designers came to appreciate that the more general purpose the technologies at one layer, the faster and easier it is to design and build higher level protocols upon them. Thus it was emphasised, for example, that TELNET needed to find all commands "interesting" and worthy of attention, whether or not they were of kinds or from sources previously known (RFC 529, 1973, p. 9). In turn, as higher level and more specialised protocols are built upon general purpose protocols, acceptance of (and commitment to) those protocols and to design of the network as general purpose are reinforced (RFC 435, 1973).

Standardisation was key. It was understood that a unified approach would be needed for data and file transfer protocols in order to meet existing and anticipated network needs (RFC 309, 1972). Designing for general purpose also introduced new criteria into decision-making. Programming languages and character sets were to be maximised for flexibility (RFC 435, 1973), for example, even though that meant including characters in the ASCII set that were not needed by the English-language users who then dominated the design process (RFC 318, 1972).

Figuring out the details

The importance of the conceptual labour involved in the internet design process cannot be overstated, from the need, discussed above, to define a byte through the most ambitious visions of globally distributed complex systems of diverse types serving a multitude of users and uses. Coping techniques in this category include the art of drawing distinctions itself as well as techniques for ambiguity reduction.

Conceptual distinctions

Early recognition that not all information received was meant to be a message spurred efforts to distinguish between bit flows intended as communications or information transfer and those that were, instead, errors, spurious information, manifestations of hardware or software idiosyncrasies, or failures (RFC 46, 1970; RFC 48, 1970). Other distinctions had to be drawn between data and control information, and among data pollution, synchronicity, and network "race" problems (when a process races, it won't stop) (RFC 82, 1970).

The need for distinctions could get very specific. A lack of buffer space, for example, presented a very different type of problem from malfunctioning user software (e.g., RFC 54, 1970; RFC 57, 1970). Distinctions were drawn in ways perhaps more diverse than expected: people experienced what we might call ghost communications when BBN, the consulting firm developing the technology used to link computers to the network during the early years, would test equipment before delivery by sending messages received by others as from or about nodes they didn't think existed (RFC 305, 1972). And there were programmes that were perceived as having gone "berserk" (RFC 553, 1973).

Identifying commonalities that can then become the subject of standardisation is a critically important type of conceptual labour. The use of numerous ad hoc techniques for transmitting data and files across ARPANET was considered unworkable for the most common situations and designers knew it would become more so (RFC 310, 1972). Thus it was considered important to identify common elements across processes for standardisation. One very basic example of this was discussion of command and response as something that should be treated with a standard discipline across protocols despite a history of having previously been discussed only within each specific use or process context (RFC 707, 1975). The use of a single access point is another example of the effort to identify common functions across processes that could be standardised for all purposes (RFC 552, 1973).

Drawing conceptual distinctions is a necessary first step for many of the other coping techniques. It is required, for example, before the technical labour of unbundling processes into separate functions for differential treatment - one of the technical tools discussed below - and is evident in other techniques as well.

Ambiguity reduction

Reducing ambiguity was highly valued as a means of coping with instability. One author even asserted this as a principle: "words which are so imprecise as to require quotation marks should never appear in protocol specifications" (RFC 513, 1973, p. 1). Quotation marks, of course, are used to identify a word as a neologism or a term being used with an idiosyncratic and/or novel meaning. This position resonates with the principle in US constitutional law that a law so vague that two or more reasonable adults cannot agree on its meaning is unconstitutional and void.

Concerns about ambiguity often arose in the course of discussions about what human users need in contrast to what was needed for the non-human, or daemon, users such as software, operating systems, and levels of the network, for which the network was also being designed (Braman, 2011). It was pointed out, for example, that the only time mail and file transfer protocols came into conflict was in naming conventions that needed to serve human as well as daemon users (RFC 221, 1971).

Getting along

The history of the internet design process as depicted in the internet RFCs provides evidence of the value of social capital, interpersonal relationships, and community in the face of instability. Friendliness, communication, living with ambiguity, humour, and reflexivity about the design process were all social tools for coping with instability visible in the RFCs from the first decade. Collectively, we can refer to such tools as "getting along".

Friendliness

In addition to the normative as well as discursive emphasis on community consensus-building discussed elsewhere (Braman, 2011), the concept of friendliness was used explicitly. Naming sites in ways that made mnemonic sense to humans was deemed usefully user-friendly, allowing humans to identify the sources of incoming messages (RFC 237, 1971). Friendliness was a criterion used to evaluate host sites, both by network administrators concerned also about reliability and response time (RFC 369, 1972) and by potential users who might have been discouraged by a network environment that seemed alien (RFC 707, 1975). Interpersonal relations - rapport among members of the community (RFC 33, 1970) - were appreciated as a coping technique. The effects of one’s actions on others were to be considered: "A system should not try to simulate a facility if the simulation has side effects" (RFC 520, 1973, p. 3).

The sociotechnical nature of the effort, interestingly, shines through even when discussing interpersonal relations:

The resulting mixture of ideas, discussions, disagreements, and resolutions has been highly refreshing and beneficial to all involved, and we regard the human interaction as a valuable by-product of the main effect. (RFC 33, 1970, p. 3)

At the interface between the network and local sites, internet designers learned through experience about the fundamental importance of the social side of a sociotechnical system. After discussing how network outsiders inevitably become insiders in the course of getting their systems online, one author noted,

[I]f personnel from the several Host[s] [sic] are barred from active participation in attaching to the network there will be natural (and understandable) grounds for resentment of the intrusion the network will appear to be; systems programmers also have territorial emotions, it may safely be assumed. (RFC 675, 1974)

The quality of relations between network designers and those at local sites mattered because if the network were perceived as an intruder, compliance with protocols was less likely (RFC 684, 1975).

Communication

Constant communication was another technique used in the attempt to minimise sources of instability. Rules were set for documentation genres and schedules (RFC 231, 1971). Using genre categories provided a means of announcing to users how relatively fixed, or not, a particular design decision or proposal was and when actual changes to protocols might be expected - both useful as means of dealing with instability. Today, the Internet Engineering Task Force (IETF), which hosts the RFCs online, still uses genre distinctions among such categories as Internet Standard, Draft Standard, and Proposed Standard, as well as genres for Best Practices and others, including documents classified as Informational, Historic, or Experimental. 3

Users were admonished to keep the RFCs and other documentation together because the RFCs would come faster and more regularly than would user guides. Still, it was acknowledged, it was impossible for users to keep up with changes in the technologies: "It is almost inevitable that the TUG [TIP User Guide] revisions follow actual system changes" (RFC 386, 1972, p. 1, emphasis added). Simplicity and clarity in communication were valued; one author’s advice was to write as if explaining something both to a secretary and to a corporation president - that is, both to the naive and to the sophisticated (RFC 569, 1973).

Living with ambiguity

Although eager to reduce ambiguity wherever possible, early network designers also understood that some amount of ambiguity due to error and other factors was inevitable (RFC 203, 1971). In those instances, the goal was to learn to distinguish among causal factors, and to develop responses to each that at least satisficed, even if that meant simply ignoring errors (RFC 746, 1978).

Humour

Humour is a technique used to cope with instability, as well as with ignorance, uncertainty, and ambiguity, in many environments. Within the internet design process, it served these functions while simultaneously supporting the development of a real sense of community. In RFC 468 (1973), for example, there is an amusing description of just how long it took to define something during the course of internet design. There was an ongoing tradition of humorous RFCs (beware of any published on 1 April, April Fools’ Day) (Limoncelli & Salus, 2007).

Reflexivity about the design process

The final social technique for adapting to instability evident early on was sustaining communal reflexivity about the nature of the design process itself. RFC 451 (1973) highlighted the importance of regularly questioning whether or not things should continue being done as they were being done. It was hoped that practices developed within the network design community would diffuse into those of programmers at the various sites linking into the network (RFC 684, 1975).

Making it work

Many of the coping techniques described above are social. Some are technical, coming into play as the design principles that are, in essence, policy for the internet design process (Braman, 2011). A final set of techniques is also technical, coming into use as specific design decisions intended to increase adaptive capacity by working with characteristics of the technologies themselves. Approaches to solving specific technical problems in the face of instability included designing in adaptive capacity, tight links between genre and machinic specifications, delay, and delay's reverse - making something happen.

Adaptive capacity

General purpose machines begin by being inherently flexible enough to adapt to many situations, but it is possible to go further in enhancing adaptive capacity. The general goal of such features was captured in RFC 524 (1973):

The picture being painted for the reader is one in which processes cooperate in various ways to flexibly move and manage Network mail. The author claims . . . that the picture will in future get yet more complicated, but that the proposal specified here can be conveniently enlarged to handle that picture too (p. 3).

The problem of adaptation came up initially with the question of what to do with software that had been designed before its possible use in a network environment had been considered. RFC 80 (1970) argued that resolving this incompatibility should get as much attention as developing new hardware by those seeking to expand the research capacity of network users. Another such mechanism was the decision to require the network to adapt to variability in input/output mechanisms rather than requiring programmes to conform with the network (RFC 138, 1971). Taking this position did not preclude establishing standards for software programmes that interact with the network and making clear that using those standards is desirable (RFC 166, 1971).

Redundancy has long been a technique for coping with network instability, beginning with the recuperation of lost messages irrespective of the source of error. When satellites became available for use in international communications, for example, the US Federal Communications Commission (FCC) required every network provider to continue to invest as much in undersea cables as it invested in satellites (Horwitz, 1989). The early RFCs discuss redundancy in areas as disparate as message transmission (RFC 65, 1970) and the siting of the network directory (RFC 625, 1974). Redundancy in databases was understood as an access issue (RFC 677, 1975).

There are other ways adaptation was technically designed into the early network as a means of coping with instability. RFC 435 (1973) looks at how to determine whether or not a server has an echoing mode during a period in which many hosts could either echo or not echo, but could not switch between the two. Requiring fixed socket offsets until a suitable network-wide solution could be found to the problem of identity control at connection points between computers and the ARPANET (RFC 189, 1971) is another example.

There were situations for which reliance on ad hoc problem solving was the preferred approach (RFC 247, 1971). At their best, ad hoc environments could be used for experimentation, as was done with the mail facility (RFC 724, 1977). A "level 0" protocol was a more formal attempt to define an area in which experimentation could take place; successes there could ultimately be embedded in later protocols for the network itself (RFC 549, 1973). Maintaining a “wild west” zone for experimentation as a policy tool is familiar to those who know the history of radio regulation in the United States, where amateur (“ham”) radio operators have long been given spectrum space at the margins of what was usable. Regulators understood that these typically idiosyncratic individuals were persistent and imaginative inventors interested in pressing the limits of what they could do - and that their tinkering had yielded technical solutions that then made it possible to open up those wavelengths to commercial use over and over again.

Reliance on probabilities was another long-familiar technique for situations involving instability as well as uncertainty. RFC 60 (1970) describes a technique apparently used by many larger facilities connected to the network to gain flexibility in managing traffic and processing loads: they would falsely report their buffer space, relying on the probability that doing so would not get them into logistical trouble. The use of fake errors was recommended as a means of freeing up buffer space, a measure considered a last resort but powerful enough to control any emergency.

Genre specifications

Working with the genre requirements described above offered another set of opportunities for coping with instability. The RFC process was begun as an intentionally informal conversation but, over time, became much more formal regarding gatekeeping, genre classification, and genre requirements specific to stages of decision-making. Concomitantly, the tone and writing style of the documents became more formal as well. It is because of these two changes to the RFC publishing process that discussions of social issues within the design conversation declined so significantly after the first couple of decades.

For any RFC dealing with a protocol, what had not been articulated simply didn't exist (RFC 569, 1973). This put a lot of weight on the need both to provide documentation and to keep a technology operating in exactly the manner described in that documentation (RFC 209, 1971). This was not a naive position; in discussion of the interface between the network and host computers, it was admitted that specifications were neither complete nor correct, but the advice was to hold the vendor responsible for the technical characteristics as described. In a related vein, RFC authors were advised not to describe something still under experimentation in such a manner that others would believe the technology was fixed (RFC 549, 1973).

This position does, however, create a possible golem problem, in reference to the medieval story about a human-type figure created out of clay to do work for humans, always resulting in disaster because instructions were never complete or specific enough. From this perspective, the expectation of an unambiguous, completely specified mapping between commands and responses may be a desirable ideal (RFC 722, 1976), but could not realistically be achieved.

Putting things off

The network design process was, by definition, ongoing, but this fundamental fact itself created instabilities: "Thus each new suggestion for change could conceivably retard program development in terms of months" (RFC 72, 1970, p. 2).

Because interdependencies among protocols, and the complexity of individual protocols, made it difficult to accomplish otherwise incremental changes without requiring so much perturbation of the protocols that wholesale revision would be needed (RFC 167, 1971), it was often necessary to postpone improvements that solved current problems until an overhaul took place. This happened with accounting and access controls (Ibid.) and with basic bit stream and byte stream decisions for a basic protocol (RFC 176, 1971). As the network matured, it became easier to deal with many of these issues (RFC 501, 1973).

There were a number of occasions when the approach to a problem was to start by distinguishing steps of a process that had previously been treated as a single step - unbundling types of information processing, that is, in the way that vendors or regulators sometimes choose or are required to do with service or product bundles. It was realised, for example, that treating "hide your input" and “no echo” as two separate matters usefully permitted differential treatment of each (RFC 435, 1973). Similarly, the official FTP process was broken down into separate commands for data transfer and for file transfer, with the option of further distinguishing subsets within each (RFC 486, 1973). If we think of unbundling the steps of a single process as one way of making conceptual distinctions that provide support for continuing to work in the face of instability as a vertical matter, we might call it horizontal unbundling when distinctions among types of processing involved in a single step are drawn. By 1973 (RFC 520, 1973) it had already been found that having three digits for codes to distinguish among types of replies was insufficient, so a move to five digits was proposed as a short-term fix.

Demonstration

There were some instances in which designers foresaw a potential problem but could not convince others in the community that it was likely and serious. One technique used in such instances was to actualise the potential - to make the problem happen in order to demonstrate it so vividly that the community would appreciate the nature and seriousness of the concern and turn to addressing the issue. In 1970, for example, one designer - acting on an insight he had had about a potential type of problem in 1967 - deliberately flooded the network in order to convince his colleagues of the lock-up that results when errors in message flow occur (RFC 635, 1974). This technique is familiar to those who know the literature on the diffusion of innovations. In Rogers’ (2003) synthesis of what has been learned from thousands of studies of the diffusion of many different types of technologies in a wide range of cultural settings around the world, trialability and observability are among the five factors that significantly affect the willingness of individuals and groups to take up the use of new technologies and practices.

Conclusions

In today's digital, social, and natural worlds, instability is a concern of increasing importance to all of us as individuals and as communities. Those responsible for designing, building, and operating the infrastructures upon which all else depends - during times of instability just as during times of calm and slow change - confront particular difficulties of enormous importance that may be technical in nature but are of social, political, economic, and cultural importance as well. Discussions in the Requests for Comments (RFCs) technical document series during the first decade of work on what we now call the internet (1969-1979) about how designers coped with instability provide insights into coping techniques of potential use in the design, building, and operation of any large-scale sociotechnical infrastructure. The toolkit developed by network designers engaged with all facets of what makes a particular system sociotechnical rather than "just" social or technical: negotiating the nature of the issue, undertaking the conceptual labour involved in figuring out the details, learning how to get along with all of those involved, and incorporating adaptive techniques into the infrastructure itself.

Many of those involved with "ethics in engineering", including the relatively recent subset of that community that refers to itself as studying “values in design”, start from theory and try to induce new behaviours among computer scientists and engineers in the course of design practice, with the hope of stimulating innovations in content, design, or architecture. Here, instead, the approach has been to learn from the participants in the design process themselves - highly successful technical decision-makers, de facto policy-makers for the internet - about how to cope with instabilities in a manner that allows productive work to go forward.

References

Abbate, J. (1999). Inventing the Internet. Cambridge, MA: MIT Press.

Below, A. (2012). The genre of guides as a means of structuring technology and community. Unpublished MA Thesis, University of Wisconsin-Milwaukee.

Blanchard, M. A. (1986). Exporting the First Amendment. New York: Longman.

Bowker, G. C. (1994). Science on the run: Information management and industrial geophysics at Schlumberger, 1920-1940. Cambridge, MA: MIT Press.

Braman, S. (2014). Cyber security ethics at the boundaries: System maintenance and the Tallinn Manual. In L. Glorioso & A.-M. Osula (Eds.), Proceedings: 1st Workshop on Ethics of Cyber Conflict, pp. 49-58. Tallinn, Estonia: NATO Cooperative Cyber Defence Centre of Excellence.

Braman, S. (2013). The geopolitical vs. the network political: Governance in early Internet design, International Journal of Media & Cultural Politics, 9(3), 277-296.

Braman, S. (2011). The framing years: Policy fundamentals in the Internet design process, 1969-1979, The Information Society, 27(5), 295-310.

Braman, S. (1990). Information policy and security. Presented to the 2nd Europe Speaks to Europe Conference, Moscow, USSR.

Casson, H. N. (1910). The history of the telephone. Chicago, IL: A. C. McClurg & Co.

Clark, D. D. (2016). Designs for an internet. Available at http://groups.csail.mit.edu/ana/People/DDC/archbook.

Headrick, D. R. (1990). The invisible weapon: Telecommunications and international relations, 1851-1945. New York/Oxford: Oxford University Press.

Horwitz, R. B. (1989). The irony of regulatory reform: The deregulation of American telecommunications. New York/Oxford: Oxford University Press.

Kirk, J. (2015, Aug. 3). Remember Conficker? It’s still around, Computerworld, http://www.computerworld.com/article/2956312/malware-vulnerabilities/remember-conficker-its-still-around.html, accessed Sept. 6, 2016.

Latour, B. & Woolgar, S. (2013). Laboratory life: The construction of scientific facts, 2d ed. Princeton, NJ: Princeton University Press.

Limoncelli, T. A. & Salus, P. H. (Eds.) (2007). The complete April Fools’ Day RFCs. Peer-to-Peer Communications.

Manzano, M., Calle, E., Torres-Padrosa, V., Segovia, J., & Harle, D. (2013). Endurance: A new robustness measure for complex networks under multiple failure scenarios, Computer Networks, 57, 3641-3653.

Nora, S. & Minc, A. (1980). The computerization of society. Cambridge, MA: MIT Press.

Rogers, E. M. (2003). Diffusion of innovations, 5th ed. New York: Free Press.

Sanger, D. E. & Schmitt, E. (2016, July 26). Spy agency consensus grows that Russia hacked D.N.C., The New York Times, http://www.nytimes.com/2016/07/27/us/politics/spy-agency-consensus-grows-that-russia-hacked-dnc.html, accessed Sept. 6, 2016.

Smith, P. (2014). Redundancy, diversity, and connectivity to achieve multilevel network resilience, survivability, and disruption tolerance, Telecommunications Systems, 56, 17-31.

Star, S. L. (1989). Regions of the mind: Brain research and the quest for scientific certainty. Stanford, CA: Stanford University Press.

Star, S. L. & Ruhleder, K. (1996). Steps toward an ecology of infrastructure: Design and access for large information spaces, Information Systems Research, 7(1), 111-134.

Sterbenz, J.P. G., Hutchison, D., Çetinkaya, E.K., Jabhar, A., Rohrer, J.P., Schöller, M., & Tipper, D. (2014). Resilient network design: Challenges and future directions, Telecommunications Systems, 56, 5-16.

Tengelin, V. (1981). The vulnerability of the computerised society. In H. P. Gassmann (Ed.), Information, computer and communication policies for the 80s, pp. 205-213. Amsterdam, The Netherlands: North-Holland Publishing Co.

RFCs Cited

RFC 33, New Host-Host Protocol, S. D. Crocker, February 1970.

RFC 46, ARPA Network Protocol Notes, E. Meyer, April 1970.

RFC 48, Possible Protocol Plateau, J. Postel, S. D. Crocker, April 1970.

RFC 54, Official Protocol Proffering, S.D. Crocker, J. Postel, J. Newkirk, M. Kraley, June 1970.

RFC 57, Thoughts and Reflections on NWG/RFC 54, M. Kraley, J. Newkirk, June 1970.

RFC 60, Simplified NCP Protocol, R. B. Kalin, July 1970.

RFC 65, Comments on Host/Host Protocol Document #1, D.C. Walden, August 1970.

RFC 72, Proposed Moratorium on Changes to Network Protocol, R. D. Bressler, September 1970.

RFC 80, Protocols and Data Formats, E. Harslem, J. Heafner, December 1970.

RFC 82, Network Meeting Notes, E. Meyer, December 1970.

RFC 103, Implementation of Interrupt Keys, R. B. Kalin, February 1971.

RFC 138, Status Report on Proposed Data Reconfiguration Service, R.H. Anderson, V.G. Cerf, E. Harslem, J.F. Heafner, J. Madden, R.M. Metcalfe, A. Shoshani, J.E. White, D.C.M. Wood, April 1971.

RFC 139, Discussion of Telnet Protocol, T. C. O'Sullivan, May 1971.

RFC 153, SRI ARC-NIC Status, J.T. Melvin, R.W. Watson, May 1971.

RFC 164, Minutes of Network Working Group Meeting, 5/16 through 5/19/71, J. F. Heafner, May 1971.

RFC 166, Data Reconfiguration Service: An Implementation Specification, R.H. Anderson, V.G. Cerf, E. Harslem, J.F. Heafner, J. Madden, R.M. Metcalfe, A. Shoshani, J.E. White, D.C.M. Wood, May 1971.

RFC 167, Socket Conventions Reconsidered, A.K. Bhushan, R. M. Metcalfe, J. M. Winett, May 1971.

RFC 176, Comments on 'Byte Size for Connections', A.K. Bhushan, R. Kanodia, R. M. Metcalfe, J. Postel, June 1971.

RFC 184, Proposed Graphic Display Modes, K.C. Kelley, July 1971.

RFC 189, Interim NETRJS Specifications, R.T. Braden, July 1971.

RFC 203, Achieving Reliable Communication, R.B. Kalin, August 1971.

RFC 209, Host/IMP Interface Documentation, B. Cosell, August 1971.

RFC 221, Mail Box Protocol: Version 2, R. W. Watson, 1971.

RFC 231, Service center standards for remote usage: A user's view, J.F. Heafner, E. Harslem, September 1971.

RFC 237, NIC View of Standard Host Names, R.W. Watson, October 1971.

RFC 247, Proffered Set of Standard Host Names, P.M. Karp, October 1971.

RFC 292, Graphics Protocol: Level 0 Only, J. C. Michener, I.W. Cotton, K.C. Kelley, D.E. Liddle, E. Meyer, January 1972.

RFC 305, Unknown Host Numbers, R. Alter, February 1972.

RFC 309, Data and File Transfer Workshop Announcement, A. K. Bhushan, March 1972.

RFC 310, Another Look at Data and File Transfer Protocols, A.K. Bhushan, April 1972.

RFC 318, Telnet Protocols, J. Postel, April 1972.

RFC 369, Evaluation of ARPANET Services January-March, 1972, J.R. Pickens, July 1972.

RFC 381, Three Aids to Improved Network Operation, J.M. McQuillan, July 1972.

RFC 386, Letter to TIP Users-2, B. Cosell, D.C. Walden, August 1972.

RFC 426, Reconnection Protocol, R. Thomas, January 1973.

RFC 435, Telnet Issues, B. Cosell, D.C. Walden, January 1973.

RFC 451, Tentative Proposal for a Unified User Level Protocol, M. A. Padlipsky, February 1973.

RFC 468, FTP Data Compression, R.T. Braden, March 1973.

RFC 486, Data Transfer Revisited, R.D. Bressler, March 1973.

RFC 501, Un-muddling 'Free File Transfer', K.T. Pogran, May 1973.

RFC 513, Comments on the New Telnet Specifications, W. Hathaway, May 1973.

RFC 520, Memo to FTP Group: Proposal for File Access Protocol, J.D. Day, June 1973.

RFC 524, Proposed mail protocol, J.E. White, June 1973.

RFC 525, MIT-MATHLAB meets UCSB-OLS -- an example of resource sharing. W. Parrish, J.R. Pickens, June 1973.

RFC 528, Software checksumming in the IMP and network reliability, J.M. McQuillan, June 1973.

RFC 529, Note on Protocol Synch Sequences, A.M. McKenzie, R. Thomas, R.S. Tomlinson, K.T. Pogran, June 1973.

RFC 539, Thoughts on the Mail Protocol Proposed in RFC 524, D. Crocker, J. Postel, July 1973.

RFC 549, Minutes of Network Graphics Group Meeting, 15-17 July 1973, J.C. Michener, July 1973.

RFC 552, Single Access to Standard Protocols, A.D. Owen, July 1973.

RFC 553, Draft Design for a Text/Graphics Protocol, C.H. Irby, K. Victor, July 1973.

RFC 559, Comments on the New Telnet Protocol and its Implementation, A.K. Bhushan, August 1973.

RFC 569, NETED: A Common Editor for the ARPA Network, M.A. Padlipsky, October 1973.

RFC 596, Second thoughts on Telnet Go-Ahead, E.A. Taft, December 1973.

RFC 625, On-line hostnames service, M.D. Kudlick, E.J. Feinler, March 1974.

RFC 635, Assessment of ARPANET protocols, V. Cerf, April 1974.

RFC 647, Proposed protocol for connecting host computers to ARPA-like networks via front end processors, M.A. Padlipsky, November 1974.

RFC 675, _____. 1974.

RFC 677, Maintenance of duplicate databases, P.R. Johnson, R. Thomas, January 1975.

RFC 684, Commentary on procedure calling as a network protocol, R. Schantz, April 1975.

RFC 707, High-level framework for network-based resource sharing, J.E. White, December 1975.

RFC 722, Thoughts on Interactions in Distributed Services, J. Haverty, September 1976.

RFC 724, Proposed official standard for the format of ARPA Network messages, D. Crocker, K.T. Pogran, J. Vittal, D.A. Henderson, May 1977.

RFC 746, SUPDUP graphics extension, R. Stallman, March 1978.

Footnotes

1. Of course the extent to which this was true shouldn’t be overstated. Jon Postel famously announced himself as the "naming czar" while he was still a graduate student.

2. In contrast to technological democracy, network neutrality involves regulatory treatment of vendor efforts to differentiate service provision speed to and access by users through pricing mechanisms sometimes, though not always, driven by relations between service and content providers that are also subject to competition (antitrust) law.

3. Other genre distinctions have been found useful by those conducting research on the RFCs. Below (2012), for example, analysed all of the documents identifiable as "guides" by those in the field of technical communication for the ways in which they were used for community-building in a valuable case study for that community of scholars and practitioners.

The invisible politics of Bitcoin: governance crisis of a decentralised infrastructure

This paper is part of 'Doing internet governance: practices, controversies, infrastructures, and institutions', a Special issue of the Internet Policy Review.

Introduction

Since its inception in 2008, the grand ambition of the Bitcoin project has been to support direct monetary transactions among a network of peers, by creating a decentralised payment system that does not rely on any intermediaries. Its goal is to eliminate the need for trusted third parties, particularly central banks and governmental institutions, which are seen as prone to corruption.

Recently, the community of developers, investors and users of Bitcoin has experienced what can be regarded as an important governance crisis – a situation whereby diverging interests have run the risk of putting the whole project in jeopardy. This governance crisis is revealing of the limitations of excessive reliance on technological tools to solve issues of social coordination and economic exchange. Taking the Bitcoin project as a case study, we argue that online peer-to-peer communities involve inherently political dimensions, which cannot be dealt with purely on the basis of protocols and algorithms.

The first part of this paper exposes the specificities of Bitcoin, presents its underlying political economy, and traces the short history of the project from its inception to the crisis. The second part analyses the governance structure of Bitcoin, which can be understood as a two-layered construct: an infrastructure seeking to govern user behaviour via a decentralised, peer-to-peer network on the one hand, and an open source community of developers designing and architecting this infrastructure on the other. We explore the challenges faced at both levels, the solutions adopted to ensure the sustainability of the system, and the unacknowledged power structures they involve. In a third part, we expose the invisible politics of Bitcoin, with regard to both the implicit assumptions embedded in the technology and the highly centralised and largely undemocratic development process it relies on. We conclude that the overall system displays a highly technocratic power structure, insofar as it is built on automated technical rules designed by a minority of experts with only limited accountability for their decisions. Finally, drawing on the wider framework of internet governance research and practice, we argue that some form of social institution may be needed to ensure accountability and to preserve the legitimacy of the system as a whole – rather than relying on technology alone.

I. Bitcoin in theory and practice

A. The Bitcoin project: political economy of a trustless peer-to-peer network

Historically, money has taken many different forms. Far from being an exclusively economic tool, money is closely associated with social and political systems as a whole – which Nigel Dodd refers to as the social life of money (Dodd, 2014). Indeed, money has often been presented as an instrument which can be leveraged to shape society in certain ways and as Dodd has shown, this includes powerful utopian dimensions: for sociologist Georg Simmel for instance, an ideal social order hinged upon the definition of a “perfect money” (Simmel, 2004). In the wake of economic crises in particular, it is not uncommon to witness the emergence of alternative money or exchange frameworks aimed at establishing different social relations between individuals – more egalitarian, or less prone to accumulation and speculation (North, 2007). On the other hand however, ideals of self-regulating markets have often sought to detach money from existing social relations, resulting in a progressive “disembedding” of commercial interactions from their social and cultural context (Polanyi, 2001 [1944]).

Since it first appeared in 2009, the decentralised cryptocurrency Bitcoin has raised high hopes for its potential to reshuffle not only the institutions of banking and finance, but also more generally power relations within society. The potential consequences of this innovation, however, are profoundly ambivalent. On the one hand, Bitcoin can be presented as a neoliberal project insofar as it radicalises Friedrich Hayek’s and Milton Friedman’s ambition to end the monopoly of nation-states (via their central banks) on the production and distribution of money (Hayek, 1990), or as a libertarian dream which aims at reducing the control of governments on the economy (De Filippi, 2014). On the other hand, it has also been framed as a solution for greater social justice, by undermining oligopolistic and anti-democratic arrangements between big capital and governments, which are seen to favour economic crises and inequalities. Both of these claims hinge on the fact that as a socio-technical assemblage, Bitcoin seems to provide a solution for “governing without governments”, which appeals to liberal sentiments both from the left and from the right. Its implicit political project can therefore be understood as effectively getting rid of politics by relying on technology.

More generally, distributed networks have long been associated with a redistribution of power relations, due to the elimination of single points of control. This was one of the main interpretations of the shift in telecommunications routing methods from circuit switching to packet switching in the 1960s and the later deployment of the internet protocol suite (TCP/IP) from the 1970s onwards (Abbate, 1999), as well as the adoption of the end-to-end principle – which proved to be a compelling but also partly misleading metaphor (Gillespie, 2006). The idea was that information could flow through multiple and unfiltered channels, thus circumventing any attempts at controlling or censoring it, and providing a basis for more egalitarian social relations as well as stronger privacy. In practice however, it became clear that network design is much more complex and that additional software, protocols and hardware, at various layers of the network, could (and did) provide alternate forms of re-centralisation and control and that, moreover, the internet was not structurally immune to other modes of intervention such as law and regulation (Benkler, 2016).

However, there have been numerous attempts at re-decentralising the network, most of which have adopted peer-to-peer architectures as opposed to client-server alternatives, with the underlying assumption that such technical solutions provide both individual freedom and “a promise of equality” (Agre, 2003) 1. Other technologies have also been adopted in order to add features relating to user privacy for instance, which involve alternative routing methods (Dingledine, Mathewson, & Syverson, 2004) and cryptography (which predates computing, see e.g. Kahn 1996). In particular, such ideas were strongly advocated starting from the late 1980s by an informal collective of hackers, mathematicians, computer scientists and activists known as cypherpunks, who saw strong cryptography as a means of achieving greater privacy and security of interpersonal communications, especially in the face of perceived excesses and abuses on the part of governmental authorities. 2 Indeed, all of these solutions pursue implicit or explicit goals, in terms of their social or political consequences, which can be summed up as enabling self-organised direct interactions between individuals, without relying on a third party for coordination, and also preventing any form of surveillance or coercion.

Yet cryptography is not only useful to protect the privacy of communications; it can also serve as a means to promote further decentralisation and disintermediation when combined with a peer-to-peer architecture. In 2008, a pseudonymous entity named Satoshi Nakamoto released a white paper on the Cryptography Mailing list (metzdowd.com) describing the idea of a decentralised payment system relying on a distributed ledger with cryptographic primitives (Nakamoto, 2008a). One year later, a first implementation of the ideas defined in the white paper was released and the Bitcoin network was born. It introduces its own native currency (or unit of account) with a fixed supply – and whose issuance is regulated, only and exclusively, by technological means. The Bitcoin network can therefore be used to replace at least some of the key functions played by central banks and other financial institutions in modern societies: the issuance of money on the one hand, and, on the other hand, the fiduciary functions of banks and other centralised clearing houses.

Supported by many self-proclaimed libertarians, Bitcoin is often presented as an alternative monetary system, capable of bypassing most of the state-backed financial institutions – with all of their shortcomings and vested interests, which have become so obvious in the light of the financial crisis of 2008. Indeed, as opposed to traditional centralised economies, Bitcoin’s monetary supply is not controlled by any central authority, but is rather defined (in advance) by the Bitcoin protocol – which precisely stipulates the total amount of bitcoins that will ever come into being (21 million) and the rate at which they will be issued over time. A certain number of bitcoins are generated, on average, every ten minutes and assigned as a reward to those who lend their computational resources to the Bitcoin network in order to both operate and secure the network. In this sense, Bitcoin can be said to mimic the characteristics of gold. Just as gold cannot be created out of thin air, but rather needs to be extracted from the earth (through mining), Bitcoin also requires a particular kind of computational effort – also known as mining – in order for the network protocol to generate new bitcoins (and just as gold becomes progressively harder to find as the stock is depleted, so the number of bitcoins generated through mining decreases over time).
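How the 21 million cap follows from this issuance schedule can be made concrete. The following minimal sketch – our own illustration using the protocol's published constants (an initial reward of 50 bitcoins per block, halved every 210,000 blocks), not code from the Bitcoin client – computes the asymptotic supply:

```python
# Sketch of Bitcoin's issuance schedule (illustrative, not client code).
# Rewards are counted in satoshis (10^-8 BTC) because the protocol
# truncates fractional rewards when halving.

SATOSHI_PER_BTC = 100_000_000
HALVING_INTERVAL = 210_000  # blocks between halvings, roughly four years

def asymptotic_supply_btc() -> float:
    reward = 50 * SATOSHI_PER_BTC  # initial block reward, in satoshis
    supply = 0
    while reward > 0:
        supply += reward * HALVING_INTERVAL
        reward //= 2  # integer halving mirrors the protocol's truncation
    return supply / SATOSHI_PER_BTC

print(f"{asymptotic_supply_btc():,.2f} BTC")  # just under 21,000,000
```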

The establishment and maintenance of a currency has traditionally been regarded as a key prerogative of the State, as well as a central institution of democratic societies. Controlling the money supply, by different means, is one of the main instruments that can be leveraged in order to shape the economy, both domestically and in the context of international trade. Yet, regardless of whether one believes that the State has the right (or duty) to intervene in order to regulate the market economy, monetary policies have sometimes been instrumentalised by certain governments using inflation as a means to finance government spending (e.g. in the case of the Argentine great depression of 1998-2002). Perhaps most critical is the fact that, with the introduction of fractional-reserve banking, commercial banks acquired the ability to (temporarily) increase the money supply by giving out loans which are not backed up by actual funds (Ferguson, 2008). 3 The fractional-reserve banking system (and the tendency of commercial banks to create money at unsustainable rates) is believed to be one of the main factors leading to the global financial crisis of 2008 – which has brought the issue of private money issuance back into the public debate (Quinn, 2009).

Although there have been many attempts at establishing alternative currencies, and cryptocurrencies have also been debated for a long time, the creation of the Bitcoin network was in large part a response to the social and cultural contingencies that emerged during the global financial crisis of 2008. As explicitly stated by Satoshi Nakamoto in various blog posts and forums, Bitcoin aimed at eradicating corruption from the realm of currency issuance and exchange. Given that governments and central banks could no longer be trusted to secure the value of fiat currency and other financial instruments, Bitcoin was designed to operate as a trustless technology, which relies only on maths and cryptography. 4 The paradox is that this trustless technology is precisely what is needed for building a new form of “distributed trust” (Mallard, Méadel, & Musiani, 2014).

Trust management is a classic issue in peer-to-peer computing, and can be understood as the confidence a peer has that it will be treated fairly and securely when interacting with another peer – for example, during transactions or file downloads – especially through the prevention of malicious operations and collusion schemes (Zhu, Jajodia, & Kankanhalli, 2006). To address this issue, Bitcoin has brought two fundamental innovations, which, together, provide for the self-governability and self-sustainability of the network. The first innovation is the blockchain, which relies on public-private key encryption and hashing algorithms to create a decentralised, append-only and tamper-proof database. The second innovation is Proof-of-Work, a decentralised consensus protocol using cryptography and economic incentives to encourage people to operate and simultaneously secure the network. Accordingly, the Bitcoin protocol represents an elegant, but purely technical, solution to the issue of social trust – which is normally resolved by relying on trusted authorities and centralised intermediaries. With the blockchain, to the extent that trust is delegated to the technology, individuals who do not know (and therefore do not necessarily trust) each other can now transact with one another on a peer-to-peer basis, without the need for any intermediary.
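To give a sense of how Proof-of-Work couples security to computational cost, the following sketch – a simplification we provide for illustration, not Bitcoin's actual mining rules, which hash a structured block header twice against a finer-grained target – searches for a nonce whose hash falls below a difficulty threshold. Finding such a nonce is expensive, while verifying it takes a single hash, which is what allows untrusted peers to check each other's work cheaply:

```python
import hashlib

# Illustrative Proof-of-Work puzzle (simplified from Bitcoin's rules):
# find a nonce such that SHA256(data + nonce) falls below a target.

def proof_of_work(block_data: str, difficulty_bits: int) -> int:
    target = 2 ** (256 - difficulty_bits)  # smaller target = harder puzzle
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if int(digest, 16) < target:
            return nonce  # proof found
        nonce += 1

nonce = proof_of_work("example block data", difficulty_bits=16)
print(f"valid nonce: {nonce}")  # ~65,000 attempts on average at 16 bits
```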

Hence Bitcoin uses cryptography not as a way to preserve the secrecy of transactions, but rather in order to create a trustless infrastructure for financial transactions. In this context, cryptography is merely used as a discrete notational system (DuPont, 2014) designed to promote the autonomy of the system, which can operate independently of any centralised third party 5. It relies on simple cryptographic primitives or building blocks (SHA256 hash functions and public-key cryptography) to resolve, in a decentralised manner, the double-spending problem 6 found in many virtual currencies. The scheme used by Bitcoin (Proof-of-Work) relies on a peer-to-peer network of validators (or miners) who commit their computational resources (hashing power) to the network in order to record all valid transactions into a decentralised public ledger (a.k.a. the blockchain) in a chronological order. All valid transactions are recorded into a block, which incorporates a reference (or hash) to the previous block – so that any attempt at tampering with the order or the content of any past transaction will always and necessarily result in an apparent discontinuity in the chain of blocks.
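The tamper-evidence described here can be illustrated with a minimal sketch. This is our own toy model rather than Bitcoin's actual block format (which uses Merkle trees and binary serialisation): each block stores the hash of its predecessor, so altering any past transaction invalidates every subsequent link in the chain:

```python
import hashlib
import json

# Toy hash-chained ledger (illustrative; not Bitcoin's real block format).

def block_hash(block: dict) -> str:
    serialised = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(serialised).hexdigest()

def make_block(transactions: list, prev_hash: str) -> dict:
    return {"transactions": transactions, "prev_hash": prev_hash}

genesis = make_block(["coinbase -> alice: 50"], "0" * 64)
block_1 = make_block(["alice -> bob: 10"], block_hash(genesis))
block_2 = make_block(["bob -> carol: 5"], block_hash(block_1))

# Tampering with a past transaction breaks the recorded reference:
genesis["transactions"][0] = "coinbase -> mallory: 50"
print(block_1["prev_hash"] == block_hash(genesis))  # False - discontinuity
```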

By combining a variety of existing technologies with basic cryptographic primitives, Bitcoin has created a system that is provably secure, practically incorruptible and probabilistically unattackable 7 – all this, without resorting to any centralised authority in charge of policing the network. Bitcoin relies on a fully open and decentralised network, designed in such a way that anyone is free to use the network and contribute to it, without the need for any kind of previous identification. Yet, contrary to popular belief, Bitcoin is neither anonymous nor privacy-friendly: anyone with a copy of the blockchain can see the history of all Bitcoin transactions. Decentralised verification requires, indeed, that every transaction be made available for validation to all nodes in the network and that every transaction ever done on the Bitcoin network can be traced back to its origin. 8

In sum, Bitcoin embodies in its very protocols a profoundly market-driven approach to social coordination, premised on strong assumptions of rational choice (Olson, 1965) and game-theoretical principles of non-cooperation (von Neumann & Morgenstern, 1953 [1944]). The (self-)regulation of the overall system is primarily achieved through a mechanism relying on perfect information (the blockchain), combined with a consensus protocol and incentive mechanism (Proof-of-Work), which together govern the mutually adjusting interests of all involved actors. Other dimensions of social trust and coordination (such as loyalty, coercion, etc.) are seemingly expunged from a system which expressly conforms to Hayek’s ideals of catallactic organisation (Hayek, 1976, p. 107ff).

B. From inception to crisis

1. A short history of Bitcoin

The history of Bitcoin – albeit very short – consists of a very intense series of events, which have led to the decentralised cryptocurrency becoming one of the most widely used forms of digital cash. The story began in October 2008, with the release of the Bitcoin white paper (Nakamoto, 2008a). In January 2009, the Bitcoin software was published and the first block of the Bitcoin blockchain was created (the so-called Genesis block) with a release of 50 bitcoins. Shortly after, the first Bitcoin transaction took place between Satoshi Nakamoto and Hal Finney – a well-known cryptographer and prominent figure of the cypherpunk movement in the 1990s. It was not until a few months later that Bitcoin acquired an equivalent value in fiat currency 9 and slowly made its way into the commercial realm, as it started being accepted by a small number of merchants. 10

In the early days, Satoshi Nakamoto was actively contributing to the source code and collaborating with many of the early adopters. Yet, he was always very careful never to disclose any personal details, so as to keep his identity secret. To date, in spite of the various theories that have been put forward, 11 the real identity of Satoshi Nakamoto remains unknown. In a way, the pseudonymity of Satoshi Nakamoto perfectly mirrors that of his brainchild, Bitcoin – a technology designed to substitute technology for trust, thus rendering the identification of transacting parties unnecessary.

Over the next few months, Bitcoin adoption continued to grow, slowly but steadily. Yet, the real spike in popularity of Bitcoin was due not to increased adoption by commercial actors, but rather to the establishment in January 2011 of Silk Road – an online marketplace (mostly used for the trading of illicit drugs) relying on Tor and Bitcoin to preserve the anonymity of buyers and sellers. Silk Road paved the way for Bitcoin to enter the mainstream, but also led many governmental agencies to raise concerns that Bitcoin could be used to create black markets, evade taxation, facilitate money laundering and even support the financing of terrorist activities.

In April 2011, to the surprise of many, Satoshi Nakamoto announced on a public mailing list that he would no longer work on Bitcoin. "I've moved on to other things", he said, before disappearing without further justification. Yet, before doing so, he transferred control over the source code repository of the Bitcoin client to Gavin Andresen, one of the main contributors to the Bitcoin code. Andresen, however, did not want to become the sole leader of such a project, and thus granted control over the code to four other developers – Pieter Wuille, Wladimir van der Laan, Gregory Maxwell, and Jeff Garzik. Those entrusted with these administration rights for the development of the Bitcoin project became known as the core developers.

As the popularity of Bitcoin continued to grow, so did the commercial opportunities and regulatory concerns. However, with the exit of Satoshi Nakamoto, Bitcoin was left without any leading figure or institution that could speak on its behalf. This is what justified the creation, in September 2012, of the Bitcoin Foundation – an American lobbying group focused on standardising, protecting and promoting Bitcoin. With a board comprising some of the biggest names in the Bitcoin space (including Gavin Andresen himself), the Bitcoin Foundation was intended to do for Bitcoin what the Linux Foundation had done for open source software: paying developers to work full-time on the project, establishing best practices and, most importantly, bringing legitimacy and building trust in the Bitcoin ecosystem. And yet, concerns were raised regarding the legitimacy of this self-selected group of individuals – many of whom had dubious connections or were allegedly related to specific Bitcoin scams 12 – to act as the referent and public face of Bitcoin. Beyond the irony of having a decentralised virtual currency like Bitcoin being represented by a centralised profit-driven organisation, it soon became clear that the Bitcoin Foundation was actually unable to take on that role. Plagued by a series of financial and management issues, with some of its ex-board members under criminal investigation and most of its funds depleted, the Bitcoin Foundation has today lost much of its credibility.

But even the fall of the Bitcoin Foundation did not seem to significantly affect Bitcoin – probably because the Foundation was merely a facade that never had the ability to effectively control the virtual currency. Bitcoin adoption has continued to grow over the past few years, eventually reaching a market capitalisation of almost US$7 billion. Bitcoin still has no public face and no actual institution that can represent it. Yet, people continue to use it, to maintain its protocol, and to rely on its technical infrastructure for an increasing number of commercial (and non-commercial) operations. And although a few Bitcoin-specific regulations have been enacted thus far (see e.g. the NY State BitLicense), regulators around the world have, for the most part, refrained from regulating Bitcoin in a way that would significantly impinge upon it (De Filippi, 2014).

Bitcoin thus continues to operate, and continues to be regarded (by many) as an open source software platform that relies on a decentralised peer-to-peer network governed by distributed consensus. Yet, if one looks at the underlying reasons why Bitcoin was created in the first place, and at the ways it has eventually been adopted by different categories of people, it becomes clear that the original conception of Bitcoin as a decentralised platform for financial disruption has progressively been compromised by the social and cultural context in which the technology operates.

Following the first wave of adoption by the cypherpunk community, computer geeks and crypto-libertarians, a second (larger) wave of adoption followed the advent of Silk Road in 2011. But what actually brought Bitcoin to the mainstream were the new opportunities for speculation that emerged around the cryptocurrency, as investors from all over the world started to accumulate bitcoins (either by purchasing them or by mining them) with the sole purpose of generating profits through speculation. This trend is a clear reflection of the established social, economic and political order of a society driven by the capitalistic values of accumulation and profit maximisation. Accordingly, even a decentralised technology specifically designed to promote disintermediation and financial disruption may prove unable to protect itself from the inherent tendency of modern capitalist society to concentrate wealth and centralise power in the hands of a few (Kostakis & Bauwens, 2014).

The illusion of Bitcoin as a decentralised global network had already been challenged in the past, with the advent of large mining pools – mostly based in China – which nowadays control over 75% of the network. But this is only one part of the problem. It took a simple – yet highly controversial – protocol issue to make clear that, in spite of the open source nature of the Bitcoin platform, the governance of the platform itself is also highly centralised.

2. The block size dispute

To many outside observers, the contentious issue may seem surprisingly specific. As described earlier, the blockchain underpinning the Bitcoin network is composed of a series of blocks listing the totality of transactions executed so far. For a number of reasons (mainly related to preserving the security and stability of the system, as well as to ensuring easy adoption), the size of these blocks was initially set at 1 megabyte. In practice, however, this technical specification also sets a restriction on the number of transactions which the blockchain can handle in a given time frame. Hence, as the adoption of Bitcoin grew, along with the number of transactions to be processed, this arbitrary limitation (originally perceived as innocuous) became the source of heated discussions – on several internet forums, blogs and at conferences – leading to an important dispute within the Bitcoin community (Rizzo, 2016). Some argued that the one megabyte cap was effectively preventing Bitcoin from scaling and was thus a crucial impediment to its growth. Others claimed that many workarounds could be found (e.g. off-chain solutions that would take load off the main Bitcoin blockchain) to resolve this problem without increasing the block size. They insisted that maintaining the cap was necessary both for security and for ideological reasons, and was a precondition for keeping the system inclusive and decentralised.
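
To give a rough sense of the orders of magnitude at stake, the back-of-the-envelope calculation below shows how a fixed block size translates into a ceiling on transaction throughput. This is a minimal sketch: the ten-minute block interval is a protocol target, but the average transaction size is an assumption used purely for illustration.

```python
# Rough throughput ceiling implied by the 1 MB block size cap.
BLOCK_SIZE_BYTES = 1_000_000     # the disputed one megabyte cap
AVG_TX_SIZE_BYTES = 250          # assumed average transaction size (illustrative)
BLOCK_INTERVAL_SECONDS = 600     # blocks are targeted roughly every ten minutes

txs_per_block = BLOCK_SIZE_BYTES // AVG_TX_SIZE_BYTES     # ~4,000 transactions
txs_per_second = txs_per_block / BLOCK_INTERVAL_SECONDS   # ~6.7 tx/s network-wide

print(f"~{txs_per_block} transactions per block, ~{txs_per_second:.1f} tx/s")
```

Under these assumptions, the entire network settles fewer than ten transactions per second, which is why the cap was increasingly perceived as a bottleneck as adoption grew.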

On 15 August 2015, after the community failed to reach any form of consensus over the issue of block sizes, a spinoff project was proposed. Frustrated by the reluctance of the other Bitcoin developers to officially raise the block size limit (Hearn, 2015), two core developers, Gavin Andresen and Mike Hearn, released a new version of the Bitcoin client software (Bitcoin XT) with the latent capacity to accept and produce an increased block size of eight megabytes. This client constitutes a particular kind of fork of the original software or reference client (called Bitcoin Core). Bitcoin XT was released as a soft fork, 13 with the possibility of turning into a hard fork if and when a particular set of conditions were met. Initially, the software would remain identical to the Bitcoin Core software, with the exception that all the blocks mined with the Bitcoin XT software would be “signed” by XT. This signature serves as a proxy for a poll: starting from 11 January 2016, in the event that at least 75% of the most recent 1,000 blocks had been signed by XT, the software would start accepting and producing blocks with a maximum size of eight megabytes – with the cap increasing linearly so as to double every two years. This would mark the beginning of an actual hard fork, leading to the emergence of two blockchain networks featuring two different and incompatible protocols.
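
The activation mechanism of Bitcoin XT thus amounts to a supermajority poll over a sliding window of blocks. The sketch below illustrates that logic only: the actual client reads miners' signals from the block version field and enforces the 11 January 2016 start date, and the function and parameter names used here are hypothetical.

```python
from typing import Sequence

WINDOW_SIZE = 1_000       # the most recent 1,000 blocks
SUPPORT_THRESHOLD = 750   # 75% of the window must signal support

def xt_fork_activates(signalled_by_xt: Sequence[bool]) -> bool:
    """Hypothetical check of XT's 75%-of-1,000-blocks condition.

    `signalled_by_xt[i]` is True if the i-th block in the observed
    history was "signed" by a miner running Bitcoin XT.
    """
    window = signalled_by_xt[-WINDOW_SIZE:]
    if len(window) < WINDOW_SIZE:
        return False               # not enough blocks observed yet
    return sum(window) >= SUPPORT_THRESHOLD
```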

The launch of Bitcoin XT proved highly controversial. It generated a considerable amount of debate among the core developers, and eventually led to a full-blown conflict which has been described as a civil war within the Bitcoin community (Hearn, 2016). Among the Bitcoin core developers, Gregory Maxwell in particular was a strong proponent of maintaining the 1 megabyte cap. According to him, increasing the block size cap would constitute a risky change to the fundamental rules of the system, and would inherently push Bitcoin towards more centralisation – because it would mean that less powerful machines (such as home computers) could no longer handle the blockchain, thus making the system more prone to being overrun by a small number of big computers and mining pools. Similarly, Nick Szabo – a prominent cryptographer involved in the cypherpunk community since its early days – declared that increasing the block size so rapidly would constitute a huge security risk that could jeopardise the whole network. Finally, another argument raised against the Bitcoin XT proposal was that increasing the block size could lead to variable and delayed confirmation times (as larger blocks may fail to be confirmed every ten minutes).

Within the broader Bitcoin community, the conflict gave rise to copious flame wars in the various online forums that represent the main sources of information for the Bitcoin community (Reddit, Bitcoin Info, Bitcoin.org, etc.). Many accused the proponents of Bitcoin XT of using populist arguments and alarmist strategies to bring people over to their side. Others claimed that, by promoting a hard fork, the Bitcoin XT developers were doing exactly what the Bitcoin protocol was meant to prevent: they were creating a situation whereby people on each side of the network would be able to spend the same bitcoins twice. In some cases, the conflict eventually resulted in the outright censorship and banning of Bitcoin XT supporters from the most popular Bitcoin websites. 14 Most critically, the conflict also led to a variety of personal attacks on Bitcoin XT proponents, and several online operators who expressed support for Bitcoin XT experienced Distributed Denial of Service (DDoS) attacks.

In the face of these events, and given the low rate of adoption of Bitcoin XT by the Bitcoin community at large, 15 Mike Hearn, one of the core developers and key instigators of Bitcoin XT, decided to resign from the development of Bitcoin – which he believed was on the brink of technical collapse. Hearn condemned the emotionally charged reactions to the block size debate, and pointed to major disagreements among the appointed Bitcoin core developers over the interpretation of Nakamoto’s legacy.

But the conflict did not end there. Bitcoin XT was only the first of a series of improvements subsequently proposed to the Bitcoin protocol. As Bitcoin XT failed to gain mass adoption, it was eventually abandoned on 23 January 2016. New suggestions were made to resolve the block size problem (see e.g. Bitcoin Unlimited, Bitcoin Classic, BitPay Core). The most popular today is probably Bitcoin Classic, which proposes to increase the block size cap to 2 megabytes (instead of 8) by following the same scheme as Bitcoin XT (i.e. activating once 75% of Bitcoin miners have endorsed the new format). One interesting aspect of Bitcoin Classic is that it also plans to set up a specific governance structure intended to promote more democratic decision-making with regard to code changes, by means of a voting process that will account for the opinions of the broader community of miners, users, and developers. Bitcoin Classic has received support from relevant players in the Bitcoin community, including Gavin Andresen himself, and currently accounts for 25% of the Bitcoin network’s nodes.

It is, at this moment in time, quite difficult to predict where Bitcoin is heading. Some may think that the Bitcoin experiment has failed and that it is not going anywhere; 16 others may think that Bitcoin will continue to grow in underserved and inaccessible markets as a gross settlement network for payment obligations and safe haven assets; 17 while many others believe that Bitcoin is still heading to the moon and that it will continue to surprise us as time goes on. 18 One thing is sure, though: regardless of the robustness and technical viability of the Bitcoin protocol, this governance crisis and the attendant failure of conflict resolution have highlighted the fragility of the current decision-making mechanisms within the Bitcoin project. They have also emphasised the tension between the (theoretically) decentralised nature of the Bitcoin network and the highly centralised governance model that has emerged around it, which ultimately relied on the goodwill and aligned interests of only a handful of people.

II. Bitcoin governance and its challenges

Governance structures are set up in order to adequately pursue collective goals, maintain social order, channel interests and keep power relations in check, while ensuring the legitimacy of actions taken collectively. They are therefore closely related to the issue of trust, which is a key aspect of social coordination and which online socio-technical systems address by combining informal interpersonal relations, formal rules and technical solutions in different ways (Kelty, 2005). In the case of online peer-production communities, two essential features are decisive in shaping their governance structure, namely the fact that they are volunteer-driven and that they seek to self-organise (Benkler, 2006). Thus, compared to more traditional forms of organisations such as firms and corporations, they often need to implement alternative means of coordination and incentivisation (Demil & Lecocq, 2006).

Nicolas Auray has shown that, although the nature of online peer-production communities can be very different (ranging from Slashdot to Wikipedia and Debian), they all face three key challenges which they need to address in order to thrive (Auray, 2012):

  • definition and protection of community borders;

  • establishment of incentives for participation and acknowledgment of the status of contributors;

  • and, finally, pacification of conflicts.

Understanding how each of these challenges is addressed in the case of the Bitcoin project is particularly difficult, since Bitcoin is composed of two separate, but highly interdependent layers, which involve very different coordination mechanisms. On the one hand, there is the infrastructural layer: a decentralised payment system based on a global trustless peer-to-peer network which operates according to a specific set of protocols. On the other hand, there is the layer of the architects: a small group of developers and software engineers who have been entrusted with key roles for the development of this technology.

The Bitcoin project can thus be said to comprise at least two different types of communities – each with their own boundaries and protection mechanisms, rewards or incentive systems, and mechanisms for conflict resolution. One is the community of nodes within the network, which includes both passive users merely using the network to transfer money around, and “active” users (or miners) contributing their own computational resources to the network in order to support its operations. The other is the community of developers, who contribute code to the Bitcoin project with a view to maintaining or improving its functionalities. What the crisis described above has revealed is the difficulty of establishing a governance structure which would properly interface both of these dimensions. As a consequence, a small number of individuals became responsible for the long-term sustainability of a large collective open source project, and the project rapidly fell prone to interpersonal conflict once consensus could no longer be reached among them.

This section will describe the specificities of the two-layered structure of the Bitcoin project and the mechanisms put in place to address these key challenges, in order to better understand any shortcomings they may display.

A. The Bitcoin network: governance by infrastructure

As described earlier, the Bitcoin network purports to be both self-governing and self-sustaining. 19 As a trustless infrastructure, it seeks to function independently of any social institutions. The rules governing the platform are not enforced by any single entity; instead, they are embedded directly into the network protocol by which every user must abide. 20

Given the open and decentralised nature of the Bitcoin network, its community borders are extremely flexible and dynamic, in that everyone is free to participate in and contribute to the network – either as a passive user or as an active miner. The decentralised character of the network, however, creates significant challenges when it comes to its protection, mainly due to the lack of a centralised authority in charge of policing it. Bitcoin thus implemented a technical solution to protect the network against malicious attacks (e.g. so-called sybil attacks): the Proof-of-Work mechanism, designed to make it economically expensive to cheat the network. Yet, while the protocol has proved successful thus far, it remains subject to considerable criticism. Beyond the problems related to the high computational costs of Proof-of-Work, 21 the Bitcoin network can also be co-opted by capital: if one or more colluding actors were to control at least 51% of the network’s hashing power, they would be able to arbitrarily censor transactions by validating certain blocks at the expense of others (the so-called 51% attack).
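
The economics of Proof-of-Work rest on a simple asymmetry: finding a valid block requires an expected number of hash evaluations that grows exponentially with the difficulty, while verifying a solution takes a single hash. The toy example below illustrates this asymmetry; it is a deliberate simplification (Bitcoin actually double-hashes an 80-byte block header with SHA-256 and compares it against a compactly encoded target), not the actual protocol.

```python
import hashlib

def mine(block_data: bytes, difficulty_bits: int) -> int:
    """Toy Proof-of-Work: find a nonce whose SHA-256 hash falls below a
    target. Expected cost doubles with every added bit of difficulty."""
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify(block_data: bytes, nonce: int, difficulty_bits: int) -> bool:
    """Verification is a single hash, however hard the puzzle was."""
    digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < 2 ** (256 - difficulty_bits)

nonce = mine(b"example block", difficulty_bits=20)   # ~a million attempts on average
assert verify(b"example block", nonce, difficulty_bits=20)
```

It is this cost asymmetry that makes sybil attacks economically expensive: influence over the network is proportional to the hashing power actually expended, not to the number of identities an attacker controls.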

With regard to status recognition, the Bitcoin protocol eliminates the problem at the root by creating a trustless infrastructure where the identity of the participant nodes is entirely irrelevant. In Bitcoin, there is no centralised authority in charge of assigning a network identifier (or account) to each individual node. The notions of identity and status are thus eradicated from the system and the only thing that matters – ultimately – is the amount of computational resources that every node is providing to the network.

Conversely, the reward system represents one of the constitutive elements of the Bitcoin network. The challenge has been resolved in a purely technical manner by the Bitcoin protocol, through the notion of mining. In addition to providing a protection mechanism, the Proof-of-Work algorithm introduces a series of economic incentives to reward those who contribute to maintaining and securing the network with their computational resources (or hashing power). The mining algorithm is such that the first one to find the solution to a hard mathematical problem (whose difficulty increases over time) 22 will be able to register a new block into the blockchain and will earn a specific amount of bitcoins as a reward (the reward was initially set at 50 bitcoins and is designed to be halved every four years). From a game-theoretical perspective, this creates an interesting incentive for all network participants to provide more and more resources to the network, so as to increase their chances of being rewarded bitcoins. 23 Bitcoin’s incentive mechanism is thus a complicated, albeit mathematically elegant, way of bringing a decentralised network of self-interested actors to collaborate and contribute to the operations of the Bitcoin network by relying exclusively on mathematical algorithms and cryptography. Over time, however, the growing difficulty of mining due to the increasing amount of computational resources engaged in the network, combined with the decreasing rewards awarded by the network, has eventually led to a progressive concentration of hashing power into a few mining pools, which today control a large majority of the Bitcoin network – thereby making it more vulnerable to a 51% attack. 24 Hence, in spite of its original design as a fully decentralised network ruled by distributed consensus, in practice, the Bitcoin network has evolved into a highly centralised network ruled by an increasingly oligopolistic market structure.
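
The reward schedule is deterministic and entirely encoded in the protocol: the subsidy started at 50 bitcoins per block and is cut in half at fixed intervals of 210,000 blocks (roughly every four years at one block per ten minutes), which is also what caps the eventual money supply at about 21 million bitcoins. A minimal sketch, ignoring the satoshi-level rounding performed by real clients:

```python
INITIAL_REWARD = 50.0        # bitcoins per block at launch
HALVING_INTERVAL = 210_000   # blocks between halvings (~4 years at 10 min/block)

def block_reward(height: int) -> float:
    """Block subsidy at a given height (sub-satoshi rounding ignored)."""
    return INITIAL_REWARD / (2 ** (height // HALVING_INTERVAL))

print(block_reward(0), block_reward(210_000), block_reward(420_000))  # 50.0 25.0 12.5

# Summing the geometric series of subsidies approaches the well-known cap:
total = sum(block_reward(era * HALVING_INTERVAL) for era in range(33)) * HALVING_INTERVAL
print(f"asymptotic supply ~ {total:,.0f} BTC")   # ~ 21,000,000
```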

Finally, with regard to the issue of conflict resolution, it is first important to determine what constitutes a conflict at the level of the Bitcoin infrastructure. If the purpose of the Bitcoin protocol is for a decentralised network of peers to reach consensus as to what is the right set of transactions (or block) that should be recorded into the Bitcoin blockchain, then a conflict arises whenever two alternative blocks (which are both valid from a purely mathematical standpoint) are registered by different network participants in the same blockchain – thus creating two competing versions (or forks) of the same blockchain. Given that there is no way of deciding objectively which blockchain should be favoured over the other, the Bitcoin protocol implements a specific fork-choice strategy stipulating that, if there is a conflict somewhere on the network, the longest chain shall win. 25 Again, as with the former two mechanisms, the longest-chain rule is a simple and straightforward mechanism to resolve the emergence of conflicts within the Bitcoin network by relying – solely and exclusively – on technological means.
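
In code, the fork-choice rule reduces to a one-line comparison between competing candidate chains, which is precisely what makes it so attractive as a purely technical conflict-resolution device. The sketch below is a simplification: real clients actually compare the total accumulated Proof-of-Work of each branch (which coincides with block count only when difficulty is equal across branches) and fully validate every block first.

```python
from typing import List

Block = dict          # simplified stand-in for a validated block record
Chain = List[Block]

def choose_canonical_chain(candidates: List[Chain]) -> Chain:
    """Resolve a fork by keeping the longest valid chain."""
    return max(candidates, key=len)

fork_a = [{"height": h} for h in range(100)]
fork_b = [{"height": h} for h in range(101)]
assert choose_canonical_chain([fork_a, fork_b]) is fork_b   # the longer branch wins
```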

It is clear from this description that the objective of Satoshi Nakamoto and the early Bitcoin developers was to create a decentralised payment system that is both self-sufficient and self-contained. Perhaps naively, they thought it was possible to create a new technological infrastructure that would be able to govern itself – through its own protocols and rules – and that would not require any third-party intervention in order to sustain itself. And yet, in spite of the mathematical elegance of the overall system, once introduced into a particular socio-economic context, technological systems often evolve in unforeseen ways and may fall prey to unexpected power relations.

Indeed, the short history of Bitcoin has seen significant tensions related to border protection, reward systems and conflict resolution. Some of these issues are inherent in the technological infrastructure and design of the Bitcoin protocol. Perhaps one of the most revealing of the possible ways of subverting the system is selfish mining, whereby miners can increase their potential returns by refusing to cooperate with the rest of the network. 26 While this does not constitute a technical threat to the Bitcoin protocol per se, it can nonetheless be regarded as an economic attack, which potentially reduces the security of the Bitcoin network by changing its inherent incentive structure. 27 Other issues emerged as a result of more exogenous factors, such as the Mt. Gox scandal 28 of 2014 – which led to the loss of 774,000 bitcoins (worth more than US$450 million at the time) – as well as many other scams and thefts that have occurred on the Bitcoin network over the years. 29 Most of these were not due to an actual flaw in the Bitcoin protocol, but were rather the result of ill-intentioned individuals and poor security practices in centralised platforms built on top of the Bitcoin network (Trautman, 2014).
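
To see why withholding blocks can pay, consider the toy Monte Carlo simulation below of the selfish mining strategy analysed by Eyal and Sirer (2014). It is a simplified sketch under stated assumptions – a single selfish pool with hashing share alpha, a parameter gamma for the fraction of honest miners that build on the pool's block during a tie, and no network latency – but it reproduces the key result: with gamma = 0, a pool controlling more than roughly a third of the hashing power earns more than its fair share of blocks.

```python
import random

def selfish_mining_share(alpha: float, gamma: float = 0.0, rounds: int = 500_000) -> float:
    """Share of main-chain blocks won by a selfish pool with hashing share
    `alpha`. Simplified state machine after Eyal & Sirer (2014)."""
    selfish = honest = 0    # blocks eventually credited on the main chain
    lead = 0                # length advantage of the pool's private chain
    tie = False             # two equal-length branches racing after a publish
    for _ in range(rounds):
        if tie:             # race: the pool's published block vs. the honest block
            if random.random() < alpha:
                selfish += 2                    # pool extends its branch, wins both
            elif random.random() < gamma:
                selfish += 1; honest += 1       # honest miner builds on the pool's block
            else:
                honest += 2                     # honest branch wins the race
            tie, lead = False, 0
        elif random.random() < alpha:
            lead += 1                           # pool mines and withholds
        elif lead == 0:
            honest += 1                         # nothing withheld: honest block stands
        elif lead == 1:
            tie = True                          # pool publishes to force a race
        elif lead == 2:
            selfish += 2; lead = 0              # pool reveals all, orphans honest block
        else:
            selfish += 1; lead -= 1             # pool reveals one block, stays ahead
    return selfish / (selfish + honest)

for a in (0.25, 0.30, 0.35, 0.40):
    print(a, round(selfish_mining_share(a), 3))   # exceeds a itself once a > ~1/3
```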

Accordingly, it might be worth considering whether – independently of the technical soundness of the Bitcoin protocol – the Bitcoin network can actually do away with any form of external regulation or sanctioning body. In order to ensure the proper integration (and assimilation) of such a technological artefact within the social, economic and cultural contexts of modern societies, the network might well require some form of surveillance and arbitration mechanism (whether internal or external to the system) to preserve legitimate market dynamics, guarantee a proper level of consumer protection, and maintain financial stability in the system.


B. The Bitcoin architects: governance of infrastructure

Just like many other internet protocols, Bitcoin was initially released as open source software, encouraging people to review the code and spontaneously contribute to it. Despite their formal emphasis on openness, different open source software projects and communities feature very different social and organisational structures. The analysis of communication patterns among various open source projects has shown tendencies ranging from highly distributed exchanges between core developers and active users, to high degrees of centralisation around a single developer (Crowston & Howison, 2005). Moreover, different open source communities enjoy a more or less formalised governance structure, which often evolves as the project matures. Broadly speaking, open source communities have been categorised into two main types or configurations: “democratic-organic” versus “autocratic-mechanistic” (de Laat, 2007). The former display a highly structured and meritocratic governance system (such as the Debian community, most notably), whereas the latter feature less sophisticated and more implicit governance systems, such as the Linux community, where most of the decision-making power has remained in the hands of Linus Torvalds – often referred to as the “benevolent dictator”. Bitcoin definitely falls into the second category.

Indeed, from its inception, Satoshi Nakamoto was the main person in charge of managing the project, as well as the only person with the right to commit code to the official Bitcoin repository. It was only at a later stage, when Satoshi began to disengage from the Bitcoin project, that this power was eventually transferred to a small group of ‘core developers’. Hence, just like in many other open source projects, there is a discrepancy between those who can provide input to the project (the community at large) and those who have the final say as to where the project is going. While anyone is entitled to submit changes to the software (such as bug fixes, incremental improvements, etc.), only a small number of individuals (the core developers) have the power to decide which changes shall be incorporated into the main branch of the software. This is justified partly by the high level of technical expertise needed to properly assess the proposed changes, but also – more implicitly – by the fact that the core developers have been entrusted with the responsibility of looking after the project, on the grounds of their involvement with – and, to some extent, their ideological affinity with – Satoshi Nakamoto’s original concept.

With this in mind, we can now provide a second perspective on the three key challenges facing Bitcoin, and analyse how they are being dealt with from the side of its architects: the Bitcoin developers.

The definition and protection of community boundaries, and of the work produced collectively, is a key issue in open source collectives. It is classically addressed through the setting up of an alternative intellectual property regime and licensing scheme – copyleft – which ensures that the work will be preserved as a common pool resource, as well as through a number of organisational features and rules intended to preserve some control over the project (O'Mahony, 2003; Schweik & English, 2007). In the case of Bitcoin, community borders are – at least in theory – quite clearly defined. Just like many other open source software projects, there exists a dividing line between the community of users and developers at large, who can provide input and suggest modifications to the code (by making a pull-request, for instance), and the core developers who are in charge of preserving the quality and the functionality of the code, and who are the only ones with the power to accept (or refuse) the proposed modifications (e.g. by merging pull-requests into the main branch of the code). However, the distinction between these two communities is not as clear-cut as it may seem, since the community at large also has an important (albeit indirect) influence on the decisions concerning the code.

Specifically, consensus formation among the Bitcoin core developers has been formalised through a process known as Bitcoin Improvement Proposals (BIPs), 30 which builds heavily on the process in place for managing the Python programming language (PEPs, or Python Enhancement Proposals). Historically, both of these processes share similarities with (and sometimes explicitly refer to) what can be considered the “canonical” approach to consensus formation for designing and documenting network protocols: the RFC, or Request For Comments, used to create and develop the internet protocol suite (Flichy, 2007, p. 35ff). The BIP process requires that all source code and documentation be released and made available to anyone, so that a multiplicity of individuals can discuss and improve them. Yet, the final call as to whether a change will be implemented ultimately rests with the core developers, who assess the degree of public support a proposal has built and seek consensus among themselves:

We are fairly liberal with approving BIPs, and try not to be too involved in decision making on behalf of the community. The exception is in very rare cases of dispute resolution when a decision is contentious and cannot be agreed upon. In those cases, the conservative option will always be preferred. Having a BIP here does not make it a formally accepted standard until its status becomes Active. For a BIP to become Active requires the mutual consent of the community. Those proposing changes should consider that ultimately consent may rest with the consensus of the Bitcoin users. 31

This description provides a concise overview of the structures of legitimacy and accountability which govern the relationship between the Bitcoin architects (or core developers) and the Bitcoin users. While the community is open for anyone to participate, decision-making is delegated to a small number of people who try to keep intervention to a minimum. Yet, ultimately, the sovereignty of the overall project rests with the people – i.e. the Bitcoin users and miners. If the core developers were to make a modification to the code that the community disagrees with (the miners, in particular), the community might simply refuse to run the new code. This can be regarded as a form of “vetoing power” 32 or “market-based governance” 33 which guarantees that the legitimacy of the code ultimately rests with the users.

Regarding the acknowledgment of status, the challenge consists in balancing rewards for the most active and competent contributors while promoting and maintaining the collective character of the overall endeavour. Indeed, open source developers are acutely aware of the symbolic rewards which they can acquire by taking part in a given project, and also monitor other contributors to assess their position within communities which display a strongly meritocratic orientation (Stewart, 2005). Some communities rank individuals through systems of marks which provide a quantitative metric for reputation; others rely on much less formalised forms of evaluation. In the case of Bitcoin, some measure of reputation can be derived from the platform used to manage the versioning of the software – GitHub – which includes metrics for users’ activities (such as number of contributions, number of followers, etc.). However, the reputation of the core developers is on a completely different scale, and is mostly derived from their actual merit or technical expertise, as well as a series of less easily defined individual qualities which can be understood as a form of charisma.

Finally, conflict management is probably the most difficult issue to deal with in consensus-oriented communities, since it requires a way to avoid both paralysing deadlocks and divisive fights. Taking Wikipedia as an example, the community relies on specific mechanisms of mutual surveillance as the most basic way of managing conflicts; however, additional regulatory procedures of mediation and sanctions have been established and can be resorted to if needed (Auray, 2012, p. 225). The Debian community is also well known for its sophisticated rules and procedures (Lazaro, 2008). Though not immune to deadlocks and fighting, these communities have managed to scale while maintaining some degree of inclusivity, by shifting contentious issues from substantive to procedural grounds – thus limiting the opportunities for personal disputes and ad hominem attacks.

Obviously, the Bitcoin community lacks any such conflict management procedures. As described above, failure to reach consensus among the core developers concerning the block size dispute led to an actual forking of the Bitcoin project. Forking is a process whereby two (or more) software alternatives are provided to the user base, who will therefore need to make a choice: the adoption rate will ultimately determine which branch of the project will win the competition, or whether they will both evolve as two separate branches of the same software. Forking is standard practice in free/libre and open source software development, and although it can be seen as a last-resort solution which can sometimes put the survival of a project at risk (Robles & González-Barahona, 2012), it can also be considered a key feature of its governance mechanisms. For Nyman and Lindman, “the right to fork code is built into the very definition of what it means to be an open source program”; it is a reminder that developers have the essential freedom to take the code wherever they want, and this freedom also functions as a looming threat of division that binds the developer community together (Nyman & Lindman, 2013).

In sum, it can be stressed that, at all three levels (defining borders, acknowledging status, and managing conflicts), the governance of the Bitcoin project relies almost exclusively on its leaders, lending credit to the view that peer production can often lead to the formation of oligarchic organisational forms (Shaw & Hill, 2014). More specifically, in classic Weberian terms – and as can often be observed in online communities – Bitcoin governance consists in a form of domination based on charismatic authority (O'Neil, 2014), largely founded on presumed technical expertise. The recent crisis experienced by the Bitcoin community revealed the limits of consensus formation between individuals driven by sometimes diverging political and commercial interests, and underlined the discrepancies between the overall goals of the project (a self-regulating decentralised virtual currency and payment system) and the excessively centralised and technocratic elite in charge of the project.

III. The invisible politics of Bitcoin

Vires in Numeris (Latin for “strength in numbers”) was the motto printed on the first physical Bitcoin wallets 34 – perhaps an ironic reference to the “In God we Trust” motto printed on US dollar bills. In the early days, the political objectives of Bitcoin were clearly and explicitly stated through the desire to change existing power dynamics between individuals and the state. 35 Yet, while some people use Bitcoin as a vehicle for expressing their political views (e.g. the community of so-called cypherpunks and crypto-libertarians), others believe that there is no real political ideology expressed within the technology itself. 17 Indeed, if asked, many will say that one of the core benefits of Bitcoin is that it operates beyond the scope of governments, politics and central banks. 36 But it does not take much of a stretch to realise that this desire to remain apolitical constitutes a political stance in and of itself (Kostakis & Giotitsas, 2014).

Decentralisation inherently affects political structures by removing a control point. Regarding Bitcoin, decentralisation is achieved through a peer-to-peer payment system that operates independently of any (trusted) third party. As a result, not only does Bitcoin call into question one of the main prerogatives of the state – that of money issuance and regulation – it also casts doubt on the need for (and, therefore, the legitimacy of) existing financial institutions. On the one hand, as a decentralised platform for financial transactions, Bitcoin sets a limit on the power of central banks and other financial institutions to define the terms and conditions, and control the execution, of financial transactions. On the other hand, by enabling greater disintermediation, the Bitcoin blockchain provides new ways for people to coordinate themselves without relying on a centralised third party or trusted authority, thus potentially promoting individual freedoms and emancipation. 37 More generally, the blockchain is now raising high hopes as a solution which, beyond a payments system, could support many forms of direct interactions between free and equal individuals – with the implicit assumption that this would contribute to furthering democratic goals by promoting a more horizontal and self-organising social structure (Clippinger & Bollier, 2014).

As Bitcoin evolves – and in the event that it is more broadly adopted – it will need to face a growing number of technical challenges (e.g. related to blockchain scalability), but it will also encounter a variety of social and political challenges, as the technology continues to impinge upon existing social and governmental institutions, ushering in an increasingly divergent mix of political positions.

The mistake of the Bitcoin community was to believe that, once technical governance had been worked out, the need to rely on government institutions and centralised organisations in order to manage and regulate social interactions would eventually disappear (Atzori, 2015; Scott, 2014). Politics would progressively give way to new forms of technologically-driven protocols for social coordination (Abramowicz, 2015) – regarded as a more efficient way for individuals to cooperate towards the achievement of a collective goal while preserving their individual autonomy.

Yet, one cannot get rid of politics through technology alone, because the governance of a technology is – itself – inherently tied to a wide range of power dynamics. As Yochai Benkler elegantly puts it, there are no spaces of perfect freedom from all constraints, only different sets of constraints that one necessarily must choose from (Benkler, 2006). Bitcoin as a trustless technology might perhaps escape the existing political framework of governmental and market institutions; yet, it remains subject to the (invisible) politics of a handful of individuals – the programmers who are in charge of developing the technology and, to a large extent, deciding upon its functionalities.

Implicit in the governance structure of Bitcoin is the idea that the Bitcoin core developers (together with a small number of technical experts) are – by virtue of their technical expertise – the most likely to come up with the right decision as to the specific set of technical features that should be implemented in the platform. Such a technocratic approach to governance is problematic in that it runs counter to the original conception of the Bitcoin project. There exists, therefore, an obvious discrepancy between the libertarian vision of Bitcoin as a decentralised infrastructure that cannot be regulated by any third party institution, and the actual governance structure that dictates the technological development of Bitcoin – which, in spite of its open source nature, is highly centralised and undemocratic. While the (a)political dimension of the former has been praised or at least acknowledged by many, the latter has remained, for a long time, invisible to the public: the technical decisions to be taken by the Bitcoin developers were not presented as political decisions, and were therefore never debated as such.

The block size debate is a good illustration of this tendency. Although the debate was framed as a value-neutral technical discussion, most of the arguments for or against increasing the size of a block were, in fact, part of a hidden political debate. Indeed, except for the few arguments concerning the need to preserve the security of the system, most of the arguments that animated the discussion were ultimately concerned with the socio-political implications of such a technical choice (e.g. supporting a larger volume of financial transactions versus preserving the decentralised nature of the network). Yet, insofar as the problem was presented as if it involved only rational and technical choices, the political dimensions of these choices were not publicly acknowledged.

Moreover, if one agrees that all artefacts have politics (Winner, 1980) and that technology frames social practice (Kallinikos, 2011), it follows that the design and features of the Bitcoin platform must be carefully thought through by taking into account not only its impact on the technology as such (i.e. security and scalability concerns), but also its social and political implications on society at large.

Politics exist because, in many cases, consensus is hard to achieve, especially when issues pertaining to social justice need to be addressed. Social organisations are thus faced with the difficult challenge of accommodating incompatible and often irreconcilable interests and values. The solutions found by modern day liberal democracies involve strong elements of publicity and debate. The underlying assumption is that the only way to ensure the legitimacy of collective decisions is by making conflicts apparent and by discussing and challenging ideas within the public sphere (Habermas, 1989). Public deliberations and argumentation are also necessary to achieve a greater degree of rationality in collective decisions, as well as to ensure full transparency and accountability of the ways in which these decisions are both made and put into practice. But the antagonistic dimensions of social life constantly undermine the opportunities for consensus formation. A truly democratic approach needs, therefore, to acknowledge – and, ideally, to balance or compromise – these spaces of irreconcilable dissent which are the most revealing of embedded power relations (Mouffe & Laclau, 2001; Mouffe, 1993).

This is perhaps even more crucial for technologies such as the internet or Bitcoin, which seek to implement a global and shared infrastructure for new forms of coordination and exchange. Bitcoin as an information infrastructure must be understood here as a means of introducing and shaping a certain type of social relations (Star, 1999; Bowker et al., 2010). Yet, just like many other infrastructures, Bitcoin is mostly an invisible technology that operates in the background (Star & Strauss, 1999). It is, therefore, all the more important to make the design choices lying behind its technical features more visible, in order to shed light on the politics which are implicit in the technological design.

It should be clear, by now, that the politics of a technology cannot be resolved solely and exclusively through technological means. While technology can be used to steer and mediate many kinds of social interactions, it should not (and cannot) be the sole and main driver of social change. As Bitcoin has shown, it is unrealistic to believe that human organisations can be governed by relying exclusively on algorithmic rules. In order to ensure the long-term sustainability of these organisations, it is necessary to incorporate, on top of the technical framework, a specific governance structure that enables people to discuss and coordinate among themselves in an authentically democratic way, but also – and perhaps more importantly – to engage with, and decide upon, how the technology should evolve. In that regard, one should always ensure that the decision-making process involves not only those who are building the technology (i.e. developers and software engineers) but also all those who will ultimately be affected by these decisions (i.e. the users of that technology).

Different dimensions of the internet have already been analysed from such a perspective within the broader framework of internet governance (DeNardis, 2012; Musiani et al., 2016), providing important insights about the performative dimensions of the underlying software and protocols, and the ways they have been put to use. These could prove useful in better understanding and formulating a novel governance structure for the Bitcoin project – one that is mediated (rather than dictated) by technological rules.

Conclusion: Bitcoin within the wider frame of internet governance

The internet, understood as a complex and heterogeneous socio-technical construct, combines many different types of arrangements – involving social norms, legal rules and procedures, market practices and technological solutions – which, taken together, constitute its overall governance and power structures (Brousseau, Marzouki, & Méadel, 2012). Most of the research on internet governance has focused on the interplay between infrastructures on the one hand, and superstructures or institutions on the other – particularly those which have emerged on top of the network during the course of its history (such as ICANN or IETF), sometimes generating conflictual relationships with existing national and international legal frameworks, private corporations, or even civil society at large (Mueller, 2002; Mueller, 2010; Mathiason, 2009; DeNardis, 2009; Bygrave & Bing, 2009). 38

Internet governance has been fraught with many frictions, controversies and disputes over the years – an international fight to control the basic rules and protocols of the internet described by some as a global war (DeNardis, 2014). Even the much praised governance model of the internet protocol suite – based on the IETF’s (deceptively simple) rule of “rough consensus and running code” – effectively involved, at certain points, a fair amount of power struggle and even autocratic design (Russell, 2014). The idea that consensus over technical issues can be reached more easily because it involves only objective criteria and factual observations (i.e. something either works or it doesn’t) neglects the reality that “stories about standards are necessarily about power and control – they always either reify or change existing conditions and are always conscious attempts to shape the future in specific ways” (Russell, 2012).

Set within the wider frame and history of internet governance, the Bitcoin case is particularly instructive insofar as it draws on a certain number of new, but also already existing practices, to promote some of the ideals which have been associated with the internet since its inception: furthering individual autonomy and supporting collective self-organisation (Loveluck, 2015). As we have seen, Bitcoin can be understood as a dual-layered construct, composed of a global network infrastructure on the one hand, and a small community of developers on the other. Although the trustlessness of the network seeks to obviate the need for a central control point, in practice, as soon as a technology is deployed, new issues emerge from unanticipated uses of that technology – which ultimately require the setting up of social institutions in order to protect or regulate it. These institutions can be more or less attuned to the overall aims of the technology, and can steer it in different directions. For instance, while the IETF managed to implement a relatively decentralised and bottom-up process for establishing standards, the Domain Name System (DNS) has shown that even a distributed network might, at some point, need to rely on a centralised control point to administer scarce resources (such as domain names). This has led to the emergence of centralised – and somewhat contested – institutions, such as, most notably, ICANN – a US-based non-profit corporation in charge of coordinating unique identifiers across the internet.

The lessons from the past – taking account of both the success stories and failures of internet governance – can serve as useful indications as to what should be attempted or, on the contrary, avoided in terms of Bitcoin governance. In particular, it should be acknowledged that socio-technical systems cannot – by virtue of their embeddedness into a social and cultural context – ensure their own self-governance and self-sustainability through technology alone. Any technology will eventually fall prey to the social, cultural and political pressures of the context in which it operates, which will very probably make it grow and evolve in unanticipated directions (Akrich, 1989; MacKenzie & Wajcman, 1999).

The Bitcoin project has evolved significantly over the years, for reasons which are both endogenous and exogenous to the system. From a small network run by a few crypto-libertarians and computer geeks eager to experiment with a new liberation technology (Diamond, 2010), the Bitcoin network quickly scaled into a global network which is struggling to meet the new demands and expectations of its growing user base and stakeholders.

The block size debate created an actual schism within the Bitcoin community – and, by doing so, ultimately stressed the need for a more democratic governance system. Drawing on the many different arrangements which have been experimented with at different levels of internet governance, each with its own distinctive forms of deliberation and decision-making procedures (Badouard et al., 2012), the Bitcoin development process could perhaps be improved by introducing an alternative governance structure that would better account for the dimensions of the technology other than the technical one, especially its social, economic and political implications for society at large.

The Bitcoin Foundation was a first attempt in this direction, though it never managed to establish itself as a standardisation body, precisely because of a lack of legitimacy and accountability in its own governance process. A centralised governance body (similar to ICANN) in charge of ensuring the legitimacy and accountability of the future developments of the Bitcoin project would obviously fail to obtain any kind of legitimacy from within the Bitcoin community – since eliminating the need for fiduciary institutions or other centralised authorities was the very purpose of the Bitcoin network. The technologically-driven approach currently endorsed by the Bitcoin project – aiming to create a governance structure that is solely and exclusively dictated by technological means (governance by infrastructure) – has also shown itself to be bound to fail, since a purely technological system cannot fully account for the whole spectrum (and complexity) of social interactions. In this regard, one of the main limitations of the Bitcoin protocol is that it only accounts for algorithmically quantifiable and verifiable actions (i.e. how many computational resources people invest in the network), and is therefore unable to reward those who contribute to the network in ways other than hashing power.

A more interesting approach would involve using the underlying technology – the blockchain – not as a regulatory technology that technologically enforces a particular set of predefined protocols and rules (as Bitcoin does), but rather as a platform on which people might encode their own sets of rules and procedures defining a particular system of governance – one that can benefit from the distinctive characteristics of the blockchain (in terms of transparency, traceability, accountability and incorruptibility) but would also leave room for the establishment of an institutional framework operating on top of that (decentralised) network. This would ensure that technology remains a tool of empowerment for people, who would use it to enable and support new models of governance, rather than the opposite.

Given the experimental nature and current lack of maturity of the technology, it is difficult to predict, at this specific point in time, what would be the best strategy to ensure that the Bitcoin project evolves in accordance with the interests of all relevant stakeholders. Yet, regardless of the approach taken, it is our belief that a proper governance structure for Bitcoin can only be achieved by publicly acknowledging its political dimensions, and replacing the current technocratic power structure of the Bitcoin project with an institutional framework capable of understanding (and accommodating) the politics inherent in each of its technical features.

References

Abbate, J. (1999), Inventing the Internet, Cambridge, MA: MIT Press.

Abramowicz, M.B. (2015), Peer-to-peer law, built on Bitcoin, Legal Studies Research Paper, GWU Law School, http://scholarship.law.gwu.edu/faculty_publications/1109/

Agre, P.E. (2003), P2P and the promise of Internet equality, Communications of the ACM 46(2), pp. 39-42.

Akrich, M. (1989), La construction d'un système socio-technique. Esquisse pour une anthropologie des techniques, Anthropologie et Sociétés 13(2), pp. 31-54.

Atzori, M. (2015), Blockchain technology and decentralized governance: is the State still necessary?, working paper, Available at SSRN, http://papers.ssrn.com/sol3/Papers.cfm?abstract_id=2709713

Auray, N. (2012), Online communities and governance mechanisms, in E. Brousseau, M. Marzouki & C. Méadel (eds.), Governance, Regulation and Powers on the Internet. Cambridge and New York: Cambridge University Press, pp. 211-231.

Badouard, R. et al (2012), Towards a typology of Internet governance sociotechnical arrangements, in F. Massit-Folléa, C. Méadel & L. Monnoyer-Smith (eds.), Normative Experience in Internet Politics Paris: Transvalor/Presses des Mines, pp. 99-124.

Benkler, Y. (2006), The Wealth of Networks. How Social Production Transforms Markets and Freedom. New Haven, CT: Yale University Press.

Benkler, Y. (2016), Degrees of freedom, dimensions of power, Daedalus 145(1), pp. 18-32.

Bimber, B. (1994), Three faces of technological determinism, in M.R. Smith & L. Marx (eds.), Does Technology Drive History? The Dilemma of Technological Determinism. Cambridge, MA and London: MIT Press, pp. 79-100.

Bowker, G.C. et al (2010), Toward Information Infrastructure Studies: ways of knowing in a networked environment, in J. Hunsinger, L. Klastrup & M. Allen (eds.), International Handbook of Internet Research. Dordrecht and London: Springer, pp. 97-117.

Brousseau, E., Marzouki, M., & Méadel, C. eds. (2012), Governance, Regulation and Powers on the Internet. Cambridge and New York: Cambridge University Press.

Bygrave, L.A. & Bing, J. eds. (2009), Internet Governance. Infrastructure and Institutions. Oxford and New York: Oxford University Press.

Clippinger, J.H. & Bollier, D. eds. (2014), From Bitcoin to Burning Man and Beyond. The Quest for Autonomy and Identity in a Digital Society, Boston, MA and Amherst, MA: ID3 and Off the Common.

Crowston, K. & Howison, J. (2005), The social structure of free and open source software development, First Monday [online] 10(2), http://firstmonday.org/ojs/index.php/fm/article/view/1207/1127

David, M. (2010), Peer to Peer and the Music Industry. The Criminalization of Sharing. London, Thousand Oaks, CA, New Delhi and Singapore: Sage.

Demil, B. & Lecocq, X. (2006), Neither market nor hierarchy nor network: the emergence of bazaar governance, Organization Studies 27(10), pp. 1447-1466.

DeNardis, L. (2009), Protocol Politics. The Globalization of Internet Governance. Cambridge, MA: MIT Press.

DeNardis, L. (2012), Hidden levers of Internet control. An infrastructure-based theory of Internet governance, Information, Communication & Society 15(5), pp. 720-738.

DeNardis, L. (2014), The Global War for Internet Governance. New Haven, CT: Yale University Press.

Diamond, L. (2010), Liberation technology, Journal of Democracy 21(3), pp. 69-83.

Dingledine, R., Mathewson, N. & Syverson, P. (2004), Tor: the second-generation onion router, Proceedings of the 13th USENIX Security Symposium, San Diego, CA.

Dodd, N. (2014), The Social Life of Money, Princeton, NJ: Princeton University Press.

DuPont, Q. (2014), "The politics of cryptography: Bitcoin and the ordering machines", Journal of Peer Production (4), http://peerproduction.net/issues/issue-4-value-and-currency/peer-reviewed-articles/the-politics-of-cryptography-bitcoin-and-the-ordering-machines/

Eyal, I. & Sirer, E.G. (2014), "Majority is not enough: Bitcoin mining is vulnerable", in Financial Cryptography and Data Security, Springer, pp. 436-454.

Ferguson, N. (2008), The Ascent of Money. A Financial History of the World, London: Penguin.

De Filippi, P. (2014), "Bitcoin: a regulatory nightmare to a libertarian dream", Internet Policy Review 3(2), http://policyreview.info/articles/analysis/bitcoin-regulatory-nightmare-libertarian-dream

Flichy, P. (2007), The Internet Imaginaire, Cambridge, MA: MIT Press.

Gillespie, T. (2006), "Engineering a principle: ‘end-to-end’ in the design of the internet", Social Studies of Science 36(3), pp. 427-457.

Habermas, J. (1989), The Structural Transformation of the Public Sphere. An Inquiry into a Category of Bourgeois Society, Cambridge: Polity Press.

Hayek, F.A. (1976), Law, Legislation and Liberty. Vol. 2, The Mirage of Social Justice, London: Routledge & Kegan Paul.

Hayek, F.A. (1990), The Denationalization of Money: The Argument Refined, 3rd edition, London: The Institute of Economic Affairs.

Hearn, M. (2015), "Why is Bitcoin forking?", Medium, https://medium.com/faith-and-future/why-is-bitcoin-forking-d647312d22c1. Accessed 15 April 2016.

Hearn, M. (2016), "The resolution of the Bitcoin experiment", Medium, https://medium.com/@octskyward/the-resolution-of-the-bitcoin-experiment-dabb30201f7. Accessed 15 April 2016.

Hughes, E. (1993), "A Cypherpunk's Manifesto", http://www.activism.net/cypherpunk/manifesto.html. Accessed 24 March 2011.

Kahn, D. (1996), The Codebreakers. The Story of Secret Writing, 2nd edition, New York: Scribner.

Kallinikos, J. (2011), Governing Through Technology. Information Artefacts and Social Practice, Basingstoke and New York: Palgrave Macmillan.

Kelty, C. (2005), "Trust among the algorithms: ownership, identity, and the collaborative stewardship of information", in R.A. Ghosh (ed.), Code. Collaborative Ownership and the Digital Economy, Cambridge, MA: MIT Press, pp. 127-152.

Kostakis, V. & Bauwens, M. (2014), "Distributed capitalism", in Network Society and Future Scenarios for a Collaborative Economy, Basingstoke and New York: Palgrave Macmillan, pp. 30-34.

Kostakis, V. & Giotitsas, C. (2014), "The (a)political economy of bitcoin", tripleC 12(2), pp. 431-440, http://triplec.at/index.php/tripleC/article/view/606.

de Laat, P.B. (2007), "Governance of open source software: state of the art", Journal of Management & Governance 11(2), pp. 165-177.

Lazaro, C. (2008), La Liberté logicielle. Une ethnographie des pratiques d'échange et de coopération au sein de la communauté Debian, Louvain-la-Neuve: Bruylant-Academia.

Levy, S. (2001), Crypto. How the Code Rebels Beat the Government—Saving Privacy in the Digital Age, New York: Viking.

Loveluck, B. (2015), Réseaux, libertés et contrôle. Une généalogie politique d'internet, Paris: Armand Colin.

MacKenzie, D. & Wajcman, J. eds. (1999), The Social Shaping of Technology, 2nd edition, Buckingham: Open University Press.

Mallard, A., Méadel, C. & Musiani, F. (2014), "The paradoxes of distributed trust: peer-to-peer architecture and user confidence in Bitcoin", Journal of Peer Production (4), http://peerproduction.net/issues/issue-4-value-and-currency/peer-reviewe...

Mathiason, J. (2009), Internet Governance. The New Frontier of Global Institutions, London and New York: Routledge.

McLeay, M., Radia, A. & Thomas, R. (2014), "Money creation in the modern economy", Bank of England Quarterly Bulletin, Q1, pp. 14-27.

Mouffe, C. (1993), The Return of the Political, London and New York: Verso.

Mouffe, C. & Laclau, E. (2001), Hegemony and Socialist Strategy. Towards a Radical Democratic Politics, 2nd edition, London: Verso.

Mueller, M. (2002), Ruling the Root. Internet Governance and the Taming of Cyberspace, Cambridge, MA: MIT Press.

Mueller, M. (2010), Networks and States. The Global Politics of Internet Governance, Cambridge, MA: MIT Press.

Musiani, F. et al. (eds.) (2016), The Turn to Infrastructure in Internet Governance, Basingstoke and New York: Palgrave Macmillan.

Nakamoto, S. (2008a), "Bitcoin: a peer-to-peer electronic cash system", Bitcoin.org, https://bitcoin.org/bitcoin.pdf. Accessed 20 February 2014.

Nakamoto, S. (2008b), "Re: Bitcoin P2P e-cash paper", The Cryptography Mailing List, http://www.mail-archive.com/cryptography@metzdowd.com/msg09971.html. Accessed 4 May 2016.

Nakamoto, S. (2009), "Bitcoin open source implementation of P2P currency", P2P Foundation, http://p2pfoundation.ning.com/forum/topics/bitcoin-open-source. Accessed 15 April 2016.

North, P. (2007), Money and Liberation. The Micropolitics of Alternative Currency Movements, Minneapolis, MN: University of Minnesota Press.

Nyman, L. & Lindman, J. (2013), "Code forking, governance, and sustainability in open source software", Technology Innovation Management Review 3(1), p. 7.

Olson, M. (1965), The Logic of Collective Action. Public Goods and the Theory of Groups, Cambridge, MA: Harvard University Press.

O'Mahony, S. (2003), "Guarding the commons: how community managed software projects protect their work", Research Policy 32(7), pp. 1179-1198.

O'Neil, M. (2014), "Hacking Weber: legitimacy, critique, and trust in peer production", Information, Communication & Society 17(7), pp. 872-888.

Oram, A. (ed.) (2001), Peer-to-Peer. Harnessing the Power of Disruptive Technologies, Sebastopol, CA: O'Reilly.

Palmer, D. (2016), "Scalability debate continues as Bitcoin XT proposal stalls", CoinDesk, http://www.coindesk.com/scalability-debate-bitcoin-xt-proposal-stalls. Accessed 15 April 2016.

Polanyi, K. (2001 [1944]), The Great Transformation. The Political and Economic Origins of Our Time, Boston, MA: Beacon Press.

Quinn, B.J. (2009), "The failure of private ordering and the financial crisis of 2008", New York University Journal of Law and Business 5(2), pp. 549-615.

Rizzo, P. (2016), "Making sense of Bitcoin's divisive block size debate", CoinDesk, http://www.coindesk.com/making-sense-block-size-debate-bitcoin/. Accessed 15 April 2016.

Robles, G. & González-Barahona, J.M. (2012), "A comprehensive study of software forks: dates, reasons and outcomes", in I. Hammouda et al (eds.), Open Source Systems. Long-Term Sustainability, Berlin: Springer, pp. 1-14.

Russell, A.L. (2012), "Standards, networks, and critique", IEEE Annals of the History of Computing 34(3), pp. 78-80.

Russell, A.L. (2014), Open Standards and the Digital Age. History, Ideology, and Networks, Cambridge and New York: Cambridge University Press.

Schweik, C.M. & English, R. (2007), "Tragedy of the FOSS commons? Investigating the institutional designs of free/libre and open source software projects", First Monday [online] 12(2), http://firstmonday.org/htbin/cgiwrap/bin/ojs/index.php/fm/article/view/1619/1534

Scott, B. (2014), "Visions of a techno-Leviathan: the politics of the Bitcoin blockchain", E-International Relations, http://www.e-ir.info/2014/06/01/visions-of-a-techno-leviathan-the-politics-of-the-bitcoin-blockchain/. Accessed 2 May 2016.

Shaw, A. & Hill, B.M. (2014), "Laboratories of oligarchy? How the iron law extends to peer production", Journal of Communication 64(2), pp. 215-238.

Simmel, G. (2004), The Philosophy of Money, 3rd enlarged edition, London and New York: Routledge.

Star, S.L. (1999), "The ethnography of infrastructure", American Behavioral Scientist 43(3), pp. 377-391.

Star, S.L. & Strauss, A. (1999), "Layers of silence, arenas of voice: the ecology of visible and invisible work", Computer Supported Cooperative Work (CSCW) 8(1-2), pp. 9-30.

Stewart, D. (2005), "Social status in an open-source community", American Sociological Review 70(5), pp. 823-842.

The Economist (2016), "Craig Steven Wright claims to be Satoshi Nakamoto. Is he?", http://www.economist.com/news/briefings/21698061-craig-steven-wright-claims-be-satoshi-nakamoto-bitcoin. Accessed 2 May 2016.

Trautman, L.J. (2014), "Virtual currencies; Bitcoin & what now after Liberty Reserve, Silk Road, and Mt. Gox?", Richmond Journal of Law and Technology 20(4).

von Neumann, J. & Morgenstern, O. (1953 [1944]), Theory of Games and Economic Behavior, 3rd edition, Princeton, NJ: Princeton University Press.

Winner, L. (1980), "Do artifacts have politics?", Daedalus 109(1), pp. 121-136.

Wright, A. & De Filippi, P. (2015), "Decentralized blockchain technology and the rise of lex cryptographia", Available at SSRN, http://ssrn.com/abstract=2580664

Zhu, B., Jajodia, S. & Kankanhalli, M.S. (2006), "Building trust in peer-to-peer systems: a review", International Journal of Security and Networks 1(1-2), pp. 103-112.

Footnotes

1. See also Oram 2001. The case of file-sharing and its effects on copyright law has been particularly salient (David, 2010).

2. See Hughes, 1993; Levy, 2001.

3. In a fractional-reserve banking system, commercial banks are entitled to generate credit, by making loans or investments, while holding reserves that account for only a fraction of their deposit liabilities – thereby effectively creating money out of thin air. A report from the Bank of England estimates that, as of December 2013, only 3% of the money in circulation in the global economy was physical cash (issued by the central bank), whereas the remaining 97% was made up of loans and co-existent deposits created by private or commercial banks (McLeay, Radia, & Thomas, 2014).

4. “[Bitcoin is] completely decentralized, with no central server or trusted parties, because everything is based on crypto proof instead of trust. The root problem with conventional currency is all the trust that’s required to make it work. The central bank must be trusted not to debase the currency, but the history of fiat currencies is full of breaches of that trust. Banks must be trusted to hold our money and transfer it electronically, but they lend it out in waves of credit bubbles with barely a fraction in reserve. We have to trust them with our privacy, trust them not to let identity thieves drain our accounts… With e-currency based on cryptographic proof, without the need to trust a third party middleman, money can be secure and transactions effortless.” (Nakamoto, 2009).

5. On 7 November 2008, Satoshi Nakamoto explained on the Cryptography mailing list that "[while we will not find a solution to political problems in cryptography,] we can win a major battle in the arms race and gain a new territory of freedom for several years. Governments are good at cutting off the heads of a centrally controlled network like Napster, but pure P2P networks like Gnutella and Tor seem to be holding their own" (Nakamoto, 2008b).

6. The double-spending problem, common to many digital cash systems, is that people can spend the same digital token twice by simply duplicating it. It is usually solved through the introduction of a centralised (trusted) third party, which is in charge of verifying that every transaction is valid before authorising it.

7. Unless one or more colluding parties control over 51% of the network. See below for a more detailed explanation of the Bitcoin security model.

8. Of course, a variety of tools can be used to reduce the degree of transparency inherent in the blockchain. Just as public-key encryption has enabled more secure communications on top of the internet, specific cryptographic techniques (such as homomorphic encryption and zero-knowledge proofs) can be used to conceal the content of blockchain-based transactions without reducing their verifiability. The most popular of these technologies is Zerocash, a privacy-preserving blockchain which relies on zero-knowledge proofs to enable people to transact on a public blockchain without disclosing the origin, the destination, or the amount of the transaction.

9. In October 2009, the New Liberty Standard published the first Bitcoin exchange rate, of 1,309 BTC per 1 USD, calculated according to the cost of the electricity that had to be incurred in order to generate bitcoins at the time.

10. The first commercial Bitcoin transaction known to date is the purchase by a Florida-based programmer, Laszlo Hanyecz, of two pizzas, ordered (by a volunteer) from Papa John’s, for a face value of 10,000 BTC.

11. Over the years, several people have been outed as being Satoshi Nakamoto – these include: Michael Clear (Irish graduate student at Trinity College); Neal King, Vladimir Oksman and Charles Bry (who filed a patent application for updating and distributing encryption keys, just a few days before the registration of the bitcoin.org domain name); Shinichi Mochizuki (Japanese mathematician); Jed McCaleb (founder of the first Bitcoin exchange Mt. Gox); Nick Szabo (author of the bit gold paper and strong proponent of the notion of “smart contract”); Hal Finney (a well-known cryptographer who was the recipient of the first Bitcoin transaction); and Dorian Nakamoto (an unfortunate case of homonymy). Most recently, Craig Steven Wright (an Australian computer scientist and businessman) claimed to be Satoshi Nakamoto, without however being able to provide proper evidence to support his claim (The Economist, 2016). To date, all of these claims have been dismissed and the real identity of Satoshi Nakamoto remains a mystery.

12. The Bitcoin Foundation has been heavily criticised due to the various scandals that its board members have been associated with. These include: Charlie Shrem, who had been involved in aiding and abetting the operations of the online marketplace Silk Road; Peter Vessenes and Mark Karpeles, who were highly involved with the scandals of the now defunct Bitcoin exchange Mt. Gox; and Brock Pierce, whose election, in spite of his questionable history in the virtual currency space, created considerable controversy within the Bitcoin Foundation, eventually leading to the resignation of nine members.

13. In general, forks can be categorised into soft and hard forks: the former retains some compatibility or interoperability with the original software, whereas the latter involves a clear break or discontinuity with the preceding system.

14. For instance, one of the largest US Bitcoin wallet and exchange companies, Coinbase, was removed from Bitcoin.org upon announcing that it would be experimenting with Bitcoin XT.

15. As of 11 January 2016, only about 10% of the blocks in the Bitcoin network had been signed by XT nodes (Palmer, 2016).

16. Mike Hearn, interview with the authors, April 2016.

17. Patrick Murck, interview with the authors, April 2016.

18. Peter Todd and Pindar Wong, interview with the authors, April 2016.

19. See supra, part I.A.

20. This reveals a significant bias of the Bitcoin community towards technological determinism – a vision whereby technological artefacts can influence both culture and society, without the need for any social intervention or assimilation (Bimber, 1994).

21. As the name indicates, the Proof-of-Work algorithm used by Bitcoin requires a certain amount of work to be done before one can record a new set of transactions (a block) into Bitcoin’s distributed transaction database (the blockchain). In Bitcoin, the work consists of finding a particular nonce to be embedded into the current block, so that processing the block with a particular hash function (SHA-256) will result in a string with a certain number of leading zeros. The first miner to find this nonce will be able to register the block and will therefore be rewarded with a specific number of bitcoins (Nakamoto, 2008a). The amount of work to be done depends on the number of leading zeros necessary to register a block – this number may increase or decrease depending on the amount of computational resources (or hashing power) currently available in the network, so as to ensure that a new block is registered, on average, every 10 minutes. While this model was useful, in the earlier stages of the network, as an incentive for people to contribute computational resources to maintain the network, the Proof-of-Work algorithm creates a competitive game which encourages people to invest more and more hashing power into the network (so as to be rewarded more bitcoins), ultimately resulting in a growing consumption of energy.
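To make this search concrete, the following minimal Python sketch illustrates the principle described above. It is an illustration only, not Bitcoin's actual implementation (which double-hashes a structured 80-byte block header and compares the digest against a numeric target rather than counting zeros); the function name and toy parameters are ours.

import hashlib

def find_nonce(header: bytes, leading_zeros: int) -> int:
    """Brute-force a nonce so that SHA-256(header + nonce)
    begins with the required number of leading zeros (hex digits)."""
    prefix = "0" * leading_zeros
    nonce = 0
    while True:
        digest = hashlib.sha256(header + nonce.to_bytes(8, "little")).hexdigest()
        if digest.startswith(prefix):
            return nonce
        nonce += 1

# Toy difficulty: each additional hex zero multiplies the expected work by 16.
print(find_nonce(b"toy block header", leading_zeros=5))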

22. The difficulty of said mathematical problem is dynamically set by the network: it increases with the amount of computational resources engaged in the network, so as to ensure that one new block is registered in the blockchain, on average, every 10 minutes.
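As a rough sketch of how this dynamic adjustment works in Bitcoin: the target is recalculated every 2,016 blocks, scaled by the ratio of actual to expected elapsed time, with the adjustment clamped to a factor of four in either direction. The Python below is an approximation of this consensus rule, not a verbatim transcription of it; the names and constants are ours.

TARGET_SPACING = 600       # seconds: one block every 10 minutes
RETARGET_INTERVAL = 2016   # blocks between difficulty adjustments

def retarget(old_target: int, actual_timespan: int) -> int:
    """Scale the proof-of-work target by actual/expected elapsed time
    over the last interval; a larger target means easier mining.
    As in Bitcoin, the adjustment is clamped to a factor of four."""
    expected = TARGET_SPACING * RETARGET_INTERVAL  # roughly two weeks
    clamped = max(expected // 4, min(actual_timespan, expected * 4))
    return old_target * clamped // expected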

23. In the early days, given the limited number of participants in the network, mining could easily be achieved by anyone with a personal computer or laptop. Subsequently, as Bitcoin’s adoption grew and the virtual currency acquired a greater market value, the economic incentives of mining grew to the point that people started to build specialised hardware (ASICs) created for the sole purpose of mining, making it difficult to mine without such equipment. Note that such an evolution had actually been anticipated by Satoshi Nakamoto himself, who wrote already in 2008 that, even if “at first, most users would run network nodes, [...] as the network grows beyond a certain point, [mining] would be left more and more to specialists with server farms of specialized hardware.”

24. Bitcoin mining pools are a mechanism allowing for Bitcoin miners to pool their resources together and share their hashing power while splitting the reward equally according to the amount of shares they contributed to solving a block. Mining pools constitute a threat to the decentralised nature of Bitcoin. Already in 2014, one mining pool (GHash) was found to control more than half of Bitcoin’s hashing power, and was thus able to decide by itself which transactions shall be regarded as valid or invalid – the so-called 51% attack. Today, most of the hashing power is distributed among a few mining pools, which together hold over 75% of the network, and could potentially collude in order to take over the network.

25. Note that the “longest” chain is determined by the cumulative amount of Proof-of-Work it embodies, rather than by the mere number of blocks. The reason for this choice is that the chain which required the greatest amount of computational resources is – probabilistically – the least likely to have been falsified or tampered with (e.g. by someone willing to censor or alter the content of former transactions).

26. Selfish mining is the process whereby one miner (or mining pool) does not broadcast a validated block as soon as the solution to the mathematical problem for that block has been found, but rather continues to mine the next block in order to benefit from a first-mover advantage in finding the solution for that block. By releasing validated blocks with a delay, ill-intentioned miners can attempt to secure the block rewards for all subsequent blocks in the chain, since – unless the network manages to catch up with them – their fork of the blockchain will always be the longest one (and thus the one that required the most Proof-of-Work) and will therefore be the one ultimately adopted by the network (Eyal & Sirer, 2014).

27. Selfish miners encourage honest but profit-maximising nodes to join the coalition of non-cooperating nodes, eventually making the network more vulnerable to a 51% attack.
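For reference, Eyal and Sirer (2014) derive a closed-form threshold for when this strategy pays off: a selfish pool earns more than its fair share of rewards whenever its fraction of total hashing power α satisfies α > (1 − γ) / (3 − 2γ), where γ is the share of honest hashing power that mines on the selfish pool's block during a race. With γ = 0 the threshold is one third of the network's hashing power; as γ approaches 1 it approaches zero – in either case well below the 51% commonly assumed.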

28. Mt. Gox was one of the largest Bitcoin exchanges, handling over 70% of all bitcoin transactions as of April 2013. Regulatory issues led to Mt. Gox being banned from the US banking system, making it harder for US customers to withdraw funds into their bank accounts. On 7 February 2014, Mt. Gox halted all bitcoin withdrawals, claiming that it had encountered issues due to the “transaction malleability” bug in the Bitcoin software (which enabled people to pretend that a transaction had not occurred when it actually had, so as to induce the client to create an additional transaction). On 24 February, the Mt. Gox website went offline and an (allegedly leaked) internal document was released showing that Mt. Gox had lost 744,408 bitcoins in an (allegedly unnoticed) theft that had been going on for years. On 28 February, Mt. Gox filed for bankruptcy, reporting a loss of US$473 million in bitcoins.

29. These include, amongst others, the Bitcoin Savings and Trust Ponzi scheme; the hacking of exchanges such as Bitcoinica, BitFloor, Flexcoin, Poloniex, Bitcurex, etc.; and even online Bitcoin wallet services such as Inputs.io and BIPS.

30. BIP stands for Bitcoin Improvement Proposal. "A BIP is a design document providing information to the Bitcoin community, or describing a new feature for Bitcoin or its processes or environment. The BIP should provide a concise technical specification of the feature and a rationale for the feature. We intend BIPs to be the primary mechanisms for proposing new features, for collecting community input on an issue, and for documenting the design decisions that have gone into Bitcoin. The BIP author is responsible for building consensus within the community and documenting dissenting opinions." (https://github.com/bitcoin/bips/blob/master/bip-0001.mediawiki)

31. https://github.com/bitcoin/bips/blob/master/README.mediawiki

32. “Bitcoin governance is mainly dominated by veto power, in the sense that many parties can choose to stop a change; we haven't seen much use of power to push through changes. The main shortcoming is users have, in practice, less veto power than they should due to coercion.” (Peter Todd, interview with the authors, April 2016).

33. “If multiple competing implementations of the Bitcoin protocol exist, mining pool operators and wallet providers must decide which code to run. Their decision is disciplined and constrained by market forces. For mining pool operators, poor policy decisions can lead miners to withdraw hashing power from the pool. Wallet providers may find users shift their keys to another provider and exchange services may find liquidity moves to other providers. This structure favors stability, resilience and a conservative development process. It also makes the development and standards setting process resilient to political forces.” (Patrick Murck, interview with the authors, April 2016).

34. The first kinds of physical Bitcoin wallets consisted of pre-loaded Bitcoin accounts whose private keys were stored in the shape of physical coins that people could hold.

35. As detailed above in Part I.A.

36. Mike Hearn, Pindar Wong, and Patrick Murck, interview with the authors, April 2016.

37. Peter Todd, interview with the authors, April 2016.

38. For instance, what happens when the freedom of expression made possible by the network impinges on country-specific laws? And who should decide (and on what grounds) whether the new .amazon generic Top Level Domain (gTLD) should be attributed to the US American company which has trademarked the name, or to the Brazilian government which lays claim to a geographical area?

New UN resolution on the right to privacy in the digital age: crucial and timely

The rapid pace of technological development enables individuals all over the world to use new information and communications technologies (ICTs) to improve their lives. At the same time, technology is enhancing the capacity of governments, companies and individuals to undertake surveillance, interception and data collection, which may violate or abuse human rights, in particular the right to privacy. In this context, the adoption by the UN General Assembly’s Third Committee, on 21 November, of a new resolution on the right to privacy in the digital age is both timely and crucial for protecting the right to privacy in light of new challenges.

As with previous UN resolutions on this topic, the resolution adopted on 21 November 2016 recognises the importance of respecting international commitments in relation to the right to privacy. It underscores that any legitimate concerns states may have with regard to their security can and should be addressed in a manner consistent with obligations under international human rights law.

Recognising that more and more personal data is being collected, processed, and shared, this year’s resolution expresses concern about the sale or multiple re-sales of personal data, which often happen without the individual’s free, explicit and informed consent. It calls for the strengthening of prevention of and protection against such violations, and on states to develop preventative measures, sanctions, and remedies.

This year, the resolution more explicitly acknowledges the role of the private sector. It calls on states to put in place (or maintain) effective sanctions and remedies to prevent the private sector from committing violations and abuses of the right to privacy. This is in line with states’ obligations under the UN Guiding Principles on Business and Human Rights, which require states to protect against abuses by businesses within their territories or jurisdictions. The resolution specifically calls on states to refrain from requiring companies to take steps that interfere with the right to privacy in an arbitrary or unlawful way. With respect to companies, it recalls the responsibility of the private sector to respect human rights, and specifically calls on them to inform users about company policies that may impact their right to privacy.

The resolution notes that violations and abuses of the right to privacy increasingly affect individuals, with particular effects on women, children and vulnerable or marginalised communities. It links the right to privacy with the exercise of freedom of expression, as well as with participation in political, economic, social, and cultural life, a framing that challenges the increasing identification of security with surveillance by governments and corporations.

Since the UN General Assembly’s first resolution on this issue in 2013, adopted in reaction to the Snowden revelations, its approach has evolved from a largely political response to mass surveillance to addressing more complex challenges around data collection and the role of the private sector. These are encouraging developments, and have already brought about some positive change, including the establishment of a UN Special Rapporteur on the right to privacy. But more work is needed to implement the resolutions, especially the calls on states to improve their laws and practices with respect to surveillance. Like all UNGA resolutions, this one is non-binding; unless states take their commitments seriously and civil society applies pressure, there is a risk that these resolutions remain just words on paper.

Looking forward, the resolution suggests that the Human Rights Council (HRC) consider holding an expert workshop as a contribution to a future report of the UN High Commissioner for Human Rights on this matter. Practically speaking, this means the issue is bounced back to the HRC in Geneva, which will decide when and under what terms to hold the workshop. Expert workshops can be an excellent way to discuss challenging and complex issues outside the highly politicised confines of the UNGA or the HRC; should this workshop happen, it has the potential to be a valuable opportunity to work through some of the thorny issues around privacy in the digital age, of which there is no shortage.

Why we should support the “European Charter of digital fundamental rights”

On 1 December 2016 the “Charter of digital fundamental rights of the European Union” was published by its 27 initiators. It quickly sparked a fierce debate about the sense and nonsense of such a document. Critics believe that the text is a juridical disaster, poorly drafted and riddled with contradictions. Still, it is unique in reaching out to much broader audiences than any digital charter before it. And we should engage, because European values matter for everyone, as they shape the rules for a digital world.

Oh no, another one!

That was our initial thought when we heard about the Charter of digital fundamental rights of the European Union. Over the last years there have been many charters of digital rights and other attempts to formulate a normative framework for how we should live in a world going digital. What is more, we have even been involved in drafting one ourselves. Some of these texts have become documents that are regularly referenced by activists; others have turned into self-imposed principles for progressive organisations. There are some outstanding examples of how far normative frameworks can go. Brazil demonstrated with the establishment of the “Marco Civil da Internet” (Civil Rights Framework for the Internet) how a long, inclusive process of crowdsourcing and participation can lead to enactment as hard law.

So far, the European Charter has been neither as inclusive, nor is it anywhere close to becoming a legally binding document. We see it as a first step, a form of petition. The authors of the Charter include well-known experts from business, academia and civil society, as well as politicians from a surprisingly wide range of political parties. After its initial publication, the Charter quickly gained the support of over 1,000 signatories. The Charter is, however, also harshly criticised. Prominent voices from the digital community and legal experts are denouncing the text.

Just the beginning

We appreciate the broad and controversial debate sparked by the Digital Charter. We don’t expect a perfect consensus as a result, but rather a rough understanding that reflects the heterogeneous perspectives the different stakeholders want to express. The text of the Brazilian Civil Rights Framework for the Internet, though seen as controversial, changed completely between its initial discussion in 2009 and its legal enactment in 2014.

But before it can become a truly European initiative, we would like to see the group of mostly German authors build on existing efforts, for example Italy’s 2015 Declaration of Internet Rights, which went through public consultation online and offline with several stakeholders over a period of five months. The perspectives of other European member states will need to be integrated as soon as possible.

Some commentators argue there is no need for a Charter, as existing laws are sufficient. We don’t think so, for a number of reasons, not least technological developments such as big data and artificial intelligence (AI). Existing normative frameworks need to address the ever-growing tension between the national character of legislation and the global character of the internet, a question raised, for example, at the Global Internet and Jurisdiction conference in Paris in November 2016, and heavily discussed within global trade regimes.

As policy fields are increasingly interdependent within the global digital realm, it is time to break out of traditional vertical policy silos: The internet brings together economic affairs, human rights, media policy, international relations, development policy, trade policy, consumer protection, anti-trust, technical standards, to name just a few. That’s why it is high time for a broadly shared consensus among stakeholders representing our values.

The legal experts criticising the Charter as a juridical disaster should stop complaining about the bad craftsmanship of the text and instead help translate the values represented in the Charter into a workable legal architecture, ironing out its contradictions. Among the existing contradictions, our biggest concern is the ambiguity left between free speech and censorship. European legal norms embody a clear commitment to free speech, so we can remain hopeful that this ambiguity will eventually be resolved. Rather than dwelling on further details and shortcomings, we would like to point to the bigger picture of the potential a European Digital Charter holds.

Europe’s third way in the digital world

From an international perspective, the Charter is an impressive text. The 23 articles address topics that in many other contexts would be highly controversial or even forbidden. If you have ever followed the negotiation of an international resolution that deals with the digital world, you will appreciate the boldness of the Charter.

From a global bird’s eye view, we observe two dominant paradigms. One is represented by US policies and could be called the “Internet for companies”: the state takes a laissez-faire approach, with a strong belief that market forces will sort things out, a Schumpeterian disruption narrative to justify monopolies, and easy paths to monetise citizens’ data. The second dominant view, diametrically opposed to the first, can be seen in China’s policies – the “Internet of the state”. Having governmental gatekeepers intervene in business and civic activities, China is practicing internet fragmentation motivated by industrial policy and control of the public sphere. This is where the Charter of digital fundamental rights of the European Union has the potential to step in and offer a third way, shifting power from businesses and the state towards us as users and citizens, while preserving the free and open character of the internet. That is a vision for a genuine “Internet of the people”. We therefore support the Charter and encourage the citizens of Europe to engage and shape an information society according to our shared values.

Bulgaria: regulating pornography in the new digital realities

In March 2013, the European Parliament contemplated a proposed bill intended to criminalise pornography in response to increasing pressure from various constituencies to eliminate gender and sexual stereotypes. The bill was a reaction to a report prepared by the Committee on Women's Rights and Gender Equality proposing a number of measures to improve gender equality within EU member states, which, among other things, called for "a ban on all forms of pornography in the media and on the advertising of sex tourism" (2012/2116(INI), Article 17).

In response to the report, the European Parliament voted 368-159 in favour of passing it, with 98 abstaining. However, the controversial "porn ban" was rejected. Many internet freedom advocates, including Christian Engström, a Member of the European Parliament (MEP) and deputy leader of the Swedish Pirate Party, noted that while the goals of the Committee were “of course very laudable”, as always, “the devil is in the detail” (Engström, 2013, n.p.). The defeat of this bill demonstrated that enforcing legal measures concerning online pornography is an enormous challenge. While the introduction of the bill was hailed as a pioneering effort on the part of the EU to establish and enforce unprecedented measures for monitoring online pornography and the growing culture of exploitation of women's sexualities in the media, its controversial porn ban was rejected for its potential to stifle free speech. This paper offers an overview of the legal and cultural discourse surrounding the regulation of pornography in the newest European member state, Bulgaria. As a former socialist state, Bulgaria treated the topic of pornography as an ideological issue, a problem of the morally corrupt West, and therefore minimised its relevance to social and legal discourse. With the collapse of communism, however, pornography became one of the fastest-growing and most sought-after media imports, quickly turning into a staple of street culture and late-night entertainment. While widely accessible, pornography is not defined or regulated by media law. In fact, Bulgaria only recently began modifying the few existing criminal legal measures, partly because of increasing pressure from the European Union (Zankova, 2013). By looking at the historic and cultural context of pornography law in Bulgaria, this paper offers a critical analysis of the legal, cultural, and political challenges of monitoring and regulating the traditional and digital means of distributing and consuming pornography. It thus uses Bulgaria as a case study to explore the complex transnational institutional mechanisms and regional responses involved in policy matters related to pornography, especially in the new digital realities of the world in general and the Eastern European region in particular.

As an attempt to study policy in a larger transnational context, this study offers a new direction in policy studies because, as Burgelman (1997) has argued, "there is a need to look into the tension between the national, the European, and the global levels in communication policy" (p. 141). In academia, the most common approaches to studying pornography come from a feminist critique of the representation of female sexuality. In this area, three authors, namely Dworkin ([1979] 1999), Kendrick ([1987] 1996), and Kipnis (1996), represent the different ways in which the term pornography is used and applied in theory, allowing for productive difference that enables a multi-dimensional analysis of sexual texts. Among these established critical perspectives, Kendrick’s view of the role of regulations as a means of defining what sexual content is considered pornographic is theoretically closest to the study at hand. However, this study takes a different approach from Kendrick’s, since it concerns itself with online pornography and also acknowledges the relevance of the internet governance literature, which has influenced much of the debate concerning the best way to ensure the proper, equitable and legally sound operation of a global network of information that knows no borders nor has discernible physical attributes. Internet governance functions carry significant public interest implications, and these functions are diffusely distributed among new institutional forms, the private sector, and more traditional forms of governance. Among those, matters of regulation and control have been used both to compare and to distinguish the offline world from the online one in order to create a corresponding framework of global policies and regulations, which remains a sizeable challenge. This challenge is further complicated by the unprecedented speed of technological innovation and the frequent failure, and often virtual impossibility, of national and international regulatory bodies to respond to those changes at the same rate.

Specifically, this paper addresses the following research questions: how has the cultural and legal discourse on pornography regulation evolved over time? How has this discursive evolution impacted the current effort to regulate online pornography in Bulgaria? Finally, how do the current challenges in regulating online pornography in Bulgaria capture the complexity of doing internet governance? This paper also argues that the absence of specific and accurate language defining pornography in the current law, and the resulting virtual absence of enforceable legal sanctions on the ground, are a result of the country’s desire to break away from a past of censorship and control in favour of a legal outlook that values freedom of information and privacy rights over government intervention into media and business practices. The paper thus compares how two competing frameworks - the US and the European - are influencing Bulgaria’s struggles over its pornography laws, both online and offline. Through a qualitative document analysis of Bulgarian criminal and media law, the study engages in a systematic examination of the development of legal definitions of pornography, focusing on the most recent addition of language concerning online pornography, and on how these definitions in turn reflect Bulgaria’s struggle to define its conceptual approach to media regulation in particular, and internet governance in general. The study thereby fills the void that Just & Puppis (2012) described as the paucity of theoretically-grounded communication policy research.

Brief cultural history of pornography in Eastern Europe and Bulgaria

In Eastern Europe the most recent history of regulating pornography dates to the ideological shift triggered by the Soviet revolution. On the one hand, the socialist ideologies of the Soviet revolution advocated for openness in communicating and engaging in sexual relationships, celebrating a symbolic "victory" over the shackles of outdated moral codes. On the other hand, they also promoted a view of sexuality so clinical and so driven by the ultimate goal of procreating to ensure the future economic survival of the regime that it treated sex and, by extension, any depictions of it, as an animalistic and ultimately repulsive social practice (Borenstein, 2008). This ambiguous treatment of the topic of sex proved a most convenient ideological narrative, which, as Carleton (2005) pointed out, at the onset of the socialist revolution produced a liberating interpretation of sexual relationships and the laws that define them.

Stalin’s ascent to power and the conservative values he espoused brought the possibility of a debate about sex and pornography to a screeching halt. Sexual discourse was in essence removed from the public conversation. The only allowed conversation about sex was focused on improving one’s general sexual health, and occasionally, on improving a couple’s sense of intimacy. "Adolescent sex education, called ‘sanitary education’ was reduced to instruction in hygiene and physiology" (Baban, 2000, p. 239), removing any and all visual allure of sexual seduction from sexual relationships.

Even though the official policy of the communist ideologues was to treat sex as taboo, this did not deter the efforts of local agents to introduce sexually provocative content - usually produced abroad and distributed through underground networks - in order to challenge existing norms and meet local demand. In countries such as Hungary, East Germany and Yugoslavia, which enjoyed a freer market and travel opportunities with access to the West, a more tolerant attitude towards Western-produced commodities also translated into attitudes towards sex and pornography that strayed from the rigid ideological approach of the Stalinist era. The general approach in these countries was that sexually provocative and, in fact, pornographic content would be tolerated as long as it deterred socialist subjects from questioning the legitimacy of the political regime (Magó-Maghiar, 2009; Žikić, 2009).

The changing cultural climate of the 1980s, triggered by growing exposure to Western content and cultural products alongside the steady decline of the economic success of the socialist state, led to an increase in the circulation of pornographic materials. In Bulgaria, the distribution of smuggled pornography was already a thriving underground business; however, this "taboo" practice was far less invested with ideological intent. In fact, as Ibroscheva (2013) pointed out, “the socialist political establishment began to entertain the idea of more openness in its treatment of topics pertaining to the body—sex, erotica, and to some extent, even soft porn—only as an attempt to mitigate, if not entirely suppress, the growing discontent with the dismal economic output and stifling lack of political freedom” (p. 92-93). Gradually, homegrown publications that borrowed widely from erotic Western media, such as Playboy, Penthouse, etc., became sought after as the one Western import that provided instantaneous gratification and escape from reality. As Goscilo (1996) noted in her extensive studies of the history of sex in post-Soviet Russia, “by the (sic) mid-1992 pornography was thriving as a mainstay of the novelties introduced along with kiwis and deodorants into Russia’s capital” (p. 135).

While post-communist Russia was poised to experience a slow yet widely profitable growth of the pornography market, in countries like Bulgaria and the former Yugoslav Republic of Serbia the mushrooming of private television outlets also meant that widely sought-after pornographic content was the perfect programming block filler, especially in the wee hours of the night. As Nikolic (2005) contended, "it was exactly media - which was in every other word closed - that ‘opened the sexual views’ of viewers" (p. 135). In East Germany, which found itself in the closest proximity to the West, the invasion of the West into the local post-socialist culture was most felt in what Norman (2000) called the “sex wave” that materialised in the production of erotic and pornographic films, a previously nonexistent cultural space. As Daskalova (2000) also noted, a “real explosion” of the magazine publishing business took place, most notably demonstrated in the unprecedented and vastly popularised visuals of sex found in virtually all printed materials. As Deltcheva (1996) pointed out, “the ‘pornographic network’ gained enormous dimensions - starting from the sales of Emanuelle at every street corner to the (pirated) Playboy photographs which periodically appeared in leading daily and weekly newspapers” (p. 307). However, the state remained conspicuously silent, making no attempts to define, regulate or sanction such content and its public displays.

In Bulgaria, the legal language of communism dealt very strictly with all sexual "deviances", including masturbation, indecent public exposure, homosexuality and pornography, which was criminalised as soon as the communist regime was installed (Bulgarian Criminal Law, 1951; Special Section V, Crimes Against the Person: para. 9 Debauchery). As Popova (2010) contended, the purpose of the law was to define and uphold a sense of morality in which all types of erotic actions and sexual tools were condemned on the grounds of their antisocial and individualistic character, further amplified by the traditional rhetoric of disease and decay, in turn inspired by the socialist metaphor of society as a living, breathing, optimally performing organism. For example, masturbation was seen in this capacity as a medical condition with fatal consequences; homosexuality was perceived not only as a sexual but also as a social perversion; and pornography was cast as the leading factor of moral decay and a highly successful weapon of Western ideological subversion. The Criminal Code (CC), introduced in 1968 and the standing law until recently, was defined as existing to “defend the socialist rule of law and to educate citizens to respect the rules of the socialist community”, and as such prohibited the creation and dissemination of pornographic content, punishable by imprisonment of up to one year or a fine (Ognyanova, 2007b, para. 3).

In the early years of the post-communist transition, pornography nestled itself as a "normalising" and virtually unsanctioned cultural mechanism, expressing a collective desire to join the market and exercise business entrepreneurship, while at the same time, filling a giant gap of “sexual education” left behind by the communist denial of sexual pleasure. Ninov (2001) reflected on this trend, stating that “freely speculating with its own ideas of democracy, the ‘flesh market’ is trying to defend its output, qualifying it as moral sexual education. The deficit in sexual literature and ignored sexual education before 1989, on the one hand, and the opportunity for free self-expression, for shedding inferiority complexes and doing away with censorship, on the other, are at the core of the unprecedented interest in pornographic publications” (p. 396).

To many experts, the unbridled access to pornographic content in Eastern Europe, and specifically Bulgaria, was seen as the unintended consequence of the maturation of a new market economy and the general liberalisation of the post-socialist transition. Similarly, a number of pundits confidently predicted that porn "will be... channeled and confided to the needs of a group of people suffering from identity crisis or problem puberty..." (Ninov, 2001, p. 399), casting the sudden boom of previously tabooed media output as an outgrowth of the transition. The fact remains that sexual content, and to some extent pornography, are now fully integrated in media practices, in part because, as Kirova (2012) pointed out, “for quite some people, their appearance is not a relict manifestation of outdated social relations, but rather an instance of ‘innovation’ and ‘modernisation’ of un-cool Bulgarian morality” (para. 36).

Media regulation, pornography and internet governance: complex challenge

While the media moved fast to modernise, the government was extremely slow in responding to the need for policy and regulation. In fact, its regulatory response was described as "overhasty, unpremeditated and premature" (Georgieva-Stankova, 2012, p. 195). After the collapse of the regime, the vacuum left behind by the control of the communist authorities needed to be filled by media laws and policies that had no precedents in the cultural and legal communist past. In fact, as Ognyanova (2009) pointed out, “unlike other sectors of the economy, where the government adopts the so-called sectoral policies, no political acts (strategies) for the media sector in Bulgaria have been developed in the years of democratic transition” (p. 31).

The main media supervisory body, the National Council for Radio and Television (NCRT), was established in 1997 and renamed the Council for Electronic Media (CEM) in 2001. CEM is responsible for overseeing public service broadcasting as well as commercial broadcasting, including advertising. Its members are chosen by the Parliament and the President. The current law guiding the operation of media in Bulgaria is the Radio and Television Act (RTA) of 1996, which took nearly six years to draft. To this day, however, the internet remains unregulated (Davidova, 2014).

Modernising Bulgaria’s post-communist culture also meant acknowledging the fast-paced growth of technology and the infrastructure that enables its everyday use. Even though Bulgaria provided commercial internet services as early as 1992, access was not widely available and was often seen as a luxury. Today, Bulgaria boasts one of the fastest internet connections in the world, and 59 percent of the population has access to some internet services (http://www.nsi.bg/en/node/6099). In addition, children are among the most active internet users, with between 50 and 60 percent of those aged 7-19 currently using the internet (http://www.nsi.bg/ZActual_e/IT_HH2006.htm). According to some non-governmental organisations, over 90 percent of children aged 8-16 have already seen online pornography (http://www.sva.bg). With data also indicating that the most frequently searched phrase on the internet in 2015 was "porn with Galena"1, it becomes clear that the digital distribution networks enabled by the internet quickly made porn part of the cultural mediascape of post-socialist Bulgaria.

The problem of defining pornography is complex enough to have been described by Georgi Lozanov, the head of the CEM, as a "task worth the Nobel Prize" (bTV interview, aired 2 June 2015). Matters become even more complex when pornographic content meets new channels of dissemination and distribution, especially in light of access to digital technologies, evoking debates about internet governance. "Internet governance" is a contested term with various definitions (Hofmann, 2005). As Mueller (2010) suggests, internet governance debates have often been reduced to an exaggerated dichotomy between the extremes of cyberlibertarianism and cyberconservatism. The former can resemble utopian technological determinism, while the latter is basically a state sovereignty model that extends traditional forms of state control to the internet with the goal of adequately serving the public interest.

In Bulgaria, the press and the internet are not currently regulated by CEM, despite multiple attempts to craft a press law and introduce internet-related regulations. As Marinova (2008) pointed out, "in principle the Bulgarian government does not regulate internet communications and is only responsible for the provision of services" (p. 3). However, a professional organisation handles the development and usage of the internet in Bulgaria. The Internet Society (Bulgaria) consists of two subgroups: the Internet Architecture Board, dealing with architecture, protocols and standards, and the Internet Engineering Steering Group, which is in charge of the technical processes of building the standards of internet performance. 2 Both organisations have been seen as defending the interests of Bulgarian internet providers against excessive regulation and restrictive internet policies imposed by the state. Issues pertaining to the harmful content of pornography, specifically online child pornography, are monitored by non-governmental bodies such as the National Center for Safe Internet, while reported criminal activities concerning child porn are investigated by the Cybersecurity Department of the Ministry of Internal Affairs.

At present, there is no state regulatory authority in charge of overseeing and monitoring online services. CEM does not have the competency to license, and consequently to monitor, content distributed exclusively online. In fact, the current media regulatory framework does not even have a working definition of social media. With regard to television programmes that stream content online, this can be evaluated as an incomplete transposition of the Audiovisual Media Services Directive (AVMSD) and thus a violation of European law, which requires each member state to set up an independent regulator for audiovisual media services that meets the criteria necessary to define user-generated videos and video-sharing social media platforms. In the case of commercial communications via online channels, such activities are delegated to the purview of the NCRT, but the measures are not yet efficiently enforced. With regard to protecting minors, there is neither a co-regulation nor a self-regulation mechanism in place. The outcome of this lack of oversight is, as Ognyanova (2007a) points out, a misconstrued notion of protecting children and minors, which focuses on punishing internet pornography as crime rather than on preventing crime in the first place. "In the EU countries, the guiding principle of the law is to protect children, with fewer, but consistently enforced bans. In Bulgaria, all porn is banned, but children are given fewer protections under the law" (Ognyanova, 2007a, para. 5). This is indeed alarming, as data from the National Center for Safe Internet shows that in 2015 there were over 2,500 hotline tips on sexual crimes against minors online, many of which were perpetrated by other minors (Lazarova, 2016, para. 1).

Between the EU and the US model: ramifications for Bulgaria’s law

Because Bulgaria joined the EU in 2007, its legal approach to media, as well as to other matters of judicial reform, is to be guided by directives set forth by the Council of the European Union. For example, media law enforced by the CEM is to be compliant with the guiding multilateral directive "Television Without Frontiers" (TWFD), which laid out the universal principles and legal responsibilities surrounding the operations of media entities in European Union member states (Council Directive 89/552/EEC, Art. 22 and 22a, 1989). With the exponential growth of the internet and the amount of pornographic content made available online, the directive was amended in 1997, after a meeting between the Council of the EU and representatives of the member states, to address the regulation of harmful and illegal content on the internet, notably child pornography (Art. 4 Council Recommendation, 98/560/CE, 1998 O.J. (L 270) 48, Annex 22.2 (a-c)). Ultimately, the TWFD was expanded to also cover pornographic material available on the internet, banning among other things television programmes that could harm the physical, mental, or moral development of minors (Council Recommendation 98/560/CE, art. 4, 1997, O.J. (C70) 1). The EU has been engaged in an active pursuit to curb and eradicate child pornography on the internet by focusing on prevention rather than criminal prosecution. As Eko (2009) argued, “The European Union emphasises measures–including content-based ones–in order to avoid the need to apply criminal prosecution and other enforcement measures that may be damaging to the right to personal communication, privacy and data protection as well as the right to disseminate nonprejudicial information” (p. 135).

The United States, on the other hand, has exhibited one of the most comprehensive sets of legal measures to deal with child pornography. The US approach is two-pronged. On the one hand, guided by the First Amendment, sexual expressions and pornography are treated as protected speech. The US Supreme Court has ruled that pornography is protected speech under the First Amendment and therefore, trying to ban it is unconstitutional. Because the internet regulatory regime in the United States is also defined by the concept of the "marketplace of ideas", the internet has been afforded the same protections as the press, which means that as far as pornography is transmitted or stored via internet means, it is protected by the First Amendment (Eko, 2009).

At the same time, case and statutory law have been applied and kept abreast of technological advancements to protect children and minors from the dangers of child pornography. Two notable examples include the Child Protection and Obscenity Enforcement Act (18 U.S.C. paras. 2251-2256 (1988)) and the Child Pornography Prevention Act of 1996 (P.L. 104-208, Div A, Title I para. 101(a) [Title I, para. 121] 110). The Child Pornography Prevention Act of 1996 was part of the omnibus Communications Decency Act, which framed the internet as a dangerous space for children. As the internet became an increasingly common site where sexual predators stalked their victims and where pornographic content was freely available, Congress acted by passing the Communications Decency Act of 1996 (CDA) as part of the Telecommunications Act of 1996, which to date is the primary legal document defining the functions of the internet (47 U.S.C. para. 223 et seq. (1996)). These attempts of the government to criminalise child pornography have been successfully challenged on multiple occasions by numerous entities, including the pornographic industries and civil liberties organisations such as the American Civil Liberties Union (ACLU), and were eventually followed by the passing of the Child Online Protection Act (COPA) in 1998 (47 U.S.C. para. 231(a)(1)). The constitutionality of COPA was once again contested and the act was eventually invalidated on the grounds that filtering content and other technology-driven solutions are better than governmental intervention and the criminalisation of content that might otherwise be afforded First Amendment protection (Eko, 2006). It becomes clear that European and US law differ significantly in their treatment of online pornography, and those differences are important to pinpoint in the case of Bulgaria’s current struggles as it tries to reconcile competing legal frameworks that legislators seem to evoke in support of their proposed bills.

In Bulgaria, where, as Marinova (2004, p. 4) points out, "the state regime has a largely laissez-faire attitude, and the field of communications is especially liberal," crafting legal definitions that do not violate the newly earned freedom of speech and yet adequately address problems with online and offline pornography becomes challenging, to say the least. Faced with two legal paths - the US one, treating child pornography as a crime while protecting porn in general, and the European one, pursuing child porn as a human rights violation - Bulgaria finally amended its Criminal Code in 2007 to offer a new definition of “pornographic material”:

Pornographic material is now defined as a material which is indecent, unacceptable or incompatible with public morals and which depicts in an open manner a sexual conduct. Such conduct shall be action, which expresses real or simulated sexual intercourse between persons of the same or different gender, sodomy, masturbation, sexual sadism or masochism, or lascivious demonstration of the sexual organs of a person (Amendment to the Bulgarian Criminal Code, State Gazette No. 38, May 11, 2007).

The provision of Article 159 of the same act was also amended to introduce, for the first time, legal sanctions directed at online pornography. "A person who possesses or provides pornographic material for himself or for another person through a computer system or via other means, material that has featured a person who has not turned 18 years of age or one who has the appearance of such a person, shall be punished by imprisonment of up to one year or a fine of up to BGN 2,000" (ibid.). The sanctions, curiously, also differentiate between online and offline pornography: both the prison term and the monetary fine are significantly higher for offline pornography than for internet pornography, a measure possibly reflecting the view that offline pornography involves a greater degree of intentionality and potential to inflict physical harm.

Conclusion

There has been an ongoing discussion as to whether the internet, as a general rule, lends itself to governance by traditional sovereigns or whether something in the net's architecture resists such forms of control. In recent years, pornography has become a hot topic of discussion involving national and global governance. The speed, ease and accessibility of pornographic content today are indeed unprecedented, and such content has proven virtually impossible to curb in both traditional and online settings. As York (2016, May 25) argued, "...banning pornography is all but impossible, unless we’re comfortable with the collateral damage". Bulgaria’s standing challenge is to find model language for its media law that effectively addresses pornographic content, borrowing from practices in the West, namely the European Union and the United States. As a member of the EU, Bulgaria is committed to its laws and directives, which define and guide the sanctioning and criminalising of child pornography - a commitment clearly demonstrated in the 2007 amended language of the Bulgarian Criminal Code. In an attempt to act as an "exemplary" member state of the EU, Bulgarian legislators proposed a blanket ban on all pornography in order to protect "children and human dignity." On the other hand, Bulgaria is also shaking off the remnants of communist censorship that stifled freedom of expression. In an attempt to emulate and adopt a new, US-inspired media philosophy that rejects government interference in the functioning of the press in any form, Bulgarian media embraced sexual content as a symbolic opposition to the morally contrived communist past. To satisfy, and perhaps please, EU regulators beyond reproach, Bulgaria has effectively banned all pornography while virtually failing to enforce any of the sanctions it has mandated against transgressors. As Ognyanova (2007b) points out, "Bulgaria is a country of paradoxes - pornography is fully and completely banned, but you can find it nestled at the news kiosks, right next to school textbooks for sale" (par. 8).

Bulgaria’s case also clearly demonstrates the vast cultural differences in how online pornography is defined and socially viewed, as well as the wide range of states' law-enforcement capabilities - and of their willingness to impose sanctions - showing that imposing uniform legal rules is challenging, if not impossible. Since Bulgaria’s media experts and legal advisors argue that the pivotal role of such internet regulation is to protect children from the dangers of using digital technologies that could harm them, and because other European countries have had favourable experiences with co- and self-regulation instruments, the need for Bulgaria to introduce meaningful policies to monitor and sanction harmful uses of new digital technologies directed at children becomes even more pressing. Proposing legislation without defining what "media" or "pornography" actually mean is no different from what York (2016) calls a conflation of nudity, sexuality, and pornography, which seems more dangerous than pornography itself. Without transparency and a vibrant public debate involving internet users, providers and regulators, trying to "ban" porn becomes a dangerous and potentially doomed exercise in defining the limits of free speech in the cultural environment of a transitional democracy that is still reconciling its past alongside the enormous market pressures of the present. In this sense, internet governance has significant public interest implications, diffusely distributed among new institutional forms, the private sector, and more traditional forms of governance, and it remains a critical factor in making long-term policy changes that carry meaning rather than mere gestures of symbolism.

References

Baban, A. (2000). Women’s sexuality and reproductive behavior in post-Ceausescu Romania: A psychological approach. In S. Gal & G. Kligman (Eds.), Reproducing gender: Politics, publics and everyday life after socialism (pp. 225-257). Princeton, NJ: Princeton University Press.

Borenstein, E. (2008). Overkill: Sex and violence in contemporary Russian popular culture. Ithaca and London: Cornell University Press.

Burgelman, J. C. (1997). Issues and assumptions in communications policy and research in Western Europe: A critical analysis. In J. Corner, P. Schlesinger, & R. Silverstone (Eds.), International media research: A critical survey (pp. 1-17). London and New York: Routledge.

Carleton, G. (2005). Sexual revolution in Bolshevik Russia. Pittsburgh, PA: University of Pittsburgh Press.

Child Pornography Prevention Act of 1996 (1996). (P.L. 104-208, Div A, Title I § 101(a) [Title I, § 121] 110).

Child Protection and Obscenity Enforcement Act of 1988 (1988). (18 U.S.C. §2257).

Child Online Protection Act of 1998 (1998). 47 U.S.C. § 231.

Council Recommendation 98/560/CE, 1998 O.J. (L 270) 48 at art. 4, Annex 22.2 (a-c).

Daskalova, K. (2000). Women's problems, women's discourses in post-communist Bulgaria. In S. Gal & G. Kligman (Eds). Reproducing gender: Politics, publics and everyday life after socialism (pp. 331-380.). Princeton, NJ: Princeton University Press.

Davidova, P. (2014, May 19). How will the internet media be regulated in Bulgaria? The Union of Bulgarian Journalists, available online http://sbj-bg.eu/index.php?t=22916.

Deltcheva, R. (1996). New tendencies in post-totalitarian Bulgaria: Mass culture and the media. Europe-Asia Studies, 48(2), 305–316.

Dworkin, A. [1979] (1999). Pornography: Men possessing women. The Women’s Press: London.

Eko, L. (2009). Suffer the virtual little children: The European Union, the United States, and international regulation of online child pornography. Journal of Media Law & Ethics, 1(1/2), 107-149.

Engström, C. (2013, March 6). An EU proposal to ban porn through "self-regulation", available online https://christianengstrom.wordpress.com/2013/03/06/an-eu-proposal-to-ban-porn-through-self-regulation/.

Georgieva-Stankova, N. (2011). Media regulations, ownership, control and the 'invisible' hand of the market. Trakia Journal of Sciences, 9(30), 91-203.

Hofmann, J. (2005). Internet governance: A regulatory idea in flux. English translation available online http://duplox.wzb.eu/people/jeanette/texte/Internet%20Governance%english....

Goscilo, H. (1996). Dehexing sex: Russian womanhood during and after Glasnost. Ann Arbor, MI: University of Michigan.

Ibroscheva, E. (2013). Advertising, sex, and post-socialism: Women, media, and femininity in the Balkans. Lanham, MD: Rowman and Littlefield.

Just, N., & Puppis, M. (2012). Trends in communication policy research: New theories, methods, and subjects. Chicago, IL: The University of Chicago Press.

Kendrick, W. [1987] (1996). The secret museum: Pornography in modern culture. University of California Press: Berkeley, Los Angeles and London.

Kipnis, L. (1996). Bound and gagged: Pornography and the politics of fantasy in America. Grove Press: New York.

Kirova, M. (2012, March 26). Breeding inequality: Gender inequalities in Bulgarian advertising. Public Republic, available online http://www.public-republic.net/breeding-inequality-gender-identities-in-bulgarian-advertising.php.

Lazarova, U. (2016, March 10). More than 50 materials with child pornography and 20 internet pedophiles have been caught in 2015. Dnevnik.bg, available online http://www.dnevnik.bg/bulgaria/2016/03/10/2719963_nad_50_materiala_s_detska_pornografiia_i_20_internet/.

Magó-Maghiar, A. (2009). Representations of sexuality in Hungarian popular culture of the 1980s. Medijska istraživanja, 16(1), 73–95.

Marinova, J. (2008). National report for Bulgaria. In U. Hasebrink, S. Livingstone, & L. Haddon (Eds.), Comparing children's online opportunities and risks across Europe: Cross-national comparisons for EU Kids Online. A report for the EC Safer Internet Plus Programme.

Mueller, M. (2010). Networks and states: The global politics of internet governance. Cambridge: MIT Press.

Nikolic, T. (2005). Serbian sexual response: Gender and sexuality in Serbia during the 1990s. In A. Stulhofer & T. Sandfort (Eds.), Sexuality and gender in post-communist Eastern Europe and Russia (pp. 95-125). New York: The Haworth Press.

Nikolova, E. (2011). Bulgaria. The media in South-East Europe: A comparative media law and policy study. Institute of European Media Law, available online http://library.fes.de/pdf-files/bueros/sofia/08097.pdf

Ninov, B. (2001). Forms of erotic expression. In N. Genov & A. Krasteva (Eds.), Recent social trends in Bulgaria 1960-1995 (pp. 395-406). Montreal: McGill-Queen’s University Press.

Norman, B. (2000). "Test the West": East German performance art takes on Western advertising. The Journal of Popular Culture, XXXIV: 255–267.

Ognyanova, N. (2007a, May 14). Pornography has been defined. Available online https://nellyo.wordpress.com/2007/05/14/pornography_definition/.

Ognyanova, N. (2007b, May 23). Online pornography makes an entry in Penal Code. Kapital, available online http://www.capital.bg/biznes/tehnologii_i_nauka/2007/05/23/342834_onlain_pornografiiata_gastrolira_v_nk/.

Ognyanova, N. (2009). Bulgarian media policy and law: How much Europeanization. Central European Journal of Communication, (02), 27-41.

Popova, M. (2004). Regulating the internet: Between cyberanarchy and cybercensorship. LiterNet, 8(57), available online http://liternet.bg/publish11/m_popova/regulation.htm (in Bulgarian).

Popova, G. (2010). "Joyous austerity" in the representations of love and eroticism in socialist Bulgaria. Notabene, 15, available online http://notabene-bg.org/read.php?id=167.

Resolution of the Council and of the Representatives of the Governments of Member States, Meeting within the Council of 17 February 1997 on Illegal and Harmful Content on the Internet, 1997 O.J. (C 70) 1.

Telecommunications Act of 1996 (1996). Pub. L. No. 104-104, 110 Stat. 56.

Television Without Frontiers Directive (1989). Council Directive 89/552/EEC, 1989 O. J. (L. 298) 23 at arts. 22, 22a (as amended by Council Directive 97/36/EC, 1997 O. J. (L202) 60 at art. 22, 22a).

York, J. C. (2016, May 25). Who defines pornography? These days, it’s Facebook. The Washington Post, available online https://www.washingtonpost.com/news/in-theory/wp/2016/05/25/who-defines-pornography-these-days-its-facebook/.

Zankova, B. (2013). Regulating media in the new media environment: Problems, risks and challenges based on five European countries’ case studies. LiterNet, 8(159), available online http://liternet.bg/publish28/bisera-zankova/reguliraneto.htm#1a (in Bulgarian).

Žikić, B. (2009). Dissidents liked pretty girls: Nudity, pornography and quality press in socialism. Medijska istraživanja, 16(1), 53–71.

Footnotes

1. Galena is a famous Bulgarian pop-folk singer, known for sexually provocative music videos and rumoured to have acted in amateur porn.

2. Bulgaria joined the Internet Society in 1997 and its Bulgarian chapter is headed by Veni Markovski, the owner of one of the largest internet providers in the country.


New Charter unlikely to solve ailments of fundamental rights in digital environments


When we produce charters, conventions and constitutions and call for our governments and states to uphold and respect these documents, we do so knowing that the production of such documents is hard work and their enforcement even more so.

In spite of such challenges in the production and enforcement of charters, conventions and constitutions, the European Union is now equipped with the European Convention on Human Rights (ECHR), the Charter of Fundamental Rights (the Charter) and a number of constitutions that dictate in no weak language the relationships which we consider desirable between citizens and their rulers.

Franz von Weizsäcker and Norman Schräpel call for broader, pan-European discussions on a new European Charter covering the digital environment only. Such a charter for the digital world has been inspired by the passing of the Marco Civil da Internet by the legislature in Brazil, and a similar document has been discussed in Italy. It has also been called for by Sir Tim Berners-Lee, inventor of the web and founder of the Web Foundation, inspired by the Magna Charta. One such Charta was discussed in the European Parliament Committee on Civil Liberties, Justice and Home Affairs on 5 December 2016 - detailing rights relating to algorithms, profiling, net neutrality, information security, artificial intelligence, data sovereignty and the need for constitutions.1 Franz von Weizsäcker and Norman Schräpel rally around a Charter (similar to the one discussed by the European Parliament) as a possible middle way between US internet governance, dominated by corporations, and Chinese internet governance, dominated by the state.

Moral fibre gone missing

It is of course a matter of personal taste and political opinion whether one wants to embark upon the project of producing a Charta, a specialised Charta for the digital world. But my personal view is that one more such text is not going to help in the creation of a socially, economically and politically just digital environment as long as we cannot ensure that governments uphold their already existing human rights commitments. As long as our political leaders lack the moral fibre to uphold the rights of their own citizens and the rights of constituents of other territories, producing more documents that create ever more diluted rights is not the way forward.

In this vein, having yet another document on human rights is not going to make France cease its state of exception, under which no rights apply to individuals except the right not to be arbitrarily killed.2 Such an additional document is not going to stop the United States from extra-legally killing non-combatants in territories with which it is not at war.3 It is not going to convince the United Kingdom that exempting troops from human rights obligations is a disastrous way ahead.4

Similarly, such a document will not convince the Swedish government that mass surveillance and internet filtering, both unrestrained by due process or transparency obligations for the public authorities that benefit from those measures, are not a good idea.5 Even in the face of the Secretary General of the Council of Europe announcing, in 2016, mass surveillance and internet filtering as two of his three top priorities for due process in the digital world in the upcoming years,6 both the Swedish government and the national media have been silent. Domestic respect for human rights in the online environment in Sweden is presently restricted to a commitment to further develop the Swedish government website for information about human rights.7

It's not so much the proliferation of charters that is the problem - it is that governments are no longer following their commitments. The moral fibre of the global political class is null, and drafting another charter is unlikely to remedy this problem.

Timing is everything

My second objection is that it is simply not the right time to create such a Charta. In the draft I saw discussed by the European Parliament committee, the digital rights included aimed only to specify or restrict more general rights that already exist in the ECHR, the Charter, the EU framework agreements, or national constitutions. It addresses the right to data protection, consumer protection under certain circumstances, and a call for better competition between service providers - but surely all these aspects are already covered by our current laws? In many of these areas we’ve seen significant legal developments in the past few years, and discussing a Charta at this time could only risk undoing what little progress has been made.

Consider, for instance, the case of Verein für Konsumenteninformation (VKI) v Amazon EU Sàrl (C-191/15), decided by the European Court of Justice on 28 July 2016.8 The ruling establishes that there is a way for consumer organisations in member states to act against unfair contract clauses agreed between consumers in one member state and digital platforms in another member state. With such a way to enforce consumer rights, surely we already possess the tools we require to challenge contractual arrangements that have until now made it difficult for us to exercise our individual rights to algorithmic transparency,9 data protection, information security10 and data sovereignty. European law on data protection is already strong – constitutionally as well as in secondary law. The problem is not the lack of recodification of its principles but the difficulty of enforcement.

Another case to watch is the Norwegian consumer group Forbrukerrådet's challenge of unfair contract terms for fitness apps.11 When the Forbrukerrådet and VKI cases are finally settled, we – the consumers – will hopefully have a way to challenge unfair contract terms, and when the unfair contract terms go away, data protection authorities will be able to step in and consider the appropriateness of specific practices that have until now been protected by private contracts. Put more simply, one could argue that our predicament with privacy rights in the digital private sector has been that data protection authorities have not been given the competency to determine whether a particular contractual arrangement aimed at removing some of the data subject's rights is actually "fair" within the meaning of consumer law, while consumer groups have until now taken little interest in the precarious waters of end-user licence agreements.

Would a Charta help further these positive developments in the consumer rights sphere? It is doubtful. Any political process will be burdened by lobbyism - in this case, two-fold lobbyism: we have states that clearly have no desire to uphold even the existing rights of individuals, and we risk ending up with companies using the negotiation of a new charter as a stepping stone to undo the consumer rights advancements. It is very difficult to see an outcome which serves individual interests well, politically and economically.

Similarly, on the topic of net neutrality and competition, the specific rules of the EU area are already laid down in the 2015 Regulation on open internet access. The Swedish national regulatory authority for telecommunications has been an early arbiter of the meaning of this regulation. On 7 December 2016, it decided that the regulation does not force the regulator to consider competition issues which may arise between information society service providers due to cooperation agreements between the electronic communications service sector and the information society services as we understand them in EU law.12 Practically, this means that the Swedish incumbent telecommunications operator is free to enter into zero-rating agreements with an incumbent social network service provider, on the condition that the consumer also buys a datapack from the incumbent for non-social network services. The reasons why the regulator makes this assessment are, in my understanding, twofold: it lacks the resources and the expertise to assess the competitive landscape in the social network service, online advertisement, and end-consumer communication service sectors; and its resources are collected from administrative fees levied on telecommunications operators, which are not interested in financing the delicate and expensive process of making appropriate competitive assessments for non-electronic communications services markets.

While the European Commission Directorate-General for Competition has made a number of advances in mapping the information society service sector,13 notably in Google/DoubleClick, Microsoft/Yahoo! Search Business and Facebook/WhatsApp, the mapping of online advertisement services, communication services for private persons or social networking services has, up until now, been done mostly in merger decisions. Doctrine is surprisingly scarce, and even in those EU member states where private competition enforcement has been at least nominally possible (such as Sweden), media and advertisement businesses appear not to have been interested in creating further legal or economic clarity. The Swedish regulator, therefore, has decided not to get stuck with a complicated, technical task for which it is not funded, in a field where both it - and seemingly the rest of Europe - lack expertise.

Would a Charta help bring clarity to the legislation for these markets, help us understand the market forces at play or help us build on existing legislative frameworks that have until now been under-exploited in the struggle to guarantee citizens' rights as consumers and individuals? It is doubtful. The generic nature of the language that such a document must necessarily contain could only cause further confusion, and at worst would distract legislators and policymakers from resolving the outstanding issues for regulators and enforcers of the already existing laws.

I do not make this reflection as a lawyer, or a technical expert, or even as a founder of a Swedish digital rights NGO, but as a former legislator and as a citizen. Authoring a Charta might be a welcome distraction for rulers who have less interest in discussing why they are unable to uphold the contracts they have already made with their citizens. To play on a hopefully not too unfamiliar internet meme, which could well describe the interest of legislators and policymakers in the production of a Charta: why work when you can organise a meeting?

I would like to propose a strategy for digital rights that is the reverse: why organise a meeting when we can work?

Footnotes

1. http://www.emeeting.europarl.europa.eu/committees/agenda/201612/LIBE/LIBE(2016)1205_1P/sitt-3521176

2. Politico (13 Nov 2016). France to extend state of emergency. http://www.politico.eu/article/france-to-extend-state-of-emergency/

3. Cf. the Stop Killer Robots campaign: https://www.stopkillerrobots.org/

4. The Independent (4 Oct 2016) British troops to be made exempt from European human rights laws during combat http://www.independent.co.uk/news/uk/politics/british-troops-shielded-legal-action-european-court-human-rights-iraq-afghanistan-a7343551.html

5. ECLI:EU:C:2016:572, Opinion of the Advocate General in cases C-203/15, C-698/15, http://curia.europa.eu/jcms/jcms/p1_216230

6. https://search.coe.int/cm/Pages/result_details.aspx?ObjectId=0900001680646af8

7. Swedish government, Proposition 2016/17:29. http://data.riksdagen.se/dokument/H40329

8. ECLI:EU:C:2016:612, Verein für Konsumenteninformation v Amazon EU Sàrl, Request for a preliminary ruling from the Oberster Gerichtshof, Case C-191/15. http://curia.europa.eu/juris/liste.jsf?num=C-191/15

9. Cf. Art 15, directive 95/46/EC.

10. Cf. Section VIII, directive 95/46/EC.

11. http://www.forbrukerradet.no/siste-nytt/fitness-wristbands-violate-european-law

12. http://www.pts.se/sv/Nyheter/Internet/2016/Operatorer-ska-behandla-internettrafik-likvardigt-enligt-beslutsforslag-fran-PTS/

13. For a definition of these services, consider Article 1.1.b of Regulation No. 2015/1535. http://data.europa.eu/eli/dir/2015/1535/oj

Coding and encoding rights in internet infrastructure


Acknowledgments: The authors would like to wholeheartedly thank Frédéric Dubois, Mikkel Flyverbom, Seda Gürses, and Joris van Hoboken for the precious comments at the review stage, as well as Francesca Musiani, Dmitry Epstein and Christian Katzenbach for the inspiration. They would also like to acknowledge the support of the Digital Methods Initiative at the University of Amsterdam.

‘Does ICANN violate human rights?’, asked a 2014 report by the Council of Europe (CoE), questioning whether the policies and operations of the Internet Corporation for Assigned Names and Numbers (ICANN) unintentionally infringe users’ right to privacy, freedom of association, and freedom of expression. ICANN is a nonprofit corporation in charge of the coordination of a public resource, the internet's underlying address book or Domain Name System (DNS). The CoE report was the first exogenous attempt to gauge ICANN’s policymaking in light of human rights, fundamental freedoms and democratic standards (Zalnieriute & Schneider, 2014). Two years down the road, human rights are not only being encoded in the organisational structure by means of inclusion in the bylaws (Appelman, 2016); they also permeate much of the policy development within ICANN. This ongoing multistakeholder process has been driven, among others, by a small group of civil society actors, who set out to inscribe human rights into names and numbers, protocols and standards, both within ICANN and the Internet Engineering Task Force (Cath, 2015).

Internet governance embraces the global coordination of the DNS and internet addresses, but also various other ‘environments with low formalization, heterogeneous organizational forms, large numbers of actors and massively distributed authority and decision-making power’ (Van Eeten & Mueller, 2013, p. 730). Here, we approach it as a ‘politically contested process of meaning making in which past and future technological projects are framed in a particular light’ (McCarthy, 2011, p. 90). This article explores the meaning-making and discursive role of organised civil society in institutional and infrastructure design, focusing on the management within ICANN of the DNS, an inherent part of internet infrastructure, and its relation to human rights values. We investigate civil society engagement with the organisation, in particular following the transition of the stewardship over ICANN from the US government to the global multistakeholder community announced in early 2014, and map the distinct articulations of the human rights discourse that emerged in relation to internet infrastructure and the organisation itself. In doing so, we adopt the disciplinary lenses of Science and Technology Studies (STS), for STS allows us to address technology as a site of contestation, focusing on its unremitting interplay with the social and on the controversies that might emerge. STS allows us to understand internet governance ‘as a normative "system of systems"’, unpacking ‘the micro practices of governance as mechanisms of distributed, semi-formal or reflexive coordination, private ordering, and use of internet resources’ (Epstein, Katzenbach, & Musiani, 2016). It empowers us to move away from a ‘focus on institutions as agents’ towards investigating ‘the agency of technology designers, policy-makers, and users as those interact in a distributed fashion, with technologies, rules, and regulations, leading to unintended consequences with systemic effects’ (Ibid.; see also Musiani, 2015). 1

We see this civil society-led struggle to inscribe human rights in internet infrastructure as an instance of bottom-up design, defined as the process of enshrining ‘radical’ (Milan, 2014b) or unconventional policy preferences - which sprang out of technological practice and cultures such as the hacker subculture 2 - into governance fora and institutions. 3 Our definition owes much to the STS notions of the social shaping of technology (e.g., MacKenzie & Wajcman, 1999) and co-production (e.g., Jasanoff, 2004), which stress the role of users in technology innovation and in the diffusion of new ideas. It is also inspired by the disciplines of critical design (e.g., Dunne & Raby, 2001) and critical technology practice, especially where these focus on culturally embedded discursive practices (e.g., Agre, 1997; Dourish, 2001).

Bottom-up design seeks to intervene in the organisational process that some STS scholars have termed ‘ordering’, which entails the negotiation of plurality and alternatives within a given context (Mol, 2001). 4 Organisations like ICANN can thus be seen as materially heterogeneous institutions in charge of ordering and arranging difference (Law, 1994b; Woolgar & Neyland, 2013). Following Jasanoff, regulations like the ICANN bylaws are to be understood as ‘devices that order and reorder society’ (2004, p. 14). Looking at these ordering practices allows us to capture ‘the normative effect of mundane practices and daily routines’ that characterise internet governance as a series of ‘hybrid configurations constantly reshaping their purposes and procedures in order to connect and mobilise objects, subjects and other elements, constituted and positioned relationally, around particular issues’ (Epstein et al., 2016). Thus, bottom-up design can be seen as a way of ‘making institutions’ while/by ‘making discourses’, that is to say ‘producing new languages or modifying old ones so as to find words for novel phenomena’ (Jasanoff, 2004, pp. 39-41). In the context of this article, the objects of ordering are internet infrastructure and the associated values as they bear on decision-making and infrastructure/organisation design.

Practitioners of bottom-up design typically operate as critical communities who ‘seek acceptance of a new conceptualization of a problem’, and try to shape the way people think about it (Rochon, 1998, p. 22). An important source of legitimacy for such critical communities is expertise, including technical practice (Ibid.). At the core of bottom-up design is a (variably explicit) connection with technology-oriented movements like the open-source software community (Hess, 2005), and with critical tech communities engaging with alternative technologies and technical practices (Hintz & Milan, 2009; Tréguer, Panayotis, & Söderberg, 2016). These take autonomous technologies as alternative institutions: not just as ‘objects of governance, but also as a set of tools for governance’ (Musiani, 2016, p. 85, original italics). As such, they represent the source of the cultural and ideological references of an important portion of civil society advocates within ICANN.

Following the STS tradition, we approach the struggle for coding and encoding rights within ICANN as an instance of ‘solving a problem of disorder within established cultures’ (Jasanoff, 2004, p. 6), where the disorder is a mismatch between a time-honoured organisational culture, ICANN’s, and the values of part of its community. We take ICANN, and the human rights debate within it, as a site of multi-level contestation (McCarthy, 2011) characterised by ‘disagreement, negotiation, and the potential for breakdown’ (Akrich, 1992, p. 207), and seek to capture the visions and internal diversity of the civil society contingent. We engage in a partial ‘sociography’ of this process, describing the relationships behind it (Ibid.) and the related ‘ordering narratives’ (Doolin, 2003), constantly moving between the ‘technical’ (of both technical infrastructure and organisational mechanisms) and the ‘social’ (of civil society mobilising) (cf. Bijker & Law, 1992).

Original data for this article was collected by analysing, by means of the Python toolkit BigBang (Benthall, 2015), [NCUC-discuss], the principal mailing list of the NonCommercial Users Constituency (NCUC), the main home for civil society organisations and individuals within ICANN. 5 Mailing list analysis was selected for three reasons: first, even though ICANN holds regular face-to-face and teleconference meetings, mailing lists remain a key channel for deliberation and decision-making within the ICANN community; second, participation in the online discussion makes differences and conflicts visible; third, language reflects the ‘cultural and symbolic understandings surrounding the internet’ (McCarthy, 2011, p. 90). In addition to the quantitative analysis, we engaged in qualitative discourse analysis of selected e-mails as well as extensive participant observation (2013-present). 6
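The core of such a traffic analysis can be approximated with nothing more than the Python standard library, as in the minimal sketch below; the mbox file name is hypothetical, and BigBang itself (Benthall, 2015) offers richer abstractions for precisely this kind of archival work.

```python
# Minimal sketch of the mailing list traffic analysis, assuming a local
# mbox export of ncuc-discuss (the file name is hypothetical).
import mailbox
from collections import Counter
from datetime import timezone
from email.utils import parsedate_to_datetime

mbox = mailbox.mbox("ncuc-discuss.mbox")

monthly_volume = Counter()  # messages per (year, month)
first_post = {}             # sender -> date of their first message

for msg in mbox:
    sender, raw_date = msg.get("From"), msg.get("Date")
    if not sender or not raw_date:
        continue  # skip messages with malformed headers
    try:
        date = parsedate_to_datetime(raw_date)
    except (TypeError, ValueError):
        continue
    if date.tzinfo is None:  # normalise naive dates so comparisons work
        date = date.replace(tzinfo=timezone.utc)
    monthly_volume[(date.year, date.month)] += 1
    if sender not in first_post or date < first_post[sender]:
        first_post[sender] = date

for period in sorted(monthly_volume):
    print(period, monthly_volume[period])
```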

In what follows, we reflect on civil society’s engagement in internet governance and introduce the notion of sociotechnical imaginaries, useful to capture the advocates’ visions and values. Next, we present ICANN as an organisation in evolution, particularly susceptible to organisational reform. The third section delves into the empirical analysis and shows how the progressive inclusion of new civil society advocates in the process caused an expansion of the human rights agenda. We conclude by linking these concerted efforts to the recent turn to infrastructure in internet governance (Musiani, Cogburn, DeNardis, & Levinson, 2016).

1. Civil society and internet governance: emerging sociotechnical imaginaries

Civil society 7 emerged as a significant player in the global internet governance debate at the United Nations’ World Summit on the Information Society (WSIS, 2003-2005), when it was invited to the negotiation table ‘on equal footing’ (Hintz, 2009). Ever since, the composite civil society rubric, constituted by individuals and nonprofit organisations, has made its voice heard at the yearly Internet Governance Forum, a WSIS spin-off for multistakeholder dialogue on internet-related public policy issues (Mueller, 2010).

Rather than a uniform monolithic entity, civil society is a multifaceted field of action and beliefs where distinct approaches, worldviews and visions of what the internet is and should look like co-exist, not without conflicts. These collective visions or imaginaires link ‘intentions and projects as well as utopias and ideologies’ (Flichy, 2007, p. 4). They are collective because they tend to be shared by groups and individuals across the world and regardless of national cultures. They can be seen as ‘ways of thinking about what infrastructures are, where they are located, who controls them, and what they do’ (Parks, 2015, p. 355). These imaginaries, knitting together the ‘technological’ and the ‘social’ to say it with STS scholars, emerge from, among other things, ‘the imaginative faculties, cultural preferences and economic or political resources’ of internet users (Jasanoff, 2004, p. 16), and evolve in interaction with the actions and preferences of other actors, including governments and industry (see also Bijker, 1997). They originate in users’ mundane practices as these shape governance discourses. 8 They mirror subtending ideologies, but are also influenced by broader geopolitics such as foreign policy (cf. McCarthy, 2011; see also Turner, 2006).

Sociotechnical imaginaries embody a normative, prefigurative dimension. They can be seen as ‘a means of relating the local and the present to broader developments and structures of the past or the future’ (Hoffmann, Katzenbach, & Gollatz, 2016). They are at once ‘descriptive of attainable futures and prescriptive of the kinds of futures that ought to be attained’ (Jasanoff, Kim, & Sperling, 2007, p. 1). Most importantly, they are instruments of co-production that ‘have the power to shape technological design’ (Ibid.). As we shall see, ICANN policy-making is shaped in ‘bottom-up, consensus-driven, multi-stakeholder’ policy development processes where discursive change is functional to issue naming and recognition as well as agenda setting (cf. Stone, 1988; Dery, 2000). Thus, there is a direct line between the visions enshrined in the sociotechnical imaginaries of the various actors, on the one hand, and the concrete outcomes of institutional and infrastructural formation, on the other.

Focusing on sociotechnical imaginaries allows us to observe civil society in action as it contributes to shape policy in infrastructural and institutional design. As the process is ongoing, this article tracks two moments of co-production, namely the emergence of new ideas and the ensuing contestation phase (Jasanoff, 2004).

2. ICANN and the struggle for human rights

ICANN is a nonprofit organisation incorporated in California whose mission is to ‘ensure the stable and secure operation of the internet's unique identifier systems’ (ICANN, 2016). ICANN is in fact in charge of the management, operation and technical maintenance of a number of databases concerning both ‘names’ (e.g., root name servers, the DNS) and ‘numbers’ (e.g., Internet Protocol address spaces such as IPv4/6, the regional registries). Set up in 1998 to manage the Internet Assigned Numbers Authority (IANA) on behalf of the US Department of Commerce (Mueller, 2002), ICANN is at a historical turning point. At its 55th meeting, in Marrakesh, Morocco (March 2016), the ICANN community voted in support of transitioning the stewardship over the IANA function from the US National Telecommunications and Information Administration (NTIA) to the global multistakeholder community.

ICANN consists of two parts: the corporation, which implements policies and procedures to run the infrastructure, and the so-called ‘community’, which, supported by ICANN staff, develops in a multistakeholder fashion the policies that the corporation implements. Since its inception, ICANN has stimulated bottom-up policy development, although industry still plays a leading role with civil society merely in tow, and the organisation has not been exempt from criticism (Bygrave, 2015; Raymond & DeNardis, 2015). Civil society involvement dates back to the establishment of the NonCommercial Domain Name Holders Constituency (NCDNHC) in 1999, relabelled NCUC in 2003. NCUC membership, which is free of charge, includes both organisations and individuals, the latter ranging from technical experts and academics to professional advocates and users, with backgrounds as diverse as engineering, law, and development activism. At the time of writing, it counted 118 organisational and 415 individual members from 157 countries. 9 NCUC has a policymaking function, and contributes to electing six members of the Council of the Generic Names Supporting Organization, in charge of the policies for Generic Top Level Domains (e.g., .net, .com, .hotel, .مثال).

Notwithstanding the early engagement of civil society in the organisation, human rights long remained at the margins of ICANN, in contrast to governance fora like WSIS and the IGF (Jørgensen, 2006). The wind changed direction as a new group of advocates joined ICANN in 2014, following a combination of events such as the leaks by security contractor Edward Snowden of classified documents proving blanket surveillance of internet users by national security agencies (June 2013 onwards); the CoE report on ICANN’s responsibility to respect human rights; and, most importantly, the announcement, in March 2014, that the United States would release control over the IANA function. Since early 2014, in an unprecedented experiment of ‘polycentric governance’ (Scholte, 2016), the ICANN community has engaged in a major redesign endeavour. It launched, among others, the Cross Community Working Group on Enhancing ICANN Accountability (CCWG Accountability), tasked with ‘develop[ing] a plan to transition the US government stewardship role with regard to the IANA functions and related root zone management’. The IANA transition, and CCWG Accountability in particular, worked as a ‘policy window’, or an occasion for political participation by civil society advocates (Kingdon, 1995). This policy window represented an opportunity to connect the ‘policy niche’ of human rights (Milan, 2009), until then largely ignored by the community at large, to a broader process at the core of the organisation’s future.

The CoE report was presented at the 50th ICANN meeting, in London (June 2014). The ICANN 51 (Los Angeles, October 2014) agenda included a session on human rights co-organised by the CoE and the ICANN Governmental Advisory Committee (GAC). Two new entities were formed: the GAC Working Group on Human Rights and International Law (GAC WG HRIL) and the multistakeholder Cross Community Working Party on ICANN’s Corporate and Social Responsibility to Respect Human Rights (CCWP HR), 10 established as a sub-entity of the Noncommercial Stakeholders Group (NCSG) and chaired by the freedom of expression non-governmental organisation Article 19 (recently affiliated to the NCUC). The two operate independently but coordinate their work through joint public meetings. At ICANN 52 in Singapore (February 2015), Article 19 launched the report ICANN’s Corporate Responsibility to Respect Human Rights.

At ICANN 53 (Buenos Aires, June 2015) and ICANN 54 (Dublin, October 2015), the CCWP HR held both working and outreach sessions with other ICANN constituencies representing the interests of other communities, e.g. the Intellectual Property (IP) Constituency. Meanwhile, the CCWG Accountability recommended a concrete commitment to human rights in the ICANN post-transition bylaws, but parts of the community pushed back, concerned that a commitment to human rights would broaden ICANN’s scope and mission. Eventually, the final report by CCWG Accountability, made public in February 2016, recommended that ICANN should commit to respect human rights within its narrow scope and mission; that it should not be forced to actively protect human rights or force external parties to do so; and that such a commitment be included in the ICANN bylaws, with the specific bylaw only to be enacted pending the development of an adequate framework of interpretation.

The ICANN community vote in support of the IANA stewardship transition proposal, in March 2016, paved the way for the proposed regulations to be reworked into the organisation’s bylaws. The bylaws revision concluded phase 1 (or Workstream 1) of the transition. Bylaw (viii), adopted in May 2016 and included in Article 1 (Mission, Commitments and Core Values), Section 1.2(b), reads:

  Subject to the limitations set forth in Section 27.2 11, within the scope of its Mission and other Core Values, respecting internationally recognized human rights as required by applicable law. This Core Value does not create, and shall not be interpreted to create, any obligation on ICANN outside its Mission, or beyond obligations found in applicable law. This Core Value does not obligate ICANN to enforce its human rights obligations, or the human rights obligations of other parties, against other parties.

This concluded the contestation phase concerning the inclusion of human rights into the bylaws (Jasanoff, 2004). The NTIA announced in June 2016 its acceptance of the proposal put forward by the global internet multistakeholder community; the actual IANA stewardship transition was completed on 1 October 2016 when the ICANN contract with the US government officially came to an end. As far as human rights are concerned, the ongoing Workstream 2 of the IANA transition requires the development of the framework of interpretation for bylaw (viii), and of a human rights impact assessment instrument for ICANN policies and operations. Figure 1 shows how human rights relate to ICANN’s themes and policies/processes.

Figure 1. An overview of the relation between human rights, themes and policies/processes in ICANN, prepared by CCWP HR.

3. NCUC: a community in expansion

Mailing lists constitute the main meeting point and the primary ground for organisation and discussion for ICANN constituencies and their membership. Examining the evolution of participation is key to understanding civil society dynamics around ICANN. 12 By analysing traffic volume on NCUC-discuss, we identified two peaks of traffic, corresponding respectively to the NCUC inception and to the period from 2014 to the present (figure 2). We link the recent growth in NCUC membership to the political opportunities (Tarrow, 1998) brought about by the CoE report, the Global Multistakeholder Meeting on the Future of Internet Governance (NETmundial, São Paulo, Brazil, 2014), the Snowden revelations, and especially the IANA transition - which attracted the attention of civil society advocates who had to date kept ICANN at a distance, notwithstanding their commitment to digital rights. The increase in membership corresponded to a growing diversification in geographical origin, with a new cluster of active NCUC members from the Asia Pacific region.

Figure 2. Growth of the NCUC community as reflected in NCUC-discuss (unit of analysis: e-mails from members who made their first post to ncuc-discuss).

Further analysis, linking individual participants’ first e-mail to the list with their subsequent participation in the online discussion, allows us to identify three groups of members (figure 3; the grouping step is sketched in code below). Group 0 (in red) corresponds to the early days of the NCUC foundation; some of its members are still active today. Group 1 (in orange) relates to a second phase in the NCUC evolution, with membership from the Global South increasing and new issues entering the agenda, concerning e.g. the new round of allocation of Generic Top Level Domains (gTLDs) that kicked off in 2010-12. Group 2, including yet another round of new participants (in grey), parallels the IANA transition and the other recent political opportunities described above.

Figure 3. Relation between different groups of participants to ncuc-discuss. E-mails were divided into three cohorts based on when members sent their first e-mail to the list.

We interpret these groups as three cohorts of civil society advocates in ICANN, which, as we shall see next, correspond to the progressive broadening of the advocacy agenda. The second and third cohorts could build on the institution-building and advocacy activities of the previous one(s), enjoying the expertise, structures and resources made available over time thanks to internal lobbying (e.g., travel support for civil society advocates, infrastructure for remote participation and conference calls, translation services, and so on).

These findings can be interpreted in light of earlier analyses pointing to a recent adjustment in membership for the civil society engaged in internet governance. Traditional internet governance venues are increasingly subject to the attention of digital rights activists and hackers. The Snowden revelations, but also processes like NETmundial, have prompted a shift in the agendas and strategies of civil society actors, to the point of partially reconfiguring traditional equilibria (Milan, 2014a, 2014b). This represents an innovation with respect to the post-WSIS phase, characterised by a marginalisation of grassroots internet activists, who privileged a hands-on approach that prioritised technology design over policy design (Milan & Hintz, 2013).

4. The evolution of sociotechnical imaginaries

Mailing lists serve as a critical communication and deliberation infrastructure for ICANN constituencies and their membership, representing a crucial venue for investigating discursive change, albeit not the only channel of conversation. 13 We postulate a relation between the participation of new members in the discussion and the evolution of the human rights discourse. In other words, the change of pace that affected the way human rights were framed and presented to the broader ICANN community is a function of the inclusion of new members within NCUC - and, by extension, of the novel policy windows that became available over time. We argue that the three cohorts of advocates we identified correspond roughly to three distinct sociotechnical imaginaries, which we now move to describe with the support of discourse analysis. These are to be seen as simplified ideal-types useful to depict the trajectory of human rights at ICANN, but there are no sharp breaks between the three. Rather, the civil society agenda is cumulative: visions and political preferences do not replace each other but co-exist and dialogue. For the sake of brevity, we highlight only a small selection of representative issues among the many that advocates fought for over time.
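As a rough quantitative proxy for such discursive change, one could, for instance, count how often rights-related vocabulary appears on the list per year. The sketch below continues from the earlier mbox loop; the keyword list is an illustrative assumption, not the coding scheme used in our qualitative analysis.

```python
# Yearly frequency of rights-related vocabulary on the list (keywords are
# illustrative assumptions; multipart messages are skipped for brevity).
yearly_mentions = Counter()

KEYWORDS = ("human rights", "freedom of expression", "privacy", "due process")

for msg in mailbox.mbox("ncuc-discuss.mbox"):
    raw_date, payload = msg.get("Date"), msg.get_payload()
    if not raw_date or not isinstance(payload, str):
        continue
    try:
        year = parsedate_to_datetime(raw_date).year
    except (TypeError, ValueError):
        continue
    text = payload.lower()
    yearly_mentions[year] += sum(text.count(keyword) for keyword in KEYWORDS)
```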

2002-2009. Freedom of expression as a barrier to expansive IP rights. The early civil society advocacy agenda focused on the fight against the strategy of IP protection enacted by ICANN to the detriment of noncommercial interests. It was indeed the observation that ‘Trademark claims were limiting legitimate uses of words and concepts in the domain name space’ (Mueller, 2012) that prompted freedom of speech advocates to create a space for civil society within ICANN - what is now the NCUC. To be safeguarded were the (then) three million .org domain name holders, plus users and potential registrants. The advocacy agenda included freedom of expression, consumer protection, ‘trademark maximalism’ (Mueller, 2012), ICANN’s mission creep (in particular with respect to content regulation), transparency, and the power imbalance between commercial and noncommercial players. Qualitative analysis of the list reveals that activists mostly reacted to upcoming and potential threats at the level of policy-making and institutional design, resisting incumbent regulations by means of discursive tactics aimed at ‘reordering’ narratives and trying to secure a voice for noncommercial players in an organisation that was still designing itself.

With its emphasis on boundless freedom of expression and individual rights, the sociotechnical imaginary of this first cohort evoked libertarianism and the US First Amendment. Civil liberties, rather than human rights, were the main frame of reference, infused with the idea of the internet as an enabler of individual rights and free expression. Privacy came in as a function of the latter, in turn rooted in a fierce distrust of governments. This version of cyberlibertarianism resonates with the early cypherpunks (Greenberg, 2013) and with the tech movements of the 1960s/70s (Flichy, 2007). The discourse, however, appears more complex if we separate rhetoric from content. While the rhetoric was indeed libertarian, emphasising negative freedoms such as the protection of users against powerful institutions (both state and commercial players), the narrative was permeated by positive freedoms: advocates supported progressive ideas like user participation within a libertarian strategy - in a novel configuration similar to what, in a different context, Fuchs (2014) has termed ‘social cyberlibertarianism’.

2009-2014. Beyond freedom of expression: privacy, due process, social and economic rights. The second cohort of civil society advocates contributed to consolidating the voice and standing of the constituency. Membership and diversity increased as new professionals joined, including technical experts but also organisations and individual activists with a hacker or human rights background. The liberal rights discourse expanded towards a broader definition of freedom of expression, which came to include neighbouring issues like privacy, due process, and social and economic rights. The strategy remained largely defensive as far as human rights were concerned, with advocates trying to offset threats and expand the discourse to include, for example, development issues. Sadly, the bulk of the ICANN community did not seem to take user rights seriously, as this reflection on the gTLDs auction procedure illustrates: ‘Deep pockets win / communities lose / but no one in power at ICANN cares about communities / and if there had been applicants from developing countries they would also lose / and no one in power at ICANN cares about developing economies’. The concerns about the gTLDs programme voiced by large nonprofits like the International Red Cross, and the subsequent creation of the Not-for-Profit Operational Concerns Constituency (NPOC), added complexity to the game, with competing views on, among others, privacy. Due process within ICANN itself was of concern to advocates, too, as this account relates: ‘ICANN is insufficiently accountable to relevant noncommercial interests. [They] are not given the appropriate representation (…) There is a real worry that ICANN is an "industry organization"’. Overall, advocates expressed concern about ‘The broader fit between ICANN's actions/policies and the sort of public interest values we’re all here to champion’. The prevailing sociotechnical imaginary expanded from a libertarian to a ‘classical’ human rights agenda, although rights were typically mobilised independently from each other and without reference to the overall human rights programme, which was seldom explicitly invoked and then largely upon the initiative of single individuals. The notion of human rights in this period approximates the International Covenant on Economic, Social and Cultural Rights.

2014-present. Waving the digital rights banner: human rights at the forefront. This third cohort took a significant leap forward in the struggle to inscribe human rights into infrastructure and institutional design at ICANN. Exploiting novel policy windows and opportunities for engagement, larger non-profit organisations with a digital rights agenda joined NCUC, including the Center for Democracy & Technology, the Centre for Internet and Society, the Electronic Frontier Foundation and Access Now. The increased organisational membership - able to mobilise resources, thus ensuring continuity of engagement - was coupled with a growing participation of vocal individuals from the global South. These advocates built on longstanding members’ expertise, but their limited familiarity with unwritten community norms prompted them to occasionally bypass established practices to advance their goals. Strategy-wise, they reacted to threats but, above all, actively sought opportunities and created the conditions for advancing their cause. They connected human rights with the notion of corporate social responsibility; bridged over to other policy fora; and ‘reordered’ the narrative by other means (e.g. a movie) and through strategic alliances (e.g., cross-community engagement with the CoE, the GAC and other constituencies, participation in academic conferences). Human rights also permeated institutional design through a push for an ICANN privacy policy.

This third cohort includes human rights supporters who do not hesitate to evoke human rights by name. They also have a much broader human rights agenda, inspired by recent notions of digital rights as well as the International Covenant on Civil and Political Rights, foregrounding for instance cultural rights, such as linguistic diversity. These ideas are grounded in a profound understanding of the materiality of the infrastructure, and of its surveillance and control affordances. The human rights agenda is not embraced by the entire NCUC, and there is criticism concerning the value and potential limitations of a human rights approach (e.g. Mueller, 2016). In fact, views by government representatives coexist with hands-on hacker attitudes and ‘social cyberlibertarian’ perspectives, in a combination that sets aside dogmatism in favour of a pragmatic preference for flexible, ad hoc alliances and informal collaborations across constituencies.

Conclusions

Focusing on the emergence and contestation of new ideas, this article offered a snapshot of the concerted efforts of a group of advocates to wire human rights into the policies (the infrastructure) and procedures (the institution) of ICANN, seen as a site ‘for the testing and reaffirmation of political culture’ (Jasanoff, 2004, p. 40). Embracing bottom-up design as a form of policy advocacy rooted in and inspired by technical practice, NCUC human rights advocates operated as a critical community advancing discursive tactics entrenched in sociotechnical imaginaries. Using novel ‘ordering narratives’ able to (re)structure strategically organised relations (Law, 1991), they partially managed to subvert mainstream organisational narratives that had thus far been ‘recursively told, embodied, and performed’ (Law, 1994a, p. 259) by the ICANN community. Paraphrasing Jasanoff, advocates tried to make the organisation by making discourses. Further research could comprise, for instance, a cross-constituency analysis of the evolution of the human rights discourse over time, and a detailed discourse and social-network analysis of ICANN policy development processes as they relate to specific human rights and portions of the ICANN infrastructure (e.g. the WHOIS database and its privacy implications).

Echoing Epstein et al. (2016), we believe STS has much to offer to the understanding of the complex ecosystem of internet governance. To name just one of many promising avenues, the STS perspective on ordering as a key organisational mechanism, adopted in this article, encouraged us to approach both infrastructure and organisation as sites of contestation and co-production. It allowed us to illuminate some of the micro-practices of governance by civil society actors within ICANN, tracking their meaning-making and discursive role as they unfolded on the NCUC mailing list. Triangulating participant observation with quantitative and qualitative analysis of the main NCUC mailing list, where organisation and deliberation unfold, we identified three ideal-type generations of civil society advocates, corresponding to distinct but cumulative human rights imaginaries with their respective agendas and tactics. We showed how the combination of emerging political opportunities and the progressive inclusion of new, diverse members brought about new issues, or new ways of framing certain issues, altering and empowering the emerging ‘ordering narratives’ from the bottom up.
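
To give a concrete sense of the quantitative side of such mailing-list analysis, the sketch below counts messages per sender per year from a locally saved archive. It is a minimal illustration only, not the procedure used in the study: the file name ncuc-discuss.mbox is a hypothetical placeholder, and a fuller toolkit for this kind of analysis is offered by BigBang (Benthall, 2015).

    # Minimal sketch: yearly message counts per sender from a local mbox archive.
    # "ncuc-discuss.mbox" is a hypothetical file name; a real analysis would also
    # normalise sender aliases and handle malformed headers.
    import mailbox
    from collections import Counter
    from email.utils import parseaddr, parsedate_tz

    counts = Counter()
    for msg in mailbox.mbox("ncuc-discuss.mbox"):
        _, sender = parseaddr(msg.get("From", ""))
        date = parsedate_tz(msg.get("Date") or "")
        if sender and date:
            counts[(date[0], sender.lower())] += 1  # key: (year, sender)

    for (year, sender), n in sorted(counts.items()):
        print(year, sender, n)

Aggregations of this kind can help surface cohorts of participants over time; the qualitative reading of threads then situates those patterns in their discursive context.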

We like to think of this struggle as an attempt to explicitly wire the politics of internet architecture into the politics of institutions (see DeNardis, 2012). It can also be seen as an instance of the recent ‘turn to infrastructure’ in internet governance (Musiani et al., 2016), whereby private actors seek to expand the remit (and the features) of the infrastructure (i.e., the DNS) to positively permeate institutional design (i.e., ICANN). It remains to be seen how the ongoing human rights struggle will evolve over time, and how the stabilisation phase (Jasanoff, 2004) will affect the agenda-setting capability of civil society and its role within the ICANN community.

References

Agre, P. E. (1997). Computation and Human Experience. Cambridge: Cambridge University Press.

Akrich, M. (1992). The De-Scription of Technical Objects. In W. E. Bijker & J. Law (Eds.), Shaping Technology/Building Society. Studies in Sociotechnical Change (pp. 205–224). Cambridge, MA: MIT Press.

Appelman, D. L. (2016). Internet Governance and Human Rights: ICANN’s Transition Away from United States Control. The Clarion, a Journal of the American Bar Association’s International Human Rights Committee, 1(1).

Benthall, S. (2015). Testing Generative Models of Online Collaboration with BigBang. In Proceedings of the 14th Python in Science Conference (pp. 182–189).

Bijker, W. E. (1997). Of Bicycles, Bakelites, and Bulbs. Toward a Theory of Sociotechnical Change. Cambridge, MA and London, England: MIT Press.

Bijker, W. E., & Law, J. (Eds.). (1992). Shaping Technology/Building Society. Cambridge, MA: MIT Press.

Bygrave, L. (2015). Internet Governance by Contract. Oxford: Oxford University Press.

Cardoso, F. H. (2004). We the people: Civil society, the United Nations and global governance. Report of the Panel of Eminent Persons on United Nations–Civil Society Relations. United Nations.

Cath, C. (2015). A case study of coding rights: should freedom of speech be instantiated in the protocols and standards designed by the Internet Engineering Task Force? (MA dissertation). University of Oxford, Oxford, UK.

DeNardis, L. (2012). Hidden Levers of Internet Control: An Infrastructure-Based Theory of Internet Governance. Information, Communication & Society, 15(5), 720–738.

Dery, D. (2000). Agenda setting and problem definition. Policy Studies, 21(1), 37–47.

Doolin, B. (2003). Narratives of change: Discourse, technology and organization. Organization, 10(4), 751–770.

Dourish, P. (2001). Where the Action Is: The Foundations of Embodied Interaction. Cambridge, MA: MIT Press.

Dunne, A., & Raby, F. (2001). Design Noir: The Secret Life of Electronic Objects. Basel: Birkhäuser.

Epstein, D., Katzenbach, C., & Musiani, F. (2016). Introduction: internet governance as usual – and its blind spots. Internet Policy Review, 5(3).

Flichy, P. (2007). The internet imaginaire. Cambridge, MA: MIT Press.

Flyverbom, M. (2011). The Power of Networks: Organizing the Global Politics of the Internet. Cheltenham, UK: Edward Elgar.

Flyverbom, M. (2016). Disclosing and concealing: internet governance, information control and the management of visibility. Internet Policy Review, 5(3).

Greenberg, A. (2013). This Machine Kills Secrets: Julian Assange, the Cypherpunks, and Their Fight to Empower Whistleblowers (Reprint edition). New York: Plume.

Hess, D. J. (2005). Technology- and product-oriented movements: Approximating social movement studies and science and technology studies. Science, Technology & Human Values, 30(4), 515–535.

Hintz, A. (2009). Civil Society Media and Global Governance: Intervening into the World Summit on the Information Society. Münster: Lit.

Hintz, A., & Milan, S. (2009). At the margins of Internet governance: grassroots tech groups and communication policy. International Journal of Media & Cultural Politics, 5(1/2), 23–38.

Hoffmann, J., Katzenbach, C., & Gollatz, K. (2016). Between coordination and regulation: Finding the governance in Internet Governance. New Media & Society.

ICANN. (2016, February 11). Bylaws for Internet Corporation for Assigned Names and Numbers | A California Nonprofit Public-Benefit Corporation. Retrieved from https://www.icann.org/resources/pages/bylaws-2016-02-16-en

Jasanoff, S. (Ed.). (2004). States of Knowledge: The Co-production of Science and the Social Order. New York: Routledge.

Jasanoff, S., Kim, S.-H., & Sperling, S. (2007). Sociotechnical Imaginaries and Science and Technology Policy: A Cross-National Comparison. NSF Research Project, Harvard University.

Jørgensen, R. F. (Ed.). (2006). Human rights in the global information society. Cambridge, MA: MIT Press.

Kingdon, J. W. (1995). Agendas, Alternatives and Public Policies. New York: Longman.

Law, J. (1991). Power, Discretion and Strategy. In J. Law (Ed.), A Sociology of Monsters: Essays on Power, Technology and Domination (pp. 165–191). London: Routledge.

Law, J. (1994a). Organization, Narrative and Strategy. In J. Hassard & M. Parker (Eds.), Towards a New Theory of Organizations (pp. 248–68). London: Routledge.

Law, J. (1994b). Organizing modernity. Oxford, UK Cambridge, Massachusetts, USA: Blackwell.

MacKenzie, D., & Wajcman, J. (Eds.). (1999). The Social Shaping of Technology (2nd ed.). Buckingham and Philadelphia: Open University Press.

McCarthy, D. R. (2011). Open Networks and the Open Door: American Foreign Policy and the Narration of the Internet. Foreign Policy Analysis, 7, 89–111.

Milan, S. (2009). Community Media activists in Transnational Policy Arenas: Strategies and Lessons Learnt. In K. Howley (Ed.), Understanding Community Media (pp. 308–317). Thousand Oaks, CA: Sage.

Milan, S. (2014a, April 24). NETmundial: Is there a new guard of civil society coming to the internet governance fora?, CGCS Media Wire

Milan, S. (2014b, September 10). The Fair of Competing Narratives: Civil Society(ies) after NETmundial, CGCS Media Wire

Milan, S., & Hintz, A. (2013). Networked Collective Action and the Institutionalized Policy Debate: Bringing Cyberactivism to the Policy Arena? Policy & Internet, 5(1), 7–26.

Mol, A. (2001). The Body Multiple: Ontology in Medical Practice. Durham and London: Duke University Press.

Mueller, M. L. (2002). Ruling the Root: Internet Governance and the Taming of Cyberspace. Cambridge, MA: MIT Press.

Mueller, M. L. (2010). Networks and States. The Global Politics of Internet Governance. Cambridge, MA and London, England: MIT Press.

Mueller, M. L. (2012, October). Brief History of NCUC, NCUC.org

Mueller, M. L. (2016, October 26). Missing the Target: The Human Rights Push in ICANN goes off the rails. Internet Governance Project Blog.

Musiani, F. (2015). Practice, Plurality, Performativity, and Plumbing: Internet Governance Research Meets Science and Technology Studies. Science, Technology & Human Values, 40(2), 272–288.

Musiani, F. (2016). Alternative Technologies as Alternative Institutions: The Case of the Domain Name System. In F. Musiani, D. L. Cogburn, L. DeNardis, & N. S. Levinson (Eds.), The Turn to Infrastructure in Internet Governance (pp. 73–86). Basingstoke, UK: Palgrave Macmillan.

Musiani, F., Cogburn, D. L., DeNardis, L., & Levinson, N. S. (2016). The Turn to Infrastructure in Internet Governance. Basingstoke, UK: Palgrave Macmillan.

Padovani, C., Musiani, F., & Pavan, E. (2010). Investigating Evolving Discourses on Human Rights in the Digital Age: Emerging Norms and Policy Challenges. International Communication Gazette, 72(4–5), 359–378.

Parks, L. (2015). "Stuff you can kick": Toward a theory of media infrastructure. In P. Svensson & D. T. Goldberg (Eds.), Between Humanities and the Digital (pp. 355–373). Cambridge, MA and London, England: MIT Press.

Pavan, E. (2012). Frames and connections in the governance of global communications: A network study of the internet governance forum. Lanham, MD: Lexington Books.

Raymond, M., & DeNardis, L. (2015). Multistakeholderism: Anatomy of an Inchoate Global Institution. International Theory, 7(3), 572–616.

Rochon, T. R. (1998). Culture Moves. Ideas, Activism, and Changing Values. Princeton, NJ: Princeton University Press.

Scholte, J. A. (2016). Process and power in Internet governance: Reflections on the IANA transition. Presented at the RIPE meeting, Amsterdam.

Stone, D. A. (1988). Policy Paradox and Political Reason. New York: HarperCollins.

Tarrow, S. (1998). Power in Movement: Social Movements and Contentious Politics. Cambridge: Cambridge University Press.

Tréguer, F., Antoniadis, P., & Söderberg, J. (Eds.). (2016). Alternative Internets. Journal of Peer Production, 9.

Turner, F. (2006). From counterculture to cyberculture: Stewart Brand, the Whole Earth Network, and the rise of digital utopianism. Chicago: University of Chicago Press.

Van Eeten, M. J., & Mueller, M. L. (2013). Where is the governance in Internet governance? New Media & Society, 15(5), 720–736.

Woolgar, S., & Neyland, D. (2013). Mundane governance: Ontology and accountability. Oxford: Oxford University Press.

Zalnieriute, M., & Schneider, T. (2014). A Council of Europe analysis on ICANN’s procedures and policies in the light of human rights, fundamental freedoms and democratic values. Strasbourg: Council of Europe.

Footnotes

1. For an overview of STS in internet governance research see the Internet Policy Review special issue ‘Doing Internet Governance: practices, controversies, infrastructures, and institutions’, available at: https://policyreview.info/articles/analysis/doing-internet-governance-pr...

2. See also Law (1994b).

3. ‘Bottom-up’ here is also intended to evoke the bottom-up process of ICANN itself, as we shall see in what follows. Although it does not equal grassroots participation and civil society involvement in ICANN remains limited, we observe a slow increase in the participation of grassroots organisations from different backgrounds - as evidenced by the expanding organisational membership of the Noncommercial Users Constituency (NCUC) and the growing number of advocates with grassroots activism or hacker backgrounds - a trend also observed in other internet governance venues (Milan, 2014a).

4. Illustrating the evolution and uses of the notion of ordering goes beyond the scope of this article. For an overview see Flyverbom (2011, 2016); Hoffmann, Katzenbach, & Gollatz (2016).

5. See lists.ncuc.org/cgi-bin/mailman/listinfo/ncuc-discuss. The e-mail list, which built on the pre-existing NCDNHC list (later renamed), is the main venue for NCUC members to exchange views and strategise. It is open to members only but publicly archived, and members are subscribed by default upon joining NCUC. The ncuc-discuss archives also include e-mails from the period immediately before NCUC was formally established, including e-mails from ncdnhc-discuss for 2002-2003.

6. Both authors are active within the ICANN civil society sector. Milan represents noncommercial users in the Council of the Generic Names Supporting Organization (GNSO), thus contributing to policy development in the generic domains space; ten Oever is the chair of the Cross Community Working Party on ICANN’s Corporate and Social Responsibility to Respect Human Rights (CCWP HR). As such, he played a key role in advancing the human rights discourse.

7.‘Civil society’ indicates the realm of human activity outside the remit of the state and the market (see Cardoso, 2004).

8. These collective visions have also been approached as, e.g., discourses (Padovani, Musiani, & Pavan, 2010), frames (Pavan, 2012), and narratives (McCarthy, 2011).

9. Together with the Not-for-Profit Operational Concerns Constituency (NPOC), NCUC constitutes the Non Commercial Stakeholder Group (NCSG). NCSG elects the six GNSO councilors representing civil society. A third entity, the At-Large Advisory Committee (ALAC), represents users’ interests. NPOC and ALAC are not considered here for they have not been particularly vocal in the human rights debate.

10. CCWPs are ad hoc, informal single-issue groups with no official policy development or advisory power.

11. Section 27.2 sets out some procedural limitations for the human rights bylaw, including its coming into force pending the development of a framework of interpretation.

12. NCUC’s recent membership includes digital rights organisations like the Electronic Frontier Foundation and Access Now, freedom of expression organisations like Article 19 and Free Press, but also the American Civil Liberties Union, the Centre for Internet and Society (Bangalore, India), and the Washington-based Center for Democracy & Technology. A close reading of organisational membership over time would nicely complement our automated analysis of mailing list traffic, but it is outside the scope of this article.

13. By focusing on a single constituency-based mailing list, this study fails to capture the contentious process of negotiation across constituencies; this is the main limitation of the approach. However, by concentrating on the main civil society avenue within ICANN - one that also happens to drive the bottom-up design efforts described here - the article offers a behind-the-scenes snapshot of the ongoing process of discursive change that has human rights at its core.

Data retention: flogging a dead horse

On 21 December 2016 the Court of Justice of the European Union (CJEU) ruled that national laws stipulating a generalised retention of traffic and location data from electronic communications violate the EU fundamental rights to privacy and the protection of personal data. Two and a half years earlier, the judges had already struck down, for the very same reasons, an EU directive which obliged the member states to adopt national laws on data retention. Another four years before that, the German Federal Constitutional Court had invalidated the then existing German law on data retention because it infringed the fundamental right of telecommunication freedom.

Blanket retention of communication data unconstitutional

From a legal point of view there can hardly be a shadow of a doubt that any form of generalised retention of communications metadata is simply incompatible with the EU's constitutional principles. The German Federal Government, however, keeps flogging a dead horse by holding on to the German national law on data retention passed in 2015 – with a fairly peculiar reasoning: even though both the Ministry of Justice and the Ministry of the Interior still claim to need more time to thoroughly evaluate the German law in light of the latest CJEU ruling, both have already declared that they deem the law consistent with the EU Charter of Fundamental Rights. This rationale is all the more surprising as the German regulation on data retention evidently fails to meet a number of the fundamental rights requirements postulated by the CJEU in its 2016 ruling.

The blatant inconsistencies begin with the law's personal scope. Since the declared aim of data retention is the fight against serious crime, the CJEU demands that data retention must not affect all persons using electronic communication services, but must instead be limited to those who are, even indirectly, in a situation liable to give rise to criminal proceedings. By contrast, the German law provides for no differentiation, limitation or exception whatsoever with respect to the individuals whose communications data is retained. It therefore applies even to persons for whom there is no evidence capable of suggesting that their conduct might have a link, even an indirect or remote one, with serious criminal offences. Instead, it simply covers, in a generalised manner, all users of electronic communication means and services. This is true even for persons whose communications are subject to the obligation of professional secrecy, such as lawyers, doctors or data protection officers. The CJEU, however, ruled that the traffic and location data of such professionals must not be retained at all.

German law out of scope

The German law also lacks any restriction with respect to its territorial scope. The CJEU, on the other hand, requires data retention to be limited to geographical areas where the competent national authorities consider, on the basis of objective evidence, that there is a high risk of the preparation or commission of serious crimes.

Another obvious contradiction of the CJEU's 2016 decision lies in the modalities of access to the retained data. The German legislation allows law enforcement authorities to use anyone's retained data for the general purpose of prosecuting and combating serious crime, whereas the CJEU demands that access be restricted to the data of individuals suspected of planning, committing or having committed a serious crime, or of being implicated in one way or another in such a crime. Only in particular situations, where for example vital national security, defence or public security interests are threatened by terrorist activities, does the CJEU deem it acceptable to grant the authorities access to the data of other persons, provided that there is objective evidence that this data might make an effective contribution to combating such activities. In order to ensure, in practice, that these conditions are fully respected, the CJEU also ruled that access to the retained data must be subject to a prior review carried out either by a court or by an independent administrative body. Owing to a somewhat vague statutory reference, however, the German legislation allows intelligence agencies to collect certain pieces of information, for example IP addresses, from the pool of retained data without any prior control by a judge or an otherwise independent body.

German Federal Government needs to go back to basics

All these shortcomings cannot be fixed or amended without departing from the concept of a generalised, undifferentiated and indiscriminate retention of communications data altogether. Once the horse is dead, there is not much sense in flogging it any longer. Therefore, it is high time that the German Federal Government changes its perspective and starts looking at fundamental rights as an achievement of civilisation worth defending rather than an inconvenient obstacle that needs to be overcome.

Australian internet policy

Introducing Australian internet policy: problems and prospects

Papers in this special issue

Editorial
Angela Daly, Queensland University of Technology, Australia
Julian Thomas, RMIT University, Australia

The passage of Australia’s data retention regime: national security, human rights, and media scrutiny
Nicolas Suzor, Queensland University of Technology, Australia
Kylie Pappalardo, Queensland University of Technology, Australia
Natalie McIntosh, Queensland University of Technology, Australia

Computer network operations and ‘rule-with-law’ in Australia
Adam Molnar, Deakin University, Australia
Christopher Parsons, Citizen Lab, Canada
Erik Zouave, KU Leuven, Belgium

Internet accessibility and disability policy: lessons for digital inclusion and equality from Australia
Gerard Goggin, University of Sydney, Australia
Scott Hollier, Media Access Australia, Australia
Wayne Hawkins, Australian Communications Consumer Action Network (ACCAN), Australia

Internet policy and Australia’s Northern Territory Intervention
Ellie Rennie, Swinburne University of Technology, Australia
Jake Goldenfein, Swinburne University of Technology, Australia
Julian Thomas, RMIT University, Australia

Towards responsive regulation of the Internet of Things: Australian perspectives
Megan Richardson, The University of Melbourne, Australia
Rachelle Bosua, The University of Melbourne, Australia
Karin Clark, The University of Melbourne, Australia
Jeb Webb, The University of Melbourne, Australia
Atif Ahmad, The University of Melbourne, Australia
Sean Maynard, The University of Melbourne, Australia

EDITORIAL

Introduction

We are delighted to introduce this special edition of the Internet Policy Review on Australian Internet Policy. This is the first special edition to focus explicitly on a country or region beyond the journal's European focus — although there have been previous special issues and individual articles which have in practice ventured beyond the continent's confines. We also believe that this is the first time that a special edition of an international journal has addressed Australian internet policy in a way which is relevant to internal Australian, as well as transnational, discussions of these matters. Accordingly, it is our hope that this special issue will be useful both for domestic audiences considering the current state of Australian internet policy and the scope for possible future directions and reforms, and as a point of comparison for international audiences seeking to understand how this far-flung country is addressing issues familiar to all countries, as well as issues related to its specific national characteristics.

The internet took its first breath in Australia in 1989, when a connection was established between a computer at the University of Melbourne and the University of Hawaii, using a 56 kbps satellite link (Clarke, 2004; Given & Goggin, 2012). From those origins in academic communications, the internet has rapidly become a basic element in everyday social and economic life. Australians are enthusiastic and intensive users of digital communications technologies. The Pew Research Center's 2016 global survey data reports that, after South Korea, Australia now has the second highest rate of internet uptake in the world, with 93% of the adult population using the net, and the second highest rate of smartphone ownership, at 77% of the adult population (Pew, 2016). Australians were avid early online shoppers; yet Australia was also described in 2014 — by the nation's own Attorney-General — as "the worst offender of any country in the world when it comes to piracy" (Hopewell, 2014).

Australia can also claim a distinctive and significant place in the development of internet infrastructures. Today’s ubiquitous Wi-Fi networks rely heavily on Australian innovations in wireless networks, derived from public sector research in radio astronomy in the 1990s (CSIRO, 2016). In relation to internet access networks, Australia’s publicly-funded National Broadband Network (NBN) has a striking geographical reach and policy ambition in comparison with international models. The NBN, now due for completion in 2020, aims to provide fast internet even to the most remote parts of the continent, and to foster innovation and competition in the sector. In its early phase of design and planning, the NBN was perceived to be visionary: a bold combination of communications policy and micro-economic reform. While it has without question already substantially improved access and infrastructure for many Australians, the NBN’s convoluted path through political contention and delay is likely to provide many lessons for future scholars and practitioners of internet policy. Even so, the NBN remains one key example of the occasional propensity in Australia to approach communications (and other policy domains) not only through incremental, pragmatic responses to immediate problems, but also through far-reaching, very ambitious, ‘nation building’ schemes.

Australian communications policy, and then internet policy, have been shaped by certain country-specific controversies. Policy has been moulded by the distinctive challenges of geography — a large country with a small population, most of it highly urbanised — and of political economy, where the federal structure of national government demands a high level of subsidy for services to Australia's expansive rural and remote areas. Despite the high levels of use, digital exclusion remains a highly visible problem for governments, especially in Australia's regions. The Australian government has now moved to an ambitious ‘digital first’ strategy in which service development and delivery is focussed on online platforms. But a substantial number of the very citizens who depend most on government services are not currently able to access them. A recent study reported wide variations in the degree of access and in digital skills and capabilities, especially between Australia's capital cities and some regional areas. Further, measures of the affordability of internet access suggested that affordability declined over the 2014-2016 period, as Australians spent a growing proportion of their incomes on the service (Thomas et al., 2016). However, perhaps as an unintended consequence of the long focus on the NBN, plans to address digital inclusion issues are not yet as advanced as in other jurisdictions: Australia at this stage has no equivalent to the United Kingdom's 2014 digital inclusion strategy.

Access and infrastructure have been by no means the only areas of debate or scholarly attention and concern. Recent developments have highlighted weak privacy protections, and ongoing battles between large copyright holders and internet service providers over filesharing are not yet resolved. Regional and global geopolitical trends are now also influencing Australian internet policy, as trade in services and international data flows become increasingly controversial elements in global politics.

Australia in a global context: tied to the West, turning to Asia

As an economically advanced, Western-style, anglophone liberal democracy, Australia has law and politics that share much with those of its counterparts in North America and Europe. Yet its geographical position in the Asia Pacific region, and its 21st century ‘pivot’ to Asia (Australian Government, 2012), place the country at a crossroads culturally, legally and politically. This dual nature can also be seen in internet policy developments, some of which will resonate with other Global North locations, while others may be more in keeping with Australia's geographical neighbourhood.

One large point of divergence from the European context is human rights protection in Australia. In the absence of a comprehensive set of rights in either the Australian Constitution or legislation, Australia occupies a unique position as the only Western liberal democracy not to have the full suite of domestically-enforceable human rights, relying instead upon a patchwork of legislation, common law, and constitutional interpretation. The outcomes this produces can be seen in a number of internet policy areas, prominently privacy and free expression. While Australia participated in the shadowy Five Eyes surveillance partnership, there was no ability to challenge its practices on the basis of infringement of citizens’ privacy rights, in sharp contrast to the situation in the European Union where the Snowden revelations triggered outcomes such as the invalidation of the Data Retention Directive (Digital Rights Ireland v Minister for Communications, 2014) and the EU-US Safe Harbor agreement (Schrems v Data Protection Commissioner). The lack of strong, constitutional privacy protections may prove to be an eye-opener for British readers faced with a Brexit situation which may involve the disapplication of EU law (and the Charter of Fundamental Rights) and an exit from the European Convention on Human Rights.

In Australia, the parliamentary process and common law rights have not proved sufficient so far to protect Australians’ privacy rights. Proposals for strengthening privacy have been developed and discussed at length, but have not yet been translated into enforceable laws and norms (e.g. Australian Law Reform Commission, 2014). The coming into force of the EU’s General Data Protection Regulation (GDPR) has sparked some debate in Australia about compliance with the extra-territorial reach of this law, since the GDPR applies to data processors and controllers outside of the EU which process the data of EU citizens (Shaw, 2015). Whether the GDPR, or other international provisions, can produce spillovers of higher privacy protection for Australians, remains to be seen.

The Australian experience with regional and bilateral trade agreements and their effect on internet matters may also prove instructive in the context of Brexit and other developments elsewhere, given the changes that implementation of the Australia-United States Free Trade Agreement (AUSFTA) required in Australian copyright law. Australia's Productivity Commission — a highly respected, independent economic policy advisory body — considers these to have had a detrimental effect in Australia (see Australian Government Productivity Commission, 2016). But in other respects the United States represents a policy road not chosen: for instance, in the context of debate about whether a broad ‘fair use’ exception to copyright should be introduced into Australian law. Despite many years of consideration, including Australian Law Reform Commission and Productivity Commission recommendations that fair use become part of Australian law, this has not happened; indeed, the proposal faces strong resistance from certain rightsholder groups (Malcolm, 2017; Aufderheide & Hunter Davis, 2017).

Australia has been the site of many digital copyright and piracy battles over recent years. This may be the result of its reputation as an unusually strong market for pirated media: the respected peer-to-peer news site TorrentFreak has often placed Australia at or near the top of its lists of torrenting countries for popular series such as Game of Thrones. Australia was the first jurisdiction internationally in which large copyright holders sued an internet service provider (ISP) for alleged copyright infringements carried out by its users, in the Roadshow v iiNet saga, but the end result was a decision of the Australian High Court which found the ISP, iiNet, not liable for its customers' conduct in these circumstances (Lindsay, 2012). Subsequent to this decision, the last five years have seen discussions between ISPs and rightsholders, and some government policy interventions, to address piracy concerns through measures including a graduated response scheme; but no agreement was reached on the allocation of costs and the scheme was recently abandoned (Francis, 2016). Yet on copyright infringement, Australia has also taken some negative cues from the United Kingdom, such as the site-blocking legislation introduced in 2015, which, as in the UK, has had limited effectiveness in addressing the downloading of copyright-infringing material (Dootson, Pappalardo, & Suzor, 2016).

The international climate concerning trade has changed significantly since we began work on this special issue in March 2016. Issues related to intellectual property were prominent in public debate over the proposed, and highly controversial, Trans-Pacific Partnership (TPP) (Rimmer, 2017). The TPP’s Intellectual Property and E-Commerce chapters would likely have had a significant effect on Australian internet policy (Daly, 2013). However, the TPP now appears doomed, with the new US president Donald J. Trump withdrawing his country from the treaty. But this does not necessarily spell the death knell for multilateral trade agreements more generally. Australia is still involved in negotiations for the Trade in Services Agreement (TiSA) which also concerns internet policy matters (Erickson & Leggin, 2016). In the power vacuum left by the US, China has stepped into the Asia Pacific region by spearheading the nascent Regional Comprehensive Economic Partnership (RCEP) agreement negotiations (Jaipragas, 2017). Whether these negotiations actually result in finalised agreements is unclear, but the international trade space in the coming years may end up taking on a more European or Chinese character, with significant implications for internet policy in other countries in the region, including Australia.

There are some internet policy issues which have been prominent in the rest of the world but have received scant attention so far in Australia. One such matter is network neutrality. This has been a major topic for discussion in many parts of the world, including the US, the European Union, Brazil and India, which have all introduced regulation to address concerns about internet service providers engaging in discriminatory traffic practices (Marsden, 2017). Yet in Australia there has been very limited discussion on the matter (Daly, 2016a). The reason for this is likely to be Australia’s distinctive market structure for internet service. The country has had historically low rates of uptake for cable or satellite television networks, especially in comparison with North America and Europe, and it is this industry sector which has sparked significant concern about neutrality elsewhere. Another factor is the increasingly significant role of the NBN as a publicly-owned, nation-wide, near-monopoly wholesaler of internet access. The intent here was that Telstra, Australia’s dominant telecommunications firm, should no longer be conflicted as a major wholesaler and retailer of access. But neither of these points mean that neutrality will not become an important problem in the future, with the same dynamics of media convergence operating in Australian markets as elsewhere. In fact, one feature of the Australian market — the ubiquity of data caps for fixed as well as mobile internet access — may make the problem of neutrality more likely to arise, as streaming services begin to seek more concessional or ‘uncapped’ arrangements with particular ISPs.

Another matter is digital market dominance, or the emergence of monopolistic transnational internet companies and their economic and socio-political effects on internet users. Policy and regulatory attention has been paid to this issue in various parts of the world, particularly the EU (Daly, 2016b), but to date no action has been taken in Australia, notwithstanding (for instance) Google’s 90% market share of all searches in Australia (Scardamaglia & Daly, 2016). Australia has, however, moved comparatively quickly in legislating to capture lost taxation revenue from transnational diverted profits, a problem particularly associated with internet businesses (Ting et al., 2016). These provisions are popularly known as the "Google Tax", although their impact on Google is not yet clear.

Looking to the future, an array of technological developments including the Internet of Things, artificial intelligence, robotics, cryptocurrency, and automation are all currently emerging as areas likely to further extend current discussions of privacy, security, intellectual property and competition in Australia.

In this issue

We are delighted to have five contributions on Australian internet policy issues for this special edition of the Internet Policy Review. The authors come from a variety of disciplines, including media and communications, cultural studies, law, criminology, and computing and information systems. We would like to thank all authors for their contributions, as well as the reviewers and Internet Policy Review editorial staff, without whom this special edition would not have been possible.

Some of these papers, and the idea to have a journal special edition on Australian internet policy, emerged from a one-day conference hosted by Swinburne University of Technology in Melbourne on 5 October 2015, which was generously funded by the .au Domain Administration (auDA) as an academic pre-event to that year’s Australian Internet Governance Forum. We thank everyone at Swinburne (where we were both then based) and auDA for their support for that event, as well as for this special edition.

The papers in this special edition cover a range of areas of internet policy in Australia. One area which is under-represented is the relationship between the internet and intellectual property. During and immediately after the iiNet case discussed above, digital copyright and piracy were high priorities in Australian internet policy discussions. It seems that with the Snowden revelations of Australian participation in mass surveillance programmes, and subsequent introduction of mandatory data retention legislation, privacy and data protection have supplanted digital copyright as the current ‘hot topic’ in Australian internet policy, as attested by this special edition’s articles. However, digital intellectual property issues have not gone away, and copyright and related concerns are likely to figure once more in future surveys of the topic.

Indeed, specifically on the topic of data retention, Suzor, Pappalardo, and McIntosh’s contribution to this special edition analyses the media debate accompanying the introduction of data retention legislation in Australia in 2015. The legislation remains very problematic from a human rights perspective, especially its impact upon individual privacy. As the authors found, despite these public interest concerns featuring in media discussions of the legislation, they were largely unaddressed in the final text of the law. This represents a limitation on the ability of civil society to influence Australian law-making in a problematic context where there is no constitutional protection for privacy rights. This is an important article on a topic which continues to attract public debate in Australia, including, at the time of writing, around Australian government proposals to extend the use of retained data to civil proceedings, involving further privacy infringements and scope-creep (Cooper, 2017).

Issues of privacy are also explored in Molnar, Parsons, and Zouave's article on computer network operations (CNOs) in Australia. CNOs constitute government intrusion into or interference with networked ICT infrastructures for the purposes of law enforcement and security intelligence. Several pieces of legislation authorise Australian government agencies to use CNOs for security and law enforcement purposes. However, the authors identify a lack of safeguards and effective oversight of law enforcement and security activities in these laws, compounded by the secrecy accompanying most CNO measures in Australia, resulting in serious risks to democratic freedoms and procedural justice. Again, the authors point to the lack of comprehensive and enforceable human rights in Australia, observing how this places Australians - when faced with CNOs - in a weaker position than their counterparts in other Five Eyes countries. With the recent WikiLeaks revelations about CNO use by the US CIA and its partners, including the Australian intelligence agencies (Hern, 2017), on the one hand, and new plans to regulate the use of CNOs in Italy (Pietrosanti & Aterno, 2017) on the other, government hacking is a timely topic, and this is an important contribution to its academic understanding in Australia and beyond.

Other contributions in this special edition explore the relationship between Australian internet policy, and other policy areas. Goggin, Hollier, and Hawkins examine the interaction of disability policy and internet access in Australia. Digital inclusion, especially for people with disabilities, has been a long-standing issue for internet policy internationally. Australia has had a relatively good track record historically in applying anti-discrimination law to web accessibility, but over the last fifteen years, progress in this area has been slower than expected. Yet, Australia’s ambitious ‘nation building projects’ in the form of the National Broadband Network (NBN) and National Disability Insurance Scheme (NDIS) may go some way to remedying this lack of progress, and may also provide models that can be used in other countries to achieve these objectives.

The interaction of internet policy and Indigenous rights is explored in Rennie, Goldenfein, and Thomas' article, which focuses on the surveillance of publicly-funded computers and internet use in remote Indigenous communities during the Australian government's problematic and controversial Northern Territory ‘Intervention’, a broad cluster of legal and policy changes which from 2007 onwards had a major impact on social welfare, land tenure, law enforcement, and many other aspects of the Indigenous experience in more than 70 affected communities in outback Australia. As part of the intervention, between 2007 and 2012, providers of internet and computer access facilities were required to document the use of their computers, keep detailed records of computer users and install filters on computers and networks - a form of official ICT surveillance designed to target a particular group, Indigenous people. The authors argue that it was the digital divide between Indigenous and other Australians which made this form of targeted surveillance possible, and that the policy in fact exacerbated this divide by imposing costly requirements on those attempting to provide some level of internet access in remote communities. This article is an important contribution to the little-researched topic of Indigenous rights and technology in Australia, and demonstrates the limits of ‘liberation technology’ in contexts of intersectional disadvantage.

Looking to the future, the article from Richardson, Bosua, Clark, Webb, Ahmad, and Maynard explores the nascent Australian Internet of Things community and its experiences of and concerns about privacy and data protection. The authors found that privacy continues to be valued by IoT users, who also want greater control over, and transparency regarding, their IoT data, but that users' awareness of the current legal framework protecting personal information is low. In consequence, the authors suggest a ‘responsive regulation’ model for IoT governance incorporating privacy by design principles, in order to respond better to the wishes of IoT users and developers, while not discounting that larger-scale law reform in Australia may also be needed in the future to address concerns around IoT-enabled ubiquitous surveillance. This article thus demonstrates that privacy remains an important value in the digital age, despite its lack of constitutional protection in Australia.

Conclusion

The articles here cover a range of topics in Australian internet policy and, we hope, provide the reader with an introduction to developments in this country. This special issue, however, cannot provide a comprehensive picture. As we have noted, digital intellectual property issues are important, but are not explored here. Intersections with migration and refugee policy, and Australia's international relations are promising areas for investigation which are not covered in this issue. Internet governance, and Australia's role in international internet governance processes, as well as the administration of domain names, are also topics which merit further attention. To some degree these gaps reflect the wider state of the field. We do not yet have a comprehensive account of how the Australian government and other institutions have approached the regulation of the internet and understood its emergence as a field of law and policy. In our view, Australia is still in the process of reaching robust policy positions on core internet policy problems such as intellectual property, privacy and surveillance. Australian internet policy researchers also have some way to go in documenting and assessing these developments.

In the articles collected here, privacy emerges as the most prominent theme. This is not because of any official current programme of law reform in this area; rather, it reflects the immediate needs of researchers responding to recent and current surveillance activities of diverse kinds, and a host of new technological developments. In Australia as elsewhere, privacy and security are likely to remain key concerns for work in the internet policy area, given ongoing debates around national security and data retention, the increasing use of drones and other automated vehicles, the growing importance of algorithmic media, and current developments in machine learning.

As the internet is already deeply embedded in everyday life and work, it is not surprising that internet policy now interacts with a plethora of other policy areas: in this special issue, the articles cover intersections with disability and Indigenous policy. Our attention to these points of connection is not only the result of the ubiquitous, essential character of internet service across diverse fields: it follows from Australia’s deployment of far-reaching, major national policy programmes in these areas. In the case of the National Disability Insurance Scheme and the National Broadband Network, these ambitious developments have generated strong support across Australia’s social and economic boundaries. In the case of the Northern Territory Intervention, promoted by government as a unifying, national response to a crisis in welfare, the intent and implementation of the policy remains controversial. We would suggest that all these initiatives point to a curiously double-edged quality in Australian public policy, which has strongly shaped the development of the internet in Australia. While much of what we see in Australian internet policy reveals a cautious, incremental and highly pragmatic approach, there remains a disposition towards far-reaching, highly ambitious, ‘nation building’ projects — and the results and ramifications of these are often surprising.

References

Aufderheide, P. & Hunter Davis, D. (2017). Contributors and Arguments in Australian Policy Debates on Fair Use and Copyright: The Missing Discussion of the Creative Process. International Journal of Communication, 11, 522-545.

Australian Government. (2012). Australia in the Asian Century. White Paper. Retrieved from http://asialink.unimelb.edu.au/__data/assets/pdf_file/0004/645592/australia-in-the-asian-century-white-paper.pdf

Australian Government Productivity Commission. (2016). Intellectual Property Arrangements. Draft Report. Retrieved from http://www.pc.gov.au/inquiries/completed/intellectual-property/draft/intellectual-property-overview-draft.pdf

Australian Law Reform Commission. (2014). Serious Invasions of Privacy in the Digital Era. ALRC Report 123. Retrieved from https://www.alrc.gov.au/publications/serious-invasions-privacy-digital-era-alrc-report-123

Clarke, R. (2004). Origins and Nature of the Internet in Australia. Retrieved from www.rogerclarke.com/II/OzI04.html.

Cooper, H. (2017). Data retention laws: Experts warn against opening up metadata to civil cases as telcos renew bid to change laws. ABC News. Retrieved from http://www.abc.net.au/news/2017-01-05/telco-industry-pushes-for-metadata-collection-changes/8162896

CSIRO. (2016). Bringing WiFi to the world. Retrieved from https://www.csiro.au/en/Research/D61/Areas/Wireless-and-networks/Wireless-broadband/WiFi

Daly, A. (2013). The Trans-Pacific Partnership: a knockout blow for innovation? The Conversation. Retrieved from https://theconversation.com/the-trans-pacific-partnership-a-knockout-blow-for-innovation-14262

Daly, A. (2016a). Net Neutrality in Australia: The Debate Continues, But No Policy in Sight. In L. Belli & P. De Filippi (eds.). Net Neutrality Compendium: Human Rights, Free Competition and the Futures of the Internet. Springer.

Daly, A. (2016b). Private Power, Online Information Flows and EU Law: Mind the Gap. Oxford: Hart Publishing.

Digital Rights Ireland Ltd v Minister for Communications, Marine and Natural Resources, No. C-293/12 and C-594/12 (Grand Chamber, European Court of Justice April 8, 2014). Retrieved from http://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:62012CJ0293&rid=1

Dootson, P., Pappalardo, K., & Suzor, N. (2016). Blocking access to illegal file-share websites won't stop illegal downloading. The Conversation. Retrieved from https://theconversation.com/blocking-access-to-illegal-file-share-websites-wont-stop-illegal-downloading-70473

Erickson, M., & Leggin, S. (2016). Exporting Internet Law through International Trade Agreements: Recalibrating U.S. Trade Policy in the Digital Age. Catholic University Journal of Law and Technology, 24(2), 317–368.

Francis, H. (2016). Foxtel and rights holders drop 'three strikes' piracy scheme. Sydney Morning Herald. Retrieved from http://www.smh.com.au/technology/technology-news/foxtel-and-rights-holders-drop-three-strikes-piracy-scheme-20160525-gp3b2y.html

Given, J., & Goggin, G. (2012). Australian Internet Histories: It's Time. Media International Australia, 143(1), 57–62.

Hern, A. (2017). Am I at risk of being hacked? What you need to know about the ‘Vault 7’ documents. The Guardian. Retrieved from https://www.theguardian.com/technology/2017/mar/08/wikileaks-vault-7-cia-documents-hacked-what-you-need-to-know

Hopewell, L. (2014, January 2). Australia Worst In The World For Piracy, According To Our Attorney-General. Gizmodo. Retrieved from www.gizmodo.com.au/2014/06/australia-is-the-worst-in-the-world-for-piracy-according-to-attorney-general/

Jaipragas, B. (2017). As Trump kills the TPP, can China-backed RCEP fill the gap? South China Morning Post. Retrieved from http://www.scmp.com/week-asia/geopolitics/article/2060041/trump-kills-tpp-can-china-backed-rcep-fill-gap

Lindsay, D. (2012). ISP Liability for End-User Copyright Infringements: The High Court Decision in Roadshow Films v. iiNet. Telecommunications Journal of Australia, 62(4).

Malcolm, J. (2017). Australia’s Battle Over Fair Use Boils Over. Electronic Frontier Foundation DeepLinks Blog. Retrieved from https://www.eff.org/deeplinks/2017/02/australias-battle-over-fair-use-boils-over

Marsden, C. (2017). Network Neutrality: From Policy to Law to Regulation. Manchester University Press.

Pew Research Center. (2016). Smartphone Ownership and Internet Usage Continues to Climb in Emerging Economies. Retrieved from http://www.pewglobal.org/files/2016/02/pew_research_center_global_technology_report_final_february_22__2016.pdf

Pietrosanti, F. & Aterno, S. (2017). Italy unveils a legal proposal to regulate government hacking. Boing Boing. Retrieved from http://boingboing.net/2017/02/15/title-italy-unveils-a-law-pro.html

Rimmer, M. (2017). The Trans-Pacific Partnership: Intellectual Property, Public Health, and Access to Essential Medicines. Intellectual Property Journal, forthcoming.

Scardamaglia, A. & Daly, A. (2016). Google, online search and consumer confusion in Australia. International Journal of Law and Information Technology, 24(3), 203-228.

Schrems v Data Protection Commissioner, Case C-362/14, CJEU, 6 October 2015.

Shaw, L. (2015). The impact of the new European General Data Protection Regulation in Australia. Keypoint Law. Retrieved from http://www.keypointlaw.com.au/keynotes/impact-new-european-general-data-protection-regulation-australia

Thomas, J., Barraket, J., Ewing, S., MacDonald, T., Mundell, M., & Tucker, J. (2016). Measuring Digital Inclusion in Australia: The Australian Digital Inclusion Index. Swinburne University of Technology. Retrieved from https://digitalinclusionindex.org.au/the-index-report/report/

Ting, A., Faccio, T., & Kadet, J. (2016). Effects of Australia's MAAL and DPT on Internet-Based Business. Tax Notes International, 83, 145–151.

Accountability challenges confronting cyberspace governance

Introduction

What started a little more than forty years ago as a government-sponsored network research project has evolved into a “global [...] substrate that [...] underpins the world’s critical socio-economic systems” (Demchak & Dombrowski, 2013, p. 29; Weber, 2013). Cyberspace has become a key domain of power execution and a core issue of global politics (Nye, 2010). Although initially construed as a space free from regulation and intervention (Barlow, 1996; Johnson & Post, 1996), cyberspace now faces a rising tide of threats to its stability and future development, which has spurred calls for more expansive governance.

Over the course of the past two decades, the term governance has enjoyed widespread use across a great number of discourses (Enderlein, Wälti, & Zürn, 2010). In the context of cyberspace, governance has come to refer to the sum of regulatory efforts put forward to address and guide the future development and evolution of cyberspace (Baldwin, Cave, & Lodge, 2010, p. 525). Cyberspace governance is characterised by a large number of actors, issue areas, and fora involved in processes of steering. Accountability structures are often incoherent in settings of this nature, and questions such as who is accountable to whom, for what, by which standards, and why remain opaque and warrant closer examination (Bovens, Goodin, & Schillemans, 2014). For purposes of illustration, it is worth considering the following: while critically important to the workings of the digital realm, the activities of some of the largest cyberspace governance entities, including among others the Internet Corporation for Assigned Names and Numbers (ICANN), the Internet Governance Forum (IGF), and the Internet Engineering Task Force (IETF), are not based on or mandated by international legal instruments. Furthermore, “there are no clear [or only few] existing structures such as courts, legislative committees, national auditors, ombudsmen, and so on, to which recourse can be made to render [these cyberspace governance institutions] accountable” (Black, 2008, p. 138).

Taking note of the complexities related to processes of account rendering in the context of cyberspace governance, this paper asks the following interrelated research questions:

  • Conceptually, what are the key accountability challenges confronting cyberspace governance?
  • How can these accountability challenges be addressed?

Attaining a better understanding of how accountability structures play out in cyberspace governance is key for increasing transparency, assessing processes of legitimisation, and scrutinising impending models of regulation.

This paper is structured in four sections: Section I reviews relevant background information and concepts, and lays out the methodology. Section II highlights key accountability challenges confronting cyberspace governance. Section III puts forward a set of policy recommendations geared towards addressing the accountability challenges identified in Section II. Section IV summarises the findings of the paper and offers some concluding remarks.

Conceptual frame and methodology

In order to grasp the accountability challenges confronting cyberspace governance, it is necessary to establish a common point of departure and lay out key concepts, i.e. cyberspace and accountability.

Cyberspace

Coined by William Gibson in the mid-1980s (Ottis & Lorents, 2010), cyberspace is the most elemental concept with regard to cyberspace governance (Kello, 2013, p. 17). It delimits the domain within which cyberspace governance can be construed. Even though cyberspace has become deeply embedded in everyday life, there is little clarity on what it comprises (Murray, 2007). The understanding of cyberspace is still nascent and the concept riddled with terminological ambiguity. The number of definitional accounts pertaining to cyberspace is bewilderingly large, ranging from technological to socio-political and economic descriptions (NATO Cooperative Cyber Defence Centre of Excellence, 2017).

Cyberspace is often equated with the World Wide Web but the two are not the same. Cyberspace can be thought of as a complex, highly distributed network infrastructure (Clarke & Knake, 2012). In contrast, the World Wide Web denotes a collection of resources (e.g. webpages) identifiable by means of global Uniform Resource Identifiers (URI), and accessible via cyberspace (World Wide Web Consortium, 2004).
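
To make the distinction concrete, the short snippet below, a minimal sketch using Python's standard library, decomposes a hypothetical URI into the components through which a web resource is identified; transporting the request across networks is what the broader notion of cyberspace covers.

    # Minimal sketch: splitting a URI into its standard components.
    # The URI itself is a hypothetical example.
    from urllib.parse import urlparse

    parts = urlparse("https://example.org/research/paper?id=42#section-2")
    print(parts.scheme)    # https: the access protocol
    print(parts.netloc)    # example.org: the host serving the resource
    print(parts.path)      # /research/paper: the path to the resource
    print(parts.query)     # id=42: query parameters
    print(parts.fragment)  # section-2: a location within the resource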

The view of cyberspace adopted in this paper is consistent with Chris Demchak and Peter Dombrowski’s understanding of cyberspace as a “global [...] substrate that [...] underpins the world’s critical socio-economic systems” (Demchak & Dombrowski, 2013, p. 29). Their definition underscores the economic, social, and political importance of the network infrastructure, and alludes to the multitude of docking points for governance and policy interventions, as well as stakeholder concerns.

Accountability

In terms of conceptual coherence, accountability suffers from definitional ambiguity similar to that of cyberspace. Over the past decade, accountability has become something of a catchword, and has been assigned various meanings by scholars of different disciplines, impairing consistent and comprehensive terminological application and research (Bovens et al., 2014). Although scholars seem to agree on the concept’s overall importance, they appear to be less unified apropos its constitutive elements.

Consciously abstaining from advancing yet another definition or reconceptualisation of accountability, and from further increasing the term’s elusiveness, this paper relies on what Bovens, Goodin and Schillemans call the minimal conceptual consensus:

“The minimal conceptual consensus entails, first of all, that accountability is about providing answers; is about answerability towards others with a legitimate claim to demand an account. Accountability is then a relational concept, linking those who owe an account and those to whom it is owed. Accountability is a relational concept in another sense as well, linking agents and others for whom they perform tasks or who are affected by the tasks they perform” (Bovens et al., 2014, p. 6).

Emphasising the concept’s socio-relational core, i.e. the onus of an actor or body to give reasons for or defend conduct to another set of actors, the minimal definitional consensus is concise, yet broad enough to retain empirical validity and to be operationalised in complex analytical environments, such as cyberspace governance (Bovens, 2007, p. 13).

Far from a coherent system, cyberspace governance resembles a jungle of different, at times competing, regulatory endeavours. Such endeavours can take many forms: they can be hierarchical with clear sanctions attached, e.g. legal rules and ordinances, international and national contracts and agreements, or softer, e.g. voluntary technical standards and protocols, and informal codes of conduct (Levi-Faur, 2011, p. xvi). In order to counter tendencies of disintegration and ensure continuous openness and stability of the digital environment, tangible accountability structures are of critical importance (Scholte, 2008, p. 15; Weber, 2014, p. 78).

Methods

From a methodological point of view, this paper employs qualitative means of data collection and analysis. It is grounded in a review of policy documents and secondary academic literature on accountability, cyberspace governance, and international relations. Data was collected by means of online desk research. Databases queried included, among others, Taylor & Francis Online, EBSCOhost, Elsevier Science Direct, Google Scholar, Google Books, and Search Oxford Libraries Online (SOLO). The sources identified were grouped and examined by means of content analysis.

Building on existing accountability scholarship and engaging in further theorisation, this paper serves as a stepping stone for thinking more rigorously about accountability in the context of cyberspace governance. Its goal is to contribute to current scholarly debates and to formulate relevant policy recommendations.

The findings of this paper are contextually and temporally specific and need to be understood as such. Much of the topic under investigation is still very much in flux. Conceptually, the governance of cyberspace is a field that is likely to remain under construction for the foreseeable future (Dutton & Peltu, 2007).

Key challenges

Cyberspace governance involves a great number of different constituencies, spans across various issue areas, and exhibits a high degree of institutional malleability (Kleinwächter, 2011; Mueller, Mathiason, & Klein, 2007, p. 237; Raymond & DeNardis, 2015, p. 41). Cumulatively, these factors contribute to a rise in complexity apropos basic structures of accountability.

A juxtaposition of the concepts of cyberspace and accountability reveals the following accountability challenges with regard to the governance of the virtual domain: the problem of many hands, the profusion of issue areas, and the hybridity and malleability of institutional arrangements.

The problem of many hands refers to a condition of accountability obfuscation caused by a great number of actors engaged in concurrent regulatory ventures (Bovens, 2007; Papadopoulos, 2003). “Because many different officials contribute in many ways to decisions and policies […] it is difficult even in principle to identify who is morally responsible for political [and technical] outcomes” (Thompson, 1980, p. 905). In the context of cyberspace governance, the number of stakeholders contributing to policy outcomes and regulatory deliberations is immense. To illustrate, a question such as “who is accountable for the current and future development of the virtual realm?” may yield any of the following answers: the Internet Corporation for Assigned Names and Numbers (ICANN), the Internet Engineering Task Force (IETF), the World Wide Web Consortium (W3C), the Internet Governance Forum (IGF), the International Telecommunication Union (ITU), large Internet Service Providers (ISPs) such as AT&T, powerful nation states or departments, such as the US Department of Commerce or the US National Security Agency, influential software companies, as well as civil society groups and individual experts who take part in and contribute to the operations of organisations such as ICANN or the IETF (DeNardis, 2014; Scholte, 2008, p. 19). While the abundance of actors involved in cyberspace governance does not (necessarily) imply an absence of accountability mechanisms, it does mean higher degrees of complexity.

The heterogeneity of stakeholder configurations can aggravate questions of agency and contribution. Accountability structures are more difficult to determine because actors co-produce outcomes and contribute to the end-product in hybrid constellations. Accountability structures can be complicated further by the conflation of stakeholder-specific traditions, standards, and expectations (Koppell, 2005, p. 94). Not only does the greater variety of actors and goals involved in governance ventures make it more difficult to identify the objects of accountability (i.e. for which goals should account be rendered?), but actors’ expectations can also diverge and complicate the emergence of clear lines of responsibility or accountability (Bovens et al., 2014; Carr, 2016, p. 43). Indeed, environments characterised by multiple stakeholders tend to provide opportunities for blame-shifting (Papadopoulos, 2010, p. 1039).

The problem of many hands represents but one accountability challenge in the context of cyberspace governance. The profusion of issue areas, spanning technical, socio-political, and economic spheres, constitutes another conundrum. In the context of cyberspace governance, the profusion and convergence of technical and non-technical issue areas can severely complicate accountability structures. Seemingly unrelated issue areas may suddenly converge. Examples of such convergence can be found, among others, in areas related to intellectual property rights protection and address naming and numbering:

“The names and numbers given to Internet entities, such as domain names used in Internet addresses, may seem to be a [solely technical] issue to be managed by the Internet Corporation for Assigned Names and Numbers (ICANN). But, the registration of a well-known trademark as a domain name with the intention of selling it back to the owner, called ‘cyber-squatting’, has led to governance issues that are also the concern of international organisations, like the World Intellectual Property Organisation (WIPO), and national and international legislation and regulations which also cover more traditional trademark and related concerns” (Dutton & Peltu, 2007, p. 8).

The confluence of issue areas can lead to “tangled web[s] of relationships” (Dubnick & Frederickson, 2014, p. xxi). Left in this tangled state, these intertwined webs of relationships can have serious consequences for accountability structures. For one thing, they can result in the erosion of (pre-existing) accountability structures and cause accountability deficits. For another, they can lead to dysfunctional amalgamations of accountability arrangements and bring about situations of accountability overcrowding (Bovens, 2007, p. 462).

The hybridity of institutional arrangements pertaining to cyberspace governance poses yet another accountability challenge. Cyberspace governance is characterised by the absence of a coherent regime or organisation in charge of enacting globally consistent and comprehensive norms and policies. A considerable number of institutions involved in cyberspace governance exhibit characteristics of fluidity and ad-hocism. Accountability structures tend to suffer from the dispersion of topics across different organisational settings and the institutional volatility that comes with it. These difficulties are further aggravated by the fact that stakeholders can take on different roles across different fora of interaction.

The propensity for role-shifting means that certain actors may be involved in the production of outcomes in one forum (acting as accountors) but may play the part of accountees in other institutional settings. For example, an academic research group may contribute substantially to the development of new security protocols, e.g. in the context of IETF meetings, but may hold private sector companies accountable for the faulty implementation or commercialisation of said security protocols, e.g. in circumstances of dispute resolution (Dickinson, 2014). “Insofar as accountability mechanisms are present, […] mechanisms [can] become mixed. The [jumble] of accountability mechanisms that results from this [can give] rise to uncertainty, confusion, or shirking” (Bovens et al., 2014, p. 250).

The hybridity of institutional setups also makes developments hard to track and renders procedural access for some stakeholders, including civil society, uneven, thereby undermining processes of public account giving (Jayawardane, Larik, & Jackson, 2015, p. 7). Civil society organisations have voiced concerns regarding unequal participation and the fact that decisions of a sensitive, yet far-reaching nature are made behind closed doors across several I* organisations, including, for example, the Internet Society (ISOC), the IETF, ICANN, the W3C, the Internet Architecture Board (IAB), as well as the regional Internet registries (RIRs) and country code domain name registries (APNIC, 2017).

Policy recommendations

In the context of cyberspace governance, the heterogeneity of stakeholders, the profusion of issue areas, as well as the malleability and distribution of institutional arrangements generate deep-rooted accountability tensions that are not easy to resolve. However, these tensions should not discourage researchers and policymakers from thinking about potential solutions and devising relevant strategies (Black, 2012). The subsequent paragraphs offer a set of policy recommendations geared towards addressing the three challenges identified above.

Heterogeneity of stakeholders

Cyberspace governance is not a unitary undertaking but exhibits characteristics of post-sovereignty. Processes of steering are “institutionally diffuse and lack a single locus of supreme, absolute, and comprehensive authority” (Scholte, 2008, p. 18). Given the complexity of the realm and the absence of a final arbiter, policy prescriptions centring on hierarchical command-and-control mechanisms appear ill-suited to resolving the tensions identified. Accountability structures should reflect the diversity of stakeholders and be established on a collective basis. In view of the dominance of sovereigntist (hierarchical) accountability artefacts, the implementation of shared accountability structures may entail a deliberate reworking of account-rendering functions and processes. While the call for collective accountability structures does not imply the participation of all stakeholders, it does mean the enfranchisement of all relevant parties (Malcolm, 2015, p. 2). The enlistment of the stakeholders essential to the resolution of specific cyberspace governance problems presents an important first step towards streamlining collective accountability structures and identifying corresponding responsibilities.

In terms of accountability enforcement, the institutionalisation of multistakeholder-oriented checks and balances is key. Independent, constitutionally inspired oversight mechanisms, such as ombudsmen or multistakeholder-versed third-party supervisory and review authorities, together with clear standards, provide useful instruments in this regard. Standards, in particular, support the introduction of meaningful benchmarks of expected behaviour and set criteria against which conduct can be assessed (Weber, 2009, p. 159). Given the heterogeneity of stakeholders, relevant standards need to be flexible, yet specific enough to take effect in the respective cyberspace governance arenas.

The adoption of constitutionally inspired enforcement mechanisms has proven fruitful in various cases. In the context of ICANN, for example, the appointment of an ombudsman has helped clarify otherwise murky accountability structures, and provided community members with a useful mechanism of recourse. The ICANN ombudsman evaluates complaints about the organisation (including staff, board, supporting organisations, and advisory committees) lodged by community members, and promotes understanding of pertinent community issues (Davidson, 2009, p. 137).

Profusion of issue areas

The intertwining of political, technical, economic, and cultural dimensions requires a conscious re-calibration of cyberspace governance debates. Given the scale and scope of the cyberspace governance landscape, accountability arrangements cannot meaningfully be established on the basis of broadly framed, overarching legal instruments, e.g. global treaties or covenants. Rather, discussions of accountability should be organised around specific, manageable issue areas, and include stakeholders from different backgrounds who are capable of flagging areas of intersection and convergence. The identification of relevant issue areas around which procedures and actor expectations can converge is critical for the emergence of tangible accountability structures (Krasner, 1985, p. 2). Issue specificity helps to reduce ambiguity apropos actor relations, incentives, and goals, and allows for the strategic construction and connection of different cyberspace governance debates, as well as for the attribution of stakeholder responsibilities (Slack, 2016, p. 76).

In the absence of clearly defined processes of account rendering, issue-specific policy networks can offer a useful corrective. In the context of the IGF, for example, so-called Dynamic Coalitions have served as critical means for creating accountability-related anchor points. Dynamic Coalitions are informal, issue-oriented groups of stakeholders working on specific cyberspace governance topics, e.g. freedom of expression and freedom of the media on the internet, network neutrality, or the internet of things. To be recognised, they have to “produce a written statement which [outlines] the need for the coalition, an action plan, a mailing list, the contact person(s), [as well as] a list of representatives from at least three stakeholder groups” (Internet Governance Forum, 2016). Such thematic groupings go some way towards creating a collective identity and sense of responsibility among stakeholders (Harlow & Rawlings, 2007, p. 560).

Malleability and distribution of institutional arrangements

To avoid forum-related accountability confusion, institutions and stakeholders involved in processes of cyberspace governance are well advised to clearly specify their mission and openly communicate their role (Malcolm, 2015, p. 4). Well-defined mission statements and mandates help to create longer-term commitment and guidance, and reduce the risk of ad-hocism and agenda shifting brought about by changing stakeholder configurations.

Institutional inaccessibility and discrimination should be addressed through proactive engagement and resourcing, as well as through flexible institutional set-ups. Cyberspace governance bodies need to be procedurally and structurally open to admit the participation of all stakeholders who are significantly affected by specific policy problems, or interested in the deliberation and resolution of cyberspace governance issues (Malcolm, 2015). “Proactive dissemination of pertinent, appropriate and quality information […] at the right time, in the right format, and through the right channels increases the likelihood of uptake by [relevant stakeholders and decreases the possibility of defection and exclusion]” (World Health Organisation, 2015, p. 10). Organisational transparency and certainty, as well as meaningful stakeholder inclusion structured around specific issue areas are of critical importance for the creation of clear accountability structures and the assurance of continuous stakeholder buy-in.

Conclusion

In as complex and dispersed an environment as cyberspace, the examination and institutionalisation of accountability structures is not a straightforward undertaking. Researchers and policymakers are confronted with tangled webs of accountability relationships of different texture and design. Untangling these webs requires conscious and concerted efforts at the process and institutional levels (Bovens et al., 2014, p. 251).

This paper has argued that accountability structures are challenged by the very elements that are constitutive of cyberspace governance, namely the number of stakeholders contributing to regulatory ventures, the multiplicity of issue areas concerned, and the hybridity and distribution of the institutional arrangements involved. Taken together, these factors bring about the following accountability challenges: the problem of many hands, the profusion of issue areas, and the malleability of institutional arrangements.

With a view to addressing the challenges identified, this paper has reasoned that, in accordance with the distributed nature of the realm, accountability needs to be exercised and structured in a collective fashion. Given the polycentric nature of cyberspace governance, one-dimensional, sovereigntist conceptions of accountability that seek to attach ultimate responsibility to a unitary source of authority are misplaced. In the absence of a single locus of authority, accountability structures need to be consciously reframed, involving all relevant stakeholders. “All nodes in a given [cyberspace governance venture] must play their part in delivering transparency, consultation, evaluation, and correction” (Scholte, 2008, p. 20). Clear communication of and clarity about institutional and stakeholder-related roles, goals, and expectations are key success factors for establishing accountability structures in complex governance settings. Greater organisational transparency, proactive stakeholder engagement, and procedural openness are key prerequisites for tackling institutional malleability and elusiveness.

No claim is made that the recommendations put forward by this paper will resolve all accountability challenges pertaining to the governance of the digital realm. On the contrary, this paper recognises that much of what has been discussed is still very much terra incognita and requires continuing research. Establishing accountability structures in polycentric governance environments is a demanding and difficult enterprise which requires concerted and sustained efforts by scholars and practitioners alike.

References

APNIC. (2017). I* organizations – APNIC. Retrieved May 31, 2017, from https://www.apnic.net/community/ecosystem/iorgs/

Baldwin, R., Cave, M., & Lodge, M. (Eds.). (2010). The Oxford Handbook of Regulation. Oxford University Press. doi:10.1093/oxfordhb/9780199560219.001.0001

Barlow, J. P. (1996). A Declaration of the Independence of Cyberspace. Retrieved from https://projects.eff.org/~barlow/Declaration-Final.html

Black, J. (2008). Constructing and contesting legitimacy and accountability in polycentric regulatory regimes. Regulation & Governance, 2(2), 137–164. doi:10.1111/j.1748-5991.2008.00034.x

Black, J. (2012). Calling Regulators to Account: Challenges, Capacities and Prospects. SSRN Electronic Journal. doi:10.2139/ssrn.2160220

Bovens, M. (2007). Analysing and assessing accountability: a conceptual framework. European Law Journal, 13(4), 447–468. doi:10.1111/j.1468-0386.2007.00378.x

Bovens, M., Goodin, R. E., & Schillemans, T. (Eds.). (2014). The Oxford Handbook of Public Accountability. Oxford University Press. Retrieved from http://www.oxfordhandbooks.com/view/10.1093/oxfordhb/9780199641253.001.0001/oxfordhb-9780199641253

Carr, M. (2016). Public-private partnerships in national cyber-security strategies. International Affairs, 92(1), 43–62. doi:10.1111/1468-2346.12504

Clarke, R. A., & Knake, R. K. (2012). Cyber War: The Next Threat to National Security and What To Do About It. HarperCollins.

Davidson, A. (2009). The Law of Electronic Commerce. Cambridge University Press. Retrieved from https://books.google.co.uk/books?id=VfIfAwAAQBAJ

Demchak, C., & Dombrowski, P. (2013). Cyber Westphalia: Asserting State Prerogatives in Cyberspace. Georgetown Journal of International Affairs. Retrieved from http://www.jstor.org/stable/43134320

DeNardis, L. (2014). The Global War for Internet Governance. New Haven, CT: Yale University Press. doi:10.12987/yale/9780300181357.001.0001

Dickinson, S. (2014). Background Paper (IGF 2014 Workshop 96: Accountability challenges facing Internet governance today). Retrieved from http://www.intgovforum.org/cms/wks2014/uploads/proposal_background_paper/internet-governance-accountability-challenges-background-paper.pdf

Dubnick, M. J., & Frederickson, H. G. (2014). Accountable Governance: Problems and Promises. M.E. Sharpe. Retrieved from https://books.google.co.uk/books?id=M32XUtMBSh4C

Dutton, W. H., & Peltu, M. (2007). The emerging Internet governance mosaic: connecting the pieces. Information Polity, 12(1–2), 63–81. Retrieved from https://www.oii.ox.ac.uk/archive/downloads/publications/FD5.pdf

Enderlein, H., Wälti, S., & Zürn, M. (Eds.). (2010). Handbook on Multi-Level Governance. Edward Elgar Publishing Limited. Retrieved from https://books.google.ch/books?id=YlmoCs207UAC

Harlow, C., & Rawlings, R. (2007). Promoting Accountability in Multilevel Governance: A Network Approach. European Law Journal, 13(4), 542–562. doi:10.1111/j.1468-0386.2007.00383.x

Internet Governance Forum. (2016). Dynamic Coalitions. Retrieved September 14, 2017, from http://www.intgovforum.org/cms/dynamiccoalitions

Jayawardane, S., Larik, J., & Jackson, E. (2015). Cyber Governance: Challenges, Solutions, and Lessons for Effective Global Governance. Retrieved from http://www.thehagueinstituteforglobaljustice.org/information-for-policy-makers/policy-brief/cyber-governance-challenges-solutions-and-lessons-for-effective-global-governance/

Johnson, D. R., & Post, D. (1996). Law and Borders: The Rise of Law in Cyberspace. Stanford Law Review, 48(5), 1367. doi:10.2307/1229390

Kello, L. (2013). The Meaning of the Cyber Revolution: Perils to Theory and Statecraft. International Security, 38(2), 7–40. doi:10.1162/ISEC_a_00138

Kleinwächter, W. (2011). A new Generation of Regulatory Frameworks: The Multistakeholder Internet Governance Model. In Kommunikation: Festschrift für Rolf H. Weber zum 60. Geburtstag (pp. 559–580). Stämpfli Verlag.

Koppell, J. G. S. (2005). Pathologies of accountability: ICANN and the challenge of “Multiple Accountabilities Disorder.” Public Administration Review, 65(1), 94–108. doi:10.1111/j.1540-6210.2005.00434.x

Krasner, S. D. (1985). International regimes (Vol. 3a). Cornell University Press. Retrieved from https://books.google.de/books?id=WIYKBNM5zagC

Levi-Faur, D. (Ed.). (2011). Handbook on the Politics of Regulation. Cheltenham: Edward Elgar Publishing. doi:10.4337/9780857936110

Malcolm, J. (2015). Criteria of meaningful stakeholder inclusion in internet governance. Internet Policy Review, 4(4). doi:10.14763/2015.4.391

Mueller, M., Mathiason, J., & Klein, H. (2007). The Internet and Global Governance: Principles and Norms for a New Regime. Global Governance, 13, 237–254.

Murray, A. (2007). The Regulation of Cyberspace. Taylor & Francis. doi:10.4324/9780203945407

NATO Cooperative Cyber Defence Centre of Excellence. (2017). Cyber Definitions. Retrieved May 31, 2017, from https://ccdcoe.org/cyber-definitions.html

Nye, J. S. (2010). Cyber Power. Belfer Center for Science and International Affairs, (May), 1–31. Retrieved from http://belfercenter.ksg.harvard.edu/files/cyber-power.pdf

Ottis, R., & Lorents, P. (2010). Cyberspace: Definition and Implications. In Proceedings of the 5th International Conference on Information Warfare and Security, Dayton, OH, US, 8-9 April (pp. 267–270).

Papadopoulos, Y. (2003). Cooperative forms of governance: Problems of democratic accountability in complex environments. European Journal of Political Research, 42(4), 473–501. doi:10.1111/1475-6765.00093

Papadopoulos, Y. (2010). Accountability and Multi-level Governance: More Accountability, Less Democracy? West European Politics, 33(5), 1030–1049. doi:10.1080/01402382.2010.486126

Raymond, M., & DeNardis, L. (2015). Multistakeholderism: anatomy of an inchoate global institution. International Theory, 7(3), 572–616. doi:10.1017/S1752971915000081

Scholte, J. A. (2008). Global governance, accountability and civil society. doi:10.1017/CBO9780511921476

Slack, C. (2016). Wired yet Disconnected: The Governance of International Cyber Relations. Global Policy, 7(1), 69–78. doi:10.1111/1758-5899.12268

Thompson, D. F. (1980). Moral Responsibility of Public Officials: The Problem of Many Hands. American Political Science Review, 74(4), 905–916. doi:10.2307/1954312

Weber, R. H. (2009). Accountability in Internet Governance. International Journal of Communications Law and Policy, 13, 152–167.

Weber, R. H. (2013). The legitimacy and accountability of the internet’s governing institutions. In Research Handbook on Governance of the Internet (pp. 99–120). Edward Elgar Publishing Limited.

Weber, R. H. (2014). Realizing a New Global Cyberspace Framework: Normative Foundations and Guiding Principles. Springer. Retrieved from https://books.google.co.uk/books?id=YemZBAAAQBAJ

World Health Organisation. (2015). WHO Accountability Framework. Retrieved from http://www.who.int/about/who_reform/managerial/accountability-framework.pdf

World Wide Web Consortium. (2004). Architecture of the World Wide Web. (I. Jacobs & N. Walsh, Eds.). W3C. Retrieved from https://www.w3.org/TR/webarch/
