Channel: News and Research articles on Governance

When internet interconnection trouble occurs, immediate coordination kicks in


Disclaimer: The author is the recipient of a grant from RIPE NCC’s academic cooperation initiative.

For the majority of people in developed countries, the Internet is invisible most of the time. A socket in the wall, a cell site atop a building, a WiFi password written on a restaurant menu – only rarely are we reminded of the fact that Internet connectivity is not just there like a natural resource. It has become ambient. But for some people there is another side to it. As ethnographer Susan Leigh Star wrote in 1999, referring to the relational character of infrastructures: “One person’s infrastructure is another’s topic, or difficulty”. In this text, I discuss a group of professionals for whom Internet connectivity is a topic and sometimes, a difficulty: the network engineers and peering coordinators (in short: networkers) around the world who make the Internet work.

The size of this group can only be estimated approximately: the Internet today has grown to close to 60,000 networks (called “autonomous systems” or “AS” in technical parlance). Yet the majority of these networks do not manage their Internet connectivity actively; they purchase it from a larger transit network. Like a leaf hanging off a tree, they connect to the Internet with a single stem. These networks do not play an active role in shaping the routing system; hence, for the purposes of this observation, they can be excluded. However, an estimated fifth of all Internet networks are relevant. They interconnect at two or more Internet exchanges, which means that they manage their interconnections actively. At each of these networks there is at least one networker whose job involves determining which interconnection is used for what traffic, configuring the interfaces and maintaining what I suggest calling an infrastructural relationship with the neighbouring networks. So as a lower bound estimate, there are at least 12,000 networkers who take care of Internet connectivity globally.

How do networkers ensure Internet connectivity?

These networkers jointly manufacture Internet connectivity. But how do they do it practically? Due to the decentralised character of the Internet, there is no hierarchical or multilateral organisation, no global institution to impose rules upon the network operators. Organisations such as the Regional Internet Registries, Internet exchanges or standards-developing organisations support what many refer to as a community on a meso-level. But they have no say in how networkers ultimately operate their networks. Is Internet connectivity thus the aggregated result of atomised, unrelated, individual actions, as the market metaphor would suggest? Is it contingent? Or what forms of coordination can we find between networkers? Existing research already highlights the role of trust, distrust or reputation between networkers. But it does not yet grasp the global dimension.

In the following, I will argue that at the core of the Internet we can find an interaction order among networkers that is ongoing, immediate and global in scope. This interaction order is laid out in the routing system; it is enacted by the networkers and complemented by the use of channels for real-time communication. This form of coordination among networkers plays a critical role in the maintenance of Internet connectivity.

Theoretical backdrop: Scopic media expand face-to-face situations

As a backdrop for this argument I am drawing on the work of economic sociologist Karin Knorr-Cetina. Having studied traders on financial markets, Knorr-Cetina noticed that these professionals make up a “genuinely global form” by which she means “fields of practice that stretch across all time zones” (2014). Traders are experts who work almost autonomously. She found that the central banking system facilitates and shapes direct interactions between them, although the trading desks are geographically distributed. It serves as what she calls “scopic media”. Knorr-Cetina uses the term scope to refer to a reflexive mechanism of both observation and projection. Traders both observe and act upon this central system, and they are kept in check by it, wherever they are. She suggests that scopic media bring together distributed phenomena or actors. They move the boundary between situations and systems, expand local situations geographically, and they transform face-to-face situations into synthetic situations.

In a limited way, networkers can be compared to traders. Both professional groups comprise highly specialised experts in a field that is characterised by complexity. Because networking requires expert knowledge and experience, it is not uncommon for networkers to act more or less autonomously within their organisations (less so in large firms) – simply because their superiors may not be able to comprehend the details. The reverse side of their specialisation is that many lack counterparts at their workplaces with whom they can discuss arising issues. This is one of the reasons why networkers are inclined to turn to fellow professionals outside of their organisation for advice or chat, even if those colleagues are competitors on a company level.

Networkers are united by the routing system

Thinking along the lines of Knorr-Cetina’s concept of scopic media, let’s look at what promotes the common concern shared by networkers around the globe. Networkers are virtually united by a continuous focus on the routing system. Although each networker has his or her unique perspective, the routing system is of interest to everyone. This system emerges from the path availability information that networks announce to their neighbouring networks by means of the Border Gateway Protocol (BGP).

Every network depends on this resource, which is produced collaboratively. But, as any networker knows, BGP comes with uncertainties. One of the well-known challenges is that, even today, networkers cannot generally know if the route information that they receive from their counterparts is correct. There is no central authority either. This uncertainty, in combination with further interdependencies between the networks, can lead to critical situations at any moment, so a common focus is necessary. “One movement from your partner can ruin your network for hours while you understand what’s wrong”, explains one networker.

“Think of Houston Mission Control”

As a consequence, networkers have monitoring systems in place. Real-time alerts draw their attention to irregularities at their interconnections. Sometimes scripts also trigger automated responses, such as the re-routing of traffic. In small networks, networkers will “watch the edge routers all the time”. Larger networks have so-called network operations centers (NOCs). “Think of Houston Mission Control”, explains one high-level engineer, “our people work in shifts to watch over our network 24 hours a day.” The monitoring systems produce graphs with live views of the network’s interconnections. “You understand when something goes wrong almost immediately”, comments another networker. A key qualification for networkers is to be able to “stay on screen for hours”, adds a third. Because it absorbs the networker’s attention while reflecting his or her inputs, the routing system can be seen as a “centering and mediating device” in accordance with Knorr-Cetina’s concept of scopic media.

Looking at it more closely, networkers receive feedback about quality parameters at their interconnection points through monitoring tools. The tools deliver information about interface utilisation, congestion, the latency of important connections, packet loss or about the volume of traffic that they are exchanging with other networks. Some tools also highlight when abnormalities in traffic patterns occur or when traffic flows shift from one interconnection to another. However, while monitoring tools do alert the networkers to problems, they cannot explain them. They indicate that something may be wrong, but they do not identify the issue. An edge router may be set up to alert its operator that it is receiving an unusually large number of prefixes (these are clusters of IP addresses) from one interconnection partner, but it will not be able to determine why that is. This is an issue because the causes of irregularities can be manifold and networkers have to react to them, often quickly.
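The shape of such a threshold alert can be sketched in a few lines of Python. This is purely illustrative and not drawn from any real monitoring tool: the AS numbers come from the documentation range, and the baselines and threshold factor are invented for the example.

```python
# Hypothetical sketch of a prefix-count alert: flag any peer that announces
# far more prefixes than its usual baseline. All numbers are made up for
# illustration; real networks use mechanisms such as per-session maximum
# prefix limits or dedicated monitoring systems.

EXPECTED_PREFIXES = {
    64500: 120,    # documentation-range ASNs with invented baselines
    64501: 4300,
}

def prefix_alerts(received, expected=EXPECTED_PREFIXES, factor=1.5):
    """Return (asn, count) pairs whose count exceeds the baseline by `factor`.

    As the text notes, the alert only signals *that* something is off;
    deciding whether it is a route leak, a merger or normal growth still
    takes a human.
    """
    alerts = []
    for asn, count in received.items():
        baseline = expected.get(asn)
        if baseline is not None and count > baseline * factor:
            alerts.append((asn, count))
    return alerts

# Example: AS64500 suddenly announces 5,000 prefixes instead of ~120.
print(prefix_alerts({64500: 5000, 64501: 4200}))  # -> [(64500, 5000)]
```

The sketch deliberately stops at detection: interpreting the alert is exactly the part that, as the next section argues, requires human expertise.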

Human expertise required

For Internet users it may be difficult to imagine the lack of certainty in Internet networking. So here are some examples of situations that networkers have to decipher: A jump in the number of prefixes received from an interconnection partner could indicate a problematic route leak (a misconfiguration), but it could also mean that the interconnection partner’s network has undergone a corporate merger and has more customers now. Traffic flows between two networks may shift from one interconnection point to another and cause congestion for various reasons: there may be hardware problems, another network may have shut down the interconnection session to do maintenance, or there may be problems with the physical infrastructure, for example when a tractor has unearthed the fibre cable. A networker may see traffic levels increase at one point because of a media event (think Olympics) or because the interconnecting network has started to peer with a customer of hers. These examples indicate that while the routing system does serve as a kind of a nervous system, it lacks comprehensive talk-back functions. The information that a single networker can gather from it is limited and potentially full of unknowns.

Need for cooperation

Networkers help themselves by coordinating with each other directly beyond what can be fitted in BGP messages. When problems appear to be so broad in scope that they cannot be resolved with the networker’s direct interconnection partners or friends alone, one place to turn to for information is the many mailing lists that are maintained by the Internet exchanges, by local Network Operator Groups (NOGs) or by the Regional Internet Registries. Yet these lists are, by definition, limited in scope. What’s more, turning to the right mailing list already requires a preliminary understanding of the problem. And communication via the lists can be slow.

So networkers have found themselves another place to go to, especially when problems arise. They use chat rooms “to stay in touch with the community at large”. The existence of these informal rooms is neither a secret, nor is it widely known outside the networker community. They are technically open but socially closed places for networkers to meet online. Depending on the room, a couple of hundred networkers will be logged in at any time of the day. Even large networks try to listen in on the chat. It is common, too, that networkers set up alerts so that they will get notified when their AS number is mentioned in the chat. Some find the chats to be “a waste of time” because of the high “spam to noise ratio”. Others report having turned their backs on these chat rooms due to other users’ bad manners and misogynist behaviour. Yet others describe the chats as invaluable places for information-sharing and swift coordination. Certainly, any explanation of how networkers manufacture Internet connectivity would be incomplete without mentioning the use of this medium. It would mean omitting an important part of the informal self-governance at the core of the Internet.

Useless chatter – until something happens

What most characterises the chat rooms in the eyes of my interviewees is that they allow networkers to communicate quickly and directly with each other without a formal process around it. Networkers do not want company borders or bureaucracy to stand in their way when they are busy getting things done, i.e. fixing technical problems. Chatroom users appreciate the immediate virtual presence of their colleagues. They use the chat room as a diagnostic tool; they address the crowd to find people who share niche problems, e.g. problems with specific hardware configurations. They also distribute time-critical information that will be of interest to many others, e.g. when there are problems at a large Internet exchange facility. A typical scenario is also that one networker searches for someone from a specific other network, and when the match is made, they switch over to a private conversation.

The dimension I would like to highlight here, however, is the wider capacity for ad-hoc coordination that can emerge from these chat rooms. This happens when things go wrong. Because, as one networker comments, “it is mostly useless chatter – until something happens”. Here are three of the many examples given by networkers for what they do in the chat rooms:

“The day it matters is a day like 9/11. A day like that where something goes wrong. Where you lose a Gigabit of capacity in one day and you say: ‘I have lost half of my capacity to the States. I need help.’ And someone answers: ‘You’re lucky. I have half free. Do you want it for a few days?’ (...) People were giving each other products you would have charged for. People were giving away capacity to make sure it worked.”

In another instance in 2012, a hurricane had caused damage in the Caribbean and the north-eastern part of the United States. Through RIPE NCC’s distributed network monitoring system “Atlas”, networkers learned where power outages had occurred. In the chat room, they gathered to organise generators and diesel to restore power to facilities that were down.

In one last example, a highly trusted network became the source of a major irritation in the routing system. According to one networker, in 2010 the following happened:

“RIPE NCC sent out a specific BGP update message to their peers and upstreams to test whether BGP sessions could carry a certain large payload. Usually those messages are quite small in size. They wanted to test: can we carry cryptographic signatures in the payload? And those, obviously, usually are larger. So they sent an update that was technically valid and should not have created problems. But it did create massive problems. And because of the particular software defect that was triggered it would ripple out over the entire Internet and continue to be destructive. It was a very unique case. And it was on that chat that within minutes people started, within seconds even, they started talking about ‘Where is this coming from?’, ‘Who is sending this update?’, ‘What is causing this?’ And the matter was resolved quite fast, once people realised that it was the RIPE NCC update. And then they called RIPE NCC and they said: ‘You should turn this off immediately.’ I feel that the chat room was very useful in scenarios like that.”

Future-proofing the Internet

These examples expose a specific trait of the global interaction order: it has the ability to deal with new and unforeseen problems. The Internet is often praised for being open and for allowing permissionless innovation at the application layer. But as we can see, such innovations can pose new challenges to networks, demanding continuous adaptability from those who seek to ensure Internet connectivity. Informal structures foster flexibility and timely problem-solving.

That said, the global interaction order gives rise to questions, some of which are practical, some are matters of principle. Practical questions arise with regard to growth. Only two years ago, the Internet consisted of close to 50,000 networks; now there are close to 60,000 already. (How) will new generations of networkers find their way into these structures, learn about informal rules or change them if the Internet continues to grow at this pace? Some networkers are already complaining that the size of the chat rooms has become unworkably large. Others fear that with more users comes increasing publicity, which might bring some of the practices under scrutiny. This leads to the question of principle. If this informal mode of coordination is so important that it should be considered an expression of Internet governance, then it is important to create legitimacy around it. This text is an attempt to start this process.

Networkers will undoubtedly be doing themselves no favours if they give in to the urge to seal off their semi-public virtual societies or retreat to private social networks. This is for two reasons: First, the way the Internet is designed, networks are in this together, and they will be affected by the actions of newcomers in any case. Second, what is the alternative? If networkers lose this means of easy, open and immediate coordination, they will be left with the current, fragmented organisational structures. These structures serve important purposes. They cannot, however, substitute for the flexible means of immediate cooperation needed in a pinch.

References

 
Knorr Cetina, K. (2014). Scopic media and global coordination: The mediatization of face-to-face encounters. In K. Lundby (Ed.), Mediatization of communication (p. 40). Berlin: de Gruyter.

Star, S. L. (1999). The ethnography of infrastructure. American Behavioral Scientist, 43(3), 377-391 (p. 380).

Gaps and bumps in the political history of the internet


Disclaimer and acknowledgement: This paper is based on a presentation given at the AOIR 2016 pre-conference workshop on “Challenges in Internet History and Memory Studies” in Berlin, in October 2016. The author would like to thank Camille Paloque Bergès and Dominique Trudel for their help and input on the original draft, as well as Meryem Marzouki, Benjamin Peters, Joris Van Hoboken and Frédéric Dubois for their constructive criticism and suggestions during the review process.

Introduction

I used to work as a legal analyst for a French advocacy group defending civil and political rights on the internet. When I decided to go into academia to start working on a doctoral dissertation more than four years ago, the goal was to take a step back. I sought to get out of the policy-making frenzy, to break away from the repetitive and at times even hysterical activist discourse. By studying political conflicts (Tilly & Tarrow, 2015) related to the expansion or restriction of rights exerted through, and affected by, communication technologies – what I will call “communication rights contention” (or “digital rights contention” when referring specifically to digital technologies), my goal was at once personal and political. I hoped to build a richer understanding of this community of practice in which I was taking part, but also of its political environment in order to help produce actionable knowledge for citizens interested or even mobilised around these issues, in the vein of what Stefania Milan has termed “engaged research” (Milan, 2010).

Such engaged research is far from original in the field of internet policy: many of us realise that the digital environment is one of the areas in contemporary society where political conflicts are unsettling long-standing balances in power relationships, both between and within state, market and civil society (though the quantitative question – “how much?” – and the normative one – “is it for the better?” – are of course fiercely debated).

Unsurprisingly, the amount of scholarship devoted to rights and citizenship in the online environment has kept rising since the internet's inception, fueled in the past few years by dramatic events like the WikiLeaks disclosures of 2010, the so-called Arab Spring in 2011 and the release of the first Snowden documents in 2013 (see Figure 1). Issues such as copyright law, net neutrality, online censorship, big data surveillance, “cybersecurity” and hacking, and their impact on rights have now become mainstream news and policy items that a growing number of people in academia want to engage with.

 


 

Figure 1. Volume of academic papers per year dealing with digital rights contention between 1993 and 2015.1

 

Unfortunately, within this field of research interested in “digital rights contention” and led by socio-legal researchers, political scientists and sociologists, relatively little attention has been paid to history. Among the references on digital rights contention surveyed in Figure 1, about 10% mentioned the term “history” in their title, abstract, keywords or in the name of the journal in which they appeared, and these 10% include a significant number of false positives. Of course, the rest of the literature may contain historical considerations in passing, but generally speaking, this suggests that full-fledged historical perspectives have so far been rather scarce in the literature on digital rights contention.

In this short essay, I reflexively draw on my own journey through various academic disciplines and streams of sources that I deem most relevant for the historical study of digital rights contention. Reflexivity is a look-back on one's own thought process, how the dominant forms of thinking and inquiring affect the researcher and the research object (Shacklock & Smyth, 2002). Here, my goal is not only to provide a short overview of the strengths and weaknesses of this existing historiography for those embarking on a similar journey – especially people in academia and in the wider “digital rights community”, but also to identify “gaps and bumps” in this literature and offer a word of caution against the way dominant historical narratives might preclude us from more complex and critical thinking on internet policy. After pointing to the positive prospects opened by the maturation of internet history as a sub-discipline, I conclude by suggesting a couple of research agendas to those, including historians, willing to “fill the gaps”, with a view to putting together a more complete and balanced picture of the internet's political history.

IANAH (“I am not a historian”): From law to history

IANAL – an acronym for “I am not a lawyer” – was the way Usenet users would clarify that, though they might have been giving legal advice in the course of an online interaction, they were not engaging in the unauthorised practice of law. Though I have a law degree and worked as a legal analyst, I am not a lawyer either, and I am even less of a trained historian.

Rather, my own realisation of the importance of history for the study of internet politics – and my modest practice of it – is mostly incidental. It first drew on the work of prominent US legal scholars and social theorists of the internet, such as Lawrence Lessig. Now, as stressed by one of his critics (who praises him in that regard), almost all of Lessig's talks and books “make extensive reference to the history (and to a lesser extent, sociology) of science, because he has been obsessed with the way controversies over knowledge become baked into political practice” (Mirowski, 2015). Though Lessig's books on internet policy (Lessig, 2004, 2006) were some of the very first readings that, at the end of 2008, pushed me towards digital rights activism, others ensued, such as Yochai Benkler's The Wealth of Networks (2006), where the author grounds his social theory of the “network society” and its main normative arguments in the analysis of historical developments, such as the newspaper industry in the 19th century or the regulation of radio broadcasting in the US in the 1920s and 1930s. Then came Jonathan Zittrain, another US legal scholar who, like Lessig and Benkler, is closely associated with the Berkman Klein Center for Internet & Society at Harvard University, and whose book The Future of the Internet (Zittrain, 2008) went over part of the history of early computer networks to warn against the increasingly centralised architecture of the internet.

Among this group of US legal scholars who have laid the intellectual ground for many digital rights activists (Mueller, Kuerbis, & Pagé, 2004), Tim Wu is probably the one who has most openly espoused history as a discipline. Already in 2006, he offered an important contribution with Who Controls the Internet?, co-authored with Jack Goldsmith (Goldsmith & Wu, 2006). In this book, they followed Lessig in countering cyber-libertarians like John Perry Barlow (Barlow, 1996) to show how states were reasserting their sovereignty over the supposedly “borderless internet” through techno-legal strategies. In 2010, Wu went much further in that endeavour with The Master Switch, delving in greater detail into some of the examples used by his colleagues. Surveying the development of communication and media industries in the US since Bell's telephone, he went on to assert that

“history shows a typical progression of information technologies: from somebody’s hobby to somebody’s industry; from jury-rigged contraption to slick production marvel; from a freely accessible channel to one strictly controlled by a single corporation or cartel – from open to closed system (...)” (Wu, 2010).

In sum, the question explored by these legal scholars is whether the internet, despite its countless founding techno-utopias about its subversive and democratic potential, was undergoing the same process of “feudalisation” as past information technologies, and how law and policy might help stop that fate.

These few US East Coast legal scholars were of course not the only ones resorting to history to shed light on internet policies (e.g., Hargittai, 2000; Spar, 2003; Braman, 2009, 2012). In doing so, they actually followed a long stream of legal research in the US, and in particular Ithiel de Sola Pool's foundational work on the relationship between communications technologies, the law and human rights (Pool, 1983). For me, coming from a region where the legal culture has traditionally been much less in dialogue with social sciences than in common law countries, their analysis was extremely refreshing. Rather than sticking to law and policy “in the book” as is too often the case on this side of the Atlantic, these scholars suggested that we seek to understand how they developed as both discourses and practices, through history.

Other contributions to the history of digital rights contention

With a better appreciation of the fact that, “in order to explain the structures of contemporary societies, one must investigate their historical origins and development” (Deflem & Lee Dove, 2013), I sought to extend my understanding of the political history of the internet. This group of US legal scholars had pointed in an interesting direction, but their use of history is often selective and repetitive, sometimes biased by underlying normative assumptions. As I started my PhD, I turned to three other streams of reference to understand the political genealogy of the internet (each comprising various disciplines and approaches): the history of technologies, technologists and “internet revolutionaries”; the production and use of internet technologies by social movements; the net's cultural and economic histories.

 






| Category (shorthand) | Disciplines/approaches | Representative authors | Positive features | Possible shortcomings |
| --- | --- | --- | --- | --- |
| 1. History of law and policy | Law, political science | Wu, Lessig | Focus on policy and law “in practice”, comparative cross-temporal analysis | US-centric, selective and repetitive use of history |
| 2. History of technology and technologists | Science and Technology Studies, journalistic chronicles, biographies | Hafner & Lyon, Abbate, Levy | Document the history of technology and its seminal political framing, account for the first contentious episodes | US-centric, hyperbolic tone, individualistic focus on a few “computer heroes” |
| 3. History of social movements | Political sociology, anthropology | Coleman, Jordan, Mueller | Focus on processes of identity formation, emergence of action repertoires in digital rights activist groups | Overly celebratory or theoretical, lack of diversity in the actors/groups studied |
| 4. Cultural history | Cultural history, political economy, critical theory | Turner, Mattelart, Barbrook, Schiller | Transdisciplinary critiques of the “rhetoric of technological sublime” | May appear too distant from contemporary debates |

Figure 2. Streams of literature relevant to the history of digital rights contention

 

The first stream is two-fold. On the one hand, it is comprised of early works influenced by Science and Technology Studies (STS) on the history of internet technologies and technologists: references such as Hafner and Lyon's Where Wizards Stay Up Late (1998), Abbate's Inventing the Internet (2000) or Bardini’s Bootstrapping (2000). They put the spotlight on the scientist's lab in Cold War America and help explain the technical origin of design choices that have had significant political implications. Based on extensive interviews with key protagonists who took part in the elaboration of personal computers and internet protocols, they are sometimes similar in tone to the descriptions of “how the Internet came to be” offered by some of the net's so-called “founding fathers” (e.g. Cerf & Aboba, 1993).

On the other hand, we also find sources that are not tied to an academic discipline like STS but are rather based on journalistic or participant-observer dive-ins into the early underground world of hackers and other early computer cultures (Levy, 1984; Hafner & Markoff, 1995; Ludlow, 1996; Hauben & Hauben, 1997; Rheingold, 2000). These references also include accounts offered by prominent actors as well as their biographers and chroniclers, whether they were hackers or activists who witnessed and took part in some of the first contentious episodes surrounding rights in the digital environment from the late 1980s on (Wieckmann, 1989; Bowcott & Hamilton, 1990; Sterling, 1993; Levy, 2001; Godwin, 2003; Dreyfus & Assange, 2012; Greenberg, 2012). These references are interesting because they capture the political understandings of computer technologies at the time, and document the first interactions between early “internet revolutionaries” and their political environment – for instance the first waves of repression targeting hacker groups in the 1980s and 1990s. By recording the making of the internet and of what would become the so-called “digital rights movement”, they generally offer a good starting point for the history of digital rights contention.

In academic writings on internet policy, these histories of technologies, technologists and “internet revolutionaries” are extremely influential. Some of them, such as Steven Levy's Hackers, have even achieved iconic status, to the point of being must-reads for any informed discussion on hackers. But despite their hyper-visibility, they also come with certain flaws. First of all, they tend to be overly individualistic. As Roy Rosenzweig already observed in a seminal review of the net's historiography published in 1998, early STS approaches to internet history tend to trace the development of a technology by following “great men” of science as they navigate technical challenges and bureaucratic conundrums (Rosenzweig, 1998). In doing so, they sometimes overlook the role of institutions and ideologies in the shaping of technology, and downplay the importance of the wider context by insisting on the role of key individuals in technological paths. Though the second stream – the one concerned with early hackers – is often more attentive to the personal histories, political commitments and ideologies of “computer heroes” who had significant influence on the political framing of the internet, it also focuses on a handful of individuals and tends to flatten out the diversity of the actors involved in early underground computer cultures.

What is more, both streams tend to be subject to the celebratory, hyperbolic tone of their times. As James Curran writes, “their central theme is that utopian dreams, mutual reciprocity and pragmatic flexibility led to the building of a transformative technology that built a better world” (Curran, 2012). This bias raises the question of their double status as historical objects: many of these works self-identify as scholarly works of history, and most can indeed be treated as such, but they also suffer from methodological and epistemological flaws (Serres, 2000). For that reason, they have often reinforced dominant “grand narratives”, sustaining taken-for-granted assumptions about the supposedly emancipatory essence of the internet.

To help alleviate these shortcomings, I delved into two other streams of work, which form the third and fourth branches of internet scholarship most relevant to the history of digital rights contention.

The third branch documents the production and use of internet technologies by social movements. Besides hacker groups, international NGOs had started using computer networks in their increasingly global advocacy efforts from the early 1980s on (Murphy, 2005; Willetts, 2010), but it was really the launch of the Web that broadened access to these tools in the 1990s, in particular with the rapid growth of the Global Justice movement formed to oppose neo-liberal globalisation. The Global Justice movement and its enduring legacy have received a lot of attention from social theorists and sociologists (Atton, 2005; Dahlberg, 2007; Cardon & Granjon, 2010; Hands, 2011; Wolfson, 2014; Gerbaudo, 2017; Funke & Wolfson, 2017), in particular the self-publishing platform Indymedia founded during the 1999 protests in Seattle (Halleck, 2004; Pike, 2005; Pickard, 2006). Sociologists and anthropologists have also paid attention to the centrality of the free software movement (Kelty, 2008; Coleman, 2012) and of hackers (Jordan & Taylor, 2004; Coleman, 2014; Jordan, 2016), to the emergence and evolution of new digital action repertoires (Costanza-Chock, 2003, 2004; Markovic, 2000; Sauter, 2014; Züger, Milan, & Tanczer, 2015; Coleman, 2017; Vlavo, 2017), and to digital rights activism itself (Jordan, 1999; MacKinnon, 2012; Mueller, Kuerbis, & Pagé, 2004, 2007; Breindl, 2011; Croeser, 2012; Postigo, 2012). Like legal scholars, some of these authors tend to adopt an instrumental use of history, offering only a brief genealogy of a group or an action repertoire.
But when they are anchored in concrete mobilisations and pay attention to actors and their political practice rather than being too theoretically driven – which until recently has been an overall trend in the literature on activist uses of digital media (Neumayer & Rossi, 2016) – these references collectively document a rich repertoire of contentious episodes, one that is constantly expanding thanks to the growing body of empirically grounded work on contemporary digital rights contention.

The last stream of references I turned to is formed by cultural historians, critical theorists and political economists. Here we find works such as Paul Edwards' The Closed World, on the reciprocal relationship between Cold War discourse and ideology on the one hand and early computer research on the other (Edwards, 1996), and Fred Turner's exploration of the liberal and counter-cultural roots of “cyberculture” (Turner, 2006, 2013), but also other critical research looking at the founding utopias of digital technologies and their legacy (Barbrook & Cameron, 1995; Mattelart, 2000; Kirk, 2002; Mirowski, 2002; Galloway, 2004; Barbrook, 2007; Chun, 2008; Streeter, 2010; Proulx & Breton, 2012; Schulte, 2013; Morozov, 2013; Loveluck, 2015). Next to cultural historians and critical theorists, we can also include communications scholars like Dan Schiller or Robert McChesney, who have surveyed the internet's political economy and its evolution over time (Schiller, 2000; Pickard, 2007; McChesney, 2013). One common point among all of these authors is that they offer a critical assessment of the history of the internet – for instance by seeking to explain how the counterculture's neo-communalism of the 1960s morphed into “tech libertarianism” in the 1990s, or by stressing the sustained but sometimes overlooked role of the military and of capitalism in shaping internet politics. In doing so, they help deconstruct what James Carey (2005) has called “the rhetoric of the technological sublime”, prominent in the 1990s, which was extremely influential in digital rights activist and scholarly circles – at least until 2013 and the Snowden disclosures. In that sense, they allow for a healthy critical re-examination of the internet's history that makes them particularly relevant to today's debates.

Addressing lingering gaps in internet histories

Taken together, these four bodies of work (including that formed by legal scholars) complement each other, but most of them remain largely focused on the US.

One can make several hypotheses to explain the prominence of US-centric narratives in internet historiography: the dominance of the North American computer industry and the notion of the internet as a “great American invention” (Russell, 2012); the influence of US-based activists in the framing of the internet as a revolutionary technology, at a time when many European groups still harboured techno-sceptical attitudes towards computer networks; the institutional weight of North American academic institutions in producing and circulating knowledge on the net's history; and of course the status of English as a global lingua franca.

Other national or regional histories of political contention around past and present communications technologies would help challenge mainstream narratives, but they have traditionally been underrepresented. For instance, in the course of my ongoing doctoral research, I have been looking at France as an example of how conflicts around the political use of communication technologies, as well as the dominant ideologies of the time, have jointly shaped the laws and policies regulating the public sphere since the 16th century. Unlike US or even British scholars, who can draw on a quite extensive historiography on media policy, censorship and surveillance, those focusing on France will find similar historical work harder to come by, and sources much more scattered. There are of course exceptions (e.g., Chartier & Martin, 1985; Reynié, 1998; Darnton, 1983, 2010), but when you do find them – for instance, references dealing with the repression of political uses of amateur radio in the 1920s – chances are that their authors will be US historians (Vaillant, 2010).

How can we explain such a difference? This may be due to different traditions and approaches in communications history, and to the way specific national contexts have shaped the discipline in different countries (Simonson et al., 2013). On the whole, French historians of communication appear more interested in intellectual, economic or technological histories, which are often less directly relevant to the study of contention around communication rights. In the US, the fact that many critical media historians are found in transdisciplinary “communications departments” (Thibault & Trudel, 2015), as well as the centrality of the First Amendment in American political culture, may explain a greater interest in political and legal issues among scholars, even when they work on foreign countries.

When one considers more recent periods, the situation is very similar. You will have to look very hard – and often to no avail – to find scholarly sources depicting the French hacker scene of the late 1980s, or addressing the first mobilisations around digital rights in the 1990s, the way government agencies dealt with issues such as internet surveillance, or how they navigated difficult regulatory debates on, say, intermediary liability. A handful of French scholars have addressed the appropriation of early Web tools by activist groups (Blondeau, 2007; Granjon & Torres, 2012) and the politicisation of the first generation of internet users (Paloque-Berges, 2015), and surveyed controversies around internet regulation (Thoumyre, 2000; Mailland, 2001; Marzouki, 2001; Auray, 2002). But important gaps remain. Though the case of France might be quite extreme, the situation looks similar in many other countries. Even where such histories exist, they seem to be hard to find, have not been translated into English and are therefore usually not part of the conversation in transnational academic or activist circles.

Thankfully, things are starting to change for internet history. Scholars in the social sciences and humanities are increasingly tackling the important shortcomings of the current historiography. For instance, historians influenced by STS have been challenging teleological understandings of the internet's architecture (Russell, 2014). Other recent works offer a wider frame of analysis by contributing to a more global, inclusive and nuanced understanding of the history of either scientific or popular computer networking (Griset & Schafer, 2011; Mindell, Segal, & Gerovitch, 2013; Driscoll, 2014; Alberts & Oldenziel, 2014; Medina, 2014; Schafer & Thierry, 2015; Peters, 2016; Goggin & McLelland, 2017; Srinivasan, 2017; Wasserman, 2017). Most relevant for digital rights contention, others aim to uncover European histories of politicised engineers, hackers and digital rights groups (Bazzichelli, 2009; Lovink, 2009; Löblich & Wendelin, 2012; Burkart, 2014; Denker, 2014; Nevejan & Badenoch, 2014; Medosch, 2015; Fornés, Herran, & Duque, 2017), of alternative appropriations of hacking and digital rights in “network peripheries” (Chan, 2014; Toupin, 2016), of the emergence of large-scale surveillance and state-sponsored hacking in the digital era (e.g., Chamayou, 2015; Jones, 2017), or of privacy advocacy in “surveillance societies” (Bennett, 2008; Mattelart, 2010; Fuster, 2014; Vincent, 2016).

These trends partly reflect the increasing institutionalisation of internet history as a sub-discipline, with a growing number of international conferences devoted to the topic and more attention from academic publishers – as illustrated for instance by the recent launch of the journal Internet Histories (Brügger et al., 2017). They also result from a growing interest among internet scholars in historicising their research topics. Over time, this will hopefully bring hitherto invisible histories to the fore by encouraging translations of existing research, spark useful debates and stimulate new research directions.

Of course, it is unlikely that we will ever reach a “gapless” history. Rather, the goal should be to fill the gaps that we think will play a key role in helping build a critical discourse based on the analysis of past events and their ramifications through time, in the vein of Michel Foucault's overtly critical genealogical project: against the history of the winners, which normalises the status quo and reinforces their truth claims, Foucault's genealogy aims to uncover past power struggles “to separate out, from the contingency that has made us what we are, the possibility of no longer being, doing, or thinking what we are, do, or think” (Foucault, 1984).

In that way, we might be able not only to reassess our own normative assumptions about the internet, but also to help engaged citizens reclaim alternative histories, draw inspiration from forgotten discourses and practices, rediscover relevant action repertoires, and better inform the way we analyse and strategise to foster the emancipatory and democratic potential of communication technologies.

Conclusion: enriching the political genealogy of the internet

To conclude, I would like to point to two overarching lines of inquiry that would be useful to consider as we collectively seek to improve our grasp of the history of digital rights contention with an eye to the present and the future.

The first consists in building the “political memory” of the digital rights movement by further investigating the historical trajectories of its actors and repertoires. Beyond anecdotal evidence and a few precious pieces of scholarship highlighted above, we still lack a thorough picture of the emergence of hacker scenes outside of the United States, of the way human rights groups started forming and mobilising around policy and regulatory issues surrounding the internet in the second half of the 1990s and early 2000s, or of the first efforts aimed at building alternative internet architectures as the Web underwent its first major waves of commodification and regulation. The goal here would be to reclaim a more nuanced history than the one conveyed in mainstream narratives, to account for the diversity of the movement across historical and cultural contexts, and to shed light on the formation and evolution of political identities within it, as well as on continuities or shifts in strategies. In that regard, scholars of contentious politics can point us to many other topics that could bear useful lessons for today's digital rights activists.

At the same time, we should aim to shed light on the perspective of the many actors that digital rights activists contended against and interacted with, in particular state and corporate actors. As we continue to break away from the techno-utopian discourses that have too often been taken at face value, plaguing much of internet activism and scholarship of the past two decades, we may want to ask questions such as the following: what factors enabled state and corporate actors to resist or take advantage of the challenges posed by the internet to long-established power relationships in the media and telecommunications fields? What was the significance of the late-1980s hacker crackdown, or of the repression of the global justice movement that drove the first forms of transnational police cooperation against “cybercrime”? How can we historicise the growing public-private hybridisation in online surveillance and censorship, and how does this trend affect traditional notions regarding the “limits of the state” (Mitchell, 1991)? Or, taking the cue from recent research aiming to counterbalance the international focus in internet governance studies (Mueller, 2007, 2010; Epstein, 2013): how did internet policy come to form an autonomous policy field, how did it evolve within national state bureaucracies, what are the resulting tensions, and do they affect opportunity structures for activists (Carr, 2013; Pohle, Hösl, & Kniep, 2016)?

A second overarching line of inquiry that would be useful for digital rights scholars to explore concerns techno-critical movements. Two decades of neoliberal co-optation of discourses on openness and innovation have only served to reinforce the progressivism at the core of most modern ideologies and their teleological understanding of technology. This has created a “veil of illusion” that keeps most of us from asking uncomfortable questions, for instance regarding the formidable ecological impact of computer networks, or the contradictions of a movement defending human rights through technologies built by factory workers who are trapped somewhere in the globalised chain of production of our digital world and deprived of minimal political and social rights (Gabrys, 2011; Flipo, Dobré, & Michot, 2013; Fuchs, 2014; Taffel, 2016).

We need to remember that the internet was shaped by a widespread critique of technology and technocracies. At a time of an endless arms race of corporate and state actors towards the “next big thing” in computing technology – whether it is big data, artificial intelligence, quantum computing or the so-called “internet of things”, all of which raise the “threat level” for digital rights – it might be time to open up our own discourse to the possibility of a technological de-escalation. As engineers reclaim the legacy of the “appropriate technologies” movement (Pursell, 1993) – for instance by discussing concepts such as “limits-aware” computing (Chen, 2016; Qadir, Sathiaseelan, Wang, & Crowcroft, 2016) – and as hackers experiment with low-tech communications, social science scholars could also revisit the history of “technocritics”. To give but one example, at the turn of the 1970s and 1980s, collectives critical of computer technologies and of their growing role in public and private bureaucracies were numerous, both in the US and in Europe (Izoard, 2010; Wright, 2011; Jarrige, 2016). Although many of the actors invested in these groups later embraced computing in their professional trajectories, revisiting their critique might prove useful for today's political activists keen on promoting forms of “digital disengagement” and “innovative disruptions” of Silicon Valley's spiralling and ecocidal “disruptive innovation”.

These are just a few of many possible research directions that can make history relevant to contemporary debates around internet politics. What is certain is that, by bringing local or contextual histories to the fore, such investigations will open new avenues for comparative historical analysis, using “systematic and contextualized comparison” of processes through time and space to draw inspiring lessons from the past (Mahoney & Rueschemeyer, 2003). For instance, in the context of Web historiography, Brügger (2013b) has called for “cross-national studies of the history of transnational events on the web” to show variations in outcomes – that is, why similar events in different contexts lead to different results.

For digital rights contention, one example of such comparative analysis would be to compare some of the surveillance scandals of the 1960s and 1970s, when the first wave of computerisation sparked resistance and led to the adoption of data protection frameworks (Bennett, 1992; Fuster, 2014), with post-Snowden controversies, in order to explain why today's heated debates on surveillance are actually leading to the legalisation of large-scale and suspicionless surveillance rather than to its roll-back – what we might call the “Snowden paradox” (Tréguer, 2017). In that vein, Schulze (2017) recently compared the 1990s “Crypto War” with the recent Apple-FBI fight over encryption. And of course, all past political struggles around communications technologies will be relevant to such comparative analysis, not just the most recent ones around computer networks (e.g., Trudel & Tréguer, 2016).

As a collective endeavour, bringing forgotten or partly invisible histories to the fore will first require digging up the work of communication and media historians as well as social theorists who have so far been overlooked by digital rights scholars (whether they define themselves as legal researchers, sociologists or political scientists). As today's historians come to terms with the challenge of filling current gaps, we will also need to pay close attention to these developments to channel them into our own work, and directly participate in these efforts. Doing so will mean developing transversal approaches and overcoming pitfalls in digital research methods (Ankerson, 2012; Brügger, 2013a; Rogers, 2013), appropriating more traditional methods like oral histories and archival work, crossing or even dissolving traditional academic disciplines, and eventually overcoming the barrier of methodological nationalism so as to engage in fruitful transnational collaborations (Scheel et al., 2016). Finally, these endeavours should remain anchored in the critical project proposed by Foucault in his famous text What is Enlightenment?, and look for answers to the key question that he identified: “How can the growth of capabilities” – and more specifically those brought about by what he called “techniques of communication” – “be disconnected from the intensification of power relations?” (1984).

To be sure, advancing the political history of the internet and making it politically relevant in present times will be a challenging task. But, as George Santayana's well-known aphorism goes, “those who cannot remember the past are condemned to repeat it”. History is of course no guarantee in and of itself, but it certainly is a key resource for engaging in these debates and attempting to ward off the eternal return of a technocratic, dystopian future. As the trenches of internet politics get deeper and deeper and their stakes higher and higher, it can help us breathe new air into the internet policy debate while contributing to the still much-needed “reinvention” of media activism (Mueller, Kuerbis, & Pagé, 2004).


References

Abbate, J. (2000). Inventing the Internet. The MIT Press.

Alberts, G., & Oldenziel, R. (Eds.). (2014). Hacking Europe - From Computer Cultures to Demoscenes. Springer.

Ankerson, M. S. (2012). Writing web histories with an eye on the analog past. New Media & Society, 14(3), 384–400.

Atton, C. (2005). An Alternative Internet: Radical Media, Politics and Creativity. Edinburgh University Press.

Auray, N. (2002). L’Olympe de l’internet français et sa conception de la loi civile. Les Cahiers du numérique, 3(2), 79–90.

Barbrook, R., & Cameron, A. (1995). The Californian ideology. Mute, 1(1), 44–72.

Barbrook, R. (2007). Imaginary Futures: From Thinking Machines to the Global Village. London: Pluto Press.

Bardini, T. (2000). Bootstrapping: Douglas Engelbart, Coevolution, and the Origins of Personal Computing. Stanford University Press.

Barlow, J. P. (1996, February 9). A Cyberspace Independence Declaration. Retrieved from https://w2.eff.org/Censorship/Internet_censorship_bills/barlow_0296.declaration

Bazzichelli, T. (2009). Networking: The Net as Artwork. Books on Demand.

Benkler, Y. (2006). The Wealth of Networks. Yale University Press.

Bennett, C. J. (1992). Regulating Privacy: Data Protection and Public Policy in Europe and the United States. Cornell University Press.

Bennett, C. J. (2008). The Privacy Advocates: Resisting the Spread of Surveillance. Cambridge: MIT Press.

Blondeau, O. (2007). Devenir média: l’activisme sur Internet, entre défection et expérimentation. Paris: Amsterdam.

Bowcott, O., & Hamilton, S. (1990). Beating the System: Hackers, Phreakers and Electronic Spies. London: Bloomsbury Publishing PLC.

Braman, S. (2009). Change of State: Information, Policy, and Power. Cambridge, Mass: MIT Press.

Braman, S. (2012). Privacy by design: Networked computing, 1969–1979. New Media & Society, 14(5), 798–814.

Breindl, Y. (2011, October 22). Hacking the Law: An Analysis of Internet-based Campaigning on Digital Rights in the European Union. Faculté de Philosophie et Lettres at the Université libre de Bruxelles, Brussels.

Brügger, N. (2013a). Historical Network Analysis of the Web. Social Science Computer Review, 31(3), 306–321.

Brügger, N. (2013b). Web historiography and Internet Studies: Challenges and perspectives. New Media and Society, 15(5), 752–764

Burkart, P. (2014). Pirate Politics: The New Information Policy Contests. Cambridge, Mass: The MIT Press.

Cardon, D., & Granjon, F. (2010). Médiactivistes. Paris: Les Presses de Sciences Po.

Carey, J. W. (2005). Historical Pragmatism and the Internet. New Media and Society, 7(4), 443–455.

Carr, M. (2013). The political history of the Internet: A theoretical approach to the implications for U.S. power. In Cyberspaces and Global Affairs (pp. 173–188).

Cerf, V. G., & Aboba, B. (1993). How the Internet Came to Be. The Online User’s Encyclopedia. Retrieved from http://www.virtualschool.edu/mon/Internet/CerfHowInternetCame2B.html

Chamayou, G. (2015). Oceanic enemy: A brief philosophical history of the NSA. Radical Philosophy, (191). Retrieved from https://www.radicalphilosophy.com/commentary/oceanic-enemy

Chan, A. S. (2014). Networking Peripheries: Technological Futures and the Myth of Digital Universalism. Cambridge, Massachusetts: MIT Press.

Chartier, R., & Martin, H.-J. (Eds.). (1985). Histoire de l’édition française (Vols 1–4). Promodis.

Chen, J. (2016). A Strategy for Limits-aware Computing. Presented at LIMITS ’16, Irvine, California. Retrieved from http://limits2016.org/papers/a9-qadir.pdf

Chun, W. H. K. (2008). Control and Freedom: Power and Paranoia in the Age of Fiber Optics. Cambridge, Massachusetts: MIT Press.

Coleman, G. (2005). Indymedia’s Independence: From Activist Media to Free Software. Multitudes, 21(2), 41–48.

Coleman, G. (2012). Coding Freedom: The Ethics and Aesthetics of Hacking. Princeton University Press.

Coleman, G. (2014). Hacker, Hoaxer, Whistleblower, Spy: The Many Faces of Anonymous. London: Verso.

Coleman, G. (2017). The Public Interest Hack. Limn, (8). Retrieved from http://limn.it/the-political-meaning-of-hacktivism/

Costanza-Chock, S. (2003). Mapping the Repertoire of Electronic Contention. In A. Opel & D. Pompper (Eds.), Representing Resistance: Media, Civil Disobedience, and the Global Justice Movement (pp. 173–191). Westport, Conn.: Praeger.

Costanza-Chock, S. (2004). The Whole World is Watching: Online Surveillance of Social Movement Organizations. In P. N. Thomas & Z. Nain (Eds.), Who Owns the Media?: Global Trends and Local Resistance (pp. 271–292). London: Zed Books.

Croeser, S. (2012). Contested technologies: The emergence of the digital liberties movement. First Monday, 17(8). Retrieved from http://firstmonday.org/ojs/index.php/fm/article/view/4162

Curran, J. (2012). Rethinking internet histories. In J. Curran, N. Fenton, & D. Freedman (Eds.), Misunderstanding the Internet (pp. 34–62). London; New York: Routledge.

Dahlberg, L. (2007). The Internet and Discursive Exclusion: From Deliberative to Agonistic Public Sphere Theory. In L. Dahlberg & E. Siapera (Eds.), Radical democracy and the Internet: interrogating theory and practice (pp. 128–147). Palgrave Macmillan.

Darnton, R. (2010). Poetry and the Police: Communication Networks in Eighteenth-Century Paris. Cambridge, Mass: Belknap Press.

Darnton, R. (1983). Bohème littéraire et Révolution : le monde des livres au XVIIIe siècle. Paris: Gallimard & Le Seuil.

Denker, K. (2014). Heroes Yet Criminals of the German Computer Revolution. In G. Alberts & R. Oldenziel (Eds.), Hacking Europe - From Computer Cultures to Demoscenes (pp. 167–188). Springer.

Dreyfus, S., & Assange, J. (2012). Underground. Canongate Books.

Driscoll, K. E. (2014, July). Hobbyist inter-networking and the popular Internet imaginary: Forgotten histories of networked personal computing, 1978-1998. University of Southern California. Retrieved from http://digitallibrary.usc.edu/cdm/ref/collection/p15799coll3/id/444362

Edwards, P. N. (1996). The Closed World: Computers and the Politics of Discourse in Cold War America. Cambridge, Massachusetts: MIT Press.

Epstein, D. (2013). The making of institutions of information governance: The case of the internet governance forum. Journal of Information Technology, 28(2), 137–149.

Flipo, F., Dobré, M., & Michot, M. (2013). La face cachée du numérique. L’impact environnemental des nouvelles technologies. L’Échappée.

Fornés, J., Herran, N., & Duque, L. (2017). Computing for Democracy: The Asociación de Técnicos de Informática and the Professionalization of Computing in Spain. IEEE Annals of the History of Computing, 39(2), 30–48.

Foucault, M. (1984). What is Enlightenment? In P. Rabinow (Ed.), The Foucault Reader. New York: Pantheon Books.

Fuchs, C. (2014). Theorising and analysing digital labour: From global value chains to modes of production. The Political Economy of Communication, 1(2). Retrieved from http://www.polecom.org/index.php/polecom/article/view/19

Funke, P. N., & Wolfson, T. (2017). From Global Justice to Occupy and Podemos: Mapping Three Stages of Contemporary Activism. TripleC: Communication, Capitalism & Critique, 15(2), 393–405.

Fuster, G. G. (2014). The Emergence of Personal Data Protection as a Fundamental Right of the EU. Springer Science & Business.

Gabrys, J. (2011). Digital Rubbish: A Natural History of Electronics. Ann Arbor: University of Michigan.

Galloway, A. R. (2004). Protocol: How Control Exists after Decentralization. Cambridge, Massachusetts: MIT Press.

Gerbaudo, P. (2017). From Cyber-Autonomism to Cyber-Populism: An Ideological Analysis of the Evolution of Digital Activism. TripleC: Communication, Capitalism & Critique, 15(2), 478–491.

Godwin, M. (2003). Cyber Rights: Defending Free Speech in the Digital Age. Cambridge, Massachusetts: MIT Press.

Goggin, G., & McLelland, M. (2017). The Routledge Companion to Global Internet Histories. Routledge.

Goldsmith, J., & Wu, T. (2006). Who Controls the Internet?: Illusions of a Borderless World. Oxford University Press.

Granjon, F., & Torres, A. (2012). R@S : la naissance d’un acteur majeur de l’« Internet militant » français. Le Temps Des Médias, 18(1).

Greenberg, A. (2012). How Wikileakers, Hacktivists, and Cypherpunks are Freeing the World’s Information. Virgin Books.

Griset, P., & Schafer, V. (2011). Hosting the World Wide Web Consortium for Europe: From CERN to INRIA. History and Technology, 27(3), 353–370.

Hafner, K., & Lyon, M. (1998). Where Wizards Stay Up Late: The Origins Of The Internet. Simon & Schuster.

Hafner, K., & Markoff, J. (1995). Cyberpunk: Outlaws and Hackers on the Computer Frontier (Updated). Touchstone.

Halleck, D. (2004). Indymedia: Building an international activist internet network. Presented at the 2nd International Symposium of Interactive Media Design, Yeditepe University: Yeditepe University.

Hands, J. (2011). @ Is For Activism : Dissent, resistance and rebellion in a digital culture. New York: Pluto Press.

Hargittai, E. (2000). Radio’s Lessons for the Internet. Communications of the Association for Computing Machinery, 43(1), 50–56.

Hauben, M., & Hauben, R. (1997). Netizens: On the History and Impact of Usenet and the Internet. Wiley.

Izoard, C. (2010). L’informatisation, entre mises à feu et résignation. In C. Biagini & G. Carnino (Eds.), Les Luddites en France : résistances à l’industrialisation et à l’informatisation (pp. 251–286). Montreuil: Editions L’échappée.

Jarrige, F. (2016). Technocritiques: Du refus des machines à la contestation des technosciences. La Découverte.

Jones, M. L. (2017). The spy who pwned me. Limn, (8). Retrieved from http://limn.it/the-spy-who-pwned-me/

Jordan, T. (1999). New Space, New Politics: Cyberpolitics and the Electronic Frontier Foundation. In A. Lent & T. Jordan (Eds.), Storming the Millennium: The New Politics of Change (First Edition edition, pp. 80–107). London: Lawrence & Wishart Ltd.

Jordan, T. (2016). A genealogy of hacking. Convergence: The International Journal of Research into New Media Technologies, 1-17.

Jordan, T., & Taylor, P. (2004). Hacktivism and Cyberwars: Rebels with a Cause? Routledge.

Kelty, C. M. (2008). Two Bits: The Cultural Significance of Free Software. Duke University Press.

Kirk, A. (2002). ‘Machines of Loving Grace’: Alternative Technology, Environment and the Counterculture. In P. Braunstein & M. W. Doyle (Eds.), Imagine Nation : The American Counterculture of the 1960s and ’70s (pp. 353–378). New York: Routledge.

Lessig, L. (2004). Free Culture. The Penguin Press.

Lessig, L. (2006). Code: version 2.0. Basic Books.

Levy, S. (1984). Hackers: Heroes of the Computer Revolution. O’Reilly Media.

Levy, S. (2001). Crypto: How the Code Rebels Beat the Government Saving Privacy in the Digital Age (1st edition). London: Penguin Books.

Löblich, M., & Wendelin, M. (2012). ICT policy activism on a national level: Ideas, resources and strategies of German civil society in governance processes. New Media & Society, 14(6), 899–915.

Loveluck, B. (2015). Réseaux, libertés et contrôle : Une généalogie politique d’internet. Armand Colin.

Lovink, G. (2009). Dynamics of Critical Internet Culture (2nd ed.). Institute of Network Cultures.

Ludlow, P. (Ed.). (1996). High Noon on the Electronic Frontier: Conceptual Issues in Cyberspace. Cambridge, Massachusetts: MIT Press.

Mailland, J. (2001). Freedom of Speech, the Internet, and the Costs of Control: The French Example. New York University Journal of International Law & Politics, 33, 1179.

MacKinnon, R. (2012). Consent of the Networked: The Worldwide Struggle for Internet Freedom. Basic Books.

Mahoney, J., & Rueschemeyer, D. (2003). Comparative Historical Analysis in the Social Sciences. Cambridge University Press.

Markovic, S. (2000, December). Radio B92 and OpenNet - Internet Censorship Case Study. Retrieved 20 January 2015, from http://europe.rights.apc.org/cases/b92.html

Marzouki, M. (2003). Nouvelles modalités de la censure : le cas d’Internet en France. Le Temps Des Médias, 1(1), 148.

Mattelart, A. (2000). Networking the World, 1794-2000. Minneapolis: University of Minnesota Press.

Mattelart, A. (2010). The globalization of surveillance. Polity.

McChesney, R. W. (2013). Digital Disconnect: How Capitalism is Turning the Internet Against Democracy. The New Press.

Medina, E. (2014). Cybernetic Revolutionaries: Technology and Politics in Allende’s Chile (Reprint edition). The MIT Press.

Medosch, A. (2015). The Rise of the Network Commons. Retrieved from http://www.thenextlayer.org/NetworkCommons

Milan, S. (2010). Toward an epistemology of engaged research. International Journal of Communication, 4, 856–858.

Mindell, D. A., Segal, J., & Gerovitch, S. (2013). From communications engineering to communications science: Cybernetics and information theory in the United States, France, and the Soviet Union. In M. Walker (Ed.), Science and Ideology: A Comparative History. Routledge.

Mirowski, P. (2002). Machine Dreams: Economics Becomes a Cyborg Science. Cambridge University Press.

Mirowski, P. (2015). What is Science Critique? Part 1: Lessig, Latour. In Keynote address to Workshop on the Changing Political Economy of Research and Innovation. UCSD. Retrieved from https://www.academia.edu/11571148/What_is_Science_Critique_Part_1_Lessig_Latour

Mitchell, T. (1991). The Limits of the State: Beyond Statist Approaches and Their Critics. The American Political Science Review, 85(1), 77. doi:10.2307/1962879

Morozov, E. (2013). To Save Everything, Click Here: The Folly of Technological Solutionism. New York: PublicAffairs.

Mueller, M., Kuerbis, B., & Pagé, C. (2004). Reinventing Media Activism: Public Interest Advocacy in the Making of U.S. Communication-Information Policy, 1960-2002. Information Society, 20(3), 169–187.

Mueller, M. L., Kuerbis, B. N., & Pagé, C. (2007). Democratizing Global Communication? Global Civil Society and the Campaign for Communication Rights in the Information Society. International Journal of Communication, 1(1), 30.

Mueller, M. L. (2004). Ruling the Root – Internet Governance and the Taming of Cyberspace (New Ed). Cambridge, Massachusetts: MIT Press.

Mueller, M. L. (2010). Networks and States: The Global Politics of Internet Governance. Cambridge, Massachusetts: MIT Press.

Murphy, B. (2005). Interdoc: The first international non–governmental computer network. First Monday, 10(5). Retrieved from http://firstmonday.org/ojs/index.php/fm/article/view/1239

Neumayer, C., & Rossi, L. (2016). 15 Years of Protest and Media Technologies Scholarship: A Sociotechnical Timeline. Social Media + Society, 2(3). doi:10.1177/2056305116662180

Nevejan, C., & Badenoch, A. (2014). How Amsterdam Invented the Internet: European Networks of Significance, 1980-1995. In G. Alberts & R. Oldenziel (Eds.), Hacking Europe - From Computer Cultures to Demoscenes (pp. 189–218). Springer.

Paloque-Berges, C. (2015). When institutions meet the Web: advocating for a ‘critical Internet user’ figure in the 1990s. Retrieved from https://halshs.archives-ouvertes.fr/halshs-01245511/document

Peters, B. (2016). How Not to Network a Nation: The Uneasy History of the Soviet Internet. Cambridge, Massachusetts: MIT Press.

Pickard, V. W. (2006). United yet autonomous: Indymedia and the struggle to sustain a radical democratic network. Media, Culture & Society, 28(3), 315–336. doi:10.1177/0163443706061685

Pickard, V. W. (2007). Neoliberal visions and revisions in global communications policy from NWICO to WSIS. Journal of Communication Inquiry, 31(2), 118–139. doi:10.1177/0196859906298162

Pike, J. R. (2005). A Gang of Leftists with a Website: The Indymedia Movement. Transformations, (10). Retrieved from http://www.transformationsjournal.org/journal/issue_10/article_02.shtml

Pohle, J., Hösl, M., & Kniep, R. (2016). Analysing internet policy as a field of struggle. Internet Policy Review, 5(3), 426–445.

Pool, I. de S. (1983). Technologies of Freedom. Belknap Press.

Postigo, H. (2012). The Digital Rights Movement: The Role of Technology in Subverting Digital Copyright. Cambridge, Massachusetts: MIT Press.

Pursell, C. (1993). The Rise and Fall of the Appropriate Technology Movement in the United States, 1965-1985. Technology and Culture, 34(3), 629.

Proulx, S., & Breton, P. (2012). L’explosion de la communication. La Découverte.

Qadir, J., Sathiaseelan, A., Wang, L., & Crowcroft, J. (2016). Taming Limits with Approximate Networking. Presented at the LIMITS ’16, Irvine, California. Retrieved from http://limits2016.org/papers/a9-qadir.pdf

Reynié, D. (1998). Le Triomphe de l’opinion publique: l’espace public français du XVIe au XXe siècle. Odile Jacob.

Rheingold, H. (2000). The Virtual Community: Homesteading on the Electronic Frontier. Cambridge, Massachusetts: MIT Press.

Rogers, R. (2013). Digital Methods (Reprint edition). Cambridge, Massachusetts: MIT Press.

Rosenzweig, R. (1998). Wizards, Bureaucrats, Warriors, and Hackers: Writing the History of the Internet. The American Historical Review, 103(5), 1530–1552.

Russell, A. L. (2014). Open Standards and the Digital Age: History, ideology, and networks. Cambridge University Press.

Sauter, M. (2014). The Coming Swarm: DDOS Actions, Hacktivism, and Civil Disobedience on the Internet. New York: Bloomsbury Academic.

Schafer, V., & Thierry, B. G. (Eds.). (2015). Connecting Women. Cham: Springer International Publishing.

Scheel, S., Cakici, B., Grommé, F., Ruppert, E., Takala, V., & Ustek-Spilda, F. (2016). Transcending Methodological Nationalism through Transversal Methods? On the Stakes and Challenges of Collaboration (ARITHMUS Working Paper Series No. 1). Goldsmith, University of London. Retrieved from http://arithmus.eu/wp-content/uploads/2015/02/Scheel-et-al-2016-Transcending-method-nationalism_ARITHMUS-Working-paper-1.pdf

Schiller, D. (2000). Digital Capitalism: Networking the Global Market System. Cambridge, Massachusetts: MIT Press.

Schulte, S. R. (2013). Cached: Decoding the Internet in Global Popular Culture. New York: New York Univ Press.

Schulze, M. (2017). Clipper Meets Apple vs. FBI—A Comparison of the Cryptography Discourses from 1993 and 2016. Media and Communication, 5(1), 54–62.

Serres, A. (2000, October 20). Aux sources d’Internet : l’émergence d’ARPANET (PhD thesis). Université Rennes 2. Retrieved from https://tel.archives-ouvertes.fr/tel-00312005/document

Shacklock, G., & Smyth, J. (2002). Being Reflexive in Critical and Social Educational Research. Routledge.

Simonson, P., Peck, J., Craig, R. T., & Jackson, J. (2013). The History of Communication History. In The Handbook of Communication History (pp. 13–58). New York: Routledge.

Spar, D. L. (2003). Ruling the Waves: From the Compass to the Internet, a History of Business and Politics along the Technological Frontier. New York: Harvest Books.

Srinivasan, R. (2017). Whose Global Village?: Rethinking How Technology Shapes Our World. NYU Press.

Sterling, B. (1993). The Hacker Crackdown: Law And Disorder On The Electronic Frontier. Bantam.

Streeter, T. (2010). The Net Effect: Romanticism, Capitalism, and the Internet. NYU Press.

Taffel, S. (2016). Invisible Bodies and Forgotten Spaces: Materiality, Toxicity, and Labour in Digital Ecologies. In H. Randell-Moon & R. Tippet (Eds.), Security, Race, Biopower (pp. 121–141). Palgrave Macmillan UK. doi:10.1057/978-1-137-55408-6_7

Thibault, G., & Trudel, D. (2015). Excaver, tracer, réécrire : sur les renouveaux historiques en communication. Communiquer. Revue de communication sociale et publique, (15), 5–23.

Thoumyre, L. (2000). Responsabilités sur le Web : une histoire de la réglementation des réseaux numériques. Lex Electronica, 6(1). Retrieved from http://www.lex-electronica.org/articles/v6-1/thoumyre.htm

Tilly, C., & Tarrow, S. (2015). Contentious Politics (2nd edition). New York: Oxford University Press.

Tréguer, F. (2017). Intelligence Reform and the Snowden Paradox: The Case of France. Media and Communication, 5(1). Retrieved from https://hal.archives-ouvertes.fr/hal-01481648/

Trudel, D., & Tréguer, F. (2016). Alternative Communications Networks Throughout History (report). ISCC-CNRS. Retrieved from https://halshs.archives-ouvertes.fr/halshs-01418826/document

Toupin, S. (2016). Gesturing Towards “Anti-Colonial Hacking” and its Infrastructure. Journal of Peer Production, (9). Retrieved from http://peerproduction.net/editsuite/issues/issue-9-alternative-internets/peer-reviewed-papers/anti-colonial-hacking/

Turner, F. (2006). From Counterculture to Cyberculture: Stewart Brand, the Whole Earth Network, and the Rise of Digital Utopianism. University of Chicago Press.

Turner, F. (2013). The Democratic Surround: Multimedia & American Liberalism From World War II to the Psychedelic Sixties. Chicago: The University of Chicago Press.

Vaillant, D. W. (2010). La Police de l’Air: Amateur Radio and the Politics of Aural Surveillance in France, 1921–1940. French Politics, Culture & Society, 28(1), 1–24. doi:10.3167/fpcs.2010.280101

Vincent, D. (2016). Privacy: A Short History. John Wiley & Sons.

Vlavo, F. (2017). The Performativity of Digital Activism. Routledge.

Wasserman, H. (2017). African histories of the Internet. Internet Histories, 0(0), 1–9.

Wieckmann, J. (1989). Das Chaos-Computer-Buch: Hacking made in Germany. Wunderlich.

Willetts, P. (2010). NGOs networking and the creation of the Internet. In Non-Governmental Organizations in World Politics: The Construction of Global Governance (pp. 114–143). Taylor & Francis US.

Wolfson, T. (2014). Digital Rebellion: The Birth of the Cyber Left (1st Edition edition). Urbana: University of Illinois Press.

Wright, S. (2011). Beyond a Bad Attitude? Information Workers and Their Prospects Through the Pages of Processed World. Journal of Information Ethics, 20(2).

Wu, T. (2010). The Master Switch: The Rise and Fall of Information Empires. Knopf.

Zittrain, J. (2008). The Future of the Internet — And How to Stop It. Yale University Press. Retrieved from http://futureoftheinternet.org/

Züger, T., Milan, S., & Tanczer, L. M. (2015). Sand in the Information Society Machine: How Digital Technologies Change and Challenge the Paradigms of Civil Disobedience. The Fibreculture Journal, (26), 109–136.

 

Footnotes

1. This scientometric analysis was conducted with the tool ScienceScape, developed by Sciences Po's Medialab. It is based on a corpus of 2,951 references queried in the SCOPUS database: the search query looked for words associated with contentious politics (Tilly & Tarrow, 2015) – such as “citizenship”, “civil rights”, “repression”, “mobilization” – along with references to the internet or online environment in the titles, abstracts or keywords of references, in the literature categorised either as “social sciences” or “arts & humanities”.

Contested meanings of inclusiveness, accountability and transparency in trade policymaking


Recent plurilateral trade negotiations such as the Trans-Pacific Partnership (TPP), the Trade in Services Agreement (TISA), the Anti-Counterfeiting Trade Agreement (ACTA), the Regional Comprehensive Economic Partnership (RCEP), and the reopened North American Free Trade Agreement (NAFTA) are addressing a range of digital governance issues that have never before been the subject of trade agreements. A number of these issues, such as net neutrality, rules about internet domain names, encryption standards, and software source code, have previously been addressed through more open mechanisms and institutions of governance associated with the internet governance regime.

As a result, stakeholders associated with that regime, such as members of the internet technical community and internet activist groups, have come to expect certain standards of transparency and broad public consultation in policymaking on internet-related issues. This is not unjustified, as these expectations stem from the same ideals of democratic governance that underpin other transnational political movements (Mueller, 2010, p. 1). These stakeholders have responded to the inclusion of such issues in closed and opaque trade negotiations with fierce public opposition, resulting in the ultimate failure of some of these agreements, such as the TPP, from which the United States withdrew in 2017, and ACTA, which the European Union declined to endorse in 2012.

One example of such a provision that has migrated from internet governance into trade is a rule that appeared in the final text of the TPP in Article 14.14, requiring countries to adopt measures for the control of spam, or unsolicited electronic mail. Similar provisions have been proposed for TISA, RCEP, and NAFTA. Until the appearance of this provision in these trade texts, spam control had been addressed at the international level only in soft internet governance fora such as the Internet Governance Forum (IGF) and the Coalition Against Unsolicited Commercial Email (CAUCE), and in technical standards development bodies such as the Internet Engineering Task Force (IETF).

If trade agreements are to be used to address such internet-related public policy issues, public support for this would appear to be contingent upon improving the inclusiveness, accountability, and transparency of the negotiations, so that civil society does not feel that it has been systematically excluded, and also so that the substantive content of the agreements can benefit from review by a broader range of affected stakeholders.

There is much research on appropriate definitions and measures of inclusiveness, accountability and transparency in various contexts, including multi-stakeholder governance networks (Malcolm, 2008, pp. 260-282); however, reviewing such literature is not an objective of this short paper. Suffice it to say for present purposes that inclusiveness is a measure of the extent to which a diversity of interests of the involved stakeholders informs and deepens policy discussions and policy development processes (Belli, 2015). Accountability includes the measures that a policy process incorporates to demonstrate its input legitimacy to stakeholders; that is, measures that increase the perception of those stakeholders that its outputs are justified (Mena and Palazzo, 2012). Depending on the process, these may include procedures for the nomination and election of representatives, for procedural fairness in decision-making, for internal or external review or appeal, and so on. Finally, the closely associated concept of the transparency of a policy process can be defined as public access to its records (which may include meeting minutes, reports, mailing list discussions, and financial documents), and also to its meetings (Piotrowski and Borry, 2010).

Rather than spend further time unpacking these concepts, this paper aims only to demonstrate that different groups of actors involved in trade negotiations over internet public policy issues contest the appropriate meanings of these terms, and to suggest some measures that could be taken to address this contestation, without presupposing exactly where the appropriate balance will ultimately be struck.

1. Trade and internet governance regimes

Trade policymaking was never the province of a single stakeholder group alone. Indeed, in medieval times, the "law merchant" or lex mercatoria was a system of non-state transnational law developed principally by merchants themselves, which operated in parallel to the domestic legal regimes of the kingdoms between which they plied their trade (Cutler, 2003, pp. 108-140). This overlapping system of private and public governance effectively wove together the law merchant with the laws of the various sovereign kingdoms into the patchwork quilt that was the earliest form of international trade law regime.

This developed, centuries later, into a more formalised system of international trade law, through the formation of the earliest bilateral trade agreements between nation states, such as the 1860 Cobden-Chevalier Treaty between the United Kingdom and France, and ultimately plurilateral agreements such as the original 1947 General Agreement on Tariffs and Trade (GATT) and its 1994 successor that is now embodied in the World Trade Organisation (WTO) (Hoekman, 2009, 25, p. 47).

But in the meantime, merchants and states were joined by a third stakeholder group seeking its own right of involvement in trade policymaking: civil society. This memorably came to a head in 1999 when anti-WTO protests by civil society groups spilled over onto the streets of Seattle, sending a sharp notice to the organisation that greater inclusiveness, accountability and transparency would henceforth be demanded of the international trade policymaking regime. In the wake of these protests, the WTO began to acknowledge the need for change, with even the former Chairman of the WTO Appellate Body, Ambassador Julio Lacarte (2004), acknowledging that the integration of civil society into the WTO’s processes would enhance the organisation’s legitimacy by reducing its institutional democratic deficits, and would have capacity-building benefits. Eventually some reforms were indeed initiated by the WTO, although compared with most bodies of the United Nations (UN) system, it remains much less open to civil society participation (Steffek and Kissling, 2006; Wolfe, 2012).

In parallel to these developments, civil society was forging a very different dynamic with the emerging institutions of governance of the internet. Unlike in the trade regime, academic and civil society participation in the development of the internet’s technical infrastructure had been integral from its earliest beginnings (Leiner et al., 2003). Despite the fact that much of the early internet was developed under US government contract, some of its early pioneers were positively hostile to the idea of governments claiming sovereignty over this new realm that non-governmental stakeholders had built. Drawing on and in turn influencing a canon of “cyber-libertarian” writings in Wired Magazine (Katz, 1997) and American legal journals (Johnson and Post, 1996), the classic expression of this came from the co-founder of my organisation, the Electronic Frontier Foundation (EFF), John Perry Barlow, who wrote in his Declaration of the Independence of Cyberspace:

Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather. […]

Where there are real conflicts, where there are wrongs, we will identify them and address them by our means. We are forming our own Social Contract. This governance will arise according to the conditions of our world, not yours. (Barlow, 1996)

Interestingly, such claims did not go entirely unrecognised by governments. In 2003, the United Nations convened the World Summit on the Information Society (WSIS) partly in recognition that "the Information Society is intrinsically global in nature and national efforts need to be supported by effective international and regional cooperation among governments, the private sector, civil society and other stakeholders" (WSIS, 2003, paragraph 60). A final outcome document of this conference, the Tunis Agenda for the Information Society, recognised the need to establish "a transparent, democratic, and multilateral process, with the participation of governments, private sector, civil society and international organisations, in their respective roles" to address the "many cross-cutting international public policy issues that require attention and are not adequately addressed by the current mechanisms" (WSIS, 2005, paras 60-61).

While useful, this does not go as far as providing a template for how this proposed model of multi-stakeholder cooperation should be put into practice. After all, internet governance is not a unitary construct, but rather a basket of loosely connected issue areas, each of which is typically governed by a different patchwork—or to shift analogies, a different network—of public and private governance mechanisms (Malcolm 2008, ch. 2). In some of these issue areas, such as the technical administration of the internet, stakeholders participate on an equal footing; a precept derived from the Tunis Agenda, which in turn recognised the central role that civil society had played in the development of the internet (Doria, 2014). This ideal is most closely realised in multi-stakeholder internet governance organisations such as the Internet Corporation for Assigned Names and Numbers (ICANN), the IETF, and the IGF. For example, ICANN, which is responsible for the administration of internet domain names, is led by non-governmental stakeholders within a multi-stakeholder network that relegates governments to an advisory role.1

But in other internet governance issue areas, such as cybersecurity, the role of governments has historically been much more central, and more government-centric structures consequently prevail. This also applies to the institutions of the global trading system, which are for the most part intergovernmental in origin and membership (the World Economic Forum, for which the World Social Forum was oppositionally named, is a notable exception), and which tend to be inscrutable, closed and opaque in their dealings with the public. Far from being able to participate on an equal footing as in ICANN, CSOs are lucky if they are even able to read such organisations’ official documents, and perhaps to observe certain meetings (such as the WTO Ministerial Conferences). Indeed, in recent bilateral and plurilateral negotiations such as those over the TPP, TISA, and RCEP, they can expect much less than this (Malcolm, 2010).

These existing institutional power structures are just one of the bases upon which different measures of inclusiveness, accountability and transparency may be appropriate for the different issue areas that make up the internet governance regime. The need for differential application of these criteria is reflected in the 2014 NETmundial Multistakeholder Statement, in some ways an update, and in others a challenge, to the Tunis Agenda (Maciel, 2014), which recognised that "roles and responsibilities of stakeholders should be interpreted in a flexible manner with reference to the issue under discussion" (NETmundial Multistakeholder Statement, 2014, p. 6).

But this still raises the question: how should stakeholder participation differ from one issue to another? Without attempting to answer that question definitively, this paper endeavours at least to sketch out some of the different positions on it, as held by trade officials and civil society, as well as by different civil society factions. The paper closes with a description of some projects that could help to bring these parties closer to a common understanding.

2. Contested meanings of inclusiveness, accountability, and transparency

2.1. Between trade officials and civil society

Civil society groups have taken up the challenge of defining what particular measures of inclusiveness, accountability and transparency ought to be demanded of trade policymakers who address internet-related public policy issues, drawing in part on the norms established at WSIS and NETmundial and in other international instruments. The most notable normative document of such demands, at least on a global level, is the 2016 Brussels Declaration on Trade and the Internet, which provides in part:

Any international rulemaking process that affects the online and digital environment should adhere to human rights and good governance obligations to actively disseminate information, promote public participation and provide access to justice in governmental decision-making.

The Declaration goes on to suggest several particular measures that countries could take to this end, including ensuring the pro-active dissemination of information through the release of draft proposals and consolidated texts, providing opportunities for meaningful involvement by civil society representatives such as through public notice and comment and public hearing processes, and requiring balanced representation on any trade advisory bodies.

Since trade is a domain in which government leadership has historically been strongest, trade ministries and negotiators have frequently resisted these and similar demands or dismissed them outright (Malcolm, 2015b), often while claiming to uphold those very same values. An outline of the contours of inclusiveness, accountability and transparency as these values are defined by trade ministries can be gleaned by comparing the public statements and policies of one such ministry, the Office of the United States Trade Representative (USTR), with civil society demands for improvement on each of these criteria.

On inclusiveness, which in terms of WSIS and NETmundial is about each stakeholder group having an appropriate role in the process, trade negotiators seem to regard this as being largely satisfied by the inclusion of those who are participants in trade, rather than of the other interest groups that are impacted by trade rules.2 Membership of the US Industry Trade Advisory Committees (ITAC) is therefore heavily dominated by industry. In particular, ITAC 8 on Information and Communications Technologies, Services, and Electronic Commerce is exclusively composed of representatives of companies and industry associations. A 2014 USTR call for members of a new Public Interest Trade Advisory Committee was shelved after a year, partly because strict confidentiality rules would have precluded the participation of most CSOs (Anonymous, 2015).

As to accountability, the U.S. Congress is required to approve the USTR’s negotiating objectives and its final text, but has only minimal oversight of the negotiations in progress, due to the tightly constrained conditions of access to the negotiating text, as described below. In 2015, the USTR appointed its own General Counsel to the newly-created position of Transparency Officer, putting the incumbent in the invidious position of being expected to defend the office’s current practices around transparency, at the same time as reforming those practices. Expressing their unease at the lack of external accountability inherent in this arrangement, civil society groups including EFF, the Sunlight Foundation, and OpenTheGovernment.org responded by demanding that the position be made independent of the office (Malcolm, 2017). Beyond this, the only other mechanism of external accountability accepted by the USTR has been the notification of its preferential trade agreements to the WTO.

As to transparency, the USTR infamously touted the TPP as "the most transparent trade negotiation in history" (Johnson, 2013), despite the fact that no drafts of the agreement were officially released and that the few corporate advisors to the USTR who had access to the text did so under non-disclosure agreements. Although reforms under the Bipartisan Congressional Trade Priorities and Accountability Act of 2015 will give Members of Congress greater access to texts under negotiation, which has been notoriously difficult in the past (Carter, 2012), these reforms do nothing to provide such access to the general public. As might be supposed, this is starkly at odds with civil society’s conception of what would amount to adequate transparency.

2.2. Within civil society

Perhaps surprisingly, disagreements as to the appropriate meanings of inclusiveness, accountability, and transparency in trade policymaking also exist within civil society. Before explaining these differences, however, some further background is needed. Despite some overlap, the groups that typically identify as civil society in internet governance discourse are different from those that typically engage in trade policy debates. This is not to suggest that there are only two factions of civil society organisations (CSOs) in the trade and internet regimes, nor that the division between them is always clear. It is nevertheless useful for present purposes to distinguish between what we will call the internet CSOs on the one hand, and the trade and development CSOs on the other, to explain some of the observed differences between their policy positions and working styles in trade and internet policy advocacy.

Although both groups engage in rights discourse, one of the most striking differences between them is the focus of the former on individual civil and political rights, whereas the latter is more likely to prioritise collective economic and social rights. Examples of networks of CSOs involved in internet governance include the Non-Commercial Stakeholders Group (NCSG) of ICANN, Best Bits, the Civil Society Internet Governance Caucus (IGC), and the Association for Progressive Communications (APC).3 Although some of these groups (most notably the last) also advocate for social programmes to bridge the digital divide, they are more often found advancing principles and policies that support users’ individual rights to freedom of expression, privacy, and freedom of association. For example, 30 civil society participants in the Best Bits network made a remarkable 34 references to human rights in their 16-paragraph joint submission to the Global Multistakeholder Meeting on the Future of Internet Governance that produced the NETmundial Multistakeholder Statement (Best Bits, 2014).

Remembering that many of these groups draw their lineage, and in many cases their members, from amongst the ranks of the internet’s early cyber-libertarian pioneers, this individualist orientation should not be surprising. Kelty (2005, pp. 204-205) identifies a “recursive public” of self-identified geeks from which such groups are drawn, who advocate “not just the rhetoric of openness but also a particular attitude toward the conditions of possibility of openness”, and who elevate the computer code that defines the network itself over state ordering as a means to realise this ideal. Some such groups at the domestic level, such as TechFreedom (US), maintain an explicitly conservative agenda today, while others, such as EFF, claim an apolitical or non-partisan mission while still pursuing an agenda that emphasises individual rather than collective rights. Many of these groups accept corporate donations, and some also solicit grants from OECD governments such as the United States and Sweden.

This contrasts with similar loose groupings of CSOs involved in global debates on trade and development, such as the World Social Forum, and at a domestic US level, the Occupy movement, both of which adopt a strong position of opposition to what is characterised as the hegemonic neoliberal globalisation that underpins the global trading system (Conway, 2013, ch. 3). Trade unions are dominant participants in these civil society movements, and with their very different history of struggle against capitalist oppression, it is in turn no wonder that these civil society networks are much more leftist in orientation, and give far greater emphasis to collective rights. Even at the more "conservative" fringes of this movement, amongst individual NGOs that seek to engage directly with trade policymaking bodies rather than disrupting or overthrowing those bodies, groups such as Public Citizen (US) and Third World Network (Malaysia/Geneva) still maintain strong ties with organised labour, and refuse to accept funds from corporations or governments.

Given this disjunct, some very interesting tensions naturally emerge at the intersection of these two groupings of internet and trade civil society activists. Examples of civil society groups that intersect to a greater or lesser degree between the trade and the internet governance realms include the Civil Society Information Society Advisory Council (CSISAC) of the Organisation for Economic Cooperation and Development (OECD), the Transatlantic Consumer Dialogue (TACD), the Open Digital Trade Network, the Just Net Coalition (JNC), and the IGF’s Dynamic Coalition on Trade and the Internet.

These differences are most pronounced where they concern substantive digital trade issues, such as network neutrality4 and data localisation.5 For example, a joint letter of mostly trade and development CSOs released in October 2017 addressing the prospect of data localisation rules being included in the WTO’s digital trade agenda states:

What e-commerce proposal proponents call “localization barriers” are actually the tools that countries use to ensure that they can benefit from the presence of transnational corporations to advance their own development and the economic, social, and political rights of their citizens.6

In contrast, on the same topic the internet-focused EFF warns that “Pushing localization for short-term social, political and economic gains could ultimately harm users and innovators” (Panday, 2017).

The two factions also conceptualise the procedural issues of governance in a trade policymaking context somewhat differently. Repeating the exercise undertaken above, in which the meanings ascribed to the three attributes of trade policy development processes by trade ministries and civil society were compared, examples of internal disagreements within civil society on these attributes illustrate the main divergences between the two civil society factions on inclusiveness, accountability and transparency.

As regards inclusiveness, the biggest disagreements are on the appropriate role of states within multi-stakeholder processes. The trade and development focused CSOs unambiguously consider ICANN-level "equal footing" multi-stakeholder inclusiveness to be a bridge too far, recognising the power differentials between stakeholders that can result in the capture of such processes. A typical expression of this is in the founding document of the JNC, the Delhi Declaration (Just Net Coalition, 2014), which states:

The right to make Internet-related public policies lies exclusively with those who legitimately and directly represent people. While there is a pressing need to deepen democracy through innovative methods of participatory democracy, these cannot include—in the name of multi-stakeholderism—new forms of formal political power for corporate interests.

By contrast, the mainstream internet CSOs, as explained above, tend to support a model of multi-stakeholder governance that does not privilege governments over other stakeholder groups, or at least not as uniformly as the Delhi Declaration suggests. Disagreements on this point have dogged the IGC, Best Bits, and CSISAC.7

Differences on accountability between the two groups are less pronounced, but where they exist they tend to stem from heightened fears by the trade and development CSOs of corporate or governmental capture of a supposedly neutral process. For example, a schism formed between the two civil society factions within the Best Bits civil society network, resulting in the splitting off of the JNC as a new network for CSOs from the trade and development camp. A catalyst of this schism was the revelation that a Best Bits member organisation had accepted programme funding from the U.S. State Department, which that member had in turn used to fund its contribution to a Best Bits meeting. Although regarded as an unexceptional internal matter by many of the mainstream internet CSOs, for those who split off into the JNC, the CSO’s failure to disclose that funding link amounted to a serious breach of accountability norms that threw the legitimacy of the entire network into question.

On transparency, the trade and development CSOs are more strategic about accepting certain closed processes than the internet-focused CSOs tend to be. For example, it is common practice for internet governance institutions to operate open mailing lists with public archives, to maintain documentation in open wikis, and in some cases (for example, at ICANN) also to record and transcribe meetings. While CSOs that are primarily involved in internet governance tend to apply these same standards to their own internal groups, lists, and documentation, this is quite foreign to the groups that are involved in trade advocacy. The latter tend to operate gated or invitation-only lists and meetings, without publicly available records of discussions or work product. Indeed, it is common for activists to be subdivided into an outer and an inner circle, with only the inner circle having access to a "core list" on which only the most sensitive information is exchanged. This has become a bone of contention when groups from each of the two civil society camps come together, as in the cases of the IGC, Best Bits, and the Open Digital Trade Network.

To some extent, the differing expectations of these civil society factions around inclusiveness, accountability, and transparency are shaped by realpolitik. Since stakeholders in the internet governance regime commence from a position of relative inclusion and transparency, it is natural for them to expect this to be extended to trade fora that overlap with internet governance. Since the trade policymaking community bestows no such privileges on civil society stakeholders, those stakeholders are less accustomed to expecting them, even when the issues being discussed in trade fora turn from traditional trade topics to digital trade rules that have historically been discussed more openly and inclusively by other institutions.

However, this does not fully explain the differences between the two factions. It is posited that at a deeper level there is also an underlying ideological distinction between the two civil society factions—that mainstream CSOs engaged in internet policy discussions tend to be more libertarian (at least as concerns internet public policy development) and to frame their demands in terms of civil rights, whereas the mainstream CSOs inhabiting the trade policymaking regime are generally more collectivist and more likely to frame their concerns in terms of global and social justice. As a result, the trade and development NGOs are more cautious about corporate capture in the meanings that they ascribe to inclusiveness, accountability, and transparency in trade policy development. It is important that these differences be acknowledged in developing mechanisms to help resolve these contested meanings.

3. Reconciling these contested meanings

In summary then, the important differences of meaning ascribed to the criterion of inclusiveness turn on whether and which non-governmental stakeholders are to be included. For both trade ministries and CSOs there is a consensus that non-governmental stakeholders should be consulted in trade policy development, but internet-focused NGOs are more likely to go further and demand a deeper level of multi-stakeholder involvement that is troubling to CSOs from the trade and development space. As to which non-governmental stakeholders should be involved, trade ministries tend to be most supportive of the inclusion of corporations (since they are actually involved in trade), trade CSOs tend to be least supportive of this due to their fears that corporations will capture the process, and internet CSOs are more likely to support the involvement of all affected non-governmental stakeholders on an equal footing, as this is a precept of multi-stakeholderism in the internet governance regime.

Summarising the differences that exist on the meaning ascribed to the accountability of trade policy development processes, it can be observed that trade ministries do not interpret this as requiring close oversight from the legislative branch or from any other external authority (other than the notification of preferential trade agreements to the WTO). Both the trade and internet focused CSOs are unified in their expectation of closer public oversight of the process. To the extent that there is a difference between the two factions of CSOs, it relates to the accountability of non-governmental stakeholders. Pointing to examples in which they see multi-stakeholder processes as having been captured by corporate interests (Moog, Spicer and Böhm, 2015), the trade CSOs are likely to be far less comfortable with the increased involvement of non-governmental stakeholders in trade policy development unless adequate accountability mechanisms, including mechanisms of financial accountability, are first established.

Finally, to summarise the main differences of expectations around the transparency of trade policy development processes: the biggest contestation is between the trade ministries, who regard their current transparency practices as adequate, and the CSOs, who demand deep reforms such as the publication of proposals and drafts. As between the two factions of CSOs, there is a smaller divergence between the internet-focused CSOs, who are more likely to expect a more radical degree of transparency aligned with the practices of internet governance institutions, and the trade CSOs, who seem more accepting of a lesser level of transparency, for example one excluding measures such as the publication of correspondence and meeting transcripts.

On each of these measures, can a middle ground be reached that both acknowledges the political realities within which trade negotiators work and satisfies the shared expectations of enhanced inclusiveness, accountability, and transparency held by both of the civil society factions that engage in trade policy advocacy? If so, the first step is to identify the contested meanings of these concepts, and to provide fora in which these differences can be explained, discussed, and ultimately reconciled. Towards this end, three related projects led by the author at EFF are now presented.

3.1. Open Digital Trade Network

The first of these is the Open Digital Trade Network, which was formed in February 2016 out of the group that had met the previous month in Brussels to produce the Brussels Declaration on Trade and the Internet. EFF selected the approximately 30 participants at the meeting based on personal contacts and recommendations, to include a cross-section of those who had knowledge of the global trading system, and those with expertise in internet governance. Although most participants were from civil society, the group was rounded out with the participation of a small number of private sector participants who were known to be receptive to the reform of trade negotiation processes, as well as one participant associated with the United Nations.

The diversity of the group helped to ensure that its recommendations were balanced. For example, the group concluded that "There is a spectrum of public participation in policy institutions, and the politics of trade negotiations constrain our ability to be empowered directly at the highest levels", an insight that might have been lost if the group had excluded trade CSOs; but conversely, the group also agreed on "drafting of model texts that draw on established human rights, Internet governance and development norms", which would not have been recognised if internet CSOs had been absent.

Considerations of the cultural and ideological differences between civil society factions shaped the work of the Open Digital Trade Network in several ways. The group employed a deliberative democratic methodology, designed to reduce the possibility of domination by any single group. This included the co-development of a balanced background briefing paper, the use of "Idea Rating Sheets" for brainstorming and gaining peer feedback on ideas, and actively facilitated small group discussion.

Mindful of the wariness of trade CSOs towards the radical transparency common in internet governance spaces, the group’s Brussels meeting was conducted under the Chatham House Rule, and the subsequently-formed mailing list and web platform were made accessible to members only. Those who applied for membership of the group following its formation could do so only upon the recommendation of an existing member, which also differs from the typical practice of internet governance community groups such as Best Bits, which tend to be more overtly open.

But not all of the cultural differences of CSO participants were as successfully managed. In particular, the deeper differences of opinion on substantive internet policy issues resulted in some heated exchanges, and ultimately resulted in the group mutually deciding to refocus on procedural questions only. Even on such matters of process, there were also disagreements between the two CSO factions. For example, objections were raised to the participation of private sector members in the group, and one of the trade-focused CSO representatives also expressed concern about other members of the network having received grant funding from Google.8

Today, the Open Digital Trade Network continues to operate as a distributed online network of about 60 experts, lobbyists, and activists, based around a mailing list and a web platform that provides a project management system and knowledge base. With EFF’s continued loose coordination, members of the network collaborate on specific projects of mutual interest such as workshops, meetings with trade negotiators, drafting and delivery of joint statements, and the exchange of information on negotiations in progress.

3.2. IGF Dynamic Coalition on Trade and the Internet

The Internet Governance Forum’s Dynamic Coalition on Trade and the Internet was established in February 2017 to extend the mission of the Open Digital Trade Network to a broader group of stakeholders, including additional industry participants and, it was hoped, government representatives such as trade policymakers. Through the common management of both projects by EFF, the Dynamic Coalition maintains a close relationship with the Open Digital Trade Network and aims to build upon rather than to duplicate its work. Amongst the items in the Dynamic Coalition’s 2017 action plan are to develop a multi-stakeholder approach to facilitating transparency and inclusiveness in international trade negotiations and domestic consultation processes, and to build a network of representatives from trade institutions and delegations for liaison with the Dynamic Coalition and the broader IGF community.

The even more diverse membership of the Dynamic Coalition makes it potentially better equipped to develop a broadly acceptable resolution of the contested meanings of the inclusiveness, accountability and transparency of trade policymaking processes, and to disseminate that learning to affected stakeholders. On the other hand, the very diversity of a multi-stakeholder group also makes it more difficult for the group to arrive at a meaningful consensus, and amplifies the fears of trade and development CSOs that the group could be captured by narrow corporate interests.

The IGF addresses concerns about accountability and transparency of its Dynamic Coalitions by requiring them to comply with three basic principles of inclusiveness and transparency for carrying out their work: open membership, open mailing lists, and open archives. They must also ensure their statements and outputs reflect minority or dissenting viewpoints. These requirements, perhaps naturally, more closely resemble those of working groups associated with other internet governance institutions than those associated with the trade policy regime.

Although this sets a positive precedent for a group concerned with advocating for inclusiveness and transparency in trade policy development, on a practical level it also seems to have operated against the Dynamic Coalition. The limited participation that the group has enjoyed since its formation suggests that some members are reluctant to participate openly, as they have not established mutual trust and may fear (legitimately, based on the experiences of similar groups) that fellow members might not participate in good faith. The group’s diversity also calls into question whether it will be capable of reaching a consensus position on the reform of trade negotiation processes comparable in ambition to that reached by the Open Digital Trade Network. It remains to be seen whether the Dynamic Coalition will live up to the potential envisioned for it by EFF.

The Dynamic Coalition had 42 members at the time of its inaugural meeting, held during the 2017 IGF meeting in Geneva in December of that year. At that meeting, a joint statement on transparency in trade negotiations was endorsed by consensus, and participants agreed to embark on a process of outreach to additional stakeholders from governments, the internet business community, and trade organisations such as UNCTAD and the WTO. The success of that endeavour could be expected to enhance the group’s standing and influence, but perhaps also to increase the difficulty of reaching consensus on future resolutions such as its statement on transparency.

3.3. Shadow Regulation

Each of the preceding projects is aimed at bringing together specific participants from the trade and internet governance regimes, as it is they who are most closely connected with the institutions that can effect the desired improvements to the inclusiveness, accountability and transparency of trade negotiation processes. The Shadow Regulation project is different in that it is intended to raise awareness of the need for such reforms within a broader public sphere. Nor is it specifically restricted to recommending improvements in these practices within trade negotiations alone; instead, its recommendations are generalised so as to be capable of application to any body engaged in transnational public policy development.

The process recommendations that form part of the Shadow Regulation project have been distilled into the simplified form of three criteria, captured on an infographic specifically designed to aid their dissemination to a broad public involved in a diverse range of public policy development processes. The criteria are:

  • Inclusion: We need to make sure that all stakeholders who are affected by internet policies have not only the opportunity, but also the resources, to be heard.
  • Balance: Reaching the optimal solution requires letting the best ideas rise to the top, even if governments and corporations don’t always get their way.
  • Accountability: Institutions and stakeholders who participate in crafting rules, standards or principles for the internet must be transparent and deserving of our trust.

Despite the differences between the Shadow Regulation project and the preceding projects, it was developed with similar attention to the need for its principles to address the needs of the diverse institutions and CSO communities whose buy-in would be essential for their wide acceptance. For example, one concern common to both civil society factions, but expressed most strongly by those from the trade and development camp, is that multi-stakeholder processes are vulnerable to capture by corporations (Moog, Spicer, & Böhm, 2015).

For corporations, multi-stakeholder processes may offer an attractive opportunity to stave off pressure for hard regulation by governments, along with the perception of greater legitimacy than pure self-regulation. Thus, although originally opposed to the formation of the IGF, private sector actors soon metamorphosed into its strong supporters (Malcolm, 2008, p. 350). But the greater resources that corporate actors can draw upon in multi-stakeholder discussions, relative to those of civil society, also create clear risks that they will dominate processes that are not adequately managed to prevent this (Conger, 2017).

The paper of the author that formed the basis for the principles (Malcolm, 2015a) takes particular care to address these concerns, acknowledging that:

[...] multi-stakeholder processes have come under much criticism from some who fear that corporations will entrench their positions of power and abuse those processes to overpower the public interest—particularly if the role of governments is not structurally elevated over those of all other stakeholders. Similarly, there is concern that the security and economic interests of certain governments can be (as, indeed, those of the United States have been) structurally cemented in what are notionally multi-stakeholder internet governance processes.

Despite the careful crafting of its process criteria to be capable of broad adoption, the success of the Shadow Regulation project is by its nature less susceptible to evaluation. It seems unlikely that the minimal resources that EFF alone can devote to dissemination of the principles will produce a measurable impact on the receptiveness of policymakers to embracing inclusiveness, accountability or transparency reforms. The further success of this project depends upon the term "Shadow Regulation" catching on and self-replicating, meme-like, in the consciousness of internet users and policymakers.

Unlike the other projects mentioned, Shadow Regulation is not a membership-based group, but an endeavour to attach an easily understood viral brand to the process norms that underpin our trade reform advocacy as well as our critiques of other closed and opaque policy processes. As such, Shadow Regulation provides a resource designed to be suitable for diverse civil society groups to draw upon to support their advocacy work.

4. Conclusion

In promoting the improvement of the inclusiveness, accountability and transparency of trade policymaking processes, it might be assumed that all of the parties begin with a common understanding of what those criteria mean, and simply differ on the desirability of implementing them. This paper has demonstrated why that is not the case, and why there may actually be misunderstandings not only between trade ministries and civil society, but even within civil society, about what inclusiveness, accountability or transparency mean in the context of public policy development. To some extent, the different meanings attached to these criteria stem from the differences in the institutional development of the internet governance and the trade policymaking regimes, which in turn have influenced the expectations of the institutions and CSOs respectively engaged in each of those areas.

But underlying this is a deeper explanation. Mainstream CSOs that focus on internet activism tend to favour governance mechanisms based on markets, technology, and decentralised collective action, rather than intervention by government. Even when government intervention is recommended, this is often limited to upholding individual rights, rather than the provision of broad or intrusive social programmes that would significantly affect the administration or operation of the network. This resembles a skew to the right, at least in respect of internet policy issues. In contrast, mainstream CSOs that focus on trade and development are much more likely to exhibit a skew to the political left, and to favour government intervention to address problems at the intersection of technology policy and trade, even where such intervention would have redistributive effects, or would place new restrictions on the free flow of data across the internet—a shibboleth to mainstream internet-focused CSOs.

These divergences not only affect the respective positions of the two factions of CSOs on substantive internet and trade policy issues, but also their attitudes towards the processes by which internet-related trade rules are developed, including how the inclusiveness of such processes should be assessed, what measures of accountability are important, and how transparency is to be operationalised. This rift assumes importance because of the increasing convergence of these two very different regimes of internet governance and trade, which increasingly finds institutions and the CSOs who participate in them being required to cooperate. Unless a project is undertaken to reconcile the contested meanings of inclusiveness, accountability and transparency held by civil society stakeholders and the institutions in which they work, then the result is misunderstanding at best, and conflict at worst.

This paper does not suggest exactly where that balance should be struck, for example, what accountability measures should be required of non-governmental stakeholders, or whether meetings of trade negotiators should be open to public interest representatives. These are important questions, but beyond the scope of the paper. Rather, it suggests that we must acknowledge the contested meanings of inclusiveness, accountability and transparency in the context of trade policymaking and work towards their reconciliation, before we can expect to arrive at mutually acceptable improvements to trade policymaking processes.

Three related projects of EFF have been put forward in response to this challenge. The Open Digital Trade Network was formed in an effort to create a coalition of activists and policy experts at the intersection of the two domains, who despite their diverse positions on substantive internet policy issues, could agree on reforms to trade negotiation processes that would bring them in line with norms of transparency and public participation drawn in part from the internet governance regime.

The IGF Dynamic Coalition on Trade and the Internet was an extension of the Open Digital Trade Network aimed at extending the consensus on these recommendations for reform to a broader set of stakeholders, who through their connections with a diversity of policymaking fora, could begin to disseminate these recommendations as best practices. Finally EFF’s Shadow Regulation project, aimed at a still broader community of interest, utilises an infographic to disseminate a simple lexicon of process criteria that can be promulgated as a best practice standard for the development of cross-border internet-related public policies, whether trade-related or otherwise.

But of course, more is required to complete the broader enterprise of which these ongoing projects form part. For example, a research project identified early on by the Open Digital Trade Network, but not yet undertaken, is defining more precisely the levels of multi-stakeholder participation and transparency that could be appropriate at different points in internet public policy development through trade negotiations, and on different policy issues. It is here where the differences between civil society factions may assume as much importance as the divergences between CSOs and trade ministries.

This paper has highlighted the need for those engaged in such future exercises to remain cognisant of the realpolitik of the trade negotiator, and of the political ideologies and concerns of each civil society faction, such as those around corporate capture of multi-stakeholder processes and their potential to usurp the legitimate authority of states. After all, the reform of trade negotiation practices will only be accomplished to the extent that it is broadly supported, which in turn requires reform proposals to be compatible with the needs and values of all involved stakeholders.

References

Anonymous. (2015). A year after unveiling, “PITAC” stalled due to fight over secrecy rules. Inside U.S. Trade, 27 Feb.

Barlow, J. P. (1996). A declaration of the independence of cyberspace. Retrieved from https://www.eff.org/cyberspace-independence

Belli, L. (2015). A heterostakeholder cooperation for sustainable internet policymaking. Internet Policy Review, 4(2). doi:10.14763/2015.2.364

Best Bits. (2014, March). Internet governance principles and human rights. Retrieved from http://content.netmundial.br/contribution/internet-governance-principles-and-human-rights/107

Brussels Declaration on Trade and the Internet. (2016, February). Retrieved from https://www.eff.org/files/2016/03/15/brussels_declaration.pdf

Carter, Z. (2012). Trans-Pacific Partnership: key senate Democrat joins bipartisan trade revolt against Obama. Huffington Post, 23 May. Retrieved from http://www.huffingtonpost.com/2012/05/23/trans-pacific-partnership-ron-wyden_n_1540984.html

Conger, K. (2017). The fight over DRM standards for streaming video is over and big business won. Gizmodo, 18 September. Retrieved from https://gizmodo.com/the-fight-over-drm-standards-for-streaming-video-is-ove-1818520581

Conway, J. (2013). Edges of global justice: the world social forum and its “others”. Rethinking Globalizations. Routledge. Retrieved from https://books.google.com/books?id=yqyVnwVv5hEC

Cutler, A. C. (2003). Private power and global authority: transnational merchant law in the global political economy. Cambridge: Cambridge University Press.

Doria, A. (2014). Use [and Abuse] of Multistakeholderism in the Internet. In R. Radu, R. H. Weber, & J.-M. Chenou (Eds.), The evolution of global internet governance: principles and policies in the making (pp. 115–140). Berlin, Heidelberg: Springer.

Futter, A., & Gillwald, A. (2015). Zero-rated internet services: What is to be done? (Policy Paper No. 1, 2015). Research ICT Africa. Retrieved from https://www.researchictafrica.net/docs/Facebook%20zerorating%20Final_Web.pdf

Hoekman, B. M. (2009). The political economy of the world trading system (3rd ed.). Oxford: Oxford University Press.

James, D. (2017). Twelve reasons to oppose rules on digital commerce in the WTO. Huffington Post, 12 May. Retrieved from http://www.huffingtonpost.com/entry/5915db61e4b0bd90f8e6a48a

Johnson, D. R., & Post, D. (1996). Law and Borders: The Rise of Law in Cyberspace. Stanford Law Review, 48(5), 1367. doi:10.2307/1229390

Johnson, T. (2013). U.S. trade representative defends pending trade pact after Wikileaks disclosure. Variety, November 13. Retrieved from https://variety.com/2013/biz/news/u-s-trade-representative-says-pending-pact-has-zero-to-do-with-sopa-1200838860/

Just Net Coalition. (2014, February). The Delhi declaration for a just and equitable internet. Retrieved from https://justnetcoalition.org/delhi-declaration

Katz, J. (1997). Birth of a digital nation. Wired Magazine, 5.

Kelty, C. (2005). Geeks, social imaginaries, and recursive publics. Cultural Anthropology, 20(2), 185-214. doi:10.1525/can.2005.20.2.185

Lacarte, J. A. (2004). Transparency, public debate and participation by NGOs in the WTO: A WTO perspective. Journal of International Economic Law, 7(3), 683-686. doi:10.1093/jiel/7.3.683

Leiner, B. M., Cerf, V. G., Clark, D. D., Kahn, R. E., Kleinrock, L., Lynch, D. C., ... Wolff, S. (2003). A brief history of the internet. Retrieved from http://www.isoc.org/internet/history/brief.shtml

Maciel, M. (2014). Creating a Global Internet Public Policy Space: Is There a Way Forward? In W. J. Drake & M. Price (Eds.), Beyond Netmundial: The Roadmap for Institutional Improvements to the Global Internet Governance Ecosystem (pp. 99–107). Philadelphia: Internet Policy Observatory. Retrieved from https://repository.upenn.edu/internetpolicyobservatory/5

Malcolm, J. (2008). Multi-Stakeholder Governance and the Internet Governance Forum. Perth: Terminus Press.

Malcolm, J. (2010). Public Interest Representation in Global IP Policy Institutions. Retrieved from https://digitalcommons.wcl.american.edu/research/6/.

Malcolm, J. (2015a). Criteria of meaningful stakeholder inclusion in internet governance. Internet Policy Review, 4(4). Retrieved from https://policyreview.info/articles/analysis/criteria-meaningful-stakeholder-inclusion-internet-governance doi:10.14763/2015.4.391

Malcolm, J. (2015b, October). U.S. open government commitments fail to improve trade transparency. Retrieved from https://www.eff.org/deeplinks/2015/10/us-open-government-commitments-fail-improve-trade-transparency

Malcolm, J. (2017, January). Does Trump’s withdrawal from TPP signal a new approach to trade agreements? Retrieved from https://www.eff.org/deeplinks/2017/01/does-trumps-withdrawal-tpp-signal-new-approach-trade-agreements

Mena, S., & Palazzo, G. (2012). Input and Output Legitimacy of Multi-Stakeholder Initiatives. Business Ethics Quarterly, 22(3), 527-556. doi:10.5840/beq201222333

Moog, S., Spicer, A., & Böhm, S. (2015). The politics of multi-stakeholder initiatives: The crisis of the Forest Stewardship Council. Journal of Business Ethics, 128(3), 469-493. doi:10.1007/s10551-013-2033-3

Mueller, M. L. (2010). Networks and states: The global politics of Internet governance. Cambridge, Mass: MIT Press. doi:10.7551/mitpress/9780262014595.001.0001

NETmundial Multistakeholder Statement. (2014, April). Retrieved from http://netmundial.br/wp-content/uploads/2014/04/NETmundial-Multistakeholder-Document.pdf

Panday, J. (2017, August). Rising demands for data localization a response to weak data protection mechanisms. Retrieved from https://www.eff.org/deeplinks/2017/08/rising-demands-data-localization-response-weak-data-protection-mechanisms

Piotrowski, S. J., & Borry, E. (2010). An analytic framework for open meetings and transparency. Public Administration and Management, 15(1), 138-176.

Steffek, J. & Kissling, C. (2006). Civil Society Participation in International Governance: the UN and the WTO Compared. Retrieved from http://econstor.eu/bitstream/10419/24955/1/514659831.pdf

Wolfe, R. (2012). Protectionism and multilateral accountability during the great recession: drawing inferences from dogs not barking. Journal of World Trade, 46, 777.

WSIS. (2003). Declaration of principles. Retrieved from http://www.itu.int/wsis/docs/geneva/official/dop.html

WSIS. (2005). Tunis Agenda for the Information Society. Retrieved from http://www.itu.int/wsis/docs2/tunis/off/6rev1.html

Footnotes

1. Note that this is not to suggest that the notional “equal footing” of participating stakeholders within ICANN effectively equalises their respective power relations within that organisation; a problem of the multi-stakeholder model to which we will shortly return.

2. This derives from the author’s personal observations during discussions with staff of the United States Trade Representative during negotiations of the TPP and NAFTA during 2016 and 2017.

3. Links to and further information about each of these groups may be found at the website of the Internet Governance Civil Society Coordination Group (CSCG), a peak body that contains representatives of each and functions to "ensure a coordinated civil society response and conduit when it comes to making civil society appointments to outside bodies": http://internetgov-cs.org/.

4. On network neutrality (the principle that internet providers should not discriminate between data on the basis of type, source, or destination), the mainstream internet civil society position is that such discrimination ought not to be allowed, because it interferes with the user’s freedom to access all internet content on a level footing. But civil society groups that are not primarily concerned with internet policy are more inclined to favour such discrimination, through mechanisms such as "zero rating" or differential pricing for certain internet content, in order to facilitate access by marginalised groups (as the US National Association for the Advancement of Colored People (NAACP) contends), or underserved populations in the developing countries of Africa (Futter and Gillwald, 2015).

5. These are domestic laws or policies that mandate or give priority to the use of domestic Internet hosting providers or networks, on which the position generally favored by Internet governance civil society groups is that such rules interfere with the free flow of data online, and should be disallowed. However, there are developing country groups and networks, such as the South Centre and the Our World Is Not For Sale network, that favor certain protectionist measures as a way to give developing country entrepreneurs a leg up over dominant Internet platforms from the developed world (James 2017).

6. The letter, published at https://justnetcoalition.org/2017/to_WTO_Agenda.pdf, is endorsed by 300 groups including numerous trade unions and fair trade groups, but few internet-focused CSOs.

7. The author makes this observation as a participant in each of these organisations, being a member and former coordinator of the IGC, founder and steering committee member of Best Bits, and steering committee member of CSISAC.

8. The author is reporting on discussions to which he was a party as EFF’s representative as convener of the Open Digital Trade Network.

Neutrality, fairness or freedom? Principles for platform regulation

Acknowledgements: The author wishes to express his gratitude to the Jindal Initiative on Research in IP and Competition (JIRICO) at O.P. Jindal Global University, India, for organising the JIRICO Global Writing Workshop & Research Colloquium. This paper benefited from the insightful comments of the workshop participants, and—at a later stage—those of the reviewers and editors of Internet Policy Review. Any mistakes are the author’s sole responsibility. Finally, the author notes that this paper builds on thoughts presented in the concluding section of a previous article, namely Friso Bostoen (2018). Online Platforms and Vertical Integration: The Return of Margin Squeeze? Journal of Antitrust Enforcement, 6 (forthcoming).

 

1. Introduction

The question whether online platforms need to be regulated has been heavily debated by legal scholars, without a clear answer emerging. However, it appears that policymakers in different EU member states and at EU level have drawn their conclusions and are proposing or adopting regulation. ‘Regulation’ is used in a broad sense here, referring to specific legal instruments as well as the application of competition law, but this article focuses on the former. Most notably, France has adopted a law on platform fairness, while the European Commission (‘Commission’) not only ordered Google to implement a form of search neutrality but is also tabling regulatory proposals on fairness in platform-supplier relations.

There is, however, a disconnect between the academic debate and the political reality. The first issue is that legal scholars often discuss the (in)appropriateness of platform-specific regulation without a clear view of what this regulation looks like, which means that both support and criticism miss a clear target. As the scholarly discussion is not centred on the regulation that is actually being proposed, policymakers cannot fully benefit from the academic debate. This brings us to the second issue, namely that policymakers make scholarly discussion difficult by presenting their regulatory proposals in terms of ‘fairness’ and ‘neutrality’—goals that are lofty but meaningless in and of themselves. To solve the two-way disconnect between academia and policy, this article seeks to propose a frame of reference for productive debate on platform regulation. It does so by extracting from the various regulatory initiatives two operational principles, namely transparency and non-discrimination.

To achieve its goal of presenting meaningful principles for platform regulation, the article is structured in three parts, each with its own methodology. It starts with a brief doctrinal review of the legal debate on the need for platform regulation (section 2). While platforms give rise to several concerns, the article identifies the economic dynamic that is at the heart of many of the regulatory initiatives (section 3). The main resource for this exercise is the economic literature on online platforms, which is primarily theoretical but incipiently empirical. The main part of the article then surveys the policy initiatives targeting online platforms both at the EU level and at the member state level with a focus on France (section 4). As there is a lack of primary research on this topic, an objective preliminary description of the (proposed) legal instruments is crucial. Through a critical analysis, the article then distils from these instruments operational principles for platform regulation, and identifies in which cases they can serve a purpose.

2. The need for online platform regulation

Many authors have discussed the need for regulation of online platforms such as Google, Amazon, Facebook and Apple. Their views can be split into two camps, which may be termed—with some simplification—‘anti-intervention’ on the one hand and ‘pro-intervention’ on the other.

In the anti-intervention camp are authors who argue that intervention in digital markets should be kept to a minimum. The reasons they offer vary. Some argue that competitive issues are unlikely to develop in digital markets (Rato & Petit, 2014, p. 8) and that when they do, the dynamic nature of these markets will quickly correct them (Evans, 2017). A related argument goes that intervention in these fast-paced digital markets is prone to decisional errors, which would stifle innovation (Shelanski, 2013). Others make the more general argument that regulation should not focus on the phenomenon of platforms, in other words, that there should not be a ‘law of the platform’ (Lamadrid, 2015; Lobel, 2016). This argument goes back to a speech by Frank Easterbrook, in which he argued that there should not be a ‘law of cyberspace’ any more than there should be a ‘law of the horse’; new phenomena should instead be assessed according to the general legal principles that existed before their rise (Easterbrook, 1996). Relatedly, some authors argue that the lack of a clear definition of ‘platform’ makes the creation of a targeted regulatory framework difficult (Maxwell & Pénard, 2015, pp. 7-11).

In the pro-intervention camp are authors who do believe that there are competitive issues with online platforms that should be addressed. While some authors argue that any issues can be adequately addressed under the current competition rules, a number of others argue that competition law should be tailored to digital markets in order to fully solve the issues at hand. On the one hand, their work includes proposals for substantive reform—i.e. changing the law itself, or at least its interpretation (Khan, 2017, pp. 790-797). On the other, they offer suggestions for procedural reform—i.e. changing how the law is administered—with a particular focus on the need to speed up interventions to keep pace with the fast-moving digital sector (Kadar, 2015, pp. 19-23). A final group of authors holds the view that the current situation justifies the adoption of a new regulatory framework that would apply to (certain) online platforms. The argument goes that some (especially data-related) issues are simply not sufficiently addressed by any branch of the current legal framework (Strowel & Vergote, 2016, pp. 11-15).

The merits of each position can be elaborately discussed. On the intervention side, a lot of work on the application of general competition law to online platforms remains to be done. However, some EU member states have either skipped this exercise or drawn their conclusions and moved to the next stage by adopting more specific regulation targeting online platforms. Before discussing the regulatory initiatives themselves, let us take a look at the (anti-)competitive dynamic of online platforms that authorities are increasingly regulating.

3. The subject of online platform regulation

The preliminary question is how to define an online platform. However, agreement on such a definition is elusive. Generally, online platforms can be described as intermediaries operating in multi-sided markets, in which they seek to facilitate direct interaction between different user groups—the ‘sides’ of the market (similarly Rochet & Tirole, 2003). While it is difficult to describe what platforms are, it is easier—and for the purpose of regulating them more important—to describe what they do. A wealth of online platforms exist, but with a view to making everything ‘as simple as possible, but not simpler’ (Einstein), it can be said that the core role of platforms consists in search and matching (Martens, 2016, pp. 20-26).

The most obvious kind of matching platforms are online dating/marriage services like the American Match.com and the Indian Matrimony.com. Other platforms, such as Upwork and MTurk, connect persons who have a job to be done with an on-demand workforce to carry it out. However, most matching platforms do not seek to facilitate a (working) relationship, but rather a transaction. Mobile app stores (like Apple’s App Store and Google Play) serve as a good example: they facilitate transactions between app developers (suppliers) and consumers. These kinds of platforms profit by charging suppliers a commission on every transaction. For app stores, for example, this commission amounts to 30% of the price of the app (and subsequent in-app purchases or subscriptions).1

The focus of other platforms lies more with search than with matching. Search engines such as Google serve as a prime example. Their main function consists in listing and ranking information on the Web. In doing so, they connect not two but three user groups: users seeking information, websites seeking an audience, and advertisers seeking new customers. This intermediation is free for users and websites, and financed by advertisers who generally pay each time a user clicks on the link to their website (a ‘pay per click’ model).

However, these two core functions do not lead to a strict dichotomy, as many platforms combine elements of both categories. Consider Amazon Marketplace. Shopping on this platform usually starts with a search query, after which Amazon offers a ranking of results. Some of these results will have been paid for by advertisers, others will pop up organically. When consumer and supplier have found each other, Amazon facilitates the transaction between them. Thus, Amazon Marketplace intermediates between consumers, advertisers and suppliers. Price comparison websites (such as Booking.com for hotels) function in a similar way.

Platforms and suppliers are in what has been called a ‘frenemy relationship’ (Ezrachi & Stucke, 2016, pp. 147-158). In a first period, platforms need suppliers: as platforms only offer a digital infrastructure for interaction, a platform without suppliers is simply worthless to consumers; intermediation only works with two user groups to connect. In a second period, when a platform becomes larger, its suppliers become more dependent on it to provide their services to consumers (European Commission, 2016a, p. 13). In that situation, the platform may seek to capture more of the value in the supply chain. A first, straightforward way to do so is by increasing the commission rate/pay per click fee it demands from suppliers.

There is also a more intricate—and possibly more profitable—way to capture more value, namely by integrating vertically. Vertical integration means that the platform starts creating its own goods or services for distribution through its platform, in order to make an additional profit on those sales. After such vertical integration, the platform actually competes with certain suppliers, which creates an incentive to exclude them. This exclusion can be carried out explicitly through the delisting of certain suppliers from the platform (e.g. Dredge, 2013). More often, however, this exclusion proceeds through a subtle combination of commission rates imposed on the supplier and the ranking of results presented to the consumer. In both cases, the platform operator uses its control over the ecosystem to favour its own goods and services.

We have seen this dynamic play out before. Apple, for example, integrated vertically when it started offering Apple Music, a music streaming app, through its own App Store. It subsequently sought to make subscribing to Spotify, a competing app, less attractive through a combination of high commission rates and restrictive conditions (Crook, 2016). Google’s search engine used to direct consumers to more specific services. Now that Google has its own specialised services (e.g. comparison shopping and flights), consumers are pointed in that direction (Google Search, 2017; Google India, 2017). Amazon Marketplace, finally, does not only connect sellers to buyers, but also operates as a seller itself. When Amazon starts producing a new good, it skews its algorithms in favour of this offering (Angwin & Mattu, 2016).

A public consultation carried out by the European Commission shows this sort of behaviour is perceived as a wider problem: 90% of responding businesses (out of a total of 116) replied that they are dissatisfied with the relations between platforms and suppliers. The problematic practices most commonly experienced by these businesses were: (i) a platform applying unbalanced terms and conditions; (ii) a platform promoting its own services to the disadvantage of services provided by suppliers; and (iii) a platform refusing access to its services (European Commission, 2016b, p. 9).

Because of their exclusionary nature, these practices merit our attention. After all, consumers may suffer the consequences: empirical case studies indicate that unfair platform-supplier competition can reduce innovation and increase prices (Wen & Zhu, 2017), limit consumer choice (Zhu & Liu, 2016), and degrade the quality of the platform (Luca et al., 2016). The first regulatory reflex in this case should be competition law. However, given the novelty of this issue (and the duration of proceedings), competition law has not been sufficiently applied to potentially abusive behaviour in the digital economy, and it thus remains unclear whether this branch of law can adequately regulate online platforms. Nevertheless, some authorities in Europe have (at least implicitly) concluded that competition law is not up to the task. Consequently, they are preparing or have already adopted specific regulation targeting the relationship between platforms and suppliers, which is the subject of the next section.

4. The principles of online platform regulation

4.1. Introduction

The possibilities for platform regulation are situated on a broad spectrum bounded by two extremes. On one side of this spectrum, we find complete freedom from regulatory intervention for platform operators. While such freedom is difficult to imagine, since platforms are subject to various parts of the current legal framework, eleven EU member states did call on the Commission not to specifically regulate platforms (Joint letter, 2016). Situated on the other side of the spectrum are complete bans on certain platform behaviour. We are now seeing an example of the latter in the bans on so-called ‘most favoured nation clauses’ between booking platforms and hotels. These clauses prohibit the hotel from offering their rooms at a lower price or under better conditions on their own website and/or on other platforms. After competition authorities took a balanced approach regarding the permissibility of these clauses, legislators in several member states outright banned them (Bostoen, 2017b).

Between leaving platforms complete freedom and completely banning some of their behaviour are a number of options that authorities in Europe are exploring. Most of the regulatory initiatives are centred around two principles, namely neutrality and fairness. Of course, these principles do not mean much in and of themselves; it is their content that matters. This section will examine the different regulatory initiatives, organising them by the principle they claim to represent, but with specific regard for their content.

4.2. Neutrality

The first principle that surfaces in (or lies under the surface of) a number of regulatory initiatives is neutrality. This principle is not new, but has often been included in instruments regulating network sectors. In telecommunications regulation, we have two precedents that are especially relevant, both of which impose a form of network neutrality through an obligation of non-discrimination.

a. Network neutrality

Starting in the 1990s, the European telecom sector was liberalised under direction of the Commission (Directive 2002/77/EC). Member states were obliged to abolish exclusive or special rights for the provision of telecom services (which were often provided by former state monopolies) and the telecom network was opened up to entrants who could compete with the incumbent in providing those services. In this situation, the incumbent would regularly end up both providing network access to entrants (upstream market) and competing with them in offering services to end-users (downstream market). To ensure effective competition, a certain neutrality was imposed on the provider of the network:

Member States shall ensure that vertically integrated public undertakings which provide electronic communications networks and which are in a dominant position do not discriminate in favour of their own activities.

For example, the incumbent could engage in such discrimination by charging higher prices for network access to downstream competitors than to its own downstream operations. If this price charged to competitors for access to the incumbent’s network was too high, or the price charged by the incumbent to end-users too low, then entrants could not effectively compete on that downstream market. In that case, the Commission would intervene through competition law enforcement, more specifically through the concept of ‘margin squeeze’ (Bostoen, 2017a).

Margin squeeze is defined as the situation where a dominant undertaking charges ‘a price for the product on the upstream market which, compared to the price it charges on the downstream market, does not allow even an equally efficient competitor to trade profitably in the downstream market on a lasting basis’ (European Commission, 2009, para. 80). In other words, margin squeeze targets the situation where a network operator forces its downstream competitor—who is just as efficient—off the market by squeezing its profit margins.
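The as-efficient-competitor logic behind this definition reduces to simple arithmetic: the spread between the dominant firm’s downstream (retail) price and its upstream (wholesale) access charge must cover the downstream costs of an equally efficient rival. The sketch below is a minimal illustration of that test only; the function name and all figures are hypothetical and are not drawn from any cited decision:

```python
def margin_squeeze(wholesale_price: float,
                   retail_price: float,
                   downstream_cost: float) -> bool:
    """As-efficient-competitor test (illustrative): a rival that matches
    the incumbent's downstream cost is squeezed if the margin between the
    incumbent's retail price and its wholesale access charge does not
    cover that downstream cost."""
    margin = retail_price - wholesale_price
    return margin < downstream_cost

# Hypothetical figures: an entrant pays 20 for network access, the
# incumbent retails the service at 25, and serving an end-user costs an
# equally efficient rival 8. The 5-unit margin cannot cover the 8-unit
# downstream cost, so the rival cannot trade profitably.
print(margin_squeeze(wholesale_price=20, retail_price=25, downstream_cost=8))
```

In practice, of course, the assessment turns on contested questions of cost measurement and time horizon that this toy check abstracts away entirely.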

In a next step, neutrality obligations were expanded from the telecom network itself to the internet services provided through this network. Net neutrality, a term coined by Tim Wu in 2003, implies that internet service providers (‘ISPs’) must treat data equally, i.e. cannot block or slow down specific applications or services. In April 2016, the EU Regulation on net neutrality came into force, aiming ‘to safeguard equal and non-discriminatory treatment of traffic in the provision of internet access services’ (Regulation (EU) 2015/2120, consideration 1).

The new Regulation is motivated in part by worries concerning vertical integration, the idea being that ISPs could give their downstream services preferential treatment over those of competing content providers. They could, for example, slow down YouTube and Netflix to make their own video service more attractive, throttle Spotify’s traffic to draw users to their own music streaming service, or degrade the speed of online communication services to make their own apps more popular. Such conduct is prohibited by Article 3 of the Regulation, which states:

Providers of internet access services shall treat all traffic equally, when providing internet access services, without discrimination, restriction or interference, and irrespective of the sender and receiver, the content accessed or distributed, the applications or services used or provided, or the terminal equipment used.

Thus, the EU mandates neutrality in providing access to the telecom network, and in treating services on the internet. There are two important parallels between these neutrality policies. Firstly, they seek to prevent networks from excluding undertakings that use their network, i.e. downstream telecom entrants in the case of margin squeeze and content providers in the case of net neutrality. Secondly, they are implicitly premised on the idea that these networks are essential enough to be qualified as ‘utilities’. In the US, for example, net neutrality was imposed by classifying ISPs as ‘common carriers’, which meant they had to comply with the same non-discrimination obligations as railways and telephone companies (Open Internet Order, 2015).

Given the importance of online platforms for a great number of businesses, and the fact that vertical integration leads them to exclude competitors from their network, one may wonder whether they should be the next target of neutrality obligations. US Senator Al Franken supports such a move, stating in a recent speech: ‘As tech giants become a new kind of internet gatekeeper, I believe the same basic principles of net neutrality should apply here’ (Franken, 2017).

However, some authors have argued that the analogy between ISPs and online platforms—or between network and platform neutrality—is not justified (Renda, 2015; Ammori, 2016). For one, the goals of net neutrality include not only competition but also broader objectives such as media pluralism and freedom of expression. The most convincing distinction is that telecom networks, contrary to online platforms, are natural monopolies (additionally, they often started as public undertakings, at least in Europe). A related argument goes that, while users of online platforms can easily multi-home (i.e. shift between different providers), high switching costs confine users to one ISP. Despite these arguments, we are now seeing neutrality obligations in the platform economy.

b. Platform neutrality

After seven years of investigation, the Commission issued its decision in the Google Search case in 2017. It decided that Google has abused its market dominance as a search engine by giving an illegal advantage to another Google product, its comparison shopping service. According to the decision, Google systematically gave prominent placement to its own comparison shopping service while demoting rival comparison shopping services in its search results. In other words, Google used its control of the platform (the search engine) to favour its own service (comparison shopping). The imposed remedy is as follows (European Commission, 2017a):

[T]he Decision orders Google to comply with the simple principle of giving equal treatment to rival comparison shopping services and its own service: Google has to apply the same processes and methods to position and display rival comparison shopping services in Google’s search results pages as it gives to its own comparison shopping service.

Thus, the Commission imposed a form of search neutrality, but left the implementation up to Google. The Commission did specify that it ‘does not object to the design of Google’s generic search algorithms or to demotions as such, nor to the way that Google displays or organises its search results pages (e.g. the display of a box with comparison shopping results displayed prominently in a rich, attractive format)’ (European Commission, 2017b).

To comply with the decision, Google has created a stand-alone unit for Google Shopping. Where the coveted top spots on a general Google Search page were previously reserved exclusively for Google Shopping, it now has to bid for those spots against other comparison shopping services (Heckman, 2017). According to the company, Google Shopping will participate in the auction the same way as everyone else, and thus compete on equal terms. In essence, Google is instituting a behavioural separation between its search engine and its comparison shopping service. However, this choice of remedy has not escaped criticism: in February 2018, a number of complainants addressed a letter to the Commission arguing that ‘without full ownership unbundling (structural separation), Google Shopping’s participation in the auction is essentially meaningless’ (Foundem et al., 2018).

The Google decision was far from the first initiative to regulate the search engine. In 2014, the European Parliament adopted its so-called ‘Google Resolution’. Concerned with the evolution of search engines into gatekeepers, it called on the Commission ‘to consider proposals aimed at unbundling search engines from other commercial services’ (European Parliament, 2014, point 15). Given its general wording, the resolution can be interpreted as recommending anything from a behavioural separation to a structural break-up, but was probably just intended to put pressure on the Commission to swiftly and strongly conclude the Google investigation.

In 2015, the French senate adopted its ‘Google Amendment’. The amendment (which did not make it into law) would have imposed a number of obligations on every search engine ‘with a structuring effect on the digital economy’, including (i) making available three other search engines on its home page; (ii) informing its users of its general ranking principles; and (iii) ensuring that search results are fair and non-discriminatory, and do not favour the search engine’s specialised services. ARCEP, the French telecom regulator, would be tasked with the enforcement of these obligations.

While the Google Search decision imposes neutrality on one platform within a specific group of platforms (search engines), the French Digital Council drafted a report on platform neutrality (neutralité des plateformes) with a broader scope. The Council describes how, as intermediaries, platforms do not only connect their users but may also become their competitors. It notes how this intermediary position gives platforms a big competitive advantage over their suppliers, which may lead to discriminatory and non-transparent conditions of access to the platform. Accordingly, it is crucial that ranked results are fully transparent, so that users can easily distinguish between results paid for by advertisers, results favoured because of their relation with the platform, and general algorithmic results (Conseil National du Numérique, 2014, pp. 8-9, 12, 16).

In an annex to the report, the French Digital Council launches some ideas for platform neutrality (Conseil National du Numérique, 2014, pp. 28-29). These include creating a prohibition of every form of discrimination with respect to suppliers that is not justified by the quality of the service or legitimate economic reasons. Another idea consists in the principle of equal access for suppliers who have become competitors of indispensable platforms, which would particularly apply to rankings and to conditions of access. The report was ordered by and submitted to the French government, and some of its considerations made it into the French law on platform fairness, which is discussed under the next subsection.

4.3. Fairness

Apart from neutrality, authorities—most notably in France and at the EU level—have included fairness as a guiding principle in their regulatory initiatives. The most salient example is the law on platform fairness (loyauté des plateformes) adopted by the French Parliament in 2016. The groundwork for this law was laid by the Digital Council’s 2014 report on platform neutrality discussed above, but various other reports followed.

Also in 2014, the French Council of State drafted a report on ‘fundamental rights on the internet’. In the report, it observes that marketplaces and search engines no longer play the purely technical and passive role that is required to benefit from the safe harbour regime contained in the E-commerce Directive. The Council thus considers it necessary to create a new legal category for platforms that offer ranking or listing services for content, goods or services placed online by third parties. It argues that these platforms cannot be subjected to a principle of neutrality (or equal treatment), because it is their very role to rank results on the internet (thereby favouring some over others). According to the French Council of State, these ranking platforms should be subjected to a principle of fairness towards both consumers and suppliers (Conseil d’Etat, 2014, p. 21). The specific obligations deriving from this principle would have to be defined, and the Council proposes four (Conseil d’Etat, 2014, pp. 278-281):

  1. Platforms are not allowed to alter or distort their ranking for purposes contrary to the interests of their users, and should not favour their own services over their competitors’;
  2. Platforms should inform the users of the general workings of their algorithm, and should clearly distinguish between results paid for by advertisers, results favoured because of their relation with the platform, and general algorithmic results;
  3. Platforms should publish their criteria for removing lawful content and apply them in a non-discriminatory manner;
  4. Platforms should communicate in advance with suppliers about any changes in their content policy or the workings of their algorithm that may affect them.

Once the principle of fairness is enshrined in law, these obligations could either be specified by the platforms themselves in charters of professional conduct or be adopted by law as well. A number of existing authorities, including the French competition authority and ARCEP, would be tasked with enforcing them. Note that, terminology aside (‘fairness’ instead of ‘neutrality’), the Council of State’s report shows considerable overlap with the report on platform neutrality by the French Digital Council (and with the French Senate’s later ‘Google Amendment’).

In 2015, the French Digital Council drafted another report, this time titled ‘digital ambition’. One chapter is titled ‘platform fairness’, a visible terminological change from its previous report on platform neutrality (Conseil National du Numérique, 2015, pp. 58-78). It starts out by describing a ‘structural imbalance’ between the platform and its suppliers due to the intermediary position of the platform, which may—when vertically integrated—also compete with its suppliers. After observing that existing law remains difficult to apply to platform conduct, the Council proposes to adopt a general principle of fairness.

This principle seeks to oblige the platform to carry out its services in good faith without distorting them for purposes contrary to the interests of their users (both private and professional). A clear separation between organic search results, sponsored search results and search results ‘internal to the platform ecosystem’ would have to be imposed. With a view to normalising access to ‘inescapable’ platforms, the Council adds two more recommendations: (i) to install an obligation for the platform to inform its suppliers in advance in case of major changes (relating e.g. to tariffs, content or algorithms); and (ii) to apply a principle of non-discrimination in ranking, except in case of legitimate considerations that are compatible with the interests of internet users.

The scope of application of these obligations would not be determined by a definition, but by a number of criteria (including the platform’s audience, its massive adoption and its ability to harm innovation), which would single out platforms ‘with the greatest capacity for hindrance’. Enforcement of these obligations would, in the first place, be the task of existing regulatory authorities. However, the Council also recommends creating two new institutions: (i) a European fairness rating agency, based on an open network of contributors; and (ii) a body of algorithm experts that can be mobilised on the demand of a regulatory authority.

The last decisive step towards the French law on platform fairness was a 2015 report on ‘a new democratic age’ by a legislative commission. The report refers to both the work of the French Digital Council and Council of State, and adds critical notes (Commission de réflexion et de propositions sur le droit et les libertés à l’âge numérique, 2015, pp. 197-225). It starts by describing the competitive dynamics of online platforms, and how some of them have grown into ‘quasi-inescapable’ intermediaries. The report particularly notes how these platforms are able to impose significantly imbalanced conditions on their suppliers, and how—after vertical integration—they are able to restrict competition by favouring their offer over their suppliers’. Moreover, it is not always easy for users to determine when this happens.

In the first place, the report finds it necessary to better adapt competition law to online platforms. It also sees room for a specific regulatory instrument beyond competition law, but notes several issues. Firstly, there are the definitional problems. In that regard, the report finds the platform definition in the Council of State’s 2014 report flawed because it is both too broad (as it targets all platforms, not only those in a dominant position) and too narrow (as it captures only platforms whose content is determined by third parties, not those with a greater editorial role such as Netflix or Spotify). It also takes issue with the Digital Council’s focus on platforms with ‘the greatest capacity for hindrance’, as this is more of a moral than a legal notion. The report prefers the notion of ‘digital platforms that are structuring for the economy’. Finally, the report turns to the objectives of the regulation and the different ways the obligations could be conceived. It examines several ideas, most of which centre on transparency and non-discrimination, but stops short of concrete proposals.

After this series of reports, the French parliament adopted a law on platform fairness in October 2016 (Loi n° 2016-1321, Article 49). The law applies to ‘platform operators’, which are defined as every natural or legal person offering professionally—whether remunerated or not—a public online communication service relying on:

  1. listing or ranking through data processing the content, goods or services offered or uploaded by third parties; or
  2. connecting multiple parties for the sale of a good, the provision of a service, or the exchange or sharing of content, a good or a service.

It is interesting to see that the law opts for a platform definition based on its core functions, namely search and matching. The law obliges the online platform operator to offer the consumer faithful, clear and transparent information, especially regarding:

  1. the general terms and conditions of use of the intermediation service, and the methods of listing, ranking and delisting;
  2. the existence of a contractual relationship, a capitalistic link or direct remuneration that influences the listing or ranking.

In other words, what the law imposes is transparency. On 29 September 2017, three decrees were adopted to specify these obligations. One decree elaborated on the information obligations, but the most important decree specified the scope of the new law (Décret n° 2017-1435): it applies to platforms that receive over five million unique visitors per month—an objective criterion that contrasts with earlier proposals centred on more subjective notions such as ‘greatest capacity for hindrance’ or ‘structuring effect on the digital economy’.

At EU level, the Commission is monitoring online platforms closely in the framework of its Digital Single Market (DSM) strategy. After the aforementioned public consultation showed widespread dissatisfaction with the platform-supplier relationship, the Commission noted: ‘Beyond the application of competition policy, the question arises as to whether EU-level action is needed to address fairness of […] relations between platforms and their suppliers’ (European Commission, 2016a, p. 13). It started by carrying out a fact-finding exercise on platform-to-business trading practices.

The Commission presented the results of its fact-finding exercise in the May 2017 mid-term review of its DSM strategy (European Commission, 2017c, pp. 8-9). They indicate that ‘some online platforms are engaging in trading practices which are to the potential detriment of their professional users, such as the removal (‘delisting’) of products or services without due notice or without any effective possibility to contest the platform’s decision’. There is also ‘widespread concern that some platforms may favour their own products or services [or] otherwise discriminate between different suppliers and sellers’. A final key issue is the lack of transparency in ranking or search results.

All of this led the Commission to conclude that ‘platforms have become key gatekeepers of the internet, intermediating access to information, content and online trading.’ Accordingly, it pledged to use its competition enforcement powers wherever relevant, and started exploring regulatory options, which it recently specified in an inception impact assessment on ‘fairness in platform-to-business relations’ (European Commission, 2017d). The options range from ‘EU soft law action to spur industry-led intervention’ to ‘EU legislative instrument providing detailed principles’, but are short on specifics. The move enjoys support from the European Parliament, which has called for a ‘targeted legislative framework for B2B relations based on the principles of preventing abuse of market power and ensuring that platforms that serve as a gateway to a downstream market do not become gatekeepers’ (European Parliament, 2017, pp. 15-16).

4.4. Transparency and non-discrimination: principles we can (dis)agree on

The proposals for platform regulation discussed above brand themselves around ‘neutrality’ and ‘fairness’. While such terms sound lofty, they do not, as already noted, carry much meaning in and of themselves. In the preceding subsection, we therefore took a closer look at how exactly regulatory authorities conceive these principles. This subsection distils from these various conceptions more substantive principles, which can then form the basis of a meaningful discussion of their merits.

Most of the initiatives are concerned with the search function of platforms, having Google as their explicit or implicit main target. Generally, the scope of application of these new regulations is defined as online services that list or rank the content, goods or services of third parties, which would also capture marketplaces (such as Amazon’s). Such a functional definition circumvents the disagreement on a conceptual platform definition, but focusing solely on the listing/ranking function means that not only intermediaries but also undertakings that operate a more traditional distribution model (e.g. Netflix and Spotify) are included.

Another question is whether regulation should target every online platform or only those in a dominant position (or with a significant effect on the economy, however defined). Dominance assessments, especially in dynamic, two-sided markets, may be difficult, but a general application of the obligations would unduly burden smaller operators. A solution is offered by the French law on platform fairness, which defines its scope of application by reference to unique monthly visitors, a measure that is easy to (self-)assess. Other objective measures, such as the number of active (business) users or the platform’s employee count, are also conceivable.

Almost all proposals for regulatory intervention are centred on ‘neutrality’ and ‘fairness’. However, the fact that proposals under different names overlap, while proposals under the same heading differ, shows that these terms are impossible to evaluate as such. This article proposes to shift the debate towards two clearer principles: ‘transparency’ and ‘non-discrimination’. Both serve to establish a more level playing field between platforms and their suppliers, but while transparency merely tempers the benefit a platform can derive from favouring its own services over those of certain suppliers, non-discrimination limits or even eliminates this possibility.

a. Transparency

The primary transparency obligation is making a clear distinction between search results that are generated organically by the search algorithm, the results that are paid for by advertisers, and the results that are favoured because of their connection to the platform. This transparency must be ensured towards consumers. However, when consumers distrust advertised and favoured results, it also has an effect on competition between suppliers and between the platform and suppliers.

A second kind of transparency is geared directly towards suppliers. Suppliers should, in the first place, receive information on the platform’s ranking algorithm. Transparency would also comprise the timely communication by the platform of significant changes in its ranking policy or terms and conditions. Rather than general announcements, this information obligation would have to be carried out specifically towards the supplier that is impacted by the change (e.g. a demotion or delisting). Such transparency may, for example, give developers the opportunity to adapt their apps to the changing terms and conditions governing the platform’s application programming interface (API), rather than being removed from the platform for non-compliance with unnotified changes. If it does come to a delisting, the platform should provide reasons.

Transparency on the internet enjoys broad support, as illustrated by its inclusion in the OECD Principles for Internet Policy Making (2011, p. 8). The European Council has joined the European Commission and Parliament in stressing ‘the necessity of increased transparency in platforms’ practices and uses’ (2017, p. 5). However, many authorities consider that transparency (alone) will not solve the perceived competitive issues and believe that non-discrimination is called for. This is also what happened in the course of the Google Search investigation: Google offered to maintain a high degree of transparency—and more (European Commission, 2013), but the Commission did not accept these commitments and finally ordered Google to comply with the principle of equal treatment.

b. Non-discrimination

Non-discrimination (or its positive equivalent: equal treatment) goes a step further than transparency. It generally implies that an online platform cannot discriminate in favour of its own offering. In the first place, this means that the platform cannot skew the search results in favour of its own services. However, algorithms are meant to favour certain results and demote others in order to present a useful ranking. Mindful of this inherent function of algorithms, authorities have sought to specify the obligation of non-discrimination. The common formula is that platforms cannot alter or distort the ranking ‘for purposes contrary to the interests of its users’. However, a more objective measure, i.e. applying ‘the same underlying processes and methods’ to the ranking of rival services and the platform’s own (Google Search, 2017, para. 700), seems preferable.

Search rankings are only one aspect of many platforms’ operation. When it comes to matching different users, high commission rates or restrictive conditions are the main concern. In this context, regulatory proposals refer to a right of equal access to the platform for suppliers, especially after the platform integrates vertically. With regard to the conditions governing the supply of services through the platform, equal access is easily conceivable. For example, when the app store’s own music streaming app offers a family subscription, the app store must allow other music streaming apps to do so too.

With regard to commission rates, however, equal access is a lot more difficult. When a platform provides services on its own platform, it does not have to pay a commission rate on every transaction. By contrast, the commission rates imposed on suppliers can be substantial, which has led to allegations of anti-competitive conduct. Music streaming apps, for example, have complained that the 30% cut they owe Apple on subscriptions sold through the App Store makes it difficult to compete against Apple’s own streaming service (Singleton, 2015)—conduct that is being investigated by the U.S. Federal Trade Commission (Warren, 2016, pp. 2-3).

Obliging a platform to charge equal commission rates on its own downstream services and on those of competing suppliers is a bad idea: either the commission rate on competing products is scrapped and the platform business model collapses for lack of profits, or the platform charges its own downstream services an equal rate, which has no effect because it constitutes an internal transfer. An alternative would be to determine fair, reasonable and non-discriminatory terms for access to the platform, but regulators would then have to engage in the difficult exercise of price regulation. Moreover, such an obligation would be premised on the debatable idea that certain online platforms are essential to suppliers.

A better idea to prevent commission rates from distorting competition on online platforms is applying the margin squeeze test (Bostoen, 2018). Translating this test to the platform economy would mean asking the following question: could the vertically integrated platform offer its downstream product to end-users profitably if it had to pay its own commission rate? Applied to the example above, the question would be: is Apple’s music subscription model profitable after discounting the 30% cut imposed on competitors? If not, the platform would have to adapt either its commission rate or the price of its downstream product. Note that applying this framework does not even require adopting a new non-discrimination rule, as margin squeeze is already part of current competition law.
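By way of illustration, the margin squeeze question described above boils down to simple arithmetic. The sketch below is hypothetical: the function name and the euro figures are invented for this example and are not drawn from any actual investigation.

```python
def survives_own_commission(retail_price: float,
                            downstream_cost: float,
                            commission_rate: float) -> bool:
    """Margin squeeze test for a vertically integrated platform:
    would its downstream service remain profitable if it had to
    pay the commission rate it imposes on competing suppliers?"""
    imputed_commission = retail_price * commission_rate
    margin = retail_price - downstream_cost - imputed_commission
    return margin >= 0

# Hypothetical figures: a 9.99/month streaming subscription with
# 7.50 of downstream costs (e.g. licensing and servicing) and a
# 30% cut imposed on competing suppliers.
print(survives_own_commission(9.99, 7.50, 0.30))  # → False (9.99 - 7.50 - 3.00 < 0)
```

If the test fails, the platform would, under this framework, have to lower its commission rate or raise the price of its downstream product.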

5. Conclusion

The relationship between online platforms and suppliers is a difficult one. Firstly, suppliers are often dependent on the platform they use to offer their products to consumers. Additionally, platforms increasingly integrate vertically, which means they start competing with their suppliers on the downstream market. This gives them an incentive to exclude competing suppliers—an incentive that is not infrequently acted upon. Incipient empirical research shows that these practices do not only harm competitors but can also harm consumers. In that case, platform-supplier competition must be regulated.

This article sought to bring clarity to the debate on platform regulation. It did so by delving into the various regulatory initiatives, most of which—at least in name—centre around the concepts ‘fairness’ and ‘neutrality’. Closer inspection revealed that the obligations they contain can be better described along the lines of ‘transparency’ and ‘non-discrimination’. The final subsection then evaluated these principles and offered suggestions to operationalise them.

While this article has set out a frame of reference for productive debate on platform regulation, it has not settled the debate. Indeed, the question remains whether ex ante regulation is required to address the identified anti-competitive dynamic. The Google Search investigation, for example, has shown that both transparency and non-discrimination can also be imposed ex post through competition law. The choice of ex ante over ex post regulation should be determined primarily by (i) how widespread and harmful anti-competitive conduct in platform-supplier relations is; and (ii) whether competition law (and related legal branches) can provide adequate redress.

Until enough relevant data—i.e. economic research and decisions by competition authorities—are available to make a reasoned choice, it appears prudent to shy away from an ex ante obligation of non-discrimination for online platforms (similarly CERRE, 2017, pp. 58-59). Less restraint should be shown in imposing non-discrimination duties either after establishing a sui generis abuse (as in Google Search) or by applying the margin squeeze framework. A useful first step would be to impose ex ante transparency obligations—which are less intrusive—in order to test their effectiveness.

Most importantly, it is hoped that the principles of transparency and non-discrimination presented here may serve as a focal point for the inevitable future discussions on platform regulation.

References

Ammori, M. (2016). Failed Analogies: Net Neutrality vs. “Search” and “Platform” Neutrality. In Aitor Ortiz (ed.), Internet Competition and Regulation of Online Platforms (pp. 52-58). Competition Policy International. Available at https://www.competitionpolicyinternational.com/wp-content/uploads/2016/05/internet-competition-libro.pdf

Angwin, J. & Mattu, S. (2016, September 20). Amazon Says It Puts Customers First. But Its Pricing Algorithm Doesn’t. ProPublica. Retrieved from www.propublica.org/article/amazon-says-it-puts-customers-first-but-its-pricing-algorithm-doesnt

Bostoen, F. (2017). Margin Squeeze: Where Competition Law and Sector Regulation Compete. Jura Falconis, 53(1), 3-60. Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2922633

Bostoen, F. (2017). Most Favoured Nation Clauses: Towards an Assessment Framework under EU Competition Law. European Competition and Regulatory Law Review, 1(3), 223-236. doi:10.21552/core/2017/3/9

Bostoen, F. (2018). Online Platforms and Vertical Integration: The Return of Margin Squeeze? Journal of Antitrust Enforcement, 6, forthcoming. Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3075237

Centre on Regulation in Europe. (2017). Internet Platforms and Non-Discrimination. Project Report. Retrieved from http://www.cerre.eu/publications/internet-platforms-non-discrimination

Commission de réflexion et de propositions sur le droit et les libertés à l’âge numérique. (2015). Numérique et libertés: Un nouvel âge démocratique. Report.

Conseil d’Etat. (2014). Le numérique et les droits fondamentaux. Report.

Conseil National du Numérique. (2014). Neutralité des plateformes: Réunir les conditions d’un environnement numérique ouvert et soutenable. Opinion.

Conseil National du Numérique. (2015). Ambition numérique: Pour une politique française et européenne de la transition numérique. Report.

Crook, J. (2016, June 30). Spotify and Apple are staring each other down while flipping the bird. TechCrunch. Retrieved from https://techcrunch.com/2016/06/30/spotify-and-apple-are-staring-each-other-down-while-flipping-the-bird/

Décret n° 2017-1435 du 29 septembre 2017 relatif à la fixation d'un seuil de connexions à partir duquel les opérateurs de plateformes en ligne élaborent et diffusent des bonnes pratiques pour renforcer la loyauté, la clarté et la transparence des informations transmises aux consommateurs. Available at https://www.legifrance.gouv.fr/eli/decret/2017/9/29/ECOC1716648D/jo/texte

Directive 2002/77/EC of the Commission of 16 September 2002 on competition in the markets for electronic communications, networks and services [2002] OJ L249/21. Available at http://data.europa.eu/eli/dir/2002/77/oj

Dredge, S. (2013, October 22). ERA Says Apple’s HMV App Ban Raises “Serious Issues Of Competition”. Musically. Retrieved from http://musically.com/2013/10/22/era-says-apples-hmv-app-ban-raises-serious-issues-of-competition/

Easterbrook, F. (1996). Cyberspace and the Law of the Horse. University of Chicago Legal Forum, 1996(1), 207-216. Available at https://chicagounbound.uchicago.edu/uclf/vol1996/iss1/7

Evans, D. (2017). Why the dynamics of competition for online platforms leads to sleepless nights but not sleepy monopolies. Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3009438 doi:10.2139/ssrn.3009438

European Commission (2009). Guidance on enforcement priorities in applying Article 82 of the EC Treaty to abusive exclusionary conduct by dominant undertakings [2009] OJ C45/7. Available at https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=CELEX:52009XC0224(01)

European Commission (2013, April 25). Commission seeks feedback on commitments offered by Google to address competition concerns. Press release. Available at http://europa.eu/rapid/press-release_IP-13-371_en.htm

European Commission (2016a). Online platforms and the Digital Single Market: opportunities and challenges for Europe. Communication, COM(2016)288. Available at http://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52016DC0288

European Commission (2016b). Public consultation on the regulatory environment for platforms, online intermediaries and the collaborative economy. Synopsis Report. Available at https://ec.europa.eu/digital-single-market/en/news/results-public-consultation-regulatory-environment-platforms-online-intermediaries-data-and

European Commission (2017a, June 27). Commission fines Google €2.42 billion for abusing dominance as search engine by giving illegal advantage to own comparison shopping service. Press release, IP/17/1784. Available at http://europa.eu/rapid/press-release_IP-17-1784_en.htm

European Commission (2017b, June 27). Commission fines Google €2.42 billion for abusing dominance as search engine by giving illegal advantage to own comparison shopping service. Fact sheet, MEMO/17/1785. Available at http://europa.eu/rapid/press-release_MEMO-17-1785_en.htm

European Commission (2017c). Mid-Term Review on the implementation of the Digital Single Market Strategy. Communication, COM(2017) 228. Available at http://ec.europa.eu/newsroom/document.cfm?doc_id=44527

European Commission (2017d). Fairness in platform-to-business relations. Inception impact assessment, Ref. Ares(2017)5222469. Available at https://ec.europa.eu/info/law/better-regulation/initiatives/ares-2017-5222469_en

European Council (2017, October 19). Conclusions meeting, Brussels, EUCO 14/17. Available at http://data.consilium.europa.eu/doc/document/ST-14-2017-INIT/en/pdf

European Parliament (2014). Supporting consumer rights in the digital single market. Resolution, 2014/2973(RSP). Available at http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-//EP//TEXT+TA+P8-TA-2014-0071+0+DOC+XML+V0//EN

European Parliament (2017). Online platforms and the digital single market. Report, A8-0204/2017. Available at http://www.europarl.europa.eu/sides/getDoc.do?type=REPORT&reference=A8-2017-0204&format=XML&language=EN

Ezrachi, A. & Stucke, M. (2016). Virtual competition: the promise and perils of the algorithm-driven economy. Cambridge, Massachusetts: Harvard University Press.

Foundem et al. (2018, February). Letter to Commissioner Vestager re: AT.39740 – Google Search (Comparison Shopping). Retrieved from http://www.foundem.co.uk/Open_Letter_Commissioner_Vestager_Feb_2018.pdf

Franken, A. (2017, November 8). We must not let big tech threaten our security, freedoms and democracy. The Guardian. Retrieved from https://www.theguardian.com/commentisfree/2017/nov/08/big-tech-security-freedoms-democracy-al-franken

German Monopolies Commission (2015). Competition policy: The challenge of digital markets. Special Report, No 68. Available at http://www.monopolkommission.de/images/PDF/SG/s68_fulltext_eng.pdf

Google Search (Shopping) (2017). Case AT.39740. Decision of the European Commission of 27 June 2017. Available at http://ec.europa.eu/competition/antitrust/cases/dec_docs/39740/39740_14996_3.pdf

Google India (2017). Case Nos. 07 and 30 of 2012. Decision of the Competition Commission of India of 8 February 2018. Available at http://www.cci.gov.in/sites/default/files/07%20%26%20%2030%20of%202012.pdf

Heckmann, O. (2017, September 27). Changes to Google Shopping in Europe. Google Inside AdWords blog. Retrieved from https://adwords.googleblog.com/2017/09/changes-to-google-shopping-in-europe.html

Joint letter (2016, April 4). from the governments of eleven member states to Commission Vice-President Andrus Ansip. Retrieved from https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/513402/platforms-letter.pdf

Kadar, M. (2015). European Union competition law in the digital era. Zeitschrift für Wettbewerbsrecht, 13(4), 342-363. doi:10.15375/zwer-2015-0403 Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2703062

Khan, L. (2017). Amazon’s Antitrust Paradox. Yale Law Journal, 126(3), 710-805. Available at https://www.yalelawjournal.org/note/amazons-antitrust-paradox

Lamadrid, A. (2015, November 24). Regulating platforms? A competition law perspective. Chillin’ Competition. Retrieved from https://chillingcompetition.com/2015/11/24/regulating-platforms-a-competition-law-perspective/

Lobel, O. (2016). The Law of the Platform. Minnesota Law Review, 101, 87-166. Available at http://www.minnesotalawreview.org/wp-content/uploads/2016/11/Lobel.pdf

Loi n° 2016-1321 du 7 octobre 2016 pour une République numérique.

Luca, M. et al. (2016). Does Google Content Degrade Google Search? Experimental Evidence. Harvard Business School Working Paper, 16-035, 44p. Retrieved from http://people.hbs.edu/mluca/SearchDegradation.pdf

Martens, B. (2016). An Economic Policy Perspective on Online Platforms (Digital Economy Working Paper No. 2016/05). Joint Research Center of the European Commission, Institute for Prospective Technological Studies. Retrieved from https://ec.europa.eu/jrc/sites/jrcsh/files/JRC101501.pdf

Maxwell, W. & Pénard, T. (2015). Regulating digital platforms in Europe – a white paper. Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2584873

OECD (2011). Principles for Internet Policy Making. Council Recommendation.

Open Internet Order (2015). Federal Communications Commission, Report and Order on Remand, Declaratory Ruling, and Order in the Matter of Protecting and Promoting the Open Internet, adopted 26 February 2015.

French Senate, Projet de loi Croissance, activité et égalité des chances économiques, Article additionnel après article 33 nonies (supprimé). Retrieved from www.senat.fr/amendements/2014-2015/371/Amdt_995.html

Rato, M. & Petit, N. (2014). Abuse of Dominance in Technology-Enabled Markets: Established Standards Reconsidered. European Competition Journal, 9(1), 1-65. doi:10.5235/17441056.9.1.1

Regulation (EU) 2015/2120 of the European Parliament and of the Council of 25 November 2015 laying down measures concerning open internet access [2015] OJ L310/1. Available at https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32015R2120

Renda, A. (2015). Antitrust, Regulation, and the Neutrality Trap: A plea for a smart, evidence-based internet policy (Special Report No. 104). Brussels: Centre for European Policy Studies. Retrieved from https://www.ceps.eu/system/files/SR104_AR_NetNeutrality.pdf

Rochet, J.-C. & Tirole, J. (2003). Platform competition in two-sided markets. Journal of the European Economic Association, 1(4), 990-1029. doi:10.1162/154247603322493212

Shelanski, H. (2013). Information, innovation, and competition policy for the internet. University of Pennsylvania Law Review, 161(6), 1663-1705. Available at http://www.jstor.org/stable/23527815

Strowel, A. & Vergote, W. (2016). Digital Platforms: To Regulate or Not to Regulate? Retrieved from http://ec.europa.eu/information_society/newsroom/image/document/2016-7/uclouvain_et_universit_saint_louis_14044.pdf

Warren, E. (2016, June 29). Reigniting Competition in the American Economy. Keynote speech presented at the New America’s Open Markets Program Event, Washington, D.C. Retrieved from https://www.warren.senate.gov/newsroom/press-releases/senator-elizabeth-warren-delivers-remarks-on-reigniting-competition-in-the-american-economy

Wen, W., & Zhu, F. (2017). Threat of Platform-Owner Entry and Complementor Responses: Evidence from the Mobile App Market (Working Paper No. 18–036). Harvard Business School. Retrieved from https://www.hbs.edu/faculty/Pages/item.aspx?num=53422

Wu, T. (2003). Network Neutrality, Broadband Discrimination. Journal on Telecommunications and High Technology Law, 2, 141-179.

Zhu, F., & Liu, Q. (2016). Competing with Complementors: An Empirical Look at Amazon.com (Working Paper No. 15–044). Harvard Business School. Retrieved from https://www.hbs.edu/faculty/Pages/item.aspx?num=48334

Footnotes

1. The 30% fee has long been the industry standard (although the Google Play Store allows for certain exceptions). It applies to every in-app media purchase, but not to the purchase of services provided ‘outside’ of the app, e.g. an Uber ride or an Airbnb stay. Apple recently lowered its cut to 15% for long-term in-app subscriptions (i.e. when the customer has been subscribed for over a year), and Google quickly followed suit.

Standard form contracts and a smart contract future


Introduction

... consumers will lose their right to meaningfully participate in the formation and incorporation of meaningful provisions in consumer contracts. Over time, commercial institutions will gain complete control over this, and will, by implication, invert the value of contract over goods and services

Legal scholar Ronald C. Griffin, 1978

New institutions, and new ways to formalize the relationships that make up these institutions, are now made possible by the digital revolution. I call these new contracts ‘smart’, because they are far more functional than their inanimate paper-based ancestors.

Nick Szabo, 1996 (author of smart contract concept)

A common feature of commercial relationships, standard form contracts (SFCs) have been a product of organised trade for many centuries in some fields, such as marine shipping and banking, and, more recently, in others such as mass production industries, as a form of service contract between companies or for consumers (Sales, 1953; Burke, 2000). These documents, also referred to as ‘contracts of adhesion’, generally make use of regularised or commonly used clauses that are written by one party with the expectation of acceptance by the other, often without the latter actually reading the terms (Kessler, 1943). This reluctance to read persists, and is perhaps even more problematic, in a digital environment, where interface design choices such as discrete hyperlinks only emphasise and promote the tendency (Obar and Oeldorf-Hirsch, 2016).

One of the defining features of SFCs is that they are assumed to be exempt from negotiation, given their fixed clauses and facilitation of routine transactions (D’Agostino, 2015). Around the mid-twentieth century, the United States’ maturing capitalistic, free enterprise system prompted the widespread adoption of SFCs as a vital part of a “highly elastic legal institution” meant to protect a market for the trade of goods and services; SFCs became a tool, then, of “almost unlimited usefulness and pliability” for a diverse range of transaction types (Kessler, 1943). SFCs have many positives for industry: they encourage trade by increasing transactional efficiency and, as they are presented on a ‘take-it-or-leave-it’ basis, significantly decrease transaction costs (Hillman and Rachlinski, 2002; Patterson, 2010). Thus SFCs account for the vast majority of contracts and are an important part of the world’s economic landscape. It is estimated that, consistently over the past several decades, more than 99% of all contracts used in commercial and consumer transactions have been SFCs (Griffin, 1978; Patterson, 2010).

Despite its use in highly efficient regularised systems, contract law generally has no complete descriptive or normative theory; instead, it is generally viewed as a remedial ‘institution’ whose function is to adjudicate any issues that arise between two individuals or entities after transactional activity (Griffin, 1978). Foundationally it rests on an idea called ‘freedom of contract’, which promotes the ability of individuals to transact without the interference of oversight systems such as government institutions (Kessler, 1943). Broadly, then, contract law sees its goal as one of enabling the “power of self governing parties to further their shared objectives through contracting” (Eisenberg, 1994). Contract law is seen as embodying this tension between freedom of contract and the ability or need for sovereign or third party entities to intervene in their governance (Lonegrass, 2012). Politically, at their extreme, ‘contract systems’ are in opposition to ‘status systems’: “A ‘status system’ establishes obligations and relationships by birth, whereas a ‘contract system’ presumes that the individuals are free and equal” (D’Agostino, 2015). Libertarianism, for instance, views “freedom of contract as the expression of a ‘minimal state,’ in which people pursue their interests by themselves only.” SFCs are the fullest embodiment of this expression, in one sense, with “the ceremony necessary to vouch for the deliberate nature of a transaction” effectively “reduced to the absolute minimum” to oblige the business community (Kessler, 1943). Ultimately, then, the hierarchy of interests in business and industry most dominantly controls the nature of these transactions, and businesses are the entities with this freedom, not consumers. This is especially true in many cases of SFCs, where there is an imbalance of power and asymmetric information between the parties (Mulcahy, 2008).

Looking ahead, newer implementations of digitised contracts such as smart contracts made with blockchain technology have yet to be considered within the purview of these previous issues with SFCs. With smart contracts, the goal of many blockchain technology supporters is to replace the need for centralised governance and third party institutions with “immutable, unstoppable, and irrefutable computer code” that instantiates tamper-proof records, which are said to be able to ‘self-enforce’ (Szabo, 1996; Wall, 2016). Ideally, this could be a realisation of the “freedom of contract” ideal, with individuals being able to transact without intervention by a third party institution, whether for facilitation or enforcement. Recently, however, many of the newer implementations integrate into these third party services in the form of private blockchains that provide more secure systems for companies, if not yet something revolutionary. One currently successful and potentially disruptive public blockchain that supports smart contract technology is the Ethereum blockchain1. So far smart contracts have been primarily used for simple transactions and verification purposes (e.g., basic financial wallets, notarisation, lotteries and games), but projects with a wide variety of functions, including futures, securities, insurance, Internet of Things service contracts, supply chain contracts, and mortgage and property transfer, are already in experimental or developmental stages (Bartoletti and Pompianu, 2017).
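The ‘self-enforcing’ idea can be illustrated with a deliberately simplified sketch in ordinary Python - not a real blockchain contract, and the class, names and amounts below are invented for illustration. The point is that once the condition coded into the agreement is satisfied, the transfer executes with no third party deciding anything:

```python
# Illustrative sketch only: a toy "self-enforcing" agreement in plain Python.
# Real smart contracts run as bytecode on a blockchain virtual machine;
# everything named here (ToyEscrow, the addresses, the amounts) is invented.

class ToyEscrow:
    """Holds a buyer's payment and releases it automatically once a
    delivery condition is confirmed, with no third-party enforcer."""

    def __init__(self, buyer_deposit, seller):
        self.balance = buyer_deposit   # funds locked in by the contract
        self.seller = seller
        self.delivered = False

    def confirm_delivery(self):
        # In a real deployment this signal would come from an oracle or a
        # signed message, not from a trusted intermediary.
        self.delivered = True

    def settle(self):
        """Release funds to the seller only if the coded condition holds."""
        if self.delivered:
            payout, self.balance = self.balance, 0
            return (self.seller, payout)
        return (None, 0)   # condition unmet: nothing moves

escrow = ToyEscrow(buyer_deposit=100, seller="seller-address")
print(escrow.settle())        # condition unmet, funds stay locked: (None, 0)
escrow.confirm_delivery()
print(escrow.settle())        # condition met, funds released: ('seller-address', 100)
```

Note that the enforcement here is only as good as the code and its inputs: a bug in `settle`, or a false delivery signal, executes just as irrevocably as a correct one, which is the core of the governance questions discussed below.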

Smart contracts’ use of a distributed ledger technology (DLT) such as a blockchain opens the door for these many types of complex transactions among companies and individuals. DLT makes use of a set of cryptographically linked transactional documents that are publicly copied onto each node of a decentralised (peer-to-peer) network, reducing the points of vulnerability so that no one centralised point exists. Theoretically, you would need to hack into the code of every computer on the network in order to disrupt the ledger (Frisby, 2016). Certain communities of supporters are idealistically promoting it as revolutionary, even as “the most consequential technology since the internet” (Varadarajan, 2017).
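What “cryptographically linked” means can be sketched with a minimal hash chain: each entry commits to the hash of its predecessor, so altering any past record breaks every later link. This is a simplification under stated assumptions - real DLT also involves consensus across many nodes, and the function and entry names below are invented:

```python
# Minimal hash-chain sketch of a DLT-style ledger (not a full blockchain:
# no consensus, no signatures; names and records are invented).
import hashlib
import json

def entry_hash(entry):
    """Deterministic SHA-256 digest of a ledger entry."""
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def append(ledger, record):
    """Link a new record to the chain via the previous entry's hash."""
    prev = entry_hash(ledger[-1]) if ledger else "0" * 64
    ledger.append({"record": record, "prev_hash": prev})

def verify(ledger):
    """Recompute every link; any tampered entry invalidates the chain."""
    for i in range(1, len(ledger)):
        if ledger[i]["prev_hash"] != entry_hash(ledger[i - 1]):
            return False
    return True

ledger = []
append(ledger, "Alice pays Bob 10")
append(ledger, "Bob pays Carol 4")
print(verify(ledger))                          # True: chain intact
ledger[0]["record"] = "Alice pays Bob 1000"    # tamper with history
print(verify(ledger))                          # False: later links no longer match
```

The sketch shows why tampering is *detectable*; the claim that it is practically *impossible* rests on the additional fact that each node holds its own copy and the network rejects chains that fail verification.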

The decentralisation of this new technology seems at first to fulfill Manuel Castells’ (1996, 2007) “network society”, reimagining several social interactions and institutions as networks that depart from being organised around centres and hierarchies. While the decentralised DLT concept seems to align with this ideal, its actual implementation might mirror previous hierarchies of industry-based systems as it is being integrated into them. In some ways, DLT aligns more with another sociological theory, boyd’s (2008) concept of “networked publics”, in which interaction in a networked space has more nuanced characteristics: it is persistent (or recorded), searchable, and has an undefined boundary for its audience, each of these with benefits and sacrifices for users. Since DLT leaves a copy on each computer node and is said to be an ‘unalterable’ record of a public ledger, it might be argued that previous contracts were more private and transient than these newer instantiations made with DLT. On the other hand, the cryptography used in smart contracts allows for a more anonymous yet secure record that, while providing more accountability, also provides the traceability that has been associated with consumer rights abuses (De Filippi and Wright, 2018). Additionally, the boundaries of a contract’s audience depend most importantly on the institutions that interpret and enforce it, not just on who sees it or reads it. If the DLT behind a blockchain implementation allows for a legitimacy that somehow supersedes this institutional interpretation, then this audience could shift beneficially to the parties who write them or negatively to solely the machines that read, interpret, or store them. This spectrum could be useful in understanding some of the social effects of these future technologies, how they live up to ideals (or fail to do so), and what is at stake in their widespread use and regulation.

Significant effort has been exerted toward disassociating smart contracts from contracts in general (Werbach and Cornell, 2017); however, the efforts to sidestep the third party enforcement that would legitimate them as contracts also bring a broader de-legitimacy that would preclude them from many uses. If they are meant to replicate simple, automated transactions (i.e., Szabo’s analogy to a vending machine), then perhaps they do not need to make use of previous theories of contracts. In popular discourse, however, some are predicting that smart contract technology will bring an exchange of more complex SFCs where the “role of lawyers might shift to producing smart contract templates on a competitive market, [and] contract selling points would be their quality, how customisable they are, and their ease of use” (Cassano, 2014). On the contrary, De Filippi and Wright (2018) claim:

Just as we moved from an earlier era of expensive, highly tailored clothing toward mass-produced garments with limited personalization, with the growing adaptation of blockchain technology and other contract automation tools, we may witness a shift from expensive and bespoke contracts to low-cost and highly standardised legal agreements with limited avenues for customization.

Regardless, current discussions of smart contracts include uses such as triggering service payments over time, facilitating car or home rentals, controlling Internet of Things products, and defining labour terms, each of which seems to promote the idea that these ‘automated transactions’ will indeed want to be legitimated as contracts, perhaps just functioning by a different technical mechanism through automation and memorialisation on the blockchain. Not all smart contracts will be SFCs nor will all SFCs become automated and this essay does not attempt to solve this particular ontological issue; rather, it entertains the idea that there is a strong possibility of correlation between future smart contract implementations and SFCs and therefore the issues that are present in this previously unresolved discourse are applicable to future discussions of smart contract regulation.

Thus this essay is a critical review of the main aspects of the discourse surrounding SFCs as they exist in a digital environment, including some of the basic features of the legal reasoning, statutes, and ideologies that justify their use, with an eye toward a future application to smart contract technology. While it mainly covers the American legal landscape, and while contract law varies from state to state and from district to district, the contract theories presented are widely applicable and did not originate in the United States. Differences between civil law and common law systems will be noted where appropriate, but this essay does not claim to provide a complete, comprehensive review2.

Some of the specific features of smart contracts such as their automation also contribute to this discussion. In looking for “secure” contracts (i.e., without bugs in the code), users may find themselves already utilising standardised smart contract terms (or algorithms) frequently. For the smart contracts that utilise Ethereum’s open source protocol and blockchain, certain code-based limitations have a standardising effect on their uses and even general form, for instance. As regularisation is seen as a positive for SFCs, many of these efforts might be interpreted as beneficial (i.e., the more standard a contract is, the more users can presume to know what is in it and a reluctance to read can be justified); however, precaution needs to be taken that the industry-serving principles and policies that currently serve SFCs at the expense of consumers are not exacerbated with the smart contracts that might take their place.

Background: legal theory of standard form contracts (SFCs)

Modern SFCs take many forms, including those that facilitate multi-step supply chain transactions between businesses and others that lay out terms and conditions between companies and customers for a multitude of services. Legal discourse often distinguishes between two common forms of contracts: business-to-business (B2B) contracts and business-to-consumer (B2C) contracts, with the distinguishing factor being the amount of knowledge each of the parties possesses. B2B contracts are generally between two companies that are considered to be sophisticated parties, or parties who have professional knowledge that increases their understanding of the substantive content of the contract and who can negotiate or participate in the creation of the contract. B2B contracts are presumed to have been read. B2C contracts, on the other hand, are between individuals and companies, where one party is a naïve reader, or a reader who is not presumed to understand the content, and the other is sophisticated. This essay is concerned chiefly with B2C contracts, as these are the primary focus of concern when it comes to violating consumer rights (Mulcahy, 2008; Lonegrass, 2012).

There are various ways to address the inequities of B2C standard form contracts - regulating the content of the contracts themselves, providing remedies in cases of unconscionability, and dictates of mandatory disclosure or more explicit assent (Ben-Shahar and Schneider, 2014). One aspect that spans all of these solutions and should be examined carefully is how previous digital SFCs were interpreted legally as textual entities. This means looking at how a user or reader might encounter a contract as part of a digital interface and how notions of genuine effort signal to courts a ‘reasonable communicativeness’ in these spaces. This might also mean looking at interpretations of procedural unconscionability with such factors as awareness, agreement, presentation, and meaningful choice. All of these aspects must not only be thought through for digital contracts, but also now for automated contracts with the invention of smart contract technology.

A body of legal scholarship exists in both civil and common law systems to address these issues. In the US for several decades, three devices have been used to rectify the legal shortcomings of SFCs: the Restatement (Second) of Contracts (Section 211), the Uniform Commercial Code (UCC) (Article II), and measures of unconscionability (Griffin, 1978; Moringiello and Reynolds, 2013). The first two remedies outline the basic features of contract formation and sales contracts through persuasive, authoritative legal scholarship (scholarship that is cited in many legal briefs and case law) (i.e., the Restatement) or binding codes that require compliance (e.g., the UCC). The third device, a determination of unconscionability, is less straightforward and relies on a study of two aspects of the contract that contribute to its complexity: the procedural component and the substantive component (Schwartz and Scott, 2003; Mann and Siebeneicher, 2008; D’Agostino, 2015). Unconscionability, in general, measures whether or not a contract is ‘in good conscience’3. While the substantive component of an unconscionability determination considers the content of the clauses, the procedural component looks at how clauses have been included in the contract and “cannot be determined by merely examining the face of a contract”. Instead, it must be considered in terms of “the circumstances under which the contract was executed, its purpose, and effects” (D’Agostino, 2015). This often includes the use of boilerplate language and format, the consumer’s awareness or ignorance of the existence of some clauses in the contract (usually called ‘unfair surprise’), and more generally the adhesive nature of the contract itself, which relies on processes of display, awareness, and agreement. Procedural and substantive unconscionability exist on a sliding scale, where both must be shown for a case to have merit but one can be more prominent than the other (Lonegrass, 2012).

Civil law systems tend to treat the issue with sovereignist approaches, such as the EU’s Unfair Terms in Consumer Contracts Directive of 1993, implemented in the UK by regulations in 1994 and 1999 that were later superseded by the Consumer Rights Act of 2015. These instruments include a non-exhaustive list of terms that courts will likely consider unfair in cases of ambiguity. A few of these terms address some of the procedural issues, such as requirements that terms be in “plain intelligible language” and that drafters “provide copies of standard contracts” as well as “information about their use”, and a prohibition on a contract being amended or modified unilaterally without sufficient reason. Most specifically, under the directive, terms in contracts cannot be “irrevocably binding [to] the consumer” if the consumer “had no real opportunity of becoming acquainted [with them] before the conclusion of the contract”. These rules are worthwhile, but still ambiguous in a digital setting where these contracts are ubiquitous. In the US more recently, similar standards were enacted, such as the Consumer Financial Protection Bureau’s 2017 regulation that prohibits the use of mandatory arbitration clauses in financial service contracts, as these tend to prevent class action lawsuits by consumers. These efforts might be viewed as attempts to solve some of the issues that are uniquely present for SFCs in a digital environment and for procedural unconscionability: the awareness, agreement, and understanding that all contribute to a ‘meeting of the minds’ amongst contractual parties.

According to traditional contract law, the steps of “offer”, “acceptance”, and “consideration” are required as the fundamental criteria for a contract to be deemed received and accepted.4 Ideally, this would manifest as a transaction that is a “meeting of the minds”, which includes user awareness (offer) and an understanding (acceptance) of the relinquishing of something of value (consideration) (Yovel, 2000; Moringiello and Reynolds, 2007). In terms of procedure, other contextual factors come into these discussions as well, including the mental capacity and competency of the parties to a contract (e.g., sophisticated or naïve readers) and the contract’s “legal form”, which takes into account that some contracts “have a specific form or [are] drawn in a certain way” (D’Agostino, 2015). This last factor is especially important to standardised contracts that rely on simplified procedural processes to retain their validity. In theory, these discussions of the form and conscionability of SFCs would result in explicitly allowing both parties the opportunity to study the agreement, or at least to accept or decline its terms with an understanding of its implications. However, since the primary value attached to SFCs is their role in economic efficiency, substantive changes are infrequent and only happen with large class action litigation.
Instead, more commonly, owing to forum selection clauses, which dictate the terms of the legal setting for remedial action, mandatory arbitration prompts a “private conversation” between drafters and courts (Horton, 2009), and most of the law in this area focuses on procedural issues.5 Thus while the electronic procedures of SFCs have served to promote the further efficiency of industry through their digital state (Hillman and Rachlinski, 2002; Moringiello and Reynolds, 2007), it is possible that the affordances provided procedurally sacrifice consumer rights and increase the tendency toward unconscionability that was already suspected by some legal scholars prior to their widespread digitisation (Hillman, 2006).

Some of the topics at stake in this debate are accessibility and notification requirements, or regulations that force drafters (i.e., those writing the contract) to make a copy accessible to the adherent (i.e., the other party, the non-writer of the contract) and to provide notice of its existence and of any modifications (Preston and McCann, 2011). Laws such as the Uniform Electronic Transactions Act (UETA) that were intended to streamline the process of digital transactions, however, subverted this debate and exacerbated some of the issues raised in arguments for the unconscionability of SFCs. Put simply, with the UETA, “there is effectively no legal impetus for any company to retain evidence of these contracts and signatures,” whereas previously, companies were required to provide and retain copies of their contracts with their customers (Randolph, 2001). It has been argued that with this statute, consumer rights were sacrificed for the greater good of the economy. This could be seen as promoting the notion that, since SFCs are associated with an inequality of bargaining power, it is much more likely that they will be “used as instruments of economic oppression because their terms can more easily be weighted in [favour] of the interests of the stronger parties who prepare them” (Mulcahy, 2008).

Digital contracts, visualisation, and electronic agreements

In a digital environment, standard form contracts have taken on several forms for both B2B and B2C transactions. A spectrum exists in regard to how contracts are rendered digitally and how they function in a networked environment. For instance, it is now popular for some B2C contracts to make use of visualisation software that produces rent agreements or car lease agreements6. Even where one party is sophisticated and the other naïve, visualisation software applications in these situations have proven to increase comprehension among naïve readers (Barton et al., 2013; Passera et al., 2016). One study (Barton et al., 2013) performed on this topic found that “visualizations could provide a personal touch to an otherwise sterile-looking contract document, and diminish the ‘otherness’ of legal terms,” and further, visual aids “decreased the ambiguity of information, so that it would be easier to understand alternatives [and] converge on a shared interpretation.” In these cases, the sophisticated party who is drafting the contract is speaking to the naïve party with the intention of communicating at least some of the information in the contract.

With other types of agreements, however, such as terms of service (ToS) agreements used by most website platforms to lay out the terms of acceptable use, data collection, ownership, and privacy policies, the intention is to de-emphasise the presence of the contract. ToS agreements are generally found as a hyperlink in the footer of the page or as a step to which consumers must agree during a registration process and typically provide only the necessary information for courts to interpret their effort of communication. Nancy Kim (2013) argues that unconscionability is a toothless mechanism to address the issues with these agreements, stating that procedural unconscionability is only viewed by courts as a threshold requirement - as long as “there was notice and an opportunity to read the terms”- and not as a way to further communication efforts for actual users/consumers. Furthering the issues, Kim argues, when courts do consider the substance of these contracts, they tend to rely too heavily on industry norms and thus on the agendas of those with more power and information.

Although it is only an example of one genre of SFC, how legal statutes have been applied to ToS agreements is foundational in determining how future SFCs, like smart contracts, might be interpreted, since ToS agreements are some of the most egregious offenders of consumer rights (Venturini et al., 2016). As it is within the digital environment that ToS agreements have retreated from paper evidence of transaction terms to hidden hyperlinks in the margins, allowing in some cases for agreement to take the form of browsing, notions of awareness need to be re-thought. And as the text behind the hyperlink can change at any time (citing the oft-used “unilateral modification” clause), often without notice, this environment calls into question what awareness actually means, if anything (Moringiello and Reynolds, 2007). Preston and McCann (2011) acknowledge the new “truly unruly ToS”, referring to this new genre of contract as “a beast untied from the contexts in which form contracts gained (limited) legitimacy” and akin to judicial opinion adopting a “wild horse while forgetting that such beasts were only originally allowed into civilized communities because they were in a corral.” Put another way, the inherent physical restrictions on paper contracts that accommodated some of their inherent inequalities, such as the requirement to retain a copy of the agreement and provide it to the adherent, have been removed in the digital environment (Randolph, 2001). One set of influential laws that contributed to the current situation is the UETA and E-Sign laws, which were enacted in the early 2000s under the Clinton administration to regularise interstate commerce practices and validate electronic signatures. These laws were vital to legalising digital SFCs, including ToS agreements, in their current form.

The UETA, which stemmed from a large legal undertaking meant to streamline the recordkeeping practices of business transactions across state lines, provided affordances to digital contracts, such as ToS agreements, and contributed to the codification of their inequities. By allowing commercial interests not to keep paper copies of their electronic documents as evidence of transactions, the UETA effectively gave legally binding status to electronic documents and signatures without requiring a paper component (Section 7 (c)). Relevant sections of the UETA include Section 7, which comprises four parts that “summarily give(s) legal recognition to electronic signatures, records and contracts,” including the determinations that “a record or signature may not be denied legal effect or enforceability solely because it is in electronic form” and “a contract may not be denied legal effect or enforceability solely because an electronic record was used in its formation” (Section 7 (a) and (b)). These criteria vastly expanded the idea of a contract to allow digital records to exist as valid legal documents, allowing for looser rules in regard to archiving the various copies of a document than were previously imposed on businesses for each transaction.

The E-Sign laws made a few departures from the UETA, and broadened the notions of agreement and awareness even further. The E-Sign laws stated that: “The mere fact of use, or of behavior consistent with acceptance, by a party should be sufficient to evidence that party's willingness and to make applicable E-SIGN's base rule.” For ToS agreements, this was interpreted to mean that just engaging in the digital space could be an affirmation of agreement (Wittie and Winn, 2001). With so-called ‘browsewrap’ agreements7 (as opposed to ‘clickwrap’ agreements), users do not need to explicitly agree to these hyperlinks in the margins to be held liable for their contents. Simply using a website is enough to create contractual obligation. While over time clickwrap agreements have more often proven to be valid in legal opinion, provisions like the UETA and E-Sign laws ultimately allowed the unwieldy and incomprehensible legalese of these contracts to hold power over regular users’ personal data and activities, even though many consumers are not aware of their contents or that they even exist.

Research on ToS agreements has been conducted primarily in the legal sphere and within legal discourse. Many scholars have claimed that these agreements are unconscionable in general or in part, or have claimed that they are not really contracts at all. While notions of tacit agreement have been used in several cases to uphold ToS agreements, such as in the case of browsewrap agreements, others have argued (Radin, 2013; Ben-Shahar, 2014) that these contracts, as one-sided, boilerplate text hidden in inconspicuous hyperlinks, are one of the many variations of contemporary contracts that do not actually fulfill the ‘contract’ criteria, and therefore should not be subject to legal contract theory. According to legal scholar Margaret Radin (2013), besides being completely non-negotiable, issues of consent and user awareness have become too complicated to allow these documents to be defined as being between two parties that: 1) are aware of the agreement to which they are promising to adhere (including a general sense of its terms), and 2) are aware of the exchange taking place as laid out by the terms of the agreement. She states: “‘Agreement’ has become a talismanic word merely indicating that the [drafter] deploying the boilerplate wants the recipient to be bound.” In other words, ‘agreement’ to and comprehension of the nature of the ToS contract are central to its definition and enforceability as a contract, yet since this document has taken on a digital form, these aspects of it have been obscured, and therefore it cannot be considered to possess the necessary characteristics to be deemed valid. Omri Ben-Shahar (2014), contract law professor at the University of Chicago Law School, best represents this opinion:

Because boilerplates do not represent informed consent, because they are divorced from our intuitive understanding of agreement, and because they divest people of their democratically enacted entitlements, they degrade the institution of contract that is justified by its respect for individual autonomy and private control. Therefore, boilerplates should be powerless to govern people's rights.

While Radin and Ben-Shahar represent the most extreme opinion on the topic, analysis is needed to produce a more nuanced portrayal of how a combination of developing perceptions of users and a dominant ideology that favours consumerism contributed to the policies and legal precedent that preceded the current form of digitised SFCs. These documents are often not viewed as contracts by the people most affected by their contract status, with the consequent effects on user rights being masked by their placement within an information system such as an interface or registration process.

Ironically, often legal judgments on SFCs rely on reasoning that compares them to previous physical or paper versions. Judge Kimba Wood, for instance, in a ‘clickwrap’ case (Bar-Ayal v. Time Warner, 2006) noted that even though a user had to scroll through thirty ‘screens’ of the ToS agreement to find the clause at issue, it was still upheld as legal due to Wood’s argument that “it is not significantly more arduous to scroll down to read an agreement on a computer screen than to turn the pages of a printed agreement” (Moringiello and Reynolds, 2007). Printed, the agreement would have been eight pages, which leaves open the question: is it easier or different to scroll through thirty screens than to flip through eight pages? How does a pop-up window in a sign up process on a website change the process of reading or comprehension by providing different types of situational frames or markers of authenticity for users? Rather than relying on these markers, the circumstances and literacies of readers/adherents should be part of the measurements of efforts of communication.

In a study of the user agreement that came with his new iPhone, for instance, type and print professor Brian Lawler (California Polytechnic State University) analysed the documentation and formatting practices of the agreement that signify the authentic appearance of a contract. He notes how the 32-page pamphlet had margins of only about one-eighth of an inch [about 3 mm - Ed.], which causes the page to read “like a big gray mass […] with hardly any whitespace” (Sullivan, 2012). With the characters' height at only 4.5 points, “a smidge taller than the thickness of a single dime," Lawler states that we are dealing with some “seriously small” font as well as “painfully tight” spacing between the lines of the text at only “just past the minimum legible standard before the descenders (the bottoms of the j's and p's, for instance) in one line of text start to overlap with the ascenders (the tops of the h's and f's) in the next line.” And none of this is by accident - Lawler explains how “the world's best typesetters work on these documents, and most fine-print producers review the whole design with legal teams”. Because the ‘freedom of contract’ principle does not prescribe any specific contract format, the “legal form” required of SFCs is determined by courts as gestures of “genuine effort” by drafters that simply recycle conventions of SFCs that are ineffective, even having the opposite effect of communicativeness (Sullivan, 2012). One way to address these issues is through mandatory disclosure efforts, yet these have also widely been found to fail to reach consumers (Ben-Shahar and Schneider, 2014). For instance, clickwrap agreements, which require a specific button to be clicked, were found to increase reading by only 0.36% over browsewrap (Marotta-Wurgler qtd. in Schwartz, 2015).
Rather than disclose the content of an unreadable agreement or have consumers agree blindly, perhaps the solution is in standardising and regulating the procedural and substantive elements of these agreements in ways that are effective and not merely convention.

Increasingly, the stakes of these decisions are high. While privacy and data collection are the main concerns associated with ToS, there are few studies on the direct effects of the digitisation of SFC agreements. One major study, undertaken in 2016 by a partnership between the Dynamic Coalition on Platform Responsibility (DCPR) and the United Nations' Internet Governance Forum, found that these agreements significantly affect human rights in the areas of freedom of speech, privacy, and due process, particularly for marginalised and low-income communities (Venturini et al., 2016). Another study confirmed what was already widely assumed - that ToS agreements are not read by the majority of users. Obar and Oeldorf-Hirsch (2016) tested 543 participants to see if they read and understood the ToS of a fictional website and found that the “vast majority of participants completely missed a variety of potentially dangerous and life-changing clauses”. While this unfamiliarity of the majority of consumers might pass under an SFC legal concept known as the “informed minority” hypothesis, which claims “regulation is effective if it at least increases the proportion of informed consumers to a critical mass able to influence sellers’ decisions” (D’Agostino, 2015), the exact proportion needed to make a difference is difficult to determine. Though few, these studies, viewed together, signal a dangerous intersection: an SFC process that hides its existence, and the consequent implications of this inconspicuousness for vulnerable communities and the wider public.

Conclusion: applications to smart contracts

Smart contracts represent the synthesis of two lines of technological development - electronic contracting and cryptography - and yet this fusion is complicated: “Viewed in one way, smart contracts represent merely the latest step in the evolution of electronic agreements. From another perspective, smart contracts’ use of blockchain technology distinguishes them from any antecedents” (Werbach and Cornell, 2017). In other words, smart contracts are, in some sense, merely an extension of the electronic data interchange (EDI) formats8 used in many B2B and B2C contracts (Szabo, 1996; Werbach and Cornell, 2017). Yet, under new laws, such as Nevada Senate Bill No. 398, which drew on statutes like the UETA, they are being defined as legitimate, binding, self-contained documents: the bill defines a smart contract as “an electronic record created by the use of a decentralised method by multiple parties to verify and store a digital record of transactions which is secured by the use of a cryptographic hash of previous transaction information.” The Nevada bill also allots agreement mechanisms for this technology more liberally. It states: “A smart contract, record or signature may not be denied legal effect or enforceability solely because a blockchain was used to create, store or verify the smart contract, record or signature.” While the intention behind this decision could help streamline smart contract transactions for industry in the same way it did for other types of transactions, the implications may be similar to how the definition of agreement changed radically with ToS and browsewrap interpretations. For instance, commitment to the blockchain, which theoretically only requires action by the drafter, can now stand in for agreement9.
It already seems that smart contracts, although initially disassociated from their contract predecessors, are being afforded the same power as legal contracts (De Filippi and Wright, 2018) and should be monitored as such so that the same types of inequities are not codified into this new technology.
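The statutory language above - a record “secured by the use of a cryptographic hash of previous transaction information” - describes a hash chain. A minimal sketch in Python (a toy illustration of the hashing principle only, not how Ethereum or any production blockchain is implemented) shows why such a record resists silent alteration: each entry embeds the hash of its predecessor, so tampering with any earlier transaction invalidates every hash that follows.

```python
import hashlib
import json

def chain_records(transactions):
    """Link each transaction record to its predecessor via a SHA-256 hash,
    so later tampering with any record changes every subsequent hash."""
    chain = []
    prev_hash = "0" * 64  # genesis entry: no previous record
    for tx in transactions:
        record = {"tx": tx, "prev_hash": prev_hash}
        # Hash the record body (transaction + predecessor's hash)
        prev_hash = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        record["hash"] = prev_hash
        chain.append(record)
    return chain

def verify(chain):
    """Recompute every hash and linkage; return False if anything was altered."""
    prev_hash = "0" * 64
    for record in chain:
        if record["prev_hash"] != prev_hash:
            return False
        body = {"tx": record["tx"], "prev_hash": record["prev_hash"]}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if record["hash"] != expected:
            return False
        prev_hash = record["hash"]
    return True
```

Editing a single past transaction makes `verify` fail for the whole chain, which is the sense in which the record is “secured” without any central registry - a property real blockchains extend with decentralised consensus among multiple parties.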

An exploration of the literature surrounding specific instances of SFCs finds that it lacks a close examination of the textual and documentary aspects of SFCs, which are particularly important when applying these principles and policies to future technologies such as smart contracts. Instead, common perspectives are either based on outdated notions from paper versions of these contracts or on ideologies of industry and business that do not sufficiently address the needs of consumers/users in the digital age. Perhaps a more nuanced and critical look at the ‘desirable’ characteristics that a digital, networked environment can support is the more appropriate query moving forward. boyd (2008) offers persistence, searchability, and a limited audience as some of the qualities of a networked public - perhaps parsing through which of these aspects a blockchain-supported smart contract system can enhance or distort is a worthwhile endeavour. Questions arise such as: how much permanence of the DLT record is needed for transparency’s sake, and how should it be balanced with a need for privacy and limited audience? What aspects of contracts should be searchable, and how would a comparison, organisation, or documentation system of standardised smart contracts benefit users uninitiated with their content? How might a digital interface in a smart contract situation distort the textual elements of a contract that were needed previously (or distorted by previous forms of SFCs such as ToS)? How might the automatic execution of a smart contract exacerbate issues of awareness and comprehension, and how do determinations like the UETA’s contribute to this issue?

One location to start this work might be in the early efforts to standardise the protocols currently being formalised and the languages and terminology associated with smart contracts. For a smart contract to work on the Ethereum blockchain, it often adheres to the ERC-20 token contract protocol to function properly. ERC-20 is one of the most popular protocols to provide tokens with a common set of features and interfaces they can use to perform transactions such as sending currency and verifying account amounts or other information (McDonald, 2017; S. Palladino, personal communication, 12 December 2017). As ERC-20 provides the location by which many users interact with the contract, it is an appropriate place to start applying some of the knowledge from the discourse on procedural unconscionability that deals with users’ awareness, acceptance, meaningful choice, and understanding of the contract as a document. It is a community-created standard managed and formalised on GitHub’s forum and repository, yet contributions currently come mostly from those with a computer science background. Related efforts to formalise and standardise smart contracts include work to translate natural language contract terms into the most common smart contract language, Solidity10, and the International Organization for Standardization (ISO) TC/037 Study Group’s work to standardise the associated terminology. As these mature, it would be worth incorporating interdisciplinary concepts (e.g., from contract law, finance, recordkeeping, and computer science), with perhaps a determination of the enforcement mechanism that is needed based on spectrums of identification of genre, functionality, or specific terms (see Kim, 2013; Lemieux, 2016; Raskin, 2017).
It is yet to be seen whether the code restrictions and standardisation of protocols will have a positive effect on making smart contracts more predictable for consumers or whether they will worsen some of the issues of invisibility that plagued previous SFCs. Given the ubiquity of SFCs, the goal is to prevent a situation where nearly every written transaction is a smart contract that exists behind the scenes, unreadable by most even if approached, and automatically executed and enforced by its technological mechanism rather than by an understanding of the reader or “meeting of the minds”. And perhaps it must be conceded that smart contracts and the companies that will produce them might not be able to self-regulate to the level necessary to prevent the abuses SFCs have been known to enact.
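The ERC-20 standard mentioned above specifies a small set of functions every compliant token contract must expose - `totalSupply`, `balanceOf`, `transfer`, `approve`, `allowance`, and `transferFrom`. A minimal sketch of those semantics in Python (actual ERC-20 contracts are written in Solidity and run on the Ethereum Virtual Machine; the class and account names here are purely illustrative) shows how thin the standardised “interface” layer is compared to the contractual terms it executes:

```python
class Token:
    """Python analogue of the ERC-20 token interface (illustrative only)."""

    def __init__(self, supply, owner):
        self._balances = {owner: supply}
        self._allowances = {}   # (owner, spender) -> remaining allowance
        self._supply = supply

    def total_supply(self):                 # ERC-20: totalSupply()
        return self._supply

    def balance_of(self, account):          # ERC-20: balanceOf(address)
        return self._balances.get(account, 0)

    def transfer(self, sender, to, amount):  # ERC-20: transfer(to, value)
        if self.balance_of(sender) < amount:
            return False
        self._balances[sender] -= amount
        self._balances[to] = self.balance_of(to) + amount
        return True

    def approve(self, owner, spender, amount):  # ERC-20: approve(spender, value)
        self._allowances[(owner, spender)] = amount
        return True

    def allowance(self, owner, spender):    # ERC-20: allowance(owner, spender)
        return self._allowances.get((owner, spender), 0)

    def transfer_from(self, spender, owner, to, amount):
        # ERC-20: transferFrom(from, to, value), spent from a prior approval
        if self.allowance(owner, spender) < amount:
            return False
        if not self.transfer(owner, to, amount):
            return False
        self._allowances[(owner, spender)] -= amount
        return True
```

Nothing in this interface conveys terms, warnings, or context to the adhering party - which is precisely why, as argued above, it is an apt place to apply lessons from the procedural unconscionability discourse.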

In addition to globalisation and scientific progress, Castells (1996) underscores the transition to a network society with an additional dimension: a new technological paradigm that includes the electronic hypertext, which has become a “new frame of reference for symbolic processing,” producing a state of ‘real virtuality’ that has become the “fundamental component of our symbolic environment” and “backbone of a new culture”. Castells notes how this space will be without physical boundaries - timeless and placeless. boyd (2008) similarly notes, albeit less optimistically, how the networked public consists of “all people across all space and all time”. It is within this boundless, symbolic, hypertextual space, it seems, that SFCs flourish - as documents within documents (called “linked contracts”), as files and templates utilised and organised by records management systems, and as protocols that execute terms of a contract (smart contracts). With ToS, one platform can reach large numbers of users and engage them in a contract simultaneously with a hyperlink. With unilateral modification clauses, that same platform can also change the terms of this contract for all users at once. A collapsing of time and place does not always produce a democratic or decentralised result or democratised power relations, and the legal reasoning that treats these contracts as ‘standard’ only aids in this process (i.e., allows them to be hidden, ignored, invisible). So as not to fall into the same rhetoric that would displace smart contracts’ textuality while granting them the same legitimacy as previous contracts, the discourse surrounding digitised SFCs, and the issues still being resolved within it, is beneficial to consider.

In 1978, legal scholar Ronald C. Griffin wrote: “We are faced with an historic choice in contracts. We can lump together standard forms and classic contracts, or we can treat the former differently.” In the decades since, it seems standardised contracts have been “lumped together”, not only with other types of contracts, but also with new technological forms of these documents. Contract law changed very little from the First Restatement of Contracts in 1932 to the early 2000s, because there were no “disruptive” technological developments in this field during those years (Moringiello and Reynolds, 2013)11. And legal discourse since has either relied on conceptions of past forms of contracts to validate digital versions or changed the very nature of some aspects of these contracts to accommodate industry with laws like the UETA. This does a disservice to consumers, who need a stable, clear understanding of these contracts to inform the ‘reasonable expectations’ they are meant to rely on when expected not to have read the terms. Even at that early stage in the late 1970s, Griffin understood that “the rules of the quiet past are simply too cumbersome to deal with the complexities of a stormy contract future.” We have already reached that future, and it is indeed stormy. But with the streamlining of digital processes that increase the efficiency of SFCs for business, communication efforts to increase awareness of the terms for consumers could also be streamlined in a digital environment.

References

Bartoletti, M., & Pompianu, L. (2017a). An empirical analysis of smart contracts: platforms, applications, and design patterns. ArXiv:1703.06322 [Cs]. Retrieved from http://arxiv.org/abs/1703.06322

Barton, T. D., Berger-Walliser, G. & Haapio, H. (2013). Visualization: Seeing Contracts for What They Are, and What They Could Become, Journal of Law, Business & Ethics, 19, 47-64. Available at https://scholarlycommons.law.cwsl.edu/fs/11/

Ben-Shahar, O. (2014). Regulation Through Boilerplate: An Apologia [Review of the book Boilerplate: The Fine Print, Vanishing Rights, and the Rule of Law, by M. J. Radin]. Michigan Law Review, 112, 883-903. http://chicagounbound.uchicago.edu/journal_articles/4272

Ben-Shahar, O & Schneider, C. E. (2014). More Than You Wanted to Know: The Failure of Mandated Disclosure. Princeton, NJ: Princeton University Press.

boyd, danah. (2008). Why Youth ❤ Social Network Sites: The Role of Networked Publics in Teenage Social Life. In D. Buckingham (Ed.), Youth, Identity, and Digital Media. Cambridge, MA: MIT Press. Available at https://mitpress.mit.edu/books/youth-identity-and-digital-media

Burke, J.A. (2000). Contract as Commodity: A Nonfiction Approach. Seton Hall Legislative Journal, 24, 285.

Cassano, J. (2014, September 17). What Are Smart Contracts? Cryptocurrency’s Killer App. Fast Company. Retrieved from https://www.fastcompany.com/3035723/smart-contracts-could-be-cryptocurrencys-killer-app

Castells, M. (1996). The Rise of the Network Society, The Information Age: Economy, Society, and Culture, Vol. I. Oxford: Blackwell Publishers.

Castells, M. (2007). Communication, Power and Counter-power in the Network Society. International Journal of Communication, 1(1). Retrieved from http://ijoc.org/index.php/ijoc/article/view/46

Clack, C. D., Bakshi, V. A., & Braine, L. (2016). Smart Contract Templates: foundations, design landscape and research directions. ArXiv:1608.00771 [Cs]. Retrieved from http://arxiv.org/abs/1608.00771

D’Agostino, E. (2015). Contracts of Adhesion Between Law and Economics. Cham: Springer International Publishing. doi:10.1007/978-3-319-13114-6

Eisenberg, M. A. (1994). Expression Rules in Contract Law and Problems of Offer and Acceptance. California Law Review, 82(5), 1127-1180. Available at https://works.bepress.com/melvin_eisenberg/26/

De Filippi, P., & Wright, A. (2018). Blockchain and the Law: The Rule of Code. Cambridge, MA: Harvard University Press.

Frisby, D. (2016, April 21). How blockchain will revolutionise far more than money – Dominic Frisby | Aeon Essays. Aeon. Retrieved from https://aeon.co/essays/how-blockchain-will-revolutionise-far-more-than-money

Griffin, R. C. (1978). Standard Form Contracts. North Carolina Central Law Journal, 9, 158-177. Available at http://commons.law.famu.edu/faculty-research/25/

Hillman, R. (2006). Online Boilerplate: Would Mandatory Website Disclosure of E-Standard Terms Backfire? Michigan Law Review, 104(5), 837-856. Available at http://repository.law.umich.edu/mlr/vol104/iss5/2

Hillman, R. A. & Rachlinski, J. J. (2002). Standard-Form Contracts in the Electronic Age. New York University Law Review, 77(2), 429-495. Available at https://ssrn.com/abstract=287819

Horton, D. (2009). The Shadow Terms: Contract Procedure and Unilateral Amendments. UCLA Law Review, 57(3), 605. Available at https://www.uclalawreview.org/pdf/57-3-1.pdf

Kessler, F. (1943). Contracts of Adhesion--Some Thoughts about Freedom of Contract. Columbia Law Review, 43(5), 629–642. doi:10.2307/1117230 Available at https://chicagounbound.uchicago.edu/journal_articles/7738/

Kim, N. (2013). Wrap Contracts: Foundations and Ramifications. Oxford: Oxford University Press.

Lemieux, V. L. (2016). Trusting records: is Blockchain technology the answer? Records Management Journal, 26(2), 110-139.

Lonegrass, M. T. (2012). Finding Room for Fairness in Formalism—The Sliding Scale Approach to Unconscionability. Loyola University Chicago Law Journal, 44, 1-64. Available at https://digitalcommons.law.lsu.edu/faculty_scholarship/19

Mann, R. J. and Siebeneicher, T. (2008). Just One Click: The Reality of Internet Retail Contracting, http://web.law.columbia.edu/sites/default/files/microsites/contract-economic-organization/files/working-papers/Mann%20and%20Sibeneicher%20Just%20One%20Click.pdf

McDonald, J. (2017, September 15). Understanding ERC-20 token contracts. Retrieved from https://medium.com/@jgm.orinoco/understanding-erc-20-token-contracts-a809a7310aa5

Moringiello J. M. & Reynolds, W. L. (2007). Survey of the Law of Cyberspace: Electronic Contracting Cases 2006-2007. The Business Lawyer, 63(1). Available at http://www.jstor.org/stable/40688445

Moringiello J. M. & Reynolds, W. L. (2013). From Lord Coke to Internet Privacy: The Past, Present, and Future of the Law of Electronic Contracting. Maryland Law Review, 72, 452-500. Available at http://digitalcommons.law.umaryland.edu/cgi/viewcontent.cgi?article=2178&context=fac_pubs

Mulcahy, L. (2008). Contract Law in Perspective, 5th ed. New York: Taylor &​ Francis.

United States District Court Southern District of New York. (2006). Shlomo Bar-Ayal v. Time Warner Cable Inc. Available at https://jenner.com/system/assets/assets/3016/original/Bar-Ayal_v_Time_Warner.pdf?1319460897

Obar, J., & Oeldorf-Hirsch, A. (2016). The Biggest Lie on the Internet: Ignoring the Privacy Policies and Terms of Service Policies of Social Networking Services (SSRN Scholarly Paper No. ID 2757465). Rochester, NY: Social Science Research Network, https://papers.ssrn.com/abstract=2757465. doi:10.2139/ssrn.2757465

Olszewicz, J. (2017, October 6). “Ethereum Price Analysis - Network slowdown precedes fork.” Brave New Coin. Retrieved from https://bravenewcoin.com/news/ethereum-price-analysis-network-slowdown-precedes-fork/

Passera, S., Smedlund, A., & Liinasuo, M. (2016). Exploring contract visualization: clarification and framing strategies to shape collaborative business relationships. Journal of Strategic Contracting and Negotiation, 2(1-2), 69-100. doi:10.1177/2055563616669739

Patterson, M. R. (2010). Standardization of Standard-Form Contracts: Competition and Contract Implications. William and Mary Law Review, 52(2), 327-414. Available at http://scholarship.law.wm.edu/wmlr/vol52/iss2/2 and https://papers.ssrn.com/abstract=2010124

Preston, C. B. & McCann. E. W. (2011). Unwrapping Shrinkwraps, Clickwraps, and Browsewraps: How the Law Went Wrong from Horse Traders to the Law of the Horse. BYU Journal of Public Law, 26(1). Available at https://digitalcommons.law.byu.edu/jpl/vol26/iss1/2

Radin, M. J. (2013). Boilerplate: The Fine Print, Vanishing Rights, and the Rule of Law. Princeton, NJ: Princeton University Press.

Randolph, P. A., Jr. (2001). Has E-sign Murdered the Statute of Frauds? Probate and Property, 15.

Raskin, M. (2017). The Law and Legality of Smart Contracts. Georgetown Law Technology Review, 1(2).

Sales, H.B. (1953). Standard Form Contracts. The Modern Law Review, 16(3), 318-342.

Schwartz, A. (2015). Regulating for Rationality (Faculty Scholarship Series, Paper No. 4971). Yale Law School. Retrieved from http://digitalcommons.law.yale.edu/fss_papers/4971

Schwartz, A. & Scott, R. E. (2003). Contract Theory and the Limits of Contract Law (John M. Olin Center for Studies in Law, Economics, and Public Policy Working Papers. Paper No. 275). Available at http://digitalcommons.law.yale.edu/lepp_papers/275/

Sullivan, M. (2012, January 19). Attack of the Fine Print. Wall Street Journal, http://www.smartmoney.com/spend/technology/attack-of-the-fine-print-1326481930264/

Szabo, N. (1996). Smart Contracts: Building Blocks for Digital Markets. Alamut. http://www.alamut.com/subj/economics/nick_szabo/smartContracts.html

Varadarajan, T. (2017, Sept 22). The Blockchain Is the Internet of Money. The Wall Street Journal, https://www.wsj.com/articles/the-blockchain-is-the-internet-of-money-1506119424?utm_medium=social&utm_source=twitter

Venturini, J., Louzada, L., Maciel, M., Zingales, N., Stylianou, K., Belli, L., & Magrani, E. (2016). Terms of Service and human rights: an analysis of online platform contracts. Editora Revan, http://internetgovernance.fgv.br/sites/internetgovernance.fgv.br/files/publicacoes/terms_of_services_06_12_2016.pdf

Wall, L. D. (2016). “Smart Contracts” in a Complex World (Notes from the Vault). Atlanta: Federal Reserve Bank of Atlanta. Retrieved from https://www.frbatlanta.org:443/cenfis/publications/notesfromthevault/1607

Werbach, K. & Cornell, N. (2017). Contracts Ex Machina. Duke Law Journal, 67, 313-382. Available at https://scholarship.law.duke.edu/dlj/vol67/iss2/2

Wilson, T. D. (2010). Fifty years of information behavior research. Bulletin of the American Society for Information Science and Technology, 36(3). doi:10.1002/bult.2010.1720360308

Wittie, R. A. & Winn, J. K. (2001). Electronic Records and Signatures under the Federal E-SIGN Legislation and the UETA. The Business Lawyer, 56(1), 293-340. Available at http://www.jstor.org/stable/40687979

Yovel, J. (2000). What is Contract Law “About”? Speech Act Theory and a Critique of “Skeletal Promises”. Northwestern University Law Review, 94(3), 937-962.

Footnotes

1. According to Etherscan (the Ethereum Blockchain Explorer), it has already processed over 4.3 million blocks of transactions, with hundreds of thousands more processed each day, and is thus the most widely available and accessible implementation of smart contracts at this point in time. As of the beginning of October 2017, for instance, it had an estimated market capitalisation of US$28 billion; it has recently been adopted by Microsoft, Intel, and more than two dozen major banks, and has been at the centre of discussions for several national and international government institutions and the worldwide financial industry.

2. There has been discussion around the relationship of SFCs to consent and data collection, for instance, which is not covered here but is extremely relevant given recent revelations involving Cambridge Analytica and Facebook.

3. § 208 Unconscionable Contract or Term, Restatement (Second) of Contracts, 2017

4. § 1 Contract Defined, Restatement (Second) of Contracts, 2017

5. In a 2006-2007 survey of electronic contracts, Moringiello and Reynolds (2007) state: “Without putative class actions and arbitration/forum selection clauses there would be little law in this area.”

6. See the services and research by Helena Haapio and Lexpert, for instance: Haapio, H. (2013, May 15). Visual Law: What Lawyers Need To Learn From Information Designers. Legal Information Institute.

7. Browsewrap agreements are digital contracts that commonly exist in the margins of interfaces and to which the adhering party agrees based on browsing the website, not by explicit agreement such as clicking an “I Agree” button (called clickwrap) (Kim, 2013).

8. Szabo (1996) cites the field of Electronic Data Interchange (EDI), in which “elements of traditional business transactions (invoices, receipts, etc.) are exchanged electronically,” as one of the “primitive forerunners” to smart contract technology.

9. Exact wording: “If a law requires a signature, submission of a blockchain which electronically contains the signature or verifies the intent of a person to provide the signature satisfies the law.”

10. See the Logic Based Production System by Imperial College London that produces “smart contracts written in quasi-natural language [and] executed through simulated human reasoning.”

11. It has been argued, for instance, that a “student who could pass a contracts exam in 1932 could also pass the exam in 2000” (Moringiello and Reynolds, 2013).

Networked publics: multi-disciplinary perspectives on big policy issues


Papers in this Special Issue

Editorial: Networked publics: multi-disciplinary perspectives on big policy issues
William H. Dutton, Michigan State University

Political topic-communities and their framing practices in the Dutch Twittersphere
Maranke Wieringa, Daniela van Geenen, Mirko Tobias Schäfer, & Ludo Gorzeman

Big crisis data: generality-singularity tensions
Karolin Eva Kappler

Cryptographic imaginaries and the networked public
Sarah Myers West

Not just one, but many ‘Rights to be Forgotten’
Geert Van Calster, Alejandro Gonzalez Arreaza, & Elsemiek Apers

What kind of cyber security? Theorising cyber security and mapping approaches
Laura Fichtner

Algorithmic governance and the need for consumer empowerment in data-driven markets
Stefan Larsson

Standard form contracts and a smart contract future
Kristin B. Cornelius

Introduction: networked publics shaped by changing policy and regulation

This special issue of Internet Policy Review is the first of a series organised in collaboration with the Association of Internet Researchers (AoIR), an academic association centred on the ‘advancement of the cross-disciplinary field of Internet studies’1. AoIR was inspired by the internet as a major technological innovation of the twenty-first century, holding its first conference in 2000 around the state of what was then a fledgling field focused on a new research topic. The first conference gathered academics together with those involved with the internet from technical, corporate and governmental communities as well as many early internet enthusiasts from all sectors of society. Given its diversity within and beyond academia, early debate was centred on whether and how it should be viewed as a field. Some consensus emerged through the conferences that internet studies would be an interdisciplinary field (Wellman, 2004). No single discipline could address the internet and the many issues associated with it as objects of study (Consalvo and Ess, 2011).

Since those early days, its yearly conferences have focused on the use and impacts of continuous innovations in the internet, social media, mobile internet, the Internet of Things (IoT), and related information and communication technologies. While research on internet policy and governance has been developing since the technology’s inception, it was only in 2016 that the annual AoIR conference was organised around a theme of policy and governance - the concept of ‘Internet Rules!’. But with the continuing emergence of major issues in the policy, regulation and governance of the internet and related ICTs, most recently around the privacy and surveillance issues of big data, policy has begun to draw increasing attention from the field, reflected in its rising prominence on the agendas of AoIR conferences.

This trend is illustrated by the 2017 AoIR conference. Its focus on networked publics is not explicitly policy-oriented. The concept of networked public is broad and useful in capturing the idea that networking technologies like the internet and social media can create virtual spaces analogous to physical spaces. These permit communities to form around such activities as play, work, or political and social movements. For example, danah boyd (2008) used the term to discuss her findings on the ways American teenagers used networking for a variety of social activities. I find the term compatible with my discussion of how individuals have used networks to empower themselves vis-à-vis institutions to become a fifth estate, comparable to the fourth estate shaped by the role of an independent press of an earlier era (Dutton, 2009). However, whatever networked public is of interest, from teenagers finding a comfortable space for socialising to networked individuals feeling free to search for information and network with others to hold powerful institutions more accountable, the vitality - if not the very existence - of these networks will depend on their policy and regulatory contexts. Therefore, it is not surprising that a conference without an explicit policy focus has yielded a strong set of policy-oriented contributions. The future of networked publics depends on the ways in which policy and regulation facilitate or constrain individuals from accessing and producing information and connecting with other individuals in meaningful ways.

From the changing composition of contributions to AoIR conferences over the years, it became increasingly apparent to the editors of Internet Policy Review as well as the evolving leadership of AoIR that the annual conference would be a growing source of developing scholarship on emerging issues of policy and regulation surrounding the internet. In fact, changes in the composition of AoIR conferences reflect aspects of this shift and led to more interaction between the journal and AoIR. It was in that spirit that I was asked to be a guest editor of this special issue arising from papers presented at the 2017 AoIR conference in Tartu, Estonia, organised around the theme of networked publics.

I along with the editors of Internet Policy Review were encouraged by the response to our call for papers to be considered for this special issue. We are pleased to provide this special issue, which is composed of the best policy-related papers presented at AoIR 2017.

Remarkably, for what has been defined as an interdisciplinary field, the papers in this special issue are more disciplinary than might have been anticipated in those early years of the field. It is even more remarkable in that policy studies are also viewed as inherently interdisciplinary. For example, many top policy studies programmes describe themselves as ‘interdisciplinary’, such as the Moritz College of Law’s Center for Interdisciplinary Law and Policy Studies. For this reason, this special issue refers to ‘multidisciplinary’ rather than ‘interdisciplinary’ perspectives, as each paper arguably draws primarily from a core discipline, such as sociology, science and technology studies (STS), or law. However, it will be apparent from contributions to this special issue that disciplinary perspectives on major issues surrounding the internet and policy can offer new insights that constructively stimulate and inform debate over policy and regulation. The contributions to this issue also raise the question of whether the field as a whole is taking a more disciplinary turn.

The rise of new policy, regulation and governance issues

Before describing the contributions to this issue, it is useful to acknowledge and explain the relatively late emergence of policy issues both within the field and with respect to the larger public’s understanding of the internet. The shift of attention to the policy issues of the internet and related information and communication technologies (ICTs) is an inescapable observation based on mass media framing of internet-related stories – but it is also one of the most dramatic developments around the internet since its first decade of worldwide diffusion.

Early internet research was focused on issues driven primarily by technical innovations (Wellman, 2004; Dutton, 2013). Internet policy research initially arose in this field largely around limitations of access to the internet and related technologies, such as over issues of building internet infrastructures (Kahin and Wilson, 1997), reducing digital divides and skill gaps (Norris, 2001; Hargittai, 2002) and responding to global internet filtering regimes (Deibert et al., 2008, 2010). However, over the last decades, there has arguably been a shift to a greater focus on a wider array of policy issues (Mueller, 2002; Cranor and Wildman, 2003; DeNardis, 2009, 2013; Braman, 2009; Dutton, 2015). This shift aligns with the internet moving from a promising innovation at the turn of the century to an essential part of the lives of most people in the world’s developed economies. Within the span of two decades, this promising innovation had connected over half of the world’s population, reaching over 4 billion users (54% of the world) by 2018 (World Internet Stats, 2018).

Beyond the growing centrality of the internet, there has also been a shift in public views of the internet. Instead of being seen as a technology that fosters democracy, the internet and related technologies are increasingly identified as posing threats to democratic structures and participation in politics and society (Rainie and Wellman, 2012; Howard, 2015). In this vein, the internet is increasingly portrayed as a privacy-invading surveillance technology, fuelled by advances in social media, big data, the Internet of Things, and artificial intelligence (Howard, 2015). Far from the ‘technology of freedom’ of yesteryear (de Sola Pool, 1983), the internet and related social media and big data are feared to be eroding privacy and putting democracy at risk – as politicians, governments, businesses and industries succumb to the potential for these new tools to help them observe and manipulate public opinion and behaviour (Morozov, 2011; Greenwald, 2014; Keen, 2015; Sunstein, 2017). More people want government and internet service providers to ‘do something’!

New risks tied to the internet and social media have become popularised, including:

  • search algorithms trapping internet users in ‘filter bubbles’ (Pariser, 2011);
  • social media enabling internet users to cocoon themselves in ‘echo chambers’ that confirm their social and political viewpoints (Sunstein, 2017); and
  • advertising incentives combining with the power of social media to promote the spread of disinformation, such as so-called unprofessional, junk, or fake news (Keen, 2007).

These threats to privacy and the quality and reliability of information have found widespread acceptance among the educated public, mass media, and politicians and regulators alike, illustrated by the establishment of inquiries and study groups on such issues as privacy (Mendel et al., 2012; Hardie et al., 2014) and the disinformation fostered by junk or fake news examined by the UK’s Digital, Culture, Media and Sport Committee (2017) and a high level study group for the European Commission (2018). Only recently has systematic empirical research been undertaken to address the validity of some of these expectations, as illustrated by the contributions to this special issue.

Of course, views of the internet as a technology of freedom or control are based on technologically deterministic assumptions that are not new and that have been challenged by empirical research over the years (Beniger, 1986). Well over a decade ago, I noted that:

Growing concerns over the lack of real information, the prevalence of misinformation, and increasing problems with information overload should ... not be viewed as aberrations within an information society. These failures are actually caused by inadequate regulation of access to information - the incorrect treatment of all information as being equal and benign. (Dutton, 1999, p. 11)

Utopian versus dystopian perspectives on the role of the internet and communication technologies have been a central issue for decades (Williams, 1982). Long before the internet was taken seriously, Kenneth Laudon (1977), focusing on interactive cable and telecommunications, wrote about the potential for new interactive technologies to be used to manage democracy, manipulating public opinion rather than responding to democratic forces.

However, dystopian perspectives on the internet as a technology of control and manipulation rather than freedom and collective intelligence have gained increased currency in the aftermath of major events. These include the unraveling of what was thought to be an Arab Spring fostered by social media (Morozov, 2011); the disclosures by the whistleblower Edward Snowden of classified National Security Agency (NSA) documents that provided evidence of mass surveillance (Greenwald, 2014); the rise of the Internet of Things, which will put tens of billions of devices online (Howard, 2015); and the Facebook fiasco over Cambridge Analytica, in which the personal data of Facebook users were obtained by a political consulting firm via an academic researcher (Dutton, 2018; Schotz, 2018).

An equally significant development contributing to this shift of perspective has been the increasing concentration of the internet industry, such as in the so-called FANG firms: Facebook, Amazon, Netflix, and Google. As I was writing this introduction, I received an online notification from a news feed that claimed to reveal: “Why Amazon is obsessed with getting inside of our homes”. Worry over the consequences of concentration within the internet industry has been one motivation behind calls for new policy initiatives around such aims as increasing competition, privacy and data protection, and efforts to prevent the blocking of legitimate content, such as through network neutrality initiatives (Wu, 2003).

It is within this backdrop of rising concerns over threats to the very values that once almost personified the internet as a technology of freedom that all the articles within this special issue can be seen. As a group, they address three big policy and regulatory issue areas that have arisen around the internet. Simply put, these are research papers on the role of the internet in reshaping:

  1. access to (dis)information in ways that could clarify or distort our views of local and worldwide developments - from the news to environmental crises;
  2. privacy, data protection, and the security of the internet - each of which is threatened in new ways by new technologies, such as big data, computational analytics, and increasingly essential services being provided online; and
  3. legal and contractual relationships between users and providers - such as through new forms of notice and consent to the use of personal information.

These are only three of many key policy issue areas. Concerns over freedom of expression, digital divides, sociality, and many more remain equally important. But these three capture broad areas of concern and arise from the actual composition of the best policy-related papers at AoIR 2017. The following sections provide a broad outline of the articles in this issue grouped around these three areas. This will be followed by a short overview of several cross-cutting themes of this special issue.

Reshaping access to information: who knows what?

All major innovations in communication technologies have a potential to reshape access to information – what we know, who we know, what services we obtain, and what know-how we require (McLuhan, 1964; Dutton, 1999). Mark Graham (2014, p. 100) has called this ‘augmented reality’ in that the internet not only reshapes what we know, but also what we ‘are able to know and do’. This has been viewed positively with respect to the internet creating the potential for more open and global access to information, providing access to a heretofore unimaginable range of information from anywhere at any time (Dutton, 1999). Therefore, most concern in the early period of internet diffusion was focused on efforts to block access to information online, such as through internet filtering (Deibert et al., 2010).

However, it has long been argued that just as new media open up new channels of access, they can also exacerbate existing inequalities in the production and consumption of information around the world. This led the McBride Commission to call for a new world information order (ICCP, 1980), and contemporary internet scholars to call attention to continuing inequalities in access to production and consumption of information in a networked world (Castells, 1996; Graham, 2014).

As noted above, in the early years of the internet, the focus was on access to the technologies and skills to be online in a networked world, giving rise to issues over digital divides (Dutton, 1999; Norris, 2001). As increasing proportions of the world have gained access to the internet and social media, the focus has shifted to the quality and bias of information served up and consumed on these networks.

One of the most compelling arguments has been that the rise of search, and the algorithms that underpin the personalisation of its results, could be limiting access to information by diminishing the diversity of information, such as by creating a ‘filter bubble’ in which ‘what you’ve clicked on in the past determines what you see next …’ (Pariser, 2011, p. 16). A similar but complementary thesis is that social media not only personalise information, but also enable individuals to more easily and almost unwittingly cocoon themselves in what Cass Sunstein (2017, p. 6) coined as ‘echo chambers’ – built by ‘people’s growing power to filter what they see’, which adds to the power of providers to filter ‘based on what they know about us’. Many – from scientists to casual news readers – wish to confirm their beliefs through what they read and hear. This ‘confirmatory bias’ is greatly enabled in principle by the new social media at our fingertips (Sunstein, 2017). Therefore, rather than simply opening up new information vistas, the new media could narrow and distort our views of reality.

In many fundamental respects, this is not a new concern. A key issue with the mass media has long been the quality of news and the degree to which propaganda, or even documentary and entertainment media coverage, might distort our views of the real world and key events, ranging from the reporting of car accidents in local news to the reporting of war correspondents in remote areas. For instance, continuing debates centre on the degree to which mass media coverage might well ‘cultivate’ misperceptions of the real world (Gerbner et al., 1986), such as news portraying the world as more violent than it in fact is, because coverage tends to focus on stories that attract readers – the rule of thumb in many newsrooms that ‘if it bleeds, it leads’. But as the internet has become more central to the consumption of news, new concerns have been raised, such as around the disinformation sown by junk or fake news, and the biases introduced by the filter bubbles and echo chambers described above.

The first article in this issue addresses concerns over filter bubbles and echo chambers by focusing on what the authors call ideological ‘topic-communities’ forming in the Dutch Twittersphere that are focused on politics. To what degree are they diverse, and can the levels of homophily observed on Twitter be explained by either the notion of a filter bubble or an echo chamber? Maranke Wieringa, Daniela van Geenen, Mirko Tobias Schäfer, and Ludo Gorzeman’s article, ‘Political topic-communities and their framing practices in the Dutch Twittersphere’, questions the explanatory value of a filter bubble as overly deterministic in light of their findings, but they lend some support to the significance of an echo chamber among one of their observed ideological communities. Their research is focused on two weeks of normal politics – the research was not conducted during a major campaign or election – and draws on a creative and rigorous use of multiple methods to provide a strong case for their findings. Nevertheless, their work raises further questions: Are their findings a reflection of Twitter users seeking to convey, rather than consume, partisan or ideological political perspectives? Are they retweeting and framing media coverage to influence others, rather than being naïve, cocooned readers, trapped in an echo chamber?

The next article by Karolin Eva Kappler, entitled ‘Big crisis data: generality-singularity tensions’, is far removed from discussions of filter bubbles and echo chambers in political discourse. Nevertheless, Kappler forces us to consider how the use of big data in the identification and monitoring of emergencies, disasters, and crises is changing the way we see these real world events, and even whether they can sustain attention when the crisis has passed. For example, when social scientists collect data through any means, whether a survey or by direct observation, their method of observation shapes what they can see as well as what might be less visible through their particular methodological lens. Kappler explores the potential of a big data bias in perception, drawing on sociological perspectives to critically compare three platforms designed to capture big data about crisis events. She identifies a variety of implications common and distinct to these different platforms’ approaches to capturing crisis data, such as the idea that they make each crisis unique – a singular event – rather than a more general crisis or just another emergency. How does what she calls the ‘platformization’ of emergencies shape what we know about them? This article is refreshing in the way it moves away from the hype about big data capturing reality to critically assessing what realities these platforms see, observe, valorise, produce, and appropriate. They are, according to Kappler, all about ‘doing singularity’ – making the event a unique rather than general phenomenon.

Competing perspectives on privacy and security

The next set of three articles provides different disciplinary perspectives on the issues of privacy and security. The first, by Sarah Myers West, entitled ‘Cryptographic imaginaries and the networked public’, provides a fascinating historical and comparative perspective on what she calls ‘cryptographic imaginaries’ – how people think about encryption, whether through cyphers (that transpose letters of an alphabet) or codes (that replace words), in different social, cultural, and political contexts. Specifically, she looks at encryption in three different cultures: the occult, affairs of state (national security and secrecy), and democratic systems, where it provides a means to enable private communication essential to some movements by avoiding surveillance and potential social or political sanctions. Anchored in an STS approach, this comparison illustrates how similar technologies take on quite different meanings and roles in different cultural settings. Such insights support policy-making in this area by demonstrating how the technologies of encryption need to be understood not only in a technical sense, and not only cross-nationally, but also in the more specific social, cultural, and political contexts in which they are used. Technologies do not determine universal solutions: the role and impact of encryption, for example, are also shaped by its socio-cultural contexts of use.

The next article, by Geert van Calster, Alejandro Gonzalez Arreaza, and Elsemiek Apers, entitled ‘Not just one, but many “Rights to be Forgotten”’, is based on a comparative analysis of national law and policy anchored in what has become known as the ‘right to be forgotten’ (Mayer-Schönberger, 2009). While general support for such a right emerged in Europe initially through the courts and later through the European Commission, initiatives to legally define and implement this right have diffused widely across the world. This article conducts a comparative survey of over two dozen cases of concrete legal implementations of this right to be forgotten. The research team finds far more variation in case law, such as in the territory over which the right would be enforced, than commentary on this universal right would lead us to expect. The article demonstrates the value of close and comparative legal analysis of how general legal principles are implemented in case law across different national jurisdictions. Their study is reminiscent of early American research on implementation, which tracked how a policy spawned in Washington DC changed dramatically by the time it was implemented in local communities (Pressman and Wildavsky, 1973). One clear implication of their findings is the degree to which even widespread acceptance of a general legal principle can still lead to cross-national differences. As various evolving principles of policy and regulation for the digital age move into national courts and legislatures, will the resulting patchwork of national case law be another force underpinning an increasing fragmentation of a global, open internet, one that frustrates efforts at harmonisation?

Closely aligned with the right to privacy is an associated right to security. Computer scientists have long approached this issue in the information age through a focus on cyber security, defined to include the ‘technologies, processes, and policies that help to prevent and/or reduce the negative impact of events in cyberspace that can happen as the result of deliberate actions against information technology by a hostile or malevolent actor’ (National Research Council, 2014, p. 2). If privacy is in part defined by unauthorised access to personal information, then a lack of cyber security, such as the inability to prevent unauthorised access to internet devices or infrastructures, is one critical route to infringing privacy. Take, for instance, the US government’s efforts to unlock a smartphone to gain access to personal information in an investigation of terrorism (Benner and Lichtblau, 2016).

The next article in this issue moves the discussion of cyber security from a general aim to a more concrete set of goals in more specific domains. By focusing on concrete domains or institutional contexts of cyber security, it is clear that cyber security takes on somewhat different meanings across each domain. Laura Fichtner’s article, entitled ‘What kind of cyber security? Theorising cyber security and mapping approaches’, provides a critical, social scientific perspective on the concept of security and also distinguishes between four domains of cyber security, largely defined by the major values and purposes they prioritise in their particular contexts. These are: 1) data protection, such as protecting data files from unauthorised access; 2) safeguarding financial interests, such as preventing credit card fraud; 3) protecting public and political infrastructures, like securing electronic voting machines; and 4) information and communication flows, as in failing to prevent the exposure of diplomatic cables of the US State Department by WikiLeaks (Leigh and Harding, 2011). Anchored in an STS approach to her study and a focus on computer ethics, Fichtner builds a strong case that each of these arenas of cyber security involves not only different priorities, but also different ecologies of actors and prototypical responses. For example, compare the tolerance of the actors involved in credit card fraud (banks), where some losses are expected, to those guarding against voting fraud (governments), where electronic voting is not allowed in most jurisdictions for fear of undetectable fraudulent voting (Jones and Simons, 2012). Here again, a closer look at the implementation of a global concept illuminates differences across domains that are important to address in policy and practice.

Social and legal insights on issues of consent

The final set of articles in this special issue addresses one of the most concrete but intractable issues of consumer protection in the digital age: how to notify internet users about, and obtain their informed consent to, the ways personal and trace data they create can be used. This principle of a notice and consent process is simple to understand, but almost impossible to implement in ways that satisfy such important and obvious values as informed consent. I have witnessed many sessions at privacy and security conferences and panels that devoted disproportionate amounts of time to critiquing the problems with contemporary approaches to notice and consent. Most notice and consent forms are long, technical, and not read. From here, agreement stops, as it has been more difficult to provide a clear and compelling alternative.

The first article in this section, by Stefan Larsson, is entitled ‘Algorithmic governance and the need for consumer empowerment in data-driven markets’. Larsson provides an insightful critique of contemporary policy and practice on notice and consent that brings this discussion into the big data age of consumer profiling. He highlights the lack of transparency in user agreements, which are exceedingly complex, and the need for policy to strengthen consumer protection in this area. In the end, his analysis leads him to question the ability of internet users to ever be able to protect themselves in the age of big data analytics. He then makes a case for the necessity of structural reform that moves responsibility from internet users to consumer protection authorities. In many respects, this is a more specific example of the case for data protection authorities in other areas. However, his article should stimulate debate on alternative remedies. It also should raise questions over the need for all users to understand all aspects of such user agreements. If only a few users discover a problem with a notice and consent process, then their objections can become a means for holding providers more accountable to users in general. Also, will consumer protection authorities themselves be adequately resourced to hold global internet service providers to account? Will consumer protection authorities have the staff and skills to understand how data are used by a complex ecology of actors in ways that truly protect users?

The final article is by Kristine B. Cornelius, entitled ‘Standard form contracts and a smart contract future’. Her legal perspective on contract law and practice adds an extremely useful background to the debate over how to regulate notice and consent, terms of service and other online contracts. Her historical points remind readers that standard form contracts (SFCs) are not new. They have had a very positive role in making some legal issues manageable for the lay public and consumers, a role that expert systems could augment (Susskind, 2008). However, her review argues that SFCs have been too slow to adapt to the digital context, such as in being too anchored to legacy paper-based forms. Moreover, she argues that the shift in medium has implications for the procedural process, which can pit the needs of consumers against the ideologies of business and industry. This need not be the case. She argues that smart contracts can be used to actually enhance the freedom of individuals to complete transactions online. In such ways, Cornelius provides insights about smart contracting in the digital context, such as in permitting more decentralised control, which might provide new approaches to such intractable issues as notice and consent.

Points of summary and conclusion

This brief editorial has sought to put the contributions to this special issue in a broader context and illuminate some of the relationships between the articles. While I have noted the basic points of each contribution, I have avoided detailed summaries of their evidence and arguments. I therefore encourage you to read these contributions on their own terms, as each is succinct and useful in advancing the study of policy and regulation in the field of internet studies. That said, I found several themes relevant across these contributions, which I will note as personal observations. These remain relatively anecdotal, as they are tied simply to this sample of articles from a single conference, albeit an important one for the field of internet studies. Hopefully they will generate questions about whether they are more generally applicable.

Disciplinary perspectives

First, it is arguable that each article is anchored in more or less of a disciplinary perspective, such as sociology, science and technology studies (STS), computer ethics, or law. This is remarkable in that internet studies and policy studies are purportedly more ‘interdisciplinary’ fields, and yet these contributions are more grounded in disciplinary than interdisciplinary perspectives. And, from my point of view, each article makes an original contribution to internet and policy studies by virtue of bringing a disciplinary approach to bear on its topic. Rather than an interdisciplinary treatment of a topic, which might surface commonalities across disciplinary divides, these contributions tend to foreground the details and differences that might be overlooked in more general treatments. For example, we see comparisons across platforms for tracking big crisis data (Kappler, this issue), multiple implementations of the right to be forgotten (van Calster et al., this issue), and four distinct approaches to cyber security (Fichtner, this issue).

Another consequence of these disciplinary approaches might have been the avoidance of a degree of advocacy that invades and undermines many policy-oriented pieces. The objective of each article is more tied to theorising or refining their theoretical or empirical approach than advocating a particular policy or practice. In many ways, this leads to analyses that can be useful to the design of policy and practice by those from multiple positions on any given issue. For example, whether you support or oppose initiatives on the right to be forgotten, it is extremely useful to know that this right differs across legal jurisdictions in ways not well recognised in general debates.

A greenfield for historical, legal, social and cultural theorising

A greenfield in urban planning and development is ideal in that the developer does not need to grapple with all the constraints imposed by an existing built environment. In some respects, internet policy studies are theoretical greenfields for which theoretical ideas from many disciplines might prove valuable to explore. The contributions to this special issue, for example, underscore the degree to which many theoretical approaches from cultural studies and the social sciences could be valuable to relatively under-theorised areas of internet policy studies. Work in this area is so new, so under-researched and so under-theorised that prevalent perspectives, such as STS, have much to add to the literature. For instance, histories of the internet and internet policy and regulation have only become foci for serious historical research in the last decade, as the internet has become recognised as central to information societies in the digital age (Haigh et al., 2015). Perhaps this issue can be a call for historians, legal scholars, critical cultural theorists and social scientists across a variety of disciplines to bring their theoretical perspectives to bear on this new empirical terrain.

Need for interdisciplinary problem-solving

Multidisciplinary research is used here to refer to bringing together research anchored in specific disciplines. In contrast, interdisciplinary research refers to research that is at the intersections of disciplines or which is a synthesis of disciplinary perspectives. It does not mean a lack of discipline or an ‘indiscipline’ (Shrum, 2005). That said, at the end of the day, internet policy is inherently a problem-oriented field (Dutton, 2013). How to inform and stimulate debate on policy and regulation appropriate to mitigating problems with such issues as junk news, big data, encryption, the right to be forgotten, cyber security, and notice and consent is likely to require interdisciplinary thinking. But that does not require every study or every paper to be anchored in interdisciplinary research. As just noted above, disciplinary enquiries can prove to be very useful.

Instead, it suggests that disciplinary research needs to be brought together within more interdisciplinary projects, teams and centres that can understand, work with, and appreciate the contributions across the disciplines. In fact, that may well be a role that special issues on policy can play for the field of internet studies. The contributions to this special issue certainly demonstrate the value of systematic and critical disciplinary research to address the validity of key issues and concerns over the policy implications of the internet and related media, information and communication technologies.

References

Beniger, J. R. (1986). The Control Revolution. Cambridge, MA: Harvard University Press.

Benner, K., and Lichtblau, E. (2016, March 28). U.S. says it has unlocked iPhone without Apple. New York Times. Retrieved from https://www.nytimes.com/2016/03/29/technology/apple-iphone-fbi-justice-department-case.html

boyd, d. m. (2008). Taken Out of Context: American Teen Sociality in Networked Publics (Phd Dissertation). University of California, Berkeley. Retrieved from https://www.danah.org/papers/TakenOutOfContext.pdf

Braman, S. (2009). Change of state: information, policy, and power. Cambridge, MA: MIT Press.

Castells, M. (1996). The Rise of the Network Society: The Information Age. Oxford: Blackwell Publishers.

National Research Council. (2014). At the Nexus of Cybersecurity and Public Policy: Some Basic Concepts and Issues. (D. Clark, T. Berson, & H. S. Lin, Eds.). Washington, DC: National Academies Press. doi:10.17226/18749

Consalvo, M., & Ess, C. (Eds.). (2011). The Handbook of Internet Studies. Oxford: Wiley-Blackwell.

Cranor, L. F., & Wildman, S. S. (Eds.). (2003). Rethinking Rights and Regulations. Cambridge, MA: MIT Press.

de Sola Pool, I. (1983). Technologies of Freedom. Cambridge, MA: Harvard University Press.

Deibert, R., Palfrey, J., Rohozinski, R., Zittrain, J. (Eds.). (2008). Access Denied: The practice and policy of global internet filtering. Cambridge, MA: MIT Press. Available at https://mitpress.mit.edu/books/access-denied

Deibert, R., Palfrey, J., Rohozinski, R., Zittrain, J. (Eds.). (2010). Access Controlled: The Shaping of Power, Rights, and Rule in Cyberspace. Cambridge, MA: MIT Press. Available at https://mitpress.mit.edu/books/access-controlled

DeNardis, L. (2009). Protocol politics: the globalization of internet governance. Cambridge, MA: MIT Press.

DeNardis, L. (2013). The emerging field of internet governance. In W. H. Dutton (Ed.), The Oxford Handbook of Internet Governance (pp. 555-575). Oxford: Oxford University Press.

Dutton, W. H. (1999). Society on the Line. Oxford: Oxford University Press.

Dutton, W. H. (2009). The fifth estate emerging through the network of networks, Prometheus, 27(1), 1-15. doi:10.1080/08109020802657453

Dutton, W. H. (2013). Internet Studies: The Foundations of a Transformative Field. In Dutton, W. H. (Ed.), The Oxford Handbook of Internet Studies (pp. 1-23). Oxford: Oxford University Press. Available at: http://www.oxfordhandbooks.com/view/10.1093/oxfordhb/9780199589074.001.0001/oxfordhb-9780199589074-e-1

Dutton, W. H. (2015). Putting Policy in its Place: The Challenge for Research on Internet Policy and Regulation. I/S: A Journal of Law and Policy for the Information Society, 12(1), 157-84. Retrieved from http://moritzlaw.osu.edu/students/groups/is/files/2016/09/10-Dutton.pdf

Dutton, W. H. (2018, March 21). Regulating Facebook Won’t Prevent Data Breaches, The Conversation. Retrieved from https://theconversation.com/regulating-facebook-wont-prevent-data-breaches-93697

European Commission, Directorate-General for Communication Networks, Content and Technology. (2018). A multi-dimensional approach to disinformation: report of the independent high level group on fake news and online disinformation. Brussels: European Union. doi:10.2759/739290

Gerbner, G., Gross, L., Morgan, M., & Signorielli, N. (1986). Living with Television: The Dynamics of the Cultivation Process. In J. Bryant & D. Zillman (Eds.), Perspectives on Media Effects. Hilldale, NJ: Lawrence Erlbaum Associates.

Graham, M. (2014). Internet Geographies: Data Shadows and Digital Divisions of Labor. In M. Graham & W. H. Dutton (Eds), Society and the Internet (pp. 99-116). Oxford: Oxford University Press.

Greenwald, G. (2014). No Place to Hide. New York: Metropolitan Books.

Haigh, T., Russell, A. L., & Dutton, W. H. (2015). Histories of the Internet: Introducing a Special Issue of Information & Culture. Information & Culture, 50(2), 143–159. doi:10.7560/IC50201

Hardie, T., Cooper, A., Chen, L., O’Hanlon, P., & Zuniga, J. C. (2014). Pervasive surveillance of the internet: Designing privacy into internet protocols. IEEE 802 Tutorial. Retrieved from https://mentor.ieee.org/802-ec/dcn/14/ec-14-0043-01-00EC-internet-privacy-tutorial.pdf

Hargittai, E. (2002). Beyond logs and surveys: In-depth measures of people’s web use skills. Journal of the Association of Information Science and Technology, 53(14), 1239-1244. doi:10.1002/asi.10166

Howard, P. N. (2015). Pax Technica: How the Internet of Things May Set Us Free or Lock Us Up. New Haven, Connecticut: Yale University Press.

International Commission for the Study of Communication Problems (Ed.). (1980). Many voices, one world: communication and society, today and tomorrow: towards a new more just and more efficient world information and communication order. Paris; London; New York: UNESCO; Kogan Page; Unipub.

Jones, D., & Simons, B. (2012), Broken ballots: Will your vote count? Stanford, CA: CSLI Publications.

Kahin, B., & Wilson, E. (Eds.). (1997). National information infrastructure initiatives: Vision and policy design. Cambridge, MA: MIT Press.

Keen, A. (2007). The Cult of the Amateur. New York: Doubleday.

Keen, A. (2015). The Internet is Not the Answer London: Atlantic.

Laudon, K. (1977). Communications Technology and Democratic Participation. New York: Praeger.

Leigh, D., & Harding, L. (2011), WikiLeaks. London: Guardian Books.

Mayer-Schönberger, V. (2009). Delete: The Virtue of Forgetting in the Digital Age. Princeton and Oxford: Princeton University Press.

McLuhan, M. (1964). Understanding Media: The Extensions of Man. London: Routledge.

Mendel, T., Puddephatt, A., Wagner, B., Hawtin, D., & Torres, N. (2012). Global survey on internet privacy and freedom of expression. Paris: UNESCO.

Morozov, E. (2011). The Net Delusion: How Not to Liberate The World. New York: Allen Lane.

Mueller, M. L. (2002). Ruling the root: Internet governance and the taming of cyberspace. Cambridge, MA: MIT Press.

Norris, P. (2001). Digital divide: Civic engagement, information poverty, and the internet worldwide. Cambridge, UK: Cambridge University Press.

Pariser, E. (2011). The Filter Bubble. New York: Penguin.

Pressman, J. L., & Wildavsky, A. (1973). Implementation. Berkeley, CA: University of California Press.

Rainie, L. & Wellman, B. (2012). Networked: The New Social Operating System. Cambridge, MA: MIT Press.

Schotz, M. (2018, March 17). Cambridge Analytica Took 50M Facebook Users’ Data – And Both Companies Owe Answers, Wired. Retrieved from: https://www.wired.com/story/cambridge-analytica-50m-facebook-users-data/

Shrum, W. (2005). Internet indiscipline: Two approaches to making a field, The Information Society, 21(4), 273-5. doi:10.1080/01972240591007599

Sunstein, C. R. (2017). #republic: Divided Democracy in the Age of Social Media. Princeton, NJ: Princeton University Press.

Susskind, R. (2008). The End of Lawyers? Oxford: Oxford University Press.

United Kingdom Digital, Culture, Media and Sport Committee. (2017). Fake news inquiry - publications. Retrieved from: https://www.parliament.uk/business/committees/committees-a-z/commons-select/digital-culture-media-and-sport-committee/inquiries/parliament-2017/fake-news-17-19/publications/

Wellman, B. (2004), The Three Ages of Internet Studies, New Media & Society, 6(1), 123-129. doi:10.1177/1461444804040633

Williams, F. (1982), The Communications Revolution. Beverly Hills, CA: Sage.

World Internet Stats (2018), World internet user statistics. Retrieved from https://www.internetworldstats.com/stats.htm

Wu, T. (2003), Network neutrality, broadband discrimination, Journal of Telecommunications and High Technology Law, 2, 141–179. 


Collectively exercising the right of access: individual effort, societal effect


1. Introduction

Personal data is one of the main assets in the new data economy. As a by-product of the growth of internet-enabled communication, computing power and storage capabilities, the amount of personal data that is collected, processed and stored is growing fast. The increasing use of personal data offers potential economic and academic benefits, but also entails risks with regard to power and privacy (Zuboff, 2015). This raises new questions as to how this new data economy should be governed (Bennett & Raab, 2017; Economist, 2017).

The European Union (EU) and the United States (US) have different approaches to the question of how to govern personal data, though many elements seem similar and there is a partially shared genealogy (Hustinx, 2013; Schwartz, 2013). Recent events, such as the fall of the Safe Harbor agreement and the continued questioning of the EU-US Privacy Shield agreement, show that the differences are not just theoretical. While the US overall has a regime founded in consumer protection law, starting from the principle that data practices are allowed as long as they have a legal ground, the EU takes a more cautionary approach with more focus on protecting citizens' rights, by approaching privacy and data protection as a fundamental right. As part of this fundamental rights approach, Europe focuses on safeguarding citizens' rights through principles of transparency and individual control. According to the Article 29 Data Protection Working Party (2018) - a cross-European panel of data protection agencies - transparency is especially important, as it is one of the preconditions for the ability to exert control over the processing of personal data.

The European Union has had a unified data protection framework since 1995. In light of the developments sketched above, and with the aim of providing better protection for its citizens, a new data protection regulation came into force in the EU in 2018. While there are some important additions to the data regulation framework, its central core remains essentially unchanged. This happens while we do not even know whether the elements of this core function in practice, and while some elements, like informed consent, have been shown to be largely dysfunctional (e.g., Zuiderveen Borgesius, 2015). 1

The right of access is one of the key legal provisions in this framework that should provide transparency to citizens. It obliges organisations, upon request, to provide citizens with the personal data held about them, the source of this data, the purposes for which it is processed, and who this data is shared with (we discuss these provisions in more detail in Section 2). The right of access is intended to enable citizens to verify the lawfulness of an organisation's data practices after this processing has already started. So, in theory, this right should enable citizens to protect their rights related to the use of their personal data.

This paper addresses the following key questions: To what extent does the exercise of the right of access meet its objective in practice? Does it provide meaningful actual transparency to citizens?

We answer these questions by recruiting participants who send data access requests and share the replies with us. We then first analyse the replies to the access requests from the point of view of their compliance with the law. Next, we collect the views of the study participants, the citizens for whom the law is written, and ask them to rate the replies that they receive, what they expect from the law, and how they evaluate the right of access after having used it. Lastly, we reflect on these findings and explore under what conditions the right of access might contribute to transparency and ensuring the lawfulness of data processing. We conclude that a much deeper story emerges through perceiving the requests as a collective endeavour.

Our paper contributes to a considerable body of scholarly work on the different data protection regulations by legal scholars (e.g., Galetta & De Hert, 2015) and governance scholars (e.g., Bennett & Raab, 2017), by providing empirical evidence for analyses that often deal with abstract principles. There have been a few small-scale studies in the Netherlands of exercising access requests in practice, such as those by Van Breda and Kronenburg (2016) and Hoepman (2011). We extend these works by sending requests to a larger set of organisations, sending multiple requests to the same organisation, sending follow-ups, and sending requests for specific types of data. The study most similar to ours was performed by Norris, De Hert, L'Hoiry, and Galetta (2017), who conducted the first major multi-country empirical study of the right of access, sending and monitoring 184 access requests. To some extent, our work corroborates their findings, albeit in another country, as their study did not include the Netherlands. Our main methodological contribution is the inclusion of non-researcher citizen-participants in gathering the data, as well as in the interpretation of and reflection on the replies.

2. Right of access

In order to empower its citizens, European lawmakers created the so-called right of access in the Data Protection Directive (DPD). This gives citizens the right to obtain information about personal data that is processed pertaining to them. In the Netherlands, the DPD has been codified into law via the "Wet bescherming persoonsgegevens" (Dutch Personal Data Protection Act). Article 35 of that act defines the right of access as follows:

1. The data subject may request the controller without constraint and at reasonable intervals to notify him about whether personal data relating to him are being processed. The controller will notify the data subject about whether or not his personal data are being processed in writing within four weeks.

2. Where such data are being processed, the notification will contain a full summary thereof in an intelligible form, a description of the purpose(s) of the processing, the categories of data concerned and the recipients or categories of recipients, as well as the available information on the source of the data.

3. Before a controller provides the notification referred to in subsection 1, against which a third party is likely to object, he will give that third party the opportunity to express his views where the notification contains data relating to him, unless this proves impossible or involves a disproportionate effort.

4. Upon request, the controller will provide knowledge of the logic involved in any automatic processing of data concerning him.

In this research, almost all data access requests fall under the scope of this Dutch law. If the organisation is located in another European country the national implementation of the DPD applies. In most important aspects, these implementations are very similar. Differences can be found in attributes like the maximum time allowed for the response (e.g., four weeks in the Dutch law and 40 days in the UK law). In May 2018, when the new General Data Protection Regulation (GDPR) came into effect, these differences became a thing of the past. 2

The data protection regulation consists of a set of obligations for data controllers and rights for data subjects, and the goals of the different provisions overlap. With regard to transparency, there are rules that require a priori information provision directly to the data subject (art. 13 GDPR and art. 14 GDPR) and to the data protection authority (DPA) (art. 30 GDPR). There are also rules that require information provision a posteriori (art. 15 GDPR) via the right of access. A key difference between the two types of transparency is that a priori transparency can only describe data practices in abstract terms: it describes, more or less precisely, the categories of data that are being processed. A privacy statement may, for example, say that an organisation collects names, but only after processing has started can the organisation state that it recorded the name Adam. Therefore, a posteriori transparency can be used to check the accuracy of the processing, while a priori information provision cannot. We think that this specificity is also needed to verify the lawfulness of the processing: people have a better understanding of processes when they can observe them in concrete terms.

The text of the law is rather unclear in this respect, saying that "a full summary of the data" and "the recipients or categories of recipients" have to be provided. 3 This seems to leave room for an interpretation that allows for stating categories as an acceptable reply (Van Breda & Kronenburg, 2016). However, the Dutch DPA (2007) has taken the position that the reply should include a full reproduction of the data, and this position has been accepted by the courts. 4

The transparency-related rights and obligations should help the data subject: the right of access enables data subjects to check the quality of their personal data and the lawfulness of the processing. Recital 41 of the DPD puts it as follows: "… any person must be able to exercise the right of access to data relating to him which are being processed, in order to verify in particular the accuracy of the data and the lawfulness of the processing"5. De Hert and Gutwirth (2006) explain that the rationale for the data protection regulation is "to promote meaningful public accountability, and provide data subjects with an opportunity to contest inaccurate or abusive record holding practices".

Notwithstanding these legal provisions, recent surveys show that European citizens, as elsewhere, do not feel that they have transparency and control over the use of their personal data. And while the regulatory framework for dealing with the rapid increase of the collection and use of personal data relies heavily on citizen empowerment, very little is known about the practical effectiveness of the legal provisions, such as the right of access, that should guarantee this empowerment (OECD, 2013, p. 34).

3. Research method

To find out how the right of access functions in practice, we need to observe how organisations answer data access requests, compare this to the criteria formulated in the law, and furthermore evaluate the experience of citizens making use of the right. In order to do so, we recruited participants to send data access requests, and interviewed them about their experience during the process, as we explain in this section.

3.1. Data collection

The data used in this study all derive from actual replies by organisations to right of access request letters sent by seven individuals—two of the authors and five participants. Initially, to gain a basic understanding of the process involved, the authors sent approximately 35 access requests. At a later stage, eight people connected to the authors, but who are not data governance researchers, were invited to participate in the study, five of whom completed it. 6

Potential participants received documentation explaining the basics of the legal right under investigation, the purpose of the study, a template of the access request letter, a list for choosing the organisations to send a data access request to, and a consent form. Participants took part in a semi-structured intake interview, and were asked about their expectations of access requests, their attitudes towards the use of personal data in society, and their motivation for participation. These interviews served as a reference for the subjective judgment of the effectiveness of the access requests later on.

Participants were next tasked with choosing at least ten organisations to send data access requests to—with a suggestion of five that deliver public services (e.g., public transport and education), three dominantly online companies (e.g., online shops and internet service providers), and two miscellaneous ones. These suggestions ensured we collected multiple data points on similar organisations, while giving participants the freedom to engage actively and with personal interest.

Subsequently, we helped the participants draft the data access requests, based on a fixed template. The standard template was a slightly adapted version of the template that the Dutch Data Protection Authority (DPA) offers on its website. 7 One participant used the standard letters provided by the Dutch digital rights organisation Bits of Freedom, which, while worded slightly differently, contain the same elements. These include a request for (i) an overview of the data being processed, if any, (ii) an explanation of the purposes of collection, (iii) with whom data has been shared, and (iv) the origin of the data. In 14 cases an English letter was sent, based on a similar template provided by the British DPA, the Information Commissioner's Office (ICO). In 16 cases the letter was further individualised, requesting specific types of data the participant wished to receive (e.g., internet traffic, or data related to a specific flight). The postal address of the target organisation was also added to the template. This address was taken from the organisation's privacy policy or, if not provided there, from Bits of Freedom's online database or the general address of the organisation. As a means of identification for the receiving organisation, a copy of an ID document was added.

Overall, the seven individuals sent a total of 106 access requests to organisations in different sectors, as shown in Table 1. Of these, 65 requests were sent to public organisations and 41 to private organisations. Most requests were sent by letter (85), but e-mail (15) and web forms (6) were also used. The majority of the target organisations (92) were located in the Netherlands.

In order to check the progress of the data access requests, and to find out if there were any problems, we had regular (often weekly) contact with the participants. If after four weeks—the maximum time allowed by the law—a reply had not been received, participants were asked to send a reminder to the organisation, indicating that they expected a swift answer and referring to the legal deadline. Two weeks later, a second reminder followed, suggesting the possibility of seeking recourse via the DPA if a reply was still not received. In total, 47 first reminders and 21 second reminders were sent, while none of the participants filed a complaint with the DPA for non-response.
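The reminder schedule just described can be sketched as a small helper (a hypothetical illustration in Python; the function and constant names are our own, while the four-week and two-week intervals follow the text):

```python
from datetime import date, timedelta

# Legal deadline under the (then applicable) Dutch law: four weeks.
LEGAL_DEADLINE = timedelta(weeks=4)
# The second reminder followed two weeks after the first.
REMINDER_GAP = timedelta(weeks=2)


def reminder_dates(request_sent: date) -> tuple[date, date]:
    """Return the dates on which the first and second reminders are due.

    The first reminder is sent once the four-week legal deadline has
    passed without a reply; the second follows two weeks later.
    """
    first = request_sent + LEGAL_DEADLINE
    second = first + REMINDER_GAP
    return first, second


# For a request sent on 1 March, the first reminder is due four weeks
# later and the second two weeks after that.
first, second = reminder_dates(date(2017, 3, 1))
```

This is only a scheduling sketch; in the study the reminders were of course drafted and sent by the participants themselves.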

When participants received a reply, they were asked to share it with us. From these responses we recorded basic process information, such as response time, the number of reminders sent, and how the response was received (regular post, registered post or e-mail). We noted whether the responses contained answers to the different sub-questions asked—where the data comes from, with whom it is shared, and why it has been collected—and whether these answers were generic or specific. We also asked the participants to evaluate the responses on completeness, communication style, and accuracy of the data received (to the extent that it was provided). They could also write down general remarks.

Finally, after all data access requests were processed, participants were interviewed again, and asked to reflect on the effectiveness of the right of access and their participation in the research.

Table 1: Number of data access requests sent to different sectors

| Sector | Example organisations | Access requests sent | Target organisations |
| --- | --- | --- | --- |
| Education | Delft University of Technology, Design Academy Eindhoven, Gymnasium Haganum (high school) | 7 | 5 |
| Finance | ABN, Mastercard, OHRA | 6 | 5 |
| Government | Tax authority, municipalities, UWV | 30 | 19 |
| Platforms | Mi, Skype, Spotify | 10 | 9 |
| Retail | Happy Socks, Ikea, Bol.com | 8 | 6 |
| Telecom | KPN, T-Mobile, Ziggo | 8 | 6 |
| Transport | Car2Go, NS, Amsterdam Airport Schiphol | 20 | 7 |
| Utilities | Eneco, Energiedirect, PostNL | 7 | 7 |
| Other | NGOs, art institutions, general practitioners | 10 | 10 |

3.2. Data analysis

As we have discussed, the right of access aims to bring transparency to citizens about the way in which organisations use their personal data. The transparency to be achieved is, however, not defined precisely or uniformly in the law, case law, or scientific literature.

We operationalised transparency in two ways. The first was to compare the access responses to the formal legal criteria. The law and related case law specify several mandatory elements of a response to an access request (see Section 2, "Right of access"). There needs to be a reply within a number of weeks (four in Dutch law), and the reply needs to include the categories of data that the organisation processes, the actual data that is processed, an explanation of the purposes for which it is processed, an explanation of how the organisation obtained the data, and if, and with whom, the data was shared. We checked the replies for these elements, and whether they were given in general or specific form.
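This first operationalisation can be sketched as a simple checklist (a hypothetical illustration; the field names are our own labels for the legally required elements, not the study's actual coding scheme):

```python
# Our own labels for the elements the law requires in a reply;
# the real study used its own coding scheme.
REQUIRED_ELEMENTS = [
    "replied_within_deadline",  # reply within four weeks (Dutch law)
    "contains_data",            # the actual personal data processed
    "purpose_of_processing",    # why the data is processed
    "source_of_data",           # how the organisation obtained it
    "recipients_of_data",       # if, and with whom, data was shared
]


def missing_elements(reply: dict) -> list:
    """Return the legally required elements a coded reply lacks."""
    return [e for e in REQUIRED_ELEMENTS if not reply.get(e, False)]


# A timely reply that contains the data but omits the purpose, source
# and recipients is still incomplete under these criteria.
reply = {"replied_within_deadline": True, "contains_data": True}
gaps = missing_elements(reply)
```

A reply is fully compliant under this sketch only when `missing_elements` returns an empty list; whether an element was given in general or specific form would need an additional code per element.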

The second way was to let citizens, for whom the law is intended, judge whether the responses gave sufficient insight into the lawfulness and accuracy of the data processing, as the law intends. As described in Section 3.1, this was done by asking the participants to grade each access request and response, complemented by the intake and final interviews.

3.3. Ethical considerations

Before involving participants, we sought and received approval from the Delft University of Technology’s Human Research Ethics Committee.

Our research requires the participants to share replies to their access requests with us, which by their nature might contain highly sensitive personal information. One principle of ethical data sharing is informed consent. We thus informed participants in detail about the experimental design, and strove for open communication and an atmosphere that made it easy for participants to decide to share or not share their data, to share only part of their data, or to revoke any previous decision on this matter. Participants could, at any point and for any reason, withdraw from the research. Moreover, keeping the data safe was a key concern. The original response letters were held by the participants themselves, and we stored a digital copy on an encrypted university server, accessible to two of the researchers and the individual participants only.

Another consideration is that replying to a data access request, if taken seriously, may cost an organisation considerable time. Some organisations we talked to reported that they had previously been targeted by public data access request campaigns and experienced this almost as a 'distributed denial of service attack'. While acknowledging this concern, given that organisations have a legal obligation to reply to access requests, and given the importance of investigating access rights through the actual exercise of the right (versus investigation by proxy), we deem our method acceptable. For larger-scale research with participants, however, some form of load balancing across the queried organisations is needed in the research design. Finally, since our research is not intended as an attack on any organisation, and especially not on any individual within an organisation, we protect the privacy of the individuals responding to access requests and never mention their names.

4. Legal compliance

We now present our findings on the extent to which the replies to the access requests complied with the law. In 4.1 we look at the most basic questions: was there a reply at all, and how long did organisations take to reply? In 4.2 and 4.3, we describe the extent to which the replies were complete. In 4.4 we discuss how responses to specific requests and follow-up questions were handled, and the patterns that can be observed by matching replies to similar requests.

4.1. Is anybody listening?

Approximately 80 percent of the data access requests were eventually answered. 8 About half were answered within the four weeks stipulated under Dutch law and, as the response time histogram in Figure 1 shows, a relatively large proportion of the replies arrived in the fourth and fifth week after the request, around the legal deadline. Coolblue, a web shop, was the fastest to respond with data, replying by letter within two days. A small number (7) of organisations replied within a week, but most of these responses did not contain the requested data. 9 At the other end of the spectrum, 34 organisations answered late; 21 answered after one reminder, and 9 after two reminders were sent.

Figure 1: Histogram of response time (in days)

Figure 2 provides an overview of the replies received, and the department that sent the reply. Approximately 33% of the responses included user data, and an additional 15% included categories of data but not the data itself, while 26% stated they did not have any data, and 5% of responses referred the participant to another organisation. Most replies were signed by the customer service department (25%), followed by privacy (13%), legal (12%), and others.

Figure 2: Response classification (left) & responding department (right) for total sample

Finally, Table 2 shows the response classification and response time by sector. There is considerable diversity across sectors. For instance, all educational organisations in our sample replied, while 35% of the requests to companies in the transport sector remained unanswered.

Table 2: Access request response classification and time (by sector)

| Sector | N | Data (specific or categories) | No data or referral | No reply (excluding cancelled) | Response time (mean number of days) |
| --- | --- | --- | --- | --- | --- |
| Education | 7 | 57% | 43% | 0% | 29.3 |
| Finance | 6 | 67% | 33% | 0% | 40.8 |
| Government | 30 | 40% | 43% | 10% | 34.2 |
| Platforms | 10 | 60% | 10% | 20% | 33.1 |
| Retail | 8 | 50% | 25% | 25% | 30.8 |
| Telecom | 8 | 38% | 38% | 25% | 21.8 |
| Transport | 20 | 30% | 35% | 35% | 26.5 |
| Utilities | 7 | 57% | 29% | 14% | 20.7 |
| Other | 10 | 80% | 0% | 10% | 28.8 |
| Total | 106 | 48% | 31% | 17% | 30.5 |
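The kind of per-sector aggregation reported in Table 2 can be sketched as follows (a hypothetical illustration: the record structure and field names are our own, and the sample records are made up, not the study's data):

```python
from collections import defaultdict
from statistics import mean

# Made-up per-request records; the real study tracked comparable
# attributes (sector, response classification, response time in days).
requests = [
    {"sector": "Retail", "classification": "data", "days": 2},
    {"sector": "Retail", "classification": "no_reply", "days": None},
    {"sector": "Telecom", "classification": "no_data", "days": 30},
]


def sector_summary(records):
    """Aggregate the answered share and mean response time per sector."""
    by_sector = defaultdict(list)
    for r in records:
        by_sector[r["sector"]].append(r)
    summary = {}
    for sector, rs in by_sector.items():
        answered = [r for r in rs if r["classification"] != "no_reply"]
        times = [r["days"] for r in answered if r["days"] is not None]
        summary[sector] = {
            "n": len(rs),
            "answered_share": len(answered) / len(rs),
            "mean_days": mean(times) if times else None,
        }
    return summary
```

Unanswered requests carry no response time, so they are excluded from the mean but still counted in the sector's total, mirroring how a mean response time can coexist with a non-zero "no reply" share in Table 2.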

4.2. Diversity of responses

To give a feel for the diversity among the replies, which exists in many different regards, we start with a detailed description of two answers: one compliant and one non-compliant.

Stroom

A data access request was sent by letter to Stroom The Hague, a publicly funded art centre in The Hague, by a participant who collaborates with them. Nineteen days after the request was sent, a response was received in the form of a letter from the director of the organisation.

In the two-page reply, seven categories of data, including name and contact details, artist details, nationality, and correspondence, are discussed. For each category, the letter describes how the organisation has received the data (for example, if it was given to them by the participant). The data is either provided in the letter, or a reference is given to an online platform where the participant can access the data, and they briefly explain why the data has been (or is) processed. Furthermore, the letter indicates which of these data are publicly available, and even includes a section about data they do not currently have, but might have under different conditions, for instance, if the participant would have had a financial relationship with this organisation.

Ziggo

A data access request was sent to Ziggo—a large Dutch cable company owned 50% by Vodafone and 50% by Liberty Global—by a participant. A customer service representative called within two days asking if the participant is facing any problems, for example with their password, and expressing that they do not really know what to do with this request. The participant explains that she would like to know how Ziggo deals with personal data, and if, for example, they record what television programmes have been watched or which internet pages have been visited. The customer service representative responds that they will figure this out and get back to the participant in writing.

Four days later, the participant is called again, this time by a representative of complaints management, who again expresses that it is not "really clear" to them what they have to do with the letter. The participant explains the same story again, and requests access to her data. The complaints manager suggests the participant read the information on the website, offering no additional information. The same day, the participant receives Ziggo's privacy policy by email, which explains the right of access in layman's terms: "You have the right to know which of your personal data we store. We can request a small fee for the administrative costs that are connected to offering this type of data". But still no data is offered, nor are the specific questions regarding specific types of data answered.

A few weeks later, the participant sends a more specific data access request, asking for an overview of all the data related to her internet use in the past three months, and refers to the fact that, in her view, the previous data access request was not sufficiently addressed. Nine months and a reminder letter later, no response has been received.

4.3. How complete are the replies?

The data protection law stipulates that organisations should, upon request, provide a full overview of the personal data held, plus the purpose and method of collection, and who the data was shared with. Just like the diversity in response time, there is considerable diversity in the content of replies across sectors, as the breakdown in Table 3 shows.

Table 3: Completeness of access responses, based on the elements specified in the law and reiterated in the requests, grouped by sector

| Sector | N | Contains data: specific | Contains data: general | Purpose of collection | Method of collection | Data sharing: specific | Data sharing: general |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Education | 7 | 14% | 43% | 57% | 43% | 29% | 29% |
| Finance | 6 | 50% | 17% | 67% | 50% | 17% | 67% |
| Government | 25 | 24% | 24% | 36% | 24% | 24% | 20% |
| Platforms | 7 | 71% | 12% | 43% | 29% | 0% | 71% |
| Retail | 6 | 50% | 17% | 33% | 33% | 33% | 17% |
| Telecom | 6 | 50% | 7% | 33% | 0% | 0% | 33% |
| Transport | 13 | 46% | 0% | 54% | 15% | 8% | 46% |
| Utilities | 6 | 50% | 17% | 50% | 67% | 0% | 50% |
| Other | 8 | 62% | 38% | 50% | 62% | 62% | 12% |
| Total | 84 | 61% 10 (specific or general) | | 45% | 32% | 55% (specific or general) | |

Overall, even among the organisations that did respond to the data access request, the reply only very rarely contained a complete overview. Many organisations reply with lists of labels or categories of data, instead of sharing the specific data. For example, Happy Socks sent a participant an email saying that they hold data such as his name and home address, but did not give the actual name and address they have on file. 11 OHRA, a health insurance company, after 69 days and two reminders, sent a letter containing a list of the categories of data they collect, including, amongst other things, "medical data", and a list of the categories of potential recipients of the personal data, including, amongst others, "healthcare providers".

When data is given, it can be challenging for the data subject to know whether it is complete. For example, The Hague Library sent a reply containing a print-screen of what seems to be their Customer Relationship Management (CRM) system. This print-screen shows a tab called "borrower registration", which includes fields like name, date of birth, home address, contact details, and bank account number. Is this all the information the library system holds? Or are there other tabs in the system—with, for instance, payment history, a history of the books that have been borrowed, or a profile of the borrower's interests—which are not included because of a narrow interpretation of "personal data"? 12

Access requests sent to several municipalities—which all received the same request, and probably hold similar personal data—shed light on another aspect. Large organisations often find it hard to give a complete overview of all the personal data they hold, and choose different ways to handle this complexity. The Municipality of The Hague sent a 16-page list of labels of data they share with other organisations through two databases, "BRP Verstrekkingsvoorziening" (Personal Records Database Distribution Facility) and "Beheervoorziening BSN" (Social Security Number Distribution Facility), but did not offer any further explanation (see Appendix 1 for the first page of the reply). The Municipality of Amsterdam, on the other hand, responded with a letter explaining that they have a multitude of public tasks and responsibilities, and therefore register personal data in multiple systems. They invited the participant to visit in person to see if the access request could be specified more narrowly. The Municipality of Amstelveen took the middle ground: they sent an overview of some registrations, and invited the participant to visit in person to learn about the ways the municipality deals with personal data.

Indeed, the text of the law is rather unclear, stating that “a full summary of the data” and “the recipients or categories of recipients” have to be provided. This seems to leave room for various interpretations, for instance that stating categories only is an acceptable reply (Van Breda & Kronenburg, 2016). However, as previously mentioned, the Dutch DPA (2007) has taken the position that the reply should include a full reproduction of the data if the data subject asks for it, not just the categories, and this position has been accepted by the courts. 13 The GDPR addresses the ambiguity with regard to returning the actual data. 14

Another aspect of incompleteness is that many organisations do not answer the sub-questions about the purpose of processing and data sharing (Table 3). In fact, while 83% of organisations answered the access requests, only 22% answered all the sub-questions asked, and only 10% were specific both about the data collected and about which organisations the data was shared with. Bol.com, a large Dutch online web shop, was unique in the sample in sharing the specific third-party partners that receive data for processing payments and product delivery.

4.4. Do more specific requests and follow-ups help?

One might expect that the likelihood of receiving the full and specific data increases when a more specific request is sent. The empirical data shows a mixed picture in this respect. Participants sent 16 modified access requests asking for specific forms of data. Out of these 16 cases, only three received a response that directly addressed the specific question posed. Participants also sent 13 follow-up requests. These almost invariably received an individualised response directly addressing the question posed.

For example, participants sent five access requests to Amsterdam Airport Schiphol, two of which were modified. Schiphol replied to four participants, all with the same answer: that the airport does not have any personal data relating to them in its databases. 15 This despite the fact that one participant requested all personal data related to one specific recent flight, and another requested data related to the Wi-Fi-tracking system while including the MAC address of the phone carried. These specific elements were simply ignored. We also sent one follow-up letter to Schiphol, asking how it is possible that the airport has no personal data while handling luggage and boarding passes and engaging in Wi-Fi-tracking. Schiphol answered that they indeed keep luggage and boarding pass data, but delete these a few days after a flight, and that the Wi-Fi-tracking data they hold cannot be traced back to an individual. 16

This example follows a pattern we regularly observed. In most cases a request for information about specific data in an initial access request is ignored, while follow-up requests more often receive an individualised reply.

Sometimes a follow-up request does receive an answer with data that was previously withheld. The UWV (Employee Insurance Agency), the autonomous administrative authority commissioned by the Ministry of Social Affairs and Employment to implement employee insurance and provide labour market and data services, is an example of this. At first, a participant sent a standard access request to the agency, to which it replied that it did not use any of the participant’s personal data. 17 The participant then sent a follow-up letter, in which she pointed out that, according to information on its own website, UWV processes data about the work and income history of all employees in the Netherlands, and that she therefore did not understand how it was possible that UWV processed none of her personal data. In response to this letter, UWV sent a reply including many pages from a system in which various personal details, including detailed income data, were recorded.

The examples of Schiphol, UWV, and the Dutch municipalities (section 4.3) show that matching responses from the same (or related) organisation increases the ability to judge the quality, completeness, and veracity of an access response. To demonstrate this point, consider how Van Breda and Kronenburg (2016) judged Schiphol’s access response, in isolation, to be of rather high quality. They found the response, despite providing no data, to be transparent and helpful, as it provides information on other organisations that may process information about the data subject in the airport, and they commended the fact that the response was sent by registered post. But by sending five requests and comparing the answers, we found that Schiphol sends exactly the same letter, irrespective of the precise question posed in the request. In other words, matching responses allows for a better judgement of the completeness of the individual answers.

5. Participant perceptions

Our overall analysis so far suggests a rather mixed conclusion with regard to compliance. There clearly are organisations that are making an effort to be transparent about the way they process personal data, while others, whether out of inability or unwillingness, are non-compliant with the basics of the law. More importantly, however, the right of access is a data subject right intended to empower the citizen. Thus, we have to go beyond a formal legal judgement and take into account the citizens’ perspective to assess the extent to which the right of access functions. We shall do that in this section.

5.1. Best and worst responses

When participants were asked in the interviews which of the responses they thought were best, two criteria emerged throughout: the completeness of the data and, in different forms, the feeling of being taken seriously. Completeness was appreciated in terms of sheer quantity, coverage in time, and precision in describing the origin of the data. But the more striking aspect participants judged was the tone of the interactions and the implied willingness to provide transparency: “Amstelveen Municipality did best because they invited me and were clearly putting an effort to get you the insight you wanted, even though you did not even know exactly what you wanted”, or “TU Delft explained a lot and although I did not get the data I felt that I could have gotten it”.

When asked which were the worst responses, the mirror image emerges. While participants disliked responses without data, they were more vehemently critical of responses that did not treat them respectfully. Participants made remarks such as: “You get the feeling that they try to keep you at a distance and make it complicated”, “The way they are responding is almost like I am an idiot and they are making stuff up”, “the way in which they address you is kind of aggressive to start with”, or “Their answer seems like a Jedi/Sith mind trick”.

5.2. Completeness and communication style

We asked participants to grade all individual access request replies on a Likert scale (very bad – bad – neutral – good – very good) on the aspects of perceived completeness and communication satisfaction. If we map these grades to numbers (very bad = 1, very good = 5), the average grade participants gave for perceived completeness was 2.1 (bad), and for communication satisfaction 2.6 (midway between bad and neutral).
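The grading scheme described above is simple enough to sketch in a few lines of code. The snippet below is a hypothetical illustration of the Likert-to-number mapping and averaging step; the example grades are invented, not the study's actual data or tooling:

```python
# Hypothetical sketch of the Likert-to-number mapping described above;
# the sample grades are illustrative only.
LIKERT = {"very bad": 1, "bad": 2, "neutral": 3, "good": 4, "very good": 5}

def average_grade(grades):
    """Map Likert labels to 1-5 scores and return their mean."""
    scores = [LIKERT[g] for g in grades]
    return sum(scores) / len(scores)

sample = ["bad", "neutral", "very bad", "bad"]
print(average_grade(sample))  # 2.0 for this illustrative sample
```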

While the number of requests is too low to make statistically significant claims about sectors as a whole, there seems to be quite a marked difference between different sectors in the sample, as shown in the Figure 3 boxplots. The high grades for the educational organisations and low grades for the telecommunications sector in particular stand out.

Boxplots of perceived completeness & communication satisfaction grades
Figure 3: Perceived completeness (left) and communication satisfaction (right) grades given by participants to access responses. (Boxplots are ordered by the median grade per sector, indicated by the orange line in the rectangles. The rectangles show the 25th and 75th percentiles of the grades, which are on a Likert scale.)

Low grades in the telecommunications sector (which includes mobile operators and ISPs) can be traced to a number of specific behaviours. For example, three of the four organisations that told participants to check their privacy policy (Car2go, Tele2, Telfort, Ziggo) were in the telecommunications sector. This made participants feel “[they] let you walk in circles, [and] you get nowhere”, as their privacy policies explicitly mention the right of access that the participant is trying to make use of. With regard to completeness, none of the companies in the telecom sector provided internet traffic or location data (gathered through connections with cell-phone towers), even when specifically requested. Participants felt very uncomfortable about this, because they believed that these companies have much more data than they share through the access response. Additionally, participants expect more from technologically capable companies: “I tend to be a bit more lenient with companies or organisations that are not really IT based. [But] for example if a whole business is set up around databases and providing a website and giving you services, I would expect that they also have the expertise to very easily create a database dump and just give it to me”. Our finding about the negative perception of the telecommunications sector is in line with findings from Norris et al. (2017), who find that seven out of ten organisations in the mobile telephony branch apply restrictive practices when answering data access requests.

5.3. What do citizens expect from the right of access?

Before sending the data access requests, we interviewed participants and asked them what they expected from exercising the right of access, and why they were participating in our research. After having sent data access requests and receiving the replies, we interviewed participants again and asked them to reflect on the right of access based on their experience within this research.

Before sending access requests, most participants expressed that they did not expect to actually get access to their data through using the right. Instead, they expressed that exercising the right of access could still be good for other reasons: when confronted with an access request, an organisation might start to critically assess its data practices. As one participant put it, “I want to participate in this research because I want it to initiate a discussion. This to me is even more important than getting to see my own data. These two things are not even comparable. It is extremely important that we will make sure that in society, in politics and within organisations, the awareness is built”.

Most participants reported that the replies were, by and large, in line with their expectations, i.e., the right does not work that well with respect to getting the data that they expected organisations to have: “No, it is not effective”, “In reality it is worse than I expected”, and “It feels you are still ending up in some kind of black box”. However, they also expressed that the right works with respect to gaining a deeper understanding of data practices: “It has made me more aware” and “The experience of sending out these access requests was really eye opening”.

Most importantly, a feeling of gaining strength through collectivisation was expressed. Participants said things such as “I think it has contributed to organisations building some kind of process for dealing with access requests, especially because I know we were in a group” and “it gives me a feeling of the potentiality of this [right] helping society in order to be more in control of our data or to be at least informed [...] about our data being hosted by third parties”.

6. Discussion

6.1. A failing instrument

The goal of the right of access as a juridical tool is to enable citizens to verify the lawfulness of the processing. 18 In this goal, it mostly fails. A substantial proportion of the queried organisations, whether out of inability or out of unwillingness, are non-compliant with the law. And while many replies are quite elaborate, even these frequently provide inadequate information for the individual to make an informed judgement about the lawfulness of the processing. Most participants reported the process to be a poor experience in terms of transparency and empowerment.

We also found that, even though this law has been in place for over fifteen years, certain organisations reported that they had never received an access request, indicating that the right of access is rarely exercised by citizens. This is especially intriguing in the case of large organisations that process personal data, such as Delft University of Technology, with over 20,000 students, or Stedin, an electricity and gas network company with around 2,000,000 clients. Participants did not ask every organisation to report whether their request was the first it had ever received, so this is probably true in other cases as well. This is quite remarkable, especially considering that the right of access has been present in Dutch law since 2001.

That the right of access has so far not been used very often is another sign that it does not currently function well. Of our participants, only one had ever used the right of access before. If we ask why this may be the case, a possible answer is that people just do not care that much about the particular data practices of individual organisations. But given the reflections by the participants in the interviews (section 5.3), an alternative cause may be that the expectation of success is very low.

6.2. A way to salvage it

Based on our experience, we see some ways forward. First, the exercise of the right of access can be part of an effort to create awareness and spark dialogue among citizens as well as organisations. And second, it could be used collectively as a way to increase empowerment.

The underlying problem that could be addressed through collectivisation is twofold. In the relationship between the citizen and the data controller, the starting point is one of a deficit of both power and knowledge on the part of the citizen (as argued by De Hert and Gutwirth, 2006).

With regard to the question of knowledge, there are a few connected issues. Once a reply to an access request is received, it is very hard to know to what extent the reply is complete, or to judge its quality and the lawfulness of the data practices it reveals. To be able to judge completeness, one needs exactly the knowledge that one does not have and is trying to obtain through the access request. This judgement can therefore only take place in the context of a network of knowledge. The contextual knowledge needed to judge the quality of a reply can come from matching replies to other access requests, and from others with specialised knowledge.

That matching can help was demonstrated in the cases of Amsterdam Airport Schiphol (Section 4.4) and the Dutch municipalities (Section 4.3). We were only able to see that Schiphol was always sending the same answer because we had different answers to compare with each other. And by comparing the reply of one municipality, which only sent information regarding one database, with those of other municipalities, which showed they had personal data in a variety of databases, it appears likely that this municipality processes more data than what it sent to some participants.

The ability to judge the quality of a reply also depends on specialised knowledge from the legal and technical realms. Such is the case, for example, when the question is whether a Media Access Control (MAC) address, a unique identifier for a communication device that is collected during Wi-Fi tracking, should be considered personal data or not. According to the Dutch DPA (2015), a MAC address is personal data, even when hashed by the organisation. A citizen who does not have the technical knowledge to understand how this works, or the legal knowledge that the DPA has voiced in this opinion, stands in a very weak position against an organisation that takes an opposing position. Similarly, when the Dutch unemployment agency UWV, one of the largest governmental institutions of the Netherlands, claims not to process any personal data, a citizen needs to (1) know that this cannot be true, and (2) have the audacity to oppose the claim of a large government organisation. In such situations, making access requests as part of a community of people, some of whom possess specialised knowledge, strengthens the position of citizens.
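The technical point behind the Dutch DPA's position can be illustrated with a short sketch (ours, not from the paper; the MAC address below is made up). Because hashing is deterministic, anyone who knows a candidate MAC address can recompute the hash and re-link the supposedly anonymised record to a specific device:

```python
# Sketch: why a hashed MAC address can still be linked to a device.
# The MAC address used here is invented for illustration.
import hashlib

def hash_mac(mac: str) -> str:
    """Deterministic SHA-256 digest of a normalised MAC address."""
    return hashlib.sha256(mac.strip().lower().encode()).hexdigest()

stored = hash_mac("a4:5e:60:c2:17:0b")   # what a Wi-Fi tracker might store

# Anyone holding the same MAC address reproduces the same digest,
# so the "anonymised" record remains linkable to that device:
print(hash_mac("A4:5E:60:C2:17:0B") == stored)  # True
```

This is why hashing alone is generally not considered anonymisation: the record remains re-identifiable to anyone who can obtain or enumerate the underlying identifiers.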

Viewing the right of access as a legal tool to empower citizens vis-à-vis more powerful and knowledgeable organisations has parallels with freedom of information act (FOIA) rights. The ideal behind FOIA rights is that the citizenry has the right to gain knowledge about the functioning and decision making of governmental bodies (Kreimer, 2008), and similar arguments have been made with regard to private companies (Pasquale, 2015). Only an informed citizenry can make informed political judgements about a government that, in a democratic society, should be under its control. The rationale for the right of access is very similar. Moreover, like the right of access, FOIA rights are individual rights, while the benefit is meant to accrue to society as a whole.

Similarly, the difficult conditions of unequal information and power experienced by citizens who exercise their right of access resemble the conditions experienced with FOIA rights. Kreimer (2008) notes, for example, that “to press a recalcitrant administration for disclosure under FOIA requires time, money and expertise”. And while the right of access has been codified in such a way that it ought to be relatively easy for the citizen to exercise, for example by imposing few formal requirements on the request and capping the cost that organisations can charge for fulfilling one, getting a clear picture of data practices through the right of access is still very difficult, as organisations limit access to the information in many different ways. Given the parallels, the conditions under which the right of access comes to full fruition will also be very similar to those for FOIA rights. As Kreimer (2008) phrases it, FOIA regulation is effective when part of a broader “ecology of transparency” that includes “tenacious requesters” like well-financed NGOs and an active media.

6.3. The ecology of access requests and future work

If, as we argue, the access right works best when used collectively and aimed at empowerment and transparency at a societal level, the next question is: what are the best-fitting forms of collective organisation for this right?

Several forms have been tested so far. A number of online projects, including Bits of Freedom’s Privacy Inzage Machine and Citizen Lab and Open Effect’s Access My Info (see Hilts & Parsons, 2015), help citizens generate access request letters. These projects create awareness among citizens about the right, and lower the barrier to exercising it by simplifying the process. They may also encourage organisations to be better stewards of personal information, as receiving access requests in high numbers signals to an organisation that citizens are concerned about how their personal data is used, and can “spur institutions to improve their privacy practices”. 19 Activists such as Rejo Zenger (in the Netherlands) and Max Schrems (in Austria and Ireland) have exercised their right of access, used blogs and websites to share their findings with a broader public, and entered into litigation in order to force organisations into increased transparency about their personal data practices (e.g., Zenger, 2011). Others, like Dehaye (2017), have combined the creation of an online access request tool with academic work and investigative journalism.

We plan to extend the current research in two ways. First, we are currently building a digital platform to recruit a larger group of participants in various EU countries, to send and track access requests in line with the method explored in this paper (see Asghari et al., 2017). This allows for a more elaborate empirical assessment of the right of access in action and, in particular, for comparing sectoral and country-level differences. Second, we plan to include the point of view of the target organisations, by interviewing their DPOs in the future.

7. Conclusion

Just as the proverbial proof of the pudding is in the eating, rather than in a careful assessment of its recipe, the right of access should be assessed by how effective it is in practice. And since the right is meant to empower citizens, citizens should be the ones to judge whether it empowers them. In our study, we asked participants to send access requests, collected the responses to their requests, and interviewed them along the way. The resulting picture is not pretty: while there are some positive exceptions, overall compliance with the right of access is a mess. Non-compliance with the formal requirements of the law is widespread, with some organisations failing to answer at all, and others obstructing transparency in their answers. This mess did not surprise our participants, though.

This sobering picture, however, does not mean that the right is useless. When the right is used in a collective manner, it creates a context in which to judge the quality of replies and the lawfulness of data practices by comparing replies to similar access requests. Participants also perceived a societal much more than an individual value in exercising this right, not least because through collective use the power imbalance between individual citizens and organisations shifts in favour of the citizen.

Acknowledgements

We thank the participants for their effort in sending all the access requests, for thinking through the replies they received, and for their positive support of this research endeavour. We thank Nele Brökelmann, Kathalijne Eendhuizen, Stefanie Hänold, Joris van Hoboken and Mirna Sodré de Oliveira, as well as the reviewers, for their constructive critical comments on the text. This work was supported by the Princeton University Center for Information Technology Policy (CITP) Interdisciplinary Seed Grant Program.

References

Article 29 Data Protection Working Party. (2018). Guidelines on transparency under Regulation 2016/679 (No. WP260 rev.01). Brussels: European Union. Retrieved from http://ec.europa.eu/newsroom/article29/item-detail.cfm?item_id=622227

Asghari, H., Greenstadt, R., Mahieu, R. L. P., & Mittal, P. (2017). The Right of Access as a tool for Privacy Governance. Presented at HotPETs during The 17th Privacy Enhancing Technologies Symposium. Retrieved from https://petsymposium.org/2017/papers/hotpets/rights-of-access.pdf

Bennett, C., & Raab, C. D. (2017). Revisiting the Governance of Privacy. (SSRN Scholarly Paper No. ID 2972086). Rochester, NY: Social Science Research Network. Retrieved from https://papers.ssrn.com/abstract=2972086

Bruening, P. J., & Culnan, M. J. (2015). Through a Glass Darkly: From Privacy Notices to Effective Transparency. North Carolina Journal of Law & Technology, 17(4), 515-579. Retrieved from http://heinonline.org/hol-cgi-bin/get_pdf.cgi?handle=hein.journals/ncjl17&section=20

Dehaye, P.-O. (2017). Cambridge Analytica and Facebook data. Medium. Retrieved May 23, 2017, from https://medium.com/personaldata-io/cambridge-analytica-and-facebook-data-299c54cb23fa

De Hert, P., & Gutwirth, S. (2006). Privacy, data protection and law enforcement. Opacity of the individual and transparency of power. In E. Claes, A. Duff, & S. Gutwirth (Eds.), Privacy and the criminal law (pp. 61–104). Antwerp/Oxford: Intersentia.

De Hert, P., & Papakonstantinou, V. (2016). The new General Data Protection Regulation: Still a sound system for the protection of individuals? Computer Law & Security Review, 32(2), 179–194. doi:10.1016/j.clsr.2016.02.006

Dutch DPA. (2015). Wifi-tracking rond winkels in strijd met de wet. Retrieved from https://autoriteitpersoonsgegevens.nl/nl/nieuws/cbp-wifi-tracking-rond-winkels-strijd-met-de-wet

Dutch DPA. (2007). Publication of personal data on the internet. Retrieved from https://autoriteitpersoonsgegevens.nl/sites/default/files/downloads/mijn_privacy/en_20071108_richtsnoeren_internet.pdf

Economist. (2017). Regulating the internet giants: The world’s most valuable resource is no longer oil, but data. The Economist. Retrieved from http://www.economist.com/news/leaders/21721656-data-economy-demands-new-approach-antitrust-rules-worlds-most-valuable-resource

Galetta, A., & De Hert, P. (2015). The proceduralisation of data protection remedies under EU data protection law: towards a more effective and data subject-oriented remedial system? Review of European Administrative Law, 8(1), 125–151. Retrieved from https://www.researchgate.net/publication/280034195_The_Proceduralisation_of_Data_Protection_Remedies_under_EU_Data_Protection_Law_Towards_a_More_Effective_and_Data_Subject-oriented_Remedial_System

Hilts, A., & Parsons, C. (2015). Access My Info: An application that helps people create legal requests for their personal information. In The 15th Privacy Enhancing Technologies Symposium, Philadelphia, PA. Retrieved from https://www.petsymposium.org/2015/papers/hilts-ami-hotpets2015.pdf

Hoepman, J. H. (2011). Het recht op inzage is een wassen neus. Wat nu? Informatiebeveiliging, 2011(6), 16–17. Retrieved from https://repository.tudelft.nl/view/tno/uuid:6be95e4c-a836-4d64-8ad2-eeb1b987bfa7/

Hustinx, P. (2013). EU data protection law: The review of directive 95/46/EC and the proposed general data protection regulation. Collected Courses of the European University Institute’s Academy of European Law, 24th Session on European Union Law, 1–12. Retrieved from https://pdfs.semanticscholar.org/f1e3/333fcc1344d28134e0ab418817d5f7aa270d.pdf

Kreimer, S. F. (2008). The freedom of information act and the ecology of transparency. University of Pennsylvania Journal of Constitutional Law, 10(5), 1011-1080. Retrieved from: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1088413

Norris, C., De Hert, P., L’Hoiry, X., & Galetta, A. (Eds.). (2017). The Unaccountable State of Surveillance - Exercising Access Rights in Europe. Cham: Springer International Publishing. Retrieved from http://www.springer.com/us/book/9783319475714

Pasquale, F. (2015). The Black Box Society: The Secret Algorithms that Control Money and Information. Harvard University Press.

The Organisation for Economic Co-operation and Development (OECD). (2013). The OECD Privacy Framework. OECD Publishing. Retrieved from: https://www.oecd.org/sti/ieconomy/privacy-guidelines.htm

Schnackenberg, A. K., & Tomlinson, E. C. (2016). Organizational transparency: A new perspective on managing trust in organization-stakeholder relationships. Journal of Management, 42(7), 1784–1810. doi:10.1177/0149206314525202

Schwartz, P. M. (2013). The EU-U.S. Privacy Collision: A Turn to Institutions and Procedures. Harvard Law Review, 126(7), 1966–2009. Retrieved from http://papers.ssrn.com/abstract=2290261

Van Breda, B. C., & Kronenburg, C. C. M. (2016). Inzage in de praktijk van het inzageverzoek. Privacy & Informatie, 2016(50), 60–65. Retrieved from http://old.ivir.nl/syscontent/pdfs/232.pdf

Zenger, R. (2011). Winst bij de rechter, Telfort geeft inzage in álle persoonsgegevens. Retrieved November 17, 2017, from https://rejo.zenger.nl/focus/winst-bij-de-rechter-telfort-geeft-inzage-alle-persoonsgegevens/

Zuboff, S. (2015). Big Other: Surveillance Capitalism and the Prospects of an Information Civilization. Journal of Information Technology, 2015(30), 75–89. doi:10.1057/jit.2015.5 Retrieved from: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2594754

Zuiderveen Borgesius, F. (2015). “Informed Consent. We Can Do Better to Defend Privacy.” IEEE Security & Privacy, 13(2), 103–107. doi:10.1109/MSP.2015.34 Retrieved from http://papers.ssrn.com/abstract=2793769

Appendix

Below is the first page of a reply to an access request to the Municipality of The Hague. It contains a list of labels of data they share with other organisations through two databases “BRP Verstrekkingsvoorziening” (Personal Records Database distribution facility) and “Beheervoorziening BSN” (Social Security Number distribution facility). No further information, background or invitation for further questions is given.

Reply to an access request to the Municipality of The Hague

Footnotes

1. For a critical analysis on the functioning of notice regulation in the US, see Bruening and Culnan, 2015.

2. The DPD was replaced by the GDPR effective May 2018. According to De Hert and Papakonstantinou (2016) the GDPR is not substantially different than past law with regards to the right of access. Two major motivations for the introduction of the GDPR were harmonisation and increased protection for citizens in an environment of intense technological change. Harmonisation is achieved by the fact that the regulation will be directly applicable in all member states, whereas the DPD applied only through its implementations into respective national laws. Stronger data protection for citizens is pursued by, among other things, increased fines, which may increase the relevance of our work.

3. In the GDPR the first ambiguity seems to be solved as the law says: “The data subject shall have.... access to the personal data.... and the following information: …. (b) the categories of personal data concerned. The ambiguity with regards to the recipients however remains as the law still states … (c) The recipients or categories of recipient ...”

4. Dutch DPA (2007) p. 39 “Pursuant to Article 35 of the Wbp, a report must be a complete and clear overview of the data that are being processed in relation to a data subject. This must not be a description or summary of the data, but a complete reproduction. If the report were incomplete, the data subject would of course be insufficiently able to exercise his or her rights under the terms of the Wbp. This interpretation was confirmed in mid-2007 by the Supreme Court in the judgments on the Dexia case, Supreme Court, 29 June 2007, LJN: AZ4663 and Supreme Court, 29 June 2007, LJN: AZ4664.”

5. Similarly, recital 63 of the GDPR does the same. It reads: “A data subject should have the right of access to personal data which have been collected concerning him or her, and to exercise that right easily and at reasonable intervals, in order to be aware of, and verify, the lawfulness of the processing.”

6. The small number of participants and selection method impose limitations on the generalisability of our findings, for instance about how citizens as a whole perceive access rights, or how all organisations handle access requests. Our design, however, offers insights into sentiments and data practices that are present in society.

7. We add a subject line, a paragraph explaining why the citizen requested the data, and a paragraph explaining that we add a copy of a passport in order for the organisation to be able to verify the identity.

8. We only count a request as unanswered after at least 60 days have passed. It is of course possible that some organisations will still reply at some later point.

9. In two of these cases the replies referred back to the privacy policy of the organisation, and in two cases they referred to another organisation.

10. Participants agreed with seven of the responses without data. If we count the lack of data in these replies as the correct data, the percentage within this cell increases to 69%.

11. This only happened after the participant first sent a reminder, then received a copy of the privacy policy, and then again asked Happy Socks to act upon the access request as detailed in their own privacy policy.

12. We in fact had a follow up conversation with The Hague Library, and they stated that the former is true; in particular they stated that they do not keep borrowing history nor any borrower profile.

13. Dutch DPA (2007) p.39 “Pursuant to Article 35 of the Wbp, a report must be a complete and clear overview of the data that are being processed in relation to a data subject. This must not be a description or summary of the data, but a complete reproduction. If the report were incomplete, the data subject would of course be insufficiently able to exercise his or her rights under the terms of the Wbp. This interpretation was confirmed in mid-2007 by the Supreme Court in the judgments on the Dexia case, Supreme Court, 29 June 2007, LJN: AZ4663 and Supreme Court, 29 June 2007, LJN: AZ4664”

14. The GDPR states “the data subject shall have ... access to the personal data ... and the following information: ... (b) the categories of personal data concerned.” The ambiguity with regard to the recipients, however, remains, as the law still states “... (c) the recipients or categories of recipient ...”

15. The fifth participant received a confirmation one month after their request, stating Schiphol expects to answer with a delay because of the holiday period.

16. To clarify, we are not proposing that organisations should retain the data longer in order to respond to an access request; rather that the initial response could have pointed out the collection-deletion practice to improve transparency.

17. Of the five others who sent data access requests to two different branches of UWV, three never received a reply, even after sending reminders, and two received a reply that UWV did not process their personal data.

18. As both the explanatory memorandum of the Dutch Personal Data Protection Act as well as recitals of the GDPR state, as discussed in Section 2.

19. See https://accessmyinfo.org/

Disrupting the disruptive: making sense of app blocking in Brazil


Acknowledgements: I would like to thank Mark Lemley, who read and commented on an early version of this piece presented at the Fourth Conference for Junior Scholars organised by the Stanford Program in Law and Society, and Dennys Antonialli, Beatriz Kira, and the reviewers, whose invaluable comments and constructive criticism helped me refine my argument. A conversation on information controls with Joss Wright at the Summer Doctoral Programme 2017 of the Oxford Internet Institute was also crucial for the final draft.

Introduction

“There will be no nation that has no speech that it wishes to regulate on the Internet. Every nation will have something it wants to control. Those things, however, will be different, nation to nation” (Lessig, 2006, p. 297). This quote from Lawrence Lessig’s classic Code 2.0 draws attention to how the desire for control is a global phenomenon. The basic variation is that, under democratic regimes, state interventions are held to be more clearly backed by the rule of law and reasonably protective of human rights (Wright & Breindl, 2013); these are the grounds of their purported legitimacy.

Indeed, Brazil has no history of shutting down, blocking, or filtering the internet for the purpose of preventing citizens from accessing content considered subversive, or of frustrating online and offline political movements and silencing dissent. It does, however, have a record of judicial orders demanding that intermediaries such as local internet access providers and app store administrators “block” user access to specific internet applications, because the application either provides an illegal service or has failed to comply with a court order. These kinds of decisions have affected internet applications such as YouTube, Facebook, and WhatsApp.

This paper reflects on this phenomenon and adds to a body of literature that tries to understand why internet blocking practices are enacted through contextual knowledge, that is, by looking at the context in which they happen (Crete-Nishihata, Deibert, & Senft, 2013). Upon review of episodes of internet blocking of social media in Brazil, it argues that the blocking orders issued by Brazilian judges can be connected with a scenario of “regulatory disruption”, that is, a context of regulatory frameworks unfit to deal with innovative internet applications, which expands the role of the judiciary in resolving legal disputes. The main contribution of this study is exposing this important explanatory factor behind Brazilian blocking practices, adding to a more comprehensive understanding of the use of these measures in Western states.

Blocking and filtering practices around the world

The terms “blocking” and “filtering” are used to refer to an array of technical measures that either partially or totally restrict the flow of data on the internet. That is, they interfere with the freedom to seek, receive, and impart ideas and information through this medium of mass communication. As such, they are a means of regulation of behaviours and, thus, a form of control. This section briefly navigates through various types of blocking and filtering practices around the world to both shed light on their shared features and distinctive elements and highlight the different contexts in which blocking and filtering of internet applications, services, and content occur. This is a necessary step toward understanding the nature of blocking measures documented in Brazil.

In many countries, the act of implementing technical restraints to block access to internet applications, services, and content is part of ambitious state policies of information control. It is a measure that serves the purpose of preventing citizens from having access to content that is considered subversive, and, thus, to information considered culturally and/or politically sensitive by the nation state in question (Zittrain & Palfrey, 2008, p. 32). For this reason, blockades are often associated with authoritarian regimes that institutionalise national policies of censorship.

The most paradigmatic case is that of China, whose government implements a sophisticated system of filtering and blocking websites and internet content, coined in the West as the "Great Firewall of China" (Goldsmith & Wu, 2006, pp. 85–104; Freedom House, 2015, pp. 195-8; Mackinnon, 2012). The restrictions are implemented by Chinese internet connection providers based on a directory organised by the "Internet Police" and reach not only publications explicitly critical of the regime, but also pornographic content and webpages about democracy, public health, and religion. There is also documented evidence that access to websites of foreign government entities (such as Australia, the United Kingdom, the United States, Taiwan, and Tibet) and educational institutions, social media, and news portals is restricted (Zittrain & Edelman, 2003). Similar schemes can be found in Saudi Arabia (Zittrain & Edelman, 2002) and North Korea (Talmadge, 2016).

In other countries, blocking and filtering take different forms: they are not permanent, as in China, but temporary during strategic periods of political importance. Indeed, internet shutdowns and blocking or throttling of internet applications have been provisionally implemented around elections, to frustrate online and offline political movements and silence dissent. That was the case in Uganda, for example, during the February 2016 presidential elections, when social media platforms and mobile financial services were blocked (Mugume, 2017). Other examples within the past two years include shutdowns in Chad, the Congo, Gambia, Gabon, and Montenegro (Paradigm Initiative Nigeria, 2016). In other instances, blockages and internet disruptions are implemented in regions and periods of rising political tensions and during “emergency situations”, for “national security” reasons. In Cameroon, for instance, English-speaking areas faced internet shutdowns during periods of political unrest with the French-speaking majority at the beginning of 2017 (BBC News, 2017). During growing protests over the economic marginalisation and political persecution of Ethiopia’s largest ethnic group, Oromos, in April 2016, WhatsApp, Facebook Messenger, and Twitter also became inaccessible in Oromia, the region at the centre of the political uprising (Davison, 2016). Turkey also has a well-documented history of politically charged and strategic blockages of this kind (Yesil, Kerem Sözeri, & Khazraee, 2017). So does India (Human Rights Watch, 2017).

Blocking and filtering practices are also found in western liberal democracies. Most common are those within the “child pornography” and “copyrighted material” realms (Blakely, 2014; Berghofer & Sell, 2015). In Germany, the penal code also prohibits the use and dissemination of Nazi and Holocaust denial materials, implying a responsibility to eradicate this content from the web (Freedom House, 2015, p. 350; Goldsmith & Wu, 2006, p. 75). In France, internet connection providers, following notification, must block websites that contain materials inciting terrorism (Wright & Breindl, 2013; Freedom House, 2015, pp. 311-2). In the United States, the practice of "domain seizure" is used as a method to impede access to websites that disseminate content in violation of copyright law and that provide illegal services, such as sales of drugs and smuggled goods (Freedom House, 2015, p. 879; Goldsmith & Wu, 2006, pp. 75, 77-79; DeNardis, 2012, p. 728). In the same country, internet service providers receive thousands of take-down notices daily under the Digital Millennium Copyright Act. Across Europe, search engines must consider requests for search results to be de-indexed following the recognition of the so-called "right to be forgotten" after the 2014 ruling of the Court of Justice of the European Union (2014).

All of these examples illustrate that blocking and filtering practices, and the motivations underlying them, vary around the world. They range from the order of removal of specific content considered illegal to the total restriction of access to a certain application, or even, in certain cases, to complete internet shutdown in a country or region for a certain period of time. The actors at the frontline of these practices also vary: they can be implemented not only by platforms themselves, such as when Google filters its search results for a certain name or when Twitter’s Periscope takes down a user’s live broadcast of a UFC (Ultimate Fighting Championship) fight, but also by domain registrars, when a website is seized, and by internet access providers, when internet applications are shut down, for example. Most significantly, the stated motivations for these practices clearly vary: from protecting community interests through the outright control of information of a particular content (related to pornography, religion, human rights, democracy, etc.) and dismantling political uprisings, to safeguarding children, preventing crime, and guaranteeing data protection rights. There are, therefore, several ways by which governments impact information and services available on the internet, and access to these data. 1 

Blocking in Brazil

Social media blocking in Brazil

This research focused on a particular kind of blocking measure that has been experienced in Brazil. It looked at eight publicly known cases in which Brazilian judges ordered either a temporary or a permanent blockage of entire social media platforms, to be executed either by internet connection providers or by app store administrators, which ultimately functioned as “points of control” (Goldsmith & Wu, 2006, viii; Zittrain, 2002; DeNardis, 2012; Hall, 2016). The study of these cases is especially interesting for the internet policy literature and contributes to a holistic understanding of internet blocking practices around the world, as it facilitates comparison and the identification of new or persisting trends.

Based on the review of the legal issues underlying the decisions, this section shows that a common thread in all these Brazilian blocking cases is the scenario of “regulatory disruption”, a term borrowed from Nathan Cortez (2014, p. 177). Here it is used to refer to a context of lack of convention on how to solve a legal conflict, either because current regulation fails to deal with a new set of facts posed or affected by technology or because the prima facie applicable legal regime is in contradiction with policy considerations and/or outdated in light of new technologies. The phenomenon of legal uncertainty resulting from the non-existence of convention is closely connected with social changes and has occupied legal philosophers concerned with “judicial discretion” in “hard cases” (when no settled rule disposes of the particular case) for decades (Hart, 1961; Dworkin, 1963). Under these disruptive scenarios, adjudication gains importance and relevance over the outcome of the case, because the answer to the case is not clearly stated either in legislation or precedents.

The innovations that services like YouTube, Facebook, Secret, and WhatsApp brought about and popularised disrupted not only markets but also the legal equilibrium. The Brazilian blocks are closely connected with a pressing legal challenge in the internet age: dealing with novel internet applications that do not square well with existing regulatory frameworks. While this challenge is not exclusive to Brazil, it is however one of the main reasons why social media were occasionally blocked in that country. The analysis of these blocking decisions and the context in which they are inserted shed light on how some members of the Brazilian judiciary have dealt with challenges imposed by technology, the internet in particular: disrupting - blocking - disruptive applications.

In the next section, short summaries of the cases affecting four social media applications are provided: each subsection describes the regulatory context in which the application is inserted, unravels the motivations asserted by the courts, and identifies trends in the legal arguments used to ground blocking decisions. Thereby, the operation of “regulatory disruption” is demonstrated. The purpose is to help explain these decisions, not to justify them. Criticism of these blocking orders and their impact on infrastructure, the economy, and human rights may still be in order, even if not addressed here.

Date | Application | Motivation
5 January 2007 | YouTube | Failure to remove illegal content
9 August 2012 | Facebook | Failure to remove illegal content
19 August 2014 | Secret | Violation of constitutional ban on anonymity
25 February 2015 | WhatsApp | Failure to comply with user data demands
16 December 2015 | WhatsApp | Failure to comply with user data demands
2 May 2016 | WhatsApp | Failure to comply with user data demands
19 July 2016 | WhatsApp | Failure to comply with user data demands
5 October 2016 | Facebook | Failure to remove illegal content

Source: bloqueios.info

Making sense of the blocking orders

YouTube

In September 2006, Brazilian celebrity Daniella Cicarelli and her boyfriend Renato Malzoni were caught on camera having sex on a beach in Spain. The video was immediately all over the internet. The couple brought suit against media companies disseminating pictures and the video of the incident. These companies included YouTube Inc., then a one-year-old video-sharing start-up soon to be acquired by Google Inc.

After some back-and-forth grappling with the fact that the video was recorded in a public environment and that no intrusion upon seclusion had taken place, the couple obtained an injunction ordering media outlets (and YouTube) to take down the video, as it violated the couple’s constitutional right to privacy. Without specifying URLs, a higher court judge at the São Paulo State Court determined that the video be erased from the internet (Tribunal de Justiça de São Paulo, 2006).

Despite the positive outcome for the plaintiffs, and the immediate compliance by traditional media outlets, the video could still be found online. In fact, that was the very first time the Brazilian internet experienced a so-called “Streisand effect”—a term used to describe how efforts to hide, remove, or censor a piece of information online have the unintended consequence of making it more visible—to its full extent. The phenomenon was largely enabled by the exact innovative feature that made video-sharing platforms so popular: the possibility of uploading content to the website without editorial control, a power granted to any user of the platform.

Users reposted the prohibited material on YouTube continuously. After months of unsuccessful efforts, the couple filed a motion for complete blockage of YouTube, since the company could not guarantee that the video would not be found on its platform. In very ambiguous language, the competent judge then ordered that the “video be made inaccessible to Brazilian Internet users” and that internet access providers implement “technical measures” to prevent access to the video. The basis claimed for such an order was Art. 461, §5º of the former Code of Civil Procedure, 2 which grants judges a general writ power to demand any action that will help put their decisions into practice. Following the decision, a lower court judge issued letters to backbone providers to filter the illegal content, leading to the blockage of the entire platform on 5 January 2007, affecting at least five million internet users at the time (Agência Estado, 2007).

Unsurprisingly, the blockage caused a huge public outcry and received extensive media coverage. Within a few days, the state court judge determined that the ban on YouTube be suspended, claiming that his decision had been misinterpreted, but insisting that YouTube Inc. had neglected his previous orders to take down the couple’s video and had to implement technical measures to prevent users from uploading it (Tribunal de Justiça de São Paulo, 2007).

This first incident is perhaps the most telling with regard to the “regulatory disruption” aspect of internet blocking in Brazil. Are internet platforms liable for user generated content? If so, then when and how? The existing regulatory model at the time did not provide clear answers to these new questions. The blocking order originated in this complicated legal scenario with an inexperienced (in internet matters) judge looking for an effective solution to a new problem. It is even said that blocking YouTube brought to light the need to regulate the liability of internet applications for user generated content (Souza, Moniz, & Branco, 2007) and that the episode ended up bolstering conversations that originated the Civil Rights Framework for the internet (Federal Law no. 12.965/14, the MCI), which then prescribed rules establishing the regime of intermediary liability (Souza, 2015, pp. 391-6).

Facebook

During municipal elections in 2012, and then again in 2016, city council candidate Dalmo Meneses and mayoral candidate Udo Döhler filed suits against Facebook requesting the removal of pages on the social media platform that were allegedly harmful to their reputation and supposedly compromised their runs for seats at the City Council and the City Hall, respectively. In the first case, the page in question was “Reage Praia Mole”, which brought environmental threats endangering a beach in the state of Santa Catarina to the attention of the local population and contained critiques of the local administration. In the second, more recent case, the targeted page was “Hudo Caduco”, which made humorous references to the mayor of Joinville, a city in Santa Catarina.

In both cases, electoral judges granted the takedown requests (Justiça Eleitoral, 2012a; Justiça Eleitoral, 2016a). They considered these pages illegal because their content provided “degrading advertisement” to the reputation of electoral candidates in an (immediately) "anonymous" fashion, i.e., by pseudonym, violating Brazilian electoral law. As a matter of fact, Brazilian electoral law regulates campaign ads strictly. It contains subjective language that goes as far as to say that “advertisements that may degrade or ridicule candidates are prohibited” (Art. 53, §1º of Federal Law no. 9504/97) and that the “Electoral Justice will prevent the re-presentation of advertisements offensive to the candidate's honour, to morality, and to good manners” (Art. 53, §2º of Federal Law no. 9504/97). 3 The two judges did not differentiate individual opinions and parodies from advertising and interpreted the provisions in a manner favourable to the plaintiffs.

Facebook failed to carry out the takedown orders in both cases. As a result, the judges also issued blocking orders against the entire platform, to be implemented by internet access providers (Justiça Eleitoral, 2012b; Justiça Eleitoral, 2016b). The legal basis was Article 57-I 4 of the Elections Act (Federal Law no. 9504/97), added by Law no. 12.034/09, which provides for the possibility of suspending electronic sites that violate the electoral law. They claimed that the suspension of the platform was apt, both to put a stop to the reputational harm caused by the pages, which also had a negative effect on the candidates’ chances in the coming elections, and to punish the company for the failure to comply with the previous judicial orders. 

However, the orders were not implemented, because Facebook, under the imminent threat of blockage of the entire platform, complied with the decisions to remove the prohibited content (Justiça Eleitoral, 2012c; Tribunal Regional Eleitoral, 2016). With regard to the 2012 case, the Brazilian subsidiary of Facebook Inc., Facebook Brasil Serviços Online Ltda., explained in court it had failed to immediately comply with the request because it did not have the technical ability to take down the page. In the 2016 case, Facebook Brazil had decided to keep the page online while it challenged the takedown order in a higher court on grounds that the content was legal, but the threat of blocking the entire platform forced the company to review this approach to the case.

These two blocking decisions against Facebook are associated with a scenario of regulatory disruption. Both decisions are closely connected to the Elections Act’s failure to account for individual political opinions posted on social media platforms and distinguish them from campaign “ads” within the meaning of the law. Case law and scholarship also failed to offer any clear guidance. In the two cases, criticisms and parodies were interpreted as insults and offences—an "advertisement" that was held as being unlawful—and the lack of immediate identification of authorship, a form of “anonymity”. Lack of clear standards in the face of new social practices of civic engagement online and challenges associated with setting up clear guidelines in highly ambiguous cases seem to help explain why the judges declared the pages illegal and ordered their takedown. Limited understanding of the internet’s social impact and the technical, economic, and human rights implications of shutting down a social media platform may also figure as potential reasons for their decision to block the entire platform, which could have affected 40 million Brazilian users in 2012 and 90 million in 2016.

Secret

The mobile application Secret allowed users to post comments without indication of authorship. The messages could be viewed by the users’ circle of friends, connected to the platform in an (outwardly) anonymous fashion. Upon the launch of the app in Brazil and its instant success, the Public Prosecutor’s Office in the state of Espírito Santo (Ministério Público do Espírito Santo) brought a civil suit against the platform and formally requested that the app store administrators Google, Apple, and Microsoft 5 remove the service from their virtual shops and remotely delete the app from user cellphones. According to the prosecutor that filed the action, the app violated the constitutional prohibition on anonymity, provided for in Art. 5, IV 6 of the Brazilian Constitution, and facilitated illegal activities, such as hate speech and defamation (Roncolato, 2014).

The lower court judge was persuaded by these arguments. He granted the request and banned the app (Justiça Estadual do Espírito Santo, 2014). 7 While Apple complied with the removal request, Google and Microsoft challenged it. The legal battle dragged on for almost a year, saw the late intervention of Secret Inc., and concluded with the majority of the Espírito Santo state court panel finding in favour of Secret, Google, and Microsoft, on the grounds that the app collected and retained IP addresses and other relevant metadata that could be used to identify users who engaged in criminal activities through the app (Tribunal de Justiça do Espírito Santo, 2015). Therefore, the app did not run afoul of the anonymity ban. By then, Secret Inc. had already discontinued its activities in Brazil.

The lack of developed scholarship and case law around the scope and meaning of the anonymity prohibition of the Brazilian Constitution of 1988 on the internet is a leading explanation for the decision to ban Secret (Monteiro, 2017, pp. 36-41; Souza, 2015, pp. 391-6), as this was the first case that directly presented the question of how the anonymity clause should be read in cyberspace – in another instance of “regulatory disruption”. Leading commentators and practitioners have for decades read the anonymity clause in Art. 5, IV as imposing an identification requirement for the exercise of freedom of expression in Brazil (Monteiro, 2017, pp. 1-8). Accordingly, the mechanical application of this reading by the lower court judge is at the root of the blocking order here. However, this understanding and application of the identification requirement doctrine has bizarre implications on the internet, such as the idea that only people carrying a verified name tag can access and use the internet (Monteiro, 2017, p. 6). This realisation triggered the review of the case and the blocking order was later overturned.

WhatsApp

The US messaging service WhatsApp is a tremendous success in Brazil. The company has more than 120 million active users in the country and a penetration as high as 95% among smartphone holders. The app’s ever increasing popularity has led representatives of the Brazilian telecommunications industry to go as far as to call it a “pirate” telecom service (Folha de São Paulo, 2015), for it provides services akin to SMS-texting and calling without having to observe the strict regulatory framework applied to telcos. The comment highlights the industry’s concern that users have now started to prefer messaging and calling through WhatsApp to using their regular cell phone calling and texting plans.

The telecommunications industry was not the only interest group to worry about WhatsApp’s growing popularity, though. As in other countries, law enforcement and intelligence officials share the same concern over the fact that people’s calling and texting habits are rapidly changing and now take place in and through WhatsApp. Unsurprisingly, the four publicly known blocking orders issued against the service arose from its failure to comply with judicial demands for user data relevant to criminal investigations related to child abuse, drug trafficking, and organised crime.

The clash between the company and Brazilian authorities that led to the blockages started off as a jurisdictional battle: data demands were served on Facebook Brasil Serviços Online, the Brazilian subsidiary of Facebook Inc., which claimed to have no relation to WhatsApp and did not provide any information; WhatsApp Inc., in turn, initially refused to accept that it was under obligation to comply with direct requests for user data made by Brazilian judges under Brazilian law and insisted that authorities had to resort to mutual legal cooperation treaties. With regard to this point, Brazilian law enforcement argued that WhatsApp provides services in Brazil, attracting the application of Brazilian law, including the Wiretap Act (Federal Law 9.296 of 1996) and the Marco Civil da Internet (“MCI”, Federal Law no. 12965 of 2014, known as the Brazilian Civil Rights Framework for the Internet). The full implementation of end-to-end encryption in March 2015, establishing the technical impossibility of performing wiretaps, added another layer of complexity to the battle and intensified tensions; encryption has now become the centre of the discussion (Abreu, 2016; Antonialli & Brito Cruz, 2015). The company claims it cannot provide the content data Brazilian law enforcement wants, as it does not hold the key to decrypt communications. Many Brazilian authorities refuse to believe this assertion. Others demand the company change its current end-to-end architecture.

With regard to the legal arguments cited to ground the blocking orders, some judges made explicit reference to art. 12, III 8 of the Marco Civil da Internet, which provides for "temporary suspension" as a kind of sanction to application providers, to justify the blocking orders (Justiça Estadual de Sergipe, 2016). Others have also relied on the general writ power to demand any action that will help judges enforce their decisions (here, requiring user content data disclosures), found in the Code of Civil Procedure (Justiça Estadual de São Paulo, 2016c).

In all cases, the blocking orders were reversed by appellate courts, because of their “disproportionality” (Tribunal de Justiça do Piauí, 2015; Tribunal de Justiça de São Paulo, 2015; Tribunal de Justiça de Sergipe, 2016; Supremo Tribunal Federal, 2016). In the last two cases, political parties even brought suit before the Brazilian Supreme Court (Barros, 2016; Mansur, 2016), seeking a declaration that blocking WhatsApp is unconstitutional, based on the theory that such blocking violates freedom of communication of a hundred million Brazilian WhatsApp users. Many have also argued that the blocking orders had no legal basis, challenging the judges’ interpretation of the “temporary suspension” clause (Abreu, 2017). Final decisions are still pending. Meanwhile, legislators have proposed several bills and amendments to the MCI, seeking to either create an explicit legal basis for blocking orders or declare them unlawful (Kira, 2016).

WhatsApp’s implementation of end-to-end encryption and its pervasive popularity deeply compromised the legal equilibrium between law enforcement and “surveillance intermediaries” (Rozenshtein, 2018), causing a regulatory disruption. The technology changed a default condition that was taken for granted by public security agents in Brazil: the possibility of recovering the content of user communications and using the data as evidence when following the procedure and requirements established in law. The technical impossibility of the company’s compliance fundamentally departed from traditional practices, whereby telecommunications companies would cooperate with law enforcement to perform wiretaps, provided they followed the regular procedure established by the Wiretap Act and the Marco Civil da Internet. It posed challenging legal questions: is a communication service that is not ‘wiretappable’ constitutionally protected? Is there a legal obligation to build internet applications capable of being surveilled? The regulatory disruption has triggered a heated debate on the constitutionality of end-to-end encryption (Abreu, 2016) and even prompted the Ministry of Justice to announce that it would propose legislation to regulate encrypted messaging services (Passarinho, 2016).

Conclusion

Nation states intervene and exercise control over the internet to neutralise uses they deem unwelcome, according to their own points of view and societal values (Zittrain & Palfrey, 2008, p. 44; Belli, 2016, p. 19). Some countries will base their actions on human rights standards and democratic values, whereas others will not.

In Brazil, internet applications have been entirely blocked for their disruptive character. Blocking orders have been based on alleged violations of legal and constitutional provisions—as in the Secret case—or non-compliance with previous court orders—as in the YouTube, Facebook, and WhatsApp cases—indicating that some Brazilian judges assume an assertive role when found in a scenario of regulatory disruption, that is, when long-standing legal equilibria are disturbed by technology. The consideration of this factor is a necessary piece of any account of the blocking of social media in Brazil, and its identification may facilitate comparisons with, and illuminate analyses of, blocking practices in other countries.

Admittedly, however, regulatory disruption, as defined here, is a worldwide phenomenon that exists in all legal systems. While crucial to understanding the Brazilian blocking orders against YouTube, Facebook, Secret, and WhatsApp, it is insufficient to fully explain why the practice is not always repeated in other jurisdictions facing similar scenarios. Even inside Brazil, other judges have dealt with the same challenges but have not resorted to blocking measures against entire social media platforms. Similarly, in most western countries facing the same disruptions, blockages did not take place. For example, WhatsApp has been in the spotlight in the United States, the United Kingdom, France, and many other countries where it is popular, but it has not been blocked there. This suggests that Brazil’s legal culture, judicial behaviour, and level of social and institutional development may be critical elements that come into play in the context of regulatory disruption. These aspects require further attention and research.

References

Abreu, J. S. (2016, October 17). From Jurisdictional Battles to Crypto Wars: WhatsApp vs. Brazilian courts. The Bulletin. Retrieved from http://jtl.columbia.edu/from-jurisdictional-battles-to-crypto-wars-brazilian-courts-v-whatsapp/.

Abreu, J. S. (2017, March 7). Is there legal support for WhatsApp blocks? Interpretative disputes and their advocates. bloqueios.info, InternetLab. Retrieved from http://bloqueios.info/en/is-there-legal-support-for-whatsapp-blocks-interpretative-disputes-and-their-advocates/

Agência Estado. (2007, January 9). Pelo menos 5 milhões estão sem acesso ao YouTube. Estadão. Retrieved from http://www.estadao.com.br/noticias/geral,pelo-menos-5-milhoes-estao-sem-acesso-ao-youtube,20070109p13497

Antonialli, D.; Brito Cruz, F. (2015, December 17). Entenda a decisão que bloqueou o WhatsApp no Brasil. Deu nos Autos. Retrieved from http://link.estadao.com.br/blogs/deu-nos-autos/entenda-bloquei-whatsapp-brasil/ .

BBC News. (2017, February 8). Why has Cameroon blocked the internet? BBC News. Retrieved from http://www.bbc.com/news/world-africa-38895541

Barros, P. P. (2016, November 18). ADPF 403 in STF: Are WhatsApp blockings constitutional? bloqueios.info, InternetLab. Retrieved from http://bloqueios.info/en/adpf-403-in-stf-are-whatsapp-blockings-constitutional/

Blakely, M.R. (2014, June 4). Injunction Function: internet service providers and fair balance in web-blocking. Internet Policy Review. Retrieved from https://policyreview.info/articles/news/injunction-function-internet-service-providers-and-fair-balance-web-blocking/293 .

Belli, L. (2016). End-to-end, Net Neutrality, and Human Rights. In: L.Belli, P. De Filippi (Eds.), Net Neutrality Compendium (pp. 13-29). Cham: Springer. doi:10.1007/978-3-319-26425-7_2

Berghofer, S. & Sell, S. (2015). Online debates on the regulation of child pornography and copyright: two subjects, one argument?. Internet Policy Review, 4(2). doi:10.14763/2015.2.363

Branco, S. (2016, October 13). How a supermodel helped to regulate the Internet in Brazil. Droitdu.net. Retrieved from http://droitdu.net/2016/10/how-a-top-model-helped-to-regulate-brazilian-Internet/

Court of Justice of the European Union. (2014, May 13). Google Spain SL, Google Inc. v. Agencia Española de Protección de Datos, Mario Costeja Gonzáles, C-131/12.

Cortez, N. (2014). Regulating Disruptive Innovation. Berkeley Technology Law Journal, 29(1), 175-228. doi:10.15779/Z38NM5G

Crete-Nishihata, M., Deibert, R. J., & Senft, A. (2013). Not by Technical Means Alone: The Multidisciplinary Challenge of Studying Information Controls. IEEE Internet Computing, 17(3), 34-41. doi:10.1109/MIC.2013.29. Retrieved from https://ssrn.com/abstract=2265644

Davison, W. (2016, April 12). Twitter, WhatsApp Down in Ethiopia Oromia Area After Unrest. Bloomberg Technology. Retrieved from https://www.bloomberg.com/news/articles/2016-04-12/twitter-whatsapp-offline-in-ethiopia-s-oromia-area-after-unrest

DeNardis, L. (2012). Hidden Levers of Internet Control. Information, Communication & Society, 15(5), 720-738. doi:10.1080/1369118X.2012.659199

Dworkin, R. (1963). Judicial Discretion. The Journal of Philosophy, 60(21), 624-638. doi:10.2307/2023557

Folha de São Paulo. (2015, August 7). WhatsApp é ‘pirataria pura’, diz presidente da Vivo. Folha de São Paulo. Retrieved from http://www1.folha.uol.com.br/mercado/2015/08/1666187-whatsapp-e-pirataria-pura-afirma-presidente-da-vivo.shtml

Freedom House. (2015). Freedom on the Net 2015. Retrieved from https://freedomhouse.org/sites/default/files/FOTN%202015%20Full%20Report.pdf

Hall, J.L. (2016). A Survey of Worldwide Censorship Techniques, Center for Democracy and Technology. Retrieved from https://tools.ietf.org/html/draft-hall-censorship-tech-04 .

Human Rights Watch. (2017, June 15). India: 20 Internet Shutdowns in 2017, https://www.hrw.org/news/2017/06/15/india-20-internet-shutdowns-2017

Goldsmith, J.; Wu, T. (2006). Who controls the Internet? Illusions of a borderless world. Oxford: Oxford University Press.

Hart, H.L.A. (1961). The Concept of Law. Oxford: Oxford University Press.

Lessig, L. (2006). Code 2.0. New York: Basic Books. Available at http://codev2.cc/download+remix/

Kira, B. (2016, December 2). Finding a “silver bullet”: Brazilian Congress analyses draft bills about website and application blockings. bloqueios.info, InternetLab. Translation by Ana Luisa Araujo. Retrieved from http://bloqueios.info/en/finding-a-silver-bullet/.

Mackinnon, R. (2012). Consent of the Networked: The Worldwide Struggle For Internet Freedom. New York: Basic Books.

Mansur, F. (2016, November 18). ADI 5527 and appblocks: a problem in the wording of the law or in its interpretation? bloqueios.info, InternetLab. Retrieved from http://bloqueios.info/en/adi-5527-and-appblocks-a-problem-in-the-wording-of-the-law-or-in-its-interpretation/

Monteiro, A. P. L. (2017). Online anonymity in Brazil: Identification and the dignity of wearing a mask (Master’s Thesis). University of São Paulo, São Paulo, Brazil.

Mugume, P. (2017, March 15). UCC Dragged to Court Over ‘Unlawful’ Social Media Shutdown in 2016 Elections, Date Set for Case Hearing. PC Tech Magazine. Retrieved from http://pctechmag.com/2017/03/ucc-dragged-to-court-over-unlawful-social-media-shutdown-in-2016-elections-date-set-for-case-hearing/

Paradigm Initiative Nigeria (2016, December). Choking The Pipe: How Governments Hurt Internet Freedom On A Continent That Needs More Access. Retrieved from http://pinigeria.org/2016/wp-content/uploads/documents/research/Digital%20Rights%20In%20Africa%20Report%202016%20(HR).pdf

Passarinho, N. (2016, July 19). Governo elabora projeto para regular acesso a informações do WhatsApp. G1. Retrieved from http://g1.globo.com/politica/noticia/2016/07/governo-elabora-projeto-para-regular-acesso-informacoes-do-whatsapp.html .

Roncolato, M. (2014, August 19). Justiça do ES determina suspensão do Secret no Brasil. O Estado de S. Paulo. Retrieved from http://link.estadao.com.br/noticias/geral,justica-do-es-determina-a-suspensao-do-secret-no-brasil,10000030685 .

Rozenshtein, A. Z. (2018). Surveillance Intermediaries. Stanford Law Review, 70(1), 99-189.

Souza, C.A.P.; Moniz, P.P.; Branco, S. (2007). Neutralidade de rede, interesse público e filtragem de conteúdo: reflexões sobre o bloqueio do site Youtube no Brasil. Revista de Direito Administrativo, 246, 50-78.

Souza, C.A.P. (2015). As Cinco Faces da Proteção à Liberdade de Expressão no Marco Civil da Internet. In N. De Lucca, A. Simão Filho, C. R. P. Lima (Eds.), Direito & Internet III - Marco Civil Internet (Volume II) (pp. 377-408) São Paulo: Latin Quarter.

Talmadge, E. (2016, April 1). North Korea announces blocks on Facebook, Twitter and YouTube. The Guardian. Retrieved from https://www.theguardian.com/world/2016/apr/01/north-korea-announces-blocks-on-facebook-twitter-and-youtube

Wright, J. & Breindl, Y. (2013). Internet filtering trends in liberal democracies: French and German regulatory debates. Internet Policy Review, 2(2). doi:10.14763/2013.2.122

Yesil, B.; Kerem Sözeri, E.; and Khazraee, E. (2017). Turkey’s Internet Policy After the Coup Attempt: the Emergence of a Distributed Network of Online Suppression and Surveillance. Internet Policy Observatory. Retrieved from http://repository.upenn.edu/internetpolicyobservatory/22

Zittrain, J. (2002). Internet Points of Control. Boston College Law Review, 44, 653-688.

Zittrain, J.; Edelman, B. (2002). Documentation of Internet Filtering in Saudi Arabia. Cambridge, MA: Berkman Center for Internet & Society. Retrieved from http://cyber.harvard.edu/filtering/saudiarabia/

Zittrain, J.; Edelman, B. (2003). Internet Filtering in China. IEEE Internet Computing, 7(2), 70-77. doi:10.1109/MIC.2003.1189191. Retrieved from http://unpan1.un.org/intradoc/groups/public/documents/apcity/unpan011043.pdf

Zittrain, J. (2008). The Future of the Internet - And How to Stop It. New Haven: Yale University Press.

Zittrain, J., & Palfrey, J. (2008). Internet filtering: the politics and mechanisms of control. In R. Deibert, J. Palfrey, R. Rohozinski, & J. Zittrain (Eds.), Access denied: the practice and policy of global Internet filtering (pp. 29–56). Cambridge, MA: MIT Press. Available at https://mitpress.mit.edu/books/access-denied

List of court decisions (available at bloqueios.info/en):

Tribunal de Justiça de São Paulo. (2006, September 29). Agravo de Instrumento no. 472.738-4, judge Ênio Santarelli Zuliani (blocking order against YouTube).

Tribunal de Justiça de São Paulo. (2007, January 9). Agravo de Instrumento no. 488.184-4/3, judge Ênio Santarelli Zuliani (order suspending the block of YouTube).

Justiça Eleitoral. (2012a, July 26). Ação Cautelar no. 86-37.2012.6.24.0013, 13ª Vara Eleitoral, judge Luiz Felipe Siegert Schuch (content takedown order).

Justiça Eleitoral. (2012b, August 9). Ação Cautelar no. 86-37.2012.6.24.0013, 13ª Vara Eleitoral, judge Luiz Felipe Siegert Schuch (blocking order against Facebook).

Justiça Eleitoral. (2012c, August 11). Ação Cautelar no. 86-37.2012.6.24.0013, 13ª Vara Eleitoral, judge Luiz Felipe Siegert Schuch (reconsideration of order suspending the block on Facebook).

Justiça Eleitoral. (2016a, September 19). Representação no. 0000141-28.2016.6.24.0019, 19ª Vara Eleitoral, judge Renato L. C. Roberge (content takedown order).

Justiça Eleitoral. (2016b, October 5). Representação no. 0000141-28.2016.6.24.0019, 19ª Vara Eleitoral, judge Renato L. C. Roberge (blocking order against Facebook).

Justiça Estadual do Espírito Santo. (2014, August 19). Processo nº 0028553-98.2014.8.08.0024, 5ª Vara Cível de Vitória, judge Paulo César de Carvalho (blocking order against Secret).

Justiça Estadual do Rio de Janeiro. (2016, July 17). Inquérito Policial 062-00164/2016, 2ª Vara Criminal de Duque de Caxias, judge Daniela Barbosa Assumpção de Souza (blocking order against WhatsApp).

Justiça Estadual de São Paulo. (2016, December 6). Processo de Interceptação Telefônica n. 0017520-08.2015.8.26.0564, 1ª Vara Criminal de São Bernardo do Campo, judge Sandra Regina Nostre Marques (blocking order against WhatsApp).

Justiça Estadual de Sergipe (2016, April 26). Processo n. 201655090143, Vara Criminal da Comarca de Lagarto, judge Marcel Maia Montalvão (blocking order against WhatsApp)

Tribunal de Justiça do Espírito Santo. (2015, July 21). Agravo de Instrumento no. 003518628.2014.8.08.0024, rapporteur Robson Luiz Albanez (Secret).

Tribunal de Justiça do Piauí. (2015, February 26). Mandado de Segurança n. 2015.0001.001592-4, rapporteur Desembargador Raimundo Nonato Costa Alencar (order suspending ban on WhatsApp).

Tribunal de Justiça de São Paulo. (2015, December 17). Mandado de Segurança n. 2271462-77.2015.8.26.0000, rapporteur Xavier de Souza (order suspending ban on WhatsApp).

Tribunal de Justiça de Sergipe (2016, May 3). Mandado de Segurança n. 201600110899, rapporteur Ricardo Múcio Santana de Abreu Lima (order suspending ban on WhatsApp).

Tribunal Regional Eleitoral. (2016, October 26). Recurso Eleitoral no. 0000141-28.2016.6.24.0019, judge Antonio do Rego Monteira Rocha (order suspending the block on Facebook).

Supremo Tribunal Federal. (2016, July 17). Medida Cautelar na ADPF 403, justice Ricardo Lewandowski (order suspending ban on WhatsApp).

Footnotes

1. Conceptually, blocking and filtering practices can be broken down into at least three different phenomena: (a) content removal or filtering; (b) shutdown or total restriction of an internet application; and (c) complete network shutdown. These measures can be implemented through cooperation with several players: (a) transportation intermediaries (e.g., backbone providers, access providers, hosting services); (b) information intermediaries (e.g., search engines and app store holders), (c) financial intermediaries (e.g., credit card companies); (d) membership and domain names registrars; and (e) platforms. See Goldsmith & Wu, 2006, p. 72. For technical methods of implementation at the infrastructure level, see Hall, 2016.

2. Art. 461, §5º In order to carry out specific protection or to obtain the equivalent practical result, the judge may, on his own initiative or upon request, determine the necessary measures, such as the imposition of a fine for delay, search and seizure, the removal of persons and things, the dismantling of works and the prevention of harmful activity, if necessary with support of police force.

3. Art. 53. Instant cuts or any type of prior censorship shall not be allowed in free electoral programmes. // § 1 Advertising that may degrade or ridicule candidates is prohibited; the offender party or coalition shall lose the right to advertise in the free election hours on the following day. // § 2. Without prejudice to the provisions of the preceding paragraph, at the request of a party, coalition or candidate, the Electoral Justice shall prevent the re-presentation of advertising that is offensive to the candidate's honour, to morality, and to good manners (original in Portuguese).

4. Art. 57-I. Upon a candidate party or coalition’s request and in attention to art. 96, the Electoral Court may order the suspension, for twenty-four hours, of access to all informational content of the websites that fail to comply with the provisions of this Act. // § 1 Each reiteration of conduct will double the period of suspension. // § 2 In the period of suspension referred to in this article, the company will inform all users attempting to access its services that it is temporarily inaccessible due to failure to comply with the electoral law (original in Portuguese).

5. Microsoft was ordered to remove and delete Secret’s equivalent app, called ‘Cryptic’.

6. Art. 5, section IV – the expression of thought is free, and anonymity is forbidden (original in Portuguese).

7. There are no statistics available on how many Brazilian users were affected by the measure. Some suggest the app had hundreds of thousands of users, especially kids and teenagers.

8. Art. 12. Without prejudice to any other civil, criminal, or administrative sanctions, the infringement of the rules set forth in arts. 10 and 11 above are subject, on a case-by-case basis, to the following sanctions, applied individually or cumulatively: I – a warning, which shall establish a deadline for the adoption of corrective measures; II – fine of up to 10% (ten percent) of the gross income of the economic group in Brazil in the last fiscal year, taxes excluded, considering the economic condition of the infringer, the principle of proportionality between the gravity of the conduct, and the size of the sanction; III – the temporary suspension of the activities that entail the acts set forth in art. 11; or IV – prohibition to execute the activities that entail the acts set forth in art. 11. Sole paragraph. In case of a foreign company, the subsidiary, branch, office or establishment located in the country will be held jointly liable for the payment of the fine set forth in the main section of art. 12.


Internet policy politics - A Q&A with Marianne Franklin


Marianne Franklin is Professor of Global Media & Politics and convenor of the MA in Global Media and Transnational Communications at Goldsmiths, University of London. We interviewed her in advance of the five-year anniversary celebration of our journal.

Internet Policy Review: Marianne, you have been following the work of Internet Policy Review since the very beginning. Back in 2012, internet governance was a key notion. Does internet governance still mean anything today?

The short answer is yes, more than ever.

The longer one is: it depends on what we mean by this term. Over the last decade, at least, the issues that fall under the rubric internet governance have multiplied with all kinds of analytical and practical implications - legal, technical, ethical, sociocultural, economic and political. The internet’s design as a “network of [computer] networks” has also become increasingly complex technologically, which implies that the complexity of the legal, sociocultural, economic and political dimensions of internet design, access, use, and content management needs to be embraced, rather than explained away. Internet governance used to be a descriptor, stemming from a stricter, engineering understanding of technical standards and network architectures that appear far removed from ‘normative’ issues such as rights, freedom, democracy and the like. In 2018, maintaining that this sort of narrow techno-centric definition is the only one possible would be avoiding the many issues we all face – as users, designers, policy-makers, academics, or activists for whom the internet is both an object and a means to achieve certain goals.

This is not to deny, or belittle, the role that technical experts have played in shaping the way that the internet works. But as technologies are never neutral, nor immutable, the need to address the sociocultural and political implications of transformative technologies such as the internet, given how many people take being online for granted, is even more pressing today. In the last few years the stakes have also been raised geopolitically and within national polities. This means that technical experts, scholars, political representatives, and activists need to sharpen their focus whilst bearing in mind the broader context of any emerging design, terms of access, and use, and how content – databases – are being managed, and for whom. These are exciting and challenging times in that regard because finding a focus, and keeping focused, whilst not being blind to the rest is not an easy task. But as demanding as this may be, it needs to be a core premise for theory and research, public policy advocacy, and activism that are currently addressing the spectrum of internet governance topics; for established and emerging scholars and for journals that focus on the internet like Internet Policy Review.

Internet Policy Review: Your specialty in internet governance is human rights. What are we looking at exactly?

Simply put, human rights with regard to internet communications and architecture are more than social and economic rights. They are more than the narrowly defined set of rights stemming from US civil liberties and the American constitution (that enshrine free speech for instance). I think we need to be more, not less, ambitious in this regard and consider the ways in which internet design, terms of access and use implicate international human rights law and norms as a whole, not only those that have been currently ‘cherry-picked’ so to speak. In addition, human rights law and norms, as western liberal institutions, are not beyond criticism either. I do not see human rights as religious tenets, for they are also products of human history, quite recent history as it happens. Nonetheless the term encompasses three, if not four, ‘generations’ of rights anchored in the UN system, which span from the Universal Declaration of Human Rights and those treaties and covenants that are derived directly from the UDHR, such as the ICCPR, the ICESCR, or the ECHR, to those on the rights of women, of indigenous peoples, persons with disabilities, and those of children.

In Western countries and Europe in particular, we’ve been dealing with the right to privacy online, particularly since the Snowden revelations in 2013 and subsequent events. This discussion is far from over, as jurisprudence has only really started to emerge in recent years. But privacy is but one right among many others and, moreover, it is one of the more controversial ones from a sociocultural perspective. Whilst privacy, along with freedom of expression and freedom of the press, are well-developed areas for debate and action, they are the tip of the human rights-internet iceberg, a beginning not an end to the matter. Take, for example, the work being done on raising awareness of the gendered dimensions of even these fundamental rights and freedoms and their impact on internet policy-making, and how these overlap with advocacy platforms that address the way in which women’s rights online (in terms of access, and freedom of association, and of information) are yet to be fully realised in many parts of the world. Existing and emerging rights (some argue that internet access should be a new right) are already being reshaped in the face of how emerging technologies (the Internet of Things, artificial intelligence etc.), internet-dependent government services, and commercial mobile phone apps are changing the way in which people interact with the world around them. Looking ahead, moreover, to think about the connections between human rights and the internet across the spectrum and how much work there is still to do, we need to consider the role that education can play. By this I mean critical thinking, daring to ask questions and query the norm, not learning by rote.

Education, as a dialogue, in this regard is crucial because online and offline, internet-based practices and infrastructures based on data-tracking (viz. surveillance), not privacy or other rights and freedoms, have become normalised across Europe and around the world. The extrajudicial programmes of mass online surveillance to which Snowden called the world’s attention have now become enshrined in law, from the UK to Germany, the Netherlands, France, and in the Global South as well. I’m deeply concerned about the continued assumption that accessing and using internet media and communications services have to be based on large-scale forms of data-retention, and the automated 24/7 tracking of our digital imaginations, and footprints, by both state agencies and companies (from tech giants to start-ups). These practices are political and commercial decisions, not a technical imperative. In Germany, France, the UK, we may now have privacy acts of some sort or another but all these regulations can be mitigated, if not circumvented by intelligence services, and law enforcement agencies. Just because something is technically possible, even commercially attractive, that does not necessarily mean that it is either justifiable or desirable.

It is only along with, and through education (e.g., through teaching people how to use crypto-tools, or getting them to question their habits) that we can combat this incipient passivity towards this emergence of surveillance as the norm, rather than an exception.

Internet Policy Review: What role can researchers play in addressing the challenges in internet governance?

I would like to see internet policy and governance research open up to other disciplinary approaches, e.g., digital cultures, feminist studies, philosophy, anthropology (being online is, after all, a cultural practice). Many scholars, of digital cultures for instance, would not consider their work as internet governance, strictly speaking. Yet these research agendas and theoretical approaches, along with those from philosophers and historians of technology, have been looking at human-machine interactions long before internet governance became a recognisable, arguably trendy domain. Besides, to speak of ‘governance’ often elides questions about the exercising of deep power – I have written quite extensively on the need to more thoroughly theorise, rather than describe how power is exercised, and pushed back against through digital, networked domains. We need to beware of fetishising the technical in this domain; the web continues to be a space in which enormous amounts of content – meaning-makings – circulate, including relationships, art and culture such as music, communities in which ideas and identities are forged. For these reasons, internet governance as a scholarly but also policy-making rubric needs to be anchored in multidisciplinary forms of inquiry and action. It is too important to be left to the experts, or become ‘parked’ in one corner of academe in other words.

So I guess my answer to your question is: let’s not get entangled in a standoff between disciplines or one between academic cultures (e.g., Anglo-American and European traditions as, ipso facto, superior to those from other, non-Western traditions). Whilst it is too important to leave it up to experts – technical or legal – this does not mean we should ignore such experts; quite the contrary. If the internet, broadly defined, is a technology of interconnections then so too should the way we study, write, and mobilise around internet governance be interconnected, cross-disciplinary. As these very terms of reference are transforming in the wake of R&D, and now policy agendas are looking to promote artificial intelligence, biotechnologies, nano-technologies, and design innovations such as blockchain technologies, we may also be well along the way in a shift in the very experience of what it means to function as community, act and feel as a human being at the online-offline nexus. In that regard, philosophers, including feminist scholars, have been considering these intimacies between humans and machines for some time, and not always in simplistic, pessimistic terms. Which leaves me with one thought: is there the possibility that some consideration might be needed to whether there should be a ‘right’ not to have to go online?

After the (virtual) gold rush: is Bitcoin more than a speculative bubble?

Acknowledgements: The authors wish to thank Primavera de Filippi, Clément Fontan, Olivier Perreira, Mikael Petitjean, Joakim Sandberg, Danielle Zwarthoed, Pierre-Etienne Vandamme and the journal reviewers for their insightful comments and suggestions on this article.


Introduction

While alternative currencies have always circulated alongside the main official currencies (Blanc, 2000), a new wave of currencies has emerged, bringing about important changes to the way that we conceive money. Relying on cryptography and peer-to-peer networks, these “cryptocurrencies” neither rest on a central authority nor require any centralised management or system of payment. In the wake of criticisms of the contemporary banking system following the 2007 financial crisis, they have gained in popularity, and have been presented as an alternative to the current payment system.

Having inspired a great number of alternative cryptocurrencies such as Ripple, Dogecoin, Ethereum, etc. 1, Bitcoin remains the most prominent cryptocurrency in terms of valuation and public recognition 2. Bitcoin has been the subject of much enthusiasm, billed by some as “the future of money” (Frisby, 2014), or presented as “challenging the global economic order” (Vigna and Casey, 2016). Its proponents are often highly critical of state regulations over money, sometimes conceived as inadmissible infringements on freedom, or as inefficient, unsecure, and inflationary (Nakamoto, 2008, 2009).

Naturally, Bitcoin has also attracted a fair amount of skepticism, some going as far as denying that Bitcoin really constitutes a form of money (Dodd, 2017; Glaser et al., 2014; Yermack, 2013), or noting that the Bitcoin valuation exhibits all the characteristics of a speculative bubble (Dwyer, 2015). Moreover, a substantial amount of commentary on Bitcoin focuses on its technical functioning, or on discussing the achievements and flaws of its underlying technology (see for instance Böhme et al., 2015).

Our aim in this article is different. We will not dwell too long on how the technology behind Bitcoin works, nor enter into the discussion as to whether Bitcoin is indeed a form of money. We want to take Bitcoin’s proponents at their word: if we consider Bitcoin as a form of money, is it appropriate for use as a currency? Bitcoin is often hailed for its supposed advantages over official currencies and the conventional payment system, such as being more stable, safe and efficient, or dispensing with the need for a central authority. But can it effectively meet these expectations? And if not, is there more to Bitcoin than a speculative bubble? This is what we discuss in this article.

After a brief introduction to Bitcoin for those not already familiar with its technical underpinnings (1), this article reviews four separate arguments in favour of its adoption (2): namely that Bitcoin can be a more stable currency, achieve a more secure and efficient payment system, provide a credible alternative to the central management of money, or better protect transaction privacy. We discuss the philosophical background of these arguments by showing how they relate to the principles of justice developed by libertarians such as Nozick (1974) and Rothbard (2016), and neoliberal economists such as Hayek (1990 [1976]) and Friedman (1959, 1969). The third section of the article then assesses whether Bitcoin can effectively fulfil these expectations (3). First, we consider whether Bitcoin’s design makes it a stable currency (3.1). Second, we question the security and efficiency of Bitcoin’s payment system (3.2). Third, we discuss the issue of whether Bitcoin can indeed function as a radically decentralised currency, free from centralised governance or authority (3.3). Finally, we address the extent to which Bitcoin can protect payment privacy (3.4). We conclude that it is unlikely that Bitcoin can function as a currency unless it changes drastically, which would probably detract from the characteristics that make it attractive to its proponents.

1. What is Bitcoin?

Whether Bitcoin is, or is not, a form of money is still a highly debated issue (Bjerg, 2016; Urquhart, 2016; Glaser et al., 2014; Yermack, 2013). Of course, the definition of money is itself a controversial issue. Money is sometimes conceived as a debt token (Graeber, 2011), as a social relation (Ingham, 2004), as a social totality (Aglietta and Orléan, 2002), or as a peculiar social convention fulfilling a certain number of functions (Tobin, 2008), among other examples. Despite their divergences, most theories of money generally recognise that, in modern societies, money is a medium of exchange that is widely accepted within a specific community. 3 This definition will suffice for the purposes of this article, in which we will assume that Bitcoin can indeed be considered a form of money, as our goal is to determine whether, as a currency, it can fulfil certain specific aims or functions.

Bitcoin differs in many respects from “official currencies” such as the Euro or the Dollar. Coins and notes are usually issued by the Central Bank of each monetary zone (the European Central Bank for the Eurozone, the US Federal Reserve for the Dollar), while deposit money, which constitutes the vast majority of the money supply today, is made up of funds held in demand deposit accounts at commercial banks (McLeay, Radia, and Thomas, 2014).

By contrast, Bitcoin is a decentralised cryptocurrency that rests on a distributed repository, protected and managed through the use of cryptographic protocols. It is thus independent from any central authority.

First, Bitcoin is not backed by a State or a Central Bank. Unlike the Euro or the Dollar, where a Central Bank is in charge of ensuring price stability and financial stability through adequate monetary policy (Goodhart, 2011; Goodhart et al., 2014), there is no such central authority in the Bitcoin system. Nor is there a lender of last resort, that is, a State or a Central Bank that could bail out banks in the event of a financial panic (Goodhart, 1991; Blinder, 2010).

Second, Bitcoin’s payment system is entirely decentralised and rests on an open-source cryptographic protocol. This protocol originates from an article published in 2008 by a certain Satoshi Nakamoto (2008), whose identity remains mysterious (Davis, 2011). The central innovation of Bitcoin, which combines previous advances in cryptography such as proof-of-work technology (Narayanan and Clark, 2017), is that it is based on a decentralised public ledger (Ali et al., 2014a). In a conventional payment system, banks hold a record of transactions and ensure that no unit of money is used more than once by the same user (the “double-spending” problem). With Bitcoin, this control is decentralised through a public ledger operated on a peer-to-peer network. This ledger has several important properties. First, every user can verify and process transactions. Moreover, the Bitcoin protocol secures the ledger against falsification, without resorting to any banking institution or central authority. Finally, an important consequence of the public availability of this ledger is that Bitcoin can only offer “pseudo-anonymity” to its users: details of all transactions are logged on the public ledger, where the only indication of the identity of the parties is their Bitcoin address (Luu and Imwinkelried, 2015).
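To make the tamper-evidence of such a ledger concrete, the following Python sketch chains blocks by hashing their predecessors. It is a deliberate simplification, not Bitcoin’s actual data structures (real blocks carry headers, Merkle trees of transactions and a proof of work), but it shows why any retroactive falsification of the record is immediately detectable:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 digest of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, transactions: list) -> None:
    """Append a block that commits to the hash of its predecessor."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def verify_chain(chain: list) -> bool:
    """Any edit to an earlier block breaks the hash link to every later block."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

ledger = []
append_block(ledger, [{"from": "addr_a", "to": "addr_b", "btc": 1.5}])
append_block(ledger, [{"from": "addr_b", "to": "addr_c", "btc": 0.5}])
assert verify_chain(ledger)

# Tampering with an earlier block invalidates the whole chain:
ledger[0]["transactions"][0]["btc"] = 100.0
assert not verify_chain(ledger)
```

Because each block commits to the hash of the previous one, rewriting history requires recomputing every subsequent block, which is exactly what the proof-of-work mechanism (discussed below) makes prohibitively expensive.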

A third crucial difference between Bitcoin and conventional currencies lies in its creation process. Every user can participate in the creation of new Bitcoins by solving deliberately difficult computational puzzles (though in practice this “mining” process is mainly taken up by professional miners). The first Bitcoins were created from scratch and used by the first Bitcoin users. The first user of the protocol, assumed to be Nakamoto himself, mined the first 50 Bitcoins in 2009 (Wallace, 2011). Subsequent Bitcoins are created when new transactions take place, as a reward going to those who successfully add a new block to the ledger. More precisely, miners, by solving these puzzles, try to verify each transaction and to earn the right to add it to a new “block” containing several transactions, appended to the Bitcoin ledger (also called the “Blockchain”, for that reason). This new block is accepted into the ledger if it contains valid transactions and a new puzzle solution. Miners all compete to verify transactions in order to get the reward attached to the completion of a block. Along with this reward, miners may also charge a fee for processing transactions, as a complementary revenue. While these fees were marginal at the start, they have recently risen steeply due to network congestion, which sparked a major controversy about reforming the protocol (see section 3.1). Thus, every time a block is verified, new Bitcoins are minted. 4
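The puzzle-solving described above can be sketched schematically: a miner iterates a “nonce” until the hash of the candidate block meets a difficulty target. The sketch below simplifies Bitcoin’s actual rule (which compares a double SHA-256 digest against a 256-bit numeric target) by merely requiring a fixed number of leading zero hex digits:

```python
import hashlib

def mine(header: str, difficulty: int) -> int:
    """Brute-force a nonce whose SHA-256 digest starts with `difficulty`
    zero hex digits; expected work grows 16-fold per extra digit required."""
    target = "0" * difficulty
    nonce = 0
    while not hashlib.sha256(f"{header}{nonce}".encode()).hexdigest().startswith(target):
        nonce += 1
    return nonce

# Finding a solution is costly, but verifying a claimed solution takes one hash:
nonce = mine("prev_hash|merkle_root|timestamp", difficulty=4)
digest = hashlib.sha256(f"prev_hash|merkle_root|timestamp{nonce}".encode()).hexdigest()
assert digest.startswith("0000")
```

This asymmetry, expensive to produce, trivial to verify, is what lets every node check a new block without trusting the miner who found it.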

However, this Bitcoin creation process has an algorithmic limit. The Bitcoin protocol has a marginally decreasing rate of Bitcoin creation per block, which approximates the rate at which gold is mined. Therefore, the total supply of Bitcoins will asymptotically approach 21 million (Houy, 2014), a cap which, according to some estimations, will be reached around the year 2140 (Ali et al., 2014a). The reward of miners is therefore set to decrease, being halved every 210,000 blocks, while the difficulty of mining is programmed to increase along with the network size. As of today, more than 17 million Bitcoins have been mined (according to blockchain.info, consulted 19/07/2018). Approximately 200,000 transactions take place every day, for an estimated value of less than 1 million BTC.
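The 21 million cap follows directly from this halving schedule, as a short computation shows. Using integer arithmetic on satoshi (the smallest unit, 10⁻⁸ BTC), as Bitcoin’s own implementation does, the geometric series of rewards sums to just under 21 million:

```python
def total_supply_btc() -> float:
    """Sum all block rewards: 50 BTC per block, halved every 210,000 blocks,
    until the reward (counted in satoshi) rounds down to zero."""
    SATOSHI_PER_BTC = 100_000_000
    reward = 50 * SATOSHI_PER_BTC   # initial reward, in satoshi
    supply = 0
    while reward > 0:
        supply += 210_000 * reward  # blocks per halving epoch
        reward //= 2                # the halving, with integer rounding
    return supply / SATOSHI_PER_BTC

print(total_supply_btc())  # just under 21 million BTC
```

Because the halving eventually rounds the reward down to zero satoshi, the limit is reached after a finite number of epochs, slightly below the nominal 21 million figure.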

2. The case for Bitcoin adoption

Bitcoin’s proponents do not form a homogeneous group, and many people may support Bitcoin adoption for different reasons. However, the main recurring cases for Bitcoin adoption may be summarised as follows:

  • Bitcoin can constitute a more stable currency than conventional state-sponsored money, by taking monetary policy out of the government’s hands
  • Bitcoin can provide a more secure and efficient payment system compared to a system relying on trusted third parties
  • Bitcoin can dispense with the need of coercive institutions such as States and Central Banks, by achieving a decentralised securing of transactions through cryptographic proof
  • Bitcoin helps protect users’ privacy against abuse of state power through government surveillance

First, Bitcoin is often hailed as a means to achieve a more stable monetary system (Ametrano, 2016; Collard, 2017; Lakomski-Laguerre and Desmedt, 2015; Rochard, 2013). As Nakamoto (2009) stresses, with conventional currencies, “the central bank must be trusted not to debase the currency, but the history of fiat currencies is full of breaches of that trust”. As others have noted (ECB, 2012, p. 23), this criticism echoes the neoliberal critique that state monopoly in the issuance of money will necessarily lead to over-inflation, resulting in depressions and unemployment (Hayek, 1990 [1976]; Friedman, 1959, 1969). Hayek argued that governments have a tendency to abuse their monopoly power by systematically creating too much money (Hayek, 1990 [1976], pp. 28–32). Similarly, Friedman and Schwartz (1963) argue that, historically, interventions of the Federal Reserve of the United States have been mostly detrimental to economic stability and have often worsened crises rather than solved them. Even if this account has been contested (Kindleberger, 1973, 1978), Friedman argues on this basis that monetary policy should “avoid sharp swings” (Friedman, 1968, p. 15) and proposes to “freeze” the monetary base by setting a fixed rate of growth in the amount of money (around 3–5% according to Friedman (1959, p. 91; 1968, p. 16)). His argument is based on historical evidence, but also on his own theory, which, similarly to Hayek’s (1990 [1976]), predicts that excessive money creation is inflationary and cannot impact employment in the long run (Friedman, 1968).

The Bitcoin protocol is designed in this spirit and has been praised for its “perfect monetary policy” (Rochard, 2013): since no central agency can control Bitcoin’s supply, whose rate of growth is set algorithmically, it is immune from inflation. Indeed, unless a majority of nodes collectively decides to modify the protocol itself, there is no procedure for altering the rate of Bitcoin creation. It is not our purpose in this article to discuss the economic merits of such a fixed or “algorithmic” monetary policy, an issue which is the subject of an extensive literature (see Bordo, 2008, for a review of the recent debates). However, as we shall see in section 3, to really fulfil that promise, Bitcoin must be able to dispense with any central governance altogether, and it is doubtful that it could while retaining the other qualities that would make it an attractive currency.

Second, Bitcoin is often presented as the basis for a more secure and efficient payment system, one which dispenses altogether with the need for a trusted third party (Ali et al., 2014a; Angel and McCabe, 2015; Grinberg, 2011; Vidan and Lehdonvirta, 2018). According to Angel and McCabe (2015, p. 606), Bitcoin “represents a technological solution that creates appropriate incentives for honesty without needing a government to enforce laws against dishonesty.” This motivation originally comes from a distrust of banking institutions, which, in the context of the 2008 global financial crisis, many consider unsafe (Ali et al., 2014a, p. 6; Maurer et al., 2013, pp. 261–262). Presenting Bitcoin in the aftermath of the crisis, Nakamoto (2009) has some harsh words for our current banking system, where “Banks must be trusted to hold our money and transfer it electronically, but they lend it out in waves of credit bubbles with barely a fraction in reserve”. Bitcoin’s payment system is presented as safer, since it does not require trusting any particular payment intermediary. Moreover, Nakamoto also points to two other disadvantages of having to rely on a trusted third party: the transaction costs it induces, as well as the possibility of fraud through reversal of transactions (see also Angel and McCabe, 2015, p. 606). By providing “a system based on cryptographic proof instead of trust”, Bitcoin purports to reduce transaction costs thanks to the absence of intermediaries, and to reduce opportunities for fraud by making transactions irreversible (Nakamoto, 2008, p. 1).

A third argument contends that Bitcoin may contribute to lessening the level of state coercion facing individuals, by putting money outside the control of government or any centralised institution. Indeed, another common objection to the exercise of monetary policy by states, besides stability, stems from a libertarian concern for the protection of the rights and liberties of individuals (Nozick, 1974; Rothbard, 2016). Safeguarding these rights and liberties puts limits on what others can legitimately do to people without their consent. The State should keep only a marginal role, which basically consists of protecting property rights from theft or fraud. Beyond that, state interventions in the economy encroach on individual freedom (i.e., they amount to coercion) and are therefore wrong. This argument clearly rules out a State monopoly over money. In the words of Murray Rothbard, such a monopoly allows the State to act as a “legalized, monopoly counterfeiter” and use monetary creation as “a giant scheme of hidden taxation”, thereby violating individual property rights (Rothbard, 2016). Similarly, for Hayek, “legal tender is simply a legal device to force people to accept in fulfilment of a contract something they never intended” (Hayek, 1990 [1976], pp. 39–40). It thus violates their freedom to set the terms of a contract voluntarily.

Libertarianism constitutes an important philosophical root among Bitcoin proponents (Golumbia, 2016; Karlstrom, 2014; Lakomski-Laguerre and Desmedt, 2015; Wallace, 2011). For libertarians such as Dowd (2014, p. 64), Bitcoin safeguards “the freedom of the individual to trade, and the freedom of the individual to accumulate, move and protect his or her financial wealth — in other words, financial freedom.” Because it supposedly dispenses with the need for any central institution, Bitcoin may significantly weaken the hold of coercive institutions over individuals’ lives.

Bitcoin’s fourth alleged advantage flows from the previous one: because Bitcoin’s payment system (supposedly) does not rely on trusted intermediaries, it would better protect the privacy of its users than conventional means of payment. For instance, Nakamoto (2009) complains that “we have to trust [banks and payment intermediaries] for our privacy [and] trust them not to let identity thieves drain our accounts”. In the aftermath of the NSA surveillance scandals (Hintz, 2014), which showed that private intermediaries could rarely be trusted to protect the privacy of their customers against overreaching state authorities, privacy has often been viewed as one of Bitcoin’s main appeals.

However, the extent to which Bitcoin can fulfil these promises is doubtful, as we will discuss in the following section. Indeed, while Bitcoin’s distributed cryptographic proof is an important technical achievement with interesting potential applications, basic market analysis makes it dubious that Bitcoin’s promise to act as a reliable non-inflationary currency is really sustainable (section 3.1). Moreover, there are reasons to be wary of its claim to provide a more secure and efficient means of payment, due to the prevalence of intermediaries and transaction costs (section 3.2). Besides, Bitcoin’s decentralised architecture, while making it independent of central governance by Banks or States, is also what makes it extremely difficult for its community of developers and users to govern (section 3.3). Finally, it is highly unlikely that Bitcoin can meet the expectations of users who regard it as a way to better protect the privacy of their transactions, and even if it did, this would raise serious concerns for the possibility of law enforcement and redistribution (section 3.4).

3. Can Bitcoin fulfil its promises?

3.1. Is Bitcoin a stable currency?

One of Bitcoin’s main promises is to provide a more stable currency than conventional, State-backed money, one that would not be plagued by the States’ or Central Banks’ inflationary biases or otherwise nefarious monetary decisions.

However, even if Bitcoins were more widespread in the population, day-to-day use of Bitcoin as a currency would still face important hurdles, due to its high volatility compared to other currencies. Indeed, this volatility undermines its quality both as a means of exchange and as a store of value.

Bitcoin’s volatility is well illustrated by the following graphs (Figures 1, 2, and 3), which show how sharply Bitcoin’s price has fluctuated over time. Figure 1 illustrates how the market price of a Bitcoin has risen from around five dollars in 2011 to an all-time high of $19,783 by the end of 2017. However, due to the scale of this graph, it fails to accurately depict how Bitcoin’s value has varied on a day-to-day basis. To better illustrate Bitcoin’s volatility, it is useful to represent this data in two additional close-up graphs. Figure 2 is limited to the pre-2017 period, while Figure 3 focuses on the period between January 2017 and the present day.

Financial economists have studied Bitcoin’s volatility in depth. Dwyer (2015) finds that Bitcoin’s average volatility is consistently higher than that of gold or a set of foreign currencies. Cheah and Fry (2015) and Cheung, Roca, and Su (2015) show, using econometric models, that the price of Bitcoin exhibits speculative bubbles. These studies suggest that, for many users, Bitcoin is mainly a speculative asset, which people buy and sell for the sake of rapid financial profit, explaining why its value has varied so sharply over time. This has led some to conclude that Bitcoin is a financial asset rather than a currency (Glaser et al., 2014; Urquhart, 2016; Yermack, 2013).
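The volatility measure underlying such comparisons is typically the (annualised) standard deviation of daily log-returns. The following sketch illustrates the computation; it runs on synthetic random-walk price paths rather than actual market data, with the more turbulent path loosely mimicking Bitcoin-sized daily swings:

```python
import math
import random

def annualised_volatility(prices):
    """Annualised standard deviation of daily log-returns."""
    returns = [math.log(b / a) for a, b in zip(prices, prices[1:])]
    mean = sum(returns) / len(returns)
    variance = sum((r - mean) ** 2 for r in returns) / (len(returns) - 1)
    return math.sqrt(variance) * math.sqrt(365)

# Two synthetic paths: ~0.5% typical daily moves vs ~5% daily moves.
random.seed(0)
calm, turbulent = [100.0], [100.0]
for _ in range(365):
    calm.append(calm[-1] * math.exp(random.gauss(0, 0.005)))
    turbulent.append(turbulent[-1] * math.exp(random.gauss(0, 0.05)))

assert annualised_volatility(turbulent) > annualised_volatility(calm)
```

On this metric, studies such as Dwyer (2015) find Bitcoin persistently in the “turbulent” regime relative to gold or major currency pairs.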

Why does volatility matter? First, a volatile asset is a less secure asset from an investor’s point of view. Compared to gold or government bonds, it might yield a greater return, but it bears the risk of abruptly losing its value. Second, volatility means that one cannot predict the future value of a commodity priced in Bitcoin, which tends to fluctuate constantly and randomly. This means that Bitcoin cannot be a stable unit of account, as it is unable to represent adequately the value of goods and services. Volatility exacerbates uncertainty and undermines the possibility of contracting in Bitcoin, which cannot, therefore, constitute a reliable means of exchange or a secure store of value.

In sum, the empirical evidence from Bitcoin’s financial records appears to contradict the claim that Bitcoin can provide a stable means of payment and store of value, in line with the theoretical prescriptions of Friedman and Hayek.

A line graph showing Bitcoin’s market price 2009–2018; the first four years are low and stable, then there is a small peak late 2013, and then it peaks significantly toward the end of 2017 before dropping sharply early 2018
Figure 1: Bitcoin’s market price 2009–2018 (source: Own elaboration based on data collected on blockchain.info)
A line graph showing Bitcoin’s market price 2013–2016; it peaks in the latter half of 2013, begins falling again, and then begins rising in the latter half of 2015
Figure 2: Bitcoin’s market price 2013–2016 (source: Own elaboration based on data collected on blockchain.info)
A line graph showing Bitcoin’s market price 2017–July 2018; it rises sharply late 2017 and then begins falling again.
Figure 3: Bitcoin’s market price 2017–present (source: Own elaboration based on data collected on blockchain.info)

3.2 Is Bitcoin a secure and efficient payment system?

A second argument in favour of Bitcoin adoption contends that it is a more secure and efficient means of payment and store of value than conventional money, as its payment system does not rest on centralised institutions, such as Banks.

However, while Bitcoin’s protocol itself has been remarkably well secured against abuse or manipulation, this security is undercut by the difficulty users face in securing their Bitcoins against fraud or loss. Indeed, Bitcoin users face a dilemma between ensuring their own security and trusting intermediary services. Storing one’s wallet on one’s computer is not much different from keeping one’s money in a safe: insecure passwords can be cracked, stolen through “phishing” scams, or simply forgotten. And because Bitcoin transactions are irreversible, victims are left without recourse in case of theft (Guadamuz and Marsden, 2015, p. 10).

Therefore, for many users, online wallet services and even Bitcoin exchanges can appear as safer alternatives for storing and trading one’s Bitcoins, just as Banks are considered safer than keeping one’s money in a safe. However, if one resorts to such online intermediaries, Bitcoin is no more secure than conventional currencies, where one has to rely on banking and payment intermediaries. It can be even less secure, as few of these services are (for the moment) regulated beyond the usual protection of general contract and insolvency law (the main focus of legislators having been the use of cryptocurrencies for money laundering 5). Users of cryptocurrencies are therefore left without much protection against fraud or bankruptcy. The bankruptcy of MtGox, one of the most prominent Bitcoin exchange platforms (where Bitcoins can be traded for national currencies), has shed light on the risks taken by Bitcoin holders (Popper and Abrams, 2014). The collapse of MtGox was partly due to technological incidents, and to an apparent theft of no less than 744,000 Bitcoins, valued at approximately $350 million at the time (Popper and Abrams, 2014). This illustrates how vulnerable Bitcoin’s users are to fraud or to bankruptcies affecting exchange platforms. As the European Banking Authority rightly highlights, “no specific regulatory protections exist that would cover you for losses if a platform that exchanges or holds your virtual currencies fails or goes out of business” (European Banking Authority, 2013, p. 1). By contrast, centralised payment systems, such as the Euro system, are partly protected from such events. In Belgium, for instance, banking deposits are guaranteed by the State up to €100,000 per person. 6 Of course, the protection of deposits differs from the protection of payments.
However, the fact that deposits are protected is an indirect protection of payments: people are assured that their money (or a large part of it) is safe, and the continuity of payments is therefore guaranteed. Moreover, states usually play the role of lender of last resort. If banks go bankrupt, that is, if they can no longer honour their debts, States can usually bail them out to avoid a collapse of the economy. These two kinds of protection are absent from Bitcoin’s payment system, which exposes users to fraud and to bankruptcies of exchange platforms.

Another difficulty for Bitcoin to act as an efficient means of payment is the issue of transaction costs. While Bitcoin’s speed and low transaction fees were once touted among the cryptocurrency’s advantages over traditional banking solutions, Bitcoin’s scaling problem due to rising user adoption (which we will cover more extensively in the next section) has radically changed the equation.

Indeed, the congestion in the Bitcoin network led to a sharp rise in transaction fees. While for most of the cryptocurrency’s history users had enjoyed negligible transaction fees, the average transaction fee rose from less than $0.10 in January 2017 to about $4 in June 2017, even (briefly) reaching an all-time high of almost $54 per transaction in mid-December 2017. 7 Confirmation times for transactions also rose sharply: from an average of 20 minutes in August 2016, with a peak at 92 minutes on 16 August, they increased to an average of 123 minutes in August 2017, with a peak at 1,524 minutes on 27 August 8. Since then, however, the average fee has decreased significantly to less than $1, and the average confirmation time is back to around 20 minutes, as of June 2018.

This return to normal has been attributed to various factors, such as the calming down of Bitcoin’s speculative bubble of late 2017 (Torpey, 2018) and the adoption of a protocol upgrade called “Segwit”, intended to mitigate the issue of block size (Sedgwick, 2018) by packing more payments into less space on the blockchain (Lee, 2018). However, this respite might be temporary. A future rise in the demand for Bitcoin, and a failure to adapt the Bitcoin protocol to this rise in time, may well lead to higher and more volatile transaction fees. The equation is further complicated by the algorithmic decrease of miners’ rewards, which is supposed to be offset by an increase in transaction fees (Nakamoto, 2008, p. 4).

To sum up, as of today Bitcoin is still far from providing a secure and efficient means of payment. Admittedly, many actors are trying to address these issues through their attempts to reform Bitcoin, which are at the heart of the still ongoing block size debate. Bitcoin users are notably pinning their hopes on a proposed alternative payment network, called the Lightning network, which is still under development. However, as we will see in the next section (3.3), there are good reasons to entertain serious doubts about the capacity of the Bitcoin community to successfully tackle such technical challenges.

3.3 Can Bitcoin avoid formal governance?

As we have seen, for some, one of the main appeals of Bitcoin and other cryptocurrencies lies in their decentralised nature, which minimises the influence of coercive institutions (such as States and Central Banks) on monetary policy. Whatever the merits of the underlying libertarian argument, it is dubious that Bitcoin can dispense altogether with any formal governance or trust in some privileged actors.

Let us begin by noting that the original Bitcoin source code, originally drafted under the name of Satoshi Nakamoto, already contains a great number of substantial rules, which have an effect on the economics of Bitcoin: the decreasing supply of Bitcoins to be minted, the cap on the size of transaction blocks, etc.

Are these rules entirely set in stone, immutable? And if not, who has the power to alter them? This is the issue at the heart of Bitcoin governance. While Bitcoin has indeed no formal governance (there is no constitution or founding principles setting decision-making procedures), a set of practices have emerged, in the interplay of three categories of actors: core developers, miners, and users.

Taking over from Nakamoto’s initial drafting of the protocol, the core development team enjoys a sort of moral authority over the community, which entrusts it with technical decisions. Core developers control the GitHub repository (https://github.com/bitcoin/) and the domain (https://bitcoincore.org). As with many open source development projects, Bitcoin follows an “autocratic-mechanistic” model, where anyone is free to contribute code, but a small group of co-opted developers (the core developers) can ultimately decide which changes get implemented in the software (de Laat, 2007; de Filippi and Loveluck, 2016).

However, it is important to note that the Bitcoin core development team cannot impose any modification to the existing Bitcoin protocol without the consent of at least a substantial number of miners or users. Since Bitcoin is open source software, any user could refuse to update their software and continue to use an older version, or propose an alternative change to shift the software development in a different direction, thereby creating a “fork” (an alternative branch of a software development). In the case of Bitcoin, this can happen essentially through two mechanisms.

The first is called a “soft fork”, and consists in adding stricter rules determining which blocks or transactions are valid. A soft fork can be imposed on the existing network with the collaboration of miners controlling a mere majority of hash power, who can enforce the new rules by rejecting blocks or transactions that do not conform to the change.

The second is called a “hard fork”, which touches on the fundamental characteristics of the protocol such as block structure or difficulty rules. As it is not backward-compatible, a hard fork requires all full nodes to upgrade, or the blockchain could split between users using the new updated version and those using the older version.

Finally, in the Bitcoin development community, a standard way of building consensus around a proposed modification has emerged in the form of documents called Bitcoin Improvement Proposals (BIPs).

For a long time, these issues of governance were mostly ignored, primarily due to the idea that the developers’ role was purely technical and unlikely to cause deep ideological divergence (Lehdonvirta, 2016).

In the last few years, however, the Bitcoin block size controversy has brought to light the importance of governance and what de Filippi and Loveluck (2016) call the “invisible politics” of Bitcoin. Indeed, a deep disagreement divides the Bitcoin community on the issue of Bitcoin’s block size cap, a computational bottleneck that has increasingly worsened transaction fees and processing delays as Bitcoin gained in popularity. A first risk of split occurred in 2015, when some Bitcoin core developers proposed a fork called “Bitcoin XT”, aiming to increase the block size from 1 to 8 megabytes. After much debate, the Bitcoin community stayed loyal to the original Bitcoin protocol (dubbed “Bitcoin Core”), thus avoiding a definitive split. However, the reformists pursued their attempts, and during 2016 and 2017 various fork proposals were made by consortiums of miners or of users. To succeed, these reform proposals generally require reaching a particular adoption rate among a qualified majority of miners or users before a given date. The process is still ongoing, and only one particular proposal (Segwit) has so far been adopted; it remains complex and risky for the integrity of Bitcoin’s blockchain. And indeed, it has already generated its first major split: in August 2017, after months of Bitcoin scaling controversy, a group of users successfully hard-forked Bitcoin, along with its whole transaction history, into a new cryptocurrency with a block size of 8 MB, named Bitcoin Cash. While the hard fork did not cause the price of Bitcoin to crash, as some feared, it nonetheless showed that the risk of a Bitcoin schism was very real.

The risk of schisms can already prove problematic in the context of free and open source software, where forks pose the risk of scattering developers and users between incompatible projects, threatening their sustainability (Robles and González-Barahona, 2012). However, it is even more problematic in the case of a currency, where network effects are crucial (Lehdonvirta, 2016): while a given piece of software can be useful to a very small niche of users, a currency can only function as such if enough people are willing to exchange it or accept it as a means of payment. These schisms could significantly weaken Bitcoin by diminishing its attractiveness as a medium of exchange. Admittedly, until now, the existence of a great number of alternative cryptocurrencies has apparently not curbed user enthusiasm for Bitcoin. However, there is a significant risk that the ongoing multiplication of Bitcoin clones (such as "Bitcoin Cash", "Bitcoin Gold", "Bitcoin Diamond"…) will constitute a factor of confusion for the broader public, thereby threatening its ability to be used as a mainstream medium of exchange.

Therefore, not only is Bitcoin not the self-governing, radically decentralised currency that some of its supporters would want it to be; Bitcoin’s informal governance, plagued by the risk of schisms, also constitutes a significant threat to its sustainability as a currency.

A second significant challenge to the idea that cryptocurrencies can escape governance or central authorities is related to the particular way transaction security is achieved with Bitcoin. Bitcoin’s “proof-of-work” security crucially depends on trusting a majority of nodes in the system: in his 2008 paper, Nakamoto notes that proof-of-work security will be able to resist attackers “as long as honest nodes collectively control more CPU power than any cooperating group of attacker nodes” (Nakamoto, 2008, p. 1). Perhaps this threshold of 50% initially seemed high enough that it could not easily be reached. However, in 2014, a consortium (or “pool”) of miners called GHash.io was able to concentrate 51% of the total computational power (Goodin, 2014). This pool of miners therefore potentially had the ability to circumvent the security of Bitcoin’s payment system, spend the same coins twice, or reject competing miners’ transactions. That this concentration of power did not last long is due to the care of individual miners, who decided to withdraw from the pool out of concern for Bitcoin’s integrity. In response to the criticism, the operators of GHash.io issued a statement and committed to “take all necessary precautions to prevent reaching 51% of all hashing power” (Hajdarbegovic, 2014).
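Nakamoto’s paper itself quantifies this honest-majority assumption: it derives the probability that an attacker controlling a fraction q of the hash power ever catches up with the honest chain from z blocks behind. The sketch below transcribes that calculation (Nakamoto, 2008, section 11) into Python:

```python
import math

def attacker_success(q: float, z: int) -> float:
    """Probability that an attacker with hash-power share q ever overtakes
    the honest chain from z blocks behind (Nakamoto, 2008, section 11)."""
    p = 1.0 - q
    if q >= p:
        return 1.0  # an attacker with a majority of hash power always succeeds
    lam = z * (q / p)
    s = 1.0
    for k in range(z + 1):
        poisson = math.exp(-lam) * lam ** k / math.factorial(k)
        s -= poisson * (1 - (q / p) ** (z - k))
    return s

# With 10% of the hash power, six confirmations make success very unlikely;
# at or above the 50% threshold discussed above, no waiting period helps.
assert attacker_success(0.10, 6) < 0.001
assert attacker_success(0.51, 6) == 1.0
```

The formula makes the GHash.io episode concrete: the usual six-confirmation rule only protects against minority attackers, and becomes worthless the moment any pool crosses 50%.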

Therefore, as others have noted, Bitcoin’s central security feature “depends on the goodwill of a few people whose names nobody knows” (Bershidsky, 2014). Can the Bitcoin community rely on the goodwill of individual miners and the social responsibility of mining pools to avert an attack? Or should mining pools be prevented from acquiring such a position? The latter option would likely involve some kind of antitrust regulation, similar to conventional antitrust laws. It would therefore require a sort of central competition authority to prevent collusion among miners.

Consequently, either the Bitcoin community retains its own libertarian form of “governance” by competition between forks, with risks for its governability, user base and security, or it recognises that some degree of formal central governance is inevitable. However, that recognition leads to what Lehdonvirta bills as the “blockchain’s governance paradox”: if Bitcoin users address the problem of governance by trusting a central institution to make the rules, then why do they need a decentralised cryptocurrency anymore? (Lehdonvirta, 2016).

In summary, the prevailing scepticism towards governance among the Bitcoin community has made any change in its algorithmic regulation difficult and slow to achieve, and prevents putting in place any structural protection against collusion by miners to breach its proof-of-work security. Bitcoin’s informal governance model does not fare well compared to its promise of providing a reliable alternative to the allegedly flawed centralised banking system.

3.4 Does Bitcoin improve payment privacy?

Finally, one last perceived advantage of Bitcoin over conventional currencies is the better protection of payment privacy that it is supposed to provide, since its decentralised payment system makes it independent of banks or other payment intermediaries, and does not require disclosure of an account holder’s identity. Admittedly, this is not a claim that more knowledgeable Bitcoin proponents are likely to make, as it has been at the centre of much criticism. However, it remains a recurrent preconception, at least in popular opinion and among some Bitcoin users, and therefore deserves a brief discussion here.

There are many good reasons why people might seek privacy in their transactions. They might wish to avoid mass data collection of their transaction history by private companies for targeted marketing, or they can be political opponents, fearing retribution from authoritarian regimes.

However, these privacy-protecting features are also what makes Bitcoin a particularly suitable tool for engaging in fraud, illegal business, and tax evasion, which has been a recurrent concern for lawmakers (Gibbs, 2018; Gruber, 2013; Kollewe, 2018; Marian, 2013; Mersch, 2018).

At the core of the Bitcoin protocol are two distinct features, which have opposite tendencies in terms of anonymity. On the one hand, Bitcoin’s public ledger tends to make it more transparent, as all transactions are logged in a publicly accessible ledger. On the other hand, Bitcoin’s peer-to-peer network tends to make it more anonymous (as it does not rely on the presence of financial intermediaries holding all the users’ information).

As others have noted, Bitcoin only provides pseudo-anonymity: while a given transaction only lists the pseudo-anonymous Bitcoin addresses of the sender and receiver, the details of all transactions are logged on the public ledger. Therefore, as Luu and Imwinkelried (2015, p. 10) put it, “[i]f a Bitcoin address could somehow be associated with a specific identity, the pseudo-anonymity would be penetrated”. Parties to a transaction could be traced back to the holder of an exchange account by using identification techniques such as traffic analysis and transaction graph analysis (Biryukov, Khovratovich, and Pustogarov, 2014; Luu and Imwinkelried, 2015, p. 24; Reid and Harrigan, 2013, p. 17). State authorities could use such information to identify customers of cryptocurrency exchanges, provided such services are subject to “Know Your Customer” obligations under anti-money laundering regulations, as is the case in the US under the US Department of Treasury’s guidance on virtual currencies 9, as well as in the EU with the recent adoption of the 5th Anti-Money Laundering Directive 10. In the EU, since “virtual currency” exchange services as well as custodian wallet providers are now covered by the 5th AML Directive, they are subject to customer due diligence obligations as well as obligations to report transactions suspected of being the proceeds of criminal activity, or of being related to money laundering or terrorist financing.
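A toy illustration of the transaction graph analysis mentioned above is the common-input-ownership heuristic used in the deanonymisation literature (e.g., Meiklejohn et al., 2013): addresses spent together as inputs of one transaction are assumed to belong to the same wallet, so overlapping co-spends chain addresses into clusters. The addresses and transactions below are invented, and real analyses are considerably more sophisticated:

```python
from collections import defaultdict

def cluster_addresses(transactions):
    """Union-find clustering: all input addresses of one transaction
    are assumed to be controlled by the same entity."""
    parent = {}

    def find(a):
        parent.setdefault(a, a)
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path compression
            a = parent[a]
        return a

    def union(a, b):
        parent[find(a)] = find(b)

    for tx in transactions:
        first, *rest = tx["inputs"]  # assumes at least one input per tx
        for addr in rest:
            union(first, addr)
        find(first)  # register single-input addresses too
    clusters = defaultdict(set)
    for addr in parent:
        clusters[find(addr)].add(addr)
    return list(clusters.values())

# Hypothetical ledger: two transactions co-spend overlapping addresses,
# so A, B and C collapse into one cluster even though no single
# transaction links all three.
txs = [
    {"inputs": ["A", "B"], "outputs": ["X"]},
    {"inputs": ["B", "C"], "outputs": ["Y"]},
    {"inputs": ["D"],      "outputs": ["Z"]},
]
clusters = cluster_addresses(txs)  # two clusters: {A, B, C} and {D}
```

Once any single address in a cluster is tied to a real identity (say, through an exchange’s KYC records), the whole cluster is deanonymised.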

Therefore, until Bitcoin use becomes sufficiently widespread that an autonomous Bitcoin economy is imaginable, the gatekeeper position held by exchanges in the flow of Bitcoin appears to undercut the claim that Bitcoin is any more privacy-protecting than conventional currency. Indeed, neither is entirely disintermediated; they simply rely on different sorts of financial intermediaries.

A possible way to disrupt this possibility of identification would be to use mixing services (also called “laundry services”), which allow a user to exchange a given amount of tainted Bitcoins for a corresponding sum coming from a multiplicity of other users, sent to a new Bitcoin address (Gruber, 2013, pp. 189–193; Marian, 2013, p. 44). However, the issue with relying on third-party mixing services is that they could themselves be the target of court injunctions, or be subject to “Know Your Customer” obligations, as with Bitcoin exchanges.
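The vulnerability of mixers to injunctions follows from how they work: the public ledger only shows equal-value deposits flowing in and payouts flowing out to fresh addresses, but the mixer itself must keep the deposit-to-payout pairing in order to pay the right users. A minimal sketch under invented address names (real mixers add delays, split amounts, and charge fees):

```python
import secrets

def mix(requests):
    """Toy mixer. Each request is (deposit_addr, payout_addr); all
    deposits are for the same amount. Outside observers see only
    unlinked inflows and outflows, but the mixer's internal log keeps
    the full pairing, which a court order could compel it to disclose."""
    payouts = [payout for _, payout in requests]
    for i in range(len(payouts) - 1, 0, -1):   # secure Fisher-Yates shuffle
        j = secrets.randbelow(i + 1)
        payouts[i], payouts[j] = payouts[j], payouts[i]
    ledger_in = sorted(dep for dep, _ in requests)  # visible on-chain
    ledger_out = payouts                            # visible on-chain, random order
    mixer_log = dict(requests)                      # private linkage data
    return ledger_in, ledger_out, mixer_log

# Hypothetical deposit and payout addresses.
ledger_in, ledger_out, mixer_log = mix(
    [("d1", "p1"), ("d2", "p2"), ("d3", "p3")]
)
```

The on-chain anonymity set is only as large as the batch of concurrent users, and the `mixer_log` is exactly the kind of record that KYC obligations or seizures can reach.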

Of course, users could resort to exchanges or mixing services based in lax or lawless jurisdictions in order to minimise the risk that their data be handed over to the authorities. They would, however, face an important issue of trust, as unregulated mixing services are also likely to be the least reliable, with little guarantee of getting one’s money back in case of fraud. This apparently happened to Meiklejohn et al. (2013) while they were studying these services; they note in their article that “[o]ne of these [mixing services], BitMix, simply stole our money”.

Thus, it does not seem that Bitcoin can achieve a better level of transaction privacy than conventional currencies. Some even go so far as to argue that, far from making the job of law enforcement agencies harder, Bitcoin generates new opportunities to track down illicit activities (Kaplanov, 2012, p. 171). Companies like Chainalysis have developed software aimed at analysing the blockchain to identify Bitcoin users, which has been used by several public agencies, such as the US Internal Revenue Service, the FBI, or Europol (Orcutt, 2017).

This provides a good reason for State authorities not to ban Bitcoin altogether, at the risk of promoting alternative cryptocurrencies that better protect transaction privacy without resorting to third parties. However, a case could be made that cryptocurrencies embedding protocol-level privacy protection (such as the proposed Zerocash, which would integrate a mixing service in the blockchain itself 11) should be banned, as they could be used as gateway currencies for transacting in Bitcoin, thereby evading scrutiny by State authorities. Whether such a repressive approach is at all feasible remains an open question.

More fundamentally, the decentralised (although not entirely disintermediated) nature of cryptocurrencies like Bitcoin has another important drawback for user privacy: without a bank or financial institution, users are solely responsible for the privacy of their transactions. And the average consumer, who is not particularly tech-savvy, will be more likely to commit some privacy oversight in their Bitcoin transactions. Therefore, paradoxically, the many flaws in Bitcoin’s privacy protection mean that unsophisticated users might enjoy a lesser level of transaction privacy by using such a pseudo-anonymous cryptocurrency than by relying on traditional financial intermediaries.

Conclusion

Now that the latest Bitcoin "gold rush" appears to have – momentarily – receded, the central question for potential Bitcoin users remains: are there good reasons to adopt Bitcoin, other than investing in a speculative asset?

This article highlighted four arguments justifying the attractiveness of Bitcoin. To recall, the first lies in Bitcoin’s practical promise of constituting a stable currency, immune to inflation, in the spirit of what neoliberal authors like Hayek or Friedman have argued for. The second is that Bitcoin could help reduce state coercion by dispensing with the need for a monetary policy, in line with the libertarian ideal of a minimal state. The third argument is that Bitcoin would constitute a more efficient and safe system of payment. And the fourth is that Bitcoin supposedly better protects transaction privacy than the conventional banking system.

As we saw, it is dubious that Bitcoin, as it is now, can deliver on these promises.

First, Bitcoin’s financial record detracts from any claim of being a stable currency: its highly volatile value makes it risky for merchants to accept, and inconvenient for consumers to use. This, alone, makes Bitcoin unfit to be used as an alternative currency for the time being.

Second, Bitcoin’s promise to provide an efficient store of value or means of payment is not supported by evidence from its use. On the one hand, securely storing and trading one’s Bitcoins requires a substantial level of knowledge from its users. On the other hand, consumer confidence in Bitcoin’s capacity to provide efficient payment facilities rests on shaky foundations: increased success of Bitcoin could lead to higher transaction fees and longer confirmation times, which would make it impractical for consumers.

Third, the promise of making Bitcoin a currency independent from central authorities has proved largely a double-edged sword. Even if the Bitcoin protocol is a remarkable achievement, a currency run by a radically decentralised network, it is highly unlikely that Bitcoin can act as a reliable and governable currency without some formal governance mechanisms and without resorting to some financial intermediaries. As exemplified by the ongoing scaling debate, the Bitcoin community’s unwillingness to seriously address the issue of Bitcoin governance undermines its resilience to economic and technical challenges. Bitcoin’s current informal governance mechanism generates recurrent risks for its sustainability and integrity, as it creates uncertainty for users as to the value of their holdings as well as to which “fork” of the Bitcoin blockchain constitutes the “real” Bitcoin. Moreover, without formal governance mechanisms, Bitcoin ultimately relies on trusting the goodwill of its users (the very thing it purported to avoid) to avert potential miner collusion mounting a 51% attack. The emergence of a multitude of new intermediaries seems to indicate that even with cryptocurrencies, banking and financial intermediaries may still have some usefulness as a layer of protection for consumers after all.

Fourth, we pointed out that Bitcoin’s pseudo-anonymous payment system provided a very limited layer of protection for the privacy of user transactions. As with security, Bitcoin puts most of the burden of privacy protection on its users’ shoulders, which creates a disparity in user privacy along the same lines as the digital divide in technology knowledge. Therefore, paradoxically, for the average user, Bitcoin might provide a lesser level of transaction privacy than traditional financial intermediaries. And even if Bitcoin did provide a better level of transaction privacy than conventional currencies, it would generate a range of further questions as to the possibility of law enforcement against crime and tax evasion.

Therefore, contrary to what its proponents might hope for, Bitcoin is far from fulfilling its promises to be a stable, efficient, radically decentralised and privacy-protecting currency. The reason for its relative popularity and substantial valuation lies thus either in unrealistic expectations from its users as to its capacity to act as a functioning currency, or in the prospects of rewards allowed by its status of high-risk speculative asset.

This, in turn, does not mean that cryptocurrencies are a useless development altogether. Their advent has brought about a great number of worthy innovations. In particular, Bitcoin’s distributed ledger technology might find useful applications in many areas. Some have hailed blockchain’s potential in fostering decentralised organisation, by reducing the transaction costs of organising cooperation among a great number of individuals (de Filippi and Wright, 2018, p. 136). Even central and private banks have started looking into blockchain technology, not so much for introducing cryptocurrencies (although such plans do exist 12) but mainly to improve their infrastructure in areas such as clearing and settlement or trade finance (Arnold, 2017). While these projects are clearly inspired by the technological innovations behind Bitcoin, they are likely to diverge significantly from Bitcoin’s main ideological commitments (Bordo and Levin, 2017). Blockchain technology could also be used in countries where banks cannot be trusted, or where the monetary system is failing, as some have argued (Varoufakis, 2014). In general, blockchain could be used to reduce costs (although on the condition of adopting alternative mechanisms to reduce its environmental impact) 13 and to make payment settlements easier. However, with blockchain applications such as Bitcoin, it is important to take such claims with a grain of salt, and to go beyond the overly enthusiastic rhetoric to assess the actual merits of the technology.

If its proponents want Bitcoin to become more than a speculative asset, they will probably have to adopt a more explicit and formalised governance in order to seriously tackle not only the technical challenges ahead, but also the underlying political choices as to the cryptocurrency’s future. The question remains, however, whether Bitcoin can be reformed so as to become a workable currency while still retaining some of the attractiveness that its enthusiasts saw in its initial promises. As of today, Bitcoin seems far from being the future of money.

References

Aglietta, M., and Orléan, A. (2002). La monnaie: entre violence et confiance. Paris: Odile Jacob.

Ali, R., Barrdear, J., Clews, R. and Southgate, J. (2014a). Innovations in payment technologies and the emergence of digital currencies. Bank of England Quarterly Bulletin, Q3. Retrieved from https://econpapers.repec.org/article/boeqbullt/0147.htm

Ali, R., Barrdear, J., Clews, R. and Southgate, J. (2014b). The economics of digital currencies. Bank of England Quarterly Bulletin, Q3. Retrieved from https://www.bankofengland.co.uk/-/media/boe/files/digital-currencies/the-economics-of-digital-currencies

Ametrano, F. M. (2016). Hayek Money: The Cryptocurrency Price Stability Solution (SSRN Scholarly Paper No. ID 2425270). Rochester, NY: Social Science Research Network. Retrieved from https://papers.ssrn.com/abstract=2425270

Angel, J. J., and McCabe, D. (2015). The Ethics of Payments: Paper, Plastic, or Bitcoin? Journal of Business Ethics.132(3), 603–11. doi:10.1007/s10551-014-2354-x

Arnold, M. (2017, October 16). Five ways banks are using blockchain. Financial Times. London.

Bershidsky, L. (2014, July 17). Trust Will Kill Bitcoin. Bloomberg View. Retrieved from http://www.bloombergview.com/articles/2014-07-17/trust-will-kill-Bitcoin

Bjerg, O. (2016). How is Bitcoin Money? Theory, Culture and Society, 33(1), 53–72. doi:10.1177/0263276415619015

Blinder, A. S. (2010). How Central Should the Central Bank Be? Journal of Economic Literature, 48(1), 123–133. doi:10.1257/jel.48.1.123

Böhme, R., Christin, N., Edelman, B. and Moore, T. (2015). Bitcoin: Economics, Technology, and Governance. The Journal of Economic Perspectives, 29(2), 213–238. doi:10.1257/jep.29.2.213

Bordo, M. D. (2008). History of monetary policy. In S. N. Durlauf and L. E. Blume (Eds.), The New Palgrave Dictionary of Economics (second edition, pp. 715–721). Basingstoke: Palgrave MacMillan.

Bordo, M. D., and Levin, A. T. (2017). Central Bank Digital Currency and the Future of Monetary Policy (Working Paper No. 23711). Cambridge (Mass.): National Bureau of Economic Research. doi:10.3386/w23711

Bordo, M. D., and Schwartz, A. J. (1995). The Performance and Stability of Banking Systems under ‘Self-Regulation’: Theory and Evidence. Cato Journal, 14(3), 453–479.

Cheah, E.-T. and Fry, J. (2015). Speculative bubbles in Bitcoin markets? An empirical investigation into the fundamental value of Bitcoin. Economics Letters, 130, 32–36. doi:10.1016/j.econlet.2015.02.029 Retrieved from https://eprints.soton.ac.uk/410439/

Collard, B. (2017, January 4). Le Bitcoin, la monnaie de la liberté. Retrieved 2 March 2018, from https://www.contrepoints.org/2017/01/04/276673-bitcoin-monnaie-de-liberte

Davis, J. (2011, October 10). The Crypto-Currency: Bitcoin and its mysterious inventor. The New Yorker, New York. Retrieved from https://www.newyorker.com/magazine/2011/10/10/the-crypto-currency

De Filippi, P., and Loveluck, B. (2016). The Invisible Politics of Bitcoin: Governance Crisis of a Decentralized Infrastructure. Internet Policy Review, 5(4). doi:10.14763/2016.3.427

De Filippi, P., and Wright, A. (2018) Blockchain and the Law. The Rule of Code. Cambridge, (Mass.): Harvard University Press.

Deetman, S. (2016, March 29). Bitcoin Could Consume as Much Electricity as Denmark by 2020. Retrieved from https://motherboard.vice.com/en_us/article/bitcoin-could-consume-as-much-electricity-as-denmark-by-2020

de Laat, P.B. (2007). Governance of open source software: state of the art. Journal of Management and Governance, 11(2), 165–177. doi:10.1007/s10997-007-9022-9

Dodd, N. (2017). The social life of Bitcoin. Theory, Culture and Society. doi:10.1177/0263276417746464

Dowd, K. (2014). New Private Monies: A Bit-Part Player? Hobart Paper 174. London: Institute of Economic Affairs.

Dwyer, G. P. (2015). The economics of Bitcoin and similar private digital currencies. Journal of Financial Stability, 17, 81–91. doi:10.1016/j.jfs.2014.11.006

European Banking Authority. (2013). Warning to consumers on virtual currencies (Statement no EBA/WRG/2013/01). Retrieved from https://www.eba.europa.eu/-/eba-warns-consumers-on-virtual-currencies

European Central Bank. (2012). Virtual Currency Schemes. Frankfurt am Main. Retrieved from www.ecb.int/pub/pdf/other/virtualcurrencyschemes201210en.pdf

European Central Bank. (2015). Virtual currency schemes: a further analysis. Frankfurt am Main. Retrieved from https://www.ecb.europa.eu/pub/pdf/other/virtualcurrencyschemesen.pdf

Friedman, M. (1959). A Program for Monetary Stability. New York: Fordham University Press.

Friedman, M. (1969). The optimum quantity of money and other essays. Chicago: Aldine.

Friedman, M., and Schwartz, A. J. (1963). A monetary history of the United States 1867-1960. Princeton, N.J: Princeton university press.

Frisby, D. (2014). Bitcoin: The Future of Money? London: Unbound.

Gibbs, S. (2018, February 26). EU finance head: we will regulate bitcoin if risks are not tackled. The Guardian. Retrieved from http://www.theguardian.com/technology/2018/feb/26/eu-finance-head-regulate-bitcoin-cryptocurrencies-risks

Glaser, F., Zimmermann, K., Haferkorn, M., Weber, M. C. and Siering, M. (2014). Bitcoin - Asset or Currency? Revealing Users’ Hidden Intentions. Proceedings of the 22nd European Conference on Information Systems. Tel Aviv, Israel. Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2425247

Golumbia, D. (2016). The Politics of Bitcoin. Minneapolis, MN: University of Minnesota Press.

Goodhart, C. A. E. (1991). Are central banks necessary? In F. Capie and G. E. Wood (Eds.), Unregulated banking: chaos or order? (pp. 1–21). London: MacMillan.

Goodhart, C. A. E. (2011). The changing role of central banks. Financial History Review, 18(2), 135–154. doi:10.1017/S0968565011000096

Goodhart, C. A. E., Gabor, D., Vestergaard, J., and Ertürk, I. (2014). Central Banking at a Crossroads: Europe and Beyond. London: Anthem Press.

Goodin, D. (2014, June 15). Bitcoin security guarantee shattered by anonymous miner with 51% network power. Retrieved from http://arstechnica.com/security/2014/06/Bitcoin-security-guarantee-shattered-by-anonymous-miner-with-51-network-power/

Graeber, D. (2011). Debt the first 5,000 years. Brooklyn, N.Y.: Melville House.

Grinberg, R. (2011). Bitcoin: An Innovative Alternative Digital Currency. Hastings Science and Technology Law Journal, 4, 159–207.

Gruber, S. (2013). Trust, identity and disclosure- are Bitcoin exchanges the next virtual havens for money laundering and tax evasion? Quinnipiac Law Review, 32(1), 135–208. Retrieved from https://papers.ssrn.com/abstract=2312110

Guadamuz, A., and Marsden, C. (2015). Blockchain and Bitcoin: Regulatory responses to cryptocurrencies. First Monday, 20(12). Retrieved from https://firstmonday.org/article/view/6198/5163

Hajdarbegovic, N. (2014, January 9). Bitcoin Miners Ditch Ghash.io Pool Over Fears of 51% Attack. Coindesk.Com. Retrieved from https://www.coindesk.com/bitcoin-miners-ditch-ghash-io-pool-51-attack/

Hayek, F. A. von. (2007). The Road to Serfdom: Text and Documents--The Definitive Edition. (B. Caldwell, Ed.). Chicago: University Of Chicago Press. (Original work published 1944)

Hayek, F. A. von. (1990) [1976]. Denationalisation of money: the argument refined: an analysis of the theory and practice of concurrent currencies (3rd edition). London: Institute of Economic Affairs.

Hintz, A. (2014). Outsourcing Surveillance—Privatising Policy: Communications Regulation by Commercial Intermediaries. Birkbeck Law Review, 2(2), 349. Available at http://orca.cf.ac.uk/70838/

Houy, Nicolas. (2014). The Economics of Bitcoin Transaction Fees. GATE Working Paper 2014/07. Retrieved from https://halshs.archives-ouvertes.fr/halshs-00951358

Ingham, G. (2004). The Nature of Money. Cambridge: Polity.

Kaminska, I. (2017, September 18). What is ‘Utility Settlement Coin’ really? Financial Times Alphaville. London.

Kaplanov, N. M. (2012). Nerdy Money: Bitcoin, the Private Digital Currency, and the Case Against Its Regulation. Loyola Consumer Law Review, 25(1), 111–174. Retrieved from https://www.ssrn.com/abstract=2115203

Karlstrøm, H. (2014). Do libertarians dream of electric coins? The material embeddedness of Bitcoin. Distinktion: Scandinavian Journal of Social Theory, 15(1), 23–36. doi:10.1080/1600910X.2013.870083

Katz, L. (2017, July 19). Bitcoin Acceptance Among Retailers Is Low and Getting Lower. Bloomberg. Retrieved from https://www.bloomberg.com/news/articles/2017-07-12/bitcoin-acceptance-among-retailers-is-low-and-getting-lower

Kemp, R. (2010). Open source software (OSS) governance in the organisation. Computer Law & Security Review, 26(3), 309–316. doi:10.1016/j.clsr.2010.01.008

Kindleberger, C. P. (1973), The World in Depression, 1929-1939. London: Allen Lane.

Kindleberger, C. P. (1978). Manias, Panics, and Crashes: A History of Financial Crises. New-York: Basic Books.

Kollewe, J. (2018, February 8). ECB official backs bitcoin clampdown. The Guardian. Retrieved from http://www.theguardian.com/technology/2018/feb/08/ecb-official-backs-bitcoin-clampdown

Lakomski-Laguerre, O., and Desmedt, L. (2015). L’alternative monétaire Bitcoin : une perspective institutionnaliste. Revue de la régulation, (18), 1–19. doi:10.4000/regulation.11593

Lee, T. B. (2018, February 20). Bitcoin’s transaction fee crisis is over—for now. Ars Technica. Retrieved from https://arstechnica.com/tech-policy/2018/02/bitcoins-transaction-fee-crisis-is-over-for-now/

Lee, S., Baek, H., and Jahng, J. (2017). Governance strategies for open collaboration: Focusing on resource allocation in open source software development organizations. International Journal of Information Management, 37(5), 431–437. doi: 10.1016/j.ijinfomgt.2017.05.006

Lehdonvirta, V. (2016). The blockchain paradox: Why distributed ledger technologies may do little to transform the economy. Oxford Internet Institute. Retrieved from https://www.oii.ox.ac.uk/blog/the-blockchain-paradox-why-distributed-ledger-technologies-may-do-little-to-transform-the-economy/

Luu, J. and Imwinkelried, E. J. (2015). The Challenge of Bitcoin Pseudo-Anonymity to Computer Forensics (UC Davis Legal Studies Research Paper Series no 462). Retrieved from https://papers.ssrn.com/abstract=2671921

Marian, O. Y. (2013). Are Cryptocurrencies ‘Super’ Tax Havens? Michigan Law Review First Impression, 112(38). Retrieved from https://papers.ssrn.com/abstract=2305863

Maurer, B., Nelms, T. C., and Swartz, L. (2013). “When perhaps the real problem is money itself!”: the practical materiality of Bitcoin. Social Semiotics, 23(2), 261–277. doi:10.1080/10350330.2013.777594

McLeay, M., Radia, A. and Thomas, R. (2014). Money creation in the modern economy. Bank of England Quarterly Bulletin, Q1. Retrieved from https://www.bankofengland.co.uk/-/media/boe/files/quarterly-bulletin/2014/money-creation-in-the-modern-economy

Meiklejohn, S., Pomarole, M., Jordan, G., Levchenko, K., McCoy, D., Voelker, G. M. and Savage, S. (2013). A Fistful of Bitcoins: Characterizing Payments Among Men with No Names. Proceedings of the ACM SIGCOMM Internet Measurement Conference, IMC, 2013, 127–139. doi:10.1145/2504730.2504747 Retrieved from https://cseweb.ucsd.edu/~smeiklejohn/files/imc13.pdf

Mersch, Y. (2018). Virtual or virtueless? The evolution of money in the digital age. London: Official Monetary and Financial Institutions Forum. Retrieved from https://www.ecb.europa.eu/press/key/date/2018/html/ecb.sp180208.en.html

Miers, I., Garman, C., Green, M., and Rubin, A. D. (2013). Zerocoin: Anonymous Distributed E-Cash from Bitcoin. Presented at the IEEE Symposium on Security & Privacy, Oakland. Retrieved from http://spar.isi.jhu.edu/~mgreen/ZerocoinOakland.pdf

Nakamoto, S. (2008). Bitcoin: A Peer-to-Peer Electronic Cash System. Retrieved from http://www.Bitcoin.org/Bitcoin.pdf

Nakamoto, S. (2009). Bitcoin open source implementation of P2P currency. Retrieved from http://p2pfoundation.ning.com/forum/topics/Bitcoin-open-source

Nozick, R. (1974). Anarchy, State and Utopia. New York: Basic Books.

O'Mahony, S. and Ferraro, F. (2007). The emergence of Governance in an Open Source Community, Academy of Management Journal, 50(5), 1079–1106. doi:10.2307/20159914 Retrieved from https://www.jstor.org/stable/20159914

Orcutt, M. (2017, September 11). Criminals Thought Bitcoin Was the Perfect Hiding Place, but They Thought Wrong. MIT Technology Review. Retrieved from https://www.technologyreview.com/s/608763/criminals-thought-bitcoin-was-the-perfect-hiding-place-they-thought-wrong/

Popper, N. and Abrams, R. (2014, February 25). Apparent Theft at Mt. Gox Shakes Bitcoin World. The New York Times. New York. Retrieved from http://www.nytimes.com/2014/02/25/business/apparent-theft-at-mt-gox-shakes-Bitcoin-world.html

Redman, J. (2017a, May 6). The Bitcoin Network’s Transaction Queue Breaks Another Record. Bitcoin.com. Retrieved from https://news.bitcoin.com/bitcoin-transaction-queue-breaks-record/

Redman, J. (2017b, June 9). Rising Network Fees Are Causing Changes Within the Bitcoin Economy. Bitcoin.com. Retrieved from https://news.bitcoin.com/fees-causing-changes-bitcoin-economy/

Robles, G. and González-Barahona, J. M. (2012). A comprehensive study of software forks: dates, reasons and outcomes. In I. Hammouda et al (Ed.), Open Source Systems. Long-Term Sustainability, Berlin: Springer, p. 1–14.

Rochard, P. (2013). The Bitcoin Central Bank’s Perfect Monetary Policy. Satoshi Nakamoto Institute. Retrieved from https://nakamotoinstitute.org/mempool/the-bitcoin-central-banks-perfect-monetary-policy/

Rothbard, M. (2016). Essentials of Money and Inflation. In J. T. Salerno, M. McCaffrey (Eds.), The Rothbard Reader (pp. 157–162). Auburn, Alabama: Ludwig von Mises Institute. Available at https://mises.org/library/rothbard-reader

Sedgwick, K. (2018, January 3). Bitcoin Fees Are Falling Amidst Greater Segwit Adoption. Bitcoin.Com. Retrieved from https://news.bitcoin.com/bitcoin-fees-are-falling-amidst-greater-segwit-adoption/

Tobin, J. (2008). Money. In S. F. Durlauf and L. E. Blume (Eds.), New Palgrave Dictionary of Economics (Second Edition). Basingstoke: Palgrave MacMillan. Retrieved from http://www.dictionaryofeconomics.com/article?id=pde2008_M000217

Torpey, K. (2018, January 31). Bitcoin Transaction Fees Are Pretty Low Right Now: Here’s Why. Bitcoin Magazine. Retrieved from https://bitcoinmagazine.com/articles/bitcoin-transaction-fees-are-pretty-low-right-now-heres-why/

Urquhart, A. (2016). The inefficiency of Bitcoin. Economics Letters, 148, 80-82. doi:10.1016/j.econlet.2016.09.019

Varoufakis, Y. (2014, February 15). Bitcoin: A flawed currency blueprint with a potentially useful application for the Eurozone. Retrieved from http://yanisvaroufakis.eu/2014/02/15/Bitcoin-a-flawed-currency-blueprint-with-a-potentially-useful-application-for-the-eurozone/

Vidan, G. and Lehdonvirta, V. (2018). Mine the Gap: Bitcoin and the Maintenance of Trustlessness. SSRN Scholarly Paper ID 3225236. Rochester, NY: Social Science Research Network. Retrieved from https://papers.ssrn.com/abstract=3225236.

Vigna, P. and Casey, M. J. (2015). The Age of Cryptocurrency: How Bitcoin and Digital Money Are Challenging the Global Economic Order. New York: St. Martin’s Press.

Wallace, B. (2011, November 23). The Rise and Fall of Bitcoin. Wired Magazine. Retrieved from https://www.wired.com/2011/11/mf_Bitcoin/

Yermack, D. (2013). Is Bitcoin a Real Currency? An economic appraisal (Working Paper No. 19747). National Bureau of Economic Research. Retrieved from www.nber.org/papers/w19747

Footnotes

1. See www.mapofcoins.com for a comprehensive list of existing cryptocurrencies and their underlying technologies.

2. While we will focus on Bitcoin, our discussion could also apply to other cryptocurrencies insofar as they share some of Bitcoin’s characteristics and aims.

3. See Aglietta and Orléan (2002, 84–85), Tobin (2008, 1) and Graeber (2011, 46–47).

4. For a detailed presentation of how transactions in Bitcoins works, see Ali et al. (2014a, p. 7–8). For an overview of bitcoin, see Böhme et al. (2015).

5. In that regard, let us note the inclusion of virtual currency exchanges and “custodian wallet services” among the services regulated by the recently adopted 5th Anti-Money Laundering Directive, Directive 2018/843. In the US, although no new legislation was adopted on the matter, these services are effectively considered as covered under the Bank Secrecy Act according FinCEN’s guidance on virtual currencies, US Department of Treasury, 18 March 2013.

6. As described on the website of the Belgian Ministry of Finance: http://fondsdegarantie.belgium.be/fr

7. These statistics are based on our own computations, thanks to data collected on blockchain.com. See also Lee (2018).

8. According to statistics from blockchain.com.

9. US Department of Treasury, Application of FinCEN's Regulations to Persons Administering, Exchanging, or Using Virtual Currencies, 18 March 2013.

10. Directive 2018/843 of the European Parliament and of the Council amending Directive (EU) 2015/849 on the prevention of the use of the financial system for the purposes of money laundering or terrorist financing, art. 1.

11. See Miers et al. (2013).

12. UBS and a consortium of financial institutions are reportedly developing a central bank-backed cryptocurrency called Utility Settlement Coin, on which few details are known (Kaminska, 2017).

13. Indeed, although Bitcoin’s proof-of-work security algorithm has been rightly criticised for its high environmental impact (see Deetman, 2016), alternative security algorithms that are less energy intensive have been proposed (such as the “proof-of-stake” algorithm, which would rely less on solving difficult computational problems by replacing “computational power” with “financial stake” as a consensus mechanism).

Privatised enforcement and the right to freedom of expression in a world confronted with terrorism propaganda online


Acknowledgments: I would like to thank prof. dr. Joris van Hoboken for his supervision and insightful comments during the writing of my paper. A special thank you goes to Mariana Simon Cartaya who generously proof-read this paper. Moreover, I would like to thank the peer-reviewers for their time spent on reading and evaluating this paper.

Introduction

Terrorism is not a new issue (Ansart, 2011), but terrorism propaganda online is. As early as 2008 the EU Council officially recognised the internet as a medium used by terrorist recruiters for the dissemination of propaganda material (EU Council Framework Decision 2008/919/JHA). Several studies revealed the important role played by social media platforms, predominantly Twitter, in ISIS’ 1 propaganda strategy (Badawy & Ferrara, 2017, p. 2). A 2015 report illustrated that members of ISIS, on average, posted 38 propaganda materials each day, ranging from videos to photographs or articles and on a diversity of platforms, including Facebook, Tumblr, Twitter or Surespot (Winter, 2015, p. 10). Countering this type of speech has challenged traditional law enforcement in many ways. In 2014, the EU Commission recognised that traditional law enforcement is insufficient to deal with evolving trends in radicalisation and that all of society ought to be involved in the countering of terrorism online (COM (2013) 941 final, para. 8).

On 31 May 2016, four IT companies (Facebook, Microsoft, Twitter and YouTube, 2016) adopted the EU Code of conduct against illegal hate speech online (hereinafter, the Code). This instrument places enforcement responsibilities into the hands of private companies and gives rise to the practice of ‘privatised enforcement’. The dangers stemming from such practice can be illustrated by Twitter’s latest biannual report (2017), in which it indicates that from July 2017 through December 2017, 274,460 accounts were suspended because of terrorism-related activities in violation of the company’s terms of service. It also specifies on its webpage concerning removal requests that ‘out of the 1,661 reports received from trusted reporters and other EU non-governmental organisations (NGOs), 19% resulted in content removal due to terms of service (TOS) violations and 10% in content being withheld in a particular country based on local law(s)’. In other words, more posts seem to have been removed because of non-compliance with the companies’ policies than due to illegality. Consequently, when private companies are placed at the frontline of law enforcement online, the risk arises that our right to freedom of expression is guided merely by their terms of service, which may not always be in accordance with the level of protection guaranteed under human rights instruments, such as Article 10 of the European Convention on Human Rights (hereinafter, ECHR) or Article 11 of the Charter of Fundamental Rights of the European Union. Moreover, taking into account the primarily profit-making nature of platforms, it is questionable to what extent the delegation of such large-scale public functions, which are fundamental to the proper functioning of our democracy, may be at odds with their business objectives and thereby result in a conflict of interests.
As was pointed out in an article which discussed the liability of Google when faced with removal of defamatory content: ‘in order to pursue its profit (emphasis added), Google did not adopt precautionary measures that could have prevented the upload of illegal materials […] Google is profiting from people uploading materials on the internet’ (Sartor et al., 2010, p. 372). Taking into account the intermediaries’ data-driven business model, placing them at the frontline of law enforcement may be dangerous not only from a legal point of view but also for democracy in general.

Whereas the privatised enforcement phenomenon has already received considerable academic attention, this paper specifically focuses on the risks stemming from the Code, in the field of illegal hate speech and, in particular, terrorism propaganda. Through identifying such risks and by taking into account subsequently adopted EU instruments, recommendations are made on how to better guarantee respect for fundamental human rights in the online environment. These findings are especially relevant as the EU Commission issued, on 12 September 2018, a proposal for a Regulation on the prevention of terrorist content online. Besides the proposal’s general requirement that hosting service providers should remove or disable access to terrorist content within one hour after receipt of a removal order, it also encourages the use of ‘referrals’, whose content should be assessed against the companies’ own terms and conditions. In that respect, it makes no reference to the law.

In order to draw a conclusion and make recommendations, the content of the Code and its relationship with privatised enforcement is first discussed. This section also delineates to what degree terrorism propaganda falls within the scope of the Code. Doing so is necessary, seeing as the Code merely focuses on the removal of ‘illegal hate speech’ whereas the countering of terrorism propaganda formed one of the main incentives for its adoption. This was made clear by EU Commissioner Vera Jourová who declared, when announcing the Code, that recent terror attacks had strengthened the need for it and that ‘social media is unfortunately one of the tools that terrorist groups use to radicalise young people’ (European Commission, 2016). In other words, it investigates whether and to what extent terrorist propaganda can be countered through hate speech tools. In the second section, different reasons behind privatised enforcement in the field of terrorism propaganda are presented. This is followed by a discussion on the dangers of such practice from a free speech perspective. In the subsequent section, recommendations to mitigate the identified risks are proposed, taking into account subsequently adopted EU instruments building upon the Code, namely the communication and recommendation on tackling illegal content. The final section presents important developments that have taken place since the adoption of the Code.

Privatised enforcement through the EU Code of conduct on countering illegal hate speech online

The Code is a self-regulatory initiative under which Twitter, Microsoft, YouTube and Facebook made a commitment to put in place a notice-and-take-down system for the countering of illegal hate speech, the ambit of which is laid down in Framework Decision 2008/913/JHA. This non-binding instrument encourages companies to assess the legality of a post within 24 hours after being notified and to remove or block access to it in case of unlawfulness. Importantly, it explicitly stipulates that the notified posts have to be primarily reviewed against the company’s rules and community guidelines and only ‘where necessary’ (emphasis added) against national laws transposing the Framework Decision. By specifically encouraging the companies to ‘take the lead’ and to take the initiative in tackling illegal hate speech online, the Code thus stimulates the occurrence of privatised enforcement.

This phenomenon was defined as a practice in which private companies undertake ‘non-law based “voluntary” enforcement measures’ (Council of Europe, 2014, p. 86). Legal scholars define this practice as: ‘instances where private parties (voluntarily) undertake law-enforcement measures’ (Angelopoulos et al., 2015, p. 6). These two definitions show that privatised enforcement has three key components: enforcement of the law; by a private party; and undertaken voluntarily (in the sense that the enforcement measures flow from self-regulatory initiatives and are thus ‘non-law based’). This is sometimes also referred to as ‘intermediarization’ (Farrand, 2013, p. 405) or ‘delegated’ enforcement, in the sense that the regulator’s role is delegated to companies and private sector actors (ADF International, 2016, p. 1). This practice has already been encouraged in different fields of law such as copyright law (EDRi, 2014, pp. 2-14) or the countering of ‘fake news’ on social media (OSCE, FOM.GAL/3/17, 2017, section 4(a)).

Whereas terrorism propaganda formed one of the main reasons for adopting the Code, such speech is not explicitly mentioned in it. The companies are merely required to counter ‘illegal hate speech’. In the Commission’s Communication on ‘tackling illegal content online’ (COM (2017), 555 final) a clear distinction is made between ‘incitement to terrorism’ and ‘xenophobic and racist speech that publicly incites hatred and violence’ (p. 2). The latter refers to the type of hate speech that is criminalised under Framework Decision 2008/913/JHA and which serves as legal basis for content removal under the Code. Concerning incitement to terrorism, the Communication refers to Article 5 of the Terrorism Directive (EU Directive 2017/541), which covers the ‘public provocation to commit a terrorist offence’. Bearing this in mind, how can the Code thus contribute to the countering of terrorism propaganda?

An important distinction to be drawn between ‘incitement to terrorism’ and ‘illegal hate speech’ is that the former only covers incitement to violence (see Article 3(1), point (a) to (i) of the Terrorism Directive) while the latter also extends to incitement to hatred. The relation between these two was made clear by Vera Jourová who stated, in the context of terrorism propaganda, that ‘there is growing evidence that online incitement to hatred leads to violence offline’ (European Commission, 2015). In this respect, it is important to highlight that the United Nations General Assembly (2013) has determined that ‘the likelihood for harm to occur’ is a factor that should be taken into account when assessing whether incitement to hatred is present (para. 29). Although ‘incitement’ is by definition an inchoate crime, there is thus an implicit assumption that the speech has a reasonable probability to incite the intended actions and thereby cause harm. In the Surek v. Turkey case, this implicit relation between incitement to hatred, on the one hand, and actions, on the other, was made clear by the European Court of Human Rights (hereinafter, ECtHR) which noted that the speech was ‘capable of inciting to further violence by instilling a deep-seated and irrational hatred’ (§62).

In the context of terrorism, the Commission claimed, in June 2017, that ‘countering illegal hate speech online’ serves to counter radicalisation (COM (2017), 354 final, p. 3). The link between radicalisation through hate speech and terrorist acts was also made explicit by Julian King, Commissioner for the Security Union who declared that: ‘there is a direct link between recent attacks in Europe and the online material used by terrorist groups like Da’esh to radicalise the vulnerable and to sow fear and division in our communities’ (European Commission, 2017). This overlap between incitement to hatred and incitement to terrorism may be explained by the fact that terrorism relies on extremist ideologies. These were identified by Europol (2013) to include religious, ethno-nationalist and separatist ideologies as well as left-wing and anarchistic ones (pp. 16-30).

However, it is relevant to highlight the Leroy v. France case, which illustrates how the Code, and thereby illegal hate speech, would fall short in countering all types of terrorism propaganda. In this case, a cartoonist was accused of glorification of terrorism after having published, on the day of the 9/11 terrorist attacks, a drawing representing the American Twin Towers. The drawing was interpreted by the Court (§42) as a call for violence and a glorification of terrorism but was not perceived as a reflection of the cartoonist’s anti-American ideologies. This type of speech, in which the underlying extremist ideologies are implicit within the speech – and therefore ‘hidden’ – will not easily be caught under the Code. Indeed, for ‘illegal hate speech’ to be present, some kind of discrimination must be expressed (Article 1(a) Framework Decision 2008/913/JHA). Such a discriminatory element is however not required for ‘incitement to terrorism’ as defined under the Terrorism Directive.

In light of the above, it can be inferred that incitement to terrorism and illegal hate speech complement each other in the fight against terrorism propaganda online. However, for removal of less obvious terrorism propaganda, where no discriminatory element or incitement to violence is present, new instruments should see the light of day. The recently proposed Regulation (COM(2018), 640 final) which adopts a very broad definition of ‘terrorist content’ extending beyond ‘incitement to terrorism’, may be one of these. Having regard to the complexity of these legal definitions, which carries the risk of misinterpretation by non-legal persons, it is important to find out what the impetus is for involving internet intermediaries in the countering of such type of speech.

Different reasons for privatised enforcement in the field of terrorism propaganda

In 2015, the Commission highlighted, in a proposal for a directive, the importance of internet intermediaries in the fight against terrorism propaganda (COM(2015)625, para. 2). The instrument stressed the importance of the Internet as the ‘primary channel used by terrorists to disseminate propaganda, issue public threats, glorify horrendous terrorist acts such as beheadings, and claim responsibility for attacks’ (para. 10). Consequently, without yet imposing any obligations on the part of Internet intermediaries, this proposal raised awareness of the prominent use of social media for terrorist purposes and of the need to ‘tackle the evolving terrorist threats in a more effective way’ (para. 14). It can thus be said that this proposal paved the way for heightened scrutiny with regard to the internet intermediaries’ role in the context of terrorism. On 15 March 2017, EU Directive 2017/541 was adopted, under which internet intermediaries were encouraged to develop voluntary actions for the countering of terrorist content on their services (recital 22).

When it comes to the countering of illegal content online, the Commission has emphasised the favourable position of internet intermediaries. In its recent proposal for a regulation concerned with the online removal of terrorist content, it indicated their ‘central role’ in the dissemination of such material as well as their ‘technological means and capabilities’ justifying their ‘particular societal responsibilities’ (recital 3). Placing internet intermediaries at the frontline of law enforcement online was thus by no means a coincidence. Indeed, as opposed to public authorities, intermediaries have better technological means at their disposal to swiftly notice illegal content, identify infringing authors and, subsequently, block or remove allegedly illegal material. Moreover, taking into account the speed at which terrorist content is disseminated across online services, it seems essential to involve the parties that are best placed to react quickly.

Freedom of expression risks stemming from privatised enforcement

Whilst it is practical to involve internet intermediaries in the counter-terrorism process, having them enforce the law online poses a real danger to their users’ right to freedom of expression. Importantly, as was made clear in Jersild v. Denmark (§30), the right to freedom of expression is twofold in the sense that it does not only protect individuals’ right to impart information but also the public’s right to receive such information. As repeatedly held by the ECtHR, ‘freedom of expression constitutes one of the essential foundations of a democratic society and one of the basic conditions for its progress and for each individual’s self-fulfilment’ (Hertel v. Switzerland, § 46; Animal Defenders International v. The United Kingdom, § 100). This was also recognised at the international level by the Human Rights Committee (2011, para. 2). Importantly, the right is very broad in scope and also applies to ideas that ‘offend, shock or disturb any sector of the population’ (Handyside v. UK, § 49). Whereas the right is subject to limitations, such limitations are strict, as several requirements, discussed below, must be met for them to be permissible. Taking into account the broad nature of this right, on the one hand, and the strict limitations, on the other, it is argued that the Code gives rise to the risk that the rule of law is undermined and that private censorship may arise.

A challenge to the rule of law

As specified by the Code, removal of a post shall be primarily based on the company’s terms of service and only secondarily and where necessary, on national law. In an issue paper by the Council of Europe (2014) it was warned that such a practice would give rise to the risk that ‘general terms and conditions of private-sector entities are not in accordance with international human rights standards’ and therefore that the rule of law is threatened (pp. 14 and 87). In that same paper, the rule of law was described as ‘a principle of governance by which all persons, institutions and entities, public and private, including the state itself, are accountable to laws that are publicly promulgated, equally enforced, independently adjudicated and consistent with international human rights norms and standards’ (p. 10). Concerns about the rule of law not being respected were also expressed by Vera Jourová who, when discussing the Code, stressed that: ‘the rule of law applies online just as much as offline. We cannot accept a digital Wild West […] If the tech companies don’t deliver, we will do it’ (European Commission, September 2017).

The ECtHR has developed a test for the rule of law, which requires that any interference with a person’s right to freedom of expression must be based on a proper legal basis. This legal basis must be sufficiently precise, accessible to the public and provide sufficient safeguards (Kruslin v. France, §27-36). The interference must also serve one of the legitimate aims under Article 10(2) ECHR and be necessary in a democratic society (Sunday Times v. UK, §45). In a factsheet concerning the implementation of the Code, the Commission specified that Council Framework Decision 2008/913/JHA should form the legal basis for removal of illegal hate speech (2016, p. 3). However, by explicitly encouraging IT companies to prohibit ‘hateful conduct’ in their Community Guidelines, the Code (para. 10) encourages them to go further than what is prescribed under the Decision. As mentioned previously, the right to freedom of expression also extends to shocking or offending ideas (Handyside v. UK, § 49).

Under Facebook’s terms and conditions, illegal hate speech is phrased as content that amounts to a ‘direct attack based on what we call protected characteristics - race, ethnicity, national origin, religious affiliation, sexual orientation, caste, sex, gender, gender identity, and serious disease or disability’. The threshold for Facebook to remove a post appears to be lower than what is required under the Framework Decision as there is no requirement for incitement to be present. Unlike Facebook, Twitter and YouTube do emphasise in their community guidelines that where the primary purpose of an account is incitement to harm, based on discriminatory grounds, the account will be deleted. Concerning Microsoft, it advises its users not to ‘incite other users to threaten, stalk, insult, victimise, or intimidate another person or group of people’. Here, no reference is made to the discriminatory nature of the incitement. It is thus clear that the different conditions for ‘illegal hate speech’, required under EU law, are not always reflected in the companies’ policies. A post that would not necessarily amount to any criminal offence under EU law may thus still be removed because of non-compliance with the terms of service.

This disparity between law and terms of use is well illustrated by a letter of the German Ministry of Justice and consumer protection (Bundesministerium der Justiz und für Verbraucherschutz, 2016) written in response to a request of the German Parliament who wished to obtain information on how many of the 100,000 contents deleted by Facebook were actually illegal under German Federal law. The answer was that it was unknown. As is explicitly stated in the letter: ‘there is no examination whether concrete individual cases of hate messages are illegal’. Consequently, the danger exists that the law may be downgraded to terms of service.

The risk of IT companies incorrectly interpreting and enforcing illegal hate speech is even greater when taking into account that under EU law, different factors such as the intent of the speaker, the likelihood for harm to occur and the context of the speech must be considered (Surek v Turkey, §62; Gokceli v Turkey, § 38). Although in theory the European Commission (2017) has specified that such factors shall also be taken into account by the IT companies in their assessment of illegal hate speech, no reporting activities have yet taken place in which it is demonstrated that such elements play a role in their assessment. The only way to infer whether the companies take these factors into account is to examine their community guidelines. However, when reading those, Twitter merely seems to take into account the ‘context of the larger conversation’ and Facebook fails to mention ‘the likelihood for harm to occur’ or require ‘incitement’ to be present (Allan, 2017).

Private censorship

Another major risk to our right to freedom of expression is that the privatised enforcement system encouraged under the Code could lead to private censorship. The UN Special Rapporteur on freedom of expression defined ‘private censorship’ as meaning that ‘censorship measures are delegated to private entities’, which includes situations in which intermediaries undertake censorship on behalf of the state (United Nations General Assembly, 2011, A/HRC/17/27, para. 45 jo. 75). This risk, which is intrinsically related to the notice-and-take-down system as supported by the Code, must be seen in light of the e-Commerce Directive which has, inter alia, created an exemption regime for the liability of hosting service providers (Directive 2000/31/EC). As pointed out by legal scholars (Sartor et al., 2010), this legal construction may lead to internet intermediaries becoming the gatekeepers of the internet as it ‘presupposes authorising the provider to exercise the controls that may prevent its liability, i.e., empowering it to exclude all those contents that may generate liability’ (p. 376).

Indeed, according to Article 14 (jo. recital 46) of this Directive, hosting providers may be exempted from liability when they ‘expeditiously remove or disable access to illegal content’ after having been notified of such content’s presence. As was argued by legal scholar Aleksandra Kuczerawy (2015), such a mechanism implies a conflict of interests for the intermediary. To put this in her own words: ‘they [the internet intermediaries] have to decide swiftly about removing or blocking content in order to exonerate themselves from possible liability, which basically makes them a judge in their own cause’ (p. 48). Consequently, as she pointed out, they will have the incentive to be over-protective and to remove or disable access to content regardless of its actual illegality and, sometimes, even without carrying out a balancing of interests. This may in turn result in users’ right to freedom of expression being impeded as ‘any potential controversial information would then likely be prevented from reaching public accessibility’ (Sartor et al., 2010, pp. 376-377).

During a public consultation on the e-Commerce Directive, the majority of stakeholders (including internet intermediaries) were of the opinion that over-removal of content is partly due to legal uncertainties surrounding the scope and terms of the liability exemption (European Commission, 2012, SEC (2011)1641, pp. 43-46). As was argued by Lisl Brunner (2016), former policy director on matters of intermediary liability, such legal uncertainty has increased after the ECtHR ruling in the Delfi case, as the Court did not clarify the fine line that exists between service providers of an active and of a passive nature. This distinction is of importance since the Court of Justice of the European Union (CJEU) has repeatedly confirmed that the liability exemption established under the e-Commerce Directive can only be enjoyed by hosting providers of a mere technical, automatic and passive nature (L’Oréal SA and Others v. eBay International AG and Others, paras. 111-116; Google France SARL and Others v. Louis Vuitton Malletier SA and Others, paras. 114 and 120). Whereas the Delfi case concerned the alleged infringement of a publisher’s right to freedom of expression under Article 10 ECHR, the cases decided by the CJEU related to the interpretation of EU law in the course of preliminary ruling procedures. However, both courts were in these cases confronted with liability issues and the interpretation of Article 14 of the e-Commerce Directive. Taking into account the aforementioned risks, it is necessary to find out how these could best be mitigated.

Different ways to counterbalance the dangers of privatised enforcement for the right to freedom of expression

One way to counterbalance the issue of overly broad terms of service, through which the rule of law may be threatened, would be to provide legal safeguards to end users. In this regard, legal scholars (Angelopoulos et al., 2015) stressed, in a study concerned with privatised enforcement and human rights limitations, the importance for IT companies of being transparent and accountable and of taking into account due process principles (p. 57). This idea was also supported by the Commission in its communication on ‘tackling illegal content online’ and subsequent recommendation (COM(2017), 555 final, p. 14; C(2018), 1117 final, Chapter II (16-17) jo. preamble pt. 20).

Whilst the Code states that it promotes transparency, it only does so by encouraging publication of transparency reports. In the two latest periodic reviews, no attention was paid to the existence of transparency measures towards end users whose post had been notified and/or removed (European Commission, 2017; 2018). The main focus was whether the companies had provided feedback to notifying users. Whilst the Commission did stress, in its communication, the importance of transparency reports, it also stressed the importance of being transparent towards users whose post had been notified and of providing information about received counter-notices (COM(2017), 555 final, p. 16). Intrinsically related to this point, and as was put forward by Kuczerawy (2015, p. 51), the companies should have in place a system of counter-notices. This would help uphold due process principles in notice-and-action procedures. The need for this was further supported by the Commission (COM(2017), 555 final, p. 17; C(2018)1117, Chapter II(13)).

Another way to secure respect for the rule of law online would be through the states’ positive obligations. The ECtHR has recognised that states play an important role in protecting the right to freedom of expression, which includes both negative obligations (to abstain from interfering with that right) and positive ones (to take action) (Özgür Gündem v. Turkey, para. 43; Centro Europa 7 S.R.L. and Di Stefano v. Italy; Youth Initiative for Human Rights v. Serbia; Dink v. Turkey, para. 137). The Council of Europe (2014) has already suggested that ‘states have an obligation to ensure that general terms and conditions of private companies that are not in accordance with international human rights standards must be held null and void’ (p. 114). Legal scholars also supported this idea and stressed that ‘States may be found to be in breach of their positive obligations for their failure to prevent violations of individuals’ fundamental rights as a result of privatized law enforcement by online intermediaries’ (Angelopoulos et al., 2015, p. 79). However, as these scholars mentioned, different criteria must be taken into account when establishing whether a breach of a state’s positive obligation occurred (p. 79). Whilst analysis of such a breach goes beyond the scope of this paper (since it would require a case-by-case analysis), relying on states’ positive obligations could help to foster the rule of law in the online environment. However, as was concluded in their study, discussions should take place in order to ‘operationalize relevant positive obligations of States in the context of self-regulatory or privatised law enforcement measures by online intermediaries’ (p. 79).

Concerning the countering of private censorship, IT companies should have more legal certainty about the liability exemption provided for under the e-Commerce Directive. This was encouraged by Kuczerawy (2015, p. 46), who claimed that legal uncertainties exist with regard to Article 14 of the Directive, such as the scope of the term ‘service providers’, the meaning of ‘actual knowledge’ or the term ‘expeditiously’ (pp. 50-51). As was made clear in a working paper of the European Commission (SEC (2011), 1641 final), the rules for notice-and-take-down procedures vary from one member state to another, making it unclear for internet intermediaries which rules should be followed (p. 25). Such fragmentation could result in a race to the bottom where intermediaries choose to interpret the rules of the countries with the most stringent laws in order to best secure their liability exemption. Indeed, when taking into account that internet intermediaries could potentially be subject to the laws of all countries in which their content is accessible, the safest way for them to act would be to take a restrictive approach and treat the harshest laws as the threshold for content removal. In other words, by ‘lowering the standards of free speech on the internet to the lowest common regulatory denominator’ (Mills, 2015, p. 19).

In 2012, the Commission announced an initiative on ‘Notice-and-Action’ procedures aimed at harmonising, at EU level, the rules on these procedures (COM(2011), 942 final, p. 15). Most of the parties involved in the public consultation supported the idea that the EU should clarify the functioning of notice-and-action procedures and thereby adopt binding minimum rules (European Commission, 2012, p. 8). This initiative has, however, not yet led to a binding EU legal instrument. In May 2017, members of the European Parliament expressed, in an open letter to the Commission’s Vice-President, their wish for a notice-and-action directive. According to them, having in place an EU framework on notice-and-action procedures would help to counter the issue that ‘large internet platforms are independently taking their own actions to take down online content, without transparency or independent scrutiny’ (Schaake, 2017).

Recently, in its communication and subsequent recommendation on tackling illegal content online, the Commission tried to clarify some vague aspects of the liability exemption for hosting providers (COM(2017), 555 final, pp. 13-14; C(2018), 1117 final, preamble pt. 26). Concerning the term ‘expeditiously’, the Commission takes a flexible approach by specifying that the term must be analysed on a case-by-case basis. With respect to the risk of a race to the bottom, the Commission’s recommendation clarifies that the laws to be taken into account are those of the member state in which the hosting provider is established or those where the services are provided (C(2018), 1117 final, preamble pt. 14).

Another possible way to achieve a higher level of legal certainty would, yet again, be through positive state obligations. Importantly, the ECtHR established in Dink v. Turkey (para. 137) that one of these obligations consists in ensuring that individuals can express themselves without fear. In light of this, legal scholars have held that such a positive obligation could include the duty to reduce internet intermediaries’ fear of being held liable, which would be a ‘promotional obligation’ (Angelopoulos et al., 2015, pp. 32; 42).

Developments since the adoption of the EU Code of conduct on countering illegal hate speech online

Since the implementation of the Code, the IT companies have been put under serious pressure to better counter online terrorism propaganda. After the terrorist attacks committed in Brussels, London and Manchester, the Commission (COM(2017), 354 final) declared that the Code had helped in the counter-radicalisation process but that it was insufficient (p. 3). In response to this lack of effectiveness, different measures have seen the light of day.

On the one hand, the EU has issued non-binding instruments such as the communication and the recommendation on tackling illegal content online and, on the other, it is in the process of adopting legislation aimed at tackling terrorist content online.

Regarding the communication and subsequently adopted recommendation, several actors have criticised these. Concerning the former, the European Federation of Journalists (2017) pointed out to a lack of guidance to platforms for respect of the right to freedom of expression. According to Jens-Henrik Jeppesen, representative and director for European Affairs, the communication ‘describes a regime of privatised law enforcement that does not attempt to draw a bright line between content that violates platforms’ terms of service (TOS) and content that breaks the law’ (Jeppesen, 2017). Furthermore, Marietje Schaake, member of the European Parliament, warned that ‘the good parts on enhancing transparency and accountability for the removal of illegal content are completely overshadowed by the parts that encourage automated measures by online platforms’ (Schaake, 2017).It should be borne in mind thattechnological means are not (yet) able to contextualise posts, whereas ‘context of content’ is a factor that needs to be taken into account when assessing illegal hate speech. However, on that point, the recommendation seems to provide better safeguards as it suggests that human oversight and verification should be provided where there is no ‘human in the loop’ (when automated means are used). Despite the better safeguards, the recommendation still seems to magnify the risks of privatised enforcement. With regard to terrorist content, it states that Europol and the competent authorities shall request removal either ‘by reference to the relevant applicable laws or (emphasis added) to the terms of service of the hosting service provider concerned’ (preamble pt. 34). Furthermore, the wording suggests that companies have discretion as to whether or not to remove terrorist content after having been notified by the member states’ competent authorities (Chapter III (34)). 
Whereas the Code permits removal within 24 hours, the recommendation adopts a one-hour removal timeframe (Chapter III (33)). As Emma Llansó (2018), director of the ‘Free Expression Project’ at the Washington-based Center for Democracy & Technology, argued, the recommendation places too much focus on the speed of removals and on the need for automatic filtering technologies instead of on safeguards for human rights. Moreover, speedy decisions may impact due process norms such as the right to be heard, protected under Article 6 ECHR.

Furthermore, by clarifying that ‘terrorist content’ is not limited to the offences listed in the Terrorism Directive but may also extend to ‘content produced by or attributable to terrorist groups’ (Chapter I (4)(h)), the Commission leaves it rather unclear what type of content is targeted and subject to the one-hour removal. This vagueness is compounded by the fine line that exists between ‘terrorist content’ and illegal hate speech, which in some cases contributes to radicalisation. Clarification on what type of speech ‘radicalisation’ entails would thus be helpful, especially considering that Julian King, Commissioner for the Security Union, identified radicalisation as the core problem of terrorism (Commission, 2018).

Last but not least, by taking a flexible, case-by-case approach, neither instrument clarifies the liability exemptions contained in the e-Commerce Directive. In December 2017, several Members of the European Parliament urged the Commission to ‘take up the specific issue of notice and action procedures as a priority, independent from its work on addressing illegal content online’ (Schaake et al., 2017).

Concerning legislative measures, the EU Commission issued, on 12 September 2018, a proposal for a regulation on the prevention of terrorist content online (COM (2018), 640 final). Like the recommendation, it adopts a one-hour removal timeframe and specifies that referrals should be assessed against the companies’ terms of service. Moreover, it adopts an even broader definition of ‘terrorist content’ than the recommendation and threatens non-compliant hosting service providers with penalties (Article 18). It also adds further confusion to the liability exemption under the e-Commerce Directive, as it states that derogations from the general prohibition on monitoring under Article 15 may exceptionally arise (explanatory memorandum, p. 3).

A new Audiovisual Media Services Directive is also being adopted, which would make private companies accountable for hosting videos containing hate speech or incitement to terrorism on their services (COM (2016), 287 final). This general trend of holding intermediaries accountable for illegal content has emerged in different fields of law, such as intellectual property law with the proposed Copyright Directive (COM (2016), 593 final, Article 13). The regulatory tendency can also be seen at national level. For example, Germany adopted the so-called ‘network enforcement law’, which threatens social media companies with fines of up to 50 million euros for failure to remove illegal content within a certain time. 2 In a legal review of this (draft) law, commissioned by the Office of the OSCE Representative on Freedom of the Media, Bernd Holznagel (2017) warned that: ‘with the risk of high fines in mind, the networks will probably be more inclined to delete a post than to expose themselves to the risk of a penalty payment’ (p. 23). He also noted that such regulations may encourage platforms to circumvent the law’s territorial scope through removal of German-language comments (p. 24).

Conclusion

The present paper has aimed to demonstrate how the Code has clear implications for internet users’ right to freedom of expression. The privatised enforcement system it encourages could result in private censorship as well as an undermining of the rule of law.

By taking into account different developments since the adoption of the Code, this paper argues that, from an EU perspective, the focus should shift from ‘speed’ to ‘legality’. Whereas the Code adopted a 24-hour timeframe for the removal of illegal content, the recommendation on tackling illegal content online and the recently proposed regulation (COM (2018), 640 final) encourage removal of terrorist content within one hour. Such a short timeframe, paired with the unclear definition attributed to ‘terrorist content’, will undoubtedly magnify the risks of over-removal of content. Moreover, the EU should clarify the liability exemption under the e-Commerce Directive by giving clear guidance on what the terms contained therein entail. This would help prevent a race to the bottom in which intermediaries choose to interpret and apply the most stringent national laws in order to best shield themselves from liability. As for the IT companies, they should increase their level of transparency when removing posts. Going beyond what the recommendation encourages, more efforts should be made in terms of transparency towards the users whose posts have been notified. IT companies should always provide counter-notices and feedback. Human intervention should also be a conditio sine qua non in cases where there is no human in the loop, and thus not only ‘where appropriate’ as stipulated in the recommendation.

Despite the shortcomings of the recently adopted EU instruments, they illustrate that some attention is being paid at the EU level to the protection of human rights in the digital environment. However, having regard to the recent proposal for a regulation on the prevention of terrorist content online, more attention is still needed in order to reconcile the practice of privatised enforcement with respect for individuals’ fundamental human rights.

References

ADF International. (2016). Response to call for submissions by the UN Special Rapporteur on the Protection of the Right to freedom of Opinion and Expression. Retrieved from www.ohchr.org/Documents/Issues/Expression/Telecommunications/ADF.docx

Allan, R. (2017). Hard Questions: Who Should Decide What is Hate Speech in an Online Global Community?. Retrieved from https://newsroom.fb.com/news/2017/06/hard-questions-hate-speech/

Angelopoulos, C., Brody, A., Hins, W., Hugenholtz, B., Leerssen, P., Margoni, T., McGonagle, T., van Daalen, O., & van Hoboken, J. (2015). Study of fundamental rights limitations for online enforcement through self-regulation. Amsterdam: Institute for Information Law IViR. Retrieved from https://www.ivir.nl/publicaties/download/1796

Ansart, G. (2011). The invention of Modern State Terrorism during the French Revolution. Retrieved from https://docs.lib.purdue.edu/cgi/viewcontent.cgi?article=1031&context=revisioning

Animal Defenders International v. The United Kingdom (Appno 48876/08) ECHR, 2013.

Badawy, A., & Ferrara, E. (2018). The Rise of Jihadist Propaganda on Social Networks. Journal of Computational Social Science, 1(2), 453-470. doi:10.1007/s42001-018-0015-z

Bundesministerium der Justiz und für Verbraucherschutz. (2016). Ihre schriftliche Frage Nr. 10/19 vom 6. Oktober 2016. Retrieved from http://andrej-hunko.de/start/download/doc_download/863-schriftliche-frage-zur-groessenordnung-der-100-000-von-facebook-geloeschten-internetinhalte

Brunner, L. (2016). The liability of an Online intermediary for Third-party content: The watchdog becomes the monitor: intermediary liability after Delfi v Estonia. Human Rights Law Review, 16(1), 163-174. doi:10.1093/hrlr/ngv048

Centro Europa 7 S.R.L. and Di Stefano v. Italy (App no 38433/09) ECHR, 2012.

Council of Europe. (2014). The rule of law on the internet and in the wider digital world (Issue Paper). Strasbourg: Council of Europe.

Delfi AS v Estonia (App no 64569/09) ECHR, 2015.

Dink v. Turkey (App nos 2668/07, 6102/08, 30079/08, 7072/09 and 7124/09) ECHR, 2010.

EDRi. (2014). Human Rights and Privatised law enforcement. Retrieved from https://edri.org/wp-content/uploads/2014/02/EDRi_HumanRights_and_PrivLaw_web.pdf

EU Council Framework Decision 2008/919/JHA of 28 November 2008 Amending Framework Decision 2002/475/JHA on combatting terrorism.

EU Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market.

EU Directive 2017/541 of the European Parliament and of the Council of 15 March 2017 on combating terrorism and replacing Council Framework Decision 2002/475/JHA and amending Council Decision 2005/671/JHA.

European Commission. (2012). Commission Staff Working Document, online services, including e-commerce, in the Single Market Accompanying the document: Communication from the Commission to the European Parliament, the Council, The European Economic and Social Committee and the Committee of the Regions, A Coherent framework to boost confidence in the Digital Single Market of e-Commerce and other online services. SEC (2011), 1641 final.

European Commission. (2012). Commission Communication to the European Parliament, the Council, the Economic and Social Committee and the Committee of the Regions. A coherent framework for building trust in the Digital Single Market for e-commerce and online services. COM (2011), 942 final.

European Commission. (2012). Summary of the results of the Public Consultation on the future of electronic commerce in the Internal Market and the implementation of the Directive on electronic commerce (2000/31/EC). Retrieved from http://ec.europa.eu/information_society/newsroom/image/document/2017-4/consultation_summary_report_en_2010_42070.pdf

European Commission. (2014). Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions. Preventing Radicalisation to Terrorism and Violent Extremism: Strenghtening the EU’s response. COM (2013), 941 final.

European Commission. (2015). Proposal for a Directive of the European Parliament and of the Council on combatting Terrorism and replacing Council Framework Decision 2002/475/JHA on combatting terrorism. COM (2015), 625 final.

European Commission. (2015). EU Internet Forum: Bringing together governments, Europol and technology companies to counter terrorist content and hate speech online. Retrieved from http://europa.eu/rapid/press-release_IP-15-6243_en.htm

European Commission. (2016). Proposal for a Directive of the European Parliament and of the Council on Copyright in the Digital Single Market. COM (2016), 593 final.

European Commission. (2016). European Commission and IT Companies announce Code of Conduct on illegal online hate speech. Retrieved from http://europa.eu/rapid/press-release_IP-16-1937_en.htm

European Commission. (2016). Code of Conduct – Illegal online hate speech Questions and Answers. Retrieved from http://ec.europa.eu/newsroom/document.cfm?doc_id=41844

European Commission. (2016). Proposal for a Directive of the European Parliament and of the Council amending Directive 2010/13/EU on the coordination of certain provisions laid down by law, regulation or administrative action in Member States concerning the provision of Audiovisual Media Services in view of changing market realities. COM (2016), 287 final.

European Commission. (2017). Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions. Tackling illegal content online: Towards an enhanced responsibility of online platforms. COM (2017), 555 final.

European Commission. (2017). Fact Sheet Code of Conduct on countering illegal online hate speech 2nd monitoring. Retrieved from http://europa.eu/rapid/press-release_IP-17-3493_en.htm

European Commission. (2017). Security Union: Commission steps up efforts to tackle illegal content online. Retrieved from http://europa.eu/rapid/press-release_IP-17-3493_en.htm

European Commission. (2017). Fighting Terrorism Online: Internet Forum Pushes for automatic detection of terrorist propaganda. Retrieved from http://europa.eu/rapid/press-release_IP-17-5105_en.htm

European Commission. (2017). Code of Conduct on Countering Hate Speech online: One year after. Retrieved from http://ec.europa.eu/newsroom/document.cfm?doc_id=45032

European Commission. (2017). Communication from the Commission to the European Parliament, the European Council and the Council. Eighth progress report towards an effective and genuine Security Union. COM (2017), 354 final.

European Commission. (2018). Commission Recommendation of 1.3.2018 On measures to effectively tackle illegal content online. C (2018), 1117 final.

European Commission. (2018). Proposal for a Regulation of the European Parliament and of the Council on preventing the dissemination of terrorist content online. COM (2018), 640 final.

European Commission. (2018). Security Union: Commission follows up on terrorist radicalisation. Retrieved from http://europa.eu/rapid/press-release_IP-18-381_en.htm

European Commission. (2018). Code of Conduct on Countering Illegal Hate Speech online: Results of the third monitoring exercise. Retrieved from http://ec.europa.eu/newsroom/just/document.cfm?doc_id=49286

European Federation of Journalists. (2017). The European Commission will not legislate on illegal content online. Retrieved from https://europeanjournalists.org/blog/2017/09/29/the-european-commission-will-not-legislate-on-illegal-content-online/

Europol. (2013). TE-SAT 2013 EU Terrorism Situation and Trend Report (Report). The Netherlands: European Union Agency for Law Enforcement Cooperation. doi:10.2813/00041

Facebook, Microsoft, Twitter and Youtube. (2016). Code of Conduct Countering Illegal Hate Speech Online. Retrieved from http://www.statewatch.org/news/2017/sep/eu-com-illegal-content-online-code-of-conduct.pdf

Facebook. (2018). Community Standards Hate Speech. Retrieved from https://www.facebook.com/communitystandards/hate_speech

Farrand, B. (2013). Regulatory Capitalism, Decentered enforcement and its legal Consequences for Digital Expression: The Use of Copyright law to restrict freedom of speech online. Journal of Information Technology & Politics, 10(4), 404-422. doi:10.1080/19331681.2013.843922

Gokceli v Turkey (App no 27215/95 and 36194/97) ECHR, 2003.

Handyside v UK (App no 5393/72) ECHR, 1976.

Hertel v. Switzerland (App no 59/1997/843/1049) ECHR, 1998.

Holznagel, B. (2017). Legal Review of the Draft Law on Better Law Enforcement in Social Networks. Vienna: Organization for Security and Co-operation in Europe. Retrieved from https://www.osce.org/fom/333541?download=true

Human Rights Committee. (2011, September) General Comment No. 34 (CCPR/C/GC/34).

Jeppesen, J. (2017). Tackling illegal content online: the EC continues push for privatized law enforcement. Retrieved from https://cdt.org/blog/tackling-illegal-content-online-the-ec-continues-push-for-privatised-law-enforcement/

Jersild v Denmark (App no 15890/89) ECHR 1994.

Judgment of 12 July 2011, L’oreal SA and others v Ebay International A.G. and others, C-324/09, EU:C:2011:474.

Judgment of 23 March 2010, Google France SRL and Others v Louis Vuitton Malletier SA and others, Joined cases C-236/08 to C-238/08, EU:C:2010:159.

Kruslin v France (App no 11801/85) ECHR, 1990.

Kuczerawy, A. (2015). Intermediary liability & Freedom of Expression: Recent developments in the EU Notice & Action initiative. Computer Law & Security Review, 31(1), 46-56. doi: 10.1016/j.clsr.2014.11.004

Llansó, E. (2018). EC Recommendation on Tackling illegal content online doubles down on push for Privatized law enforcement. Retrieved from https://cdt.org/blog/ec-recommendation-on-tackling-illegal-content-online-doubles-down-on-push-for-privatized-law-enforcement/

Leroy v France (App no 36109/03) ECHR, 2008.

Microsoft. (2018). Microsoft Community Frequently Asked Questions. Retrieved from https://answers.microsoft.com/en-us/page/faq?auth=1#faqCodeConduct3

Mills, A. (2015). The law applicable to cross-border defamation on social media: whose laws govern free speech in ‘Facebookistan’? Journal of Media Law, 7(1), 1-35. doi: 10.1080/17577632.2015.105594

OSCE, United Nations. (2017, March) Joint Declaration on Freedom of expression and ‘”Fake News”, Disinformation and Propaganda (FOM.GAL/3/17).

Özgür Gündem v. Turkey (App No. 23144/99) ECHR, 2000.

Sartor, G., & Viola De Azevedo Cunha, M. (2010). The Italian Google-Case: Privacy, Freedom of Speech and Responsibility of Providers for User-Generated Contents. International Journal of Law and Information Technology, 18(4), 356-378. doi:10.1093/ijlit/eaq010

Schaake, M. (2017). Open letter – MEPs want notice and action Directive. Retrieved from https://marietjeschaake.eu/en/meps-want-notice-and-action-directive

Schaake, M. (2017). No room for upload-filters in the EU. Retrieved from https://marietjeschaake.eu/en/no-room-for-upload-filters-in-the-eu

Schaake, M., et al. (2017). Letter to the Commission on notice and action procedures. Retrieved from https://marietjeschaake.eu/en/letter-to-the-commission-on-notice-and-action-procedures

Sunday Times v UK (App no 6538/74) ECHR, 1979.

Surek v Turkey (No 1) (App no 26682/95) ECHR, 1999.

Twitter. (2017). Government TOS reports – July to December 2017. Retrieved from https://transparency.twitter.com/en/gov-tos-reports.html#government-tos-reports-jul-dec-2017

Twitter. (2017). Removal requests - July to December 2017. Retrieved from https://transparency.twitter.com/en/removal-requests.html

Twitter. (2018). Hateful Conduct Policy. Retrieved from https://help.twitter.com/en/rules-and-policies/hateful-conduct-policy

United Nations General Assembly, Human Rights Council. (2011, May) Report of the Special Rapporteur on the Promotion and Protection of the Right to freedom of Opinion and Expression (A/HRC/17/27).

United Nations General Assembly, Human Rights Council. (2013, January) Annual Report of the United Nations High Commissioner for Human Rights (A/HRC/22/17/Add.4).

Winter, C. (2015). Documenting the Virtual “Caliphate”. London: Quilliam Foundation. Retrieved from http://www.quilliaminternational.com/wp-content/uploads/2015/10/FINAL-documenting-the-virtual-caliphate.pdf

Youth Initiative for Human Rights v. Serbia (App no. 48135/06) ECHR, 2013.

YouTube. (2018). Hate Speech Policy. Retrieved from https://help.twitter.com/en/rules-and-policies/hateful-conduct-policy

Footnotes

1. The Islamic State of Iraq and the Levant

2. Gesetz zur Verbesserung der Rechtsdurchsetzung in sozialen Netzwerken (Netzwerkdurchsetzungsgesetz - NetzDG) (only available in German), < https://www.buzer.de/s1.htm?g=NetzDG&f=1>; Bundesministerium der Justiz und für Verbraucherschutz, ‘Act to Improve Enforcement of the Law in Social Networks (Network Enforcement Act, NetzDG) - Basic Information’, <https://www.bmjv.de/DE/Themen/FokusThemen/NetzDG/_documents/NetzDG_englisch.html>

Protecting the global digital information ecosystem: a practical initiative

Draft G7 Charlevoix statement

Introduction

The digitisation of our societies brings with it a number of challenges and opportunities, the dimensions of which are far from being assessed, let alone understood. While the internet, by giving everyone easy access to general political discourse, was for some time seen as a great opportunity for strengthening democracy, more recent developments captured by buzzwords like “fake news”, “disinformation operations” and “psychographically microtargeted advertising”, as practiced with the support of Cambridge Analytica, are viewed with great concern as fundamental threats to the functioning of democracy. 1 No less than cyber attacks on industries, infrastructures and governments, such practices are difficult to control. In particular, their origin is difficult to localise and no technical instrument is available for clear attribution. Yet protecting our democratic systems amounts to a serious common concern of the United States, the European Union and, indeed, all democracies around the globe.

While some countries have already taken legislative measures or – at least – drafted action plans aimed at protecting the internet, public discourse and democracy from criminal and terrorist content as well as from hate speech, fake news and disinformation operations, 2 valid concerns are raised with regard to respect for freedom of speech as the foundation of democracy. 3 The question of how to adequately protect the deliberative process of forming political will, and thereby to secure the legitimacy of the democratic process against all sorts of IT-driven attempts at manipulation, thus remains open. The recent call by a high-level civil servant of the European Commission “for a new culture of incorporating the principles of democracy, rule of law and human rights by design in AI and a three-level technological impact assessment for new technologies like AI as a practical way forward for this purpose“ 4 indicates only one of the directions in which the political discussion may move. But it also shows that the problem is not limited to one country or continent: it has a global dimension. It is a challenge to constitutionalism at a global level.

This is why all democracies have an interest in finding common approaches to tackling the new challenges to their own survival. Forums like the G7 and the G20 are as important for stimulating the discussion on concrete solutions and measures as are the Internet Governance Forum (IGF) and other multi-stakeholder initiatives. Academic research and conferences can provide material, analysis and ideas to feed this process of discovery.

With the aim of identifying a shared G7 interest in digital technology and democracy, Eileen Donahoe, Fen Hampson, Gordon Smith (all CIGI - Centre for International Governance Innovation, Waterloo, Canada) and I (HIIG - Humboldt Institute for Internet and Society, Berlin, Germany) decided back in 2017 to submit some thoughts and proposals to those preparing the G7 summit in Charlevoix, Québec, Canada, on 8-9 June 2018. On the basis of our discussions, my contribution to this collaborative initiative was the following draft statement; I would particularly like to thank Eileen Donahoe for her substantial input and revision of this work.

While the attempt to introduce the draft statement formally into the preparatory work for the summit at an early stage was unsuccessful, it is nonetheless interesting to see that the Charlevoix G7 Summit Communiqué, in its point 15, raises – as a matter of “building a more peaceful and secure world” – some of the issues addressed by our draft, as follows:

  1. We commit to take concerted action in responding to foreign actors who seek to undermine our democratic societies and institutions, our electoral processes, our sovereignty and our security as outlined in the Charlevoix Commitment on Defending Democracy from Foreign Threats. We recognize that such threats, particularly those originating from state actors, are not just threats to G7 nations, but to international peace and security and the rules-based international order. We call on others to join us in addressing these growing threats by increasing the resilience and security of our institutions, economies and societies, and by taking concerted action to identify and hold to account those who would do us harm. 5

The “Charlevoix Commitment on Defending Democracy from Foreign Threats“ referred to in this Communiqué 6 identifies in more detail the steps the leaders of the G7 intend to take. The formulation remains less concrete than our proposal. In particular, it could have been clearer about the global character of the problem and about the close relationship between defending democracy against foreign attacks and other issues of cybersecurity, including the due diligence obligations of states, industries and individuals. 7 The problem is to be understood as a challenge of global (internet) governance and international peace.

More work therefore remains to be done, and besides the many upcoming conferences and projects, the IGF in Berlin (25-29 November 2019) presents an excellent multistakeholder forum where these issues could be discussed further, with the aim of finding consensus among all stakeholders on a declaration on the protection of the global digital information ecosystem along the lines of the following draft.

Draft G7 Charlevoix statement on the protection of the global digital information ecosystem

  1. Threat of misuse of digital technology and information: We, the Leaders of the G7, note with concern the increased misuse of the internet and digital information, by both states and private actors, aimed at disturbing political processes in our democracies and in political systems throughout the world. We strongly condemn any malicious cyber activities, such as the manipulation of national elections, digital disinformation campaigns and psychographic targeting in election campaigns, and commit ourselves fully to abstain from such practices.
  2. Protection of the global digital information ecosystem: The effective protection of the digital information ecosystem is a condition for the full exercise of political freedoms and self-determination of peoples in modern democracies. We will take all necessary measures and call upon all stakeholders, to defend our globalised digital society against any threat or attempt to hamper further development of the benefits offered by digitisation of societies.
  3. Cyber threats against other states equivalent to violation of international law: We understand malicious cyber-activities against other states and their infrastructures, including digital offences by governments against the integrity of political processes and the public sphere of political discourse in foreign countries, as equivalent to an intervention in their internal affairs, contrary to the principle of sovereign equality embodied in Article 2 (1) of the Charter of the United Nations. Such activities constitute a breach of international law giving rise to countermeasures.
  4. Global cyber-security compact: With a view to avoiding distrust among states and a risk of escalation and conflict worldwide, we commit ourselves and strive to bring all countries together to agree upon a ‘global cyber-security compact’ compelling all governments to abstain from cyber-offences against other states or private parties and, in particular, from information operations and other intentional intervention into the democratic processes of other countries.
  5. Due diligence against cyber attacks: The international law principle of due diligence, which requires each country to make every effort possible to prevent attacks by private actors from its territory against foreign countries, industries or people, applies equally to malicious cyber activities. This includes prohibiting private parties from conducting such actions. We commit ourselves, and call on all other governments, to fully respect this principle and to agree upon concrete terms of its application in cyberspace as part of the ‘global cyber-security compact’.
  6. Private sector responsibilities to develop resilient technology: We call upon IT companies such as communication service providers and platforms to develop resilient IT systems and share technologies to combat malicious activities in the cyber-sphere. Social media and search platforms should also apply algorithms and be prepared to detect and take down illegal hate speech and content that supports extremist, racist and terrorist propaganda. Illegal expression that can be identified as originating from unlawful bots or other automatic devices that distort the free and independent formation of political views should also be restricted, in full compliance with the human right to freedom of expression and international human rights law.
  7. Private sector global governance responsibilities: Corporate Social Responsibility (CSR) and the United Nations Guiding Principles on Business and Human Rights applicable to private sector companies are important elements of the global governance framework for business relations in the cyber realm and function as a corollary to state regulation and international law. This includes the duty to be responsive to the concerns of individuals who believe their rights have been infringed by illegal conduct of IT companies or by illegal content made publicly available through social media and platforms. We urge companies to establish easily accessible, rapid and efficient procedures to fairly address private complaints against such conduct and content, with due regard to the freedom of expression, and likewise to accept complaints against any unjustified take-down of lawful content.
  8. Multi-stakeholder collaboration to protect the global digital ecosystem: The protection of the digital information ecosystem can only be accomplished through common and coordinated efforts of states, businesses, the civil society and individual users. We commit to new and additional investment in education systems including universities to develop curricula, train teachers and media, and to undertake further research to ensure highest degrees of digital literacy, critical thinking, awareness for cyber-risks and diligence in the production and use of IT products throughout our societies. We consider these new efforts as being essential for ensuring a safe digital infrastructure and sustainable democratic resilience.
  9. Global information culture consistent with international human rights: We call for the establishment of a new global information culture, based upon the protection of international human rights standards and in particular, the fundamental freedom of expression, free access to information, to education and to culture, full respect of privacy and the protection of personal data. We understand these as guiding principles of our policies related to digitisation and security and as a condition for a prosperous development of our democracies. We commit ourselves to support civil society initiatives and other stakeholders in their endeavor to give effect to these principles, rights and values as part of the ongoing process of internet governance.

Footnotes

1. With some proposals for solution: Yochai Benkler, Robert Faris, and Hal Roberts, Network Propaganda. Manipulation, Disinformation, and Radicalization in American Politics (Oxford University Press, 2018). For operations in Europe and with regard to the Brexit-referendum see in particular the alarming account of Carole Cadwalladr, The great British Brexit robbery: how our democracy was hijacked, The Guardian, 7 May 2017, at: https://www.theguardian.com/technology/2017/may/07/the-great-british-brexit-robbery-hijacked-democracy (accessed 10 December 2018).

2. See the German Network Enforcement Law (“Gesetz zur Verbesserung der Rechtsdurchsetzung in sozialen Netzwerken, Netzwerkdurchsetzungsgesetz - NetzDG“) of 1 September 2017, available at: https://www.gesetze-im-internet.de/netzdg/BJNR335210017.html (accessed 8 December 2018), English translation: https://www.bmjv.de/SharedDocs/Gesetzgebungsverfahren/Dokumente/NetzDG_engl.pdf?__blob=publicationFile&v=2. For the initiatives of the European Union see, in particular, the “Code of Practice against Disinformation“, where for the first time worldwide industry agreed, on a voluntary basis, to self-regulatory standards to fight disinformation, at: https://ec.europa.eu/digital-single-market/en/news/code-practice-disinformation (accessed 28 February 2019). See also the revised EU Cybersecurity Strategy: European Commission/High Representative of the Union For Foreign Affairs and Security Policy, Joint Communication to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions “Action Plan Against Disinformation“, 5.12.2018, JOIN(2018) 36 final, p. 1, (Introduction) and pp. 5-11, mentioning four pillars and 10 actions, at: https://eeas.europa.eu/headquarters/headquarters-homepage_en/54866/Action%20Plan%20against%20Disinformation (accessed 13 February 2019). See also the Introduction to: European Commission, Joint Communication of the European Parliament and the Council, Resilience, Deterrence and Defence: Building strong cybersecurity for the EU, JOIN(2017) 450 final of 13 September 2017, at: https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:52017JC0450&from=en (accessed 10 February 2019). For the United States see, in particular, The White House, National Cyber Strategy of the United States of America, September 2018, Introduction, pp. 1-2, and p. 9: “Protect our democracy”, at: https://www.whitehouse.gov/wp-content/uploads/2018/09/National-Cyber-Strategy.pdf, (accessed 10 February 2019).

3. Eileen Donahoe, Don’t Undermine Democratic Values in the Name of Democracy. How not to regulate social media, in The American Interest, 2017, at: https://www.the-american-interest.com/2017/12/12/179079/ (accessed 8 December 2018); see also: idem: Protecting Democracy from Online Disinformation Requires Better Algorithms, Not Censorship. In: Council on Foreign Relations, 21 August 2017, at: https://www.cfr.org/blog/protecting-democracy-online-disinformation-requires-better-algorithms-not-censorship (accessed 8 December 2018); for other critical comments see Mathias Hong, The German Network Entforcement Act and the Presumption in Favour of Freedom of Speech, in: Verfassungsblog 22 January 2018, at: https://verfassungsblog.de/the-german-network-enforcement-act-and-the-presumption-in-favour-of-freedom-of-speech/ (accessed 8 December 2018).

4. Paul Nemitz, Constitutional democracy and technology in the age of artificial intelligence, in: Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 15 October 2018, at: https://royalsocietypublishing.org/doi/full/10.1098/rsta.2018.0089 (accessed 10 December 2018).

5. See: https://g7.gc.ca/wp-content/uploads/2018/06/G7SummitCommunique.pdf (accessed 28 February 2019).

6. Available at: https://g7.gc.ca/wp-content/uploads/2018/06/DefendingDemocracyFromForeignThreats.pdf (accessed 28 February 2019).

7. More details: Ingolf Pernice, Global Cybersecurity Governance. A Constitutional Analysis, in: 7 Global Constitutionalism (2018), pp. 112-141.

Counter-terrorism in Ethiopia: manufacturing insecurity, monopolizing speech


This paper is part of Practicing rights and values in internet policy around the world, a special issue of Internet Policy Review guest-edited by Aphra Kerr, Francesca Musiani, and Julia Pohle.

Introduction

The potential of the internet as a site of protest, resistance, and social change in the context of repressive regimes has been the subject of scholarly inquiry in the past two decades (Baron, 2009; Castells, 2012; Cowen, 2009; Fenton, 2016; Pohle & Audenhove, 2017; Shirky, 2008). From the printing press to the internet, the transformative power of new communication technologies lies in their tendency to disrupt central authority and control. When faced with disruptive communication technologies, authoritarian governments’ knee-jerk reaction is usually one of confusion, suspicion, and prohibition. However, repressive regimes also learn to embrace these technologies with the aim of strengthening the existing order (Kalathil & Boas, 2003; Morozov, 2013). For example, MacKinnon’s (2011) notion of “networked authoritarianism” reflects how authoritarian regimes in countries like China not only adopt new communication technologies but also use these technologies to bolster their legitimacy. While some of the most widespread practices of using the internet as a means of control include surveillance (Fuchs & Trottier, 2017; Grinberg, 2017), censorship (Akgül & Kırlıdoğ, 2015; Yang, 2014), and hacking (Gerschewski & Dukalskis, 2018; Kerr, 2018), these practices are oftentimes informed by internet policy frameworks and rational-legal dynamics. As Hintz and Milan (2018) articulate, the institutionalisation and normalisation of surveillance practices into law and popular imagination in Western democracies indicates that the authoritarian repurposing of the internet is now a global phenomenon.

Through a case study of Ethiopia, this paper attempts to shed some light on how the rise of counter-terrorism legal frameworks shapes a nation state’s internet policy, especially as it pertains to the communication of dissent, resistance, and protest. Consistent with global trends in response to acts of terrorism, the Ethiopian government adopted counter-terrorism legislation in 2009. While this legislation was championed by the Ethiopian government as a way of combating terrorism, its evolution into arguably the most consequential legal framework for undermining freedom of expression is consistent with the neopatrimonial rational-legal design of the ruling party, the Ethiopian People’s Revolutionary Democratic Front (EPRDF). Neopatrimonialism, Clapham (1985) notes, is “a form of organisation in which relationships of a broadly patrimonial type pervade a political and administrative system which is formally constructed on rational-legal lines” (p. 48). Neopatrimonial governments are organised through modern bureaucratic principles with formally defined powers, although these powers are exercised “as a form of private property” (Ibid.).

In this paper, I discuss how the Anti-Terrorism Proclamation of 2009 (hereafter referred to as “the EATP” or “the Proclamation”) has been appropriated to stifle freedom of expression involving mediated communication, especially in digital platforms. The study relies on a policy review of the EATP and other supplementary legal frameworks to assess provisions affecting digital freedoms, internet governance frameworks and political expression. In examining EPRDF’s adoption and use of a counter-terrorism legal framework, I situate my discussion within the neopatrimonial state framework. Drawing on literature that critically examines the corrosive impact of counter-terrorism laws on freedom of expression globally, I analyse how the EATP has severely undermined civil liberties of expression. In doing so, I demonstrate how the law affected digital freedoms as well as other pillars of a democratic polity such as journalistic practice. I conclude by highlighting the implications of the Proclamation in projecting a highly restrictive Ethiopian internet policy framework as it pertains to regulation and surveillance.

The global rise of counter-terrorism laws

The terrorist attacks of 11 September 2001 in the US, as well as other similar incidents in different parts of the world have caused profound changes in political, economic, and social relations globally. From communication systems to immigration flows to financial transactions, nations have aggressively sought a wide range of mechanisms to proactively curb potential threats (Birkland, 2004; Croucher, 2018). While executive branches such as law enforcement bodies and even militaries are commonly part of the counter-terrorism apparatus, the most conspicuous common denominator across nations has been the rise of what came to be known as counter-terrorism laws (De Frias, Samuel, & White, 2012).

The recent prominence of counter-terrorism laws across the world has had significant implications for the study of global terrorism from legal and policy perspectives, especially in terms of determining what does (and does not) constitute an act of terrorism. In this regard, the lack of a universal definition of terrorism is unsurprising; indeed, arriving at one may be an impossible task. Although such fluidity of the term is not new, the executive delimitation of terrorism has been conditioned by Resolution 1373 of the United Nations Security Council, issued on 28 September 2001 following the terrorist attacks on the US earlier that month. The Resolution, among other things, called on nations to criminalise acts of terrorism as well as the financing, planning, preparation and support of terrorism. In order to expedite the directive, the United Nations Security Council (UNSC) created a new Counter-Terrorism Committee tasked with overseeing counter-terrorism actions adopted by member states. While the resolution directed member states to step up their counter-terrorism efforts, it did not provide a framework to define what constitutes an act of terrorism. Roach et al. (2012) note that this has left individual nations to define terrorism according to their contextual concerns. This approach is not unexpected given how international counter-terrorism law and policy involve multiple layers of actors and stakeholders as well as an “interplay between international, regional and domestic sources of law” (Roach, Hor, Ramraj, & Williams, 2012, p. 3).

One consequence of the rather elastic framing of terrorism, coupled with the rise of counter-terrorism laws across nation states, has been renewed concern about the infringement of basic human rights. Well-known post-9/11 counter-terrorism activities in Guantanamo Bay or American “black sites” in some European countries (secret prisons mostly operated by the CIA where inmates have no rights other than those afforded to them by their detainers), as well as rendition sites in countries like Egypt, have demonstrated that there is a thin line between curbing terrorist acts and violating the basic right to be free from torture and degrading treatment (Setty, 2012). In addition to concerns over torture and degrading treatment, counter-terrorism efforts have also ignited debate on striking the right balance between thwarting terrorism and ensuring expressive, associational and assembly freedoms (Schneiderman & Cossman, 2002). Of critical importance here is how UNSC-endorsed counter-terrorism laws have created an added impetus for authoritarian governments to criminalise legitimate forms of domestic dissent (Roach, 2015).

Because of their reactive posture, counter-terrorism laws are closely linked with state securitisation. Securitisation, however, is subject to misperception in its framing of disorder. In many instances, as Bergson (1998) notes, disorder can be a construct of the state emanating from a contradiction between one’s own interests or needs. In this sense, securitisation generates insecuritisation by creating fear, which in turn empowers the state to expand its control. As Karatzogianni and Robinson (2017) highlight, securitisation “involves framing-out any claims, demands, rights or needs, which might be articulated by non-state actors. Such actors are simply disempowered, and either suppressed and ‘managed’ or paternalistically ‘protected’” (p. 287). By reducing social problems and differences to security issues, securitisation considerably expands state power by creating emergencies to combat imagined “dangers” (Bigo, 2000; Freedman, 1998; Gasteyger, 1999). In tandem with this securitisation rationale, many authoritarian and quasi-authoritarian states have aligned themselves with the global front loosely known as the “war against terrorism”. At the same time, these states have intensified their use of the counter-terrorism apparatus, including legislative means, to revamp internet policy frameworks, which in turn has direct ramifications for civil liberties.

It should be noted that concerns over appropriating counter-terrorism legal frameworks for authoritarian ends are not a uniquely Ethiopian phenomenon. For example, Egypt adopted its own version of a counter-terrorism law in 2015 that significantly curbed the rights of freedom of assembly, association and expression. Formally referred to as the Law of Organising the Lists of Terrorist Entities and Terrorists, Egypt’s counter-terrorism legislation gives the government a mandate to legally conduct surveillance of Egyptians as well as penalise those who oppose or criticise state policies and practices. Egypt’s counter-terrorism law has been criticised for criminalising dissent, usually by conflating crimes committed by violent groups with peaceful acts of expression that are critical of the government. Because the law employs vague language that is prone to arbitrary interpretation, Hamzawy (2017, p. 17) notes, it “does not require the government’s accusations of terrorist involvement to be proven through transparent judicial proceedings before individuals are placed on the list.”

Another African country that has recently adopted a counter-terrorism law with controversial outcomes is Cameroon. The Law on the Suppression of Acts of Terrorism in Cameroon (No. 2014/028) was enacted in 2014 against the backdrop of an initiative to contain threats from designated terrorist organisations, most notably Nigeria’s Islamist jihadist group, Boko Haram. While the law originally won notable support, its eventual deployment raised serious concerns over the infringement of rights of expression protected under the Cameroonian Constitution and international human rights law. According to a report by the Committee to Protect Journalists (CPJ) (2017, p. 7), the counter-terrorism legislation has been especially criticised for penalising journalists by conflating “news coverage of militants or demonstrators with praise,” resulting in journalists not knowing “what they can and cannot report safely, so they err on the side of caution.” One of the most notable cases involved Radio France Internationale (RFI) journalist Ahmed Abba, who is serving a ten-year prison sentence on terrorism charges for his reporting on the militant group Boko Haram after he was convicted by a military tribunal of “non-denunciation of terrorism” and “laundering of the proceeds of terrorist acts” (CPJ, 2017, p. 7).

Ethiopia, EPRDF and counter-terrorism

The Federal Democratic Republic of Ethiopia (FDRE) has been ruled by EPRDF since 1991. A coalition of four ethnically organised political parties, EPRDF instituted a highly centralised, top-down administrative structure that championed an ethno-nationalist political programme (Gashaw, 1993; Gudina, 2007; Habtu, 2003). Although EPRDF has projected a nominal democratic façade through elections, its legitimacy to govern has been called into question several times (Aalen & Tronvoll, 2009; Lyons, 2016). In the most recent national elections of 2015, for example, EPRDF declared victory in every parliamentary seat to extend its already protracted rule. In the second most populous country in Africa, home to contested ethnic, ideological, cultural, and political worldviews, EPRDF’s complete dominance of the rational-legal apparatus of the Ethiopian state has been anything but representative of the Ethiopian public.

In its 28-year dominance of the Ethiopian government, EPRDF deployed several mechanisms to quell alternative political ideologies as well as the individuals and organisations that express them. The mechanisms through which EPRDF strove for a political monism that guarantees its supremacy range from outright human rights violations to ideological warfare (Allo, 2017; Gudina, 2011; Kidane, 2001; Tronvoll, 2008). Arguably, the most common strategy EPRDF deployed to assert its power involved the mobilisation of its security and executive apparatus to go after political dissidents, who were reportedly imprisoned, exiled, harassed, disappeared or killed (Di Nunzio, 2014; Gudina, 2011; Vestal, 1999). Secondly, EPRDF was involved in a mass ideological indoctrination of its political programme (Abbink, 2017; Arriola & Lyons, 2016). Across federal and state government offices, state-owned enterprises, and state-run higher education institutions, devotion to the ideals of EPRDF’s abyotawi dimokrasi1 became the definitive rubric for reward and punishment. As former Prime Minister of Ethiopia and author of EPRDF’s political-cum-economic programme, Meles Zenawi believed that the long-term success of his party was contingent on the successful branding of “developmentalism” 2 and the creation of a mass devoted to it (Fana, 2014; Matfess, 2015). Thirdly, EPRDF was accused of fostering a state-sponsored social engineering of the Ethiopian people through an ethnic federalism design. By forging regional states along ethnic fault lines, EPRDF, despite its unpopularity, managed to sustain its political longevity through a divide-and-rule strategy that created mistrust and animosity between different ethnic groups (Bélair, 2016; Mehretu, 2012). Fourthly, and crucial to the current study, EPRDF laid out a neopatrimonial rational-legal network in which state resources were systematically channelled toward party interests (Kelsall, 2013; Weiss, 2016).
This blurred the demarcation between party and government, resulting in the rise of, among other things, a justice system loyal to EPRDF interests. It is these neopatrimonial interlocks between the Ethiopian legislative and judiciary organs, coupled with global shifts in counter-terrorism strategies, that cultivated the necessary conditions for the introduction of the EATP in 2009.

An influential player in the geopolitical and diplomatic affairs of the African continent, the FDRE is a key ally of the US in combating terrorism and terrorist groups in the Horn of Africa. The US and FDRE have established multiple counter-terrorism partnerships that specifically target designated terrorist groups such as Al Shabab in neighbouring Somalia (for example, see Agbiboa, 2015). In spite of its abysmal human rights record, especially between 2005 and 2018, Ethiopia continued to be regarded highly by the US and its allies due to the strategic alliance it offers in combating terrorism. Nevertheless, for Ethiopia’s ruling party, this partnership is as much about combating terrorism as it is about extending its grip on political power, which has now lasted for nearly three decades (Clapham, 2009). EPRDF has been accused of repurposing counter-terrorism apparatuses—intelligence and surveillance systems, military equipment, and technical know-how—financed and set up by its Western allies to quell critical expression, organisation and assembly domestically (Turse, 2017). In spite of years of US Department of State country reports that document state-sponsored human rights abuses, the US continued to follow a policy of appeasement toward the Ethiopian government, possibly to avoid the disruption of its geopolitical priority in the region. In this sense, it is plausible to argue that EPRDF views this as a critical leverage, one that is aimed at keeping outside political interference at bay, thereby effectively silencing external pressures for political reform. In the interest of maintaining its strategic priorities, EPRDF’s Western partners have chosen to be “oblivious to or even ignorant of Ethiopia’s worsening political exclusivity” (Workneh, 2015, p. 103), allowing the EPRDF, without meaningful accountability, to undermine basic human rights under the guise of counter-terrorism efforts.

It is against this background that Ethiopia adopted a counter-terrorism legal framework in 2009, although, in prior years, it was already involved in other counter-terrorism activities, including the US-backed military campaign against the Al Qaeda-affiliated Islamic Courts Union (ICU) in 2006 (Rice & Goldenberg, 2007). Since its enactment by the EPRDF-dominated Ethiopian parliament, the EATP has been extensively used to prosecute hundreds of individuals, including journalists, opposition political party members, and civil society groups. 3 In many ways, the Ethiopian government’s actions since the adoption of the Proclamation in 2009 justified the concerns of human rights groups, who have heavily criticised the law for being dangerously vague in framing terrorist acts, violating international human rights law, and dismantling criminal justice due process standards. Some observers highlight that the EATP has become the most potent tool to stifle legitimate forms of critical expression, organisation, and assembly (Kibret, 2017; Sekyere & Asare, 2016).

The fatal consequences of Ethiopia’s adoption of a counter-terrorism framework for freedom of speech were largely predictable because of the Ethiopian government’s poor track record on human rights. Several nations’ rush to adopt counter-terrorism laws has been motivated by the idea of creating a lawful means to bypass existing criminal justice procedures that may not be speedy or effective enough to respond to national security threats (Daniels, Macklem, & Roach, 2012). In this sense, counter-terrorism laws empower governments to exercise a “state of exception” where, under perceived or real terrorism threats, normal procedures of jurisprudence in criminal law may be circumvented in the spirit of upholding “the greater good.” As Roach et al. (2012, p. 10) succinctly summarised, the intent here is “accommodating terrorism and emergencies within the rule of law without producing permanent states of emergency and exception.” It is plausible that countries with established democratic traditions, critical loopholes notwithstanding, would have better institutional mechanisms to combat corrosive uses of counter-terrorism laws. In a democratically fragile country like Ethiopia, where all branches of government including the judiciary are set up to buttress the self-proclaimed hegemonic project of the ruling party, the EATP has become the rule and not the exception (see Fig. 1 for EPRDF’s neopatrimonial interlocks in the context of the EATP).

Fig. 1: The neopatrimonial interlocks of the Ethiopian Anti-Terrorism Proclamation

Situating the Ethiopian counter-terrorism law apparatus under the neopatrimonial state framework

The anocratic design of the Ethiopian state has effectively created a monopoly of governance by EPRDF. 4 Through a rational-legal system that resembles a democratic polity but which, in practice, enables the continuity of one-party rule, EPRDF has projected itself as a vanguard elite of democracy and development in Ethiopia. For critics, however, EPRDF’s Ethiopia is neither developmental nor democratic, but rather a neopatrimonial state that has lodged a complex rational-legal bureaucracy to channel public resources toward the group and individual interests of the ruling elite.

The concept of neopatrimonialism essentially encompasses a dualistic nature whereby “the state is characterized by patrimonialisation and bureaucratization” [sic] (Bach, 2011, p. 277). This fusion, a quintessential characteristic of the neopatrimonial state, assumes a scenario where the “patrimonial logic coexists with the development of bureaucratic administration and at least the pretence of rational-legal forms of state legitimacy” (Van de Walle, 1994, p. 131). Such dualism can translate into a wide array of empirical situations, mirroring variations in the state’s failure or capacity to produce “public” policies. Neopatrimonialism, in this sense, is a modern, sophisticated form of patrimonialism.

Davies (2008) identifies two key features of neopatrimonial governance. Firstly, the neopatrimonial state significantly personalises political authority, both as an ideology and as an instrument. Secondly, such governments develop conflicting systems of rational-legal bureaucracy and clientelistic relations, with the latter usually dominating the former. A number of scholars treat clientelism and patronage as integral components of neopatrimonialism (Bratton & van de Walle, 1994; Erdmann & Engel, 2007; Eisenstadt, 1973). Clientelism represents “the exchange or brokerage of specific services and resources for political support, often in the form of votes” involving “a relationship between unequals, in which the major benefits accrue to the patron” and “redistributive effects are considered to be very limited” (Erdmann & Engel, 2007, p. 107). In a broad sense, it is the complexity and sophistication of this “brokerage” that distinguishes neopatrimonial clientelism from patrimonial clientelism. Unlike the “direct dyadic exchange relation between the little and the big man” (Erdmann & Engel, 2007, p. 107) that is characteristic of patrimonialism, the neopatrimonial state needs a network of brokers that has permeated the bureaucratic nexus, where they can relay the interests of the political centre to the periphery (Powell, 1970; Weingrod, 1969).

Making sense of the EATP as a neopatrimonial instrument of control

In the Ethiopian context, a recent example of how the rational-legal bureaucracy is upended along neopatrimonial lines is indicated by the adoption of the EATP in 2009. Although the Proclamation was conceived as a means by which the Ethiopian state could legally circumvent existing laws of criminal justice—which is commonly practiced in other countries with similar legal frameworks—recent trends indicate the Proclamation has been excessively used to criminalise domestic political opposition and critical speech. In the following, I will address how the EATP was used not as a security tool but rather as a means of safeguarding the ruling party’s dominance in three ways: (a) curbing digital freedoms; (b) monopolising the political narrative; and (c) manufacturing fear to incubate self-censorship.

Curbing digital freedoms

One of the ways the EATP set itself up as a legal framework with substantial ramifications for freedom of expression relates to its determination of what it deems to be evidence of terrorist acts. Specifically, the law’s focus on “digital evidence” warrants critical scrutiny in relation to its implications for digital expressions of dissent and resistance. For example, Villasenor (2011) demonstrates how digital storage enables authoritarian governments to track organised dissent online. Hellmeier (2016), who surveyed determinants of internet filtering as measured by the Open Net Initiative in 34 autocratic regimes, outlines the digital toolkits available to autocrats to control political activism on the internet. Dripps (2013) warns about the risks posed to privacy when unchecked access to digital evidence may lead to the exposure of “innocent and intimate information” of individuals (p. 51). Against this backdrop of authoritarian governments’ use of digital artifacts to stifle critical speech, the EATP’s definition of “digital evidence” poses a palpable risk to the communication of dissent, protest or resistance:

[Digital evidence refers to] information of probative value stored or transmitted in digital form that is any data, which is recorded or preserved on any medium in or by a computer system or other similar device, that can be read or perceived by a person or a computer system or other similar device, and includes a display, printout or other output of such data (FDRE, 2009, p. 4829).

While this definition by itself may be fairly acceptable in everyday use of the language, it warrants special scrutiny in terms of what it entails in a counter-terrorism context. In tandem with the overall character of the legislation, the broad definition of “digital evidence” leaves a vast latitude of interpretive discretion to the judiciary and executive branches of the government. In the Ethiopian context, the extensive neopatrimonial interlocks between the legislative body that adopted the EATP, the judiciary that interprets the law, and the executive branch that carries out punishments have undermined the credibility of due process. When seen against the neopatrimonial roots of the EATP, the oppressive implications of this conceptualisation of “digital evidence” are evident in three areas: information storage, transmission, and consumption.

The information storage imperative poses a threat to digital freedoms because, at its core, it is an attempt to dissolve the notion of communication devices such as computers, cell phones, and storage drives as private entities. The mobile phone or the computer is not only an information processing device, but a physical space where individuals purposefully (documents, pictures, audio, video, etc.) or inadvertently (cookies, search history, caches, etc.) store crucial information that enables them to archive different aspects of their lives. It gives them control over memory by enabling a sense of permanence. In this sense, the individual’s communication device has become an extension of private personhood (Conger, Pratt, & Loch, 2013; Kotz, Gunter, Kumar, & Weiner, 2016; Weber, 2010).

It should be noted that government encroachment on private digital spaces, especially through information extraction, is neither unusual nor uniquely Ethiopian. 5 In 2016, for example, the Federal Bureau of Investigation (FBI) in the US asked Apple to help unlock an iPhone belonging to a shooter responsible for the deaths of 14 people in San Bernardino, California (Nakashima, 2017). The case ended up in court because Apple declined to help the FBI, arguing that software developed to access the phone would be used in several other instances, thereby endangering encryption and data privacy altogether. In 2015, the New York Times reported how pro-Syrian government hackers targeted the cellphones and computers of rebel fighters in an attempt to extract the latter’s contacts and operations (Sanger & Schmitt, 2015). In the Ethiopian case, the goal of state-sponsored encroachment on the private digital space is consistent with similar practices globally, namely information extraction. However, Ethiopia offers a compelling case of an attempt to institutionalise the de-personalisation of the private communication device through a counter-terrorism legal framework that is arguably designed for non-counter-terrorism acts of political dissent. The nebulous designation of digital evidence as “information of probative value” has indeed resulted in the prosecution of several individuals charged under the counter-terrorism law. 6

Extensive policing of communication technology devices by the Ethiopian government has been a common practice. For example, until recently, the government required citizens to register their laptops with the Ethiopian Customs Authority before travelling out of the country so that they could not bring in new, unregistered devices upon return. Between 2017 and 2018, Ethio-Telecom, the state-owned telecommunications operator which has a monopoly over voice, text, and data services in Ethiopia, required citizens to register their phones with the company in order to obtain service. Any phone that was not registered would not get access to telecommunication services in Ethiopia. It is within this already hostile ICT environment that the government, through the EATP, moved to dismantle citizens’ reasonable expectation that their communication devices are private. Several journalists, bloggers, and opposition party members have reported how the government confiscates their mobile phones and personal computers in its “arrest first, find evidence later” approach. Sileshi Hagos is a good case in point here. He was briefly detained and interrogated by government security forces about his fiancée Reeyot Alemu, a journalist who was imprisoned under terrorism charges for her alleged communication with the banned and terrorist-designated opposition party Ginbot 7 (Sreberny, 2014). 7 The government confiscated his laptop, presumably to extract information related to his and Reeyot Alemu’s alleged communication with Ginbot 7 (Pen International, 2011).

While the EATP’s designation of “storage” as an important element of “digital evidence” empowers the government to encroach on personal communication devices, perhaps the more dangerous way in which the law sets the state up to dissolve individual privacy rights is by granting the government legal standing to use information intercepted from communication exchanges in the digital sphere. While the EATP’s provision of legal protection for the Ethiopian intelligence apparatus to eavesdrop on citizens’ communication curbs privacy rights, 8 the more dangerous provision involves the authorisation of mass surveillance through communication service providers. The EATP’s stipulation that “any communication service provider shall cooperate when requested by the National Intelligence and Security Service to conduct the interception” (FDRE, 2009, p. 4834) directly positions Ethio-Telecom, the sole telecommunication service provider in Ethiopia, as a site of unchecked mass surveillance. A massive state-owned monopoly in the telecommunications sector of Ethiopia with more than 59 million de facto clients (ITU, 2018b), Ethio-Telecom has been implicated in citizen monitoring in several instances. During the 2005 general elections, for example, the Ethiopian government ordered the state-owned telecommunication provider to shut down the SMS system after opposition groups successfully deployed text-based campaigns (Abdi & Deane, 2008). Horne & Wong (2014) detail how the Ethiopian government acquires surveillance technologies from several countries, which are then oftentimes integrated into Ethio-Telecom operations. This results in unrestricted access to call records, internet browsing logs, and instant messaging platforms (Marquis-Boire, Marczak, Guarnieri, & Scott-Railton, 2013).
In 2015, a massive online data dump involving the Italian commercial surveillance company Hacking Team revealed extensive evidence, including email transcripts, invoices, and technical manuals, that directly implicated the Ethiopian government. Hacking Team’s surveillance products were used by Ethiopia’s Information Network Security Agency (INSA) to acquire communication involving journalists affiliated with Ethiopian Satellite Television (ESAT), a US- and Europe-based network known for its critical views on EPRDF’s rule (Currier & Marquis-Boire, 2015). In this sense, the EATP’s directive for communication providers in Ethiopia—Ethio-Telecom by default—to relinquish users’ private information only formalises what many considered a long-standing exercise of institutional control over citizens’ communication.

In addition to Ethio-Telecom, this stipulation enables the Ethiopian National Intelligence and Security Service (NISS) to require third-party communication service providers, such as internet cafes, to keep records of users’ online activities. Requiring third-party providers to monitor and report users’ activities is not uncommon in other parts of the world. In the Ethiopian case, it is particularly concerning because the majority of users who rely on computers do not access the internet from their households but from third-party public providers such as internet kiosks, cafeterias, hotels, and schools.

The storage and transmission elements of “digital evidence” are compounded by the consumption component, which directly implicates user behaviour. Under the “Failure to Disclose Terrorist Acts” section, the EATP stipulates, among other things, that anyone who fails to disclose information or evidence that could be used to prosecute or punish a suspect involved in “an act of terrorism” will be punished with rigorous imprisonment (FDRE, 2009, p. 4832). The danger of this provision lies in the nebulous parameters of “an act of terrorism”, which emanate from the contested conceptualisation of terrorism itself. If a journalist receives an email communication from one of the terrorist-designated Ethiopian political organisations and keeps the name of the source anonymous as a matter of journalistic ethics, the EATP empowers the state to prosecute the journalist under the “Failure to Disclose Terrorist Acts” provision. For many journalists, the challenge is that the extensive popular support the Oromo Liberation Front (OLF) and Ginbot 7 enjoy compels them to report on the organisations’ activities as a matter of public interest. For EPRDF, blacklisting these organisations serves the purpose of rendering them obsolete in the political arena. Journalists who transmit information regarding organisations such as OLF and Ginbot 7 in public discourse inevitably run the risk of being charged as terrorists or accomplices of terrorism.

Monopoly of political narrative

Although the various charges brought under the EATP by the Ethiopian government differ in scope and nature, a sizable number of cases have serious implications for freedom of expression, especially mediated critical speech. In this sense, it is no surprise that the EATP has probably been put to retributive effect more than any other legal framework governing communication through electronic media. Since its enforcement, the law has disproportionately targeted community members involved in the dissemination of information through traditional and digital media platforms, including bloggers, journalists, and freelance writers. In Ethiopian Satellite Television and Oromia Media Network v The Federal Public Prosecutor, the US-based television stations Ethiopian Satellite Television (ESAT) and Oromia Media Network (OMN) were accused of disseminating information deemed to serve the interests of Ginbot 7 and OLF, groups the Ethiopian government has designated as terrorist. The underlying argument of the Federal Public Prosecutor was based on the assumption that disseminators of information involving terrorist-designated groups act as accessories to terrorism. The EATP employs very broad and ambiguous language that criminalises speech deemed to be an “encouragement” of terrorism, whatever the latter may be, through the interpretive lens of the Ethiopian government. Consider Article 6 of the Proclamation:

Whosoever publishes or causes the publication of a statement that is likely to be understood by some or all of the members of the public to whom it is published as a direct or indirect encouragement or other inducement to them to the commission or preparation or instigation of an act of terrorism…is punishable with rigorous imprisonment from 10 to 20 years [emphasis mine] (FDRE, 2009, p. 4831).

When the determination of what constitutes encouragement of an act of terrorism is made based on the “likely” understanding of “members of the public”, the outcome invites arbitrary interpretation, jurisprudence, and execution of the law. In other words, by keeping the law as vague and broad as possible, the government can choose to deploy it haphazardly in order to stamp out legitimate acts of political expression and dissent. Consider, for example, the case of Reeyot Alemu Gobebo, a former contributor to the weekly newspaper Feteh. She was convicted on three counts under the terrorism law for her writings that were highly critical of the ruling party and the former Prime Minister of Ethiopia, Meles Zenawi, who was persistent in characterising members of the free press as “messengers” of terrorist groups (Abiye, 2011). Although Reeyot Alemu was formally convicted of having ties with terrorist groups—a common blanket accusation the Ethiopian government invokes to arrest journalists and freelance writers—it is important to note that she and other journalists imprisoned on terrorism charges were targeted by the government for continued journalistic practices that EPRDF viewed as divergent from its hegemonic rule (see CPJ, 2012; Dirbaba & O’Donnell, 2012).

In other words, when journalists such as Reeyot Alemu report on groups such as Ginbot 7 and OLF, their actions are justified by the enormous public interest at stake. If and when a journalist, in the EATP’s terms, “publishes or causes the publication of” statements involving groups or individuals designated as terrorists by the Ethiopian government, they run a very real risk of imprisonment. Everyday journalistic routines of establishing a source, conducting an interview, or simply relaying a press release involving designated “terrorist organisations” can easily become prosecutable acts. 9

Manufacturing fear, fostering self-censorship

While the appropriation of the EATP to target media professionals by tying them to controversially terrorist-designated political groups is in and of itself an attack on the freedom of expression enshrined in the Ethiopian Constitution, 10 the more dangerous consequence is probably the chilling effect this “example” has set for ordinary citizens. The indiscriminate use of “terrorist” to refer to journalists reporting on opposition groups has now evolved to include individuals whose political, economic, social, or human rights opinions differ from EPRDF’s narrative. For example, Workneh (2015) notes how legal frameworks such as the Anti-Terrorism Proclamation have created a cloud of insecurity and fear among Ethiopian social media users when it comes to political opinions. The thin line between “dissent” and “terrorism” leads users to unwittingly undergo different forms of self-censorship in the digital sphere, a scenario that enables the government to create a subdued public reluctant to participate in a counter-hegemonic narrative.

This “fear factor” born of the government’s criminalisation of critical speech is compounded by the EATP’s interception provisions, which authorise the National Intelligence and Security Service (NISS), upon obtaining a court warrant, to: intercept or conduct surveillance on the telephone, fax, radio, internet, electronic, postal and similar communications of a person suspected of terrorism; enter any premises in secret to enforce the interception; or install or remove instruments enabling the interception. As indicated earlier in this article, the Federal Public Prosecutor has presented transcripts of phone conversations obtained through government wiretapping as evidence in a court of law in Soliana Shimeles et al. v the Federal Public Prosecutor. 11 Government infiltration of the private communications of Ethiopian citizens—especially activists and journalists with critical opinions—has become common practice since the EATP was put in place in 2009. 12

Conclusion

Legal frameworks such as the EATP stipulate broad and vague definitions of terrorism, which are in turn used to frame critical speech as terrorist acts and to directly prosecute critics of the Ethiopian government. More importantly, however, it is sensible to argue that the Ethiopian government’s actions through the EATP can be seen as a long-term proactive strategy of creating a rational-legal bureaucracy—consistent with the neopatrimonial logic—that is subject to arbitrary interpretation and execution at the will of the state. The result is a public that is unsure about what could be considered a “terrorist” message as opposed to “normal” speech, which in turn incubates a widespread culture of self-censorship. Consequently, the much-publicised prosecution of the Zone 9 bloggers and other online political activists in Ethiopia through the EATP and other legal frameworks is not necessarily an exercise in stifling the views of the defendants per se, but rather an attack on what they represent: a young, critical, and digitally literate Ethiopian populace in the making.

As a practical matter, it is easy to see how the EATP compounds Ethiopia’s highly restrictive internet policy frameworks, informed by legal instruments such as the Telecom Fraud Offense Proclamation of 2012. Elsewhere, I have argued that Ethiopia’s internet policy frameworks negatively affect user activity (Workneh, 2015) and outlined how the highly centralised, top-down, and monopolistic Ethiopian telecommunication policy adversely affects the quality, access, and usability of digital platforms for Ethiopians (Workneh, 2018). 13 The outdated legislative frameworks that shape Ethiopia’s digital ensemble need a reboot. This recourse should envisage a shift from vanguard centralism to a participatory, multi-stakeholder, and equitable paradigm. Inclusive, people-centred internet policies have paid dividends to citizens of other African countries such as Kenya, where the highly successful mobile finance platform m-pesa, for example, brought about tangible results in information justice and financial inclusion (see, e.g., Jacob, 2016).

It is in this spirit that, as Ethiopia undergoes an uncertain political reform (Burke, 2018) that includes ongoing scrutiny of its counter-terrorism legislation (Maasho, 2018), a cautious diagnosis of what to do with the EATP is paramount. One approach to addressing the corrosive outcomes of the EATP is to repeal the law in its entirety. This view is not uncommon. Brownlie (2004) argues there should be no category of a law of terrorism and that terrorism cases should be conducted “in accordance with the applicable sectors of public international law: jurisdiction, international criminal justice, state responsibility, and so forth” (p. 713). The second approach is to keep the EATP while making significant revisions, especially to provisions that have been identified as threats to civil liberties. While an argument can be made for the merits and shortcomings of both, the execution of either approach does not necessarily guarantee the right to freely express opinions. In my view, the EATP is one instrument of EPRDF’s multifaceted neopatrimonial apparatus. Without a comprehensive political reform that ensures genuine multi-stakeholder participation and the termination of the neopatrimonial order, any action against the EATP, noble as it may be, will fall short of a meaningful stride toward a free society. The most consequential provision of the EATP is not any of the language that directly curbs freedom of expression but rather the Proclamation’s designation of a politically homogeneous legislative body with the power to “proscribe and de-proscribe an organization as terrorist organization” [sic.] (FDRE, 2009, p. 4837). It is this very clause that has enabled the EPRDF-dominated Ethiopian House of Peoples’ Representatives to proscribe opposition groups and their supporters such as OLF, Ginbot 7, and ONLF as terrorists 14, which in turn has led to the persecution of thousands of Ethiopians.
If the Ethiopian legislative body were truly representative of the country’s diverse political spectrum, a politically motivated designation of dissenting individuals and organisations as terrorists would be highly unlikely, thereby minimising the likelihood of counter-terrorism legislation serving as an instrument of neopatrimonial control.

References

Aalen, L., & Tronvoll, K. (2009). The end of democracy? Curtailing political and civil rights in Ethiopia. Review of African Political Economy, 36(120), 193–207. doi:10.1080/03056240903065067

Abbink, J. (2017). Paradoxes of electoral authoritarianism: the 2015 Ethiopian elections as hegemonic performance. Journal of Contemporary African Studies, 35(3), 303–323. doi:10.1080/02589001.2017.1324620

Agbiboa, D. (2015). Shifting the battleground: The transformation of Al-Shabab and the growing influence of Al-Qaeda in East Africa and the Horn. Politikon, 42(2), 177–194. doi:10.1080/02589346.2015.1005791

Akgül, M., & Kırlıdoğ, M. (2015). Internet censorship in Turkey. Internet Policy Review, 4(2). doi:10.14763/2015.2.366

Abdi, J., & Deane, J. (2008). The Kenyan 2007 elections and their aftermath: The role of media and communication. BBC World Service Trust.

Abiye, T. M. (2011, December 7). The journalist as terrorist: An Ethiopian story. Open Democracy. Retrieved October 2, 2017, from https://www.opendemocracy.net/abiye-teklemariam-megenta/journalist-as-terrorist-ethiopian-story

Allo, A. (2017). Protests, terrorism, and development: On Ethiopia’s perpetual state of emergency. Yale Human Rights and Development Journal, 19(1), 133–177. Retrieved from https://digitalcommons.law.yale.edu/yhrdlj/vol19/iss1/4/

Arriola, L. R., & Lyons, T. (2016). Ethiopia: The 100% election. Journal of Democracy, 27(1), 76–88. doi:10.1353/jod.2016.0011

Bach, D. C. (2011). Patrimonialism and neopatrimonialism: Comparative trajectories and readings. Commonwealth and Comparative Politics, 49(3), 275-295. doi:10.1080/14662043.2011.582731

Bach, J.-N. (2011). Abyotawi democracy: neither revolutionary nor democratic, a critical review of EPRDF’s conception of revolutionary democracy in post-1991 Ethiopia. Journal of Eastern African Studies, 5(4), 641–663. doi:10.1080/17531055.2011.642522

Baron, D. (2009). A better pencil: Readers, writers, and the digital revolution. Oxford: Oxford University Press. doi:10.1017/s0047404511000832

Bélair, J. (2016). Ethnic federalism and conflicts in Ethiopia. Canadian Journal of African Studies, 50(2), 295–301. doi:10.1080/00083968.2015.1124580

Bergson, H. (1998). Creative evolution. New York: Dover Publications.

Bigo, D. (2000). When two become one: Internal and external securitisations in Europe. In M. Kelstrup & M. Williams (Eds.), International Relations Theory and the Politics of European Integration: Power, Security and Community (pp. 171–204). London: Routledge. doi:10.4324/9780203187807-8

Birkland, T. (2004). “The world changed today”: Agenda‐setting and policy change in the wake of the September 11 terrorist attacks. Review of Policy Research, 21(2), 179–200. doi:10.1111/j.1541-1338.2004.00068.x

Bratton, M., & Van de Walle, N. (1994). Neopatrimonial regimes and political transitions in Africa. World Politics, 46(4), 453–489. doi:10.2307/2950715

Brownlie, I. (2004). Principles of public international law. Oxford: Oxford University Press.

Burke, J. (2018, July 8). “These changes are unprecedented”: how Abiy is upending Ethiopian politics. The Guardian. Retrieved from https://www.theguardian.com/world/2018/jul/08/abiy-ahmed-upending-ethiopian-politics

Castells, M. (2012). Networks of outrage and hope: Social movements in the internet age (1st ed.). Cambridge: Polity Press.

Clapham, C. (1985). Third world politics. London: Helm.

Clapham, C. (2009). Post-war Ethiopia: The trajectories of crisis. Review of African Political Economy, 36(120), 181–192. doi:10.1080/03056240903064953

Committee to Protect Journalists (CPJ). (2012, August 3). Ethiopian appeals court reduces sentence of Reeyot Alemu. Retrieved December 5, 2018, from https://cpj.org/2012/08/ethiopian-appeals-court-reduces-sentence-of-reeyot.php

Committee to Protect Journalists (CPJ). (2017). Journalists not terrorists: In Cameroon, anti-terror legislation is used to silence critics and suppress dissent (Special Report). New York: Committee to Protect Journalists. Retrieved from https://cpj.org/reports/Cameroon-English-Web.pdf

Conger, S., Pratt, J. H., & Loch, K. D. (2013). Personal information privacy and emerging technologies. Information Systems Journal, 23(5), 401–417. doi:10.1111/j.1365-2575.2012.00402.x

Cowen, T. (2009). Create your own economy: The path to prosperity in a disordered world. New York: Dutton.

Croucher, S. (2018). Globalization and belonging: The politics of identity in a changing world (2nd ed.). Lanham: Rowman and Littlefield.

Currier, C., & Marquis-Boire, M. (2015, July 7). The Intercept. Retrieved March 14, 2016, from https://theintercept.com/2015/07/07/leaked-documents-confirm-hacking-team-sells-spyware-repressive-countries/

Daniels, R., Macklem, P., & Roach, K. (2002). The security of freedom: Essays on Canada's Anti-Terrorism Bill. Toronto: University of Toronto Press. doi:10.3138/9781442682337-fm

Davies, S. (2008). The political economy of land tenure in Ethiopia (PhD Thesis, University of St. Andrews). Retrieved from http://hdl.handle.net/10023/580

De Frias, A. M. S., Samuel-Azran, K., & White, N. (Eds.). (2012). Counter-terrorism: International law and practice (1st ed.). Oxford: Oxford University Press.

Di Nunzio, M. (2014). ‘Do not cross the red line’: The 2010 general elections, dissent, and political mobilization in urban Ethiopia. African Affairs, 113(452), 409–430. doi:10.1093/afraf/adu029

Dirbaba, B., & O’Donnell, P. (2012). The double talk of manipulative liberalism in Ethiopia: An example of new strategies of media repression. African Communication Research, 5(3), 283–312.

Dripps, D. (2013). “Dearest property”: Digital evidence and the history of private “papers” as special objects of search and seizure. Journal of Criminal Law and Criminology, 103(1), 49–109. Retrieved from https://scholarlycommons.law.northwestern.edu/jclc/vol103/iss1/2/

Eisenstadt, S. N. (1973). Traditional patrimonialism and modern neopatrimonialism. London: Sage Publications.

Erdmann, G., & Engel, U. (2007). Neopatrimonialism reconsidered: Critical review and elaboration of an elusive concept. Commonwealth & Comparative Politics, 45(1), 95–119. doi:10.1080/14662040601135813

Fana, G. (2014). Securitisation of development in Ethiopia: the discourse and politics of developmentalism. Review of African Political Economy, 41(1), S64–S74. doi:10.1080/03056244.2014.976191

Federal Democratic Republic of Ethiopia (FDRE). (1995). Proclamation of the Constitution of the Federal Democratic Republic of Ethiopia. Federal Negarit Gazeta.

Federal Democratic Republic of Ethiopia (FDRE). (2009). A Proclamation on anti-terrorism. Federal Negarit Gazeta. Retrieved from http://www.refworld.org/docid/4ba799d32.html

Fenton, N. (2016). The internet of radical politics and social change. In J. Curran, N. Fenton, & D. Freedman (Eds.), Misunderstanding the Internet (pp. 173–202). London: Routledge. doi:10.4324/9781315695624-6

Freedman, L. (1998). International security: Changing targets. Foreign Policy, 110, 48–64. doi:10.2307/1149276

Fuchs, C., & Trottier, D. (2017). Internet surveillance after Snowden: A critical empirical study of computer experts’ attitudes on commercial and state surveillance of the internet and social media post-Edward Snowden. Journal of Information, Communication and Ethics in Society, 15(4), 412–444. doi:10.1108/jices-01-2016-0004

Gashaw, S. (1993). Nationalism and ethnic conflict in Ethiopia. In C. Young (Ed.), The rising tide of cultural pluralism: The nation state at bay? (pp. 138-157). Madison, Wisconsin: University of Wisconsin Press.

Gerschewski, J., & Dukalskis, A. (2018). How the internet can reinforce authoritarian regimes: The case of North Korea. Georgetown Journal of International Affairs, 19, 12–19. doi:10.1353/gia.2018.0002

Gasteyger, C. (1999). Old and new dimensions of international security. In K. Spillmann & A. Wenger (Eds.), Towards the 21st Century: Trends in Post-Cold War International Security Policy (Vol. 4, pp. 69–108). Bern: Peter Lang.

Grigoryan, A. H. (2013). A model for anocracy. Journal of Income Distribution, 22(1), 3–24. Retrieved from https://ideas.repec.org/a/jid/journl/y2013v22i1p3-24.html

Grinberg, D. (2017). Chilling developments: Digital access, surveillance, and the authoritarian dilemma in Ethiopia. Surveillance & Society, 15(3–4), 432–438. doi:10.24908/ss.v15i3/4.6623

Gudina, M. (2007). Ethnicity, democratisation and decentralization in Ethiopia: The case of Oromia. Eastern Africa Social Science Research Review, 23 (1), 81-106. doi:10.1353/eas.2007.0000

Gudina, M. (2011). Elections and democratization in Ethiopia, 1991–2010. Journal of Eastern African Studies, 5(4), 664–680. doi:10.1080/17531055.2011.642524

Habtu, A. (2003). Ethnic federalism in Ethiopia: Background, present conditions and future prospects. Second EAF International Symposium on Contemporary Development Issues in Ethiopia. Addis Ababa, Ethiopia.

Hamzawy, A. (2017). Legislating authoritarianism: Egypt’s new era of repression (Paper). Washington, DC: Carnegie Endowment for International Peace. Retrieved from https://carnegieendowment.org/2017/03/16/legislating-authoritarianism-egypt-s-new-era-of-repression-pub-68285

Heinlein, P. (2012, January 8). Ethiopian politicians on trial for terrorism. Voice of America. Retrieved from https://www.voanews.com/a/ethiopian-politicians-on-trial-for-terrorism-136960163/159430.html

Hellmeier, S. (2016). The dictator’s digital toolkit: Explaining variation in internet filtering in authoritarian regimes. Politics & Policy, 44(6), 1158–1191. doi:10.1111/polp.12189

Horne, F., & Wong, C. (2014). “They know everything we do”: Telecom and internet surveillance in Ethiopia (Report). New York: Human Rights Watch. Retrieved from https://www.hrw.org/report/2014/03/25/they-know-everything-we-do/telecom-and-internet-surveillance-ethiopia

Human Rights Watch. (2013a). Ethiopia: Terrorism law decimates media. Human Rights Watch. Retrieved from http://www.hrw.org/news/2013/05/03/ethiopia-terrorism-law-decimates-media

Human Rights Watch. (2013b). “They want a confession” Torture and ill-treatment in Ethiopia’s Maekelawi police station. Human Rights Watch. Retrieved from https://www.hrw.org/report/2013/10/17/they-want-confession/torture-and-ill-treatment-ethiopias-maekelawi-police-station

International Telecommunication Union (ITU). (2018a). Country ICT data: Percentage of individuals using the internet. Retrieved from https://www.itu.int/en/ITU-D/Statistics/Pages/stat/default.aspx

International Telecommunication Union (ITU). (2018b). Ethio Telecom. Retrieved October 1, 2018, from https://telecomworld.itu.int/exhibitor-sponsor-list/ethio-telecom/

Jacob, F. (2016). The role of m-pesa in Kenya’s economic and political development. In M. M. Koster, M. M. Kithinji, & J. Rotich (Eds.), Kenya after 50. African histories and modernities. New York: Palgrave Macmillan. doi:10.1057/9781137574633_6

Kalathil, S., & Boas, T. (2003). Open networks, closed regimes: The impact of the internet on authoritarian rule. Washington D.C.: Carnegie Endowment for International Peace. doi:10.5210/fm.v8i1.1025

Karatzogianni, A., & Robinson, A. (2017). Schizorevolutions versus microfascisms: The fear of anarchy in state securitisation. Journal of International Political Theory, 13(3), 282–295. doi:10.1177/1755088217718570

Kelsall, T. (2013). Business, politics, and the state in Africa: Challenging the orthodoxies on growth and transformation. London: Zed Books.

Kerr, J. A. (2018). Information, security, and authoritarian stability: Internet policy diffusion and coordination in the former Soviet region. International Journal of Communication, 12, 3814–3834. Retrieved from https://ijoc.org/index.php/ijoc/article/view/8542

Kidane, M. (2001). Ethiopia’s ethnic-based federalism: 10 years after. African Issues, 29(1–2), 20–25. doi:10.2307/1167105

Kibret, Z. (2017). The terrorism of ‘counterterrorism’: The use and abuse of anti-terrorism law, the case of Ethiopia. European Scientific Journal, 13(13), 504-539. doi:10.19044/esj.2017.v13n13p504

Kotz, D., Gunter, C. A., Kumar, S., & Weiner, J. P. (2016). Privacy and security in mobile health: A research agenda. Computer, 49(6), 22–30. doi:10.1109/MC.2016.185

Loriaux, M. (1999). The French development state as a myth and moral ambition. In M. Woo-Cumings (Ed.), The developmental state (pp. 235–275). New York: Cornell University Press.

Lyons, T. (2016). From victorious rebels to strong authoritarian parties: prospects for post-war democratization. Democratization, 23(6), 1026–1041. doi:10.1080/13510347.2016.1168404

Maasho, A. (2018, May 30). Ethiopian government and opposition start talks on amending anti-terrorism law. Reuters. Retrieved from https://uk.reuters.com/article/uk-ethiopia-politics/ethiopian-government-and-opposition-start-talks-on-amending-anti-terrorism-law-idUKKCN1IV1RL

MacKinnon, R. (2011). Liberation technology: China’s “networked authoritarianism.” Journal of Democracy, 22(2), 32–46. doi:10.1353/jod.2011.0033

Marquis-Boire, M., Marczak, B., Guarnieri, C., & Scott-Railton, J. (2013, March 13). You only click twice: FinFisher’s global proliferation. The Citizen Lab. Retrieved July 8, 2016, from https://citizenlab.org/2013/03/you-only-click-twice-finfishers-global-proliferation-2/

Matfess, H. (2015). Rwanda and Ethiopia: Developmental authoritarianism and the new politics of African strong men. African Studies Review, 58(2), 181–204. doi:10.1017/asr.2015.43

Mehretu, A. (2012). Ethnic federalism and its potential to dismember the Ethiopian state. Progress in Development Studies, 12(2–3), 113–133. doi:10.1177/146499341101200303

Mkandawire, T. (2001). Thinking about developmental states in Africa. Cambridge Journal of Economics, 25(3), 289–314. doi:10.1093/cje/25.3.289

Morozov, E. (2013, March 23). Imprisoned by innovation. The New York Times. Retrieved from http://www.nytimes.com/2013/03/24/opinion/sunday/morozov-imprisoned-by-innovation.html?_r=0

Nakashima, E. (2016, February 17). Apple vows to resist FBI demand to crack iPhone linked to San Bernardino attacks. The Washington Post. Retrieved from https://www.washingtonpost.com/world/national-security/us-wants-apple-to-help-unlock-iphone-used-by-san-bernardino-shooter/2016/02/16/69b903ee-d4d9-11e5-9823-02b905009f99_story.html

Nunes Lopes Espiñeira Lemos, A., & Pasquali Kurtz, L. (2018). Sovereignty over personal data in Brazil: State jurisdiction facing denial of access to users’ data by online platform Whatsapp. Presented at the GigaNet: Global Internet Governance Academic Network, Annual Symposium 2017, SSRN. Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3107293

Pen International. (2011, September 23). Ethiopia: Two more journalists arrested under antiterrorism legislation; fears of torture. Pen International. Retrieved February 21, 2019, from https://pen-international.org/news/ethiopia-two-more-journalists-arrested-under-antiterrorism-legislation-fears-of-torture

Pohle, J., & Van Audenhove, L. (2017). Post-Snowden internet policy: Between public outrage, resistance and policy change. Media and Communication, 5(1), 1–6. doi:10.17645/mac.v5i1.932

Powell, J. D. (1970). Peasant societies and clientelist politics. American Political Science Review, 64(2), 411–425. doi:10.2307/1953841

Rice, X., & Goldenberg, S. (2007, January 13). How US forged an alliance with Ethiopia over invasion. The Guardian. Retrieved from https://www.theguardian.com/world/2007/jan/13/alqaida.usa

Roach, K. (2012). The criminal law and its less restrained alternatives. In V. V. Ramraj, M. Hor, K. Roach, & G. Williams (Eds.), Global anti-terrorism law and policy (pp. 91-121). Cambridge: Cambridge University Press. doi:10.1017/cbo9781139043793.007

Roach, K. (2015). Comparative counter-terrorism law comes of age. In K. Roach (Ed.), Comparative counter terrorism law (pp. 1-48). Cambridge: Cambridge University Press. doi:10.1017/cbo9781107298002.001

Roach, K., Hor, M., Ramraj, V. V., & Williams, G. (2012). Introduction. In V. Ramraj, M. Hor, K. Roach, & G. Williams (Eds.), Global anti-terrorism law and policy (2nd ed., pp. 1-16). Cambridge: Cambridge University Press. doi:10.1017/cbo9781139043793.001

Sanger, D., & Schmitt, E. (2015, February 1). Hackers use old lure on web to help Syrian government. The New York Times. Retrieved from https://www.nytimes.com/2015/02/02/world/middleeast/hackers-use-old-web-lure-to-aid-assad.html

Schneiderman, D., & Cossman, B. (2002). Political association and the Anti-Terrorism Bill. In R. J. Daniels, Macklem, P, & K. Roach (Eds.), The security of freedom: Essays on Canada's Anti-Terrorism Bill (pp. 173-194). Toronto: University of Toronto Press. doi:10.3138/9781442682337-012

Shirky, C. (2008). Here comes everybody: The power of organizing without organizations. New York: Penguin Press.

Sekyere, P., & Asare, B. (2016). An examination of Ethiopia's anti-terrorism proclamation on fundamental human rights. European Scientific Journal, 12(1), 351-371. doi:10.19044/esj.2016.v12n1p351

Setty, S. N. (2012). The United States. In K. Roach (Ed.), Comparative counter-terrorism law (pp. 49-77). Cambridge: Cambridge University Press. doi:10.1017/cbo9781107298002.002

Sreberny, A. (2014). Violence against women journalists. In A. V. Montiel (Ed.), Media and gender: a scholarly agenda for the Global Alliance on Media and Gender (pp. 35–43). Paris: UNESCO.

Sutherland, E. (2018, June 22). Digital privacy in Africa: Cybersecurity, data protection & surveillance. doi:10.2139/ssrn.3201310

Tronvoll, K. (2008). Human rights violations in federal Ethiopia: When ethnic identity is a political stigma. International Journal on Minority and Group Rights, 15(1), 49–79. doi:10.1163/138548708x272528

Turse, N. (2017, September 13). How the NSA built a secret surveillance network for Ethiopia. The Intercept. Retrieved October 14, 2017, from https://theintercept.com/2017/09/13/nsa-ethiopia-surveillance-human-rights/

van de Walle, N. (1994). Neopatrimonialism and democracy in Africa: with an illustration from Cameroon. In J. Widner (Ed.), Economic change and political liberalization in sub-Saharan Africa (pp. 129-157). Baltimore: Johns Hopkins University Press.

Vestal, T. M. (1999). Ethiopia: A post-Cold War African state. Westport: Praeger.

Villasenor, J. (2011). Recording everything: Digital storage as an enabler of authoritarian governments. Washington, DC: Brookings Institute.

Weingrod, A. (1968). Patrons, patronage, and political parties. Comparative Studies in Society and History, 10(4), 377–400. doi:10.1017/s0010417500005004

Weis, T. (2016). Vanguard capitalism: party, state, and market in the EPRDF’s Ethiopia (PhD thesis, University of Oxford). Retrieved from https://ora.ox.ac.uk/objects/uuid:c4c9ae33-0b5d-4fd6-b3f5-d02d5d2c7e38

Workneh, T. (2015). Digital cleansing? A look into state-sponsored policing of Ethiopian networked communities. African Journalism Studies, 36(4), 102-124. doi:10.1080/23743670.2015.1119493

Workneh, T. (2018). State monopoly of telecommunications in Ethiopia: origins, debates, and the way forward. Review of African Political Economy. doi:10.1080/03056244.2018.1531390

Yang, F. (2014). Rethinking China’s internet censorship: The practice of recoding and the politics of visibility. New Media & Society, 18(7), 1364–1381. doi:10.1177/1461444814555951

Footnotes

1. See Bach (2011) for a discussion on EPRDF’s theory and practice of abyotawi dimokrasi.

2. EPRDF’s political discourse of “developmentalism” is rooted in the theory of the developmental state. The developmental state, according to Loriaux (1999), is “an embodiment of a normative or moral ambition to use the interventionist power of the state to guide investment in a way that promotes a certain solidaristic vision of national economy” (p. 24). The role of the elite developmental state model, Mkandawire (2001) contends, is “to establish an ‘ideological hegemony,’ so that its development project becomes, in a Gramscian sense, a ‘hegemonic’ project to which key actors in the nation adhere voluntarily” (p. 290). While the EPRDF maintained the notion that development trumps all other priorities, critics argue that the party’s adoption of developmental state theory as a political program is nothing more than an attempt to institutionalize the rent-seeking interests of the ruling elite (for example, see Berhanu, 2013).

3. For example, Kibret (2017) has identified more than 120 cases in which the Federal Public Prosecutor charged nearly one thousand individuals by citing the provisions of the EATP. Most of these individuals were charged for alleged affiliation with domestic rebel groups proscribed by the Ethiopian parliament as “terrorist organizations” in June 2011. These rebel groups include Ginbot 7 for Justice, Freedom and Democracy (Ginbot 7), the Ogaden National Liberation Front (ONLF), and the Oromo Liberation Front (OLF). Of the 985 individuals prosecuted under the EATP between September 2011 and March 2017, a third of all the charges involve civilians who have “nothing to do with either terrorism or the rebel groups” (p. 524).

4. An anocracy represents a political system which is neither fully democratic nor fully autocratic, often being vulnerable to political instability. See Grigoryan (2013).

5. See Nunes Lopes Espiñeira Lemos, Pasquali, & Kurtz (2018) and Sutherland (2018) on global information extraction practices emanating from state-sponsored surveillance. In the Ethiopian context, information extraction usually involves the seizure of a digital apparatus by government officials to obtain information, although other forms of the practice, including surveillance, are also common. For example, see Horne & Wong (2014).

6. For example, in Soliana Shimeles et al. v the Federal Public Prosecutor involving ten bloggers and journalists as defendants, the Federal Public Prosecutor charged the defendants under Article 3 of the Anti-Terrorism proclamation accusing them of “serious risk to the safety or health of the public or section of the public” and “serious damage to property”. The prosecutor presented, as part of its evidence, several pages of transcripts of phone conversations belonging to the defendants.

7. See note 3 for more on Ginbot 7.

8. Article 14(4) of EATP states: “The National Intelligence and Security Services or the Police may gather information by surveillance in order to prevent and control acts of terrorism” (Federal Negarit Gazeta, 2009, p. 4834).

9. For example, the EATP was used to convict prominent Ethiopian media practitioners including Eskinder Nega, a journalist and blogger who received the 2012 PEN Freedom to Write Award, to serve a sentence of 18 years in prison (Dirbaba & O’Donnell, 2012). Another convicted journalist is 2012 Hellman-Hammett Award winner Woubshet Taye, who was sentenced to serve a 14-year sentence under the Anti-Terrorism Proclamation (Heinlein, 2012). Other journalists and media practitioners who faced charges under the anti-terrorism proclamation include Mastewal Birhanu, Yusuf Getachew and Solomon Kebede (Human Rights Watch, 2013a).

10. Article 29(2) of the Ethiopian Constitution states: “Everyone shall have the right to freedom of expression without interference. This right shall include freedom to seek, receive and impart information and ideas of all kinds, regardless of frontiers, either orally, in writing or in print, in the form of art, or through other media of his choice” (FDRE, 1995, p. 9).

11. Zone 9 bloggers founding member, personal correspondence, 15 October 2017.

12. See Kibret (2017) for a complete list of cases involving the Ethiopian Anti-Terrorism Proclamation.

13. Ethiopia’s internet penetration, though improving, lags behind that of several other African countries. For details, see ITU (2018a).

14. In July 2018, the Ethiopian House of Peoples’ Representatives, upon the Cabinet of Ministers’ recommendation, removed Ginbot 7, OLF, and ONLF from its terror list. In January 2019, the FDRE government announced it pardoned 13,000 accused of treason or terrorism.

Operationalising communication rights: the case of a “digital welfare state”


This paper is part of Practicing rights and values in internet policy around the world, a special issue of Internet Policy Review guest-edited by Aphra Kerr, Francesca Musiani, and Julia Pohle.

Introduction

The rampant spread of disinformation and hate speech online, the so-called surveillance capitalism of the internet giants and related violations of privacy (Zuboff, 2019), persisting digital divides (International Telecommunication Union, 2018), and inequalities created by algorithms (Eubanks, 2018): these issues and many other current internet-related phenomena challenge us as individuals and as members of society. These challenges have sparked renewed discussion about the idea and ideal of citizens’ communication rights.

Either as a legal approach or as a moral discursive strategy, the rights-based approach is typically presented in a general sense as a counterforce that protects individuals against illegitimate forms of power, including both state and corporate domination (Horten, 2016). The notion of communication rights can refer not only to existing legally binding norms but also, more broadly, to normative principles against which real-world developments are assessed. However, there is no consensus on what kinds of institutions are needed to uphold and enforce communication rights in the non-territorial, regulation-averse and rapidly changing media environment. Besides the actions of states, the realisation of communication rights is now increasingly impacted by the actions of global multinational corporations, activists, and users themselves.

While much of the academic debate has focused on transnational attempts to codify and promote communication rights at the global level, in this article, we examined a national approach to communication rights. Despite the obvious transnational nature of the challenges, we argued for the continued relevance of analysing communication rights in the context of national media systems and policy traditions. We provided a model to analyse communication rights in a framework that has its foundation in a specific normative, but also empirically grounded understanding of the role of communication in a democracy. In addition, we discussed the relevance of single country analyses to global or regional considerations of rights-based governance.

Communication rights and the case of Finland

The concept of communication rights has a varied history, starting with the attempts of the Global South in the 1970s to counter the Westernisation of communication (Hamelink, 1994; McIver et al., 2003). The connections between human rights and media policy have also been addressed, especially in international contexts and in the United Nations (Jørgensen, 2013; Mansell & Nordenstreng, 2006). Communication rights have also been invoked in more specific contexts to promote, for instance, the rights of disabled persons and cultural and sexual minorities in today’s communication environment (Padovani & Calabrese, 2014; McLeod, 2018). Currently, these rights are most often invoked in civil society manifestos and international declarations focused on digital or internet-related rights (Karppinen, 2017; Redeker, Gill, & Gasser, 2018).

Today, heated policy debates have surrounded the role of global platforms in realising or violating principles, such as freedom of expression or privacy, which are already stipulated in the United Nations Universal Declaration of Human Rights (MacKinnon, 2013; Zuboff, 2019). Various groups have made efforts to monitor and influence the global policy landscape, including the United Nations, its Special Rapporteurs, and the Internet Governance Forum; voluntary multi-stakeholder coalitions, such as the Global Network Initiative; and civil society actors, such as the Electronic Frontier Foundation, Freedom House, or Ranking Digital Rights (MacKinnon et al., 2016). At the same time, nation states are still powerful actors whose choices can make a difference in the realisation of rights (Flew, Iosifides, & Steemers, 2016). This influence is made evident through monitoring efforts that track internet freedom and the increased efforts by national governments to control citizens’ data and internet access (Shahbaz, 2018).

Communication rights in Finland are particularly worth exploring and analysing. Although Finnish communication policy solutions are now intertwined with broader European Union initiatives, the country has an idiosyncratic historical legacy in communication policy. Year after year, it remains one of the top countries in press freedom rankings (Reporters Without Borders, 2018). In the 1990s, Finland was a frontrunner in shaping information society policies, gaining notice for technological development and global competitiveness, especially in the mobile communications sector (Castells & Himanen, 2002). Finland was also among the first nations to make affordable broadband access a legal right (Nieminen, 2013). On the EU Digital Economy and Society Index, Finland scores high in almost all categories, partly due to its forward-looking strategies for artificial intelligence and extensive, highly developed digital public services (Ministry of Finance, 2018). According to the think tank Center for Data Innovation, Finland’s availability of official information is the best in the EU (Wallace & Castro, 2017). Not only are Finns among the most frequent users of the internet in the European Union, they also report feeling well-informed about risks of cybercrime and trust public authorities with their online data more than citizens of any other EU country (European Union, 2017, pp. 58-60).

While national competitiveness in the global marketplace has informed many of Finland’s policy approaches (Halme et al., 2014), they also reflect the Nordic tradition of the so-called “epistemic commons”, that is, the ideals of knowledge and culture as a joint and shared domain, free of restrictions (Nieminen, 2014). Aspects such as civic education, universal literacy, and mass media are at the heart of this ideal (Nieminen, 2014). This ideal has been central to what Syvertsen, Enli, Mjøs, and Moe (2014) called the “Nordic Media Welfare State”: Nordic countries are characterised by universal media and communications services, strong and institutionalised editorial freedom, a cultural policy for the media, and policy solutions that are consensual and durable, based on consultation with both public and private stakeholders.

Operationalising rights

How does Finland, a country with such unique policy traditions, fare as a “Digital Welfare State”? In this article, we employed a basic model that divides the notion of communication rights into four distinct operational categories (Nieminen, 2010; 2016; 2019; Horowitz & Nieminen, 2016). These divisions differ from other recent categorisations (Couldry et al., 2016; Goggin et al., 2017) in that they specifically reflect the ideal of the epistemic commons of shared knowledge and culture. Communication rights, then, should preserve the epistemic commons and remove restrictions on it. We understand the following rights as central to those tasks:

  1. Access: citizens’ equal access to information, orientation, entertainment, and other contents serving their rights.
  2. Availability: equal availability of various types of content (information, orientation, entertainment, or other) for citizens.
  3. Dialogical rights: the existence of public spaces that allow citizens to publicly share information, experiences, views, and opinions on common matters.
  4. Privacy: protection of every citizen’s private life from unwanted publicity, unless such exposure is clearly in the public interest or if the person decides to expose it to the public, as well as protection of personal data (processing, by authorities or businesses alike, must have legal grounds and abide by principles, such as data minimisation and purpose limitation, while individuals’ rights must be safeguarded).

To discuss each category of rights, we deployed them at three levels: the Finnish regulatory-normative framework; implementation by the public sector and by commercial media and communications technology providers; and the activity of citizen-consumers. This multi-level analysis aims to depict the complex nature of the rights and their often contested and contradictory realisations at different levels. For each category, we also highlighted one example: for access, telecommunications; for availability, extended collective licencing in the context of online video recording services; for dialogical rights, e-participation; and for privacy, monitoring communications metadata within organisations.

Access

Access as a communication right well illustrates the development of media forms, the expansion of the Finnish media ecosystem, and the increasing complexity of rights as realised in regulatory decisions by the public sector, commercial media, and communications technology providers. After 100 years of independence, Finland is still short of domestic capital and heavily dependent on exports, which makes it vulnerable to economic downturns (OECD, 2018). Interestingly, despite changes to national borders, policies, and technologies over time, it is these geopolitical, demographic, and socioeconomic conditions that have remained relatively unchanged and, in turn, have shaped most of the current challenges to securing access to information and media.

While the right to access in Finland also relates to institutions such as libraries and schools, telecommunications offers perhaps the most illustrative case, and it serves as the operationalisation here. Telephone services were originally introduced in Finland by the Russian Empire; however, the Finnish Senate managed to obtain an imperial mandate for licensing private telephone operations. As a result, the Finnish telephone system formed a competitive market based on several regional private companies. There was no direct state involvement in the telecommunications business before Finland became independent (Kuusela, 2007).

The licenses of the private telephone operators required them to arrange the telephone services in their area to meet customers’ needs at reasonable and equal prices. In practice, every company had a universal service obligation (USO) in its licensing area. However, as the recession of the 1930s halted the development of private telephone companies in the most sparsely inhabited areas, the state of Finland had to step in. The national Post and Telecommunication service eventually played a pivotal role in providing telephone services to the most northern and eastern parts of Finland (Moisala, Rahko, & Turpeinen, 1977).

Access to a fixed telephone network improved gradually until the early 1990s, when about 95% of households had at least one telephone in use. However, the number of mobile phone subscriptions surpassed the number of fixed-line telephone subscriptions as early as 1999, and an increasing share of households gave up the traditional telephone completely. As a substitute for the fixed telephone, in the late 1990s, mobile phones were seen in Finland as the best way to bring communication “into every pocket” (Silberman, 1999). Contrary to the ideal of the epistemic commons, the official government broadband strategy was based much more on market-led development and mobile networks than, for example, in Sweden, where the government made more public investments in building fixed fibre-optic connections (Eskelinen, Frank, & Hirvonen, 2008). Finland also gave indirect public subsidies to mobile broadband networks (Haaparanta & Puhakka, 2002). While the rest of Europe had started to auction their mobile spectrum (Sims, Youell, & Womersley, 2015), in Finland the operators received all mobile frequencies for free until 2013.

European regulation of USOs in telecommunications has been designed to set a relatively modest minimum level of telephone services at an affordable price, one that could be implemented in a traditional fixed telephone network. Any extensions for mobile or broadband services have been deliberately omitted (Wavre, 2018). However, the universal services directive (2002/22/EC) lets the member states use both fixed and wireless mobile network solutions for USO provision. In addition, while the directive suggests that users should be able to access the internet via the USO connection, it does not set any minimum bitrate for connections in the common market.

Finland amended its national legislation in 2007 to let the telecom operators meet their universal service obligations using mobile networks. The results were dramatic, as operators quickly replaced large parts of the fixed telephone network with a mobile network, especially in the eastern and northern parts of Finland. Today, less than 10% of households have fixed telephones. At the same time, there are almost 10 million mobile subscriptions in use in a country with 5.5 million inhabitants. Less than 1% of households do not have any mobile phones at all (Statistics Finland, 2017). Thanks to the 3G networks using frequencies the operators had obtained for free, Finland became a pioneer in making affordable broadband a legal right. Reasonably priced access to broadband internet from home has been part of the universal service obligation in Finland since 2010. However, the USO broadband speed requirement (2 Mbps) is rather modest by contemporary standards.

It is obvious that since the 1990s, Finland has not systematically addressed access as a basic right, but rather as a tool to reach political and economic goals. Although about 90% of households already have internet access, only 51% of them have access to ultra-fast fixed connections. Almost one-third of Finnish households are totally dependent on mobile broadband, which is the highest share in the EU. To guarantee access to 4G mobile broadband throughout the country, the Finnish government licensed two operators, Finnish DNA and Swedish Telia, to build and operate a new, shared mobile broadband network in the northern and eastern half of Finland. Despite recent government efforts to also develop ultra-fast fixed broadband, Finland is currently lagging behind other EU countries. A report monitoring the EU initiative “A Digital Agenda for Europe” (European Court of Auditors, 2018) found that Finland ranks only 22nd in terms of progress towards universal coverage with fast broadband (> 30 Mbps) by 2020. In contrast, another Nordic Media Welfare State, Sweden, with its ongoing investments in citizens’ access to fast broadband, expects all households to have access to at least 100 Mbps by 2020 (European Court of Auditors, 2018).

Availability

As a communication right, availability is the counterpart to access, as well as to dialogical rights and privacy. Availability refers to the abundance, plurality, and diversity of factual and cultural content to which citizens may equally expose themselves. Importantly, despite an apparent abundance of available content in the current media landscape, digitalisation does not translate into limitless availability; rather, it implies new restrictions and conditions as well as challenges stemming from disinformation. Availability both overcomes many traditional boundaries and faces new ones, many pertaining to ownership and control over content. For instance, public service broadcasting no longer self-evidently caters for availability, and media concentration may affect availability. In Finland, one specific question of availability and communication pertains to linguistic rights. Finland has two official languages, which implies additional demands for availability both in Finnish and in Swedish, alongside Sami and other minority languages. These are guaranteed in a special Language Act, but are also included in several other laws, including the law on public service broadcasting.

Here, availability is examined primarily through overall trends in free speech and access to information in Finland, as well as from the perspective of copyright and paywalls in particular. Availability is framed and regulated from the international and supranational level (e.g., the European Union) down to the national level. Availability at the national level relies on the constitutionally safeguarded freedom of expression and access to information as well as fundamental cultural and educational rights. Freedom of the press and publicity date back to 18th-century Sweden-Finland. After periods of censorship and “Finlandization”, the basic tenet has been a ban on prior restraint, notwithstanding measures required to protect children in the audio-visual field (Neuvonen, 2005; 2018). Later, Finland became a contracting party to the European Convention on Human Rights (ECHR) in 1989, linking Finland closely to the European tradition. However, in Finland, privacy and freedom of expression were long balanced in favour of the former, departing somewhat from ECHR standards and affecting media output (Tiilikka, 2007).

Regarding transparency and publicity in the public sector, research has shown that Finnish municipalities, in general, are not truly active in catering to citizens’ access-to-information requests, and there is inequality across the country (Koski & Kuutti, 2016). This is in contrast to the ideals of the Nordic Welfare State (Syvertsen et al., 2014). In response, the civil society group Open Knowledge Finland has created a website that publishes information requests and guides people in submitting their own.

The digital environment is conducive to restrictions and requirements stemming from copyright and personal data protection, both of which affect availability. The “right to be forgotten”, for example, enables individual requests to remove links from search results, thus affecting searchability (Alén-Savikko, 2015). To overcome a particular copyright challenge, new provisions were tailored in Finland to enable online video recording services, thereby allowing people to access TV broadcasts at more convenient times in a manner that transcends traditional private copying practices. The Finnish solution rests partly on the Nordic approach to so-called extended collective licensing (ECL), which was originally developed to serve the public interest in the field of broadcasting. Collective management organisations are able to license such use on behalf of their members with an extended effect (i.e., they are regarded as representative of non-members as well), while TV companies license their own rights (Alén-Savikko & Knapstad, 2019; Alén-Savikko, 2016).

Alongside legal norms, different business models frame and construct the way availability presents itself to citizens. Currently, pay-per-use models and paywalls feature in the digital media sector, although pay-TV development in particular has long been moderate in Finland (Ministry of Transport and Communications, 2014a). With new business models, availability transforms into conditional access, while equal opportunity turns into inequality based on financial means. From the perspective of individual members of the public, the one-sided emphasis on consumer status is in direct opposition to the ideals of the epistemic commons and the Nordic Media Welfare State.

Dialogical rights

Access and availability are prerequisites for dialogical rights. These rights can be operationalised as citizens’ possibilities and realised activities to engage in dialogue that fosters democratic decision-making. Digital technology offers new opportunities for participation: in dialogues between citizens and the government; in dialogues with and via legacy media; and in direct, mediated peer-to-peer communication that can amount to civic engagement.

Finland has a long legacy of providing equal opportunities for participation, for instance as the first country in Europe to establish universal suffrage in 1906, when still under the Russian Empire. After reaching independence in 1917, Finland implemented its constitution in 1919. The constitution secures freedom of expression, while also stipulating that public authorities shall promote opportunities for the individual to participate in societal activity and to influence the decisions that concern him or her.

Currently, a dozen laws support dialogical rights, ranging from the Election Act and Non-Discrimination Act to the Act on Libraries. Several of them address media organisations, including the Finnish Freedom of Expression Act (FEA) that safeguards individuals’ right to report and make a complaint about media content and the Act on Yleisradio (public broadcasting) that stipulates the organization’s role in supporting democratic participation.

Finland seems to do particularly well in providing internet-based opportunities for direct dialogue between citizens and their government. These efforts began, as elsewhere in Europe, in the 1990s (Pelkonen, 2004). The government launched a public engagement programme, followed in the subsequent decade by two other participation-focused programmes (Wilhelmsson, 2017). While Estonia is the forerunner in all types of electronic public services, Finland excels in the Nordic model of combining e-governance and e-participation initiatives: it currently features a number of online portals for gathering citizens’ opinions and initiatives at both the national and municipal levels (Wilhelmsson, 2017).

Still, increasing inequality in the capability for political participation is one of the main concerns in the National Action Plan 2017–2019 (Ministry of Justice, 2017). The country report on the Sustainable Governance Indicators notes that the weak spot for Finland is the public’s evaluative and participatory competencies (Anckar et al., 2018). Some analyses posit that Finnish civil society is simply not very open to diverse debates, in contrast to the culture of public dialogue in Sweden (Pulkkinen, 1996). While Finns are avid news followers, trust the news, and are more likely to pay for online news than news consumers in most countries (Reunanen, 2018), participatory possibilities do not entice them very much. Social media are not widely used for political participation, even by young people (Statistics Finland, 2017), and, for example, Twitter remains a forum for dialogues between the political and media elite (Eloranta & Isotalus, 2016).

The most successful Finnish e-participation initiative is based on a 2012 amendment to the constitution that makes it possible for citizens to submit initiatives to the Parliament. One option for doing so is via a designated open source online portal. An initiative proceeds to Parliament if it collects at least 50,000 statements of support within six months. By 2019, the portal had accrued almost 1,000 proposals, 24 of which had proceeded to discussion in Parliament, and two related laws had been passed. Research shows, however, that many other digital public service portals still remain unknown to Finns (Wilhelmsson, 2017).

As Karlsson (2015) has posited in the case of Sweden, public and political dialogues online can be assessed by their intensity, quality, and inclusiveness. The Finnish case shows that digital solutions do not guarantee participation if they are not actively marketed to citizens, and if they do not entail a direct link to decision-making (Wilhelmsson, 2017). While the Finnish portal for citizen initiatives has mobilized some marginalized groups, the case suggests that e-participation can also alienate others, for example older citizens (Christensen et al., 2017). Valuing each and every voice as well as prioritising ways to do so over economic or political priorities (Couldry, 2010) or the need to govern effectively (Nousiainen, 2016) could be seen as central to dialogical rights between the citizen and those in the government and public administration.

Privacy

Privacy brings together all the main strands of changes caused by digitalisation: changes in media systems from mass to multimedia; technological advancements; regulatory challenges of converging sectors; and shifting sociocultural norms and practices. It also highlights a shrinking, rather than expanding, space for the right to privacy.

Recent technical developments and the increased surveillance capacities of both corporations and nation states have raised concerns regarding the fundamental right to privacy. While the trends are arguably global, there is a distinctly national logic to privacy rights. This logic coexists with international legal instruments. In the Nordic case, the strong privacy rules exist alongside access to information laws that require the public disclosure of data that would be regarded as intimate in many parts of the world, such as tax records. Curiously, a few years ago, the majority of Finns did not even consider their name, home address, fingerprints, or mobile phone numbers to be personal information (European Union, 2011), and they are still among the most trusting citizens in the EU when it comes to the use of their digital data by authorities (European Union, 2017).

In Finland, the right to privacy is a fundamental constitutional right and includes the right to be left alone, a person’s honour and dignity, the physical integrity of a person, the confidentiality of communications, the protection of personal data, and the right to be secure in one’s home (Neuvonen, 2014). The present slander and defamation laws date back to Finland’s first criminal code from 1889, when Finland was still a Grand Duchy of the Russian Empire. In 1919, the Finnish constitution provided for the confidentiality of communications by mail, telegraph, and telephone, as well as the right to be secure in one’s home—important rights for citizens in a country that had lived under the watchful eye of the Russian security services.

In the sphere of privacy protection, new laws are usually preceded by the threat of new technology (Tene & Polonetsky, 2013); however, in Finland, this was not the case. Rather, the need for new laws reflected a change in Finland’s journalistic culture that had previously respected the private lives of politicians, business leaders, and celebrities. The amendments were called “Lex Hymy” (Act 908/1974) after one of Finland’s most popular monthlies had evolved into a magazine increasingly focused on scandals.

Many of the more recent rules on electronic communications and personal data result from international policies being codified into national legislation, most importantly the transposition of EU legislation into national law. What is fairly clear, however, is that the state has been seen as the guarantor of the right to privacy since even before Finland was a sovereign nation. The strong role of the state is consistent with the European social model and its increased focus on public service regulation (cf. Venturelli, 2002, p. 80). Nevertheless, the potential weakness of this model is that privacy rights seldom trump the public interest, and public uses of personal data are not as strictly regulated as private ones.

Finland has also introduced legislation that weakens the relatively strong right to privacy. After transposing the ePrivacy Directive guaranteeing the confidentiality of electronic communications into national law, the Finnish Government proposed an amending act that granted businesses and organisations the right to monitor communications metadata within their networks. The act was dubbed “Lex Nokia” after Finland’s leading newspaper published an article that alleged that the Finnish mobile giant had pressured politicians and officials to introduce the new law (Sajari, 2009). While it is difficult to assess to what degree Nokia influenced the contents of the legislation, it is clear that Nokia took the initiative and was officially involved in the legislative process (Jääsaari, 2012).

The Lex Nokia act demonstrates how the state’s public interest considerations might coincide with the economic interests of large corporations to the detriment of the right to privacy. Regardless, Finnish citizens remain more trusting of public authorities, health institutions, banks, and telecommunications companies than most of their European counterparts (European Commission, 2015). It remains to be seen whether this trust in authority will erode as more public and private actors aim to capitalise on the promises of big data. Nothing in recent Eurobarometer surveys (European Commission, 2018a, pp. 38–56; European Commission, 2018b) indicates that trust in public authorities is in crisis or in steep decline; the same cannot be said for trust in political institutions, which seems to decline by a few percentage points each year in various studies.

Discussion

The promotion of communication rights based on the ideal of the epistemic commons is institutionalised in a variety of ways in Finnish communication policy-making, ranging from traditional public service media arrangements to more recent broadband and open data initiatives. However, understood as equal and effective capabilities, communication rights and the related policy principles of the Nordic Media Welfare State have never been completely or uniformly followed in the Nordic countries.

The analysis of the Finnish case highlights how the ideal of a “Digital Welfare State” falls short in several ways. Policies on access or privacy may focus on economic goals rather than rights. E-participation initiatives promoting dialogical rights do not automatically translate into a capacity or a desire to participate in decision-making. Arguably, the model employed in this article has been built on a specific understanding of which rights and stakeholders are needed to support the ideals of the epistemic commons and the Nordic Media Welfare State. That is why it focuses more on national specificities and less on the impact of supranational and international influences on the national situation. It is obvious that in the current media landscape, national features are challenged by a number of emergent forces, including not only technological transformations but also general trends of globalisation and the declining capacities of nation states to enforce public interest or rights-based policies (Horten, 2016).

Still, more subtle and local manifestations of global and market-driven trends are worth examining to understand different policy options and interpretations. At the national level, measurement tools and indicators for mapping and monitoring the state of communication rights have been developed and employed, targeting their various components, such as linguistic issues or accessibility. In Finland, this type of approach has been adopted in the field of media and communications policy (Ala-Fossi et al., 2018; Artemjeff & Lunabba, 2018; Ministry of Transport and Communications, 2014b). Recent academic efforts aiming at comparative outlooks (Couldry et al., 2016; Goggin et al., 2017) indicate that communication rights urgently call for a variety of conceptualisations and operationalisations to uncover similarities and differences between countries and regions. As Eubanks (2017) argued, we seem to be at a crossroads: despite our unparalleled capacities for communication, we are witnessing new forms of digitally enabled inequality, and we need to curb these inequalities now, if we want to counter them at all. Global policy efforts may well be needed, but we also need to understand their specific national and supranational reiterations to counter these and other inequalities regarding citizens’ communication rights.

References

Ala-Fossi, M., Alén-Savikko, A., Grönlund, M., Haara, P., Hellman, H., Herkman, J.,…Mykkanen, M. (2018). Media- ja viestintäpolitiikan nykytila ja sen mittaaminen [Current state of media and communication policy and measurement]. Helsinki: Ministry of Transport and Communications. Retrieved February 21, 2019, from http://urn.fi/URN:ISBN:978-952-243-548-4

Alén-Savikko, A. (2015). Pois hakutuloksista, pois mielestä? [Out of the search results, out of mind?]. Lakimies, 113(3-4), 410–433. Retrieved from http://www.doria.fi/handle/10024/126796

Alén-Savikko, A. (2016). Copyright-proof network-based video recording services? An analysis of the Finnish solution. Javnost – The Public, 23(2), 204–219. doi:10.1080/13183222.2016.1162979

Alén-Savikko, A., & Knapstad, T. (2019). Extended collective licensing and online distribution – prospects for extending the Nordic solution to the digital realm. In T. Pihlajarinne, J. Vesala & O. Honkkila (Eds.), Online distribution of content in the EU (pp. 79–96). Cheltenham, UK & Northampton, MA: Edward Elgar. doi:10.4337/9781788119900.00012

Anckar, D., Kuitto, K., Oberst, C. & Jahn, D. (2018). Finland Report Sustainable Governance Indicators 2018. Retrieved March 14, 2018, from https://www.researchgate.net/publication/328214890_Finland_Report_-_Sustainable_Governance_Indicators_2018

Artemjeff, P., & Lunabba, V. (2018). Kielellisten oikeuksien seurantaindikaattorit [Indicators for monitoring linguistic rights] (No. 42/2018). Helsinki: Ministry of Justice Finland. Retrieved from http://julkaisut.valtioneuvosto.fi/bitstream/handle/10024/161087/OMSO_42_2018_Kielellisten_oikeuksien_seurantaindikaattorit.pdf

Castells, M., & Himanen, P. (2002). The information society and the welfare state: The Finnish model. Oxford: Oxford University Press.

Christensen, H., Jäske, M., Setälä, M. & Laitinen, E. (2017). The Finnish Citizens’ Initiative: Towards Inclusive Agenda-setting? Scandinavian Political Studies, 40(4), 411–433. doi:10.1111/1467-9477.12096

Couldry, N. (2010). Why voice matters: Culture and politics after neoliberalism. London: Sage.

Couldry, N. Rodriguez, C., Bolin G., Cohen, J. , Goggin, G., Kraidy, M. …Zhao Y. (2016). Chapter 13 – Media and communications. Retrieved November 14, 2018, from https://comment.ipsp.org/sites/default/files/pdf/chapter_13_-_media_and_communications_ipsp_commenting_platform.pdf

Eloranta, A., & Isotalus, P. (2016). Vaalikeskustelun aikainen livetwiittaaminen – kansalaiskeskustelun uusi muoto? [Live tweeting during election debates – a new form of civic discussion?]. In K. Grönlund & H. Wass (Eds.), Poliittisen osallistumisen eriytyminen: Eduskuntavaalitutkimus 2015 [Differentiation of political participation: The parliamentary election study 2015] (pp. 435–455). Helsinki: Oikeusministeriö.

Eskelinen, H., Frank, L., & Hirvonen, T. (2008). Does strategy matter? A comparison of broadband rollout policies in Finland and Sweden. Telecommunications Policy, 32(6), 412–421. doi:10.1016/j.telpol.2008.04.001

Eubanks, V. (2017). Automating inequality: How high-tech tools profile, police, and punish the poor. New York: St. Martin’s Press.

European Court of Auditors (2018). Broadband in the EU Member States: despite progress, not all the Europe 2020 targets will be met (Special report No. 12). Luxembourg: European Court of Auditors. Retrieved February 22, 2018, from http://publications.europa.eu/webpub/eca/special-reports/broadband-12-2018/en/

European Commission (2011). Attitudes on data protection and electronic identity in the European Union (Special Eurobarometer No. 359). Luxembourg: Publications Office of the European Union. Retrieved November 14, 2018, from http://ec.europa.eu/public_opinion/archives/ebs/ebs_359_en.pdf

European Commission (2015). Data protection (Special Eurobarometer No. 431). Luxembourg: Publications Office of the European Union. Retrieved November 14, 2018, from https://data.europa.eu/euodp/data/dataset/S2075_83_1_431_ENG

European Commission (2017). Europeans’ attitudes towards cyber security (Special Eurobarometer No. 464a). Luxembourg: Publications Office of the European Union. doi:10.2837/82418

European Commission (2018a). Public opinion in the European Union (Standard Eurobarometer No. 89). Luxembourg: Publications Office of the European Union. doi:10.2775/172445

European Commission (2018b). Kansallinen raportti. KansalaismielipideEuroopan unionissa: Suomi [National Report. Citizenship in the European Union: Finland] (Standard Eurobarometer, National Report No. 90). Luxembourg: Publications Office of the European Union. Retrieved from https://ec.europa.eu/finland/sites/finland/files/eb90_nat_fi_fi.pdf

Flew, T., Iosifides, P., & Steemers, J. (Eds.). (2016). Global media and national policies: The return of the state. Basingstoke: Palgrave. doi:10.1057/9781137493958

Goggin, G., Vromen, A., Weatherall, K. G., Martin, F., Webb, A., Sunman, L., & Bailo, F. (2017). Digital rights in Australia (Sydney Law School Research Paper No. 18/23). Sydney: University of Sydney. https://ses.library.usyd.edu.au/bitstream/2123/17587/7/USYDDigitalRightsAustraliareport.pdf

Haaparanta P., & Puhakka M. (2002). Johtolangatonta keskustelua: Tunne ja järki huutokauppakeskustelussa. Kansantaloudellinen Aikakauskirja, 98(3), 267–274.

Habermas, J. (2006). Political communication in media society: Does democracy still enjoy an epistemic dimension? The impact of normative theory on empirical research. Communication Theory, 16(4), 411–426. doi:10.1111/j.1468-2885.2006.00280.x

Halme, K., Lindy, I., Piirainen, K., Salminen, V., & White, J. (Eds.). (2014). Finland as a knowledge economy 2.0: Lessons on policies and governance (Report No. 86943). Washington, DC: World Bank Group. Retrieved from http://documents.worldbank.org/curated/en/418511468029361131/Finland-as-a-knowledge-economy-2-0-lessons-on-policies-and-governance

Hamelink, C. J. (1994). The politics of world communication. London: Sage.

Horowitz, M., & Nieminen, H. (2016). European public service media and communication rights. In G. F. Lowe & N. Yamamoto (Eds.), Crossing borders and boundaries in public service media: RIPE@2015 (pp. 95–106). Gothenburg: Nordicom. Available at https://gupea.ub.gu.se/bitstream/2077/44888/1/gupea_2077_44888_1.pdf#page=97

Horten, M. (2016). The closing of the net. Cambridge: Polity Press.

International Telecommunication Union. (2018). Measuring the information society report 2018 - Volume 1. Geneva: International Telecommunication Union. Retrieved from: https://www.itu.int/en/ITU-D/Statistics/Pages/publications/misr2018.aspx

Jääsaari, J. (2012). Suomalaisen viestintäpolitiikan normatiivinen kriisi: Esimerkkinä Lex Nokia [The normative crisis of Finnish communications policy: For example, Lex Nokia]. In K. Karppinen & J. Matikainen (Eds.), Julkisuus ja Demokratia [Publicity and Democracy] (pp. 265–291). Tampere: Vastapaino.

Jørgensen, R. F. (2013). Framing the net: The internet and human rights. Cheltenham, UK & Northhampton, MA: Edward Elgar.

Karlsson, M. (2015). Interactive, qualitative, and inclusive? Assessing the deliberative capacity of the political blogosphere. In K. Jezierska & L. Koczanowicz (Eds.), Democracy in dialogue, dialogue in democracy: The politics of dialogue in theory and practice (pp. 253–272). London & New York: Routledge.

Karppinen, K. (2017). Human rights and the digital. In H. Tumber & S. Waisbord (Eds.), Routledge companion to media and human rights (pp. 95–103). London & New York: Routledge. doi:10.4324/9781315619835-9

Koski, A., & Kuutti, H. (2016). Läpinäkyvyys kunnan toiminnassa – tietopyyntöihin Vastaaminen [Transparency in municipal action - responding to requests for information]. Helsinki: Kunnallisalan kehittämissäätiö [Municipal Development Foundation]. Retrieved November 14, 2018, from http://kaks.fi/wp-content/uploads/2016/11/Tutkimusjulkaisu-98_nettiin.pdf

Kuusela, V. (2007). Sentraalisantroista kännykkäkansaan - televiestinnän historia Suomessa tilastojen valossa [From the central antennas to mobile phone - the history of telecommunications in Finland in the light of statistics]. Helsinki: Tilastokeskus. Retrieved November 14, 2018, from http://www.stat.fi/tup/suomi90/syyskuu.html

MacKinnon, R. (2013). Consent of the networked: The struggle for internet freedom. New York: Basic Books.

MacKinnon, R., Maréchal, N., & Kumar, P. (2016). Global Commission on Internet Governance – Corporate accountability for a free and open internet (Paper No. 45). Ontario; London: Centre for International Governance Innovation; Chatham House. Retrieved from https://www.cigionline.org/sites/default/files/documents/GCIG%20no.45.pdf

Mansell, R. & Nordenstreng, K. (2006). Great Media and Communication Debates: WSIS and the MacBride Report. Information Technologies and International Development, 3(4), 15–36. Available at http://tampub.uta.fi/handle/10024/98193

McIver, W. J., Jr., Birdsall, W. F., & Rasmussen, M. (2003). The internet and right to communicate. First Monday,8(12). doi:10.5210/fm.v8i12.1102

McLeod, S. (2018). Communication rights: Fundamental human rights for all. International Journal of Speech-Language Pathology, 20(1), 3–11. doi:10.1080/17549507.2018.1428687

Ministry of Finance, Finland. (2018, May 23). Digital Economy and Society Index: Finland has EU's best digital public services. Helsinki: Ministry of Finance. Retrieved February 28, 2019, from https://vm.fi/en/article/-/asset_publisher/digitaalitalouden-ja-yhteiskunnan-indeksi-suomessa-eu-n-parhaat-julkiset-digitaaliset-palvelut

Ministry of Justice, Finland. (2017). Action plan on democracy policy. Retrieved February 28, 2019, from https://julkaisut.valtioneuvosto.fi/bitstream/handle/10024/79279/07_17_demokratiapol_FI_final.pdf?sequence=1

Ministry of Transport and Communications, Finland. (2014a). Televisioala Suomessa: Toimintaedellytykset internetaikakaudella [Television industry in Finland: Operating conditions in the Internet era] (Publication No. 13/2014). Helsinki: Ministry of Transport and Communications. Retrieved from http://urn.fi/URN:ISBN:978-952-243-398-5

Ministry of Transport and Communications, Finland. (2014b). Viestintäpalveluiden esteettömyysindikaattorit [Accessibility indicators for communication services] (Publication No. 36/2014). Helsinki: Ministry of Transport and Communications. Retrieved from http://urn.fi/URN:ISBN:978-952-243-437-1

Moisala, U. E., Rahko, K., & Turpeinen, O. (1977). Puhelin ja puhelinlaitokset Suomessa 1877–1977 [Telephone and telephone companies in Finland 1877–1977]. Turku: Puhelinlaitosten Liitto ry.

Neuvonen, R. (2005). Sananvapaus, joukkoviestintä ja sääntely [Freedom of expression, media and regulation]. Helsinki: Talentum.

Neuvonen, R. (2014). Yksityisyyden suoja Suomessa [Privacy in Finland]. Helsinki: Lakimiesliiton kustannus.

Neuvonen, R. (2018). Sananvapauden historia Suomessa [The History of Freedom of Expression in Finland]. Helsinki: Gaudeamus.

Nieminen, H. (2019). Inequality, social trust and the media. Towards citizens’ communication and information rights. In J. Trappel (Ed.), Digital Media Inequalities Policies against divides, distrust and discrimination (pp, 43–66). Gothenburg: Nordicom. Available at https://norden.diva-portal.org/smash/get/diva2:1299036/FULLTEXT01.pdf#page=45

Nieminen, H. (2016). Communication and information rights in European media policy. In L. Kramp, N. Carpentier, A. Hepp, R. Kilborn, R. Kunelius, H. Nieminen, T. Olsson, T. Pruulmann-Vengerfeldt, I. Tomanić Trivundža, & S. Tosoni (Eds.), Politics, civil society and participation: media and communications in a transforming environment (pp. 41–52). Bremen: Edition lumière. Available at: http://www.researchingcommunication.eu/book11chapters/C03_NIEMINEN201516.pdf

Nieminen, H. (2014). A short history of the epistemic commons: Critical intellectuals, Europe and the small nations. Javnost - The Public, 2(3), 55–76. doi:10.1080/13183222.2014.11073413

Nieminen, H. (2013). European broadband regulation: The “broadband for all 2015” strategy in Finland. In M. Löblich & S. Pfaff- Rüdiger (Eds.), Communication and media policy in the era of the internet: Theories and processes (pp. 119-133). Munich: Nomos. doi:10.5771/9783845243214-119

Nieminen, H. (2010). The European public sphere and citizens’ communication rights. In I. Garcian-Blance, S. Van Bauwel, & B. Cammaerts (Eds.), Media agoras: Democracy, diversity, and communication (pp. 16-44). Newcastle Upon Tyne, UK: Cambridge Publishing.

Nousiainen, M. (2016). Osallistavan käänteen lyhyt historia [A brief history of a participatory turn]. In M. Nousiainen & K. Kulovaara (Eds.), Hallinnan ja osallistamisen politiikat [Governance and Inclusion Policies] (pp. 158-189). Jyväskylä: Jyväskylä University Press. Available at https://jyx.jyu.fi/bitstream/handle/123456789/50502/978-951-39-6613-3.pdf?sequence=1#page=159

OECD. (2018). OECD economic surveys: Finland 2018. Paris: OECD Publishing. doi:10.1787/eco_surveys-fin-2018-en

Padovani, C., & Calabrese, A. (Eds.) (2014). Communication Rights and Social Justice. Historical Accounts of Transnational Mobilizations. Cham: Springer / Palgrave Macmillan. doi:10.1057/9781137378309

Pelkonen, A. (2004). Questioning the Finnish model – Forms of public engagement in building the Finnish information society (Discussion Paper No. 5). London: STAGE. Retrieved November 14, 2018, from http://lincompany.kz/pdf/Finland/5_ICTFinlandcase_final2004.pdf

Pulkkinen, T. (1996). Snellmanin perintö suomalaisessa sananvapaudessa [Snellman's legacy in Finnish freedom of speech]. In K. Nordenstreng (Ed.), Sananvapaus [Freedom of Expression] (pp. 194–208). Helsinki: WSOY.

Redeker, D., Gill, L., & Gasser, U. (2018). Towards digital constitutionalism? Mapping attempts to craft an Internet Bill of Rights. International Communication Gazette, 80(4), 302–319. doi:10.1177/1748048518757121

Reporters without Borders (2018). 2018 World Press Freedom Index. Retrieved February 28, 2019, from: https://rsf.org/en/ranking

Reunanen, E. (2018). Finland. In N. Newman, R. Fletcher, A. Kalogeropoulos, D. A. L. Levy, & R. K. Nielsen (Eds.), Reuters Institute digital news report 2018 (pp. 77–78). Oxford: Reuters Institute for the Study of Journalism.

Sajari, P. (2009). Lakia vahvempi Nokia [Nokia, stronger than the law]. Helsingin Sanomat.

Shahbaz, A. (2018). Freedom on the net 2018: The rise of digital authoritarianism. Washington, DC: Freedom House. Retrieved February 28, 2019, from https://freedomhouse.org/sites/default/files/FOTN_2018_Final%20Booklet_11_1_2018.pdf

Silberman, S. (1999, September). Just say Nokia. Wired Magazine.

Sims, M., Youell, T., & Womersley, R. (2015). Understanding spectrum liberalisation. Boca Raton, FL: CRC Press.

Statistics Finland (2017). Väestön tieto- ja viestintätekniikan käyttö 2017 [Population Information and Communication Technologies 2017]. Helsinki: Official Statistics of Finland. Retrieved February 28, 2019, from https://www.stat.fi/til/sutivi/2017/13/sutivi_2017_13_2017-11-22_fi.pdf

Syvertsen, T., Enli, G., Mjøs, O., & Moe, H. (2014). Media welfare state. Nordic media in the digital era. Ann Arbor: University of Michigan Press.

Tene, O., & Polonetsky, J. (2013). A theory of creepy: Technology, privacy and shifting social norms. Yale Journal of Law & Technology, 16, 59–102. Available at: https://yjolt.org/theory-creepy-technology-privacy-and-shifting-social-norms

Tiilikka, P. (2007). Sananvapaus ja yksilön suoja: lehtiartikkelin aiheuttaman kärsimyksen korvaaminen [Freedom of speech and protection of the individual: compensation for the suffering of a journal article]. Helsinki: WSOYpro.

Venturelli, S. (2002). Inventing e-regulation in the US, EU and East Asia: Conflicting social visions of the information society. Telematics and Informatics,19(2), 69–90. doi:10.1016/S0736-5853(01)00007-7

Wallace, N., & Castro, D. (2017). The state of data innovation in the EU. Brussels & Washington, DC: Center for Data Innovation. Retrieved February 28, 2019, from http://www2.datainnovation.org/2017-data-innovation-eu.pdf

Wavre, V. (2018). Policy diffusion and telecommunications regulation. Cham: Springer / Palgrave Macmillan.

Wilhelmsson, N. (2017). Finland: eDemocracy adding value and venues for democracy. In eDemocracy and eParticipation. The precious first steps and the way forward (pp 25-33). Retrieved February 28, 2019, from http://www.fnf-southeasteurope.org/wp-content/uploads/2017/11/eDemocracy_Final_new.pdf

Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. New York: Public Affairs.

Footnotes

1. The quest for more openness and publicity is a continuation of a long historical development. European modernity is fundamentally based on the assumption that knowledge and culture belong to the common domain and that the process of democratisation necessarily means removing restrictions on the epistemic commons. Aspects such as civic education, universal literacy, and mass media (newspapers; public service broadcasting as a tool for the daily interpretation of the world) are at the heart of this ideal. The epistemic commons reflects the core ideas and ideals of deliberative democracy: at the centre of this view is democratic will formation that is public and transparent, includes everyone and provides equal opportunities for participation, and results in rational consensus (Habermas, 2006). The epistemic commons is thought to facilitate such will formation.

Beyond ‘zero sum’: the case for context in regulating zero rating in the global South


This paper is part of Practicing rights and values in internet policy around the world, a special issue of Internet Policy Review guest-edited by Aphra Kerr, Francesca Musiani, and Julia Pohle.

Introduction

The contestation of network neutrality 1 in the United States was arguably the predominant communications policy debate of the last decade (Bauer and Obar, 2014). This otherwise arcane aspect of telecoms policy became the locus of concerns about limiting freedom of expression online, stifling digital innovation and exacerbating market concentration. Although major policy developments on this issue occurred concurrently in several countries of the global South (Belli & De Filippi, 2016), it is the practice of ‘zero rating’ mobile apps – exempting content and services from data charges (de Miera, 2016) – that has wrested academic and media attention away from the US case. There are shrill arguments on either side. Those who oppose zero rating (henceforth ZR) frame it as a “pernicious” threat to network neutrality and the multiple social goods that it protects (notably innovation and expression) (Crawford, 2015; Malcolm, 2014). Proponents defend zero rating as an internet on-ramp for the billions offline (Katz & Callorda, 2015; West, 2015). Prevailing voices have thus reduced ZR to a zero-sum game, torn between the apparently incommensurate goals of facilitating access and preserving a neutral network.

Moreover, with some notable exceptions (A4AI, 2016; Mozilla, 2017; Marsden, 2016), judgements on ZR have tended towards broad theoretical strokes rather than granular empirical analysis. This tendency has become pronounced because Facebook’s one-size-fits-all Free Basics programme – offered in one basic format in 63 countries worldwide (Internet.org, 2017) – has dominated consideration of the issue and shaped the contours of the debate accordingly. In fact, much of the ZR offered in the global South is tailored by individual carriers and varies considerably 2. There is no universal prescription for zero rating, so analyses should be rigorously contextualised.

Accordingly, this paper examines the mesh of competing concerns around ZR to identify the complex interrelationship between them. I contend that through a pragmatic and contextual approach, we can move beyond absolutist judgements and better defend the social goods sought both by advocates of net neutrality (Crawford, 2015; Van Schewick 2012) and digital inclusion (West 2015).

We can observe these polarised tendencies in regulatory decisions. Market absolutists such as the head of the US FCC, Ajit Pai, claim that a laissez-faire approach to ZR benefits “those with low incomes” and encourages “a competitive marketplace” (Brodkin, 2017). Preserving network neutrality scarcely registers as a concern in this judgement. Conversely, the veto on zero rating implemented by India’s TRAI in 2016 was based primarily on the perceived threat to an “open and non-discriminatory” network (TRAI, 2016). This ban negates the possible benefits of ZR to millions of economically disadvantaged Indian citizens. The prospect of a regulatory ripple effect from two of the world’s largest telecoms markets is genuine. It is essential, therefore, to develop empirical analyses that can contribute to informed and balanced ZR regulation; in other words, regulation that effectively reconciles the rights of ZR users with no other means to access the internet and the need to safeguard innovation, competition and free expression.

This article analyses the multiple forms of zero rating offered in four wireless markets – Brazil, Colombia, Mexico and South Africa – across two dimensions: political-economic and developmental. By using these contextual frames, I identify the factors that exacerbate or mitigate ZR’s impacts on net neutrality and access. By weighing up these factors, I contend that we can better identify circumstances in which ZR could be sanctioned as a short-term means to boost mobile internet access. Conversely, in other contexts, ZR constitutes an intolerable infringement upon network neutrality, local innovation and freedom of expression.

Wireless markets in the global South are a dynamic object of study, with market offerings and regulatory decisions often in flux. Zero rating represents this dynamism in miniature. The case studies presented here capture particular modes of enabling mobile internet access; some of which may be obsolete within months, while others may become consolidated as dominant business practices. Only by tracking this ‘moving target’, however, and by applying the dominant presumptions about ZR to actual market conditions, will we be able to make informed judgements and meaningful policy interventions.

Structure and contributions of this article

This paper offers three principal contributions to the existing literature, and proceeds in three stages. In the first section, in addition to proposing a working definition, I identify the main arguments regarding ZR’s impact on network neutrality and mobile internet access. I present my first contribution here: a typology of the six forms of zero rating most prevalent in these four wireless markets. This provides the set of definitions that I use in my analysis, and adds two significant sub-categories absent from existing typologies (Carrillo, 2016; Belli, 2017).

In the second section I present a fine-grained analysis of all mobile internet offerings in the four countries using this typology. This demonstrates the prevalence of zero rated mobile internet services therein.

The central contribution of this article features in the last section. Here I examine these four wireless markets across two analytical frames:

  • political-economic, where I scrutinise the wireless market in terms of concentration, market-share and ownership structure. Various traditions within the political economy of communication focus on these criteria in order to analyse market strength, including the institutional political economy tradition (as described in Mosco, 2008) and critical Marxist approaches (Fuchs, 2015). I, however, follow most closely the monopoly capital school developed prominently by McChesney (2000).
  • developmental, in which I assess the affordability and penetration of the mobile internet, the level of local innovation, as well as state-led initiatives to boost internet access. In this frame I use development indicators as commonly applied within ICT4D research (e.g., Levendis & Lee, 2013)

Thereafter I assess how these insights might be applied to the challenge of crafting effective public policy around ZR in the global South.

Methodology

These countries have been purposefully selected in order to generate a rich array of findings from a limited number of cases. Three continents are represented, thus recreating some of the wide geographical range encompassed by the global South. There is also a diversity of scenarios with regard to key variables such as affordability of mobile services and the presence of programmes like Facebook Inc.'s Free Basics. Finally, the four countries demonstrate different approaches to legislating network neutrality and offer the opportunity to examine the relationship between forms of network neutrality legislation and the extent to which it is compromised by ZR.

In terms of analytically useful commonalities, all four countries are classified as large, but less mature, telecoms markets (Groene, Navelekar, & Coakley, 2017). Accordingly, they could represent bellwethers for the rest of the global South in terms of market and regulatory trends. Finally, all four countries selected are ones in which material could be accessed in languages spoken by this researcher.

To delimit the study, only those carriers with more than 10% of national market share were included. All data regarding mobile data offerings were collected from the carriers’ websites and were accurate as of August 2017. Where offerings varied by region, data were collected for the largest metropolitan area, e.g., São Paulo for Brazil.

Zero rating, network neutrality and mobile internet access

Zero rating refers to the practice of mobile web content being offered to consumers by mobile ISPs (MISPs) without counting against their data allowance. Indeed, it is essential to note that ZR is a product of the artificial scarcity implied by the imposition of data caps, without which ZR would hold no attraction for existing mobile internet users. ZR can therefore represent a cost saving to users as data plans typically limit the volume a subscriber may use per billing period. MISPs and content platforms, meanwhile, offer the service based on the calculation that longer-term revenue will outweigh short term costs through increased take-up of mobile internet services. ZR has become increasingly ubiquitous in wireless markets in the global South where cost presents a greater obstacle to mobile internet access than in the global North (ITU, 2015).

Before proceeding further, it is important to settle on a precise definition of ZR. Rossini and Moore offer a useful starting point by classifying zero rating as a matter of billing management by MISPs that discriminates between web content through price, rather than technical network management (2015, p. 1). In turn, Marsden highlights the essential feature of positive discrimination of web content that characterises zero rating, as opposed to the negative discrimination implied by throttling or blocking (2016, p. 7). By combining these, I propose the definition of zero rating as the practice of positive discrimination of web content by mobile ISPs enabled by billing management practices. Using this definition rather than a strict focus on ‘free’ services is important because it captures the practice of differential pricing that is commonly used to sell app-specific bundles and that might otherwise escape analysis.

Network neutrality

The concept of network neutrality features in discussions of zero rating because the former is compromised by the latter. Net neutrality refers to the normative goal that all data should move across the internet without being subject to discrimination based on origin or type (Wu, 2003). Academics and activists have interpreted net neutrality as a means to protect innovation and competition on the internet (Van Schewick, 2010), as well as users’ speech and information access rights (Nunziato, 2009). Regulatory actions have also been guided by such concerns, for example the BEREC ‘Guidelines on the Implementation by National Regulators of European Network Neutrality Rules’ (BEREC, 2016). By facilitating positive discrimination of web content, ZR constitutes a violation of network neutrality. By extension, ZR may also impede innovation, competition and free speech.

Zero rating necessarily favours access to certain web platforms at the expense of others. MISPs therefore assume a gatekeeper role that “pick[s] winners and losers online” and “undermines the vision of an open Internet where all applications have an equal chance of reaching audiences” (Van Schewick, 2016, p. 4). This is exacerbated by the fact that most ZR features globally dominant platforms (Viecens & Callorda, 2016). Indeed, findings from the Zero Rating Map show that in each of the 100 mapped countries, at least one Facebook-owned app is zero rated. Meanwhile, the lower user bases and shallower pockets of smaller content providers, start-ups and non-commercial services mean they are often left on the sidelines, which can distort competition and impede innovation. These effects may also manifest themselves amongst MISPs if zero rated offers serve to entrench the market power of dominant players.

Alongside these market distortions arising from the infringement of net neutrality, the freedom of expression of users may also be diminished. Zero rating favours certain speech and information resources at the expense of others, meaning that the internet’s potential as a democratic space of open communication (already threatened by state surveillance, corporate control over user data and widespread disinformation) is further imperilled. It is also possible that users become siloed within a ‘walled garden’ of content. Finally, it is important to note that the quest to collect user data often drives ZR schemes. This has been well-documented in the case of Free Basics (LaFrance, 2016), and is also evident in jurisdictions such as Brazil, where the offer of zero rated applications becomes a means to circumvent internet regulation that prevents MISPs from monitoring the content of user communications (Presidencia da Republica, 2016).

There are, of course, counter-arguments. In the case of the wireless sector, if the market for MISPs is already competitive, then the presence of zero rating may not unduly distort it (Saenz, 2016; Galpaya, 2017). Moreover, if a smaller, struggling incumbent, or a new entrant, can use zero rated offers to entice more subscribers, this may result in greater competition. Another claim is that because MISPs benefit from users accessing an ecosystem of applications, the carriers themselves will act out of economic self-interest to prevent zero rating from becoming anti-competitive at the application layer (Eisenach, 2015). The argument follows that this would apply a natural brake to any tendency towards a non-neutral network.

In terms of user communication rights, one must ultimately be cognisant of the possibility that access to some applications may be better than none, a point that segues into discussion of the relationship between zero rating and mobile internet access.

Mobile internet access

The goal of increasing rates of mobile internet access is often invoked alongside net neutrality in discussions of ZR. This is because of the obvious potential that a cost-free form of mobile internet represents for boosting adoption. Increasing levels of mobile internet access amongst the estimated four billion people for whom the cost is prohibitive (ITU, 2015) is a goal that animates many NGOs, technology corporations and governments. Alongside the presumed commercial benefits for those providing the connectivity (the opacity of the economic arrangements makes these impossible to verify), the goal of increasing mobile internet access is justified on the basis that it will improve health, education, economic productivity and even democracy.

Although some research suggests that ZR is used in conjunction with a data cap (that permits open access to any web content within a pre-agreed data allowance) as a cost-saving measure (A4AI, 2016; Mozilla, 2017), for many users, zero rated offers may constitute their only access to the internet. Given the importance of messaging apps like WhatsApp for everyday communication in much of the global South (Galpaya, 2017) the significance of free access should not be understated.

One oft-repeated argument by proponents of zero rating (most notably the platforms and carriers) is that these services constitute an internet ‘on ramp’ for non-users. Facebook’s own research claims that 50% of Free Basics users go on to become full mobile internet subscribers (2015). Independent research offers some different perspectives. Surveys of 1000 users conducted by the Alliance for Affordable Internet (A4AI) in each of Colombia, Peru, Ghana, Nigeria, Kenya, India, Bangladesh, and the Philippines showed that only 12% of respondents had not experienced the internet prior to using a zero rated service (A4AI, 2016). Similar research conducted in seven developing countries on behalf of the Mozilla Foundation also discounted the ‘on ramp’ theory (2017).

Other arguments that connect ZR to an increase in the provision of affordable access focus on the possibility that zero rating can boost innovation for impoverished users as they join the network and edge providers offer specialist services corresponding to their needs (Sylvain, 2015). Furthermore, some researchers – as well as industry actors (Brito, 2015) – note the possibility that financial arrangements between content providers and MISPs could be struck, funnelling revenue towards infrastructure build-out (Berglind, 2016). This would in turn facilitate increased rates of internet access. However, the opacity of these agreements makes this impossible to verify. As a final counterpoint, some observers fret that ZR might permit governments a ‘free pass’ on infrastructure investment (Rossini & Moore, 2015, p. 12).

Having concluded this brief survey, we should now classify the forms of zero rating available to consumers in the global South. The following typology is based on analysis of the four wireless markets featured in this research, as well as the wider literature.

Table 1: Typology of forms of zero – and Near-0 – rated data offers in the global South

| Driver | Model | Pre/Post-Pay 3 | Description | Example |
|---|---|---|---|---|
| MISP-driven | Apps plus cap | Post | Unlimited access to suite of apps with data cap for complete internet | Tigo’s ‘Combo’ plan (Colombia) |
| MISP-driven | Add-On | Either | Single app made available as optional add-on, with data charge waived | TIM’s ‘Torcedor’ (Brazil) |
| MISP-driven | Triple-lock bundle | Pre | Time-limited data cap for a suite of apps | Movistar’s ‘Recarga’ (Mexico) |
| Content-driven | Platform ZR | Either | Platform-driven walled garden | Free Basics/Internet.org |
| Content-driven | Earned data | Either | Data earned in exchange for content consumption | Vivo Ads (Brazil) |
| Content-driven | Non-commercial | Either | Users provided free access to non-commercial content; not exclusive to carrier | Wikizero 4 |

Table 1 shows six forms of zero rated mobile internet services. They are grouped into two broad brackets: MISP-driven and content-driven. As mentioned above, I propose a broader definition of ZR that includes a bundled approach to selling apps and web services that I call Near-0 Rating. Although the service is not free, it corresponds to a form of positive discrimination premised on pricing. It also favours access to a select few globally dominant content and messaging platforms.

This practice is exemplified by the widely offered pre-pay Triple-lock bundles in which the limitations are trifold: temporal, volume-based and content-specific. An archetype is Movistar’s Recarga package in Mexico in which a data-capped bundle of access to WhatsApp, Facebook and Twitter is offered on a sliding scale from 24 hours to one month.

The most common form of zero rating in post-pay consists of unlimited access to a suite of web applications – typically Facebook, WhatsApp and Twitter – as part of a data contract that includes capped access to the wider internet. The Colombian carrier Tigo offers an archetype with their Combo plan that includes a sliding scale of monthly data allowances, from 800MB to 6GB, alongside unlimited access to six apps.

While both of these models represent clear forms of discrimination, the discrimination becomes more explicit when a) there is no additional data cap for the open internet, or b) the zero rated content remains available after any accompanying data cap is reached. Both of these variants possess the potential to lock users into a ‘walled garden’ of content.

‘Earned data’ meanwhile refers to promotions in which users are rewarded with a data allowance in exchange for consumption of a certain kind of web content, likely an advertisement. An example of this type of zero rating exists in Brazil in the form of a partnership between the carrier Vivo and Procter & Gamble (Telecom Paper, 2016).

There are two other principal forms of content-driven ZR. Facebook’s ‘Internet.org’ project, launched in 2013 and re-branded ‘Free Basics’ in 2015 (Internet.org, 2017), is the most conspicuous example of ‘platform ZR’ (Belli, 2016). It partners with a mobile carrier to offer voice-only subscribers access to a suite of pared-down web applications and services – including Facebook itself – at no cost, but with no access to the wider internet. According to Facebook’s CEO, Mark Zuckerberg, it is an altruistically-driven plan to “connect every village…and improve the world for all of us” (Bhatia, 2016). Its critics, meanwhile, interpret it as a ploy to lock the four billion unconnected people in the global South into a corporate faux-internet (Levy, 2015).

Finally, there is also a non-commercial model of ZR. For example, the Wikimedia Foundation operated Wikizero from 2011 to 2018, establishing non-exclusive partnerships with mobile carriers in countries where cost constituted an acute obstacle to access in order to provide free access to Wikipedia content (Wikimedia Foundation, 2017). A state-led example is the Brazilian 800 Saude app that provided healthcare information (Governo do Brasil, 2017). When non-commercial models are offered non-exclusively, the benefits for access to knowledge are evident, while the infringement on net neutrality in terms of competition, innovation and expression should only concern absolutist defenders of the principle (Malcolm, 2014).

Prevalence of ZR in the countries under analysis

Table 2: Extent and form of zero rated mobile internet services

| Country | % of post-pay services incl. ZR in each market (% ‘apps + cap’) 5 | % of pre-pay services incl. ZR in each market (% ‘triple lock’) | Availability of Free Basics |
|---|---|---|---|
| Brazil | 33% (100%) | 76% (100%) | N |
| Colombia | 100% (100%) | 100% (100%) | Y |
| Mexico | 100% (100%) | 72% (100%) | Y |
| South Africa | 10% (0%) | 33% (0%) | Y |

Sources: Websites of all MISPs with more than 10% wireless market share. Data collected July 2017. See annex for full details.

The data in Table 2 show that the ‘apps plus cap’ model in post-pay, and the ‘triple lock’ model in pre-pay, represent the dominant forms of zero rated internet service. In terms of the markets as a whole, in Mexico and Colombia ZR has become integral to the preferred business models of the major carriers. In Brazil there is a significant difference in the extent of zero rated services between the pre- and post-pay segments, while in South Africa zero rating constitutes a minimal share of the market mix.

In order to gain further insight from these data, they must be properly contextualised. Accordingly, I will now examine the data through two frames: political-economic and developmental.

Two contextual frames for understanding the relationship between ZR, network neutrality and mobile internet access

The political-economic frame

The level of wireless concentration, the market positions of the carriers offering ZR, ownership of zero rated content services as well as the market strength of the zero rated service all have a significant bearing on the degree to which network neutrality is compromised.

Table 3: Wireless market characteristics

| Country | Number of MISPs w/ +10% market share | Market concentration: HHI 6 score¹ | Wireless market share (mobile internet subs) |
|---|---|---|---|
| Brazil | 4 | 2,457 (Unconcentrated) | Vivo 31%; TIM 25%; Claro 25%; Oi 17%² |
| Colombia | 3 | 3,737 (Moderately concentrated) | Claro 49% (53%); Movistar 23% (30%); Tigo 18% (12%)³ |
| Mexico | 3 | 5,152 (Highly concentrated) | Telcel 65% (70%); Movistar 23% (15%); AT&T 11% (14%)⁴ |
| South Africa | 3 | 3,205 (Moderately concentrated) | Vodacom 35%; MTN 35%; Cell C 17%⁵ |

Sources: ¹ Economist Intelligence Unit, 2017; ² Anatel, 2017; ³ MinTIC, 2017; ⁴ ITF, 2017; ⁵ Business Tech, 2017.
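The HHI figures in Table 3 are the sum of squared market shares (in percentage points). As a rough check, a sketch of the calculation for Brazil’s four major carriers follows; since it omits carriers below 10% share, it only approximates the published EIU score of 2,457.

```python
def hhi(shares_percent):
    """Herfindahl-Hirschman Index: sum of squared market shares in %.
    Ranges from near 0 (atomistic market) to 10,000 (pure monopoly)."""
    return sum(s ** 2 for s in shares_percent)

# Brazil's four carriers with >10% share (Table 3): Vivo, TIM, Claro, Oi.
print(hhi([31, 25, 25, 17]))  # 2500, close to the EIU's published 2,457
```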

When we think about zero rating and its impact on network neutrality, the market strength of the participating MISP is a key criterion. The case of Mexico is emblematic in this respect. Its wireless market is highly concentrated, with one player – América Móvil’s subsidiary, Telcel – accounting for 70% of all mobile internet subscriptions (ITF, 2017). The fact that all of Telcel’s post-pay, and one third of its pre-pay, data plans feature zero rated content means that the impact on competition is more acute. This is also true for the moderately concentrated market of Colombia where the market leader, Claro (also owned by América Móvil), offers zero rated services. It is probable that the offer of ZR will further exacerbate concentration in these wireless markets as the zero rated offers attract even more subscribers to the dominant MISPs.

These effects can also be registered in the content market. Research by Van Schewick in the United States shows that users tend to favour zero rated content over content that counts towards their data caps (2016). This distortion in the online environment is exacerbated when we consider that, in common with all of the zero rated content presented in Table 4, all of Telcel’s zero rated content features the globally dominant platforms in terms of active users 7: social network Facebook; micro-blogging service Twitter; and messaging app WhatsApp (Statista, 2017). The phenomenon of network effects is intensified when social networking services (SNS) and messaging apps are zero rated, which may hasten the onset of user ‘lock-in’ (Palfrey & Gasser, 2012) and in turn further distort market competition.

Table 4: Zero rated content characteristics

| Country | ZR that includes MISP-owned content (%) | ZR that includes global content platform (%) | Exclusivity between global content and MISP | Local content incl. in ZR offers |
|---|---|---|---|---|
| Brazil | 31% | 84% | N | N |
| Colombia | 20% | 100% | N | N |
| Mexico | 33% | 100% | N | N |
| South Africa | 0% | 100% | Y | Y |

Moreover, if a carrier zero rates its own service, then we see a pernicious form of vertical integration in which one entity not only owns the pipes, platform and content, but can effectively lock users into this proprietary funnel through price discrimination. It should be noted that the phenomenon of ‘lock-in’ (Palfrey & Gasser, 2012) can occur irrespective of the use of ZR, and is widely considered to have a negative impact on innovation and competition within the market in question. We see this in the case of Telcel, which exacerbates its market power by zero rating its Claro Video service on two-thirds of its post-pay plans. Indeed, in Mexico, Colombia and Brazil 8, 20-33% of all zero rated services featured carrier-owned content and services. In all cases bar one (Tigo in Colombia), these were offered by the national subsidiary of one of four global operator groups: Telefónica, América Móvil, Telecom Italia and AT&T. This is significant because these are multinational corporations (the former two in a dominant position in Latin America), meaning that when they zero rate their own content platforms in one market, it may serve to consolidate their power regionally.

The infringement of network neutrality by ZR could be justified as pro-competitive if it were offered by the MISP with the smallest market share; it might serve to attract more users, increase that carrier’s share and thus make the market more competitive (Goodman, 2016). This would be especially true of markets defined as moderately or highly concentrated, such as Colombia and Mexico. While AT&T in Mexico (9%) and Tigo in Colombia (17%) are the market laggards and offer ZR in all of their plans, they do so in the context of ubiquitous ZR. As such, the pro-competitive impact is muted.

South Africa offers the only case where the smallest player (Cell C, with 14% market share) offers ZR (Free Basics) in a moderately concentrated market where the dominant incumbents do not. This example also highlights the only instance in this study of exclusivity between a zero rated global content platform and an MISP. According to Marsden’s (2016) analysis, exclusivity in ZR arrangements should be prohibited ex ante. Such an arrangement can in theory create a more concentrated market than the non-exclusive alternative because it would draw even more users onto the favoured network in order to benefit from the zero rated services. Given Cell C’s market position, however, that is only a minor concern in this case.

Brazil, unique amongst these four cases, boasts a wireless market composed of four large MISPs closely matched in market share. The provision of ZR by these carriers also seems to follow a pro-competitive model in that the two players grappling for second place, TIM and Claro, are more aggressive in their use of zero rated inducements than the market leader, Vivo 9. The outlier, however, is the fourth-placed Oi SA, which holds 17% of the market and offers no ZR.

Although the infringement of network neutrality through the zero rating of locally developed apps and content could encourage local technological development, the data collected for this study suggest that this is a distant prospect. The only examples are the apps included in the Free Basics suite offered by Cell C in South Africa, which includes the youth employment accelerator Harambee and the literacy app Fundza (Cell C, 2017). In this case, it should be noted that Facebook serves as the arbiter of which apps will be granted the privilege of admission, appointing itself de facto gatekeeper of South Africa’s app ecosystem and discriminating against those applications that are not included in the Free Basics suite.

In sum, by applying this political economy lens to ZR and the markets in which it is offered, we can identify various instances of red lines, where ZR not only infringes network neutrality, but does so in a way that has a significantly detrimental impact on competition and innovation in the wireless and/or content market:

  • Any offer of ZR in a highly concentrated market (except by the carrier with lowest market share) 10
  • Any exclusive offer of ZR (except by the carrier with lowest market share)
  • Any offer of ZR by a carrier majority-owned by a global operator group (unless lowest market share)
  • Any carrier zero rating their own content/platform

We can also identify amber zones in which ZR’s benefits to innovation and competition could outweigh the negative impact of its infringement of net neutrality:

  • The ZR of locally developed/public interest apps and services in a non-exclusive form
  • The offer of ZR by a market laggard/newcomer/struggling incumbent

The developmental frame

The best way to understand the impact of zero rating on rates of mobile internet access is by using a developmental frame. This is because low levels of economic development, limited telecoms infrastructure and high access costs collectively create conditions whereby zero rated access to specific applications could be justified as a stopgap measure in the absence of widely available and affordable mobile internet access.

The offer of zero rated services is sometimes criticised on the basis that it allows governments to evade responsibility for improving mobile internet access for their citizens (Rossini & Moore, 2015, p. 12). The enthusiasm with which many governments have welcomed the arrival of Facebook’s Free Basics perhaps validates this perspective. A market solution of zero rated internet is ultimately a profit-oriented scheme subject to corporate exigencies, a fact that explains the disquiet of many observers. Although community networks offer great promise for addressing deficiencies in both private and public provision of access (Baca, Belli, Huerta, & Velasco, 2018), their relatively limited scale means it is important to identify the extent of government programmes to reduce access costs and increase national penetration of mobile broadband. This also needs to be understood in the context of the level of national ICT development and the extent to which a significant deficit needs to be bridged. The national capacity for innovation, meanwhile, is a relevant metric for assessing how ZR might stymie the local development of web apps and services 11. All these data are presented in Table 5.

Table 5: Infrastructure and innovation

| Country | ICT development index (/175 country ranking)¹ | State policy to promote free/low cost internet access (/10 score)² | Capacity for innovation (/139 country ranking)³ |
|---|---|---|---|
| Brazil | 63 | 8 | 80 |
| Colombia | 83 | 9 | 93 |
| Mexico | 92 | 7 | 66 |
| South Africa | 88 | 6 | 32 |

Sources: ¹ ITU, 2016; ² A4AI, 2017; ³ WEF, 2016

The other major sub-index to consider is affordability and access. Although ZR is always offered with the goal of boosting market share and access to valuable user data, it is often presented by its boosters as a means to overcome socio-economic obstacles to mobile internet access, either by maximising the utility of data-capped open internet access, or by providing some app-specific connectivity to those who otherwise have none (Layton & Calderwood, 2015; West, 2015). Intuitively, the provision of a free service should represent a boon to the poorest segments of society. The conundrum to consider is the extent to which the benefits of ZR to the poorest outweigh the potentially negative impact on network neutrality and its associated social goods. As such, Table 6 presents several key indicators that help to gauge the affordability of mobile internet access, as well as the take-up of those services.

In terms of measuring affordability, A4AI offers a new benchmark: that 1GB of data should not exceed 2% of a user’s monthly income (“1 for 2”). A4AI argue that this is a more substantive measure than the 500MB for 5% threshold defined by the UN Broadband Commission (2017). Another useful metric for gauging cost as impediment to access is the proportion of mobile subscribers that use a prepaid plan. This form of mobile access offers users the highest level of cost-control and is therefore often adopted by those with the lowest economic means. Looking at the respective indices of mobile subscriptions and mobile internet subscriptions, meanwhile, serves double duty as a measure both of cost and infrastructure; a significant disparity between the two forms of penetration suggests a barrier of cost and/or a lack of broadband availability. The final metric listed in Table 6 is useful to understand the intensity of the negative impact of ZR on network neutrality: a high proportion of internet use over WiFi means that users are accessing the open, full internet, and are not limited to the walled-garden provisions of application-specific ZR 12.

Table 6: Affordability and access

| Country | Price of 1GB mobile prepaid plan as % of monthly income¹ | Mobile subs/mobile broadband subs (% penetration)² | Mobile subs prepaid (%) | % time mobile internet users connected to WiFi (/95 ranking)⁷ |
|---|---|---|---|---|
| Brazil | 1.97 | 119/73 | 66³ | 12th |
| Colombia | 1.45 | 105/47 | 79⁴ | 36th |
| Mexico | 2.03 | 81/59 | 84⁵ | 28th |
| South Africa | 2.48 | 160/40 | 84⁶ | 71st |

Sources: ¹ A4AI, 2017; ² GSMA Intelligence, 2015; ³ Teleco, 2017; ⁴ MinTIC, 2017; ⁵ IFT, 2017; ⁶ ICASA, 2016; ⁷ Open Signal, 2016

Combining these measures might illuminate the extent to which ZR could be justified as an expedient for facilitating higher levels of mobile internet access 13. Brazil, for instance, boasts the highest level of ICT development of the four countries according to a cluster of indicators compiled by the ITU. It also scores highly in the A4AI’s aggregated metric for measuring the quality of the state’s efforts to increase mobile internet access. In terms of affordability, the data in Table 6 show that Brazil almost exactly meets the ‘1 for 2’ threshold and demonstrates robust levels of penetration, at 119% for mobile and 73% for mobile broadband subscriptions. With regard to the forms of access, the level of prepaid subscriptions (66%) may be high compared to wireless markets in the global North, but it is the lowest of the four countries examined here. Relatively speaking, the level of WiFi use is very high.

Taken together, these indicators suggest that there is no compelling justification for ZR as a means to boost access in the case of Brazil: at least in urban areas where 86% of Brazil’s population resides (World Bank, 2017), mobile internet is relatively widely diffused and affordable, ICT infrastructure is robust, and the state is a willing partner in boosting levels of access. Moreover, the high levels of WiFi connection imply that many users are able to access the open internet, even if they also contract ZR services.

There is of course an alternate interpretation that focuses on the challenge of connecting the 27%, or 56 million Brazilians, who do not access mobile internet. Data compiled in 2015 by the Brazilian Internet Steering Committee showed that 90% of those Brazilians who had never used the internet were in the lowest social classes (Derechos Digitales 2017, p. 56). We can infer that cost is likely a significant impediment to access for these citizens (exacerbating other systemic obstacles such as (digital) illiteracy and a lack of locally relevant content and services), one that ZR could help to overcome. The fact that the Brazilian state is judged to be pro-active in addressing access issues, however, could alleviate concerns that ZR would permit it to abdicate its responsibilities.

South Africa demonstrates a more straightforward case where ZR could be justified as a means to generate access. It does not meet the ‘1 for 2’ threshold, and a high penetration of mobile subscriptions (many of them prepaid) is accompanied by low levels of mobile internet subscriptions and WiFi access. The country also features in the bottom half of the ICT development index and receives a middling grade for state efforts to boost take-up of mobile internet. The South African government did launch a digital inclusion programme, South Africa Connect, in 2013 with the goal of connecting 90% of the population to the internet by 2020 (South African Government, 2013). A review of this plan reveals that it is based on market-led initiatives rather than a state-led infrastructure programme. Such an approach may explain why the South African government was receptive to the arrival of Facebook’s Free Basics in 2014.

The only factor in this analysis that might undermine the case in favour of ZR is that South Africa ranks highly for innovation capacity (WEF, 2016). Heavy take-up of ZR might therefore damage this positive aspect of the South African economy. There is indeed evidence of this dynamic in practice as a local messaging service was forced to shut down in 2015 citing competition from WhatsApp as the cause (Steyn, 2016).

Colombia and Mexico present similar scenarios in terms of these development indicators and their relationship with ZR. Colombia receives the highest score for its government’s efforts to boost access. This is in recognition of the achievements wrought by Colombia’s Vive Digital programme that aimed to increase its internet connected population to 27 million in 2018 from 8 million in 2014 (Rossini & Moore, 2015, p. 49). Indeed, in 2016 the Colombian government announced an innovative programme dubbed Internet Móvil Social para la Gente which provides subsidised data connections and 4G handsets to citizens registered for government welfare programmes (MinTIC, 2016).

In the context of targeted and adequately funded state efforts to increase mobile internet access, the presence of commercial ZR could complement rather than undermine these programmes; a stopgap that addresses economic and infrastructural barriers while more substantive public policy is implemented. This becomes a more compelling argument given that although Colombia comfortably meets the affordability threshold of 1 for 2, it only figures at the halfway mark of the global ICT development ranking, demonstrates a significant disparity between rates of mobile and mobile broadband connections, as well as a high proportion of prepaid subscriptions.

Finally, Mexico ranks lowest of the four countries for ICT development, its penetration rates are the lowest, and its level of prepaid subscriptions is, alongside South Africa’s, the highest. And although Mexico technically meets the 1 for 2 benchmark, OECD data reveal that for the poorest tier of households, the cost of a mobile subscription represents 6.2% of monthly income (OECD, 2017). This suggests that Mexico faces a significant challenge in facilitating adequate levels of mobile internet access, one which ZR might partially address. Although the Mexican state received a lower score than Colombia or Brazil for fomenting access – though still above the emerging country average of ‘6’ (A4AI, 2017) – it has embarked on major ICT infrastructure projects such as Mexico Conectado and Red Compartida (IFT, 2017). These schemes are sufficiently well-developed in terms of existing investments, and ambitious enough in terms of future goals (OECD, 2017), to suggest that, in common with Colombia, ZR might serve to complement rather than derail state connectivity programmes.

Overall, it is difficult to define ‘red lines’ for ZR by examining access through development indicators, because the confluence of factors is more dynamic and complex, especially within the infrastructure and innovation sub-index. In the first instance, the diverse states of telecoms infrastructure in the four countries under examination further complicate the equation. Moreover, it is difficult to interpret whether a country’s low score for state connectivity programmes means that ZR should be considered a threat to those nascent efforts, or an essential stopgap to realise the same objectives. Similarly, does a high level of capacity in national technological innovation mean that ZR constitutes a grave threat to the growth potential of the mobile software sector, or does it suggest that this ecosystem is robust enough to withstand the pressure? These are fundamental questions to consider when evaluating ZR, and they can only be substantively addressed through greater contextual analysis than the parameters of this study permit.

A more straightforward case, grounded in affordability and access, is that a combination of low penetration and high cost means there is a compelling argument for ZR addressing an economic barrier for many users. Even on this point, however, we must be aware that the 1GB for 2% of monthly income measure can prove a blunt tool, as it is based on average income (A4AI, 2017, p. 47). In societies like Brazil and Mexico that meet this affordability threshold, economic inequality is such that the wealthy few skew the average. Thus for many, the cost of mobile internet will be more onerous, and the economic benefits of ZR potentially more significant.
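The bluntness of an average-income threshold can be shown with a toy example (all incomes and the data price are hypothetical): a single high earner pulls the mean up enough for a market to pass the ‘1 for 2’ test even though most individual users fail it.

```python
# Hypothetical monthly incomes (USD) in a highly unequal market:
incomes = [200, 250, 300, 350, 5000]   # one high earner skews the mean
price_1gb = 9.0                        # hypothetical price of 1 GB prepaid

mean_income = sum(incomes) / len(incomes)             # 1220.0
market_passes = 100 * price_1gb / mean_income <= 2.0  # True (~0.74% of mean)

# Applied person by person, only the top earner finds 1 GB 'affordable':
passes_individually = [100 * price_1gb / i <= 2.0 for i in incomes]
# -> [False, False, False, False, True]
```

On these toy numbers the market comfortably meets ‘1 for 2’ on average, yet four of the five users face a cost above the 2% threshold, which is precisely the distortion the text describes for unequal societies.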

Conclusion

Zero rated mobile internet services represent a thorny public policy challenge in the global South. On one hand, they can overcome cost barriers to realise the valued goal of increasing mobile internet penetration. On the other, the dangers ZR poses to competition and innovation in the wireless and online services markets, as well as the implications of locking users into ‘walled gardens’ of content, are apparent. The premise of this research is that the challenge of ZR can be better addressed when it is rigorously contextualised; when we weigh the values of both neutrality and access on the scale. To that end, I created a typology of models of ZR. This classified the forms in which ZR is sold, and moved beyond a strict focus on ‘free data’ to demonstrate that ‘Near-0 Rating’ offers should also be considered.

I also identified two contextual frames through which ZR should be examined in order to evaluate the factors that accentuate or diminish its impact on neutrality and access. A political-economic lens guides our focus towards the market power of participating actors, as well as the circumstances in which the infringement of network neutrality can become pro- or anti-competitive. Examining indices of technology diffusion, meanwhile, helps to assess whether ZR can address affordability and infrastructural deficits, as well as whether local innovation might be impeded.

Through charting the uneven conceptual terrain on which ZR appears, we can discount the notion that addressing ZR is a zero-sum game composed of an ‘access or neutrality’ calculation. Instead, we need to be much more attentive to the multiple interlocking factors that influence how ZR impacts upon both the social goods sought by defenders of network neutrality, as well as the goals of digital inclusion advocates. The precise composition of these factors will vary in every society and wireless market, so the manner in which they are reconciled will depend on national policy priorities. Whether ZR is interpreted as a curse or a boon for local app development, for instance, is a matter for the relevant regulators, advocacy groups and industry associations to decide. Moreover, as previously stated, ZR is a moving target: although the dominant tendency captured in this research is to zero rate market leaders in each application category, an alternative approach based on zero rating entire classes of applications would require that the negative implications of ZR for innovation and competition be reassessed.

A policy of subsidised data and handsets, as introduced in Colombia, is arguably the ideal way to address limited mobile internet penetration for the most economically disadvantaged. However, in the absence of such progressive public policy, an absolute veto on ZR threatens to make the perfection implied by full internet access for all an enemy of the good. Any proponent of an absolute ban on ZR should rehearse a speech to an impoverished user in the global South to explain why access to socially essential communication services should remain beyond their means. Ultimately, rather than an on-ramp, we might better conceptualise ZR as a temporary relief road: a makeshift piece of the network that can accommodate mass demand while the proper permanent infrastructure (through both public policy and market provision) is established.

Regarding the limitations of this research, the data on the prevalence of ZR in the four markets examined here represents a snapshot in time, and the available insights are accordingly restricted. Longitudinal studies are needed to assess the impacts of ZR on innovation and competition over time, as well as to understand whether they represent a short-term marketing ploy, or a permanent fixture of these markets. What are also needed are large-scale studies that probe the practices of mobile internet users in the global South. These would help us better understand whether ZR entices non-users online, and the extent to which that introduction shapes later patterns of use; especially whether users migrate beyond zero rated silos.

References

A4AI (Alliance for Affordable Internet). (2016). Impacts of Emerging Mobile Data Services in Developing Countries. Research Brief No. 2. Retrieved from http://a4ai.org/the-impacts-of-emerging-mobile-data-services-in-developing-countries/

A4AI (Alliance for Affordable Internet). (2017). Affordability Report 2017. Retrieved from https://a4ai.org/wp-content/uploads/2017/02/A4AI-2017-Affordability-Report.pdf

Anatel (2017, December 8). Brasil registra 240,9 milhões de linhas móveis em operação em outubro de 2017 [Brazil registers 240.9 million active mobile lines in October 2017]. Retrieved from http://www.anatel.gov.br/dados/component/content/article?id=283

Baca, C., Belli, L., Huerta, E., & Velasco, K. (2018). Community Networks in Latin America: Challenges, Regulations and Solutions. Reston, VA; Geneva: Internet Society. Retrieved from https://www.internetsociety.org/wp-content/uploads/2018/12/2018-Community-Networks-in-LAC-EN.pdf

Bauer, J., & Obar, J. (2014). Reconciling political and economic goals in the net neutrality debate. The Information Society, 30(1), 1-19. doi:10.1080/01972243.2013.856362

Belli, L. (2017). Net neutrality, Zero-rating and the Minitelisation of the Internet. Journal of Cyber Policy, 2(1). doi:10.1080/23738871.2016.1238954

Bhatia, R. (2016, May 12). The inside story of Facebook’s biggest setback. The Guardian. Retrieved from https://www.theguardian.com/technology/2016/may/12/facebook-free-basics-india-zuckerberg

BEREC (Body of European Regulators for Electronic Communications) (2016). BEREC Guidelines on the Implementation by National Regulators of European Net Neutrality Rules. Retrieved from https://berec.europa.eu/eng/document_register/subject_matter/berec/regulatory_best_practices/guidelines/6160-berec-guidelines-on-the-implementation-by-national-regulators-of-european-net-neutrality-rules

Brito, C. (2015). The Internet in Mexico, two years after #ReformaTelecom. Digital Rights Latin America and the Caribbean. Retrieved from https://www.digitalrightslac.net/en/el-internet-en-mexico-a-dos-anos-de-la-reformatelecom/

Brodkin, J. (2017, February 28). FCC Head Ajit Pai: You can thank me for carriers’ new unlimited plans. Ars Technica. Retrieved from https://arstechnica.com/tech-policy/2017/02/fcc-head-ajit-pai-you-can-thank-me-for-carriers-new-unlimited-data-plans/

Business Tech (2017, June 28). SA mobile subscribers in 2017: Vodacom vs MTN vs Cell C vs Telkom. Retrieved from https://businesstech.co.za/news/mobile/182301/sa-mobile-market-share-in-2017-vodacom-vs-mtn-vs-cell-c-vs-telkom/

Carrillo, A. (2016). Having Your Cake and Eating It Too? Zero Rating, Net Neutrality and International Law. Stanford Technology Law Review, 19. Retrieved from https://law.stanford.edu/wp-content/uploads/2017/11/19-3-1-carrillo-final_0.pdf

Cell C (2017). Free Basics. Retrieved from https://www.cellc.co.za/cellc/free-basics-by-facebook

Crawford, S. (2015, January 7). Less than Zero. Wired. Retrieved from https://www.wired.com/2015/01/less-than-zero/

de Miera Berglind, O. (2016). The Effect of Zero-Rating on Mobile Broadband Demand: An Empirical Approach and Potential Implications. International Journal of Communication, 10. Retrieved from https://ijoc.org/index.php/ijoc/article/view/4651

Derechos Digitales (2017). Neutralidad de red en América Latina: Reglamentación, aplicación de la ley y perspectivas. Los casos de Chile, Colombia, Brazil y México [Network Neutrality in Latin America: Regulation, Law Enforcement and Perspectives. The cases of Chile, Colombia, Brazil and Mexico]. Retrieved from https://www.derechosdigitales.org/wp-content/uploads/NeutralidadeRedeAL_SET17.pdf

DTPS (Dept. of Telecommunications and Postal Services). (2016). National Integrated ICT Policy: (White Paper). Retrieved from https://www.dtps.gov.za/images/phocagallery/Popular_Topic_Pictures/National_Integrated_ICT_Policy_White.pdf

Economist Intelligence Unit (2017). The Inclusive Internet: Mapping Progress 2017. Retrieved from https://theinclusiveinternet.eiu.com/explore/countries/performance/affordability/competitive-environment/wireless-operators-market-share?highlighted=BR

Fuchs, C. (2015). Reading Marx in the Information Age: A Media and Communication Studies Perspective on Capital, Volume 1. Routledge: London. doi:10.4324/9781315669564

Galpaya, H. (2017). Global Commission on Internet Governance – Zero Rating in Emerging Economies (Paper series: No. 47). Ontario; London: Centre for International Governance Innovation; Chatham House. Retrieved from https://www.cigionline.org/sites/default/files/documents/GCIG%20no.47_1.pdf

García, L. & Brito, C. (2014). Enrique Peña Nieto contra el Internet [Enrique Peña Nieto against the Internet] [Blog post]. Retrieved from Nexos website: https://www.redaccion.nexos.com.mx/?p=6176

Goodman, E. (2016). Zero rating broadband data: Equality and free speech at the network’s other edge. Colorado Technology Law Journal, 15(1), 63-92. Retrieved from http://ctlj.colorado.edu/wp-content/uploads/2017/01/4-Goodman-12.29.16_FINAL_PDF-A.pdf

Governo do Brasil (2017). Aplicativo vai ampliar o acesso da população às informações de saúde [The application will increase the population's access to health information]. Retrieved from http://www.brasil.gov.br/saude/2017/06/aplicativo-vai-ampliar-o-acesso-da-populacao-as-informacoes-de-saude

Groene, F., Navelekar A. & Coakley M. (2017). An industry at risk: Commoditization in the wireless telecom industry. Strategy&. Retrieved from https://www.strategyand.pwc.com/reports/industry-at-risk

GSMA Intelligence (2015). Data. Retrieved from https://www.gsmaintelligence.com/markets/409/dashboard/

Hoskins, G. (2017). Draft once, deploy everywhere: Contextualizing digital law and Brazil’s Marco Civil da Internet. Television & New Media, 19(5), 431-447. doi:10.1177/1527476417738568

ICASA (Independent Communications Authority of South Africa). (2017). 2nd Report on the state of the ICT sector in South Africa. Retrieved from https://www.ellipsis.co.za/wp-content/uploads/2017/05/ICASA-Report-on-State-of-SA-ICTSector-2017.pdf

IFT (Instituto Federal de Telecomunicaciónes). (2016). Comparador de planes de telefonía móvil [Mobile Phone Plan Comparator]. Retrieved from http://comparador.ift.org.mx/indexmovil.php

IFT (Instituto Federal de Telecomunicaciónes). (2017). Reportes de informes trimestrales [Quarterly reports]. Retrieved from https://bit.ift.org.mx/SASVisualAnalyticsViewer/VisualAnalyticsViewer_guest.jsp?appSwitchDisabled=false&reportName=%C3%8Dndice+Informes+Trimestrales&reportPath=/Shared+Daa/SAS+Visual+Analytics/Reportes/&appSwitcherDisabled=true

Internet.org. (2017). Where we’ve launched. Retrieved from https://info.internet.org/en/story/where-weve-launched/

ITU (International Telecommunications Union). (2015). ICT Facts and Figures. Retrieved from http://www.itu.int/en/ITU-D/Statistics/Documents/facts/ICTFactsFigures2015.pdf

ITU (International Telecommunications Union). (2017). ICT Development Index 2017. Retrieved from http://www.itu.int/net4/ITU-D/idi/2017/index.html

Katz, R. & Callorda, F. (2015). Iniciativas para el Cierre de la Brecha Digital en America Latina [Initiatives to Close the Digital Divide in Latin America]. New York, NY: Telecom Advisory Services, LLC. Retrieved from http://www.mintic.gov.co/portal/604/articles-14374_pdf.pdf

LaFrance, A. (2016). Facebook and the New Colonialism. The Atlantic. Retrieved from https://www.theatlantic.com/technology/archive/2016/02/facebook-and-the-new-colonialism/462393/

Layton, R. & Calderwood, S. (2015). Zero Rating: Do hard rules protect or harm consumers and competition? Evidence from Chile, Netherlands, Slovenia. Social Science Research Network. Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2587542

Levendis, J., & Lee, S. H. (2013). On the endogeneity of telecommunications and economic growth: Evidence from Asia. Information Technology for Development, 19(1), 62–85. doi:10.1080/02681102.2012.694793

Levy, J. (2015, May 5). Opinion: Facebook’s Internet.org isn’t the Internet, it’s Facebooknet. Wired. Retrieved from https://www.wired.com/2015/05/opinion-internet-org-facebooknet/

Lobo, A. & Grossman, L. (2016). Zero rating: Marco Civil proíbe ou não acordos comerciais com as OTTs? [Does Marco Civil prohibit commercial agreements with OTTs or not?]. Convergência Digital [Digital Convergence]. Retrieved from http://convergenciadigital.uol.com.br/cgi/cgilua.exe/sys/start.htm?UserActiveTemplate=site&inoid=42398

Malcolm, J. (2014). Net Neutrality and the Global Digital Divide. Electronic Frontier Foundation. Retrieved from https://www.eff.org/deeplinks/2014/07/net-neutrality-and-global-digital-divide

Marsden, C. (2016). Comparative Case Studies in Implementing Net Neutrality: A Critical Analysis of Zero Rating. SCRIPTed, 13(1), 2-38. doi:10.2966/scrip.130116.1

McChesney, R. (2000). Rich Media, Poor Democracy: Communication Policy in Dubious Times. New York, NY: The New Press.

MinTIC (Ministerio de Tecnologías de la Información y las Comunicaciones). (2011). Ley No. 1450. Retrieved from http://www.mintic.gov.co/portal/604/articles-3821_documento.pdf

MinTIC (Ministerio de Tecnologías de la Información y las Comunicaciones). (2016). Internet móvil para los colombianos más necesitados [Mobile Internet for Colombians most in need]. Retrieved from http://www.mintic.gov.co/portal/604/w3-article-16860.html

MinTIC (Ministerio de Tecnologías de la Información y las Comunicaciones). (2017). Boletín trimestral de las TIC [Quarterly ICT Newsletter]. Retrieved from http://colombiatic.mintic.gov.co/602/articles-55212_archivo_pdf.pdf

Mosco, V. (2008). Political Economy of the Media. In W. Donsbach (Ed.), The International Encyclopedia of Communication. Blackwell Publishing. doi:10.1002/9781405186407.wbiecp057.pub3

Mozilla Foundation (2017). Equal Rating. Retrieved from https://equalrating.com/research/

Nunziato, D. (2009). Virtual freedom: net neutrality and free speech in the Internet age. Stanford, CA: Stanford University Press.

OECD (Organisation for Economic Co-operation and Development). (2017). Telecommunication and Broadcasting Review of Mexico 2017. Retrieved from http://www.oecd.org/mexico/oecd-telecommunication-and-broadcasting-review-of-mexico-2017-9789264278011-en.htm

Open Signal (2016). Global state of mobile networks. Retrieved from https://www.opensignal.com/reports/2016/08/global-state-of-the-mobile-network

Palfrey, J. & Gasser, U. (2012). Interop: The promise and perils of highly interconnected systems. New York, NY: Basic Books.

Presidencia da Republica (2016). Decreto No. 8.771. Retrieved from http://www.planalto.gov.br/ccivil_03/_Ato2015-2018/2016/Decreto/D8771.htm

Rossini, C. & Moore T. (2015). Exploring Zero Rating Challenges: Views from Five Countries (Working Paper). Washington, DC: Public Knowledge. Retrieved from https://www.publicknowledge.org/documents/exploring-zero-rating-challenges-views-from-five-countries

South African Government (2013). South Africa connect: Creating opportunities, ensuring inclusion, South Africa’s broadband policy. Retrieved from https://www.gov.za/sites/default/files/37119_gon953.pdf

Statista (2017). Most famous social networking sites 2017, by active users. Retrieved from https://www.statista.com/statistics/272014/global-social-networks-ranked-by-number-of-users/

Steyn, L. (2016, February 05). SA wades into global app regulation battle. Mail & Guardian. Retrieved from https://mg.co.za/article/2016-02-04-sa-wades-into-global-app-regulation-battle

Sylvain, O. (2016). Network equality. Hastings Law Journal, 67(2), 443-498. Retrieved from http://www.hastingslawjournal.org/network-equality/

Teleco (2017). Estatísticas de celulares no Brasil [Cellular statistics in Brazil]. Retrieved from http://www.teleco.com.br/ncel.asp

Telecom Paper (2016, May 16). Telefonica Vivo launches sponsored data service. Retrieved from https://www.telecompaper.com/news/telefonica-vivo-launches-sponsored-data-service--1143609

TIM Brasil. (2015, February 29). Capítulo 2: Da Neutralidade da Rede [Chapter 2: Network Neutrality]. Comment posted to http://pensando.mj.gov.br/marcocivil/texto-em-debate/minuta/

TRAI (Telecom Regulatory Authority of India). (2016). Prohibition of Discriminatory Tariffs for Data Services Regulations. New Delhi, India.

van Schewick, B. (2012). Internet architecture and innovation. Cambridge, MA: MIT Press.

van Schewick, B. (2016). T-Mobile’s Binge On violates key network neutrality principles. Retrieved from https://services.crtc.gc.ca/pub/DocWebBroker/OpenDocument.aspx?DMID=2647608

Viecens, M. & Callorda, F. (2016). La brecha digital en America Latina: Precio, calidad y accesibilidad de la banda ancha en la región [The Digital Divide in Latin America: Price, Quality and Accessibility of Broadband in the Region] (Report). Ottawa: International Development Research Centre.

West, D. M. (2015). Digital divide: improving Internet access in the developing world through affordable services and diverse content (Report). Washington, DC: Brookings Institution. Retrieved from https://www.brookings.edu/wp-content/uploads/2016/06/West_Internet-Access.pdf

Wikimedia Foundation. (2017). Wikipedia Zero. Retrieved from https://wikimediafoundation.org/wiki/Wikipedia_Zero

WEF (World Economic Forum). (2016). The Global Information Technology Report 2016. Retrieved from http://www3.weforum.org/docs/GITR2016/WEF_GITR_Full_Report.pdf

World Bank (2017). Urban population. Retrieved from https://data.worldbank.org/indicator/SP.URB.TOTL.IN.ZS?page=1

Wu, T. (2003). Network neutrality, broadband discrimination. Journal of Telecommunications and High Technology Law, 2, 141-179. Retrieved from https://scholarship.law.columbia.edu/faculty_scholarship/1281

Annex

Brazil

| Carrier | Market share* | Operator Group | Plan | Pre/Post | ZR | Description |
| --- | --- | --- | --- | --- | --- | --- |
| Vivo | 30.00% | Telefonica | Vivo Pos | Post | N | 4 different voice/data plans from 6-30GB |
| Vivo | | Telefonica | Vivo V | Post | N | 4 different voice/data plans from 6-30GB |
| Vivo | | Telefonica | Vivo Controle | Pre/Post | N | 5 monthly voice/data plans with data cap 1-3GB. Once cap is reached, purchase of new data bundle required. |
| Vivo | | Telefonica | Vivo Internet Redes Sociais | Add-on | Y | Triple lock data bundle: one month or one week add-on permitting 400MB or 800MB use of FB, FB Messenger & Twitter. Available with Pre Vivo Turbo and Vivo Controle. |
| Vivo | | Telefonica | Vivo Turbo | Pre | N | 4 weekly/monthly voice/data plans with data cap 300MB-1.2GB |
| Vivo | | Telefonica | Vivo Easy | Pre | N | 4 monthly voice/data plans with data cap 1.5-3GB |
| TIM | 25.00% | Telecom Italia | TIM Pre 1GB | Pre | Y | 7-day package including 500MB data cap, voice plus unltd. WhatsApp & music streaming via Deezer |
| TIM | | Telecom Italia | TIM Pre 150 | Pre | Y | 7-day package including 150MB data cap, voice plus unltd. WhatsApp |
| TIM | | Telecom Italia | TIM Pre Diario | Pre | Y | 1-day package including 50MB data cap, voice plus unltd. WhatsApp |
| TIM | | Telecom Italia | TIM Pre 1.5GB | Pre | Y | 30-day package including 1GB data cap, voice plus unltd. WhatsApp |
| TIM | | Telecom Italia | TIM Beta | Pre | Y | Monthly & weekly voice/data plan with 10 or 1.5GB cap plus unltd. music streaming with Deezer |
| TIM | | Telecom Italia | TIM Beta Diario | Pre | N | Daily 100MB |
| TIM | | Telecom Italia | Turbo WhatsApp | Pre | Y | 30-day package including 50MB per day for WhatsApp and 50MB data cap for the duration |
| TIM | | Telecom Italia | Infinity Turbo 7 | Pre | Y | 7-day package including voice, 100MB data cap per day and unltd. WhatsApp |
| TIM | | Telecom Italia | TIM Controle Light Factura | Pre | N | 30-day package including voice and 1GB of internet |
| TIM | | Telecom Italia | TIM Controle | Pre | Y | 30-day package including voice, 2GB of internet and unltd. WhatsApp and Banca Virtual |
| TIM | | Telecom Italia | TIM Music by Deezer | Add-on | Y | Available with all Pre and Controle plans: weekly unltd. music streaming for set fee |
| TIM | | Telecom Italia | TIM Black | Post | Y | 5 monthly voice/data plans 3-20GB w/ TIM Music and Banca Virtual (Brazilian digital magazines at no cost) |
| TIM | | Telecom Italia | TIM Torcedor | Add-on | Y | Available with TIM Pos: free video of your favourite team's goals |
| TIM | | Telecom Italia | TIM Pos Express | Post | Y | 2 monthly voice/data plans with 3 or 5GB data cap plus TIM Music and Banca Virtual |
| TIM | | Telecom Italia | TIM Da Vinci | Post | N | Monthly voice/data plan with 50GB data cap |
| Claro | 25.00% | America Movil | Claro Controle | Post | Y | 2/3GB monthly voice/data plans w/ unltd. WhatsApp, Claro Music and Video |
| Claro | | America Movil | Claro Pos Giga 5/6/7/9/14/25 | Post | Y | Includes unltd. WhatsApp, Claro Musica |
| Claro | | America Movil | Claro PreMix Mega | Pre | Y | 250MB monthly data plus WhatsApp & Claro Musica |
| Claro | | America Movil | Pacote WhatsApp | Pre | Y | Multiple daily and monthly voice/data packages w/ unltd. WhatsApp |
| Claro | | America Movil | Claro Pre Mix Super Giga | Pre | Y | 1GB monthly data plus unltd. WhatsApp & Claro Musica |
| Oi | 18.00% | Oi SA | Pos-Pago | Post | N | 4 monthly voice/data plans w/ 4-20GB data cap |
| Oi | | Oi SA | Controle | Post | N | 3 monthly voice/data plans w/ 1-3.5GB data cap |
| Oi | | Oi SA | Pre | Pre | N | Sliding scale of 8 time-ltd. voice/data plans from 10-30 days |

Colombia

| Carrier | Market share | Operator Group | Plan | Pre/Post | ZR | Description |
| --- | --- | --- | --- | --- | --- | --- |
| Claro | 53.10% | America Movil | Smartphone en prepago | Pre | Y | Triple lock w/ Tw, FB, WhatsApp |
| Claro | | America Movil | Compra tu SIM | Pre | Y | Triple lock w/ Tw, FB, WhatsApp |
| Claro | | America Movil | Reventa Control | Pre | Y | Triple lock w/ Tw, FB, WhatsApp |
| Claro | | America Movil | El Propio Chip | Pre | Y | Triple lock w/ Tw, FB, WhatsApp |
| Claro | | America Movil | Prepago Amigo | Pre | Y | Triple lock w/ Tw, FB, WhatsApp |
| Claro | | America Movil | Prepago Facil | Pre | Y | Triple lock w/ Tw, FB, WhatsApp |
| Claro | | America Movil | Prepay Data Packets | Pre | Y | Triple lock w/ Tw, FB, WhatsApp |
| Claro | | America Movil | Plan SM/IP Nav | Post | Y | Unltd. access to WhatsApp, Twitter, FB |
| Claro | | America Movil | Plan Navegacion BB | Post | Y | Data cap plus unltd. FB, Twitter, Gtalk, MySpace, Yahoo Messenger, BB Messenger |
| Claro | | America Movil | Sinlimitenav 1/3/6/10GB | Post | Y | Unltd. access to WhatsApp, Twitter, FB |
| Tigo | 17.30% | Millicom International Cellular SA | Cargo basico 1.2 & 2.5 GB | Post | Y | Unltd. access to WhatsApp & FB plus either Tigo Go music or Tigo Sports |
| Tigo | | Millicom International Cellular SA | Cargo basico 3.5, 4.5, 6.5 GB | Post | Y | Unltd. access to WhatsApp, FB & 2 from 13 premium apps |
| Tigo | | Millicom International Cellular SA | Paquete prepago (x4) | Pre | Y | 1, 3, 7, 30 day packets with data cap and unltd. FB & WhatsApp |
| Tigo | | Millicom International Cellular SA | Super Bolsas Tigo (x5) | Pre | Y | 30-day data caps. 3 w/ unltd. WhatsApp; 2 w/ unltd. WA & FB |
| Tigo | | Millicom International Cellular SA | Prepagada en combo | Pre | Y | 15 different time-ltd. voice/data packets with unltd. FB & WhatsApp |
| Tigo | | Millicom International Cellular SA | Prepagadados de datos | Pre | Y | 15 different time-ltd. data packets with unltd. FB |
| Movistar | 23% | Telefónica Móviles Colombia S.A. | Plan Innovacion (x5) | Post | Y | 8 different data caps w/ unltd. Waze, Line, FB, Twitter, WhatsApp (even after data cap is reached) |
| Movistar | | Telefónica Móviles Colombia S.A. | Plan Innovacion (x3) | Post | Y | Waze, Line, FB, Twitter, WhatsApp unltd. (even after data cap is reached) PLUS Movistar Musica and/or Movistar Play |
| Movistar | | Telefónica Móviles Colombia S.A. | Internet 1,2,4,8GB | Post | Y | Unltd. WhatsApp plus data cap |
| Movistar | | Telefónica Móviles Colombia S.A. | Todo En Uno | Pre | Y | 7/90/180 days of voice/data plus unltd. FB, Twitter & WhatsApp |

Mexico

| Carrier | Market share* | Operator Group | Plan | Pre/Post | ZR | Description |
| --- | --- | --- | --- | --- | --- | --- |
| Telcel | 67% | America Movil | Max Sin Limite 2/3/5/6/6.5/7/9/12,000MB | Post | Y | FB, Twitter & WhatsApp, Claro Video unltd. 5K MB > +Uber |
| Telcel | | America Movil | Telcel Internet 1/2/3.5/7/10/20 | Post | Y | FB, Twitter & WhatsApp, Claro Video unltd. 7K MB > +Uber |
| Telcel | | America Movil | Telcel Max | Post | Y | FB, Twitter & WhatsApp |
| Telcel | | America Movil | Amigo Sin Limite | Pre | Y | Sliding scale of triple locks w/ capped FB & Twitter in Mexico & WhatsApp in North America |
| Telcel | | America Movil | Amigo Por Segundo | Pre | N | Sliding scale of triple locks w/ capped FB & Twitter in Mexico & WhatsApp in North America |
| Telcel | | America Movil | Amigo Optimo Plus Sin Frontera | Pre | N | Sliding scale of triple locks w/ capped FB & Twitter in Mexico & WhatsApp in North America |
| Movistar | 24% | Telefonica | Vas a Volar - 1.5/3/4.5/6/9/12/15,000MB | Post | Y | Plus sliding scale of 2, 3 or 4GB of WhatsApp, Tw & FB |
| Movistar | | Telefonica | Vas a volar | Pre | Y | Sliding scale data packets 2/4/5.5/7/10/15 plus sliding scale of 2, 3 or 4GB of WhatsApp, Tw & FB |
| AT&T Mexico | 9% | AT&T | AT&T Con Todo 500MB-8GB | Post | Y | 10 data packets: unltd. FB, Twitter & WhatsApp AND 'new SNS' Snapchat, Instagram & Uber |
| AT&T Mexico | | AT&T | AT&T a Tu Manera | Post | Y | 9 data packets: unltd. FB, Twitter & WhatsApp AND 'new SNS' Snapchat, Instagram & Uber |
| AT&T Mexico | | AT&T | Unidos Prepago | Pre | Y | Sliding scale of 10 time-ltd. packets. All include capped data for WhatsApp, FB and Twitter AND SC & Instag for 5 most expensive packets |
| AT&T Mexico | | AT&T | AT&T a Tu Manera te damos Mas | Pre | Y | 2/3/5/8GB plus unltd. FB, Twitter, WhatsApp AND unltd. Uber, Snapchat, Instagram |
| AT&T Mexico | | AT&T | Recarga Plus | Pre | Y | 1GB of internet plus cap for all above SNS |

South Africa

| Carrier | Market share | Operator Group | Plan | Pre/Post | ZR |
| --- | --- | --- | --- | --- | --- |
| Vodacom | 39.20% | Vodafone | VARIOUS (24) | Pre | N |
| Vodacom | | Vodafone | VARIOUS (26) | Post | N |
| Cell C | 14% | 3C Telecommunications (SA) | LTE Power Plan | Post | N |
| Cell C | | 3C Telecommunications (SA) | Smartdata | Post | N |
| Cell C | | 3C Telecommunications (SA) | Smartdata TopUp | Post | N |
| Cell C | | 3C Telecommunications (SA) | FREE BASICS | Pre & Post | Y |
| MTN | 33% | MTN Group (SA) | MTN Sky (4) | Post | N |
| MTN | | MTN Group (SA) | New MTN Sky | Post | N |
| MTN | | MTN Group (SA) | My MTN Choice +Talk | Post | N |
| MTN | | MTN Group (SA) | My MTN Choice | Pre & Post | N |
| MTN | | MTN Group (SA) | My MTN Choice Flexi | Post | N |
| MTN | | MTN Group (SA) | My MTN Choice+ | Post | N |

Footnotes

1. Network neutrality refers to the principle that network operators should treat all information packets in an isonomic fashion, and should not discriminate based on sender, receiver, content, device or application. Although it is widely agreed that some traffic management practices are essential, these should not extend to forms of discrimination such as throttling and blocking (negative) or priority access (positive) that produce a commercial/competitive advantage for network operators.

2. See the ‘Zero Rating Map’ coordinated by Luca Belli for a survey of the global landscape of zero rating https://public.tableau.com/profile/zeroratingcts#!/vizhome/zeroratinginfo/Painel1

3. Pre-pay services involve an upfront charge to the user, in exchange for a finite amount of voice or data service. When the contracted airtime or data has expired, the user must pay an extra charge in order to be permitted to continue using the service, or wait until the beginning of their next billing period. Post-pay services present users with an invoice at the end of each billing period for a service bundle that often permits the user to exceed the caps on any contracted services on a pro-rata basis.

4. The Wikimedia Foundation announced on 16 February 2018 that the service would be discontinued at the end of 2018.

5. The figures listed in this table do not cumulatively equal 100% for each column, but instead indicate in every row the percentage of plans that include a ZR component for each payment category, in each market.

6. The Herfindahl-Hirschman Index measures concentration by the number of firms operating in a particular industry and their market share.

7. Excluding social networking sites where the majority user base is resident in only one country, e.g., WeChat, QQ and QZone.

8. It should be noted that in the Brazilian case, many common examples of zero rating are in fact illegal according to the regulation of the Marco Civil da Internet law, which prohibits positive discrimination of vertically integrated apps (Governo do Brasil, 2017).

9. Vivo claims a 30% market share, and does not offer ZR in any of its plans. TIM claims a 25% market share and 90% of its pre-pay, and 66% of its post-pay plans feature some ZR component. Claro also claims a 25% market share, and all of its pre and post pay plans contain some ZR component. All data recorded from the carrier websites in July 2017.

10. It should be noted that in certain situations, ZR of a market-leading platform or service by a struggling or new MISP could restrict competition at the application layer, even if the wireless market is not adversely affected.

11. It should be noted that more granular data is available to assess more precisely the state of innovation within local app development ecosystems. Within the limitations of this study, however, the national capacity of innovation ranking assessed by the World Economic Forum provides a useful proxy for assessing general national innovation, from which the levels of more specific sectors can be inferred.

12. It should be noted that significant disparities in access to WiFi may exist between urban and rural areas, meaning that a high national average could still obscure a dearth of infrastructure in rural areas and a commensurate dependence on ZR.

13. By ensuring that use of certain communication platforms and information services does not count against data-capped access to the full mobile internet, or by providing some app-specific access to those who have no mobile internet access.
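The Herfindahl-Hirschman Index defined in footnote 6 is simply the sum of the squared market shares of all firms in a market. A minimal sketch, using the South African carrier shares reported in the Annex; the 13.8% residual share assigned to remaining operators is an assumption for illustration, not a figure from this study:

```python
# Herfindahl-Hirschman Index: sum of squared market shares (in percent).

def hhi(shares_percent):
    """Return the HHI for a list of market shares expressed as percentages."""
    return sum(s ** 2 for s in shares_percent)

# Vodacom, MTN, Cell C (from the Annex) plus an assumed 13.8% residual
# for all other operators, so the shares total 100%.
south_africa = [39.2, 33.0, 14.0, 13.8]
print(round(hhi(south_africa), 2))
```

By the conventional thresholds used in competition analysis, values above roughly 2,500 indicate a highly concentrated market, which is consistent with the wireless market concentration discussed in this study.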


Data and digital rights: recent Australian developments


This paper is part of Practicing rights and values in internet policy around the world, a special issue of Internet Policy Review guest-edited by Aphra Kerr, Francesca Musiani, and Julia Pohle.

Introduction

Digital rights have become a much debated set of issues in a world in which digital communications, cultures, platforms, and technologies are key to social life (Couldry et al., 2018; Hintz, Dencik, & Wahl-Jorgenson, 2019; Isin & Ruppert, 2015).

We see this, for example, in public debates about the widespread application of biometric systems, facial recognition, or the mandatory retention of telecommunications data: strategies nominally mobilised by nation states in their pursuit of information about terrorist threats, but also used to control political dissidence. Similarly, the widespread involvement of non-state actors in the capture, analysis and trade of personal information has heightened public fears about how corporate use of their data might affect their access to information, goods and services, and has also prompted questions about discriminatory applications of automated decision-making (Eubanks, 2018). Increasingly, too, the linkage and use of data by governments in decision-making, and the links between state and non-state actors in the collection, use, and sharing of data, elicit concerns relating to power and inequality. Governments are using data well beyond the security context, and are also intimately connected with the collection and use of data by private actors (including the sharing of data with third parties).

Globally and locally, it has proven difficult for citizens to press their governments to take action, especially given the increasingly complex interplay among national (and sub-national), regional, and global laws, policies, and innovation systems when it comes to the internet and associated technologies. Outcomes for consumers, citizens, civil society, business, and institutions should, at least in theory, be strongly shaped by the kinds of fundamental human rights set out in longstanding international frameworks, policed (or not policed) by institutions such as the United Nations, and enshrined in national charters of rights and rights-promoting national legislation. But both national and international institutions have been slow to grapple with and enact aspects of digital rights, even as governments and non-state actors take actions that restrict or undermine those rights. The technologies themselves have, however, facilitated some counterbalance to this effect through the growth of new rights advocacy organisations and models enabled by digital platforms, such as the US-based international group Access Now (Solomon, 2018).

Adding to the challenges are the decisive roles played in communications and media by non-state governance and regulation arrangements, such as the community standards and terms of service of digital platforms, which decisively shape global content regulation on social media channels. There are risks that these efforts will protect existing power relations, and will deflect, and make more difficult, the activation of digital rights in the context of data tracking, collection, and trading that is pervasive, embedded, and automated in everyday life by digital systems.

All in all, digital rights have long, entangled genealogies, as well as challenges (Liberty, 1999). Little surprise, then, that the turn to digital rights has been roundly critiqued for its incoherent and partial nature. In a notable paper, for instance, Kari Karppinen argues that the umbrella concept of “digital rights” falls short of being a coherent framework. Rather, Karppinen suggests, digital rights amount to a diverse set of debates, visions, and perspectives on the process of contemporary media transformations (Karppinen, 2017). He proposes that we approach digital rights as “emerging normative principles for the governance of digital communication environment[s]” (Karppinen, 2017, p. 96).

Reflecting on this suggestion, we imagine that such normative principles are likely to come from existing human rights frameworks, as well as emergent conceptions and practices of rights. Some especially important issues in this regard, which theorists, activists, policymakers, and platform providers alike have sought to explore via notions of digital rights, are evolving citizen uses of platforms like Facebook, personal health tracking apps, and state e-health registers and databases, and the associated rights and responsibilities of platform users.

To explore these issues, in 2017 we conducted an Australian study of citizen uses of, and attitudes in relation to, emerging digital technology and rights (Goggin et al., 2017), as part of a larger project on digital rights and governance in Australia and Asia (Goggin et al., 2019). Our study drew on three sources of data: a national survey of the attitudes and opinions of 1,600 Australians on key rights issues; focus group discussions of related rights scenarios; and analysis of legal, policy and governance issues (Goggin et al., 2017).

In summary, our study showed that the majority of respondents are concerned about their online privacy, including in the relatively new area of digital privacy at work. A central issue for a very high proportion of the respondents we surveyed is control. Their concerns regarding control are not sufficiently addressed by the privacy settings and options available to them. An underlying issue appears to be a lack of knowledge about what platforms and other core actors (such as corporations and governments) do with internet users’ information, and a consequent absence of any sense of control. Our findings showed considerable concern about individual privacy and data protection, and about the adequacy of responses by technology corporations and governments (cf. the key report by Digital Rights Watch, 2018).

Like other studies nationally and internationally (OAIC, 2017; Ofcom, 2018; Pew, 2016; Center for the Digital Future, 2017), these findings lend firm support to the need for better policy and design frameworks and practices to respond to such concerns.

Following hard on the heels of our research, 2018–2019 brought successive waves of revelations and debates about data privacy breaches. Key among these was the Cambridge Analytica/Facebook exposé (Cadwalladr & Graham-Harrison, 2018; Isaak & Hanna, 2018), but many other well-publicised and controversial issues have also been raised by the data collection and sharing practices of corporations and governments, by surveillance practices, and by the lack of effective safeguards or accountability mechanisms for citizens.

In mid-2018, expectations were raised around the world by the implementation of the European General Data Protection Regulation (GDPR), with many hoping that this law would have a decisive influence on corporate policies and practices internationally, including in jurisdictions outside the direct orbit of European polity, law, and governance.

Against this backdrop, in this paper, we reflect upon subsequent developments in Australia in data privacy rights.

In the first part of the paper, we discuss Australian policy in comparison to the European and international developments. In the second part, we discuss two contemporaneous and novel Australian policy developments initiated by the national government: a Digital Platforms Inquiry; and the development of a consumer data right.

Both policy initiatives seek to grapple with the widening pressure to provide better public domain information, fair and effective options for users to exercise choice over how they configure technologies, strengthened legal frameworks, enhanced rights, and better avenues for redress. Both also illustrate the uniquely challenging environment for digital rights in Australia.

Australian digital rights, privacy, and data protection in international context

The concept of rights has a long, complex, and rich set of histories across politics, law, philosophy, and ethics –– to mention just a few key domains. Shortly after the 70th anniversary of the United Nations Universal Declaration of Human Rights in 2018, it is evident that the very idea of rights remains strongly contested from a wide range of perspectives (Blouin-Genest, Doran, & Paquerot, 2019; Moyn, 2018). The recognition of certain rights is shaped by cultural, social, political, and linguistic dynamics, as well as by particular contexts and events (Erni, 2019; Gregg, 2012; Hunt, 2007; Moyn, 2010).

The way that we acknowledge, defend, or pursue rights — our contemporary rights “setting” — has also been shaped by the heritage of this concept in international relations as well as in local contexts, and by the pivotal role that rights instruments, language and discourses, practices, and struggles play in our economic, political, and social arrangements (Gregg, 2016; López, 2018). In each country, there are particular histories, arrangements, and challenges concerning rights. In relation to our Australian setting, there is a fundamental threshold issue about the constitutional and legal status of rights (Chappell, Chesterman, & Hill, 2009; Gerber & Castan, 2013). As often observed, Australia lacks an explicit, overarching constitutional or legal framework enunciating and safeguarding rights — a gap that has led many over recent years to propose a national bill of rights (Byrnes, 2009; Erdos, 2010), and has led three sub-national governments, the Victorian, Australian Capital Territory, and most recently (in 2019) Queensland governments, to develop their own human rights charters.

The Australian setting is interesting to data privacy scholarship for a range of reasons, including its status as an ambiguously placed nation across global North and South (Gibson, 1992; Mann & Daly, 2018), and between West and East (Goggin, 2008; Keating, 1996). It stands as proof that protection for human rights is not inevitable, even in a Western liberal democracy. The absence of a bill of rights or equivalent to the European Convention on Human Rights in Australia has significant implications in this context. Not least, it arguably diminishes the quality of the discussion about rights, because, for instance, it means that Australia lacks opportunities for measured judicial consideration of acts that may breach human rights, or questions regarding the proportionality or trade-off to be drawn between, for example, national security and privacy (Mann et al., 2018). That leaves researchers, institutions, and the wider society –– including the public –– with a relatively impoverished rights discussion that is skewed by the political considerations of the day and the views of advocacy groups on all sides.

The Australian case is of particular relevance to the UK going forward, and to understanding the data privacy rights evolution of kindred ‘Westminster’ democracies (Erdos, 2010). The UK has a Human Rights Act, and it has some teeth; however, the UK lacks any constitutional bill of rights (Hunt, 2015; Kang-Riou, Milner, & Nayak, 2012) –– although this has been a longstanding proposal from some actors (Blackburn, 1999), including the Conservative Party during the 2015 UK General Election. Up to now, however, it has been possible to challenge actions in the UK via EU institutions, and the UK has been bound by specific instantiations of rights in detailed EU instruments. Owing to Brexit, the existing UK human rights arrangements look likely to become unmoored from at least some judicial systems of Europe (subject to the shape of the final arrangements) (Gearty, 2016; Young, 2017, pp. 211-254).

Notably, in recent times, it has been the proactive European Union (EU) response that has gained widespread attention (Daly, 2016b). Data protection is enshrined in the Treaty on the Functioning of the EU (Article 16). The fundamental right to the protection of personal data is also explicitly recognised in Article 8 of the 2000 Charter of Fundamental Rights of the European Union, as is the general right to respect for ‘private and family life, home and communications’ (Article 7). The EU’s new GDPR (European Union, 2016) took effect in May 2018. The GDPR ‘seeks to harmonise the protection of fundamental rights and freedoms of natural persons in respect of processing activities and to ensure the free flow of personal data between [EU] Member States’ (Recital 3). In part, the GDPR represents an important early effort to address the implications of large-scale data analytics and automated processing and decision-making. The implications of the GDPR for citizen rights remain untested in the courts so far, but the implementation of the law has provided a focal point for a sustained academic, policy, and industry discussion of automated data processing in Europe.

In the area of privacy and data protection, Europe has played an important normative role (Voloshin, 2014) in Australian debates (Stats, 2015), because of its leadership in this area and the involvement of many Australian researchers, jurists, parliamentarians, policy-makers, and industry figures in engagement with European actors and trends (Calzada, 2018; Poullet, 2018; Stalla-Bourdillon, Pearce, & Tsakalakis, 2018; Vestoso, 2018). Recently, the EU’s expanded emphasis on its external policy portfolio, and its capacity to serve as a more “joined-up global actor”, has been theorised as a kind of “new sector diplomacy” (Damro, Gstöhl & Schunz, 2017). Already the GDPR has some global effect –– introducing compliance obligations for international organisations or businesses based outside the EU that have an establishment in the EU, that offer goods and services in the EU, or that monitor or process data about the behaviour of individuals in the EU.

Another route for this strong influence has been via joint efforts under the auspices of the OECD. A watershed here was the creation and adoption of the OECD Guidelines on the Protection of Privacy and Transborder Flows of Personal Data (OECD, 1980/2013). The distinguished Australian High Court judge and law reformer Justice Michael Kirby was Chairman of the OECD Expert Groups on Privacy (1978–80) and Data Security (1991–92). He notes that the OECD Guidelines, and the privacy principles they contain, “profoundly influenced” the foundational 1988 Australian Privacy Act that remains in force today (Kirby, 1999). More recently, there is abundant evidence of the influence of European law and policy reform on the wider region, as documented in the work of Australian privacy scholar Graham Greenleaf, notably his comparative study of Asian data privacy laws (Greenleaf, 2014).

Europe is thus a lodestone and an internationally respected point of reference for privacy and data protection, including in Australia. Yet European developments also offer a stark contrast to the situation in Australia, where, in recent years, law-makers have been slow to respond to expressions of citizen and user concern about data privacy (Daly, 2016a; Daly, 2018).

State of play of data privacy in Australia: a snapshot

Australian privacy law is the result of both legislation and the common law. There is no right to privacy enshrined in the Constitution. Information collection and processing by government and by larger private sector players is governed by the Privacy Act 1988 (Cth) and a range of state and territory legislation. These instruments do not, however, provide an enforceable right to privacy. The Privacy Act includes 13 Australian Privacy Principles (APPs) that impose obligations on government and private sector organisations (with some important exclusions) when collecting, handling, storing, using, and disclosing personal information, and certain rights for individuals to access and correct personal information. The Privacy Principles place more stringent obligations on entities that handle “sensitive information” about an individual, including information about their health and biometric data, racial or ethnic origin, political opinions and membership, religious beliefs or affiliation, sexual orientation, and criminal record. Both the current Australian legal framework and the terms and conditions applied by online platforms are based on a model of notice and consent: notification that personal information is being collected, and consent to those uses. Yet as our 2017 study indicated, even where citizens may have assented to their data collection, and may be taking active steps to protect their privacy, they still worry that they lack knowledge of its potential uses, and control over the acquisition of personal information.

Australians, however, have no direct right to sue for a breach of the principles –– only rights to complain, first to the organisation involved or, if there is no satisfactory response, to the Office of the Australian Information Commissioner (OAIC). For its part, the OAIC’s powers include “investigating individuals' complaints [in the second instance] and commencing Commissioner initiated investigations, making a determination about breaches of privacy, and applying to the Federal Court for a civil penalty order for serious or repeated interferences with privacy” (OAIC, 2018). The role, powers, and resourcing of the OAIC and its failure to take enforcement actions have been the subject of considerable criticism (Australian Privacy Foundation, 2018; Daly, 2017).

Australians’ rights against unwanted intrusions on seclusion, or the unwanted revelation of private information, are also limited. The appellate courts in Australia do not currently recognise any civil cause of action for invasion of privacy, although the High Court has left open the possibility of developing one (Daly, 2016a). There is some potential to seek remedies for serious invasions of privacy through other legal mechanisms, such as legal rights to prevent physical invasion or surveillance of one’s home, rights against defamation or the disclosure of confidential information, or even copyright law (ALRC, 2014). Proposals to recognise a statutory cause of action from the Australian Law Reform Commission have not been acted on (Daly, 2016a).

None of these various Australian legal regimes have responded to broader shifts in the capacity to gather data on a larger scale, to link datasets, to analyse and pattern data, and to use such capacities to draw inferences about people or tailor what people see or the decisions that are made about them at an ever more fine-grained level (despite relatively recent 2014 reforms, cf. Von Dietze & Allgrove, 2014). For now, Australians’ hope of some data protection may be indirect, via the rising tide of the GDPR and European-influenced international frameworks.

As charted by the Australian privacy law expert and advocate Professor Graham Greenleaf, an effective global standard is also emerging, due to the widespread adoption of standards in accordance with Data Protection Convention 108/108+ (COE, 2018; Greenleaf, 2018a & 2018b), the Council of Europe data protection convention that includes many of the GDPR requirements –– what Greenleaf terms “GDPR-lite” (Greenleaf, 2018a; Kemp, 2018). Article 27 makes the Convention open to “any State around the globe complying with its provisions” (COE, 2018, clause 172, p. 32).

In October 2018, Joseph A. Cannataci, the UN Special Rapporteur on the right to privacy, recommended that member states of the United Nations “be encouraged to ratify data protection Convention 108+ ….[and] implement the principles contained there through domestic law without undue delay, paying particular attention to immediately implementing those provisions requiring safeguards for personal data collected for surveillance and other national security purposes” (Cannataci, 2018, recommendation 117.e). This recommendation chimes with the positions taken by Australian privacy and civil society groups — and it will be interesting to see whether it is picked up by an Australian Human Rights Commission inquiry currently underway on technology and human rights, which is expected to report in 2019 (AHRC, 2018).

The policy and legal lacunae in Australia become evident when governments and corporations are in a tense dance to reconcile their interests, in order to make the market in consumer data, sharing, and collection work smoothly and to promote innovation agendas in IT development. At the heart of the contemporary power, technology, and policy struggles over data collection and uses are citizen and user disquiet and lack of trust about the systems that would provide protection and safeguards, and secure privacy and data rights.

In Australia, a range of specific incidents and controversies have attracted significant criticism and dissent from a range of activist groups. Concerns have been raised, in particular, by policy initiatives such as internal moves to facilitate broader government data sharing among agencies, as well as by wider, security-oriented reforms centring on facial recognition (Mann et al., 2018). One of the most controversial initiatives was the botched 2017 introduction of a national scheme called “My Health Record” to collect patients’ data and make it available to health practitioners (Smee, 2018). Such was the widespread opposition that by February 2019 approximately 2.5 million Australians (of a population of 25 million) had chosen to opt out (Knaus, 2019).

Two approaches to digital rights

As we have indicated, while there is a groundswell of concern and continuing activism on digital rights issues in Australia, no real reform of general privacy and data protection laws is afoot. Instead, privacy is being addressed at a legislative level in a piecemeal way, with tailored rules being included in legislation for specific, data-related policy initiatives. Two interesting and significant initiatives are underway, however, that could, if implemented properly, make important contributions to better defining and strengthening privacy and data rights.

Digital Platforms Inquiry

One important force for change is the Digital Platforms Inquiry being undertaken by the general market regulator, the Australian Competition and Consumer Commission (ACCC).

Referred to the ACCC in December 2017 by the then Treasurer, and later Prime Minister, the Hon Scott Morrison MP, the Digital Platforms Inquiry was first and foremost focused on the implications for news and journalistic content of the emergence of online search engines, social media, and digital content aggregators.

In its preliminary report, released in December 2018, the ACCC gave particular attention to Google and Facebook, noting their reliance on consumer attention and consumer data for advertising revenues as well as the “substantial market power” both companies hold in the Australian market (ACCC, 2018b, p. 4).

What is especially interesting in the ACCC’s interim report and its public discussion is the salience given to issues of consumer data collection and to consumers’ awareness of these practices and their implications (note the framing of Australians as consumers, rather than citizens, a point to which we return below). The ACCC found that consumers were troubled by the scale and scope of platform data collection. It also noted that they “are generally not aware of the extent of data that is collected nor how it is collected, used and shared by digital platforms” (p. 8), due to the length, complexity, and ambiguity of platform terms of service and privacy policies, and that they had little bargaining power compared to platforms, which largely set the terms of information collection, use, and disclosure on a bundled or ‘take it or leave it’ basis (p. 8). Reflecting on this, the ACCC argued that this information asymmetry and power imbalance had negative implications for people’s capacity to demonstrate consent and exercise choice (ACCC, 2018b, p. 8). The ACCC also noted the absence of effective mechanisms for enforcing privacy laws, and cautioned that:

The lack of both consumer protection and effective deterrence under laws governing data collection have enabled digital platforms’ data practices to undermine consumers’ ability to select a product that best meets their privacy preferences. (ACCC, 2018b, p. 8)

The ACCC’s Preliminary Report proposes various recommendations for legislative and policy change to address issues of market power and safeguarding competition, and also proposes a set of amendments to the Privacy Act “to better enable consumers to make informed decisions in relation to, and have greater control over, privacy and the collection of personal information” (ACCC, 2018b, p. 13).

Among other things, these recommendations include: strengthening notification requirements for the collection of consumers’ personal information by a platform or third party; requiring that consent be express (and opt-in), adequately informed, voluntarily given, current, and specific; enabling erasure of personal information; increasing penalties for breach; and expanding the resources of the Office of the Australian Information Commissioner (OAIC) to scale up its enforcement activities (ACCC, 2018b, pp. 13-14). In addition, the ACCC recommends a new enforceable code of practice, to be developed by key digital platforms and the OAIC, to “provide Australians with greater transparency and control over how their personal information is collected, used and disclosed by digital platforms” (ACCC, 2018b, p. 14). Also notable is a recommendation for the introduction of a statutory cause of action enabling individuals to take action over serious invasions of privacy, “to increase the accountability of businesses for their data practices and give consumers greater control over their personal information” (ACCC, 2018b, p. 14).

With the full report due in mid-2019, and a formal government response to follow, a wide range of actors, including civil society, academia, and industry, debated potential regulation of digital platforms. For their part, the affected platform operators Google and Facebook were notably united in their opposition to a new regulator that could ensure greater transparency and oversight in the operation of the algorithms that “determine search results and rank news articles in user feeds” (Duke & McDuling, 2019; cf. Ananny & Crawford, 2016; Google, 2018).

The international stakes are also high, as illustrated by the ACCC’s call for its international counterparts to follow its lead in this “world first” inquiry in applying tougher safeguards (Duke & McDuling, 2018; Simons, 2019). Pitted against the digital platform giants are the older media companies, still with significant interests in press, broadcasting, and radio, which support the call for tighter regulation of the ‘digital behemoths’ (Swan, Vitorovich, & Samios, 2019). Clearly, protectionism of existing media market dispensations is to the fore here, rather than protection of citizen rights — these traditional corporate players are very happy to see emergent internet and digital platform companies regulated as if they were media companies, or indeed facing extensions of other regulations, such as privacy and data law and regulation.

Consumer Data Right: “Data as an Asset”

There has been something of a long-term, bipartisan consensus shared by both major political parties — the conservative Liberal/National Party Coalition government and the typically more social democratic Australian Labor Party (ALP, currently in opposition) — that, especially when it comes to the internet, telecommunications, social media, and associated digital technologies, “light touch”, market-oriented regulation is to be favoured. The dominant position of the ALP is to style itself as pro-market, with an admixture of government intervention and responsive regulation as needed. Hence it has generally been more responsive to calls for improvements to privacy and data rights when it comes to abuses by digital platform companies. However, it is extremely reluctant to be seen as “weak” or “soft” on issues of national security, cybersecurity, and fighting terrorism, so it has rarely challenged contentious Coalition laws on metadata and data retention (Suzor, Pappalardo, & McIntosh, 2017). Most recently, in December 2018, the ALP backed down in parliament, withdrawing its proposed amendments to legislation allowing security agencies greater access to encrypted communications (creating “backdoors” in WhatsApp, iMessage, and other “over-the-top” messaging apps) (Worthington & Bogle, 2018). Internationally, this new law was received as an “encryption-busting law that could impact global privacy”, as a Wired magazine report put it (Newman, 2018).

At the direction of the government, the ACCC is also a key player in a second, related yet distinct initiative to better conceptualise and enact one very particular kind of digital right: a consumer data right. Data generated by consumers in using particular technologies, and their associated products and services, often resides with, and is controlled or even owned by, the company providing them. If consumers cannot access and transfer their data from one provider to another, and especially if they cannot trust a provider to use their data in agreed ways, a competitive market is difficult to establish and sustain.

Following an Open Banking Review (Australian Government, 2017), and Productivity Commission report on Data Availability and Use (Productivity Commission, 2017), the Australian government decided to legislate a Consumer Data Right. The idea of this Consumer Data Right is to “give Australians greater control over their data, empowering customers to choose to share their data with trusted recipients only for the purposes that they have authorised” (Australian Government, 2018):

… [W]e see the future treatment of data as joint property as a healthier foundation for future policy development ... [W]hat is happening today in Australia to treat data as an asset in regulatory terms is a first step in a better foundation for managing both the threat and the benefit [of data collection]. (Harris, 2018)

The Australian consumer data right has its parallels in European developments, such as the data portability right under the GDPR (Esayas & Daly, 2018), although its foundation lies in consumer rights rather than in broader digital or human rights. Such a concept of a data right — as something that an individual has ownership of — is clearly bound up with the controversial debates on “data as commodity” (e.g., Nimmer & Krauthaus, 1992; Fuchs, 2012), and indeed with the wide-ranging debate underway about what “good data” concepts and practices might look like (Daly, 2019). The Productivity Commission report, which provides the theoretical basis for the data right, summarises it as follows:

Rights to use data will give better outcomes for consumers than ownership: the concept of your data always being your data suggests a more inalienable right than one of ownership (which can be contracted away or sold). And in any event, consumers do not own their data in Australia. (Productivity Commission, 2017, p. 191)

The consumer would have the “right to obtain a machine-readable copy of their own digital data” (p. 191); however, the “asset” would be joint property:

Consumer data would be a joint asset between the individual consumer and the entity holding the data. Exercise of the Right by a consumer would not alter the ability of the initial data holder to retain and keep using the data. (Productivity Commission, 2017, p. 191)

The government’s plan is to implement the consumer data right initially in the banking, energy, and telecommunications sectors, and then to roll it out economy-wide, sector by sector (Australian Government, 2018). The ACCC was charged with developing the rules for the consumer data right framework (ACCC, 2018a), of which it has released a preliminary version. The consumer data right framework would be nested inside the general privacy protection framework existing in Australia, especially the Privacy Act. This has led to criticisms — even from industry participants, such as the energy company AGL –– that the government should take the opportunity to update and strengthen the existing Privacy Act (for instance, in relation to the Australian Privacy Principles), rather than creating a separate set of privacy safeguards, in effect leading to “twin privacy regimes” that would “complicate compliance as well as the collection of consents for data sharing from consumers” (Crozier, 2019; Dept of Prime Minister & Cabinet, 2018).

What is especially interesting in this process is the role that standards play. In the long term, the government has promised the establishment of a Data Standards Body, with an Advisory Committee including representatives of data holders (such as banks, telecommunications, and energy companies), data “recipients” (such as fintech firms), and consumer and privacy advocates. The Data Standards Body would be led by an independent Chair responsible for selection of the Advisory Committee, as well as for “ensuring appropriate governance, process, and stakeholder engagement” (Australian Government, 2018). In the short term, for the first three years, Data61, the digital innovation arm of Australia’s national science agency (https://www.data61.csiro.au/), has been appointed to lead the development of Consumer Data Standards. Some consumer-sensitive work has been conducted in this process. For instance, Data61 conducted research with approximately 80 consumers, releasing a consumer experience report (Data61, 2019, p. 4).

As the Consumer Policy Research Centre (CPRC) notes in its Consumer Data and Digital Economy report (Nguyen & Solomon, 2018), how the framework strikes a balance will be crucial: “For consumers to benefit, policy settings need to drive innovation, enhance competition, protect human rights and the right to privacy and, ultimately, enable genuine consumer choice” (CPRC, 2018). So far, however, the framework, draft rules, and policy process have been heavily criticised by the CPRC, other consumer advocacy, privacy, and digital rights groups, industry participants, and parliamentarians (Eyers, 2019).

Conclusion

Citizen uses of and attitudes to privacy and data are at the heart of the contemporary internet and emerging technologies. Much more work needs to be done to fill out the picture on these issues internationally. In particular, it will be important to ensure that the full range of citizens and societies are represented in research and theory. It is also key that such work is translated into the kinds of insights and evidence that shape, and are woven into, the often messy processes of policy- and law-making, discourses, and institutional arrangements. We would hope to see serious efforts to engage with citizens regarding their understandings, expectations, and experience of digital rights and developing technologies, with a view to informing strong, responsive, citizen-centred frameworks in law, policy, technology design, and product and service offerings.

Globally, there are legislative and regulatory efforts underway to respond to people’s concerns about developments in data collection and use, and to the feeling, documented in our research and that of others, of an absence of effective control. European efforts such as Convention 108+ and the GDPR have been vital on the wider international scene in providing resources and norms that can influence, guide, or, better still, structure government and corporate frameworks and behaviour.

This paper makes a case for the importance of local context. Australia is an interesting case for examining government responses to concerns about data collection and use, as a technologically advanced, Western developed nation without an effective human (or digital) rights framework. In Australia, it is notable that efforts to respond to concern have come, not in the context of an overhaul of privacy laws or digital rights generally, but via efforts by market-oriented policy bodies (the ACCC and Productivity Commission) to make markets work better and meet the needs and expectations of consumers.

In the case of the Digital Platforms Inquiry, there are internationally leading reforms to frameworks on data, algorithms, and privacy rights proposed that betoken a major step forward for citizens’ digital rights. Yet in play is a political and policy process in which citizen concerns and activism are allied with some actors (even potentially old media companies), while pitted against others (digital platforms companies, including those such as Google who often argue for some element of digital rights). Ultimately it will be up to the government concerned to take action, and then for the regulators and key industry interests to be prepared to lead necessary change, ensuring citizens will have a fair and strong role in shaping co-regulatory frameworks and practices.

Like the premise of the Digital Platforms Inquiry, the Consumer Data Right initiative involves designing the architecture — legal, economic, and technical — to ensure the effective and fair operation of markets in consumer data. In both initiatives undertaken by the ACCC there is a common thread — they are aligned with consumer protection, rather than citizen concerns and rights. Here, it could be suggested, the consent, labour, and legitimation of consumers is in tension, rather than in harmony, with the interests of citizens (Lunt & Livingstone, 2012). At the same time, individuals’ privacy rights as citizens seem to be missing from the debate, subsumed under an overwhelming security imperative that frames individual privacy as consistently a lower priority than broad law enforcement and national security goals.

Thus, Australia offers a fascinating and instructive instance in which an internet policy experiment in compartmentalised data privacy rights is being attempted. Given the story so far, we would say that it is further evidence of the imperative for strong regulatory frameworks that capture and pin together transnational, regional, national, and sub-national levels and modes of governance to address citizens’ mounting privacy and data concerns; at the same time, it offers yet more evidence that, at best, this remains, in Australia as elsewhere, a work in progress.

Acknowledgements

We are grateful to the three reviewers of this paper as well as the editors of the journal and special issue for their very helpful feedback on earlier versions of this paper.

References

Ananny, M., & Crawford, K. (2016). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society, 20(3), 973–989. doi:10.1177/1461444816676645

Australian Competition and Consumer Commission. (2018a, September 12). ACCC seeks views on consumer data rights rules framework [Media release MR179/18]. Retrieved from https://www.accc.gov.au/media-release/accc-seeks-views-on-consumer-data-right-rules-framework

Australian Competition and Consumer Commission. (2018b). Digital Platforms Inquiry: Preliminary report. Canberra: Australian Competition and Consumer Commission. Retrieved from https://www.accc.gov.au/focus-areas/inquiries/digital-platforms-inquiry/preliminary-report

Australian Government. (2018, May 9). Consumer data right. Canberra: The Treasury. Retrieved from https://treasury.gov.au/consumer-data-right/

Australian Government. (2018). Review into Open Banking: Giving consumers choice, convenience, and confidence. Canberra: The Treasury. Retrieved from https://static.treasury.gov.au/uploads/sites/1/2018/02/Review-into-Open-Banking-_For-web-1.pdf

Australian Human Rights Commission. (2018). Human rights and technology issues paper. Sydney: Australian Human Rights Commission. Retrieved from https://tech.humanrights.gov.au/sites/default/files/2018-07/Human%20Rights%20and%20Technology%20Issues%20Paper%20FINAL.pdf

Australian Privacy Foundation. (2018, August 15). Privacy in Australia: Brief to UN Special Rapporteur on Right to Privacy. Retrieved from https://privacy.org.au/wp-content/uploads/2018/08/Privacy-in-Australia-Brief.pdf

Blackburn, R. (1999). Towards a constitutional Bill of Rights for the United Kingdom: Commentary and documents. London and New York: Pinter.

Blouin-Genest, G., Doran, M.-C., & Paquerot, S. (Eds.). (2019). Human rights as battlefields: Changing practices and contestations. Cham, Switzerland: Palgrave Macmillan.

Cadwalladr, C., & Graham-Harrison, E. (2018, March 18). Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach. The Guardian. Retrieved from https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election

Calzada, I. (2018). (Smart) citizen from data providers to decision-makers? The case study of Barcelona. Sustainability, 10(9). doi:10.3390/su10093252

Cannataci, J. A. (2018). Report of the Special Rapporteur on the right to privacy (Report No. A/73/45712). General Assembly of the United Nations. Retrieved from https://www.ohchr.org/Documents/Issues/Privacy/SR_Privacy/A_73_45712.docx

Center for the Digital Future. (2017). The 2017 Digital Future Report: Surveying the Digital Future. Year Fifteen. Los Angeles: Center for the Digital Future at USC Annenberg. Retrieved from https://www.digitalcenter.org/wp-content/uploads/2018/04/2017-Digital-Future-Report-2.pdf

Consumer Policy Research Centre (CPRC). (2018, July 17). Report: Consumer data & the digital economy [Media release]. Retrieved from http://cprc.org.au/2018/07/15/report-consumer-data-digital-economy/

Couldry, N., Rodriguez, C., Bolin, G., Cohen, J., Volkmer, I., Goggin, G.,…Lee, K. (2018). Media and communications. In International Panel on Social Progress (IPSP) (Ed.), Rethinking Society for the 21st Century: Report of the International Panel on Social Progress (Vol. 2, pp. 523–562). Cambridge: Cambridge University Press. doi:10.1017/9781108399647.006

Council of Europe (COE). (2018). Convention 108+: Convention for the protection of individuals with regard to the processing of personal data. Strasbourg: Council of Europe. Retrieved from https://rm.coe.int/convention-108-convention-for-the-protection-of-individuals-with-regar/16808b36f1

Crozier, R. (2019, March 5). AGL warns consumer data right being “rushed”. IT News. Retrieved from https://www.itnews.com.au/news/agl-warns-consumer-data-right-being-rushed-520097

Daly, A. (2016a). Digital rights in Australia’s Asian century: A good neighbour? In Digital Asia Hub (Ed.), The good life in Asia’s digital 21st century (pp. 128–136). Hong Kong: Digital Asia Hub. Retrieved from https://www.digitalasiahub.org/thegoodlife

Daly, A. (2016b). Private power, online information flows and EU law. Oxford: Hart Publishing.

Daly, A. (2017). Privacy in automation: An appraisal of the emerging Australian approach. Computer Law & Security Review, 33(6), 836–846. doi:10.1016/j.clsr.2017.05.009

Daly, A. (2018). The introduction of data breach notification legislation in Australia: A comparative view. Computer Law & Security Review, 34(3), 477–495. doi:10.1016/j.clsr.2018.01.005

Daly, A. (2019). Good data is (and as) peer production. Journal of Peer Production, 13. Retrieved from http://peerproduction.net/issues/issue-13-open/news-from-nowhere/good-data-is-and-as-peer-production/

Damro, C., Gstöhl, S., & Schunz, S. (Eds.). (2017). The European Union’s evolving external engagement: Towards new sectoral diplomacies? London: Routledge.

Data61. (2019, February 20). Consumer data standards: Phase 1: CX report. Retrieved from https://consumerdatastandards.org.au/wp-content/uploads/2019/02/Consumer-Data-Standards-Phase-1_-CX-Report.pdf

Department of Prime Minister and Cabinet. (2018, July 4). New Australian Government data sharing and release legislation: Issues paper for consultation. Retrieved from https://www.pmc.gov.au/resource-centre/public-data/issues-paper-data-sharing-release-legislation

Digital Rights Watch. (2018). State of digital rights. Sydney: Digital Rights Watch. Retrieved from https://digitalrightswatch.org.au/wp-content/uploads/2018/05/State-of-Digital-Rights-Media.pdf

Duke, J., & McDuling, J. (2019, March 4). Australian regulators prepare for Facebook, Google turf war. The Age. Retrieved from https://www.theage.com.au/business/companies/australian-regulators-prepare-for-facebook-google-turf-war-20190304-p511kg.html

Duke, J., & McDuling, J. (2018, December 10). Facebook, Google scramble to contain global fallout from ACCC plan. Sydney Morning Herald. Retrieved from https://www.smh.com.au/business/companies/competition-watchdog-suggests-new-ombudsman-to-handle-google-and-facebook-20181210-p50l80.html

Erdos, D. (2010). Delegating rights protections: The rise of Bills of Rights in the Westminster World. Oxford: Oxford University Press.

Erni, J. (2019). Law and cultural studies: A critical rearticulation of human rights. London and New York: Routledge.

Esayas, S. Y., & Daly, A. (2018). The proposed Australia consumer data right: A European comparison. European Competition and Regulatory Law Review, 2(3), 187–202. doi:10.21552/core/2018/3/6

Eubanks, V. (2017). Automating inequality: How high-tech tools profile, police, and punish the poor. New York: St. Martin’s Press.

Eyers, J. (2019, February 18). Labor warns consumer data right could become second “My Health” debacle. Australian Financial Review. Retrieved from https://www.afr.com/business/banking-and-finance/labor-warns-consumer-data-right-could-become-second-my-health-debacle-20190218-h1be3u

Fuchs, C. (2012). Dallas Smythe today: The audience commodity, the digital labour debate, Marxist political economy and critical theory. Prolegomena to a digital labour theory of value. tripleC: Open Access Journal for a Global Sustainable Information Society, 10(2), 692–740. doi:10.31269/triplec.v10i2.443

Gearty, C. (2016). On fantasy island: Britain, Europe, and human rights. Oxford; New York: Oxford University Press.

Gibson, R. (1992). South of the West: Postcolonialism and the narrative construction of Australia. Bloomington, IN: Indiana University Press.

Goggin, G. (2008). Reorienting the mobile: Australasian imaginaries. The Information Society, 24(3), 171–181. doi:10.1080/01972240802020077

Goggin, G., Vromen, A., Weatherall, K., Martin, F., Webb, A., Sunman, L., & Bailo, F. (2017). Digital rights in Australia. Sydney: Department of Media and Communications. Retrieved from http://hdl.handle.net/2123/17587

Goggin, G., Ford, M., Webb, A., Martin, F., Vromen, A., & Weatherall, K. (2019). Digital rights in Asia: Rethinking regional and international agenda. In A. Athique & E. Baulch (Eds.), Digital transactions in Asia: Economic, informational, and social exchanges. London and New York: Routledge.

Google. (2019, October 19). Second submission to the ACCC Digital Platforms Inquiry. Retrieved from https://www.accc.gov.au/focus-areas/inquiries/digital-platforms-inquiry/submissions

Greenleaf, G. (2014). Asian data privacy laws: Trade and human rights perspectives. Oxford: Oxford University Press.

Greenleaf, G. (2012). The influence of European data privacy standards outside Europe: Implications for globalization of Convention 108. International Data Privacy Law, 2(2), 68–92. doi:10.1093/idpl/ips006

Greenleaf, G. (2018a, May 24), Global convergence of Data Privacy standards and laws: Speaking notes for the European Commission Events on the launch of the General Data Protection Regulation (GDPR), Brussels & New Delhi, May 25 (Research Paper No. 18–56). Sydney: University of New South Wales. doi:10.2139/ssrn.3184548

Greenleaf, G. (2018b, April 8). The UN should adopt Data Protection Convention 108 as a global treaty. Submission on ‘the right to privacy in the digital age’ to the UN High Commission for Human Rights, to the Human Rights Council, and to the Special Rapporteur on the Right to Privacy. Retrieved from https://www.ohchr.org/Documents/Issues/DigitalAge/ReportPrivacyinDigitalAge/GrahamGreenleafAMProfessorLawUNSWAustralia.pdf

Gregg, B. (2012). Human rights as social construction. Cambridge, UK: Cambridge University Press.

Gregg, B. (2016). The human rights state: Justice within and beyond sovereign nations. Philadelphia, PA: University of Pennsylvania Press.

Harris, P. (2018, July 4). Data, the European Union General Data Protection Regulation (GDPR) and Australia’s new consumer right. Speech to the International Institute of Communications (IIC) Telecommunication and Media Forum (TMF), Sydney. Retrieved from https://www.pc.gov.au/news-media/speeches/data-protection

Hintz, A., Dencik, L., & Wahl-Jorgensen, K. (2018). Digital citizenship in a datafied society. Cambridge: Polity Press.

Hunt, M. (2015). Parliaments and human rights: Redressing the democratic deficit. London: Bloomsbury.

Isaak, J., & Hanna, M. J. (2018). User data privacy: Facebook, Cambridge Analytica, and privacy protection. Computer, 51(8), 56–59. doi:10.1109/MC.2018.3191268

Isin, E. F., & Ruppert, E. S. (2015). Being digital citizens. Lanham, MA: Rowman & Littlefield.

Kang-Riou, N., Milner, J., & Nayak, S. (Eds.). (2012). Confronting the Human Rights Act: Contemporary themes and perspectives. London; New York: Routledge.

Karppinen, K. (2017). Human rights and the digital. In H. Tumber & S. Waisbord (Eds.), Routledge Companion to Media and Human Rights (pp. 95-103). London; New York: Routledge. doi:10.4324/9781315619835-9

Keating, P. (1996). Australia, Asia, and the new regionalism. Singapore: Institute of Southeast Asian Studies.

Kemp, K. (2018, September 27). Getting data right. Retrieved from https://www.centerforfinancialinclusion.org/getting-data-right

Kirby, M. (1999). Privacy protection, a new beginning: OECD principles 20 years on. Privacy Law & Policy Reporter, 6(3). Retrieved from http://www5.austlii.edu.au/au/journals/PrivLawPRpr/1999/41.html

Knaus, C. (2019, February 20). More than 2.5 million people have opted out of My Health Record. The Guardian. Retrieved from https://www.theguardian.com/australia-news/2019/feb/20/more-than-25-million-people-have-opted-out-of-my-health-record

Liberty (Ed.). (1999). Liberating cyberspace: Civil liberties, Human Rights, and the Internet. London: Pluto Press.

López, J. J. (2018). Human rights as political imaginary. Cham, Switzerland: Palgrave Macmillan. doi:10.1007/978-3-319-74274-8

Lunt, P., & Livingstone, S. (2012). Media regulation: Governance and the interests of citizens and consumers. London: Sage.

Mann, M., & Daly, A. (2018). (Big) data and the north-in-South: Australia’s informational imperialism and digital colonialism. Television & New Media, 20(4). doi:10.1177/1527476418806091

Mann, M., Daly, A., Wilson, M., & Suzor, N. (2018). The limits of (digital) constitutionalism: Exploring the privacy-security (im)balance in Australia. International Communication Gazette, 80(4), 369–384. doi:10.1177/1748048518757141

Mendelson, D. (2018). The European Union General Data Protection Regulation (EU 2016/679) and the Australian My Health Record Scheme: A comparative study of consent to data processing provisions. Journal of Law and Medicine, 26(1), 23–38.

Moyn, S. (2018). Not enough: Human rights in an unequal world. Cambridge, MA: Harvard University Press.

Murphy, K. (2018, July 31). My Health Record: Greg Hunt promises to redraft legislation after public outcry. The Guardian. Retrieved from https://www.theguardian.com/australia-news/2018/jul/31/my-health-record-greg-hunt-promises-to-redraft-legislation-after-public-outcry

Newman, L. H. (2018, December 7). Australia’s encryption-busting law could impact global privacy. Wired. Retrieved from https://www.wired.com/story/australia-encryption-law-global-impact/

Nguyen, P., & Solomon, L. (2018). Consumer data and the digital economy. Melbourne: Consumer Policy Research Centre. Retrieved from http://cprc.org.au/wp-content/uploads/Full_Data_Report_A4_FIN.pdf

Nimmer, R. T., & Krauthaus, P. A. (1992). Information as a commodity: New imperatives of commercial law. Law and Contemporary Problems, 55(3), 103–130. doi:10.2307/1191865

Office of the Australian Information Commissioner (OAIC). (2017). Australian community attitudes to privacy survey, 2017. Sydney: Office of the Australian Information Commissioner. Retrieved from https://www.oaic.gov.au/resources/engage-with-us/community-attitudes/acaps-2017/acaps-2017-report.pdf

Office of the Australian Information Commissioner (OAIC). (2019). History of the Privacy Act. Retrieved from https://www.oaic.gov.au/about-us/who-we-are/history-of-the-privacy-act

Office of the Australian Information Commissioner (OAIC). (2018, April 17). Submission on Issues Paper –– Digital Platforms Inquiry. Retrieved from https://www.oaic.gov.au/engage-with-us/submissions/digital-platforms-inquiry-submission-to-the-australian-competition-and-consumer-commission

OECD. (1980/2013). OECD guidelines on the protection of privacy and transborder flows of personal data. Paris: OECD. Retrieved from http://www.oecd.org/internet/ieconomy/oecdguidelinesontheprotectionofprivacyandtransborderflowsofpersonaldata.htm

Ofcom. (2018). Adults’ media use and attitudes report 2018. London: Ofcom. Retrieved from https://www.ofcom.org.uk/research-and-data/media-literacy-research/adults/adults-media-use-and-attitudes

Pew Research Center. (2016). Privacy and information sharing. Washington, DC: Pew Research Center. Retrieved from http://www.pewinternet.org/2016/01/14/privacy-and-information-sharing/

Poullet, Y. (2018). Is the general data protection regulation the solution? Computer Law & Security Review, 34(4), 773–778. doi:10.1016/j.clsr.2018.05.021

Productivity Commission. (2017). Data availability and use (Report No. 82). Canberra: Productivity Commission. Retrieved from https://www.pc.gov.au/inquiries/completed/data-access/report

Simons, M. (2018, December 11). The ACCC’s plan to reshape the media landscape. Inside Story. Retrieved from https://insidestory.org.au/the-acccs-plan-to-reshape-the-media-landscape/

Smee, B. (2018, September 18). My Health Record: Big pharma can apply to access data. The Guardian. Retrieved from https://www.theguardian.com/australia-news/2018/sep/18/my-health-record-big-pharma-can-apply-to-access-data

Solomon, B. (2018, August 23). Open letter to Michelle Bachelet, new High Commissioner for Human Rights. Access Now. Retrieved from https://www.accessnow.org/cms/assets/uploads/2018/09/Open-Letter-Bachelet.pdf

Stalla-Bourdillon, S., Pearce, H., & Tsakalakis, N. (2018). The GDPR: A game changer for electronic identification schemes. Computer Law & Security Review, 34(4), 784–805. doi:10.1016/j.clsr.2018.05.012

Stats, K. (2015). Antipodean antipathy: Australia’s relations with the European Union. In N. Witzleb, A. M. Arranz, & P. Winand (Eds.), The European Union and global engagement: Institutions, policies, and challenges (pp. 279–304). Cheltenham, UK: Edward Elgar.

Suzor, N. P., Pappalardo, K. M., & McIntosh, N. (2017). The passage of Australia’s data retention regime: National security, human rights, and media scrutiny. Internet Policy Review, 6(1). doi:10.14763/2017.1.454

Swan, D., Vitorovich, L., & Samios, Z. (2019, March 5). Media companies back ACCC on need to patrol digital behemoths. The Australian. Retrieved from https://www.theaustralian.com.au/business/media/media-companies-back-accc-on-need-to-patrol-digital-behemoths-google-and-facebook/

Vestoso, M. (2018). The GDPR beyond privacy: Data-driven challenges for social scientists, legislators and policy-makers. Future Internet, 10(7). doi:10.3390/fi10070062

Voloshin, G. (2014). The European Union’s normative power in Central Asia: Promoting values and defending interests. Houndsmills, UK: Palgrave Macmillan. doi:10.1057/9781137443946

von Dietze, A., & Allgrove, A.-M. (2014). Australian privacy reforms—an overhauled data protection regime for Australia. International Data Privacy Law, 4(4), 326–341. doi:10.1093/idpl/ipu016

Worthington, B., & Bogle, A. (2018, December 6). Labor backdown allows Federal government to pass encryption laws. ABC News. Retrieved from https://www.abc.net.au/news/2018-12-06/labor-backdown-federal-government-to-pass-greater-surveillance/10591944

Young, A. L. (2017). Democratic dialogue and the constitution. Oxford: Oxford University Press.

Empire and the megamachine: comparing two controversies over social media content


This paper is part of Practicing rights and values in internet policy around the world, a special issue of Internet Policy Review guest-edited by Aphra Kerr, Francesca Musiani, and Julia Pohle.

Introduction

This paper considers two major controversies in 2017 over content on social media. The first arose within the advertising industry as major brands found that programmatic advertising was appearing next to distasteful, violent, or otherwise objectionable content, particularly on YouTube. As a result, several prominent companies, including Procter & Gamble, stopped advertising on social media platforms for months or longer. This event, dubbed the “adpocalypse”, has had wide-ranging effects, particularly on the monetisation of educational and LGBTQ content on YouTube. The second was the first set of public hearings with representatives of social media companies over Russian operatives disseminating misinformation before, during, and after the 2016 US presidential election. Misinformation campaigns on social media have since been the subject of government inquiries around the world. The public examinations of these controversies, by senate committees and advertising trade groups, provide insight into major themes of the governance relationships between social media companies and their major stakeholders.

The media, private and public, is essential to political engagement and the construction of democratic culture (Dahlgren, 2009). Arguments for the protection of the public’s interest in media typically refer to how government policy should intervene to protect the democratic public interest from the tendencies of private economic interests (Croteau & Hoynes, 2006). The term public interest lacks precision, however, and what is considered in the public’s interest may shift depending on the framework from which it is viewed—a political public interest versus an economic one, for instance (Shtern, 2009). To make matters more complicated, in practice, media governance is divided among a range of actors, including public (government) policy intervention and private (business) media interests. Researchers considering the public interest in communication and media must understand how governance of media is enacted in different contexts and how stakeholders intervene in media systems. Specifically, this paper argues that advertising interests can act as de facto governors of media content delivery in certain contexts, issuing editorial-style directives as to what content will succeed or fail. This power introduces an understudied layer of governance operating outside national policy-making, one that responds far more quickly than policy processes do to commercial pressures. This paper uses an historical framework based on Harold Innis’ research into the publishing industry in the 18th and 19th centuries, combined with Mumford’s (1966) concept of the megamachine, particularly as explored by Latour (1999), to analyse how social media companies are being held accountable to nations and advertisers. Innis’ analysis of the print publishing industry demonstrates how freedom of the press laws enabled economic concerns, driven by the advertising industry, to expand the geographic reach of the press and facilitate US cultural imperialism (Innis, 2008).
This paper suggests that social media companies are similarly shaped by their reliance on the advertising business model. World governments have largely taken a light approach to regulating social media content while advertisers represent most of those companies’ earnings. This paper compares a specific instance of advertisers shaping the rules by which social media companies are governed to public scrutiny by the US government. Both events involve outside actors pressuring social media companies to change their behaviours, with varying degrees of directness and success.

The rise of social media companies, their transnational nature, and the transnational, risk-averse nature of their advertising stakeholders has created an emphasis on brand safety in media content governance. This argument is complementary to work that defines design practices as regulation (Yeung, 2017) and arguments that platform content moderation is based in US free speech law, balanced against corporate social responsibility and user expectation (Klonick, 2017). It builds on research that shows that social media platforms attract efforts to regulate content, including social pressure, to prevent certain speakers or practices on the platforms, even those protected by legal rulings (Mueller, 2015). Social media companies are responding to increased public scrutiny by altering how creators and content are moderated and monetised, with impacts on how we understand the platforms’ role in enabling expression and circulating information. The public instances of platforms negotiating their accountability to different groups examined in this paper are instructive in understanding how different actors attempt to govern media systems and how media systems respond. Carey (1967) observed that Mumford’s ideas of transformation in social organisation are incorporated into Innis’ understanding of changes in the technology of communication. This paper combines Innis’ work on the press industry with theories of the megamachine, using the two as a novel way to examine the administrative links that govern media systems.

Harold Innis and the newspaper industry

Harold Innis was deeply concerned with the relationship between printing, monopolies of knowledge, and public life. Empire and Communications particularly places the newspaper industry at the centre of US cultural imperialism: “The United States, with systems of mechanized communication and organized force, has sponsored a new type of imperialism imposed on common law in which sovereignty is preserved de jure and used to expand imperialism de facto” (Innis, 2007 [1950], p. 195). Those “systems of mechanized communication,” including business models favouring maximum circulation, directly influenced the functioning of later technologies, including the telegraph and the radio. At the time of its creation, the newspaper industry reversed the influence between political centres and peripheries, as the US, then a colony, began providing England with content divorced from English culture and communities (Berland, 1997). Innis argues that while national laws were in effect, they were removed from the details of content and production, so that commercial interests—particularly those of advertisers—could and did govern most daily operations. The US government stepped in only to support industry growth through subsidising mail delivery or facilitating trade relationships. Innis’ work on the newspaper industry has significant implications for how we understand social media companies—which have extended the advertising model to new scales and transnational contexts while exerting considerable influence over how information is accessed and circulated by citizens.

The economic framework created by social media companies facilitates content flowing back into the US with unexpected consequences for that country’s own democratic processes and cultural stability. If the newspaper affected neighbouring cultures and economies by putting US industries and culture at the heart of global communications, social media may have created opportunities for other actors to move cultural and political content around the globe. National security concerns over political misinformation spreading through social media platforms have increased public scrutiny of social media content regulation and stoked enthusiasm for stricter national restrictions on online content, but these developments are slow when compared to changes made to meet commercial imperatives. Innis argued that commercial imperatives in print media tended to favour circulation over cultural or territorial integrity and that space vacated by policy direction could be filled by directives from commerce.

Becoming a vendible commodity: strategies of circulation

In The Bias of Communication, Innis locates the beginning of the newspaper’s economic model at the lapse of the Licensing of the Press Act. After the Act lapsed, “news became a vendible commodity” (Innis, 2008 [1951], p. 143). The Act was one of many legislative efforts to regulate the press and print industries in the United Kingdom, in this case by requiring that publications be registered (Nipps, 2014). In response to legislative pressures, media structured itself strategically to avoid regulation, shifting formats and obscuring content coverage as political developments made certain formats less expedient. When taxes affected newspapers, there was a rise in other, non-newspaper formats of publication, including publications that circulated on irregular schedules or used unusual page sizes. Innis (2008) depicts these encounters as the negotiation of parliamentary accountability to the people, but they might equally be read as actors in the publishing industry defending their market position and investment in the print industry in the face of uncertain political patronage and funding. Eventually, the need for a predictable source of funds “compelled dependence of the political press on advertisements” (Innis, 2008, p. 153). The print industry turned to the advertising business model in part to escape the uncertainty and risk of political patronage and the inconsistency of tax law and became subject to advertisers’ need to grow their audience in the process.

The rise of advertising as a preferred business model for publishing has been covered in detail elsewhere (see Wu, 2016). What concerns this paper is the technical and political ramifications of that preference. Once newspapers became dependent on advertising dollars, news was important insofar as it attracted readers. Roy Howard, an American publisher speaking before World War I, claimed: “We come here simply as news merchants. We are here to sell advertising and sell it at a rate profitable to those who buy it. But first we must produce a newspaper with news appeal that will result in a circulation and make that advertising effective” (originally published in George Seldes’ Lords of the Press, 1939; quoted in Innis, 2008, p. 181). Advertising requires circulation, and circulation has historically been achieved in publishing through efficient distribution, combined with attention-getting content strategies such as sensationalism, use of images, and exclusive content. The preferences of advertisers hoping to reach national or international audiences pushed technical developments that allowed for the printing of illustrations and the printing and shipping of more papers (Buxton, 1998; Innis, 2007). Paper and printing, instantiated in newspapers whose agendas were set by the demands of advertising, facilitated the connection of wide geographic areas to news, reporting, and advertisements created in a central location. It was the ability of media in this case to assert control over space that led Innis to identify the industry as an agent of US cultural imperialism—at the time, an expression of how “any given medium will…favour the growth of certain kinds of interests and institutions at the expense of others” (Carey, 1967, p. 9).

Once the press divorced itself from political money with the aid of advertising, publishers could become less interested in the specifics of content and pursue broader circulation, protecting themselves with freedom of expression laws as necessary. Innis argued that publishing’s freedom from direct control over content and its partnership with advertising interests made circulation its priority and allowed information to be treated as a commodity, maximising its spread over geographic space, with implications for affiliated industries and expressions of cultural sovereignty. As technology has moved beyond the printing press, the separation between publishers and content has become pronounced. That is the case with social media companies, which have engineered considerable distance in many jurisdictions from laws that ordinarily hold publishers accountable for the speech present on their platforms—a level of licentiousness undreamed of by the news merchants of Innis’ analysis. Innis’ work remains germane for its insight into advertising as a driving force in technical capability and the entanglement of commercial media with broader economic and political functioning, even as the geographic arrangements that concerned him (such as the focus on US culture as the central director of media development globally) have shifted considerably. Innis’ granular examinations of how media outlets were held accountable to laws and the mandates of an advertising business model can act as a model for analysing media interests and institutions.

Megamachines and machines for growth

In Innis’ account, guarantees of freedom of expression, once in place, rendered the relationship between content and regulation predictable. Once advertisers, regulators, and publishers achieved relative equilibrium in their goals, the industry was able to grow in a manner heedless of geographic location, creating the scale necessary for financial efficiency. One way to understand that operation is as a machine. In Pandora’s Hope, Bruno Latour (1999) describes the megamachine as “a large, stratified, externalized body politic” (p. 208). The megamachine organises “large numbers of humans via chains of command, deliberate planning and accounting procedures” (p. 207) to achieve a goal defined by central institutions. The concept of a megamachine was originally theorised by Mumford (1966) as the systems, including but not limited to bureaucracies, that could put large numbers of humans to work on a single goal—organising the military, for instance, or the labour necessary for large-scale construction projects. Mumford argues that the megamachine is essential to technological developments and consists of “a reliable organisation of knowledge…and an elaborate structure for giving and carrying out orders” (p. 8). Latour’s megamachine functions through “nested subprograms” (p. 207) for action that can be tracked across social relationships. Latour suggests that identifying the workings of the subprogrammes may do more to explain collective behaviours and functionalities than discourses about identity (Latour, 1999). The megamachine, then, operates separately from individual or societal wants or preferences. The imperative to “make newspapers”, for instance, calls forth systems of salespeople, authors, accountants, publishers, postal workers, and trade relationships that push for increased circulation, regardless of individual preferences within the human systems.

The metaphor of a machine is relatively common in examinations of commodification. For instance, Harvey Molotch’s much-cited article The City as a Growth Machine (1976) examines the processes that divorce geographic and other forms of specificity from decision-making in urban development. Those managing transnational communication systems are similarly pursuing freedom from context. A media operation, divorced as much as possible from individual human concerns—achieved through centralisation, technological efficiency and predictable relationships between content and regulation—can be run as a machine. Viewed through the lens of the machine, the newspapers of Innis’ analysis are a collective of human and industrial processes that became a political body of its own. Innis identifies an industry that he associates with a society (the press in the United States); he then identifies subprogrammes (trade relationships, subsidies, technologies) that encourage the industry to act on wider geographic areas, including neighbouring societies—a media megamachine with the US at its centre. In this scenario, news monopolies, separate from “place, ethics, and community” (Berland, 1997, p. 61), in conjunction with “contemporary transnational capitalism” (p. 68), allowed US agendas to dominate global communication.

In a context where the largest media companies are transnational as a rule, Innis’ publishing megamachine takes on new aspects. For one, consumer markets across the globe are gaining in importance and institutions for managing global commercial concerns have grown. In her application of Innis’ ideas to global media systems, Berland (1997) declared that corporations undermine nation states by undoing their central positioning—putting the corporation at the centre of the media machine, rather than the US state. It may be too soon to declare that corporations have ended US dominance of global communications, but transnational digital communication giants are increasingly tied to diverse social and policy environments. Understanding Innis’ “vendible commodity” as a map of the subprogrammes that make up a media megamachine gives researchers a template for examining that machine in a context that has shifted away from one national jurisdiction to a complex, global policy environment. Considering Innis alongside the megamachine, rather than conducting a straightforward political economic analysis, emphasises administrative links over power relations and the construction of a body politic, rather than a market model of competing interests—an idea of particular salience to the advertising industry, which has its own definition of appropriate content.

The social media megamachine

Social media companies have emphatically defined themselves as outside of the press and publishing industries because of their reliance on user-generated content, but that distinction is blurring, as statistics place social media high on lists of news sources used by citizens, and professional media content is more prominent on the platforms (Bakshy, Messing, & Adamic, 2015; Smith, 2017; Burgess, 2015). In the US, Section 230 of the Communications Decency Act, which prohibits providers of interactive computer services from being treated as the legal publisher or speaker of information on those services, gives US regulation little power over online platforms (Ardia, 2009; Klonick, 2017). Section 230 has contributed to the impression that social media platforms—as hosts for user-generated content rather than media platforms—are nearly immune to direct regulatory intervention. However, social media companies are governed—directly by national governments as well as through a range of voluntary self-regulation initiatives (in regard to terrorist content, for instance) (Gorwa, 2019). In Europe, and particularly in Germany, direct regulation has recently been used to force social media companies to comply with regional laws and norms. Current efforts are coalescing around competition law, privacy and data protection, and the rollback of intermediary protection from liability (Gorwa, 2019). Other jurisdictions have taken stricter measures, employing tactics such as geo-blocking towards social media companies to prevent them from operating either temporarily or permanently. China has made significant use of this ability and commanded noteworthy concessions from tech companies, such as Google, that wish to access the millions of potential users in that country (Gallagher, 2018). However, nations, no matter how populous, are only one group of stakeholders with fragmented interests.
Advertisers represent upwards of 85% of social media companies’ earnings (Facebook Inc., 2018; Twitter Inc., 2018; Alphabet Inc., 2017).

As new publishing platforms have emerged, advertisers have remained central to media business models, acquiring new abilities to target viewers (Turow, 2011) and interact directly with consumers (Brodmerkel & Carah, 2016). Social media platforms’ advertising business started by serving banner ads to students and now cater to a complex global ecology of developers, brand pages, marketing partners, and ad publishers (Nieborg, 2017). Advertisers on social media platforms expect not only circulation, but also the ability to target specific categories of users, to have advertising content integrated into the look and function of the social media site, and to keep advertising separate from content that might be objectionable to their target audience—an ideal called “brand safety” (Trapp, 2016; Facebook, Google, and Twitter Executives on Russia Election Interference, 2017b, 1:40:36; Teich, 2017). Advertisers seek deeper relationships, including feelings and values, between consumers and brands (Banet-Weiser, 2012). Social media companies have made themselves central to meeting these preferences, and digital advertising is now a multibillion-dollar industry, with Facebook and Google earning more than half of that revenue (Ha, 2017; Reuters, 2017; Helmond, Nieborg, & van der Vlist, 2017). Scholarship on social media platforms has already traced some of the ways in which accountability to advertisers has motivated significant changes to platform infrastructure, algorithms, and content policies (Gehl, 2014; Van Dijck, 2013; Helmond, 2015) and created a business model that permeates borders, incentivises the sharing of personal data, and wields affective forces to keep users connected (Karppi, 2018).
While there have long been fears that advertiser agendas affect the production of media content, studies have tended to focus on advertisers securing positive reviews of their own content (Rinallo, Basuroy, Wu, & Jeon, 2013) or on deceptive practices and native advertising (Carlson, 2015). The social media model has dispensed with conventions of “church-and-state” division between editorial and business concerns, making a virtue of its ability to integrate advertising content in ways tailored to achieve business goals (Couldry & Turow, 2014).

Where newspaper advertisers were primarily national in their operations, advertising partners for the social media platforms are global and their audience, like that of the social media companies themselves, is often borderless. Social media companies face a fragmented policy environment and comparatively coherent economic incentives. Where Innis argued that the commercialism of media was part of US media imperialism, we can use the framework of the media megamachine identified above to trace the chains of command in new circumstances and to locate the “centrally directed” elements of the machine’s programming (Mumford, 1966, p. 6). The social media megamachine is characterised by a policy environment strongly oriented toward self-regulation, where the ability to regulate exists but is often not exercised. Meanwhile, commercial incentives have become more granular and targeted towards outcomes at the level of content, affect, and quality of interaction with consumers, rather than only circulation. The next section examines the senate hearings with representatives of three major social media companies to better understand how pressure was applied to social media company representatives in a national context before comparing that process to interactions between advertisers and social media over content concerns. Doing so provides an illustrative comparison of how social media companies are governed in different institutional contexts.

The senate hearings and control of social media content

The US senate hearings with representatives of social media companies over Russian activities during the 2016 presidential election were held on 31 October and 1 November 2017 before the judiciary Subcommittee on Crime and Terrorism and the house and senate Committees on Intelligence. In his opening remarks, the republican chair of the judiciary subcommittee called social media platforms “portals” into US society and everyday life. The actions of the Internet Research Agency (IRA), located in St. Petersburg—the group that created group pages, advertisements, and events in the guise of American social and political groups—were central to the hearings and served as a prime example of the failings of content regulation. Other examples, including the spread of false news stories on social media, extreme content, and political content posted outside of an election cycle, were also significant parts of the inquiry (Facebook, Google, and Twitter Executives on Russia Election Interference, 2017a, 1:24:10). False or sensationalised news has a long history in media, and even the “filter bubble” is preceded by media that mirrored the preferences of its audience (Bennett & Iyengar, 2008). The ease with which such content is created and disseminated on social media has added urgency to questions about how to control that content. The hearings took place over a year after the initial problematic behaviour was identified. During that year, social media platforms resisted the idea that the problems were worth examining, repeatedly claiming that deception and misinformation on online platforms was minimal and of little significance to elections (Hudgins & Newcomb, 2017).

During the hearings, the distance between the priorities of legislators and social media representatives was evident. Senate interlocutors repeatedly used the language and perspective of the national interest, including an interrogation centered on which nations social media companies consider a threat (Facebook, Google and Twitter Executives on Russian Disinformation, 2017). On the other side, the social media representatives argued that their tools are agnostic and that “the internet is borderless” (34:05), meaning that national sovereignties and enmities mean much less on social media platforms. These are opposing views, and not necessarily reconcilable, though many members, such as democrat Adam Schiff, sought reassurances that social media companies consider their corporate responsibility to include the protection of democratic communication within the US (Facebook, Google, and Twitter Executives on Russia Election Interference, 2017a). The representatives of the social media companies were careful to separate themselves from public service obligations. However, the license enjoyed by those companies is not guaranteed and the tech companies were careful to acknowledge moral and societal stakes and responsibilities. The rhetorical dance of non-obligation and self-regulation has historically allowed tech companies to align themselves with national interests or legislation in various national contexts in a largely self-regulated manner (Klonick, 2017).

The newspaper industry of Innis’ analysis also kept an arm’s-length distance from the instruments of national governance, but, being located in the US and aligned with US interests through its readership, it had a more straightforward relationship with national politics. In the 2016 US election, much of the confusion and concern about the role of technology companies stemmed from the normalisation of social media content not made by or for US citizens. There was no reason, from the perspective of the tech companies, to flag advertisements paid for in roubles because there are Russian companies online that might want to pay for advertisements for legitimate purposes. Twitter, in particular, argued that they were within their rights to partner with foreign media companies (Facebook, Google, and Twitter Executives on Russia Election Interference, 2017a). Both companies and senators admitted to having overlooked ordinary promotional tools and advertising for the dissemination of political content while focusing on cyber espionage efforts (Facebook, Google, and Twitter Executives on Russia Election Interference, 2017b, 2:23:36). In the face of US scrutiny, the companies had two strategic options: either argue that the US government has no authority over them or demonstrate enough responsiveness and control over platform content to forestall government concerns, maintaining their social license as trustworthy partners. The latter was the strategy employed during the hearings. Representatives emphasised the proactive measures they had taken and their records of cooperation with national governments.

The central mechanism for disseminating Russian content during the US presidential election was the platforms’ own promotional tools. The IRA’s actions in most respects resembled a public relations campaign. It used tools, including Facebook’s Custom Audience Tool, A/B testing for advertisements, and geographic targeting, to reach a desired audience (Facebook, Google and Twitter Executives on Russian Disinformation, 2017). It also rolled out paid promotional campaigns, along with group pages that created content which could then be “boosted” to reach more people. It is a strategy that can be used by other groups—friendly or unfriendly, state or business—because it was designed to fit a range of globalised business needs and is resistant to governance outside of the social media companies themselves (Facebook, Google and Twitter Executives on Russian Disinformation, 2017). The use of mundane tools of business promotion to do the work of infiltrating US media led many senators to question the legal disparity between media companies and social media companies. Offline, to comply with election spending limits, the purchaser and the purpose of any advertisement must be clearly identifiable. Online, the speed and volume of interactions, along with the comparatively light regulation enjoyed by the companies, have made it difficult to track the content of advertisements and consistently enforce standards (Facebook, Google and Twitter Executives on Russian Disinformation, 2017, 1:08:09). When flagging content for violations of terms of service, social media companies have until recently focused on the authenticity of accounts or behaviour rather than content. However, much of the behaviour that necessitated the inquiry was virtually indistinguishable from ordinary accounts until the connection between its origin and the content was made (Facebook, Google and Twitter Executives on Russian Disinformation, 2017, 2:12).
Many companies that might not be appropriate advertisers during election cycles are legitimate users, by the standards of the platforms’ terms of service, the rest of the time. The kinds of distinctions being made between acceptable and unacceptable content typically require human readers. During the hearings, social media companies, particularly Facebook, announced increases to their human moderation staff by, in some cases, tens of thousands. Google has made similar announcements in response to advertiser concerns over children’s content on YouTube (Schindler, 2017).

The social media companies emphasised other existing initiatives that, they argued, demonstrated their proactive engagement with the problems identified after the 2016 election. Facebook has created Transparency Centres that allow users to check all the advertising campaigns a page is running on the platform, piloted election integrity initiatives in Canada (Valentino-Devries, 2018), and applied machine learning to control content around the 2017 German federal election (Lapowsky, 2017). These efforts have focused on user-facing transparency along with prompt identification and removal of offensive, dangerous, or inappropriate content, aided by machine learning. Overall, the hearings were marked by social media companies’ insistence that the problems identified by the political representatives were already under control and that platform self-regulation should continue to be the norm. They did not suggest that institutional relationships between social media companies and policy regimes were under-developed. To prove their point, they indicated existing initiatives, particularly those focused on transparency, content review, and the proactive removal of offensive content. While they were presented to lawmakers as proactive measures that indicate the integrity of platform policies, these announcements closely resembled initiatives put in place earlier in 2017, in response to a different controversy on the platform.

IAB, content scandals, and brand safety

In the spring of 2017, social media companies faced widespread outrage from advertisers whose ads had appeared next to objectionable content. Dozens of companies, including major global advertisers such as AT&T, boycotted advertising on YouTube and elsewhere and demanded guarantees that the platforms were brand safe (Davies, 2017). To retain their advertising clients, the platforms were quick to create tools that allow marketing partners to review the placement of their ads and the content that they accompanied (Perez, 2017). The tools provided to advertisers—including machine learning to remove content, transparency centres, and human reviewers—resemble many of the initiatives social media companies cited in the US hearings as evidence of their proactive engagement with political misinformation. This section draws on industry coverage, including updates from trade groups such as the Interactive Advertising Bureau (IAB), to compare the two cases in more detail, and argue that the recycling in public hearings of steps taken to address commercial concerns indicates that, in some contexts, commercial actors may be able to drive the terms of platform governance more directly than policy processes.

This study examined IAB coverage, beginning from 20 March 2017, to establish a timeline of discussion and action between social media firms and advertisers during the brand safety crisis. Several developments outlined in the IAB’s coverage are of interest to this paper. The first is the quick reaction of the social media companies to threats of boycotts from some of their core stakeholders. Procter & Gamble announced a boycott of social media in early March 2017, and by 31 March Google had acknowledged the issues and created more conservative default advertising settings, as well as three new categories for excluding content. Where advertisers used to be able to opt out of running ads next to “sensitive subjects” and “tragedy and conflicts”, they could now avoid content that might be “sexually suggestive”, “sensational and shocking”, or that might contain “profanity and rough language” (Sloane, 2017; Schindler, 2017). In contrast, the senate hearings over Russian misinformation in US elections took place more than a year after the initial reporting of misinformation and nearly a year after then-president Obama announced sanctions and investigations into Russian interference in the election (Sanger, 2016). During that year, the affected social media platforms denied the influence of Russian activities on their services and downplayed the significance of political messaging on their platforms (Hudgins & Newcomb, 2017).

In a blog post outlining their responses to the brand safety crisis, Google articulated a dual responsibility to creators (including those with controversial views) and to advertisers. The commitments made by the company—to tighten safeguards by restricting ads to creators in YouTube’s partner programme as well as to re-examine what kind of content to allow on the platform—were heavily weighted towards making sure that any content that is monetised is uncontroversial. Controls for advertisers include defaulting ads to narrower, more heavily vetted content, new tools to allow management of which sites and what content can appear next to ads, more options to exclude higher risk content and a commitment to sink more resources into content review by hiring more content reviewers and creating new machine learning tools (Schindler, 2017). During the senate hearings, increasing human content moderation and addressing content through artificial intelligence and machine learning were prominent in social media platforms’ claims to be proactively addressing government concerns about political misuse of promotional tools. In addition to changing the defaults for advertisements, social media platforms, including YouTube and Facebook, opened their platforms to auditing by third parties closely affiliated with the IAB, such as the Media Rating Council, and made changes to YouTube’s Preferred Partner Program. The Preferred Partner Program was formerly defined purely by the level of engagement with its content. It has been adjusted to prioritise heavily vetted content specifically meant to be brand safe (Bardin, 2017). In the senate hearings, the platforms argued that the priorities of national governments are met through coordination between the platforms and government agencies, particularly with law enforcement agencies, and civil society groups.
During the brand safety crisis, advertising interests were able to address their concerns directly to social media companies and get nearly immediate results, including powerful tools that changed how content on the platform was monetised, most likely with knock-on effects for what content is recommended. In contrast, government concerns are subject to coordination between disparate agencies, globalised civil society groups (some of whom resent companies taking credit for their role in social media moderation; Russell, 2018), and the social media companies. Thinking in terms of the hierarchies and accounting procedures that define the operation of the megamachine, there is a direct chain of accounting between globalised advertising interests and tools made by social media companies, while national interests and policies are represented by a host of competing interests. The concerns of advertisers were not subject to dispute in the way that national concerns were. While the platforms did claim that ads next to objectionable content were minimal, they also made concrete changes quickly after concerns were raised.

The brand safety crisis on YouTube was dubbed the “adpocalypse” after many individuals saw revenues for their channels drop drastically (Burgess & Green, 2018). The fallout from YouTube’s efforts to address brand safety concerns landed particularly on educational and LGBTQ content—content more likely to be flagged as “sensational and shocking” or “sexually suggestive” and therefore not necessarily brand safe. The commitments made by YouTube were meant to reassure advertisers that they could resume advertising on social media platforms—which most have done, though there continue to be problems with advertisements appearing next to objectionable content. These commitments have strong implications for what content is monetised online, what is not, and how that is decided. The hazards of managing content this way are well articulated by Burgess and Green (2018), who argue that the role of advertisers in changing which topics are monetised on YouTube "problematically conflates sociocultural and political issues with commercial ones" (p. 151). The authors question whether brand safety initiatives will support diversity and inclusion when, in addressing violent and conspiratorial content, the adpocalypse also worked against sexual and gender minorities. It also showed a limited capacity for addressing the difference between inappropriate content and educational content, except in the case of professional media producers like the BBC. While advertisers have always been relatively conservative and risk-averse, the granular controls provided to them by social media allow them to more directly shape relationships between ads and content, and therefore the broader environment of content delivery.

This ability to shape content does not free advertisers or social media platforms from political pressure and social norms. Sometimes the closeness between social media and advertisers makes them targets for civil society efforts. In 2013, the Association for Progressive Communications and partners, including the Everyday Sexism Project, began communicating with Facebook advertisers whose content appeared next to images and text of violence against women. Within weeks of the campaign going public, the platform had taken action to address content that it had formerly resisted moderating (Fascendini, 2013; Levine, 2013; Pavan, 2017). The success of that campaign raises questions. For whom does that kind of pressure work? Advertisers are willing to boycott platforms on their own behalf and some are willing to act on behalf of other groups, such as women (an important consumer category) concerned about offensive content. Do political issues that are less commercially sensitive—LGBTQ media, Burmese speakers—have to take the “slow lane” of civil society and legal action? As Mueller (2015) demonstrates, it is a limited victory to win the legal right for a marginal group to speak if a platform cannot or does not continue to host that speech. There is no reason to think that any speech that is legal must also be monetisable. But if what is monetisable becomes the frontline of platform content governance, it is important to understand how those decisions are made and what their effects are.

Conclusion

During the senate hearings, more than one senator challenged the social media companies’ representatives to articulate their relationship to the nation in which they are based. As US democrat senator Amy Klobuchar, author of a bill that would harmonise advertising standards between online and offline media companies, remarked, any small radio station in the US is required to review every ad that runs on its station (Facebook, Google and Twitter Executives on Russian Disinformation, 2017, 2:55:59). The senate hearings are one of many recent examples of nations attempting to establish a clearer relationship between national priorities and online content. In the case of Klobuchar’s Honest Ads Act, establishing the relationship is a matter of extending previous standards for content to new media players, given that cultural infiltration by hostile powers is not “new or novel” (Facebook, Google, and Twitter Executives on Russia Election Interference, 2017a, 00:00:33). This paper used the theoretical concept of the megamachine, with Innis’ analysis of the newspaper industry as a template, to examine contemporary attempts to influence social media operations, following the chains of accounting and command between those directing the machine and those being directed by it. Both the newspaper industry and social media companies have a core product that acts as a “vendible commodity”, attracting audiences who, in turn, attract advertisers. However, where Innis connected the business model of the print industry with US cultural imperialism, social media platforms’ operations and major advertising clients are transnational and bridge many policy environments, complicating lines of accountability between the platforms and national governments.
At the same time, advertisers’ comparatively unified desire for targeted, brand safe content institutionalises closer relationships between advertising goals and the content moderation and monetisation policies of social media companies. Innis’ analysis embedded media in the political economy of trade relationships and cultural products exchanged between the US and Canada, but examining his work as a megamachine allows insights into the administration of media content governance, which appears to have shifted towards transnational institutions as social media communications technologies cover more of the globe. With the emergence of social media, there is a renewed interest in content governance by both nations and advertisers. However, the advertisers have a head start in centralising their interests and building the infrastructure to see their vision of content governance respected, positioning them to be effective frontline governors of social media content.

Comparing the Senate hearings with the brand safety controversy reveals correlating crises over content distribution online, stemming from inconsistently monitored placement of promoted materials. While this limited analysis is unable to reveal a core or causative relationship between the two, it does highlight similarities between actions presented to the US government in public hearings as proactive ways to address foreign interference and actions designed to assuage the concerns of advertisers over social media content. These similarities suggest a convergence between the actions needed to provide transparency in the advertising supply chain and the actions required to fulfill a public mandate for trustworthiness. Certainly, this comparison indicates that the architecture of social media companies is much more developed for meeting the concerns of advertisers than it is for regulators or public interest concerns. Advertising interests can demand concrete changes to social media platforms and see swift action to meet those demands, while even very powerful national governments may take the slow route of applying public pressure.

In her article on social media platforms as “the new governors”, Klonick (2017) argues that the loss of equal access and participation and the lack of direct platform accountability are major causes for concern, even as social media platform policies largely follow the outlines of US speech laws. This paper has used the scholarship on the megamachine to think through patterns of accountability between media systems and their stakeholders. It has argued that commercial actors are able to exert considerable pressure on social media content moderation, acting sometimes ahead of government policy processes, and that the criteria for that governance are not principles of speech and representation, but the fuzzier criteria of brand values and brand safety. Rather than direct accountability to users or policy, social media companies are accountable to a range of stakeholders, and advertisers are often at the front of the line. It is possible that the interests of advertisers can serve to curb dangerous or extreme speech on social media platforms. Cunningham and Craig (2019), for instance, suggest advertisers may encourage better democratic norms in online communication “because most brands and advertisers will not tolerate association with such affronts to civility and democracy” (p. 8). It seems unlikely that the conservative nature of advertisers and brands is a substitute for governance of online spaces by regulators outside of the private sector, however—particularly as advertisers have limited investment in small countries, minority populations, and political communication. While the social media megamachine appears well designed to administer the interests of advertisers in content delivery, it is less efficient in facilitating governance in other contexts—taking considerably longer to acknowledge and respond to democratic concerns over content.

Globally, there have been recent attempts to define the power of social media companies and to strengthen their accountability to national media policy regimes. Indicative of this tendency is the General Data Protection Regulation (GDPR), which came into force in May 2018 and is to date the most far-reaching move to directly regulate the actions of social media companies. However, representatives of the EU parliament indicated in a May 2018 hearing with Mark Zuckerberg that it is unlikely that the GDPR will change the business model of Facebook and other social media companies (EURACTIV, 2018). Advertisers have quickly adapted to the changes, shifting their spending to publishers with accurate first-party data (from memberships and mailing lists, for instance) and those with established reporting systems (Seb, 2018; Holton, 2018). While requirements for individual consent are stricter, the desire for large networks of circulation and brand-safe content may serve to entrench the power of established players who have the resources to ensure compliance and the long-term users who must agree to the terms to continue to use the service. Advertisers will spend money only in supply chains that are verified—that have audiences the advertisers know they can target both safely and legally (Holton, 2018). As such, the influence of advertiser preferences on their publishing partners will continue to strongly affect how content is moderated and monetised, even under this stricter regulatory burden. Recently, Facebook founder Mark Zuckerberg has argued for a more direct, globally standardised watchdog of public interests in the governance of content (Zuckerberg, 2019). Such a body might correct for the diffuse nature of national policy-making in comparison to the coherent agenda of advertising interests. Gillespie (2018) called Facebook “two intertwined networks, content and advertising, both open to all” (p. 203).
Perhaps social media governance needs to acknowledge a similar division in its stakeholders and match the influence of the advertising industry with a transnational institution for political governance that addresses the democratic interest in social media content.

References

Alphabet, Inc. (2017). Annual Report 2016. Retrieved from https://www.sec.gov/Archives/edgar/data/1652044/000165204417000008/goog10-kq42016.htm

Ardia, D. S. (2009). Free speech savior or shield for scoundrels: An empirical study of intermediary immunity under Section 230 of the Communications Decency Act. Loyola of Los Angeles Law Review, 43, 373-506. Available at https://scholarship.law.unc.edu/faculty_publications/37/

Bakshy, E., Messing, S., & Adamic, L. A. (2015). Exposure to ideologically diverse news and opinion on Facebook. Science, 348(6239), 1130-1132. doi:10.1126/science.aaa1160

Banet-Weiser, S. (2012). AuthenticTM: The politics of ambivalence in a brand culture. New York: NYU Press.

Bardin, A. (2017, March 20). Strengthening YouTube for advertisers and creators. YouTube Creator Blog. Retrieved from https://youtube-creators.googleblog.com/2017/03/strengthening-youtube-for-advertisers.html

Bennett, W. L., & Iyengar, S. (2008). A new era of minimal effects? The changing foundations of political communication. Journal of communication, 58(4), 707-731. doi:10.1111/j.1460-2466.2008.00410.x

Berland, J. (1997). Space at the margins: Colonial spatiality and critical theory after Innis. TOPIA: Canadian Journal of Cultural Studies, 1(1). doi:10.3138/topia.1.55

Brodmerkel, S., & Carah, N. (2016). Brand machines, sensory media and calculative culture. London: Palgrave Macmillan. doi:10.1057/978-1-137-49656-0

Burgess, J. (2015). From ‘broadcast yourself’ to ‘follow your interests’: Making over social media. International Journal of Cultural Studies, 18(3), 281–285. doi:10.1177/1367877913513684

Burgess, J., & Green, J. (2018). YouTube: Online Video and Participatory Culture. Cambridge: Polity Press.

Buxton, W. J. (1998). Harold Innis' excavation of modernity: the newspaper industry, communications, and the decline of public life. Canadian Journal of Communication, 23(3). doi:10.22230/cjc.1998v23n3a1047

Carey, J. W. (1967). Harold Adams Innis and Marshall McLuhan. The Antioch Review, 27(1), 5–39. doi:10.2307/4610816

Couldry, N., & Turow, J. (2014). Advertising, big data and the clearance of the public realm: marketers’ new approaches to the content subsidy. International Journal of Communication, 8, 1710–1726. Retrieved from https://ijoc.org/index.php/ijoc/article/view/2166

Croteau, D., & Hoynes, W. (2006). The Business of Media: Corporate Media and the Public Interest. Thousand Oaks, CA: Pine Forge Press.

Davies, J. (2017, April 4). The YouTube ad boycott concisely explained. Retrieved February 18, 2019, from https://digiday.com/uk/youtube-ad-boycott-concisely-explained/

EURACTIV. (2018). Mark Zuckerberg’s full meeting with EU Parliament leaders. Retrieved from https://www.youtube.com/watch?v=o0zdBUOrhG8

Facebook Inc. (2018). Facebook Annual Report 2017. Retrieved from https://s21.q4cdn.com/399680738/files/doc_financials/annual_reports/FB_AR_2017_FINAL.pdf

Facebook, Google and Twitter Executives on Russian Disinformation: Hearing before the Senate Judiciary Subcommittee on Crime and Terrorism, Senate, 115th Cong. (2017). Retrieved from https://www.c-span.org/video/?436454-1/facebook-google-twitter-executives-testify-russia-election-ads

Facebook, Google, and Twitter Executives on Russia Election Interference: Hearing before the House Select Intelligence Committee, House, 115th Cong. (2017a). Retrieved from https://www.c-span.org/video/?436362-1/facebook-google-twitter-executives-testify-russias-influence-2016-election

Facebook, Google, and Twitter Executives on Russia Election Interference: Hearing before the Senate Select Intelligence Committee, Senate, 115th Cong. (2017b). Retrieved from https://www.c-span.org/video/?436360-1/facebook-google-twitter-executives-testify-russias-influence-2016-election

Fascendini, F. (2013, May 24). How funny is this, Facebook? Retrieved February 26, 2019, from Association for Progressive Communications Website https://www.apc.org/en/news/how-funny-facebook

Gallagher, R. (2018, August 1). Google Plans to Launch Censored Search Engine in China, Leaked Documents Reveal. The Intercept. Retrieved August 14, 2018, from https://theintercept.com/2018/08/01/google-china-search-engine-censorship/

Gehl, R. W. (2014). Reverse engineering social media: Software, culture, and political economy in new media capitalism. Philadelphia: Temple University Press.

Gillespie, T. (2018). Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media. New Haven, CT: Yale University Press.

Gorwa, R. (2019). What is platform governance? Information, Communication & Society, 22(6), 854-871. doi:10.1080/1369118X.2019.1573914

Ha, A. (2017, December 21). Digital ad spend grew 23 percent in the first six months of 2017, according to IAB. Techcrunch. Retrieved from https://techcrunch.com/2017/12/20/iab-ad-revenue-report-2017/

Helmond, A. (2015). The platformization of the web: Making web data platform ready. Social Media + Society, 1(2). doi:10.1177/2056305115603080

Helmond, A., Nieborg, D. B., & van der Vlist, F. N. (2017). The Political Economy of Social Data: A Historical Analysis of Platform-Industry Partnerships. Proceedings of the 8th International Conference on Social Media & Society - #SMSociety17, 1–5. doi:10.1145/3097286.3097324

Holton, K. (2018, August 23). Europe’s new data law upends global online advertising. Reuters. Retrieved from https://ca.reuters.com/article/businessNews/idCAKCN1L80HW-OCABS

Hudgins, J., & Newcomb, A. (2017, November 1). Google, Facebook, Twitter and Russia: A timeline on the ’16 election. NBC News. Retrieved February 18, 2019, from https://www.nbcnews.com/news/us-news/google-facebook-twitter-russia-timeline-16-election-n816036

Innis, H. A. (2007). Empire and communications. Lanham, MD: Rowman & Littlefield.

Innis, H. A. (2008). The bias of communication (2nd ed.). Toronto: University of Toronto Press.

Isaac, M. (2016, November 22). Facebook said to create censorship tool to get back into China. The New York Times. Retrieved from https://www.nytimes.com/2016/11/22/technology/facebook-censorship-tool-china.html

Karppi, T. (2018). Disconnect: Facebook’s affective bonds. Minneapolis: University of Minnesota Press.

Klonick, K. (2017). The new governors: The people, rules, and processes governing online speech. Harvard Law Review, 131, 1598-1670.

Lapowsky, I. (2017, September 27). Facebook’s crackdown ahead of German election shows it’s learning. Wired. Retrieved from https://www.wired.com/story/facebooks-crackdown-ahead-of-german-election-shows-its-learning/

Latour, B. (1999). Pandora's hope: Essays on the reality of science studies. Cambridge, MA: Harvard University Press.

Levine, M. (2013, May 28). Controversial, harmful and hateful speech on Facebook. Retrieved February 26, 2019, from https://www.facebook.com/notes/facebook-safety/controversial-harmful-and-hateful-speech-on-facebook/574430655911054

Medeiros, B. (2017). Platform (non-)intervention and the “marketplace” paradigm for speech regulation. Social Media + Society, 3(1), doi:10.1177/2056305117691997

Molotch, H. (1976). The city as a growth machine: Toward a political economy of place. American journal of sociology, 82(2), 309-332. doi:10.1086/226311

Mueller, M. L. (2015). Hyper-transparency and social control: Social media as magnets for regulation. Telecommunications Policy, 39(9), 804–810. doi:10.1016/j.telpol.2015.05.001

Mumford, L. (1966). The first megamachine. Diogenes, 14(55), 1-15. doi:10.1177/039219216601405501

Nieborg, D. (2017, November 10). Facebook messenger and the political economy of platforms. Presentation given as part of the Marketing Research Seminar Series presented by the Schulich School of Business at York University.

Nipps, K. (2014). Cum privilegio: Licensing of the press act of 1662. The Library Quarterly: Information, Community, Policy, 84(4), 494–500. doi:10.1086/677787

Pavan, E. (2017). Internet intermediaries and online gender-based violence. In M. Segrave & L. Vitis (Eds.), Gender, Technology and Violence (pp. 62–79). Taylor & Francis. doi:10.4324/9781315441160-5

Perez, S. (2017, December 5). YouTube promises to increase content moderation and other enforcement staff to 10k in 2018. Techcrunch. Retrieved from https://techcrunch.com/2017/12/05/youtube-promises-to-increase-content-moderation-staff-to-over-10k-in-2018/

Reuters. (2017, July 28). Why Google and Facebook prove that online advertising is a duopoly. Fortune. Retrieved from http://fortune.com/2017/07/28/google-facebook-digital-advertising/

Rinallo, D., Basuroy, S., Wu, R., & Jeon, H. J. (2013). The media and their advertisers: Exploring ethical dilemmas in product coverage decisions. Journal of Business Ethics, 114(3), 425–441. doi:10.1007/s10551-012-1353-z

Russel, J. (2018, April 6). Myanmar group blasts Zuckerberg’s claim on Facebook hate speech prevention. Techcrunch. Retrieved February 26, 2019, from http://social.techcrunch.com/2018/04/06/myanmar-group-blasts-zuckerbergs-claim-on-facebook-hate-speech-prevention/

Schindler, P. (2017, March 20). Expanded safeguards for advertisers. Retrieved from https://blog.google/topics/ads/expanded-safeguards-for-advertisers/

Seb, J. (2018, June 25). A month after GDPR takes effect, programmatic ad spend has started to recover. Digiday. Retrieved August 18, 2018, from https://digiday.com/marketing/month-gdpr-takes-effect-programmatic-ad-spend-started-recover/

Shtern, J. (2009). Global internet governance and the public interest in communication (Unpublished doctoral dissertation). Université de Montréal, Montréal.

Sloane, G. (2017, March 17). As YouTube tinkers with ad formula, its stars see their videos lose money. Adage. Retrieved from http://adage.com/article/digital/youtube-feels-ad-squeeze-creators/308489/

Smith, G. (2017, January 25). Newspapers scale back Facebook and Snapchat content as meagre advertising returns disappoint. The Independent. Retrieved from http://www.independent.co.uk/news/business/news/newspapers-facebook-snpachat-adverts-meagre-returns-news-media-outlets-social-media-money-earnings-a7545331.html

Teich, D. (2017, June 14). How Youtube handled its brand safety crisis. Digiday. Retrieved from https://digiday.com/marketing/youtube-handled-brand-safety-crisis/

Trapp, F. (2016, April 15). Algorithm and advertising: The real impact of Instagram’s changes. Adweek. Retrieved from http://www.adweek.com/digital/francis-trapp-guest-post-instagram-algorithm/

Turow, J. (2012). The daily you: How the new advertising industry is defining your identity and your worth. New Haven, CT: Yale University Press.

Twitter, Inc. (2018). Annual report 2018. Retrieved from http://files.shareholder.com/downloads/AMDA-2F526X/6366391326x0x976375/0D39560E-C8B5-4BA0-83C4-C9B5C88D4737/TWTR_2018_AR.pdf

Valentino-Devries, J. (2018, January 31). Facebook’s experiment in ad transparency is like playing hide and seek. ProPublica. Retrieved from https://www.propublica.org/article/facebook-experiment-ad-transparency-toronto-canada

van Dijck, J. (2013). The culture of connectivity: A critical history of social media. Oxford: Oxford University Press.

Wu, T. (2016). The attention merchants: The epic scramble to get inside our heads. New York: Knopf.

Yeung, K. (2017). ‘Hypernudge’: Big data as a mode of regulation by design. Information, Communication & Society, 20(1), 118–136. doi:10.1080/1369118X.2016.1186713

Communication and internet policy: a critical rights-based history and future


Papers in this Special Issue

Communication and internet policy: a critical rights-based history and future
Aphra Kerr, Francesca Musiani & Julia Pohle

Data and digital rights: recent Australian developments
Gerard Goggin, Ariadne Vromen, Kimberlee Weatherall, Fiona Martin, & Lucy Sunman

Operationalising communication rights: the case of a “digital welfare state”
Marko Ala-Fossi, Anette Alén-Savikko, Jockum Hilden, Minna Aslama Horowitz, Johanna Jääsaari, Kari Karppinen, Katja Lehtisaari, & Hannu Nieminen

Counter-terrorism in Ethiopia: manufacturing insecurity, monopolizing speech
Téwodros W. Workneh

Empire and the megamachine
Stephanie Hill

Beyond ‘zero sum’: the case for context in regulating zero rating in the global South
Guy Thurston Hoskins

March 2019 was the 30th anniversary of the World Wide Web (WWW). Tim Berners-Lee, inventor of the WWW, called this year “a moment to celebrate how far we’ve come, but also an opportunity to reflect on how far we have yet to go”. He argued that the future development of the internet would require governments, the private sector and internet users to share responsibility:

Governments must translate laws and regulations for the digital age. They must ensure markets remain competitive, innovative and open. And they have a responsibility to protect people’s rights and freedoms online. (...) Companies must do more to ensure their pursuit of short-term profit is not at the expense of human rights, democracy, scientific fact or public safety. Platforms and products must be designed with privacy, diversity and security in mind. (...) And most important of all, citizens must hold companies and governments accountable for the commitments they make, and demand that both respect the web as a global community with citizens at its heart. (Berners-Lee, 2019)

With this call, Berners-Lee - someone whose work has significantly contributed to making the internet a communication and information space accessible to many - publicly acknowledged the importance of policy and regulation for the future development of the internet. While internet exceptionalists like John Perry Barlow declared the independence of cyberspace from governments’ control and intervention 23 years ago, a number of scholars have noted that the internet today is already the product of a myriad of state (democratic and authoritarian), corporate, civil society and user choices (Mansell, 2012). Today it is commonly accepted by scholars and users that internet regulation takes place in different forms and at different levels. It is also a common concern that policy and regulation - if motivated by the wrong intentions or poorly implemented - can cause fundamental harm to the global network, its technical infrastructure, its business environment and, most importantly, its users.

Many still insist that internet policies should primarily be made and implemented by technical experts, businesses or, ideally, a global multistakeholder community, and warn of the detrimental effects that the self-interests of nation states can have on the global network (Mueller, 2019). Indeed to a certain extent many areas of internet policy have been ‘privatised’, in the sense that private companies are playing a significant role (Curran, Fenton & Freedman, 2012). Yet, in many democratic countries around the world, the public is increasingly looking to their national governments and regional bodies, like the European Commission, for regulatory solutions to internet-related problems, such as data protection, misinformation, illegal content, freedom of speech, net neutrality and others. Even Mark Zuckerberg, founder and chief executive of Facebook, recently called on lawmakers and regulators to strive for stronger regulation of the global internet (Zuckerberg, 2019). Whatever the motivation for such a demand, Zuckerberg’s reasoning is similar to Berners-Lee’s: Only by carefully (re)shaping the rules of the internet will it be possible to protect the rights of users and preserve digital networks as a means for personal communication, public debate and the exchange of information.

Despite these calls for more and better internet policy and regulation, and the increasing spread and importance of the internet as a global communication infrastructure and platform for information services, internet policy research is still a niche topic within media and communication research. In general, the social sciences and the humanities have been slow to take up internet policy as a research field (Brosda, 2015; Dutton, 2018), with the result that the community of internet policy scholars continues to be rather small, internationally scattered, multi-disciplinary and diverse in its conceptual and methodological approaches. At the same time, it is important at this juncture that the development of the next generation of internet policies and instruments is informed by scholars with expertise in developing, implementing and evaluating communication policy in the public interest, and is informed by existing laws, policies and organisations that work to protect individual and collective rights.

This special issue was formulated by the chairs of the Communication Policy and Technology (CPT) section of the International Association for Media and Communication Research (IAMCR) in discussion with the managing editor of Internet Policy Review. It aims to specifically cross some of the international and disciplinary boundaries facing internet policy researchers, and contribute to the development of internet policies that operate in the public interest. CPT has been a platform for researching telecommunications policy and infrastructure since 1974, when the section was formed by Prof. Dallas Smythe from Simon Fraser University, in Canada. By 1990 “policy” was explicitly added to the section name under the leadership of Prof. Robin Mansell, now at the London School of Economics and Political Science (LSE) in the UK. By the mid-1990s the internet had become an important consideration for members working on communication policy, regulation and users. In the last few years it has dominated. The section encourages both theoretically robust and empirically informed research on the role that communication policy plays in relation to balancing digital service innovation with human rights and social justice. It also encourages critical and actively engaged research and researchers.

This special issue is an important opportunity for CPT members to bring their work into conversation with the Internet Policy Review readership. While CPT members have been actively involved in studying global internet governance and policy initiatives, they also provide significant analysis of internet policies at regional and national levels, including evaluating how policies work in practice in different contexts. This issue brings together studies of internet policy and its impacts on citizens and consumers in Europe, Australia, the Americas, and Africa. While Internet Policy Review has a focus on inter- and pan-European contexts, our contributors make critical connections from their work to the European context where appropriate. In this introduction we provide a brief overview of recent trends and key conceptual frameworks used in internet policy research. We then provide a more detailed overview of the papers in our special issue and in particular the normative and practical challenges of assessing the impact of internet policy and governance instruments on individual and collective rights.

Internet policy research: frameworks, actors and emerging concerns

While internet policy research and researchers come from multiple disciplines, and apply a range of theories and methods, they share a commitment to understanding the practice of governing the internet as a global and national infrastructure; the impact of public and private regulation on internet-based economies, communities and cultures; and the rights, responsibilities, norms and principles invoked by users and non-users. While some internet policy researchers evaluate policies developed by state actors, private companies and the institutions involved in formal political decision-making processes, others investigate alternative forms of commons-based governance as well as activist and civil society initiatives. Still others are concerned with the problem definitions, discourses, laws, principles and imaginaries that precede and inform policy decisions, policy debates and policy-making.

Internet policy research has applied a range of analytical and conceptual frameworks from various disciplines and research traditions over the past two decades. In order to assess the roles of actors in internet policy-making, communication scholars have used theories, concepts and methodological tools from political economy, social movement studies, network analysis, actor-network theory, domestication theory, field theory, regime theory and also more classic approaches of policy analysis, such as the advocacy coalition framework (e.g., Mathiason, 2008; Milan, 2015; Pavan, 2012; Pohle, Hösl, & Kniep, 2016). For the analysis of discourses, interests and strategies in internet policy related debates, researchers have also deployed “post-positivist” approaches such as discourse network analysis, interpretive policy analysis and online social network analysis (e.g., Epstein, Nisbet, & Gillespie, 2011; O’Rourke & Kerr, 2017; Pohle, 2018). More recently, scholars have started to focus on the role of institutional frameworks for internet policy and the interrelations of regulatory practices and institutions, building for example on neo-institutional theories such as historical and sociological institutionalism (e.g., Bannerman & Haggart, 2015; Galperin, 2004; Puppis, 2010). In addition, approaches from the field of Science and Technology Studies (STS) have frequently been mobilised to analyse the role, in internet policy and governance, of the “mundane practices” of all those involved in providing and maintaining, hacking and undermining, developing, testing, and using the network of networks (see e.g. the Internet Policy Review dedicated issue, edited by Epstein, Katzenbach & Musiani, 2016).

Deploying this broad range of conceptual approaches, initially much of the internet policy-related research in communication and other disciplines focused on the global nature of digital networks. Scholars tried to understand the regulatory challenges caused by the transnational character of the internet and its services, by analysing the actors and institutions involved in its coordination and regulation, in particular that of its technical infrastructure. Indeed, much of the early research work was related to the historical processes leading up to the institutionalisation of the internet’s coordination and policy-making mechanisms, such as the administration of the Internet Domain Name System (DNS) and its institutionalisation in the Internet Corporation for Assigned Names and Numbers (ICANN) (e.g., Christou & Simpson, 2007; Klein, 2000) or the World Summit on the Information Society (WSIS) and its culmination in the creation of the Internet Governance Forum (e.g., Frau-Meigs, Nicey, Palmer, Pohle, & Tupper, 2012; Raboy, Landry, & Shtern, 2010; Padovani, 2004; Sarikakis, 2004). Another strand of research examined how the internet as a global infrastructure could constitute not only a target of governance, but also be used as an instrument of governance in and of itself, by inscribing particular models, constraints and opportunities into the internet’s technical architecture (e.g., DeNardis, 2009; Braman, 2016). Finally, during this period researchers questioned the uneven diffusion of, and access to, the internet across regions and countries. Internet policy research succeeded in moving the debate beyond a focus on ‘access’ to technology to thinking about the skills, resources and capabilities required to use the internet and the unequal user patterns that were emerging.
Scholars have contributed to, and continue to explore, how internet policies and technologies are implicated in patterns of inclusion and exclusion in contemporary societies, often in highly gendered terms (e.g., Padovani & Shade, 2016; Stevenson, 2009).

More recently, in light of the increasingly complex policy frameworks regarding internet-related issues at the national level, the regulatory attempts by national or regional authorities have come to the fore amongst internet policy scholars (e.g., Collins, 2006; Löblich & Karppinen 2014; Pohle & Van Audenhove, 2017; see also the Internet Policy Review issue on Australian internet policy, edited by Daly & Thomas, 2017). The majority of the empirical analyses in the field of media and communication research takes the form of case studies on particular policy issues, such as data protection and privacy, copyright, security, digital literacy, net neutrality, content regulation and increasingly also data regulation (e.g., Kruschinski & Haller, 2017; Meyer, 2012; Mukerjee, 2016; Pierson, 2012; Powell & Cooper, 2011; Van Audenhove, Vanwynsberghe, & Mariën, 2018). Scholars have also focused on particular groups of actors involved in national internet policy-making, for instance activists, internet intermediaries or political parties (e.g. Breindl & Briatte, 2013; DeNardis & Hackl, 2015; Löblich & Wendelin, 2012; Macq & Jacquet, 2018). Others have analysed the growing number of initiatives regarding national charters for internet rights, such as the comprehensive Marco Civil framework in Brazil or similar initiatives in Europe (e.g., Cristofoletti, 2015; Gill, Redeker, & Gasser, 2015; Padovani & Santaniello, 2018). Very recently, the trend towards a stronger securitisation and surveillance of the online space has led scholars to analyse regulatory competences for cyber security and related discourses in various countries (e.g., Hintz & Dencik, 2016; Maréchal, 2017; Tréguer, 2017; Zeng, Stevens, & Chen, 2017).

The internet today is a taken-for-granted aspect of everyday life for many people. Despite its varying quality, many human activities take place on and via the internet (Bortzmeyer, 2019). This has prompted explicit discussions of the relationship between rights, values and the internet among technology practitioners, researchers and policymakers alike. Some of these issues are widely discussed, such as those related to how particular services, such as Facebook, fight misinformation; others are far less visible and receive much less public scrutiny, including how rights and values relate to protocols and infrastructure. The relationship between human rights and internet protocols is under scrutiny in a number of political and technical arenas (e.g. the Internet Research Task Force and its Human Rights Protocol Considerations research group). The relationship between emerging artificial intelligence technologies, such as machine learning, and ethics, broadly defined, is also a key policy issue at the European level, and these technologies are increasingly embedded in many of the tools developed and used by major corporations to govern user behaviour online. In 2019 the European High Level Expert Group on Artificial Intelligence (AI), established by the European Commission, released a set of Ethics Guidelines for Trustworthy AI to encourage technology companies to consider how their AI tools might impinge upon fundamental rights. Key issues include human autonomy, fairness, accountability, privacy, discrimination and diversity. This, and similar ethics initiatives, were prompted by the realisation that the design and training of algorithmic and artificial intelligence tools may introduce highly discriminatory practices which are hard to evaluate, trace and regulate after the fact (e.g., Dencik, Hintz, & Carey, 2017; Eubanks, 2018).
For some internet policy scholars a reliance on ethics alone is not sufficient, and more robust frameworks and legislation may be required. For others, ethics can be a useful mode of critique and counterbalance to the securitisation and platform capitalism discussions (e.g. Lyon, 2014). Yet, emerging guidelines tend to focus on ethics at the individual level rather than the collective or public interest values and rights, and fail to differentiate between ethics in different contexts. The policy issues may become even more complicated as AI becomes embedded in the ‘Internet of Things’ and our devices fade into the background of our everyday environments (e.g. Kitchin & Dodge, 2011).

While the contexts are changing the fact that new technologies pose challenges to fundamental rights is not new. The extent to which, and how, technical artefacts are imbued with political issues, in a broad sense, is a much-debated issue in the history of technology. When it comes to the internet, the issue has perhaps been best summarised by Lawrence Lessig’s “code is law” (1999) and its numerous offspring. Among them, Laura DeNardis’ work (2009) has arguably been a pioneer in examining, with concepts and methods derived from STS, how protocols are political. Indeed, despite being difficult to grasp because they are intangible and often invisible to internet users, protocols have political value as they control global flows of information, influence economic competitiveness of nations and their ability to compete fairly, and often make decisions “by proxy” that influence online civil liberties and a number of individual rights, including, for example, the access to knowledge (DeNardis, 2009, p. 6).

Over the past two decades, the work of historians, philosophers and social scientists has shown that values have always entered the design of technological infrastructure; internet engineers have been no exception, asking themselves questions not only about technical optimisation but also on what it meant to build protocols that fostered individual privacy, accessibility for persons with disabilities, and other public interest concerns (Russell, 2014; Nissenbaum, 2001; Braman, 2011). Recent work has also examined how infrastructures of internet governance have become politicised and made to carry out operations that bear very little resemblance to the core technological objective of the system, and how this has unintended consequences for the stability and security of the internet, as well as human rights online (Musiani, Cogburn, DeNardis, & Levinson, 2016). Even more recent contributions have argued that there is a “human rights gap” in internet policy, inasmuch as human rights are public - given that so far only state actors can be held responsible for not respecting them - while internet architecture is mainly privately owned or privately operated despite holding an important mediating/governing function for human rights online (Zalnieriute & Milan, 2019).

Thus, emerging issues related to the interplay of the internet, rights, and values are numerous, and the existing evidence suggests that the issues vary from country to country and region to region. Currently, the spread of misinformation and disinformation online and its implications for the democratic process, and the need to balance freedom of expression, surveillance and privacy, are significant policy issues. Also of increasing importance is how to achieve cultural and content diversity on increasingly centralised digital services. The solutions will require, as Berners-Lee suggested, sharing responsibilities between transnational institutions, governments, corporations and users, but the instruments and policies that will achieve an acceptable balance between communication rights and values for all these actors are far from obvious.

IAMCR and communication policy research beyond critique

IAMCR was founded in 1958 under the aegis of UNESCO, the UN agency in charge of education, science, culture and communication. The association’s objective was to provide an international forum for researchers concerned with the importance of freedom of information and communication in journalism and mass media. Since then human rights, democratic participation, diversity, gender equality and asymmetries of power have been central to its work. During its 60 years of existence, IAMCR not only kept a close link with UNESCO but also collaborated with a range of transnational policy institutions to inform and critique policy in the areas of broadcasting, journalism and telecommunications. Its members have long been collaborating with state and civil society to develop and deploy communication policies in the public interest. As such, they were often involved in international policy debates on communication rights and the role of media and quality journalism for society, including in the Global South. For instance, in the early 1980s several IAMCR members, including its then president and vice-president, contributed background papers to the work of the MacBride Commission (Nordenstreng, 2008, p. 240). This group was commissioned by UNESCO to study imbalances in global communication flows and create a scientific base for a New World Information and Communication Order (NWICO). In 2003 and 2005, many IAMCR members were participants and researchers involved in the two phases of the UN’s World Summit on the Information Society (WSIS) - the first global conference discussing the chances and challenges of digital connectivity for the developing world and the controversial question of how the global internet infrastructure should be governed. In 2015, IAMCR established a clearinghouse for public statements on media and communication issues and academic freedom. Some of these statements have involved internet policy, including most recently the impact of online disinformation on democratic elections.

Members of IAMCR have long argued that policy makers and regulators need to attend to the internet as a socio-technical system. Mansell (2012) for example has documented the competing social imaginaries dominating the development of the internet: a market-led approach which aims to limit regulation by state and other actors, and an information commons imaginary which also favours limited regulation. Despite the widespread discourse that the internet is ungovernable due to its decentralised and non-hierarchical network structure, academics have for many years uncovered a range of ways in which the infrastructure, services and content are governed from above and below in favour of particular interests. To date, internet policies in many countries are dominated by state and corporate actors. What is of concern to social science theorists is the lack of transparency and responsibility in these governance arrangements and processes, and the lack of fora where alternative approaches in the public interest can be developed (Helberger, Pierson, & Poell, 2018; Kerr, De Paoli, & Keatinge, 2014).

Of course, some transnational internet policy initiatives have emerged in the last two decades. After their involvement in WSIS, IAMCR members were attentive to the rise of the Internet Governance Forum (IGF), a global venue for multi-stakeholder policy discussions on internet-related issues that the United Nations created as one of the WSIS outcomes in 2006; after being hosted by UNESCO in Paris in 2018, the fourteenth meeting of this forum takes place in Berlin in 2019. In 2015, IAMCR members also contributed to the 10-year review of the World Summit on the Information Society (WSIS+10) and the renewed discussions on communication rights, imbalances and global challenges in the digital age. Since 2016, IAMCR has hosted a series of special sessions on UNESCO’s recent initiative to develop the concept of ‘Internet Universality’. This concept highlights the importance of grounding the future development of the internet in the principles of human rights, openness, accessibility and multi-stakeholder participation; it also links internet development to sustainable development and to distributing more widely the benefits of the knowledge society. In 2018 UNESCO released its Internet Universality Indicators, which were developed based on the inputs of researchers around the world, including IAMCR members, and provide an instrument for stakeholders to conduct national assessments of internet development. It remains to be seen what impact such initiatives have on policies and practice.

The Communication Policy & Technology (CPT) section is the venue within IAMCR where empirical and theoretical policy research on communication and technology-related aspects is featured most prominently. It has been chaired by a number of international academics from Europe, the Americas and Asia, some of whom have also worked as policymakers, or closely with communication policymakers (e.g., Dunn, 2010; Melody, 1996; 1999; Samarajiva, 1994). Initially dominated by work on satellite, telecommunications and broadcast systems, since the 1990s members have been actively concerned with how the internet impacts on the rights of citizens and consumers globally. This focus on ‘policy in practice’ means that the section encourages papers that bridge theory and practice, that critically engage with the impact of internet policies and that, sometimes, provide recommendations for political action and policymakers (e.g., Mansell, 2011; Frau-Meigs, 2012). It encourages work that evaluates the roles of different institutions and the interaction between top-down and bottom-up perspectives (e.g. Michalis, 2007). Members evaluate the effectiveness of multi-stakeholder practices and the reality of activism by civil society (e.g., Cammaerts & Carpentier, 2005; Hintz & Milan, 2009). The section also organises joint sessions with other sections, including the law section and the Global Media Policy Working Group. Its members collaborate and are involved in IAMCR’s most visible publication projects, such as the Handbook of Global Media and Communication Policy (Mansell & Raboy, 2011), which included chapters on the emerging conceptual and methodological challenges posed by the internet for media and communication researchers.

Special issue: internet policy and practice around the world

This special issue presents papers from the CPT section of the annual IAMCR conference which took place in Eugene, Oregon, USA in 2018. The conference theme was ‘Reimagining Sustainability: Communication and Media Research in a Changing World’, a theme which resonates with the UN Sustainable Development Goals. We invited all internet policy related papers to contribute to this special issue and after three rounds of open peer review we are delighted to present a sample of the excellent scholarship in the CPT section.

As IAMCR is an interdisciplinary organisation, the papers in this issue come from social, political and communication sciences. The selection includes papers by emerging scholars as well as more established academics. They are empirically grounded in different regions - from Africa, America, Australia, and Europe - and while limited in number, we are certain that they contribute to our understanding of how rights and values are refracted through different economic, political and cultural perspectives, internet policies and networked forms of governance. In particular, these papers seek to identify the logics and values embedded in state, corporate and non-governmental policies and practices, and the varying impact of these policies and practices on citizens and consumers. Further, we find that civil liberties and social values often get narrowed to legal and technocratic principles or measurable values. This has varying implications for citizen agency in situations often characterised by power asymmetries and significant resource differentials.

The five papers presented in this special issue share a common preoccupation. All of them put at the core of their work the potential impact that communication technology-based networked systems and policies can have on users - in their multiple facets of consumers, citizens and policy targets - and on their rights vis-à-vis other actors and stakeholders in such systems, including private companies, civil society organisations, and of course the state in its different forms and instances. Taken together, the articles contribute to the complex portrait of internet policy today by questioning the impact of state policies on citizen rights, in particular surveillance and privacy-related ones, and the ways in which it can be measured; addressing the impact of lateral and state oversight of company policies and activities; observing the different shapes and configurations taken by networked forms of governance; and finally, examining how civil liberties, social justice and human rights perspectives can still unfold in today’s increasingly centralised internet, and how best to safeguard them.

In their article, Gerard Goggin and his colleagues address the specificities of the digital rights debates in Australia, while, at the same time, situating them in a broader context of global discussions about data privacy and the means to enforce it. Taking as a case study two recent Australian policy developments, the “Digital Platforms Inquiry” and the development of a consumer data right, the article makes a case for the importance of national contexts in assessing how digital rights can be enacted. It emphasises the necessity for states to engage seriously with citizens regarding their knowledge, expectations and experience of digital rights as a crucial component of law-making.

Another article which focuses on how digital rights are defined and operationalised in a particular national context is authored by Marko Ala-Fossi and colleagues. This paper proposes a model to analyse the concept of communication rights that is based on understanding the changing role of communication in a social democracy, using Finland as a country case study. The authors pay specific attention to four core rights – access, availability, dialogicality, and privacy - and they examine how these rights are negotiated in the development of digital services at four levels: the regulatory, the public sector, the commercial, and the citizen-consumer. In the Finnish context we see an evolving and distinctive approach to privacy and the ‘epistemic commons’, but also a clear tension between established communication rights in this social democratic state, and emerging commercial incentives, European privacy legislation and citizen consumer online activities.

The question of the impact of state-led internet policy on digital rights is also at the heart of Téwodros Workneh’s article, which focuses on the dialectic between surveillance and freedom of expression in Ethiopia. Situating the research in the neopatrimonial state framework, the article discusses how a counter-terrorism legal instrument, promulgated in 2009, has become a way for the Ethiopian state to stifle freedom of expression involving mediated communication, especially on digital platforms. By means of this case study, Workneh draws broader lessons on the impact of counter-terrorism laws on freedom of expression globally, and on the consequences of this phenomenon for internet policy.

The core policy role of digital platforms for civil liberties is an issue also tackled in Steph Hill’s contribution. Blending political economy and media studies approaches, this paper seeks to understand how private sector-led governance can gain prominence with respect to state-led policy by examining two controversies over social media content that happened in 2017: the so-called “adpocalypse”, a hiatus in several prominent companies’ advertising on social media platforms due to the co-location of their online advertisements with problematic content, and the first public hearings over Russian operatives disseminating misinformation in relation to the 2016 US presidential elections. Social media companies’ actions, the author warns, indicate an expanded role for marketing and advertising firms as “controllers” of media content, while democratic representatives often take a back seat.

Finally, the paper by Guy Hoskins, recipient of the inaugural CPT and Internet Policy Review award, examines a core and controversial sub-topic of the network neutrality issue, zero-rating - the practice of providing internet access for free under specific conditions, such as restricted access to certain websites. Mixing political economy and ICT for development approaches, the author proposes to draw together the issues of network neutrality, digital divide and digital inclusion, and their relationship to zero-rating, for a better understanding of the phenomenon. A comparative analysis of four wireless markets in the Global South – Brazil, Colombia, Mexico and South Africa - allows the author to paint a detailed portrait of zero-rating as the product of multiple, interwoven factors that greatly nuance the “access vs. neutrality” equation that has, so far, summed up the phenomenon.

With this special issue of the Internet Policy Review, we aim to set the frame for illustrating one of the core issues that internet policy researchers face today: untangling the normative and practical challenges of assessing the impact of internet policy on individual and collective rights. Taken together, the articles in this special issue show that internet policy is being studied through an increasingly strong hybridisation of disciplines and issues that, until recently, have been addressed by relatively separate research traditions. In addition, they make us reflect on this hybridity. Discussions on the digital divide, inclusion, and ICTs for development are closer than we might think to the nitty-gritty political economy of net neutrality, as Hoskins shows. Controversies over advertising strategies and content choice on social media, as examined by Hill, are strongly intertwined with internet governance and its privatisation, as well as ‘classical’ conceptualisations of Empire from Innis. While scholarship on the issue of surveillance most prominently associates it with privacy and data protection, Workneh demonstrates its close ties with freedom of expression, and proposes, with the neopatrimonial state framework, an original way to address it. Finally, Goggin et al. and Ala-Fossi et al. clearly demonstrate the extent to which the ‘global’ internet as a worldwide system needs, now more than ever, to be grounded in national analyses of internet policy creation and application, each with their unique mix of state-led intervention, private sector strategies, and the role of internet users as citizens and consumers.

Weaving an increasingly tangled nexus of disciplines, issues and objects, internet policy research has evolved over time, responding to technological evolutions, changes in power balances, and the birth and development of new ‘networked’ socio-political issues. Academic organisations such as IAMCR have evolved in response to these changes. This has especially been the case for the CPT section, as the three elements composing its title -- communication, policy and technology -- moved from covering satellite and broadcast technology to examining a wide range of networking and connected technical artefacts, which nowadays extend to the most recent internet developments, including artificial intelligence, algorithms, and the internet of things. What has remained strong, however, is the critically engaged research and the scholars who contribute to the organisation, and who bring normative and diverse values to bear on the evolving logics of internet policies. As we plan the next IAMCR conference in Madrid, the papers in this special issue are a stimulating guide to the importance of this work.

Acknowledgment

We would like to acknowledge the support of the IAMCR Sections and Working Group Fund which made this open access publishing collaboration possible. Aphra would like to thank the Institute for Advanced Studies in Humanities and the School of Social and Political Science at the University of Edinburgh who are hosting her from February to May 2019. All papers went through open peer review and we want to thank the generous efforts of all our reviewers. This special issue would not have been possible without the support, input and patience of the Internet Policy Review’s managing editor, Frédéric Dubois.

References

Bannerman, S., & Haggart, B. (2015). Historical Institutionalism in Communication Studies. Communication Theory, 25(1), 1–22. doi:10.1111/comt.12051

Berners-Lee, T. (2019, March 12). 30 years on, what’s next #ForTheWeb?. World Wide Web Foundation. Retrieved from: https://webfoundation.org/2019/03/web-birthday-30/.

Bortzmeyer, S. (2019). Cyberstructure. L’Internet, un espace politique [Cyberstructure. The internet, a political space]. Caen: C & F Éditions.

Braman, S. (2011). Internet policy. In M. Consalvo & C. Ess (Eds.), Handbook of Internet Studies (pp. 137-167), Oxford: Wiley-Blackwell. doi:10.1002/9781444314861.ch7

Braman, S. (2016). Instability and internet design. Internet Policy Review, 5(3). doi:10.14763/2016.3.429

Breindl, Y., & Briatte, F. (2013). Digital Protest Skills and Online Activism Against Copyright Reform in France and the European Union. Policy & Internet, 5(1), 27–55. doi:10.1002/poi3.21

Brosda, C. (2015). Orientierung in der digitalen Unübersichtlichkeit. Zur medienpolitischen Relevanz der Kommunikationswissenschaft. In M. Emmer & C. Strippel (Eds.), Kommunikationspolitik für die digitale Gesellschaft (pp. 25–40). Berlin: Freie Universität Berlin, Institut für Publizistik- und Kommunikationswissenschaft. doi:10.17174/dcr.v1.3

Cammaerts, B. & Carpentier, N. (2005). The Unbearable Lightness of Full Participation in a Global Context: WSIS and Civil Society participation. In J. Servaes & N. Carpentier (Eds.), Towards a Sustainable Information Society: Beyond WSIS (pp. 17-49). Bristol: Intellect Books

Christofoletti, R. (2015). Privacidade e Regulamentação do Marco Civil da Internet: registros e preocupações [Privacy and Regulation of the Civil Framework of the Internet: records and concerns]. Revista ECO-Pós, 18(3), 213–229. doi:10.29146/eco-pos.v18i3.2150

Christou, G., & Simpson, S. (2007). Gaining a Stake in Global Internet Governance: The EU, ICANN and Strategic Norm Manipulation. European Journal of Communication, 22(2), 147–164. doi:10.1177/0267323107076765

Collins, R. (2006). Internet governance in the UK. Media, Culture & Society, 28(3), 337–358. doi:10.1177/0163443706061686

Curran, J., Fenton, N., & Freedman, D. (2012). Misunderstanding the Internet. London: Routledge. doi:10.4324/9780203146484

Daly, A., & Thomas, J. (2017). Australian internet policy. Internet Policy Review, 6(1). doi:10.14763/2017.1.457

DeNardis, L. (2009). Protocol politics: The globalization of Internet governance. Cambridge, MA: The MIT Press.

DeNardis, L., & Hackl, A. M. (2015). Internet governance by social media platforms. Telecommunications Policy, 39(9), 761–770. doi:10.1016/j.telpol.2015.04.003

Dencik, L., Hintz, A., & Carey, Z. (2017). Prediction, pre-emption and limits to dissent: Social media and big data uses for policing protests in the United Kingdom. New Media & Society, 20(4), 1433–1450. doi:10.1177/1461444817697722

Dunn, H. S. (2010). Information Literacy and the Digital Divide: Challenging e-Exclusion in the Global South. In E. Ferro, Y. K. Dwivedi, J. R. Gil-Garcia, & M. D. Williams (Eds.), Handbook of Research on Overcoming Digital Divides: Constructing an Equitable and Competitive Information Society (Vol. 1, pp. 326-344). Hershey, PA: Information Science Reference (an imprint of IGI Global). doi:10.4018/978-1-60566-699-0.ch018

Dutton, W. H. (2018). Networked publics: multi-disciplinary perspectives on big policy issues. Internet Policy Review, 7(2). doi:10.14763/2018.2.795

Epstein, D., Katzenbach, C. & Musiani, F. (2016). Doing internet governance: practices, controversies, infrastructures, and institutions. Internet Policy Review, 5(3). doi:10.14763/2016.3.435

Epstein, D., Nisbet, E. C., & Gillespie, T. (2011). Who’s Responsible for the Digital Divide? Public Perceptions and Policy Implications. The Information Society, 27(2), 92–104. doi:10.1080/01972243.2011.548695

Eubanks, V. (2018). Automating Inequality. How High-Tech Tools Profile, Police and Punish the Poor. New York: St Martin’s Press.

Frau-Meigs, D. (2012). Transliteracy as the New Research Horizon for Media and Information Literacy. Media Studies, 3(6), 14–27.

Frau-Meigs, D., Nicey, J., Palmer, M., Pohle, J., & Tupper, P. (Eds.) (2012). From NWICO to WSIS: 30 Years of Communication Geopolitics - Actors and Flows, Structures and Divides. Bristol: Intellect Books. doi:10.1177/0267323113476942b

Galperin, H. (2004). Beyond Interests, Ideas, and Technology: An Institutional Approach to Communication and Information Policy. The Information Society, 20(3), 159–168. doi:10.1080/01972240490456818

Gill, L., Redeker, D., & Gasser, U. (2015). Towards Digital Constitutionalism? Mapping Attempts to Craft an Internet Bill of Rights (Research Publication No. 2015–15). Cambridge, MA: The Berkman Center for Internet & Society at Harvard University. Retrieved from http://nrs.harvard.edu/urn-3:HUL.InstRepos:28552582

Hintz, A., & Dencik, L. (2016). The politics of surveillance policy: UK regulatory dynamics after Snowden. Internet Policy Review, 5(3). doi:10.14763/2016.3.424

Hintz, A., & Milan, S. (2009). At the margins of Internet governance: grassroots tech groups and communication policy. International Journal of Media & Cultural Politics, 5(1), 23–38. doi:10.1386/macp.5.1-2.23_1

Helberger, N., Pierson, J., & Poell, T. (2018). Governing online platforms: From contested to cooperative responsibility. The Information Society, 34(1), 1–14. doi:10.1080/01972243.2017.1391913

Kerr, A., De Paoli, S., & Keatinge, M. (2014). Surveillant Assemblages of Governance in Massively Multiplayer Online Games: a comparative analysis. Surveillance and Society,12(3), 320–336. doi:10.24908/ss.v12i3.4953

Kitchin, R., & Dodge, M. (2011). Code/space: Software and everyday life. Cambridge, MA: The MIT Press.

Klein, H. (2002). ICANN and Internet Governance: Leveraging Technical Coordination to Realize Global Public Policy. The Information Society, 18(3), 193–207. doi:10.1080/01972240290074959

Kruschinski, S., & Haller, A. (2017). Restrictions on data-driven political micro-targeting in Germany. Internet Policy Review, 6(4). doi:10.14763/2017.4.780

Lessig, L. (1999). Code: And Other Laws Of Cyberspace. New York: Basic Books.

Löblich, M., & Karppinen, K. (2014). Guiding Principles for Internet Policy: A Comparison of Media Coverage in Four Western Countries. The Information Society, 30(1), 45–59. doi:10.1080/01972243.2013.855688

Löblich, M., & Wendelin, M. (2012). ICT policy activism on a national level: Ideas, resources and strategies of German civil society in governance processes. New Media & Society, 14(6), 899–915.

Lyon, D. (2014). Surveillance, Snowden, and Big Data: Capacities, consequences, critique. Big Data & Society, 1(2). doi:10.1177/2053951714541861

Macq, H. & Jacquet, V. (2018). S’engager dans un cyberparti. Internet et militantisme au sein du parti pirate belge [To engage in a cyberparty: Internet in the Belgian Pirate Party Membership]. RESET, 7. doi:10.4000/reset.1102

Mansell, R. (2011). New visions, old practices: Policy and regulation in the Internet era. Continuum, 25, 19–32. doi:10.1080/10304312.2011.538369

Mansell, R. (2012). Imagining the Internet: Communication, Innovation, and Governance. Oxford, UK: Oxford University Press.

Mansell, R., & Raboy, M. (Eds.). (2011). The Handbook of Global Media and Communication Policy. Chichester, West Sussex: Wiley-Blackwell. doi:10.1002/9781444395433

Maréchal, N. (2017). Networked Authoritarianism and the Geopolitics of Information: Understanding Russian Internet Policy. Media and Communication, 5(1), 29–41. doi:10.17645/mac.v5i1.808

Mathiason, J. (2008). Internet Governance : The New Frontier of Global Institutions. London: Routledge. doi:10.4324/9780203946084

Milan, S. (2015). From social movements to cloud protesting: the evolution of collective identity. Information, Communication & Society, 18(8), 887–900. doi:10.1080/1369118x.2015.1043135

Melody, W. H. (1996). Towards a framework for designing information society policies. Telecommunications Policy, 20(4), 243–259. doi:10.1016/0308-5961(96)00007-9

Melody, W. H. (1999). Telecom reform: progress and prospects. Telecommunications Policy, 23(1), 7–34. doi:10.1016/s0308-5961(98)00073-1

Meyer, T. (2012). Graduated Response in France: The Clash of Copyright and the Internet. Journal of Information Policy, 2, 107–127. doi:10.5325/jinfopoli.2.2012.0107

Michalis, M. (2007). Governing European Communications: From unification to coordination. Lanham, MD: Lexington Books.

Mueller, M. (2019, April 2). Why we should say No to Facebook’s call to “Regulate the Internet”. Internet Governance Project. Retrieved from: https://www.internetgovernance.org/2019/04/02/why-we-should-say-no-to-facebooks-call-to-regulate-the-internet/.

Mukerjee, S. (2016). Net neutrality, Facebook, and India’s battle to #SaveTheInternet. Communication and the Public, 1(3), 356–361. doi:10.1177/2057047316665850

Musiani, F., Cogburn, D. L., DeNardis, L., & Levinson, N. S. (Eds.) (2016). The turn to infrastructure in Internet governance. New York: Palgrave-Macmillan. doi:10.1057/9781137483591

Nissenbaum, H. (2001). How computer systems embody values. Computer, 34(3), 119–120. doi:10.1109/2.910905


Algorithms at work: rules about data rights of workers might need an update

The future of work is happening now. Platform companies like Uber or Deliveroo collect massive amounts of data about workers to automate their decision-making systems (Rosenblat and Stark, 2016). Algorithmic management is also used to control workers. During our research on Deliveroo and Foodora, we found that the digital control of couriers operates by automatically sorting workers into three categories based on their personal statistics. Only the workers with the best statistics get the promised flexibility when it comes to choosing shifts, while the worst performers can be fired based on algorithmic recommendations (Ivanova et al., 2018).
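The sorting mechanism described above can be sketched in a few lines. This is a purely hypothetical illustration: the scoring field, the tier names, and the even three-way split are assumptions for exposition, not the actual logic used by Deliveroo or Foodora.

```python
def tier_couriers(couriers):
    """Rank couriers by score (descending) and split them into three tiers.

    In the systems described above, the top tier gets first pick of shifts,
    while the bottom tier risks deactivation. All names are illustrative.
    """
    ranked = sorted(couriers, key=lambda c: c["score"], reverse=True)
    n = len(ranked)
    return {
        "priority_shift_access": ranked[: n // 3],       # best statistics
        "standard": ranked[n // 3 : 2 * n // 3],
        "at_risk": ranked[2 * n // 3 :],                 # worst statistics
    }
```

Note that the worker never sees the score or the cut-off points; only the resulting tier is visible, which is precisely the opacity discussed below.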

Digital control is not unique to the food delivery sector or even the platform economy. Workers at Amazon warehouses or call centres are faced with a similar challenge (Rozwadowska, 2018; Woodcock, 2017; Moore, 2018): how is the data produced at work collected, processed and used to evaluate our work? Technologies are transforming management in many sectors, so perhaps we all should be asking ourselves: are my data rights protected at work?

The answer to this question might be the key to securing a healthy balance of power in future work relations. In principle, workers in Europe have their data rights protected by national labour laws and the General Data Protection Regulation (GDPR). But are these regulations sufficient to protect workers at a time when the digitisation of the workplace is accelerating?

The GDPR did provide a useful common legal standard, harmonising the rules for all companies operating in Europe. The regulation prohibits automated processing which produces “legal effects” or similarly significant results for the worker without human involvement (Article 22). Moreover, if specific rules are introduced by European member states or collective agreements, they should protect the human dignity of data subjects, as well as “their legitimate interests and fundamental rights, with particular regard to the transparency of processing, [..] and monitoring systems of the workplace.” (Article 88)
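The Article 22 principle can be illustrated with a minimal routing check: decisions with legal or similarly significant effects must not be finalised by the system alone. The decision categories and the routing labels below are hypothetical; the GDPR prescribes the principle, not any particular implementation.

```python
# Decision categories with "legal effects" or similarly significant results
# for the worker (illustrative assumptions, not a legal taxonomy).
SIGNIFICANT_EFFECTS = {"termination", "suspension", "pay_change"}

def route_decision(decision):
    """Return who may finalise a decision produced by an automated system."""
    if decision["category"] in SIGNIFICANT_EFFECTS:
        # Article 22: no solely automated decision; a human must be involved.
        return "requires_human_review"
    # Routine operational decisions, e.g. matching a courier to an order.
    return "automated_ok"
```

A compliance question our research raises is whether an "algorithmic recommendation" rubber-stamped by a manager counts as meaningful human involvement; the sketch above deliberately leaves that open.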

However, there are significant gaps in the current framework which leave some workers vulnerable and voiceless. For example, platform workers classified as self-employed do not enjoy the same rights as employees, who can form works councils or join trade unions (Degner and Kocher, 2018). In other words, they cannot bargain over the use of technology as part of collective agreements with employers. As a result, in platform companies, where traditional unions are often unwilling or unable to organise, workers have little or no say in how the technology used to control them is designed.

Indeed, a basic operating principle of digital platform employers is information asymmetry between the companies and the people who work for them (Rosenblat and Stark, 2016). As our research among Berlin food-delivery companies reveals, workers have very little information about the technology used to monitor them (Ivanova et al., 2018). For a healthy balance of power to be re-established, workers need more than just access to their own data: they also need information about the parameters used to evaluate them and about the design of the automated control.

As we mark the one-year anniversary of the GDPR, it is time to say loud and clear: the current regulatory framework might not be sufficient to protect our rights in a digitised workplace. We should consider codifying the principles for workers’ privacy and data protection developed by trade unions and technical organisations in a regulation that is specific to the workplace. It would also make a difference to introduce standards for designing accountable systems before they are rolled out, so that workers’ interests are represented from the technology development phase onwards (Wagner and Bronowicka, 2019). We should also think about strengthening the institutions responsible for implementing these laws, or about creating new ones, such as a European Labour Inspection.

As we contemplate how to improve the existing rules, we urgently need research into how the GDPR is actually implemented in a wide variety of workplaces. This kind of research is difficult because it needs to account for algorithmic rules that are dynamic and opaque. It requires the trans-disciplinary work of legal, social and technical researchers who combine methods to analyse the impact on workers’ data rights and well-being. Providing workers and researchers with access to data and inviting workers to co-design the technology can spur innovation of the kind that puts the interests of workers at the centre. The future of work is happening now; let’s make sure it is a fair one.

References

Degner, A., & Kocher, E. (2018). Arbeitskämpfe in der „Gig-Economy“? Die Protestbewegungen der Foodora- und Deliveroo-„Riders“ und Rechtsfragen ihrer kollektiven Selbstorganisation. Kritische Justiz, 51(3), 247–265.

Ivanova, M., Bronowicka, J., Kocher, E., & Degner, A. (2018). The App as a Boss? Control and Autonomy in Application-Based Management (Arbeit | Grenze | Fluss – Work in Progress interdisziplinärer Arbeitsforschung No. 2). Frankfurt (Oder): Europa-Universität Viadrina. doi:10.11584/Arbeit-Grenze-Fluss.2

Moore, P. V. (2018). Tracking Affective Labour for Agility in the Quantified Workplace. Body & Society, 24(3), 39–67. doi:10.1177/1357034X18775203

Rosenblat, A., & Stark, L. (2016). Algorithmic Labor and Information Asymmetries: A Case Study of Uber’s Drivers. International Journal of Communication, 10. doi:10.2139/ssrn.2686227

Rozwadowska, A. (2018, January 30). Znamy opinię biegłego o pracy w Amazonie: „Może powodować urazy psychologiczne i fizyczne” [The expert’s opinion on work at Amazon: “It may cause psychological and physical injuries”]. Gazeta Wyborcza. Retrieved from http://wyborcza.pl/7,155287,22939723,krzeslo-laski-mamy-pierwszy-raport-bieglego-o-pracy-w-amazonie.html

Wagner, B., & Bronowicka, J. (2019). Designing for Labour? Accountability and self-management in app-based management. Paper presented at the ACM CHI Conference on Human Factors in Computing Systems, Glasgow.

Woodcock, J. (2017). Working the Phones: Control and Resistance in Call Centres. London: Pluto Press.

Internet and blockchain technologies: authoritarian or democratic?

Both the internet and blockchain technologies started out as libertarian aspirations to empower individuals through decentralisation, openness, and freedom. Over time, however, competing visions have emerged. Notably, authoritarian nations like China and Russia have reasserted their internet sovereignty through technologies of censorship and surveillance. In the blockchain world, Facebook and J.P. Morgan are reportedly launching their own centralised cryptocurrencies. What these examples show is that both the internet and blockchain are not monolithic architectures, but rather fluid arrangements subject to evolution and political pressure.

Technical arrangements as forms of political order

Technological artifacts expressed political values long before the advent of modern information technologies such as the internet and blockchain. They can embody varying forms of power and authority, often in tension with each other. The most important of these conflicts are those between peer-to-peer association and large-scale organisation, and between decentralised autonomy and institutional control. Writing in Technology and Culture in 1964, Mumford gives the classic statement of how the tension between authoritarian and democratic technics has played out throughout human history:

From late neolithic times in the Near East, right down to our own day, two technologies have recurrently existed side by side: one authoritarian, the other democratic, the first system-centered, immensely powerful, but inherently unstable, the other man-centered, relatively weak, but resourceful and durable.

Building on this distinction, Winner (1980) explores in his seminal paper Do Artifacts Have Politics? the ways in which technologies can embody politics and social relations. One of his central ideas is that technologies often do not allow much flexibility: to choose them is to choose a particular form of political life. In the first instance, the adoption of a given device or system requires the establishment and maintenance of a particular pattern of power and authority. In the second instance, certain kinds of technology are strongly compatible with particular institutionalised patterns of power and authority. In both cases, the initial choice about whether to adopt a technology is decisive for its subsequent political consequences. In this sense, technological innovations are akin to legislative acts or political foundings that establish a form of order enduring over many generations.

One classic example is energy production and distribution. Nuclear power promotes long-term centralisation and hierarchy: once artifacts like nuclear power plants have been built and put into operation, they are not only permanent fixtures; a culture of centralised, hierarchical managerial control must also be institutionalised over time to meet their high technical requirements. Renewable energy such as solar, on the other hand, is deployed in a disaggregated, distributed manner. It enables individuals and local communities to manage their affairs effectively, and is generally seen as compatible with democratic, egalitarian, and communitarian ideals.

The peculiarities of internet and blockchain technologies

Today, it is generally understood that internet and blockchain technologies are also political. The designs of these systems combine technical, organisational, and socio-cultural characteristics that govern the behaviour and power of a wide range of public and private actors. However, here I venture to suggest that the political qualities of these systems are adaptable and malleable. The defining feature of pre-digital technologies is that their power characteristics are by and large fixed once the decision to build them has been made. Internet and blockchain networks, by contrast, are not monolithic architectures: their dimensions of power and control points are dynamic and continuously evolving. Authoritarian and democratic values can be reflected across these networks at the same time.

This peculiar feature has its origin in the layered and open design of internet and blockchain technologies (Van Valkenburgh, 2018). The internet was originally founded on the public, permissionless architecture of TCP/IP. No one needs to gain access to a private network or verify their identity in order to communicate online or build applications. At the same time, one can always build permissioned and identified layers and applications on top of the open TCP/IP protocols. The openness of the protocol layer guarantees diverse participation, innovation, and transparency; it epitomises democratic technics. However, it also allows the emergence of private and identified higher-level layers that amplify the power of governments and private companies, introducing authoritarian technics into the internet architecture.
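A toy model can make this layering concrete: the base layer accepts any well-formed peer, as permissionless TCP/IP does, while a hypothetical application layer built on top adds its own identity check. All names and checks here are illustrative assumptions, not real protocol logic.

```python
def transport_accepts(peer):
    """Permissionless base layer: any well-formed peer may connect.

    Like TCP/IP, the check cares only that the peer is addressable,
    not who the peer is.
    """
    return "address" in peer

def app_accepts(peer, allowlist):
    """Permissioned upper layer: connectivity alone is not enough.

    The application additionally demands a recognised identity,
    reintroducing gatekeeping on top of an open substrate.
    """
    return transport_accepts(peer) and peer.get("identity") in allowlist
```

The asymmetry is the point: openness at the bottom cannot prevent closure at the top, which is how authoritarian technics re-enter an architecture that began as democratic.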

Amidst these tensions, one can count at least four internets, shifting gradually from democratic to authoritarian (O’Hara and Hall, 2018). Silicon Valley’s open internet represents the original cyber-libertarian vision of decentralisation, openness, and freedom; John Perry Barlow’s Declaration of the Independence of Cyberspace is its most famous statement. Then there is Brussels’ bourgeois internet, in which the European Union and Western European governments attempt to maintain civility and restrict what they consider “bad behaviour” through regulation. Third is Washington DC’s commercial internet, which leverages innovation fuelled by data collection and oligopoly. Finally, Beijing’s authoritarian internet uses pervasive surveillance to shape social interactions and ensure political control.

Like the internet, blockchain technology has a layered architectural design. The Cambridge Centre for Alternative Finance (Rauchs et al., 2018) develops a conceptual framework which breaks blockchain technology down into layers: a protocol layer, which defines and codifies the constitutional arrangements amongst system participants, and the upper network and data layers, which implement the ruleset and store transaction records. In Satoshi Nakamoto’s cypherpunk vision of Bitcoin, there is complete decentralisation and openness at every layer: the protocol is open source and its governance follows an anarchic process; network access and transaction processing are open and unrestricted; and data broadcast is public and transparent.
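The openness of the data layer can be illustrated with a minimal hash chain: because each block commits to the hash of its predecessor, any participant can recompute and audit the whole record. This is a heavily simplified sketch (no proof-of-work, no signatures, no networking) of the general technique, not Bitcoin’s actual data structures.

```python
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 digest of a block's canonical serialisation."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, transactions):
    """Append a block that commits to the hash of the previous block."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "txs": transactions})
    return chain

def verify(chain):
    """Open verification: any participant can recheck every link."""
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))
```

Because `verify` needs no credentials, auditing the record is permissionless in the same sense that reading Bitcoin’s public broadcast is; tampering with any past block breaks every later link.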

The same cannot be said for subsequent blockchain developments. Permissions and privileges can easily be configured at the upper layers of blockchain technology, placing control and censorship in the hands of powerful actors. For instance, network privileges in the cryptocurrency Ripple are dictated by Ripple Labs, a private company. The blockchain ecosystem is also populated by private actors, such as Microsoft and IBM, offering intermediation services marked by asymmetries of information and power. Even for Bitcoin, we are seeing centralisation tendencies at the upper network layer among major coin holders and miners. As with the internet, power dynamics can play out within and between each layer and evolve over time. The result is a departure from the original democratic technics of Bitcoin and the emergence of authoritarian technics in blockchain technology.

Once again, there are multiple possible futures for blockchain technology, ranging from democratic to authoritarian. It is true that we are still in the early days, but one can already speculate about the institutional and ideological champions that will emerge (Manski and Manski, 2018). A libertarian blockchain, like Silicon Valley’s open internet, would be faithful to Satoshi Nakamoto’s vision of a truly decentralised, peer-to-peer network. Next, there could be a corporate blockchain similar to Washington DC’s commercial internet, in which major corporations like Amazon and Facebook adapt permissioned and identified blockchains to their own purposes. This could increase the granularity of the data captured and monetised by these platforms, centralising power and exacerbating inequality. Finally, a sovereign blockchain would encapsulate varying degrees of government power. At the mild end, in the style of Brussels’ bourgeois internet, regulations may emerge to control blockchain activities like initial coin offerings. At the extreme end, blockchain and smart contract technologies could be used to monitor citizens and enforce rules in a draconian way, tending towards Beijing’s authoritarian internet.

Towards a critical theory of internet and blockchain

In Feenberg’s philosophy of technology (2003), value-laden technology can be viewed through substantive and critical theories. Substantive theory holds that when you choose a specific technology, you choose a specific way of life; yet technology in this view is autonomous, in the sense that invention and development follow their own immanent laws, and the next step in a technology’s evolution is not up to us. By contrast, critical theory holds that technology is humanly controllable and that we can determine the next step in its evolution in accordance with our intentions. While recognising the catastrophic consequences of technological development under substantivism, critical theory still sees a promise of greater freedom, provided we devise appropriate institutions for exercising human control over technology.

Both the internet and blockchain technologies started out as open and loosely coupled collections of protocols, standards, systems, and interest groups. Their early evolution supported creativity, autonomy, and decentralisation. However, this openness is often linked with relatively weak organisation and a limited capacity to express a coherent voice (Benkler, 2016). As a result, later developments have been recast by well-resourced corporate and state actors who are highly incentivised to instil their own values, be it profit, authority, or power. Today we see competing models of how these technologies should be governed: several internets and blockchains currently co-exist.

While this co-existence may be uneasy, it also reflects the fact that the future is malleable. The key insight is that we must view internet and blockchain technologies through the critical theory of technology before we can see genuine possibilities for creative intervention that could change power relations and preserve human alternatives. In fact, the pendulum is already swinging back towards the community-led ethos of the original internet and blockchain. If these technologies have seen power and value captured by corporate actors and governments at the upper layers, communities today are mobilising from the ground up to rebalance power back to the protocol layer. For example, cryptocurrencies provide a way to incentivise individuals and groups to build and maintain internet services, rather than relying on proprietary corporate services (Dixon, 2019). To ensure co-evolutionary design and democratised knowledge of technical decisions, some newer blockchain projects are hard-coding these principles into the software itself, a method known as on-chain governance (Kritikos, 2018). All of this is possible thanks to a critical understanding of the malleable political qualities of the internet and blockchain. In the end, whether the future is authoritarian or democratic, the choice is ours.
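The on-chain governance idea can be sketched as a stake-weighted vote whose outcome is recorded alongside the ledger itself, so that protocol changes follow rules that are themselves part of the protocol. The quorum rule and field names below are illustrative assumptions, not any real project’s governance mechanism.

```python
def tally(proposal, votes, quorum=0.5):
    """Adopt a protocol change if yes-stake exceeds a quorum of total stake.

    Each vote is a dict with a "stake" weight and a "choice". In on-chain
    governance, both the votes and this outcome would be recorded on the
    ledger, making the decision process as auditable as the transactions.
    """
    total = sum(v["stake"] for v in votes)
    yes = sum(v["stake"] for v in votes if v["choice"] == "yes")
    return {"proposal": proposal,
            "adopted": total > 0 and yes / total > quorum}
```

Whether such stake-weighted rules are democratic technics or merely plutocracy by other means is, of course, exactly the kind of political question this essay argues cannot be settled by the technology alone.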

References

Benkler, Y. (2016). Degrees of Freedom, Dimensions of Power. Daedalus, 145(1), 18-32. doi:10.1162/DAED_a_00362

Dixon, C. (2019). Blockchain Can Wrest the Internet From Corporations' Grasp. Wired. Retrieved from https://www.wired.com/story/how-blockchain-can-wrest-the-internet-from-corporations/

Feenberg, A. (2003). What Is Philosophy of Technology? Lecture for the Komaba undergraduates. Retrieved from http://www.sfu.ca/~andrewf/komaba.htm

Kritikos, M. (2018). What if Blockchain were to be truly decentralised? Brussels: European Parliament Think Tank. Retrieved from http://www.europarl.europa.eu/thinktank/en/document.html?reference=EPRS_ATA(2018)624248

Manski, S. & Manski, B. (2018). No Gods, No Masters, No Coders? The Future of Sovereignty in a Blockchain World. Law and Critique, 29(2), 151-162. doi:10.1007/s10978-018-9225-z

Mumford, L. (1964). Authoritarian and Democratic Technics. Technology and Culture, 5(1), 1-8. doi:10.2307/3101118

O’Hara, K. & Hall, W. (2018). Four Internets: The Geopolitics of Digital Governance (Paper No. 206). Waterloo, Canada: Centre for International Governance Innovation. Retrieved from https://www.cigionline.org/publications/four-internets-geopolitics-digital-governance

Rauchs, M., Glidden, A., Gordon, B., Pieters, G., Recanatini, M., Rostand, F., … Zhang, B. (2018). Distributed Ledger Technology Systems: A Conceptual Framework. Cambridge, UK: Cambridge Centre for Alternative Finance. Retrieved from: https://www.jbs.cam.ac.uk/faculty-research/centres/alternative-finance/publications/distributed-ledger-technology-systems/

Van Valkenburgh, P. (2018). Exploring the Cryptocurrency and Blockchain Ecosystem. Testimony to the US Senate Banking Committee. Retrieved from https://www.banking.senate.gov/imo/media/doc/Van%20Valkenburg%20Testimony%2010-11-18.pdf

Winner, L. (1980). Do Artifacts Have Politics? Daedalus, 109(1), 121-136. Retrieved from http://www.jstor.org/stable/20024652
