
Reading between the lines and the numbers: an analysis of the first NetzDG reports


Introduction

Good content, bad content? What seems reasonable to some might be offensive to others. Depending on our social norms, laws and culture, we tend to categorise what we see on the internet in a process of content selection that either fits our expectations and needs or does not (Gillespie, 2018, p. 197). This plays into our perceptions of the content disseminated by users on social media platforms, and these inherent discrepancies are one reason why regulating online speech remains an unresolved issue for lawmakers. In the legislative process one pervasive question remains: how to improve content moderation in the light of long-established legal provisions. The minimum European lawmakers tend to agree upon is the legitimacy of the need to take down unlawful content. At the very least, this is what the German government assumed when it issued a law that made it mandatory for the largest social media platforms to remove obviously unlawful content within 24 hours. In doing so, Germany became one of the first countries to issue a so-called anti-“hate speech” law, and the law has been the target of fairly constant criticism since it was enacted. Not only do critics label it as unconstitutional (Gersdorf, 2017; Schulz, 2018), but it is also cited in the international discussion as a bad example of platform regulation (Special Rapporteur for the UN: Kaye, 2018). The NetzDG was meant to enhance the protection of users against hate speech and to provide more clarity on the way platforms handle and moderate unlawful content (Wischmeyer, 2018, p. 7). However, as this paper will show, there is so far no certainty about either of these goals, mainly because the reports do not provide well-defined results. Many expected the published reports to provide more substantial insights into content moderation policies – an expectation that appears to have been disappointed (Gollatz et al., 2018).

The main aim of this article is to provide an analysis of the reports published by the major social media platforms, including insights into the implementation of the NetzDG, with a focus on the obligation to ensure user-friendly complaint procedures. The analysis shows that it is not enough to require companies to publish transparency reports if the information they contain has no real informative value. This article shows that the law might have incentivised the platforms to remove hate speech faster, but that there is no certainty about its effect due to the lack of substantial information in the reports. Facebook, for example, does not fully comply with the obligation to supply an easily recognisable complaint procedure, and its complaint numbers therefore cannot be considered conclusive. After introducing the NetzDG in general, and the reasons why it is deemed unconstitutional in the German debate, the article dives deeper into the rationale of the NetzDG. This law can serve as an example of “new-school speech regulation”, that is, a type of regulation aimed at the owners of the digital infrastructure instead of the speakers themselves (Balkin, 2014, p. 2298). Starting from the published complaint figures, this article examines the implementation of complaint tools and shows that the reports constitute no factual ground for a re-evaluation of the NetzDG or for a similar regulatory project, even though the reporting obligation is a key provision. The law leaves room for interpretation with regard to its implementation, and this has to some extent hollowed out the obligation to publish transparency reports, as the figures do not reflect the full picture. The reporting obligation under the NetzDG might, however, serve as a counterexample in the discussion on transparency and the corresponding reports. It shows that transparency rules need to be formulated more clearly so that the data collected can serve the purpose of iteration for both the companies and the state.

1. The NetzDG: an act to improve law enforcement on social media platforms

1.1. The regulatory rationale of the NetzDG

A series of events involving online discrimination against and agitation towards ethnic minorities (which reached a peak in 2015-2016 when a wave of refugees from Syria arrived in Germany) prompted the German government to act against hateful online content. Other factors could also be interpreted as catalysts for government action: an increasing awareness of the problem of aggressive and potentially harmful online communication, and an air of mistrust towards the tech companies that run the biggest social media platforms. The latter especially applies to their governance through content moderation policies (Citron, 2017, p. 1065 f.). In order to combat hate speech and other unwanted content on social networks (cf. Delort et al., 2011, p. 9), the German government drafted a law that would force social networks to examine complaints and, where necessary, delete content within 24 hours of receiving a user complaint. The main motive for the legislator’s action was to thwart the increase of hate speech on social networks (as stated in the law’s explanatory memorandum) and to respond to public pressure surrounding the issue (Liesching, 2018c, para. 2). After several failed attempts to establish a system of self-regulation among the social network companies to reduce the proliferation of hate speech, the German Ministry of Justice drafted the NetzDG, and it was finally passed by Parliament in July 2017. The EU, for instance, had launched such a self-regulatory initiative with the 2016 EU Code of Conduct on Countering Illegal Hate Speech Online in cooperation with Facebook, Microsoft, Twitter and YouTube (its fourth evaluation was published in February 2019).

The “Act to Improve Enforcement of the Law in Social Networks” (Network Enforcement Act, hereinafter NetzDG) became fully effective on 1 January 2018. It aims at the faster enforcement of German criminal law, hence the deletion – when appropriate – of unlawful content. The law defines social networks as follows: “telemedia service providers which, for profit-making purposes, operate internet platforms which are designed to enable users to share any content with other users or to make such content available to the public” (official translation by the German Ministry of Justice). Hate speech as such, by contrast, is defined neither by the NetzDG nor by the German Penal Code (hereinafter StGB), although the general discussion around the legislative project mostly refers to the term. The general definition of hate speech is: speech designed to promote hatred on the basis of race, religion, ethnicity, national origin or other specific group characteristics (Rosenfeld, 2002, p. 1523; Djuric et al., 2015). The NetzDG refers to StGB sections without defining the offences listed. In the StGB these offences are not listed under one specific title but in different categories, for example, breaches of public order or libel. This creates a chain of provisions that refer to one another (see below). The offences targeted by the NetzDG can collectively be referred to as hate speech when discussing the type of offences at issue, but hate speech is not actually a technical term under German law.

Although scholars agree on a general definition of hate speech, its criminal prosecution differs from one country to another (even among member states of the European Union). It is important to note, therefore, that no new criminal offences for online hate speech were created or added to the StGB. Instead, the NetzDG lists 22 offences that were already and still are punishable under the StGB, such as libel, defamation, sedition and calls for violence, and adds a de facto enforcement obligation for large social media platforms. Anyone breaching these laws by posting, commenting or uploading content on social media platforms still incurs a penalty from the state. In addition, German law now forces social networks to become more active. They are obliged to implement procedures that ensure obviously unlawful content is deleted within 24 hours of receiving a complaint. If there is any doubt regarding a takedown decision, the procedure may take up to seven days. After that deadline, a final decision on the lawfulness of a post must be reached and unlawful content must be removed, that is, either blocked or deleted. The fines for a breach of this obligation can reach up to €50 million. In addition to complying with this operational provision, social media platforms are obliged to publish biannual reports, which will be addressed and analysed below.

1.2. Main allegations regarding the violation of constitutional law

In order to fully grasp the importance of the published reports, it is helpful to know more about the context of the NetzDG. This law has been under attack ever since its first draft was made public. Not only was it perceived as an ad hoc legislative reaction; more importantly, it is considered merely a loophole that transfers public responsibility to a private actor. It has been criticised from many perspectives and an exhaustive description would go beyond the scope of this article (cf. Schulz, 2018, passim). However, a brief overview is necessary to comprehend the context in which the first NetzDG reports were published. In brief, a wide array of scholars, politicians and activists have demanded the abrogation of the NetzDG or a revised version in the near future. Liberal politicians have opposed the law in court and there have been counter-proposals (e.g., the Green Party’s proposal from January 2019). The criticism did not diminish over the course of 2018, although the consequences for free speech were not as severe as expected; instead, the numbers below show that the NetzDG did not really have an impact on content moderation. Nonetheless, a law deemed unconstitutional has no legitimate basis to stay in effect, which is why the NetzDG is still expected to be revised. The following passage gives a general overview of the allegations made against the NetzDG. The procedural points of criticism are omitted because they are more technical and inherent to the German legal system and add little to the present argument. Others relate to human rights infringements, are partly transferable to similar law projects in other jurisdictions (cf. Funke, 2018) and are, therefore, more relevant to this paper. 1

The main focus of the (non-procedural) criticism relates to possible violations of freedom of speech in various ways. The de facto obligation for social networks to delete manifestly unlawful content within 24 hours has raised questions pertaining to the potential overblocking of content and to the privatisation of the judiciary due to the interpretation and application of criminal law by private companies. These two elements combined can, in turn, have chilling effects on freedom of speech, and we will take a closer look at them below. However, one must bear in mind that the question of whether the content targeted by the NetzDG may be deleted is not my main concern: first, because such content is illegal, and second, because takedown is still the most effective tool social media platforms make use of when it comes to hate speech (Citron & Norton, 2011, p. 1468; Klonick, 2018, p. 12). The main source of scepticism is the shift of responsibility towards private companies as a corollary of the obligations that have been laid upon them (Guggenberger, 2017b, p. 2582). In sum, scholars agree that the NetzDG is not an exemplary method for fighting hate speech online (Gersdorf, 2017, p. 447; Guggenberger, 2017a, p. 101; Balkin, 2018, p. 28).

The German Basic Law allows lawmakers to restrict fundamental rights under certain conditions, and freedom of speech may be constrained by general laws according to art. 5 (2) Basic Law. This includes criminalising offensive speech, and it can have a horizontal effect between private parties when private actors require one another to observe the law. This is acceptable as long as it does not result in overblocking. Overblocking is the term used when content is deleted or blocked without substantial reason because an incentive arises to delete immediately rather than to perform more thorough checks (Richter, 2017; Holznagel, 2018, p. 369). Criminal offences related to the protection of honour (such as libel) mostly overlap with the categories used by social networks in their “community guidelines”. However, this overlap is not preserved when it comes to the specific elements of a criminal offence. One would need to know and practise (national) criminal law and consider the context of the generated content (Wimmers & Heymann, 2017, p. 100). These various parameters make it difficult to parse unlawful content in a short timeframe, and that is what the social media platforms have insisted upon in recent years when justifying the slow removal of hate speech. The problem lies in the risk that there is simply not enough time to make accurate takedown decisions, coupled with the high pressure of potential fines. In this scenario, the net result could be overblocking (Kaye, 2018, p. 7). In accordance with section 3 (2) NetzDG, social networks must provide a procedure that ensures the deletion of obviously unlawful content within 24 hours of a user complaint. If they fail to do so, as mentioned earlier, they risk a fine of up to €50 million, which makes the incentive for decisions in favour of takedowns stronger than before the implementation of the NetzDG (Schiff, 2018, p. 370). However, the fear of overblocking does not seem to have materialised when looking at the takedown numbers in the reports published by the companies concerned by the NetzDG (Wischmeyer, 2018, p. 20).

However, the critique of a substantial shift of responsibility from the judiciary to the platforms themselves is still under discussion. This is mainly due to the wording of sec. 3 (2) Nr. 2 NetzDG, that is, the obligation to delete “content that is manifestly unlawful”. Generally, content-related regulation has to be as neutral as possible with regard to the opinions expressed, i.e., it is subject to a strict proportionality test (so-called “Wechselwirkung”). The scope of application of a content-targeting law must be sufficiently precise to avoid too much room for interpretation, which could, in the case of the NetzDG, result in an overly broad definition of legal terms and an unwarranted removal of content. Using an unspecified legal term such as “manifestly” in a law that is applied by private parties rather than by judges puts at risk the judiciary’s prerogative to interpret and apply the law. Clear legal definitions and specific criteria are necessary to constrain the platforms’ discretion (Nolte, 2017, p. 556-558; Liesching, 2018a, p. 27; Wischmeyer, 2018, p. 15-16; Belli, Francisco, & Zingales, 2017, p. 52; Nunziato, 2014, p. 10); leaving the interpretation of a key term of the bill too unspecified is considered unconstitutional (FCC: BVerfGE 47, 109, 121; Wimmers & Heymann, 2017, p. 97-98; Reuter, 2018, p. 86) because this kind of interpretation is a core function of the judiciary.

Unspecified legal terms are usually only filled with meaning by court rulings. Up until then, they can be loaded with interpretations which might be revised later. In order to decide whether a statement is still within the boundaries of the law and therefore still protected by freedom of speech, a judge will have to examine the requirements mentioned above and potentially balance the fundamental rights of both parties. The result can then be applied by private parties as a standard or a guideline. What is “manifestly” illegal? The NetzDG’s explanatory memorandum defines it as follows: “Content is manifestly unlawful if no in-depth examination is necessary to establish the unlawfulness within the meaning of sec. 1 (3).” This sentence does not explain at what point an examination is considered “in-depth”; it leaves the original question of how to define “manifestly” unanswered. Still, users’ right to take the platforms’ decisions to court remains intact when social networks delete user-generated content under the NetzDG. This possibility makes it unlikely that the NetzDG will be abrogated on the grounds of the ‘privatisation’ argument. Nevertheless, the complexity of this assignment, deciding whether content is unwanted but perhaps not unlawful, is a core element of the public debate around content moderation (Kaye, 2018, p. 4). Because of the important implications for users’ freedom of speech, one cannot help but wonder why the German lawmakers delegated this task to social media platforms instead of strengthening their own law enforcement capacities (Schmitz & Berndt, 2018, p. 7; Wischmeyer, 2018, p. 15-16; Buermeyer, 2017).

1.3. The obligation to implement a complaint procedure

All in all, German lawmakers were aiming for a faster response from social networks when content is reported as unlawful. This resulted in the obligation to ensure a procedure that guarantees a reaction to “manifestly unlawful content” within 24 hours of receipt of the complaint. As mentioned above, this provision is one of the most criticised because of the uncertain effects it could have on free speech (Keller, 2018, p. 2). This topic is rightly at the centre of the debate because scholars are only beginning to learn more about the effects of these rules on the behaviour of both the platforms and the users (Gollatz et al., 2018). However, this paper focuses more on the way social networks have implemented the obligation to ensure a complaint procedure. This will later help measure the informational value of the reports (see infra, section 4).

According to sec. 3 (1) NetzDG, social networks have “to supply users with an easily recognisable, directly accessible and permanently available procedure for submitting complaints about unlawful content.” The implementation of this obligation is decisive for the relevance of the reports when it comes to conducting meaningful evaluation and regulatory impact assessments. I evaluated the significance of the reports in view of the accessibility of the complaint tool for users and the comparability of the resulting figures. As mentioned above, the number of complaints filed could have been significant for the regulatory goals had the data been collected and presented differently in the reports or, alternatively, had the provision been implemented identically regardless of the company carrying out its legal obligations. One needs to bear in mind (again) the explanatory memorandum of the NetzDG, which states that social networks have to provide a “user-friendly procedure for submitting complaints about unlawful content”. Furthermore, the procedure must be “easily recognisable, immediately accessible and always available”. The memorandum does not provide any further expectations or provisions concerning the implementation of the complaint tool. The second half of the memorandum’s section on the complaint procedure contains the requirements for the way complaints are handled once submitted by a user. It does not elaborate any further on the way social networks should design the complaint procedure as such, leaving the concrete implementation of complaint procedures, for the most part, to the platforms’ discretion. This aspect was probably not expected to be as decisive as it now appears, given that the reports show that at least one platform violates this provision. The criteria mentioned above regarding the “user-friendly procedure” shall therefore be at the centre of the remarks below on the informative value of the reports.

2. The NetzDG reporting obligation

In order to gain a better understanding of the way social networks moderate user-generated content and how they decide whether or not to remove content, the German lawmakers included a biannual reporting obligation. According to section 2 NetzDG:

Providers of social networks which receive more than 100 complaints per calendar year about unlawful content shall be obliged to produce half-yearly German-language reports on the handling of complaints about unlawful content on their platforms, covering the points enumerated in subsection (2), and shall be obliged to publish these reports in the Federal Gazette and on their own website no later than one month after the half-year concerned has ended. The reports published on their own website shall be easily recognisable, directly accessible and permanently available.2

The NetzDG’s explanatory memorandum states that the reporting obligation is required “in order to create the necessary transparency for the general public”, 3 a requirement that had been similarly formulated some time ago by scholars and activists (Gillespie, 2018, p. 199). The secondary goal of the reporting obligation is to provide numbers and facts “necessary in the interest of an effective impact assessment”. The German Parliament is currently discussing a revision of the law, following proposals that range from a complete abrogation of the law to light adaptations. Changes made to the NetzDG will be based, at least partly, on the reports. The whole NetzDG project also serves as an example (for better or worse) for similar legislative undertakings. These reports are, therefore, central to a better development of the legislative tool, not only at the national level, but also to answer the challenge posed by content moderation in general. As mentioned above, the German approach was quite a push forward due to political circumstances and public pressure. There is so far no equivalent in other jurisdictions, no solution is considered standard, and no best practice has been established across borders because, when it comes to balancing content moderation and freedom of expression, the issues that arise are too numerous and too varied.

Hate speech, fake news, copyright infringements – to name just a few of the issues that arise – are often confused in the public debate, and their respective definitions differ from one country to another. Considering that social media platforms act globally, a one-size-fits-all solution would reduce costs. At the same time, such an extensive approach could be a threat to freedom of expression because scopes of application that are too broad lead to more restrictive regimes. To design a new regulatory framework, it is, therefore, necessary to monitor the effectiveness of its application. On that account, implementing a reporting obligation in sec. 2 NetzDG was necessary to improve this type of regulation (Eifert, 2017, p. 1453). The memorandum also states that producing the reports shall “ensure a meaningful and comprehensive picture of how they [the social networks] deal with complaints”. While sec. 2 (2) NetzDG determines the minimum requirements for the reports, the memorandum justifies the reporting obligation by the special role of social networks. They are “crucial to the public debate” and must take on their “increased social responsibility”. Rather than providing numbers without context, the reports are supposed to help understand the connection between the grounds on which social networks delete or block unlawful content and the provisions provided by law. Unfortunately, this expectation was not fulfilled.

The minimum requirements for the reports include specific points under sec. 2 NetzDG, such as providing the “number of incoming complaints about unlawful content” (nr. 3), the “number of complaints for which an external body was consulted” (nr. 6) and the “number of complaints in the reporting period that resulted in the deletion or blocking of the content at issue” (nr. 7). The numbers listed under nr. 7 need to be broken down according to the reasons for the specific complaint, which makes them particularly interesting with regard to the regulatory goal. The explanatory memorandum of the NetzDG is quite brief on that point: it merely mentions “the interests of transparency and the effectiveness of the complaint management” as the reason for that specific point and then refers to the comments on sec. 3 NetzDG (“Handling of complaints about unlawful content”). This part nevertheless reveals the tight connection between the reports and the handling of complaints. As a result, the obligation to report mainly serves to enhance transparency, which goes hand in hand with an effective impact assessment of the new law, as stated in the memorandum, and with the long-term goal of developing this regulatory framework in a sensible manner. These goals are important criteria for the evaluation of the published reports. It will become clear later in this article that they were perhaps underrated and minimised by the platforms – as the title of this article suggests.

3. Main results of the first round of reports

According to sec. 1 (1) NetzDG, only social networks that have more than two million users have to comply with its rules and, therefore, with the reporting obligation in sec. 2. In view of the user numbers on the largest social networks, the threshold of 100 complaints per calendar year (which triggers the obligation to publish reports) was easily reached by Facebook, YouTube and Twitter. Their reports demonstrate many similarities in the way they handle the matter of content moderation, but they also feature notable differences as far as the numbers of complaints are concerned. 4 The three reports from Facebook, YouTube and Twitter were analysed for this article not long after their publication, in August 2018. The overall result, as will be explained below, is that provisions for these types of reports need to be precise if one wishes to gather meaningful data. In substance, the reports show that social media platforms tend to moderate content on the grounds of their own community guidelines more than on the basis of national criminal law. I presume that the reason for this is that it allows them to react on a global scale rather than on a national one. Furthermore, social media platforms tend to use terms and tonalities in their community guidelines that are very similar to the vocabulary used in the NetzDG, making it rather unclear to the user where the differences lie (cf. Celeste, 2018). These similarities between the reports notwithstanding, the divergence between the complaint figures is quite significant.

3.1. Community guidelines are prioritised

As the reports show, the content review process is based on a two-step approach for all three platforms. After being notified of a user complaint, the first check is made on the grounds of community guidelines or standards (both terms being used synonymously hereinafter). If the content violates these internal rules, the reviewer will take it down. Only if the result of that review is negative, and if the user also submitted a complaint under NetzDG provisions (not only under community guidelines), will the content be further checked for NetzDG infringements. It remains unclear how much of the content taken down as hate speech under community guidelines could also have been blocked as a violation of German criminal law. To submit a complaint under the NetzDG, the user will either have to tick an additional NetzDG box in the case of YouTube and Twitter, or, in the case of Facebook, go to the “Help Centre” and follow a specific NetzDG complaint link. In the next subsection, I will take a closer look at how each platform implemented the NetzDG complaint procedure and the subsequent effects on their complaint numbers. The reports do not state whether complaints were examined for NetzDG violations even if they were not flagged as such by users. Nevertheless, it appears that YouTube, Twitter and Facebook all prioritise their own community guidelines since none of them offers to immediately submit a complaint under the NetzDG (which is not mandatory under sec. 3 (2) NetzDG).
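To make this two-step logic explicit, the following minimal sketch models the review flow as described in the reports. It is an illustrative reconstruction, not any platform’s actual implementation: all function and field names are hypothetical, and the deadlines in the comments simply restate the 24-hour and seven-day time spans of sec. 3 (2) NetzDG discussed above.

```python
# Illustrative reconstruction of the two-step review flow described in the
# reports. All names and example data are hypothetical.
from dataclasses import dataclass


@dataclass
class Complaint:
    content_id: str
    flagged_under_netzdg: bool  # user ticked the NetzDG box / used the NetzDG form


def violates_community_guidelines(content_id: str) -> bool:
    """Step 1: review against the platform's own rules (placeholder logic)."""
    return content_id in {"post-123"}  # hypothetical example data


def manifestly_unlawful_under_stgb(content_id: str) -> bool:
    """Step 2: legal review against the StGB offences listed in the NetzDG (placeholder)."""
    return content_id in {"post-456"}  # hypothetical example data


def review(complaint: Complaint) -> str:
    # Step 1: community guidelines are checked first; a violation leads to
    # global removal, regardless of German law.
    if violates_community_guidelines(complaint.content_id):
        return "remove globally (community guidelines)"
    # Step 2 is only reached if the user explicitly filed a NetzDG complaint.
    if complaint.flagged_under_netzdg:
        if manifestly_unlawful_under_stgb(complaint.content_id):
            return "block in Germany within 24 hours (sec. 3 (2) NetzDG)"
        return "refer for in-depth legal review (up to 7 days)"
    return "no action"


print(review(Complaint("post-456", flagged_under_netzdg=True)))
```

Note that, in this flow, a community guidelines hit short-circuits the legal check, which is precisely why the reports cannot tell us how much of the removed content would also have qualified as unlawful under German criminal law.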

A reason for this prioritisation could be the subsequent takedown options. So-called unwanted content, that is, content that violates community guidelines, will be deleted globally, whereas content that is unlawful under German penal law (and therefore subject to removal under the NetzDG) would only be blocked in Germany. Considering that content might be illegal in several countries, deleting it according to community guidelines might be more effective than taking it down for just one single country, with the possibility of repeating this action in another country further down the line. This raises questions of freedom of expression in privately-owned communication spaces, especially with regard to collateral censorship (Eifert, 2017, p. 1452; Balkin, 2018, p. 6). Although, from a European perspective, there is a large overlap between unlawful and unwanted content, the definitions do not completely intersect. From a communications science perspective, one might ask what consequences the NetzDG provisions have on the way social networks formulate their community guidelines on a global scale, because they could adjust their own policies to fit the broadest definition of hate speech (Gollatz et al., 2018, p. 6). This also shows that adapting the community guidelines to national (criminal) law in order to reduce the differences between community rules and German law could, in turn, have a massive influence on another country’s regulation of social networks. However, there is no certainty that platforms will follow that path because the cost of adapting to national legislation could be too high. Instead, they could broaden their community guidelines to cover multiple definitions of hate speech – potentially restricting more speech than necessary under the respective national regulations (Eifert, 2017, p. 1452; Belli, Francisco, & Zingales, 2017, p. 46; Koebler & Cox, 2018).

3.2. Content moderation: how?

Under the reporting obligation of sec. 2 NetzDG, it is mandatory to describe the personnel and organisational resources deployed to comply with the provisions of the law. The reports show that human content reviewers are by no means replaced by machines (Delort et al., 2011, p. 24). None of the three social networks examined for this paper relies solely on algorithms or artificial intelligence to cover the tasks of recognising and reviewing unlawful content or handling user complaints. Given the amount of data uploaded by users, filters and other technologies are in use and undergo constant optimisation. Yet, in order to properly review content that might be unlawful, platforms still heavily depend on human moderators (Roberts, 2016; Matsakis, 2018). The role of moderators in the review process is even more important when it comes to evaluating content that does not violate community standards but is potentially unlawful (see supra, section 3.1). Cases that do not violate community guidelines but might be punishable under German criminal law are in general more complex and as a result require more sophisticated review. The cost of content moderation therefore increases when it has to comply with the law rather than with community guidelines, because applying legal provisions is more complex than applying internal guidelines. From a cost-benefit point of view, platforms would prefer to minimise staff overheads (Read, 2019), which is why the way they address this challenge is worth paying attention to.

Through a partnership with a German company named Arvato, Facebook has a team specially dedicated to NetzDG complaints, numbering approximately 65 employees (as of August 2018). The staff speak German and are trained to handle the complaints that fall within the scope of section 1 (3) NetzDG. As an initial step, the “Community Operations” team reviews the reported content to determine whether or not it violates Facebook’s Community Standards. If the issue is taken further, a so-called “Legal Takedown Operation” team, specially trained to review content for potential illegality, takes over as the second part of the two-step approach. The report does not mention any software that would support the work of the Facebook reviewers. This might indeed not be necessary since the number of complaints under the NetzDG seems quite manageable (see infra, section 3.3). In general, Facebook makes use of AI to identify content that clearly violates its community standards (Koebler & Cox, 2018), but relies on approximately 15,000 moderators worldwide to review unwanted content (Newton, 2019), as hate speech is still difficult to identify automatically with any precision (Koebler & Cox, 2018).

YouTube has integrated tools such as content filters into its upload process. Its NetzDG report states that every uploaded video is already filtered for unlawful content, a measure the company interprets as additional compliance with the provisions of sec. 1 (3) NetzDG. YouTube had to deal with copyright infringements long before the hate speech problem became so virulent and has therefore been using its filtering software, Content ID, to manage copyright issues. Since June 2017, YouTube has also integrated machine learning in order to optimise the filtering of illegal content (YouTube, 2017). Furthermore, there is a team dedicated to NetzDG-flagged content that, similar to Facebook’s approach, does not violate community guidelines but is reported by a user or a complaints body as a NetzDG violation. For the time being, this team dedicated solely to NetzDG complaints numbers approximately 100 employees.

According to Twitter’s report, over 50 people work on NetzDG complaints. The report does not include any further information on the technological tools used to support that team. Given Twitter’s massive deletion of (presumably) fake accounts in July 2018 on the one hand, and the immense volume of content constantly uploaded on the other, it is quite likely that Twitter also uses filters to detect unwanted content. The company does not disclose which technological resources it uses to find and review unwanted content such as hate speech. However, it probably makes use of algorithms and machine learning to support human reviewers and reduce the amount of labour involved in the process. Such filters could also be used by Twitter to proactively detect content that is potentially unlawful under the NetzDG, but it does not mention them in its report (this information is not mandatory under the NetzDG reporting provisions).

Since automated technologies are not yet able to handle sensitive or context-dependent cases, such as content that could be classified as hate speech, satire, awareness- or education-related material, or politically sensitive speech, they are unable to handle all types of complaints. Although studies show that the technology is getting better at detecting hate speech and offensive language using machine learning (Gaydhani et al., 2018), researchers tend to agree that so far no technology is capable of recognising hate speech beyond very clear cases of offensive speech (Gröndahl et al., 2018, p. 3). YouTube’s statement on this subject is clear and straightforward: “Algorithms are not capable of recognising the difference between terror propaganda and critical coverage of such organisations or between inciting content and political satire.” The inability of filters to recognise unwanted content could be the cause of unwarranted content removal. Several cases, including satire and innocuous pictures, were left uncommented on by the platforms concerned; these removals could be due to human error but perhaps also to the failure of algorithmic and intelligent systems to distinguish unwanted content from uncontentious content. These cases have been discussed in the media, but none of the platforms disclosed on which grounds the content was removed (Schmitz & Berndt, 2018, p. 31). Taken together, the point on the “resources deployed to comply with the NetzDG” constitutes only one of many sources of speculation around the use of technology in content moderation, and the reports are not specific enough to draw further conclusions in that area.
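To give a concrete sense of the kind of technology at issue, the following minimal sketch follows the n-gram/TF-IDF approach referenced above (Gaydhani et al., 2018): texts are converted into TF-IDF-weighted n-gram vectors and a linear classifier is trained on labelled examples. The tiny data set is a purely illustrative assumption, and, as the evasion results of Gröndahl et al. (2018) indicate, such surface-level models are exactly the kind that fail on satire, quotation or obfuscated wording.

```python
# Minimal sketch of an n-gram/TF-IDF classifier in the spirit of
# Gaydhani et al. (2018). The training data below is an illustrative
# placeholder, not a real moderation data set.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples: 1 = offensive/hateful, 0 = benign.
texts = [
    "I hate group X, they should disappear",
    "what a lovely day in Berlin",
    "group X are vermin",
    "great match yesterday, well played",
]
labels = [1, 0, 1, 0]

# Word n-grams (unigrams to trigrams) weighted by TF-IDF,
# fed into a simple linear classifier.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 3), lowercase=True),
    LogisticRegression(),
)
model.fit(texts, labels)

# Such a model scores surface patterns only: it has no notion of satire,
# quotation or counter-speech, which is the limitation discussed above.
print(model.predict(["group X ruin everything"]))
```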

3.3. Number of complaints

The biggest divergence between the reports lies in the number of complaints from one company to another. From January to June 2018, counting reports from both complaint bodies and individuals, Twitter recorded 264,818 cases, YouTube 214,827 cases, and Facebook 886 cases filed explicitly as NetzDG complaints. For the second half of 2018 and for the same type of complaints, Twitter counted 256,462 cases, YouTube 250,957 and Facebook 500. The gap between the figures published by Facebook and those published by Twitter and YouTube remains striking and shows no real change from the first round of reports. The following analysis of the first round of reports can, therefore, be extended to the period between July and December 2018.

These figures leave a big question mark hanging over the volume gap between YouTube and Twitter on the one hand, and Facebook on the other, especially since Facebook has the most users and the fewest complaints (in absolute and relative terms). As already mentioned, the implementation of a complaint tool for users is crucial to the numbers that later constitute the central part of the report. After reading the reports and exploring the complaint mechanisms, the correlation between the flagging tool and the complaints filed seems obvious, but it should be analysed carefully. The implementation of section 3 (1) NetzDG constitutes the only meaningful difference between the social networks, and it explains why Facebook’s numbers are significantly lower. Regarding the implementation of a NetzDG complaint tool, two approaches can be observed: either the NetzDG complaint procedure is incorporated within the flagging tool of the social network, or it is located somewhere else. In the latter approach, the usual flagging tool does not include the option “complaint under NetzDG” at first glance. While YouTube and Twitter chose to include the NetzDG complaint in their flagging tools (visible in the first step described above), Facebook placed access to its NetzDG complaint procedure away from the content, under its imprint and legal information.

To be more specific, Facebook’s complaint form under the NetzDG is not incorporated into the feedback function next to the contentious post. Users who want to report third-party content and click the “give feedback” button are first offered the report categories under Facebook’s community guidelines. Categories include nudity, violence, harassment, suicide or self-harm, fake news, spam, hate speech and illegal sales. In addition to Facebook’s reporting tool, a complaint under the NetzDG can be submitted by using an external link located next to Facebook’s “Impressum” (the company’s imprint and legal information). This raises the question of whether or not this type of implementation of sec. 3 NetzDG is sufficient, which I examine below. Before analysing the consequences of this implementation in the next section, one has to bear in mind that the NetzDG does not oblige social networks to incorporate their respective complaint procedures into pre-existing complaint mechanisms.

4. Inherent bias due to a divergent implementation

According to sec. 3 (1) NetzDG, social networks have to implement an “easily recognisable, directly accessible and permanently available procedure for submitting complaints”. As described above, this provision has been implemented in quite a disparate manner by YouTube and Twitter, on the one hand, and Facebook, on the other, even though the complaint procedure might actually constitute the linchpin of the legislative project. As only a similar (if not identical) implementation of this provision would lead to comparable results, the data produced can hardly be used as grounds for evaluation and further development. The small number of complaints submitted through Facebook’s NetzDG tool raises the question of whether its complaint tool fulfils the requirements of sec. 3 (1) NetzDG and how it affects the significance of the reports.

4.1. User-friendly complaint procedure

As mentioned above, to comply with sec. 3 (1) NetzDG, platforms do not have to connect existing reporting tools to the NetzDG procedure. The latter must, however, be user-friendly, that is, an “easily recognisable, directly accessible and permanently available procedure”. The legislative memorandum does not provide further details on how these requirements shall be translated into the design of the complaint procedure. Hence, the question is not about the margin of discretion, but whether or not a procedure such as Facebook’s meets these criteria. First of all, the link to the NetzDG complaint form is not easily recognisable for users, since it is located far from what users immediately see when using the platform’s reporting features. When a user sees a post that he or she believes to be unlawful, the feedback tool alongside it only shows the categories of the community guidelines and does not mention the possibility of reporting it under NetzDG provisions. The detour via an external link located next to Facebook’s “Impressum” can hardly be described as “easily recognisable”. Providing a NetzDG link next to a website’s imprint is easily recognisable if you are looking for Facebook’s general company information, but not if your goal is to report hate speech. That being said, the link is “permanently available” when a user accesses Facebook in Germany.

Looking at the low volume of complaints under the NetzDG in Facebook’s case, one cannot help but connect the remote location of the complaint link with the numbers in Facebook’s report. 886 cases in the first half of 2018 are hard to reconcile with the high number of users and Facebook’s constant struggle with unwanted content when it comes to hate speech (and other contentious posts) (Pollard, 2018). Libel, defamation and incitement to violence have been a constant issue for the world’s largest social network (Citron & Norton, 2011, p. 1440), and Facebook only recently started to disclose some of its takedown rules (Constine, 2018). The latter had been kept secret for a long time, fuelling speculation as to Facebook’s real takedown policies. Social media platforms are often criticised for their lack of transparency when it comes to policing speech (Citron & Norton, 2011, p. 1441; Ranking Digital Rights, 2018 Index, Section 6).

Every Facebook user has access to a reporting tool (the feedback button next to a post), regardless of the NetzDG provisions, but he or she might not be aware of the additional possibility provided by the NetzDG – which makes this case quite special. Not only is this additional complaint procedure well hidden, but once a user is presented with the NetzDG complaint form (on Facebook), he or she will be warned that any false statement could be punishable by law (even though this rule does not apply to statements made to private parties). On the one hand, this might discourage people who wish to complain for no genuine reason and reduce the costs of unnecessary review loops. On the other hand, it could prevent users from reporting potentially unlawful content, which is cause for concern as it may result in chilling effects. The notion of chilling effects comes from US First Amendment scholarship and was introduced by a US Supreme Court ruling in 1952. The “chilling effects” concept essentially means that an individual will be deterred from a specific action under the “potential application of any civil sanction” (Schauer, 1978, p. 689). If a user – who is probably unsure of the lawfulness of third-party content – tries to use the complaint procedure and is confronted with the warning that any false statement could lead to legal steps, deterrence seems likely. Again, Facebook’s NetzDG report is too brief and unspecific to infer chilling effects from the bare numbers. Such effects are nevertheless not to be underestimated, and all these elements combined suggest that Facebook is steering its users away from the NetzDG complaint procedure.

4.2. Bypassing the legislative goal?

The concrete implementation of this complaint procedure is decisive for its usability, but it is also relevant for the informational value of the figures featured in the reports. This raises the question of whether implementing the complaint procedure the way Facebook did could mean bypassing the legislative goal of the NetzDG. The implementation of the complaint procedure should in the first place – as stated above – protect users, not serve reporting purposes. A company’s compliance with this obligation is therefore only collaterally related to the informational value of the reports. Nonetheless, it is important when it comes to evaluating how the law contributes to the enhanced protection of users. The arguments above have shown that although social networks have fulfilled their transparency obligation by publishing reports, there is actually very little that we can conclude from them. The Facebook case makes it even more difficult to use the figures as a foundation for further development. Compliance alone does not lead to insightful data. Speculating on the reasons why Facebook decided to keep users away from the NetzDG complaint procedure will not lead anywhere, but what is certain is that the law was implemented in a rather symbolic way. One may doubt how precise the formulation of sec. 3 (1) NetzDG is with regard to the implementation of an “easily recognisable” complaint procedure. However, it would be too easy to blame the wording alone.

The argument has been made that Germany could not expect platforms to “self-regulate in its [in Germany’s] interest” (Fagan, 2017, p. 435) because of the relatively small size of its market on a global scale. On the contrary, if a company wants to do business in several countries, it needs to respect each country’s laws, especially if the laws in a specific country are, in principle, in line with the company’s own guidelines – just as most of the offences enumerated in the NetzDG overlap with the “hate speech” category of platforms’ community guidelines. In the case of Germany, there is no legal obligation to prioritise the relevant legal norms over the platform’s community guidelines (as long as the unlawful content is taken down), but that does not set aside the obligation to implement a user-friendly NetzDG complaint procedure. The subsequent question is: does Facebook’s failure to implement an easily recognisable complaint procedure according to sec. 3 (1) NetzDG mean that it is also bypassing the general legislative goal?

The answer is that, even though Facebook’s complaint procedure is very likely to violate the legal provision (see supra, section 4.1), it has – in sum – achieved the outlined objective, that is, to remove hate speech more quickly. The German government’s overall goal in 2017 was to force social networks to respond more speedily to alleged offences on their platforms. That is why the time span for takedown decisions is limited to 24 hours and why any breach of the obligation to ensure this type of fast-track procedure is severely fined. The legislator wanted to remove unlawful content from sight as quickly as possible while ensuring the users’ right to due process. The reports confirm there was a need to address the issue of verbal coarsening and to protect digital communication spaces from hate speech: all three examined platforms name hate speech as the primary source of complaints under the NetzDG. The social networks all implemented additional reporting tools (as part of the mandatory procedure), deployed additional resources and responded to the majority of complaints within 24 hours. To that extent the main requirements were met. The entry into force of the NetzDG led to larger and more specialised reviewer teams, which could potentially provide a more granular review procedure. Under these circumstances it would be wrong to conclude that any of the platforms explicitly bypassed the primary legislative goal. Facebook’s implementation nevertheless undermines the significance of the reports, since the numbers produced cannot be taken into account for an advanced evaluation.

Conclusion

The discussion around content moderation by social media platforms and its regulation is still unresolved on many levels. More work needs to be done on the relation between private rules for content moderation and national laws, including the question of prioritisation. This is also true for the enforcement of rules and the role of non-human content review in that process. We have seen that platforms cannot rely solely on technology, such as upload filters, to carry out the task of content moderation, since the technology is still not fully capable of recognising hate speech. Although most of the criticism of the NetzDG with regard to constitutional law remains valid, the reports analysed in this paper show that there are other aspects that deserve attention, such as the implementation of complaint procedures. Unfortunately, the reports analysed constitute no reliable ground for evaluating and further developing this obligation, despite the data they contain. This is mainly because the biggest player, Facebook, has dodged the obligation to create an accessible and user-friendly NetzDG complaint procedure, preferring to manoeuvre users towards its own feedback form featuring its own categories of community standards. There was no change in this regard in the second round of reports. As long as platforms prioritise their own community rules, the effects on online speech remain more or less the same as before the NetzDG came into force, making it almost impossible to truly evaluate the impact of such regulation.

We can nonetheless wonder about the added value of an additional complaint tool within a platform’s feedback mechanisms. Since (most) social media platforms operate globally, moderating content on the basis of global community guidelines is more cost-effective than if it were conducted on the basis of national regulation. Thus, the NetzDG reports could lead to the conclusion that this type of additional feedback tool, which would vary from country to country (because of national regulations), is ineffective and therefore unnecessary. As mentioned in the last section, the NetzDG did push the platforms to eventually take action against hate speech, an achievement that should not be downplayed. Perhaps ensuring a faster review of user content by more specialised content moderators is a sufficient goal for this type of law. The only conclusion to be drawn is that, for the time being and for the sake of acting on a global scale, social media platforms will prioritise their community guidelines when it comes to moderating user content.

I would like to thank Stephan Dreyer and Nikolas Guggenberger for their valuable feedback on an earlier draft of this paper. Thank you to the peer reviewers for reading and evaluating this article.

References

Balkin, J. M. (2018). Free Speech is a Triangle. Columbia Law Review, 118(7), 2011-2056. Retrieved from https://www.jstor.org/stable/26524953

Balkin, J. M. (2014). Old-school/new-school speech regulation. Harvard Law Review, 127(8), 2296-2342. Retrieved from https://harvardlawreview.org/2014/06/old-schoolnew-school-speech-regulation/

Belli, L., Francisco, P.A., & Zingales, N. (2017). Law of the Land or Law of the Platform? Beware of the Privatisation of Regulation and Police. In L. Belli & N. Zingales (Eds.), Platform Regulations: How Platforms are Regulated and How They Regulate Us – Official Outcome of the UN IGF Dynamic Coalition on Platform Responsibility (pp. 41-64). Rio de Janeiro: FGV Direito Rio Edition.

Buermeyer, U. (2017, March 24). Facebook-Justiz statt wirksamer Strafverfolgung?. Legal Tribune Online. Retrieved from https://www.lto.de/recht/hintergruende/h/netzwerkdurchsetzungsgesetz-netzdg-facebook-strafverfolgung-hate-speech-fake-news/

Celeste, E. (2018). Terms of service and bills of rights: new mechanisms of constitutionalisation in the social media environment? International Review of Law, Computers & Technology, 33(2), 122-138. doi:10.1080/13600869.2018.1475898

Citron, D. K. (2017). Extremist Speech, Compelled Conformity, and Censorship Creep. Notre Dame Law Rev., 93(3), 1035-1071. Retrieved from https://scholarship.law.nd.edu/ndlr/vol93/iss3/3/

Citron, D. K., & Norton, H. L. (2011). Intermediaries and Hate Speech: Fostering Digital Citizenship for Our Information Age. Boston University Law Review, 91(4), 1435-1484. Retrieved from http://www.bu.edu/law/journals-archive/bulr/volume91n4/documents/CITRONANDNORTON.pdf

Constine, J. (2018, April 24). Facebook reveals 25 pages of takedown rules for hate speech and more. TechCrunch. Retrieved from https://techcrunch.com/2018/04/24/facebook-content-rules/?guccounter=2

Delort, J. Y., Arunasalam, B., & Paris, C. (2011). Automatic moderation of online discussion sites. International Journal of Electronic Commerce, 15(3), 9-30. doi:10.2753/JEC1086-4415150302

Djuric, N., Zhou, J., Morris, R., Grbovic, M., Radosavljevic, V., & Bhamidipati, N. (2015). Hate speech detection with comment embeddings. In Proceedings of the 24th International Conference on World Wide Web (pp. 29-30). doi:10.1145/2740908.2742760. Retrieved from http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.697.9571&rep=rep1&type=pdf

Fagan, F. (2017). Systemic Social Media Regulation. Duke Law & Technology Review, 16(1), 393-439. Retrieved from https://scholarship.law.duke.edu/dltr/vol16/iss1/14/

Funke, D. (2018, July 24). A guide to anti-misinformation actions around the world, Poynter. Retrieved from https://www.poynter.org/news/guide-anti-misinformation-actions-around-world

Gaydhani, A., Doma, V., Kendre, S., & Bhagwat, L. (2018). Detecting Hate Speech and Offensive Language on Twitter using Machine Learning: An N-gram and TFIDF based Approach. arXiv preprint arXiv:1809.08651 [cs.CL]. Retrieved from https://arxiv.org/pdf/1809.08651.pdf

Gersdorf, H. (2017). Hate Speech in sozialen Netzwerken – Verfassungswidrigkeit des NetzDG-Entwurfs und grundrechtliche Einordnung der Anbieter sozialer Netzwerke. MMR – MultiMedia und Recht, (7), 439-447.

Gillespie, T. (2018). Custodians of the Internet: Platforms, content moderation, and the hidden decisions that shape social media. New Haven: Yale University Press.

Gollatz, K., Riedl, M. J., & Pohlmann, J. (2018, August 9). Removals of online hate speech in numbers [Blog post]. HIIG Digital Society Blog. doi:10.5281/zenodo.1342325. Retrieved from https://www.hiig.de/en/removals-of-online-hate-speech-numbers/

Gröndahl, T., Pajola, L., Juuti, M., Conti, M., & Asokan, N. (2018). All You Need is "Love": Evading Hate-speech Detection. arXiv preprint arXiv:1808.09115. Retrieved from https://arxiv.org/pdf/1808.09115.pdf

Guggenberger, N. (2017a). Das Netzwerkdurchsetzungsgesetz – schön gedacht, schlecht gemacht [The Network Enforcement Act – well thought, poorly done]. Zeitschrift für Rechtspolitik, 2017(4), 98-101.

Guggenberger, N. (2017b). Das Netzwerkdurchsetzungsgesetz in der Anwendung [The Network Enforcement Act in application]. Neue Juristische Wochenschrift, 36, 2577-2582.

Holznagel, D. (2018). Overblocking durch User Generated Content (UGC)-Plattformen: Ansprüche der Nutzer auf Wiederherstellung oder Schadensersatz? Computer und Recht, 34(6), 369-378. doi:10.9785/cr-2018-340611

Kaye, D. (2018). Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression (A/HRC/38/35). United Nations Human Rights Council. Retrieved from http://undocs.org/A/HRC/38/35

Keller, D. (2018). Internet platforms: Observations on speech, danger, and money (Aegis Series Paper No. 1807). Stanford, CA: Hoover Institution. Retrieved from: https://www.hoover.org/sites/default/files/research/docs/keller_webreadypdf_final.pdf

Klonick, K. (2018). The New Governors: The People, Rules, and Processes Governing Online Speech. Harvard Law Review, 131(6), 1598-1670. Retrieved from https://harvardlawreview.org/2018/04/the-new-governors-the-people-rules-and-processes-governing-online-speech/

Koebler, J., & Cox, J. (2018, August 23). The Impossible Job: Inside Facebook’s Struggle to Moderate Two Billion People. Vice Motherboard. Retrieved from https://motherboard.vice.com/en_us/article/xwk9zd/how-facebook-content-moderation-works

Liesching, M. (2018a). Lösungsmodell regulierter Selbstregulierung – Zur Übertragbarkeit der JMStV-Regelungen auf das NetzDG [The solution model of regulated self-regulation – On the transferability of the JMStV provisions to the NetzDG]. In M. Eifert & T. Gostomzyk (Eds.), Netzwerkrecht (pp. 135-152). Baden-Baden: Nomos.

Liesching, M. (2018b). Die Durchsetzung von Verfassungs- und Europarecht gegen das NetzDG [Enforcing constitutional and European law against the NetzDG]. MMR – MultiMedia und Recht, (1), 26-30.

Liesching, M. (2018c). Netzwerkdurchsetzungsgesetz [Network Enforcement Act] (1st online ed.). Baden-Baden: Nomos.

Matsakis, L. (2018, September 26). To Break a Hate-Speech Detection Algorithm, Try 'Love'. WIRED. Retrieved from https://www.wired.com/story/break-hate-speech-algorithm-try-love/

Newton, C. (2019, February 25). The Trauma Floor. The secret lives of Facebook moderators in America. The Verge. Retrieved from https://www.theverge.com/2019/2/25/18229714/cognizant-facebook-content-moderator-interviews-trauma-working-conditions-arizona

Nolte, G. (2017). Hate-Speech, Fake-News, das »Netzwerkdurchsetzungsgesetz« und Vielfaltsicherung durch Suchmaschinen [Hate speech, fake news, the "Network Enforcement Act" and safeguarding diversity through search engines]. ZUM, 7, 552-565.

Nunziato, D. C. (2014). The Beginning of the End of Internet Freedom (Law School Public Law Research Paper No. 2017-40). Washington DC: George Washington University. Available at https://scholarship.law.gwu.edu/faculty_publications/1280/

Pollard, A. (2018, July 05). Facebook Found “Hate Speech” in the Declaration of Independence, Slate. Retrieved from https://slate.com/technology/2018/07/facebook-found-hate-speech-in-the-declaration-of-independence.html

Ranking Digital Rights, (2018). Corporate Accountability Index. Retrieved from https://rankingdigitalrights.org/index2018/report/executive-summary/

Read, M. (2019). Who Pays for Silicon Valley’s Hidden Costs? New York Magazine. Retrieved from http://nymag.com/intelligencer/2019/02/the-shadow-workforce-of-facebooks-content-moderation.html

Reinhardt, J. (2018, January 15). A Slight Case of Overblocking: Les enjeux constitutionnels de la loi allemande sur les réseaux sociaux [The constitutional issues of the German social networks law] [Blog post]. Retrieved from Jus Politicum http://blog.juspoliticum.com/2018/01/15/a-slight-case-of-overblocking-les-enjeux-constitutionnels-de-la-loi-allemande-sur-les-reseaux-sociaux-par-jorn-reinhardt/

Reuter, M. (2018). Das Netzwerkdurchsetzungsgesetz gefährdet die Meinungsfreiheit [The Network Enforcement Act endangers freedom of opinion]. In T. Müller-Heidelberg, M. Pelzer, M. Heiming, C. Röhner, R. Gössner, M. Fahrner, H. Pollähne, & M. Seitz (Eds.) Grundrechte-Report. Frankfurt am Main: Fischer Taschenbuch Verlag.

Richter, P. (2017). Das NetzDG – Wunderwaffe gegen „Hate Speech“ und „Fake News“ oder ein neues Zensurmittel? [The NetzDG – a miracle weapon against "hate speech" and "fake news" or a new instrument of censorship?]. ZD-Aktuell, 9, 05623.

Rosenfeld, M. (2002). Hate speech in constitutional jurisprudence: a comparative analysis. Cardozo Law Review, 24(4), 1523-1567. Retrieved from https://larc.cardozo.yu.edu/faculty-articles/148/

Schauer, F. (1978). Fear, risk and the first amendment: Unraveling the chilling effect. Boston University Law Review, 58, 685-732. Available at https://scholarship.law.wm.edu/facpubs/879/

Schiff, A. (2018). Meinungsfreiheit in mediatisierten digitalen Räumen – Das NetzDG auf dem Prüfstand des Verfassungsrechts [Freedom of opinion in mediatised digital spaces – The NetzDG under constitutional scrutiny]. MMR – MultiMedia und Recht, (6), 366-371.

Schmitz, S., & Berndt, C. M. (2018). The German Act on Improving Law Enforcement on Social Networks (NetzDG): A Blunt Sword? Retrieved from https://ssrn.com/abstract=3306964

Schulz, W. (2018). Regulating Intermediaries to Protect Privacy Online – the Case of the German NetzDG (Discussion Paper No. 2018-01). Berlin: Alexander von Humboldt Institut für Internet und Gesellschaft. Retrieved from https://www.hiig.de/publication/regulating-intermediaries-to-protect-privacy-online-the-case-of-the-german-netzdg/

Schulz, W., & Held, T. (2002). Regulierte Selbstregulierung als Form modernen Regierens. Im Auftrag des Bundesbeauftragten für Angelegenheiten der Kultur und der Medien. Endbericht [Regulated self-regulation as a form of modern governing. On behalf of the Federal Commissioner for Cultural and Media Affairs. Final report] (Working Paper No. 10). Hamburg: Verlag Hans-Bredow-Institut. Retrieved from https://www.hans-bredow-institut.de/uploads/media/Publikationen/cms/media/a80e5e6dbc2427639ca0f437fe76d3c4c95634ac.pdf

Scott, C. (2004). Regulation in the age of governance: The rise of the post-regulatory state. In J. Jordana, & D. Levi-Faur (Eds.), The politics of regulation: Institutions and regulatory reforms for the age of governance (pp. 145-173). Cheltenham: Edward Elgar. doi:10.4337/9781845420673.00016

Wimmers, J., & Heymann, B. (2017). Zum Referentenentwurf eines Netzwerkdurchsetzungsgesetzes (NetzDG) – eine kritische Stellungnahme [On the ministerial draft of a Network Enforcement Act (NetzDG) – a critical statement]. AfP – Zeitschrift für das gesamte Medienrecht, 48(2), 93-102. doi:10.9785/afp-2017-0202

Wischmeyer, T. (2018). ‘What is Illegal Offline is Also Illegal Online’ – The German Network Enforcement Act 2017. doi:10.2139/ssrn.3256498

YouTube (2017, December 4). Expanding our work against abuse of our platform. Retrieved from https://youtube.googleblog.com/2017/12/expanding-our-work-against-abuse-of-our.html

Footnotes

1. The NetzDG itself might not be flawless, but it can in some way serve as an experiment for other lawmakers.

2. NetzDG quotes are from the official translation by the German Ministry of Justice.

3. Loose translation of the NetzDG explanatory memorandum, retrieved from https://dipbt.bundestag.de/doc/btd/18/123/1812356.pdf, last accessed 13 May 2019.

4. This paper does not examine the case of the social network Google+ explicitly, but its report shows that the legal requirements according to NetzDG were implemented in the same manner as for YouTube. The platform Change.org is also within the scope of application but was not examined here for the purpose of focusing on the biggest platforms.


Net neutrality regulation and the participatory condition


Introduction: participation trouble

Regulators all over the world have attempted to address the challenges of internet governance by turning to networked publics as stakeholders. This paper focuses on the issue of net neutrality as a key concern for regulatory frameworks concerning the internet as a core communications infrastructure. Network or net neutrality is the basic principle that all traffic should be treated equally as it traverses the internet (Wu, 2003). This principle is derived from earlier ideas of common carriage in transportation regulation, where telecommunication providers have similarly long been regulated as content-neutral common carriers (Lentz, 2013, p. 572). The stakes of network neutrality are particularly high for the ideal of networked deliberative democracy, in a climate where large telecommunications companies seek ever more consolidated power over both content and networks (Barratt & Shade, 2007). Yet the definition of neutrality in this context is perhaps more complex than it appears; as Christian Sandvig (2007) has argued, the way that layers of internet protocol work means that discrimination is not simply a legal matter but a technical one, built into internet infrastructure. Moreover, business practices such as differential pricing or zero rating, where certain applications can be used without contributing to data allowances, show how violations of the idea of net neutrality raise conflicting consumer interests, further clouding the apparent public interest perspective in developing regulatory provisions that uphold common carriage principles.

In their public consultations for proceedings implicating net neutrality, various regulators over the past few years have experienced or actively sought the input of everyday users of the internet, beyond that of the usual policy experts (see Table 1 for a summary). In some cases, the usual mechanisms for soliciting comments garnered many more responses than is typical. A particularly well-known example of this is the US Federal Communications Commission (FCC) online comments portal, which received millions of submissions during the past two consultation periods on net neutrality in 2014 and 2017 (Kimball, 2016; Novak & Sebastian, 2019; Obar, 2016). In other cases, regulators expanded beyond their usual processes to actively invite broad public comment, such as in the way that the Canadian Radio-television and Telecommunications Commission (CRTC) set up a thread on the bulletin-board-style website Reddit to solicit public comments on its 2016 consultation about differential pricing. The Telecommunications Regulatory Authority of India (TRAI) held its 2015 consultations on net neutrality at a moment when Facebook was attempting a massive roll-out of its Free Basics programme in the country, and so the public input in that case volleyed between TRAI’s consultation email address and rival form submissions set up by Facebook and advocacy group Save the Internet. In the European Union, the Body of European Regulators for Electronic Communications (BEREC) accepted submissions to its 2016 open internet consultation via email but also indirectly through faxes to members of the European Parliament. Such varied means of encouraging public participation in this particular regulatory debate among the four jurisdictions point toward the impact of large-scale advocacy work in support of net neutrality provisions (Faris, Roberts, Etling, Othman, & Benkler, 2016).

Table 1: Four examples of recent net neutrality consultation mechanisms

Region | Regulator | Year(s) | Consultation | Mechanism(s) | Comments
US | FCC | 2014; 2017 | Docket no. 14-28, “Protecting and Promoting the Open Internet”; Docket no. 17-108, “Restoring Internet Freedom” | comment portal | ~4 million; ~22 million
Canada | CRTC | 2016 | File no. CRTC 2016-192, “Examination of differential pricing practices related to Internet data plans” | comment portal; Reddit | 123; ~1,200
India | TRAI | 2015 | “Regulatory Framework for Over-the-Top (OTT) Services” | email; Facebook | ~1 million; 1.35 million
EU | BEREC | 2016 | “Guidelines on the Implementation by National Regulators of European Net Neutrality Rules” | email; fax | ~500,000; number unknown

These four examples illustrate ways that regulators have imagined tapping into a networked public interest perspective, and in the process, shoring up their own legitimacy as governing bodies actively consulting “the public” without questioning who that public includes and excludes (Salter, 2007, p. 304). Moreover, the digital spaces involved in these consultations provide key sites for critiquing how publics get constructed by the affordances of online platforms. Using discursive interface analysis - a method of assessing the “productive constraints of [web] interfaces and the norms they construct” (Stanfill, 2015, p. 1060) - I argue that these particular examples show how policymaking about the internet reifies a particular version of public participation that is directly tied to the internet’s own constitutive myth of democratisation.

The participatory condition

The key periodising context for this argument is that of the participatory condition, which Darin Barney and his co-authors (2016) describe as one in which “participation has evolved into a leading mode of subjective interpellation” (p. x). Interpellation is an Althusserian concept that accounts for the ways in which people are called into particular subject positions by the ideological apparatuses of their social milieu, including, for example, the media. Taking the place of other public values such as equality, justice, fairness, community, or freedom, participation has become the norm of contemporary political subjectivity according to the promises of networked digital media, upheld both by the set-up of Western institutions and by critiques of those institutions (Barney et al., 2016, p. xii). In terms of policy, participation has been yoked to internet policy-making’s appeals to the public interest, which rest on an ideal of citizen participation. At the same time, participation is valorised in critiques of the ways that internet policy-making tends to be exclusionary given its inability to ensure participation of actors beyond its typically technocratic setting (Obar, 2016). This is a paradox that pervades all kinds of policymaking, not only that about the internet (e.g., Fischer, 2003), but the case of internet policymaking in particular illustrates the ideological weight of participation on two conjoined fronts: that of representing the will of the public alongside that of “empowering” the public via networks. Participation as a public goal has become foregrounded due to a combination of its generalisation (as a largely unspecific and unmeasurable property), compatibility with neoliberalism (the dominant mode for contemporary governance based on economic rationality), and alignment with the supposed democratising values projected onto networked digital technologies (Barney et al., 2016, p. x). The participatory condition thus describes the way the internet’s assumed power to lower barriers to engaging in public deliberation recasts citizenship as a set of responsibilities oriented around the technology’s affordances of interactivity.

Such a notion of the participatory condition offers a rejoinder to the concept of networked publics. In danah boyd’s (2010) formulation, a networked public adapts the essential contours of the Habermasian public sphere and Fraserian counterpublics - accessible venues for public debate regarding governance - to see the internet as a platform for imagined collectivity within virtual spaces. While the participatory potential of networked publics should not be understated (e.g., Bennett & Segerberg, 2013), the limitations of networked publics are tested when they stand in for public participation or are tasked with articulating a public interest perspective (Pangrazio, 2016, p. 172). Such limitations stem from the way that the participatory condition rests on promises of digital media that are symbiotic with for-profit models and therefore deeply politically ambiguous (Langlois, 2013, p. 92). Participation, which comes to be taken for granted as a virtue in liberal-democratic visions of digital culture, does not necessarily beget the properties it comes to be associated with, such as equality, justice, or efficacy. In other words, while participation carries generally positive connotations in terms of addressing social exclusion, participation in and of itself is ambivalent. Participation is in fact often promoted by powerful “bureaucracies, police forces, security and intelligence agencies, and global commercial enterprises” to maintain political domination (Barney et al., 2016, p. xxxii), and thus cannot be mapped onto resistance. Accordingly, case examples of net neutrality consultations in the US, Canada, India and the EU demonstrate how attempts to integrate networked publics into policy proceedings can further entrench social divides by upholding an idea that participatory platforms might act as “conduits for governance” (Langlois, 2013, p. 99). These examples further demonstrate the ironies inherent in appeals to a public interest perspective predicated on the participatory condition, where regulatory bodies depend on the legitimacy conferred by consulting a networked public.

The networked public interest in net neutrality

The way that regulatory bodies have configured the networked public in net neutrality consultations is illustrative of how the participatory condition structures contemporary articulations of the public interest. Influential predecessors of this tendency might be found in what Brandie Nonnecke and Dmitry Epstein (2016) term “crowdsourcing internet governance”. In their examination of multistakeholder policy-making at the Internet Corporation for Assigned Names and Numbers (ICANN) through an internet platform called IdeaScale, Nonnecke and Epstein argue that while the platform enabled diverse stakeholders to participate in internet governance, more effective engagement required extensive research, face-to-face meetings, and a pre-existing relationship with ICANN. Moreover, the IdeaScale platform’s design parameters, particularly its lack of multilingual support, stymied the ideal of global participation (Nonnecke & Epstein, 2016, p. 16). In this case, the greater accessibility to policy-making processes enabled by networked digital platforms represented a limited effort to diversify the input that actually influenced decision making, ultimately maintaining the exclusivity of policy discourse.

Recent net neutrality consultations have likewise made attempts to harness the participatory veneer of internet platforms - platforms that seem to support public debate and yet are known to also hinder democratic aims (e.g., Hindman, 2008) - in ways that suggest an openness toward diverse publics.

The American example

Perhaps the best example of this is the move toward increased public engagement by the US FCC during the Obama administration under the leadership of Tom Wheeler (Kimball, 2016, p. 5961). The baseline for the FCC’s public consultations is the “notice and comment” process mandated by the Administrative Procedure Act of 1946. More expansive outreach initiatives had been established by 2014, when the FCC revisited its 2010 Open Internet Order on the heels of a successful legal challenge by Verizon Communications, which established that internet service providers could not be regulated as common carriers without being reclassified as such. A 120-day period for accepting public comments on the FCC’s website opened in May 2014. Submitting a comment entailed a number of steps: navigating the list of open proceedings on the FCC comment website; locating this particular proceeding (Docket no. 14-28, “Protecting and Promoting the Open Internet”) and entering the electronic filing system; typing in one’s name, address, and comment; and agreeing to that personal information and the comment becoming part of the public record and available online. Through the lens of discursive interface analysis, the FCC’s comments platform produces a somewhat paradoxical version of participation. The procedure requires a degree of policy literacy from users who may not already be well-acquainted with the FCC’s procedures (Lentz, 2014), for example in even being able to locate the comment form among several obscurely numbered dockets, but the text box for the comment itself is restricted to a few paragraphs and does not support attachments, suggesting that only a limited amount of feedback is expected.

To try to encourage a general public less well-versed in the bureaucratic maze of comments submission to engage in the proceeding, a number of advocacy groups designed form letter templates that could easily be submitted through their own websites. These groups, including the Electronic Frontier Foundation, Free Press, and Demand Progress, sought to increase the number of comments supporting the protection of net neutrality by eliminating the need for commenters to struggle with the cumbersome interface of the FCC’s website. They also aimed to harness an increased public attention to the consultation driven by coverage on HBO’s late-night programme Last Week Tonight in early June 2014, when host John Oliver encouraged viewers to submit comments in favour of net neutrality provisions (Faris et al., 2016, p. 5849). In the wake of the publicity generated by Oliver’s segment, Robert Faris and his co-authors (2016) examined how the “networked public sphere” of online discourses reflected a core-periphery model of political mobilisation, where the “link economy” of hyperlinks on social media sites like Twitter further encouraged broad participation in comments submissions to the FCC (pp. 5852, 5860). When considered as designed interfaces for participation, however, the Oliver coverage as well as the Twitter discourse can also be seen as ways of reinforcing the participation of only a particular segment of the public. In traditional means of policy consultation it is also true that only a select few voices tend to be represented, but in the case of attempting to broaden participation via web platforms and television coverage, the issue is that these mediated arenas become synonymous with the public when they are instead still only representative of specialised demographics. For example in the FCC’s case, the demographics of Oliver’s viewership skew male, politically progressive, and highly educated, overlapping significantly with those who already appreciate the stakes of net neutrality (Freelon et al., 2016, p. 5910). Similarly, if Twitter discourse surrounding the FCC consultation represents a core-periphery flow of influence as Faris et al. (2016) suggest, then a central core of Twitter users – who also skew urban, higher income, politically progressive, and highly educated (Smith & Anderson, 2018) – are the most influential voices participating in this discourse. Moreover, as Jonathan Obar (2016) has argued about digital form letters such as those developed by advocacy groups in this case, their ability to overcome certain structural barriers to participating in the FCC’s consultations is undercut by the way that formalised, technocratic discourse is maintained in the actual decision making that takes place within the Commission (p. 5882).

Despite these shortcomings of participation in the way the FCC constructed it for the 2014 net neutrality consultations, the nearly four million online comments received were ultimately heralded as a win for the public interest perspective when the Commission announced its decision in 2015 to classify internet service providers as common carriers (Faris et al., 2016). It seemed as though internet channels enabled a broad public to be mobilised to participate in what would have normally been an exclusive arena for technocratic regulatory debate, as though people using the internet were motivated intrinsically to protect it. And yet, because the basic structural parameters of policy-making had not been fundamentally altered by the version of participation embodied in the FCC’s comments interface, a revisiting of net neutrality rules in 2017 shows how the same mechanism of participation can result in an opposite outcome.

Under a new federal administration and led by former telecom lawyer Ajit Pai, the FCC launched a public consultation in May 2017 to again solicit comments on net neutrality under the aegis of “Restoring Internet Freedom” (Docket no. 17-108). Again, advocacy groups set up form letter templates and Oliver covered the issue on his late-night programme. This time, the FCC received nearly 22 million comments, an immense number for an internet policy proceeding and a significant increase from the number of comments received in 2014. This surge in comments can be explained through designed vulnerabilities in the FCC’s comments platform. Not only did the platform crash after Oliver’s coverage (as it did the first time in 2014), but the FCC’s website also proved susceptible to spam comments submitted by bots. Over half of the comments submitted included false or misleading personal information, such as duplicate email addresses; 94% of the comments were duplicates submitted multiple times; and thousands of identical comments were often submitted at the same second (Hitlin et al., 2017). These vulnerabilities show how the solicitation of comments through a badly designed and implemented web form that is susceptible to abuse - many of the fraudulent comments have since been linked to Russian email addresses - represents an insufficient means of consulting broad publics or even actual people. The ultimate outcome of this particular consultation period, in which the FCC decided to roll back the protections for net neutrality put in place in 2015, further demonstrates how the comments exercise represents a somewhat empty gesture toward integrating a public interest perspective through networked publics that are ill defined, taken for granted, and only presumed to exist.

The Canadian example

The US example is illustrative of how internet-mediated participation, as a social norm predicated on supposedly inherently democratic values of the web, is constructed by regulators through their appeals to networked publics as a signifier adaptable to ambivalent political ends. In Canada, the CRTC is mandated by Parliament to consult Canadians and respond to their enquiries and complaints. In order to do this, the CRTC uses an online platform similar to the FCC’s for comments submission, which is difficult to navigate and requires commenters to use their real names and contact information, which then become public on the CRTC’s website. In addition, the regulator has produced a brochure to encourage participation, which admonishes Canadians to “make your voice heard” by contacting the CRTC online, by phone or fax, participating in a proceeding or submitting comments to a consultation, following the CRTC’s Twitter account or liking its Facebook page (CRTC, 2017). It is not clear from the brochure, however, what the difference between these modes of participation might be or how they are used in decision making as part of a proceeding such as the 2016 inquiry into differential pricing practices (File no. CRTC 2016-192). In contrast to the millions of comments received by the FCC, only 123 comments were submitted to this proceeding via the CRTC’s online submission system, of which 86 were from private citizens and the rest were mainly from industry representatives, non-profit groups, or academics. The design of the CRTC’s interface requires that commenters know exactly which file number to search for and how to access its attendant submission button (similar to the FCC’s site); moreover, the way this proceeding was framed – in the language of differential pricing (zero rating) rather than a more inclusive consideration of net neutrality – may have resulted in discursively limiting citizen understanding of the issue under debate. In this way, the CRTC’s system for explaining a proceeding as well as linking to its submission portal sets up a discursive interface that structures the ideal user of the website according to more standard technocratic policy processes, preventing broad participation despite the apparent aims of the participation brochure.

The European and Indian examples

In Europe and India, regulators used a different system to solicit comments on their net neutrality proceedings, posting an email address instead of setting up an online submission form. The request for email submissions is a mechanism of consulting interested parties via written feedback on draft documents, which is legally mandated by European Union Regulation EC 1211/2009 but only suggested by the transparency clause in the TRAI Act (1997). A comparison of these two regulatory contexts demonstrates that, despite differences in its legal requirements, the request for email submissions is also a design choice that opens up the affordances of commenting beyond the restrictive parameters of online forms that suggest only a limited range of ideal comments (Stanfill, 2015, p. 1071). BEREC’s instructions for email submission to its June-July 2016 consultation note that comments should be sent in English and include reference to its draft guidelines on net neutrality, and that comments will be posted publicly but without any identifying personal information. In this way, an ideal comment is suggested by the instructions rather than the interface, which also does not require commenters’ personal information to be made public. Similarly, TRAI solicited comments from March-April 2015 through an email address that enabled commenters to write as much as they liked and include attachments. In this case, however, there were some problems once comments were publicly posted to India’s mygov website: some of the email messages posted had nothing to do with the net neutrality consultation and others were confidential emails intended for TRAI employees. Despite their differences, both the BEREC and TRAI consultations share a common email interface for comments submission, and a similar trajectory where few comments were submitted until the involvement of Save the Internet, an advocacy group that set up a submission form for each consultation through its dedicated regional sites savetheinternet.eu and savetheinternet.in. Described as a recursive public of geeks coalescing around the governance of internet infrastructure that supports their affinity (Prasad, 2017, p. 420), Save the Internet is largely responsible for the volume of public submissions to both the European (half a million) and Indian (one million) consultations.

Considering each of the cases of comments solicitation in the US, Canada, India, and the EU in relation to networked publics, the role of advocacy groups points to how those publics are each constructed from within specific subcultures despite the suggestion that they reflect broader public sentiment. The idea of recursive publics, suggested by Revati Prasad’s (2017) analysis of the Indian net neutrality consultation, implies that geek culture, as a subset of public culture, characterises the comments received in each of these proceedings. In his formulation of recursive publics as a concept, Christopher Kelty (2005) explains that they comprise “a distinct social group […] constituted by a shared, profound concern for the technical and legal conditions of possibility for their own association” (p. 185). In the US case, it is apparent that this particular group partly overlaps with the audience for Oliver’s HBO programme, which is often credited as the main driver of the volume of comments submissions, at least in the 2014 consultation (e.g., Faris et al., 2016). The following year in India, a similar strategy was devised by comedy group All India Bakchod, who produced a YouTube video that used internet humour and was directed toward Indian “digital natives” (Prasad, 2017, p. 421). Combined with the interfaces for comments submissions and the level of policy literacy required to make an effective comment, these sorts of mobilisation efforts show how in accessing the perspectives of a specifically networked public, regulators constrain the notion of policy participation in line with the way the participatory condition recasts citizenship as a specific affordance of digital technologies (Barney et al., 2016). In this way, regulators claim to want broad input, seem to put mechanisms in place to solicit broad input via the internet, but maintain structuring inequalities by ignoring how internet participation is itself constricted by the affordances of digital platforms.

Participation by platform

Especially when considering the involvement of social media platforms in net neutrality consultations, the ways that participation has been co-opted by private interests further complicate the way that interface affordances constrain public engagement. As Ganaele Langlois (2013) has argued, social media platforms have themselves become “conduits for governance”, with the consequence that “there is an undeniable closing off of the concept of participatory media as it is folded into a corporate online model of participation via a handful of software platforms” (pp. 99, 92). Perhaps the best examples of this sort of platform politics in net neutrality policy-making can be found in the Indian and Canadian cases. Both of these countries’ regulators contended with social media platforms in distinct ways: in India, a key backdrop for TRAI’s 2015 net neutrality consultations was Facebook’s simultaneous push to roll out its Free Basics programme throughout the country; in Canada, the low number of submissions received through the CRTC’s online submission portal was supplemented through a dedicated Reddit thread intended to generate increased public engagement. Despite the differences between these contexts for involving particular platforms in the net neutrality regulatory debate, both examples demonstrate how social media platforms in particular trade on participation as a constitutive myth of their societal value.

Facebook

While the 2015 TRAI net neutrality consultation was called independently, its timing coincided with Facebook’s aggressive marketing campaign to launch Free Basics (then called Internet.org) in India. As a result, the debate about TRAI’s consultation coalesced around Facebook’s controversial provision of zero-rated websites, with most of the comments submitted urging the protection of net neutrality principles (Gurumurthy & Chami, 2016). In response, Facebook redoubled its efforts to portray Free Basics as a boon for the Indian public interest. Facebook CEO Mark Zuckerberg penned a blog post claiming that pro-net neutrality arguments were preventing marginalised populations from accessing the opportunities that come with connectivity; he also embarked on an extensive tour of the country, speaking in classrooms and villages; and the company’s large-scale advertising campaign depicted Free Basics as the cornerstone of a “Connected India” that would empower disadvantaged rural citizens (Shahin, 2019). These more traditional promotional efforts were matched by a Facebook campaign that went live in December 2015. At this time, Facebook sent out notifications to all of its users in India, encouraging them to send comments to TRAI to “save Free Basics” via a simple one-click interface. While the regulator critiqued the validity of the 1.35 million messages it received through Facebook, as Anita Gurumurthy and Nandini Chami (2016) argue, the power of the platform to generate its own support reveals “the sweet spot that platform control constitutes in the struggle for hegemony in the network society. As dominant actors vie for control, they mediate user experience by redefining the materialities of the multi-layered internet environment” (n.p.). What the authors identify here is a recursivity to Facebook’s actions: through its power to persuade users to demand regulation in its interest, Facebook essentially controls the means by which the internet can be shaped. By casting Free Basics as the internet itself, Facebook demonstrated how it could wield its power to shape the infrastructures of participation.

Reddit

Offering quite a different example of platform power, the CRTC turned to Reddit in order to reach people “who might not otherwise participate” in its differential pricing consultation, according to a CRTC spokesperson (quoted in Jackson, 2016). Despite the different context for the integration of Reddit by the CRTC versus the antagonistic position of Facebook in TRAI’s consultation, the way Reddit was used similarly underscores the limitations of conflating platforms with publics. Compared to the modest 123 submissions received through the CRTC’s own website, its Reddit thread garnered nearly 1,200 comments on the net neutrality implications of differential pricing practices over a four-day period in September 2016. These comments, overwhelmingly against differential pricing practices and in support of net neutrality, show one of the issues with conflating a particular website with the broader public interest perspective. Reddit is not an adequate stand-in for the public or even for the internet, as Adrienne Massanari (2017) has illustrated in her analysis of the way Reddit’s architecture and social norms support the formation of “toxic technocultures”. For example, Reddit accounts are pseudonymous and easy to create, and posts are subject to a system of upvoting and downvoting. These features suggest an egalitarianism or democracy inherent to the platform; however, in practice Reddit valorises individual contributions while also creating the conditions for a “herding mentality” in terms of what kinds of content become popular (Massanari, 2017, p. 337). The consequence of the herd mentality for the CRTC is that the Reddit comments provide only one common perspective. In this sense, the initial goal of seeking public input is somewhat skewed in that this segment of the public – mainly users with existing Reddit accounts, since any new users were subject to increased moderation – presents quite a unified argument against differential pricing in line with the idea of the internet as inherently “neutral”.

Appeals to the supposed pre-existing neutrality of the internet are the central discursive strategy of sites like Reddit and Facebook, which present themselves as neutral platforms in order to elide the politics they produce through their interfaces, algorithms, content moderation practices, and exploitative business models (e.g., Gillespie, 2010; van Dijck, 2013; Zuboff, 2015).1 There is an important intersection here between the way platforms deploy the idea of neutrality and the way net neutrality is similarly constructed as a fundamental attribute of the internet. In this sense, the internet as neutral is cast as an apolitical space for the organic unfolding of politics, “a framework in which equal access and equal ability to express oneself are neutrally, and thus perhaps legally, protected” (Streeter, 2013, p. 497). Resting on the ideals of common carriage (Lentz, 2013; Sandvig, 2007), net neutrality thus comes with the baggage of its own conceits about participation as playing out in a utopian nonplace, as Carole Pateman (1970) has characterised the notion of full participation. Commercial social media platforms mobilise this idealised version of full participation in sometimes contradictory ways – for example, in Facebook’s support of the Open Internet Order in the US but simultaneous push for Free Basics to become ubiquitous in India – but always according to the ethos of maintaining their own appearance as mere facilitators of public debate (Busch & Shepherd, 2014). It is through subtle means, for example, by directing site users through paths of least resistance that reinforce normative claims (Stanfill, 2015, p. 1061), that platforms exercise the power to shape participation. As such, when regulators encounter platforms within their own net neutrality consultations, they also encounter versions of participation that are already conditioned by the platform economy.

The imagined participant

Regulators’ and platforms’ versions of participation produce corresponding ideal subjects: the participating citizen and the imagined user, respectively. The mechanism of interpellation, where citizens or users are summoned into particular normative subjectivities, takes place throughout the participatory condition, in both institutional and technological contexts (Barney et al., 2016, p. ix; Stanfill, 2015, p. 1064). This is evident in the example of John Oliver’s audience submitting comments to the FCC, as it is in the CRTC’s choice of Reddit, a platform that reinforces a stereotypical internet user as young, white, and male (Massanari, 2017). The libertarian techno-utopianism that underlies internet culture further suggests that its political tenor is one driven by the rational choices of users as individual actors. Here, as Barney et al. (2016) suggest, the participatory condition aligns with the neoliberal politics that also pervade regulatory institutions (p. x). In a context where private interests tend to exert more pressure on policy-making than public interests, attempts at visibly including the public are crucial for the legitimacy of the regulatory project (Fischer, 2003, p. 205; Salter, 2007, p. 311). Nonetheless, that public tends to be reduced in neoliberal terms to a compilation of individual consumers, making rational choices within the communications marketplace.

This discursive interpellation of a participating subject is clear in the BEREC net neutrality consultation’s draft guidelines, published in June 2016 to solicit public comment. The guidelines follow from the EU’s overarching Electronic Communications Regulatory Framework, designed to facilitate users’ ability to “access and distribute information or run applications and services of their choice”, a statement which evidences “a distinctly narrow character and reflects strongly a techno-economic treatment of Net Neutrality” (Simpson, 2016, p. 337). Accordingly, the guidelines refer to “end-users” and “consumers” rather than “citizens” (a word that never appears in the document): “BEREC understands ‘end-user’ to encompass individuals and businesses, including consumers as well as CAPs [Content and Application Providers]” (BEREC, 2016a, p. 3). According to the logic of technology as a good in and of itself, the notion of participation thus becomes defined in a limited way as simply any use of the technology, “rather than how such use is enabling people to participate culturally, socially, politically and economically” (Karim, 1999, p. 57). This language also evidences the increased conflation of citizens with consumers in neoliberal regulatory discourse over the past twenty-odd years of media policy that has coincided with the rise of converged digital technology (Livingstone et al., 2007, p. 616). Positioning consumers as the primary subjects of net neutrality regulation suggests a version of participation founded on choice, and in turn, casts participation in the regulatory process in the language of consumer rights.

By seeking public input from end-user consumers, that public thus becomes collapsed into rational choice actors in a way that defines the domain of net neutrality as, fundamentally, a marketplace. Consider for example how BEREC summarised the responses it received through email alongside faxes sent to members of the European Parliament. Most of these responses were filtered through Save the Internet’s comment template, oriented around language about how service providers “Shall not limit the exercise of end-users’ rights,” for example through traffic management or differential pricing, in order to “ensur[e] consumers are protected against potentially harmful practices” (BEREC, 2016b, p. 12). The focus on harm to consumers suggests that the central concern is whether internet users are able to make choices unfettered by the structuring practices of service providers – an impossible scenario given the essential imbalance of power between these two stakeholders. The meaning of networked publics in this context is further subject to what Arjun Appadurai (1990) has identified as the “fetishism of the consumer”, whereby the consumer functions as a sign or “a mask for the real seat of agency, which is not the consumer but the producer and the many forces that constitute production” (p. 16). The consequence of imagining the participating subject as this sort of consumer is that, despite the appearance of public consultation, the regulatory process itself hasn’t fundamentally changed and largely remains captured by the private interests that stand to benefit from the way net neutrality is legislated, and, more importantly, enforced (Barratt & Shade, 2007).

Conclusion: reconfiguring participation

In considering the participatory condition as the context for contemporary internet policy-making, the choice to focus on net neutrality regulation in particular is not accidental. There is an illustrative parallel between the way participation can be discursively deployed to maintain existing power differentials and the way net neutrality has been framed in terms of a neutral context for unrestricted consumer choice. As Sandvig (2007) argues, the idea of “neutrality” misses the point that what is at stake is not whether service providers discriminate but how discrimination is exercised across networks, which requires “a normative vision of what public duties the internet is meant to serve” (p. 137). Correspondingly, a blunt emphasis on public participation in policy-making often fails to ask vital questions about “why the public is participating […] or who indeed properly constitutes the public” (Salter, 2007, p. 294, emphasis in original). As demonstrated by the examples of the US, Canada, India, and the EU, net neutrality public consultations tend to reinforce a particular version of participation according to the way the participatory condition suggests that the public interest can be expediently located in technologically constituted networked publics.

In order to get past the way that public participation can be reduced to “lip service” on the part of regulators (Fischer, 2003, p. 208), as well as the structuring inequalities of networked publics, policy processes need to be designed in a way that deeply considers what sorts of publics are being consulted (e.g., Nanz & Steffek, 2004). As Gurumurthy and Chami (2016) note about the Indian case, certain “access imaginaries [are] delegitimised in public discourse”, and as such, regulators must actively seek to instil participatory parity in the consultation process (n.p.). Participatory parity is a concept developed by Nancy Fraser (1990) that suggests the need to first address systemic inequalities in public space before assuming that democratic participation can function. In the example of net neutrality consultations, this would entail a reconsideration of how internet platforms are used to represent a public interest perspective, but would also necessitate structural changes to internet policy-making’s technocratic character. Such changes might include concerted steps to rebalance the naturalisation of neoliberal values in policy-making (e.g., Freedman, 2006), but also those that reconsider the inherent structural barriers around expertise in this space (e.g., Fischer, 1990). In both cases, what is required is a thorough critique of the participatory condition as it is supported by the promises of digital technology; technology that, like regulation, is largely configured according to neoliberal values that reinforce the political economy of exploitation.

References

Appadurai, A. (1990). Disjuncture and difference in the global cultural economy. Public Culture, 2(2), 1-24. doi:10.1177/026327690007002017

Barney, D., Coleman, G., Ross, C., Sterne, J., & Tembeck, T. (2016). The participatory condition: An introduction. In D. Barney, G. Coleman, C. Ross, J. Sterne & T. Tembeck (eds.), The participatory condition in the digital age (pp. vii-xxxix). Minneapolis: University of Minnesota Press.

Barratt, N., & Shade, L. R. (2007). Net neutrality: Telecom policy and the public interest. Canadian Journal of Communication, 32(2), 295-305. doi:10.22230/cjc.2007v32n2a1921

Bennett, W. L., & Segerberg, A. (2013). The logic of connective action: Digital media and the personalization of contentious politics. Cambridge: Cambridge University Press.

BEREC (2016a). Draft BEREC Guidelines on the Implementation by National Regulators of European Net Neutrality Rules. Retrieved from https://berec.europa.eu/eng/document_register/subject_matter/berec/public_consultations/6075-draft-berec-guidelines-on-implementation-by-national-regulators-of-european-net-neutrality-rules

BEREC (2016b). BEREC Report on the outcome of the public consultation on draft BEREC Guidelines on the Implementation by National Regulators of European Net Neutrality Rules. Retrieved from https://berec.europa.eu/eng/document_register/subject_matter/berec/reports/6161-berec-report-on-the-outcome-of-the-public-consultation-on-draft-berec-guidelines-on-the-implementation-by-national-regulators-of-european-net-neutrality-rules

boyd, d. (2010). Social network sites as networked publics: Affordances, dynamics, and implications. In Z. Papacharissi (ed.), A networked self: Identity, community, and culture on social network sites (pp. 47-66). New York: Routledge.

Busch, T., & Shepherd, T. (2014). Doing well by doing good? Normative tensions underlying Twitter’s corporate social responsibility ethos. Convergence, 20(3), 293-315. doi:10.1177/1354856514531533

CRTC (2017). It’s your CRTC: Here’s how to have your say! Canadian Radio-television and Telecommunications Commission. Retrieved from https://crtc.gc.ca/eng/info_sht/g10.htm

Faris, R., Roberts, H., Etling, B., Othman, D., & Benkler, Y. (2016). The role of the networked public sphere in the U.S. net neutrality policy debate. International Journal of Communication, 10, 5839-5864. Retrieved from https://ijoc.org/index.php/ijoc/article/view/4631

Fischer, F. (1990). Technocracy and the politics of expertise. London: Sage.

Fischer, F. (2003). Reframing public policy: Discursive politics and deliberative practices. Oxford: Oxford University Press.

Fraser, N. (1990). Rethinking the public sphere: A contribution to the critique of actually existing democracy. Social Text, (25/26), 56-80. doi:10.2307/466240

Freedman, D. (2006). Dynamics of power in contemporary media policy-making. Media, Culture & Society, 28(6), 907-923. doi:10.1177/0163443706068923

Freelon, D., Becker, A. B., Lannon, B., & Pendleton, A. (2016). Narrowing the gap: Gender and mobilization in net neutrality advocacy. International Journal of Communication, 10, 5908-5930. Retrieved from https://ijoc.org/index.php/ijoc/article/view/4598

Gillespie, T. (2010). The politics of ‘platforms’. New Media & Society, 12(3), 347-364. doi:10.1177/1461444809342738

Gurumurthy, A., & Chami, N. (2016). Internet governance as ‘ideology in practice’: India’s ‘Free Basics’ controversy. Internet Policy Review, 5(3). doi:10.14763/2016.3.431

Hindman, M. (2008). The myth of digital democracy. Princeton, NJ: Princeton University Press.

Hitlin, P., Olmstead, K., & Toor, S. (2017). Public comments to the Federal Communications Commission about Net Neutrality contain many inaccuracies and duplicates. Pew Research Center. Retrieved from http://www.pewinternet.org/2017/11/29/public-comments-to-the-federal-communications-commission-about-net-neutrality-contain-many-inaccuracies-and-duplicates/

Jackson, E. (2016, September 26). CRTC turns to online forum Reddit to solicit comments on differential pricing rules. Financial Post. Retrieved from https://business.financialpost.com/technology/crtc-turns-to-online-forum-reddit-to-solicit-comments-on-differential-pricing-rules

Karim, K. H. (1999). Participatory citizenship and the Internet: Reframing access within the capabilities approach. Journal of International Communication, 6(1), 57-68. doi:10.1080/13216597.1999.9751882

Kelty, C. (2005). Geeks, social imaginaries, and recursive publics. Cultural Anthropology, 20(2), 185-214. doi:10.1525/can.2005.20.2.185

Kimball, D. (2016). Wonkish populism in media advocacy and net neutrality policy making. International Journal of Communication, 10, 5949-5968. Retrieved from https://ijoc.org/index.php/ijoc/article/view/4678

Langlois, G. (2013). Participatory culture and the new governance of communication: The paradox of participatory media. Television & New Media, 14(2), 91-105. doi:10.1177/1527476411433519

Lentz, B. (2013). Excavating historicity in the US network neutrality debate: An interpretive perspective on policy change. Communication, Culture & Critique, 6(4), 568-597. doi:10.1111/cccr.12033

Lentz, B. (2014). The media policy Tower of Babble: A case for “policy literacy pedagogy”. Critical Studies in Media Communication, 31(2), 134-140. doi:10.1080/15295036.2014.921318

Livingstone, S., Lunt, P., & Miller, L. (2007). Citizens and consumers: Discursive debates during and after the Communications Act 2003. Media, Culture & Society, 29(4), 613-638. doi:10.1177/0163443707078423

Massanari, A. (2017). #Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329-346. doi:10.1177/1461444815608807

Nanz, P., & Steffek, J. (2004). Global governance, participation and the public sphere. Government and Opposition, 39(2), 314-335. doi:10.1111/j.1477-7053.2004.00125.x

Nonnecke, B. M., & Epstein, D. (2016). Crowdsourcing internet governance: The case of ICANN's Strategy Panel on Multistakeholder Innovation. Presented at the GigaNet: Global Internet Governance Academic Network, Annual Symposium 2016. doi:10.2139/ssrn.2909353

Novak, A. N., & Sebastian, M. (2019). Network neutrality and digital dialogic communication: How public, private and government forces shape internet policy. New York: Routledge. doi:10.4324/9780429454981

Obar, J. (2016). Closing the technocratic divide? Activist intermediaries, digital form letters, and public involvement in FCC policy making. International Journal of Communication, 10, 5865-5888. Retrieved from https://ijoc.org/index.php/ijoc/article/view/4821

Pangrazio, L. (2016). Reconceptualising critical digital literacy. Discourse: Studies in the Cultural Politics of Education, 37(2), 163-174. doi:10.1080/01596306.2014.942836

Pateman, C. (1970). Participation and democratic theory. Cambridge: Cambridge University Press.

Prasad, R. (2017). Ascendant India, digital India: How net neutrality advocates defeated Facebook’s Free Basics. Media, Culture & Society, 40(3), 415-431. doi:10.1177/0163443717736117

Salter, L. (2007). The public of public inquiries. In L. Dobuzinskis, M. Howlett, & D. Laycock (eds.), Policy Analysis in Canada: The State of the Art (pp. 291-314). Toronto: University of Toronto Press. doi:10.3138/9781442685529-014

Sandvig, C. (2007). Network neutrality is the new common carriage. Info, 9(2/3), 136-147. doi:10.1108/14636690710734751

Shahin, S. (2019). Facing up to Facebook: How digital activism, independent regulation, and mass media foiled a neoliberal threat to net neutrality. Information, Communication & Society, 22(1), 1-17. doi:10.1080/1369118x.2017.1340494

Simpson, S. (2016). Intervention, net neutrality and European Union media policy. International Journal of Digital Television, 7(3), 331-346. doi:10.1386/jdtv.7.3.331_1

Smith, A., & Anderson, M. (2018). Social media use in 2018. Pew Research Center. Retrieved from: http://www.pewinternet.org/2018/03/01/social-media-use-in-2018/

Stanfill, M. (2015). The interface as discourse: The production of norms through web design. New Media & Society, 17(7), 1059-1074. doi:10.1177/1461444814520873

Streeter, T. (2013). Policy, politics, and discourse. Communication, Culture & Critique, 6(4), 488-501. doi:10.1111/cccr.12028

van Dijck, J. (2013). The culture of connectivity: A critical history of social media. Oxford: Oxford University Press.

Wu, T. (2003). Network neutrality, broadband discrimination. Journal of Telecommunications and High Technology Law, 2, 141-179. Available at https://scholarship.law.columbia.edu/faculty_scholarship/1281

Zuboff, S. (2015). Big other: Surveillance capitalism and the prospects of an information civilization. Journal of Information Technology, 30(1), 75-89. doi:10.1057/jit.2015.5

Footnotes

1. Such appeals to neutrality have of course appeared increasingly untenable as continued revelations emerge about the role of social media platforms in large-scale misinformation campaigns, as seen in the polarisation of US voters via Facebook during the 2016 presidential election.

Making sense of data ethics. The powers behind the data ethics debate in European policymaking


Introduction

January 2018: The tweet hovered over my head: “Where are the ethicists?” I was on a panel in Brussels about data ethics, and this was not the first time such a panel or initiative had been questioned. The proper foundation was lacking; the right expertise was not included - the ethicists were missing, the humanists were missing, the legal experts were missing. The results, outcomes and requirements of these initiatives were unclear. Would they water down the law? I understood the critiques, though. How could we talk about data ethics when a law had just been passed following a lengthy negotiation process on this very topic? What was the function of these discussions? If we were not there to acknowledge a consensus, that is, the legal solution, what then was the point?

In the slipstream of sweeping data protection law reform in Europe, discussions regarding data ethics have gained traction in European public policy-making. Numerous data ethics public policy initiatives have been created, moving beyond issues of mere compliance with data protection law to focus increasingly on the ethics of big data, especially concerning private companies’ and public institutions’ handling of personal data in digital forms. Reception in public discourse has been mixed. Although gaining significant public attention and interest, these data ethics policy initiatives have also been depicted as governmental “toothless wonders” (e.g., Hill, 24 November 2017) and a waste of resources, and have been criticised for drawing attention away from public institutions’ mishandling of citizens’ data (e.g., Ingeniøren’s managing panel, op ed, 16 March 2018) and for potential “ethics washing” (Wagner, 2018), with critics questioning the expertise and interests involved in the initiatives as well as their normative ethics frameworks.

This article constitutes an analytical investigation of the various dimensions and actors that shape definitions of data ethics in European policy-making. Specifically, I explore the role and function of European data ethics policy initiatives and present an argument regarding how and why they took shape in the context of a European data protection regulatory reform. The explicit use of the term “ethics” calls for a philosophical framework; the term “data” for a contemporary perspective on the critical role of information in a digitalised society; and the policy context for consensus-making and problem solving. Together, these views on the role of the data ethics policy initiatives are highly pertinent. However, taken separately they each provide only a one-sided insight into that role and function. For example, a moral philosophical view of data ethics initiatives (in public policy-making as well as in the private industry) might not be attentive to the embedded interests and power relations; a pursuit of actionable policy results may overlook their function as spaces of negotiation and positioning; while viewing data ethics initiatives as something radically new in the age of big data can lose sight of their place in and relation to history and governance in general.

In my analysis, I therefore adopt an interdisciplinary approach that draws on methods and theories from different subfields within applied ethics, political science, sociology, culture and infrastructure/STS studies. A central thesis of this article is that we should perceive data ethics policy initiatives as open-ended spaces of negotiation embedded in complex socio-technical dynamics, which respond to multifaceted governance challenges extended over time. Thus, we should not view data ethics policy initiatives as solutions in their own right. They do not replace legal frameworks such as the European General Data Protection Regulation (GDPR). Rather, they complement existing law and may inspire, guide and even set in motion political, economic and educational processes that could foster an ethical “design” of the big data age, covering everything from the introduction of new laws, the implementation of policies and practices in organisations and companies, and the development of new engineering standards, to awareness campaigns among citizens and educational initiatives.

In the following, I first outline a cross-disciplinary conceptualisation of data ethics, presenting what I define as an analytical framework for a data ethics of power. I then describe the data ethics public policy focus in the context of the GDPR. I recognise that ethics discussions are implicit in legislative processes. Nevertheless, in this article I do not specifically focus on the regulation’s negotiation process as such, but rather on policymakers’ explicit use of the term “data ethics”, and especially on the emergence of formal data ethics policy initiatives (for instance, committees, working groups, stated objectives and results), many of which followed the adoption of the GDPR. I subsequently move on to an analysis of data ethics as described in public policy reports, statements, interviews and events in the period 2015–2018. In conclusion, I take a step back and review the definition of data ethics. Today, data ethics is an idea, concept and method that is used in policy-making, but which has no shared definition. While more aligned conceptualisations of data ethics might provide a guiding step towards a collective vision for actions in law, business and society in general, an argument that runs through this article is that no definition of data ethics in this space is neutral of values and politics. Therefore, we must position ourselves within a context-specific type of ethical action.

This article is informed by a study that I am conducting on data ethics in governance and technology development in the period 2017-2020. In that study and this article, I use an ethnographically informed approach based on active and embedded participation in various data protection/internet governance policy events, working groups and initiatives. Qualitative embedded research entails an immersion of the researcher in the field of study as an active and engaged member to achieve thorough knowledge and understanding (Bourdieu, 1997; Bourdieu & Wacquant, 1992; Goffman, 1974; Ingold, 2000; Wong, 2009). Thus, essential to my understanding of the underlying dimensions of the topic of this article is my active participation in the internet governance policy community. I was part of the Danish government’s data ethics expert committee (2018) and am part of the European Commission’s Artificial Intelligence High Level Expert Group (2018-2020). I am also the founder of the non-profit organisation DataEthics.eu, which is active in the field.

In this article, I specifically draw on ideas, concepts and opinions generated in interaction with nine active players (decision-makers, policy advisors and civil servants) who contributed to my understanding of the policy-making dynamics by sharing their experiences with data ethics in European policy-making 1 (see further in references). The interviewees were informed about the study and that they would not be represented by name and institution in any publications, as I wanted them to be minimally influenced by institutional interests and requirements in their accounts.2

Section 1: What is data ethics? A data ethics of power

In this section I introduce the emerging field of data ethics as the cross-disciplinary study of the distribution of societal powers in the socio-technical systems that form the fabric of the “Big Data Society”. Based on theories, practices and methods within applied ethics, legal studies and cultural studies, social and political sciences, as well as a movement within policy and business, I present an analytical framework for a “data ethics of power”.

As a point of departure, I define a data ethics of power as an action-oriented analytical framework concerned with making visible the power relations embedded in the “Big Data Society” and the conditions of their negotiation and distribution, in order to point to design, business, policy, social and cultural processes that support a human-centric distribution of power. In a previous book (Hasselbalch & Tranberg, 2016) we described data ethics as a social movement of change and action: “Across the globe, we’re seeing a data ethics paradigm shift take the shape of a social movement, a cultural shift and a technological and legal development that increasingly places the human at the centre” (p. 10). Thus, data ethics can be viewed as a proactive agenda concerned with shifting societal power relations, with the aim of balancing the powers embedded in the Big Data Society. This shift is evidenced in legal developments (such as the GDPR negotiation process) and in new citizen privacy concerns and practices, such as the rise in the use of ad blockers and privacy enhancing services. In particular, new types of businesses emerge that go beyond mere compliance with data protection legislation, incorporating data ethical values in their collection and processing of data, as well as in their general innovation practices, technology development, branding and business policies.

Here, I use the notion of “Big Data Society” to reflectively position data ethics in the context of a recent data (r)evolution of the “Information Society”, enabled by computer technologies and dictated by a transformation of all things (and people) into data formats (“datafication”) in order to “quantify the world” (Mayer-Schonberger & Cukier, 2013, p. 79), to organise society and predict risks. I suggest that this is not an arbitrary evolution, but that it can also be viewed as an expression of negotiations between different ontological views on the status of the human being and the role of science and technology. As the realisation of a prevailing ideology of modernist scientific practices to command nature and living things, the critical infrastructures of the Big Data Society may therefore very well be described as modernity embodied in a “lived reality” (Edwards, 2002, p. 191) of control and order. From this viewpoint, a data ethics of power can be described as a type of post-modernist, or in essence vitalist, call for a specific kind of “ethical action” (Frohmann, 2007, p. 63) to free the living/human being from the constraints of the practices of control embedded in the technological infrastructures of modernity, which at the same time reduce the value of the human being. It is valuable here to understand current calls for data ethical action as an extension of the philosopher Henri Bergson’s vitalist arguments at the turn of the last century against the scientific rational intellect that provides no room for, or special status to, the living (1988, 1998). In a similar ethical framework, Gilles Deleuze, who was also greatly inspired by Bergson (Deleuze, 1988), later described over-coded “Societies of Control” (Deleuze, 1992), which reduce people (“dividuals”) to a code marking their access and locking their bodies in specific positions (p. 5). More recently, Spiekermann et al. (2017), in their anti-transhumanist manifesto, directly oppose a vision of the human as merely an information object, no different from other information objects (that is, non-human informational things), which they describe as “an expression of the desire to control through calculation. Their approach is limited to reducing the world to data-based patterns suited for mechanical manipulation” (p. 2).

However, a data ethics of power should also be viewed as a direct response to the power dynamics embedded in and distributed via our very present and immediate experiences of a “Liquid Surveillance Society” (Lyon, 2010). Surveillance studies scholar David Lyon (2014) envisions an “ethics of Big Data practices” (p. 10) to renegotiate what is increasingly exposed to be an unequal distribution of power in the technological big data infrastructures. Within this framework, we not only pay conventional attention to the state as the primary power actor (of surveillance), but also include new stakeholders that gain power through the accumulation of and access to big data. For example, in the analytical framework of a data ethics of power, changing power dynamics are progressively more addressed in the light of the information asymmetry between individuals and the big data companies that collect and process data in digital networks (Pasquale, 2015; Powles, 2015–2018; Zuboff, 5 March 2016, 9 September 2014, 2019).

Beyond this fundamental theoretical framing, a data ethics of power can be explored in an interdisciplinary field addressing the distribution of power in the Big Data Society in diverse ways.

For instance, in a computer ethics perspective, power distributions are approached as ethical dilemmas or as implications of the very design and practical application of computer technologies. Indeed, technologies are never neutral; they embody moral values and norms (Flanagan, Howe, & Nissenbaum, 2008), hence power relations can be identified through analysing how technologies are designed in ethical or ethically problematic ways. Information science scholars Batya Friedman and Helen Nissenbaum (1996) have illustrated different types of bias embedded in existing computer systems that are used for tasks such as flight reservations and the assignment of medical graduates to their first job, and have presented a framework for such issues in the design of computer systems. From this perspective, we can also describe data ethics as what the philosophy and technology scholar Philip Brey terms a “Disclosive Computer Ethics”, identifying moral issues such as “privacy, democracy, distributive justice, and autonomy” (Brey, 2000, p. 12) in opaque information technologies. Phrased differently, a data ethics of power presupposes that technology has “politics” or embedded “arrangements of power and authority” (Winner, 1980, p. 123). Case studies of specific data processing software and their use can be defined as data ethics case studies of power, notably the “Machine Bias” study (Angwin et al., 2016), which exposed discrimination embedded in data processing software used in the United States criminal justice system, and Cathy O’Neil’s (2016) analysis of the social implications of the math behind big data decision-making in everything from obtaining insurance or credit to getting and holding a job.

Nevertheless, data systems are increasingly ingrained in society in multiple forms (from apps to robotics) and have limitless and wide-ranging ethical implications (from price differentiation to social scoring), necessitating that we look beyond design and computer technology as such. Data ethics as a recent designation represents what philosophers Luciano Floridi and Mariarosaria Taddeo (2016, p. 3) describe as a primarily semantic shift within a computer and information ethics philosophical tradition, from a concern with the ethical implications of the “hardware” to one with data and data science practices. However, looking beyond applied ethics in the field of philosophy to a data ethics of power, our theorisation of the Big Data Society is more than just semantic. The conceptualisation of a data ethics of power can also be explored in a legal framework, as an aspect of the rule of law and the protection of citizens’ rights in an evolving Big Data Society. Here, redefining the concept of privacy (Cohen, 2013; Solove, 2008) in a legal studies framework addresses the ethical implications of new data practices and configurations that challenge existing laws, and thereby the balancing of powers in a democratic society. As legal scholars Neil M. Richards and Jonathan King (2014) argue: “Existing privacy protections focused on managing personally identifying information are not enough when secondary uses of big data sets can reverse engineer past, present, and even future breaches of privacy, confidentiality, and identity” (p. 393). Importantly, these authors define big data “socially, rather than technically, in terms of the broader societal impact they will have” (Richards & King, 2014, p. 394), providing a more inclusive analysis of a “big data ethics” (p. 393) and thus pointing to the ethical implications of the empowerment of institutions that possess big data capabilities at the expense of “individual identity” (p. 395).

Looking to the policy, business and technology field, the ethical implications of the power of data and data technologies are framed as an issue of growing data asymmetry between big data institutions and citizens in the very design of data technologies. For example, the conceptual framework of the “Personal Data Store Movement” (Hasselbalch & Tranberg, 27 September 2016) is described by the non-profit association MyData Global Movement as one in which “[i]ndividuals are empowered actors, not passive targets, in the management of their personal lives both online and offline – they have the right and practical means to manage their data and privacy” (Poikola, Kuikkaniemi, & Honko, 2018). In this evolving business and technology field, the emphasis is on moving beyond mere legal data protection compliance, implementing values and ethical principles such as transparency, accountability and privacy by design (Hasselbalch & Tranberg, 2016), and ethical implications are mitigated by values-based approaches to the design of technology. One example is engineering standards, such as the IEEE P7000 series of ethics and AI standards 3, which seek to develop ethics-by-design standards and guiding principles for the development of artificial intelligence (AI). A values-based design approach is also revisited in recent policy documents, such as section 5.2, “Embedded values in technology – ethical-by-design”, of the European Parliament’s “Resolution on Artificial Intelligence and Robotics” adopted in February 2019.

A key framework for data ethics is the human-centric approach that we increasingly see included within ethics guidelines and policy documents. For example, the European Parliament’s (2019, V.) resolution states that “whereas AI and robotics should be developed and deployed in a human-centred approach with the aim of supporting humans at work and at home…”. The EC High Level Expert Group on Artificial Intelligence’s draft ethics guidelines also stress how the human-centric approach to AI is one that “strives to ensure that human values are always the primary consideration” (working document, 18 December 2018, p. iv), and directly associate it with the balance of power in democratic societies: “political power is human centric and bounded. AI systems must not interfere with democratic processes” (p. 7). The human-centric approach in European policy-making is framed in a European fundamental rights framework (as, for example, extensively described in the European Commission’s AI High Level Expert Group’s draft ethics guidelines) and/or with an emphasis on the human being’s interests prevailing over “the sole interests of society or science” (article 2, “Oviedo Convention”). Practical examples of the human-centric approach can also be found in technology and business developments that aim to preserve the specific qualities of humans in the development of information processing technologies. Examples include the human-in-the-loop (HITL) approach to the design of AI, the International Organization for Standardization (ISO) standards on human-centred design (HCD), and the Personal Data Store Movement, defined as “A Nordic Model for human-centered personal data management and processing” (Poikola et al., 2018).

Section 2: European data ethics policy initiatives in context

Policy debates that specifically address ethics in the context of technological developments have been ongoing in Europe since the 1990s. The debate has increasingly sought to harmonise national laws and approaches in order to preserve a European value framework in the context of rapid technological progress. For instance, the Council of Europe’s “Oviedo Convention” was motivated by what de Wachter (1997, p. 14) describes as “[t]he feeling that the traditional values of Europe were threatened by rapid and revolutionary developments in biology and medicine”. Data ethics per se gained momentum in pan-European politics in the final years of the negotiation of the GDPR, through the establishment of a number of initiatives directly referring to data and/or digital ethics. Thus, the European Data Protection Supervisor (EDPS) Digital Ethics Advisory Group (2018, p. 5) describes its work as being carried out against “a growing interest in ethical issues, both in the public and in the private spheres and the imminent entry into force of the General Data Protection Regulation (GDPR) in May 2018”.

Examining the differences in scope and the stakeholders involved in, respectively, the development of the 1995 Data Protection Directive and the negotiation of the GDPR, which began with the European Commission’s proposal in 2012, provides some insight into the evolution of the focus of data ethics. The 1995 Directive was developed by a European working party of privacy experts and national data protection commissioners in a process that excluded business stakeholders (Heisenberg, 2005). By contrast, the group of actors influencing and participating in the GDPR process progressively expanded, with new stakeholders comprising consumer and civil liberty organisations as well as American industry representatives and policymakers. The GDPR was generally described as one of the most lobbied EU regulations (Warman, 8 February 2012). At the same time, the public increasingly scrutinised the ethical implications of a big data era, with numerous news stories published on data leaks and hacks, algorithmic discrimination and data-based voter manipulation.

Several specific provisions of the GDPR were discussed inside and outside the walls of European institutions. For example, the “right to erasure” proposed in 2012 was heavily debated by industry and civil society organisations, especially in Europe and the USA, and was frequently described in the media as a value choice between privacy and freedom of expression. In 2013, the transfer of data to third countries (including those covered by the EU-US Safe Harbour agreement) engendered a wider public debate between certain EU parliamentarians and US politicians regarding mass surveillance and the role of large US technology companies. Another example was the discussion of a proposed age limit of 16 for minors’ consent to data processing. This called civil society advocates into action (Carr, “Should I laugh, cry or emigrate?”, 13 December 2015) and led to new alliances with US technology companies regarding young people’s right to “educational and social opportunities” (Richardson, “European General Data Protection Regulation draft: the debate”, 10 December 2015). A last-minute decision made it possible for member states to lower the age limit to 13.

These intertwined debates and negotiations illustrate how the data protection field was transformed within a global information technology infrastructure. It took shape as a negotiation of competing interests and values between economic entities, EU institutions, civil society organisations, businesses and third country national interests. We can also perceive these spaces of negotiation of rights, values and responsibilities, and the creation of new alliances, as having a causal link with the emergence of data ethics policy initiatives in European policy-making. In the years following the first communication of the reform, data protection debates were extended, with the concept of data ethics increasingly included in meeting agendas, debates in public policy settings, and reports and guidelines. Following the adoption of the GDPR, the list of European member states or institutions with established data or digital ethics initiatives and objectives rapidly grew. Examples included the UK government’s announcement of a £9 million Centre for Data Ethics and Innovation with the stated aim to “advise government and regulators on the implications of new data-driven technologies, including AI” (Digital Charter, 2018). The Danish government appointed a data ethics expert committee 4 in March 2018 with a direct economic incentive to create data ethics recommendations for Danish industry and to turn responsible data sharing into a competitive advantage for the country (Danish Business Authority, 12 March 2018). Several member states’ existing and newly established expert and advisory groups and committees began to include ethics objectives in their work. For example, the Italian government established an AI Task Force in April 2017, publishing its first white paper in 2018 (AI Task Force/Italy, 2018) with an explicit section on ethics. The European Commission’s communication on an AI strategy, published in April 2018, also included the establishment of an AI High Level Expert Group 5, whose responsibility it was, among other things, to publish ethics guidelines for AI in Europe the following year.

Section 3: Data ethics - policy vacuums

“I’m pretty convinced that the ethical dimension of data protection and privacy protection is going to become a lot more important in the years to come” (in ‘t Veld, 2017). These words of a European parliamentarian in a public debate in 2017 referred to the evolution of policy debates regarding data protection and privacy. You can discuss legal data protection provisions, she claimed, but then there is “a kind of narrow grey area where you have to make an ethical consideration and you say what is more important” (in ‘t Veld, 2017). What did she mean by her use of the term “ethics” in this context?

In an essay entitled “What is computer ethics?” (1985), the moral philosophy scholar James H. Moor described the branch of applied ethics that studies the ethical implications of computer technologies. Writing only a few years after the Acorn, the first IBM personal computer, was introduced to the mass market, Moor was interested in computer technologies per se (what is special about computers), as well as in the policies required in specific situations where computers alter the state of affairs and create something new. But he also predicted a more general societal revolution (Moor, 1985, p. 268) due to the introduction of computers that would “leave us with policy and conceptual vacuums” (p. 272). Policy vacuums, he argued, would present core problems and challenges, revealing “conceptual muddles” (p. 266), uncertainties and the emergence of new values and alternative policies (p. 267).

If we view data ethics policy initiatives according to Moor’s framework, they can be described as moments of sense-making and negotiation created in response to the policy vacuums that arise when situations and settings are amended by computerised systems. In an interview conducted at the Internet Governance Forum (IGF) in 2017, a Dutch parliamentarian described how, in 2013, policy-makers in her country rushed to tackle the transformations instigated by digital technologies that were going “very wrong” (Interview, IGF 2017). In response, she proposed the establishment of a national commission to consider the ethical challenges of the digital society: “it’s very hard to get the debate out of the trenches, you know, so that people stop saying, ‘well this is my position and this is my position’, but to just sit back and look at what is happening at the moment, which is going to be so huge, so incredible, we have no idea what is going to happen with our society and we need people to think about what to do about all of this, not in the sense you know, ‘I don’t want it’, but more in the sense, ‘are there boundaries?’ ‘Do we have to set limits to all of these possibilities that will occur in the coming years?’” Similarly, in another interview conducted at the same event, a representative of a European country involved in the information policy of the Committee of Ministers of the Council of Europe discussed how the results of the evolution of the Information Society included “violations”, “abuses” and recognition of the internet’s influence on the economy. Concluding, she stated: “We need to slow down a little bit and to think about where we are going”.

In reviewing descriptions of data ethics initiatives, we can note an implicit acknowledgement of the limits of data protection law in addressing all of the ethical implications of a rapidly evolving information and data infrastructure. Data ethics thus becomes a means to make sense of emerging problems and challenges and to evaluate various policies and solutions. For example, a 2015 report from the EDPS states: “In today’s digital environment, adherence to the law is not enough; we have to consider the ethical dimension of data processing” (p. 4). It continues by describing how different EU law principles (such as data minimisation and the concepts of sensitive personal data and consent) are challenged by big data business models and methods.

The policy vacuums described in such reports and statements highlight the uncertainties and questions that exist regarding the governance of a socio-technical information infrastructure that increasingly shapes not only personal, but also social, cultural and economic activities.

In the same year as Moor’s essay was published, communications scholar Joshua Meyrowitz’s No Sense of Place (1985) portrayed the emergence of “information systems” that modify our physical settings via new types of access to information, thereby restructuring our social relations by transforming situations. As Meyrowitz (1985, p. 37) argued, “[w]e need to look at the larger, more inclusive notion of ‘patterns of information’”, illustrating how our information realities have real qualities that shape our social and physical realities. Accordingly, European policymakers emphasise the real qualities of information and data. They see digital data processes as meaningful components of social power dynamics. Information society policy-making thus becomes an issue of the distribution of resources and of social and economic power, as the EU Commissioner for Competition stated at a DataEthics.eu event on data as power in Copenhagen in 2016: “I’m very glad to have the chance to talk with you about how we can deal with the power that data can give” (Vestager, 9 September 2016). Thus, data ethics policy debates have moved beyond the negotiation of a legal data protection framework, increasingly involving a general focus on information society policy-making, in which different sectional policy areas are intertwined. As the Commissioner elaborated at the same event: “So competition is important. It keeps the pressure on companies to give people what they want. And that includes security and privacy. But we can’t expect competition enforcement to solve all our privacy problems. Our first line of defence will always be rules that are designed specifically to guarantee our privacy”.

Section 4: Data ethics - culture and values

According to Moor, the policy vacuums that emerge when existing policies clash with technological evolution, force us to “discover and make explicit what our value preferences are” (1985, p. 267). He proposes that the computer induced societal revolution will occur in two stages, marked by the questions that we ask. In the first “Introduction Stage”, we ask functional questions: How well does this and that technology function for its purpose? In the second “Permeation Stage”, when institutions and activities are transformed, Moor argues that we will begin to ask questions regarding the nature and value of things (p. 271). Such second-stage questions are echoed in the European policy debate of 2017, as one Member of the European Parliament (MEP) who was heavily involved in the GDPR negotiation process argued in a public debate: “[this is] not any more a technical issue, it’s a real life long important learning experience” (Albrecht, 2017), or as another MEP claimed in the same debate: “The GDPR is not only a legislative piece, it’s like a textbook, which is teaching us how to understand ourselves in this data world and how to understand what are the responsibilities of others and what are the rules which is governing in this world” (Lauristin, 2017).

Consequently, the technicalities of new data protection legislation are transformed into a general discussion about the norms and values of a big data age. Philip Brey describes values as “idealized qualities or conditions in the world that people find good”, ideals that we can work towards realising (2010, p. 46). However, values are not just personal ideals; they are also culturally situated. The cultural theorist Raymond Williams (1958, p. 6) famously defined culture as a “shape”, a set of purposes and common meanings expressed “in institutions, and in arts and learning”, which emerge in a social space of “active debate and amendment under the pressures of experience, contact and discovery”. Culture is thus traditional as well as creative, consisting of prescribed dominant meanings and their negotiation (Williams, 1958). Similarly, the anthropologist James Clifford (1997) replaced the metaphor of “roots” (an image of the original, authentic and fixed cultural entity) with “routes”: intervals of negotiation and translation between the fixed cultural points of accepted meaning. Values are advanced in groups with shared interests and culture, but they exist in spaces of constant negotiation. In an interview conducted at the IGF 2017, one policy advisor to an MEP, asked about the role of values in the GDPR’s negotiations, described privacy as a value shared by a group of individuals involved in the reform process: “I think a group of core players shared that value (…) all the way from people who wrote the proposal at the Commission, to the Commissioner in charge to the rapporteur from the EU Parliament, they all (…) to some extent shared this value, and I think that they managed to create a compromise closer to their value than to others”. He also explained how discussions about values were emerging in processes of negotiation between diverse and at times contradictory interests: “the moment you see a conflict of interest, that is when you start looking at the values (…) normally it would be a discussion about different values (…) an assessment of how much one value should go before another value (…) so some people might say that freedom of information might be a bigger value or the right to privacy might be a bigger value”.

Accordingly, ethics in practice, or what Brey refers to as “the act of valuing something, or finding it valuable (…) to find it good in some way” (2010, p. 46), is in essence never merely a subjective practice, but neither is it a purely objective construct. If we investigate the meaning of data ethics and ethical action in European data protection policy-making, we can see the points of negotiation. That is, if we look at what happens in the “intervals” between established value systems and their renegotiation in new contexts, we discover clashes of values and negotiation, as well as the contours of cultural positioning.

Section 5: Data ethics - power and positioning

Philosophy and media studies scholar Charles Ess (2014) has illustrated how culture plays a central role in shaping our ethical thinking about digital technologies. For instance, he argues that people in Western societies place ethical emphasis on “the individual as the primary agent of ethical reflection and action, especially as reinforced by Western notions of individual rights” (p. 196). Such cultural positioning in a global landscape can also be identified in the European data ethics policy debate. An example is the way in which one participant in the 2017 MEP debate discussed above described the GDPR with reference to the direct lived experiences of specific European historical events: “It is all about human dignity and privacy. It is all about the conception of personality which is really embedded in our culture, the European culture (...) it came from the general declaration of human rights. But there is a very very tragic history behind war, fascism, communism and totalitarian societies and that is a lesson we have learned in order to understand why privacy is important” (Lauristin, 2017).

Values such as human dignity and privacy are formally recognised in frameworks of European fundamental rights and data protection law, and, conscious of their institutionalised roots in the European legal framework, European decision-makers will reference them when asked about the values of “their” data ethics. Awareness of data ethics thus becomes a cultural endeavour, transferring European cultural values into technological development. As stated in an EDPS report from 2015: “The EU in particular now has a ‘critical window’ before mass adoption of these technologies to build the values into digital structures which will define our society” (p. 13).

When exploring European data ethics policy initiatives as spaces of value negotiations, a specific cultural arrangement emerges. In this context, policy and decision-makers position themselves against a perceived threat, pervasive, opaque and embedded in technology, to a specifically European set of values and ethics. In particular, a concern with a new opponent to state power emerges. In an interview conducted in 2018 at an institution in Europe, a project officer reflected on her previous work in a European country’s parliament and government, where concerns with the alternative form of power that the internet represents had surfaced. The internet is the place where discussions are held and decisions are made, she said, before recalling the policy debates concerning “GAFA” (the acronym for the four giant technology companies Google, Apple, Facebook and Amazon). Such a clash in values has been directly addressed by European policymakers in public speeches and debates, increasingly naming the technology industry stakeholders they deem responsible. The embedded values of technology innovation are a “wrecking ball”, aiming not simply to “play with the way society is organised but instead to demolish the existing order and build something new in its place”, argued the then President of the European Parliament in a speech in 2016 (Schulz, 2016). Values and ethics are hence directly connected with a type of cultural power that is built into technological systems. As the Director for Fundamental Rights and Union Citizenship at the European Commission’s DG Justice claimed in a 2017 public debate: “the challenge of ethics is not in the first place with the individual, the data subject; the challenge is with the controllers, which have power, they have power over people, they have power over data, and what are their ethics? What are the ethics they instil in their staff? In house compliance ethics? Ethics of engineers?” (Nemitz, 2017).

Section 6: Data ethics - spaces of negotiation

When dealing with the development of technical systems, we are inclined towards points of closure and stabilisation (Bijker et al., 1987) that will guide the governance, control and risk mitigation of the systems. Relatedly, we can understand data ethics policy initiatives as end results with the objectives “to formulate and support morally good solutions (e.g., right conducts or right values)” (Floridi & Taddeo, 2016, p. 1), emphasising algorithms (or technologies) that may not be “ethically neutral” (Mittelstadt et al., 2016, p. 4); that is to say, as solutions to the ethical problems raised within the very design of technologies, the data processing activities of the algorithms, or the collection and dissemination of data. However, I would like to address data ethics policy initiatives in their contexts of interest and value negotiation. For instance, where does morality begin and end in a socio-technical infrastructure that extends across jurisdictions and continents, cultural value systems and societal sectors?

The technical does indeed, in its very design, represent forms of order, as the political theorist Langdon Winner reminded us (1980, p. 123). That is, it is “political” and thus has ethical implications when creating by design “wonderful breakthroughs by some social interests and crushing setbacks by others” (Winner, 1980, p. 125). To provide an example, the Facebook APIs that facilitated the mass collection of user data, before these data were reused and processed by Cambridge Analytica, were specifically designed to track users and share data en masse with third parties. However, these design issues of the technical are also “inextricably” “bound up” into an “organic whole” with economic, social, political and cultural problems (Callon, 1987, p. 84). An analysis of data ethics as it is evolving in the European policy sphere demonstrates the complexity of governance challenges arising from the infrastructure of the information age being “shaped by multiple agents with competing interests and capacities, engaged in an indefinite set of distributed interactions over extended periods of time” (Harvey et al., 2017, p. 26). Governance in this era is, as highlighted by internet governance scholars Jeanette Hofmann et al., a “heterogeneous process of ordering without a clear beginning or endpoint” (2016, p. 1412). It consists of actors engaged in “fields of struggle” (Pohle et al., 2016) of meaning-making and competing interpretations of policy issues that are “continuously produced, negotiated and reshaped by the interaction between the field and its actors” (p. 4). I propose that we also explore, as essential components of our data ethics endeavours, the complex dynamics of the ways in which powers are distributed and how interests are met in spaces of negotiation.

Evidently, we must also recognise data ethics policy initiatives as components of the rhythm of a general infrastructural development, rather than as settled ethical solutions and isolated events. We can understand them as the kind of negotiation posts that repeatedly occur throughout the course of a technological system’s development (Bijker et al., 1987), and as segments of a process of standardisation and consensus-building within a complex general technological evolution of our societies that “contain messy, complex, problem-solving components” (Hughes, 1987, p. 51). The technological systems of modernity are like the architecture of mundane buildings. They reside, as Edwards (2002, p. 185) claims, in a “naturalised background”, as ordinary as “trees, daylight, and dirt”. Silently they represent, constitute and are constituted by both our material and imagined modern societies and the distribution of power within them. They remain unnoticed until they fail (Edwards, 2002). But when they do fail, we see them in all their complexity. An example is the US intelligence officers’ PowerPoint presentations detailing the “PRISM program”, leaked by Edward Snowden in 2013 (The Guardian, 2013), which provide a detailed map of an information and data infrastructure characterised by intricate interconnections between a state organisation of mass surveillance, laws, jurisdictions and budgets, and the technical design of the world wide web and social media platforms. The technological infrastructures are indeed like communal buildings, with doors that we never give a second thought until the day we find one of them locked.

Conclusion

October 2018: “These are just tools!” one person exclaimed. We were at a working group meeting where an issue with using Google Docs for the practical work of the group was raised and discussed at length. While some argued for an official position on the use of the online service, mainly with reference to what they described as Google’s insufficient compliance with European data protection law, others saw the discussion as a waste of time. Why spend valuable work time on this issue?

What is data ethics? Currently, the reply is shrill, formally framed in countless statements, documents and mission statements from a multitude of sources, including governments, intergovernmental organisations, consultancy firms, companies, non-governmental organisations, independent experts and academics. But it also emerges when least expected, in “non-allocated” moments of discussion. Information technologies that permeate every aspect of our lives today, from micro work settings to macro economics and politics, are increasingly discussed as “ethical problems” (Introna, 2005, p. 76) that must be solved. Their pervasiveness sparks moments of ethical thinking, negotiated in terms of moral principles, values and ideal conditions (Brey, 2010). In allocated or unallocated spaces of negotiation, moments of pause and sense-making (Moor, 1985), we discuss the values (Flanagan et al., 2008) and politics (Winner, 1980) of the business practices, cultures and legal jurisdictions that shape them. These spaces of negotiation encompass very concrete discussions regarding specific information technology tools, but increasingly they also evolve into reflections concerning general challenges to established legal frameworks, individuals’ agency and human rights, as well as questions regarding the general evolution of society. As one Danish minister said at the launch of a national data ethics expert group: “This is about what society we want” (Holst, 11 March 2018).

In this article, I have explored data ethics in the context of a European data protection legal reform. In particular, I have sought to answer the question: “What is data ethics?” with the assumption that the answer will shape how we perceive the role and function of data ethics policy initiatives. Based on a review of policy documents, reports and press material, alongside analysis of the ways in which policymakers and civil servants make sense of data ethics, I propose that we recognise these initiatives as open-ended spaces of negotiation and cultural positioning.

This approach to ethics might be criticised as futile in the context of policy and action. However, I propose that understanding data ethics policy initiatives as spaces of negotiation does not prevent action. Rather, it forces us to make apparent our point of departure: the social and cultural values and interests that shape our ethical action. We can thus create the potential for a more transparent negotiation of ethical action in the “Big Data Era”, enabling us to acknowledge the macro-level data ethics spaces of negotiation that are currently emerging not only in Europe but globally.

This article’s analytical investigation of European data ethics policy initiatives as spaces of cultural value negotiations has revealed a set of actionable thematic areas. It has illustrated a clash of values and an emerging concern with the alternative forms of power and control embedded in our technological environment, which exert pressure on people, and on individuals in particular. Here, a data ethics of power that takes its point of departure in Gilles Deleuze’s description of computerised Societies of Control (1992) enables us to think about the ethical action that is necessary today. Ethical action could, for example, concern the empowerment of individuals to challenge the laws and norms of opaque algorithmic computer networks, as we have noted in debates on the right to explanation and the accountability and interpretability of algorithms. Ethical action may also strive towards ideals of freedom in order to break away from coding, to become indiscernible to the “Weapons of Math Destruction” (O’Neil, 2016) that increasingly define, shape and limit us as individuals, as seen for instance in the digital self-defence movement (Heuer & Tranberg, 2013). Data ethics missions such as these are rooted in deeply personal experiences of living in coded networks, but they are also based on growing social and political movements and sentiments (Hasselbalch & Tranberg, 2016).

Much remains to be explored and developed regarding the power dynamics embedded in the evolving data ethics debate, not only in policy-making, but also in business, technology and public discourse in general. This article seeks to open up a more inclusive and holistic discussion of data ethics in order to advance investigation and understanding of the ways in which values are negotiated, rights and authority are distributed, and conflicts are resolved.

Acknowledgements

  • Clara
  • Francesco Lapenta for the many informed discussions regarding the sociology of data ethics.
  • Jens-Erik Mai for insightful comments on the drafts of this article.
  • The team at DataEthics.eu for inspiration.

References

Albrecht, J. P. (2017, January 26). MEP debate: The regulation is here! What now? [Video file]. Retrieved from https://www.youtube.com/watch?v=28EtlacwsdE

Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016, May 23). Machine bias - There’s software used across the country to predict future criminals. And it’s biased against blacks. ProPublica. Retrieved from https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

Bergson, H. (1988). Matter and Memory (N. M. Paul & W. S. Palmer, Trans.). New York: Zone Books.

Bergson, H. (1998). Creative Evolution (A. Mitchell, Trans.). Mineola, NY: Dover Publications.

Bijker, W. E., Hughes, T. P., & Pinch, T. (1987). General introduction. In W. E. Bijker, T. P. Hughes, & T. Pinch. (Eds.), The Social Construction of Technological Systems (pp. 1-7). Cambridge, MA: MIT Press.

Bourdieu, P. (1997). Outline of a Theory of Practice. Cambridge: Cambridge University Press.

Bourdieu, P., & Wacquant, L. (1992). An Invitation to Reflexive Sociology. Cambridge: Polity Press.

Brey, P. (2000). Disclosive computer ethics. Computers and Society, 30(4), 10-16. doi:10.1145/572260.572264

Brey, P. (2010). Values in technology and disclosive ethics. In L. Floridi (Ed.), The Cambridge Handbook of Information and Computer Ethics (pp. 41-58). Cambridge: Cambridge University Press.

Callon, M. (1987). Society in the making: the study of technology as a tool for sociological analysis. In W. E. Bijker, T. P. Hughes, & T. Pinch (Eds.), The Social Construction of Technological Systems (pp. 83-103). Cambridge, MA: MIT Press.

Carr, J. (2015, December 13). Should I laugh, cry or emigrate? [Blog post]. Retrieved from Desiderata https://johnc1912.wordpress.com/2015/12/13/should-i-laugh-cry-or-emigrate/

Clifford, J. (1997). Routes: Travel and Translation in the Late Twentieth Century. Cambridge, MA: Harvard University Press.

Cohen, J. E. (2013). What privacy is for. Harvard Law Review, 126(7). Retrieved from https://harvardlawreview.org/2013/05/what-privacy-is-for/

Danish Business Authority. (2018, March 12). The Danish government appoints new expert group on data ethics [Press release]. Retrieved from https://eng.em.dk/news/2018/marts/the-danish-government-appoints-new-expert-group-on-data-ethics

Deleuze, G. (1988). Bergsonism (H. Tomlinson, Trans.). New York: Urzone Inc.

Deleuze, G. (1992). Postscript on the societies of control. October, 59, 3-7. Retrieved from http://www.jstor.org/stable/778828

Edwards, P. (2002). Infrastructure and modernity: scales of force, time, and social organization in the history of sociotechnical systems. In T. J. Misa, P. Brey, & A. Feenberg (Eds.), Modernity and Technology (pp. 185-225). Cambridge, MA: MIT Press.

Ess, C. M. (2014). Digital Media Ethics. Cambridge, UK: Polity Press.

Flanagan, M., Howe, D. C., & Nissenbaum, H. (2008). Embodying values in technology – theory and practice. In J. van den Hoven, & J. Weckert (Eds.), Information Technology and Moral Philosophy (pp. 322-353). Cambridge, UK: Cambridge University Press.

Floridi, L., & Taddeo, M. (2016). What is data ethics? Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 374(2083). doi:10.1098/rsta.2016.0360

Friedman, B., & Nissenbaum, H. (1996). Bias in computer systems. ACM Transactions on Information Systems, 14(3), 330-347. doi:10.1145/230538.230561

Frohmann, B. (2007). Foucault, Deleuze, and the ethics of digital networks. In R. Capurro, J. Frühbauer, & T. Hausmanninger (Eds.), Localizing the Internet. Ethical Aspects in Intercultural Perspective (pp. 57-68). Munich: Fink.

Goffman, E. (1974). Frame Analysis: An Essay on the Organization of Experience. Boston, MA: Northeastern University Press.

Harvey, P., Jensen, C. B., & Morita, A. (2017). Introduction: infrastructural complications. In P. Harvey, C. B. Jensen, & A. Morita (Eds.), Infrastructures and Social Complexity: A Companion (pp. 1-22). London: Routledge.

Hasselbalch, G., & Tranberg, P. (2016, December 1). The free space for data monopolies in Europe is shrinking [Blog post]. Retrieved from Opendemocracy.net https://www.opendemocracy.net/gry-hasselbalch-pernille-tranberg/free-space-for-data-monopolies-in-europe-is-shrinking

Hasselbalch, G., & Tranberg, P. (2016, September 27). Personal data stores want to give individuals power over their data [Blog post]. Retrieved from DataEthics.eu https://dataethics.eu/personal-data-stores-will-give-individual-power-their-data/

Hasselbalch, G., & Tranberg, P. (2016). Data Ethics. The New Competitive Advantage. Copenhagen: Publishare.

Heisenberg, D. (2005). Negotiating Privacy: The European Union, The United States and Personal Data Protection. Boulder, CO: Lynne Rienner Publishers.

Heuer, S., & Tranberg, P. (2013). Fake It! Your Guide to Digital Self-Defense. Copenhagen: Berlingske Media Forlag.

Hill, R. (2017, November 24). Another toothless wonder? Why the UK.gov’s data ethics centre needs clout. The Register. Retrieved from https://www.theregister.co.uk/2017/11/24/another_toothless_wonder_why_the_ukgovs_data_ethics_centre_needs_some_clout/

Hofmann, J., Katzenbach, C., & Gollatz, K. (2016). Between coordination and regulation: finding the governance in Internet governance. New Media & Society, 19(9), 1406-1423. doi:10.1177/1461444816639975

Holst, H. K. (2018, March 11). Regeringen vil lovgive om dataetik: det handler om, hvilket samfund vi ønsker [The government will legislate on data ethics: it is about what society we want]. Berlingske. Retrieved from https://www.berlingske.dk/politik/regeringen-vil-lovgive-om-dataetik-det-handler-om-hvilket-samfund-vi-oensker

Hughes, T. P. (1987). The evolution of large technological systems. In W. E. Bijker, T. P. Hughes, & T. Pinch (Eds.), The Social Construction of Technological Systems (pp. 51-82). Cambridge, MA: MIT Press.

Ingeniøren. (2018, March 16). Start nu med at overholde loven, Brian Mikkelsen [Now start complying with the law, Brian Mikkelsen]. Version2. Retrieved from https://www.version2.dk/artikel/leder-start-nu-med-at-overholde-loven-brian-mikkelsen-1084631

Ingold, T. (2000). The Perception of the Environment: Essays in Livelihood, Dwelling and Skill. London: Routledge.

Introna, L. D. (2005). Disclosive ethics and information technology: disclosing facial recognition systems. Ethics and Information Technology, 7(2), 75-86. doi:10.1007/s10676-005-4583-2

in ’t Veld, S. (2017, January 26). European Privacy Platform [Video file]. Retrieved from https://www.youtube.com/watch?v=8_5cdvGMM-U

Lauristin, M. (2017, January 26). MEP debate: The regulation is here! What now? [Video file]. Retrieved from https://www.youtube.com/watch?v=28EtlacwsdE

Lyon, D. (2010). Liquid surveillance: the contribution of Zygmunt Bauman to surveillance studies. International Political Sociology, 4(4), 325-338. doi:10.1111/j.1749-5687.2010.00109.x

Lyon, D. (2014). Surveillance, Snowden, and big data: capacities, consequences, critique. Big Data & Society, 1(2). doi:10.1177/2053951714541861

Mayer-Schonberger, V., & Cukier, K. (2013). Big Data: A Revolution That Will Transform How We Live, Work and Think. London: John Murray.

Meyrowitz, J. (1985). No Sense of Place: The Impact of the Electronic Media on Social Behavior. Oxford: Oxford University Press.

Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: mapping the debate. Big Data & Society, 3(2), 1-21. doi:10.1177/2053951716679679

Moor, J. H. (1985). What is computer ethics? Metaphilosophy, 16(4), 266-275. doi:10.1111/j.1467-9973.1985.tb00173.x

Nemitz, P. (2017, January 26). European Privacy Platform [Video file]. Retrieved from https://www.youtube.com/watch?v=8_5cdvGMM-U

O’Neil, C. (2016). Weapons of Math Destruction. New York: Penguin Books.

Pasquale, F. (2015). The Black Box Society – The Secret Algorithms That Control Money and Information. Cambridge, MA: Harvard University Press.

Poikola, A., Kuikkaniemi, K., & Honko, H. (2018). Mydata – A Nordic Model for human-centered personal data management and processing [White paper]. Helsinki: Open Knowledge Finland. Retrieved from https://www.lvm.fi/documents/20181/859937/MyData-nordic-model/2e9b4eb0-68d7-463b-9460-821493449a63?version=1.0

Pohle, J., Hösl, M., & Kniep, R. (2016). Analysing internet policy as a field of struggle. Internet Policy Review, 5(3). doi:10.14763/2016.3.412

Powles, J. (2015–2018). Julia Powles [Profile]. The Guardian. Retrieved from https://www.theguardian.com/profile/julia-powles

Richards, N. M., & King, J. H. (2014). Big data ethics. Wake Forest Law Review, 49, 393-432.

Richardson, J. (2015, December 10). European General Data Protection Regulation draft: the debate. Retrieved from Medium https://medium.com/@janicerichardson/european-general-data-protection-regulation-draft-the-debate-8360e9ef5c1

Schulz, M. (2016, March 3). Technological totalitarianism, politics and democracy [Video file]. Retrieved from https://www.youtube.com/watch?v=We5DylG4szM

Solove, D. J. (2008). Understanding Privacy. Cambridge, MA: Harvard University Press.

Spiekermann, S., Hampson, P., Ess, C. M., Hoff, J., Coeckelbergh, M., & Franckis, G. (2017). The Ghost of Transhumanism & the Sentience of Existence. Retrieved from The Privacy Surgeon http://privacysurgeon.org/blog/wp-content/uploads/2017/07/Human-manifesto_26_short-1.pdf

The Guardian. (2013, November 1). NSA Prism Programme Slides. The Guardian. Retrieved from https://www.theguardian.com/world/interactive/2013/nov/01/prism-slides-nsa-document

Vestager, M. (2016, September 9). Making Data Work for Us. Retrieved from European Commission https://ec.europa.eu/commission/commissioners/2014-2019/vestager/announcements/making-data-work-us_en Video available at https://vimeo.com/183481796

de Wachter, M. A. M. (1997). The European Convention on Bioethics. Hastings Center Report, 27(1), 13-23. Retrieved from https://onlinelibrary.wiley.com/doi/full/10.1002/j.1552-146X.1997.tb00015.x

Wagner, B. (2018). Ethics as an escape from regulation: From ethics-washing to ethics-shopping? In M. Hildebrandt (Ed.), Being Profiled: Cogitas Ergo Sum. Amsterdam: Amsterdam University Press. Retrieved from https://www.privacylab.at/wp-content/uploads/2018/07/Ben_Wagner_Ethics-as-an-Escape-from-Regulation_2018_BW9.pdf

Warman, M. (2012, February 8). EU Privacy regulations subject to ‘unprecedented lobbying’. The Telegraph. Retrieved from https://www.telegraph.co.uk/technology/news/9070019/EU-Privacy-regulations-subject-to-unprecedented-lobbying.html

Williams, R. (1993). Culture is ordinary. In A. Gray, & J. McGuigan (Eds.), Studying Culture: An Introductory Reader (pp. 5-14). London: Edward Arnold.

Winner, L. (1980). Do artifacts have politics? Daedalus, 109(1), 121-136. Retrieved from https://www.jstor.org/stable/20024652

Wong, S. (2009). Tales from the frontline: The experiences of early childhood practitioners working with an ‘embedded’ research team. Evaluation and Program Planning, 32(2), 99–108. doi:10.1016/j.evalprogplan.2008.10.003

Zuboff, S. (2016, March 5). The secrets of surveillance capitalism. Frankfurter Allgemeine. Retrieved from http://www.faz.net/aktuell/feuilleton/debatten/the-digital-debate/shoshana-zuboff-secrets-of-surveillance-capitalism-14103616.html

Zuboff, S. (2014, September 9). A digital declaration. Frankfurter Allgemeine. Retrieved from http://www.faz.net/aktuell/feuilleton/debatten/the-digital-debate/shoshan-zuboff-on-big-data-as-surveillance-capitalism-13152525.html

Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. London; New York: Profile Books; Public Affairs.

Policy documents and reports

AI Task Force & Agency for Digital Italy. (2018). Artificial Intelligence at the service of the citizen [White paper]. Retrieved from https://libro-bianco-ia.readthedocs.io/en/latest/

Council of Europe. (1997). Convention for the Protection of Human Rights and Dignity of the Human Being with Regard to the Application of Biology and Medicine: Convention on Human Rights and Biomedicine (the “Oviedo Convention”), Treaty No. 164. Retrieved from https://www.coe.int/en/web/conventions/full-list/-/conventions/treaty/164

Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the Protection of Individuals with Regard to the Processing of Personal Data and on the Free Movement of such Data. Retrieved from https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A31995L0046

EC High-Level Expert Group. (2018). Draft Ethics Guidelines for Trustworthy AI. Working document, 18 December 2018 (final document was not published when this article was written). Retrieved from https://ec.europa.eu/digital-single-market/en/news/draft-ethics-guidelines-trustworthy-ai

European Commission. (2012, January 25). Proposal for a Regulation of the European Parliament and of the Council on the Protection of Individuals with Regard to the Processing of Personal Data and on the Free Movement of such Data (General Data Protection Regulation). Retrieved from http://www.europarl.europa.eu/registre/docs_autres_institutions/commission_europeenne/com/2012/0011/COM_COM(2012)0011_EN.pdf

European Commission. (2018). Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions - Coordinated Plan on Artificial Intelligence (COM(2018) 795 final). Retrieved from https://ec.europa.eu/digital-single-market/en/news/coordinated-plan-artificial-intelligence

European Parliament. (2019, February 12). European Parliament Resolution of 12 February 2019 on a Comprehensive European Industrial Policy on Artificial Intelligence and Robotics (2018/2088(INI)). Retrieved from http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-//EP//NONSGML+TA+P8-TA-2019-0081+0+DOC+PDF+V0//EN

European Union Regulation 2016/679 of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of such Data, and Repealing Directive 95/46/EC (General Data Protection Regulation). Retrieved from https://eur-lex.europa.eu/legal-content/EN/TXT/?qid=1528874672298&uri=CELEX%3A32016R0679

Gov.uk. (2018, January 25). Digital Charter. Retrieved from https://www.gov.uk/government/publications/digital-charter/digital-charter

European Data Protection Supervisor (EDPS). (2015). Towards a New Digital Ethics: Data, Dignity and Technology. Retrieved from https://edps.europa.eu/sites/edp/files/publication/15-09-11_data_ethics_en.pdf

European Data Protection Supervisor (EDPS). Ethics Advisory Group. (2018). Towards a Digital Ethics. Retrieved from https://edps.europa.eu/sites/edp/files/publication/18-01-25_eag_report_en.pdf

Footnotes

1. By “European” I am not only focusing on the European Union (EU), but on a constantly negotiated cultural context, and thus, for example, I do not exclude organisations like the Council of Europe or instruments such as the European Convention on Human Rights.

2. Interviews informing the article (anonymous, all audio recorded, except from one based on written notes): four directly quoted in the article; two policy advisors; four European institution officers; one data protection commissioner; one representative of a European country to the Committee of Ministers of the Council of Europe; one European parliamentarian.

3. I am the vice chair of the IEEE P7006 standard on personal data AI agents.

4. I was one of the 12 appointed members of this committee (2018).

5. I was one of the 52 appointed members of this group (2018-2020).

Operationalising communication rights: the case of a “digital welfare state”


This paper is part of Practicing rights and values in internet policy around the world, a special issue of Internet Policy Review guest-edited by Aphra Kerr, Francesca Musiani, and Julia Pohle.

Introduction

The rampant spread of disinformation and hate speech online, the so-called surveillance capitalism of the internet giants and related violations of privacy (Zuboff, 2019), persisting digital divides (International Telecommunication Union, 2018), and inequalities created by algorithms (Eubanks, 2018): these issues and many other current internet-related phenomena challenge us as individuals and as members of society. These challenges have sparked renewed discussion about the idea and ideal of citizens’ communication rights.

Either as a legal approach or as a moral discursive strategy, the rights-based approach is typically presented in a general sense as a counterforce that protects individuals against illegitimate forms of power, including both state and corporate domination (Horten, 2016). The notion of communication rights can refer not only to existing legally binding norms, but also, more broadly, to normative principles against which real-world developments are assessed. However, there is no consensus on what kinds of institutions are needed to uphold and enforce communication rights in the non-territorial, regulation-averse and rapidly changing media environment. Besides the actions of states, the realisation of communication rights is now increasingly impacted by the actions of global multinational corporations, activists, and users themselves.

While much of the academic debate has focused on transnational attempts to codify and promote communication rights at the global level, in this article, we examined a national approach to communication rights. Despite the obvious transnational nature of the challenges, we argued for the continued relevance of analysing communication rights in the context of national media systems and policy traditions. We provided a model to analyse communication rights in a framework that has its foundation in a specific normative, but also empirically grounded understanding of the role of communication in a democracy. In addition, we discussed the relevance of single country analyses to global or regional considerations of rights-based governance.

Communication rights and the case of Finland

The concept of communication rights has a varied history, starting with the attempts of the Global South in the 1970s to counter the Westernisation of communication (Hamelink, 1994; McIver et al., 2003). The connections between human rights and media policy have also been addressed, especially in international contexts and in the United Nations (Jørgensen, 2013; Mansell & Nordenstreng, 2006). Communication rights have also been invoked in more specific contexts to promote, for instance, the rights of disabled persons and cultural and sexual minorities in today’s communication environment (Padovani & Calabrese, 2014; McLeod, 2018). Currently, these rights are most often employed in civil society manifestos and international declarations focused on digital or internet-related rights (Karppinen, 2017; Redeker, Gill, & Gasser, 2018).

Today, heated policy debates surround the role of global platforms in realising or violating principles, such as freedom of expression or privacy, that are already stipulated in the United Nations Universal Declaration of Human Rights (MacKinnon, 2013; Zuboff, 2019). Various groups have made efforts to monitor and influence the global policy landscape, including the United Nations, its Special Rapporteurs, and the Internet Governance Forum; voluntary multi-stakeholder coalitions, such as the Global Network Initiative; and civil society actors, such as the Electronic Frontier Foundation, Freedom House, or Ranking Digital Rights (MacKinnon et al., 2016). At the same time, nation states are still powerful actors whose choices can make a difference in the realisation of rights (Flew, Iosifides, & Steemers, 2016). This influence is made evident through monitoring efforts that track internet freedom and the increased efforts by national governments to control citizens’ data and internet access (Shahbaz, 2018).

Communication rights in Finland are particularly worth exploring and analysing. Although Finnish communication policy solutions are now intertwined with broader European Union initiatives, the country has an idiosyncratic historical legacy in communication policy. Year after year, it remains one of the top countries in press freedom rankings (Reporters without Borders, 2018). In the 1990s, Finland was a frontrunner in shaping information society policies, gaining notice for technological development and global competitiveness, especially in the mobile communications sector (Castells & Himanen, 2002). Finland was also among the first nations to make affordable broadband access a legal right (Nieminen, 2013). On the EU Digital Economy and Society Index, Finland scores high in almost all categories, partly due to its forward-looking strategies for artificial intelligence and extensive, highly developed digital public services (Ministry of Finance, 2018). According to the think tank Center for Data Innovation, Finland’s availability of official information is the best in the EU (Wallace & Castro, 2017). Not only are Finns among the most frequent users of the internet in the European Union, they also report feeling well-informed about risks of cybercrime and trust public authorities with their online data more than citizens of any other EU country (European Union, 2017, pp. 58-60).

While national competitiveness in the global marketplace has informed many of Finland’s policy approaches (Halme et al., 2014), they also reflect the Nordic tradition of the so-called “epistemic commons”, that is, the ideals of knowledge and culture as a joint and shared domain, free of restrictions (Nieminen, 2014). 1 Aspects such as civic education, universal literacy, and mass media are at the heart of this ideal (Nieminen, 2014). This ideal has been central to what Syvertsen, Enli, Mjøs, and Moe (2014) called the “Nordic Media Welfare State”: Nordic countries are characterised by universal media and communications services, strong and institutionalised editorial freedom, a cultural policy for the media, and policy solutions that are consensual and durable, based on consultation with both public and private stakeholders.

Operationalising rights

How does Finland, a country with such unique policy traditions, fare as a “Digital Welfare State”? In this article, we employed a basic model that divides the notion of communication rights into four distinct operational categories (Nieminen, 2010; 2016; 2019; Horowitz & Nieminen, 2016). These divisions differ from other recent categorisations (Couldry et al., 2016; Goggin et al., 2017) in that they specifically reflect the ideal of the epistemic commons of shared knowledge and culture. Communication rights, then, should preserve the epistemic commons and remove restrictions on it. We understand the following rights as central to those tasks:

  1. Access: citizens’ equal access to information, orientation, entertainment, and other contents serving their rights.
  2. Availability: equal availability of various types of content (information, orientation, entertainment, or other) for citizens.
  3. Dialogical rights: the existence of public spaces that allow citizens to publicly share information, experiences, views, and opinions on common matters.
  4. Privacy: protection of every citizen’s private life from unwanted publicity, unless such exposure is clearly in the public interest or if the person decides to expose it to the public, as well as protection of personal data (processing, by authorities or businesses alike, must have legal grounds and abide by principles, such as data minimisation and purpose limitation, while individuals’ rights must be safeguarded).

To discuss each category of rights, we deployed them on three levels: the Finnish regulatory-normative framework; implementation by the public sector and by commercial media and communications technology providers; and the activities of citizen-consumers. This multi-level analysis aims at depicting the complex nature of the rights and their often contested and contradictory realisations at different levels. For each category, we also highlighted one example: for access, telecommunications; for availability, extended collective licencing in the context of online video recording services; for dialogical rights, e-participation; and for privacy, the monitoring of communications metadata within organisations.

Access

Access as a communication right well illustrates the development of media forms, the expansion of the Finnish media ecosystem, and the increasing complexity of rights as realised in regulatory decisions by the public sector, commercial media, and communications technology providers. After 100 years of independence, Finland is still short of domestic capital and heavily dependent on exports, which makes it vulnerable to economic downturns (OECD, 2018). Interestingly, despite changes to the national borders, policies, and technologies over time, it is these geopolitical, demographic, and socioeconomic conditions that have remained relatively unchanged and, in turn, have shaped most of the current challenges towards securing access to information and media.

While the right to access in Finland also relates to institutions such as libraries and schools, telecommunications are perhaps the most illustrative case, and the operationalisation here therefore focuses on them. Telecommunications were originally introduced in Finland by the Russian Empire; however, the Finnish Senate managed to obtain an imperial mandate for licensing private telephone operations. As a result, the Finnish telephone system formed a competitive market based on several regional private companies. There was no direct state involvement in the telecommunications business before Finland became independent (Kuusela, 2007).

The licenses of the private telephone operators required them to arrange the telephone services in their area to meet the telephone customers’ needs for reasonable and equal prices. In practice, every company had a universal service obligation (USO) in its licensing area. However, as the recession of the 1930s stopped the development of private telephone companies in the most sparsely inhabited areas, the state of Finland had to step in. The national Post and Telecommunication service eventually played a pivotal role in providing telephone services to the most northern and eastern parts of Finland (Moisala, Rahko, & Turpeinen, 1977).

Access to a fixed telephone network improved gradually until the early 1990s, when about 95% of households had at least one telephone in their use. However, the number of mobile phone subscriptions surpassed the number of fixed-line telephone subscriptions as early as 1999, and an increasing share of households gave up the traditional telephone completely. As a substitute for the fixed telephone, in the late 1990s, mobile phones were seen in Finland as the best way to bring communication “into every pocket” (Silberman, 1999). Contrary to the ideal of the epistemic commons, the official government broadband strategy was based much more on market-led development and mobile networks than, for example, in Sweden, where the government made more public investments in building fixed fibre-optic connections (Eskelinen, Frank, & Hirvonen, 2008). Finland also gave indirect public subsidies to mobile broadband networks (Haaparanta & Puhakka, 2002). While the rest of Europe had started to auction their mobile spectrum (Sims, Youell, & Womersley, 2015), in Finland the operators received all mobile frequencies for free until 2013.

The European regulations of USOs in telecommunication have been designed to set a relatively modest minimum level of telephone services at an affordable price, which could be implemented in a traditional fixed telephone network. Any extensions for mobile or broadband services have been deliberately omitted (Wavre, 2018). However, the universal services directive (2002/22/EC) lets the member states use both fixed and wireless mobile network solutions for USO provision. In addition, while the directive suggests that users should be able to access the internet via the USO connection, it does not set any minimum bitrate for connections in the common market.

Finland amended its national legislation in 2007 to let the telecom operators meet their universal service obligations using mobile networks. The results were dramatic, as operators quickly replaced large parts of the fixed telephone network with a mobile network, especially in eastern and northern parts of Finland. Today, less than 10% of households have fixed telephones. At the same time, there are almost 10 million mobile subscriptions in use in a country with 5.5 million inhabitants. Less than 1% of households do not have any mobile phones at all (Statistics Finland, 2017). Thanks to the 3G networks using frequencies the operators had obtained for free, Finland became a pioneer in making affordable broadband a legal right. Reasonably priced access to broadband internet from home has been part of the universal service obligation in Finland since 2010. However, the USO broadband speed requirement (2 Mbps) is rather modest by contemporary standards.

It is obvious that since the 1990s, Finland has not systematically addressed access as a basic right, but rather as a tool to reach political and economic goals. Although about 90% of households already have internet access, only 51% of them have access to ultra-fast fixed connections. Almost one-third of Finnish households are totally dependent on mobile broadband, which is the highest share in the EU. To guarantee access to 4G mobile broadband throughout the country, the Finnish government licensed two operators, Finnish DNA and Swedish Telia, to build and operate a new, shared mobile (broadband) network in the northern and eastern half of Finland. Despite recent government efforts to also develop ultra-fast fixed broadband, Finland is currently lagging behind other EU countries. A report monitoring the EU initiative “A Digital Agenda for Europe” (European Court of Auditors, 2018) found that Finland is only 22nd in the ranking in terms of progress towards universal coverage with fast broadband (> 30 Mbps) by 2020. In contrast, another Nordic Media Welfare State, Sweden, with its ongoing investments in citizens’ access to fast broadband, expects all households to have access to at least 100 Mbps by 2020 (European Court of Auditors, 2018).

Availability

As a communication right, availability is the counterpart not only to access but also to dialogical rights and privacy. Availability refers to the abundance, plurality, and diversity of factual and cultural content to which citizens may equally expose themselves. Importantly, despite an apparent abundance of available content in the current media landscape, digitalisation does not translate into limitless availability, but rather implies new restrictions and conditions as well as challenges stemming from disinformation. Availability both overcomes many traditional boundaries and faces new ones, many pertaining to ownership and control over content. For instance, public service broadcasting no longer self-evidently caters for availability, and media concentration may affect availability. In Finland, one specific question of availability and communication pertains to linguistic rights. Finland has two official languages, which implies additional demands for availability both in Finnish and in Swedish, alongside Sami and other minority languages. These are guaranteed in a special Language Act, but are also included in several other laws, including the law on public service broadcasting.

Here, availability is examined primarily through overall trends in free speech and access to information in Finland, as well as from the perspective of copyright and paywalls in particular. Availability is framed and regulated from an international and supranational level (e.g., the European Union) to the national level. Availability at a national level relies on the constitutionally safeguarded freedom of expression and access to information as well as fundamental cultural and educational rights. Freedom of the press and publicity dates back to 18th-century Sweden-Finland. After periods of censorship and “Finlandization”, the basic tenet has been a ban on prior restraint, notwithstanding measures required to protect children in the audio-visual field (Neuvonen, 2005; 2018). Later, Finland became a contracting party to the European Convention on Human Rights (ECHR) in 1989, linking Finland closely to the European tradition. However, in Finland, privacy and freedom of expression were long balanced in favour of the former, departing somewhat from ECHR standards and affecting media output (Tiilikka, 2007).

Regarding transparency and publicity in the public sector, research has shown that Finnish municipalities, in general, are not truly active in catering to citizens’ access to information requests, and there is inequality across the country (Koski & Kuutti, 2016). This is in contrast to the ideals of the Nordic Welfare State (Syvertsen et al., 2014). In response, the civil society group Open Knowledge Finland has created a website that publishes information requests and guides people in submitting their own requests.

The digital environment is conducive to restrictions and requirements stemming from copyright and personal data protection, both of which affect availability. The “right to be forgotten”, for example, enables individual requests to remove links in search results, thus affecting searchability (Alén-Savikko, 2015). To overcome a particular copyright challenge, new provisions were tailored in Finland to enable online video recording services, thereby allowing people to access TV broadcasts at more convenient times in a manner that transcends traditional private copying practices. The Finnish solution rests partly on the Nordic approach to so-called extended collective licensing (ECL), which was originally developed as a solution to serve the public interest in the field of broadcasting. Collective management organisations are able to license such use on behalf of their members with an extended effect (i.e., they are regarded as representative of non-members as well), while TV companies license their own rights (Alén-Savikko & Knapstad, 2019; Alén-Savikko, 2016).

Alongside legal norms, different business models frame and construct the way availability presents itself to citizens. Currently, pay-per-use models and paywalls feature in the digital media sector, although pay-TV development in particular has long been moderate in Finland (Ministry of Transport and Communications, 2014a). With new business models, availability transforms into conditional access, while equal opportunity turns into inequality based on financial means. From the perspective of individual members of the public, the one-sided emphasis on consumer status is in direct opposition to the ideals of the epistemic commons and the Nordic Media Welfare State.

Dialogical rights

Access and availability are prerequisites for dialogical rights. These rights can be operationalised as citizens’ possibilities and realised activities to engage in dialogue that fosters democratic decision-making. Digital technology offers new opportunities for participation: in dialogues between citizens and the government; in dialogues with and via legacy media; and in direct, mediated peer-to-peer communication that can amount to civic engagement.

Finland has a long legacy of providing equal opportunities for participation, for instance as the first country in Europe to establish universal suffrage in 1906, when still under the Russian Empire. After reaching independence in 1917, Finland implemented its constitution in 1919. The constitution secures freedom of expression, while also stipulating that public authorities shall promote opportunities for the individual to participate in societal activity and to influence the decisions that concern him or her.

Currently, a dozen laws support dialogical rights, ranging from the Election Act and Non-Discrimination Act to the Act on Libraries. Several of them address media organisations, including the Finnish Freedom of Expression Act (FEA), which safeguards individuals’ right to report and make a complaint about media content, and the Act on Yleisradio (public broadcasting), which stipulates the organisation’s role in supporting democratic participation.

Finland seems to do particularly well in providing internet-based opportunities for direct dialogue between citizens and their government. These efforts began, as elsewhere in Europe, in the 1990s (Pelkonen, 2004). The government launched a public engagement programme, followed in the subsequent decade by two other participation-focused programmes (Wilhelmsson, 2017). While Estonia is the forerunner in all types of electronic public services, Finland excels in the Nordic model of combining e-governance and e-participation initiatives: it currently features a number of online portals for gathering citizens’ opinions and initiatives, at both the national and municipal levels (Wilhelmsson, 2017).

Still, increasing inequality in the capability for political participation is one of the main concerns in the National Action Plan 2017–2019 (Ministry of Justice, 2017). The country report on the Sustainable Governance Indicators notes that the weak spot for Finland is the public’s evaluative and participatory competencies (Anckar et al., 2018). Some analyses posit that Finnish civil society is simply not very open to diverse debates, contrary to the culture of public dialogue in Sweden (Pulkkinen, 1996). While Finns are avid news followers who trust the news and are more likely to pay for online news than news consumers in most countries (Reunanen, 2018), participatory possibilities do not entice them very much. Social media are not widely used for political participation, even by young people (Statistics Finland, 2017) and, for example, Twitter remains a forum for dialogue between the political and media elite (Eloranta & Isotalus, 2016).

The most successful Finnish e-participation initiative is based on a 2012 amendment to the constitution that has made it possible for citizens to submit initiatives to the Parliament. One option to do so is via a designated open source online portal. An initiative will proceed to Parliament if it has collected at least 50,000 statements of support within six months. By 2019, the portal had accrued almost 1000 proposals, 24 had proceeded to be discussed in Parliament, and two related laws had been passed. Research shows, however, that many other digital public service portals still remain unknown to Finns (Wilhelmsson, 2017).
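The statutory threshold is concrete enough to be written down as a simple check. The following minimal sketch (in Python; the function name and data layout are hypothetical, and the six-month window is approximated as 182 days) encodes the two conditions under which an initiative proceeds to Parliament:

```python
from datetime import date, timedelta

SUPPORT_THRESHOLD = 50_000               # statements of support required
COLLECTION_WINDOW = timedelta(days=182)  # roughly six months

def proceeds_to_parliament(opened: date, supports_by_day: dict) -> bool:
    """Return True if the initiative gathered enough support in time.

    `opened` is the date the initiative was published on the portal;
    `supports_by_day` maps each date to the number of statements of
    support recorded that day (a hypothetical data layout).
    """
    deadline = opened + COLLECTION_WINDOW
    total = sum(n for day, n in supports_by_day.items() if day <= deadline)
    return total >= SUPPORT_THRESHOLD

# Example: 60,000 statements collected well within the window.
print(proceeds_to_parliament(date(2019, 3, 1), {date(2019, 4, 15): 60_000}))  # True
```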

As Karlsson (2015) has posited in the case of Sweden, public and political dialogues online can be assessed by their intensity, quality, and inclusiveness. The Finnish case shows that digital solutions do not guarantee participation if they are not actively marketed to citizens and if they do not entail a direct link to decision-making (Wilhelmsson, 2017). While the Finnish portal for citizen initiatives has mobilised some marginalised groups, the case suggests that e-participation can also alienate others, for example older citizens (Christensen et al., 2017). Valuing each and every voice, and prioritising ways to do so over economic or political priorities (Couldry, 2010) or the need to govern effectively (Nousiainen, 2016), could be seen as central to dialogical rights between citizens and those in government and public administration.

Privacy

Privacy brings together all the main strands of changes caused by digitalisation: changes in media systems from mass to multimedia; technological advancements; regulatory challenges of converging sectors; and shifting sociocultural norms and practices. It also highlights a shrinking, rather than expanding, space for the right to privacy.

Recent technical developments and the increased surveillance capacities of both corporations and nation states have raised concerns regarding the fundamental right to privacy. While the trends are arguably global, there is a distinctly national logic to privacy rights. This logic coexists with international legal instruments. In the Nordic case, the strong privacy rules exist alongside access to information laws that require the public disclosure of data that would be regarded as intimate in many parts of the world, such as tax records. Curiously, a few years ago, the majority of Finns did not even consider their name, home address, fingerprints, or mobile phone numbers to be personal information (European Union, 2011), and they are still among the most trusting citizens in the EU when it comes to the use of their digital data by authorities (European Union, 2017).

In Finland, the right to privacy is a fundamental constitutional right and includes the right to be left alone, a person’s honour and dignity, the physical integrity of a person, the confidentiality of communications, the protection of personal data, and the right to be secure in one’s home (Neuvonen, 2014). The present slander and defamation laws date back to Finland’s first criminal code from 1889, when Finland was still a Grand Duchy of the Russian Empire. In 1919, the Finnish constitution provided for the confidentiality of communications by mail, telegraph, and telephone, as well as the right to be secure in one’s home—important rights for citizens in a country that had lived under the watchful eye of the Russian security services.

In the sphere of privacy protection, new laws are usually preceded by the threat of new technology (Tene & Polonetsky, 2013); however, in Finland, this was not the case. Rather, the need for new laws reflected a change in Finland’s journalistic culture that had previously respected the private lives of politicians, business leaders, and celebrities. The amendments were called “Lex Hymy” (Act 908/1974) after one of Finland’s most popular monthlies had evolved into a magazine increasingly focused on scandals.

Many of the more recent rules on electronic communications and personal data are a result of international policies being codified into national legislation, perhaps most importantly the transposition of EU legislation into national law. What is fairly clear, however, is that the state has been seen as the guarantor of the right to privacy since even before Finland was a sovereign nation. The strong role of the state is consistent with the European social model and an increased focus on public service regulation (cf. Venturelli, 2002, p. 80). Nevertheless, the potential weakness of this model is that privacy rights seldom trump the public interest, and public uses of personal data are not as strictly regulated as their private use.

Finland has also introduced legislation that weakens the relatively strong right to privacy. After transposing the ePrivacy Directive guaranteeing the confidentiality of electronic communications into national law, the Finnish Government proposed an amending act that granted businesses and organisations the right to monitor communications metadata within their networks. The act was dubbed “Lex Nokia” after Finland’s leading newspaper published an article that alleged that the Finnish mobile giant had pressured politicians and officials to introduce the new law (Sajari, 2009). While it is difficult to assess to what degree Nokia influenced the contents of the legislation, it is clear that Nokia took the initiative and was officially involved in the legislative process (Jääsaari, 2012).

The Lex Nokia act demonstrates how the state’s public interest considerations might coincide with the economic interests of large corporations to the detriment of the right to privacy. Regardless, Finnish citizens remain more trusting of public authorities, health institutions, banks, and telecommunications companies than most of their European compatriots (European Union, 2015). It remains to be seen whether this trust in authority will erode, as more public and private actors aim to capitalise on the promises of big data. Nothing in recent Eurobarometer surveys (European Union, 2018a, pp. 38–56; European Union, 2018b) would indicate that trust in public authorities is in crisis or in steep decline; the same cannot be said for trust in political institutions, which seems to decline a few percentage points each year in various studies.

Discussion

The promotion of communication rights based on the ideal of the epistemic commons is institutionalised in a variety of ways in Finnish communication policy-making, ranging from traditional public service media arrangements to more recent broadband and open data initiatives. However, understood as equal and effective capabilities, communication rights and the related policy principles of the Nordic Media Welfare State have never been completely or uniformly followed in the Nordic countries.

The analysis of the Finnish case highlights how the ideal of a “Digital Welfare State” falls short in several ways. Policies of access or privacy may focus on economic goals rather than rights. E-participation initiatives promoting dialogical rights do not automatically translate into a capacity or a desire to participate in decision-making. Arguably, the model employed in this article has been built on a specific understanding of which rights and stakeholders are needed to support the ideals of the epistemic commons and the Nordic Media Welfare State. That is why it focuses more on national specificities and less on the impact of supranational and international influences on the national situation. It is obvious that in the current media landscape, national features are challenged by a number of emergent forces, including not only technological transformations but also general trends of globalisation and the declining capacities of nation states to enforce public interest or rights-based policies (Horten, 2016).

Still, more subtle and local manifestations of global and market-driven trends are worth examining to understand different policy options and interpretations. Measurement tools and indicators have been developed and employed for mapping and monitoring the state of communication rights at the national level, targeting various components such as linguistic issues or accessibility. In Finland, this type of approach has been adopted in the field of media and communications policy (Ala-Fossi et al., 2018; Artemjeff & Lunabba, 2018; Ministry of Transport and Communications, 2014b). Recent academic efforts aiming at comparative outlooks (Couldry et al., 2016; Goggin et al., 2017) are indications that communication rights urgently call for a variety of conceptualisations and operationalisations to uncover similarities and differences between countries and regions. As Eubanks (2018) argued, we seem to be at a crossroads: despite our unparalleled capacities for communication, we are witnessing new forms of digitally enabled inequality, and we need to curb these inequalities now, if we want to counter them at all. We may need global policy efforts, but we also need to understand their specific national and supranational reiterations to counter these and other inequalities regarding citizens’ communication rights.

References

Ala-Fossi, M., Alén-Savikko, A., Grönlund, M., Haara, P., Hellman, H., Herkman, J., … Mykkänen, M. (2018). Media- ja viestintäpolitiikan nykytila ja sen mittaaminen [Current state of media and communication policy and its measurement]. Helsinki: Ministry of Transport and Communications. Retrieved February 21, 2019, from http://urn.fi/URN:ISBN:978-952-243-548-4

Alén-Savikko, A. (2015). Pois hakutuloksista, pois mielestä? [Out of search results, out of mind?]. Lakimies, 113(3-4), 410–433. Retrieved from http://www.doria.fi/handle/10024/126796

Alén-Savikko, A. (2016). Copyright-proof network-based video recording services? An analysis of the Finnish solution. Javnost – The Public, 23(2), 204–219. doi:10.1080/13183222.2016.1162979

Alén-Savikko, A., & Knapstad, T. (2019). Extended collective licensing and online distribution – prospects for extending the Nordic solution to the digital realm. In T. Pihlajarinne, J. Vesala & O. Honkkila (Eds.), Online distribution of content in the EU (pp. 79–96). Cheltenham, UK & Northampton, MA: Edward Elgar. doi:10.4337/9781788119900.00012

Anckar, D., Kuitto, K., Oberst, C., & Jahn, D. (2018). Finland Report – Sustainable Governance Indicators 2018. Retrieved March 14, 2018, from https://www.researchgate.net/publication/328214890_Finland_Report_-_Sustainable_Governance_Indicators_2018

Artemjeff, P., & Lunabba, V. (2018). Kielellisten oikeuksien seurantaindikaattorit [Indicators for monitoring linguistic rights] (No. 42/2018). Helsinki: Ministry of Justice Finland. Retrieved from http://julkaisut.valtioneuvosto.fi/bitstream/handle/10024/161087/OMSO_42_2018_Kielellisten_oikeuksien_seurantaindikaattorit.pdf

Castells, M., & Himanen, P. (2002). The information society and the welfare state: The Finnish model. Oxford: Oxford University Press.

Christensen, H., Jäske, M., Setälä, M. & Laitinen, E. (2017). The Finnish Citizens’ Initiative: Towards Inclusive Agenda-setting? Scandinavian Political Studies, 40(4), 411–433. doi:10.1111/1467-9477.12096

Couldry, N. (2010). Why voice matters: Culture and politics after neoliberalism. London: Sage.

Couldry, N., Rodriguez, C., Bolin, G., Cohen, J., Goggin, G., Kraidy, M., … Zhao, Y. (2016). Chapter 13 – Media and communications. Retrieved November 14, 2018, from https://comment.ipsp.org/sites/default/files/pdf/chapter_13_-_media_and_communications_ipsp_commenting_platform.pdf

Eloranta, A., & Isotalus, P. (2016). Vaalikeskustelun aikainen livetwiittaaminen – kansalaiskeskustelun uusi muoto? [Live-tweeting during election debates – a new form of civic discussion?]. In K. Grönlund & H. Wass (Eds.), Poliittisen osallistumisen eriytyminen: Eduskuntavaalitutkimus 2015 [Differentiation of political participation: The parliamentary election study 2015] (pp. 435–455). Helsinki: Oikeusministeriö.

Eskelinen, H., Frank, L., & Hirvonen, T. (2008). Does strategy matter? A comparison of broadband rollout policies in Finland and Sweden. Telecommunications Policy, 32(6), 412–421. doi:10.1016/j.telpol.2008.04.001

Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. New York: St. Martin’s Press.

European Commission. (2011). Attitudes on data protection and electronic identity in the European Union (Special Eurobarometer No. 359). Luxembourg: Publications Office of the European Union. Retrieved November 14, 2018, from http://ec.europa.eu/public_opinion/archives/ebs/ebs_359_en.pdf

European Commission. (2015). Data protection (Special Eurobarometer No. 431). Luxembourg: Publications Office of the European Union. Retrieved November 14, 2018, from https://data.europa.eu/euodp/data/dataset/S2075_83_1_431_ENG

European Commission. (2017). Europeans’ attitudes towards cyber security (Special Eurobarometer No. 464a). Luxembourg: Publications Office of the European Union. doi:10.2837/82418

European Commission. (2018a). Public opinion in the European Union (Standard Eurobarometer No. 89). Luxembourg: Publications Office of the European Union. doi:10.2775/172445

European Commission. (2018b). Kansallinen raportti. Kansalaismielipide Euroopan unionissa: Suomi [National report. Public opinion in the European Union: Finland] (Standard Eurobarometer, National Report No. 90). Luxembourg: Publications Office of the European Union. Retrieved from https://ec.europa.eu/finland/sites/finland/files/eb90_nat_fi_fi.pdf

European Court of Auditors. (2018). Broadband in the EU Member States: despite progress, not all the Europe 2020 targets will be met (Special Report No. 12). Luxembourg: European Court of Auditors. Retrieved February 22, 2018, from http://publications.europa.eu/webpub/eca/special-reports/broadband-12-2018/en/

Flew, T., Iosifides, P., & Steemers, J. (Eds.). (2016). Global media and national policies: The return of the state. Basingstoke: Palgrave. doi:10.1057/9781137493958

Goggin, G., Vromen, A., Weatherall, K. G., Martin, F., Webb, A., Sunman, L., & Bailo, F. (2017). Digital rights in Australia (Sydney Law School Research Paper No. 18/23). Sydney: University of Sydney. Retrieved from https://ses.library.usyd.edu.au/bitstream/2123/17587/7/USYDDigitalRightsAustraliareport.pdf

Haaparanta, P., & Puhakka, M. (2002). Johtolangatonta keskustelua: Tunne ja järki huutokauppakeskustelussa [A clueless discussion: Emotion and reason in the auction debate]. Kansantaloudellinen Aikakauskirja, 98(3), 267–274.

Habermas, J. (2006). Political communication in media society: Does democracy still enjoy an epistemic dimension? The impact of normative theory on empirical research. Communication Theory, 16(4), 411–426. doi:10.1111/j.1468-2885.2006.00280.x

Halme, K., Lindy, I., Piirainen, K., Salminen, V., & White, J. (Eds.). (2014). Finland as a knowledge economy 2.0: Lessons on policies and governance (Report No. 86943). Washington, DC: World Bank Group. Retrieved from http://documents.worldbank.org/curated/en/418511468029361131/Finland-as-a-knowledge-economy-2-0-lessons-on-policies-and-governance

Hamelink, C. J. (1994). The politics of world communication. London: Sage.

Horowitz, M., & Nieminen, H. (2016). European public service media and communication rights. In G. F. Lowe & N. Yamamoto (Eds.), Crossing borders and boundaries in public service media: RIPE@2015 (pp. 95–106). Gothenburg: Nordicom. Available at https://gupea.ub.gu.se/bitstream/2077/44888/1/gupea_2077_44888_1.pdf#page=97

Horten, M. (2016). The closing of the net. Cambridge: Polity Press.

International Telecommunication Union. (2018). Measuring the information society report 2018 - Volume 1. Geneva: International Telecommunication Union. Retrieved from: https://www.itu.int/en/ITU-D/Statistics/Pages/publications/misr2018.aspx

Jääsaari, J. (2012). Suomalaisen viestintäpolitiikan normatiivinen kriisi: Esimerkkinä Lex Nokia [The normative crisis of Finnish communications policy: For example, Lex Nokia]. In K. Karppinen & J. Matikainen (Eds.), Julkisuus ja Demokratia [Publicity and Democracy] (pp. 265–291). Tampere: Vastapaino.

Jørgensen, R. F. (2013). Framing the net: The internet and human rights. Cheltenham, UK & Northampton, MA: Edward Elgar.

Karlsson, M. (2015). Interactive, qualitative, and inclusive? Assessing the deliberative capacity of the political blogosphere. In K. Jezierska & L. Koczanowicz (Eds.), Democracy in dialogue, dialogue in democracy: The politics of dialogue in theory and practice (pp. 253–272). London & New York: Routledge.

Karppinen, K. (2017). Human rights and the digital. In H. Tumber & S. Waisbord (Eds.), Routledge companion to media and human rights (pp. 95–103). London & New York: Routledge. doi:10.4324/9781315619835-9

Koski, A., & Kuutti, H. (2016). Läpinäkyvyys kunnan toiminnassa – tietopyyntöihin vastaaminen [Transparency in municipal action – responding to requests for information]. Helsinki: Kunnallisalan kehittämissäätiö [Municipal Development Foundation]. Retrieved November 14, 2018, from http://kaks.fi/wp-content/uploads/2016/11/Tutkimusjulkaisu-98_nettiin.pdf

Kuusela, V. (2007). Sentraalisantroista kännykkäkansaan – televiestinnän historia Suomessa tilastojen valossa [From switchboard operators to the mobile phone nation – the history of telecommunications in Finland in the light of statistics]. Helsinki: Tilastokeskus. Retrieved November 14, 2018, from http://www.stat.fi/tup/suomi90/syyskuu.html

MacKinnon, R. (2013). Consent of the networked: The struggle for internet freedom. New York: Basic Books.

MacKinnon, R., Maréchal, N., & Kumar, P. (2016). Global Commission on Internet Governance – Corporate accountability for a free and open internet (Paper No. 45). Ontario; London: Centre for International Governance Innovation; Chatham House. Retrieved from https://www.cigionline.org/sites/default/files/documents/GCIG%20no.45.pdf

Mansell, R., & Nordenstreng, K. (2006). Great Media and Communication Debates: WSIS and the MacBride Report. Information Technologies and International Development, 3(4), 15–36. Available at http://tampub.uta.fi/handle/10024/98193

McIver, W. J., Jr., Birdsall, W. F., & Rasmussen, M. (2003). The internet and the right to communicate. First Monday, 8(12). doi:10.5210/fm.v8i12.1102

McLeod, S. (2018). Communication rights: Fundamental human rights for all. International Journal of Speech-Language Pathology, 20(1), 3–11. doi:10.1080/17549507.2018.1428687

Ministry of Finance, Finland. (2018, May 23). Digital Economy and Society Index: Finland has EU's best digital public services. Helsinki: Ministry of Finance. Retrieved February 28, 2019, from https://vm.fi/en/article/-/asset_publisher/digitaalitalouden-ja-yhteiskunnan-indeksi-suomessa-eu-n-parhaat-julkiset-digitaaliset-palvelut

Ministry of Justice, Finland. (2017). Action plan on democracy policy. Retrieved February 28, 2019, from https://julkaisut.valtioneuvosto.fi/bitstream/handle/10024/79279/07_17_demokratiapol_FI_final.pdf?sequence=1

Ministry of Transport and Communications, Finland. (2014a). Televisioala Suomessa: Toimintaedellytykset internetaikakaudella [Television industry in Finland: Operating conditions in the Internet era] (Publication No. 13/2014). Helsinki: Ministry of Transport and Communications. Retrieved from http://urn.fi/URN:ISBN:978-952-243-398-5

Ministry of Transport and Communications, Finland. (2014b). Viestintäpalveluiden esteettömyysindikaattorit [Accessibility indicators for communication services] (Publication No. 36/2014). Helsinki: Ministry of Transport and Communications. Retrieved from http://urn.fi/URN:ISBN:978-952-243-437-1

Moisala, U. E., Rahko, K., & Turpeinen, O. (1977). Puhelin ja puhelinlaitokset Suomessa 1877–1977 [Telephone and telephone companies in Finland 1877–1977]. Turku: Puhelinlaitosten Liitto ry.

Neuvonen, R. (2005). Sananvapaus, joukkoviestintä ja sääntely [Freedom of expression, media and regulation]. Helsinki: Talentum.

Neuvonen, R. (2014). Yksityisyyden suoja Suomessa [Privacy in Finland]. Helsinki: Lakimiesliiton kustannus.

Neuvonen, R. (2018). Sananvapauden historia Suomessa [The History of Freedom of Expression in Finland]. Helsinki: Gaudeamus

Nieminen, H. (2019). Inequality, social trust and the media: Towards citizens’ communication and information rights. In J. Trappel (Ed.), Digital media inequalities: Policies against divides, distrust and discrimination (pp. 43–66). Gothenburg: Nordicom. Available at https://norden.diva-portal.org/smash/get/diva2:1299036/FULLTEXT01.pdf#page=45

Nieminen, H. (2016). Communication and information rights in European media policy. In L. Kramp, N. Carpentier, A. Hepp, R. Kilborn, R. Kunelius, H. Nieminen, T. Olsson, T. Pruulmann-Vengerfeldt, I. Tomanić Trivundža, & S. Tosoni (Eds.), Politics, civil society and participation: media and communications in a transforming environment (pp. 41–52). Bremen: Edition lumière. Available at: http://www.researchingcommunication.eu/book11chapters/C03_NIEMINEN201516.pdf

Nieminen, H. (2014). A short history of the epistemic commons: Critical intellectuals, Europe and the small nations. Javnost – The Public, 21(3), 55–76. doi:10.1080/13183222.2014.11073413

Nieminen, H. (2013). European broadband regulation: The “broadband for all 2015” strategy in Finland. In M. Löblich & S. Pfaff-Rüdiger (Eds.), Communication and media policy in the era of the internet: Theories and processes (pp. 119–133). Munich: Nomos. doi:10.5771/9783845243214-119

Nieminen, H. (2010). The European public sphere and citizens’ communication rights. In I. Garcia-Blanco, S. Van Bauwel, & B. Cammaerts (Eds.), Media agoras: Democracy, diversity, and communication (pp. 16–44). Newcastle upon Tyne, UK: Cambridge Scholars Publishing.

Nousiainen, M. (2016). Osallistavan käänteen lyhyt historia [A brief history of a participatory turn]. In M. Nousiainen & K. Kulovaara (Eds.), Hallinnan ja osallistamisen politiikat [Governance and Inclusion Policies] (pp. 158-189). Jyväskylä: Jyväskylä University Press. Available at https://jyx.jyu.fi/bitstream/handle/123456789/50502/978-951-39-6613-3.pdf?sequence=1#page=159

OECD. (2018). OECD economic surveys: Finland 2018. Paris: OECD Publishing. doi:10.1787/eco_surveys-fin-2018-en

Padovani, C., & Calabrese, A. (Eds.) (2014). Communication Rights and Social Justice. Historical Accounts of Transnational Mobilizations. Cham: Springer / Palgrave Macmillan. doi:10.1057/9781137378309

Pelkonen, A. (2004). Questioning the Finnish model – Forms of public engagement in building the Finnish information society (Discussion Paper No. 5). London: STAGE. Retrieved November 14, 2018, from http://lincompany.kz/pdf/Finland/5_ICTFinlandcase_final2004.pdf

Pulkkinen, T. (1996). Snellmanin perintö suomalaisessa sananvapaudessa [Snellman’s legacy in Finnish freedom of speech]. In K. Nordenstreng (Ed.), Sananvapaus [Freedom of Expression] (pp. 194–208). Helsinki: WSOY.

Redeker, D., Gill, L., & Gasser, U. (2018). Towards digital constitutionalism? Mapping attempts to craft an Internet Bill of Rights. International Communication Gazette, 80(4), 302–319. doi:10.1177/1748048518757121

Reporters without Borders (2018). 2018 World Press Freedom Index. Retrieved February 28, 2019, from: https://rsf.org/en/ranking

Reunanen, E. (2018). Finland. In N. Newman, R. Fletcher, A. Kalogeropoulos, D. A. L. Levy, & R. K. Nielsen (Eds.), Reuters Institute digital news report 2018 (pp. 77–78). Oxford: Reuters Institute for the Study of Journalism.

Sajari, P. (2009). Lakia vahvempi Nokia [Nokia, stronger than the law]. Helsingin Sanomat.

Shahbaz, A. (2018). Freedom on the net 2018: The rise of digital authoritarianism. Washington, DC: Freedom House. Retrieved February 28, 2019, from https://freedomhouse.org/sites/default/files/FOTN_2018_Final%20Booklet_11_1_2018.pdf

Silberman, S. (1999, September). Just say Nokia. Wired Magazine.

Sims, M., Youell, T., & Womersley, R. (2015). Understanding spectrum liberalisation. Boca Raton, FL: CRC Press.

Statistics Finland. (2017). Väestön tieto- ja viestintätekniikan käyttö 2017 [Use of information and communications technology by the population 2017]. Helsinki: Official Statistics of Finland. Retrieved February 28, 2019, from https://www.stat.fi/til/sutivi/2017/13/sutivi_2017_13_2017-11-22_fi.pdf

Syvertsen, T., Enli, G., Mjøs, O., & Moe, H. (2014). The media welfare state: Nordic media in the digital era. Ann Arbor: University of Michigan Press.

Tene, O., & Polonetsky, J. (2013). A theory of creepy: Technology, privacy and shifting social norms. Yale Journal of Law & Technology, 16, 59–102. Available at: https://yjolt.org/theory-creepy-technology-privacy-and-shifting-social-norms

Tiilikka, P. (2007). Sananvapaus ja yksilön suoja: lehtiartikkelin aiheuttaman kärsimyksen korvaaminen [Freedom of speech and protection of the individual: compensation for suffering caused by a newspaper article]. Helsinki: WSOYpro.

Venturelli, S. (2002). Inventing e-regulation in the US, EU and East Asia: Conflicting social visions of the information society. Telematics and Informatics, 19(2), 69–90. doi:10.1016/S0736-5853(01)00007-7

Wallace, N., & Castro, D. (2017). The state of data innovation in the EU. Brussels & Washington, DC: Center for Data Innovation. Retrieved February 28, 2019, from http://www2.datainnovation.org/2017-data-innovation-eu.pdf

Wavre, V. (2018). Policy diffusion and telecommunications regulation. Cham: Springer / Palgrave Macmillan.

Wilhelmsson, N. (2017). Finland: eDemocracy adding value and venues for democracy. In eDemocracy and eParticipation: The precious first steps and the way forward (pp. 25–33). Retrieved February 28, 2019, from http://www.fnf-southeasteurope.org/wp-content/uploads/2017/11/eDemocracy_Final_new.pdf

Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. New York: Public Affairs.

Footnotes

1. The quest for more openness and publicity is a continuation of a long historical development. European modernity is fundamentally based on the assumption that knowledge and culture belong to the common domain and that the process of democratisation necessarily means removing restrictions on the epistemic commons. Aspects such as civic education, universal literacy, and mass media (newspapers; public service broadcasting as a tool for the daily interpretation of the world) are at the heart of this ideal. The epistemic commons reflects the core ideas and ideals of deliberative democracy: at the centre of this view is democratic will formation that is public and transparent, includes everyone and provides equal opportunities for participation, and results in rational consensus (Habermas, 2006). The epistemic commons is thought to facilitate such will formation.

Regulation through “bricking”: private ordering in the “Internet of Things”


Introduction

With the rapid expansion of internet-connected physical products with embedded software known as the “Internet of Things” (IoT), once-ordinary goods like watches or televisions have become what are colloquially termed “smart” devices. The IoT can be commonly understood as networks of always-on, internet-connected and software-enabled devices that collect, distribute, and act upon data through embedded sensors (see Meola, 2016). Because smart goods rely upon embedded software that regularly communicates with the manufacturers’ servers for instructions, a manufacturer-dependent relationship that some scholars characterise as “tethered” (Zittrain, 2008), these products are vulnerable to any interruption or manipulation of their software.

The susceptibility of smart products to disruption in the provision of software became widely apparent in 2016 when customers of the Revolv smart home system learned their products would suddenly become inoperable. The story started in 2014 when Google’s sister company Nest, which sells smart home systems, purchased Revolv, whose smart home hub enabled communication among light switches, garage door openers, motion sensors, and thermostats, and allowed users to program these devices and operate them remotely. In 2016, Nest decided to discontinue the Revolv hub in a blunt announcement: “As of May 16, Revolv service will no longer be available” (Lawson, 2016). All Revolv data was deleted and the one-year warranty expired for all Revolv products. A Revolv user described the consequences: “My landscape lighting will stop turning on and off, my security lights will stop reacting to motion, and my home-made vacation burglar deterrent will stop working” (Gilbert, 2016). Although the company offered its customers refunds, Nest remotely disabled functioning products without their owners’ consent by withdrawing access to the software services that enabled the hub to operate.

Nest’s actions demonstrate the vulnerability of internet-connected, software-enabled products to any interruption to the manufacturers’ provision of software updates. Through their software, IoT products remain connected or “tethered” (Zittrain, 2008) to their manufacturers, a characteristic that enables companies to wield significant post-purchase control over the software. The most extreme form of post-purchase control is “bricking”. Bricking typically describes an electronic device’s loss of functionality in which it is rendered permanently inoperable (see e.g., Techopedia, n.d.). In this article, bricking refers more narrowly to manufacturer-pushed software interruption or impairment that has the intention of negatively affecting product functionality. The Revolv case, an example of bricking, shows that those who control the products’ software can determine how their customers use the goods and even the products’ lifespan. By discontinuing software updates, which also contain essential security patches, or by pushing software updates that negatively affect product functionality, IoT manufacturers can cause IoT products to cease functioning properly, either immediately or over time. Control over software thus enables control over hardware.
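To make the mechanics of this tether concrete, consider the following minimal sketch in Python. Everything in it, including the endpoint and the “discontinue” action, is a hypothetical illustration rather than any vendor’s actual API; the point is simply that the channel delivering routine patches can also deliver the command that bricks the device.

    import json
    import time
    import urllib.request

    # Hypothetical update endpoint; no real manufacturer's service is implied.
    UPDATE_URL = "https://updates.example-vendor.com/firmware"

    def install_firmware(payload: str) -> None:
        # Stand-in for flashing a routine patch or feature change.
        print(f"installing update: {payload}")

    def disable_device() -> None:
        # Stand-in for the bricking path: the device is told to stop working.
        print("service discontinued; device disabled")
        raise SystemExit

    def check_in(device_id: str) -> dict:
        # Poll the manufacturer's server; the device has no veto over the reply.
        with urllib.request.urlopen(f"{UPDATE_URL}?id={device_id}") as resp:
            return json.load(resp)

    def main_loop(device_id: str) -> None:
        while True:
            instruction = check_in(device_id)
            if instruction.get("action") == "update":
                install_firmware(instruction["payload"])
            elif instruction.get("action") == "discontinue":
                disable_device()  # no consent step exists on the device side
            time.sleep(24 * 60 * 60)  # phone home daily, even when the product is idle

On this reading, “control over software enables control over hardware” is the observation that nothing in the loop above belongs to the customer.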

This article argues that IoT companies, particularly within the United States, are using bricking within a system of private ordering that is reshaping the governance of physical objects, as companies can alter the functionality of or brick any software-enabled, internet-connected device, typically without the consent or knowledge of their customers. The private ordering emerges from a distinctive legal and regulatory framework in which IoT companies use restrictive licensing agreements to govern the software within smart goods. There are clear benefits to the tethered relationship between IoT companies and smart goods as manufacturer-pushed software updates can be convenient for consumers and an efficient way to provide security patches and software upgrades to IoT goods. However, through companies’ post-purchase control over smart goods, IoT firms have an unfair capacity to impose their preferred policies unilaterally, automatically, and remotely.

To make its argument, the article draws upon the law and technology literature to explain bricking as a form of techno-regulation, which is the deliberate use of technology as a regulatory instrument (Brownsword, 2005; see also Hildebrandt, 2008) and an analysis of manufacturers’ licensing agreements for consumer-oriented smart products, particularly Nest, the fitness wearable Fitbit, and Samsung’s smart television. By interrupting or manipulating the provision of software to smart goods, IoT companies are regulating through “code” (Lessig, 2006; see Reidenberg, 1997).

With the incorporation of software into all manner of consumer-oriented objects, 1 how the Internet of Things is governed, by whom, and with what consequences are issues of growing importance. Fitness wearables and household items like Amazon’s Echo products may be foremost in people’s minds when thinking about the Internet of Things, but it is important to recognise the wide variety of devices and systems reliant upon internet-connected, software-enabled products. Smart cities, for instance, are characterised by networks of sensors attached to real-world objects embedded in the urban environment that enable real-time data collection, streaming, and analysis to deliver services, and integrate information and physical infrastructure (Edwards, 2016, p. 31; see also Kitchin, 2014). Although this article examines the governance of IoT products as a form of private ordering with a focus on consumer-oriented smart products, its argument has broader relevance to the control that manufacturers can impose over all manner of internet-connected, software-enabled goods. As well, the data-intensive nature of the IoT raises serious challenges for consumers’ privacy, a problem that is further exacerbated by companies’ post-purchase control of internet-connected goods.

While there is a growing scholarly literature on the Internet of Things, particularly examining security and privacy risks (see e.g., DeNardis & Raymond, 2017; Friedland, 2017), few studies consider the ways that IoT manufacturers employ their newly expanded capacity to set rules governing the use and lifespan of these products, even after purchase (notable exceptions are Fairfield, 2017; Perzanowski & Schultz, 2016). Bricking, moreover, is critically under-examined in the scholarly literature, despite multiple cases documented in technology-focused news websites like TechDirt and Wired (see e.g., Wiens, 2016).

The rest of this article is organised as follows. First, it establishes bricking as a type of techno-regulation and sets out how IoT companies’ regulatory efforts operate as private ordering. Next, the article explains the governance of the Internet of Things through licensing agreements. The article then explores how IoT companies exert post-purchase control over their smart goods with bricking as the paradigmatic example, and considers the implications of post-purchase control on consumer consent and privacy. The article then provides a conclusion.

Regulating through technology

This article understands technology as being imbued by its creators with particular norms, rules and values (Brey, 2005; Franklin, 1995), a socially constructed view of technology that aligns with the law and technology literature (see Brownsword, 2005). Technology, from this perspective, may be designed to facilitate certain types of use, and, inadvertently or deliberately, discourage or prevent others (Brey, 2005; Hildebrandt, 2008). Rules are designed and implemented through architecture or code (see Lessig, 2006; Reidenberg, 1997), such as manufacturers’ deliberate changes to the smart goods’ software. Techno-regulation, a concept rooted in the law and technology literature, explains how technology can be employed as a regulatory instrument. Techno-regulation refers to the “deliberate employment of technology to regulate human behaviour” (Leenes, 2011, p. 149; see Brownsword, 2005). It is a type of design-based regulation in which “technology with intentionally built-in mechanisms” shapes behaviour (Koops et al., 2006, p. 158, as cited in Leenes, 2011, p. 149).

Designers may incorporate features into technologies that encourage compliance (termed “regulative” rules) or that force compliance (“constitutive” rules) (Hildebrandt, 2008). A vehicle’s beeping to remind people to fasten their seatbelts encourages compliance, while most ATMs require users to withdraw their card before their cash is issued, an anti-theft design that forces compliance. Bricking is a type of constitutive technological regulation in which users typically have no option to resist the regulatory outcome: corporations render smart goods remotely and automatically non-functional. Consumers’ primary option is to avoid purchasing smart devices where possible.
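The distinction can be illustrated with a short, deliberately simplified sketch. Both functions below are hypothetical and exist only to show that a regulative rule leaves the non-conforming path open, whereas a constitutive rule removes it from the space of possible actions.

    def seatbelt_reminder(belt_fastened: bool) -> str:
        # Regulative rule: the system nudges, but driving proceeds either way.
        if not belt_fastened:
            print("beep: please fasten your seatbelt")
        return "driving"

    def atm_dispense(card_removed: bool, amount: int) -> int:
        # Constitutive rule: cash cannot be issued until the card is withdrawn;
        # the non-conforming option is designed out rather than discouraged.
        if not card_removed:
            raise RuntimeError("remove card before cash is dispensed")
        return amount

Bricking belongs to the second pattern: once the manufacturer disables a device, there is no equivalent of driving on without the seatbelt.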

Understanding bricking as a form of constitutive techno-regulation requires situating the creation and use of technologies within existing laws and regulations (Mueller et al., 2012, p. 350). Keymolen & Van der Hof (2019, p. 5) employ the term “codification” to describe the legal frameworks and regulatory requirements with which IoT companies must comply, as well as the companies’ own systems of rules that govern their products. Legal and regulatory environments may vary, for example, with regard to consumer protection provisions. Similarly, wording drafted for one jurisdiction, such as the United States, may be used in another, like the European Union, even though the language may not be suitable, even reproducing “verbatim the contractual wording of the original US source” (Noto La Diega & Walden, 2016, p. 3; see also Manwaring, 2017, p. 286). This article focuses on companies’ post-purchase regulation of consumer-oriented IoT goods within the United States.

Manufacturers’ governance of consumer-oriented IoT goods constitutes a system of private ordering (see Schwarcz, 2002) that relies upon privately drafted licensing agreements. By granting themselves the latitude to restrict or terminate service to the devices’ software at any time, IoT companies wield a quasi-legislative power to set rules for their users and a quasi-executive power to enforce those rules through technical means (Belli & Venturini, 2016, p. 4; see Langenderfer, 2009).

Unlike the interpretive nature of law, rules embedded within and enforced using technology are less transparent and often more rigid (Koops, 2011, p. 4; Brownsword, 2008). Technology-embedded rules can force individuals to comply with the rules, effectively designing “out any option of non-conforming behaviour” (Brownsword, 2008, p. 247; see Koops, 2011, p. 4). Consumers can either choose to accept the rules set out or decide not to use the IoT products. IoT companies’ licensing agreements should therefore be understood as the “law of the platform” (Belli et al., 2017, p. 44), according the company the sole regulatory capacity to set, interpret, and enforce its rules. Once companies decide to downgrade or destroy device functionality, consumers may have few avenues for resistance beyond purchasing non-connected goods where similar goods are available (see e.g., Helberger, 2016).

Regulation through constitutive rules embedded within technology evokes scholarship comparing governance through law and “code” (Lessig, 2006; see Reidenberg, 1997). Schulz and Dankert (2016) contend that the software driving smart goods’ operation, which can shape or direct human behaviour, is a form of constitutive regulation they term “Governance by Things”. While their focus is on rules embedded within software that ensure normal product functioning, this article’s focus on bricking investigates IoT companies’ efforts to further their post-purchase control over smart goods by manipulating the provision of software. In doing so, companies can force customers to accept certain product features, determine how goods are used, and even determine products’ lifespan. Italy’s competition authority, for example, fined Apple and Samsung €5 million each in 2018 after ruling that the companies deliberately slowed their phones’ operating systems, which constituted “dishonest commercial practices” (Gibbs, 2018). While planned obsolescence is not unique to the Internet of Things, the tethered nature of IoT goods to their manufacturers facilitates companies’ control over product functionality (see Aladeojebi, 2013).

As the next section explores, consumers’ interaction with smart goods depends upon “rules established by an external authority” (Perzanowski & Schultz, 2016, p. 122), namely IoT makers. By embedding their rules and policies within technology (in this case, the product’s software), companies can restrict how consumers may use smart goods.

Governing the Internet of Things

IoT companies derive the legal authority to regulate their software-enabled goods from agreements attached to each product that govern the embedded software. End-user licensing agreements (EULAs), often called software licenses, are legal contracts that set out the conditions under which users can use the software and outline penalties for violation (see Langenderfer, 2009; Perzanowski & Schultz, 2016). 2 Some EULAs also set out terms governing issues like copyright ownership and penalties for violation, and the collection and use of customers’ data. In the United States, IoT companies have particular latitude to set rules restricting consumers’ use of smart goods within EULAs (see Langenderfer, 2009), whereas other jurisdictions may place limitations on the scope or use of EULAs.

A traditional understanding of contracts refers to an agreed-upon transaction between two consenting parties, but modern contracts depart from this conception with the increasing use of “click-wrap” contracts on websites to which users indicate their adherence by clicking “I agree” (Radin, 2012, pp. 3 & 11). Users may have to click through multiple webpages to review all the terms or, in some jurisdictions, may only be able to review conditions after purchase. Consumers need not even signal assent, as companies often include a clause that continued use of the product constitutes acceptance of the agreement. “Your continued use of the Product,” Nest tells its customers, “is your agreement to this EULA” (Nest, n.d.). Consumers’ bargaining power is therefore limited, and companies’ capacity to set and interpret rules unilaterally means that the rules set within those agreements become the “law of the platform” (Belli et al., 2017, p. 44). In its EULA, Nest informs its customers that software updates are automatically installed without notice: “You consent to this automatic update. If you do not want such Updates, your remedy is to stop using the Product” (Nest, n.d.).

Even when users have the option of either clicking “I accept” or “I do not accept,” the assumption is that individuals are providing informed consent to the agreement. However, people tend not to read corporate policies (Obar & Oeldorf-Hirsch, 2018) and may not even be aware of the rules that govern their use of IoT products (see Helberger, 2016; Manwaring, 2017). Further, companies have considerable latitude in crafting their policies and reserve the right to change the terms of their licensing agreements without notice to the user (see Tusikov, 2016). Consumers can decline contracts with onerous conditions, if they are aware of them, or they can switch to providers with more favourable conditions, if a suitable alternative exists. However, switching contracts can impose search and switching costs, and rival companies can add or amend conditions in the same arbitrary way (see Horton, 2010, p. 609).

Within their EULAs, companies grant themselves the right to restrict and sanction unwanted behaviour regarding their products and services. IoT manufacturers typically include a clause that gives them the right to terminate users’ access to or disable the product itself. Fitbit tells users: “We reserve the right (but are not required) to remove or disable access to the Fitbit Service, any Fitbit Content, or Your Content at any time and without notice, and at our sole discretion” (Fitbit, 2018). Even if the behaviour in question is legal, companies have the discretion to terminate users’ access to or disable the product.

Post-purchase regulation

IoT companies’ private ordering relies upon pervasive surveillance because, for manufacturers of IoT goods, surveillance is a business model (Schneier, 2013) and a regulatory mechanism (see Tusikov, 2019). Monitoring performs two interrelated functions: data collection and processing to enable the operation of IoT products, and customer/device monitoring to detect violations of the licensing agreements.

Smart goods’ proper functioning depends on their continual monitoring of their users and environments (see Farkas, 2017). Data-intensive products are features of the “sensor society” in which corporate infrastructures facilitate the mass-scale collection, storage, and processing of sensor-generated data from interactive, networked devices (Andrejevic & Burdon, 2015, p. 21). The sensor society, or what others term “data capitalism” (West, 2017), “surveillance capitalism” (Zuboff, 2015, 2019) and “platform capitalism” (Srnicek, 2017), accords importance to the control over information, particularly the mass accumulation, storage and processing of data with the goal of sorting populations and discerning patterns in data (Zuboff, 2015, 2019). Implicit within the IoT, then, is the normalisation of pervasive corporate surveillance of smart products and their users (see Andrejevic & Burdon, 2015; Friedland, 2017). An important aspect of IoT surveillance is the intensity of IoT devices’ communication with their servers: products can communicate daily or multiple times a day even when the products are not in use (Hill & Mattu, 2018).

The second aspect of surveillance involves monitoring technologies that track and control how individuals use certain products in order to identify unwanted behaviour, a common feature of techno-regulation (see Brownsword, 2008). IoT devices’ tethered relationship to their manufacturers facilitates ubiquitous corporate surveillance (see Graber, 2015, p. 391), thereby providing companies with the capacity to police their customers for violations of the licensing agreements in what Zittrain (2008, p. 136) terms “perfect enforcement”. For example, when someone uses an internet-connected product, depending on the product type, the software may collect information to authenticate the user or activity, or may scan the device for potential violations of the licensing agreement (see Perzanowski & Schultz, 2016). Samsung, for example, informs its customers that it may “monitor your use of the Samsung+ Service” and “your accounts, content, and communications” to identify any violations of its policies regarding its smart television services (Samsung, 2018).
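A device-side compliance scan of this kind can be sketched in a few lines of Python. The forbidden states below are hypothetical examples rather than any company’s actual policy, and the report travels over the same always-on tether described above.

    # Hypothetical licence terms a device-side scan might check for.
    FORBIDDEN_STATES = {
        "modified_firmware",      # user altered the embedded software
        "unauthorised_repair",    # non-authorised service part detected
        "third_party_accessory",  # unapproved peripheral attached
    }

    def scan_device(device_state: set) -> set:
        # Return the subset of observed states that violate the licence.
        return device_state & FORBIDDEN_STATES

    def report(device_id: str, device_state: set) -> None:
        # Reporting over the update channel is what makes enforcement at this
        # scale and speed feasible, approaching Zittrain's "perfect enforcement".
        violations = scan_device(device_state)
        if violations:
            print(f"{device_id}: reporting {sorted(violations)} to vendor")

    # Example: a device with an unapproved accessory is flagged automatically.
    report("device-42", {"third_party_accessory", "normal_operation"})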

Bricking

IoT companies have the capacity to change products’ functionality as these companies can install software updates automatically without users’ consent or notification. According to Fitbit: “We reserve the right to determine the timing and content of software updates, which may be automatically downloaded and installed by Fitbit products without prior notice to you” (Fitbit, 2018). Customers can agree to the manufacturers’ terms, discontinue use of the product or, in some cases, accept decreased device functionality. For instance, the smart-speaker company Sonos announced in 2017 that if users declined to accept an updated privacy policy, their smart sound systems may “cease to function” (Whittaker, 2017).

Bricking, the most extreme form of post-purchase control, emerged in the early 2000s in the United States with the advent of consumer-oriented goods with embedded software. One of the earliest cases involved TiVo, which introduced the first digital-video recorder and sued the EchoStar satellite television distributor in 2004 for patent infringement (Zittrain, 2008, p. 103). A Texas court ordered EchoStar in 2006 to disable the functionality of all recorders already owned by users (Zittrain, 2008, p. 103). 3 Since this case, bricking has occurred in a variety of contexts and, unlike in EchoStar, often without court rulings.

Bricking devices can be an effective, appropriately rapid practice for products that are dangerously defective or pose a public health or safety risk, especially given the challenges of implementing wide-scale product recalls. In the summer of 2016, for example, Samsung launched the Galaxy Note 7, but customers reported that phones were overheating, catching fire, and even exploding because of faulty battery design. By mid-September, the US Consumer Product Safety Commission issued a formal nationwide recall and Samsung issued a voluntary recall that returned over 90 percent of affected phones (Samsung, 2016). To reach and disable the remaining phones, Samsung bricked them by releasing a software update “that prevent[ed] US Galaxy Note 7 devices from charging and eliminate[d] their ability to work as mobile devices” (Samsung, 2016). Once the phones received this update, they ceased to function.

While bricking dangerously substandard or harmful products can be a useful regulatory practice, it is problematic for companies to disable still-functional devices, especially when it is done to further business interests by changing a business model or product line. Traditionally, when a company discontinued a product, consumers could still use functional goods. With smart products, however, when companies cancel a product line or merge business divisions, they may brick existing devices. After Fitbit acquired the Pebble smart watch in December 2016, for example, it announced that it would cease providing software updates to Pebble, a case similar to Nest’s bricking of the Revolv smart home system. After a transition period, Fitbit officially ended its software support for Pebble in June 2018 and encouraged Pebble users to adopt Fitbit’s products and operating system (Fitbit, 2018a).

The cases of bricked devices discussed above underscore the intertwined nature of hardware and software components within the IoT and, particularly, the reliance upon cloud software (see Gürses & van Hoboken, 2018). Smart products’ dependence on cloud software services for software updates, as well as data storage, transmission, and analytics means that they are highly susceptible to any software disruption and thus highly regulable by IoT manufacturers. Further, as IoT devices can be networked with each other, such as smart home hubs that bring together home security systems, thermostats, door locks, lights and carbon monoxide detectors, one bricked product can “become a missing link in a larger system” (Lawson, 2016). While bricking smart toys, televisions or fitness wearables may only cause users inconvenience, if companies brick smart smoke alarms, carbon monoxide detectors, or home temperature-control systems, people relying upon these systems may be injured or even killed.

Assessing post-purchase regulation

Smart goods’ continual linkage to their manufacturers can provide benefits to both consumers and companies. Easily programmable IoT products can enable consumers to remotely operate certain devices, such as controlling security systems or door locks to permit deliveries or monitor the comings and goings of household inhabitants. A particular advantage is security, as automatic software updates can be a convenient, efficient way to ensure that products receive necessary security upgrades, since customers may not reliably install updates themselves (see e.g., Gürses & van Hoboken, 2018). Tethered relationships can also function as “trusted systems” in which “authenticated devices and platforms” deliver particular content or services to users (Graber, 2015, p. 391). Trusted systems, such as Amazon’s Echo product line, sell the promise of interoperability and safety to consumers, while enabling companies to retain tight control over the software and hardware (Graber, 2015, p. 391). From a manufacturer’s perspective, monitoring how customers use IoT goods is necessary, for example, to ensure the products’ software is not infected with malware or to verify that only authorised service providers repair the products (see Brass et al., 2017).

Unlike traditional unconnected products, IoT companies may have the capacity to improve smart goods’ functionality rapidly and remotely in ways that can benefit their customers. For example, with the approach of Hurricane Irma to the United States in 2017, Tesla remotely upgraded the battery capacity of Tesla vehicles in Florida, without cost to the owners, in order to enable the vehicles to travel greater distances without recharging as part of evacuation efforts (Westbrook, 2017). This free extended battery capacity expired several weeks later unless customers purchased the upgrade.

While these benefits are important, the drawbacks to IoT manufacturers’ post-purchase control can be significant as IoT firms can exploit the tethered nature of IoT goods to unilaterally impose their preferred policies without the consent or knowledge of their customers. Companies’ EULAs describe how customers may access and use IoT goods, as well as how the goods may collect and distribute data from customers. IoT companies characterise this data collection as voluntary since users consent, expressly or implicitly, to the monitoring (Friedland, 2017, p. 898). Consumers, however, may not understand the nature or extent of the data collection (see Obar & Oeldorf-Hirsch, 2018). Further, people may not feel that they have a choice in opting out of certain services or products because in order to “access essential technologies, relinquishing control over their personal data is the price they must pay” (Crawford et al., 2014, p. 1670). For example, if landlords install and require tenants to use smart locks, as was the case with a New York City apartment manager, landlords can monitor tenants’ visitors and household comings and goings (Ng, 2019).

One of the most serious drawbacks for consumers with regard to post-purchase regulation is manufacturer-imposed restrictions on modifying, refurbishing, or repairing IoT products, a dispute that has given rise to the “right-to-repair” movement. The right to repair is the “freedom to understand, discuss, repair, and modify the technological devices you own” (Felton, 2013, cited in Samuelson, 2016, p. 565). In the United States, where the right-to-repair debate is prominent, 20 states have bills before state legislatures covering a broad range of goods with embedded software, from cell phones and common household appliances to farm equipment (Proctor, 2019). The right-to-repair movement argues that consumers should have access to manufacturers’ diagnostic software, repair manuals, and service parts, and the ability to choose whether they patronise independent repair shops or those authorised by the IoT company. Many farmers in the United States are vocal proponents of repairing their tractors themselves or patronising independent repair shops because of the high cost of hauling tractors from rural properties to manufacturer-authorised repair shops (see, e.g., Carolan, 2017). The agricultural equipment manufacturer John Deere and other companies like Apple have been active opponents of the right to repair in the United States, where they have restricted access to product service manuals and diagnostic software, which are essential items for fixing often-complex software-enabled, internet-connected goods (see Raymond, 2014).

Like those of other US companies, John Deere’s licensing agreements prohibit its customers from any modification or repair that copies or alters the product’s software (see John Deere, 2016, p. 1). In addition to the legal authority of EULAs, IoT companies can also draw upon copyright law, which protects the software embedded within smart goods and grants copyright owners, typically the IoT manufacturers, the right to set rules relating to the use of software, such as whether the software can be copied or modified (see Perzanowski & Schultz, 2016). IoT companies in the United States employ a particular feature of copyright law, digital rights management, which is a broad set of policies that, among other things, establish the terms of use for the copyrighted content (Kerr, 2007, p. 6). Repair work that violates a manufacturer’s prohibition, set within digital rights management policies, on modifying the smart product’s software could constitute copyright infringement in the United States (see Perzanowski & Schultz, 2016). While John Deere may not prosecute farmers for copyright infringement over repairs that alter the tractors’ software systems, that possibility, along with the potential loss of the tractor’s warranty for violating the company’s licensing agreement, enables the company to impose significant post-purchase restrictions.

Through post-purchase control of software-enabled goods, IoT manufacturers can impose policies that push their customers to purchase the company’s branded supplies over the often-cheaper alternatives provided by third parties. Even before tethered goods, companies encouraged or pressured customers to purchase branded parts or patronise authorised suppliers. However, through the software linkage between manufacturer and IoT product, IoT companies can “hardwir[e] restrictions on consumer behaviour into our devices” (Perzanowski & Schultz, 2016, p. 123). The coffee maker Keurig, for example, has built digital locks, a form of digital rights management, into its branded coffee pods, which Keurig machines authenticate as genuine while rejecting third-party coffee pods (Barrett, 2015). From coffee makers, juicers, and cat litter trays to printer cartridges, a broad range of companies use digital rights management, paired with restrictive licensing agreements, to pressure their customers to purchase authorised supplies. Consumers purchasing smart goods thus risk being locked into a manufacturer’s proprietary ecosystem where they can find it difficult “to switch to alternative platforms, equipment or services” (Graber, 2015, p. 391). Such anti-competitive practices, which companies reinforce with the threat that non-authorised parts may violate the licensing agreements, raise concerns of monopolistic behaviour that can negatively affect consumers and harm businesses (see Samuelson, 2016).
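One common way such a digital lock can work, sketched here under the assumption of a simple shared-key scheme, is for authorised consumables to carry a cryptographic tag that third-party suppliers cannot produce. This is a generic illustration, not a description of Keurig’s actual mechanism.

    import hashlib
    import hmac

    # Hypothetical key shared only with authorised consumable producers.
    VENDOR_KEY = b"secret-shared-with-authorised-pods-only"

    def pod_tag(pod_serial: bytes, key: bytes) -> bytes:
        # Authorised pods carry a tag derived from a key third parties lack.
        return hmac.new(key, pod_serial, hashlib.sha256).digest()

    def machine_accepts(pod_serial: bytes, tag: bytes) -> bool:
        # The machine recomputes the tag; pods without a valid tag are rejected,
        # steering customers toward the vendor's branded supplies.
        return hmac.compare_digest(tag, pod_tag(pod_serial, VENDOR_KEY))

    # Example: a genuine pod authenticates, an unlicensed one does not.
    genuine_tag = pod_tag(b"POD-001", VENDOR_KEY)
    assert machine_accepts(b"POD-001", genuine_tag)
    assert not machine_accepts(b"POD-002", b"\x00" * 32)

The design choice matters for governance: the lock enforces the licensing agreement automatically at the point of use, leaving the consumer no room to argue a contrary interpretation.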

Corporate surveillance is an integral feature of IoT companies’ post-purchase regulation of smart goods. Purchasing a software-enabled product can be “only the beginning of an intimate and potentially long and dynamic relationship” with the IoT company, along with a potential network of third-party software suppliers, “that profile consumers’ behaviour and target them with personalised services” (Helberger, 2016, p. 5; see also Gürses & van Hoboken, 2018). While the data collected on users by smart products may, at least initially, appear innocuous, some products collect significant amounts of users’ data over long time periods, from sensitive household locations like bedrooms, and from vulnerable persons, including children. Smart televisions, for instance, capture data relating to users’ viewing habits and content preferences and, in doing so, “can provide very detailed and sensitive insights into what users think, know, and believe” (Irion & Helberger, 2017, p. 170). Further, given the complexity of some IoT devices, more than one corporate actor may be involved in the collection or processing of data, but consumers may be unaware of their involvement or data flows between companies (see Helberger, 2016; Keymolen & Van der Hof, 2019). In order to operate Mattel’s interactive Hello Barbie toy, for example, parents must download an app and set up an account from the company ToyTalk that developed the doll’s speech-recognition technology (Keymolen & Van der Hof, 2019, p. 7).

IoT devices are designed to accumulate, process, and distribute data as they operate through “always-on, ubiquitous, opportunistic ever-expanding forms of data capture” (Andrejevic & Burdon, 2015, p. 19). IoT companies employ expansive data collection practices in order to operate existing IoT products and develop new products or services. Data collection is thus a speculative activity as the value or use of some data only becomes clear in the future, which poses significant challenges to people’s capacity to provide consent to specific data collection practices. Additional challenges are that IoT companies may change their data collection practices without advance notice to users and people may not always be aware of the devices collecting their data (see DeNardis & Raymond, 2017). For example, one individual may install IoT products in the home without informing other family members in order to exert control and intimidate, a common practice in cases of technologically facilitated domestic violence (see Douglas et al., 2019). People can choose not to purchase IoT goods or, where possible, to turn off smart features if these are not integral to the product’s functionality. However, in certain industry sectors there is an “increasing ‘erosion of choice’” for individuals who prefer non-smart goods as these objects may not be available in the marketplace (Office of the Privacy Commissioner of Canada, 2016, p. 21; see also Manwaring, 2017).

With the increase of smart devices in urban environments, such as those tracking commuters through transit systems, people are also monitored as they move through cities, although these surveillance systems are often largely invisible (see Urquhart & Luger, 2015). Given the pervasiveness of these sensors within many urban environments, people concerned about surveillance may not be able to avoid tracking or opt out of essential services like transportation (see Edwards, 2016; Monahan, 2017).

Conclusion

IoT companies’ use of licensing agreements constitutes a form of private ordering in which the companies exercise power through their control over software. The tethered nature of these goods makes them highly regulable (Zittrain, 2008), and enables companies to change the terms of use after purchase, or even alter product functionality, which constitutes a powerful form of post-purchase constitutive regulation. Control over smart goods rests with those who control the products’ all-important software. Companies can remotely interrupt or manipulate the provision of software updates to IoT goods in order to affect product functionality, and they can do so without the consent or knowledge of their customers. In doing so, IoT companies are fundamentally changing the governance of software-enabled physical objects.

Tethered goods subject to manufacturer-pushed software changes may provide certain benefits. Companies may be able to address security vulnerabilities, software problems, or even device malfunction remotely and rapidly. Consumers may decide that manufacturer-imposed restrictions set within licensing agreements, like those prohibiting unauthorised individuals from repairing the goods, are acceptable. In contrast to farmers fighting for the right to repair their tractors, not everyone has the drive, ability, or interest to tinker with or repair IoT devices. In short, some consumers may decide that for certain products a licensing model is acceptable in that it provides consumers the ability to use the product under specific conditions set by the manufacturer. Under a licensing model, instead of buying a smart television or smart home security system outright, consumers purchase the use of the television and security system as software-enabled services (see Perzanowski & Schultz, 2016).

A key problem with the licensing model is that consumers do not fully understand the differences between smart and traditionally unconnected products, or the effects of manufacturer-imposed restrictions on IoT products (see Consumers International, 2017; Perzanowski & Schultz, 2016). The US Federal Trade Commission, commenting on consumer misunderstanding in this area, reported that it is unclear whether IoT manufacturers are selling hardware (device), software (service), or both, and it is also unclear whether consumers understand what they are purchasing (Rich, 2016). When hardware and software are interconnected, as they are in IoT devices, consumers are essentially purchasing the hardware outright, but only buying access to the use of the software as defined by the licensing agreement that sets out the manufacturers’ restrictions. Software is integral to the full functionality of IoT goods, although some products may still operate, albeit without their smart features, if their software is damaged or disabled. As consumers purchase only the product hardware, while product software remains under the control of manufacturers, ownership of IoT goods can thus be understood as “hybrid” (Keymolen & Van der Hof, 2019, p. 8). Consumers’ purchase of IoT goods is a “precarious” form of ownership subject to the discretion of IoT makers who can arbitrarily change conditions after purchase (Tusikov, 2019).

IoT companies’ capacity to monitor their users and control the provision of software means that they can enforce their rules at a scale and speed that was previously unfeasible. Bricking, the most extreme form of post-purchase control, underscores companies’ capacity to impose their preferred policies unilaterally, automatically, and remotely. Bricking can be an appropriately rapid and effective regulatory practice in cases where products pose significant health and safety risks. However, companies’ use of bricking to facilitate commercial interests, such as acquiring or discontinuing a product line, or instituting rules that preference the company’s supplies or repair services over those of competitors, raises concerns of anti-competitive behaviour.

While the US Federal Trade Commission did not recommend enforcement action against Nest relating to the bricking of the Revolv hub, the agency warned Nest that its “unilaterally rendering the devices inoperable” could have constituted an “unjustified, substantial consumer injury” that consumers could not “reasonably avoid” (Engle, 2016). IoT companies’ practice of deliberately rendering functional IoT devices inoperable without the consent of their customers would appear to violate consumer protection principles regarding misleading marketing practices, unfair contract terms, and denying consumers access to sufficient information for making informed choices about IoT goods (see Manwaring, 2017, p. 268; see also Helberger, 2016). Companies’ post-purchase control over IoT goods raises particular challenges in relation to consumer choice as, for example, companies can unilaterally impose restrictions on consumers’ use of the product, modify the products’ software after sale, and require customers to purchase cloud processing services in order to operate the goods without providing consumers sufficient information about these conditions beforehand (see Manwaring, 2017).

This article’s examination of post-purchase regulation highlights the need for further research exploring varieties of post-purchase control across different countries and legal jurisdictions, and examining the diverse array of consumer-oriented IoT products. As well, the benefits and drawbacks of post-purchase control should be investigated more fully, with particular attention to consumer choice and privacy, implications from companies’ creation of proprietary ecosystems, and ever-expanding forms of data capture from always-connected devices. Research is also needed to consider the nature and degree of post-purchase control within the industrial Internet of Things, as well as the implications, particularly in terms of security, for smart cities when companies supplying services for critical systems like energy, water, or transport retain control over the software operating the hardware.

References

Aladeojebi, T. K. (2013). Planned Obsolescence. International Journal of Scientific & Engineering Research, 4(6), 1504-1508. Available at https://pdfs.semanticscholar.org/7b94/a236e2bbb9817a10e23428acaa821a724fd0.pdf

Andrejevic, M., & Burdon, M. (2015). Defining the sensor society. Television & New Media, 16(1), 19-36. doi:10.1177/1527476414541552

Barrett, B. (2015, May 8). Keurig’s My K-Cup Retreat Shows We Can Beat DRM. Wired. Retrieved from https://www.wired.com/2015/05/keurig-k-cup-drm/

Belli, L., & Venturini, J. (2016). Private ordering and the rise of terms of service as cyberregulation. Internet Policy Review, 5(4). doi:10.14763/2016.4.441

Belli, L., Francisco, P. A., & Zingales, N. (2017). Law of the Land or Law of the Platform: Beware of the Privatisation of Regulation and Police. In L. Belli & N. Zingales (Eds.), Platform Regulations: How Platforms are Regulated and How They Regulate Us (pp. 41-64). Retrieved from http://bibliotecadigital.fgv.br/dspace/handle/10438/19402

Brass, I., Carr, M., Tanczer, L., Maple, C., & Blackstock, J. (2017). Unbundling the Emerging Cyber-Physical Risks in Connected and Autonomous Vehicles. In S. Appt & N. Livesey (Eds.), Connected and Autonomous Vehicles: The emerging legal challenges (pp. 8-9). London: Pinsent Masons LLP.

Brey, P. (2005). Artifacts as Social Agents. In H. Harbers (Ed.), Inside the Politics of Technology: Agency and Normativity in the Co-Production of Technology and Society (pp. 61-84). Amsterdam: Amsterdam University Press.

Brownsword, R. (2005). Code, control and choice: why East is East and West is West. Legal Studies, 25(1), 1-21. doi:10.1111/j.1748-121X.2005.tb00268.x

Brownsword, R. (2008). Rights, Regulation, and the Technological Revolution. New York, NY: Oxford University Press.

Carolan, M. (2017). ‘Smart’ Farming Techniques as Political Ontology: Access, Sovereignty and the Performance of Neoliberal and Not-So-Neoliberal Worlds. Sociologia Ruralis, 58(4), 745-764. doi:10.1111/soru.12202

Consumers International. (2017). Testing Our Trust: Consumers and the Internet of Things 2017 Review. Retrieved from https://www.consumersinternational.org/media/154746/iot2017review-2nded.pdf

Crawford, K., Miltner, K., & Gray, M. L. (2014). Critiquing Big Data: Politics, Ethics, Epistemology. International Journal of Communication, 8, 1663-1672. Retrieved from https://ijoc.org/index.php/ijoc/article/view/2167/1164

DeNardis, L., & Raymond, M. (2017). The Internet of Things as a Global Policy Frontier. University of California, Davis Law Review, 51, 475-497. Retrieved from https://lawreview.law.ucdavis.edu/issues/51/2/Symposium/51-2_DeNardis_Raymond.pdf

Douglas, H., Harris, B. A., & Dragiewicz, M. (2019). Technology-Facilitated Domestic and Family Violence: Women’s Experiences. The British Journal of Criminology, 59(3), 551-570. doi:10.1093/bjc/azy068

Edwards, L. (2016). Privacy, Security and Data Protection in Smart Cities: A Critical EU Law Perspective. European Data Protection Law Review, 2, 28-58.

Engle, M.K. (2016). Letter to Richard J. Lutton, Jr., Head of Legal and Regulatory Affairs, Nest Labs, Inc. from Mary K. Engle, Associate Director for Advertising Practices, Federal Trade Commission. Retrieved from https://www.ftc.gov/system/files/documents/closing_letters/nid/160707nestrevolvletter.pdf

Fairfield, J.A.T. (2017). Owned: Property, Privacy, and the New Digital Serfdom. Cambridge: Cambridge University Press.

Farkas, T.J. (2017). Data created by the Internet of Things: The new gold without ownership? Revista La Propiedad Inmaterial, (23), 5-17. doi:10.18601/16571959.n23.01

Fitbit. (2018, September 18). Fitbit Terms of Service. Retrieved from https://www.fitbit.com/legal/terms-of-service

Fitbit. (2018a, January 24). Showing Pebblers Love with Longer Device Support [Blog post]. Retrieved from https://dev.fitbit.com/blog/2018-01-24-pebble-support/

Franklin, S. (1995). Science as Culture, Cultures of Science. Annual Review of Anthropology, 24, 163-184. doi:10.1146/annurev.an.24.100195.001115

Friedland, S.I. (2017). Drinking From the Fire Hose: How Massive Self-Surveillance from the Internet of Things are Changing Constitutional Privacy. West Virginia Law Review, 119(3), 891-913. Retrieved from https://researchrepository.wvu.edu/wvlr/vol119/iss3/5

Gibbs, S. (2018, October 24). Apple and Samsung fined for deliberately slowing down phones. The Guardian. Retrieved from https://www.theguardian.com/technology/2018/oct/24/apple-samsung-fined-for-slowing-down-phones

Gilbert, A. (2016, April 3). The time that Tony Fadell sold me a container of hummus. Retrieved from https://arlogilbert.com/the-time-that-tony-fadell-sold-me-a-container-of-hummus-cb0941c762c1

Graber, C. B. (2015). Tethered technologies, cloud strategies and the future of the first sale/exhaustion defence in copyright law. Queen Mary Journal of Intellectual Property, 5(4), 389-408.

Gürses, S., & van Hoboken, J. V. J. (2018). Privacy after the Agile Turn. In J. Polonetsky, O. Tene, & E. Selinger (Eds.), Cambridge Handbook of Consumer Privacy (pp. 579-601). Cambridge: Cambridge University Press.

Helberger, N. (2016). Profiling and targeting in the Internet of Things – A new challenge for consumer protection. In R. Schulze, & D. Staudenmayer (Eds.), Digital Revolution (pp. 135-161). Baden-Baden: Nomos Verlag.

Hildebrandt, M. (2008). Legal and Technological Normativity: more (and less) than twin sisters. Techné: Research in Philosophy and Technology, 12(3), 169-183. doi:10.5840/techne20081232 Available at http://works.bepress.com/mireille_hildebrandt/13.

Hill, K., & Mattu, S. (2018, February 7). The House that Spied on Me. Gizmodo. Retrieved from https://gizmodo.com/the-house-that-spied-on-me-1822429852

Horton, D. (2010). The Shadow Terms: Contract Procedure and Unilateral Amendments. UCLA Law Review, 57, 605-667. Retrieved from https://www.uclalawreview.org/the-shadow-terms-contract-procedure-and-unilateral-amendments/

Irion, K., & Helberger, N. (2017). Smart TV and the online media sector: User privacy in view of changing market realities. Telecommunications Policy, 41(3), 170-184. doi:10.1016/j.telpol.2016.12.013

John Deere. (2016). License Agreement for John Deere Embedded Software. Retrieved May 3, 2018, from https://www.deere.com/privacy_and_data/docs/agreement_pdfs/english/2016-10-28-Embedded-Software-EULA.pdf

Kerr, I. (2007). To Observe and Protect? How Digital Rights Management Systems Threaten Privacy and What Policy Makers Should do About it. In P. Yu (Ed.), Intellectual Property and Information Wealth: Copyright and Related Rights (Vol. 1, pp. 1-26). Westport, CT: Praeger Publishers.

Keymolen, E., & Van der Hof, S. (2019). Can I still trust you, my dear doll? A philosophical and legal exploration of smart toys and trust. Journal of Cyber Policy. doi:10.1080/23738871.2019.1586970

Kitchin, R. (2014). The real-time city? Big data and smart urbanism. GeoJournal, 79(1), 1-14. doi:10.1007/s10708-013-9516-8

Koops, B.J. (2011). The (In)flexibility of Techno-Regulation and the Case of Purpose-Binding. Legisprudence, 5(2), 171-194. doi:10.5235/175214611797885701

Langenderfer, J. (2009). End-User License Agreements: A New Era of Intellectual Property Control. Journal of Public Policy & Marketing, 28(2), 202-211. doi:10.1509/jppm.28.2.202

Lawson, S. (2016, April 4). Why Nest’s Revolv hubs won’t be the last IoT devices knocked offline. PC World. Retrieved from http://www.pcworld.com/article/3051760/hubs-controllers/why-nests-revolv-hubs-wont-be-the-last-iot-devices-knocked-offline.html

Leenes, R. (2011). Framing Techno-Regulation: An Exploration of State and Non-State Regulation by Technology. Legisprudence, 5(2), 143-169. doi:10.5235/175214611797885675

Lessig, L. (2006). Code: And Other Laws of Cyberspace, Version 2.0. New York, NY: Basic Books. Available at http://codev2.cc/download+remix/

Manwaring, K. (2017). Emerging information technologies: challenges for consumers. Oxford University Commonwealth Law Journal, 17(2), 265-289. doi:10.1080/14729342.2017.1357357

Meola, A. (2016, December 19). What is the Internet of Things (IoT)? Business Insider. Retrieved from http://www.businessinsider.com/what-is-the-internet-of-things-definition-2016-8

Monahan, T. (2017). The Image of the Smart City: Surveillance Protocols and Social Inequality. In Y. Watanabe (Ed.), Handbook of Cultural Security (pp. 201-226). Cheltenham, UK: Edward Elgar.

Mueller, M., Kuehn, A., & Santoso, S.M. (2012). Policing the Network: Using DPI for Copyright Enforcement. Surveillance & Society, 9(4), 348-364. doi:10.24908/ss.v9i4.4340

Ng, A. (2019, May 7). Tenants win as settlement orders landlords give physical keys over smart locks. CNET. Retrieved from https://www.cnet.com/news/tenants-win-rights-to-physical-keys-over-smart-locks-from-landlords/

Noto La Diega, G., & Walden, I. (2016). Contracting for the 'Internet of Things': looking into the Nest. European Journal of Law and Technology, 7(2), 1-38. Retrieved from http://ejlt.org/article/view/450/658

Obar, J.A., & Oeldorf-Hirsch, A. (2018). The Biggest Lie on the Internet: Ignoring the Privacy Policies and Terms of Service Policies of Social Networking Services. Information, Communication & Society. doi:10.1080/1369118X.2018.1486870

Office of the Privacy Commissioner of Canada. (2016). Consent and privacy: A discussion paper exploring potential enhancements to consent under the Personal Information Protection and Electronic Documents Act [Discussion paper]. Gatineau, Quebec: Policy and Research Group of the Office of the Privacy Commissioner of Canada. Retrieved from https://www.priv.gc.ca/en/opc-actions-and-decisions/research/explore-privacy-research/2016/consent_201605/

Perzanowski, A., & Schultz, J. (2016). The End of Ownership: Personal Property in the Digital Economy. Cambridge, MA: MIT Press.

Proctor, N. (2019, April 1). Right to Repair is Now a National Issue. Wired. Retrieved from https://www.wired.com/story/right-to-repair-elizabeth-warren-farmers/

Radin, M.J. (2012). Boilerplate: The Fine Print, Vanishing Rights, and the Rule of Law. Princeton, NJ: Princeton University Press.

Raymond, A. H. (2014). Pliers and Screwdrivers as Contributory Infringement Devices: Why Your Local Digital Repair Shop Might Be a Copyright Infringer, and Why We Must Stop the Craziness. Northwestern Journal of Technology and Intellectual Property, 12(1), 67-83. Available at https://scholarlycommons.law.northwestern.edu/njtip/vol12/iss1/2/

Rich, J. (2016, July 13). What happens when the sun sets on a smart product? [Blog post]. Retrieved from Federal Trade Commission https://www.ftc.gov/news-events/blogs/business-blog/2016/07/what-happens-when-sun-sets-smart-product

Reidenberg, J. (1997). Lex Informatica: The Formulation of Information Policy Rules through Technology. Texas Law Review, 76(3), 553-593.

Samsung. (2016, December 9). Samsung Taking Bold Steps to Increase Galaxy Note7 Device Returns. Retrieved from https://news.samsung.com/us/samsung-taking-bold-steps-to-increase-galaxy-note7-device-returns/

Samsung. (2018, February 21). Samsung+ Terms of Service. Retrieved from https://www.samsung.com/us/samsungplus/terms/

Samuelson, P. (2016). Freedom to Tinker. Theoretical Inquiries in Law, 17, 563-600. doi:10.1515/til-2016-0021

Schneier, B. (2013, November 25). Surveillance as a Business Model [Blog post]. Retrieved from Schneier on Security www.schneier.com/blog/archives/2013/11/surveillance_as_1.html.

Schulz, W., & Dankert, K. (2016). ‘Governance by Things’ as a challenge to regulation by law. Internet Policy Review, 5(2). doi:10.14763/2016.2.409

Schwarcz, S.L. (2002). Private Ordering. Northwestern University Law Review, 97(1), 319-350.

Srnicek, N. (2017). Platform Capitalism. Cambridge: Polity Press.

Techopedia. (n.d.). Bricking. Retrieved May 16 from https://www.techopedia.com/definition/24221/bricking

Tusikov, N. (2016). Chokepoints: Global Private Regulation on the Internet. Oakland, CA.: University of California Press.

Tusikov, N. (2019). Precarious Ownership of the Internet of Things in the Age of Data. In B. Haggart, K. Henne, & N. Tusikov (Eds.), Information, Technology and Control in a Changing World: Understanding Power Structures in the 21st Century (pp. 121-148). Basingstoke, UK: Palgrave Macmillan.

Urquhart, L., & Luger, E. (2015). Smart Cities: Creative Compliance and the Rise of Designers as Regulators. Society for Computers and Law, 26(2). Retrieved from https://www.scl.org/articles/3386-smart-cities-creative-compliance-and-the-rise-of-designers-as-regulators

West, S.M. (2017). Data Capitalism: Redefining the Logics of Surveillance and Privacy. Business & Society. doi:10.1177/0007650317718185

Westbrook, J.T. (2017, September 10). Tesla’s Hurricane Irma Update Taps into Our Deepest Fears of 21st Century Driving. Jalopnik. Retrieved from https://jalopnik.com/teslas-hurricane-irma-update-taps-into-our-deepest-fear-1803081731

Whittaker, Z. (2017, August 21). Sonos says users must accept new privacy policy or devices may ‘cease to function’. ZDNet. Retrieved from http://www.zdnet.com/article/sonos-accept-new-privacy-policy-speakers-cease-to-function/

Wiens, K. (2016, February 18). Apple Shouldn’t get to Brick Your iPhone Because You Fixed it Yourself. Wired. Retrieved from https://www.wired.com/2016/02/apple-shouldnt-get-to-brick-your-iphone-because-you-fixed-it-yourself/

Zittrain, J.L. (2008). Perfect Enforcement on Tomorrow’s Internet. In R. Brownsword, & K. Yeung (Eds.), Regulating Technologies: Legal Futures, Regulatory Frames and Technological Fixes. (pp. 125-156). Oxford: Hart Publishing.

Zuboff, S. (2015). Big other: surveillance capitalism and the prospects of an information civilization. Journal of Information Technology, 30(1), 75-89. doi:10.1057/jit.2015.5

Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. New York, NY: Public Affairs.

Footnotes

1. While this article focuses on the consumer-oriented Internet of Things, there is also an industrial IoT underlying many industrial sectors, for example, robotics systems in manufacturing, medical diagnosis and treatment, and connected energy sensors in the oil and gas sectors (see DeNardis & Raymond, 2017).

2. Companies may use the terms “EULAs” or “terms-of-service agreements” (ToS) to describe the rules governing smart goods’ software, although the latter are broader than software licenses and set out rules for data collection, website security, and penalties for violating the policies. This article focuses on EULAs, but recognises that companies may incorporate similar policies under ToS.

3. EchoStar digital-video recorders escaped being bricked after a protracted legal battle that concluded in 2011 with TiVo being awarded a US$500 million settlement (Zittrain, 2008, p. 103).

Empire and the megamachine: comparing two controversies over social media content


This paper is part of Practicing rights and values in internet policy around the world, a special issue of Internet Policy Review guest-edited by Aphra Kerr, Francesca Musiani, and Julia Pohle.

Introduction

This paper considers two major controversies in 2017 over content on social media. The first arose within the advertising industry as major brands found that programmatic advertising was appearing next to distasteful, violent, or otherwise objectionable content, particularly on YouTube. As a result, several prominent companies, including Procter & Gamble, stopped advertising on social media platforms for months or longer. This event, dubbed the “adpocalypse”, has had wide-ranging effects, particularly on the monetisation of educational and LGBTQ content on YouTube. The second centred on the first public hearings with representatives of social media companies over Russian operatives disseminating misinformation before, during, and after the 2016 US presidential elections. Misinformation campaigns on social media have since been the subject of government inquiries around the world. The public examinations of these controversies, by senate committees and advertising trade groups, provide insight into major themes of the governance relationships between social media companies and their major stakeholders.

The media, private and public, is essential to political engagement and the construction of democratic culture (Dahlgren, 2009). Arguments for the protection of the public’s interest in media typically refer to how government policy should intervene to protect the democratic public interest from the tendencies of private economic interests (Croteau & Hoynes, 2006). The term public interest lacks precision, however, and what is considered in the public’s interest may shift depending on the framework from which it is viewed: a political public interest versus an economic one, for instance (Shtern, 2009). To make matters more complicated, in practice, media governance is divided among a range of actors, including public (government) policy intervention and private (business) media interests. Researchers considering the public interest in communication and media must understand how governance of media is enacted in different contexts and how stakeholders intervene in media systems. Specifically, this paper argues that advertising interests can act as de facto governors of media content delivery in certain contexts, making editorial-style directives as to what content will succeed or fail. This power introduces an understudied layer of governance that operates outside national policy-making, which responds slowly and indirectly compared to advertising pressure. This paper uses an historical framework based on Harold Innis’ research into the publishing industry in the 18th and 19th centuries, combined with Mumford’s (1966) concept of the megamachine, particularly as explored by Latour (1999), to analyse how social media companies are being held accountable to nations and advertisers. Innis’ analysis of the print publishing industry demonstrates how freedom of the press laws enabled economic concerns, driven by the advertising industry, to expand the geographic reach of the press and facilitate US cultural imperialism (Innis, 2008). This paper suggests that social media companies are similarly shaped by their reliance on the advertising business model. World governments have largely taken a light approach to regulating social media content while advertisers represent most of those companies’ earnings. This paper compares a specific instance of advertisers shaping the rules by which social media companies are governed to public scrutiny by the US government. Both events involve outside actors pressuring social media companies to change their behaviours, with varying degrees of directness and success.

The rise of social media companies, their transnational nature, and the transnational, risk-averse nature of their advertising stakeholders have created an emphasis on brand safety in media content governance. This argument is complementary to work that defines design practices as regulation (Yeung, 2017) and to arguments that platform content moderation is based in US free speech law, balanced against corporate social responsibility and user expectation (Klonick, 2017). It builds on research showing that social media platforms attract efforts to regulate content, including social pressure, to prevent certain speakers or practices on the platforms, even those protected by legal rulings (Mueller, 2015). Social media companies are responding to increased public scrutiny by altering how creators and content are moderated and monetised, with impacts on how we understand the platforms’ role in enabling expression and circulating information. The public instances of platforms negotiating their accountability to different groups examined in this paper are instructive in understanding how different actors attempt to govern media systems and how media systems respond. Carey (1967) observed that Mumford’s ideas of transformation in social organisation are incorporated into Innis’ understanding of changes in the technology of communication. This paper combines Innis’ work on the press industry with theories of the megamachine, using the two as a novel way to examine the administrative links that govern media systems.

Harold Innis and the newspaper industry

Harold Innis was deeply concerned with the relationship between printing, monopolies of knowledge, and public life. Empire and Communications particularly places the newspaper industry at the centre of US cultural imperialism: “The United States, with systems of mechanized communication and organized force, has sponsored a new type of imperialism imposed on common law in which sovereignty is preserved de jure and used to expand imperialism de facto” (Innis, 2007 [1950], p. 195). Those “systems of mechanized communication,” including business models favouring maximum circulation, directly influenced the functioning of later technologies, including the telegraph and the radio. At the time of its creation, the newspaper industry reversed the influence between political centres and peripheries, as the US, then a colony, began providing England with content divorced from English culture and communities (Berland, 1997). Innis argues that while national laws were in effect, they were removed from the details of content and production, so that commercial interests—particularly those of advertisers—could and did govern most daily operations. The US government stepped in only to support industry growth through subsidising mail delivery or facilitating trade relationships. Innis’ work on the newspaper industry has significant implications for how we understand social media companies—which have extended the advertising model to new scales and transnational contexts while exerting considerable influence over how information is accessed and circulated by citizens.

The economic framework created by social media companies facilitates content flowing back into the US with unexpected consequences for that country’s own democratic processes and cultural stability. If the newspaper affected neighbouring cultures and economies by putting US industries and culture at the heart of global communications, social media may have created opportunities for other actors to move cultural and political content around the globe. National security concerns over political misinformation spreading through social media platforms have increased public scrutiny of social media content regulation and stoked enthusiasm for stricter national restrictions on online content, but these developments are slow when compared to changes made to meet commercial imperatives. Innis argued that commercial imperatives in print media tended to favour circulation over cultural or territorial integrity and that space vacated by policy direction could be filled by directives from commerce.

Becoming a vendible commodity: strategies of circulation

In The Bias of Communication, Innis locates the beginning of the newspaper’s economic model at the lapse of the Licensing of the Press Act. After the Act lapsed, “news became a vendible commodity” (Innis, 2008 [1951], p. 143). The Act was one of many legislative efforts to regulate the press and print industries in the United Kingdom, in this case by requiring that publications be registered (Nipps, 2014). In response to legislative pressures, media structured itself strategically to avoid regulation, shifting formats and obscuring content coverage as political developments made certain formats less expedient. When taxes affected newspapers, there was a rise in other, non-newspaper formats of publication, including publications that circulated on irregular schedules or used unusual page sizes. Innis (2008) depicts these encounters as the negotiation of parliamentary accountability to the people, but it might equally be read as actors in the publishing industry defending their market position and investment in the print industry in the face of uncertain political patronage and funding. Eventually, the need for a predictable source of funds “compelled dependence of the political press on advertisements” (Innis, 2008, p. 153). The print industry turned to the advertising business model in part to escape the uncertainty and risk of political patronage and the inconsistency of tax law and became subject to advertisers’ need to grow their audience in the process.

The rise of advertising as a preferred business model for publishing has been covered in detail elsewhere (see Wu, 2016). What concerns this paper are the technical and political ramifications of that preference. Once newspapers became dependent on advertising dollars, news was important insofar as it attracted readers. Roy Howard, an American publisher speaking before World War I, claimed: “We come here simply as news merchants. We are here to sell advertising and sell it at a rate profitable to those who buy it. But first we must produce a newspaper with news appeal that will result in a circulation and make that advertising effective.” (Originally published in Lords of the Press by George Seldes, 1939; quoted in Innis, 2008, p. 181). Advertising requires circulation, and circulation has historically been achieved in publishing through efficient distribution, combined with attention-getting content strategies such as sensationalism, use of images, and exclusive content. The preferences of advertisers hoping to reach national or international audiences pushed technical developments that allowed for the printing of illustrations and the printing and shipping of more papers (Buxton, 1998; Innis, 2007). Paper and printing, instantiated in newspapers whose agendas were set by the demands of advertising, facilitated the connection of wide geographic areas to news, reporting, and advertisements created in a central location. It was the ability of media in this case to assert control over space that led Innis to identify the industry as an agent of US cultural imperialism—at the time, an expression of how “any given medium will…favour the growth of certain kinds of interests and institutions at the expense of others” (Carey, 1967, p. 9).

Once the press divorced itself from political money with the aid of advertising, publishers could become less interested in the specifics of content and pursue broader circulation, protecting themselves with freedom of expression laws as necessary. Innis argued that publishing’s freedom from direct control over content and its partnership with advertising interests made circulation its priority and allowed information to be treated as a commodity, maximising its spread over geographic space, with implications for affiliated industries and expressions of cultural sovereignty. As technology has moved beyond the printing press, the separation between publishers and content has become pronounced. That is the case of social media companies, which have engineered considerable distance in many jurisdictions from laws that ordinarily hold publishers accountable for the speech present on their platforms—a degree of license undreamed of by the news merchants of Innis’ analysis. Innis’ work remains germane for its insight into advertising as a driving force in technical capability and the entanglement of commercial media with broader economic and political functioning, even as the geographic arrangements that concerned him (such as the focus on US culture as the central director of media development globally) have shifted considerably. Innis’ granular examinations of how media outlets were held accountable to laws and the mandates of an advertising business model can act as a model for analysing media interests and institutions.

Megamachines and machines for growth

In Innis’ account, guarantees of freedom of expression, once in place, rendered the relationship between content and regulation predictable. Once advertisers, regulators, and publishers achieved relative equilibrium in their goals, the industry was able to grow in a manner heedless of geographic location, creating the scale necessary for financial efficiency. One way to understand that operation is as a machine. In Pandora’s Hope, Bruno Latour (1999) describes the megamachine as “a large, stratified, externalized body politic” (p. 208). The megamachine organises “large numbers of humans via chains of command, deliberate planning and accounting procedures” (p. 207) to achieve a goal defined by central institutions. The concept of a megamachine was originally theorised by Mumford (1966) as the systems, including but not limited to bureaucracies, that could put large numbers of humans to work on a single goal—organising the military, for instance, or the labour necessary for large-scale construction projects. Mumford argues that the megamachine is essential to technological developments, consisting of “a reliable organisation of knowledge…and an elaborate structure for giving and carrying out orders” (p. 8). Latour’s megamachine functions through “nested subprograms” (p. 207) for action that can be tracked across social relationships. Latour suggests that identifying the workings of the subprogrammes may do more to explain collective behaviours and functionalities than discourses about identity (Latour, 1999). The megamachine, then, must be separate from individual or societal wants or preferences. The imperative to “make newspapers”, for instance, calls forth systems of salespeople, authors, accountants, publishers, postal workers, and trade relationships that push for increased circulation, regardless of individual preferences within the human systems.

The metaphor of a machine is relatively common in examinations of commodification. For instance, Harvey Molotch’s much-cited article The City as a Growth Machine (1976) examines the processes that divorce geographic and other forms of specificity from decision-making in urban development. Those managing transnational communication systems are similarly pursuing freedom from context. A media operation, divorced as much as possible from individual human concerns—achieved through centralisation, technological efficiency and predictable relationships between content and regulation—can be run as a machine. Viewed through the lens of a machine, the newspaper industry of Innis’ analysis is a collective of human and industrial processes that became a political body of its own. Innis identifies an industry that he associates with a society (the press in the United States); he then identifies subprogrammes (trade relationships, subsidies, technologies) that encourage the industry to act on wider geographic areas, including neighbouring societies—a media megamachine with the US at its centre. In this scenario, news monopolies, separate from “place, ethics, and community” (Berland, 1997, p. 61), in conjunction with “contemporary transnational capitalism” (p. 68), allowed US agendas to dominate global communication.

In a context where the largest media companies are transnational as a rule, Innis’ publishing megamachine takes on new aspects. For one, consumer markets across the globe are gaining in importance and institutions for managing global commercial concerns have grown. In her application of Innis’ ideas to global media systems, Berland (1997) declared that corporations undermine nation states by undoing their central positioning—putting the corporation at the centre of the media machine, rather than the US state. It may be too soon to declare that corporations have ended US dominance of global communications, but transnational digital communication giants are increasingly tied to diverse social and policy environments. Understanding Innis’ “vendible commodity” as a map of the subprogrammes that make up a media megamachine gives researchers a template for examining that machine in a context that has shifted away from one national jurisdiction to a complex, global policy environment. Considering Innis alongside the megamachine, rather than conducting a straightforward political economic analysis, emphasises administrative links over power relations, and the construction of a body politic rather than a market model of competing interests—an idea of particular salience to the advertising industry, which has its own definition of appropriate content.

The social media megamachine

Social media companies have emphatically defined themselves as outside of the press and publishing industries because of their reliance on user-generated content, but that distinction is blurring, as statistics place social media high on lists of news sources used by citizens, and professional media content is more prominent on the platforms (Bakshy, Messing, & Adamic, 2015; Smith, 2017; Burgess, 2015). In the US, Section 230 of the Communications Decency Act, which prohibits providers of online publishing services from being treated as the legal publisher or speaker of information on that service, gives US regulation little power over online platforms (Ardia, 2009; Klonick, 2017). Section 230 has contributed to the impression that social media platforms—as hosts for user-generated content rather than media platforms—are nearly immune to direct regulatory intervention. However, social media companies are governed—directly by national governments as well as through a range of voluntary self-regulation initiatives (in regard to terrorist content, for instance) (Gorwa, 2019). In Europe, and particularly in Germany, direct regulation has recently been used to force social media companies to comply with regional laws and norms. Current efforts are coalescing around competition law, privacy and data protection, and the rollback of intermediary protection from liability (Gorwa, 2019). Other jurisdictions have taken stricter measures, employing tactics such as geo-blocking to prevent social media companies from operating either temporarily or permanently. China has made significant use of this ability and commanded noteworthy concessions from tech companies, such as Google, that wish to access the millions of potential users in that country (Gallagher, 2018). However, nations, no matter how populous, are only one group of stakeholders with fragmented interests. Advertisers represent upwards of 85% of social media companies’ earnings (Facebook Inc., 2018; Twitter Inc., 2018; Alphabet Inc., 2017).

As new publishing platforms have emerged, advertisers have remained central to media business models, acquiring new abilities to target viewers (Turow, 2011) and interact directly with consumers (Brodmerkel & Carah, 2016). Social media platforms’ advertising business started by serving banner ads to students and now caters to a complex global ecology of developers, brand pages, marketing partners, and ad publishers (Nieborg, 2017). Advertisers on social media platforms expect not only circulation, but also the ability to target specific categories of users, to have advertising content integrated into the look and function of the social media site, and to keep advertising separate from content that might be objectionable to their target audience—an ideal called “brand safety” (Trapp, 2016; Facebook, Google, and Twitter Executives on Russia Election Interference, 2017b, 1:40:36; Teich, 2017). Advertisers seek deeper relationships, including feelings and values, between consumers and brands (Banet-Weiser, 2012). Social media companies have made themselves central to meeting these preferences, and digital advertising is now a multibillion-dollar industry, with Facebook and Google earning more than half of that revenue (Ha, 2017; Reuters, 2017; Helmond, Nieborg, & van der Vlist, 2017). Scholarship on social media platforms has already traced some of the ways in which accountability to advertisers has motivated significant changes to platform infrastructure, algorithms, and content policies (Gehl, 2014; van Dijck, 2013; Helmond, 2015) and created a business model that permeates borders, incentivises the sharing of personal data, and wields affective forces to keep users connected (Karppi, 2018). While there have long been fears that advertiser agendas affect the production of media content, studies have tended to focus on advertisers securing positive reviews of their own content (Rinallo, Basuroy, Wu, & Jeon, 2013) or on deceptive practices and native advertising (Carlson, 2015). The social media model has dispensed with conventions of “church-and-state” division between editorial and business concerns, making a virtue of its ability to integrate advertising content in ways tailored to achieve business goals (Couldry & Turow, 2014).

Where newspaper advertisers were primarily national in their operations, advertising partners for the social media platforms are global and their audience, like that of the social media companies themselves, is often borderless. Social media companies face a fragmented policy environment and comparatively coherent economic incentives. Where Innis argued that the commercialism of media was part of US media imperialism, we can use the framework of the media megamachine identified above to trace the chains of command in new circumstances and to locate the “centrally directed” elements of the machine’s programming (Mumford, 1966, p. 6). The social media megamachine is characterised by a policy environment strongly oriented toward self-regulation, where the ability to regulate exists but is often not exercised. Meanwhile, commercial incentives have become more granular and targeted towards outcomes at the level of content, affect, and quality of interaction with consumers, rather than only circulation. The next section examines the senate hearings with representatives of three major social media companies to better understand how pressure was applied in a national context, before comparing that process to interactions between advertisers and social media companies over content concerns. Doing so provides an illustrative comparison of how social media companies are governed in different institutional contexts.

The senate hearings and control of social media content

The US senate hearings with representatives of social media companies over Russian activities during the 2016 presidential election were held 31 October and 1 November 2017 before the judiciary Subcommittee on Crime and Terrorism and the house and senate Committees on Intelligence. In his opening remarks, the republican chair of the judiciary subcommittee called social media platforms “portals” into US society and everyday life. The actions of the Internet Research Agency (IRA), located in St. Petersburg—the group that created group pages, advertisements, and events in the guise of American social and political groups—were central to the hearings and served as a prime example of the failings of content regulation. Other examples, including the spread of false news stories on social media, extreme content, and political content posted outside of an election cycle, were also significant parts of the inquiry (Facebook, Google, and Twitter Executives on Russia Election Interference, 2017a, 1:24:10). False or sensationalised news has a long history in media, and even the “filter bubble” is preceded by media that mirrors the preferences of its audience (Bennett & Iyengar, 2008). The ease with which such content is created and disseminated on social media has added urgency to questions about how to control that content. The hearings took place over a year after the initial problematic behaviour was identified. During that year, social media platforms resisted the idea that the problems were worth examining, repeatedly claiming that deception and misinformation on online platforms was minimal and of little significance to elections (Hudgins & Newcomb, 2017).

During the hearings, the distance between the priorities of legislators and social media representatives was evident. Senate interlocutors repeatedly used the language and perspective of the national interest, including an interrogation centred on which nations social media companies consider a threat (Facebook, Google and Twitter Executives on Russian Disinformation, 2017). On the other side, the social media representatives argued that their tools are agnostic and “the internet is borderless” (34:05), meaning national sovereignties and enmities mean much less on social media platforms. These are opposing views, and not necessarily reconcilable, though many members, such as democrat Adam Schiff, sought reassurances that social media companies consider their corporate responsibility to include the protection of democratic communication within the US (Facebook, Google, and Twitter Executives on Russia Election Interference, 2017a). The representatives of the social media companies were careful to separate themselves from public service obligations. However, the license enjoyed by those companies is not guaranteed, and the tech companies were careful to acknowledge moral and societal stakes and responsibilities. The rhetorical dance of non-obligation and self-regulation has historically allowed tech companies to align themselves with national interests or legislation in various national contexts in a largely self-regulated manner (Klonick, 2017).

The newspaper industry of Innis’ analysis also had arms-length distance from the instruments of national governance, but, located in the US and aligned with US interests through its readership, it had a more straightforward relationship with national politics. In the 2016 US election, much of the confusion and concern about the role of technology companies stemmed from the normalisation of social media content not made by or for US citizens. There was no reason, from the perspective of the tech companies, to flag advertisements paid for in roubles, because there are Russian companies online that might want to pay for advertisements for legitimate purposes. Twitter, in particular, argued that it was within its rights to partner with foreign media companies (Facebook, Google, and Twitter Executives on Russia Election Interference, 2017a). Both companies and senators admitted to having overlooked ordinary promotional tools and advertising for the dissemination of political content while focusing on cyber espionage efforts (Facebook, Google, and Twitter Executives on Russia Election Interference, 2017b, 2:23:36). In the face of US scrutiny, the companies had two strategic options: either argue that the US government has no authority over them or demonstrate enough responsiveness and control over platform content to forestall government concerns, maintaining their social license as trustworthy partners. The latter was the strategy employed during the hearings. Representatives emphasised the proactive measures they had taken and their records of cooperation with national governments.

Promotional tools were the central mechanism for disseminating Russian content during the US presidential election. The IRA’s actions in most respects resemble a public relations campaign. It used tools, including Facebook’s Custom Audience Tool, A/B testing for advertisements, and geographic targeting, to reach a desired audience (Facebook, Google and Twitter Executives on Russian Disinformation, 2017). The agency also rolled out paid promotional campaigns, along with group pages that created content which could then be “boosted” to reach more people. It is a strategy that can be used by other groups—friendly or unfriendly, state or business—because it was designed to fit a range of globalised business needs and is resistant to governance outside of the social media companies themselves (Facebook, Google and Twitter Executives on Russian Disinformation, 2017). The use of mundane tools of business promotion to do the work of infiltrating US media led many senators to question the legal disparity between media companies and social media companies. Offline, to comply with election spending limits, the purchaser and the purpose of any advertisement must be clearly identifiable. Online, the speed and volume of interactions, along with the comparatively light regulation enjoyed by the companies, have made it difficult to track the content of advertisements and consistently enforce standards (Facebook, Google and Twitter Executives on Russian Disinformation, 2017, 1:08:09). When flagging content for violations of terms of service, social media companies have until recently focused on the authenticity of accounts or behaviour rather than content. However, much of the behaviour that necessitated the inquiry was virtually indistinguishable from ordinary accounts until the connection between its origin and the content was made (Facebook, Google and Twitter Executives on Russian Disinformation, 2017, 2:12). Many companies that might not be appropriate advertisers during election cycles are legitimate users, by the standards of the platforms’ terms of service, the rest of the time. The kinds of distinctions being made between acceptable and unacceptable content typically require human readers. During the hearings, social media companies, particularly Facebook, announced increases to their human moderation staff by, in some cases, tens of thousands. Google has made similar announcements in response to advertiser concerns over children’s content on YouTube (Schindler, 2017).

The social media companies emphasised other existing initiatives that, they argued, demonstrated their proactive engagement with the problems identified after the 2016 election. Facebook has created Transparency Centres that allow users to check all the advertising campaigns a page is running on the platform, piloted election integrity initiatives in Canada (Valentino-Devries, 2018), and applied machine learning to control content around the 2017 German federal election (Lapowsky, 2017). These efforts have focused on user-facing transparency along with prompt identification and removal of offensive, dangerous, or inappropriate content, aided by machine learning. Overall, the hearings were marked by social media companies’ insistence that the problems identified by the political representatives were already under control and that platform self-regulation should continue to be the norm. They did not suggest that institutional relationships between social media companies and policy regimes were under-developed. To prove their point, they pointed to existing initiatives, particularly those focused on transparency, content review, and the proactive removal of offensive content. While presented to lawmakers as proactive measures indicating the integrity of platform policies, these announcements closely resembled initiatives put in place earlier in 2017, in response to a different controversy on the platforms.

IAB, content scandals, and brand safety

In the spring of 2017, social media companies faced widespread outrage from advertisers whose ads had appeared next to objectionable content. Dozens of companies, including major global advertisers such as AT&T, boycotted advertising on YouTube and elsewhere and demanded guarantees that the platforms were brand safe (Davies, 2017). To retain their advertising clients, the platforms were quick to create tools that allowed marketing partners to review the placement of their ads and the content they accompanied (Perez, 2017). The tools provided to advertisers—including machine learning to remove content, transparency centres, and human reviewers—resemble many of the initiatives social media companies cited in the US hearings as evidence of their proactive engagement with political misinformation. This section draws on industry coverage, including updates from trade groups such as the Interactive Advertising Bureau (IAB), to compare the two cases in more detail, and argues that the recycling in public hearings of steps taken to address commercial concerns indicates that, in some contexts, commercial actors may be able to drive the terms of platform governance more directly than policy processes.

This study examined IAB coverage, beginning from 20 March 2017, to establish a timeline of discussion and action between social media firms and advertisers during the brand safety crisis. Several developments outlined in the IAB’s coverage are of interest to this paper. The first is the quick reaction of the social media companies to threats of boycotts from some of their core stakeholders. Procter & Gamble announced a boycott of social media in early March 2017, and by 31 March Google had acknowledged the issues and created more conservative default advertising settings, as well as three new categories for excluding content. Where advertisers used to be able to opt out of running ads next to “sensitive subjects” and “tragedy and conflicts”, they could now avoid content that might be “sexually suggestive”, “sensational and shocking”, or that might contain “profanity and rough language” (Sloane, 2017; Schindler, 2017). In contrast, the senate hearings over Russian misinformation in US elections took place more than a year after the initial reporting of misinformation and nearly a year after then-president Obama announced sanctions and investigations into Russian interference in the election (Sanger, 2016). During that year, the affected social media platforms denied the influence of Russian activities on their services and downplayed the significance of political messaging on their platforms (Hudgins & Newcomb, 2017).

In a blog post outlining its responses to the brand safety crisis, Google articulated a dual responsibility to creators (including those with controversial views) and to advertisers. The commitments made by the company—to tighten safeguards by restricting ads to creators in YouTube’s partner programme as well as to re-examine what kind of content to allow on the platform—were heavily weighted towards making sure that any content that is monetised is uncontroversial. Controls for advertisers include defaulting ads to narrower, more heavily vetted content, new tools to allow management of which sites and what content can appear next to ads, more options to exclude higher-risk content, and a commitment to sink more resources into content review by hiring more content reviewers and creating new machine learning tools (Schindler, 2017). During the senate hearings, increasing human content moderation and addressing content through artificial intelligence and machine learning were prominent in social media platforms’ claims to be proactively addressing government concerns about political misuse of promotional tools. In addition to changing the defaults for advertisements, social media platforms, including YouTube and Facebook, opened their platforms to auditing by third parties closely affiliated with the IAB, such as the Media Rating Council, and made changes to YouTube’s Preferred Partner Program. The Preferred Partner Program was formerly defined purely by the level of engagement with its content. It has been adjusted to favour heavily vetted content, specifically meant to be brand safe (Bardin, 2017). In the senate hearings, the platforms argued that the priorities of national governments are met through coordination between the platforms and government agencies, particularly law enforcement agencies, and civil society groups. During the brand safety crisis, advertising interests were able to address their concerns directly to social media companies and get nearly immediate results, including powerful tools that changed how content on the platform was monetised, most likely with knock-on effects for what content is recommended. In contrast, government concerns are subject to coordination between disparate agencies, globalised civil society groups (some of whom resent companies taking credit for their role in social media moderation; Russell, 2018), and the social media companies. Thinking in terms of the hierarchies and accounting procedures that define the operation of the megamachine, there is a direct chain of accounting between globalised advertising interests and tools made by social media companies, while national interests and policies are represented by a host of competing interests. The concerns of advertisers were not subject to dispute in the way that national concerns were. While the platforms did claim that ads next to objectionable content were minimal, they also made concrete changes quickly after concerns were raised.

The brand safety crisis on YouTube was dubbed the “adpocalypse” after many individuals saw revenues for their channels drop drastically (Burgess & Green, 2018). The fallout from YouTube’s efforts to address brand safety concerns landed particularly on educational and LGBTQ content—content more likely to be flagged as “sensational and shocking” or “sexually suggestive” and therefore not necessarily brand safe. The commitments made by YouTube were meant to reassure advertisers that they could resume advertising on social media platforms—which most have done, though there continue to be problems with advertisements appearing next to objectionable content. These commitments have strong implications for what content is monetised online, what is not, and how that is decided. The hazards of managing content this way are well articulated by Burgess and Green (2018), who argue that the role of advertisers in changing which topics are monetised on YouTube “problematically conflates sociocultural and political issues with commercial ones” (p. 151). The authors question whether brand safety initiatives will support diversity and inclusion when, in addressing violent and conspiratorial content, the adpocalypse also worked against sexual and gender minorities. It also showed a limited capacity for addressing the difference between inappropriate content and educational content, except in the case of professional media producers like the BBC. While advertisers have always been relatively conservative and risk-averse, the granular controls provided to them by social media allow them to more directly shape relationships between ads and content, and therefore the broader environment of content delivery.

This ability to shape content does not free advertisers or social media platforms from political pressure and social norms. Sometimes the closeness between social media and advertisers makes them targets for civil society efforts. In 2013, the Association for Progressive Communications and partners, including the Everyday Sexism Project, began communicating with Facebook advertisers whose content appeared next to images and text of violence against women. Within weeks of the campaign going public, the platform had taken action to address content that it had formerly resisted moderating (Fascendini, 2013; Levine, 2013; Pavan, 2017). The success of that campaign raises questions. For whom does that kind of pressure work? Advertisers are willing to boycott platforms on their own behalf, and some are willing to act on behalf of other groups, such as women (an important consumer category) concerned about offensive content. Do political issues that are less commercially sensitive—LGBTQ media, Burmese speakers—have to take the “slow lane” of civil society and legal action? As Mueller (2015) demonstrates, it is a limited victory to win the legal right for a marginal group to speak if a platform cannot or does not continue to host that speech. There is no reason to think that any speech that is legal must also be monetisable. But if what is monetisable becomes the frontline of platform content governance, it is important to understand how those decisions are made and what their effects are.

Conclusion

During the senate hearings, more than one senator challenged the social media companies’ representatives to articulate their relationship to the nation in which they are based. As US democrat senator Amy Klobuchar, author of a bill that would harmonise advertising standards between online and offline media companies, remarked, any small radio station in the US is required to review every ad that runs on its station (Facebook, Google and Twitter Executives on Russian Disinformation, 2017, 2:55:59). The senate hearings are one of many recent examples of nations attempting to establish a clearer relationship between national priorities and online content. In the case of Klobuchar’s Honest Ads bill, establishing the relationship is a matter of extending previous standards for content to new media players, given that cultural infiltration by hostile powers is not “new or novel” (Facebook, Google, and Twitter Executives on Russia Election Interference, 2017a, 00:00:33). This paper used the theoretical concept of the megamachine, with Innis’ analysis of the newspaper industry as a template, to examine contemporary attempts to influence social media operations, following the chains of accounting and command between those directing the machine and those being directed by it. Both the newspaper industry and social media companies have a core product that acts as a “vendible commodity”, attracting audiences who, in turn, attract advertisers. However, where Innis connected the business model of the print industry with US cultural imperialism, social media platforms’ operations and major advertising clients are transnational and bridge many policy environments, complicating lines of accountability between the platforms and national governments. At the same time, advertisers’ comparatively unified desire for targeted, brand-safe content institutionalises closer relationships between advertising goals and the content moderation and monetisation policies of social media companies. Innis’ analysis embedded media in the political economy of trade relationships and cultural products exchanged between the US and Canada, but examining his work as a megamachine allows insights into the administration of media content governance, which appears to have shifted towards transnational institutions as social media communications technologies cover more of the globe. With the emergence of social media, there is a renewed interest in content governance by both nations and advertisers. However, the advertisers have a head start in centralising their interests and building the infrastructure to see their vision of content governance respected, positioning them to be effective frontline governors of social media content.

Comparing the senate hearings with the brand safety controversy reveals parallel crises over content distribution online, both stemming from inconsistently monitored placement of promoted materials. While this limited analysis cannot establish a causal relationship between the two, it does highlight similarities between actions presented to the US government in public hearings as proactive ways to address foreign interference and actions designed to assuage the concerns of advertisers over social media content. These similarities suggest a convergence between the actions needed to provide transparency in the advertising supply chain and the actions required to fulfil a public mandate for trustworthiness. Certainly, this comparison indicates that the architecture of social media companies is much more developed for meeting the concerns of advertisers than it is for regulators or public interest concerns. Advertising interests can demand concrete changes to social media platforms and see swift action to meet those demands, while even very powerful national governments may take the slow route of applying public pressure.

In her article on social media platforms as “the new governors”, Klonick (2017) argues that the loss of equal access and participation and the lack of direct platform accountability are major causes for concern, even as social media platform policies largely follow the outlines of US speech laws. This paper has used the scholarship on the megamachine to think through patterns of accountability between media systems and their stakeholders. It has argued that commercial actors are able to exert considerable pressure on social media content moderation, acting sometimes ahead of government policy processes, and that the criteria for that governance are not principles of speech and representation, but the fuzzier standards of brand values and brand safety. Rather than direct accountability to users or policy, social media companies are accountable to a range of stakeholders, and advertisers are often at the front of the line. It is possible that the interests of advertisers can serve to curb dangerous or extreme speech on social media platforms. Cunningham and Craig (2019), for instance, suggest advertisers may encourage better democratic norms in online communication “because most brands and advertisers will not tolerate association with such affronts to civility and democracy” (p. 8). It seems unlikely that the conservative nature of advertisers and brands is a substitute for governance of online spaces by regulators outside of the private sector, however—particularly as advertisers have limited investment in small countries, minority populations, and political communication. While the social media megamachine appears well designed to administer the interests of advertisers in content delivery, it is less efficient at facilitating governance in other contexts—taking considerably longer to acknowledge and respond to democratic concerns over content.

Globally, there have been recent attempts to define the power of social media companies and to strengthen their accountability to national media policy regimes. Indicative of this tendency is the General Data Protection Regulation (GDPR), which came into force in May 2018 and is to date the most far-reaching move to directly regulate the actions of social media companies. However, representatives of the EU parliament indicated in a May 2018 hearing with Mark Zuckerberg that it is unlikely that the GDPR will change the business model of Facebook and other social media companies (EURACTIV, 2018). Advertisers have quickly adapted to the changes, shifting their spending to publishers with accurate first-party data (from memberships and mailing lists, for instance) and those with established reporting systems (Seb, 2018; Holton, 2018). While requirements for individual consent are stricter, the desire for large networks of circulation and brand-safe content may serve to entrench the power of established players who have the resources to ensure compliance and the long-term users who must agree to the terms to continue to use the service. Advertisers will spend money only in supply chains that are verified—that have audiences the advertisers know they can target both safely and legally (Holton, 2018). As such, the influence of advertiser preferences on their publishing partners will continue to strongly affect how content is moderated and monetised, even under this stricter regulatory burden. Recently, Facebook founder Mark Zuckerberg has argued for a more direct, globally standardised watchdog of public interests in the governance of content (Zuckerberg, 2019). Such a body might correct for the diffuse nature of national policy-making in comparison to the coherent agenda of advertising interests. Gillespie (2018) called Facebook “two intertwined networks, content and advertising, both open to all” (p. 203). Perhaps social media governance needs to acknowledge a similar division in its stakeholders and match the influence of the advertising industry with a transnational institution for political governance that addresses the democratic interest in social media content.

References

Alphabet, Inc. (2017). Annual Report 2016. Retrieved from https://www.sec.gov/Archives/edgar/data/1652044/000165204417000008/goog10-kq42016.htm

Ardia, D. S. (2009). Free speech savior or shield for scoundrels: An empirical study of intermediary immunity under Section 230 of the Communications Decency Act. Loyola of Los Angeles Law Review, 43, 373-506. Retrieved from https://scholarship.law.unc.edu/faculty_publications/37/

Bakshy, E., Messing, S., & Adamic, L. A. (2015). Exposure to ideologically diverse news and opinion on Facebook. Science, 348(6239), 1130-1132. doi:10.1126/science.aaa1160

Banet-Weiser, S. (2012). AuthenticTM: The politics of ambivalence in a brand culture. New York: NYU Press.

Bardin, A. (2017, March 20). Strengthening YouTube for advertisers and creators. YouTube Creator Blog. Retrieved from https://youtube-creators.googleblog.com/2017/03/strengthening-youtube-for-advertisers.html

Bennett, W. L., & Iyengar, S. (2008). A new era of minimal effects? The changing foundations of political communication. Journal of Communication, 58(4), 707-731. doi:10.1111/j.1460-2466.2008.00410.x

Berland, J. (1997). Space at the margins: Colonial spatiality and critical theory after Innis. TOPIA: Canadian Journal of Cultural Studies, 1(1). doi:10.3138/topia.1.55

Brodmerkel, S., & Carah, N. (2016). Brand machines, sensory media and calculative culture. London: Palgrave Macmillan. doi:10.1057/978-1-137-49656-0

Burgess, J. (2015). From ‘broadcast yourself’ to ‘follow your interests’: Making over social media. International Journal of Cultural Studies, 18(3), 281–285. doi:10.1177/1367877913513684

Burgess, J., & Green, J. (2018). YouTube: Online Video and Participatory Culture. Cambridge: Polity Press.

Buxton, W. J. (1998). Harold Innis' excavation of modernity: the newspaper industry, communications, and the decline of public life. Canadian Journal of Communication, 23(3). doi:10.22230/cjc.1998v23n3a1047

Carey, J. W. (1967). Harold Adams Innis and Marshall McLuhan. The Antioch Review, 27(1), 5–39. doi:10.2307/4610816

Couldry, N., & Turow, J. (2014). Advertising, big data and the clearance of the public realm: marketers’ new approaches to the content subsidy. International Journal of Communication, 8, 1710–1726. Retrieved from https://ijoc.org/index.php/ijoc/article/view/2166

Croteau, D., & Hoynes, W. (2006). The Business of Media: Corporate Media and the Public Interest. Thousand Oaks, CA: Pine Forge Press.

Davies, J. (2017, April 4). The YouTube ad boycott concisely explained. Retrieved February 18, 2019, from https://digiday.com/uk/youtube-ad-boycott-concisely-explained/

EURACTIV. (2018). Mark Zuckerberg’s full meeting with EU Parliament leaders. Retrieved from https://www.youtube.com/watch?v=o0zdBUOrhG8

Facebook Inc. (2018). Facebook Annual Report 2017. Retrieved from https://s21.q4cdn.com/399680738/files/doc_financials/annual_reports/FB_AR_2017_FINAL.pdf

Facebook, Google and Twitter Executives on Russian Disinformation: Hearing before the Senate Judiciary Subcommittee on Crime and Terrorism, Senate, 115th Cong. (2017). Retrieved from https://www.c-span.org/video/?436454-1/facebook-google-twitter-executives-testify-russia-election-ads

Facebook, Google, and Twitter Executives on Russia Election Interference: Hearing before the House Select Intelligence Committee, House, 115th Cong. (2017a). Retrieved from https://www.c-span.org/video/?436362-1/facebook-google-twitter-executives-testify-russias-influence-2016-election

Facebook, Google, and Twitter Executives on Russia Election Interference: Hearing before the Senate Select Intelligence Committee, Senate, 115th Cong. (2017b). Retrieved from https://www.c-span.org/video/?436360-1/facebook-google-twitter-executives-testify-russias-influence-2016-election

Fascendini, F. (2013, May 24). How funny is this, Facebook? Retrieved February 26, 2019, from Association for Progressive Communications Website https://www.apc.org/en/news/how-funny-facebook

Gallagher, R. (2018, August 1). Google Plans to Launch Censored Search Engine in China, Leaked Documents Reveal. The Intercept. Retrieved August 14, 2018, from https://theintercept.com/2018/08/01/google-china-search-engine-censorship/

Gehl, R. W. (2014). Reverse engineering social media: Software, culture, and political economy in new media capitalism. Philadelphia: Temple University Press.

Gillespie, T. (2018). Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media. New Haven, CT: Yale University Press.

Gorwa, R. (2019). What is platform governance? Information, Communication & Society, 22(6), 854-871. doi:10.1080/1369118X.2019.1573914

Ha, A. (2017, December 21). Digital ad spend grew 23 percent in the first six months of 2017, according to IAB. TechCrunch. Retrieved from https://techcrunch.com/2017/12/20/iab-ad-revenue-report-2017/

Helmond, A. (2015). The platformization of the web: Making web data platform ready. Social Media + Society, 1(2). doi:10.1177/2056305115603080

Helmond, A., Nieborg, D. B., & van der Vlist, F. N. (2017). The Political Economy of Social Data: A Historical Analysis of Platform-Industry Partnerships. Proceedings of the 8th International Conference on Social Media & Society - #SMSociety17, 1–5. doi:10.1145/3097286.3097324

Holton, K. (2018, August 23). Europe’s new data law upends global online advertising. Reuters. Retrieved from https://ca.reuters.com/article/businessNews/idCAKCN1L80HW-OCABS

Hudgins, J., & Newcomb, A. (2017, November 1). Google, Facebook, Twitter and Russia: A timeline on the ’16 election. NBC News. Retrieved February 18, 2019, from https://www.nbcnews.com/news/us-news/google-facebook-twitter-russia-timeline-16-election-n816036

Innis, H. A. (2007). Empire and communications. Lanham, MD: Rowman & Littlefield.

Innis, H. A. (2008). The bias of communication (2nd ed.). Toronto: University of Toronto Press.

Isaac, M. (2016, November 22). Facebook said to create censorship tool to get back into China. The New York Times. Retrieved from https://www.nytimes.com/2016/11/22/technology/facebook-censorship-tool-china.html

Karppi, T. (2018). Disconnect: Facebook’s affective bonds. Minneapolis: University of Minnesota Press.

Klonick, K. (2017). The new governors: The people, rules, and processes governing online speech. Harvard Law Review, 131, 1598-1670.

Lapowsky, I. (2017, September 27). Facebook’s crackdown ahead of German election shows it’s learning. Wired. Retrieved from https://www.wired.com/story/facebooks-crackdown-ahead-of-german-election-shows-its-learning/

Latour, B. (1999). Pandora's hope: Essays on the reality of science studies. Cambridge, MA: Harvard University Press.

Levine, M. (2013, May 28). Controversial, harmful and hateful speech on Facebook. Retrieved February 26, 2019, from https://www.facebook.com/notes/facebook-safety/controversial-harmful-and-hateful-speech-on-facebook/574430655911054

Medeiros, B. (2017). Platform (non-)intervention and the “marketplace” paradigm for speech regulation. Social Media + Society, 3(1), doi:10.1177/2056305117691997

Molotch, H. (1976). The city as a growth machine: Toward a political economy of place. American Journal of Sociology, 82(2), 309-332. doi:10.1086/226311

Mueller, M. L. (2015). Hyper-transparency and social control: Social media as magnets for regulation. Telecommunications Policy, 39(9), 804–810. doi:10.1016/j.telpol.2015.05.001

Mumford, L. (1966). The first megamachine. Diogenes, 14(55), 1-15. doi:10.1177/039219216601405501

Nieborg, D. (2017, November 10). Facebook messenger and the political economy of platforms. Presentation given as part of the Marketing Research Seminar Series presented by the Schulich School of Business at York University.

Nipps, K. (2014). Cum privilegio: Licensing of the press act of 1662. The Library Quarterly: Information, Community, Policy, 84(4), 494–500. doi:10.1086/677787

Pavan, E. (2017). Internet intermediaries and online gender-based violence. In M. Segrave & L. Vitis (Eds.), Gender, Technology and Violence (pp. 62–79). Taylor & Francis. doi:10.4324/9781315441160-5

Perez, S. (2017, December 5). YouTube promises to increase content moderation and other enforcement staff to 10k in 2018. TechCrunch. Retrieved from https://techcrunch.com/2017/12/05/youtube-promises-to-increase-content-moderation-staff-to-over-10k-in-2018/

Reuters. (2017, July 28). Why Google and Facebook prove that online advertising is a duopoly. Fortune. Retrieved from http://fortune.com/2017/07/28/google-facebook-digital-advertising/

Rinallo, D., Basuroy, S., Wu, R., & Jeon, H. J. (2013). The media and their advertisers: Exploring ethical dilemmas in product coverage decisions. Journal of Business Ethics, 114(3), 425–441. doi:10.1007/s10551-012-1353-z

Russell, J. (2018, April 6). Myanmar group blasts Zuckerberg’s claim on Facebook hate speech prevention. TechCrunch. Retrieved February 26, 2019, from http://social.techcrunch.com/2018/04/06/myanmar-group-blasts-zuckerbergs-claim-on-facebook-hate-speech-prevention/

Schindler, P. (2017, March 20). Expanded safeguards for advertisers. Retrieved from https://blog.google/topics/ads/expanded-safeguards-for-advertisers/

Seb, J. (2018, June 25). A month after GDPR takes effect, programmatic ad spend has started to recover. Digiday. Retrieved August 18, 2018, from https://digiday.com/marketing/month-gdpr-takes-effect-programmatic-ad-spend-started-recover/

Shtern, J. (2009). Global internet governance and the public interest in communication (Unpublished doctoral dissertation). Université de Montréal, Montréal.

Sloane, G. (2017, March 17). As YouTube tinkers with ad formula, its stars see their videos lose money. Ad Age. Retrieved from http://adage.com/article/digital/youtube-feels-ad-squeeze-creators/308489/

Smith, G. (2017, January 25). Newspapers scale back Facebook and Snapchat content as meagre advertising returns disappoint. The Independent. Retrieved from http://www.independent.co.uk/news/business/news/newspapers-facebook-snpachat-adverts-meagre-returns-news-media-outlets-social-media-money-earnings-a7545331.html

Teich, D. (2017, June 14). How Youtube handled its brand safety crisis. Digiday. Retrieved from https://digiday.com/marketing/youtube-handled-brand-safety-crisis/

Trapp, F. (2016, April 15). Algorithm and advertising: The real impact of Instagram’s changes. Adweek. Retrieved from http://www.adweek.com/digital/francis-trapp-guest-post-instagram-algorithm/

Turow, J. (2012). The daily you: How the new advertising industry is defining your identity and your worth. New Haven, CT: Yale University Press.

Twitter, Inc. (2018). Annual report 2018. Retrieved from http://files.shareholder.com/downloads/AMDA-2F526X/6366391326x0x976375/0D39560E-C8B5-4BA0-83C4-C9B5C88D4737/TWTR_2018_AR.pdf

Valentino-Devries, J. (2018, January 31). Facebook’s experiment in ad transparency is like playing hide and seek. ProPublica. Retrieved from https://www.propublica.org/article/facebook-experiment-ad-transparency-toronto-canada

van Dijck, J. (2013). The culture of connectivity: A critical history of social media. Oxford: Oxford University Press.

Wu, T. (2016). The attention merchants: The epic scramble to get inside our heads. New York: Knopf.

Yeung, K. (2017). ‘Hypernudge’: Big data as a mode of regulation by design. Information, Communication & Society, 20(1), 118–136. doi:10.1080/1369118X.2016.1186713

Counter-terrorism in Ethiopia: manufacturing insecurity, monopolizing speech

This paper is part of Practicing rights and values in internet policy around the world, a special issue of Internet Policy Review guest-edited by Aphra Kerr, Francesca Musiani, and Julia Pohle.

Introduction

The potential of the internet as a site of protest, resistance, and social change in the context of repressive regimes has been the subject of scholarly inquiry in the past two decades (Baron, 2009; Castells, 2012; Cowen, 2009; Fenton, 2016; Pohle & Audenhove, 2017; Shirky, 2008). From the printing press to the internet, the transformative power of new communication technologies lies in their tendency to disrupt central authority and control. When faced with disruptive communication technologies, authoritarian governments’ knee-jerk reaction is usually one of confusion, suspicion, and prohibition. However, repressive regimes also learn to embrace these technologies with the aim of strengthening the existing order (Kalathil & Boas, 2003; Morozov, 2013). For example, MacKinnon’s (2011) notion of “networked authoritarianism” reflects how authoritarian regimes in countries like China not only adopt new communication technologies but also use these technologies to bolster their legitimacy. While some of the most widespread practices of using the internet as a means of control include surveillance (Fuchs & Trottier, 2017; Grinberg, 2017), censorship (Akgül & Kırlıdoğ, 2015; Yang, 2014), and hacking (Gerschewski & Dukalskis, 2018; Kerr, 2018), these practices are oftentimes informed by internet policy frameworks and rational-legal dynamics. As Hintz and Milan (2018) articulate, the institutionalisation and normalisation of surveillance practices into law and popular imagination in Western democracies indicates that the authoritarian repurposing of the internet is now a global phenomenon.

Through a case study of Ethiopia, this paper attempts to shed some light on how the rise of counter-terrorism legal frameworks shapes a nation state’s internet policy, especially as it pertains to the communication of dissent, resistance, and protest. Consistent with global trends in response to acts of terrorism, the Ethiopian government adopted counter-terrorism legislation in 2009. While this legislation was championed by the Ethiopian government as a way of combating terrorism, its evolution into arguably the most consequential legal framework for undermining freedom of expression reflects the neopatrimonial rational-legal design of the ruling party, the Ethiopian People’s Revolutionary Democratic Front (EPRDF). Neopatrimonialism, Clapham (1985) notes, is “a form of organisation in which relationships of a broadly patrimonial type pervade a political and administrative system which is formally constructed on rational-legal lines” (p. 48). Neopatrimonial governments are organised through modern bureaucratic principles with formally defined powers, although these powers are exercised “as a form of private property” (ibid.).

In this paper, I discuss how the Anti-Terrorism Proclamation of 2009 (hereafter referred to as “the EATP” or “the Proclamation”) has been appropriated to stifle freedom of expression involving mediated communication, especially on digital platforms. The study relies on a policy review of the EATP and other supplementary legal frameworks to assess provisions affecting digital freedoms, internet governance frameworks and political expression. In examining EPRDF’s adoption and use of a counter-terrorism legal framework, I situate my discussion within the neopatrimonial state framework. Drawing on literature that critically examines the corrosive impact of counter-terrorism laws on freedom of expression globally, I analyse how the EATP has severely undermined civil liberties of expression. In doing so, I demonstrate how the law has affected digital freedoms as well as other pillars of a democratic polity such as journalistic practice. I conclude by highlighting the implications of the Proclamation in projecting a highly restrictive Ethiopian internet policy framework as it pertains to regulation and surveillance.

The global rise of counter-terrorism laws

The terrorist attacks of 11 September 2001 in the US, as well as other similar incidents in different parts of the world, have caused profound changes in political, economic, and social relations globally. From communication systems to immigration flows to financial transactions, nations have aggressively sought a wide range of mechanisms to proactively curb potential threats (Birkland, 2004; Croucher, 2018). While executive organs such as law enforcement bodies and even militaries are commonly part of the counter-terrorism apparatus, the most conspicuous common denominator across nations has been the rise of what came to be known as counter-terrorism laws (De Frias, Samuel, & White, 2012).

The recent prominence of counter-terrorism laws across the world has had significant implications for the study of global terrorism from legal and policy perspectives, especially in terms of determining what does (and does not) constitute an act of terrorism. In this regard, the lack of a universal definition of terrorism is unsurprising; producing one may well be an impossible task. Although such fluidity of the term is not new, the executive delimitation of terrorism has been conditioned by Resolution 1373 of the United Nations Security Council, issued on 28 September 2001 following the terrorist attacks on the US earlier that month. The Resolution, among other things, called on nations to criminalise acts of terrorism as well as the financing, planning, preparation and support of terrorism. In order to expedite the directive, the United Nations Security Council (UNSC) created a new Counter-Terrorism Committee that was tasked with overseeing counter-terrorism actions adopted by member states. While the resolution directed member states to step up their counter-terrorism efforts, it did not provide a framework to define what constitutes an act of terrorism. Roach et al. (2012) note that this has left individual nations to define terrorism according to their contextual concerns. This approach is not unexpected given how international counter-terrorism law and policy involve multiple layers of actors and stakeholders as well as the “interplay between international, regional and domestic sources of law” (Roach, Hor, Ramraj, & Williams, 2012, p. 3).

The rather elastic framing of terrorism, coupled with the rise of counter-terrorism laws across nation states, has brought renewed concerns about the infringement of basic human rights. Well-known post-9/11 counter-terrorism activities in Guantanamo Bay or American “black sites” in some European countries (secret prisons mostly operated by the CIA where inmates have no rights other than those afforded to them by their detainers), as well as rendition sites in countries like Egypt, have demonstrated that there is a thin line between curbing terrorist acts and violating the basic right to be free from torture and degrading treatment (Setty, 2012). In addition to concerns over torture and degrading treatment, counter-terrorism efforts have also ignited debate on striking the right balance between thwarting terrorism and ensuring expressive, associational and assembly freedoms (Schneiderman & Cossman, 2002). Of critical importance here is how UNSC-endorsed counter-terrorism laws have created an added impetus for authoritarian governments to criminalise legitimate forms of domestic dissent (Roach, 2015).

Because of their reactive posture, counter-terrorism laws are closely linked with state securitisation. Securitisation, however, is subject to misperception in its framing of disorder. In many instances, as Bergson (1998) notes, disorder can be a construct of the state that emanates from a clash between competing interests or needs. In this sense, securitisation generates insecuritisation by creating fear, which in turn empowers the state to expand its control. As Karatzogianni and Robinson (2017) highlight, securitisation “involves framing-out any claims, demands, rights or needs, which might be articulated by non-state actors. Such actors are simply disempowered, and either suppressed and ‘managed’ or paternalistically ‘protected’” (p. 287). By reducing social problems and differences to security issues, securitisation considerably expands state power by creating emergencies to combat imagined “dangers” (Bigo, 2000; Freedman, 1998; Gasteyger, 1999). In tandem with this securitisation rationale, many authoritarian and quasi-authoritarian states have aligned themselves with what came to be loosely known as the global “war against terrorism” front. In the process, these states have intensified the use of the counter-terrorism apparatus, including legislative means, to revamp internet policy frameworks, which in turn has direct ramifications for civil liberties.

It should be noted that the appropriation of counter-terrorism legal frameworks for authoritarian ends is not a uniquely Ethiopian phenomenon. For example, Egypt adopted its own version of a counter-terrorism law in 2015 that significantly curbed the rights of freedom of assembly, association and expression. Formally referred to as the Law of Organising the Lists of Terrorist Entities and Terrorists, Egypt’s counter-terrorism legislation gives the government a mandate to legally exercise surveillance over Egyptians as well as penalise those who oppose or criticise state policies and practices. Egypt’s counter-terrorism law has been criticised for criminalising dissent, usually by conflating crimes committed by violent groups with peaceful acts of expression that are critical of the government. Hamzawy (2017, p. 17) notes that, by employing vague language that is prone to arbitrary interpretation, the terrorism law “does not require the government’s accusations of terrorist involvement to be proven through transparent judicial proceedings before individuals are placed on the list.”

Another African country that has adopted a counter-terrorism law recently with controversial outcomes is Cameroon. The Law on the Suppression of Acts of Terrorism in Cameroon (No. 2014/028) was enacted in 2014 against a backdrop of an initiative to contain threats from designated terrorist organisations, most notably Nigeria’s Islamist Jihadist group, Boko Haram. While the law won notable support originally, its eventual deployment raised serious concerns over infringement of rights of expression protected under the Cameroonian Constitution and international human rights law. According to a report by the Committee to Protect Journalists (CPJ) (2017, p.7), the counter-terrorism legislation has been especially criticised for penalising journalists by conflating “news coverage of militants or demonstrators with praise,” resulting in journalists not knowing “what they can and cannot report safely, so they err on the side of caution.” One of the most notable cases involved Radio France Internationale (RFI) journalist Ahmed Abba, who is serving a ten-year prison sentence on terrorism charges for his reporting on the militant group Boko Haram after he was convicted by a military tribunal of “non-denunciation of terrorism” and “laundering of the proceeds of terrorist acts” (CPJ, 2017, p. 7).

Ethiopia, EPRDF and counter-terrorism

The Federal Democratic Republic of Ethiopia (FDRE) has been ruled by EPRDF since 1991. A coalition of four ethnically organised political parties, EPRDF instituted a highly centralised, top-down administrative structure that championed an ethno-nationalist political programme (Gashaw, 1993; Gudina, 2007; Habtu, 2003). Although EPRDF’s rule has projected a nominal democratic façade of elections, its legitimacy to govern has been called into question several times (Aalen & Tronvoll, 2009; Lyons, 2016). In the most recent national elections of 2015, for example, EPRDF declared victory over every parliamentary seat to extend its already protracted longevity. In the second most populous country in Africa, home to contested ethnic, ideological, cultural, and political worldviews, EPRDF’s complete dominance of the rational-legal apparatus of the Ethiopian state has been anything but representative of the Ethiopian public.

In its 28 years of dominance of the Ethiopian government, EPRDF deployed several mechanisms to quell alternative political ideologies as well as the individuals and organisations that express them. The mechanisms through which EPRDF strove for a political monism that guarantees its supremacy range from outright human rights violations to ideological warfare (Allo, 2017; Gudina, 2011; Kidane, 2001; Tronvoll, 2008). Arguably, the most common strategy EPRDF deployed to assert its power involved the mobilisation of its security and executive apparatus to go after political dissidents, many of whom were reportedly imprisoned, exiled, or harassed, and some of whom disappeared or died (Di Nunzio, 2014; Gudina, 2011; Vestal, 1999). Secondly, EPRDF was involved in a mass ideological indoctrination of its political programme (Abbink, 2017; Arriola & Lyons, 2016). Across federal and state government offices, state-owned enterprises, and state-run higher education institutions, devotion to the ideals of EPRDF’s abyotawi dimokrasi1 became the definitive rubric for reward and punishment. As former Prime Minister of Ethiopia and author of EPRDF’s political-cum-economic programme, Meles Zenawi believed the long-term success of his party was contingent on the successful branding of “developmentalism” 2 and the creation of a mass devoted to it (Fana, 2014; Matfess, 2015). Thirdly, EPRDF was accused of fostering a state-sponsored social engineering of the Ethiopian people through an ethnic federalism design. By forging regional states along ethnic fault lines, EPRDF, despite its unpopularity, managed to sustain its political longevity through a divide-and-rule strategy that created mistrust and animosity between different ethnic groups (Bélair, 2016; Mehretu, 2012). Fourthly, and crucial to the current study, EPRDF laid out a neopatrimonial rational-legal network in which state resources were systematically channelled toward party interests (Kelsall, 2013; Weiss, 2016). This blurred the demarcation between party and government, resulting in the rise of, among other things, a justice system that is loyal to EPRDF interests. It is these neopatrimonial interlocks between the Ethiopian legislative and judiciary organs, coupled with global shifts in counter-terrorism strategies, that cultivated the necessary conditions for the introduction of the EATP in 2009.

An influential player in the geopolitical and diplomatic affairs of the African continent, the FDRE is a key ally of the US in combating terrorism and terrorist groups in the Horn of Africa. The US and FDRE have established multiple counter-terrorism partnerships that specifically target designated terrorist groups such as Al Shabab in neighbouring Somalia (for example, see Agbiboa, 2015). In spite of its abysmal human rights record, especially between 2005 and 2018, Ethiopia continued to be regarded highly by the US and its allies due to the strategic alliance it offers in combating terrorism. Nevertheless, for Ethiopia’s ruling party, this partnership is as much about combating terrorism as it is about extending its grip on political power, which has now lasted for nearly three decades (Clapham, 2009). EPRDF has been accused of repurposing counter-terrorism apparatuses—intelligence and surveillance systems, military equipment, and technical knowhow—financed and set up by its Western allies to quell critical expression, organisation and assembly domestically (Turse, 2017). In spite of years of US Department of State country reports that document state-sponsored human rights abuses, the US continued to follow a policy of appeasement toward the Ethiopian government, possibly to avoid the disruption of its geopolitical priority in the region. In this sense, it is plausible to argue that EPRDF views this partnership as critical leverage, one that is aimed at keeping outside political interference at bay, thereby effectively silencing external pressures for political reform. In the interest of maintaining their strategic priorities, EPRDF’s Western partners have chosen to be “oblivious to or even ignorant of Ethiopia’s worsening political exclusivity” (Workneh, 2015, p. 103), allowing the ruling party to undermine basic human rights, without meaningful accountability, under the guise of counter-terrorism efforts.

It is against this background that Ethiopia adopted a counter-terrorism legal framework in 2009, although, in prior years, it was already involved in other counter-terrorism activities, including the US-backed military campaign against the Al Qaeda-affiliated Islamic Courts Union (ICU) in 2006 (Rice & Goldenberg, 2007). Since its enactment by the EPRDF-dominated Ethiopian parliament, the EATP has been extensively used to prosecute hundreds of individuals, including journalists, opposition political party members, and civil society groups. 3 In many ways, the Ethiopian government’s actions since the adoption of the Proclamation in 2009 justified the concerns of human rights groups, who have heavily criticised the law for being dangerously vague in framing terrorist acts, violating international human rights law, and dismantling criminal justice due process standards. Some observers highlight that the EATP has become the most potent tool to stifle legitimate forms of critical expression, organisation, and assembly (Kibret, 2017; Sekyere & Asare, 2016).

The fatal consequences for freedom of speech of Ethiopia’s adoption of a counter-terrorism framework were largely predictable because of the Ethiopian government’s poor track record on human rights. Several nations’ rush to adopt counter-terrorism laws has been motivated by the idea of creating a lawful means to bypass existing criminal justice procedures that may not be speedy or effective enough to respond to national security threats (Daniels, Macklem, & Roach, 2002). In this sense, counter-terrorism laws empower governments to exercise a “state of exception” where, under perceived or real terrorism threats, normal procedures of jurisprudence in criminal law may be circumvented in the spirit of upholding “the greater good.” As Roach et al. (2012, p. 10) succinctly summarised, the intent here is “accommodating terrorism and emergencies within the rule of law without producing permanent states of emergency and exception.” It is plausible to perceive, without overlooking critical loopholes, how countries with established democratic traditions would have better institutional mechanisms to combat corrosive uses of counter-terrorism laws. In a democratically fragile country like Ethiopia, where all branches of government including the judiciary are set up to buttress the self-proclaimed hegemonic project of the ruling party, the EATP has become the rule and not the exception (see Fig. 1 for EPRDF’s neopatrimonial interlocks in the context of the EATP).

Fig. 1: The neopatrimonial interlocks of the Ethiopian Anti-Terrorism Proclamation

Situating the Ethiopian counter-terrorism law apparatus under the neopatrimonial state framework

The anocratic design of the Ethiopian state has effectively created a monopoly of governance by EPRDF. 4 Through a rational-legal system that resembles a democratic polity but which, in practice, enables the continuity of one-party rule, EPRDF has projected itself as a vanguard elite of democracy and development in Ethiopia. For critics, however, EPRDF’s Ethiopia is neither developmental nor democratic, but rather a neopatrimonial state that has erected a complex rational-legal bureaucracy to channel public resources toward the group and individual interests of the ruling elite.

The concept of neopatrimonialism essentially encompasses a dualistic nature in which “the state is characterized by patrimonialisation and bureaucratization” [sic] (Bach, 2011, p. 277). This fusion, a quintessential characteristic of the neopatrimonial state, assumes a scenario where the “patrimonial logic coexists with the development of bureaucratic administration and at least the pretence of rational-legal forms of state legitimacy” (Van de Walle, 1994, p. 131). Such dualism can translate into a wide array of empirical situations, which mirror variations in the state’s capacity or failure to produce “public” policies. Neopatrimonialism, in this sense, is a modern, sophisticated form of patrimonialism.

Davies (2008) identifies two key features of neopatrimonial governance. Firstly, the neopatrimonial state personalises political authority significantly, both as an ideology and as an instrument. Secondly, such governments develop a conflicting rational-legal bureaucratic system and clientelistic relations, with the latter usually dominating the former. A number of scholars treat clientelism and patronage as integral components of neopatrimonialism (Bratton & Van de Walle, 1994; Eisenstadt, 1973; Erdmann & Engel, 2007). Clientelism represents “the exchange or brokerage of specific services and resources for political support, often in the form of votes” involving “a relationship between unequals, in which the major benefits accrue to the patron” and “redistributive effects are considered to be very limited” (Erdmann & Engel, 2007, p. 107). In a broad sense, it is the complexity and sophistication of this “brokerage” that distinguishes neopatrimonial clientelism from patrimonial clientelism. Unlike the “direct dyadic exchange relation between the little and the big man” (Erdmann & Engel, 2007, p. 107) that is characteristic of patrimonialism, the neopatrimonial state needs a network of brokers who have permeated the bureaucratic nexus, where they can relay the interests of the political centre to the periphery (Powell, 1970; Weingrod, 1969).

Making sense of the EATP as a neopatrimonial instrument of control

In the Ethiopian context, a recent example of how the rational-legal bureaucracy is upended along neopatrimonial lines is indicated by the adoption of the EATP in 2009. Although the Proclamation was conceived as a means by which the Ethiopian state could legally circumvent existing laws of criminal justice—which is commonly practiced in other countries with similar legal frameworks—recent trends indicate the Proclamation has been excessively used to criminalise domestic political opposition and critical speech. In the following, I will address how the EATP was used not as a security tool but rather as a means of safeguarding the ruling party’s dominance in three ways: (a) curbing digital freedoms; (b) monopolising the political narrative; and (c) manufacturing fear to incubate self-censorship.

Curbing digital freedoms

One of the ways the EATP set itself up as a legal framework with substantial ramifications for freedom of expression relates to its determination of what it deems to be evidence of terrorist acts. Specifically, the law’s focus on “digital evidence” warrants critical scrutiny in relation to its implications for digital expressions of dissent and resistance. For example, Villasenor (2011) demonstrates how digital storage enables authoritarian governments to track organised dissent online. Hellmeier (2016), who surveyed determinants of internet filtering as measured by the Open Net Initiative in 34 autocratic regimes, outlines the digital toolkits available to autocrats to control political activism on the internet. Dripps (2013) warns about the risks posed to privacy when unchecked access to digital evidence leads to the exposure of individuals’ “innocent and intimate information” (p. 51). Against this backdrop of authoritarian governments’ use of digital artifacts to stifle critical speech, the EATP’s definition of “digital evidence” poses a palpable risk to the communication of dissent, protest or resistance:

[Digital evidence refers to] information of probative value stored or transmitted in digital form that is any data, which is recorded or preserved on any medium in or by a computer system or other similar device, that can be read or perceived by a person or a computer system or other similar device, and includes a display, printout or other output of such data (FDRE, 2009, p. 4829).

While this definition by itself may be fairly acceptable in everyday use of the language, it warrants special scrutiny in terms of what it entails in a counter-terrorism context. In tandem with the overall character of the legislation, the broad definition of “digital evidence” leaves a vast latitude of interpretive discretion to the judiciary and executive branches of the government. In the Ethiopian context, the extensive neopatrimonial interlocks between the legislative body that adopted the EATP, the judiciary that interprets the law, and the executive branch that carries out punishments have undermined the credibility of due process. When seen against the neopatrimonial roots of the EATP, the oppressive conceptualisation of “digital evidence” is evident in three ways: information storage, transmission, and consumption.

The information storage imperative poses a threat to digital freedoms because, at its core, it is an attempt to dissolve the notion of communication devices such as computers, cell phones and storage drives as private entities. The mobile phone or the computer is not only an information processing device, but a physical space where individuals purposefully (documents, pictures, audio, video, etc.) or inadvertently (cookies, search history, caches, etc.) store crucial information that enables them to archive different aspects of their lives. It gives them control over memory by enabling a sense of permanence. In this sense, the individual’s communication device has become an extension of private personhood (Conger, Pratt, & Loch, 2013; Kotz, Gunter, Kumar, & Weiner, 2016; Weber, 2010).

It should be noted that government encroachment on private digital spaces, especially through information extraction, is neither unusual nor uniquely Ethiopian. 5 In 2016, for example, the Federal Bureau of Investigation (FBI) in the US asked Apple to help unlock an iPhone belonging to a shooter responsible for the deaths of 14 people in San Bernardino, California (Nakashima, 2016). The case ended up in court because Apple declined to help the FBI, arguing that developing software to access the phone could be reused in several other instances, thereby endangering encryption and data privacy altogether. In 2015, the New York Times reported how pro-Syrian government hackers targeted the cellphones and computers of rebel fighters in an attempt to extract the latter’s contacts and operations (Sanger & Schmitt, 2015). In the Ethiopian case, the goal of state-sponsored encroachment on the private digital space, namely information extraction, is consistent with similar practices globally. However, Ethiopia offers a compelling case of an attempt to institutionalise the de-personalisation of the private communication device through a counter-terrorism legal framework that is arguably aimed at acts of political dissent rather than terrorism. The nebulous designation of digital evidence as “information of probative value” has indeed resulted in the prosecution of several individuals charged under the counter-terrorism law. 6

Extensive policing of communication technology devices by the Ethiopian government has been a common practice. For example, until recently, the government required citizens to register their laptops with the Ethiopian Customs Authority before travelling out of the country, so that they could not bring new devices in upon return. Between 2017 and 2018, Ethio-Telecom, the state-owned telecommunications operator which has a monopoly over voice, text, and data services in Ethiopia, required citizens to register their phones with the company in order to obtain service. Any phone that was not registered would not get access to telecommunication services in Ethiopia. It is within this already hostile ICT environment that the government, through the EATP, moved to dismantle citizens’ reasonable expectation that their communication devices are private. Several journalists, bloggers, and opposition party members report how the government confiscates their mobile phones and personal computers in its “arrest first, find evidence later” approach. Sileshi Hagos is a good case in point here. He was briefly detained and interrogated by government security forces about his fiancée Reeyot Alemu, a journalist who was imprisoned on terrorism charges for her alleged communication with the banned, terrorist-designated opposition party Ginbot 7 (Sreberny, 2014). 7 The government confiscated his laptop, presumably to extract information related to his and Reeyot Alemu’s alleged communication with Ginbot 7 (Pen International, 2011).

While the EATP’s designation of “storage” as an important element of “digital evidence” empowers the government to encroach on personal communication devices, perhaps the more dangerous way in which the law sets the state up to dissolve individual privacy rights lies in the government’s newfound legal authority to use information intercepted from communication exchanges in the digital sphere. While the EATP’s provision of legal protection for the Ethiopian intelligence apparatus to eavesdrop on citizens’ communication curbs privacy rights, 8 the more dangerous provision involves the authorisation of mass surveillance through communication service providers. The EATP’s stipulation that “any communication service provider shall cooperate when requested by the National Intelligence and Security Service to conduct the interception” (FDRE, 2009, p. 4834) directly positions Ethio-Telecom, the sole telecommunication service provider in Ethiopia, as a site of unchecked mass surveillance. A massive state-owned monopoly in the telecommunications sector of Ethiopia with more than 59 million de facto clients (ITU, 2018b), Ethio-Telecom has been implicated in citizen monitoring in several instances. During the 2005 general elections, for example, the Ethiopian government ordered the state-owned telecommunication provider to shut down the SMS system after opposition groups successfully deployed text-based campaigns (Abdi & Deane, 2008). Horne and Wong (2014) detail how the Ethiopian government acquires surveillance technologies from several countries, technologies that are then oftentimes integrated into Ethio-Telecom operations. This results in unrestricted access to call records, internet browsing logs, and instant messaging platforms (Marquis-Boire, Marczak, Guarnieri, & Scott-Railton, 2013). In 2015, a massive online data dump involving the Italian commercial surveillance company Hacking Team revealed numerous pieces of evidence, including email transcripts, invoices, and technical manuals, that directly implicated the Ethiopian government. Hacking Team’s surveillance products were used by Ethiopia’s Information Network Security Agency (INSA) to acquire communication involving journalists affiliated with Ethiopian Satellite Television (ESAT), a US- and Europe-based network known for its critical views on EPRDF’s rule (Currier & Marquis-Boire, 2015). In this sense, the EATP’s directive for communication providers in Ethiopia—Ethio-Telecom by default—to relinquish the private information of users only formalises what many considered to be a long-standing exercise of institutional control over citizens’ communication.

In addition to Ethio-Telecom, this stipulation enables the Ethiopian National Intelligence and Security Service (NISS) to require third-party communication service providers such as internet cafés to keep records of users’ online activities. Requiring third-party communication service providers to monitor and report users’ activities is not uncommon in other parts of the world. In the Ethiopian case, it is particularly concerning because the majority of users that rely on computers do not access the internet from their households but from third-party public providers such as internet kiosks, cafés, hotels, and schools.

The storage and transmission elements of “digital evidence” are compounded by the consumption component, which directly implicates user behaviour. Under the “Failure to Disclose Terrorist Acts” section, the EATP stipulates, among other things, that anyone who fails to disclose information or evidence that can be used to prosecute or punish a suspect involved in “an act of terrorism” will be punished with rigorous imprisonment (FDRE, 2009, p. 4832). The danger of this provision of the EATP lies in the parametric nebulousness of “an act of terrorism”, which emanates from the contested conceptualisation of terrorism itself. If a journalist receives an email communication from one of the terrorist-designated Ethiopian political organisations and he/she keeps the name of the source anonymous as a matter of journalistic ethics, the EATP empowers the state to prosecute the journalist through the “Failure to Disclose Terrorist Acts” provision. For many journalists, the challenge here is that the extensive popular support the Oromo Liberation Front (OLF) and Ginbot 7 enjoy compels them to report on the organisations’ activities as a matter of public interest. For EPRDF, blacklisting these organisations serves the purpose of making them obsolete in the political arena. Journalists who transmit information regarding such organisations as OLF and Ginbot 7 in public discourse inevitably run the risk of being charged as terrorists or accomplices of terrorism.

Monopoly of political narrative

Although the various charges carried out under the premises of the EATP by the Ethiopian government differ in their scope and nature, a sizable number of the cases have serious implications for the state of freedom of expression, especially mediated critical speech. In this sense, it is of no surprise that the EATP has probably been put into retributive effect more than any other legal framework related to communication involving electronic media. Since its enforcement, the law has disproportionately targeted community members who are involved in the dissemination of information through traditional and digital media platforms, including bloggers, journalists, and freelance writers. In Ethiopian Satellite Television and Oromia Media Network v The Federal Public Prosecutor, the US-based television stations Ethiopian Satellite Television (ESAT) and Oromia Media Network (OMN) were accused of disseminating information deemed to be in the interest of Ginbot 7 and OLF, groups designated as terrorist by the Ethiopian government. The underlying argument of the Federal Public Prosecutor was based on the assumption that disseminators of information involving terrorist-designated groups act as accessories to terrorism. The EATP employs very broad and ambiguous language that criminalises speech deemed to be an “encouragement” of terrorism, whatever the latter may be, through the interpretive lens of the Ethiopian government. Consider Article 6 of the Proclamation:

Whosoever publishes or causes the publication of a statement that is likely to be understood by some or all of the members of the public to whom it is published as a direct or indirect encouragement or other inducement to them to the commission or preparation or instigation of an act of terrorism…is punishable with rigorous imprisonment from 10 to 20 years [emphasis mine] (FDRE, 2009, p. 4831).

When the determination of what encompasses encouragement of an act of terrorism is made based on the “likely” understanding of “members of the public”, the outcome invites arbitrary interpretation, jurisprudence, and execution of the law. In other words, by keeping the law as vague and broad as possible, the government can choose to use it haphazardly in order to stamp out legitimate acts of political expression and dissent. Consider, for example, the case of Reeyot Alemu Gobebo, a former contributor to the weekly newspaper Feteh. She was convicted on three counts under the terrorism law for her writings that were highly critical of the ruling party and the former Prime Minister of Ethiopia, Meles Zenawi, who was persistent in his characterisation of members of the free press as “messengers” of terrorist groups (Abiye, 2011). Although Reeyot Alemu was formally convicted of having ties with terrorist groups—a common blanket accusation the Ethiopian government invokes to arrest journalists and freelance writers—it is important to note that Reeyot Alemu and other journalists who were imprisoned on terrorism charges were targeted by the government for their continued journalistic practices, which were viewed by EPRDF as divergent from its hegemonic rule (see CPJ, 2012; Dirbaba & O’Donnell, 2012).

In other words, when journalists such as Reeyot Alemu report about groups such as Ginbot 7 and OLF, their actions are justified by the enormous public interest imperative that is at stake. Yet if and when a journalist, in the words of the EATP, “publishes or causes the publication of” a statement involving groups or individuals designated as terrorists by the Ethiopian government, they run the very likely risk of being imprisoned. Everyday journalistic routines of establishing a source, conducting an interview, or simply relaying a press release involving designated “terrorist organisations” can easily become prosecutable acts. 9

Manufacturing fear, fostering self-censorship

While the appropriation of the EATP to target media professionals by tying them to political groups controversially designated as terrorist is in and of itself an attack on the freedom of expression enshrined in the Ethiopian Constitution, 10 the more dangerous consequence is probably the chilling effect this “example” has set for ordinary citizens. The indiscriminate use of “terrorist” to refer to journalists reporting on opposition groups has now evolved to include individuals whose political, economic, social or human rights opinions differ from EPRDF’s narrative. For example, Workneh (2015) notes how legal frameworks such as the Anti-Terrorism Proclamation adopted by the government have created a cloud of insecurity and fear among Ethiopian social media users when it comes to political opinions. The thin line between “dissent” and “terrorism” leads users to unwittingly undergo different forms of self-censorship in the digital sphere, a scenario that enables the government to create a subdued public that is reluctant to participate in a counter-hegemonic narrative.

This “fear factor” born out of the government’s criminalisation of critical speech is compounded by the EATP’s interception provisions, which authorise the National Intelligence and Security Service (NISS), upon obtaining a court warrant, to: intercept or conduct surveillance on the telephone, fax, radio, internet, electronic, postal and similar communications of a person suspected of terrorism; enter any premises in secret to enforce the interception; or install or remove instruments enabling the interception. As indicated earlier in this article, the Federal Public Prosecutor has presented transcripts of phone conversations obtained through government wiretapping as evidence in a court of law in Soliana Shimeles et al. v the Federal Public Prosecutor. 11 Government infiltration of the private communications of Ethiopian citizens—especially activists and journalists with critical opinions—has become a common practice since the EATP was put in place in 2009. 12

Conclusion

Legal frameworks such as the EATP stipulate broad and vague definitions of terrorism which, in turn, are used to frame critical speech as terrorist acts and to directly prosecute critics of the Ethiopian government. More importantly, however, it is sensible to argue that the Ethiopian government’s actions through the EATP could be seen as a long-term proactive strategy of creating a rational-legal bureaucracy—consistent with the neopatrimonial logic—that is subject to arbitrary interpretation and execution at the will of the state. The result is the making of a public that is unsure about what could be considered a “terrorist” message as opposed to “normal” speech, which in turn incubates a widespread culture of self-censorship. Consequently, the much-publicised prosecution of the Zone 9 bloggers and other online political activists in Ethiopia through the EATP and other legal frameworks is not necessarily an exercise in stifling the views of the defendants per se, but rather what they represent in terms of a young, critical and digitally literate Ethiopian populace that is in the making.

As a practical matter, it is evident that the EATP compounds the highly restrictive internet policy frameworks in Ethiopia informed by such legal frameworks as the Telecom Fraud Offence Proclamation of 2012. Elsewhere, I have argued how Ethiopia’s internet policy frameworks negatively affect user activity (Workneh, 2015), and outlined how the highly centralised, top-down, and monopolistic Ethiopian telecommunication policy adversely affected the quality, access, and usability of digital platforms for Ethiopians (Workneh, 2018). 13 The outdated legislative frameworks that shape Ethiopia’s digital ensemble need a reboot. This recourse should envisage a shift from vanguard centralism to a participatory, multi-stakeholder, and equitable paradigm. Inclusive, people-centred internet policies have paid dividends to citizens of other African countries like Kenya, where the highly successful mobile finance platform m-pesa, for example, brought about tangible results in information justice and financial inclusion (see e.g., Jacob, 2016).

It is in this spirit that, as Ethiopia currently undergoes an uncertain political reform (Burke, 2018) that includes the ongoing scrutiny of its counter-terrorism legislation (Maasho, 2018), a cautious diagnosis of what to do with the EATP is paramount. One approach to addressing the corrosive outcomes of the EATP is to get rid of the law in its entirety. This view is not uncommon. Brownlie (2004) argues there should be no category of a law of terrorism and that terrorism cases should be conducted “in accordance with the applicable sectors of public international law: jurisdiction, international criminal justice, state responsibility, and so forth” (p. 713). The second approach is to keep the EATP while making significant revisions, especially to provisions that have been identified as problematic for civil liberties. While an argument can be made for the merits and shortcomings of both, the execution of either approach does not necessarily guarantee the right to freely express opinions. In my view, the EATP is one instrument of EPRDF’s multifaceted neopatrimonial apparatus. Without a comprehensive political reform that ensures genuine multi-stakeholder participation and the termination of the neopatrimonial order, any action against the EATP, noble as it may be, will fall short of a meaningful stride toward a free society. The most consequential provision of the EATP is not any of the language that directly curbs freedom of expression but rather the Proclamation’s designation of a politically homogeneous legislative body with the power to “proscribe and de-proscribe an organization as terrorist organization” [sic] (FDRE, 2009, p. 4837). It is this very clause that has enabled the EPRDF-dominated Ethiopian House of Peoples’ Representatives to proscribe opposition groups and their supporters such as OLF, Ginbot 7, and ONLF as terrorists 14, which in turn led to the persecution of thousands of Ethiopians. If the Ethiopian government’s legislative body were truly representative of the diverse political spectrum of the country, a politically motivated designation of dissenting individuals and organisations as terrorists would be highly unlikely, thereby minimising the likelihood of counter-terrorism legislation serving as an instrument of neopatrimonial control.

References

Aalen, L., & Tronvoll, K. (2009). The end of democracy? Curtailing political and civil rights in Ethiopia. Review of African Political Economy, 36(120), 193–207. doi:10.1080/03056240903065067

Abbink, J. (2017). Paradoxes of electoral authoritarianism: the 2015 Ethiopian elections as hegemonic performance. Journal of Contemporary African Studies, 35(3), 303–323. doi:10.1080/02589001.2017.1324620

Agbiboa, D. (2015). Shifting the battleground: The transformation of Al-Shabab and the growing influence of Al-Qaeda in East Africa and the Horn. Politikon, 42(2), 177–194. doi:10.1080/02589346.2015.1005791

Abdi, J., & Deane, J. (2008). The Kenyan 2007 elections and their aftermath: The role of media and communication. BBC World Service Trust.

Abiye, T. M. (2011, December 7). The journalist as terrorist: An Ethiopian story. Open Democracy. Retrieved from https://www.opendemocracy.net/abiye-teklemariam-megenta/journalist-as-terrorist-ethiopian-story

Akgül, M., & Kırlıdoğ, M. (2015). Internet censorship in Turkey. Internet Policy Review, 4(2). doi:10.14763/2015.2.366

Allo, A. (2017). Protests, terrorism, and development: On Ethiopia’s perpetual state of emergency. Yale Human Rights and Development Journal, 19(1), 133–177. Retrieved from https://digitalcommons.law.yale.edu/yhrdlj/vol19/iss1/4/

Arriola, L. R., & Lyons, T. (2016). Ethiopia: The 100% election. Journal of Democracy, 27(1), 76–88. doi:10.1353/jod.2016.0011

Bach, D. C. (2011). Patrimonialism and neopatrimonialism: Comparative trajectories and readings. Commonwealth and Comparative Politics, 49(3), 275-295. doi:10.1080/14662043.2011.582731

Bach, J.-N. (2011). Abyotawi democracy: neither revolutionary nor democratic, a critical review of EPRDF’s conception of revolutionary democracy in post-1991 Ethiopia. Journal of Eastern African Studies, 5(4), 641–663. doi:10.1080/17531055.2011.642522

Baron, D. (2009). A better pencil: Readers, writers, and the digital revolution. Oxford: Oxford University Press.

Bélair, J. (2016). Ethnic federalism and conflicts in Ethiopia. Canadian Journal of African Studies, 50(2), 295–301. doi:10.1080/00083968.2015.1124580

Bergson, H. (1998). Creative evolution. New York: Dover Publications.

Bigo, D. (2000). When two become one: Internal and external securitisations in Europe. In M. Kelstrup & M. Williams (Eds.), International Relations Theory and the Politics of European Integration: Power, Security and Community (pp. 171–204). London: Routledge. doi:10.4324/9780203187807-8

Birkland, T. (2004). “The world changed today”: Agenda‐setting and policy change in the wake of the September 11 terrorist attacks. Review of Policy Research, 21(2), 179–200. doi:10.1111/j.1541-1338.2004.00068.x

Bratton, M., & Van de Walle, N. (1994). Neopatrimonial regimes and political transitions in Africa. World Politics, 46(4), 453–489. doi:10.2307/2950715

Brownlie, I. (2004). Principles of public international law. Oxford: Oxford University Press.

Burke, J. (2018, July 8). “These changes are unprecedented”: how Abiy is upending Ethiopian politics. The Guardian. Retrieved from https://www.theguardian.com/world/2018/jul/08/abiy-ahmed-upending-ethiopian-politics

Castells, M. (2012). Networks of outrage and hope: Social movements in the internet age (1st ed.). Cambridge: Polity Press.

Clapham, C. (1985). Third world politics. London: Croom Helm.

Clapham, C. (2009). Post-war Ethiopia: The trajectories of crisis. Review of African Political Economy, 36(120), 181–192. doi:10.1080/03056240903064953

Committee to Protect Journalists (CPJ). (2012, August 3). Ethiopian appeals court reduces sentence of Reeyot Alemu. Retrieved December 5, 2018, from https://cpj.org/2012/08/ethiopian-appeals-court-reduces-sentence-of-reeyot.php

Committee to Protect Journalists (CPJ). (2017). Journalists not terrorists: In Cameroon, anti-terror legislation is used to silence critics and suppress dissent (Special Report). New York: Committee to Protect Journalists. Retrieved from https://cpj.org/reports/Cameroon-English-Web.pdf

Conger, S., Pratt, J. H., & Loch, K. D. (2013). Personal information privacy and emerging technologies. Information Systems Journal, 23(5), 401–417. doi:10.1111/j.1365-2575.2012.00402.x

Cowen, T. (2009). Create your own economy: The path to prosperity in a disordered world. New York: Dutton.

Croucher, S. (2018). Globalization and belonging: The politics of identity in a changing world (2nd ed.). Lanham: Rowman and Littlefield.

Currier, C., & Marquis-Boire, M. (2015, July 7). Leaked documents confirm Hacking Team sells spyware to repressive countries. The Intercept. Retrieved from https://theintercept.com/2015/07/07/leaked-documents-confirm-hacking-team-sells-spyware-repressive-countries/

Daniels, R. J., Macklem, P., & Roach, K. (Eds.). (2002). The security of freedom: Essays on Canada's Anti-Terrorism Bill. Toronto: University of Toronto Press. doi:10.3138/9781442682337-fm

Davies, S. (2008). The political economy of land tenure in Ethiopia (PhD Thesis, University of St. Andrews). Retrieved from http://hdl.handle.net/10023/580

De Frias, A. M. S., Samuel, K., & White, N. (Eds.). (2012). Counter-terrorism: International law and practice (1st ed.). Oxford: Oxford University Press.

Di Nunzio, M. (2014). ‘Do not cross the red line’: The 2010 general elections, dissent, and political mobilization in urban Ethiopia. African Affairs, 113(452), 409–430. doi:10.1093/afraf/adu029

Dirbaba, B., & O’Donnell, P. (2012). The double talk of manipulative liberalism in Ethiopia: An example of new strategies of media repression. African Communication Research, 5(3), 283–312.

Dripps, D. (2013). “Dearest property”: Digital evidence and the history of private “papers” as special objects of search and seizure. Journal of Criminal Law and Criminology, 103(1), 49–109. Retrieved from https://scholarlycommons.law.northwestern.edu/jclc/vol103/iss1/2/

Eisenstadt, S. N. (1973). Traditional patrimonialism and modern neopatrimonialism. London: Sage Publications.

Erdmann, G., & Engel, U. (2007). Neopatrimonialism reconsidered: Critical review and elaboration of an elusive concept. Commonwealth & Comparative Politics, 45(1), 95–119. doi:10.1080/14662040601135813

Fana, G. (2014). Securitisation of development in Ethiopia: the discourse and politics of developmentalism. Review of African Political Economy, 41(1), S64–S74. doi:10.1080/03056244.2014.976191

Federal Democratic Republic of Ethiopia (FDRE). (1995). Proclamation of the Constitution of the Federal Democratic Republic of Ethiopia. Federal Negarit Gazeta.

Federal Democratic Republic of Ethiopia (FDRE). (2009). A Proclamation on anti-terrorism. Federal Negarit Gazeta. Retrieved from http://www.refworld.org/docid/4ba799d32.html

Fenton, N. (2016). The internet of radical politics and social change. In J. Curran, N. Fenton, & D. Freedman (Eds.), Misunderstanding the Internet (pp. 173–202). London: Routledge. doi:10.4324/9781315695624-6

Freedman, L. (1998). International security: Changing targets. Foreign Policy, 110, 48–64. doi:10.2307/1149276

Fuchs, C., & Trottier, D. (2017). Internet surveillance after Snowden: A critical empirical study of computer experts’ attitudes on commercial and state surveillance of the internet and social media post-Edward Snowden. Journal of Information, Communication and Ethics in Society, 15(4), 412–444. doi:10.1108/jices-01-2016-0004

Gashaw, S. (1993). Nationalism and ethnic conflict in Ethiopia. In C. Young (Ed.), The rising tide of cultural pluralism: The nation state at bay? (pp. 138-157). Madison, Wisconsin: University of Wisconsin Press.

Gasteyger, C. (1999). Old and new dimensions of international security. In K. Spillmann & A. Wenger (Eds.), Towards the 21st Century: Trends in Post-Cold War International Security Policy (Vol. 4, pp. 69–108). Bern: Peter Lang.

Gerschewski, J., & Dukalskis, A. (2018). How the internet can reinforce authoritarian regimes: The case of North Korea. Georgetown Journal of International Affairs, 19, 12–19. doi:10.1353/gia.2018.0002

Grigoryan, A. H. (2013). A model for anocracy. Journal of Income Distribution, 22(1), 3–24. Retrieved from https://ideas.repec.org/a/jid/journl/y2013v22i1p3-24.html

Grinberg, D. (2017). Chilling developments: Digital access, surveillance, and the authoritarian dilemma in Ethiopia. Surveillance & Society, 15(3–4), 432–438. doi:10.24908/ss.v15i3/4.6623

Gudina, M. (2007). Ethnicity, democratisation and decentralization in Ethiopia: The case of Oromia. Eastern Africa Social Science Research Review, 23(1), 81–106. doi:10.1353/eas.2007.0000

Gudina, M. (2011). Elections and democratization in Ethiopia, 1991–2010. Journal of Eastern African Studies, 5(4), 664–680. doi:10.1080/17531055.2011.642524

Habtu, A. (2003). Ethnic federalism in Ethiopia: Background, present conditions and future prospects. Second EAF International Symposium on Contemporary Development Issues in Ethiopia. Addis Ababa, Ethiopia.

Hamzawy, A. (2017). Legislating authoritarianism: Egypt’s new era of repression (Paper). Washington, DC: Carnegie Endowment for International Peace. Retrieved from https://carnegieendowment.org/2017/03/16/legislating-authoritarianism-egypt-s-new-era-of-repression-pub-68285

Heinlein, P. (2012, January 8). Ethiopian politicians on trial for terrorism. Voice of America. Retrieved from https://www.voanews.com/a/ethiopian-politicians-on-trial-for-terrorism-136960163/159430.html

Hellmeier, S. (2016). The dictator’s digital toolkit: Explaining variation in internet filtering in authoritarian regimes. Politics & Policy, 44(6), 1158–1191. doi:10.1111/polp.12189

Horne, F., & Wong, C. (2014). “They know everything we do”: Telecom and internet surveillance in Ethiopia (Report). New York: Human Rights Watch. Retrieved from https://www.hrw.org/report/2014/03/25/they-know-everything-we-do/telecom-and-internet-surveillance-ethiopia

Human Rights Watch. (2013a). Ethiopia: Terrorism law decimates media. Human Rights Watch. Retrieved from http://www.hrw.org/news/2013/05/03/ethiopia-terrorism-law-decimates-media

Human Rights Watch. (2013b). “They want a confession” Torture and ill-treatment in Ethiopia’s Maekelawi police station. Human Rights Watch. Retrieved from https://www.hrw.org/report/2013/10/17/they-want-confession/torture-and-ill-treatment-ethiopias-maekelawi-police-station

International Telecommunication Union (ITU). (2018a). Country ICT data: Percentage of individuals using the internet. Retrieved from https://www.itu.int/en/ITU-D/Statistics/Pages/stat/default.aspx

International Telecommunication Union (ITU). (2018b). Ethio Telecom. Retrieved October 1, 2018, from https://telecomworld.itu.int/exhibitor-sponsor-list/ethio-telecom/

Jacob, F. (2016). The role of m-pesa in Kenya’s economic and political development. In M. M. Koster, M. M. Kithinji, & J. Rotich (Eds.), Kenya after 50. African histories and modernities. New York: Palgrave Macmillan. doi:10.1057/9781137574633_6

Kalathil, S., & Boas, T. (2003). Open networks, closed regimes: The impact of the internet on authoritarian rule. Washington, DC: Carnegie Endowment for International Peace.

Karatzogianni, A., & Robinson, A. (2017). Schizorevolutions versus microfascisms: The fear of anarchy in state securitisation. Journal of International Political Theory, 13(3), 282–295. doi:10.1177/1755088217718570

Kelsall, T. (2013). Business, politics, and the state in Africa: Challenging the orthodoxies on growth and transformation. London: Zed Books.

Kerr, J. A. (2018). Information, security, and authoritarian stability: Internet policy diffusion and coordination in the former Soviet region. International Journal of Communication, 12, 3814–3834. Retrieved from https://ijoc.org/index.php/ijoc/article/view/8542

Kibret, Z. (2017). The terrorism of ‘counterterrorism’: The use and abuse of anti-terrorism law, the case of Ethiopia. European Scientific Journal, 13(13), 504–539. doi:10.19044/esj.2017.v13n13p504

Kidane, M. (2001). Ethiopia’s ethnic-based federalism: 10 years after. African Issues, 29(1–2), 20–25. doi:10.2307/1167105

Kotz, D., Gunter, C. A., Kumar, S., & Weiner, J. P. (2016). Privacy and security in mobile health: A research agenda. Computer, 49(6), 22–30. doi:10.1109/MC.2016.185

Loriaux, M. (1999). The French development state as a myth and moral ambition. In M. Woo-Cumings (Ed.), The developmental state (pp. 235–275). New York: Cornell University Press.

Lyons, T. (2016). From victorious rebels to strong authoritarian parties: prospects for post-war democratization. Democratization, 23(6), 1026–1041. doi:10.1080/13510347.2016.1168404

Maasho, A. (2018, May 30). Ethiopian government and opposition start talks on amending anti-terrorism law. Reuters. Retrieved from https://uk.reuters.com/article/uk-ethiopia-politics/ethiopian-government-and-opposition-start-talks-on-amending-anti-terrorism-law-idUKKCN1IV1RL

MacKinnon, R. (2011). Liberation technology: China’s “networked authoritarianism.” Journal of Democracy, 22(2), 32–46. doi:10.1353/jod.2011.0033

Marquis-Boire, M., Marczak, B., Guarnieri, C., & Scott-Railton, J. (2013, March 13). You only click twice: FinFisher’s global proliferation. The Citizen Lab, University of Toronto. Retrieved from https://citizenlab.org/2013/03/you-only-click-twice-finfishers-global-proliferation-2/

Matfess, H. (2015). Rwanda and Ethiopia: Developmental authoritarianism and the new politics of African strong men. African Studies Review, 58(2), 181–204. doi:10.1017/asr.2015.43

Mehretu, A. (2012). Ethnic federalism and its potential to dismember the Ethiopian state. Progress in Development Studies, 12(2–3), 113–133. doi:10.1177/146499341101200303

Mkandawire, T. (2001). Thinking about developmental states in Africa. Cambridge Journal of Economics, 25(3), 289–314. doi:10.1093/cje/25.3.289

Morozov, E. (2013, March 23). Imprisoned by innovation. The New York Times. Retrieved from http://www.nytimes.com/2013/03/24/opinion/sunday/morozov-imprisoned-by-innovation.html?_r=0

Nakashima, E. (2016, February 17). Apple vows to resist FBI demand to crack iPhone linked to San Bernardino attacks. The Washington Post. Retrieved from https://www.washingtonpost.com/world/national-security/us-wants-apple-to-help-unlock-iphone-used-by-san-bernardino-shooter/2016/02/16/69b903ee-d4d9-11e5-9823-02b905009f99_story.html

Nunes Lopes Espiñeira Lemos, A., & Pasquali Kurtz, L. (2018). Sovereignty over personal data in Brazil: State jurisdiction facing denial of access to users’ data by online platform WhatsApp. Paper presented at the GigaNet: Global Internet Governance Academic Network Annual Symposium 2017. Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3107293

Pen International. (2011, September 23). Ethiopia: Two more journalists arrested under antiterrorism legislation; fears of torture. Pen International. Retrieved February 21, 2019, from https://pen-international.org/news/ethiopia-two-more-journalists-arrested-under-antiterrorism-legislation-fears-of-torture

Pohle, J., & Van Audenhove, L. (2017). Post-Snowden internet policy: Between public outrage, resistance and policy change. Media and Communication, 5(1), 1–6. doi:10.17645/mac.v5i1.932

Powell, J. D. (1970). Peasant societies and clientelist politics. American Political Science Review, 64(2), 411–425. doi:10.2307/1953841

Rice, X., & Goldenberg, S. (2007, January 13). How US forged an alliance with Ethiopia over invasion. The Guardian. Retrieved from https://www.theguardian.com/world/2007/jan/13/alqaida.usa

Roach, K. (2012). The criminal law and its less restrained alternatives. In V. V. Ramraj, M. Hor, K. Roach, & G. Williams (Eds.), Global anti-terrorism law and policy (pp. 91-121). Cambridge: Cambridge University Press. doi:10.1017/cbo9781139043793.007

Roach, K. (2015). Comparative counter-terrorism law comes of age. In K. Roach (Ed.), Comparative counter terrorism law (pp. 1-48). Cambridge: Cambridge University Press. doi:10.1017/cbo9781107298002.001

Roach, K., Hor, M., Ramraj, V. V., & Williams, G. (2012). Introduction. In V. Ramraj, M. Hor, K. Roach, & G. Williams (Eds.), Global anti-terrorism law and policy (2nd ed., pp. 1–16). Cambridge: Cambridge University Press. doi:10.1017/cbo9781139043793.001

Sanger, D., & Schmitt, E. (2015, February 1). Hackers use old lure on web to help Syrian government. The New York Times. Retrieved from https://www.nytimes.com/2015/02/02/world/middleeast/hackers-use-old-web-lure-to-aid-assad.html

Schneiderman, D., & Cossman, B. (2002). Political association and the Anti-Terrorism Bill. In R. J. Daniels, Macklem, P, & K. Roach (Eds.), The security of freedom: Essays on Canada's Anti-Terrorism Bill (pp. 173-194). Toronto: University of Toronto Press. doi:10.3138/9781442682337-012

Sekyere, P., & Asare, B. (2016). An examination of Ethiopia's anti-terrorism proclamation on fundamental human rights. European Scientific Journal, 12(1), 351–371. doi:10.19044/esj.2016.v12n1p351

Shirky, C. (2008). Here comes everybody: The power of organizing without organizations. New York: Penguin Press.

Setty, S. N. (2012). The United States. In K. Roach (Ed.), Comparative counter-terrorism law (pp. 49-77). Cambridge: Cambridge University Press. doi:10.1017/cbo9781107298002.002

Sreberny, A. (2014). Violence against women journalists. In A. V. Montiel (Ed.), Media and gender: a scholarly agenda for the Global Alliance on Media and Gender (pp. 35–43). Paris: UNESCO.

Sutherland, E. (2018, June 22). Digital privacy in Africa: Cybersecurity, data protection & surveillance. doi:10.2139/ssrn.3201310

Tronvoll, K. (2008). Human rights violations in federal Ethiopia: When ethnic identity is a political stigma. International Journal on Minority and Group Rights, 15(1), 49–79. doi:10.1163/138548708x272528

Turse, N. (2017, September 13). How the NSA built a secret surveillance network for Ethiopia. The Intercept. Retrieved October 14, 2017, from https://theintercept.com/2017/09/13/nsa-ethiopia-surveillance-human-rights/

van de Walle, N. (1994). Neopatrimonialism and democracy in Africa: With an illustration from Cameroon. In J. Widner (Ed.), Economic change and political liberalization in sub-Saharan Africa (pp. 129–157). Baltimore: Johns Hopkins University Press.

Vestal, T. M. (1999). Ethiopia: A post-Cold War African state. Westport: Praeger.

Villasenor, J. (2011). Recording everything: Digital storage as an enabler of authoritarian governments. Washington, DC: Brookings Institute.

Weingrod, A. (1968). Patrons, patronage, and political parties. Comparative Studies in Society and History, 10(4), 377–400. doi:10.1017/s0010417500005004

Weis, T. (2016). Vanguard capitalism: Party, state, and market in the EPRDF’s Ethiopia (PhD thesis, University of Oxford). Retrieved from https://ora.ox.ac.uk/objects/uuid:c4c9ae33-0b5d-4fd6-b3f5-d02d5d2c7e38

Workneh, T. (2015). Digital cleansing? A look into state-sponsored policing of Ethiopian networked communities. African Journalism Studies, 36(4), 102-124. doi:10.1080/23743670.2015.1119493

Workneh, T. (2018). State monopoly of telecommunications in Ethiopia: origins, debates, and the way forward. Review of African Political Economy. doi:10.1080/03056244.2018.1531390

Yang, F. (2014). Rethinking China’s internet censorship: The practice of recoding and the politics of visibility. New Media & Society, 18(7), 1364–1381. doi:10.1177/1461444814555951

Footnotes

1. See Bach (2011) for a discussion on EPRDF’s theory and practice of abyotawi dimokrasi.

2. EPRDF’s political discourse of “developmentalism” is rooted in the theory of the developmental state. The developmental state, according to Loriaux (1999), is “an embodiment of a normative or moral ambition to use the interventionist power of the state to guide investment in a way that promotes a certain solidaristic vision of national economy” (p. 24). The role of the elite developmental state model, Mkandawire (2001) contends, is “to establish an ‘ideological hegemony,’ so that its development project becomes, in a Gramscian sense, a ‘hegemonic’ project to which key actors in the nation adhere voluntarily” (p. 290). While EPRDF maintained that development trumps all other priorities, critics argue that the party’s adoption of the developmental state theory into a political program is nothing more than an attempt to institutionalize the rent-seeking interests of the ruling elite (for example, see Berhanu, 2013).

3. For example, Kibret (2017) has identified more than 120 cases under which the Federal Public Prosecutor has charged nearly one thousand individuals by citing the provisions of the EATP. Most of these individuals have been charged for alleged affiliation with domestic rebel groups proscribed by the Ethiopian parliament as “terrorist organizations” in June 2011. These rebel groups include Ginbot 7 for Justice, Freedom and Democracy (Ginbot 7), Ogaden National Liberation Front (ONLF), Oromo Liberation Front (OLF). From the 985 individuals prosecuted under the EATP between September 2011 and March 2017, a third of all the charges involve civilians who have “nothing to do with either terrorism or the rebel groups” (p. 524).

4. An anocracy represents a political system which is neither fully democratic nor fully autocratic, often being vulnerable to political instability. See Grigoryan (2013).

5. See Nunes Lopes Espiñeira Lemos & Pasquali Kurtz (2018) and Sutherland (2018) on global information extraction practices emanating from state-sponsored surveillance. In the Ethiopian context, information extraction usually involves the seizure of a digital apparatus by government officials to obtain information, although other forms of the practice, including surveillance, are also common. For example, see Horne & Wong (2014).

6. For example, in Soliana Shimeles et al. v the Federal Public Prosecutor involving ten bloggers and journalists as defendants, the Federal Public Prosecutor charged the defendants under Article 3 of the Anti-Terrorism proclamation accusing them of “serious risk to the safety or health of the public or section of the public” and “serious damage to property”. The prosecutor presented, as part of its evidence, several pages of transcripts of phone conversations belonging to the defendants.

7. See note 3 for more on Ginbot 7.

8. Article 14(4) of EATP states: “The National Intelligence and Security Services or the Police may gather information by surveillance in order to prevent and control acts of terrorism” (Federal Negarit Gazeta, 2009, p. 4834).

9. For example, the EATP was used to convict prominent Ethiopian media practitioners including Eskinder Nega, a journalist and blogger who received the 2012 PEN Freedom to Write Award, to serve a sentence of 18 years in prison (Dirbaba & O’Donnell, 2012). Another convicted journalist is 2012 Hellman-Hammett Award winner Woubshet Taye, who was sentenced to serve a 14-year sentence under the Anti-Terrorism Proclamation (Heinlein, 2012). Other journalists and media practitioners who faced charges under the anti-terrorism proclamation include Mastewal Birhanu, Yusuf Getachew and Solomon Kebede (Human Rights Watch, 2013a).

10. Article 29 (2) of the Ethiopian Constitution states: “Everyone shall have the right to freedom of expression without interference. This right shall include freedom to seek, receive and impart information and ideas of all kinds, regardless of frontiers, either orally, in writing or in print, in the form of art, or through other media of his choice” (FDRE, 1995, p. 9).

11. Zone 9 bloggers founding member, personal correspondence, 15 October 2017.

12. See Kibret (2017) for a complete list of cases involving the Ethiopian Anti-Terrorism Proclamation.

13. Ethiopia’s internet penetration, though improving, lags behind several other African countries. For details, see ITU (2018a).

14. In July 2018, the Ethiopian House of Peoples’ Representatives, upon the Cabinet of Ministers’ recommendation, removed Ginbot 7, OLF, and ONLF from its terror list. In January 2019, the FDRE government announced it pardoned 13,000 accused of treason or terrorism.

Beyond ‘zero sum’: the case for context in regulating zero rating in the global South


This paper is part of Practicing rights and values in internet policy around the world, a special issue of Internet Policy Review guest-edited by Aphra Kerr, Francesca Musiani, and Julia Pohle.

Introduction

The contestation of network neutrality 1 in the United States was arguably the predominant communications policy debate over the last decade (Bauer & Obar, 2014). This otherwise arcane aspect of telecoms policy became the locus of concerns about limiting freedom of expression online, the stifling of digital innovation and exacerbating market concentration. Although major policy developments on this issue occurred concurrently in several countries of the global South (Belli & De Filippi, 2016), it is the practice of ‘zero rating’ mobile apps – exempting content and services from data charges (de Miera, 2016) – that has wrested academic and media attention away from the US case. There are shrill arguments on either side. Those who oppose zero rating (henceforth ZR) frame it as a “pernicious” threat to network neutrality and the multiple social goods that it protects (notably innovation and expression) (Crawford, 2015; Malcolm, 2014). Proponents defend zero rating as an internet on-ramp for the billions offline (Katz & Callorda, 2015; West, 2015). Prevailing voices have thus reduced ZR to a zero sum game; one torn between the apparently incommensurate goals of facilitating access, and preserving a neutral network.

Moreover, with some notable exceptions (A4AI, 2016; Mozilla, 2017; Marsden, 2016), judgements on ZR have tended towards broader theoretical strokes rather than granular empirical analysis. This tendency has become pronounced because Facebook’s one-size-fits-all Free Basics programme – offered in one basic format in 63 countries worldwide (Internet.org, 2017) – has dominated consideration of the issue and shaped the contours of the debate accordingly. In fact, much of the ZR offered in the global South is tailored by individual carriers and varies considerably 2. There is no universal prescription for zero rating, so analyses should be rigorously contextualised.

Accordingly, this paper examines the mesh of competing concerns around ZR to identify the complex interrelationship between them. I contend that through a pragmatic and contextual approach, we can move beyond absolutist judgements and better defend the social goods sought both by advocates of net neutrality (Crawford, 2015; Van Schewick 2012) and digital inclusion (West 2015).

We can observe these polarised tendencies in regulatory decisions. Market absolutists such as the head of the US FCC, Ajit Pai, claim that his laissez-faire approach to ZR benefits “those with low incomes” and encourages “a competitive marketplace” (Brodkin, 2017). Preserving network neutrality scarcely registers as a concern in this judgement. Conversely, the veto on zero rating implemented by India’s TRAI in 2016 was based primarily on the perceived threat to an “open and non-discriminatory” network (TRAI, 2016). This ban negates the possible benefits of ZR to millions of economically disadvantaged Indian citizens. The prospect of a regulatory ripple effect from two of the world’s largest telecoms markets is genuine. It is essential therefore to develop empirical analyses that can contribute to informed and balanced ZR regulation; in other words, regulation that reconciles the rights of ZR users with no other means to access the internet, and the need to safeguard innovation, competition and free expression.

This article analyses the multiple forms of zero rating offered in four wireless markets – Brazil, Colombia, Mexico and South Africa – across two dimensions: political-economic and developmental. By using these contextual frames, I identify the factors that exacerbate or mitigate ZR’s impacts on net neutrality and access. By weighing up these factors, I contend that we can better identify circumstances in which ZR could be sanctioned as a short-term means to boost mobile internet access. Conversely, in other contexts, ZR constitutes an intolerable infringement upon network neutrality, local innovation and freedom of expression.

Wireless markets in the global South are a dynamic object of study, with market offerings and regulatory decisions often in flux. Zero rating represents this dynamism in miniature. The case studies presented here capture particular modes of enabling mobile internet access; some of which may be obsolete within months, while others may become consolidated as dominant business practices. Only by tracking this ‘moving target’, however, and by applying the dominant presumptions about ZR to actual market conditions, will we be able to make informed judgements and meaningful policy interventions.

Structure and contributions of this article

This paper offers three principal contributions to the existing literature, and proceeds in three stages. In the first section, in addition to proposing a working definition, I identify the main arguments regarding ZR’s impact on network neutrality and mobile internet access. I present my first contribution here: a typology of the six forms of zero rating most prevalent in these four wireless markets. This provides the set of definitions that I use in my analysis, and adds two significant sub-categories absent from existing typologies (Carrillo, 2016; Belli, 2017).

In the second section I present a fine-grained analysis of all mobile internet offerings in the four countries using this typology. This demonstrates the prevalence of zero rated mobile internet services therein.

The central contribution of this article features in the last section. Here I examine these four wireless markets across two analytical frames:

  • political-economic, where I scrutinise the wireless market in terms of concentration, market-share and ownership structure. Various traditions within the political economy of communication focus on these criteria in order to analyse market strength, including the institutional political economy tradition (as described in Mosco, 2008) and critical Marxist approaches (Fuchs, 2015). I, however, follow most closely the monopoly capital school developed prominently by McChesney (2000).
  • developmental, in which I assess the affordability and penetration of the mobile internet, the level of local innovation, as well as state-led initiatives to boost internet access. In this frame I use development indicators as commonly applied within ICT4D research (e.g., Levendis & Lee, 2013).

Thereafter I assess how these insights might be applied to the challenge of crafting effective public policy around ZR in the global South.

Methodology

These countries have been purposefully selected in order to generate a rich array of findings from a limited number of cases. Three continents are represented, thus recreating some of the wide geographical range encompassed by the global South. There is also a diversity of scenarios with regard to key variables such as affordability of mobile services and the presence of programmes like Facebook Inc.'s Free Basics. Finally, the four countries demonstrate different approaches to legislating network neutrality and offer the opportunity to examine the relationship between forms of network neutrality legislation and the extent to which it is compromised by ZR.

In terms of analytically useful commonalities, all four countries are classified as large, but less mature, telecoms markets (Groene, Navelekar, & Coakley, 2017). Accordingly, they could represent bellwethers for the rest of the global South in terms of market and regulatory trends. Finally, all four countries selected are ones in which material could be accessed in languages spoken by this researcher.

To delimit the study, only those carriers with more than 10% of national market share were included. All data regarding mobile data offerings were collected from the carriers’ websites and were accurate as of August 2017. Where offerings varied by region, data were collected for the largest metropolitan area – e.g., São Paulo for Brazil.

Zero rating, network neutrality and mobile internet access

Zero rating refers to the practice of mobile web content being offered to consumers by mobile ISPs (MISPs) without counting against their data allowance. Indeed, it is essential to note that ZR is a product of the artificial scarcity implied by the imposition of data caps, without which ZR would hold no attraction for existing mobile internet users. ZR can therefore represent a cost saving to users as data plans typically limit the volume a subscriber may use per billing period. MISPs and content platforms, meanwhile, offer the service based on the calculation that longer-term revenue will outweigh short term costs through increased take-up of mobile internet services. ZR has become increasingly ubiquitous in wireless markets in the global South where cost presents a greater obstacle to mobile internet access than in the global North (ITU, 2015).

Before proceeding further, it is important to settle on a precise definition of ZR. Rossini and Moore offer a useful starting point by classifying zero rating as a matter of billing management by MISPs that discriminates between web content through price, rather than technical network management (2015, p. 1). In turn, Marsden highlights the essential feature of positive discrimination of web content that characterises zero rating, as opposed to the negative discrimination implied by throttling or blocking (2016, p. 7). By combining these, I propose a definition of zero rating as the practice of positive discrimination of web content by mobile ISPs enabled by billing management practices. Using this definition rather than a strict focus on ‘free’ services is important because it captures the practice of differential pricing that is commonly used to sell app-specific bundles and that might otherwise escape analysis.
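To make this distinction concrete, here is a minimal sketch of zero rating as a billing-management practice. All names, hosts and values are hypothetical illustrations, not any carrier’s actual system: traffic to whitelisted hosts flows like any other traffic, but is simply never debited from the subscriber’s data allowance.

```python
# Minimal sketch of zero rating at the billing layer (hypothetical values).
# The network treats all traffic identically; only the metering differs.

ZERO_RATED_HOSTS = {"facebook.com", "whatsapp.net"}  # hypothetical whitelist

class DataPlan:
    def __init__(self, cap_mb: float):
        self.cap_mb = cap_mb
        self.used_mb = 0.0

    def record_traffic(self, host: str, volume_mb: float) -> None:
        """Meter traffic: zero-rated hosts are not counted against the cap."""
        if host not in ZERO_RATED_HOSTS:
            self.used_mb += volume_mb

    def cap_reached(self) -> bool:
        return self.used_mb >= self.cap_mb

plan = DataPlan(cap_mb=800)                 # e.g., a small 'apps plus cap' bundle
plan.record_traffic("facebook.com", 500)    # zero rated: not debited
plan.record_traffic("example.org", 500)     # open internet: debited
print(plan.used_mb, plan.cap_reached())     # -> 500.0 False
```

Whether zero-rated hosts remain reachable once the cap is exhausted is a separate billing rule; as discussed below, that choice distinguishes the more explicit ‘walled garden’ variants of ZR.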

Network neutrality

The concept of network neutrality features in discussions of zero rating because the former is compromised by the latter. Net neutrality refers to the normative goal that all data should move across the internet without being subject to discrimination based on origin or type (Wu, 2003). Academics and activists have interpreted net neutrality as a means to protect innovation and competition on the internet (Van Schewick, 2010), as well as users’ speech and information access rights (Nunziato, 2009). Regulatory actions have also been guided by such concerns, for example the BEREC ‘Guidelines on the Implementation by National Regulators of European Network Neutrality Rules’ (BEREC, 2016). By facilitating positive discrimination of web content, ZR constitutes a violation of network neutrality. By extension, ZR may also impede innovation, competition and free speech.

Zero rating necessarily favours access to certain web platforms at the expense of others. MISPs therefore assume a gatekeeper role that “pick[s] winners and losers online” and “undermines the vision of an open Internet where all applications have an equal chance of reaching audiences” (Van Schewick, 2016, p. 4). This is exacerbated by the fact that most ZR features globally dominant platforms (Viecens & Callorda, 2016). Indeed, findings from the Zero Rating Map show that in each of the 100 mapped countries, at least one Facebook-owned app is zero rated. Meanwhile, the lower user bases and shallower pockets of smaller content providers, start-ups and non-commercial services mean they are often left on the sidelines, which can distort competition and impede innovation. These effects may also manifest themselves amongst MISPs if zero rated offers serve to entrench the market power of dominant players.

At the same time as market distortions might be observed through the infringement of net neutrality, the freedom of expression of users may also be diminished. Zero rating favours certain speech and information resources at the expense of others, meaning that the internet’s potential as a democratic space of open communication – already threatened by state surveillance, corporate control over user data and widespread disinformation – is further imperilled. It is also possible that users become siloed within a ‘walled garden’ of content. Finally, it is important to note that the quest to collect user data often drives ZR schemes. This has been well documented in the case of Free Basics (LaFrance, 2016), and is also evident in jurisdictions such as Brazil, where the offer of zero rated applications becomes a means to circumvent internet regulation that prevents MISPs from monitoring the content of user communications (Presidencia da Republica, 2016).

There are, of course, counter-arguments. In the case of the wireless sector, if the market for MISPs is already competitive, then the presence of zero rating may not unduly distort it (Saenz, 2016; Galpaya, 2017). Moreover, if a smaller, struggling incumbent, or a new entrant, can use zero rated offers to entice more subscribers, this may result in greater competition. Another claim is that because MISPs benefit from users accessing an ecosystem of applications, the carriers themselves will act, out of economic self-interest, to prevent zero rating from becoming anti-competitive at the application layer (Eisenach, 2015). The argument follows that this would apply a natural brake to any tendency towards a non-neutral network.

In terms of user communication rights, one must ultimately be cognisant of the possibility that access to some applications may be better than none; a point that segues into discussion of the relationship between zero rating and mobile internet access.

Mobile internet access

The goal of increasing rates of mobile internet access is often invoked alongside net neutrality in discussions of ZR. This is because of the obvious potential that a cost-free form of mobile internet represents for boosting adoption. Increasing levels of mobile internet access among the estimated four billion people for whom the cost is prohibitive (ITU, 2015) is a goal that animates many NGOs, technology corporations and governments. Alongside the presumed commercial benefits for those providing the connectivity (the opacity of the economic arrangements negates the possibility of knowing for certain), the goal of increasing mobile internet access is justified on the basis that it will improve health, education, economic productivity and even democracy.

Although some research suggests that ZR is used in conjunction with a data cap (that permits open access to any web content within a pre-agreed data allowance) as a cost-saving measure (A4AI, 2016; Mozilla, 2017), for many users, zero rated offers may constitute their only access to the internet. Given the importance of messaging apps like WhatsApp for everyday communication in much of the global South (Galpaya, 2017), the significance of free access should not be understated.

One oft-repeated argument by proponents of zero rating (most notably the platforms and carriers) is that these services constitute an internet ‘on ramp’ for non-users. Facebook’s own research claims that 50% of Free Basics users go on to become full mobile internet subscribers (Facebook, 2015). Independent research offers some different perspectives. Surveys of 1,000 users conducted by the Alliance for Affordable Internet (A4AI) in each of Colombia, Peru, Ghana, Nigeria, Kenya, India, Bangladesh, and the Philippines showed that only 12% of respondents had not experienced the internet prior to using a zero rated service (A4AI, 2016). Similar research conducted in seven developing countries on behalf of the Mozilla Foundation also discounted the ‘on ramp’ theory (2017).

Other arguments that connect ZR to an increase in the provision of affordable access focus on the possibility that zero rating can boost innovation for impoverished users as they join the network and edge providers offer specialist services corresponding to their needs (Sylvain, 2015). Furthermore, some researchers – as well as industry actors (Brito, 2015) – note the possibility that financial arrangements between content providers and MISPs could be struck, funnelling revenue towards infrastructure build-out (Berglind, 2016). This would in turn facilitate increased rates of internet access. However, the opacity of these agreements negates knowing this for certain. As a final counterpoint, some observers fret that ZR might permit governments a ‘free pass’ on infrastructure investment (Rossini & Moore, 2015, p. 12).

Having concluded this brief survey, we should now classify the forms of zero rating available to consumers in the global South. The following typology is based on analysis of the four wireless markets featured in this research, as well as the wider literature.

Table 1: Typology of forms of zero – and Near-0 – rated data offers in the global South

| Driver | Model | Pre/Post-Pay 3 | Description | Example |
| --- | --- | --- | --- | --- |
| MISP-driven | Apps plus cap | Post | Unlimited access to suite of apps with data cap for complete internet | Tigo’s ‘Combo’ plan (Colombia) |
| MISP-driven | Add-on | Either | Single app made available as optional add-on, with data charge waived | TIM’s ‘Torcedor’ (Brazil) |
| MISP-driven | Triple-lock bundle | Pre | Time-limited data cap for a suite of apps | Movistar’s ‘Recarga’ (Mexico) |
| Content-driven | Platform ZR | Either | Platform-driven walled garden | Free Basics/Internet.org |
| Content-driven | Earned data | Either | Data earned in exchange for content consumption | Vivo Ads (Brazil) |
| Content-driven | Non-commercial | Either | Users provided free access to non-commercial content; not exclusive to carrier | Wikizero 4 |

Table 1 shows six forms of zero rated mobile internet services. They are grouped into two broad brackets: MISP-driven and content-driven. As mentioned above, I propose a broader definition of ZR that includes a bundled approach to selling apps and web services that I call Near-0 Rating. Although the service is not free, it corresponds to a form of positive discrimination premised on pricing. It also favours access to a select few globally dominant content and messaging platforms.

This practice is exemplified by the widely offered pre-pay Triple-lock bundles in which the limitations are threefold: temporal, volume-based and content-specific. An archetype is Movistar’s Recarga package in Mexico, in which a data-capped bundle of access to WhatsApp, Facebook and Twitter is offered on a sliding scale from 24 hours to one month.

The most common form of zero rating in post-pay consists of unlimited access to a suite of web applications – typically Facebook, WhatsApp and Twitter – as part of a data contract that includes capped access to the wider internet. The Colombian carrier Tigo offers an archetype with their Combo plan that includes a sliding scale of monthly data allowances, from 800MB to 6GB, alongside unlimited access to six apps.

While both of these models represent clear forms of discrimination, it becomes more explicit when a) there is no additional data cap for the open internet, or b) when the zero rated content continues to be available after any accompanying data cap is reached. Both of these variants possess the potential to lock users into a ‘walled garden’ of content.

‘Earned data’ meanwhile refers to promotions in which users are rewarded with a data allowance in exchange for consumption of a certain kind of web content, likely an advertisement. An example of this type of zero rating exists in Brazil in the form of a partnership between the carrier Vivo and Procter & Gamble (Telecom Paper, 2016).

There are two other principal forms of content-driven ZR. Facebook’s ‘Internet.org’ project (re-branded ‘Free Basics’ in 2015), launched in 2013 (Internet.org, 2017), is the most conspicuous example of ‘platform ZR’ (Belli, 2016). It partners with a mobile carrier to offer voice-only subscribers access to a suite of pared down web applications and services – including Facebook itself – at no cost, but with no access to the wider internet. According to Facebook’s CEO, Mark Zuckerberg, it is an altruistically-driven plan to “connect every village…and improve the world for all of us” (Bhatia, 2016). Its critics, meanwhile, interpret it as a ploy to lock the four billion unconnected people in the global South into a corporate faux-internet (Levy, 2015).

Finally, there is also a non-commercial model of ZR. For example, the Wikimedia Foundation operated Wikizero from 2011 to 2018, establishing non-exclusive partnerships with mobile carriers in countries where cost constituted an acute obstacle to access in order to provide free access to Wikipedia content (Wikimedia Foundation, 2017). A state-led example is the Brazilian 800 Saude app that provided healthcare information (Governo do Brasil, 2017). When non-commercial models are offered non-exclusively, the benefits for access to knowledge are evident, while the infringement on net neutrality in terms of competition, innovation and expression should only concern absolutist defenders of the principle (Malcolm, 2014).
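For readers who prefer a formal rendering, the following sketch encodes the Table 1 typology as a small data structure of the kind that could be used to code carrier offers consistently when tallying prevalence (as in Table 2 below). The class names and the sample offer are hypothetical illustrations, not an actual coding instrument used in this study.

```python
# Hypothetical encoding of the Table 1 typology for coding market offers.

from dataclasses import dataclass
from enum import Enum

class Driver(Enum):
    MISP = "MISP-driven"
    CONTENT = "content-driven"

class Model(Enum):
    APPS_PLUS_CAP = "apps plus cap"
    ADD_ON = "add-on"
    TRIPLE_LOCK = "triple-lock bundle"
    PLATFORM_ZR = "platform ZR"
    EARNED_DATA = "earned data"
    NON_COMMERCIAL = "non-commercial"

@dataclass
class Offer:
    carrier: str
    model: Model
    driver: Driver
    prepay: bool                  # True = pre-pay, False = post-pay
    zero_rated_apps: list[str]

# Coding Tigo's 'Combo' plan (Colombia) from Table 1:
combo = Offer(carrier="Tigo", model=Model.APPS_PLUS_CAP, driver=Driver.MISP,
              prepay=False, zero_rated_apps=["Facebook", "WhatsApp", "Twitter"])
print(combo.model.value, "-", combo.driver.value)
```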

Prevalence of ZR in the countries under analysis

Table 2: Extent and form of zero rated mobile internet services

| Country | % of post-pay services incl. ZR (% ‘apps + cap’) 5 | % of pre-pay services incl. ZR (% ‘triple lock’) | Availability of Free Basics |
| --- | --- | --- | --- |
| Brazil | 33% (100%) | 76% (100%) | N |
| Colombia | 100% (100%) | 100% (100%) | Y |
| Mexico | 100% (100%) | 72% (100%) | Y |
| South Africa | 10% (0%) | 33% (0%) | Y |

Sources: Websites of all MISPs with more than 10% wireless market share. Data collected July 2017. See annex for full details.

In examining the data in Table 2, we see that the ‘apps plus cap’ model in post-pay, and the ‘triple lock’ model in the pre-pay segment, represent the dominant forms of zero rated internet services. In terms of the markets as a whole, in Mexico and Colombia ZR has become integral to the preferred business models of the major carriers. In Brazil there is a significant difference in the extent of zero rated services between the pre- and post-pay segments, while in South Africa zero rating constitutes a minimal share of the market mix.

In order to gain further insight from these data, they must be properly contextualised. Accordingly, I will now examine the data through two frames: political-economic and developmental.

Two contextual frames for understanding the relationship between ZR, network neutrality and mobile internet access

The political-economic frame

The level of wireless concentration, the market positions of the carriers offering ZR, ownership of zero rated content services as well as the market strength of the zero rated service all have a significant bearing on the degree to which network neutrality is compromised.

Table 3: Wireless market characteristics

| Country | Number of MISPs with >10% market share | Market concentration: HHI 6 score¹ | Wireless market share (mobile internet subs) |
| --- | --- | --- | --- |
| Brazil | 4 | 2,457 (Unconcentrated) | Vivo 31%; TIM 25%; Claro 25%; Oi 17%² |
| Colombia | 3 | 3,737 (Moderately concentrated) | Claro 49% (53%); Movistar 23% (30%); Tigo 18% (12%)³ |
| Mexico | 3 | 5,152 (Highly concentrated) | Telcel 65% (70%); Movistar 23% (15%); AT&T 11% (14%)⁴ |
| South Africa | 3 | 3,205 (Moderately concentrated) | Vodacom 35%; MTN 35%; Cell C 17%⁵ |

Sources: ¹ Economist Intelligence Unit, 2017; ² Anatel, 2017; ³ MinTIC, 2017; ⁴ IFT, 2017; ⁵ Business Tech, 2017.
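The HHI is calculated by summing the squared percentage market shares of every firm in a market. As a minimal sketch (using only the shares listed in Table 3, so the results understate the published EIU scores, which also count carriers below 10% share):

```python
# Herfindahl-Hirschman Index: sum of squared market shares in percentage points.
# Shares are those listed in Table 3; carriers under 10% share are omitted,
# so these approximations run below the EIU scores reported in the table.

markets = {
    "Brazil":       [31, 25, 25, 17],  # Vivo, TIM, Claro, Oi
    "Colombia":     [49, 23, 18],      # Claro, Movistar, Tigo
    "Mexico":       [65, 23, 11],      # Telcel, Movistar, AT&T
    "South Africa": [35, 35, 17],      # Vodacom, MTN, Cell C
}

def hhi(shares):
    return sum(s ** 2 for s in shares)

for country, shares in markets.items():
    print(f"{country}: HHI ~ {hhi(shares)}")
# Brazil ~ 2500, Colombia ~ 3254, Mexico ~ 4875, South Africa ~ 2739
```

Even these rough figures reproduce the ordering in Table 3, with Mexico’s wireless market by far the most concentrated of the four.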

When we think about zero rating and its impact on network neutrality, the market strength of the participating MISP is a key criterion. The case of Mexico is emblematic in this respect. Its wireless market is highly concentrated, with one player – América Móvil’s subsidiary, Telcel – accounting for 70% of all mobile internet subscriptions (IFT, 2017). The fact that all of Telcel’s post-pay, and one third of its pre-pay, data plans feature zero rated content means that the impact on competition is more acute. This is also true for the moderately concentrated market of Colombia, where the market leader, Claro (also owned by América Móvil), offers zero rated services. It is probable that the offer of ZR will further exacerbate concentration in these wireless markets as the zero rated offers attract even more subscribers to the dominant MISPs.

These effects can also be registered in the content market. Research by Van Schewick (2016) in the United States shows that users tend to favour zero rated content over content that counts towards their data caps. This distortion in the online environment is exacerbated when we consider that – in common with all of the zero rated content presented in Table 4 – all of Telcel’s zero rated content features the globally dominant platforms in terms of active users 7: the social network Facebook; the micro-blogging service Twitter; and the messaging app WhatsApp (Statista, 2017). Network effects are accelerated when social networking services (SNS) and messaging apps are zero rated, which may hasten the onset of user ‘lock-in’ (Palfrey & Gasser, 2012) and in turn further distort market competition.

Table 4: Zero rated content characteristics

| Country | ZR that includes MISP-owned content (%) | ZR that includes global content platform (%) | Exclusivity between global content and MISP | Local content incl. in ZR offers |
| --- | --- | --- | --- | --- |
| Brazil | 31% | 84% | N | N |
| Colombia | 20% | 100% | N | N |
| Mexico | 33% | 100% | N | N |
| South Africa | 0% | 100% | Y | Y |

Moreover, if a carrier zero rates its own service, then we see a pernicious form of vertical integration in which one entity not only owns the pipes, platform and content, but can effectively lock users into this proprietary funnel through price discrimination. It should be noted that the phenomenon of ‘lock-in’ (Palfrey & Gasser, 2012) can occur irrespective of the use of ZR, and is widely considered to have a negative impact on innovation and competition within the market in question. We see this in the case of Telcel as it exacerbates its market power by zero rating its Claro Video service on two thirds of its post-pay plans. Indeed, in Mexico, Colombia and Brazil 8, 20–33% of all zero rated services featured carrier-owned content and services. In all cases bar one (Tigo in Colombia), these were offered by the national subsidiary of one of four global operator groups: Telefónica, América Móvil, Telecom Italia and AT&T. This is significant because these are multinational corporations – with the former two in a dominant position in Latin America – meaning that when they zero rate their own content platforms in one market, it may serve to consolidate their power regionally.

The infringement of network neutrality by ZR could be justified as pro-competitive if it were offered by an MISP with the smallest market share; it might serve to attract more users, increase its share and thus make the market more competitive (Goodman, 2016). This would be especially true of markets that are defined as moderately or highly concentrated, such as Colombia and Mexico. While AT&T in Mexico (9%) and Tigo in Colombia (17%) are the market laggards and offer ZR in all of their plans, they do so in the context of ubiquitous ZR. As such, the pro-competitive impact is muted.

South Africa offers the only case where the smallest player – Cell C, with 14% market share – offers ZR (Free Basics) in a moderately concentrated market where the dominant incumbents do not. This example also highlights the only instance of exclusivity between a zero rated global content platform and an MISP in this study. According to Marsden’s (2016) analysis, exclusivity in ZR arrangements should be prohibited ex ante. This arrangement can in theory create a more concentrated market than the non-exclusive alternative because it would draw even more users onto the favoured network in order to benefit from the zero rated services. The market position of Cell C is such that, in this case, that is only a minor concern.

Brazil, unique amongst these four cases, can boast a wireless market comprising four large MISPs closely matched in market share. The provision of ZR by these carriers also seems to follow a pro-competitive model in that the two players grappling for second place, TIM and Claro, are more aggressive in their use of zero rated inducements than the market leader, Vivo 9. The outlier, however, is the fourth-placed Oi SA, which comprises 17% of the market and offers no ZR.

Although the infringement of network neutrality through the zero rating of locally developed apps and content could encourage local technological development, the data collected for this study suggest that this is a distant prospect. The only examples are the apps included in the Free Basics suite offered by Cell C in South Africa. These include the youth employment accelerator Harambee and the literacy app Fundza (Cell C, 2017). In this case, it should be noted that Facebook serves as the arbiter of which apps will be granted the privilege of admission, appointing itself de facto gatekeeper of South Africa’s app ecosystem and discriminating against those applications that are not included in the Free Basics suite.

In sum, by applying this political economy lens to ZR and the markets in which it is offered, we can identify various instances of red lines, where ZR not only infringes network neutrality, but does so in a way that has a significantly detrimental impact on competition and innovation in the wireless and/or content market:

  • Any offer of ZR in a highly concentrated market (except by the carrier with lowest market share) 10
  • Any exclusive offer of ZR (except by the carrier with lowest market share)
  • Any offer of ZR by a carrier majority-owned by a global operator group (unless lowest market share)
  • Any carrier zero rating their own content/platform

We can also identify amber zones in which ZR’s benefits to innovation and competition could outweigh the negative impact of its infringement of net neutrality:

  • The ZR of locally developed/public interest apps and services in a non-exclusive form
  • The offer of ZR by a market laggard/newcomer/struggling incumbent

The developmental frame

The best way to understand the impact of zero rating on rates of mobile internet access is by using a developmental frame. This is because low levels of economic development, limited telecoms infrastructure and high access costs collectively create conditions whereby zero rated access to specific applications could be justified as a stopgap measure in the absence of widely available and affordable mobile internet access.

The offer of zero rated services is sometimes criticised on the basis that it allows governments to evade responsibility for improving mobile internet access for their citizens (Rossini & Moore 2015, p. 12). The enthusiasm with which many governments have welcomed the arrival of Facebook’s Free Basics perhaps validates this perspective. A market solution of zero rated internet is ultimately a profit-oriented scheme subject to corporate exigencies, a fact that explains the disquiet of many observers. Although community networks offer great promise to address deficiencies in both private and public provision of access (Baca, Belli, Huerta, & Velasco, 2018), their relatively limited scale means it is important to identify the extent of government programmes to reduce access costs and increase national penetration of mobile broadband. This also needs to be understood in the context of the level of national ICT development and the extent to which a significant deficit needs to be bridged. The national capacity for innovation, meanwhile, is a relevant metric to assess how ZR might stymie the local development of web apps and services 11. All this data can be reviewed in Table 5.

Table 5: Infrastructure and innovation

| Country | ICT development index (/175 country ranking)¹ | State policy to promote free/low cost internet access (/10 score)² | Capacity for innovation (/139 country ranking)³ |
| --- | --- | --- | --- |
| Brazil | 63 | 8 | 80 |
| Colombia | 83 | 9 | 93 |
| Mexico | 92 | 7 | 66 |
| South Africa | 88 | 6 | 32 |

Sources: ¹ ITU, 2016; ² A4AI, 2017; ³ WEF, 2016

The other major sub-index to consider is affordability and access. Although the provision of ZR is always offered by content providers with the goal of boosting market share and access to valuable user data, it is often presented by its boosters as a means to overcome socio-economic obstacles to mobile internet access, either by maximising the utility of data-capped open internet access, or providing some app-specific connectivity to those who otherwise have none (Layton & Calderwood, 2015; West, 2015). Intuitively, the provision of a free service should represent a boon to the poorest segments of society. The conundrum to consider is the extent to which the benefits of ZR to the poorest outweigh the potentially negative impact on network neutrality and its associated social goods. As such, Table 6 presents several key indicators that help to gauge the affordability of mobile internet access, as well as the take-up of those services.

In terms of measuring affordability, A4AI offers a new benchmark: that 1GB of data should not exceed 2% of a user’s monthly income (“1 for 2”). A4AI argues that this is a more substantive measure than the 500MB for 5% threshold defined by the UN Broadband Commission (2017). Another useful metric for gauging cost as an impediment to access is the proportion of mobile subscribers that use a prepaid plan. This form of mobile access offers users the highest level of cost-control and is therefore often adopted by those with the lowest economic means. Looking at the respective indices of mobile subscriptions and mobile internet subscriptions, meanwhile, serves double duty as a measure both of cost and infrastructure; a significant disparity between the two forms of penetration suggests a barrier of cost and/or a lack of broadband availability. The final metric listed in Table 6 is useful to understand the intensity of the negative impact of ZR on network neutrality: a high proportion of internet use over WiFi means that users are accessing the open, full internet, and are not limited to the walled-garden provisions of application-specific ZR 12.

Table 6: Affordability and access

| Country | Price of 1GB mobile prepaid plan as % of monthly income¹ | Mobile subs/mobile broadband subs (% penetration)² | Mobile subs prepaid (%) | % time mobile internet users connected to WiFi (/95 ranking)⁷ |
| --- | --- | --- | --- | --- |
| Brazil | 1.97 | 119/73 | 66³ | 12th |
| Colombia | 1.45 | 105/47 | 79⁴ | 36th |
| Mexico | 2.03 | 81/59 | 84⁵ | 28th |
| South Africa | 2.48 | 160/40 | 84⁶ | 71st |

Sources: ¹ A4AI, 2017; ² GSMA Intelligence, 2015; ³ Teleco, 2017; ⁴ MinTIC, 2017; ⁵ IFT, 2017; ⁶ ICASA, 2016; ⁷ Open Signal, 2016
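As a minimal sketch of how these indicators can be combined into a first-pass screen (figures are those reported in Table 6; only the ‘1 for 2’ line and the penetration gap are encoded here):

```python
# Screening the Table 6 indicators: price of 1GB against the A4AI '1 for 2'
# affordability line, and the gap between mobile and mobile broadband
# penetration as a rough proxy for cost and/or coverage barriers.

table6 = {
    # country: (1GB price as % of monthly income, mobile subs %, broadband subs %)
    "Brazil":       (1.97, 119, 73),
    "Colombia":     (1.45, 105, 47),
    "Mexico":       (2.03,  81, 59),
    "South Africa": (2.48, 160, 40),
}

for country, (price_pct, mobile, broadband) in table6.items():
    gap = mobile - broadband
    print(f"{country}: 1GB = {price_pct}% of monthly income "
          f"('1 for 2' line: 2%); penetration gap: {gap} points")
```

Note that Brazil (1.97) and Mexico (2.03) sit essentially on the 2% line, which is why the discussion below treats Mexico as only ‘technically’ meeting the benchmark.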

Through combining these measures we might illuminate the extent to which ZR could be justified as an expedient for facilitating higher levels of mobile internet access 13. Brazil, for instance, boasts the highest level of ICT development of the four countries according to a cluster of indicators compiled by the ITU. It also scores highly in the A4AI’s aggregated metric for measuring the quality of the state’s efforts to increase mobile internet access. In terms of affordability, the data in Table 6 show that Brazil almost exactly meets the ‘1 for 2’ threshold and demonstrates robust levels of penetration at 119% for mobile and 73% for mobile broadband subscriptions. With regard to the forms of access, the level of prepaid subscriptions, at 66%, may be high compared to wireless markets in the global North, but it is the lowest of the four countries examined here. Relatively speaking, the level of WiFi use is very high.

Taken together, these indicators suggest that there is no compelling justification for ZR as a means to boost access in the case of Brazil: at least in urban areas where 86% of Brazil’s population resides (World Bank, 2017), mobile internet is relatively widely diffused and affordable, ICT infrastructure is robust, and the state is a willing partner in boosting levels of access. Moreover, the high levels of WiFi connection imply that many users are able to access the open internet, even if they also contract ZR services.

There is of course an alternate interpretation that focuses on the challenge of connecting the 27%, or 56 million Brazilians, who do not access mobile internet. Data compiled in 2015 by the Brazilian Internet Steering Committee showed that 90% of those Brazilians who had never used the internet were in the lowest social classes (Derechos Digitales, 2017, p. 56). We can infer that cost is likely a significant impediment to access for these citizens (exacerbating other systemic obstacles such as (digital) illiteracy and a lack of locally relevant content and services), one that ZR could help to overcome. The fact that the Brazilian state is judged to be pro-active in addressing access issues, however, could alleviate concerns that ZR would permit it to abdicate its responsibilities.

South Africa demonstrates a more straightforward case where ZR could be justified as a means to generate access. It does not meet the ‘1 for 2’ threshold, and a high penetration of mobile subscriptions – many of which are prepaid – is accompanied by low levels of mobile internet subscriptions and WiFi access. The country also features in the bottom half of the ICT development index and receives a middling grade for state efforts to boost take-up of mobile internet. The South African government did launch a digital inclusion programme, South Africa Connect, in 2013 with the goal of connecting 90% of the population to the internet by 2020 (South African Government, 2013). A review of this plan reveals it is based on market-led initiatives rather than a state-led infrastructure programme. Such an approach may explain why the South African government was receptive to the arrival of Facebook’s Free Basics in 2014.

The only factor in this analysis that might undermine the case in favour of ZR is that South Africa ranks highly for innovation capacity (WEF, 2016). Heavy take-up of ZR might therefore damage this positive aspect of the South African economy. There is indeed evidence of this dynamic in practice: a local messaging service was forced to shut down in 2015, citing competition from WhatsApp as the cause (Steyn, 2016).

Colombia and Mexico present similar scenarios in terms of these development indicators and their relationship with ZR. Colombia receives the highest score for its government’s efforts to boost access. This is in recognition of the achievements wrought by Colombia’s Vive Digital programme that aimed to increase its internet connected population to 27 million in 2018 from 8 million in 2014 (Rossini & Moore, 2015, p. 49). Indeed, in 2016 the Colombian government announced an innovative programme dubbed Internet Móvil Social para la Gente which provides subsidised data connections and 4G handsets to citizens registered for government welfare programmes (MinTIC, 2016).

In the context of targeted and adequately funded state efforts to increase mobile internet access, the presence of commercial ZR could complement rather than undermine these programmes; a stopgap that addresses economic and infrastructural barriers while more substantive public policy is implemented. This becomes a more compelling argument given that although Colombia comfortably meets the affordability threshold of 1 for 2, it only figures at the halfway mark of the global ICT development ranking, demonstrates a significant disparity between rates of mobile and mobile broadband connections, as well as a high proportion of prepaid subscriptions.

Finally, Mexico ranks lowest for ICT development of the four countries here, its penetration rates are the lowest, and its level of prepaid subscriptions the highest alongside South Africa. And although Mexico technically meets the 1 for 2 benchmark, OECD data reveal that for the poorest tier of households, the cost of a mobile subscription represents 6.2% of monthly income (OECD, 2017). This suggests that Mexico faces a significant challenge in facilitating adequate levels of mobile internet access, one which ZR might partially address. Although the Mexican state received a lower score than Colombia or Brazil for fomenting access – though still above the emerging country average of ‘6’ (A4AI, 2017) – it has embarked on major ICT infrastructure projects such as Mexico Conectado and Red Compartida (IFT, 2017). These schemes are sufficiently well developed in terms of existing investments, and ambitious enough in terms of future goals (OECD, 2017), to suggest that, in common with Colombia, ZR might serve to complement rather than derail state connectivity programmes.

Overall, it is difficult to define hard ‘red lines’ for ZR by examining access through development indicators. This is because the confluence of factors is more dynamic and complex, especially within the infrastructure and innovation sub-index. In the first instance, the diverse states of telecoms infrastructure in the four countries under examination here further complicate the equation. Moreover, it is difficult to interpret whether a country’s low score for state connectivity programmes means that ZR should be considered a threat to those nascent efforts, or an essential stopgap to realise the same objectives. Similarly, does a high level of capacity in national technological innovation mean that ZR constitutes a grave threat to the growth potential of the mobile software sector, or does it suggest that this ecosystem is robust enough to withstand the pressure? These are fundamental questions to consider when we wish to evaluate ZR, and they can only be substantively addressed through greater contextual analysis than the parameters of this study permit.

A more straightforward case to be made, one grounded in affordability and access, is that a combination of low penetration and high cost means that there is a compelling argument for ZR addressing an economic barrier for many users. Even on this point, however, we must be aware that the 1GB for 2% of monthly income measure can prove a blunt tool because it is based on average income (A4AI, 2017, p. 47). In societies like Brazil and Mexico that meet this affordability threshold, economic inequality is such that the wealthy few skew the average. Thus for many, the cost of mobile internet will be more onerous, and the economic benefits of ZR potentially more significant.
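A small worked illustration of that bluntness (the income sample below is invented purely to show the mechanism): in a right-skewed income distribution, the mean sits well above the median, so a price pegged at 2% of the mean income claims a much larger share of a typical user’s income.

```python
# Hypothetical illustration: why a mean-based affordability threshold misleads
# under inequality. The monthly income sample is invented; a few high earners
# pull the mean far above the median.

from statistics import mean, median

incomes = [120, 150, 180, 200, 250, 300, 400, 600, 1500, 6000]  # arbitrary units
price_1gb = 0.02 * mean(incomes)   # 1GB priced exactly at 2% of the *mean*

print(f"mean income:   {mean(incomes):.0f}")     # 970
print(f"median income: {median(incomes):.0f}")   # 275
print(f"1GB at 2% of the mean = {price_1gb:.1f}, "
      f"or {100 * price_1gb / median(incomes):.1f}% of the median income")  # ~7.1%
```

On these invented figures, a plan that passes the ‘1 for 2’ test still costs a median earner roughly 7% of monthly income, an outcome in the same register as the OECD’s finding of 6.2% for Mexico’s poorest households.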

Conclusion

Zero rated mobile internet services represent a thorny public policy challenge in the global South. On one hand they can overcome cost barriers to realise the valued goal of increasing mobile internet penetration. On the other, the dangers ZR poses to competition and innovation in the wireless and online services markets, as well as the implications of locking users into ‘walled gardens’ of content, are apparent. The premise of this research is that the challenge of ZR can be better addressed when it is rigorously contextualised; when we weigh the values of both neutrality and access on the scale. To that end, I created a typology of models of ZR. This classified the forms in which ZR is sold, and moved beyond a strict focus on ‘free data’ to demonstrate that ‘Near-0 Rating’ offers should also be considered.

I also identified two contextual frames through which ZR should be examined in order to evaluate the factors that accentuate or diminish its impact on neutrality and access. A political-economic lens guides our focus towards the market power of participating actors, as well as the circumstances in which the infringement of network neutrality can become pro or anti-competitive. Examining indices of technology diffusion, meanwhile, helps to assess whether ZR can address affordability and infrastructural deficits, as well as whether local innovation might be impeded.

Through charting the uneven conceptual terrain on which ZR appears, we can discount the notion that addressing ZR is a zero sum game composed of an ‘access or neutrality’ calculation. Instead, we need to be much more attentive to the multiple interlocking factors that influence how ZR impacts upon both the social goods sought by defenders of network neutrality and the goals of digital inclusion advocates. The precise composition of these factors will vary in every society and wireless market, so the manner in which they are reconciled will depend on national policy priorities. Whether ZR is interpreted as a curse or a boon for local app development, for instance, is a matter for the relevant regulators, advocacy groups and industry associations to decide. Moreover, as previously stated, ZR is a moving target; although the dominant tendency captured in this research is to zero rate the market leaders in each application category, an alternative approach based on zero rating entire classes of applications would require the negative implications of ZR for innovation and competition to be reassessed.

A policy of subsidised data and handsets, as introduced in Colombia, is arguably the ideal way to address limited mobile internet penetration for the most economically disadvantaged. However, in the absence of such progressive public policy, an absolute veto on ZR threatens to make the perfection implied by full internet access for all an enemy of the good. Any proponent of an absolute ban on ZR should rehearse a speech to an impoverished user in the global South to explain why access to socially essential communication services should remain beyond their means. Ultimately, rather than an on-ramp, we might better conceptualise ZR as a temporary relief road: a makeshift piece of the network that can accommodate mass demand while the proper permanent infrastructure (through both public policy and market provision) is established.

Regarding the limitations of this research, the data on the prevalence of ZR in the four markets examined here represents a snapshot in time, and the available insights are accordingly restricted. Longitudinal studies are needed to assess the impacts of ZR on innovation and competition over time, as well as to understand whether they represent a short-term marketing ploy, or a permanent fixture of these markets. What are also needed are large-scale studies that probe the practices of mobile internet users in the global South. These would help us better understand whether ZR entices non-users online, and the extent to which that introduction shapes later patterns of use; especially whether users migrate beyond zero rated silos.

References

A4AI (Alliance for Affordable Internet). (2016). Impacts of Emerging Mobile Data Services in Developing Countries. Research Brief No. 2. Retrieved from http://a4ai.org/the-impacts-of-emerging-mobile-data-services-in-developing-countries/

A4AI (Alliance for Affordable Internet). (2017). Affordability Report 2017. Retrieved from https://a4ai.org/wpcontent/uploads/2017/02/A4AI-2017-Affordability-Report.pdf

Anatel (2017, December 8). Brasil registra 240,9 milhões de linhas móveis em operação em outubro de 2017 [240.9 million wireless subscriptions registered in Brazil in October 2017]. Retrieved from http://www.anatel.gov.br/dados/component/content/article?id=283

Baca, C., Belli, L., Huerta, E., & Velasco, K. (2018). Community Networks in Latin America: Challenges, Regulations and Solutions. Reston, VA; Geneva: Internet Society. Retrieved from https://www.internetsociety.org/wp-content/uploads/2018/12/2018-Community-Networks-in-LAC-EN.pdf

Bauer, J., & Obar, J. (2014). Reconciling political and economic goals in the net neutrality debate. The Information Society, 30(1), 1-19. doi:10.1080/01972243.2013.856362

Bhatia, R. (2016, May 12). The inside story of Facebook’s biggest setback. The Guardian. Retrieved from https://www.theguardian.com/technology/2016/may/12/facebook-free-basics-india-zuckerberg

Belli, L. (2017). Net neutrality, Zero-rating and the Minitelisation of the Internet. Journal of Cyber Policy, 2(1). doi:10.1080/23738871.2016.1238954

BEREC (Body of European Regulators for Electronic Communications). (2016). BEREC Guidelines on the Implementation by National Regulators of European Net Neutrality Rules. Retrieved from https://berec.europa.eu/eng/document_register/subject_matter/berec/regulatory_best_practices/guidelines/6160-berec-guidelines-on-the-implementation-by-national-regulators-of-european-net-neutrality-rules

Brito, C. (2015). The Internet in Mexico, two years after #ReformaTelecom. Digital Rights Latin America and the Caribbean. Retrieved from https://www.digitalrightslac.net/en/el-internet-en-mexico-a-dos-anos-de-la-reformatelecom/

Brodkin, J. (2017, February 28). FCC Head Ajit Pai: You can thank me for carriers’ new unlimited plans. Ars Technica. Retrieved from https://arstechnica.com/tech-policy/2017/02/fcc-head-ajit-pai-you-can-thank-me-for-carriers-new-unlimited-data-plans/

Business Tech (2017, June 28). SA mobile subscribers in 2017: Vodacom vs MTN vs Cell C vs Telkom. Retrieved from https://businesstech.co.za/news/mobile/182301/sa-mobile-market-share-in-2017-vodacom-vs-mtn-vs-cell-c-vs-telkom/

Carrillo, A. (2016). Having Your Cake and Eating It Too? Zero Rating, Net Neutrality and International Law. Stanford Technology Law Review, 19. Retrieved from https://law.stanford.edu/wp-content/uploads/2017/11/19-3-1-carrillo-final_0.pdf

Cell C (2017). Free Basics. Retrieved from https://www.cellc.co.za/cellc/free-basics-by-facebook

Crawford, S. (2015, January 7). Less than Zero. Wired. Retrieved from https://www.wired.com/2015/01/less-than-zero/

de Miera Berglind, O. (2016). The Effect of Zero-Rating on Mobile Broadband Demand: An Empirical Approach and Potential Implications. International Journal of Communication, 10. Retrieved from https://ijoc.org/index.php/ijoc/article/view/4651

Derechos Digitales (2017). Neutralidad de red en América Latina: Reglamentación, aplicación de la ley y perspectivas. Los casos de Chile, Colombia, Brazil y México [Network Neutrality in Latin America: Regulation, Law Enforcement and Perspectives. The cases of Chile, Colombia, Brazil and Mexico]. Retrieved from https://www.derechosdigitales.org/wp-content/uploads/NeutralidadeRedeAL_SET17.pdf

DTPS (Dept. of Telecommunications and Postal Services). (2016). National Integrated ICT Policy (White Paper). Retrieved from https://www.dtps.gov.za/images/phocagallery/Popular_Topic_Pictures/National_Integrated_ICT_Policy_White.pdf

Economist Intelligence Unit (2017). The Inclusive Internet: Mapping Progress 2017. Retrieved from https://theinclusiveinternet.eiu.com/explore/countries/performance/affordability/competitive-environment/wireless-operators-market-share?highlighted=BR

Fuchs, C. (2015). Reading Marx in the Information Age: A Media and Communication Studies Perspective on Capital, Volume 1. London: Routledge. doi:10.4324/9781315669564

Galpaya, H. (2017). Global Commission on Internet Governance – Zero Rating in Emerging Economies (Paper series: No. 47). Ontario; London: Centre for International Governance Innovation; Chatham House. Retrieved from https://www.cigionline.org/sites/default/files/documents/GCIG%20no.47_1.pdf

García, L. & Brito, C. (2014). Enrique Peña Nieto contra el Internet [Enrique Peña Nieto against the Internet] [Blog post]. Retrieved from Nexos website: https://www.redaccion.nexos.com.mx/?p=6176

Goodman, E. (2016). Zero rating broadband data: Equality and free speech at the network’s other edge. Colorado Technology Law Journal, 15(1), 63-92. Retrieved from http://ctlj.colorado.edu/wpcontent/uploads/2017/01/4-Goodman-12.29.16_FINAL_PDF-A.pdf

Governo do Brasil (2017). Aplicativo vai ampliar o acesso da população às informações de saúde [The application will increase the population's access to health information]. Retrieved from http://www.brasil.gov.br/saude/2017/06/aplicativo-vai-ampliar-o-acesso-da-populacao-as-informacoes-de-saude

Groene, F., Navelekar, A., & Coakley, M. (2017). An industry at risk: Commoditization in the wireless telecom industry. Strategy&. Retrieved from https://www.strategyand.pwc.com/reports/industry-at-risk

GSMA Intelligence (2015). Data. Retrieved from https://www.gsmaintelligence.com/markets/409/dashboard/

Hoskins, G. (2017). Draft once, deploy everywhere: Contextualizing digital law and Brazil’s Marco Civil da Internet. Television & New Media, 19(5), 431-447. doi:10.1177/1527476417738568

ICASA (Independent Communications Authority of South Africa). (2017). 2nd Report on the state of the ICT sector in South Africa. Retrieved from https://www.ellipsis.co.za/wpcontent/uploads/2017/05/ICASA-Report-on-State-of-SA-ICTSector-2017.pdf

IFT (Instituto Federal de Telecomunicaciónes). (2016). Comparador de planes de telefonía móvil [Mobile Phone Plan Comparator]. Retrieved from http://comparador.ift.org.mx/indexmovil.php

IFT (Instituto Federal de Telecomunicaciónes). (2017). Reportes de informes trimestrales [Quarterly reports]. Retrieved from https://bit.ift.org.mx/SASVisualAnalyticsViewer/VisualAnalyticsViewer_guest.jsp?appSwitchDisabled=false&reportName=%C3%8Dndice+Informes+Trimestrales&reportPath=/Shared+Daa/SAS+Visual+Analytics/Reportes/&appSwitcherDisabled=true

Internet.org. (2017). Where we’ve launched. Retrieved from https://info.internet.org/en/story/where-weve-launched/

ITU (International Telecommunications Union). (2015). ICT Facts and Figures. Retrieved from http://www.itu.int/en/ITU-D/Statistics/Documents/facts/ICTFactsFigures2015.pdf

ITU (International Telecommunications Union). (2017). ICT Development Index 2017. Retrieved from http://www.itu.int/net4/ITU-D/idi/2017/index.html

Katz, R. & Callorda, F. (2015). Iniciativas para el Cierre de la Brecha Digital en America Latina [Initiatives to Close the Digital Divide in Latin America]. New York, NY: Telecom Advisory Services, LLC. Retrieved from http://www.mintic.gov.co/portal/604/articles14374_pdf.pdf

LaFrance, A. (2016). Facebook and the New Colonialism. The Atlantic. Retrieved from https://www.theatlantic.com/technology/archive/2016/02/facebook-and-the-new-colonialism/462393/

Layton, R. & Calderwood, S. (2015). Zero Rating: Do hard rules protect or harm consumers and competition? Evidence from Chile, Netherlands, Slovenia. Social Science Research Network. Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2587542

Levendis, J., & Lee, S. H. (2013). On the endogeneity of telecommunications and economic growth: Evidence from Asia. Information Technology for Development, 19(1), 62–85. doi:10.1080/02681102.2012.694793

Levy, J. (2015, May 5). Opinion: Facebook’s Internet.org isn’t the Internet, it’s Facebooknet. Wired. Retrieved from https://www.wired.com/2015/05/opinion-internet-org-facebooknet/

Lobo, A., & Grossman, L. (2016). Zero rating: Marco Civil proíbe ou não acordos comerciais com as OTTs? [Does Marco Civil prohibit commercial agreements with OTTs or not?]. Convergência Digital [Digital Convergence]. Retrieved from http://convergenciadigital.uol.com.br/cgi/cgilua.exe/sys/start.htm?UserActiveTemplate=site&inoid=42398

Malcolm, J. (2014). Net Neutrality and the Global Digital Divide. Electronic Frontier Foundation. Retrieved from https://www.eff.org/deeplinks/2014/07/net-neutrality-and-global-digital-divide

Marsden, C. (2016). Comparative Case Studies in Implementing Net Neutrality: A Critical Analysis of Zero Rating. SCRIPTed, 13(1), 2-38. doi:10.2966/scrip.130116.1

McChesney, R. (2000). Rich Media, Poor Democracy: Communication Policy in Dubious Times. New York, NY: The New Press.

MinTIC (Ministerio de Tecnologías de la Información y las Comunicaciones). (2011). Ley No. 1450. Retrieved from http://www.mintic.gov.co/portal/604/articles-3821_documento.pdf

MinTIC (Ministerio de Tecnologías de la Información y las Comunicaciones). (2016). Internet móvil para los colombianos más necesitados [Mobile Internet for Colombians most in need]. Retrieved from http://www.mintic.gov.co/portal/604/w3-article-16860.html

MinTIC (Ministerio de Tecnologías de la Información y las Comunicaciones). (2017). Boletín trimestral de las TIC [Quarterly ICT Newsletter]. Retrieved from http://colombiatic.mintic.gov.co/602/articles-55212_archivo_pdf.pdf

Mosco, V. (2008). Political Economy of the Media. In W. Donsbach (Ed.), The International Encyclopedia of Communication. Blackwell Publishing. doi:10.1002/9781405186407.wbiecp057.pub3

Mozilla Foundation (2017). Equal Rating. Retrieved from https://equalrating.com/research/

Nunziato, D. (2009). Virtual freedom: net neutrality and free speech in the Internet age. Stanford, CA: Stanford University Press.

OECD (Organisation for Economic Co-operation and Development). (2017). Telecommunication and Broadcasting Review of Mexico 2017. Retrieved from http://www.oecd.org/mexico/oecdtelecommunication-and-broadcasting-review-of-mexico-2017-9789264278011-en.htm

Open Signal (2016). Global state of mobile networks. Retrieved from https://www.opensignal.com/reports/2016/08/global-state-of-the-mobile-network

Palfrey, J. & Gasser, U. (2012). Interop: The promise and perils of highly interconnected systems. New York, NY: Basic Books.

Presidencia da Republica (2016). Decreto No. 8.771. Retrieved from http://www.planalto.gov.br/ccivil_03/_Ato2015-2018/2016/Decreto/D8771.htm

Rossini, C., & Moore, T. (2015). Exploring Zero Rating Challenges: Views from Five Countries (Working Paper). Washington, DC: Public Knowledge. Retrieved from https://www.publicknowledge.org/documents/exploring-zero-rating-challenges-views-from-five-countries

South African Government (2013). South Africa connect: Creating opportunities, ensuring inclusion, South Africa’s broadband policy. Retrieved from https://www.gov.za/sites/default/files/37119_gon953.pdf

Statista (2017). Most famous social networking sites 2017, by active users. Retrieved from https://www.statista.com/statistics/272014/global-social-networks-ranked-by-number-of-users/

Steyn, L. (2016, February 05). SA wades into global app regulation battle. Mail & Guardian. Retrieved from https://mg.co.za/article/2016-02-04-sa-wades-into-global-app-regulation-battle

Sylvain, O. (2016). Network equality. Hastings Law Journal, 67(2), 443-498. Retrieved from http://www.hastingslawjournal.org/network-equality/

Teleco (2017). Estatísticas de celulares no Brasil [Cellular statistics in Brazil]. Retrieved from http://www.teleco.com.br/ncel.asp

Telecom Paper (2016, May 16). Telefonica Vivo launches sponsored data service. Retrieved from https://www.telecompaper.com/news/telefonica-vivo-launches-sponsored-data-service--1143609

TIM Brasil. (2015, February 29). Capítulo 2: Da Neutralidade da Rede [Chapter 2: Network Neutrality]. Comment posted to http://pensando.mj.gov.br/marcocivil/texto-em-debate/minuta/

TRAI (Telecoms Regulatory Authority of India). (2016). Prohibition of Discriminatory Tariffs for Data Services Regulations. New Delhi, India.

van Schewick, B. (2012). Internet architecture and innovation. Cambridge, MA: MIT Press.

van Schewick, B. (2016). T-Mobile’s Binge On violates key network neutrality principles. Retrieved from https://services.crtc.gc.ca/pub/DocWebBroker/OpenDocument.aspx?DMID=2647608

Viecens, M. & Callorda, F. (2016). La brecha digital en America Latina: Precio, calidad y accesibilidad de la banda ancha en la región [The Digital Divide in Latin America: Price, Quality and Accessibility of Broadband in the Region] (Report). Ottawa: International Development Research Centre.

West, D. M. (2015). Digital divide: improving Internet access in the developing world through affordable services and diverse content (Report). Washington, DC: Brookings Institution. Retrieved from https://www.brookings.edu/wp-content/uploads/2016/06/West_Internet-Access.pdf

Wikimedia Foundation. (2017). Wikipedia Zero. Retrieved from https://wikimediafoundation.org/wiki/Wikipedia_Zero

WEF (World Economic Forum). (2016). The Global Information Technology Report 2016. Retrieved from http://www3.weforum.org/docs/GITR2016/WEF_GITR_Full_Report.pdf

World Bank (2017). Urban population. Retrieved from https://data.worldbank.org/indicator/SP.URB.TOTL.IN.ZS?page=1

Wu, T. (2003). Network neutrality, broadband discrimination. Journal of Telecommunications and High Technology Law, 2, 141-179. Retrieved from https://scholarship.law.columbia.edu/faculty_scholarship/1281

Annex

Brazil

| Carrier | Market share* | Operator Group | Plan | Pre/Post | ZR | Description |
| --- | --- | --- | --- | --- | --- | --- |
| Vivo | 30.00% | Telefonica | Vivo Pos | Post | N | 4 different voice/data plans from 6-30GB |
| Vivo |  | Telefonica | Vivo V | Post | N | 4 different voice/data plans from 6-30GB |
| Vivo |  | Telefonica | Vivo Controle | Pre/Post | N | 5 monthly voice/data plans with data cap 1-3GB. Once cap is reached, purchase of new data bundle required. |
| Vivo |  | Telefonica | Vivo Internet Redes Sociais | Add-on | Y | Triple lock data bundle: one month or one week add-on permitting 400MB or 800MB use of FB, FB Messenger & Twitter. Available with Pre Vivo Turbo and Vivo Controle. |
| Vivo |  | Telefonica | Vivo Turbo | Pre | N | 4 weekly/monthly voice/data plans with data cap 300MB-1.2GB. |
| Vivo |  | Telefonica | Vivo Easy | Pre | N | 4 monthly voice/data plans with data cap 1.5-3GB. |
| TIM | 25.00% | Telecom Italia | TIM Pre 1GB | Pre | Y | 7 day package including 500MB data cap, voice plus unltd. WhatsApp & music streaming via Deezer |
| TIM |  | Telecom Italia | TIM Pre 150 | Pre | Y | 7 day package including 150MB data cap, voice plus unltd. WhatsApp |
| TIM |  | Telecom Italia | TIM Pre Diario | Pre | Y | 1 day package including 50MB data cap, voice plus unltd. WhatsApp |
| TIM |  | Telecom Italia | TIM Pre 1.5GB | Pre | Y | 30 day package including 1GB data cap, voice plus unltd. WhatsApp |
| TIM |  | Telecom Italia | TIM Beta | Pre | Y | Monthly & weekly voice/data plan with 10 or 1.5GB cap plus unltd. music streaming with Deezer |
| TIM |  | Telecom Italia | TIM Beta Diario | Pre | N | Daily 100MB |
| TIM |  | Telecom Italia | Turbo WhatsApp | Pre | Y | 30 day package including 50MB per day for WhatsApp and 50MB data cap for the duration |
| TIM |  | Telecom Italia | Infinity Turbo 7 | Pre | Y | 7 day package including voice, 100MB data cap per day and unltd. WhatsApp |
| TIM |  | Telecom Italia | TIM Controle Light Factura | Pre | N | 30 day package including voice and 1GB of Internet |
| TIM |  | Telecom Italia | TIM Controle | Pre | Y | 30 day package including voice, 2GB of Internet and unltd. WhatsApp and Banca Virtual |
| TIM |  | Telecom Italia | TIM Music by Deezer | Add-on | Y | Available with all Pre and Controle plans: weekly unltd. music streaming for set fee |
| TIM |  | Telecom Italia | TIM Black | Post | Y | 5 monthly voice/data plans 3-20GB w/TIM Music and Banca Virtual (Brazilian digital magazines at no cost) |
| TIM |  | Telecom Italia | TIM Torcedor | Add-on | Y | Available with TIM Pos: free video of your favourite team's goals |
| TIM |  | Telecom Italia | TIM Pos Express | Post | Y | 2 monthly voice/data plans with 3 or 5GB data cap plus TIM Music and Banca Virtual |
| TIM |  | Telecom Italia | TIM Da Vinci | Post | N | Monthly voice/data plan with 50GB data cap |
| Claro | 25.00% | America Movil | Claro Controle | Post | Y | 2/3GB monthly voice/data plans w/unltd. WhatsApp, Claro Music and Video |
| Claro |  | America Movil | Claro Pos Giga 5/6/7/9/14/25 | Post | Y | Includes unltd. WhatsApp, Claro Musica |
| Claro |  | America Movil | Claro PreMix Mega | Pre | Y | 250MB monthly data plus WhatsApp & Claro Musica |
| Claro |  | America Movil | Pacote WhatsApp | Pre | Y | Multiple daily and monthly voice/data packages w/unltd. WhatsApp |
| Claro |  | America Movil | Claro Pre Mix Super Giga | Pre | Y | 1GB monthly data plus unltd. WhatsApp & Claro Musica |
| Oi | 18.00% | Oi SA | Pos-Pago | Post | N | 4 monthly voice/data plans w/4-20GB data cap |
| Oi |  | Oi SA | Controle | Post | N | 3 monthly voice/data plans w/1-3.5GB data cap |
| Oi |  | Oi SA | Pre | Pre | N | Sliding scale of 8 time-ltd. voice/data plans from 10-30 days |

Colombia

| Carrier | Market share | Operator Group | Plan | Pre/Post | ZR | Description |
| --- | --- | --- | --- | --- | --- | --- |
| Claro | 53.10% | America Movil | Smartphone en prepago | Pre | Y | Triple lock w/Tw, FB, WhatsApp |
| Claro |  |  | Compra tu SIM | Pre | Y | Triple lock w/Tw, FB, WhatsApp |
| Claro |  |  | Reventa Control | Pre | Y | Triple lock w/Tw, FB, WhatsApp |
| Claro |  |  | El Propio Chip | Pre | Y | Triple lock w/Tw, FB, WhatsApp |
| Claro |  |  | Prepago Amigo | Pre | Y | Triple lock w/Tw, FB, WhatsApp |
| Claro |  |  | Prepago Facil | Pre | Y | Triple lock w/Tw, FB, WhatsApp |
| Claro |  |  | Prepay Data Packets | Pre | Y | Triple lock w/Tw, FB, WhatsApp |
| Claro |  |  | Plan SM/IP Nav | Post | Y | Unltd. access to WhatsApp, Twitter, FB |
| Claro |  |  | Plan Navegacion BB | Post | Y | Data cap plus unltd. FB, Twitter, Gtalk, MySpace, Yahoo Messenger, BB Messenger |
| Claro |  |  | Sinlimitenav 1/3/6/10GB | Post | Y | Unltd. access to WhatsApp, Twitter, FB |
| Tigo | 17.30% | Millicom International Cellular SA | Cargo basico 1.2 & 2.5 GB | Post | Y | Unltd. access to WhatsApp & FB plus either Tigo Go music or Tigo Sports |
| Tigo | 17.30% | Millicom International Cellular SA | Cargo basico 3.5, 4.5, 6.5 GB | Post | Y | Unltd. access to WhatsApp, FB & 2 from 13 premium apps |
| Tigo | 17.30% | Millicom International Cellular SA | Paquete prepago (x4) | Pre | Y | 1, 3, 7, 30 day packets with data cap and unltd. FB & WhatsApp |
| Tigo | 17.30% | Millicom International Cellular SA | Super Bolsas Tigo (x5) | Pre | Y | 30 day data caps. 3 w/unltd. WhatsApp; 2 w/unltd. WA & FB |
| Tigo | 17.30% | Millicom International Cellular SA | Prepagada en combo | Pre | Y | 15 different time-ltd. voice/data packets with unltd. FB & WhatsApp |
| Tigo | 17.30% | Millicom International Cellular SA | Prepagadados de datos | Pre | Y | 15 different time-ltd. data packets with unltd. FB |
| Movistar | 23% | Telefónica Móviles Colombia S.A. | Plan Innovacion (x5) | Post | Y | 8 different data caps w/unltd. Waze, Line, FB, Twitter, WhatsApp (even after data cap is reached) |
| Movistar | 23% | Telefónica Móviles Colombia S.A. | Plan Innovacion (x3) | Post | Y | Waze, Line, FB, Twitter, WhatsApp unltd. (even after data cap is reached) PLUS Movistar Musica and/or Movistar Play |
| Movistar | 23% | Telefónica Móviles Colombia S.A. | Internet 1,2,4,8GB | Post | Y | Unltd. WhatsApp plus data cap |
| Movistar | 23% | Telefónica Móviles Colombia S.A. | Todo En Uno | Pre | Y | 7/90/180 days of voice/data plus unltd. FB, Twitter & WhatsApp |

Mexico

| Carrier | Market share* | Operator Group | Plan | Pre/Post | ZR | Description |
| --- | --- | --- | --- | --- | --- | --- |
| Telcel | 67% | America Movil | Max Sin Limite 2/3/5/6/6.5/7/9/12,000MB | Post | Y | FB, Twitter & WhatsApp, Claro Video unltd. 5K MB > +Uber |
| Telcel |  | America Movil | Telcel Internet 1/2/3.5/7/10/20 | Post | Y | FB, Twitter & WhatsApp, Claro Video unltd. 7K MB > +Uber |
| Telcel |  | America Movil | Telcel Max | Post | Y | FB, Twitter & WhatsApp |
| Telcel |  | America Movil | Amigo Sin Limite | Pre | Y | Sliding scale of triple locks w/capped FB & Twitter in Mexico & WhatsApp in North America |
| Telcel |  | America Movil | Amigo Por Segundo | Pre | N | Sliding scale of triple locks w/capped FB & Twitter in Mexico & WhatsApp in North America |
| Telcel |  | America Movil | Amigo Optimo Plus Sin Frontera | Pre | N | Sliding scale of triple locks w/capped FB & Twitter in Mexico & WhatsApp in North America |
| Movistar | 24% | Telefonica | Vas a Volar - 1.5/3/4.5/6/9/12/15,000MB | Post | Y | Plus sliding scale of 2, 3 or 4GB of WhatsApp, Tw & FB |
| Movistar |  | Telefonica | Vas a volar | Pre | Y | Sliding scale data packets 2/4/5.5/7/10/15 plus sliding scale of 2, 3 or 4GB of WhatsApp, Tw & FB |
| AT&T Mexico | 9% | AT&T | AT&T Con Todo 500MB-8GB | Post | Y | 10 data packets: unltd. FB, Twitter & WhatsApp AND 'new SNS' Snapchat, Instagram & Uber |
| AT&T Mexico |  | AT&T | AT&T a Tu Manera | Post | Y | 9 data packets: unltd. FB, Twitter & WhatsApp AND 'new SNS' Snapchat, Instagram & Uber |
| AT&T Mexico |  | AT&T | Unidos Prepago | Pre | Y | Sliding scale of 10 time-ltd. packets. All include capped data for WhatsApp, FB and Twitter AND Snapchat & Instagram for 5 most expensive packets |
| AT&T Mexico |  | AT&T | AT&T a Tu Manera te damos Mas | Pre | Y | 2/3/5/8GB plus unltd. FB, Twitter, WhatsApp AND unltd. Uber, Snapchat, Instagram |
| AT&T Mexico |  | AT&T | Recarga Plus | Pre | Y | 1GB of Internet plus cap for all above SNS |

South Africa

| Carrier | Market share | Operator Group | Plan | Pre/Post | ZR |
| --- | --- | --- | --- | --- | --- |
| Vodacom | 39.20% | Vodafone | VARIOUS (24) | Pre | N |
| Vodacom | 39.20% | Vodafone | VARIOUS (26) | Post | N |
| Cell C | 14% | 3C Telecommunications (SA) | LTE Power Plan | Post | N |
| Cell C | 14% | 3C Telecommunications (SA) | Smartdata | Post | N |
| Cell C | 14% | 3C Telecommunications (SA) | Smartdata TopUp | Post | N |
| Cell C | 14% | 3C Telecommunications (SA) | FREE BASICS | Pre & Post | Y |
| MTN | 33% | MTN Group (SA) | MTN Sky (4) | Post | N |
| MTN | 33% | MTN Group (SA) | New MTN Sky | Post | N |
| MTN | 33% | MTN Group (SA) | My MTN Choice +Talk | Post | N |
| MTN | 33% | MTN Group (SA) | My MTN Choice | Pre & Post | N |
| MTN | 33% | MTN Group (SA) | My MTN Choice Flexi | Post | N |
| MTN | 33% | MTN Group (SA) | My MTN Choice+ | Post | N |

Footnotes

1. Network neutrality refers to the principle that network operators should treat all information packets in an isonomic fashion, and should not discriminate based on sender, receiver, content, device or application. Although it is widely agreed that some traffic management practices are essential, these should not extend to forms of discrimination such as throttling and blocking (negative) or priority access (positive) that produce a commercial/competitive advantage for network operators.

2. See the ‘Zero Rating Map’ coordinated by Luca Belli for a survey of the global landscape of zero rating https://public.tableau.com/profile/zeroratingcts#!/vizhome/zeroratinginfo/Painel1

3. Pre-pay services involve an upfront charge to the user, in exchange for a finite amount of voice or data service. When the contracted airtime or data has expired, the user must pay an extra charge in order to be permitted to continue using the service, or wait until the beginning of their next billing period. Post-pay services present users with an invoice at the end of each billing period for a service bundle that often permits the user to exceed the caps on any contracted services on a pro-rata basis.

4. The Wikimedia Foundation announced on 16 February 2018 that the service would be discontinued at the end of 2018.

5. The figures listed in this table do not cumulatively equal 100% for each column, but instead indicate in every row the percentage of plans that include a ZR component for each payment category, in each market.

6. The Herfindahl-Hirschman Index measures market concentration as the sum of the squared market shares of the firms operating in a particular industry (a worked calculation is sketched after these footnotes).

7. Excluding social networking sites where the majority user base is resident in only one country, e.g., WeChat, QQ and QZone.

8. It should be noted that in the Brazilian case, many common examples of zero rating are in fact illegal according to the regulation of the Marco Civil da Internet law, which prohibits positive discrimination of vertically integrated apps (Governo do Brasil, 2017).

9. Vivo claims a 30% market share, and does not offer ZR in any of its plans. TIM claims a 25% market share, and 90% of its pre-pay and 66% of its post-pay plans feature some ZR component. Claro also claims a 25% market share, and all of its pre- and post-pay plans contain some ZR component. All data recorded from the carrier websites in July 2017 (a tallying sketch follows these footnotes).

10. It should be noted that in certain situations, ZR of a market-leading platform or service by a struggling or new MISP could restrict competition at the application layer, even if the wireless market is not adversely affected.

11. It should be noted that more granular data is available to assess more precisely the state of innovation within local app development ecosystems. Within the limitations of this study, however, the national capacity of innovation ranking assessed by the World Economic Forum provides a useful proxy for assessing general national innovation, from which the levels of more specific sectors can be inferred.

12. It should be noted that significant disparities in access to WiFi may exist between urban and rural areas, meaning that a high national average could still obscure a dearth of infrastructure in rural areas and a commensurate dependence on ZR.

13. By ensuring that use of certain communication platforms and information services does not count against data-capped access to the full mobile internet, or by providing some app-specific access to those who have no mobile internet access (the basic accounting mechanic is sketched below).
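
As a minimal illustration of the accounting mechanic described in footnote 13, the sketch below shows how traffic to zero-rated apps leaves a user's data cap untouched while all other traffic depletes it; the app names and plan figures are hypothetical, not drawn from any carrier in the Annex.

```python
# Minimal sketch of ZR accounting: traffic to zero-rated apps does not
# decrement the user's data cap; all other traffic does.
ZERO_RATED_APPS = {"WhatsApp", "Facebook"}  # hypothetical ZR bundle

def charge(remaining_mb, app, session_mb):
    """Return the data allowance left after one session."""
    if app in ZERO_RATED_APPS:
        return remaining_mb  # zero-rated: the cap is untouched
    return max(0, remaining_mb - session_mb)

cap = 300                            # MB; hypothetical monthly cap
cap = charge(cap, "WhatsApp", 120)   # still 300 MB
cap = charge(cap, "YouTube", 250)    # 50 MB left
print(f"{cap} MB of open internet access remaining")
```

Once the cap reaches zero, only the zero-rated services remain usable, which is precisely the 'walled garden' dynamic discussed in the conclusion.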

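To make the measure in footnote 6 concrete, here is a minimal sketch of the HHI calculation applied to the Brazilian market shares listed in the Annex (Vivo 30%, TIM 25%, Claro 25%, Oi 18%); the concentration thresholds in the comments are the commonly cited antitrust rules of thumb, an assumption rather than a figure from this study.

```python
# Minimal sketch: Herfindahl-Hirschman Index (HHI) for a wireless market.
# The HHI is the sum of the squared market shares of all firms in the market.

def hhi(shares_percent):
    """Return the HHI for market shares given in percent (0-100)."""
    return sum(s ** 2 for s in shares_percent)

# Brazilian carrier shares from the Annex; the small remainder held by
# minor operators is omitted here (assumption).
brazil_shares = [30.0, 25.0, 25.0, 18.0]

print(f"HHI = {hhi(brazil_shares):.0f}")
# HHI = 2474: 'moderately concentrated' under the commonly cited thresholds
# (below 1,500 unconcentrated; above 2,500 highly concentrated).
```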

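Similarly, per-carrier ZR prevalence figures like those in footnote 9 can be reproduced by a simple tally over rows shaped like the Annex tables. The sketch below runs on an abbreviated, illustrative subset of the Brazilian rows rather than the full dataset, so its output is not the published percentages.

```python
# Minimal sketch: share of plans with a ZR component, per carrier and
# payment category, computed from (carrier, category, zero_rated) rows.
from collections import defaultdict

# Abbreviated, illustrative subset of the Brazilian Annex rows (assumption).
plans = [
    ("Vivo", "Post", False), ("Vivo", "Pre", False),
    ("TIM", "Pre", True), ("TIM", "Pre", True), ("TIM", "Pre", False),
    ("TIM", "Post", True),
    ("Claro", "Pre", True), ("Claro", "Post", True),
    ("Oi", "Post", False), ("Oi", "Pre", False),
]

totals = defaultdict(int)
with_zr = defaultdict(int)
for carrier, category, zero_rated in plans:
    totals[(carrier, category)] += 1
    if zero_rated:
        with_zr[(carrier, category)] += 1

for key in sorted(totals):
    share = 100 * with_zr[key] / totals[key]
    print(f"{key[0]:>6} {key[1]:<4}: {share:5.1f}% of plans include a ZR component")
```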
Operationalising communication rights: the case of a “digital welfare state”


This paper is part of Practicing rights and values in internet policy around the world, a special issue of Internet Policy Review guest-edited by Aphra Kerr, Francesca Musiani, and Julia Pohle.

Introduction

The rampant spread of disinformation and hate speech online, the so-called surveillance capitalism of the internet giants and related violations of privacy (Zuboff, 2019), persisting digital divides (International Telecommunication Union, 2018), and inequalities created by algorithms (Eubanks, 2018): these issues and many other current internet-related phenomena challenge us as individuals and members of society. These challenges have sparked renewed discussion about the idea and ideal of citizens’ communication rights.

Either as a legal approach or as a moral discursive strategy, the rights-based approach is typically presented in a general sense as a counterforce that protects individuals against illegitimate forms of power, including both state and corporate domination (Horten, 2016). The notion of communication rights can refer not only to existing legally binding norms, but also, more broadly, to normative principles against which real-world developments are assessed. However, there is no consensus on what kinds of institutions are needed to uphold and enforce communication rights in the non-territorial, regulation-averse and rapidly changing media environment. Besides the actions of states, the realisation of communication rights is now increasingly impacted by the actions of global multinational corporations, activists, and users themselves.

While much of the academic debate has focused on transnational attempts to codify and promote communication rights at the global level, in this article, we examined a national approach to communication rights. Despite the obvious transnational nature of the challenges, we argued for the continued relevance of analysing communication rights in the context of national media systems and policy traditions. We provided a model to analyse communication rights in a framework that has its foundation in a specific normative, but also empirically grounded understanding of the role of communication in a democracy. In addition, we discussed the relevance of single country analyses to global or regional considerations of rights-based governance.

Communication rights and the case of Finland

The concept of communication rights has a varied history, starting with the attempts of the Global South in the 1970s to counter the Westernisation of communication (Hamelink, 1994; McIver et al., 2003). The connections between human rights and media policy have also been addressed, especially in international contexts and in the United Nations (Jørgensen, 2013; Mansell & Nordenstreng, 2006). Communication rights have also been invoked in more specific contexts to promote, for instance, the rights of disabled persons and cultural and sexual minorities in today’s communication environment (Padovani & Calabrese, 2014; McLeod, 2018). Currently, these rights are most often employed in civil society manifestos and international declarations focused on digital or internet-related rights (Karppinen, 2017; Redeker, Gill, & Gasser, 2018).

Today, heated policy debates have surrounded the role of global platforms in realising or violating principles, such as freedom of expression or privacy, which are already stipulated in the United Nations Universal Declaration of Human Rights (MacKinnon, 2013; Zuboff, 2019). Various groups have made efforts to monitor and influence the global policy landscape, including the United Nations, its Special Rapporteurs, and the Internet Governance Forum; voluntary multi-stakeholder coalitions, such as the Global Network Initiative; and civil society actors, such as the Electronic Frontier Foundation, Freedom House, or Ranking Digital Rights (MacKinnon et al., 2016). At the same time, nation states are still powerful actors whose choices can make a difference in the realisation of rights (Flew, Iosifides, & Steemers, 2016). This influence is made evident through monitoring efforts that track internet freedom and the increased efforts by national governments to control citizens’ data and internet access (Shahbaz, 2018).

Communication rights in Finland are particularly worth exploring and analysing. Although Finnish communication policy solutions are now intertwined with broader European Union initiatives, the country has an idiosyncratic historical legacy in communication policy. Year after year, it remains one of the top countries in press freedom rankings (Reporters without Borders, 2018). In the 1990s, Finland was a frontrunner in shaping information society policies, gaining notice for technological development and global competitiveness, especially in the mobile communications sector (Castells & Himanen, 2002). Finland was also among the first nations to make affordable broadband access a legal right (Nieminen, 2013). On the EU Digital Economy and Society Index, Finland scores high in almost all categories, partly due to its forward-looking strategies for artificial intelligence and extensive, highly developed digital public services (Ministry of Finance, 2018). According to the think tank Center for Data Innovation, Finland’s availability of official information is the best in the EU (Wallace & Castro, 2017). Not only are Finns among the most frequent users of the internet in the European Union, they also report feeling well-informed about risks of cybercrime and trust public authorities with their online data more than citizens of any other EU country (European Union, 2017, pp. 58-60).

While national competitiveness in the global marketplace has informed many of Finland’s policy approaches (Halme et al., 2014), they also reflect the Nordic tradition of the so-called “epistemic commons”, that is, the ideal of knowledge and culture as a joint and shared domain, free of restrictions (Nieminen, 2014). Aspects such as civic education, universal literacy, and mass media are at the heart of this ideal (Nieminen, 2014). This ideal has been central to what Syvertsen, Enli, Mjøs, and Moe (2014) called the “Nordic Media Welfare State”: Nordic countries are characterised by universal media and communications services, strong and institutionalised editorial freedom, a cultural policy for the media, and policy solutions that are consensual and durable, based on consultation with both public and private stakeholders.

Operationalising rights

How does Finland, a country with such unique policy traditions, fare as a “Digital Welfare State”? In this article, we employed a basic model that divides the notion of communication rights into four distinct operational categories (Nieminen, 2010; 2016; 2019; Horowitz & Nieminen, 2016). These divisions differ from other recent categorisations (Couldry et al., 2016; Goggin et al., 2017) in that they specifically reflect the ideal of the epistemic commons of shared knowledge and culture. Communication rights, then, should preserve the epistemic commons and remove restrictions on it. We understand the following rights as central to those tasks:

  1. Access: citizens’ equal access to information, orientation, entertainment, and other contents serving their rights.
  2. Availability: equal availability of various types of content (information, orientation, entertainment, or other) for citizens.
  3. Dialogical rights: the existence of public spaces that allow citizens to publicly share information, experiences, views, and opinions on common matters.
  4. Privacy: protection of every citizen’s private life from unwanted publicity, unless such exposure is clearly in the public interest or if the person decides to expose it to the public, as well as protection of personal data (processing, by authorities or businesses alike, must have legal grounds and abide by principles, such as data minimisation and purpose limitation, while individuals’ rights must be safeguarded).

To discuss each category of rights, we deployed them at three levels: the level of the Finnish regulatory-normative framework; the level of implementation by the public sector and by commercial media and communications technology providers; and the level of activity by citizen-consumers. This multi-level analysis aims at depicting the complex nature of the rights and their often contested and contradictory realisations at different levels. For each category, we also highlighted one example: for access, telecommunications; for availability, extended collective licensing in the context of online video recording services; for dialogical rights, e-participation; and for privacy, the monitoring of communications metadata within organisations.

Access

Access as a communication right well illustrates the development of media forms, the expansion of the Finnish media ecosystem, and the increasing complexity of rights as realised in regulatory decisions by the public sector, commercial media, and communications technology providers. After 100 years of independence, Finland is still short of domestic capital and heavily dependent on exports, which makes it vulnerable to economic downturns (OECD, 2018). Interestingly, despite changes to the national borders, policies, and technologies over time, it is these geopolitical, demographic, and socioeconomic conditions that have remained relatively unchanged and, in turn, have shaped most of the current challenges towards securing access to information and media.

While the right to access in Finland also relates to institutions such as libraries and schools, the operationalisation here is illustrated by telecommunications, perhaps the clearest case of access. Telecommunications were originally introduced in Finland by the Russian Empire; however, the Finnish Senate managed to obtain an imperial mandate for licensing private telephone operations. As a result, the Finnish telephone system formed a competitive market based on several regional private companies. There was no direct state involvement in the telecommunications business before Finland became independent (Kuusela, 2007).

The licenses of the private telephone operators required them to arrange the telephone services in their area to meet the telephone customers’ needs for reasonable and equal prices. In practice, every company had a universal service obligation (USO) in its licensing area. However, as the recession of the 1930s stopped the development of private telephone companies in the most sparsely inhabited areas, the state of Finland had to step in. The national Post and Telecommunication service eventually played a pivotal role in providing telephone services to the most northern and eastern parts of Finland (Moisala, Rahko, & Turpeinen, 1977).

Access to a fixed telephone network improved gradually until the early 1990s, when about 95% of households had at least one telephone in use. However, the number of mobile phone subscriptions surpassed the number of fixed line telephone subscriptions as early as 1999, and an increasing share of households gave up the traditional telephone completely. As a substitute for the fixed telephone, in the late 1990s, mobile phones were seen in Finland as the best way to bring communication “into every pocket” (Silberman, 1999). Contrary to the ideal of the epistemic commons, the official government broadband strategy was based much more on market-led development and mobile networks than, for example, in Sweden, where the government made more public investments in building fixed fibre-optic connections (Eskelinen, Frank, & Hirvonen, 2008). Finland also gave indirect public subsidies to mobile broadband networks (Haaparanta & Puhakka, 2002). While the rest of Europe had started to auction mobile spectrum (Sims, Youell, & Womersley, 2015), in Finland the operators received all mobile frequencies for free until 2013.

The European regulations of USOs in telecommunication have been designed to set a relatively modest minimum level of telephone services at an affordable price, which could be implemented in a traditional fixed telephone network. Any extensions for mobile or broadband services have been deliberately omitted (Wavre, 2018). However, the universal services directive (2002/22/EC) lets the member states use both fixed and wireless mobile network solutions for USO provision. In addition, while the directive suggests that users should be able to access the internet via the USO connection, it does not set any minimum bitrate for connections in the common market.

Finland amended its national legislation in 2007 to let telecom operators meet their universal service obligations using mobile networks. The results were dramatic, as operators quickly replaced large parts of the fixed telephone network with a mobile network, especially in the eastern and northern parts of Finland. Today, less than 10% of households have fixed telephones. At the same time, there are almost 10 million mobile subscriptions in use in a country with 5.5 million inhabitants. Less than 1% of households do not have any mobile phones at all (Statistics Finland, 2017). Thanks to the 3G networks using frequencies the operators had obtained for free, Finland became a pioneer in making affordable broadband a legal right. Reasonably priced access to broadband internet from home has been part of the universal service obligation in Finland since 2010. However, the USO broadband speed requirement (2 Mbps) is rather modest by contemporary standards.

It is obvious that since the 1990s, Finland has not systematically addressed access as a basic right, but rather as a tool to reach political and economic goals. Although about 90% of households already have internet access, only 51% of them have access to ultra-fast fixed connections. Almost one-third of Finnish households are totally dependent on mobile broadband, which is the highest share in the EU. To guarantee access to 4G mobile broadband throughout the country, the Finnish government licensed two operators, Finnish DNA and Swedish Telia, to build and operate a new, shared mobile (broadband) network in the northern and eastern half of Finland. Despite recent government efforts to also develop ultra-fast fixed broadband, Finland is currently lagging behind other EU countries. A report monitoring the EU initiative “A Digital Agenda for Europe” (European Court of Auditors, 2018) found that Finland is only 22nd in the ranking of progress towards universal coverage with fast broadband (> 30 Mbps) by 2020. In contrast, another Nordic Media Welfare State, Sweden, with its ongoing investments in citizens’ access to fast broadband, expects all households to have access to at least 100 Mbps by 2020 (European Court of Auditors, 2018).

Availability

As a communication right, availability is the counterpart not only to access, but also to dialogical rights and privacy. Availability refers to the abundance, plurality, and diversity of factual and cultural content to which citizens may equally expose themselves. Importantly, despite an apparent abundance of available content in the current media landscape, digitalisation does not translate into limitless availability, but rather implies new restrictions and conditions, as well as challenges stemming from disinformation. Availability both overcomes many traditional boundaries and faces new ones, many pertaining to ownership and control over content. For instance, public service broadcasting no longer self-evidently caters for availability, and media concentration may affect availability. In Finland, one specific question of availability and communication pertains to linguistic rights. Finland has two official languages, which implies additional demands for availability both in Finnish and in Swedish, alongside Sami and other minority languages. These are guaranteed in a special Language Act, but are also included in several other laws, including the law on public service broadcasting.

Here, availability is examined primarily through overall trends in free speech and access to information in Finland, as well as from the perspective of copyright and paywalls in particular. Availability is framed and regulated from the international and supranational levels (e.g., the European Union) to the national level. Availability at a national level relies on the constitutionally safeguarded freedom of expression and access to information as well as fundamental cultural and educational rights. Freedom of the press and publicity dates back to 18th-century Sweden-Finland. After periods of censorship and “Finlandization”, the basic tenet has been a ban on prior restraint, notwithstanding measures required to protect children in the audio-visual field (Neuvonen, 2005; 2018). Later, Finland became a contracting party to the European Convention on Human Rights (ECHR) in 1989, linking Finland closely to the European tradition. However, in Finland, privacy and freedom of expression were long balanced in favour of the former, departing somewhat from ECHR standards and affecting media output (Tiilikka, 2007).

Regarding transparency and publicity in the public sector, research has shown that Finnish municipalities, in general, are not truly active in catering to citizens’ access to information requests, and that there is inequality across the country (Koski & Kuutti, 2016). This is in contrast to the ideals of the Nordic Welfare State (Syvertsen et al., 2014). In response, the civil society group Open Knowledge Finland has created a website that publishes information requests and guides people in submitting their own requests.

The digital environment is conducive to restrictions and requirements stemming from copyright and personal data protection, both of which have an effect on availability. The “right to be forgotten”, for example, enables individual requests to remove links from search results, thus affecting searchability (Alén-Savikko, 2015). To overcome a particular copyright challenge, new provisions were tailored in Finland to enable online video recording services, thereby allowing people to access TV broadcasts at more convenient times in a manner that transcends traditional private copying practices. The Finnish solution rests partly on the Nordic approach to so-called extended collective licensing (ECL), which was originally developed as a solution to serve the public interest in the field of broadcasting. Collective management organisations are able to license such use not only on behalf of their members but also with an extended effect (i.e., they are regarded as representative of non-members as well), while TV companies license their own rights (Alén-Savikko & Knapstad, 2019; Alén-Savikko, 2016).

Alongside legal norms, different business models frame and construct the way availability presents itself to citizens. Currently, pay-per-use models and paywalls feature in the digital media sector, although pay TV development in particular has long been moderate in Finland (Ministry of Transport and Communications, 2014a). With new business models, availability transforms into conditional access, while equal opportunity turns into inequality based on financial means. From the perspective of individual members of the public, the one-sided emphasis on consumer status is in direct opposition to the ideals of the epistemic commons and the Nordic Media Welfare State.

Dialogical rights

Access and availability are prerequisites for dialogical rights. These rights can be operationalised as citizens’ possibilities and realised activities to engage in dialogue that fosters democratic decision-making. Digital technology offers new opportunities for participation: in dialogues between citizens and the government; in dialogues with and via legacy media; and in direct, mediated peer-to-peer communication that can amount to civic engagement.

Finland has a long legacy of providing equal opportunities for participation, for instance as the first country in Europe to establish universal suffrage in 1906, when still under the Russian Empire. After reaching independence in 1917, Finland implemented its constitution in 1919. The constitution secures freedom of expression, while also stipulating that public authorities shall promote opportunities for the individual to participate in societal activity and to influence the decisions that concern him or her.

Currently, a dozen laws support dialogical rights, ranging from the Election Act and Non-Discrimination Act to the Act on Libraries. Several of them address media organisations, including the Finnish Freedom of Expression Act (FEA), which safeguards individuals’ right to report and make a complaint about media content, and the Act on Yleisradio (public broadcasting), which stipulates the organisation’s role in supporting democratic participation.

Finland seems to do particularly well in providing internet-based opportunities for direct dialogue between citizens and their government. These efforts began, as elsewhere in Europe, in the 1990s (Pelkonen, 2004). The government launched a public engagement programme, followed in the subsequent decade by two other participation-focused programmes (Wilhelmsson, 2017). While Estonia is the forerunner in all types of electronic public services, Finland excels in the Nordic model of combining e-governance and e-participation initiatives: it currently features a number of online portals for gathering citizens’ opinions and initiatives, at both the national and municipal levels (Wilhelmsson, 2017).

Still, increasing inequality in the capability for political participation is one of the main concerns in the National Action Plan 2017–2019 (Ministry of Justice, 2017). The country report on the Sustainable Governance Indicators notes that the weak spot for Finland is the public’s evaluative and participatory competencies (Anckar et al., 2018). Some analyses posit that Finnish civil society is simply not very open to diverse debates, contrary to the culture of public dialogue in Sweden (Pulkkinen, 1996). While Finns are avid news followers who trust the news and are more likely to pay for online news than news consumers in most countries (Reunanen, 2018), participatory possibilities do not entice them very much. Social media are not widely used for political participation, even by young people (Statistics Finland, 2017), and, for example, Twitter remains a forum for dialogues between the political and media elite (Eloranta & Isotalus, 2016).

The most successful Finnish e-participation initiative is based on a 2012 amendment to the constitution that made it possible for citizens to submit initiatives to the Parliament. One option to do so is via a designated open source online portal. An initiative proceeds to Parliament if it collects at least 50,000 statements of support within six months. By 2019, the portal had accrued almost 1,000 proposals; 24 had proceeded to be discussed in Parliament, and two related laws had been passed. Research shows, however, that many other digital public service portals still remain unknown to Finns (Wilhelmsson, 2017).

As Karlsson (2015) has posited in the case of Sweden, public and political dialogues online can be assessed by their intensity, quality, and inclusiveness. The Finnish case shows that digital solutions do not guarantee participation if they are not actively marketed to citizens, and if they do not entail a direct link to decision-making (Wilhelmsson, 2017). While the Finnish portal for citizen initiatives has mobilised some marginalised groups, the case suggests that e-participation can also alienate others, for example older citizens (Christensen et al., 2017). Valuing each and every voice, and prioritising it over economic or political priorities (Couldry, 2010) or the need to govern effectively (Nousiainen, 2016), could be seen as central to dialogical rights between citizens and those in government and public administration.

Privacy

Privacy brings together all the main strands of changes caused by digitalisation: changes in media systems from mass to multimedia; technological advancements; regulatory challenges of converging sectors; and shifting sociocultural norms and practices. It also highlights a shrinking, rather than expanding, space for the right to privacy.

Recent technical developments and the increased surveillance capacities of both corporations and nation states have raised concerns regarding the fundamental right to privacy. While the trends are arguably global, there is a distinctly national logic to privacy rights. This logic coexists with international legal instruments. In the Nordic case, strong privacy rules exist alongside access to information laws that require the public disclosure of data that would be regarded as intimate in many parts of the world, such as tax records. Curiously, a few years ago, the majority of Finns did not even consider their name, home address, fingerprints, or mobile phone numbers to be personal information (European Union, 2011), and they are still among the most trusting citizens in the EU when it comes to the use of their digital data by authorities (European Union, 2017).

In Finland, the right to privacy is a fundamental constitutional right and includes the right to be left alone, a person’s honour and dignity, the physical integrity of a person, the confidentiality of communications, the protection of personal data, and the right to be secure in one’s home (Neuvonen, 2014). The present slander and defamation laws date back to Finland’s first criminal code from 1889, when Finland was still a Grand Duchy of the Russian Empire. In 1919, the Finnish constitution provided for the confidentiality of communications by mail, telegraph, and telephone, as well as the right to be secure in one’s home—important rights for citizens in a country that had lived under the watchful eye of the Russian security services.

In the sphere of privacy protection, new laws are usually preceded by the threat of new technology (Tene & Polonetsky, 2013); however, in Finland, this was not the case. Rather, the need for new laws reflected a change in Finland’s journalistic culture that had previously respected the private lives of politicians, business leaders, and celebrities. The amendments were called “Lex Hymy” (Act 908/1974) after one of Finland’s most popular monthlies had evolved into a magazine increasingly focused on scandals.

Many of the more recent rules on electronic communications and personal data are a result of international policies being codified into national legislation, perhaps most importantly the transposition of EU legislation into national law. What is fairly clear, however, is that the state has been seen as the guarantor of the right to privacy since even before Finland was a sovereign nation. The strong role of the state is consistent with the European social model and an increased focus on public service regulation (cf. Venturelli, 2002, p. 80). Nevertheless, the potential weakness of this model is that privacy rights seldom trump the public interest, and public uses of personal data are not as strictly regulated as their private use.

Finland has also introduced legislation that weakens the relatively strong right to privacy. After transposing the ePrivacy Directive guaranteeing the confidentiality of electronic communications into national law, the Finnish Government proposed an amending act that granted businesses and organisations the right to monitor communications metadata within their networks. The act was dubbed “Lex Nokia” after Finland’s leading newspaper published an article that alleged that the Finnish mobile giant had pressured politicians and officials to introduce the new law (Sajari, 2009). While it is difficult to assess to what degree Nokia influenced the contents of the legislation, it is clear that Nokia took the initiative and was officially involved in the legislative process (Jääsaari, 2012).

The Lex Nokia act demonstrates how the state’s public interest considerations might coincide with the economic interests of large corporations to the detriment of the right to privacy. Regardless, Finnish citizens remain more trusting of public authorities, health institutions, banks, and telecommunications companies than most of their European counterparts (European Union, 2015). It remains to be seen whether this trust in authority will erode as more public and private actors aim to capitalise on the promises of big data. Nothing in recent Eurobarometer surveys (European Union, 2018a, pp. 38–56; European Union, 2018b) would indicate that trust in public authorities is in crisis or in steep decline; the same cannot be said for trust in political institutions, which seems to decline a few percentage points each year in various studies.

Discussion

The promotion of communication rights based on the ideal of the epistemic commons is institutionalised in a variety of ways in Finnish communication policy-making, ranging from traditional public service media arrangements to more recent broadband and open data initiatives. However, understood as equal and effective capabilities, communication rights and the related policy principles of the Nordic Media Welfare State have never been completely or uniformly followed in the Nordic countries.

The analysis of the Finnish case highlights how the ideal of a “Digital Welfare State” falls short in several ways. Policies of access or privacy may focus on economic goals rather than rights. E-participation initiatives promoting dialogical rights do not automatically translate into a capacity or a desire to participate in decision-making. Arguably, the model employed in this article has been built on a specific understanding of which rights and stakeholders are needed to support the ideals of the epistemic commons and the Nordic Media Welfare State. That is why it focuses more on national specificities and less on the impact of supranational and international influences on the national situation. It is obvious that in the current media landscape, national features are challenged by a number of emergent forces, including not only technological transformations but also general trends of globalisation and the declining capacities of nation states to enforce public interest or rights-based policies (Horten, 2016).

Still, more subtle and local manifestations of global and market-driven trends are worth examining to understand different policy options and interpretations. Measurement tools and indicators have been developed and employed to map and monitor the state of communication rights at the national level, targeting their various components, such as linguistic issues or accessibility. In Finland, this type of approach has been adopted in the field of media and communications policy (Ala-Fossi et al., 2018; Artemjeff & Lunabba, 2018; Ministry of Transport and Communications, 2014b). Recent academic efforts aiming at comparative outlooks (Couldry et al., 2016; Goggin et al., 2017) are indications that communication rights urgently call for a variety of conceptualisations and operationalisations to uncover similarities and differences between countries and regions. As Eubanks (2017) argued, we seem to be at a crossroads: despite our unparalleled capacities for communication, we are witnessing new forms of digitally enabled inequality, and we need to curb these inequalities now, if we want to counter them at all. We may need global policy efforts, but we also need to understand their specific national and supranational reiterations to counter these and other inequalities regarding citizens’ communication rights.

References

Ala-Fossi, M., Alén-Savikko, A., Grönlund, M., Haara, P., Hellman, H., Herkman, J., … Mykkänen, M. (2018). Media- ja viestintäpolitiikan nykytila ja sen mittaaminen [The current state of media and communication policy and its measurement]. Helsinki: Ministry of Transport and Communications. Retrieved February 21, 2019, from http://urn.fi/URN:ISBN:978-952-243-548-4

Alén-Savikko, A. (2015). Pois hakutuloksista, pois mielestä? [Out of the search results, out of mind?]. Lakimies, 113(3–4), 410–433. Retrieved from http://www.doria.fi/handle/10024/126796

Alén-Savikko, A. (2016). Copyright-proof network-based video recording services? An analysis of the Finnish solution. Javnost – The Public, 23(2), 204–219. doi:10.1080/13183222.2016.1162979

Alén-Savikko, A., & Knapstad, T. (2019). Extended collective licensing and online distribution – prospects for extending the Nordic solution to the digital realm. In T. Pihlajarinne, J. Vesala & O. Honkkila (Eds.), Online distribution of content in the EU (pp. 79–96). Cheltenham, UK & Northampton, MA: Edward Elgar. doi:10.4337/9781788119900.00012

Anckar, D., Kuitto, K., Oberst, C., & Jahn, D. (2018). Finland report – Sustainable Governance Indicators 2018. Retrieved March 14, 2018, from https://www.researchgate.net/publication/328214890_Finland_Report_-_Sustainable_Governance_Indicators_2018

Artemjeff, P., & Lunabba, V. (2018). Kielellisten oikeuksien seurantaindikaattorit [Indicators for monitoring linguistic rights] (No. 42/2018). Helsinki: Ministry of Justice Finland. Retrieved from http://julkaisut.valtioneuvosto.fi/bitstream/handle/10024/161087/OMSO_42_2018_Kielellisten_oikeuksien_seurantaindikaattorit.pdf

Castells, M., & Himanen, P. (2002). The information society and the welfare state: The Finnish model. Oxford: Oxford University Press.

Christensen, H., Jäske, M., Setälä, M. & Laitinen, E. (2017). The Finnish Citizens’ Initiative: Towards Inclusive Agenda-setting? Scandinavian Political Studies, 40(4), 411–433. doi:10.1111/1467-9477.12096

Couldry, N. (2010). Why voice matters: Culture and politics after neoliberalism. London: Sage.

Couldry, N., Rodriguez, C., Bolin, G., Cohen, J., Goggin, G., Kraidy, M., … Zhao, Y. (2016). Chapter 13 – Media and communications. Retrieved November 14, 2018, from https://comment.ipsp.org/sites/default/files/pdf/chapter_13_-_media_and_communications_ipsp_commenting_platform.pdf

Eloranta, A., & Isotalus, P. (2016). Vaalikeskustelun aikainen livetwiittaaminen – kansalaiskeskustelun uusi muoto? [Live-tweeting during election debates – a new form of civic discussion?]. In K. Grönlund & H. Wass (Eds.), Poliittisen osallistumisen eriytyminen: Eduskuntavaalitutkimus 2015 [The differentiation of political participation: The Finnish parliamentary election study 2015] (pp. 435–455). Helsinki: Oikeusministeriö.

Eskelinen, H., Frank, L., & Hirvonen, T. (2008). Does strategy matter? A comparison of broadband rollout policies in Finland and Sweden. Telecommunications Policy, 32(6), 412–421. doi:10.1016/j.telpol.2008.04.001

Eubanks, V. (2017). Automating inequality: How high-tech tools profile, police, and punish the poor. New York: St. Martin’s Press.

European Court of Auditors (2018). Broadband in the EU Member States: despite progress, not all the Europe 2020 targets will be met (Special report No. 12). Luxembourg: European Court of Auditors. Retrieved February 22, 2018, from http://publications.europa.eu/webpub/eca/special-reports/broadband-12-2018/en/

European Commission (2011). Attitudes on data protection and electronic identity in the European Union (Special Eurobarometer No. 359). Luxembourg: Publications Office of the European Union. Retrieved November 14, 2018, from http://ec.europa.eu/public_opinion/archives/ebs/ebs_359_en.pdf

European Commission (2015). Data protection (Special Eurobarometer No. 431). Luxembourg: Publications Office of the European Union. Retrieved November 14, 2018, from https://data.europa.eu/euodp/data/dataset/S2075_83_1_431_ENG

European Commission (2017). Europeans’ attitudes towards cyber security (Special Eurobarometer No. 464a). Luxembourg: Publications Office of the European Union. doi:10.2837/82418

European Commission (2018a). Public opinion in the European Union (Standard Eurobarometer No. 89). Luxembourg: Publications Office of the European Union. doi:10.2775/172445

European Commission (2018b). Kansallinen raportti. Kansalaismielipide Euroopan unionissa: Suomi [National report. Public opinion in the European Union: Finland] (Standard Eurobarometer, National Report No. 90). Luxembourg: Publications Office of the European Union. Retrieved from https://ec.europa.eu/finland/sites/finland/files/eb90_nat_fi_fi.pdf

Flew, T., Iosifides, P., & Steemers, J. (Eds.). (2016). Global media and national policies: The return of the state. Basingstoke: Palgrave. doi:10.1057/9781137493958

Goggin, G., Vromen, A., Weatherall, K. G., Martin, F., Webb, A., Sunman, L., & Bailo, F. (2017). Digital rights in Australia (Sydney Law School Research Paper No. 18/23). Sydney: University of Sydney. https://ses.library.usyd.edu.au/bitstream/2123/17587/7/USYDDigitalRightsAustraliareport.pdf

Haaparanta, P., & Puhakka, M. (2002). Johtolangatonta keskustelua: Tunne ja järki huutokauppakeskustelussa [Discussion without a clue: Emotion and reason in the auction debate]. Kansantaloudellinen Aikakauskirja, 98(3), 267–274.

Habermas, J. (2006). Political communication in media society: Does democracy still enjoy an epistemic dimension? The impact of normative theory on empirical research. Communication Theory, 16(4), 411–426. doi:10.1111/j.1468-2885.2006.00280.x

Halme, K., Lindy, I., Piirainen, K., Salminen, V., & White, J. (Eds.). (2014). Finland as a knowledge economy 2.0: Lessons on policies and governance (Report No. 86943). Washington, DC: World Bank Group. Retrieved from http://documents.worldbank.org/curated/en/418511468029361131/Finland-as-a-knowledge-economy-2-0-lessons-on-policies-and-governance

Hamelink, C. J. (1994). The politics of world communication. London: Sage.

Horowitz, M., & Nieminen, H. (2016). European public service media and communication rights. In G. F. Lowe & N. Yamamoto (Eds.), Crossing borders and boundaries in public service media: RIPE@2015 (pp. 95–106). Gothenburg: Nordicom. Available at https://gupea.ub.gu.se/bitstream/2077/44888/1/gupea_2077_44888_1.pdf#page=97

Horten, M. (2016). The closing of the net. Cambridge: Polity Press.

International Telecommunication Union. (2018). Measuring the information society report 2018 – Volume 1. Geneva: International Telecommunication Union. Retrieved from https://www.itu.int/en/ITU-D/Statistics/Pages/publications/misr2018.aspx

Jääsaari, J. (2012). Suomalaisen viestintäpolitiikan normatiivinen kriisi: Esimerkkinä Lex Nokia [The normative crisis of Finnish communications policy: The case of Lex Nokia]. In K. Karppinen & J. Matikainen (Eds.), Julkisuus ja demokratia [Publicity and democracy] (pp. 265–291). Tampere: Vastapaino.

Jørgensen, R. F. (2013). Framing the net: The internet and human rights. Cheltenham, UK & Northampton, MA: Edward Elgar.

Karlsson, M. (2015). Interactive, qualitative, and inclusive? Assessing the deliberative capacity of the political blogosphere. In K. Jezierska & L. Koczanowicz (Eds.), Democracy in dialogue, dialogue in democracy: The politics of dialogue in theory and practice (pp. 253–272). London & New York: Routledge.

Karppinen, K. (2017). Human rights and the digital. In H. Tumber & S. Waisbord (Eds.), Routledge companion to media and human rights (pp. 95–103). London & New York: Routledge. doi:10.4324/9781315619835-9

Koski, A., & Kuutti, H. (2016). Läpinäkyvyys kunnan toiminnassa – tietopyyntöihin vastaaminen [Transparency in municipal action – responding to requests for information]. Helsinki: Kunnallisalan kehittämissäätiö [Municipal Development Foundation]. Retrieved November 14, 2018, from http://kaks.fi/wp-content/uploads/2016/11/Tutkimusjulkaisu-98_nettiin.pdf

Kuusela, V. (2007). Sentraalisantroista kännykkäkansaan – televiestinnän historia Suomessa tilastojen valossa [From switchboard operators to the mobile phone nation – the history of telecommunications in Finland in the light of statistics]. Helsinki: Tilastokeskus. Retrieved November 14, 2018, from http://www.stat.fi/tup/suomi90/syyskuu.html

MacKinnon, R. (2013). Consent of the networked: The struggle for internet freedom. New York: Basic Books.

MacKinnon, R., Maréchal, N., & Kumar, P. (2016). Global Commission on Internet Governance – Corporate accountability for a free and open internet (Paper No. 45). Ontario; London: Centre for International Governance Innovation; Chatham House. Retrieved from https://www.cigionline.org/sites/default/files/documents/GCIG%20no.45.pdf

Mansell, R., & Nordenstreng, K. (2006). Great media and communication debates: WSIS and the MacBride report. Information Technologies and International Development, 3(4), 15–36. Available at http://tampub.uta.fi/handle/10024/98193

McIver, W. J., Jr., Birdsall, W. F., & Rasmussen, M. (2003). The internet and the right to communicate. First Monday, 8(12). doi:10.5210/fm.v8i12.1102

McLeod, S. (2018). Communication rights: Fundamental human rights for all. International Journal of Speech-Language Pathology, 20(1), 3–11. doi:10.1080/17549507.2018.1428687

Ministry of Finance, Finland. (2018, May 23). Digital Economy and Society Index: Finland has EU's best digital public services. Helsinki: Ministry of Finance. Retrieved February 28, 2019, from https://vm.fi/en/article/-/asset_publisher/digitaalitalouden-ja-yhteiskunnan-indeksi-suomessa-eu-n-parhaat-julkiset-digitaaliset-palvelut

Ministry of Justice, Finland. (2017). Action plan on democracy policy. Retrieved February 28, 2019, from https://julkaisut.valtioneuvosto.fi/bitstream/handle/10024/79279/07_17_demokratiapol_FI_final.pdf?sequence=1

Ministry of Transport and Communications, Finland. (2014a). Televisioala Suomessa: Toimintaedellytykset internetaikakaudella [Television industry in Finland: Operating conditions in the Internet era] (Publication No. 13/2014). Helsinki: Ministry of Transport and Communications. Retrieved from http://urn.fi/URN:ISBN:978-952-243-398-5

Ministry of Transport and Communications, Finland. (2014b). Viestintäpalveluiden esteettömyysindikaattorit [Accessibility indicators for communication services] (Publication No. 36/2014). Helsinki: Ministry of Transport and Communications. Retrieved from http://urn.fi/URN:ISBN:978-952-243-437-1

Moisala, U. E., Rahko, K., & Turpeinen, O. (1977). Puhelin ja puhelinlaitokset Suomessa 1877–1977 [Telephone and telephone companies in Finland 1877–1977]. Turku: Puhelinlaitosten Liitto ry.

Neuvonen, R. (2005). Sananvapaus, joukkoviestintä ja sääntely [Freedom of expression, media and regulation]. Helsinki: Talentum.

Neuvonen, R. (2014). Yksityisyyden suoja Suomessa [Privacy in Finland]. Helsinki: Lakimiesliiton kustannus.

Neuvonen, R. (2018). Sananvapauden historia Suomessa [The history of freedom of expression in Finland]. Helsinki: Gaudeamus.

Nieminen, H. (2019). Inequality, social trust and the media: Towards citizens’ communication and information rights. In J. Trappel (Ed.), Digital media inequalities: Policies against divides, distrust and discrimination (pp. 43–66). Gothenburg: Nordicom. Available at https://norden.diva-portal.org/smash/get/diva2:1299036/FULLTEXT01.pdf#page=45

Nieminen, H. (2016). Communication and information rights in European media policy. In L. Kramp, N. Carpentier, A. Hepp, R. Kilborn, R. Kunelius, H. Nieminen, T. Olsson, T. Pruulmann-Vengerfeldt, I. Tomanić Trivundža, & S. Tosoni (Eds.), Politics, civil society and participation: media and communications in a transforming environment (pp. 41–52). Bremen: Edition lumière. Available at http://www.researchingcommunication.eu/book11chapters/C03_NIEMINEN201516.pdf

Nieminen, H. (2014). A short history of the epistemic commons: Critical intellectuals, Europe and the small nations. Javnost – The Public, 21(3), 55–76. doi:10.1080/13183222.2014.11073413

Nieminen, H. (2013). European broadband regulation: The “broadband for all 2015” strategy in Finland. In M. Löblich & S. Pfaff-Rüdiger (Eds.), Communication and media policy in the era of the internet: Theories and processes (pp. 119–133). Munich: Nomos. doi:10.5771/9783845243214-119

Nieminen, H. (2010). The European public sphere and citizens’ communication rights. In I. Garcia-Blanco, S. Van Bauwel, & B. Cammaerts (Eds.), Media agoras: Democracy, diversity, and communication (pp. 16–44). Newcastle Upon Tyne, UK: Cambridge Scholars Publishing.

Nousiainen, M. (2016). Osallistavan käänteen lyhyt historia [A brief history of the participatory turn]. In M. Nousiainen & K. Kulovaara (Eds.), Hallinnan ja osallistamisen politiikat [The politics of governance and inclusion] (pp. 158–189). Jyväskylä: Jyväskylä University Press. Available at https://jyx.jyu.fi/bitstream/handle/123456789/50502/978-951-39-6613-3.pdf?sequence=1#page=159

OECD. (2018). OECD economic surveys: Finland 2018. Paris: OECD Publishing. doi:10.1787/eco_surveys-fin-2018-en

Padovani, C., & Calabrese, A. (Eds.) (2014). Communication Rights and Social Justice. Historical Accounts of Transnational Mobilizations. Cham: Springer / Palgrave Macmillan. doi:10.1057/9781137378309

Pelkonen, A. (2004). Questioning the Finnish model – Forms of public engagement in building the Finnish information society (Discussion Paper No. 5). London: STAGE. Retrieved November 14, 2018, from http://lincompany.kz/pdf/Finland/5_ICTFinlandcase_final2004.pdf

Pulkkinen, T. (1996). Snellmanin perintö suomalaisessa sananvapaudessa [Snellman’s legacy in Finnish freedom of speech]. In K. Nordenstreng (Ed.), Sananvapaus [Freedom of expression] (pp. 194–208). Helsinki: WSOY.

Redeker, D., Gill, L., & Gasser, U. (2018). Towards digital constitutionalism? Mapping attempts to craft an Internet Bill of Rights. International Communication Gazette, 80(4), 302–319. doi:10.1177/1748048518757121

Reporters without Borders (2018). 2018 World Press Freedom Index. Retrieved February 28, 2019, from https://rsf.org/en/ranking

Reunanen, E. (2018). Finland. In N. Newman, R. Fletcher, A. Kalogeropoulos, D. A. L. Levy, & R. K. Nielsen (Eds.), Reuters Institute digital news report 2018 (pp. 77–78). Oxford: Reuters Institute for the Study of Journalism.

Sajari, P. (2009). Lakia vahvempi Nokia [Nokia, stronger than the law]. Helsingin Sanomat.

Shahbaz, A. (2018). Freedom on the net 2018: The rise of digital authoritarianism. Washington, DC: Freedom House. Retrieved February 28, 2019, from https://freedomhouse.org/sites/default/files/FOTN_2018_Final%20Booklet_11_1_2018.pdf

Silberman, S. (1999, September). Just say Nokia. Wired Magazine.

Sims, M., Youell, T., & Womersley, R. (2015). Understanding spectrum liberalisation. Boca Raton, FL: CRC Press.

Statistics Finland (2017). Väestön tieto- ja viestintätekniikan käyttö 2017 [Population Information and Communication Technologies 2017]. Helsinki: Official Statistics of Finland. Retrieved February 28, 2019, from https://www.stat.fi/til/sutivi/2017/13/sutivi_2017_13_2017-11-22_fi.pdf

Syvertsen, T., Enli, G., Mjøs, O., & Moe, H. (2014). The media welfare state: Nordic media in the digital era. Ann Arbor: University of Michigan Press.

Tene, O., & Polonetsky, J. (2013). A theory of creepy: Technology, privacy and shifting social norms. Yale Journal of Law & Technology, 16, 59–102. Available at https://yjolt.org/theory-creepy-technology-privacy-and-shifting-social-norms

Tiilikka, P. (2007). Sananvapaus ja yksilön suoja: lehtiartikkelin aiheuttaman kärsimyksen korvaaminen [Freedom of speech and protection of the individual: compensation for the suffering of a journal article]. Helsinki: WSOYpro.

Venturelli, S. (2002). Inventing e-regulation in the US, EU and East Asia: Conflicting social visions of the information society. Telematics and Informatics,19(2), 69–90. doi:10.1016/S0736-5853(01)00007-7

Wallace, N., & Castro, D. (2017). The state of data innovation in the EU. Brussels & Washington, DC: Center for Data Innovation. Retrieved February 28, 2019, from http://www2.datainnovation.org/2017-data-innovation-eu.pdf

Wavre, V. (2018). Policy diffusion and telecommunications regulation. Cham: Springer / Palgrave Macmillan.

Wilhelmsson, N. (2017). Finland: eDemocracy adding value and venues for democracy. In eDemocracy and eParticipation. The precious first steps and the way forward (pp. 25–33). Retrieved February 28, 2019, from http://www.fnf-southeasteurope.org/wp-content/uploads/2017/11/eDemocracy_Final_new.pdf

Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. New York: Public Affairs.

Footnotes

1. The quest for more openness and publicity is a continuation of a long historical development. European modernity is fundamentally based on the assumption that knowledge and culture belong to the common domain and that the process of democratisation necessarily means removing restrictions on the epistemic commons. Aspects such as civic education, universal literacy, and mass media (newspapers; public service broadcasting as a tool for the daily interpretation of the world) are at the heart of this ideal. The epistemic commons reflects the core ideas and ideals of deliberative democracy: at the centre of this view is democratic will formation that is public and transparent, includes everyone and provides equal opportunities for participation, and results in rational consensus (Habermas, 2006). The epistemic commons is thought to facilitate such will formation.

How US-made rules shape internet governance in China


This paper is part of Transnational materialities, a special issue of Internet Policy Review guest-edited by José van Dijck and Bernhard Rieder.

Introduction

The rapid growth of Chinese internet companies over the past decade has generated friction between the United States and China (see e.g. Plantin & de Seta, 2019). One key issue is China’s practice of restricting or blocking access to popular US sites, platforms, and applications like Facebook, Twitter, Instagram and Snapchat. The Chinese government has strategically cultivated its own national technology champions by protecting domestic firms from foreign competitors, enacting policy incentives, and granting government contracts, which, despite the imposition of a strict censorship regime, has resulted in a symbiotic partnership between the Chinese government and its commercial internet firms (Jiang & Fu, 2018; see also Shen, 2016). As a result, Chinese platforms have grown rapidly with the emergence of dominant firms: Baidu, the search giant; Tencent, which operates the popular WeChat social media platform; and the Alibaba Group (hereafter Alibaba), whose Taobao and Tmall platforms are the dominant retail marketplaces in China. A “platform” here refers to a programmable digital architecture that facilitates interactions between users and that is fuelled by data and governed through user agreements (Van Dijck, Poell, & Waal, 2018, p. 9).

Trade tensions between China and the United States sharply increased in 2017 with the Trump administration’s imposition of tariffs on a range of Chinese-made goods, to which China responded with tariffs against American goods (Meltzer & Shenai, 2019). As part of the US-China trade dispute, in what some characterise as the beginning of a “technology cold war” (see Muñiz, 2019), the US government has also targeted Chinese technology companies, especially Huawei, the massive manufacturer of telecommunications equipment, including consumer electronics and hardware for wireless networks, over concerns that Huawei may facilitate spying by the Chinese government on the United States. In May 2019, President Trump signed an executive order designating Huawei a national security risk (see Muñiz, 2019), which means US firms must seek government permission before doing business with the company; as a result, companies like Google have ceased doing business with Huawei (see Sottek, 2019). Restrictions on technology companies by both the US and Chinese governments raise fears of a fracturing of global supply chains, particularly as companies are pushed to side with one country over the other (see Muñiz, 2019).

While the US-China trade dispute has important short- and long-term economic ramifications for both countries and the global economy (see Meltzer & Shenai, 2019), there are also larger technological and political issues at play. For the United States, fears about its declining hegemony and China’s ascendance are, at least in part, driving its trade dispute with China (see Meltzer & Shenai, 2019; Min-hyung, 2019). China, meanwhile, has a series of strategic projects to rapidly increase its technological capabilities to challenge US hegemony (see Min-hyung, 2019). One of these projects is China’s “Made in China 2025” plan, an ambitious ten-year industrial development project with the goal of making China a manufacturing superpower that will dominate global markets in advanced technologies like robotics, autonomous vehicles, and artificial intelligence (Min-hyung, 2019, p. 34; see Laskai, 2018). President Trump explicitly stated that US tariffs are intended to impede the Made in China 2025 programme (Hopewell, 2018). By imposing tariffs and targeting Chinese technology companies, the United States calculates that it will maintain its dominance within the global economy, a strategy that many analysts contend will backfire (see Hopewell, 2018).

Despite the ongoing debate over a decline in US hegemony, with the balance of power shifting toward China (e.g., Layne, 2018), many scholars contend that the structural power of the US market remains strong (see e.g., Gilli & Gilli, 2019; Schwartz, 2017; Tooze, 2019). Structural power, as theorised by the British international political economist Susan Strange, is “the power to shape and determine the structures of the global political economy” (Strange, 1994, pp. 24–25). A key aspect of American global economic power is its ability to exert “control over a disproportionate share of global production flows” and resulting revenue streams, meaning US financial firms are “central to global financial flows” (Schwartz, 2017, p. 277). The United States wields considerable power in determining which actors, whether states or companies, can access its market. Chinese companies like Alibaba have been working for years to expand into the United States (see e.g., Lim, 2019). In another indication of US structural power, Chinese platforms’ continued growth within China and expansion internationally relies, in part, on their ability to access US financial markets (Fuchs, 2016; Jia & Winseck, 2018).

An important indication of the structural power of the United States is its long history of shaping regulatory practices and standards internationally, including in regards to intellectual property where US industry actors play a critical role (Drahos, 2017, p. 252; Drahos & Braithwaite, 2002). Copyright law determines how creative and artistic works like music, films, and books, along with software, can be accessed, used, and shared, while trademark law sets out the entities that can lawfully manufacture, distribute, advertise, and sell trademarked products. The United States, along with other industrialised actors like the European Union and Japan, considers intellectual property an economic and political priority because ownership of intellectual property rights is central to economic dominance in the modern globalised economy (Drahos & Braithwaite, 2002; Sell, 2003).

The United States and China both endeavour to institute globally their preferred conceptions of internet governance, and of the governance of technology more broadly, conceptions that prioritise their economic, political, and security interests and favour their industry actors (see Min-hyung, 2019; Powers & Jablonski, 2015). 1 This article argues that the US government, in cooperation with US industry, has interests in and the capacity to shape internet governance practices in China. In line with DeNardis (2014, p. 30), this article understands internet governance to include governance functions of private entities like Google or Alibaba controlling flows of information, typically through their private corporate policies. This article explores a little-examined dimension of internet governance in China: the role of the US government in shaping Chinese internet firms’ regulation of intellectual property. Specifically, the article contends that the United States, with aligned interests between the US state and industry, exports its preferred standards and practices for the protection of intellectual property to China and institutes these within Chinese platforms. As a result, a dominant Chinese platform, Alibaba, has instituted US-drafted rules and standards to deal with the sale of counterfeit goods, a form of trademark infringement.

The article’s case study is Alibaba’s Taobao marketplace, the largest retail platform in China that western rights holders have long accused of facilitating the trade in counterfeit goods. Faced with pressure from the US government and key industry actors between 2008 and 2012, Alibaba significantly reformed Taobao’s enforcement practices in line with demands from the United States. Alibaba’s reform of Taobao raises an interesting puzzle. Why did a Chinese platform, particularly one as economically powerful as Alibaba, agree to enact specific regulatory reforms set forth by US companies? Further, what do Alibaba’s reforms tell us about the capacity of the United States to shape internet governance practices in China?

To explain the relationship between the United States and Taobao, the article employs the concept of compliance-plus regulation, which it develops by drawing from regulatory theory and the socio-legal literature. In this concept, state and industry actors come together, cooperatively and through coercive state pressure, to push platforms to exceed their legal responsibilities in the absence of legislation or legal orders. Compliance-plus regulation focuses attention on the interests of state and industry actors in undertaking the regulation and on the state-industry relationship. While the US government’s pressure on Alibaba was the impetus for Taobao’s reform, the article finds that there were common economic interests among the parties involved. The United States has economic and political interests in protecting American intellectual property (see Drahos & Braithwaite, 2002) and, more broadly, in shaping international standards for internet governance (see e.g. Powers & Jablonski, 2015). US rights holders want to expand their access to the large Chinese consumer market; meanwhile, Alibaba not only wants to sell popular western brands through its marketplaces, but also needs access to the US financial and consumer markets in order to expand outside of China. The United States thus exerts considerable structural power by controlling access to finance and to its market (in relation to China, see Fuchs, 2016; Jia & Winseck, 2018).

To make its argument, the article analyses publicly available documents from the US government, US industry, and Alibaba relating to the reform of Taobao’s enforcement practices between 2008 and 2018. The rest of the article proceeds as follows. The article introduces the concept of compliance-plus regulation and then gives a brief overview of Taobao. Next it describes US state and industry pressure on Taobao and then explains Taobao’s reforms in response to this pressure. The article then examines Taobao’s reforms as compliance-plus regulation resulting from coercive state pressure and the market leverage of the United States before providing a brief conclusion.

Compliance-plus regulation

In order to account for the state’s privileging of certain (in this case, corporate) interests over others, the state is understood as embedded in the economic and social orders: the state and society mutually constitute one another (Underhill, 2003). Where there are competing interests among private actors, the state determines which actors are more authoritative and privileges certain policies over others (Hall, 1993, p. 288). These interdependencies among politics, society, and the economy are a key characteristic of regulatory capitalism, a framework that explains capitalism as a regulatory institution (see Braithwaite, 2008; Levi-Faur, 2005, 2017, p. 289). Regulation shapes and constrains the capitalist system and, in turn, capitalism creates demand for regulation (Levi-Faur, 2017, p. 289), which accounts for the transnational expansion of corporate regulatory efforts to protect intellectual property rights (Tusikov, 2017a, 2017b). Demand for regulation that accompanies capitalistic growth can bring together a hybrid arrangement of state and non-state actors (see e.g. Picciotto, 2011), as is characteristic of compliance-plus regulation.

Compliance-plus regulation builds upon research I have done elsewhere that identifies coercive state pressure underlying seemingly “voluntary” industry-led regulation undertaken by large US-based platforms (see Tusikov, 2017a). Compliance-plus regulation accounts for the role of state pressure, whether direct or indirect, in creating or facilitating private regulation, a similarity it shares with state-promoted private ordering (Bridy, 2011, 2015). From the regulatory and socio-legal literatures, the concepts of enforced self-regulation (Ayres & Braithwaite, 1995; Braithwaite, 1982) and coerced self-regulation (Black, 1996; Bonnici, 2008) explain private actors’ adoption of specific regulatory approaches, often in response to governmental pressure. However, coerced and enforced self-regulation generally focus on private actors regulating their own activities, often with public-interest benefits, such as corporate anti-pollution controls. In contrast, in compliance-plus regulation, the state directs, often using pressure, one set of private actors (platforms) to regulate on behalf of another set of private actors (multinational rights holders). While there may be a public benefit to anti-counterfeiting programmes, such as reducing the sale of dangerous goods, the focus of this regulatory activity is the protection of US and European companies’ intellectual property rights.

The defining feature of compliance-plus regulation is coercive state pressure on private actors to exceed their legal requirements “voluntarily”, that is, in the absence of legislation or formal legal orders (see Tusikov, 2017a, pp. 192–193). Corporate efforts to voluntarily exceed industry- or state-set rules are not unusual, especially when such efforts may burnish a company’s reputation or provide a competitive advantage (see Haufler, 2001; Picciotto, 2011). Compliance-plus regulation, however, involves states pressuring private actors to adopt a particular regulatory approach that goes beyond their legal responsibilities in order to benefit other corporate actors.

Compliance-plus regulation is possible because of platforms’ contractual terms-of-use agreements with their users that incorporate national laws and industry- or company-specific rules. Platforms can have a considerable regulatory capacity because, through these agreements, they have a quasi-legislative power to set and enforce rules over their users and a quasi-executive power to enforce those rules through technical means (Belli & Venturini, 2016, p. 4; see also Langenderfer, 2009; Belli, Francisco, & Zingales, 2017). Importantly, platforms grant themselves the latitude to designate certain behaviour as “inappropriate” for their services even if that behaviour is lawful, meaning platforms can act as private arbiters of legality (see also Bridy, 2015; Tusikov, 2017a). By pressuring platforms to tap into their regulatory latitude, states can push platforms to exceed their enforcement responsibilities.

Taobao

Two of Alibaba’s marketplaces, Taobao and Tmall, are of particular interest to US and European brands concerned about counterfeit goods. These marketplaces are a major part of the Alibaba ecosystem, which also includes the Alibaba and 1688.com business-to-business marketplaces, a cloud storage business (Alibaba Cloud), and financial services (through its independent subsidiary, Ant Financial Services, which operates the highly popular payment provider, Alipay). Formed in 1999, Alibaba is an economic success story in China: it had 699 million monthly users as of December 2018 and generated US$ 39.9 billion in revenue in 2018, 2 largely from its China-focused marketplaces, particularly Taobao and Tmall (Alibaba Group, 2018b).

Taobao, created in 2003, is the largest retail marketplace in China in which consumers and businesses sell a wide variety of goods. Taobao is the target of American anti-counterfeiting campaigns. For example, a prominent Washington, DC-based industry association, the International Anti-Counterfeiting Coalition (IACC), which represents well-known US and European companies from the apparel, pharmaceutical, and entertainment industries, argued that Taobao functioned “as a virtual, and 24-hour, ‘trade exhibition’ for counterfeiters and pirates seeking sources for illicit goods” (International Anti-Counterfeiting Coalition, 2011, p. 20).

Tmall, created in 2008, is one of China’s top business-to-consumer marketplaces in which merchants sell both Chinese and foreign brands. Tmall provides US and European brands a valuable entry point into the large Chinese market. As of March 2018, Tmall had 150,000 brands on the platform, of which 18,000 were foreign brands from 74 countries, including luxury brands like Burberry and Dom Pérignon (Alibaba Group, 2018b).

While American companies often stress Taobao’s ungoverned nature, Taobao is subject to legislation similar to that governing platforms operating outside of China (see Ferrante, 2014; Friedmann, 2017). In 2010, China revised Article 36 of the Tort Law of the People’s Republic of China (Tort Law of the People’s Republic of China, 2010), which sets out the conditions under which platforms are liable for the infringement of intellectual property rights, and these conditions resemble those in the United States and Europe (see Ferrante, 2014). Taobao’s terms-of-service agreements, like those of eBay, echo national laws that prohibit the sale, distribution, or advertisement of counterfeit goods, and Taobao operates a notice-and-takedown programme that removes problematic sales listings once alerted by complainants (for Taobao’s takedown process, see Alibaba Group, n.d.).

State and industry pressure on Taobao

The capacity for the United States to shape internet governance practices in China stems from its considerable structural power, through which it can determine the conditions under which actors access its market. Structural power “confers the power to decide how things shall be done, the power to shape frameworks within which states relate to each other, relate to people, or relate to corporate enterprises” (Strange, 1994, pp. 24–25). A key concern for the United States relating to China is the protection of intellectual property (see e.g. Tian, 2008). The United States exerts structural power in the protection of intellectual property rights, where it is the global leader in pushing for ever-stronger laws, standards, and enforcement practices (see Sell, 2010). In relation to China, US industry actors, supported by the US government, want to protect their valuable intellectual property rights. The US government also wants to ensure that it does not lose control of key technologies to Chinese companies, particularly those that may have a military application like artificial intelligence or robotics, which helps explain why the US government strategically linked the protection of intellectual property to national security (see Halbert, 2016).

US rights holders were able to persuade the US government to pressure Alibaba into adopting their rules because there are aligned state-corporate interests regarding the protection of intellectual property rights (Drahos & Braithwaite, 2002; Sell, 2003). This alignment of interests continues in relation to the digital economy. As explained earlier, the United States and other industrialised nations accord significant political and economic importance to the protection of intellectual property rights (Drahos & Braithwaite, 2002; Sell, 2003). This is because economic benefits disproportionately flow to entities that own the intellectual property (e.g., California-based Apple) rather than those manufacturing the goods (e.g., China-based factories making iPhones) (see Kraemer, Linden, & Dedrick, 2011, p. 4).

Since the 1970s, when the United States first elevated intellectual property rights to an economic priority, US rights holders and their trade associations have been central to the US government’s campaign to push ever-tougher rules and standards for the global protection of intellectual property rights (Drahos & Braithwaite, 2002; Sell, 2003). Corporate actors played important roles in persuading and pressuring foreign governments and corporations to adopt intellectual property laws that disproportionately favoured US industries, as well as those in a handful of other industrialised nations (see Sell, 2003). The US government formalised the role of prominent US companies as trade advisors to the government (Sell, 2003), thereby legitimising industry’s push for tougher protection of intellectual property rights. A key industry player is the International Anti-Counterfeiting Coalition, which represents companies from the apparel, sporting goods, and pharmaceutical industries, as well as multinational companies based outside the United States, including Louis Vuitton Malletier and Chanel Inc.

The US government’s trade body, the Office of the United States Trade Representative (USTR) played a central role in pressuring Alibaba into reforming Taobao’s enforcement practices. The USTR operates the Special 301 Process, created in 1988 as part of the Omnibus Trade and Competitiveness Act (Omnibus Trade and Competitiveness Act, 1988). The Special 301 Process gives US companies the capacity to make complaints about countries that they contend provide insufficient protection of their intellectual property rights, and the US government can then impose trade sanctions against uncooperative countries. As part of the Special 301 Process, the USTR evaluates countries’ protection of intellectual property, classifies targeted countries within a tiered system of watchlist countries, and directs countries to make specific legal and regulatory changes. 3

The Special 301 Process relies upon and primarily serves US industry interests. Industry provides resources for the “global surveillance network” required for the Special 301 country surveys, including industry data, analysis, and recommendations and, in turn, the US government provides the bureaucratic infrastructure that negotiates with, threatens, and sanctions targeted countries (Drahos & Braithwaite, 2002, p. 107). The USTR’s coercive pressure draws upon the structural power of the US market, as the USTR can withdraw access to the US market to sanctioned countries.

Blacklisting Taobao

The USTR provides a specific forum for US rights holders to target the online infringement of their intellectual property rights. In 2006, in response to industry lobbying, the USTR created a specific report, the Out-of-Cycle Review of Notorious Markets, to target problematic physical marketplaces, such as the Silk Market in Beijing, and online markets like the infamous Pirate Bay. Like the Special 301 Process, the Review of Notorious Markets depends upon industry data to determine which entities, websites, or platforms are failing to protect intellectual property rights in a manner that US rights holders consider adequate. The USTR pressures the entities it determines to be “notorious markets” to make specific changes to their enforcement practices, in line with US rights holders’ demands, and, in turn, the US government may threaten targeted countries with trade sanctions to deal with their notorious markets.

While the USTR’s Review of Notorious Markets exerts coercive pressure on its targets, the USTR acknowledges its report “does not reflect findings of legal violations” (Office of the United States Trade Representative, 2015b, p. 1). Rather, the notorious market list—and the Special 301 Process more broadly—is an aspirational project of regulatory standards and practices that rights holders argue are necessary to protect their intellectual property rights. For example, the IACC has submitted reports to the USTR that criticise Taobao’s enforcement practices and proposes specific regulatory amendments (see e.g. International Anti-Counterfeiting Coalition, 2011).

The USTR is not unique in its use of a watchlist to monitor platforms and marketplaces, as the European Commission created its Counterfeit and Piracy Watch List in 2018 (see European Commission, 2018). While the European Commission’s Watch List does not have the global scope of the USTR’s Review of Notorious Markets or its coercive force, the European list is designed to raise consumer awareness and facilitate cooperation among EU trading partners and working groups regarding the online infringement of intellectual property rights (see European Commission, 2018). Like the USTR, the European Commission’s Watch List identifies problematic online and physical marketplaces that are involved in the distribution of counterfeit goods. 4 The European Commission also operates an informal anti-counterfeiting enforcement agreement in regards to European online marketplaces that, like Taobao’s enforcement agreement discussed in this article, was created through coercive governmental pressure (see Tusikov, 2017a). Launched in 2011 and updated in 2016, the European agreement sets non-legally binding principles for rights holders and marketplaces to address the sale of counterfeit goods (European Commission, 2011; see European Commission, 2016). Signatories include Adidas, Hermès, Lacoste, and Chanel and, in terms of marketplaces, eBay, Amazon, and Alibaba.

The USTR, based on rights holders’ complaints regarding the sale of counterfeit goods, listed Taobao as a notorious market from 2008 to December 2012. In 2012, the USTR released Taobao from the notorious markets list but, following continued complaints about counterfeit goods from American and European rights holders, relisted the platform in 2017, where it currently remains (Office of the United States Trade Representative, 2018). 5

Taobao’s response to US pressure

Alibaba’s campaign to free Taobao from the USTR’s notorious markets list provides an opportunity to study the internal regulatory efforts of platforms, which are not typically publicly disclosed. The USTR released Taobao from the blacklist in December 2012, following Alibaba’s reforms to the marketplace’s enforcement practices as detailed by Alibaba (see Spelich, 2012). Following Taobao’s release, the USTR kept up the pressure on Alibaba, relaying in its annual reports US rights holders’ specific recommendations for continued reform of Taobao’s enforcement practices. Since 2012, Alibaba has provided annual comments to the USTR’s notorious market review.

The USTR demanded that Alibaba streamline and simplify Taobao’s takedown process, make takedowns more rapid (Office of the United States Trade Representative, 2012, 2015, 2016, 2018), and strengthen Taobao’s cooperation with US rights holders (Office of the United States Trade Representative, 2012, 2014, 2015, 2018). Based on an analysis of Alibaba’s reports to the USTR between 2012 and 2017 regarding Taobao, the article finds that Alibaba made three significant changes: it streamlined Taobao’s takedown processes, it made takedowns more rapid, and it established informal enforcement partnerships with US rights holders. Alibaba’s changes closely resemble the USTR’s demands.

First, between 2012 and 2016, according to Alibaba, the platform “revamped and streamlined” its notice-and-takedown programme in relation to counterfeit goods “to provide rights holders a more user friendly platform” (Pelletier, 2017b, p. 6). Prior to 2012, Alibaba allowed complaints only in Chinese, and then in 2012 introduced an English-language complaint system (Spelich, 2012, pp. 3–7). Before 2016, Alibaba had two separate systems for making complaints: AliProtect (for AliExpress, Alibaba.com, and 1688.com) and TaoProtect (for Taobao and Tmall). In 2016, the platform merged these systems into one enforcement programme (Alibaba Group, 2017, p. 14).

Second, in addition to simplifying the complaint process, Alibaba responded to the USTR’s demand to make its process more rapid. Before 2012, for example, Alibaba reported that it took between seven and ten days for Taobao to remove problematic sales listings (Spelich, 2012, p. 7). By 2015, the platform reported it took approximately two days to review takedown requests (Pelletier, 2017b, p. 6). In June 2017, Alibaba began using “enhanced algorithms and data modeling” that “allow for greater automation in the analysis and processing of submissions”, reducing the processing of takedown requests to 24 hours during business days (Alibaba Group, 2018a, p. 4).

Third, Taobao began working directly with US rights holders on enforcement. A key example is the MarketSafe programme, which was designed to “build a bridge between rights-holders and Alibaba” (The IACC MarketSafe Expansion Program, n.d.). In the summer of 2012, Taobao representatives approached the International Anti-Counterfeiting Coalition with an “interest in partnering” with the trade association to address the sale of counterfeit goods (International Anti-Counterfeiting Coalition, 2012). The IACC-Taobao memorandum of understanding, announced in 2013, was launched in May 2014 as the MarketSafe Program. As of April 2017, the programme had 100 participating companies (International Anti-Counterfeiting Coalition, n.d.). For Taobao, the programme introduces US and European rights holders to its enforcement practices with the goal of shifting these companies toward working directly with Taobao.

MarketSafe provides a “streamlined mechanism for expedited take-down actions” for listings of counterfeit goods on Taobao and Tmall (The IACC MarketSafe Expansion Program, n.d.), with IACC staff screening and coordinating the submission of takedown notices to the marketplaces (Pelletier, 2017a, p. 19). Most important for rights holders, the programme “shifts the burden of proof away from the brands and over to the sellers” (International Anti-Counterfeiting Coalition, 2016a). MarketSafe members only need to send a complaint to Taobao for the removal of sales listings without providing proof of ownership of the trademark/s or copyright/s in question, or evidence of infringement (Pelletier, 2017a, pp. 18–20). The IACC lauds this measure as “more effective and efficient” as rights holders are “not required to provide evidence in support of their complaints” (International Anti-Counterfeiting Coalition, 2016b). According to the IACC, complaints through MarketSafe have a “100% take-down rate” (The IACC MarketSafe Expansion Program, n.d.). MarketSafe, fully funded by Alibaba, provides rights holders with a simplified, rapid, and efficient process to address complaints of counterfeit goods sales on Taobao and Tmall.

Taobao adopts compliance-plus regulation

The article’s analysis of the USTR’s reports and Alibaba’s USTR submissions shows that Alibaba reformed Taobao’s enforcement practices in line with US demands both while the platform was blacklisted (see Spelich, 2012) and after its release (see Pelletier, 2017b). Overall, Taobao surpasses its legal requirements in terms of the speed and streamlined nature of its takedowns of problematic listings and in its reduced evidentiary requirements for MarketSafe members, who do not need to provide proof of infringement or ownership of intellectual property before making complaints. Coercive state pressure, paired with the latitude that platforms grant themselves to rapidly amend their terms-of-service agreements and enforcement practices, enables compliance-plus regulation. Backed by the US government, US rights holders pushed a Chinese marketplace to adopt what they deemed appropriately tough (that is, “US-style”) enforcement measures.

Alibaba executives underline the significance of Taobao’s enforcement changes in a report to the USTR, saying the platform has “established programs, technologies, and an approach to IP protection that goes far beyond our peers” (Pelletier, 2017b, p. 5). As a result, Taobao’s enforcement practices, once the object of rights holders’ condemnation, are remarkably similar to those of eBay in terms of the speed and streamlined nature of the takedown programmes and the reduced submission requirements for favoured rights holders (Tusikov, 2017a).

Corporate actors that voluntarily exceed their legal responsibilities are not unusual, particularly within industry self-regulatory programmes (see e.g. Haufler, 2001). There may be reputational or market advantages to adopt a compliance-plus position, such as a business voluntarily reducing its environmental impact (see van der Heijden, 2015). However, in Taobao’s case, US companies, backed by the US government, demanded that Alibaba adopt a particular regulatory arrangement for their benefit, not that of Alibaba. US demands on Alibaba continue, particularly as the USTR relisted Taobao as a notorious market in 2017. In response to this second blacklisting, Alibaba critiqued the USTR for pressuring Alibaba to continue to augment its regulatory capabilities and reminded the USTR that Alibaba is a private entity. “No private company in the world,” Alibaba wrote to the USTR, “can serve the role of a government, which is what the USTR is insisting Alibaba do in its report” (Alibaba Group, 2018c).

Compliance-plus regulation can impose serious risks on private actors, particularly when regulatory activities are undertaken to benefit other corporate actors. MarketSafe members can make takedown complaints without providing proof of infringement or ownership of the intellectual property in question, thereby shifting “the burden of proof” from rights holders to sellers (The IACC MarketSafe Expansion Program, n.d.). This change streamlined the complaint process for rights holders and made takedowns more efficient: a 100% takedown rate for MarketSafe members. However, Alibaba, not rights holders, bears a “significant litigation risk if we inadvertently take down a [sales] listing that proves to be legitimate” (Pelletier, 2017a, p. 20). Despite the improvements to Taobao’s enforcement practices, it is often difficult for platforms to determine the legality of products through sales listings as they typically do not have the legal or product-specific expertise to distinguish counterfeit from legitimate goods (see Tusikov, 2017a). Further, streamlined expedited takedown processes make it more difficult to detect bad-faith infringement complaints as platforms must act rapidly.

Coercive pressure

While this article has concentrated on pressure on Alibaba from the US government and its key industry actors, the Chinese government also plays a role, particularly the State Administration for Industry and Commerce (SAIC), which is China’s authority responsible for trademark administration. The SAIC has issued reports critical of Alibaba’s enforcement practices to pressure the platform to improve them (see Friedmann, 2017). More broadly, the Chinese government has strengthened its protection of intellectual property rights, including passing the People’s Republic of China E-commerce Law, which came into effect in 2019 and has provisions to address the online sale of counterfeit goods (People’s Republic of China Electronic Commerce Law, 2018). China has long been the target of US pressure to strengthen its protection of intellectual property and, as a result of this pressure and in response to its domestic needs, China has introduced significant reforms (see Tian, 2008). However, the Chinese government also has, unsurprisingly, strongly criticised the USTR’s blacklisting of Taobao and the USTR’s repeated listing of China as a “priority watchlist” country in the Special 301 reports. Following Taobao’s blacklisting in 2012, Shen Danyang, a spokesman for the Ministry of Commerce, said the USTR’s use of “ambiguous terms and no conclusive evidence or detailed analysis, [was] very irresponsible and not objective” (Alizila Staff, 2012). Similarly, in 2019, the head of China’s National Intellectual Property Administration, Shen Changyu, said such criticisms of China’s protection of intellectual property “lack evidence” and overlook the significant progress China has made in this area (Reuters, 2019).

The USTR’s economic pressure on Alibaba was a central factor in pushing the company to comply with rights holders’ demands. It is unlikely that US rights holders could independently induce a similar regulatory change in Alibaba through threats of litigation or promises of licensing deals for Tmall to sell their brands. State pressure is particularly important when private actors may be “reluctant governors” (Avant, Finnemore, & Sell, 2010, p. 19) who may have conflicting interests in becoming regulators or when regulatory activities impose a significant financial burden on the actor.

Compliance-plus regulation underscores the importance of credible state pressure in compelling private actors to adopt specific regulatory goals or practices, or to exceed their legal responsibilities. States, however, must carefully determine what situations and actors necessitate coercion. Coercion and its counterpart, reward, can be costly, and states must be credible in their threats or promises of favour (Braithwaite & Drahos, 2000). While a state may, for example, threaten trade sanctions, such actions can result in economic and political costs for those making threats (Drahos, 2017, p. 258). The United States monitors multiple countries and companies through its trade watchlists and, while it infrequently imposes trade sanctions, the threat of sanctions can be sufficient pressure to motivate action (Drahos, 2017, p. 258). The USTR’s ability to pressure notorious markets, however, may depend on whether the target has interests in operating legitimately or accessing the US market. The USTR, for example, has repeatedly blacklisted sites like The Pirate Bay, which provides unauthorised access to copyrighted movies, music, and software but has no ambitions to become a legitimate enterprise and is indifferent to its designation as a notorious market. Alibaba, in contrast, operates legitimately and is expanding internationally, particularly in regards to its marketplaces and financial services. In China, for example, Alibaba’s Alipay is battling Tencent’s WeChat Pay for control of China’s mobile payments industry, where they collectively dominate the market, and both companies are also expanding aggressively internationally (Y. Wang & Armstrong, 2018b).

A key feature of compliance-plus regulation is that proponents repeatedly and coercively pressure private actors to set new or strengthen existing regulatory standards that exceed their legal responsibilities. Compliance-plus regulation is a process rather than an end goal. With the establishment of every new regulatory baseline, subsequent efforts focus on “ratcheting up” new tougher standards of enforcement, thereby resulting in ever-increasing standards (Sell, 2010; see also Bridy, 2015). As the Taobao case shows, compliance-plus regulation not only occurs through laws and international agreements, but also through coercive pressure on platforms to strengthen their regulatory practices in the absence of legislation or legal orders. As the USTR relisted Taobao as a notorious market in 2017, the USTR and US and European rights holders will continue to pressure Alibaba, although the nature and intensity of this pressure may change as the US-China trade dispute continues.

Power of market leverage

As the Taobao case demonstrates, US rights holders have a powerful weapon in their partnership with the USTR: the leverage of the US market. Granting or denying companies the ability to operate in the United States is a powerful tool. Access to US financial markets was an important incentive for Alibaba to work with the USTR. While the USTR was blacklisting Taobao between 2008 and 2012, Alibaba was planning to conduct an initial public offering. Alibaba first considered holding the offering in Hong Kong but, because of the Hong Kong market regulator’s concerns with its governance structure, shifted to the United States (see Lin & Mehaffy, 2016). In order to maximise the funds raised and the price of shares, Alibaba had to demonstrate to the US financial industry and US regulators that the company had solid financial and regulatory foundations by reforming Taobao’s enforcement practices. Freeing Taobao from the USTR’s blacklist removed a major impediment for Alibaba (see Javers, 2014). In September 2014, less than two years after Taobao’s removal from the blacklist, Alibaba held a record-breaking US$ 25 billion initial public offering.

Related to the incentive of accessing the US market, US and Chinese industry actors also shared common economic interests, despite the continued enmity of some western rights holders toward Alibaba. US and European rights holders want greater access to the Chinese market and to ensure that Chinese consumers are purchasing lawfully trademarked goods, and Alibaba’s marketplaces, with their nearly 700 million users, are an ideal portal. In turn, Alibaba wants to increase Tmall’s offerings of popular foreign brands, especially luxury goods. Alibaba’s plans for Tmall and its international expansion rely, in part, on the platform’s ability to demonstrate to foreign rights holders that it can govern its platforms effectively to address the trade in counterfeit goods.

The USTR’s pressure on Taobao on behalf of multinational US and European rights holders continues the US government’s long practice of setting rules and standards to benefit its economic interests and those of its industry actors. Since World War 2, the United States has been the “single most important actor in the spread of regulatory models”, including in relation to intellectual property, and US multinational companies play an important role in this effort (Drahos, 2017, p. 252). Part of the campaign by the United States to export its preferred standards for the protection of intellectual property rights globally is to embed those standards within countries and, as this article argues, within non-US companies. Taobao’s blacklisting is not an isolated case. The USTR has in the past blacklisted other prominent Chinese platforms, including the search engine Baidu and the e-commerce company JD.com, as well as platforms in other countries like Russia’s VKontakte social media site (Office of the United States Trade Representative, 2011).

As the Taobao case shows, the United States wields considerable structural power because it can determine which actors can access its market (Strange, 1994). Despite the growth of Chinese platforms and the economic power they wield in China, the structural power of the US market continues to be an important force shaping the internet, even within China. Baidu, Alibaba, and Tencent, along with other Chinese platforms, seek international investors and list on US stock exchanges in order to attract US finance capital from investment banks and institutional investors (Fuchs, 2016, p. 34; Jia & Winseck, 2018, p. 32). Chinese internet companies continue to hold initial public offerings in the United States, such as the online streaming platform iQiyi, which raised US$ 2.3 billion in 2018 (Hu, 2018). As a result, large Chinese platforms are “tightly integrated with a variety of sources of international finance capital” and increasingly rely upon foreign financial capital for their growth within China and international expansion (Jia & Winseck, 2018, p. 31).

The desirability of the US financial market and the tight integration of Chinese platforms with international finance capital, especially that from the United States, means that the United States retains structural power in granting or denying access to its financial market. In January 2018, for example, the US government blocked a bid by Alibaba’s Ant Financial to expand its payment service in the United States by acquiring MoneyGram International for US$ 1.2 billion. The Committee on Foreign Investment in the United States (CFIUS), a multi-agency panel that brings together the Department of Defense, Department of Justice, and intelligence agencies to review foreign investment in the United States, denied the bid on national security grounds relating to a Chinese company possessing US consumer data (Y. Wang & Armstrong, 2018a). CFIUS has also blocked other acquisitions of US technology by Chinese companies on national security grounds (see Blumental, Croley, & Xu, 2018), and demanded that the Chinese gaming company Kunlun Tech sell the US-created gay dating app Grindr after barring Kunlun from accessing Grindr’s data, which includes sensitive personal information like users’ locations and HIV status, or sending that data to China (E. Wang, 2019). Chinese investors and start-ups are increasingly looking to Europe given the regulatory restrictions in the United States (see Y. Wang & Armstrong, 2018a), but the US market remains attractive.

The United States remains an important force in setting standards and spreading norms that shape internet governance, particularly in relation to the regulation of intellectual property rights. From the early development of the internet, the United States has worked to embed within the internet standards that privilege its economic, political, and national security interests, such as the commodification and free flow of data (Powers & Jablonski, 2015; see also Carr, 2016). This article demonstrates that the US government, working on behalf of multinational US and European industry actors, endeavours to set standards for the protection of intellectual property within platforms, including platforms in China, in addition to setting standards through trade agreements (see Sell, 2010). Despite US fears about its declining hegemony and a shift in influence to China (see Min-hyung, 2019), the appeal of the US market remains strong (Schwartz, 2017).

The Taobao case is not simply about addressing the sale of counterfeit goods. In targeting Alibaba, US state and corporate actors were not only seeking to strengthen the enforcement practices of a single company, but also to influence regulatory practices within Chinese platforms generally and, more broadly, to shape Chinese internet governance. By successfully pressuring Alibaba to adopt a compliance-plus approach, US rights holders have had their preferred standards instituted by a Chinese platform: streamlined, rapid notice-and-takedown programmes and reduced evidentiary requirements for complainants. Given Alibaba’s dominance within China in relation to its marketplaces, as well as its operation of multiple businesses, including payment services and cloud storage, Alibaba’s US-influenced regulatory practices may become industry standards. Further, other Chinese platforms that want to access US financial markets may find the USTR’s treatment of Taobao an instructive warning and amend their enforcement practices in line with US standards on the protection of intellectual property rights.

Conclusion

This article introduced the concept of compliance-plus regulation to explain a little-examined dimension of internet governance in China: the role of the US government in shaping Chinese internet firms’ regulation of intellectual property. Compliance-plus regulation, which builds upon research I have done elsewhere (see Tusikov, 2017a) and draws from the regulatory theory literature, identifies coercive state pressure underlying seemingly “voluntary” industry-led regulation by platforms for the benefit of other corporate actors. The defining feature of compliance-plus regulation is coercive state pressure on private actors to exceed their legal requirements in the absence of legislation or formal legal orders, with the goal of setting ever-increasing standards of enforcement with each new round of pressure.

Compliance-plus regulation helps to explain why one of the largest Chinese platforms, Alibaba, has instituted US-drafted rules and standards to govern the protection of intellectual property rights. Following the wholesale reform of its enforcement practices, Alibaba’s Taobao marketplace exceeds its legal responsibilities to regulate the sale of counterfeit goods. The USTR’s economic pressure on Alibaba was a central factor driving Taobao’s reform, but Alibaba and rights holders also have common economic interests. Alibaba wanted to access US financial markets and secure popular US and European brands for sale through its marketplaces. In turn, US and European rights holders, in addition to protecting their trademarks from counterfeiting, want to access China’s large consumer market, to which Alibaba’s marketplaces provide an ideal entry point. With Taobao relisted as a notorious market in 2017, the USTR and rights holders will continue to pressure Alibaba. How that pressure may occur, the reactions of Alibaba and the Chinese government, and the broader implications for Chinese internet governance, particularly in the context of continuing geo-political tensions between the United States and China, are important topics for future research.

Examining American pressure on Taobao usefully underscores the structural power of the US market (see Strange, 1994). Alibaba needed to demonstrate to US financial and regulatory authorities that it could effectively govern its businesses in order to have Taobao freed from the USTR’s notorious markets list before it could realise its intention of holding a successful initial public offering in the United States in 2014. Alibaba’s efforts to access US finance capital highlight the dependence of Chinese platforms on finance capital from abroad, particularly the United States, in order to expand within China and grow internationally (see Fuchs, 2016; Jia & Winseck, 2018). American structural power is also evident in standard setting, such as in relation to protecting intellectual property rights (see Drahos, 2017), and in denying Chinese companies access to the US market, as was the case with Alibaba’s failed bid to acquire MoneyGram International in 2018. This article argues that US standard setting extends to influencing regulatory practices within Chinese platforms and, more broadly, to shaping Chinese internet governance by embedding US-preferred standards for the protection of intellectual property rights.

While US state and corporate actors have successfully pressured Alibaba to make significant reforms to Taobao’s regulatory practices, the dynamic of US structural power over Chinese platforms is not set in stone. US power currently remains strong in relation to American control over a disproportionate share of global production and related revenue streams, the draw of its large consumer market, and the key role played by its financial market in the global economy (Schwartz, 2017). However, with concerns in the United States about its declining hegemony (Min-hyung, 2019), and with some scholars pointing to the rise of China’s economic (and military) power (see Layne, 2018), the power dynamic between the two countries may shift, with as yet unknown effects on internet governance globally. Aside from the economic consequences of the US-China trade dispute, there are geopolitical implications, as both countries are seeking technological dominance in areas including robotics, autonomous vehicles, and artificial intelligence (see Min-hyung, 2019). With continuing trade tensions, greater pushback from the Chinese government against US interference in Chinese internet governance is likely, as both countries adopt protectionist measures that favour their domestic technology industries. As well, should the United States continue to restrict Chinese platforms’ expansion into the United States, particularly in the financial services industry, as it did with Ant Financial’s bid for MoneyGram International, Chinese platforms will continue their expansion within Europe and Asia (see Detrixhe, 2019; Le Corre, 2019). The evolving US-China dynamics amid continuing geo-political tensions, and the implications for internet governance globally, are critical topics for future research.

References

Alibaba Group. (2017). Alibaba Group Platform Governance Annual Report 2016. Retrieved from Platform Governance Department website: https://www.alizila.com/wp-content/uploads/2017/06/Alibaba-Group-Platform-Governance-Report.pdf

Alibaba Group. (2018a). Alibaba Group 2017 Intellectual Property Rights Protection Annual Report. Retrieved from http://azcms31.alizila.com/wp-content/uploads/2018/05/Alibaba-Group-PG-Annual-Report-2017-FINAL_sm_final.pdf

Alibaba Group. (2018b, May 4). Alibaba Group Announces March Quarter 2018 Results and Full Fiscal Year 2018 Results. Retrieved from https://www.alibabagroup.com/en/news/press_pdf/p180504.pdf

Alibaba Group. (2018c). Alibaba Group Point by Point Rebuttal of USTR Notorious Markets Listing of Taobao. Retrieved from http://azcms31.alizila.com/wp-content/uploads/2018/01/Alibaba-USTR-NMR-Pt-by-Pt-Rebuttal_FINAL.pdf

Alibaba Group. (n.d.). IP Protection Platform. Retrieved from https://ipp.alibabagroup.com/

Alizila Staff. (2012, January 18). Commerce Ministry Has Taobao’s Back in USTR Spat. Alizila. Retrieved from https://www.alizila.com/commerce-ministry-has-taobaos-back-in-ustr-spat/

Avant, D. D., Finnemore, M., & Sell, S. K. (Eds.). (2010). Who Governs the Globe? Cambridge: Cambridge University Press. doi:10.1017/CBO9780511845369

Ayres, I., & Braithwaite, J. (1995). Responsive regulation: transcending the deregulation debate. New York: Oxford University Press.

Belli, L., Francisco, P. A. P., & Zingales, N. (2017). Law of the Land or Law of the Platform? Beware of the Privatisation of Regulation and Police. In L. Belli & N. Zingales (Eds.), Platform Regulations: How Platforms are Regulated and How They Regulate Us (pp. 41–64). Retrieved from http://bibliotecadigital.fgv.br/dspace/handle/10438/19402

Belli, L., & Venturini, J. (2016). Private ordering and the rise of terms of service as cyber-regulation. Internet Policy Review, 5(4). doi:10.14763/2016.4.441

Black, J. (1996). Constitutionalising Self-Regulation. The Modern Law Review, 59(1), 24–55. doi:10.1111/j.1468-2230.1996.tb02064.x

Blumental, D., Croley, S., & Xu, H. (2018). CFIUS and Chinese Investments in the United States—A Closed Door? [Article Reprint No. 2353]. Retrieved from Latham & Watkins website: https://www.lw.com/thoughtLeadership/CFIUS-chinese-investments-united-states-reprint

Bonnici, J. P. M. (2008). Self-regulation in cyberspace. The Hague: T.M.C. Asser Press.

Braithwaite, J. (1982). Enforced Self-Regulation: A New Strategy for Corporate Crime Control. Michigan Law Review, 80(7), 1466. doi:10.2307/1288556

Braithwaite, J. (2008). Regulatory capitalism: how it works, ideas for making it work better. Cheltenham; Northampton: Edward Elgar.

Braithwaite, J., & Drahos, P. (2000). Global business regulation. Cambridge; New York: Cambridge University Press.

Bridy, A. (2011). ACTA and the Specter of Graduated Response. American University International Law Review, 26(3), 559–578. Retrieved from https://digitalcommons.wcl.american.edu/auilr/vol26/iss3/2/

Bridy, A. (2015). Internet Payment Blockades. Florida Law Review, 67(5), 1523–1568. Retrieved from http://scholarship.law.ufl.edu/flr/vol67/iss5/1

Carr, M. (2016). US Power and the Internet in International Relations: The Irony of the Information Age. London: Palgrave Macmillan. doi:10.1057/9781137550248

DeNardis, L. (2014). The global war for Internet governance. New Haven: Yale University Press.

Detrixhe, J. (2019, March 15). China’s Ant Financial, thwarted in the US, is expanding rapidly in Europe. Quartz. Retrieved from https://qz.com/1570052/ant-financials-alipay-is-expanding-rapidly-outside-of-china/

Drahos, P. (2017). Regulatory globalisation. In P. Drahos (Ed.), Regulatory Theory: Foundations and Applications (pp. 249–264). Canberra: ANU Press.

Drahos, P., & Braithwaite, J. (2002). Information feudalism: who owns the knowledge economy? New York; London: The New Press.

European Commission. (2011). Memorandum of Understanding. Retrieved from http://ec.europa.eu/growth/industry/intellectual-property/enforcement/index_en.htm#Sale

European Commission. (2016). Memorandum of Understanding on the online sale of counterfeit goods [No. Ref. Ares(2016)3934515-26/07/2016]. Retrieved from http://ec.europa.eu/DocsRoom/documents/18023/attachments/1/translations/

European Commission. (2018). Counterfeit and Piracy Watch List [Commission Staff Working Document]. Retrieved from http://trade.ec.europa.eu/doclib/docs/2018/december/tradoc_157564.pdf

Ferrante, M. (2014). Online Counterfeit in China: Court Practice and Remedies against Infringing Websites. Section of Intellectual Property Law. Presented at the 29th Annual Intellectual Property Law Conference, Arlington, Virginia.

Friedmann, D. (2017). Oscillating from Safe Harbor to Liability: China’s IP Regulation and Omniscient Intermediaries. In G. Frosio (Ed.), World Intermediary Liability Map, Mapping Intermediary Liability Trends Online. Oxford: Oxford University Press.

Fuchs, C. (2016). Baidu, Weibo and Renren: the global political economy of social media in China. Asian Journal of Communication, 26(1), 14–41. doi:10.1080/01292986.2015.1041537

Gilli, A., & Gilli, M. (2019). Why China Has Not Caught Up Yet: Military-Technological Superiority and the Limits of Imitation, Reverse Engineering, and Cyber Espionage. International Security, 43(3), 141–189. doi:10.1162/isec_a_00337

Glasius, M., & Michaelsen, M. (2018). Illiberal and Authoritarian Practices in the Digital Sphere. International Journal of Communication, 12, 3795–3813. Retrieved from https://ijoc.org/index.php/ijoc/article/view/8899

Halbert, D. (2016). Intellectual Property Theft and National Security: Agendas and Assumptions. The Information Society, 32(4), 256–268. doi:10.1080/01972243.2016.1177762

Hall, P. A. (1993). Policy Paradigms, Social Learning, and the State: The Case of Economic Policymaking in Britain. Comparative Politics, 25(3), 275. doi:10.2307/422246

Haufler, V. (2001). A public role for the private sector: industry self-regulation in a global economy. Washington, D.C.: Carnegie Endowment for International Peace.

Hopewell, K. (2018, May 3). What is ‘Made in China 2025’ and why is it a threat to Trump’s trade goals? Washington Post. Retrieved from www.washingtonpost.com/news/monkey-cage/wp/2018/05/03/what-is-made-in-china-2025-and-why-is-it-a-threat-to-trumps-trade-goals/?utm_term=.7a1ab702e427

Hu, K. (2018, December 28). Chinese companies flooded into the U.S. IPO market in 2018. Yahoo! Finance. Retrieved from https://finance.yahoo.com/news/chinese-companies-flooded-u-ipo-183408925.html

International Anti-Counterfeiting Coalition. (2011). Submission of the International Anti-Counterfeiting Coalition to the United States Trade Representative Special 301 Recommendations [Report No. Docket number USTR-2010-0037]. Retrieved from https://www.iacc.org/_downloads/key-issues/2011_IACC_Special_301_Report_Submission.pdf

International Anti-Counterfeiting Coalition. (2012). Submission of the International Anti-Counterfeiting Coalition to the United States Trade Representative Special 301 Recommendations [No. USTR-2011-0021]. Retrieved from https://www.iacc.org/_downloads/key-issues/2012_IACC_Special_301_Report_Submission.pdf

International Anti-Counterfeiting Coalition. (2016a). Alibaba Group and International Anti-Counterfeiting Coalition (IACC) Announce IACC MarketSafe® Expansion Program [Press Release]. Retrieved from https://www.iacc.org/press-release-iacc-marketsafe-expansion

International Anti-Counterfeiting Coalition. (2016b). Submission of the International Anti-Counterfeiting Coalition to the United States Trade Representative Special 301 Recommendations [No. USTR-2015-0022]. Retrieved from https://www.iacc.org/_downloads/key-issues/IACC%202016%20SPECIAL%20301%20COMMENTS_FINAL.pdf

International Anti-Counterfeiting Coalition. (n.d.). Year in Review 2017 (pp. 1–8). Retrieved from https://www.iacc.org/IACC%202017%20Year%20in%20Review/IACC%202017%20Year%20in%20Review.pdf

Javers, E. (2014, September 16). Alibaba readies for post-IPO Washington office. CNBC. Retrieved from https://www.cnbc.com/2014/09/16/alibaba-readies-for-post-ipo-washington-office.html

Jia, L., & Winseck, D. (2018). The political economy of Chinese internet companies: Financialization, concentration, and capitalization. International Communication Gazette, 80(1), 30–59. doi:10.1177/1748048517742783

Jiang, M., & Fu, K.-W. (2018). Chinese Social Media and Big Data: Big Data, Big Brother, Big Profit? Policy & Internet, 10(4), 372–392. doi:10.1002/poi3.187

Kraemer, K. L., Linden, G., & Dedrick, J. (2011). Who Captures Value in the Apple iPad and iPhone? Retrieved from http://pcic.merage.uci.edu/papers.htm

Langenderfer, J. (2009). End-User License Agreements: A New Era of Intellectual Property Control. Journal of Public Policy & Marketing, 28(2), 202–211. doi:10.1509/jppm.28.2.202

Laskai, L. (2018). Why does everyone hate Made in China 2025? Council on Foreign Relations. Retrieved from www.cfr.org/blog/why-does-everyone-hate-made-china-2025

Layne, C. (2018). The US-Chinese Power Shift and the End of Pax Americana. International Affairs, 94(1), 89–111. doi:10.1093/ia/iix249

Le Corre, P. (2019). On China’s Expanding Influence in Europe and Eurasia [Testimony to the US House of Representatives Foreign Affairs Committee, Subcommittee on Europe, Eurasia, Energy, and the Environment.]. Retrieved from https://carnegieendowment.org/2019/05/09/on-china-s-expanding-influence-in-europe-and-eurasia-pub-79094

Letter to Probir Mehta, Assistant United States Trade Representative for Innovation and Intellectual Property [Office of the United States Trade Representative, Docket Number: USTR-2016-2013, 2016 Special 301 Out-of-Cycle Review of Notorious Markets]. (2016, October 26). Retrieved from https://www.mema.org/sites/default/files/resource/Multi-Org%20Letter%20on%20Alibaba%20102616.pdf

Levi-Faur, D. (2005). The Global Diffusion of Regulatory Capitalism. The ANNALS of the American Academy of Political and Social Science, 598(1), 12–32. doi:10.1177/0002716204272371

Levi-Faur, D. (2017). Regulatory capitalism. In P. Drahos (Ed.), Regulatory Theory: Foundations and Applications (pp. 289–302). Retrieved from http://press-files.anu.edu.au/downloads/press/n2304/pdf/ch17.pdf

Lim, S. (2019, March 6). Alibaba eyes US e-commerce market with Office Depot partnership. The Drum. Retrieved from https://www.thedrum.com/news/2019/03/06/alibaba-eyes-us-e-commerce-market-with-office-depot-partnership

Lin, Y.-H., & Mehaffy, T. (2016). Open Sesame: The Myth of Alibaba’s Extreme Corporate Governance and Control. Brooklyn Journal of Corporate, Financial & Commercial Law, 10(2), 437–471. Retrieved from https://brooklynworks.brooklaw.edu/bjcfcl/vol10/iss2/5

Lv, A., & Luo, T. (2018). Authoritarian Practices in the Digital Age: Asymmetrical Power Between Internet Giants and Users in China. International Journal of Communication, 12, 3877–3895. Retrieved from https://ijoc.org/index.php/ijoc/article/view/8543

Meltzer, J. P., & Shenai, N. (2019). The US-China economic relationship: A comprehensive approach. Retrieved from Brookings Institution website: https://www.brookings.edu/research/the-us-china-economic-relationship-a-comprehensive-approach/

Min-hyung, K. (2019). A real driver of US–China trade conflict: The Sino–US competition for global hegemony and its implications for the future. International Trade, Politics and Development, 3(1), 30–40. doi:10.1108/ITPD-02-2019-003

Muñiz, M. (2019, April 30). The Coming Technological Cold War. Project Syndicate. Retrieved from https://www.project-syndicate.org/commentary/us-china-technology-cold-war-by-manuel-muniz-2019-04

Office of the United States Trade Representative. (2011). 2011 Out-of-Cycle Review of Notorious Markets. Retrieved from https://ustr.gov/about-us/policy-offices/press-office/reports-and-publications/2011/out-cycle-review-notorious-markets

Office of the United States Trade Representative. (2012). Out-of-Cycle Review of Notorious Markets. Retrieved from https://ustr.gov/sites/default/files/121312%20Notorious%20Markets%20List.pdf

Office of the United States Trade Representative. (2015a). 2014 Out-of-Cycle Review of Notorious Markets. Retrieved from https://ustr.gov/sites/default/files/2014%20Notorious%20Markets%20List%20-%20Published_0.pdf

Office of the United States Trade Representative. (2015b). 2015 Out-of-Cycle Review of Notorious Markets. Retrieved from https://ustr.gov/sites/default/files/USTR-2015-Out-of-Cycle-Review-Notorious-Markets-Final.pdf

Office of the United States Trade Representative. (2016). 2016 Out-of-Cycle Review of Notorious Markets. Retrieved from https://ustr.gov/sites/default/files/2016-Out-of-Cycle-Review-Notorious-Markets.pdf

Office of the United States Trade Representative. (2018). 2017 Out-of-Cycle Review of Notorious Markets. Retrieved from https://ustr.gov/sites/default/files/files/Press/Reports/2017%20Notorious%20Markets%20List%201.11.18.pdf

Omnibus Trade and Competitiveness Act. Pub. L. No. 100–418 (1988).

Pelletier, E. (2017a). Alibaba’s Comprehensive Philosophy and Approach to IP Protection (Part II of letter dated October 2, 2017, from Eric C. Pelletier, Vice President, Head of International Government Affairs, Alibaba Group) (pp. 1–71). Retrieved from Alibaba Group website: https://www.regulations.gov/document?D=USTR-2017-0015-0020

Pelletier, E. (2017b). Comments Submitted by Alibaba Group – PUBLIC VERSION. Retrieved from Alibaba Group website: https://www.regulations.gov/document?D=USTR-2017-0015-0020

People’s Republic of China Electronic Commerce Law. (2018).

Picciotto, S. (2011). Regulating Global Corporate Capitalism. Cambridge: Cambridge University Press.

Plantin, J.-C., & de Seta, G. (2019). WeChat as infrastructure: the techno-nationalist shaping of Chinese digital platforms. Chinese Journal of Communication, 1–17. doi:10.1080/17544750.2019.1572633

Powers, S. M., & Jablonski, M. (2015). The real cyber war: the political economy of internet freedom. Urbana: University of Illinois Press.

Reuters. (2019, April 28). China says criticisms on IP protection lack evidence amid trade spat. Reuters. Retrieved from https://www.reuters.com/article/us-china-trade-ip/china-says-criticisms-on-ip-protection-lack-evidence-amid-trade-spat-idUSKCN1S402J

Schwartz, H. M. (2017). Elites and American structural power in the global economy. International Politics, 54(3), 276–291. doi:10.1057/s41311-017-0038-8

Sell, S. K. (2003). Private Power, Public Law: The Globalization of Intellectual Property Rights. Cambridge: Cambridge University Press. doi:10.1017/CBO9780511491665

Sell, S. K. (2010). The Global IP Upward Ratchet, Anti-Counterfeiting and Piracy Enforcement Efforts: The State of Play [Research Paper No. 15]. Retrieved from American University Washington College of Law website: http://digitalcommons.wcl.american.edu/research/15/

Shen, H. (2016). China and global internet governance: toward an alternative analytical framework. Chinese Journal of Communication, 9(3). doi:10.1080/17544750.2016.1206028

Sottek, T. C. (2019, May 19). Google pulls Huawei’s Android license, forcing it to use open source version. The Verge. Retrieved from https://www.theverge.com/2019/5/19/18631558/google-huawei-android-suspension

Spelich, J. W. (2012). Comments Submitted by Taobao. Retrieved from Alibaba Group website: https://www.regulations.gov/document?D=USTR-2012-0011-0021

Strange, S. (1994). States and Markets (2nd ed.). New York: Continuum.

The IACC MarketSafe Expansion Program. (n.d.). Retrieved from https://www.iacc.org/MSE/MarketSafe_MSE%20Fact%20Sheet%206.18.2018.pdf

Tian, D. (2008). The USTR Special 301 Reports: an analysis of the US hegemonic pressure upon the organizational change in China’s IPR regime. Chinese Journal of Communication, 1(2), 224–241. doi:10.1080/17544750802288032

Tooze, A. (2019, April 4). Is this the end of the American century? London Review of Books, pp. 3–7.

Tort Law of the People’s Republic of China. Pub. L. No. PRC Presidential Order No. 21 (2010).

Tusikov, N. (2017a). Chokepoints: global private regulation on the Internet. Oakland, California: University of California Press.

Tusikov, N. (2017b). Transnational Non-State Regulatory Regimes. In P. Drahos (Ed.), Regulatory Theory: Foundations and Applications (pp. 339–353). Retrieved from https://press-files.anu.edu.au/downloads/press/n2304/pdf/book.pdf

Underhill, G. R. D. (2003). States, Markets and Governance for Emerging Market Economies: Private Interests, the Public Good and the Legitimacy of the Development Process. International Affairs, 79(4), 755–781. doi:10.1111/1468-2346.00335

van der Heijden, J. (2015). What Roles are There for Government in Voluntary Environmental Programmes? Environmental Policy and Governance, 25(5), 303–315. doi:10.1002/eet.1678

Van Dijck, J., Poell, T., & De Waal, M. (2018). The Platform Society. Public values in a connective world. New York: Oxford University Press.

Wang, E. (2019, May 13). China’s Kunlun Tech agrees to U.S. demand to sell Grindr gay dating app. Reuters. Retrieved from https://www.reuters.com/article/us-grindr-m-a-beijingkunlun/chinas-kunlun-tech-agrees-to-u-s-demand-to-sell-grindr-gay-dating-app-idUSKCN1SJ28N

Wang, Y., & Armstrong, P. (2018a, January 8). U.S. Block Of Moneygram Sale Paves The Way For China Trade, Investment Showdown. Forbes. Retrieved from https://www.forbes.com/sites/ywang/2018/01/08/u-s-block-of-moneygram-sale-paves-the-way-for-china-trade-investment-showdown/#798f20c764ba

Wang, Y., & Armstrong, P. (2018b, March 28). Is Alibaba Losing to Tencent In China’s Trillion-Dollar Payment War? Forbes. Retrieved from https://www.forbes.com/sites/ywang/2018/03/28/is-alipay-losing-to-wechat-in-chinas-trillion-dollar-payment-war/#e0fe4df88220

White House. (2019). Executive Order on Securing the Information and Communications Technology and Services Supply Chain [Executive Order]. Retrieved from https://www.whitehouse.gov/presidential-actions/executive-order-securing-information-communications-technology-services-supply-chain/

Footnotes

1. Despite operating in different political environments, platforms in China and the United States both have commercial practices that prioritise the accumulation and monetisation of users’ personal data through advertising, minimise user privacy, and enroll industry, whether through incentives or coercive state pressure (see Fuchs, 2016; Glasius & Michaelsen, 2018; Jiang & Fu, 2018; Lv & Luo, 2018).

2. All figures in US dollars.

3. These tiers, in order of seriousness are: priority foreign country, priority watch list, and watch list. See the USTR’s Special 301 Reports.

4. In 2018, for example, the European Commission singled out seven marketplaces like the Indonesian-based Bukalapak and, while it did not blacklist Alibaba, it warned the platform, along with Amazon and eBay, that “further progress is needed” to address the online sale of counterfeit goods (European Commission, 2018, p. 26).

5. While the IACC did not publicly call for Taobao to be relisted as a notorious market, two trade associations, the American Apparel and Footwear Association and the French anti-counterfeiting group Union des Fabricants (Unifab) called on the USTR to relist Taobao. These associations complained that despite Alibaba’s claimed enforcement improvements, “we have seen little evidence that there has been any noticeable change on the Alibaba platforms themselves” (Letter to Probir Mehta, Assistant United States Trade Representative for Innovation and Intellectual Property, 2016).

Technology, autonomy, and manipulation

This paper is part of Transnational materialities, a special issue of Internet Policy Review guest-edited by José van Dijck and Bernhard Rieder.

Public concern is growing around an issue previously discussed predominantly amongst privacy and surveillance scholars—namely, the ability of data collectors to use information about individuals to manipulate them (e.g., Abramowitz, 2017; Doubek, 2017; Vayena, 2018). Knowing (or inferring) a person’s preferences, interests, and habits, their friends and acquaintances, education and employment, bodily health and financial standing, puts the knower in a position to exercise considerable influence over the known (Richards, 2013).1 It enables them to better understand what motivates their targets, what their weaknesses and vulnerabilities are, when they are most susceptible to influence, and how most effectively to frame pitches and appeals.2 Because information technology makes generating, collecting, analysing, and leveraging such data about us cheap and easy, and at a scarcely comprehensible scale, the worry is that such technologies render us deeply vulnerable to the whims of those who build, control, and deploy these systems.

Initially, for academics studying this problem, that meant the whims of advertisers, as these technologies were largely developed by firms like Google and Facebook, who identified advertising as a means of monetising the troves of personal information they collect about internet users (Zuboff, 2015). Accordingly, for some time, scholarly worries centred (rightly) on commercial advertising practices, and policy solutions focused on modernising privacy and consumer protection regulations to account for the new capabilities of data-driven advertising technologies (e.g., Calo, 2014; Nadler & McGuigan, 2018; Turow, 2012).3 As Ryan Calo put it, “the digitization of commerce dramatically alters the capacity of firms to influence consumers at a personal level. A specific set of emerging technologies and techniques will empower corporations to discover and exploit the limits of each individual consumer’s ability to pursue his or her own self-interest” (2014, p. 999).

More recently, however, the scope of these worries has expanded. After concerns were raised in 2016 and 2017 about the use of information technology to influence elections around the world, many began to reckon with the fact that the threat of targeted advertising is not limited to the commercial sphere.4 By harnessing ad targeting platforms, like those offered by Facebook, YouTube, and other social media services, political campaigns can exert meaningful influence over the decision-making and behaviour of voters (Vaidhyanathan, 2018; Yeung, 2017; Zuiderveen Borgesius et al., 2018). Global outrage over the Cambridge Analytica scandal—in which the data analytics firm was accused of profiling voters in the United States, United Kingdom, France, Germany, and elsewhere, and targeting them with advertisements designed to exploit their “inner demons”—brought such worries to the forefront of public consciousness (“Cambridge Analytica and Facebook: The Scandal so Far”, 2018; see also, Abramowitz, 2017; Doubek, 2017; Vayena, 2018).

Indeed, there is evidence that the pendulum is swinging well to the other side. Rather than condemning the particular harms wrought in particular contexts by strategies of online influence, scholars are beginning to turn their attention to the big picture. In their recent book Re-Engineering Humanity, Brett Frischmann and Evan Selinger describe a vast array of related phenomena, which they collectively term “techno-social engineering”—i.e., “processes where technologies and social forces align and impact how we think, perceive, and act” (2018, p. 4). Operating at a grand scale reminiscent of mid-20th century technology critique (like that of Lewis Mumford or Jacques Ellul), Frischmann and Selinger point to cases of technologies transforming the way we carry out and understand our lives—from the “micro-level” to the “meso-level” and “macro-level”—capturing everything from fitness tracking to self-driving cars to viral media (2018, p. 270). Similarly, in her book The Age of Surveillance Capitalism (2019), Shoshana Zuboff raises the alarm about the use of information technology to effectuate what she calls “behavior modification”, arguing that it has become so pervasive, so central to the functioning of the modern information economy, that we have entered a new epoch in the history of political economy.

These efforts help to highlight the fact that there is something much deeper at stake here than unfair commerce. When information about us is used to influence our decision-making, it does more than diminish our interests—it threatens our autonomy.5 At the same time, there is value in limiting the scope of the analysis. The notions of “techno-social engineering” and “surveillance capitalism” are too big to wield surgically—the former is intended to reveal a basic truth about the nature of our human relationship with technology, and the latter identifies a broad set of economic imperatives currently structuring technology development and the technology industry.6 Complementing this work, our intervention aims smaller. For the last several years, public outcry has coalesced against a particular set of abuses effectuated through information technology—what many refer to as “online manipulation” (e.g., Abramowitz, 2017; Doubek, 2017; Vayena, 2018). In what follows, we theorise and vindicate this grievance.7

In the first section, we define manipulation, distinguishing it from neighbouring concepts like persuasion, coercion, deception, and nudging, and we explain why information technology is so well-suited to facilitating manipulation. In the second section, we describe the harms of online manipulation—the use of information technology to manipulate—focusing primarily on its threat to individual autonomy. Finally, we suggest directions for future policy efforts aimed at curbing online manipulation and strengthening autonomy in human-technology relations.

1. What is online manipulation?

The term “manipulation” is used, colloquially, to designate a wide variety of activities, so before jumping in it is worth narrowing the scope of our intervention further. In the broadest sense, manipulating something simply means steering or controlling it. We talk about doctors manipulating fine instruments during surgery and pilots manipulating cockpit controls during flight. “Manipulation” is also used to describe attempts at steering or controlling institutions and systems. For example, much has been written of late about allegations made (and evidence presented) that internet trolls under the authority of the Russian government attempted to manipulate the US media during the 2016 presidential election.8 Further, many suspect that the goal of those efforts was, in turn, to manipulate the election itself (by influencing voters). However, at the centre of this story, and at the centre of stories like it, is the worry that people are being manipulated, that individual decision-making is being steered or controlled, and that the capacity of individuals to make independent choices is therefore being compromised. It is manipulation in this sense—the attempt to influence individual decision-making and behaviour—that we focus on in what follows.

Philosophers and political theorists have long struggled to define manipulation. According to Robert Noggle, there are three main proposals (Noggle, 2018b). Some argue that manipulation is non-rational influence (Wood, 2014). On that account, manipulating someone means influencing them by circumventing their rational, deliberative decision-making faculties. A classic example of manipulation understood in this way is subliminal messaging, and depending on one’s conception of rationality we might also imagine certain kinds of emotional appeals, such as guilt trips, as fitting into this picture. The second approach defines manipulation as a form of pressure, as in cases of blackmail (Kligman & Culver, 1992, qtd. in Noggle, 2018b). Here the idea is that manipulation involves some amount of force—a cost is extracted for non-compliance—but not so much force as to rise to the level of coercion. Finally, a third proposal defines manipulation as trickery. Although a variety of subtly distinct accounts fall under this umbrella, the main idea is that manipulation, at bottom, means leading someone along, inducing them to behave as the manipulator wants, like Iago in Shakespeare’s Othello, by tempting them, insinuating, stoking jealousy, and so on.9

Each of these theories of manipulation has strengths and weaknesses, and our account shares certain features in common with all of them. It hews especially close to the trickery view, but operationalises the notion of trickery more concretely, thus offering more specific tools for diagnosing cases of manipulation. In our view, manipulation is hidden influence. Or more fully, manipulating someone means intentionally and covertly influencing their decision-making, by targeting and exploiting their decision-making vulnerabilities. Covertly influencing someone—imposing a hidden influence—means influencing them in a way they aren’t consciously aware of, and in a way they couldn’t easily become aware of were they to try and understand what was impacting their decision-making process.

Understanding manipulation as hidden influence helps to distinguish it from other forms of influence. In what follows, we distinguish it first from persuasion and coercion, and then from deception and nudging. Persuasion—in the sense of rational persuasion—means attempting to influence someone by offering reasons they can think about and evaluate.10 Coercion means influencing someone by constraining their options, such that their only rational course of action is the one the coercer intends (Wood, 2014). Persuasion and coercion carry very different, indeed nearly opposite, normative connotations: persuading someone to do something is almost always acceptable, while coercing them almost always isn’t. Yet persuasion and coercion are alike in that they are both forthright forms of influence. When someone is trying to persuade us or trying to coerce us we usually know it. Manipulation, by contrast, is hidden—we only learn that someone was trying to steer our decision-making after the fact, if we ever find out at all.

What makes manipulation distinctive, then, is the fact that when we learn we have been manipulated we feel played.11 Reflecting back on why we behaved the way we did, we realise that at the time of decision we didn’t understand our own motivations. We were like puppets, strung along by a puppet master. Manipulation thus disrupts our capacity for self-authorship—it presumes to decide for us how and why we ought to live. As we discuss in what follows, this gives rise to a specific set of harms. For now, what is important to see is the kind of influence at issue here. Unlike persuasion and coercion, which address their targets openly, manipulation is covert. When we are coerced we are usually rightly upset about it, but the object of our indignation is the set of constraints placed upon us. When we are manipulated, by contrast, we are not constrained. Rather, we are directed, outside our conscious awareness, to act for reasons we can’t recognise, and toward ends we may wish to avoid.

Given this picture, one can detect a hint of deception. On our view, deception is a special case of manipulation—one way to covertly influence someone is to plant false beliefs. If, for example, a manipulator wanted their partner to clean the house, they could lie and tell them that their mother was coming for a visit, thereby tricking them into doing what they wanted by prompting them to make a rational decision premised on false beliefs. But deception is not the only species of manipulation; there are other ways to exert hidden influence. First, manipulators need not focus on beliefs at all. Instead, they can covertly influence by subtly tempting, guilting, seducing, or otherwise playing upon desires and emotions. As long as the target of manipulation is not conscious of the manipulator’s strategy while they are deploying it, it is “hidden” in the relevant sense.

Some argue that even overt temptation, guilting, and so on are manipulative (these arguments are often made by proponents of the “non-rational influence” view of manipulation, described above), though they almost always concede that such strategies are more effective when concealed.12 We suspect that what is usually happening in such cases is a manipulator attempting to covertly tempt, guilt, etc., but failing to successfully hide their strategy. On our account, it is the attempted covertness that is central to manipulation, rather than the particular strategy, because once one learns that they are the target of another person’s influence that knowledge becomes a regular part of their decision-making process. We are all constantly subject to myriad influences; the reason we do not feel constantly manipulated is that we can usually reflect on, understand, and account for those influences in the process of reaching our own decisions about how to act (Raz, 1986, p. 204). The influences become part of how we explain to ourselves why we make the decisions we do. When the influence is hidden, however, that process is undermined. Thus, while we might naturally call a person who frequently engages in overt temptation or seduction manipulative—meaning, they frequently attempt to manipulate—strictly speaking we would only say that they have succeeded in manipulating when their target is unaware of their machinations.

Second, behavioural economists have catalogued a long list of “cognitive biases”—unreliable mental shortcuts we use in everyday decision-making—which can be leveraged by would-be manipulators to influence the trajectory of our decision-making by shaping our beliefs, without the need for outright deception.13 Manipulators can frame information in a way that disposes us to a certain interpretation of the facts; they can strategically “anchor” our frame of reference when evaluating the costs or benefits of some decision; they can indicate to us that others have decided a certain way, in order to cue our intrinsic disposition to social conformity (the so-called “bandwagon effect”); and so on. Indeed, though deception and playing on people’s desires and emotions have likely been the most common forms of manipulation in the past—which is to say, the most common strategies for covertly influencing people—as we explain in what follows, there is reason to believe that exploiting cognitive biases and vulnerabilities is the most alarming problem confronting us today.14
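
To make this concrete, consider a minimal, hypothetical sketch of how an anchoring effect might be exploited in an everyday pricing interface. The function name, the markup factor, and the display copy below are all invented for illustration and are not drawn from any actual platform:

```python
# Hypothetical sketch: exploiting the anchoring bias in a price display.
# No false statement is made about the actual price; the influence works by
# supplying an inflated reference point against which the offer is evaluated.

def render_price_offer(actual_price: float, anchor_markup: float = 1.6) -> str:
    """Return display copy that anchors the buyer's frame of reference."""
    anchor_price = round(actual_price * anchor_markup, 2)
    discount_pct = round((1 - actual_price / anchor_price) * 100)
    return f"Was ${anchor_price:.2f}, now ${actual_price:.2f} ({discount_pct}% off!)"

print(render_price_offer(49.99))
# Was $79.98, now $49.99 (37% off!)
```

The design choice doing the work here is the comparison the interface invites, not any misrepresentation of the price itself, which is precisely what distinguishes bias exploitation from outright deception.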

Talk of exploiting cognitive vulnerabilities inevitably gives rise to questions about nudging, so, finally, we briefly distinguish between nudging and manipulation. The idea of “nudging”, as is well known, comes from the work of Richard Thaler and Cass Sunstein, and points to any intentional alteration of another person’s decision-making context (their “choice architecture”) made in order to influence their decision-making outcome (Thaler & Sunstein, 2008, p. 6). For Thaler and Sunstein, the fact that we suffer from so many decision-making vulnerabilities, that our decision-making processes are inalterably and unavoidably susceptible to even the subtlest cues from the contexts in which they are situated, suggests that when we design other people’s choice-making environments—from the apps they use to find a restaurant to the menus they order from after they arrive—we can’t help but influence their decisions. As such, on their account, we might as well use that power for good, by steering people’s decisions in ways that benefit them individually and all of us collectively. For these reasons, Thaler and Sunstein recommend a variety of nudges, from setting defaults that encourage people to save for retirement to arranging options in a cafeteria in a way that encourages people to eat healthier foods.15

Given our definition of manipulation as intentionally hidden influence, and our suggestion that influences are frequently hidden precisely by leveraging decision-making vulnerabilities like the cognitive biases nudge advocates reference, the question naturally arises as to whether or not nudges are manipulative. Much has been written on this topic and no consensus has been reached (see, e.g., Bovens, 2009; Hausman & Welch, 2010; Noggle, 2018a; Nys & Engelen, 2017; Reach, 2016; Selinger & Whyte, 2011; Sunstein, 2016). In part, this likely has to do with the fact that a wide and disparate variety of changes to choice architectures are described as nudges. In our view, some are manipulative and some are not—the distinction hinging on whether or not the nudge is hidden, and whether it exploits vulnerabilities or attempts to rectify them. Many of the nudges Thaler and Sunstein, and others, recommend are not hidden and work to correct cognitive bias. For example, purely informational nudges, such as nutrition labels, do not seem to us to be manipulative. They encourage individuals to slow down, reflect on, and make more informed decisions. By contrast, Thaler and Sunstein’s famous cafeteria nudge—placing healthier foods at eye-level and less healthy foods below or above—seems plausibly manipulative, since it attempts to operate outside the individual’s conscious awareness, and to leverage a decision-making bias. Of course, just because it’s manipulative does not mean it isn’t justified. To say that a strategy is manipulative is to draw attention to the fact that it carries a harm, which we discuss in detail below. It is possible, however, that the harm is justified by some greater benefit it brings with it.

Having defined manipulation as hidden or covert influence, and having distinguished manipulation from persuasion, coercion, deception, and nudging, it is possible to define “online manipulation” as the use of information technology to covertly influence another person’s decision-making, by targeting and exploiting decision-making vulnerabilities. Importantly, we have adopted the term “online manipulation” from public discourse and interpret the word “online” expansively, recognising that there is no longer any hard boundary between online and offline life (if there ever was). “Online manipulation”, as we understand it, designates manipulation facilitated by information technology, and could just as easily be termed “digital manipulation” or “automated manipulation”. Since traditionally “offline” spaces are increasingly digitally mediated (because the people occupying them carry smartphones, the spaces themselves are embedded with internet-connected sensors, and so on), we should expect to encounter online manipulation beyond our computer screens.

Given this definition, it is not difficult to see why information technology is uniquely suited to facilitating manipulative influences. First, pervasive digital surveillance puts our decision-making vulnerabilities on permanent display. As privacy scholars have long pointed out, nearly everything we do today leaves a digital trace, and data collectors compile those traces into enormously detailed profiles (Solove, 2004). Such profiles comprise information about our demographics, finances, employment, purchasing behaviour, engagement with public services and institutions, and so on—in total, they often involve thousands of data points about each individual. By analysing patterns latent in this data, advertisers and others engaging in behavioural targeting are able to detect when and how to intervene in order to most effectively influence us (Kaptein & Eckles, 2010).

Moreover, digital surveillance enables detection of increasingly individual- or person-specific vulnerabilities.16 Beyond the well-known cognitive biases discussed above (e.g., anchoring and framing effects), which condition most people’s decision-making to some degree, we are also each subject to particular circumstances that can impact how we choose.17 We are each prone to specific fears, anxieties, hopes, and desires, as well as physical, material, and economic realities, which—if known—can be used to steer our decision-making. In 2016, the voter micro-targeting firm Cambridge Analytica claimed to construct advertisements appealing to particular voter “psychometric” traits (such as openness, extraversion, etc.) by combining information about social media use with personality profiles culled from online quizzes.18 And in 2017, an Australian newspaper exposed internal Facebook strategy documents detailing the company’s alleged ability to detect when teenage users are feeling insecure. According to the report, “By monitoring posts, pictures, interactions and internet activity in real-time, Facebook can work out when young people feel ‘stressed’, ‘defeated’, ‘overwhelmed’, ‘anxious’, ‘nervous’, ‘stupid’, ‘silly’, ‘useless’, and a ‘failure’” (Davidson, 2017). Though Facebook claims it never used that information to target advertisements at teenagers, it did not deny that it could. Extrapolating from this example it is easy to imagine others, such as banks targeting advertisements for high-interest loans at the financially desperate or pharmaceutical companies targeting advertisements for drugs at those suspected to be in health crisis.19

Second, digital platforms, such as websites and smartphone applications, are the ideal medium for leveraging these insights into our decision-making vulnerabilities. They are dynamic, interactive, intrusive, and adaptive choice architectures (Lanzing, 2018; Susser, 2019b; Yeung, 2017). Which is to say, the digital interfaces we interact with are configured in real time using the information about us described above, and they continue to learn about us as we interact with them. Unlike advertisements of old, they do not wait, passively, for viewers to drive past them on roads or browse over them in magazines; rather, they send text messages and push notifications, demanding our attention, and appear in our social media feeds at the precise moment they are most likely to tempt us. And because all of this is automated, digital platforms are able to adapt to each individual user, creating what Karen Yeung calls “highly personalised choice environment[s]”—decision-making contexts in which the vulnerabilities catalogued through pervasive digital surveillance are put to work in an effort to influence our choices (2017, p. 122).20
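
As a rough sketch of the kind of logic such a system might implement, consider the following hypothetical targeting routine. All profile fields, thresholds, and messages are invented for illustration; no actual platform’s code or API is being described:

```python
# Hypothetical sketch of a "personalised choice environment": a routine that
# uses a surveillance-derived profile to decide when and how to intervene,
# conditioning the pitch on inferred vulnerability rather than on the
# merits of the offer.

from dataclasses import dataclass
from typing import Optional

@dataclass
class UserProfile:
    # Invented fields standing in for the thousands of data points
    # a real profile might contain.
    inferred_mood: str             # e.g., "anxious", "neutral", "upbeat"
    financial_stress_score: float  # 0.0 (none) to 1.0 (severe), inferred
    late_night_active: bool        # activity pattern inferred from usage logs

def select_intervention(profile: UserProfile) -> Optional[str]:
    """Pick a pitch tailored to the user's inferred vulnerabilities, or None."""
    if profile.financial_stress_score > 0.7:
        return "Short on cash? Get approved for a same-day loan."
    if profile.inferred_mood == "anxious" and profile.late_night_active:
        return "Feeling overwhelmed? Treat yourself tonight."
    return None  # no intervention predicted to be effective right now

print(select_intervention(UserProfile("anxious", 0.8, True)))
# Short on cash? Get approved for a same-day loan.
```

The point of the sketch is that the decisive input is the inferred vulnerability, and that the targeting logic remains entirely invisible to the person on the receiving end of the message.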

Third, if manipulation is hidden influence, then digital technologies are ideal vehicles for manipulation because they are already in a real sense hidden. We often think of technologies as objects we attend to and use with focus and attention. The language of technology design reflects this: we talk about “users” and “end users,” “user interfaces,” and “human-computer interaction”. In fact, as philosophers (especially phenomenologists) and science and technology studies (STS) scholars have long shown, once we become habituated to a particular technology, the device or interface itself recedes from conscious attention, allowing us to focus on the tasks we are using it to accomplish.21 Think of a smartphone or computer: we pay little attention to the devices themselves, or even to the way familiar websites or app interfaces are arranged. Instead, after becoming acclimated to them, we attend to the information, entertainment, or conveniences they offer (Rosenberger, 2009). Philosophers refer to this as “technological transparency”—the fact that we see, hear, or otherwise perceive through technologies—as though they were clear, transparent—onto the perceptual objects they convey to us (Ihde, 1990; Van Den Eede, 2011; Verbeek, 2005). Because this language of transparency can be confused with the concept of transparency familiar from technology policy discussions, we might more helpfully describe it as “invisibility” (Susser, 2019b). In addition to pervasive digital surveillance making our decision-making vulnerabilities easy to detect, and digital platforms making them easy to exploit, the ease with which our technologies become invisible to us—simply through frequent use and habituation—means the influences they facilitate are often hidden, and thus potentially manipulative.

Finally, although we focus primarily on the example of behavioural advertising to illustrate these dynamics, it is worth emphasising that advertisers are not the only ones engaging in manipulative practices. In the realm of user interface/experience (UI/UX) design, increasing attention is being paid to so-called “dark patterns”—design strategies that exploit users’ decision-making vulnerabilities to nudge them into acting against their interests (or, at least, acting in the interests of the website or app), such as automatically-renewing paid subscriptions that begin after an initial free trial period (Brignull, 2013; Gray, Kou, Battles, Hoggatt, & Toombs, 2018; Murgia, 2019; Singer, 2016). Though many of these strategies are as old as the internet and not all rise to the level of manipulation—sometimes overtly inconveniencing users, rather than hiding their intentions—their growing prevalence has led some to call for legislation banning them (Bartz, 2019).
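
A minimal sketch of the “forced continuity” pattern mentioned above may help; the trial length, price, and defaults are invented, and the sketch describes no particular service:

```python
# Hypothetical sketch of the "forced continuity" dark pattern: a free trial
# that silently converts into a recurring paid subscription. The manipulative
# work is done by the pre-checked renewal default and the buried cancellation
# path, not by any false statement.

from datetime import date, timedelta

def start_free_trial(user_id: str, today: date) -> dict:
    return {
        "user_id": user_id,
        "trial_ends": today + timedelta(days=30),
        "auto_renew": True,      # pre-checked; the user must act to opt out
        "renewal_price": 14.99,  # disclosed once, in small print
        "cancel_path": "/account/settings/billing/plans/manage/cancel",
    }

def charge_if_due(subscription: dict, today: date) -> bool:
    """Bill the stored card the day the trial lapses, with no reminder sent."""
    return subscription["auto_renew"] and today >= subscription["trial_ends"]

sub = start_free_trial("u123", date(2019, 6, 1))
print(charge_if_due(sub, date(2019, 7, 1)))  # True: billed without notice
```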

Worries about online manipulation have also been raised in the context of gig economy services, such as Uber and Lyft (Veen, Goods, Josserand, & Kaine, 2017). While these platforms market themselves as freer, more flexible alternatives to traditional jobs, providing reliable and consistent service to customers requires maintaining some amount of control over workers. However, without access to the traditional managerial controls of the office or factory floor, gig economy firms turn to “algorithmic management” strategies, such as notifications, customer satisfaction ratings, and other forms of soft control enabled through their apps (Rosenblat & Stark, 2016). Uber, for example, rather than requesting (or demanding) that workers put in longer hours, prompts drivers trying to exit the app with a reminder about their progress toward some earnings goal, exploiting the desire to continue making progress toward that goal; Lyft issues game-like “challenges” to drivers and stars and badges for accomplishing them (Mason, 2018; Scheiber, 2017).
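
A toy sketch of the earnings-goal prompt described above; the threshold, goal, and message text are invented for illustration and do not reproduce any actual app’s logic:

```python
# Hypothetical sketch of an "algorithmic management" nudge: when a driver
# tries to log off, the app interposes a goal-gradient prompt that exploits
# the pull of a nearly completed goal.

from typing import Optional

def logoff_prompt(earnings_today: float, inferred_goal: float) -> Optional[str]:
    """Return an interstitial message if the driver is close to a goal.

    The goal may be inferred by the platform rather than set by the driver,
    which is part of what can make the influence covert.
    """
    remaining = inferred_goal - earnings_today
    if 0 < remaining <= 25:  # close enough for the prompt to bite
        return (f"You're only ${remaining:.2f} away from making "
                f"${inferred_goal:.0f} today. Keep driving?")
    return None  # otherwise let the driver log off unprompted

print(logoff_prompt(178.50, 200.00))
# You're only $21.50 away from making $200 today. Keep driving?
```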

In their current form, not all such practices necessarily manipulate—people are savvy, and many likely understand what they are facing. These examples are important, however, because they illustrate our present trajectory. Growing reliance on digital tools in all parts of our lives—tools that constantly record, aggregate, and analyse information about us—means we are revealing more and more about our individual and shared vulnerabilities. The digital platforms we interact with are increasingly capable of exploiting those insights to nudge and shape our choices, at home, in the workplace, and in the public sphere. And the more we become habituated to these systems, the less attention we pay to them.

2. The harm(s) of online manipulation

With this picture in hand, the question becomes: what exactly is the harm that results from influencing people in this way? Why should we be worried about technological mediation rendering us so susceptible to manipulative influence? In our view, there are several harms, but each flows from the same place—manipulation violates its target’s autonomy.

The notion of autonomy points to an individual’s capacity to make meaningfully independent decisions. As Joseph Raz puts it: “(t)he ruling idea behind the ideal of personal autonomy is that people should make their own lives” (Raz, 1986, p. 369). Making one’s own life means freely facing both existential choices, like whom to spend one’s life with or whether to have children, and pedestrian, everyday ones. And facing them freely means having the opportunity to think about and deliberate over one’s options, considering them against the backdrop of one’s beliefs, desires, and commitments, and ultimately deciding for reasons one recognises and endorses as one’s own, absent unwelcome influence (J. P. Christman, 2009; Oshana, 2015; Veltman & Piper, 2014). Autonomy is in many ways the guiding normative principle of liberal democratic societies. It is because we think individuals can and should govern themselves that we value our capacity to collectively and democratically self-govern.

Philosophers sometimes operationalise the notion of autonomy by distinguishing between its competency and authenticity conditions (J. P. Christman, 2009, p. 155f). In the first place, being autonomous means having the cognitive, psychological, social, and emotional competencies to think through one’s choices, form intentions about them, and act on the basis of those intentions. Second, it means that upon critical reflection one identifies with one’s values, desires, and goals, and endorses them authentically as one’s own. Of course, many have criticised such conceptions of autonomy as overly rationalistic and implausibly demanding, arguing that we rarely decide in this way. We are emotional actors and creatures of habit, they argue, socialised and enculturated into specific ways of choosing that we almost never reflect upon or endorse. But we understand autonomy broadly—our conception of deliberation includes not only beliefs and desires, but also emotions, convictions, and experiences, and critical reflection can be counterfactual (we must in principle be able to critically reflect on and endorse our motivations for acting, but we need not actually reflect on each and every move we make).

In addition to rejecting overly demanding and rationalistic conceptions of autonomy, we also reject overly atomistic ones. In our view, autonomous persons are socially, culturally, historically, and politically situated. Which is to say, we acknowledge the “intersubjective and social dimensions of selfhood and identity for individual autonomy and moral and political agency” (Mackenzie & Stoljar, 2000, p. 4).22 Though social contexts can constrain our choices, by conditioning us to believe and behave in stereotypical ways (as, for example, in the case of gendered social expectations), it is also our social contexts that bestow value on autonomy, teaching us what it means to make independent decisions, and providing us with rich sets of options from which to choose. Moreover, it is crucial for present purposes that we emphasise our understanding of autonomy as more than an individual good—it is an essential social and political good too. Individuals express their autonomy across a variety of social contexts, from the home to the marketplace to the political sphere. Democratic institutions are meant to register and reflect the autonomous political decisions individuals make. Disrupting individual autonomy is thus more than an ethical concern; it has social and political import.

Against this picture of autonomy and its value, we can more carefully explain why online manipulation poses such a grave threat. To manipulate someone is, again, to covertly influence them, to intentionally alter their decision-making process without their conscious awareness. Doing so undermines the target’s autonomy in two ways: first, it can lead them to act toward ends they haven’t chosen, and second, it can lead them to act for reasons not authentically their own.

To see the first problem, consider examples of targeted advertising in the commercial sphere. Here, the aim of manipulators is fairly straightforward: they want people to buy things. Rather than simply put products on display, however, advertisers can construct decision-making environments—choice architectures—that subtly tempt or seduce shoppers to purchase their wares, and at the highest possible price (Calo, 2014). A variety of strategies might be deployed, from pointing out that one’s friends have purchased the item to countdown clocks that pressure one to act before some offer expires, the goal being to hurry, evade, or undermine deliberation, and thus to encourage decisions that may or may not align with an individual’s deeper, reflective, self-chosen ends and values.

Of course, these strategies are familiar from non-digital contexts; all commercial advertising (digital or otherwise) functions in part to induce consumers to buy things, and worries about manipulative ads emerged long before advertising moved online.23 Equally, not all advertising—perhaps not even all targeted advertising—involves manipulation. Purely informational ads displayed to audiences actively seeking out related products and services (e.g., online banner ads displaying a doctor’s contact information shown to visitors to a health-related website) are unlikely to covertly influence their targets. Worries about manipulation arise in cases where advertisements are sneaky—which is to say, where their effects are achieved covertly. If, for example, the doctor was a psychiatrist, his advertisements were shown to people suspected of suffering from depression, and only at the specific times of day they were thought to be most afflicted, our account would offer grounds for condemning such tactics as manipulative.

It might also be the case that manipulation is not a binary phenomenon. We are the objects of countless influence campaigns, and we understand some of them better than others; perhaps we ought to say that they are correspondingly more or less manipulative. On such a view, online targeted (or “behavioural”) advertising could be understood as exacerbating manipulative dynamics common to other forms of advertising, by making the tweaks to individual choice architectures more subtle, and the seductions and temptations that result from them more difficult to resist (Yeung, 2017). Worse still, the fluidity and porousness of online environments make it easy for marketers to conflate other distinct contexts with shopping, further blurring a person’s reasoning about whether they truly want to make some purchase. For example, while chatting with friends over social media or searching for some place to eat, an ad may appear, requiring the target to juggle several tasks—in this case, communication and information retrieval—along with deliberation over whether or not to respond to the marketing ploy, thereby diminishing the target’s ability to sustain focus on any of them. This problem is especially clearly illustrated by so-called “native advertising” (advertisements designed to look like user-generated, non-commercial content). Such advertisements are a kind of Trojan horse, intentionally conflating commercial and non-commercial activities in an attempt to undermine our capacity for focused, careful deliberation.

In the philosophical language introduced above, these strategies challenge both autonomy’s competency and authenticity conditions. By deliberately and covertly engineering our choice environments to steer our decision-making, online manipulation threatens our competency to deliberate about our options, form intentions about them, and act on the basis of those intentions. And since, as we’ve seen, manipulative practices often work by targeting and exploiting our decision-making vulnerabilities—concealing their effects, leaving us unaware of the influence on our decision-making process—they also challenge our capacity to reflect on and endorse our reasons for acting as authentically our own. Online manipulation thus harms us both by inducing us to act toward ends not of our choosing and for reasons we haven’t endorsed.

Importantly, undermining personal autonomy in the ways just described can lead to further harms. First, since autonomous individuals are wont to protect (or at least to try and protect) their own interests, we can reasonably expect that undermining people’s autonomy will lead, in many cases, to a diminishment of those interests. Losing the ability to look out for ourselves is unlikely to leave us better off in the long run. This harm—e.g., being tricked into buying things we don’t need or paying more for them than we otherwise would—is well described by those who have analysed the problem of online manipulation in the commercial sphere (Calo, 2014; Nadler & McGuigan, 2018; Zarsky, 2006; Zarsky, 2019). And it is a serious harm, which we would do well to take seriously, especially given the fact that law and policy around information and internet practices (at least in the US) assume that individuals are for the most part capable of safeguarding their interests (Solove, 2013). However, it is equally important to see that this harm to welfare is derivative of the deeper harm to autonomy. Attempting to “protect consumers” from threats to their economic or other interests, without addressing the more fundamental threat to their autonomy, is thus to treat the symptoms without addressing the cause.

To bring this into sharper relief, it is worth pointing out that even purely beneficent manipulation is harmful. Indeed, it is harmful to manipulate someone even in an effort to lead them more effectively toward their own self-chosen ends. That is because the fundamental harm of manipulation is to the process of decision-making, not its outcome. A well-meaning, paternalistic manipulator, who subtly induces his target to eat better food, exercise, and work hard, makes his target better off in one sense—he is healthier and perhaps more materially well-off—but harms him as well by rendering him opaque to himself. Imagine if some bad habit, which someone had spent their whole life attempting to overcome, one day, all of a sudden, disappeared. They would be happy, of course, to be rid of the habit, but they might also be deeply confused and suspicious about the source of the change. As T.M. Scanlon writes, “I want to choose the furniture for my own apartment, pick out the pictures for the walls, and even write my own lectures despite the fact that these things might be done better by a decorator, art expert, or talented graduate student. For better or worse, I want these things to be produced by and reflect my own taste, imagination, and powers of discrimination and analysis. I feel the same way, even more strongly, about important decisions affecting my life in larger terms: what career to follow, where to work, how to live” (Scanlon, 1988).

Having said that, we have not demonstrated that manipulation is necessarily wrong in every case—only that it always carries a harm. One can imagine cases where the harm to autonomy is outweighed by the benefit to welfare. (For example, a case where someone’s life is in immediate danger, and the only way to save them is by manipulating them.) But such cases are likely few and far between. What is so worrying about online manipulation is precisely its banality—the fact that it threatens to become a regular part of the fabric of everyday experience. As Jeremy Waldron argues, if we allow that to happen, our lives will be drained of something deeply important: “What becomes of the self-respect we invest in our own willed actions, flawed and misguided though they often are, when so many of our choices are manipulated to promote what someone else sees (perhaps rightly) as our best interest?” (Waldron, 2014). That we also lack reason to believe online manipulators really do have our best interests at heart is only more reason to resist them.

Finally, beyond the harm to individuals, manipulation promises a collective harm. By threatening our autonomy it threatens democracy as well. For autonomy is writ small what democracy is writ large—the capacity to self-govern. It is only because we believe individuals can make meaningfully independent decisions that we value institutions designed to register and reflect them. As the Cambridge Analytica case—and the public outcry in response to it—demonstrates, online manipulation in the political sphere threatens to undermine these core collective values. The problem of online manipulation is, therefore, not simply an ethical problem; it is a social and political one too.

3. Technology and autonomy

If one accepts the arguments advanced thus far, an obvious response is that we need to devise law and policy capable of preventing and mitigating manipulative online practices. We agree that we do. But that response is not sufficient—the question for policymakers is not simply how to mitigate online manipulation, but how to strengthen autonomy in the digital age. In making this claim, we join our voices with a growing chorus of scholars and activists—like Frischmann, Selinger, and Zuboff—working to highlight the corrosive effects of digital technologies on autonomy. Meeting these challenges requires more than consumer protection—it requires creating the positive conditions necessary for supporting individual and collective self-determination.

We don’t pretend to have a comprehensive solution to these deep and complex problems, but some suggestions follow from our brief discussion. It should be noted that these suggestions—like the discussion, above, that prompted them—are situated firmly in the terrain of contemporary liberal political discourse, and those convinced that online manipulation poses a significant threat (especially some European readers) may be struck by how moderate our responses are. While we are not opposed to more radical interventions, we formulate our analysis using the conceptual and normative frameworks familiar to existing policy discussions in hopes of having an impact on them.

Curtail digital surveillance

Data, as Tal Zarsky writes, is the “fuel” powering online manipulation (2019, p. 186). Without the detailed profiles cataloguing our preferences, interests, habits, and so on, the ability of would-be manipulators to identify our weaknesses and vulnerabilities would be vastly diminished, and so too their capacity to leverage them to their ends. Of course, the call to curtail digital surveillance is nothing new. Privacy scholars and advocates have been raising alarms about the ills of surveillance for half a century or more. Yet, as Zarsky argues, manipulation arguments could add to the “analytic and doctrinal arsenal of measures which enable legal intervention in the new digital environment” (2019, p. 185). Furthermore, outcry over apparent online manipulation in both the commercial and political spheres appears to be generating momentum behind new policy interventions to combat such strategies. In the US, a number of states have recently passed or are considering new privacy legislation, and the U.S. Congress appears to be weighing new federal privacy legislation as well (“Congress Is Trying to Create a Federal Privacy Law,” 2019; Merken, 2019). And, of course, all of that takes place on the heels of the new General Data Protection Regulation (GDPR) taking effect in Europe, which places new limits on when and what kinds of data can be collected about European citizens, and on data collection by firms operating on European soil.24 To curb manipulation and strengthen autonomy online, efforts to curtail digital surveillance ought to be redoubled.

Problematise personalisation

When asked to justify collecting so much data about us, data collectors routinely argue that the information is needed in order to personalise their services to the needs and interests of individual users. Mark Zuckerberg, for example, recently attempted to explain Facebook’s business model in the pages of the Wall Street Journal: “People consistently tell us that if they’re going to see ads, they want them to be relevant,” he wrote. “That means we need to understand their interests” (2019).25 Personalisation seems, on the face of it, like an unalloyed good. Who wouldn’t prefer a personalised experience to a generic one? Yet research into different forms of personalisation suggests that individualising—personalising—our experiences can carry with it significant risks.

These worries came to popular attention with Eli Pariser’s book The Filter Bubble (2011), which argued forcefully (though not without challenge) that the construction of increasingly singular, individualised experiences means, at the same time, the loss of common, shared ones, and which described the detriments of that transformation for both individual and collective decision-making.26 In addition to personalised information environments—Pariser’s focus—technological advances enable things like personalised pricing, sometimes called “dynamic pricing” or “price discrimination” (Calo, 2014), and personalised work scheduling, or “just-in-time” scheduling (De Stefano, 2015). For the reasons discussed above, many such strategies may well be manipulative. The targeting and exploiting of individual decision-making vulnerabilities enabled by digital technologies—the potential for online manipulation they create—gives us reason to question whether the benefits of personalisation really outweigh the costs. At the very least, we ought not to uncritically accept personalisation as a rationale for increased data collection, and we ought to approach with care (if not skepticism) the promise of an increasingly personalised digital environment.

Promote awareness and understanding

If the central problem of online manipulation is its hiddenness, then any response must involve a drive toward increased awareness. The question is what form such awareness should take. Yeung argues that the predominant vehicle for notifying individuals about information flows and data practices—the privacy notice, or what is often called “notice-and-consent”—is insufficient (2017). Indeed, merely notifying someone that they are the target of manipulation is not enough to neutralise its effects. Doing so would require understanding not only that one is the target of manipulation, but also who the manipulator is, what strategies they are deploying, and why. Given the well-known “transparency paradox”, according to which we are bound to either deprive users of relevant information (in an attempt to be succinct) or overwhelm them with it (in an attempt to be thorough), there is little reason to believe standard forms of notice alone can equip users to face the challenges of online manipulation.27

Furthermore, the problem of online manipulation runs deeper than any particular manipulative practice. What worries many people is the fact that manipulative strategies, like targeted advertising, are becoming basic features of the digital world—so commonplace as to escape notice or mention.28 Machine learning and artificial intelligence tools have quickly and quietly been delegated vast decision-making authority in a variety of contemporary contexts and institutions, and scholars and activists have responded with calls to make their decision-making processes more explainable, transparent, and accountable. In the same way, we must give people tools to understand and manage a digital environment designed to shape and influence them.29

Attend to context

Finally, it is important to recognise that moral intuitions about manipulation are indexed to social context. Which is to say, we are willing to tolerate different levels of outside influence on our decision-making in different decision-making spheres. As relatively lax commercial advertising regulations indicate, we are—at least in the US—willing to accept a fair amount of interference in the commercial sphere. By contrast, somewhat more stringent regulations around elections and campaign advertising suggest that we are less willing to accept such interference in the realm of politics.30 Responding to the threats of online manipulation therefore requires sensitivity to where—in which spheres of life—we encounter them.

Conclusion

The idea that technological advancements bring with them new arrangements of power is, of course, nothing new. That online manipulation threatens to subordinate the interests of individuals to those of data collectors and their clients is thus, in one respect, a familiar (if nonetheless troubling) problem. What we hope to have shown, however, is that the threat of online manipulation is deeper, more insidious, than that. Being steered or controlled, outside our conscious awareness, violates our autonomy, our capacity to understand and author our own lives. If the tools that facilitate such control are left unchecked, it will be to our individual and collective detriment. As we’ve seen, information technology is in many ways an ideal vehicle for these forms of control, but that does not mean that they are inevitable. Combating online manipulation requires both depriving it of personal data—the oxygen enabling it—and empowering its targets with awareness, understanding, and savvy about the forces attempting to influence them.

References

Abramowitz, M. J. (2017, December 11). Stop the Manipulation of Democracy Online. The New York Times. Retrieved from https://www.nytimes.com/2017/12/11/opinion/fake-news-russia-kenya.html

Anderson, J., & Honneth, A. (2005). Autonomy, Vulnerability, Recognition, and Justice. In J. Christman & J. Anderson (Eds.), Autonomy and the Challenges to Liberalism (pp. 127–149). doi:10.1017/CBO9780511610325.008

Bartz, D. (2019, April 13). U.S. senators introduce social media bill to ban “dark patterns” tricks. Reuters. Retrieved from https://www.reuters.com/article/us-usa-tech-idUSKCN1RL25Q

Benkler, Y., Faris, R., & Roberts, H. (2018). Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics. New York: Oxford University Press.

Blumenthal, J. A. (2005). Does Mood Influence Moral Judgment? An Empirical Test with Legal and Policy Implications. Law & Psychology Review, 29, 1–28.

Boerman, S. C., Kruikemeier, S., & Zuiderveen Borgesius, F. J. (2017). Online Behavioral Advertising: A Literature Review and Research Agenda. Journal of Advertising, 46(3), 363–376. doi:10.1080/00913367.2017.1339368

Bovens, L. (2009). The Ethics of Nudge. In T. Grüne-Yanoff & S. O. Hansson (Eds.), Preference Change: Approaches from Philosophy, Economics and Psychology (pp. 207–219). Dordrecht: Springer Netherlands.

Brignull, H. (2013, August 29). Dark Patterns: inside the interfaces designed to trick you. Retrieved June 17, 2019, from The Verge website: https://www.theverge.com/2013/8/29/4640308/dark-patterns-inside-the-interfaces-designed-to-trick-you

Calo, M. R. (2014). Digital Market Manipulation. The George Washington Law Review, 82(4). Retrieved from https://www.gwlr.org/wp-content/uploads/2018/01/82-Geo.-Wash.-L.-Rev.-995.pdf

Cambridge Analytica and Facebook: The Scandal so Far. (2018, March 28). Al Jazeera News. Retrieved from https://www.aljazeera.com/news/2018/03/cambridge-analytica-facebook-scandal-180327172353667.html

Christman, J. P. (2009). The Politics of Persons: Individual Autonomy and Socio-Historical Selves. Cambridge; New York: Cambridge University Press.

Congress Is Trying to Create a Federal Privacy Law. (2019, February 28). The Economist. Retrieved from https://www.economist.com/united-states/2019/02/28/congress-is-trying-to-create-a-federal-privacy-law

Davidson, D. (2017, May 1). Facebook targets “insecure” to sell ads. The Australian.

De Stefano, V. (2015). The Rise of the “Just-in-Time Workforce”: On-Demand Work, Crowd Work and Labour Protection in the “Gig-Economy.” SSRN Electronic Journal. doi:10.2139/ssrn.2682602

Doubek, J. (2017, November 16). How Disinformation And Distortions On Social Media Affected Elections Worldwide. Retrieved March 24, 2019, from NPR.org website: https://www.npr.org/sections/alltechconsidered/2017/11/16/564542100/how-disinformation-and-distortions-on-social-media-affected-elections-worldwide

Dubois, E., & Blank, G. (2018). The echo chamber is overstated: The moderating effect of political interest and diverse media. Information, Communication & Society, 21(5), 729–745. doi:10.1080/1369118X.2018.1428656

Franken, I. H. A., & Muris, P. (2005). Individual Differences in Decision-Making. Personality and Individual Differences, 39(5), 991–998. doi:10.1016/j.paid.2005.04.004

Frischmann, B., & Selinger, E. (2018). Re-Engineering Humanity (1st ed.). doi:10.1017/9781316544846

Gray, C. M., Kou, Y., Battles, B., Hoggatt, J., & Toombs, A. L. (2018). The Dark (Patterns) Side of UX Design. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems - CHI ’18, 1–14. doi:10.1145/3173574.3174108

Hausman, D. M., & Welch, B. (2010). Debate: To Nudge or Not to Nudge. Journal of Political Philosophy, 18(1), 123–136. doi:10.1111/j.1467-9760.2009.00351.x

Ihde, D. (1990). Technology and the Lifeworld: From Garden to Earth. Bloomington: Indiana University Press.

Kahneman, D. (2013). Thinking, Fast and Slow (1st pbk. ed). New York: Farrar, Straus and Giroux.

Kaptein, M., & Eckles, D. (2010). Selecting Effective Means to Any End: Futures and Ethics of Persuasion Profiling. In T. Ploug, P. Hasle, & H. Oinas-Kukkonen (Eds.), Persuasive Technology (Vol. 6137, pp. 82–93). doi:10.1007/978-3-642-13226-1_10

Kligman, M., & Culver, C. M. (1992). An Analysis of Interpersonal Manipulation. Journal of Medicine and Philosophy, 17(2), 173–197. doi:10.1093/jmp/17.2.173

Lanzing, M. (2018). “Strongly Recommended” Revisiting Decisional Privacy to Judge Hypernudging in Self-Tracking Technologies. Philosophy & Technology. doi:10.1007/s13347-018-0316-4

Levinson, J. D., & Peng, K. (2007). Valuing Cultural Differences in Behavioral Economics. ICFAI Journal of Behavioral Finance, 4(1).

Mackenzie, C., & Stoljar, N. (Eds.). (2000). Relational Autonomy: Feminist Perspectives on Autonomy, Agency, and the Social Self. New York: Oxford University Press.

Mason, S. (2018, November 20). High score, low pay: Why the gig economy loves gamification. The Guardian. Retrieved from https://www.theguardian.com/business/2018/nov/20/high-score-low-pay-gamification-lyft-uber-drivers-ride-hailing-gig-economy

Matz, S. C., Kosinski, M., Nave, G., & Stillwell, D. J. (2017). Psychological Targeting as an Effective Approach to Digital Mass Persuasion. Proceedings of the National Academy of Sciences, 114(48), 12714–12719. doi:10.1073/pnas.1710966114

Merken, S. (2019, February 6). States Follow EU, California in Push for Consumer Privacy Laws. Retrieved March 25, 2019, from Bloomberg Law website: https://news.bloomberglaw.com/privacy-and-data-security/states-follow-eu-california-in-push-for-consumer-privacy-laws-1

Murgia, M. (2019, May 4). When manipulation is the business model. Financial Times.

Nadler, A., & McGuigan, L. (2018). An Impulse to Exploit: The Behavioral Turn in Data-Driven Marketing. Critical Studies in Media Communication, 35(2), 151–165. doi:10.1080/15295036.2017.1387279

Nissenbaum, H. (2011). A Contextual Approach to Privacy Online. Daedalus, 140(4), 32–48. doi:10.1162/DAED_a_00113

Noggle, R. (2018a). Manipulation, Salience, and Nudges. Bioethics, 32(3), 164–170. doi:10.1111/bioe.12421

Noggle, R. (2018b). The Ethics of Manipulation. In E. N. Zalta (Ed.), Stanford Encyclopedia of Philosophy (p. 24). Retrieved from https://plato.stanford.edu/entries/ethics-manipulation/

Nys, T. R., & Engelen, B. (2017). Judging Nudging: Answering the Manipulation Objection. Political Studies, 65(1), 199–214. doi:10.1177/0032321716629487

Oshana, M. (Ed.). (2015). Personal Autonomy and Social Oppression: Philosophical Perspectives (First edition). New York: Routledge, Taylor & Francis Group.

Pariser, E. (2011). The Filter Bubble: What the Internet Is Hiding from You. New York: Penguin Press.

Rachlinski, J. J. (2006). Cognitive Errors, Individual Differences, and Paternalism. University of Chicago Law Review, 73(1), 207–229. Retrieved from https://chicagounbound.uchicago.edu/uclrev/vol73/iss1/11/

Raz, J. (1986). The Morality of Freedom (Reprinted). Oxford: Clarendon Press.

Reach, G. (2016). Patient education, nudge, and manipulation: Defining the ethical conditions of the person-centered model of care. Patient Preference and Adherence, 10, 459–468. doi:10.2147/PPA.S99627

Richards, N. M. (2013). The Dangers of Surveillance. Harvard Law Review, 126(7), 1934–1965. Retrieved from https://harvardlawreview.org/2013/05/the-dangers-of-surveillance/

Rosenberger, R. (2009). The Sudden Experience of the Computer. AI & Society, 24(2), 173–180. doi:10.1007/s00146-009-0190-9

Rosenblat, A., & Stark, L. (2016). Algorithmic Labor and Information Asymmetries: A Case Study of Uber’s Drivers. International Journal of Communication, 10, 3758–3784. Retrieved from https://ijoc.org/index.php/ijoc/article/view/4892

Rudinow, J. (1978). Manipulation. Ethics, 88(4), 338–347. doi:10.1086/292086

Scanlon, T. M. (1988). The Significance of Choice. In A. Sen & S. M. McMurrin (Eds.), The Tanner Lectures on Human Values (Vol. 8, p. 68).

Scheiber, N. (2017, April 2). How Uber Uses Psychological Tricks to Push Its Drivers’ Buttons. The New York Times. Retrieved from https://www.nytimes.com/interactive/2017/04/02/technology/uber-drivers-psychological-tricks.html

Selbst, A. D., & Barocas, S. (2018). The Intuitive Appeal of Explainable Machines. Fordham Law Review, 87(3), 1085–1139. Retrieved from https://ir.lawnet.fordham.edu/flr/vol87/iss3/11/

Selinger, E., & Whyte, K. (2011). Is There a Right Way to Nudge? The Practice and Ethics of Choice Architecture. Sociology Compass, 5(10), 923–935. doi:10.1111/j.1751-9020.2011.00413.x

Singer, N. (2016, May 14). When Websites Won’t Take No for an Answer. The New York Times. Retrieved from https://www.nytimes.com/2016/05/15/technology/personaltech/when-websites-wont-take-no-for-an-answer.html

Solove, D. J. (2004). The Digital Person: Technology and Privacy In The Information Age. New York: New York University Press.

Solove, D. J. (2013). Privacy Self-Management and the Consent Dilemma. Harvard Law Review, 126(7), 1880–1903. Retrieved from https://harvardlawreview.org/2013/05/introduction-privacy-self-management-and-the-consent-dilemma/

Stanovich, K. E., & West, R. F. (1998). Individual Differences in Rational Thought. Journal of Experimental Psychology: General, 127(2), 161–188. doi:10.1037/0096-3445.127.2.161

Stole, I. L. (2014). Persistent Pursuit of Personal Information: A Historical Perspective on Digital Advertising Strategies. Critical Studies in Media Communication, 31(2), 129–133. doi:10.1080/15295036.2014.921319

Sunstein, C. R. (2016). The Ethics of Influence: Government in the Age of Behavioral Science. Cambridge: Cambridge University Press.

Susser, D. (2019a). Notice After Notice-and-Consent: Why Privacy Disclosures Are Valuable Even If Consent Frameworks Aren’t. Journal of Information Policy, 9, 37–62. doi:10.5325/jinfopoli.9.2019.0037

Susser, D. (2019b). Invisible Influence: Artificial Intelligence and the Ethics of Adaptive Choice Architectures. Presented at the AAAI/ACM Conference on AI, Ethics, and Society (AIES ’19), Honolulu. Retrieved from http://www.aies-conference.com/wp-content/papers/main/AIES-19_paper_54.pdf

Susser, D., Roessler, B., & Nissenbaum, H. (2018). Online Manipulation: Hidden Influences in a Digital World. SSRN Electronic Journal. Retrieved from https://papers.ssrn.com/abstract=3306006

Thaler, R. H., & Sunstein, C. R. (2008). Nudge: Improving Decisions About Health, Wealth, and Happiness. New Haven: Yale University Press.

Tufekci, Z. (2014). Engineering the Public: Big Data, Surveillance and Computational Politics. First Monday, 19(7). doi:10.5210/fm.v19i7.4901

Turow, J. (2012). The Daily You: How the New Advertising Industry Is Defining Your Identity and Your Worth. Retrieved from https://books.google.com/books?id=rK7JSFudXA8C

Vaidhyanathan, S. (2018). Antisocial Media: How Facebook Disconnects Us and Undermines Democracy. New York; Oxford: Oxford University Press.

Van Den Eede, Y. (2011). In Between Us: On the Transparency and Opacity of Technological Mediation. Foundations of Science, 16(2/3), 139–159. doi:10.1007/s10699-010-9190-y

Vayena, E., & Ienca, M. (2018, March 30). Cambridge Analytica and Online Manipulation. Retrieved March 24, 2019, from Scientific American Blog Network website: https://blogs.scientificamerican.com/observations/cambridge-analytica-and-online-manipulation/

Veen, A., Goods, C., Josserand, E., & Kaine, S. (2017, June 18). “The way they manipulate people is really saddening”: Study shows the trade-offs in gig work. Retrieved June 16, 2019, from The Conversation website: http://theconversation.com/the-way-they-manipulate-people-is-really-saddening-study-shows-the-trade-offs-in-gig-work-79042

Veltman, A., & Piper, M. (Eds.). (2014). Autonomy, Oppression, and Gender. Oxford; New York: Oxford University Press.

Verbeek, P.-P. (2005). What Things Do: Philosophical Reflections on Technology, Agency, and Design. University Park: Pennsylvania State University Press.

Waldron, J. (2014, October 9). It’s All for Your Own Good. The New York Review of Books. Retrieved from https://www.nybooks.com/articles/2014/10/09/cass-sunstein-its-all-your-own-good/

Westin, A. F. (2015). Privacy and Freedom. New York: IG Publishing.

Wood, A. (2014). Coercion, Manipulation, Exploitation. In C. Coons & M. Weber (Eds.), Manipulation: Theory and Practice. Oxford; New York: Oxford University Press.

Yeung, K. (2017). Hypernudge: Big Data as a Mode of Regulation by Design. Information, Communication & Society, 20(1), 118–136. doi:10.1080/1369118X.2016.1186713

Zarsky, T. (2006). Online Privacy, Tailoring, and Persuasion. In K. J. Strandburg & D. S. Raicu (Eds.), Privacy and Technologies of Identity: A Cross-Disciplinary Conversation (pp. 209–224). doi:10.1007/0-387-28222-X_12

Zarsky, T. Z. (2019). Privacy and Manipulation in the Digital Age. Theoretical Inquiries in Law, 20(1), 157–188. Retrieved from http://www7.tau.ac.il/ojs/index.php/til/article/view/1612

Zittrain, J. (2014). Engineering an Election. Harvard Law Review Forum, 127(8), 335–341. Retrieved from https://harvardlawreview.org/2014/06/engineering-an-election/

Zuboff, S. (2015). Big Other: Surveillance Capitalism and the Prospects of an Information Civilization. Journal of Information Technology, 30(1), 75–89. doi:10.1057/jit.2015.5

Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (First edition). New York: Public Affairs.

Zuckerberg, M. (2019, January 25). The Facts About Facebook. Wall Street Journal.

Zuiderveen Borgesius, F. J., Möller, J., Kruikemeier, S., Ó Fathaigh, R., Irion, K., Dobber, T., … De Vreese, C. (2018). Online Political Microtargeting: Promises and Threats for Democracy. Utrecht Law Review, 14(1), 82–96. doi:10.18352/ulr.420

Zuiderveen Borgesius, F. J., Trilling, D., Möller, J., Bodó, B., De Vreese, C. H., & Helberger, N. (2016). Should We Worry About Filter Bubbles? Internet Policy Review, 5(1). doi:10.14763/2016.1.401

Footnotes

1. Richards describes this influence as “persuasion” and “subtle forms of control”. In our view, for reasons discussed below, the subtler forms of influence ought really to be called “manipulation”.

2. For a wide-ranging review of the scholarly literature on targeted advertising, see (Boerman, Kruikemeier, & Zuiderveen Borgesius, 2017).

3. Zarsky, for example, gestures at there being more at stake than consumer interests, but he explicitly declines to develop the point, framing the problem instead as one of consumer protection; see Zarsky (2006; 2019).

4. Which is not to say that no one saw this coming. As far back as 1967, Alan Westin warned about “the entire range of forthcoming devices, techniques, and substances that enter the mind to implant influences or extract data” and their application “in commerce or politics” (Westin, 2015, p. 331). See also (Tufekci, 2014; Zittrain, 2014).

5. Frischmann and Selinger write: “Across cultures and generations, humans have engineered themselves and their built social environments to sustain capacities for thinking, the ability to socialize and relate to each other, free will, autonomy, and agency, as well as other core capabilities. […T]hey are at risk of being whittled away through modern forms of techno-social engineering.” (2018, p. 271). And Zuboff argues that the behaviour modifications characteristic of surveillance capitalism “sacrifice our right to the future tense, which comprises our will to will, our autonomy, our decision rights, our privacy, and, indeed, our human natures” (2019, p. 347).

6. As Frischmann and Selinger write, “We are fundamentally techno-social animals” (2018, p. 271).

7. For a more fully developed and defended version of our account, see Susser, Roessler, and Nissenbaum (2018).

8. (Benkler, Faris, & Roberts, 2018). See also the many excellent reports from the Data & Society Research Institute’s “Media Manipulation” project: https://datasociety.net/research/media-manipulation/

9. Examples from Noggle (2018b).

10. The term “persuasion” is sometimes used in a broader sense, as a synonym for “influence”. Here we use it in the narrower sense of rational persuasion, since our goal is precisely to distinguish between different forms of influence.

11. Assuming we ever do learn that we have been manipulated. Presumably we often do not.

12. As Luc Bovens writes about nudges (discussed below), such strategies “typically work better in the dark” (2009, p. 209).

13. The classic formulation of these ideas comes from Daniel Kahneman and Amos Tversky, summarised in (Kahneman, 2013). See also (Thaler & Sunstein, 2008).

14. Writing about manipulation in 1978, Joel Rudinow observed: “Weaknesses are rarely displayed; they are betrayed. Since our weaknesses, in addition to making us vulnerable, are generally repugnant to us, we generally do our best to conceal them, not least from ourselves. Consequently too few people are insightful enough into or familiar enough with enough other people to make the use of resistible incentives a statistically common form of manipulation. In addition we are not always so situated as to be able genuinely to offer someone the incentive which we believe will best suit our manipulative aims. Just as often it becomes necessary to deceive someone in order to play on his weakness. Thus it is only to be expected that deception plays a role in the great majority of cases of manipulation.” (Rudinow, 1978, p. 347) As we’ll see below, it is precisely the limitations confronting the would-be manipulator in 1978, which Rudinow identifies, that thanks to technology have since been overcome.

15. Thaler and Sunstein refer to this as “libertarian paternalism” (2008).

16. Our thanks to a reviewer of this essay for the term “person-specific vulnerability.”

17. In fact, while we are all susceptible to the kinds of cognitive biases discussed by behavioral economists to some degree, we are not all susceptible to each bias to the same degree (Rachlinski, 2006; Stanovich & West, 1998). Empirical evidence suggests that individual differences in personality (Franken & Muris, 2005), cultural background (Levinson & Peng, 2007), and mood (Blumenthal, 2005), among others, can modulate how individuals are impacted by particular biases. It is not difficult to imagine digital tools detecting these differences and leveraging them to structure particular interventions.

18. Cambridge Analytica’s then-CEO Alexander Nix discusses these tactics here: https://www.youtube.com/watch?v=n8Dd5aVXLCc. Research suggests such tactics are plausible; see Matz, Kosinski, Nave, and Stillwell (2017).

19. For a deeper discussion about vulnerability, its varieties, and the ways vulnerabilities can be leveraged by digital tools, see Susser, Roessler, and Nissenbaum (2018).

20. See also Susser (2019b).

21. For an excellent discussion of the different ways this idea has been elaborated by a variety of philosophers and STS scholars, see Van Den Eede (2011).

22. It is worth noting, however: just because individuals and their capacities are inextricably social, that does not mean autonomy is only possible in egalitarian social contexts. See Anderson and Honneth (2005).

23. “While much about digital advertising appears revolutionary, it would be wrong to accept the notion of customer surveillance as a modern phenomenon. Although the internet’s technological advances have taken advertising in new directions and the practice of ‘data-mining’ to almost incomprehensible extremes, nearly all of what is transpiring reflects some of the basic methods developed by marketers beginning a hundred years ago” (Stole, 2014).

24. See https://eugdpr.org

25. Zuckerberg also cited needing user information for “security and operating our services”.

26. Some empirical researchers have expressed skepticism about the alleged harms of filter bubbles, with some even suggesting that they are beneficial (Dubois & Blank, 2018; Zuiderveen Borgesius et al., 2016). Their findings, however, are far from conclusive.

27. On the “transparency paradox,” see Nissenbaum (2011). Though privacy notices are, in themselves, insufficient for shielding individuals from the effects of online manipulation, that does not mean that they are entirely without value. They might support individual autonomy, even if they can’t guarantee it: see Susser (2019a).

28. For example, Effy Vayena and Marcello Ienca write: “[N]ot just Cambridge Analytica, but most of the current online ecosystem, is an arm’s race to the unconscious mind: notifications, microtargeted ads, autoplay plugins, are all strategies designed to induce addictive behavior, hence to manipulate” (Vayena & Ienca, 2018).

29. For a helpful discussion about the calls for—and limits of—explainable artificial intelligence, see Selbst and Barocas (2018).

30. In a longer version of this paper, we also consider online manipulation in the context of the workplace. See Susser et al. (2018).

Making data colonialism liveable: how might data’s social order be regulated?

This paper is part of Transnational materialities, a special issue of Internet Policy Review guest-edited by José van Dijck and Bernhard Rieder.

Introduction

A new order is being constructed through the continuous extraction of data from our social lives. This new order, optimised for the creation of economic value, may well become the social order on which the next phase of capitalism depends for its viability. As part of that emerging order, calls for the regulation of data processing have intensified in the past two years, unsurprisingly perhaps given that capitalism has shown that it needs to be regulated if it is to be made liveable (Polanyi, 2001). But this push for regulation has been framed entirely in terms of taming certain rogue forms of contemporary capitalism. This article argues, however, that to frame data issues solely in terms of a “bad” form of capitalism misses the full scope, scale and nature of what is happening with data. Legal, social and civic responses to what is underway need to be grounded in a broader argument about what we will call “data colonialism”.

There is no doubt of course that what is happening with data today is inextricably linked to the development of capitalism. But is something even larger going on? We argue here that today’s quantification of the social—also known as datafication (Mayer-Schönberger and Cukier, 2013; Van Dijck, 2014)—represents the first step in a new form of colonialism. This emerging order has long-term consequences that may be as far-reaching as were the appropriations carried out by historic colonialism for the benefit of the capitalist economies and international legal order that subsequently developed.

Recognising what is happening with data as a colonial move means acknowledging the full scope of the resource appropriation under way today through datafication: it is human life itself that is being appropriated so that it can be annexed directly to capital as part of a reconstruction of the very spaces of social experience. In arguing this, we share some common ground with Shoshana Zuboff’s well-known argument on “surveillance capitalism”, but there are also crucial differences, which we briefly summarise in three points here (and further unpack later). 1

First, the transformation of what can be considered an input to capital actually goes well beyond what has been observed in the social media sector to include, for example, the rise of logistics, the new methods of control in the workplace, the emergence of platforms as new structures for profit extraction (for instance, in transportation and tourism), and most generally the reformulation of capitalism’s default business model around the extraction and management of data (Davenport, 2014). 2 What is going on with data, in other words, is much wider than a problem with a limited number of rogue surveillance capitalists who have gone astray, a problem that can be corrected by their reform. There is only one historic precedent for such a shift in the resources available for economic exploitation, and that is the emergence of colonialism in the late 15th and early 16th centuries. 3

Second, rethinking data processes on this longer 500-year time-scale allows us to see their implications for capitalism’s future in a broader way, too. Here we must recall that industrial capitalism itself was only made possible by the profits and socioeconomic reconfigurations that came with historic colonialism.

Third, a colonial framing highlights two central aspects of today’s transformations that would otherwise seem like mere collateral: the subjugation of human beings that is necessary to a resource appropriation on this scale (relations of subjection to external powers were central to historic colonialism), and the grounding of this entire transformation in a general rationality which imposes upon the world a very singular vision of Big Data’s superior claim on knowledge (just as colonisers justified their appropriation on the ground of the West’s superior rationality).

Our argument will consider the long-term historical relations between capitalism and colonialism in the first part of this article, and in the second part offer a discussion—informed by decolonial theory—of Carl Schmitt’s classic interpretation of historic colonialism’s relation to international law. We hope to give more substance to general calls to recognise the fight against “dataism” (Van Dijck, 2014) as “the most urgent political and economic project” of the 21st century (Harari, 2016, p. 459). This article, written from the intersection of social theory, decolonial theory, and critical data studies rather than policy studies, will hopefully be useful to those who wish to develop a more robust starting-point for critical work on data policy.

A decolonial reading of datafication

In this first section, we summarise our arguments for analysing contemporary practices of data extraction and data processing as replicating colonial modes of exploitation (see Couldry and Mejias, 2018; Couldry and Mejias, 2019). This will allow us to provide the starting-point for our policy-related discussion later on.

The public is often told that “data is the new oil” (Economist, 2017). A recent article in the Harvard Business Review goes further and argues not only that “data is the fuel of the new economy, and even more so of the economy to come,” but also that:

Algorithms trained by all these digital traces will be globally transformational. It’s possible that a new world order will emerge from it, along with a new “GDP” – gross data product – that captures an emerging measure of wealth and power of nations (Chakravorti, Bhalla and Chaturvedi, 2019).

While the evocative idea of “new oil” might recall the benefits (for some) of historic colonialism, it obscures precisely the most important level at which data colonialism must be empirically studied. The most fundamental fact about data is that it is not like oil, but rather a social construct operating at a specific moment in history (Gitelman, 2014; Scholz, 2018), driven by much wider economic and social forces. The concept of data colonialism, therefore, highlights the reconfiguration of human life around the maximisation of data collection for profit. Without the resulting data flow, there would be no substance related to human life that could, even potentially, be called “oil”. The claim that data is like oil is thus an attempt to naturalise the outcome of data’s collection, and so make data extraction (and the categories it embeds in daily life) part of a social landscape whose contestability is hidden from view (Bowker and Star, 1999). Since regulating data depends, fundamentally, on opening up that contestability, it is essential to understand how the naturalisation of data collection occurs.

To do this, we draw on critical political economy and decolonial theory to trace continuities from colonialism’s historic appropriation of territories and natural resources to the datafication of everyday life today. While the modes, intensities, scales and contexts of dispossession have changed, the underlying drive of today’s data processes remains the same: to acquire “territory” and resources from which economic value can be extracted. Doing so in no way diverts us from an analysis of capitalism. On the contrary, it places datafication squarely within the centuries-long relations between colonialism and capitalism, whose separation is now widely contested (Williams, 1994; Beckert and Rockman, 2016). Far from being disconnected from capitalism, the current phase of colonialism (data colonialism) is understood as preparing the way for a new, still undefined stage of capitalism, just as historic colonialism paved the way gradually for industrial capitalism. The medium of this long-term transformation is the set of interdependencies and rationalities through which social relations, conducted and organised via processes of data extraction, become a normal part of everyday life.

We therefore use the term “colonialism” not as a metaphor, 4 but to name an actual reality. In this non-metaphorical usage, however, our focus is on colonialism’s longer-term historical function: the dispossession of resources and the normalisation of that dispossession so as to generate a new fuel for capitalism’s global growth. Distinctive to data colonialism are the subjection of human beings to new types of relations configured around the extraction of data, and, even more broadly, the imposition on human life of a new vision of knowledge and rationality tailored to data extraction (the vision of Big Data). Each generates fundamental questions, in turn, about legal values such as freedom and autonomy, and challenges for existing systems of commercial regulation (we return to those challenges in the next section).

Underlying our argument are two forms of analysis: an analysis of the political economy of the data industry, or what we call the social quantification sector; and an analysis of the multimodal forms of exploitation that unfold through our participation in digital platforms and data-processing infrastructures, or what we call data relations. These two terms deserve more explanation.

The social quantification sector can be broken down into various sub-groups, starting with the manufacturers of digital devices and personal assistants: well-known media brands such as Amazon, Apple, Microsoft and Samsung, and less well-known makers of devices operating in the fast-expanding ‘Internet of Things’. Another group in the social quantification sector includes the builders of the computer-based environments and tools by means of which we connect: household names such as Alibaba, Baidu, Facebook, Google, TenCent and WeChat. Yet another group comprises the growing field of data brokers and data processing organisations such as Acxiom, Equifax, and (in China) TalkingData that collect, aggregate, process, repackage, sell and make decisions based on data of all sorts, while also supporting other organisations in their uses of data. In addition, the social quantification sector also includes the vast domain of organisations that increasingly depend for their basic functions on processing data from social life, whether to customise their services (like Netflix and Spotify), to link sellers and buyers (like Airbnb, Uber, and Didi), or to exploit data in areas of government or security, such as Palantir and Axon (formerly Taser). Finally, analytical consideration of the social impact of the social quantification sector needs to take into account the vast areas of economic life where internal data collection has become normalised as corporations’ basic mode of operation, for example in logistics (Cowan, 2014). Corporations such as IBM are key supporters of this wider infrastructure of business data collection (Davenport, 2014), even though they are not associated with either social media platforms or specialised data brokerage.

By data relations we do not mean relations between data, but the new types of human/institutional relations through which data becomes extractable and available for conversion into economic value. When fully established in daily life, data relations will become as naturalised as labour relations, and together comprise a second pillar of the social order on which capitalism is based. 5 This transformation—we propose—goes much further even than the shaping of social relations around the extraction of “surveillance capital” that Zuboff describes. Under data colonialism, human life becomes, as it were, present to capital without obstruction, although this “presence” is based on many levels of technosocial mediation. Data relations give corporations a privileged “window” onto the world of social relations, and a privileged “handle” on the levers of social differentiation. More generally, human life itself, including its relations to technology, becomes a direct input to capital and potentially exploitable for profit. Data relations make the social world readable to and manageable by corporations in ways that allow not just the optimisation of profit, but also new models of social governance, what legal scholars Niva Elkin-Koren and Eldar Haber (2016) call “governance by proxy”.

In this context, digital spaces for social life and economic transactions called “platforms” (Gillespie, 2010; compare Bucher, 2016; Gerlitz and Helmond, 2013) have significance beyond their convenience for individuals and corporations. Platforms become software-constructed spaces that produce the social for capital. Social life is thereby transformed into an open resource for extraction that is somehow “just there” for exploitation. For sure, capitalism has always sought to commodify everything and control all inputs to its production process. But how “everything” is defined at specific historical moments varies. What is unique about this historical moment is that human life is becoming organised through data relations so that it can be a direct input to capital. This transformation depends on many things: shifts in daily habits and conventions, software architectures that shape human life through, as Lessig famously argued, “code” (Lessig, 2001), and explicit legal frameworks that legitimate, sanction and regulate such arrangements. In this article, we focus on the last of these, including the underlying legal rationalities that, as Julie Cohen (2017) argues, work to frame data as owner-less, redefining notions of privacy and property in order to establish a new moral order that justifies the appropriation of data.

To summarise the argument so far: humanity is currently undergoing a large-scale transformation of a social, economic and legal order, based on the massively expanded appropriation by capital of human life itself through the medium of data extraction. The long-term sustainability of this transformation depends, however, on the regulation or harmonising of various factors: the weight of habit and convenience in daily life; various social pressures on consumers, producers and workers towards datafication, which amount to something like a life force (Grewal, 2008); and, crucially, an emerging legal infrastructure. As a result, larger questions arise as to how to regulate this transformation and its emerging institutions. The answers depend on what approach we take to the question of what sort of transformation this is. We have argued, in condensed form, that this transformation can only be fully understood bifocally, that is, through the double lens of capitalism and colonialism. In the second part of the article, we extend this discussion into a brief review of current approaches to regulating personal data processing, and their limitations.

Thinking beyond existing legal approaches to datafication

The building of a new social and economic order based on the extraction of value from human life through data relations is not something that individuals can resist, or even manage, by themselves. It matters little whether I delete an app from my phone or withdraw from a platform. Nor, incidentally, can we expect much from the possibility that some players in data markets might act more ethically than others. Society-wide responses are needed to such society-wide transformations. If—to return once again to Polanyi (2001)—large-scale economic change requires a double regulatory movement (first, the transformation of social relations so as to fit the new economic organisation, and then the emergence of a social counter-movement to make the transformation actually liveable), then the project of socially managing datafication is likely to be long and complex, and legal reform must play some part in that.

We have little interest here in proposed legal reforms that make partial adjustments to how social media platforms manage aspects of their operations (for example, the algorithms that organise personal news feeds). Our concern instead is with the prospects for large-scale regulation of the extraction of economic value from personal data, and what might currently be blocking this regulation (by “personal data” we mean not just data which explicitly relates to an individual person, but any data whose collection and processing can generate decisions relating to that person).

There is no doubt that important legal reforms concerning data practices have been advanced recently. Five years ago, North American market rhetoric went largely uncontested, arguing that the wholesale collection and processing of data, whether about a person (personal data, in a narrow sense) or otherwise, was essential to the development of the global economy. It is easy to find examples of such discourse, for example, from the World Economic Forum or from business consultants (Letouzé, 2012; World Economic Forum, 2011; McKinsey, 2011). But the balance has been disturbed by one particular legislative intervention, the European General Data Protection Regulation (GDPR), which came into effect in May 2018.

The GDPR’s very first sentence announces a normative challenge to market rhetoric about data: “the protection of natural persons in relation to the processing of personal data is a fundamental right” (GDPR, recital 1). Thus, one of the GDPR’s basic ideas is that whether or not she is likely to consent to it, the “data subject” must be informed “of the existence of a [data] processing operation” which affects her, and “its purposes.” Indeed, she should be informed of the “consequences of any data profiling” (Recital 60). This challenged the until-then dominant idea that personal data processing is just what corporations and markets do, and has been going on for so long and on such a scale that it cannot be challenged (an argument Helen Nissenbaum (2017) calls Big Data exceptionalism). Without going into the GDPR in detail, its importance as a symbolic challenge to the ideology of ‘dataism’ (Van Dijck, 2014) cannot be denied. The GDPR is being used as a model for legislative proposals in a number of countries across the world, including Brazil and the UK, and compliance with the GDPR has become a major feature of recent business practice.

While it is still unclear how effective the GDPR’s challenge to data practices from the perspective of human rights such as privacy will be, there is no doubt of the influence its publication has had on the climate of a global debate around data issues. Consider two UN reports from 2014 and 2018, both called “The Right to Privacy in the Digital Age” (UN High Commissioner for Human Rights, 2014, 2018). The 2014 report is almost entirely concerned with state surveillance; when it mentions corporations (paragraphs 42-46), it focuses on whether they should accede to state requests for access to their data. The question of whether corporations themselves should be more responsive to human rights concerns regarding how they collect data—arguably the key issues revealed, if not debated, in the 2013 Snowden revelations—is not even mentioned. By 2018 however, the emphasis had shifted to include a discussion of the growth in corporations’ data collection practices and their “analytic power” (paragraphs 15 and 16). The later report mentions “a growing global consensus on minimum standards that should govern the processing of personal data by state, business enterprises and other private actors” (paragraph 28), and insists that the resulting human rights protection “should also apply to information derived, inferred, and predicted by automated means, to the extent that the information qualifies as personal data” (paragraph 30). In effect, the 2018 UN report encourages states to adopt something like the GDPR. Yet there are still important gaps in its recommendations: at no point does the report challenge corporate data collection as such, or recognise how the continuous collection of data from and about persons might in itself undermine values such as freedom and autonomy, even though the report references the fundamental European law principle that the “individual should have an area of autonomous development, interaction and liberty” (para 5), a point to which we shall return.

These legal principles, if pursued, might have the potential to disrupt datafication. But so far it is not legislation but the work of critical legal scholars which has articulated these principles more fully. Scholars of privacy law have often noted that traditional notions of privacy are inadequate to deal with the vast amount of data which flows without being specifically attached to a particular named person, yet which, in combination with even small amounts of other information related to that person, can lead to their identification. The result is, as Solon Barocas and Helen Nissenbaum put it in the language of American football, “Big Data’s end run around anonymity and consent” (Barocas and Nissenbaum, 2014). In other words, the scale of data processing that generates decisions affecting the algorithmically produced entities or “data doubles” (Haggerty and Ericson, 2000) to which actual individuals are tethered makes old-style privacy regulation by individual consent almost impossible to practice. And yet “consent” is the basic principle on which the GDPR relies.

In response to this problem, Julie Cohen (2013, pp. 1931–1932) has proposed an important meta-principle for regulating data practices, that of “semantic discontinuity”. This is designed to limit the possibility of separate data sets being combined so as to generate inferences of a sort that data subjects did not consent to being made. Recently, Frischmann and Selinger (2018, pp. 275–276) have endorsed this proposal, which radicalises the older principle of “contextual integrity” (Nissenbaum, 2010). But we do not know yet if this proposal has any chance of being translated into law in some form. It runs directly contrary to the purpose of corporate data collection, which is precisely to combine data streams without limit, so as to maximise the algorithmic inferences that can be generated from them. How can semantic discontinuity be made effective as a legal principle when it contradicts the stated purposes of countless corporations who seek access to personal data? Would the injunction of the 2018 UN report that “personal data processing should be necessary and proportionate to a legitimate purpose that should be specified by the processing entity” be sufficient to ground the principle of semantic discontinuity? Presumably not, if a business had a legitimate purpose which depended on semantic continuity, and that purpose was in broad terms disclosed to, and consented to by, a data subject. The same question could be asked of non-commercial organisations which might be protected prima facie by the “public interest exception” written into the GDPR (Article 21(6)). On what ground could a “higher” principle of semantic discontinuity override that exception?

What becomes clear here is that a far-reaching challenge to the expanding rationalities of continuous data collection and value extraction runs against the basic organisation of power in contemporary economies and societies, issues which have not yet been broached by even the most enlightened legislation. This potential conflict between critical legal thinking and capitalism’s investment in datafication was anticipated in a remarkable article two decades ago by Paul Schwartz (1999). Schwartz foresaw that the emerging data collection practices made possible by the internet’s new infrastructure of connection would generate “a new structure of power over individuals” with “significant implications for democracy” (1999, p. 815). Schwartz also predicted that individualist liberal notions of autonomy would prove inadequate to counter this development, because they ignore the “constitutive value” (1999, p. 816) that protecting individuals from regular privacy violations and their consequences has for democratic culture itself. Schwartz’s implicitly relational (and post-liberal: Cohen, 2013) understanding of autonomy/freedom connects with more recent accounts of the social costs of datafication and algorithmic decision-making (Eubanks, 2018; Noble, 2018). But the way forward for building effective opposition to the changes under way requires us to move beyond the domain of contemporary legal theory and introduce a decolonial perspective on what is going on with datafication. We turn to this in the next section.

Schmitt and colonialism’s relation to law

At this juncture our argument finds support in a surprising source, someone who was certainly not an opponent of historic colonialism: the controversial German legal and political theorist Carl Schmitt. Schmitt (2006 [o.p. 1950]) offered the most clear-sighted account of the relation between law and the appropriation of territory and natural resources within historic colonialism, an account which, we suggest, helps in grasping the regulatory implications of today’s data colonialism. 6 In discussing Schmitt as an exemplary case, we will admittedly be abstracting from the centuries-long debates about the possible legal justifications for the domination by some humans of others. Choosing Schmitt however is justified because of the clarity with which he makes explicit the underlying links between law, force and rationality within historic colonialism.

Schmitt analysed law’s relation to historic colonialism, and therefore to the industrial capitalism which colonialism made possible (Schmitt, 2006, p. 4), at a nostalgic moment. Looking back at colonialism, he found it to be an essential underpinning of a eurocentric international legal order which he believed had been shattered by Germany’s defeat in World War II. This context does not, however, diminish the importance of Schmitt’s remarkably direct portrayal of colonialism and its relation to law.

For Schmitt, controversially, the very idea of law (nomos) is based on the seizure of land (2006, p. 42). He interprets the international law of property and nations that dominated the world from the 17th to mid-20th centuries as emerging from the demise of an earlier order, the “medieval spatial order of the respublica Christiana” whose legitimacy was fading by the 16th century. According to Schmitt, what enabled a new international legal order to be built was the discovery of “previously unknown (i.e., by Christian sovereigns) oceans, islands, and territories” (2006, p. 131).

Two things are remarkable about the analysis Schmitt develops. First, he makes no pretence that colonial conquests were legal in a conventional sense; rather he distinguishes two types of land-appropriation, those which proceed in accordance with international law, and those (of which historic colonialism was an example) “which uproot an existing spatial order and establish a new nomos” of property entitlement (2006, p. 82). In this initially law-less, but ultimately lawful move of historic colonialism, “law and order are one . . . they cannot be separated” (2006, p. 81). Order, that is, makes law. Second, Schmitt regards the extra-legal seizure of territory by colonial powers as justified by a higher principle of rationality, or rather a legitimate hierarchy in relation to rationality itself. As he writes (2006, p. 131), “the means of the legal title ‘discovery’ lay in an appeal to the historically higher point of the discoverer vis-à-vis the discovered.” For Schmitt, the conqueror’s “scientific cartographic survey was a true title to a terra incognita,” because it embodied a superior rationality, generating a “completely different type of legal title . . . ‘effective occupation’” (2006, p. 133).

For Schmitt, the history of colonial appropriation represented the legitimate fusion of effective force (order) into law, justified by a claim to higher knowledge or rationality. Here is Schmitt’s fullest statement of the relations between law, force and a certain “modern” reading of rationality: “European discovery of a new world in the 15th and 16th centuries thus did not occur by chance . . . it was an achievement of newly awakened Occidental rationalism . . . The Indians lacked the scientific power of Christian-European rationality. The intellectual advantage was entirely on the European side, so much so that the New World could simply be ‘taken’” (2006, p. 132). This unapologetic argument for colonialism’s rationality offers some interesting parallels with the contemporary justification and rationalisation of Big Data practices, parallels that we can only notice within the bifocal approach to capitalism and colonialism that we are proposing. Within this perspective, we also see more clearly the significance of the failure so far of even the boldest legislation on datafication to challenge its basic practice: the banal, almost universal collection of personal and non-personal data, and, through this, the creation from the flow of human existence of an informational terrain from which extraction for economic value is possible, indeed increasingly seamless. What are the parallels between the legal status of contemporary datafication (understood as a new type of colonial enterprise) and Schmitt’s reading of the legal status of historic colonialism?

First, datafication involves a de facto appropriation of resources, a domain of connectible information that, through processing, can be attached reliably to entities that are proxies for actual individuals (“data doubles”) and thus provide a basis for judgements that effectively discriminate between real individuals. That appropriation depends on the prior collection of data, that is, on the multi-dimensional monitoring of as much of these individuals’ online activity as possible, regardless of the device they are using. Granted, there is a legal debate and potential conflict at present (for example via the GDPR) around the legality of some of the consequences of this appropriation, just as there was early on in relation to the Spanish conquests of the “New” World. But, as we saw, these legal debates tend never to challenge the fundamental fact of continuous monitoring itself, even if it is in tension with established values such as autonomy (for example the “right to full development of the personality” under German constitutional law: Hornung and Schnabel, 2009).

Second, although it is as yet only in the early stages of development, a justificatory ideology of data appropriation is emerging that parallels Schmitt’s version of colonial ideology: the vision that only through the superior calculating power of Big Data and machine learning can a higher state of human knowledge be achieved, thereby justifying corporate access to data that can be extracted from the flow of individuals’ daily lives. The core issue here is the imposition on the whole domain of human life of a very specific version of rationality, one which requires all life to be tracked continuously in the interests, simultaneously, of capital and of a certain version of human knowledge (the vision of Big Data or dataism).

It follows, thirdly—and here we move from parallels to implications—that the more fundamental challenge to processes of datafication to which critical legal scholars such as Cohen and Frischmann are committed requires a challenge to the underlying legitimacy of acquiring data through data relations, which is today a feature of most platforms, apps, and mechanisms for knowledge production and daily organisation (think of the Internet of Things). Cohen’s principle of “semantic discontinuity” is important, but only goes so far as challenging the transferability of data, when it is the very act of collecting data that must above all be challenged.

There are indeed good reasons (which Cohen in her work has noted) for arguing that the continuous collection of data from and about individuals conflicts with the principle of autonomy on which democracies, fundamentally, rely. Continuous surveillance or monitoring by the state is, after all, generally regarded as “chilling” of individual agency (Cohen, 2013, pp. 1911–1912). The same is true of surveillance when it is conducted by private corporations, particularly if those corporations often have both capacity and need to yield up data to the state. What so far has been difficult to assert is the primacy of these concerns against the opposing rationality of the social quantification sector, which relies on its “effective occupation” of human life (to use Schmitt’s chilling phrase) as the starting-point for defending its practices of data collection against interference by the state. What is needed is to reject precisely this act of “effective occupation”. What cuts through all the rationalities which mask the dynamics of datafication is precisely the realisation that the social quantification sector’s “right” to hold what they gather is no more legally justifiable than (and just as legally contentious as) the effective occupation of overseas territory by colonial states once was.

If so, the existence (or not) of “consent” to continuous monitoring is beside the point. What matters are the implications of this occupation for what we call the space of the self, that is, the basic idea of selfhood on which most notions of democracy and even legal authority rely. 7 We are drawing here on a relational notion of freedom which assumes that “individual” freedom can only emerge through a web of social relations (Elias, 1978), but also more specifically on the idea that, underlying all notions of freedom and autonomy (some of which no doubt are today unsatisfactory) and underlying also all culturally relative formulations of personal privacy, is a basic notion of the “space of the self”: that is, “the socially grounded integrity without which we cannot recognize ourselves or others as selves at all” (Couldry and Mejias, 2019, p. 155). This is the space that Hegel captured in his relational definition of freedom as “the freedom to be with oneself in the other” and that Dussel terms the “natural substantivity of the person.” 8

Our approach to reframing legal challenges to datafication is, we acknowledge, expansive. It cuts across the detailed debates of policy and law in particular contexts. But it usefully sidesteps the confusion caused by the anomalous notion of “personal data”. As many critics of traditional notions of privacy have noted, much of the data that makes a difference to how we are treated by corporations is not personal data, because it is not exactly “about” us. Rather, it is relational data, in which patterns emerge across myriad comparisons within much larger data sets, patterns that predict particular outcomes for a data double to which as a real individual each of us is tethered. The protection of “personal data” in a more straightforward sense—data about individuals and data files such as photos that an individual claims to own—is therefore likely only to protect people from part of the harms that can be done to them through data. Our approach challenges the very validity of continuous data collection, regardless of what entities happen to be affected by any one particular decision or practice. It challenges, in other words, the multiple practices which construct the new “territory” of human life from which something like “personal data” emerges as potentially extractable, a territory which is steadily supplanting the space of social interaction and social governance that was taken for granted before datafication through a process that started centuries before the advent of digital data. In other words, it makes this challenge in response to processes of human subjection that only a colonial perspective can fully recognise.

There is one last and crucial respect in which legal and civic challenges to datafication require the frame of colonialism. This regards the underlying rationality of Big Data itself which works as a reference-point for and legitimation of data collection in all its breadth and depth. Underlying all the specific and important issues under discussion about algorithmic injustice lies a deeper injustice that, following decolonial thinker Boaventura de Sousa Santos (2014), we can call “cognitive injustice”. Put simply, this is the assumption that there is only one path to human knowledge and that it lies through the progressive extraction, collection, processing and evaluation of data from the flow of human life, and indeed life more generally. 9 The characteristics of this rationality have been expressed not by an analyst of capitalism or even modernity, but by a decolonial thinker, the Peruvian sociologist Aníbal Quijano, reflecting on the relations between capitalism, modernity and the longer process of not just historic colonialism but coloniality:

Outside the ‘West’, virtually in all known cultures… all systematic production of knowledge is associated with a perspective of totality. But in those cultures, the perspective of totality in knowledge includes the acknowledgement of the heterogeneity of all reality; of the irreducible, contradictory character of the latter; of the legitimacy, i.e., the desirability of the diverse character of the components of all reality — and therefore, of the social. The [better, alternative] idea of social totality, then, not only does not deny, but depends on the historical diversity and heterogeneity of society, of every society. In other words, it not only does not deny, but it requires the idea of an ‘other’ — diverse, different. (Quijano, 2007, p. 177, added emphasis).

Through the quantification of the social, we risk installing a new version of this exclusive notion of rationality, via what José van Dijck (2014) has called “dataism”. Only legal proposals which challenge rationales of data collection in this more fundamental way can hope, effectively, to challenge the direction of data colonialism.

Our approach therefore stands firmly against other recent proposals for individuals to own “their” data, be free to manage access to it, and perhaps even be paid in return for such access (Lanier, 2013; Arrieta-Ibarra et al., 2018; for a recent popular argument in The Economist, see will.i.am (2019)). Such proposals risk legitimating precisely the underlying practices of data collection, and ignoring completely the rationality of appropriation which underlies data colonialism.

Conclusion

Our goal in this article has been to develop the starting-points of a more radical and potentially more comprehensive approach to framing critical legal and policy responses to ongoing processes of datafication. We began by reframing what is currently going on with data not just within the continuing expansion of capitalism, but as a new and epochal renewal of colonialism itself, which, in time, may pave the way for a stage of capitalism whose full outline we cannot yet predict.

By placing datafication within the longer history of colonial appropriations of territory and natural resources on a global scale, we seek to address more effectively the fundamental unease across wide sectors of the population at today’s practices of expanding surveillance via marketing, artificial intelligence and the Internet of Things. Existing legal approaches, and even critical legal theory, fall short of providing an adequate starting-point for wider critique. So too do accounts of capitalism which frame what is going on with data principally in terms of recent developments (surveillance capitalism, platform capitalism, and the like), rather than the longer term relations between colonialism and capitalism.

By contrast, legal approaches which take seriously Carl Schmitt’s reading of the role of historic colonialism in making law through effective force (that is, what becomes an order) offer a warning of the underlying direction of change. Unless we grasp this, policy debate regarding the challenges of datafication is always likely to fall short of the mark.

A postscript: One day after being fined for privacy violations, Google announced that “data is more like sunlight than oil” (Ghosh and Kanter, 2019). In other words, instead of a resource that is being appropriated from someone’s territory, Google would like us to believe that data is a replenishable, inexhaustible, owner-less resource that can be harvested sustainably for the benefit of humanity. This illusion, once again, conveniently bypasses the questions about privacy and protecting the individual that any attempt at “regulation” would normally want to raise. Instead, this “regulation” attempts to establish data colonialism as the status quo. It is time for a more radical grounding of established regulatory discourse that enables it to challenge datafication’s social order. This must involve more than regulatory adjustments to certain aspects of contemporary capitalism. What is required is a fundamental challenge to the direction and rationale of capitalism as a whole in the emerging era of data colonialism.

References

Arrieta-Ibarra, I., Goff, L., Hernandez, D., Lanier, J., & Weyl, G. (2018). Should We Treat Data as Labor? Moving Beyond “Free”. AEA Papers and Proceedings, 108, 38–42. doi:10.1257/pandp.20181003

Barocas, S., & Nissenbaum, H. (2014). Big Data’s End Run Around Anonymity and Consent (pp. 44–75). In J. Lane, V. Stodden, S. Bender, & H. Nissenbaum (Eds.), Privacy, Big Data and the Public Good. New York: Cambridge University Press.

Beckert, S. & Rockman, S. (Eds). (2016). Slavery’s Capitalism. Philadelphia: University of Pennsylvania Press.

Bowker, G. & Leigh Star, S. (1999). Sorting Things Out. Cambridge, MA: The MIT Press.

Bratton, B. (2016). The Stack: On Software and Sovereignty. Cambridge, MA: The MIT Press.

Bucher, T. (2017). The Algorithmic Imaginary: Exploring the Ordinary Affects of Facebook Algorithms. Information, Communication & Society, 20(1), 30–44. doi:10.1080/1369118X.2016.1154086

Chakravorti, B., Bhalla, A., & Chaturvedi, R. S. (2019, January 24). Which Countries are Leading the Data Economy? Harvard Business Review. Retrieved from https://hbr.org/2019/01/which-countries-are-leading-the-data-economy

Cohen, J. (2013). What Privacy Is for. Harvard Law Review,126(7), 1904–1933. Retrieved from https://harvardlawreview.org/2013/05/what-privacy-is-for/

Cohen, J. (2018). The Biopolitical Public Domain: The Legal Construction of the Surveillance Economy. Philosophy & Technology, 31(2), 213–233. doi:10.1007/s13347-017-0258-2

Cohen, J. (2019). Between Truth and Power. Oxford: Oxford University Press.

Couldry, N., & Mejias, U. A. (2018). Data Colonialism: Rethinking Big Data’s Relation to the Contemporary Subject. Television & New Media, 20(4), 336–349. doi:10.1177/1527476418796632

Couldry, N., & Mejias, U. A. (2019). The Costs of Connection: How Data is Colonizing Human Life and Appropriating it for Capitalism. Redwood City, CA: Stanford University Press.

Cowen, D. (2014). The Deadly Life of Logistics. Minneapolis: University of Minnesota Press.

Davenport, T. (2014). Big Data @ Work. Cambridge, MA: Harvard Business Review Press.

Dussel, E. (1985). Philosophy of Liberation. Eugene, OR: Wipf and Stock.

The Economist. (2017, May 6). The World’s Most Valuable Resource Is No Longer Oil, but Data. Retrieved from https://www.economist.com/leaders/2017/05/06/the-worlds-most-valuable-resource-is-no-longer-oil-but-data

Elias, N. (1978). What is Sociology? London: Hutchinson.

Elkin-Koren, N., & Haber, E. (2016). Governance by Proxy: Cyber Challenges to Civil Liberties. Brooklyn Law Review, 82(1), 105–162. Retrieved from https://brooklynworks.brooklaw.edu/blr/vol82/iss1/3/

Eubanks, V. (2018). Automating Inequality. New York: St. Martin’s Press.

Frischmann, B., & Selinger, E. (2018). Reengineering Humanity. Cambridge: Cambridge University Press.

Gerlitz, C., & Helmond, A. (2013). The Like Economy: Social Buttons and the Data-intensive Web. New Media & Society, 15(8), 1348–1365. doi:10.1177/1461444812472322

Gillespie, T. (2010). The Politics of ‘Platforms’. New Media & Society, 12(3), 347–364. doi:10.1177/1461444809342738

Gitelman, L. (Ed). (2013). “Raw Data” is an Oxymoron. Cambridge, MA: The MIT Press.

Ghosh, S., & Kanter, J. (2019, January 22). Google says data is more like sunlight than oil, one day after being fined $57 million over its privacy and consent practices. Business Insider. Retrieved from https://www.businessinsider.com/google-data-is-more-like-sunlight-than-oil-france-gdpr-fine-57-million-2019-1

Grewal, D. (2008). Network Power. New Haven, CT: Yale University Press.

Haggerty, K., & Ericson, R. (2000). The Surveillant Assemblage. British Journal of Sociology, 51(4): 605–622. doi:10.1080/00071310020015280

Hegel, G. W. F. (1991). Elements of the philosophy of right (A. W. Wood, Ed.; H. B. Nisbet, Trans.). Cambridge: Cambridge Univ. Press.

Hildebrandt, M. (2015). Smart Technologies and the End(s) of Law. Cheltenham: Edward Elgar Publishing.

Hornung, G., & Schnabel, C. (2009). Data Protection in Germany I: The Population Census Decision and The Right to Informational Self-determination. Computer Law & Security Review, 25(1): 84–88. doi:10.1016/j.clsr.2008.11.002

Lanier, J. (2013). Who Owns the Future? London: Allen Lane.

Lessig, L. (1999). Code and Other Laws of Cyberspace. New York: Basic Books.

Letouzé, E. (2012). Big Data for Development: Challenges & Opportunities [Report]. New York: UN Global Pulse. Retrieved from http://www.unglobalpulse.org/sites/default/files/BigDataforDevelopment-UNGlobalPulseJune2012.pdf

Mayer-Schönberger, V., & Cukier, K. (2013). Big Data. London: John Murray.

McKinsey (2011). Big Data: The next frontier for innovation, competition, and productivity [Report]. McKinsey Global Institute.

Nissenbaum, H. (2010). Privacy in Context. Stanford, CA: Stanford University Press.

Nissenbaum, H. (2017). Deregulating Collection: Must Privacy Give Way to Use Regulation? doi:10.2139/ssrn.3092282

Noble, S. U. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. New York: NYU Press.

Pippin, R. (2008). Hegel’s Practical Philosophy. Cambridge: Cambridge University Press.

Polanyi, K. (2001). The Great Transformation. Boston: Beacon Press.

Postone, M. (1998). Rethinking Marx (In a Post-Marxist World) (pp. 45-80). In C. Camic (Ed.), Reclaiming the Sociological Classics. Oxford: Wiley-Blackwell.

Quijano, A. (2007). Coloniality and Modernity/Rationality. Cultural Studies 21(2-3): 168-178. doi:10.1080/09502380601164353

Santos, B. de S. (2016). Epistemologies of the South: Justice Against Epistemicide. London: Routledge. doi:10.4324/9781315634876

Schmitt, C. (2006). The Nomos of the Earth. Candor, NY: Telos Press.

Scholz, L. (2018). Big Data is not Big Oil: The Role of Analogy in the Law of New Technologies [Research paper No. 895]. Tallahassee, FL: FSU College of Law. Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3252543

Schwartz, P. (1999). Internet Privacy and the State. Connecticut Law Review, 32, 815–859. Retrieved from https://scholarship.law.berkeley.edu/facpubs/766/

Sen, A. (2002). Rationality and Freedom. Cambridge, MA: Harvard University Press.

Shepherd, T. (2015). Mapped, Measured and Mined: The Social Graph and Colonial Visuality. Social Media + Society, 1(1). doi:10.1177/2056305115578671

Thatcher, J., O’Sullivan, D., & Mahmoudi, D. (2017). Data Colonialism Through Accumulation by Dispossession: New Metaphors for Daily Data. Environment and Planning D: Society and Space, 34(6), 990–1006. doi:10.1177/0263775816633195

UN High Commissioner for Human Rights. (2014). The Right to Privacy in the Digital Age. Retrieved from http://www.justsecurity.org/wp-content/uploads/2014/07/HRC-Right-to-Privacy-Report.pdf

UN High Commissioner for Human Rights. (2018). The Right to Privacy in the Digital Age. Retrieved from https://documents-dds-ny.un.org/doc/UNDOC/GEN/G18/239/58/PDF/G1823958.pdf

Van Dijck, J. (2014). Datafication, Dataism and Dataveillance: Big Data Between Scientific Paradigm and Ideology. Surveillance & Society, 12(2), 197-208. doi:10.24908/ss.v12i2.4776

will.i.am. (2019, January 21). We Need to Own Data as a human right – and be compensated for it. The Economist. Retrieved from https://www.economist.com/open-future/2019/01/21/we-need-to-own-our-data-as-a-human-right-and-be-compensated-for-it

Williams, E. (1994). Capitalism and Slavery. Chapel Hill: University of North Carolina Press.

World Economic Forum. (2011). Personal Data: The Emergence of a New Asset Class. Retrieved from http://www3.weforum.org/docs/WEF_ITTC_PersonalDataNewAsset_Report_2011.pdf.

Zuboff, S. (2015). Big Other: Surveillance Capitalism and the Prospects of an Information Civilization. Journal of Information Technology, 30(1), 75–89. doi:10.1057/jit.2015.5

Zuboff, S. (2019). The Age of Surveillance Capitalism. London: Profile Books.

Footnotes

1. The full context of our argument is provided in Couldry and Mejias (2019). We have developed it since 2016, and first presented it publicly at the Big Data in the Global South network at IAMCR, Cartagena, Colombia, in July 2017 (https://data-activism.net/2017/07/datactive-presents-big-data-from-the-south-in-cartagena-july-15/). For a summary version of our book’s argument, see Couldry and Mejias (2018).

2. We therefore question the boundary between “capitalism” and “surveillance capitalism” (sometimes called “raw surveillance capitalism”) on which Zuboff relies, when she writes: “When a firm collects behavioral data with permission solely as a means to product or service improvement, it is committing capitalism but not surveillance capitalism” (2019, p. 22). But this assumes a world where “permission” is clearly delineated, and the purposes of data use and scope of data collection are neatly delineated too: the purpose of data colonialism is to blur those boundaries in the service of a broader appropriation of human life itself.

3. Interestingly Zuboff notes the colonial precedent at certain points (e.g., Chapter 6), but without either theorising data processes as a new type of colonialism, or explaining the implications of the colonial precedent for her framing of what’s going on with data exclusively in terms of capitalism.

4. Among recent valuable discussions of the colonial in relation to data, Thatcher et al. (2017) see ‘data colonialism’ explicitly as a metaphor, while Cohen (2018) and Shepherd (2015) emphasise neo-colonial continuities in data practices. None proposes, as we do, that data practices constitute literally a new phase of colonialism.

5. This unorthodox extension of Marx’s critical theory of capitalism is inspired by Moishe Postone’s reading of Marx and the importance of abstraction, rather than labour as such, as the fundamental driver of creating a capitalist social order (Postone, 1998). There is no space to discuss this in detail here, but see Couldry and Mejias, 2018; Couldry and Mejias, 2019, chapter 1.

6. For an earlier discussion of Schmitt’s account of law and colonialism in relation to the internet, see Bratton (2016, pp. 19–40).

7. Our larger argument here draws on the philosophy of G. W. F. Hegel and Enrique Dussel: for more detail, see Couldry and Mejias (2019, chapter 5). On the question of legal authority, see Hildebrandt (2015).

8. See Hegel’s Encyclopedia quoted by Pippin (2008, p. 186); Dussel (1985, p. 158).

9. On data extraction from physical nature, see Gabrys (2016).

Zombie contracts, dark patterns of design, and ‘documentisation’


This paper is part of Transnational materialities, a special issue of Internet Policy Review guest-edited by José van Dijck and Bernhard Rieder.

Recently, a particular kind of spotlight has been placed on the popular social media company Facebook and its CEO Mark Zuckerberg. Prompted by numerous leaks, this spotlight - which also fell on a few other technology conglomerates - demanded more transparency about behind-the-scenes activities such as the data collection schemes that lead to micro-targeted advertisements. A New York Times report published in March 2018, for instance, claimed that Facebook inadvertently provided the academic Aleksandr Kogan (University of Cambridge) with access to the data of over 50 million users (a figure later revised to 87 million), including names and other personally identifying information, through a quiz application that Kogan created (Rosenberg et al., 2018). This information was then provided to the firm Cambridge Analytica (CA), which had been hired by the Trump campaign team to profile users with this data for political influence through social media platforms. In the aftermath, including hearings before the U.S. Senate Judiciary and Commerce committees on April 10 and the U.S. House Energy and Commerce Committee on April 11 at which Zuckerberg testified, questions surrounding Facebook’s knowledge of CA’s data collection activities were at issue. After all, the collection went beyond the targeted users of Facebook (i.e., those who used the application that collected the data) to friends of those users and even to the data of non-users (Facebook, Social Media Privacy, and the Use and Abuse of Data, 2018). In particular, the company’s Terms of Service (ToS) agreement was highlighted, as it contained (at the time) some of the data collection policies for users of the platform. About two weeks after the hearings, Facebook itself was asked whether it had read the ToS agreement laid out by CA in 2014, when it first allowed the firm to access its users’ data, to which the company’s chief technology officer Mike Schroepfer answered: “we did not read all of the terms and conditions” (Romm, 2018). For obvious reasons, this became a comedic headline in the days that followed.

Although this event brought ToS agreements to the forefront of public discourse, they were already within the purview of issues related to the broader genre of contracts to which they belong - generically known as ‘standard form contracts’ or, more specifically for ToS, consumer-facing standard form contracts 1. As the ‘duty to read’ by both parties can commonly be invoked as a defense associated with these types of contracts (Calamari, 1974), a familiar adage warns that we should read all of the fine print we sign or else the more powerful entity will most probably take advantage. Yet this ‘fine print’ now exists within the corners and margins of nearly all of our online activities and often contains clauses that govern several important aspects of our lives, including copyright and ownership policies, dispute and jurisdiction information, acceptable use, even labour terms in many contexts (e.g., independent contractors, developers). Their parameters have wide-reaching implications for a giant-sized portion of the general public, yet we are desensitised to their presence to the point of almost total collective ignorance. One commonly cited study found that only about 1 or 2 in 1,000 users access a ToS agreement for at least one second, producing an informed minority 2 of 0.2% that is “orders of magnitude smaller than the required informed minority size in realistic market settings and theoretical examples suggested in the literature” (Bakos et al., 2014, p. 2). Leib and Eigen (2017) presume that the cohort of users under the age of 35 does not actually recognise this new form of contract as a contract at all 3. At the very least, even though zombie contracts may appear as archetypal contracts on the surface, they have “several distinct features that sit in very deep tension” with traditional contract doctrine (p. 82). Anecdotally, over the last decade, a common question when the topic is introduced has been: “Are those actually contracts?”

Strategic, compliant, or even creative efforts to make users aware of their content, then, seem to have failed. In some sense, especially with the ‘manufactured’ consensus seen in recent legal discourse 4, the quest to explore the means to transfer knowledge of these contracts to at least a portion of the population is nearly over. Sometimes the logic follows: who would want to read so many thousands of words as a daily practice? It is neither feasible nor rational to think consumers/users will do so, even if it is in their best interest (Ben-Shahar and Schneider, 2014). It has been claimed they care more about the features of their devices than the content of these agreements anyway 5. One prominent author on the topic, Omri Ben-Shahar, cited a “grand bargain”—a term that was eventually disfavoured after pushback by consumer advocacy groups 6—when he described how new legal descriptions of these contracts 7 embody a trade-off between a service and convenience of use:

On the one hand, the draft endorses rules of assent that are fairly lenient; they do not require too many clicks for terms to be adopted, they do not require too many boxes to pop up when consumers surf on the Internet. Meaningful notices are enough for the terms to be adopted [...] (We also have) fairly strict limitations on how far businesses can go. We have adopted rules that have to do with unconscionability, what counts as unconscionable with deception, and how the promise that was made to the consumers, the representations that were made to the consumers that drove them to enter into the contract, how those can be vindicated and not be frustrated by the fine print. (Malfitano, 2018, par. 20)

Ben-Shahar references forms and processes that have already been proven ineffective, including clickwrap agreements 8 and notification methods (i.e., mandatory disclosure, email, or banners), as desirable solutions even though he knows their limits and failures well — he literally co-authored the book on the topic (Ben-Shahar and Schneider, 2014). Yet recent legal proposals such as ‘the grand bargain’ allow the documentation of this specific type of contract to be specially minimised in the name of convenience and usability for users, heavily favouring the drafting party and leaving users little choice or recourse. The more troubling aspect of Ben-Shahar’s statement is that he characterises the restraint of these features—“too many boxes to pop up” and “too many clicks”—as being for the consumer, which is only accurate if their true purpose (i.e., to notify users or gain their explicit consent) has been abandoned completely. At best, disclosure and consent efforts are read as routine or benign; at worst, their design choices signal to courts a ‘good faith effort’ to notify users, while actually pressuring them toward unfavourable options (Forbruker Rådet, 2018). In fact, a recent study by the Norwegian Consumer Council (NCC) found multiple deceptive representations on the interfaces of Google, Windows 10 and Facebook in terms of privacy notices and choices. This study concluded that “dark patterns of design”, including illusions of control and misleading wording that are meant to “nudge users toward privacy intrusive options” (p. 3), were numerous and rampant on these platforms.

Efforts to make known some of the egregiousness that can stem from SFCCs have been undertaken by consumer advocacy groups, including the Electronic Frontier Foundation (EFF), which wrote a series of white papers in 2013 on ToS agreements 9, and voluntary organisations such as TOSback.org that work toward archiving, tracking, and rating these contracts. A mainstream film, Terms and Conditions May Apply (2013), documented how these contracts contributed to widespread, over-broad data collection activity three years before the Zuckerberg hearings. Most revelatory, a study undertaken in 2016 through a partnership between the Dynamic Coalition on Platform Responsibility (DCPR) and the United Nations’ Internet Governance Forum found that ToS agreements significantly affect human rights in the areas of freedom of speech, privacy, and due process, particularly for marginalised and low-income communities (Venturini et al., 2016). Another study, by Obar and Oeldorf-Hirsch (2018), found that most users do not read ToS and empirically concluded that the “vast majority […] completely missed a variety of potentially dangerous and life-changing clauses” (p. 16). Moreover, Leib and Eigen (2017) report how, rather than reinforcing equity between classes (an early use of standard form contracting), differences in perception of the legal system motivate “elite customers” with more resources to challenge ToS egregiousness only when it threatens them personally, and not for the fairness of the agreement in general or on behalf of less well-off consumers (p. 85).

Warnings about standard form contracts have appeared for at least a century (Kessler, 1943; Leff, 1967; Gilmore, 1974). After the industrial period, ideals of standardised relations that ensured fair transactions as a way of subverting class systems were met with skepticism (Isaacs, 1917). Most recently, Leib and Eigen (2017) predicted “a zombie contract apocalypse” should these contracts continue to permeate every aspect of our digital lives without further regulation. The authors describe how:

zombie contracts [...] don the skin of contract, routinely get taken for contract, and at the same time, live by consuming contract's soul. Yet, it is exceedingly difficult to kill the undead, as any zombie scholarship will tell you. It is hard to kill zombies because they look so much like the real, living thing that has been killed to use as a host. (p.70)

The legal discourse and governance around SFCCs that supports and validates their use relies on certain myths about information presentation and user engagement that ultimately upset some of the traditional incentives to memorialise bargain terms. The desire to document the specifics of a bargain in a contract document generally relies on the fact that, in cases of dispute, it is in the interest of both parties to ensure the terms are as clear as possible to their own understanding. This is the case when two sophisticated parties are negotiating drafts of business-to-business contracts, a process that includes agonising over each word, punctuation, and formatting change (Stark, 2007). However, with standard form contracts, including what Leib and Eigen call “zombie contracts”, it is accepted that the majority of naïve parties do not read or understand the terms, and this responsibility is carelessly deflected onto the notion of an “informed minority” that will somewhere, somehow balance the asymmetry. In other words, even though much of the legal discourse recognises this asymmetry between the parties in the SFCC contracting situation, without an actual ‘informed minority’ the conundrum stands—all of the power to write, present, and validate these agreements stays with the drafter, with very little pushback from consumer-users, who are generally uninformed and do not even realise they are engaging in a contract in the first place. More dangerously, the dynamics of the situation are now being confirmed rhetorically as matters of convenience, economics, and usability, which also favour the more powerful party (in terms of information and resources).

Leib and Eigen (2017) suggest that the best solution would be one that is created and monitored by academics, as they describe how this is the most trusted group for consumers. Specifically, they want to make use of technology to “leverage” academic expertise to “permit organisations to recast zombie contracts as live contracts” (p. 72, emphasis mine). Expanding on solutions from this vantage point, how might one regain some of the ‘liveliness’ of previous or alternate contracting situations (e.g., paper contracts or B2B contracts)? So far, it seems their digital form has allowed them and their drafters more power—I believe we can use the very same source to even out the imbalance. This project suggests that at least part of the issue stems from 1) a lack of consideration of the SFCC document itself (i.e., medium, format, authenticity, reliability, stability, boundaries, and ontology); and 2) the processes that deem it ‘standard’. It begins by describing how the document form of an SFCC could actually be standardised, drawing on studies of documents and at least a century of associated practices and criticism. I argue standardising would allow for a more granular understanding of this ubiquitous genre, providing the potential for future regulation that more discretely articulates the areas of issue within the vast landscape of fine print that we now face.

In order to formulate a revised measure of assessment for the contract document, I make use of the disciplines that specialise in these topics, including document theory, library and information science, diplomatics, standards of records management, textual criticism and bibliography to offer a refreshed perspective on SFCC issues and develop implementable outcomes. Combined with an application of the phases of analysis offered by a document-engineering approach (Glushko and McGrath, 2008) that also utilises concepts and measures of assessment from these other disciplines, a much more comprehensive understanding of SFCCs might be had. This is especially important right now as, from a documentation and standardisation standpoint, I believe new digital forms of SFCCs are actually in their earliest wild, wild west stage and there is much to be done.

Initially, these specifications would serve several purposes, including more access for consumer advocacy groups, which could increase the “informed minority”, as well as more document stability for consumers should they bear the burden of proof (Alliance for Justice et al., 2018). By targeting the form of the document and its documentation requirements (including its presentation, design, and methods of consent and notification), rather than the content of the contract itself, this approach avoids the ‘freedom of contract’ problem, as drafters can continue to write the terms they wish. Thus, this document-engineered approach is a more elegant solution than those currently offered in that it would work toward making the contract more: a) Preservable/scrapable; b) Trackable/stable; c) Authentic/reliable (in a diplomatic sense); and d) Processable (machine-readable), while minimally infringing on the individual, democratic right to contract. Most importantly, however, SFCCs with these qualities, by facilitating an informed minority, could produce a counterbalance to any negative aspects of the asymmetries of knowledge and power. Even if recent empirical studies have claimed that the informed minority does not exist (Bakos et al., 2014), this project suggests that certain revised documentation, design, and explanatory requirements and practices could help fulfill its potential by giving motivated groups (e.g., consumer advocacy organisations) the ability to fully access and analyse these documents with the proper tools.

Ensuring that SFCCs have these qualities includes a multi-step approach that will take place over time. It might consist of the following:

  1. Developing an ontology or type of document standard (i.e., metadata and markup) for SFCCs specifically, perhaps through an international standards body (such as the International Organization for Standardization [ISO]) or other technology organisation that considers the voices of multiple stakeholders. This standard could first be used to tag a data set (i.e., a corpus of current ToS) that would ultimately train a tool to automatically identify pieces of the contract. It could also be invoked in future regulation and compliance efforts.
  2. Requiring a machine-readable format (JSON, CSV, or XML) that would allow for processing by consumer advocates and for a controlled process of creation, classification, description and organisation (see the sketch after this list);
  3. Requiring a standard HTML version of the SFCC (without CAPTCHA) for scraping;
  4. Requiring a PDF for user download (or automatic download onto keychain).

This list is not comprehensive; rather, it shows that a spectrum of relatively simple documentation and standardisation choices could enact a major change in the way SFCCs are presented and understood.
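To make the idea concrete, here is a minimal sketch, in Python, of what a machine-readable SFCC record of the kind imagined in points 1 and 2 above might look like. The schema identifier, field names, and clause tags are hypothetical illustrations only, not part of any existing standard; an actual ontology would have to be developed by a standards body with input from multiple stakeholders.

```python
# A minimal, hypothetical sketch of a machine-readable SFCC record.
# The schema and field names are illustrative, not an existing standard.
import hashlib
import json

def make_sfcc_record(drafter, title, version, effective_date, clauses):
    """Package an SFCC as structured data with basic descriptive metadata."""
    full_text = "\n\n".join(clause["text"] for clause in clauses)
    return {
        "schema": "example-sfcc/0.1",  # hypothetical schema identifier
        "drafter": drafter,
        "title": title,
        "version": version,
        "effective_date": effective_date,
        # A content hash lets advocates verify that the text they archived
        # matches the text the drafter published (stability/authenticity).
        "sha256": hashlib.sha256(full_text.encode("utf-8")).hexdigest(),
        "clauses": clauses,
    }

record = make_sfcc_record(
    drafter="ExampleCorp",
    title="Terms of Service",
    version="2019-03",
    effective_date="2019-03-01",
    clauses=[
        {"id": "1", "type": "dispute_resolution",  # tag from a shared ontology
         "text": "All disputes will be resolved by binding arbitration."},
        {"id": "2", "type": "data_collection",
         "text": "We collect usage data to improve and personalise the service."},
    ],
)
print(json.dumps(record, indent=2))
```

Even a thin layer of structure like this would let advocacy groups archive, compare, and query contracts programmatically rather than scraping free-form prose.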

Ultimately, this study concludes that current governance is not adequate to address the issues of these agreements and suggests three principles, or shifts in concept around this genre of contract: 1) standardisation, not standard practice; 2) documentation, not integration; and 3) explanation, not notification. “Standardisation” in this sense relies on practices in regard to documents, information, and organisation that have been fine-tuned over time and have the accessibility and knowledge of the general public in mind. This is a change from the ambiguous notions of ‘standard practice’ that currently permeate contract discourse and are determined by the drafters and corporate entities, or those with the knowledge and resources to draft in their favour. Rather, the approach proposed by this project would make use of the expertise of a variety of stakeholders, including those who study user-consumers and documents, and would standardise in a way that gives the organisations served the impetus to safeguard these contracts for consumers. This could also increase consumer trust, both in these contracts in particular and in service providers more generally.

“Documentation” in this sense refers to a systematic identification of the various components that make up the contract document, with practices such as description and bibliographical information noted and tracked with the various versions (e.g., with markup, metadata). This type of documentation also implicates a critical design approach to display and presentation. This is a turn from the ‘integrated’ (in a legal sense) contract documents that are afforded ideal textual status as a “fully [...] final and complete expression of all the terms agreed upon between (or among) the parties,” which allows SFCCs to be in the form the drafter prefers (Rowley, 2011, p. 2). Instead, the SFCC could be documented—a rematerialised image for the user-consumer through its understandability, accessibility, and presence as a text in the digital environment. This includes documenting changes and modifications in more practiced and developed implementations than obscure and ineffective notifications, the parameters and criteria for which are currently determined by drafters. It also includes a range of practices that could have benefits for consumers, including formatting the contract in a way that allows for processing and granular documentation, with markup that relays semantic information and has the potential for pinpointing areas that need further regulation. This latter use would benefit users perhaps indirectly, through the knowledge advocacy groups gain by processing the contracts, but could have the greatest impact in terms of ensuring their fairness.
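As a small illustration of what documenting changes in a more practiced form might involve, the following sketch, which uses only the Python standard library, assigns stable identifiers to two versions of a clause and produces a readable record of what changed. The clause texts are invented for the example.

```python
# A minimal sketch of tracking and documenting a change between two
# versions of a ToS clause. The clause texts are invented examples.
import difflib
import hashlib

old = "We may share your data with partners for advertising purposes."
new = ("We may share your data with partners and affiliates "
       "for advertising and research purposes.")

# Stable identifiers for each version support trackability and citation.
old_id = hashlib.sha256(old.encode("utf-8")).hexdigest()[:12]
new_id = hashlib.sha256(new.encode("utf-8")).hexdigest()[:12]
print(f"clause revised: version {old_id} -> version {new_id}")

# A diff is a more developed form of change documentation than an
# obscure "we have updated our terms" notification.
for line in difflib.unified_diff([old], [new], lineterm=""):
    print(line)
```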

“Explanation” here refers to a genuine engagement with how consumers receive and access knowledge, based on alternate approaches, including counterfactual explanations that narratively explain a set of complex information through the relationship of two variables (Wachter et al., 2017), or decision provenance that shows the history of decisions made by artificial intelligence (AI) systems (Singh et al., 2018). These approaches move away from the conventional sense of notification, which has been shown to produce ‘blindness’ to its effect in the average consumer population. Explanatory mechanisms would be made possible by the documentation of SFCCs, which could in turn prompt drafter compliance through the regulation of certain problematic clauses. This would allow regulators to locate only those terms so problematic that most consumers would not agree in the first place (what the legal community calls “unexpected terms” 10), which are supposed to nullify a contract (“Restatement,” 1981).
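
To make the counterfactual idea concrete, here is a minimal, hypothetical sketch (a loose illustration, not Wachter et al.’s implementation). It assumes a toy ‘intrusiveness’ score over tagged clauses, with invented clause labels and weights, and reports which single clause removal would most reduce the score, phrased as a counterfactual statement a consumer could act on.

```python
# A toy counterfactual-style explanation over tagged clauses.
# Clause labels and weights are invented for illustration; a real
# system would learn or negotiate them rather than hard-code them.

CLAUSE_WEIGHTS = {
    "data_sharing_third_party": 5,
    "unilateral_modification": 4,
    "tracking": 3,
    "jurisdiction": 1,
}

def intrusiveness(clauses):
    """Sum the toy weights of the clause types present in a contract."""
    return sum(CLAUSE_WEIGHTS.get(c, 0) for c in clauses)

def counterfactual(clauses):
    """Phrase the single most score-reducing removal as a counterfactual."""
    base = intrusiveness(clauses)
    heaviest = max(clauses, key=lambda c: CLAUSE_WEIGHTS.get(c, 0))
    reduced = intrusiveness([c for c in clauses if c != heaviest])
    return (f"Intrusiveness is {base}. If the '{heaviest}' clause were "
            f"removed, it would drop to {reduced}.")

print(counterfactual(["data_sharing_third_party", "tracking", "jurisdiction"]))
```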

Changing the document form of SFCCs will not itself change the majority of users’ reading practices; most, if not the vast majority, of users will still ignore them, as they should. However, by paying attention to a contract as a document, we might be able to provide access to those interested in protecting consumer rights, including advocacy groups and researchers who seek to better understand their content and implications, and potentially fulfill the promise of an informed minority. Moreover, adapting some of these practices to the digital form of these contract documents, using techniques such as metadata, markup, and classification built from information garnered from natural language processing (NLP) tools, could train machine learning (ML) tools for automatic tagging and could eventually help correlate legal outcomes with language choices, an important interpretive step that is mostly missed by the average user. The automatic tagging of the pieces of the contract would contribute to describing the content of the contracts in a more rigorous fashion than simply requiring compliance with a schema on the part of the business, for instance. Most significantly, it might use the goals of access, literacy, and equity to process and standardise these digital documents in such a way as to provide a healthy counterbalance to the current epistemologies of business (i.e., values of economics and convenience) preserved and perpetuated by the current contracting paradigm.

Standard form consumer contracts (SFCCs)

SFCCs between consumers and businesses are ‘standard form’ so as to increase the efficiency of transactions and save costs, which are presumably passed on to the consumer (Kessler, 1943; Sales, 1953; Burke, 2000). These take the form of ToS agreements, End User License Agreements (EULAs), or more generally fine print, boilerplate, or adhesion contracts. Since one party, the consumer or user, is commonly less powerful in terms of knowledge and resources, these contracts have been called out for being imbalanced in favour of the business entity (“the drafter”) (Kessler, 1943; Patterson, 2010).

Much of the tension in the discourse around regulating SFCCs comes from a negotiation of the ‘freedom of contract’ principle that is seen as a cornerstone of a free market and democracy and underlies much of traditional contract doctrine. This freedom is often put in tension with enforcement mechanisms that regulate and dictate certain egregious aspects of the contract. Freedom of contract thus generally assures private ordering between individuals without intrusive top-down regulations (Micklitz, 2015).

The specific phrase “freedom of contract” might originate from Sir Henry Sumner Maine’s well-known passage from Chapter V of Ancient Law (1861) 11, in which he characterises the evolution from status (an ascribed position) to contract (a voluntary stipulation):

The movement of the progressive societies has been uniform in one respect. Through all its course it has been distinguished by the gradual dissolution of family dependency and the growth of individual obligation in its place. [...] But, whatever its pace, the change has not been subject to reaction or recoil, and apparent retardations will be found to have been occasioned through the absorption of archaic ideas and customs from some entirely foreign source. Nor is it difficult to see what is the tie between man and man which replaces by degrees those forms of reciprocity in rights and duties which have their origin in the Family. It is Contract. (par. 100)

Maine’s description depicts the attitude towards contract during the 19th century, in which family and status obligations are replaced by individual ones, obligations with which one freely chooses to engage. In 1917, Nathan Isaacs produced one of the first detailed discussions of the adhesion doctrine 12 and argued against what he saw as an equating of ‘contract’ with ‘agency’, in an effort to counter the status-to-contract principle as a form of inevitable progress. He describes how the line from status to contract is not simple, binaristic, or linear, as Maine suggests. Instead, he notes that what is actually being discussed is a progression from individualised relations to standardised relations, which, he states, throughout legal history “has room not merely for one single line of progress in one direction or the other, but for a kind of pendulum movement back and forth between periods of standardisation and periods of individualization” 13 (p. 47). At the time Isaacs was writing, standardisation in contracting was not viewed as a regulatory measure, as it sometimes is today; instead, as is evident in Maine’s description, it was seen as a further push toward more accessible and fair transacting for those with less power, in which the freedoms and obligations of individuals are cemented as new ways of freely expressing their fundamental right to transact. In this way, standardisation was seen as an extension of individual liberty that further promoted equality: an act of standardising relations for those with less power so that these relationships could not inappropriately benefit one party over the other. Isaacs presents examples to the contrary, however:

that medieval hardening of relations known as feudalism was also, in its beginnings, a progress from contract to status. And those whose philosophy of history is a belief in the gradual development of liberty through the principle of contract have been forced to regard feudalism as a pause in human progress, an armistice in the war between two opposite ideas, status and contract—at best, a compromise, an exceptional, disturbing element in their whole scheme. Perhaps if we were able to go back to what we accept as standard family relations, we should find their basis, too, in the hardening of individual practices into rules. Perhaps even back of caste there was a progress from the individual non-standardised conduct to the standardised. (p. 40)

Current iterations of digital standard form contracts, then, complement Isaacs’ swinging pendulum model (and disruptions of the progressive model such as feudalism), as their ‘standardisation’ or allowance of ‘standard form’ seems only to benefit the powerful rather than those with less power, further solidifying these discrepancies.

Isaacs noticed early on how contractual relationships were “being displaced by uniform corporations organised under general laws” and how this negates any notion of equity as, in his words, “corporate powers are purely affairs of status” (p. 45). Particularly interesting for this paper, Isaacs describes how an “ignoring of forms is the triumph of the contract principle within the history of contracts,” meaning that the freedom of contract principle has continuously obscured the form of the contract, which would otherwise provide some type of evidence of the intention of the parties (p. 47). Rather than seeking a kind of ‘truth’ of the bargain terms through documentary form, freedom of contract allows “the meeting of free minds”, supported by the “ideal of individual freedom in the negative sense of ‘absence of restraint’ or laissez faire”, to determine truth (p. 47). In other words, unchecked ‘freedom of contract’ can tend to facilitate ambiguous notions of truth rather than a move toward something concrete, based on documentary forms and practices. Legal scholar Friedrich Kessler (1943) noted:

With the decline of the free enterprise system due to the innate trend of competitive capitalism towards monopoly, the meaning of contract has changed radically. [...] Freedom of contract enables enterprisers to legislate by contract and, what is even more important, to legislate in a substantially authoritarian manner without using the appearance of authoritarian forms. Standard contracts in particular could thus become effective instruments in the hands of powerful industrial and commercial overlords enabling them to impose a new feudal order of their own making upon a vast host of vassals (p. 640).

As several authors predicted, standardised contracts have become increasingly affiliated with corporate entities and businesses and have provided them with a source of power to govern beyond the forums of their services alone. The forms these contracts take, the allowances they are given, and the types of authority they signal through their document form constitute one meaningful space where these issues need to be teased out in practice, with an eye towards protecting consumers.

Rhetoric, presentation, and dark patterns of design

Lisa Gitelman (2014) notes how the word “document” descends from the Latin root docere, which means “to teach or show,” suggesting that documents help “define and are mutually defined by the know-show function” (p. 1). In this way, documenting fulfills its purpose and is an “epistemic practice: the kind of knowing that is all wrapped up with showing, and showing wrapped with knowing” (p. 1-2). Gitelman notes how “closely related to the know-show function of documents is the work of no show, since sometimes documents are documents merely by dint of their potential to show: they are flagged and filed away for the future, just in case” (p. 2). Both “know-show” and “no-show” can rely on an “implied self-evidence that is intrinsically rhetorical”, as “persuasion” is implicit in documentation practices (p. 2-3). By making this persuasion, and the motivations of the practitioners who perpetuate it, more explicit, documentation can fulfill the “horizon of accountability” that is a shared expectation of documentation practices. We might view documentation, in this way, as a site for the type of rhetoric that Plato suggests 14, without an analysis of his idealisation of truth. This provides an analytic framework within which practices of rhetoric may be assessed more broadly, with documents and their various measures of standardisation, ontology, authenticity, reliability, and evidentiary qualities as the touchstone.

Standardisation, then, when viewed as the process of producing information in a systematic manner (whether a type of information or information ‘about’ other information), could be seen, through its documentation practice, as an act of persuasion in the way Gitelman describes. While the power to name, classify, and standardise has been acknowledged as persuasive (Bowker and Star, 2000; Russell, 2014), its capacity to work against acts of “no-show”, where the document is hidden behind a hyperlink or some other dislocated form, might be seen as just as powerful. The disciplines rooted in library and information science, with their motivation to provide public access to information, might be a place to look for this effect.

Standardisation has been studied across many disciplines, and as a subject in its own right within library and information science, since at least the late nineteenth century; as a general concept it is much older (Rayward, 1994). Primarily concerned with the management (i.e., selection, collection, arrangement, indexing), retrieval, and dissemination of recorded knowledge, often in pursuit of technical and systemic efficiency, documentation science studies the organisation of documents and the creation of standards and other mechanisms that aid this organisation. The European strand of the documentalists’ movement (with the Belgian lawyer Paul Otlet (1868-1944) and Henri La Fontaine) promoted the idea that for science to become a legitimate discipline, it needed a more efficient knowledge management system, especially in light of the proliferation of records with contemporary technological advances (Rayward, 1994). Otlet’s Traité de Documentation, published in 1934, was the culmination of a lifetime of thinking about problems of improving systems of organised knowledge and was an exploration of early documentation principles (describing what we now tend to call Information Storage and Retrieval). Initially, this study promoted a functional view of what could be considered a document, a term traditionally reserved for “text-like records” in the systemisation of knowledge organisation; increasingly, however, the definition expanded the concept of document to include three-dimensional objects, including “sculpture, museum objects, and live animals” (Buckland, 1997, “Abstract”). By the 1920s, documentation was increasingly seen as a general term incorporating the work of “bibliography, scholarly information services, records management, and archival work” (par. 5). At stake in these discussions are inquiries into what constitutes knowledge and documents, and how these perceptions contribute to their accessibility, completeness, and participation in the transparency and accountability of an institution. If we accept that documentation is always necessarily an act of rhetoric in Gitelman’s show/no-show characterisation, these studies could work in the direction of fulfilling Plato’s task: using rhetoric toward distinguishing, organising, and simplifying for the public the most ‘truthful’ information possible through documentation practices (while perhaps still acknowledging that truth is hard to find).

Thus the disciplines that should be considered are those most practiced in such inquiry and most concerned with documents and their performance:

    1. Diplomatics assesses the authenticity and reliability of an official document, and articulates the various channels and practices by which it gains legitimacy (e.g., Duranti, 1989, 1994);
    2. Bibliography and textual criticism offer standards of editing and related documentation practices (e.g., Greg, 1950; Bowers, 1978; Tanselle, 1978; McGann, 1992);
    3. Records management provides compliance requirements and measures of quality (e.g., International Organization for Standardization [ISO] 15489 standard for digital business documents);
    4. Theories of evidence allow for practical and conceptual specifications in cases of dispute (e.g., Federal Rules of Evidence; Furner, 2004; Yeo, 2007; Anderson and Twining, 1991);
    5. Theories of documents and information help with documenting contracts appropriately by type, with the most useful and ethical classificatory and descriptive elements (e.g., Briet, 2006; Buckland, 1991; Day 2001, 2014).

At first glance, familiarity with certain design conventions or notions of standard practice for SFCC documents might not reveal the ways in which the contract is being de-documentised. The courts might rely on design practices such as ALL CAPS as satisfactory disclosure, for instance, to signal good faith effort, even though we know it decreases the consumer’s ability to read the text (Sullivan, 2012). Privacy policies, for instance, which are often treated as SFCCs (or ‘transactional documents’), have recently been ordered by a trend of statutes, orders, and rules to be separated from other fine print agreements, including ToS. These efforts are meant to provide a solution to egregious data practices by making choices (and ‘transactions’) in regard to data collection more apparent for the consumer, as these policies often dictate the affordances of data collection. A 2011 “Consent Order” set out by the Federal Trade Commission (FTC), for instance, included a directive in which Facebook agreed it would not “misrepresent in any manner, expressly or by implication, the extent to which it maintains the privacy or security of covered information”. The covered activities include the collection and disclosure of user information, as well as the extent to which user data is accessible to third parties such as data brokers. Further, the specific directions on how it should be disclosed detail that the platform should:

A. clearly and prominently disclose to the user, separate and apart from any “privacy policy,” “data use policy,” “statement of rights and responsibilities” page, or other similar document: (1) the categories of nonpublic user information that will be disclosed to such third parties, (2) the identity or specific categories of such third parties, and (3) that such sharing exceeds the restrictions imposed by the privacy setting(s) in effect for the user; and B. obtain the user’s affirmative express consent. (Section II.A). (United States of America Federal Trade Commission, 2011)

These restrictions refer to how the information in the ToS agreement is presented to users, documented in a way that seems fairer and more noticeable. Implicit in this presumption is the notion that returning to a document-form, rather than a piecemeal, dislocated fashion consisting of some of the information in various places (e.g., individual controls, FAQs, or as a clause or link within another agreement), might benefit users. If they are presented with a form they recognise, it is more likely they will understand they are engaging in a contract in the first place.

During the hearing on 11 April, Congressman Gene Green (Democrat-Texas) questioned Zuckerberg about the EU’s GDPR rules, which would go into effect on 25 May 2018, describing how they “require that the company's request for user consent be requested in a clear and concise way, using language that is understandable, and clearly distinguishable from other pieces of information including terms and conditions” (Facebook: Transparency and Use of Consumer Data, p. 52). Green was referring to the GDPR (specifically Article 7), adopted partly in response to the data practices of Facebook and similar companies. Zuckerberg answered this query with the deflection that they “are going to put [...] a tool that walks people through the settings and gives people the choices and asks them to make decisions on how they want their settings set [...] at the top of everyone's app when they sign in” (p. 53).

There are two issues with Zuckerberg’s statement. First, Facebook’s offer of more granular controls as alternatives to the legalese of ToS could work toward further confirming those ineffective consent ‘tools’ as signals of ‘genuine effort’ for courts (see Hillman, 2006). Second, this is especially dangerous for consumers, as these alternate controls have actually been found to be associated with deceptive design practices, including “hidden privacy defaults” (p. 18-9), cumbersome or illusory privacy options (p. 31-4), and “positive and negative wording” that frames certain options as convenient 15 (p. 22-5); in other words, reward-punishment systems that are designed to favour and elicit consent for privacy-intrusiveness (Forbruker Rådet, 2018, p. 25-27). This study revealed how the design of these aspects of presentation all contribute to what is considered the contract document: the memorialisation of the transaction or bargain through documentation of the terms and consent to those terms. In its current form, the privacy policy does not project that it is a transaction or contract at all; instead, it looks like clauses, statements, notifications, FAQs, and individual ‘controls’. If a privacy policy is considered to set out the transaction details of the deal a user makes with a service (i.e., an exchange of their data for the service), a perception that is supported by its allowance as an SFCC that would stand up in court (and by its reliance on codes like UETA for consent, which are intended for sales of goods, not just services), then its presentation should be forced to come off as a reliable and stable contract document. And if it is allowed that users do not read or understand most of these policies due to their standard form, then their form should actually be standardised by reliable practices and analysis.

Documentation and standardisation solutions

This section presents a potential method and set of solutions for SFCCs, which makes use of a document-engineering framework that views the various components of a SFCC as a holistic document to which a user might be bound. Although other types of standardising and automating processes 16 have been suggested (Wilson et al., 2016; Sathyendra et al., 2017), this perspective is novel in that it suggests that the contract should be treated as a document, a record, and a piece of evidence in the most standardised, rigorous sense. This would work against conventional practices of documentation that are developed by drafters and reaffirmed by courts, effectively creating a private conversation that leaves out users and their advocates (Horton, 2009).

Document-engineering is an approach proposed by Robert J. Glushko and Tim McGrath and outlined in their 2008 book, which synthesises “complementary ideas from separate disciplines”, including information and systems analysis, electronic publishing, business process analysis and business informatics, and user-centered design (p. 27). The document-engineering approach uses both ‘document analysis’, which analyses text, and ‘task analysis’, which analyses data and objects (pp. 29-30). It seeks to provide a spectrum of solutions addressing document and process specifications that, I argue, could lead to a better comprehension of SFCCs. This potentially includes a set of metadata, an XML schema or ontology that recognises various common components and assemblies of components of these documents, and a metamodel of a type of interpretation protocol for analysis.

Although it might simply be considered an early treatment of the construction of a relational database, Glushko and McGrath’s angle is unique in considering the document type and form. For instance, they provide a description of the spectrum of document types to explain the ambiguity between them; they note how these lines are often blurry but, similar to a colour spectrum, we can recognise the difference between the colours red and blue (p. 10). The opposite ends of their spectrum are narrative documents and transactional documents, with the latter being involved in “document exchange” and thus more apt to benefit from a document-engineering approach. SFCCs are unique in a contractual sense in that there is no meaningful exchange; rather, they are presented ‘narratively’, so they exist somewhere in the middle and would also benefit from further distinction along this spectrum.

The authors propose the following phases of the document-engineering process (pp. 33-35):

1. Analysing the context of use: uses “business and task analysis techniques [to] establish the context of the document-engineering effort by identifying the requirements and rules that must be satisfied to provide an acceptable solution.”

2. Analysing business process/apply patterns: “appl[ies] business process analysis to identify the requirements for the document exchange patterns needed to carry out the desired processes, collaborations, and transactions in the context of use”; identifies documents that are needed, but “only generally as the payload of the transactions.”

3. Document analysis: “involves identifying a representative set of documents or information sources (including people) and analys[ing] them to harvest all the meaningful information components and business rules”; identifies document needs beyond “payload” (see phase 2).

4. Component assembly: a “document component model” is developed that “represents structures and their associations and content that define the common rules for possible contexts of use.”

5. Document assembly: uses the “document component model to create document assembly models for each type of document required”; moves from analysing “tasks” to designing “new document models”; reuses “common or standard patterns to make the documents more general and robust.”

6. Implementation: the conceptual models are encoded using “a suitable language to support their physical implementation.”

The authors describe the ‘document component model’ as “a conceptual model that encompasses all the information components for any documents required by the context of use” (p. 354). This conceptual model yields “specifications for interfaces, for generating code, or configuring an application that creates or exchanges new documents”, created bottom-up and in a rigorous manner; these models are then used to “implement [...] solution[s] in an automated or semi-automated manner [...] to bridge the gap between knowing what to do and actually doing it” (p. 354). Put simply, the documents in question are first analysed contextually and then individually to identify their various components; patterns across these components are then recognised and assembled into a document hierarchy that describes a single instance of a set of components for a type of document.

1. Analysing the context of use: For SFCCs, this phase will outline what the ideal solution would accomplish: the information ideally communicated or explained, processed, extracted, understood, and preserved, based on the SFCC situation. This might look like identifying the clauses that need supplemental information, the information needed for evidentiary reasons, or the specifications for the automatic identification of clause types. The questions being asked in this first phase are: What would be the ideal document outcome needed to encourage a voice for consumers? What is the most important information to communicate? What are the best ways to communicate this information?

It will consider the discourse, governance, and issues around SFCCs outlined in this paper, as well as ontological and epistemological conversations about what it means to be a document or record in an information system or on a digital interface. This includes recognising the need to retain the right to freedom of contract, as well as acknowledging that certain social and political paradigms might solidify power imbalances amongst the parties to this type of contract, which warrants sacrificing this freedom to some extent.

2. Analysing business process/apply patterns: This phase is the one most dissimilar from the original document-engineering process. Rather than looking at ‘business processes’, this phase looks slightly tangentially at how legal processes and the associated discourse specify the “ultimate payload document”. The questions being answered in this phase are: What is necessary for drafters to comply with basic SFCC requirements? How is this type of presentation afforded by legal discourse and what is it lacking? How have document standards been used previously to regulate other types of contracts?

This phase will be informed by theories of contract, records, and documents, but also by using foresight to suggest how SFCCs might be regulated once in a document-engineered form. These predictions might be gained by analysing similar situations in which contracts have been standardised, and other regulations that have had success regulating by document form. The most important outcome of this phase will be to distinguish between the types of compliance efforts that would be required of businesses (e.g., format or schema requirements) and the work that would be done by other entities such as consumer advocacy groups. These two efforts would most probably be iterative and inform each other, but a clear distinction is necessary to set out the types of regulation that might be needed.

3. Document analysis: For SFCCs, this phase considers a selection of contracts to be analysed with various document analysis tools by researchers and consumer advocacy groups. While this also might include any member of the public who wishes to analyse these documents, the most likely interested parties would be consumer advocacy groups with a vested interest in understanding these contracts. The tools used to analyse SFCCs might include text analysis tools offering topic modeling or clustering to show language data (e.g., word frequency, word proximity, common topics). The information garnered from these analyses shows some of the patterns that occur amongst these documents, which is important for building the necessary schematics from the bottom up (rather than imposing a schema onto the genre in a top-down fashion). Moreover, this process should be iteratively revised (along with phases 4-6) in order to continually reflect how these contracts are written and implemented in practice. This preserves the ability for drafters to exert their right of freedom to contract to some extent and keeps the standard reflective of actual practice. Such a method might include the NEH-funded text mining and analysis tool Lexos 17, built by computer scientists from Wheaton College, Massachusetts, and the medieval scholar Dr. Scott Kleinman (Cal. State University, Northridge), which is described as being designed with a workflow that helps a researcher to be “mindful of the many decisions made in [their] experimental methods” (par. 1).
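
As an illustration of the bottom-up analysis this phase imagines, the following minimal sketch clusters a handful of invented clause texts using TF-IDF features and k-means; a real analysis would run over a scraped corpus and use richer tooling (such as Lexos).

```python
# A minimal sketch of phase 3: cluster clause texts by vocabulary.
# The clauses are invented stand-ins for a scraped ToS corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

clauses = [
    "We may share your personal data with third-party partners.",
    "We collect usage data to improve and personalise the service.",
    "Any dispute shall be resolved by binding arbitration.",
    "You waive your right to participate in a class action.",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(clauses)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

# Clauses with similar vocabulary (data use vs. dispute resolution)
# should end up sharing a cluster label.
for label, clause in zip(labels, clauses):
    print(label, clause)
```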

4. Component assembly: As document component models strive to define all the necessary components so as to maximise reuse and minimise redundancy for each individual document, this phase will strive to identify the components of the SFCC document from the analysis that took place in phase 3. These components might include structural components, content components, and associative components (p. 34). Different types of documents or, in this case, contracts, might make use of the same pattern of components, and these patterns should be identified and reused. For instance, related SFCCs (e.g., privacy policies, copyright policies) might share some of the same components, including “Data Use”, “Tracking”, “Jurisdiction”, “Legal Notice”, or others.

This process, since it is an act of ‘naming’ and thus of exerting a type of bibliographic control over the document and those affected by it, should be cognisant of the critical lens applied to information science work. This includes recognising that while organising is essentially “bringing all the same information together”, that information is often standardised habitually, which risks sacrificing complexity in the name of simplicity and economy, a common issue in the creation of information organisation systems (Svenonius, 2009, p. 80). Additionally, naming practices can become habitual or seemingly benign and can mask the politics, strategy, and implications behind the labeling decisions (Bowker and Star, 2000). These are relevant concerns for SFCCs, although it might be argued that these contracts are already being classified according to the wishes of the drafters, and the goal of this project is to use information organisation to work against these current manifestations.
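
A minimal sketch of what a marked-up fragment built from such a component model might look like follows; the element and attribute names, and the component labels, are hypothetical stand-ins for whatever vocabulary the expert analysis in phase 3 would actually produce.

```python
# A sketch of a contract fragment marked up with a hypothetical
# component vocabulary, parsed with the standard library.
import xml.etree.ElementTree as ET

sample = """
<contract type="terms-of-service" version="2019-03-01">
  <component kind="DataUse">We collect account and usage data.</component>
  <component kind="Tracking">We use cookies and similar technologies.</component>
  <component kind="Jurisdiction">These terms are governed by the laws of the forum state.</component>
</contract>
"""

root = ET.fromstring(sample)
for comp in root.iter("component"):
    # Each component carries a machine-readable label alongside its text.
    print(comp.get("kind"), "->", comp.text.strip())
```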

5. Document assembly: Once the document components are identified, this phase would consider the relationships between the components to figure out the best possible configuration of individual contract documents. This ideal schema would strive to “define [a] document-specific view of the more complex document component model” (p. 463). In Glushko and McGrath’s conception, the “document component model [the outcome of phase 4] [is] the roadmap of a city that depicts the entire network of roads. A particular document assembly model [the outcome of phase 5] describes a specific route through that network” (p. 464).

This phase is informed by the literature in information studies, which argues that semantics and naming are never exact (Svenonius, 2009). However, as the document-engineering approach strives to produce document exchanges that “require unambiguous clarity in semantic interpretation”, this project also strives to reduce ambiguity as much as possible, even if the documents are not being ‘exchanged’ in the same sense (p. 463). It would also make use of previous work done in this regard, including the list of Topics and Cases identified by TOS;DR, the results of the language processing tools, and previous XML schemas such as the one created by the nonprofit OASIS in 2007 18.
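
To illustrate the distinction between the two models, the following minimal sketch encodes an assembly model as a listing of required and optional components per contract type, and checks a tagged instance against it; the contract types and component names are hypothetical.

```python
# A sketch of document assembly models as "routes" through a component
# model: each contract type requires a particular subset of components.
# Types and component names are invented for illustration.

ASSEMBLIES = {
    "terms-of-service": {"required": {"DataUse", "Jurisdiction"},
                         "optional": {"Tracking", "LegalNotice"}},
    "privacy-policy":   {"required": {"DataUse", "Tracking"},
                         "optional": {"Jurisdiction"}},
}

def missing_components(contract_type, components):
    """Return the required components absent from a tagged instance."""
    return ASSEMBLIES[contract_type]["required"] - set(components)

print(missing_components("privacy-policy", ["DataUse", "Jurisdiction"]))
# -> {'Tracking'}: this instance does not satisfy its assembly model.
```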

6. Implementation: First, a corpus of current and past SFCCs would be analysed to find topics and relationships, establish definitions, and build a standard (metadata, markup, and ontology). Then, a corpus of ToS documents (such as that provided by TOSBack.org) would be marked up with this new standard by a group of experts on these contracts and then used to train a machine learning tool to automatically tag a document. ToS agreements organised in this fashion would make a difference in terms of the information extracted for consumer advocacy groups, policymakers, and consumers. Lastly, a holistic analysis should be conducted of the usefulness of the possible regulatory activity that could stem from this document-engineered SFCC (i.e., what a tag would implicate compliance-wise). One particularly important study in this phase, which would result from the analysis in the other phases, would be a study of the reliability and authenticity of an SFCC from the perspective of diplomatics (Duranti, 1989, 1994). Questions of this nature would be: does the contract perform according to an understanding of a conventional contract form (is it a ‘reliable’ contract)? Does it inform consumers of the nature of its creation or changes (is it an ‘authentic’ contract)? These queries would steer the description process toward labelling for communication, literacy, and access, rather than just for simplification, ease of use, or efficiency.
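
As a minimal sketch of the training step described here, and under the assumption of a tiny invented training set, expert-labelled clauses can train a classifier that then suggests component tags for unseen clauses; a real effort would use a large expert-annotated corpus (e.g., TOSBack snapshots) and a more careful evaluation.

```python
# A sketch of phase 6's tagging step: train on expert-labelled clauses,
# then suggest tags for new text. Labels and examples are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_clauses = [
    "We may share your data with advertising partners.",
    "Cookies help us track how you use the service.",
    "Disputes will be settled by binding arbitration.",
    "We may modify these terms at any time without notice.",
]
train_labels = ["DataUse", "Tracking", "Arbitration", "UnilateralModification"]

tagger = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
tagger.fit(train_clauses, train_labels)

# Suggest a tag for an unseen clause; an expert would review the output.
print(tagger.predict(["Your information may be disclosed to third parties."]))
```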

Conclusions and principles

Ultimately, this project suggests three principles, which are each a shift in concept around an issue with SFCCs and the way they are discussed in legal contract discourse. Along the way, I offer potential methods (e.g., document-engineering) and identify potentially novel solutions (e.g., new types of explanation) that would aid in these shifts. Additionally, the solution offered in this paper suggests that the contract should be treated as a document, a record, and as a piece of evidence in the most standardised, rigorous sense, not simply as a step in registration processes or as a series of displaced privacy controls. However, the shifts in the concepts themselves are most important and they could be achieved by various means, not all of which are listed here.

Standardisation, not standard practice

Legal discourse around SFCCs allows for presumptions of knowledge based on ‘standard practice’ and ‘unexpected terms’, meaning there is no preemptive mechanism in place to standardise the information in the contract; it is left up to the drafters to decide what to put into the contract and how to present the information. Notions of “value judgments” that determine what is meant by ‘standard practice’, or the determination of “oppressive clauses”, however, rely on presumed trajectories of the effects of the clauses and of the literacy of consumer-users, in terms of how well drafters can predict these trajectories when forming the agreement (Murray, 1982; Garamello, 2015). A recent trend of pairing economic and legal theory has prompted some SFCC scholars to argue that, since it is accepted and not rational for consumers to read the terms, predicting bias or reasoning on the part of the consumer might produce even more ambiguous results that weigh down the autonomy of the contracting process (Ayres and Schwartz, 2014).

Although this line of thinking is often an exercise in the freedom of contract principle, other legal scholars have noted that it prompts a trend of “rampant drafting isomorphism” wherein the drafters copy and paste any seemingly relevant clause from other similar agreements 19. Thus, it does not seem to be the case that “efficiency is a focal point at the drafting stage”, and it is therefore unlikely that the resulting contracts can be described as “efficiency maximizing machines.” In this view, everything is included in these contracts as an “exercise in risk aversion” and as a way to “keep them at the same cost level as their competitors” (p. 83). In other words, the “race to the bottom” has already occurred, so no future cost-savings can be expected to benefit the consumer (p. 83).

Creating document assemblies through a process of document-engineering, for instance, would help standardise the SFCC document, as it would provide a controlled process of creation, help identify the genre and type of contract, and allow for complementary nuanced and holistic assessments of these contracts. Common components and assemblies should be identified by professionals who are experts on SFCCs. An example from another genre of contracts is the American Institute of Architects’ (AIA) Contract Document System, which provides type and version numbers for a wide variety of contract documents put together by 35 industry professionals from various fields, such as construction, design, insurance, and law 20. Other models include those from certain industries that already have their own organisations for standardising contracts, such as the Insurance Service Organization that is well regarded in the legal insurance world, which also registers its contracts numerically 21 and provides economic statistics that verify the usefulness of these contracts.

Documentation, not integration

US laws that came out of the Uniform Commercial Code (U.C.C.), such as the Uniform Electronic Transaction Act (UETA) and the E-Sign laws, were intended to streamline the process of digital transactions and to harmonise some of the discrepancies of transacting across state borders; however, they might have subverted this debate and exacerbated the issues in some of the arguments for the unconscionability of SFCCs. By allowing commercial interests not to have to keep paper copies of their electronic documents as evidence of transactions, the UETA effectively gave legally binding status to electronic documents and signatures without requiring a paper component (Section 7 (c)). The E-Sign laws broadened the notions of agreement and awareness even further by claiming “the mere fact of use, or of behavior consistent with acceptance” is “sufficient to evidence that party’s willingness.” Regardless of the explicitness of the consent mechanism, other aspects of SFCCs can supersede any understanding of the contract: egregious unilateral modification clauses, for instance, make the other promises in the contract “completely illusory, as this term essentially asserts that the online service provider will only be bound to the terms in the ToS for as long as the online service provider decides not to change those terms” (Preston and McCann, 2012, p. 23). In other words, for the user, the concept of the document as a stable entity that could be potentially understood is disrupted by the mere fact that the service provider could change the document at any time without their knowledge of this change. As Preston and McCann (2012) ask: “If the service provider can change the contract at will, why bother to call it a contract at all?” (p. 25)

The concept of unilateral modification in the context of SFCCs allows for the continual modification of terms. As it stands right now in the US, and with new confirmations of contract doctrine such as the Draft Restatement of Consumer Contracts, unilateral modification clauses and practices are mostly allowed; some jurisdictions require notification of changes, but very little other documentation is required (Horton, 2009). Compared with other legal systems such as the EU’s, where this type of editing is forbidden entirely, the lack of attention to the continual ‘instability’ of these texts seems problematic; moreover, markers of the appearance of stability create the illusion that the texts are either stable or immaterial (nonexistent), and thus these contracts and their drafters can have free rein to include any terms at will without an acknowledgement of the change, or else with the assurance that readers will ignore any notification efforts. Once this instability is recognised, however, certain bibliographic practices, such as archiving, editing, and documenting using metadata schematics, might, if implemented carefully, produce a more useful record for an adherent unfamiliar with its content.

Both the terms used to articulate the concept of a stable text and the terms used to critically revise that concept as perpetually unstable might similarly provide a lens through which to describe the state of SFCCs. Descriptions of textual edits, such as those of the Greg-Bowers-Tanselle method, could be adapted to articulate certain egregious displays of authority on the part of drafters; continual modifications, rather than being romanticised (or ignored) as authorial intention, could be viewed as dangerously persistent given the validity afforded them by court opinion. Ultimately, viewing modifications as ‘edits’ in the textual sense broadens the notion of modification and allows for a more nuanced engagement with any changes to SFCCs. Rather than invisible behind-the-scenes changes with dull or annoying notification practices, it might be imagined that textual and bibliographic theories could offer a new vocabulary that revises the understanding of this process for adherents, including delineating between vertical and horizontal revisions 22, for instance, and/or making determinations of ‘ideal texts’ (Greg, 1950) that would make markers of authoritative or intentional power more explicit.

Explanation, not notification

One thread in zombie contract scholarship (Grether et al., 1986; Ayres and Schwartz, 2014) claims that instead of regulating for this ‘market-imperfection’ (i.e., asymmetric knowledge 23), evidence-based disclosure methods would be more helpful. In fact, mandatory disclosure remedies, including those that specify how disclosure should occur (e.g., the UETA’s ‘posting rule’) or how it should be written (e.g., notions of transparency, simplification, plain language rules), are currently the primary method used to remedy these agreements. For instance, the ‘posting rule’ specifies the timing and method of disclosures 24. Some (Hillman, 2006) have responded to these claims by noting how disclosures might exacerbate the issues they try to solve by seeming to satisfy notification requirements when, actually, they have the opposite effect for adherents and add to the problem of “information overload”. Other scholars (Marotta-Wurgler, 2011; Bakos et al., 2014; Ben-Shahar and Schneider, 2014) are further along the spectrum than the ‘disclosureites’ and have responded with studies that they believe prove “entirely” that disclosure methods do not work. These studies rely on evidence that, they argue, proves users would not engage with or try to understand the information even if it were made transparent by simple and clear presentation.

This shift in concept would work toward the goal of explanation, rather than conventions of notification, because it would move beyond the banners or emails to which consumers have become accustomed and strive to find the points of the contract that need further information to be understood. The fulfillment of this principle might upset ill-conceived notions of consumer engagement, as it invites us to consider that understanding the contract, seeing it as a ‘material’ object with consequences, could be a beneficial goal of contracts, perhaps especially of SFCCs. In practice, this might mean that “meaningful disclosure”, for instance, works toward disrupting familiar forms such as the annoying cookie notification, and that assent means understanding the contract within the context of other information. Drucker (2013) claims that “more attention to acts of producing and less emphasis on product” helps promote “the creation of an interface that is meant to expose and support the activity of interpretations, rather than to display finished forms,” which might be “the antidote to the familiarity that blinds us” (par. 42).

Ultimately, I argue that the tenets of contract doctrine that have been refined and studied over many centuries should not be abandoned in favour of new types of contracts; yet the new forms of contracting must be confronted in relation to actual practices of standardisation and documentation. Moringiello and Reynolds (2014) claim that traditional contract law is sufficient to handle new forms of contracting such as digital SFCCs. In one sense, it may seem naïve or even neglectful to assume that contract doctrine must not change in order to accommodate new iterations of zombie contracts that have proven detrimental effects for consumers. In another sense, however, if the law changes in such a way that they are accommodated, as proposed in this year’s draft of the Restatement of Consumer Contracts, some of the issues with SFCCs might be codified further and exacerbated in the future.

In 1978, legal scholar Ronald C. Griffin wrote: “We are faced with an historic choice in contracts. We can lump together standard forms and classic contracts, or we can treat the former differently” (p. 20). In the decades since, it seems standardised contracts have been “lumped together”, not only with other types of contracts, but also with new technological forms of these documents. Contract law changed very little from the First Restatement of Contracts in 1932 to the early 2000s, owing to the absence of “disruptive” technological developments in this field during those years (Moringiello and Reynolds, 2014). Even at that early stage in the late 1970s, Griffin understood that “the rules of the quiet past are simply too cumbersome to deal with the complexities of a stormy contract future” (p. 21). We have now reached that future, and it is indeed stormy and full of zombies. In order to prevent continual deflections (and apologies) of some of these issues by CEOs such as Zuckerberg at his hearings, a more nuanced and rigorous understanding of SFCCs should be undertaken by a variety of stakeholders. In other words, the argument of this paper boils down to the simple statement that standard form contracts, especially those that are consumer-facing, should actually be standardised, which requires that they be viewed as documents and held to the specific measures of assessment and practices associated with that form. Only then might we be able to change the rhetoric and presentation of these contracts to fulfill the show function (in Plato’s view) and work toward an actual transparency of the workings of current technical service platforms.

References

Advocates for Basic Legal Equality, Inc., Allied Progress, Americans for Financial Reform, Arkansans Against Abusive Payday Lending, Arkansas Community Institute, Berkeley Law Consumer Advocacy & Protection Society, … Woodstock Institute. (2018, October 12). Reject Council Draft No. 5 of the Restatement of Consumer Contracts (Sept. 19, 2018). Retrieved from National Consumer Law Center website: https://www.nclc.org/images/pdf/udap/letter-reject-council-draft-no.5-oct2018.pdf

Alliance for Justice, Allied Progress, Arkansans Against Abusive Payday Lending, Berkeley Law Consumer Advocacy & Protection Society, Center for Responsible Lending, Consumer Action, … Woodstock Institute. (2018, January 10). Re: Council Draft No. 4 of the Restatement of Consumer Contracts. Retrieved from National Consumer Law Center website: https://www.nclc.org/images/pdf/udap/26-ali-comments-council-draft-4.pdf

Anderson, T., & Twining, W. L. (1991). Analysis of Evidence: How to Do Things with Facts Based on Wigmore’s Science of Judicial Proof. Evanston, IL: Northwestern University Press.

Ayres, I. & Schwartz, A. (2014). The No-Reading Problem in Consumer Contract Law. Stanford Law Review, 66, 545–610. Retrieved from https://www.stanfordlawreview.org/print/article/the-no-reading-problem-in-consumer-contract-law/

Bakos, Y., Marotta-Wurgler, F., & Trossen, D. R. (2014). Does Anyone Read the Fine Print? Consumer Attention to Standard Form Contracts [Working Paper No. 195]. New York: New York University.

Ben-Shahar, O., & Schneider, C. (2014). More Than You Wanted to Know: The Failure of Mandated Disclosure. Princeton: Princeton University Press.

Bowers, F. (1978). Greg’s ‘Rationale of Copy-Text’ Revisited. Studies in Bibliography, 31, 90-161. Retrieved from https://www.jstor.org/stable/40371676

Bowker, G., & Star, L. (2000). Sorting Things Out: Classification and its Consequences. Cambridge, MA: The MIT Press.

Briet, S. (2006). What is Documentation? English Translation of the Classic French Text (R. E. Day, L. Martinet & H. G. B. Angehelescu, Eds. & Trans.). Lanham, MD: The Scarecrow Press, Inc.

Buckland, M. K. (1991). Information as thing. Journal of the American Society of Information Science, 42(5), 351–360. doi:10.1002/(SICI)1097-4571(199106)42:5<351::AID-ASI5>3.0.CO;2-3 Available at http://people.ischool.berkeley.edu/~buckland/thing.html

Buckland, M. K. (1997). What is a Document? Journal of the American Society of Information Science, 48(9), 804–809. doi:10.1002/(SICI)1097-4571(199709)48:9<804::AID-ASI5>3.0.CO;2-V Available at http://people.ischool.berkeley.edu/~buckland/whatdoc.html

Burke, J. J. A. (2000). Contract as Commodity: A Nonfiction Approach. Seton Hall Legislative Journal, 24, 285–317.

Calamari, J. D. (1974). Duty to Read: A Changing Concept. Fordham Law Review, 43(3), 341–362. Retrieved from http://ir.lawnet.fordham.edu/flr/vol43/iss3/1

D’Agostino, E. (2015). Contracts of Adhesion Between Law and Economics: Rethinking the Unconscionability Doctrine. Cham: Springer. doi:10.1007/978-3-319-13114-6

Day, R. E. (2001). The Modern Invention of Information: Discourse, History, and Power. Carbondale, IL; Edwardsville, IL: Southern Illinois University Press.

Day, R. E. (2014). Indexing it All: The Subject in the Age of Documentation, Information, and Data. Cambridge, MA: The MIT Press

Drucker, J. (2013). Performative Materiality and Theoretical Approaches to Interface. Digital Humanities Quarterly, 7(1). Retrieved from http://www.digitalhumanities.org/dhq/vol/7/1/000143/000143.html

Duranti, L. (1989). Diplomatics: New Uses for an Old Science. Archivaria, 28, 7–27. Retrieved from https://archivaria.ca/archivar/index.php/archivaria/article/view/11567

Duranti, L. (1994). Reliability and Authenticity: The Concepts and their Implications. Archivaria, 39, 5–10. Retrieved from https://archivaria.ca/archivar/index.php/archivaria/article/view/12063

Facebook, Social Media Privacy, and the Use and Abuse of Data: Hearing before the Committee on the Judiciary and the Committee on Commerce, Science and Transportation, United States Senate, 115th Cong. (2018, April 10). (Testimony of Mark Zuckerberg). Retrieved from https://www.judiciary.senate.gov/download/04-10-18-zuckerberg-testimony

Facebook: Transparency and Use of Consumer Data. Hearing before the Committee on Energy and Commerce, United States House of Representatives, 115th Cong. (2018, April 11). Retrieved from https://docs.house.gov/meetings/IF/IF00/20180411/108090/HHRG-115-IF00-Transcript-20180411.pdf

Forbruker Rådet. (2018). Deceived by Design: How tech companies use dark patterns to discourage us from exercising our rights to privacy [Report]. Oslo: Forbruker Rådet. Retrieved from https://fil.forbrukerradet.no/wp-content/uploads/2018/06/2018-06-27-deceived-by-design-final.pdf

Furner, J. (2004). Conceptual Analysis: A Method for Understanding Information as Evidence and Evidence as Information. Archival Science, 4(3–4), 233–265. doi:10.1007/s10502-005-2594-8

Gilmore, G. (1974). The Death of Contract. Columbus, OH: Ohio State University Press.

Gitelman, L. (2014). Paper Knowledge: Toward a Media History of Documents. Durham, NC: Duke University Press.

Glushko, R. J., & McGrath, T. (2008). Document Engineering: Analyzing and Designing Documents for Business Informatics and Web Services. Cambridge, MA: The MIT Press.

Greg, W. W. (1950). The Rationale of Copy-Text. Studies in Bibliography, 3, 19–36. Retrieved from https://www.jstor.org/stable/40381874

Griffin, R. C. (1978). Standard Form Contracts. North Carolina Central Law Journal, 9(2), 158–177. Retrieved from https://archives.law.nccu.edu/ncclr/vol9/iss2/3

Hillman, R. (2006). Online Boilerplate: Would Mandatory Website Disclosure of E-Standard Terms Backfire? Michigan Law Review, 104(5), 837–856. Retrieved from https://repository.law.umich.edu/mlr/vol104/iss5/2

Horton, D. (2009). The Shadow Terms: Contract Procedure and Unilateral Amendments. UCLA Law Review, 57, 605–667. Retrieved from https://www.uclalawreview.org/the-shadow-terms-contract-procedure-and-unilateral-amendments/

Isaacs, N. (1917). The Standardizing of Contracts. Yale Law Journal, 27(1), 34–48. Retrieved from https://digitalcommons.law.yale.edu/ylj/vol27/iss1/6/

Kessler, F. (1943). Contracts of Adhesion—Some Thoughts about Freedom of Contract. Columbia Law Review, 43(5), 629–642. doi:10.2307/1117230 Available at https://digitalcommons.law.yale.edu/fss_papers/2731

Klass, G. (2019). Empiricism and Privacy Policies in the Restatement Consumer Contract Law. Yale Journal on Regulation, 36(1), 45–115. Retrieved from https://digitalcommons.law.yale.edu/yjreg/vol36/iss1/2/

Korobkin, R. (2003). Bounded Rationality, Standard Form Contracts, and Unconscionability. University of Chicago Law Review, 70(4), 1203–1295. Retrieved from https://www.jstor.org/stable/1600574

Leff, A. A. (1967). Unconscionability and the Code—The Emperor’s New Clause. University of Pennsylvania Law Review, 115(4). doi:10.2307/3310882

Leib, E. J., & Eigen, Z. J. (2017). Consumer Form Contracting in the Age of Mechanical Reproduction: The Unread and the Undead. University of Illinois Law Review, 65–108. Retrieved from https://ir.lawnet.fordham.edu/faculty_scholarship/883/

Levitin, A.J., Kim, N. S., Kunz, C. L., Linzer, P., & McCoy, P. A. (2019). The Faulty Foundation of the Draft Restatement of Consumer Contracts. Yale Journal on Regulation 36(1), 447–470. Retrieved from https://digitalcommons.law.yale.edu/yjreg/vol36/iss1/7/

Malfitano, N. (2018, June 25). Criticism Follows Powerful Law Group to Next Project - A ‘Troubling’ Take on Consumer Contracts. Forbes. Retrieved from https://www.forbes.com/sites/legalnewsline/2018/06/25/criticism-follows-powerful-law-group-to-next-project-a-troubling-take-on-consumer-contracts/

Marotta-Wurgler, F. (2011). Will Increased Disclosure Help? Evaluating the Recommendations of the ALI's “Principles of the Law of Software Contracts”. University of Chicago Law Review, 78(1). Retrieved from http://lawreview.uchicago.edu/publication/will-increased-disclosure-help-evaluating-recommendations-ali%E2%80%99s-%E2%80%9Cprinciples-law-software

McGann, J. J. (1992). A Critique of Modern Textual Criticism. Charlottesville: University Press of Virginia.

Micklitz, H. W. (2015). The Transformation of Enforcement in European Private Law: Preliminary Considerations. European Review of Private Law, 23(4), 491–524.

Moringiello, J. M., & Reynolds, W. L. (2014). The New Territorialism in the Not-So-New Frontier of Cyberspace. Cornell Law Review, 99(6), 1415–1440. Available at http://scholarship.law.cornell.edu/clr/vol99/iss6/5

Murray J. E., Jr. (1982). Standardized Agreement Phenomena in the Restatement (Second) of Contracts. Cornell Law Review, 67(4), 735–784. Retrieved from http://scholarship.law.cornell.edu/clr/vol67/iss4/6

Obar, J. A., & Oeldorf-Hirsch, A. (2018). The biggest lie on the Internet: ignoring the privacy policies and terms of service policies of social networking services. Information, Communication & Society. doi:10.1080/1369118X.2018.1486870

Patterson, M. R. (2010). Standardization of Standard-Form Contracts: Competition and Contract Implications. William and Mary Law Review, 52(2). Available at https://scholarship.law.wm.edu/wmlr/vol52/iss2/2

Preston, C., & McCann, E. W. (2012). Unwrapping Shrinkwraps, Clickwraps, and Browsewraps: How the Law Went Wrong from Horse Traders to the Law of the Horse. Brigham Young University Journal of Public Law, 26(1). Available at https://digitalcommons.law.byu.edu/jpl/vol26/iss1/2

Rayward, W. B. (1994). Visions of Xanadu: Paul Otlet and Hypertext. Journal of the American Society for Information Science, 45(4). doi:10.1002/(SICI)1097-4571(199405)45:4<235::AID-ASI2>3.0.CO;2-Y

Draft of the Restatement of Consumer Contracts. (2019). American Law Institute.

Restatement of the Law Second, Contracts. (1981). American Law Institute.

Richter, D. H. (Ed.). (2007). Aristotle. In The Critical Tradition: Classic Texts and Contemporary Trends (3rd ed, pp. 55–58). Boston: Bedford/St. Martin’s.

Romm, T. (2019, February 14). The U.S. Government and Facebook are Negotiating a Record, Multibillion-Dollar Fine for the Company’s Privacy Lapses. The Washington Post. Retrieved from https://www.washingtonpost.com/technology/2019/02/14/us-government-facebook-are-negotiating-record-multi-billion-dollar-fine-companys-privacy-lapses/?noredirect=on&utm_term=.03da770e7abe

Rosenberg, M., Confessore, N., & Cadwalladr, C. (2018, March 17). How Trump Consultants Exploited the Facebook Data of Millions. The New York Times. Retrieved from https://www.nytimes.com/2018/03/17/us/politics/cambridge-analytica-trump-campaign.html

Rowley, K. A. (2011). Contract Terms. Retrieved from https://law.unlv.edu/faculty/rowley/KTermsSp11.pdf

Russell, A. (2014). Open Standards in the Digital Age. Cambridge: Cambridge University Press.

Sales, H. B. (1953). Standard Form Contracts. The Modern Law Review, 16(3). Retrieved from https://www.jstor.org/stable/1091838

Mysore Sathyendra, M., Wilson, S., Schaub, F., Zimmeck, S., & Sadeh, N. (2017). Identifying the Provision of Choices in Privacy Policy Text. Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, 2774–2779. doi:10.18653/v1/D17-1294

Schwartz, A., & Wilde, L. L. (1979). Intervening in Markets on the Basis of Imperfect Information: A Legal and Economic Analysis. University of Pennsylvania Law Review, 127(3), 630–682. Available at https://scholarship.law.upenn.edu/penn_law_review/vol127/iss3/2/

Singh, J., Cobbe, J., & Norval, C. (2018). Decision Provenance: Harnessing data flow for accountable systems. IEEE Access, 7, 6562–6574. doi:10.1109/ACCESS.2018.2887201

Stark, T. L. (2007). Drafting Contracts: How and Why Lawyers Do What They Do. New York: Aspen Publishers/Wolters Kluwer.

Sullivan, M. (2012, January 19). Attack of the Fine Print. MarketWatch. Retrieved from http://www.smartmoney.com/spend/technology/attack-of-the-fine-print-1326481930264/

Svenonius, E. (2009). The Intellectual Foundation of Information Organization. Digital Libraries and Electronic Publishing. Cambridge, MA: The MIT Press.

Tanselle, G. T. (1975). Problems and Accomplishments in the Editing of the Novel. Studies in the Novel, 7(3), 323–360. Retrieved from https://www.jstor.org/stable/29531734

Tanselle, G. T. (1978). The Editing of Historical Documents. Studies in Bibliography, 31, 1–56. Retrieved from https://www.jstor.org/stable/40371673

United States of America Federal Trade Commission. (2011). Agreement Containing Consent Order in the Matter of Facebook, Inc., a corporation [File No. 092 3184] Retrieved from https://www.ftc.gov/sites/default/files/documents/cases/2011/11/111129facebookagree.pdf

Venturini, J., Louzada, L., Maciel, M., Zingales, N., Stylianou, K., & Belli, L. (2016). Terms of Service and Human Rights: An Analysis of Online Platform Contracts (2nd ed.; F. Jardim & C. Hirsch, Trans.). Rio de Janeiro: Editoria Revan. Available at https://bibliotecadigital.fgv.br/dspace/handle/10438/18231

Wachter, S., Mittelstadt, B., & Russell, C. (2017). Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR. Harvard Journal of Law and Technology, 31(2), 841–887. Available at https://jolt.law.harvard.edu/assets/articlePDFs/v31/Counterfactual-Explanations-without-Opening-the-Black-Box-Sandra-Wachter-et-al.pdf. Preprint retrieved from https://arxiv.org/abs/1711.00399

Wilson, S., Schaub, F., Dara, A. A., Liu, F., Cherivirala, S., Leon, P. G., … Sadeh, N. (2016). The Creation and Analysis of a Website Privacy Policy Corpus. Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics – Long Papers, 1330–1340. Available at https://www.aclweb.org/anthology/P16-1126

Yeo, G. (2007). Concepts of Record (1): Evidence, Information, and Persistent Representations. American Archivist, 70(2), 315–343. doi:10.17723/aarc.70.2.u327764v1036756q

Footnotes

1. I refer to the combination of these genres as standard form consumer contracts (SFCCs) throughout this paper.

2. Relying on this hypothesis has been called the “cornerstone” of a law and economics approach to standard form contracts (Bakos et al., 2014, p. 5). Based on the idea, first advanced by Schwartz and Wilde (1979), that imperfect information in a market does not need to be safeguarded against, the informed minority hypothesis claims that “regulation is effective if it at least increases the proportion of informed consumers to a critical mass able to influence sellers’ decisions” (D’Agostino, 2015).

3. Leib and Eigen (2017) cite a difference between two distinct cohorts’ (i.e., those under 35 and those over 35) perceptions of zombie contracts: people under thirty-five are more familiar with hyperlinks in footers than with the little paper pamphlets that dictated the privacy policies previously mailed periodically with each credit card.

4. In new authoritative documents such as the American Law Institute’s (ALI) recently proposed Restatement of Consumer Contracts (Klass, 2019; Levitin et al., 2019).

5. This is a reference to Ben-Shahar and Schneider’s (2014) point in More Than You Wanted to Know: The Failure of Mandated Disclosure, where they assume users would rather know the features of a product, such as whether an iPhone screen can be scratched by keys, than a piece of salient information in a SFCC, such as the jurisdiction of the contract to which they agree.

6. (Advocates for Basic Legal Equality, Inc. et al., 2018).

7. Seen in recent authoritative documents such as the American Law Institute’s (ALI) proposed Restatement of Consumer Contracts.

8. Clickwrap presents “I Agree” buttons, whereas browsewrap relies on more ambiguous notions of action consistent with consent. While the notion that actually clicking “Agree” makes an agreement more valid seems to make sense, the consent mechanism used (i.e., clickwrap over browsewrap) increases reading by only a tiny margin of 0.36% (Marotta-Wurgler, 2011). Mandatory disclosure methods have been well established as ineffective (Ben-Shahar and Schneider, 2014), even if they remain the default remedy throughout much regulatory discourse (e.g., UETA’s “posting rule”, the GDPR’s transparency requirements).

9. https://www.eff.org/issues/terms-of-abuse

10. Studies have shown that salient terms are generally limited to between two and five terms, based on the limits of our psychology and/or what has been found to be considered ‘unexpected’ (Korobkin, 2003; Ayres and Schwartz, 2014). Russell Korobkin (2003), in an oft-cited article, described how two to three salient points on average exhaust our psychological grasp of terms. Ayres and Schwartz have called this process ‘term optimization’; it might involve surveys or other means of gathering information about users’ knowledge of the terms, their interest in certain terms as salient, and their ability to understand the terms they encounter.

11. http://www.gutenberg.org/files/22910/22910-h/22910-h.htm

12. Leff, 1967, p. 505, footnote 68

13. To clarify, the term “individualized” here is being used to describe the customized relationships within each status relationship, not the Individual that is associated with liberation by standardisation.

14. Plato (in Gorgias) blamed misuse of persuasive language on the orator (i.e., the Sophist) who did not believe that people could “obtain absolute knowledge” and thus “concerned themselves only with probabilities” (Richter, 2007, p. 81). Rhetoric according to Plato should be based on discourse that is “analytic, objective, and dialectical”, rather than “synthetic” or “emotional” (p. 81). Instead of producing “mere appearances of truth” like the Sophists, without regard for whether or not they are transcendently true, Plato argued for a type of rhetoric that could distinguish the truth behind such appearances.

15. For example, a statement on Facebook frames one choice as: “if you keep face recognition turned off, we won’t be able to use this technology if a stranger uses your photo to impersonate you” (p. 22).

16. https://usableprivacy.org/learn_more

17. https://wheatoncollege.edu/academics/special-projects-initiatives/lexomics/lexos-installers/

18. http://docs.oasis-open.org/legalxml-econtracts/CS01/legalxml-econtracts-specification-1.0.html

19. There is evidence that “drafting isomorphism is prevalent, and that it results in over-drafting with duplicate clauses, inconsistent terms, and clauses retaining ‘ghosts’ of other contracts found in form contracts.” (Leib and Eigen, 2017, p. 83)

20. https://www.aiacontracts.org/contract-doc-pages/21536-what-we-do

21. https://www.verisk.com/insurance/brands/iso/about/

22. G. Thomas Tanselle (1975) claimed that edits of a text should be recognized as two types: 1) vertical revision, or one that “aims at attempting to make a different sort of work,” and 2) horizontal, which “aims at intensifying, refining, or improving the work” (p. 330). This idea from bibliography provides an articulation of the rationale behind certain changes (Tanselle, 1978), which could be one example of more meaningful documentation practices when applied to SFCCs.

23. Stiglitz (2000) identifies asymmetric information as one of the major departures from previous economic theory and the major market failure presented by the information age.

24. The UETA contains a section entitled ‘Time and Place of Sending and Receipt’, which states that an electronic record is deemed to be sent when it is properly addressed or directed to another recipient, is in a form capable of being read by the other party’s system, and is out of the control of the sender […]. Additionally, ‘an electronic record is deemed received when it enters an information processing system designated by the recipient for receiving such messages (e.g., home office), and it is in a form capable of being processed by that system’ (Section 15 of the UETA) (Ibrahim et al., 2007).

The ‘golden view’: data-driven governance in the scoring society

This paper is part of Transnational materialities, a special issue of Internet Policy Review guest-edited by José van Dijck and Bernhard Rieder.

Introduction

Questions about how data is generated, collected and used have taken hold of public imagination in recent years, not least in relation to government. While the collection of data about populations has always been central to practices of governance, the digital era has placed increased emphasis on the politics of data in state-citizen relations and contemporary power dynamics. In part a continuation of long-standing processes of bureaucratisation, the turn to data-centric practices in government across Western democracies emerges out of a significant moment in the securitisation of politics, the shrinking of the public sector, and the rise of corporate power. In the case of the United Kingdom, this is particularly brought to bear through an on-going austerity agenda since the financial crisis of 2008. Data analytics, in this context, is increasingly viewed and sold as providing a means to more efficiently target and deliver public services and to better understand social problems (Beer, 2018).

As government has entered this space, adopting the processes, logics and technologies of the private sector, major questions arise about the nature of contemporary governance and the socio-technical shaping of citizenship. Of particular concern is how new and often obscure systems of categorisation, risk assessment, social sorting and prediction may influence funding and resource decisions and access to services, intensify surveillance, and determine citizen status or worth. The proliferation of data sharing arrangements among government agencies is raising concerns about who is accessing citizen data, the potential for highly personal profiling, function creep and misuse. At the same time, the black-boxed nature of big data processes, the dominant myths about data systems as objective and neutral, and the inability of most people to understand these processes make interrogating government data analytics systems difficult for researchers and near impossible for citizens without adequate resources (Pasquale, 2015; O’Neil, 2016; Kitchin, 2017).

Moreover, the empirical basis for a more thorough understanding of these dynamics remains thin, as the implementation of data analytics in public services is only just emerging. In this article we therefore contribute an overview of developments in data analytics in public services in the particular case of the UK. Drawing on research carried out for the one-year project ‘Data Scores as Governance’, the article provides the first integrated analysis of the use of such systems in the UK and of the often polarised views and approaches among stakeholders. In mapping this emerging field, we explore the way these data systems are situated and used in practice, engaging with the myriad negotiations and challenges that emerge in this context.

The article identifies an upsurge in data-driven forms of what we term ‘citizen scoring’ - the use of data analytics in government for the purposes of categorisation, assessment and prediction at both individual and population level. It demonstrates citizen scoring as a situated practice that emerges from an amalgamation of actors, imaginaries and political and economic forces that together shape and contest what was described in our research as a desired ‘golden view’ of citizens. The article thus highlights the heterogeneity of data practices, and points to the need for a nuanced understanding of the contingency of data systems on significant contextual factors that moves us beyond an engagement with the technologies themselves, towards a wider politics of their development, deployment, implementation and use as part of understanding the nature of citizenship in an emerging ‘scoring society’.

From data to data scores

The growing collection of data across social life, what has been described as the ‘datafication’ of society (Mayer-Schönberger & Cukier, 2013), is now a prominent feature of politics, economics and culture. At once celebrated for driving a ‘new industrial revolution’ (Hellerstein, 2008), the technical ability to turn increasing amounts of social activity and human behaviour into data points that can be collected and analysed has simultaneously advanced a power dynamic in need of investigation and critique. The trend to put phenomena in a quantified format that can be tabulated and analysed requires both the right set of tools and a desire to quantify and record. Premised on the notion that it is possible to infer probabilities by feeding systems substantial quantities of data on which to base predictions, data science has taken hold across both private and public sectors, as well as civil society, constituting effectively, according to Van Dijck (2014), a new paradigm based on a particular set of (highly contested) assumptions. Not only is there an assumption that (objective) data flows through neutral technological channels, but also that there is “a self-evident relationship between data and people, subsequently interpreting aggregated data to predict individual behaviour.” (Van Dijck, 2014, p. 199) It is, moreover, argues McQuillan (2017), a paradigm rooted in a belief akin to Neo-Platonism in which a hidden mathematical order is perceived to be ontologically superior to the one available to our everyday senses.

In this context, citizen scoring emerges as emblematic of the logics and functions that accompany this wider datafication of society, particularly as it relates to the governance of citizens. We use it as a term to connote the typical practices of data analytics in public services to do with the categorisation and segmentation, and sometimes rating and ranking, of populations according to a variety of interoperable data sets, with the goal of allocating resources and services accordingly. In some instances this involves types of risk assessments and the identification of particular characteristics in individuals as a way to predict their behaviour. Data-driven scores and classifications that combine data from different sources towards calculating risks or outcomes are emerging as a prime means for such categorisations. We are predominantly familiar with these practices in the financial sector, most notably in the form of the credit score, which increasingly relies on an array of digital transactions to inform predictions about the financial responsibility of individuals (Citron & Pasquale, 2014). A wider range of consumer scores are now being applied across different economic sectors (Dixon & Gellman, 2014). Sources of data for such scores may include, for example, an analysis of people’s mobile phone use, or the creditworthiness of their social media friends. People’s social activities are thus increasingly incorporated into particular commercial assessments, which points to a growing integration of social and transactional data sets (McCann et al., 2018). This practice builds on established experiences in the marketing industry and, more recently, the platform economy, where consumption patterns are predicted based on a variety of social, cultural, health and other data.

Whilst perhaps more normalised in financial and commercial industries, the use of data-driven scores has also reached governmental and public services. Much recent attention has focused on the ‘social credit score’ being developed in China, for example, which aims to integrate the rating of citizens’ financial creditworthiness with a wide range of social and consumer behaviour to assess people’s overall trustworthiness and allow, or deny, services accordingly (Ly & Luo, 2018). The Chinese social credit score is distinct in many ways, but it demonstrates possible implications of the algorithmic mediation of daily life and therefore offers interesting pointers for investigating the use of data analytics in the public sector of other countries (Fullerton, 2018; Jefferson, 2018). In particular, it provides a stark illustration of how practices of consumer scoring have migrated into citizenship debates, pointing to the actuarial logics underpinning citizen scoring more broadly (Poon, 2016; McQuillan, 2019).

In her study of the uses of data and automated processing in the United States, Eubanks (2018) points to a rise of a ‘regime of data analytics’ in public services, detailing, for example, uses of automated welfare eligibility systems and predictive risk models in child protection akin to the kinds of assessments and categorisations we associate with citizen scoring. Automated ‘decision support systems’, such as risk scores, have also been considered and implemented elsewhere with mixed results. Australia’s automated debt recovery system, now popularly referred to as ‘robo-debt’, was introduced to identify those with overpaid benefits and seek repayment. The system has caused scandal because of its errors and its impact on marginalised communities, and has been widely criticised as unethical as well as illegal (Carney, 2018). In New Zealand, the government shelved its plans to introduce the use of predictive risk assessments in child welfare services following public critique (Gillingham, 2019). In Europe, the Netherlands has introduced an automated system to try to detect benefit fraud, France has automated traffic offence processing and Italy is using automation to allocate health treatments (AlgorithmWatch, 2019). The widely referenced ProPublica investigation into the use of algorithmic processes in US criminal justice systems highlighted the prevalence of risk assessment tools that produce ‘risk scores’ on defendants to estimate their likelihood of re-offending and inform sentencing (Angwin et al., 2016). Similar investigations in the UK have pointed to Durham Constabulary's Harm Assessment Risk Tool (HART) and its categorisation of risk for defendants to inform custody decisions (Big Brother Watch, 2018a). In border control, data-driven profiling based on a cross-set of aggregated data is increasingly used for ‘vetting’ the ‘threat’ of migrants and refugees, producing what has been referred to as a ‘terrorist credit score’ (Crawford, 2016).

Although increasing attention is being paid to developments relating to this kind of citizen scoring, little is known about the uses of new data systems, particularly at the local government level where public services are predominantly provided. Prominent calls have been made to increase transparency about the use of algorithmic decision-making in government across different national contexts. The government of New Zealand recently responded to this by providing an overview of operational algorithms as part of an ‘algorithm assessment’ report (Stats NZ, 2018). Following public pressure, New York City set up an Automated Decision Systems Task Force to increase transparency and review how the City uses algorithms (Kirchner, 2017). In the UK, an inquiry into algorithms in decision-making in 2018 led to a recommendation from the House of Commons Select Committee on Science and Technology to produce a list of algorithms in local and central government (Science and Technology Committee, 2018). At the time of writing, such a list is still not available. Moreover, we lack analysis of how these systems are implemented and used in practice, of the changes in governance that occur, of how trade-offs are negotiated, and of how these relate to the questions and concerns expressed by different stakeholder groups across society. It is only through such an analysis that we can engage with the actual implications of the turn to data-driven technologies, understood in context and in relation to other social practices and historical trends, as a way to politicise their development, deployment, implementation and use as sites of struggle (Christin, 2017; Dencik, 2019). We therefore now turn our attention to detailing developments and practices pertaining to citizen scoring in the UK.

Method

In order to investigate the uses of data-driven scoring systems in public services we combined a number of different methods that would provide us with insights into general tendencies as well as particular practices. For this article, we draw predominantly on two data sets that form part of a larger project into citizen scoring 1: 1) 423 Freedom of Information (FOI) requests to local authorities and partner agencies in the UK asking for names and uses of data systems in public services; 2) 27 semi-structured interviews with public sector workers (17) and civil society groups (10) discussing the implementation and uses of data-driven systems, key advantages, challenges and concerns with using such systems in public services.

Our interviews are structured around six case studies that explore different kinds of data system applications in different parts of the UK, including areas of benefit fraud, child welfare, health, and policing across North and South England. For these case studies, we submitted some more targeted Freedom of Information requests and carried out semi-structured interviews with public sector workers, seeking to speak with people involved with the development, management and user side of the data systems in order to include a range of perspectives. The six case studies are:

  1. Bristol’s Integrated Analytical Hub
  2. Kent’s Integrated Dataset
  3. Camden’s Resident Index
  4. Hackney’s Early Help Profiling System
  5. Manchester’s Research & Intelligence Database
  6. Avon & Somerset Police’s Qlik Sense

In order to further engage with the implications of using data analytics in public services and the practice of citizen scoring, we also interviewed a range of civil society groups that were sampled according to their role as public service stakeholders and their familiarity with service-users and other impacted communities. These included diverse orientations pertaining to digital rights, welfare rights, and citizen participation (see table 1). All interviews were carried out between May and November 2018, in person, through online video or on the phone, lasting on average 30-60 minutes.

Table 1: Sample of civil society groups

Organisation | Orientation
Big Brother Watch | Civil liberties
British Association of Social Workers (BASW) | Professional association
Citizen’s Advice Bureau | Advice & advocacy
Defend Council Housing | Housing activism
Disabled People Against the Cuts (DPAC) | Disability activism
Involve | Public engagement
Liberty | Human rights
Netpol | Police watchdog
Open Rights Group (ORG) | Digital rights
Independent activist | Welfare rights

Finally, for our analysis we also draw on discussions that took place during two project-dedicated workshops with stakeholders from across the public sector, civil society and academia working in the area of data and public services. Both these workshops took place in 2018, one in April and one in November, in London, UK.

The study constitutes the most comprehensive analysis of citizen scoring in UK public services to date, and the first to combine a map of developments with stakeholder perspectives from across the public sector and civil society. We now outline some key developments and their implications based on our findings.

Predicting and scoring

Citizen scoring relies on predictive analytics, but not all uses of predictive analytics lead to citizen scoring. Given the lack of information available about where and how predictive analytics is being used, we began by producing a list of all the instances we could identify through manually analysing our FOI requests. At the time of writing, 328 responses had been received out of 423 requests; the others were either blocked, delayed, or not yet responded to. From this exercise, we identified 53 councils using predictive analytics (table 2). 2 Whilst we did not send FOI requests to individual police forces, we were able to complement our list with further information from research conducted by the non-governmental organisation Liberty, which identified 14 UK police forces making use of predictive analytics based on 90 FOI requests (table 3). 3

Table 2: Predictive analytics systems in UK public services

Council | Systems
Argyll and Bute Council | Sentiment Metrics social media sentiment analysis
Birmingham City Council | Business Objects (SAP)
Birmingham City Council | Tableau
Blaby District Council | Mosaic (Experian)
Bournemouth Borough Council | AccsMap
Bournemouth Borough Council | Arcady
Bournemouth Borough Council | Mova
Bournemouth Borough Council | Scoot
Bournemouth Borough Council | SocialSignIn
Bournemouth Borough Council | Stratos
Bournemouth Borough Council | Tableau
City of Bradford Metropolitan District Council | CapitaONE
City of Bradford Metropolitan District Council | Liquidlogic Children's Social Care System (LCS)
London Borough of Brent | Risk Based Verification
Brighton and Hove City Council | [Name not specified]
Brighton and Hove City Council | ArcGIS
Brighton and Hove City Council | Business Objects (SAP)
Brighton and Hove City Council | Predictive Analytics (SAP)
Bristol City Council | Think Family
Carlisle City Council | Housing Benefit System (Capita)
Carlisle City Council | Risk Based Verification (Xantura)
Ceredigion County Council | Daffodil
Ceredigion County Council | Local Development Plan
Ceredigion County Council | POPGROUP (Edge Analytics)
Charnwood Borough Council | Abritas Shortlisting
Charnwood Borough Council | QL Rent Arrears Progression
Chiltern District Council | Risk Based Verification
City of York Council | [Name not specified]
Copeland Borough Council | GIS (Geographical Information Systems)
London Borough of Croydon | Business Objects (SAP)
Dacorum Borough Council | Risk Based Verification (CallCredit)
Derbyshire Dales District Council | M3PP (Northgate Public Services)
Dudley Metropolitan Borough Council | [Name not specified]
Dudley Metropolitan Borough Council | [Name not specified]
Dudley Metropolitan Borough Council | [Name not specified]
Dudley Metropolitan Borough Council | Business Objects (SAP)
Dudley Metropolitan Borough Council | Single Person Discount Review (TransUnion/CallCredit), provided by external service provider Civica
London Borough of Ealing | Risk Based Verification (Coactiva)
East Hampshire District Council | Dynamics (Microsoft)
East Hampshire District Council | Experian Public Sector Profiler
East Riding of Yorkshire Council | Risk Based Verification (Xantura & Northgate PS Ltd)
Erewash Borough Council | Risk Based Verification
Folkestone & Hythe District Council | Risk Based Verification (Xantura)
Fylde Borough Council | Risk Based Verification (TransUnion/Callcredit)
Greater Manchester Combined Authority | [Name not specified]
Hertfordshire County Council | [Name not specified]
Hertfordshire County Council | [Name not specified]
Hertfordshire County Council | Mosaic (Experian)
Hull City Council | Risk Based Verification
Huntingdonshire District Council | Risk Based Verification (CallCredit)
Inverclyde Council | Scottish Patients at Risk of Readmission and Admission (SPARRA) (Health & Social Care Partnership)
Ipswich Borough Council | Risk Based Verification (CallCredit)
London Borough of Islington | Holistix (Quality Education Systems (QES))
Kent County Council | ACORN (CACI)
Kent County Council | Kent Integrated Dataset (KID)
Kent County Council | Mosaic (Experian)
Leeds City Council | FFT Aspire
Liverpool City Council | [Name not specified]
London Borough of Southwark | Student council tax discount review, with National Fraud Authority Initiative & Fujitsu
Medway Council | National Child Measurement Programme
Medway Council | NHS Health Checks programme
Milton Keynes Council | NHS Health Check software
Northamptonshire County Council | CapitaOne Admissions
Northamptonshire County Council | Fischer Family Trust Aspire
Northamptonshire County Council | Youth Offender Group Reconviction Scale (YOGRS)
Nottinghamshire County Council | Mosaic (Experian)
Purbeck District Council | Risk Based Verification (Xantura)
Rotherham Metropolitan Borough Council | Rentsense (Mobysoft)
Royal Borough of Windsor and Maidenhead | Risk Based Verification
South Bucks District Council | Risk Based Verification
Suffolk County Council | Connect Measure
Sunderland City Council | Risk Based Verification (CallCredit)
London Borough of Waltham Forest | Looker
London Borough of Waltham Forest | Sagemaker (Amazon Web Services)
West Lothian Council | Risk Based Verification (CallCredit)
Weymouth and Portland Borough Council | Risk Based Verification (Xantura)
Wigan Metropolitan Borough Council | Risk stratification models
Worcester City Council | Risk Based Verification (Capita)
London Borough of Hackney | Early Help Profiling System
London Borough of Tower Hamlets | Children's Safeguarding Profiling Model
London Borough of Newham | Children's Safeguarding Profiling Model

Table 3: Use of predictive policing programmes in the UK (source: Liberty, 2019)

Police Force | Predictive mapping programmes | Individual risk assessment programmes
Avon and Somerset | X | X
Cheshire | X |
Durham | | X
Dyfed Powys | X (in development) |
Greater Manchester Police | X |
Kent | X |
Lancashire | X |
Merseyside | X |
The Met | X |
Norfolk | X |
Northamptonshire | X |
Warwickshire and West Mercia | X (in development) |
West Midlands | X | X
West Yorkshire | X |
 

In analysing the general FOI requests, we found a varied landscape across local authorities in the UK in terms of both the understanding and the implementation of data systems in public services. The range of responses we were provided with indicates that there is as yet no common understanding of what constitutes data analytics within local government, let alone the use of data for practices such as prediction, risk assessment, categorisation, profiling or scoring. It is therefore very difficult to collate a comprehensive list of predictive analytics systems 4. The mapping of predictive analytics illustrates the diverse uses of data by different councils and partner agencies. At the same time, in combination with our case study research we can identify a number of key players and central trends. In particular, as we go on to outline below, the turn to data analytics is happening in a context of funding cuts, and whilst some systems are being developed in-house, we see the emergence of a few prominent private companies as suppliers of data systems, with a push towards collecting, sharing and integrating data across agencies with a view to carrying out risk assessments and profiling at individual and population level.
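To make the collation step concrete, the following minimal Python sketch shows how coded FOI responses might be tallied into a council-by-system list of the kind shown in table 2. It is an illustration only: the project coded responses manually, and the file name and column names here are our own assumptions.

    import csv
    from collections import defaultdict

    # Hypothetical export of manually coded FOI responses: one row per system
    # named in a response. Assumed columns: council, system, predictive ("yes"/"no").
    systems_by_council = defaultdict(list)
    with open("foi_responses.csv", newline="") as f:
        for row in csv.DictReader(f):
            if row["predictive"].strip().lower() == "yes":
                systems_by_council[row["council"]].append(row["system"])

    print(f"{len(systems_by_council)} councils using predictive analytics")
    for council, systems in sorted(systems_by_council.items()):
        for system in systems:
            print(f"{council} | {system}")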

Austerity and public-private partnerships

Across our case studies, interview participants pointed to the need to implement new systems for data sharing and analysis in order to contend with the financial realities of an austerity agenda. Local authorities in England have had their funding cut from central government by up to 60% since 2010 (Davies, Boutaud, Sheffield, & Youle, 2019). A developer who worked to implement Qlik Sense in the Avon & Somerset Constabulary said of their turn to more data systems, “it’s viewed very much as a critical enabler, a strategic imperative for any… organisation that’s facing cuts” (Avon & Somerset Police developer). In engaging with this context, whilst some councils develop systems in-house, our research identified a number of prominent companies involved and different kinds of public-private relationships, ongoing or emerging, with the turn to data-driven systems.

We can see from the list of predictive analytics that a few private companies have established themselves as prominent suppliers of predictive algorithms. Xantura and CallCredit, for example, provide data sharing and analytics to several public sector clients across the UK, particularly in the area of risk assessments. On its website (www.xantura.com), Xantura lists their key areas of focus as “improving outcomes for vulnerable groups, protecting the public purse and, helping clients build ‘smarter’ business processes”. Their systems relate to areas such as the Troubled Families programme (a government initiated reform of social services launched in 2012), fraud and error detection, and children’s safeguarding. Their Early Help Profiling System (EHPS), used for example in Hackney, one of our case studies, “translates data on families into risk profiles, sending monthly written reports to council workers with the 20 families in most urgent need of support,” as drawn from their website. In addition, they provide a Risk Based Verification (RBV) system for the automated detection of “fraud and error” which applies different levels of checks to benefit claims according to the risk associated with those claims, determining the level of verification required (Department for Work and Pensions, 2011). CallCredit, meanwhile, is a major consumer credit reporting agency (now acquired by TransUnion) that also, similar to Xantura, offers a Risk Based Verification system service to councils processing Housing and Council Tax benefits claims. CallCredit also provides a demographic profiling tool similar to Mosaic, the geodemographic segmentation system provided by Experian, which our research highlights is widely used across local authorities and partner agencies for a range of purposes. Most controversially, it was found to be used to inform the risk assessment of defendants as part of Durham Constabulary's Harm Assessment Risk Tool (HART).
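To illustrate the logic of Risk Based Verification as described above, the sketch below routes each benefit claim to a verification tier according to a risk score. The thresholds, tier names and the Claim structure are invented for illustration; the actual models used by suppliers such as Xantura and CallCredit are proprietary and undisclosed.

    from dataclasses import dataclass

    @dataclass
    class Claim:
        claim_id: str
        risk_score: float  # assumed output of a proprietary scoring model

    def verification_tier(claim: Claim) -> str:
        # Invented thresholds: higher-risk claims attract more intrusive checks.
        if claim.risk_score >= 0.7:
            return "enhanced checks (documents and interview)"
        if claim.risk_score >= 0.3:
            return "standard checks (documents)"
        return "light-touch checks (declaration only)"

    for claim in [Claim("A1", 0.82), Claim("B2", 0.41), Claim("C3", 0.12)]:
        print(claim.claim_id, "->", verification_tier(claim))

The point of the sketch is that the risk score silently determines how much scrutiny a claimant faces, which is why the calibration of such thresholds matters for the concerns discussed later in this article.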

Policing has become a prominent area of predictive analytics. The research carried out by Liberty indicates that predictive policing programmes predominantly fall in two areas: 1) predictive mapping programmes and 2) individual risk assessment programmes. Most forces using predictive policing programmes engage in forms of mapping, which are programmes that evaluate police data about past crimes to identify ‘hot spots’ of high risk on a map. These are supplied by a range of private companies, including HunchLab, IBM, Microsoft, Hitachi, and Palantir. A few are also engaging in individual risk assessment programmes which predict how people will behave, including whether they are likely to commit or be victims of certain crimes (Couchman, 2019).
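A predictive mapping programme of the sort described here can be reduced, in its simplest form, to counting past incidents per grid cell and flagging the densest cells. The following sketch assumes invented coordinates and cell size and stands in for the far richer proprietary models sold by the companies named above.

    from collections import Counter

    # Hypothetical geocoded incident records as (x, y) coordinates; a real
    # programme would draw on historical police data and richer models.
    incidents = [(12.3, 4.1), (12.4, 4.2), (12.35, 4.15), (3.0, 9.8)]

    CELL = 0.5  # invented grid cell size

    def cell_of(x: float, y: float) -> tuple:
        return (int(x / CELL), int(y / CELL))

    counts = Counter(cell_of(x, y) for x, y in incidents)
    # Cells with the most past incidents are flagged as 'hot spots'.
    for cell, n in counts.most_common(3):
        print(f"grid cell {cell}: {n} past incidents")

Even this toy version makes the key property visible: the map reflects where incidents were recorded rather than where crime necessarily occurs, echoing the bias concerns discussed below.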

From data warehouses to risk assessments

Our FOI requests point to different applications of data analysis across contexts and our case studies demonstrate the distinct nature of developments in different local authorities. No standard procedures are in place for how data systems are implemented, discussed and audited. Instead, uses of data systems are approached very differently, with some data-sharing leading to the creation of individual risk scoring, whilst in other contexts this is not practiced and databases serve predominantly as verification tools or to provide population level analytics. This indicates that whilst it is broadly accepted that public service planning requires data and analytics, there is not a shared understanding amongst local authorities as to what is appropriate to do with such technologies.

Despite differences in application, data sharing between agencies and different parts of the council is a prominent trend, described as the creation of “data warehouses” or “data lakes”, that seek to get “the golden view” (Camden Council manager) of citizens. This refers essentially to integrated databases that gather information about residents and their interactions with public services, across areas such as housing, education, social services, and sometimes also health and policing. In the case of Bristol’s Integrated Analytical Hub, for example, the Think Family database, which is used for services pertaining to child welfare, integrates 35 different social issue data sets, including school attendance, criminal records, unemployment, domestic abuse and mental health problems in the family. These are similar to the data sets used elsewhere, including the Early Help Profiling System developed by Xantura that is used in Hackney, and the iBase system that is part of Manchester’s Research & Intelligence Database. In the case of Camden’s Resident Index, which is used for benefit fraud detection, the data sources include council tax and benefits, housing, electoral registration, libraries and parking permits data in addition to adult and children’s social services and school information. Avon & Somerset Police have sought to connect internal data sets as well as some data sets from other agencies in Bristol Council to provide integrated assessments and evaluations through the self-service analytics software Qlik Sense. Kent’s Integrated Dataset (KID), on the other hand, brings together data from 250 local health and social care provider organisations as well as Fire and Rescue Service data to support planning and commissioning decisions. In integrating data sets from across agencies and different parts of the council, managers see a potential for targeting resources more effectively and being better positioned to respond to primary need. One manager described it as a need “to have a more strategic understanding of the city” (Bristol Council manager).

The creation of this ‘golden view’ of citizens, as one of our research participants described it, takes several forms and plays out in a broad range of data applications. We use it here as a metaphor to understand data systems as part of a desire to have both additional and more integrated information about populations as well as more granular information about citizens that form the basis of prediction and can drive actions taken. In our case studies, some of these applications involve population level analytics and network analysis, and do not involve the production of ‘scores’ as such, but rather a map of general trends and connections. In other cases, scoring can take several forms; in some instances it is predominantly a matching score used for identity verification (the probability that a record refers to the same person in different data sets), whilst in others it is based on a risk assessment relating to individuals or populations that indicate either a percentage score or a particular ‘risk threshold’ that is passed to trigger an alert (based on a combination of risk factors), or a ranking of high to low individuals ‘at risk’ within a specific ward.

With Camden’s Resident Index, for example, citizen scoring predominantly concerns identity verification used to indicate the risk of fraud. This may include household views showing the different records of the different people associated with an address, assigning different levels of verification to data points, such as data based on council tenancy registration or data from accessing a library service. The view makes it possible to detect fraud such as “school admissions where people are applying for school places from places they don’t live in, or people are illegally subletting their council tax properties, or people retaining accessible transport benefits when they no longer live in the borough” (Camden Council developer). Whilst the model does not lead to any final decision or action, the project manager noted that “it helps the service whittle down the likely cases to investigate” (Camden Council developer).
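As a rough illustration of this kind of matching score, identity verification across data sets can be sketched as a weighted comparison of shared fields. The fields, weights and records below are assumptions made for the example, not Camden’s actual matching rules.

    # Invented field weights; not the Resident Index's actual matching rules.
    FIELD_WEIGHTS = {"name": 0.4, "date_of_birth": 0.4, "address": 0.2}

    def match_score(record_a: dict, record_b: dict) -> float:
        """Return a 0-1 score estimating whether two records from different
        data sets refer to the same resident (naive exact-field comparison)."""
        return sum(weight for field, weight in FIELD_WEIGHTS.items()
                   if record_a.get(field) == record_b.get(field))

    tenancy = {"name": "J. Smith", "date_of_birth": "1980-02-01", "address": "1 High St"}
    library = {"name": "J. Smith", "date_of_birth": "1980-02-01", "address": "5 Low Rd"}
    print(match_score(tenancy, library))  # 0.8: probably the same person, address differs

In practice such systems use fuzzy rather than exact comparisons, but the principle is the same: a numeric score stands in for a judgment that two records concern one person.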

Meanwhile, councils such as Bristol and Hackney and police forces such as the Avon & Somerset Police Constabulary have developed or contracted systems that are concerned with identifying risks and vulnerability amongst individuals and households. Prominent uses of citizen scoring in this respect exist in areas such as child welfare and policing where vulnerability and risk are calculated through the combination of extensive data sets that identify characteristics and behaviours of previous victims and offenders in order to flag individuals with similar characteristics. These scores and reports are provided to frontline workers as intelligence to help indicate who might need further attention. Bristol’s Think Family database, for example, includes data on all children and young people within the local authority, who are each given a score to indicate the likelihood that they may become victims of some form of exploitation. The Qlik Sense system adopted by the Avon & Somerset Police ranks all offenders and victims of crime, categorising them as high, medium and low risk for either re-offending or becoming a victim of crime, alongside the harm that an offender carries (e.g., grievous bodily harm or threats to kill).
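The high/medium/low banding described for the Qlik Sense system might be pictured as a re-offending likelihood scaled by a harm weighting, as in the sketch below. The weights and cut-offs are illustrative guesses, not the force’s actual model.

    # Invented harm weights per offence type (illustrative only).
    HARM_WEIGHT = {"threats to kill": 3.0, "grievous bodily harm": 2.5, "theft": 1.0}

    def risk_band(reoffending_likelihood: float, offence: str) -> str:
        """Band an individual by likelihood of re-offending scaled by harm."""
        score = reoffending_likelihood * HARM_WEIGHT.get(offence, 1.0)
        if score >= 1.5:
            return "high"
        if score >= 0.5:
            return "medium"
        return "low"

    print(risk_band(0.8, "grievous bodily harm"))  # high (0.8 * 2.5 = 2.0)
    print(risk_band(0.3, "theft"))                 # low (0.3 * 1.0 = 0.3)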

In outlining developments, we therefore see the varied applications of data systems, the significance of contextual factors, such as policy agendas relating to austerity, for the turn to data-driven technologies in public services, and the further intertwining of government and business spheres. As we discuss further below, this matters for the ability to engage citizens in consultations and advance public transparency, as well as for the ability of public sector workers to negotiate these systems from an empowered position: the less government agencies invest in developing their own internal capabilities, the more they become locked in and reliant on external expertise (Garrido et al., 2018). Moreover, given the extensive data collection and sharing, and the emphasis on prediction and risk assessment as central features of data systems in public services, concerns about their implications for citizen rights and about the impact of such decision-making on different groups and communities have become prominent. We now turn to outline some of these negotiations and tensions.

Negotiations and tensions

Ongoing tensions are emerging as local authorities try to respond to the problems facing communities with fewer resources, driving a need to be ‘smarter’ and more efficient. Some of these tensions are prominent amongst public sector workers as they are confronted with different challenges pertaining to the practices of citizen scoring, but we also see discrepancies in the nature of the issues raised by different stakeholder groups. In this section we outline some key themes emerging from our research interviews with regard to the transformations and implications of the implementation of data systems in public services.

Citizen rights and harms

The extent of data collection, who gets to see it, and the lack of transparency around its uses were raised as prominent concerns amongst civil society groups, but are tackled very differently by different councils. Although the EU’s General Data Protection Regulation (GDPR) addresses some aspects of data sharing and use, detailed requirements are still unclear and many parts of public service provision are exempt from such regulation (Big Brother Watch, 2018b). In the context of this regulatory vacuum, there was therefore a recognition among interviewees from local authorities that they were balancing, or engaging in a trade-off between, privacy rights and the rights of vulnerable individuals to protection and care. Indeed the drive to enable a ‘golden view’ of citizens by linking up all available data sets comes in part from perceived failings of agencies to adequately share and act on information in order to respond to needs and risks, marked by high-profile cases such as the deaths of Baby P, Victoria Climbié, and Fiona Pilkington following instances of long-term abuse. Yet interpretations of what this means for data practices are varied.

The KID was developed to provide population level health planning rather than to aid decision-making related to specific individuals. This also means that practitioners only have access to pseudonymised data. On the other hand, Manchester’s Research and Intelligence Database is designed to make it easier for frontline workers to share and access identifiable data about those receiving support services to develop a fuller picture of these individuals and the networks around them. With Hackney’s Early Help Profiling System, identifiable data is sent to case workers once the automated system deems that a certain risk threshold has been crossed. Similarly, we found that different councils have different ideas about whether and how citizen consent is required. For example, Manchester seeks consent from service users whose data is contained in the system whereas Hackney council does not, as it is thought this would compromise the effectiveness of the system. Finding ways to communicate data practices to citizens and upholding genuine ‘informed consent’ was generally seen across both public sector and civil society as a prominent challenge.

Less discussed and addressed by local authorities are issues of ‘bias’ or harms, particularly how the use of these new systems might negatively affect people’s lives. Such concerns have become particularly prominent as data processes often sit behind a veneer of technological ‘objectivity’. There is recognition across our case study interviews that bias can be embedded in models and that harms can occur, and this recognition is evident in how these systems are discussed. For example, Xantura developers explicitly stated that they design their systems not to be ‘punitive’ but to enable early intervention, and that they regularly monitor and check for biases in their model. However, engaging with unanticipated harms that may arise from using such models is less salient amongst developers and managers working to implement data systems. In contrast, this was one of the critiques raised most often by civil society groups. They highlighted concerns about the ways in which a data lens particularly targets those on the margins and how these systems impact citizen rights and opportunities differently. As one interviewee noted, “it’s not something that the bulk of the population will ever encounter. It’s something you only encounter when you are part of a risk group, a risk population” (Netpol). Such concerns are echoed in research carried out in other countries, which has highlighted how systems like this, which disproportionately draw on and use data about people who make use of social services, are biased through the over-representation of a particular part of the population. The variables being used can in practice be proxies for poverty, for example the length of time someone has been on benefits as a variable influencing risk assessments (see also Gillingham & Graham, 2017; Eubanks, 2018).

Related to this, several of our civil society interviewees raised the issue of stigmatisation as a central feature of citizen scoring, highlighting how the creation of data warehouses, risk assessments and predictions in itself can be harmful: “Because of this kind of quantification and categorisation approach that data analytics actually demands and the use of ever more sensitive data, there are people who will feel sidelined, maligned, judged, stereotyped” (Big Brother Watch). Further, none of the case studies we analysed included a means for people who had been scored to know their score, how it was generated and how to interrogate it. This inability to see or ‘talk back’ was seen as having significant democratic implications in terms of due process and can lead to differential treatment and opportunity given the way that someone may unknowingly be affected by a score. Indeed, transparency about how data is used and processed and for what purpose was noted as “the first step” (Netpol) towards mitigating harms that may emerge from the implementation of data systems and the kinds of interventions that will be acted on them, not least in the context of the speed with which data systems are being deployed, often with limited consultation and impact assessment: “I think the issue is that things are being introduced so quickly and without adequate oversight and without adequate testing for things like bias” (Liberty). Moreover, this information asymmetry also speaks to the way the pursuit of a ‘golden view’ situates citizens in relation to their social context, through the practice of labeling, sorting and scoring: “You think that you’re normal working class, maybe a poor family and suddenly you are being classed as a risk in some way. It’s a fundamental question, what right do you have to label people based on something” (Open Rights Group).

Engaging with civil society concerns and assessments of the implications for impacted communities is especially pertinent as there is an underlying assumption in the implementation of data systems in public services that information will lead to action. The perceived value of these scoring systems lies in part in their ability to incorporate ‘real-time’ data that provides a profile and assessment of individuals and households on a continuous basis, also informing an escalation of risk. For example, this is a key part of the scoring for offenders: “once you’re measuring risk in an automated way, you can then measure the escalation risk. So if someone’s offending behaviour changes over the last week or two or even overnight, the model will then show you that and it’ll push it up the list” (Avon & Somerset Police developer). This, in turn, serves to advance a logic around early intervention and pre-emptive measures, or what was referred to as “targeted interventions” (Bristol Council manager) in the context of “preventative proactive work”: by “capturing more risk” through the use of automated risk assessments, this will require engagement with individuals whose risk would not usually be considered high enough to warrant intervention, asking for a “light touch” that engages with people on an ongoing basis (Bristol Council developer). However, we found that those interviewed often could not tell us how the data systems introduced led to concrete measures. Without comprehensive evaluation of how these new data arrangements are, or are not, affecting action, engagement and resources, these claims remain unproven. The argument that these systems make it easier for frontline staff to access and share information and assess risk is made with little, if any, evidence provided about how this affects resource allocation or actions taken.
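The ‘escalation’ logic the developer describes, where a sudden change in score pushes someone up the list, might be sketched as a simple comparison of successive scores. The data and threshold below are invented for illustration.

    # Hypothetical weekly risk scores per person (invented data).
    last_week = {"p1": 0.40, "p2": 0.70, "p3": 0.20}
    this_week = {"p1": 0.75, "p2": 0.71, "p3": 0.22}

    ESCALATION_THRESHOLD = 0.2  # invented: minimum jump that triggers review

    deltas = {person: this_week[person] - last_week.get(person, 0.0)
              for person in this_week}

    # Biggest recent escalations first, so changed behaviour 'pushes someone
    # up the list' even when their absolute score remains modest.
    for person, delta in sorted(deltas.items(), key=lambda kv: -kv[1]):
        if delta >= ESCALATION_THRESHOLD:
            print(f"{person}: score rose by {delta:.2f} -- flag for review")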

At the same time, experiences amongst service users and communities point to the need to engage more comprehensively with the ways data systems relate to different activities that might lead to a range of harms and feelings of being targeted. This requires a recognition that authorities and the state may be perceived as not necessarily benign, and that technologies are not necessarily neutral. Whilst harmful outcomes relating to data collection and use might not be intentional, such evaluations point to the need to consider how data has the potential to facilitate punitive measures. Yet what kind of impact would need to be assessed, and how evaluations of actions taken on citizen scores would be carried out, remain difficult questions, as there is no clear line of accountability for any one system distributed across different people and uses. Moreover, councils pointed to a lack of resources for pursuing any comprehensive evaluations or impact assessments of the transformations in practices and provision that come with the implementation of new data systems.

Professional authority and operational logics

This question of how to evaluate or assess impact gains further pertinence as the tensions and negotiations surrounding the harms and rights infringements that may arise with the use of data systems in public services are simultaneously playing out in a context of changing practices and organisational transformations that position different understandings and activities at odds. In building a culture of data collection, we found a concern amongst both civil society groups and frontline staff about a fundamental re-orientation of professional practices and routines, relationships and the kinds of information deemed valuable in delivering public services. In determining a family’s needs, for example, a member of a professional association for social workers noted that “the systems are set up for social workers to collect data as performance management,” pointing to a concern that this “can divert the social worker from being able to understand the case because the sort of data that they’re collecting, they might be lost in there, the complexities of the case” (Godfred Boahen, BASW).

In the prominent application of data systems for the purposes of identifying and measuring risk, such as the widespread use of Risk Based Verification systems, we are also confronted with a general shift within public administration towards risk management as a new ‘paradigm’ of operations (Yeung, 2018). The way in which this shifts authority away from public sector workers themselves towards computational outputs was a recognised tension across our case studies and frequently addressed through an explicit emphasis on professional judgment as the central pillar for any decision-making, regardless of the implementation of data systems. One manager described it as, “it’s not computer says yes or no, it’s computer provides advice and guidance to the professional who adds the professional judgment in order to make better decisions about resource allocation” (Bristol Council manager). This was similarly echoed elsewhere, with developers working with Hackney Council, for example, stressing that the goal is “not to replace professional staff but to support them by giving them the information they need to do their job better,” and Avon & Somerset Police inspectors pointing out “it is just a tool” and not “the be-all-and-end-all”.

Emphasising the continued value of professional judgment as the ultimate ‘decision-maker’ has been key to advancing the implementation of data analytics within public services in the face of what was recognised by several interviewees as an element of hostility towards technology amongst frontline staff. This resistance was often reduced or dismissed by managers and developers as issues of professional conservatism or a lack of technical skills. One described it as “confidence around technology is low” (Avon & Somerset Police developer) and another pointed to a historical scepticism towards alternative approaches to knowledge: “There’s been a strongly held view that the only people who should tell you something about them is children and families themselves” (Bristol Council manager).

Maintaining a prominent rhetoric around the importance of professional knowledge and domain-specific expertise is also a way to contend with what are perceived to be not just cultural challenges within the organisation, but also technical challenges that limit the so-called ‘accuracy’ of systems. In interviews, developers of data systems pointed to continued issues of data quality within public services, with some data sets being riddled with a high volume of errors, for example “with people giving wrong names, wrong date of birth, things like that” (Bristol Council developer). High error rates mean that practitioners find it important to be able to interrogate scores. As a coordinator within Avon & Somerset Police noted: “if someone has got a particularly high score, we will look at what’s given them the high score and drill in to make sure the data’s correct but it isn’t always. For example, it might be a data quality issue where someone is identified as high risk because they were previously linked to a murder or attempted murder and actually they were eliminated from that murder” (Avon & Somerset Police co-ordinator). This has spurred managers to call for increased “data literacy” training dedicated to enhancing people’s “ability to engage, interpret and argue data and pla[ce] data at the centre point of how people make decisions” (Avon & Somerset Police manager).

At the same time, we see a frustration amongst frontline staff with the ways in which professional judgment is continuously confined within limited parameters as data systems come to set the terms of engagement with citizens. One frontline police officer complained, “there will still be people who say…[following the technology] is what we must do” (Avon & Somerset Police inspector). This tension was also recognised by some of the developers: “we can’t control what people do off the back of [the data system]… It might force them into activity they wouldn’t otherwise do” (Bristol Council developer). In part, this speaks to the challenge of what is also referred to as ‘automation bias’ (Cummings, 2004), in which people attribute higher value to technological outputs, sometimes trusting these more than their own judgments and decision-making. However, it also points to a broader challenge with regards to how professionals are positioned in relation to data systems, not least in a context of austerity and cuts to services. In our workshop discussions, experiences indicated how the implementation of data-driven technologies advances a push towards the rationalisation of ‘messy’ lives through the reductionism and functionalism that are fundamental features of the information processing of data-driven scoring systems, undermining the holistic assessments that are the hallmark of good judgment (Pasquale, 2019). That is, the crude categorisations that data systems rely on in order to provide analyses and scores are unable to account for the rich contextual, domain-specific knowledge that professionals consider to be central to appropriate decision-making. This is significant in several respects. In the case of Bristol’s Integrated Data Analytics Hub, for example, developers noted that data-driven risk assessments can only take account of risk factors such as school attendance or records of domestic abuse, but cannot account for insulating ‘positive’ factors, such as other types of social engagement or wider family networks, that rely on contextual knowledge and unstructured information. Furthermore, whilst there are attempts to aggregate data to identify broader social issues that shape opportunities and challenges for families and individuals, household-level and individual-level data rely on attaching risk factors to individual characteristics and behaviour, and might therefore divert focus away from structural causes, such as issues of inequality, poverty or racism.
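
To make this reductionism concrete, the following is a purely illustrative Python sketch of the kind of additive risk scoring described above; all field names and weights are hypothetical, and no case-study system is claimed to work this way. The structural point is that only flags that exist as data fields can raise a score, protective contextual factors have no field and so cannot lower it, and the “drilling in” that practitioners describe amounts to listing the flags behind a score so that each can be checked against source records.

# Hypothetical, deliberately simplified risk-scoring sketch.
# Only codable risk flags enter the score; protective factors
# (e.g., a supportive wider family network) have no field and
# therefore cannot offset it.
RISK_WEIGHTS = {
    "school_absence": 2,
    "domestic_abuse_record": 3,
    "benefit_claim": 1,
}

def risk_score(household: dict) -> int:
    """Sum the weights of whichever risk flags are present."""
    return sum(w for flag, w in RISK_WEIGHTS.items() if household.get(flag))

def explain(household: dict) -> list:
    """List the flags behind a score, so that a practitioner can verify
    each against the underlying records (e.g., a wrongly retained link
    to an investigation the person was eliminated from)."""
    return [flag for flag in RISK_WEIGHTS if household.get(flag)]

family = {"school_absence": True, "benefit_claim": True}
print(risk_score(family))   # 3
print(explain(family))      # ['school_absence', 'benefit_claim']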

As such, we see how, at the level of management and development of data systems in the context of public services, challenges are predominantly seen as either technical or cultural in nature. Issues pertaining to data quality or organisational scepticism towards technology are current obstacles, but of a kind that can eventually be overcome through ‘better’ data practices that ultimately fit a shift towards data-driven governance. This understanding of challenges marks a significant discrepancy with the more fundamental concerns expressed by stakeholder groups from both civil society and frontline staff. Here we see a concern with social and political issues that speak to tensions at the core of what the ‘golden view’ of citizens might mean, in terms of different harms, rights, and the potential for enacting agency both as service users and professionals. Moreover, as we will go on to discuss further below, these tensions point to more basic questions about the way data-driven systems might transform state-citizen relations and understandings of both people and social issues.

Transformations in governance: deconstructing the ‘golden view’

In outlining developments and pointing to the complex amalgamation of political and economic forces, private and public actors, interpretive and regulatory vacuums, and prominent tensions and differences amongst stakeholders that makes up the turn to data-driven governance, we see a broader politics of such a turn emerge. Whilst our case study research points to the fact that no decision is currently made solely on the basis of these data-driven scores, the implementation of such systems is shaping the terms upon which citizens are engaged with and constructed in the context of public services. These systems respond to a perceived need for more integrated and granular information about populations, which is now seen to be possible with the advent of data-driven technologies. Moreover, using data to categorise and classify behaviours and characteristics is seen as a way to target resources in the face of significant cuts in public sector spending, with a view to predicting and pre-empting activities and outcomes to advance more proactive forms of engagement with citizens.

At the same time we have seen that in many instances these systems are being bought in from private suppliers that develop various off-the-shelf tools and applications that can be deployed and repurposed within different parts of local government and the public sector, particularly for identification and risk assessments in the areas of benefit fraud, child welfare and policing. Yet in this context, the turn to data-driven technologies raises concerns across different stakeholders not just about the lack of transparency and the likelihood of errors and bias in the design and uses of these systems, but also about a more fundamental shift in what constitutes or is privileged as social knowledge, the kinds of actions that might be taken on such knowledge, and the way in which this positions citizens as subjects of governance. Whilst a notion of a ‘golden view’ of populations, as expressed by management, suggests an advanced, more comprehensive engagement with the needs of the city or borough and the people living within it, the tensions and negotiations we have seen amongst professionals and civil society groups as systems are implemented illustrate the politicised nature of this ‘view’ in practice.

Concerns point to the implications of ‘seeing’ people through data within this context, and the abstracted and reduced understanding this may lead to when relied upon at the expense of other types of knowledge. In conjunction with the deskilling and disempowerment of professionals as the use of data systems grows, issues raised by stakeholders speak to a perceived danger that the messiness of people and lived experiences is necessarily sidelined or ignored for the algorithmic processing of information. The extent to which digitised systems can be used in ways that reduce what is ‘knowable’ and hide complexity while appearing objective and neutral is a repeated finding across research investigating the ‘modernization’ of public services (Gillingham, 2011; Munro, 2010; White et al., 2009; Bartlett & Tkacz, 2017). Furthermore, in mapping what data systems are used for, we see how these technologies advance an onus on risk management as the dominant operative logic of public services. As Amoore (2013) has argued, with the turn to algorithmic decision-making in governance, authority and expertise are transferred to calculative devices seeking to capture risk over and above other forms of expertise. As such, citizens are positioned within this ‘golden view’ not as participants or co-creators, but primarily as (potential) risks, unable to engage with or challenge decisions that govern their lives.

Moreover, concerns with targeting and stigmatisation, particularly of marginalised and poor groups in society, highlight the way these systems attribute risk factors to individuals’ behaviour and characteristics, shifting the burden of responsibility for social ills onto individuals over and above collective solutions. When the focus is on individuals, predicting risks of committing crime through data-driven profiling, for instance, is comfortably presented as a ‘solution’ for tackling increasing crime levels whilst doing little to engage with any underlying causes of crime (Andrejevic, 2017). Similarly, when assessing child welfare in the context of individual households, emphasis falls on seeing this primarily as an outcome of family history and behaviour. The worry is that what is measured is the impact of school absences but not the impact of school cuts; the impact of benefit claims but not the impact of precarious work. In other words, these systems, in their emphasis on correlation over causation, can individualise social problems by directing attention away from their structural causes (Keddell, 2015).

In deconstructing the ‘golden view’ of citizens that data systems afford, we therefore need to consider how state-citizen relations and indeed the substance of public services are configured within such a view. This goes beyond questions of error and bias or forms of data discrimination that have received increased attention. Instead, it requires us to consider the implementation of data systems as a distinctly political process, engaging with the politics of data at different and interconnected scales (Ruppert et al., 2017). At one level, the ‘golden view’ signifies an interpretation of state-citizen relations marked by a context of data-rich societies subject to austerity (McQuillan, 2018), turning to increased data sharing and analysis as a coping mechanism for a reduction in resources. This requires a critical interrogation of the premise of these systems, and the interests and agendas their implementation is seeking to serve. Moreover, the nature of citizen scoring, and the use of data for the purposes of categorisation, segmentation and profiling is embedded within a particular understanding of the relationship between people and data, and with that, a particular type of social knowledge and value system (Van Dijck, 2014; Kitchin, 2014). In effect, data scores come to order the contours of citizenship, shaping the deserving and undeserving, the risky and the vulnerable, and, ultimately, the terms upon which access to and participation in society might occur.

Conclusion

The introduction of predictive analytics, scoring systems, intelligent databases and data warehouses into local government is a rapidly emerging feature of datafication. An increasing emphasis on data use in UK government has led to a proliferation of data systems and to significant experimentation with algorithmic processes designed to provide new insights and value extraction based on different kinds of analytics. For public services, these systems are said to offer an opportunity to allocate resources and respond to needs more effectively. However, little is known about the kinds of systems in place, how and where they are used, and what practitioners and stakeholders think about these developments. This is a particular challenge in what we have identified as both a regulatory and interpretive vacuum that signifies a lack of shared understandings of not only what constitutes data-driven decision-making and algorithmic processing of information, but also what it is appropriate to do with such systems. Through FOI requests, interviews and workshops we have sought to map uses and detail the different kinds of data systems being implemented as well as the benefits and concerns being identified by practitioners and civil society experts.

Our findings demonstrate the heterogeneity of data systems and their uses, and the contingency of their implementation on both local and broader societal factors. The turn to scoring systems and predictive analytics is being fuelled by an austerity context in which local councils have faced substantial cuts. While these technologies are presented as ‘smart’ and effective solutions for better service provision, they are introduced in a context of service reduction. Further, we can observe a strong reliance on commercial systems that pose additional challenges to transparency and incorporate a wider set of (transactional, social, etc.) data on people into public sector decision-making. In this setting, shifts in organisational practices and logics that implicate the role of professional judgment, and the extent to which data systems come to guide decision-making, have led to prominent concerns amongst stakeholder groups in civil society that are not necessarily considered within local authorities and partner agencies. These include concerns beyond questions of transparency, bias and discrimination, and point to broader worries about targeting and stigmatisation, and how people come to be ‘seen’ and engaged with as citizens and service users.

The desire to rely on data collection and analysis as a way to create a ‘golden view’ of populations serves as a pertinent metaphor, as it encapsulates not only the perception of what is possible with increased data sharing, but also a particular conception of state-citizen relations. The kinds of negotiations that emerge when looking at the implementation, deployment and uses of data systems in practice point to the contestations that exist over what comes to constitute social knowledge in such a view, the individualisation of risk and responsibility, the differential treatment that this can introduce, and the inability of citizens to know about, engage with or challenge such assessments. These negotiations will continue to play a key part in understanding the transformations in governance emerging with the scoring society, and need to form a prominent part of discussions on what is at stake as data systems come to govern more and more aspects of our lives.

References

AlgorithmWatch. (2019). Automating Society: Taking Stock of Automated Decision-Making in the EU [Report]. Berlin: AlgorithmWatch. Retrieved from https://algorithmwatch.org/wp-content/uploads/2019/01/Automating_Society_Report_2019.pdf

Amoore, L. (2013). The Politics of Possibility: Risk and Security Beyond Probability. Durham; London: Duke University Press.

Andrejevic, M. (2017). To pre-empt a thief. International Journal of Communication, 11, 879–896. Retrieved from https://ijoc.org/index.php/ijoc/article/view/6308

Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016, May 23). Machine Bias. ProPublica. Retrieved from https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

Bartlett, J., & Tkacz, N. (2017). Governance by Dashboard [Policy Paper]. London: Demos. Retrieved from https://www.demos.co.uk/wp-content/uploads/2017/04/Demos-Governance-by-Dashboard.pdf

Beer, D. (2018). Envisioning the power of data analytics. Information, Communication & Society, 21(3), 465–479. doi:10.1080/1369118X.2017.1289232

Big Brother Watch. (2018, April). A closer look at Experian big data and artificial intelligence in Durham Police [Blog post]. Retrieved from Big Brother Watch website: https://bigbrotherwatch.org.uk/2018/04/a-closer-look-at-experian-big-data-and-artificial-intelligence-in-durham-police/

Brown, W. (2015). Undoing the Demos: Neoliberalism’s Stealth Revolution. Cambridge, MA: The MIT Press.

Carney, T. (2018). Robo-debt illegality: The seven veils of failed guarantees of the rule of law? Alternative Law Journal. doi:10.1177/1037969X18815913

Cheney-Lippold, J. (2017). We Are Data. New York: New York University Press.

Christin, A. (2017). Algorithms in practice: Comparing web journalism and criminal justice. Big Data & Society, 4(2). doi:10.1177/2053951717718855

Citron, D. K., & Pasquale, F. (2014). The Scored Society: Due Process for Automated Predictions. Washington Law Review, 89(1). Retrieved from https://digitalcommons.law.uw.edu/wlr/vol89/iss1/2

Couchman, H. (2019). Policing by Machine: Predictive Policing and the Threat to Our Rights [Report]. Retrieved from https://www.libertyhumanrights.org.uk/sites/default/files/LIB%2011%20Predictive%20Policing%20Report%20WEB.pdf

Crawford, K. (2016, May 2). Know your terrorist credit score! Talk presented at the re:publica, Berlin. Retrieved from https://16.re-publica.de/en/16/session/know-your-terrorist-credit-score

Cummings, M. L. (2004). Automation Bias in Intelligent Time Critical Decision Support Systems. Presented at the AIAA 1st Intelligent Systems Technical Conference, Chicago. doi:10.2514/6.2004-6313. Retrieved from https://web.archive.org/web/20141101113133/http://web.mit.edu/aeroastro/labs/halab/papers/CummingsAIAAbias.pdf

Davies, G., Boutaud, C., Scheffield, H., & Youle, E. (2019, March 4). Revealed: The thousands of public spaces lost to the council funding crisis. The Bureau of Investigative Journalism. Retrieved from https://www.thebureauinvestigates.com/stories/2019-03-04/sold-from-under-you

Dencik, L. (2019). Situating practices in datafication – from above and below. In H. C. Stephansen & E. Treré (Eds.), Citizen Media and Practice. London; New York: Routledge.

Department for Work and Pensions. (2011, November 9). Housing Benefit and Council Tax Benefit Circular HB/CTB S11/2011. Retrieved from https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/633018/s11-2011.pdf

Dixon, P., & Gellman, R. (2014). The Scoring of America: How Secret Consumer Scores Threaten Your Privacy and Your Future [Report]. Lake Oswego: World Privacy Forum. Retrieved from http://www.worldprivacyforum.org/wp-content/uploads/2014/04/WPF_Scoring_of_America_April2014_fs.pdf

Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. New York: St Martin’s Press.

Fullerton, J. (2018, March 24). China’s ‘social credit’ system bans millions from travelling. The Telegraph. Retrieved from https://www.telegraph.co.uk/news/2018/03/24/chinas-social-credit-system-bans-millions-travelling/

Garrido, S., Allard, M. C., Béland, J., Caccamo, E., Reigeluth, T., & Agaisse, J.-P. (2018). Literature Review: Ethical issues and social acceptability of IoT in the Smart City [Final Report No. 1]. Montreal: CIRAIG. Retrieved from http://ville.montreal.qc.ca/pls/portal/docs/page/prt_vdm_fr/media/documents/ido_vi_revue_litt_final_en.pdf

Gillingham, P. (2011). Decision making tools and the development of expertise in child protection practitioners: Are we “just breeding workers who are good at ticking boxes”? Child and Family Social Work, 16(4), 412–421. doi:10.1111/j.1365-2206.2011.00756.x

Gillingham, P., & Graham, T. (2017). Big data in social welfare: the development of a critical perspective on social work’s latest “electronic turn.” Australian Social Work, 70(2), 135–147. doi:10.1080/0312407X.2015.1134606

Hellerstein, J. (2008, November 19). The Commoditization of Massive Data Analysis. Radar. Retrieved from http://strata.oreilly.com/2008/11/the-commoditization-of-massive.html

Jefferson, E. (2018, April 24). No, China isn’t Black Mirror – social credit scores are more complex and sinister than that. New Statesman. Retrieved from https://www.newstatesman.com/world/asia/2018/04/no-china-isn-t-black-mirror-social-credit-scores-are-more-complex-and-sinister

Keddell, E. (2015). The ethics of predictive risk modelling in the Aotearoa/New Zealand child welfare context: Child abuse prevention or neo-liberal tool? Critical Social Policy, 35(1), 69–88. doi:10.1177/0261018314543224

Kirchner, L. (2017, December 18). New York City Moves to Create Accountability for Algorithms. ProPublica. Retrieved from https://www.propublica.org/article/new-york-city-moves-to-create-accountability-for-algorithms

Kitchin, R. (2014). The data revolution. Big data, open data, data infrastructures & their consequences. London: Sage.

Kitchin, R. (2017). Thinking critically about and researching algorithms. Information, Communication & Society, 20(1), 14–29. doi:10.1080/1369118X.2016.1154087

Lv, A., & Luo, T. (2018). Asymmetrical Power Between Internet Giants and Users in China. International Journal of Communication, 12, 3877–3895. Retrieved from https://ijoc.org/index.php/ijoc/article/view/8543

Mayer-Schönberger, V., & Cukier, K. (2013). Big Data: A Revolution That Will Transform How We Live, Work and Think. New York: John Murray.

McCann, D., Hall, M., & Warin, R. (2018). Controlled by calculations?: Power and accountability in the digital economy [Report]. London: New Economics Foundation. Retrieved from https://neweconomics.org/2018/06/controlled-by-calculations

McQuillan, D. (2017). Data Science as Machinic Neoplatonism. Philosophy & Technology, 31(2), 253–272. doi:10.1007/s13347-017-0273-3

McQuillan, D. (2018, October 13). Rethinking AI through the politics of 1968. openDemocracy. Retrieved from https://www.opendemocracy.net/digitaliberties/dan-mcquillan/rethinking-ai-through-politics-of-1968

McQuillan, D. (2019, June 7). AI Realism and Structural Alternatives. Talk presented at the Data Justice Lab, Cardiff. Retrieved from http://danmcquillan.io/ai_realism.html

Munro, E. (2010, October 1). The Munro review of child protection. Part one: A systems analysis. London: Department for Education. Retrieved from https://www.gov.uk/government/publications/munro-review-of-child-protection-part-1-asystems-analysis

O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. New York: Penguin.

Pasquale, F. (2015). The Black Box Society: The Secret Algorithms that Control Money and Information. Cambridge, MA: Harvard University Press.

Pasquale, F. (2019). Professional Judgment in an Era of Artificial Intelligence and Machine Learning. boundary 2, 46(1).

Poon, M. (2016). Corporate Capitalism and the Growing Power of Big Data: Review Essay. Science, Technology & Human Values, 41(6), 1088–1108. doi:10.1177/0162243916650491

Qlik. (2017, January 11). UK Police Force Visualizes Incident and Operations Data to Fight Crime Faster and Improve Public Safety [Press release]. Retrieved from https://www.qlik.com/us/company/press-room/press-releases/0111-police-force-visualizes-incident-operations-data-fight-crime-faster-improve-public-safety

Ruppert, E., Isin, E., & Bigo, D. (2017). Data politics. Big Data & Society, 4(2). doi:10.1177/2053951717717749

Science and Technology Committee, House of Commons. (2018). Algorithms in decision-making [Report No. 4]. Retrieved from https://publications.parliament.uk/pa/cm201719/cmselect/cmsctech/351/351.pdf

Stats NZ. (2018). Algorithm assessment report [Report]. Wellington: Government Information Services. Retrieved from https://data.govt.nz/use-data/analyse-data/government-algorithm-transparency

Van Dijck, J. (2014). Datafication, dataism and dataveillance: Big Data between scientific paradigm and ideology. Surveillance & Society, 12(2), 197–208. doi:10.24908/ss.v12i2.4776

White, S., Broadhurst, K., Wastell, D., Peckover, S., Hall, C., & Pithouse, A. (2009). Whither practice-near research in the modernization programme? Policy blunders in children’s services. Journal of Social Work Practice, 23(4), 401–411. doi:10.1080/02650530903374945

Footnotes

1. Full details of the methodology, including an outline of FOI responses can be found here: https://datajustice.files.wordpress.com/2018/12/data-scores-as-governance-project-report2.pdf

2. For an interactive map of these systems, see: https://data-scores.org/overviews/predictive-analytics

3. For details of this research, see the full report here: https://www.libertyhumanrights.org.uk/sites/default/files/LIB%2011%20Predictive%20Policing%20Report%20WEB.pdf

4. See also the submission from Big Brother Watch to the UN Special Rapporteur on extreme poverty and human rights: https://bigbrotherwatch.org.uk/wp-content/uploads/2018/11/BIG-BROTHER-WATCH-SUBMISSION-TO-THE-UN-SPECIAL-RAPPORTEUR-ON-EXTREME-POVERTY-AND-HUMAN-RIGHTS-AHEAD-OF-UK-VISIT-NOVEMBER-2018.pdf

Reframing platform power


This paper is part of Transnational materialities, a special issue of Internet Policy Review guest-edited by José van Dijck and Bernhard Rieder.

Introduction

In March 2019, the European Commission fined Google’s parent company Alphabet Inc. 1.5 billion euro for antitrust violations in the online advertising market—the third fine in three years. In July 2018, European Commissioner Margrethe Vestager had levied a record fine of 4.3 billion euro on Google for breaching European competition rules by forcing cell phone manufacturers to pre-install a dozen of the firm’s apps when using Android—Google’s mobile operating system. And in 2016, the company was fined for unlawfully favouring Google Shopping services in the results of its own search engine. Commenting on the EU-Google decisions, several critics interpreted the rulings as (European) retaliations against a single (US) tech company for exerting undue market power over potential competitors entering the digital market. Others saw them as an encouraging signal for lawmakers to actively counter the market power of one dominant company, paving the regulatory pathway for further antitrust measures. In a radically transforming digital world fuelled by data and steered by platforms, European regulators and policymakers are rethinking their strategies towards digital markets in which platform power mostly rests with US firms (Crémer, de Montjoye, & Schweitzer, 2019). Meanwhile, scholars, politicians and citizens increasingly wonder whether the available arsenal of national and supra-national regulatory tools (e.g., antitrust law, competition law, privacy law, etc.) is sufficiently agile when it comes to redressing the powerful position of tech companies in the era of “digital dominance” (Moore & Tambini, 2018).

This article addresses the problem of platform power by probing current regulatory frameworks’ basic assumptions about how tech firms operate in digital ecosystems. Should platform power be assessed merely in terms of economic markets in which individual corporate actors harness technological innovations to compete fairly, thereby maximising consumer welfare? Or does platform power need to be reconceptualised in the face of emerging platform ecosystems on which citizens and societies have become dependent for their social and democratic wellbeing? Recently, a number of American and European legal scholars have argued the need for a broader set of concepts to help investigate how power gets accumulated or abused in online networked environments (Cohen, 2017; Daskalova, 2015; Khan, 2018; Rahman, 2018). In line with these calls for antitrust reform, we question the suitability of prevailing legal-economic concepts to capture undue accumulation of platform power. We argue that power concentration and asymmetry can only be remedied if we widen the scope of legal frameworks to include the socio-technical and political-economic relations in which these frameworks are embedded.

Reframing platform power is a necessary precondition for the potential harmonisation of different regulatory regimes. We acknowledge such efforts to be profoundly political and ideological in nature, so we have to articulate the normative stance from which we undertake this conceptual challenge. Our perspective is motivated by the needs of European states struggling to adhere to principles of fairness in governance, public accountability, and democratic control while being encapsulated in platform ecosystems whose technical architecture and economic dynamics are firmly grounded in American neoliberal and, to some extent, libertarian principles (Jin, 2015; Mansell, 2017; Smyrnaios, 2018). With that political-economic perspective in mind, we will take the EU decisions to fine Google-Alphabet in 2016 and 2018 as a starting point to ask how Europe’s regulatory scope can be broadened to address platform companies’ societal—rather than just economic—power.

The need for reconceptualising platform power

The notion of platform has never been well-defined, as platforms are recognised as having “features of firms and of markets, involving both production and exchange” (Coyle, 2018a, p. 51).1 Attesting to this abstruseness is the fact that the term platform is often used interchangeably with the companies that own and operate them. Indeed, some of these firms have long enjoyed a rather vague status as “connectors” in an amorphous online space that allowed them to successfully dodge lagging regulatory frameworks (Napoli & Caplan, 2017).2 By virtue of their rapid global expansion, successful single platforms became the backbone of sizeable companies, which then diversified by operating numerous “multi-sided platforms” or MSPs (Tiwana, 2014). After a bonanza of new market entrants, a few rapidly growing firms were able to position their own services at crucial intersections of an emerging platform ecosystem.3 As digital markets evolved, worries about concentration of power and an oligopolistic market structure have grown proportionally. In the US and Europe, those worries are particularly (though not exclusively) levelled at Alphabet-Google, Amazon, Facebook, Apple, and Microsoft (GAFAM). The Big Five are regarded by many not just as individual companies engaged in mutual competition, but also as a “corporate platform elite” utilising “superplatforms” to control the gateways to digital markets (Ezrachi & Stucke, 2016; Dolata & Schrape, 2018; Srnicek, 2017).

The quintet’s rapid ascendance triggered several questions: what makes the big tech companies different from conventional market players? What makes them powerful as individual platform companies and as a collective? With the emergence of a platform-and-data-driven ecosystem, a mixture of old currencies (attention and capital) and new ones (data and users) became paramount to developing a shared set of platform mechanisms to rule digital interactions and transactions. Beyond sheer size or capital accumulation, central to most concerns is a platform company’s ability to channel large data flows to the detriment of consumers in terms of pricing (an old worry), but increasingly also in terms of citizens’ personal information being used as an instrument for manipulation and as input for artificial intelligence and machine learning analysis (new worries). This holds true for single companies’ power, but perhaps even more so for the power dynamic and mechanisms they share as co-developers of the ecosystem’s infrastructure. Arguably the chief mechanisms in this dynamic are datafication and commodification: the algorithmic governance of data flows and their transformation into business models based on the trade of (mostly free) services for (mostly user-generated) data (Graeff, 2017; Van Dijck, Poell, & De Waal, 2018).

Platforms’ ability to develop interoperable technical and economic standards and to control a set of platform mechanisms, combined with their potential to leverage network effects and global diffusion, have become crucial conditions for power accumulation (Nieborg & Poell, 2018). This distinctive power raises people’s concerns about 1) their potential to endlessly recombine and reuse data flows as input for algorithmic knowledge; 2) their ability to manage various gateway functions to steer online traffic; 3) their potential to exert control over relationships with “complementors”, platform-dependent stakeholders such as advertisers, app developers, newspapers, gig workers, home sharers; 4) their capacity to govern connective infrastructures on which users are increasingly dependent for all their online activities; and 5) their potential to interfere with social and democratic functions in society.

Evidently, worries about platform power extend beyond mere economic concerns, and pertain not just to markets but to society as a whole. Yet there is no single regulatory framework to address all of these concerns. The most relevant regulatory frameworks are consumer law, competition law, antitrust law, and privacy law—areas in which the EU authorities have been laudably active. They commonly focus on consumer welfare, looking into ways in which single companies behave in specific markets; their aim is to ensure a level playing field that is in the best interest of consumers with regards to pricing, accessibility, and choice. Since new entities like platforms, users, data flows, and algorithms have entered legislative and regulatory discourse, they are often used as complements to, or interchangeably with, conventional concepts. However, the mounting complexity of digital societies necessitates revisiting regulatory frameworks’ basic assumptions.

American and European legal scholars have begun to ask whether regulatory frameworks still “capture the realities of how dominant firms acquire and exercise power in the Internet economy” (Khan, 2018, p. 122). Coyle (2018b, p. 12) contends we ought to abandon a traditional market definition in favour of a wider classification of platforms as social constructs. Framing the problem beyond competition law, Cohen (2017, p. 144) points at the extraordinary power of platforms to supply and organise digital infrastructures that not only control markets but “reshape the conditions for economic exchange”. By the same token, Ezrachi and Stucke (2016, p. 586) wonder whether a level playing field is at all possible in a world “where entry is possible, but expansion will likely be controlled by super-platforms”. And arguing in favour of an expansive antitrust framework, Patterson (2017) contends such a perspective should not only be applied to prevent anticompetitive conduct by platform owners to the economic detriment of consumers, but may also be used to prevent societal harm in the form of forcing consumers to exchange personal information for free services. In line with this reasoning, Colaps (2018) favours a coordinated EU-approach that connects relevant competition analysis to the misuse of personal data in digital markets.

So how can we address legitimate concerns about power abuse and undue power concentration of platform companies if they reach beyond the current legislative and regulatory frameworks? Such an investigation starts with some basic questions: where is platform power located, at whom is it levelled, how and by whom is it exercised? We think it is crucial to widen conventional legal probes to include political and sociotechnical perspectives. Therefore, we propose three paradigmatic shifts in the conceptualisation of platform power. First, we suggest expanding the notion of consumer welfare to citizen wellbeing, hence addressing a broader scope of platform services’ beneficiaries. Second, we propose to regard single platform companies as part of an integrated platform ecosystem, acknowledging its inter-relational, dynamic structure. And third, we shift attention from markets as (ideally) level playing fields towards societal infrastructures, in which platforms introduce new hierarchies and dependencies. Each of these shifts will be elaborated in the next three sections.

From consumer welfare to citizen wellbeing

Since the 1970s, the foundations of antitrust legislation, competition and consumer law, both in the US and in Europe, have rested on notions of single companies operating in markets aimed at safeguarding consumer welfare; the phenomenon of companies operating multisided platforms (MSPs) has later been squeezed into this frame. The 2016 and 2018 fines levelled at Alphabet-Google illustrate this. Google’s Search product has been the economic heart of its activities from the very outset, but the company has “branched out” into disparate sectors, ranging from video-sharing to health and from retail to education. As noted in our introduction, in 2016, EU-regulators proved that Google systematically favours its own Shopping product in its Search rankings, to the disadvantage of consumer choice and to the detriment of small businesses that are increasingly dependent on the search engine. The accumulation of market power was found to be contingent on the company’s ability to leverage economies of scale and scope by aggregating transactions among consumers and businesses, as well as by vertical integration of data flows and algorithmically-driven analysis across products and divisions (Barwise & Watkins, 2018).

The 2018 verdict differed from the 2016 one in that it fined parent company Alphabet for effectively imposing its services upon device manufacturers. Alphabet was also found guilty of enticing manufacturers and telecom operators to offer Google Search by giving them a revenue share each time a Google ad is clicked on. Forced integration of various platform services, according to the regulator, prohibits fair competition and gives the company unfair advantages over other entrants in the market for apps. In response to the EU-verdict, Google’s chief executive Sundar Pichai warned that the decision may harm consumer welfare because app bundling optimises the user experience and facilitates innovation by third-party developers (Warren, 2018).

The two EU-verdicts both address Google’s ability to control a vertically integrated system upstream and downstream, at the expense of consumers as buyers of products and services, i.e., phones with pre-installed apps or Google Shopping deals. In general, the verdicts intersect with consumers’ short-term interest, as they aim at preventing discriminatory pricing and guaranteeing consumer choice; and they protect the interests of entrepreneurs, ensuring a level playing field for businesses large and small. A generous reading of the verdicts shows how the regulator keenly recognises that “pricing” is a dubious concept in an environment where most services are often free of charge and where data have become the prime currency. Indeed, the notion of datafication leads to an expansive conception of consumer interest where the potential combination of data flows from different services may be primarily aimed at personalisation and profiling to steer consumer behaviour (Crain, 2019). Implicitly, the verdicts also address a long-term broader concern: Alphabet’s ability to integrate its own hardware, software, analytics, distribution, and marketing services allows them to collect, store, and process more data, which in turn provides enormous competitive advantages when entering new markets, using them against competitors who lack historical data. In capturing the constellation of digital markets, legislators and regulators are thus reinventing the notion of “consumers”.

In addition to being consumers and producers, users are also citizens who have come to depend on services offered by platform companies for their democratic and civic duties. Google does not just offer search and shopping services but also gives access to information and news—crucial for making informed democratic decisions—and provides advertising platforms which are tightly interlinked with the company’s other platforms. Of course, one may argue a citizen “consumes” news and information, and that, therefore, Google’s News aggregator or YouTube is no different from its Shopping product. Yet the back-end integration of data flows from news content distribution with advertising, search, and social networking invokes a whole new set of questions about consumer welfare (Nechushtai, 2018). Are news consumers the same as retail consumers? Can profiled information derived from shopping data be used to send political advertisements?

The notion of platform users as citizens takes us beyond the definition of individual recipients of services for another reason. Citizens can also take the shape of collectives or public bodies—think of classes of students or public school systems, which for their educational data processing have become increasingly dependent on platform companies. Google, for instance, sells Chromebooks that come installed not only with Google’s familiar services (Search, Gmail, etc.) but also with a specific set of apps, such as Google Suite for Education; the latter allows schools to integrate educational software along with performance tracking and analytics, as well as various administrative functions. The possibilities for cross-linking data flows from various platforms are of course manifold. Should school children be considered “customers” or a vulnerable group of citizens whose educational environment warrants special protection? In light of the 2016 and 2018 verdicts, the question is whether the EU-regulator should treat educational platforms differently from other platforms.

For more than one reason, the notion of consumer welfare is inadequate to account for citizens’ wellbeing (Khan, 2018; Melamed & Petit, 2018). While most regulatory regimes currently address short-term concerns about a user’s welfare in terms of discriminatory pricing or controlled access, they hardly tackle the potential long-term consequences in terms of privacy, surveillance, access to accurate information, or (social) profiling that may be detrimental to citizens’ wellbeing. Of course, those concerns may well be addressed under various other regulatory frameworks, such as the European General Data Protection Regulation (GDPR), but the point is that citizens enact different roles simultaneously. You cannot bracket off the “consumer” from the “citizen”, the “entrepreneur” from the “worker”, or the “patient” from the “student” in an online environment. The type of power that digital platforms deploy affects people in various different roles, and this multiplicity of roles should be relevant to lawmakers and regulators when rethinking the governance of digital societies (Suzor, 2018).

Regulators can take a cue from tech companies themselves, which, since 2017, have started to acknowledge the concurrent roles of users as consumers, entrepreneurs, and citizens—prompted mostly by public outcry against their deficient policies in terms of content moderation. An example to instantiate this was provided in July and August 2018, when Apple removed four of Infowars’ podcasts from iTunes because its host, US conspiracy theorist Alex Jones, broke Apple’s Terms of Service with regards to hate speech; Google shut down Infowars’ popular YouTube channel; Amazon abandoned Jones’ companies’ product endorsements; and Facebook removed four Infowars pages for violating the social network’s policies on invoking violence and hate speech. Other significant platform services quickly followed suit by denying Infowars access to their services or banning it from some of their crucial MSPs. This is relevant for two reasons: first, platform operators acknowledge their users’ consumer value as much as their civic role; and second, although the companies acted as individual “custodians of the internet” (Gillespie, 2018), their concerted—even if not orchestrated—effort instantiates a cognisance of their collective responsibility for a proprietary ecosystem that has far-reaching societal influence.

So while companies “govern” platform societies by acknowledging their societal impact on citizens-cum-consumers, regulators have not yet adapted their “governance of platforms” by integrating their segmented frameworks; some regulatory frameworks exclusively address companies and consumers while others relate to societal sectors and citizens.

From platform companies to an integrated platform ecosystem

Shifting the focus, we can observe another restriction built into the fragmented regulatory frameworks that renders them out of sync with reality: their scope is often limited to single companies forming proprietary ecosystems, whereas in everyday practice, these company ecosystems are part of an integrated online environment where they can be distinguished to a degree, but cannot be separated from each other. In both EU verdicts, Alphabet-Google was fined for the vertical integration of its own platform services to the detriment of consumers and competitors, small and large. With regards to the 2016 case, some asked why Alphabet was fined for funnelling consumers to Google Shopping, while Amazon—the real elephant in the retail market—remained unscathed. Concerning the 2018 verdict, some wondered why the EU did not investigate Apple for its bundling strategy across hardware (iPhone), operating systems (iOS), app stores (iOS App Store), and a number of other services. Google CEO Pichai implicitly alluded to Apple when, in his response to the 2018 verdict, he said that the forced “unbundling” of apps may “upset the balance of the Android ecosystem”, which would send “a troubling signal in favour of proprietary systems over open platforms” (Warren, 2018). Indeed, a similar case could be made against Amazon; Khan (2018) convincingly argued how Amazon harnesses its dominance in e-commerce and cloud hosting to expand its own line of business ventures into an “everything store” (Stone, 2013). Or consider Facebook, which besides running the world’s major social network service, operates a “family of apps” (Messenger, Instagram, and WhatsApp), each of which can be seen as a “platform instance” that is integrated into Facebook’s wider data infrastructure (Nieborg & Helmond, 2019).

Each of these tech giants runs its own chain of multi-sided platforms operated by a single “platform company”. What renders them powerful is their ability to steer data flows and use them as input for algorithmic profiling and ad-targeting across their own platforms. One common response to corporate platform ecosystems becoming monopolistic is to demand breakups or refuse mergers of large MSP-owners with too much power in one market. Such a market could be a conventional consumer market (e.g., online platforms for retail or urban transport) but also a new kind of platform market (e.g., social networking or app stores). Critics have called the merger of Instagram and WhatsApp with Facebook unjustifiable market concentration—partly because Facebook tends to dominate the new market of social networking to the detriment of its direct competitors (e.g., the now defunct Google Plus), but perhaps even more so because it concurrently performs a gatekeeping function to online news distribution and advertising services. Facebook’s ownership of the Facebook app, Messenger, Instagram and WhatsApp raises concerns about the company’s potential sway over public discourse. Evidently, the concern of single tech companies operating as competing platform company ecosystems that dominate markets is still valid, but the power to control access to the ecosystem as a whole may be more disconcerting.

Another concern pertains to concentration of platform power within an oligopoly of company ecosystems (Smyrnaios, 2018). One way to measure market dominance is to look at relative market share compared to competitors (e.g., Google has approximately 90% of the search market in Europe, while Microsoft’s Bing has 7%). Google forms a duopoly with Facebook in online advertising and with Apple in app stores, while cloud computing is controlled by a triopoly (Amazon-Google-Microsoft). Assessments of oligopolies come in contrasting flavours. For instance, public policy scholar Diane Coyle concludes that dominance in specific platform markets comes in the form of “intense oligopolistic rivalry with one or at most two of the rest of the GAFAM group” (Coyle, 2018a, p. 8). She argues that each platform market segment is dominated by a different combination of one, two, or three of the Big Five players; they compete in some segments, yet collaborate in others. By contrast, Nicolas Petit (2016) refers to big tech firms as “moligopolists”, contending that they are engaged in a process of vibrant oligopolistic competition, hence benefitting a healthy digital market because there will always be at least a few rivalrous competitors.
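
To relate such share figures to formal assessments of concentration: a standard yardstick is the Herfindahl-Hirschman Index (HHI), the sum of squared market shares, with markets above roughly 2,500 points treated as highly concentrated under US merger guidelines. The following minimal Python sketch uses the approximate European search shares cited above plus an assumed 3% residual for all remaining engines; the residual figure is our assumption, and treating it as a single firm slightly overstates the index.

def hhi(shares: dict) -> float:
    """Herfindahl-Hirschman Index: sum of squared shares (in percent)."""
    return sum(s ** 2 for s in shares.values())

# Approximate EU search shares cited in the text; "others" is assumed.
eu_search = {"Google": 90.0, "Bing": 7.0, "others": 3.0}
print(hhi(eu_search))  # 8158.0, far above the ~2500 threshold for a
                       # "highly concentrated" market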

Regardless of these different assessments of the Big Five as a collective powerhouse, we think the complexity of platform power cannot be understood exclusively in terms of monopolistic or oligopolistic domination by single company ecosystems. Instead, we propose to broaden the scope from single corporations operating MSPs to an integrated platform ecosystem, which allows us to inspect how platforms behave in relation to each other, across markets, and across societal sectors (Van Dijck, 2013). Taking an integral approach to the platform ecosystem, one has to look simultaneously at ownership relations in terms of power over data flows, as well as at technical and organisational control over the ecosystem as a whole. What does that imply?

Ownership relations are typically the focus of political economy approaches that scrutinise acquisitions, mergers, buy-outs, and partnerships (Hardy, 2014; Mosco, 2009; Winseck, 2008). Besides platform companies owning and operating their own MSPs, power is also derived from their ability to funnel a wide variety of actors across economic sectors into their online services (Bamberger & Lobel, 2017; Bechmann, 2013). Governance over data flows that are proprietary and invisible to regulators or users gives platform companies enormous power over the ecosystem as a whole. For instance, in December 2018, the New York Times revealed that for years, Facebook had systematically given selected other platforms, such as Microsoft’s Bing search engine, Amazon, and Spotify’s music streaming service, access to private user profiles (Dance et al., 2018). And on the technical level, the interweaving of multiple data flows is operationalised through the extension of platform features and buttons into third-party sites and apps, e.g., Facebook’s like button or log-in feature (Helmond, 2015; Nieborg & Helmond, 2019).

The complexity of ownership relations, governance, and technical interrelations leads to a dynamic that is all but inscrutable to outsiders. For instance, Facebook had a partnership with Uber that allowed users to order taxis through its Messenger app. By the same token, we can identify Google as a shareholder of Uber, and yet Google Maps recently severed its integration of the taxi-app to allow its product to attract customers from Uber’s competitors.4 These examples all go to show that while platform companies may be competitors in one segment, they are partners in others, as they may channel users towards their own branded services via rival platforms. Facebook’s apps Messenger and Instagram, for instance, are the most downloaded apps in Apple’s iOS App Store, and they are used as often as Apple’s own native apps that offer similar functionality. Through an intricate dynamic of relational platform services, tech companies manage to govern an opaque and complex ecosystem in which connections are invisible to the public eye and hence largely beyond societal control. The invisibility of centralised data flow control stands in sharp contrast to the user’s lack of control over their own generated data.

Analysing such relationships across platforms, companies, and markets could theoretically result in an intricate taxonomy of single platforms and their mutual (ownership, governance, technical) connections. However, such efforts may be at best momentary, since platform constellations are transient and dynamic. To design effective regulation, we need to be more pragmatic and untangle the patterns of dependency that tie platforms, end-users, and complementors together. Such patterns allow us to see how some platforms accrue unfair advantages from controlling specific nodes in the integrated platform ecosystem, through gatekeeping, lock-in, cross-subsidising, or combining crucial data flows. The notion of an integrated platform ecosystem potentially widens the conceptual horizon from single company platform configurations operating in specific markets to an integrated environment where platforms operate across markets and societal sectors. On the surface all platforms look equal: they tend to leverage similar management strategies, such as network effects, strong brands, a reliance on habitual usage and a striving for seamless connectivity (Barwise & Watkins, 2018, p. 43). But once we examine the platform ecosystem as an integrated environment, it appears remarkably hierarchical in its (self-)organisation. Platform interdependency, as we will argue below, hinges mostly on infrastructural nodes of power.

From platform markets to societal infrastructures

When we say that some platforms have crucial “infrastructural” power in the ecosystem, this may invoke images of equivalents in the physical world, such as mainports, interstate highways, water management, or sewer systems (Frischmann, 2012). Living in a digital world equally requires physical infrastructures, upon which the platform ecosystem has been built and which allow societies to organise all kinds of activities online.5 There are several reasons to complement the concept of markets with the notion of infrastructures online. First, some platforms have carved out gatekeeping positions in the ecosystem that give them enormous leverage over all kinds of economic transactions entrusted to them as intermediaries between users, complementors, and businesses. Second, the influence of some platforms goes way beyond markets, affecting entire societal sectors, democratic processes, online social traffic, and national institutions.

The first reason prompts a definition of the term “infrastructural” in relation to platforms. Building on earlier work by Plantin et al. (2018), we refer to infrastructural platform services to identify the integrated ecosystem’s “nodes” through which data flows are managed, processed, stored, and channelled, and upon which many other online services, complementors, and users have come to depend. In our earlier work, we have identified some seventy infrastructural services, including social networking services, search engines, app stores, advertising systems, retail networks, cloud services, pay systems, identification services, audio-visual platforms and more, that have infrastructural functions (Van Dijck, Poell, & De Waal, 2018). It is important to note that “infrastructural” is not a property of one platform itself, but a corollary of a platform’s deployment and function in the context of other platforms in a dynamic ecosystem. Infrastructural services may vary in form and function: for instance, cloud services are very different from social networking services, and advertising platforms are distinct from app stores, hence warranting different treatment by regulators. Following this logic, attribution of infrastructural power can hardly be limited to one company or one market, but needs to apply to how platforms operate in conjunction.

Let us return to the EU’s verdicts in both Google cases, which treat all platforms equally as markets, whether it is “search” or “shopping”. The question arises: what would warrant the qualification of “search” as an infrastructural platform? The fact that Google’s search engine handles 90% of online queries in the European search market, on which many businesses depend for their online visibility, is an important indication of its huge social responsibility. But equally relevant is the fact that Google simultaneously operates AdSense and Google Marketing Platform (previously DoubleClick) as online advertising services upon which many users, including small businesses, depend. The combination of data flows allows for potential interference between organic search results and paid (or sponsored) results. To ensure consumers’ trust in its search engine, Google vows to keep them apart and to guarantee the neutrality of its search algorithms (Rieder & Sire, 2014). Such promises are in a sense an acknowledgement of the platform’s special (infrastructural) status, but whether they merit users’ trust is questionable, considering Google’s proven misbehaviour in the 2016 Google Shopping case.

One condition for allowing a search platform to operate as a near-monopolist could be to regulate it as an infrastructural platform. Such an epithet could warrant putting up a "firewall" around the platform, because the neutrality or impartiality of algorithmic selection is crucial to its trustworthiness as a service across the integrated ecosystem. In the past, some legal scholars have proposed enforcing firewalls between different types of platforms, such as hardware producers, content creators, and distributors (Zittrain, 2008; Rahman, 2018). In our assessment, firewalling would affect individual platforms (e.g., Google Search) rather than an entire platform company or an entire class of platforms, for the specific goal of guaranteeing a reliable infrastructure on which other platforms (including Alphabet's own) can rely. Acknowledgement of infrastructural platforms inevitably requires independent oversight to guarantee the transparency of a platform's use of algorithms and processing of data flows. Of course, such measures would deeply impact Alphabet's business models, but they might be less drastic than other potential measures, including a forced breakup of the company. It is beyond our scope and expertise to explore specific regulatory instruments, but we will return to this in the conclusion.

The second motive to employ the notion of infrastructural is to highlight that some platforms have accrued considerable social and political value in addition to economic worth. In 2018, Facebook and Google both faced charges of political bias and manipulation pertaining to their social networks (Facebook and Instagram) and video-sharing services (YouTube). The debate clearly reached a contentious stage, in which governments and citizens addressed platform companies as guardians of a societal infrastructure. During the 2018 parliamentary hearings in Brussels and Washington, where CEOs of the Big Five were called to testify, some politicians openly discussed whether draconian government interventions in a platform's operation (including forced break-ups) might be legitimate to guarantee the long-term sustainability of the connective ecosystem. It is a truism that politicians like to call for easy-to-understand, one-size-fits-all solutions. However, it is much harder to explain how differentiated services afford tech companies various types of power in the integrated platform ecosystem. Social networks and video-sharing platforms function differently from search engines, raising the question of whether they should be regulated as media companies or publishing houses, holding them liable for the content they distribute. Infrastructural services such as search engines could be regulated through neutrality requirements, but such measures work out very differently for social networking services, which simultaneously function as news aggregators. As it stands now, platform companies with multiple MSPs are either exempted from regulation under section 230 (see note 2) or they face regulation under a single regulatory regime, likening them for instance to telecoms, software companies, internet service providers, or media companies. Based on a detailed analysis of particular platform services, as well as a nuanced understanding of the integrated platform ecosystem, we should be able to determine how these services function within the larger ecosystem. In turn, this should enable more precise regulation.

Besides the "how" of regulation, we should also address "who" is responsible. If we look at physical infrastructures, such as water management systems, it is commonly a government's responsibility to ensure facilities for water and sewage systems. As Cohen (2017, p. 144) observes, some physical infrastructures are managed as utilities by charging for metered consumption, while others are privately managed but subject to strict regulatory obligations. With regard to the integrated platform ecosystem, one may ask which pivotal platform functions controlled by corporate actors are no longer just optional infrastructures but hard-to-avoid necessities. To what extent should some infrastructural services be regulated as utilities, while others can be provided as private services subject to regulatory control?

One problem we encounter here is that the idea of distinct public utilities, public sectors and public space is simply not part of the online ecosystem's architectural design. Institutions, governments, and non-governmental actors have rapidly integrated with, and become dependent on, the corporate platform ecosystem, where entire public sectors (e.g., health, education) are increasingly reliant on privately held infrastructural platforms (Lupton, 2016; Williamson, 2017). The mechanisms, strategies, and infrastructures developed by big tech companies are a contingent result of free-market forces; governments and non-market forces were never involved in their design. In other words, the integrated platform ecosystem imposes the market dynamics of online economic transactions and consumer behaviour on every type of online activity—economic or social, private or public—hence collecting and connecting data flows related to disparate practices, ranging from people's fitness regimes to democratic election processes. In a remarkable inversion of common hierarchies, societies, which historically imposed their administrative rules upon technological innovations, have given way to a proprietary platform ecosystem that administers its data-and-algorithms-based rules upon societies (Caplan & boyd, 2018).

While undergoing this seismic shift, citizens and policymakers have yet to recapture their public role vis-à-vis the platform ecosystem. How and where should non-corporate actors be involved in the ecosystem’s organisation, particularly when infrastructural functions are at stake? Most relevant legal frameworks are still predicated on the notion that it is a government’s task to take care of the public interest, while companies serve the private interest unless forced by governments to also take care of a public interest (Gerbrandy, 2016). If we would adhere to this principle, a simple conclusion would be that since there is no public infrastructure in the online world, its corporate players carry no responsibility for the common good. However, corporate actors find themselves increasingly confronted with societal backlashes, forcing them to accept the huge responsibilities that come along with organising not just the technical infrastructure for a consumer market but a societal apparatus defining the norms for human interaction and information exchange. Facebook’s chief executive Mark Zuckerberg (2017) has openly admitted his social network service should now be considered a “social infrastructure”.

The question as to whether or not to turn some platforms into utilities has been extensively discussed in legal circles.6 Eventually, though, platform governance is a political choice that needs to be decided at the various levels of policy-making. The safeguarding of public interest and public values has to come from local, national, or supra-national (EU) governments designing rules on how platform companies may run their services for the common good. In the best tradition of European democratic governance, responsibility for taking policy decisions may come from multi-stakeholder organisations balancing the interests of state, civil society, and market actors (Cowhey & Aronson, 2017). So far, legal and regulatory authorities have typically acted cautiously within the bounds of distinctive legal frameworks to speak on behalf of consumers while applying market values; they have yet to find ways to account for public values or public interests where citizens are concerned—citizens whose interests go beyond typical consumer benefits and beyond typical market concerns. But if we accept the notion of an integrated platform ecosystem, we can only conclude that there is a desperate need for integrated policy and regulatory frameworks.

Conclusion: Integrated ecosystems require integrated policy and regulation

The platformisation of products and services has not simply replaced old economies and markets, but has profoundly transformed societal organisation and public accountability. We need to recognise the vital role the state and its citizens play in making markets serve society rather than the other way around; it is our contention that markets on their own will not provide a platform infrastructure that optimally serves society. In the previous sections, we proposed to reframe platform power by expanding the notions of consumers, companies, and markets to include broader notions of citizen wellbeing, an integral platform ecosystem, and societal infrastructure. Such expansive notions obviously push the envelope of current regulatory frameworks, but they may help pave the way for a comprehensive approach to policy-making and regulation. To get there, a few steps are necessary.

First, academics and policymakers need to deliver a more precise analysis and nuanced assessment of how the integrated ecosystem of platforms functions. Specific case studies may reveal how power is carved into the ecosystem's infrastructure, its aggregation of data flows, and its algorithmic curation of online activities; how power is distributed amongst various stakeholders whose platforms exert distinctive yet interdependent functions in the ecosystem; how some of these services can be regarded as societal infrastructures and therefore warrant regulatory treatment in keeping with their public gravity; how the ecosystem facilitates competition and collaboration at the same time—integrating services from different platforms within the same company as well as from partnering and competing companies; and lastly, how such power is applied across the entire value chain, upstream and downstream, while acknowledging how various kinds of recipients may enact different roles simultaneously.

Such analyses can become more concrete when starting with an inquiry into selected areas of infrastructural power and conducting a fair number of such specific inquiries to help define relevant features. As Constantinides, Henfridsson, and Parker (2018, p. 396) propose, we might start with examining “whether social media should be viewed as ‘critical infrastructure’ given their ability to influence critical societal functions such as elections.” Search engines could be a potential candidate for inquiry, considering their nodal function as information gateways. App stores could also be a vital object of scrutiny, given their intermediary positions as gatekeepers between end-users and (small) businesses, as well as their network-making power versus competitors. Besides social network sites, search engines, and app stores, we might want to look into online advertising and retail services as important gateways. But rather than examining them as single markets run by single proprietary ecosystems in the interests of consumers, it is essential to approach them as part of an integrated ecosystem inhabited by citizens who have become fully dependent on these systems for governing their personal and collective wellbeing.

Secondly, nuanced analyses of power in the integrated platform ecosystem can help articulate a cohesive set of governance principles, at the EU level as well as at national and local levels. New EU reports evaluating approaches to platforms and data are beginning to show an awareness of integrated societal interests (European Commission, 2018). A British discussion paper published by the IPPR Commission on Economic Justice offers a shining example of what integrated policy-making at the national level could look like (Lawrence & Laybourn-Langton, 2018). And at the local level, there is a growing awareness that cities and civil society organisations play a vital role in communal efforts to govern the platform society. Amsterdam and Barcelona, for example, are currently designing comprehensive platform policies that provide a basis for negotiating power with big platforms, such as Google and Airbnb, in urban areas where the control over data flows plays a vital role for urban infrastructures.

Thirdly, an analytic reframing of platform power could nourish efforts in various countries to harmonise, expand, and update current regulation. Harmonising single regulatory frameworks for antitrust, consumer, and competition law with recently updated frameworks that address privacy, media regulation, and net neutrality may be a first step towards an integrative approach. Regulators across Europe are cautiously exploring new territory, and it is encouraging that new policy directions are currently being probed at various levels, national as well as transnational (Crémer, de Montjoye, & Schweitzer, 2019). Policy studies and regulatory explorations that combine specific areas such as competition law and privacy law will bring the broader scope needed to penetrate the intricate and complex problem of platform power. Legal scholars have also called for a new range of potential (ex ante) measures besides the (ex post) levying of fines after proven legal violations. Gilman and Farrell (2018) courageously argue for the articulation of "moral frameworks" at the (supra-)national level—guidelines that help legislators and institutions navigate a world that is being remade by data, platforms, and algorithms.

At a time when governments are stepping up their antitrust efforts to curb Big Tech's power, we urge policymakers to look beyond single regulatory frameworks and to consider the articulation of a comprehensive set of principles that can be applied to the platform ecosystem. We hope our reframing exercise contributes to such an integral perspective on regulatory regimes; complemented by an analytical tool set that is both expansive and differentiated, it may supply ammunition to both lawmakers' and regulators' efforts to govern the future platform society.

References

Bamberger, K.A., & Lobel, O. (2017). Platform Market Power. Berkeley Technology Law Journal, 32(3), 2–41. doi:10.15779/Z38N00ZT38

Barwise, T.P., & Watkins, L. (2018). The evolution of digital dominance: how and why we got to GAFA. In M. Moore & D. Tambini (Eds.), Digital Dominance: The Power of Google, Amazon, Facebook, and Apple (pp. 21–49). New York: Oxford University Press. Available at http://fdslive.oup.com/www.oup.com/academic/pdf/openaccess/9780190845124.pdf

Bechmann, A. (2013). Internet profiling: The economy of data intraoperability on Facebook and Google. MedieKultur, 29(55), 72–91. doi:10.7146/mediekultur.v29i55.8070

Caplan, R., & boyd, d. (2018). Isomorphism through algorithms: Institutional dependencies in the case of Facebook. Big Data & Society, 5(1). doi:10.1177/2053951718757253

Cohen, J. (2017). Law for the Platform Economy. UC Davis Law Review, 51(1), 133–204. Retrieved from https://lawreview.law.ucdavis.edu/issues/51/1/Symposium/51-1_Cohen.pdf

Colaps, A. (2018). Big Data: Is EU Competition Ripe Enough to Meet the Challenge? In R. Mastroianni & A. Arena (Eds.), 60 Years of EU Competition Law. Stocktaking and Future Prospects (pp. 31–46). Napoli: Editoriale Scientifica.

Constantinides, P., Henfridsson, O., & Parker, G.G. (2018). Introduction—Platforms and Infrastructures in the Digital Age. Information Systems Research, 29(2), 381–400. doi:10.1287/isre.2018.0794

Cowhey, P.F., & Aronson, J.D. (2017). Digital DNA. Disruption and the Challenges for Global Governance. New York: Oxford University Press.

Coyle, D. (2018a). Platform Dominance. The Shortcomings of Antitrust Policy. In M. Moore & D. Tambini (Eds.), Digital Dominance: The Power of Google, Amazon, Facebook, and Apple (pp. 50–70). New York: Oxford University Press.

Coyle, D. (2018b). Practical Competition Policy Implications of Platforms [Working Paper No. 01/2018]. Cambridge: Cambridge University, Bennett Institute for Public Policy. Retrieved from https://www.bennettinstitute.cam.ac.uk/publications/practical-competition-policy-tools-digital/

Crain, M. (2019). A critical political economy of web advertising history. In N. Brügger & I. Milligan (Eds.), The SAGE Handbook of Web History (pp. 397–410). Los Angeles: SAGE.

Crémer, J., de Montjoye, Y-A., & Schweitzer, H. (2019). Competition Policy for the Digital Era [Report]. Luxembourg: Publications Office of the European Union. Available at http://ec.europa.eu/competition/publications/reports/kd0419345enn.pdf

Dance, G.J.X., LaForgia, M., & Confessore, N. (2018, December 18). As Facebook Raised a Privacy Wall, It Carved an Opening for Tech Giants. The New York Times. Retrieved from https://www.nytimes.com/2018/12/18/technology/facebook-privacy.html

Daskalova, V. (2015). Consumer Welfare in EU Competition Law: What is It (Not) About? The Competition Law Review, 11(1), 131–160.

Dolata, U., & Schrape, J.-F. (2018). Collectivity and Power on the Internet. A Sociological Perspective. Cham: Springer. doi:10.1007/978-3-319-78414-4

European Commission. (2018). Artificial Intelligence. A European Perspective. Luxembourg: Publications Office of the European Union. doi:10.2760/11251

Ezrachi, A., & Stucke, M.E. (2016). Virtual Competition. Editorial. Journal of European Competition Law & Practice, 7(9), 585–586.

Frischmann, B.M. (2012). Infrastructure: the Social Value of Shared Resources. New York: Oxford University Press.

Gerbrandy, A. (2016). Toekomstbestendig Mededingingsrecht. Markt & Mededinging, 3, 102–112.

Gillespie, T. (2010). The Politics of 'Platforms'. New Media & Society, 12(3), 347–364. doi:10.1177/1461444809342738

Gillespie, T. (2018). Custodians of the Internet. New Haven: Yale University Press.

Gilman, N., & Farrell, H. (2018, November 7). Three Moral Economies of Data. The American Interest. Retrieved from https://www.the-american-interest.com/2018/11/07/three-moral-economies-of-data/

Graef, I. (2018). When Data Evolves into Market Power—Data Concentration and Data Abuse under Competition Law. In M. Moore & D. Tambini (Eds.), Digital Dominance: The Power of Google, Amazon, Facebook, and Apple (pp. 72–97). New York: Oxford University Press.

Hardy, J. (2014). Critical political economy of the media: An introduction. New York: Routledge. doi:10.4324/9780203136225

Helmond, A. (2015). The Platformization of the Web: Making Web Data Platform Ready. Social Media + Society, 1(2). doi:10.1177/2056305115603080

Jin, D.Y. (2015). Digital Platforms, Imperialism and Political Culture. New York: Routledge.

Khan, L.M. (2018). Amazon—An Infrastructure Service and Its Challenge to Current Antitrust Law. In M. Moore & D. Tambini (Eds.), Digital Dominance: The Power of Google, Amazon, Facebook, and Apple (pp. 98–129). New York: Oxford University Press. Available at http://fdslive.oup.com/www.oup.com/academic/pdf/openaccess/9780190845124.pdf

Lawrence, M., & Laybourn-Langton, L. (2018). The Digital Commonwealth. From private enclosure to public benefit [Discussion paper]. London: IPPR Commission on Economic Justice. Retrieved from https://www.ippr.org/research/publications/the-digital-commonwealth

Lupton, D. (2016). The Quantified Self. London: Polity.

Mansell, R. (2017). Bits of Power: Struggling for Control of Information and Communication Networks. The Political Economy of Communication, 5(1), 2–29. Retrieved from http://www.polecom.org/index.php/polecom/article/view/75

Melamed, A., & Petit, N. (2018). The Misguided Assault on the Consumer Welfare Standard in the Age of Platform Markets. Review of Industrial Organization, 54(4), 741–774. doi:10.1007/s11151-019-09688-4

Moore, M., & Tambini, D. (Eds.). (2018). Digital Dominance: The Power of Google, Amazon, Facebook, and Apple. New York: Oxford University Press.

Mosco, V. (2009). The Political Economy of Communication (2nd ed.). Los Angeles: Sage.

Napoli, P. M., & Caplan, R. (2017). Why media companies insist they are not media companies, why they're wrong, and why it matters. First Monday, 22(5). doi:10.5210/fm.v22i5.7051

Nechushtai, E. (2018). Could digital platforms capture the media through infrastructure? Journalism, 19(8), 1043–1058. doi:10.1177/1464884917725163

Nieborg, D. B., & Poell, T. (2018). The platformization of cultural production: Theorizing the contingent cultural commodity. New Media & Society, 20(11), 4275–4292. doi:10.1177/1461444818769694

Nieborg, D. B., & Helmond, A. (2019). The Political Economy of Facebook's Platformization in the Mobile Ecosystem: Facebook Messenger as a platform instance. Media, Culture & Society, 41(2), 196–218. doi:10.1177/0163443718818384

Pasquale, F. (2015). The Black Box Society: The Secret Algorithms That Control Money and Information. Cambridge, MA: Harvard University Press.

Patterson, M. R. (2017). Introduction. In Antitrust Law in the New Economy: Google, Yelp, LIBOR, and the Control of Information (pp. 1–20). Cambridge, MA: Harvard University Press. Retrieved from https://ssrn.com/abstract=2941573

Petit, N. (2016). Technology Giants, the Moligopoly Hypothesis and Holistic Competition: A Primer. doi:10.2139/ssrn.2856502

Plantin, J.-C., Lagoze, C., Edwards, P. N., & Sandvig, C. (2018). Infrastructure studies meet platform studies in the age of Google and Facebook. New Media & Society, 20(1), 293–310. doi:10.1177/1461444816661553

Rahman, K. S. (2018). The New Utilities: Private Power, Social Infrastructure, and the Revival of the Public Utility Concept. Cardozo Law Review, 39(5), 1621–1689. Available at https://brooklynworks.brooklaw.edu/faculty/988/

Rieder, B., & Sire, G. (2014). Conflicts of Interest and Incentives to Bias: A microeconomic critique of Google's tangled position on the Web. New Media & Society, 16(2), 195–211. doi:10.1177/1461444813481195

Smyrnaios, N. (2018). Internet Oligopoly: The Corporate Takeover of Our Digital World. Bingley: Emerald.

Stone, B. (2013). The Everything Store: Jeff Bezos and the Age of Amazon. New York: Little, Brown and Company.

Srnicek, N. (2017). Platform Capitalism. London: Polity.

Suzor, N. (2019, in press). Lawless: The secret rules that govern our digital lives. Cambridge: Cambridge University Press.

Tiwana, A. (2014). Platform Ecosystems. Aligning Architecture, Governance, and Strategy. Amsterdam: Morgan Kaufmann.

Thierer, A. (2013). The Perils of Classifying Social Media Platforms as Public Utilities. CommLaw Conspectus, 21(2), 249–297. Retrieved from https://scholarship.law.edu/commlaw/vol21/iss2/2/

Van Dijck, J. (2013). The Culture of Connectivity. A critical history of social media. New York: Oxford University Press.

Van Dijck, J., Poell, T., & De Waal, M. (2018). The Platform Society. Public values in a connective world. New York: Oxford University Press.

Warren, T. (2018, July 18). Google warns Android might not remain free because of EU decision. The Verge. Retrieved from https://www.theverge.com/2018/7/18/17585396/google-android-eu-fine-response

Williamson, B. (2017). Big Data in Education: The digital future of learning, policy and practice. London: Sage.

Winseck, D. (2008). The State of Media Ownership and Media Markets: Competition or Concentration and Why Should We Care? Sociology Compass, 2(1), 34–47. doi:10.1111/j.1751-9020.2007.00061.x

Zittrain, J. (2008). The Future of the Internet and How to Stop It. New Haven: Yale University Press. Available at http://yupnet.org/zittrain/

Zuckerberg, M. (2017). Building Global Community [Note]. Retrieved from Facebook website: https://www.facebook.com/notes/mark-zuckerberg/building-global-community/101545442928066

Footnotes

1. In the context of this article, we understand platforms as “(re-) programmable architectures designed to organize interactions among heterogeneous users that are geared toward the systematic collection, algorithmic processing, circulation, and monetization of data” (Van Dijck, Poell, & De Waal, 2018, p. 4). For a more thorough definition of the various meanings of platforms, see Gillespie (2010).

2. For example, in the US, Facebook and Google operate with few restrictions thanks to section 230 of the 1996 Communications Decency Act releasing them from free speech liability.

3. For lack of a better term, we use the metaphor “ecosystem”, but we are acutely aware of the restrictions involved in such figurative use; it is notoriously vague and shows a tendency to ‘naturalise’ social structures.

4. For a long time, one could order an Uber taxi from inside Google Maps; Google quietly took away this direct link in June 2018.

5. The term "infrastructure" may be confusing because there is a difference between what we call "digital infrastructure" and "platform infrastructure". We follow Constantinides, Henfridsson, and Parker in their definition of digital infrastructures as "the computing and network resources that allow multiple stakeholders to orchestrate their service and content needs" (2018, p. 381). Examples of digital infrastructures are the internet itself, data centres, and communication satellites. There is an increasingly sliding scale between digital and platform infrastructure; this is what Plantin et al. (2018) have theorised as the "infrastructuralization of platforms" and the "platformization of infrastructure". For example, cloud servers come disguised as platform services and are owned and operated by platform companies (e.g., Amazon Web Services).

6. There has been a fierce debate in media and scholarship over whether some of the big companies should be treated as utilities because their services have become common goods like water or electricity. Briefly summarised, the legal debate hovers between two extremes: while some experts argue that demanding public service from corporate platforms or forcing private corporations to become "utilities" is not a legal option (Thierer, 2013), others have argued that a public utility frame "accepts the benefits of monopoly and chooses to instead limit how a monopoly may use its power" (Wu, 2010, p. 1643, cited in Khan, 2018, p. 120).


Mediated democracy – Linking digital technology to political agency

This paper is part of Transnational materialities, a special issue of Internet Policy Review guest-edited by José van Dijck and Bernhard Rieder.

1. Digitalisation and democracy: proposal for a research perspective

The relevance of digital media for contemporary democracies is a subject of increasing interest across the social sciences, the media and the political sphere. We are observing a growing diversity of political engagement, political actors and organisational forms, particularly around election and referendum campaigns. At the same time, conventional boundaries between social movements, public audiences and political parties, even between political and non-political action, are eroding. The rise of digital media seems to have evoked a period of experimentation, which calls into question established democratic institutions. Representative democracies may undergo profound changes in form. As a result, the focus of national constitutions on the electoral dimension of democracy no longer captures evolving democratic practices, as Rosanvallon (2008) argues.

The increasing uncertainty about the stability and future of western democracies brings about old and new narratives, which seek to connect the transformation of the political landscape to the influence of digital media. It is more or less common sense today to blame social media, platform capitalism or algorithms for the disintegration of the public sphere, for example (see Margetts, 2019). Yet, how the links between the transformation of democracy and the rise and shape of digital media can or should be understood is by no means obvious, as this article argues. Indeed, studying the relationship between democracy and communication media can be akin to nailing several jellies to the wall. Both democracy and media are complex, abstract, if not aspirational subject matters that escape simple definitions, not least because these definitions are themselves the subject of long-standing theoretical debates. Both democracy and media occur in variations and seem to be in a process of constant evolution. For this reason, Papacharissi (2010, p. 2) even ascribes a "mystical" quality to the link between technology and democracy. As a concept, she writes, democracy is "evolving and fluid", and since "media (dis-)engagement" develops in line with this flow, we can but study moments within its "fluid progression" (Papacharissi, 2010, p. 11). This article takes the dynamic interplay of democracy and communication media as a premise and addresses the question of how this ensemble can be studied. Borrowing the notion of mediatisation from communication and media studies, it suggests the term "mediated democracy" as a lens for conceptualising the co-evolution of communication media and democratic self-determination.

Although the issue of relating democracy to media sits precisely at the interface of the social sciences and clearly calls for interdisciplinary approaches, most of the relevant research so far comes from communication and media studies, while contributions from political science remain rare. It may be for this reason that academic attention, including that of political science, primarily focuses on the role of communication media as a driver of social and political change. Recent research on mediatisation, which is most relevant in the context of this article, takes the proliferation of various "waves" of communication media (Couldry & Hepp, 2017) as its overall reference point for exploring the relationship between the transformation of society and communication media. Likewise, John Keane's "Democracy and Media Decadence" (2013), which traces the transformation of representative democracy towards what he calls "monitory democracy", ascribes powerful structuring agency to "communicative abundance". The reverse perspective, however, which would look at media development as an effect rather than a driver of the evolution of modern democracies, is still under-researched. As a result, communication media's properties and modes of operation often appear somewhat reified. Their fundamental contingency, while always emphasised, tends to get lost in meta-accounts of mediatisation. Thus, choosing media (r)evolutions as the reference frame of mediatisation (see Couldry & Hepp, 2017; Krotz, 2017; Lundby, 2014) entails the risk of obscuring our view of all the paths not taken in the development of communication media. Similar shortcomings can be found with regard to the concept of democracy in the context of communication technologies. Studied from a mediatisation perspective, democracy seems to narrow down to a single model or dimension, which, as Thiel (2018, p. 52) notes, attributes a "strangely universal trait" to a concept that has been interpreted in such diverse ways throughout the history of political ideas. This leaves us with the difficult issue of how to approach the relationship between digitalisation and democracy without losing sight of its inherent openness or contingency.

Building on the research on mediatisation, the concept of mediated democracy suggests understanding the relationship of digital technologies and political self-determination as a constellation1 rather than a causal relationship. Instead of looking at the impact of digital technologies, the idea of mediated democracy assumes an ensemble of conditions, which enables possibilities of political action without determining them.2 Hence, mediated democracy does not denote a specific type of democracy (such as deliberative democracy, for instance) but a specific research perspective, which centres on the relationship of democracy and communication media, understood as ranging between co-evolution and co-production.

The inspiration for this approach goes back to Benedict Anderson's "Imagined Communities" (1983), a book which illustrates how the story of the relationship between the emergence of the printing press, the public sphere and the democratic nation state can be told.3 The next section proposes some lessons to be learned from Anderson's account. The third section sketches out the concept of media underlying my understanding of mediated democracy. Unlike mediatisation research, whose definition of media typically reflects the evolution of communication technologies, this article draws on recent contributions from the philosophy of technology, which specifically aim at emphasising the contingency and performativity of media development. The fourth part tests the usefulness of this school of thought by way of a short reconstruction of the internet as the unlikely winner among several competing network architectures. The fifth part, finally, offers an interpretation of digitally mediated democracy by situating it in the context of the transformation of basic institutions and mechanisms of representative democracy. The supposition is that the crisis of representative democracy shapes the political use and development of social media, while the properties of social media are simultaneously transforming the experiences, and future expectations, underlying democratic engagement. If the outline of this article sounds a bit fragmented, the impression is not inaccurate. This contribution offers reasons and building blocks for a concept of mediated democracy rather than a fully fleshed-out model.

2. Rereading Benedict Anderson: print capitalism as enabler of representative democracy

Benedict Anderson's "Imagined Communities" (1983) interprets nationalism as a cultural construct going back to the intersection of various historical forces referred to as "print capitalism". Print capitalism, according to Anderson (1983, p. 36), enabled people "to think about themselves, and to relate themselves to others, in profoundly new ways"; it created the possibilities to imagine themselves as part of a "community in anonymity". What is crucial about Anderson's narrative is the lack of a single driver. Compared to the present public discourse on digitalisation and its social consequences, the printing press itself seems to play a minor role in his account. To be sure, the printing press formed the mechanical precondition for the development of the newspaper, one of the first industrial mass products. Yet, its invention and use must be interpreted against the backdrop of the long-term process of secularisation, which manifested itself as a growing interest in non-religious literature and new formats of secular texts. Literacy increased and, simultaneously, larger transregional language communities formed, both preconditions for the emergence of newspaper markets. Hence, the rise of a public sphere extending beyond local communities was made possible and mediated by printed texts (see Eisenstein, 1979) but cannot be solely explained by the rise of technical artefacts.

As a general constellation enabling the emergence of the national demos, Anderson (1983, pp. 42–43) identifies the "explosive interaction between a system of production and productive relations (capitalism), a technology of communication (print), and the fatality of human linguistic diversity". Predicated on this constellation of technology, capitalism and language, practices of newspaper consumption evolved that facilitated a sense of belonging among strangers precisely because they combined the reception of news about their world with ceremonial actions: "The obsolescence of the newspaper on the morrow of its printing (…), creates this extraordinary mass ceremony: the almost precisely simultaneous consumption ('imagining') of the newspaper-as-fiction. (…) Yet each communicant is well aware that the ceremony he performs is being replicated simultaneously by thousands (or millions) of others" (Anderson, 1983, p. 51). Anderson's public sphere interconnects specific technologies and a mode of capitalism, but also collective production and consumption practices able to create a sense of collective identity.

If we accept that publics are necessarily mediated by communication technologies and national demoi cannot develop without a public space, it seems plausible to understand representative democracies as mediated forms of government. In other words, the notion of political self-determination constitutive for representative democracies is necessarily predicated on the existence of distribution media. The emergence of reproducible and unified print languages links a "community in anonymity" (Anderson, 1983, p. 36) over geographic distances, enables exchange and the evolution of common worlds and concerns. The newspaper market contributed, as Couldry and Hepp (2017, p. 43) put it, to the "thickening" of "national communicative spaces" and to the evolution of modern societies. Seen from this perspective, democracy and technology are linked through a co-evolutionary process of mutual enabling. While print capitalism made possible the territorially constituted nation state, national politics helped in shaping the concept of newspapers and subsequent mass media formats. Anderson's narrative exemplifies the diversity and contingency of long-term macro-level developments as well as the specific social practices that we need to consider in order to understand the role that digital media play in the present transformation of modern societies. It shows the great variety of possible relationships between political regimes and communication media, and it demonstrates how such constellations can be studied. Yet while Anderson chose a historic perspective, which focuses on the specific material and semantic properties of print capitalism, this article pursues a more conceptual approach towards communication media. By drawing on a philosophy of technology school of thought, it seeks to direct attention to the contingency of the process of digital mediatisation.

3. Conceptualising mediatisation: medium and form

The role of mass media as an enabler of modern societies has been shown in much detail. However, in order to understand the contingency of the evolution of communication media, it is helpful to draw on a more abstract notion of media technologies. Such an understanding of media should meet three requirements: it should reflect the contingency or openness of technology development, take into account the performativity of its use, and, in order to study communication technologies and democracy as co-evolutionary processes, it should offer an interface to broader meso- and macro-level social theories.

In accordance with science and technology studies, recent philosophy of technology approaches seek to overcome the common duality between society and technology in favour of emphasising the technicity of the social: "Technology is society made durable", as Latour (1991) once put it.4 Following the work of Don Ihde, technologies are understood as mediators of the relationship between people and the world. Hence, people and machines are not considered separate entities but as mutually constituted: "Humans need to recognize the common bond between themselves and artifacts (…) and accept the 'co-evolution of humans and machines'", as Mitcham (2014, p. 23) summarises this school of thought. The human-machine bond shapes our common life together, our modes of social integration and communication. In short, "technological mediation is part of the human condition – we cannot be human without technologies" (Verbeek, 2015, p. 30). Current philosophy of technology approaches and recent mediatisation research both emphasise the interconnectedness of societal and medial change. Yet, the latter typically equates media with distribution media, while the former prefers a broader notion decoupled from specific medial artefacts.5 Drawing on the work of psychologist Fritz Heider, Luhmann (2012) introduces the distinction between medium and form to differentiate specific manifestations of media ("form") from the unknown reservoir of all potential appearances ("medium"). Concrete forms such as language, print or digital platforms are understood as temporary manifestations; they result from ongoing selection processes, which could have ended up differently. The full dimension of the medium, by contrast, remains unknown to us. We get to know its limits and possibilities only through the experience of changing forms.6 Each form can be reversed. In the words of Luhmann (2012, p. 118), "the medium is bound—and released. Without medium, there is no form, and without form, no medium".

One doesn't have to be a systems theorist to appreciate the analytical benefits of the medium/form distinction for studying mediatisation processes. Its major strength is its emphasis on the contingency and alterability of communication technologies understood as specific forms. Since we are "somnambulant makers of new worlds" (Mitcham, 2014, p. 26), often unaware of abandoned alternatives, the medium/form distinction encourages us to systematically study and contextualise those alternatives. Bertolt Brecht's "radio theory" (1967, p. 129) comes to mind as an example of unrealised alternatives to the unidirectional broadcasting system. Another example, discussed in the next section, concerns data networking.

The second analytical benefit of the medium/form distinction concerns its emphasis on performative effects. Communication technologies enable ways of making sense of the world (Couldry & Hepp, 2017). From a philosophy of technology perspective, the performativity of media is grounded in expectations and experiences of acting on and through them; experiences of success and failure with media, which become represented as "generalised frameworks" of the world and our influence on it (Hubig, 2006). Mediated contexts of action considered particularly impactful have a chance of turning into key images of "era narratives" (Hubig, 2006, p. 159). One could say that the "digital society" qualifies as such a key image of the present. The public discourse about it frames our alterable experiences and expectations of digital technologies. Crucially, Hubig (2006, p. 143) does not regard media as an independent cause or driver of social change but as a structured "space of possibilities", a medium in other words, which is capable of producing different forms.

4. Competing forms of computer networks

The medium/form distinction offers a window onto the contingency of digital technologies. Even if the digital medium cannot be observed and attempts to nail down its constitutive properties remain somewhat unsatisfying (Kaufmann & Jeandesboz, 2017; Kallinikos et al., 2010), it is possible to trace back emerging forms, for instance through conflicts over competing use scenarios or complicated trade-offs among quality norms and operational specifications. In his article on "the contingent Internet", Clark (2016) proposes looking at the history of the internet as a set of bifurcations or 'forks in the road', each of which could have paved the way for a different future of data networking. Indeed, at the onset of computer networks in the 1970s, there was no internet but a Babylonian diversity of more or less incompatible network architectures. During the 1990s, when efforts increased to establish global standards for data networking, the architectural diversity consolidated into two paradigmatic "conceptions for how to build a 'computer network'" (Clark, 2016, p. 9). Somewhat oversimplifying, one may portray them as the centralised and the distributed approach.

In practice, the competition over digital network architectures had many facets; it was a battle between the doctrines of the communication and computer engineering professions, a battle over future market shares of the computer and the telephone industries, but also a competition between visions of the good computer network (Abbate, 1999). Modelled after the notion of the computer as a universal Turing machine, the computer industry favoured a "general purpose network". This approach was supposed to level the distinction between the computer and the network and offer basic mechanisms for linking computer nodes and transporting data packages between them, regardless of the application. It pictured the internet as a network of networks, to be used for all applications but, controversially, not privileging any of them. The counter-vision of the computer network reflected existing public communication infrastructures such as the telephone and the postal network. It goes back to international efforts of the Post, Telegraph and Telephone administrations (PTTs), which conceptualised data networks as an assembly of interconnected national public networks to be centrally operated by the postal organisations. The PTTs optimised their Open Systems Interconnection (OSI) architecture for specific publicly planned (and charged-for) applications such as electronic mail. The French Minitel, the British Prestel and the German BTX, for example, imagined users as "tele readers" of official information resources and shopping supplies, to be accessed via terminals with limited functionality, not unlike telephones. Given the political authority of the PTTs, the public data network model was considered the likely winner of the competing network architectures in the 1990s. Even in the US, the internet was regarded as a merely temporary phenomenon, soon to be replaced by "the real thing", to be introduced by the common carriers (Clark, 2018, p. 24).

The competition between two categorically different models of data networking exemplifies Niklas Luhmann's distinction between medium and form. Each type of communication media can generate different manifestations. It is worth noting in this context that the struggle over the internet's architectural principles has never really ceased. Even if the internet (as an infrastructure for data transmission) appears fairly stable, its distributed form is still being renegotiated.7 On top of that infrastructure, the same kind of open-ended, dynamic interplay between medium and form can nowadays be found for the development of platforms and social networks. The profound changes of Facebook's constitution within a few years of its existence (see van Dijck, 2013; Ellison & boyd, 2013), for example, give ample evidence of all the paths not taken.

While it seems fairly obvious that both national and transnational public spheres would have evolved rather differently under the conditions of centrally managed public data networks, one should resist the temptation of thinking in causal terms about this relationship. The co-evolutionary interplay of infrastructure development and socio-economic transformation can be better understood if one situates the struggle over digital mediatisation in a broader social context.

The internet as an offspring of late modernity

Societies began recognising computer networks as a genuinely new space of possibilities during a time of fundamental cultural, economic and political transformation. These changes have been variously termed "late modernity" (Giddens, 1991), "reflexive modernisation" (Beck, 1992; Beck, Giddens, & Lash, 1994), the "postmodern condition" (Lyotard, 1979) or the "end of organised modernity" (Wagner, 2008). While these grand narratives of the end of the 20th century accentuate different aspects, they share the proposition that a stable social period, which had shaped the global North after the Second World War, was coming to an end. This societal formation was characterised by a strong (welfare) state, which assumed responsibility for the prosperity and stability of the economy and the well-being of its citizenry, including the universal availability and quality of its public infrastructures. This notion of a protective hierarchical state corresponded with a high level of collective organisation in the form of political parties, trade unions and commercial associations (termed "neo-corporatism" in Europe) and with a social stratification in the form of class-based milieus stabilised through widely shared social norms.

Organised modernity came under pressure when cultural norms began diversifying, collective identities in the form of classes and political parties lost cohesion, and markets increasingly expanded beyond the nation state and challenged the paternalistic welfare state model. Economic innovation, individual freedom and cultural diversity became benchmarks in their own right and formed a competing force against dominating rules and customs. Citizen initiatives sprang up to explore new forms of political participation outside of political parties. Political orientations became more individualised and, in what Giddens (1991, p. 214) terms "life politics", added self-realisation to the agenda. Even the field of technical standard-setting incorporated the societal transformation as an identity conflict over competing architectural principles and processes. In opposition to the International Telecommunication Union, a UN sub-organisation, the Internet Engineering Task Force coined its central credo: "We reject kings, presidents, and voting. We believe in: rough consensus and running code" (see DeNardis, 2009, p. 47 for more context). In some respects, "architectural proposals are creatures of their time", as Clark (2018, p. 106) notes. The two models of national data networks with hierarchically planned applications on the one hand and the internet as a distributed network of networks on the other emerged during the transition phase from organised to late modernity and epitomised both periods as ideal-types.

As part of the "dismantling of organised modernity" (Wagner, 2016, p. 121), the scope and quality of state activity lost legitimacy. The rising neoliberal paradigm pushed for a privatisation of public infrastructures, and the telephone networks were among the early targets in OECD countries. The so-called liberalisation of telephone networks in the 1980s and 1990s shows the co-evolution of social transformation and the formatting of data architectures particularly well. With the end of the public telephone monopoly and the creation of markets for communication services, new actors and benchmarks established themselves. A "new economy" took root, which experimented with innovative business models and celebrated them as the demise of "tyrannical rules of corporate hierarchies" (cited in Turner, 2006, p. 14). The internet came to be seen as a "prototype" for "networked forms of economic organization" that would flatten bureaucratic hierarchies, both public and private, and provide for self-determined ways of working. It would "liberate the individual entrepreneur" (Turner, 2006, p. 175) and fulfil the dream of "marry(ing) the competitive demands of business with the desire for personal satisfaction and democratic participation; to achieve productive coordination without top-down control" (Turner, 2006, p. 204). The new economy projected onto the internet the role of an enabler of new forms of economic activity that would liberate the economic spirit and replace hierarchies with distributed networking.

Mirroring the 1990s public discourse on globalisation, the early academic literature portrayed cyberspace as a forerunner of a post-national social order governed by code and bottom-up consensus rather than by national laws: "The Net thus radically subverts a system of rule-making based on borders between physical spaces", as Johnson and Post (1996) put it in a widely quoted essay. As Turner (2006) notes, public discourse managed to reinterpret the computer technology once firmly associated with military violence as a resource of emancipation: "throughout the 1960s, computers loomed as technologies of dehumanization, of centralized bureaucracy and the rationalization of social life, and, ultimately, of the Vietnam War. (…) Two decades after the end of the Vietnam War and the fading of the American counterculture, computers somehow seemed poised to bring to life the countercultural dream of empowered individualism, collaborative community, and spiritual communion" (Turner, 2006, p. 2).

This reinterpretation included a translation of the internet's operational principles into a political language of liberation and decentralisation. As Gillespie (2006) shows in detail with regard to the end-to-end principle, users and academic observers contributed in a discursive way to defining what the internet is and is not. The political reformulation of the end-to-end principle as individual empowerment radiated an "aura of populist participation, democratic egalitarianism, openness (…) and inclusiveness" (Gillespie, 2006, p. 445). Reflecting the libertarian Zeitgeist, the internet became "map(ped) onto a set of political projects that both precede the design of the Internet, draw on it for justification, and carry it forward" (Gillespie, 2006, p. 452). Noteworthy in hindsight, the claimed uncontrollability of cyberspace seemed an unreservedly good thing. The lack of an "off-switch" became a symbol for a new communication infrastructure that governments (and telecommunication companies) would be unable to control. Collective agency in the form of public rule-making authority was considered illegitimate since it was thought to stifle individual freedom and economic innovation. This liberal hands-off approach, which associated democratic agency with bottom-up initiatives and new forms of participation, was largely oblivious to any institutional frameworks of power limitation and law enforcement.

In retrospect, the internet presents itself as a specific form of computer network, which architecturally and semantically reflects the transformation of cultural, economic and political values. These values became inscribed as operational principles and standards into the network architecture and, as such, became subject to political interpretation. The evolution of the computer network and its communication services can be analysed as an oscillation between the possibilities of the medium and the contingencies of specific forms. The next section discusses these temporary forms in the context of ongoing transformations of western democracies. The goal is to identify linkages between changing democratic practices and the emergent properties of digital technologies.

5. Mediated democracy under conditions of digitalisation

Understanding media as spaces of possibilities directs attention to the question of how public action incorporates, and thereby shapes, digital media. The experience of digital communication technologies has given rise to new accounts of media development reflecting the broader structural transformations of western societies. From a cultural sociology perspective, Reckwitz (2008, p. 168) draws links between specific media technologies and the formation of subjectivities. Media, in his understanding, form "training grounds" for the evolution of specific cultures. Characteristic of the computer era as a "training space" is the "expressive-elective subject", which practices "a way of thinking in terms of options" that permanently calls for choices to be made. Baecker (2018, pp. 10–11) distinguishes four periods of media (language, script, print, electronic media), each of which extends our possibilities of meaningful action and thereby introduces new levels of contingency. The experience of contingency, in turn, challenges social institutions and leads to structural change. The transformation of democracy is a good example of the connection between ongoing experiments, which aim to explore new opportunities for meaningful action, and the shifts in our understanding of democracy (see below). Keane (2013) sheds light on the affinities between modes of communication and types of democracy in a more general sense. Representative democracy constituted itself in the period of print and fell into a state of crisis during the rise of broadcast media. The present type of democracy is characterised by a sea change consisting in the transition of representative democracy to what Keane calls "monitory democracy". As the term already suggests, monitory democracy is linked to the rise of "multimedia-saturated societies, whose structures of power are continuously questioned by a multitude of monitory or 'watchdog' mechanisms operating within a new media galaxy defined by the ethos of communicative abundance" (Keane, 2013, p. 78). What these narratives have in common is an idea already present in Benedict Anderson's Imagined Communities: specific forms of communication media emerge in tandem with larger societal formations, and they mutually enable each other.

The debate in political science on the ongoing transformation of democracy offers some reference points for approaching this question. Social theories on late modernity already described the declining importance of traditional democratic institutions for representing political interests. Old cleavages such as that between capital and labour were losing their significance, which undermined class-based party loyalties and increased the share of swing voters. As a consequence, parties moved to the political centre and became less distinguishable. The transition from "party democracy" to "audience democracy" also implied a shifting focus from programmatic platforms to political leaders (Manin, 1997). Voting, once the legitimate core of representative democracy, lost its quasi-holy character, and voter turnout began decreasing across European countries. Simultaneously, the constitutional power of parliaments gradually decentred and shifted towards the executive branch, the private sector and international organisations.

Democratic theory keeps chronicling the dismantling of state functions, the hollowing out of democratic institutions and the growing power of the private sector. Some observers refer to this decay as pending "post-democracy" (Crouch, 2004). Others remind us of the principled openness of the democratic project and point out the innovation opportunities emerging from the transformation of democratic institutions. Keane's "monitory democracy" (2013) and to some extent Rosanvallon's "Counter Democracy" (2008) are examples of the latter. Both authors share the observation of a long-term decline of trust in democratic institutions and political elites, and both deduce from this trend a new role for the public sphere and, relatedly, a fundamental change of democratic practices. The general shift from trust to distrust has turned the public sphere into a space of watching, evaluating, controlling and scandalising political actors and actions. The "voter citizen" who trusted the democratic institutions has been sidelined by the "vigilant citizen" (Rosanvallon, 2008, p. 41), a "naysayer" (op. cit., p. 123) who resorts to a much broader range of democratic practices than just voting.

The present public is characterised by "new forms of social attentiveness" (op. cit., p. 40), which call for transparency and often express themselves as "negative sovereignty" (op. cit., p. 122). These new modes of articulation have repercussions for the organisational fabric of the democratic subject, the demos. The spread of "networks, swarms, and multitudes" can be interpreted as evidence for a mutating body politic, as Thacker (2004; see also Heidenreich, 2016, p. 58) reckons. However, the point to be highlighted in the context of the notion of mediated democracy is that the changing democratic practices and attitudes can be interpreted as one driver of the evolution of social networks and, in a wider sense, of mediatisation. Digital technologies constitute a space of possibilities, which derive their specific form not least from the ongoing transformation of democratic agency. In this sense, digital communication services are used for, and shaped by, experimenting with new modes of political expression.

While legacy mass media communicated to, and thus co-created, a largely passive public of information recipients, the digital medium enables many-to-many communication and thus a redistribution of authorship. The emergence, and even more so the impetus, of "mass self-communication" (Castells, 2009) reflects both the material properties of a new medium and the ongoing transformation of democracy and the public sphere. To the extent that the status of elections as the central, constitutionally privileged form of democratic expression is eroding, the public sphere and its medial infrastructure are gaining importance. Yet, the use of digital media simultaneously transforms the properties of the audience. Reading the newspaper, watching or listening to the news used to take place at certain times and places, and it left the media content untouched. By contrast, the use of digital media not only pervades every facet of everyday life around the clock, it also impacts the content in various ways. By interacting through digital communication services, the public circulates, sorts, links and weighs information and, hence, acts as a co-creator of a semi-personalised information order. Publics have become generative. Their everyday actions involuntarily contribute to the production of algorithmically curated information flows.8 As a result, social, economic and legal boundaries between the production and consumption of news are becoming de-institutionalised. Moreover, the idea of a common public sphere framed by mass media and characterised by shared reference points can no longer be taken for granted. It seems as if Anderson's "imagined communities" among strangers, which the spread of newspapers once enabled, are turning into a multitude of imagined publics. These pluralised publics are not necessarily congruent with the national demos (Bennett & Pfetsch, 2018); they open up new possibilities of collective self-organisation independent of territorial circumstances.

Parallel to the long-term decline in party membership and voting, the rise of "issue politics" has strengthened the propensity for unconventional forms of political organisation. Digital media reduce the resources necessary for collective action and therefore broaden the potential range of organisational structures, as Bimber (2016, p. 5) argues. Indeed, political engagement currently seems to be undergoing a phase of experimentation. New social network-based forms of political collectives are emerging while political parties, old and new, are adopting outreach and campaigning strategies typical of social movements. What is striking about recent political movements such as Extinction Rebellion, Fridays for Future or Sea Watch is their rapid growth and geographic expansion but also their low degree of formal organisation, hierarchy and modes of representation. Digitally mediated political networks and swarms are characterised by the absence of a centre. While networks rest on more or less stable structures of connectivity, swarms can be understood as "collectivity in actu", which requires permanent reproduction (Horn, 2009, p. 16). Social networks and messengers are the present medium for recognising each other as part of a collective and sustaining it, as well as for making oneself heard. The hallmark of digitally mediated social movements is their unpredictable emergence and often short-lived character. They tend to expect instant political change, and they reject conventional modes of democratic representation in favour of a high-intensity, real-time mode of operation with immediately visible effects (Zuckerman, 2014).

There is an "elective affinity" between digital media and new forms of political engagement, as Bennett et al. (2017, p. 2) aptly put it. For young people, digitally mediated forms of political articulation are increasingly replacing the role organisational memberships used to play. Bennett and Segerberg (2012) have coined the term "connective action" to distinguish the logics and incentives of digitally mediated types of association from traditional collective action problems: "the logic of connective action applies increasingly to life in late modern societies in which formal organizations are losing their grip on individuals, and group ties are being replaced by large-scale, fluid social networks", which "can operate importantly through the organizational processes of social media" (Bennett & Segerberg, 2012, p. 748). Digitally networked engagement requires, yet also offers, less collective identification than traditional membership parties. Instead, it provides creative opportunities for linking political intervention to individual self-expression.

Social networks allow for a greater variety of political involvement including temporary, project-like engagements but also the new category of "armchair activism". The much criticised "slacktivism" is suspected of reducing political engagement to the minimum effort of a few clicks. However, "thin" forms of engagement are not necessarily futile, as Zuckerman (2014, p. 158) argues. On the contrary, in the eyes of Margetts (2019, p. 108) "tiny acts of participation" such as "following, liking, tweeting, retweeting, sharing text or images relating to a political issue, or signing up to a digital campaign" should be regarded as the categorical difference "that social media have brought to the democratic landscape" (see also Møller Hartley et al., 2019 on "small acts of engagement"). The lowered threshold for political action broadens the circle of people willing to contribute, and it also entails the albeit unlikely possibility of a large-scale mobilisation or an institutionalisation in the form of political parties (occurring particularly on the political right, see Bennett et al., 2018, p. 1661). The mechanisms behind the sudden growth of a small fraction of digital movements point to another characteristic of the digital media environment. Social networks offer their members instantaneous information about the actions of others and thus significantly expand the possibilities for mutual social observation and imitation. Tiny acts of participation convey "signals of viability to others" (Margetts, 2019, p. 111) and thereby alter the conditions for movements or swarms to emerge.

The interplay between the digital medium and the changing culture of political engagement resembles a kind of public laboratory for experimenting with old and new, formal and informal types of political organisation. Political parties across Europe actively participate in this process. Responding to the continuous decline of membership and voter turnout, they are testing modes of communication and concertation below the threshold of formal membership. A "party as movement mentality" (Chadwick & Stromer-Galley, 2016, p. 284) is gaining ground, which de-emphasises established norms such as party loyalty and age- or merit-based stratification. Instead, party boundaries are becoming less pronounced in favour of integrating supporters with lower levels of identification and commitment. For example, armchair activists may be encouraged to participate in leadership elections or the development of party manifestos.

Not surprisingly, new parties pursue more radical digitalisation strategies. Gerbaudo (2019, p. 191) even proclaims a "new stage in the evolution of the party-form". Reflecting the logics of the network society, the digital or "platform party" supersedes the party structure of the industrial society. Particularly left-leaning parties aim to reinvent bottom-up democratic decision-making by emulating the fluid structure of social movements. Flat hierarchies and digitally mediated "in-person assemblies" (Bennett et al., 2018, p. 1656) are believed to increase transparency and ensure direct individual impact on policy or party development. In fact, digital platforms, sometimes custom-built, are the new organisational infrastructure supposed to replace old-style party bureaucracies. "Connective parties", at least as Bennett et al. (2018, p. 1667) define them, crucially depend on digital platforms as "operating systems" for internal communication as well as for mobilising supporters. Although it is unclear whether or not platformisation is a long-term trend able to disrupt established party bureaucracies, digital parties clearly present novel experimental structures in the political landscape. Such new political structures are the product of a specific techno-political constellation, which links the effects of the legitimacy crisis of representative democracy with the possibilities of the digital medium. Hence, this constellation shapes both the properties of digital platforms and the institutional structures of democratic agency.

Coming back to Anderson's account (1983), it was print capitalism that turned information into a commodity and created the double role of the enlightened citizen and customer of political information. At present, the business of circulating information is reinventing itself, and print capitalism is being replaced by what is varyingly referred to as data or platform capitalism (Langley & Leyshon, 2017). One of the trademarks of digital platforms is their orchestrated modularity, which allows third parties to offer content or services and thereby create value for the platform operator. If the value proposition of newspapers was the curating of political news (with agenda setting as one of its effects), that of platforms consists in moderating public exchange. As Gillespie (2018, p. 216) puts it, platforms "constantly tune public discourse through their moderation, recommendation, and curation" and thereby shape the public discourse – if not an increasing part of all social interaction.

If print capitalism was predicated on homogenised print languages, platform or data capitalism seems to flourish on standardising many-to-many communication practices and turning data-generating citizens into co-producers of economic value (Langley & Leyshon, 2017, p. 17). To the extent that public discourse and political engagement become digitally mediated, they are undergoing a process of infrastructuralisation (Plantin et al., 2018), which follows the logic of expanding connectivity and data extraction (Couldry & van Dijck, 2015). Hence, democratic agency becomes increasingly mediated by "digital economic circulation in action" (Langley & Leyshon, 2017, p. 19). In fact, political engagement can at present hardly be imagined independent of globally controlled digital infrastructures acting as both enabling and regulating agents. Often perceived as "digital disintermediation" with potentially empowering effects for ordinary citizens, the platformisation of social movements and political parties suggests another reading: rather than becoming disintermediated, the mediatisation of democracy is going through a phase of transformation, with likely effects on the resources and distribution of political and commercial power.

6. Conclusion – the case for mediated democracy

This paper introduced the concept of mediated democracy as a specific research perspective on the interplay of democracy and digital media. Its central proposition is that democratic agency, including its institutional apparatus, is necessarily technically mediated. Following Anderson (1983), the paper argued that mediated democracy should be approached as a contingent constellation rather than a causal relationship of variables. The final question and touchstone of this concept concerns the insights gained through this lens. If we look at digital media as part of a macro-level constellation of social change, they turn into contingent forms of communication technologies, which are shaped by society as much as they shape it. This becomes very obvious with regard to the role of digital media in the context of western democracies. The core institutions of representative democracy began losing support and stability long before the internet advanced as a medium for "mass self-communication" (Castells, 2009). Hence, social networks did not cause the decay of conventional channels of political expression and participation; they should rather be understood as a training ground for experimenting with new forms of democratic agency.

Yet, these experiments do not leave democracies unaffected. They enable new experiences and expectations and thus shape future democratic practices (Ercan et al., 2019, p. 21). Beneath the formal constitutional level of national democracies, we see long-term changes, among them a growing importance and changing role of the public sphere, a broadening range of political action including tiny and trivial forms of participation, a shift from membership parties to "parties as movements", but perhaps also a changing perception of democracy itself. Under the condition of "communicative plenty" (Ercan et al., 2019, p. 24), democracy is becoming primarily associated with "voice-as-democratic-participation", in other words with making oneself heard and generating visibility, to the detriment of opportunities for collective reflection. In more general terms, we can observe a rising awareness of the contingency and alterability of democratic institutions. Seen through the lens of mediated democracy, this is the outcome of a co-evolutionary process rather than that of a causal relationship.

References

Abbate, J. (1999). Inventing the Internet. Cambridge, MA; London: The MIT Press.

Anderson, B. (1983). Imagined Communities: Reflections on the Origin and Spread of Nationalism. London; New York: Verso.

Averbeck-Lietz, S. (2014). Understanding mediatization in 'first modernity': sociological classics and their perspectives on mediated and mediatized societies. In K. Lundby (Ed.), Mediatization of Communication (pp. 109-130). Berlin; Boston: De Gruyter.

Beck, U. (1992). Risk Society: Towards a New Modernity. London: Sage Publications.

Beck, U. (1994). The reinvention of politics: towards a theory of reflexive modernization. In U. Beck, A. Giddens, & S. Lash, Reflexive Modernization: Politics, Tradition and Aesthetics in the Modern Social Order. Cambridge: Polity Press.

Bennett, W. L., & Pfetsch, B. (2018). Rethinking Political Communication in a Time of Disrupted Public Spheres. Journal of Communication, 68(2), 243–253. doi:10.1093/joc/jqx017

Bennett, W. L., & Segerberg, A. (2012). The Logic of Connective Action. Information, Communication & Society, 15(5), 739–768. doi:10.1080/1369118X.2012.670661

Bimber, B. (2016). Three Prompts for Collective Action in the Context of Digital Media. Political Communication, 34(1). doi:10.1080/10584609.2016.1223772

Brecht, B. (1967[1932]). Der Rundfunk als Kommunikationsapparat. Rede über die Funktion des Rundfunks. In Brecht, B.: Gesammelte Werke, Schriften zur Literatur und Kunst (Vol. 18, pp. 126-134). Frankfurt am Main: Suhrkamp.

Castells, M. (2009). Communication power. Oxford; New York: Oxford University Press.

Chadwick, A. & Stromer-Galley, J. (2016). Digital Media, Power, and Democracy in Parties and Election Campaigns: Party Decline or Party Renewal? The International Journal of Press/Politics, 21(3), 283–293. doi:10.1177/1940161216646731

Clark, D. D. (2018). Designing an Internet. Cambridge, MA: The MIT Press.

Clark, D. (2016). The Contingent Internet. Daedalus, the Journal of the American Academy of Arts & Sciences, 145(1), 9–17. doi:10.1162/DAED_a_00361

Couldry, N. & Hepp, A. (2017). The Mediated Construction of Reality. Cambridge: Polity.

Couldry, N., & van Dijck, J. (2015). Researching Social Media as if the Social Mattered. Social Media + Society, 1–7. doi:10.1177/2056305115604174

Crouch, C. (2004). Post-Democracy. Cambridge: Polity.

DeNardis, L. (2009). Protocol Politics. The Globalization of Internet Governance. Cambridge, MA: The MIT Press.

Doll, M. (2013). Theorie und Genealogie des Techno-Imaginären: Social Media zwischen 'Digital Nation' und kosmopolitischem Pluralismus [Theory and genealogy of the techno-imaginary: social media between 'digital nation' and cosmopolitan pluralism]. In M. Doll & O. Kohns (Eds.), Die imaginäre Dimension der Politik (pp. 49–89). Paderborn: Fink.

Ellison, N. B., & boyd, d. (2013). Sociality through Social Network Sites. In W. Dutton (Ed.), The Oxford Handbook of Internet Studies (pp. 151–172). Oxford: Oxford University Press.

Eisenstein, E. L. (1979). The printing press as an agent of change: communications and cultural transformations in early modern Europe. Cambridge: Cambridge University Press.

Ercan, S. A., Hendricks, C. M., & Dryzek, J. S. (2019). Public deliberation in an era of communicative plenty. Policy & Politics, 47(1), 19–35. doi:10.1332/030557318X15200933925405

Faraj, S., & Azad, B. (2012). The Materiality of Technology: An Affordance Perspective. In P. M. Leonardi, B. A. Nardi, & J. Kallinikos (Eds.), Materiality and Organizing: Social Interaction in a Technological World (pp. 237–258). Oxford: Oxford University Press.

Gerbaudo, P. (2019). The Platform Party: The Transformation of Political Organisation in the Era of Big Data. In D. Chandler & C. Fuchs (Eds.), Digital Objects, Digital Subjects: Interdisciplinary Perspectives on Capitalism, Labour and Politics in the Age of Big Data (pp. 187–198). London: University of Westminster Press.

Giddens, A. (1991). Modernity and Self-Identity: Self and Society in the Late Modern Age. Cambridge: Blackwell.

Giddens, A. (1999). Risk and Responsibility. The Modern Law Review, 62(1), 1–10. doi:10.1111/1468-2230.00188

Gillespie, T. (2018). Platforms Are Not Intermediaries. Georgetown Law Technology Review, 2(2), 198–216. Retrieved from https://georgetownlawtechreview.org/platforms-are-not-intermediaries/GLTR-07-2018/

Gillespie, T. (2006). Engineering a Principle: ‘End-to-End’ in the Design of the Internet. Social Studies of Science, 36(3), 427–457. doi:10.1177/0306312706056047

Habermas, J. (1962). Strukturwandel der Öffentlichkeit. Untersuchungen zu einer Kategorie der bürgerlichen Gesellschaft. Neuwied: Luchterhand.

Heidenreich, F. (2016). Die Organisation des Politischen. Pierre Rosanvallons Begriff der 'Gegen-Demokratie' und die Krise der Demokratie. Zeitschrift für Politische Theorie, 7(1), 53–72. doi:10.3224/zpth.v7i1.06

Horn, E. (2009). Schwärme – Kollektive ohne Zentrum. In E. Horn & L. M. Gisi (Eds.), Schwärme – Kollektive ohne Zentrum. Eine Wissensgeschichte zwischen Leben und Information (pp. 7–26). Bielefeld: transcript Verlag.

Hubig, C. (2006). Die Kunst des Möglichen I. Grundlinien einer dialektischen Philosophie der Technik. Technikphilosophie als Reflexion der Medialität. Bielefeld: transcript Verlag.

Ingold, A. (2017). Digitalisierung Demokratischer Öffentlichkeiten. Der Staat, 56(4), 491–533. doi:10.3790/staa.56.4.491

Johnson, D. & Post, D. (1996). Law and Borders: The Rise of Law in Cyberspace. Stanford Law Review, 48(5), 1367–1402. doi:10.2307/1229390

Kallinikos, J., Aaltonen, A., & Marton, A. (2010). A Theory of Digital Objects. First Monday, 15(6). doi:10.5210/fm.v15i6.3033

Kaufmann, M., & Jeandesboz, J. (2017). Politics and 'the Digital': From Singularity to Specificity. European Journal of Social Theory, 20(3), 309–328. doi:10.1177/1368431016677976

Keane, J. (2013). Democracy and media decadence. Cambridge; New York: Cambridge University Press.

Krotz, F. (2017). Mediatisierung: Ein Forschungskonzept. In F. Krotz, C. Despotović & M-M. Kruse (Eds.), Mediatisierung als Metaprozess. Transformationen, Formen der Entwicklung und die Generierung von Neuem (pp. 13–32). Wiesbaden: Springer. doi:10.1007/978-3-658-16084-5_2

Langley, P., & Leyshon, A. (2017). Platform capitalism: The intermediation and capitalization of digital economic circulation. Finance and Society, 3(1), 11–31. doi:10.2218/finsoc.v3i1.1936

Latour, B. (1991). Technology is society made durable. In J. Law (Ed.), A Sociology of Monsters: Essays on Power, Technology and Domination (pp. 103–132). London: Routledge.

Lovink, G. (2008, September 5). The Society of the Query and the Googlisation of Our Lives. A Tribute to Joseph Weizenbaum. Eurozine. Retrieved from https://www.eurozine.com/the-society-of-the-query-and-the-googlization-of-our-lives/

Luhmann, N. (2012). Theory of Society. Cultural Memory in the Present. Stanford: Stanford University Press.

Lundby, K. (2014). Mediatization of Communication. In K. Lundby (Ed.), Mediatization of Communication (pp. 3–35). Berlin; Boston: De Gruyter.

Lyotard, J. F. (1979). The postmodern condition. Minneapolis: University of Minnesota Press.

Manin, B. (1997). The principles of representative government. Cambridge: Cambridge University Press.

Margetts, H. (2019). Rethinking Democracy with Social Media. The Political Quarterly, 90(S1), 107–123. doi:10.1111/1467-923X.12574

Mitcham, C. (2014). Agency in Humans and in Artifacts: A Contested Discourse. In P. Kroes & P.P. Verbeek (Eds.), The Moral Status of Technical Artefacts (pp. 11–31). Dordrecht; Heidelberg; New York; London: Springer. doi:10.1007/978-94-007-7914-3_2

Møller Hartley, J., Kleut, J., Picone, I., Pavlíčková, T., de Ridder, S., & Romic, B. (2019). Small acts of engagement: Reconnecting productive audience practices with everyday agency. New Media & Society, 1–19. doi:10.1177/1461444819837569

Plantin, J-C., Lagoze, C., Edwards, P. N., & Sandvig, C. (2018). Infrastructure studies meet platform studies in the age of Google and Facebook. New Media & Society, 20(1), 293–310. doi:10.1177/1461444816661553

Reckwitz, A. (2008). Medientransformation und Subjekttransformation. In A. Reckwitz (Ed.), Unscharfe Grenzen. Perspektiven der Kultursoziologie (pp. 159–176). Bielefeld: transcript Verlag.

Rosanvallon, P. (2008). Counter Democracy: Politics in an Age of Distrust. Cambridge: Cambridge University Press.

Thacker, E. (2004). Networks, Swarms, Multitudes (Part one). CTheory. Retrieved from https://journals.uvic.ca/index.php/ctheory/article/view/14542

Thiel, T. (2018). Digitalisierung: Gefahr für die Demokratie? Ein Essay. Politikum, 4(3), 50–56. Retrieved from http://hdl.handle.net/10419/184647

Turner, F. (2006). From Counterculture to Cyberculture: Stewart Brand, the Whole Earth Network, and the Rise of Digital Utopianism. Chicago; London: University of Chicago Press.

Van Dijck, J. (2013). The Culture of Connectivity. A Critical History of Social Media. Oxford: Oxford University Press.

Verbeek, P. (2015). Beyond Interaction: A Short Introduction to Mediation Theory. Interactions, 22(3), 26–31. Retrieved from https://ris.utwente.nl/ws/portalfiles/portal/6973415/p26-verbeek.pdf

Wagner, P. (2008). Modernity as Experience and Interpretation: A New Sociology of Modernity. Cambridge; Malden, MA: Polity.

Wagner, P. (2016). Progress: A Reconstruction. Cambridge: Polity.

Weber, M. (1904). Die "Objektivität" sozialwissenschaftlicher und sozialpolitischer Erkenntnis. Archiv für Sozialwissenschaft und Sozialpolitik, 19(1), 22–87. Retrieved from https://nbn-resolving.org/urn:nbn:de:0168-ssoar-50770-8

Zuckerman, E. (2014). New Media, New Civics? Policy and Internet, 6(2), 151–168. doi:10.1002/1944-2866.POI360

Footnotes

1. Max Weber (1904) is considered to be the sociologist who introduced the term constellation into social analysis. He argued against the idea that the reality of cultural phenomena can be studied with the same means and results (i.e. scientific laws) as the constellation of celestial bodies. I am grateful to Florian Eyert for pointing this out to me.

2. Bimber (2016, p. 11) suggests a similar perspective by treating digital media as a changed "context for action, not a variable". He defines this changed context as "expanded opportunities for action due to lowered costs of communication and information". While the effects of constellations may imply causal mechanisms, this article is more interested in concurrences of social, political and medial transformations.

3. Anderson is by no means the only scholar who addressed the links between communication media, the nation state and the public sphere (see Habermas, 1962, and for a historic overview Averbeck-Lietz, 2014). What makes Anderson's account special in the context of this article is his emphasis on the nation as an imaginary construct and the performative role of communication media in generating such collective imaginaries (see Doll, 2013, pp. 52-53, who calls this a "media-materialist" approach).

4. Science and technology studies are struggling with the third requirement of defining media, which is why this article explores a different approach.

5. If one associates media with mass media, it makes sense to diagnose an increasing mediatisation of society (...). More abstract notions of media may only allow distinguishing different modi of mediatisation.

6. The medium/form distinction overlaps with the concept of affordances (Gibson, 1979), which also aims to study the contingency of artefacts and their use. However, research on affordances tends to analyse specific objects (see Faraj & Azad, 2012) and is therefore not easily compatible with this article's focus.

7. With the rise of the mobile internet, for example, devices, operation systems and applications have become more tightly coupled and commercially controlled. The political debates on net neutrality also demonstrate the existing range of policy options of how to organise data transmission.

8. Lovink (2008) coined the term "query publics" for the changed role of the audience (see also Ingold, 2017, p. 513).

The algorithmic dance: YouTube’s Adpocalypse and the gatekeeping of cultural content on digital platforms


This paper is part of Transnational materialities, a special issue of Internet Policy Review guest-edited by José van Dijck and Bernhard Rieder.

Introduction

The consequences of algorithmic selection and ranking of "culture" (Striphas, 2015; Rieder et al., 2017) on the plurality and diversity of the online cultural realm have been widely critiqued by scholars who have shown the homogenising effect of the process. This essay adds to these critiques by probing the financial and cultural ramifications of the automated curation of culture through a close analysis of the raging controversy resulting from the advertisers' revolt against YouTube in 2017, an event widely termed the Adpocalypse (Dunphy, 2017; Hess, 2017). The controversy erupted in March 2017 after news reports about advertisements playing around extremist content, first appearing in the UK press (Mostrous, 2017; Neate, 2017), rippled over into Europe and the USA (Solon, 2017), bringing a wide-ranging list of latent grievances against the platform to a head. As the event unfolded, news reports (Rath, 2017) listed threats from as many as 250 big advertisers across industries that announced plans to freeze advertising on the platform. I focus on the changes brought about by YouTube in the event's aftermath, within its processes of classifying and monetising content, to show how the platform's reaction has had enduring consequences on its culture of creation. Through an analysis of the specific policy changes brought about on YouTube as well as in-depth conversations with content creators who were affected by them, this essay advances three claims about the consequences of the algorithmic classification and monetising of cultural content for its plurality and financial viability.

First, the essay claims that the Adpocalypse controversy shows us how algorithmic decisions about the sorting and categorisation of content on YouTube are simultaneously also decisions about the financial trajectory of the said content. This is the case because a new regime of classification has come alongside expanded options for advertisers to exclude entire categories of content from running their ads on. Secondly, given the pre-eminence of the profit-centred ecology of social media platforms (Wasko & Erickson, 2009; Fuchs, 2014; Fuchs & Mosco, 2016), the Adpocalypse allows us to establish that decisions about the categorisation (and hence the monetising) of cultural content invariably have a bearing on the extent of its viewership on social media platforms. The emerging hierarchy of content, where some videos accrue income for YouTube each time they run and others do not, creates structural "incentives to bias" (Rieder & Sire, 2014, p. 202) on the platform that are bound to have enduring consequences on content plurality. It is reasonable to ask if the algorithmic architecture of a profit-driven platform will treat a demonetised video (that makes no money for YouTube) the same as the "valuable and profitable" (Bucher, 2012, p. 1169) content around which it can play advertisements and hence accrue earnings for the platform. If not (as the evidence in subsequent sections seems to suggest), then a demonetised video, by virtue of its inability to make money for the platform, will be suppressed from viewership and caught in a downward spiral of diminishing visibility, thus emphasising the enduring consequences of the gatekeeping function (Granka, 2010; Beer, 2017) of algorithmic sorting and classification for the diversity and plurality of online content. Lastly, and leading from the two prior conclusions, this project helps interrogate the quandary arising from the increasingly utility-like role of social media platforms and their popular perception as being publicly accountable. While content moderation, the "central service platforms offer" (Gillespie, 2018, p. 202), has far-reaching social, cultural and political implications, the process nevertheless remains entirely out of the purview of democratic processes of public debate and deliberation (Lewandowski, 2014). The Adpocalypse, unlike any other event before, allows us to consider the policy implications of a scenario where financial interests compete with ensuring the viability of contrarian, risky, unpopular or non-mainstream ideas that are necessary to enable a robust debate and that have historically served as an antidote to power by disallowing the pre-emption (Andrejevic, 2017) of critique.

In closely analysing the events around the Adpocalypse, this essay focuses on a raging controversy that has received scant critical scholarly attention (Hill, 2019) but one that helps shine new light on key dimensions of the algorithmic regulation of digital culture. It helps interrogate the relationship between creators and platforms, especially in cases where the popular expectation of public accountability from them has meant their dual role as digital infrastructures on which we must live our cultural, political and social lives but also as profit-making businesses. Close studies of controversies caused by breakdowns, stalemates and malfunction (Larkin, 2008; Gillespie, 2017; Kumar, 2017; Burgess & Matamoros-Fernández, 2016) have provided scholars with rich case studies to draw out their broader ramifications. Gillespie's (2017) analysis of the controversy around the manipulation of Google search results by Dan Savage to advance a particular meaning of Rick Santorum's name exemplifies this method. His analysis shows how specific incidents within glitches, controversies and conflicts can function as data to be analysed in order to deduce their broader ramifications. Such analyses have shown that public controversies resulting from the breakdown of procedures and systems when confronted with an anomalous variable or incident reveal the limits and outer boundaries of existing norms that may otherwise remain invisible and hidden. Hence, "accidents and breakdowns" represent fertile test cases where the underlying infrastructure "comes out of the woodwork" (Peters, 2015, p. 52) and becomes "more visible" (Larkin, 2008, p. 245). The rich lessons from disruptions such as the Adpocalypse underscore that "Glitches can be as fruitful intellectually as they are frustrating practically" (Peters, 2015, p. 52). In closely analysing the key occurrences within this controversy, the essay replicates this method of studying moments of disruption and breakdown in order to draw out their broader implications. It supplements its analysis with testimonies from creators (both publicly available and through interviews) to build the sequence of events that it seeks to analyse. This multi-method analysis is important for a study that seeks to both reconstruct the sequence of an important event and analyse its implications. The wide-ranging changes introduced in YouTube's processes of content classification, terms of partnership with creators and its advertising structure after the Adpocalypse form vital components of this case study, whose ramifications are analysed below.

The Adpocalypse and asymmetrical power

With its advertisement- and profit-driven economic model under threat, YouTube's response to the Adpocalypse was understandably swift and (some would argue) extreme. What began as short-term quick fixes to the threats of boycott was soon institutionalised into permanent changes on the platform with long-lasting effects on the relationship between creators, advertisers and YouTube and arguably the very future of the global digital ecosystem (of which the platform is an important part). The Adpocalypse led to a slew of policy changes on YouTube that included (but were not limited to): the decision, unprecedented in the digital era, to refund advertisers for ads that had already played (McGoogan, 2017); a significant expansion of human moderation of content (Levin, 2017); allowing advertisers to exclude broad categories of content from carrying their advertisements 1; a steeper and longer on-ramp for content creators to join the YouTube Partners' Program (Popper, 2017), the YPP being what makes creators eligible for monetising their videos; a stricter regime of demonetising videos found not to be "advertiser friendly" 2; a higher threshold for content creators to qualify for appeal once their videos had been demonetised 3 (Patel, 2017); and allowing creators to self-certify their videos as meeting YouTube's conditions for monetisation (Peterson, 2018). When taken together, these sweeping changes (which included others in addition to the seven mentioned above) reveal how grave a threat to its existence YouTube perceived the Adpocalypse to be and establish the platform's prioritisation of advertisers' interests over those of creators and users. The event also forces us to confront the consequences of the inevitable algorithmic moderation of content by showing the convergence of the algorithmic gatekeeping of content (Kitchin, 2017; Yeung, 2017) with the profit motive of digital platforms. When these two come together, decisions about the categorisation, regulation and classification of culture (through machine learning systems) have a direct and significant bearing on the financial viability of cultural producers and the sustainability of independent cultural production.

Perhaps the most consequential decision after the Adpocalypse was the expanded ability given to advertisers to exclude broad categories of videos from their advertising campaigns. The five content categories that YouTube made available to advertisers (see Figure 1) had broad labels such as "Tragedy and conflict", "Sensitive social issues", "Sexually suggestive content", "Sensational and shocking" and "Profanity and rough language", thus giving first evidence of YouTube's new (but hidden) practice of evaluating all videos and then labeling and categorising those fitting the above classifications. Entirely concealed from the creators who upload the videos, this process of evaluating and labeling all content now provides wide-ranging powers to advertisers to exclude broad categories of content by simply checking the required box, thus enabling a blunt, context-free and algorithmically moderated method of excluding particular formats, genres and topics. Given the challenge of categorising its total video archive (ranging between 6 and 7 billion videos and growing at the rate of over 400 hours of video per minute), the five labels can only be described in broad, catch-all terms, each with a descriptive paragraph that casts a wide net. For instance, the category "Sensitive social issues" is described as including "news commentary, documentaries, and educational or historical content related to wars, conflicts, or tragic events", thus allowing advertisers to exclude their ads from a key content vertical of news commentary on the platform with a single click. For the creator, the signal is that such content will invariably make no money. In addition to the excludable content areas, YouTube now also allows advertisers the alternative of letting the platform do the work of matching advertisements to content by choosing from three broad groupings of Expanded, Standard and Limited Inventories on which to run their ads. These groupings of content, which can only be accessed after signing into one's Google Ads account (see Figure 2), represent a sequence of increasingly exclusive categories and come with YouTube's recommendation that advertisers choose the middle one, "Standard Inventory". Notably, this recommended middle category omits a significant amount of content on the platform, including content covered by generic descriptions such as "focus on sex as a topic", "blood shown in body modifications or medical procedures" and "News, documentary, or education content with words that could be considered biased". Allowing the exclusion of these meta groupings of content further delegates the task of matching content with audiences to automated decision-making systems, thus reifying their power "to decide what matters and to decide what should be most visible" (Kitchin, 2017, p. 6). As opposed to the finer levels of control provided by the five categories, this global option pushes more of the process of selection, evaluation and categorisation behind the black box of machine learning. As the experience of creators on the platform shows, this process is far more likely to punish the riskier and more diverse types of content that push the boundaries of mainstream discourses, thus disincentivising their production and sharing and functioning to "suppress content creators' freedom of speech" (Cantz, 2018).

Figure 1: The categories of content (accessible through a Google Ads account) that potential advertisers can now exclude on YouTube.
Figure 2: The three groupings of content offered to advertisers to run their advertisements on.
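
To make the mechanics of this checkbox exclusion concrete, the following is a minimal sketch in Python. The category identifiers simply mirror the five labels of Figure 1, and the set-intersection matching is an assumed simplification: YouTube documents the categories themselves but not the underlying implementation.

```python
# Minimal sketch of advertiser-side category exclusion. The category
# identifiers mirror the five labels in Figure 1; the matching logic is
# an assumption for illustration, not YouTube's documented implementation.

EXCLUDABLE_CATEGORIES = {
    "tragedy_and_conflict",
    "sensitive_social_issues",
    "sexually_suggestive_content",
    "sensational_and_shocking",
    "profanity_and_rough_language",
}

def eligible_for_campaign(video_labels, excluded_categories):
    """A video can carry a campaign's ads only if none of its
    platform-assigned labels fall within the advertiser's excluded set."""
    return not (set(video_labels) & set(excluded_categories))

# Excluding "Sensitive social issues" with a single checkbox removes an
# entire news-commentary vertical from the campaign:
campaign_exclusions = {"sensitive_social_issues"}
news_video_labels = {"sensitive_social_issues"}  # assigned by the platform, unseen by the creator
print(eligible_for_campaign(news_video_labels, campaign_exclusions))  # False: no ads, no revenue
```

The point of the sketch is the asymmetry it encodes: the label is attached by the platform and invisible to the creator, while the exclusion is a one-click decision on the advertiser's side.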

In addition to the expanded categories for exclusion, the second decisive change introduced by YouTube after the controversy has been a gradually steepening on-ramp for new creators before they can join the YPP (the YouTube Partners Program that makes channels eligible for monetisation). This occurred in multiple stages, with the first step coming immediately after the events of the Adpocalypse (in April 2017) when YouTube raised the benchmark for the YPP to 10,000 lifetime views 4, a move clearly aimed at changing the fundamental character of the platform by introducing a gestation period before creators could begin to earn money from their content. While a drastic enough change by itself, it was followed up (at the beginning of 2018) by a further raising of the bar with the rationale that "it's been clear over the last few months that we need a higher standard" (16 January 2018). The new criteria for joining the YPP were raised to 4,000 hours of watchtime during the previous 12 months and 1,000 subscribers (Synek, 2018), which, taken together, virtually eliminated the chances of an amateur YouTuber making any money on the platform for a significant period of time. The motivation, persistence, financial support and resources (for producing and uploading content) now needed to continue without the hope of any returns for a substantial period shift the balance on the platform towards the professional, financially secure and determined creator/producer rather than the unsure and tentative amateur looking to gauge the value and popularity of an idea but without the necessary resources or the motivation to sustain for long without financial returns. When juxtaposed with the fact that most native YouTube stars started out as amateurs without necessarily having the support and motivation to continue in the absence of any financial returns, this raised bar underscores the blurring of the radical edges and the mainstreaming of a platform whose early rise cannot be divorced from its peer-to-peer networked architecture and its ruthless disregard for the restrictions and prescribed formats of television.

This mainstreaming was further underscored by another rule, the third key change post-Adpocalypse, which prescribed a higher threshold for appeal after a creator's video has been deemed unfit for monetisation by YouTube's algorithm. Instituted a couple of months after the Adpocalypse, the policy now requires a minimum count of a thousand views within a week for a demonetised video to be eligible for human re-evaluation (Patel, 2017). There was, however, an exception built into this rule for channels with over 10,000 subscribers, whose appeals would be considered irrespective of the view count. YouTube explained this exception stating, "We do this because we want to make sure that videos from channels that could have early traffic to earn money are not caught in a long queue behind videos that get little to no traffic and have nominal earnings" (Kain, 2017). Presented as a plausible arrangement to deal with the paucity of human moderators who could re-evaluate every appealed video, this caveat nevertheless underscores a prioritisation whereby the interests of the bigger, well-established channels supersede those of the new ones still finding their feet. When combined with the higher requirements for the YPP, such exceptions instantiate a dynamic wherein the big channels get bigger while newcomers find it far tougher to gain a sustainable foothold on the platform. The prioritisation of well-established channels in the appeals process legitimises and makes public the unacknowledged phenomenon of hierarchical tiers on the platform. While stratification along the lines of popularity, viewer count or subscriber numbers is inevitable on any platform, the application of different yardsticks and rules by the platform depending on the popularity levels of channels makes permanent and institutionalises barriers that are insurmountable for new and upcoming channels and creators. Eventually, these institutionalised hierarchies are bound to be reflected in the platform's content, thus making it a different entity than one animated by the "youthful exuberance of its early years" (Lobato, 2016, p. 348) and encapsulating the web's early ethos of innovation, expression and creation without permission or fear.
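
Expressed as code, the two thresholds discussed above reduce to simple boolean rules. The sketch below uses the figures reported in this section; the function names, and the assumption that the checks amount to plain threshold tests, are illustrative rather than a description of YouTube's actual systems.

```python
# Illustrative encoding of two post-Adpocalypse thresholds described above.
# The numbers come from the cited reports; function names are hypothetical.

def ypp_eligible(watch_hours_12m, subscribers):
    """2018 YouTube Partner Program bar: 4,000 hours of watchtime in the
    previous 12 months and 1,000 subscribers."""
    return watch_hours_12m >= 4_000 and subscribers >= 1_000

def appeal_reviewed(views_in_week, subscribers):
    """A demonetised video reaches human re-evaluation only with 1,000 views
    within a week, unless the channel has over 10,000 subscribers, in which
    case the view threshold is waived."""
    return subscribers > 10_000 or views_in_week >= 1_000

# The same demonetised video (300 views in its first week) never reaches a
# human reviewer on a small channel but does so immediately on a large one:
print(appeal_reviewed(views_in_week=300, subscribers=2_500))   # False
print(appeal_reviewed(views_in_week=300, subscribers=12_000))  # True
```

Trivial as the rules are, their conjunction illustrates the tiering discussed above: the same video is treated differently depending solely on the standing of the channel that uploaded it.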

Perhaps the death knell of that stated ethos of giving everyone a voice and ensuring equality of expression 5 comes with the fourth decisive change: the guidelines for "advertising friendly" content 6 introduced after the Adpocalypse. These criteria (comprising nine descriptive categories) are now used to make an early determination about the monetisability of content and are applied prior to the finer distinctions about content classification within categories. They are written in broad catch-all terms, with the face-saving caveat that "We aren't telling you what to create - each and every creator on YouTube is unique and contributes to the vibrancy of YouTube", before adding that, "However, advertisers also have a choice about where to show their ads." The very first of the nine content types listed as "not suitable for most advertisers" is "Controversial issues and sensitive events", whose description includes,

Video content that features or focuses on sensitive topics or events including, but not limited to, war, political conflicts, terrorism or extremism, death and tragedies, sexual abuse, even if graphic imagery is not shown, is generally not suitable for ads. For example, videos about recent tragedies, even if presented for news or documentary purposes, may not be suitable for advertising given the subject matter. 6

Notable in this description is the denial of the "advertising friendly" label to videos that even discuss potentially pressing social issues such as sexual abuse and political conflicts. This broad description ends up excluding a common and popular genre of channels focused on news, analysis and critique, but also content such as political and dissenting speech, whose sharing through digital platforms in the years prior to the Adpocalypse was instrumental in several political and social movements (Wall & El Zahed, 2011; Meek, 2011). Other items on the guidelines for "advertising friendly" videos similarly discourage the mention of certain topics even if done in a satirical or comedic context (e.g., in the guideline "Inappropriate use of family entertainment characters"). To be sure, critiques of the tightening guidelines for advertisement friendly content must be presented with the caveat that they only matter if creators seek revenue from videos and hence do not apply to videos uploaded for a myriad of other motivations. These could include merely spreading the word through organised campaigns as well as uploading for educational or archival/storage purposes. Acknowledging these varied motivations, however, does not take away from the discriminatory effect of the newly instituted policies, both because the algorithmic architecture of a profit-driven platform is unlikely to treat unmonetisable (and non-profitable) videos as equal to monetisable (and profitable) ones (as shown below) irrespective of the motive of the creator, and because such an ecosystem skews the incentives towards creating particular genres and types of content and away from others. Taken together, the combined effects of these changes incentivise particular kinds of content creation while disincentivising others, thus raising worrying questions about the consequences of the algorithmic "nudging" (Yeung, 2017; Thaler & Sunstein, 2008) of the creative process towards mainstream, conformist subjects and away from other genres and topics.

The algorithmic dance

Changes such as the newly introduced criteria for advertiser friendly content and a new regime that evaluates all videos to ascertain if they fall under the five labels instituted after the Adpocalypse have initiated an era of content regulation that radically alters the creator ecosystem and engenders an anxiety-laden environment of second-guessing, self-surveillance and continuous tweaking (Bishop, 2018; Nieborg & Poell, 2018) of the content being produced. The delegation to algorithms of tasks that were earlier managed by the community on YouTube has strengthened the perception that not only the evaluation of content but (as we will note in the subsequent section) even the earnings (which in some cases are the livelihood) of creators are at the mercy of "YouTube's unfeeling, opaque and shifting algorithms" (Hess, 2017). Besides sending an irrefutable message of conformity with their guidelines, the enhanced algorithmic authority induces a perpetual sense of precarity among creators, given how often they bear the brunt of minor tweaks within the architectural code of the system. The famed "objectivity" of algorithmic systems (Gillespie, 2016), a boon when having to sort through homogeneous data at a computational scale, is far less so when discrete labels need to be applied to heterogeneous content varying across the dimensions of context, culture and the umpteen linguistic cues that shape the meaning of words and texts. Limitations exposed during the algorithmic scrutiny of uploaded videos starkly elevate the importance of the "warmly human" (Gillespie, 2016, p. 26) qualities of deciphering nuance, reading context and subjectively differentiating between the varied intentionalities, cues and frames within cultural content. Stark instances of these limitations repeatedly confront creators, for instance in the experience of Nick Schade, whose eponymous YouTube channel about "Boat building and Sea Kayaking clips" frequently features a specialised technique called "strip built" construction, which requires the bending and shaping of thin strips of wood. Under the new system, Schade's videos have repeatedly been flagged and demonetised by YouTube's algorithm, a phenomenon he ascribes to the machine's singular understanding of the word "strip" as the removal of one's clothes. While his high subscriber count has allowed him immediate manual review that has reversed the algorithmic decision each time, his sobering lesson from the ongoing cycle of demonetisation and its reversal, which has continued for months, is that,

So far, I have no evidence that the algorithm is learning that there may be multiple definitions of the word "strip". If I include the word in the title, it is flagged immediately, if I change the title it is unflagged immediately. This has held true for months. 7 (March 2018)

That a simple equation of the word "strip" with a fixed immutable meaning can throw a wrench in the system of algorithmic sorting of content, leading to the flagging and demonetisation of videos and causing immense inconvenience to creators, goes to the very heart of the limitations of machine learning in handling nuance, contexts and cues within human language. Such frustrating encounters abound in the post-Adpocalypse era, as in the case of the creators of the channel Faerie Rings Crochet Things, focused on crochet videos and tutorials, who sought to investigate the constant demonetisation of their videos by experimentally uploading two versions of a video with a few scenes changed between them. Their experiment led them to conjecture that one version of the wig-tutorial video was continuously getting flagged by the automated system because it had the phrase "was just bugging me" 8, the only slang-like phrase in the entire video. Such stories about the erasure of context and the misreading of meaning within the automated process of content moderation entrench a culture of speculative guessing among creators about the reasons behind their content's demonetisation. With the usual offline channels of feedback about individual decisions missing, the comment sections of channels and official YouTube blogs on policy are replete with conversations that resemble a process akin to reading tea leaves. While rampant disaffection stands out as a prominent thread within creators' responses to the policy changes, so does a sense of communal solidarity to help each other by decoding the "mind" of the algorithm, thus pointing to the strange new relationship between creators and the machine learning systems that moderate their content. A case in point is creator Drina Dayz's dismay at the demonetisation of her video about unboxing a crate of snacks. 9 "This makes no sense, what about this video is not suitable to all advertisers," she asks, along with the link to the said video, on the YouTube help forum. 10 A good digital samaritan by the name of Shaun Joy replies below her comment offering to help her out since "YouTube refuses to do anything to help their creators," and after closely analysing the video and its metadata, he concludes that it is perhaps the word "gross" in the demonetised video's description that is triggering a flag. He goes on to explain,

I'm guessing that advertisers don't want to have their product associated with something "gross", which makes sense on a 10000 ft level. Except you know, CONTEXT., which Youtube's automatic portion of the algorithm seems to be unable to figure out. (19 September 2017)
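
The behaviour that Schade and Joy describe is consistent with context-free keyword matching. The sketch below is deliberately crude and purely illustrative: the trigger list is hypothetical, and YouTube's actual classifier is undisclosed and certainly more sophisticated, yet the sketch reproduces the failure mode creators report.

```python
# Deliberately crude sketch of context-free keyword flagging, consistent
# with the behaviour creators report. The trigger list is hypothetical;
# YouTube's actual classifier is undisclosed and far more complex.
import re

TRIGGER_WORDS = {"strip", "gross"}  # hypothetical flag list

def flag_for_review(title, description):
    """Flag (and demonetise pending appeal) if any trigger word appears,
    regardless of the sense in which it is used."""
    tokens = re.findall(r"[a-z']+", f"{title} {description}".lower())
    return any(token in TRIGGER_WORDS for token in tokens)

# A "strip-built" kayak tutorial is indistinguishable, to this matcher,
# from the content the trigger list is meant to catch; removing the word
# from the title unflags the otherwise identical video:
print(flag_for_review("Strip built sea kayak, part 3", "Shaping cedar strips"))  # True
print(flag_for_review("Cedar sea kayak, part 3", "Shaping the hull"))            # False
```

What the sketch cannot do, by construction, is distinguish senses of a word, which is precisely the gap between human and machinic reading of language that the following paragraphs take up.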

Contextual differentiation between meanings, which is key to the human experience of language, is germane to the rising discontent against the algorithmic gatekeeping introduced after the Adpocalypse. The pervasiveness of algorithmic power in cultural curation can be gauged by the fact (shared by YouTube) that despite the expansion of human moderation, almost ninety-eight percent of decisions to remove videos for "violent extremism" are now taken by algorithms (Levin, 2017). While the official help pages of both YouTube 11 and Google 12 advise creators to contextualise their videos to help the platform "understand background and intent", it is just as clear that delegating the task to creators can barely begin to anticipate the complex ways in which language, culture and meaning can be related. As YouTube expands globally, it needs this sort of contextual intelligence not only across languages but also across different national, local and cultural versions of the same language.

Their repeated unpleasant encounters with this irreducible and insurmountable gap between human and machinic understandings of language and culture have ensured that creators have begun to take a far more cautious approach to dance around the algorithmic blindspots and avoid the frustrating cycle of demonetisation, appeal and restoration leading to the loss of precious time and revenue. This algorithmic dance is akin to what Battelle (2005) described as the "Google Dance" - "the moniker given to Google's periodic update of its algorithms" (Battelle, 2005, p. 157) - that could wildly swing the fortunes of small businesses dependent on Google to be discovered. Evading its capricious power necessitates that, in addition to avoiding topics and issues likely to be deemed unfriendly to advertisers or that could possibly be slotted under the five labels, creators often find themselves paying close attention to the choice of words, phrases and metaphors that, despite being commonplace in day-to-day language, could trigger the system's alarms. In describing the new regime, David Pakman (who runs the channel The David Pakman Show) explains how they "are more careful about what words we include in video titles and description to lower the chances of the video being automatically flagged for demonetization" 13. If Pakman's established channel with over half a million subscribers needs to be cautious, it would be safe to conjecture that new and emerging channels seeking to grow and develop a good relationship with the algorithm would go further and conform voluntarily.

Since most of the policy decisions to sanitise and mainstream content on YouTube can only be exercised and enforced algorithmically, the algorithmic dance to avoid being mislabeled by the automated systems adds an entirely new reason for the "becoming contingent of cultural commodities" (Nieborg & Poell, 2018, p. 2) that is a hallmark of the platformisation of culture. The fear of the all-powerful and indecipherable algorithm begins to function as a deterrent to creating risky, edgy or experimental content, and yet this linguistic self-policing, while consequential, is among the last stages of self-regulation in the process. Before that lie the challenges of ensuring that a video has already met the criteria of being advertising friendly and has avoided being labelled under one of the five categories that advertisers can exclude. Those prior levels of exclusion in the post-Adpocalypse regime already omit a vast quantity of content and move the platform in a direction that it had positioned itself against from the start. The now-famous words of YouTube CEO Susan Wojcicki that "YouTube is not TV, and we never will be" (McCracken, 2017) are often contrasted by disgruntled creators with the ongoing moves towards sanitising the platform's content to make it a more desirable place for advertisers. This regime of sanitisation ensures that creators even tangentially referring to one of the categories of exclusion must now continuously choose between their revenue and their ability to speak and create content freely. Genres such as news, commentary, political shows and comedy seem particularly vulnerable to the loss of revenue due to the risk of partial or full demonetisation. In explaining the predicament, Ethan Klein of the comedy channel h3h3 says,

It’s getting so bad that you can’t even speak your mind or be honest without fear of losing money and being not ‘brand-friendly’. YouTube is on the fast track to becoming Disney vloggers: beautiful young people that wouldn’t say anything controversial and are always happy. (Hess, 2017)

This chilling effect of algorithmic changes is elaborated upon by Jörg Sprave, the owner of The Sling Shot Channel, who in July 2018 quit being a full-time YouTuber to go back to a regular job, and who claims that the episode did not just have losers but also winners. “I know a lot of channels who are on the winning side of this too – if you are into cooking recipe videos or if you make songs to make babies fall asleep – these people now make a lot of money – many more times the money they used to make,” he explains (Sprave, personal conversation). The disincentive to produce particular types of content comes alongside the implicit nudge and the explicit incentive to produce other types of content, thus pointing to a turn on the platform whose effects are likely to be far more widespread in the coming years.

The contrast in this visible turn would not be as notable were it not for YouTube’s positioning of itself as a subversive medium “to give everyone a voice” 14 that imbibes the web’s culture of freedom, rebelliousness and disruption (as captured in Wojcicki’s quote above). In their authoritative text on YouTube, Burgess and Green (2018) discuss the mainstream media’s worried recognition of the platform’s disruptive effect. Its promise to empower the common creator was a wager on a cultural ecology independent of the pressures of institutions such as states or corporations that have historically sought to exercise editorial control over channels of communication. Events following the Adpocalypse show a reversal in that culture on the platform, a turn that is explained by the historically symbiotic relationship between advertisers and the media, with its attendant complications. Historically, corporations have not only sought to exercise editorial control over media but have openly deployed the power of their advertising dollars to create a sanitised environment for their commercials through “economic censorship” (Richards & Murphy, 1996; Baker, 1992). In a now infamous memo to broadcasters, Procter & Gamble, the largest advertiser in the US (and incidentally also a leader of the post-Adpocalypse boycott of YouTube 15), gave very specific instructions to facilitate a “buying mood” that included instructions to avoid depicting the horrors of war, portraying criminal activities on screen, showing business as “cold, ruthless and lacking all sentiment” and any attacks on “some basic conception of the American way of life” (Bagdikian, 2009). The consumer giant’s diktat however was just one instance of a normalised culture (elaborately documented by Richards & Murphy, 1996) wherein corporations routinely sought to shape and regulate the content of media. So explicit was this practice that there existed job titles such as “screeners” for trained experts who worked at agencies specialising in screening television content to vet shows as appropriate for advertisements to run on. In an eerie similarity with YouTube’s sensitive categories that advertisers can exclude today, one such screener, Tami Engelhardt, described their method as, “Basically, we look for what we call the Big Six: sex, violence, profanity, drugs, alcohol and religion” (Carter, 1990). The migration of this historical phenomenon onto the digital platform is not surprising but is evidence of its gradual emulation of TV, a point further underscored by YouTube’s decision to hire more than 10,000 human moderators for “reviewing content that could violate its policies” (Levin, 2017) after the Adpocalypse.

Explicit diktats by advertisers to shape media’s content perhaps conceal the more pernicious consequence: the invisible self-censorship by both individuals and media institutions, with a reverberating chilling effect that proscribes any allusion to controversial issues and topics. When creators on YouTube discuss the process of “self-optimization” (Bishop, 2018) in order to make their content algorithm friendly (see Gillespie, 2014), they are only acting according to “a pervasive awareness that deviation can be costly” (Baker, 1992, p. 2142) for advertising-supported media. In describing how this fear can ripple through the media ecosystem, Baker (1992) explains that, “Eventually, this system of predicted disapproval dissuades reporters or producers from even thinking about a problematic story” (p. 2143). The uncanny similarities between the historical experience of media personnel and the second-guessing and cautious play-it-safe approach of YouTube creators today irrefutably signal a new era on the platform. The privileging of advertisers’ interests to the detriment of creators after the Adpocalypse fits well within this altered scenario, and is only the latest such move by a platform now in serious competition with television networks for the advertising spend of the big television advertisers. An important prior link in that chain was the launch of the Google Preferred programme, which created an elite list of channels (comprising the top 5% of YouTube videos) determined by “a proprietary algorithm involving total audience and passion level among viewers” (McCracken, 2017), to be given special treatment and to be offered as a package to advertisers. The creation of an elite tier only further establishes a truth that is by now a well-known dictum - that everyone is not equal on YouTube - thus institutionalising the very hierarchical structuration (Mosco, 1996) that was the hallmark of older, pre-digital media entities.

Public infrastructures, private interests

Creators’ frustration and YouTube’s gradual pivot towards a more mainstream media outlet encapsulate the emerging quandary when private digital entities “inhabit a new position of responsibility” akin to essential public utilities and are “entwined with the institutions of public discourse” (Gillespie, 2018, p. 203) but also continue to remain largely outside the purview of democratic deliberation and public accountability. Their exasperation arises from unmet expectations that have naturally risen as users have got used to building their creative and commercial lives around digital infrastructures such as YouTube. The increasingly common apperception of private digital platforms as public infrastructures becomes acutely real during conflicts, breakdowns or other moments of dissatisfaction, as users instinctively call up public institutions such as the police to restore service during disruptions of access. The most recent instance of this fascinating phenomenon occurred on 16 October 2018 when YouTube faced a rare hour-long global outage (Almasy, 2018), leading to a social media eruption of accounts and testimonies of what it was like to live without access to the platform even for a few minutes. As the Twitter hashtag #YouTubeDown began to trend and a sense of purposeless ennui gripped social media users, police departments as far apart as Philadelphia and New Zealand reported calls from desperate users seeking their help in restoring service. In tweets whose hilarity cannot conceal their significance, the Philadelphia police department (@PhillyPolice) said, “While it is extremely annoying, @YouTube being down is not a police matter #YouTubeDOWN” and the New Zealand police (@nzpolice) said, “Yes, our @YouTube is down, too. No, please don't call 911 - we can't fix it”. The phenomenon of users calling the police during interruptions of service is not limited to YouTube and has occurred just as well in the case of other digital media platforms. When Facebook went down briefly due to technical issues, users from the small town of Cheshire in the UK to a big city such as Houston in the US called their local police departments to complain (Griffin, 2015; Hooper, 2017), underscoring its spontaneous equation with a public institution with service obligations and accountability.

Reactions to breakdowns such as the above reveal both the creeping role of digital platforms as social, cultural and material infrastructures and their “taken-for-grantedness” in the public mind, akin to public services that are inconspicuous “until they break down or something goes awry” (Larkin, 2008, p. 245). Users’ spontaneous calls to the police show a subconscious and illusory process of identification of platforms as publicly accountable entities, a perception arising no doubt from their central role in digital life. While this perception would not hold upon thoughtful scrutiny, the basis for its popular prevalence must nevertheless be acknowledged, and it has led to interrogations of platforms’ role in ensuring the rights of citizens (Belli et al., 2017) in areas such as freedom of speech, access to information and data protection. It is important to acknowledge that this perception has been a long time in the making, and expectations of fairness and accountability amidst the asymmetrical relationship between platforms and users were already visible in the early years of the web, when Google emerged as the giant of online advertising it is today. John Battelle’s (2005) exhaustive account of Google’s birth and expansion contains a prescient nugget about Neil Moncrief, a small business owner whose niche trade of making shoes for people with bigger sized feet witnessed a precipitous rise and an equally dramatic fall due to tweaks in Google’s search algorithm. In describing the sharp drop, Moncrief claimed that it was, “…as if the Georgia Department of Transportation had taken all the road signs away in the dead of night and his customers could no longer figure out how to drive to his store” (Battelle, 2005, p. 156). Moncrief’s metaphorical comparison of Google’s services with the public highway system was prophetic, as it anticipated by more than a decade the experience of creators on YouTube after the Adpocalypse. These instances, though a decade apart, show a continuing re-orientation of the boundaries between our understanding of the public and the private in a scenario where our social, political, commercial and cultural world is increasingly mediated by digital platforms.

Their function as public utilities that form the bedrock of digital life is made all the more salient by the acute material and economic repercussions of their decisions. Creators lament that a vital dimension of the Adpocalypse controversy went relatively un-interrogated: beyond the financial consequences of algorithmic decisions about content categorisation and curation, those decisions, far more worryingly, affected the visibility of their content. To parse out the significance of this point we have to navigate the thicket of the controversy and the policy statements made by YouTube with a thought experiment 16 that asks: given two categories of videos, one that is advertiser friendly and not excludable under the five labels (so that all ads can run on it), and one that is excludable either entirely (for being non-advertiser friendly) or partially (because advertisers can check off a label that the video has been categorised under), which of the two would the YouTube algorithm give preference to? This question is akin to invoking the sanctity of the historical divide between the editorial and the commercial side of traditional media such as newspapers and television. YouTube’s answer to this question (given through its unofficial channel Creator Insider) is an emphatic denial that the algorithm for search and discovery has any knowledge of the monetisation status of a video, thus implying that whether or not a video is fit for running ads has no implications for its viewership numbers. In emphasising this point, a YouTube engineer, Todd, explains in a video 17,

The search and discovery systems that decide which videos to recommend - they don’t have any knowledge about what is going on in the advertising system - so if you get that yellow icon that you see that says it may not be suitable for all advertisers the information about that does not even flow into our system. So they are completely different systems different teams manage them.

The implication of this answer - that YouTube’s worries about its bottom line and its established affinity to its advertisers (as evident in the policy changes instituted post Adpocalypse) do not influence its decisions about the discoverability of videos and recommendations made to viewers - would be laudable if true. However, given the thrust of YouTube’s decisions that were aimed to protect its bottom line and advertisers’ interests, it would seem like an aberration for the platform to treat videos that cannot make it money on par with those that can. Its denial instantiates an enduring quandary for critical scholars faced with scarcely credible claims by platforms that are difficult to disprove unless the algorithmic black box (Pasquale, 2015) can be circumvented. Thanks to the angst arising from the Adpocalypse, evidence to disprove YouTube’s claims emerged from among the community of creators and users who were desperately seeking answers to the unfolding controversy. By delving deeper to unearth evidence from his videos’ analytics, one of them was able to show a clear correlation between the monetisation status of videos and their viewership patterns. Discussing the 90-day overview of his videos 18, the German YouTuber Jörg Sprave not only showed that viewership numbers fell exactly at the time his videos were demonetised but also that they were back to normal levels the precise moment the demonetisation decision was reversed (see Figures 3 and 4). Claiming that there was “100% correlation” between monetisation status and viewership patterns, Sprave argues that “they are only supporting videos that make money for them and if they don’t make any money they are filtering them out.” Refuting this conclusion, representatives of YouTube (writing below Sprave’s video and using the Creator Insider handle) 18 deny any correlation and instead ascribe the drop in viewership numbers of his demonetised videos to the fact that “the underlying reason for less ads or no ads is also used by search and discovery”.

Figure 3: Screenshot from Sprave’s video analysing how viewership numbers on his video (two graphs on the left) dropped at the same time his video was demonetised (bottom right graph).
Figure 4: Screenshot from Sprave’s video showing the perfect synchronisation between viewership (two graphs on the left) and the monetisation status of his video (bottom right graph).

While plausible, this explanation is unable to account for the perfect synchronicity in timing between monetisation status and viewership numbers, which can only be explained by a coordination between the search and discovery algorithms and those classifying and categorising content. That decisions about demonetisation and suppressed viewership, arrived at through two separate systems (i.e., content classification and search and discovery), showed perfect synchronicity is too striking a coincidence to be explained by chance alone, as representatives of YouTube would have us believe. Moreover, if algorithmic decisions about monetisation and visibility are informed by the same underlying reason, they raise worrying questions about the types of content that will get visibility on the platform.
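To make the arithmetic behind Sprave’s claim concrete, the sketch below shows the kind of check his side-by-side reading of the analytics graphs amounts to: computing the correlation between a video’s daily monetisation status and its daily views. All numbers are invented for illustration; this is neither Sprave’s data nor any YouTube API.

```python
from statistics import correlation  # requires Python 3.10+

# Hypothetical daily analytics for one video: 1 = monetised, 0 = demonetised,
# alongside invented daily view counts (illustration only).
monetised = [1, 1, 1, 0, 0, 0, 0, 1, 1, 1]
daily_views = [5200, 5100, 5300, 900, 850, 910, 880, 5000, 5150, 5250]

# A Pearson coefficient close to 1 mirrors Sprave's claim of "100% correlation"
# between monetisation status and viewership.
r = correlation(monetised, daily_views)
print(f"Pearson correlation: {r:.3f}")
```

With view counts that collapse exactly on demonetised days, the coefficient lands near 1; two genuinely independent systems would be expected to produce a coefficient near 0.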

If true, this similarity between the two sides of the platform fits well with YouTube’s aggressive attempts to increase its bottom line, which include proactively courting the global corporate advertising spend by cashing in on its latent potential as a hub of online cultural creativity. The rising graph of its share of the global advertising pie, aided by the increasing migration of advertising dollars from television to the digital domain (McCracken, 2017), was estimated to make YouTube a US$15 billion business in 2018 (Jhonsa, 2018), thus entrenching its role as a profitable cash cow for its parent company Alphabet. That YouTube’s search and discovery algorithms would prioritise profitable content over content unlikely to bring in any advertising dollars (as shown by Sprave’s experience) seems a natural fit with the platform’s financial ambitions, and yet it raises troubling questions about its role in shaping the global cultural ecology. When financial considerations prevail over other factors in decisions about what content gets promoted and what does not, they begin to skew content moderation towards particular types of discourses, truths and versions of realities and away from others.

In contrast to its “implicit contract” (Gillespie, 2018, p. 203) of having no stakes in the ongoing duel of ideas that define liberal democracies, YouTube’s financial priorities begin to create invisible pathways for users, thus “shaping what they know, who they know, what they discover, and what they experience” (Kitchin, 2017, p. 6). There are precedents for such power in prior media institutions (McChesney, 2015), and yet its consequence in the digital domain, given the kind of monopoly that YouTube enjoys in the field of video content globally, is far more enduring. And while those earlier monopolies were critiqued, resisted and regulated, digital platforms such as YouTube function with wide protection wherein they are trusted to self-regulate for the common good. As the aftermath of the Adpocalypse reveals, trusting digital platforms to foreground plurality, contrarianness and heterogeneity (Mill, 2014) while also remaining impartial adjudicators between competing ideas and truth claims leaves far too consequential decisions at the mercy of their profit-driven interests.

Conclusions

The scholarly history of media industries is replete with laments about the consequences of unrequited trust in profit-seeking corporations to take decisions for the larger common good. The public perception of globally dominant digital platforms as public infrastructures, emboldened no doubt by their necessity for living our digital life, makes their inclusion within the purview of democratic deliberation and accountability an imperative for our age. The chilling effect created by the looming fear of demonetisation, and the loss of viewership resulting from algorithmic categorisation, has consequences far beyond the immediate discontent generated by YouTube’s decisions post the Adpocalypse. The Adpocalypse signals a decisive shift in the incentive structures of content creation on YouTube, one likely to deter creators from particular topics, genres and categories of content, charting a path away from a plural, free and heterogeneous ecosystem towards a more sanitised, family-friendly and mainstream platform.

While YouTube is continuously tweaking its rules in response to emerging crises, what remains entirely missing is any formalised process of stakeholder participation in the decisions it makes. In a system where it is not answerable to any regime other than concerns about its own survival and bottom line, it is free to arbitrarily take cognisance of concerns that it considers worthy of attention and ignore the rest. Its expanding global footprint and the millions of users that come to rely on it for a wide range of activities that make up our cultural life make such an asymmetrical relationship untenable in the long run. More than two years after the controversy, the lack of a formalised redressal mechanism has ensured that the rumblings among the creator community have not diminished, raising critical questions about the precarity of creator labour and the exploitative nature of the relationship between platforms and ‘produsers’. Instituting formalised mechanisms for stakeholder participation that go beyond mere gestures to recognise how the platform profits from uncompensated labour is key to redressing the grievances arising from the controversy. Such mechanisms must seek to live up to the original promise of the platform by recognising the precarious position of the creators, who remain the most vulnerable and the least powerful voice among the stakeholders affected by the Adpocalypse.

References

Almasy, S. (2018, October 17). YouTube back online after brief outage. CNN. Retrieved November 18, 2018 from https://www.cnn.com/2018/10/16/tech/youtube-outage/index.html

Anderson, C. (2008, June 23). The End of Theory: The Data Deluge Makes the Scientific Method Obsolete. Wired. Retrieved from https://www.wired.com/2008/06/pb-theory/

Andrejevic, M. (2017). To Pre-Empt A Thief. International Journal of Communication, 11, 879–896. Retrieved from https://ijoc.org/index.php/ijoc/article/view/6308

Bagdikian, B. H. (2009). Dr. Brandreth Has Gone to Harvard. In J. Turow & M. P. McAllister (Eds.), The Advertising and Consumer Culture Reader (pp. 76–90). New York: Routledge.

Baker, C. E. (1992). Advertising and a Democratic Press. University of Pennsylvania Law Review, 140(6), 2097–2243. Retrieved from http://scholarship.law.upenn.edu/penn_law_review/vol140/iss6/1

Battelle, J. (2005). The Search: How Google and Its Rivals Rewrote the Rules of Business and Transformed Our Culture. New York: Portfolio.

Beer, D. (2017). The social power of algorithms. Information, Communication & Society, 20(1), 1–13. doi:10.1080/1369118X.2016.1216147

Belli, L., Erdos, D., Fernández Pérez, M., Francisco, P. A. P., Garstka, K., Herzog, J., … Zingales, N. (2017). Platform regulations: How platforms are regulated and how they regulate us. Retrieved from http://bibliotecadigital.fgv.br/dspace/handle/10438/19402

Bishop, S. (2018). Anxiety, panic and self-optimization: Inequalities and the YouTube algorithm. Convergence, 24(1), 69–84. doi:10.1177/1354856517736978

Bucher, T. (2012). Want to be on the top? Algorithmic power and the threat of invisibility on Facebook. New Media & Society, 14(7), 1164–1180. doi:10.1177/1461444812440159

Burgess, J., & Green, J. (2018). YouTube: Online Video and Participatory Culture (2nd ed.). Cambridge; Malden, MA: Polity.

Burgess, J., & Matamoros-Fernández, A. (2016). Mapping sociocultural controversies across digital media platforms: One week of #gamergate on Twitter, YouTube, and Tumblr. Communication Research and Practice, 2(1), 79–96. doi:10.1080/22041451.2016.1155338

Carter, B. (1990, January 29). THE MEDIA BUSINESS; Screeners Help Advertisers Avoid Prime-Time Trouble. The New York Times. Retrieved from https://www.nytimes.com/1990/01/29/business/the-media-business-screeners-help-advertisers-avoid-prime-time-trouble.html

Craig, D., & Cunningham, S. (2019). Social Media Entertainment: The New Intersection of Hollywood and Silicon Valley. New York: NYU Press.

Dunphy, R. (2017, December 28). Can YouTube Survive the Adpocalypse? Intelligencer. Retrieved November 18, 2018, from http://nymag.com/intelligencer/2017/12/can-youtube-survive-the-adpocalypse.html

Fuchs, C. (2017). Social Media: A Critical Introduction (Second edition). Thousand Oaks, CA: SAGE Publications.

Fuchs, C., & Mosco, V. (Eds.). (2017). Marx in the Age of Digital Capitalism. Chicago: Haymarket Books.

Gillespie, T. (2014). The Relevance of Algorithms. In T. Gillespie, P. J. Boczkowski, & K. A. Foot (Eds.), Media Technologies: Essays on Communication, Materiality, and Society. Cambridge, MA: MIT Press.

Gillespie, T. (2016). Algorithm. In B. Peters (Ed.), Digital Keywords (pp. 18–30). Princeton; Oxford: Princeton University Press.

Gillespie, T. (2017). Algorithmically recognizable: Santorum’s Google problem, and Google’s Santorum problem. Information, Communication & Society, 20(1), 63–80. doi:10.1080/1369118X.2016.1199721

Gillespie, T. (2018). Platforms are not intermediaries. Georgetown Law Technology Review, 2, 198–216. Retrieved from https://georgetownlawtechreview.org/platforms-are-not-intermediaries/GLTR-07-2018/

Granka, L. A. (2010). The politics of search: A decade retrospective. The Information Society, 26(5), 364–374. doi:10.1080/01972243.2010.511560

Griffin, A. (2015, September 29). Facebook Down: Don’t Ring Us When Site Stops Working, Say Police. The Independent. Retrieved November 18, 2018, from http://www.independent.co.uk/life-style/gadgets-and-tech/facebook-down-don-t-ring-us-when-site-stops-working-say-police-a6672081.html

Hallinan, B., & Striphas, T. (2016). Recommended for you: The Netflix Prize and the production of algorithmic culture. New Media & Society, 18(1), 117–137. doi:10.1177/1461444814538646

Hess, A. (2017, April 17). How YouTube’s Shifting Algorithms Hurt Independent Media. The New York Times. Retrieved from https://www.nytimes.com/2017/04/17/arts/youtube-broadcasters-algorithm-ads.html

Hillis, K., Petit, M., & Jarrett, K. (2013). Google and the culture of search. Abingdon: Routledge. doi:10.4324/9780203846261

Hooper, B. (2017, October 12). Police: Don’t call 911 to report Facebook is down. UPI. Retrieved from https://www.upi.com/Odd_News/2017/10/12/Police-Dont-call-911-to-report-Facebook-is-down/5381507814083/

Jhonsa, E. (2018, May 12). How Much Could Google’s YouTube Be Worth? Try More Than $100 Billion. TheStreet. Retrieved from https://www.thestreet.com/investing/youtube-might-be-worth-over-100-billion-14586599

Kain, E. (2017, September 18). YouTube Wants Content Creators To Appeal Demonetization, But It’s Not Always That Easy. Forbes. Retrieved from https://www.forbes.com/sites/erikkain/2017/09/18/adpocalypse-2017-heres-what-you-need-to-know-about-youtubes-demonetization-troubles/#33a23c4c6c26

Kitchin, R. (2017). Thinking critically about and researching algorithms. Information, Communication & Society, 20(1), 14–29. doi:10.1080/1369118X.2016.1154087

Kumar, S. (2017). A river by any other name: Ganga/Ganges and the postcolonial politics of knowledge on Wikipedia. Information, Communication & Society, 20(6), 809–824. doi:10.1080/1369118X.2017.1293709

Larkin, B. (2008). Signal and Noise. Duke University Press.

Levin, S. (2017, December 5). Google to hire thousands of moderators after outcry over YouTube abuse videos. The Guardian. Retrieved from https://www.theguardian.com/technology/2017/dec/04/google-youtube-hire-moderators-child-abuse-videos

Lewandowski, D. (2014). Why We Need an Independent Index of the Web. In R. Konig & R. Miriam (Eds.), Society of the Query Reader: Reflections on Web Search (pp. 49–58). Amsterdam: Institute of Network Cultures. Retrieved from http://networkcultures.org/query/reader/articles-for-download-society-of-the-query-reader/

Lobato, R. (2016). The cultural logic of digital intermediaries: YouTube multichannel networks. Convergence, 22(4), 348–360. doi:10.1177/1354856516641628

McChesney, R. W. (2015). Rich Media, Poor Democracy: Communication Politics in Dubious Times. New York: The New Press.

McCracken, H. (2017, June 18). Susan Wojcicki Has Transformed YouTube—But She Isn’t Done Yet. Fastcompany. Retrieved from https://www.fastcompany.com/40427026/susan-wojcickis-youtube-isnt-tv-but-its-tvs-biggest-rival

McGoogan, C. (2017, July 3). YouTube refunds advertisers after terror content scandal. The Telegraph. Retrieved from https://www.telegraph.co.uk/technology/2017/07/03/youtube-refunds-advertisers-terror-content-scandal/

Meek, D. (2012). YouTube and Social Movements: A Phenomenological Analysis of Participation, Events and Cyberplace. Antipode, 44(4), 1429–1448. doi:10.1111/j.1467-8330.2011.00942.x

Mosco, V. (1996). The Political Economy of Communication. London: Sage Publications.

Mostrous, A. (2017, February 9). Google faces questions over videos on YouTube. The Times. Retrieved from https://www.thetimes.co.uk/article/google-faces-questions-over-videos-on-youtube-3km257v8d

Neate, R. (2017, March 17). Extremists made £250,000 from ads for UK brands on Google, say experts. The Guardian.

Nieborg, D. B., & Poell, T. (2018). The platformization of cultural production: Theorizing the contingent cultural commodity. New Media & Society, 20(11), 4275–4292. doi:10.1177/1461444818769694

Pasquale, F. (2015). The Black Box Society: The Secret Algorithms That Control Money and Information. Cambridge, MA; London: Harvard University Press.

Patel, S. (2017, September 6). The “demonetized”: YouTube’s brand-safety crackdown has collateral damage. Digiday. Retrieved from https://digiday.com/media/advertisers-may-have-returned-to-youtube-but-creators-are-still-losing-out-on-revenue/

Peters, J. D. (2015). The Marvelous Clouds: Toward a Philosophy of Elemental Media. Chicago: University of Chicago Press.

Peterson, T. (2018, April 24). To identify unsafe content, YouTube tries asking creators to rate their own videos. Digiday. Retrieved from https://digiday.com/media/identify-unsafe-content-youtube-tries-asking-creators-rate-videos/

Popper, B. (2017, April 6). YouTube will no longer allow creators to make money until they reach 10,000 views. The Verge. Retrieved July 8, 2019, from https://www.theverge.com/2017/4/6/15209220/youtube-partner-program-rule-change-monetize-ads-10000-views

Cantz, R. (2018, May 1). Adpocalypse: How YouTube Demonetization Imperils the Future of Free Speech. Berkeley Political Review. Retrieved July 8, 2019, from https://bpr.berkeley.edu/2018/05/01/adpocalypse-how-youtube-demonetization-imperils-the-future-of-free-speech/

Rath, J. (2017, March 23). Here are the biggest brands that have pulled their advertising from YouTube over extremist videos. Business Insider India. Retrieved November 18, 2018, from https://www.businessinsider.in/here-are-the-biggest-brands-that-have-pulled-their-advertising-from-youtube-over-extremist-videos/articleshow/57793895.cms

Richards, J. I., & Murphy, J. H. (1996). Economic Censorship and Free Speech: The Circle of Communication between Advertisers, Media, and Consumers. Journal of Current Issues & Research in Advertising, 18(1), 21–34. doi:10.1080/10641734.1996.10505037

Rieder, B., Matamoros-Fernández, A., & Coromina, Ò. (2018). From ranking algorithms to ‘ranking cultures’: Investigating the modulation of visibility in YouTube search results. Convergence, 24(1), 50–68. doi:10.1177/1354856517736982

Rieder, B., & Sire, G. (2013). Conflicts of interest and incentives to bias: A microeconomic critique of Google’s tangled position on the Web. New Media & Society, 16(2), 195-211. doi:10.1177/1461444813481195

Roio, D. (2018). Algorithmic Sovereignty (PhD Thesis, University of Plymouth). Retrieved from http://hdl.handle.net/10026.1/11101

Solon, O. (2017, March 25). Google’s bad week: YouTube loses millions as advertising row reaches US. The Guardian. Retrieved from https://www.theguardian.com/technology/2017/mar/25/google-youtube-advertising-extremist-content-att-verizon

Striphas, T. (2015). Algorithmic culture. European Journal of Cultural Studies, 18(4–5), 395–412. doi:10.1177/1367549415577392

Synek, G. (2018, January 17). YouTube raises activity requirements for partner program monetization. TechSpot. Retrieved July 9, 2019, from https://www.techspot.com/news/72792-youtube-raises-activity-requirements-partner-program-monetization.html

Thaler, R. H., & Sunstein, C. R. (2008). Nudge: Improving Decisions about Health, Wealth, and Happiness. New Haven: Yale University Press.

Wall, M., & El Zahed, S. (2011). “I’ll Be Waiting for You Guys”: A YouTube Call to Action in the Egyptian Revolution. International Journal of Communication, 5, 1333–1343. Retrieved from https://ijoc.org/index.php/ijoc/article/view/1241

Wasko, J., & Erickson, M. (2009). The Political Economy of YouTube. In P. Snickars & P. Vonderau (Eds.), The Youtube Reader (pp. 372–386). Stockholm: National Library of Sweden.

Yeung, K. (2017). ‘Hypernudge’: Big Data as a mode of regulation by design. Information, Communication & Society, 20(1), 118–136. doi:10.1080/1369118X.2016.1186713

A guideline for understanding and measuring algorithmic governance in everyday life


This paper is part of Transnational materialities, a special issue of Internet Policy Review guest-edited by José van Dijck and Bernhard Rieder.

Introduction

The growing use, importance and embeddedness of internet-related algorithms in various life domains is widely acknowledged. Academic and public debates focus on a spectrum of implications in everyday life, caused by internet-based applications that apply automated algorithmic selection (AS) for, among other things, searches, recommendations, scorings or forecasts (Latzer, Hollnbuchner, Just, & Saurwein, 2016; Willson, 2017). These discussions are often combined with reflections on growing automation in general and the impact of artificial intelligence (e.g., machine learning) in particular (Larus et al., 2018). Questions emerge as to how to analytically grasp and assess the consequences of the diffusion of algorithmic selections in modern societies, which some observers characterise as algocracies (Aneesh, 2009) in an algorithmic age (Danaher et al., 2017), marked by growing relevance of informatics and statistics in the governance of societies.

In this paper we provide a guideline for answering these questions. We (1) take a governance perspective and suggest understanding the influence of automated algorithmic selections on daily practices and routines as a form of institutional steering (governance) by technology (software). This institutional approach is combined with practice-related concepts of everyday life, in particular of the daily social and mediated constructions of realities, and embraces the implications of algorithmic governance in selected life domains. Based on these combined approaches, and on a review of empirical algorithmic-governance literature that identifies research gaps, we (2) develop a theoretical model that includes five variables that measure the actual significance of algorithmic governance in everyday life from a user perspective. To examine these variables for different life domains, an innovative empirical mixed-methods approach is proposed, which includes qualitative user interviews, an online survey and user tracking.

Results from applying the proposed guideline should contribute to a more nuanced understanding of the significance of algorithmic governance in everyday life and provide empirically informed input for improved risk assessments and policies regarding the governance of algorithms. Accordingly, applying this guideline should help both academics and practitioners to conduct policy analyses and assist them in their policy-making.

A nuanced understanding of algorithmic governance in everyday life

In the fast-growing academic and non-academic literature on algorithms, their implications in daily life are summarised using a variety of sometimes misleading and only vaguely defined terms, ranging from algocracy and algorithmic selection to algorithmic regulation and algorithmic decision-making. In the following, a nuanced understanding of “algorithmic governance” is developed from an institutional perspective that can form the basis for policy analyses and policy-making.

Governance can be understood as institutional steering (Schneider & Kenis, 1996), marked by the horizontal and vertical extension of traditional government (Engel, 2001). Governance by algorithms, also referred to as algorithmic governance, captures the intentional and unintentional steering effects of algorithmic-selection systems in everyday life. Such systems are part of internet-based applications and services, applied by private actors / commercial platforms (e.g., music recommender systems) and political actors (e.g., predictive policing). They include both institutional steering with and by algorithms in societies, i.e., as tools or as (semi-) autonomous agents, either in new or already established commercial and political governance systems. Our understanding of algorithmic governance in everyday life overlaps with Yeung’s (2018) algorithmic regulation. But algorithmic governance in everyday life goes far beyond ‘intentional attempts to manage risk or alter behaviour in order to achieve some pre-specified goal’, and refers not only to ‘regulatory governance systems that utilise algorithmic decision making’ (Yeung, 2018, p. 3). Unintentional effects of automated algorithmic selections are a major part of algorithmic governance and call for special attention in policy analyses and policy-making.

Danaher et al. (2017) use the terms algorithmic governance and algocracy largely synonymously, referring to the intertwined trends of (1) growing reliance on algorithms in traditional corporate and bureaucratic decision-making systems, and (2) the outsourcing of decision-making authority to algorithm-based decision-making systems. In accordance with Aneesh (2009) and Danaher (2016), we do not understand algocracy as the final stage of technological singularity ‘when humans transcend biology’, as foreseen by Google’s director of engineering Ray Kurzweil (2005), but rather as a kind of governance system where algorithms govern (i.e., shape, enable and constrain activities) either as intentionless tools of human agents or as non-human agents equipped with a certain autonomy. 1 Together and also as part of other kinds of (traditional) governance systems (e.g., legal systems, self-regulations, cultural norms and traditions), they co-govern societies. The extent of the relative importance of algorithmic selections in daily routines and their overall effect on social order in societies, however, is an open research question. Empirically assessing the significance of algorithmic governance is particularly important since accurate assessments of the role of algorithms (e.g., degree of automation and autonomy) and associated risks are a prerequisite for the development of adequate public policies.

Different aspects of algorithmic governance have received attention from various disciplines, leading to a large but fragmented body of research. A comprehensive empirical assessment of the significance of algorithmic selection in daily life requires both concepts of algorithmic selection and of everyday life that can be operationalised. This article commences with a working definition of algorithmic selection as the automated assignment of relevance to certain selected pieces of information and a focus on internet-based applications that build on algorithmic selection as the basic unit of analysis (Latzer et al., 2016).
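Read as pseudocode, this working definition boils down to a function that assigns a relevance score to each candidate piece of information and selects the top-ranked ones. The sketch below is a generic illustration under assumed names, not a model of any particular service.

```python
from typing import Callable, List, Tuple


def algorithmic_selection(
    items: List[str],
    relevance: Callable[[str], float],
    k: int = 3,
) -> List[Tuple[str, float]]:
    """Automated assignment of relevance to pieces of information,
    followed by selection of the k most relevant items."""
    scored = [(item, relevance(item)) for item in items]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:k]


# Toy relevance function: longer headlines score higher (illustration only).
headlines = [
    "Election results are in",
    "Cat video goes viral",
    "New data protection rules explained",
]
print(algorithmic_selection(headlines, relevance=len, k=2))
```

Everything that matters in practice, from a search engine to a credit score, lies in how the relevance function is defined and on what data it operates, which is precisely what the socio-technical perspective discussed next puts at the centre of analysis.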

Algorithmic selection applications as units of analysis

The emerging field of critical algorithm studies can roughly be grouped into studies that centre on (single) algorithms per se as their unit of analysis, and those that focus on the socio-technical context of AS applications. Studies focusing on the algorithm itself show the capabilities of AS and aim to detect an algorithm’s inner workings, typically by reverse engineering the code (Diakopoulos, 2015), experimental settings (Jürgens, Stark, & Magin, 2015), or code review (Sandvig, Hamilton, Karahalios, & Langbort, 2014). Often, however, they are not able to determine the overall social power that algorithms exert, because algorithms are studied in isolation and user perceptions and behaviour are not sufficiently accounted for. Generally, a purely technical definition of algorithms as encoded procedures that transform input data into specific output based on calculations (e.g., Kowalski’s, 1979, ‘algorithm = logic + control’) and the mere uncovering of the workings of an algorithm do not reveal much about the risks of their applications and their social implications. Algorithms remain ‘meaningless machines’ (Gillespie, 2014) or ‘mathematical fiction’ (Constantiou & Kallinikos, 2015) until they are connected to real-world data (Sandvig et al., 2014). This is accounted for in studies on the socio-technical context of AS, where algorithms are viewed as situated artefacts and generative processes embedded in a complex ecosystem (Beer, 2017; Willson, 2017). As such, algorithms are only one component in a broader socio-technical assemblage (Kitchin, 2017), comprising technical (e.g., software) and human (e.g., uses) components (Willson, 2017). By focusing on internet-based applications that build on algorithmic selection as units of analysis and on the societal functions they perform (see Table 1), this article situates itself within the second group of research.

Table 1: Functional typology of AS applications (adapted from Latzer et al., 2016)

Search: general search engines (e.g., Google search, Bing, Baidu); special search engines (e.g., findmypast.com, Shutterstock, Social Mention); meta search engines (e.g., Dogpile, Info.com); semantic search engines (e.g., Yummly); question and answer services (e.g., Ask.com)

Aggregation: news aggregators (e.g., Google News, nachrichten.de)

Observation/surveillance: surveillance (e.g., Raytheon’s RIOT); employee monitoring (e.g., Spector, Sonar, Spytec); general monitoring software (e.g., Webwatcher)

Prognosis/forecast: predictive policing (e.g., PredPol); predicting developments such as success or diffusion (e.g., Sickweather, scoreAhit)

Filtering: spam filters (e.g., Norton); child protection filters (e.g., Net Nanny)

Recommendation: recommender systems (e.g., Spotify, Netflix)

Scoring: reputation systems for music, film and so on (e.g., eBay’s reputation system); news scoring (e.g., reddit, Digg); credit scoring (e.g., Kreditech); social scoring (e.g., PeerIndex, Kred)

Content production: algorithmic journalism (e.g., Quill, Quakebot)

Allocation: computational advertising (e.g., Google AdSense, Yahoo! Bing Network); algorithmic trading (e.g., Quantopian)

The typology in Table 1 demonstrates how broad the scope of AS applications has become. An approach that focuses on socio-technical and functional aspects is accessible for research into the social, economic and political impact of algorithms (Latzer et al., 2016) and the power algorithms may have as gatekeepers (Jürgens, Jungherr, & Schoen, 2011), agents (Rammert, 2008), ideologies (Mager, 2012) or institutions (Napoli, 2014). The institutional governance perspective applied in this paper identifies algorithms as norms and rules that affect daily behaviour by limiting activities, influencing choices, and creating new scope for action. They shape how the world is perceived and what realities are constructed. In essence, algorithms co-govern everyday life and impact the daily individual construction of realities—the individual consciousness—and consequently the collective consciousness, which in turn makes them a source and factor of social order, resulting from a shared social reality in a society (Just & Latzer, 2017).

Algorithms co-govern daily life as instruments and actors

The governing role of algorithms needs further analytical specification. As general-purpose technologies (Bresnahan, 2010), algorithms have an impact on a wide range of life domains, and as enabling technologies their impact is contingent on social-use decisions. From a co-evolutionary perspective (Just & Latzer, 2017), algorithmic governance is a complex, interconnected system of distributed agency (Rammert, 2008) between humans and software, a co-evolutionary circle of permanently shaping and being shaped at the same time. Algorithms co-govern what can be found (e.g., algorithmic searches), what is anticipated (e.g., algorithmic forecasts), consumed (e.g., algorithmic recommendations) and seen (e.g., algorithmic filtering), and whether it is considered relevant (e.g., algorithmic scoring) (Just & Latzer, 2017). They thereby contribute to the constitution and mediation of our lives (Beer, 2009). The use of only vaguely defined terms like algorithmic decision-making can be misleading when assessing the social consequences of different kinds of algorithmic governance. Various analytical distinctions should be kept in mind when studying algorithmic governance:

Algorithmic selection applications on the internet differ widely in their degree of automation and autonomy. At one end of the spectrum, algorithms are used as instruments with imposed agency that exert power without any autonomy, with predefined and largely predictable outcomes 2. At the other end, machine-learning algorithms govern with a delegated agency that implies a predefined autonomy, leading to unforeseeable results 3.

To indicate the actual autonomy of algorithmic systems on the internet, a classification similar to that applied to self-driving cars may be helpful, where a label from 1 (low) to 5 (full) marks the degree of automation (Bagloee, Tavana, Asadi, & Oliver, 2016). Literature on automated weapons systems provides another useful way to categorise the remaining human control in automated decision-making systems: humans are classified as being either (1) in-the-loop and fully in control, (2) on-the-loop and able to intervene if deemed necessary, or (3) off-the-loop and without any option to intervene (Citron & Pasquale, 2014). This distinction, for example, proves helpful when liabilities for algorithmic governance are evaluated. The term automated decision-making algorithms often refers to decisions by algorithms without human involvement (off-the-loop), and has already led to regulatory interventions. The use of automated decision-making systems with significant legal or social effects is restricted (e.g., fully automated tax assessments), for example, by article 22(1) of the European General Data Protection Regulation (GDPR), whereas the use of other automated decision-making systems—based on non-personal data—is not restricted (Martini & Nink, 2017). The sketch below illustrates how these two taxonomies could be combined for classification purposes.
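As a minimal sketch, the automation scale and the loop taxonomy can be combined into a simple record for classifying a given AS application. All names and example values are assumptions for illustration, not an implementation from the cited literature.

```python
from dataclasses import dataclass
from enum import Enum


class HumanControl(Enum):
    """Remaining human control, following Citron & Pasquale (2014)."""
    IN_THE_LOOP = 1   # human fully in control of each decision
    ON_THE_LOOP = 2   # human supervises and can intervene if deemed necessary
    OFF_THE_LOOP = 3  # no option for human intervention


@dataclass
class ASApplication:
    """One algorithmic-selection application under analysis."""
    name: str
    automation_level: int  # 1 (low) to 5 (full), analogous to self-driving cars
    human_control: HumanControl

    def __post_init__(self) -> None:
        if not 1 <= self.automation_level <= 5:
            raise ValueError("automation_level must be between 1 and 5")


# Example classifications (illustrative values, not empirical findings).
spam_filter = ASApplication(
    "spam filter", automation_level=4, human_control=HumanControl.ON_THE_LOOP
)
tax_assessment = ASApplication(
    "fully automated tax assessment",
    automation_level=5,
    human_control=HumanControl.OFF_THE_LOOP,
)
```

Recording both dimensions separately matters because, as the next paragraphs argue, an application designed with humans on-the-loop can drift de facto towards off-the-loop operation.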

Algorithmic selections as part of internet-based applications are related to everyday human decisions in different ways. In most of the functional categories listed in Table 1, automated algorithmic selections are applied to augment and enhance everyday human decision-making but not to fully replace it. This is predominantly the case for algorithmic recommendations, filtering and scoring results. Nevertheless, it has to be considered that in many cases (e.g., credit scoring, predictions on recidivism, ranking of job candidates) it becomes increasingly problematic for those responsible to ignore or counteract algorithmic results in their decisions, in particular if these algorithmic outputs are accessible to others or to the public. Accordingly, AS applications that are aimed at enhancing human decisions can de facto evolve into systems where humans merely remain on-the-loop and will only intervene in exceptional cases.

Further, algorithmic selections vary strongly in their scope of potential consequences (social and economic risks). For instance, there is a significant difference between a simple algorithmic filtering concerning which post from a friend is shown in someone’s social media feed and a more meaningful and directly relevant algorithmic scoring of someone’s creditworthiness. Accounting for the case-specific scope and context of algorithmic selections is therefore highly relevant for appropriate policy conclusions. For instance, two technologically identical algorithms where one is applied for recommending books and the other for recommending medical treatments call for very different policies due to the disparity of risks of these automated algorithmic selections.

Algorithmic (co-)governance results in opportunities and risks. The advantages of algorithmic governance such as efficiency gains, speed, scalability and adaptability are compromised by risks ranging from bias, manipulation and privacy violations, to social discrimination, heteronomy and the abuse of market power (Latzer et al., 2016), or by efficiency-based (inaccurate decisions) and fairness-based objections (unfair decisions) in algorithmic governance (Zarsky, 2016).

In sum, while algorithms are increasingly active as tools and actors in governance regimes that affect many life domains on a daily basis, the relative importance of algorithmic governance is far from clear. The practice-related approach proposed here aids the empirical assessment and understanding of this significance of algorithmic governance.

A practice-related approach to everyday life

Everyday life as a field of research is rooted in various theoretical traditions (Adler, Adler, & Fontana, 1987), among other things in phenomenological sociology (Schütz, 2016), historical materialism (Heller, 1984) and De Certeau’s (1984) anthropology.

As for the area of inquiry, this paper takes a practice-related approach (Pink, 2012). Since the field lacks comprehensive empirical research that goes beyond individual services, this article suggests studying the significance of algorithmic governance for everyday life in a more inclusive manner. In order to derive an executable research design, however, it is necessary to analytically segment ‘everyday life’. We focus on four domains of everyday life that span central areas of everyday practice: (a) social and political orientation, (b) recreation, (c) commercial transactions, and (d) socialising. This categorisation is derived from a representative, country-wide CATI survey of internet use in Switzerland. While an infinite number of activities can be performed on the internet, a confirmatory factor analysis revealed four distinct internet usage factors that group the most important internet activities for Swiss internet users (see Büchi, Just, and Latzer, 2016 for an overview of the activities for each domain). Therefore, this categorisation lends itself to an analytical distinction between different life domains in which people engage in online activities and use AS applications in particular. It is important to note that these life domains are obviously closely interrelated and do not necessarily represent the categories in which individuals perceive their everyday lives. Although there is no standard conceptual framework for everyday life, Sztompka (2008), for example, points to its various defining traits, such as that everyday life events include relationships with other people, that they are repeated and not unique, have a temporal duration, and often happen non-reflexively, following internalised habits and routines.

In order to appropriately account for the increasing role of technology, research must go beyond human relationships as one defining characteristic of everyday life. The theory of the social or mediated construction of reality (Berger & Luckmann, 1967; Couldry & Hepp, 2016) is fruitful for the understanding of how social interactions and media technologies shape the perception of the social world. Berger and Luckmann (1967) argue that the social world is constructed through social interactions and underlying processes of reciprocal typification and interpretation of habitualised actions. In this meaningful process, a social world is gradually constructed whose habitualised actions provide orientation, make it possible to predict the actions of others and reduce uncertainty. This leads to an attitude that the world in common is known, a natural attitude of daily life (Schütz & Luckmann, 2003). Accordingly, the resources, interpretations and the common-sense knowledge of routinised practices in everyday life—which increasingly includes AS applications—are seemingly self-evident and remain unquestioned.

This paper particularly aims to expose what is generally left unquestioned and to propose a guideline for the assessment of perceptions and use of AS applications for a wide range of everyday practices in order to better understand their impact, associated risks, and the need for public policies. Willson (2017) emphasises that one of the concerns of studying the everyday is to make the invisible visible and to study the power relations and practices involved. AS applications are seamlessly integrated into the routines of everyday life through domestication (Silverstone, 1994)—the capacity and the process of appropriation—which renders them invisible. Algorithms operate at the level of the ‘technological unconscious’ (Thrift, 2005) in widely unseen and unknown ways (Beer, 2009). Consequently, the study of algorithms aims to reveal the technological unconscious and to understand how AS applications co-govern everyday online and offline activities. AS applications must be investigated in relation to online and offline alternatives to determine the relative significance of algorithmic governance for everyday life, for example by bearing in mind an individual’s media repertoire 4 (Hasebrink & Hepp, 2017). Thus far only a small body of empirical research on AS has emerged with regard to the everyday activities of orientation, recreation, commercial transactions and socialising.

Existing empirical results and research gaps

(a) The significance of algorithmic governance has received the most attention in research on social and political orientation. Search applications and news aggregators are understood as intermediaries (Bui, 2010; Newman, Fletcher, Kalogeropoulos, Levy, & Nielsen, 2018) between traditional mass media and individual news consumption. Empirical research suggests that algorithmic selection will become more important for information retrieval in the future (Newman et al., 2018; Shearer & Matsa, 2018). Accompanying these considerations are fears of personalised echo chambers (Sunstein, 2001) or filter bubbles (Pariser, 2011), leading to fragmented, biased perceptions of society (Dylko, 2016). However, recent empirical studies fail to show a coherent picture: there are clear patterns of algorithmically induced, homogeneous opinion networks (Bakshy, Messing, & Adamic, 2015; Del Vicario et al., 2016; Dylko et al., 2017), but other studies indicate more opinion diversity despite algorithmic selection and qualify the risk of echo chambers with empirical evidence (Barbera, Jost, Nagler, Tucker, & Bonneau, 2015; Dubois & Blank, 2018; Fletcher & Nielsen, 2017; Heatherly, Lu, & Lee, 2017; Helberger, Bodo, Zuiderveen Borgesius, Irion, & Bastian, 2017; Zuiderveen Borgesius et al., 2016).

(b) AS applications also increasingly shape daily recreation (i.e., entertainment and fitness). Recommendation applications have been shown to play a predominant role here. The main concerns are diminishing diversity (Nguyen, Hui, Harper, Terveen, & Konstan, 2014), the algorithmic shaping of culture (Beer, 2013; Hallinan & Striphas, 2016) and the social power of algorithms (Rieder, Matamoros-Fernandez, & Coromina, 2018). Again, there has been no clear empirical evidence for this hypothesis, but rather studies qualifying this risk (Nguyen et al., 2014; Nowak, 2016).

Further, wearables—networked devices equipped with sensors—have entered everyday life. Empirical studies investigate the perception, use and modes of self-tracking (Lupton, 2016; Rapp & Cena, 2016), and its social and institutional context (Gilmore, 2015). Such wearables have often been disregarded in critical algorithm studies, although they are an important way in which AS governs the perception of the self (Williamson, 2015) and everyday life in general.

(c) For commercial transactions, research has centred on recommender systems, studying the performance of algorithms (Ur Rehman, Hussain, & Hussain, 2013) or the implementation of new features (Hervas-Drane, 2015). Their impact on consumers is mostly studied by evaluating their perceived usefulness (Li & Karahanna, 2015). Furthermore, allocation algorithms in the form of online behavioural advertising have attracted attention (Boerman, Kruikemeier, & Zuiderveen Borgesius, 2017), revealing inconsistent results on users’ perceptions of personalised advertisements (McDonald & Cranor, 2010; Smit, Van Noort, & Voorveld, 2014; Ur, Leon, Cranor, Shay, & Wang, 2012).

(d) For socialising, the research focus is on how algorithms curate user interactions on social networking sites and dating platforms (Bucher, 2012; Hitsch, Hortaçsu & Ariely, 2010). These applications raise concerns like social distortion effects or the question of how social connections are adapting to an algorithmically controlled model (Eslami et al., 2015; Rader, 2017; Rader & Gray, 2015; Van Dijck, 2013). So far, there has been no empirical analysis to confirm the relevance of these risks.

Altogether, research on the impact of algorithmic governance on everyday life has produced a plethora of theoretical considerations and fragmented, application-specific empirical findings. To date there has been no comprehensive and systematic empirical investigation of the various central domains of everyday practices. However, generalising policy implications from studies on individual AS services (e.g., Facebook, Twitter or search engines) should be treated with caution. Moreover, existing studies focus on AS applications in relative isolation. Due to this narrow perspective, they are unable to evaluate the power of algorithmic governance in everyday life. Existing work has mostly taken a top-down approach, disregarding the perspective of users. Studies on user perceptions have predominantly relied on self-reported survey measures. While extensive qualitative studies (e.g., Bucher, 2017) offer the basis for a better scientific understanding of the social effects of AS applications, they do not allow generalisable statements at the population level. There is also a lack of empirical work with data on individuals’ actual internet use. To the best of our knowledge, there is no empirical study on the population level that uses tracking data on both mobile and desktop devices, a prerequisite to gain a comprehensive picture of individual internet use. Finally, there have been very few nationally representative studies on the use and perception of AS (e.g., Araujo et al., 2018; Fischer & Petersen, 2018). These existing empirical results do not provide a sound basis for policy-making in this area.

The following section proposes a methodological design that is suited to filling the research gaps identified above. It is designed with the objectives of providing a better understanding of how algorithms exert their power over people (Diakopoulos, 2015)—which essentially corresponds to our understanding of algorithmic governance—and of offering useful evidence-based insights for public policy deliberations regarding algorithmic governance and the policy choices for the governance of algorithms.

Measuring algorithmic governance from a user perspective

This section develops a theoretical model of the variables intended to measure the significance of algorithmic governance for everyday life and to form the basis for theory-driven empirical assessments. We then propose a mixed-methods approach to empirically determine the extent to which AS applications govern daily life, since purely theoretically derived risks may lead to premature policy recommendations.

Theoretical model of the significance of algorithmic governance in everyday life

To empirically grasp the significance of algorithmic governance for everyday life, we develop a theoretical model that accommodates the operationalisation of algorithmic governance and entails five variables that influence the potential and effectiveness of this particular type of governance: usage of AS applications, subjective significance assigned to them, awareness of AS, awareness of associated risks, and practices to cope with these risks.

Figure 1: Theoretical model of variables measuring the significance of algorithmic governance in everyday life.

First, in order to determine the governing potential of AS applications in everyday life, their usage (extent, frequency) must be measured, particularly in comparison to their online and offline counterparts. Their governing potential is also determined by whether and how these applications have changed people’s behaviour, for instance with regard to individual information seeking, listening to music, gaming, or dating.

Second, the subjective significance people attribute to these applications plays an important role in how AS applications affect everyday life. A substantial substitution of traditional online and offline alternatives by AS applications is a prerequisite if fears of AS-associated risks are to be justified. Assessing the significance that users assign to AS applications makes it possible to determine the accuracy of these theoretical estimations.

Third, it is essential to investigate how aware people are of the fact that algorithms operate in the services they use, and of the specific algorithmic modes of operation. Awareness of AS substantially affects the effectiveness and impact of algorithmic governance. A variety of risks is attributed to the use of AS applications (e.g., filter bubbles, diminishing diversity of content), and these risks are often directly associated with the algorithmic modes of operation. Accordingly, without awareness, users cannot accurately assess potential benefits and risks 5.

Fourth, the risks people associate with the AS applications they use must be examined. Algorithmic governance per se is a neutral concept, but it can involve risks that lead to stronger governing effects of AS applications, especially when awareness is low.

Fifth, the practices users apply to cope with the risks they perceive to be associated with AS applications must be investigated. From a user perspective, applying practices that run counter to companies’ strategies is the most viable way to exert agency. Following De Certeau (1984), algorithmic governance can be understood in terms of strategies and tactics: platforms that apply AS postulate their own delimited territory from which they manage power relationships with an exteriority—in this case users. These platforms apply ‘panoptic practices’: they observe, measure, and control, and consequently turn users into measurable types. These panoptic practices allow the platforms to create user classifications based on a user habitus that reflects their social disposition. Through them, AS applications co-govern users’ constructions of reality by mirroring their social dispositions in the form of scorings, recommendations, search results or advertisements. We consider user practices as tactics that form the counterpart of the strategies that companies or platforms apply. Accordingly, user practices are generally aimed at coping with the risks that companies induce through their data collection and analysis strategies. Such practices are discussed as ‘slow computing’ by Fraser and Kitchin (2017), a term that implies slowing down internet use and connectivity and adopting practices directed against data-grabbing infrastructures. These practices can be seen as complementary to other measures, such as empowering users by governing algorithms with, for instance, consumer policies that improve the protection of user data (Larsson, 2018).
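To make the operationalisation of these five variables more tangible, the following minimal sketch shows one possible way to structure a respondent-level record for such a study. It is not part of the original research design; all field names, scales and domain labels are illustrative assumptions.

```python
# A minimal, illustrative record for one respondent, covering the five model
# variables; field names and scales are assumptions, not the study's instruments.
from dataclasses import dataclass, field


@dataclass
class RespondentRecord:
    respondent_id: str
    # (1) Usage: share of daily online time spent on AS applications per life
    # domain, e.g., {"recreation": 0.4, "socialising": 0.1}.
    usage_share: dict = field(default_factory=dict)
    # (2) Subjective significance: self-rated importance of AS applications
    # relative to online/offline alternatives (e.g., a 1-5 rating per domain).
    subjective_significance: dict = field(default_factory=dict)
    # (3) Awareness of AS: does the respondent know that, e.g., search results
    # or feeds are algorithmically selected and personalised? (0-1 index)
    as_awareness: float = 0.0
    # (4) Awareness of associated risks (filter bubbles, diminished diversity,
    # surveillance), again as a simple index.
    risk_awareness: float = 0.0
    # (5) Coping practices (tactics in De Certeau's sense), e.g., ad blockers,
    # private browsing, deliberate 'slow computing'.
    coping_practices: list = field(default_factory=list)
```

A record of this kind could be filled from interview, survey and tracking data alike, which eases the triangulation of the three methods described below.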

The mixed-methods approach

Suitable assessments of risks related to AS applications and corresponding policy measures require the empirical measurement of the governance that AS applications exert in users’ everyday lives. To answer the call for taking algorithms’ ‘socio-technical assemblages’ (Kitchin, 2017) into account and investigating how users engage with AS applications in their lives, existing top-down approaches should be complemented by a user-centred perspective (Bucher, 2017).

Therefore, we propose a user-centred, mixed-methods approach to measuring the significance of AS applications, which is comprised of three research phases. Based on a literature review, (I) semi-structured qualitative interviews are to be conducted for each of the four domains of everyday practice. As these practices (e.g., newsgathering, dating) are not limited to internet use, the significance of AS applications must be considered in relation to alternative online and offline activities. This enlarged and contextualised perspective promises to provide an understanding of individuals’ life worlds and how AS applications are integrated within them. The qualitative interviews can provide in-depth information on individuals’ perceptions, opinions and interpretations regarding AS applications in the four life domains.

These qualitative interviews should form the basis for the quantitative empirical part, which we propose to consist of a representative online survey (II) in combination with representative passive metering (tracking) (III) of internet usage at the population level. The combination of self-reported survey measures and tracked internet use (passive metering) makes it possible to compare the tracked share of AS services used with self-reports of internet use, which can be systematically biased (Scharkow, 2016) or subject to social desirability effects. Further, the non-transparent, “black-box” nature of algorithms raises questions about users’ awareness of the mechanisms at play. When asking people about their experiences with algorithms, it must be kept in mind that their awareness of the existence of algorithms might be low and their statements could be biased accordingly. Therefore, measuring AS use by means of tracking data, in addition to the interview and survey data, is indispensable 6. This could, for instance, be done by installing tracking software that records internet use on the survey respondents’ mobile and desktop devices 7. Such software should, for instance, collect the websites they visit (URLs), the search terms they use and the time and duration of their visits.
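As an illustration of how tracked and self-reported use could be compared, consider the following minimal sketch. It assumes a hypothetical log format (one row per page visit, with respondent_id, host and duration_seconds columns) and a purely illustrative host-to-domain mapping; neither reflects the tracking software actually proposed or used by the authors.

```python
# A minimal sketch of aggregating passive-metering logs and comparing them
# with survey self-reports; all names and the host list are assumptions.
import pandas as pd

# Hypothetical mapping of visited hosts to AS-related life domains.
AS_DOMAINS = {
    "news.google.com": "social_political_orientation",
    "youtube.com": "recreation",
    "amazon.com": "commercial_transactions",
    "tinder.com": "socialising",
}


def tracked_as_share(log: pd.DataFrame) -> pd.Series:
    """Share of total browsing time spent on AS applications, per respondent."""
    is_as = log["host"].isin(AS_DOMAINS.keys())
    total = log.groupby("respondent_id")["duration_seconds"].sum()
    as_time = log[is_as].groupby("respondent_id")["duration_seconds"].sum()
    # Respondents without any AS visits get a share of 0 rather than NaN.
    return (as_time / total).fillna(0.0)


def self_report_gap(tracked: pd.Series, survey: pd.Series) -> pd.Series:
    """Difference between self-reported and tracked AS-use shares: a simple
    indicator of the reporting bias discussed by Scharkow (2016)."""
    return (survey - tracked).rename("self_report_bias")
```

Comparing the two measures per respondent makes systematic over- or under-reporting directly visible, which is precisely the advantage of combining survey and tracking data.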

All three methodological approaches lend themselves to the accomplishment of different goals and results, which are summarised in Table 2. Only in its entirety is this mixed-methods approach able to significantly contribute to closing existing research gaps with regard to the empirical understanding of algorithmic governance and the overall significance of AS applications in everyday life.

Table 2: Expected contributions of the three methods to the empirical assessment of algorithmic governance in everyday life

| Variable | Qualitative interviews with internet users | Quantitative survey with internet users | Passive metering of individual internet use |
| --- | --- | --- | --- |
| Usage of AS applications | Not primarily relevant; gather context data on circumstances of use | Determine frequency of use of offline alternatives | Determine frequency of use of online alternatives and AS applications |
| Subjective significance assigned to AS applications | Find reasons why AS applications are relevant; find out whether & how AS applications have changed behaviour | Quantify relevance of AS applications, online and offline alternatives for domains of everyday life | Not primarily relevant |
| User awareness of AS | Determine interviewees’ understanding of AS applications; use results for appropriate measure for awareness in survey | Quantitatively determine knowledge about / awareness of algorithms at population level | Not primarily relevant |
| User awareness of related risks | Expand existing list of risks; understand context to explain, interpret and contextualise survey data | Determine perceived importance of risks associated with AS applications | Not primarily relevant |
| User practices to cope with risks | Find practices that users apply to cope with AS / associated risks | Quantitatively determine relevance of strategies by constructing measure for coping practices | |
This mixed-methods approach allows for a re-assessment of the opportunities and risks of AS applications in the different life domains, which forms the basis for evidence-based public policy and governance of AS applications aiming at the democratic control of algorithmic power. The guideline that we propose is to be understood as an exemplary research design that has to be adapted to specific research questions 8.

Conclusions

In this paper we propose a guideline for both a theoretical understanding and an empirical measurement of algorithmic governance (= governance by algorithms) in everyday life. We argue that the assessment of algorithmic governance—a form of institutional steering by software—requires a nuanced theoretical understanding that differentiates between (a) different units of analysis, (b) intentional and unintentional governance effects, (c) public and private, human and nonhuman governing actors, (d) degrees of automation and of the remaining role of human actors in decision-making, as well as (e) the kinds of decisions that are taken by algorithms, their different contexts of application and scopes of risks. Further, such an assessment needs empirical evidence to measure the actual significance of the theoretically derived risks associated with governance by internet services that apply automated algorithmic selection in everyday life.

Our review of the algorithmic-governance literature illustrates the lack of empirical studies that take a user-centred perspective and go beyond single platforms or services. Such limited empirical analyses, in combination with purely theoretical considerations, may lead to the derivation of exaggerated risks and unrealistic policy-relevant conclusions. So far, there is no sufficient empirical basis to confirm the detrimental risks, and to justify the adventurous policy suggestions, that are occasionally associated with AS applications. Rather, recent attempts to empirically investigate these phenomena have tended to reduce the significance of risks like manipulation, bias, or discrimination.

We propose a mixed-methods, user-centred approach to make the significance of algorithmic governance in everyday life measurable and to provide a basis for more realistic, empirically grounded governance choices. We identified five variables—usage of AS, subjective significance of these services, awareness of AS, awareness of associated risks, and user practices—as relevant dimensions of inquiry for measuring the significance of algorithmic governance in everyday life from a user-centred perspective. The mixed-methods approach consists of qualitative interviews, a representative online survey and representative user tracking to empirically grasp the significance of algorithmic governance in four domains of everyday life—social and political orientation, recreation, commercial transactions, and socialising. This selection of affected life domains is derived from a representative, country-wide survey on internet usage.

Altogether, in the emerging field of critical algorithm studies, where empirical results are limited, contradictory or lacking, the guideline presented here permits a nuanced theoretical understanding of algorithmic governance and a more holistic and accurate measurement of the impact of governance by algorithms in everyday life. This combination of theoretical and evidence-based insights can form a profound basis for policy choices in the governance of algorithms.

References

Adler, P. A., Adler, P., & Fontana, A. (1987). Everyday life sociology. Annual Review of Sociology, 13, 217–235. doi:10.1146/annurev.so.13.080187.001245

Aneesh, A. (2009). Global labor: Algocratic modes of organization. Sociological Theory, 27(4), 347–370. doi:10.1111/j.1467-9558.2009.01352.x

Araujo, T., de Vreese, C., Helberger, N., Kruikemeier, S., van Weert, J., Bol, N., … Taylor, L. (2018, September 25). Automated decision-making fairness in an AI-driven world [Report]. Amsterdam: Digital Communication Methods Lab, RPA Communication, University of Amsterdam. Retrieved from http://www.digicomlab.eu/wp-content/uploads/2018/09/20180925_ADMbyAI.pdf

Bagloee, S. A., Tavana, M., Asadi, M., & Oliver, T. (2016). Autonomous vehicles. Journal of Modern Transportation, 24(4), 284–303. doi:10.1007/s40534-016-0117-3

Bakshy, E., Messing, S., & Adamic, L. A. (2015). Exposure to ideologically diverse news and opinion on Facebook. Science, 348(6239), 1130–1132. doi:10.1126/science.aaa1160

Barbera, P., Jost, J. T., Nagler, J., Tucker, J. A., & Bonneau, R. (2015). Tweeting from left to right. Psychological Science, 26(10), 1531–1542. doi:10.1177/0956797615594620

Beer, D. (2009). Power through the algorithm? Participatory web cultures and the technological unconscious. New Media & Society, 11(6), 985–1002. doi:10.1177/1461444809336551

Beer, D. (2013). Popular culture and new media: The politics of circulation. New York: Palgrave Macmillan. doi:10.1057/9781137270061

Beer, D. (2017). The social power of algorithms. Information, Communication & Society, 20(1), 1–13. doi:10.1080/1369118X.2016.1216147

Berger, P. L., & Luckmann, T. (1967). The social construction of reality. London, UK: Allen Lane.

Boerman, S. C., Kruikemeier, S., & Zuiderveen Borgesius, F. J. (2017). Online behavioral advertising. Journal of Advertising, 46(3), 363–376. doi:10.1080/00913367.2017.1339368

Bresnahan, T. (2010). General purpose technologies. In B. H. Hall & N. Rosenberg (Eds.), Handbook of the economics of innovation (pp. 761–791). Amsterdam: Elsevier. doi:10.1016/s0169-7218(10)02002-2

Bucher, T. (2012). Want to be on the top? Algorithmic power and the threat of invisibility on Facebook. New Media & Society, 14(7), 1164–1180. doi:10.1177/1461444812440159

Bucher, T. (2017). The algorithmic imaginary: Exploring the ordinary affects of Facebook algorithms. Information, Communication & Society, 20(1), 30–44. doi:10.1080/1369118x.2016.1154086

Bui, C. (2010). How online gatekeepers guard our view: News portals’ inclusion and ranking of media and events. Global Media Journal: American Edition, 9(16), 1–41.

Büchi, M., Just, N., & Latzer, M. (2016). Modeling the second-level digital divide. New Media & Society, 18(11), 2703–2722. doi:10.1177/1461444815604154

Citron, D. K., & Pasquale, F. A. (2014). The scored society. Washington Law Review, 89(1), 1–33. Available at http://hdl.handle.net/1773.1/1318

Constantiou, I. D., & Kallinikos, J. (2015). New games, new rules: Big data and the changing context of strategy. Journal of Information Technology, 30(1), 44–57. doi:10.1057/jit.2014.17

Couldry, N., & Hepp, A. (2016). The mediated construction of reality. Cambridge, UK: Polity Press.

Danaher, J. (2016). The Threat of Algocracy: Reality, Resistance, and Accommodation. Philosophy & Technology, 29(3), 245–268. doi:10.1007/s13347-015-0211-1

Danaher, J., Hogan, M. J., Noone, C., Kennedy, R., Behan, A., De Paor, A., … Shankar, K. (2017). Algorithmic governance. Big Data & Society, 4(2), 1–21. doi:10.1177/2053951717726554

De Certeau, M. (1984). The practice of everyday life. Berkeley, CA: University of California Press.

Del Vicario, M., Bessi, A., Zollo, F., Petroni, F., Scala, A., Caldarelli, G., … Quattrociocchi, W. (2016). The spreading of misinformation online. Proceedings of the National Academy of Sciences of the United States of America, 113(3), 554–559. doi:10.1073/pnas.1517441113

Diakopoulos, N. (2015). Algorithmic accountability: Journalistic investigation of computational power structures. Digital Journalism, 3(3), 398–415. doi:10.1080/21670811.2014.976411

Dubois, E., & Blank, G. (2018). The echo chamber is overstated: The moderating effect of political interest and diverse media. Information, Communication & Society, 21(5), 729–745. doi:10.1080/1369118x.2018.1428656

Dylko, I. B. (2016). How technology encourages political selective exposure. Communication Theory, 26(4), 389–409. doi:10.1111/comt.12089

Dylko, I. B., Dolgov, I., Hoffman, W., Eckhart, N., Molina, M., & Aaziz, O. (2017). Impact of customizability technology on political polarization. Journal of Information Technology & Politics, 15(1), 19–33. doi:10.1080/19331681.2017.1354243

Engel, C. (2001). A constitutional framework for private governance. German Law Journal, 5(3), 197–236. doi:10.1017/S2071832200012402

Eslami, M., Rickman, A., Vaccaro, K., Aleyasen, A., Vuong, A., Karahalios, K., … Sandvig, C. (2015). “I always assumed that I wasn’t really that close to [her]”: Reasoning about invisible algorithms in news feeds. CHI '15 Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (pp. 153–162). Seoul: Human Factors in Computing Systems. doi:10.1145/2702123.2702556

Fischer, S., & Petersen, T. (2018, May). Was Deutschland über Algorithmen weiss und denkt [What Germany knows and thinks about algorithms]. Retrieved from https://www.bertelsmann-stiftung.de/fileadmin/files/BSt/Publikationen/GrauePublikationen/Was_die_Deutschen_ueber_Algorithmen_denken.pdf

Fletcher, R., & Nielsen, R. K. (2017). Are news audiences increasingly fragmented? A Cross-National Comparative Analysis of Cross-Platform News Audience Fragmentation and Duplication. Journal of Communication, 67(4), 476–498. doi:10.1111/jcom.12315

Flick, U. (2009). An introduction to qualitative research (4th edition). London, UK: SAGE.

Fraser, A., & Kitchin, R. (2017). Slow computing [Working Paper No. 36]. Maynooth: The Programmable City, Maynooth University. Retrieved from http://progcity.maynoothuniversity.ie/2017/12/new-paper-slow-computing/

Gillespie, T. (2014). The relevance of algorithms. In T. Gillespie, P. Boczkowski, & K. Foot (Eds.), Media technologies: Essays on communication, materiality, and society (pp. 167–194). Cambridge, MA: The MIT Press. doi:10.7551/mitpress/9780262525374.003.0009

Gilmore, J. N. (2015). Everywear: The quantified self and wearable fitness technologies. New Media & Society, 18(11), 2524–2539. doi:10.1177/1461444815588768

Glaser, B. G., & Strauss, A. L. (2009). The discovery of grounded theory (4th paperback printing). New Brunswick, NJ: Aldine.

Hallinan, B., & Striphas, T. (2016). Recommended for you: The Netflix prize and the production of algorithmic culture. New Media & Society, 18(1), 117–137. doi:10.1177/1461444814538646

Hasebrink, U., & Hepp, A. (2017). How to research cross-media practices? Investigating media repertoires and media ensembles. Convergence: The International Journal of Research into New Media Technologies, 23(4), 362–377. doi:10.1177/1354856517700384

Heatherly, K. A., Lu, Y., & Lee, J. K. (2017). Filtering out the other side? New Media & Society, 19(8), 1271–1289. doi:10.1177/1461444816634677

Helberger, N., Bodo, B., Zuiderveen Borgesius, F. J., Irion, K., & Bastian, M. B. (2017). Personalised communication. Retrieved from http://personalised-communication.net/the-project

Heller, A. (1984). Everyday life. London, UK: Routledge.

Hervas-Drane, A. (2015). Recommended for you: The effect of word of mouth on sales concentration. International Journal of Research in Marketing, 32(2), 207–218. doi:10.1016/j.ijresmar.2015.02.005

Hitsch, G. J., Hortaçsu, A., & Ariely, D. (2010). Matching and sorting in online dating. The American Economic Review, 100(1), 130–163. doi:10.1257/aer.100.1.130

Jürgens, P., Jungherr, A., & Schoen, H. (2011). Small worlds with a difference: New gatekeepers and the filtering of political information on Twitter. Proceedings of the 3rd International Web Science Conference (pp. 21–26). Koblenz, Germany: Web Science. doi:10.1145/2527031.2527034

Jürgens, P., Stark, B., & Magin, M. (2015). Messung von Personalisierung in computervermittelter Kommunikation [Measuring personalization in computer-mediated communication]. In A. Maireder, J. Ausserhofer, C. Schumann, & M. Taddicken (Eds.), Digitale Methoden in der Kommunikationswissenschaft (pp. 251–270). Berlin, Germany: GESIS.

Jürgens, P., Stark, B., & Magin, M. (2019). Two half-truths make a whole? On bias in self-reports and tracking data. Social Science Computer Review. Advance online publication. doi:10.1177/0894439319831643

Just, N., & Latzer, M. (2017). Governance by algorithms: reality construction by algorithmic selection on the Internet. Media, Culture & Society, 39(2), 238–258. doi:10.1177/0163443716643157

Kitchin, R. (2017). Thinking critically about and researching algorithms. Information, Communication & Society, 20(1), 14–29. doi:10.1080/1369118x.2016.1154087

Kowalski, R. (1979). Algorithm = logic + control. Communications of the ACM, 22(7), 424–436. doi:10.1145/359131.359136

Kurzweil, R. (2005). The singularity is near: When humans transcend biology. New York: Viking.

Larsson, S. (2018). Algorithmic governance and the need for consumer empowerment in data-driven markets. Internet Policy Review, 7(2). doi:10.14763/2018.2.791

Larus, J., Hankin, C., Carson, S. G., Christen, M., Crafa, S., Grau, O., … Werthner, H. (2018, March 21). When computers decide: European Recommendations on Machine-Learned Automated Decision Making [Technical report]. Zurich; New York: Informatics Europe; ACM. doi:10.1145/3185595. Retrieved from http://www.informatics-europe.org/news/435-ethics_adm.html

Latzer, M., Hollnbuchner, K., Just, N., & Saurwein, F. (2016). The economics of algorithmic selection on the Internet. In J. Bauer & M. Latzer (Eds.), Handbook on the economics of the Internet (pp. 395–425). Cheltenham, UK: Edward Elgar. doi:10.4337/9780857939852.00028

Li, S. S., & Karahanna, E. (2015). Online recommendation systems in a B2C e-commerce context: A review and future directions. Journal of the Association for Information Systems, 16(2), 72–107. doi:10.17705/1jais.00389

Lupton, D. (2016). The diverse domains of quantified selves: self-tracking modes and dataveillance. Economy and Society, 45(1), 101–122. doi:10.1080/03085147.2016.1143726

Mager, A. (2012). Algorithmic ideology: How capitalist society shapes search engines. Information, Communication & Society, 15(5), 769–787. doi:10.1080/1369118x.2012.676056

Martini, M., & Nink, D. (2017). Wenn Maschinen entscheiden… – vollautomatisierte Verwaltungsverfahren und der Persönlichkeitsschutz [When machines decide… – fully automated administrative proceedings and protection of personality]. Neue Zeitschrift für Verwaltungsrecht – Extra, 10(36), 1–14.

McDonald, A. M., & Cranor, L. F. (2010). Beliefs and behaviors: Internet users’ understanding of behavioral advertising. TPRC 2010. Retrieved from http://ssrn.com/abstract=1989092

Napoli, P. M. (2014). Automated media: An institutional theory perspective on algorithmic media production and consumption. Communication Theory, 24(3), 340–360. doi:10.1111/comt.12039

Newman, N., Fletcher, R., Kalogeropoulos, A., Levy, D. A. L., & Nielsen, R. K. (2018). Reuters Institute digital news report 2018. Retrieved from http://media.digitalnewsreport.org/wp-content/uploads/2018/06/digital-news-report-2018.pdf?x89475

Nguyen, T. T., Hui, P.-M., Harper, F. M., Terveen, L., & Konstan, J. A. (2014). Exploring the filter bubble. Proceedings of the 23rd International Conference on World Wide Web (pp. 677–686). New York: ACM. doi:10.1145/2566486.2568012

Nowak, R. (2016). The multiplicity of iPod cultures in everyday life: uncovering the performative hybridity of the iconic object. Journal for Cultural Research, 20(2), 189–203. doi:10.1080/14797585.2016.1144384

Pariser, E. (2011). The filter bubble. London, UK: Penguin Books.

Pink, S. (2012). Situating everyday life. Los Angeles, CA: Sage.

Rader, E. (2017). Examining user surprise as a symptom of algorithmic filtering. International Journal of Human-Computer Studies, 98, 72–88. doi:10.1016/j.ijhcs.2016.10.005

Rader, E., & Gray, R. (2015). Understanding user beliefs about algorithmic curation in the Facebook news feed. Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (pp. 173–182). New York: ACM. doi:10.1145/2702123.2702174

Rammert, W. (2008). Where the action is: Distributed agency between humans, machines, and programs [Working Paper No. TUTS-WP-4-2008]. Berlin: The Technical University of Berlin, Technology Studies. Retrieved from http://www.ts.tu-berlin.de/fileadmin/fg226/TUTS/TUTS_WP_4_2008.pdf

Rapp, A., & Cena, F. (2016). Personal informatics for everyday life. International Journal of Human-Computer Studies, 94, 1–17. doi:10.1016/j.ijhcs.2016.05.006

Rieder, B., Matamoros-Fernandez, A., & Coromina, O. (2018). From ranking algorithms to ‘ranking cultures’. Convergence: The International Journal of research into New Media Technologies, 24(1), 50–68. doi:10.1177/1354856517736982

Sandvig, C., Hamilton, K., Karahalios, K., & Langbort, C. (2014). Auditing algorithms: Research methods for detecting discrimination on internet platforms. Paper presented at “Data and Discrimination: Converting critical concerns into productive inquiry”, a preconference at the 64th Annual Meeting of the International Communication Association, Seattle, WA. Available at https://pdfs.semanticscholar.org/b722/7cbd34766655dea10d0437ab10df3a127396.pdf

Scharkow, M. (2016). The accuracy of self-reported internet use: A validation study using client log data. Communication Methods and Measures, 10(1), 13–27. doi:10.1080/19312458.2015.1118446

Schneider, V., & Kenis, P. (1996). Verteilte Kontrolle: Institutionelle Steuerung in modernen Gesellschaften [Spread control: Institutional management in modern societies]. In V. Schneider & P. Kenis (Eds.), Organisation und Netzwerk. Institutionelle Steuerung in Wirtschaft und Politik (pp. 9–43). Frankfurt am Main: Campus. doi:10.5771/9783845205694-169

Schütz, A. (2016). Der sinnhafte Aufbau der sozialen Welt [The meaningful construction of the social world] (7th ed.). Frankfurt am Main: Suhrkamp.

Schütz, A., & Luckmann, T. (2003). Strukturen der Lebenswelt [Structures of the lifeworld]. Stuttgart: UVK.

Shearer, E., & Matsa, K. E. (2018, September 10). News use across social media platforms 2018. Retrieved from http://www.journalism.org/wp-content/uploads/sites/8/2018/09/PJ_2018.09.10_social-media-news_FINAL.pdf

Silverstone, R. (1994). Television and everyday life. London, UK: Routledge.

Smit, E. G., Van Noort, G., & Voorveld, H. A. M. (2014). Understanding online behavioral advertising. Computers in Human Behavior, 32, 15–22. doi:10.1016/j.chb.2013.11.008

Sunstein, C. R. (2001). Echo chambers: Bush v. Gore, impeachment, and beyond. Princeton, NJ: Princeton University Press.

Sztompka, P. (2008). The focus on everyday life: A new turn in sociology. European Review, 16(1), 1–15. doi:10.1017/S1062798708000045

Thrift, N. J. (2005). Knowing capitalism. London, UK: Sage.

Ur, B., Leon, P. G., Cranor, L. F., Shay, R., & Wang, Y. (2012). Smart, useful, scary, creepy: Perceptions of online behavioral advertising. Proceedings of the Eighth Symposium on Usable Privacy and Security. Washington, DC. doi:10.1145/2335356.2335362

Ur Rehman, Z., Hussain, F. K., & Hussain, O. K. (2013). Frequency-based similarity measure for multimedia recommender systems. Multimedia Systems, 19(2), 95–102. doi:10.1007/s00530-012-0281-1

Van Dijck, J. (2013). The culture of connectivity. Oxford, UK: Oxford University Press.

Williamson, B. (2015). Algorithmic skin: Health-tracking technologies, personal analytics and the biopedagogies of digitized health and physical education. Sport, Education and Society, 20(1), 133–151. doi:10.1080/13573322.2014.962494

Willson, M. (2017). Algorithms (and the) everyday. Information, Communication & Society, 20(1), 137–150. doi:10.1080/1369118X.2016.1200645

Yeung, K. (2018). Algorithmic regulation: A critical interrogation. Regulation & Governance, 12, 505–523. doi:10.1111/rego.12158

Zarsky, T. (2016). The trouble with algorithmic decisions: An analytic road map to examine efficiency and fairness in automated and opaque decision making. Science, Technology & Human Values, 41(1), 118–132. doi:10.1177/0162243915605575

Zuiderveen Borgesius, F. J., Trilling, D., Möller, J., Bodó, B., de Vreese, C. H., & Helberger, N. (2016). Should we worry about filter bubbles? Internet Policy Review, 5(1), 1–16. doi:10.14763/2016.1.401

Acknowledgements:

The authors would like to thank Natascha Just and two reviewers for their valuable comments on an earlier draft of this article.

Footnotes

1. This notion is related to Rammert’s (2008) concept of “distributed agency between humans, machines, and programs”.

2. e.g., simple alphabetical sorting.

3. e.g., personalised recommender systems in e-commerce using reinforcement learning.

4. Consideration of individuals’ entire media repertoires, comprising online and offline sources, is vital because, for instance, the effects of using AS services like Facebook for news purposes vary with the person’s use of other news channels or other (offline) sources.

5. Awareness is not to be misunderstood as knowledge of specific algorithmic modes of operation here. Our model suggests that, for instance, without being aware that Google search results are personalised, individuals cannot grasp the concept of filter bubbles. They are therefore unable to understand this risk and, where appropriate, to adapt their behaviour accordingly.

6. Tracking data can also be subject to different biases (e.g., self-selection biases), which must be considered when applying these novel methods (see e.g., Jürgens, Stark, & Magin, 2019).

7. When tracking individuals’ internet use, it is vital to be very mindful of potential effects on participants’ privacy. Specific study designs have to be approved by the responsible ethics committee, and defining measures to protect individuals’ privacy is crucial.

8. This guideline – combining the proposed theoretical model and mixed-methods research design – has already been applied by the authors in Switzerland. Results from qualitative internet user interviews and a representative online-survey combined with internet use tracking on a mobile and desktop device for a representative sample of the Swiss population are forthcoming.

The recursivity of internet governance research

Introduction

Technological visionary Stewart Brand once remarked that “[o]nce a new technology rolls over you, if you’re not part of the steamroller, you’re part of the road” (1987, p. 9). About forty years after the somewhat muddled invention of the internet and right after the 25th birthday of the web, it seems that these technologies have quite thoroughly rolled over contemporary societies. But instead of simply shaping our societies from the outside, the internet’s “message” – to speak with McLuhan – has become increasingly difficult to read. While the mythos of cyberspace as a new frontier has long faded, common terms like “internet culture” or even “online shopping” signal that there is some kind of elsewhere in the clouds behind our screens. But the stories about election tampering, privacy breaches, hate speech, or algorithmic bias that dominate the headlines are just one reminder that issues still commonly prefixed with “digital”, “internet”, “online”, or similar terms have fully arrived at the centre of collective life. Elsewhere is everywhere. Trends like datafication or platformisation have seeped deeply into the fabric of societies and when scholars discuss questions of internet governance or platform governance, they know all too well that their findings and arguments pertain to social and cultural organisation in ways that go far beyond the regulation of yet another business sector.

It therefore comes as no surprise that not only the subject areas covered by conferences like the one organised by the Association of Internet Researchers every year since 2000 are proliferating, but also that the stakes have grown in proportion. As technologies push deeper into public and private spheres, they encounter not only appropriations and resistances, but complex forms of negotiation that evolve as effects become more clearly visible. Steamroller and road, to stick with Brand’s metaphor, blend into a myriad of relations operating at different scales: locally, nationally, supra-nationally, and globally.

The centrality of the internet in general and online platforms in particular means that the number and variety of actors seeking to gain economic or political advantages continues to grow, pulling matters of governance to the forefront. While the papers assembled in this special issue do not fall into the scope of “classic” internet governance research focused on governing bodies such as the ICANN or W3C and the ways they make and implement decisions, they indeed highlight the many instances of shaping and steering that follow from the penetration of digital technologies into the social fabric. The term “governance” raises two sets of questions: how societies are governed by technologies and how these technologies should be governed in return (cf. Gillespie, 2018). These questions are complicated by the fact that technologies and services are deeply caught up in local circumstances: massive platforms like Facebook or YouTube host billions of users and are home to a vast diversity of topics and practices; data collection and decision-making involving computational mechanisms have become common practices in many different processes in business and government—processes that raise different questions and potentially require different kinds of policy response. Global infrastructures reconfigure local practices, but these local practices complicate global solutions to the ensuing problems.

This knotty constellation poses significant challenges to both the descriptive side of governance research concerned with analysis of the status quo and the prescriptive side that involves thinking about policy and, in extremis, regulation. The papers assembled here do not neatly fit into this distinction, however. Instead, they highlight the complicated interdependence between is and ought, to speak with Hume, and indicate a need for recursive dialogue between different perspectives that goes beyond individual contributions. In this sense, this special issue maps the larger field of debate emerging around governance research in terms of perspectives or entry points rather than disciplines or clearly demarcated problem areas. Three clusters emerge:

First, a normative perspective that testifies and responds to the destabilisation of normativity that characterises societies, which are challenged on several levels at the same time. This involves an examination of the possibilities and underpinnings of critique: how can we evaluate our current governance and political perspectives in normative terms and thereby lay the ground for thinking about adaptations or alternative arrangements?

Second, a conceptual perspective concerned with the intellectual apparatus we use to address and to render our current situation intelligible. The authors in this group indeed argue that conceptual reconfigurations are necessary to capture many of the emerging fault lines, such as the need for transnational policy-making and the complex relationships between groups of stakeholders.

Third, an empirical perspective asks how these more abstract concerns can be connected with an understanding of, and evidence on, actual practices and effects, and how these affect the lived realities of individuals and social groups. The diversity of situations indeed challenges and complicates theoretical discussion, but it also plays a crucial role in shedding light on situations that may be opaque and counterintuitive.

We will discuss each of these perspectives in greater detail, but suffice it to say that an adequate understanding of contemporary societies depends on their recursive interrelation: normative engagement serves as moral grounding, conceptual work sharpens our analytical grids, and empirical evidence connects us to the actual realities of lived lives. Internet researchers are tasked with the responsibility to advance along all three lines, to increase our knowledge of the world we live in and to open pathways for policy responses that are up to the considerable challenges we face.

Normative perspectives: governing the data-subject

Research into the governance of platform-based, data-fuelled, and algorithmically driven societies is obviously informed by economic and political theories. Over the past few years, several economic scholars have critically interrogated orthodox political models, such as capitalism and liberal democracy, to find out whether they still apply to societies where offline activities—private or public—are increasingly scarce (Zuboff, 2019; Jacobs and Mazzucato, 2016; Mayer-Schönberger and Ramge, 2018). Wavering between “surveillance capitalism” and “algocracy”, markets can be seen adapting to the advent of data as a new resource and predictive analytics as significant tools that turn users into “data-subjects”. But the study of data-subjects cannot easily be delineated as the study of “citizens” or “consumers” fitting the contextual parameters of “democracies” and “markets”. Normative perspectives cover economic and political principles but also pertain to moral principles—norms and values; the study of data-subjects, in other words, also involves the fundamental rights of human beings participating in “democracies” and “markets”.

Norms and principles are often invisible, hidden in the ideological folds of a social fabric woven together by an invisible technological apparatus that barely leaves traces upon its imprints. It is important to bare the normative perspectives by which the internet is governed; it is equally important to articulate and discuss normative perspectives on the basis of which the internet should be governed—what we called above the complicated interdependence between is and ought. Contributing perspectives from sociology, political economy and philosophy, the authors of the first three articles in this special issue each highlight a different aspect of “governing the data subject”: as an economic resource, as a citizen in a democracy, and as an autonomous individual. All three papers take a broader view of data-subjects as the centre of data practices and try to rethink the normative frameworks by which they are governed.

Nick Couldry and Ulises Mejias propose the political-historical perspective of “data colonialism” to dissect the new social order that has been the result of rapid datafication linked to extractive capitalism. Data colonialism, they argue, is about more than capitalism; it is “human life itself that is being appropriated … as part of a reconstruction of the very spaces of social experience.” Colonialism should thus not be understood metaphorically, and neither should data simply be seen as the “new oil”; data colonialism is a new phase in the history of colonialist expansion—a phase that is characterised by a massive transformation of humanity’s socio-legal and economic order through the appropriation of human life itself by means of data extraction. The data-subject emerging from this perspective is at once personal and relational. Data are not “personal” in the sense that they are “about” our individual selves, but they emerge as constructions of data points—“data doubles”—out of a myriad of data sets. Hence, privacy is important for individuals and collectives: data doubles are projections of the social and thus contribute to reshaping social realities. Couldry and Mejias conclude that existing legal approaches and policy frameworks are profoundly inadequate when it comes to governing datafied societies. Instead, they propose a radical reframing of regulatory discourse that calls into question the direction and rationale of a social order resting on exploitative data extraction.

Starting from the rapid shift from broadly optimistic attitudes concerning the relationship between digitalisation and democracy to broadly negative ones, Jeanette Hofmann argues that the fundamental relationship between media and democratic life should be (re)considered in greater conceptual depth to form a starting point for a critical perspective on governance. Instead of merely describing the “effect” or “influence” of media, she makes a distinction between medium and form that highlights the “alterability” of technologies and the normatively charged struggles over architecture and design that ensue. This perspective allows for a reading of the internet’s history through the lens of shifting and competing ideological models, through “different modes of social coordination and political regulation, which became inscribed as operational principles and standards into the network architecture and as such again subject of political interpretation”. While concepts like “connective action” (Bennett and Segerberg, 2012) emphasise the distributed character of the internet, Hofmann argues that the contemporary emergence of digital platforms is still lacking a clearer appreciation in terms of its consequences for democratic agency. Only a deeper conceptual understanding of the treacherous waters of mediated democracy would allow for a programmatic appropriation of alterability and the realisation of “unrealised alternatives”.

Daniel Susser, Beate Roessler, and Helen Nissenbaum move from broad conceptualisations of digital societies to a more fine-grained level of analysis that deals with a phenomenon that is often mentioned when discussing potential harms but is rarely examined in greater depth: the notion of (online) manipulation. Starting from the specific possibilities for steering and controlling that digital platforms incorporate, they argue that core liberal values—autonomy in particular—are under threat when cognitive biases and data profiles can be easily exploited through mechanisms that often remain hidden. But the gist and merit of this paper lie not so much in highlighting these increasingly well-known phenomena as in submitting them to a normative assessment that connects to existing policy discussions, proposing concrete measures for “preventing and mitigating manipulative online practices”. The authors thus invest precisely in what we mean by recursivity: the connection between descriptive and prescriptive modes as well as the tighter coupling between academic research and government policy.

Conceptual perspectives: digital governance between policy-making and politics

Gravitating between what is and what ought are conceptual perspectives of internet governance: what needs to be done to get us from current (inadequate) legal and policy frameworks to frames that work? The papers in this section critically assess foundational notions such as markets, consumers, companies, stakeholders, agreements, and contracts—notions on which much of our governance structures rest but which have become porous, to say the least. If “classic” governance structures no longer seem to apply to a platform-based, data-fuelled, and algorithmically driven society, how can they be reconceptualised? Such reframing and retooling exercises inevitably raise questions of policy-making and political manoeuvring. Not everything that can be theoretically reconceived is politically conceivable. A useful political reality-check is to compare different national governance frameworks and show how policy-making for the internet is an intensely (geo)political affair. The conceptual perspectives in this section range from the very broad to the very specific: they interrogate the foundations of platform power and how power is distributed between state, market, and civil society actors (Van Dijck et al.; Gorwa); they compare (trans)national initiatives of data governance (Meese et al.) and probe the geopolitical implications of compliance with regulatory standards (Meese et al.; Tusikov); and finally, they study how the digital rendering of consumer-facing contracts can be both a threat and an opportunity (Cornelius).

José van Dijck, Thomas Poell, and David Nieborg probe the very assumptions underlying recent decisions by the European Commission to impose substantial fines upon Alphabet-Google for anti-competitive behaviour. They argue that the concepts of consumer welfare, internet companies, and markets—concepts on which many regulatory frameworks are staked—no longer suffice to capture the complex interrelational and dynamic nature of online activities. Instead, they propose expansive concepts such as citizen well-being, an integrated platform ecosystem, and societal platform infrastructures to inform policy-making efforts. But more than a theoretical proposal, their “reframing power” exercise hints at the need for recursive internet governance research. Researchers should help policy-makers in defining the dynamics of platform power by providing a set of analytical tools that help explain the complex relationships between platforms and their responsible actors. Armed with detailed insights from national and comparative case studies, policy-makers and politicians can help articulate regulatory principles at the EU level.

Conceptual rethinking is obviously not restricted to formal regulatory frameworks, but also extends into informal governance arrangements. Robert Gorwa, in his contribution to this special issue, reviews the growing number of non-binding governance initiatives that have been proposed by platform companies over the past few years, partly in response to mounting societal concerns over user-generated content moderation. The question “who is responsible for a fair, open, democratic digital society across jurisdictions?” is picked up not just by (transnational) bodies like the EU, but by a variety of actors in multi-stakeholder organisations. Companies like Facebook and Google seek out provisional alliances to create “oversight bodies” and other forms of informal governance. However, as Gorwa shows, the power relationships in the “governance triangle” of companies, states, and civil society actors in these informal arrangements remain unbalanced because civil society actors are notoriously underrepresented. The poignant issue is responsibility rather than liability: we are all responsible for a fair, open, and democratic society, but “we” is not an easy-to-define collective concept. Detailed analyses of big platform companies’ “spheres of influence” through informal arrangements—in conjunction with in-depth analyses of formal regulatory toolboxes, as suggested in the previous article—are needed to map the complex power relationships between actors with varying degrees of power. Once again, recursivity is the magic word: researchers inform policy-makers who inform researchers.

James Meese, Punit Jagasia, and James Arvanitakis, in their article “Citizen or consumer?”, continue the reframing exercise of this section by comparing data access rights between the EU and Australia. They ask whether the two continents’ regulatory frameworks—the General Data Protection Regulation (GDPR) versus the Consumer Data Right (CDR)—are grounded in different ideological concepts of citizen versus consumer. The authors show the deep interpenetration of policy-making and politics. In Europe, this results in the GDPR’s strong emphasis on protecting fundamental rights of citizens, such as privacy and data protection against (ab)use by companies and governments. In Australia, the CDR betrays clear signs of a neoliberal approach which grants individual rights in the context of markets. This concrete comparison between Europe’s and Australia’s regulatory efforts on data protection signals the importance of including ideological and (geo)political premises in a conceptual approach to governance. Across the globe, we are witnessing the clash between market-oriented approaches and approaches that start from the fundamental rights and freedoms of citizens. Whereas the GDPR, in the eyes of some Europeans, does not go far enough in the second direction, for Australians this would mean a major straying from the first.

A second comparative perspective is provided by Natasha Tusikov, who closely examines the effects of US regulation on China’s internet governance in the area of intellectual property rights protection. A detailed analysis of policy and regulatory documents illuminates the power choreography between American private actors, American state regulators, and Chinese platform companies; the US state exerts coercive power on Chinese actors to comply with American standards, as illustrated by Alibaba adopting US-drafted rules to prohibit the sale of counterfeit products via their Taobao marketplace. Tusikov’s careful reconstruction of the “compliance-plus” process demonstrates that the US dominance in transnational platform governance continues a long history of setting rules and standards to benefit its own economic interests and those of its industry actors. Such analysis of reciprocal fine-tuning between regulation, policy-making, and politics is extremely relevant when trying to understand the recent trade war between the US and China, a clash between two giants seeking to secure their economic, political, and national security interests through internet governance. The world of geopolitics is no longer external to issues of internet governance; on the contrary, disputes concerning internet governance are at the core of geopolitical conflicts.

Kristin Cornelius’ contribution finally approaches the intersection of technology and governance from a very different angle. Looking at the explosive proliferation of “consumer-facing standard form contracts” such as Terms of Service – contracts we constantly submit to yet hardly ever read – she argues that the “digital form” of these documents merits closer attention. Taking a conceptual perspective grounded in information science and document-engineering, she shows not only how the technical form that implements a legal relationship has a normative dimension in the sense that it structures power relations, but also that this technicity is an opportunity: emphasising elements such as standardisation, stabilisation, and machine-readability would not necessarily change the content of these (zombie) contracts, but allow for different forms of social embedding that keep them from coming to haunt the users they apply to. Looking at contracts as documents having specific material forms instead of limiting them to their abstract legal meaning shows how crucial conceptual frames have become for making sense of a situation where technical principles shake established lines of reasoning.

Empirical perspectives: data uses and algorithmic governance in everyday practices

The last section of this special issue brings us from the higher spheres of politics and policy-making to the concrete everyday practices in which “data subjects” play a central role. The three papers listed in this section scrutinise empirical cases concerning actual data uses which, in turn, serve to inform researchers and policy-makers intent on reshaping internet governance. Whether adopting the notion of “citizens” or “consumers”, these articles ground their research perspectives in empirical observations and interrogations of data subjects—the way they are steered by algorithms and how they respond to certain manipulations of online behaviour. Moreover, all three papers seek to tie concrete, empirical research to normative and conceptual perspectives: from what is to what ought and what could be. Whether the cases concern “citizen scoring” practices at the local levels (Dencik et al.), revolts of YouTube users against the platform’s algorithmic advertising and moderation practices (Kumar), or the broader question of how to study real-world effects of algorithmic governance in different areas of everyday life (Latzer and Festic)—they all come back to the recursivity of research: how to make sense of current algorithmic and data practices in light of the wider political and economic transformations of internet governance?

Lina Dencik, Joanna Redden, Arne Hintz, and Harry Warne provide an insightful analysis of data analytics uses in UK public services. The authors draw on a large number of databases and interviews to investigate what they call “citizen scoring practices”: the categorisation and assessment of data (e.g., financial data, welfare data, health data and school attendance) to predict citizen behaviour at both the individual and the population level. Significantly, Dencik et al. show how the interpretation of data analytics is the result of negotiation between the various stakeholders in data-driven governance, from the private companies that provide the data analytics tools to the public sector workers who handle them. While the use of data analytics in public service environments is steadily increasing, there appears to be no shared understanding of what constitutes appropriate technologies and standards. And yet, such a “golden view” seems to inform the various data-driven analytics practices at the local level. One important goal of this paper is to understand the heterogeneity of local data-based practices against the backdrop of a regulatory vacuum and, quite often, an austerity-driven policy regime. Hopefully, studies like this one provide a much-needed empirical basis for articulating policies that address broader concerns of data use with regard to discrimination, stigmatisation, and profiling.

The article by Sangeet Kumar moves to a very different arena, one where data-driven governance has been at the centre from the very beginning: analysing the so-called “Adpocalypse”, an advertiser revolt against YouTube in 2017, he shows how decisions concerning the monetisation of videos have complemented practices such as content moderation or deplatforming as instruments of governance. More subtle in nature, they may nonetheless have a large effect on the overall composition of the platform by steering money flows away from conflictual yet important subjects, transforming YouTube—and the web more broadly—from “a plural, free and heterogenous space” into a “sanitised, family-friendly and scrubbed version” of itself. The paper ends with a call for wider stakeholder participation to put the inevitable decisions on rules and modes of governance on wider bases. Given the outsized role YouTube has come to play in the emerging “hybrid media system” (Chadwick, 2013), one could rightfully ask whether platforms of this size should be regarded as “public utilities”, as Van Dijck et al. suggest in their conceptual reframing.

While Michael Latzer and Noemi Festic’s contribution does not rely on empirical research itself, it is very much concerned with the question of how empirical evidence on complicated and far-reaching concepts like algorithmic governance can be collected in the first place. While theoretical models proliferate and efforts for algorithmic accountability gain traction, the actual integration of the various mechanisms for selection, ranking, and recommendation that users regularly encounter into the practices of everyday life remains elusive. Qualitative studies have given us some idea concerning effects and imaginaries on the user side, but “generalisable statements at the population level” are severely lacking. Such a broad ground-level view is, however, essential for informed policy choices. The authors therefore propose a programmatic framework and mixed-methods approach for studying the actual consequences of algorithmic governance on concrete user practices, in the hope of filling a research gap that continues to blur the picture, despite the heightened attention the topic has recently received. The methodologies used to produce empirical insights thus constitute yet another area where internet researchers have a crucial social role to play, despite the significant challenges they face.

Conclusion

It may still be too early to omit the terms “digital”, “online”, or “internet” as meaningful adjectives when discussing the transformation of societies in which data, algorithms, and platforms play a central and crucial role. Obviously, we are no longer restricted by a predominantly technological discourse when discussing the internet and its governance—like we were in the 1990s when most researchers saw the steamroller coming, but did not quite know how to gauge its power and envision its implications. And, perhaps on a hopeful note, we have not yet become part of the “road” which the steamroller threatens to flatten. However, it takes a conscious and protracted effort for researchers to understand the “internet” and the “digital” as transformative forces before they become part of the road we walk on. And that is what makes the recursivity of governance and policy research so relevant at precisely this moment in time.

When studying the effects of data-informed practices first-hand, internet researchers can detect patterns in how society is governed by platforms; in turn, their insights and conceptual probes might inform regulators and policy-makers to adjust and tweak existing policies. There is a clear knowledge gap, an asymmetry of information that affects not only researchers as they study complicated actor constellations and powerful companies, but also democratic institutions themselves. Governments may be able to wield considerable power in specific situations, in particular around market competition, but they are nonetheless increasingly dependent on the multifaceted input of a wide range of disciplines. Internet researchers may be rightfully sceptical about engaging with institutions that are clearly imperfect; but our current situation requires that we accept our responsibilities as knowledge producers and push the insights we develop beyond the boundaries of our disciplines and institutions. The recursive nature of normative, conceptual, and empirical approaches hopefully encourages collectives of researchers and policy-makers to cooperate in governance design.

References

Bennett, W. L., & Segerberg, A. (2012). The Logic of Connective Action. Information, Communication & Society, 15(5), 739–768. doi:10.1080/1369118X.2012.670661

Brand, S. (1987). The Media Lab: Inventing the Future at MIT. New York: Viking; Penguin.

Chadwick, A. (2013). The Hybrid Media System. Oxford; New York: Oxford University Press.

Gillespie, T. (2018). Custodians of the Internet: Platforms, content moderation, and the hidden decisions that shape social media. New Haven: Yale University Press.

Jacobs, M., & Mazzucato, M. (2016). Rethinking Capitalism: Economics and Policy for Sustainable and Inclusive Growth. London: Wiley.

Mayer-Schönberger, V., & Ramge, T. (2018). Reinventing Capitalism in the Age of Big Data. New York: Basic Books.

Zuboff, S. (2019). The Age of Surveillance Capitalism. The Fight for a Human Future at the New Frontier of Power. New York: PublicAffairs.

New perspectives on ethics and the laws of artificial intelligence

Introduction

With the growing dissemination of ‘Big Data’ and new computing techniques, technology has evolved rapidly, and increasingly intelligent algorithms have become a major resource for innovation and business models.

This new context, based on the concepts of Web 3.0, the internet of things and artificial intelligence, depends on continuous interaction between intelligent devices, sensors and people. It generates a huge amount of data that is produced, stored and processed, changing our daily life in various respects (Magrani, 2017).

The increasing connectivity and symbiotic interaction among these agents 1 bring a significant challenge for the rule of law and contemporary ethics, demanding deep reflection on morality, governance and regulation.

What role should intelligent things play in our society? Do machines have morality? What legal liability regime should we adopt for damages arising from increasingly advanced artificial intelligence (AI)? Which ethical guidelines should we adopt to orient AI’s development? In this paper we discuss the main normative and ethical challenges posed by the advancement of artificial intelligence.

Technology is not neutral: Agency and morality of things

Peter-Paul Verbeek, in his work Moralizing Technology: Understanding and Designing the Morality of Things, aims to broaden the scope of ethics to better accommodate the technological age, and in doing so reveals the inseparable nature of humanity and technology. Following Verbeek, technologies can be considered “moral mediators” that shape the way we perceive and interact with the world and thus reveal and guide possible behaviours. Since every technology affects the way in which we perceive and interact with the world, and even the way we think, no technology is morally neutral – it mediates our lives (Verbeek, 2011).

Technical artifacts, as explained by the theorist Peter Kroes, can be understood as man-made Things (objects) that have a function and a plan of use. They are products obtained through technological action, the term designating the actions we take daily to solve practical problems, including those related to our desires and needs. Technical artifacts require rules of use to be observed, as well as parameters concerning the roles of individuals and social institutions in relation to them and their use (Vermaas, Kroes, van de Poel, Franssen, & Houkes, 2011).

As specific objects (Things) with their own characteristics, technical artifacts have a clear function and usage plan. Moreover, they are subject to evaluation as to whether they are good or bad and whether they work or not. The function and the plan of use are thus of great importance in characterising a technical artifact. These two characteristics are intimately connected with the goals that the individuals who created the object pursue with it, so that it does not stray from its intended purposes (Vermaas et al., 2011).

Given this inseparability, questions about the morality of human objectives and actions extend to the morality of technical artifacts (Vermaas et al., 2011). Technology can be used to change the world around us, and individuals have goals – be they private and/or social – that can be achieved with the help of these technical artifacts and technologies. Considering that the objectives sought by humans when creating a technical artifact are not separate from the characteristics of the object itself, we can conclude that technical artifacts have an intrinsically moral character.

Alongside technical artifacts, which range from the simplest objects with little capacity for interaction or influence to more technologically complex ones, we have sociotechnical systems: networks that connect humans and things, and which therefore possess greater capacity for interaction and unpredictability (Latour, 2001).

For regulatory analysis, this concept is even more fundamental (Vermaas et al., 2011), precisely because its complexity, embodied in a conglomerate of ‘actants’ (in Bruno Latour’s conception of actor-network theory), gives sociotechnical systems even less predictable consequences than those generated by technical artifacts. In addition, sociotechnical systems make it harder to prevent unintended consequences and to hold agents liable in cases of harm, since technological action, reflected in the sociotechnical system, is the sum of the actants’ actions, entangled in the network in an intra-relation (Barad, 2003).

Technical artifacts and sociotechnical systems: Entangled in intra-relation

To illustrate the difference between the concepts of technical artifact and sociotechnical system, we can think of the former as an airplane and the latter as the complex aviation system. The sociotechnical system is formed by the set of interrelated agents (human and non-human actants: things, institutions, etc.) that work together to achieve a given goal. The materiality and effects of a sociotechnical system depend on the sum of the agency of each actant. However, there are parameters for how the system should be used, which means that these systems have pre-defined operational processes and can be affected by regulatory laws and policies.

Thus, when a tragic accident involving an airplane occurs, it is necessary to analyse what was in the sphere of control and influence of each actor and each technical artifact in the sociotechnical network. Quite possibly we will observe a very complex and symbiotic relationship between the components that led to this fateful result (Saraiva, 2011). Moreover, this result is often unpredictable, owing to the autonomy of a system whose agency is diffused and distributed among all its components (actants).

These complex systems lead us to debate the liability and ethics of technical artifacts and sociotechnical systems. Issues such as the liability of developers and the existence of morality in non-human agents – with a focus here on technological objects – need a response or, at least, reflections that contribute to the debate in the public sphere. 2

Bruno Latour’s theory makes progress in confronting and discarding the formal binary division between humans and non-humans, but it places objects with different complexities and values at the same level. From a legal and regulatory point of view, it is therefore justifiable to assign different statuses to technical artifacts and sociotechnical systems according to their capacity for agency and influence, endowing them with different moral standing and levels of liability. It is necessary, then, to distinguish the influence and importance that each thing has in the network and, above all, in the public sphere (Latour, 2001).

Hello world: Creating unpredictable machines

For this analysis, we will focus on specific things and technologies, namely advanced algorithms with machine learning and robots equipped with artificial intelligence (AI), considering that they are technical artifacts (Things) attached to sociotechnical systems with a greater potential for autonomy (based largely on the processing of ‘Big Data’) and unpredictability.

While technical artifacts such as a chair or a glass are artifacts “domesticated” by humans, i.e., more predictable in their influence and agency, intelligent algorithms and robots are still non-domesticated technologies: our history of interaction with them has not yet allowed us to foresee most of the risks, let alone control or eliminate them.

Colin Allen and Wendell Wallach (Wallach & Allen, 2008) argue that as intelligent Things, like robots, 3 become more autonomous and assume more responsibility, they must be programmed with moral decision-making skills for our own safety.

Corroborating this thesis, Peter-Paul Verbeek, in dealing with the morality of Things, argues that as machines now operate more frequently in open social environments, such as connected public spheres, it becomes increasingly important to design a type of functional morality that is sensitive to ethically relevant characteristics and applicable to the intended situations (Verbeek, 2011).

A good example is Microsoft’s robot Tay, which helps to illustrate the effects that a non-human element can have on society. In 2016, Microsoft launched an artificial intelligence programme named Tay. Endowed with deep learning 4 abilities, the robot shaped its worldview through online interactions with other people, producing authentic expressions based on them. The experience, however, proved disastrous, and the company had to deactivate the tool in less than 24 hours because of the worrying results it produced.

The goal was for Tay to interact with human users on Twitter, learning human patterns of conversation. In less than a day, however, the chatbot was generating utterly inappropriate comments, including racist, sexist and antisemitic publications.

In 2015, a similar case occurred with Google Photos, a programme that also learned from users in order to tag photos automatically. Its results were likewise outright discriminatory: it was noticed, for example, that the bot was labelling Black people as gorillas.

The implementation of programmes capable of learning and adapting to perform functions that relate to people creates new ethical and regulatory challenges, since it increases the possibility of obtaining results other than those intended, or even totally unexpected ones. In addition, these results can cause harm to other actors, as with the discriminatory output generated by Tay and Google Photos.

In particular, the use of artificial intelligence tools that interact through social media requires reflection on the ethical requirements that must accompany the development of this type of technology. As previously argued, these mechanisms also act as agents in society and end up influencing the environment around them, even though they are non-human elements. It is not, therefore, a matter of thinking only about the “use” and “repair” of new technologies, but mainly about the proper ethical orientation for their development (Miller, Wolf, & Grodzinsky, 2017).

Microsoft argued that Tay’s malfunctioning was the result of an attack by users who exploited a vulnerability in the programme. For Miller et al., however, this does not exempt the company from the responsibility of considering possible harmful consequences of this type of software. For the authors, the fact that the creators did not expect this outcome is part of the very unpredictability of such systems (Miller et al., 2017).

The attempt to make artificial intelligence systems increasingly adaptable and capable of acting in a human-like manner makes their behaviour less predictable. They begin to act not only as tools that perform pre-established functions in the various fields in which they are employed, but also to develop their own ways of acting, impacting the world in ways that are less determinable or controllable by human agents. It is worth emphasising that algorithms can adjust themselves to give rise to new algorithms and new ways of accomplishing their tasks (Domingos, 2015), so that how a result was achieved may be difficult to explain even for the programmers who created the algorithm (Doneda & Almeida, 2016).

Moreover, the more adaptable artificial intelligence programmes become, the more unpredictable their actions are, bringing new risks. This makes it necessary for the developers of such programmes to be more aware of the ethical and legal responsibilities involved in this activity.

The Code of Ethics of the Association for Computing Machinery (Miller et al., 2017) indicates that professionals in the field, regardless of prior legal regulation, should develop “comprehensive and thorough assessments of computer systems and their impacts, including the analysis of possible risks”.

In addition, dedicated monitoring is needed to verify the actions taken by such a programme, especially in the early stages of implementation. In the Tay case, for instance, the developers should have monitored the bot’s behaviour intensely within the first 24 hours of its launch, which is not known to have occurred (Miller et al., 2017). The logic should be to prevent possible damages and monitor in advance, rather than to remediate losses afterwards, especially when they may be unforeseeable.

To limit the possibility of negative consequences, software developers must recognise potentially dangerous and unpredictable programmes and restrict their interaction with the public until they have been intensively tested in a controlled environment. After this stage, consumers should be informed about the vulnerabilities of a programme that is essentially unpredictable, and about the possible consequences of unexpected behaviour (Miller et al., 2017).

The use of technology, and of artificial intelligence in particular, can have unpredictable and uncontrollable consequences, so that often the only solution is to deactivate the system. The increase in the autonomy and complexity of technical artifacts is thus evident. They are endowed with increased agency, capable of influencing others and of being influenced within the sociotechnical system in significant ways, often composing even more autonomous and unpredictable networks.

Although no artificial intelligence system is yet completely autonomous, given the pace of technological development it is possible to create machines that will make decisions in an increasingly autonomous way, which raises questions about who would be responsible for the results of their actions and for eventual damages caused to others (Vladeck, 2014).

Application of norms: Mapping legal possibilities

The ability to amass experience and learn from massive data processing, coupled with the ability to act independently and make choices autonomously, can be considered preconditions for legal liability. However, since artificial intelligence is not recognised today as a subject of law, it cannot be held individually liable for the potential damage it may cause.

In this sense, according to Article 12 of the United Nations Convention on the Use of Electronic Communications in International Contracts, the person (natural or legal) on whose behalf a programme was created must, ultimately, be liable for any action generated by the machine. This reasoning is based on the notion that a tool has no will of its own (Čerka et al., 2015).

On the other hand, in the case of damage caused by the acts of an artifact with artificial intelligence, another approach draws an analogy with the liability of parents for the actions of their children, or of animal owners for damage caused by their animals. From this perspective, responsibility for the acts of such an artifact could fall not only on its producer or programmers, but also on the users responsible for its “training” (Čerka et al., 2015).

Another possibility is a model that focuses on the ability of programmers or users to foresee the potential for such damages to occur. According to this model, the programmer or user can be held liable if they acted with intent or were negligent with regard to a foreseeable result (Hallevy, 2010).

George S. Cole identifies predetermined types of civil liability: (i) product liability, (ii) service liability, (iii) malpractice, and (iv) negligence. The basic elements for the applicability of product liability would be: (i) the AI should be a “product”; (ii) the defendant must be an AI seller; (iii) the AI must reach the injured party without substantive change; (iv) the AI must be defective; and (v) the defect must be the source of the damage. The author maintains that the standards, in this case, should be set by the professional community. Still, as the field develops, for Cole the negligence model would be the most applicable, although it can be difficult to implement, especially when some errors are unpredictable or even unavoidable (Cole, 1990).

To date, courts worldwide have not formulated a clear definition of the duty of care involved in creating AIs whose breach would give rise to liability in negligence. Such a model will depend on standards set by the professional community, but also on clearer guidelines from legislation and case law.

The distinction between a negligence rule and a strict liability rule may have different impacts on the treatment of the subject, and especially on the level of precaution imposed in relation to the victim or in relation to the one who develops the AI.

Strict liability creates a significant incentive for the offender to act diligently in order to reduce the expected costs of harm. In the economic model of strict liability, the offender is liable even if he adopts a high level of precaution. This does not mean that there is no interest in adopting cautious behaviour: there is a level of precaution at which the offender, under strict liability, will avert the occurrence of damage. Whenever the cost of adopting that level of precaution is lower than the expected cost of damages, it is, from an economic point of view, desirable to adopt it (Shavell, 2004). But even if the offender behaves diligently, a victim who suffers damage will be compensated, which favours the position of the victim (Magrani, Viola, & Silva, 2019).

The negligence rule paints a completely different picture. Since the offender is liable only when at fault, if he behaves diligently the burden of the injury necessarily falls on the victim, even if the damage is produced by a potentially dangerous activity. The incentive for victims to adopt precautions is therefore greater, because they will bear any loss they suffer (Magrani, Viola, & Silva, 2019).
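
To make the contrast concrete, the following minimal sketch compares the injurer’s private costs under the two rules. All numbers (precaution costs, expected harm, the due-care standard) are hypothetical and serve only to illustrate the standard economic model of accidents (Shavell, 2004).

```python
# Stylised figures for the economic model of accidents (Shavell, 2004):
# precaution level -> (cost of precaution, expected harm to the victim).
# All numbers are hypothetical, chosen only to make the incentives visible.
SCENARIOS = {"none": (0, 100), "due care": (20, 40), "high": (50, 30)}
BELOW_DUE_CARE = {"none"}  # precaution levels a court would deem negligent

for level, (cost, harm) in SCENARIOS.items():
    # Strict liability: the injurer bears the precaution cost plus all harm.
    strict = cost + harm
    # Negligence: the injurer pays for harm only when below the due-care standard.
    negligence = cost + (harm if level in BELOW_DUE_CARE else 0)
    print(f"{level:>8}: pays {strict} (strict) vs {negligence} (negligence); "
          f"victim bears {strict - negligence}")
```

On these figures the injurer minimises his own cost at “due care” under both rules (60 under strict liability, 20 under negligence), but under the negligence rule the residual expected harm of 40 remains with the victim, which is exactly the asymmetry described above.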

Should an artificial intelligence cause damage through intent or negligence, a manufacturing defect, or a design failure resulting from blameworthy programming, existing liability rules would most often point to the “fault” of its creators. However, it is often not easy to know how these programmes reach their conclusions or why they lead to unexpected and possibly unpleasant consequences. This harmful potential is especially dangerous in artificial intelligence programmes that rely on machine learning, and especially deep learning mechanisms, in which the very nature of the software involves developing actions that are not predictable and that are determined only by the processing of all the information with which the programme has had contact. Existing laws are not adequate to guarantee fair regulation for the upcoming artificial intelligence context.

The table below, produced in a UNESCO study (UNESCO, 2017), contains important parameters that help us think about these issues while trying to identify the different agencies involved.

Table 1. From UNESCO (2017)

| Decision by robot | Human involvement | Technology | Responsibility | Regulation |
| --- | --- | --- | --- | --- |
| Made out of a finite set of options, according to preset strict criteria | Criteria implemented in a legal framework | Machine only: deterministic algorithms/robots | Robot’s producer | Legal (standards, national or international legislation) |
| Out of a range of options, with room for flexibility, according to a preset policy | Decision delegated to robot | Machine only: AI-based algorithms, cognitive robots | Designer, manufacturer, seller, user | Codes of practice both for engineers and for users; precautionary principle |
| Decisions made through human-machine interaction | Human controls robot’s decisions | Ability for a human to take control over the robot in cases where the robot’s actions can cause serious harm or death | Human beings | Moral |

Although the proposed structure is quite simple and gives us important insights, its implementation in terms of assigning responsibility and regulating usage is complex and challenging for scientists and engineers, policymakers and ethicists, and ultimately it may not be sufficient to provide a fair and adequate response.

How to deal with autonomous robots: Insufficient norms and the problem of ‘distributed irresponsibility’

Scientists from different areas worry that conferring this autonomous “thinking” ability on machines will necessarily give them the ability to act contrary to the rules they are given (Pagallo, 2013). Hence the importance of taking into consideration and investigating the spheres of control and influence of designers and other agents during the creation and functional development of technical artifacts (Vladeck, 2014). 5

Often, during the design phase, the consequences are indeterminate because they depend partly on the actions of agents and factors other than the designers. Also, since making a decision can be a complex process, it may be difficult even for a human to explain it. It may be difficult, further, to prove that the product containing the AI was defective, and especially that the defect already existed at the time of its production (Čerka et al., 2015).

Since the behaviour of an advanced AI is not totally predictable, and results from the interaction between the several human and non-human agents that make up the sociotechnical system, and even from self-learning processes, it can be difficult to determine the causal nexus 6 between the damage caused and the action of a human being or legal entity. 7

Under today’s legal framework, this can lead to a situation of “distributed irresponsibility” (the name used in the present work for the possible effect of failing to identify a causal nexus between an agent’s conduct and the damage caused) among the different actors involved in the process. This will occur mainly when the damage transpires within a complex sociotechnical system, in which the liability of the intelligent thing itself, or of a natural or legal person, will not be obvious. 8

‘With a little help from my friends’: Designing ethical frameworks to guide the laws of AI

When dealing with artificial intelligence, it is essential for the research community and academia to promote an extensive debate about the ethical guidelines that should guide the construction of these intelligent machines.

This segment of scientific research is growing strongly, and the need to establish a regulatory framework for this type of technology has been highlighted by several initiatives, some of which are mentioned in this section.

In April 2019 the European Commission published the document “Ethics guidelines for trustworthy AI”. According to the guidelines, trustworthy AI should be: “(i) lawful - respecting all applicable laws and regulations; (ii) ethical - respecting ethical principles and values; and (iii) robust - from a technical perspective” (HLEG AI, 2019).

The guidelines put forward a set of seven key requirements that AI systems should meet in order to be deemed trustworthy. According to the document, a specific assessment list (hereunder) aims to help verify the application of each of the key requirements:

  • Human agency and oversight: AI systems should empower human beings, allowing them to make informed decisions and fostering their fundamental rights. At the same time, proper oversight mechanisms need to be ensured, which can be achieved through human-in-the-loop, human-on-the-loop, and human-in-command approaches;
  • Technical robustness and safety: AI systems need to be resilient and secure. They need to be safe, ensuring a fallback plan in case something goes wrong, as well as being accurate, reliable and reproducible. That is the only way to ensure that unintentional harm can also be minimised and prevented;
  • Privacy and data governance: besides ensuring full respect for privacy and data protection, adequate data governance mechanisms must also be ensured, taking into account the quality and integrity of the data, and ensuring legitimised access to data;
  • Transparency: the data, system and AI business models should be transparent. Traceability mechanisms can help achieve this. Moreover, AI systems and their decisions should be explained in a manner adapted to the stakeholder concerned. Humans need to be aware that they are interacting with an AI system, and must be informed of the system’s capabilities and limitations;
  • Diversity, non-discrimination and fairness 9: unfair (algorithmic) bias must be avoided, as it could have multiple negative implications, from the marginalisation of vulnerable groups to the exacerbation of prejudice and discrimination. Fostering diversity, AI systems should be accessible to all, regardless of any disability, and involve relevant stakeholders throughout their entire life cycle;
  • Societal and environmental well-being: AI systems should benefit all human beings, including future generations. It must hence be ensured that they are sustainable and environmentally friendly. Moreover, they should take into account the environment, including other living beings, and their social and societal impact should be carefully considered;
  • Accountability: mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes. Auditability, which enables the assessment of algorithms, data and design processes, plays a key role therein, especially in critical applications. Moreover, adequate and accessible redress should be ensured.

(HLEG AI, 2019)
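
As a purely hypothetical illustration of how such an assessment list might be operationalised, the sketch below encodes the seven requirements as a simple checklist and flags the ones a given system has not yet satisfied. The requirement names come from the guidelines (HLEG AI, 2019); the data structure, function names and example answers are invented here and do not represent an official HLEG AI tool.

```python
# The seven key requirements for trustworthy AI (HLEG AI, 2019).
REQUIREMENTS = [
    "Human agency and oversight",
    "Technical robustness and safety",
    "Privacy and data governance",
    "Transparency",
    "Diversity, non-discrimination and fairness",
    "Societal and environmental well-being",
    "Accountability",
]

def assess(system_name, answers):
    """Return the requirements the assessed system does not yet meet."""
    gaps = [req for req in REQUIREMENTS if not answers.get(req, False)]
    return {"system": system_name, "meets_all": not gaps, "gaps": gaps}

# Example: a system whose documentation covers oversight and transparency only.
report = assess("triage-bot", {
    "Human agency and oversight": True,
    "Transparency": True,
})
print(report["meets_all"])  # False
print(report["gaps"])       # the five remaining requirements
```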

Similar to this well-grounded initiative, many countries, companies and professional communities are publishing guidelines for AI with analogous values and principles, intending to secure the benefits and diminish the risks involved in AI development. In that sense, it is worth mentioning the recent and important initiatives coming from:

  1. Future of Life Institute – Asilomar AI Principles;
  2. Berkman Klein Center;
  3. Institute of Electrical and Electronics Engineers (IEEE);
  4. Centre for the Study of Existential Risk;
  5. K&L Gates Endowment for Ethics;
  6. Center for Human-Compatible AI;
  7. Machine Intelligence Research Institute;
  8. USC Center for AI in Society;
  9. Leverhulme Centre for the Future of Intelligence;
  10. Partnership on AI;
  11. Future of Humanity Institute;
  12. AI Austin;
  13. OpenAI;
  14. Foundation for Responsible Robotics;
  15. Data & Society (New York, US);
  16. World Economic Forum’s Council on the Future of AI and Robotics;
  17. AI Now Initiative;
  18. AI100.

Despite the great advances in the ethical guidelines designed by the initiatives above, with their analogous values and principles, one of the most complex discussions pervading the various guidelines being elaborated relates to the question of AI’s autonomy.

The different degrees of autonomy allotted to machines must be considered, determining what degree of autonomy is reasonable and where substantial human control should be maintained. The different levels of intelligence and autonomy that certain technical artifacts may have must directly influence the ethical and legal considerations about them.

Robot rights: Autonomy and e-personhood

On 16 February 2017, the European Parliament issued a resolution with recommendations to the European Commission on civil law rules on robotics (2015/2103(INL)). The document advocates the creation of a European agency for robotics and artificial intelligence to provide the necessary technical, ethical and regulatory expertise. The European Parliament also proposed the introduction of a specific legal status for smart robots, as well as the creation of an insurance system and a compensatory fund, 10 with the aim of creating a protection system for the use of intelligent machines.

Regarding the legal status that could be given to these agents, the resolution uses the expression “electronic person” or “e-person”. In addition, in view of the discrepancy between ethics and technology, the European proposition rightly states that dignity, from a deontological perspective, must be at the centre of a new digital ethics.

The attribution of a legal status to intelligent robots, as designed in the resolution, is intended as one possible solution to the legal challenges that will arise as intelligent Things gain autonomy. The European Parliament’s report defines “intelligent robots” as those whose autonomy is established by their interconnectivity with the environment and their ability to modify their actions according to changes in it.

Building on this discussion, the Israeli researcher Karni Chagal-Feferkorn analyses robot autonomy to help us differentiate the potential for liability in each case. For Chagal-Feferkorn, resolving the liability issue requires thinking about different levels of robot autonomy (Chagal-Feferkorn, 2018). Nevertheless, she is aware that, given the complexity of artificial intelligence systems, such a classification is difficult to implement, since autonomy is not binary.

Two possible metrics for assessing autonomy are the machine’s freedom of action with respect to the human being and its capacity to replace human action. Such metrics are branched and complex, with several possible sub-analyses, and, according to Chagal-Feferkorn, these tests should also consider the specific stage of the machine’s decision-making process (Chagal-Feferkorn, 2019).

To illustrate, Chagal-Feferkorn designed the following table, with metrics reflecting the possibility for machines to substitute for humans in complex tasks and the decision-making capacity of the machine (Chagal-Feferkorn, 2019). The closer machines come to a “robot-doctor” stage, the more reasonable it would be to attribute new forms of accountability, liability, rights or even an electronic personhood.

Table 2. From Chagal-Feferkorn (2019).

| | Roomba robot | Autopilot | Autonomous vehicle | Robo-doctor |
| --- | --- | --- | --- | --- |
| Success rates not measurable? | | | | |
| Responsible for more than two OODA loop stages? | | | + | + |
| Independently selects type of info to collect? | | | ? | + |
| Independently selects sources of info to collect from? | | | | + |
| Dynamic nature of sources of info? | | | | + |
| Replaces professionals in complex fields? | | ? | ? | + |
| Life and death nature of decisions? | | + | + | + |
| Real time decisions required? | | + | + | ? |

One criterion used by Chagal-Feferkorn is the OODA [observe-orient-decide-act] cycle. 11 Since the analysis of autonomy is complex, Chagal-Feferkorn states that we should observe the characteristics of different decision-making systems. These systems manifest themselves in four different stages, according to the OODA cycle, which affect the different justifications for liability concerning machines. The four stages are: (i) Observe: collect current information from all available sources; (ii) Orient: analyse the information collected and use it to update the system’s model of reality; (iii) Decide: determine the course of action; (iv) Act: implement the decision.
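
To make the four stages tangible, here is a minimal sketch of a decision-making loop organised along OODA lines. The class, the toy sensor and the braking rule are all hypothetical; the point is only to show how the stages map onto distinct steps that a machine may or may not perform without human intervention.

```python
# A toy agent structured around the OODA (observe-orient-decide-act) cycle.
class OODAAgent:
    def __init__(self):
        self.world_model = {}  # the agent's internal picture of reality

    def observe(self, sources):
        # (i) Observe: collect current information from all available sources.
        return {name: read() for name, read in sources.items()}

    def orient(self, observations):
        # (ii) Orient: analyse the information and update the model of reality.
        self.world_model.update(observations)

    def decide(self):
        # (iii) Decide: choose a course of action from the updated model.
        distance = self.world_model.get("obstacle_distance", float("inf"))
        return "brake" if distance < 5 else "cruise"

    def act(self, action):
        # (iv) Act: implement the decision.
        print(f"executing: {action}")

    def run_cycle(self, sources):
        self.orient(self.observe(sources))
        self.act(self.decide())

agent = OODAAgent()
agent.run_cycle({"obstacle_distance": lambda: 3})  # prints "executing: brake"
```

The more of these stages a system performs on its own, the higher its autonomy in Chagal-Feferkorn’s terms.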

Considering the stages of the OODA cycle used by Chagal-Feferkorn, the more the characteristics of a system are analogous to traditional products/things, the greater the possibility of embedding it in the logic of consumer law. Advanced robots and algorithms, however, because of their specific characteristics, might be classified differently from traditional consumer products and therefore need a differentiated treatment and liability perspective.

The parameters for assigning responsibility in accordance with consumer law are defined and precise. However, as the complexity of systems increases, as in the case of the ‘doctor robots’ discussed as a specific example in the study, the scenarios and justifications for assigning responsibility come to depend on a number of factors. The doctor-robot example corresponds to the last stage of autonomy considered by Chagal-Feferkorn, in which reasoning algorithms are programmed to be capable of replacing human beings in highly complex activities, such as medical diagnosis and surgery.

For the degree of autonomy-based responsibility to be measured, one should consider the size of the parameter matrix that the algorithm weighs before the final decision and how far that decision was decisive for the damaging outcome. One must also consider that the more OODA stages a system is able to operate, the less predictable its decisions are to the manufacturer (Magrani, Viola, & Silva, 2019). 12
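
A crude way to picture this is to count the stages a system handles on its own, as in the hypothetical scoring sketch below; the stage sets used as examples are invented here, not taken from Chagal-Feferkorn’s paper.

```python
# Grading autonomy by the number of OODA stages a system performs unaided.
OODA_STAGES = ("observe", "orient", "decide", "act")

def autonomy_score(autonomous_stages):
    # The more stages the machine covers without a human, the less
    # predictable its decisions are to the manufacturer.
    return sum(stage in autonomous_stages for stage in OODA_STAGES)

print(autonomy_score({"observe"}))                             # 1, e.g., a sensor
print(autonomy_score({"observe", "orient"}))                   # 2, e.g., a navigation aid
print(autonomy_score({"observe", "orient", "decide", "act"}))  # 4, e.g., a robo-doctor
```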

In the case of the robot doctor, for instance, it is up to the machine to decide to what extent it should consider the patient’s medical history, and the more independent of human action these decisions are, the more remote human responsibility becomes. Conversely, the machine could be programmed to consult a human being whenever its degree of certainty for a decision falls below a certain level, although establishing such parameters would also imply an increase in the manufacturer’s responsibility (and should itself be based on a deontological matrix). The machine’s limits of action will be determinant for the attribution of responsibility (Magrani, Viola, & Silva, 2019).
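
The following minimal sketch shows what such a fallback rule might look like in code. The threshold value, the names and the stubbed diagnostic model are hypothetical; the point is that choosing the threshold is itself a design decision for which the manufacturer answers.

```python
CONFIDENCE_THRESHOLD = 0.90  # set by the manufacturer; a design responsibility in itself

def diagnose(case):
    # Stand-in for a diagnostic model: returns (recommendation, confidence).
    return ("treatment A", 0.72)

def robo_doctor(case, ask_human):
    recommendation, confidence = diagnose(case)
    if confidence < CONFIDENCE_THRESHOLD:
        # Below the threshold the decision is escalated to a human,
        # narrowing the machine's autonomous sphere of action.
        return ask_human(case, recommendation, confidence)
    return recommendation

result = robo_doctor(
    {"patient": "example"},
    ask_human=lambda case, rec, conf: f"human review of '{rec}' (confidence {conf:.0%})",
)
print(result)  # human review of 'treatment A' (confidence 72%)
```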

Although technology has not yet produced robots with sufficient autonomy to completely replace human beings in complex tasks, as in the case of doctor robots, if that moment arrives we should have theoretical mechanisms to implement this type of attribution of responsibility without provoking chilling effects on technological innovation.

For the time being, and according to consumer-law logic, responsibility should be attributed to the manufacturer. Nevertheless, if robots reach greater independence from humans, fulfilling all four OODA stages, the aforementioned logic of accountability along the consumer chain may no longer be applicable. This would trigger the need to assign rights, and eventually even a specific personality, to smart robots with a high level of autonomy, alongside the possibility of creating insurance and funds for accidents and damages involving robots.

Because we are not yet close to a context of substantial or full robotic autonomy, such as a ‘strong AI’ or ‘artificial general intelligence’, there is a sizeable movement against attributing legal status to robots. Recently, over 150 experts in AI, robotics, commerce, law, and ethics from 14 countries signed an open letter denouncing the European Parliament’s proposal to grant personhood status to intelligent machines. 13 The open letter suggests that current robots do not have moral standing and should not be considered capable of having rights.

However, as computational intelligence can grow exponentially, we should seriously consider the possibility of robots gaining substantial autonomy in the coming years, which strengthens the case for thinking about the attribution of rights.

Considering the myriad of possibilities, the Italian professor and researcher Ugo Pagallo states:

Policy makers shall seriously mull over the possibility of establishing novel forms of accountability and liability for the activities of AI robots in contracts and business law, e.g., new forms of legal agenthood in cases of complex distributed responsibility. Second, any hypothesis of granting AI robots full legal personhood has to be discarded in the foreseeable future. (...) However, the normative reasons why legal systems grant human and artificial entities, such as corporations, their status, help us taking sides in today’s quest for the legal personhood of AI robots. (Pagallo, 2018)

One important feature to consider is the learning speed and individual evolution of the robot (based on data processing and deep learning), which may in some cases make an educational process infeasible, thus limiting the robot’s moral and legal liability. But how could one punish a robot? It cannot be as simple as “pulling the plug”. Here there are two viable options: rehabilitation and indemnification. The first would involve reprogramming the guilty robot; the second would compel it to compensate the victim for the damage caused. In such a context, the European resolution is relevant. The proposal to assign a new type of personhood, an electronic one, suited to the characteristics of intelligent Things, coupled with the idea of compulsory insurance or a compensatory fund, can be an important step.

The new European proposal reflects, therefore, a practical and prompt response to the previously mentioned problem of “distributed irresponsibility”, which occurs when there is no clear connection between an agent and the harm generated (unclear causal nexus between agents and damages).

Where a causal nexus cannot be identified directly, some scholars argue that it can be presumed against the economic group, making it possible to repair the damage caused by easing the victim’s burden of proof. However, when we think of the damages that can occur within complex sociotechnical systems, the application of the causal nexus and of legal liability may be unfair or uncertain. This is because we are often talking about action caused by a sum of agencies of human beings, institutions and intelligent things, each with autonomy and agency of its own. In this case, the focus on the economic group, despite being able to resolve several cases of damages, may not be sufficient for the fair allocation of liability in the era of artificial intelligence and the internet of things.

Therefore, as a pragmatic response to this scenario of uncertainty and legal inadequacy, the European proposal suggests that in case of damages the injured party may either claim against the insurance or be reimbursed through the compensatory fund linked to the intelligent robot itself.

Beyond the concern that this legal arrangement could become a convenient tool for companies and producers to disproportionately shed their responsibility towards users and consumers, this step should be closely accompanied by a continuous debate on the ethical principles that should guide such technical artifacts. Furthermore, this discussion must be coupled with adequate governance of all the data used by these agents. With these factors in view, the recommendation is that the development of these intelligent artifacts be fully oriented by the previously described values: (i) fairness; (ii) reliability; (iii) security; (iv) privacy and data protection; (v) inclusiveness; (vi) transparency; and (vii) accountability.

Governing intra-action with human rights and by design

One point worth considering in this context is that flaws are natural and can even be desirable for the faster improvement of a technical artifact. A regulatory scenario that sought to extinguish any and all flaws or damages would therefore be uncalled for. AI-inspired robots are products with inherently unforeseeable risks. “The idea of avant-garde machine learning research is for robots to acquire, learn, and even discover new ways of interactions without the designer’s explicit instruction. The idea of artificial general intelligence (which is admittedly looking far into the future) is to do so even without any implicit instruction” (Yi Tan, 2018). We could therefore say that these technologies are “unforeseeable by design”.

From a legal standpoint, it is fundamental to keep in mind the new nature of a diffused liability, potentially dispersed across the space, time and agency of the various actants in the public sphere. In that sense, we need to think about the context in which assumptions about liability are made. The question before us is not only how to make computational agents liable, but how to apply this liability reasonably and fairly.

The idea of a liability shared between the different agents involved in the sociotechnical network seems a reasonable perspective. Attributing a fair share of liability to each one requires analysing their spheres of control and influence over the situations presented and over other agents (human and non-human), considering their intra-relation (intra-action) (Barad, 2003).

However, we are still far from obtaining a reasonable consensus 14 on the establishment of appropriate legal parameters for the development and regulation of intelligent Things, although we already see many advancements concerning ethical guidelines.

These agents can influence relationships between people, shaping behaviours and world views, especially and most effectively when their operation involves technological complexity and different levels of autonomy, as in the case of artificial intelligence systems capable of reasoning and learning through deep learning techniques in artificial neural networks (Amaral, 2015).

In view of the increasing risks posed by the advance of techno-regulation, amplified by the dissemination of the ‘internet of things’ and artificial intelligence, the rule of law should be seen as the premise for technological development, or as a meta-technology: it should guide the way technology shapes behaviour rather than the other way around, which often results in violations of human and fundamental rights.

For law to act properly as a meta-technology, it must be backed by ethical guidelines consistent with the age of hyperconnectivity. In this sense, it is necessary to understand the capacity of non-human agents to exert influence, in order to achieve better regulation, especially of more autonomous technologies, with a view to preserving the fundamental rights of individuals and the human species itself.

The law, backed by an adequate ethical foundation, will serve as a channel for data processing and other technological materialities, avoiding techno-regulation that is harmful to humanity. In this new role, it is important that the law guide the production and development of Things (technical artifacts) so that they are sensitive to values, for example by regulating privacy, security and ethics by design. As a metaphor, law as meta-technology would function as a pipeline suited to the digital age, through which all content and actions would pass.

With technology moving from a simple tool to an influencing agent and decision maker, law must rebuild itself in the techno-regulated world, incorporating these new elements from a meta-perspective (as a meta-technology) and building the normative basis to regulate the ethics of new technologies through design. To do so, we must enhance and foster human-centred design models that are sensitive to constitutional values (value-sensitive design).

Governing AI with the aforementioned ethical principles (fairness; reliability; security; privacy; data protection; inclusiveness; transparency; and accountability) and with the “by design” technique is an important step towards keeping pace with technological innovation while trying to guarantee the effectiveness of the law.

Conclusion

It is evident that these intelligent artifacts are exerting ever more influence on the way we think and organise ourselves in society; scientific and legal advances therefore cannot distance themselves from the ethical and legal issues involved in this new scenario.

In that sense, new ontological and epistemological lenses are needed. We need to think about intelligent Things not as mere tools but as moral machines that interact with citizens in the public sphere, endowed with intra-acting agencies, entangled in sociotechnical systems.

Legal regulation, democratically constructed in the public sphere, should provide the architecture for proper legal channels so that non-human agents can act and be developed within the prescribed ethical limits. To design adequate limits for the AI era, we must recognise Things as agents, based on a post-humanist perspective, but with a human rights-based approach to guide their development.

Certainly, the reasons to justify an electronic personhood are not yet in place. Nevertheless, since computational intelligence can grow exponentially, as can its level of interaction in our daily lives and in the connected public sphere, with the gain of new stages of autonomy we must inevitably think about establishing new forms of accountability and liability for the activities of AI, including the possibility of attributing rights, subjectivity and even an e-personhood in the future.

The granting of an electronic personality is the path suggested by the European Parliament for smart robots, and we cannot reject this recommendation as a future regulation, depending on the degree of autonomy conferred on AIs. Such a construction, however, is not immune to criticism, notably as regards the comparison between an AI and a natural person. 15

As evidenced, the discussion about the ethics and responsibility of artificial intelligence still navigates murky waters. However, the difficulties arising from highly complex technological transformations cannot prevent the establishment of new regulation capable of reducing the risks inherent in new activities and, consequently, the occurrence of damages, while ensuring their repair (Magrani, Viola, & Silva, 2019). The exact path to be taken remains uncertain. Nevertheless, it is already possible to envision possibilities that can serve as important parameters. In the wise words of the Italian philosopher Luciano Floridi: “The new challenge is not technological innovation, but the governance of the digital”.

Acknowledgements

This article benefited from the collaboration of Beatriz Laus in its translation.

References

Amaral, G. (2014). Uma dose de pragmatismo para as epistemologias contemporâneas: Latour e o parlamento das coisas [A dose of pragmatism for contemporary epistemologies: Latour and the parliament of things]. São Paulo: Digital de Tecnologias Cognitivas.

Barad, K. (2003). Posthumanist Performativity: Toward an Understanding of How Matter Comes to Matter. Signs: Journal of Women in Culture and Society, 28(3), 801–831. doi:10.1086/345321

Castro, M. (2009). Direito e Pós-humanidade: quando os robôs serão sujeitos de direitos [Law and post-humanity: when robots will be subject to rights]. Curitiba: Juruá.

Čerka, P., Grigienė, J., & Sirbikytė, G. (2015). Liability for damages caused by artificial intelligence. Computer Law & Security Review, 31(3), 376–389. doi:10.1016/j.clsr.2015.03.008

Chagal-Feferkorn, K. A. (2019). Am I an Algorithm or a Product? When Products Liability Should Apply to Algorithmic Decision-Makers. Stanford Law & Policy Review, 30, 61–114. Retrieved from https://www-cdn.law.stanford.edu/wp-content/uploads/2019/05/30.1_2-Chagal-Feferkorn_Final-61-114.pdf

Cole, G. S. (1990). Tort Liability for Artificial Intelligence And Expert Systems. Computer/Law Journal, 10(2), 127–231. Retrieved from https://repository.jmls.edu/jitpl/vol10/iss2/1/

Domingos, P. (2015). The Master Algorithm: how the quest for the ultimate learning machine will remake our world. New York: Basic Books.

Doneda, D., & Almeida, V. A. F. (2016). What Is Algorithm Governance? IEEE Internet Computing, 20(4), 60–63. doi:10.1109/MIC.2016.79

Hallevy, G. (2010). The Criminal Liability of Artificial Intelligence Entities: From Science Fiction to Legal Social Control. Akron Intellectual Property Journal, 4(2), 171–201. Retrieved from https://ideaexchange.uakron.edu/akronintellectualproperty/vol4/iss2/1/

High-Level Expert Group on AI (HLEG AI). (2019). Ethics guidelines for trustworthy AI [Report / Study]. Brussels: European Commission. Retrieved from https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai

Latour, B. (2001). A Esperança de Pandora: Ensaios sobre a realidade dos estudos científicos [Pandora's Hope: Essays on the reality of scientific studies]. São Paulo: EDUSC.

Magrani, E. (2017). Threats of the internet of things in a techno-regulated society: A new legal challenge of the information revolution. ORBIT Journal, 1(1). doi:10.29297/orbit.v1i1.17

Magrani, E., Silva, P., & Viola, R. (2019). Novas perspectivas sobre ética e responsabilidade de inteligência artificial [New perspectives on ethics and responsibility of artificial intelligence]. In C. Mulholland & A. Frazão (Eds.), Inteligência Artificial e Direito: Ética, Regulação e Responsabilidade [Artificial Intelligence and Law: Ethics, Regulation and Responsibility]. São Paulo: RT.

Matias, J. (2010). Da cláusula pacta sunt servanda à função social do contrato: o contrato no Brasil [From the pacta sunt servanda clause to the social function of the contract: the contract in Brazil]. In E. Vera-Cruz (Ed.), O sistema contratual romano: de Roma ao direito actual [The Roman contractual system: from Rome to current law]. Coimbra; Lisboa: Coimbra Editora; Faculdade de direito da universidade de Lisboa.

Miller, K. W., Wolf, M. J., & Grodzinsky, F. S. (2017). Why We Should Have Seen That Coming: Comments on Microsoft's tay “Experiment,” and Wider Implications. ORBIT Journal, 1(2). doi:10.29297/orbit.v1i2.49

Pagallo, U. (2013). The Law of Robots: Crimes, Contracts and Torts. Dordrecht: Springer. doi:10.1007/978-94-007-6564-1

Pagallo, U. (2018). Vital, Sophia, and Co.—The Quest for the Legal Personhood of Robots. Information, 9(9). doi:10.3390/info9090230

Saurwein, F., Just, N., & Latzer, M. (2015). Governance of algorithms: options and limitations. info, 17(6), 35–49. doi:10.1108/info-05-2015-0025

Shavell, S. (2004). Foundations of economic analysis of law. Cambridge, MA: Belknap Press of Harvard University Press.

UNESCO. (2017). Report of COMEST on robotics ethics. Paris: World Commission on the Ethics of Scientific Knowledge and Technology. Retrieved from https://unesdoc.unesco.org/ark:/48223/pf0000253952

Verbeek, P. (2011). Moralizing Technology: Understanding and Designing the Morality of Things. Chicago; London: The University of Chicago Press.

Vermaas, P., Kroes, P., van de Poel, I., Franssen, M., & Houkes, W. (2011). A Philosophy of Technology: From Technical Artefacts to Sociotechnical Systems. Synthesis Lectures on Engineers, Technology, and Society, 6(1). doi:10.2200/S00321ED1V01Y201012ETS014

Vladeck, D. C. (2014). Machines without principals: liability rules and artificial intelligence. Washington Law Review, 89(1), 117–150. Retrieved from https://digitalcommons.law.uw.edu/wlr/vol89/iss1/6/

Wallach, W., & Allen, C. (2008). Moral Machines: Teaching Robots Right from Wrong. Oxford: Oxford University Press.

Yi Tan, C. (2018, December 11). Artificial Intelligence, Artificial Persons, and the Law. Becoming Human: Artificial Intelligence Magazine. Retrieved from https://becominghuman.ai/artificial-intelligence-artificial-persons-and-the-law-2cce322743b6

Footnotes

1. Better understood by the expression “actant” in Latour’s theory.

2. In its Habermasian definition.

3. The 2005 UN Robotics Report defines a robot as a semi- or fully autonomous reprogrammable machine used for the well-being of human beings in manufacturing operations or services.

4. “Deep learning is a subset of machine learning in which the tasks are broken down and distributed onto machine learning algorithms that are organized in consecutive layers. Each layer builds up on the output from the previous layer. Together the layers constitute an artificial neural network that mimics the distributed approach to problem-solving carried out by neurons in a human brain.” Available at: http://webfoundation.org/docs/2017/07/AI_Report_WF.pdf

5. Engineers are responsible for thinking about the values that will go into the design of artifacts, their function and their usage manual. Whatever escapes the design and usage manual does not depend on the control and influence of the engineer and can be unpredictable. That is why engineers must design value-sensitive technical artifacts. An artifact sensitive to constitutionally guaranteed values (deliberated in the public sphere) is a liable artifact. It is also necessary to think about the concepts of “inclusive engineering” and “explainable AI”, to guarantee non-discrimination and transparency as basic principles for the development of these new technologies.

6. In this regard, to enhance transparency and the possibility of accountability in this techno-regulated context, there is nowadays a growing movement in civil society demanding the development of “explainable artificial intelligence”. Likewise, the debate around a “right to explanation” for algorithmic and autonomous decisions that took place in the discussions around the General Data Protection Regulation (GDPR) is another way to pursue transparency and accountability, since algorithms are taking ever more critical decisions on our behalf and it is increasingly hard to explain and understand their processes.

7. The ‘causal nexus’ is the link between the agent’s conduct and the result it produces. Examining the causal nexus means determining which conducts, whether positive (acts) or negative (omissions), gave rise to the result provided for by law. Thus, to claim that someone has caused a certain fact, it is necessary to establish a connection between the conduct and the result generated.

8. This legal phenomenon is also referred to by other authors as the “problem of many hands” or the “accountability gap”.

9. For the purposes of this article, although “fairness” can be understood as a broader term, it is addressed here, on the topic of AI, with a narrower scope focused on algorithmic fairness. Expanding the discussion of algorithmic fairness in particular is beyond the scope of this article; a deeper exploration of this concept deserves a separate article focused on each guiding principle.

10. The type of insurance that should apply to intelligent robots, and which agents and institutions should bear this burden, is still an open question. The European Union’s recent report (2015/2103(INL)) issued recommendations on the subject, proposing not only mandatory registration but also the creation of insurance schemes and funds. According to the European Parliament, insurance could be taken out by both the consumer and the company, in a model similar to that used for car insurance. The fund could be either general (for all autonomous robots) or individual (for each category of robot), composed of fees paid when the machine is placed on the market and/or contributions paid periodically throughout the robot’s life. It is worth mentioning that, in this case, companies would be responsible for bearing this burden. Despite this proposal, the topic remains open to debate, with new alternatives and potentially more suitable models, such as private funds and specific registries, among other possibilities, which will not be the subject of a deep analysis in this article.

11. OODA stands for the “observe–orient–decide–act” cycle, a strategy developed by military strategist John Boyd to explain how individuals and organisations can prevail in uncertain and chaotic situations.
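
A minimal sketch of the cycle’s four phases in Python follows; all function names and the threat heuristic are invented for illustration and are not part of Boyd’s strategy itself.

```python
import random

def observe():
    # Gather raw data (hypothetical sensor reading)
    return {"threat_level": random.random()}

def orient(observation, context):
    # Interpret the raw data against prior context
    context["under_threat"] = observation["threat_level"] > 0.5
    return context

def decide(context):
    return "evade" if context["under_threat"] else "proceed"

def act(decision):
    print(f"acting: {decision}")

# The cycle repeats, re-orienting on fresh observations each pass.
context = {}
for _ in range(3):
    act(decide(orient(observe(), context)))
```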

12. Parts of this subsection build upon recent, unpublished work co-authored by the author (Magrani, Viola, & Silva, 2019), cited here to present the author’s updated view in dialogue with other recent publications.

13. The characteristics most often used to ground human personality are: consciousness; rationality; autonomy (self-motivated activity); the capacity to communicate; and self-awareness. Another possible, social criterion is to consider someone a person whenever society recognises them as such (we could even apply Habermasian theory here, through a deliberative process in the public sphere). Other theorists believe that the fundamental characteristic for the attribution of personality is sentience, that is, the capacity to feel pleasure and pain. The legal concept of a person is changeable and constantly evolving: Afro-descendants, for example, were once excluded from this category at the time of slavery. Therefore, one cannot equate the legal concept of a person with Homo sapiens. A reservation is necessary at this point: even if robots can display and demonstrate emotions as if they were sentient, the authenticity of these reactions is questioned, since they would not be genuine but at most a representation (or emulation), analogous to human actors simulating feelings in a play, and thus not considered genuine by many. Because of this, the Italian legal philosopher Ugo Pagallo calls this ‘artificial autonomy’.

14. In the present article, it is argued that consensus must be constructed according to Jürgen Habermas’s proposal, that is, through dialectical conflicts in the public sphere.

15. Such criticism, however, can be overcome with instruments already available in legal regulation. Recognising that an AI expresses a centre of interests would already be more than sufficient to admit that it has subjectivity and therefore deserves at least some rights. Nothing would prevent granting subjectivity to AIs as a mid-term regulation, leaving the path open for a future grant of an effective e-personality depending on the degree of autonomy (based on a matrix model). As an initial measure, this would play an important role in guaranteeing reparation for victims, avoiding a scenario of ‘distributed irresponsibility’.
