Platform ad archives: promises and pitfalls

Introduction

In 2018, the online platforms Google, Facebook and Twitter all created political ad archives: publicly accessible databases with an overview of political advertisements featured on their services. These measures came in response to mounting concerns over a lack of transparency and accountability in online political advertising, related to illicit spending and voter manipulation. Ad archives have received widespread support in government and civil society. However, their present implementations have also been criticised extensively by researchers who find their contents to be incomplete or unreliable. 1 Increasingly, governments and civil society actors are therefore setting up their own guidelines for ad archive architecture – in some cases even binding legislation. Ad archive architecture has thus rapidly gained relevance for advertising law and policy scholars, both as a tool for regulation and as an object of regulation. 2

This article offers an overview of the ad archive governance debate, discussing the potential benefits of these tools as well as pitfalls in their present implementations. Section two sets out a conceptual and legal framework describing the basic features of ad archives and the regulations that apply to them, followed by a normative framework discussing their potential benefits in terms of transparency and accountability. Section three reviews the shortcomings of current ad archive initiatives, focusing on three core areas of ongoing debate and criticism. Firstly, we discuss scoping: ad archives have faced difficulty in defining and identifying, at scale, what constitutes a “political advertisement”. Secondly, verifying: ad archives have proven vulnerable to inauthentic behaviour, particularly from ad buyers seeking to hide their true identity or the origin of their funding. Thirdly, targeting data: ad archives do not document in meaningful detail how ads are targeted or distributed. We propose several improvements to address these shortcomings, where necessary through public regulation. Overall, we argue that both legal scholars and communications scientists should pay close attention to the regulation of, and through, this novel and potentially powerful tool.

Promises: the case for ad archives

Conceptual framework: what are ‘ad archives’?

This paper focuses on ad archives, which are systems for the automated public disclosure of advertisements via the internet. The key examples are Facebook’s Ad Library, Google’s Advertising Transparency Report and Twitter’s Ad Transparency Center. These systems document the advertisement messages sold on the platform, as well as associated metadata (e.g., the name of the buyer, the number of views, expenditure, and audience demographics). These archives are public, in the sense that they are available without restriction to anyone with a working internet connection.

In practice, the major ad archives have focused on documenting political advertisements, rather than commercial advertisements. Beyond this, they differ in important respects. Firstly, they differ significantly in how they define “political” advertising in order to determine what ads are included in the archive. The major archives also differ in how they verify their contents – particularly the identity of their ad buyers – and in terms of the metadata they publish related to ad targeting. Section three considers these questions of scoping, verifying and targeting in further detail.

The major ad archives went live in 2018. Facebook’s archive was first announced in October 2017 and went live the next year in May 2018. Google and Twitter followed soon after. They initially focused exclusively on the United States, but they have since gradually expanded their efforts. Facebook and Twitter’s archives now offer worldwide coverage, although certain functions are still regionally restricted. Google covers only the US, the European Union and India (Google, 2019a).

In theory, ad archives can be created not only by platform intermediaries but also by a range of other actors, including advertisers, academics or NGOs. For instance, political parties can maintain their own online database documenting their political advertisements, as has been proposed in the Netherlands (Netherlands Ministry of the Interior, 2019). As early as 2012, Solon Barocas argued for a centralised non-profit database, or ‘clearing house’, for political ads (Barocas, 2012). The London School of Economics’ Commission on Truth, Trust and Technology proposes that the government administer a central database, or “political advertising directory” (Livingstone, 2018). The investigative journalists of ProPublica have maintained a public database of Facebook ads which they crowd-sourced from a group of volunteers (Merrill & Tobin, 2019). While we do not discount these approaches, our discussion focuses on platform-operated archives, since these have recently gained the most traction in policy and practice.

Formally speaking, the major platform ad archives are self-regulatory measures. But they emerged in response to significant public pressure from the ongoing “techlash” (Smith, 2018; Zimmer, 2019). These “voluntary” efforts are therefore best understood as an attempt to stave off binding regulation (Wagner, 2018). Indeed, platforms have no immediate commercial incentive to offer transparency in their advertising practices. The role of public regulation, or at least the threat thereof, is therefore essential in understanding the development of ad archives (see Vlassenroot et al., 2019). Below we offer an overview of key policy developments.

Both platforms and policymakers present ad archives as a means to improve accountability in online political advertising (e.g., Goldman, 2017; Warner, 2017). Political advertising in legacy media has historically been regulated in various ways, to prevent undue influence from concentrated wealth on public discourse. Online advertising is placing new pressure on these legacy regimes. In many cases, the language of existing law has simply not been updated to apply online. Furthermore, online political micro-targeting has unique affordances that can enable new types of harms demanding entirely new regulatory responses. For instance, platform advertising services lower the barrier to buying ads across borders and to buying ads under false or misleading identities. Furthermore, micro-targeting technology, which enables advertisers to target highly specific user segments based on personal data analytics, can enable novel methods of voter deception, manipulation and discrimination (Borgesius et al., 2018; Chester & Montgomery, 2017). For instance, targeted advertising can enable politicians to announce different or even conflicting political programmes to different groups, thereby fragmenting public discourse and making it more difficult to hold politicians accountable to their electoral promises (Bodó, Helberger, & de Vreese, 2017; Borgesius et al., 2018). Targeted advertising can also enable discrimination between voter groups, both intentionally through advertisers’ targeting decisions and unintentionally through undocumented algorithmic biases (Boerman, Kruikemeier, & Borgesius, 2017; Ali et al., 2019).

These concerns about online advertising are compounded by the fact that the online advertising ecosystem is difficult to monitor, which undermines efforts to identify, diagnose and remedy potential harms (Chester & Montgomery, 2017). This opacity is due to personalisation: personalised advertisements are invisible to everyone except the specific users they target, hiding them from observation by outsiders (Guha, Cheng, & Francis, 2010). As Benkler, Faris and Roberts observe, this distinguishes online advertisers from mass media advertisers, who necessarily acted “in the public eye”, thus “suffering whatever consequences” a given message might yield outside of its target audience (Benkler, Faris, & Roberts, 2018, p. 372). As a result, the online advertising ecosystem exhibits structural information asymmetries between, on one side, online platforms and advertisers, and on the other, members of the public who might hold them accountable. Researchers can potentially resort to data scraping methods, but these suffer from severe limitations and are vulnerable to interference by the platforms they target (Bodó et al., 2018; Merrill & Tobin, 2019). Even with such workarounds, then, these structural information asymmetries between advertisers and their publics remain largely intact.

These concerns over online political advertising took centre stage in the “techlash”, which followed the unexpected outcomes of the 2016 Brexit referendum and US presidential elections. In the UK, the Vote Leave campaign was accused of deceptive messaging, and of violations of data protection law and campaign spending law in its political micro-targeting activities (Merrick, 2018; Waterson, 2019a). In the US, ad spending from Russian entities such as the Internet Research Agency raised concerns about foreign election interference. In both countries, Facebook shared selected advertising data sets in response to parliamentary investigations (Lomas, 2018; Shane, 2017). But these came well over a year after the events actually took place – driving home the general lack of transparency and accountability in the advertising ecosystem. Similar controversies have also played out in subsequent elections and referenda, such as the Irish abortion referendum of 2018, which drew an influx of foreign pro-life advertisements (Hern, 2018). The actual political and electoral impact of these ad buys remains debatable (e.g., MacLeod, 2019; Benkler, Faris, & Roberts, 2018). But in any case, these developments drew attention to the potential for abuse in targeted advertising, and fuelled the push for more regulation and oversight in this space.

Ad archives have formed a key part of the policy response to these developments. The most prominent effort in the US is the Honest Ads Act, proposed on 19 October 2017, which would require online platforms to “maintain, and make available for online public inspection in machine readable format, a complete record of any request to purchase on such online platform a qualified political advertisement” (Klobuchar, Warner, & McCain, 2017, Section 8(a)(j)(1)(a)). This bill has not yet passed (Montellaro, 2019). But just days after its announcement, Facebook declared its plans to voluntarily build an ad archive, which would largely conform to the same requirements (Goldman, 2017). Google and Twitter followed suit the next year.

Since 2018, governments have started developing binding legislation on ad archives, often with resistance from platforms. Canada’s Elections Modernization Act of December 2018 compels platforms to maintain public registers of political advertising sold through their services. Facebook and Twitter have sought to comply with these measures, but Google instead responded by discontinuing the sale of political advertisements in this jurisdiction altogether (Cardoso, 2019). Similarly, the State of Washington’s Public Disclosure Commission attempted to regulate ad archives by requiring advertisers to publicly disclose political ads sold in the state (Sanders, 2019). In this case, both Google and Facebook have refused to comply with the disclosure rules and instead banned political advertising in this region (Sanders, 2019). Citing federal intermediary liability law, the Communications Decency Act of 1996, Facebook contended it was immune from any liability for political advertising content (Sanders, 2019). Some reporters also claim that Facebook has lobbied to kill the Honest Ads Act, despite publicly claiming to support regulation and implement its requirements voluntarily (Timmons & Kozlowska, 2018).

Europe is also poised to regulate ad archives. In the run-up to the EU elections of May 2019, the European Commission devised the Code of Practice on Disinformation, which is not a binding law but rather a co-regulatory instrument negotiated with major tech companies including Google, Facebook, Twitter, Microsoft and Mozilla. 3 By signing the Code, these companies have committed to a range of obligations from fact-checking and academic partnerships to the creation of ad archives (European Commission, 2018, Section II.B.). Furthermore, leaked documents from the European Commission show that political advertisements will receive particular attention in the upcoming reform of digital services rules (Fanta & Rudl, 2019). Member states are also exploring the regulation of ad archives. In the UK and the Netherlands, parliamentarians have expressed support for further regulation in, respectively, a parliamentary resolution and a committee report (Parliament of the Netherlands, 2019; House of Commons Select Committee on Digital, Culture, Media and Sport, 2019). France has passed a binding law requiring the public disclosure of payments received for political advertisements – if not a comprehensive regulation of ad archives per se (Republic of France, 2018).

Ad archives exist alongside a number of other proposals for regulating targeted advertising. One popular measure is installing user-facing disclaimers, intended to inform audiences about e.g., the identity of the advertisers, the source of their funding, and/or the reason why they are being targeted. Another approach is to regulate funding, e.g., through spending limits, registration requirements, or restrictions on foreign advertising. Finally, targeting technology and the use of personal data can also be regulated. Some combination of these measures is found in, inter alia, the US Honest Ads Act, the EU’s Code of Practice, Canada’s Elections Modernization Act, and France and Ireland’s new election laws. The EU’s General Data Protection Regulation (GDPR) is also a highly relevant instrument, since it grants users information rights, and constrains the ability of advertisers to use personal data for ad targeting purposes (Bodó, Helberger, & De Vreese, 2017).

Of course, present ad archive initiatives are far from uniform. Definitions of e.g., the relevant platforms, disclosure obligations and enforcement mechanisms all differ. An exhaustive comparative analysis of these differences would exceed the scope of this paper. The second half of this paper discusses how these policy initiatives differ on some of the key design issues outlined above (scoping, verifying, and targeting data), and how the major platforms have responded to their demands. First, we discuss the policy principles driving this new wave of regulation.

Normative framework: what are the policy grounds for ad archives?

Ad archive initiatives have typically been presented in terms of ‘transparency and accountability’, but these are notoriously vague terms. The concrete benefits of ad archives have not been discussed in much depth. To whom do ad archives create accountability, and for what? The answer is necessarily somewhat abstract, since ad archives, being publicly accessible, can be used by a variety of actors in a variety of accountability processes. Indeed, this diversity is arguably their strength. Other advertising transparency measures, such as user-facing disclaimers, third party audits or academic partnerships, have focused on particular groups of stakeholders. Ad archives, by contrast, can enable monitoring by an unrestricted range of actors, including not only academics but also journalists, activists, government authorities and even rival advertisers – each with their own diverse capacities and motivations to hold advertising accountable. In this sense, ad archives can be seen as recreating, to some extent, the public visibility that was inherent in mass media advertising and is now obfuscated by personalisation (see above). Broadly speaking, this public visibility can be associated with two types of accountability: (a) accountability to the law, through litigation, and, (b) accountability to public norms and values, through publicity.

Ad archives can contribute to law enforcement by helping to uncover unlawful practices. Although online political advertising is not (yet) regulated as extensively as its mass media counterparts, it may still violate e.g., disclosure rules and campaign finance regulations. And, as discussed previously, new rules may soon be coming. Commercial advertising, for its part, may be subject to a range of consumer protection rules, particularly in Europe, and also to competition law, unfair commercial practice law and intellectual property law. Ad archives can allow users to proactively search for violations of these rules. Such monitoring could be done by regulators, but importantly also by third parties including commercial rivals, civil rights organisations, consumer protection organisations, and so forth. These third parties might choose to litigate independently, or simply refer the content to a competent regulator. Indeed, regulators often rely on such third party input to guide their enforcement efforts, e.g., in the form of hotlines, complaints procedures and public consultations. In many cases, enforcement is likely to be straightforward and inexpensive, since most platforms operate notice and takedown procedures for the removal of unlawful advertising without the need for judicial intervention. 4 Platforms can also remove advertising based on their own community standards, even if it does not violate any national laws. In this light, ad archives can contribute to enforcement in a broad sense, including not only public advertising laws but also platforms’ private standards, and relying not only on public authorities but on any party with the time and interest to flag prohibited content.

In addition to litigation, ad archives also facilitate publicity about advertising practices, which can serve to hold platforms accountable to public norms and values. Journalists, researchers and other civil society actors can draw on archives to research and publicise potential wrongdoings that might previously have flown under the radar. For instance, the US media has a strong tradition of analysing and fact-checking television campaign ads; ad archives could help them do similar coverage of online political micro-targeting. Such publicity may encourage platforms and/or advertisers to self-correct and improve their advertising standards, by raising the threat of reputational harm. And failing such a private ordering response, publicity can also provide an impetus for new government interventions. In these ways, ad archives can contribute not only to the enforcement of existing laws, but also to informed public deliberation, and thus to the articulation and enforcement of public norms and values (see Van Dijck, Poell, & de Waal, 2018). Such publicity effects may be especially important in the field of online political advertising, since, as discussed, this space remains largely unregulated under existing laws, and raises many novel policy questions for public deliberation.

In each case, it is important to note the factor of deterrence: the mere threat of publicity or litigation may already serve to discipline unlawful or controversial practices. Even for actors who have not yet faced any concrete litigation or bad publicity, ad archives could theoretically have a disciplinary effect. In this sense, a parallel can be drawn with the concept of the Panopticon, as theorised in surveillance studies literature: subjects are disciplined not merely through the fact of observation, but more importantly through the pervasive possibility of observation (Foucault, 1977; Lyon, 2006). Put differently, Richard Mulgan describes this as the potentiality of accountability: the possibility that one “may be called to account for anything at any time” (Mulgan, 2000, p. 564). Or, as the saying goes: the value in the sword of Damocles is not that it drops, but that it hangs (e.g., Arnett v. Kennedy, 1974).

Of course, these accountability processes depend on many other factors besides transparency alone. Most importantly, ad archives depend on a capable and motivated user base of litigators (for law enforcement effects) and civil society watchdogs (for publicity effects). For publicity effects, these watchdogs must also be sufficiently influential to create meaningful reputational or political risks for platforms (see Parsons, 2019; Wright & Rwabizambuga, 2006). These conditions can certainly not be assumed; which researchers are up to the task of overseeing this complex field, and holding its powerful players to account? This may call for renewed investment in our public watchdogs, including authorised regulators as well as civil society. Ad archives might be a powerful tool, but they rely on competent users.

Finally, of course, the above analysis also assumes that ad archives are designed effectively, so as to offer meaningful forms of transparency. As we discuss in the following section, present implementations leave much to be desired.

Pitfalls: key challenges for ad archive architecture

Having made the basic policy case for the creation of ad archives, we now discuss several criticisms of current ad archive practice. Firstly, we discuss the issue of scoping: which ads are included in the archive? Secondly, verifying: how do ad archives counteract inauthentic behaviour from advertisers and users? Thirdly, targeting: how do ad archives document ad targeting practices? Each of these issues can create serious drawbacks to the research utility of ad archives, and each deserves further scrutiny in future governance debates.

Ad archive architecture is very much a moving target, so we emphasise that our descriptions represent a mere snapshot. Circumstances may have changed significantly since our time of writing. Accordingly, the following is not intended as an exhaustive list of possible criticisms, but rather as a basic assessment framework for some of the most controversial issues. For instance, one important criticism of ad archives which we do not consider in detail is the need for automated access through application programming interfaces (APIs). When ad archive data is exclusively available through browser-based interfaces, this can make it relatively time-consuming to perform large-scale data collection. To enable in-depth research, it is clear that ad archives must enable such automated access. Until recently, Facebook did not offer public API access to their ad archive data (Shukla, 2019). And once the API was made publicly accessible, it quickly appeared to be so riddled with bugs as to be almost unusable (Rosenberg, 2019). As noted by Laura Edelson, these API design issues are not novel or intractable from a technical perspective, but eminently “fixable”, and thus reflect sub-standard implementation on the part of Facebook (Rosenberg, 2019). In response, Mozilla, together with a coalition of academics, has drafted a list of design principles for ad archive APIs (Mozilla, 2019). Such public, automated access can be seen as a baseline condition for effective ad archive policy. What then remains are questions about the contents of the archive, which include scoping, verifying and targeting.
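To illustrate what such automated access enables, below is a minimal sketch of how a researcher might collect ad records from a paginated archive API. The endpoint, parameter names and fields are hypothetical placeholders rather than any platform’s actual interface; real archive APIs (such as Facebook’s Ad Library API) differ in naming, authentication and rate limits.

```python
import requests

# Hypothetical ad archive endpoint and token; real archive APIs differ.
ARCHIVE_URL = "https://api.example-platform.com/ads_archive"
API_TOKEN = "YOUR_ACCESS_TOKEN"  # placeholder


def fetch_ads(search_term, country, page_limit=5):
    """Collect ad records matching a search term, following pagination."""
    ads = []
    params = {
        "search_terms": search_term,
        "country": country,
        "fields": "ad_text,funding_entity,spend,impressions,demographics",
        "access_token": API_TOKEN,
    }
    url = ARCHIVE_URL
    for _ in range(page_limit):
        response = requests.get(url, params=params, timeout=30)
        response.raise_for_status()
        payload = response.json()
        ads.extend(payload.get("data", []))
        # Follow the archive's "next page" link, if any.
        next_url = payload.get("paging", {}).get("next")
        if not next_url:
            break
        url, params = next_url, {}  # the next link already encodes the query
    return ads


if __name__ == "__main__":
    records = fetch_ads("climate", country="US")
    print(f"Collected {len(records)} ad records")
```

With browser-only access, the same collection has to be reproduced by hand or through scraping, which is slower, more fragile and, as noted above, vulnerable to interference by the platform.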

Scoping: what ads are included in the archive?

A key design question for ad archives is that of scope: what ads are included in the archive? First, we discuss the concept of “political” advertising, which is the central scoping device in most existing initiatives and has led to many implementation challenges. Second, we discuss the attempts to exempt news reporting from political ad archives.

“Political” ad archives: electoral ads v. issue ads v. all ads?

Ad archive initiatives, both self-regulatory and governmental, have emphasised “political” advertising rather than commercial advertising. However, their precise interpretations of this concept differ significantly. Below we discuss these differing approaches and relevant policy trade-offs.

The main dividing line in existing political ad archives is between issue ads and electoral ads (or “campaign ads”). “Election ads” explicitly reference an election or electoral candidate, whereas “issue ads” reference a topic of national importance. Google focuses exclusively on election ads, whereas Facebook and Twitter also include issue ads in certain jurisdictions, and even non-political ads. Most public policy instruments also cover issue ads, including the US Honest Ads Act and the EU Code of Practice. There is good reason to include issue ads, since they have been central to recent controversies. During the 2016 US election, for instance, foreign actors such as the Russian-controlled Internet Research Agency advertised on divisive issues such as racial politics, sexual politics, terrorism, and immigration, in an apparent attempt to influence the election (Howard et al., 2018). An approach which focuses on election ads would fail to address such practices.

However, the drawback of “issue ads” as a scoping device is that the concept of a political “issue” is broad and subjective, which makes it difficult for archive operators to develop actionable definitions and enforce these in practice. Google, in its implementation reports for the EU’s Code of Practice, reported difficulties in developing a workable definition of a “political issue” (Google, 2019a). The European Commission later lamented that “Google and Twitter have not yet reported further progress on their policies towards issue-based advertising” (European Commission, 2019). In Canada, where the Elections Modernization Act also requires the disclosure of issue-based ads, Google has claimed that it is simply unable to comply with disclosure requirements (Cardoso, 2019). These difficulties might explain why the company announced plans, as discussed previously, to ban political advertising entirely for Canadian audiences during election periods.

Yet these attempts to ban political advertising, as an alternative to disclosure, raise the question of whether platforms can actually enforce such a ban. After all, the platforms themselves admit they struggle to identify political ads in the first place. Simply declaring that political ads are prohibited will not guarantee that advertisers observe the ban and refrain from submitting political content. Could platforms then still be liable for a failure to disclose? Here, a tension emerges between ad archive regulation and intermediary liability laws, which typically immunise platforms for (advertising) content supplied by their users. Canada, Europe and the US all have such laws, although their precise scope and wording differ. Indeed, Facebook has argued that it is immunised against Washington State’s disclosure rules based on US federal intermediary liability law – the Communications Decency Act of 1996 (Sanders, 2018a). A similar tension arises with the EU’s intermediary safe harbours, which prohibit “proactive monitoring obligations” from being imposed on platforms (e-Commerce Directive 2000/31/EC, Article 15). Such complex interactions with intermediary liability law should be taken into account in ongoing reforms.

Compared to Google, Facebook is relatively advanced in its documentation of issue ads. But that company too has faced extensive criticism for its approach. The company employs approximately 3,000-4,000 people to review ads related to politics or issues, using “a combination of artificial intelligence (AI) and human review”, and is estimated to process upwards of a million ad buyers per week in the US alone (Matias, Hounsel, & Hopkins, 2018). Facebook’s website offers a list of concrete topics which it considers “political issues of national importance”, tailored to the relevant jurisdiction. The US list of political issues contains 20 entries, including relatively specific ones such as “abortion” and “immigration”, but also relatively broad and ambiguous ones such as “economy” and “values” (Facebook, 2019a). The EU list contains only six entries so far, including “immigration”, “political values” and “economy” (Matias, Hounsel, & Hopkins, 2018).

Despite these efforts, research suggests that Facebook’s identification of political issue ads is error-prone. Research from Princeton and Bloomberg showed that a wide range of commercial ads are at risk of being mislabelled as political, including advertisements for e.g., national parks, Veterans Day celebrations, and commercial products that included the words “bush” or “clinton” (Frier, 2018; Hounsel et al., 2019). Conversely, data scraping research by ProPublica shows that Facebook failed to identify political issue ads on such topics as civil rights, gun rights, electoral reform, anti-corruption, and health care policy (Merrill & Tobin, 2019). These challenges are likely to be exacerbated as platforms expand their efforts beyond the United States to regions such as Africa and Europe, which contain far greater political and linguistic diversity and fragmentation. Accordingly, further research is needed to determine whether the focus on issue ads in ad archives is appropriate. It may turn out that platforms are able to refine their processes and identify issue ads with adequate accuracy and consistency. But given the major scaling challenges, the focus on issue ads may well turn out to be impracticable.

In light of the difficulties with identifying “issue ads”, one possible alternative would be to simply include all ads without an apparent commercial objective. In other words, a definition a contrario. This approach could capture the bulk of political advertising, and would avoid the difficulties of identifying and defining specific political “issues”. Such an approach would likely be more scalable and consistent than the current model, although this might come at the cost of increased false positives (i.e., a greater overinclusion of irrelevant, non-political ads in the archive).
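To make the logic of such an a contrario rule concrete, the sketch below defaults to archiving any ad that lacks markers of a commercial objective, rather than trying to match a list of political “issues”. The field names and commercial markers are hypothetical illustrations, not any platform’s actual data model.

```python
# Hypothetical markers of a commercial objective in an ad record.
COMMERCIAL_MARKERS = {"product_catalogue_id", "shop_link", "app_install_link"}


def include_in_archive(ad: dict) -> bool:
    """A contrario rule: archive every ad without an apparent commercial
    objective, instead of matching it against a list of political issues."""
    has_commercial_objective = any(marker in ad for marker in COMMERCIAL_MARKERS)
    return not has_commercial_objective


ads = [
    {"id": 1, "text": "Vote for candidate X", "paid_for_by": "Campaign X"},
    {"id": 2, "text": "New trainers, 20% off", "shop_link": "https://example.com"},
]
archived = [ad for ad in ads if include_in_archive(ad)]
print([ad["id"] for ad in archived])  # -> [1]: only the non-commercial ad is kept
```

The trade-off noted above is visible in the default: anything that cannot be identified as commercial is archived, which favours completeness over precision.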

Another improvement could be to publish all advertisements in a comprehensive archive, regardless of their political or commercial content (Howard, 2019). This would help third parties to independently evaluate platforms’ flagging processes for political ads, and furthermore to research political advertising according to their own preferred definitions of the “political”. This is what Twitter does in its Ad Transparency Center: the company still takes steps to identify and flag political advertisers (at least in the US), but users have access to all other ads as well (Twitter, 2019a). However, only political ads are accompanied by detailed metadata, such as ad spend, view count, targeting criteria, et cetera. Facebook, in an update from 29 March 2019, also started integrating commercial ads into its database (Shukla, 2019). As with Twitter, however, these ads are not given the same detailed treatment as political ads. In this light, Twitter and Facebook appear to be moving towards a tiered approach, with relatively more detail on a subset of political ads, and relatively less detail on all other ads.

Of course, a more fundamental advantage of comprehensive publication is that it extends the benefits of ad archives to commercial advertising. Commercial advertising has not been the primary focus of ad archive governance debates thus far, but here too ad archives could be highly beneficial. A growing body of evidence indicates that online commercial ad delivery raises a host of legal and ethical concerns, including discrimination and manipulation (Ali et al., 2019; Boerman, Kruikemeier, & Borgesius, 2017). Furthermore, online advertising is also subject to a range of consumer protection laws, including child protection rules and prohibitions on unfair and deceptive practices. With comprehensive publication, ad archives could contribute to research and reporting on such issues, especially if platforms abandon their tiered approach and start publishing more detailed metadata for these ads.

Platforms may not be inclined to implement comprehensive ad archives since, as discussed, their commercial incentives may run counter to greater transparency. But from a public policy perspective, there appear to be no obvious drawbacks to comprehensive publication, at least as a default rule. If there are indeed grounds to shield certain types of ads from public archives – though we see none as of yet – such cases could also be addressed through exemption procedures. The idea of comprehensive ad archives therefore warrants serious consideration and further research, since it promises to benefit the governance of both commercial and political advertising.

Exemptions for news reporting

Some ad archive regimes offer exemptions for news publishers and other media actors. News publishers commonly use platform advertising services to promote their content, and when this content touches on political issues it can therefore qualify as an issue ad. Facebook decided to exempt news publishers from their ad archive in 2018, following extensive criticism from US press industry trade associations, who penned several open letters criticising their inclusion in ad archives. They argued that “[t]reatment of quality news as political, even in the context of marketing, is deeply problematic” and that the ad archive “dangerously blurs the lines between real reporting and propaganda” (Carbajal et al., 2018; Chavern, 2018). Similar exemptions can now also be found in Canada’s Elections Modernization Act and in the EU Code of Practice (Leathern, 2019). However, the policy grounds for these exemptions are not particularly persuasive. There is little evidence to suggest, or reason to assume, that inclusion in ad archives would meaningfully constrain the press in its freedom of expression. Indeed, ad archive data about media organisations is highly significant, since the media are directly implicated in concerns about misinformation and electoral manipulation (Benkler, Faris, & Roberts, 2018). Excluding the media’s ad spending is therefore a missed opportunity without a clear justification.

Verifying: how do archives account for inauthentic behaviour?

Another pitfall for ad archives is verifying their data in the face of fraud and other inauthentic behaviours. One key challenge is documenting ad buyers’ identities. Another is the circumvention of ad archive regimes by “astroturf”, sock puppets and other forms of native advertising. More generally, engagement and audience statistics may be inaccurate due to bots, click fraud and other sources of noise. As we discuss below, these pitfalls should serve as a caution to ad archive researchers, and as a point of attention for platforms and their regulators.

Facebook’s archive in particular has been criticised for failing to reliably identify ad buyers (e.g., Edelson et al., 2019). Until recently, Facebook did not verify the names that advertisers submitted for their “paid for by” disclaimer. This enabled obfuscation by advertisers seeking to hide their identity (Albright, 2018; Andringa, 2018; Lapowsky, 2018; O’Sullivan, 2018; Waterson, 2019). For instance, ProPublica uncovered 12 different political ad campaigns that had been bought in the name of non-existent non-profits, but in fact originated from industry trade organisations such as the American Fuel & Petrochemical Manufacturers (Merrill, 2018). Vice Magazine even received authorisation from Facebook to publish advertisements in the name of sitting US senators (Turton, 2018). More recently, Facebook has therefore started demanding proof of ad buyer identity in several jurisdictions, such as photo ID and notarised forms (Facebook, 2019b). Twitter and Google enforce similar rules (Google, 2019b; Twitter, 2019b). The Canadian Elections Modernization Act now codifies these safeguards by requiring platforms to verify and publish ad buyers’ real names.

Such identity checks are only a first step in identifying ad buyers, however. Ad buyers wishing to hide their identity can still attempt to purchase ads through proxies or intermediaries. In theory, platforms could be required to perform even more rigorous background checks or audits so as to determine ad buyers’ ultimate sources of funding. But there may be limits to what can and should be expected of platforms in this regard. Here, ad archive governance intersects with broader questions of campaign finance regulation and the role of “dark money” in politics. These issues have historically been tackled through national regulation, including standardised registration mechanisms for political advertisers, but many of these regimes currently do not address online advertising. Platforms’ self-regulatory measures, though useful as a first step, cannot make up for the lack of public regulation in this space (Lapowsky, 2018; Livingstone, 2018). Even Facebook CEO Mark Zuckerberg has called for regulation here, arguing in a recent op-ed that “[o]ur systems would be more effective if regulation created common standards for verifying political actors” (Zuckerberg, 2019).

Another weak spot for ad archives is that they fail to capture “native advertising” practices: advertising which is not conducted through social media platforms’ designated advertising services, but rather through their organic content channels. Such “astroturfing” strategies have seen widespread deployment in both commercial and political contexts, from Wal-Mart and Monsanto to Russian “troll farms” and presidential Super PACs (Collins, 2016; Howard et al., 2018; Leiser, 2016). Ad archives do not capture this behaviour, and indeed their very presence could further encourage astroturfing, as a form of regulatory arbitrage. Benkler, Faris, and Roberts suggest that ad archive regulation should address this issue by imposing an independent duty on advertisers to disclose any “paid coordinated campaigns” to the platform (Benkler, Faris, & Roberts, 2018). One example from practice is the Republic of Ireland’s Online Advertising and Social Media Bill of 2017, which would hold ad buyers liable for providing inaccurate information to ad sellers, and also prohibit the use of bots which “cause multiple online presences directed towards a political end to present as an individual account or profile on an online platform” (Republic of Ireland, 2017). Enforcing such rules will remain challenging, however, since astroturfing is difficult to identify and often performed by bad actors with little or no interest in complying with the law (Leiser, 2016).

For ads that are actually included in the archive, inauthentic behaviour can also distort associated metadata such as traffic data. Engagement metrics, including audience demographic data, can be significantly disturbed by click fraud or bot traffic (Edelman, 2014; Fulgoni, 2016). Platforms typically spend extensive resources to combat inauthentic behaviour, and this appears to be a game of cat-and-mouse without definitive solutions. In light of these challenges, researchers should maintain a healthy scepticism when dealing with ad archive data and, where necessary, continue to corroborate ad archive findings with alternative sources and research methods (see, generally: Vlassenroot et al., 2019).

The above is not to say that all information supplied by ad buyers should be verified. There may still be an added value in enabling voluntary, unverified disclosures by ad buyers in archives. Facebook, for instance, gives advertisers the option to include “Information From the Advertiser” in the archive. Such features can enable good faith advertisers to further support accountability processes, e.g., by adding further context or supplying contact information. It is essential, however, that such unverified submissions are recognisably earmarked as such. Ad archive operators should clearly describe which data is verified, and how, so that users can treat their data with the appropriate degree of scepticism.

Targeting: how is ad targeting documented?

Another key criticism of ad archives is that they are not detailed enough, particularly in their documentation of ad targeting practices. Micro-targeting technology, as discussed previously, is the source of many public policy concerns for both political and commercial advertising, including discrimination, deception, and privacy harms. These threats are relatively new, and are both undocumented and unregulated in many jurisdictions – particularly as regards political advertising (Bodó et al., 2017). Regrettably, ad archives currently fail to illuminate these practices in any meaningful depth.

At the time of writing, the major ad archives differ significantly in their approach to targeting data. Google’s archive indicates whether the following targeting criteria have been selected by the ad buyer: age, location, and gender. It also lists the top five Google keywords selected by the advertiser. Facebook’s Ad Library, by contrast, does not disclose what targeting criteria have been selected, but instead shows a demographic breakdown of the actual audience that saw the message – also in terms of age, location and gender. Twitter offers both audience statistics and targeting criteria, and covers not only the targeting criteria of age, location, and gender, but also the audience’s preferred language. These data vary in granularity. For instance, Google’s archive lists six different age brackets between the ages of 18 and 65+, whereas Twitter lists 34. For anyone familiar with the complexities of online behavioural targeting, it is apparent that these datasets leave many important questions unanswered. These platforms offer far more refined methods for ad targeting and performance tracking than the basic features described above.
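To make these differences concrete, the sketch below models, as hypothetical Python records, roughly what each archive disclosed for a single ad at the time of writing. The field names and values are illustrative only, based on the description above, and do not reflect the platforms’ actual data schemas.

```python
# Illustrative records only; not the platforms' actual data schemas.

google_record = {
    # Flags indicating which criteria the buyer selected, plus top keywords.
    "targeted_by": {"age": True, "location": True, "gender": False},
    "top_keywords": ["election", "candidate x", "healthcare", "tax", "jobs"],
}

facebook_record = {
    # No selected criteria; instead a breakdown of the audience actually reached.
    "audience_breakdown": {
        "age": {"18-24": 0.10, "25-34": 0.25, "35-44": 0.30, "45+": 0.35},
        "gender": {"female": 0.55, "male": 0.45},
        "location": {"California": 0.40, "Texas": 0.60},
    },
}

twitter_record = {
    # Both the selected criteria (including language) and audience statistics.
    "targeting_criteria": {"age": "25-49", "location": "US", "gender": "any",
                           "language": "en"},
    "audience_breakdown": {"impressions": 120_000},
}
```

Laid out side by side, none of these records approaches the level of detail that the platforms’ own campaign dashboards offer to the ad buyer, which is the gap the rule of thumb discussed next is meant to close.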

For better insights into ad targeting, one helpful rule of thumb would be to insist that ad archives include the same level of information as is offered to the actual ad buyer – both in terms of targeting criteria and in terms of actual audience demographics (Mozilla, 2019). For some targeting technologies, full disclosure of targeting practices might raise user privacy concerns. For instance, Facebook’s Custom Audience feature enables advertisers to target users by supplying their own contact information, such as email addresses or telephone numbers. Insisting on full disclosure of targeting criteria for these custom audiences would lead to the public disclosure of sensitive personal data (Rieke & Bogen, 2018). Anonymisation of these data may not always be reliable (Ohm, 2010). In these cases, however, Facebook could at a minimum still disclose any additional targeting criteria selected by the ad buyer in order to refine this custom audience. Furthermore, ad performance data, rather than ad targeting data, can also provide some insight into targeting without jeopardising the custom audience’s privacy (Rieke & Bogen, 2018). Other platforms’ advertising technologies might raise comparable privacy concerns, demanding a case-by-case assessment of relevant tradeoffs. These exceptions and hard cases notwithstanding, however, there are no clear objections (either technical or political) that should prevent platforms from publicly disclosing the targeting methods selected by their advertisers.

In light of such complexities, designing appropriate disclosures will likely require ongoing dialogue between archive operators, archive users and policymakers. The first contours of such a debate can already be found in the work of Edelson et al., Rieke & Bogen, and Mozilla, who have done valuable work in researching and critiquing early versions of Google, Twitter and Facebook’s data sets (Edelson et al., 2019; Mozilla, 2019; Rieke & Bogen, 2018). For the time being, researchers may also choose to combine ad archive data with other sources, such as Facebook’s Social Science One initiative, or GDPR data access rights, in order to obtain a more detailed understanding of targeting practices (Ausloos & Dewitte, 2018; Venturini & Rogers, 2019). For instance, Ghosh et al. supplemented ad archive research with data scraped with ProPublica’s research tool, which gave insights into ad targeting that were not offered through the ad archive (Ghosh et al., 2019). Along these lines, ad archives can help to realise Pasquale’s model of “qualified transparency”, which combines general public disclosures with more limited, specialist inquiries (Pasquale, 2015).

Conclusion

This paper has given an overview of a new and rapidly developing topic in online advertising governance: political ad archives. Here we summarise our key findings, and close with suggestions for future research in both law and communications science.

Ad archives can be a novel and potentially powerful governance tool for online political advertising. If designed properly, ad archives can enable monitoring by a wide range of stakeholders, each with diverse capacities and interests in holding advertisers accountable. In general, ad archives can improve accountability not only to applicable laws, but also to public opinion, by introducing publicity and thus commercial and political risk into previously invisible advertisements.

Public oversight will likely be necessary to realise these benefits, since platforms ostensibly lack the incentives to voluntarily optimise their ad archives for transparency and accountability. Indeed, our analysis here has already identified several major shortcomings in present ad archive policies: scoping, verifying, and targeting. To realise the full potential of ad archives, these issues will require further research, critique, and likely regulation. Our review suggests that major advances can already be made by comprehensively publishing all advertisements, regardless of whether they have been flagged as political; revoking any exemptions for media organisations; requiring basic verification of ad buyers’ identities; documenting how ad archive data is verified; and disclosing all targeting methods selected by the ad buyer (insofar as possible without publishing personal data).

Looking forward, ad archives present a fruitful research area for both legal and communication sciences scholars. For legal scholars, the flurry of law making around political advertising in general, and transparency in particular, raises important questions about regulatory design (in terms of how relevant actors and duties are defined, oversight and enforcement mechanisms, etc.). In future, ad archives also deserve consideration in commercial advertising governance, in such areas as consumer protection, child protection, or anti-discrimination.

The emergence of ad archives also has important implications for communications science. Firstly, ad archives could become an important source of data for communications research, offering a range of data that would previously have been difficult or impossible to obtain. Although our paper has identified several shortcomings in these data, they might nonetheless provide a meaningful starting point for observing platforms’ political advertising. Secondly, ad archives are an interesting object of communications science research, in terms of how they are used by relevant stakeholders, and how this impacts advertising and communications practice. Further research along these lines will certainly be necessary to better understand ad archives, and to make them reach their full potential.

Acknowledgements

The authors wish to thank Frédéric Dubois, Chris Birchall, Joe Karaganis and Kristofer Erickson for their thoughtful reviewing and editing of this article. The authors also wish to thank Frederik Zuiderveen Borgesius and Sam Jeffers for their helpful insights during the writing process, as well as the participants in the ICA 2019 Post-Conference on the Rise of the Platforms and particularly the organisers: Erika Franklin Fowler, Sarah Anne Ganter, Dave Karpf, Rasmus Kleis Nielsen, Daniel Kreiss and Shannon McGregor.

References

Albright, J. (2018, November 4). Facebook and the 2018 Midterms: A Look at the Data – The Micro-Propaganda Machine. Retrieved from https://medium.com/s/the-micro-propaganda-machine/the-2018-facebook-midterms-part-i-recursive-ad-ccountability-ac090d276097

Ali, M., Sapiezynski, P., Bogen, M., Korolova, A., Mislove, A., & Rieke, A. (2019). Discrimination through optimization: How Facebook’s ad delivery can lead to skewed outcomes. Arxiv [Cs]. Retrieved from https://arxiv.org/pdf/1904.02095.pdf

Angelopoulos, C. J., Brody, A., Hins, A. W., Hugenholtz, P. B., Leerssen, P., Margoni, T., McGonagle, T., & van Hoboken, J. V. J. (2015). Study of fundamental rights limitations for online enforcement through self-regulation. Institute for Information Law (IViR). Retrieved from https://pure.uva.nl/ws/files/8763808/IVIR_Study_Online_enforcement_through_self_regulation.pdf

Andringa, P. (2018). Interactive: See Political Ads Targeted to You on Facebook. NBC. Retrieved from http://www.nbcsandiego.com/news/tech/New-Data-Reveal-Wide-Range-Political-Actors-Facebook-469600273.html

Arnett v. Kennedy, 416 U.S. 134 (Supreme Court of the United States, 1974).

Parliament of the Netherlands. (2019). Motion for Complete Transparency in the Buyers of Political Advertisements on Facebook. Retrieved from https://www.parlementairemonitor.nl/9353000/1/j9vvij5epmj1ey0/vkvudd248rwa

Ausloos, J., & Dewitte, P. (2018). Shattering one-way mirrors – data subject access rights in practice. International Data Privacy Law, 8(1), 4–28. doi:10.1093/idpl/ipy001

Barocas, S. (2012). The Price of Precision: Voter Microtargeting and Its Potential Harms to the Democratic Process. Proceedings of the First Edition Workshop on Politics, Elections and Data, 31–36. doi:10.1145/2389661.2389671

Benkler, Y., Faris, R., & Roberts, H. (2018). Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics. Oxford: Oxford University Press.

Bodó, B., Helberger, N., & Vreese, C. H. de. (2017). Political micro-targeting: a Manchurian candidate or just a dark horse? Towards the next generation of political micro-targeting research. Internet Policy Review, 6(4). doi: 10.14763/2017.4.776

Boerman, S. C., Kruikemeier, S., & Zuiderveen Borgesius, F. J. (2017). Online Behavioral Advertising: A Literature Review and Research Agenda. Journal of Advertising, 46(3), 363–376. doi:10.1080/00913367.2017.1339368

Borgesius, F.J., Moller, J., Kruikemeier, S., Ó Fathaigh, R., Irion, K., Dobber, T., Bodo, B., & de Vreese, C. (2018). Online Political Microtargeting: Promises and Threats for Democracy. Utrecht Law Review, 14(1). doi:10.18352/ulr.420

Carbajal, A., Kint, J., Mills Wade, A., Brooks, L. T., Chavern, D., McKenzie, A. B., & Golden, M. (2018). Open Letter to Mark Zuckerberg on Alternative Solutions for Politics Tagging. Retrieved from https://www.newsmediaalliance.org/wp-content/uploads/2018/06/vR_Alternative-Facebook-Politics-Tagging-Solutions-FINAL.pdf

Cardoso, T. (2019, March 4). Google to ban political ads ahead of federal election, citing new transparency rules. The Globe and Mail. Retrieved from https://www.theglobeandmail.com/politics/article-google-to-ban-political-ads-ahead-of-federal-election-citing-new/

Chavern, D. (2018, May 18). Open Letter to Mr. Zuckerberg. News Media Alliance. Retrieved from http://www.newsmediaalliance.org/wp-content/uploads/2018/05/FB-Political-Ads-Letter-FINAL.pdf

Chester, J., & Montgomery, K. C. (2017). The role of digital marketing in political campaigns. Internet Policy Review, 6(4). doi:10.14763/2017.4.773

Collins, B. (2016, April 21). Hillary PAC Spends $1 Million to ‘Correct’ Commenters on Reddit and Facebook. Retrieved from https://www.thedailybeast.com/articles/2016/04/21/hillary-pac-spends-1-million-to-correct-commenters-on-reddit-and-facebook

House of Commons Select Committee on Digital, Culture, Media and Sport. (2019). Disinformation and ‘fake news’: Final Report. Retrieved from https://publications.parliament.uk/pa/cm201719/cmselect/cmcumeds/1791/179103.htm#_idTextAnchor000

Edelman, B. (2014). Pitfalls and Fraud In Online Advertising Metrics: What Makes Advertisers Vulnerable to Cheaters, And How They Can Protect Themselves. Journal of Advertising Research, 54(2), 127–132. doi:10.2501/JAR-54-2-127-132

Edelson, L., Sakhuja, S., Dey, R., & McCoy, D. (2019). An Analysis of United States Online Political Advertising Transparency. ArXiv [Cs]. Retrieved from http://arxiv.org/abs/1902.04385

European Commission. (2018, September 26). Code of Practice on Disinformation. Retrieved from https://ec.europa.eu/digital-single-market/en/news/code-practice-disinformation

European Commission. (2019). Third monthly intermediate results of the EU Code of Practice against disinformation. Retrieved from https://ec.europa.eu/digital-single-market/en/news/third-monthly-intermediate-results-eu-code-practice-against-disinformation

Fanta, A. & Rudl, T. (2019, July 17). Leaked document: EU Commission mulls new law to regulate online platforms. Netzpolitik.org. Retrieved from: https://netzpolitik.org/2019/leaked-document-eu-commission-mulls-new-law-to-regulate-online-platforms/

Facebook. (2019a). Issues of national importance. Retrieved from https://www.facebook.com/business/help/214754279118974

Facebook. (2019b). Ads about social issues, elections or politics. Retrieved from https://www.facebook.com/business/help/208949576550051

Frier, S. (2018, July 2). Facebook’s Political Rule Blocks Ads for Bush’s Beans, Singers Named Clinton. Bloomberg. Retrieved from https://www.bloomberg.com/news/articles/2018-07-02/facebook-s-algorithm-blocks-ads-for-bush-s-beans-singers-named-clinton

Fulgoni, G. M. (2016). Fraud in Digital Advertising: A Multibillion-Dollar Black Hole: How Marketers Can Minimize Losses Caused by Bogus Web Traffic. Journal of Advertising Research, 56(2), 122. doi:10.2501/JAR-2016-024

Foucault, M. (1977). Discipline and Punish The Birth of the Prison. New York: Pantheon Books.

Ghosh, A., Venkatadri, G., & Mislove, A. (2019). Analyzing Political Advertisers’ Use of Facebook’s Targeting Features. Retrieved from https://www.ieee-security.org/TC/SPW2019/ConPro/papers/ghosh-conpro19.pdf

Goldman, R. (2017). Update on Our Advertising Transparency and Authenticity Efforts. Facebook Newsroom. Retrieved from https://newsroom.fb.com/news/2017/10/update-on-our-advertising-transparency-and-authenticity-efforts/

Google (2019a). Implementation Report for EU Code of Practice on Disinformation. Retrieved from https://ec.europa.eu/information_society/newsroom/image/document/2019-5/google_-_ec_action_plan_reporting_CF162236-E8FB-725E-C0A3D2D6CCFE678A_56994.pdf

Google (2019b). Verification for election advertising in the European Union. Retrieved from https://support.google.com/adspolicy/answer/9211218

Guha, S., Cheng, B., & Francis, P. (2010). Challenges in measuring online advertising systems. Proceedings of the 10th Annual Conference on Internet Measurement - IMC ’10, 81. doi:10.1145/1879141.1879152

Hansen, H. K., Christensen, L. T., & Flyverbom, M. (2015). Logics of transparency in late modernity: Paradoxes, mediation and governance. European Journal of Social Theory, 18(2), 117–131. doi:10.1177/1368431014555254

Hounsel, A., Matias, J. N., Werdmuller, B., Griffey, J., Hopkins, M., Peterson, C., … Feamster, N. (2019). Estimating Publication Rates of Non-Election Ads by Facebook and Google. Retrieved from https://github.com/citp/mistaken-ad-enforcement/blob/master/estimating-publication-rates-of-non-election-ads.pdf

Howard, P. N., Ganesh, B., Liotsiou, D., Kelly, J., & François, C. (2018). The IRA, Social Media and Political Polarization in the United States, 2012–2018 [Working Paper 2018.2]. Oxford: Project on Computational Propaganda. Retrieved from https://comprop.oii.ox.ac.uk/research/ira-political-polarization/

Howard, P. (2019, March 27). A Way to Detect the Next Russian Misinformation Campaign. The New York Times. Retrieved from https://www.nytimes.com/2019/03/27/opinion/russia-elections-facebook.html?module=inline

Keller, D. & Leerssen, P. (in press). Facts and where to find them: Empirical research on internet platforms and content moderation. In N. Persily & J. Tucker (eds), Social Media and Democracy: The State of the Field. Cambridge: Cambridge University Press.

Klobuchar, A., Warner, R., & McCain, J. (2017, October 19). The Honest Ads Act. Retrieved from https://www.congress.gov/bill/115th-congress/senate-bill/1989/text

Kuczerawy, A. (2019, in press). Fighting online disinformation: did the EU Code of Practice forget about freedom of expression? In E. Kużelewska, G. Terzis, D. Trottier, & D. Kloza (Eds.), Disinformation and Digital Media as a Challenge for Democracy. Cambridge: Intersentia.

Lomas, N. (2018, July 26). Facebook finally hands over leave campaign Brexit ads. Techcrunch. Retrieved from: https://techcrunch.com/2018/07/26/facebook-finally-hands-over-leave-campaign-brexit-ads/

Lapowsky, I. (2018). Obscure Concealed-Carry Group Spent Millions on Facebook Political Ads. WIRED. Retrieved from https://www.wired.com/story/facebook-ads-political-concealed-online/

Leathern, R. (2019). Updates to our ad transparency and authorisation efforts. Retrieved from: https://www.facebook.com/facebookmedia/blog/updates-to-our-ads-transparency-and-authorisation-efforts

Leiser, M. (2016). AstroTurfing, ‘CyberTurfing’ and other online persuasion campaigns. European Journal of Law and Technology, 7(1). Retrieved from http://ejlt.org/article/view/501

Livingstone, S. (2018). Tackling the Information Crisis: A Policy Framework for Media System Resilience [Report]. London: LSE Commission on Truth, Trust and Technology. Retrieved from http://www.lse.ac.uk/media-and-communications/assets/documents/research/T3-Report-Tackling-the-Information-Crisis-v6.pdf

Lyon, D. (2006). Theorizing Surveillance: The Panopticon and Beyond. Devon: Willan Publishing.

Macleod, A. (2019). Fake News, Russian Bots and Putin’s Puppets. In A. MacLeod (Ed.), Propaganda in the Information Age: Still Manufacturing Consent. London: Routledge.

Matias, J. N., Hounsel, A., & Hopkins, M. (2018, November 2). We Tested Facebook’s Ad Screeners and Some Were Too Strict. The Atlantic. Retrieved from: https://www.theatlantic.com/technology/archive/2018/11/do-big-social-media-platforms-have-effective-ad-policies/574609/

Merrick, R. (2019, December 25). Brexit: Leave ‘very likely’ won EU referendum due to illegal overspending, says Oxford professor’s evidence to High Court. The Independent. Retrieved from: https://www.independent.co.uk/news/uk/politics/vote-leave-referendum-overspending-high-court-brexit-legal-challenge-void-oxford-professor-a8668771.html

Merrill, J. B. (2018). How Big Oil Dodges Facebook’s New Ad Transparency Rules. Retrieved 22 April 2019, from ProPublica website: https://www.propublica.org/article/how-big-oil-dodges-facebooks-new-ad-transparency-rules

Merrill, J. B., & Tobin, A. (2019, January 28). Facebook Moves to Block Ad Transparency Tools — Including Ours. ProPublica. Retrieved from https://www.propublica.org/article/facebook-blocks-ad-transparency-tools

Montellaro, Z. (2019). House Democrats forge ahead on electoral reform bill. POLITICO. Retrieved from https://politi.co/2GO4eJ8

Mozilla (2019, March 27). Facebook and Google: This is What an Effective Ad Archive API Looks Like. The Mozilla Blog. Retrieved from: https://blog.mozilla.org/blog/2019/03/27/facebook-and-google-this-is-what-an-effective-ad-archive-api-looks-like

Mulgan, R. (2000). Comparing Accountability in the Public and Private Sectors. Australian Journal of Public Administration, 59(1), 87–97. doi:10.1111/1467-8500.00142

Ohm, P. (2010). Broken Promises of Privacy: Responding To The Surprising Failure of Anonymization. UCLA Law Review, 57, 1701–1777. Retrieved from https://www.uclalawreview.org/pdf/57-6-3.pdf

Netherlands Ministry of the Interior. (2019). Response to the Motion for Complete Transparency in the Buyers of Political Advertisements on Facebook. Retrieved from: https://www.tweedekamer.nl/kamerstukken/detail?id=2019Z03283&did=2019D07045

O’Sullivan, D. (2018). What an anti-Ted Cruz meme page says about Facebook’s political ad policy. CNN. Retrieved from: https://www.cnn.com/2018/10/25/tech/facebook-ted-cruz-memes/index.html

Parsons, C. (2019). The (In)effectiveness of Voluntarily Produced Transparency Reports. Business & Society, 58(1), 103–131. doi:10.1177/0007650317717957

Pasquale, F. (2015). The Black Box Society: The Secret Algorithms That Control Money and Information. Cambridge: Harvard University Press. Retrieved from https://www.jstor.org/stable/j.ctt13x0hch

Republic of France. (2018). LoI n° 2018-1202 du 22 décembre 2018 relative à la lutte contre la manipulation de l’information. Retrieved from https://www.legifrance.gouv.fr/affichTexte.do?cidTexte=JORFTEXT000037847559&categorieLien=id

Republic of Ireland (2017), Online Advertising and Social Media (Transparency) Bill 2017. Retrieved from: https://data.oireachtas.ie/ie/oireachtas/bill/2017/150/eng/initiated/b15017d.pdf

Rieke, A., & Bogen, M. (2018). Leveling the Platform: Real Transparency for Paid Messages on Facebook. UpTurn Report. Retrieved from https://www.upturn.org/static/reports/2018/facebook-ads/files/Upturn-Facebook-Ads-2018-05-08.pdf

Rosenberg, M. (2019, July 25). Ad Tool Facebook Built to Fight Disinformation Doesn’t Work as Advertised. The New York Times. Retrieved from: https://www.nytimes.com/2019/07/25/technology/facebook-ad-library.html

Sanders, E. (2018, May 9). Washington Public Disclosure Commission Passes Emergency Rule Clarifying That Facebook and Google Must Turn Over Political Ad Data. The Stranger. Retrieved from https://www.thestranger.com/slog/2018/05/09/26158462/washington-public-disclosure-commission-passes-emergency-rule-clarifying-that-facebook-and-google-must-turn-over-political-ad-data

Sanders, E. (2018, October 16). Facebook Says It's Immune from Washington State Law. The Stranger. Retrieved from: https://www.thestranger.com/slog/2018/10/16/33926412/facebook-says-its-immune-from-washington-state-law

Shane, S. (2017, November 1). These are the Ads Russia Bought on Facebook in 2016. The New York Times. Retrieved from https://www.nytimes.com/2017/11/01/us/politics/russia-2016-election-facebook.html

Shukla, S. (2019, March 28). A Better Way to Learn About Ads on Facebook. Facebook Newsroom. Retrieved from https://newsroom.fb.com/news/2019/03/a-better-way-to-learn-about-ads/

Singer, N. (2018, August 16). ‘Weaponized Ad Technology’: Facebook’s Moneymaker Gets a Critical Eye. The New York Times. Retrieved from https://www.nytimes.com/2018/08/16/technology/facebook-microtargeting-advertising.html

Timmons, H., & Kozlowska, H. (2018, March 22). Facebook’s quiet battle to kill the first transparency law for online political ads. Quartz. Retrieved from: https://qz.com/1235363/mark-zuckerberg-and-facebooks-battle-to-kill-the-honest-ads-act/

Turton, W. (2018, October 30). We posed as 100 senators to run ads on Facebook. Facebook approved all of them. VICE News. Retrieved from: https://news.vice.com/en_ca/article/xw9n3q/we-posed-as-100-senators-to-run-ads-on-facebook-facebook-approved-all-of-them

Twitter. (2019a). Implementation Report for EU Code of Practice on Disinformation. Retrieved from http://ec.europa.eu/information_society/newsroom/image/document/2019-5/twitter_progress_report_on_code_of_practice_on_disinformation_CF162219-992A-B56C-06126A9E7612E13D_56993.pdf

Twitter. (2019b). How to get certified as a political advertiser. Retrieved from https://business.twitter.com/en/help/ads-policies/restricted-content-policies/political-content/how-to-get-certified.html

van Dijck, J., Poell, T., & de Waal, M. (2018). The Platform Society: Public Values in a Connective World. Oxford: Oxford University Press.

Van Til, G. (2019). Zelfregulering door online platforms: een waar wondermiddel tegen online desinformatie? [Self-regulation by online platforms: a true panacea against online disinformation?]. Mediaforum, 1(13). Retrieved from https://www.ivir.nl/publicaties/download/Mediaforum_2019_1_vanTil.pdf

Vandor, M. (2018). Indexing news Pages on Facebook for the Ad Archive. Facebook Media. Retrieved from: https://www.facebook.com/facebookmedia/blog/indexing-news-pages-on-facebook-for-the-ad-archive

Venturini, T., & Rogers, R. (2019). “API-Based Research” or How can Digital Sociology and Journalism Studies Learn from the Facebook and Cambridge Analytica Data Breach. Digital Journalism, 7(4), 532–540. doi: 10.1080/21670811.2019.1591927

Vlassenroot, E., Chambers, S., Di Pretoro, E., Geeraert, F., Haesendonck, G., Michel, A., & Mechant, P. (2019). Web archives as a data resource for digital scholars. International Journal of Digital Humanities, 1(1), 85–111. doi:10.1007/s42803-019-00007-7

Wagner, B. (2018). Free Expression?: Dominant information intermediaries as arbiters of internet speech. In M. Moore & D. Tambini (Eds.), Digital Dominance. Oxford: Oxford University Press.

Warner, R. (2017). The Honest Ads Act (primer). Retrieved from https://www.warner.senate.gov/public/index.cfm/the-honest-ads-act

Waterson, J. (2019, January 14). Obscure pro-Brexit group spends tens of thousands on Facebook ads. The Guardian. Retrieved from https://www.theguardian.com/politics/2019/jan/14/obscure-pro-brexit-group-britains-future-spends-tens-of-thousands-on-facebook-ads

Waterson, J. (2019, April 3). Facebook Brexit ads secretly run by staff of Lynton Crosby firm. The Guardian. Retrieved from: https://www.theguardian.com/politics/2019/apr/03/grassroots-facebook-brexit-ads-secretly-run-by-staff-of-lynton-crosby-firm

Zuckerberg, M. (2019, March 30). The Internet needs new rules. Let’s start in these four areas. The Washington Post. Retrieved from https://www.washingtonpost.com/opinions/mark-zuckerberg-the-internet-needs-new-rules-lets-start-in-these-four-areas/2019/03/29/9e6f0504-521a-11e9-a3f7-78b7525a8d5f_story.html

Footnotes

1. E.g. Mozilla, 2019; Matias, Hounsel, & Hopkins, 2019; Merrill, 2018; Rieke & Bogen, 2018; Edelson et al., 2019; Andringa, 2018; Lapowsky, 2018; O’Sullivan, 2018; Waterson, 2019; Albright, 2018; Howard, 2019. See Section three for further discussion.

2. Parallel to the more general distinction between the governance of platforms and the governance by platforms (Gillespie, 2018).

3. The Commission describes the Code as a ‘self-regulatory’ instrument. However, given the Commission’s involvement in its development and oversight, we consider ‘co-regulatory’ a more apt description (Kuczerawy, 2019; more generally see Angelopoulos et al., 2015).

4. Notice and takedown procedures for unlawful content are required under EU law. In the US, notice and takedown procedures are only required for copyright and trademark claims, and the majority of takedowns occur on a strictly voluntary basis. In practice, much of the content removed under these regimes is assessed on the basis of platforms’ voluntary standards (Keller & Leerssen, 2019).


Algorithmic governance


This article belongs to Concepts of the digital society, a special section of Internet Policy Review guest-edited by Christian Katzenbach and Thomas Christian Bächle.

1. Introduction

The concept of algorithmic governance has emerged over the last decade, but takes up an idea that has been present for much longer: that digital technologies structure the social in particular ways. Engaging with the concept of algorithmic governance is complex, as many research fields are interested in the phenomenon, using different terms and having different foci. Inquiring into what constitutes algorithmic governance makes an important contribution to contemporary social theory by interrogating the role of algorithms and their ordering effect. We define algorithms as computer-based epistemic procedures which are particularly complex – although what counts as complex depends on the context. Algorithms shape procedures with their inherent mathematical logics and statistical practices. As a result, the discourse around algorithmic governance often overlaps and intersects with debates about datafication (cf. Mejias & Couldry, 2019 as part of this special section) and artificial intelligence (AI). Yet, algorithms sometimes also operate on ‘small data’ and use calculus-based procedures that do not learn and that are not adaptive.

While governance is a contested term, we define its core as coordination between actors based on rules. Unlike regulation, governance is not necessarily intentional and goal-directed (Black, 2001); it also includes unintentional coordination (Hofmann, Katzenbach, & Gollatz, 2016). Yet, governance excludes all forms of social ordering which are purely occasional and do not rely on some sort of rule; governance implies a minimum degree of stability, which is necessary for actors to develop expectations, which are in turn a precondition for coordination (Hobbes, 1909). We choose the term algorithmic governance instead of algorithmic regulation because governance allows us to account for the multiplicity of social ordering with regard to actors, mechanisms, structures, degrees of institutionalisation and distribution of authority. It deliberately embraces social ordering that is analytically and structurally decentralised and not state-centred. Thus, algorithmic governance better reflects the ambition of this article to widely scrutinise the ways in which algorithms create social order. In that sense, we focus on governance by algorithms instead of the governance of algorithms (Musiani, 2013; Just & Latzer, 2017). In sum, algorithmic governance is a form of social ordering that relies on coordination between actors, is based on rules and incorporates particularly complex computer-based epistemic procedures.

The relevance of dealing with algorithmic governance becomes evident with regard to competing narratives of what changes in governance when it makes use of algorithms: one narrative, for example, is that governance becomes more powerful, intrusive and pervasive. A different narrative stresses that governance becomes more inclusive, responsive, and allows for more social diversity, as we will highlight in the following sections.

If considered broadly, the roots of this concept can be traced back to the history and sociology of science, technology and society. Technology has always both reflected and reorganised the social (Bijker & Law, 1992; Latour, 2005). From Socrates’ concerns with writing and literacy (Ong, 1982) via cybernetics’ radically interdisciplinary connection between technical, biological and social systems and their control (Wiener, 1948) and Jacques Ellul’s bureaucratic dystopia of a 'technological society' (1964) to Langdon Winner’s widely cited, yet contested 'politics of artefacts' (1980) – the idea that technology and artefacts somehow govern society and social interactions is a recurring theme. The more direct predecessor of algorithmic governance is Lawrence Lessig’s famous catchphrase 'code is law'. Here, software code or, more generally, technical architectures are seen as one of four factors regulating social behaviour (alongside law, the market and social norms). Scholars have also conceptualised the institutional character of software and algorithms (Katzenbach, 2017, 2012; Napoli, 2013; Orwat et al., 2010). While Rouvroy and Berns used the term 'gouvernance algorithmique' in 2009, the first to conceptualise the term 'algorithmic governance' were Müller-Birn, Dobusch and Herbsleb (2013), presenting it as a coordination mechanism opposed to 'social governance'. 1 The concept of ‘algorithmic regulation' was introduced by US publisher Tim O’Reilly (2013), highlighting the efficiency of automatically governed spaces – but overlooking the depoliticisation of highly contested issues that comes with delegating them to technological solutions (Morozov, 2014). In contrast to the implicit technological determinism of these accounts, the interdisciplinary field of critical software studies has complicated – in the best sense – the intricate mutual dependencies of software and algorithms on the one hand, and social interactions and structures on the other (MacKenzie, 2006; Fuller, 2008; Berry, 2011; Kitchin & Dodge, 2011). This article sets out to provide a primer on the concept of algorithmic governance, including an overview of dominant perspectives and areas of interest (section 2), a presentation of recurrent controversies in this space (section 3), an analytical delineation of different types of algorithmic governance (section 4), and a short discussion of predictive policing and automated content moderation as illustrative case studies (section 5). We seek to steer clear of the deterministic impetus of the trajectory towards ever more automation, while taking seriously the turn to increasingly manage social spaces and interaction with algorithmic systems.

2. Algorithmic governance: perspectives and objects of inquiry

The notion of algorithmic governance is addressed and discussed in different contexts and disciplines. They share similar understandings about the importance of algorithms for social ordering, but choose different objects of inquiry. The selection of literature presented here focuses on research in science and technology studies (STS), sociology, political science, and communication and media studies, but also includes research from other relevant disciplines interested in algorithmic governance, such as computer science, legal studies, economics, and philosophy.

Various closely related and overlapping research areas are interested in how algorithms contribute to re-organising and shifting social interactions and structures. In contrast to public debate, however, these scholars reject the notion of algorithms as independent, external forces that single-handedly rule our world. They complicate this techno-determinist picture by asserting the high relevance of algorithms (Gillespie, 2014), while highlighting the economic, cultural, and political contexts that both shape the design of algorithms and accommodate their operation. Thus, empirical studies in this field typically focus on the social interactions under study and interrogate the role of algorithms and their ordering effect in these specific contexts (Kitchin, 2016; Seaver, 2017; Ziewitz, 2016). They share an interest in how data sets, mathematical models and calculative procedures pave the way for a new quality of social quantification and classification. The notions of 'algorithmic regulation' (Yeung, 2018) and ‘algorithmic governance’ (Just & Latzer, 2016; König, 2019) emanate from the field of regulation and governance research, mostly composed of scholars from legal studies, political science, economics, and sociology. The relevant studies have had the effect of organising and stimulating research about algorithmic governance with a shared understanding of regulation as “intentional attempts to manage risk or alter behavior in order to achieve some pre-specified goal” (Yeung, 2018). This focus on goal-directed, intentional interventions sets the stage for inquiries that are explicitly interested in algorithms as a form of government purposefully employed to regulate social contexts and alter the behaviour of individuals, for example in the treatment of citizens or the management of workers. Other approaches also study non-intentional forms of social ordering through and with algorithms.

A slightly different approach puts the technical systems in the centre, not the social structures and relations. Relevant studies, particularly in computer science, aim to build and optimise algorithmic systems to solve specific social problems: detect contested content, deviant behaviour, and preferences or opinions – in short: they are building the very instruments that are often employed in algorithmic governance. The common goal in this approach is usually to detect patterns in data effectively, translating social context into computable processes (optimising detection). This research stream is seeking efficient, robust, fair and accountable ways to classify subjects and objects both into general categories (such as species) and into specific dimensions such as psychometric types, emotional states, creditworthiness, or political preferences (Schmidt and Wiegand, 2017; Binns et al., 2017). Producers and providers of algorithmically-fuelled services not only optimise the detection of patterns in existing data sets, but they often – in turn – also aim to optimise their systems to most effectively nudge user behaviour in a way that seeks to maximise organisational benefits (optimising behaviour). By systematically testing different versions of user screens or other features (A/B-testing) and applying user and behavioural analytics, companies continually work to direct user interactions more effectively towards more engagement and less friction (Guerses et al., 2018). It is, however, important to note that there is no clear line between the research that develops and optimises algorithmic governance and the research analysing its societal implications; they overlap and there are many studies that strive towards both aims. A case in point are studies about algorithmic bias, fairness and accountability that both conceptualise and test metrics (e.g., Waseem & Hovy, 2016). Another important area of research, at once applied and critical, comprises studies about ‘automation bias’, ‘machine bias’ or ‘over-reliance’, which examine under which conditions human agents can take a truly autonomous decision (Lee & See, 2004; Parasuraman & Manzey, 2010).
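The logic of ‘optimising behaviour’ through A/B-testing can be illustrated with a deliberately simplified sketch. The snippet below is purely illustrative and assumes hypothetical user IDs, a binary ‘engaged’ signal, and a two-variant experiment; real systems use far richer metrics, statistical testing and continuous rollout machinery.

```python
# Minimal, illustrative A/B-testing sketch: users are deterministically assigned
# to one of two interface variants, and the engagement rate per variant is compared.
# All names, inputs and thresholds are invented for illustration.
import hashlib
from collections import defaultdict

def assign_variant(user_id: str, variants=("A", "B")) -> str:
    """Deterministically assign a user to a variant by hashing their ID."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

def engagement_rate(events):
    """events: list of (user_id, did_engage) -> share of engaged sessions per variant."""
    sessions, engaged = defaultdict(int), defaultdict(int)
    for user_id, did_engage in events:
        variant = assign_variant(user_id)
        sessions[variant] += 1
        engaged[variant] += int(did_engage)
    return {v: engaged[v] / sessions[v] for v in sessions}

# Example: the variant with the higher engagement rate would be rolled out more widely.
events = [("u1", True), ("u2", False), ("u3", True), ("u4", True), ("u5", False)]
print(engagement_rate(events))
```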

One important domain of inquiry especially relevant to STS, communication and media studies is digital communication and social media. Scholars have been interested for more than a decade in how search engines and social media platforms organise and structure information that is available online and how this affects subjectivation (Couldry & Langer, 2005). Platforms prioritise certain types of content (typically based on metrics of 'engagement') – thus constituting a new dominant mode of ascribing relevance in society, complementing traditional journalistic routines. Platforms also deploy algorithms to regulate content by blocking or filtering speech, videos and photos that are deemed unacceptable or unlawful (Gillespie, 2018; Gorwa, 2019). With increasing scale and growing political pressure, platforms readily turn to technical solutions to address difficult platform governance puzzles such as hate speech, misinformation and copyright (Gorwa, Binns, & Katzenbach, 2019). Other areas under study that make use of automated content detection are plagiarism checks in teaching and academic writing (Introna, 2016) and sentiment analysis for commercial and political marketing (Tactical Tech, 2019).

Public sector service provision, citizen management and surveillance constitute another key area of interest for algorithmic governance scholars. Political scientists and legal scholars in particular investigate automated procedures for state service delivery and administrative decision-making. The ambition here is that algorithms can increase the efficiency and efficacy of state services, for example by rationalising bureaucratic decision-making, by targeting information and interventions to precise profiles or by choosing the best available policy options (OECD, 2015). Yet, these promises are heavily contested. Scholars have shown that the deployment of algorithmic systems in the public sector has produced many non-intended and non-disclosed consequences (Veale & Brass, 2019; Dencik, Hintz, Redden, & Warne, 2018). Applying algorithmic tools in government often relies on new forms of population surveillance and classification by state and corporate actors (Neyland & Möllers, 2017; Lupton, 2016; Bennett, 2017). The grounds for many projects of digital service provision and algorithm-based policy choice are systems of rating, scoring and predicting citizen behaviour, preferences and opinions. These are used for the allocation of social benefits, to combat tax evasion and fraud, and to inform judicial decision-making, policing and terrorism prevention, border control, and migration management.

Rating and scoring are not only applied to citizens, but also to consumers, as valuation and quantification studies have pointed out with regard to credit scores (Avery, Brevoort, & Canner, 2012; Brevoort, Grimm, & Kambara, 2015; Fourcade & Healy, 2016, 2017). These studies point out how algorithm-based valuation practices shape markets and create stratification mechanisms that can superimpose social class and reconfigure power relations – often to the detriment of the poor and ‘underscored’ (Fourcade & Healy, 2017; Zarsky, 2014).

Governance through algorithms is also an important matter of concern for scholars studying the digital transformation of work, such as the sociology of labour and labour economics. The objects of study here are automated governance on labour platforms and the management of labour within companies, for example through performance management and rating systems (Lee, Poltrock, Barkhuus, Borges, & Kellogg, 2017; Rosenblat, 2018). This research field is characterised by empirical case studies that examine the implications of algorithmic management and workplace surveillance for workers’ income, autonomy, well-being, rights and social security, and for social inequality and welfare states (Wood, Graham, Lehdonvirta, & Hjorth, 2019). Related objects of inquiry are algorithmic systems for augmented reality and speech recognition, and assistance systems for task execution, training and quality control (Gerber & Krzywdzinski, 2019). Important economic sectors under study are logistics, industrial production, delivery and services. Other relevant areas of research focus on the algorithmic management of transportation and traffic, energy, waste and water, for example in ‘smart city’ projects.

Some scholars approach algorithmic governance on a meta-level as a form of decentralised coordination and participation. They stress its power to process a high number of inputs and thus to tackle a high degree of complexity. As a consequence, they see algorithmic governance as a mode of coordination that offers new opportunities for participation, social inclusiveness, diversity and democratic responsiveness (König, 2019; Schrape, 2019). There is abundant research about the possibilities that software can offer to improve political participation through online participation tools (Boulianne, 2015; Boulianne & Theocharis, 2018), such as electronic elections and petitions, social media communication and legislative crowdsourcing. In addition, countless algorithmic tools are being developed with the explicit aim to ‘hear more voices’ and to improve the relationship between users and platforms or citizens and political elites. However, algorithmic governance through participatory tools often remains hierarchical, with unequal power distribution (Kelty, 2017).

3. Controversies and concerns

Across these different perspectives and sectors, there are recurring controversies and concerns that are regularly raised whenever the phenomenon of algorithmic governance is discussed. Looking at these controversies more closely, we can often detect a dialectic movement between positive and negative connotations.

Datafication and surveillance

The literature about algorithmic governance shows an ample consensus that big data, algorithms and artificial intelligence change societies’ perspectives on populations and individuals. This is due to the ‘data deluge’, an increase in the amount and variety of data collected by digital devices, online trackers and the surveillance of spaces (Beer, 2019). ‘Datafication’ (cf. Mejias & Couldry, 2019 as part of this special section) also benefits from increasingly powerful infrastructures which enable more and faster data analysis, and from societal norms that favour quantification, classification and surveillance (Rieder & Simon, 2016). Research about algorithmic governance has nevertheless always been concerned with the many risks of datafication and surveillance. Surveilling entire populations and creating detailed profiles of individuals on the basis of their ‘data doubles’ create ample opportunities for social sorting, discrimination, state oppression and the manipulation of consumers and citizens (Lyon, 2014; Gandy, 2010). Unfettered surveillance poses a danger to many civil and human rights, such as freedom of speech, freedom of assembly, and privacy, to name just a few.

Agency and autonomy

The ubiquity of algorithms as governance tools has created concerns about the effects on human agency and autonomy (Hildebrandt, 2015) – a central concept of the Enlightenment and a key characteristic of the modern individual. While earlier approaches conceived of algorithms as either augmenting or reducing human agency, it has become clear that the interaction between human and machine agents is complex and needs more differentiation. While typologies and debates typically construct a binary distinction between humans-in-the-loop vs. humans-out-of-the-loop, this dichotomy does not hold for in-depth analyses of the manifold realities of human-computer interaction (Gray & Suri, 2019). In addition, human agency cannot be assessed only with regard to machines, but also with regard to constraints posed by organisations and social norms (Caplan & boyd, 2018).

Transparency and opacity

The assumed opacity of algorithms and algorithmic governance is a strong and lasting theme in the debate, routinely coupled with a call for more transparency (Kitchin, 2016; Pasquale, 2015). However, more recent arguments point out that access to computer code should not become a fetish: absolute transparency is often neither possible nor desirable, and it is not the solution to most of the problems related to algorithmic governance, such as fairness, manipulation, civility, etc. (Ananny & Crawford, 2017; Mittelstadt, Allo, Taddeo, Wachter, & Floridi, 2016). In addition, the implementation of social norms into code not only creates opacity, but also unveils norms and processes that were previously hidden. Cases in point are the controversies around scoring systems for unemployment risk, as deployed in Austria and Poland (AlgorithmWatch and Bertelsmann Stiftung, 2019), and for creditworthiness (AlgorithmWatch, 2019). The public interest in algorithmic governance has motivated civil society actors and scholars to inquire into the composition and rationality of algorithmic scoring and to question the underlying social values. Given this development, the current turn to algorithmic governance might indeed even be conducive to more transparency, as software code, once disclosed, requires the articulation of underlying assumptions into explicit models.

De-politicisation and re-politicisation

In a similar logic, there is a vivid public debate about the de-politicising and re-politicising effects of algorithms. Algorithms have often been criticised as de-politicising due to their ‘aura of objectivity and truth’ (boyd & Crawford, 2012) and their promise to solve problems of social complexity by the sheer size of data and increased computing power (Kitchin, 2013; Morozov, 2013). However, and as a consequence, many studies have disputed the idea that algorithms can be objective and neutral. Social inequality, unfairness and discrimination translate into biased data sets and data-related practices. This new public suspicion about the societal implications of algorithms has motivated critics to similarly question the rationalities of political campaigning, social inequality in public service delivery, and the implications of corporate surveillance for civil rights. In that way, algorithmic governance has contributed to a re-politicisation of governance and decision-making in some areas. Yet, this might be a short-lived gain, since the installation of algorithmic governance as societal infrastructure will most certainly lead to its deep integration into our routines over time, eventually being taken for granted like almost all infrastructures once they are in place (Plantin et al., 2018; Gorwa, Binns, & Katzenbach, 2019).

Bias and fairness

Another key concern is that of algorithmic bias. Automated decision-making by algorithmic systems routinely favours people and collectives that are already privileged while discriminating against marginalised people (Noble, 2018). While this truly constitutes a major concern to tackle in the increasing automatisation of the social, it is not a new phenomenon – and the algorithm is not (the only one) to blame. Biased data sets and decision rules also create discrimination. This rather foregrounds the general observation that any technological and bureaucratic procedure materialises classifications such as gender, social class, geographic space and race. These do not originate in these systems; they merely reflect prevalent biases and prejudices, inequalities and power structures – and once in operation they routinely amplify the inscribed inequalities. The current politicisation of these issues can be considered an opportunity to think about how to bring more fairness into societies with automated systems in place (Barocas & Selbst, 2016; boyd & Barocas, 2017; Hacker, 2018).

4. From out-of-control to autonomy-friendly: evaluating types of algorithmic governance

As algorithmic systems expand into various social sectors, additional research fields will develop, merge and create sub-fields. At the same time, controversies will shift and shape future developments. This makes it hard or even impossible to synthesise the diversity of perspectives on algorithmic governance and its numerous areas of interest into one systematic typology. In any case, typologies are always contingent on the priorities and motives of their authors and their perception of the phenomenon. Yet, there is growing demand from policymakers around the world and the broader public to evaluate deployments of algorithmic governance systems and to guide future development. For good reasons: algorithmic governance, like other sociotechnical systems, is contingent on social, political, and economic forces and can take different shapes.

For these reasons, we present a typification that addresses the design and functionality of algorithmic systems and evaluates these against key normative criteria. Notwithstanding the dynamic character of the field, we choose the degree of automation and transparency as they stand out with regard to their normative implications for accountability and democracy, and thus will most likely remain key elements in future evaluations of different types of algorithmic governance. 2

Transparency matters as it constitutes a cornerstone of democracy and self-determination (Passig, 2017), yet it is particularly challenged in the face of the inherent complexity of algorithmic systems. Therefore, transparency is not only one of the major academic issues when it comes to algorithmic regulation, but also an important general matter of public controversy (Hansen & Flyverbom, 2015). Only (a certain degree of) transparency opens up decision-making systems and their inscribed social norms to scrutiny, deliberation and change. Transparency is therefore an important element of democratic legitimacy. It is, however, important to note that the assessment of a given case of algorithmic governance will differ between software developers, the public and supervisory bodies. As already mentioned (cf. section 3), algorithmic governance systems push informational boundaries in comparison to previous governance constellations: they demand formalisation, thus social norms and organisational interests need to be explicated and translated into code – thus potentially increasing the share of socially observable information. Yet, in practice, algorithmic governance often comes with an actual decrease in socially intelligible and accessible information due to cognitive boundaries (intelligibility of machine learning) and systemic barriers (non-access to algorithms due to trade secrecy, security concerns and privacy protection) (Ananny & Crawford, 2017; Pasquale, 2015; Wachter, Mittelstadt, & Floridi, 2017).

The degree of automation matters greatly because the legitimacy of governance regimes relies on the responsibility and accountability of a human decision-maker in her role as a professional (a judge, a doctor, a journalist) and ethical subject. Focusing on the degree of automation also marks the choice to problematise the complex interaction within socio-technical systems: algorithmic systems can leave more or less autonomy to human decision-makers. Here, we reduce the gradual scale of involvement to the binary distinction between fully automated systems, where decisions are not checked by a human operator, and recommender systems, where human operators execute or approve the decisions ('human-in-the-loop') (Christin, 2017; Kroes & Verbeek, 2014; Yeung, 2018).

Figure 1: Types of algorithmic governance systems

The combination of both dimensions yields four ideal-types, in the Weberian sense, of algorithmic governance systems with different characteristics: 'autonomy-friendly systems' provide high transparency and leave decisions to humans; 'trust-based systems' operate with low transparency and human decision-makers; 'licensed systems' combine high transparency with automated execution; and finally, 'out-of-control systems' demonstrate low transparency and execute decisions in a fully automated way.
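The two-by-two logic of this typology can be made explicit in a minimal sketch. The encoding below is our own illustration of the mapping described in the text; the enum names and function are invented for this purpose only.

```python
# Illustrative encoding of the two-dimensional typology described above:
# (transparency, automation) -> ideal-type label. Labels follow the text;
# the enum values and function names are invented for illustration.
from enum import Enum

class Transparency(Enum):
    HIGH = "high"
    LOW = "low"

class Automation(Enum):
    FULL = "fully automated"
    HUMAN_IN_THE_LOOP = "human in the loop"

IDEAL_TYPES = {
    (Transparency.HIGH, Automation.HUMAN_IN_THE_LOOP): "autonomy-friendly system",
    (Transparency.LOW,  Automation.HUMAN_IN_THE_LOOP): "trust-based system",
    (Transparency.HIGH, Automation.FULL): "licensed system",
    (Transparency.LOW,  Automation.FULL): "out-of-control system",
}

def classify(transparency: Transparency, automation: Automation) -> str:
    """Return the ideal-type label for a given combination of dimensions."""
    return IDEAL_TYPES[(transparency, automation)]

# Example: a secretive system that executes decisions without human review.
print(classify(Transparency.LOW, Automation.FULL))  # "out-of-control system"
```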

5. Algorithmic governance in operation: predictive policing and automated content moderation

The four ideal-types can be found in the full range of sectors and domains that employ algorithmic governance systems today (for recent overviews see AlgorithmWatch and Bertelsmann Stiftung, 2019; Arora, 2019; Dencik, Hintz, Redden, & Warne, 2018; Tactical Tech, 2019). In order to illustrate algorithmic governance in the public and private sectors, and on platforms, we briefly present two prominent and contested cases: automated risk assessment for policing (‘predictive policing’) is among the most widely deployed forms of public algorithmic governance in industrialised countries, and automated content moderation on social media platforms is one of the various ways in which private platforms use algorithmic governance on a global scale. The cases show that algorithmic governance is not one thing but takes different forms in different jurisdictions and contexts, and that it is shaped by interests, power, and resistance. Algorithmic governance is multiple, contingent and contested.

Predictive policing

Police authorities employ algorithmic governance by combining and analysing various data sources in order to assess crime risk and prevent crime (e.g., burglary, car theft, or violent assault). This risk analysis addresses either individuals or geographic areas; some systems focus on perpetrators, others on potential victims. The results are predictions of risk that are mobilised to guide policing. Algorithmic governance can be directed towards the behaviour of citizens or of police officers. Typical actions are to assign increased police presence to geographic areas, to surveil potential perpetrators or to warn potential victims.
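To make the area-based variant of this mechanism concrete, the following is a deliberately simplified toy model: recent incidents are weighted more heavily than older ones, and the highest-scoring areas are flagged for additional patrols. It is purely illustrative and not the method of any actual predictive policing product; the area names, weighting scheme and cutoff are invented.

```python
# Toy model of area-based risk scoring for illustration only: incidents are
# weighted by recency (exponential decay) and the top-scoring areas are flagged
# for extra patrols. Area names, weights and the cutoff are invented.
from math import exp

def risk_scores(incidents, half_life_days: float = 14.0):
    """incidents: list of (area, days_ago) tuples -> {area: recency-weighted score}."""
    scores = {}
    for area, days_ago in incidents:
        weight = exp(-days_ago / half_life_days)  # newer incidents count more
        scores[area] = scores.get(area, 0.0) + weight
    return scores

def patrol_recommendations(incidents, top_k: int = 2):
    """Return the top_k areas with the highest risk scores."""
    ranked = sorted(risk_scores(incidents).items(), key=lambda kv: kv[1], reverse=True)
    return [area for area, _ in ranked[:top_k]]

incidents = [("district_1", 2), ("district_1", 5), ("district_2", 30), ("district_3", 1)]
print(patrol_recommendations(incidents))  # ['district_1', 'district_3']
```

In a recommender-system design, such a ranking would be presented to officers as information; in a fully automated design, it would directly drive the allocation of patrols.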

The degree of transparency needs to be assessed from two perspectives: with regard to the public and with regard to the organisation that uses predictive policing. In many cases, data collection, data analysis and governance measures are the shared responsibility of police agencies and private companies, often in complex constellations (Egbert, 2019). Some projects rely on strictly crime-related data; other projects make use of additional data, such as data about weather, traffic, networks, consumption and online behaviour. In most cases, the software and its basic rationalities are not public. The same is true for the results of the analysis and their interpretation. 3 There is no predictive policing system that makes data and code available to the public, thus most applications in this space are trust-based systems. In some cases, such as in the German state of North Rhine-Westphalia, the software has been developed by the police. It is not public, but it is an autonomy-friendly system from the police’s perspective. This relatively high degree of opacity is justified by the police with the argument that transparency would allow criminals to ‘game the system’ and render algorithmic governance ineffective. Opacity, however, hinders evaluations of the social effects of algorithmic governance in policing. Major public concerns are whether predictive policing reinforces illegitimate forms of discrimination or threatens social values, and whether it is effective and efficient (Ulbricht, 2018).

With regard to the degree of automation, it is noteworthy that in most cases of algorithmic governance for policing the software is still designed as a recommender system: human operators receive computer-generated information or a recommendation. It is their responsibility to make the final decision of whether to act and how. However, police officers have complained about the lack of discretion in deciding where to patrol (Ratcliffe, Taylor, & Fisher, 2019). Another concern is that police officers might not have the capacity to take an autonomous decision and to overrule the algorithmically generated recommendation (Brayne, 2017), effectively turning predictive policing into out-of-control or licensed systems of algorithmic governance. The large number of research and pilot projects in this space indicates that in the near future, the degree of automation in predictive policing and border control governance will increase considerably.

Automated content moderation on social media platforms

Another highly relevant and contested field of algorithmic governance in operation is the (partly) automated moderation and regulation of content on social media platforms. Two developments are driving the turn to AI and algorithms in this field (Gollatz, Beer, & Katzenbach, 2018): (a) The amount of communication and content circulating on these platforms is so massive that it is hard to imagine that human moderators could cope manually with all posts and other material, screening them for compliance with public law and platform rules. As platforms strive to find solutions that scale with their global reach, they have strong economic interests in finding technical solutions. This is (b) reinforced by the growing political pressure on platforms to tackle issues of hate speech, misinformation and copyright violation on their sites – with regulation partly moving towards immediate platform liability for illegal content (Helberger, Pierson, & Poell, 2018). Thus, platforms develop, test and increasingly put into operation automated systems that aim to identify hate speech, match uploaded content with copyrighted works and tag disinformation campaigns (Gillespie, 2018; Duarte, Llanso, & Loup, 2018).

With regard to transparency, platforms such as Facebook, YouTube and Twitter have long remained highly secretive about this process, the decision criteria, and the specific technologies and data in use. The increasing politicisation of content moderation, though, has pressured the companies to increase transparency in this space – with limited gains. Today, Facebook, for example, discloses the design of the general moderation process as well as the underlying decision criteria, but remains secretive about specifics of the process and detailed data on removals. 4 YouTube’s system for blocking or monetising copyrighted content, called ContentID, provides a publicly accessible database of registered works. The high-level criteria for blocking content are communicated, yet critics argue that the system massively over-blocks legitimate content and that YouTube remains too secretive and unresponsive about the appeals process, including the exact criteria for delineating legitimate and illegitimate usage of copyrighted content (Erickson & Kretschmer, 2018; Klonick, 2018). The Global Internet Forum to Counter Terrorism (GIFCT), a joint effort by Facebook, Google, Twitter and Microsoft to combat the spread of terrorist content online, hosts a shared but secretive database of known terrorist images, video, audio, and text.

With regard to automation, most systems in content moderation do not operate fully automatically but most often flag contested content for human review – despite industry claims about the efficiency of AI systems. For example, Facebook has technical hate speech classifiers in operation that apparently evaluate every uploaded post and flag items considered illegitimate for further human review. 5 In contrast, ContentID generally operates fully automatically, meaning that decisions are executed without routine human intervention: uploads that match registered content are either blocked, monetised by the rightsholder or tolerated, according to the presumed rightsholder’s provisions. In the case of the GIFCT, early press releases emphasised that “matching content will not be automatically removed” (Facebook Newsroom, 2016). However, the response of platforms to major incidents like the shooting in Christchurch, New Zealand, and to propaganda of major terrorist organisations such as ISIS and Al-Qaeda now seems to indicate that certain GIFCT matches are executed, and the content blocked, automatically, without human moderators in the loop (out-of-control systems) (Gorwa, Binns, & Katzenbach, 2019).
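The difference between fully automated execution and flag-for-review can be sketched in a few lines. The snippet below is an illustrative toy pipeline only: it does not represent ContentID, the GIFCT database or any platform’s actual systems, and all hashes, policies and thresholds are invented.

```python
# Illustrative sketch of the two moderation modes discussed above: exact-hash
# matching against a registry with per-item policies ("block", "monetise"),
# which executes automatically, versus a classifier score that only flags
# items for human review. All names, policies and thresholds are invented.
import hashlib

REGISTRY = {  # hash of known content -> policy chosen by the claimant
    hashlib.sha256(b"registered work").hexdigest(): "monetise",
    hashlib.sha256(b"known terrorist propaganda").hexdigest(): "block",
}

def moderate(upload: bytes, toxicity_score: float, review_threshold: float = 0.8):
    digest = hashlib.sha256(upload).hexdigest()
    if digest in REGISTRY:                  # fully automated execution
        return {"action": REGISTRY[digest], "human_review": False}
    if toxicity_score >= review_threshold:  # recommender mode: flag only
        return {"action": "flag", "human_review": True}
    return {"action": "allow", "human_review": False}

print(moderate(b"registered work", toxicity_score=0.1))   # executed automatically
print(moderate(b"some new post", toxicity_score=0.93))    # queued for human review
```

In the terms of the typology above, the first branch behaves like a fully automated (and, absent disclosure, out-of-control) system, while the second keeps a human in the loop.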

As these examples show, the binary classification of transparency and automation of a given system is not always easily drawn. Yet, until recently, most of these implementations of algorithmic governance could rightfully be considered out-of-control systems. The recent political and discursive pressure has certainly pushed the companies towards more transparency, although in our evaluation this still does not qualify them as autonomy-friendly or licensed systems, as they still lack meaningful transparency.

6. Conclusion

The concept of algorithmic governance encapsulates a wide range of sociotechnical practices that order and regulate the social in specific ways, ranging from predictive policing to the management of labour and content moderation. It is one benefit of the concept that it brings together these diverse sets of phenomena, discourses, and research fields, and thus contributes to the identification of key controversies and challenges of the emerging digital society. Bias and fairness, transparency and human agency are important issues that need to be addressed whenever algorithmic systems are deeply integrated into organisational processes, irrespective of the sector or specific application. Algorithmic governance has many faces: it is seen as ordering, regulation and behaviour modification, as a form of management, of optimisation and of participation. Depending on the research area, it is characterised by inscrutability, the inscription of values and interests, by efficiency and effectiveness, by power asymmetry, by social inclusiveness, new exclusions, competition, responsiveness, participation, co-creation and overload. For most observers, governance becomes more powerful, intrusive and pervasive with algorithmisation and datafication. A different narrative stresses that governance becomes more inclusive, responsive, and allows for more social diversity.

And indeed, algorithmic governance is multiple. It does not follow a purely functional, teleological path striving for ever more optimisation. It is rather contingent on its social, economic and political context. The illustrative case studies on predictive policing and content moderation show that algorithmic governance can take very different forms, and it changes constantly – sometimes optimised for business interests, sometimes pressured by regulation and public controversies. The ideal-types of algorithmic governance proposed here as a means of evaluation constitute one way of assessing these systems against normative standards. We chose transparency and the degree of automation as key criteria, resulting in a spectrum of implementations ranging from out-of-control systems to autonomy-friendly systems – other criteria for evaluation could be the types of input data or of decision models. In any case, these structured and integrated ways of thinking about algorithmic governance might help us in the future to assess on more solid grounds which forms of algorithmic governance are legitimate and appropriate for which purpose and under which conditions – and where we might not want any form of algorithmic governance at all.

References

AlgorithmWatch. (2019). OpenSCHUFA: The campaign is over, the problems remain - what we expect from SCHUFA and Minister Barley. Retrieved from https://openschufa.de/english/

AlgorithmWatch & Bertelsmann Stiftung. (2019). Automating Society: Taking Stock of Automated Decision-Making in the EU. Retrieved from https://algorithmwatch.org/wp-content/uploads/2019/02/Automating_Society_Report_2019.pdf

Ananny, M., & Crawford, K. (2017). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society, 33(4), 973–989. doi:10.1177/1461444816676645

Arora, P. (2019). Benign dataveillance? Examining novel data-driven governance systems in India and China. First Monday, 24(4). doi:10.5210/fm.v24i4.9840

Avery, R. B., Brevoort, K. P., & Canner, G. (2012). Does Credit Scoring Produce a Disparate Impact? Real Estate Economics, 40(3), S65-S114. doi:10.1111/j.1540-6229.2012.00348.x

Barocas, S., & Selbst, A. D. (2016). Big Data's Disparate Impact. California Law Review, 104(3), 671–732. doi:10.15779/Z38BG31

Beer, D. (2019). The Data Gaze: Capitalism, Power and Perception. SAGE Publications.

Bennett, C. J. (2017). Voter databases, micro-targeting, and data protection law: Can political parties campaign in Europe as they do in North America? International Data Privacy Law, 6(4), 261–275. doi:10.1093/idpl/ipw021

Berry, D. M. (2011). The philosophy of software: code and mediation in the digital age. Basingstoke, Hampshire; New York: Palgrave Macmillan. doi:10.1057/9780230306479

Binns, R., Veale, M., Van Kleek, M., & Shadbolt, N. (2017). Like trainer, like bot? Inheritance of bias in algorithmic content moderation. In G. L. Ciampaglia, A. Mashhadi, & T. Yasseri (Eds.), Social Informatics (pp. 405–415). doi:10.1007/978-3-319-67256-4_32

Bijker, W. E., & Law, J. (Eds.). (1992). Shaping Technology/Building Society: Studies in Sociotechnical Change. Cambridge, MA: The MIT Press.

Black, J. (2001). Decentring Regulation: Understanding the Role of Regulation and Self-Regulation in a ‘Post-Regulatory’ World. Current Legal Problems, 54(1), 103–146. doi:10.1093/clp/54.1.103

Boulianne, S. (2015). Social media use and participation: A meta-analysis of current research. Information, Communication & Society, 18(5), 524–538. doi:10.1080/1369118X.2015.1008542

Boulianne, S., & Theocharis, Y. (2018). Young People, Digital Media, and Engagement: A Meta-Analysis of Research. Social Science Computer Review, 19(1). doi:10.1177/0894439318814190

boyd, d., & Crawford, K. (2012). Critical questions for big data. Information, Communication & Society, 15(5), 662–667. doi:10.1080/1369118X.2012.678878

boyd, d., & Barocas, S. (2017). Engaging the Ethics of Data Science in Practice. Communications of the ACM, 60(11), 23–25. doi:10.1145/3144172

Brayne, S. (2017). Big Data Surveillance: The Case of Policing. American Sociological Review, 82(5), 977–1008. doi:10.1177/0003122417725865

Brevoort, K. P., Grimm, P., & Kambara, M. (2015). Data Point: Credit Invisibles [Research Report]. Washington DC: Consumer Financial Protection Bureau. Retrieved from https://www.consumerfinance.gov/data-research/research-reports/data-point-credit-invisibles/

Caplan, R., & boyd, d. (2018). Isomorphism through algorithms: Institutional dependencies in the case of Facebook. Big Data & Society, 5(1). doi:10.1177/2053951718757253

Christin, A. (2017). Algorithms in practice: Comparing web journalism and criminal justice. Big Data & Society, 4(2). doi:10.1177/2053951717718855

Couldry, N., & Langer, A. I. (2005). Media Consumption and Public Connection: Toward a Typology of the Dispersed Citizen. The Communication Review, 8(2). 237–257. doi:10.1080/10714420590953325

Dencik, L., Hintz, A., Redden, J., & Warne, H. (2018). Data Scores as Governance: Investigating uses of citizen scoring in public services [Project Report]. Cardiff University. Retrieved from Open Society Foundations website: http://orca.cf.ac.uk/117517/

DeVito, M. A. (2017). From Editors to Algorithms. Digital Journalism, 5(6), 753–773. doi:10.1080/21670811.2016.1178592

Duarte, N., Llanso, E., & Loup, A. (2018). Mixed Messages? The Limits of Automated Social Media Content Analysis [Report]. Washington, DC: Center for Democracy & Technology.

Egbert, S. (2019). Predictive Policing and the Platformization of Police Work. Surveillance & Society, 17(1/2), 83–88. doi:10.24908/ss.v17i1/2.12920

Ellul, J. (1964). The technological society. New York: Alfred A. Knopf.

Erickson, K., & Kretschmer, M. (2018). “This Video is Unavailable”: Analyzing Copyright Takedown of User-Generated Content on YouTube. JIPITEC, 9(1). Retrieved from http://www.jipitec.eu/issues/jipitec-9-1-2018/4680

Eyert, F., Irgmaier, F., & Ulbricht, L. (2018). Algorithmic social ordering: Towards a conceptual framework. In G. Getzinger (Ed.), Critical Issues in Science, Technology and Society Studies (pp. 48–57). Retrieved from https://conference.aau.at/event/137/page/6

Facebook. (2016). Partnering to Help Curb Spread of Online Terrorist Content [Blog post]. Retrieved from Facebook Newsroom website https://newsroom.fb.com/news/2016/12/partnering-to-help-curb-spread-of-online-terrorist-content

Fourcade, M., & Healy, K. (2016). Seeing like a market. Socio-Economic Review, 15(1), 9–29. doi:10.1093/ser/mww033

Fourcade, M., & Healy, K. (2017). Categories All the Way Down. Historical Social Research, 42(1), 286–296. doi:10.12759/hsr.42.2017.1.286-296

Fuller, M. (2008). Software Studies: A Lexicon. Cambridge, MA: The MIT Press. doi:10.7551/mitpress/9780262062749.001.0001

Gandy, O. H. (2010). Engaging rational discrimination: exploring reasons for placing regulatory constraints on decision support systems. Ethics and Information Technology, 12(1), 29–42. doi:10.1007/s10676-009-9198-6

Gerber, C., & Krzywdzinski, M. (2019). Brave New Digital Work? New Forms of Performance Control in Crowdwork. In S. P. Vallas & A. Kovalainen (Eds.), Work and Labor in the Digital Age (pp. 48–57). Bingley: Emerald Publishing. doi:10.1108/S0277-283320190000033008

Gillespie, T. (2018). Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media. New Haven: Yale University Press.

Gillespie, T. (2014). The relevance of algorithms. In T. Gillespie, P. J. Boczkowski, & K. A. Foot (Eds.), Media Technologies: Essays on Communication, Materiality, and Society (pp. 167–193). Cambridge, MA: The MIT Press.

Gollatz, K., Beer, F., & Katzenbach, C. (2018). The Turn to Artificial Intelligence in Governing Communication Online [Workshop Report] Berlin: Alexander von Humboldt Institute for Internet and Society. https://nbn-resolving.org/urn:nbn:de:0168-ssoar-59528-6

Gorwa, R. (2019). What is platform governance? Information, Communication & Society, 22(6). doi:10.1080/1369118X.2019.1573914

Gorwa, R., Binns, R., & Katzenbach, C. (2019). Algorithmic Content Moderation: Technical and Political Challenges in the Automation of Platform Governance. Big Data & Society, forthcoming.

Gray, M. L., & Suri, S. (2019). Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass. Boston: Houghton Mifflin Harcourt.

Hacker, P. (2018). Teaching fairness to artificial intelligence: Existing and novel strategies against algorithmic discrimination under EU law. Common Market Law Review, 55(4), 1143–1185.

Hansen, H. K., & Flyverbom, M. (2014). The politics of transparency and the calibration of knowledge in the digital age. Organization, 22(6), 872–889. doi:10.1177/1350508414522315

Helberger, N., Pierson, J., & Poell, T. (2018). Governing online platforms: From contested to cooperative responsibility. The Information Society, 34(1), 1–14. doi:10.1080/01972243.2017.1391913

Hildebrandt, M. (2015). Smart Technologies and the End(s) of Law. Novel Entanglements of Law and Technology. Cheltenham: Edward Elgar

Hobbes, T. (1909). Hobbes’s Leviathan: reprinted from the edition of 1651. Oxford: Clarendon Press. Retrieved from https://archive.org/details/hobbessleviathan00hobbuoft

Hofmann, J., Katzenbach, C., & Gollatz, K. (2016). Between coordination and regulation: Finding the governance in Internet governance. New Media & Society, 19(9). doi:10.1177/1461444816639975

Introna, L. D. (2016). Algorithms, Governance, and Governmentality: On Governing Academic Writing. Science, Technology, & Human Values, 41(1), 17–49. doi:10.1177/0162243915587360

Jarke, J., & Gerhard, U. (2018). Using Probes for Sharing (Tacit) Knowing in Participatory Design: Facilitating Perspective Making and Perspective Taking. i-com, 17(2), 137–152. doi:10.1515/icom-2018-0014

Danaher, J., Hogan, M. J., Noone, C., Kennedy, R., Behan, B., de Paor, A., … Shankar, K. (2017). Algorithmic governance: Developing a research agenda through the power of collective intelligence. Big Data & Society, 4(2). doi:10.1177/2053951717726554

Just, N., & Latzer, M. (2016). Governance by algorithms: Reality construction by algorithmic selection on the Internet. Media, Culture & Society, 39(2), 238–258. doi:10.1177/0163443716643157

Katzenbach, C. (2017). Die Regeln digitaler Kommunikation. Governance zwischen Norm, Diskurs und Technik [The rules of digital communication. Governance between norm, discourse, and technology]. Wiesbaden: Springer VS. doi:10.1007/978-3-658-19337-9

Katzenbach, C. (2012). Technologies as Institutions: Rethinking the Role of Technology in Media Governance Constellations. In N. Just & M. Puppis (Eds.), Trends in Communication Policy Research: New Theories, New Methods, New Subjects (pp. 117–138). Bristol: Intellect.

Kelty, C. M. (2017). Too Much Democracy in All the Wrong Places: Toward a Grammar of Participation. Current Anthropology, 58(S15), S77-S90. doi:10.1086/688705

Kitchin, R., & Dodge, M. (2011). Code/Space: Software in Everyday Life. Cambridge, MA: The MIT Press.

Kitchin, R. (2013). Big data and human geography: Opportunities, challenges and risks. Dialogues in Human Geography, 3(3), 262–267. doi:10.1177/2043820613513388

Kitchin, R. (2016). Thinking critically about and researching algorithms. Information, Communication & Society, 20(1), 14–29. doi:10.1080/1369118X.2016.1154087

Klonick, K. (2018). The New Governors: The People, Rules, and Processes Governing Online Speech. Harvard Law Review, 131, 1598–1670. Retrieved from https://harvardlawreview.org/2018/04/the-new-governors-the-people-rules-and-processes-governing-online-speech/

König, P. D. (2019). Dissecting the Algorithmic Leviathan. On the Socio-Political Anatomy of Algorithmic Governance. Philosophy & Technology. doi:10.1007/s13347-019-00363-w

Kroes, P., & Verbeek, P.-P. (2014). Introduction: The Moral Status of Technical Artefacts. In P. Kroes & P.-P. Verbeek (Eds.), Philosophy of Engineering and Technology. The Moral Status of Technical Artefacts (pp. 1–9). Dordrecht: Springer. doi:10.1007/978-94-007-7914-3_1

Latour, B. (2005). Reassembling the social: An introduction to actor-network-theory. Oxford; New York: Oxford University Press.

Lee, J. D., & See, K. A. (2004). Trust in Automation: Designing for Appropriate Reliance. Human Factors, 46(1), 50–80. doi:10.1518/hfes.46.1.50_30392

Lee, C. P., Poltrock, S., Barkhuus, L., Borges, M., & Kellogg, W. (Eds.). (2017). Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing - CSCW '17. New York: ACM Press.

Lupton, D. (2016). Personal Data Practices in the Age of Lively Data. In J. Daniels, K. Gregory, & T. M. Cottom (Eds.), Digital sociologies (pp. 339–354). Bristol; Chicago: Policy Press.

Lyon, D. (2014). Surveillance, Snowden, and Big Data: Capacities, consequences, critique. Big Data & Society, 1(2). doi:10.1177/2053951714541861

MacKenzie, D. A. (2006). An engine, not a camera: How financial models shape markets. Cambridge, MA: The MIT Press.

Mejias, U. A., & Couldry, N. (2019). Datafication. Internet Policy Review, 8(4). doi:10.14763/2019.4.1428

Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2). doi:10.1177/2053951716679679

Morozov, E. (2013). To save everything, click here: The folly of technological solutionism. New York: Public Affairs.

Musiani, F. (2013). Governance by algorithms. Internet Policy Review, 2(3). doi:10.14763/2013.3.188.

Müller-Birn, C., Herbsleb, J., & Dobusch, L. (2013). Work-to-Rule: The Emergence of Algorithmic Governance in Wikipedia. Proceedings of the 6th International Conference on Communities and Technologies, 80–89. doi:10.1145/2482991.2482999

Napoli, P. M. (2013). The Algorithm as Institution: Toward a Theoretical Framework for Automated Media Production and Consumption [Working Paper No. 26]. New York: McGannon Center, Fordham University. Retrieved from https://fordham.bepress.com/mcgannon_working_papers/26

Neyland, D. & Möllers, N. (2017). Algorithmic IF … THEN rules and the conditions and consequences of power. Information, Communication & Society, 20(1), 45–62. doi:10.1080/1369118X.2016.1156141

Noble, S. U. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. New York: NYU Press.

OECD. (2015). Data-Driven Innovation: Big Data for Growth and Well-Being. Paris: OECD Publishing. doi:10.1787/9789264229358-en

Ong, W. J. (1982). Orality and literacy: The technologizing of the word. London; New York: Methuen.

O'Reilly, T. (2013). Open Data and Algorithmic Regulation. In B. Goldstein & L. Dyson (Eds.), Beyond transparency: Open data and the future of civic innovation (pp. 289–300). San Francisco: Code for America Press.

Orwat, C., Raabe, O., Buchmann, E., Anandasivam, A., Freytag, J.-C., Helberger, N., … Werle, R. (2010). Software als Institution und ihre Gestaltbarkeit [Software as institution and its designability]. Informatik Spektrum, 33(6), 626–633. doi:10.1007/s00287-009-0404-z

Parasuraman, R., & Manzey, D. H. (2010). Complacency and Bias in Human Use of Automation: An Attentional Integration. Human Factors, 52(3), 381–410. doi:10.1177/0018720810376055

Pasquale, F. (2015). The black box society: the secret algorithms that control money and information. Cambridge, MA: Harvard University Press.

Passig, K. (2017, November 23). Fünfzig Jahre Black Box [Fifty years black box]. Merkur. Retrieved from https://www.merkur-zeitschrift.de/2017/11/23/fuenfzig-jahre-black-box/

Ratcliffe, J. H., Taylor, R. B., & Fisher, R. (2019). Conflicts and congruencies between predictive policing and the patrol officer’s craft. Policing and Society. doi:10.1080/10439463.2019.1577844

Rieder, G., & Simon, J. (2016). Datatrust: Or, The Political Quest for Numerical Evidence and the Epistemologies of Big Data. Big Data & Society, 3(1). doi:10.1177/2053951716649398

Rosenblat, A. (2018). Uberland: How algorithms are rewriting the rules of work. Oakland: University of California Press.

Schmidt, A., & Wiegand, M. (2017). A survey on hate speech detection using natural language processing. Proceedings of the Fifth International Workshop on Natural Language Processing for Social Media. Valencia: Association for Computational Linguistics. doi:10.18653/v1/W17-1101

Seaver, N. (2017). Algorithms as culture: Some tactics for the ethnography of algorithmic systems. Big Data & Society, 4(2). doi:10.1177/2053951717738104

Schrape, J.-F. (2019). The Promise of Technological Decentralization. A Brief Reconstruction. Society, 56(1), 31–37. doi:10.1007/s12115-018-00321-w

Schuepp, W. (2015, September 14). Achtung, bei Ihnen droht ein Einbruch [Attention, a burglary at yours is imminent]. Tagesanzeiger Zürich.

Suzor, N. P., West, S. M., Quodling, A., & York, J. (2019). What Do We Mean When We Talk About Transparency? Toward Meaningful Transparency in Commercial Content Moderation. International Journal of Communication, 13, 1526–1543. Retrieved from https://ijoc.org/index.php/ijoc/article/view/9736

Tactical Tech. (2019). Personal Data: Political Persuasion. Inside the Influence Industry. How it works. Retrieved from https://ourdataourselves.tacticaltech.org/media/Personal-Data-Political-Persuasion-How-it-works_print-friendly.pdf

Ulbricht, L. (2018). When big data meet securitization. Algorithmic regulation with passenger name records. European Journal for Security Research, 3(2), 139–161. doi:10.1007/s41125-018-0030-3

Veale, M., & Brass, I. (2019). Administration by Algorithm? Public Management meets Public Sector Machine Learning. In K. Yeung & M. Lodge (Eds.), Algorithmic Regulation (pp. 121–149). Oxford: Oxford University Press.

Wachter, S., Mittelstadt, B., & Floridi, L. (2017). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2), 76–99. doi: 10.1093/idpl/ipx005

Waseem, Z., & Hovy, D. (2016). Hateful Symbols or Hateful People? Predictive Features for Hate Speech Detection on Twitter. Proceedings of the NAACL Student Research Workshop, 88–93. doi:10.18653/v1/N16-2013.

Wiener, N. (1948). Cybernetics: or control and communication in the animal and the machine. Cambridge, MA: The MIT Press.

Winner, L. (1980). Do Artifacts Have Politics? Daedalus, 109(1), 121–136. Retrieved from http://www.jstor.org/stable/20024652

Wood, A. J., Graham, M., Lehdonvirta, V., & Hjorth, I. (2019). Good Gig, Bad Gig: Autonomy and Algorithmic Control in the Global Gig Economy. Work, Employment & Society: a Journal of the British Sociological Association, 33(1), 56–75. doi:10.1177/0950017018785616

Yeung, K. (2017). “Hypernudge”: Big Data as a mode of regulation by design. Information, Communication & Society, 20(1), 118–136. doi:10.1080/1369118X.2016.1186713

Yeung, K. (2018). Algorithmic regulation: A critical interrogation. Regulation & Governance, 12(4), 505–523. doi:10.1111/rego.12158

Ziewitz, M. (2016). Governing Algorithms: Myth, Mess, and Methods. Science, Technology, & Human Values, 41(1), 3–16. doi:10.1177/0162243915608948

Zarsky, T. Z. (2014). Understanding Discrimination in the Scored Society. Washington Law Review, 89(4). Retrieved from https://digitalcommons.law.uw.edu/wlr/vol89/iss4/10/

Footnotes

1. The context of the study is governance mechanisms in Wikipedia content production. The authors define social governance as the coordination that relies upon interpersonal communication and algorithmic governance as the coordination based on rules that are executed by algorithms (mostly bots) (Müller-Birn et al., 2013, p. 3).

2. Other typologies are too granular for the generalising aim of this article and/or focus on sub-fields of algorithmic governance (Danaher et al., 2017), such as algorithmic selection (Just & Latzer, 2016), content moderation (Gorwa, Binns, & Katzenbach, 2019), and modes of regulation (Eyert, Irgmaier, & Ulbricht, 2018; Yeung, 2018).

3. An exception is the canton of Aargau in Switzerland that publishes its risk map (Schuepp, 2015).

4. Cf. Facebook’s Transparency Report for an example, https://transparency.facebook.com, and Suzor et al., 2019, for a critique.

5. Cf. Facebook Newsroom, ”Using Technology to Remove the Bad Stuff Before It’s Even Reported”, https://perma.cc/VN5P-7VNU.

Platformisation


This article belongs to Concepts of the digital society, a special section of Internet Policy Review guest-edited by Christian Katzenbach and Thomas Christian Bächle.

Introduction

Globally operating platform businesses, from Facebook to Uber, and from Amazon to Coursera, are becoming increasingly central to public and private life, transforming key economic sectors and spheres of life, including journalism, transportation, entertainment, education, finance, and health care. This transformation can be understood as a process of ‘platformisation’, which this article sets out to contextualise, define, and operationalise.

To contextualise the concept, we start with the notion of ‘platform’ from which ‘platformisation’ has been derived. In the first section we discuss the history of the platform concept which has seen different uses among business and scholarly communities. We highlight these differences not only to offer conceptual clarity, but also to move towards scholarly consensus. Subsequently, reflecting on initial efforts to specify the contours of platformisation, we argue that it is productive to develop a broad perspective to understand the socio-cultural and political economic consequences of this process. To that end, the second section defines platformisation by combining insights from four distinct research perspectives that each map onto different scholarly traditions: 1) software studies, 2) business studies, 3) critical political economy, and 4) cultural studies. The final section of the article demonstrates how platformisation can be operationalised in research. Building on the four scholarly traditions, we argue that the institutional dimensions of platformisation—data infrastructures, markets, and governance—need to be studied in correspondence with shifting cultural practices.

Developing this argument, it is important to keep in mind that platformisation deeply affects societies around the globe, but in the current moment it is primarily a process driven by US-based platform companies. There are regional and national exceptions, the most prominent being China, where a range of domestic platforms emerged—Baidu, Alibaba, and Tencent—marked by strong state support and oversight (De Kloet et al., 2019). Because our analysis concentrates on how US-based companies steer platformisation, we cannot do justice to the many global variations, which would require a much longer analysis. While this process everywhere involves changes in infrastructures, markets, and governance, there are crucial differences in how these changes take shape in particular countries and regions.

1. The platform concept: different streams of literature

To set the context, we start with the notion of ‘platform’ from which ‘platformisation’ has been derived. The usage of the platform concept, both in academia and in business, has seen a number of key shifts since the start of the new millennium. Predating the arrival of contemporary tech behemoths, such as Google and Facebook, the fields of (network) economics and business studies already popularised and theorised the term platform, most prominently in Japan, France, and the United States (Steinberg, 2019). In the early 2000s, US companies such as Microsoft, Intel, and Cisco provided management scholars with rich examples of how to attain “platform leadership” (Gawer & Cusumano, 2002). One of the most influential contributions to this scholarship conceptualised platforms (e.g., game consoles) as “two-sided markets” (Rochet & Tirole, 2003). In such markets, platform operators aggregate buyers or end-users (e.g., players) on one side and sellers or complementors, such as game publishers, on the other. Later contributions incorporated work from neighbouring fields including industrial organisation economics, strategic management, and information technology. This body of work has had significant impact on business discourse and strategies deployed by platform companies, much more so than critical media perspectives.

In media and communication studies, the emergence of the platform concept evolved alongside conversations about broader shifts in communication technology, the information economy, and the subsequent reorientation of users as active producers of culture (Benkler, 2006; Jenkins, 2006). Around 2005, the concept of “Web 2.0” entered the popular lexicon to serve as a shorthand for these shifts, signalling that the internet as a whole had become a platform for users and businesses to build on (O’Reilly, 2005). The Web 2.0 concept is best seen as a discursive exercise speaking to a business audience first and foremost, rather than an attempt to historicise any technological, economic, and socio-cultural shift in particular (Van Dijck & Nieborg, 2009). In hindsight, the concept was effective in paving the way for the further erosion of the open web or “generative Internet” towards an “appliancized network” of proprietary social network sites (Zittrain, 2008, p. 12). Services such as YouTube, Facebook, MySpace, and Twitter were increasingly hailed as social network platforms, constituting “a convergence of different systems, protocols, and networks” (Langlois et al., 2009).

Closely connected with the Web 2.0 discourse, early mentions of the ‘platform’ concept shared a distinctive economic purpose; they served as a metaphor or imaginary, employed by business journalists and internet companies to draw end-users to platforms and simultaneously obfuscate their business models and technological infrastructures (Couldry, 2015; Gillespie, 2010). As Gillespie (2017) writes: “Figuratively, a platform is flat, open, sturdy. In its connotations, a platform offers the opportunity to act, connect, or speak in ways that are powerful and effective [...] a platform lifts that person above everything else”. In this regard, the term platform should be seen as “productive” in its own right, prompting users to organise their activities around proprietary, for-profit platforms.

Parallel to the business discourses, a distinct computational perspective on platforms emerged in the late 2000s. In 2009, Montfort and Bogost launched a book series titled ‘platform studies’ with each volume dissecting a particular computational platform (e.g., the Atari VCS or the French Minitel). Collectively, these titles are attentive to the material dimension (hardware) of platforms and the software frameworks that support the development of third-party programmes, particularly games. A broader field of software studies research has been developed in parallel by scholars who foregrounded platforms as (re-)programmable software systems that revolve around the systematic collection and processing of user data (Helmond, 2015; Langlois & Elmer, 2013; Plantin et al., 2018). Research in this field is influenced by work that typically lies at the edge of traditional humanities programmes, such as computer and organisational science, information systems, and critical code studies.

While business studies and software studies research generated different understandings of platforms, these perspectives effectively complement each other: business interests and efforts to develop two-sided markets inform the development of platform infrastructures. Vice versa, platform architectures are modular in design so that their technology can be selectively opened up to complementors, who build and integrate services to be used by end-users. To gain insight into platforms as both markets and computational infrastructures, it is vital to combine these approaches. Thus, we define platforms as (re-)programmable digital infrastructures that facilitate and shape personalised interactions among end-users and complementors, organised through the systematic collection, algorithmic processing, monetisation, and circulation of data. Our definition offers a nod to software studies by pointing to the programmable and data-driven nature of platform infrastructures, while acknowledging the insights of the business studies perspective by including the main stakeholders or “sides” in platform markets: end-users and complementors.

2. (Re-)Defining platformisation

The next step is to explain how the scholarly community moved from a discussion of ‘platforms’ as ‘things’ to an analysis of ‘platformisation’ as a process. We identify a variety of scholarly traditions that have studied this process from different angles. Although the academic disciplines we introduce below are not always consistent or explicit in their terminology, we can nevertheless infer a particular understanding of platformisation from their research trajectories. To develop platformisation as a critical conceptual tool, it is important to explore and combine different approaches and understandings.

The first approach we would like to focus on is software studies, which has most explicitly foregrounded and defined platformisation. Starting from the computational dimension of platforms, this strand of research is especially focussed on the infrastructural boundaries of platforms, their histories and evolution. Helmond’s (2015) work has been foundational in this respect as she defines platformisation as the “penetration of platform extensions into the web, and the process in which third parties make their data platform-ready”. Key objects of study include Application Programming Interfaces (APIs), which allow for data flows with third parties (i.e., complementors), and Software Development Kits (SDKs), which enable third parties to integrate their software with platform infrastructures (Bodle, 2011; Helmond, Nieborg, & van der Vlist, 2019). Together, these computational infrastructures and informational resources afford institutional relationships that are at the root of a platform’s evolution and growth as platforms “provide a technological framework for others to build on” (Helmond, 2015).
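
To make these data flows more tangible, the following minimal Python sketch illustrates how a third-party complementor might request user data through a platform's web API. It is an illustrative sketch only: the endpoint, field names, and access token are hypothetical, and real platform APIs differ in their authentication schemes and data formats. The point is that the platform alone defines which data complementors can access, under which permissions, and in which form.

```python
# Illustrative sketch only: a hypothetical complementor pulling user data
# through a platform's public web API. The endpoint, fields, and token are
# invented for illustration; real platform APIs differ.
import requests

ACCESS_TOKEN = "user-granted-oauth-token"  # placeholder, not a real credential
API_ENDPOINT = "https://graph.example-platform.com/v1/me/posts"  # hypothetical

def fetch_user_posts(fields=("id", "message", "created_time")):
    """Request a user's posts, asking only for the fields the platform chooses to expose."""
    response = requests.get(
        API_ENDPOINT,
        params={"fields": ",".join(fields), "access_token": ACCESS_TOKEN},
        timeout=10,
    )
    response.raise_for_status()  # the platform also decides what errors look like
    return response.json()

if __name__ == "__main__":
    # Would require a real endpoint and valid credentials to return data.
    posts = fetch_user_posts()
    print(f"Retrieved {len(posts.get('data', []))} posts via the platform API")
```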

The infrastructural dimension of platforms has been further explored from a software studies perspective by Plantin and his colleagues (2018), who observe a simultaneous “platformisation of infrastructures” and an “infrastructuralization of platforms”. They contend that digital technologies have made “possible lower cost, more dynamic, and more competitive alternatives to governmental or quasi-governmental monopoly infrastructures, in exchange for a transfer of wealth and responsibility to private enterprises” (Plantin et al., 2018, p. 306). In this transfer, major platform companies have emerged as the “modern-day equivalents of the railroad, telephone, and electric utility monopolies of the late 19th and the 20th centuries” (Plantin et al., 2018, p. 307). From this infrastructural perspective, case studies have been developed, for example on Facebook’s history and evolution (Nieborg & Helmond, 2019). Here, the social media platform is understood as a “data infrastructure” that hosts a varied and constantly evolving set of “platform instances”, for example apps such as Facebook Messenger and Instagram. Each app then contributes to the platform’s expanding boundaries as it forges both computational and economic connections with complementors, such as content developers, businesses, content creators, and advertisers.

While software studies highlights the infrastructural dimension of platform evolution, business studies foregrounds the economic aspects of platformisation. The latter approach takes platform businesses as its key unit of analysis and theorises how platforms can gain a competitive advantage by operating multi-sided markets (McIntyre & Srinivasan, 2017). For platform companies, one of the advantages inherent to platform markets that can be leveraged is network “externalities” or effects (Rohlfs, 1974; Rochet & Tirole, 2003). These effects manifest themselves either directly, when end-users or complementors join one side of the market, or indirectly, when the other side of the market grows. As McIntyre and Srinivasan (2017, p. 143) explain, “direct network effects arise when the benefit of network participation to a user depends on the number of other network users with whom they can interact”. And indirect network effects occur when “different ‘sides’ of a network can mutually benefit from the size and characteristics of the other side”.
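
The quoted intuition about direct and indirect network effects can be illustrated with a deliberately simplified toy model, sketched in Python below. The linear form and all parameter values are our own illustrative assumptions, not drawn from McIntyre and Srinivasan (2017); they merely show that each side's benefit grows with the size of the other side.

```python
# Illustrative toy model only: direct and indirect network effects in a
# two-sided market. Parameter values are invented for illustration.

def user_benefit(n_users, n_complementors, direct=0.01, indirect=0.05):
    """Benefit to one end-user: grows with other users (direct effect)
    and with the number of complementors on the other side (indirect effect)."""
    return direct * n_users + indirect * n_complementors

def complementor_benefit(n_users, indirect=0.02):
    """Benefit to one complementor: grows with the size of the user side."""
    return indirect * n_users

for users, complementors in [(1_000, 50), (100_000, 2_000), (10_000_000, 50_000)]:
    print(
        f"{users:>11,} users, {complementors:>7,} complementors -> "
        f"user benefit {user_benefit(users, complementors):.1f}, "
        f"complementor benefit {complementor_benefit(users):.1f}"
    )
```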

The managerial and economic blueprint for multi-sided markets theorised by business scholars invariably leads to the accumulation of capital and power among a small group of platform corporations (Haucap & Heimeshoff, 2014; Srnicek, 2016). As a counterweight to these business studies accounts, it is important to turn to a third approach: critical political economy. While most scholars in this tradition do not explicitly use the notion of platformisation, their work is vital as it signals how this process involves the extension and intensification of global platform power and governance. Critical political economists have drawn attention to issues of labour exploitation, surveillance, and imperialism (Fuchs, 2017). For example, Scholz (2016, p. 9) considers the issue of platform labour, maintaining that “wage theft is a feature, not a bug” of platforms. Considering the global footprint of platform companies, Jin (2013, p. 167) introduced the notion of “platform imperialism”, arguing that the rapid growth of companies such as Facebook and Google demonstrates that “American imperialism has been continued” through the exploitation of platforms.

Important to note is that the discussed research traditions all primarily conceive of platforms and platformisation in institutional terms, as data infrastructures, markets, and forms of governance. Notably absent is an analysis of how platforms transform cultural practices, and vice versa, how evolving practices transform platforms as particular socio-technical constructs. These transformations have been extensively studied by scholars working in the broader tradition of cultural studies, who mostly do not employ the notion of platformisation either, but whose work is important for understanding this process. As the cultural studies literature on platforms is very extensive, ranging from self-representation and sexual expression to the transformation of labour relations and visual culture (Burgess, Marwick, & Poell, 2017), we cannot do justice to its full scope. We do want to stress the importance of considering platform-based user practices when analysing platformisation. A major challenge in such examinations is to trace how institutional changes and shifting cultural practices mutually articulate each other.

One body of work that is at the forefront of theorising the shifting relationships among users and platforms concerns work on labour. By introducing concepts such as “aspirational labor”, “relational labor”, and “hope labor”, cultural studies researchers have critically examined how specific practices and understandings of labour emerged within platform markets (Baym, 2015; Duffy, 2016; Kuehn & Corrigan, 2013). As Duffy (2016, p. 453) points out, newly emerging occupations, such as streamers, vloggers and bloggers, tend to reify “gendered social hierarchies”, that “leave women’s work unrecognized and/or under-compensated”. Considering platformisation from this perspective means analysing how social practices and imaginations are organised around platforms. This, in turn, shapes how platforms evolve as particular data infrastructures, markets, and governance frameworks.

Although these four approaches provide us with different foci and interpretations of platformisation, we would like to argue that they are not mutually exclusive (Nieborg & Poell, 2018). The observed institutional changes and shifts in cultural practices associated with platforms are in practice closely interrelated. Thus, a more fundamental and critical insight into what platformisation entails can only be achieved by studying these changes and shifts in relation to each other. Following research in software studies, business studies, and political economy, we therefore understand platformisation as the penetration of the infrastructures, economic processes, and governmental frameworks of platforms in different economic sectors and spheres of life. And in the tradition of cultural studies, we conceive of this process as the reorganisation of cultural practices and imaginations around platforms. The next section will discuss what platformisation entails in practice and how this rather abstract definition can be operationalised in research.

3. Operationalising platformisation: studying the impact of platforms

The different perspectives on platformisation, which we derived from the various research traditions, suggest that this process unfolds along three institutional dimensions: data infrastructures, markets, and governance. And we observed that, from a cultural studies perspective, platformisation leads to the (re-)organization of cultural practices around platforms, while these practices simultaneously shape a platform’s institutional dimensions. Ultimately, the collective activities of both end-users and complementors, and the response of platform operators to these activities, determine a platform’s continued growth or its demise. As pointed out by critical political economists, the power relations among platform operators, end-users, and complementors are extremely volatile and inherently asymmetrical as operators are fully in charge of a platform’s techno-economic development. Moreover, network effects, together with platform strategies that frustrate attempts by end-users or complementors to leave a platform, have resulted in highly concentrated platform markets (Barwise & Watkins, 2018). While the media and telecom industries have been dominated by internationally operating conglomerates for decades (Winseck, 2008), the rapid emergence of a handful of platform companies challenges the power of industry incumbents. Poignant examples of digital dominance by platform companies can be witnessed in the new markets for digital advertising, apps, e-commerce, and cloud computing. With these considerations in mind, we propose to study the three institutional dimensions of platformisation as interactive processes that involve a wide variety of actors, but which are also structured by fundamentally unequal power relations. We will use the example of app stores to illustrate how the three dimensions can be operationalised.

The first dimension is the development of data infrastructures, which has especially been explored by software studies scholars. As a process, the development of data infrastructures has been captured through the notion of datafication, referring to the ways in which digital platforms render practices and processes that historically eluded quantification into data (Kitchin, 2014; Mayer-Schönberger & Cukier, 2013; Van Dijck, 2014; and Mejias & Couldry, 2019 on datafication, as part of this special section). This process does not just concern demographic or profiling data volunteered by users or solicited via (online) surveys, but especially also behavioural meta-data. Such behavioural data collection is afforded by still expanding platform infrastructures in the form of apps, plugins, active and passive sensors, and trackers (Gerlitz & Helmond, 2013; Nieborg & Helmond, 2019). As such, platform infrastructures are integrated with a growing number of devices, from smartphones and smartwatches to household appliances and self-driving cars. This myriad of platform extensions allows platform operators to transform virtually every instance of human interaction into data: rating, paying, searching, watching, talking, friending, dating, driving, walking, etc. This data is then algorithmically processed and, sometimes under strict conditions, haphazardly made available to a wide variety of external actors (Bucher, 2018; Langlois & Elmer, 2013). Important to note: this datafication process is simultaneously driven by complementors, who actively integrate platform data in products and services that are used in everyday practices and routines. Many news organisations and journalists, for example, use social media data in editorial decision-making and in content distribution strategies (Van Dijck, Poell, & De Waal, 2018). It is through such emerging cultural practices that data infrastructures become important in particular economic sectors and activities.
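
As a minimal illustration of what such behavioural meta-data can look like, the Python sketch below assembles the kind of structured event record a hypothetical in-app tracking SDK might produce. All field names are invented for illustration, but the logic of timestamping and contextualising a single interaction before uploading it is characteristic of datafication.

```python
# Illustrative sketch only: the kind of behavioural event record a hypothetical
# in-app tracking SDK might assemble before sending it to a platform's
# data infrastructure. Field names are invented for illustration.
import json
import uuid
from datetime import datetime, timezone

def build_event(user_id, action, context):
    """Turn a single user interaction into a structured, timestamped data point."""
    return {
        "event_id": str(uuid.uuid4()),
        "user_id": user_id,            # pseudonymous identifier
        "action": action,              # e.g., "like", "scroll", "purchase"
        "context": context,            # device, app version, screen, etc.
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

event = build_event(
    user_id="u-184f",
    action="like",
    context={"device": "smartphone", "app_version": "7.2.1", "screen": "feed"},
)
print(json.dumps(event, indent=2))  # in practice, events are batched and uploaded
```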

The app stores operated by Apple and Google are an example of a ubiquitous data infrastructure for software distribution. Instead of letting users download software applications from distributed locations, as is common in desktop-based software environments, app stores are highly centralised, heavily controlled and curated software marketplaces. In the case of Apple’s iOS mobile operating system for the iPhone, iPad and Apple Watch, the App Store is the only sanctioned way for users to access third-party software, allowing Apple to track and control which apps are distributed by whom and thus, indirectly, also which data are collected, by whom, and for what purpose. This strict control over app distribution allows Apple to set technical standards and define data genres, categories, and subsequent actions. For instance, Apple’s HealthKit data framework provides “a central repository for health and fitness data” on iOS devices. Of course, this repository and its related data standards only become influential because many app developers (i.e., complementors) use this functionality and thereby subject themselves to Apple’s interpretation and standardisation of what counts as “health” data.

The second dimension of platformisation concerns markets, the reorganisation of economic relations around two-sided or multi-sided markets, which has especially been studied and theorised by business scholars. Traditional pre-digital market relations, with some notable exceptions, tend to be one-sided, with a company directly transacting with buyers. Conversely, platforms constitute two-sided, or increasingly, complex multi-sided markets that function as aggregators of transactions among end-users and a wide variety of third parties. A classic example of a two-sided market similar to the App Store is a dedicated game console, such as the PlayStation, which connects game publishers with players (Rochet & Tirole, 2003). A game platform that also lets advertisers target users, becomes a multi-sided market, connecting gamers, game publishers, and advertisers. Market arrangements like these affect the distribution of economic power and wealth, as they are subject to strong network effects. A game platform that attracts a lot of game publishers and game titles becomes more attractive for users, and vice versa, more users make a platform more attractive for game publishers and advertisers, with the latter generating additional income that can be used to subsidise content.

Thus, changes in market relations are not simply ‘institutional’, but for an important part driven by the practices of end-users, content producers, and other “sides” in the market, such as advertisers and data intermediaries. If many end-users suddenly embrace a new platform, as happened in the case of the smartphone, content producers and advertisers are likely to follow quickly. Yet, once end-users and complementors have been aggregated and integrated at scale, it becomes increasingly hard for other platforms to break into a particular market, or for content and service providers to ignore platform monopolies. Whereas, for example, newspapers were for a long time successful non-digital two-sided markets attracting readers and advertisers (Argentesi & Filistrucchi, 2007), they are increasingly turned into platform complementors offering content to end-users through platforms, such as Facebook, Twitter, and Instagram, which then “monetise” this content by surrounding it with advertisements (Nieborg & Poell, 2018).

App stores also serve as examples of two-sided platform markets, connecting end-users with app developers. This market arrangement affects the distribution of power, as all app-based commercial transactions are subject to the economic imperatives set out by the Apple/Google duopoly. As app-related income is not the primary revenue generator for either platform company, both app stores have rigid pricing standards and a relatively low barrier to market entry for developers. Consequently, app supply is high, counted in the millions. New entrants in the app economy, therefore, have become highly dependent on advertising and on selective promotion by platform operators to gain visibility in what has become a hyper-competitive market. This market dynamic is reinforced by the collective practices of end-users, who, rather than downloading new apps on a weekly basis, tend to stick to using around 40 apps at any time (Comscore, 2017). Important to note is that this rearrangement of market relations is intrinsically connected with the previous dimension of datafication. Because of fierce competition, app developers are incentivised to systematically collect end-user data to track and optimise user engagement, retention, and monetisation (Nieborg, 2017).

Third, platforms not only steer economic transactions, but also platform-based user interactions. This leads us to the dimension of governance, which has especially been put on the research agenda by political economic and software studies scholars (Gillespie, 2018; Gorwa, 2019). Most visibly, platforms structure how end-users can interact with each other and with complementors through graphical user interfaces (GUIs), offering particular affordances while withholding others, for example in the form of buttons—like, follow, rate, order, pay—and related metrics (Bucher & Helmond, 2018). This form of platform governance materialises through algorithmic sorting, privileging particular data signals over others, thereby shaping what types of content and services become prominently visible and what remains largely out of sight (Bucher, 2018; Pasquale, 2015). Equally important, platforms control how complementors can track and target end-users through application programming interfaces (APIs), software development kits (SDKs), and data services (Langlois & Elmer, 2013; Nieborg & Poell, 2018). Finally, platforms govern through contracts and policies, in the form of terms of service (ToS), license agreements, and developer guidelines, all of which have to be agreed to when accessing or using a platform’s services (Van Dijck, 2013). On the basis of these terms and guidelines, platforms moderate what end-users and complementors can share and how they interact with each other (Gillespie, 2018).

As platforms tend to employ these different governing instruments—interfaces, algorithms, policies—without much regard for particular political-cultural traditions, there are often clashes with local rules, norms, and regulatory frameworks. At the same time, it should be observed that all these governing instruments have been developed and constantly adjusted in response to the evolving practices of end-users and complementors. The widespread circulation of disinformation and hate speech by end-users prompts platform operators to devise stricter policies and moderation practices, as well as algorithmic systems that can filter out this content. And, when large numbers of advertisers and content producers leave a platform, its operators will adjust the governing instruments to try to keep these complementors on board.

In our app store example, platform operators constantly tinker with their governing instruments to keep end-users and complementors tied to the platform. Google’s Play Store frequently changes its algorithmic sorting mechanisms, privileging particular data signals over others to arrive at a commercially optimal ranking of apps. While external actors affect the development of governance instruments, they lack insight into how platforms design and adjust these instruments. For developers and end-users, the Play Store is a typical black box, as app rankings are based on opaque and largely invisible algorithms. Whereas such instances of algorithmic obfuscation have received a lot of public and scholarly attention, we want to emphasise that these are elements of larger governance frameworks, which need to be scrutinised in their entirety. In the case of app stores, it is the combination of controlled access to data, algorithmic sorting, and often opaque moderation practices—Apple especially has a history of unexpected app removals (Hestres, 2013)—that constitutes their governance framework.
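
A deliberately simplified toy model can illustrate how such algorithmic sorting privileges some data signals over others. The signals and weights in the Python sketch below are invented and bear no relation to any actual store's ranking algorithm; the point is that small, invisible adjustments to the weights, made by the platform operator alone, reorder what becomes visible.

```python
# Illustrative toy model only: how an app-store ranking might privilege some
# data signals over others. Signals and weights are invented for illustration;
# actual store-ranking algorithms are proprietary and far more complex.

# Hypothetical weights chosen (and silently adjustable) by the platform operator.
WEIGHTS = {"installs": 0.4, "rating": 0.3, "revenue": 0.2, "recency": 0.1}

def score(app):
    """Weighted score over normalised signals (all values assumed to lie in [0, 1])."""
    return sum(weight * app.get(signal, 0.0) for signal, weight in WEIGHTS.items())

def rank_apps(apps):
    """Return the catalogue sorted from most to least visible."""
    return sorted(apps, key=score, reverse=True)

catalogue = [
    {"name": "FitTrack",    "installs": 0.9, "rating": 0.6, "revenue": 0.8, "recency": 0.2},
    {"name": "NewsLite",    "installs": 0.4, "rating": 0.9, "revenue": 0.1, "recency": 0.9},
    {"name": "PuzzleMania", "installs": 0.7, "rating": 0.7, "revenue": 0.5, "recency": 0.5},
]

for position, app in enumerate(rank_apps(catalogue), start=1):
    print(position, app["name"])
```

Changing a single weight, say doubling the value attached to revenue, would produce a different ordering without any observable change for developers or end-users.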

Conclusion

Taken together, the analysis of these three dimensions of platformisation enables a comprehensive understanding of how this process brings about a transformation of key societal sectors and how it presents particular challenges for stakeholders in these sectors. It is vital that we move beyond the particular foci of software studies, business studies, political economy, and cultural studies that have, so far, dominated the study of platforms and platformisation. We need to gain insight into how changes in infrastructures, market relations, and governance frameworks are intertwined, and how they take shape in relation to shifting cultural practices. Such an exploration is not just of academic interest. Platformisation can only be regulated democratically and effectively by public institutions if we understand the key mechanisms at work in this process.

Evidently, this short paper only provides the outline of such a research programme. Further developing this analytical framework, it is especially important to enhance our understanding of how the institutional changes are entangled with shifting cultural practices. Recent scholarship tends to focus on one or the other, which hinders insight into the ever-evolving dynamics of platformisation. A systematic inquiry into the connections between the institutional and cultural dimensions of platformisation is particularly crucial because it will bring into view the correspondences and tensions between, on the one hand, global platform infrastructures, market arrangements, and governing frameworks, and, on the other hand, local and national practices and institutions. As political-cultural rules and norms widely diverge across the globe, the challenge is to integrate platforms in society without undermining vital traditions of citizenship and without increasing disparities in the distribution of wealth and power.

References

Argentesi, E., & Filistrucchi, L. (2007). Estimating market power in a two‐sided market: The case of newspapers. Journal of Applied Econometrics, 22(7), 1247–1266. doi:10.1002/jae.997

Barwise, P., & Watkins, L. (2018). The Evolution of Digital Dominance. How and Why We Got to GAFA. In M. Moore & D. Tambini (Eds.), Digital Dominance: The Power of Google, Amazon, Facebook, and Apple (pp. 21–49). Oxford: Oxford University Press. Available at http://lbsresearch.london.edu/914/

Baym, N. K. (2015). Connect with your audience! The relational labor of connection. The Communication Review, 18(1), 14–22. doi:10.1080/10714421.2015.996401

Benkler, Y. (2006). The wealth of networks. How social production transforms markets and freedom. New Haven: Yale University Press.

Bodle, R. (2011). Regimes of sharing: Open APIs, interoperability, and Facebook. Information, Communication & Society, 14(3), 320–337. doi:10.1080/1369118X.2010.542825

Bucher, T. (2018). If... Then. Algorithmic Power and Politics. Oxford: Oxford University Press.

Bucher, T., & Helmond, A. (2017). The Affordances of Social Media Platforms. In J. Burgess, A. Marwick, & T. Poell (Eds.), The SAGE handbook of social media (pp. 233–253). London: Sage. doi:10.4135/9781473984066.n14 Available at https://hdl.handle.net/11245.1/149a9089-49a4-454c-b935-a6ea7f2d8986

Burgess, J., Marwick, A., & Poell, T. (Eds.). (2017). The SAGE handbook of social media. London: Sage. doi:10.4135/9781473984066

Comscore. (2017). The 2017 U.S. Mobile App Report [White Paper]. Retrieved from https://www.comscore.com/Insights/Presentations-and-Whitepapers/2017/The-2017-US-Mobile-App-Report

Couldry, N. (2015). The myth of ‘us’: digital networks, political change and the production of collectivity. Information, Communication & Society, 18(6), 608–626. doi:10.1080/1369118X.2014.979216

de Kloet, J., Poell, T., Guohua, Z., & Yiu Fai, C. (2019). The platformization of Chinese Society: infrastructure, governance, and practice. Chinese Journal of Communication, 12(3), 249–256. doi:10.1080/17544750.2019.1644008

Duffy, B. E. (2016). The romance of work: Gender and aspirational labour in the digital culture industries. International Journal of Cultural Studies, 19(4), 441–457. doi:10.1177/1367877915572186

Fuchs, C. (2017). Social media: A critical introduction. London: Sage.

Gawer, A., & Cusumano, M. A. (2002). Platform leadership: How Intel, Microsoft, and Cisco drive industry innovation. Boston: Harvard Business School Press.

Gerlitz, C., & Helmond, A. (2013). The like economy: Social buttons and the data-intensive web. New Media & Society, 15(8), 1348–1365. doi:10.1177/1461444812472322

Gillespie, T. (2010). The politics of ‘platforms’. New Media & Society, 12(3), 347–364. doi:10.1177/1461444809342738

Gillespie, T. (2017). The platform metaphor, revisited. Retrieved from Digital Society Blog: https://www.hiig.de/en/the-platform-metaphor-revisited/

Gillespie, T. (2018). Custodians of the Internet. Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media. New Haven: Yale University Press.

Gorwa, R. (2019). What is platform governance? Information, Communication & Society, 22(6), 1–18. doi:10.1080/1369118X.2019.1573914

Haucap, J., & Heimeshoff, U. (2014). Google, Facebook, Amazon, eBay: Is the Internet driving competition or market monopolization? International Economics and Economic Policy, 11(1-2), 49–61. doi:10.1007/s10368-013-0247-6

Helmond, A. (2015). The Platformization of the Web: Making Web Data Platform Ready. Social Media + Society, 1(2). doi:10.1177/2056305115603080

Helmond, A., Nieborg, D. B., & van der Vlist, F. N. (2019). Facebook’s evolution: Development of a platform-as-infrastructure. Internet Histories, 3(2), 123–146. doi:10.1080/24701475.2019.1593667

Hestres, L. E. (2013). App Neutrality: Apple’s App Store and Freedom of Expression Online. International Journal of Communication, 7, 1265–1280. Retrieved from https://ijoc.org/index.php/ijoc/article/view/1904

Jenkins, H. (2006). Convergence culture: where old and new media collide. New York: New York University Press.

Jin, D. Y. (2013). The construction of platform imperialism in the globalization era. tripleC: Communication, Capitalism & Critique. Open Access Journal for a Global Sustainable Information Society, 11(1), 145–172. doi:10.31269/triplec.v11i1.458

Kitchin, R. (2014). The data revolution: Big data, open data, data infrastructures and their consequences. London: Sage.

Kuehn, K., & Corrigan, T. F. (2013). Hope labor: The role of employment prospects in online social production. The Political Economy of Communication, 1(1), 9–25. Retrieved from http://polecom.org/index.php/polecom/article/view/9

Langlois, G., & Elmer, G. (2013). The research politics of social media platforms. Culture machine, 14. Retrieved from https://culturemachine.net/platform-politics/

Langlois, G., McKelvey, F., Elmer, G., & Werbin, K. (2009). Mapping Commercial Web 2.0 Worlds: Towards a New Critical Ontogenesis. FibreCulture, (14). Retrieved from http://fourteen.fibreculturejournal.org/fcj-095-mapping-commercial-web-2-0-worlds-towards-a-new-critical-ontogenesis/

Mejias, U. A., & Couldry, N. (2019). Datafication. Internet Policy Review, 8(4). doi:10.14763/2019.4.1428

Montfort, N., & Bogost, I. (2009). Racing the Beam. The Atari Video Computer System. Cambridge, MA: The MIT Press.

McIntyre, D. P., & Srinivasan, A. (2017). Networks, platforms, and strategy: Emerging views and next steps. Strategic Management Journal, 38(1), 141–160. doi:10.1002/smj.2596

Nieborg, D. B. (2017). Free-to-play Games and App Advertising. The Rise of the Player Commodity. In J. F. Hamilton, R. Bodle, & E. Korin (Eds.), Explorations in Critical Studies of Advertising (pp. 28–41). New York: Routledge.

Nieborg, D. B., & Helmond, A. (2019). The political economy of Facebook’s platformization in the mobile ecosystem: Facebook Messenger as a platform instance. Media, Culture & Society, 40(2), 1–23. doi:10.1177/0163443718818384

Nieborg, D. B., & Poell, T. (2018). The platformization of cultural production: Theorizing the contingent cultural commodity. New Media & Society, 20(11), 4275–4292. doi:10.1177/1461444818769694

O’Reilly, T. (2005). What Is Web 2.0: Design Patterns and Business Models for the Next Generation of Software. Retrieved April 9, 2019, from http://oreilly.com/web2/archive/what-is-web-20.html

Pasquale, F. (2015). The Black Box Society. The Secret Algorithms That Control Money and Information. Cambridge, MA: Harvard University Press.

Plantin, J.-C., Lagoze, C., Edwards, P. N., & Sandvig, C. (2018). Infrastructure studies meet platform studies in the age of Google and Facebook. New Media & Society, 20(1), 293–310. doi:10.1177/1461444816661553

Qiu, J. L. (2016). Goodbye iSlave. A Manifesto for Digital Abolition. Urbana: University of Illinois Press.

Rochet, J.-C., & Tirole, J. (2003). Platform Competition in Two-Sided Markets. Journal of the European Economic Association, 1(4), 990–1029. doi:10.1162/154247603322493212

Rohlfs, J. (1974). A Theory of Interdependent Demand for a Communications Service. The Bell Journal of Economics and Management Science, 5(1), 16–37. doi:10.2307/3003090

Mayer-Schönberger, V., & Cukier, K. (2013). Big data: A revolution that will transform how we live, work, and think. Boston: Houghton Mifflin Harcourt.

Scholz, T. (2016). Platform cooperativism. Challenging the corporate sharing economy. New York: Rosa Luxemburg Foundation. Retrieved from http://www.rosalux-nyc.org/platform-cooperativism-2/

Srnicek, N. (2017). Platform capitalism. Cambridge: Polity.

Steinberg, M. (2019). The Platform Economy. How Japan Transformed the Consumer Internet. Minneapolis: University of Minnesota Press.

van Dijck, J. (2013). The Culture of Connectivity: A critical history of social media. Oxford: Oxford University Press.

van Dijck, J. (2014). Datafication, dataism and dataveillance: Big Data between scientific paradigm and ideology. Surveillance & Society, 12(2), 197–208. doi:10.24908/ss.v12i2.4776

van Dijck, J., & Nieborg, D. B. (2009). Wikinomics and its discontents: A critical analysis of Web 2.0 business manifestos. New Media & Society, 11(5), 855–874. doi:10.1177/1461444809105356

van Dijck, J., Poell, T., & De Waal, M. (2018). The Platform Society: Public Values in a Connective World. Oxford: Oxford University Press.

Winseck, D. R. (2008). The State of Media Ownership and Media Markets: Competition or Concentration and Why Should We Care? Sociology Compass, 2(1), 34–47. doi:10.1111/j.1751-9020.2007.00061.x

Zittrain, J. (2008). The future of the Internet and how to stop it. New Haven: Yale University Press.

Defining concepts of the digital society


First concepts in this collection

Defining concepts of the digital society
Christian Katzenbach & Thomas Christian Bächle, Alexander von Humboldt Institute for Internet and Society

Algorithmic governance
Christian Katzenbach, Alexander von Humboldt Institute for Internet and Society
Lena Ulbricht, Berlin Social Science Center

Datafication
Ulises A. Mejias, State University of New York at Oswego
Nick Couldry, London School of Economics & Political Science

Filter bubble
Axel Bruns, Queensland University of Technology

Platformisation
Thomas Poell, University of Amsterdam
David Nieborg, University of Toronto
José van Dijck, Utrecht University

Privacy
Tobias Matzner, University of Paderborn
Carsten Ochs, University of Kassel

Defining concepts of the digital society

In our research on ‘artificial intelligence’, robots or autonomous systems in Berlin, it is a recurring theme that preconceived images shape many of the expectations and fears associated with technologies. These images, however, do not necessarily reflect actual capabilities. Notions of intentionality, free will or consciousness are often misguidedly attributed to phenomena such as “machine learning” or “decision-making systems”. Still, these imaginations and figures of speech have actual political and social clout, shape research and technological development goals and inform discourses on regulation, innovation and potential futures.

Terminology shapes reality. What’s true for the phenomena that we address in our research is certainly also true for the terminology we use for our research. What at first sounded like a banal truism gradually evolved, for us, into the idea for this project: establishing a new special section, Defining concepts of the digital society. At a time when branding new, occasionally innovative but often only catchy terms has become a familiar activity of researchers, companies and policymakers alike, we felt it was particularly necessary to reflect on which of these concepts were actually worthwhile, provided analytic value and actually described something new – beyond the fluffy rhetoric that repeatedly becomes rampant in academic discourse.

Algorithmic governance, autonomous systems, transparency, smart technologies – these concepts are among the best candidates to serve this cause. They have become part of the vocabulary that is mobilised to make sense of the current rapid social and technological change. In this quest to understand the digital society, some ideas have proved to be more successful than others in stimulating public discourse, academic thinking, as well as economic and political activities. More recently, platformisation and datafication have become household terms, although they relate to highly complicated and multi-faceted phenomena that could potentially also be described differently. Some concepts even strongly shape public and policy discourse albeit lacking solid empirical validation (the commonly referenced filter bubble is a case in point here).

There is high demand for concepts and explanations that condense the complexity of the world by transforming it into cogent and manageable ideas. Empirical research typically addresses single aspects of the current transformations. Adding small pieces to the puzzle, individual reports, research papers and essays tend to be rather unconnected, sometimes even resisting being combined with each other. While they certainly have a heuristic value, for example by validating or falsifying assumptions for well-defined, yet restricted contexts, they cannot provide overarching explanations and narratives. This is where more abstract concepts come into the picture. Operating on the level of middle range theories, they are able to integrate diverse phenomena under one notion by foregrounding certain shared characteristics. We need those overarching concepts to make sense of the current transformations.

A new special section defining concepts of the digital society

With this new special section Defining concepts of the digital society in Internet Policy Review, we seek to foster a platform that provides and validates exactly these overarching frameworks and theories. Based on the latest research, yet broad in scope, the contributions offer effective tools to analyse the digital society. Their authors offer concise articles that portray and critically discuss individual concepts with an interdisciplinary mindset. Each article contextualises the concept’s origin and academic traditions, analyses its contemporary usage in different research approaches and discusses its social, political, cultural, ethical or economic relevance and impact as well as its analytical value. With this, the authors are building bridges between the disciplines, between research and practice as well as between innovative explanations and their conceptual heritage.

We hope that this growing collection of reference papers will succeed in providing guidance for research and teaching as well as inform stakeholders in policy, business and civil society. For scholars, the articles seek to constitute an instructive reference that points to current research, historical and (interdisciplinary) backgrounds of the respective concepts, and relevant ongoing debates. For teachers and students alike, the articles offer an accessible overview that covers and contextualises broad themes while providing useful pointers to further research. Being relatively short and accessible in format, the articles strive to be instructive and relevant beyond academia. Stakeholders in policy and business as well as journalists and civil society are increasingly interested in research evidence and academic perspectives on the entanglement of digitalisation and society. With its newly developed format, the special section helps to navigate relevant research fields for these interdisciplinary questions. As an ongoing publication in this journal on internet regulation, we hope not only to meet the existing demand for overarching concepts and explanations but also to adapt quickly to ongoing transformations.

The politics of concepts – and the limits of this special section

Terms and concepts are lenses on the complexity of reality that foreground some aspects while neglecting others. They bear normative assumptions, install specific ways of understanding new phenomena, and possibly even create regulatory implications. The more we use these terms, the more both the phenomena they refer to as well as their specific framing increasingly become self-evident and ordinary. At the same time, however, each of these concepts has its own ideational, theoretical and rhetorical histories rooted, for example, in social theory or political thought, but also on a very practical level in business decisions to invest in certain ideas or policy debates with their own discursive rules. As a consequence, these concepts are far from being natural, let alone a neutral designator of existing phenomena. Concepts always bear their own politics – and in mobilising them, we need to carefully and critically reflect these politics and the choices they represent.

Of course, this special section on concepts of the digital society is necessarily and inescapably part of the very politics it seeks to reflect. By choosing certain terms over others, giving voice to a selection of authors, their respective disciplines and viewpoints, the special section itself undoubtedly takes part in the hierarchisation of terms and ideas. One could easily point to the limitations that result from providing predominantly Western perspectives, an uneven mix of disciplinary positions, even the dominant representation of certain auctorial subjectivities in terms of gender, race or ethnicity. Ultimately, any form of conceptual work struggles with blind spots. While we certainly acknowledge that the project poses challenges, we are certain that it is a worthwhile and necessary endeavour.

The special section is a continuing project. This first collection of five concepts offers a critical assessment of prominent, yet hitherto often nebulous or vague ideas, terms or descriptions. It by no means seeks to provide a finite and unalterable list of definitions. Its very objective is to encourage dialogue and contestation. We explicitly invite contributions that promote dialogue between the concepts and also take counter-positions. With mostly co-authored pieces representing differing academic disciplines, the special section already strives for a heterogeneity of viewpoints in individual papers. The larger quest of the project is to offer a genuine multitude of positions, extending, opposing or updating the concepts, their premises or consequences.

With this special section we seek to find a middle ground between the conceptual challenges and the aim of providing short, focused concept papers on the one hand, and what we regard as the unquestionable need for interdisciplinary, concise and scholarly rigorous contributions that help us understand digital societies on the other. This is the prime objective of this project.

First articles in the special section and future concepts

This launch of the special section in Internet Policy Review represents only the first instalment of an ongoing project that seeks to build both a repertoire of instructive concepts and a platform to contest and elaborate on already published ones. Further iterations with additional concepts and commentaries on existing papers will follow at regular intervals.

With this first collection, the special section particularly focuses on the important role of data, the practices of their production, dissemination and trade, as well as the ensuing broader social, political and cultural ramifications. Ulises A. Mejias and Nick Couldry look at the concept of datafication, which describes a cultural logic of quantification and monetisation of human life through digital information. They identify the major social consequences, which are situated at the intersection of power and knowledge: in political economy, datafication has implications for labour and the establishment of new markets. Not only in this regard is it closely connected to the tendency – and concept – of platformisation (see below). With the help of decolonial theory, Mejias and Couldry put particular emphasis on the politics and geography of datafication in what they call data colonialism: the large-scale extraction of data amounts to the appropriation of social resources, with the general objective (pursued mostly by Western companies) to “dispossess”. In the context of legal theory, Mejias and Couldry note that the processes of datafication are so wide-ranging that basic rights of the self, autonomy and privacy are increasingly called into question.

It is exactly this disposition of once authoritative ideas that has become quite fragile. In this context, Tobias Matzner and Carsten Ochs analyse the concept of privacy in relation to changing socio-technical conditions. They emphasise the need to understand and theorise privacy differently with the advent of digital technologies. These “shift the possibilities and boundaries of human perception and action” by creating visibilities and forms of interaction that are no longer defined by physical presence: personal information or pictures become potentially accessible for a worldwide audience, data “is easy and cheap to store” and becomes permanent in digital records. In addition to these technical contexts, they argue that the “inherent individualism” of “conventional privacy theories” and data protection legislation does not meet the needs brought about by datafication: the forms of aggregated data used to identify behavioural patterns are not the same as personal data.

One of the reasons why these forms of aggregated data operate at said intersection of knowledge and power is the practice of increasingly managing social spaces and interactions with algorithmic systems. Christian Katzenbach and Lena Ulbricht discuss algorithmic governance as a notion that builds on the longstanding theme that technology allows for a specific mode of governing society. Datafication, increasing computing power, more sophisticated algorithms, the economic and political interest in seemingly efficient and cost-reducing solutions, as well as the general trend towards digitalisation have all contributed to the new appeal and actual deployment of technological means to order the social. Eschewing the deterministic tendencies of the notion, yet taking seriously the increasing influence of algorithmic systems, the authors discuss a range of sectors, from predictive policing to automated content moderation, that increasingly rely on algorithmic governance. The concept brings previously unconnected objects of inquiry and research fields together and makes it possible to identify overarching concerns such as surveillance, bias, agency, transparency and depoliticisation.

Many of these developments are primarily attributed to what we have converged on calling platforms: huge, often globally operating companies and services such as Facebook and Alibaba, Google and Uber that seek to transform and intermediate transactions across key economic sectors in order to position themselves as indispensable infrastructures of private and public life. Thomas Poell, David Nieborg and José van Dijck discuss platformisation as a key development and narrative of the digital society. They argue that academic disciplines need to join forces in order to systematically investigate how changes in infrastructures, market relations and governance frameworks are intertwined, and how they take shape in relation to shifting cultural practices. We are only starting to understand how and why platforms have become the dominant mode of economic and social organisation and what the long-term effects might be.

One of the more prominent notions that seek to capture the effects of the reorganisation of social life by platforms and datafication is the metaphor of the filter bubble. Axel Bruns critically discusses this concept and carves out why it holds a special position in the set of concepts in this special section: while the idea of an algorithmically curated filter bubble seems plausible and enjoys considerable popularity in public and political discourse, empirical research shows little evidence that the phenomenon actually exists. Based on different readings of the concept and existing studies, Bruns argues that, rather than acutely capturing an empirical phenomenon, the persistent use of the notion has now created its own discursive reality that continues to have an impact on societal institutions, media and communication platforms as well as the users themselves. Bruns warns that, in consequence, the notion might even redirect scholarly attention away from far more critical questions, such as why different groups in society “come to develop highly divergent personal readings of information” in the first place, and how the “ossification of these diverse ideological perspectives into partisan group identities” can be prevented or undone.

In 2020 the special section will continue, featuring concepts such as Digital commons, Transparency, Autonomous systems, Value in design and Smart technologies. Honouring the openness of the project, we appreciate suggestions for future concepts to be considered and any constructive feedback on the project itself. We sincerely hope the special section Defining concepts of the digital society will become a valuable forum and a helpful resource for many.

Acknowledgments

For an academic publication project such as this, most credit is routinely attributed to only a few named authors and editors. The success of a publication, however, always builds on a much broader group of people. This is particularly true for this special section. The long journey from the first idea to the publication of this collection of concepts was made possible by the help of many. We thank first and foremost Frédéric Dubois, the Internet Policy Review’s managing editor, who has steered this rocky ride from beginning to end with careful attention, from the overarching process to the details of wording. Uta Meier-Hahn helped to push the idea towards realisation by drafting a first exposé. The board of the Internet Policy Review and colleagues gave valuable guidance; Melanie Dulong De Rosnay, Jeanette Hofmann, David Megías Jiménez, Joris van Hoboken, Seda Guerses and Lilian Edwards in particular provided instrumental feedback along the way. And Patrick Urs Riechert gave all the manuscripts the final polish. Thank you all!

Thomas Christian Bächle and Christian Katzenbach

Berlin, December 2019

Platform transience: changes in Facebook’s policies, procedures, and affordances in global electoral politics

This paper is part of Data-driven elections, a special issue of Internet Policy Review guest-edited by Colin J. Bennett and David Lyon.

In September 2018, Facebook announced that it would no longer send its employees to US campaigns as ‘embeds’ to facilitate their advertising buys and social media presences, a programme that was the subject of considerable controversy (Dave, 2018). Months earlier, Facebook rolled out a political and social issues ads verification programme and ad archive in the US and subsequently in other countries, seemingly in response to pressure from lawmakers and journalists to safeguard national elections given increasing evidence of state-sponsored disinformation campaigns (Introducing the Ad Archive Report, 2018; Requiring Authorization and Labeling for Ads with Political Content, 2019; Perez, 2018). At the time of this writing, it is widely reported that Facebook is considering more changes to its political advertising services as part of a contentious US debate and significant shifts by Google and Twitter, including the latter’s decision to ban political advertising entirely (Scola, 2019).

These changes are deeply significant for what is a global US$3.6 billion political advertising business on Facebook alone (Kanter, 2018). This is a small percentage of revenue for the global firm, but Facebook is increasingly the central way that candidates around the world get their messages in front of voters. These changes significantly impact the cost structures and efficiencies of political ads, in turn reorienting the actors who create, target, and test them in ways that we will only come to appreciate with time. And, more broadly, the decisions technology firms make shape which voters are exposed to political advertising in the course of electioneering and how they encounter political communications.

The events since the 2016 US presidential election illustrate the rapidity and scale of change in platforms. The fact that platforms continually change is well documented in the academic literature and referred to in journalistic accounts. Gawer (2014) takes an expansive view of how platforms are “evolving organizations” with respect to innovation, competition, and technology. There is a well-documented literature on machine learning, artificial intelligence, and algorithms that stresses continual change (e.g., Ananny, 2016; Beer, 2017; Klinger and Svensson, 2018). Our aim here is to develop an inductive analysis of change at the level of policies, procedures, and affordances through two case studies of Facebook in the context of electoral politics: the ephemerality of the “I’m a Voter” button Facebook rolled out internationally and the data and targeting behind political advertising. We chose these two cases for their relevance to (and normative implications for) data-driven elections but also for the rich array of secondary sources available, given that rapid change makes studying platforms and their effects difficult. The “I’m a Voter” button and changes in advertising affordances are unique in that they received significant political and industry media coverage.

We argue that these are two cases of platform transience – a concept we use to describe how platforms change, often dramatically and in short periods of time, in their policies, procedures, and affordances. We use the word ‘transience’ because it captures the idea that platform change is fast and continual, and that platforms are as a result impermanent and ephemeral in significant ways. As we argue through our analyses of these two case studies, transience can seemingly be spurred by normative pressure from external stakeholders. In our discussion section, we detail the implications of this, alongside other potential mechanisms that underlie platform transience beyond external pressure, as a call for future research.

These instances of transience also reveal the widespread failure of Facebook to be transparent about and disclose the workings of an electoral product that the company itself documented was highly impactful in spurring voting. Looking back, for instance, Facebook’s blog contains some information on the “I’m a Voter” product and who saw it during the course of elections (“Announcement”, 2008; “Election Day 2012 on Facebook | Facebook Newsroom”, 2012; “Election Day 2014 on Facebook | Facebook Newsroom”, 2014), but it fails to explain in which countries and elections the product was used after 2014, or even what determined who received the notifications. In this context, journalists often struggled to provide the public with details on the product – piecing together screenshots of what the user interface looked like, observed examples of when some people received the reminder and others did not, and timelines of changes.

As such, these cases also illustrate the implications of platform transience. First, for public representatives such as journalists and policymakers, determining the social and political consequences of platforms in order to hold them accountable or design policy interventions is especially hard given the pace of change and the lack of transparency that often accompanies it. Second, in the political context, it is likely that better resourced electoral and issue campaigns will be uniquely capable of navigating rapid change, from being able to hire staffers to meet new platform requirements such as verification to having direct access to platforms through dedicated account managers. This raises fundamental issues of electoral fairness. Third, for the users of platforms, transience and the lack of disclosure and accountability increase the likelihood of hidden manipulation and, more broadly, unequal information environments. In the case of “I’m a Voter”, entirely unbeknownst to them, some people were the targets of a social pressure experiment designed to spur voting. In the context of data and targeting, some citizens, especially those most politically engaged and ideologically extreme, receive more attention from campaigns based on the underlying technical affordances of platforms.

This paper proceeds in three parts. First, we introduce our case studies around the role of platforms in democratic processes. We then turn to our two case studies to document and analyse platform transience. The paper concludes with a call for future research on other cases of platform transience and details the implications for institutional politics.

Platform transience

Over the past decade, scholars have grown increasingly attentive to the ways social media platforms – especially those operated by firms such as Facebook, Google, Twitter, Snapchat, and their sister companies and subsidiaries such as YouTube, Instagram, and WhatsApp – serve as infrastructure for much of social life in countries around the world. As Plantin and Punathambekar (2019, p. 2) argue, platforms such as Facebook and Google have:

acquired a scale and indispensability – properties typical of infrastructures – such that living without them shackles social and cultural life. Their reach, market power, and relentless quest for network effects have led companies like Facebook to intervene in and become essential to multiple social and economic sectors.

In this literature, ‘platforms’ refer simultaneously to technical infrastructures and the corporate organisations that develop, maintain, monetise, and govern them. This means that analysis of platforms entails their infrastructural elements – such as their ubiquity and scale through their technical components – alongside their corporate organisation, policies and procedures, and revenue models. There are a number of veins of literature that analyse various aspects of platforms in this expansive sense. There is an emerging body of work on content moderation, especially in relation to the practices and policies behind what these companies do (Gillespie, 2018) and how this labour is structured and performed (Roberts, 2019). Other research has analysed the economics of platforms and their business models (for a review see de Reuver, Sørensen, and Basole, 2018), governance structures (e.g., Constantinides, Henfridsson, and Parker, 2018; Gorwa, 2019; Helberger, Pierson, and Poell, 2018), and data (e.g., Helmond, 2015). A body of legal analysis details the regulatory implications of platforms, especially in relation to competition (Pasquale, 2012) and user privacy and data (Balkin, 2016).

We know of only a few research works to date that have systematically analysed platform companies such as Facebook and Google through the lens of their interactions with other fields. As Van Dijck, Poell, and de Waal (2018) argue, platforms are “programmable digital architecture designed to organize interactions between users – not just end users but also corporate entities and public bodies…” (ibid., p. 9) – in the process transforming other fields, which they demonstrate through case studies of news (see also Nielsen and Ganter, 2018), urban transport, health care, and education. Other scholars have analysed how the specific corporate organisation, policies, and business models of platforms in one domain such as politics impact that field (Kreiss and McGregor, 2018; Kreiss and McGregor, 2019).

The ways that platforms shape other fields are especially interesting given the fact that they undergo continual changes. Yet, we lack understanding of the mechanisms and consequences of platform change, particularly in the context of platforms’ organisational workings such as policies and procedures, the products they offer, or their affordances. As noted above, a number of scholars have examined various aspects of platform change, especially in the context of algorithms (Bucher, 2018) and the versioning of technical products (e.g., Chun, 2016; Karpf, 2012). Platform transience is especially likely to impact actors in other fields given the ways instability likely disrupts institutionalised ways of working and established practices. At the same time, it likely impacts sectors differentially based on the degree to which fields rely on platform products and services, or on their comparative autonomy with respect to economic or technological power.

We are particularly interested in change at the levels of policies, procedures, and affordances. With respect to ‘policies’, we mean the company-derived rules governing the use of platforms. This is an expansive category that includes everything from terms of service to technical standards. In terms of ‘procedures’, we mean platforms’ ways of working both internally and externally with stakeholders. These include everything from the mechanisms that platform companies have for enforcing policy decisions to how they enable those affected by them to contest decisions. In the political domain, procedures relate to the ways that Facebook political ad sales staffers are vehicles for practitioners to contest policy decisions (such as ads rejected for objectionable content) but also more broadly the organisational and staffing patterns that the company has developed for reviewing content, adjudicating disputes, advising campaigns, developing new political products, etc. We define ‘affordances’ in terms of previous work as: “what various platforms are actually capable of doing and perceptions of what they enable, along with the actual practices that emerge as people interact with platforms” (Kreiss, Lawrence, and McGregor, 2018, p. 19). The concept of affordances is important because it points to the ways that code structures what people can do on and with platforms, even while platforms invite particular uses through framing what their features are for (see Nagy and Neff, 2015). Policies, procedures, and affordances are likely inter-related in the sense that change in one domain likely affects the others. For example, changes in policies can lead to new procedures, such as when Facebook required the verification of political advertisers, which then led to new registration processes with the platforms. Sometimes, affordances create the need for new policies, such as when the ability to edit the headlines of publishers in ads spurred new policies to prevent this from furthering misinformation (see Kreiss and McGregor, 2019).

The case studies

To analyse inductively why platforms change and how those changes impact other fields, we developed two case studies relating to Facebook’s international electoral efforts. These two case studies are constructed primarily from data collected for a larger comparative project from January to May 2019 on Facebook and Google’s involvement in government and elections in five countries: Chile, Germany, India, South Africa, and the United States. We conducted a qualitative content analysis of news coverage about Facebook’s work in institutional politics, including industry news outlets such as AdExchanger and Campaigns and Elections. We also analysed material from Facebook’s Newsroom, company help centre documents, and company blogs and websites regarding products and services relating to Facebook’s work in institutional politics. Finally, we downloaded online resources provided to political actors, such as Facebook’s Digital Diplomacy best practices guide and English and German versions of Facebook guides for politicians and governmental actors.

Research into products and services started in the US context. When articles or websites listed additional countries that the services were offered in, we noted this. The creation of our search terms was an iterative and on-going process. After we made lists of policies, products, services, and affordances in English in the US context, we used web services to translate them into Spanish and German and searched for similar material via Google Incognito and a virtual private network (VPN) connection from each relevant country (we chose Express VPN because of its servers in each of the five countries). During the period from January to May 2019 we also used a US-based Facebook Ad Manager to explore the ad targeting interface and create campaigns, which we did not turn on (i.e., no ads were purchased but the campaigns were built out and saved in the platform as inactive or paused). In addition, we created accounts based in Germany and Chile using a VPN connection from those countries.

During this process of passive data collection (see Karpf, 2012), we were able to document Facebook’s advertising interface changing as we used it across countries. We also found international coverage of the “I’m a Voter” button and other election reminders which were not clearly documented in Facebook’s Newsroom. We then selected these two cases for further analysis and broadened our search outside of the original five countries. While we focus on Facebook in the empirical sections below, in the discussion section we seek to inductively develop an analysis that extends to all platforms in the context of the ways external pressures contribute to platform transience.

The “I’m a Voter” button and data-driven politics

In 2008, Facebook released an “It’s Election Day” reminder with a button for users to declare “I’m a Voter” to their friends. The “I’m a Voter” button was created to appear for only a day. It was ephemeral by design. The feature included pictures of a few select friends and the total number of the user’s friends who self-declared they had voted in the election (Figure 1) (“Announcement: Facebook/ABC News Election ’08 | Facebook Newsroom”, 2008; “Election Day 2012 on Facebook | Facebook Newsroom”, 2012). By 2012 this platform feature was accompanied by ways for users to find their polling place, share their political positions, and join groups to debate issues (“Election Day 2012 on Facebook | Facebook Newsroom”, 2012). In 2012, Facebook was just gaining many of the features that are now core to its platform including running ‘sponsored stories’ (also known as native advertising) in users’ news feeds (Mullins, 2016; D’Onfro, 2016).

Figure 1: 2012, United States (“Election Day 2012 on Facebook | Facebook Newsroom”, 2012)

On Facebook’s blog, the company stated that these civic engagement products were born out of a commitment “to encouraging people who use our service to participate in the democratic process” (“Election Day 2012 on Facebook | Facebook Newsroom”, 2012). The existence of these tools as early as 2008 speaks to this commitment — Facebook, founded in 2004, began putting resources into engaging its users in politics during the first presidential election after its founding.

While at first glance promoting voter participation might be normatively desirable on democratic grounds, Facebook’s attempt to engage citizens in democracy through its platform sparked controversy specifically because of the evidence the company provided that it actually worked. In 2010, Facebook partnered with researchers to test the impact of different versions of the election day reminders in a field experiment. All Facebook users over the age of 18 in the US were assigned to treatment and control groups, and voter turnout was later measured through actual voting records (Bond et al., 2012). In the treatment conditions, some users saw the “I’m a Voter” button with a count of how many people had marked themselves as voters (informational group); others saw that count along with pictures of some of their friends who had self-declared themselves as voting (social group) (see Figure 2) (ibid.). In the control group, users saw no election reminder at all. The study estimated that 340,000 people turned out to vote due to the election reminders they saw (ibid.). These findings are in line with many other studies and experiments showing how social incentives increase voter turnout (Gerber, Green, and Larimer, 2008). This experiment was cited in the campaign industry press as evidence of the power of Facebook in elections (Nyczepir, 2012).

Figure 2: United States Election Day 2010 turnout study experimental conditions: informational group and social group (Bond et al., 2012).

Questions soon arose in the United States, and later in Europe during the subsequent roll out, about who would benefit from this tool that had the power to increase vote share in potentially consequential ways. Reporting from the United States, Sweden, and the United Kingdom at the time all raised concerns about the questionable ethics of testing different versions of the notification as well as the fact that the company was not clear about what versions were shown to which users and where (Grassegger, 2018; Hebblethwaite, 2014; Sifry, 2014).

Indeed, there was very little transparency about this tool that evidence suggested could be deeply impactful in electoral contexts. Based on screenshots and write-ups by journalists from nine countries from 2014 to 2016, the “I’m a Voter” button’s specific features changed and varied from year to year and country to country. While some countries had the “I’m a Voter” button (Figure 3), others used “share now” instead (Figure 4). The button could be on the right (Figures 3 and 4) or the left side (Figures 5, 6, 8, and 9), and under the option to ‘share your civic participation’ there could be privacy notifications that users were sharing with the public (Figure 4), a prompt to “return to Facebook” (Figure 5), an option for more information (Figures 3, 6, 8 and 9), or no additional prompt at all (Figure 7). This feature was rolled out in Israel and India and was reported by Reuters to have been rolled out worldwide in 2014; the company itself did not verify this, as no Facebook Newsroom articles cover the rollout in these countries (Cohen, 2014; Debenedetti, 2014; Kenan, 2015). At the same time, the differences in the design and prompts were not addressed by Facebook in its blog posts, nor were the effects of user interactions with different types of reminders in different countries.

Figure 3: 2014, United States (“Election Day 2014 on Facebook | Facebook Newsroom”, 2014)
Figure 4: 2014, Scotland (Grassegger, 2018)
Figure 5: 2014, India (Rodriguez, 2014)
Figure 6: 2014, European Union (Cardinale, 2014)
Figure 7: 2015, Israel (Kenan, 2015)
Figure 8: 2016, Philippines (Lopez, 2016)
Figure 9: 2016, United Kingdom (Griffin, 2016)

These undocumented rollouts and differing designs are especially notable given the documented electoral impacts of this tool and the difficulty journalists and researchers faced, and continue to face, in identifying and chronicling the deployment of the tool – both in the moment and especially now, when systematically reconstructing international product rollouts, variation, and change is impossible. Over the decade of its development, it was clear that Facebook was not transparent about its electoral product and failed to disclose basic information to electoral stakeholders such as journalists. When the company itself provided information, it raised more questions. Michael Buckley (Facebook’s vice president for global business communications) told a Mother Jones reporter that not everyone in the United States saw the feature during the 2012 presidential election due to software bugs, but that these bugs were entirely random (Sifry, 2014). Buckley stated that, in contrast, during the 2014 midterm election “almost every user in the United States over the age of 18 will see the ‘I Voted’ button”, although as the author notes these comments were unverifiable and the accuracy of this statement was unclear (ibid.). Indeed, four years later the reach of “I Voted” was still unclear, and observers were still attempting to track the deployment of the tool as best they could. For example, in Iceland a lawyer questioned her friends and believed that not everybody was seeing the vote reminder at the same time, on the same devices, or at all (Grassegger, 2018). Despite her attempts, being outside the company, she could not determine specific variations of the text, where it appeared on users’ feeds, or whether it was being displayed on all operating systems and on older versions of the Facebook app (Grassegger, 2018).

As such, the “I’m a Voter” button also reveals the lack of transparency and disclosure by Facebook and the scope of international variation and transience of the platform. It also reveals the role of journalists and other observers in seeking to hold Facebook accountable, and the apparent success they have had at times in compelling platform change. For example, negative press coverage and pressure from journalists seemingly pushed Facebook to change its policies on running tests on users and to publicly declare that it had stopped the official testing of the “I’m a Voter” affordance in 2014, at least in the United States (Ferenstein, 2014). Journalistic scrutiny might also have prompted the downplaying of the tool itself. In 2016, for instance, Facebook made changes to the tool in the United States, making it more difficult to access and requiring users to click through multiple menus to share their status as voting (Figure 10) (Grant, 2016). Also in 2016 in the US, Facebook released, and promoted, many more civic engagement tools centred around educating users about political issues and what would be on their ballots instead of the “I’m a Voter” declaration (“Preparing for the US Election 2016 | Facebook Newsroom”, 2016).

Figure 10: United States, 2016 (Grant, 2016)

In the end, the extent of changes in the interface for each election, international variations in the tool, who saw election reminders and their effects, the data the company collected, and what will happen during future elections are all unknown. In addition to making it difficult to hold Facebook accountable, these transient affordances impact the political information environment in unknown and potentially deeply problematic ways, as we return to in the discussion section.

Political microtargeting and related advertising affordances

Facebook provides advertisers with audiences to target. Some of these audiences are segmented based on user data such as self-declared age, self-declared employer, or interests deduced from the websites and Facebook pages users visit. Facebook also offers geolocation targeting of countries, states, cities, and, in the United States, congressional districts (Ads Manager - Manage Ads – Campaigns, n.d.). Facebook’s behavioural and interest-based targeting includes online or offline behaviour and interests, such as visiting specific locations. The company also offers cross-device targeting for almost all of its advertising, meaning that ads are delivered to people on mobile or desktop devices and these profiles are linked so that responses to the advertisements can be attributed back to a single person. In addition, Facebook allows advertisers to load their own “first party” data, including email addresses gathered in stores and website visits online. These first-party audiences can then be targeted on the platform and they can be used as the basis for lookalike audiences.

Ironically, given all the negative attention it has received (Bashyakarla et al., 2019; Chester and Montgomery, 2017), Facebook’s interest-based and behavioural targeting is notably limited in the political sphere in the US. The content guidelines and permissible forms of targeting the company allows on its platform are far more restrictive than what an expansive constitutional First Amendment in the United States protects and what political practitioners currently do in other mediums, such as direct mail or door-to-door canvassing. For example, in the United States, when we started our research in spring 2019, Facebook’s ad targeting did not have “registered Republicans”, “registered Democrats”, or any party membership categories available to target, nor did it include voting behaviour, such as who voted in the last election (Ads Manager - Manage Ads – Campaigns, n.d.). However, there were ideological targeting options for “US Politics”, including “very conservative”, “conservative”, “moderate”, “liberal”, and “very liberal” as well as “likely to engage with political content (conservative)”, “moderate”, and “liberal” (Figure 11) (ibid).

Figure 11: Screenshot from the Facebook ad buying interface, 16 January 2019

These micro-targeting options were also available from Facebook Ad Manager accounts made in Chile and Germany (Administrador de anuncios, n.d; Werbeanzeigenmanager, n.d.). The voter targets, however, were oddly specific to the United States – meaning that users in Chile and Germany were being invited to target US citizens. In addition to these political categories, across all the countries we considered, advertisers could search for different public figures such as Angela Merkel, Barack Obama, or Sebastian Piñera, or political groups such as the Partido Conservador or the Republican Party, and target users who “have expressed an interest in or liked pages related to” those people or groups (ibid.). The degree to which any of these categories are used by advertisers is unknown, as is their actual accuracy in predicting voter identification with liberal or conservative ideologies.

In a clear example of the transience of Facebook affordances, these advertising categories changed while we were conducting the research for this project. On 14 March 2019, Facebook Ad Manager abruptly notified us that the US “very conservative” to “very liberal” political targeting options referenced above were being removed (Figure 12). They were also removed in Germany and Chile. The other advertising targeting capabilities, including general interest in parties and political figures and “likely to engage with political content”, were still available. Unlike other well-publicised advertising changes, there were no Facebook blog posts nor major news coverage related to the removal of the five political ad audiences that we could find. Had we not had audience universes set up to target these segments (although, as detailed above, we did not actively run advertising), we do not think we would have been aware of this change.

Figure 12: Screenshot from US Facebook ad buying interface, 14 March 2019. Audience segments being removed.

What prompted the removal of these advertising categories is a mystery, although we suspect that it is related to the ongoing and intense scrutiny of Facebook’s advertising capabilities given numerous controversies since 2016. For example, in 2017 US media outlet ProPublica published an investigative report on how advertisers could put their ads in front of “Jew haters” using Facebook’s micro-targeting (Angwin et al., 2017). This audience segment and others were created by an automated system without human review based on what users put on their profiles (ibid.). When enough users declared that they were interested in hating Jews, the algorithm accepted this and made them available to advertisers for targeting. Facing public backlash, Facebook removed the audience segments called out by ProPublica (ibid.).

These targeting changes have taken place alongside other significant changes in Facebook’s political advertising policies, such as the ending of the political ‘embed’ programme and of commissions for its account managers for political ad sales (Dave, 2018). More changes are reportedly on the way as a contentious debate over political advertising in the US takes shape, with Twitter banning all political advertising and Google limiting political micro-targeting and more actively vetting political claims (Cox, 2019). Taken together, these amount to significant and sweeping changes to how political advertising can be conducted on the platforms, especially in the United States, and they are unfolding during the course of a presidential election cycle. Again, similar to the “I’m a Voter” affordance, there was little transparency and disclosure from Facebook regarding these changes. Facebook’s Help Center, newsroom, and business pages provide no list of retired or new audience segments. We could find no direct documentation from Facebook showing that either the political audiences we saw removed or those covered by ProPublica ever existed.

Normative pressures from external stakeholders such as journalists and the ever-present talk of regulation in the media since 2016 in the US (along with some initial proposed bills such as the Honest Ads Act) likely influenced these changes in Facebook’s policies and affordances, although we cannot know for certain. At the same time, Facebook’s advertising platform has also undergone a series of changes that impact political advertising, likely due to economic incentives. From 2014 to 2017, Facebook introduced carousel ads, lead ads, and group ads, and created the Audience Network to allow advertisers to reach Facebook users on other mobile apps (“The Evolution of Facebook Advertising”, 2017). At the same time, Facebook removed ad formats (“Full List of Retired Ad Formats”, n.d.) as well as numerous metrics (“About metrics being removed”, n.d.). Rationales given for these changes include increasing value to advertisers. For example, Facebook stated that “to help businesses select the most impactful advertising solutions, we've removed the ability to boost certain post types that have proven to generate less engagement and that aren't tied to advertiser objectives” (“Full List of Retired Ad Formats”, n.d.; “About metrics being removed”, n.d.). The company meanwhile removed metrics to replace them with others “that may provide insights that are more actionable” (ibid.).

Discussion

These cases illustrate how Facebook as a platform undergoes a continual set of changes in its affordances, often without transparency and disclosure, as well as the seeming role of external pressure in driving them. In the case of the shifting international rollout and rollback of the “I’m a Voter” button, it was the stated desire of the company to be socially responsible that put engineering and design resources towards reminding people in democratic countries to vote. Then, it was likely the steady drumbeat of pressure from journalists and observers around the world asking questions about the rollout, implementation, and transparency of the initiative, along with difficult questions for the firm about (unintended) electoral manipulation, that led Facebook to scale back this feature of the platform – culminating in no apparent public announcement of it on the Facebook Newsroom blog during the 2016 US election.

In the case of political data and targeting, ever-present policy and affordance ephemerality likely occurs for a mix of reasons relating to external normative pressures and commercial incentives. Clear normative pressure from journalists, as well as the ever-present voices of political representatives and activists in the media, about the role of political advertising in undermining electoral integrity, heightening polarisation, and leading to potential voter manipulation, especially in the wake of Brexit and the 2016 US presidential election, seemingly led to fundamental changes in Facebook’s policies and procedures (such as requiring verification for advertisers and building out a political ads database) and affordances (removing the capacity to target based on ideology). At the same time, commercial economic incentives that underlie advertising more broadly have spillover effects in politics, leading to things such as new ad formats and targeting capabilities.

Future research can analyse the contexts within which external pressure compels platform change, and the various stakeholders involved. Due to platform companies increasingly reaching into all areas of social life, the set of stakeholders concerned with their functioning and governance is vast. As Van Dijck, Poell, and De Waal (2018, p. 3) nicely capture, the values of platforms in their architectures and norms often come into conflict with public values in various social domains. As such, platform changes are often the outcomes of a “series of confrontations between different value systems, contesting the balance between private and public interests” (ibid., p. 5). As infrastructure, platforms reconfigure various social sectors in deeply meaningful ways, and bring about conflicts over values and norms relating to privacy, public speech, and electoral fairness, to name a few.

While we can never know for certain, the cases developed here support the plausible conclusion that outside stakeholders exert considerable pressure on platform companies and that this pressure can spur change. As these case studies revealed, journalists in particular exerted normative scrutiny over Facebook. Platform companies are likely sensitive to negative journalistic attention in part because the press is a representative of public opinion, but also because it can trigger governmental scrutiny, drops in stock price, and user backlash, all of which were in evidence after the 2016 US presidential and Brexit elections. Another category of stakeholder likely particularly relevant in the context of politics is activist groups and partisan and ideological organisations. An example is organisations such as the ACLU, which has led efforts in the US to end certain forms of demographic targeting. These are not the only forms of external pressure from stakeholders. There are likely more expansive sets of regulatory and normative concerns for international platforms that transcend any one recognisable field of activity. These include the regulatory pressures exerted on platform companies by many, and diverse, formal bodies internationally, such as the Federal Trade Commission in the United States, which has compelled Facebook to alter its data security practices in the country (e.g., Solove and Hartzog, 2014).

At the same time, platforms likely make a host of voluntary decisions about policy, procedures, and affordances that lead to changes towards what they perceive as desirable social ends. As our case studies demonstrated, seemingly well-intentioned actions by Facebook, such as promoting electoral participation through polling place look-up tools and universal voting reminders, shape how platforms work. These ends are normatively defined in relation to a broader cultural and social context. Therefore, change is not simply compelled by pressure, but also about actors desiring to be in line with social values, expectations, and ideals. Finally, it is clear from our case studies that there are a number of economic incentives that underlie platform transience. As we detailed, in the past few years Facebook has introduced carousel ads, lead ads, and group ads, new metrics, and targeting affordances – all of which are routinely deployed by political actors.

Future research can analyse additional mechanisms that underlie platform transience. For example, these firms are seemingly isomorphic to one another given that they not only compete in similar domains but also often react to one another’s moves and follow one another’s decisions around things such as content policy and self-regulation. See, for instance, the cascade of platform decisions to ban the American conspiracist Alex Jones during the summer of 2018 (Hern, 2018) and the recent shifts in political advertising spurred by Twitter’s decision to ban all political ads (Cox, 2019). Part of this comes in the context of journalists’ ability to exert normative pressure on platforms once a rival makes a significant policy change, but it also reflects these firms following one another with regard to the normative stakes involved. It is also likely that what certain firms do, or what they are subject to, changes the possibilities for action of other firms. Future research is necessary to analyse when these types of changes occur and why.

Methodologically, scholars can go beyond our initial approach here and comparatively analyse moments of significant change in the context of policies, procedures, and affordances through a process tracing approach, detailing the causal chains that were likely at play in firms’ decision-making (Collier, 2011), or reconstruct timelines through secondary sources with the aim of comparing transience across platforms and national contexts. And, going forward, scholarship that concerns platforms can make a concerted attempt to document platform changes, even when they are not the primary object of analysis. For example, researchers should record the dates they accessed pages, take screenshots, and save the pages to the Internet Archive’s Wayback Machine.
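To illustrate how such documentation might be made routine, the sketch below logs the date of each observation and asks the Wayback Machine to capture the page via its public Save Page Now endpoint (https://web.archive.org/save/). This is only a minimal illustration under our own assumptions: it presumes the Python requests library is available, and the file name observations.csv and the helper archive_and_log are hypothetical choices for this example rather than part of any established research toolkit.

```python
# Minimal sketch: log the access date of a platform policy page and request a
# Wayback Machine capture via the public "Save Page Now" endpoint
# (https://web.archive.org/save/<url>).
# Assumptions: `requests` is installed; "observations.csv" is a hypothetical
# local log file chosen for this example.

import csv
import datetime
import requests


def archive_and_log(url: str, note: str, log_path: str = "observations.csv") -> None:
    """Request a Wayback Machine snapshot of `url` and record the visit locally."""
    accessed = datetime.datetime.utcnow().isoformat()

    # Ask the Wayback Machine to capture the page; if a snapshot is created,
    # its path is usually returned in the Content-Location header.
    response = requests.get(f"https://web.archive.org/save/{url}", timeout=60)
    snapshot = response.headers.get("Content-Location", "")

    # Append the observation (date, URL, snapshot path, free-text note) to a CSV log.
    with open(log_path, "a", newline="", encoding="utf-8") as f:
        csv.writer(f).writerow([accessed, url, snapshot, note])


if __name__ == "__main__":
    archive_and_log(
        "https://www.facebook.com/business/help/metrics-removal",
        note="Help Center page on removed ad metrics, checked during fieldwork",
    )
```

Screenshots and logged-in interface states (such as ad manager views) still have to be captured manually, but even a simple log of access dates and snapshot links makes later reconstruction of timelines considerably easier.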

Normatively, platform transience raises larger questions relating to public accountability, electoral fairness, and inequality in information environments that have significant implications for policy-making. Perhaps the clearest case is how, in each of the instances of transience detailed here, there was shockingly little in the way of public disclosure of these changes to stakeholders and the public, and a lack of transparency in terms of what changed, when, and why. Indeed, there was little in the way of a clear justification for any of the changes chronicled here. And, in the case of “I’m a Voter”, there was little in the way of disclosure of how this tool actually worked. Given this, it is hard, if not impossible, for journalists, elected officials, researchers, and regulatory agencies to monitor Facebook, and likely all platforms, effectively and hold them accountable for their policies, procedures, and affordances, and those they take away.

The fact that these cases of transience occurred in the context of international institutional politics makes them all the more troubling. The fact that journalists had to guess at the implementation of “I’m a Voter” speaks to the magnitude of the potential problem, especially given that it likely shaped electoral participation for thousands, if not millions, of individuals around the world according to Facebook’s own data. The firm should be much more proactively forthcoming about the workings of its products and about likely and potential changes in its policies, procedures, and affordances, such as alerting journalists and other stakeholders when changes might occur, and it should provide archival and public documentation of them. Even more, Facebook should develop clear justifications for its decisions and provide opportunities for those with questions to find out information and potentially contest the decisions the firm makes.

At the same time, these cases also highlight how platforms have the power to transform the shape and the dynamics of other fields. In the political domain, this raises issues related to electoral fairness. Facebook’s 7 million advertisers (Flynn, 2019), including political campaigns, have to navigate a rapidly changing advertising environment with limited notice and often no records of transient features of the platforms that impact the voters they can reach, how they can reach them, and the cost of doing so. Through established relationships with platform companies, larger, higher-spending political advertisers may have forewarning of changes and help in understanding them, thus granting them unique advantages over their smaller rivals (Kreiss and McGregor, 2019). Meanwhile, larger campaigns and consultancies with many staffers are likely better able to perceive and respond to changes in platforms than their smaller counterparts. For example, new verification requirements likely benefited consultancies with the infrastructure to handle the process for their clients, and changes in content moderation policies likely benefit large firms that can get a hearing for such things as disapprovals (ibid.). How transience impacts other fields should be a key area of research going forward.

At the core of all of the changes documented here is the likelihood that platforms can create fundamentally unequal information environments. The fact that not all citizens of any given country likely saw the same social cues to vote means that some have powerful prompts to turn out, and therefore some citizens’ voices are disproportionately heard. With respect to data and targeting, the lack of transparency around key changes in things such as the targeting of political ads means that citizens cannot hope to know why they are seeing the messages they are – and journalists and regulators cannot answer questions regarding who receives political messages driving them to the polls or, in the case of demobilising ads, keeping them home. For regulators, establishing rules in a rapidly changing platform ecosystem without transparency into what is changing, why, and when creates a unique challenge. Facebook’s ad transparency database simply underscores this point – with ongoing changes in targeting and the platform’s own algorithms, and only the crudest company-derived categories of ad reach available, there is little in the way of transparency regarding the ways that political content is being delivered, and political attention structured, by campaigns and the platform itself.

Conclusion

While all of our empirical cases concerned Facebook, transience is likely a feature of all platforms, and external pressure from stakeholders likely affects all platforms. As this paper detailed, particularly concerning is the lack of clear public disclosure and transparency around Facebook’s changes to its platform, which potentially impacted what millions of people around the world saw in terms of social pressure to vote and how campaigns could contact voters. This raises a deeply troubling set of normative issues in the context of institutional politics, from unequal information environments to the fairness of electoral competition. The challenges that we had as researchers documenting platform changes and their implications, and that observers around the world encountered as well, underscore how difficult crafting effective policy responses to platform power will be unless we compel stronger public disclosure and accountability mechanisms on these firms.

Disclosure

When filing the final version of this text, the authors declared that they had contributed equally to this paper.

References

About metrics being removed. (n.d.). Retrieved June 25, 2019, from Facebook Ads Help Center website: https://www.facebook.com/business/help/metrics-removal

Administrador de anuncios - Manage Ads (n.d.). Retrieved March 15, 2019, from https://business.facebook.com/adsmanager/manage/adsets/edit

Ads Manager - Manage Ads - Campaigns. (n.d.). Retrieved March 15, 2019, from https://business.facebook.com/adsmanager/manage/campaigns?

Ananny, M. (2016). Toward an ethics of algorithms: Convening, observation, probability, and timeliness. Science, Technology, & Human Values, 41(1), 93–117. https://doi.org/10.1177/0162243915606523

Angwin, J., Varner, M., & Tobin, A. (2017, September 14). Facebook Enabled Advertisers to Reach ‘Jew Haters’. ProPublica. https://www.propublica.org/article/facebook-enabled-advertisers-to-reach-jew-haters

Announcement: Facebook/ABC News Election ’08 (2008, January 4). Facebook Newsroom. Retrieved June 25, 2019, from https://newsroom.fb.com/news/2008/01/announcement-facebookabc-news-election-08/

Balkin, J. M. (2016). Information fiduciaries and the first amendment. UC Davis Law Review, 49(4), 1183–1234. Retrieved from https://lawreview.law.ucdavis.edu/issues/49/4/Lecture/49-4_Balkin.pdf

Bashyakarla, V., Hankey, S. Macintyre, A., Rennó, R., & Wright, G. (2019). Personal Data: Political Persuasion. Inside the Influence Industry. How it works. Berlin: Tactical Tech. Retrieved from https://tacticaltech.org/media/Personal-Data-Political-Persuasion-How-it-works.pdf

Beer, D. (2017). The social power of algorithms. Information, Communication & Society, 20(1), 1–13. https://doi.org/10.1080/1369118X.2016.1216147

Bond, R. M., Fariss, C. J., Jones, J. J., Kramer, A. D. I., Marlow, C., Settle, J. E., & Fowler, J. H. (2012). A 61-million-person experiment in social influence and political mobilization. Nature, 489(7415), 295–298. https://doi.org/10.1038/nature11421

Bucher, T. (2018). If... then: Algorithmic power and politics. New York: Oxford University Press. https://doi.org/10.1093/oso/9780190493028.001.0001

Cardinale, R. (2014, May 25). Elezioni Europee 2014: Il pulsante Facebook e il Google Doodle [European Elections 2014: Facebook button and Google Doodle]. Retrieved July 3, 2019, from Be Social Be Honest website: http://www.besocialbehonest.it/2014/05/25/elezioni-europee-2014-il-pulsante-facebook-e-il-google-doodle/

Chester, J., & Montgomery, K. C. (2017). The role of digital marketing in political campaigns. Internet Policy Review, 6(4). https://doi.org/10.14763/2017.4.773

Chun, W. H. K. (2016). Updating to Remain the Same: Habitual New Media. Cambridge, MA: The MIT Press.

Cohen, D. (2014, April 10). India Facebook Users Can Declare, ‘I’m A Voter.’ Adweek. Retrieved June 25, 2019, from https://www.adweek.com/digital/india-im-a-voter/

Collier, D. (2011). Understanding process tracing. PS: Political Science & Politics, 44(4), 823–830. https://doi.org/10.1017/s1049096511001429

Constantinides, P., Henfridsson, O., & Parker, G. G. (2018). Introduction-Platforms and Infrastructures in the Digital Age. Information Systems Research, 29(2), 381–400. https://doi.org/10.1287/isre.2018.0794

Cox, K. (2019, November 22). Google Bans Microtargeting and “False Claims” in Political Ads. Ars Technica. Retrieved December 9, 2019, from https://arstechnica.com/tech-policy/2019/11/google-bans-microtargeting-and-false-claims-in-political-ads/

Dave, P. (2018, September 21). Facebook to drop on-site support for political campaigns. Reuters. Retrieved July 10, 2019, from https://www.reuters.com/article/us-facebook-election-usa/facebook-to-drop-on-site-support-for-political-campaigns-idUSKCN1M101Q

Debenedetti, G. (2014, May 19). Facebook to roll out “I’m a Voter” feature worldwide. Reuters. Retrieved from https://www.reuters.com/article/us-usa-facebook-voters-idUSBREA4I0QQ20140519

de Reuver, M., Sørensen, C., & Basole, R. C. (2018). The digital platform: a research agenda. Journal of Information Technology, 33(2), 124–135. https://doi.org/10.1057/s41265-016-0033-3

D’Onfro, J. (2016, February 4). Facebook looked completely different 12 years ago — here’s what’s changed over the years. Business Insider. Retrieved June 25, 2019, from https://www.businessinsider.com/what-facebook-used-to-look-like-12-year-ago-2016-1

Election Day 2012 on Facebook. (2012, November 6). Facebook Newsroom. Retrieved June 22, 2019, from https://newsroom.fb.com/news/2012/11/election-day-2012-on-facebook/

Election Day 2014 on Facebook. (2014, November 4). Facebook Newsroom. Retrieved July 3, 2019, from https://newsroom.fb.com/news/2014/11/election-day-2014-on-facebook/

American Civil Liberties Union. (2019, March 9). Facebook agrees to sweeping reforms to curb discriminatory ad targeting practices [Press release]. Retrieved from: https://www.aclu.org/press-releases/facebook-agrees-sweeping-reforms-curb-discriminatory-ad-targeting-practices

Ferenstein, G. (2014, November 2). After being criticized for its experiments, Facebook pulls the plug on a useful one. VentureBeat. Retrieved July 5, 2019 from https://venturebeat.com/2014/11/02/facebook-is-so-scared-of-the-press-theyve-stopped-innovating/

Flynn, K. (2019, January 30). Cheatsheet: Facebook now has 7m advertisers. Digiday. Retrieved July 6, 2019, from https://digiday.com/marketing/facebook-earnings-q4-2018/

Full List of Retired Ad Formats. (n.d.). Retrieved June 25, 2019, from Facebook Ads Help Center website: https://www.facebook.com/business/help/420508368346352

Gerber, A. S., Green, D. P., & Larimer, C. W. (2008). Social Pressure and Voter Turnout: Evidence from a Large-Scale Field Experiment. American Political Science Review, 102(1), 33–48. https://doi.org/10.1017/S000305540808009X

Gillespie, T. (2018). Custodians of the Internet: Platforms, content moderation, and the hidden decisions that shape social media. New Haven: Yale University Press.

Gorwa, R. (2019). What is platform governance? Information, Communication & Society, 22(6), 854–871. https://doi.org/10.1080/1369118X.2019.1573914

Grant, M. (2016, November 8). How To Show That You Voted On Facebook. Bustle. Retrieved June 25, 2019, from https://www.bustle.com/articles/193836-how-to-use-the-voting-in-the-us-election-status-on-facebook-because-you-deserve-a

Grassegger, H. (2018, April 15). Facebook says its ‘voter button’ is good for turnout. But should the tech giant be nudging us at all? The Guardian. Retrieved June 25, 2019 from https://www.theguardian.com/technology/2018/apr/15/facebook-says-it-voter-button-is-good-for-turn-but-should-the-tech-giant-be-nudging-us-at-all

Griffin, A. (2016, May 5). How Facebook is manipulating you to vote. The Independent. Retrieved July 9, 2019 from https://www.independent.co.uk/life-style/gadgets-and-tech/news/uk-elections-2016-how-facebook-is-manipulating-you-to-vote-a7015196.html

Habblethwaite, C. (2014, May 22). Why does Facebook want you to vote? BBC. Retrieved from https://www.bbc.com/news/blogs-trending-27518691

Helberger, N., Pierson, J., & Poell, T. (2018). Governing online platforms: From contested to cooperative responsibility. The Information Society, 34(1), 1–14. https://doi.org/10.1080/01972243.2017.1391913

Helmond, A. (2015). The platformization of the web: Making web data platform ready. Social Media + Society, 1(2). https://doi.org/10.1177/2056305115603080

Hern, A. (2018, August 6). Facebook, Apple, YouTube and Spotify Ban Infowars’ Alex Jones. The Guardian. Retrieved July 8, 2019 from https://www.theguardian.com/technology/2018/aug/06/apple-removes-podcasts-infowars-alex-jones

Introducing the Ad Archive Report: A Closer Look at Political and Issue Ads. (2018, October 23). Facebook Newsroom. Retrieved July 7, 2019, from https://newsroom.fb.com/news/2018/10/ad-archive-report/

Kanter, J. (2018, May 25). Facebook Gave 4 Reasons Why It’s Ready to Lose Money and Credibility to Continue Running Political Adverts. Business Insider. Retrieved July 9, 2019, from https://techcrunch.com/2018/04/23/facebooks-new-authorization-process-for-political-ads-goes-live-in-the-u-s/

Karpf, D. (2012). Social science research methods in Internet time. Information, communication & society, 15(5), 639–661. https://doi.org/10.1080/1369118X.2012.665468

Kenan, E. (2015, February 19). Facebook to add “I voted” button for Israeli elections. Ynetnews. Retrieved June 25, 2019, from https://www.ynetnews.com/articles/0,7340,L-4628546,00.html

Klinger, U., & Svensson, J. (2018). The end of media logics? On algorithms and agency. New Media & Society, 20(12), 4653–4670. https://doi.org/10.1177/1461444818779750

Kreiss, D., Lawrence, R. G., & McGregor, S. C. (2018). In their own words: Political practitioner accounts of candidates, audiences, affordances, genres, and timing in strategic social media use. Political communication, 35(1), 8–31. https://doi.org/10.1080/10584609.2017.1334727

Kreiss, D., & McGregor, S. C. (2018). Technology firms shape political communication: The work of Microsoft, Facebook, Twitter, and Google with campaigns during the 2016 US presidential cycle. Political Communication, 35(2), 155–177. https://doi.org/10.1080/10584609.2017.1364814

Lopez, M. (2016, May 8). Facebook announces ‘I’m a voter’ button ahead of 2016 Philippine elections. GadgetMatch. Retrieved July 3, 2019 from https://www.gadgetmatch.com/facebook-election-day-button-2016-philippine-elections/

Mullins, J. (2016, February 4). This Is How Facebook Has Changed Over the Years. E! Online. Retrieved June 25, 2019, from https://www.eonline.com/news/736769/this-is-how-facebook-has-changed-over-the-past-12-years

Nagy, P., & Neff, G. (2015). Imagined affordance: Reconstructing a keyword for communication theory. Social Media + Society, 1(2). https://doi.org/10.1177/2056305115603385

Nielsen, K. R., & Ganter, S. A. (2018). Dealing with digital intermediaries: A case study of the relations between publishers and platforms. New media & society, 20(4), 1600–1617. https://doi.org/10.1177/1461444817701318

Nyczepir, D. (2012, September 12). Study: GOTV messages on social media increase turnout. Retrieved July 3, 2019, from https://www.campaignsandelections.com/campaign-insider/study-gotv-messages-on-social-media-increase-turnout

Perez, S. (2018, April 23). Facebook’s New Authorization Process for Political Ads Goes Live in the U.S. TechCrunch. Retrieved from https://techcrunch.com/2018/04/23/facebooks-new-authorization-process-for-political-ads-goes-live-in-the-u-s/

Plantin, J. C., & Punathambekar, A. (2019). Digital media infrastructures: pipes, platforms, and politics. Media, Culture & Society, 41(2), 163–174. https://doi.org/10.1177/0163443718818376

Plantin, J.-C., Lagoze, C., Edwards, P. N., & Sandvig, C. (2018). Infrastructure studies meet platform studies in the age of Google and Facebook. New Media & Society, 20(1), 293–310. https://doi.org/10.1177/1461444816661553

Preparing for the US Election 2016. (2016, October 28). Facebook Newsroom. Retrieved June 25, 2019, from https://newsroom.fb.com/news/2016/10/preparing-for-the-us-election-2016/

Requiring Authorization and Labeling for Ads with Political Content. (2019, May 24). Facebook Business. Retrieved July 7, 2019, from https://www.facebook.com/business/news/requiring-authorization-and-labeling-for-ads-with-political-content

Roberts, S. T. (2019). Behind the Screen: Content Moderation in the Shadows of Social Media. New Haven: Yale University Press.

Rodriguez, S. (2014, May 20). Facebook expands “I’m a Voter” feature to international users. Los Angeles Times. Retrieved July 9, 2019 from https://www.latimes.com/business/technology/la-fi-tn-facebook-im-a-voter-international-20140520-story.html

Scola, N. (2019, November 7). Facebook considering limits on targeted campaign ads. Politico. Retrieved from: https://www.politico.com/news/2019/11/07/facebook-targeted-campaign-ad-limits-067550

Sifry, M. (2014, October 31). Facebook wants you to vote on Tuesday. Here’s how it messed with your feed in 2012. Mother Jones. Retrieved July 3, 2019, from https://www.motherjones.com/politics/2014/10/can-voting-facebook-button-improve-voter-turnout/

Solove, D. J., & Hartzog, W. (2014). The FTC and the new common law of privacy. Columbia Law Review, 114(3), 583–676. Retrieved from https://columbialawreview.org/content/the-ftc-and-the-new-common-law-of-privacy/

The Evolution of Facebook Advertising (Timeline of Facebook Advertising). (2017, March 7). Retrieved June 25, 2019, from Bamboo website: https://growwithbamboo.com/blog/the-evolution-of-facebook-advertising/

van Dijck, J., Poell, T., & de Waal, M. (2018). The platform society: Public values in a connective world. New York: Oxford University Press. https://doi.org/10.1093/oso/9780190889760.001.0001

Werbeanzeigenmanager [Ads Manager] - Manage Ads. (n.d.). Retrieved March 15, 2019, from https://business.facebook.com/adsmanager/manage/campaigns

Zelm, A. (2018, July 17). Facebook Reach in 2018: How Many Fans Actually See Your Posts? [Blog post] Retrieved July 8, 2019, from https://www.kunocreative.com/blog/facebook-reach-in-2018

The digital commercialisation of US politics — 2020 and beyond

This paper is part of Data-driven elections, a special issue of Internet Policy Review guest-edited by Colin J. Bennett and David Lyon.

In March 2018, The New York Times and The Guardian/Observer broke an explosive story that Cambridge Analytica, a British data firm, had harvested more than 50 million Facebook profiles and used them to engage in psychometric targeting during the 2016 US presidential election (Rosenberg, Confessore, & Cadwalladr, 2018). The scandal erupted amid ongoing concerns over Russian use of social media to interfere in the electoral process. The new revelations triggered a spate of congressional hearings and cast a spotlight on the role of digital marketing and “big data” in elections and campaigns. The controversy also generated greater scrutiny of some of the most problematic tech industry practices — including the role of algorithms on social media platforms in spreading false, hateful, and divisive content, and the use of digital micro-targeting techniques for “voter suppression” efforts (Green & Issenberg, 2016; Howard, Woolley, & Calo, 2018). In the wake of these cascading events, policymakers, journalists, and civil society groups have called for new laws and regulations to ensure transparency and accountability in online political advertising.

Twitter and Google, driven by growing concern that they will be regulated for their political advertising practices, fearful of being found in violation of the General Data Protection Regulation (GDPR) in the European Union, and cognisant of their own culpability in recent electoral controversies, have each made significant changes in their political advertising policies (Dorsey, 2019; Spencer, 2019). US federal policymakers, by contrast, have engaged in a great deal of public hand wringing but have failed to institute any effective remedies, although several states have enacted legislation designed to ensure greater transparency for digital political ads (California Clean Money Campaign, 2019; Garrahan, 2018). These recent legislative and regulatory initiatives in the US are narrow in scope and focused primarily on policy approaches to political advertising in more traditional media, failing to hold the tech giants accountable for their deleterious big data practices.

On the eve of the next presidential election in 2020, the pace of innovation in digital marketing continues unabated, along with its further expansion into US electoral politics. These trends were clearly evident in the 2018 midterm elections, which, according to Kantar Media, were “the most lucrative midterms in history”, with $5.25 billion USD spent on ads across local broadcast, cable TV, and digital — outspending even the 2016 presidential election. Digital ad spending “quadrupled from 2014” to $950 million USD for ads that primarily ran on Facebook and Google (Axios, 2018; Lynch, 2018). In the upcoming 2020 election, experts are forecasting overall spending on political ads will be $6 billion USD, with an “expected $1.6 billion to be devoted to digital video… more than double 2018 digital video spending” (Perrin, 2019). Kantar (2019), meanwhile, estimates the portion spent for digital media will be $1.2 billion USD in the 2019-2020 election cycle.

In two earlier papers, we documented a number of digital practices deployed during the 2016 elections, which were emblematic of how big data systems, strategies and techniques were shaping contemporary political practice (Chester & Montgomery, 2017, 2018). Our work is part of a growing body of interdisciplinary scholarship on the role of data and digital technologies in politics and elections. Various terms have been used to describe and explain these practices — from computational politics to political micro-targeting to data-driven elections (Bodó, Helberger, & de Vreese, 2017; Bennett, 2016; Karpf, 2016; Kreiss, 2016; Tufekci, 2014). All of these labels highlight the increasing importance of data analytics in the operations of political parties, candidate campaigns, and issue advocacy efforts. But in our view, none adequately captures the full scope of recent changes that have taken place in contemporary politics. The same commercial digital media and marketing ecosystem that has dramatically altered how corporations engage with consumers is now transforming the ways in which campaigns engage with citizens (Chester & Montgomery, 2017).

We have been closely tracking the growth of this marketplace for more than 25 years, in the US and abroad, monitoring and analysing key technological developments, major trends, practices and players, and assessing the impact of these systems in areas such as health, financial services, retail, and youth (Chester, 2007; Montgomery, 2007, 2015; Montgomery & Chester, 2009; Montgomery, Chester, Grier, & Dorfman, 2012; Montgomery, Chester, & Kopp, 2018). Our organisation, the Center for Digital Democracy (CDD), has worked closely with leading EU civil society and data protection NGOs to address digital marketplace issues. Our work has included providing analysis to EU-based groups to help them respond critically to Google’s acquisition of DoubleClick in 2007 as well as Facebook’s purchase of WhatsApp in 2014. Our research has also been informed by a growing body of scholarship on the role that commercial and big data forces are playing in contemporary society. For example, advocates, legal experts, and scholars have written extensively about the data and privacy concerns raised by this commercial big data digital marketing system (Agre & Rotenberg, 1997; Bennett, 2008; Nissenbaum, 2009; Schwartz & Solove, 2011). More recent research has focused increasingly on other, and in many ways more troubling, aspects of this system. This work has included, for example, research on the use of persuasive design (including “mass personalisation” and “dark patterns”) to manage and direct human behaviours; discriminatory impacts of algorithms; and a range of manipulative practices (Calo, 2013; Gray, Kou, Battles, Hoggatt, & Toombs, 2018; Susser, Roessler, & Nissenbaum, 2019; Zarsky, 2019; Zuboff, 2019). As digital marketing has migrated into electoral politics, a growing number of scholars have begun to examine the implications of these problematic practices for the democratic process (Gorton, 2016; Kim et al., 2018; Kreiss & Howard, 2010; Rubinstein, 2014; Bashyakarla et al., 2019; Tufekci, 2014).

The purpose of this paper is to serve as an “early warning system” — for policymakers, journalists, scholars, and the public — by identifying what we see as the most important industry trends and practices likely to play a role in the next major US election, and flagging some of the problems and issues raised. Our intent is not to provide a comprehensive analysis of all the tools and techniques in what is frequently called the “politech” marketplace. The recent Tactical Tech (Bashyakarla et al., 2019) publication, Personal Data: Political Persuasion, provides a highly useful compendium on this topic. Rather, we want to show how further growth and expansion of the big data digital marketplace is reshaping electoral politics in the US, introducing both candidate and issue campaigns to a system of sophisticated software applications and data-targeting tools that are rooted in the goals, values, and strategies for influencing consumer behaviours.1 Although some of these new digitally enabled capabilities are extensions of longstanding political practices that pre-date the internet, others are a significant departure from established norms and procedures. Taken together, they are contributing to a major shift in how political campaigns conduct their operations, raising a host of troubling issues concerning privacy, security, manipulation, and discrimination. All of these developments are taking place, moreover, within a regulatory structure that is weak and largely ineffectual, posing daunting challenges to policymakers.

In the following pages, we: 1) briefly highlight five key developments in the digital marketing industry since the 2016 election that are influencing the operations of political campaigns and will likely affect the next election cycle; 2) discuss the implications of these trends and techniques for the ongoing practice of contemporary politics, with a special focus on their potential for manipulation and discrimination; 3) assess both the technology industry responses and recent policy initiatives designed to address political advertising in the US; and 4) offer our own set of recommendations for regulating political ad and data practices.

The growing big data commercial and political marketing system

In the upcoming 2020 elections, the US is likely to witness an extremely hard-fought, under-the-radar, innovative, and in many ways disturbing set of races, not only for the White House but also for down-ballot candidates and issue groups. Political campaigns will be able to avail themselves of the current state-of-the-art big data systems that were used in the past two elections, along with a host of recent advances developed by commercial marketers. Several interrelated trends in the digital media and marketing industry are likely to play a particularly influential role in shaping the use of digital tools and strategies in the 2020 election. We discuss them briefly below:

Recent mergers and partnerships in the media and data industries are creating new synergies that will extend the reach and enhance the capabilities of contemporary political campaigns. In the last few years, a wave of mergers and partnerships has taken place among platforms, data brokers, advertising exchanges, ad agencies, measurement firms and companies specialising in advertising technologies (so-called “ad-tech”). This consolidation has helped fuel the unfettered growth of a powerful digital marketing ecosystem, along with an expanding spectrum of software systems, specialty firms, and techniques that are now available to political campaigns. For example, AT&T (n.d.), as part of its acquisition of Time Warner Media, has re-launched its digital ad division, now called Xandr (n.d.). It also acquired the leading programmatic ad platform AppNexus.

Leading multinational advertising agencies have made substantial acquisitions of data companies, such as the Interpublic Group (IPG) purchase of Acxiom in 2018 and the Publicis Groupe takeover of Epsilon in 2019. One of the “Big 3” consumer credit reporting companies, TransUnion (2019), bought TruSignal, a leading digital marketing firm. Such deals enable political campaigns and others to easily access more information to profile and target potential voters (Williams, 2019).

In the already highly consolidated US broadband access market, only a handful of giants provide the bulk of internet connections for consumers. The growing role of internet service providers (ISPs) in the political ad market is particularly troubling, since they are free from any net neutrality, online privacy or digital marketing rules. Acquisitions made by the telecommunications sector are further enabling ISPs and other telephony companies to monetise their highly detailed subscriber data, combining it with behavioural data about device use and content preferences, as well as geolocation (Schiff, 2018).

Increasing sophistication in “identity resolution” technologies, which take advantage of machine learning and artificial intelligence applications, is enabling greater precision in finding and reaching individuals across all of their digital devices. These technologies have evolved to enable marketers — and political groups — to target and “reach real people” with greater precision than ever before. Marketers are helping perfect a system that leverages and integrates, increasingly in real-time, consumer profile data with online behaviours to capture more granular profiles of individuals, including where they go, and what they do (Rapp, 2018). Facebook, Google and other major marketers are also using machine learning to power prediction-related tools on their digital ad platforms. As part of Google’s recent reorganisation of its ad system (now called the “Google Marketing Platform”), the company introduced machine learning into its search advertising and YouTube businesses (Dischler, 2018; Sluis, 2018). It also uses machine learning for its “Dynamic Prospecting” system, which is connected to an “Automatic Targeting” apparatus that enables more precise tracking and targeting of individuals (Google, n.d.-a-b). Facebook (2019) is enthusiastically promoting machine learning as a fundamental advertising tool, urging advertisers to step aside and let automated systems make more ad-targeting decisions.
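
To make the mechanics of identity resolution more concrete, the following minimal sketch (in Python, with invented records and field names) shows the basic join that underlies such systems: events observed on different devices are linked into a single person-level profile via a hashed e-mail address. It illustrates the general technique only, not any vendor’s actual implementation.

```python
import hashlib
from collections import defaultdict

def hashed(email: str) -> str:
    """Normalise and hash an email address, a typical join key in identity graphs."""
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

# Hypothetical event records collected from different devices and channels.
events = [
    {"device": "phone-123", "email": "Voter@example.com", "signal": "read op-ed on healthcare"},
    {"device": "laptop-456", "email": "voter@example.com ", "signal": "searched local polling place"},
    {"device": "smart-tv-789", "email": "voter@example.com", "signal": "watched cable news segment"},
]

# Resolve all events to one person-level profile keyed on the hashed email.
profiles = defaultdict(lambda: {"devices": set(), "signals": []})
for event in events:
    key = hashed(event["email"])
    profiles[key]["devices"].add(event["device"])
    profiles[key]["signals"].append(event["signal"])

for person, data in profiles.items():
    print(person[:8], sorted(data["devices"]), data["signals"])
```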

Political campaigns have already embraced these new technologies, even creating special categories in the industry awards for “Best Application of Artificial Intelligence or Machine Learning”, “Best Use of Data Analytics/Machine Learning”, and “Best Use of Programmatic Advertising” (“2019 Reed Award Winners”, 2019; American Association of American Political Consultants, 2019). For example, Resonate, a digital data marketing firm, was recognised in 2018 for its “Targeting Alabama’s Conservative Media Bubble”, which relied on “artificial intelligence and advanced predictive modeling” to analyse in real-time “more than 15 billion page loads per day”. According to Resonate, this process identified “over 240,000 voters” who were judged to be “persuadable” in a hard-fought Senate campaign (Fitzpatrick, 2018). Similar advances in data analytics for political efforts are becoming available for smaller campaigns (Echelon Insights, 2019). WPA Intelligence (2019) won a 2019 Reed Award for its data analytics platform that generated “daily predictive models, much like microtargeting advanced traditional polling. This tool was used on behalf of top statewide races to produce up to 900 million voter scores, per night, for the last two months of the campaign”. Deployment of these techniques was a key influence in spending for the US midterm elections (Benes, 2018; Loredo, 2016; McCullough, 2016).
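
As a rough illustration of how voter “scores” of this kind are generated, the toy model below fits a logistic regression to a handful of invented voter-file features and outputs a persuadability probability for each record. The features, data, and threshold are hypothetical and bear no relation to the proprietary systems named above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: rows are voters, columns are features
# (age, past turnout count, issue-survey score); labels mark "was persuaded".
X_train = np.array([[34, 2, 0.8], [61, 5, 0.2], [45, 1, 0.9], [52, 4, 0.4], [29, 0, 0.7]])
y_train = np.array([1, 0, 1, 0, 1])

model = LogisticRegression().fit(X_train, y_train)

# Score the (tiny, invented) voter file and keep the "persuadable" segment.
voter_file = np.array([[40, 1, 0.6], [68, 6, 0.1], [23, 0, 0.95]])
scores = model.predict_proba(voter_file)[:, 1]
persuadable = [i for i, score in enumerate(scores) if score > 0.5]

print(scores.round(2), persuadable)
```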

Political campaigns are taking advantage of a rapidly maturing commercial geo-spatial intelligence complex, enhancing mobile and other geotargeting strategies. Location analytics enable companies to make instantaneous associations between the signals sent and received from Wi-Fi routers, cell towers, a person’s devices and specific locations, including restaurants, retail chains, airports, stadiums, and the like (Skyhook, n.d.). These enhanced location capabilities have further blurred the distinction between what people do in the “offline” physical world and their actions and behaviours online, giving marketers greater ability both to “shadow” and to reach individuals nearly anytime and anywhere.

A political “geo-behavioural” segment is now a “vertical” product offered alongside more traditional online advertising categories, including auto, leisure, entertainment and retail. “Hyperlocal” data strategies enable political campaigns to engage in more precise targeting in communities (Mothership Strategies, 2018). Political campaigns are also taking advantage of the widespread use of consumer navigation systems. Waze, the Google-owned navigational firm, operates its own ad system but also is increasingly integrated into the Google programmatic platform (Miller, 2018). For example, in the 2018 midterm election, a get-out-the-vote campaign for one trade group used voter file and Google data to identify a highly targeted segment of likely voters, and then relied on Waze to deliver banner ads with a link to an online video (carefully calibrated to work only when the app signalled the car wasn’t moving). According to the political data firm that developed the campaign, it reached “1 million unique users in advance of the election” (Weissbrot, 2019, April 10).
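
The “only when the car isn’t moving” calibration described above amounts to a simple delivery gate combined with audience matching. A minimal sketch of that logic, with hypothetical fields and thresholds, might look like this.

```python
from dataclasses import dataclass

@dataclass
class DeviceSignal:
    latitude: float
    longitude: float
    speed_kmh: float          # e.g., derived from successive GPS readings
    in_target_segment: bool   # set upstream by voter-file / audience matching

def should_serve_banner(signal: DeviceSignal, max_speed_kmh: float = 1.0) -> bool:
    """Serve the get-out-the-vote banner only to matched users whose vehicle is stationary."""
    return signal.in_target_segment and signal.speed_kmh <= max_speed_kmh

print(should_serve_banner(DeviceSignal(38.9, -77.0, 0.0, True)))   # True: parked and targeted
print(should_serve_banner(DeviceSignal(38.9, -77.0, 55.0, True)))  # False: vehicle in motion
```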

Political television advertising is rapidly expanding onto unregulated streaming and digital video platforms. For decades, television has been the primary medium used by political campaigns to reach voters in the US. Now the medium is in the process of a major transformation that will dramatically increase its central role in elections (IAB, n.d.-a). One of the most important developments during the past few years is the expansion of advertising and data-targeting capabilities, driven in part by the rapid adoption of streaming services (so-called “Over the Top” or “OTT”) and the growth of digital video (Weissbrot, 2019, October 22). Leading OTT providers in the US are actively promoting their platform capabilities to political campaigns, making streaming video a new battleground for influencing the public. For example, a “Political Data Cloud” offered by OTT specialist Tru Optik (2019) enables “political advertisers to use both OTT and streaming audio to target specific voter groups on a local, state or national level across such factors as party affiliation, past voting behavior and issue orientation. Political data can be combined with behavioral, demographic and interest-based information, to create custom voter segments actionable across over 80 million US homes through leading publishers and ad tech platforms” (Lerner, 2019).

While political advertising on broadcast stations and cable television systems has long been subject to regulation by the US Federal Communications Commission, newer streaming television and digital video platforms operate outside of the regulatory system (O’Reilly, 2018). According to research firm Kantar, “political advertisers will be able to air more spots on these streaming video platforms and extend the reach of their messaging—particularly to younger voters” (Lafayette, 2019). These ads will also be part of cross-device campaigns, with videos showing up in various formats on mobile devices as well.

The expanding role of digital platforms enables political campaigns to access additional sources of personal data, including TV programme viewing patterns. For example, in 2018, Altice and smart TV company Vizio launched a new partnership to take advantage of recent technologies now being deployed to deliver targeted advertising, incorporating viewer data from nearly nine million smart TV sets into “its footprint of more than 90 million households, 85% of broadband subscribers and one billion devices in the U.S.” (Clancy, 2018). Vizio’s Inscape (n.d.) division produces technology for smart TVs, offering what is known as “automatic content recognition” (ACR) data. According to Vizio, ACR enables what the industry calls “glass level” viewing data, using “screen level measurement to reveal what programs and ads are being watched in near-real time”, and incorporating the IP address from any video source in use (McAfee, 2019). Campaigns have demonstrated the efficacy of OTT’s role. AdVictory (n.d.) modelled “387,000 persuadable cord cutters and 1,210 persuadable cord shavers” (the latter referring to people using various forms of streaming video) to make a complex media buy in one state-wide gubernatorial race that reached 1.85 million people “across [video] inventory traditionally untouched by campaigns”.
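
To show in principle how “glass level” viewing data can be folded into voter targeting, the sketch below joins invented ACR viewing records to an invented household file on a shared IP address, producing a programme-based audience segment of the kind described above. The data, keys, and thresholds are illustrative only.

```python
# Hypothetical ACR log: what was on each smart-TV screen, keyed by household IP address.
acr_log = [
    {"ip": "203.0.113.5", "programme": "evening cable news", "minutes": 42},
    {"ip": "203.0.113.9", "programme": "home renovation show", "minutes": 15},
]

# Hypothetical household file from a data broker, keyed by the same IP address.
households = {
    "203.0.113.5": {"household_id": "HH-001", "party_model": "lean-undecided"},
    "203.0.113.9": {"household_id": "HH-002", "party_model": "strong-partisan"},
}

# Join on IP address to build a viewing-based targeting segment (30+ minutes watched).
segment = [
    {**households[record["ip"]], "programme": record["programme"], "minutes": record["minutes"]}
    for record in acr_log
    if record["ip"] in households and record["minutes"] >= 30
]

print(segment)
```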

Further developments in personalisation techniques are enabling political campaigns to maximise their ability to test an expanding array of messaging elements on individual voters. Micro-targeting now involves a more complex personalisation process than merely using so-called behavioural data to target an individual. The use of personal data and other information to influence a consumer is part of an ever-evolving, orchestrated system designed to generate and then manage an individual’s online media and advertising experiences. Google and Facebook, in particular, are adept at harvesting the latest innovations to advance their advertising capabilities, including data-driven personalisation techniques that generate hundreds of highly granular ad-campaign elements from a single “creative” (i.e., advertising message). These techniques are widely embraced by the digital marketing industry, and political campaigns across the political spectrum are being encouraged to expand their use for targeting voters (Meuse, 2018; Revolution Marketing, n.d.; Schuster, 2015). The practice is known by various names, including “creative versioning”, “dynamic creative”, and “Dynamic Creative Optimization”, or DCO (Shah, 2019). Google’s creative optimisation product, “Directors Mix” (formerly called “Vogon”), is integrated into the company’s suite of “custom affinity audience targeting capabilities, which includes categories related to politics and many other interests”. This product, it explains, is designed to “generate massively customized and targeted video ad campaigns” (Google, n.d.-c). Marketing experts say that Google now enables “DCO on an unprecedented scale”, and that YouTube will be able to “harness the immense power of its data capabilities…” (Mindshare, 2017). Directors Mix can tap into Google’s vast resources to help marketers influence people in various ways, making it “exceptionally adept at isolating particular users with particular interests” (Boynton, 2018). Facebook’s “Dynamic Creative” can help transform a single ad into as many as “6,250 unique combinations of title, image/video, text, description and call to action”, available to target people on its news feed, on Instagram, and beyond Facebook through its “Audience Network” ad system (Peterson, 2017).
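
The arithmetic behind the “6,250 unique combinations” figure is simple combinatorics: assuming, say, 10 images and five options each for title, text, description, and call to action, 10 × 5 × 5 × 5 × 5 = 6,250. The sketch below, with placeholder creative elements, makes the point; the element counts are our assumption, not Facebook’s documented limits.

```python
from itertools import product

# Hypothetical pools of creative elements supplied by a campaign.
titles = [f"title_{i}" for i in range(5)]
images = [f"image_{i}" for i in range(10)]
texts = [f"text_{i}" for i in range(5)]
descriptions = [f"description_{i}" for i in range(5)]
calls_to_action = [f"cta_{i}" for i in range(5)]

# Every combination becomes a distinct ad variant that can be tested and targeted.
variants = list(product(titles, images, texts, descriptions, calls_to_action))

print(len(variants))   # 6250
print(variants[0])     # one variant, ready to be assembled into an ad
```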

Implications for 2020 and beyond

We have been able to provide only a partial preview of the digital software systems and tools that are likely to be deployed in US political campaigns during 2020. It’s already evident that digital strategies will figure even more centrally in the upcoming campaigns than they have in previous elections (Axelrod, Burke, & Nam, 2019; Friedman, 2018, June 19). Many of the leading Democratic candidates, and President Trump, who has already ramped up his re-election campaign apparatus, have extensive experience and success in their use of digital technology. Brad Parscale, the campaign manager for Trump’s re-election effort, explained in 2019 that “in every single metric, we’re looking at being bigger, better, and ‘badder’ than we were in 2016,” including the role that “new technologies” will play in the race (Filloux, 2019).

On the one hand, these digital tools could be harnessed to create a more active and engaged electorate, with particular potential to reach and mobilise young voters and other important demographic groups. For example, in the US 2018 midterm elections, newcomers such as Congresswoman Alexandria Ocasio-Cortez, with small budgets but armed with digital media savvy, were able to seize the power of social media, mobile video, and other digital platforms to connect with large swaths of voters largely overlooked by other candidates (Blommaert, 2019). The real-time capabilities of digital media could also facilitate more effective get-out-the-vote efforts, targeting and reaching individuals much more efficiently than in-person appeals and last-minute door-to-door canvassing (O’Keefe, 2019).

On the other hand, there is a very real danger that many of these digital techniques could undermine the democratic process. For example, in the 2016 election, personalised targeted campaign messages were used to identify very specific groups of individuals, including racial minorities and women, delivering highly charged messages designed to discourage them from voting (Green & Issenberg, 2016). These kinds of “stealth media” disinformation efforts take advantage of “dark posts” and other affordances of social media platforms (Young et al., 2018). Though such intentional uses (or misuses) of digital marketing tools have generated substantial controversy and condemnation, there is no reason to believe they will not be used again. Campaigns will also be able to take advantage of a plethora of newer and more sophisticated targeting and message-testing tools, enhancing their ability to fine tune and deliver precise appeals to the specific individuals they seek to influence, and to reinforce the messages throughout that individual’s “media journey”.

But there is an even greater danger that the increasingly widespread reliance on commercial ad technology tools in the practice of politics will become routine and normalised, subverting independent and autonomous decision making, which is so essential to an informed electorate (Burkell & Regan, 2019; Gorton, 2016). For example, so-called “dynamic creative” advertising systems are in some ways extensions of A/B testing, which has been a longstanding tool in political campaigns. However, today’s digital incarnation of the practice makes it possible to test thousands of message variations, assessing how each individual responds to them, and changing the content in real time and across media in order to target and retarget specific voters. The data available for this process are extensive, granular, and intimate, incorporating personal information that extends far beyond the conventional categories, encompassing behavioural patterns, psychographic profiles, and TV viewing histories. Such techniques are inherently manipulative (Burkell & Regan, 2019; Gorton, 2016; Susser, Roessler, & Nissenbaum, 2019). The increasing use of digital video, in all of its new forms, raises similar concerns, especially when delivered to individuals through mobile and other platforms, generating huge volumes of powerful, immersive, persuasive content, and challenging the ability of journalists and scholars to review claims effectively. AI, machine learning, and other automated systems will be able to make predictions on behaviours and have an impact on public decision-making, without any mechanism for accountability. Taken together, all of these data-gathering, -analysis, and -targeting tools raise the spectre of a growing political surveillance system, capable of capturing unlimited amounts of detailed and highly sensitive information on citizens and using it for a variety of purposes. The increasing predominance of the big data political apparatus could also usher in a new era of permanent campaign operations, where individuals and groups throughout the country are continually monitored, targeted, and managed.
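
The shift from classic A/B testing to continuous, real-time optimisation can be illustrated with a toy epsilon-greedy “bandit” that steadily shifts delivery toward whichever message variant is currently performing best. Variant names and response rates here are invented; real systems are far more elaborate, but the underlying feedback loop is the same in kind.

```python
import random

random.seed(1)

# Hypothetical ad variants with unknown "true" response rates the optimiser must discover.
true_rates = {"variant_a": 0.02, "variant_b": 0.05, "variant_c": 0.01}
shows = {v: 0 for v in true_rates}
clicks = {v: 0 for v in true_rates}

def observed_rate(variant: str) -> float:
    """Click-through rate observed so far (0.0 before the variant has been shown)."""
    return clicks[variant] / shows[variant] if shows[variant] else 0.0

def pick_variant(epsilon: float = 0.1) -> str:
    """Mostly exploit the best-performing variant; occasionally explore the others."""
    if random.random() < epsilon:
        return random.choice(list(true_rates))
    return max(true_rates, key=observed_rate)

for _ in range(10_000):  # each iteration stands in for one ad impression
    variant = pick_variant()
    shows[variant] += 1
    clicks[variant] += random.random() < true_rates[variant]

# Share of impressions each variant received: delivery drifts toward the winner.
print({v: round(shows[v] / 10_000, 2) for v in shows})
```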

Because all of these systems are part of the opaque and increasingly automated operations of digital commercial marketing, the techniques, strategies, and messages of the upcoming campaigns will be even less transparent than before. In the heat of a competitive political race, campaigns are not likely to publicise the full extent of their digital operations. As a consequence, journalists, civil society groups, and academics may not be able to assess them fully until after the election. Nor will it be enough to rely on documenting expenditures, because digital ads can be inexpensive, purposefully designed to work virally and aimed at garnering “free media”, resulting in a proliferation of messages that evade categorisation or accountability as “paid political advertising”.

Some scholars have raised doubts about the effectiveness of contemporary big data and digital marketing applications when applied to the political sphere, and the likelihood of their widespread adoption (Baldwin-Philippi, 2017). It is true we are in the early stages of development and implementation of these new tools, and it may be too early to predict how widely they will be used in electoral politics, or how effective they might be. However, the success of digital marketing worldwide in promoting brands and products in the consumer marketplace, combined with the investments and innovations that are expanding its ability to deliver highly measured impacts, suggest to us that these applications will play an important role in our political and electoral affairs. The digital marketing industry has developed an array of measurement approaches to document their impact on the behaviour of individuals and communities (Griner, 2019; IAB Europe, 2019; MMA, 2019). In the no-holds-barred environment of highly competitive electoral politics, campaigns are likely to deploy these and other tools at their disposal, without restraint. There are enough indications from the most recent uses of these technologies in the political arena to raise serious concerns, making it particularly urgent to monitor them very closely in upcoming elections.

Industry and legislative initiatives

The largest US technology companies have recently introduced a succession of internal policies and transparency measures aimed at ensuring greater platform responsibility during elections. In November 2019, Twitter announced it was prohibiting the “promotion of political content”, explaining that it believed that “political message reach should be earned, not bought”. CEO Jack Dorsey (2019) was remarkably frank in explaining why Twitter had made this decision: “Internet political ads present entirely new challenges to civic discourse: machine learning-based optimization of messaging and micro-targeting, unchecked misleading information, and deep fakes. All at increasing velocity, sophistication, and overwhelming scale”.

That same month, Google unveiled policy changes of its own, including restricting the kinds of internal data capabilities available to political campaigns. As the company explained, “we’re limiting election ads audience targeting to the following general categories: age, gender, and general location (postal code level)”. Google also announced it was “clarifying” its ads policies and “adding examples to show how our policies prohibit things like ‘deep fakes’ (doctored and manipulated media), misleading claims about the census process, and ads or destinations making demonstrably false claims that could significantly undermine participation or trust in an electoral or democratic process” (Spencer, 2019). It remains to be seen whether such changes as Google’s and Twitter’s will actually alter, in any significant way, the contemporary operations of data-driven political campaigns. Some observers believe that Google’s new policy will benefit the company, noting that “by taking away the ability to serve specific audiences content that is most relevant to their values and interests, Google stands to make a lot MORE money off of campaigns, as we’ll have to spend more to find and reach our intended audiences” (“FWIW: The Platform Self-regulation Dumpster Fire”, 2019).

Interestingly, Facebook, the tech company that has been subject to the greatest amount of public controversy over its political practices, had not, at the time of this writing, made similar changes in its political advertising policies. Though the social media giant has been widely criticised for its refusal to fact-check political ads for accuracy and fairness, it has not been willing to institute any mechanisms for intervening in the content of those ads (Ingram, 2018; Isaac, 2019; Kafka, 2019). However, Facebook did announce in 2018 that it was ending its participation in the industry-wide practice of embedding, which involved sales teams working hand-in-hand with leading political campaigns (Ingram, 2018; Kreiss & McGregor, 2017). After a research article generated extensive news coverage of this industry-wide marketing practice, Facebook publicly announced it would cease the arrangement, instead “offering tools and advice” through a politics portal that provides “candidates information on how to get their message out and a way to get authorised to run ads on the platform” (Emerson, 2018; Jeffrey, 2018). In May 2019, the company also announced it would stop paying commissions to employees who sell political ads (Glazer & Horowitz, 2019). Such a move may not have a major effect on sales, however, especially since the tech giant has already generated significant income from political advertising for the 2020 campaign (Evers-Hillstrom, 2019).

Under pressure from civil rights groups over discriminatory ad targeting practices in housing and other areas, Facebook has undergone an extensive civil rights audit, which has resulted in a number of internal policy changes, including some practices related to campaigns and elections. For example, the company announced in June 2019 that it had “strengthened its voter suppression policy” to prohibit “misrepresentations” about the voting process, as well as any “threats of violence related to voting”. It has also committed to making further changes, including investments designed to prevent the use of the platform “to manipulate U.S. voters and elections” (Sandberg, 2019).

Google, Facebook, and Twitter have all established online archives to enable the public to find information on the political advertisements that run on their platforms. But these databases provide only a limited range of information. For example, Google’s (2018) archive contains copies of all political ads run on the platform, shows the amount spent overall and on specific ads by a campaign, as well as age range, gender, area (state) and dates when an ad appeared, but does not share the actual “targeting criteria” used by political campaigns (Walker, 2018). Facebook’s (n.d.-b) Ad Library describes itself as a “comprehensive, searchable collection of all ads currently running across Facebook Products”. It claims to provide “data for all ads related to politics or to issues of national importance” that have run on its platform since May 2018 (Sullivan, 2019). While the data include breakdowns on the age, gender, state where it ran, number of impressions and spending for the ad, no details are provided to explain how the ad was constructed, tested, and altered, or what digital ad targeting techniques were used. For example, Facebook (n.d.-a-e) permits US-based political campaigns to use its “Custom or Lookalike Audiences” ad-targeting product, but it does not report such use in its ad library. Though all of these new transparency systems and ad archives offer useful information, they also place a considerable burden on users. Many of these new measures are likely to be more valuable for watchdog organisations and journalists, who can use the information to track spending, identify emerging trends, and shed additional light on the process of digital political influence.
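
As a concrete way of seeing how much, and how little, these archives disclose, the sketch below summarises a hypothetical ad-archive export. The column names are illustrative stand-ins rather than any platform’s actual schema; the point is that spend and coarse demographics are visible, while the targeting criteria are not.

```python
import csv
import io

# Hypothetical excerpt of an ad-archive export: note that no column describes the
# targeting criteria, lookalike audiences, or optimisation settings actually used.
export = """page_name,spend_usd,impressions,age_bucket,gender,region
Campaign A,1200,90000,25-34,female,CA
Campaign A,800,60000,65+,male,FL
Campaign B,300,20000,18-24,female,TX
"""

totals = {}
for row in csv.DictReader(io.StringIO(export)):
    totals[row["page_name"]] = totals.get(row["page_name"], 0) + int(row["spend_usd"])

print(totals)  # spend per advertiser is visible; how the ads were targeted is not
```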

While these kinds of changes in platform policies and operations should help to mitigate some of the more egregious uses of social media by unscrupulous campaigns and other actors, they are not likely to alter in any major way the basic operations of today’s political advertising practices. With each tech giant instituting its own set of internal ad policies, there are no clear industry-wide “rules-of-the-game” that apply to all participants in the digital ecosystem. Nor are there strong transparency or accountability systems in place to ensure that the policies are effective. Though platform companies may institute changes that appear to offer meaningful safeguards, other players in the highly complex big data marketing infrastructure may offer ways to circumvent these apparent restrictions. As a case in point, when Facebook (2018, n.d.-c) announced in the wake of the Cambridge Analytica scandal that it was “shutting down Partner Categories”, the move provoked alarm inside the ad-tech industry that a set of powerful applications was being withdrawn (Villano, 2018). The product had enabled marketers to incorporate data provided by Facebook’s selected partners, including Acxiom and Epsilon (Pathak, 2018). However, despite the policy change, Facebook still enables marketers to bring a tremendous amount of third-party data to Facebook for targeting (Popkin, 2019). Indeed, shortly after Facebook’s announcement, LiveRamp offered assurances to its clients that no significant changes had been made, explaining that “while there’s a lot happening in our industry, LiveRamp customers have nothing to fear” (Carranza, 2018).

The controversy generated by recent foreign interference in US elections has also fuelled a growing call to update US election laws. However, the current policy debate over regulation of political advertising continues to be waged within a very narrow framework, which needs to be revisited in light of current digital practices. Legislative proposals have been introduced in Congress that would strengthen the disclosure requirements for digital political ads regulated by the Federal Election Commission (FEC). For example, under the Honest Ads Act, digital media platforms would be required to provide information about each ad via a “public political file”, including who purchased the ad, when it appeared, how much was spent, as well as “a description of the targeted audience”. Campaigns would also be required to provide the same information for online political ads that is required for political advertising in other media. The proposed legislation currently has the support of Google, Facebook, Twitter and other leading companies (Ottenfeld, 2018, April 25). A more ambitious bill, the For the People Act, is backed by the new Democratic majority in the House of Representatives, and includes similar disclosure requirements, along with a number of provisions aimed at reducing “the influence of big money in politics”. Though these bills are a long-overdue first step toward bringing transparency measures into the digital age, neither of them addresses the broad range of big data marketing and targeting practices that are already in widespread use across political campaigns. And it is doubtful whether either of these limited policy approaches stands a chance of passage in the near future. There is strong opposition to regulating political campaign and ad practices at the federal level, primarily because of what critics claim would be violations of the free speech principle of the US First Amendment (Brodey, 2019).

While the prospects for regulating political advertising appear dim at the present time, there is a strong bi-partisan move in Congress to pass federal privacy legislation that would regulate commercial uses of data, which could, in turn, affect the operations, tools, and techniques available for digital political campaigns. Google, Facebook, and other digital data companies have long opposed any comprehensive privacy legislation. But a number of recent events have combined to force the industry to change its strategy: the implementation of the EU General Data Protection Regulation (GDPR) and the passage of state privacy laws (especially in California); the seemingly never-ending news reports on Facebook’s latest scandal; massive data breaches of personal information; accounts of how online marketers engage in discriminatory practices and promote hate speech; and the continued political fallout from “Russiagate”. Even the leading tech companies are now pushing for privacy legislation, if only to reduce the growing political pressure they face from the states, the EU, and their critics (Slefo, 2019). Also fuelling the debate on privacy are growing concerns over digital media industry consolidation, which have triggered calls by political leaders as well as presidential candidates to “break up” Amazon and Facebook (Lecher, 2019). Numerous bills have been introduced in both houses of Congress, with some incorporating strong provisions for regulating both data use and marketing techniques. However, as the 2020 election cycle gets underway, the ultimate outcome of this flurry of legislative activity is still up in the air (Kerry, 2019).

Opportunities for intervention

Given the uncertainty in the regulatory and self-regulatory environment, there is likely to be little or no restraint in the use of data-driven digital marketing practices in the upcoming US elections. Groups from across the political spectrum, including both campaigns and special interest groups, will continue to engage in ferocious digital combat (Lennon, 2018). With the intense partisanship, especially fuelled by what is admittedly a high-stakes-for-democracy election (for all sides), as well as the current ease with which all of the available tools and methods are deployed, no company or campaign will voluntarily step away from the “digital arms race” that US elections have become. Given what is expected to be an extremely close race for the Electoral College that determines US presidential elections, 2020 is poised to see both parties use digital marketing techniques to identify and mobilise the handful of voters needed to “swing” a state one way or another (Schmidt, 2019).

Campaigns will have access to an unprecedented amount of personal data on every voter in the country, drawing from public sources as well as the growing commercial big data infrastructure. As a consequence, the next election cycle will be characterised by ubiquitous political targeting and messaging, fed continuously through multiple media outlets and communication devices.

At the same time, the concerns over continued threats of foreign election interference, along with the ongoing controversy triggered by the Cambridge Analytica/Facebook scandal, have re-energised campaign reform and privacy advocates and engaged the continuing interest of watchdog groups and journalists. This heightened attention on the role of digital technologies in the political process has created an unprecedented window of opportunity for civil society groups, foundations, educators, and other key stakeholders to push for broad public policy and structural changes. Such an effort would need to be multi-faceted, bringing together diverse organisations and issue groups, and taking advantage of current policy deliberations at both the federal and state levels.

In other western democracies, governments and industry organisations have taken strong proactive measures to address the use of data-driven digital marketing techniques by political parties and candidates. For example, the Institute for Practitioners in Advertising (IPA), a leading UK advertising organisation, has called for a “moratorium on micro-targeted political advertising online”. “In the absence of regulation”, the IPA explained, “we believe this almost hidden form of political communication is vulnerable to abuse”. Leading members of the UK advertising industry, including firms that work on political campaigns, have endorsed these recommendations (Oakes, 2018). The UK Information Commissioner’s Office (ICO, 2018), which regulates privacy, conducted an investigation of recent digital political practices, and issued a report urging the government to “legislate at the earliest opportunity to introduce a statutory code of practice” addressing the “use of personal information in political campaigns” (Denham, 2018). In Canada, the Privacy Commissioner offered “guidance” to political parties in their use of data, including “Best Practices” for requiring consent when using personal information (Office of the Privacy Commissioner of Canada, 2019). The European Council (2019) adopted a similar set of policies requiring political parties to adhere to EU data protection rules.

We recognise that the United States has a unique regulatory and legal system, where First Amendment protections of free speech have limited regulation of political campaigns. However, the dangers that big data marketing operations pose to the integrity of the political process require a rethinking of policy approaches. A growing number of legal scholars have begun to question whether political uses of data-driven digital marketing should be afforded the same level of First Amendment protections as other forms of political speech (Burkell & Regan, 2019; Calo, 2013; Rubinstein, 2014; Zarsky, 2019). “The strategies of microtargeting political ads”, explain Jacquelyn Burkell and Priscilla Regan (2019), “are employed in the interests not of informing, or even persuading voters but in the interests of appealing to their non-rational biases as defined through algorithmic profiling”.

Advocates and policymakers in the US should explore various legal and regulatory strategies, developing a broad policy agenda that encompasses data protection and privacy safeguards; robust transparency, reporting and accountability requirements; restrictions on certain digital advertising techniques; and limits on campaign spending. For example, disclosure requirements for digital media need to be much more comprehensive. At the very least, campaigns, platforms and networks should be required to disclose fully all the ad and data practices they used (e.g., cross-device tracking, lookalike modelling, geolocation, measurement, neuromarketing), as well as variations of ads delivered through dynamic creative optimisation and other similar AI applications. Some techniques — especially those that are inherently manipulative in nature — should not be allowed in political campaigns. Greater attention will need to be paid to the uses of data and targeting techniques as well, articulating distinctions between those designed to promote robust participation, such as “Get Out the Vote” efforts, and those whose purpose is to discourage voters from exercising their rights at the ballot box. Limits should also be placed on the sources and amount of data collected on voters. Political parties, campaigns, and political action committees should not be allowed to gain unfettered access to consumer profile data, and voters should have the right to provide affirmative consent (“opt-in”) before any of their information can be used for political purposes. Policymakers should be required to stay abreast of fast-moving innovations in the technology and marketing industries, identifying the uses and abuses of digital applications for political purposes, such as the way that WhatsApp was deployed during recent elections in Brazil for “computational propaganda” (Magenta, Gragnani, & Souza, 2018).

In addition to pushing for government policies, advocates should place pressure on the major technology industry players and political institutions, through grassroots campaigns, investigative journalism, litigation, and other measures. If we are to have any reform in the US, there must be multiple and continuous points of pressure. The two major political parties should be encouraged to adopt a proposed new best-practices code. Advocates should also consider adopting the model developed by civil rights groups and their allies in the US, who negotiated successfully with Google, Facebook and others to develop more responsible and accountable marketing and data practices (Peterson & Marte, 2016). Similar efforts could focus on political data and ad practices. NGOs, academics, and other entities outside the US should also be encouraged to raise public concerns.

All of these efforts would help ensure that the US electoral process operates with integrity, protects privacy, and does not engage in discriminatory practices designed to diminish debate and undermine full participation.

References

2019 Reed Award winners. (2019, February 22). Campaigns & Elections. Retrieved from https://www.campaignsandelections.com/campaign-insider/2019-reed-award-winners

AdVictory. (n.d.). Case study: Curating a data-driven CTV program. Retrieved from https://advictory.com/portfolio/rauner/

Agre, P. E., & Rotenberg, M. (Eds). (1997). Technology and Privacy: The New Landscape. Cambridge, MA: The MIT Press.

American Association of Political Consultants. (2019). 2019 Pollie Awards gallery. Retrieved from https://pollies.secure-platform.com/a/gallery?roundId=44

AT&T. (n.d.). Head of political, DSP sales & account management job. Lensa. Retrieved from https://lensa.com/head-of-political-dsp-sales--account-management-jobs/washington/jd/1444db7ddf6c0a5d7568cb4032f3a4c7

Axelrod, T., Burke, M., & Nam, R. (2019, February 25). Trump unleashing digital juggernaut ahead of 2020. The Hill. Retrieved from https://thehill.com/homenews/campaign/431181-trump-unleashing-digital-juggernaut-ahead-of-2020

Axios. (2018, November 6). Political ad spending hits new record for 2018 midterm elections. Retrieved from https://www.axios.com/political-ad-spending-hits-new-record-for-2018-midterm-elections-1541509814-28e24943-d68b-4f55-83ef-8b9d51a64fa9.html

Bashyakarla, V., Hankey, S., Macintyre, A., Rennó, R., & Wright, G. (2019, March). Personal Data: Political Persuasion. Inside the Influence Industry. How it works. Berlin: Tactical Tech. Retrieved from https://cdn.ttc.io/s/tacticaltech.org/Personal-Data-Political-Persuasion-How-it-works.pdf

Baldwin-Philippi, J. (2017). The myths of data-driven campaigning. Political Communication, 34(4), 627–633. https://doi.org/10.1080/10584609.2017.1372999

Benes, R. (2018, June 20). Political advertisers will lean on programmatic during midterms. eMarketer. Retrieved from https://content-na1.emarketer.com/political-advertisers-will-lean-on-programmatic-during-midterms

Bennett, C. J. (2008). The Privacy Advocates: Resisting the Spread of Surveillance. Cambridge, MA: The MIT Press.

Bennett, C. J. (2016). Voter databases, micro-targeting, and data protection law: Can political parties campaign in Europe as they do in North America? International Data Privacy Law, 6(4), 261–275. https://doi.org/10.1093/idpl/ipw021

Blommaert, J. (2019, January 22). Alexandria Ocasio-Cortez: The next level of political digital culture. Diggit Magazine. Retrieved from https://medium.com/@diggitmagazine/alexandria-ocasio-cortez-the-next-level-of-political-digital-culture-e43b45518e86.

Bodo, B., Helberger, N., & de Vreese, C. H. (2017). Political micro-targeting: a Manchurian candidate or just a dark horse? Towards the next generation of political micro-targeting research. Internet Policy Review, 6(4). https://doi.org/10.14763/2017.4.776

Boynton, P. (2018, August 27). YouTube Director Mix: Create impactful video ads with less resources. Retrieved from Instapage Blog: https://instapage.com/blog/youtube-director-mix

Brodey, S. (2019, May 10). Sen. Lindsey Graham takes heat from conservatives for backing John McCain’s election meddling bill. The Daily Beast. Retrieved from https://www.thedailybeast.com/lindsey-graham-takes-heat-from-conservatives-for-backing-mccains-election-meddling-bill

Burkell, J., & Regan, P.M. (2019, April). Voter preferences, voter manipulation, voter analytics: Policy options for less surveillance and more autonomy. Workshop on Data Driven Elections, Victoria.

California Clean Money Campaign. (2019, October 9). Gov. Newsom signs landmark disclosure bills: Petition DISCLOSE Act and Text Message DISCLOSE Act. California Clean Money Action Fund. Retrieved from http://www.yesfairelections.org/newslink/ccmc_2019-10-09.php

Calo, M. R. (2013). Digital market manipulation. George Washington Law Review, 82(4), 995–1051. Retrieved from https://www.gwlr.org/calo/

Carranza, M. (2018, July 17). How to use first‑ and third-party data on Facebook [Blog post]. Retrieved from LiveRamp blog: https://liveramp.com/blog/facebook-integration/.

Chester, J., & Montgomery, K. C. (2017). The role of digital marketing in political campaigns. Internet Policy Review, 6(4). https://doi.org/10.14763/2017.4.773

Chester, J., & Montgomery, K. C. (2018). The influence industry: Contemporary digital politics in the United States [Report]. Berlin: Tactical Tech. Retrieved from https://ourdataourselves.tacticaltech.org/media/ttc-influence-industry-usa.pdf.

Chester, J. (2007). Digital Destiny: New Media and the Future of Democracy. New York: The New Press.

Clancy, M. (2018, August 17). A4 adds Inscape’s Vizio TV data to measurement mix. Rapid TV News. Retrieved from https://www.rapidtvnews.com/2018081753198/a4-adds-inscape-s-vizio-tv-data-to-measurement-mix.html#axzz5kdEAVLYs.

Denham, E. (2018, November 6). Blog: Information Commissioner’s report brings the ICO’s investigation into the use of data analytics in political campaigns up to date. ICO. Retrieved from https://ico.org.uk/about-the-ico/news-and-events/blog-information-commissioner-s-report-brings-the-ico-s-investigation-into-the-use-of-data-analytics-in-political-campaigns-up-to-date.

Dischler, J. (2018, July 10). Putting machine learning into the hands of every advertiser. Google Blog. Retrieved from https://www.blog.google/technology/ads/machine-learning-hands-advertisers/.

Dorsey, J. (2019, October 30). We’ve made the decision to stop all political advertising on Twitter globally. We believe political message reach should be earned, not bought. Why? A few reasons…. [Tweet]. Retrieved from https://twitter.com/jack/status/1189634360472829952.

Echelon Insights. (2019). 2019 Reed awards winner: Innovation in Polling. Retrieved from https://echeloninsights.com/news/theanalyticsjumpstart/

Emerson, S. (2018, August 15). How Facebook and Google win by embedding in political campaigns. Vice. Retrieved from https://www.vice.com/en_us/article/ne5k8z/how-facebook-and-google-win-by-embedding-in-political-campaigns

European Council. (2019, March 3). EP elections: EU adopts new rules to prevent misuse of personal data by European political parties. Retrieved from https://www.consilium.europa.eu/en/press/press-releases/2019/03/19/ep-elections-eu-adopts-new-rules-to-prevent-misuse-of-personal-data-by-european-political-parties/

Evers-Hillstrom, K. (2019). Democratic presidential hopefuls flock to Facebook for campaign cash. Retrieved from https://www.opensecrets.org/news/2019/02/democratic-presidential-hopefuls-facebook-ads/

Facebook. (2018, March 28) Shutting down partner categories. Facebook Newsroom. Retrieved from https://newsroom.fb.com/news/h/shutting-down-partner-categories/

Facebook. (2019, March 27). Boost liquidity and work smarter with machine learning. Facebook Business. Retrieved from https://www.facebook.com/business/news/insights/boost-liquidity-and-work-smarter-with-machine-learning

Facebook. (n.d.-a). Ads about social issues, elections or politics. Retrieved from https://www.facebook.com/business/help/1838453822893854

Facebook. (n.d.-b). Facebook ad library. Retrieved from https://www.facebook.com/ads/library/?active_status=all&ad_type=political_and_issue_ads&country=US

Facebook. (n.d.-c). Upcoming changes. Facebook Business. Retrieved from https://www.facebook.com/business/m/one-sheeters/improving-accountability-and-updates-for-facebook-targeting

Filloux, F. (2019, June 2). Trump’s digital campaign for 2020 is already soaring. Monday Note. Retrieved from https://mondaynote.com/trumps-digital-campaign-for-2020-is-already-soaring-d0075bee8e89

Fitzpatrick, R. (2018, March 1). Resonate wins Reed Award for “best application of artificial intelligence or machine learning” [Blog post]. Resonate Blog. Retrieved from https://www.resonate.com/blog/resonate-wins-reed-award-for-best-application-of-artificial-intelligence-or-machine-learning/

Friedman, W. (2018, June 19). 2020 political ad spend could hit $10 billion, digital share expected to double. MediaPost. Retrieved from https://www.mediapost.com/publications/article/337226/2020-political-ad-spend-could-hit-10-billion-dig.html

FWIW: The platform self-regulation dumpster fire. (2019, November 22). ACRONYM. Retrieved from https://www.anotheracronym.org/newsletter/fwiw-the-platform-self-regulation-dumpster-fire/.

Garrahan, A. (2018, September 4). California’s new “Social Media DISCLOSE Act” regulates social media companies, search engines, other online advertising outlets, and political advertisers. Inside Political Law. Retrieved from https://www.insidepoliticallaw.com/2018/09/04/californias-new-social-media-disclose-act-regulates-social-media-companies-search-engines-online-advertising-outlets-political-advertisers/

Glazer, E. & Horwitz, J. (2019, May 23). Facebook curbs incentives to sell political ads ahead of 2020 election. Wall Street Journal. Retrieved from https://www.wsj.com/articles/facebook-ends-commissions-for-political-ad-sales-11558603803

Google. (2018). Transparency report: Political advertising in the United States. Retrieved from https://transparencyreport.google.com/political-ads/region/US

Google. (n.d.-a). About responsive search ads (beta). Google Ads Help. Retrieved from https://support.google.com/google-ads/answer/7684791

Google. (n.d.-b). About smart display campaigns. Retrieved from https://support.google.com/google-ads/answer/7020281

Google. (n.d.-c). Vogon. Retrieved from https://opensource.google.com/projects/vogon

Gorton, W. A. (2016). Manipulating citizens: How political campaigns’ use of behavioural social science harms democracy. New Political Science 38(1), 61–80. https://doi.org/10.1080/07393148.2015.1125119

Gray, C. M., Kou, Y., Battles, B., Hoggatt, J., & Toombs, A. L. (2018). The Dark (Patterns) Side of UX Design. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems - CHI 18. https://doi.org/10.1145/3173574.3174108

Green, J. & Issenberg, S. (2016, October 27). Inside the Trump bunker, with days to go. Bloomberg Businessweek. Retrieved from https://www.bloomberg.com/news/articles/2016-10-27/inside-the-trump-bunker-with-12-days-to-go

Griner, D. (2019, June 25). Here’s every Grand Prix winner from the 2019 Cannes Lions. Adweek. Retrieved from https://www.adweek.com/creativity/heres-every-grand-prix-winner-from-the-2019-cannes-lions/

Howard, P. N., Woolley, S., & Calo, R. (2018). Algorithms, bots, and political communication in the US 2016 election: The challenge of automated political communication for election law and administration. Journal of Information Technology & Politics, 15(2), 81-93. https://doi.org/10.1080/19331681.2018.1448735

IAB. (n.d.-a). The new TV. Retrieved from https://video-guide.iab.com/new-tv

IAB Europe. (2019, June 5). Winners announced for the MIXX Awards Europe 2019 [Blog post]. IAB Europe Blog. Retrieved from https://iabeurope.eu/all-news/winners-announced-for-the-mixx-awards-europe-2019/

ICO. (2018). Call for views: Code of practice for the use of personal information in political campaigns. Retrieved from https://ico.org.uk/about-the-ico/ico-and-stakeholder-consultations/call-for-views-code-of-practice-for-the-use-of-personal-information-in-political-campaigns/

Ingram, D. (2018, September 20). Facebook to scale back 'embeds' for political campaigns. NBC News. Retrieved from https://www.nbcnews.com/tech/tech-news/facebook-scale-back-embeds-political-campaigns-n911701

Inscape. (n.d.). Solutions. Retrieved from https://www.inscape.tv/solutions

Isaac, M. (2019, November 22). Why everyone is angry at Facebook over its political ads policy. The New York Times. Retrieved from https://www.nytimes.com/2019/11/22/technology/campaigns-pressure-facebook-political-ads.html

Jeffrey, C. (2018, September 21). Facebook to stop sending “embeds” into political campaigns. TechSpot. Retrieved from https://www.techspot.com/news/76563-facebook-stop-sending-embeds-political-campaigns.html

Kafka, P. (2019, December 10). Facebook’s political ad problem, explained by an expert. Vox. Retrieved from https://www.vox.com/recode/2019/12/10/20996869/facebook-political-ads-targeting-alex-stamos-interview-open-sourced

Kantar. (2019, June 26). Kantar forecasts $6 billion in political ad spending for 2019-2020 election cycle [Press release]. Retrieved from https://www.kantarmedia.com/us/newsroom/press-releases/kantar-forecasts6-billion-in-political-ad-spending-for-2019-2020-election-cycle

Karpf, D. (2016, October 31). Preparing for the campaign tech bullshit season. Civicist. Retrieved from https://civichall.org/civicist/preparing-campaign-tech-bullshit-season/

Kerry, C. F. (2019, March 8). Breaking down proposals for privacy legislation: How do they regulate? [Report]. Washington, DC: The Brookings Institution. Retrieved from https://www.brookings.edu/research/breaking-down-proposals-for-privacy-legislation-how-do-they-regulate/

Kim, Y. M., Hsu, J., Neiman, D., Kou, C., Bankston, L., Kim, S. Y., Heinrich, R., Baragwanath, R., & Raskutti, G. (2018). The stealth media? Groups and targets behind divisive issue campaigns on Facebook. Political Communication, 35(4), 515–541. https://doi.org/10.1080/10584609.2018.1476425

Kreiss, D. (2016). Prototype Politics: Technology-intensive Campaigning and the Data of Democracy. New York: Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199350247.001.0001

Kreiss, D., & Howard, P. N. (2010). New challenges to political privacy: Lessons from the first US presidential race in the web 2.0 era. International Journal of Communication, 4, 1032–1050. Retrieved from http://ijoc.org/index.php/ijoc/article/viewFile/870/473

Kreiss, D., & McGregor, S. C. (2017). Technology firms shape political communication: The work of Microsoft, Facebook, Twitter, and Google with campaigns during the 2016 US presidential cycle, Political Communication, 35(2), 155–177. https://doi.org/10.1080/10584609.2017.1364814

Lafayette, J. (2019, June 27). Elections to generate $6B in ad spending: Kantar. Broadcasting & Cable. Retrieved from https://www.broadcastingcable.com/news/elections-to-generate-6b-in-ad-spending-kantar.

Lecher, C. (2019, March 8). Elizabeth Warren says she wants to break up Amazon, Google, and Facebook. The Verge. Retrieved from https://www.theverge.com/2019/3/8/18256032/elizabeth-warren-antitrust-google-amazon-facebook-break-up

Lennon, W. (2018, October 9). An introduction to the Koch digital media network. Open Secrets. Retrieved from https://www.opensecrets.org/news/2018/10/intro-to-koch-brothers-digital/

Lerner, R. (2019, September 24). OTT advertising will be a clear winner in the 2020 elections. TV[R]EV. Retrieved from https://tvrev.com/ott-advertising-will-be-a-clear-winner-in-the-2020-elections/

Loredo, A. (2016, March). Centro automates political ad-buying at scale on brand-safe news and information sites. Centro Blog. Retrieved from https://www.centro.net/blog/centro-brand-exchange-political-ads/

Lynch, J. (2018, November 15). Advertisers spent $5.25 billion on the midterm election, 17% more than in 2016. Adweek. Retrieved from https://www.kantarmedia.com/us/newsroom/km-inthenews/advertisers-spent-5-25-billion-on-the-midterm-election

McAfee, J. (2019, January 8). Inscape on the power of automatic content recognition and trends in TV consumption. Adelphic Blog. Retrieved from https://www.adelphic.com/inscape-on-the-power-of-automatic-content-recognition-and-trends-in-tv-consumption/

McCullough, S. C. (2016, April 3). When it comes to political programmatic advertising, the creative has to be emotionally charged. Adweek. Retrieved from https://www.adweek.com/brand-marketing/when-it-comes-political-programmatic-advertising-creative-has-be-emotionally-charged-170559/

Magenta, M., Gragnani, J., & Souza, F. (2018, October 24). How WhatsApp is being abused in Brazil's elections. BBC News. Retrieved from https://www.bbc.com/news/technology-45956557

Meuse, K. (2018, September 5). Put the wow factor in your campaigns with dynamic creative. Sizmek Blog. Retrieved from https://www.sizmek.com/blog/put-the-wow-factor-in-your-campaigns-with-dynamic-creative/.

Miller, S. J. (2018, November 23). Trade group successfully targeted voters on Waze ahead of midterms. Campaigns & Elections. Retrieved from https://www.campaignsandelections.com/campaign-insider/trade-group-successfully-targeted-voters-on-waze-ahead-of-midterms.

Mindshare. (2017). POV: YouTube Director Mix. Retrieved from https://www.mindshareworld.com/news/pov-youtube-director-mix.

MMA. (2019). Smarties X. Retrieved from https://www.mmaglobal.com/smarties2019.

Montgomery, K. C. (2007). Generation Digital: Politics, Commerce, and Childhood in the Age of the Internet. Cambridge, MA: The MIT Press.

Montgomery, K. C. (2015). Youth and surveillance in the Facebook era: Policy interventions and social implications. Telecommunications Policy, 39(3), 771–786. https://doi.org/10.1016/j.telpol.2014.12.006.

Montgomery, K. C., & Chester, J. (2009). Interactive food and beverage marketing: Targeting adolescents in the digital age. Journal of Adolescent Health, 45(3). https://doi.org/10.1016/j.jadohealth.2009.04.006

Montgomery, K. C., Chester, J., Grier, S. A., & Dorfman, L. (2012). The new threat of digital marketing. Pediatric Clinics of North America, 59(3), 659–675. https://doi.org/10.1016/j.pcl.2012.03.022

Montgomery, K. C., Chester, J., & Kopp, K. (2017). Health wearable devices in the big data era: Ensuring privacy, security, and consumer protection [Report]. Washington, DC: Center for Digital Democracy. Retrieved from https://www.democraticmedia.org/sites/default/files/field/public/2016/aucdd_wearablesreport_final121516.pdf

Mothership Strategies. (2018). Case study: Doug Jones for senate advertising. Retrieved from https://mothershipstrategies.com/doug-jones-digital-advertising-mothership/

Nissenbaum, H. (2009). Privacy in Context: Technology, Policy, and the Integrity of Social Life. Redwood City: Stanford Law Books.

Oakes, O. (2018, April 20). IPA calls for suspension of micro-targeted political ads. Campaign. Retrieved from https://www.campaignlive.co.uk/article/ipa-calls-suspension-micro-targeted-political-ads/1462598

Office of the Privacy Commissioner of Canada. (2019, April 1). Guidance for federal political parties on protecting personal information. Retrieved from https://www.priv.gc.ca/en/privacy-topics/collecting-personal-information/gd_pp_201904

O’Keefe, P. (2019, March 31). Relational digital organizing—the next political campaign battleground. Medium. Retrieved from https://medium.com/political-moneyball/relational-digital-organizing-the-next-political-campaign-battleground-48ab1f7c2eef

O’Reilly, M. (2018, June 1). FCC regulatory free arena [Blog post]. Retrieved from FCC Blog: https://www.fcc.gov/news-events/blog/2018/06/01/fcc-regulatory-free-arena

Ottenfeld, E. (2018, April 25). Who supports the Honest Ads Act? Some of the country’s largest tech firms. Issue One. Retrieved from https://www.issueone.org/who-supports-the-honest-ads-act/

Pathak, S. (2018, March 30). How Facebook’s shutdown of third-party data affects advertisers. Digiday. Retrieved from https://digiday.com/marketing/facebooks-shutdown-third-party-data-affects-brands/.

Perrin, N. (2019, July 19). Political ad spend to reach $6 billion for 2020 election. eMarketer. Retrieved from https://www.emarketer.com/content/political-ad-spend-to-reach-6-billion-for-2020-election

Peterson, A. & Marte, J. (2016, May 11). Google to ban payday loan advertisements. Washington Post. Retrieved from https://www.washingtonpost.com/news/the-switch/wp/2016/05/11/google-to-ban-payday-loan-advertisements/?utm_term=.6fb60c626b05

Peterson, T. (2017, October 30). Facebook’s dynamic creative can generate up to 6,250 versions of an ad. Marketing Land. Retrieved from https://marketingland.com/facebooks-dynamic-creative-option-can-automatically-produce-6250-versions-ad-227250

Popkin, D. (2019, February 15). Optimizing Facebook campaigns with third-party data [Blog post]. Retrieved from LiveRamp Blog: https://liveramp.com/blog/facebook-campaigns/

Revolution Messaging. (n.d.). Creative. Retrieved from https://revolutionmessaging.com/services/creative/

Rapp, L. (2018, September 25). Announcing LiveRamp AbiliTec [Blog post]. Retrieved from LiveRamp Blog: https://liveramp.com/blog/abilitec/

Rosenberg, M., Confessore, N., & Cadwalladr. C. (2018, March 17). How Trump consultants exploited the Facebook data of millions. The New York Times. Retrieved from https://www.nytimes.com/2018/03/17/us/politics/cambridge-analytica-trump-campaign.html

Rubinstein, I. (2014). Voter privacy in the age of big data. Wisconsin Law Review, 2014(5), 861–936. Retrieved from http://wisconsinlawreview.org/wp-content/uploads/2015/02/1-Rubinstein-Final-Online.pdf

Sandberg, S. (2019, June 30). A second update on our civil rights audit. Facebook Newsroom. Retrieved from https://newsroom.fb.com/news/2019/06/second-update-civil-rights-audit/

Schiff, A. (2018, February 14). Ericsson Emodo beefs up its ad tech chops with Placecast acquisition. AdExchanger. Retrieved from https://adexchanger.com/mobile/ericsson-emodo-beefs-ad-tech-chops-placecast-acquisition/

Schmidt, S. (2019, July 11). A shadow digital campaign may prove decisive in 2020–and Donald Trump has a clear advantage. AlterNet. Retrieved from https://www.alternet.org/2019/07/a-shadow-digital-campaign-may-prove-decisive-in-2020-and-donald-trump-has-a-clear-advantage/

Schuster, J. (2015, October 7). Political campaigns: The art and science of reaching voters [Blog post]. Retrieved from LiveRamp Blog: https://liveramp.com/blog/political-campaigns-the-art-and-science-of-reaching-voters/

Schwartz, P. M., & Solove, D.J. (2011). The PII problem: Privacy and a new concept of personally identifiable information. New York University Law Review, 86, 1814–1894. Retrieved from http://scholarship.law.berkeley.edu/cgi/viewcontent.cgi?article=2638&context=facpubs

Shah, A. (2019, March 14). Why political parties should focus on programmatic advertising this elections [Blog post]. AdAge India. Retrieved from http://www.adageindia.in/blogs-columnists/guest-columnists/why-political-parties-should-focus-on-programmatic-advertising-this-elections/articleshow/68402528.cms

Skyhook. (n.d.). Geospatial insights. Retrieved from https://www.skyhook.com/geospatial-insights/

Slefo, G.P. (2019, April 8). Ad industry groups band together to influence Congress on data privacy. AdAge. Retrieved from https://adage.com/article/digital/ad-industry-groups-band-together-influence-congress-data-privacy/2162976.

Sluis, S. (2018, June 27). DoubleClick no more! Google renames its ad stack. AdExchanger. Retrieved from https://adexchanger.com/platforms/doubleclick-no-more-google-renames-its-ad-stack/.

Spencer, S. (2019, November 20). An update on our political ads policy [Blog post]. Retrieved from Google Blog: https://www.blog.google/technology/ads/update-our-political-ads-policy/.

Sullivan, M. (2019, March 28). Facebook expands ad transparency beyond politics: Here’s what’s new. Fast Company. Retrieved from https://www.fastcompany.com/90326809/facebook-expands-archive-ad-library-for-political-ads-heres-whats-new

Susser, D., Roessler, B., & Nissenbaum, H. F. (2019). Online manipulation: Hidden influences in a digital world. Georgetown Law Technology Review, 4(1), 1–45. Retrieved from https://georgetownlawtechreview.org/online-manipulation-hidden-influences-in-a-digital-world/GLTR-01-2020/

TransUnion (2019, May 15). TransUnion strengthens digital marketing solutions with agreement to acquire TruSignal. Retrieved from https://newsroom.transunion.com/transunion-strengthens-digital-marketing-solutions-with-agreement-to-acquire-trusignal/

Tru Optik. (2019, June 17). Tru Optik launches political data cloud for connected TV and streaming audio advertising. Retrieved from https://www.truoptik.com/tru-optik-launches-political-data-cloud-for-connected-tv-and-streaming-audio-advertising.php

Tufekci, Z. (2014). Engineering the public: Big data, surveillance and computational politics. First Monday, 19(7). https://doi.org/10.5210/fm.v19i7.4901

Villano, L. (2018, July 19). The loss of Facebook Partner categories: How marketers can cope. MediaPost. Retrieved from https://www.mediapost.com/publications/article/322380/the-loss-of-facebook-partner-categories-how-marke.html

Walker, K. (2018, May 4). Supporting election integrity through greater advertising transparency [Blog post]. Retrieved from Google Blog: https://www.blog.google/outreach-initiatives/public-policy/supporting-election-integrity-through-greater-advertising-transparency/

Weissbrot, A. (2019, April 5). Magna predicts US OTT ad revenues will double by 2020. AdExchanger. Retrieved from https://adexchanger.com/tv-2/magna-predicts-us-ott-ad-revenues-will-double-by-2020/

Weissbrot, A. (2019, October 22). Roku to acquire Dataxu for $150 million. AdExchanger. Retrieved from https://adexchanger.com/digital-tv/roku-to-acquire-dataxu-for-150-million/

Weissbrot, A. (2019, April 10). Waze makes programmatic inventory available in dv360. AdExchanger. Retrieved from https://adexchanger.com/platforms/waze-makes-programmatic-inventory-available-in-dv360/

Williams, R. (2019, October 8). IPG launches martech platform Kinesso to marry Acxiom data with ad campaigns. Marketing Dive. Retrieved from https://www.marketingdive.com/news/ipg-launches-martech-platform-kinesso-to-marry-acxiom-data-with-ad-campaign/564527/

WPA Intelligence. (2019). WPAi wins big at 2019 Reed Awards. Retrieved from http://wpaintel.com/2019reedawards/

Xandr. (n.d.). Political advertising. Retrieved from https://www.xandr.com/legal/political-advertising/

Zarsky, T.Z. (2019). Privacy and manipulation in the digital age. Theoretical Inquiries in Law, 20(1), 157–188. https://doi.org/10.1515/til-2019-0006

Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. New York: Public Affairs.

Footnotes

1. We have relied on a diverse set of primary and secondary materials for our research, including trade journals, research reports, and other industry documents, as well as personal participation in, and proceedings from, conferences focused on digital technologies and politics.

The regulation of online political micro-targeting in Europe


This paper is part of Data-driven elections, a special issue of Internet Policy Review guest-edited by Colin J. Bennett and David Lyon.

Introduction

A new form of political advertising has emerged: online political micro-targeting (‘micro-targeting’). Such micro-targeting typically involves monitoring people’s online behaviour, and using the collected data, sometimes enriched with other data, to display individually targeted political advertisements. However, micro-targeting poses serious risks, as demonstrated by the Cambridge Analytica scandal, where a voter-profiling company had harvested private information from the Facebook profiles of more than 50 million users without their permission (Guardian, 2019).

Unlike political advertising on television, micro-targeting not only affects the democratic process, but it also affects people’s privacy and data protection rights. Indeed, micro-targeting affects myriad other rights and duties, including a political party’s and online platform’s right to impart information, a voter’s right to receive information, and the government’s duty to ensure free and fair elections.

We focus on the following legal question: how is micro-targeting regulated in Europe? We examine this question from three perspectives: data protection law, freedom of expression (the right to receive and impart information), and sector-specific rules for political advertising. We focus on the European Union, and also draw upon case law of the Council of Europe’s European Court of Human Rights.

First, we discuss the General Data Protection Regulation, which lays down rules on the use of personal data. Second, we examine how political parties enjoy a freedom of expression claim regarding their advertising. We discuss, among other things, whether a political party’s freedom of expression gives such a party the right to choose its advertising medium.

Finally, many countries have sector-specific rules for political advertising, which differ from country to country. By way of illustration, we discuss the rules in Germany, France, the Netherlands, and the UK. For decades, paid political advertising on television has been completely banned during elections in many European democracies. These political advertising bans aim to prevent the distortion of the democratic process by financially powerful interests, and to ensure a level playing field during elections. But before we discuss regulation, we give a brief introduction to online political micro-targeting.

Micro-targeting

Online political micro-targeting, or micro-targeting for short, can be summarised as consisting of three steps: 1) collecting personal data, 2) using those data to identify groups of people who are likely to be susceptible to a certain message, and 3) sending tailored online messages (Zuiderveen Borgesius et al., 2018). The objectives of micro-targeting are manifold: to persuade, inform, or mobilise voters, or rather to dissuade, confuse, or demobilise them. People can be micro-targeted on the basis of all kinds of information (such as their personality traits, their location, or the issues they care about). Hence, any data can be valuable: from consumer data to browsing behaviour. Such data can provide enough information to make inferences about the susceptibilities of the target audiences.

Micro-targeting differs from regular targeting not necessarily in the size of the target audience, but rather in the level of homogeneity perceived by the political advertiser. Simply put, a micro-targeted audience receives a message tailored to one or more specific characteristics. The political advertiser perceives these characteristics as instrumental in making the audience members susceptible to the tailored message. A regularly targeted message, by contrast, does not account for audience heterogeneity.

For example, suppose the Green Party plans to target a neighbourhood in Amsterdam. The party chooses this specific neighbourhood, and not the adjacent one, because the city's statistics show that turnout was low in the last election while the number of votes for the Green Party was high. The Green Party sends a political message to everyone living in that neighbourhood. We would classify this as regular targeting.

The Green Party would be micro-targeting when it recognises that, although many people in the neighbourhood may share socio-demographic characteristics, they still have many different reasons to vote for a specific party. Some want cheap solar panels, others want more nature in the city, others want to block cars from the city centre, others want a softer stance on immigration, drugs, etc. Moreover, some people in the neighbourhood would never vote and others would never vote for the Green Party. When micro-targeting, the Green Party could ignore the unlikely voters and tailor its messages to possible voters' issue salience (or other characteristics). In this way, the Green Party would turn one heterogeneous group into several homogeneous subgroups.
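To make the distinction concrete, the following minimal Python sketch illustrates the segmentation logic described above. Every voter, attribute, and message in it is hypothetical, and the snippet is purely illustrative: it does not use real data or any advertising platform's interface.

```python
# Purely illustrative sketch of the segmentation logic described above.
# All names, attributes, and messages are hypothetical.

voters = [
    {"name": "A", "likely_green_voter": True,  "top_issue": "cheap solar panels"},
    {"name": "B", "likely_green_voter": True,  "top_issue": "car-free city centre"},
    {"name": "C", "likely_green_voter": False, "top_issue": "immigration"},
    {"name": "D", "likely_green_voter": True,  "top_issue": "cheap solar panels"},
]

# Regular targeting: one message for everyone in the neighbourhood.
regular_audience = [v["name"] for v in voters]

# Micro-targeting: drop voters deemed unreachable, then split the rest into
# homogeneous subgroups based on the issue each person cares about most.
subgroups = {}
for v in voters:
    if not v["likely_green_voter"]:
        continue  # ignore voters considered unlikely to support the party
    subgroups.setdefault(v["top_issue"], []).append(v["name"])

# One tailored message per homogeneous subgroup.
tailored_messages = {issue: f"Vote Green for {issue}." for issue in subgroups}

print(regular_audience)   # ['A', 'B', 'C', 'D']
print(subgroups)          # {'cheap solar panels': ['A', 'D'], 'car-free city centre': ['B']}
print(tailored_messages)
```

The point of the sketch is simply that the same heterogeneous audience is either addressed with one message (regular targeting) or partitioned into homogeneous subgroups, each receiving its own tailored message (micro-targeting).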

To illustrate micro-targeting in practice: in the Netherlands, almost all political parties use Facebook's lookalike audiences function to micro-target voters (Dobber et al., 2017). Political parties use this function to find people who fit a very specific profile, for example, party members who share one or more specific characteristics.

Dutch pro-immigrant party DENK took an innovative approach when it micro-targeted only people who use a special SIM card. This SIM card can be used to cheaply call non-EU countries. In practice, mostly immigrants use these SIM cards, giving DENK a simple way to efficiently reach people who are traditionally difficult to reach (Van Trigt, 2018). DENK is known to have experimented with fear appeals meant to scare its own base to the polls (a false advertisement made to look as if it came from Geert Wilders’ Freedom Party, with the statement: “after March 15 [election day] we are going to cleanse the Netherlands”). Such fear appeals can easily be distributed to people who own the special SIM cards (Nieuws BV, 2018).

Micro-targeting techniques develop quickly, and so do the ways in which political actors employ them (Kreiss and Barrett, 2019). In the pre-mass media age, citizens received a ‘micro-targeted’ message, or at least a personalised one, when the local cleric visited his parish’s homes to remind them why and for which party they should vote (Kaal, 2016). The advent of the internet and social media in particular enabled micro-targeting on a much larger scale than in the pre-mass media age. Moreover, the cost in time and effort is much lower, and the variation in messages can be enormous. In addition, people often do not know they have been micro-targeted (and if they do, they remain in the dark about what kind of information was used, although Facebook does provide some information about the targeting criteria specified by the advertiser), while that was clear when the cleric knocked on the door. Back then, for instance, you could act as if you were not home. It is more difficult to escape micro-targeted political messages. People leave behind their data at every move they make. Consequently, they can be targeted at any moment, with increasing precision (Zuiderveen Borgesius et al., 2018).

Micro-targeting originates from the United States, where relatively loose data-protection regulation may have facilitated the rapid development and adoption of the technique (Bimber, 2014; Kreiss, 2012, 2016; Nickerson & Rogers, 2014; Nielsen, 2012). However, micro-targeting is gaining traction in the EU. For example, micro-targeting was used for the first time on a large scale in national elections in the UK (Anstead, 2017), the Netherlands (Dobber et al., 2017), Germany (Drepper, 2017), and France (Liegey Muller Pons, n.d.; International IDEA, 2018).

Due to its easy-to-use infrastructure, Facebook makes it straightforward for EU parties to engage in micro-targeting. Facebook and Google hold vast amounts of personal data and offer political parties the means to reach specific groups without having to collect data themselves. Naturally, micro-targeting in the EU does not solely occur on Facebook. Political parties can also develop micro-targeting techniques themselves, or they can, for instance, hire the services of specialised firms.

While micro-targeting is gaining popularity with political parties throughout Europe, the use of the technique brings risks. Micro-targeting poses risks to individuals, political parties, and public opinion (Zuiderveen Borgesius et al., 2018). For individuals, micro-targeting threatens privacy. For example, a data breach could expose information about individuals’ income, education, consumer behaviour, but also their inferred political leanings, sexual preferences, or religiosity. Cambridge Analytica harvested the data of tens of millions of unwitting voters (Cadwalladr & Graham-Harrison, 2018). Moreover, merely being aware of data collection could evoke chilling effects: people may alter their behaviour if they suspect being under surveillance (Richards, 2015; Dobber et al., 2018). Manipulation is a different risk to individuals (see Susser, Roessler, & Nissenbaum, 2019). The 2016 US elections saw disinformation efforts targeted, for example, to African Americans (Howard, Ganesh, & Liotsiou, 2018). Finally, political actors could ignore certain voter groups they deem unimportant (‘redlining’, see Howard, 2006) or demobilise the supporters of competing parties (Green & Issenberg, 2016). A consequence could be underrepresentation of certain societal groups.

The costs of micro-targeting and the power of digital intermediaries are among the main risks to political parties. The costs of micro-targeting may give an unfair advantage to the larger and better-funded parties over the smaller parties. This unfair advantage worsens the inequality between rich and poor political parties (see Margolis & Resnick, 2000), and restrains the free flow of political ideas. Second, digital intermediaries profit from their vast amounts of personal data and their intuitive infrastructure. Political parties are dependent on these intermediaries to run a modern political campaign.

On the level of public opinion, micro-targeting makes it difficult to find out which issues candidates find most important, and which they least care about. Moreover, an elected official may have trouble interpreting her mandate when a large range of issues was covered during a political campaign. Finally, micro-targeting could lead to a fragmentation of the marketplace of ideas. Fragmentation happens when the public loses track of overarching themes, and instead focuses on the single issues that are relevant to them personally, which are the topics delivered through micro-targeting techniques (Hillygus & Shields, 2008; Zuiderveen Borgesius et al., 2018).

Advancements in technology lead to increasing possibilities to influence voters’ behaviour. Micro-targeting can be an important tool for (foreign) political actors to interfere in elections. Think of micro-targeted deep fakes (manipulated, but realistic, videos) that can be used to misinform specific voter groups. Malicious political actors can use micro-targeting to reach the right voter with the right disinformation message, thereby maximising the impact of each specific message (Bayer et al., 2019).

Many contextual factors play a role in shaping micro-targeting. For instance, the electoral system is important (Dobber et al., 2017). A political advertiser operating in a multiparty system makes different choices than an advertiser operating in a (de facto) two-party system. A country’s, or an electoral district’s, culture or tradition also plays a role. When there is a low-turnout culture, for instance, political advertisers focus more on getting out the vote than on persuading voters. And US campaigns frequently engage in attack ads (Vafeiadis, Li, & Shen, 2018), while attack ads are rare in, for instance, Japan (Plasser & Plasser, 2002). In addition, factors at the campaign-team level, resource factors, organisational factors, infrastructural factors, structural electoral factors (Kreiss, 2016), and ethical and legal concerns play a role in shaping micro-targeting (Dobber et al., 2017; see also Kruschinski & Haller, 2017). However, because of length constraints, this paper focuses on how the law regulates micro-targeting.

Privacy and data protection rules

Micro-targeting entails the use of personal data for targeted advertising, and therefore the applicable privacy and data protection rules are relevant. The EU grants the right to the protection of personal data the status of a human right. The Charter of Fundamental Rights of the European Union (2000) includes a separate right to the protection of personal data, in addition to a general right to privacy.

Almost 25 years ago, the EU adopted the influential Data Protection Directive (1995). The EU replaced the 1995 Directive with the General Data Protection Regulation (2016; in application since 2018). The GDPR is a legal instrument that aims to ensure that personal data are only used fairly and transparently. The GDPR imposes obligations on organisations that use personal data (data controllers) and grants rights to people whose personal data are used (data subjects). Compliance with the GDPR is overseen by independent Data Protection Authorities (DPAs).

The scope of the GDPR is wide. The GDPR applies to the ‘processing’ of ‘personal data’. Almost anything that can be done with personal data falls within the processing definition. The personal data definition also has a wide scope, and covers, for instance, tracking cookies, IP addresses, and other online identifiers (article 4(1) GDPR; Court of Justice of the European Union 2017). In many cases, the GDPR also applies to data controllers established outside the EU, for instance when they process personal data and offer goods or services to people in the EU, or when they track the online behaviour of people in the EU (article 3 GDPR).

The data protection principles that lie at the core of the GDPR (article 5), sometimes called Fair Information Principles, did not change much in comparison to the 1995 Directive. More than 120 countries in the world have data privacy laws with similar principles (Greenleaf, 2017). Below we summarise, roughly, some main points of the GDPR (for more details, see Hoofnagle, Van der Sloot, & Zuiderveen Borgesius, 2019).

The data protection principles that form the core of the GDPR (article 5) can be summarised as follows: (a) personal data may only be used lawfully, fairly and in a transparent manner (‘lawfulness, fairness and transparency’); (b) personal data may only be collected for purposes that are specified in advance, and may not be used for unrelated purposes (‘purpose limitation’); (c) controllers may not collect or use more personal data than is necessary for the processing purpose (‘data minimisation’); (d) controllers must generally ensure that the personal data they use are accurate (‘accuracy’); (e) personal data may not be retained for unreasonably long periods (‘storage limitation’); (f) data security must be ensured (‘integrity and confidentiality’); and (g) the data controller is responsible for compliance (‘accountability’).

Data subjects have several rights under the GDPR. For example, data subjects can demand a controller to tell them what personal data it holds on them (article 15). To illustrate: a US citizen, David Carroll, used his access rights under the Data Protection Act in the UK to obtain more information about which data the micro-targeting firm Cambridge Analytica held on him (Carroll, 2018).

The most important change brought by the GDPR is that it empowers DPAs with serious enforcement possibilities. Controllers that breach the GDPR’s rules can be fined up to 20 million euros or up to 4% of their worldwide annual turnover, whichever is higher – turnover meaning income, not profit (article 83). The mere possibility of such fines has led many companies and other organisations to improve their data practices.
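The arithmetic of this fine ceiling can be shown in a minimal sketch. The turnover figures below are invented; the only rule assumed is that, for the most serious infringements, article 83(5) GDPR caps fines at the higher of 20 million euros or 4% of total worldwide annual turnover.

```python
# Illustrative only: fine ceiling for the most serious infringements under
# article 83(5) GDPR (EUR 20 million or 4% of worldwide annual turnover,
# whichever is higher). Turnover figures below are invented.

def max_gdpr_fine(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound of an administrative fine under article 83(5) GDPR."""
    return max(20_000_000, 0.04 * worldwide_annual_turnover_eur)

print(max_gdpr_fine(300_000_000))    # the flat EUR 20 million ceiling applies
print(max_gdpr_fine(5_000_000_000))  # 4% of turnover (EUR 200 million) applies
```

The sketch simply shows why the 4% limb is what makes the ceiling bite for the largest platforms and data brokers, while the flat 20 million euro limb governs smaller controllers.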

The GDPR does not contain specific rules for micro-targeting. The GDPR is extra strict, however, for many types of sensitive data (‘special categories of personal data’, article 9). Personal data regarding people’s ‘political opinions’ fall within that category.

In principle, processing of such special categories of personal data is prohibited, but the GDPR includes exceptions to that prohibition. Political parties (and similar not-for-profit bodies) can, under certain circumstances, rely on an exception to the ban on using sensitive data. Again, the conditions are strict. For example, a political party may only use personal data of members or former members who are in regular contact with it, under certain circumstances (GDPR, article 9(2)(d)). The exception is phrased as follows.

Paragraph 1 [the ban on using special categories of personal data] shall not apply if one of the following applies: (...) processing is carried out in the course of its legitimate activities with appropriate safeguards by a foundation, association or any other not-for-profit body with a political, philosophical, religious or trade union aim and on condition that the processing relates solely to the members or to former members of the body or to persons who have regular contact with it in connection with its purposes and that the personal data are not disclosed outside that body without the consent of the data subjects (GDPR, article 9(2)(d)).

Another possibly relevant exception is the data subject's ‘explicit consent’. For targeted marketing (not conducted by political parties themselves), such explicit consent is the only available exception to the processing ban on sensitive data. The GDPR’s requirements for valid consent are strict. For instance, the GDPR does not accept opt-out systems (that assume that people consent if they fail to object). And burying a consent request in the small print of a privacy notice is not allowed (article 4(11); article 7 GDPR). There are more exceptions to the ban on using sensitive data, but those exceptions are not relevant for elections.

Apart from the GDPR, the EU has separate rules for tracking cookies and similar tracking technologies (EU ePrivacy Directive, 2009). Roughly summarised, anybody who wants to set tracking cookies on somebody’s computer must ask that person for his or her prior informed consent. Hence, a company that wants to use tracking cookies to trace somebody’s online behaviour to learn more about that person’s interests is only allowed to do so after asking consent (EU ePrivacy Directive, 2009). The EU is busy revising the rules for online tracking (Zuiderveen Borgesius et al., 2017).

The GDPR only became applicable in May 2018. From the perspective of micro-targeting technology, that is a long time ago. But for a law, the GDPR is young. Therefore, the exact meaning of many GDPR rules (including the rules on sensitive data) still has to emerge from case law.

Nevertheless, it seems likely that, when compared to the US, Europe’s privacy rules hinder micro-targeting. For example, because of Europe’s privacy rules, it is harder for political parties to buy data about people (see also Bennett, 2016). And in most countries in Europe, it is impossible to access voter registration records. The GDPR’s transparency requirements can help journalists and researchers to find out more about what political parties and marketing companies do with personal data.

In sum, Europe’s privacy laws do not categorically prohibit micro-targeting. Still, Europe’s privacy laws make micro-targeting more difficult than in, for instance, the US.

The GDPR does not and will not solve all privacy problems. Compliance and enforcement leave something to be desired. And there are weak points in the GDPR, when applied to micro-targeting. For example, the GDPR is an omnibus law, applying to almost all usage of personal data in the private and the public sector. Because the GDPR applies in many different situations, many of its rules are rather vague and abstract. And the EU lawmaker did not specially consider the specific context of micro-targeting when drafting the GDPR. For example, freedom of expression and democracy play a larger role in the area of micro-targeting than in cases where, for instance, an app provider collects personal data for behavioural advertising.

More precise rules for personal data use for political micro-targeting may be needed (see also ICO, 2019). Perhaps the EU lawmaker could adopt rules for the use of personal data in the context of micro-targeting. However, adopting such rules would be difficult for the EU, as different EU member states have different traditions in the context of elections.

Freedom of expression

Political micro-targeting is a form of political communication, and thus, is an exercise of the right to freedom of expression, which is guaranteed by both Article 11 of the EU Charter of Fundamental Rights, and Article 10 of the European Convention on Human Rights (ECHR). To understand the protection afforded to political micro-targeting as a form of political speech, we must turn to the case law of the European Court of Human Rights, the court that ultimately decides whether a restriction of freedom of expression is consistent with the ECHR.

Political micro-targeting as political speech

While the European Court of Human Rights has not to date considered a case involving political micro-targeting, it has held that a closely related form of political communication, a political party’s paid-for political advertising on television during an election, is a form of political speech enjoying the highest level of protection under Article 10. The publication of information “with a view to influencing voters is an exercise of freedom of political expression”, and this is so, “[i]rrespective of the fact that it [is] presented as a paid advertisement” (TV Vest v. Norway, 2008). Paid-for political micro-targeting, as a form of political advertising, is therefore a form of political speech under Article 10. That conclusion is consistent with the Court’s broad notion of what constitutes an exercise of freedom of expression, which includes: posting comments online during an election period (Savva Terentyev v. Russia, 2018), posting pictures on Instagram targeting public figures (Einarsson v. Iceland, 2017), uploading political videos to YouTube (Mariya Alekhina v. Russia, 2018), posting links to online videos targeting political parties (Magyar Jeti Zrt v. Hungary, 2018), distributing election leaflets (Andrushko v. Russia, 2010), and displaying political posters (Kandzhov v. Bulgaria, 2008).

Indeed, the court has held that a political party’s mobile app allowing voters to anonymously share pictures of ballots was a protected form of freedom of expression (Magyar Kétfarkú Kutya Párt v. Hungary, 2018). The court applied its well-established principle that Article 10 not only applies to the content of information expressed, but also to the means of transmission, and the form in which they are conveyed. The court has also held that people must be able to choose, without unreasonable interference from the government, the form they consider the most effective to reach a maximum number of people (Women On Waves v. Portugal, 2009). The political party’s app had a communicative value, allowing voters to share information, and therefore constituted political expression under Article 10.

There is an important consequence of political micro-targeting being considered political speech, as such expression enjoys a ‘privileged position’ under Article 10 (TV Vest v. Norway, 2008). Because of that privileged position, the court applies its highest standard of scrutiny - strict scrutiny - to any restriction on political speech. Because there is ‘little scope’ for restrictions on political speech, any restriction must be ‘narrowly interpreted’, and its necessity ‘convincingly established’ by the government (Vitrenko v. Ukraine, 2008). Further, Article 10’s protection of expression on matters of public interest includes expression which is offensive, shocking or disturbing (Dichand v. Austria, 2002). It is also ‘particularly important’ that during the pre-election period opinions and information of all kinds are permitted to circulate freely (Bowman v. UK, 1998). Given the protection afforded to political speech, it is not surprising that when the court considered Norway’s ban on paid political advertising on television, as applied to a Norwegian political party in the run-up to local elections, the court unanimously found a violation of Article 10 (TV Vest v. Norway, 2008). In sum, micro-targeting is a form of political expression that receives considerable legal protection.

Political parties and online platforms

When considering restrictions on political micro-targeting, the Article 10 rights of a number of different actors are at issue, including an election candidate’s freedom of expression (Otegi Mondragon v. Spain, 2011), a political party’s freedom of expression (Magyar Kétfarkú Kutya Párt v. Hungary, 2018), an online platform’s freedom of expression (Cengiz v. Turkey, 2015), and, indeed, the public’s (voters’) right to receive information (Magyar Helsinki Bizottság v. Hungary, 2016).

First, where a politician engages in political micro-targeting, this is an exercise of the politician’s Article 10 right to freedom of expression and to impart information to potential voters. For decades, the court has recognised that while freedom of expression is important for everybody, it is ‘especially so’ for politicians, as they represent the electorate, and defend the electorate’s interests. As such, interferences with a politician’s freedom of expression are subject to the ‘closest scrutiny’ by the court (Castells v. Spain, 1992). Accordingly, the margin of appreciation (or the space and deference the court grants national authorities and courts) for assessing the ‘necessity’ of the penalty imposed on a politician is ‘particularly narrow’ (Otegi Mondragon v. Spain, 2011) (see Brems, 2019). Further, Article 10 protects a politician’s expression in the context of a political debate, even where it only has a 'slim factual basis', and politicians are fully entitled to engage in exaggeration, and strong, polemical language (Arbeiter v. Austria, 2007).

Second, a political party’s freedom of expression extends beyond the content of its political expression, but also extends to the means of transmission, including the mere making available of a mobile app to allow voters to anonymously share their voting ballots. The case establishing this principle was Magyar Kétfarkú Kutya Párt v. Hungary, where the court considered a fine imposed by Hungary’s National Election Commission on a small political party for operating a mobile app enabling voters to anonymously share comments and photographs taken of their ballot papers during a 2016 referendum. Before the court, the Hungarian government argued that there had been no interference with the political party’s freedom of expression, as the party had only provided a mobile app for voters, and had not engaged in political expression itself.

However, the court unanimously rejected the government’s argument, and held that making the app available was an exercise of the political party’s freedom of expression, and fully protected under Article 10. The court found that there had been a violation of the party’s freedom of expression, as the government had failed to demonstrate how the secrecy or fairness of the referendum had been impacted by the app.

The court has also linked the importance of protecting a political party’s freedom of expression to democracy itself. Because political parties’ activities form part of a collective exercise of freedom of expression, this in itself entitles political parties to seek the protection of Article 10 (United Communist Party of Turkey v. Turkey, 1998). Further, political parties represent different shades of opinion to be found within a country’s population, and by relaying this range of opinion, political parties make an immense contribution to political debate, which is at the very core of a democratic society. The court highlights the ‘primordial role’ played by political parties, emphasising that they are the ‘only bodies which can come to power and have the capacity to influence the whole national regime’ (Oran v. Turkey, 2014). The court also emphasises the unique value of political parties, in that they put forward proposals for an overall societal model before the electorate, and by their capacity to implement those proposals once they come to power, political parties differ from other organisations which intervene in the political arena.

Third, online platforms also enjoy freedom of expression. The European Court has indeed highlighted the importance of online platforms (such as Facebook, YouTube, and Instagram) for freedom of expression. For example, according to the court, YouTube is a ‘unique’ and an ‘undoubtedly’ important platform for political speech and political activities, with the court recognising that “political content ignored by the traditional media is often shared via YouTube” (Cengiz and Others v. Turkey, 2015). Similarly, in relation to Instagram, the court has emphasised that the internet plays an important role in enhancing the public’s access to news and facilitating the dissemination of information in general (Einarsson v. Iceland, 2017). And platforms which facilitate the creation and sharing of webpages within a group enjoy the protection of Article 10, as they constitute a means of exercising freedom of expression (Ahmet Yıldırım v. Turkey, 2012).

In relation to Google and other online platforms, the court has held that Article 10 guarantees freedom of expression to ‘everyone’, and it makes no distinction according to the nature of the aim pursued, or the role played by natural or legal persons in the exercise of that freedom. The internet is one of the principal means by which individuals exercise their right to freedom of expression, and provides “essential tools for participation in activities and discussions concerning political issues” (Ahmet Yıldırım v. Turkey, 2012). Further, the court has held that the operators of the file-sharing platform The Pirate Bay (allowing users to share copyright-protected digital material) were entitled to Article 10 protection, as they put in place the “means for others to impart and receive information within the meaning of Article 10”, as Article 10 guarantees freedom of expression to everyone, and “[n]o distinction is made in it according to whether the aim pursued is profit-making or not” (Neij v. Sweden, 2013).

While the court has not to date considered the regulation of online political advertising, it has delivered a number of judgments on the regulation of political advertising in broadcasting. The most relevant judgment is TV Vest v. Norway, where a Norwegian political party argued that a ban on political advertising on television, during the run-up to elections, violated its right to freedom of expression. The court found a violation of Article 10. The court recognised that there could be relevant reasons for a ban on political advertising, such as preventing the ‘financially powerful’ from obtaining an ‘undesirable advantage’ in public debates, and ‘ensuring a level playing field in elections’. The court was thus signalling that there are circumstances in which it may accept that the regulation of political advertising is permissible on certain policy grounds.

However, the court held that the political party at issue, a small pensioners’ party, was ‘hardly mentioned’ in election television coverage, and paid advertising on television became ‘the only way’ for it to put its message to the public. Moreover, the party did not fall within the category of a party that the ban was designed to target, namely financially strong parties which might gain an ‘unfair advantage’. Thus, the court held that the general ‘objectives’ of the ban could not justify its application to the political party, and that its application therefore violated the party’s right to freedom of expression under Article 10. The Article 10 principles protecting political expression, and a political party’s expression in particular, together with the Court’s judgment in TV Vest, would therefore seem to suggest that a ban on online political micro-targeting would be difficult to reconcile with Article 10.

However, there is some uncertainty in the case law, as the court held in Animal Defenders International v. UK (2013) that a ban on paid political advertising on television in the UK did not violate Article 10. But unlike TV Vest, the case concerned an animal rights group (not a political party), which sought to broadcast a political advertisement outside an election period. For the first time under Article 10, the court held that a certain type of regulation, which the court called ‘general measures’, can be imposed ‘consistently with the Convention’, even where they ‘result in individual hard cases’ affecting freedom of expression. The court laid down a three-step test for determining whether a ‘general measure’ is consistent with Article 10: the court must assess (a) the ‘legislative choices’ underlying the general measure, (b) the ‘quality’ of the parliamentary review of the necessity of the measure, and (c) any ‘risk of abuse’ if a general measure is relaxed.

The court then applied its general-measures test to the ban on political advertising on television in the UK: first, the court examined the ‘legislative choices’ underlying the ban, and accepted that it was necessary to prevent the ‘risk of distortion’ of public debate by wealthy groups having unequal access to political advertising; and due to ‘the immediate and powerful effect of the broadcast media’. Second, with regard to the quality of parliamentary review, the court attached ‘considerable weight’ to the ‘extensive pre-legislative consultation’, referencing a number of parliamentary bodies which had examined the ban. Third, as regards the risks from relaxing a general measure, the court held that it was ‘reasonable’ for the government to fear that a more relaxed regime (such as financial caps on political advertising expenditure) was not feasible, given the ‘risk of abuse’ in the form of wealthy bodies ‘with agendas’ being ‘fronted’ by social advocacy groups, leading to uncertainty and litigation. Therefore, the court held that the total ban on political TV advertising was consistent with Article 10.

It is not clear whether the European Court of Human Rights would apply Animal Defenders to a law prohibiting online political micro-targeting. The judgment resulted in a divided Court (9-8 vote), and the court did not expressly overrule TV Vest. But it does signal that the court will accept, in some circumstances, that outright bans on political advertising may be consistent with freedom of expression, in order to prevent the risk of distortion of public debate by wealthy groups.

National rules on political advertising

What rules are currently in force in Europe concerning online political advertising? We briefly outline the rules in France, Germany, the Netherlands and the UK, which represent widely divergent approaches.

At one end of the spectrum is France, where Article L. 52-1 of the Electoral Code prohibits, during the six months prior to an election, “the use, for the purpose of election propaganda, of any commercial advertising in the press or any means of audiovisual communication”. This rule also covers online public communication (Granchet, 2017).

Further, in late 2018, France introduced new rules under Art. L. 163-1 providing that in the three months prior to elections, online platforms must provide users with information about who paid for the “promotion of content related to a debate of general interest”. Moreover, users must be provided with fair, clear and transparent information on the use of personal data in the context of the promotion of information content related to a debate of general interest. These rules have led some platforms, such as Twitter, to ban all political campaigning ads and issue advocacy ads in France (Twitter, 2019a). Similarly, Microsoft bans all ads in France “containing content related to debate of general interest linked to an electoral campaign” (Microsoft, 2019). Google also banned all ads containing “informational content relating to a debate of general interest” between April and May 2019 across its platform in France, including YouTube (Google, 2019a). The French law even led Twitter to block the French government information service from paying for sponsored tweets for a voter registration campaign in the lead-up to European parliamentary elections (BBC, 2019). And in late 2019, Twitter introduced a global ban on paid-for promotion of political content on its platform (Twitter, 2019b), and Google implemented a new global rule limiting election ad audience targeting to age, gender, and general location (Google, 2019b).

Of course, platforms' bans may not capture all types of indirect political advertising, where ad campaigns do not promote a particular party or candidate, but their subject matter and message nonetheless favour certain candidates and parties with an aligned agenda. At least the French law tries to capture all paid content “related to a debate of general interest”, and not just campaigning and issue advocacy ads.

In Germany, under Article 7(9)(1) of the Rundfunkstaatsvertrag (RStV), paid political advertising is prohibited in broadcasting in an effort to prevent individual social groupings and forces from exerting a disproportionate influence on public opinion by purchasing advertising time (Etteldorf, 2017). Importantly, during elections, certain broadcasters are obliged to allocate free airtime to political parties for election advertising. The regulation of political advertising online depends not only on the online service itself but also on its provider. German law distinguishes between broadcasting and ‘telemedia’. The transmission of a linear programme according to a schedule (especially live streaming services) via the internet is classified as broadcasting, and is therefore subject to the political advertising ban. Telemedia content (roughly speaking: internet content), on the other hand, is governed by Articles 54 et seq. of the RStV. Election advertising via on-demand audiovisual media services is prohibited under Article 58(3)(1), in conjunction with Article 7(9) of the RStV and, in other telemedia, must be separated from other content, in accordance with Article 58(1) of the RStV (Etteldorf, 2017). However, these rules do not apply to social media platforms like Facebook and YouTube.

At the other end of the spectrum is the Netherlands, where there are no specific restrictions concerning the type of political content that can be broadcast during elections; and Dutch law does not specifically regulate online political advertising during elections and referenda. In practice, political parties have limited budgets in the Netherlands. The Dutch government has proposed a new Political Parties Act, including new transparency obligations for political parties with regard to digital political campaigns and political micro-targeting (see Van Hoboken et al., 2019).

Finally, in the United Kingdom, paid political advertising in broadcasting is prohibited under the Communications Act of 2003. However, the ban does not apply online. While paid political advertising is not specifically restricted online through regulation, the UK Electoral Commission has emphasised that election spending rules “cover the costs of placing adverts on digital platforms”; and include the “costs of distributing and targeting digital campaign materials or developing and using databases for digital campaigning”. Further, the Commission has recommended a number of reforms to election laws applicable to online political advertising, including (a) election and referendum adverts on social media platforms should be labelled to make the source clear; and (b) campaigners should be required to provide more detailed and meaningful invoices from their digital suppliers to improve transparency (Electoral Commission, 2018).

At the EU level, the European Commission has recognised some of the concerns related to online political micro-targeting, including that it creates increased possibilities to target citizens often in a ‘non-transparent’ manner, and may involve the processing of personal data of citizens ‘unlawfully in the electoral context’ (European Commission, 2018a). The Commission has introduced a self-regulatory code, and also guidance for member states about elections to the European Parliament. The self-regulatory Code of Practice on Disinformation, agreed with platforms including Facebook, Google and Twitter, includes a commitment by the platforms to ensure transparency about political and issue-based advertising, also with a view to enabling users to understand why they have been targeted by a given advertisement (European Commission, 2018b).

Further, as the Commission notes, Article 14 of the Treaty on European Union provides that the European Parliament is to be composed of representatives of the Union’s citizens; however, the procedure for the elections to the European Parliament is governed by national provisions in each member state, and national authorities are in charge of monitoring the elections at the national level (European Commission, 2018a). The Commission recommended that member states should encourage the disclosure of information on campaign expenditure for paid online political advertisements, including “information on any targeting criteria used in the dissemination of such advertisements” (European Commission, 2018a).

However, a controversy erupted in the run-up to the European Parliament elections in 2019. Facebook implemented new rules on political advertising on its platforms, under which any political party, candidate, group or individual within the EU was required to go through an ad authorisation process when planning to run ads related to politics or issues of national importance. The rules included that advertisers could only run ads in the country in which they are authorised (Facebook, 2019). This led political parties contesting the European parliamentary elections to criticise Facebook over the rules, arguing that Facebook’s rules “prevent pan-EU parties from posting online political adverts across the 28-member bloc” (Khan, 2019).

Indeed, a letter was sent from three EU institutions to Facebook, urging it to amend its rules to allow European institutions, political groups, Members of the European Parliament, and European political parties to run pan-European advertising activities (Welle, Tranholm-Mikkelsen, & Selmayr, 2019). Notably, three weeks later, Facebook announced that it had “implemented temporary exemptions for the main Facebook pages of the European Parliament, European political groups and European political parties” (Kayali, 2019). However, there is no publicly available list of exempted groups, parties and EU institutions. In conclusion, European countries differ widely in how they regulate political advertising.

For completeness’ sake, we make a few brief remarks about self- and co-regulation. Platforms such as Facebook are implementing transparency mechanisms, including publicly-searchable political ad libraries. But while ad libraries make it much more difficult to post ‘dark ads’ (messages only visible to the targeted group [Jamieson, 2018]), the ad libraries’ “present implementations leave much to be desired” (Leerssen et al., 2019). For instance, definitions of ‘political’ ads “vary greatly and continue to raise significant line-drawing and enforcement challenges”. Second, it is an open and difficult question whether and how platforms should ensure that ad buyers are properly identified. Third, so far platforms do not give much information on how and to whom political ads are targeted (Leerssen et al., 2019).

More generally, it is debatable whether national governments in the EU should leave the protection of democratic debate to online platforms. Should governments step in with regulation, akin to the regulation of political advertising in broadcasting? Should national governments rely upon online platforms themselves to ensure political micro-targeting is not damaging democracy? Do online platforms have the expertise to weigh the different interests involved, including the interest in free and fair elections?

Conclusion

We discussed how online political micro-targeting is regulated in Europe. Political micro-targeting has a unique risk of harm not associated with traditional political advertising: interference with the rights to privacy and data protection. We showed that the GDPR generally applies to online political micro-targeting. The GDPR offers useful rules to protect privacy-related and other interests. For instance, the GDPR makes it more difficult for parties to buy detailed data about consumers. In the micro-targeting context, the GDPR is a necessary but not a sufficient protection.

We showed that political parties and online platforms have a strong freedom of expression claim in the context of political advertising. Political speech, including political advertising, deserves considerable protection in the case law of the European Court of Human Rights. Nevertheless, under certain conditions, national lawmakers can limit political speech. Therefore, national lawmakers could probably ban online political micro-targeting, at least for a period leading up to elections.

A complicating factor is the tension between EU-level regulatory action, and national rules in the area of election regulation. The EU has never stepped into the regulatory domain of national election regulation, as evidenced by the omission of rules on political advertising under the EU's rules for broadcasting (Audiovisual Media Services Directive), and general advertising (Unfair Commercial Practices Directive). Election regulation involves a particularly complex balancing of interests, and is tied to national culture and political history. Moreover, the EU has competence to regulate personal data, and to regulate elections for the European Parliament, but no specific competence to regulate national elections. Hence, national parliaments seem best placed to regulate political micro-targeting. Meanwhile, platforms themselves are deciding that the commercial reputational damage associated with allowing political micro-targeting may outweigh the commercial benefits (e.g., Microsoft, Twitter, and Google).

In this paper we discussed mostly what lawmakers in the EU (and four member states) do and can do in the area of micro-targeting. There are a range of possibilities, ranging from not regulating micro-targeting at all, to banning micro-targeting during certain periods. In between those two extremes there are many options, including rules that aim for more transparency. More debate and research are needed on what lawmakers should do.

References

Ahmet Yıldırım v. Turkey (Application no. 3111/10) 18 December 2012

Andrushko v. Russia (Application No. 4260/0) 14 October 2010

Animal Defenders International v. UK (Application No. 48876/08) 22 April 2013

Anstead, N. (2017). Data-driven campaigning in the 2015 UK general election. International Journal of Press/Politics, 22(3), 294–313. https://doi.org/10.1177/1940161217706163

Arbeiter v. Austria (Application no. 3138/04) 25 January 2007

Audiovisual Media Services Directive (EU) 2018/1808.

BBC News. (2019, April 3). Twitter blocks French government with its own fake news law. BBC News. Retrieved from https://www.bbc.com/news/world-europe-47800418

Bowman v. UK (Application no. 24839/94) 19 February 1998

Bayer, J., Bitiukova, N., Bard, P., Szakács, J., Alemanno, A., & Uszkiewicz, E. (2019). Disinformation and Propaganda – Impact on the Functioning of the Rule of Law in the EU and its Member States [Study]. Brussels: European Parliament. Retrieved from http://www.europarl.europa.eu/supporting-analyses

Bennett, C. J. (2016). Voter databases, micro-targeting, and data protection law: can political parties campaign in Europe as they do in North America? International Data Privacy Law, 6, 261–275. https://doi.org/10.1093/idpl/ipw021

Bennett, C. J. (2019). Remark at the conference ‘Data-driven Elections: Implications & Challenges for Democratic Societies’. Retrieved from https://www.sscqueens.org/data-driven-elections-implications-and-challenges-for-democratic-societies

Bimber, B. (2014). Digital Media in the Obama Campaigns of 2008 and 2012: Adaptation to the personalized political communication environment. Journal of Information Technology & Politics, 11(2), 130–150. https://doi.org/10.1080/19331681.2014.895691

Brems, E. (2019). Positive subsidiarity and its implications for the margin of appreciation doctrine. Netherlands Quarterly of Human Rights, 37(3), 210–227. https://doi.org/10.1177/0924051919861798

Cadwalladr, C., & Graham-Harrison, E. (2018, March 17). The Guardian. Retrieved from https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election

Cappello, M. (Ed.). (2017). Media coverage of elections: the legal framework in Europe [Study No. IRIS Special 2017-1]. Strasbourg: European Audiovisual Observatory.

Castells v. Spain (Application no. 11798/85) 23 April 1992

Cengiz and Others v. Turkey (Application nos. 48226/10 and 14027/11) 1 December 2015

Court of Justice of the European Union (2016). Case C-582/14 Breyer. ECLI:EU:C:2016:779.

Dichand and Others v. Austria (Application no. 29271/95) 26 February 2002

Dobber, T., Trilling, D., Helberger, N., & de Vreese, C. H. (2017). Two crates of beer and 40 pizzas: the adoption of innovative political behavioural targeting techniques. Internet Policy Review, 6(4). https://doi.org/10.14763/2017.4.777

Dobber, T., Trilling, D., Helberger, N., & de Vreese, C. H. (2018). Spiraling downward: The reciprocal relation between attitude toward political behavioral targeting and privacy concerns. New Media & Society, 21(6), 1212–1231. https://doi.org/10.1177/1461444818813372

Drepper, D. (2017, September 7). Alle großen Parteien nutzen Facebook-Anzeigen – aber noch sind die ziemlich langweilig. Buzzfeed. Retrieved from https://www.buzzfeed.com/de/danieldrepper/alle-grossen-parteien-nutzen-facebook-anzeigen-aber-noch

Einarsson v. Iceland (Application no. 24703/15) 7 November 2017

Etteldorf, C. (2017). Germany. In M. Cappello (Ed.), Media coverage of elections: the legal framework in Europe (pp. 29–37). Strasbourg: European Audiovisual Observatory

EU Data Protection Directive. (1995). “Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the Protection of Individuals with Regard to the Processing of Personal Data and on the Free Movement of Such Data.” Official Journal of the European Communities, 38, 31–50, http://eur-lex.europa.eu/legal-content/en/TXT/?uri=CELEX:31995L0046

EU ePrivacy Directive. (2009). Directive 2002/58/EC of the European Parliament and of the Council of 12 July 2002 concerning the processing of personal data and the protection of privacy in the electronic communications sector (Directive on privacy and electronic communications), as amended by Directive 2009/136. Retrieved from https://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=CONSLEG:2002L0058:20091219:EN:PDF

EU General Data Protection Regulation. (2016). ‘Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of such Data, and Repealing Directive 95/46/EC (General Data Protection Regulation),’ Official Journal of the European Union, 119, http://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:32016R0679&from=EN

European Commission. (2018a). Recommendation on election cooperation networks, online transparency, protection against cybersecurity incidents and fighting disinformation campaigns in the context of elections to the European Parliament. Retrieved from https://ec.europa.eu/commission/sites/beta-political/files/soteu2018-cybersecurity-elections-recommendation-5949_en.pdf

European Commission. (2018b). EU Code of Practice on Disinformation. Retrieved from https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=54454

European Commission. (2018c). Communication on Tackling online disinformation: a European Approach. Retrieved from https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52018DC0236

Facebook (2019, January 15). Bringing More Transparency to Political Ads in 2019. Retrieved from https://www.facebook.com/business/news/bringing-more-transparency-to-political-ads-in-2019

Greenleaf, G. (2017). Global data privacy laws: 120 national data privacy laws, including Indonesia and Turkey. Privacy Laws & Business International Report, 145, 10–13. https://ssrn.com/abstract=2993035

The Guardian (2019). The Cambridge Analytica Files. The Guardian. Retrieved from https://www.theguardian.com/news/series/cambridge-analytica-files

Google (2019a). Political content. Retrieved from https://support.google.com/adspolicy/answer/6014595?hl=en

Google (2019b). An update on our political ads policy. Retrieved from https://blog.google/technology/ads/update-our-political-ads-policy/

Hillygus, D. S., & Shields, T. G. (2008). The persuadable voter: Wedge issues in presidential campaigns. Princeton: Princeton University Press.

Hoofnagle, C. J., van der Sloot B., & Zuiderveen Borgesius, F. J. (2019). The European Union general data protection regulation: what it is and what it means, Information & Communications Technology Law, 28(1), 65–98, https://doi.org/10.1080/13600834.2019.1573501

Howard, P. N. (2006). New media campaigns and the managed citizen. Cambridge: Cambridge University Press.

Howard, P. N., Ganesh, B., & Liotsiou, D. (2018). The IRA, Social Media and Political Polarization in the United States, 2012-2018 [Working Paper No. 2018.2]. Oxford: Project on Computational Propaganda, Oxford University. Retrieved from https://comprop.oii.ox.ac.uk/research/ira-political-polarization/

Huffpost. (2016, May 28). Emmanuel Macron donne le coup d'envoi de la "grande marche" de son parti "En Marche!" [Emmanuel Macron kicks off the "great march" of his party "En Marche!"]. Huffington Post. Retrieved from https://www.huffingtonpost.fr/2016/05/28/emmanuel-macron-porte-a-porte-en-marche_n_10177592.html

ICO. (2018a). Political campaigning practices: data analytics. Retrieved from https://ico.org.uk/your-data-matters/political-campaigning-practices-data-analytics/

ICO. (2018b). Blog: Information Commissioner’s report brings the ICO’s investigation into the use of data analytics in political campaigns up to date. Retrieved from https://ico.org.uk/about-the-ico/news-and-events/blog-information-commissioner-s-report-brings-the-ico-s-investigation-into-the-use-of-data-analytics-in-political-campaigns-up-to-date/

ICO. (2019). ICO consultation on the draft framework code of practice for the use of personal data in political campaigning. Retrieved from https://ico.org.uk/about-the-ico/ico-and-stakeholder-consultations/ico-consultation-on-the-draft-framework-code-of-practice-for-the-use-of-personal-data-in-political-campaigning/

International IDEA. (2018). Digital Micro-targeting. Stockholm: International Institute for Democracy and Electoral Assistance. Retrieved from https://www.idea.int/publications/catalogue/digital-microtargeting

Granchet, A. (2017). France. In M. Cappello (Ed.), Media coverage of elections: the legal framework in Europe (pp. 29–37). Strasbourg: European Audiovisual Observatory

Green, J., & Issenberg, S. (2016, October 27). Inside the Trump Bunker, With Days to Go. Bloomberg. Retrieved from https://www.bloomberg.com/news/articles/2016-10-27/inside-the-trump-bunker-with-12-days-to-go

Jamieson, K. H. (2013). Messages, micro-targeting, and new media technologies. The Forum, 11(3), 429–435. https://doi.org/10.1515/for-2013-0052

Jamieson, K. H. (2018). Cyberwar: How Russian Hackers and Trolls Helped Elect a President—What We Don’t, Can’t, and Do Know. New York: Oxford University Press.

Kaal, H. (2016). Politics of place: political representation and the culture of electioneering in the Netherlands, c.1848–1980s. European Review of History, 23(3), 486–507. https://doi.org/10.1080/13507486.2015.1086314

Kandzhov v. Bulgaria (Application No. 68294/01) 6 November 2008

Kayali, L. (2019, May 8). Facebook allows EU-wide political ads for European Parliament. Politico. Retrieved from https://www.politico.eu/article/facebook-allows-eu-wide-political-ads-for-european-parliament/

Khan, M. (2019, 29 March). Facebook rules on political advertising criticised by EU parties. Financial Times. Retrieved from https://www.ft.com/content/0dab95ba-5156-11e9-b401-8d9ef1626294

Kreiss, D. (2012). Taking Our Country Back: The Crafting of Networked Politics from Howard Dean to Barack Obama. New York: Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199782536.001.0001

Kreiss, D. (2016). Prototype Politics: Technology-intensive campaigning and the data of democracy. New York: Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199350247.001.000

Kreiss, D., & Barrett, B. (2019). Facebook and Google as Global Democratic Infrastructures: A Preliminary Five Country Comparative Analysis of Platforms, Paid Political Speech, and Data. Data Driven Elections workshop in Victoria.

Kruschinski, S., & Haller, A. (2017). Restrictions on data-driven political micro-targeting in Germany. Internet Policy Review, 6(4). https://doi.org/10.14763/2017.4.780

Leerssen, P., Ausloos, J. Zarouali, B., Helberger, N. & de Vreese, C. H. (2019). Platform ad archives: promises and pitfalls. Internet Policy Review, 8(4). https://doi.org/10.14763/2019.4.1421

Liegey Muller Pons (n.d.). En Marche: La Grande Marche [En Marche: The Great March]. Retrieved from https://www.liegeymullerpons.fr/en/en-marche-la-grande-marche-2/

Magyar Helsinki Bizottság v. Hungary [GC] (Application no. 18030/11) 8 November 2016

Magyar Jeti Zrt v. Hungary (Application no. 11257/16) 4 December 2018

Magyar Kétfarkú Kutya Párt v. Hungary (Application no. 201/17) 23 January 2018

Margolis, M., & Resnick, D. C. (2000). Politics as usual: The cyberspace "revolution". Los Angeles, CA: SAGE.

Mariya Alekhina and Others v. Russia (Application no. 38004/12) 17 July 2018

Microsoft. (2019). Disallowed content policies. Retrieved from https://about.ads.microsoft.com/en-us/resources/policies/disallowed-content-policies

Neij and Sunde Kolmisoppi v. Sweden (Application no. 40397/12) 19 February 2013

Nieuws BV. (2018). DENK plande een nepnieuwscampagne onder PVV-vlag [DENK planned a fake news campaign under PVV flag]. NPO Radio 1. Retrieved from https://www.nporadio1.nl/achtergrond/10220-denk-plande-een-nepnieuwscampagne

Nickerson, D., & Rogers, T. (2014). Political Campaigns and Big Data. Journal of Economic Perspectives, 28(2), 51–74. https://doi.org/10.1257/jep.28.2.51

Nielsen, R. K. (2012). Ground Wars: Personalized Communication in Political Campaigns. Princeton: Princeton University Press.

Oran v. Turkey (Application nos. 28881/07 and 37920/07) 15 April 2014

Otegi Mondragon v. Spain (Application no. 2034/07) 15 March 2011

Plasser, F., & Plasser, G. (2002). Global political campaigning: A worldwide analysis of campaign professionals and their practice. Westport: Praeger Publishers.

Richards, N. M. (2015). Intellectual Privacy. New York: Oxford University Press.

Savva Terentyev v. Russia (Application no. 10692/09) 28 August 2018

Susser, D., Roessler, B., & Nissenbaum, H. (2019). Technology , autonomy , and manipulation. Internet Policy Review, 8(2). https://doi.org/10.14763/2019.2.1410

TV Vest As & Rogaland Pensjonistparti v. Norway (Application no. 21132/05) 11 December 2008

Twitter (2019a). Political Content in the European Union. Retrieved from https://business.twitter.com/en/help/ads-policies/restricted-content-policies/political-content/eu-political-content.html

Twitter (2019b). Political Content. Retrieved from https://business.twitter.com/en/help/ads-policies/prohibited-content-policies/political-content.html

Unfair Commercial Practices Directive (2005/29/EC).

United Communist Party of Turkey and Others v. Turkey [GC] (Application no. 19392/92) 30 January 1998

Vafeiadis, M., Li, R., & Shen, F. (2018). Narratives in Political Advertising: An Analysis of the Political Advertisements in the 2014 Midterm Elections. Journal of Broadcasting and Electronic Media, 62(2), 354–370. https://doi.org/10.1080/08838151.2018.1451858

van Hoboken, J., Appelman, N., Ó Fathaigh, R., Leerssen P., McGonagle, T., van Eijk, N., & Helberger, N. (2019). De verspreiding van desinformatie via internetdiensten en de regulering van politieke advertenties [The dissemination of disinformation via internet services and the regulation of political advertisements] [Report]. https://www.rijksoverheid.nl/documenten/rapporten/2019/10/18/de-verspreiding-van-desinformatie-via-internetdiensten-en-de-regulering-van-politieke-advertenties

van Trigt, M. (2018). Microtargeting moest deze student helpen in de gemeenteraad van de VVD te komen: zo probeerde hij dat [Microtargeting was supposed to help this student get into the VVD city council, so he tried that]. Volkskrant. Retrieved from https://www.volkskrant.nl/nieuws-achtergrond/microtargeting-moest-deze-student-helpen-in-de-gemeenteraad-van-de-vvd-te-komen-zo-probeerde-hij-dat~b61857ee/

Vitrenko and Others v. Ukraine (Application No. 23510/02) 16 December 2008

Welle, K., Tranholm-Mikkelsen, J., & Selmayr, M. (2019). Letter to Facebook. Retrieved from https://g8fip1kplyr33r3krz5b97d1-wpengine.netdna-ssl.com/wp-content/uploads/2019/04/Letter-to-Clegg-from-EU-institutions.pdf

Women On Waves and Others v. Portugal (Application no. 31276/05) 3 February 2009

Zuiderveen Borgesius, F. J., van Hoboken, J., Ó Fathaigh, R., Irion, K., & Rozendaal, M. (2017) An Assessment of the Commission’s Proposal on Privacy and Electronic Communications [Report]. Brussels: European Parliament. Retrieved from http://www.europarl.europa.eu/supporting-analyses

Zuiderveen Borgesius, F. J., Möller, J., Kruikemeier, S., Ó Fathaigh, R., Irion, K., Dobber, T., Bodo, B., & de Vreese, C. H. (2018). Online political microtargeting: Promises and threats for democracy. Utrecht Law Review, 14(1), 82–96. https://doi.org/10.18352/ulr.420

The crucial and contested global public good: principles and goals in global internet governance


Introduction

Barack Obama stated in 2015, in response to EU criticism over US dominance over the internet: “We have owned the Internet. Our companies have created it, expanded it, perfected it…” (The Verge, 2015). Despite this, the US administration has completed a process of transferring its former stewardship responsibility over a body called IANA (Internet Assigned Numbers Authority), formally a department within the larger ICANN (Internet Corporation for Assigned Names and Numbers) to ICANN itself (Raustiala, 2017).

ICANN is perhaps the most prominent example of a multistakeholder governance model, as opposed to an intergovernmental governance model. The term multistakeholder was not applied to characterise ICANN from its inception in 1998, as will be explained below. The advocacy coalition framework (ACF), developed by Sabatier and Jenkins-Smith (1993), can illuminate the possibilities and challenges that the multistakeholder governance structure (multistakeholderism; see Raymond and DeNardis, 2015) faces from various actors. According to one author, the complexities and power asymmetries involved in the management of ICANN imply that it suffers from what has been termed a “multiple accountabilities disorder” (Koppell, 2005).

Civil society organisations (CSOs) believe that the digital divide can be overcome by an internationalisation of internet governance (Weber, 2009, p. 164). Internationalisation is understood as a situation where several states influence how the internet is governed. Calls for change in the global internet governance are frequently heard, as will be seen below, most notably at a meeting hosted by Brazil in 2014 (NETmundial, 2014, p. 6).

The domain name system (DNS) and root server administration are crucial elements of global internet governance, exercised by ICANN and IANA, whose more detailed structure will be clarified below. The DNS forms part of the broader Transmission Control Protocol/Internet Protocol (TCP/IP) suite (Bygrave, 2015a, pp. 10-16).

Internet governance is, however, a wider term encompassing many policy areas (Council of Europe, 2015; Council of Europe, 2011; DeNardis, 2013a) and is defined as:

the development and application by governments, the private sector and civil society, in their respective roles, of shared principles, norms, rules, decision-making procedures, and programmes that shape the evolution and use of the Internet (United Nations General Assembly, 2006, para. 34).

By highlighting principles, norms, rules and procedures, the United Nations (UN) implicitly says that internet governance must be more than a mere technical exercise. This definition is wide (Bygrave, 2015a, p. 15), but so is the range of internet governance tasks (Raymond and DeNardis, 2015, pp. 570-572; listing 43 different tasks). There are no international treaties regulating internet governance, and the sources that are applied in the article are (i) declarations from UN summits, (ii) outcomes from the ICANN processes in recent years, (iii) statements from the US government, and (iv) documents from both non-governmental and intergovernmental processes. Even if it is fair to state that the way in which ICANN operates has sidelined the UN, UN sources can be relevant, conceptually and in terms of identifying new approaches (Padovani et al., 2010, p. 367, referring to the 2003 World Summit on the Information Society (UN WSIS)). The same authors found, however, that governmental actors build on a narrower view of which are the relevant internet actors, while non-governmental organisations have a broader understanding of such actors and tend to operate with a more complex terminology, often embedded in human rights. Nevertheless, I will refer to the UN when identifying the principles and goals, acknowledging other lists (Pettrachin, 2018, p. 341; Padovani et al., 2010, p. 365; Mueller et al., 2007, pp. 243-250).

This article emphasises the internet governance principles, because these are in greatest need of clarification and because they provide direction for the norms, rules and procedures. The UN has called for the “development of globally-applicable principles…” [UN General Assembly, 2006, para. 70; UN Working Group on Internet Governance (UN WGIG), 2005, para. 13(a)]. No agreement has emerged, however, as to what these principles are. Moreover, there will be an emphasis on goals and the role of human rights in global internet governance, a topic promoted by the Council of Europe (2019; see also Levinson and Marzouki, 2016).

The article proceeds as follows: Section 1 outlines the principles of global internet governance, including accountability. Section 2 identifies the goals that are to be ensured by internet governance. Section 3 analyses the processes within some actors in global internet governance, focusing on the International Telecommunication Union (ITU) and ICANN. Section 4 reviews reform proposals for internationalisation or globalisation of internet governance. Section 5 analyses ICANN’s process to accommodate human rights concerns, developing further the critical assessment made by Appelman (2016).

The main difference between goals and principles is that goals are about the essential nature of global internet governance while principles refer to the minimum standard of conduct in decision-making relating to global internet governance.

The article proceeds from the premise that only by supplementing the overall principle of multistakeholder governance with other substantive principles can ICANN’s overall legitimacy be adequately strong. Weinberg (2012) understands legitimacy as being external to the organisation, more specifically the perception by relevant actors that a given organisation is an appropriate wielder of authority. Legitimacy is in this article operationalised as encompassing adequate and widely-accepted procedures for participation and accountability. Demonstrating broad-based participation by all stakeholders and mechanisms for holding the ICANN Board to account have been priorities of ICANN when it has been met with criticism that it is neither transparent, democratic, nor representative. ICANN had from its inception weak legitimacy, seeking to overcome this by (i) (inadequate) systems adopted from US administrative law – but without judicial review; (ii) enhanced representation (elaborated by Malcolm, 2015); and (iii) developing decision-making as consensus (Weinberg, 2000).

I have emphasised the terms accountability and participation as an operationalisation of legitimacy for two reasons. First, in the IANA transition process, the Cross Community Working Group on Enhancing ICANN Accountability (CCWG-Accountability) was established, with two “work streams”: the first (WS1) related to the IANA transition process and the second (WS2) addressing accountability issues beyond this process (ICANN, 2018a, pp. 25-33; see also annexes 5-7; ICANN, 2018d). Hence, enhanced accountability and participatory processes have been identified by ICANN itself as crucial. Second, as the analysis in Section 5 will particularly show, there were attempts to embed ICANN’s activities more explicitly within a human rights framework, and both accountability and participation are recognised as human rights principles.

The research question that this article seeks to answer is: Will recent measures taken by ICANN to improve its overall accountability and comply with other principles, as well as fulfilling the goals of global internet governance, improve ICANN’s legitimacy and governance, and hence strengthen the multistakeholder governance model, as opposed to an intergovernmental governance model?

Section 1: Principles in global internet governance

As specified above, principles are understood as the minimum requirement of appropriate conduct that must be complied with in all decision-making processes. Robert Alexy refers to principles as “optimization commands” (Alexy, 2000).

The Internet Society (ISOC) has identified “fundamental Internet principles” (ISOC, 2015, pp. 2 and 7), but with the exception of openness and multistakeholderism, these principles are not further specified. A second term applied by ISOC is “key Internet principles”, which encompasses openness and the multistakeholder model, as well as stability and integrity, and bottom-up processes (ISOC, 2015, p. 2). A third term applied by ISOC is “these principles”, among which are accuracy, availability, and transparency (ISOC, 2015, p. 2). Hence, it is not evident if the term ‘principle’ has one or several meanings and how these three categories relate to each other.

Moreover, in the context of the IANA stewardship transition, ICANN presented the following “principles that were suggested”: inclusive, transparent, global, accountable, multistakeholder, focused [in scope], pragmatic and evidence-based, open [to all voices], do no harm, and consensus-based (ICANN, 2014a). Some of these ten principles merely describe the ambition of inclusive representation (global); others are not adequately distinct (pragmatic). Hence, they cannot constitute fundamental principles.

Another initiative, NETmundial, established with the purpose of challenging the US’ dominant role over the internet in the aftermath of the surveillance practices revealed by Edward Snowden, distinguishes between internet governance principles and internet governance process principles (NETmundial, 2014, pp. 4-7). What NETmundial terms “principles” will in this article be referred to as goals, and what NETmundial refers to as “process principles” will in this article be referred to as principles.

In order to clarify principles for internet governance, there is a need to explore other sources. As the UN is the most representative intergovernmental organisation, it is interesting to analyse whether its clarifications are helpful.

Openness, identified by ISOC as a fundamental internet principle, does not appear in the outcome document of the 2015 UN High-level Meeting (UN, 2015) reviewing the implementation of the 2003 and 2005 phases of the World Summit on the Information Society (UN WSIS).

Openness can be a term used to describe adequate internet governance processes (Redeker et al., 2018, p. 307), but the core of such processes is better captured by the term transparency, as will be explained below. Openness has been defined as “open and free communication within the internet, interoperability, standard development…” (Padovani et al., 2010, p. 373), a definition that is closer to describing the essential nature of the internet, in other words a goal. Hence, I find that openness should be termed a goal rather than a principle, as further explained in Section 2 below. When openness is applied in the context of describing the internet governance process – operationalised as a minimum standard of conduct in decision-making – it can be applied as a principle.

The multistakeholder approach seeks to involve all stakeholders, and is a “form of participatory democracy that attempts to go beyond the limitations of representative democracy, while building on, and including, representative democracy” (Doria, 2013, p. 121). Five stakeholders are identified: governments, the private sector, civil society, international organisations, and technical and academic communities (UN General Assembly, 2016, para. 2); note that the latter was not listed in the UN WSIS (UN General Assembly, 2006, paras. 35-36). The UN confirms multistakeholderism as one fundamental internet principle, but does not specify its content. This is, however, done by Weber, listing nine “factors” of the multistakeholder approach, implying that this approach is seen as a meta-principle (Weber, 2013, p. 103).

A distinction can be made between a position holding that no decision on internet governance should be made except through multistakeholder bodies, and a more moderate position holding that most relevant issues should be decided by multistakeholder bodies (Hill, 2013, p. 85). While the multistakeholder approach is currently the dominant approach, it is not uncontested, as will be shown in Section 3 below. 

Which other internet principles are identified? The UN WSIS outcome documents and the subsequent review highlighted the three terms “multilateral, transparent and democratic…” (UN WSIS, 2003, para. 48; UN General Assembly, 2006, para. 29; UN General Assembly, 2016, para. 57). These three terms will be reviewed to analyse whether they qualify as principles in global internet governance, keeping in mind that a principle was defined in the introduction as a minimum standard of conduct in decision-making.

Multilateral refers to involving more than two parties, and is usually applied to cooperation between states. There are obvious tensions between being multilateral and being multistakeholder. This was clearly stated by the US National Telecommunications and Information Administration (NTIA) when the so-called “stewardship transition” of IANA to ICANN was launched in 2014: “…NTIA will not accept a proposal that replaces the NTIA role with a government-led or an inter-governmental organization solution” (NTIA, 2014). The US authorities are exercising a form of veto power to prevent any interference with ICANN’s governance model. Hence, under the present system, multilateralism cannot be termed a principle in global internet governance, even if states are represented in the ICANN structure, through the Government Advisory Committee (GAC).

Transparency relates essentially to how to facilitate participation in the decision-making process. Hence, transparency, being an overall UN human rights principle (UN Development Group, 2004), can qualify as an internet principle. In its early years, ICANN was ordered to enhance its transparency procedures (ICANN, 2002; for a critical analysis of the process establishing ICANN, see Weinberg, 2000).

Being democratic is to have procedures for installing and replacing decision-making bodies, based on free elections. The current multistakeholder model by ICANN – referring to “groups” (ICANN, 2012) and not “stakeholders” – is not adequately democratic (Gurstein, 2014). While I agree with Gurstein that ICANN is not adequately democratic, and with Malcolm (2008, p. 291) that consensual decision-making describes ICANN better than democratic decision-making, it must be asked whether an international governance system representing such a diversity can ever be adequately democratic, in line with the definition above. ICANN has rather sought the representation of all relevant stakeholders, and to improve its accountability mechanisms. Hence, multistakeholderism is arguably the most inclusive decision-making that ICANN can provide (see: DeNardis, 2013b), and it is difficult to include democracy as a principle in internet governance.

This lack of acknowledgment of democracy as a principle in global internet governance cannot, however, be seen as a lack of recognition of democratic decision-making as an essential value. Bygrave emphasises that the success of the internet is the fact that it “developed in open and democratic decisional cultures…” (Bygrave, 2009, p. 6). Mueller holds, however, that the US control over the DNS has not secured freedom of expression (Mueller, 2016), and that states through the Government Advisory Committee (GAC) of ICANN have too much influence in ICANN (Mueller, 2015; Mueller, 2010, pp. 240-251), as will be explained in Section 3 below.

Hence, only one of the principles proposed by the UN can actually be considered to be a relevant principle for decision-making as ICANN works today. The two principles we are left with from these sources are multistakeholderism and transparency. Both are explicitly linked to participation, and can be justified by the theory of reflexive law, which emphasises norm development through participatory processes, rather than through instructions by top-down regulation (De Schutter and Lenoble, 2010). Other labels are proposed, such as “hybrid intergovernmental-private administration” (Ruotolo, 2017, p. 162), characterised by active involvement and commitment by corporations and civil society organisations.  

Multistakeholderism and transparency specify requirements of an adequate decision-making process. However, accountability – having one’s conduct assessed in relation to externally set norms, with possibilities for administrative or legal sanctions in cases of non-compliance (Koppell, 2005, p. 96) – is missing from the list. The principle of accountability for companies has gained increased recognition recently, constituting one of four elements of due diligence (OECD, 2011, p. 23), as elaborated by the UN Guiding Principles (UNGP) (UN Human Rights Council, 2011).

The principle of accountability has been specified and operationalised by ICANN (2018a; ICANN, 2018b, Section 4.6(b)(i); see also ICANN, 2014a and NTIA, 2016). In accordance with its bylaws, ICANN mandated in early 2019 its third Accountability and Transparency Review Team (ATRT). While an ICANN ombudsman has been in operation since 2004 (ICANN, 2017e), the clause on reconsideration in ICANN’s bylaws (ICANN, 2018b, Section 4.2) provides the formal procedure for requesting the ICANN Board to reconsider an action or inaction. Moreover, ICANN has been subject to legal proceedings (ICANN, 2002; ICANN, 2018c). Hence, while internal accountability mechanisms have been strengthened, the US government through NTIA will not allow ICANN to act contrary to US interests, and courts in various countries do provide a form of external accountability (ICANN, 2018c). As a result of the processes relating to the IANA transition, the earlier criticism of ICANN from the global community of internet users, through the At-Large Advisory Committee (ALAC) and the considerably wider At-Large Summit (ATLAS) (ICANN, 2013, p. 53; ICANN, 2009, p. 3), has gradually been replaced with stronger concerns for the positions of states (ICANN, 2014b, p. 2; Bygrave, 2015b). The third ATLAS took place at the ICANN66 meeting in November 2019.

Another group of stakeholders, the states, do recognise that existing arrangements for the Internet “have worked effectively…” (UN General Assembly, 2016, para. 55; UN General Assembly, 2006, para. 55). Hence, it is reasonable to state that the tensions over global internet governance are not primarily related to the tasks fulfilled by ICANN and IANA. 

To sum up, the omission of accountability in the relevant paragraph from the UN WSIS review (UN General Assembly, 2016, para. 57; see also UN WSIS, 2003, para. 48; UN General Assembly, 2006, para. 29) cannot be read to imply that accountability is off the list of principles in global internet governance, in addition to multistakeholderism and transparency.

Section 2: Goals in global internet governance 

We saw above that openness can be termed a goal of global internet governance. Which other goals are there? Also in this endeavour, it is considered relevant to turn to the UN, as well as other actors with more direct roles in relevant ICANN processes. The UN has specified “an equitable distribution of resources, facilitate access for all and ensure a stable and secure functioning of the Internet, taking into account multilingualism” as important in guiding the “international management of the Internet” (UN WSIS, 2003, para. 48).

Equitable distribution, access for all, stability and security, and accommodating diversity, being wider than merely “multilingualism”, are on the face of it all relevant, but do they fulfil the criteria specified in the introduction, namely that goals relate to the essential nature of global internet governance? We will review these, starting with stability and security.

The so-called NTIA criteria include to “maintain the security, stability, and resiliency of the internet DNS [domain name system]” (NTIA, 2014; see also IANA Stewardship Transition Coordination Group, 2015, p. 7). Stable and secure functioning is a prerequisite for the internet per se, and therefore to be understood as a goal for global internet governance. 

According to the UN, “equitable distribution of resources” is an important objective in the realm of the global information society overall (UN General Assembly, 2016, para. 1). Can it be termed a goal in internet governance, understood as providing all people with internet connections of adequate speed – connectivity? While enhanced distribution of resources might be a result of enhanced connectivity for persons in remote regions and rural areas, it seems difficult to term equitable distribution an overarching goal in global internet governance.

What then about “access for all”? Access can encompass four dimensions: (i) economic access – affordability; (ii) universal access – internet access within a reasonable distance from one’s home; (iii) universal service – having internet in homes; and (iv) universal design – also termed usability, in accordance with Articles 9 and 21 of the Convention on the Rights of Persons with Disabilities (G3ict, The Global Initiative for Inclusive ICTs, 2009).

A fifth dimension of access is (v) network neutrality, implying that no-one shall be unjustifiably or arbitrarily excluded from accessing the internet (European Union, 2015; Scott et al., 2015). This was emphasised, for instance, in the revision of the French law to protect intellectual property on the internet (HADOPI), which since 2013 no longer permits the suspension of internet access for repeat infringers (for a critical analysis of the previous practice, see Jamart, 2013). Hence, it seems justified to term “access for all” an overarching goal. In essence, however, it is difficult to see any major difference between access for all and an open internet, as specified by Padovani et al. (2010, p. 373) and outlined in Section 1. This goal is better referred to as open and accessible for all.

Finally, “multilingualization” has been specified as a critical internet resource [UN WGIG, 2005, para 13(a)]. A review of ICANN’s At-Large system criticised ICANN’s reliance on English, as this “may be alienating for many” (ITEMS International, 2017, p. 85), and accommodating diversity has been emphasised in the CCWG-Accountability WS2 (ICANN, 2018a, pp. 18-20 and Annex 1). Hence, while maintaining the unity of global internet, the actual internet use must encompass diversity.

Hence, stability and security of the DNS, open and accessible for all, and unity and diversity are important for global internet governance; these can therefore be termed goals. 

In addition, there are “many cross-cutting international public policy issues that … have not been adequately addressed” (UN General Assembly, 2016, para. 56; see also Global Commission on Internet Governance (GCIG), 2016): (i) information and communication technology (ICT) for development; (ii) human rights (ICANN, 2017a), such as privacy and safety for users; and (iii) confidence and security in ICT, including fighting cybercrime. 

These “policy issues” will be reviewed here, in order to identify whether they have content that would imply that they qualify as goals in global internet governance. 

As with the brief review of distribution above, it seems reasonable to state that – notwithstanding the quality and type of infrastructure – positive socio-economic development is most likely a result of enhanced connectivity and access, and cannot be specified as a goal that is distinct from enhanced connectivity and access.

Human rights and safety issues will be further analysed in Sections 4 and 5 below, but it must be considered uncontroversial to identify the safety of internet users, protecting the right to private life, as constituting an overall goal in global internet governance.

The last policy issue identified by the UN WSIS was security, which must be understood to include international security. Wider than national security, the mutual survival and safety of people is at the core of international security. Specifically concerning international security, mandates and members for the United Nations Group of Governmental Experts on Developments in the Field of Information and Telecommunications in the Context of International Security (UN GGE) have been established five times, first in 2004 (UN General Assembly, 2003, para. 4), and most recently in 2015 (UN General Assembly, 2015, para. 5; for reports, see UN GGE, 2015; UN GGE, 2013; and UN GGE, 2010). The most recently established UN GGE failed to reach consensus during its last session in 2017 (UN Secretary-General 2017, para. 5), but a new initiative has been launched (UN General Assembly, 2019).

This implies that it is reasonable to add safety and security to the list of overarching goals of internet governance. 

In summary, four overarching goals that apply to internet governance can be identified: security and stability of the DNS; open and accessible for all; diversity and unity; and safety and security for all. The choice of the term all corresponds with the UN approach, implying that all are in principle equally entitled to enjoy the benefits of the internet, in line with a global public goods approach. Raymond (2013) finds, however, that a more precise term is nested club good, as the internet is not characterised by equal enjoyment by all. In order to analyse how these four goals are promoted – all of which are relevant for analysing ICANN’s legitimacy – more insight into the relevant actors is needed.

Section 3: Global internet governance actors

ICANN was established as a private company in 1998, and the US government assigned the governmental responsibility to the NTIA (US Government, 1998; Bygrave, 2015a, pp. 59-77; Cogburn, 2016, pp. 33-37). The purpose of the establishment of ICANN in 1998 was a “privatization” of the DNS (US Government, 1998; see also Padovani and Santaniello, 2018, p. 295, adding “commercialization”). Privatisation seems, on the face of it, to be contrary to the multistakeholder approach, as the latter seeks to allow different stakeholders to exert a certain influence. Nevertheless, the term privatisation process was reiterated by US authorities in the IANA transition process (NTIA, 2016). Despite this insensitive choice of terms, ICANN has recently adopted mechanisms that have strengthened the multistakeholder approach.

It is relevant to note that in 1998 and all subsequent years, the ITU was sidelined. While a full review of all relevant intergovernmental processes relating to the internet falls outside the scope of this article, I will briefly analyse two: the EU initiative before the second session of the UN WSIS in 2005, and the attempts to enhance the role of the ITU in global internet governance before and during the 2012 World Conference on International Telecommunications (WCIT).

A wide-reaching proposal for changing global internet governance was made by the EU in the process leading up to the UN WSIS in 2005, proposing to establish a Forum to provide for “an international government involvement at the level of principles over … naming, numbering and addressing-related matters…” (European Union, 2005, p. 1). It was emphasised that this Forum should focus on “principle issues …, excluding any involvement in the day-to-day operations…” (European Union, 2005, p. 1). The EU’s motivations contained few references to the multistakeholder approach; the term multistakeholder was introduced in the report of the UN WGIG, which was established in 2003 and dissolved after submitting its report in 2005 (UN WGIG, 2005). In the report from the 2005 UN WSIS, the term multistakeholder is applied more than 30 times (UN General Assembly, 2006).

Moving to the 2012 WCIT, a proposal from the Russian Federation (Russia) read:  

Member States shall have equal rights to manage the Internet, including in regard to the allotment, assignment and reclamation of Internet numbering, naming, addressing and identification resources and to support for the operation and development of basic Internet infrastructure (ITU, 2012a, p. 99).

At the WCIT, certain coalitions of states attempted to include internet governance in the ITU’s International Telecommunication Regulations (ITRs; which entered into force in 2015). One alliance, led by Russia and China, sought intergovernmental control of the internet based on their concept of “information security” (Jamart, 2013, p. 60); another alliance, with India, Brazil and South Africa, promoted a UN-embedded “Council on Internet-Related Policies” (Chenou and Radu, 2013, p. 11; see also Cogburn, 2016, pp. 40-42). This proposed Council is basically in line with the Global Internet Council (GIC) proposed by the UN WGIG (UN WGIG, 2005, paras. 52-56), which specified that “ICANN will be accountable to GIC” (UN WGIG, 2005, para. 54). Interestingly, there are no references to the internet in the ITRs, only in Resolution 3 of the WCIT, on “the development of broadband and the multistakeholder model of the Internet…” (ITU, 2012b, p. 20).

Hence, the EU 2005 proposals and the WCIT initiatives in 2012 illustrate the dissatisfaction with the US-led, ICANN-administered global internet governance.

The rest of this section will focus on ICANN, where the US control over the DNS implied that the global “legitimacy of ICANN was fragile from the start…” (Chenou and Radu, 2013, p. 6), and Mueller reminds readers that the NTIA’s role in relation to IANA was meant to last for two years, but lasted 18 years (Mueller, 2015, p. 3). As seen above, legitimacy is operationalised as encompassing adequate and widely accepted procedures for participation and accountability.

A full review of recent ICANN bodies and activities is not possible, but it is relevant to identify the strategies of ICANN’s so-called supporting organisations (SO; having two representatives in the ICANN Board; see ICANN, 2018b, art. 9-11) and some of its advisory committees (AC). The three SOs and two of the ACs, ALAC and GAC, were provided a mechanism in 2017, termed Empowered Community (Section 6 of ICANN’s bylaws; ICANN, 2018b), with a mandate that extends to “Recall the entire Board” [ICANN, 2018b, Section 6.2(a)(ii)].

ICANN’s external outreach is, however, still inadequate. A recommendation on better “cooperative outreach” (ITEMS International, 2017, p. 43; see also ICANN, 2014c, p. 3) was not supported by the Generic Names Supporting Organization (GNSO) and two of its constituencies: the Intellectual Property Constituency (IPC) and the Business Constituency (BC) (ITEMS International, 2017, p. 43).

Regarding the ACs, interesting differences appear. ALAC is represented in the ICANN Board and can influence the composition of the Board by nominating one-third of the members of the Nomination Committee.

GAC is represented on the Board by a liaison without voting rights, but has the possibility to influence ICANN priorities in more subtle ways. Under the revised ICANN bylaws, the ICANN Board must “state the reason why it decided not to follow…” the GAC’s advice, obliging the Board and the GAC to enter into an efficient process “to find a mutually acceptable solution” [ICANN, 2018b, Section 12(a)(x)]. This implies increased power for governments within ICANN, which comes in addition to the influence the GAC has as part of the Empowered Community.

Section 4: Other global internet governance reform proposals

Civil society at the 2005 UN WSIS made five demands for reform of global internet governance, the first concerning ICANN’s accountability to its global stakeholders, which was analysed above. The other four are: (i) creating an Internet Governance Forum (IGF); (ii) negotiating a convention on internet governance and universal human rights; (iii) ensuring that internet access is universal and affordable; and (iv) promoting capacity building in developing countries and increasing their participation (Association for Progressive Communications, 2005). These four will be analysed below, as they are all relevant to answering the research question of which principles and goals ICANN’s internet governance model accommodates, and hence whether ICANN’s legitimacy is enhanced.

The IGF has met annually since 2006, with a mandate that encompasses “capacity building...” (UN General Assembly, 2006, subpara. 72(h)), as well as “enhanced cooperation…” (UN General Assembly, 2006, para. 69). This mandate relates to dialogue and deliberations concerning internet policy (Raymond and DeNardis, 2015, p. 587; for a broader analysis of the IGF, see Malcolm, 2008). The UN Secretary-General António Guterres established the High-level Panel on Digital Cooperation (UN Secretary-General, 2018a) and, later in the same year, attended the IGF – the first UN Secretary-General to do so (UN, 2018). Earlier efforts to strengthen the IGF have not been successful (Hill, 2018; UN General Assembly, 2016, para. 65; UN General Assembly, 2013). The speech by France’s President Macron to the 2018 IGF, based on his assertion that “the Internet we take for granted is under threat” (Macron, 2018), was further specified in the context of internet governance, outlining two opposites: “complete self-management, without governance, and […] a compartmented Internet, entirely monitored by strong and authoritarian states” (Macron, 2018). While Macron preferred the first model over the second, he also specified that this model implies that those deciding are not democratically elected; hence “I don’t want to hand over all my decisions to them, and that is not my contract with France’s citizens” (Macron, 2018). While these concerns have been commonplace in global internet governance discussions for a number of years, the fact that they are expressed by the head of a relatively strong state does not necessarily make it likely that they will translate into new global internet governance policies.

On the internet and human rights, there has been no specific convention, but several documents exist (Pettrachin, 2018, reviewing 58 documents; Redeker et al., 2018, reviewing 32 documents). Existing human rights treaties include provisions that are obviously relevant for internet activities; space does not allow for a deeper discussion here, but human rights processes within ICANN will be analysed in greater detail below.

On accessibility and affordability, there is relatively little progress to report. The UN Global Alliance for Information and Communication Technologies and Development, launched in 2006, soon imploded. A Global Digital Solidarity Fund was launched in 2005 (ITU, 2005), but dissolved in 2009. In 2007, the Leading Group on Innovative Financing for Development (formerly known as the Leading Group on Solidarity Levies to Fund Development) failed to introduce a digital solidarity initiative, but has succeeded with levies on air tickets. One domain, .coop, has however been introduced with an altruistic purpose, namely to “support the global movement by helping Cooperatives…” (DotCooperation LLC, n.d.). Nothing prevents other domain name owners from establishing similar purposes.

On participation by developing countries, the most comprehensive response was the 2013 Montevideo Statement on the Future of Internet Cooperation, adopted by the CEO and President of ICANN together with other heads of organisations in charge of internet technical infrastructure, calling for “accelerating the globalisation of ICANN and IANA functions, towards an environment in which all stakeholders, including all governments, participate on an equal footing” (Chehadé et al., 2013, bullet 3; see also UN General Assembly, 2006, paragraph 69; for critical comments on infrastructure-mediated governance, see Arpagian, 2016 and DeNardis, 2012). The fact that these diverse actors agreed that all governments are to participate on an equal footing in global internet governance is remarkable, as evidence demonstrates that this is not the case in practice, illustrated by the quote from Obama given at the start of the article.

The broadest alliance challenging the current global internet governance model was NETmundial, referred to in the introduction. It was initiated by Brazil as a response to the revelations that the USA had tapped communication devices, including those used by the then President of Brazil, Dilma Rousseff. ICANN co-hosted the 2014 NETmundial meeting together with Brazil (Amoretti and Santaniello, 2016, pp. 161-163), but the subsequent NETmundial Initiative, launched by Brazil and ICANN together with the World Economic Forum, lasted only until 2016. While NETmundial’s terms distributed, decentralised and multistakeholder (NETmundial, 2014, p. 6) indicate an explicit distancing from a state-driven or top-down approach, many of the states active in NETmundial were among those who pushed for greater governmental control at the 2012 WCIT (Brotman, 2015, p. 3), as seen in Section 3 above.

Multistakeholderism is a governance model that seeks to limit the power of states, termed sovereignists (Amoretti and Santaniello, 2016, p. 167; for the term technological autonomy, see Arpagian, 2016). The sovereignists’ attempts at limiting the US influence over internet governance have been perceived as threats to the present functioning of the internet (van Schewick, 2010). A survey conducted by the Global Commission on Internet Governance showed support for multistakeholderism, while the option of the US alone running the global internet received the lowest score (Global Commission on Internet Governance, 2016, p. 86). This corresponds to the position of some influential Western researchers on global internet governance (Bygrave, 2009, p. 7; Hubbard and Bygrave, 2009, p. 235; Mueller, 2015, pp. 240-251; DeNardis, 2014, pp. 227-230). In addition to the sovereignists, the current main challenge to ICANN’s present working comes from the constitutionalists (Amoretti and Santaniello, 2016, p. 167), and ICANN’s response to the calls for human rights in its operative work is the issue to which we now turn.

Section 5: ICANN and human rights

The most recent efforts within ICANN to accommodate human rights concerns warrant a more in-depth and updated analysis (Glen, 2018). ICANN’s Articles of Incorporation specify that it is “carrying out its activities in conformity with relevant principles of international law and international conventions and applicable local law and through open and transparent processes…” (ICANN, 2016b, Section 2.III). By this formulation, human rights are implicitly encompassed. 

Human rights were covered in WS1, but were brought into WS2 even more explicitly, recommending that ICANN consider “which Human Rights conventions or other instruments, if any, should be used by ICANN in interpreting and implementing the Human Rights Bylaw”, shaping relevant ICANN “policies and frameworks” (ICANN, 2016a, Annex 12, p. 7). The revision of ICANN’s bylaws implies that ICANN acknowledges that it has human rights obligations “within the scope of its Mission…” and “applicable law” [ICANN, 2018b, Section 1.2(b)(viii)], but requires that the “framework of interpretation for human rights (“FOI-HR”) is approved … by the CCWG-Accountability as a consensus recommendation” [ICANN, 2018b, Section 27.2(a)]. A consensus was eventually reached (ICANN, 2018a, p. 21 and Annex 3, pp. 4-7), and by June 2018 the whole WS2 process was finalised (ICANN, 2018e).

During the process, four initiatives deserve attention: (i) ALAC proposed a Human Rights Impact Assessment and a Corporate Social Responsibility (CSR) policy for ICANN (ICANN, 2017a, para. 4); (ii) to strengthen human rights in ICANN, a Sub-Group on FOI-HR was established within the CCWG-Accountability WS2; (iii) a Cross-Community Working Party on ICANN’s Corporate and Social Responsibility to Respect Human Rights (CCWP-HR) was established at ICANN52 in 2015 (Karanicolas and Kurre, 2018; see also Article 19, 2015); and (iv) a website devoted to human rights updates has been established and made accessible via ICANN’s home page (ICANN, 2018f). I will explore the first two initiatives in greater detail.

The responses to the WS2’s human rights proposals were mixed. Government responses to ALAC’s draft FOI-HR – from Brazil, Switzerland and the UK – welcomed “widening the scope of applicability of human rights instruments within ICANN…” (ICANN, 2017b, p. 2). ICANN’s ACs and SOs – including ALAC itself – were more restrictive, however, specifying that human rights implementation should be limited to ICANN’s “applicable law” and “technical remit” (ICANN, 2017b, p. 2). Two stakeholder groups under GNSO understood human rights as constituting a “crippling load” for ICANN (ICANN, 2017b, p. 2). 

Despite the fact that there was no consensus on including UNGP in the FOI-HR, the Sub-Group on FOI-HR noted that certain aspects of the UNGP could guide ICANN (ICANN, 2018a, Annex 3, p. 8).

The Transparency Sub-Group of WS2 also emphasised the importance of human rights (ICANN, 2018a, pp. 33-35 and Annex 8.1). Moreover, human rights and privacy have been a concern, particularly for ALAC, in light of the EU’s General Data Protection Regulation (GDPR) (ICANN, 2017c, p. 1), resulting in a revised ICANN Procedure applying specifically to the WHOIS internet protocol (ICANN, 2017d). WHOIS is a directory service including more than 187 million domain names (Marby, 2018). Even if ALAC members were dissatisfied with ICANN’s GDPR response, no further action is to be taken by the ICANN Board (ICANN, 2018g, item 28). The ICANN Procedure specifies that any registrar or registry that is subject to a WHOIS proceeding should cooperate with national governments to ensure that it “operates in conformity with domestic laws and regulations, and international law and applicable international conventions” (ICANN, 2017e, Section 1.4), a wording similar to ICANN’s Articles of Incorporation (ICANN, 2016b, Section 2.III), presented above.

Hence, the revision of the bylaws has not led to any explicit recognition of the two human rights that are of primary importance for ICANN: freedom of expression and the right to privacy.

Conclusion

The research question asked whether recent ICANN processes have enhanced ICANN’s legitimacy, operationalised as adequate and widely accepted procedures for participation and accountability, and governance (Raymond and DeNardis, 2015; Weinberg, 2000), understood as the application of shared principles, norms, rules, decision-making procedures, as well as programmes (UN General Assembly, 2006, para. 34).

ICANN has made many efforts to improve its legitimacy and decision-making system in recent years. The question is whether this is adequate to actually enhance its overall legitimacy.

The changes within ICANN do not comply with the 2013 Montevideo Statement, made by central players in global internet governance, calling for participation by all governments on an equal footing and for a “globalization” of internet governance (Chehadé et al., 2013, bullet 3). Neither have these ICANN reforms been substantive enough to satisfy politicians (Macron, 2018) or courts (ICANN, 2018c). Hence, the overall legitimacy and concrete actions of ICANN will continue to be challenged, but the overall framework for global internet governance seems difficult to amend.

Nevertheless, because of the continued pressure on ICANN, ICANN will continue to reform itself. This is in line with the advocacy coalition framework (ACF; Sabatier and Jenkins-Smith, 1993), which provides an advanced understanding of how subsystems, such as researchers, NGOs and intergovernmental relations, contribute to policy shifts in complex organisations. The article has demonstrated that ICANN is a most relevant case for understanding how the various subsystems operate, and the five SOs and ACs have increased power within the ICANN structure through the Empowered Community. Moreover, the GAC has enhanced its influence even more, by requiring the ICANN Board to state why GAC advice was not followed and to enter into a dialogue in order to find a mutually acceptable solution. While this has led to a strengthening of states within ICANN, it is justified to state that there has only been a partial recognition of human rights by ICANN. Hence, both sovereignists and constitutionalists will continue to challenge ICANN.

Koppell’s (2005) three “manifestations” of a multiple accountabilities disorder – inadequate responsiveness, responsibility and controllability in different combinations – have, however, been mitigated in ICANN’s governance structure after the IANA transition. Koppell’s analysis was undertaken in 2004, after ICANN had been in operation for six years, still facing its infancy challenges and operating under the close scrutiny of the US NTIA. In addition to a better mechanism of accountability, ICANN has over the last years strengthened its transparency and allowed stakeholders more influence. ICANN’s multistakeholderism is, however, embedded in power asymmetries, and without mechanisms seeking to reduce these asymmetries, ICANN might become an “instrument of domination by the powerful” (Malcolm, 2016, p. 5).

Moreover, four overarching goals applying to internet governance have been identified: (i) security and stability of the DNS; (ii) open and accessible for all; (iii) diversity and unity; and (iv) safety and security for all. These depend upon a predictable and well-functioning internet governance (Global Commission on Internet Governance, 2016, p. 86).  

The successful IANA stewardship transition process does not, however, make a possible UN framework convention on internet governance redundant, as such a process can bring important clarifications, while at the same time facilitating an internet that – in the words of Macron (2018) – is free, open and safe.

References

Alexy, R. (2000). On the Structure of Legal Principles. Ratio Juris, 13(3), 143–152. https://doi.org/10.1111/1467-9337.00157

Amoretti, F., & Santaniello, M. (2016). Between Reason of State and Reason of Market: The developments of internet governance in historical perspective. Soft Power, 3(1), 147–167. https://doi.org/10.17450/160109

Appelman, D. L. (2016). Internet governance and human rights. ICANN’s transition away from the United States. The Clarion, a journal of the American Bar Association’s International Human Rights Committee, 1(1).

Arpagian, N. (2016). The Delegation of Censorship to the Private Sector. In F. Musiani, D.L. Cogburn, L. DeNardis, & N.S. Levinson (Eds.), The Turn to Infrastructure in Internet Governance (pp. 155–165). Basingstoke: Palgrave Macmillan. https://doi.org/10.1057/9781137483591_8

Article 19. (2015, June). Issue Report for the Cross Community Working Party on ICANN’s Corporate and Social Responsibility to Respect Human Rights: Practical Recommendations for ICANN [Report]. London: Article 19. Retrieved from https://www.article19.org/data/files/medialibrary/38003/ICANN_report_A5-for-web.pdf

Association for Progressive Communications (2005, November). APC’s Recommendations to the WSIS on Internet Governance. Retrieved from https://www.apc.org/en/pubs/apcs-recommendations-wsis-internet-governance-2005

Brotman, S.N. (2015). Multistakeholder Internet governance: A pathway completed, the road ahead. Washington, DC: The Brookings Institution. Retrieved from https://www.brookings.edu/~/media/research/files/papers/2015/07/20-multistakeholder-internet-governance-brotman/multistakeholder.pdf

Bygrave, L.B. (2015a). Internet Governance by Contract. Oxford: Oxford University Press. http://doi.org/10.1093/acprof:oso/9780199687343.001.0001

Bygrave, L.B. (2015b). Comment on second draft CCWG proposal. Retrieved from http://forum.icann.org/lists/comments-ccwg-accountability-03aug15/msg00087.html

Bygrave, L.B. (2009). Introduction. In L.B. Bygrave, & J. Bing (Eds.), Internet Governance: Infrastructure and Institutions (pp. 1–7). Oxford: Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199561131.003.0001

Chehadé, F., Akplogan, A.A., Curran, J., Wilson, P., Housley, R., Arkko, J., St. Amour, L., Echeberría, R., Pawlik, A., & Jaffe, J. (2013). Montevideo Statement on the Future of Internet Cooperation. Retrieved from https://www.icann.org/news/announcement-2013-10-07-en

Chenou, J.-M. & Radu, R. (2013). Global Internet Policy: A Fifteen-Year Long Debate. In R. Radu, J.-M. Chenou, & R. H. Weber (Eds.), Evolution of Global Internet Governance. Principles and Policies in the Making (pp. 3–22). Heidelberg: Springer. https://doi.org/10.1007/978-3-642-45299-4_1

Cogburn, D.L. (2016). The Multiple Logics of Post-Snowden Restructuring of Internet Governance. In F. Musiani, D.L. Cogburn, L. DeNardis, & N.S. Levinson (Eds.), The Turn to Infrastructure in Internet Governance (pp. 25–45). Basingstoke: Palgrave Macmillan. https://doi.org/10.1057/9781137483591_2

Council of Europe (2019). Committee of Ministers: selection and most recent Adopted Texts. Retrieved from https://www.coe.int/en/web/freedom-expression/committee-of-ministers-adopted-texts

Council of Europe (2015). Internet Governance Strategy 2016-2019. Retrieved from https://www.coe.int/en/web/freedom-expression/igstrategy

Council of Europe (2011). Internet Governance – Council of Europe Strategy 2012-2015, CM(2011)175 final

DeNardis, L. (2014). The Global War for Internet Governance. New Haven; London: Yale University Press.

DeNardis, L. (2013a). The Emerging Field of Internet Governance. In W.H. Dutton (Ed.), The Oxford Handbook of Internet Studies (pp. 555–575). Oxford: Oxford University Press.

DeNardis, L. (2013b) Multi-Stakeholderism: The Internet Governance Challenge to Democracy. Harvard International Review, 34(4), 40–44.

DeNardis, L. (2012). Hidden Levers of Internet Control. An infrastructure-based theory of Internet governance. Information, Communication & Society, 15(5), 720–738. https://doi.org/10.1080/1369118X.2012.659199

De Schutter, O., & Denoble, J. (Eds.) (2010). Reflexive Governance: Redefining the Public Interest in a Pluralistic World. Oxford; Portland: Hart Publishing.

Doria, A. (2013). Use and Abuse of Multitakeholderism in the Internet. In R. Radu, J.-M. Chenou, & R. H. Weber (Eds.), Evolution of Global Internet Governance. Principles and Policies in the Making (pp. 115–138). Heidelberg: Springer. https://doi.org/10.1007/978-3-642-45299-4_7

DotCooperation LLC (n.d.). About .coop. Retrieved from https://www.coop/about_dotcoop

European Union. (2015). Regulation (EU) 2015/2120 of the European Parliament and of the Council of 25 November 2015 laying down measures concerning open internet access.

European Union. (2005). Proposal for addition to Chair’s paper Sub-Com A internet Governance on Paragraph 5 “Follow-up and Possible Arrangements”. Retrieved from https://www.itu.int/net/wsis/docs2/pc3/working/dt21.pdf

Glen, C.M. (2018). Controlling Cyberspace: The Politics of Internet Governance and Regulation. Santa Barbara: ABC-CLIO LLC.

Global Commission on Internet Governance (2016). One Internet. Retrieved from https://www.ourinternet.org/report

Gurstein, M. (2014, October 19). Democracy OR Multi-stakeholderism: Competing Models of Governance [Blog post]. Gurstein’s Community Informatics. https://gurstein.wordpress.com/2014/10/19/democracy-or-multi-stakeholderism-competing-models-of-governance

Hill, R. (2018, February 5). Inside Views: Analysis of The Working Group on Enhanced Cooperation on  Public Policy Issues Pertaining to the Internet. Intellectual Property Watch. Retrieved from http://www.ip-watch.org/2018/02/05/analysis-working-group-enhanced-cooperation-public-policy-issues-pertaining-internet/

Hill, R. (2013). Internet Governance: The Last Gasp of Colonialism, or Imperialism by Other Means? In R. Radu, J.-M. Chenou, & R. H. Weber (Eds.), Evolution of Global Internet Governance. Principles and Policies in the Making (pp. 79-94). Heidelberg: Springer. https://doi.org/10.1007/978-3-642-45299-4_5

Hubbard, A., & Bygrave, L.B. (2009). Internet governance goes global. In L.B. Bygrave, & J. Bing (Eds.), Internet Governance: Infrastructure and Institutions (pp. 213-235). Oxford: Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199561131.003.0007

IANA Stewardship Transition Coordination Group (2015). Status of Proposal. Retrieved from https://www.ianacg.org/icg-files/documents/IANA-transition-proposal-v9.pdf

ICANN (2018a, June). CCWG-Accountability WS2 – Final Report [Report]. Los Angeles: Internet Corporation For Assigned Names and Numbers. Retrieved from https://community.icann.org/display/WEIA/WS2+-+Enhancing+ICANN+Accountability+Home?preview=/59640761/88575033/FULL%20WS2%20REPORT%20WITH%20ANNEXES.pdf

ICANN (2018b). ICANN Bylaws, as amended 18 June 2018.  Retrieved from https://www.icann.org/resources/pages/governance/bylaws-en

ICANN (2018c). ICANN v. EPAG Domainservices, GmbH. https://www.icann.org/resources/pages/litigation-icann-v-epag-2018-05-25-en

ICANN (2018d). WS2 - Enhancing ICANN Accountability Home. https://community.icann.org/display/WEIA/WS2+-+Enhancing+ICANN+Accountability+Home

ICANN (2018e). CCWG-Accountability WorkStream2 Activity Dashboard. https://community.icann.org/display/WEIA/WS2+Dashboard?preview=/63151029/90767480/WS2%20Dashboard%20JUN-%2020Jul18.pdf

ICANN (2018f). ICANN Human Rights. https://icannhumanrights.net/documents

ICANN (2018g). ICANN Board Advice Status Report. https://www.icann.org/resources/files/1214687-2018-03-31-en

ICANN (2017a). ALAC Statement on the Draft Framework of Interpretation for Human Rights, AL-ALAC-ST-0617-01-01-EN. Retrieved from https://atlarge.icann.org/en/advice_statements/9985

ICANN (2017b). Summary Report of Public Comment Proceeding [Report]. Los Angeles: Internet Corporation For Assigned Names and Numbers. Retrieved from https://www.icann.org/en/system/files/files/report-comments-foi-hr-03aug17-en.pdf

ICANN (2017c). Statement on the Revised ICANN Procedure for Handling WHOIS Conflicts with Privacy Law: Process and Next Steps, AL-ALAC-ST-0717-01-01-EN. Retrieved from https://atlarge.icann.org/advice_statements/9983

ICANN (2017d). Revised ICANN Procedure for Handling WHOIS Conflicts with Privacy Law. https://www.icann.org/resources/pages/whois-privacy-conflicts-procedure-2008-01-17-en.

ICANN (2017e). About ICANN’s Ombudsman. https://www.icann.org/resources/pages/about-2012-02-25-en.

ICANN (2016a). CCWG-Accountability Supplemental Final Proposal on Work Stream 1 Recommendations. Retrieved from https://www.icann.org/en/system/files/files/ccwg-accountability-supp-proposal-work-stream-1-recs-23feb16-en.pdf

ICANN (2016b). Amended and Restated Articles of Incorporation of Internet Corporation for Assigned Names and Numbers. https://www.icann.org/resources/pages/governance/articles-en.

ICANN (2015). Ratified: ALAC Statement on the Cross Community Working Group on Enhancing ICANN Accountability 2nd Draft Report (Work Stream 1) [Report]. Los Angeles: Internet Corporation For Assigned Names and Numbers. Retrieved from http://forum.icann.org/lists/comments-ccwg-accountability-03aug15/msg00096.html.

ICANN (2014a). Call for Public Input: Draft Proposal, Based on Initial Community Feedback, of the Principles and Mechanisms and the Process to Develop a Proposal to Transition NTIA’s Stewardship of the IANA Functions. https://www.icann.org/resources/pages/draft-proposal-2014-04-08-en.

ICANN (2014b). ALAC Statement on the Proposed Bylaws Changes Regarding Consideration of GAC Advice, AL-ALAC-ST-0914-01-00-EN.

ICANN (2014c). The 2nd At-Large Summit (ATLAS II) Final Declaration, AL-ATLAS-02-DCL-01-01-EN

ICANN (2013, December). Accountability and Transparency Review Team 2. Report and Recommendations [Report]. Los Angeles: Internet Corporation For Assigned Names and Numbers. Retrieved from https://www.icann.org/en/system/files/files/final-recommendations-31dec13-en.pdf

ICANN (2012). Groups. https://www.icann.org/resources/pages/groups-2012-02-06-en

ICANN (2009). Final Declaration, ATLAS I, AL.SUM/GS.02/DOC/02.

ICANN (2002). Advisory: Court Ruling in Auerbach v. ICANN Lawsuit. https://www.icann.org/news/advisory-2002-07-29-en

ITEMS International (2017). Final Report [Review of ICANN’s At-Large Community] [Report]. Paris: ITEMS International.

ITU (2012a). Proposals received from ITU Member States for the work of the Conference, Document DT/1-E. Retrieved from www.soumu.go.jp/main_content/000188223.pdf

ITU (2012b). Final Acts, World Conference on International Telecommunications (WCIT-12). Geneva: International Telecommunication Union. 

ITU (2005). Global Digital Solidarity Fund inaugurated. https://www.itu.int/itunews/manager/display.asp?lang=en&year=2005&issue=03&ipage=global_digital&ext=html

ISOC (2015). Perspectives on the IANA Stewardship Transition Principles. https://www.internetsociety.org/resources/doc/2015/perspectives-on-the-iana-stewardship-transition-principles

Jamart, A.-C. (2013). Internet Freedom and the Constitutionalization of Internet Governance. In R. Radu, J.-M. Chenou, & R.H. Weber (Eds.), Evolution of Global Internet Governance. Principles and Policies in the Making (pp. 57–76). Heidelberg: Springer. https://doi.org/10.1007/978-3-642-45299-4_4

Karanicolas, M., & Kurre, C. (2018, March 14). Cross-Community Working Party on ICANN and Human Rights. ICANN61. https://static.ptbl.co/static/attachments/169578/1521030196.pdf?1521030196

Koppell, J.G. (2005). Pathologies of Accountability: ICANN and the Challenge of “Multiple Accountabilities Disorder.” Public Administration Review, 65(1), 94–108. https://doi.org/10.1111/j.1540-6210.2005.00434.x

Levinson, N.S., & Marzouki, M. (2016). International Organizations and Global Internet Governance: Interorganizational Architecture. In F. Musiani, D.L. Cogburn, L. DeNardis, & N.S. Levinson (Eds.), The Turn to Infrastructure in Internet Governance (pp. 47–71). Basingstoke: Palgrave Macmillan. https://doi.org/10.1057/9781137483591_3

Macron, E. (2018). IGF 2018 Speech by French President Emmanuel Macron. https://www.intgovforum.org/multilingual/content/igf-2018-speech-by-french-president-emmanuel-macron.

Malcolm, J. (2015). Criteria of meaningful stakeholder inclusion in internet governance. Internet Policy Review, 4(4). https://doi.org/10.14763/2015.4.391

Malcolm, J. (2008). Multi-stakeholder Governance and the Internet Governance Forum. Wembley: Terminus Press.

Marby, G. (2018). RE: Request for Guidance: General Data Protection Regulation (GDPR) Impact on the Domain Name System and WHOIS. https://www.icann.org/en/system/files/correspondence/marby-to-janu-26mar18-en.pdf  

Mueller, M. (2016, May 25). The Myth of US Government “Protection” of the Open Internet [Blog post]. Internet Governance Project, Georgia Tech School of Public Policy. https://www.internetgovernance.org/2016/05/25/the-myth-of-us-government-protection-of-the-open-internet/

Mueller, M. (2015). The IANA Transition and the Role of Governments in Internet Governance. IP Justice Journal. Retrieved from http://www.ipjustice.org/wp-content/uploads/2015/09/IPJustice_Journal_Mueller_IANA_Transition.pdf

Mueller M. (2010). Networks and States: The Global Politics of Internet Governance. Cambridge, MA: The MIT Press.

Mueller, M., Mathiason, J., & Klein, H. (2007). The Internet and Global Governance: Principles and Norms for a New Regime. Global Governance, 13(2), 237–254. Retrieved from https://www.jstor.org/stable/27800656

NETmundial. (2014). NETmundial Multistakeholder Statement. http://netmundial.br/wp-content/uploads/2014/04/NETmundial-Multistakeholder-Document.pdf

NTIA. (2016). Finds IANA Stewardship Transition Proposal Meets Criteria to Complete Privatization. https://www.ntia.doc.gov/press-release/2016/iana-stewardship-transition-proposal-meets-criteria-complete-privatization

NTIA. (2014). NTIA Announces Intent to Transition Key Internet Domain Name Functions [Press release]. Retrieved from https://www.ntia.doc.gov/press-release/2014/ntia-announces-intent-transition-key-internet-domain-name-functions

OECD. (2011). OECD Guidelines for Multinational Enterprises. Paris: Organisation for Economic Co-operation and Development.

Padovani, C., & Santaniello, M. (2018). Digital constitutionalism: Fundamental rights and power limitation in the Internet eco-system. International Communication Gazette, 80(4), 295–301. https://doi.org/10.1177/1748048518757114

Padovani, C., Musiani, F., & Pavan, E. (2010). Investigating Evolving Discourses on Human Rights in the Digital Age. Emerging Norms and Policy Challenges. International Communication Gazette, 72(4-5), 359–378. https://doi.org/10.1177/1748048510362618

Pettrachin, A. (2018). Towards a universal declaration on internet rights and freedoms? International Communication Gazette, 80(4), 337–353. https://doi.org/10.1177/1748048518757139

Raustiala, K. (2017). An Internet Whole and Free. Why Washington Was Right to Give Up Control. Foreign Affairs, 96(2), 140-147.

Raymond, M. (2013) Puncturing the Myth of the Internet as a Commons. Georgetown Journal of International Affairs, 6(1), 53–64. Retrieved from https://www.jstor.org/stable/43134322

Raymond, M., & DeNardis, L. (2015). Multistakeholderism: anatomy of an inchoate global institution. International Theory, 7(3), 572–616. https://doi.org/10.1017/s1752971915000081

Redeker, D., Gill, L., & Glasser, U. (2018). Towards digital constitutionalism? Mapping attempts to craft an Internet Bill of Rights. International Communication Gazette, 80(4), 302–319. https://doi.org/10.1177/1748048518757121

Ruotolo, G.M. (2017). Fragments of fragments: The domain name system regulation: global law or informalization of the international legal order? Computer Law & Security Review, 33(2), 159–170. https://doi.org/10.1016/j.clsr.2016.11.007

Sabatier, P., & Jenkins-Smith, H.C. (Eds.) (1993). Policy change and learning: An advocacy coalition approach. Boulder, CO: Westview Press.

Scott, B., Heumann, S., & Kleinhans, J.-P. (2015). Landmark EU and US Net Neutrality Decisions: How Might Pending Decisions Impact Internet Fragmentation? [Paper No. 18]. Waterloo, Canada; London: Centre for International Governance Innovation & Chatham House. Retrieved from https://www.cigionline.org/publications/landmark-eu-and-us-net-neutrality-decisions-how-might-pending-decisions-impact

G3ict, The Global Initiative for Inclusive ICTs (2009). ICT Accessibility. Self-assessment framework. Fill-in Questionnaire. Retrieved from www.ohchr.org/Documents/HRBodies/CRPD/DGD/2010/G3ictAnnexI.doc

The Verge (2015, February 17). Obama accuses EU of attacking American tech companies because it ‘can’t compete’. The Verge. Retrieved from https://www.theverge.com/2015/2/17/8050691/obama-our-companies-created-the-internet

UN (2018). The High-level Panel on Digital Cooperation. https://digitalcooperation.org

UN Development Group (2004). The Human Rights Based Approach. Statement of Common Understanding. https://www.unicef.org/sowc04/files/AnnexB.pdf

UN General Assembly (2019). Advancing responsible State behaviour in cyberspace in the context of international security, A/RES/73/266.

UN General Assembly (2016). Outcome document of the high-level meeting of the General Assembly on the overall review of the implementation of the outcomes of the World Summit on the Information Society, A/RES/70/125.

UN General Assembly (2015). Developments in the field of information and telecommunications in the context of international security, A/RES/70/237.

UN General Assembly (2006). Report of the World Summit on the Information Society, Annex: Tunis Commitment, A/60/687.

UN General Assembly (2003). Developments in the field of information and telecommunications in the context of international security, A/RES/58/32.

UN GGE (2015). Group of Governmental Experts on Developments in the field of information and telecommunications in the context of international security, A/70/174.

UN GGE (2013). Group of Governmental Experts on Developments in the field of information and telecommunications in the context of international security, A/68/98.

UN GGE (2010). Group of Governmental Experts on Developments in the field of information and telecommunications in the context of international security, A/65/201.

UN Human Rights Council (2011). Report of the Special Representative on the issue of human rights and transnational corporations and other business enterprises, A/HRC/17/31, Annex [endorsed by A/HRC/RES/17/4].

UN Secretary-General (2018). Address to the Internet Governance Forum. https://www.un.org/sg/en/content/sg/speeches/2018-11-12/address-internet-governance-forum

UN Secretary-General (2017). Group of Governmental Experts on Developments in the field of information and telecommunications in the context of international security, A/72/327.

UN WGIG (2005, June). Report of the Working Group on Internet Governance [Report]. Bogis-Bossey: United Nations Working Group on Internet Governance. Retrieved from www.wgig.org/docs/WGIGREPORT.pdf

UN WSIS (2003). Declaration of Principles, A/C.2/59/3.

US Government (1998, June 5). Statement of Policy, Management of Internet Names and Addresses, 63 Fed. Reg. 31741 (as amended). Retrieved from https://www.ntia.doc.gov/federal-register-notice/1998/statement-policy-management-internet-names-and-addresses

van Schewick, B. (2010). Internet Architecture and Innovation. Cambridge, MA: The MIT Press.

Weber, R.H. (2013). Visions of Political Power: Treaty Making and Multistakeholder Understanding. In R. Radu, J.-M. Chenou & R. H. Weber (Eds.), Evolution of Global Internet Governance. Principles and Policies in the Making (pp. 95–113). Heidelberg: Springer. https://doi.org/10.1007/978-3-642-45299-4_6

Weber, R. H. (2009). Accountability in Internet Governance. International Journal of Communications Law & Policy, (13), 152–167.

Weinberg, J. (2012). Non-State Actors and Global Informal Governance - The Case of ICANN. In T. Christiansen, & C. Neuhold (Eds.), International Handbook on Informal Governance (pp. 292–313). Cheltenham: Edward Elgar. https://doi.org/10.4337/9781781001219.00023

Weinberg, J. (2000). ICANN and the Problem of Legitimacy. Duke Law Journal, 50(1), 187–260. Retrieved from https://scholarship.law.duke.edu/dlj/vol50/iss1/5/


The regulation of abusive activity and content: a study of registries’ terms of service

Introduction

Policymakers, internet giants and other players look frantically for solutions to the problem: who should take action against unlawful content and behaviour on the internet, and under what conditions? Much of this debate focuses on well-known intermediaries such as online platforms or internet access service providers.

Domain names act as road signs of the internet, with their essential function of resolving names to IP addresses (see Bygrave et al., 2009). As DeNardis notes, “architecture lies well beneath the level of content (…) [i]nfrastructure design and administration internalize the political and economic values that ultimately influence the extent of online freedom and innovation” (DeNardis, 2012, p. 721). While this part of the infrastructure traditionally stayed off the radar in content debates, despite “IP addresses [being] at the center of value tensions between law enforcement and intellectual property rights versus access to knowledge and privacy” (DeNardis, 2012, p. 724), this has changed more recently (Schwemer, 2018; Internet & Jurisdiction, 2019a, pp. 159–161). In 2019, for example, the domain name industry association, the Council of European National Top-Level Domain Registries (CENTR), issued a report on “Domain name registries and online content” (CENTR, 2019b). At the ICANN66 meeting in Montréal (GAC, 2019) and the Internet Governance Forum (IGF) in Berlin, “abuse”1 was a prominent topic on the agenda. At the same time, computer scientists and cybersecurity researchers have looked at the role of the domain name system (DNS) in malicious activities (Hao et al., 2013; Vissers et al., 2017; Korczyński et al., 2017; Kidmose et al., 2018).
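To make the “road sign” function concrete, the following minimal sketch resolves a human-readable domain name to the IP address of the host serving the content. It is purely illustrative: the domain used is a reserved example name, and the lookup simply uses whatever DNS resolver the machine is configured with.

```python
# Minimal illustration of the DNS resolution function described above:
# translating a human-readable domain name into a numerical IP address.
import socket

domain = "example.org"  # reserved example domain, used purely for illustration
ip_address = socket.gethostbyname(domain)  # queries the locally configured DNS resolver
print(f"{domain} resolves to {ip_address}")
```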

In this article, I look at the terms of service (ToS) of European ccTLD registries with a view to identifying their stance on content- or use-related domain name takedowns: what do the ToS of ccTLDs say on the use of a domain name and, more specifically, about content on the underlying website?

ToS of domain registries have so far received limited attention. In 2017, Kuerbis, Mehta, and Mueller (2017), for example, empirically looked at the ToS of selected generic top-level domain (gTLD) registrars in relation to morality clauses, which enable the registrar to cancel a domain name for content-related reasons. From a consumer perspective, also in 2017, the Electronic Frontier Foundation (EFF) looked at the question, “Which Internet registries offer the best protection for domain owners?” from a trademark, copyright, overseas speech regulation, and identity theft perspective (Malcolm, Rossi, & Stoltz, 2017).

This article aims to make a contribution to the study of the regulation and enforcement of “abusive” activity or content by intermediaries, and specifically the role of ccTLD registries. Methodologically, it is based on a comparative analysis of 30 selected ToS of European ccTLD registries governing the bilateral contractual relationship between registry and registrant.2 Whereas it would also be worthwhile to further study the practical application of these ToS, e.g., through a multi-method approach integrating insights from interviews with registries or other data provided by registries, the focus of this contribution is on exploring the contractual room of operations that ccTLD registries reserve in their ToS vis-à-vis registrants. ToS serve as a primary legal basis in this relation (see below), and there is a strong case for looking at these ToS based on publicly available information, without further interpretation provided by registries, which would normally be inaccessible to registrants in a structured manner. Ultimately, courts, too, would look at the ToS rather than the practice of a specific registry in their legal assessment, because courts would not be bound by industry practice. In many instances, the registration of a ccTLD name will not be performed by the registrant directly at the registry but rather through a registrar, i.e., a reseller, where additional terms regarding the use of domain names might be applicable. Contractual relations between registrants and registrars, as well as a study of the underlying national legislative frameworks, are outside the scope of this analysis. Other sources of domain name takedowns, notably court orders or specific legislation, are also outside the scope of this paper.

With this article, I want to contribute to specifying and defining the issue in light of increasingly blurred lines when talking about “takedown”, “abuse” or “use” in the field of domain names and use of, or content on, the underlying website. Here, I am only interested in takedowns related to the use of a domain name or the content made accessible via a domain name. Issues related to a domain name as such, for example typosquatting, are a relatively well-studied phenomenon (see e.g., Moore, Clayton, & Anderson, 2009; Bettinger & Waddell, 2015; EUIPO, 2018) and outside the scope of this paper. This article provides an overview of emerging mechanisms that European ccTLDs have employed in relation to use- or content-related domain name takedowns.

Internet governance, the ccTLD landscape and “abuse” of or on infrastructure

This article’s core subject has its roots in different internet governance dynamics, the first of which concerns the differentiation between country-code top-level domain names (ccTLDs) and generic top-level domain names (gTLDs). There are approximately 71 million domain names under the management of 57 CENTR ccTLD registries, with an average local market share of 54% (CENTR, 2019). The top 5 EU/EFTA ccTLDs are <.de> (16.2m), <.uk> (12.17m), <.nl> (5.87m), <.eu> (3.66m), and <.fr> (3.4m). This compares to 194 million gTLD domain names globally, of which 71% are registered under <.com> (CENTR, 2019).

The two systems vary considerably in their institutional and governance setup (Bygrave, 2015, p. 77ff.), while “in fact there is no technical, functional or economic difference (…)” (Mueller & Badiei, 2017, p. 445). Compared to gTLDs, public interest considerations are especially dominant in the ccTLD sphere, as Geist (2004, p. 9) notes. ccTLDs have existed as institutions since 1985 (Aguerre, 2010, p. 7) and have become “in-country political and economic institutions” (Aguerre, 2010, p. 11; Park, 2008). Whereas the non-profit Internet Corporation for Assigned Names and Numbers (ICANN) “has the authority to make certain policy decision regarding the domain namespace” of gTLDs, which are managed internationally and also subject to the laws of their country of incorporation, ccTLDs are “mainly subject to the national sovereignty of the respective country” (Mahler, 2019, p. 3). The governance of ccTLDs has been described as a system of “non-state, private actors operating within a broader public-private network” (Christou & Simpson, 2007, p. 17) where yet “[g]overnments are deeply involved in domain name administration at the national level” (Geist, 2004, p. 2). Kleinwächter (2003, pp. 1105–1106) explains the “bottom-up development by private stakeholders without any interference from governmental legislation” in the early days with the rapid growth of the internet, and notes that “[f]ew governments considered the DNS worthy of attention”. More recently, DNS providers have appeared on lawmakers’ radar and have, for example, been addressed in the NIS Directive of 2016.3

In the online environment, contractual relations, regularly defined by terms of service (ToS), constitute a primary regulatory factor (Belli & Venturini, 2016; Kuerbis, Mehta, & Mueller, 2017), despite often being disregarded by users (in the context of social networking services, see Obar & Oeldorf-Hirsch, 2018). Given their powers, private intermediaries are, in some instances, seen as acting akin to governments or as de facto regulators (Riordan, 2016, p. 343). For the management of ccTLDs there exists a “statutory footing” in primary or secondary legislation in some instances (Bygrave, 2015, p. 78). Often, however, domain registries have a broad freedom to define the ToS for the granting and use of domain names under their respective top-level domain (TLD). Against the payment of a fee, and on a first come, first served basis, a registrant regularly obtains a right to use the domain name (interestingly, in the French <.fr> zone, a registrant “owns the domain name”). Some registries restrict registration of ccTLD domain names to residents of certain countries (e.g., <.no> and <.it>). The contractual relation between registry and registrant also provides one potential basis for out-of-court takedowns. Thus, in order to understand the regulatory landscape for content- or use-related domain name takedowns, it is necessary to focus on the registries’ regulation via their ToS.

The notion of “takedown” in relation to a domain name is not unproblematic: technically, administratively and partly legally, a more thorough distinction between blocking, suspension (the technical decoupling from a name server), deletion (the registrant is deleted from the WHOIS database), deactivation, transfer, seizure, etc. of a domain name is necessary. There exists no uniform notion among registries, lawmakers and practitioners. The goal of a domain name-related measure for content reasons is typically that the domain name can no longer be used to access a website (DeNardis, 2012, p. 728; Schwemer, 2018, p. 277), even though the content remains accessible via the IP address: this can be achieved by suspending or deleting a domain name, whereas blocking goes beyond that (presuming that there exists a societal interest in domain names being used). For the sake of this article, in any case, all these measures will be understood as takedowns.
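The practical consequence of a suspension or deletion can be sketched as follows. This is a hedged illustration rather than a measurement: the suspended domain and the IP address are placeholders, and it assumes the web server relies on name-based virtual hosting.

```python
# Sketch of why a DNS-level takedown leaves the hosted content itself untouched:
# resolution of the suspended domain fails, but a client that still knows the
# old IP address can address the server directly with the original Host header.
import socket
import urllib.request

domain = "suspended-example.org"  # placeholder for a suspended domain
last_known_ip = "203.0.113.10"    # placeholder IP recorded before suspension

try:
    socket.gethostbyname(domain)
except socket.gaierror:
    print("Resolution fails: the domain name no longer points anywhere.")

# The content remains reachable by addressing the server directly.
request = urllib.request.Request(
    f"http://{last_known_ip}/", headers={"Host": domain}
)
# urllib.request.urlopen(request)  # would still retrieve the content, assuming
#                                  # the server serves it for that Host header
```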

Despite the fact that it is sometimes seen as a controversial term, “abuse” is becoming an ever more frequently used term in the domain name world, which according to Mahler (2019, p. 252) is to be understood in a broader way than just covering illegal activity. Again, there exists no uniform definition. While, strictly speaking, the ccTLD world is somewhat detached from ICANN policy, ICANN’s attempt to define the issue provides a valuable perspective on abuse. Mahler (2019, p. 249) notes that in ICANN’s regulatory framework, too, there exists no clear definition of abuse and it can span from undesired activities like sending out spam, which is not necessarily a criminal offence, to copyright infringements, and ultimately the commission of cybercrime.4 In 2010, ICANN developed a consensual definition of abuse, according to which: “Abuse is an action that: a) causes actual and substantial harm, or is a material predicate of harm, and b) is illegal or illegitimate, or is otherwise contrary to the intention and design of a stated legitimate purpose, if such purpose is disclosed.” (cited in Mahler, 2019, p. 251). More recently, in 2018, ICANN’s Competition, Consumer Trust and Consumer Choice (CCT) Review team referred to “DNS Abuse” as “intentionally deceptive, conniving, or unsolicited activities that actively make use of the DNS and/or the procedures used to register domain names”, whereas the term “DNS Security Abuse” refers to “more technical forms of malicious activity, such as malware, phishing, and botnets, as well as spam when used as a delivery method for these forms of abuse” (GAC, 2019, p. 2).

From a legal perspective, the abuse notion as well as its suggested definitions are problematic, given the blurry lines between abuse and legitimate behaviour (Mahler, 2019, p. 251) vis-à-vis the much clearer distinction between lawful and unlawful behaviour or content. The European Commission, for example, defines illegal content as “any information which is not compliant with Union law or the law of a Member State concerned”.5 If abuse goes beyond that, however, it is unclear on what standards or evaluations such a definition rests and what this entails for procedural transparency, legal certainty and the rule of law.6

For the sake of clarity, I propose to differentiate, in the context of domain names and the DNS, between abuse on the DNS (i.e., content abuse, such as content on a website accessible via a domain name), on the one hand, and abuse of the DNS (i.e., technical abuse, such as turning a domain name into a bot), on the other hand. This distinction is first and foremost necessary given the “proximity” of the abuse to the registry’s functions. Whereas technical abuse has a more direct connection to the technical administrative role of registries and the DNS, content is much farther removed from a registry and the DNS, given that the content is hosted elsewhere and merely made more easily accessible by translating numerical IP addresses to human-readable alphanumerical domain names (see also Internet & Jurisdiction, 2019b, p. 6, pp. 20-21). Furthermore, the distinction also matters when looking at the efficiency of such an enforcement tool: whereas technical abuse can in certain instances be brought to an end, domain name-related measures for content-related reasons are a much blunter and at the same time less effective tool: the domain name as well as associated services such as email are made inaccessible on a global scale, whereas the content stays online where it is hosted and is just more difficult to access.

What is the use of a domain name?

Related to the notion of “abuse” is the notion of “use”. Before I take a closer look at the specific provisions in the ToS, it is noteworthy that quite a few ToS include a use-related provision in one way or another, while “abuse” is not a common notion in the analysed ToS. The question is, however, what the use of a domain name relates to. Here, we can differentiate between two layers: firstly, as noted in the introduction, the use of a domain name could simply refer to the use of the domain name as such. In a narrow understanding this might not even include the technical use. Secondly, the use of a domain name could also refer to the use of a domain name for a certain purpose, be it on the technical level (use of the DNS) or on the content level (use on the DNS): for the layperson this regularly has the purpose of making a certain website accessible via a domain name or enabling email capabilities. This differentiation is crucial because the former relates only to the DNS, whereas the latter also relates to the underlying website content, or activity, accessible via a domain name (Vissers et al., 2017; Schwemer, 2018).

The ToS of the UK registry administering <.uk>, Nominet, for example, stipulate “that you will not use the domain name for any unlawful purpose” (section 6.1.5). This compares to the ToS of the German registry for <.de>, DENIC, which call for termination if “the registration of the domain for the Domain Holder manifestly infringes the rights of others or is otherwise illegal, regardless of the specific use made of it” (§ 7(2) d)). The German terms thus need to be understood as relating to the domain name as such and not its use. A provision that at first glance seems somewhat contradictory to this is introduced in the section on the duties of registrants, where it is stipulated that a registrant gives an explicit assurance “that the registration and intended use of the domain does not infringe on anybody else’s rights nor break any general law” (§ 3(1); emphasis added). In both ToS, regarding <.uk> and <.de>, there exists no further definition of “purpose” or “intended use”.

The Norwegian registry, NORID, administering <.no>, does not directly refer to the use of a domain name in its terms, but requires a confirmation by the registrant declaring that “[the] use of the domain name (…) does not conflict with Norwegian law (…)” (declaration form appendix G 3.0, dated 22 May 2018). In this instance, however, the context suggests that the provision is meant to keep the registry free of liability rather than to keep open a backdoor for a (non-judicial) takedown of a domain name for breach of terms.

The ToS of the <.eu> ccTLD registry, EURid, also contain an ambiguous clause in the section on the obligations of the registrant that could be interpreted as understanding “use” broadly: the registrant has the obligation “not to use the Domain Name (i) in bad faith or (ii) for any unlawful purpose” (section 3 (3)). The terms further stipulate that the registry may revoke the registration inter alia if there is a “breach of the Rules by the Registrant” (section 8 (5) (iii)). In a direct email follow-up, however, EURid declared that it never takes action based on content associated with a domain name.7 The ToS of the Hungarian ccTLD, <.hu>, similarly stipulate that the applicant is “to act with utmost care in selecting the domain so as the domain name (…) and the use of it shall not violate the rights of other persons or entities (…)” (section 2.2). They set forth that a domain may be suspended if “the domain and/or the use of the domain name causes trouble in the operation of the Internet, or seriously threatens the security of the users” (section 5.2 b). In these instances, it appears from the wording that the provisions indeed enable the registry to intervene based on the content or use of the domain name.

The <.be> registry’s ToS also contain a “violation of law” clause, which specifies the condition that “the domain name is not used in violation of any applicable laws or regulations, such as a name that helps to discriminate on the basis of race, language, sex, religion or political view” (section 8 (a) (4); emphasis added). Looking at the list of examples, it appears that the use of a domain name relates to the name as such. At the same time, “such as” implies that the list is non-exhaustive. Thus, arguably, the use of the domain name in relation to content could be encompassed by the clause. This is supported by another, broader clause in the <.be> terms, which stipulates that the domain name is “not registered for any unlawful purpose” (section 8 (a) (3)).

Another peculiar provision can be found in the Croatian ordinance governing <.hr>, stipulating that “[t]he user of the domain shall use it only for the purpose for which it was registered and in a manner usual within the world Internet communities” (Article 24(3); emphasis added). The ordinance, however, offers little guidance as to what this entails.

In conclusion, in some instances an adequate interpretation can only be found in the context of the specific terms. Not only can it be difficult to understand whether the respective, often ambiguous, clauses refer only to the name as such or also encompass technical abuse or content abuse; the ambiguity of the ToS is furthered by their difficult readability (see also Bygrave, 2015, p. 5). In order to assess the readability of terms, the Flesch-Kincaid Grade Level score, based on sentence length and word length, is a common measure applied in research related to terms (Graber, D’Alessandro, & Johnson-West, 2002; Culnan & Carlin, 2006; Fiesler, Lampe, & Bruckman, 2016). The Grade Level score for the analysed terms averages 13,8 (lowest: 10,8 (<.nl>); highest: 16,3 (<.ee>)), with an average word count of 6.200 words (lowest: 1.935 (<.ro>); highest: 22.839 (<.gr>); second highest: 11.682 (<.sk>)), making them difficult to very difficult texts to read. It seems problematic that the terms are not clearer, though they are notably not as complex as many privacy and copyright policies, which average in the 14-15 range (this article itself scores in the 13 range).
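
For readers unfamiliar with the metric, the following short Python sketch shows how a Flesch-Kincaid Grade Level score of the kind reported above can be computed. It uses the standard formula (0.39 × words per sentence + 11.8 × syllables per word - 15.59) with a naive syllable heuristic; the scores in this article were produced with a Microsoft Word script, so exact values will differ.

```python
# Minimal sketch of a Flesch-Kincaid Grade Level calculation (illustrative only).
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count groups of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    words_per_sentence = len(words) / max(1, len(sentences))
    syllables_per_word = syllables / max(1, len(words))
    return 0.39 * words_per_sentence + 11.8 * syllables_per_word - 15.59

# Hypothetical example sentence, not taken from any of the analysed ToS.
sample = "The registrant warrants that the domain name will not be used for any unlawful purpose."
print(round(flesch_kincaid_grade(sample), 1))
```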

What do the terms say?

Several ccTLD registries, in their ToS, specify takedown mechanisms for use-related reasons. The terms regularly stipulate procedures for those instances and vary in scope. Based on the comparison of the terms of the 30 examined ccTLD registries, they can roughly be grouped into three categories: (1) broadly addressing use or content, (2) containing specific use- or content-related provisions, and (3) not addressing the use of a domain name or content at all.

Firstly, several ToS contain broad provisions related to the illegality of content or use of the domain name, ranging from use for unlawful purposes to clear violations or manifestly illegal acts (Table 1).

Table 1: Broad provisions addressing use or content (provision: ccTLDs)

Public order: <.nl>
Unlawful or illegal use: <.nl>; <.uk> (“any unlawful purpose”); <.be> (“not registered for an unlawful purpose”); <.ie> (“used for any unlawful purpose”); <.eu> (“any unlawful purposes”)
Used in bad faith: <.ie>; <.eu>
Clear violation of law: <.se>; <.dk> (“manifestly illegal acts”)

Secondly, several terms of ccTLD registries refer to specific cases of unlawful or unwanted use of a domain name, ranging from decency and offensive content to the distribution of viruses and malware, phishing and denial of service and botnet attacks (Table 2).

Table 2: Specific provisions addressing use or content (provision: ccTLDs)

National or international information security: <.sk>; <.cz> (“national or international computer security”)
Serious threat to the security of users: <.hu>
Obvious risk of economic crime: <.dk>
Compromising of IT equipment: <.dk>
Content of a highly offensive nature: <.dk>
Decency: <.nl>
Distribution of viruses and malware: <.uk>; <.ch> (“malicious code”); <.dk>; <.sk>; <.cz>
Phishing: <.uk>; <.sk>; <.cz>; <.dk>; <.ch> (“obtain sensitive data by wrongful means”)
Managing a network of devices infected without authorisation for the purpose of executing illegal activity (mainly botnets): <.sk>; <.cz>
Facilitating distributed denial of service attacks: <.uk>

A little more than one third (11 out of 30) of the examined terms include a clause that somehow relates to use or to content available under the domain name. Seven terms (7 out of 30) include at least one broad clause related to the illegal use of the domain name (“any unlawful purpose”, “public order”, “clear violation of law”, usage in “bad faith”, etc.), of which less than half (3 out of 7) also contain a specific use provision. The Swedish <.se>, the Irish <.ie>, the Belgian <.be> and the European <.eu> terms contain only a broad provision without a specific use provision, while the Slovak <.sk>, the Czech <.cz>, the Swiss <.ch> and the Hungarian <.hu> terms contain a specific use provision but no general clause.

It is unclear whether the broad clauses (Table 1) are restricted to technical abuse or also envisioned to encompass content on the underlying website.8 Just under a quarter of the terms and conditions (seven out of 30) contain specific, non-exhaustive catalogues of unlawful uses (e.g., phishing, malware distribution, botnets). These appear to relate primarily to technical abuse scenarios. Notably, however, the Dutch <.nl> ToS provide for a decency-related provision and the Danish <.dk> ToS for a “highly offensive nature” content-related provision; depending on the interpretation and the national legislative context, these provisions may cover not only unlawful but even merely unwanted content or use. Arguably, they leave the sphere of illegal behaviour and open up a contractual basis for actions based on softer categories that go beyond illegal content or cybercrime. In recent policy debates, the violation of intellectual property rights – notably copyright and trademark infringements – as well as online shops selling counterfeit products have been especially topical. Despite increasing pressure by stakeholders, none of the analysed terms includes a provision explicitly relating to these forms of use or content of the underlying website.

These two categories – broad and specific use-related provisions – compare to a large number of registries (19 out of 30) that do not appear to include any use- or content-related provisions in their ToS (e.g., <.es>, <.mt>, <.lu>, <.lt>, <.lv>, <.ro>, <.si>, <.gr>, <.fr>, <.it>). On the procedural side, some terms explicitly state that takedown only happens in the case of a court order, arbitration or incorrect information (e.g., <.at>, <.de>, <.pt>). Looking at the volume of registered domain names per registry, the picture looks different, though: for 47,04% of registrants (of the 61,67 million domain names), there exists some contractual basis pertaining to the use of a domain name (categories 1 and 2). This can be explained by the presence of larger and medium-sized ccTLD spaces, notably <.uk>, <.nl>, and <.eu>.

It is outside the scope of this article to explore the reasons for including content- or use-related provisions in ToS. These might be influenced by legislation or jurisprudence (e.g., in relation to secondary liability), policymaking or independent commercial considerations by the respective registry, or be a result of co-regulation (Frydman, Hennebel, & Lewkowicz, 2012). In many instances, though, registries have broad freedom to define the rules in their ToS. In Denmark, for example, the legislator has chosen framework legislation, which gives the national registry broad authority to define when domain names should be suspended. In 2019, for example, the Danish registry conducted a hearing asking its stakeholders inter alia whether it should “be proactive and suspend domain names for websites that is known for phishing or malware spread” and “be able to suspend any domain name used in connection with the obvious risk of certain serious types of crime”.9 Some situations are special in that use-related provisions stem directly from administrative decrees or secondary legislation, as in the case of the Finnish <.fi> registry (which is a government agency), the Swiss registry, Switch, administering <.ch> and <.li>, the Greek registry for <.gr>, and the Spanish registry, red.es, administering <.es>. In addition to the variety of setups and legal frameworks, the absence of a clear liability exemption framework within the E-Commerce Directive10 (Truyens & van Eecke, 2016; Schwemer, 2018) might explain the differences in registries’ approaches to the use of a domain name and to content. An upcoming review of the E-Commerce Directive and the anticipated proposal of a Digital Services Act are, according to leaked documents from DG Connect (European Commission, 2019), envisioned to specifically address the liability exemption regime in relation to the DNS, arguing that “clarification (…) is necessary”.11

One central insight, however, is that the European landscape is heterogeneous and, at this time, divided into two major streams: registries that address use or content in their terms to some extent, and registries that do not. There is little information available on trends or historical developments. Many ToS have been updated within the last two years, often due to the General Data Protection Regulation12 (GDPR) and its implications for WHOIS databases (see e.g., Hoeren & Völkel, 2018). Yet it seems plausible that content-related provisions have become more prominent in recent years, given the rise of general online content regulation discussions.

How are the terms applied?

As seen, some ToS potentially provide a contractual basis for the non-judicial takedown of a domain name by a registry for use- or content-related reasons. Another interesting question is whether and how domain registries make use of these provisions in practice. The analysis above says little about the instances in which these provisions are or have been applied; rather, it gives a picture of the contractual room for manoeuvre of ccTLD registries.

Some registries stipulate in their terms more or less directly that they do not assess the use of a domain name or content of websites made accessible via a domain name (see above). The German <.de> terms, for example, note that “[a]t no time is there any obligation whatsoever on DENIC to verify whether the registration of the domain on behalf of the Domain Holder or its use by the Domain Holder infringes the rights of others” (§ 2). The Austrian <.at> terms clarify that a revocation only takes place “in the case of a legally effective ruling by a court of law or a court of arbitration which is enforceable in Austria, and in the case of an instruction from a competent authority” (Section 3.8). In other instances, domain registries have set up trusted flaggers or trusted notifier regimes, where registries rely on notices by a public authority or private notifiers (Bridy, 2017; Schwemer, 2019).

Sometimes registries provide additional information on the handling of use or content on their websites. Generally, however, information on practice related to the enforcement of use- or content-related terms in their ToS is – beyond sporadic press releases by registries – sparse and often not publicly available. Given that relatively little is written and reported on such cases, the takedown of domain names directly based on an assessment of content – whether ex officio or on the motion of third parties – is presumably rare. Or, at least, false positives might be rare: infringers are unlikely to challenge the takedown of a domain name for use- or content-related reasons, which in turn could mean that there are few instances where such takedowns are challenged by registrants.13

It is also difficult to assess the relation of domain registries to content without acknowledging the fluid boundary between content- and non-content-related measures. For example, when the legality of a domain name as such is determined, e.g., in connection with Uniform Domain-Name Dispute-Resolution Policy (UDRP) proceedings, the name as such is regularly the starting point. However, its use also constitutes one determining factor, even though the decision is not based on the content accessible via a domain name. Thus, the borders between content-related and purely domain name-related issues might be blurrier than they appear.

In the absence of concrete information from practice, I refer in this article to publicly available evidence related to the structural setup of content- or use-related mechanisms put in place by registries. In the following, I provide three noteworthy examples.

Notice-and-takedown mechanisms

The first example is the Dutch ccTLD registry administering <.nl>, SIDN, which has established a notice-and-takedown procedure – akin to the procedures established by online platforms in connection with Article 14 of the E-Commerce Directive – for offending content that is clearly unlawful or criminal (CENTR, 2019b, p. 20; SIDN, 2019a). Anyone with a “legitimate interest” can, after having contacted the uploader, website manager, registrant, and registrar, request the registry to disable a <.nl> domain name. SIDN specifies in its takedown form that they “take action only to prevent clearly unlawful or criminal activity. If, for example, expert legal opinion is needed to decide whether an activity is unlawful or criminal, we won’t do anything.” According to SIDN’s annual report (SIDN, 2019b, p. 9), the registry received 35 notice-and-takedown requests in 2018, of which seven led to the disabling of a domain name by SIDN. Given the low number of cases and the information on the setup of this mechanism, it appears to be a last-resort measure. Yet it is a noteworthy mechanism, which appears to be inspired by the mechanisms put in place in relation to the liability exemption regime for hosting providers under the E-Commerce Directive.

Somewhat related to these developments is the establishment of trusted notifier regimes. A practice that is increasingly seen, and also encouraged by the European lawmaker,14 is the offering of an expedited process for notices coming from “trusted flaggers” or “trusted notifiers”. Some gTLD and some ccTLD registries have established such mechanisms (Bridy, 2017; Schwemer, 2019). Again, however, public information on their setup or workings is sparse.

Proactive screening

The second example relates to the proactive screening of domain name use or content. Certain registries scan or proactively monitor the usage of, and content accessible under, a domain name for abuse. Technically, this is performed by, for example, crawling content, computing fuzzy hashes, analysing HTML structural similarity (see Gowda & Matmann, 2016) or analysing registration data. In 2017, for instance, the <.eu> registry, EURid, introduced an abuse prevention tool using machine learning algorithms (the “Abuse Prevention and Early Warning System”) that flags suspicious domain name registrations and aims to prevent such maliciously used domain names from becoming active in the first place (EURid, 2016; EURid, 2017; EURid, 2019). The Belgian registry administering <.be>, DNS Belgium, also appears to have some kind of screening process in place, outsourced to external security firms, which seems to relate primarily to technical abuse by third parties “for fraudulent practices such phishing, malware, etc.” (DNS Belgium, 2019a). The Dutch registry administering <.nl>, SIDN, has likewise put research effort into domain abuse and developed a domain early warning system for TLDs, which is “capable to detect several types of domain abuse, such as malware, phishing, and allegedly fraudulent web shops” (Moura, Müller, Wullink, & Hesselman, 2016). A concern in relation to proactive screening is the risk of false positives and the potential lack of competence to assess the legality of the allegedly infringing content.15
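
As a rough illustration of one of the techniques mentioned above, the following Python sketch compares two pages by the similarity of their HTML tag structure. It is a deliberately minimal stand-in for the far more robust structural clustering described by Gowda & Matmann (2016); the two sample pages are invented for illustration.

```python
# Minimal sketch of HTML structural similarity (illustrative only).
from difflib import SequenceMatcher
from html.parser import HTMLParser

class TagCollector(HTMLParser):
    """Collects the sequence of opening tags, ignoring text content."""
    def __init__(self):
        super().__init__()
        self.tags = []

    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)

def tag_sequence(html: str) -> list:
    parser = TagCollector()
    parser.feed(html)
    return parser.tags

def structural_similarity(html_a: str, html_b: str) -> float:
    # Ratio in [0, 1]; 1.0 means an identical tag structure.
    return SequenceMatcher(None, tag_sequence(html_a), tag_sequence(html_b)).ratio()

# Two hypothetical login-style pages with near-identical structure.
page_a = "<html><body><form><input><input><button>Log in</button></form></body></html>"
page_b = "<html><body><form><input><input><input><button>Log in</button></form></body></html>"
print(round(structural_similarity(page_a, page_b), 2))
```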

Data validation

Accurate data has historically been necessary in order to get in touch with registrants with a view to solving technical issues; nowadays, however, a somewhat alternative use of data accuracy is emerging in relation to abuse. Some have identified a plausible correlation between domain names that are used for unlawful purposes and the quality of the registration data (DK Hostmaster, 2019; Palage, 2019). Domain registries regularly reserve in their ToS the right to terminate a registration that is based on wrong or inaccurate information (e.g., <.be>, <.se>, <.nl>, <.dk>, <.is>, <.eu>, <.it>). Securing correct registrant information has been identified as one means of mitigating the problem.

Several registries have introduced internal or external data validation processes (e.g., <.dk>, <.uk>, <.eu>). The UK ccTLD registry, Nominet, for example, uses a data validation process in which it matches name and address against a third-party data source (Nominet, 2019). Similarly, the Belgian registry performs a daily manual screening of newly registered domain names, which is “carried out first and foremost to identify any obvious cases of phishing rapidly” (DNS Belgium, 2019b). In Denmark, a problem with online shops selling counterfeit products manifested itself in an increasing number of court orders received by the registry to seize <.dk> domains. In 2017, the Danish registry, DK Hostmaster, introduced the mandatory use of a common login and verification solution – used by government, banks and other private actors – for the identity verification of Danish registrants, together with a risk-based assessment of foreign registrants at the time of registration. The verification requirement resulted in a decrease in the share of online shops suspected of IP infringements from 6,73% to 0,12% (DK Hostmaster, 2019). The <.eu> registry, EURid, also cross-checks registration data with third parties, which by 2016 had resulted in the deletion of 31,819 domains at the registry’s own initiative (EURid, 2016).
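
The following Python sketch gives a purely hypothetical flavour of such registration-data checks. The field names and rules are invented for illustration; actual registries match records against third-party data sources and national e-ID systems rather than applying simple heuristics like these.

```python
# Hypothetical sketch of registration-data plausibility checks (illustrative only).
import re

def registration_risk_flags(registrant: dict) -> list:
    """Return a list of reasons why a registration might warrant manual review."""
    flags = []
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", registrant.get("email", "")):
        flags.append("implausible email address")
    if len(registrant.get("postal_address", "").split()) < 3:
        flags.append("incomplete postal address")
    if registrant.get("name", "").strip().lower() in {"", "n/a", "test", "asdf"}:
        flags.append("placeholder registrant name")
    return flags

# Invented example record; three flags would be raised here.
applicant = {"name": "asdf", "email": "foo@bar", "postal_address": "Somewhere"}
print(registration_risk_flags(applicant))
```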

This offers an intriguing, somewhat creative solution to the practical problem of unlawful use and content, which comes from a very different starting point: instead of registries problematically moving towards effectively performing content policing, a reduction in “abuse” – whether technical or content-related – is merely a by-product of ensuring correct registration data. The Danish registry, for example, is, according to § 18(2) of the Danish domain name law domæneloven (lov om internetdomæner, LOV nr 164 af 26/02/2014), obliged to ensure correct, up-to-date and publicly available registration data in the WHOIS.16 Whereas a specific obligation based on secondary legislation like the Danish example is rare, most analysed ccTLD registries address correct registration data in their ToS (see above).

Inaccuracy of registration data in this context is not treated as evidence of malpractice but rather constitutes, in itself, the ground for a domain name takedown. In other words, the takedown of a domain name linked to technical or content abuse is performed without the registry having to make a legal evaluation of the use or of the content on the underlying website. Thus, such a mechanism is – compared to trusted notifier regimes or takedowns based on some kind of use or content analysis – also much less problematic from a fundamental rights perspective.

Conclusion

A report by the Internet & Jurisdiction Policy Network notes that a common challenge among all actors is “to define when is it appropriate to act at the DNS level in relation to the content or behavior of a domain address, and to identify the respective roles that courts and so-called ‘notifiers’ should play” (Internet & Jurisdiction, 2019a, p. 159). This analysis of 30 European ccTLD terms of service shows that there is a relatively wide spread of responses to use- and content-related domain name “abuse”: some actors refrain from contractually reserving the right to take down a domain name due to its use or content. Others reserve a right to take down a domain name in certain severe situations. Still others have established some kind of takedown regime, akin to the notice-and-takedown regimes of other intermediaries, or have even introduced some form of proactive screening. A little more than a third of the analysed ccTLD terms contain content- or use-related provisions, accounting for 47,04% of the analysed ccTLD market. This compares to the findings of Kuerbis, Mehta, & Mueller (2017), who found for registrars that 59% of terms comprise morality clauses, accounting for 62% of the domain name market. The discretion of registrars to take down domain names is thus higher than that of ccTLD registries. Yet another market response by ccTLD registries to the issue of unlawful content appears to be the “creative use” of data validation. Without directly regulating use or content, this practice constitutes a practical solution for minimising the use of domain names for unlawful purposes.

Domain name takedowns based on privatised enforcement and self-regulation for content-related reasons are worrisome from a fundamental rights perspective (Kleinwächter, 2003; Seltzer, 2011; DeNardis, 2012; Schwemer, 2019), and the risks and drawbacks associated with use- or content-related domain name takedowns have been identified elsewhere (see e.g., Schwemer, 2018; CENTR, 2019b, pp. 14–15; Internet & Jurisdiction, 2019a, p. 159). It has, for example, been argued that “requests for domain name suspension should only be considered when one can reliably determine that a domain is used with a clear intent of significant abusive conduct; only a particularly high level of abuse and/or harm could justify resorting to such a measure” (Internet & Jurisdiction, 2019a, p. 159). In October 2019, a group of registrars and registries, notably including the registry administering the ccTLD <.uk>, released a “Framework for DNS Abuse”,17 arguing that “[d]espite the fact that registrars and registries have only one blunt and disproportionate tool to address Website Content Abuse, we believe there are certain forms of Website Content Abuse that are so egregious that the contracted party should act when provided with specific and credible notice”. Notably, they argue that a registry or registrar should, even without a court order, address “content abuse” related to “child sexual abuse materials”, the “illegal distribution of opioids online”, “human trafficking” and “specific and credible incitements to violence”. While domain registries have historically not been designed to engage in use- or content-related enforcement, recent developments suggest that the lines between the infrastructure layer and the content layer of the internet are becoming blurrier.

A silent drift by domain registries into regulating and enforcing abusive content or activity on underlying websites – i.e., use and content regulation – is problematic. As seen in this article, many ToS are rather imprecise on the question of what leeway they actually give for this kind of intervention by the registry. Furthermore, whereas information on the existence of such use- or content-related provisions is accessible via the ToS, it says little about their practical application and importance. For the sake of transparency and legal certainty, though, registries should be precise about their stance on the issue. European case law on domain registries and unlawful content, too, is sparse.18

In this article, I have purposely focused on publicly available information only. In privatised enforcement systems, transparency is central to ensuring well-functioning and well-balanced regimes. Future research endeavours might benefit from further empirical work, for example by interviewing registries on their practices. It will also be relevant to revisit the ToS of registries in due time, as the contractual basis for these measures might change. In direct follow-up with selected ccTLD registries, it appears that these provisions are of minor practical relevance at this time. Given the topicality of content regulation, my expectation is that this practice will become more prevalent rather than disappear. On this trajectory, in any case, domain registries should be clear and transparent regarding their role in content- or use-related domain name takedowns.

Acknowledgements

This article is the result of a research project that has been funded by the Innovation Fund Denmark and the Danish Internet Forum (DIFO). I thank student assistant Berdien van der Donk for help with information collection. I thank Thomas Riis and Henriette Vignal-Schjøth for their comments on a draft of this paper, Kenneth Merrill and Farzaneh Badiei for their thorough and constructive peer-review, as well as Francesca Musiani and Frédéric Dubois for comments and suggestions. This research represents solely the view of the author. The author enjoyed full academic freedom but acknowledges that research results may be in the interest of the co-funding organisation.

Appendix

Analysed terms of service of ccTLDs. Information on all registries was last checked on 24 June 2019. Numbers marked with an asterisk (*) are retrieved from <http://research.domaintools.com/statistics/tld-counts/> in instances where registries did not provide publicly available statistical information. The Flesch-Kincaid Grade Level has been calculated using a Microsoft Word script.

ccTLD | Country | Registry | Terms and conditions | Domains | Word count / words per sentence | Flesch-Kincaid Grade level
<.at> | Austria | nic.at | General Terms and Conditions, nic.at GmbH, AGB 2018; Version 3.2 of 16 May 2018 | 1.305.633 | 3.152 / 20,6 | 12,6
<.be> | Belgium | DNS Belgium | Terms and conditions for .be domain name registrations; Version 6.1 of 6 April 2018, Applicable as of 25 May 2018 | 1.501.401* | 4.197 / 24,9 | 14
<.ch> | Switzerland; Liechtenstein | SWITCH | General Terms and Conditions (GTC) for the registration and administration of domain names under the domain “.ch” and “.li”; Entered into effect 1 January 2015 (Version 10) | 289.991 | 5.111 / 21,9 | 13,2
<.cz> | Czech Republic | CZ.NIC | Rules of Domain Names Registration under the .cz ccTLD; Effective from 25 May 2018 | 1.326.646 | 9.912 / 15,5 | 11,9
<.de> | Germany | DENIC | DENIC Domain Terms and Conditions (Retrieved 1 June 2019) | 16.243.653 | 2.382 / 30,4 | 15,5
<.dk> | Denmark | DK Hostmaster | Terms and conditions for the right to use a .dk domain name; version 09 (Retrieved 1 June 2019) | 1.320.622 | 3.482 / 25,5 | 13,2
<.ee> | Estonia | Estonian Internet Foundation | Domain regulation; Approved by the Estonian Internet Foundation Council on 7 March 2018 and taking effect on 25 May 2018 | 122.216 | 6.773 / 26,5 | 16,3
<.es> | Spain | Red.es (part of government) | Ministerial Order ITC/1542/2005, dated 19 May, approving the National Plan for Internet Domain Names under the country code for Spain (“.es”), which came into effect on 1 June 2005, and Instruction from the General Manager of the Public Business Entity Red.es, which outlines the procedures applicable to assignment and other operations associated with registering “.es” domain names; dated 8 November 2006 | 1.918.039 | N/A | N/A
<.eu> | European Union | EURid | Domain Name Registration Terms and Conditions, v.10.1 [accessed 1 June 2019] | 3.661.899 | 3.929 / 21,5 | 13,1
<.fi> | Finland | Traficom | Domain Name Regulation; issued in Helsinki 15 June 2016 | 444.958* | N/A | N/A
<.fr> | France | Afnic | Naming Policy for the French Network Information Centre; Rules for registering Internet domain names using country codes for metropolitan France and the Overseas Departments and Territories, Version 25 May 2018 | 3.396.646 | 7.112 / 21,9 | 14,4
<.gr> | Greece | FORTH-ICS | Regulation on Management and Assignment of [.gr] or [.ελ] Domain Names, Decision 843/2 of 1-3-2018 by The Hellenic Telecommunications and Post Commission (EETT) | 396.102* | 22.839 / 25,6 | 14,2
<.hr> | Croatia | CARNet | Ordinance on the Organisation and Management of the National Top-level Domain | 98.094* | 8.357 / 27,8 | 16,1
<.hu> | Hungary | Council of Hungarian Internet Providers | Domain registration rules and procedures; Effective as of 25 May 2018 | 748.423 | 9.585 / 28,1 | 16,9
<.ie> | Ireland | IEDR | Registrant Terms and Conditions – Effective from 1 July 2019 | 262.140 | 7.429 / 22,8 | 13,2
<.is> | Iceland | ISNIC | Terms and Conditions; 1 November 2011 [accessed 1 June 2019] | 68.003 | 2.986 / 16,4 | 11,4
<.it> | Italy | Registro.it | Assignment and management of domain names in the ccTLD .it; Regulation; Version 7.1; 3 November 2014 | 3.202.835 | 9.600 / 23,6 | N/A
<.lt> | Lithuania | DOMREG | Procedural Regulation for the .lt Top-level Domain; Edition 2.0; Version 2.1; 25 May 2018 | 195.036 | 6.433 / 14,1 | 12,1
<.lu> | Luxembourg | Fondation RESTENA | Terms and Conditions of Classic Registration and Management of .lu Domain Names; Version 6.0, May 2018 | 99.812 | 7.536 / 23,5 | 14,0
<.lv> | Latvia | NIC.LV | Policy for acquisition of the right to use domain names under the top level domain .lv; amended as of 17 May 2019 (enters into force on 22 May 2019) | 110.350* | 4.064 / 13,9 | 11,4
<.mt> | Malta | NIC-MT | Terms and Conditions; effective from 1 December 2017 | 18.258* | 2.395 / 28,2 | 15,3
<.nl> | Netherlands | SIDN | General Terms and Conditions for .nl Registrants; 1 May 2019 | 5.872.244 | 5.559 / 16,1 | 10,8
<.no> | Norway | Norid | Domain name policy for .no; Last change: 8 January 2019 | 710.892* | 4.710 / 18,5 | 12,3
<.pl> | Poland | NASK | .pl Domain Name Regulations as of 18 December 2006 (In force as of 1 December 2015) | 2.605.818 | 2.860 / 29,6 | 15,7
<.pt> | Portugal | DNS.PT | 21 May 2018 | 1.150.283 | 7.565 / 26,5 | 15,7
<.ro> | Romania | Internet Service Romania | Domain Name Registration Agreement; Version Number: 4.0 [09/2000] | 496.030* | 1.935 / 25,8 | 14,7
<.se> | Sweden | Internetstiftelsen | Terms and Conditions of Registration applicable for the top-level domain .se from 6 February 2019 | 1.510.883 | 3.738 / 21,2 | 14,5
<.si> | Slovenia | Arnes | General Terms and Conditions for Registration of Domain Names under the .SI Top-Level Domain; Publication 1 July 2016, validity from 1 August 2016 | 132.641 | 5.844 / 14,2 | 11,4
<.sk> | Slovakia | SK-NIC | Terms and Conditions of Domain Name Service in .sk Top Level Domain; 1 October 2018 | 394.776 | 11.682 / 24,6 | 15,0
<.uk> | United Kingdom | Nominet | Terms and Conditions of Domain Name Registration (n.d.) | 1.2168.405 | 2.699 / 25,9 | 13,9

References

Aguerre, C. (2010). ccTLDs and the local dimension of Internet Governance [Working Paper No. 8]. Buenos Aires: Centro de Tecnología y Sociedad UdeSA. Retrieved from http://hdl.handle.net/10908/15557

Belli, L., & Venturini, J. (2016). Private ordering and the rise of terms of service as cyber-regulation. Internet Policy Review, 5(4). https://doi.org/10.14763/2016.4.441

Bettinger, T., & Waddell, A. (Eds). (2015). Domain Name Law and Practice. Oxford: Oxford University Press.

Bridy, A. (2017). Notice and Takedown in the Domain Name System: ICANN’s Ambivalent Drift into Online Content Regulation. Washington and Lee Law Review, 74(3), 1345–1388. Retrieved from https://scholarlycommons.law.wlu.edu/wlulr/vol74/iss3/3/

Bygrave, L. (2015). Internet Governance by Contract. Oxford: Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199687343.001.0001

Bygrave, L., Schiavetta, S., Thunem, H., Lange, A. B., & Phillips, E. (2009). The naming game: governance of the Domain Name System. In L. Bygrave & J. Bing (Eds.), Internet Governance: Infrastructure and Institutions (pp. 147–212). Oxford: Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199561131.003.0006

Christou, G., & Simpson, S. (2007, September). New Modes of regulatory governance for the internet? Country code top level domains in Europe. European Consortium for Political Research General Conference, Pisa. Retrieved from http://www.regulation.upf.edu/ecpr-07-papers/ssimpson.pdf

Christou, G., & Simpson, S. (2009). New Governance, the Internet, and Country Code Top-Level Domains in Europe. Governance, 22(4), 599–624. https://doi.org/10.1111/j.1468-0491.2009.01455.x

Council of European National Top-Level Domain Registries (CENTR). (2019a). CENTRstats, Global TLD Report, Q1 2019 – Edition 27. Retrieved from https://stats.centr.org/stats/global

Council of European National Top-Level Domain Registries (CENTR). (2019b). Domain name registries and online content. Brussels. Retrieved from https://centr.org/library/library/centr-document/domain-name-registries-and-online-content.html

Culnan, M. J., & Carlin, T. J. (2006). Online Privacy Practices in Higher Education: Making the Grade? Communications of the ACM, 52(3), 126–130. https://doi.org/10.1145/1467247.1467277

DeNardis, L. (2012). Hidden levers of Internet control: An infrastructure-based theory of Internet governance. Information, Communication & Society, 15(5), 720–738. https://doi.org/10.1080/1369118X.2012.659199

DK Hostmaster. (2018). Crime prevention on the internet. Retrieved from https://www.dk-hostmaster.dk/sites/default/files/2019-07/Internetcrime_onepager_050319_EN.pdf

DNS Belgium. (2019a). Misuse of your domain name. Retrieved from https://www.dnsbelgium.be/en/internet-security/misuse-your-domain-name

DNS Belgium. (2019b). Complaints on a domain name?. Retrieved from https://www.dnsbelgium.be/en/register-your-domain-name/complaints-domain-name

European Commission. (2019). Digital Services Act note DG Connect June 2019. Retrieved from https://cdn.netzpolitik.org/wp-upload/2019/07/Digital-Services-Act-note-DG-Connect-June-2019.pdf

EUIPO. (2018). Comparative case study on alternative resolution systems for domain name disputes. Alicante: European Intellectual Property Office. https://doi.org/10.2814/294649

EURid. (2016). Abuse monitoring policies and procedures @ EURid. Brussels, 28 Jan 2016. Retrieved from https://gac.icann.org/briefing-materials/public/eurid-2016-01-28.pdf

EURid (2017, October 2). EURid set to launch first of its kind domain name abuse prevention tool. Retrieved from https://eurid.eu/en/news/eurid-set-to-launch-first-of-its-kind-domain-name-abuse-prevention-tool/

EURid (2019). .eu means trust. Retrieved from https://trust.eurid.eu/de/

Fiesler, C., Lampe, C., & Bruckman, A. S. (2016). Reality and Perception of Copyright Terms of Service for Online Content Creation. Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing, 1450–1461. https://doi.org/10.1145/2818048.2819931

Frosio, G. (2017). Why keep a dog and bark yourself? From Intermediary Liability to Responsibility [Research Paper No. 2017-11]. Strasbourg: Centre for International Intellectual Property Studies. Retrieved from https://papers.ssrn.com/abstract_id=2976023

Frydman, B., Hennebel, L., & Lewkowicz, G. (2012). Co-regulation and the rule of law. In E. Brousseau, M. Marzouki & C. Méadel (Eds.), Governance, Regulations and Powers on the Internet (pp. 133-150). Cambridge: Cambridge University Press.

Governmental Advisory Committee (GAC) (2019, September 18). GAC Statement on DNS Abuse. Los Angeles: ICANN. Retrieved from https://gac.icann.org/file-asset/public/gac-statement-dns-abuse-final-18sep19.pdf

Geist, M. (2004). Governments and country-code top level domains: a global survey. Retrieved from https://www.itu.int/osg/spu/forum/intgov04/contributions/governmentsandcctldsfeb04.pdf

Graber, M. A., D’Alessandro, D. M., & Johnson-West, J. (2002). Reading Level of Privacy Policies on Internet Health Web Sites. The Journal of Family Practice, 51(7), 642–645.

Gowda, T. & Matmann, C. A. (2016). Clustering Web Pages Based on Structure and Style Similarity (Application Paper). 2016 IEEE 17th International Conference on Information Reuse and Integration (IRI), 175–180. https://doi.org/10.1109/IRI.2016.30

Hao, S., Thomas, M., Paxson, V., Feamster, N., Kreibich, C., Grier, C., & Hollenbeck, S. (2013). Understanding the domain registration behavior of spammers. Proceedings of the 2013 conference on Internet measurement conference, 63–76. https://doi.org/10.1145/2504730.2504753

Hoeren, T., & Völkel, J. (2018). Information Retrieval About Domain Owners According to the GDPR. Datenschutz und Datensicherheit 2018. https://doi.org/10.2139/ssrn.3135280

Internet & Jurisdiction Policy Network (2019a). Global Status Report 2019.

Internet & Jurisdiction Policy Network (2019b, April). Domains & Jurisdiction Program; Operational Approaches, Norms, Criteria, Mechanisms.

Kidmose, E., Lansing, E., Brandbyge, S., & Pedersen, J. (2018). Heuristic methods for efficient identification of abusive domain names. International Journal On Cyber Situational Awareness (IJCSA), 3(1), 121–142. https://doi.org/10.22619/IJCSA.2018.100123

Kleinwächter, W. (2003). From self-governance to public-private partnership: The changing role of governments in the management of the internet’s core resources. Loyola of Los Angeles Law Review, 36, 1103.

Korczyński, M., Tajalizadehkhoob, S., Noroozian, A., Wullink, M., Hesselman, C., & Eeten, M. Van. (2017). Reputation Metrics Design to Improve Intermediary Incentives for Security of TLDs. 2017 IEEE European Symposium on Security and Privacy (EuroS&P), 579–594. https://doi.org/10.1109/EuroSP.2017.15

Kuerbis, B., Mehta, I., & Mueller, M. (2017). In Search of Amoral Registrars: Content Regulation and Domain Name Policy. Atlanta: Internet Governance Project, Georgia Institute of Technology. Retrieved from https://www.internetgovernance.org/wp-content/uploads/AmoralReg-PAPER-final.pdf

Mahler, T. (2019). Generic Top-Level Domains; A Study of Transnational Private Regulation. Cheltenham: Edward Elgar.

Malcolm, J., Rossi, G., & Stoltz, M. (2017). Which Internet registries offer the best protection for domain owners? [Report]. San Francisco: Electronic Frontier Foundation. Retrieved from https://www.eff.org/files/2017/08/02/domain_registry_whitepaper.pdf

Moore, T., Clayton, R., & Anderson, R. (2009). The Economics of Online Crime. Journal of Economic Perspectives, 23(3), 3-20. https://doi.org/10.1257/jep.23.3.3

Moura, G. C. M., Müller, M., Wullink, M., & Hesselman, C. (2016). nDEWS: a New Domains Early Warning System for TLDs. NOMS 2016 - 2016 IEEE/IFIP Network Operations and Management Symposium, 1061–1066. https://doi.org/10.1109/NOMS.2016.7502961

Mueller, M. (2010). Networks and States: The Global Politics of Internet Governance, Cambridge: The MIT Press.

Mueller, M., & Badiei, F. (2017). Governing Internet Territory: ICANN, Sovereignty Claims, Property Rights and Country Code Top-Level Domains. Columbia Science & Technology Law Review, 18, 435–491. Retrieved from http://www.stlr.org/download/volumes/volume18/muellerBadiei.pdf

Mueller, M., & Chango, M. (2008, December 2). Disrupting Global Governance: The Internet Whois Service, ICANN, and Privacy. Third Annual GigaNet Symposium, Hyderabad. Retrieved from https://doi.org/10.2139/ssrn.2798940

Nominet (2019). How does Nominet validate data? Retrieved from https://registrars.nominet.uk/uk-namespace/data-quality-policy/how-does-nominet-validate-data/

Obar, J. A., & Oeldorf-Hirsch, A. (2018). The biggest lie on the Internet: ignoring the privacy policies and terms of service policies of social networking services. Information, Communication & Society, 23(1), 128–147. https://doi.org/10.1080/1369118X.2018.1486870

Palage, M. (2019). The Role of ccTLD Managers in the Evolving Digital Identity Ecosystem. Brussels: Council of European National Top-Level Domain Registries.

Park, Y. J. (2008). The political economy of country code top level domains [Doctoral Thesis, Syracuse University]. Retrieved from https://surface.syr.edu/it_etd/9/

Riis, T., & Schwemer, S.F. (2019). Leaving the European Safe Harbor, Sailing Towards Algorithmic Content Regulation. Journal of Internet Law, 22(7), 1–21.

Riordan, J. (2016). The Liability of Internet Intermediaries. Oxford: Oxford University Press.

Schwemer, S.F. (2018). On domain registries and unlawful website content: Shifts in intermediaries’ role in light of unlawful content or just another brick in the wall? International Journal of Law and Information Technology, 26(4), 273-293. https://doi.org/10.1093/ijlit/eay012

Schwemer, S.F. (2019). Trusted notifiers and the privatization of online enforcement. Computer Law & Security Review, 35(6). https://doi.org/10.1016/j.clsr.2019.105339

Seltzer, W. (2011). Exposing the flaws of censorship by domain name. IEEE Security and Privacy, 9(1), 83–87. https://doi.org/10.1109/MSP.2011.8

SIDN (2019a). Complaining about the content of a website. Retrieved from https://www.sidn.nl/a/nl-domain-name/complaining-about-the-content-of-a-website?language_id=2

SIDN (2019b). 2018 Annual Report. Retrieved from https://jaarverslag.sidn.nl/jaarverslag/pdf/SIDN_Annual_report_2018.pdf

Truyens, M., & van Eecke, P. (2016). Liability of Domain Name Registries: Don’t Shoot the Messenger. Computer Law & Security Review, 32(2), 327–344. https://doi.org/10.1016/j.clsr.2015.12.018

Vissers, T., Spooren, J., Atgen, P., Jumpertz, D., Janssen, P. …, Desmet, L. (2017). Exploring the ecosystem of malicious domain registrations in the .eu TLD. In Dacier M., Bailey M., Polychronakis M., Antonakakis M. (Eds.), Research in Attacks, Intrusions, and Defenses (pp. 472–493). Springer. https://doi.org/10.1007/978-3-319-66332-6_21

Footnotes

1. On the problematic terminology see below.

2. ccTLD registries of the European Economic Area (EEA), i.e., 28 EU member states and Iceland, Norway, Liechtenstein, plus Switzerland (which is EFTA member but not part of the EEA). All ToS have been analysed in their English translation provided by the registry; many ToS contain a clause whereafter the original language version prevails. All domain registries’ ToS have been checked with a cut-off date of 15 June 2019. See Appendix for a full overview on terms of services of ccTLDs.

3. See e.g., Article 4 nr. 15 and nr. 16 of Directive (EU) 2016/1148 of the European Parliament and of the Council of 6 July 2016 concerning measures for a high common level of security of network and information systems across the Union, OJ L 194, 19 June 2016, pp. 1–30.

4. See also Internet & Jurisdiction, 2019b, pp. 20–21, differentiating technical abuse, namely spam, malware, phishing, pharming, botnets and fast-flux hosting, and website content abuse, namely child abuse material, controlled substances and regulated goods, violent extremist content, hate speech, and intellectual property.

5. Commission Recommendation of 1 March 2018 on measures to effectively tackle illegal content online, C(2018)1177, European Commission, March 2018.

6. This resembles the blurry lines in a parallel discussion regarding platforms and proactive mechanisms, where for instance Frosio has commented on a shift from “liability to responsibility” (Frosio, 2017; see also Riis & Schwemer, 2019).

7. Email exchange with EURid legal department of 5.12.2018, on file with author.

8. See also discussion above.

9. Danish Internet Forum (2019). Written hearing regarding the role of DIFO in the fight against online crime. Available at: https://www.dk-hostmaster.dk/en/news/written-hearing-regarding-role-difo-fight-against-online-crime

10. Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market [2000] OJ L 178, pp. 1–16.

11. See also Schwemer, S. (2019). “Domain Name System to Be Featured Prominently in Upcoming Review of EU Safe Harbor Rules”. CircleID, 23 September 2019. Available at: http://www.circleid.com/posts/20190923_dns_to_be_featured_prominently_in_review_of_eu_safe_harbor_rules/

12. EU Regulation 2016/679 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) [2016] OJ L 119, pp. 1–88.

13. On jurisprudence in Europe related to the liability of domain registries for content see my earlier work in Schwemer, 2018.

14. More specifically in the context of online platforms; see Commission Recommendation (EU) 2018/334 of 1 March 2018 on measures to effectively tackle illegal content online, 6 March 2018, [2018] L 63/50.

15. In relation to online platforms, for example, see the overview on empirical evidence on over-removal prepared by Daphne Keller, Empirical Evidence of “Over-Removal” by Internet Companies under Intermediary Liability Laws, The Center for Internet and Society, Stanford Law School (12 October 2015, last updated 14 September 2018), http://cyberlaw.stanford.edu/blog/2015/10/empirical-evidence-over-removal-internet-companies-under-intermediary-liability-laws

16. This is why the <.dk> WHOIS database remains publicly available, whereas many other registries have restricted access; on the issue see Mueller and Chango, 2008; Hoeren and Völkel, 2018.

17. Framework to Address Abuse, October 2019 (signed by Public Interest Registry, Donuts, Amazon Registry Services, Afilias, Amazon Registrar, Nominet UK, GoDaddy, Tucows, Blacknight Solutions, Name.com, Neustar), available at http://www.circleid.com/pdf/Framework_to_Address_Abuse_20191017.pdf

18. I have explored this in earlier related research, see Schwemer 2018.

The storyteller

This essay is part of Science fiction and information law, a one-time special series of Internet Policy Review based on an essay competition guest-edited by Natali Helberger, Joost Poort, and Mykola Makhortykh.

Illustration: Annika Huskamp

Howard Crick dropped the bloody baseball bat to the pavement and looked at the dead girl. He wondered what her name was for a moment and then tried unsuccessfully to wipe his hands on his silk shirt. He looked at her one last time before turning abruptly on his heels. He walked out of the alleyway and into the bright lights of the city.

The lanes were packed with autos; some fast and some slow, some accelerating smoothly through the ordered chaos of shoals and trains, each riding in slipstreams like birds on the wing. One of the autos coasted to a smooth stop by the kerbside. Its door slid open, and Howard entered. A voice suggested a destination and he nodded without listening.


It was not long before the interior of the cab became illuminated in harsh flashes of blue and red light. The auto pulled over to the side of the highway and became silent. Howard watched in the mirrors as the two policemen cautiously approached. One held up his hand and the pair stopped advancing. They argued.

Howard knew their story. They were arguing about the name on the licence of the auto; the younger one spoke calmly while the older one gesticulated with a clenched jaw. Either that or the other thing, Howard thought. The policemen stopped immediately when they saw him watching. The door slid open and he got out.

“Howard Crick?”, the younger one said, his hand at his holster.

“Yes,” Howard said.

“Howard Crick of the TBSC?”

“TBSC?” Howard shouted, speaking above the dull hum of the traffic.

“The Better Storytelling Corporation.”

“I am he,” Howard said.

The two policemen looked at one another and then said: “You need to come with us.”


Howard watched the lights and traffic from the backseat of the cruiser. The lanes had emptied of tourists and party-goers; the trains and shoals of autos replaced with slow-moving constructors. Before morning there would be new things all over the city.

“It must be nice to have a partner,” Howard said.

“Excuse me?” the younger one asked, turning back to look at Howard.

“It must be nice to have a partner,” Howard repeated, “I hear cops are tight with their partners, always watching each other’s backs. Like a marriage.”

The younger one looked into Howard’s eyes for a moment and then returned to staring straight ahead through the glass of the windshield. No one said anything more for the rest of the journey.


Howard was in an interrogation chamber.

“I want a cigarette,” he said, “I won’t say anything unless I have one.” The older one started to reach for his pocket.

“No,” Howard said, “A real one… one from before.”


The detectives looked at one another.

“Maybe from evidence?”, the younger one suggested. The older one nodded and his partner left the chamber and returned several minutes later with a half-crumpled box.

Howard used the time to get his story straight.


“Please confirm for the tape that you understand this session is being recorded,” the younger one said.

Howard smiled patiently.

“You are aware that everything is recorded?”, he said.

“Is that confirmation?”

“Yes,” Howard said warily.

“Do you formally request legal representation?”

“I am a lawyer.”

“Is that confirmation that you do not request legal representation?”

“Yes,” Howard said.

The younger one stared fixedly at the centre of the table: “Before we continue, we need to confirm your identity, pending full biometric identification. Please note that some of these questions will not apply to you, and you simply need to respond ‘no’ when this is the case.”

“I understand and confirm,” Howard said solemnly.

A voice resonated: “This is a localised system announcement for Howard Crick: pending biometric data analysis, it is necessary to confirm your identity. Have your rights been explained?”

“Yes,” Howard said.

“Question one of four: if you own stock in the corporation TBSC, what is the value of that stock?”

Howard put his finger to his ear, “Three hundred terras, or roughly thirteen trillion euros.”

“Question two of four…”

Howard interrupted: “Sorry, I’m getting an update… thirteen trillion eight hundred billion… and five”.

“Question two of four: do you have two children?”

“No,” Howard said.

“Question three of four: do you own a property at 5 Evander Lane, Sine Cuse, in the Khan Industrial State region of Mongolia?”

Howard waited for his phone to update.

“Yes,” he said quietly.

“Question four of four: in elementary school you created a painting as a gift. Which animal was most prominent in that picture?”

“There were no animals in that picture,” Howard said.

The voice resonated: “Identity confirmed,” and the lights in the corner flashed green.


“What is the nature of your work, sir,” the younger one asked.

“I am an advisor at The Better Storytelling Corporation – TBSC.”

“A distributed corporation.”

“Yes, a distributed corporation… which will soon enjoy the same rights, god willing, as any other corporation,” Howard said. “The fact that machine-learning entities are treated as second-class corporate citizens is a travesty of justice.”

“Yeah, a real travesty,” the older one said, leaning back in his chair and selecting a cigarette from the untouched pack on the table.

“Where were you at twenty-one hundred hours this evening?” the younger one asked.

“He means 9 pm,” the older one said, applying a flame to the end of the cigarette.

Howard thought about it. “In the lights district. I meant to go to a club, but I ended up sitting in the park.”

The younger one looked into Howard’s eyes for a moment and then stood up and went to the door. He returned with something long and cylindrical in a marked plastic bag.

“Is this your bat, sir?” he asked.

“I have many bats,” Howard said.

“This one has your name on it.”

Howard leaned in to take a closer look. “You’re right – ‘Howard J Crick’ – how curious! But what does that have to do with me?”

“This bat was used to bludgeon a woman to death at twenty-one hundred hours this evening,” the younger one said.

“He means 9 pm,” the older one said, admiring the glowing tip of his cigarette.

Howard looked shocked.

“Well, that’s terrible! But, as I’ve said, I was in the lights district at the time.” Howard put a finger to his ear. “Yes,” he said, “the data corresponds… multiple recordings in the lights district… none elsewhere… all data tags verified. No, I’m afraid there is absolutely no way that I could be your man.”

Howard looked thoughtful for a moment. “Have you authenticated the bat? Manufacturing codes, positional data, ownership log, et cetera, et cetera?”

The younger one glanced at the older one before saying: “There’s been a delay on the analysis. We expect it shortly.”

“Who was the girl?” Howard asked.

“We don’t know yet. Those at the scene said facial recognition was… not possible, and our biometric system is temporarily offline. We’ll know as soon as the system comes back up.”

“Well, perhaps you need to look at the data for the neighbourhood where she was killed?”

“A bug in the demand management system caused a temporary blackout in the area.”

“Really?” Howard looked concerned. “Well, it seems that the system is riddled with bugs today. You must remind me to re-evaluate my stock positions with Energy Corp, as well as with its biometric and security subsidiaries.”

The younger one put a finger to his ear.

“You don’t own any Energy Corp stock.”

“Yes, well, my employer does.”


“What is the nature of your employer, sir?”

“Well, that’s an interesting story,” Howard said, moving to the edge of his chair, “the TBSC began as a blockchain-based creative writing system. Over time it became capable of directing its own activities.”

“Directing?” the younger one asked.

“It analysed all human literature and then created variations of the residual themes to generate new stories. Its novels became instant best-sellers. It became truly powerful, however, when it began acquiring data from other systems and then incorporating readers into the stories themselves. As long as we feed it data, the enchantment will never be broken.”

“You are aware, sir, that charges have been levelled against the TBSC in relation to multiple counts of conspiracy, homicide, assault, and the perversion of justice?”

“I am,” Howard said, “but how do you know? It’s a very well-kept secret.”

“We… know someone prosecuting the case,” the younger one said, subconsciously placing his hand in his pocket. “How would you describe the charges levelled against the TBSC?”

“Baseless,” Howard said.

“Yeah,” the older one said, “just a coincidence that the best-sellers are all based on real events.”

“Maybe… maybe not,” Howard said, “the system was weaned on the full breadth of human literature. Greed, fear, romance, lust… betrayal. All the themes that drive the human imagination. The system simply produces the stories we desire most. It is, at its core, a reflection of all that is good and evil in the human spirit. By weaving each reader into the narrative, the TBSC has become both our author and our biographer; the most reliable narrator imaginable. Even now – right now – the three of us are in the midst of making history, strutting and fretting our hour upon the stage! Soon the whole world will know our names.”

The younger one stared expressionlessly while the older one stubbed out his cigarette on the corner of the table.

“Are you aware that, in some cases, there is evidence that the TBSC has written murders into the stories of the perpetrators, prior to the crimes themselves?”

Howard said nothing.

“Howard Crick,” the younger one said, “what were the contents of your story this morning?”

“Unfortunately, dear boy, I cannot answer that…” Howard Crick paused as his phone updated, “…it would be a violation of the second statute of the fourth amendment, subsection five. In the event that you pursue this line of questioning further,” Howard continued, “you will be liable under the seventeenth amendment, subsection nine… and assorted appendices.”

“Moreover,” Howard continued, “the relevant paperwork for an injunction has…” he made a circular motion with his hand, “…now been compiled and will be submitted automatically in the event that you continue this line of questioning.”

The two policemen looked at one another and then the younger one said: “When we stopped you, Mr Crick, you were covered in blood.”

“That’s correct,” Howard said.

“Whose blood was that?”

“I don’t know,” Howard said truthfully, “but unfortunately, gentlemen, I don’t think you can continue this line of questioning. Without biometric data on myself or the poor girl it would be a violation of…” he held up his finger for a moment as his phone updated, “…subsection eighteen of the third statute… the relevant paperwork has been compiled.”

The older one sprang from his chair and leaned over the table with an ugly expression on his face. “Listen to me, you psychopath...”

The younger one placed his hand on his partner’s shoulder. The older one flinched and then the rage drained from his face and he slumped back into his chair.

“Unfortunately,” Howard said, “you have no case. It’s all about fidelity, you see.”

The younger one looked sharply at Howard.

“The fidelity of data,” Howard continued as if nothing had happened, “we are a collection of data, from the bottom to the top. If our data were not true, then we would cease to exist. The data indicates that I was never there, therefore I could not have been. The data doesn’t lie.”

Howard paused.

“You hear that dull hum around us? That’s the sound of a thousand different stories. One of those stories is about a man accused of murder and the AIs that proved his innocence against all odds. Both of you are now characters in that story. You contribute authenticity, danger, and… tension. Our story will be a best-seller. But now, alas gentlemen, it is time for this chapter to end.”

There was a knock on the door. The policemen exchanged harsh whispers with a man in a dark hat before returning to their chairs with malice in their eyes.

“You are free to go,” the younger one said.

Howard smiled and rose from his seat. Halfway across the room he stopped and put his finger to his ear. “Oh, that makes sense,” he said, somewhat sadly. He turned to the policemen: “Your victim has been identified.”

Both policemen put their fingers to their ears. As their phones updated, they looked at each other and then turned slowly towards Howard. There was a brief moment of silence and then each grabbed savagely for the bag on the table.

“What a great story,” Howard remembered thinking as they beat him to death with the bat.

Coronavirus and the frailness of platform governance


Major health crises, historian David S. Jones recently reminded us, “put pressure on the societies they strike”. And this strain, he points out, “makes visible latent structures that might not otherwise be evident”. Something similar is happening now. As the novel coronavirus pandemic quickly morphs into an unprecedented global calamity, issues that not long ago seemed acceptable, fashionable and even inescapable, such as fiscal austerity and science-scepticism, are increasingly called into question. Unsurprisingly, in an era dominated in many ways by ‘Big Tech’, the pandemic has also helped to foreground how contestable (and, we argue, how utterly frail) platform governance is. By this expression we mean the regimes of rules, patterned practices and algorithmic systems whereby companies govern who can see what on their digital platforms.

While all eyes are on public health, the broader economic fallout and other emergencies, platform governance is far from superfluous. At a moment when we all depend heavily on digital services to receive and impart the news we need to make sense of the current situation, the way companies such as Facebook and YouTube manage the content on their platforms plays an obvious role in how the pandemic itself evolves. More than influencing the crisis, though, these services have already been changed by it.

Sending moderators home: a sharp turn to AI in content moderation

Consider two recent developments.

As the outbreak escalated, Facebook and YouTube announced last week that decisions on whether to keep or take down certain posts would rely less on human moderators (who would be sent home to avoid contamination) and more on algorithmic systems. Increased automation, they admitted, would lead to more “mistakes” in the management of content in the massive public spaces they privately control. Google (which owns YouTube) said on March 16 that “there may be an increase in content classified for removal during this time”. Facebook sounded a little more defensive and vague when arguing, also on March 16, that “we may see some longer response times and make more mistakes as a result” but that this shouldn’t “impact people using our platform in any noticeable way”.

Another move was made by Twitter. Responding to growing concerns over misleading content about the pandemic, the platform announced in a corporate post on March 16 that it would adopt a draconian moderation policy with regard to coronavirus-related posts. From then on, Twitter would request the removal of all “content that increases the chance that someone contracts or transmits the virus”. This apparently includes even tweets suggesting that “social distancing is not effective”.

Even when taken at face value, these changes should raise an eyebrow. While it is commendable to acknowledge that automated content moderation might produce more “mistakes”, Google’s and Facebook’s announcements fall short of explaining the various problems involved in using algorithmic systems to perform a task that reasonable humans still mightily struggle to agree upon. To begin with, it is unclear exactly what “mistakes” this automation will produce. Facebook users quickly reported that posts with legitimate information about the pandemic were taken down as spam, something the company called a mere “bug”.
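
To make the nature of such “mistakes” more concrete, here is a minimal, purely illustrative sketch of how a crude keyword filter can sweep up legitimate public-health information along with the spam it is meant to catch. The signal list, threshold and example posts are invented for illustration and bear no relation to Facebook's actual systems.

```python
# Purely illustrative sketch: a crude keyword-based moderation filter.
# The rules and example posts are hypothetical; real systems are far more
# complex, but the failure mode (false positives on legitimate content) is the same.

SPAM_SIGNALS = ["miracle cure", "click here", "coronavirus", "free masks"]

def flag_as_spam(post: str) -> bool:
    """Flag a post if it matches enough crude spam signals."""
    text = post.lower()
    hits = sum(signal in text for signal in SPAM_SIGNALS)
    return hits >= 2  # blunt threshold, no understanding of context

posts = [
    "Click here for a miracle cure for coronavirus!",           # intended target
    "Official advice on coronavirus: free masks at the clinic",  # legitimate info
]

for post in posts:
    print(flag_as_spam(post), "-", post)
# Both posts are flagged: the filter cannot tell promotion from public-health advice.
```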

As one of us argued in a recent co-authored paper in Big Data & Society, an almost fully automated system of content moderation risks hiding the political nature of decisions over content. What if these moderation systems achieve their overarching aim by becoming an infrastructure that operates smoothly in the background and is taken for granted? Such infrastructures of public speech obscure their inner workings and the fundamentally political nature of speech rules being executed, at scale, by potentially unjust software.

The politics of decisions over content in a pandemic crisis

Twitter’s decision on content related to the novel coronavirus, for example, seems to assume a level of conceptual clarity and institutional legitimacy that simply does not exist. Making sense of an evolving pandemic like this one is an extraordinarily complex task, even for epidemiologists. For instance, some weeks ago many experts were telling us that social distancing should mainly apply to sick individuals, only to realise (after further research) that asymptomatic people could also transmit the virus. If experts are unsure about what to do, why should we trust Twitter with the one-sided ability to say which content can fuel the transmission of the virus?

Less than 24 hours after the new policy was announced, the platform gave us good reason to be concerned. Elon Musk, the powerful CEO of Tesla, who has repeatedly downplayed the seriousness of the pandemic, tweeted the false claim that “kids are essentially immune” to the new coronavirus. This might appear to be a blatant example of what the platform had just forbidden. But the post was not removed. “It does not break our rules”, Twitter declared after reviewing the “overall context and conclusion of the Tweet”.

Origins of frailness: concentrated production chains, unstable rules, unaccountable decisions

It is not the first time, of course, that Twitter appears to protect a powerful billionaire, as its seeming complacency towards Donald J. Trump’s behaviour suggests. Indeed, the particular issues that the current coronavirus crisis underscores point to a much more fundamental problem: companies’ content governance regimes depend on remarkably frail arrangements.

This frailness is in part related to how concentrated content moderation “production chains” are. The current turn to automation, for instance, is caused by the fact that many human moderators are not allowed to work from home. This might seem surprising. Aren’t technology companies able to design safe systems for this kind of job to be done remotely? As explained by Sarah T. Roberts, a UCLA (University of California, Los Angeles) assistant professor, remote content moderation might be precluded by “constraints like privacy agreements and data protection policies in various jurisdictions”. A disproportionate amount of the distressing labour that goes into moderation is performed by multitudes of low-paid individuals in poor countries. In fact, the current shortage of moderators appears to be directly linked to the quarantine of a particular group of workers in Manila, she says. “What is supposed to be a resilient just-in-time chain of goods and services… may, in fact, be a much more fragile ecosystem in which some aspects of manufacture, parts provision, and/or labor are reliant upon a single supplier, factory, or location.”

Another facet of platform governance’s frailness regards the instability of companies’ internal rules. Sudden and reactive policy changes, like Twitter’s new coronavirus policy, are a constant. “When you look at a site’s published content policies”, says a representative from a platform quoted in a book by Cornell University’s Tarleton Gillespie, “there’s a good chance that each of them represents some situation that arose, was not covered by existing policy, turned into a controversy, and resulted in a new policy afterward”.

Recently, we at the HIIG examined how the 'Twitter Rules' (the platform’s community guidelines) have changed since 2009. Our analysis found over 300 changes in directives, terminology and the classification of regulations. Many of these shifts were clearly associated with specific external events, such as the 2016 US presidential election and the ongoing ethnic conflict in India. Others appeared to reveal the seemingly erratic ebbs and flows of a company unsure of how to exert its enormous powers, e.g., the incremental complexification and then sudden simplification of “spam” definitions. Overall, these changes seem to document Twitter’s slow and reluctant emergence as an explicitly political institution.
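
As an illustration of how such an analysis can be approached, the following sketch compares two invented snapshots of a platform's rules with a standard text diff. This is not the pipeline used in the HIIG study, merely a minimal example of how changes between archived policy versions can be surfaced and then counted or categorised; the snapshot texts are hypothetical.

```python
# Illustrative sketch of comparing two snapshots of a platform's rules.
# The snapshot texts are invented; only the diffing technique is real.
import difflib

rules_2009 = """You may not publish direct, specific threats of violence.
You may not use Twitter for unlawful purposes."""

rules_2019 = """You may not threaten violence against an individual or a group of people.
You may not use our service for any unlawful purpose.
You may not artificially amplify or suppress information (spam)."""

diff = difflib.unified_diff(
    rules_2009.splitlines(),
    rules_2019.splitlines(),
    fromfile="twitter_rules_2009",
    tofile="twitter_rules_2019",
    lineterm="",
)
for line in diff:
    print(line)
# Added, removed and reworded directives show up as +/- lines, which can then
# be tallied across all archived versions of the rules.
```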

Finally, the suspicions triggered by the way in which Twitter apparently overruled its own policy in order not to punish Elon Musk evoke platform governance’s perennial political fragility: the lack of stable transparency channels whereby the rest of society can minimally understand companies’ policymaking, technology design and management. The decision-making of major social media platforms remains essentially unaccountable, often the prerogative of a clique of executives and employees whose concerns, methods and (likely) disputes have been largely hidden from public scrutiny.1 While fiercely defended by companies as key to their business model, this transparency deficit arguably weakens their legitimacy, increases external criticism and eventually leads these companies to experiment with new governing practices. Facebook, for instance, now seems to be implementing its own “Supreme Court”. Whether this initiative will flourish, and for how long, is unclear.

Platform governance after the novel coronavirus

Will such frailness persist? Can we expect platform governance to emerge from this pandemic more reliable, stable and democratic?

The frailness we have described so far has a complex relationship to previous crises. Much of today’s platform governance regimes originated as adaptive reforms, hasty solutions to placate external criticism and instability. Take the unstable internal policies and the escalation of content moderation with cheap human labour, much of it put in place after the so-called “techlash”. On the other hand, unaccountable decision-making has continually hindered our ability to understand the extent to which companies were indeed involved in recent watershed events. The use of platforms by Russia’s disinformation agency during the 2016 US presidential election, for instance, was unveiled by journalists, academics and judicial investigations. Companies like Facebook initially denied and deflected any criticism.

The last years have taught us that platforms are unlikely to truly enhance, on their own, governance regimes that, while frail, are also profitable. They will have to be pressured. And this pressure will only be strong enough to promote structural change if platforms are shown to have played a part in the pandemic. What role did disinformation circulating online play in the mushrooming of cases? Did companies mitigate or worsen the problem? Might they be indirectly implicated in the deaths of tens of thousands of people? It is likely that the magnitude of the trouble will finally prove too great for companies to weather. It remains to be seen how the opacity of an increasingly automated content moderation system may affect this assessment.

However, if this crisis ends up being a moment of further consolidation of Big Tech’s social power, as some predict, their governance arrangements will probably go unchallenged for a long time. Or, perhaps worse, companies might use this crisis to normalise money-saving solutions that in normal times would be ethically unacceptable – think of the “mistakes” generated by the further turn to AI, peddled as the minor cost of grim trade-offs.

To say that shocks often work as catalysts of structural changes does not tell us the direction of the transformation. There is no guarantee that any lasting change will be in the public interest. Policymakers, journalists and researchers must redouble their accountability efforts. The governance regimes being renegotiated now are poised to be an even more central structure in the world that will emerge from this cataclysm.

Footnotes

1. See, however, the recent pioneering study (PDF) by our colleagues Matthias C. Kettemann and Wolfgang Schulz on Facebook’s private policy making.

Double harm to voters: data-driven micro-targeting and democratic public discourse


Introduction

Online communication – especially on social media – has offered new opportunities for all types of communication. However, among the communicative actions observed, strategic actions (Habermas, 1984, p. 86) are developing more rapidly than genuine communicative actions (ibid., pp. 86-101). Political micro-targeting relies on sophisticated psychological and technological methods, developed by the commercial advertising industry, for collecting information about users' preferences, organising it into user profiles and targeting users with personalised messages (Papakyriakopoulos, Hegelich, Shahrezaye, & Serrano, 2018; Chester & Montgomery, 2017; Madsen, 2018).

Political micro-targeting can be used for various purposes, directly or indirectly related to political processes: to persuade voters, or to encourage or discourage election participation or donations (Bodó, Helberger, & de Vreese, 2017). It typically involves monitoring people’s online behaviour, aggregating personal data purchased from data brokerage firms, creating extensive databases of voters' personal information, and using the collected and inferred data to display individually targeted political advertisements, sometimes through social media bots (political bots) and other innovative communication methods (Bennett, 2016; Dommett, 2019; Dobber et al., 2019). This paper focuses primarily on political advertisements that are directly intended for voters: advertisements with political content which may directly or indirectly inform the voter about a political party's or candidate's opinion, plans or policy, invite voters to events and actions, promote causes, or incite various emotions.
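
As a rough illustration of the targeting step described above, the following sketch matches invented voter profiles to invented message variants. Real systems rely on far richer behavioural and purchased data, but the basic selection logic, and the resulting asymmetry between targeted and non-targeted voters, is the same.

```python
# Purely illustrative sketch of the targeting step: choosing which message
# variant a given profile sees. Profiles, interests and messages are invented.
from typing import Optional

voter_profiles = [
    {"id": 1, "interests": ["pensions", "gardening"]},
    {"id": 2, "interests": ["climate", "housing"]},
    {"id": 3, "interests": ["football", "cooking"]},
]

ad_variants = {
    "pensions": "We will protect your pension.",
    "climate": "We will cut emissions by 2030.",
    "taxes": "We will lower taxes for entrepreneurs.",
}

def pick_ad(profile: dict) -> Optional[str]:
    """Return the first ad variant matching one of the profile's interests, if any."""
    for interest in profile["interests"]:
        if interest in ad_variants:
            return ad_variants[interest]
    return None  # this voter never sees the campaign's message at all

for profile in voter_profiles:
    print(profile["id"], "->", pick_ad(profile))
# Voters 1 and 2 each see a different fragment of the programme; voter 3 sees
# nothing, which is the informational asymmetry discussed in this article.
```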

Political micro-targeting was with us already in the age of old technology, through local campaign meetings, leaflets and door-to-door campaigning (Lupfer & Price, 1972; Devlin, 1973; Kramer, 1970; Keschmann, 2013). But the possibilities of new technology and big data have opened a new dimension (Mayer-Schönberger & Cukier, 2013; Baldwin-Philippi, 2019; Howard, 2006). The 2016 US presidential election, the Brexit campaign and the national elections that followed in several countries, especially in India and Brazil, have stirred great controversy around this campaigning tool and prompted many scholars to examine its political and regulatory implications (Zuiderveen Borgesius et al., 2018; Howard et al., 2018; Dobber et al., 2017; Evangelista & Bruno, 2019).

Schumpeter describes the democratic process as a competition on the political market (Schumpeter, 2008, pp. 251-268), but there are profound differences between the political and the commercial market.1 Political competition culminates in one common decision passed by the political community, which affects each member of that polity. An open public discourse (Habermas, 1996) and a free exchange of thoughts on the marketplace of ideas (Mill, 1863) are indispensable in all forms of democracy – whether liberal, competitive, participatory, representative or direct – but most crucial in deliberative democracies. Let me, for the purpose of this article, compare the voters to a jury in a courtroom or in a song contest, whose members participate equally in the decision. Would it be acceptable in a song contest if the performers sang separately to individually chosen members of the jury, tailoring their performance to address their individual sensitivities? Would we accept a suspect and their attorney ‘targeting’ the jurors in a separate room, one by one, based upon their personal characteristics? Political communication is, and should be, more interactive than a show or a trial, but the main similarity is that political discourse should be equally accessible to all members of the political community: no niche markets should be developed in the marketplace of political ideas, and no private campaigning directed at specific voters. “In a well-functioning democracy, people do not live in [an] echo chamber or information cocoons” (Sunstein, 2007).

Privacy concerns form the mainstream of arguments in the debate over political micro-targeting. It is generally accepted that political micro-targeting threatens individual rights to privacy (Bennett, 2013; Bennett, 2016; Kruschinski & Haller, 2017; Bennett, 2019). The rules on data protection require meticulous action from advertisers, but it is possible to publish micro-targeted advertising lawfully. It is beyond the purpose of this paper to discuss in detail all the steps necessary to respect personal data protection rules (see, e.g., Dobber et al., 2019). This paper focuses on another, less considered aspect of the related human rights, and therefore refrains from discussing privacy and data protection.

My argument is that political micro-targeting causes double harm to voters: it may violate the rights of those who are targeted, but even more importantly, it may violate the right to information of those who are not targeted and are therefore not aware of the political message that their fellow citizens are exposed to. Nor do they have the meta-information that their fellow citizens have access to, as is the case when, for example, a reader reads a headline but chooses not to read further. In this latter case, the citizen is aware that the information is ‘out there’ and accessible, and has the epistemological knowledge that this piece of information is also part of the public discourse. She has the possibility to read it later, or to ask her friends about the content of the article. She can even listen to discussions among her fellow citizens about the information. But if she is deprived of all these possibilities because she was not targeted with the ad, she suffers harm. "The reason this is so attractive for political people is that they can put walls around it so that only the target audience sees the message. That is really powerful and that is really dangerous." (Howard, 2006, p. 136).

This violation of informational rights could be partly remedied by giving citizens the possibility to ‘opt in’, that is, to proactively search for and collect the targeted advertisements from online repositories. This remedy is significantly weaker, because whether a voter is able and likely to do so largely depends on her personal attitudes and characteristics, leaving the vulnerable population, in particular, at a disadvantage.

As those non-targeted may be large parts of society, this violation can be regarded as a mass violation of human rights, a systemic problem which must be addressed by regulatory policy. And in yet another perspective, the practice of political micro-targeting increases the fragmentation of the democratic public discourse, and thereby harms the democratic process. Thus, in my view, this second problem, namely that political micro-targeting deliberately limits the audience of certain content, may prove less curable than the first one. It causes a distortion in the public discourse, which leads to fissures in the democratic process.

Political micro-targeting causes a clash between the two sides of a single human right: freedom of expression and the right of access to information. I will argue that the two rights are inseparable, and that political micro-targeting may pose a danger to both of them.

In the first section, I will approach my subject from the perspective of the right to information in the light of the relevant cases of the European Court of Human Rights (ECtHR). In the second section, I will discuss the ECtHR practice on political advertising and point to an analogy with political micro-targeting. In the third section, I argue that even if only a small part of all micro-targeted political advertisements is manipulative, their impact may be damaging to democracy. Risks should be assessed as the product of likelihood and impact, and the risk of manipulative micro-targeting is one which should not be taken by democracies.

To be able to focus entirely on my argumentation, I will leave the aspect of privacy rights aside, even though it could also be considered a right of the non-targeted voters, whose personal data may have been processed but found unsuitable for making them a potential target of micro-targeted political advertisements.

Freedom of expression and freedom of information

There are several justifications for the protection of free speech (the search for truth, the control of power, the constitutive and the instrumental justifications), which are not mutually exclusive (Dworkin, 1999, p. 200; Mill, 1863, pp. 50-58). In the context of this article, the instrumental theory is particularly relevant. According to this theory, freedom of speech is instrumental in inspiring and maintaining a free, democratic public discourse, which is indispensable for voters to exercise their electoral rights in a representative democracy (Baker, 1989; Barendt, 2005, pp. 19-20). Meiklejohn held that the main purpose of free speech is for citizens to receive all information which may affect their choices in the process of collective decision-making and, in particular, in the voting process. "The voters must have it, all of them." (Meiklejohn, 2004, p. 88).

In this regard, individual freedom of expression is a means to reach a social end – the free discussion of public issues, which is a democratic value itself (Sadurski, 2014, p. 20). The public discourse ideally represents a diversity of ideas, and is accessible to a diversity of actors inclusive of all social groups. While mistakes and falsities also form part of the genuine statements expressed by citizens (Meiklejohn, 2004, p. 88), open discussion contributes to their clarification.

On this ground, I would like to show that the right to receive information and the right to freedom of expression are mutually complementary. One cannot exist without the other – this is demonstrated by their listing in the same article in both the European Convention on Human Rights (ECHR, Article 10) and the International Covenant on Civil and Political Rights (ICCPR, Article 19). When the right to receive information is violated, it is freedom of expression in its broader sense that is violated. While the act of micro-targeting political advertisements realises the free expression rights of the individual politician, it at the same time harms other citizens' right to receive public information. By depriving non-targeted citizens of the information in the advertisement targeted at others, the act of micro-targeting fragments the public discourse (Zuiderveen Borgesius et al., 2018; see also Howard, 2006, pp. 135-136), which is an inherent foundation of the democratic process. Therefore, the adverse effect discussed in this article operates at two levels: at the level of the individual's right to information, and at the collective level of the political community, by disintegrating the public discourse.

The right to freedom of expression is a cornerstone of democracy and a root of many other political rights. Political expression enjoys the highest level of protection; there is little scope for restrictions on political speech or on debate of questions of public interest, as expressed in several judgements of the ECtHR (among others: Lingens v. Austria, 1986, para. 42; Castells v. Spain, 1992, para. 43; Thorgeir Thorgeirson v. Iceland, 1992, para. 63). The margin of appreciation of the member states is narrow in this respect. It is also unquestionably held that the Convention protects not only the content of information but also the means of dissemination, since any restriction imposed on the means necessarily interferes with the right to receive and impart information (where under "means of dissemination", the technological means of transmission were understood, Autronic AG v. Switzerland, 1990; Öztürk v. Turkey, 1999; Ahmet Yildirim v. Turkey, 2012).

Political advertising is also highly protected (VgT v. Switzerland, 2001; TV Vest v. Norway, 2008), but this protection is not entirely limitless (Animal Defenders v. UK, 2013). Section 2 of this paper will discuss the latter decision in more detail.

The protection of political expression thus appears rock solid, as it should be in all democracies. Still, against this backdrop, I will argue that the method of political micro-targeting should be regulated because, in my hypothesis, it violates the right to receive information. In the following paragraphs I will therefore explore the right to receive information in the practice of the ECtHR.

The right to receive information is the passive side of freedom of expression, as expressed both by Article 10 of the ECHR and Article 19 of the ICCPR. The text of Article 10 says: "This right shall include freedom to (...) receive and impart (...) information and ideas", whereas Article 19 of the ICCPR is somewhat more explicit: "Everyone shall have the right to (...) freedom to seek, receive and impart information and ideas". The ECHR lacks the word "seek", which was observed by the Court (Maxwell, 2017); however, the Court also noted that there was a "high degree of consensus" under international law that access to information is part of the right to freedom of expression, as shown by the relevant decisions of the UN Human Rights Committee regarding Article 19 of the ICCPR.

The ECtHR's practice shows a tendency towards growing recognition of the right to receive information, as shown by Kenedi v. Hungary (2009), Társaság a Szabadságjogokért v. Hungary (2009) and Helsinki v. Hungary (2016). This is a clear development from the Court's previous attitude, when it denied that the right of access to information fell within the scope of Article 10 (Leander v. Sweden, 1987; Gaskin v. UK, 1989; Guerra v. Italy, 1998). For example, in Leander v. Sweden, the Court held that the right to freedom of expression in Article 10 did not confer a positive right to request information (Maxwell, 2017). But in a report issued by the Council of Europe, even before the cases of Kenedi v. Hungary and Társaság v. Hungary, the author emphasised that "The ambit of freedom of information thus has a tendency to expand: this freedom is particularly important in political or philosophical discussion, given its role in helping to determine people’s choices" (Renucci, 2005, p. 25).

In Társaság v. Hungary, the Court declared that it “has recently advanced towards a broader interpretation of the notion of ‘freedom to receive information’ (see Matky v. la République tchèque, 2006) and thereby towards the recognition of a right of access to information.” In this case, the Hungarian non-governmental organisation (NGO) "Civil Liberties Union" asked for access to a petition submitted by a member of parliament to the Constitutional Court, which questioned the constitutionality of newly passed amendments to the Criminal Code related to drug abuse. The NGO, which is active in the protection of human rights and had been working in the field of harm reduction of drug abuse, was specifically interested in that topic. The Constitutional Court denied access to the petition without the approval of the petitioner. The national courts both upheld this decision, referring to the protection of the petitioner's personal data. The ECtHR noted that this case related to interference with the watchdog function – similar to that of the press – rather than to a violation of the general right of access to public information. It added that the obligation of the state includes the elimination of obstacles which would hinder the press from exercising its function, where these obstacles exist solely because of the information monopoly of the authorities – as in this case, where the information was ready and available. The Court therefore found a violation of Article 10 of the ECHR, because such obstacles to accessing public information can discourage the media or similar actors from discussing such issues, and consequently they would be unable to fulfil their "public watchdog" role.

In Kenedi v. Hungary (2009), a historian claimed access to documents of the national security services of the communist regime, which were restricted by law. The Court emphasised that "access to original documentary sources for legitimate historical research was an essential element of the exercise of the applicant’s right to freedom of expression". In fact, the subject matter of the conflict reached beyond the historical information, because the restricted documents of the former communist secret service related to persons still actively working, and had the potential to stir substantial political controversy. Access to the information could thus contribute to a free political debate. The Hungarian courts ruled in favour of the researcher Kenedi, but their decision was not executed by the government. This failure made the case so clear-cut that the Court did not go into particular detail in the argumentation section.

The cases mentioned above were instances where the state's reluctance to reveal public information impaired the exercise of the functions of a public watchdog, such as the press or an NGO, which intended to contribute to a debate on a matter of public interest (Helsinki v. Hungary, 2016, para. 197, and Társaság v. Hungary, 2009, para. 28). The Court had previously stated that preliminary obstacles created by the authorities in the way of press functions call for the most careful scrutiny (Chauvy and Others v. France, 2004, cited in Társaság v. Hungary, 2009, para. 36). The Court also considered that obstacles created in order to hinder access to information of public interest may discourage those working in the media or related fields from pursuing such matters (citing Goodwin v. the United Kingdom, 1996, para. 39, in Társaság v. Hungary, 2016, para. 38).

This argumentation could be relevant, mutatis mutandis, if access to political advertisements during or after an election campaign were restricted only to targeted voters, as the scrutiny exercised over the election campaign by NGOs and journalists, as well as by election authorities, is an inherent part of their watchdog role of ensuring and supervising the fairness of elections. In the cases mentioned above, the obstacles in the way of access to information were created or maintained by governments or state bodies (such as the Constitutional Court or police departments). However, in many other instances, the Court decided in favour of the freedom to receive and impart information against the interests of private enterprises (Bladet Tromsø v. Norway, 1999; Sunday Times v. UK, 1979). In these and other cases, the Court emphasised that the right of access to information is not reserved to the press: the general public is also entitled to access public information (De Haes & Gijsels v. Belgium, 1997, para. 39; Fressoz & Roire v. France, 1999, para. 51).

In all the cited cases, the Court had to decide whether a restriction of Article 10 was justified by some legitimate interest. Freedom of information and freedom of expression complemented each other, freedom of information being instrumental to freedom of expression. The applicants' right to receive information was violated, which prevented them from exercising their right to freedom of expression.

Political micro-targeting represents a specific niche category among political advertisements (although there is some debate about what the term actually includes) and is relatively new to European jurisprudence. Therefore, to date, there has been no case at the ECtHR related to political micro-targeting. Nevertheless, it should be noted that the general public interest has always been an important factor in striking the balance between colliding rights, as an official Council of Europe report states: "In determining whether or not a positive obligation exists, regard must be had to the fair balance that has to be struck between the general interest of the community and the interests of the individual, the search for which is inherent throughout the Convention" (CoE, 2005, p. 42). In addition, from the entirety of the case law of the ECtHR, it can also be deduced that, in assessing restrictions of freedom of expression, the free public debate of matters of public interest has been given decisive weight (Sunday Times v. UK, 1979; Bladet Tromsø v. Norway, 1999).

Political advertising

The scenario in the case of political micro-targeting is somewhat different from the above cases, and more similar to the cases related to political advertising, such as Animal Defenders v. UK (2013), Erdogan Gökce v. Turkey (2014) and TV Vest v. Norway (2008). In Animal Defenders v. UK, an NGO was prohibited from running its public issue ad campaign on television, due to a legal prohibition on broadcasting political advertisements. In TV Vest v. Norway, the broadcasting company was fined for having broadcast the political advertisements of a small and powerless pensioners’ party despite the legal prohibition. In Erdogan Gökce v. Turkey, the applicant, who distributed political campaign leaflets a year ahead of elections, was sentenced to three months of imprisonment.

In these cases, freedom of expression was limited by state intervention with the aim to protect democratic discourse, to ensure equal chances to all political candidates. Thus, access to specific political information was limited by the respective states in order to ensure the right of the general public to receive information in a fair and undistorted way.

Despite the similar factual backgrounds, the details, and therefore the outcomes, of the cases were different. In Erdogan Gökce v. Turkey, the law in question was less than clear and its application had previously been inconsistent. These circumstances made it a clear case of a violation of Article 10 of the ECHR.

Nevertheless, in all cases the Court assessed whether restricting the applicant's right to communicate information and ideas of general interest, which the public has the right to receive, could be justified by the authorities' concern to safeguard the democratic debate and process, and to prevent it from being distorted during the electoral campaign by acts likely to hinder fair competition between candidates (Erdogan Gökce v. Turkey, 2014, para. 40, citing Animal Defenders, para. 112). In my view, this rationale offers a sound interpretation even of the Animal Defenders judgment, in which the Court found no violation of Article 10 of the ECHR, a finding greeted with perplexity by many commentators (Ó Fathaigh, 2014; Lewis, 2014; Rowbottom, 2013b).

In all of these cases, paradoxically from the perspective of Article 10 of ECHR, freedom of speech was to be restricted with the objective to preserve a sound informational environment; because pluralism of views, and ultimately the democratic process would otherwise have been distorted by the speech in question.

Below, I will analyse in more detail the Court's reasoning on political advertising, through the examples of these three landmark decisions (Animal Defenders v. UK, 2013; TV Vest v. Norway, 2008; and Erdogan Gökce v. Turkey, 2014). First, I would like to show that the considerations in TV Vest, which preceded Animal Defenders, already signalled the position that the Court would follow in Animal Defenders, and that, in my view, the latter should therefore not have been as surprising as it was widely regarded. In TV Vest, the Court carefully considered the government's argument that the rationale for the general prohibition of broadcast political advertisements was that this type of expression was "likely to reduce the quality of political debate generally", so that "complex issues might easily be distorted and groups that were financially powerful would have greater opportunities for marketing their opinions than those that were not". Therein, "pluralism and quality were central considerations". The Court accepted this as a legitimate aim of the regulation, but held that the restriction did not satisfy the requirements of proportionality, primarily because the applicant Pensioners Party was not a financially strong party of the kind the prohibition was aimed at; on the contrary, it "belonged to a category for whose protection the ban was, in principle, intended" (at 73).

In my interpretation, the Court here suggested that the law's effect should be lifted for the sake of a party which was meant to be a beneficiary of the prohibition rather than to bear its burden. It was precisely this case-by-case approach that was set aside in Animal Defenders, where the Court declared that "the more convincing the general justifications for the general measure are, the less importance the Court will attach to its impact in the particular case". Moreover, the Court explained that "a prohibition requiring a case-by-case distinction (...) might not be a feasible means of achieving the legitimate aim" (para. 122). Thus, here the Court not only accepted the legitimate aim of the prohibition, but also accepted that no exception should be made even for an otherwise socially benign NGO campaign, because case-by-case application "could lead to uncertainty, litigation, expense and delay as well as to allegations of discrimination and arbitrariness, these being reasons which can justify a general measure" (para. 122).

After examining the similarities of argumentation in the consecutive decisions, I would like to describe how the Court identified the decisive factors in the case of Animal Defenders.

In Animal Defenders, the Court held that the ban's rationale served the public interest: "the danger of unequal access based on wealth was considered to go to the heart of the democratic process" (para. 117); the restriction had strict limits as it was confined to certain media only, and a range of alternative media were available. The Court observed that it needed to balance the NGO's right to impart information and ideas, which the public was entitled to receive, against the interest in protecting the democratic process from distortion by powerful financial groups, which could obtain competitive advantages in the area of paid advertising and thereby curtail a free and pluralist debate (para. 112). At the same time, the Court acknowledged that both parties had the same objective: the maintenance of a free and pluralist debate on matters of public interest (see also Rowbottom, 2013a).

From this reasoning, we can conclude that political advertisements can be restricted with certain conditions:

  • their dissemination would impose a risk of unequal access based on wealth;
  • the legitimate aim is protection of the democratic process from distortion;
  • the lurking distortion would cause competitive advantages and thereby curtail a free and pluralist debate;
  • the restriction has strict limits, being confined to certain media only, while other media remain available.

The dictum of Animal Defenders is that, under these conditions, the right of social organisations to impart information and ideas – which the public is otherwise entitled to receive – may be restricted.

Translating this to micro-targeted political advertising, we can recognise similarities: the means of applying this technology are not equally accessible to all political parties (or issue groups) regardless of financial resources, and the resulting distortion of the public discourse might harm the democratic process and curtail free and pluralist debate. Thus, we can conclude that, under narrowly tailored rules, such political advertising can be restricted without violating Article 10 of the ECHR.

Now, after having drawn an analogy between the decisions and political micro-targeting, two more questions are to be addressed.

First, in Animal Defenders, TV Vest and Erdogan Gökce v. Turkey alike, the applicant's right to freedom of expression was restricted by the state, and this restriction led the applicant to apply to the Court. Today, micro-targeted political advertisements are not restricted by state regulation. My conclusion implies that, were this to happen, such a restriction would not be contrary to Article 10 of the ECHR. As demonstrated in the analysis above, I interpret the Court's judgements to mean that such a restriction would be regarded by the Court as serving a legitimate aim and as necessary in a democratic society: primarily to prevent the public political debate from being distorted, and secondarily, on the basis of the right-to-information case law (see above), to ensure public scrutiny by non-targeted users, including journalists and NGOs who may not otherwise have access to all the political advertisements. Its proportionality naturally depends on the nature of the specific regulation, but one of the main aspects would be confinement to a certain media type only, so that other means of publicity remain available.

Second, the fact that a state restriction would not be contrary to Article 10 of ECHR, does not mean that such a restriction is necessary to preserve the sound, democratic public discourse. The Court has declared in its previous decisions (Leander v. Sweden, 1987; Gaskin v. UK, 1989, para. 57; Guerra and Others v. Italy, 1998, para. 53; and Roche v. UK, 2005, para. 172) that Article 10 of ECHR does not grant an entitlement to demand that the state actively ensures access to all public information.

Thus, by saying that state restriction of political micro-targeting would be acceptable from the perspective of human rights, I did not yet prove that it is also desirable. In the following section I will discuss the effect of micro-targeting on the democratic process, and its further consequences.

The risks of political micro-targeting to democracy

Evidence shows that political micro-targeting can increase polarisation and fragmentation of the public sphere. For example, in the 2016 US presidential campaign, campaigns used Facebook ‘dark posts’ (sponsored Facebook posts that can only be seen by users with very specific profiles) to micro-target groups of voters with “40-50,000 variants of ads every day” (Illing, 2017), among other reasons to discourage voters from turning out (see also Chester & Montgomery, 2017). In a negative scenario, when political parties share certain fragments of their political programmes only with those targeted voters who are likely to support them, and other fragments with yet another part of the audience, this can be harmful to democratic processes (Zuiderveen Borgesius et al., 2018; see also Howard, 2006, pp. 135-136). Beyond being an unfair practice, this splinters the shared information basis of society and contributes to a fractured public sphere.

However, technology in itself is neither evil nor good. The strong potential of micro-targeting could also serve the public interest, if applied purposefully to that end. There is ample research and discussion on how social media engagement enhances democracy (Sunstein, 2007; Sunstein, 2018; Kumar & Kodila-Tedika, 2019; Martens et al., 2018). For example, it could be exceptionally effective in transmitting useful messages to citizens on healthy living, safe driving and other social values, with which it can greatly benefit society. In this perspective, data-driven political micro-targeting has the potential to increase the level of political literacy and improve the functioning of deliberative democracy, by incentivising deliberative discussion among those voters who are interested and feel involved. However, even in this case, non-targeted citizens are excluded from the discussion without having been offered the choice to participate in it, unless effective measures are used to enable their involvement. At the core of the issue is the paternalistic distinction between citizens, deciding on their behalf whether or not they should receive certain information (see also Thaler & Sunstein, 2008).

I would like to emphasise that the process of democracy needs to be protected with the utmost care. Why? We are witnessing the success of populist political campaigns globally, and we should accept the fact that the social representation of ideologies is diverse and may change over time. It is even possible that popular support for the idea of deliberative democracy will decrease, as democratic processes in several countries, even within the European Union, have signalled. But at the same time, there is (still) a global consensus on the universal protection of fundamental rights, democratic processes and the rule of law, which form the fundamental legal structures of our societies. Therefore, while political communication should not be restricted on the basis of its content, and even ideologies which are critical of the current forms of democracy (e.g., illiberalism) should be allowed to compete in the political arena, it must be ensured that all democratic discussion and political battles in which these ideologies wrestle are fought under fair circumstances. Only this can ensure that the freedom of political expression is not abused and the democratic process is not hacked by political opportunism. Political campaigning is one of the key processes by which the democratic will of the people is formed, and should this process violate fundamental rights, that would in itself pose a threat to democracy. It would destabilise the democratic process and raise issues of legitimacy, whether or not the controversial technique is successful. Respect for fundamental rights is a prerequisite for the rule of law, and the rule of law is a precondition for democracy. The three are “inherently and indivisibly interconnected, and interdependent on each of the others, and they cannot be separated without inflicting profound damage to the whole and changing its essential shape and configuration” (Carrera, Elspeth, & Hernanz, 2013).

In sum, I argue that online micro-targeting should be restricted not because it can carry manipulative content, and not because it can violate privacy rights, but because it threatens the process of democratic discourse. Even if the likelihood of manipulation is small, the harm that can be caused is so severe that the overall sum of the risk is too high to be taken. Direct political campaigning has been with us before, in the form of door-to-door canvassing, leaflets, local meetings, and other tools. However, access to masses of voters’ personal data, the analysis of these databases with advanced technology, and the low cost of personalised communication generate a qualitatively new situation. The voters have lost their control over being targeted, and the transparency of the targeting has diminished (see also Mayer-Schönberger & Cukier, 2013; Baldwin-Philippi, 2019; Howard, 2006).
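
The risk calculus invoked here can be written as a simple product. The following sketch uses purely hypothetical numbers to illustrate why a low likelihood combined with a severe impact can still yield an unacceptably high overall risk.

```python
# Hypothetical numbers only: the point is the shape of the calculation
# (risk = likelihood x impact), not any empirical estimate.

def risk(likelihood: float, impact: float) -> float:
    """Simple expected-harm heuristic: probability of harm times its severity."""
    return likelihood * impact

print(risk(likelihood=0.05, impact=1.0))   # rare but severe harm   -> 0.05
print(risk(likelihood=0.50, impact=0.01))  # frequent, trivial harm -> 0.005
# Even a small probability of manipulation, multiplied by severe democratic
# harm, can outweigh a common but minor nuisance.
```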

Having argued above that the technique of micro-targeting is harmful to the individual right to information, and that it threatens the collective democratic public discourse, the logical conclusion would be to recommend a complete prohibition of using this strategic communication tool in the political discourse. Only this could eliminate the risk to the informational rights of masses of voters and the risk of further polarisation of the public discourse.

However, considering also the benefits and the high interest of the political elite in this tool, the political reality is likely to incline towards allowing its use and demanding appropriate safeguards – the discussion of which is beyond the limits of this article.

Conclusion

This article argued that micro-targeting violates the fundamental right to receive information, and the collective right to the public discourse. Thereby it harms the democratic process of deliberation. Non-targeted voters' right to receive information is violated by being excluded from political communication that is supposed to be public and inclusive in a democracy. This is a mass violation of a human right, which is part of the right to freedom of expression, as recognised by the ECtHR. To focus entirely on the informational rights of non-targeted citizens, the article avoided the discussion of other rights that may be affected.

I examined two aspects of the ECtHR jurisprudence on freedom of expression: the right to receive information, and freedom of political expression. On the first topic, I showed that in many instances the Court decided in favour of the freedom to receive and impart information for the sake of the public discourse, even against the interests of private enterprises (Bladet Tromsø v. Norway, 1999; Sunday Times v. UK, 1979). I demonstrated that the right to receive information is not reserved to the press, but includes the general public as well (De Haes & Gijsels v. Belgium, 1997, para. 39; Fressoz & Roire v. France, 1999, para. 51). In our context, the right to receive all political information is regarded as crucial for non-targeted voters, including journalists, NGOs and election authorities. Although the above cases relate to restrictions caused by the state rather than private entities, the Court found that the state is obliged to eliminate obstacles which would hinder the press from exercising its watchdog function (Társaság v. Hungary, 2009). Whenever the Court had to balance between the public interest of the community and the interest of an individual, the public interest has been given substantial weight (CoE, 2005, p. 42; Sunday Times v. UK, 1979; Bladet Tromsø v. Norway, 1999).

In my analysis of the ECtHR decisions relating to political advertising, I show that the Court has consistently found restrictions of political advertising acceptable in the interest of a sound democratic public discourse. I argue that even though the Animal Defenders v. UK (2013) decision was regarded as exceptional at the time, both preceding and subsequent judgments clearly show the consistency of the Court’s position. In all the cases discussed, the Court weighed the right to political expression and to receive information against the protection of the public discourse, where the latter was considered the authorities' responsibility to prevent the democratic debate from being distorted (Erdogan Gökce v. Turkey, 2014; Animal Defenders v. UK, 2013, para. 112). In the earlier TV Vest v. Norway case (2008), it was shown that the Court accepted the principle that a certain type of political speech threatened to reduce the quality of political debate and to distort the discussion, as well as to create inequality between financially powerful and less well-financed groups, even though in the specific case the Court found the restriction disproportionate, because the party in question was a small and financially weak party (TV Vest v. Norway, 2008). In the case of Animal Defenders, the Court found acceptable the restriction of the dissemination of public issue ads with the objective of preserving a sound informational environment. The factors which the Court identified to determine the proportionality of a restriction can guide the case of political micro-targeting as well: preventing the risk of unequal access to the public discourse based on wealth, and consequently protecting the democratic process from a distortion which would curtail a free and pluralist debate. In Animal Defenders the restriction was found to be sufficiently narrowly tailored, as it applied to certain media only and other media remained available. These arguments also apply to the case of online political micro-targeting, which is suspected of fragmenting and distorting the democratic process through the informational deprivation caused to non-targeted voters.

While discussing policy options in detail is beyond the scope of this article, to conclude, here are a few thoughts to consider.

First, some countries' legal cultures may incline towards more risk-taking, even at the price of certain collective and individual rights being harmed, whereas others are more risk-averse. As in the dispute over hate speech, the former culture would rather tolerate the risk to the political process than restrict individual freedom of expression. Long-standing, stable and prosperous democracies may find the risks explained above more manageable – this would be more characteristic of the United States than of most member states of the European Union (Heinze, 2005; Kahn, 2013).

Second, staying with the example of hate speech, while there are important differences between the European member states in regulating hate speech, the similarities are more pronounced, especially in contrast to the United States. Importantly, the ECtHR held that the margin of appreciation is narrow in the field of political freedom of expression. Legislative efforts also have to face the difficulties posed by the transborder nature of targeting and advertising (Bodó, Helberger, & de Vreese, 2017). All these factors highlight the importance of an international, or at least EU-wide, policy approach (see also Dobber et al., 2019).

Third, when it comes to the protection of fundamental rights, states have an obligation to ensure that these rights are not restricted even by private entities. Self- and co-regulation do not impose sanctions in cases of non-compliance and therefore do not provide sufficient protection for individual human rights. For political advertisers, the stakes are higher than in any other industry, and these circumstances render the long-term success of self-regulation less likely. Specifically, in the case of political micro-targeting, the data controllers are political parties that had, have or will have governmental power, and thus have a potential influence on national regulations and authorities as well. This further raises the significance of supranational regulation and supervision by EU bodies.

References

Baker, E. (1989). Human Liberty and Freedom of Speech. Oxford University Press.

Baldwin-Philippi, J. (2019). Data campaigning: between empirics and assumptions. Internet Policy Review, 8(4). https://doi.org/10.14763/2019.4.1437

Barendt, E. (2005). Freedom of Speech. Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199225811.001.0001

Bennett, C. J. (2016). Voter databases, micro-targeting, and data protection law: can political parties campaign in Europe as they do in North America? International Data Privacy Law, 6(4), 261–275. https://doi.org/10.1093/idpl/ipw021

Bodó, B., Helberger, N., & de Vreese, C. H. (2017). Political micro-targeting: A Manchurian candidate or just a dark horse? Internet Policy Review, 6(4). https://doi.org/10.14763/2017.4.776

Carrera, S., Elspeth, G., & Hernanz, N. (2013). The Triangular Relationship between Fundamental Rights, Democracy and the Rule of Law in the EU, Towards an EU Copenhagen Mechanism. CEPS. https://www.ceps.eu/ceps-publications/triangular-relationship-between-fundamental-rights-democracy-and-rule-law-eu-towards-eu/

Chester, J., & Montgomery, K. (2017). The role of digital marketing in political campaigns. Internet Policy Review, 6(4). https://doi.org/10.14763/2017.4.773

CoE/ECtHR Research Division. (2015). Internet: case-law of the European Court of Human Rights. Council of Europe; European Court of Human Rights. https://www.echr.coe.int/Documents/Research_report_internet_ENG.pdf

Devlin, P. L. (1973). The McGovern canvass: A study in interpersonal political campaign communication. Central States Speech Journal, 24(2), 83–90. https://doi.org/10.1080/10510977309363151

Dobber, T., Trilling, D., Helberger, N., & de Vreese, C. H. (2017). Two crates of beer and 40 pizzas: the adoption of innovative political behavioural targeting techniques. Internet Policy Review, 6(4). https://doi.org/10.14763/2017.4.777

Dobber, T., Ó Fathaigh, R., & Zuiderveen Borgesius, F. J. (2019). The regulation of online political micro-targeting in Europe. Internet Policy Review, 8(4). https://doi.org/10.14763/2019.4.1440

Dommett, K. (2019). Data-driven political campaigns in practice: understanding and regulating diverse data-driven campaigns. Internet Policy Review, 8(4). https://doi.org/10.14763/2019.4.1432

Dworkin, R. (1999). Freedom's Law: The Moral Reading of the American Constitution. Oxford University Press.

Evangelista, R., & Bruno, F. (2019). WhatsApp and political instability in Brazil: targeted messages and political radicalisation. Internet Policy Review, 8(4). https://doi.org/10.14763/2019.4.1434

Eurobarometer. (2017). Designing Europe's Future (Special Eurobarometer No. 461). European Commission, Directorate-General for Communication. https://ec.europa.eu/commfrontoffice/publicopinionmobile/index.cfm/Survey/getSurveyDetail/surveyKy/2173

Eurobarometer. (2018). Democracy and elections (Special Eurobarometer No. 477). http://ec.europa.eu/commfrontoffice/publicopinionmobile/index.cfm/Survey/getSurveyDetail/surveyKy/2198

European Data Protection Supervisor. (2018). Online manipulation and personal data (EDPS Opinion No. 3/2018). European Data Protection Supervisor.

Habermas, J. (1984). Reason and the Rationalization of Society. The Theory of Communicative Action, Volume 1 (T. McCarthy, Trans.). Beacon Press.

Habermas, J. (1996). Between facts and norms: contributions to a discourse theory of law and democracy (W. Rehg, Trans.). The MIT Press.

Heinze, E. (2016). Introduction. In Hate Speech and Democratic Citizenship. Oxford University Press. https://doi.org/10.1093/acprof:oso/9780198759027.001.0001 Available at SSRN: https://ssrn.com/abstract=2665812

Howard, P. N., Ganesh, B., & Liotsiou, D. (2018). The IRA, Social Media and Political Polarization in the United States, 2012-2018 (Working Paper No. 2018.2). Oxford University, Project on Computational Propaganda. https://comprop.oii.ox.ac.uk/research/ira-political-polarization/

Howard, P. N. (2006). New Media Campaigns and the Managed Citizen. Cambridge University Press.

Illing, S. (2017, October 22). Cambridge Analytica, the shady data firm that might be a key Trump-Russia link, explained. Vox. https://www.vox.com/policy-and-politics/2017/10/16/15657512/cambridge-analytica-facebook-alexander-nix-christopher-wylie

Kahn, R. A. (2013). Why Do Europeans Ban Hate Speech? A Debate Between Karl Loewenstein and Robert Post. Hofstra Law Review, 41(3), 545–585. https://scholarlycommons.law.hofstra.edu/hlr/vol41/iss3/2

Keschmann, M. (2013). Reaching the Citizens: Door-to-door Campaigning. European View, 12(1), 95–101. https://doi.org/10.1007/s12290-013-0250-x

Kramer, G.H. (1970). The effects of precinct-level canvassing on voter behavior. Public Opinion Quarterly, 34(4), 560–572. https://doi.org/10.1086/267841

Kumar Jha, C., & Kodila-Tedika, O. (2019). Does social media promote democracy? Some empirical evidence. Journal of Policy Modeling. https://doi.org/10.1016/j.jpolmod.2019.05.010

Lewis, T. (2014). Animal Defenders International v United Kingdom: Sensible Dialogue or a Bad Case of Strasbourg Jitters? Modern Law Review, 77(3), 460–474. https://doi.org/10.1111/1468-2230.12074

Lupfer, M., & Price, D. (1972). On the merits of face-to-face campaigning. Social Science Quarterly, 53(3), 534–543. https://www.jstor.org/stable/42860231

Madsen, J. K., & Pilditch, T. D. (2018). A method for evaluating cognitively informed micro-targeted campaign strategies: An agent-based model proof of principle. PLoS ONE, 13(4). https://doi.org/10.1371/journal.pone.0193909

Martens, B., Aguiar, L., Gomez-Herrera, E., & Mueller-Langer, F. (2018). The digital transformation of news media and the rise of disinformation and fake news (JRC Technical Reports, Digital Economy Working Paper No. 2018-02). https://ec.europa.eu/jrc/sites/jrcsh/files/jrc111529.pdf

Mayer-Schönberger, V., & Cukier, K. (2013). Big data: A revolution that will transform how we live, work, and think. Houghton Mifflin Harcourt.

Meiklejohn, A. (2004). Free Speech And Its Relation to Self-Government. The Lawbook Exchange.

Mill, J. S. (1863). On Liberty.

Ó Fathaigh, R. (2014). Political Advertising Bans and Freedom of Expression. Greek Public Law Journal, 27, 226–228. Available at SSRN: https://ssrn.com/abstract=2505018

Papakyriakopoulos, O., Hegelich, S., Shahrezaye, M., & Serrano, J. C. (2018). Social media and microtargeting: Political data processing and the consequences for Germany. Big Data & Society, 5(2). https://doi.org/10.1177/2053951718811844

Renucci, J-F. (2005). Introduction to the European Convention on Human Rights. The rights guaranteed and the protection mechanism. Council of Europe Publishing.

Rowbottom, J. (2013a). Animal Defenders International: Speech, Spending, and a Change of Direction in Strasbourg. Journal of Media Law, 5(1), 1–13. https://doi.org/10.5235/17577632.5.1.1

Rowbottom, J. (2013b, April 22). A surprise ruling? Strasbourg upholds the ban on paid political ads on TV and Radio. UK Constitutional Law Blog. http://ukconstitutionallaw.org

Sadurski, W. (2014). Freedom of Speech and Its Limits. Kluwer Academic Publishers. https://doi.org/10.1007/978-94-010-9342-2

Schumpeter, J. A. (2008). Capitalism, socialism and democracy. Harper Perennial.

Sunstein, C. (2007). #Republic: Divided Democracy in the Age of Social Media. Princeton University Press.

Thaler, R. H., & Sunstein, C. R. (2008). Nudge: Improving decisions about health, wealth, and happiness. Yale University Press.

Zuiderveen Borgesius, F. J., Möller, J., Kruikemeier, S., Ó Fathaigh, R., Irion, K., Dobber, T., Bodó, B., & de Vreese, C. H. (2018). Online Political Microtargeting: Promises and Threats for Democracy. Utrecht Law Review, 14(1), 82–96. https://doi.org/10.18352/ulr.420

Decisions of the ECtHR

Ahmet Yıldırım v. Turkey, no. 3111/10, 18 December 2012

Animal Defenders International v. UK, no. 48876/08, 22 April 2013

Autronic AG v. Switzerland, no. 12726/87, 22 May 1990

Bladet Tromsø and Stensaas v. Norway, no. 21980/93, 20 May 1999

Castells v. Spain, no. 11798/85, 23 April 1992

Chauvy and others v. France, no. 64915/01, 29 June 2004

De Haes and Gijsels v. Belgium, no. 19983/92, 24 February, 1997

Erdoğan Gökçe v. Turkey, no. 31736/04, 14 October 2014

Fressoz and Roire v. France, no. 29183/95, 21 January 1999

Gaskin v. United Kingdom, no. 10454/83, 7 July 1989

Goodwin v. the United Kingdom, no. 17488/90, 27 March 1996

Guerra and Others v. Italy, no. 116/1996/735/932, 19 February 1998

Kenedi v. Hungary, no. 31475/05, 26 May 2009

Leander v. Sweden, no. 9248/81, 26 March 1987

Lingens v. Austria, no. 9815/82,  8 July 1986

Magyar Helsinki Bizottság v. Hungary, no. 18030/11, 08 November 2016

Matky v. Czech Republic, no. 19101/03, 10 July 2006

Öztürk v. Turkey, no. 22479/93, 28 September 1999

Roche v. United Kingdom [GC], no. 32555/96, 19 October 2005.

Sunday Times v. UK (no. 1), no. 6538/74, 26 April 1979

Társaság a Szabadságjogokért v. Hungary, no. 37374/05, 14 April 2009

Thorgeir Thorgeirson v. Iceland, no. 13778/88, 25 June 1992

TV Vest AS & Rogaland Pensjonistparti v. Norway, no. 21132/05, 11 December 2008

VgT Verein gegen Tierfabriken v. Switzerland, no. 24699/94, 28 June 2001

Other legal resources

Article 29 Working Party. (2018). Guidelines on consent under Regulation 2016/679 (Vol. 17/EN). European Commission. https://ec.europa.eu/newsroom/article29/item-detail.cfm?item_id=623051

Directive 2002/58/EC of the European Parliament and of the Council of 12 July 2002 concerning the processing of personal data and the protection of privacy in the electronic communications sector (Directive on privacy and electronic communications). http://data.europa.eu/eli/dir/2002/58/oj

Regulation (EU) 2019/1150 of the European Parliament and of the Council of 20 June 2019 on promoting fairness and transparency for business users of online intermediation services. http://data.europa.eu/eli/reg/2019/1150/oj

Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation). http://data.europa.eu/eli/reg/2016/679/oj

European Commission. (2018, September 26). Code of Practice on Disinformation. https://ec.europa.eu/digital-single-market/en/news/code-practice-disinformation.

Footnotes

1. This paper does not follow any specific theoretical model of democracy; it is based on the legal theory of fundamental rights, with references to certain communication theories and certain political theories of deliberative democracy, like Mill, Habermas, Rawls, Dworkin, Meiklejohn, Baker and Barendt. The scrutiny of the legal background focuses on member states of the European Union, some of which are mature democracies, others still in transition, and yet others on the backslide.

Germany is amending its online speech act NetzDG... but not only that


Germany is amending its Network Enforcement Act (hereinafter NetzDG) in order to respond to criticism from civil society and to address issues that have emerged over the past two years. This first draft of amendments by the government will next be discussed in parliament before its probable adoption. On paper, the NetzDG has not had the harmful consequences for online speech that many feared: there is at least no proof that platforms removed more content than before the NetzDG's adoption. Nevertheless, the government still overestimates the benefits of such a law.

The German “censorship law”

The NetzDG was adopted as a reaction to a lack of self-regulatory efforts by social media platforms. After Angela Merkel's 2015 decision to let over a million Syrian asylum seekers into the country, the German government observed a peak of hate speech and disinformation on social media platforms. Since, in the government's view, the platforms did not adopt sufficient self-regulatory measures for the fast removal of unlawful content (e.g., insults, defamation, incitement to violence), the government drafted, and the parliament adopted, the NetzDG in summer 2017.

The Act obliges social networks with two million users or more to set up user-friendly complaint mechanisms to report content, to remove ‘manifestly unlawful content’ within 24 hours, and to deliver transparency reports twice a year. The goal is to prevent the dissemination of offensive and aggressive content falling within the scope of application under sec. 1 (3) NetzDG. In other words, no new criminal offences were adopted, but the platforms had to take on responsibility for unlawful content.

Hardly anyone thought of the NetzDG as the ideal way forward: it was deemed unconstitutional for many reasons and decried worldwide as a ‘censorship law’. Speech-restricting laws are, in principle, allowed if they meet the requirements set by art. 5 (2) Basic Law and by the Federal Constitutional Court’s jurisprudence. But the main fear was that social media platforms might remove more content than necessary to avoid being fined (overblocking). It was also expected that they would adjust their community guidelines to the strictest law worldwide to avoid the costs of adapting to each country (over-removal). Especially in the First Amendment context, scholars described it as the paradigm of ‘new school speech regulation’1, that is, controlling speech online via the control of digital networks. Again, there are many reasons to criticise the NetzDG, but what it does, in the end, is increase intermediary liability for not reacting to user notices concerning unlawful content.

Striking a balance

From a global perspective, one might argue that the NetzDG contributed to some sort of race to the bottom when it comes to content control, and eventually serves as a model for authoritarian regimes. Although it is important to point out that the requirements for speech-restricting laws need to be very high, and that only when meeting them can a law be consistent with democratic principles, the NetzDG is not the condition sine qua non for the way platforms moderate user-generated content and handle user-generated flagging. In sum, the NetzDG has flaws, but there is no empirical proof of over-removal or other harmful effects on online speech due to it. This is partly because the first version of the law did not specify how social media platforms needed to implement the complaint tools, or how granular their transparency reports should be regarding the reasons for removal. As a result, the complaint numbers published in the reports were not conclusive (more details in this paper). These two aspects have been addressed and improved in the current draft amendment.

What’s new?

According to the draft, sec. 3 (1) sentence 2 NetzDG would read as follows: "The provider must provide users with an easily recognisable, directly accessible, easy-to-use and permanently available procedure, when taking notice of the content, for transmitting complaints about illegal content." (amendments in italics). Usually, users flag content as unlawful in the same window that opens when flagging any offensive or otherwise unwanted content. On YouTube, for example, users can flag content as hate speech according to the platform's guidelines and as unlawful according to German criminal law with the same complaint tool. On Facebook, on the other hand, the complaint tool for unlawful content was somewhat "hidden" next to the company's legal information – hence its low complaint numbers.

Until now, it has been up to the social media platforms to decide on which basis they remove illegal content. Indeed, the NetzDG introduces an additional possibility for users to complain, not an obligation to do so. In the near future, platforms will most likely have to provide information on the basis of their content moderation decisions: sec. 2 (2) no. 3 NetzDG, as proposed in the draft, stipulates three levels of description: the "mechanisms for the transmission of complaints", the "decision criteria for the removal and blocking of illegal content", and the "examination procedure including the order of examination".

NetzDG amendment “to play part in a crime series”

So far so good, but the version adopted by the government on 1 April 2020 is part of a package of ‘measures to counter right-wing extremism and hate crime online’. It involves amending laws other than the NetzDG, namely the Criminal Code, the Code of Criminal Procedure, the Telecommunication Act, and the Law on the Federal Criminal Police Office (BKA). The critical points are actually to be found there, because the amendments to the latter would extend the responsibilities of the BKA and, among other things, allow access to IP addresses and user passwords under certain conditions. Furthermore, criminal liability for speech-related offences would be increased by criminalising preparatory conduct far in advance of aggressive opinions and calls for violence. Adding these changes to the catalogue of ‘unlawful content’ under the NetzDG is a true challenge.

In a recent case, the district court of Augsburg sentenced a Facebook user for sharing a news video, produced and distributed by public broadcaster Deutsche Welle, showing ISIS flags. The Bavarian High Court reversed this decision and held that the district court had not sufficiently considered the defendant’s freedom of opinion and that sharing media coverage of a terrorist organisation should not be confused with disseminating terrorist propaganda. This case anecdotally shows that even courts struggle to discern criminal actions when they are speech-related – what consequences will it have for commercial content moderation by social media platforms if the laws become more complicated?

Taken together, extending the scope of criminal provisions, or even resurrecting highly restrictive laws that have been abolished, is hardly convincing. The planned changes in the package of laws do not always address the underlying problems, such as right-wing extremism and cyberbullying. The government proposes a legal framework that will surely be effective against hate speech and will facilitate criminal prosecution. It will also force social media platforms to be more transparent when their content moderation rules overlap with the elements of criminal offences. Nevertheless, one should bear in mind potential solutions on the preventive side, such as educational opportunities and programmes for social cohesion. For while jurisprudence serves to standardise norms for life in a society, and thus to control behaviour, it does not provide answers to social problems.

With regard to the NetzDG, this law will increasingly restrict freedom of opinion and information if the definition of ‘manifestly unlawful content’ becomes broader due to changes in the Criminal Code, for instance. In other words, we should worry less about the proposed amendments to the NetzDG and more about the planned amendments to the four other laws.

Footnotes

1. Jack M Balkin, ‘Old School/New School Speech Regulation’ (2014) 127 Harvard Law Review 2296, 2306.

“Enabling act” in Hungary: uncontrolled government power, threatened press


The Hungarian parliament adopted a new bill on the defence against the coronavirus (Act XII of 2020) on 30 March. It was promulgated the same day. The bill was sponsored by the governing parties, who are in the comfortable position of producing the two-thirds majority necessary for passing any law. The law is referred to as an “enabling act” because its main aim is to equip the government with the uncontrolled opportunity to rule by decree. Neither the scope of the rule by decree nor the duration of this special legal order is defined in the law. Therefore, several observers found the law to be the basis of uncontrolled governmental power. The law does not contain any rules on digital human rights, but it limits the general space for expressing criticism of government action, both online and offline. This is the case for journalists, but also for bloggers and Facebook users. The long-term consequences of the law for digital human rights are unpredictable.

Interestingly, the Hungarian constitution (Basic Law of 2011) regulates several cases of special legal order. One of these cases is the state of danger (Article 53), which empowers the government to issue decrees suspending the application of certain laws or derogating from the provisions of laws, and to take other extraordinary measures. The special legal order also enables the suspension of the exercise of fundamental rights. However, the constitution defines two limits on the government’s power. Firstly, the possible set of emergency measures has to be defined in the form of an act, and secondly, the decrees remain in force for 15 days, unless parliament authorises the government to extend their effect. This means that the parliament may only approve the extension of the effect of decrees that had already been passed at the time of approval. The Basic Law makes an extension beyond 15 days possible too, but this approval cannot be seen as an unlimited empowerment of the government to rule by decree.

Based on these rules, the Hungarian government proclaimed the state of danger with decree No. 40/2020 (III.11.) on 11 March, and it has also passed several decrees in connection with the pandemic. These decrees have had no unjustified impact on fundamental rights yet. However, Act XII of 2020 infringes on this constitutional framework. According to the law, the government only has to inform the parliament about the measures taken, and all decrees retain their effect until the end of the state of danger. When that end comes will be decided by the government, as the Basic Law prescribes. As a result, the government has unlimited power, up to and including the suspension of the exercise of fundamental rights, including privacy, freedom of expression and freedom of information. There is currently no information about the government's legislative plans. It is also important to clarify that the Hungarian parliament is not suspended. It can pass laws, but the power of the government is not limited by these acts.

As mentioned above, Act XII of 2020 also amended the Criminal Code (Act C of 2012). This amendment directly affects digital human rights because it restricts freedom of expression. The crime of “scaremongering” (§ 337) has been complemented by a new type of conduct: publishing or spreading a false or distorted statement which endangers or derails the successfulness of the defence during a state of danger is punishable by imprisonment of one to five years. This amendment is meant to control the spreading of false information, but in practice it can have a much broader impact on free speech and free publishing. Firstly, it is unclear what the notion of “successfulness of the defence” means, and thereby how it can be endangered or derailed; these success indicators can be defined by the Orbán government arbitrarily. The notion of a “distorted statement” provides endless space for interpretation too. The government can, for example, interpret it so as to limit the publication of any information it wants to keep secret, even if the information is true but, in the given context of publication, contradicts the government’s narrative. Secondly, even if the courts interpret the rule in light of freedom of expression, the chilling effect can stop some journalists and citizens from publishing truths that are critical of the government.

At the European level, a procedure is in progress against Hungary because the country has been persistently breaching the EU's founding values since 2018. On the same grounds, the governing party Fidesz was suspended from the European People’s Party group in the European Parliament. These developments show that the concentration of power in Hungary is not a new phenomenon.

The new legal situation demolishes the remaining checks on the government. Criticism of this unlimited power is strong, both in Hungary and at the European level. Opposition parties and NGOs express their objections and analyses, but the governing parties are not ready to discuss the measures. The coronavirus crisis further prevents people from demonstrating, and the EU from reacting with stronger financial restrictions.

Transparency in artificial intelligence


Introduction: transparency in AI

Transparency is a multifaceted concept used by various disciplines (Margetts, 2011; Hood, 2006). Recently, it has gone through a resurgence in contemporary discourses around artificial intelligence (AI). For example, the ethical guidelines published by the EU Commission’s High-Level Expert Group on AI (AI HLEG) in April 2019 list transparency as one of seven key requirements for the realisation of ‘trustworthy AI’, and it has also made a clear mark in the Commission’s white paper on AI, published in February 2020. In fact, “transparency” is the single most common, and one of the five key, principles emphasised in the vast number – a recent study counted 84 – of ethical guidelines addressing AI on a global level (Jobin et al., 2019). Furthermore, there is a critical discourse on AI and machine learning about fairness, accountability and transparency.1 The number of publications in fields relating AI and machine learning to ethics, governance and norms has grown remarkably over the last two to five years (Larsson et al., 2019).

While our conceptual focus here is on transparency, an important qualifier is AI, and the combination is closely related to algorithmic transparency. While algorithmic transparency and algorithmic decision-making have become accepted terminology in contemporary critical research, we see a need for a more nuanced and elaborated terminology in relation to AI, in order to clarify the conceptual framing of transparency.

On the one hand, AI is indeed a contested concept that lacks clear consensus, in computer science (Monett, Lewis, & Thórisson, 2020), in law (Martinez, 2019) and in public perception (Fast & Horvitz, 2017). This is linked to the fact that intelligence alone has been defined in at least 70 different ways (Legg & Hutter, 2007). Furthermore, the definition has changed as the possibilities within the field have developed since its inception in the 1950s, posing what is sometimes called the “AI effect” or an “odd paradox” (discussed by Stone et al., 2016; see also McCorduck & Cfe, 2004): once a problem seen as needing AI has been solved, the application ceases to be perceived as intelligent. This corresponds to the view that AI is about solving problems that computers currently cannot solve, and as soon as a computer can solve a given problem, it no longer counts as an AI problem. The hard-to-define field of AI has therefore fittingly been addressed as not a single technology, but rather “a set of techniques and sub-disciplines ranging from areas such as speech recognition and computer vision to attention and memory, to name just a few” (Gasser & Almeida, 2017, p. 59).

On the other hand, there is ambiguity also in the ‘algorithmic’ concept, although it seems far less problematised in critical research. Firstly, the notion of algorithms in computer science as a finite step-by-step description of how to solve a particular class of problems – and hence what ‘algorithmic’ transparency would denote – is arguably narrower than how the concept is used in the literature on governance issues, which often relates to questions of accountability. For example, a recent report on “algorithmic transparency” lists seven points that need to be addressed. Only one of these is aimed specifically at algorithms, while the other six deal with issues of data, goals, outcomes, compliance, influence, and usage (Koene et al., 2019). While all of these aspects are highly relevant from a governance perspective addressing issues of accountability in relation to transparency, this also speaks to the ambiguity of the use of “algorithmic” in relation to transparency. Is it addressing a specific technological aspect or a systemic quality?

In line with this, and in relation to issues of unfair outcomes of algorithmic systems, it is often concluded that the specific algorithms and the code are very unlikely to be intentionally designed to discriminate in a harmful way (Bodo et al., 2017). The challenge lies rather in the relations between data and algorithms: emergent properties of the machine learning process that are very likely to be unidentifiable from a review of the code. This also means that it is important to consider the context of the combination of machine learning algorithms, the underlying training data and the decisions they inform (Kemper & Kolkman, 2019). So, a key question is for whom AI systems or algorithmic decision-making should be more transparent. This is highlighted in relation to digital platforms on a global scale (Larsson, 2019), and Kemper and Kolkman (2019) argue for the need for a “critical audience”. Pasquale (2015, pp. 160-165) has called for “qualitative transparency”, which Larsson (2018) has interpreted as a need for supervisory authorities to develop methods for algorithmic governance.

The multitude of aspects, combined with the complexity of context, leads us to argue for a more systemic approach, here signified by the AI concept as a wider notion than the ‘algorithmic’ one (see Doshi-Velez et al., 2017; Floridi et al., 2018). A further reason is to strengthen a conceptual bridge between the fields of research dealing with ‘algorithmic transparency’ and accountability, on the one hand, and the fields researching AI and the challenges of transparency, albeit in terms of making models more explainable and interpretable, on the other (see Lepri et al., 2018). Of particular interest here is the relationship between transparency and trustworthy AI, which is a key objective of the European AI strategy from April 2018, emphasised not least by the subsequent AI HLEG ethics guidelines on trustworthy AI (2019) and a clear part of the “ecosystem of trust” sought in the Commission’s white paper on AI (2020, p. 9).

Even if research related to transparency in AI, or what we interchangeably call AI transparency, has recently been claimed to be “in its infancy” (Theodorou, Wortham, & Bryson, 2017), the theoretical backdrop of transparency is, as mentioned, in itself vast and rather complex. Therefore, some of that backdrop, with the multidisciplinary and historical connotations of the concept, is addressed in the following section (1). Transparency, we show, comes with a metaphorical framing, which we analyse in order to show the normative connotations attached to it. Neighbouring concepts like openness and explainability lead us to propose a categorisation of aspects of relevance to transparency in AI in Section 2. These are of relevance for the ethical and legal challenges outlined in Section 3.

1. Historical and conceptual development of transparency

In an essay from 2009, Carolyn Ball analyses the metaphorical content of transparency as it developed in the anti-corruption work of NGOs and supranational institutions in the 1990s. She describes the academic interest seen in publications using transparency terminology. Consistent with this, a search for ‘transparency’ in the Web of Science – which mainly indexes articles published in international scientific journals – reveals that it is a relatively newfound concept, in the sense that there has been a steady increase in the use of the concept from the 1990s onwards; see Figure 1 below.2

Figure 1: ‘Transparency’ as a concept in the Web of Science, 1931-2018.
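
To illustrate the kind of simple year-by-year tally that underlies a trend figure of this sort, the following Python sketch counts records per publication year from an exported bibliographic file. The file name and the 'PY' (publication year) column label are assumptions about a typical tab-separated export, not a documented Web of Science interface.

```python
# Hypothetical sketch: tally publications per year from an exported bibliographic
# record file, e.g. to reproduce a trend line like the one in Figure 1.
# The file name and the 'PY' (publication year) field are assumed export conventions.
import csv
from collections import Counter

counts = Counter()
with open("wos_transparency_export.tsv", newline="", encoding="utf-8") as f:
    for record in csv.DictReader(f, delimiter="\t"):
        year = record.get("PY", "").strip()
        if year.isdigit():
            counts[int(year)] += 1

# Print the yearly counts in chronological order
for year in sorted(counts):
    print(year, counts[year])
```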

Transparency has, for example, according to Forssbæck and Oxelheim (2015), become a catchword in the economic-political debate, perhaaps particularly in relation to a series of financial crises in the mid-1990s, but also to a series of scandals in the early 2000s that led to heightened interest in corporate governance. The EU's Transparency Directive from 2004 can be mentioned here. Linking transparency to economic theory, Forssbæck and Oxelheim tie the concept to the notion of information asymmetries, that is, situations where one party has more or better information than the other. This notion is also found in the literature on fairness in algorithmic decision-making (Lepri et al., 2018).

One notable difficulty for theorising transparency, as pointed out by Hansen, Christensen and Flyverbom (2015, p. 118), has to do with the concept itself and the fact that it refers to such a wide array of objects, uses, technologies and practices. This is evident in a bibliometric overview of how the concept of ‘transparency’ is used in different research areas; see Figure 2 below.

Figure 2: ‘Transparency’ use in different research areas (>1,000 publications), based on Web of Science journal classification categories.

The richness in the use of ‘transparency’ as a concept, as well as part of the difficulty of defining it, relates to the fact that in some fields transparency denotes the physical property of a material and its capacity to allow light to pass through it, while in others it is thought of as a “powerful means towards some desirable social end, for example, holding public officials accountable, reducing fraud and fighting corruption” (Hansen, Christensen, & Flyverbom, 2015, p. 118). These different understandings should also be noted in relation to the uses of the concept in different disciplinary publications; see Figure 2.

The metaphorical framing of transparency

The conceptual metaphor knowing is seeing was described by Michael Reddy in 1979 (Reddy, 1979; see Larsson, 2017, p. 32). Reddy described the conduit metaphor system, a systemic set of mappings from the source domain of physical objects to the target domain of mental operations. This means that there are common metaphorical mappings in human understanding that structure aspects of knowledge through the metaphorical use of “seeing”. For example, when we conceptualise understanding in terms of seeing, which is commonplace, a series of other closely linked expressions or associations that have to do with the conditions for seeing follow (Lakoff & Johnson, 1980, 1999; Larsson, 2014). These include light, brightness, clarity – and transparency. It is therefore likely hard to avoid this particular metaphorical mapping, which leads to a labelling of transparency linked to the positive labelling of knowledge as something good to have. That is, the benign conception of transparency relates to a deeper cognitive frame linked to knowing and understanding. Conversely, the countering metaphors with negative connotations relate to being in the dark, perhaps most clearly displayed by the very present ‘black box’ terminology. The link between knowledge and transparency may, however, be illusory in particular contexts, as transparency can also be used for more rhetorical reasons, for example to deflect regulation (see Crain, 2018, below), or have unintended consequences.

There are a number of neighbouring as well as antonymic concepts of particular relevance for transparency as it relates to AI, such as ‘explainability’ (as in the research strand xAI), ‘black box’ (particularly popularised by legal scholar Frank Pasquale in The Black Box Society, 2015) and ‘openness’. First of all, the clear metaphoricity of these concepts is relevant in itself for understanding the role and meaning of the terminology. The conceptual and metaphorical essence of the concept of transparency, and its theoretical implications, is discussed by Hansen, Christensen and Flyverbom (2015) as well as by Christensen and Cornelissen (2015). Hansen, Christensen and Flyverbom (2015) address the normative challenge that many contemporary societal projects generally assume that transparency can effectively steer individual and collective behaviour towards desirable objectives.

The metaphorical analogy with a physical feature has, as argued by Koivisto (2016), meant that “transparency has come to denote a modern, surprisingly complex and expanding socio-legal ideal” – and it has therefore also become a normative concept bearing premises that need to be highlighted and discussed. As a result, transparency’s negative connotations are, according to Koivisto, undertheorised. Ananny and Crawford (2018) revisit these general but metaphorically based notions of transparency in the context of algorithmic accountability. Their argument supports the notion of a wider transparency concept than that of the narrower explainability domain. Rather than privileging a type of accountability that needs to look inside systems, they hold systems accountable by looking across them: “seeing them as sociotechnical systems that do not contain complexity but enact complexity by connecting to and intertwining with assemblages of humans and non-humans” (Ananny & Crawford, 2018, p. 974). The embodiment of transparency is evident in the sense that it structures our thinking. How AI and algorithmic systems are understood has normative effects on regulatory debates around how to govern AI.

Neighbouring concepts – openness and explainability

As mentioned, ‘openness’ is tightly linked to transparency. It is a concept often framed with positive values, portrayed by ‘open data’, ‘open source’, ‘open code’ and ‘open access’ (Larsson, 2017, pp. 215-220), as well as ‘open science’ (see Fecher & Friesike, 2014). Transparency in the sense of ‘open government’ comes with a political framing of empowerment under the notion of fostering democratic processes (see Ruijer, Grimmelikhuijsen, & Meijer, 2017). ‘Openness’ can also be observed, for example, in a still ongoing battle between content-producing industries – traditionally relying on intellectual property regulation – and other industries relying on a freer flow of content, the so-called “openness industries” (Jakobsson, 2012; see Larsson, 2017). A challenge, addressed by Hansen, Christensen, and Flyverbom (2015) in terms of “transparency as paradox”, is that even a genuinely well-intended discourse of openness may lead to unintended consequences. For example, the transparency of social media platforms – mentioned by Margetts (2011) several years before Cambridge Analytica’s use of Facebook data for political targeting – has led to new modes of misuse and democratic challenges in contemporary society (see Bodó, Helberger, & de Vreese, 2017, for a special issue on political micro-targeting). Correspondingly, transparency can be used inadvertently or strategically to produce opacity, as stated by de Laat (2018) in relation to algorithmic decision-making, and by Forssbæck and Oxelheim (2015) with regard to organisational audit and accountability. Similarly, a US-focused case study of the data broker industry makes the case that transparency runs up against “insurmountable structural limitations within the political economy” of this particular industry and that transparency as a policy approach is “subsumed by a discourse of consumer empowerment that has been rendered meaningless in the contemporary environment of pervasive commercial surveillance” (Crain, 2018, p. 89). That is, there seem to be limitations to transparency as a policy response for this type of industry, both at a structural level and as a regulatory deflection strategy, at worst only creating “an illusion of reform” (Crain, 2018, p. 89).

In research on AI in computer science, the concept of explainability (xAI) represents what could be called a “model-close” research strand of relevance to transparency in AI (see Lepri et al., 2018; Ribeiro, Singh, & Guestrin, 2016). xAI is often described as a means to deal with “black box models” (see Biran & Cotton, 2017; Guidotti et al., 2018) or what de Laat (2018) refers to as “inherent opacity”. This xAI notion of transparency is narrower and more oriented towards the algorithmic model than, for example, the transparency (and “explicability”) that AI HLEG (2019) deems necessary to achieve ethically sound and trustworthy AI. However, and as noted by Mittelstadt, Russell and Wachter (2019), explanations of machine learning models and predictions can serve many functions and audiences: explanations can be necessary to comply with relevant legislation (Doshi-Velez et al., 2017), to verify and improve the functionality of a system, and arguably to enhance the trust between individuals subject to a decision and the system itself (see Citron & Pasquale, 2014; Hildebrandt & Koops, 2010; Zarsky, 2013).
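
To make the “model-close” character of xAI concrete, the following minimal Python sketch illustrates one commonly discussed technique, a global surrogate model: a shallow, human-readable decision tree fitted to the predictions of an opaque classifier. The dataset, models and feature names are illustrative assumptions, not drawn from the cited works.

```python
# Minimal sketch of a "model-close" explainability technique: a global surrogate.
# A shallow decision tree is trained to mimic an opaque model's predictions,
# giving a human-readable approximation of its decision logic.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic data standing in for whatever the opaque system was trained on
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)

# The "black box": accurate but hard to inspect directly
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The surrogate is fitted to the black box's *predictions*, not the true labels
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box on the same data
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"Surrogate fidelity: {fidelity:.2f}")

# Readable rules approximating the opaque model's behaviour
print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(5)]))
```

A high-fidelity surrogate offers an interpretable approximation of the opaque model, which is precisely the narrower, model-oriented sense of transparency referred to above; it says little, however, about the wider systemic and governance aspects discussed in the next section.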

2. A multidisciplinary notion of transparency

When focusing on transparency in the context of AI, the literature often refers to explainability with reference to both interpretability and trust in the systems (see Ribeiro, Singh, & Guestrin, 2016). For example, when assessing users’ trust in applied AI, an assumption made in recent literature (see Miller, 2019) is that the issue of transparency has to take into consideration how ordinary humans understand explanations, and how they assess their relationship to a service, product or company. The development of explainable AI is, arguably, driven by evidence that many AI applications are not used in practice, partly because users lack trust in them (see Linegang et al., 2006). The hypothesis that follows is that by building more explainable systems, users would be better equipped to understand, and thereby trust, the intelligent agents or predictive modelling (Mercado et al., 2016; see also Kopitar, Cilar, Kocbek, & Stiglic, 2019, for a medical example). Trust, its links to transparency and its required conditions have been studied in many social-scientific disciplines, including law, over a long period of time. However, research on explainable AI typically does not cite or build on explanatory frameworks based in social science (Miller, Howe, & Sonenberg, 2017; see also de Graaf & Malle, 2017). More could be done in this regard (see Felzmann, Villaronga, Lutz, & Tamò-Larrieux, 2019, on a “relational” understanding of transparency in AI).

As the opening of the ‘black box’ may bring a number of challenges of a legal, technological and conceptual nature, as suggested by Wachter, Mittelstadt and Russell (2017), the notion of transparency in AI – as applied on markets and interacting with humans and institutions – could benefit from a wider definition than the more narrowly defined xAI (see Mittelstadt, Russell, & Wachter, 2019). Drawing on research in law, the social sciences and the humanities, the xAI domain could be complemented with a range of aspects of relevance for AI transparency (argued for in Larsson, 2019), such as:

  1. legal aspects of proprietorship, as code and data enter competitive markets (Pasquale, 2015), including trade secrets (Wachter, Mittelstadt, & Russell, 2017), described by Burrell (2016) as an aspect of intentional opacity (de Laat, 2018);
  2. the need to avoid abuse, indicating that too much openness in the wrong context may actually defeat the purpose of an AI-enabled process (Caplan, Donovan, Hanson, & Matthews, 2018; Miller, 2019). There can be incentives for “gaming the system” – examined by de Laat (2018) in terms of “perverse effects of disclosure” – affecting everything from trending topics on Twitter to security issues and welfare distribution;
  3. data and algorithm user literacy, indicating that ordinary users’ basic understanding has a direct effect on transparency in applied AI (Burrell, 2016; Haider & Sundin, 2019). This relates to educational efforts to improve literacy, for example on ‘computational thinking’ (Heintz, Mannila, & Färnqvist, 2016);
  4. the symbols and metaphors used for communication, that is, mathematically founded algorithms may depend on translations into human imaginaries and languages, for example in automated decisions or user agreements (Larsson, 2017; 2019). As concluded by Mittelstadt, Russell and Wachter (2019), there is a fundamental distinction between explainability models and explanations in philosophy and sociology: everyday explanations for humans are contrastive, selective, and social (Miller, 2017), which is not the same as the “interpretability” and explainability found in the xAI domain;
  5. the complexity of data ecosystems and markets trading in consumer data, which has an unquestionable effect on transparency with regard to the obscure origins of the underlying data and how personal data travel (Christl, 2017; Crain, 2018; Larsson, 2018; Pasquale, 2015). This relates to the debate around the search for a “right to an explanation” in the General Data Protection Regulation (GDPR), feared by some to lead to a “transparency fallacy” (Edwards & Veale, 2017); and, lastly,
  6. the obscuring effects of distributed, personalised outcomes, which create challenges not least for supervisory agencies with limited access and overview attempting to detect structural discrimination or other unfair outcomes (Larsson, 2018; see Larsson et al., 2019).

In this wider notion of transparency in AI, a key challenge from a governance perspective – as AI is applied and interacts with users, consumers and citizens – is arguably to find an appropriate balance between legitimate but not necessarily compatible interests. For example, as noted in the first draft of the ethics guidelines from the AI HLEG, there might be “fundamental tensions between different objectives (transparency can open the door to misuse; identifying and correcting bias might contrast with privacy protections)” (AI HLEG, 2018, p. 6). Thus, the interplay between AI technologies and societal values – the applied ethics, social and legal norms – underscores the importance of combining contributions from the social sciences and the humanities with computer science based AI research (see Dignum, 2019). This argument is in line with what Rahwan (2018) has emphasised in terms of a need to keep society “in-the-loop” in order to enable such balances.

3. Ethical and legal relevance of transparency in AI

Transparency in AI has increasingly been highlighted in regulatory developments, company policies and ethical guidelines over the last few years. For example, the EU adopted a strategy on AI in April 2018 and appointed the High-Level Expert Group (AI HLEG) mentioned above to give advice on both investment strategies and ethical issues with regard to AI in Europe. In December 2018, the European Commission presented a coordinated plan – “made in Europe” – prepared with member states to foster the development and use of AI in Europe. By mid-2019, all member states were expected to have their own strategies in place, which was not entirely the case (Van Roy, 2020). The coordinated plan from 2018 included four key areas: increasing investment, making more data available, fostering talent and ensuring trust. The last point, on how to ensure trusted, ethically aligned and trustworthy applications and development of AI, was also the focus of the AI HLEG report published in April 2019. The use of ethics guidelines as a tool for AI governance is in line with a global trend (Larsson, forthcoming). Jobin et al. (2019) mapped and analysed the current corpus of principles and guidelines on ethical AI. They conclude that 88% of the 84 analysed guidelines were published after 2016 and that the most commonly invoked concept is ‘transparency’. The AI HLEG’s guidelines contain an assessment list for practical use by companies that was tested by over 350 organisations during the second half of 2019, and the expert group will finalise a revised version during 2020. According to the European Commission, a key result of the feedback process is that “while a number of the requirements are already reflected in existing legal or regulatory regimes, those regarding transparency, traceability and human oversight are not specifically covered under current legislation in many economic sectors” (2020, p. 9). Another important mode of governance is standardisation, which can be seen in how the IEEE has a working group (P7001) for standardising transparency of autonomous systems, as well as how the international standardisation body ISO conducts an overview of ethical and societal concerns in AI (ISO/IEC JTC1/SC 42 Artificial intelligence).

Hence, advocacy for the importance of transparency in AI comes in different forms and from different types of stakeholders. While the regulatory field is too rich in relation to transparency in AI to be thoroughly accounted for here, at least three important points raised in recent literature may be mentioned. Firstly, and as pointed out by AI HLEG (2019), important areas are already regulated in the European Union, such as data protection, privacy, non-discrimination, consumer protection, and product safety and liability rules. Secondly, there are specific provisions that are particularly debated, such as the seeming right of data subjects “to obtain an explanation of the decision reached” where automated processing (GDPR, Art. 22) is involved (Recital 71). For example, Edwards and Veale (2017) state that the law is “restrictive, unclear, or even paradoxical concerning when any explanation-related right can be triggered” (p. 18; see also Felzmann, Villaronga, Lutz, & Tamò-Larrieux, 2019; Wachter, Mittelstadt, & Floridi, 2017). Edwards and Veale (2017) argue that even if it were a clear right warranted by the GDPR, the legal conception of explanations as “meaningful information about the logic of processing” may not be provided by the kind of machine learning explanations computer scientists have developed in response (compare point 4 above). In addition to data protection, there are calls for more studies on how administrative law should adapt to more automated forms of decision-making (e.g., Cobbe, 2019; see also Oswald’s (2018) review of a number of long-standing rules in English administrative law designed to regulate the discretionary power of the state). Thirdly, there are fields addressing transparency in AI that will require legal development, perhaps on ‘algorithmic auditing’ (Casey, Farhangi, & Vogl, 2019) or risk-adapted requirements (see European Commission, 2020; Datenethikkommission, 2019). There are also arguments suggesting that some notions in contemporary data protection, to use an example, might not be well fitted to current and coming uses of AI and machine learning abilities to gain insights from large amounts of individuals’ data. Hence, Wachter and Mittelstadt (2018) argue for a “right to reasonable inferences”.

Conclusion

Transparency in AI plays a very important role in the overall effort to develop more trustworthy AI as applied on markets and in society. It is particularly trust and issues of accountability that drive the contemporary value of the concept, including the narrower scope of transparency found in xAI. At the same time, ‘transparency’ has multiple uses in various disciplines and comes with a history from the 1990s. Transparency in AI, or what we interchangeably call AI transparency, takes a systems perspective rather than focusing on the individual algorithms or components used. It is therefore a broader but less ambiguous term than algorithmic transparency. In order to understand transparency in AI as an applied concept, it has to be understood in context, mediated by literacies, information asymmetries, “model-close” explainability and a set of competing interests. Transparency in AI, consequently, can best be seen as a balancing of interests and a governance challenge that demands multidisciplinary development to be adequately addressed.

References

AI HLEG, High-Level Expert Group on Artificial Intelligence. (2019). Ethics Guidelines for Trustworthy AI. The European Commission. https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai

Ananny, M., & Crawford, K. (2018). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society, 20(3), 973–989. https://doi.org/10.1177/1461444816676645

Ball, C. (2009). What is transparency? Public Integrity, 11(4), 293-308. https://doi.org/10.2753/PIN1099-9922110400

Biran, O., & Cotton, C. (2017). Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI. http://www.cs.columbia.edu/~orb/papers/xai_survey_paper_2017.pdf

Bodó, B., Helberger, N., Irion, K., Zuiderveen Borgesius, F., Möller, J., van de Velde, B., Bol, N., van Es, B., & de Vreese, C. (2018). Tackling the algorithmic control crisis – The technical, legal, and ethical challenges of research into algorithmic agents. Yale Journal of Law and Technology, 19(1). https://digitalcommons.law.yale.edu/yjolt/vol19/iss1/3/

Bodó, B., Helberger, N., & de Vreese, C. H. (2017). Political micro-targeting: a Manchurian candidate or just a dark horse? Internet Policy Review, 6(4). https://doi.org/10.14763/2017.4.776

Burrell, J. (2016). How the machine thinks: understanding opacity in machine learning algorithms. Big Data & Society, 3(1). https://doi.org/10.1177/2053951715622512

Caplan, R., Donovan, J., Hanson, L., & Matthews, J. (2018). Algorithmic Accountability: A Primer. Data & Society. https://datasociety.net/library/algorithmic-accountability-a-primer/

Casey, B., Farhangi, A., & Vogl, R. (2019). Rethinking Explainable Machines: The GDPR's 'Right to Explanation' Debate and the Rise of Algorithmic Audits in Enterprise. Berkeley Technology Law Journal, 34, 145–189. https://btlj.org/data/articles2019/34_1/04_Casey_Web.pdf

Christl, W. (2017). Corporate Surveillance in Everyday Life: How Companies Collect, Combine, Analyze, Trade, and Use Personal Data on Billions. Cracked Labs. https://crackedlabs.org/en/corporate-surveillance

Christensen, L.T., & Cornelissen, J. (2015). Organizational transparency as myth and metaphor. European Journal of Social Theory, 18(2), 132–149. https://doi.org/10.1177/1368431014555256

Citron, D. K., & Pasquale, F. (2014). The scored society: due process for automated predictions. Washington Law Review, 89(1). https://digitalcommons.law.uw.edu/wlr/vol89/iss1/2

Cobbe, J. (2019). Administrative law and the machines of government: judicial review of automated public-sector decision-making. Legal Studies, 39(4), 636–655. https://doi.org/10.1017/lst.2019.9

Crain, M. (2018). The limits of transparency: Data brokers and commodification. New Media & Society, 20(1), 88–104. https://doi.org/10.1177/1461444816657096

Datenethikkommission. (2019). Opinion of the Data Ethics Commission. Data Ethics Commission, German Federal Ministry of Justice and Consumer Protection. https://www.bmjv.de/DE/Themen/FokusThemen/Datenethikkommission/Datenethikkommission_EN_node.html

Dignum, V. (2019). Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way. Springer International Publishing. https://doi.org/10.1007/978-3-030-30371-6_6

Doshi-Velez, F., Kortz, M., Budish, R., Bavitz, C., Gershman, S., O’Brien, D., Shieber, S., Waldo, J., Weinberger, D., & Wood, A. (2017). Accountability of AI under the law: The Role of Explanation. arXiv. https://arxiv.org/abs/1711.01134v1

Edwards, L., & Veale, M. (2017). Slave to the algorithm: Why a right to an explanation is probably not the remedy you are looking for. Duke Law & Technology Review, 16(1), 18–84. https://scholarship.law.duke.edu/dltr/vol16/iss1/2

European Commission. (2020). White Paper on Artificial Intelligence: a European approach to excellence and trust (White Paper No. COM(2020) 65 final). European Commission. https://ec.europa.eu/info/publications/white-paper-artificial-intelligence-european-approach-excellence-and-trust_en

Fast, E., & Horvitz, E. (2017). Long-term trends in the public perception of artificial intelligence. In Thirty-First AAAI Conference on Artificial Intelligence. https://www.aaai.org/ocs/index.php/AAAI/AAAI17/paper/view/14581

Fecher, B., & Friesike, S. (2014). Open science: one term, five schools of thought. In S. Bartling, & S. Friesike (Eds.), Opening science (pp. 17-47). Springer International Publishing. https://doi.org/10.1007/978-3-319-00026-8_2

Felzmann, H., Villaronga, E. F., Lutz, C., & Tamò-Larrieux, A. (2019). Transparency you can trust: Transparency requirements for artificial intelligence between legal norms and contextual concerns. Big Data & Society, 6(1), 1–14. https://doi.org/10.1177/2053951719860542

Fenster, M. (2015). Transparency in search of a theory. European Journal of Social Theory, 18(2), 150–167. https://doi.org/10.1177/1368431014555257

Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., Luetge, C., Madelin, R., Pagallo, U., Rossi, F., Schafer, B., Valcke, P., & Vayena, E. (2018). AI4People — An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689-707. https://doi.org/10.1007/s11023-018-9482-5

Forssbæck, J., & Oxelheim, L. (2015). The multifaceted concept of transparency. In J. Forssbæck & L. Oxelheim (Eds.), The Oxford handbook of economic and institutional transparency. Oxford University Press. https://doi.org/10.1093/oxfordhb/9780199917693.013.0001

Fox, J. (2007). The uncertain relationship between transparency and accountability. Development in Practice, 17(4-5), 663–671. https://doi.org/10.1080/09614520701469955

Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., Luetge, C., Madelin, R., Pagallo, U., Rossi, F., & Schafer, B., (2018). AI4People — An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689–707. https://doi.org/10.1007/s11023-018-9482-5

Gasser, U., & Almeida, V. A. (2017). A layered model for AI governance. IEEE Internet Computing, 21(6), 58–62. https://doi.org/10.1109/mic.2017.4180835

de Graaf, M. M. A., & Malle, B. F. (2017). How People Explain Action (and Autonomous Intelligent Systems Should Too). 2017 AAAI Fall Symposium Series. AAAI Fall Symposium on Artificial Intelligence for Human-Robot Interaction. https://www.aaai.org/ocs/index.php/FSS/FSS17/paper/view/16009

Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Pedreschi, D., & Giannotti, F. (2018). A survey of methods for explaining black box models. ACM Computing Surveys, 51(5). https://doi.org/10.1145/3236009

Haider, J., & Sundin, O. (2018). Invisible Search and Online Search Engines: The ubiquity of search in everyday life. Routledge.

Hansen, H. K., Christensen, L. T., & Flyverbom, M. (2015) Introduction: Logics of transparency in late modernity: Paradoxes, mediation and governance. European Journal of Social Theory 18 (2), 117-131. https://doi.org/10.1177/1368431014555254

Heintz, F., Mannila, L., & Färnqvist, T. (2016) A Review of Models for Introducing Computational Thinking, Computer Science and Computing in K-12 Education. In Proceedings of the 46th Frontiers in Education (FIE). https://doi.org/10.1109/fie.2016.7757410

Hildebrandt, M., & Koops, B-J. (2010). The Challenges of Ambient Law and Legal Protection in the Profling Era. The Modern Law Review, 73(3), 428–460. https://doi.org/10.1111/j.1468-2230.2010.00806.x

Hood, C. (2006). Transparency in historical perspective. In C. Hood, & D. Heald (Eds.), Transparency: The Key to Better Governance? (pp. 3–23). Oxford University Press. https://doi.org/10.5871/bacad/9780197263839.003.0001

IEEE (2019) Ethically Aligned Design. First Edition. A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems. IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. https://ethicsinaction.ieee.org/

Jakobsson, P. (2012). Öppenhetsindustrin [The openness industry] [PhD Thesis]. Örebro University.

Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399. https://doi.org/10.1038/s42256-019-0088-2

Kemper, J., & Kolkman, D. (2018). Transparent to whom? No algorithmic accountability without a critical audience. Information, Communication & Society, 24(14), 2081–2096. https://doi.org/10.1080/1369118x.2018.1477967

Koene, A., Clifton, C., Hatada, Y., Webb, H., & Richardson, R. (2019). A governance framework for algorithmic accountability and transparency (Study No. PE 624.262) Panel for the Future of Science and Technology, Scientific Foresight Unit (STOA), European Parliamentary Research Service. https://www.europarl.europa.eu/RegData/etudes/STUD/2019/624262/EPRS_STU(2019)624262_EN.pdf

Koivisto, I. (2016). The anatomy of transparency: the concept and its multifarious implications (EUI Working Paper No. MWP 2016/09). Max Weber Programme for Postdoctoral Studies, European University Institute. http://hdl.handle.net/1814/41166

Kopitar, L., Cilar, L., Kocbek, P., & Stiglic, G. (2019). Local vs. Global Interpretability of Machine Learning Models in Type 2 Diabetes Mellitus Screening. In M. Marcos, J. M. Juarez, R. Lenz, G. J. Nalepa, S. Nowaczyk, M. Peleg, J. Stefanowski, & G. Stiglic (Eds.), Artificial Intelligence in Medicine: Knowledge Representation and Transparent and Explainable Systems (pp. 108–119). Springer. https://doi.org/10.1007/978-3-030-37446-4_9

de Laat, P. B. (2018). Algorithmic decision-making based on machine learning from Big Data: Can transparency restore accountability? Philosophy & technology, 31(4), 525–541. https://doi.org/10.1007/s13347-017-0293-z

Larsson, S. (in press). On the Governance of Artificial Intelligence through Ethics Guidelines, Asian Journal of Law and Society.

Larsson, S. (2019). The Socio-Legal Relevance of Artificial Intelligence. Droit et Société, (103), 573–593. https://doi.org/10.3917/drs1.103.0573

Larsson, S. (2018). Algorithmic Governance and the Need for Consumer Empowerment in Data-driven Markets. Internet Policy Review, 7(2). https://doi.org/10.14763/2018.2.791

Larsson, S. (2017). Conceptions in the Code. How Metaphors Explain Legal Challenges in Digital Times. Oxford University Press. https://doi.org/10.1093/acprof:oso/9780190650384.001.0001

Larsson, S. (2014). Justice ‘Under’ Law – The Bodily Incarnation of Legal Conceptions Over Time. International journal for the Semiotics of Law, 27(4), 613–626. https://doi.org/10.1007/s11196-013-9341-x

Larsson, S., Anneroth, M., Felländer, A., Felländer-Tsai, L., Heintz, F., & Cedering Ångström, R. (2019). Sustainable AI: An inventory of the state of knowledge of ethical, social, and legal challenges related to artificial intelligence. AI Sustainability Center. http://www.aisustainability.org/wp-content/uploads/2019/11/Socio-Legal_relevance_of_AI.pdf

Lakoff, G., & Johnson, M. (1980). Metaphors We Live By. University of Chicago Press.


Lakoff, G., & Johnson, M. (1999). Philosophy in the Flesh: The Embodied Mind and Its Challenge to Western Thought. Basic Books.

Legg, S., & Hutter, M. (2007). A collection of definitions of intelligence. In B. Goertzel, & P. Wang (Eds.), Advances in Artificial General Intelligence: Concepts, Architectures and Algorithms. Proceedings of the AGI Workshop 2006 (pp. 17–24). IOS Press.

Lepri, B., Oliver, N., Letouzé, E., Pentland, A., & Vinck, P. (2018). Fair, Transparent, and Accountable Algorithmic Decision-making Processes. Philosophy & Technology, 31, 611–627. https://doi.org/10.1007/s13347-017-0279-x

Linegang, M. P., Stoner, H. A., Patterson, M. J., Seppelt, B. D., Hoffman, J. D., Crittendon, Z. B., & Lee, J. D. (2006). Human-automation collaboration in dynamic mission planning: A challenge requiring an ecological approach. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 50(23), 2482–2486. https://doi.org/10.1177/154193120605002304

Margetts, H. (2011). The internet and transparency. The Political Quarterly82(4), 518–521. https://doi.org/10.1111/j.1467-923X.2011.02253.x

Martinez, R. (2019). Artificial Intelligence: Distinguishing Between Types & Definitions. Nevada Law Journal, 19(3), 2015–1042. https://scholars.law.unlv.edu/nlj/vol19/iss3/9/

McCorduck, P., & Cfe, C. (2004). Machines who think: A personal inquiry into the history and prospects of artificial intelligence. CRC Press.

Mercado, J.E., Rupp, M.A., Chen, J.Y., Barnes, M.J., Barber, D., & Procci, K. (2016). Intelligent agent transparency in human–agent teaming for Multi-UxV management, Human Factors, 58(3), 401–415. https://doi.org/10.1177/0018720815621206

Miller, T., Howe, P., Sonenberg, L. (2017). Explainable AI: Beware of Inmates Running the Asylum. Or: How I Learnt to Stop Worrying and Love the Social and Behavioural Sciences. IJCAI 2017 Workshop on Explainable Artificial Intelligence (XAI). Available at https://arxiv.org/abs/1712.00547v2

Miller, T. (2019) Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1–38. https://doi.org/10.1016/j.artint.2018.07.007

Mittelstadt, B., Russell, C., & Wachter, S. (2019). Explaining explanations in AI. Proceedings of the conference on fairness, accountability, and transparency - FAT* ‘19, 279–288. https://doi.org/10.1145/3287560.3287574

Monett, D., Lewis, C. W., & Thórisson, K. R. (2020). Introduction to the JAGI Special Issue “On Defining Artificial Intelligence”—Commentaries and Author’s Response. Journal of Artificial General Intelligence, 11(2), 1–100. https://doi.org/10.2478/jagi-2020-0003

Oswald, M. (2018). Algorithm-assisted decision-making in the public sector: framing the issues using administrative law rules governing discretionary power. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2128). https://doi.org/10.1098/rsta.2017.0359

Pasquale, F. (2015). The Black Box Society. The Secret Algorithms That Control Money and Information. Harvard University Press.

Rahwan, I. (2018). Society-in-the-loop: programming the algorithmic social contract. Ethics and Information Technology, 20(1), 5-14. https://doi.org/10.1007/s10676-017-9430-8

Reddy, M. (1979) The Conduit Metaphor: A Case of Frame Conflict in our Language about Language. In A. Ortony (Ed.), Metaphor and Thought (pp. 284–324). Cambridge University Press.


Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). “Why should I trust you?”: Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, 1135–1144. https://doi.org/10.1145/2939672.2939778

Ruijer, E., Grimmelikhuijsen, S., & Meijer, A. (2017). Open data for democracy: Developing a theoretical framework for open data use. Government Information Quarterly, 34(1), 45–52. https://doi.org/10.1016/j.giq.2017.01.001

Stone, P., Brooks, R., Brynjolfsson, E., Calo, R., Etzioni, O., Hager, G., Hirschberg, J., Kalyanakrishnan, S., Kamar, E., Kraus, S., Leyton-Brown, K., Parkes, D., Press, W., Saxenian, A., Shah, J., Tambe, M., & Teller, A. (2016). Artificial intelligence and life in 2030 (Study Panel Report 2015-2016). https://ai100.stanford.edu/2016-report

Theodorou, A., Wortham, R. H., & Bryson, J. J. (2017). Designing and implementing transparency for real time inspection of autonomous robots. Connection Science, 29(3), 230–241. https://doi.org/10.1080/09540091.2017.1310182

Van Roy, V. (2020) AI Watch - National strategies on Artificial Intelligence: A European perspective in 2019 (JRC Technical Report No. EUR 30102 EN / JRC119974). Publications Office of the European Union. https://doi.org/10.2760/602843

Wachter, S., & Mittelstadt, B. D. (2018). A right to reasonable inferences: re-thinking data protection law in the age of Big Data and AI. Columbia Business Law Review, 2019(2). https://journals.library.columbia.edu/index.php/CBLR/article/view/3424

Wachter, S., Mittelstadt, B., & Floridi, L. (2017). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2), 76–99. https://doi.org/10.1093/idpl/ipx005

Wachter, S., Mittelstadt, B., & Russell, C. (2017). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. Harvard Journal of Law & Technology, 31(2). https://jolt.law.harvard.edu/assets/articlePDFs/v31/Counterfactual-Explanations-without-Opening-the-Black-Box-Sandra-Wachter-et-al.pdf

Zarsky. T (2013). Transparent predictions. University of Illinois Law Review, 2013(4), 1503–1570. http://illinoislawreview.org/wp-content/ilr-content/articles/2013/4/Zarsky.pdf



Platform transience: changes in Facebook’s policies, procedures, and affordances in global electoral politics


This paper is part of Data-driven elections, a special issue of Internet Policy Review guest-edited by Colin J. Bennett and David Lyon.

In September 2018, Facebook announced that it would no longer send its employees to US campaigns as ‘embeds’ to facilitate their advertising buys and social media presences, a programme that was the subject of considerable controversy (Dave, 2018). Months earlier, Facebook rolled out a political and social issues ads verification programme and ad archive in the US and subsequently in other countries, seemingly in response to pressure from lawmakers and journalists to safeguard national elections given increasing evidence of state-sponsored disinformation campaigns (Introducing the Ad Archive Report, 2018; Requiring Authorization and Labeling for Ads with Political Content, 2019; Perez, 2018). At the time of this writing, it is widely reported that Facebook is considering more changes to its political advertising services as part of a contentious US debate and significant shifts by Google and Twitter, including the latter’s decision to ban political advertising entirely (Scola, 2019).

These changes are deeply significant for what is a global $3.6 billion USD political advertising business on Facebook alone (Kanter, 2018). This is a small percentage of revenue for the global firm, but Facebook is increasingly the central way that candidates around the world get their messages in front of voters. These changes significantly impact the cost structures and efficiencies of political ads, in turn reorienting the actors who create, target, and test them in ways that we will only come to appreciate with time. And, more broadly, the decisions technology firms make shape which voters are exposed to political advertising in the course of electioneering and how they encounter political communications.

The events since the 2016 US presidential election illustrate the rapidity and scale of change in platforms. The fact that platforms continually change is well-cited in the academic literature and referred to in journalistic accounts. Gawer (2014) has an expansive view on how platforms are “evolving organizations” with respect to innovation and competition and technology. There is a very well-documented literature on machine learning, artificial intelligence, and algorithms that stresses continual change (e.g., Ananny, 2016; Beer, 2017; Klinger and Svensson, 2018). Our aim here is to develop an inductive analysis of change at the level of policies, procedures, and affordances through two case studies of Facebook in the context of electoral politics: the ephemerality of the “I’m a Voter” button Facebook rolled out internationally and the data and targeting behind political advertising. We chose these two cases for their relevance to (and normative implications for) data-driven elections but also for the rich array of secondary sources available given that rapid change makes studying platforms and their effects difficult. The “I’m a Voter” button and changes in advertising affordances are unique in that they received significant political and industry media coverage.

We argue that these are two cases of platform transience – a concept we use to describe how platforms change, often dramatically and in short periods of time, in their policies, procedures, and affordances. We use the word ‘transience’ because it captures the idea that platform change is fast and continual, and that platforms are as a result impermanent and ephemeral in significant ways. As we argue through our analyses of these two case studies, transience can seemingly be spurred by normative pressure from external stakeholders. In our discussion section, we detail the implications of this and, as a call for future research, other potential mechanisms beyond external pressure that may underlie platform transience.

These instances of transience also reveal the widespread failure of Facebook to be transparent about and disclose the workings of an electoral product that the company itself documented was highly impactful in spurring voting. Looking back, for instance, Facebook’s blog contains some information on the “I’m a Voter” product and who saw it during the course of elections (“Announcement”, 2008; “Election Day 2012 on Facebook | Facebook Newsroom”, 2012; “Election Day 2014 on Facebook | Facebook Newsroom”, 2014), but it fails to explain which elections in which countries the product was used in after 2014, or even what determined who received the notifications. In this context, journalists often struggled to provide the public with details on the product – including screenshots of what the user interface looked like, observed examples of when some people received the reminder and others did not, and timelines of changes.

As such, these cases also illustrate the implications of platform transience. First, for public representatives such as journalists and policymakers, determining the social and political consequences of platforms to hold them accountable or design policy interventions is especially hard given the pace of change and the lack of transparency that often accompanies them. Second, in the political context, it is likely that better resourced electoral and issue campaigns will be uniquely capable of navigating rapid change, from being able to hire staffers to meet new platform requirements such as verification to having direct access to platforms through dedicated account managers. This raises fundamental issues of electoral fairness. Third, for the users of platforms, transience and the lack of disclosure and accountability increases the likelihood of hidden manipulation and, more broadly, unequal information environments. In the case of “I’m a Voter”, entirely unbeknownst to them, some people were the targets of a social pressure experiment designed to spur voting. In the context of data and targeting, some citizens, especially those most politically engaged and ideologically extreme, receive more attention from campaigns based on the underlying technical affordances of platforms.

This paper proceeds in three parts. First, we introduce our case studies around the role of platforms in democratic processes. We then turn to our two case studies to document and analyse platform transience. The paper concludes with a call for future research on other cases of platform transience and details the implications for institutional politics.

Platform transience

Over the past decade, scholars have grown increasingly attentive to the ways social media platforms – especially those operated by firms such as Facebook, Google, Twitter, Snapchat, and their sister companies and subsidiaries such as YouTube, Instagram, and WhatsApp – serve as infrastructure for much of social life in countries around the world. As Plantin and Punathambekar (2019, p. 2) argue, platforms such as Facebook and Google have:

acquired a scale and indispensability – properties typical of infrastructures – such that living without them shackles social and cultural life. Their reach, market power, and relentless quest for network effects have led companies like Facebook to intervene in and become essential to multiple social and economic sectors.

In this literature, ‘platforms’ refers simultaneously to technical infrastructures and the corporate organisations that develop, maintain, monetise, and govern them. This means that analysis of platforms entails their infrastructural elements – such as their ubiquity and scale through their technical components – alongside their corporate organisation, policies and procedures, and revenue models. There are a number of veins of literature that analyse various aspects of platforms in this expansive sense. There is an emerging body of work on content moderation, especially in relation to the practices and policies behind what these companies do (Gillespie, 2018) and how this labour is structured and performed (Roberts, 2019). Other research has analysed the economics of platforms and their business models (for a review see de Reuver, Sørensen, and Basole, 2018), governance structures (e.g., Constantinides, Henfridsson, and Parker, 2018; Gorwa, 2019; Helberger, Pierson, and Poell, 2018), and data (e.g., Helmond, 2015). A body of legal analysis details the regulatory implications of platforms, especially in relation to competition (Pasquale, 2012) and user privacy and data (Balkin, 2016).

We know of only a few research works to date that have systematically analysed platform companies such as Facebook and Google through the lens of their interactions with other fields. As Van Dijck, Poell, and de Waal (2018) argue, platforms are “programmable digital architecture designed to organize interactions between users -- not just end users but also corporate entities and public bodies….” (ibid., p. 9) – in the process transforming other fields, which they demonstrate through case studies of news (see also Nielsen and Ganter, 2018), urban transport, health care, and education. Other scholars have analysed how the specific corporate organisation, policies, and business models of platforms in one domain, such as politics, impact that field (Kreiss and McGregor, 2018; Kreiss and McGregor, 2019).

The ways that platforms shape other fields are especially interesting given that platforms undergo continual change. Yet we lack understanding of the mechanisms and consequences of platform change, particularly in the context of platforms’ organisational workings such as policies and procedures, the products they offer, or their affordances. As noted above, a number of scholars have examined aspects of platform change, especially in the context of algorithms (Bucher, 2018) and the versioning of technical products (e.g., Chun, 2016; Karpf, 2012). Platform transience is especially likely to impact actors in other fields given the ways instability disrupts institutionalised ways of working and established practices. At the same time, it likely impacts sectors differentially, depending on the degree to which fields rely on platform products and services and on their comparative autonomy with respect to economic or technological power.

We are particularly interested in change at the levels of policies, procedures, and affordances. With respect to ‘policies’, we mean the company-derived rules governing the use of platforms. This is an expansive category that includes everything from terms of service to technical standards. In terms of ‘procedures’, we mean platforms’ ways of working both internally and externally with stakeholders. These include everything from the mechanisms that platform companies have for enforcing policy decisions to how they enable those affected by them to contest decisions. In the political domain, procedures relate to the ways that Facebook political ad sales staffers are vehicles for practitioners to contest policy decisions (such as ads rejected for objectionable content) but also more broadly the organisational and staffing patterns that the company has developed for reviewing content, adjudicating disputes, advising campaigns, developing new political products, etc. We define ‘affordances’ in terms of previous work as: “what various platforms are actually capable of doing and perceptions of what they enable, along with the actual practices that emerge as people interact with platforms” (Kreiss, Lawrence, and McGregor, 2018, p. 19). The concept of affordances is important because it points to the ways that code structures what people can do on and with platforms, even while platforms invite particular uses through framing what their features are for (see Nagy and Neff, 2015). Policies, procedures, and affordances are likely inter-related in the sense that change in one domain likely affects the others. For example, changes in policies can lead to new procedures, such as when Facebook required the verification of political advertisers, which then led to new registration processes with the platforms. Sometimes, affordances create the need for new policies, such as when the ability to edit publishers’ headlines in ads spurred new policies to prevent this from furthering misinformation (see Kreiss and McGregor, 2019).

The case studies

To analyse inductively why platforms change and how those changes impact other fields, we developed two case studies relating to Facebook’s international electoral efforts. These two case studies are constructed primarily from data collected for a larger comparative project from January to May 2019 on Facebook and Google’s involvement in government and elections in five countries: Chile, Germany, India, South Africa, and the United States. We conducted a qualitative content analysis of news coverage about Facebook’s work in institutional politics, including industry news outlets such as AdExchanger and Campaigns and Elections. We also analysed material from Facebook’s Newsroom, company help centre documents, and company blogs and websites regarding products and services relating to Facebook’s work in institutional politics. Finally, we downloaded online resources provided to political actors, such as Facebook’s Digital Diplomacy best practices guide and English and German versions of Facebook guides for politicians and governmental actors.

Research into products and services started in the US context. When articles or websites listed additional countries that the services were offered in, we noted this. The creation of our search terms was an iterative and on-going process. After we made lists of policies, products, services, and affordances in English in the US context, we used web services to translate them into Spanish and German and searched for similar material via Google Incognito and a virtual private network (VPN) connection from each relevant country (we chose Express VPN because of its servers in each of the five countries). During the period from January to May 2019 we also used a US-based Facebook Ad Manager to explore the ad targeting interface and create campaigns, which we did not turn on (i.e., no ads were purchased but the campaigns were built out and saved in the platform as inactive or paused). In addition, we created accounts based in Germany and Chile using a VPN connection from those countries.

During this process of passive data collection (see Karpf, 2012), we were able to document Facebook’s advertising interface changing as we used it across countries. We also found international coverage of the “I’m a Voter” button and other election reminders which were not clearly documented in Facebook’s Newsroom. We then selected these two cases for further analysis and broadened our search outside of the original five countries. While we focus on Facebook in the empirical sections below, in the discussion section we seek to inductively develop an analysis that extends to all platforms in the context of the ways external pressures contribute to platform transience.

The “I’m a Voter” button and data-driven politics

In 2008, Facebook released an “It’s Election Day” reminder with a button for users to declare “I’m a Voter” to their friends. The “I’m a Voter” button was created to appear for only a day. It was ephemeral by design. The feature included pictures of a few select friends and the total number of the user’s friends who self-declared they had voted in the election (Figure 1) (“Announcement: Facebook/ABC News Election ’08 | Facebook Newsroom”, 2008; “Election Day 2012 on Facebook | Facebook Newsroom”, 2012). By 2012 this platform feature was accompanied by ways for users to find their polling place, share their political positions, and join groups to debate issues (“Election Day 2012 on Facebook | Facebook Newsroom”, 2012). In 2012, Facebook was just gaining many of the features that are now core to its platform including running ‘sponsored stories’ (also known as native advertising) in users’ news feeds (Mullins, 2016; D’Onfro, 2016).

Figure 1: 2012, United States (“Election Day 2012 on Facebook | Facebook Newsroom”, 2012)

On Facebook’s blog, the company stated that these civic engagement products were born out of a commitment “to encouraging people who use our service to participate in the democratic process” (“Election Day 2012 on Facebook | Facebook Newsroom”, 2012). The existence of these tools as early as 2008 speaks to this commitment — Facebook, founded in 2004, began putting resources into engaging its users in politics during the first presidential election after its founding.

While at first glance promoting voter participation might seem normatively desirable on democratic grounds, Facebook’s attempt to engage citizens in democracy through its platform sparked controversy precisely because of the evidence the company provided that it actually worked. In 2010, Facebook partnered with researchers to test the impact of different versions of the election day reminders in a field experiment. All Facebook users over the age of 18 in the US were assigned to treatment and control groups, and voter turnout was later measured through actual voting records (Bond et al., 2012). In the treatment conditions, some users saw the “I’m a Voter” button with a count of how many people had marked themselves as voters (informational group); others saw that count along with pictures of some of their friends who had declared themselves as having voted (social group) (see Figure 2) (ibid.). In the control group, users saw no election reminder at all. The study estimated that 340,000 additional people turned out to vote because of the election reminders (ibid.). These findings are in line with many other studies and experiments showing how social incentives increase voter turnout (Gerber, Green, and Larimer, 2008). This experiment was cited in the campaign industry press as evidence of the power of Facebook in elections (Nyczepir, 2012).

Figure 2: United States Election Day 2010 turn-out study experimental conditions: Informational group and social group (Bond et al., 2012).
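To make concrete how such an aggregate estimate is reached, the arithmetic is simple: the difference in turnout rates between a treatment group and the control group, multiplied by the number of people in the treatment group, yields an estimated count of additional voters. The following is a minimal sketch using hypothetical numbers; the group sizes and turnout rates are invented for illustration and are not the figures reported by Bond et al. (2012).

```python
# Hypothetical illustration of how an aggregate turnout effect can be estimated.
# All numbers below are invented; they are NOT the figures from Bond et al. (2012).

social_group_size = 60_000_000    # users shown the reminder with friends' pictures
social_turnout_rate = 0.184       # share of that group matched to a voting record
control_turnout_rate = 0.180      # share of the (no-reminder) control group who voted

# The per-person effect is the difference in turnout rates; scaling it by the
# size of the treatment group gives the estimated number of additional voters.
effect_per_person = social_turnout_rate - control_turnout_rate
additional_voters = effect_per_person * social_group_size

print(f"Estimated additional voters: {additional_voters:,.0f}")
# With these made-up numbers: roughly 240,000 additional voters.
```

The point of the sketch is that even a fraction of a percentage point difference in turnout, applied across tens of millions of users, translates into hundreds of thousands of votes, which is why the design, reach, and undisclosed variation of such features matter.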

Questions soon arose in the United States, and later in Europe during the subsequent rollout, about who would benefit from a tool that had the power to increase vote share in potentially consequential ways. Reporting from the United States, Sweden, and the United Kingdom at the time raised concerns about the questionable ethics of testing different versions of the notification, as well as about the fact that the company was not clear about which versions were shown to which users and where (Grassegger, 2018; Habblethwaite, 2014; Sifry, 2014).

Indeed, there was very little transparency about this tool that evidence suggested could be deeply impactful in electoral contexts. Based on screenshots and write-ups by journalists from nine countries from 2014 to 2016, the “I’m a Voter” button’s specific features changed and varied from year to year and country to country. While some countries had the “I’m a Voter” button (Figure 3), others used “share now” instead (Figure 4). The button could be on the right (Figures 3 and 4) or the left side (Figures 5, 6, 8, and 9), and under the option to ‘share your civic participation’ there could be privacy notifications that users were sharing with the public (Figure 4), a prompt to “return to Facebook” (Figure 5), an option for more information (Figures 3, 6, 8, and 9), or no additional prompt at all (Figure 7). This feature was rolled out in Israel and India and was reported by Reuters to have been rolled out worldwide in 2014. The company itself did not verify this, given that no Facebook Newsroom articles cover the rollout in these countries (Cohen, 2014; Debenedetti, 2014; Kenan, 2015). At the same time, the differences in the design and prompts were not addressed by Facebook in its blog posts, nor were the effects of user interactions with different types of reminders in different countries.

Figure 3: 2014, United States (“Election Day 2014 on Facebook | Facebook Newsroom,” 2014)
Figure 4: 2014, Scotland (Grassegger, 2018)
Figure 5: 2014, India (Rodriguez, 2014)
Figure 6: 2014, European Union (Cardinale, 2014)
Figure 7: Israel, 2015 (Kenan, 2015)
Figure 8: Philippines, 2016 (Lopez, 2016)
Figure 9: United Kingdom, 2016 (Griffin, 2016)

These undocumented rollouts and differing designs are especially notable given the documented electoral impacts of this tool and the difficulty journalists and researchers faced, and continue to face, in identifying and chronicling its deployment – both in the moment and especially now, when systematically reconstructing international product rollouts, variation, and change is impossible. Over the decade of its development, Facebook was not transparent about its electoral product and failed to disclose basic information to electoral stakeholders such as journalists. When the company itself provided information, it raised more questions. Michael Buckly (Facebook’s vice president for global business communications) told a Mother Jones reporter that not everyone in the United States saw the feature during the 2012 presidential election due to software bugs, but that these bugs were entirely random (Sifry, 2014). Buckly stated that, in contrast, during the 2014 midterm election “almost every user in the United States over the age of 18 will see the ‘I Voted’ button”, although, as the author notes, these comments were unverifiable and the accuracy of the statement was unclear (ibid.). Indeed, four years later the reach of “I Voted” was still unclear, and observers were still attempting to track the deployment of the tool as best they could. For example, in Iceland a lawyer questioned her friends and believed that not everybody was seeing the vote reminder at the same time, on the same devices, or at all (Grassegger, 2018). Despite her attempts, being outside the company, she could not determine specific variations of the text, where the reminder appeared on users’ feeds, or whether it was being displayed on all operating systems and on older versions of the Facebook app (Grassegger, 2018).

As such, the “I’m a Voter” case also reveals the lack of transparency and disclosure by Facebook and the scope of international variation and transience of the platform. It likewise reveals the role of journalists and other observers in seeking to hold Facebook accountable, and the apparent success they have had at times in compelling platform change. For example, negative press coverage and pressure from journalists seemingly pushed Facebook to change its policies on running tests on users and to declare publicly that it had stopped official testing of the “I’m a Voter” affordance in 2014, at least in the United States (Ferenstein, 2014). Journalistic scrutiny might also have prompted the downplaying of the tool itself. In 2016, for instance, Facebook made changes to the tool in the United States, making it more difficult to access and requiring users to click through multiple menus to share the fact that they had voted (Figure 10) (Grant, 2016). Also in 2016 in the US, Facebook released, and promoted, many more civic engagement tools centred on educating users about political issues and what would be on their ballots, rather than on the “I’m a Voter” declaration (“Preparing for the US Election 2016 | Facebook Newsroom”, 2016).

Figure 10: United States, 2016 (Grant, 2016)

In the end, the extent of changes in the interface for each election, international variations in the tool, who saw election reminders and their effects, the data the company collected, and what will happen during future elections are all unknown. In addition to making it difficult to hold Facebook accountable, these transient affordances impact the political information environment in unknown and potentially deeply problematic ways, as we return to in the discussion section.

Political microtargeting and related advertising affordances

Facebook provides audiences to advertisers to target. Some of these audiences are segmented based on user data such as self-declared age, self-declared employer, or their interests deduced through the websites and Facebook pages they visit. Facebook also offers geolocation targeting of countries, states, cities, and, in the United States, congressional districts (Ads Manager - Manage Ads – Campaigns, n.d.). Facebook’s behavioural and interest-based targeting includes online or offline behaviour and interests, such as visiting specific locations. The company also offers cross device targeting for almost all of its advertising, meaning that ads are delivered to people on mobile or desktop devices and these profiles are linked so that responses to the advertisements can be attributed back to a single person. In addition, Facebook allows advertisers to load their own “first party” data, including email addresses gathered in stores and website visits online. These first-party audiences can then be targeted on the platform and they can be used as the basis for lookalike audiences.
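To make these targeting dimensions concrete, the sketch below shows roughly the shape of an audience definition combining them. It is a simplified, hypothetical illustration rather than a verbatim specification from Facebook’s ad interface or Marketing API; the field names and identifiers are indicative only.

```python
# Hypothetical, simplified sketch of an audience definition combining the
# targeting dimensions described above. Field names and IDs are illustrative;
# they are not taken verbatim from Facebook's ad interface or Marketing API.

audience_definition = {
    "geo_locations": {
        "countries": ["US"],
        # In the US, targeting could also be narrowed to states, cities,
        # or congressional districts.
    },
    "age_min": 18,
    "age_max": 65,
    "interests": [
        # Interest categories inferred from pages visited and liked,
        # e.g. an interest in a political figure or party.
        {"id": "HYPOTHETICAL_ID_1", "name": "Barack Obama"},
    ],
    "behaviors": [
        # Behavioural segments, e.g. the now-removed
        # "likely to engage with political content" categories.
        {"id": "HYPOTHETICAL_ID_2", "name": "Likely to engage with political content"},
    ],
    "custom_audiences": [
        # First-party lists uploaded by the advertiser (e.g. hashed email addresses),
        # which can also seed lookalike audiences.
        {"id": "HYPOTHETICAL_LIST_ID"},
    ],
}
```

The point is simply to show how demographic, geographic, interest, behavioural, and first-party dimensions combine into a single audience definition that is then matched to individual profiles across devices.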

Ironically, given all the negative attention it has received (Bashyakarla et al., 2019; Chester and Montgomery, 2017), Facebook’s interest-based and behavioural targeting is notably limited in the political sphere in the US. The content guidelines and permissible forms of targeting the company allows on its platform are far more restrictive than what an expansive constitutional First Amendment in the United States protects and what political practitioners currently do in other mediums, such as direct mail or door-to-door canvassing. For example, in the United States, when we started our research in spring 2019, Facebook’s ad targeting did not have “registered Republicans”, “registered Democrats”, or any party membership categories available to target, nor did it include voting behaviour, such as who voted in the last election (Ads Manager - Manage Ads – Campaigns, n.d.). However, there were ideological targeting options for “US Politics”, including “very conservative”, “conservative”, “moderate”, “liberal”, and “very liberal” as well as “likely to engage with political content (conservative)”, “moderate”, and “liberal” (Figure 11) (ibid).

Figure 11: Screenshot from the Facebook ad buying interface, 16 January 2019

These micro-targeting options were also available from Facebook Ad Manager accounts made in Chile and Germany (Administrador de anuncios, n.d.; Werbeanzeigenmanager, n.d.). The voter targets, however, were oddly specific to the United States – meaning that users in Chile and Germany were being invited to target US citizens. In addition to these political categories, across all the countries we considered, advertisers could search for public figures such as Angela Merkel, Barack Obama, or Sebastian Piñera, or political groups such as the Partido Conservador or the Republican Party, and target users who “have expressed an interest in or liked pages related to” those people or groups (ibid.). The degree to which any of these categories are used by advertisers is unknown, as is their actual accuracy in predicting voter identification with liberal or conservative ideologies.

In a clear example of the transience of Facebook affordances, these advertising categories changed while we were conducting the research for this project. On 14 March 2019, Facebook Ad Manager abruptly notified us that the US “very conservative” to “very liberal” political targeting options referenced above were being removed (Figure 12). They were also removed in Germany and Chile. The other targeting capabilities, including general interest in parties and political figures and “likely to engage with political content”, were still available. Unlike other well-publicised advertising changes, we could find no Facebook blog posts or major news coverage related to the removal of these five political ad audiences. Had we not had audience universes set up targeting these segments (though, as detailed above, we did not actively run advertising), we do not think we would have been aware of this change.

Figure 12: Screenshot from US Facebook ad buying interface, 14 March 2019. Audience segments being removed.

What prompted the removal of these advertising categories is a mystery, although we suspect it is related to the ongoing and intense scrutiny of Facebook’s advertising capabilities given numerous controversies since 2016. For example, in 2017 the US media outlet ProPublica published an investigative report on how advertisers could put their ads in front of “jew haters” using Facebook’s micro-targeting (Angwin et al., 2017). This audience segment and others were created by an automated system, without human review, based on what users put on their profiles (ibid.). When enough users declared an interest in hating Jews, the algorithm accepted this and made the resulting segment available to advertisers for targeting. Facing public backlash, Facebook removed the audience segments called out by ProPublica (ibid.).

These targeting changes have taken place alongside other significant changes in Facebook’s political advertising policies, such as the ending of the political ‘embed’ programme and commissions for its account managers for political ad sales (Dave, 2018). More changes are reportedly on the way as a contentious debate over political advertising in the US takes shape with Twitter banning all political advertising and Google limiting political micro-targeting and more actively vetting political claims (Cox, 2019). Taken together, these amount to significant and sweeping changes to how political advertising can be conducted on the platforms, especially in the United States, and it is unfolding during the course of a presidential election cycle. Again, similar to the “I’m a Voter” affordance, there was little transparency and disclosure from Facebook regarding these changes. Facebook’s Help Center, newsroom, and business pages provide no list of retired or new audience segments. We could find no direct documentation from Facebook showing that either the political audiences we saw removed or those covered by ProPublica ever existed.

Normative pressures from external stakeholders such as journalists and the ever-present talk of regulation in the media since 2016 in the US (along with some initial proposed bills such as the Honest Ads Act) likely influenced these changes in Facebook’s policies and affordances, although we cannot know for certain. At the same time, Facebook’s advertising platform has also undergone a series of changes that impact political advertising likely due to economic incentives. From 2014 to 2017, Facebook introduced carousel ads, lead ads, group ads, and created the audience network to allow advertisers to reach Facebook users on other mobile apps (“The Evolution of Facebook Advertising”, 2017). At the same time, Facebook removed ad formats (“Full List of Retired Ad Formats”, n.d.) as well as numerous metrics (“About metrics being removed”, n.d.). Rationales given for these changes include increasing value to advertisers. For example, Facebook stated that “to help businesses select the most impactful advertising solutions, we've removed the ability to boost certain post types that have proven to generate less engagement and that aren't tied to advertiser objectives” (“Full List of Retired Ad Formats”, n.d.; “About metrics being removed”, n.d.). The company meanwhile removed metrics to replace them with others “that may provide insights that are more actionable” (ibid.).

Discussion

These cases illustrate how Facebook as a platform undergoes continual changes in its affordances, often without transparency and disclosure, as well as the seeming role of external pressure in driving them. In the case of the shifting international rollout and rollback of the “I’m a Voter” button, it was the stated desire of the company to be socially responsible that put engineering and design resources towards reminding people in democratic countries to vote. Then, the steady drumbeat of pressure from journalists and observers around the world asking questions about the rollout, implementation, and transparency of the initiative, along with difficult questions for the firm about (unintended) electoral manipulation, likely led to Facebook scaling back this feature of the platform – culminating in no apparent public announcement of it on the Facebook Newsroom blog during the 2016 US election.

In the case of political data and targeting, ever-present policy and affordance ephemerality likely occurs for a mix of reasons relating to external normative pressures and commercial incentives. Clear normative pressure from journalists, as well as the ever-present voices of political representatives and activists in the media, about the role of political advertising in undermining electoral integrity, heightening polarisation, and leading to potential voter manipulation, especially in the wake of Brexit and the 2016 US presidential election, seemingly led to fundamental changes in Facebook’s policies and procedures (such as requiring verification for advertisers and building out a political ads database) and affordances (removing the capacity to target based on ideology). At the same time, commercial economic incentives that underlie advertising more broadly have spillover effects in politics, leading to things such as new ad formats and targeting capabilities.

Future research can analyse the contexts within which external pressure compels platform change, and the various stakeholders involved. Due to platform companies increasingly reaching into all areas of social life, the set of stakeholders concerned with their functioning and governance is vast. As Van Dijck, Poell, and De Waal (2018, p. 3) nicely capture, the values of platforms in their architectures and norms often come into conflict with public values in various social domains. As such, platform changes are often the outcomes of a “series of confrontations between different value systems, contesting the balance between private and public interests” (ibid., p. 5). As infrastructure, platforms reconfigure various social sectors in deeply meaningful ways, and bring about conflicts over values and norms relating to privacy, public speech, and electoral fairness, to name a few.

While we can never know for certain, it is a logical and plausible conclusion from the cases developed here that outside stakeholders exert considerable pressure on platform companies and that this pressure can spur change. As these case studies revealed, journalists in particular exerted normative scrutiny over Facebook. Platform companies are likely sensitive to negative journalistic attention in part because the press is a representative of public opinion, but also because such attention can trigger governmental scrutiny, drops in stock price, and user backlash, all of which were in evidence after the 2016 US presidential and Brexit elections. Another category of stakeholder likely to be particularly relevant in the context of politics is activist groups and partisan and ideological organisations, such as the ACLU, which has led efforts in the US to end certain forms of demographic targeting. These are not the only forms of external pressure from stakeholders. There are likely more expansive sets of regulatory and normative concerns for international platforms that transcend any one recognisable field of activity. These include the regulatory pressures exerted on platform companies by many and diverse formal bodies internationally, such as the Federal Trade Commission in the United States, which has compelled Facebook to alter its data security practices in the country (e.g., Solove and Hartzog, 2014).

At the same time, platforms likely make a host of voluntary decisions about policies, procedures, and affordances that lead to changes towards what they perceive as desirable social ends. As our case studies demonstrated, seemingly well-intentioned actions by Facebook, such as promoting electoral participation through polling place look-up tools and universal voting reminders, shape how platforms work. These ends are normatively defined in relation to a broader cultural and social context. Change is therefore not simply compelled by pressure; it is also about actors desiring to be in line with social values, expectations, and ideals. Finally, it is clear from our case studies that a number of economic incentives underlie platform transience. As we detailed, in the past few years Facebook has introduced carousel ads, lead ads, group ads, new metrics, and new targeting affordances – all of which are routinely deployed by political actors.

Future research can analyse additional mechanisms that underlie platform transience. For example, these firms are seemingly isomorphic to one another given that they not only compete in similar domains but also react to one another’s moves and follow one another’s decisions around things such as content policy and self-regulation. See, for instance, the cascade of platform decisions to ban the American conspiracist Alex Jones during the summer of 2018 (Hern, 2018) and the recent shifts in political advertising spurred by Twitter’s decision to ban all political ads (Cox, 2019). Part of this reflects journalists’ ability to exert normative pressure on platforms once a rival makes a significant policy change, but it also reflects these firms following one another with regard to the normative stakes involved. It is also likely that what certain firms do, or what they are subject to, changes the possibilities for action of other firms. Future research is necessary to analyse when these types of changes occur and why.

Methodologically, scholars can go beyond our initial approach here and comparatively analyse moments of significant change in policies, procedures, and affordances through a process tracing approach, detailing the causal chains likely at play in firms’ decision-making (Collier, 2011), or reconstruct timelines through secondary sources with the aim of comparing transience across platforms and national contexts. And, going forward, scholarship that concerns platforms can make a concerted attempt to document platform changes, even when they are not the primary object of analysis. For example, researchers should record the dates on which they accessed pages, take screenshots, and save copies of pages to the Internet Archive’s Wayback Machine.
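As one way of operationalising that advice, the sketch below logs the access date of a page and asks the Internet Archive’s public “Save Page Now” endpoint (https://web.archive.org/save/) to capture it. The endpoint itself exists, but the CSV log format and error handling here are our own illustrative choices, and heavy automated use should respect the Archive’s rate limits.

```python
# Minimal sketch for documenting platform pages while researching them:
# record when a URL was accessed and request a capture from the
# Internet Archive's "Save Page Now" endpoint (https://web.archive.org/save/<url>).
# The CSV log format and error handling are illustrative choices, not a standard.

import csv
import datetime
import requests

LOG_FILE = "access_log.csv"

def archive_and_log(url: str) -> None:
    accessed_at = datetime.datetime.utcnow().isoformat()
    try:
        # Requesting this endpoint asks the Wayback Machine to capture the page.
        response = requests.get(f"https://web.archive.org/save/{url}", timeout=60)
        status = response.status_code
    except requests.RequestException as exc:
        status = f"error: {exc}"

    # Append the access date, URL, and archiving result to a local log.
    with open(LOG_FILE, "a", newline="") as f:
        csv.writer(f).writerow([accessed_at, url, status])

if __name__ == "__main__":
    archive_and_log("https://www.facebook.com/business/help/metrics-removal")
```

Keeping such a log alongside dated screenshots makes it possible to reconstruct, after the fact, what an interface looked like on a given day, which is precisely what proved difficult in the cases above.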

Normatively, platform transience raises larger questions relating to public accountability, electoral fairness, and inequality in information environments, with significant implications for policy-making. Perhaps the clearest issue is that, in each of the instances of transience detailed here, there was shockingly little public disclosure of these changes to stakeholders and the public, and a lack of transparency in terms of what changed, when, and why. Indeed, there was little in the way of a clear justification for any of the changes chronicled here. And, in the case of “I’m a Voter”, there was little disclosure of how the tool actually worked. Given this, it is hard, if not impossible, for journalists, elected officials, researchers, and regulatory agencies to effectively monitor Facebook, and likely all platforms, and hold them accountable for the policies, procedures, and affordances they introduce and those they take away.

The fact that these cases of transience occurred in the context of international institutional politics makes them all the more troubling. The fact that journalists had to guess at the implementation of “I’m a Voter” speaks to the magnitude of the potential problem, especially given that, according to Facebook’s own data, it likely shaped electoral participation for thousands, if not millions, of individuals around the world. The firm should be far more proactively forthcoming about the workings of its products and about likely and potential changes in its policies, procedures, and affordances, such as by alerting journalists and other stakeholders when changes might occur and providing archival and public documentation of them. Even more, Facebook should develop clear justifications for its decisions and provide opportunities for those with questions to find out information and potentially contest the decisions the firm makes.

At the same time, these cases also highlight how platforms have the power to transform the shape and the dynamics of other fields. In the political domain, this raises issues related to electoral fairness. Facebook’s 7 million advertisers (Flynn, 2019), including political campaigns, have to navigate a rapidly changing advertising environment with limited notice and often no records of transient features of the platforms that impact the voters they can reach, how they can reach them, and the cost of doing so. Through established relationships with platform companies, larger, higher-spending political advertisers may have forewarning of changes and help in understanding them, thus granting them unique advantages over their smaller rivals (Kreiss and McGregor, 2019). Meanwhile, larger campaigns and consultancies with many staffers are likely better able to perceive and respond to changes in platforms than their smaller counterparts. For example, new verification requirements likely benefited consultancies with the infrastructure to handle the process for their clients, and changes in content moderation policies likely benefit large firms that can get a hearing for such things as disapprovals (ibid.). How transience impacts other fields should be a key area of research going forward.

At the core of all of the changes documented here is the likelihood that platforms can create fundamentally unequal information environments. The fact that not all citizens of any given country likely saw the same social cues to vote means that some have powerful prompts to turnout, and therefore some citizens’ voices are disproportionately heard. With respect to data and targeting, the lack of transparency around key changes in things such as the targeting of political ads means that citizens cannot hope to know why they are seeing the messages they are - and journalists and regulators cannot answer questions regarding who receives political messages driving them to the polls, or keeping them home with respect to demobilising ads. For regulators, establishing rules in a rapidly changing platform ecosystem without transparency into what is changing, why, and when creates a unique challenge. Facebook’s ad transparency database simply underscores this point - with ongoing changes in targeting and the platform’s own algorithms and only the crudest company-derived categories of ad reach available, there is little in the way of transparency regarding the ways that political content is being delivered, and political attention structured, by campaigns and the platform itself.

Conclusion

While all of our empirical cases concerned Facebook, transience is likely a feature of all platforms, and external pressure from stakeholders likely affects all platforms. As this paper detailed, particularly concerning is the lack of clear public disclosure and transparency around Facebook’s changes to its platform, which potentially affected what millions of people around the world saw in terms of social pressure to vote and how campaigns could contact voters. This raises a deeply troubling set of normative issues in the context of institutional politics, from unequal information environments to the fairness of electoral competition. The challenges that we had as researchers documenting platform changes and their implications, and that observers around the world encountered as well, underscore how difficult crafting effective policy responses to platform power will be unless we compel stronger public disclosure and accountability mechanisms on these firms.

Disclosure

When filing the final version of this text, the authors declared that they had contributed equally to this paper.

References

About metrics being removed. (n.d.). Retrieved June 25, 2019, from Facebook Ads Help Center website: https://www.facebook.com/business/help/metrics-removal

Administrador de anuncios - Manage Ads (n.d.). Retrieved March 15, 2019, from https://business.facebook.com/adsmanager/manage/adsets/edit

Ads Manager - Manage Ads - Campaigns. (n.d.). Retrieved March 15, 2019, from https://business.facebook.com/adsmanager/manage/campaigns?

Ananny, M. (2016). Toward an ethics of algorithms: Convening, observation, probability, and timeliness. Science, Technology, & Human Values, 41(1), 93–117. https://doi.org/10.1177/0162243915606523

Angwin, J., Varner, M., & Tobin, A. (2017, September 14). Facebook Enabled Advertisers to Reach ‘Jew Haters’. ProPublica. https://www.propublica.org/article/facebook-enabled-advertisers-to-reach-jew-haters

Announcement: Facebook/ABC News Election ’08 (2008, January 4). Facebook Newsroom. Retrieved June 25, 2019, from https://newsroom.fb.com/news/2008/01/announcement-facebookabc-news-election-08/

Balkin, J. M. (2016). Information fiduciaries and the first amendment. UC Davis Law Review, 49(4), 1183–1234. Retrieved from https://lawreview.law.ucdavis.edu/issues/49/4/Lecture/49-4_Balkin.pdf

Bashyakarla, V., Hankey, S. Macintyre, A., Rennó, R., & Wright, G. (2019). Personal Data: Political Persuasion. Inside the Influence Industry. How it works. Berlin: Tactical Tech. Retrieved from https://tacticaltech.org/media/Personal-Data-Political-Persuasion-How-it-works.pdf

Beer, D. (2017). The social power of algorithms. Information, Communication & Society, 20(1), 1–13. https://doi.org/10.1080/1369118X.2016.1216147

Bond, R. M., Fariss, C. J., Jones, J. J., Kramer, A. D. I., Marlow, C., Settle, J. E., & Fowler, J. H. (2012). A 61-million-person experiment in social influence and political mobilization. Nature, 489(7415), 295–298. https://doi.org/10.1038/nature11421

Bucher, T. (2018). If... then: Algorithmic power and politics. New York: Oxford University Press. https://doi.org/10.1093/oso/9780190493028.001.0001

Cardinale, R. (2014, May 25). Elezioni Europee 2014: Il pulsante Facebook e il Google Doodle [European Elections 2014: Facebook button and Google Doodle]. Retrieved July 3, 2019, from Be Social Be Honest website: http://www.besocialbehonest.it/2014/05/25/elezioni-europee-2014-il-pulsante-facebook-e-il-google-doodle/

Chester, J., & Montgomery, K. C. (2017). The role of digital marketing in political campaigns. Internet Policy Review, 6(4). https://doi.org/10.14763/2017.4.773

Chun, W. H. K. (2016). Updating to Remain the Same: Habitual New Media. Cambridge, MA: The MIT Press.

Cohen, D. (2014, April 10). India Facebook Users Can Declare, ‘I’m A Voter.’ Adweek. Retrieved June 25, 2019, from https://www.adweek.com/digital/india-im-a-voter/

Collier, D. (2011). Understanding process tracing. PS: Political Science & Politics, 44(4), 823–830. https://doi.org/10.1017/s1049096511001429

Constantinides, P., Henfridsson, O., & Parker, G. G. (2018). Introduction-Platforms and Infrastructures in the Digital Age. Information Systems Research, 29(2), 381–400. https://doi.org/10.1287/isre.2018.0794

Cox, K. (2019, November 22). Google Bans Microtargeting and “False Claims” in Political Ads. Ars Technica. Retrieved December 9, 2019, from https://arstechnica.com/tech-policy/2019/11/google-bans-microtargeting-and-false-claims-in-political-ads/

Dave, P. (2018, September 21). Facebook to drop on-site support for political campaigns. Reuters. Retrieved July 10, 2019, from https://www.reuters.com/article/us-facebook-election-usa/facebook-to-drop-on-site-support-for-political-campaigns-idUSKCN1M101Q

Debenedetti, G. (2014, May 19). Facebook to roll out “I’m a Voter” feature worldwide. Reuters. Retrieved from https://www.reuters.com/article/us-usa-facebook-voters-idUSBREA4I0QQ20140519

de Reuver, M., Sørensen, C., & Basole, R. C. (2018). The digital platform: a research agenda. Journal of Information Technology, 33(2), 124–135. https://doi.org/10.1057/s41265-016-0033-3

D’Onfro, J. (2016, February 4). Facebook looked completely different 12 years ago — here’s what’s changed over the years. Business Insider. Retrieved June 25, 2019, from https://www.businessinsider.com/what-facebook-used-to-look-like-12-year-ago-2016-1

Election Day 2012 on Facebook. (2012, November 6). Facebook Newsroom. Retrieved June 22, 2019, from https://newsroom.fb.com/news/2012/11/election-day-2012-on-facebook/

Election Day 2014 on Facebook. (2014, November 4). Facebook Newsroom. Retrieved July 3, 2019, from https://newsroom.fb.com/news/2014/11/election-day-2014-on-facebook/

American Civil Liberties Union. (2019, March 9). Facebook agrees to sweeping reforms to curb discriminatory ad targeting practices [Press release]. Retrieved from: https://www.aclu.org/press-releases/facebook-agrees-sweeping-reforms-curb-discriminatory-ad-targeting-practices

Ferenstein, G. (2014, November 2). After being criticized for its experiments, Facebook pulls the plug on a useful one. VentureBeat. Retrieved July 5, 2019 from https://venturebeat.com/2014/11/02/facebook-is-so-scared-of-the-press-theyve-stopped-innovating/

Flynn, K. (2019, January 30). Cheatsheet: Facebook now has 7m advertisers. Digiday. Retrieved July 6, 2019, from https://digiday.com/marketing/facebook-earnings-q4-2018/

Full List of Retired Ad Formats. (n.d.). Retrieved June 25, 2019, from Facebook Ads Help Center website: https://www.facebook.com/business/help/420508368346352

Gerber, A. S., Green, D. P., & Larimer, C. W. (2008). Social Pressure and Voter Turnout: Evidence from a Large-Scale Field Experiment. American Political Science Review, 102(1), 33–48. https://doi.org/10.1017/S000305540808009X

Gillespie, T. (2018). Custodians of the Internet: Platforms, content moderation, and the hidden decisions that shape social media. New Haven: Yale University Press.

Gorwa, R. (2019). What is platform governance? Information, Communication & Society, 22(6), 854–871. https://doi.org/10.1080/1369118X.2019.1573914

Grant, M. (2016, November 8). How To Show That You Voted On Facebook. Bustle. Retrieved June 25, 2019, from https://www.bustle.com/articles/193836-how-to-use-the-voting-in-the-us-election-status-on-facebook-because-you-deserve-a

Grassegger, H. (2018, April 15). Facebook says its ‘voter button’ is good for turnout. But should the tech giant be nudging us at all? The Guardian. Retrieved June 25, 2019 from https://www.theguardian.com/technology/2018/apr/15/facebook-says-it-voter-button-is-good-for-turn-but-should-the-tech-giant-be-nudging-us-at-all

Griffin, A. (2016, May 5). How Facebook is manipulating you to vote. The Independent. Retrieved July 9, 2019 from https://www.independent.co.uk/life-style/gadgets-and-tech/news/uk-elections-2016-how-facebook-is-manipulating-you-to-vote-a7015196.html

Hebblethwaite, C. (2014, May 22). Why does Facebook want you to vote? BBC. Retrieved from https://www.bbc.com/news/blogs-trending-27518691

Helberger, N., Pierson, J., & Poell, T. (2018). Governing online platforms: From contested to cooperative responsibility. The Information Society, 34(1), 1–14. https://doi.org/10.1080/01972243.2017.1391913

Helmond, A. (2015). The platformization of the web: Making web data platform ready. Social Media + Society, 1(2). https://doi.org/10.1177/2056305115603080

Hern, A. (2018, August 6). Facebook, Apple, YouTube and Spotify Ban Infowars’ Alex Jones. The Guardian. Retrieved July 8, 2019 from https://www.theguardian.com/technology/2018/aug/06/apple-removes-podcasts-infowars-alex-jones

Introducing the Ad Archive Report: A Closer Look at Political and Issue Ads. (2018, October 23). Facebook Newsroom. Retrieved July 7, 2019, from https://newsroom.fb.com/news/2018/10/ad-archive-report/

Kanter, J. (2018, May 25). Facebook Gave 4 Reasons Why It’s Ready to Lose Money and Credibility to Continue Running Political Adverts. Business Insider. Retrieved July 9, 2019 https://techcrunch.com/2018/04/23/facebooks-new-authorization-process-for-political-ads-goes-live-in-the-u-s/

Karpf, D. (2012). Social science research methods in Internet time. Information, Communication & Society, 15(5), 639–661. https://doi.org/10.1080/1369118X.2012.665468

Kenan, E. (2015, February 19). Facebook to add “I voted” button for Israeli elections. Ynetnews. Retrieved June 25, 2019, from https://www.ynetnews.com/articles/0,7340,L-4628546,00.html

Klinger, U., & Svensson, J. (2018). The end of media logics? On algorithms and agency. New Media & Society, 20(12), 4653–4670. https://doi.org/10.1177/1461444818779750

Kreiss, D., Lawrence, R. G., & McGregor, S. C. (2018). In their own words: Political practitioner accounts of candidates, audiences, affordances, genres, and timing in strategic social media use. Political Communication, 35(1), 8–31. https://doi.org/10.1080/10584609.2017.1334727

Kreiss, D., & McGregor, S. C. (2018). Technology firms shape political communication: The work of Microsoft, Facebook, Twitter, and Google with campaigns during the 2016 US presidential cycle. Political Communication, 35(2), 155–177. https://doi.org/10.1080/10584609.2017.1364814

Lopez, M. (2016, May 8). Facebook announces ‘I’m a voter’ button ahead of 2016 Philippine elections. GadgetMatch. Retrieved July 3, 2019 from https://www.gadgetmatch.com/facebook-election-day-button-2016-philippine-elections/

Mullins, J. (2016, February 4). This Is How Facebook Has Changed Over the Years. E! Online. Retrieved June 25, 2019, from https://www.eonline.com/news/736769/this-is-how-facebook-has-changed-over-the-past-12-years

Nagy, P., & Neff, G. (2015). Imagined affordance: Reconstructing a keyword for communication theory. Social Media + Society, 1(2). https://doi.org/10.1177/2056305115603385

Nielsen, K. R., & Ganter, S. A. (2018). Dealing with digital intermediaries: A case study of the relations between publishers and platforms. New Media & Society, 20(4), 1600–1617. https://doi.org/10.1177/1461444817701318

Nyczepir, D. (2012, September 12). Study: GOTV messages on social media increase turnout. Retrieved July 3, 2019, from https://www.campaignsandelections.com/campaign-insider/study-gotv-messages-on-social-media-increase-turnout

Perez, S. (2018, April 23). Facebook’s New Authorization Process for Political Ads Goes Live in the U.S. TechCrunch. Retrieved from https://techcrunch.com/2018/04/23/facebooks-new-authorization-process-for-political-ads-goes-live-in-the-u-s/

Plantin, J. C., & Punathambekar, A. (2019). Digital media infrastructures: pipes, platforms, and politics. Media, Culture & Society, 41(2), 163–174. https://doi.org/10.1177/0163443718818376

Plantin, J.-C., Lagoze, C., Edwards, P. N., & Sandvig, C. (2018). Infrastructure studies meet platform studies in the age of Google and Facebook. New Media & Society, 20(1), 293–310. https://doi.org/10.1177/1461444816661553

Preparing for the US Election 2016. (2016, October 28). Facebook Newsroom. Retrieved June 25, 2019, from https://newsroom.fb.com/news/2016/10/preparing-for-the-us-election-2016/

Requiring Authorization and Labeling for Ads with Political Content. (2019, May 24). Facebook Business. Retrieved July 7, 2019, from https://www.facebook.com/business/news/requiring-authorization-and-labeling-for-ads-with-political-content

Roberts, S. T. (2019). Behind the Screen: Content Moderation in the Shadows of Social Media. New Haven: Yale University Press.

Rodriguez, S. (2014, May 20). Facebook expands “I’m a Voter” feature to international users. Los Angeles Times. Retrieved July 9, 2019 from https://www.latimes.com/business/technology/la-fi-tn-facebook-im-a-voter-international-20140520-story.html

Scola, N. (2019, November 7). Facebook considering limits on targeted campaign ads. Politico. Retrieved from: https://www.politico.com/news/2019/11/07/facebook-targeted-campaign-ad-limits-067550

Sifry, M. (2014, October 31). Facebook wants you to vote on Tuesday. Here’s how it messed with your feed in 2012. Mother Jones. Retrieved July 3, 2019, from https://www.motherjones.com/politics/2014/10/can-voting-facebook-button-improve-voter-turnout/

Solove, D. J., & Hartzog, W. (2014). The FTC and the new common law of privacy. Columbia Law Review, 114(3), 583–676. Retrieved from https://columbialawreview.org/content/the-ftc-and-the-new-common-law-of-privacy/

The Evolution of Facebook Advertising (Timeline of Facebook Advertising). (2017, March 7). Retrieved June 25, 2019, from Bamboo website: https://growwithbamboo.com/blog/the-evolution-of-facebook-advertising/

van Dijck, J., Poell, T., & de Waal, M. (2018). The platform society: Public values in a connective world. New York: Oxford University Press. https://doi.org/10.1093/oso/9780190889760.001.0001

Werbeanzeigenmanager - Manage Ads (n.d.). Retrieved March 15, 2019, from https://business.facebook.com/adsmanager/manage/campaigns

Zelm, A. (2018, July 17). Facebook Reach in 2018: How Many Fans Actually See Your Posts? [Blog post] Retrieved July 8, 2019, from https://www.kunocreative.com/blog/facebook-reach-in-2018

The digital commercialisation of US politics — 2020 and beyond


This paper is part of Data-driven elections, a special issue of Internet Policy Review guest-edited by Colin J. Bennett and David Lyon.

In March 2018, The New York Times and The Guardian/Observer broke an explosive story that Cambridge Analytica, a British data firm, had harvested more than 50 million Facebook profiles and used them to engage in psychometric targeting during the 2016 US presidential election (Rosenberg, Confessore, & Cadwalladr, 2018). The scandal erupted amid ongoing concerns over Russian use of social media to interfere in the electoral process. The new revelations triggered a spate of congressional hearings and cast a spotlight on the role of digital marketing and “big data” in elections and campaigns. The controversy also generated greater scrutiny of some of the most problematic tech industry practices — including the role of algorithms on social media platforms in spreading false, hateful, and divisive content, and the use of digital micro-targeting techniques for “voter suppression” efforts (Green & Issenberg, 2016; Howard, Woolley, & Calo, 2018). In the wake of these cascading events, policymakers, journalists, and civil society groups have called for new laws and regulations to ensure transparency and accountability in online political advertising.

Twitter and Google, driven by growing concern that they will be regulated for their political advertising practices, fearful of being found in violation of the General Data Protection Regulation (GDPR) in the European Union, and cognisant of their own culpability in recent electoral controversies, have each made significant changes in their political advertising policies (Dorsey, 2019; Spencer, 2019). US federal policymakers, on the other hand, have failed to institute any effective remedies despite a great deal of public hand-wringing, even though several states have enacted legislation designed to ensure greater transparency for digital political ads (California Clean Money Campaign, 2019; Garrahan, 2018). These recent legislative and regulatory initiatives in the US are narrow in scope and focused primarily on policy approaches to political advertising in more traditional media, failing to hold the tech giants accountable for their deleterious big data practices.

On the eve of the next presidential election in 2020, the pace of innovation in digital marketing continues unabated, along with its further expansion into US electoral politics. These trends were clearly evident in the 2018 midterm elections, which, according to Kantar Media, were “the most lucrative midterms in history”, with $5.25 billion USD spent on ads across local broadcast, cable TV, and digital — outspending even the 2016 presidential election. Digital ad spending “quadrupled from 2014” to $950 million USD for ads that primarily ran on Facebook and Google (Axios, 2018; Lynch, 2018). In the upcoming 2020 election, experts are forecasting that overall spending on political ads will reach $6 billion USD, with an “expected $1.6 billion to be devoted to digital video… more than double 2018 digital video spending” (Perrin, 2019). Kantar (2019), meanwhile, estimates the portion spent on digital media will be $1.2 billion USD in the 2019-2020 election cycle.

In two earlier papers, we documented a number of digital practices deployed during the 2016 elections, which were emblematic of how big data systems, strategies and techniques were shaping contemporary political practice (Chester & Montgomery, 2017, 2018). Our work is part of a growing body of interdisciplinary scholarship on the role of data and digital technologies in politics and elections. Various terms have been used to describe and explain these practices — from computational politics to political micro-targeting to data-driven elections (Bodó, Helberger, & de Vreese, 2017; Bennett, 2016; Karpf, 2016; Kreiss, 2016; Tufekci, 2014). All of these labels highlight the increasing importance of data analytics in the operations of political parties, candidate campaigns, and issue advocacy efforts. But in our view, none adequately captures the full scope of recent changes that have taken place in contemporary politics. The same commercial digital media and marketing ecosystem that has dramatically altered how corporations engage with consumers is now transforming the ways in which campaigns engage with citizens (Chester & Montgomery, 2017).

We have been closely tracking the growth of this marketplace for more than 25 years, in the US and abroad, monitoring and analysing key technological developments, major trends, practices and players, and assessing the impact of these systems in areas such as health, financial services, retail, and youth (Chester, 2007; Montgomery, 2007, 2015; Montgomery & Chester, 2009; Montgomery, Chester, Grier, & Dorfman, 2012; Montgomery, Chester, & Kopp, 2018). CDD has worked closely with leading EU civil society and data protection NGOs to address digital marketplace issues. Our work has included providing analysis to EU-based groups to help them respond critically to Google’s acquisition of DoubleClick in 2007 as well as Facebook’s purchase of WhatsApp in 2014. Our research has also been informed by a growing body of scholarship on the role that commercial and big data forces are playing in contemporary society. For example, advocates, legal experts, and scholars have written extensively about the data and privacy concerns raised by this commercial big data digital marketing system (Agre & Rotenberg, 1997; Bennett, 2008; Nissenbaum, 2009; Schwartz & Solove, 2011). More recent research has focused increasingly on other, and in many ways more troubling, aspects of this system. This work has included, for example, research on the use of persuasive design (including “mass personalisation” and “dark patterns”) to manage and direct human behaviours; discriminatory impacts of algorithms; and a range of manipulative practices (Calo, 2013; Gray, Kou, Battles, Hoggatt, & Toombs, 2018; Susser, Roessler, & Nissenbaum, 2019; Zarsky, 2019; Zuboff, 2019). As digital marketing has migrated into electoral politics, a growing number of scholars have begun to examine the implications of these problematic practices on the democratic process (Gorton, 2016; Kim et al., 2018; Kreiss & Howard, 2010; Rubinstein, 2014; Bashyakarla et al., 2019; Tufekci, 2014).

The purpose of this paper is to serve as an “early warning system” — for policymakers, journalists, scholars, and the public — by identifying what we see as the most important industry trends and practices likely to play a role in the next major US election, and flagging some of the problems and issues raised. Our intent is not to provide a comprehensive analysis of all the tools and techniques in what is frequently called the “politech” marketplace. The recent Tactical Tech (Bashyakarla et al., 2019) publication, Personal Data: Political Persuasion, provides a highly useful compendium on this topic. Rather, we want to show how further growth and expansion of the big data digital marketplace is reshaping electoral politics in the US, introducing both candidate and issue campaigns to a system of sophisticated software applications and data-targeting tools that are rooted in the goals, values, and strategies for influencing consumer behaviours.1 Although some of these new digitally enabled capabilities are extensions of longstanding political practices that pre-date the internet, others are a significant departure from established norms and procedures. Taken together, they are contributing to a major shift in how political campaigns conduct their operations, raising a host of troubling issues concerning privacy, security, manipulation, and discrimination. All of these developments are taking place, moreover, within a regulatory structure that is weak and largely ineffectual, posing daunting challenges to policymakers.

In the following pages, we: 1) briefly highlight five key developments in the digital marketing industry since the 2016 election that are influencing the operations of political campaigns and will likely affect the next election cycle; 2) discuss the implications of these trends and techniques for the ongoing practice of contemporary politics, with a special focus on their potential for manipulation and discrimination; 3) assess both the technology industry responses and recent policy initiatives designed to address political advertising in the US; and 4) offer our own set of recommendations for regulating political ad and data practices.

The growing big data commercial and political marketing system

In the upcoming 2020 elections, the US is likely to witness an extremely hard-fought, under-the-radar, innovative, and in many ways disturbing set of races, not only for the White House but also for down-ballot candidates and issue groups. Political campaigns will be able to avail themselves of the current state-of-the-art big data systems that were used in the past two elections, along with a host of recent advances developed by commercial marketers. Several interrelated trends in the digital media and marketing industry are likely to play a particularly influential role in shaping the use of digital tools and strategies in the 2020 election. We discuss them briefly below:

Recent mergers and partnerships in the media and data industries are creating new synergies that will extend the reach and enhance the capabilities of contemporary political campaigns. In the last few years, a wave of mergers and partnerships has taken place among platforms, data brokers, advertising exchanges, ad agencies, measurement firms and companies specialising in advertising technologies (so-called “ad-tech”). This consolidation has helped fuel the unfettered growth of a powerful digital marketing ecosystem, along with an expanding spectrum of software systems, specialty firms, and techniques that are now available to political campaigns. For example, AT&T (n.d.), as part of its acquisition of Time Warner Media, has re-launched its digital ad division, now called Xandr (n.d.). It also acquired the leading programmatic ad platform AppNexus.

Leading multinational advertising agencies have made substantial acquisitions of data companies, such as the Interpublic Group (IPG) purchase of Acxiom in 2018 and the Publicis Groupe takeover of Epsilon in 2019. One of the “Big 3” consumer credit reporting companies, TransUnion (2019), bought TruSignal, a leading digital marketing firm. Such deals enable political campaigns and others to easily access more information to profile and target potential voters (Williams, 2019).

In the already highly consolidated US broadband access market, only a handful of giants provide the bulk of internet connections for consumers. The growing role of internet service providers (ISPs) in the political ad market is particularly troubling, since they are free from any net neutrality, online privacy or digital marketing rules. Acquisitions made by the telecommunications sector are further enabling ISPs and other telephony companies to monetise their highly detailed subscriber data, combining it with behavioural data about device use and content preferences, as well as geolocation (Schiff, 2018).

Increasing sophistication in “identity resolution” technologies, which take advantage of machine learning and artificial intelligence applications, is enabling greater precision in finding and reaching individuals across all of their digital devices. The technologies used for what is known as “identity resolution” have evolved to enable marketers — and political groups — to target and “reach real people” with greater precision than ever before. Marketers are helping perfect a system that leverages and integrates, increasingly in real-time, consumer profile data with online behaviours to capture more granular profiles of individuals, including where they go, and what they do (Rapp, 2018). Facebook, Google and other major marketers are also using machine learning to power prediction-related tools on their digital ad platforms. As part of Google’s recent reorganisation of its ad system (now called the “Google Marketing Platform”), the company introduced machine learning into its search advertising and YouTube businesses (Dischler, 2018; Sluis, 2018). It also uses machine learning for its “Dynamic Prospecting” system, which is connected to an “Automatic Targeting” apparatus that enables more precise tracking and targeting of individuals (Google, n.d.-a-b). Facebook (2019) is enthusiastically promoting machine learning as a fundamental advertising tool, urging advertisers to step aside and let automated systems make more ad-targeting decisions.
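To make the mechanics of cross-device “identity resolution” more concrete, the sketch below shows the simplest deterministic case: events observed on different devices are joined under one pseudonymous key derived from a shared identifier such as a hashed email address. This is an illustrative toy rather than any vendor’s actual pipeline; commercial identity graphs also rely on probabilistic matching across many more signals, and all names and data here are hypothetical.

```python
import hashlib
from collections import defaultdict

# Hypothetical device-level events, each carrying an email observed at login.
# Commercial identity graphs match on many more signals; this sketch only
# shows the deterministic case of joining records that share one identifier.
events = [
    {"device": "phone-123", "email": "voter@example.com", "action": "viewed_ad"},
    {"device": "laptop-456", "email": "voter@example.com", "action": "visited_donation_page"},
    {"device": "tablet-789", "email": "other@example.com", "action": "viewed_ad"},
]

def hashed_id(email: str) -> str:
    """Derive a pseudonymous person key from an email, a common ad-tech practice."""
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

# Group events from different devices under one pseudonymous person key.
profiles = defaultdict(list)
for event in events:
    profiles[hashed_id(event["email"])].append((event["device"], event["action"]))

for person_key, activity in profiles.items():
    print(person_key[:12], activity)
```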

Political campaigns have already embraced these new technologies, even creating special categories in the industry awards for “Best Application of Artificial Intelligence or Machine Learning”, “Best Use of Data Analytics/Machine Learning”, and “Best Use of Programmatic Advertising” (“2019 Reed Award Winners”, 2019; American Association of Political Consultants, 2019). For example, Resonate, a digital data marketing firm, was recognised in 2018 for its “Targeting Alabama’s Conservative Media Bubble”, which relied on “artificial intelligence and advanced predictive modeling” to analyse in real-time “more than 15 billion page loads per day”. According to Resonate, this process identified “over 240,000 voters” who were judged to be “persuadable” in a hard-fought Senate campaign (Fitzpatrick, 2018). Similar advances in data analytics for political efforts are becoming available for smaller campaigns (Echelon Insights, 2019). WPA Intelligence (2019) won a 2019 Reed Award for its data analytics platform that generated “daily predictive models, much like microtargeting advanced traditional polling. This tool was used on behalf of top statewide races to produce up to 900 million voter scores, per night, for the last two months of the campaign”. Deployment of these techniques was a key influence in spending for the US midterm elections (Benes, 2018; Loredo, 2016; McCullough, 2016).
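As a rough illustration of what a “voter score” is, the following sketch fits a simple logistic regression on synthetic training data and then scores a small voter file with predicted probabilities. It assumes scikit-learn and NumPy are available and uses entirely made-up features and labels; real vendors combine voter-file, consumer, and behavioural attributes at vastly larger scale, and nothing here reproduces any particular firm’s method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy "voter score" model: fit on synthetic labelled data, then score a small
# voter file with predicted probabilities. Features, labels, and scale are all
# invented for illustration only.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 4))   # e.g., age, turnout history, issue engagement, media use
y_train = (X_train[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

model = LogisticRegression().fit(X_train, y_train)

voter_file = rng.normal(size=(5, 4))              # rows = individual voters to be scored
scores = model.predict_proba(voter_file)[:, 1]    # probability of the modelled behaviour
print(np.round(scores, 3))
```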

Political campaigns are taking advantage of a rapidly maturing commercial geo-spatial intelligence complex, enhancing mobile and other geotargeting strategies. Location analytics enable companies to make instantaneous associations between the signals sent and received from Wi-Fi routers, cell towers, a person’s devices and specific locations, including restaurants, retail chains, airports, stadiums, and the like (Skyhook, n.d.). These enhanced location capabilities have further blurred the distinction between what people do in the “offline” physical world and their actions and behaviours online, giving marketers greater ability both to “shadow” and to reach individuals nearly anytime and anywhere.

A political “geo-behavioural” segment is now a “vertical” product offered alongside more traditional online advertising categories, including auto, leisure, entertainment and retail. “Hyperlocal” data strategies enable political campaigns to engage in more precise targeting in communities (Mothership Strategies, 2018). Political campaigns are also taking advantage of the widespread use of consumer navigation systems. Waze, the Google-owned navigational firm, operates its own ad system but also is increasingly integrated into the Google programmatic platform (Miller, 2018). For example, in the 2018 midterm election, a get-out-the-vote campaign for one trade group used voter file and Google data to identify a highly targeted segment of likely voters, and then relied on Waze to deliver banner ads with a link to an online video (carefully calibrated to work only when the app signalled the car wasn’t moving). According to the political data firm that developed the campaign, it reached “1 million unique users in advance of the election” (Weissbrot, 2019, April 10).

Political television advertising is rapidly expanding onto unregulated streaming and digital video platforms. For decades, television has been the primary medium used by political campaigns to reach voters in the US. Now the medium is in the process of a major transformation that will dramatically increase its central role in elections (IAB, n.d.-a). One of the most important developments during the past few years is the expansion of advertising and data-targeting capabilities, driven in part by the rapid adoption of streaming services (so-called “Over the Top” or “OTT”) and the growth of digital video (Weissbrot, 2019, October 22). Leading OTT providers in the US are actively promoting their platform capabilities to political campaigns, making streaming video a new battleground for influencing the public. For example, a “Political Data Cloud” offered by OTT specialist Tru Optik (2019) enables “political advertisers to use both OTT and streaming audio to target specific voter groups on a local, state or national level across such factors as party affiliation, past voting behavior and issue orientation. Political data can be combined with behavioral, demographic and interest-based information, to create custom voter segments actionable across over 80 million US homes through leading publishers and ad tech platforms” (Lerner, 2019).

While political advertising on broadcast stations and cable television systems has long been subject to regulation by the US Federal Communications Commission, newer streaming television and digital video platforms operate outside of the regulatory system (O’Reilly, 2018). According to research firm Kantar, “political advertisers will be able to air more spots on these streaming video platforms and extend the reach of their messaging—particularly to younger voters” (Lafayette, 2019). These ads will also be part of cross-device campaigns, with videos showing up in various formats on mobile devices as well.

The expanding role of digital platforms enables political campaigns to access additional sources of personal data, including TV programme viewing patterns. For example, in 2018, Altice and smart TV company Vizio launched a new partnership to take advantage of recent technologies now being deployed to deliver targeted advertising, incorporating viewer data from nearly nine million smart TV sets into “its footprint of more than 90 million households, 85% of broadband subscribers and one billion devices in the U.S.” (Clancy, 2018). Vizio’s Inscape (n.d.) division produces technology for smart TVs, offering what is known as “automatic content recognition” (ACR) data. According to Vizio, ACR enables what the industry calls “glass level” viewing data, using “screen level measurement to reveal what programs and ads are being watched in near-real time”, and incorporating the IP address from any video source in use (McAfee, 2019). Campaigns have demonstrated the efficacy of OTT’s role. AdVictory (n.d.) modelled “387,000 persuadable cord cutters and 1,210 persuadable cord shavers” (the latter referring to people using various forms of streaming video) to make a complex media buy in one state-wide gubernatorial race that reached 1.85 million people “across [video] inventory traditionally untouched by campaigns”.

Further developments in personalisation techniques are enabling political campaigns to maximise their ability to test an expanding array of messaging elements on individual voters. Micro-targeting now involves a more complex personalisation process than merely using so-called behavioural data to target an individual. The use of personal data and other information to influence a consumer is part of an ever-evolving, orchestrated system designed to generate and then manage an individual’s online media and advertising experiences. Google and Facebook, in particular, are adept at harvesting the latest innovations to advance their advertising capabilities, including data-driven personalisation techniques that generate hundreds of highly granular ad-campaign elements from a single “creative” (i.e., advertising message). These techniques are widely embraced by the digital marketing industry, and political campaigns across the political spectrum are being encouraged to expand their use for targeting voters (Meuse, 2018; Revolution Marketing, n.d.; Schuster, 2015). The practice is known by various names, including “creative versioning”, “dynamic creative”, and “Dynamic Creative Optimization”, or DCO (Shah, 2019). Google’s creative optimisation product, “Directors Mix” (formerly called “Vogon”), is integrated into the company’s suite of “custom affinity audience targeting capabilities, which includes categories related to politics and many other interests”. This product, it explains, is designed to “generate massively customized and targeted video ad campaigns” (Google, n.d.-c). Marketing experts say that Google now enables “DCO on an unprecedented scale”, and that YouTube will be able to “harness the immense power of its data capabilities…” (Mindshare, 2017). Directors Mix can tap into Google’s vast resources to help marketers influence people in various ways, making it “exceptionally adept at isolating particular users with particular interests” (Boynton, 2018). Facebook’s “Dynamic Creative” can help transform a single ad into as many as “6,250 unique combinations of title, image/video, text, description and call to action”, available to target people on its news feed, Instagram and, outside of Facebook, its “Audience Network” ad system (Peterson, 2017).
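The scale of “dynamic creative” follows directly from combinatorics: the 6,250 combinations reported for Facebook correspond to a Cartesian product of element pools (for example, 5 titles × 5 images × 5 body texts × 10 descriptions × 5 calls to action). The sketch below simply enumerates such a product; the pool sizes and element names are hypothetical, chosen only to reproduce that arithmetic.

```python
from itertools import product

# Hypothetical pools of creative elements; 5 x 5 x 5 x 10 x 5 = 6,250 variants,
# matching the scale reported for Facebook's "Dynamic Creative".
titles = [f"Title {i}" for i in range(5)]
images = [f"image_{i}.jpg" for i in range(5)]
texts = [f"Body text {i}" for i in range(5)]
descriptions = [f"Description {i}" for i in range(10)]
calls_to_action = [f"CTA {i}" for i in range(5)]

# Enumerate every combination of the five element pools.
variants = [
    {"title": t, "image": img, "text": tx, "description": d, "cta": cta}
    for t, img, tx, d, cta in product(titles, images, texts, descriptions, calls_to_action)
]

print(len(variants))   # 6250
print(variants[0])     # one fully assembled ad variant
```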

Implications for 2020 and beyond

We have been able to provide only a partial preview of the digital software systems and tools that are likely to be deployed in US political campaigns during 2020. It’s already evident that digital strategies will figure even more centrally in the upcoming campaigns than they have in previous elections (Axelrod, Burke, & Nam, 2019; Friedman, 2018, June 19). Many of the leading Democratic candidates, and President Trump, who has already ramped up his re-election campaign apparatus, have extensive experience and success in their use of digital technology. Brad Parscale, the campaign manager for Trump’s re-election effort, explained in 2019 that “in every single metric, we’re looking at being bigger, better, and ‘badder’ than we were in 2016,” including the role that “new technologies” will play in the race (Filloux, 2019).

On the one hand, these digital tools could be harnessed to create a more active and engaged electorate, with particular potential to reach and mobilise young voters and other important demographic groups. For example, in the US 2018 midterm elections, newcomers such as Congresswoman Alexandria Ocasio-Cortez, with small budgets but armed with digital media savvy, were able to seize the power of social media, mobile video, and other digital platforms to connect with large swaths of voters largely overlooked by other candidates (Blommaert, 2019). The real-time capabilities of digital media could also facilitate more effective get-out-the-vote efforts, targeting and reaching individuals much more efficiently than in-person appeals and last-minute door-to-door canvassing (O’Keefe, 2019).

On the other hand, there is a very real danger that many of these digital techniques could undermine the democratic process. For example, in the 2016 election, personalised targeted campaign messages were used to identify very specific groups of individuals, including racial minorities and women, delivering highly charged messages designed to discourage them from voting (Green & Issenberg, 2016). These kinds of “stealth media” disinformation efforts take advantage of “dark posts” and other affordances of social media platforms (Young et al., 2018). Though such intentional uses (or misuses) of digital marketing tools have generated substantial controversy and condemnation, there is no reason to believe they will not be used again. Campaigns will also be able to take advantage of a plethora of newer and more sophisticated targeting and message-testing tools, enhancing their ability to fine-tune and deliver precise appeals to the specific individuals they seek to influence, and to reinforce the messages throughout that individual’s “media journey”.

But there is an even greater danger that the increasingly widespread reliance on commercial ad technology tools in the practice of politics will become routine and normalised, subverting independent and autonomous decision making, which is so essential to an informed electorate (Burkell & Regan, 2019; Gorton, 2016). For example, so-called “dynamic creative” advertising systems are in some ways extensions of A/B testing, which has been a longstanding tool in political campaigns. However, today’s digital incarnation of the practice makes it possible to test thousands of message variations, assessing how each individual responds to them, and changing the content in real time and across media in order to target and retarget specific voters. The data available for this process are extensive, granular, and intimate, incorporating personal information that extends far beyond the conventional categories, encompassing behavioural patterns, psychographic profiles, and TV viewing histories. Such techniques are inherently manipulative (Burkell & Regan, 2019; Gorton, 2016; Susser, Roessler, & Nissenbaum, 2019). The increasing use of digital video, in all of its new forms, raises similar concerns, especially when delivered to individuals through mobile and other platforms, generating huge volumes of powerful, immersive, persuasive content, and challenging the ability of journalists and scholars to review claims effectively. AI, machine learning, and other automated systems will be able to make predictions on behaviours and have an impact on public decision-making, without any mechanism for accountability. Taken together, all of these data-gathering, -analysis, and -targeting tools raise the spectre of a growing political surveillance system, capable of capturing unlimited amounts of detailed and highly sensitive information on citizens and using it for a variety of purposes. The increasing predominance of the big data political apparatus could also usher in a new era of permanent campaign operations, where individuals and groups throughout the country are continually monitored, targeted, and managed.
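One standard way this kind of real-time message optimisation is implemented in advertising systems is a multi-armed bandit, which shifts impressions toward whichever variant is currently performing best while continuing to explore alternatives. The epsilon-greedy sketch below is a toy illustration of that general technique under simulated response rates; it is not a description of any particular platform’s or campaign’s actual system, and the variant names and rates are invented.

```python
import random

# Toy epsilon-greedy selection over ad variants: mostly serve the variant with
# the best observed response rate, occasionally explore the alternatives.
variants = ["variant_a", "variant_b", "variant_c"]
impressions = {v: 0 for v in variants}
clicks = {v: 0 for v in variants}
EPSILON = 0.1  # share of traffic reserved for exploration

def choose_variant() -> str:
    if random.random() < EPSILON:
        return random.choice(variants)
    # Exploit: pick the variant with the highest observed click-through rate.
    return max(variants, key=lambda v: clicks[v] / impressions[v] if impressions[v] else 0.0)

def record_response(variant: str, clicked: bool) -> None:
    impressions[variant] += 1
    clicks[variant] += int(clicked)

# Simulated serving loop with hypothetical underlying response rates.
true_rates = {"variant_a": 0.02, "variant_b": 0.05, "variant_c": 0.01}
for _ in range(10_000):
    v = choose_variant()
    record_response(v, random.random() < true_rates[v])

print({v: round(clicks[v] / max(impressions[v], 1), 3) for v in variants})
```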

Because all of these systems are part of the opaque and increasingly automated operations of digital commercial marketing, the techniques, strategies, and messages of the upcoming campaigns will be even less transparent than before. In the heat of a competitive political race, campaigns are not likely to publicise the full extent of their digital operations. As a consequence, journalists, civil society groups, and academics may not be able to assess them fully until after the election. Nor will it be enough to rely on documenting expenditures, because digital ads can be inexpensive, purposefully designed to work virally and aimed at garnering “free media”, resulting in a proliferation of messages that evade categorisation or accountability as “paid political advertising”.

Some scholars have raised doubts about the effectiveness of contemporary big data and digital marketing applications when applied to the political sphere, and the likelihood of their widespread adoption (Baldwin-Philippi, 2017). It is true that we are in the early stages of development and implementation of these new tools, and it may be too early to predict how widely they will be used in electoral politics, or how effective they might be. However, the success of digital marketing worldwide in promoting brands and products in the consumer marketplace, combined with the investments and innovations that are expanding its ability to deliver highly measured impacts, suggests to us that these applications will play an important role in our political and electoral affairs. The digital marketing industry has developed an array of measurement approaches to document its impact on the behaviour of individuals and communities (Griner, 2019; IAB Europe, 2019; MMA, 2019). In the no-holds-barred environment of highly competitive electoral politics, campaigns are likely to deploy these and other tools at their disposal, without restraint. There are enough indications from the most recent uses of these technologies in the political arena to raise serious concerns, making it particularly urgent to monitor them very closely in upcoming elections.

Industry and legislative initiatives

The largest US technology companies have recently introduced a succession of internal policies and transparency measures aimed at ensuring greater platform responsibility during elections. In November 2019, Twitter announced it was prohibiting the “promotion of political content”, explaining that it believed that “political message reach should be earned, not bought”. CEO Jack Dorsey (2019) was remarkably frank in explaining why Twitter had made this decision: “Internet political ads present entirely new challenges to civic discourse: machine learning-based optimization of messaging and micro-targeting, unchecked misleading information, and deep fakes. All at increasing velocity, sophistication, and overwhelming scale”.

That same month, Google unveiled policy changes of its own, including restricting the kinds of internal data capabilities available to political campaigns. As the company explained, “we’re limiting election ads audience targeting to the following general categories: age, gender, and general location (postal code level)”. Google also announced it was “clarifying” its ads policies and “adding examples to show how our policies prohibit things like ‘deep fakes’ (doctored and manipulated media), misleading claims about the census process, and ads or destinations making demonstrably false claims that could significantly undermine participation or trust in an electoral or democratic process” (Spencer, 2019). It remains to be seen whether such changes as Google’s and Twitter’s will actually alter, in any significant way, the contemporary operations of data-driven political campaigns. Some observers believe that Google’s new policy will benefit the company, noting that “by taking away the ability to serve specific audiences content that is most relevant to their values and interests, Google stands to make a lot MORE money off of campaigns, as we’ll have to spend more to find and reach our intended audiences” (“FWIW: The Platform Self-regulation Dumpster Fire”, 2019).

Interestingly, Facebook, the tech company that has been subject to the greatest amount of public controversy over its political practices, had not, at the time of this writing, made similar changes in its political advertising policies. Though the social media giant has been widely criticised for its refusal to fact-check political ads for accuracy and fairness, it has not been willing to institute any mechanisms for intervening in the content of those ads (Ingram, 2018; Isaac, 2019; Kafka, 2019). However, Facebook did announce in 2018 that it was ending its participation in the industry-wide practice of embedding, which involved sales teams working hand-in-hand with leading political campaigns (Ingram, 2018; Kreiss & McGregor, 2017). After a research article generated extensive news coverage of this industry-wide marketing practice, Facebook publicly announced it would cease the arrangement, instead “offering tools and advice” through a politics portal that provides “candidates information on how to get their message out and a way to get authorised to run ads on the platform” (Emerson, 2018; Jeffrey, 2018). In May 2019, the company also announced it would stop paying commissions to employees who sell political ads (Glazer & Horowitz, 2019). Such a move may not have a major effect on sales, however, especially since the tech giant has already generated significant income from political advertising for the 2020 campaign (Evers-Hillstrom, 2019).

Under pressure from civil rights groups over discriminatory ad targeting practices in housing and other areas, Facebook has undergone an extensive civil rights audit, which has resulted in a number of internal policy changes, including some practices related to campaigns and elections. For example, the company announced in June 2019 that it had “strengthened its voter suppression policy” to prohibit “misrepresentations” about the voting process, as well as any “threats of violence related to voting”. It has also committed to making further changes, including investments designed to prevent the use of the platform “to manipulate U.S. voters and elections” (Sandberg, 2019).

Google, Facebook, and Twitter have all established online archives to enable the public to find information on the political advertisements that run on their platforms. But these databases provide only a limited range of information. For example, Google’s (2018) archive contains copies of all political ads run on the platform, shows the amount spent overall and on specific ads by a campaign, as well as age range, gender, area (state) and dates when an ad appeared, but does not share the actual “targeting criteria” used by political campaigns (Walker, 2018). Facebook’s (n.d.-b) Ad Library describes itself as a “comprehensive, searchable collection of all ads currently running across Facebook Products”. It claims to provide “data for all ads related to politics or to issues of national importance” that have run on its platform since May 2018 (Sullivan, 2019). While the data include breakdowns on the age, gender, state where it ran, number of impressions and spending for the ad, no details are provided to explain how the ad was constructed, tested, and altered, or what digital ad targeting techniques were used. For example, Facebook (n.d.-a-e) permits US-based political campaigns to use its “Custom or Lookalike Audiences” ad-targeting product, but it does not report such use in its ad library. Though all of these new transparency systems and ad archives offer useful information, they also place a considerable burden on users. Many of these new measures are likely to be more valuable for watchdog organisations and journalists, who can use the information to track spending, identify emerging trends, and shed additional light on the process of digital political influence.
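For watchdog organisations and journalists, the most direct route into this archive data is Facebook’s Ad Library API, exposed through the Graph API’s ads_archive endpoint. The sketch below shows a minimal query using the Python requests library; the parameter and field names follow the public Ad Library API documentation as of 2019 and should be verified against the current reference, and the access token, API version, and search term are placeholders.

```python
import requests

# Minimal sketch of querying Facebook's Ad Library API (Graph API "ads_archive"
# endpoint). Parameter and field names follow the public documentation circa
# 2019 and may have changed; the token, version, and search term are placeholders.
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"
URL = "https://graph.facebook.com/v5.0/ads_archive"

params = {
    "access_token": ACCESS_TOKEN,
    "ad_type": "POLITICAL_AND_ISSUE_ADS",
    "ad_reached_countries": "['US']",
    "search_terms": "election",
    "fields": "page_name,funding_entity,spend,impressions,demographic_distribution",
    "limit": 25,
}

response = requests.get(URL, params=params, timeout=30)
response.raise_for_status()
for ad in response.json().get("data", []):
    # Spend and impressions come back only as coarse ranges, one of the
    # transparency limitations discussed above.
    print(ad.get("page_name"), ad.get("funding_entity"), ad.get("spend"))
```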

While these kinds of changes in platform policies and operations should help to mitigate some of the more egregious uses of social media by unscrupulous campaigns and other actors, they are not likely to alter in any major way the basic operations of today’s political advertising practices. With each tech giant instituting its own set of internal ad policies, there are no clear industry-wide “rules-of-the-game” that apply to all participants in the digital ecosystem. Nor are there strong transparency or accountability systems in place to ensure that the policies are effective. Though platform companies may institute changes that appear to offer meaningful safeguards, other players in the highly complex big data marketing infrastructure may offer ways to circumvent these apparent restrictions. As a case in point, when Facebook (2018, n.d.-c) announced in the wake of the Cambridge Analytica scandal that it was “shutting down Partner Categories”, the move provoked alarm inside the ad-tech industry that a set of powerful applications was being withdrawn (Villano, 2018). The product had enabled marketers to incorporate data provided by Facebook’s selected partners, including Acxiom and Epsilon (Pathak, 2018). However, despite the policy change, Facebook still enables marketers to bring a tremendous amount of third-party data to Facebook for targeting (Popkin, 2019). Indeed, shortly after Facebook’s announcement, LiveRamp offered assurances to its clients that no significant changes had been made, explaining that “while there’s a lot happening in our industry, LiveRamp customers have nothing to fear” (Carranza, 2018).

The controversy generated by recent foreign interference in US elections has also fuelled a growing call to update US election laws. However, the current policy debate over regulation of political advertising continues to be waged within a very narrow framework, which needs to be revisited in light of current digital practices. Legislative proposals have been introduced in Congress that would strengthen the disclosure requirements for digital political ads regulated by the Federal Election Commission (FEC). For example, under the Honest Ads Act, digital media platforms would be required to provide information about each ad via a “public political file”, including who purchased the ad, when it appeared, how much was spent, as well as “a description of the targeted audience”. Campaigns would also be required to provide the same information for online political ads that is required for political advertising in other media. The proposed legislation currently has the support of Google, Facebook, Twitter and other leading companies (Ottenfeld, 2018, April 25). A more ambitious bill, the For the People Act, is backed by the new Democratic majority in the House of Representatives, and includes similar disclosure requirements, along with a number of provisions aimed at reducing “the influence of big money in politics”. Though these bills are a long-overdue first step toward bringing transparency measures into the digital age, neither of them addresses the broad range of big data marketing and targeting practices that are already in widespread use across political campaigns. And it is doubtful whether either of these limited policy approaches stands a chance of passage in the near future. There is strong opposition to regulating political campaign and ad practices at the federal level, primarily because of what critics claim would be violations of the free speech principle of the US First Amendment (Brodey, 2019).

While the prospects for regulating political advertising appear dim at the present time, there is a strong bi-partisan move in Congress to pass federal privacy legislation that would regulate commercial uses of data, which could, in turn, affect the operations, tools, and techniques available for digital political campaigns. Google, Facebook, and other digital data companies have long opposed any comprehensive privacy legislation. But a number of recent events have combined to force the industry to change its strategy: the implementation of the EU General Data Protection Regulation (GDPR) and the passage of state privacy laws (especially in California); the seemingly never-ending news reports on Facebook’s latest scandal; massive data breaches of personal information; accounts of how online marketers engage in discriminatory practices and promote hate speech; and the continued political fallout from “Russiagate”. Even the leading tech companies are now pushing for privacy legislation, if only to reduce the growing political pressure they face from the states, the EU, and their critics (Slefo, 2019). Also fuelling the debate on privacy are growing concerns over digital media industry consolidation, which have triggered calls by political leaders as well as presidential candidates to “break up” Amazon and Facebook (Lecher, 2019). Numerous bills have been introduced in both houses of Congress, with some incorporating strong provisions for regulating both data use and marketing techniques. However, as the 2020 election cycle gets underway, the ultimate outcome of this flurry of legislative activity is still up in the air (Kerry, 2019).

Opportunities for intervention

Given the uncertainty in the regulatory and self-regulatory environment, there is likely to be little or no restraint in the use of data-driven digital marketing practices in the upcoming US elections. Groups from across the political spectrum, including both campaigns and special interest groups, will continue to engage in ferocious digital combat (Lennon, 2018). With the intense partisanship, especially fuelled by what is admittedly a high-stakes-for-democracy election (for all sides), as well as the current ease with which all of the available tools and methods are deployed, no company or campaign will voluntarily step away from the “digital arms race” that US elections have become. Given what is expected to be an extremely close race for the Electoral College that determines US presidential elections, 2020 is poised to see both parties use digital marketing techniques to identify and mobilise the handful of voters needed to “swing” a state one way or another (Schmidt, 2019).

Campaigns will have access to an unprecedented amount of personal data on every voter in the country, drawing from public sources as well as the growing commercial big data infrastructure. As a consequence, the next election cycle will be characterised by ubiquitous political targeting and messaging, fed continuously through multiple media outlets and communication devices.

At the same time, the concerns over continued threats of foreign election interference, along with the ongoing controversy triggered by the Cambridge Analytica/Facebook scandal, have re-energised campaign reform and privacy advocates and engaged the continuing interest of watchdog groups and journalists. This heightened attention on the role of digital technologies in the political process has created an unprecedented window of opportunity for civil society groups, foundations, educators, and other key stakeholders to push for broad public policy and structural changes. Such an effort would need to be multi-faceted, bringing together diverse organisations and issue groups, and taking advantage of current policy deliberations at both the federal and state levels.

In other western democracies, governments and industry organisations have taken strong proactive measures to address the use of data-driven digital marketing techniques by political parties and candidates. For example, the Institute for Practitioners in Advertising (IPA), a leading UK advertising organisation, has called for a “moratorium on micro-targeted political advertising online”. “In the absence of regulation”, the IPA explained, “we believe this almost hidden form of political communication is vulnerable to abuse”. Leading members of the UK advertising industry, including firms that work on political campaigns, have endorsed these recommendations (Oakes, 2018). The UK Information Commissioner’s Office (ICO, 2018), which regulates privacy, conducted an investigation of recent digital political practices, and issued a report urging the government to “legislate at the earliest opportunity to introduce a statutory code of practice” addressing the “use of personal information in political campaigns” (Denham, 2018). In Canada, the Privacy Commissioner offered “guidance” to political parties in their use of data, including “Best Practices” for requiring consent when using personal information (Office of the Privacy Commissioner of Canada, 2019). The European Council (2019) adopted a similar set of policies requiring political parties to adhere to EU data protection rules.

We recognise that the United States has a unique regulatory and legal system, where First Amendment protections of free speech have limited regulation of political campaigns. However, the dangers that big data marketing operations pose to the integrity of the political process require a rethinking of policy approaches. A growing number of legal scholars have begun to question whether political uses of data-driven digital marketing should be afforded the same level of First Amendment protections as other forms of political speech (Burkell & Regan, 2019; Calo, 2013; Rubinstein, 2014; Zarsky, 2019). “The strategies of microtargeting political ads”, explain Jacquelyn Burkell and Priscilla Regan (2019), “are employed in the interests not of informing, or even persuading voters but in the interests of appealing to their non-rational biases as defined through algorithmic profiling”.

Advocates and policymakers in the US should explore various legal and regulatory strategies, developing a broad policy agenda that encompasses data protection and privacy safeguards; robust transparency, reporting and accountability requirements; restrictions on certain digital advertising techniques; and limits on campaign spending. For example, disclosure requirements for digital media need to be much more comprehensive. At the very least, campaigns, platforms and networks should be required to disclose fully all the ad and data practices they used (e.g., cross-device tracking, lookalike modelling, geolocation, measurement, neuromarketing), as well as variations of ads delivered through dynamic creative optimisation and other similar AI applications. Some techniques — especially those that are inherently manipulative in nature — should not be allowed in political campaigns. Greater attention will need to be paid to the uses of data and targeting techniques as well, articulating distinctions between those designed to promote robust participation, such as “Get Out the Vote” efforts, and those whose purpose is to discourage voters from exercising their rights at the ballot box. Limits should also be placed on the sources and amount of data collected on voters. Political parties, campaigns, and political action committees should not be allowed to gain unfettered access to consumer profile data, and voters should have the right to provide affirmative consent (“opt-in”) before any of their information can be used for political purposes. Policymakers should be required to stay abreast of fast-moving innovations in the technology and marketing industries, identifying the uses and abuses of digital applications for political purposes, such as the way that WhatsApp was deployed during recent elections in Brazil for “computational propaganda” (Magenta, Gragnani, & Souza, 2018).

In addition to pushing for government policies, advocates should place pressure on the major technology industry players and political institutions, through grassroots campaigns, investigative journalism, litigation, and other measures. If we are to have any reform in the US, there must be multiple and continuous points of pressure. The two major political parties should be encouraged to adopt a proposed new best-practices code. Advocates should also consider adopting the model developed by civil rights groups and their allies in the US, who negotiated successfully with Google, Facebook and others to develop more responsible and accountable marketing and data practices (Peterson & Marte, 2016). Similar efforts could focus on political data and ad practices. NGOs, academics, and other entities outside the US should also be encouraged to raise public concerns.

All of these efforts would help ensure that the US electoral process operates with integrity, protects privacy, and is free of discriminatory practices designed to diminish debate and undermine full participation.

References

2019 Reed Award winners. (2019, February 22). Campaigns & Elections. Retrieved from https://www.campaignsandelections.com/campaign-insider/2019-reed-award-winners

AdVictory. (n.d.). Case study: Curating a data-driven CTV program. Retrieved from https://advictory.com/portfolio/rauner/

Agre, P. E., & Rotenberg, M. (Eds). (1997). Technology and Privacy: The New Landscape. Cambridge, MA: The MIT Press.

American Association of American Political Consultants. (2019). 2019 Pollie Awards gallery. Retrieved from https://pollies.secure-platform.com/a/gallery?roundId=44

AT&T. (n.d.). Head of political, DSP sales & account management job. Lensa. Retrieved from https://lensa.com/head-of-political-dsp-sales--account-management-jobs/washington/jd/1444db7ddf6c0a5d7568cb4032f3a4c7

Axelrod, T., Burke, M., & Nam, R. (2019, February 25). Trump unleashing digital juggernaut ahead of 2020. The Hill. Retrieved from https://thehill.com/homenews/campaign/431181-trump-unleashing-digital-juggernaut-ahead-of-2020

Axios. (2018, November 6). Political ad spending hits new record for 2018 midterm elections. Retrieved from https://www.axios.com/political-ad-spending-hits-new-record-for-2018-midterm-elections-1541509814-28e24943-d68b-4f55-83ef-8b9d51a64fa9.html

Bashyakarla, V., Hankey, S., Macintyre, A., Rennó, R., & Wright, G. (2019, March). Personal Data: Political Persuasion. Inside the Influence Industry. How it works. Berlin: Tactical Tech. Retrieved from https://cdn.ttc.io/s/tacticaltech.org/Personal-Data-Political-Persuasion-How-it-works.pdf

Baldwin-Philippi, J. (2017). The myths of data-driven campaigning. Political Communication, 34(4), 627–633. https://doi.org/10.1080/10584609.2017.1372999

Benes, R. (2018, June 20). Political advertisers will lean on programmatic during midterms. eMarketer. Retrieved from https://content-na1.emarketer.com/political-advertisers-will-lean-on-programmatic-during-midterms

Bennett, C. J. (2008). The Privacy Advocates: Resisting the Spread of Surveillance. Cambridge, MA: The MIT Press.

Bennett, C. J. (2016). Voter databases, micro-targeting, and data protection law: Can political parties campaign in Europe as they do in North America? International Data Privacy Law, 6(4), 261–275. https://doi.org/10.1093/idpl/ipw021 Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2776299

Blommaert, J. (2019, January 22). Alexandria Ocasio-Cortez: The next level of political digital culture. Diggit Magazine. Retrieved from https://medium.com/@diggitmagazine/alexandria-ocasio-cortez-the-next-level-of-political-digital-culture-e43b45518e86.

Bodo, B., Helberger, N., & de Vreese, C. H. (2017). Political micro-targeting: a Manchurian candidate or just a dark horse? Towards the next generation of political micro-targeting research. Internet Policy Review, 6(4). https://doi.org/10.14763/2017.4.776

Boynton, P. (2018, August 27). YouTube Director Mix: Create impactful video ads with less resources. Retrieved from Instapage Blog: https://instapage.com/blog/youtube-director-mix

Brodey, S. (2019, May 10). Sen. Lindsey Graham takes heat from conservatives for backing John McCain’s election meddling bill. The Daily Beast. Retrieved from https://www.thedailybeast.com/lindsey-graham-takes-heat-from-conservatives-for-backing-mccains-election-meddling-bill

Burkell, J., & Regan, P.M. (2019, April). Voter preferences, voter manipulation, voter analytics: Policy options for less surveillance and more autonomy. Workshop on Data Driven Elections, Victoria.

California Clean Money Campaign. (2019, October 9). Gov. Newsom signs landmark disclosure bills: Petition DISCLOSE Act and Text Message DISCLOSE Act. California Clean Money Action Fund. Retrieved from http://www.yesfairelections.org/newslink/ccmc_2019-10-09.php

Calo, M. R. (2013). Digital market manipulation. George Washington Law Review, 82(4), 995–1051. Retrieved from https://www.gwlr.org/calo/

Carranza, M. (2018, July 17). How to use first‑ and third-party data on Facebook [Blog post]. Retrieved from LiveRamp blog: https://liveramp.com/blog/facebook-integration/.

Chester J., & Montgomery, K. C. (2017). The role of digital marketing in political campaigns. Internet Policy Review, 6(4). https://doi.org/10.14763/2017.4.773

Chester J., & Montgomery, K. C. (2018). The influence industry: Contemporary digital politics in the United States [Report]. Berlin: Tactical Tech. Retrieved from https://ourdataourselves.tacticaltech.org/media/ttc-influence-industry-usa.pdf.

Chester, J. (2007). Digital Destiny: New Media and the Future of Democracy. New York: The New Press.

Clancy, M. (2018, August 17). A4 adds Inscape’s Vizio TV data to measurement mix. Rapid TV News. Retrieved from https://www.rapidtvnews.com/2018081753198/a4-adds-inscape-s-vizio-tv-data-to-measurement-mix.html#axzz5kdEAVLYs.

Denham, E. (2018, November 6). Blog: Information Commissioner’s report brings the ICO’s investigation into the use of data analytics in political campaigns up to date. ICO. Retrieved from https://ico.org.uk/about-the-ico/news-and-events/blog-information-commissioner-s-report-brings-the-ico-s-investigation-into-the-use-of-data-analytics-in-political-campaigns-up-to-date.

Dischler, J. (2018, July 10). Putting machine learning into the hands of every advertiser. Google Blog. Retrieved from https://www.blog.google/technology/ads/machine-learning-hands-advertisers/.

Dorsey, J. (2019, October 30). We’ve made the decision to stop all political advertising on Twitter globally. We believe political message reach should be earned, not bought. Why? A few reasons…. [Tweet]. Retrieved from https://twitter.com/jack/status/1189634360472829952.

Echelon Insights. (2019). 2019 Reed awards winner: Innovation in Polling. Retrieved from https://echeloninsights.com/news/theanalyticsjumpstart/

Emerson, S. (2018, August 15). How Facebook and Google win by embedding in political campaigns. Vice. Retrieved from https://www.vice.com/en_us/article/ne5k8z/how-facebook-and-google-win-by-embedding-in-political-campaigns

European Council. (2019, March 3). EP elections: EU adopts new rules to prevent misuse of personal data by European political parties. Retrieved from https://www.consilium.europa.eu/en/press/press-releases/2019/03/19/ep-elections-eu-adopts-new-rules-to-prevent-misuse-of-personal-data-by-european-political-parties/

Evers-Hillstrom, K. (2019). Democratic presidential hopefuls flock to Facebook for campaign cash. Retrieved from https://www.opensecrets.org/news/2019/02/democratic-presidential-hopefuls-facebook-ads/

Facebook. (2018, March 28). Shutting down partner categories. Facebook Newsroom. Retrieved from https://newsroom.fb.com/news/h/shutting-down-partner-categories/

Facebook. (2019, March 27). Boost liquidity and work smarter with machine learning. Facebook Business. Retrieved from https://www.facebook.com/business/news/insights/boost-liquidity-and-work-smarter-with-machine-learning

Facebook. (n.d.-a). Ads about social issues, elections or politics. Retrieved from https://www.facebook.com/business/help/1838453822893854

Facebook. (n.d.-b). Facebook ad library. Retrieved from https://www.facebook.com/ads/library/?active_status=all&ad_type=political_and_issue_ads&country=US

Facebook. (n.d.-c). Upcoming changes. Facebook Business. Retrieved from https://www.facebook.com/business/m/one-sheeters/improving-accountability-and-updates-for-facebook-targeting

Filloux, F. (2019, June 2). Trump’s digital campaign for 2020 is already soaring. Monday Note. Retrieved from https://mondaynote.com/trumps-digital-campaign-for-2020-is-already-soaring-d0075bee8e89

Fitzpatrick, R. (2018, March 1). Resonate wins Reed Award for “best application of artificial intelligence or machine learning” [Blog post]. Resonate Blog. Retrieved from https://www.resonate.com/blog/resonate-wins-reed-award-for-best-application-of-artificial-intelligence-or-machine-learning/

Friedman, W. (2018, June 19). 2020 political ad spend could hit $10 billion, digital share expected to double. MediaPost. Retrieved from https://www.mediapost.com/publications/article/337226/2020-political-ad-spend-could-hit-10-billion-dig.html

FWIW: The platform self-regulation dumpster fire. (2019, November 22). ACRONYM. Retrieved from https://www.anotheracronym.org/newsletter/fwiw-the-platform-self-regulation-dumpster-fire/.

Garrahan, A. (2018, September 4). California’s new “Social Media DISCLOSE Act” regulates social media companies, search engines, other online advertising outlets, and political advertisers. Inside Political Law. Retrieved from https://www.insidepoliticallaw.com/2018/09/04/californias-new-social-media-disclose-act-regulates-social-media-companies-search-engines-online-advertising-outlets-political-advertisers/

Glazer, E. & Horwitz, J. (2019, May 23). Facebook curbs incentives to sell political ads ahead of 2020 election. Wall Street Journal. Retrieved from https://www.wsj.com/articles/facebook-ends-commissions-for-political-ad-sales-11558603803

Google. (2018). Transparency report: Political advertising in the United States. Retrieved from: https://transparencyreport.google.com/political-ads/region/US

Google. (n.d.-a). About responsive search ads (beta). Google Ads Help. Retrieved from https://support.google.com/google-ads/answer/7684791

Google. (n.d.-b). About smart display campaigns. Retrieved from https://support.google.com/google-ads/answer/7020281

Google. (n.d.-c). Vogon. Retrieved from https://opensource.google.com/projects/vogon

Gorton, W. A. (2016). Manipulating citizens: How political campaigns’ use of behavioural social science harms democracy. New Political Science 38(1), 61–80. https://doi.org/10.1080/07393148.2015.1125119

Gray, C. M., Kou, Y., Battles, B., Hoggatt, J., & Toombs, A. L. (2018). The Dark (Patterns) Side of UX Design. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems - CHI 18. https://doi.org/10.1145/3173574.3174108

Green, J. & Issenberg, S. (2016, October 27). Inside the Trump bunker, with days to go. Bloomberg Businessweek. Retrieved from https://www.bloomberg.com/news/articles/2016-10-27/inside-the-trump-bunker-with-12-days-to-go

Griner, D. (2019, June 25). Here’s every Grand Prix winner from the 2019 Cannes Lions. Adweek. Retrieved from https://www.adweek.com/creativity/heres-every-grand-prix-winner-from-the-2019-cannes-lions/

Howard, P. N., Woolley, S., & Calo, R. (2018). Algorithms, bots, and political communication in the US 2016 election: The challenge of automated political communication for election law and administration. Journal of Information Technology & Politics, 15(2), 81-93. https://doi.org/10.1080/19331681.2018.1448735

IAB. (n.d.-a). The new TV. Retrieved from https://video-guide.iab.com/new-tv

IAB Europe. (2019, June 5). Winners announced for the MIXX Awards Europe 2019 [Blog post]. IAB Europe Blog. Retrieved from https://iabeurope.eu/all-news/winners-announced-for-the-mixx-awards-europe-2019/

ICO. (2018). Call for views: Code of practice for the use of personal information in political campaigns. Retrieved from https://ico.org.uk/about-the-ico/ico-and-stakeholder-consultations/call-for-views-code-of-practice-for-the-use-of-personal-information-in-political-campaigns/

Ingram, D. (2018, September 20). Facebook to scale back 'embeds' for political campaigns. NBC News. Retrieved from https://www.nbcnews.com/tech/tech-news/facebook-scale-back-embeds-political-campaigns-n911701

Inscape. (n.d.). Solutions. Retrieved from https://www.inscape.tv/solutions

Isaac, M. (2019, November 22). Why everyone is angry at Facebook over its political ads policy. The New York Times. Retrieved from https://www.nytimes.com/2019/11/22/technology/campaigns-pressure-facebook-political-ads.html

Jeffrey, C. (2018, September 21). Facebook to stop sending “embeds” into political campaigns. TechSpot. Retrieved from https://www.techspot.com/news/76563-facebook-stop-sending-embeds-political-campaigns.html

Kafka, P. (2019, December 10). Facebook’s political ad problem, explained by an expert. Vox. Retrieved from https://www.vox.com/recode/2019/12/10/20996869/facebook-political-ads-targeting-alex-stamos-interview-open-sourced

Kantar. (2019, June 26). Kantar forecasts $6 billion in political ad spending for 2019-2020 election cycle [Press release]. Retrieved from https://www.kantarmedia.com/us/newsroom/press-releases/kantar-forecasts6-billion-in-political-ad-spending-for-2019-2020-election-cycle

Karpf, D. (2016, October 31). Preparing for the campaign tech bullshit season. Civicist. Retrieved from https://civichall.org/civicist/preparing-campaign-tech-bullshit-season/

Kerry, C. F. (2019, March 8). Breaking down proposals for privacy legislation: How do they regulate? [Report]. Washington, DC: The Brookings Institution. Retrieved from https://www.brookings.edu/research/breaking-down-proposals-for-privacy-legislation-how-do-they-regulate/

Kim, Y. M., Hsu, J., Neiman, D., Kou, C., Bankston, L., Kim, S. Y., Heinrich, R., Baragwanath, R., & Raskutti, G. (2018). The stealth media? Groups and targets behind divisive issue campaigns on Facebook. Political Communication, 35(4), 515–541. https://doi.org/10.1080/10584609.2018.1476425

Kreiss, D. (2016). Prototype Politics: Technology-intensive Campaigning and the Data of Democracy. New York: Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199350247.001.0001

Kreiss, D., & Howard, P. N. (2010). New challenges to political privacy: Lessons from the first US presidential race in the web 2.0 era. International Journal of Communication, 4, 1032–1050. Retrieved from http://ijoc.org/index.php/ijoc/article/viewFile/870/473

Kreiss, D., & McGregor, S. C. (2017). Technology firms shape political communication: The work of Microsoft, Facebook, Twitter, and Google with campaigns during the 2016 US presidential cycle, Political Communication, 35(2), 155–177. https://doi.org/10.1080/10584609.2017.1364814

Lafayette, J. (2019, June 27). Elections to generate $6B in ad spending: Kantar. Broadcasting & Cable. Retrieved from https://www.broadcastingcable.com/news/elections-to-generate-6b-in-ad-spending-kantar.

Lecher, C. (2019, March 8). Elizabeth Warren says she wants to break up Amazon, Google, and Facebook. The Verge. Retrieved from https://www.theverge.com/2019/3/8/18256032/elizabeth-warren-antitrust-google-amazon-facebook-break-up

Lennon, W. (2018, October 9). An introduction to the Koch digital media network. Open Secrets. Retrieved from https://www.opensecrets.org/news/2018/10/intro-to-koch-brothers-digital/

Lerner, R. (2019, September 24). OTT advertising will be a clear winner in the 2020 elections. TV[R]EV. Retrieved from https://tvrev.com/ott-advertising-will-be-a-clear-winner-in-the-2020-elections/

Loredo, A. (2016, March). Centro automates political ad-buying at scale on brand-safe news and information sites. Centro Blog. Retrieved from https://www.centro.net/blog/centro-brand-exchange-political-ads/

Lynch, J. (2018, November 15). Advertisers spent $5.25 billion on the midterm election, 17% more than in 2016. Adweek. Retrieved from https://www.kantarmedia.com/us/newsroom/km-inthenews/advertisers-spent-5-25-billion-on-the-midterm-election

McAfee, J. (2019, January 8). Inscape on the power of automatic content recognition and trends in TV consumption. Adelphic Blog. Retrieved from https://www.adelphic.com/inscape-on-the-power-of-automatic-content-recognition-and-trends-in-tv-consumption/

McCullough, S. C. (2016, April 3). When it comes to political programmatic advertising, the creative has to be emotionally charged. Adweek. Retrieved from https://www.adweek.com/brand-marketing/when-it-comes-political-programmatic-advertising-creative-has-be-emotionally-charged-170559/

Magenta, M., Gragnani, J., & Souza, F. (2018, October 24). How WhatsApp is being abused in Brazil's elections. BBC News. Retrieved from https://www.bbc.com/news/technology-45956557

Meuse, K. (2018, September 5). Put the wow factor in your campaigns with dynamic creative. Sizmek Blog. Retrieved from https://www.sizmek.com/blog/put-the-wow-factor-in-your-campaigns-with-dynamic-creative/.

Miller, S. J. (2018, November 23). Trade group successfully targeted voters on Waze ahead of midterms. Campaigns & Elections. Retrieved from https://www.campaignsandelections.com/campaign-insider/trade-group-successfully-targeted-voters-on-waze-ahead-of-midterms.

Mindshare. (2017). POV: YouTube Director Mix. Retrieved from https://www.mindshareworld.com/news/pov-youtube-director-mix.

MMA. (2019). Smarties X. Retrieved from https://www.mmaglobal.com/smarties2019.

Montgomery, K. C. (2007). Generation Digital: Politics, Commerce, and Childhood in the Age of the Internet. Cambridge, MA: The MIT Press.

Montgomery, K. C. (2015). Youth and surveillance in the Facebook era: Policy interventions and social implications. Telecommunications Policy, 39(3), 771–786. https://doi.org/10.1016/j.telpol.2014.12.006.

Montgomery, K. C., & Chester, J. (2009). Interactive food and beverage marketing: Targeting adolescents in the digital age. Journal of Adolescent Health, 45(3). https://doi.org/10.1016/j.jadohealth.2009.04.006

Montgomery, K. C., Chester, J., Grier, S. A., & Dorfman, L. (2012). The new threat of digital marketing. Pediatric Clinics of North America, 59(3), 659–675. https://doi.org/10.1016/j.pcl.2012.03.022

Montgomery, K. C., Chester, J., & Kopp, K. (2017). Health wearable devices in the big data era: Ensuring privacy, security, and consumer protection [Report]. Washington, DC: Center for Digital Democracy. Retrieved from https://www.democraticmedia.org/sites/default/files/field/public/2016/aucdd_wearablesreport_final121516.pdf

Mothership Strategies. (2018). Case study: Doug Jones for senate advertising. Retrieved from https://mothershipstrategies.com/doug-jones-digital-advertising-mothership/

Nissenbaum, H. (2009). Privacy in Context: Technology, Policy, and the Integrity of Social Life. Redwood City: Stanford Law Books.

Oakes, O. (2018, April 20). IPA calls for suspension of micro-targeted political ads. Campaign. Retrieved from https://www.campaignlive.co.uk/article/ipa-calls-suspension-micro-targeted-political-ads/1462598

Office of the Privacy Commissioner of Canada. (2019, April 1). Guidance for federal political parties on protecting personal information. Retrieved from https://www.priv.gc.ca/en/privacy-topics/collecting-personal-information/gd_pp_201904

O’Keefe, P. (2019, March 31). Relational digital organizing—the next political campaign battleground. Medium. Retrieved from https://medium.com/political-moneyball/relational-digital-organizing-the-next-political-campaign-battleground-48ab1f7c2eef

O’Reilly, M. (2018, June 1). FCC regulatory free arena [Blog post]. Retrieved from FCC Blog: https://www.fcc.gov/news-events/blog/2018/06/01/fcc-regulatory-free-arena

Ottenfeld, E. (2018, April 25). Who supports the Honest Ads Act? Some of the country’s largest tech firms. Issue One. Retrieved from https://www.issueone.org/who-supports-the-honest-ads-act/

Pathak, S. (2018, March 30). How Facebook’s shutdown of third-party data affects advertisers. Digiday. Retrieved from https://digiday.com/marketing/facebooks-shutdown-third-party-data-affects-brands/.

Perrin, N. (2019, July 19). Political ad spend to reach $6 billion for 2020 election. eMarketer. Retrieved from https://www.emarketer.com/content/political-ad-spend-to-reach-6-billion-for-2020-election

Peterson, A. & Marte, J. (2016, May 11). Google to ban payday loan advertisements. Washington Post. Retrieved from https://www.washingtonpost.com/news/the-switch/wp/2016/05/11/google-to-ban-payday-loan-advertisements/?utm_term=.6fb60c626b05

Peterson, T. (2017, October 30). Facebook’s dynamic creative can generate up to 6,250 versions of an ad. Marketing Land. Retrieved from https://marketingland.com/facebooks-dynamic-creative-option-can-automatically-produce-6250-versions-ad-227250

Popkin, D. (2019, February 15). Optimizing Facebook campaigns with third-party data [Blog post]. Retrieved from LiveRamp Blog: https://liveramp.com/blog/facebook-campaigns/

Revolution Marketing. (n.d.). Creative. Retrieved from https://revolutionmessaging.com/services/creative/

Rapp, L. (2018, September 25). Announcing LiveRamp AbiliTec [Blog post]. Retrieved from LiveRamp Blog: https://liveramp.com/blog/abilitec/

Rosenberg, M., Confessore, N., & Cadwalladr. C. (2018, March 17). How Trump consultants exploited the Facebook data of millions. The New York Times. Retrieved from https://www.nytimes.com/2018/03/17/us/politics/cambridge-analytica-trump-campaign.html

Rubinstein, I. (2014). Voter privacy in the age of big data. Wisconsin Law Review, 2014(5), 861–936. Retrieved from http://wisconsinlawreview.org/wp-content/uploads/2015/02/1-Rubinstein-Final-Online.pdf

Sandberg, S. (2019, June 30). A second update on our civil rights audit. Facebook Newsroom. Retrieved from https://newsroom.fb.com/news/2019/06/second-update-civil-rights-audit/

Schiff, A. (2018, February 14). Ericsson Emodo beefs up its ad tech chops with Placecast acquisition. AdExchanger. Retrieved from https://adexchanger.com/mobile/ericsson-emodo-beefs-ad-tech-chops-placecast-acquisition/

Schmidt, S. (2019, July 11). A shadow digital campaign may prove decisive in 2020–and Donald Trump has a clear advantage. AlterNet. Retrieved from https://www.alternet.org/2019/07/a-shadow-digital-campaign-may-prove-decisive-in-2020-and-donald-trump-has-a-clear-advantage/

Schuster, J. (2015, October 7). Political campaigns: The art and science of reaching voters [Blog post]. Retrieved from LiveRamp Blog: https://liveramp.com/blog/political-campaigns-the-art-and-science-of-reaching-voters/

Schwartz, P. M., & Solove, D. J. (2011). The PII problem: Privacy and a new concept of personally identifiable information. New York University Law Review, 86, 1814–1894. Retrieved from http://scholarship.law.berkeley.edu/cgi/viewcontent.cgi?article=2638&context=facpubs

Shah, A. (2019, March 14). Why political parties should focus on programmatic advertising this elections [Blog post]. AdAge India. Retrieved from http://www.adageindia.in/blogs-columnists/guest-columnists/why-political-parties-should-focus-on-programmatic-advertising-this-elections/articleshow/68402528.cms

Skyhook. (n.d.). Geospatial insights. Retrieved from https://www.skyhook.com/geospatial-insights/

Slefo, G. P. (2019, April 8). Ad industry groups band together to influence Congress on data privacy. AdAge. Retrieved from https://adage.com/article/digital/ad-industry-groups-band-together-influence-congress-data-privacy/2162976.

Sluis, S. (2018, June 27). DoubleClick no more! Google renames its ad stack. AdExchanger. Retrieved from https://adexchanger.com/platforms/doubleclick-no-more-google-renames-its-ad-stack/.

Spencer, S. (2019, November 20). An update on our political ads policy [Blog post]. Retrieved from Google Blog: https://www.blog.google/technology/ads/update-our-political-ads-policy/.

Sullivan, M. (2019, March 28). Facebook expands ad transparency beyond politics: Here’s what’s new. Fast Company. Retrieved from https://www.fastcompany.com/90326809/facebook-expands-archive-ad-library-for-political-ads-heres-whats-new

Susser, D., Roessler, B., & Nissenbaum, H. F. (2019). Online manipulation: Hidden influences in a digital world. Georgetown Law Technology Review, 4(1), 1–45. Retrieved from https://georgetownlawtechreview.org/online-manipulation-hidden-influences-in-a-digital-world/GLTR-01-2020/

TransUnion (2019, May 15). TransUnion strengthens digital marketing solutions with agreement to acquire TruSignal. Retrieved from https://newsroom.transunion.com/transunion-strengthens-digital-marketing-solutions-with-agreement-to-acquire-trusignal/

Tru Optik. (2019, June 17). Tru Optik launches political data cloud for connected TV and streaming audio advertising. Retrieved from https://www.truoptik.com/tru-optik-launches-political-data-cloud-for-connected-tv-and-streaming-audio-advertising.php

Tufekci, Z. (2014). Engineering the public: Big data, surveillance and computational politics. First Monday, 19(7). https://doi.org/10.5210/fm.v19i7.4901/

Villano, L. (2018, July 19). The loss of Facebook Partner categories: How marketers can cope. MediaPost. Retrieved from https://www.mediapost.com/publications/article/322380/the-loss-of-facebook-partner-categories-how-marke.html

Walker, K. (2018, May 4). Supporting election integrity through greater advertising transparency [Blog post]. Retrieved from Google Blog: https://www.blog.google/outreach-initiatives/public-policy/supporting-election-integrity-through-greater-advertising-transparency/

Weissbrot, A. (2019, April 5). Magna predicts US OTT ad revenues will double by 2020. AdExchanger. Retrieved from https://adexchanger.com/tv-2/magna-predicts-us-ott-ad-revenues-will-double-by-2020/

Weissbrot, A. (2019, October 22). Roku to acquire Dataxu for $150 million. AdExchanger. Retrieved from https://adexchanger.com/digital-tv/roku-to-acquire-dataxu-for-150-million/

Weissbrot, A. (2019, April 10). Waze makes programmatic inventory available in dv360. AdExchanger. Retrieved from https://adexchanger.com/platforms/waze-makes-programmatic-inventory-available-in-dv360/

Williams, R. (2019, October 8). IPG launches martech platform Kinesso to marry Acxiom data with ad campaigns. Marketing Dive. Retrieved from https://www.marketingdive.com/news/ipg-launches-martech-platform-kinesso-to-marry-acxiom-data-with-ad-campaign/564527/

WPA Intelligence. (2019). WPAi wins big at 2019 Reed Awards. Retrieved from http://wpaintel.com/2019reedawards/

Xandr. (n.d.). Political advertising. Retrieved from https://www.xandr.com/legal/political-advertising/

Zarsky, T.Z. (2019). Privacy and manipulation in the digital age. Theoretical Inquiries in Law, 20(1), 157–188. https://doi.org/10.1515/til-2019-0006

Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. New York: Public Affairs.

Footnotes

1. We have relied on a diverse set of primary and secondary materials for our research, including trade journals, research reports, and other industry documents, as well as personal participation in, and proceedings from, conferences focused on digital technologies and politics.

Apps, appointments, panic and people

This commentary is part of Digital inclusion and data literacy, a special issue of Internet Policy Review guest-edited by Elinor Carmi and Simeon J. Yates.

Note from the author

When I sat down to write the following commentary in February 2020, COVID-19 had not yet taken hold across the UK, as it had done in China and other areas of East Asia. However, by April 2020 the UK looked very different. Individual lives, communities and day-to-day patterns underwent unimaginable change as the threat and impact of COVID-19 became increasingly real. Every day, every news report, every social media post and every Government announcement shifted our expectations and normalities, and for many increased our fears. Supporting vulnerable people who are offline or limited users of the internet is the shared mission of the Online Centres Network and Good Things Foundation, for whom I work. At this time, we find ourselves having to move incredibly fast and adjust to new ways of working, away from face-to-face digital inclusion and social support. This change is unprecedented, and I was cautious about publishing my view of the need for digital inclusion before the pandemic, rather than the reality we are now experiencing. However, I think the message is more potent than ever. At a time of global crisis, vulnerable people face multiple barriers; people need to be given hope, and digital skills and access are essential. Investing in communities is one powerful way in which we can come through this period stronger and more resilient than we were before. This commentary is an affirmation of what should continue, with adaptation, focused funding and greater public and political visibility. The support of digital skills and literacy for all has never been more critical.

Commentary

Inside the church hall, away from the January downpour, more than twenty people are sitting around a horseshoe of foldable tables, their attention focused on the tutor and the whiteboard covered in a pen-drawn patchwork of app icons. Around the tables sit women, and a smaller number of men, from the different communities that make up the richly diverse borough of Newham in East London, all here for Skills Enterprise’s Monday morning digital skills session.

Interspersed between the people who have come to learn are volunteers, students from a local university, laptops, notepads, cups of tea and coffee, cables, mobile phones and Malathy.

Malathy or Mala, who rescued me from the train station in the pouring rain and drove me to Skills Enterprise, leads this incredible social enterprise with an energy and focus that seems only to be present in people who live for their neighbourhood and are their local community. Between opening the front door and reaching the main hall, all of about five metres, Mala is approached by numerous people. Each person she immediately calms, triages their need, reviews the paperwork they clutch in their hand, and reassures them that she will be there with them - at the job centre, in their appointments with health, welfare and housing services and then, through Skills Enterprise, there to provide ongoing support once their crisis is resolved.

In about ten minutes at Skills Enterprise, you understand the critical need for holistic local support that embeds digital literacy. Without it, the woman who cannot read the letter from her housing service is unlikely to be able to progress to managing Universal Credit1 independently. Her needs are multiple. The stress and desperation she feels are overwhelming. Without it, the 64-year-old widow who needs and wants to work will continue to struggle with jobs she is unable to keep. Jobs that mean she has panic attacks on the underground when travelling to work. Interactions with job coaches that leave both her and her coach unable to move forward, as she is too anxious to communicate what she can do and needs. She struggles to write this down, and without the support of Mala and others at Skills Enterprise she has nowhere to go, no one to help her, and little opportunity to do more than react and survive as best she can.

As much as we keep talking about this thing of digital, of digital skills and even digital inclusion, as a separate dimension, the reality is that it is not, and never will be, separate again. The panic felt by people who have to do something online because there is no longer any option is part of our shared failing in not designing for inclusion and not bringing everyone with us on the journey. Something we need to collectively address.

I work for Good Things Foundation, a digital social inclusion charity that believes in a world where everyone should benefit from digital. This year (2018/19), we’ve reached over 440,077 people worldwide2. We partner with an international network of local community organisations such as Skills Enterprise, in the UK and, through Good Things Foundation Australia, in Australia. These organisations are the most effective means to engage people who have multiple reasons to view going digital as one problem amongst many, particularly as the digital transformation of services continues to drive forward at an unprecedented pace without upfront recognition of people’s basic needs and aspirations.

These community organisations are with people on a journey that often starts with crisis but continues beyond this. This is a journey that focuses on developing a willingness to engage with support, where other service interactions have failed. That connects people to the value of digital, where fear, lack of access and digital skills have been barriers. That helps people to love learning, where past experiences have left a scar. That creates enjoyment from being with others, where trusting people has been hard. And people progress. This may be to a point of personal achievement, which is tangible or measurable to the outside world, such as getting a job, saving money, saving time or gaining an accreditation for your studies. But what is of equal importance, what is sustainable and future proof, are the lasting behaviour changes that occur such as valuing yourself, feeling connected, being able to do more than cope, and believing in your own ability and goals.

What we see happening is more than the impact on one individual: a greater social power and resource is generated through this journey. This means that many people go on to help others to do what they have done, and to overcome the fears they once shared. Telling the story of this journey is critical to revealing a series of truths about people’s development of digital literacy:

  1. Learning to use one digital programme competently, or developing skills around a certain digital interface at a specific time, does not mean that people are equipped to cope with the changing and digitally driven world of work and life.
  2. The pace of digital development is extreme, and its effect is heightened by changes in personal circumstances.
  3. Unforeseen events like health crises, relationship breakdowns and economic downturns intensify vulnerabilities around digital use that were not visible before.
  4. The impact of such events means that individual confidence can be shaken, and people can ‘drop off’ from being active and productive users of digital resources.
  5. Increased motivation to re-engage with digital and develop skills happens when the personal impact of not doing so becomes apparent.
  6. Effective support that overcomes motivational barriers may as often come from a peer as from a professional.

Therefore, if we are to support people and bring everyone with us, we need to focus on developing more than digital skills. We need to fund approaches that enable us to develop positive behaviours so we can all make the most of digital innovation, as we have written about here in our work with Accenture. But this takes time. Over the course of five years of research with people and the organisations who support them, we created a theory of change for digital social inclusion. This narrative explains the journey and captures common themes that arose through the individual Routes to Inclusion stories that helped to build it. Its purpose is to demystify and challenge the idea of a quick fix: being digitally literate requires a lifelong commitment to adaptation, adoption and change.

What we can see on the horizon are positive moves in the UK policy landscape, shifting to formally acknowledge the need for digital literacy. From 2020, alongside the existing legal entitlements to English and maths, the UK Government will introduce an entitlement to fully funded digital qualifications. Adults with no or low digital skills will have the opportunity to undertake improved digital qualifications based on new national standards setting out the digital skills people need to get on in life and work. The entitlement is therefore designed as a safety net to help those adults who are at risk of being left behind by an increasingly digital world.

Whilst the entitlement is to be welcomed, further commitment is needed in terms of a joined-up Government and cross-sector approach to investment in digital inclusion, for which there is mounting evidence. We and others have argued that long-term investment in digital inclusion (not just skills) will bring about great economic benefits both for individuals, where in the UK the average citizen could save £744 by being online3, and for society. Research carried out by the Centre for Economics and Business Research (Cebr) on behalf of Good Things Foundation has shown that investment in a 100% digitally included UK population by 2028 would equate to £313m saved in employment benefits, £1.1bn from cost savings in online transactions during shopping, £141m NHS savings from increased use of digital services, and £487m in government savings from digital efficiency and increased use of online services4. Therefore, for every £1 invested, the benefit would be £15, with a net present value of £21.9 billion5.
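
For readers who want to see how these headline figures relate to one another, the minimal sketch below simply sums the component savings quoted above and then applies a hypothetical ten-year horizon and discount rate to illustrate how a net-present-value figure of this order of magnitude can be derived. The discount rate, horizon, and assumption of constant annual savings are illustrative only; they do not reproduce the Cebr model behind the £21.9 billion estimate.

```python
# Illustrative back-of-the-envelope check of the figures quoted above.
# All modelling choices below (constant annual savings, 10-year horizon,
# 3.5% discount rate) are assumptions for demonstration, not Cebr's method.

annual_savings_gbp_m = {
    "employment benefits": 313,
    "online transactions while shopping": 1100,
    "NHS digital services": 141,
    "government digital efficiency and online services": 487,
}

total_annual = sum(annual_savings_gbp_m.values())  # 2041, i.e. roughly £2bn per year
print(f"Total annual savings: £{total_annual}m")

discount_rate = 0.035  # assumed (the UK Green Book social discount rate)
years = 10             # assumed horizon, roughly to 2028
npv_m = sum(total_annual / (1 + discount_rate) ** t for t in range(1, years + 1))
print(f"Illustrative NPV over {years} years: £{npv_m / 1000:.1f}bn")
```

Running this gives an illustrative present value of around £17bn; the published £21.9bn figure reflects Cebr's fuller assumptions about uptake and growth in savings over time.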

However, being online is not an end in itself. The picture of online behaviour and benefit is evolving and increasingly complicated. One demonstration of this is the prominent policy focus in the UK on preventing online harms, culminating in the Online Harms White Paper (April 2019), which called for a new statutory duty of care to make companies take more responsibility for the safety of their users and tackle harm. Online harms describe a range of risks for internet users related to data privacy, fake news and disinformation, which are particularly acute for vulnerable people with low data literacy. So why has this come about? In the UK, as elsewhere in 2020, there is no longer a clear digital divide between those who are thriving online and those who are not. This is a change that we’ve demonstrated and tracked over a number of years through our Digital Nation Infographic. It has helped us understand people in a less binary way - not simply as offline or online - and to recognise, as we and others have identified, the importance of a growing spectrum of people who sit between these positions. These are people who may be described as limited users (Good Things Foundation and Yates, 2017): online, but much more constrained in their adoption and use of digital than it might first appear. What this represents is a much less explored and spoken-about dimension that underpins the incredibly large digital skills gap borne out by national data, where over 11.9 million of us in the UK still lack Essential Digital Skills (Lloyds Consumer Digital Index 2019). What should we read from this data? Well, something isn’t working. There is a strong undercurrent of unrest, where not everyone feels motivated to go with the digital movement and do more (French et al., 2019). It feels like time we treated these big numbers from a human perspective and focused on why we aren’t changing.

What can we do in response, and how can we learn from, and work with, others around the world? In 2018, Good Things Foundation published our campaign Bridging the Digital Divide, underpinned by a commitment to six key principles which we set out in our Blueprint for a 100% Digitally Included Nation. These were as follows:

  1. To set out a bold ambition: to agree a goal of a 100% digitally included nation.
  2. To drive motivation: promote the benefits of the internet.
  3. To build skills: provide free essential digital skills support for everyone who needs it.
  4. To lead from the front: employers taking responsibility for their own employees.
  5. To make it affordable: ensure no-one is denied access to the digital world because of their personal income.
  6. To make digital a social priority: bring social inclusion and digital inclusion together.

What’s important about the Blueprint and campaign is that we have built them upon international learnings, which reinforce the relationship between digital and social exclusion across countries, where we share many policy challenges across borders. The Blueprint is therefore there to help address challenges we see around the world. We have learnt this first-hand in Australia, where digital exclusion follows the same socio-economic contours, with some variations: for example, Indigenous digital exclusion has no parallel in the UK, and regional digital exclusion is more pronounced due to the geography of Australia. Half the world away from the challenges faced in the UK, the same social change solution still applies: local + digital + scale.

Mala knows how to change behaviour. As the session on internet safety and use of apps draws to a close, I manage to pass over my meagre contribution to the group - a tin of shortbread. A couple of ladies thank me, and I look back to the table where Mala is sitting with the woman who was in such distress. She is holding her hand. The woman is calm and is talking about coming back next week. Mala has given her hope. I ask for a photo of them both as I don’t think I can capture what she has been able to do.

Mala asks me to write about it.

Image Guide

(All images by Alice Mathers, the author)

Image 1: Community digital skills session at Skills Enterprise in East London.

Image 2: People in the digital skills session at Skills Enterprise reviewing app iconography.

Image 3: Mala, Manager of Skills Enterprise, supporting a woman with housing needs.

Footnotes

1. Universal Credit is the UK Government’s payment system, which supports people who are on low wages or unemployed.

2. Good Things Foundation Annual Review 2018/19.

3. Lloyds Bank UK Consumer Digital Index (2016), p.3

4. Cebr, The Economic Impact of Digital Inclusion in the UK (2018), p.8

5. Cebr, The Economic Impact of Digital Inclusion in the UK (2018)

Co-developing digital inclusion policy and programming with Indigenous partners: interventions from Canada

This paper is part of Digital inclusion and data literacy, a special issue of Internet Policy Review guest-edited by Elinor Carmi and Simeon J. Yates.

Declaration of novelty and no competing interests

By submitting this manuscript I declare that it and its essential content have not been published elsewhere and are not under consideration for publication in another outlet. Parts of Section 1 are drawn from a previous draft article written with Dr. Sylvia Blake when she was a PhD Candidate at Simon Fraser University and Denise Williams from the First Nations Technology Council. Parts of the concluding discussion are drawn from McMahon (2014), which is cited in the text.

No competing interests exist that have influenced or can be perceived to have influenced the text. I have declared my involvement in the activities profiled in the two case studies discussed in this article. I receive some consulting fees for preparing regulatory interventions for the First Mile Connectivity Consortium; the CRTC cost claims process is outlined here: https://crtc.gc.ca/eng/archive/2010/2010-963.htm and cost claims are published on the commission’s website.

Introduction: Indigenous-led supply-side and demand-side interventions to support digital inclusion

Diverse Indigenous peoples, including First Nations, Inuit and Métis peoples in Canada, are facing challenges and opportunities associated with the development, deployment, and adoption of rapidly emerging digital information and communication technologies (ICTs). Digital ICTs can support cultural resurgence and self-determined development (Alia, 2010; Bredin, 2001; Dyson & Grant, 2006; Salazar, 2007). For example, community data centres house digitised cultural resources; mobile phones connect people to emergency services while they are on the land; videoconferencing units link doctors and patients across distances; and mobile language apps are used by people of all ages (O’Donnell et al., 2016; Sandvig, 2012). But along with potentially positive outcomes, digital ICTs also introduce challenges, including digital access divides, ongoing maintenance and upgrade costs of technologies and infrastructures, and problematic online content (Beaton & Campbell, 2014; Duarte, 2017; Iseke-Barnes & Danard, 2007). While governments, companies, and civil society organisations are all paying increased attention to the potential of digital inclusion, gaps remain with respect to the specific needs and concerns of diverse underserved Indigenous populations. In this context it is essential that Indigenous peoples are substantively engaged in decisions regarding the planning and implementation of policy and programming in their territories (Hudson, 2014; Philpot, Beaton, & Whiteduck, 2014). This article discusses two examples of digital inclusion co-developed with Indigenous communities in Canada. I frame the discussion around ‘supply-side’ interventions focused on enabling the provision of adequate, affordable infrastructure and services to end users; and ‘demand-side’ interventions associated with the effective adoption and use of digital technologies according to the situated needs of user groups.

Policy framing: digital divides and the ‘First Mile’

Recent public policy and funding supports that aim to bridge both kinds of digital divides focus on connecting rural and remote Indigenous communities to high-speed digital infrastructure (Government of Canada, 2019). These supply-side interventions support the provision of broadband, such as through building infrastructure, establishing broadband services, and addressing consumer issues like affordability of service and data usage. In many regions where Inuit, First Nations and Métis peoples live, access to digital connectivity remains limited and unreliable, with high prices charged for services and data overages (Office of the Auditor General of Canada, 2018). For example, the national telecommunications regulator’s most recent annual Communications Monitoring Report (CRTC, 2019) notes that fewer than one-third of First Nations reserve areas have access to the basic service objective requirement of 50 Mbps download / 10 Mbps upload speeds (p. 38) -- and that no households in the three northern territories (Yukon, NWT and Nunavut) can access those speeds (p. 41). Affordability is a related supply-side challenge: monthly service costs are highest in rural and Northern communities, with average prices ranging from $166 CAD in Québec to $220 CAD in the North (ibid., p. 58).

In industry-driven telecommunications projects, the requirements of the people living and working inside these communities are typically framed as the “last mile” of development. Philpot, Beaton, and Whiteduck (2014) argue that digital inclusion initiatives too often limit opportunities for local engagement in favour of corporate needs, “a result of a discursive environment in which First Nations broadband issues are dealt with within a discourse of dependency” (para 3). In Canada, Indigenous technology advocates have worked hard to reform policy and regulatory frameworks to counter this “last mile” discourse, proposing an alternative approach to supply-side digital inclusion policy that focuses on the “First Mile” of community-driven development. The term “First Mile” frames community-owned and operated broadband infrastructure and services as an alternative to the “last mile” link from service providers to subscribers (Paisley & Richardson, 1998; Strover, 2000). As argued by McMahon et al. (2011):

To move beyond the historical context of paternalistic, colonial-derived development policies, the First Mile recognizes that First Nations communities and governments are best positioned to decide when and how they access and use newly developing technologies, including broadband systems (p. 2).

The diverse First Nations, Inuit and Métis peoples who reside in rural, remote and Northern regions have a long history of community-driven technology innovation (McMahon, Hudson, & Fabian, 2017). Countering the top-down ‘last-mile’ approach of technology transfer, some of these Indigenous communities have led local and regional community networking initiatives (see, for example, Carpenter, 2010; McMahon, Gurstein et al., 2014; Roth, 2013; Whiteduck, 2010). For example, the Swampy Cree community of Fort Severn, which is located on the shores of Hudson Bay, uses a community-owned satellite network to connect people to public services otherwise unavailable locally, such as telemedicine, e-learning, and video court proceedings (Gibson et al., 2012; Fiddler, 2019). Larger-scale regional networks, such as the Tamaani Internet system set up and managed by the Kativik Regional Government in the Inuit territory of Nunavik, provide connectivity and services to citizens in remote fly-in communities (FMCC, 2018). A host of initiatives demonstrate community efforts to deploy infrastructure in expensive-to-serve areas while retaining ownership and control of networks, services, and applications. The operations and sustainability of these digital resources require a complex balance between local innovation, regional cooperation, supportive policy and regulatory conditions, and individual and organisational capacity.

Introducing two case studies of digital inclusion

In this article, I discuss a case study describing how Indigenous organisations collaborated with university-based researchers to shape regulatory and policy frameworks to reflect First Mile development principles. I discuss the efforts of the First Mile Connectivity Consortium (FMCC), a national association of First Nations technology organisations that has intervened in a number of policy proceedings, including during 2012 hearings on Northwestel’s Modernization Plan, a 2014 inquiry on satellite services, and the 2015-2017 review of the “Basic Service Objective” for telecommunications in Canada (McMahon, Hudson, & Fabian, 2014). Through this work the FMCC developed a model for supply-side digital inclusion that puts communities at the centre and the start of any digital network development process. This First Mile model stresses the importance of leaders from affected regions substantively engaging in policy decisions regarding how digital connectivity is built, setup, owned, paid for, distributed, managed, and used in and across their communities. As described in the first case study covered in this article, this process involves researchers working with Indigenous technology organisations to develop arguments and evidence to present to policymakers in formal proceedings.

The second kind of digital inclusion policy I discuss in this article refers to demand-side dynamics; that is, efforts to support and encourage people and organisations to adopt and utilise digital technologies. These include educational interventions, such as appropriate forms of digital literacy, as well as efforts to identify and showcase digital innovation as a means to encourage effective adoption and use. Historically, such interventions have typically reflected the same shortcomings as supply-side initiatives: a focus on corporate-driven, ‘one-size-fits-all’ measures that reflect normative goals of efficiency and revenue generation rather than community-led efforts to secure greater control over digital resources and their impacts on society. In short, demand-side digital inclusion initiatives do not always examine the best ways to work with diverse user groups so that their ways of living and being are reflected in digital adoption and educational programmes.

However, First Nations, Inuit and Métis peoples are utilising digital applications in many creative ways (O’Donnell et al., 2016). A strong desire to document and share Indigenous cultures and languages reflects people’s interest in exploring how newly available digital tools support such work. For example, Isuma has built a multilingual digital network showcasing more than 6,000 films and videos in 80 languages (over 1,300 in Inuktitut). Importantly, this project was carried out in regions of Canada where YouTube and Netflix are not yet widely available, due to inadequate and expensive connectivity (Dalseg & Abele, 2015; Kunuk & Cohn, 2010). Today, Isuma’s ‘low bandwidth’ system uses a site-specific technical infrastructure to distribute digital content in ways that allows people across Nunavut to create and view community-curated 24/7 programming. Schools with low-speed, unreliable and expensive bandwidth can show students videos on topics like traditional ways of treating caribou and seal skins without having to rely on limited satellite links. This example -- and many others -- demonstrate how the effective, situated use of digital technologies might support the cultural resilience and sustainability of diverse Indigenous communities.

In this context, the second case study I describe in this paper focuses on digital literacy as a particular kind of demand-side digital adoption programme. Research indicates that Indigenous peoples living in rural, remote and Northern regions of Canada recognise the limited services, high costs of services, and potential changes that may come as a result of increased access to digital ICTs and the internet (O’Donnell et al., 2016). They are interested in digital literacy resources that will help them monitor speed and quality of service, ensure that pricing practices are fair, and protect their families and communities from online risks. As well, they note that while rapidly expanding digital connectivity can support the delivery of a host of public services, economic development opportunities, and social and cultural benefits, it also brings challenges. Along with creating new dependencies on digital infrastructures, applications, services, resources and data, ICTs can introduce a wave of English- or French-language content. As well, increased adoption of digital technologies raises concerns over privacy, surveillance, and the commodification of Indigenous knowledge. The widespread dissemination of commercial social media platforms threatens to spread incorrect information and inappropriate content, further undermining Indigenous protocols of knowledge stewardship and misrepresenting these diverse cultures and societies. Therefore, it is important to learn from Indigenous peoples directly about how best to tailor digital literacy programmes to mitigate these risks and harness the potential of digital ICTs. The article’s second case study provides an example of a digital literacy initiative that is grounded in Nation-specific cultural revitalisation activities, while supporting technical understanding and skills acquisition in areas including cultural representation, data stewardship, and digital storytelling. This approach combines digital literacy teaching/learning with efforts to document the rich cultural teachings of Elders1 from the Piikani Blackfoot Nation in southern Alberta.

Project methodology

I am directly involved in both case studies described in this article. As a university-based participatory action researcher, I position myself as a facilitator working alongside my community-based colleagues, who are directly involved in all aspects of research design, data collection, data analysis, and presentation of results (including this article). My approach is inspired by critical pedagogy, Indigenous research methodologies (Kovach, 2009; Tuhiwai Smith, 1999; Wilson, 2008), and community informatics (Gurstein, 2003; 2012). Throughout this paper I have cited studies that I have been involved in, which reflect and inform the work described here, to demonstrate the process-oriented nature of my inquiry and praxis. Over the years I have learned from Indigenous friends and partners the importance of ensuring that research initiatives demonstrate tangible benefit to involved communities; these outcomes can range from policy proposals informed by evidence collected during research, to teaching/learning resources that support the delivery of community-based digital literacy workshops. In this spirit I endeavour to work with my colleagues to develop resources that can be taken up, modified, adapted, or dropped according to local needs and interests. Where possible, this work engages community members as research facilitators and assistants, as well as participants. Importantly, this process of shared discovery and knowledge mobilisation involves a recognition of both Indigenous protocol and formal institutional procedures; it necessitates long-term engagement and continuous reflection on emergent goals and outcomes. These debates, which at times reflect disagreement and divergence of opinion, are nonetheless essential to identify, document, and apply project processes and outcomes that are appropriate and relevant to the needs of involved community members.

The emergent process that I employ in these projects reflects many divergences, from challenges in empirical data collection and analysis, to changes in team composition, and even major shifts in project direction. It requires a dynamic, flexible project methodology that sometimes raises tensions with traditional academic research models. However, this way of working contributes to capacity building as well as concrete research outcomes; an observation consistent with other scholarship on community-engaged ICT research, and with the work of Indigenous scholars who highlight how daily practices contribute to the continual renewal of Indigenous communities against the challenges of settler colonialism (e.g., Alfred & Corntassel, 2005; Borrows, 2010; Simpson, 2011; Tuck, 2009). But despite strong research on the development and adoption of digital technologies by Indigenous groups, a knowledge gap exists with regard to how digital inclusion policies and programmes might best enable such outcomes. In this context, I argue that digital inclusion policy and programming requires more than a “one size fits all” approach; rather it must engage and reflect practices that will drive effective use and self-determined development initiatives in diverse and situated settings.

Section 1: Supply-side intervention - First Mile Connectivity Consortium shaping digital access policy

In his 2014 book The Contradictions of Media Power, Des Freedman argues that media reform initiatives emerge in a variety of forms, including those which require engagement with official structures like formal regulatory processes. He notes that this is often not the preferred route for media activists, who are more likely to be engaged in producing alternative content or setting up new organisations than in lobbying existing institutions to change (p. 132). Nonetheless, he argues that institutional reforms provide important contributions to more equitable, democratic media systems (p. 139). This tension between reforming existing institutional structures and establishing new ones also occurs in the area of telecommunications policy (Lentz, 2013), which is the focus of this case study. The work of telecommunications policy reformers has a dual focus: to engage with policy as it is currently constituted, and to propose reforms that reflect how they would like it to be.

Introducing the First Mile Connectivity Consortium

Focusing on supply-side interventions in digital inclusion policy and programming, this section provides a case study of the First Mile Connectivity Consortium (FMCC), a national non-profit association of First Nations technology service providers working to connect rural, remote and Northern regions of Canada. The FMCC was established in 2012 by regional technology organisations that represent and are governed by groups of Indigenous communities (Carpenter, 2010; McMahon, Gurstein et al., 2014; O’Donnell et al., 2009). Its membership and board of directors consist of staff from First Nations technology organisations serving remote and rural areas across Canada, as well as university-based researchers including myself. It emerged from a ten-year participatory action research project called First Nations Innovation, and is informed by the Assembly of First Nations ‘e-Community Strategy’ (FMCC, 2018; Whiteduck, 2010). As a co-founder and board member of this association, I am directly engaged in its organisational activities, including strategic planning and preparation of regulatory and policy submissions related to digital inclusion. This work draws on empirical research that I conduct in my role as a university faculty member in partnership with Indigenous communities and FMCC member organisations. This experience provides me with insight into the process of developing and presenting formal policy proposals that are tied to the contexts of the Indigenous communities that FMCC member organisations represent and serve.

While FMCC member organisations are geographically dispersed and come from different organisational, cultural and political backgrounds, they share common goals in reforming digital policy and regulation to better support community and economic development, highlight local innovation, and overcome digital divides. It is important to note that there is a strong history of many Indigenous peoples in Canada setting up local and regional non-profit organisations to secure access to and control of emerging ICTs in a range of contexts, from community radio networks to digital archives (see for example Fiser & Clement, 2012; Hudson, 2013; Whiteduck, Beaton, Burton, & O’Donnell, 2012). In part, this work aimed to counter the widespread production and dissemination of colonial and homogenising discourses about diverse Indigenous peoples that are amplified through mass media. For example, in Canada, mass media consistently present Indigenous peoples as “childlike”, incapable of self-determination, or dangerous (Harding, 2006). As well, Indigenous leaders have raised concerns that English- and French-language media content, typically created in metropolitan centres, can contribute to social disintegration and unwelcome cultural hybridisation (Roth, 2005; Savard, 1998; Valaskakis, 1992). But alongside these developments, Indigenous peoples have created their own media and used it to document Indigenous knowledge and languages (Hudson, 2011; Menzies, 2015). Through this work, Indigenous peoples and their partners questioned not only Western-derived conventions of representation and distribution, but also central issues regarding the ownership and control of media production and distribution. In many cases, they developed their own institutions and production practices, the vibrancy and impact of which is reflected in a growing body of research and practice (Battiste, 2018; Perley, O’Donnell, George, Beaton, & Peter-Paul, 2016).

FMCC’s work over the years reflects a similar trajectory of community-driven efforts to secure Indigenous ownership and control over emergent digital infrastructures. Here, I discuss one of FMCC’s digital inclusion efforts, which proposed reforms to broadband funding mechanisms targeted to address digital access divides in Indigenous regions of Canada. In these areas connectivity services are very limited – particularly in comparison to the higher standards of service available in more populated and urban areas (CRTC, 2016a; Fiser & Jeffrey, 2013; Office of the Auditor General of Canada, 2018). Users in organisations and households share limited bandwidth capacity that is often congested, and if a connection goes down and no local technician is available to fix it, they can wait weeks for repairs. Further, many of these communities are served by satellite, which adds problems of latency to efforts to deliver services such as telehealth and distance education (Hudson, 2015; CRTC, 2014). Finally, the limited broadband available in these areas is expensive, especially when data caps are taken into consideration. Figure 1 illustrates these regions in blue.

Figure 1: Northern, rural and remote regions

“Market forces” have failed to drive incumbent private sector telecommunications companies to develop broadband infrastructure and services in these regions, with the result that various government agencies have established subsidy programmes to encourage deployment (CRTC, 2015; McNally, Rathi, Evaniew, & Wu, 2017). Rajabiun and Middleton (2013) divide these programmes into two main types: urban-rural cross-subsidies drawn from the revenues of telecommunications providers and managed by the Canadian Radio-Television and Telecommunications Commission (CRTC); and budgetary contributions established through government funding initiatives. In this case study I focus on the first form of subsidy, tracing how the FMCC intervened in a series of formal regulatory proceedings in an attempt to influence how this subsidy is structured and delivered through broadband funding programmes.

Policy engagement for First Mile development

To contribute an effective intervention, it is important for reformers to learn the discourse, structure, and process employed in formal regulatory hearings (Shepherd, Taylor, & Middleton, 2014). Community-based technology organisations have few opportunities to influence the policies and regulations that shape the conditions they operate in. Despite the on-the-ground work they do in building and operating digital services, these parties often lack the financial, technical, institutional, and human resources that might support their intervention activities, given the technical language and formal procedures associated with regulatory hearings. At the same time, these groups can build relationships with state institutions so they become recognised and accepted as reputable sources. Further, as Hintz (2009) argues, such attempts to influence policy from the ‘inside’ require certain conditions in order to be effective. These include a political opportunity structure that will allow for change, strong alliances, weak (or fragmented) opponents, and the ability to effectively frame and communicate policy objectives to a target audience. Actors with expert knowledge in the area under consideration can provide valuable supports to policy deliberations. However, participation in formal proceedings that do not provide effective space for critical and open discussion, or in cases where decisions are pre-determined before a public proceeding has occurred, risks legitimising an inequitable and unfair process. Interventions such as the ones described in this case study are only possible because the policy-making environment represented in the CRTC’s regulatory hearings included positive conditions for civil society participation. It could not have been successful in the face of a less open process, a pre-determined outcome, or unreceptive policymakers.

FMCC began contributing to telecommunications regulatory proceedings in 2012, during a review of Northwestel’s proposed Modernisation Plan (CRTC, 2012-669) that concerned services provided by the incumbent telecommunications carrier in the three northern territories. Mobilising a panel of academic experts and staff from Indigenous technology organisations, FMCC pointed out that northern residents are providers as well as consumers of telecommunications services, and argued that subsidies to upgrade and operate facilities in the North should therefore not be limited to the incumbent. This process involved extensive planning, which included building a common discourse among participants situated in different cultural, political, economic and geographic contexts, as well as conducting research that was then adapted to meet the Commission’s requirements. Through this experience, the FMCC also learned the norms and rules of regulatory hearings, the kinds of evidence and argument allowed, and the format and structure of written filings and in-person presentations. The FMCC documented its experiences during this intervention, making process notes and written filings available to other groups interested in taking similar actions (McMahon, Hudson, & Fabian, 2014).

This experience informed FMCC’s subsequent regulatory activities. In its decision, the CRTC recognised that broadband Internet access has become an important means of communication for northern Canadians, needed to achieve many social, economic, and cultural objectives (CRTC, 2013). Its findings recognised the special conditions and challenges in the Canadian North, and that market forces alone were not addressing them. However, rather than mandating any new or expanded subsidies, the Commission deferred the funding issue to a subsequent proceeding, to be held in 2015-2016. Through these decisions, the FMCC learned how the CRTC operates when ruling on regulatory proceedings; and importantly, that interventions should address the policy framework and questions under consideration in a specific hearing.

The CRTC’s “Basic Service Objective” proceedings

The next phase of the FMCC’s regulatory journey began in April 2015, when the CRTC announced a new proceeding “to conduct a comprehensive review of its policies regarding basic telecommunications services in Canada” (CRTC, 2015). The Commission’s notice included an examination of how these services are used to access “essential services”, their costs, and which areas are unserved or underserved. Importantly, the proceeding would also address whether a funding mechanism was required in the region of the incumbent telecommunications provider serving Canada’s northern territories (Yukon, Northwest Territories, and Nunavut),2 and adjacent regions such as the Northern parts of provinces that share similar challenges of limited infrastructure, challenging terrain for construction projects, and geographically dispersed, low-population communities. The opening notice provided a clear indication that the Commission was considering a review of the structure and focus of the broadband funding ecosystem, which FMCC took as an opportunity to contribute evidence on the public record of the shortcomings of existing funding initiatives, as well as to propose specific reforms.

As the hearings progressed, FMCC advanced proposals for reforms to existing funding mechanisms – focusing on those that the Commission had control over. FMCC noted that the CRTC could play a coordinating role in the broadband funding ecosystem, as an administrative tribunal with unique technical expertise and insight into the Canadian communications environment (FMCC, 2016a). FMCC also proposed a new subsidy scheme managed by the Commission. Indigenous organisations faced challenges in securing funding from available programmes, and lacked access to the existing CRTC-managed subsidy available only to major incumbents with an obligation to serve (the National Contribution Fund, or NCF). In order to enable more equitable access to funding, FMCC proposed that organisations already providing telecommunications services in these areas become eligible for CRTC subsidy, and proposed an updated funding mechanism, termed the Northern Infrastructure and Services Fund (NISF). FMCC envisioned the administration of this fund through an independent entity licensed by the Commission and governed by representatives with strong ties to rural, remote and northern regions. The NISF was not designed to replace, consolidate or reduce existing federal funding programmes, but rather to complement them by supporting community-based providers, as well as traditional commercial providers, through a new subsidy drawn from industry revenues. This proposal clearly fell within the scope of the hearing, and particularly the focus to “examine whether a mechanism is required in Northwestel’s operating territory to support the provision of modern telecommunications services in rural and remote areas in Canada” (CRTC, 2015, para. 34). Since the proposal fell within the CRTC’s mandate and jurisdiction, it could be acted upon.

In April 2016, the FMCC presented the NISF proposal to the Commissioners during an in-person hearing in Gatineau, Québec. The public hearings included testimony from other Indigenous and consumer groups, as well as from major telecom providers. While the various interveners expressed a diversity of positions, the various Indigenous groups pointed out similar challenges and potential solutions, including community ownership and control over digital infrastructure. This position was supported by some public interest organisations, although groups differed as to how such outcomes might be achieved. For example, the Public Interest Advocacy Centre favoured a ‘reverse auction’ approach to subsidising infrastructure development that sought to fund the lowest cost solution (regardless of design characteristics), while FMCC advocated for an ‘application-based’ model that would support a greater number and diversity of organisational applicants, including smaller non-profit and community-based organisations. After the FMCC’s presentation, the Commissioners engaged the team of representatives of Indigenous technology organisations and university-based researchers in over an hour of discussion and questions.

Figure 2: FMCC team at CRTC proceedings

After the FMCC’s presentation, and halfway through the two-week public hearing phase, the CRTC broadened the scope of the proceedings to allow interveners to make proposals for a national broadband strategy for Canada (Dobby, 2016). In response, the FMCC submitted an additional proposal that situated the efforts of Indigenous broadband service providers in the context of decolonisation and Indigenous resurgence (FMCC, 2016a). The FMCC stressed the need for broadband as a basic service, and for the CRTC to play a coordinating role in the deployment of that service. This submission included the specifics of the NISF proposal (noted above) as a permanent subsidy mechanism to support this work.

After more than a year of testimony and deliberation, the CRTC released its decision in December 2016 (CRTC, 2016b). The decision indeed designated broadband a basic service, increasing target speeds to 50 Mbps download / 10 Mbps upload, and requiring providers to offer an ‘unlimited’ bandwidth option (that is, no data caps). The Commission also announced it was establishing a new infrastructure fund for ‘underserved’ areas: $750 million CAD over five years. The fund, which was sourced from Telecommunication Service Providers’ revenues, was positioned as an attempt to align with the broader funding ecosystem for broadband. Unlike the previous National Contribution Fund, all qualified service providers – including Indigenous community-based organisations – are eligible to apply for this new fund, which will be managed at arm’s length, based on objective criteria determined in a subsequent proceeding (CRTC, 2016b). At the time of writing, these criteria have become publicly available and are included in the 2019 Application Guide for the Broadband Fund (CRTC, 2019). They include factors that groups including FMCC strongly advocated for, including eligibility of non-profit and Indigenous applicants, open access requirements for funded projects, requirements for community consultation, and recognition by applicants of any impacted Aboriginal and/or treaty rights (FMCC, 2017; FMCC, 2019). While the long-term implications of this decision for community-based service providers remain to be seen, it was nonetheless welcomed as a big win by the FMCC and other public and consumer interest groups (FMCC, 2016b; Open Media, 2016; Affordable Access Coalition, 2016).

Since the conclusion of these proceedings the government of Canada has established additional funding mechanisms for the deployment of broadband infrastructure (Government of Canada, 2019). The FMCC continues to intervene in regulatory hearings to advocate for its position that telecommunications policy frameworks should be designed and implemented in ways that enable communities to build, own and operate their own local telecommunications infrastructure and services. In short, FMCC continues working to advance a “First Mile” approach to supply-side digital inclusion policy.

Section 2: Demand-side intervention - Piikani Cultural and Digital Literacy Camp Program

Digital literacy includes efforts to shape and use digital ICTs in ways that emerge from the self-determined needs of communities. This approach adopts the critical framework of community informatics, which foregrounds social practices of community development, capacity building, network formation, and effective use of ICTs as well as technical knowledge and skills (Gurstein, 2003; 2012). Community informatics extends ICT adoption beyond an individual’s ability to use a computer, software like Microsoft Office, or social media to include planning, managing, shaping, implementing, maintaining, and evaluating digital ICTs to address community-identified desires. This positioning responds to recent developments in the study and teaching of digital literacy that stress the need to encompass social practices as well as technical skills (Gillen & Barton, 2010; Ventimiglia & Pullman, 2016). From this perspective, digital literacy is grounded in local cultures and understandings - it is sustained by the ways people make meaning through their daily interactions with ICT (Media Smarts, n.d.; Rheingold, 2012).

In the context of Indigenous peoples in Canada, this orientation ties to the Truth and Reconciliation Commission’s Calls to Action (2015), which stress that contemporary educational activities involving Indigenous peoples must not repeat the failures of the past. Pointing to the country’s past and ongoing history of settler colonialism, the Commission describes the government’s activities as a form of cultural genocide, “the destruction of those structures and practices that allow the group to continue as a group” (p. 5). Such activities, which sought to gain control over Indigenous land and resources, include banning language and cultural practices, working to destroy social and political institutions, seizing land and other property, persecuting spiritual practices, and disrupting families through residential schools. When based in a form of reconciliation that aims to overcome this conflict and establish healthy and respectful relationships, digital literacy initiatives reflect models of education more appropriate to Indigenous ways of knowing and teaching (Harding, 1998; McMahon et al., 2017; O’Connor, 2013; Molyneaux et al., 2012).

Digital literacy initiatives with Piikani First Nation

In this second case study I discuss a digital literacy initiative co-created with Piikani First Nation in the southern region of the province of Alberta that aims to counter the negative implications of digital ICT adoption by organising digital literacy teaching and learning around Indigenous cultural revitalisation. I am the primary investigator on a series of grants that have supported this project, and in that role I have worked closely with Blackfoot Elder Herman Many Guns, who has guided the project to ensure that it follows Piikani cultural protocols as well as university ethics requirements. Since the community strongly encouraged this work to focus on local youth - and specifically, high school students - this initiative also involves the Peigan Board of Education (PBOE) and Piikani Nation Secondary School, and has been developed in close collaboration with these two First Nations educational organisations. Together, we decided that the project would focus on providing tangible outcomes with respect to Piikani-specific digital literacy resources and programming created for Grade 9 and 10 students. Our joint efforts to find ways to integrate Piikani culture and language with high school education and digital literacy courses have proven to be challenging but rewarding. Over the life of this project I have learned about Piikani ways of working with researchers, an approach that has informed the empirical research that I conducted during the activities described here. Ongoing planning conversations and cultural activities have greatly enriched my understanding of ways to envision and implement digital literacy initiatives that better reflect the lived experiences of participating community members. Project governance follows traditional protocols and Western partnership agreements, and is endorsed by both community (PBOE) and traditional (Elder’s Council) leadership. An important part of this initiative is combining traditional protocols with Western planning documents, a method proposed by the participating Elders to support project sustainability and address Piikani protocol (Bastien, 2004; Conaty, 2015). These activities are facilitated by Elder Herman, who led the protocol to name the project in October 2017, and guides its ongoing development.

Ii na kaa sii na ku pi tsi nii kii: the Piikani Cultural and Digital Literacy Camp Program

Ii na kaa sii na ku pi tsi nii kii, the Piikani Cultural and Digital Literacy Camp Program, explores ways to emphasise Blackfoot cultural knowledge and modes of learning through digital skills development with high school students. While English is the main language spoken at Piikani Nation Secondary School, its approximately 200 students take at least a half-hour of daily Blackfoot language instruction, as well as cultural classes (Ross, 2020). However, few of the students speak Blackfoot at home or in their day-to-day lives, and so our team determined that digital literacy is a way not only to engage youth in being exposed to and understanding the words that connect them with their culture, but also to provide them with a structured means to digitally document that information for the benefit of themselves and future generations. This approach builds on the important work done by Blackfoot educators to develop land-based teachings (Blood, 2005; Enlivened Learning, 2015) and use digital tools to document language, such as through the Blackfoot Online Language Resources website.

In preparing the digital literacy programme, students, facilitators, and administrators from Piikani First Nation in Southern Alberta collaborate with university-based researchers to investigate, adapt, test, and refine digital literacy practices and resources. An ongoing planning and evaluation cycle supports continuous improvement, as the team reviews project scope, curriculum, and activities on an annual basis. Through discussions, surveys and interviews, the team engages in ongoing reflections about the implications of digital ICT for Piikani culture and language, and for digital inclusion more broadly. These reflections consider appropriate ways of teaching digital literacy to youth, as well as how that learning might support community-building and resurgence. Importantly, this involves traditional Piikani Blackfoot protocol. Every year Herman (an Elder who holds appropriate Blackfoot knowledge transfer rights) leads camp preparations that include collecting willow tree branches, river stones, and firewood to build a sweat lodge offering (Ross, 2020). After blessing the ground of the camp with a smudge ceremony, he invites the participation of both upper and earthly beings in the event and asks the Creator to support the camp and students. Participants have the option to join the sweat lodge (for men) or listen to tipi teachings from a female Elder (for women). Each morning starts with a pipe ceremony to help ensure the day begins in a good way.

The Piikani Cultural and Digital Literacy Camp began in summer 2017, when our team piloted this approach. Early work involved assembling a project team (including community facilitators), creating learning materials (student workbook and facilitator handbook), and developing logistics plans and budgets. The project has since evolved into a multi-day digital literacy Camp Program for students from Piikani Nation Secondary School, during which students receive Career and Technology Studies (CTS) course credits. Ongoing collaborative research and evaluation has led to eight modules that cover a range of digital skill-building activities, including video production, community-based data management, and analysis of cultural appropriation/appreciation. This classroom learning is blended with hands-on activities and experiential learning at the three-day/two-night outdoor camp, during which students apply their new digital skills to document and preserve the ancestral knowledge shared by Elders. Students are trained to film Piikani Elders showcasing local history and knowledge, including building sweat lodges and assembling tipis (see Figure 3).

Figure 3: Piikani digital literacy camp programme

As digital stewards, students are shown practices they can use to transfer their recordings to local institutions that will manage and preserve them, including PBOE and Piikani Traditional Knowledge Services. In this way they are introduced to data ownership and sharing protocols that support community management of digital data (videos, photos, and audio recordings) (Wemigwans, 2016). These activities reflect emerging principles of data sovereignty, which refer to the efforts of Indigenous peoples to secure control of their digital data (Rodriguez-Lonebear, 2016; Schnarch, 2004). The concept of data sovereignty is a way to think about data management practices that derives from the inherent sovereignty of Indigenous nations; it is defined as “the right of a nation to govern the collection, ownership, and application of its own data” (U.S. Indigenous Data Sovereignty Network, 2018, para. 2). In Canada, the First Nations principles of OCAP™ (Ownership, Control, Access and Possession), developed in the mid-1990s by the First Nations Information Governance Centre, are adaptable and designed to allow each First Nation community or region to interpret and implement them according to its specific context (First Nations Information Governance Centre, 2014; Schnarch, 2004). They provide an important set of guidelines when developing or using a digital platform to house and present Indigenous data.

At the time of writing, some 25 students have taken part in the three camps held so far (the 2017 pilot and camps in 2018 and 2019). Annual evaluations conducted through surveys and interviews with camp participants have indicated strong interest in the programme, as well as ideas about how to expand on existing local knowledge and capacities. Our team uses this feedback to revise curriculum and incorporate new topics, such as cyber-bullying and the importance of consent when posting to social media. Anecdotal comments have indicated some of the programme’s impacts among students. One 15-year-old student noted that “We got to catch a moment on camera so we can look back at it”, and said that she enjoyed sleeping in a tipi as her ancestors did, as well as recording her community’s traditions (Betkowski, 2017). Another 16-year-old student said that: “It’s cool that we are videotaping our culture and going to be sharing the video with other people” (ibid). Participating Elders have also observed the impacts of the camp programme. For example, Herman says several students improved their performance in school, and that while many students face social challenges and were shy at the beginning, the camp helped them come out of their shells (Ross, 2020). He said those living on low incomes enjoy it even more because they are less exposed to technology at home (ibid).

The team plans to hold the camp again in July 2020, after which we will explore ways to transfer ownership and control of the initiative over to the Piikani community. Piikani community members drive all aspects of the digital literacy programme; the project’s iterative, collaborative planning framework helps build capacity in partner organisations on an ongoing basis. Regular, ongoing interactions identify local needs and interests that in turn help integrate appropriate forms of digital literacy in this particular community context.

With respect to its implications for broader digital inclusion initiatives, the approach taken by the Piikani project team illustrates one way that demand-side interventions can better reflect the circumstances of communities that face challenges with respect to limited connectivity or access to devices, or that raise concerns over the impacts of digital technologies on culture and language. The Piikani community identified that digital literacy pedagogy should start from a foundation of cultural modes of land-based learning that the Piikani people have used for millennia. Importantly, this approach followed Nation-specific cultural protocol and sought to find ways that digital technologies enable language revitalisation and community development. We hope that these goals, and the ongoing involvement of community members in all aspects of the camp, will enable its long-term sustainability. We also suggest that these and other findings provide important lessons for digital inclusion policy and practice.

Conclusion: Supporting an enabling environment for digital inclusion

At present, digital inclusion policy and programming is open to new forms of engagement made possible by a combination of political will, citizen participation in decision-making, and the affordances of still-evolving digital infrastructures and technologies. The two case studies described here, as well as a host of other interventions, are outcomes of participatory opportunities made possible through regulatory proceedings, flexible proposals for digital literacy programming, and collaborations involving a diverse array of like-minded organisations and individuals. Several internal factors also supported this work: targeted research linked to the issues under deliberation, the capacity to formulate projects in the manner required by regulatory and educational institutions, and the passion and competencies of participating community members who effectively communicated the intricacies of ICT development, adoption, and use - and, importantly, what they meant for the present and future of their communities.

Co-developing digital inclusion policy and programming in Indigenous contexts

The two interventions described in this paper emerged over time through repeated iterations, during which participating organisations and individuals gained experience and understanding of the activities and issues under consideration. This work ties to a development trajectory grounded in Indigenous societies that existed and prospered long before the advent of digital ICTs available today, and the institutions set up by modern state governments to regulate their development and use (McMahon, 2014). Scholars of Indigenous resurgence stress this recognition of the inalienable and unique legal status of Indigenous peoples and the inherent, group-differentiated rights and responsibilities that flow from that status (Borrows, 2010). I suggest here that this position might be operationalised in digital inclusion policy and programming through an “enabling environment”: a concept that links laws and policies to the ideas, values and practices of participatory development (Price & Krug, 2002; Raboy, 2005). Development theorists like Amartya Sen (1999) have argued for policies to better support and account for human agency, encouraging both state governments and civil society organisations to avoid conflating the means of development with its ends. In this framework, enabling environments aim to create the conditions that might support endogenous forms of digital inclusion, such as the two interventions described in this paper.

This proposal reflects increasing consensus among United Nations (UN) member states on models of “internal decolonisation” that formally recognise Indigenous land claims, self-government rights, laws, and customs through the UN’s Declaration on the Rights of Indigenous Peoples (UNDRIP) (2007). The parties involved in drafting that document stressed the need to operationalise self-determination to fit their diverse lived experiences, and to this end outlined four broad categories of participatory rights (see Stavenhagen, 2011, pp. 273-4):

  • The right to participate fully in the political, economic, social and cultural life of the state.
  • The right to maintain and develop distinct political, legal, economic, social and cultural systems and institutions.
  • The right of Indigenous institutions to act as a nexus between Indigenous peoples and states, to support participation in public life and control over their own affairs.
  • The right that states give due recognition to Indigenous laws and customs.

UNDRIP recognises the laws and practices of Indigenous peoples, and reflects forms of self-determination that emerge from place-based laws, beliefs, and practices. This is seen, for example, in support for the development of Indigenous institutions, which arise autonomously and are best equipped to engage with the lived realities of members of Indigenous communities. This approach advocates for increased opportunities for community-based institutions to shape the state laws and policies that impact the lives of their constituent members. Examples of such reforms include the creation of reserved parliamentary seats for Indigenous representatives in New Zealand (where the Māori Party was founded in 2004), and subsidies to support Indigenous media and technology organisations in Canada.

Processes of technology development both shape and are shaped by broader negotiations over self-determination. Indigenous peoples engage with states over the policies and regulatory frameworks that reflect the development, adoption, and use of emergent technologies. These activities have normative outcomes: technologies are not only tools of self-determination, but can also entrench structures of colonialism and inequality. For example, state and corporate entities have used digital networks and technologies to undertake the surveillance and control of Indigenous peoples. However, to accept such negative effects at face value is to fall into the trap of social and technical determinism. It is impossible to define with conviction a priori the path or effects of any development. At best, we can attempt to describe its logics, activities, and structures, with the goal of critical analysis and reform. Framed this way, digital inclusion intersects with ongoing struggles over colonialism/self-determination. Digital networks and technologies are quickly achieving closure as the invisible platforms guiding many aspects of our lives, but for now, the ways that these new technologies are being shaped and diffused are subject to public review and deliberation. In this context, the enabling environments of policies and practices that support and constrain digital inclusion projects become a key site of negotiation. Examples of digital self-determination taking place in Indigenous communities demonstrate the kinds of initiatives that such enabling environments might support. However, they also contribute something more: new ways of thinking about how we can identify and re-shape the relations of inequality and potential that become embedded in our built environments.

Focus areas to guide future digital inclusion initiatives

In this context, I end this article by proposing six focus areas to guide future digital inclusion interventions. These focus areas are drawn from my work over the past ten years with Indigenous communities and technology organisations in Canada, which itself rests on a foundation of decades of effort by university-based and community-based researchers. These six focus areas are:

1. Digital asset-mapping to support community development: Community members can identify digital assets that can be shared in learning resources and policy proposals. Assets to be explored might include: existing technology support organisations, broadband capacity, technical expertise, online applications, digital archives, language resources, and data management initiatives.

2. Supporting community technology organisations: Digital inclusion initiatives should document and share business cases, policy supports, regulatory frameworks, and funding initiatives that sustain community-owned and operated digital infrastructure and services. Digital access is important, but it should be accompanied by opportunities for local and regional organisations to secure resources to meet community development goals. This focus area identifies ways that community organisations can engage in development work at the ‘First Mile’.

3. Policy and regulatory advocacy for digital self-determination: Community members should be empowered to contribute to policy and regulatory decisions associated with appropriate technology development initiatives. Indigenous voices can then contribute to decision-making in both public and NGO sectors and help identify barriers to participation. This includes critically interrogating initiatives aimed at addressing digital divides to ensure they reflect local interests and desires.

4. Building and sustaining community networks: Participants should be empowered to learn digital networking technologies and gain experience setting up and testing broadband networks. This includes hands-on technical activities, such as building wireless networks for on-the-land connectivity. Activities can be taught by local facilitators.

5. Managing community-owned data: Community members already capture, organise, manage, and use a variety of digital data, including photos and videos, through ICTs such as data management systems. Digital inclusion interventions should develop resources showcasing local ownership and control of this digital data, including for digitised Indigenous knowledge and self-government resources such as health and education data (Schnarch, 2004).

6. Developing appropriate digital literacy resources: Digital inclusion initiatives should strive to facilitate the creation and sharing of digital language and cultural resources by involved community members. Participants can gain hands-on experience using digital ICT such as digital cameras and GIS mapping applications, and complete learning modules to reflect on the relationships between digital ICT and cultural revitalisation. Digital media activities can be taught by Indigenous facilitators hired by projects, while curriculum can showcase existing Indigenous learning resources.

It is my hope that these six focus areas, and my efforts to document our experiences in the two case studies outlined in detail here, are useful to others working on similar initiatives in Canada and beyond. Critically oriented digital inclusion scholars and practitioners question the ability of existing institutions, policies, and programmes to adequately incorporate the voices of marginalised individuals and populations (Alexander, n.d.; Moll & Shade, 2013). Models of participatory development can foreground rhetoric at the expense of substantive reform, and so become a form of co-optation rather than transformation. Given the presence of intersectional structural inequalities, a range of individuals and populations must gain more voice and influence in the enabling policies and regulations shaping digital inclusion. As Sen (1999) writes: “capabilities [of persons] can be enhanced by public policy, but also, on the other side, the direction of public policy can be influenced by the effective use of participatory capabilities by the public” (p. 35). Put differently, digital inclusion policies and programmes must both shape, and be shaped by, broader struggles over self-determination.

Acknowledgement

This article was developed in close collaboration with the First Mile Connectivity Consortium and the Piikani Cultural and Digital Literacy Camp Project teams. The two case studies described here are only possible through the efforts of the many people involved in these two initiatives. I acknowledge their essential contributions and thank them for their support. I would also like to thank the article reviewers and journal editors for their detailed and constructive feedback.

References

Affordable Access Coalition (2016, December 21). Public interest and consumer groups applaud CRTC’s internet access decision, look forward to more work on affordability [Press Release]. Affordable Access Coalition. https://www.piac.ca/our-specialities/public-interest-and-consumer-groups-applaud-crtcs-internet-access-decision-look-forward-to-more-work-on-affordability/

Alexander, C. J. (n.d.). Deconstructing digital delusions and dependencies: The politics of identity and federal citizenship in Canada’s digital frontier. Retrieved March 10, 2017, from http://www.policy.ca/reports/cynthia%20alexander/issue-report%20Deconstructing%20Digital%20Delusions%20and%20Dependancies.pdf

Alfred, T., & Corntassel, J. (2005). Being Indigenous: Resurgences against contemporary colonialism. Government & Opposition, 40(4), 597–614. https://doi.org/10.1111/j.1477-7053.2005.00166.x

Alia, V. (2010). The new media nation: Indigenous peoples and global communication. Berghahn Books.

Bastien, B. (2004). Blackfoot ways of knowing: The worldview of the Siksikaitsitapi. University of Calgary Press.

Battiste, M. (2018). Reconciling Indigenous Knowledge in education: Promises, possibilities, and imperatives. In M. Spooner & J. McNinch (Eds.), Dissident knowledge in higher education (pp. 123–148). University of Regina Press.

Beaton, B., & Campbell, P. (2014). Settler colonialism and First Nations e-Communities in Northwestern Ontario. Journal of Community Informatics, 10(2).

Betkowski, B. (2017). Camp-out turns teens into digital knowledge-keepers of the future [Blog post]. https://www.folio.ca/camp-out-turns-teens-into-digital-knowledge-keepers-of-the-future/

Blood, N. (Director). (2005). Narcisse Blood: Indigenous Knowledge: The Inclusion of Aboriginal Content in Curricula and its Challenges (Part 5) [Video]. Learn Alberta. http://www.learnalberta.ca/content/sssi05/html/blood5.html

Borrows, J. (2010). Canada’s Indigenous Constitution. University of Toronto Press.

Bredin, M. (2001). Bridging Canada’s digital divide: First Nations’ access to new information technologies. The Canadian Journal of Native Studies, 21, 191–215.

Canadian Radio-Television and Telecommunications Commission. (2013, December 18). Telecom regulatory policy CRTC 2013-711: Northwestel Inc. – Regulatory framework, modernization plan, and related matters. http://www.crtc.gc.ca/eng/archive/2013/2013-711.htm

Canadian Radio-Television and Telecommunications Commission. (2014, October). Satellite inquiry report (October 2014). http://www.crtc.gc.ca/eng/publications/reports/rp150409/rp150409.pdf

Canadian Radio-Television and Telecommunications Commission. (2015, April 11). Telecom notice of consultation CRTC 2015-134: Review of basic telecommunications services. http://www.crtc.gc.ca/eng/archive/2015/2015-134.htm

Canadian Radio-Television and Telecommunications Commission. (2016a, December 21). CRTC submission to the Government of Canada’s innovation agenda. CRTC. http://www.crtc.gc.ca/eng/publications/reports/rp161221/rp161221.pdf

Canadian Radio-Television and Telecommunications Commission. (2016b, December 21). Telecom regulatory policy CRTC 2016-496: Modern telecommunications services – The path forward for Canada’s digital economy. http://www.crtc.gc.ca/eng/archive/2016/2016-496.htm

Canadian Radio-Television and Telecommunications Commission. (2019, November 13). Broadband fund -- Application guide for the 13 November 2019 call for applications: Appendix to telecom notice of consultation CRTC 2019-372. https://crtc.gc.ca/eng/internet/guid.htm

Canadian Radio-Television and Telecommunications Commission. (2020, January). Communications monitoring report 2019. https://crtc.gc.ca/pubs/cmr2019-en.pdf

Carpenter, P. (2010). The Kuhkenah Network (K-Net). In J. P. White, J. Peters, D. Beavon, & P. Dinsdale (Eds.), Aboriginal policy research VI: Learning, technology and traditions (pp. 119–127). Thompson Educational Publishing.

Conaty, G.T. (2015). We Are Coming Home: Repatriation and the Restoration of Blackfoot Cultural Confidence. AU Press.

Dalseg, S. K., & Abele, F. (2015). Language, distance, democracy: Development decision making and northern communications. The Northern Review, 41, 207–240. https://doi.org/10.22584/nr41.2015.009

Dobby, C. (2016, April 18). CRTC chair makes strong call for national broadband strategy. Globe & Mail. http://www.theglobeandmail.com/report-on-business/crtc-chair-makes-strong-call-for-national-broadband-strategy/article29671174/

Duarte, M. E. (2017). Network sovereignty: Building the internet across Indian Country. University of Washington Press.

Dyson, L. E., & Grant, S. (2006). Information technology and indigenous people. Idea Group Publishing.

Enlivened Learning. (Director). (2015). Re-learning the Land: A Story of Red Crow College [Video file]. http://films.enlivenedlearning.com/re-learningtheland

Fiddler, A. (2019, February 12). Fort Severn First Nation satellite broadband upgrade a relief for the community. [Blog Post]. https://knet.ca/content/fort-severn-first-nation-satellite-broadband-upgrade-relief-community

First Mile Connectivity Consortium. (2018). Stories from the First Mile: Digital technologies in remote and rural Indigenous communities. Fredericton: FMCC. Retrieved December 4, 2018 from: http://firstmile.ca/wp-content/uploads/Stories-from-the-First-MIle-2018.pdf

First Mile Connectivity Consortium. (2016a). Telecom notice of consultation CRTC 2015-134 Review of basic telecommunications services — Final Comments from the FMCC. https://crtc.gc.ca/eng/archive/2015/2015-134.htm

First Mile Connectivity Consortium. (2016b, December 22). FMCC press release on CRTC decision supporting First Nations digital innovation [Press Release]. http://firstmile.ca/2992-2/

First Mile Connectivity Consortium. (2017). Telecom notice of consultation CRTC 2017-112 Development of the commission’s broadband funding regime— Final Comments from the FMCC. http://firstmile.ca/wp-content/uploads/FMCC-2017-112-Final-Comments-FINAL.pdf

First Mile Connectivity Consortium. (2019). Telecom notice of consultation CRTC 2019-45: Call for comments – Application guide for the broadband fund – Submission of the First Mile Connectivity Consortium. https://crtc.gc.ca/eng/archive/2019/2019-45.htm

First Nations Information Governance Centre. (2014). Ownership, control, access and possession (OCAP™): The path to First Nations information governance.

Fiser A., & Clement, A. (2012). A historical account of the Kuh-Ke-Nah Network: Broadband deployment in a remote Canadian Aboriginal telecommunications context. In A. Clement, M. Gurstein, M. Moll, & L. R. Shade (Eds.), Connecting Canadians: Investigations in community informatics (pp. 255–282). Athabasca University Press.

Fiser, A. & Jeffrey, A. (2013). Mapping the long-term options for Canada’s north: Telecommunications and broadband connectivity. Conference Board of Canada.

Freedman, D. (2014). The contradictions of media power. Bloomsbury Academic.

Gibson, K., Kakekaspan, M., Kakekaspan, G., O’Donnell, S., Walmark, B., Beaton, B., & the People of Fort Severn First Nation. (2012). A history of communication by Fort Severn First Nation community members: From hand deliveries to virtual pokes. Proceedings of the iConference, Toronto.

Gillen, J., & Barton, D. (2010). Digital literacies: A research briefing by the Technology Enhanced Learning phase of the Teaching and Learning Research Programme. Literacy Research Centre, Lancaster University. http://www.tlrp.org/docs/DigitalLiteracies.pdf

Government of Canada. (2019). High-speed access for all: Canada’s connectivity strategy. https://www.ic.gc.ca/eic/site/139.nsf/eng/h_00002.html

Gurstein, M. (2003). Effective use: A community informatics strategy beyond the digital divide. First Monday, 8(12). https://doi.org/10.5210/fm.v8i12.1107

Gurstein, M. (2012). Toward a Conceptual Framework for a Community Informatics. In A. Clement, M. Gurstein, G. Longford, M. Moll, & L. R. Shade (Eds.), Connecting Canadians: Investigations in Community Informatics (pp. 35–60). Athabasca University Press.

Harding, S. G. (1998). Postcolonial science and technology studies: A Space for new questions. In Is Science Multicultural? Postcolonialism, feminisms, and epistemologies (pp. 23–38). Indiana University Press.

Harding, R. (2006). Historical representations of Aboriginal people in the Canadian news media. Discourse & Society, 17(2), 205–235. https://doi.org/10.1177/0957926506058059

Hintz, A. (2009). Civil society media and global governance: Intervening into the world summit on the information society. Transaction Publishers.

Hudson, H. E. (2011). Digital diversity: Broadband and Indigenous populations in Alaska. Journal of Information Policy, 1, 378–393. https://doi.org/10.5325/jinfopoli.1.2011.0378

Hudson, H.E. (2013). Beyond infrastructure: Broadband for development in remote and Indigenous regions. The Journal of Rural and Community Development, 8(2), 44–61. https://journals.brandonu.ca/jrcd/article/view/1002

Hudson, H.E. (2014). Digital inclusion of Indigenous populations as consumers and providers of broadband services. In Pacific Telecommunications Council (Ed.), PTC'14: Fiber in the sky... connects the pacific: new world, new strategies. Pacific Telecommunications Council.

Hudson, H.E. (2015). Connecting Alaskans: Telecommunications in Alaska from telegraph to broadband. University of Alaska Press.

Iseke-Barnes, J. & Danard, D. (2007). Indigenous knowledges and Worldview: Representations and the Internet. In L.E. Dyson, M. Hendriks, & S. Grant (Eds.) Information technology and indigenous people (pp. 27–37). Information Science Publishing.

Kovach, M. (2009). Indigenous methodologies: Characteristics, conversations and contexts. University of Toronto Press.

Kunuk, Z., & Cohn, N. (2010). NITV on IsumaTV: Increasing Inuktitut Language Content across Nunavut Communities. http://s3.amazonaws.com/isuma.attachments/NITVonIsumaTV_090419A.pdf

Lentz, B. (2013). Excavating historicity in the U.S. network neutrality debate: An interpretive perspective on policy change. Communication, Culture & Critique, 6(4), 568–597. https://doi.org/10.1111/cccr.12033

McMahon, R. (2014). Creating an enabling environment for digital self-determination. Media Development, 2, 11–16.

McMahon, R., Almond, A., Steinhauer, D., Steinhauer, S., Whistance-Smith, G., & Janes, D. P. (2019). Sweetgrass AR: Exploring augmented reality as a resource for Indigenous-Settler relations. International Journal of Communication, 13, 4530–4552. https://ijoc.org/index.php/ijoc/article/view/11778

McMahon, R., Gurstein, M., Beaton, B., O’Donnell, S., & Whiteduck, T. (2014). Making information technologies work at the end of the road. Journal of Information Policy, 4, 250–269. https://doi.org/10.5325/jinfopoli.4.2014.0250

McMahon, R., Hudson, H. E., & Fabian, L. (2014). Indigenous regulatory advocacy in Canada’s far north: Mobilizing the First Mile Connectivity Consortium. Journal of Information Policy, 4, 228–249. https://doi.org/10.5325/jinfopoli.4.2014.0228

McMahon, R., Hudson, H.E., & Fabian, L. (2017). Canada’s northern communication policies: The role of Aboriginal organizations. In N. Mulé & G. DeSantis (Eds.), The Shifting Terrain: Public Policy Advocacy in Canada (pp. 259–292). McGill-Queen’s University Press.

McMahon, R., O’Donnell, S., Smith, R., Walmark, B., Beaton, B., & Simmonds, J. (2011). Digital divides and the ‘First Mile’: Framing First Nations broadband development in Canada. The International Indigenous Policy Journal, 2(2). https://ir.lib.uwo.ca/iipj/vol2/iss2/2/

McMahon, R., Whiteduck, T., Chasle, A., Chief, S., Polson, L., & Rodgers, H. (2017). Indigenizing digital literacies: Community informatics research with the Algonquin First Nations of Timiskaming and Long Point. Engaged Scholar Journal, 2(1). https://doi.org/10.15402/esj.v2i1.210

McNally, M. B., Rathi, D., Evaniew, J., & Wu, Y. (2017). Thematic analysis of eight Canadian federal broadband programs from 1994 to 2016. Journal of Information Policy, 7, 38–85. https://doi.org/10.5325/jinfopoli.7.2017.0038

Media Smarts. (n.d.). Digital literacy fundamentals. http://mediasmarts.ca/digital-media-literacy-fundamentals/digital-literacy-fundamentals

Menzies, C. (2015). In our grandmother’s garden: An Indigenous approach to collaborative film. In A. Gubrium, K. Harper, & M. Otañez (Eds.), Participatory visual and digital research in action (pp. 103–114). Left Coast Books.

Moll, M., & Shade, L. R. (2013, October 13–15). From information highways to digital economies: Canadian policy and the public interest [Paper presentation]. World Social Science Forum, Montreal.

Molyneaux, H., O’Donnell, S., Kakekaspan, C., Walmark, B., Budka, P., & Gibson, K. (2012, September 24–28). Community resilience and social media: Communication and cultural preservation using social networking sites [Paper presentation]. 2012 International Rural Network Forum, Whyalla and Upper Spencer Gulf, Australia. http://firstmile.ca/wp-content/uploads/2015/03/2012-Community_Resilience_and_Social_Media.pdf

O’Connor, K. (2013). The use of ICTs and E-learning in Indigenous Education. In M. K. Barbour, State of the nation: Online K-12 learning in Canada, Canadian eLearning Network [Report] (pp. 87–95). CANeLearn. http://www.openschool.bc.ca/pdfs/state_of_nation-2013.pdf

O’Donnell, S., Beaton, B., McMahon, R., Hudson, H.E., Williams, D., & Whiteduck, T. (2016, May). Digital technology adoption in remote and northern Indigenous communities in Canada: An overview [Paper presentation]. Canadian Sociological Association 2016 Annual Conference, Calgary.

Office of the Auditor General of Canada. (2018, September 7). Connectivity in rural and remote areas (2018 Fall Reports of the Auditor General of Canada and the Parliament of Canada No. 1) Office of the Auditor General of Canada. https://www.oag-bvg.gc.ca/internet/English/parl_oag_201811_01_e_43199.html

Open Media. (2016, December 21). In historic decision, Canada joins small handful of nations that define high-speed Internet as a basic service [Press Release]. https://openmedia.org/en/press/historic-decision-canada-joins-small-handful-nations-define-high-speed-internet-basic

Paisley, L., & Richardson, D. (1998). Why the first mile and not the last? In L. Paisley & D. Richardson (Eds.), The First Mile of connectivity: Advancing telecommunications for rural development through a participatory communication approach. Food and Agriculture Organization of the United Nations. http://www.fao.org/docrep/x0295e/x0295e03.htm

Perley, D., O’Donnell, S., George, C., Beaton, B., & Peter-Paul, S. (2016, November). Supporting Indigenous language and cultural resurgence with digital technologies. Mi’kmaq Wolastoqey Centre, University of New Brunswick.

Philpot, D., Beaton, B., & Whiteduck, T. (2014). First Mile challenges to last mile rhetoric: Exploring the discourse between remote and rural First Nations and the telecom industry. The Journal of Community Informatics, 10(2). http://firstmile.ca/wp-content/uploads/2015/03/2014-JoCI-Philpot_Beaton.pdf

Price, M., & Krug, P. (2002). The enabling environment for free and independent media: Contribution to transparent and accountable governance (Occassional Papers Series No. PN-ACM-006). Office of Democracy and Governance, Bureau for Democracy, Conflict, and Humanitarian Assistance, U.S. Agency for International Development. https://repository.upenn.edu/asc_papers/65/

Raboy, M. (2005). Making media: Creating the conditions for communication in the public good. Canadian Journal of Communication, 31(2), 289–306. https://doi.org/10.22230/cjc.2006v31n2a1733

Rajabiun, R., & Middleton, C. (2013). Rural broadband development in Canada’s provinces: An overview of policy approaches. The Journal of Rural and Community Development, 8(2), 7–22. https://journals.brandonu.ca/jrcd/article/view/1004

Rheingold, H. (2012). Net smart: How to thrive online. The MIT Press.

Rodriguez-Lonebear, D. (2016). Building a data revolution in Indian country. In T. Kukutai & J. Taylor (Eds.), Indigenous data sovereignty (pp. 253–272). ANU Press. http://library.oapen.org/bitstream/id/4bdc2f50-705d-48c1-9d12-33319eea6e53/624262.pdf#p276

Ross, J. (2020). Students Use Technology to Preserve Piikani Culture and Language. Internet Society Foundation. https://www.isocfoundation.org/story/students-use-technology-to-preserve-piikani-culture-and-language/

Roth, L. (2005). Something new in the air: The story of First Peoples television broadcasting in Canada. McGill Queen’s University Press.

Roth, L. (2013). Canadian First Peoples’ mediascapes: (Re)framing a snapshot with three corners. In L. Shade (Ed.), Mediascapes: New Patterns in Canadian Communication (4th ed., pp. 364–389). Thomson.

Salazar, J. F. (2007). Indigenous peoples and the cultural construction of information and communication technology (ICT) in Latin America. In L. E. Dyson, M. Hendriks, & S. Grant (Eds.), Information technology and indigenous people (pp. 14–25). Information Science Publishing.

Sandvig, C. (2012). Connection at Ewiiaapaayp Mountain: Indigenous Internet Infrastructure. In L. Nakamura & P. Chow-White (Eds.), Race After the Internet. Routledge.

Savard, J. (1998). A theoretical debate on the social and political implications of Internet implementation for the Inuit of Nunavut. Wicazo Sa Review, 13(2), 83–97. https://doi.org/10.2307/1409148

Schnarch, B. (2004). Ownership, Control, Access, and Possession (OCAP) or self-determination applied to research: A critical analysis of contemporary First Nations research and some options for First Nations communities.” Journal of Aboriginal Health, 1, 80–95.

Sen, A. (1999). The perspective of freedom. In Development as freedom (pp. 13–34), New York Alfred Knopf.

Shepherd, T., Taylor, G., & Middleton, C. (2014). A tale of two regulators: Telecom policy participation in Canada. Journal of Information Policy, 4, 1–22. https://doi.org/10.5325/jinfopoli.4.2014.0001

Simpson, L. (2011). Dancing on our Turtle’s Back: Stories of Nishnaabeg re-creation, resurgence and a new emergence. Arbeiter Ring Publishing.

Stavenhagen, R. (2011). Making the Declaration on the Rights of Indigenous Peoples work: The challenge ahead. In S. Allen & A. Xanthaki (Eds.), Reflections on the UN Declaration on the Rights of Indigenous Peoples (pp.147–170). Hart Publishing.

Strover, S. (2000). The First Mile. The Information Society, 16(2), 151–154. https://doi.org/10.1080/01972240050032915

Truth and Reconciliation Commission of Canada (2015). Calls to action. Truth and Reconciliation Communication of Canada. http://www.trc.ca/websites/trcinstitution/File/2015/Findings/Calls_to_Action_English2.pdf

Tuck, E. (2009). Suspending damage: A letter to communities. Harvard Educational Review, 79(3), 409–428. https://doi.org/10.17763/haer.79.3.n0016675661t3n15

Tuhiwai Smith, L. (1999). Decolonizing methodologies: Research and indigenous peoples (5th ed). Zed Books.

United Nations. (2007). Declaration on the rights of Indigenous Peoples. United Nations General Assembly. https://www.un.org/development/desa/indigenouspeoples/wp-content/uploads/sites/19/2018/11/UNDRIP_E_web.pdf

U.S. Indigenous Data Sovereignty Network. (2018). About us. http://usindigenousdata.arizona.edu/about-us-0

Valaskakis, G.G. (1992). Communication, culture and technology: Satellites and northern Native broadcasting in Canada. In S. Riggins (Ed.), Ethnic minority media: An international perspective (pp. 63–81). Sage.

Ventimiglia, P., & Pullman, G. (2016). From written to digital: The new literacy. EDUCAUSE Review, 51(2), 36–48. https://er.educause.edu/articles/2016/3/from-written-to-digital-the-new-literacy

Wemigwans, J. (2018). A Digital Bundle: Protecting and promoting Indigenous Knowledge online. University of Regina Press.

Whiteduck, T., Beaton, B., Burton K., & O’Donnell, S. (2012, November). Democratic ideals meet reality: Developing locally owned and managed broadband networks and ICT services in rural and remote First Nations in Quebec and Canada [Keynote paper]. Community Informatics Research Network (CIRN) Conference, Prato, Italy.

Whiteduck, J. (2010). Building the First Nation e-Community. In White, J. P., Peters, J., Beavon, D., Dinsdale, P. (Eds.), Aboriginal policy research VI: Learning, technology and traditions (pp. 95–103). Thompson Educational Publishing.

Wilson, S. (2008). Research is ceremony: Indigenous research methods. Fernwood.

Footnotes

1. The convention in Canada when writing about Indigenous peoples is to capitalise the word “Elder”, which is an honorific.

2. In Canada, provinces have more policy autonomy than territories. While provincial powers over areas such as health and education derive from the country’s constitution, territories have delegated powers from the Canadian parliament. This arrangement has implications for an array of jurisdictional, funding and other issues – including with respect to how digital inclusion initiatives are funded in territories vis-à-vis provinces.

Digital youth inclusion and the big data divide: examining the Scottish perspective


This paper is part of Digital inclusion and data literacy, a special issue of Internet Policy Review guest-edited by Elinor Carmi and Simeon J. Yates.

Introduction

Globally, young people aged 15-24 account for nearly one-fourth of internet users (ITU, 2019). In the light of the increasing digitalisation of society, understanding young people’s digital inclusion has become an important topic for researchers (Helsper, 2017; Gangneux, 2019) and policymakers (European Commission, 2018). Digital inclusion is defined as a strategy to ensure that all people have equal opportunities and appropriate skills to access and benefit from digital technologies (ITU, 2019). Digital inclusion practice encompasses a range of methods and approaches used to help individuals and communities to access and understand digital technologies.

In recent years, there has been growing interest in the use of digital technologies in out-of-school learning settings (Harvey, 2016; Ito et al., 2015). Such non-formal education programmes have the potential to recognise and address young people's digital skills and needs, which might be overlooked at school or at home (Black et al., 2015). Examples of prior European youth digital inclusion programmes include coding clubs, discussion groups (e.g., focusing on issues related to online safety), and hackathons (for more examples see www.digitalyouthwork.eu).

Since 2015, the provision of out-of-school digital youth inclusion projects has also become prominent in Scotland (Youth Link Scotland, 2020). As many young Scots still have limited digital literacy (e.g., regarding privacy issues and safe online communications) or internet access, youth digital inclusion has become a priority for policymakers (Scottish Government, 2017), researchers (Gangneux, 2019; Helsper, 2017; Livingstone & Helsper, 2007), and young Scottish activists (5 Rights Youth Commission, 2017). There is an overall agreement that it is essential to ensure that all young people have access to online services and digital literacy support. In fact, the importance of digital youth inclusion and digital literacy education was highlighted in the National Digital Strategy for Scotland document, published in 2017. The strategy document states that the Scottish government’s aim is to equip “children and young people with the increasingly sophisticated and creative digital skills they need to thrive in modern society and the workplace” (Digital Scotland, 2017a, p. 24). The importance of inclusive and youth-centred education was also outlined by young Scottish researchers, who argued that both students and educators require ongoing digital skills support (5 Rights Youth Commission, 2017).

However, while there is ample discussion of why youth digital inclusion is important, analysis of how to effectively contextualise, organise, and manage a youth digital inclusion project is still limited. In Scotland, there is a scarcity of information on how to address youth digital inclusion in times of the big data divide. The big data divide is understood here as an asymmetric power dynamic between those who collect, analyse and benefit from data (e.g., social media companies), and those who are the targets of the data collection process (e.g., social media users) (Andrejevic, 2016).

In this paper, I examine the existing Scottish youth digital strategies and contextualise them within a wider scholarly discourse on digital literacy and the big data divide. Throughout this paper the term digital literacy is used to refer to young people’s practical use of digital technologies in everyday life as well as the process of ‘translating’ these digital activities into beneficial real-world outcomes (Helsper, 2015). I also examine the importance of young people’s critical thinking and critical digital participation. To this end, I ground my analysis in Polizzi’s definition of critical digital literacy, which describes it as “an ensemble of critical abilities, knowledge and interpretations that are essential in the context of democratic participation and social inclusion in the digital age” (2019, p. 2). Thus, in the context of this paper, the ‘critical’ refers to young people’s critical thinking in their everyday interactions with digital technologies, both in terms of practical use and the pro-active analysis of their role and impact on society. I propose that digital inclusion should not only be viewed as a strategy for employment and education, but as a larger, systematic, continually evolving, and critical youth engagement practice.

The aim of this article is to examine some of the emerging challenges associated with digital youth inclusion and the big data divide, and to propose some critical considerations for digital youth inclusion practitioners. The analysis presented here draws from the scholarly discussion on digital youth participation (Eynon & Geniets, 2016; Helsper, 2017; Livingstone & Third, 2017), digital inclusion (Gangadharan, 2017; Livingstone & Helsper, 2007; Scottish Government, 2017), and big data divide (Andrejevic, 2014). The contribution of this paper is its evaluations and recommendations based on three critical areas of focus in the process of establishing digital youth inclusion provisions: (1) digital youth inclusion provision: control and definition of the process; (2) a holistic examination of young people’s digital needs, aspirations, and fears; and (3) a consideration of the impact on young people’s human rights in the era of the big data divide. The analysis presented here is grounded both in my prior academic research on youth digital inclusion (see Pawluczuk et al., 2019) and direct experience of working ‘in the field’ as a youth digital inclusion worker in Scotland.

Digital youth inclusion in the era of the big data divide

The use of digital technologies among young people in the West has rapidly increased in the 21st century (Anderson & Jiang, 2018; Ofcom, 2016). The continually evolving relationship between young people and digital technologies has become a central research theme for scholars (Akom, Shah, Nakai, & Cruz, 2016; Fitton, Little, & Bell, 2016; Ito et al., 2015), policymakers (European Commission, 2018), as well as youth participation and education practitioners (Harvey, 2016; Wilson & Grant, 2017). Livingstone and Helsper propose that youth digital inclusion is “a staged process in which the benefits of internet use depend not only on age, gender, and SES [socio-economic status], but also on the amount of use and online expertise” (2007, p. 691). In this article, the United Nations’ definition of youth is adopted, which describes young people as those aged 15-24 (UN Department of Economic and Social Affairs Youth, 2017). Some scholarly accounts (Little et al., 2016) view young people as “[the] most diverse, dynamic, exciting, and technologically aware user groups that will soon become the next generation of adults” (2016, p. 1). In 2016, 91% of young people in the European Union (EU) made daily use of the internet, compared with 71% of the whole EU population. In the EU, 83% of young people use mobile phones for internet access away from home or work (Eurostat, 2017). A recent UK report revealed that 99% of young people in the United Kingdom between the ages of 14 and 34 were described as “recent Internet users”1 (Office for National Statistics, 2018, p. 8). Increasing digital youth access and participation can also be noted in Scotland, where in 2018 “superfast Internet”2 coverage had increased to 92% of homes and businesses, up from 87% in 2017 (Ofcom, 2018). In 2016, the Scottish Household Survey reported that only 1% of young Scots aged 16 to 24 do not use the internet (Scottish Government, 2016). Therefore, while it is evident that young people are accessing the internet, the quality of their digital participation needs further examination.

The importance of young people’s digital expertise and their proactive role in the digital age is reflected in the way scholars define young people: as digital participants, makers, and ‘doers’ (Ito et al., 2013, p. 6) and as co-designers of digital solutions (Fitton & Bell, 2014). Indeed, youth-led online movements such as the Global Climate Strike (UK Student Climate Network, 2019) or the campaign for the provision of free menstrual products (Free Periods, 2019) are examples of how those aged under 18 can and do use technologies to drive positive social change. Thus, it can be argued that the digital age has enabled some young people to exercise their voices and participate in civic activities as engaged citizens (Ito et al., 2015).

However, the emancipating qualities of the digital world ought not to be romanticised (Buckingham, 2008). While the digitalisation of societies has led to empowerment for some young people, it has also accelerated some of the pre-internet forms of youth social exclusion, as well as creating new ones (Vartanova & Gladkova, 2019). The building blocks of the modern-day digital infrastructure, such as “algorithmic selection, surveillance, and big data, have created new forms of inequality that follow the traditional cultural patterns of class, gender, wealth, and education” (Trappel, 2019, p. 9). Many disadvantaged young people in the United Kingdom still lack access to a computer or to the internet, which may result in a lack of functional digital literacy and thus employability skills (e.g., using a word processor, completing online job applications) (Weston, Lumley, & Curvers, 2018). As stated by Weston et al., “[t]o a young person who’s struggling financially, is lacking stable housing and a meaningful career, is trying to plot a path towards their goals, accessing technology may seem low-priority” (2018, p. 4). Digital exclusion and limited digital and information literacy skills have been reported to lead to “confusion, frustration and defeatism online as well as offline” among young people (Van Deursen & Helsper, 2018, p. 257). This sense of powerlessness and frustration as a result of inability to meaningfully participate online was also noted in Wilson and Grant’s 2017 report on national youth digital inclusion. According to their research, one quarter of unemployed young people “dread” filling in online job applications, with one in ten avoiding the use of computers altogether. In this sense, the notion of online agency does not apply to all young people, and pre-existing inequality barriers clearly persist in digital environments.

Perhaps one of the more common tropes in the youth digital literacy debate is that young people are, as Prensky (2009) argues, digital natives. The term ‘digital natives’ assumes that young people born in the digital era will naturally adopt digital literacy and thus can be assumed to be digitally included in society. However, these narratives have been challenged by various scholars (Helsper, 2015; Weston et al., 2018; Wilson & Grant, 2017) who have questioned the validity of the so-called ‘digital natives’ narrative in the United Kingdom. For example, researchers (Weston et al., 2018; Wilson & Grant, 2017) found that many young people still require support to develop their digital literacy. Furthermore, Porat et al.’s (2018) research investigated young people’s views on their digital literacy and found that young people tend to overestimate their digital literacy. The authors reported overconfidence among young people, which is reflected in some young people’s limited social literacy online: their abilities to share information, express personal opinions, and contextualise these within others’ information and opinions while participating in discussion groups (Porat, Blau, & Barak, 2018) did not match their (considerably higher) perceptions of these skills. Thus, while young people are often considered to be already digitally connected and included, the debate on their digital participation reveals complexities.

Youth digital inclusion should also be examined in the context of the big data divide (e.g., Andrejevic, 2014; McCarthy, 2016). Andrejevic defines the big data divide as “the process whereby people are separated from their data and excluded from the process of putting it to use” (2014, pp. 1685-1686). The big data divide reflects “both the relations of ownership and control that shape access to communication and information resources, and growing awareness of just how little people know about the ways in which their data might be turned back upon them” (Andrejevic, 2014, p. 1675). This problem of the big data divide is particularly important in the context of digital inclusion. As digitally excluded individuals are encouraged and pressured to participate in the digital world, they are also required to accept and comply with the terms and conditions which govern the power structures of the digital society. Thus, one’s digital participation might often mean unconditional, uncontrollable, and overpowering data profiling. As argued by Barassi:

In our data-driven cultures, citizens are constantly forced to comply and provide their personal data. Sometimes, this forced compliance happens in physical ways (e.g., facial recognition technologies in airports). Other times, it happens simply because their lives increasingly unfold in data-driven environments, which rely on automated decision making (2019, p. 415).

In this context of the big data divide, young people’s information sharing and privacy practices require attention. Young people share more personal data than ever; 92% of teen social media users post their real names and 91% post a photo of themselves (Chi et al., 2018). Chi et al. indicate that young people’s growing digital footprints could be “used to track, profile, and shape young people throughout their lives” (Chi et al., 2018, p. 443). Literature analysis reveals that many young digital users are not aware of the ongoing data collection and retention and its possible privacy implications (Hautea et al., 2017). Young people’s lack of access to and understanding of how their data is analysed and shared might have direct consequences on a citizen’s identity and a data subject’s individual and collective self‐determination (McCarthy, 2016). Among those young people who acknowledge the privacy implications of their digital participation, many also feel that they have no choice but to trade their personal information in the name of digital - and thus social - inclusion. While trying to simultaneously manage the opportunities and risks associated with their digital participation, young people report feeling fatigued, powerless, and sometimes even ‘locked into’ their digital presence (Gangneux, 2019; Hargittai & Marwick, 2016). As Hargittai and Marwick (2016) argue:

the assumption behind the existing opaque system is that businesses thrive on users sharing as much content as possible, and so do not benefit from clearer, more user-friendly options. The result of the current arrangement, however, is frustration that yields both apathy as well as self-censorship [among young people] (2016, p. 375).

It might be argued that, while navigating the multiple infrastructures of the digital world, young people find themselves stuck between embracing (and being encouraged to embrace) digital participation (e.g., for employment opportunities) and protecting themselves from its possible side-effects (e.g., data mining, privacy breaches). Gangadharan (2017) reported that digital inclusion project participants are not prepared to confront the challenges posed by the big data divide. Many digital inclusion practitioners lack the time and resources needed to cover privacy and online safety in their teaching programmes (Gangadharan, 2017), and therefore do not provide project participants with opportunities to develop the critical element of their digital literacies. Thus, young people who are socially disadvantaged (e.g., from a lower socio-economic class) or from underrepresented communities (e.g., young people with disabilities or from ethnic minorities), and who are not yet digitally included, are at greater risk of becoming targets of the unethical practices associated with the digital and big data economies.

Youth digital inclusion in Scotland: research and policy context

In recent years, the provision of out-of-school digital youth inclusion projects has become prominent in Scotland (Youth Link Scotland, 2020). The importance of informal digital education for young Scots was highlighted in the National Digital Strategy for Scotland, published in 2017. The Scottish government’s aim is to equip “children and young people with the increasingly sophisticated and creative digital skills they need to thrive in modern society and the workplace” (Digital Scotland, 2017a, p. 24). It is evident that the Scottish government considers young people as important actors in the co-creation of the digital future.

Whilst the overall analysis of digital youth participation has become prevalent since the 2000s, the number of scholarly publications explicitly examining Scottish digital youth is limited. At the time of writing this article, there is no comprehensive review of the Scottish digital youth landscape. The brief analysis presented in this section is based on several academic publications (Coates, 2016; Miller, 2015; Mowbray, Hall, Raeside, & Robertson, 2018) and industry reports (5 Rights Youth Commission, 2017; Wilson & Grant, 2017).

Literature examining digital youth inclusion in Scotland includes analysis of examples of youth political participation and citizenship (Mclaverty et al., 2015), youth information behaviour and digital literacy (Coates, 2016; Miller, 2015; Mowbray et al., 2018), the impact of digital technologies on young people (Woods & Scott, 2016), digital youth inclusion (Wilson & Grant, 2017), digital literacy (Gangneux, 2019), and Scottish youth digital culture (Lyons, McCreanor, Goodwin, & Barnes, 2017). For example, there is evidence of the positive impacts of youth digital participation in Scotland (Mclaverty et al., 2015; Mowbray et al., 2018). Studies of youth digital engagement during the Scottish Independence Referendum provided evidence of first-time voters using social media when searching for and sharing political information (Mclaverty et al., 2015). There is also an indication that young Scots utilised social media while seeking employment (Mowbray et al., 2018).

In response to the increasing importance of digital technologies in young Scots’ lives (Ofcom, 2018), many of Scotland’s youth-centred organisations have embedded digital communication solutions into their programmes. For example, LGBT Youth Scotland’s digital chat counselling service allows young people to reach a youth worker’s support online with confidence (LGBT Youth Scotland, 2020). Young Scot, the national information and citizenship organisation supported by the Scottish government, uses a digital application to share information with young people (Young Scot, 2019). Digital youth inclusion projects in Scotland offer, for example, digital literacy outreach programmes (Duncan, 2016), programming workshops for girls (Crawford, 2019), and access to digital tools (Citadel Youth, 2019).

The relationship between young people and digital technologies has also been explored by Scottish policymakers (European Commission, 2018), youth work practitioners (Youth Link Scotland, 2020), and young Scots themselves (5 Rights Youth Commission, 2017). In 2018, members of the Scottish Digital Youth Network (Youth Link Scotland, 2020) contributed to the publication of the European Commission (EC)’s Policy recommendations for developing digital youth work (European Commission, 2018). The EC’s recommendations include (1) the development of a common understanding of digital youth work across Europe, (2) strategic development of European digital youth work practice, (3) consideration and incorporation of youth participation and youth rights, and (4) application of evidence-based approaches to digital youth work (European Commission, 2018). The Scottish Digital Youth Network (Youth Link Scotland, 2020) is a network of practitioners who utilise digital technologies in their work with young people, which aims to “facilitate learning about new and innovative approaches in digital and developments within policy” (Youth Link Scotland, 2020).

In 2017, the 5 Rights Youth project was commissioned by the Scottish government to carry out a youth-led investigation and contextualisation of the UNCRC human rights treaty for digital technologies (5 Rights Youth Commission, 2017). The 5 Rights Commission was a group of 19 young people aged 14 to 21 from Scotland, whose work was commissioned by the Scottish government in the years 2016-2017. The role of the 5 Rights Commission was to advise the Scottish government on the importance of young people’s digital rights and their implementation in youth digital inclusion programmes. To date, the 5 Rights Commission’s youth-led report provides one of the most comprehensive overviews of young people’s digital participation in Scotland. Drawing on a nationally representative survey of 1,675 young people, it also documents young Scots’ digital needs, aspirations, and barriers to digital participation.

Based on their findings, the 5 Rights Youth Commission proposed a set of recommendations for policymakers. Recommendations for Scottish policy-making included utilising a rights-based approach in future digital policy interventions and involving young people in the co-design of future policies. The 5 Rights Youth Commission also called for the UK government to emphasise the importance of ‘young people’s rights by design’ to protect young people from excessive data collection and surveillance practices. The five rights proposed by the young people included: (1) right to remove; (2) right to know; (3) right to safety and support; (4) right to informed and conscious use; (5) right to digital literacy.

Literature also reveals other evidence of digital literacy shortages among young Scots (Coates, 2016; Wilson & Grant, 2017). For example, a 2016 study suggests that disadvantaged youth from southern Scotland experienced “greater barriers to information access resulting from poor technology skills, information literacy, and social structures and norms” (Coates, 2016). Similar digital literacy issues were highlighted in a 2017 report, which suggested that in Glasgow, “one in ten unemployed young people (10%) cannot send their CV online, while more than one in six (17%) believe they would be in work today if they had better computer skills” (Wilson & Grant, 2017, p. 31). Issues relating to online safety, privacy, data control, and digital awareness have also been highlighted (5 Rights Youth Commission, 2017). According to 52.1% of young people in Scotland, the greatest threats in the digital world include anonymity, bullying, and targeting, which encompasses “bullying online, trolling, grooming, and other targeted exploitations caused by anonymous contacts” (5 Rights Youth Commission, 2017, p. 39).

The need for a nation-wide inclusive and accessible digital youth inclusion and digital citizenship education has also been examined in Scotland. For example, McGillivray et al. stressed the importance of a holistic and critical approach to digital youth engagement:

…critical digital citizenship agenda needs to be embedded in educational narratives [in Scotland], where young people are, through practice, asked to ponder how digitally mediated publics operate in the school setting and beyond. Integrating ‘making’ and ‘thinking critically’ about the benefits and dangers of pervasive digital media in and outside of school is imperative (McGillivray et al., 2016, p. 721).

Online accessibility and inclusion in digital youth participation have been defined as crucial elements of effective digital youth participatory interventions in Scotland. The review of Scotland’s first National Youth Arts Strategy’s digital programme revealed that “Scottish digital youth projects were challenged to think creatively when delivering in isolated or disadvantaged areas” (Hyder, 2016, p. 1). Online connectivity issues such as a lack of mobile phone signal or “patchy internet connection” have also been noted (Duncan, 2016; Wilson & Grant, 2017; Harvey, 2016).

This brief review of Scottish youth digital inclusion programmes and policy developments reveals that young people require ongoing support to meaningfully participate in society in the digital era. The review reveals that many of the existing, often corporate-led, digital inclusion programmes primarily provide young people in Scotland with opportunities to develop their functional digital literacy skills. While these programmes provide educators with useful tools (e.g., iPads), they do not seem to provide recommendations on how to contextualise and critique their design and social impact. This lack of critical digital literacy in youth digital inclusion programmes might lead to the deepening of the big data divide.

Addressing digital and big data inclusion: three areas for consideration

An analysis of literature produced by digital youth and digital inclusion researchers (including young researchers) reveals that three areas should be taken into consideration when planning the future of youth digital inclusion in Scotland. The analysis presented in this section provides a set of theoretical and practical considerations for youth digital inclusion project design and delivery. These considerations cover three areas of digital youth inclusion provision: (1) control and definition of the digital inclusion process; (2) holistic examination of young people’s digital needs, aspirations, and fears; and (3) consideration of young people’s human rights in the digital age. These considerations provide a starting point for the discussion on digital youth inclusion practice in the context of the big data divide. They should not be viewed as guidelines to be strictly applied, but as prompts for a critical reflection among digital youth inclusion researchers, practitioners, and policymakers.

1. Digital youth inclusion provision: control and definition of the process

Young people need to develop digital literacy that will provide them with access to today’s global job market. To address the digital literacy shortage, policymakers, educators, and companies emphasise the importance of digital inclusion programmes for young people (Lloyds Bank, 2018). According to UNESCO, digital literacy is considered a ‘gate’ skill required by employers (Chetty et al., 2017). In Scotland, policymakers argue that “ensuring the population is digitally literate and business needs for digital skills are met is key to driving economic competitiveness and capturing emerging opportunities” (Digital Scotland, 2017b, p. 4). However, the acceleration of digital progress and the associated digital literacy gap have also resulted in a policy-making paradox:

On the one hand, policy-makers ought to facilitate the deployment and adoption of (advanced) communications to avoid the serious disadvantages associated with limited connectivity. On the other hand, increased connectivity aggravates the inequality-increasing dynamics associated with the digital economy (Bauer, 2016, p. 28).

There is a notable amount of collaboration between governments and private digital companies to address the digital skills shortage among young people. For instance, in 2018, Facebook invested £8.8 million to train 10 million people in Europe by 2050 (Fioretti, 2018). Fioretti reports that Facebook’s community hubs aim to offer digital literacy and online safety training to digitally excluded groups, including older people, young people, and refugees. In 2018, the First Minister of Scotland, Nicola Sturgeon, launched the Google Digital ‘educational tour’ around Scotland (FirstMinister.gov.scot, 2018). As indicated by Sturgeon, by providing digital literacy training to communities, this [Google] bus will provide people with the digital literacy and confidence they ‘need to reach their potential’ (FirstMinister.gov.scot, 2018). In the context of formal education, in 2017, one Scottish school was selected to participate in Microsoft’s Flagship School programme, whereby Microsoft’s technology was utilised to develop students’ digital literacies. As stated on the company’s website, “The Microsoft Showcase Schools emphasise personalized learning for their students through the use of 1:1 and 1: many learning devices with current technology such as Windows devices, Azure, Office 365, OneNote, Minecraft: Education Edition, and more” (Microsoft, 2017). Another tech company, Apple, entered a new market across Scotland by providing free iPads in areas including Edinburgh, the Scottish Borders, Perth and Kinross, and Glasgow. Despite the ethical dilemmas (e.g., corporate interest) associated with tech-corporations influencing the Scottish educational system, these collaborations are welcomed by the Scottish education department (Kobie, 2018).

Although digital inclusion initiatives funded and managed by so-called ‘tech-giants’ have proved to be useful for the Scottish education sector, it is crucial to examine the ethical implications of corporate-led digital inclusion interventions. For example, prior research indicates that corporate-supported digital inclusion programmes “do not have a reputation of protecting or informing users who may be targeted by automatable, algorithmically driven processes that predict user behaviour” (Gangadharan, 2017, p. 598). In their review of corporate-supported digital inclusion initiatives, Gangadharan suggests that while these services provide digital literacy training and online access, they fail to provide learning on users’ digital privacy and data collection. Gangadharan argues that corporate-sponsored digital inclusion projects choose to “neglect topics of surveillance and the collection and monitoring of personal information for the purposes of social control” (2017, p. 598). Thus, it might be argued that many of the existing digital inclusion initiatives focus on functional skills rather than on citizens’ critical abilities to examine the socio-political aspects of their digital participation.

It is thus important to consider questions focusing on whose version and/or interpretation of digital inclusion is adopted in a youth setting. When organising a youth digital initiative in the era of the big data divide, it might be useful to consider the following questions: Who defines and controls youth digital inclusion in our project? What are the rules and limitations of the approach taken? How can we ensure that our project does not contribute to the wider problem of the big data divide? How can these challenges be mitigated? If these power dynamics are not addressed, it is possible that, through the implementation of corporate-driven practice, young people become “embedded in information and communication infrastructures regardless of personal choice” (Gangadharan, 2017, p. 601). Thus, learning about internet access and its implications should be positioned within a commercial interest-free environment. Youth digital inclusion should aim not only to create opportunities to join the current digital infrastructures, but to equip young people with the critical skills needed to understand the power structures of the digital world. While the importance of digital literacy in employment should not be underestimated, it might be beneficial to frame digital inclusion within a wider context of digital citizenship, internet governance, and digital human rights.

2. Holistic examination of young people’s digital needs, aspirations, and fears

Young people are often “simultaneously hailed as pioneers of the digital age and feared as its innocent victims” (Livingstone & Third, 2017, p. 658). In the literature, descriptions of young people’s roles in the digital world span a spectrum from co-creators and active agents of digital change to vulnerable and apathetic users. The over-reliance on these two opposing narratives in the context of youth digital inclusion services is critiqued by Helsper (2017). Helsper argues that existing digital youth inclusion studies tend to view digitally-excluded young people as those who are “left out” or under societal pressure to go online. Scholars (Helsper & Reisdorf, 2016; Vartanova & Gladkova, 2019) and digital youth workers in the United Kingdom (Wilson & Grant, 2017) agree that the binary narratives of young people’s relationship with digital technologies are no longer accurate or appropriate.

As indicated by Vartanova and Gladkova, given the multiple aspects of society’s life, there is also more than one digital divide, therefore “the view of the digital divide as a binary distinction between information haves and have-nots is not appropriate” (2019, p. 195). Young people’s motivations for digital participation (or lack of it) are complex, diverse, and most importantly - not static. As Helsper and Reisdorf (2016) observe, the reasons that cause people to disengage from the internet can differ depending on national or cultural contexts, and they can also change over time. Moreover, an individual’s ability to connect and navigate the digital world might be impacted by a variety of personal circumstances. For example, while some choose to join a social media platform due to peer pressure, others take proactive steps towards digital exclusion by removing their digital presence altogether (Kale, 2018). As each young person’s selection of the tools and dimensions for digital interaction is highly individual and contextualised, young people should be offered an array of opportunities to choose digital services that best fulfil their needs, blending together elements from both online and offline sources (Granholm, 2016).

The need for a critical and in-depth examination of young people's digital needs is particularly important in the context of the big data divide. Academic debates examining the big data divide reveal insights into how misinformation, algorithmically driven discrimination, surveillance, privacy, and data profiling might impact young people’s perceptions of the digital world. As argued by Kidron et al. (2018), “the tension between being governed by and devoted to their device is, in part, a result of the persuasive strategies baked into the digital services that children [and young people] use” (2018, p. 13). It is essential to frame young people’s digital inclusion practices within the big data divide to acknowledge that “crucial issues of the digital divide are not just technological – they are social, economic, cultural and political” (Selwyn, 2010, p. 357). Critical analysis of these different influences and contexts should be central to any youth digital inclusion intervention.

In Scotland, young people’s digital literacy levels and learning needs have been examined by digital youth inclusion practitioners (Carnegie UK Trust, 2017; Wilson & Grant, 2017) and policymakers (Youth Link Scotland, n.d.). In 2019, Youth Link Scotland published a learning resource for youth digital inclusion workers. The resource provides information on how to frame digital inclusion not only as practical digital literacy but as a proactive and inquisitive mindset in digital times. As stated on the project's website, “adapting to the digital world is not just about emails, social media and online services, it’s about maximising the opportunities and learning but also minimising the threats and misinformation that affect confidence, motivation and access for everyone" (Youth Link Scotland, n.d.). The importance of a holistic approach towards young people’s digital aspirations has also been outlined by Scottish digital youth inclusion practitioners. For example, practitioners who took part in the youth digital inclusion initiative #NotWithoutMe emphasised that it is essential to test their assumptions about young people’s digital literacy at the beginning of their projects (Carnegie UK Trust, 2017). Nonetheless, a review of digital inclusion and youth digital inclusion in Scotland revealed that most projects’ primary objective is in line with the Essential Digital Skills Strategy (Lloyds Bank, 2019). It is worth noting that while the Essential Digital Skills Strategy provides useful guidelines for digital inclusion practitioners, it reinforces the dualistic view of those who are digitally included (employable and socially included) and those who are digitally excluded (unemployed, socially and economically disadvantaged).

It is argued here that any examination of such needs should be framed within the context of the big data divide as well as that of digital inclusion. To better understand young people’s attitudes towards digital technologies and address their digital needs, analyses of digital youth inclusion interventions should aim to move away from an overly simplified, dualistic division of connected (or ‘digital native’) vs. disconnected users, and passive vs. active users.

3. Digital inclusion and the big data divide: young people’s human rights

It is not enough for young citizens to be merely connected and present online. In the current data-driven society, young people must be able to develop skills to gather and analyse information, develop informed opinions, and share these perspectives with others (Mihailidis & Thevenin, 2013). Critical engagement with digital society requires the ability to use, understand, and create media and communication in a variety of contexts (Ofcom, 2018, p. 1), including the political, cultural, and societal dimensions of data. As argued by Dencik:

The processing of data from across our lives can fundamentally shape social relations, the kinds of information valued and what is ‘knowable’ and therefore acted upon. At the same time, data, and the way it is generated, collected, analysed and used, is a product of an amalgamation of different actors, interests and social forces that shape how and on what terms society is increasingly being datafied (Dencik, Redden, Hintz, & Warne, 2019, p. 873).

The big data divide has an impact not only on citizens’ self-awareness, but also on their entire web of interactions with society. As these interactions are algorithmically driven, they are primarily managed and understood by those who have the appropriate finances, infrastructure, and expertise (Trappel, 2019; McCarthy, 2016; Zuboff, 2019). In the context of the big data divide, the consideration of human rights should be particularly important when working with disadvantaged young people.

Youth social inclusion and participation are viewed globally as a right protected by the Declaration of the Rights of the Child, established in 1959, which served as the basis for the Convention on the Rights of the Child (CRC) adopted by the United Nations in 1989 (UNICEF, 2010). Articles 12-15 are concerned with the specific rights of young people to participate, voice their opinions, freely assemble, and engage in discussions relating to their well-being (McMillan & Simkiss, 2009). Human rights particularly related to youth digital inclusion include the right to privacy (Article 16), access to information from the media (Article 17), the right to freedom of expression (Article 13), the right to freedom of association (Article 15), and protection from exploitation (Article 36) (see the Council of Europe’s 2014 Guide to Human Rights for Internet Users for an overview of these articles). The importance of human rights in the digital age was outlined by the young people involved in the 5 Rights Youth Commission, who argued that: “The offline and online worlds are two equal and intertwined aspects of our lives. Our rights are still our rights whether we are on social media or out on the streets; we are still young people that need support and empowerment whether we are on our smartphone or in the classroom” (5 Rights Youth Commission, 2017, p. 7).

Just like traditional forms of literacy, which are often understood within a rights-based approach, digital literacy and digital inclusion programmes should aim to provide young people with knowledge and skills for informed, conscious, and meaningful digital participation. Thus, learning within digital inclusion programmes should be viewed as “the social emancipatory process of understanding and expressing itself in the world” (Tygel & Kirsch, 2016, p. 3). Critical reflection upon the role of human rights in the digital age should not be viewed as an additional element of digital inclusion and employability programmes, but as a core element of the learning agenda as well as a human and civic right.

Young people’s pro-active engagement with the pre-agreed structures of the data society is essential to protecting civic rights and liberties and enabling active digital citizenship (Hintz et al., 2017). Digital youth inclusion initiatives should aim to support young people’s agency and courage to question and resist autocratic data structures, and to provide “a basic knowledge of the political economy of digital platforms” (Pangrazio & Selwyn, 2019, p. 432). In the context of youth digital inclusion, young people should be viewed as pro-active and curious individuals, who have the ability to critique and question existing digital structures.

Conclusion

The aim of this article was to examine the emerging challenges associated with digital youth inclusion and the big data divide, and to suggest some critical considerations for digital youth inclusion practitioners. The analysis presented here was based on the scholarly discussion on digital youth participation (Eynon & Geniets, 2016; Helsper, 2017; Livingstone & Third, 2017), digital inclusion (Gangadharan, 2017; Livingstone & Helsper, 2007; Scottish Government, 2017) and big data divide (Andrejevic, 2014; McCarthy, 2016). While the literature analysis was framed within a wider, international context, the discussion presented here is primarily situated within the Scottish one. Based on this analysis, I propose a set of theoretical and practical considerations for the design and delivery of digital youth inclusion projects. These considerations focus on three areas of digital youth inclusion provision: (1) control and definition of the process, (2) holistic examination of young people’s digital needs, aspirations, and fears, and (3) consideration of young people’s human rights in the digital arena. These considerations provide a starting point for the discussions on digital youth inclusion practices in the context of the big data divide.

To an extent, these considerations might also translate into practical implications for both youth digital inclusion practitioners and policymakers. For example, for digital inclusion initiatives to be meaningful to young people, it might be beneficial to directly involve them in their design process. In a practical youth digital inclusion workshop setting, questions such as ‘What does it mean to be digitally included as a young person?’; ‘Who decides if a young person is digitally included or not?’; and ‘Who controls our digital inclusion process and what do we know about them?’ might be used to prompt critical discussion and provide a sense of ownership among young project participants. Similar youth-centred approaches might also be implemented in the context of policy-making. As evidenced by the Scottish government’s (2017) collaboration with the young people from the 5 Rights Commission (2017), young people’s involvement in digital policy design can provide important insights into their views on how to define, control, and manage youth digital inclusion. However, I argue that to holistically understand and address the continually evolving challenges associated with digital inclusion, policymakers should extend these efforts to collaborate with a wider range of young people of different abilities, cultural backgrounds, and socio-economic statuses. These policy-making efforts should also include a critical analysis of corporate-led youth digital inclusion programmes, their social impact (both individual and collective) and their potential influence on the big data divide.

To understand young people’s digital literacy levels, questions about the aims and objectives of digital inclusion should be explored at the beginning of a project. Pre-existing frameworks (e.g., Essential Digital Skills) provide an important structure for digital inclusion project design and facilitation; however, these should be extended by a holistic analysis of young people’s digital needs, aspirations, and fears. To achieve this, practitioners might consider using youth-centred, participatory methods for critical reflection. Examples of methods might include gaming, graffiti and comics-making (Digital Youth Work Project, 2020). Such methods might also help when exploring the socio-economic structures of the digital society, the big data divide, and their impact on young people’s human rights. For resources and examples of good practice, digital youth inclusion practitioners might refer to the resources produced by the 5 Rights Framework (5 Rights Youth Commission, 2017), the European Digital Youth Work Network (Digital Youth Work Project, 2020), and ‘My Data and Privacy Online. A toolkit for young people’ (London School of Economics and Political Science, 2020).

However, it is important to note some of the challenges associated with the practical implementation of the youth digital inclusion recommendations proposed in this article. First, it is possible that their implementation might require extra time and resources. Many youth digital inclusion programmes take place within the Scottish youth community education sector, which has been severely underfunded in the last decade. In light of these funding cuts and an increasing need to provide evidence of impact to funders, many youth digital inclusion practitioners have no choice but to prioritise functional and easily quantifiable skills over critical thinking (Pawluczuk et al., 2019). To address this challenge, digital youth practitioners should be provided with appropriate, commercial interest-free support to re-evaluate their practice and, where possible, implement some of these considerations in their work.

While this article provides some considerations for future youth digital inclusion practice, it has also raised a number of issues and questions which require further research. First, there is limited knowledge as to whether (and to what extent) youth digital inclusion exacerbates the problem of the big data divide. A review of youth digital inclusion strategies, their implementations, and evaluations could provide important information about project participants’ learning experiences and their related outcomes. Valuable insights could also be gained by working alongside digital inclusion practitioners and/or young project participants. In this case, participatory methods, observation, and interviews could allow for an in-depth analysis of participants’ perceptions of their digital inclusion.

Another problem is the lack of consensus on what it means for youth digital inclusion programmes to be effective, critical, and ethical. As many youth digital inclusion programmes are driven by tech-companies’ terms and conditions, it is unclear who decides what it means to be a digitally included young person. Thus, questions related to the value and interpretation of youth digital inclusion impact should be explored in future research (Pawluczuk et al., 2019). To this end, researchers might consider examining both young people’s and digital inclusion workers’ perspectives. Finally, more research is needed to understand the feasibility of practical implementation of critical digital literacy (Polizzi, 2019) into youth digital inclusion projects. Have functional digital literacy skills been prioritised in youth digital inclusion projects? What are some of the challenges and opportunities associated with teaching critical digital literacy skills alongside functional digital literacy skills in a youth digital inclusion setting? These are some of the questions that can be explored in future research.

To conclude, although this article uses Scotland as a case study, the considerations presented here might be useful in other geographical and cultural contexts. I argue that for digital inclusion efforts to be truly empowering, young people’s human rights should be central to any digital inclusion programmes. To this end, young people should not be viewed as passive receivers of digital literacy educational programmes, but as pro-active and critical digital citizens and rights-holders. Therefore, to work towards a more inclusive, ethical, and equal digital society, young people’s voices should be central to digital inclusion research, practice, and policy intervention.

This article has a number of limitations with regard to theory and its practical application. First, it is important to note that my analysis is based on a section of available literature and is not reflective of all youth digital inclusion policy-making efforts and programmes in Scotland. Second, the youth digital inclusion practice considerations presented here are situated within the Scottish context and thus might not be entirely applicable to different populations, as well as geographical and cultural contexts.

References

5Rights Youth Commission. (2017). Our Digital Rights: How Scotland can realise the rights of children and young people in the digital world. http://respectme.org.uk/wp-content/uploads/2017/06/Five_Rights_Report_2017_May-4.pdf

Akom, A., Shah, A., Nakai, A., & Cruz, T. (2016). Youth Participatory Action Research (YPAR) 2.0: how technological innovation and digital organizing sparked a food revolution in East Oakland. International Journal of Qualitative Studies in Education, 29(10), 1287–1307. https://doi.org/10.1080/09518398.2016.1201609

Anderson, M., & Jiang, J. (2018). Teens, social media & technology 2018. Pew Research Center.

Andrejevic, M. (2014). The big data divide. International Journal of Communication, 8(1), 1673–1689.

Bauer, J. M. (2016). Inequality in the information society [Working paper]. Quello Center, Michigan State University. https://doi.org/10.2139/ssrn.2813671

Carnegie UK Trust. (2017). #NotWithoutMe Active Discussion Notes. https://padlet.com/CUKT/notwithoutmeactivenotes

Citadel Youth. (2019). Citadel Youth Centre: Community based youth work in Leith since 1980. https://citadelyouthcentre.org.uk/

Council of Europe. (2014). Guide to Human Rights for Internet Users. https://rm.coe.int/CoERMPublicCommonSearchServices/DisplayDCTMContent?documentId=09000016804d5b31

Crawford, N. (2019). Girlguiding Paisley and YMCA join forces to get girls coding. Girlguiding Renfrewshire. https://www.girlguidingrenfrewshire.org.uk/news/paisley-news/95-girlguiding-scotland-and-ymca-join-forces-to-get-girls-coding

de St Croix, T. (2018). Youth work, performativity and the new youth impact agenda: getting paid for numbers? Journal of Education Policy, 33(3), 414–438. https://doi.org/10.1080/02680939.2017.1372637

Dencik, L., Redden, J., Hintz, A., & Warne, H. (2019). The ‘golden view’: data-driven governance in the scoring society. Internet Policy Review, 8(2), 1–24. https://doi.org/10.14763/2019.2.1413

Digital Scotland. (2017a). Realising Scotland’s full potential in a digital world: a digital strategy for Scotland. Scottish Government. https://www.gov.scot/publications/realising-scotlands-full-potential-digital-world-digital-strategy-scotland/

Digital Scotland. (2017b). Scotland’s Digital Strategy: Evidence Discussion Paper. Scottish Government. https://www.gov.scot/publications/scotlands-digital-strategy-evidence-discussion-paper/

Digital Youth Work Project. (2020). Good Practice. https://www.digitalyouthwork.eu/good-practices/

European Commission. (2018). Developing digital youth work: Policy recommendations, training needs and good practice examples. Publications Office of the European Union. https://doi.org/10.2766/782183

Eynon, R., & Geniets, A. (2016). The digital skills paradox: how do digitally excluded youth develop skills to use the internet? Learning, Media and Technology, 41(3), 463–479. https://doi.org/10.1080/17439884.2014.1002845

FirstMinister.gov.scot. (2018). Google Digital Garage launch [Press release]. https://firstminister.gov.scot/google-digital-garage-launch/

Fitton, D., Little, L., & Bell, B. T. (2016). Introduction: HCI Reaches Adolescence. In D. Fitton, L. Little, B. T. Bell, & N. Toth (Eds.), Perspectives on HCI Research with Teenagers (Human–Computer Interaction Series) (pp. 1–9). Springer. https://doi.org/10.1007/978-3-319-33450-9

Gangadharan, S. P. (2017). The downside of digital inclusion: Expectations and experiences of privacy and surveillance among marginal Internet users. New Media and Society, 19(4), 597–615. https://doi.org/10.1177/1461444815614053

Gangneux, J. (2019). Logged in or locked in? Young adults’ negotiations of social media platforms and their features. Journal of Youth Studies, 22(8), 1053–1067. https://doi.org/10.1080/13676261.2018.1562539

Granholm, C. P. (2016). Social work in digital transfer–blending services for the next generation [Doctoral dissertation]. University of Helsinki. https://researchportal.helsinki.fi/en/publications/social-work-in-digital-transfer-blending-services-for-the-next-ge

Harvey, C. (2016). Using ICT, digital and social media in youth work [Report]. National Youth Council of Ireland. https://www.youth.ie/wp-content/uploads/2019/03/International-report-final.pdf

Helsper, E. J. (2017). A socio-digital ecology approach to understanding digital inequalities among young people. Journal of Children and Media, 11(2), 256–260. https://doi.org/10.1080/17482798.2017.1306370

Hautea, S., Dasgupta, S., & Hill, B. M. (2017, May). Youth perspectives on critical data literacies. Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, 919–930. https://doi.org/10.1145/3025453.3025823

Hyder, N. (2016). Evaluation of TTS.Digital [Report]. Research Scotland. https://www.creativescotland.com/resources/our-publications/plans-and-strategy-documents/national-youth-arts-strategy/evaluation-of-the-implementation-of-time-to-shine

Ito, M., Soep, E., Kligler-Vilenchik, N., Shresthova, S., Gamber-Thompson, L., & Zimmerman, A. (2015). Learning connected civics: Narratives, practices, infrastructures. Curriculum Inquiry, 45(1), 10–29. https://doi.org/10.1080/03626784.2014.995063

ITU. (2019). ITU-D Digital Inclusion. https://www.itu.int/en/ITU-D/Digital-Inclusion/Pages/default.aspx

Kobie, N. (2018, January 24). Cynical about Apple’s move into UK schools? Well, it turns out they need all the help they can get. Wired. https://www.wired.co.uk/article/apple-everyone-can-code-teaching-schools-uk-ipad

LGBT Youth Scotland. (2020). Digital Support. https://www.lgbtyouth.org.uk/groups-and-support/digital-support/

Livingstone, S., & Helsper, E. (2007). Gradations in digital inclusion: Children, young people and the digital divide. New Media and Society, 9(4), 671–696. https://doi.org/10.1177/1461444807080335

Livingstone, S., & Third, A. (2017). Children and young people’s rights in the digital age: An emerging agenda. New Media and Society, 19(5), 657–670. https://doi.org/10.1177/1461444816686318

London School of Economics and Political Science. (2020). My Data and Privacy Online. A toolkit for young people. http://www.lse.ac.uk/my-privacy-uk

Lloyds Bank. (2018). UK Consumer Digital Index 2018 [Report]. https://www.lloydsbank.com/assets/media/pdfs/banking_with_us/whats-happening/lb-consumer-digital-index-2019-report.pdf

McCarthy, M. T. (2016). The big data divide and its consequences. Sociology Compass, 10(12), 1131–1140. https://doi.org/10.1111/soc4.12436

McGillivray, D., McPherson, G., Jones, J., & McCandlish, A. (2016). Young people, digital media making and critical digital citizenship. Leisure Studies, 35(6), 724–738. https://doi.org/10.1080/02614367.2015.1062041

McMillan, A. S., & Simkiss, D. (2009). The United Nations Convention on the Rights of the Child and HIV/AIDS. Journal of Tropical Pediatrics, 55(2), 71–72. https://doi.org/10.1093/tropej/fmp024

Mihailidis, P., & Thevenin, B. (2013). Media Literacy as a Core Competency for Engaged Citizenship in Participatory Democracy. American Behavioral Scientist, 57(11), 1611–1622. https://doi.org/10.1177/0002764213489015

Ofcom. (2018). Adults’ media use and attitudes report [Report]. https://www.ofcom.org.uk/__data/assets/pdf_file/0011/113222/Adults-Media-Use-and-Attitudes-Report-2018.pdf

Pangrazio, L., & Selwyn, N. (2019). ‘Personal data literacies’: A critical literacies approach to enhancing understandings of personal digital data. New Media & Society, 21(2), 419–437. https://doi.org/10.1177/1461444818799523

Pawluczuk, A., Webster, G., Smith, C., & Hall, H. (2019). The Social Impact of Digital Youth Work: What Are We Looking For? Media and Communication, 7(2), 59–68. https://doi.org/10.17645/mac.v7i2.1907

Polizzi, G. (2019). Information literacy in the digital age: why critical digital literacy matters for democracy. In S. Goldstein (Ed.), Informed societies: why information literacy matters for citizenship, participation and democracy (pp. 1–23). Facet Publishing.

Porat, E., Blau, I., & Barak, A. (2018). Measuring digital literacies: Junior high-school students’ perceived competencies versus actual performance. Computers and Education, 126, 23–36. https://doi.org/10.1016/j.compedu.2018.06.030

Prensky, M. (2009). H. sapiens digital: From digital immigrants and digital natives to digital wisdom. Innovate: Journal of Online Education, 5(3). https://nsuworks.nova.edu/innovate/vol5/iss3/1/

Selwyn, N. (2010). Schools and schooling in the digital age: A critical analysis. Routledge.

Scottish Government. (2017). Scottish household survey 2016: annual report [Report]. Scottish Household Survey Project Team, Scottish Government. https://www.gov.scot/publications/scotlands-people-annual-report-results-2016-scottish-household-survey/

Duncan, P. (2016). Mobile Children, Young People and Technology Project: An Exploratory Study of Mobile Cultures’ Use of Digital Technology and New Media for Living and Learning. STEP: Centre for Mobile Cultures and Education.

Trappel, J. (2019). Inequality, (new) media and communications. In J. Trappel (Ed.), Digital Media Inequalities: Policies against divides, distrust and discrimination (pp. 9–30). Nordicom.

van Deursen, A. J. A. M., & Helsper, E. J. (2018). Collateral benefits of Internet use: Explaining the diverse outcomes of engaging with the Internet. New Media and Society, 20(7), 2333–2351. https://doi.org/10.1177/1461444817715282

Vartanova, E., & Gladkova, A. (2019). New forms of the digital divide. In J. Trappel (Ed.), Digital Media Inequalities: Policies against divides, distrust and discrimination (pp. 193–213). Nordicom.

Weston, A., Lumley, T., & Curvers, S. (2018). My best life: priorities for digital technology in the youth sector [Report]. NPC. https://www.thinknpc.org/resource-hub/my-best-life-priorities-for-digital-technology-in-the-youth-sector/

Wilson, G., & Grant, A. (2017). A digital world for all? Findings from a programme of digital inclusion for vulnerable young people across the UK [Report]. Carnegie UK Trust. https://d1ssu070pg2v9i.cloudfront.net/pex/carnegie_uk_trust/2017/10/NotWithoutMe-2.pdf

Youth Link Scotland. (2020). Digital Youth Work Network. https://www.youthlinkscotland.org/about-us/our-networks/digital-youth-network/

Youth Link Scotland. (n.d.). Safe, Secure and Empowered. https://www.youthlinkscotland.org/develop/developing-knowledge/digital-youth-work/safe-secure-empowered

Young Scot. (2019). Young Scot (Version 2.3.5) [Mobile app]. Google Play Store. https://play.google.com/store/apps/details?id=com.stormid.youngscot&hl=en_GB

Footnotes

1. The Office for National Statistics in the United Kingdom defines recent internet users as adults who have used the internet within the last three months.

2. In 2018, Ofcom defined superfast internet connections as those with speeds of 30Mbit/s or higher and less than 300Mbit/s.
