
The geopolitics of ‘platforms’: the TikTok challenge


Introduction

The short form video app TikTok is the first social media ‘platform’ born outside the US to significantly rival the Silicon Valley incumbents. Since its rise in the short video economy, TikTok has come under intense criticism from governments around the world, resulting in outright bans in some jurisdictions (Press Information Bureau Delhi, 2020). Lawmakers have questioned whether ByteDance, the company that owns TikTok, sufficiently protects user data against access by the Chinese state. Yet, TikTok’s data practices are not dissimilar to those of its US counterparts (Fowler, 2020) and the controversy over TikTok’s rise in the US cannot be explained by an analysis of the company’s technology, policies or practices alone. This article provides a geopolitical analysis of the political and economic contestations over TikTok that played out at the national level between the US and China from April to August 2020.

Applying a geopolitical lens to the political rhetoric and actions relating to TikTok improves understanding of the core issues that have animated this platform controversy. For at least two decades, the US has dominated the international digital platform market. This dominance falls within a broader geopolitical system of US hegemony that has defined the liberal world order since the end of the Cold War. Through strategic partnerships and alliances with nations around the world, the US has enjoyed great economic and political power. China’s economic growth destabilises a world order that centres upon US hegemony. What the TikTok controversy shows is the extent to which this geopolitical setting is affecting the politics of platforms today. It is important to identify and isolate pertinent geopolitical motivations because they can work to obscure other factors relevant to platform politics, such as the value of competition in a highly concentrated international platform market.

In most sectors, market concentration is problematic but it is particularly so in an information economy. Digital platforms are information gatekeepers, with the capacity to influence social conditions by determining the ideas and information that are shared and amplified across vast socio-technical systems. In the international platform market, a handful of US companies enjoy immense cultural, economic and political power derived from their ownership and control over platform infrastructure and data. TikTok has provided competition to these US companies and the global success of TikTok confirms that users will adopt innovative new platforms when they are made available to them, regardless of their geographic origins. If consumers are not locked in to the incumbents, there is potential for increased competition to lead to a dilution of concentrated power in the international platform market. But this requires overcoming conventional geopolitical agendas and eschewing both US and Chinese hegemony.

1. Platforms and geopolitics: adventures in US hegemony

The dominance of the digital environment by Silicon Valley technology companies presents a range of social, political and economic problems, many of which are compounded by their oligopolistic status. It is undemocratic and anticompetitive to have a small number of companies own and control the systems by which we communicate, socialise and transact (Gray, 2020). The incumbent US platforms, including Facebook, Instagram, YouTube and Twitter, are also dysfunctional in their democratic role. To varying degrees, they spread and amplify misinformation and hate speech (Donzelli et al., 2018; Matamoros-Fernández, 2017), they generate and exploit data at the expense of user privacy (Burdon, 2020; Kitchin & Lauriault, 2014), they embed unfair and harmful biases into their algorithms and technical infrastructure (Maayan & Elkin-Koren, 2016; Pasquale, 2015), they fail to ensure the security of user data (Mann & Daly, 2020), and they offer limited mechanisms for transparency and accountability (Suzor et al., 2019). Two decades into the new millennium, the politics of ‘platforms’ (Gillespie, 2010) is a highly wrought sociopolitical affair.

Lawmakers around the world are increasingly taking action to curtail the reach and influence of the dominant US platforms. From the European Union’s General Data Protection Regulation (Lynskey, 2017), to the Australian Competition and Consumer Commission’s investigation of the impact of Facebook and Google on competition in media and advertising (Australian Competition & Consumer Commission, 2017), to the 2020 US congressional hearings where the heads of Google, Apple, Amazon and Facebook were questioned over the potential use of their market power to stifle competition (Romm, 2020), these lawmakers are questioning whether the activities of the dominant US platforms are legal, legitimate or fair. Given this backdrop, the open hostility evident at the level of international politics towards TikTok, a competitor to the incumbent US platforms, might appear illogical. Properly understanding the logic that underpins the political actions and rhetoric over TikTok requires that they be placed within a geopolitical context.

To systematically establish the geopolitics of the TikTok controversy, this study involves a qualitative content analysis of US and Chinese government documents issued between April and August 2020 (a period in which the Trump administration actively pursued the regulation of TikTok), as well as relevant corporate sources. Using keyword searches of government websites and databases, 27 documents were collected in which statements were made about TikTok by US and Chinese state officials. US sources included documents published by whitehouse.gov, specifically, transcripts of three White House briefing statements made by the US Press Secretary, one briefing statement published by the National Security Council, transcripts of eight briefing statements by President Trump, three Executive Orders by President Trump along with three related letters to the Speaker of the House and President of the Senate, and a press statement by the US Department of State authored by Mike Pompeo. Statements made by Mike Pompeo in an interview with Fox News were also reviewed. Chinese sources included eight English-language transcripts of press conferences published by the Ministry of Foreign Affairs of the People’s Republic of China: three held by Spokesperson Zhao Lijian and five by Spokesperson Wang Wenbin. Together, these sources contain the political positions and responses of US and Chinese state actors during this period and, notably, they present highly consistent positions on TikTok for both states. Corporate documents included fourteen official statements released by TikTok executives and one official statement by Microsoft Inc.
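As a minimal illustration of this document collection step, the sketch below filters a folder of transcripts by keyword. The folder name, file format and keyword list are hypothetical assumptions for illustration and do not reproduce the study's actual tooling.

```python
from pathlib import Path

# Hypothetical keywords used to flag documents that mention TikTok or its owner.
KEYWORDS = ("tiktok", "bytedance")

def collect_relevant_documents(corpus_dir: str) -> list[Path]:
    """Return the transcripts in corpus_dir that mention any keyword."""
    relevant = []
    for path in sorted(Path(corpus_dir).glob("*.txt")):
        text = path.read_text(encoding="utf-8", errors="ignore").lower()
        if any(keyword in text for keyword in KEYWORDS):
            relevant.append(path)
    return relevant

if __name__ == "__main__":
    # Example: a folder holding White House and Foreign Ministry press transcripts.
    documents = collect_relevant_documents("transcripts")
    print(f"{len(documents)} documents mention TikTok or ByteDance")
```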

In accordance with a classical geopolitical approach, these sources are used to establish an objective account of a specific geopolitical context. Classical geopolitics, unlike critical geopolitics (Toal et al., 1998), is characterised by the modernist epistemological and ontological assumption that a common geopolitical reality can be observed through ‘historical example, logic, common sense, visualisation, statistical analysis and rational choice’ (Kelly, 2006, p. 16). A limitation of this approach is that it requires that we accept that there is an objective geopolitical context to be established in the first place. Conducting a critical geopolitical analysis of the TikTok controversy, including an examination of how geopolitical knowledge is reproduced and reinforced in this controversy, is a topic worthy of future study and the research provided in this paper should be helpful in that pursuit. However, for the purposes of this research, which is to understand the impact of the geopolitical contest between China and the US on platform politics, a classical geopolitical approach is effective. Through a classical geopolitical analysis we can improve our understanding of how state territoriality and changes in the global distribution of economic and military power are playing out in the digital environment.

The classical model of geopolitics suggests the actions and policies of a state are influenced by that state’s geographical conditions (Lee, 2018). A state will have largely stable geographical conditions (Flint, 2016)—including its location, position in a region, resources, topography, climate, size and shape (Kelly, 2006)—and these conditions will influence how the state approaches international affairs. Geopolitical theorists recognise that historically US foreign policy has been influenced by its geographical isolation from other economic and military powers (Kelly, 2006). Unlike many European nations, for example, the US is geographically isolated, and US leaders have sought to use this position of relative isolation as insulation from national security threats. In broad terms, the US has managed its strategic interests through ‘offshore balancing’ (Layne, 1997). By aligning with certain ‘offshore’ nations, the US takes action to influence the global distribution of power, all while sustaining a position of isolation from the threat of military incursion at home. As Kelly explains, the US has had

a rather permanent security strategy of maintaining a favorable balance of power within the rimlands of the Eurasian continent, enabled by its marine strength and by bases in certain pivotal areas (Western Europe, Persian Gulf, and Korea/Japan). Its allies and opponents might vary from time to time; yet, North America will continue unrelentingly towards this secure rimlands position framed within its advantages of great distances and isolation from likely foes, no matter what other global and regional transformations may appear (Kelly, 2006, p. 16).

Regardless of swings in specific foreign policy approaches—from ‘US First’ to US as primus inter pares to US exceptionalism and anti-imperialism (O’Loughlin, 1999)—a notion of protection through geographical isolation has long informed the US approach to its international affairs. For the US, the merits of this approach are evidenced by the global hegemony it has enjoyed. Through strategic alliances and partnerships that have offset military threats and advanced the US economy, the US has, for several decades, sustained a position of global economic and military dominance (Alcaro, 2018).

The rise of China in the global world order represents a threat to US hegemony (Alcaro, 2018). This is, of course, a simplification of a complex relationship that also features ‘pathological codependency’ due in significant part to enduring macroeconomic imbalances resulting from China's foreign exchange reserves of US currency (Roach, 2014). Nevertheless, the relationship between the US and China is evolving and China’s growth can be viewed as destabilising to longstanding US geopolitical strategy. In geopolitics, military and economic capacity are key indices of power (Flint, 2016). But geopolitical power is also relational; military and economic capacities “only have an effect when two actors form a power relation” (Flint, 2016, p.16), whether that be a relationship of alliance or competition. In recent years, China has greatly improved its economic and military capacities resulting in increasingly competitive power relations between the US and China (Xuetong, 2020). While geopolitical relations between the two nations are complex, China’s continuing accumulation of power challenges a world order that is shaped to best serve US interests.

The TikTok controversy falls within an intensifying contest between the US and China over the strategic value of the digital environment (Cartwright, 2020). The geopolitics of the digital environment, and of platforms more specifically, involves contestation over who gets to extract economic value from the platform economy (Cartwright, 2020); who gets to set laws and norms for and exert ideological influence through vast sociotechnical systems (DeNardis & Hackl, 2015; Tusikov, 2019); and who enjoys the strategic political power derived from control over or access to digital data and infrastructure (Mann & Warren, 2018). Up to this point, this realm of geopolitics has largely mirrored that of the physical environment—the US has dominated economically and culturally across large regions of the digital environment (Gray, 2020). Recently, however, Chinese technology firms have flourished (Hong, 2017), expanding China’s economic and strategic capacities and sparking competitive tensions with the US (Cartwright, 2020). As the economic value of the digital environment continues to grow, we can expect to see more contests between nations seeking to extract value and exert influence in the digital environment; and identifying the geopolitical underpinnings of ensuing platform controversies is important for an effective evaluation of the related legal or public policy interventions. As Sicker explains, viewing “the world in which we live from a geopolitical and geostrategic perspective is important...because it can help make clear what the real stakes are in such issues and thus enable the public to assess the choices taken by the government in its name” (Sicker, 2010, pp. 17–18). By placing the recent contestation over TikTok within its geopolitical context, we improve understanding of key political motivations driving this platform controversy and the resulting policy responses.

2. The geopolitics of TikTok’s controversial rise in the US

In the short form video market, while apps such as Marco Polo and House Party have provided alternatives to market leaders Instagram, Snapchat and Facebook, only TikTok has managed to achieve a level of adoption high enough to pose an actual competitive threat to the market incumbents. TikTok rose to prominence in 2019 and, by early 2020, it was the most downloaded app globally (Williams, 2020), surpassing 2 billion downloads in the Google Play and App Store combined (Moshin, 2020). In 2021, TikTok is estimated to have over 700 million active monthly users worldwide (Datareportal, 2021).

TikTok is the product of technology developer ByteDance Ltd, a company born in China but incorporated in the Cayman Islands and notable for not being part of the three tech giants in China—Baidu, Alibaba and Tencent. In 2016, ByteDance brought its short video platform to the Chinese market as Douyin. A year later, the company acquired Musical.ly—a lip-syncing platform, also created in China, which had some limited success among teenagers in the US (Spangler, 2016). In 2018, ByteDance integrated Musical.ly with Douyin technology to produce TikTok, a product designed specifically for a global audience (Jia & Ruan, 2020). TikTok hosts audiovisual works between 15 and 60 seconds long, algorithmically curated for audiences, with features designed to spur user-generated content and virality (Kaye et al., in press).

TikTok’s success is due in significant part to its innovative recommendation system (Chan, 2018). TikTok’s machine learning-enabled recommendation system does not require users to follow creators or to explicitly opt in to certain types of content. Instead, the platform decides what to serve its users as they swipe through a never-ending stream of short videos (TikTok Newsroom, 2020a). To determine which videos it serves each user, TikTok reportedly uses three key algorithms: a recommendation algorithm, a content classification algorithm, and a user profiling algorithm (C. Wang, 2020). The recommendation algorithm reportedly uses real-time training and learns from features such as correlation between content and user information, user behaviour and trends (C. Wang, 2020). TikTok also gives every video that passes an initial screening process exposure to at least 200 users, giving a wide range of content creators an opportunity for virality while also providing TikTok with vast troves of user and video data that it can feed into its algorithms. With TikTok, ByteDance has successfully harnessed machine learning systems to provide an innovative short video platform experience.
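To make the shape of such a system concrete, the following is a minimal sketch of a score-and-rank recommendation step that combines affinity, watch-behaviour and trend signals and gives newly screened videos an initial exposure window. The feature names, weights and boost value are illustrative assumptions; ByteDance's actual algorithms are not public beyond the reports cited above.

```python
from dataclasses import dataclass

@dataclass
class Video:
    video_id: str
    topic_affinity: float    # similarity between the video's topics and the user's profile (0..1)
    predicted_watch: float   # predicted completion rate based on past behaviour (0..1)
    trend_score: float       # how strongly the video is currently trending (0..1)
    view_count: int

# Illustrative weights; a production system would learn these from engagement data.
WEIGHTS = {"topic_affinity": 0.5, "predicted_watch": 0.35, "trend_score": 0.15}
COLD_START_VIEWS = 200  # guaranteed exposure window for newly screened videos

def score(video: Video) -> float:
    base = (WEIGHTS["topic_affinity"] * video.topic_affinity
            + WEIGHTS["predicted_watch"] * video.predicted_watch
            + WEIGHTS["trend_score"] * video.trend_score)
    # Boost videos still inside their initial exposure window so that every
    # upload passing screening reaches at least a small audience.
    if video.view_count < COLD_START_VIEWS:
        base += 0.2
    return base

def next_videos(candidates: list[Video], n: int = 10) -> list[Video]:
    """Rank candidate videos for a user's feed by descending score."""
    return sorted(candidates, key=score, reverse=True)[:n]
```

In the reported system, such weightings would be updated continuously through real-time training on engagement data rather than fixed by hand.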

Despite its large and enthusiastic global user base, TikTok has been the subject of intense criticism and speculation about the ethics of the platform, its objectives and its sociocultural implications. The platform has been accused of excessive data extraction and analysis practices, including unnecessarily copying data from users’ phones (Al-Heeti, 2020) and collecting information that may be used to identify and track the location of users (Misty Hong v Bytedance Inc, 2019). TikTok has also been subject to claims that it hosts harmful content; for example, in July 2020, the BBC reported that extremist libertarian groups in the US had a substantial presence on the platform (Clayton, 2020). At the same time, others argue TikTok unfairly exploits user content and copyrighted works without sufficient economic return for creators and intellectual property owners (Alexander, 2020). Yet, most of the issues at the heart of these criticisms are not unique to TikTok or new to the short form video economy. They are reflective of the well established politics of platforms, which feature controversies over data security and user privacy (Isaak & Hanna, 2018), content moderation (Gillespie, 2018), and the regulation of speech (Balkin, 2017). YouTube, Facebook, Google, Twitter, Instagram and other US platforms have been subject to criticisms and controversies similar to those surrounding TikTok (Suzor, 2019). What the TikTok controversy reveals is the extent to which the politics of platforms is immersed in geopolitical tensions between the US and China.

The geopolitical tensions underpinning the TikTok controversy are reflected most strongly in the debates over TikTok’s national security implications. On 24 June 2020, national security advisor to the US government, Robert O’Brien, spoke at the Arizona Commerce Authority on the topic of the Chinese government’s “ideological and global ambitions” (O’Brien, 2020). In his speech, O’Brien warned that China posed a threat to US citizens and he directly implicated TikTok in his critique:

On TikTok, a Chinese-owned social media platform with over 40 million American users—probably a lot of your kids and younger colleagues–accounts criticizing CCP policies are routinely removed or deleted…When the Chinese Communist Party cannot buy your data, it steals it…How will the Chinese Communist Party use this data? In the same way it uses data within China’s borders: to target, to flatter, to cajole, to influence, to coerce, and to even blackmail individuals to say and do things that serve the Party’s interests (O’Brien, 2020, para. 22).

While the Chinese government has been known to influence the content available on Douyin, the evidence of similar influence on TikTok is limited. For example, a 2020 comparative study of the two ByteDance short form video apps found the Chinese government used Douyin to promote Chinese patriotism but there was no corresponding evidence on TikTok (Chen et al., 2020). When ByteDance first began releasing TikTok into markets outside of China in 2017, the platform’s content moderation guidelines were aimed at limiting the circulation of all highly controversial materials on the platform. In some cases, this resulted in censorship of content unfavourable to the Chinese state, including videos relating to Tibetan independence and the treatment of Uighurs in Xinjiang (Hern, 2019). But TikTok’s approach to content moderation has evolved (TikTok Newsroom, 2019), and a range of political activists are now prominent on the platform (Andrews, 2020). While TikTok is often praised for facilitating access to diverse creators and content (see, e.g., Yan, 2020), it also continues to be criticised for algorithmically suppressing political videos, such as those from Black Lives Matter activists (McCluskey, 2020). Currently, there is a lack of robust evidence for measuring and evaluating the extent to which TikTok’s algorithms meaningfully promote politically and socially diverse content, but what does seem clear from the available evidence (e.g., Chen et al., 2020; Zhang, 2020) is that TikTok is not subject to Chinese state influence to the same extent as is Douyin.

The lack of a robust evidence-base notwithstanding, in his speech, O’Brien was explicit about the ideological and security implications of TikTok, and its relationship to US geopolitical strategy:

President Trump understands that lasting peace comes through strength … The Trump Administration will speak out and reveal what the Chinese Communist Party believes, and what it is planning—not just for China and Hong Kong and Taiwan, but for the world…Together with our allies and partners, we will resist the Chinese Communist Party’s efforts to manipulate our people and our governments, damage our economies, and undermine our sovereignty. (O’Brien, 2020, para. 47).

On 6 July 2020, US Secretary of State Mike Pompeo confirmed the Trump administration was considering banning TikTok on national security grounds (Bella, 2020). Pompeo suggested that TikTok might pose a national security threat if ByteDance were compelled to provide information about US citizens to the Chinese government, and implied that TikTok should be treated similarly to Huawei, the Chinese telecommunications company that is effectively banned in the US (Keane, 2020). Evidently, it was the Trump administration’s position that TikTok is not simply a platform for connection and entertainment but a tool the Chinese state might wield for strategic security and ideological influence within the US. In other words, TikTok might be used to further empower China in geopolitical relations between the two states.

TikTok’s data policies and practices provide no indication that the platform poses any singular national security threat and compared to other popular Chinese mobile applications it has higher standards of user data privacy protections (Jia & Ruan, 2020). The company also has a formal policy to deal with requests for user information made by governments. TikTok’s policy provides that TikTok will honour requests for user information that are made through “proper channels and where otherwise required by law” or “in limited emergency situations...without legal process” to prevent deaths or serious injury (TikTok Safety Centre, 2020a, Compliance with government requests section). TikTok’s law enforcement guidelines further clarify that proper process includes providing “the appropriate legal documents required for the type of information being sought, such as a subpoena, court order, or warrant, or submit an emergency request” (TikTok Law Enforcement, 2020, TikTok’s policy on responding to law enforcement requests section). Both Facebook and Google have similar policies for handling requests for user information made by governments (Facebook, 2019; Google Inc., 2020).
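Read as a decision procedure, the published policy reduces to a simple rule: disclose when an appropriate legal instrument accompanies the request, or when a genuine emergency involving risk of death or serious injury is claimed. The sketch below paraphrases that rule; the request fields and function are hypothetical and do not represent TikTok's internal systems.

```python
from dataclasses import dataclass

# Legal instruments treated as proper process in the published guidelines.
VALID_LEGAL_PROCESS = {"subpoena", "court order", "warrant"}

@dataclass
class GovernmentRequest:
    requesting_authority: str
    legal_process: str | None  # e.g. "subpoena", "court order", "warrant", or None
    is_emergency: bool         # claimed risk of death or serious physical injury

def should_disclose(request: GovernmentRequest) -> bool:
    """Illustrative paraphrase of the policy on government requests for user data."""
    # Emergency disclosures may proceed without legal process.
    if request.is_emergency:
        return True
    # Otherwise, disclosure requires an appropriate legal instrument.
    return request.legal_process in VALID_LEGAL_PROCESS
```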

Since December 2019, TikTok has released to the public aggregated data on the requests for user information made to TikTok by governments from around the world (Ebenstein, 2019). According to 2019 and 2020 reports, TikTok has not received any requests from the Chinese government and the majority of requests for user information received between 1 July 2019 and 31 December 2020 were made by India and the United States (TikTok Safety Centre, 2020b). It is beyond the scope of this research to determine whether TikTok’s reports are wholly accurate, however, they broadly follow the standards of reporting that Google and Facebook provide in their transparency reports (Facebook, 2019; Google Inc., 2020). It is also notable that a 2020 CIA assessment of TikTok reportedly concluded that while it was possible that the Chinese government could intercept TikTok data, there was no evidence to suggest that it had in fact done so (Sanger & Barnes, 2020). From the available evidence, it appears that the potential for the Chinese government to use TikTok data to threaten US national security remains theoretical.

The national security concerns expressed by the US government—most specifically the concern that TikTok user data may be accessed by the Chinese government and used to track and gather information on US citizens and companies—raise an interesting issue of territoriality. Over the past few years, ByteDance has extended its operations well beyond Chinese territorial spaces. While ByteDance Ltd was founded in China, it is incorporated outside of China, in the Cayman Islands, and it operates as a multinational organisation, with subsidiaries in the US, Australia, Singapore and the UK (Bytedance, 2020). TikTok operates from across the US, Europe and Asia with its largest US offices in Los Angeles, Mountain View and New York (Shead, 2020). In 2020, TikTok reportedly had almost 1,400 US employees (McGill, 2020). To use TikTok for the surveillance of US citizens (a key concern stated by US officials), the Chinese state would either need to do so without TikTok’s permission (as it might seek to do with any company) or it would need to compel Chinese or US TikTok employees to provide access to its operations within the US. While neither course of action is inconceivable, and indeed TikTok’s multinational structure may make it more strategically useful to the Chinese state (Cartwright, 2020), the Chinese origins of TikTok can be used to overstate or oversimplify the platform’s actual territorial connection to China.

Notably, when new national security laws were introduced in Hong Kong in 2020—laws that may have subjected TikTok to China’s national intelligence regime, which requires organisations to cooperate with Chinese intelligence agencies—TikTok suspended the app’s operation there (Wang, 2020). In this case, TikTok took overt action to stay out of the jurisdictional reach of the Chinese government.

Since its rise in the international digital platform market, TikTok has attempted to establish a reputation as a secure and transparent global corporate citizen. In April 2020, in an update on TikTok’s security policies and practices, Roland Cloutier, TikTok’s Chief Security Officer, explained that the company had “engaged with the world's leading cyber security firms to accelerate our work advancing and validating our adherence to globally recognized security control standards” and continued to limit “the number of employees who have access to user data and the scenarios where data access is enabled” (Cloutier, 2020a, para. 15). Cloutier further explained his goal was to “minimize data access across regions so that, for example, employees in the APAC region, including China, would have very minimal access to user data from the EU and US” (Cloutier, 2020a, para. 7). In late June 2020, TikTok announced that it had established offices in Los Angeles and Washington, DC dedicated to giving “lawmakers and experts the opportunity to look under the hood of TikTok” (Cloutier, 2020b, para. 2). At these centres, TikTok provides access to the source code of its algorithms, a practice that sets it apart from most other platforms in the US digital platform market. Former TikTok CEO Kevin Mayer explained TikTok’s desire to be a market leader in transparency and encouraged other platforms to make similar disclosures (Mayer, 2020). In the midst of increasingly hostile rhetoric from the US government, TikTok has publicly given the appearance of working enthusiastically on its security, transparency and accountability credentials.

While openly rejecting the security claims made against it, TikTok has also sought to redirect political discourse towards competition in the digital platform market. In late July 2020, TikTok released a statement arguing that competition drives innovation and that it was “unfortunate for creators, brands, and the broader community that it has been years since a company came along and reimagined what a social entertainment platform could be” (Mayer, 2020, para. 1). TikTok’s then CEO, Kevin Mayer, also urged the US government to consider the benefits of “fair and open competition” and pointed to “maligning attacks by our competitor—namely Facebook—disguised as patriotism and designed to put an end to our very presence in the US” (Mayer, 2020, para. 7). As reported by the Wall Street Journal, in 2019, Mark Zuckerberg, CEO of Facebook, conducted a series of meetings with US politicians, including US Senators and President Donald Trump, in which he argued TikTok posed a serious economic threat to the US (Wells et al., 2020). For TikTok executives, the controversy over its rise internationally is an issue of market competition and TikTok considers its US counterparts to be seeking to leverage their relationship with US lawmakers (Birnhack & Elkin-Koren, 2003) to protect their dominance in a highly lucrative international digital platform market.

TikTok has also tried to use the criticisms of its data practices to initiate a discussion of the problems of transparency and accountability present in the international digital platform market at large. TikTok has suggested that rather than focusing narrowly on TikTok’s Chinese ties: “the bigger move is to use this moment to drive deeper conversations around algorithms, transparency, and content moderation, and to develop stricter rules of the road” (Mayer, 2020, para. 11). Mayer stated that he accepted the scrutiny the company received based on its Chinese origins and embraced “the challenge of giving peace of mind through greater transparency and accountability…Even more, we believe our entire industry should be held to an exceptionally high standard” (Mayer, 2020, para. 5). It is unclear whether these statements reflect a genuine commitment by TikTok. They may simply be an offensive rhetorical tactic in an ongoing political battle. Nonetheless, it appears TikTok has attempted to leverage the scrutiny over its data policies and practices to initiate a broader debate about the politics of platforms and the problems inherent to almost all market participants.

Despite TikTok’s actions and rhetoric, on 31 July 2020, President Trump confirmed that he was indeed intending to ban TikTok in the US (Trump, 2020a). Several days later, Microsoft released a statement explaining that its representatives had spoken to President Trump directly regarding the acquisition of TikTok (Microsoft, 2020). Microsoft also confirmed that it was in discussions with ByteDance and that any acquisition would include a term ensuring all US user data is stored only in the US (Microsoft, 2020). Speaking about his conversations with Microsoft, President Trump stated, “it can’t be controlled, for security reasons, by China. Too big, too invasive, and it can’t be” (Trump, 2020b, para. 451). Trump also commented that he wanted “no security problems with China. It’s got to be an American company. It’s got to be American security. It’s got to be owned here” (Trump, 2020c, para. 91). For the Trump administration, regardless of any enhanced transparency by TikTok or the benefits of increased competition in the US digital platform market, TikTok was to be treated as a Chinese asset, one that could be leveraged to enhance Chinese state power in relation to the US.

On 3 August 2020, Chinese Foreign Ministry Spokesperson, Wang Wenbin, commented that the US government’s involvement in the negotiations between Microsoft and ByteDance was a “violation of market economy rules” and he called on the US to “stop politicizing economic and trade issues, and stop practicing discriminatory and exclusive policies in the name of national security” (Wenbin, 2020a, paras. 5-6). On 4 August 2020, Wenbin added “it is nothing new for the US to use its state machine to suppress foreign companies” (Wenbin, 2020b, para. 34). In its response, China sought to position the US government’s rhetoric and actions over TikTok as an act of economic nationalism. Certainly, the intervention by a US President to force the sale of a private company stands at odds with principles of free market capitalism, and the Trump administration may have been motivated in some part by a desire to protect a thriving American industry. Yet, both states have more to gain in this contest than economic benefits alone. The US position on TikTok falls within a broader contest over the strategic capacities at play in the digital environment and within a broader set of policy objectives.

On 5 August 2020, the US Department of State announced an expansion of its Clean Network programme, which has the stated objective of “guarding our citizens’ privacy and our companies’ most sensitive information from aggressive intrusions by malign actors, such as the Chinese Communist Party (CCP)” (Pompeo, 2020, para. 1). Expansions to the programme included five new policies aimed explicitly at reducing the presence of China in the US by limiting the use of Chinese telecommunications carriers, Chinese-owned applications sold in app stores or pre-installed on devices, Chinese cloud services, and Chinese-built undersea cables. When discussing the expanded programme, Pompeo called for its “allies and partners in government and industry around the world to join the growing tide to secure our data from the CCP’s surveillance state and other malign entities” (Pompeo, 2020, para. 5). In keeping with its strategy of offshore balancing, the Trump administration sought to leverage its global alliances and partnerships to limit the growth of Chinese companies in the digital environment; as with, for example, Huawei, which the Trump administration pressured allies to ban (successfully so in the case of the UK and Canada, where Huawei has been stopped from providing 5G network infrastructure) (Ljunggren, 2020; Baker & Chalmers, 2020).

The following day, the Trump administration issued an executive order aimed at forcing the sale of TikTok to a US company (Trump, 2020d). It did so by prohibiting transactions with ByteDance by any person in the US after 45 days from the issue of the executive order. The order specified that the prohibition was necessary in the context of a national emergency relating to “the spread in the United States of mobile applications developed and owned by companies in the People’s Republic of China” (Trump, 2020d, para. 2). The order posited that TikTok posed a threat to “the national security, foreign policy, and economy of the United States” and that TikTok’s data collection might allow the Chinese government “access to Americans’ personal and proprietary information—potentially allowing China to track the locations of Federal employees and contractors, build dossiers of personal information for blackmail, and conduct corporate espionage” (Trump, 2020d, para. 3). A similar order was issued against the Chinese company Tencent Holdings Ltd, which owns the messaging app WeChat (Trump, 2020e). On 14 August 2020, the Trump administration announced a second executive order requiring ByteDance to divest from TikTok on the grounds that a review by the Committee on Foreign Investment in the United States found credible evidence to suggest that ByteDance’s acquisition of Musical.ly posed a threat to US national security (Trump, 2020f).

Responding to the actions taken against it by the Trump administration, TikTok argued there was no evidence to support the administration’s position and the executive orders lacked due process (TikTok Newsroom, 2020b). The company explained that it had tried to engage in good faith negotiations to address security concerns but the US government “paid no attention to facts, dictated terms of an agreement without going through standard legal processes, and tried to insert itself into negotiations between private businesses” (TikTok Newsroom, 2020b, para. 2). TikTok asserted that the executive order showed the US government was relying on “unnamed 'reports' with no citations, fears that the app 'may be' used for misinformation campaigns with no substantiation of such fears, and concerns about the collection of data that is industry standard for thousands of mobile apps around the world” (TikTok Newsroom, 2020b, para. 3).

The Chinese government similarly voiced opposition to the executive orders. In support of TikTok, China’s Foreign Ministry Spokesperson, Zhao Lijian, called for the US to “correct its hysterical and wrong actions, come back to market principles and WTO norms, and stop unjustified suppression and discriminatory restriction targeting Chinese companies” (Lijian, 2020a, para. 53). Spokesperson Lijian also emphasised the strong foundations TikTok already has in the US, pointing to the senior TikTok staff who are US citizens, TikTok’s US operations including company servers and data centres, and hundreds of American employees (Lijian, 2020b). Lijian argued that US national security concerns were unfounded and the US was acting in contravention of free market principles and international trade laws:

even a CIA assessment says there's no evidence that China intercepted TikTok data or used the app to bore into cell phones. A think-tank in the US also said that there is no security justification for banning an app merely because it is owned by a Chinese company. This proves once again that "freedom and security" is nothing more than an excuse for some US politicians to pursue gunboat diplomacy in the digital age. Such bullying practices are a flagrant denial of the principles of market economy and fair competition, of which the US is a self-claimed "champion". (Lijian, 2020b, para. 13)

China also called for the US to reverse its executive orders (Lijian, 2020b), but, in late August, it announced new technology export rules covering artificial intelligence technologies, which may limit ByteDance’s capacity to sell TikTok’s machine learning algorithms. The Chinese government advised TikTok to “seriously and cautiously” review the rules and consider their implications for the intended US sale (Xiao & Lin, 2020, para. 5). In effect, China retaliated against the US with its own policy intervention aimed at spoiling negotiations over the sale of TikTok to a US firm.

This snapshot of the controversy over TikTok’s rise in the US provides a useful case study in the current geopolitics of platforms which is in large part defined by shifts in the world order caused by the economic rise of China and its implications for US hegemony. Essentially, TikTok has found itself in the middle of a contest between two nations over the strategic and economic value of the digital environment. Certainly, in 2020 TikTok was also at the whim of an idiosyncratic and highly transactional US President who may have been motivated by personal misgivings against TikTok (Lorenz et al., 2020), a desire to unconventionally profit from its sale (Davidson, 2020) and his particular view of the US-China economic relationship (Mason et al., 2020). Nonetheless, the Trump administration’s actions and rhetoric towards TikTok are consistent with a geopolitical analysis which suggests that regardless of specific leaders or foreign policy approaches, since the end of the Cold War, the US has pursued a hegemonic geopolitical strategy informed by its “territorial nation-state identity” (Sharp, 2000, p. 332) defined by its geographical isolation and capacity to undertake offshore balancing. In other words, there is a logic that underlies the US government’s view of TikTok, as evidenced by policymakers such as Pompeo and O’Brien, that is consistent with many decades of US geopolitical strategy. In the game of platform geopolitics, as China has begun to amass information and communication capacities (Hong, 2017; Thussu, 2018), the US is taking strategic policy action to preserve its advantages.

To be sure, China cannot claim the high road in this geopolitical contest. In 2018, China forced Apple to move data about Chinese citizens to data centres located in China, and certain US technology companies are completely banned from operating there (O’Hara & Hall, 2018). Indeed, both nations have a history of sustaining protectionist policies aimed at shielding strategically important industries (Lenway et al., 1996). By no means are China and the US the only states participating in the geopolitics of platforms; the European Union, for example, recently issued a paper exploring the potential for greater ‘digital sovereignty’ for that region (Madiega, 2020) and in 2020 India banned TikTok and 58 other apps on the grounds that they were a “threat to sovereignty and integrity of India” (Press Information Bureau Delhi, 2020). The geopolitics of the digital environment is multifaceted. And yet, once we recognise the geopolitical motivations underpinning a platform controversy such as this, the harder challenge is working out how to move beyond conventional geopolitical bounds—neither US nor Chinese hegemony in the digital environment is ideal or inevitable—and there are other important policy considerations relevant to this platform controversy.

3. Digital platform competition: imagining a world order for the digital environment unbound from conventional geopolitics

TikTok provides significant competition to the US platform incumbents. But, so far, the US government has opted to explicitly focus on the geopolitical implications of TikTok, rather than the implications for the US digital platform market. Whether the Biden administration will take a similar approach to that of the Trump administration remains to be seen. Importantly though, TikTok’s willingness to increase transparency beyond current industry standards indicates that, when pressed, platforms can adapt in areas such as security, transparency and accountability. TikTok’s immense global user base also indicates the willingness of consumers to adopt products regardless of their country of origin. If real competition and user choice were to increase, requiring the incumbent platforms to compete for users more actively, we might see further innovations in the digital platform market and a dilution of the concentrated private power wielded by the incumbent market actors (Ghosh & Couldry, 2020). 

Free market economic theory suggests competition will occur if markets are free from government regulation and other state interventions, under the assumption of a level playing field for all market participants (Harvey, 2007). But, in the digital platform market, entrenched companies enjoy advantages from economies of scale and benefit from high barriers to entry, often resulting from vast and self-perpetuating data stores (Pasquale, 2015; Plantin et al., 2016). Under these conditions, competition is unlikely to occur spontaneously. In recent years, scholars have proposed a range of pathways for improving competition in digital platform markets (see, e.g., Burdon, 2020; Daly, 2016; Flew et al., 2019; Khan, 2016, 2019; Svantesson, 2017; Winseck, 2020). Following these scholars and, more recently, certain policymakers (see, e.g., Australian Competition and Consumer Commission, 2019, 2021; European Commission, 2020; Favaro Corvo Ribas & Maximiano Munhoz, 2020; Utsunomiya & Takamiya, 2020), two types of market reform intervention appear particularly urgent: antitrust measures, including breaking up vertically and horizontally integrated firms and effectively applying rules that limit the expansion of market incumbents through mergers and acquisitions; and the widespread adoption of data portability and interoperability standards that meaningfully address power asymmetries and barriers to entry in data-driven markets.
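As a simplified illustration of what a data portability obligation of this kind could mean in practice, the sketch below exports a user's profile and content history to a machine-readable file that a competing service could import. The schema, field names and file layout are hypothetical assumptions and do not correspond to any existing standard.

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class PortableProfile:
    """A hypothetical, platform-neutral export format for a user's account data."""
    user_id: str
    display_name: str
    followers: list[str] = field(default_factory=list)
    uploaded_videos: list[dict] = field(default_factory=list)  # e.g. {"title": ..., "url": ...}

def export_profile(profile: PortableProfile, path: str) -> None:
    # Serialise to JSON so that another service could import the same record.
    with open(path, "w", encoding="utf-8") as handle:
        json.dump(asdict(profile), handle, indent=2)

if __name__ == "__main__":
    profile = PortableProfile(
        user_id="u123",
        display_name="example_creator",
        followers=["u456", "u789"],
        uploaded_videos=[{"title": "first clip", "url": "https://example.com/v/1"}],
    )
    export_profile(profile, "portable_profile.json")
```

Interoperability would go a step further, requiring platforms to accept such records from one another on an ongoing basis rather than as one-off exports.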

Geopolitical forces have the potential to both aid and hinder platform market reform agendas, depending on the nature and level of international cooperation (van Dijck et al., 2018). Successfully regulating companies with transnational operations, such as Facebook, Google and Amazon, is likely to require substantial international cooperation. Recently, Winseck and Puppis (2020) collated over 88 different platform regulation inquiries, reviews and proceedings undertaken in jurisdictions around the world between 2016 and 2020. The high number of regulatory initiatives suggests a strong willingness among lawmakers globally to address the problems created by market concentration in the digital economy, yet so far there has been no coordinated global regulatory undertaking. Any such undertaking would inevitably require confronting the countervailing force of digital nationalism (Mann & Daly, 2020; Pohle & Thiel, 2020). A challenge for interested researchers and policymakers is to pursue an agenda that will incentivise nation-states around the globe to overcome nationalistic impulses and economic protectionism in order to realise the benefits of competition, innovation, and the decentralisation of power in the digital economy.

The dilution or dispersal of geopolitical power derived from the digital economy would require competitive market participants from more countries than the US and China. In addition to market reforms, this objective could be supported by national digital infrastructure and investment programmes aimed at growing domestic platform economies. Specific programmes would need to be tailored to suit national economies, taking into account economic conditions, existing resources, and industry capacities, but undoubtedly, they must extend beyond microeconomic investments such as industry grants or start-up incubators. To spur innovation, policymakers must think big: at the scale of, for example, high-speed rail programmes, national highway projects, or national broadband initiatives (Winseck, 2020).

More work is required to identify the most viable pathways for achieving a competitive international digital platform market and avoiding the solidification of conventional geopolitical power dynamics within the digital environment. To summarise, a research agenda with this goal might comprise the following streams of inquiry: 

  1. How can nation-state collaboration in the regulation of platforms be extended, beyond existing multilateral and regional undertakings, to achieve a globally coordinated regulatory approach to market reforms for greater platform competition? 
  2. How might growing international consensus about the need for interventions to reduce concentrated private power be reconciled with digital nationalism, with a view to achieving a more decentralised digital environment?
  3. What types of national infrastructure and investment programmes could a state implement, complementing market reforms, to support emerging digital platforms to overcome existing barriers to entry into markets nationally and internationally? 

Ambitious legal, economic, and technical policy undertakings are politically difficult, but they should not be relegated to the theoretical. China is acting on at least two of these policy fronts. The China Standards 2035 industry policy plan is expected to outline an agenda to enable China to set the technical standards for the future development of advanced technological systems including artificial intelligence, the Internet of Things, and 5G internet (Kharpal, 2020). This plan is expected to work in concert with the country’s Belt and Road Initiative, through which China is investing in infrastructure and trade corridors throughout Eurasia and other nearby regions (Koyt, 2020), and will be used to export Chinese technical standards (Cai, 2017). For those concerned about the future distribution of power in the digital environment, now is the time to take up the challenge of devising innovative and ambitious policy programmes.

Conclusion

On the basis of its data policies and practices, TikTok poses no greater security threat to its users than do its counterparts. Almost all of the most widely used digital platforms threaten the privacy and security of users, they all have the capacity for immense ideological influence, and they exploit user data for economic gain. As a geopolitical analysis makes clear, TikTok has found itself in the middle of a contest over the value of the digital environment and the US is eager to preserve the economic and strategic advantages it has enjoyed for several decades. In many ways, though, this is a familiar economic story: market incumbents striving to sustain their privileged positions, policymakers seeking to protect strategically important industries. But, if policymakers were to embrace platform competition, rather than rejecting it along conventional geopolitical grounds, there is potential for greater innovation and a dilution of the concentration of power in the international platform market. Achieving a competitive market, and eschewing conventional geopolitical power dynamics, is a monumental challenge. It will require highly ambitious public policy interventions. But it is a challenge worth taking up. The concentration of private power in the digital environment diminishes democratic societies and we should seek solutions that look beyond both US and Chinese hegemonic power.

References

Alcaro, R. (2018). The Liberal Order and its Contestations. A Conceptual Framework. The International Spectator, 53(1), 1–10. https://doi.org/10.1080/03932729.2018.1397878

Alexander, J. (2020, August 2). TikTok’s a year old, when will its creators make money? The Verge. https://www.theverge.com/2019/8/2/20748770/tiktok-monetization-youtube-anniversary-twitch-facebook-creators

Al-Heeti, A. (2020, June 26). IOS 14 drives TikTok to stop grabbing info from users’ clipboards, report says. CNET. https://www.cnet.com/news/ios-14-drives-tiktok-to-stop-grabbing-info-from-users-clipboards-report-says/

Andrews, P. C. (2020, June 2). How TikTok got political. The Conversation. http://theconversation.com/how-tiktok-got-political-139629

Australian Competition and Consumer Commission. (2017, December 4). ACCC Commences Inquiry into Digital Platforms [Press release]. Australian Competition and Consumer Commission. https://www.accc.gov.au/media-release/accc-commences-inquiry-into-digital-platforms

Australian Competition and Consumer Commission. (2019). Digital platforms inquiry. https://www.accc.gov.au/system/files/Digital%20platforms%20inquiry%20-%20final%20report.pdf

Australian Competition and Consumer Commission. (2021, March 11). Feedback sought on choice and competition in internet search and web browsers [Press release]. Australian Competition and Consumer Commission. https://www.accc.gov.au/media-release/feedback-sought-on-choice-and-competition-in-internet-search-and-web-browsers

Baker, L., & Chalmers, J. (2020). As Britain bans Huawei, U.S. https://www.reuters.com/article/us-britain-huawei-europe-idUSKCN24F1XG

Balkin, J. M. (2017). Free speech in the algorithmic society: Big data, private governance, and new school speech regulation. UC Davis Law Review, 51, 1149.

Bella, T. (2020). Pompeo says the U.S. is ‘certainly looking at’ banning TikTok and other Chinese apps. Washington Post. https://www.washingtonpost.com/nation/2020/07/07/tiktok-ban-china-usa-pompeo/

Birnhack, M. D., & Elkin-Koren, N. (2003). The Invisible Handshake: The Reemergence of the State in the Digital Environment. Va. JL & Tech, 8, 6–13. https://doi.org/10.2139/ssrn.381020

Burdon, M. (2020). Digital Data Collection and Information Privacy Law. Cambridge University Press.

Cai, P. (2017). Understanding China’s Belt and Road Initiative. Think-Asia, Lowy Institute For International Policy.

Cartwright, M. (2020). Internationalising state power through the internet: Google, Huawei and geopolitical struggle. Internet Policy Review, 9(3). https://doi.org/10.14763/2020.3.1494

TikTok Safety Centre. (2020a). Safety Center—Resources. TikTok. https://www.tiktok.com/safety/resources/transparency-report?lang=en&appLaunch=

TikTok Safety Centre. (2020b). TikTok Transparency Report. TikTok. https://www.tiktok.com/safety/resources/transparency-report?lang=en&appLaunch=

Chan, C. (2018). When AI is the Product: The Rise of AI-Based Consumer Apps. Andreessen Horowitz. https://a16z.com/2018/12/03/when-ai-is-the-product-the-rise-of-ai-based-consumer-apps/

Chen, X., Kaye, D. B., & Zeng, J. (2020). #PositiveEnergy Douyin: Constructing ‘Playful Patriotism’ in a Chinese Short-Video Application. Chinese Journal of Communication. https://doi.org/10.1080/17544750.2020.1761848

Clayton, J. (2020). TikTok’s Boogaloo extremism problem. BBC News. https://www.bbc.com/news/technology-53269361

Cloutier, R. (2020a). Our approach to security—Newsroom | TikTok. https://newsroom.tiktok.com/en-us/our-approach-to-security

Cloutier, R. (2020b). TikTok’s security and data privacy roadmap—Newsroom | TikTok. https://newsroom.tiktok.com/en-us/tiktoks-security-and-data-privacy-roadmap

Cloutier, R. (2020c). Updates on our security roadmap—Newsroom | TikTok. https://newsroom.tiktok.com/en-us/updates-on-our-security-roadmap

European Commission. (2020). The Digital Markets Act: Ensuring fair and open digital markets. European Commission. https://ec.europa.eu/info/strategy/priorities-2019-2024/europe-fit-digital-age/digital-markets-act-ensuring-fair-and-open-digital-markets_en

Daly, A. (2016). Private Power, Online Information Flows and EU Law: Mind the Gap. Hart.

Datareportal. (2021). Global Social Media Stats. DataReportal – Global Digital Insights. https://datareportal.com/social-media-users

Davidson, H. (2020). TikTok sale: Trump approves Microsoft’s plan but says US should get a cut of any deal. The Guardian. https://www.theguardian.com/technology/2020/aug/03/tiktok-row-trump-to-take-action-soon-says-pompeo-as-microsoft-pursues-deal

DeNardis, L., & Hackl, A. M. (2015). Internet Governance by Social Media Platforms. Telecommunications Policy, 39(9), 761–770. https://doi.org/10.1016/j.telpol.2015.04.003

Donzelli, G., Palomba, G., Federigi, I., Aquino, F., Cioni, L., Verani, M., Carducci, A., & Lopalco, P. (2018). Misinformation on vaccination: A quantitative analysis of YouTube videos. Human Vaccines & Immunotherapeutics, 14(7), 1654–1659. https://doi.org/10.1080/21645515.2018.1454572

Ebenstein, E. (2019, December 31). Our first Transparency Report. TikTok Newsroom. https://newsroom.tiktok.com/en-us/our-first-transparency-report

Facebook. (2019). Requests For User Data. https://transparency.facebook.com/government-data-requests

Favaro Corvo Ribas, G., & Maximiano Munhoz, N. (2020, October 26). The Brazil Antitrust Agency’s new study on digital markets. International Bar Association. https://www.ibanet.org/Article/NewDetail.aspx?ArticleUid=48D83F91-8874-4F3B-8FCF-E85A384BA841

Flew, T., Martin, F., & Suzor, N. (2019). Internet Regulation as Media Policy: Rethinking the Question of Digital Communication Platform Governance. Journal of Digital Media & Policy, 10(1), 33–50. https://doi.org/10.1386/jdmp.10.1.33_1

Flint, C. (2016). Introduction to Geopolitics (3rd ed.). Taylor and Francis. https://doi.org/10.4324/9781315640044

Fowler, G. (2020, July 13). Is it time to delete TikTok? A guide to the rumors and the real privacy risks. The Washington Post. https://www.washingtonpost.com/technology/2020/07/13/tiktok-privacy/

Ghosh, D., & Couldry, N. (2020). Digital Realignment: Rebalancing Platform Economies from Corporation to Consumer (Working Paper No. 155; Mossavar-Rahmani Center for Business and Government).

Gillespie, T. (2010). The politics of ‘platforms’. New Media & Society, 12(3), 347–364. https://doi.org/10.1177/1461444809342738

Gillespie, T. (2018). Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions that Shape Social Media. Yale University Press.

Google Inc. (2020). Google Transparency Report. https://transparencyreport.google.com/?hl=en

Gray, J. (2020). Google Rules: The History and Future of Copyright Under the Influence of Google. Oxford University Press.

Harvey, D. (2007). A Brief History of Neoliberalism. Oxford University Press.

Hern, A. (2019). Revealed: How TikTok censors videos that do not please Beijing. The Guardian. http://www.theguardian.com/technology/2019/sep/25/revealed-how-tiktok-censors-videos-that-do-not-please-beijing

Hong, Y. (2017). Networking China: The Digital Transformation of the Chinese Economy. University of Illinois Press. https://doi.org/10.5406/illinois/9780252040917.001.0001

Isaak, J., & Hanna, M. J. (2018). User data privacy: Facebook, Cambridge Analytica, and privacy protection. Computer, 51(8), 56–59. https://doi.org/10.1109/MC.2018.3191268

Jia, L., & Ruan, L. (2020). Going global: Comparing Chinese mobile applications’ data and user privacy governance at home and abroad. Internet Policy Review, 9(3). https://doi.org/10.14763/2020.3.1502

Kaye, B., Rodriguez, A., Langton, K., & Wikstrom, P. (In press). You Made This? I Made This: Practices of Authorship and (Mis)Attribution on TikTok. International Journal of Communication.

Keane, S. (2020, September 11). Huawei ban timeline: Chinese company’s Harmony OS may hit phones next year. CNET. https://www.cnet.com/news/huawei-ban-full-timeline-us-restrictions-china-trump-android-google-ban-harmony-os/

Kelly, P. (2006). A Critique of Critical Geopolitics. Geopolitics, 11(1), 24–53. https://doi.org/10.1080/14650040500524053

Khan, L. (2019). The Separation of Platforms and Commerce. Columbia Law Review, 119(4), 973–1098. https://columbialawreview.org/wp-content/uploads/2019/05/Khan-THE_SEPARATION_OF_PLATFORMS_AND_COMMERCE-1.pdf

Khan, L. M. (2016). Amazon’s antitrust paradox. Yale Law Journal, 126(3), 710–805. https://digitalcommons.law.yale.edu/ylj/vol126/iss3/3/

Kharpal, A. (2020, April 27). Power is “up for grabs”: Behind China’s plan to shape the future of next-generation tech. CNBC. https://www.cnbc.com/2020/04/27/china-standards-2035-explained.html

Kitchin, R., & Lauriault, T. P. (2014). Towards critical data studies: Charting and unpacking data assemblages and their work (Working Paper No. 2; The Programmable City). Maynooth University.

Koyt, A. (2020, July 2). The China Standards 2035 Plan: Is it a Follow-Up to Made in China 2025? China Briefing News. https://www.china-briefing.com/news/what-is-china-standards-2035-plan-how-will-it-impact-emerging-technologies-what-is-link-made-in-china-2025-goals/

Layne, C. (1997). From preponderance to offshore balancing. International Security, 22(1), 86. https://doi.org/10.2307/2539331

Lee, K.-F. (2018). AI superpowers: China, Silicon Valley, and the new world order. Houghton Mifflin Harcourt.

Lenway, S., Morck, R., & Yeung, B. (1996). Rent seeking, protectionism and innovation in the American steel industry. The Economic Journal, 106(435), 410–421. https://doi.org/10.2307/2235256

Lijian, Z. (2020a, August 10). Foreign Ministry Spokesperson Zhao Lijian’s Regular Press Conference on August 10. Ministry of Foreign Affairs of the People’s Republic of China. https://www.fmprc.gov.cn/mfa_eng/xwfw_665399/s2510_665401/2511_665403/t1805288.shtml

Lijian, Z. (2020b, August 17). Foreign Ministry Spokesperson Zhao Lijian’s Regular Press Conference on August 17. Ministry of Foreign Affairs of the People’s Republic of China. https://www.fmprc.gov.cn/mfa_eng/xwfw_665399/s2510_665401/2511_665403/t1806940.shtml

Ljunggren, D. (2020, August 25). Canada has effectively moved to block China’s Huawei from 5G, but can’t say so. Reuters. https://www.reuters.com/article/us-canada-huawei-analysis-idUSKBN25L26S

Lorenz, T., Browning, K., & Frenkel, S. (2020). TikTok Teens Tank Trump Rally in Tulsa, They Say. The New York Times. https://www.nytimes.com/2020/06/21/style/tiktok-trump-rally-tulsa.html

Lynskey, O. (2017). Aligning data protection rights with competition law remedies? The GDPR right to data portability. European Law Review, 42(6), 793–814.

Maayan, P., & Elkin-Koren, N. (2016). Accountability in Algorithmic Copyright Enforcement. Stanford Technology Law Review, 19, 473–533. https://law.stanford.edu/wp-content/uploads/2016/10/Accountability-in-Algorithmic-Copyright-Enforcement.pdf

Madiega, T. (2020). Digital sovereignty for Europe (Briefing PE 651.992; EPRS Ideas Papers). European Parliamentary Research Service.

Mann, M., & Daly, A. (2020). Geopolitics, jurisdiction and surveillance. Internet Policy Review, 9(3). https://policyreview.info/geopolitics-jurisdiction-surveillance

Mann, M., & Warren, I. (2018). The digital and legal divide: Silk Road, transnational online policing and southern criminology. In K. Carrington, R. Hogg, J. Scott, & M. Sozzo (Eds.), The Palgrave handbook of criminology and the global south (pp. 245–260). Palgrave MacMillan. https://doi.org/10.1007/978-3-319-65021-0_13

Mason, J., Sanders, C., & Brunnstrom, D. (2020). Trump again raises idea of separating US economy from China. The Sydney Morning Herald. https://www.smh.com.au/business/the-economy/trump-again-raises-idea-of-separating-us-economy-from-china-20200908-p55tdb.html

Matamoros-Fernández, A. (2017). Platformed racism: The mediation and circulation of an Australian race-based controversy on Twitter, Facebook and YouTube. Information, Communication & Society, 20(6), 930–946. https://doi.org/10.1080/1369118X.2017.1293130

Mayer, K. (2020, July 29). Fair competition and transparency benefits us all. TikTok Newsroom. https://newsroom.tiktok.com/en-us/fair-competition-and-transparency-benefits-us-all

McCluskey, M. (2020, July 22). These TikTok Creators Say They’re Still Being Suppressed for Posting Black Lives Matter Content. Time. https://time.com/5863350/tiktok-black-creators/

McGill, M. H. (2020). Under fire from Washington, TikTok pledges U.S. job growth. Axios. https://www.axios.com/tiktok-plans-to-add-10000-jobs-963c6ee4-9dbf-431f-9ac8-1bb381855df4.html

Microsoft. (2020, August 2). Microsoft to continue discussions on potential TikTok purchase in the United States [Blog post]. The Official Microsoft Blog. https://blogs.microsoft.com/blog/2020/08/02/microsoft-to-continue-discussions-on-potential-tiktok-purchase-in-the-united-states/

Misty Hong v Bytedance Inc. Class Action Complaint, (US District Court Northern District of California 27 November 2019). https://www.courthousenews.com/wp-content/uploads/2019/12/Tiktok.pdf

Moshin, M. (2020, July 3). 10 TikTok Statistics That You Need to Know in 2021 [Blog post]. Oberlo. https://au.oberlo.com/blog/tiktok-statistics

O’Brien, R. (2020, June 24). The Chinese Communist Party’s Ideology and Global Ambitions. The White House. https://trumpwhitehouse.archives.gov/briefings-statements/chinese-communist-partys-ideology-global-ambitions/

O’Hara, K., & Hall, W. (2018). Four internets: The geopolitics of digital governance (No. 206; CIGI Papers). https://www.cigionline.org/sites/default/files/documents/Paper%20no.206web.pdf

O’Loughlin, J. (1999). Ordering the ‘crush zone’: Geopolitical games in post‐cold war eastern Europe. Geopolitics, 4(1), 34–56. https://doi.org/10.1080/14650049908407636

Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Harvard University Press.

Plantin, J.-C., Lagoze, C., Edwards, P. N., & Sandvig, C. (2016). Infrastructure studies meet platform studies in the age of Google and Facebook. New Media & Society, 20(1), 293–310. https://doi.org/10.1177/1461444816661553

Pohle, J., & Thiel, T. (2020). Digital sovereignty. Internet Policy Review, 9(4). https://doi.org/10.14763/2020.4.1532

Pompeo, M. (2020, August 5). Announcing the Expansion of the Clean Network to Safeguard America’s Assets [Press statement]. U. S. Department of State. https://www.state.gov/announcing-the-expansion-of-the-clean-network-to-safeguard-americas-assets/

Press Information Bureau Delhi. (2020, June 29). Government Bans 59 mobile apps which are prejudicial to sovereignty and integrity of India, defence of India, security of state and public order. Press Information Bureau, Government of India, Ministry of Electronics & IT. https://pib.gov.in/Pressreleaseshare.aspx?PRID=1635206

Roach, S. (2014). Unbalanced: The Codependency of America and China. Yale University Press.

Romm, T. (2020, July 30). Big tech hearing: Apple, Google, Facebook and Amazon CEOs testified before Congress. The Washington Post. https://www.washingtonpost.com/technology/2020/07/29/apple-google-facebook-amazon-congress-hearing/

Sanger, D. E., & Barnes, J. E. (2020, August 7). Is TikTok More of a Parenting Problem Than a Security Threat? The New York Times. https://www.nytimes.com/2020/08/07/us/politics/tiktok-security-threat.html

Sharp, J. (2000). Refiguring Geopolitics (K. Dodds & D. Atkinson, Eds.). Routledge.

Shead, S. (2020, May 27). TikTok owner ByteDance reportedly made a profit of $3 billion on $17 billion of revenue last year. CNBC. https://www.cnbc.com/2020/05/27/tiktok-bytedance-profit.html

Sicker, M. (2010). Geography and Politics Among Nations: An Introduction to Geopolitics. iUniverse.

Spangler, T. (2016, September 30). Musical.ly’s Live.ly Is Now Bigger Than Twitter’s Periscope on iOS (Study). Variety. https://variety.com/2016/digital/news/musically-lively-bigger-than-periscope-1201875105/

Suzor, N. P. (2019). Lawless: The Secret Rules That Govern Our Digital Lives. Cambridge University Press. https://doi.org/10.1017/9781108666428

Suzor, N. P., Myers West, S., Quodling, A., & York, J. (2019). What Do We Mean When We Talk About Transparency? Toward Meaningful Transparency in Commercial Content Moderation. International Journal of Communication, 13, 1526–1543. https://ijoc.org/index.php/ijoc/article/view/9736/0

Svantesson, D. J. B. (2017). Solving the internet jurisdiction puzzle. Oxford University Press. https://doi.org/10.1093/oso/9780198795674.001.0001

Thussu, D. (2018). A new global communication order for a multipolar world. Communication Research and Practice, 4(1), 52–66. https://doi.org/10.1080/22041451.2018.1432988

TikTok. (2020). Tiktok Law Enforcement. https://www.tiktok.com/legal/law-enforcement

TikTok Newsroom. (2019, August 16). Statement on TikTok’s content moderation and data security practices. TikTok Newsroom. https://newsroom.tiktok.com/en-us/statement-on-tiktoks-content-moderation-and-data-security-practices

TikTok Newsroom. (2020a, June 19). How TikTok recommends videos #ForYou. TikTok Newsroom. https://newsroom.tiktok.com/en-us/how-tiktok-recommends-videos-for-you

TikTok Newsroom. (2020b, August 7). Statement on the Administration’s Executive Order. TikTok Newsroom. https://newsroom.tiktok.com/en-us/tiktok-responds

Ó Tuathail, G., Dalby, S., & Routledge, P. (Eds.). (1998). The geopolitics reader. Routledge.

Trump, D. (2020a, July 31). Remarks by President Trump Before Marine One Departure. The White House. https://trumpwhitehouse.archives.gov/briefings-statements/remarks-president-trump-marine-one-departure-073120/

Trump, D. (2020b, August 3). Remarks by President Trump in a Meeting with U.S. Tech Workers and Signing of an Executive Order on Hiring American. The White House. https://trumpwhitehouse.archives.gov/briefings-statements/remarks-president-trump-meeting-u-s-tech-workers-signing-executive-order-hiring-american/

Trump, D. (2020c, August 3). Remarks by President Trump in Press Briefing. https://trumpwhitehouse.archives.gov/briefings-statements/remarks-president-trump-press-briefing-august-3-2020/

Trump, D. (2020d, August 6). Executive Order on Addressing the Threat Posed by TikTok. The White House. https://trumpwhitehouse.archives.gov/presidential-actions/executive-order-addressing-threat-posed-tiktok/

Trump, D. (2020e, August 6). Executive Order on Addressing the Threat Posed by WeChat. The White House. https://trumpwhitehouse.archives.gov/presidential-actions/executive-order-addressing-threat-posed-wechat/

Trump, D. (2020f, August 14). Order Regarding the Acquisition of Musical.ly by ByteDance Ltd. The White House. https://trumpwhitehouse.archives.gov/presidential-actions/order-regarding-acquisition-musical-ly-bytedance-ltd/

Tusikov, N. (2019). How US-made rules shape internet governance in China. Internet Policy Review, 8(2). https://doi.org/10.14763/2019.2.1408

Utsunomiya, H., & Takamiya, Y. (2020). Japan. In C. Jeffs (Ed.), E-Commerce Competition Enforcement Guide (3rd ed.). Global Competition Review. https://globalcompetitionreview.com/guide/e-commerce-competition-enforcement-guide/third-edition/article/japan

van Dijck, J., Poell, T., & de Waal, M. (2018). The Platform Society (Vol. 1). Oxford University Press. https://doi.org/10.1093/oso/9780190889760.001.0001

Wang, C. (2020, June 7). Why TikTok made its user so obsessive? The AI Algorithm that got you hooked [Blog post]. Towards Data Science. https://towardsdatascience.com/why-tiktok-made-its-user-so-obsessive-the-ai-algorithm-that-got-you-hooked-7895bb1ab423

Wang, E. (2020, July 7). TikTok says it will exit Hong Kong market within days. Reuters. https://www.reuters.com/article/us-tiktok-hong-kong-exclusive-idUSKBN2480AD

Wells, G., Horwitz, J., & Viswanatha, A. (2020, August 23). Facebook CEO Mark Zuckerberg Stoked Washington’s Fears About TikTok. The Wall Street Journal. https://www.wsj.com/articles/facebook-ceo-mark-zuckerberg-stoked-washingtons-fears-about-tiktok-11598223133

Wenbin, W. (2020a, August 3). Foreign Ministry Spokesperson Wang Wenbin’s Regular Press Conference on August 3. Ministry of Foreign Affairs of the People’s Republic of China. https://www.fmprc.gov.cn/mfa_eng/xwfw_665399/s2510_665401/2511_665403/t1803668.shtml

Wenbin, W. (2020b, August 4). Foreign Ministry Spokesperson Wang Wenbin’s Regular Press Conference on August 4. Ministry of Foreign Affairs of the People’s Republic of China. https://www.fmprc.gov.cn/mfa_eng/xwfw_665399/s2510_665401/2511_665403/t1803971.shtml

Williams, K. (2020, February 7). Top Apps Worldwide for January 2020 by Downloads [Blog post]. Sensor Tower Blog. https://sensortower.com/blog/top-apps-worldwide-january-2020-by-downloads

Winseck, D. (2020). Vampire squids, ‘the broken internet’ and platform regulation. Journal of Digital Media & Policy, 11(3), 241–282. https://doi.org/10.1386/jdmp_00025_1

Winseck, D., & Puppis, M. (2020). Platform Regulation Inquiries [Unpublished manuscript].

Xiao, E., & Lin, L. (2020, August 30). TikTok Talks Could Face Hurdle as China Tightens Tech Export Rules. The Wall Street Journal. https://www.wsj.com/articles/china-tightens-ai-export-restrictions-11598703527

Xuetong, Y. (2019). The Age of Uneasy Peace: Chinese Power in a Divided World. Foreign Affairs, 98(1), 40–49. https://www.foreignaffairs.com/articles/china/2018-12-11/age-uneasy-peace

Yan, F. (2020, May 7). Diversity on Social Media: What We Can Learn From TikTok [Blog post]. RE-UP Agency. https://thisisreup.com/2020/05/07/diversity-on-social-media-what-we-can-learn-from-tiktok/

Zhang, Z. (2020). Infrastructuralization of TikTok: Transformation, power relationships, and platformization of video entertainment in China. Media, Culture & Society, 43(2), 219–236. https://doi.org/10.1177/0163443720939452


The promise of financial services regulatory theory to address disinformation in content recommender systems


Section 1. Introduction

Across Europe, policymakers and the public are increasingly demanding the regulation of so-called 'harmful-but-legal' online content (European Commission, 2018). Disinformation—information that is false and deliberately created to harm a person, social group, organisation or country—is a quintessential example of this ‘harmful-but-legal’ phenomenon (Wardle & Derakhshan, 2017, p. 20). 1 The motivation for regulatory interventions to address disinformation online is well-founded. There is an ever-increasing body of empirical evidence that links it to ‘real’ harms; harms that cut across individual welfare, broader social interests, and democratic stability (Shmargad & Klar, 2020; Vaccari & Chadwick, 2020). The ongoing COVID-19 pandemic bears witness to this unsettling reality, with online disinformation being linked to a variety of untenable outcomes—from attacks on telecommunications infrastructure, to the consumption of dangerous ‘miracle cures’, to increased xenophobia.

Yet despite the broad desire for intervention, policymakers in the EU and UK have thus far struggled to develop effective regulatory responses. This results from the fact that the long-standing paradigm within which content regulation in Europe manifests, and thus the starting-point for discussions concerning the regulation of disinformation online, is underpinned by regulatory theories that are unsuitable for the problem at hand. As will be explained in detail in the following sections, there are two principal and related reasons for this. First, the regulatory theories underpinning our contemporary approach take their objective to be the suppression of content that is considered objectionable, which ignores the wholly context-dependent nature of the harm in disinformation. Second, many of the causal factors that give rise to the policy problem of disinformation online are informed by the business practices of certain contemporary online service providers, most notably their provision of open content recommender systems. Yet in spite of this, the theories that underpin the contemporary approach to content regulation largely ignore firms’ business practices as potential sites for regulatory intervention.

In that context, the aim of this article is to open the door to a potentially more effective approach for addressing disinformation online, by borrowing from contemporary European financial services regulation. 2 The intention of this article is not, however, to set out a fully-fledged alternative regulatory framework. Rather, it explores whether and to what extent the theories that underpin financial services regulation could be borrowed by policymakers who seek to develop a more appropriate regulatory response to disinformation, and indeed how they could serve as the crucial underpinning of novel regulatory interventions. To frame that endeavour, section II explains precisely why the present regulatory paradigm and its underlying theories are unsustainable, with a specific focus on their treatment of freedom of expression and content recommender systems. Section III isolates the key theories that underpin financial services regulation and outlines how they might be utilised to underpin a novel regulatory approach to disinformation online. Section IV then moves to illustrate the improvements that such an approach would have over the status quo, while section V engages with the potential shortcomings. Section VI concludes by plotting a course forward for this area, in the context of the recently proposed EU Digital Services Act (‘DSA’).

This article sits within the emergent body of scholarly and policy work that seeks to identify next-generation approaches to the governance of online content and the regulation of large content-sharing platforms. Many of these proposals highlight the need for an appreciation of, and regulatory attuning to, power dynamics in the platform ecosystem (Helberger, 2020; Graef & van Berlo, 2020; Gillespie et al., 2020); others identify firms’ business practices and commercial logics as motivations for, and necessary sites of, regulatory intervention (Cobbe & Singh, 2019; Woods & Perrin, 2019; Gary & Soltani, 2019); and others again provide paths forward for scrutinising and evaluating said practices and power dynamics (Wagner et al., 2021; Leerssen, 2020). As such, these efforts anticipate the kind of regulatory solutions that are required to better address the problem of disinformation and other online harms facing the internet ecosystem today. The contribution that this article seeks to make is to provide the necessary suite of regulatory theories that can underpin those promising solutions.

Ultimately, my contention is that a regulatory approach that grounds itself in the theories of financial services regulation would help us to better moderate the business practices that can make disinformation harmful, while engendering less interference with fundamental rights. However, this focus on addressing disinformation online through platform regulation is not premised on reductive technological determinism. We will not ‘solve’ disinformation by regulating online content. Problem definitions and policy solutions must not ignore the political, sociological, and economic contexts and structures within which disinformation online emerges. Nor should they ignore the role of entities that develop, transmit, and amplify disinformation for malign ends. Simply put, this is a multifaceted problem that necessitates multiple vectors of intervention both online and off. Some of the most important interventions—particularly those in the domain of media literacy—require action and investment now, even though their ‘pay-off’ may not be observable for several years. That I have chosen to focus my attention on one component is not intended to dismiss the need for a holistic interdisciplinary approach.

Section 2. Disinformation online and the regulatory context

2.1 A focus on the wrong target

As noted above, the contemporary regulatory paradigm within which our approach to disinformation online is situated systematically engenders unjustified interferences with individuals' fundamental rights, in particular freedom of expression. This results from the fact that it takes its objective to be the regulation of content as such, and more precisely, from its strategic reliance on what is known in regulatory theory as performance and technology-based approaches to regulatory intervention (Coglianese & Lazer, 2003). In practice, these theoretical underpinnings typically manifest in law as output targets (e.g. the removal by firms of all notified content within 24 hours) and specific technological mandates (e.g. the deployment by firms of automated content filters). Crucially, performance and technology-based interventions aim at achieving certain outcomes with respect to content that is objectionable. Put another way, the belief is that these are ‘content’ problems, and as such, warrant ‘content’ solutions (Francois, 2019, p. 2).

One key reason why this regulatory strategy gives rise to systematic and intolerable rights interferences when applied to disinformation online is because the ‘harm’ of disinformation is not to be found in some objective and essential feature of the content. On the contrary, the fact of whether disinformation is harmful by any given metric of ‘harm’ is wholly contextual. It depends on the intersection of various factors related to the content, the consumer, and the broader social, political, and cultural environments within which that content is consumed (Wardle & Derakhshan, 2017). As such, parsing ‘harmful’ disinformation from trivial falsehoods, satire, and unintended inaccuracies is no easy task, and certainly not one that can be made with reference to the content alone (Francois, 2019). Policymakers are thus left in a bind. The regulatory paradigm within which they operate assumes harm to be located predominantly within content, and therefore the toolbox at their disposal includes almost exclusively instruments that address content as such. ‘Compliance success’ under this ‘content-centric’ approach means more takedowns and more filtering. Yet given that identifying disinformation requires careful assessment and its ‘harm’ depends on a variety of contextual factors, an approach which optimises for speedy content suppression at scale is naturally going to lead to systematic over-removal and suppression of legitimate expression, often with little possibility for affected individuals to plead their case or seek redress (Keller, 2019; Engstrom & Feamster, 2017). To give individuals’ fundamental rights the protection they warrant, we need a different approach.

2.2 Blind to the business practices

Yet not merely rights-interfering, the contemporary paradigm is also ineffective at addressing disinformation online. This is a consequence of its silence with respect to the influential role played by the business practices of certain types of online service providers (henceforth, ‘online service providers’ or ‘OSPs’) in exacerbating the policy problem. Indeed, certain types of OSPs control many of the aforenoted contextual factors that determine whether a piece of disinformation is absorbed by its consumer as either a trivial falsehood worthy of ridicule or as a serious source of 'real-world' harm (Cobbe & Singh, 2019). This occurs most notably where they operate content recommender systems.

Following Cobbe & Singh, I define content recommender systems as product features that algorithmically rank and present content to particular users, according to some determination made by the OSP of the relevance, interest, and importance of the content for that particular user (Cobbe & Singh, 2019, p. 3). Content recommender systems come in many forms, but of particular interest for our purposes is the 'open' model, such as Facebook's News Feed and YouTube's ‘Up Next’ feature, whereby unvetted third-party content is selected for promotion to a new audience (henceforth, ‘open content recommender systems’ or ‘OCR systems’). 3 In determining what third-party content to surface for a user, the system’s algorithm typically draws upon user inputs (e.g. what that individual has already consumed); group predictors (e.g. what similar types of users have consumed); and a variety of other contextual personal data points (e.g. profiling data from data brokers) (Ibid, 2019). As such, OSPs that operate OCR systems (hereafter, ‘the OCR sector’) largely define what content is seen, whom it is seen by, and how it is presented, while simultaneously being subject to no formal editorial control obligations vis-à-vis said content. 4
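To make these mechanics concrete, the following is a minimal illustrative sketch in Python of how such a system might combine the three signal families into a ranking; the signal names, weights, and scoring rule are hypothetical and do not describe any platform's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class CandidateItem:
    item_id: str
    user_history_affinity: float   # signal from the user's own consumption history
    group_affinity: float          # signal from what similar users have engaged with
    context_boost: float           # signal from other contextual / profiling data

def score_item(item: CandidateItem,
               w_user: float = 0.5,
               w_group: float = 0.3,
               w_context: float = 0.2) -> float:
    """Combine the three signal families into a single relevance score.

    The weights are illustrative; real systems learn them from engagement data.
    """
    return (w_user * item.user_history_affinity
            + w_group * item.group_affinity
            + w_context * item.context_boost)

def recommend(candidates: list[CandidateItem], k: int = 3) -> list[str]:
    """Return the ids of the k highest-scoring unvetted third-party items."""
    ranked = sorted(candidates, key=score_item, reverse=True)
    return [item.item_id for item in ranked[:k]]

if __name__ == "__main__":
    feed = [
        CandidateItem("video_a", 0.9, 0.4, 0.2),
        CandidateItem("video_b", 0.3, 0.8, 0.6),
        CandidateItem("video_c", 0.1, 0.2, 0.9),
    ]
    print(recommend(feed, k=2))
```

The point of the sketch is simply that relevance is computed entirely from signals the provider selects and controls; nothing in the pipeline assesses whether the promoted content is accurate, harmful, or benign.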

The problem is that the commercial incentives that determine the design and operation of these systems may inadvertently exacerbate the spread and impact of disinformation (Marechal & Roberts Biddle, 2020; Singh, 2019; Gary & Soltani, 2019). In short, the business model of the major OSPs operating OCR systems is targeted advertising; the OSPs’ revenue function depends on consumers spending time on the platform and consuming advertising content. OCR systems help maximise this revenue function by presenting users with the kind of personalised content that keeps them engaged with the platform, and hence addressable with advertising content (Solsman, 2018). Yet much of the content that typically engages users is shocking and misleading, and hence OCR systems that are designed to maximise user engagement can inadvertently compound the problem of disinformation and other 'harmful-but-legal' content (Gary & Soltani, 2019, §1). For instance, internal research from Facebook, undertaken in 2016 and only recently brought into the public domain by investigatory reporting, found that ‘64% of all extremist group joins are due to our recommendation tools’ (Horwitz & Seetharaman, 2020, §2). As Balkin (2018, p. 3) observes, ‘the same business model that allows companies to maximize advertising revenues also makes them conduits and amplifiers for propaganda, conspiracy theories, and fake news’.

Unfortunately, complex OCR systems and their contribution to the problem of disinformation were unforeseen at the time that the present European regulatory paradigm was coming into being. It makes content the object of regulatory intervention, yet is largely silent on the systems and processes by which that content is served and consumed. In the case of disinformation this limitation poses acute challenges. Given that the harm of disinformation depends wholly on contextual factors—many of which are controlled by OSPs that operate OCR systems—it is untenable that the regulatory approach should remain passive with respect to the design and operation of these systems. On the basis of these shortcomings we need to change course in our regulatory approach to disinformation online. Yet we find few obvious alternative solutions within the domain of online content regulation. I therefore propose that we look further afield, to a sector with a longer tradition of intensive regulation, namely, the financial services sector.

Section 3. Imitation is the greatest form of flattery: learning from the financial services sector

3.1 Methodological motivations and justifications

The financial services sector is a suitable candidate of comparison for a number of reasons. As Black (2012) notes, it has been the testing ground for many of the leading new governance theories of regulation over the last 30 years. As such, by looking to the world of financial services we can grasp a landscape picture of the various possible regulatory theories at our disposal. The motivation for the substantive comparison arises because the factors that shape regulatory dynamics in the financial services sector appear similar to the OCR sector in three important respects: first, the structure of the market; second, the nature of firms’ incentives; and, third, the consequences that arise when things go wrong.

In the first case, the financial services sector is characterised by various types of actors (e.g. from retail to investment banking), operating at different scales (from multinationals to credit unions), and involved in numerous lines of business (from deposit holding to mortgages to wealth management). As such, there is a high degree of market heterogeneity, with regulation evolving in response to this. Considerable heterogeneity can also be observed in the OCR sector, where firm size varies and where OCR systems take on distinct roles within broader product bundles. For instance, Facebook's News Feed OCR system is a core feature of the user experience architecture, serving as a pathway to 'ancillary' services such as interest groups as well as public pages. In contrast, YouTube's OCR system takes multiple distinct forms, including AutoPlay and homepage recommendations. Crucially, in both instances the firm’s OCR system is bundled within largely distinct online services that could arguably be said to operate within different product markets. The revenue base for firms in the broader OCR market is likewise varied, even amongst the largest actors. For instance, Twitter’s 2019 revenue amounted to US$ 3.46 billion, a figure dwarfed by Facebook’s US$ 70.7 billion (Macrotrends, 2021).

In the second case, a guiding assumption of financial services regulation is that firms' commercial incentives are naturally misaligned with the public interest, and so regulatory intervention is essential to ensure that firms will internalise costs that would otherwise be externalised (House of Commons, 2009). As we saw in the section prior, an increasing body of research suggests that a similar dynamic of misaligned incentives is at play in the market for OCR systems. Therein, the commercial incentives underpinning the design and operation of OCR systems can inadvertently exacerbate the spread and impact of disinformation. Although we do not yet possess a comparable degree of insight as in the financial services sector, with every passing day new evidence emerges that points to a pronounced structural misalignment between the OCR sector’s incentives and the broader public interest (Hao, 2021; Horwitz & Seetharaman, 2020; Bergen, 2019).

In the third and final case, the comprehensiveness and intensity of the financial services regulatory regime is a response to the sheer degree of harm that may arise when financial services firms act in a manner that is contrary to the public interest (e.g. loss of savings by individuals and businesses; exploitation of vulnerable consumers, etc.). While the types of harm that may arise from disinformation in OCR systems (e.g., individuals ignoring crucial public health messaging during a pandemic; groups of voters succumbing to voter suppression efforts during an election period) are obviously different in nature, their impact in terms of negative public interest outcomes can arguably be similar. Indeed, it is not without reason that the draft DSA identifies OCR systems as a vector for ‘systemic risks’ and hence worthy of heightened oversight (COM/2020/825 final, rec. 54).

Equipped with this comparative framing, we may now turn to the question of what precisely are the key regulatory theories underpinning financial services in the EU and UK and how they might be applied to our domain. Broadly speaking, European financial services regulation includes three theoretical features that I believe could underpin a more effective policy response to disinformation online: first, a comprehensive focus on risk management; second, a dependence on principle-based rules; and third, a reliance on regulated firms to achieve desired policy outcomes (Black, 2012). While there are differences across jurisdictions and there is of course more to the sector’s regulatory theory than these three elements, these are selected for consideration on the basis of both their foundational role within financial services regulation and their prima facie promise for our purposes. In what follows, I will briefly outline their meaning before articulating their envisaged application.

3.2 Thinking in terms of risk

'Risk' in financial services regulatory theory should be understood broadly, as the concept underpins numerous modalities of policy therein. Most notably, risk provides the basis for regulatory legitimacy. Financial institutions are understood to pose systemic risks of various kinds and, if left to their own devices, to be incapable of identifying, managing, and mitigating those risks. This understanding motivates and legitimises a body of law—known as ‘prudential regulation’—that intervenes with, and deeply scrutinises, everything from firms’ commercial practices, to their risk-mitigation strategies, and even the composition of their senior leadership. For instance, the EU Prudential Requirements directive grants regulatory authorities the power to limit or prevent business practices which pose ‘excessive risk to the soundness of an institution’ (Directive 2013/36/EU, art. 92.2 (a)), and to take steps to ensure banks’ remuneration policies ‘promote sound and effective risk management and do not encourage [excessive] risk-taking’ (Ibid, art. 104.1(e)). In addition, risk often manifests as the metric by which compliance measures are determined. For instance, the body of law concerning anti-money laundering and terrorist financing (AML-TF) is grounded in the belief that it is impossible to ever fully prevent a firm’s services from being exploited to launder money or finance terrorist activities. Consequently, the counter-measures that a regulated entity should take to address these unlawful practices—such as know-your-customer due diligence and transaction analysis—are to be commensurate with the risk of the exploitation occurring. Entities like the Financial Action Task Force issue regular authoritative compliance guidance for regulators and firms on how to assess, evaluate, and manage AML-TF risks in practical settings.

To utilise the risk-based approach as an underpinning of our policy response to disinformation online, we would of course first need a clear conceptual understanding of risk as it pertains to our domain. Specifically, policymakers would be required to establish a methodology for determining various individual and public interest disinformation-related risks, as well as a hypothesis explaining how these risks can manifest both in online content and in the commercial practices of firms. An effective risk schema should provide the means by which—in different contexts—we could assess whether and to what extent a given piece of disinformation is likely to cause harm to the public interest, and likewise assess the risk profile of specific commercial practices and product features.

Policy interventions on the basis of this approach would aim at the identification, management, and mitigation of these disinformation-related risks. Crucially, under the risk-based approach compliance efforts would be commensurate with the degree of risk posed to the public interest—this focus on ‘commensurability’ means that the fact of hosting disinformation would not in itself be indicative of noncompliance with the law. Moreover, the adoption of a risk-based approach would require a general shift in the locus of policy intervention away from content removal. Today, intervention often aims at firms’ outputs, meaning action occurs once the risk is materialising. Yet, many of the risks associated with disinformation are intimately related to firms' business practices and how they engage with third-party content on their services. Consequently, a true risk-based approach to addressing disinformation would manifest in regulatory interventions earlier in the product cycle. This means focusing particularly on OCR systems, given their significant influence in shaping the spread and impact of disinformation. For instance, it could manifest as regulatory obligations regarding algorithmic auditing and inspection that aim at continuous quality assurance of OCR systems and early warning of any operational flaws. In addition, it could manifest as measures that place restrictions on the micro-targeting of content to specific types of users, given that much of the public interest risk in disinformation is determined by who sees the content and under what circumstances (Haines, 2019).

Notably, the focus on ‘risk’ shines through in the draft DSA in at least two distinct modalities. First, we have seen how risk serves as the basis for regulatory legitimacy in the financial services sector. The DSA adopts a similar approach, delineating a category of online service providers—so-called ‘Very Large Online Platforms’—whose size, influence, and risk justifies asymmetric regulatory obligations (COM/2020/825 final, art. 25). In addition, risk manifests as a metric of compliance in a manner similar to AML-TF regulation, with firms obliged to assess “the systemic risks stemming from the functioning and use of their service, as well as by potential misuses by the recipients of the service, and take appropriate mitigating measures” (Ibid, rec. 56, my emphasis). Notably, while the provisions of the DSA that establish direct obligations and responsibilities with respect to content—e.g. the provisions on notice & action—limit their focus to that which is illegal, the risk assessment and mitigation measures enshrined in the law’s articles 26 and 27 do implicate disinformation explicitly and implicitly. Indeed, these provisions draw out ‘coordinated inauthentic behaviour’ as a risk to be mitigated (Ibid, art. 26. 1 (c)) and likewise earmark OCR systems as a site where risk mitigation measures should be located (Ibid, art. 27.1 (a)).

Yet we have discussed that for risk to serve as an effective basis for regulatory intervention (in the case of disinformation or any other ‘harmful-but-legal’ content), policymakers must first establish a rigorous risk schema. In that regard, the DSA is somewhat underwhelming, in that its guidance on what risks to assess and how they ought to be mitigated is vague. In the case of risk assessment, companies are expected to translate interferences with fundamental rights into the language of risk management, with little delineation of how those risks might be understood or quantified in practical terms. As we will discuss in section V, this shortcoming brings a myriad of theoretical and practical challenges. In addition, risk is typically understood across domains as a function of the probability and severity of a certain adverse outcome (ISO, 2018, §3). Yet the legal architecture of the DSA’s risk-based approach appears overly weighted toward addressing the severity of risks rather than their probability, notably through its focus on addressing systemic risks related to ‘dissemination’ of illegal content and the ‘intentional manipulation’ of their service. While of course it is important to assess and address the severity of risks that materialise (e.g. through developing partnerships with trusted flaggers; deploying content recognition software to assist human review), risk assessment in this domain will only be meaningful if it gives equal weight to the probability of a certain risk materialising (e.g. assessing whether OCR system design may inadvertently privilege the virality of disinformation).
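To illustrate the weighting point, the sketch below treats probability and severity symmetrically in a toy risk score; the scales, example values, and mitigation comparison are assumptions made for exposition rather than a proposed compliance methodology.

```python
def risk_score(probability: float, severity: float) -> float:
    """Classical formulation: risk as a function of both probability and severity.

    Both inputs are assumed to be normalised to [0, 1]; neither dimension
    dominates, so a severe but unlikely outcome and a mildly harmful but
    near-certain one can be compared on the same scale.
    """
    if not (0.0 <= probability <= 1.0 and 0.0 <= severity <= 1.0):
        raise ValueError("probability and severity must be in [0, 1]")
    return probability * severity

# Illustrative comparison: a mitigation that only reduces severity
# (e.g. faster takedown once content has already spread) leaves the
# probability term untouched, whereas a design change to the recommender
# system (e.g. not amplifying unverified viral content) acts on probability.
baseline = risk_score(probability=0.8, severity=0.6)
severity_only_mitigation = risk_score(probability=0.8, severity=0.3)
probability_mitigation = risk_score(probability=0.4, severity=0.6)
print(baseline, severity_only_mitigation, probability_mitigation)
```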

3.3 Principles over prescription

A second key feature of regulatory theory in the European financial services domain, most associated with the UK, is the emphasis on principles-based rule-structures. The rules governing the conduct of regulated entities are often ‘general, qualitative, purposive, and behavioural’ (Black, 2008, p. 13). A quintessential example of this approach is found in the UK Prudential Regulatory Authority's 'Fundamental rules', where regulatory obligations are expressed in such terms as 'a firm must organise and control its affairs responsibly and effectively’ (PRA, 2014, p. 5). In the EU, the 2013 Prudential Requirements directive exhibits similar principles-based requirements with respect to the regulation of business conduct, demanding that regulated firms have in place ‘robust’, ‘sound’, and ‘effective’ policies and governance arrangements (2013/36/EU, art. 73). Transitioning from the macro to the micro—that is, the process whereby the principles are translated into specific compliance approaches for individual firms—is usually done through iterative dialogues between firms and regulators as well as through guidelines and secondary legislation.

While admittedly the regulating-by-principles approach can already be found in some foundational elements of the EU and UK content regulation legislative acquis, the paradigm has become increasingly defined by 'prescriptive' and 'bright-line' rule forms in the last two decades, whereby rules often take the form of highly-complex descriptive obligations or simple rigid directives respectively. 5 To understand how principles-based rule-structures could be more formally deployed in the policy response to disinformation online, it is worth considering a hypothetical statutory rule of this form, namely: firms must implement effective and proportionate measures to limit the virality of disinformation on their services. Notably, the qualitative and behavioural nature of the principles-based rule form allows for far greater flexibility in the types of measures that demonstrate compliance. Relevant OSPs could, depending on their specific context, satisfy this rule by implementing: an automated mechanism to identify content that meets a standard of virality, allowing for fast-track review; a content de-ranking policy for identified disinformation; or, specific OCR system design tweaks. Another such principle-based rule to address disinformation could be: firms must take steps to enhance the visibility of factual information in OCR systems. As with the previous example, the desired regulatory outcome here could be achieved in a number of different ways, as evidenced by the various voluntary efforts of platforms such as Facebook and YouTube to that specific end to date.
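As a purely hypothetical illustration of the first of these compliance options, a virality trigger for fast-track review might look something like the sketch below; the thresholds, field names, and the notion of an 'amplification ratio' are invented for exposition and are not drawn from any statute or platform policy.

```python
from dataclasses import dataclass

@dataclass
class ContentStats:
    content_id: str
    shares_last_hour: int
    views_last_hour: int
    follower_count_of_poster: int

def exceeds_virality_standard(stats: ContentStats,
                              share_threshold: int = 1_000,
                              amplification_ratio: float = 10.0) -> bool:
    """Flag content whose spread is disproportionate to the poster's own reach.

    The thresholds are placeholders: under a principles-based rule, the firm
    would have to justify its chosen standard of 'virality' to the regulator.
    """
    organic_reach = max(stats.follower_count_of_poster, 1)
    return (stats.shares_last_hour >= share_threshold
            or stats.views_last_hour / organic_reach >= amplification_ratio)

def fast_track_queue(batch: list[ContentStats]) -> list[str]:
    """Return the ids of items routed to priority human review."""
    return [s.content_id for s in batch if exceeds_virality_standard(s)]

if __name__ == "__main__":
    batch = [
        ContentStats("post_1", shares_last_hour=40, views_last_hour=900,
                     follower_count_of_poster=5_000),
        ContentStats("post_2", shares_last_hour=2_400, views_last_hour=80_000,
                     follower_count_of_poster=300),
    ]
    print(fast_track_queue(batch))  # ['post_2']
```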

Again, the move towards principles-based rule-structuring would likely entail new loci of regulatory intervention. Principles-based rules invite compliance measures that manifest earlier in the product cycle—such as in the design and operation of OCR systems—as these interventions are likely to be far more influential in responding to the rules’ behavioural and purposeful requirements. For instance, the effort to limit virality of disinformation is likely to be far better satisfied by measures that correct for the very design flaws in OCR systems that give rise to said virality, rather than an intervention late in the product cycle that focuses on removal speed of notified content. 6

Again, while not ostensibly concerned with the regulation of disinformation online, the DSA signals a new embrace of the principles-based approach to rule form in the online content domain. For instance, the risk mitigation measures that firms are expected to take under article 27 are to be ‘reasonable, proportionate, and effective’ while also being ‘tailored’ to specific risks. In addition, firms operating OCR systems must provide service users with ‘clear, accessible, and easily comprehensible’ information regarding the curatorial role of these systems (art. 29). The DSA also seeks to anticipate the compliance challenges that are likely to accompany the shift to principles-based rule forms, most notably in its commitment to ‘support and promote the development and implementation of voluntary industry standards’ (art. 34) and its provisions on future Codes of Conduct that will aim to elaborate on, and give specific meaning to, the generalised rules (art. 35). It is important to note that while the DSA will not set out specific principles-based rules for addressing disinformation (such as those used in the examples previously), it will nonetheless provide the novel regulatory architecture within which such rules can be developed and implemented. Indeed, the European Commission has already suggested that the principles-based EU Code of Practice on Disinformation will, in the future, be subsumed under the legal architecture of the DSA’s Code of Conduct provisions (European Commission, 2020 §4.2).

3.4 The ‘responsibilisation’ of internal management

The third and final feature of financial services regulation that is relevant for our purposes is the explicit reliance on regulated entities in the execution of regulatory objectives. Firms are themselves expected to take primary responsibility for operationalising generalised rules in their own internal compliance programme and devising the means by which the rules’ objectives are best achieved. Examples of this philosophy can be found across the financial services legislative acquis. For instance, the Prudential Requirements directive (Directive 2013/36/EU) directs national regulators to ‘ensure oversight by the [firm’s] management body, promote a sound risk culture at all levels of […] firms and […] monitor the adequacy of internal governance arrangements’ (Ibid, art. 54, my emphasis). In the literature, this strategy of empowering and relying on firms to achieve regulatory objectives is typically referred to as 'management-based' or ‘meta-’ regulation (Black, 2012; Coglianese & Lazer, 2003). The management-based theory of regulation aims to incentivise commercial prudence at the point where firms are contemplating business decisions and practices, rather than in outputs.

Applied to the disinformation context, this approach would mean that, rather than being commanded and controlled through specific directives, relevant OSPs bear primary responsibility for identifying and mitigating the risks that their commercial practices pose to the achievement of regulatory objectives regarding disinformation. Practically speaking, this would likely materialise in an increased focus on formalised impact assessments that are tailored to disinformation-related risks; systematic documentation by firms of their internal operational processes; and an organisational restructuring that gives more prominence to internal compliance functions (e.g., the creation of a Chief Risk Officer; embedding compliance staff in product teams; etc.). As with the risk-based and principles-based approaches, the management-based approach would likely shift the focus of compliance measures towards OCR systems, as it forces firms to scrutinise how their business practices are likely to contribute to the spread and impact of disinformation. Ultimately, the specific compliance strategies that firms pursue under the management-based approach could be subject to varying degrees of supervision, depending on the levels of trust between firms on the one hand, and the public and policymakers on the other. 7
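In practice, the documentation element of this approach could resemble a simple internal risk register. The sketch below is one assumed form such a record might take; the fields, scores, and discounting rule are illustrative rather than requirements found in any existing law.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DisinformationRiskEntry:
    """One documented entry in a firm's internal disinformation risk register."""
    risk_id: str
    description: str           # e.g. "recommender amplifies unverified health claims"
    business_practice: str     # the practice or product feature the risk attaches to
    probability: float         # assessed likelihood, 0-1
    severity: float            # assessed impact on the public interest, 0-1
    mitigations: list[str] = field(default_factory=list)
    owner: str = "Chief Risk Officer"
    next_review: date = date.today()

    def residual_score(self) -> float:
        """Crude residual risk: each documented mitigation discounts the score."""
        discount = 0.8 ** len(self.mitigations)
        return self.probability * self.severity * discount

register = [
    DisinformationRiskEntry(
        risk_id="OCR-001",
        description="Engagement-optimised ranking boosts misleading election content",
        business_practice="open content recommender system",
        probability=0.7,
        severity=0.8,
        mitigations=["down-rank content flagged by fact-checkers",
                     "limit micro-targeting of political content"],
    ),
]
# The register can then be reviewed in order of residual risk.
print(sorted(register, key=lambda e: e.residual_score(), reverse=True)[0].risk_id)
```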

Again, we see some flavour of the management-based approach in the draft DSA, with firms themselves given the responsibility to identify, evaluate, and manage the risks that they face (arts. 26, 27); develop ‘audit implementation report[s]’ in response to recommendations of third-party auditors who monitor the law’s enforcement (art. 28); and to appoint compliance officers who ‘shall directly report to the highest management level of the platform’ (art. 32).

At this point we have outlined how, by deploying certain foundational theories of financial services regulation, we could develop a novel policy approach to disinformation online. We have also seen how these theories are already starting to enter the EU content regulation paradigm, through the provisions of the DSA. Equipped with this context, we can now turn to the crucial question of whether a regulatory transformation on this basis could meaningfully address the identified shortcomings of the contemporary regulatory approach to disinformation online.

Section 4. Improvements on the status quo

4.1 The appeal of content agnosticism

This project is motivated by the belief that our contemporary approach to disinformation online systematically engenders intolerable interferences with individuals’ freedom of expression, owing to its reliance on performance- and technology-based theories of regulation. ‘Compliance success’ under this ‘content-centric’ approach means more takedowns and more filtering, a state of affairs that is ill-suited to a type of content whose harm depends on a variety of contextual factors.

By borrowing regulatory theories from the financial services sector we could alleviate this problem to a considerable degree, simply because this approach would tend towards regulatory interventions that locate themselves ‘upstream’ in the product cycle. For instance, the risk-based approach gives recognition to the fact that firms’ business practices and processes—in particular, OCR systems—can contribute significantly to disinformation-related harms. As such, the approach aims at interventions vis-à-vis those very practices and processes. It is, as such, content agnostic—firms would not be explicitly directed to take action against specific pieces of online content and in the majority of cases there would not be an expectation that objectionable content be removed from the public domain (MacCarthy, 2020). 8 It simply requires firms adopt a more prudential approach in their business practices, and particularly with respect to how they commercially engage with the content that users upload to their services (e.g., how they amplify, target, and present it to new audiences). Indeed, ‘success’ under my envisaged approach could occur even if a firm filters or renders inaccessible no disinformation at all, but simply designs its OCR system such that content that is likely to be disinformation is not purposefully amplified and micro-targeted to those users for whom it may cause untenable harm. Ultimately, the risk-based approach aims to regulate firms, not the users of those firms’ products.
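A minimal sketch of what such a content-agnostic intervention could look like inside the ranking step is given below; the risk estimate and the dampening rule are hypothetical, and the example is intended only to show that the content itself is never removed, merely not amplified.

```python
def adjusted_score(relevance: float, disinfo_risk: float,
                   dampening: float = 0.9) -> float:
    """Reduce algorithmic amplification in proportion to estimated disinformation risk.

    relevance:    the recommender's ordinary engagement-based score
    disinfo_risk: an estimated probability (0-1) that the item is disinformation
    The item remains available on the platform; only its algorithmic boost shrinks.
    """
    return relevance * (1.0 - dampening * disinfo_risk)

# An item that would otherwise top the feed is simply not amplified
# to new audiences when its estimated risk is high.
print(adjusted_score(relevance=0.95, disinfo_risk=0.8))   # heavily dampened
print(adjusted_score(relevance=0.95, disinfo_risk=0.05))  # barely affected
```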

4.2 Reorienting regulation around the harmful practices

Section II also identified how the regulatory theories underpinning the contemporary approach eschew interventions that target the business practices that inform the harm in disinformation, namely the content’s promotion, targeting, and presentation through OCR systems.

Fortunately we have now seen how the theories underpinning financial services regulation unlock the means by which we can effectively intervene vis-à-vis said commercial practices. For instance, the risk-based approach renders OSPs that operate OCR systems accountable for ensuring the systems themselves are designed and operated with diligence, rather than simply obliging the firm to ‘clean up’ the bad outcomes that those systems give rise to. By reorienting interventions towards commercial practices and away from the substantive content, the risk-based approach acts on our recognition that the policy problem of disinformation is dependent on the various contextual factors that inform its consumption; many of which are controlled by firms. Put another way, the risk-based approach allows us to address one of the key causal factors of disinformation as a policy problem, rather than merely its symptoms. 9

Furthermore, the principle-based approach to rule-structuring—compared to the traditional ‘bright-line’ or ‘prescriptive’ forms—is likely to ensure that compliance interventions remain meaningful and focused on OCR systems through time, even in the face of rapid technological and operational change. Principle-based rules set out—in qualitative and behavioural terms—how firms are expected to act and what outcome they are expected to achieve. Intuitively, legal mandates of this form are more difficult to circumvent, as their qualitative and object-orientated nature means it is easier to assess whether a given action on the part of firms amounts to a genuine effort to achieve the regulatory goal. This approach can thus help us avoid the regulatory phenomenon of ‘hitting the target but missing the point’ (Black, 2010, p. 7). As an example, in section 3.3 I suggested that a potential principle-based rule could take the form that firms must implement effective and proportionate measures to limit the virality of disinformation on their services. While the regulatory objective of this rule could be satisfied by several unique approaches, its qualitative and object-orientated nature allows an observer to make certain baseline assessments of what meaningful compliance looks like. In this case, the pathology of online content virality and the fact that it is largely a function of the design and operation of OCR systems means that a compliance programme that ignores OCR systems is unlikely to be a good-faith effort. Moreover, those same rule-form characteristics mean OCR systems can remain the focal point of intervention through time, even if the technological and operational features of those systems evolve.

Finally, I discussed how the commercial incentives that underpin the design and operation of OCR systems are a key reason why those systems can unintentionally contribute to the spread and impact of disinformation online. While it is difficult to accurately measure this divergence between public and private incentives in the OCR sector, the anecdotal evidence that we referenced in Section II suggests that it is pronounced. Fortunately, the management-based approach can help correct for this tendency, by ensuring firms take primary responsibility for their social outputs and by facilitating the ‘internalisation’ of commercial prudence in the long-run. Experience in the financial services sector and beyond suggests that private stakeholders are more likely to view compliance measures and processes as worthy of adherence if they are the ones devising those specific measures (Black, 2012; Coglianese & Nash, 2006; Coglianese & Lazer, 2003). The responsibilisation of internal management under this regulatory theory thereby engenders a sense of ‘ownership’ in the pursuit of regulatory objectives (Gunningham & Sinclair, 2017). Given the breadth of sectors where this phenomenon has been witnessed, it is reasonable to believe that a similar organisational psychology is likely to hold in the OCR sector, and thus the management-based approach can contribute to a narrowing of the sectoral incentives gap (Coglianese & Lazer, 2003).

Section 5. Appreciating the challenges

5.1 Lingering rights concerns

Admittedly, concerns regarding unjustified interferences with fundamental rights remain pertinent. While this approach aims to address the most acute interference that is foreseeable in the content regulation domain—namely the excessive blocking and removal of legitimate expression—novel concerns will come to the fore under this envisaged model. For instance, there are important questions as to whether down-ranking or de-ranking of certain content amounts to an excessive interference with an individual’s freedom of expression (e.g. ‘shadow-banning’), even if that content remains somewhat discoverable on the platform. Similarly, how are we to react when upstream compliance measures that aim at improving OCR systems are discovered to result in disparate impact on important public interest expression and group perspectives (e.g. an OCR system stops or significantly reduces its promotion of ‘Black Lives Matter’ activist content)?

In addition, one cannot ignore the critique that to deploy financial services regulatory theory to our domain is to do no more than legitimise the problem of ‘privatised enforcement’, the phenomenon whereby OSPs take the place of legislative and judicial authorities as the primary (and often sole) arbiters of online speech, without the traditional legal safeguards that regulate state-driven interferences with individuals’ fundamental rights. The risks to fundamental rights—notably the right to receive and impart information; the right to privacy and data protection; and various due process rights—that can arise from the phenomenon of privatised enforcement are well established in the literature (Kuczerawy, 2017; Belli & Venturini, 2016; Angelopoulos et al., 2016).

Unfortunately, the problems associated with privatised enforcement could manifest in our suggested approach, particularly if it is ultimately unable to prevent firms from expressing their compliance in the ‘content-centric’ manner we see today. Indeed, it is reasonable to assume that some firms in the OCR sector will simply aim at ‘achieving compliance’ at the least cost to the business—that is, by building on existing trust & safety programmes while ring-fencing their business practices and processes from any meaningful alteration. In addition, even where we assume good intent on the part of firms, it is possible that in practice some firms’ compliance functions will be unable to meet the rigorous expectations of this approach and may thus adopt excessively conservative or ‘industry-standard’ compliance strategies. The suggested approach would place primary responsibility on firms to identify and manage the disinformation-related risks that they face, while at every stage ensuring that they are acting towards the regulatory objectives as established in the generalised rulebook. That is no easy task, and as a result some well-intentioned firms may retreat into ‘content-centric’ approaches to minimise compliance risk. Indeed, rather than ushering in a modern era of rights-protective responsive regulation, this approach could quickly backslide into one that incentivises more blocking and removal, with less public oversight.

These risks must not be underestimated, and they should be at the fore of policymakers’ considerations when contemplating regulatory interventions on the basis of the proposed regulatory approach. That said, given that many of the aforementioned risks are likely to materialise at the implementation stage, it may be possible to anticipate and manage them through thoughtful legislative crafting, rigorous oversight and monitoring, and a focus on safeguards-by-design. For instance, an obvious means of protecting against the novel risks to freedom of expression that may arise through ‘upstream’ interventions aimed at OCR systems would be to oblige regulated firms to develop public-facing policies on down- and de-ranking of content; to mandate third-party algorithmic audits and impact assessments that monitor for discriminatory outcomes or disparate impact on certain groups’ expression; and to mandate disclosure to public authorities and researchers of the specific content affected by content moderation practices. Furthermore, to address the risk of backsliding into rights-interfering content-centric approaches and opening the door to the worst excesses of ‘privatised enforcement’, policymakers could consider mechanisms for rolling behavioural oversight of regulated entities, and support for firms in translating general statutory obligations into tailored compliance measures (e.g., through secondary legislation, co-regulatory codes of conduct, etc.).
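To make the audit safeguard more concrete, the following is a minimal sketch, in Python, of the kind of disparity check a third-party auditor might run over a platform’s moderation log. The column names, the fabricated data format, and the 0.8 reference threshold (loosely borrowed from the ‘four-fifths’ heuristic used in discrimination testing) are illustrative assumptions, not requirements drawn from the DSA or any existing audit standard.

```python
import pandas as pd

def moderation_disparity(log: pd.DataFrame,
                         group_col: str = "speaker_group",
                         action_col: str = "was_demoted",
                         threshold: float = 0.8) -> pd.DataFrame:
    """Compare demotion/removal rates across speaker groups.

    Assumes `log` holds one row per reviewed item, with a categorical
    column identifying the speaker's group and a boolean column
    indicating whether the item was down-ranked or removed.
    """
    rates = log.groupby(group_col)[action_col].mean().rename("action_rate")
    baseline = rates.min()  # least-affected group used as the reference point
    out = rates.to_frame()
    out["disparity_ratio"] = baseline / out["action_rate"]
    out["flagged"] = out["disparity_ratio"] < threshold  # treated markedly worse
    return out

# Illustrative usage with a hypothetical export of moderation decisions:
# audit = moderation_disparity(pd.read_csv("moderation_log.csv"))
# print(audit[audit["flagged"]])
```

A real audit would of course need to control for confounders (for instance, differing base rates of policy-violating content across groups), but even a crude screen of this kind gives regulators and researchers something concrete to interrogate.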

5.2 The ‘riskification’ of policy

While the endeavour to place risk at the centre of the content regulation paradigm would be in keeping with a broader trend within digital policy, and indeed public policy more generally, the ‘riskification’ of regulation is not without its problems (Beck, 1992).

First, as Böröcz (2016) notes in the context of the General Data Protection Regulation’s (GDPR) risk-based approach, fundamental rights are principally products of the legal domain, and enjoy their own distinct meaning and epistemic coherence. Risk management as a discipline is something altogether different, grounded predominantly in a techno-scientific epistemology. As such, the two domains do not always mesh coherently, and it can be challenging, if not impossible, to express fundamental rights in terms of risk and to understand how different risk factors promote or limit the enjoyment of those rights. Indeed, as van Dijk et al. (2016, p. 289) observe, when rights and risks are conflated ‘the meaning of both are changed into something that could hardly be predicted in advance’. This difficulty is even more pronounced when the public policy interest is to manage risks that may only manifest at a societal level (e.g. the risk to electoral integrity and democratic discourse). In these contexts, the link between a certain business practice or product and risks to fundamental rights may not be readily expressible in quantifiable terms, and fundamental rights may not even be the most appropriate yardstick against which to measure the harm in question (McKay & Tenove, 2020).

Second, risk management, like all techno-scientific disciplines, is value-laden. Value judgements are made when one chooses which risks to assess, how to assess them, and ultimately, how to manage them. As acclaimed risk theorist Paul Slovic has noted, ‘defining risk is an exercise of power’ (Buni & Chemaly, 2020, §2). Slovic’s observation is particularly concerning in our context, given that the centres of power in the technology sector are overwhelmingly male, white, and located in the global North (Harrison, 2019). As such, it is likely that—independent oversight notwithstanding—in the endeavour to assess, evaluate, and manage risks related to their services, firms will overlook serious risks that pertain to certain groups, or will adopt a risk-appetite posture that runs counter to the reasonable expectations of the public and policymakers. Buni & Chemaly (2020) document numerous case studies attesting to how bias and a systemic diversity problem have shaped a sub-optimal risk management culture in the tech sector, encompassing everything from market risk (‘what risks do the political and social conditions in the region we’re deploying in give rise to?’); to use-case risk (‘what bad outcomes for certain users and groups are likely to arise from use of our product?’); to trust and safety risk (‘in managing a known risk, what new risks are we likely to stimulate?’).
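As a toy illustration of how such value judgements permeate the exercise, the sketch below scores a few hypothetical risks using the familiar likelihood-times-impact heuristic found in frameworks such as ISO 31000. Which risks make it onto the register, how the scales are defined, and where the tolerance line sits are all discretionary choices made by whoever builds the register, which is precisely Slovic’s point. Nothing here reflects any actual firm’s methodology.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain); the scale itself is a choice
    impact: int      # 1 (negligible) .. 5 (severe); as is what counts as 'severe'

def prioritise(risks: list[Risk], tolerance: int = 12) -> list[tuple[str, int, bool]]:
    """Rank risks by likelihood x impact and flag those above a chosen tolerance.

    The tolerance threshold encodes the firm's risk appetite -- a value
    judgement, not a technical constant.
    """
    scored = [(r.name, r.likelihood * r.impact) for r in risks]
    return sorted(((name, score, score >= tolerance) for name, score in scored),
                  key=lambda entry: entry[1], reverse=True)

# Hypothetical register entries, for illustration only:
register = [
    Risk("coordinated disinformation ahead of an election", 3, 5),
    Risk("over-removal of minority-language activist content", 4, 4),
    Risk("late delivery of a transparency report", 2, 3),
]
for name, score, breach in prioritise(register):
    print(f"{score:>2}  {'escalate' if breach else 'monitor '}  {name}")
```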

EU lawmakers would do well to heed these learnings from other sectors when considering the risk-related provisions of the DSA.

5.3 The thorny question of oversight

It has already been noted that the shift towards a regulatory model for disinformation online characterised by risk-based, principles-based, and management-based approaches is likely to pose considerable compliance challenges for firms and, if poorly implemented, could pave the way for the very interferences with rights (e.g. freedom of expression) that the proposal aims to mitigate. It is in recognition of similar inherent ‘implementation risks’ that financial services policymakers have opted to include independent regulatory agencies in their regulatory architecture. Bodies like the European Banking Authority and the UK Prudential Regulation Authority act as regulatory fulcrums, giving meaning and reality to policymakers’ political aspirations. It is thus unsurprising that, given its tentative endorsement of the types of regulatory theories that warrant agency-led oversight, the DSA places a similar oversight model at the centre of its proposed regulatory architecture (art. 38; art. 51).

Yet things are rarely simple with respect to agency-led oversight, especially in the context of online content regulation. For a start, instituting new regulatory authorities brings its own risks. First, there is the obvious risk that such bodies can be ‘captured’ by firms in the market. Viewed cynically, a firm need not actually meet regulatory objectives; it must simply convince the regulator that it is doing so. What regulators see is not necessarily the compliance programme that a firm has implemented to achieve the regulatory objectives, but rather a representation or idealisation of such a programme. Indeed, a critique of regulatory oversight in the financial services sector is that firms focus less on managing their own compliance processes and more on managing the regulator (Black, 2012, p. 1047; Anderson, 1982). Second, agency-led oversight of the (social) media sector has traditionally been viewed with scepticism in many jurisdictions, and my suggested approach could evoke images of a ‘ministry of truth’ and state censorship. That critique is not without merit, and there is a real risk that regulatory bodies themselves may ‘backslide’ into an approach that measures compliance success under this paradigm in terms of ever more content removal in ever shorter periods of time. Indeed, bestowing new powers on regulatory authorities can be a dangerous exercise when those regulators’ mission and intent are subject to doubt (Article 19, 2021).

While these concerns are very real, they may not be immutable. Regulatory authorities can be resilient against corporate and cognitive capture if the institutional set-up and broader political context boast three features: first, a high degree of transparency in public affairs, so corporate influence can be kept in check; second, technical expertise and investigative resources within oversight authorities, so regulators can make diligent and effective assessments of firms’ compliance efforts; and finally, strong government support for their mission, so regulators can execute their mandate in the face of market and media pressure. 10 Of course, such government support will do more harm than good if the government in question seeks to co-opt agency-led regulation to suppress fundamental rights. In these situations, and where the jurisdiction in question is the site of systematic interference with the rule of law by state institutions, agency-led oversight—and perhaps the novel regulatory approach tout court—should not be considered in the first instance.

6. Conclusion – from theory to practice

At this point I have identified the key regulatory theories underpinning financial services regulation in Europe and articulated how they could be transposed to address disinformation online. I have illustrated how transposition would improve on the regulatory status quo, in that it would allow us to locate regulatory interventions at OCR systems while mitigating impacts on freedom of expression. I have also articulated the potential implementation risks associated with this approach and proffered tentative suggestions for how they might be mitigated. What, then, is left to do? Arguably, this project has achieved its main objective—that is, to verify that the envisaged regulatory transposition holds promise and should be considered and explored by European policymakers and the policy community. In that context, I will conclude by marking out a path for the next stages of this endeavour—what needs to happen to realise this promising alternative approach.

First, we must define the precise principles-based rules that can best address disinformation in OCR systems. In section III I deployed examples relating to virality and authoritative sources. Of course, these are not the only possible principles-based rules, and a whole host of others could complement or replace them. Second, my approach places risk at its core—as both the grounding for regulatory legitimacy and the metric by which firms’ obligations ought to be defined. I have already noted that for a substantive framework to take shape, we must first develop a deeper understanding of risk in this context. The next stage must develop that conception of risk, as it is a crucial foundational pillar upon which we can consider what types of efforts the OSP sector should be undertaking to address disinformation in its various forms. Third, good policy depends on rich data, and to develop the best principles-based rules and an appropriate conception of risk we need better insight into how disinformation manifests online. Indeed, we still have little systematic knowledge of how precisely disinformation spreads through OCR systems and of the true contribution of firms’ commercial practices to it (Leerssen, 2020; Haim & Nienierza, 2019; Ingram, 2019). That needs to be addressed, and urgently.
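To give a flavour of what operationalising a virality-related principles-based rule might look like inside an OCR pipeline, the sketch below applies a simple damping factor to the ranking score of fast-spreading items that lack an authoritative-source signal. The rule, the thresholds, and the field names are hypothetical illustrations of one possible compliance measure, not a description of any platform’s actual ranking system or of rules the DSA prescribes.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    item_id: str
    base_score: float                # relevance score from the upstream ranker
    shares_last_hour: int            # crude virality signal
    from_authoritative_source: bool  # e.g. a verified news outlet or public authority

# Hypothetical parameters a firm might derive from a principles-based rule such as
# "exceptional virality of unverified content must not be algorithmically amplified".
VIRALITY_THRESHOLD = 1_000  # shares per hour above which damping kicks in
DAMPING_FACTOR = 0.5        # how strongly the ranking score is reduced

def apply_virality_principle(candidates: list[Candidate]) -> list[Candidate]:
    """Re-rank recommendation candidates, damping unverified viral items."""
    def adjusted(c: Candidate) -> float:
        if c.shares_last_hour > VIRALITY_THRESHOLD and not c.from_authoritative_source:
            return c.base_score * DAMPING_FACTOR
        return c.base_score
    return sorted(candidates, key=adjusted, reverse=True)
```

The point of the exercise is not the particular numbers but the shape of the obligation: the regulator specifies the principle, while the firm chooses, documents, and must be able to justify the concrete parameters.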

Fortunately, the draft DSA offers a crucial opportunity to advance in each of these endeavours. Most notably, the draft law’s transparency requirements—a blend of third-party auditing requirements; investigatory powers for regulators; and the various obligations to provide transparency to the research community and to the public—can address the asymmetry of information that stifles policy work in this domain. Moreover, as noted in section III.II, the draft DSA provides a rudimentary conception of risk in the online content domain. Scrutinising, problematising, and ultimately improving this conception will be a necessary challenge and one which can inform future policy deliberations around disinformation online. Finally, while the DSA is ultimately unlikely to set out principles-based rules for addressing disinformation, it will provide the regulatory architecture within which such rules can be developed and implemented in the future.

In closing, I am under no illusions about the challenges inherent in pursuing this project through its next stages. I have taken merely the first step with this article. Yet I draw comfort from the fact that we have completed perhaps the most important step, as all practical regulation is underpinned by an initial theoretical hypothesis.

References

Anderson, J. (1982). The public utility commission of Texas: A case of capture or rapture? Review of Policy Research, 1(3). https://doi.org/10.1111/j.1541-1338.1982.tb00453.x

Angelopoulos, C., Brody, A., Hins, W., Hugenholtz, B., Leerssen, P., Margoni, T., McGonagle, T., van Daalen, O., & van Hoboken, J. (2016). Study of fundamental rights limitations for online enforcement through self-regulation [Report]. Institute for Information Law (IViR), University of Amsterdam. https://hdl.handle.net/1887/45869

Article 19. (2021). At a glance: Does the EU digital services act protect freedom of expression? Article 19. https://www.article19.org/resources/does-the-digital-services-act-protect-freedom-of-expression/

Ayres, I., & Braithwaite, J. (1992). Responsive regulation: Transcending the deregulation debate. Oxford University Press.

Balkin, J. (2018). Fixing social media’s grand bargain (Paper No. 1814; Aegis Series). Hoover Institution. https://www.hoover.org/sites/default/files/research/docs/balkin_webreadypdf.pdf

Beck, U. (1992). Risk society: Towards a new modernity. SAGE Publications.

Belli, L., & Venturini, J. (2016). Private ordering and the rise of terms of service as cyber-regulation. Internet Policy Review, 5(4). https://doi.org/10.14763/2016.4.441

Bergen, M. (2019). YouTube executives ignored warnings, letting toxic videos run rampant. Bloomberg. https://www.bloomberg.com/news/features/2019-04-02/youtube-executives-ignored-warnings-letting-toxic-videos-run-rampant

Black, J. (2008). Forms and paradoxes of principles-based regulation (Working Paper No. 13/2008; LSE Law, Society and Economy). London School of Economics and Political Science. https://www.lse.ac.uk/law/working-paper-series/2007-08/WPS2008-13-Black.pdf

Black, J. (2010). The rise, fall, and fate of principles-based regulation (Working Paper No. 17/2010; LSE Law, Society and Economy). London School of Economics and Political Science. https://core.ac.uk/download/pdf/17332.pdf

Black, J. (2012). Paradoxes and failures: “New Governance” techniques and the financial crisis. The Modern Law Review, 75(6). https://doi.org/10.1111/j.1468-2230.2012.00936.x

Böröcz, I. (2016). Risk to the right to the protection of personal data: An analysis through the lenses of Hermagoras. European Data Protection Law Review, 2(4). https://doi.org/10.21552/EDPL/2016/4/6

Buni, C., & Chemaly, S. (2020). The risk makers [Medium Post]. OneZero. https://onezero.medium.com/the-risk-makers-720093d41f01

Cobbe, J., & Singh, J. (2019). Regulating recommending: Motivations, considerations, and principles. European Journal of Law and Technology, 10(3). https://ejlt.org/index.php/ejlt/article/view/686

Coglianese, C., & Lazer, D. (2003). Management-based regulation: Prescribing private management to achieve public goals. Law and Society Review, 37(4). https://doi.org/10.1046/j.0023-9216.2003.03703001.x

Coglianese, C., & Nash, J. (2006). Leveraging the private sector: Management-based strategies for improving environmental performance. Routledge. https://doi.org/10.4324/9781936331444

Engstrom, E., & Feamster, M. (2017). The limits of filtering: A look at the functionality & shortcomings of content detection tools. Engine. https://static1.squarespace.com/static/571681753c44d835a440c8b5/t/58d058712994ca536bbfa47a/1490049138881/FilteringPaperWebsite.pdf

Francois, C. (2019). Actors, behaviours, content: A disinformation ABC (Transatlantic High Level Working Group on Content Moderation Online and Freedom of Expression Series). Annenberg Public Policy Center, University of Pennsylvania; Annenberg Foundation Trust, Sunnylands; Institute for Information Law, University of Amsterdam. https://www.ivir.nl/publicaties/download/ABC_Framework_2019_Sept_2019.pdf

Gary, J., & Soltani, A. (2019). First things first: Online Advertising practices and their effects on platform speech (Free Speech Futures) [Essay]. Knight First Amendment Institute at Columbia University. https://knightcolumbia.org/content/first-things-first-online-advertising-practices-and-their-effects-on-platform-speech

Gillespie, T. (2020). Expanding the debate about content moderation: Scholarly research agendas for the coming policy debates. Internet Policy Review, 9(4). https://doi.org/10.14763/2020.4.1512

Graef, I., & Van Berlo, S. (2020). Towards smarter regulation in the areas of competition, data protection, and consumer law: Why greater power should come with greater responsibility. European Journal of Risk Regulation, 25(1). https://doi.org/10.1017/err.2020.92

Gunningham, N., & Sinclair, D. (2017). Trust, culture and the limits of management-based regulation: Lessons from the mining industry. In Regulatory theory. ANU Press. https://doi.org/10.22459/RT.02.2017.40

Haim, M., & Nienierza, A. (2019). Computational observation: Challenges and opportunities of automated observation within algorithmically curated media environments using a browser plug-in. Computational Communication Research, 1(1). https://doi.org/10.5117/CCR2019.1.004.HAIM

Haines, E. (2019). Manipulation machines: How disinformation campaigns suppress the Black vote. Columbia Journalism Review. https://www.cjr.org/special_report/black-misinformation-russia.php

Hao, K. (2021). How Facebook got addicted to spreading misinformation. MIT Technology Review. https://www.technologyreview.com/2021/03/11/1020600/facebook-responsible-ai-misinformation/

Harrison, S. (2019). Five years of tech diversity reports – and little progress. WIRED. https://www.wired.com/story/five-years-tech-diversity-reports-little-progress/

Helberger, N. (2020). The political power of platforms: How current attempts to regulate misinformation amplify opinion power. Digital Journalism, 8(3). https://doi.org/10.1080/21670811.2020.1773888

Horwitz, J., & Seetharaman, D. (2020). Facebook executives shut down efforts to make the site less divisive. Wall Street Journal. https://www.wsj.com/articles/facebook-knows-it-encourages-division-top-executives-nixed-solutions-11590507499

Ingram, M. (2019). Silicon valley’s stonewalling. Columbia Journalism Review. https://www.cjr.org/special_report/silicon-valley-cambridge-analytica.php

International Organisation for Standardisation. (2018). ISO 31000 risk management – guidelines. https://www.iso.org/obp/ui/#iso:std:iso:31000:ed-2:v1:en

Keller, D. (2019). Dolphins in the net: Internet content filters and the advocate general’s Glawischnig-Piesczek v. Facebook Ireland opinion [White Paper]. Stanford Center for Internet and Society, Stanford Law School. https://stanford.io/3kOfrKv

Kuczerawy, A. (2017). The power of positive thinking: Intermediary liability and the effective enjoyment of the right to freedom of expression. JIPITEC – Journal of Intellectual Property, Information Technology and E-Commerce Law, 8(3). https://nbn-resolving.org/urn:nbn:de:0009-29-46232

Leerssen, P. (2020). The soapbox as a blackbox: Regulating transparency in social media recommender systems. European Journal of Law and Technology, 11(2). http://www.ejlt.org/index.php/ejlt/article/view/786

MacCarthy, M. (2020). Transparency requirements for digital society media platforms: Recommendations for policymakers and industry (Transatlantic High Level Working Group on Content Moderation Online and Freedom of Expression Series) [Working Paper]. Annenberg Public Policy Center, University of Pennsylvania; Annenberg Foundation Trust, Sunnylands; Institute for Information Law, University of Amsterdam. https://www.ivir.nl/publicaties/download/Transparency_MacCarthy_Feb_2020.pdf

Macrotrends. (2021). Revenue comparison through time – Facebook, Twitter. Macrotrends. https://www.macrotrends.net/stocks/stock-comparison?s=revenue&axis=single&comp=FB:TWTR

Maréchal, N., & Roberts Biddle, E. (2020). It’s not just the content, it’s the business model: Democracy’s online speech challenge [Report]. Open Technology Institute, New America. https://www.newamerica.org/oti/reports/its-not-just-content-its-business-model/

McKay, S., & Tenove, C. (2020). Disinformation as a threat to deliberative democracy. Political Research Quarterly. https://doi.org/10.1177/1065912920938143

Shmargad, Y., & Klar, S. (2020). Sorting the news: How ranking by popularity polarizes our politics. Political Communication, 37(3). https://doi.org/10.1080/10584609.2020.1713267

Singh, S. (2019). Rising through the ranks: How algorithms rank and curate content in search results and on news feeds [Report]. Open Technology Institute, New America. https://d1y8sb8igg2f8e.cloudfront.net/documents/Rising_Through_the_Ranks_2019-10-21_134810.pdf

Solsman, J. (2018, January). YouTube’s AI is the puppet master over most of what you watch. CNET. https://www.cnet.com/news/youtube-ces-2018-neal-mohan/

Vaccari, C., & Chadwick, A. (2020). Deepfakes and disinformation: Exploring the impact of synthetic political video on deception, uncertainty, and trust in news. Social Media + Society, 6(1). https://doi.org/10.1177/2056305120903408

van Dijk, N., Gellert, R., & Rommetveit, K. (2016). A risk to a right? Beyond data protection risk assessments. Computer Law & Security Review, 32(2), 286–306. https://doi.org/10.1016/j.clsr.2015.12.017

Wagner, B., Kübler, J., Kuklis, L., & Ferro, C. (2021). Auditing big tech: Combating disinformation with reliable transparency [Report]. Enabling Digital Rights and Governance. https://enabling-digital.eu/wp-content/uploads/2021/02/Auditing_big_tech_Final.pdf

Wardle, C., & Derakhshan, H. (2017). Information disorder: Towards an interdisciplinary framework for research and policymaking (Report DGI(2017)09). Council of Europe. https://rm.coe.int/information-disorder-toward-an-interdisciplinary-framework-for-researc/168076277c

Woods, L., & Perrin, W. (2019). Online harm reduction – a statutory duty of care and regulator [Report]. Carnegie UK Trust. http://repository.essex.ac.uk/25261/1/Online-harm-reduction-a-statutory-duty-of-care-and-regulator.pdf

Legislation and governmental documents

Directive 2013/36/EU of the European Parliament and of the Council of 26 June 2013 on access to the activity of credit institutions and the prudential supervision of credit institutions and investment firms, amending Directive 2002/87/EC and repealing Directives 2006/48/EC and 2006/49/EC Text with EEA relevance, 32013L0036, EP, CONSIL, OJ L 176 (2013). http://data.europa.eu/eli/dir/2013/36/oj/eng

European Commission. (2018). Final results of the Eurobarometer on fake news and online disinformation [Report]. European Commission. https://ec.europa.eu/digital-single-market/en/news/final-results-eurobarometer-fake-news-and-online-disinformation

European Commission. (2020). Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee, and the Committee of the Regions—On the European democracy action plan. COM(2020) 790 final. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=COM%3A2020%3A790%3AFIN&qid=1607079662423

Gesetz zur Verbesserung der Rechtsdurchsetzung in sozialen Netzwerken (Netzwerkdurchsetzungsgesetz—NetzDG), [(‘NetzDG, 2017)’] BGBI. I S. 3352, (2017). https://www.gesetze-im-internet.de/netzdg/BJNR335210017.html

House of Commons, Treasury Committee. (2009). Banking Crisis: Regulation and supervision (Report No. 14; Session 2008–09). House of Commons. https://publications.parliament.uk/pa/cm200809/cmselect/cmtreasy/767/767.pdf

Proposal for a REGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL on a Single Market For Digital Services (Digital Services Act) and amending Directive 2000/31/EC. (2020). https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52020PC0825&from=en

Proposal for a Regulation of the European Parliament and of the Council on preventing the dissemination of terrorist content online, COM(2018) 640 final. (2018). https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=CELEX:52018PC0640

Prudential Regulation Authority. (2014). PRA rulebook: Fundamental rules instrument 2014. http://www.prarulebook.co.uk/rulebook/Media/Get/308c054e-fae4-4e41-90dd-4826c139e2ae/PRA_2014_17/pdf

Footnotes

1. This should not detract from the fact that some content that falls within the broad definition of disinformation may indeed be illegal under national law, for instance disinformation that meets the standard of proscribed hate speech or which incites violence.

2. Where utilised in this article, ‘European’ refers to the legal frameworks of both the European Union and the United Kingdom.

3. This is of course notwithstanding the fact that some OSPs do engage in pre-screening of content in certain instances, particularly with respect to copyright infringement (e.g. YouTube’s Content ID) and child sexual abuse material (e.g. Microsoft’s PhotoDNA).

4. Admittedly this will soon change to an extent as, under the revised EU Audiovisual Media Services Directive, ‘video-sharing providers’ will for the first time be subject to certain principle-based editorial obligation measures. It should also be stressed that the fact that OCR systems remain outside the scope of traditional editorial obligations that apply to broadcasters is not a policy weakness per se. As Leerssen (2020) points out, unlike in the broadcasting realm users are not wholly passive with respect to open content recommender systems—in almost all cases the content served to the user is somewhat informed by those users’ explicit and implicit preference signaling.

5. The German NetzDG (2017) and the EU Terrorist Content Online regulation (COM (2018) 640 final) are two cases in point of this trend.

6. Of course, this assumes that companies and those overseeing them are committed to implementing the principles-based approach in good faith, an assumption that is interrogated in Section V.

7. For instance, firms could be obliged to submit their chosen compliance strategy to regulatory authorities for prior review; be obliged to keep formal compliance strategies ‘on file’ for ex post review; or simply be subject to ad hoc ‘spot checks’ by regulatory authorities.

8. Admittedly, there may be some discrete contexts where it may still be wise to define risk mitigation efforts in terms of content filtering and removal, particularly where specific disinformation is likely to pose serious and imminent harm to a sufficiently large number of people (e.g. disinformation encouraging the drinking of bleach as a miracle cure for COVID-19 just after a speech by an influential public figure that endorses the claim). In any case, I suspect these cases to be the exception rather than the norm.

9. Again, this is not to suggest that the problem of disinformation can be reduced to and solved by addressing interventions towards OCR systems alone. Rather, it seeks to give due regard to, and provide an effective response to, the considerable role played by such systems in compounding the broader problem.

10. As a case in point, not all financial services regulatory authorities failed in executing their oversight functions in the lead-up to the 2008 global crisis, and those that fared relatively well appear to have shared the aforementioned qualities (Black, 2014).

Once again platform liability: on the edge of the ‘Uber’ and ‘Airbnb’ cases

1. Introduction

Online (digital) platforms have been considered disruptive means of modern communication, bargaining, and social life (Busch, Schulte-Nölke, et al., 2016), bringing to light a novel understanding of pricing, production, and investment decisions (Evans, 2003). First, as professional intermediaries among their users, online platforms support new ways of interaction within communities (Morozov, 2016; de Reuver, 2018). By virtue of this feature, online platforms allow ordinary citizens to share their spare resources, which has fundamentally changed the modern economy and transformed it into a so-called ‘sharing’ or ‘collaborative’ economy (European Commission, 2016). Second, online platforms ‘internalize externalities created by one group for the other group’ (Evans, 2003, p. 332). Since online platforms bring together distinct groups of users by matching supply and demand, they create multi-sided markets where each group of users benefits from the number of actors in the other group (Hein, 2020). Thus, the more users there are on one side of a platform, the better for the other side, and vice versa.

Hence, online platforms have become powerful entities which have fundamentally changed the market structure and made it triangular: most transactions are undertaken not between a customer and a provider, but between the customer and a platform, on the one hand, and the provider and the platform, on the other (Busch, Schulte-Nölke, et al., 2016).

In the light of these changes a plethora of questions has emerged. Since online platforms often act as ‘bottlenecks to control and limit interactions in an ecosystem’ (Boudreau, 2010; Hein, 2020), the first question is whether online platforms may still be regarded as mere intermediaries or whether they should be considered suppliers or providers of goods, works and services. The second question is whether platforms as dominant market entities may be held liable to their customers for violations caused primarily by platform suppliers. Finally, the third question is whether there is a necessary link between the first and the second questions, i.e. whether the platform operator may be held liable towards its customers only where it cannot be regarded as a mere intermediary but is instead considered a supplier of the goods and services provided by the platform suppliers.

Over the last five years these questions have been raised in case law and have revealed their ambiguous nature. The most prominent cases in this respect have recently been decided by the Court of Justice of the European Union (CJEU). In its judgement of 20 December 2017 in Asociación Profesional Elite Taxi v Uber Systems Spain, SL (the ‘Uber’ case), the Court concluded that the services provided by Uber should be classified as ‘a service in the field of transport’ and excluded from information society services (Uber Spain, 2017). By contrast, in the judgement of 19 December 2019 in the Criminal proceedings against X (the ‘Airbnb’ case), the Court came to the opposite conclusion on the nature of the services provided by the respective platform: Airbnb was affirmed to be a pure intermediary providing information society services (Airbnb Ireland, 2019).

There are two things in the judgements that are of particular interest considering platform liability issues.

First, the judgements have proved that platform operators may be considered providers of services going beyond mere intermediary or information society services (hereinafter ‘material services’). With respect to private law matters, and in particular to liability issues, the question which stems from this conclusion is: shall the platform operator be held liable to its customers if it is considered a provider of material services (like Uber)? Or may it be held liable on some other grounds?

Second, the judgements have established certain criteria under which platform operators may be considered providers of material services (Opinion, 2019). Regarding liability issues, the approach taken by the CJEU raises the following question, which I propose to address in the next sections: should platform operators be held liable only when they meet the criteria established by the CJEU, or may there be some other criteria?

Although the issues of platform liability have already been raised in the literature and some attempts to answer them have been made by scholars (Busch, Dannemann, et al., 2016; Maultzsch, 2018; Twigg-Flesner, 2018), in light of the recent CJEU judgements I revisit these issues and try to find new approaches to address them. In Section 2 I provide a general overview of the concept of online platforms and the status of their users. In Section 3 I outline the nature of services provided by platforms and analyse the approaches taken in the ‘Uber’ and ‘Airbnb’ cases. In Section 4 I critically analyse whether the approaches elaborated by the CJEU in the ‘Uber’ and ‘Airbnb’ cases are applicable to liability issues and raise the main problems related to their application. In Section 5 I take a closer look at liability issues and the applicability of the approaches elaborated by the CJEU. I come to the conclusion that the analysed approaches are generally applicable to liability issues since they align with the current regulatory regime for providers of intermediary services established by Directive 2000/31/EC on electronic commerce (ECD). However, some flaws in this regime will be outlined as well.

2. The notion of ‘online platform’

The term ‘online platform’ is widely used not only in academic literature but in our everyday speech as well. While from the technical perspective platforms are usually defined merely as interfaces often embodied in products, services, or technologies (McIntyre, 2017), from the socio-economic perspective they are regarded as ecosystems containing autonomous agents that interact with each other (Hein, 2020). The term ‘online platform’ is often used interchangeably with the companies that orchestrate them (platform owners) (van Dijck, 2019). However, for the sake of both theoretical and practical clarity it is important to distinguish the notion of ‘online platform’ and the term ‘operator of an online platform’: the former is a kind of ‘virtual marketplace’ and an ecosystem comprising different agents, whereas the latter is a person or a company who runs the platform (de las Heras Ballell, 2017).

Considering the notion ‘online platform’ per se, it should be borne in mind that it is usually understood rather broadly. The term may encompass social networks, search engines, online payment systems, streaming services, online marketplaces etc. That is why for the sake of clarity I draw on Hein’s (2020) classification of platforms according to their ownership model (centralised, consortia-like and decentralised) and functionality. In the latter case two types of online platforms are distinguished: 1) transaction platforms which facilitate direct transactions between users on different sides of the platform (so-called ‘online marketplaces’), and 2) non‐transaction platforms which sell advertising on one side and sell or give away content on the other (like media platforms) (Katz, 2019). This paper will focus only on transaction platforms, thus, hereinafter the term ‘platform’ will be used in this narrow meaning.

Economists often call transaction platforms two- or multi-sided markets (Feld, 2019; Evans, 2003; Ward, 2017) since they facilitate interactions between different groups of users who have opposite purposes, matching supply on the one side with demand on the other. Thus, the typical structure of modern online platforms resembles a triangle (Busch, Schulte-Nölke, et al., 2016; Sørensen, 2018). At the ‘top’ of this triangle, as mentioned above, is the operator of the online platform, who develops (or manages the development of) the website or the app enabling users to get in contact and negotiate, drafts the contractual framework for users, and enters into various contracts with users—all of which helps to regulate the relationships between users and to defend their rights and interests (de las Heras Ballell, 2017).

The other angles of the triangle are represented by the different groups of users who join the platform. From an economic perspective, there are complementors, who contribute services, and customers, who receive and use the services produced by complementors (Hein, 2020; McIntyre, 2017). In legal literature, however, complementors are usually called suppliers (providers or business users), whereas ‘customers’ may also be called ‘consumers’ (Maultzsch, 2018; Busch, Dannemann, et al., 2016; de las Heras Ballell, 2017). Suppliers are natural or legal persons who use a platform generally for commercial purposes, i.e. they offer their goods, services, etc. and become the counterpart to the platform operator in the membership agreement. Customers are users who merely enjoy the opportunities provided by the platform operator, i.e. they buy, rent, or get access to the assets offered by the other group of users—the suppliers (or providers) (de las Heras Ballell, 2017). They may be consumers (if customers are natural persons) or corporate clients (if customers are entrepreneurs).

Noticeably, platform operators and their users are bound by contracts concluded between them via the platform. There is a contract between a customer and the platform operator, as well as a contract between a supplier and the platform operator, usually called a ‘membership agreement’ (de las Heras Ballell, 2017). There is also a direct agreement between a supplier and a customer, entered into via the platform. That is why platforms are usually described as ‘contract-based architectures’ (de las Heras Ballell, 2017).

3. The nature of services provided by transaction platforms

Services provided by modern online platforms have a sophisticated nature and are classified in different ways.

Basically, these services are identified as information society services (ISS). This concept is defined in Directive (EU) 2015/1535 of 9 September 2015 laying down a procedure for the provision of information in the field of technical regulations and of rules on Information Society services and refers to any service normally provided for remuneration, at a distance, by electronic means and at the individual request of a recipient of services. Under this definition ‘at a distance’ means that the service is provided without the parties being simultaneously present. ‘By electronic means’ signifies that the service is sent initially and received at its point of destination by means of electronic equipment for the processing (including digital compression) and storage of data, and entirely transmitted, conveyed and received by wire, by radio, by optical means or by other electromagnetic means (Directive, 2015). Finally, ‘at the individual request of a recipient of services’ means that the service is provided through the transmission of data on individual request (Directive, 2015).

If services provided by transaction platforms satisfy all the mentioned features of information society services, they may also fall within a narrower concept and be regarded as intermediary services. The latter has a different meaning under current EU secondary legislation, depending on the regulatory scope. In particular, EU Regulation 2019/1150 on promoting fairness and transparency for business users of online intermediary services, in Article 2(2), defines online intermediary services as those that (a) allow business users to offer goods or services to consumers, with a view to facilitating the initiating of direct transactions between those business users and consumers, and (b) are provided to business users on the basis of contractual relationships between the provider of those services and the business users which offer goods or services to consumers (Regulation, 2019). Thus, here the focus is on the middleman position of a transaction platform, fostering communication and the bargaining process between its users.

Meanwhile, ECD provides a different definition of intermediation services which focuses on liability issues. In particular, in ECD the concept of ‘intermediary service providers’ refers to the entities which may enjoy a so-called ‘safe-harbour regime’ and avoid liability for damage caused to their users. According to articles 12 through 14 of ECD intermediary service providers are entities which provide mere conduit, caching or hosting services and satisfy certain conditions.

Considering transaction platforms, it may sometimes be hard to determine their place within the mentioned types of intermediaries. Most often, however, they are considered hosting providers. Noticeably, this approach is taken in the Proposal for a Digital Services Act, which defines online platforms as providers of a hosting service which, at the request of a recipient of the service, stores and disseminates to the public information, unless that activity is a minor and purely ancillary feature of another service (Proposal, 2020). Therefore, in accordance with article 14 of ECD, transaction platforms may be considered hosting providers if (a) they primarily store the information provided by their users, and (b) platform users do not act under the authority or the control of the platform operator (Directive, 2000).

Although most transaction platforms are considered providers of information society services and, more specifically, of intermediary services, some platforms provide services which go beyond the concept of ISS and qualify as material services. This was initially emphasised in the European agenda for the collaborative economy (European Commission, 2016) and subsequently revealed itself in the CJEU case law, in particular in the ‘Uber’ and ‘Airbnb’ cases.

In the ‘Uber’ case, the CJEU was asked for a preliminary ruling on four questions, the most essential of which was whether the activity carried out by Uber Systems Spain was merely a transport service or an information society service. Based on a careful analysis of all aspects of this service, the CJEU came to the general conclusion that an intermediation service such as the one at issue was inherently linked to a transport service and, accordingly, had to be classified as ‘a service in the field of transport’ within the meaning of Article 58(1) TFEU. On the one hand, the CJEU confirmed that the service provided by Uber could be called an intermediation service; on the other hand, the Court emphasised that the intermediation service was absorbed in the transport service and constituted an integral part of the latter. Therefore, the service at hand could not qualify as an ISS, even though it was partly intermediary in nature (Uber Spain, 2017).

In the ‘Airbnb’ case, the CJEU was asked for a preliminary ruling on much the same issue. The Court again made a careful analysis of the essence of the service provided by the platform at hand and came to the opposite conclusion: an intermediation service such as the one provided by Airbnb Ireland could not be regarded as forming an integral part of an overall service, the main component of which was the provision of accommodation. Thus, the intermediation service at hand had to be classified as an ‘information society service’ under ECD (Airbnb, 2019).

Although the judgements fail to outline specific criteria for distinguishing ISS from material services (Chapuis-Doppler, 2020), such criteria may be found in the Advocate General’s Opinion in the ‘Airbnb’ case. In particular, two criteria are mentioned in this regard: (i) whether the platform offers services having a material content, and (ii) whether the platform exercises decisive influence on the conditions under which such services are provided (Opinion, 2019).

The first criterion asks whether the service offered via the platform was previously provided by electronic means and whether platform users had an opportunity to provide their services before the platform appeared. Where a platform has created a new supply of services by electronic means, and by virtue of this new supply users have started to provide services they were not able to provide previously, the platform may be considered as providing services having a material content. Uber is a good example in this regard. However, the Advocate General (AG) regards this criterion as indicative rather than decisive (Opinion, 2019); the second criterion, relating to the subsequent activity of the platform, is the more important one.

The second criterion asks whether the platform operator exercises decisive influence over the economically significant aspects of the services provided by platform users. The following aspects have been considered significant in this regard: the price, the quality of the services (e.g. of the vehicles), and the conditions of access to the platform and to the services provided by its users (conditions for the cancellation of orders, termination of accounts, etc.). The AG considered this criterion determinative in distinguishing whether a platform provides ISS or services having a material content.

For the sake of clarity and structure, I summarise all the mentioned options in Table 1.

Table 1: Platform services

Information society services
  • provided by a platform at a distance
  • sent and received by electronic means facilitated by a platform
  • provided at the individual request for remuneration via a platform
  • the platform has not created a new supply of services by electronic means; platform suppliers had an opportunity to provide their services before the platform appeared
  • the platform does not have a decisive influence on the conditions of the services provided by platform suppliers

Material services
  • the platform has created a new supply of services by electronic means; users have been able to provide their services only since the platform appeared
  • the platform has a decisive influence on the conditions of the services provided by platform suppliers

Intermediation services

Under ECD (focus on liability)
  • the platform operator merely stores the information provided by its users
  • platform users do not act under the authority or the control of the platform operator

Under Regulation 2019/1150 (focus on middleman position)
  • the platform operator allows business users to offer goods or services to consumers, with a view to facilitating the initiating of direct transactions between those business users and consumers
  • the service is provided by the platform operator to business users on the basis of contractual relationships

4. Formulation of a problem: approaches elaborated by CJEU in the ‘Uber’ and ‘Airbnb’ cases and liability issues

The issue of platform liability is one of the most debated in the modern literature on online platforms. With respect to transaction platforms in particular, the most contested issue is whether the operators of these platforms may be held liable towards platform customers for tortious or contractual violations caused by platform suppliers. Most scholars have answered this question in the affirmative (Busch, Schulte-Nölke, et al., 2016; Maultzsch, 2018). However, it is still unclear on which grounds platform operators may be held liable and what rationale lies at the basis of their liability.

The CJEU judgements in ‘Uber’ and ‘Airbnb’ were obviously adopted without private liability issues in mind. In both cases, the questions referred to the CJEU were initially raised in disputes concerning public law issues. In particular, the main issue in the ‘Airbnb’ case was whether the platform operator had violated the public rules on the licensing of mediators and managers of buildings. The dispute in the ‘Uber’ case, although touching on issues of unfair competition, did not go beyond public law remedies and sanctions.

Meanwhile, the conclusions reached by the CJEU in these judgements may be of particular interest for private law issues, as they seem to reach beyond the public law context in which they originated. Hence, the approaches developed by the CJEU in these cases may add new perspectives to the debate on platform liability. That said, concerning liability issues, the CJEU approaches raise new questions which merit thorough analysis.

The first question stemming from the CJEU judgements is whether platform operators may be held liable only as providers of material services (i.e. as sellers of goods, or providers of transport, courier or other services). Since the CJEU concluded that platforms’ services may have a material nature, one may assume that platform operators may be held liable towards their customers if the services they provide qualify as material services rather than information society services. However, does this mean that platform operators whose services do not qualify as material services, but rather as ISS, shall not be held liable towards their customers for the violations caused by platform suppliers? Or shall they still be held liable for such violations on some other grounds?

The second question raised by the CJEU judgements refers to conditions of platforms’ liability. Since CJEU developed two criteria under which a platform operator may be considered as a provider of the material service, may these criteria be regarded as conditions to hold platform operators liable towards their customers? Simply put, the question is whether it is correct to consider Uber in any case liable towards passengers for violations caused by taxi-drivers and, vice versa, to let Airbnb avoid liability if it satisfies the conditions of the ‘safe harbour’ regime provided by articles 12-14 of ECD.

In the next two subsections I analyse these questions in light of recent case law in various countries and approaches elaborated in European legal doctrine.

4.1. The nature of services provided by platform operators and liability issues

The first presumption based on the CJEU judgements regarding liability issues is that a platform operator may be held liable towards platform customers if the operator may be considered a provider of material services.

This approach has been widely supported in case law and in academic research. Examples may be found in recent Danish case law. For instance, in the case concerning the platform GoLeif.dk, which allowed its users to search for airline tickets, compare prices, and buy tickets, the Danish Eastern High Court concluded that the GoLeif.dk platform was directly liable to a passenger who had bought return flights from Copenhagen to Nice but, after staying in Nice, could not return to Denmark because the airline had gone bankrupt. The reasoning in this judgement is grounded in the Court’s conclusion that, although the contract was formally concluded between the plaintiff and the airline and the latter was the one who caused the damage, from the plaintiff’s perspective the operator of GoLeif.dk was the party with which the passenger had been dealing directly (Ostergaard, 2019).

Much the same reasoning can be found in two recent groundbreaking judgements of US courts of appeals concerning Amazon.com. The first case which confirms the thesis is Oberdorf v. Amazon.com Inc., in which the plaintiff sued Amazon.com for the damages caused by a defective dog collar she had bought on the defendant’s platform. The US Court of Appeals for the Third Circuit, based on a scrupulous analysis of the nature of the relationship between Amazon and its users, concluded that Amazon.com should be considered a ‘seller’ of the defective product. Thus, the platform operator should be held liable towards the plaintiff, i.e. towards the platform customer who suffered damages because of the defective product she had bought (Oberdorf, 2019; Busch, 2019). The other case, Angela Bolger v. Amazon.com LLC, concerned much the same issue: the plaintiff sued Amazon.com for the damages caused by a defective replacement laptop battery she had bought on Amazon. The California Court of Appeal emphasised that Amazon was a link in the chain of product distribution even if it was not a seller as commonly understood. And, just as in Oberdorf v. Amazon.com Inc., the Court concluded that ‘Amazon’s active participation in the sale, through payment processing, storage, shipping, and customer service, was what made it strictly liable’ (Bolger, 2020, p. 43).

On the other hand, some academics suggest taking a wider perspective on the liability of transaction platforms, under which platform operators may be held liable towards their customers even if their services do not go beyond information society services.

From this perspective, platform operators owe a duty of care towards their users which stems from the contract between the platform operator, on the one side, and platform customers, on the other (Working Group on the Collaborative Economy et al., 2016). Therefore, in the case of non-performance or defective performance of a contract by a platform supplier, the platform operator may be held liable for breach of its duty of care if it failed to ensure that the suppliers registered on its platform are reliable and that the information they give about their goods or services is accurate.

Yet another approach to determining the grounds for platform operators’ liability has been developed by the authors of the Discussion Draft of a Directive on Online Intermediary Platforms (Discussion Draft, 2016) and of the Model Rules on Online Platforms (Model Rules, 2020). Article 18(1) of the Draft (article 20(1) of the Model Rules) suggests that the platform operator should be held liable for the non-performance of the supplier-customer contract if the customer can reasonably rely on the platform operator having a predominant influence over the supplier (Discussion Draft, 2016). Presumably, the authors of the Draft do not mean to hold a platform operator liable towards its customers as a seller of goods or a provider of material services. Rather, they suggest holding the platform operator liable if, from the customer’s perspective, the operator has a special (predominant) influence over the supplier.

4.2. Criteria developed by CJEU and private liability issues

The main criterion developed by the CJEU in the ‘Uber’ and ‘Airbnb’ cases comes down to the idea that a platform operator may be considered a provider of a material service if it has a decisive influence on the economically significant aspects of the provision of services by platform users. Since this approach focuses primarily on the way the platform operator objectively arranges its relationships with suppliers of goods and services (i.e., arranges payment and rating systems, determines the content of agreements with suppliers, etc.), I will refer to it as the objective approach in what follows.

This approach has recently found wide support in US case law concerning liability issues. In particular, in Oberdorf v. Amazon.com Inc. the US Court of Appeals for the Third Circuit, among other criteria allowing Amazon.com to be held liable, noted that Amazon exerted substantial control over third-party vendors, since (a) third-party vendors could communicate with the customer only through Amazon, (b) Amazon was fully capable, in its sole discretion, of removing unsafe products from its website, (c) Amazon was uniquely positioned to receive reports of defective products, which in turn could lead to such products being removed from circulation, and (d) Amazon could adjust the commission-based fees that it charged to third-party vendors based on the risk that the third-party vendor presented (Oberdorf, 2019).

Much the same approach has been taken by the California Court of Appeal in Angela Bolger v. Amazon.com LLC. The Court emphasised:

‘Amazon is no mere bystander to the vast digital and physical apparatus it designed and controls. It chose to set up its website in a certain way […] it chose to regulate third-party sellers’ contact with its customers […] and most importantly it chose to allow the sale at issue here to occur in the manner described above’.

Based on this observation the Court concluded that Amazon should be held liable towards the plaintiff since it was an “integral part of the overall producing and marketing enterprise” (Bolger, 2020, p. 44).

However, a somewhat different approach has also been developed, which focuses not on how the platform operator objectively arranged the relationships but on how the users perceived the operator’s role in them. I will therefore call it the subjective approach to liability issues. Examples may be found in recent Danish case law. In the GoLeif.dk case, the Danish Eastern High Court concluded that the platform was directly liable to the plaintiff since the consumer could assume they were dealing with GoLeif.dk directly, and the GoLeif.dk website did not make it sufficiently clear that customers were not trading with GoLeif.dk but with the airline delivering the flight (Ostergaard, 2019). In the same vein, in a case concerning Booking.com, the Danish Western High Court concluded that the accommodation platform could not be held liable for the host’s violations, since the appellant should have understood that the accommodation contract was concluded with the accommodation provider and that Booking.com acted merely as an intermediary of the agreement (Ostergaard, 2019).

The subjective approach has also been supported by European scholars (Busch, Dannemann, et al., 2016) and formed the basis for the Discussion Draft of a Directive on Online Intermediary Platforms and for the Model Rules on Online Platforms (Model Rules, 2020). According to article 18 of the Discussion Draft and article 20 of the Model Rules, a platform operator may be held liable towards a platform customer if the customer can reasonably rely on the platform operator having a predominant influence over the supplier. Here the focus is again on how the platform appears from the customer’s perspective (Busch, Dannemann, et al., 2016; Maultzsch, 2018; Model Rules, 2020), which is typical of the subjective approach.

Thus, there are two basic approaches to determining whether a platform operator is liable towards platform customers – the objective and the subjective – and both approaches find support among scholars and courts. In the next section I will analyse which approach better fits the liability issues discussed here.

5. The approaches developed in the ‘Uber’ and ‘Airbnb’ cases and the theory of civil liability

When trying to extrapolate the approaches developed in the ‘Uber’ and ‘Airbnb’ cases to private liability issues, it is crucial to analyse them from the perspective of the fundamental theory of private liability.

To make the analysis in this section more illustrative, let us first list the most common violations causing damage to customers of transaction platforms. It should be borne in mind that, unlike sharing or streaming platforms, where the posting of illegal content is the most widespread violation, transaction platforms may deal with a wide range of wrongful acts performed by platform suppliers.

First, as in both cases concerning Amazon.com, the violation may come down to the distribution of defective products injuring buyers or damaging their property. Here we are dealing with product liability.

Secondly, the violation may be a breach of a contract between a supplier on one side and a customer on the other, for instance, the sale of goods of low quality, late delivery of goods, or undue performance of service agreements (such as the late arrival of a taxi). Thirdly, the violation may come down to fraudulent activity via a platform, where a supplier pretends to offer goods or services but intends only to take money from customers without any performance in return. In these two cases we are dealing with contractual liability. If a supplier delivers goods of poor quality or unduly performs services, the supplier breaches a contract concluded with a customer. Likewise, there is a breach of the supplier-customer contract where the supplier sets out to cheat customers by offering them goods or services that she is not going to sell or perform.

Thus, liability issues should be analysed according to the type of relationship arising from the violations listed above, and product liability issues should be considered separately from issues concerning contractual liability.

5.1. The ‘Uber’ and ‘Airbnb’ judgements and product liability

Product liability is a kind of tort liability for the production and distribution of defective products. Although liability issues are traditionally a matter of national legislation, basic rules on product liability are harmonised at the Union level in the Product Liability Directive (Directive 85/374/EEC).

The main idea of the Directive is that the producer of a defective product should be strictly liable for the damage caused by the product. Thus, the Directive places liability for defective products on their producers (Art. 1). However, where the producer of the product cannot be identified, each supplier of the product shall be treated as its producer unless he informs the injured person, within a reasonable time, of the identity of the producer or of the person who supplied him with the product (Art. 3(3)).

The Directive thus determines the persons liable towards customers rather rigidly and leaves no room for persons who are not literally ‘producers’ or ‘suppliers’ of a product. That is why the European Commission in its latest Report on the application of the Directive emphasised that “some of the concepts that were clear-cut in 1985, such as ‘product’ and ‘producer’ or ‘defect’ and ‘damage’ are less so today” and that “industry is increasingly integrated into dispersed multi-actor and global value chains with strong service components” (Report, 2018, n.p.).

Transaction platforms are good examples of this problem: presenting themselves as mere intermediaries between sellers and buyers, they generally may not be held liable towards customers. However, this rigid approach sometimes disturbs the fair balance between business interests and consumer protection. Due to the structural peculiarities of platforms, as well as the strong influence of platform operators on the communication between platform users, it may sometimes be extremely hard for a consumer to identify the seller of a product and to contact or sue the latter directly.

In this respect the approaches taken by CJEU in the ‘Uber’ and ‘Airbnb’ cases may serve as a roadmap. If some transaction platforms may, under certain criteria, qualify as providers of material services, then platforms serving as online marketplaces for goods may also be considered sellers of goods who are thus liable for damage caused by defective products. In this regard, objective criteria to distinguish pure intermediaries from providers of material services may also be helpful. Thus, if the service provided by a platform has created new opportunities for sellers, and if the platform operator has a decisive influence on the economically significant aspects of the distribution of goods by platform users, the operator should be considered a seller of goods.

Notably, much the same view has been expressed precisely in product liability cases, in particular in Oberdorf v. Amazon.com Inc. and in Angela Bolger v. Amazon.com LLC. In both cases Amazon.com was considered the seller of defective products since the platform exerted substantial control (evidently synonymous with the CJEU’s ‘decisive influence’) over third-party vendors (Oberdorf, 2019), and could and did exert pressure on upstream distributors to enhance safety (Bolger, 2020).

This confirms that the approaches developed by CJEU are generally applicable to product liability cases.

However, the American courts went further by distinguishing intermediary platforms from platforms qualifying as sellers. In particular, in Angela Bolger v. Amazon.com LLC the Court paid special attention to the fact that Amazon could in a particular case ‘be the only member of that enterprise reasonably available to the injured plaintiff’, and that Amazon, like conventional retailers, ‘could be the only member of the distribution chain reasonably available to an injured plaintiff who purchased a product on its website’ (Bolger, 2020, p. 26). Thus, the availability of platform suppliers and the possibility for customers to contact them directly is no less important in this context.

Following this idea, some amendments to European secondary legislation on product liability may be suggested. The current wording of Article 3 of the Product Liability Directive, which identifies the persons who may be held liable for damage caused by defective products, will hardly help to resolve disputes concerning platform operators, since the latter are not literally suppliers or producers. This provision should therefore be revised so as to make it possible to impose product liability on a platform operator. The conditions under which the platform operator may be considered the seller may be the following: (a) the platform operator has a decisive influence on the economically significant aspects of the distribution of goods, and (b) the operator is the only member of the distribution chain reasonably available to the buyers.

5.2. The ‘Uber’ and ‘Airbnb’ judgements and the contract liability for the breach of contracts between platform users

Private liability issues concerning platform operators may stem from the sale of goods of low quality, the undue performance of services, fake offers of goods or services to customers by the supplier, etc. With respect to these types of violations, several options concerning the liability of platform operators may be suggested.

Option 1. A platform operator is liable as the party to the contract with the customer

Unlike the product liability analysed in the previous subsection, here we deal with contract liability. According to a general rule, a contract is binding only upon its parties (article 1.3 of the UNIDROIT Principles of International Commercial Contracts (UNIDROIT, 2016); article II.–1:103(1) of the Draft Common Frame of Reference (Bar et al., 2008)), and thus only a party to the contract (an obligor) may be held liable for its breach.

From this perspective, a platform operator generally may not be held liable for the breach of a contract, since the contract is concluded between platform users. However, the conclusion will be the opposite if the platform operator is considered a party to the contract concluded with the platform customer, which is possible if the operator qualifies as the seller of goods or the provider of the respective services.

Although this presumption may seem counterintuitive at first, it has a rationale, which is confirmed by CJEU case law, in particular in the case of Ms Sabrina Wathelet and the Bietheres & Fils SPRL garage (the ‘Wathelet’ case). In this case Ms Wathelet purchased a second-hand vehicle from the Bietheres garage. Although Ms Wathelet thought she had bought a vehicle belonging to the Bietheres garage, the vehicle in fact belonged to Ms Donckels, herself a private individual, and the garage acted on behalf of the latter. Later the vehicle broke down and was taken by Ms Wathelet to the Bietheres garage to be repaired free of charge, but the garage refused to repair it under guarantee since it was not the seller of the vehicle, but merely an intermediary. The main issue of the case was therefore whether Ms Wathelet could exercise the right, established by Directive 1999/44/EC of the European Parliament and of the Council of 25 May 1999 on certain aspects of the sale of consumer goods and associated guarantees, to require the seller (the Bietheres garage) to repair the vehicle she bought. The CJEU was asked whether the term “seller” under Directive 1999/44 must be interpreted as covering not only a trader who, as seller, transfers ownership of consumer goods to a consumer, but also a trader who acts as intermediary for a non-trade seller.

The Court noted that “the concept of ‘seller’ can be interpreted as covering a trader who acts on behalf of a private individual where, from the point of view of the consumer, he presents himself as the seller of consumer goods under a contract in the course of his trade, business or profession” (Wathelet, 2016, n.p.). In this vein the Court concluded that in circumstances ‘in which the consumer can easily be misled in the light of the conditions in which the sale is carried out, it is necessary to afford the latter enhanced protection’, and that ‘therefore, the seller’s liability, in accordance with Directive 1999/44, must be capable of being imposed on an intermediary who, by addressing the consumer, creates a likelihood of confusion in the mind of the latter, leading him to believe in its capacity as owner of the goods sold’ (Wathelet, 2016, n.p.).

This approach may be applied to transaction platforms. However, certain additional issues should also be taken into account. The recently adopted Directive 2019/2161 modernising European Union consumer protection rules (Directive, 2019) establishes additional information requirements for online marketplaces. In particular, it supplements Directive 2011/83/EU on consumer protection with Article 6a, which, among other things, requires providers of online marketplaces to provide the consumer, in a clear and comprehensible manner, with information on how the obligations related to the contract are shared between the third party offering the goods, services or digital content and the provider of the online marketplace, and with information on whether the third party offering the goods, services or digital content is a trader or not. Therefore, the fact that the provider of the online marketplace (i.e., the platform operator) (a) informs platform customers that contractual obligations are shared between him and platform suppliers, or (b) fails to inform customers that only platform suppliers carry out all the contractual obligations, or provides this information in an inappropriate manner, may also indicate that from the consumers’ point of view the operator presents himself as the platform supplier.

The conclusion expressed in the ‘Wathelet’ case is generally in line with the basic approach expressed in the ‘Uber’ and ‘Airbnb’ judgements. Moreover, it may be regarded as a link between the ‘Uber’ (‘Airbnb’) judgements, which address only the issue of the nature of services provided by platform operators, and the private liability issues touched upon in the ‘Wathelet’ case.

However, the criteria under which an intermediary may be considered a seller or a provider of material services in the ‘Wathelet’ case, on the one hand, and in the ‘Uber’ and ‘Airbnb’ cases, on the other, are obviously different. The criteria in the ‘Uber’ (‘Airbnb’) cases, as mentioned above, are objective, since they are based on the objective nature of the relationships between a platform operator and platform users. The criteria developed in the ‘Wathelet’ case, however, are of a subjective nature, since they focus on how a consumer perceives the role of the counterparty and whether the consumer has enough information to know that the contract is concluded with an intermediary, not with the seller directly.

When considering the private liability of platform operators, the subjective criterion developed in the ‘Wathelet’ case, as well as in other judgements of national courts (e.g., the judgement of the Danish Eastern High Court on the GoLeif.dk platform, see Ostergaard, 2019), seems to be the more relevant one 1.

Therefore, since the liability of platform operators is a private law issue, the subjective criteria seem to be more relevant: the focus should be on whether the platform customer could reasonably consider the platform operator to be a party to the sales contract. This does not, however, downplay the significance of the objective criteria, which may be indicative in this regard. It is impossible to determine whether a customer could reasonably consider the platform operator as a counterparty to the sales contract without taking into account the architecture of the platform, the relationships between the platform operator and platform users, etc. In this regard, whether the platform operator has observed the information duties mentioned above under the newly adopted Directive 2019/2161 modernising European Union consumer protection rules should also be taken into account.

Option 2. A platform operator is liable for negligence

Unlike the previous option, which is based on the presumption that platform operators may be held liable as parties to supplier-customer contracts, this option stems from the presumption that platform operators may be held liable even if they cannot be considered parties to those contracts, i.e., on a non-contractual basis.

In most countries around the world, the law of non-contractual obligations gives rise to duties of careful conduct in relation to the legally protected interests of others. The concept is based on the standard of care which must be exercised, under the circumstances of the case, by a reasonably prudent person (Bar et al., 2008) 2. What follows from this standard is that if a person fails to exercise reasonable care, they act negligently and are thus liable for damage caused to the injured person (BGB, article 276).

Extrapolating this concept to platforms, it may be assumed that if a platform supplier fails to duly perform the supplier-customer contract, the platform operator may be held liable towards the platform customer where the operator has failed to exercise her duty of care. Indeed, the platform operator is responsible for arranging a safe and well-ordered online marketplace which should be available only to conscientious and prudent users (both suppliers and customers). Accordingly, when the operator violates this duty by making the platform available to rogue suppliers, the operator may be held liable under tort law for the damage caused to its users by this violation.

However, the fact that a platform supplier has breached a contract concluded with a platform customer does not in itself mean that the platform operator has failed to exercise the duty of care. A key element of this duty is that it must be reasonable to hold a person liable under the circumstances. If some of the suppliers registered on the platform do not duly perform their contractual obligations, this can hardly be a reason to blame the platform operator for a breach of the duty of care, since the operator is not able to predict which of the users will be prudent suppliers and which will not.

Moreover, the platform operator will generally rely on the ECD rules providing liability exemptions for hosting providers. According to Article 14 of the ECD, a platform operator may not be held liable if it satisfies at least one of the conditions mentioned in that article. The first condition is that the platform operator does not have actual knowledge of illegal activity or information and, as regards claims for damages, is not aware of facts or circumstances from which the illegal activity or information is apparent (constructive knowledge) (Baistrocchi, 2002). The other condition is that, upon obtaining such knowledge or awareness, the platform operator acts expeditiously to remove or to disable access to the information (Directive, 2000).

Having regard to the current wording of article 14 and to the nature of transaction platforms, it may be concluded that in most cases platform operators will be able to prove that they meet these conditions and will thus be exempted from liability.

First, a platform operator will in most cases easily prove that it did not have actual or constructive knowledge of the illegal activity of its users. According to article 15 of the ECD, no general obligation to monitor the information or activity of platform users is imposed on platform operators. Thus, the traditional systems for assessing user activity widely used by modern platforms, such as rating systems and comments, do not per se evidence that the platform operator has had knowledge of illegal activity or information placed on the platform. These systems generally do not convey information about platform users to the platform operator directly. On the contrary, they are created for platform users in the first place and allow the latter, not the platform operator, to assess the activity of platform suppliers.

Second, it is questionable whether the undue performance of contracts by platform suppliers may qualify as an illegal activity under article 14 of the ECD. Generally, undue performance (or a lack of performance) qualifies as a breach of obligations stemming from a contract, not from statutory provisions. Only if the supplier carries out fraudulent activity via the platform (e.g., places fake offers of goods or services) may this activity be considered illegal.

Third, in practice it will be hard to prove that the platform operator failed to expeditiously remove certain information from the platform. Again, since there is no general obligation to monitor the information, nor a general obligation to seek facts or circumstances indicating illegal activity, it is hard to prove when exactly the platform operator obtained knowledge and when it was obliged to remove the information stored on the platform.

However, despite the obstacles mentioned above, in certain cases it may still be possible to hold platform operators liable towards platform customers.

For example, if the supplier persistently or only occasionally fails to duly perform contractual obligations (delivers goods or services late, distributes goods of low quality, etc.), but customers are notified of these facts by the rating system and the comments of other users, it is their choice to enter into a contract with the supplier. In this situation it cannot be said that the platform operator has failed to exercise the duty of care.

However, if the supplier fails to duly perform contractual obligations, but this is not reflected in the rating or in comments because the platform operator has been deleting or amending them (even automatically), this will evidence that the platform operator has had actual knowledge and has taken an active editorial role. Thus, according to the CJEU reasoning in Google France SARL and Google Inc. v Louis Vuitton Malletier SA, in this situation the platform operator may not be exempted from liability (Google France, 2010). From the tort law perspective, the operator will be considered as one who has failed to exercise his duty of care. This conclusion is also supported by the recently adopted Directive (EU) 2019/2161 modernising European Union consumer protection rules (Directive, 2019), which, among other things, supplements Annex I of the Unfair Commercial Practices Directive 2005/29/EC (‘Commercial practices which are in all circumstances considered unfair’) with new types of practices. In particular, providing search results in response to a consumer’s online search query without clearly disclosing any paid advertisement or payment specifically for achieving a higher ranking, as well as submitting, or commissioning legal or natural persons to submit, false consumer reviews in order to promote products, are both considered unfair commercial practices. Thus, if a platform operator uses these practices, it may not be exempted from liability under the ‘safe harbour’ provisions of the ECD.

The solutions outlined above, unfortunately, do not entirely fit into the current regulatory regime established by the EU Directive on electronic commerce (ECD). However, they are necessary to strike a balance between the interests of platform users and platform operators. Thus, article 14 of the ECD needs to be amended. For example, the terms ‘illegal information’ and ‘illegal activity’ should be replaced with broader terms, such as ‘harmful’, ‘deceptive’ or ‘misleading’, which would allow platform users to seek redress where a platform operator constantly breaches its duty of care and ignores customer reports and comments about the undue activity of suppliers registered on the platform. Moreover, the provisions of article 14 of the ECD should be balanced with the aforementioned provisions of Directive (EU) 2019/2161 modernising European Union consumer protection rules. These suggestions may be taken into account when discussing the provisions of the recently introduced Proposal for a Digital Services Act.

Conclusion

Recent CJEU judgements in the ‘Uber’ and ‘Airbnb’ cases have brought to light a number of issues concerning transaction platforms. Although these judgements were adopted in disputes concerning primarily public law issues, the approaches developed by CJEU may be used to resolve issues concerning the liability of platform operators as well.

With respect to product liability, the CJEU approaches are decisive in determining when a platform operator may be held liable for damage caused by a defective product distributed via the platform. Based on these approaches, the operator bears liability towards customers when it may be considered a seller of the defective product. In turn, the platform operator may be considered the seller where it has a decisive influence on the conditions of negotiation and communication between platform users. In this regard, the fact that the platform operator is the only member of the distribution chain reasonably available to the buyers may be indicative.

Considering liability for the breach of supplier-customer contracts by a platform supplier, the CJEU approaches may also be applied, however, with certain exceptions and modifications. The platform operator may be held liable for the breach of the contract if it qualifies as a seller or a provider of the material services, and thus as a party to the contract concluded with the customer. This approach stems directly from the ‘Uber’ and ‘Airbnb’ judgements. However, in the cases described, the criteria by which the operator may be considered a party to the supplier-customer contract differ from the criteria developed by CJEU. Unlike the objective criteria established by CJEU, with respect to contract liability issues the focus should be on subjective criteria. Thus, the platform operator may be held liable if, from the consumer’s perception, the operator is a seller or a provider of the material services and thus a party to the contract.

However, this is not the only basis on which to hold the platform operator liable for the breach of a supplier-customer contract. Even if the platform operator does not qualify as a party to this contract, it may still be held liable towards customers for failure to exercise the duty of care. The main conditions in this regard are: (a) the operator is informed about the violations or fraudulent activity performed by a platform supplier but does not remove the respective information on the services (goods) or the supplier’s account as a whole, or (b) the operator interferes with the comments left by customers and amends them so as to create a false impression that the supplier is a prudent and honest platform user. However, to make this option possible, the amendments to article 14 of the ECD suggested in this article are needed.

References

Belk, R. (2014). You are what you can access: Sharing and collaborative consumption online. Journal of Business Research, 67(8), 1595–1600. https://doi.org/10.1016/j.jbusres.2013.10.001

Bolger v. Amazon.com, LLC, No. 37-2017-00003009-CU-PL-CTL (Court of Appeal, Fourth Appellate District. Division One, State of California 13 August 2020).

Boudreau, K. J. (2010). Open platform strategies and innovation: Granting access vs. devolving control. Management Science, 56(10), 1849–1872. https://doi.org/10.1287/mnsc.1100.1215

Bürgerliches Gesetzbuch (FRG).

Busch, C. (2019). When Product Liability Meets the Platform Economy: A European Perspective on Oberdorf v. Amazon. Journal of European Consumer and Market Law, 8(5), 173–174.

Busch, C., Dannemann, G., Schulte-Nölke, H., Wiewiórowska-Domagalska, A., & Zoll, F. (2016). Discussion Draft of a Directive on Online Intermediary Platforms. Journal of European Consumer and Market Law, 5(4), 164–169. https://ssrn.com/abstract=2821590

Busch, C., Schulte-Nölke, H., Wiewiórowska-Domagalska, A., & Zoll, F. (2016). The Rise of the Platform Economy: A New Challenge for EU Consumer Law? Journal of European Consumer and Market Law, 5(1), 3–10. https://ssrn.com/abstract=2754100

von Bar, C., Clive, E., & Schulte-Nölke, H. (Eds.). (2008). Principles, definitions and model rules of European private law. Draft Common Frame of Reference (DCFR) interim outline edition. Sellier European Law Publishers.

Chapuis-Doppler, A., & Delhomme, V. (2020, February 12). A regulatory conundrum in the platform economy, case C-390/18 Airbnb Ireland [Blog post]. European Law Blog. https://europeanlawblog.eu/2020/02/12/a-regulatory-conundrum-in-the-platform-economy-case-c-390-18-airbnb-ireland/

Cohen, J. E. (2017). Law for the Platform Economy. University of California, Davis Law Review, 51(1), 133–204. https://lawreview.law.ucdavis.edu/issues/51/1/Symposium/51-1_Cohen.pdf

Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products, 31985L0374, CONSIL, OJ L 210 (1985). http://data.europa.eu/eli/dir/1985/374/oj/eng

Smorto, G. (2017). Critical Assessment of European Agenda for the Collaborative Economy (Research Paper IP/A/IMCO/2016-10; PE 595.361). Committee on the Internal Market and Consumer Protection, European Parliament. http://www.astrid-online.it/static/upload/ep_i/ep_imco_sharing_assessment_02_2017.pdf

Dari-Mattiacci, G., & Parisi, F. (2003). The Cost of Delegated Control: Vicarious Liability, Secondary Liability and Mandatory Insurance. International Review of Law and Economics, 23(4), 453–475. https://doi.org/10.1016/j.irle.2003.07.007

de las Heras Ballell, T. R. (2017). The Legal Anatomy of Electronic Platforms: A Prior Study to Assess the Need of a Law of Platforms in the EU. Italian Law Journal, 3(1), 149–176. https://www.theitalianlawjournal.it/delasherasballell/

de Reuver, M., Sørensen, C., & Basole, R. C. (2018). The digital platform: A research agenda. Journal of Information Technology, 33(2), 124–135. https://doi.org/10.1057/s41265-016-0033-3

European Commission. (2016). Communication from the Commission to the European Parliament, the Council, the European economic and social committee and the Committee of the regions – A European agenda for the collaborative economy COM(2016) 356. https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=CELEX:52016DC0356

European Commission. (2018). Commission Staff Working Document: Impact Assessment Accompanying the document Proposal for a Regulation of the European Parliament and of the Council on promoting fairness and transparency for business users of online intermediation services. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=SWD%3A2018%3A138%3AFIN

Directive (EU) 2015/1535 of the European Parliament and of the Council of 9 September 2015 laying down a procedure for the provision of information in the field of technical regulations and of rules on Information Society services (Text with EEA relevance), 32015L1535, EP, CONSIL, OJ L 241 (2015). http://data.europa.eu/eli/dir/2015/1535/oj/eng

Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market ('Directive on electronic commerce’), 32000L0031, CONSIL, EP, OJ L 178 (2000). http://data.europa.eu/eli/dir/2000/31/oj/eng

Directive 2008/48/EC of the European Parliament and of the Council of 23 April 2008 on credit agreements for consumers and repealing Council Directive 87/102/EEC, 32008L0048, CONSIL, EP, OJ L 133 (2008). http://data.europa.eu/eli/dir/2008/48/oj/eng

Directive 2011/83/EU of the European Parliament and of the Council of 25 October 2011 on consumer rights, amending Council Directive 93/13/EEC and Directive 1999/44/EC of the European Parliament and of the Council and repealing Council Directive 85/577/EEC and Directive 97/7/EC of the European Parliament and of the Council Text with EEA relevance, 32011L0083, EP, CONSIL, OJ L 304 (2011). http://data.europa.eu/eli/dir/2011/83/oj/eng

Directive (EU) 2019/2161 of the European Parliament and of the Council of 27 November 2019 amending Council Directive 93/13/EEC and Directives 98/6/EC, 2005/29/EC and 2011/83/EU of the European Parliament and of the Council as regards the better enforcement and modernisation of Union consumer protection rules (Text with EEA relevance), 32019L2161, EP, CONSIL, OJ L 328 7 (2019). http://data.europa.eu/eli/dir/2019/2161/oj/eng

Evans, D. S. (2003). The Antitrust Economics of Multi-Sided Platform Markets. Yale Journal on Regulation, 20(2), 325–381. https://digitalcommons.law.yale.edu/yjreg/vol20/iss2/4/

Feld, H. (2020). From the telegraph to Twitter: The case for the digital platform act. Computer Law & Security Review, 36. https://doi.org/10.1016/j.clsr.2019.105378

Hein, A., Schreieck, M., Riasanow, T., Setzke, D. S., Wiesche, M., Böhm, M., & Krcmar, H. (2020). Digital platform ecosystems. Electronic Markets, 30(1), 87–98. https://doi.org/10.1007/s12525-019-00377-4

van Hoboken, J., Quintais, J. P., & van Eijk, N. (2019). Hosting Intermediary Services and Illegal Content Online: An Analysis of the Scope of Article 14 ECD in Light of Developments in the Online Service Landscape [Report]. Publications Office of the European Union. https://doi.org/10.2759/284542

Working Group on the Collaborative Economy, Koolhoven, R., Neppelenbroek, E. D. C., Santamaría Echeverria, O. E., & Verdi, P. L. (2016). Impulse paper on specific liability issues raised by the collaborative economy in the accommodation sector [Impulse Paper]. University of Groningen. https://ec.europa.eu/docsroom/documents/16946/attachments/1/translations/en/renditions/native

Boyle, J., & Jenkins, J. (2016). Intellectual Property Law & the Information Society—Cases and Materials. Center for the Study of the Public Domain. https://open.umn.edu/opentextbooks/textbooks/449

Judgement of 12 July 2011, L’Oréal SA and Others v eBay International AG and Others, C-324/09, ECLI:EU:C:2011:474, points 122, 124

Judgement of 16 February 2012, Belgische Vereniging van Auteurs, Componisten en Uitgevers CVBA (SABAM) v Netlog NV, C-360/10, ECLI:EU:C:2012:85, point 27

Judgement of 19 December 2019, Airbnb Ireland, C-390/18, EU:C:2019:1112

Judgement of 2 December 2010, Ker-Optika bt v ÀNTSZ Dél-dunántúli Regionális Intézete, C-108/09 , ECLI:EU:C:2010:725

Judgement of 20 December 2017, Uber Systems Spain SL, C-434/15, EU:C:2017:981

Judgement of 23 March 2010, Google France SARL and Google Inc. v Louis Vuitton Malletier SA, Case C-236/08, ECLI:EU:C:2010:159

Judgement of 9 November 2016, Sabrina Wathelet, Case C-149/15, ECLI:EU:C:2016:840

Katz, M. L. (2019). Platform economics and antitrust enforcement: A little knowledge is a dangerous thing. Journal of Economics & Management Strategy, 28(1), 138–152. https://doi.org/10.1111/jems.12304

Kosseff, J. (2019). First Amendment Protection for Online Platforms. Computer Law & Security Review, 35(2), 199–213. https://doi.org/10.1016/j.clsr.2018.12.002

Lahe, J. (2004). The Concept of General Duties of Care in the Law of Delict. Juridica International, IX, 108–115. https://juridicainternational.eu/article_full.php?uri=2004_IX_108_the-concept-of-general-duties-of-care-in-the-law-of-delict

Lobel, O. (2016). The Law of the Platform. Minnesota Law Review, 101(1), 87–166. https://scholarship.law.umn.edu/mlr/137/

LOI n° 2016-1321 du 7 octobre 2016 pour une République numérique (FR)

Maultzsch, F. (2018). Contractual Liability of Online Platform Operators: European Proposals and Established Principles. European Review of Contract Law, 14(3), 209–240. https://doi.org/10.1515/ercl-2018-1013

McIntyre, D. P., & Srinivasan, A. (2017). Networks, platforms, and strategy: Emerging views and next steps. Strategic Management Journal, 38(1), 141–160. https://doi.org/10.1002/smj.2596

Model Rules on Online Platforms (2019). Report of the European Law Institute. https://www.europeanlawinstitute.eu/fileadmin/user_upload/p_eli/Publications/ELI_Model_Rules_on_Online_Platforms.pdf

Oberdorf v. Amazon.com Inc, No. 18-1041 (United States Court of Appeals for the Third Circuit 3 July 2019).

OECD. (2010). The Economic and Social Role of Internet Intermediaries (Report DSTI/ICCP(2009)9/FINAL). http://www.oecd.org/digital/ieconomy/44949023.pdf

Opinion in AIRBNB Ireland UC delivered 30 April 2019 (C-390/18, ECLI:EU:C:2019:336)

Ostergaard, K., & Sandfeld Jakobsen, S. (2019). Platform Intermediaries in the Sharing Economy: Questions of Liability and Remedy. Nordic Journal of Commercial Law, 2019(1), 20–41. https://doi.org/10.5278/ojs.njcl.v0i1.3299

Baistrocchi, P. (2002). Liability of Intermediary Service Providers in the EU Directive on Electronic Commerce. Santa Clara High Technology Law Journal, 19(1), 111–130. https://digitalcommons.law.scu.edu/chtlj/vol19/iss1/3/

Pretelli, I. (2018). Improving Social Cohesion through Connecting Factors in the Conflict of Laws of the Platform Economy. In I. Pretelli (Ed.), Conflict of Laws in the Maze of Digital Platforms (pp. 17–52). Schulthess. https://ssrn.com/abstract=3328449

Proposal for a Regulation of the European Parliament and of the Council on a Single Market For Digital Services (Digital Services Act) and amending Directive 2000/31/EC COM/2020/825 final

Regulation 2019/1150 on promoting fairness and transparency for business users of online intermediation services, Official Journal of the European Union, L 186, 11.07.2019, pp. 57-80.

Report from the Commission to the European Parliament, the Council and the European Economic and Social Committee on the Application of the Council Directive on the approximation of the laws, regulations, and administrative provisions of the Member States concerning liability for defective products (85/374/EEC) COM/2018/246 final.

Sony Corp. v. Universal City Studios, 464 U.S. 417. (1984)

Sørensen, M. J. (2018). Intermediary Platforms –The Contractual Legal Framework. Nordic Journal of Commercial Law, 2018(1), 62–90. https://doi.org/10.5278/ojs.njcl.v0i1.2485

Tsvaigert, K., & Kötz, H. (1998). Vvedeniie v sravnitel’noie pravovedeniie v sfere chastnogo prava (M. Jumasheva, Trans.). Mezhdunarodnyie Otnosheniia.

Twigg-Flesner, C. (2018). The EU’s Proposals for Regulating B2B Relationships on Online Platforms – Transparency, Fairness and Beyond. Journal of European Consumer and Markets Law, 7(6). https://ssrn.com/abstract=3253115

UNIDROIT Principles 2016, Art. 1.6(2)

van Dijck, J., Nieborg, D., & Poell, T. (2019). Reframing platform power. Internet Policy Review, 8(2). https://doi.org/10.14763/2019.2.1414

Ward, P. R. (2017). Testing for multisided platform effects in antitrust market definition. The University of Chicago Law Review, 84(4), 2059–2102. https://lawreview.uchicago.edu/publication/testing-multisided-platform-effects-antitrust-market-definition

Footnotes

1. Subjective or mixed (subjective-objective) criteria and standards are generally more common in private law. In particular, a subjective standard underlies the contract theory of interpretation (a contract shall be interpreted according to the common intention of the parties, article 4.1 of the UNIDROIT Principles), the theory of mistake as a ground for the avoidance of a contract (the mistake must be of such importance that a reasonable person in the same situation as the party in error would only have concluded the contract on materially different terms, article 3.2.2 of the UNIDROIT Principles), etc.

2. This standard is mostly attributed to the German legal tradition and to common law, although mutatis mutandis it may be found in other legal traditions as well (Lahe, 2004).

News media’s dependency on big tech: should we be worried?


Australia’s News Media Bargaining Code (NMBC) and Facebook’s initial reaction to the law gave new grounds for debate about the level of power held by big tech in the media industry. This op-ed looks at what has happened and discusses the power relations and dependency dynamics of big tech, governments and news media. For Europe, the Australian case raises the question of whether it needs to worry about social media blocking access to news content.

Why did Australia decide to pass the NMBC?

On 17 February 2021 the Australian government passed the NMBC, designed to make large platforms (especially Facebook and Google) pay for hosting (local) news content and to address the bargaining power imbalance between digital platforms and Australian news businesses. The law seeks to enable fairer negotiation circumstances by forcing digital platforms to enter into negotiations with news producers over sharing ad revenues from content that appears on their platforms. While the law received strong support in the Australian Parliament, it was staunchly opposed, especially by Facebook’s and Google’s lobbying forces. Despite open letters from both companies pushing for amendments to the law, Google began entering negotiations with news organisations as of February 2021. Facebook, however, carried out its threat and blocked news publishers’ content for users worldwide overnight after the law was passed on 17 February 2021 without any amendments. Facebook blocked not only news articles, but also important government, political and emergency sources. Consequently, Australians were cut off not only from information about current affairs but also from important health information, such as updates on the COVID-19 pandemic. As reported by The Guardian, in response, Australia’s prime minister Scott Morrison wrote in a post on Facebook that its move “confirms the concerns that an increasing number of countries are expressing about the behaviour of big tech companies who think they are bigger than governments and that the rules should not apply to them”. Yet, wary of misinformation spreading through social media without credible sources posting on the platforms, Facebook and the Australian government reached an agreement, and last-minute amendments were eventually introduced and enacted on 25 February 2021.

Why did Facebook’s decision to leave have such an impact on the news media?

The motivation for passing this law was primarily to fix the power imbalance between news media and digital platforms. More specifically, the goal was to address the negotiation power imbalance between news media organisations and major digital platforms, as a “strong and independent media landscape is essential to a well-functioning democracy”. 1 Instead, the course of events reinforced big tech’s power in its relationship with the news media and the government. Therefore, the final version of the law received substantial criticism from Facebook, Google, Australian academics, and journalists alike.

In a public statement, Facebook said that “the proposed law fundamentally misunderstands the relationship between our platform and publishers”, as it is the publishers who choose to share their stories on social media or make them available to be shared by others because they get value from doing so. Facebook further pointed to its enormous impact on the news media, which rests on a deeply interwoven dependency. In fact, I observe a three-fold dependency:

  1. News media are dependent on Google and Facebook for news distribution, at least to a certain degree. Although news media are diversifying their distribution strategies, a change in Facebook’s algorithm in 2018 to prioritise content from friends and family (instead of news content) forced news media to amend their strategies for distributing content and reaching audiences. Additionally, Google’s system of “search engine optimisation” shapes how journalists write articles so that they reach a broader audience and appear in search results. This kind of dependency would likely play out similarly in Europe, as a recent report examining Germany’s media landscape (“Google the Media Patron”) highlighted. The risk of Google becoming an “infrastructural monopoly” is accentuated as “whoever sets the conditions for producing, disseminating and marketing information also has considerable leverage when it comes to content”. In short, news media rely on Google’s and Facebook’s digital infrastructure for most steps in the news production cycle, from researching data to distributing news and reaching audiences.
  2. As academic and journalist Jeff Jarvis puts it, “in every attempt to take power away from the platforms, it only gives them more”. He adds that the Australian law gave “Google the power to decide which news organisations should get money and which shouldn’t”, thereby reinforcing existing negotiation power imbalances. The NMBC requires the stakeholders to agree on a dollar price for the news content distributed by the platforms, to pay that revenue to registered news publishers, and to submit to final offer arbitration in case of a dispute between the platforms and a publisher over the value of the news content. Due to the costs and opaqueness of the process involved, big tech is likely to pick fewer yet larger outlets. As researchers Leaver and Meese argued elsewhere, small and regional media outlets are the “clear losers”: the NMBC empowered Rupert Murdoch’s media empire, while smaller outlets would probably “not see a dollar”. Facebook’s Nick Clegg also found fault with the original draft of the Australian law, as “Facebook would have been forced to pay potentially unlimited amounts of money to multinational media conglomerates (…) without even so much as a guarantee that it is used to pay for journalism, let alone smaller publishers”. In a concentrated media landscape like Australia’s, small and regional news media may not even get to negotiate with big tech; they will lose out in competition with Rupert Murdoch’s news empire and will thus not benefit from the original objective of the law. Instead, Google’s and Facebook’s power is again reinforced.
  3. The role of big tech in funding and investing in newsrooms is a central topic of debate due to the increasing financial dependency of news media, as a study analysing Google’s impact on the German news media landscape stressed. Professor Borchardt also pointed to this at the 2021 MozFest event organised by the AI, Media and Democracy Lab, emphasising that “almost every institution is linked to or funded by Google or Facebook, thus making the news media hugely dependent”. Facebook has reportedly invested US$600 million in the news industry since 2018 and plans to invest at least US$1 billion more over the next three years. Although Facebook claims that it strikes deals not only with large media outlets but also with local and regional publishers, there are serious concerns about big tech’s growing power in the news industry. News media risk becoming strategically reliant on Google and Facebook for financial survival.

Should Europe worry?

Governments worldwide have increasingly taken note that voluntary self-regulation and co-regulation efforts are not effective in curtailing big tech’s power. Australia’s attempt to regulate the power imbalance and protect news media’s economic independence reveals the complexity at hand. The extent to which the NMBC can in fact change and enforce power balances is ambiguous, given the contested benefits for small and regional outlets. It is not the first time that regulatory approaches by national governments against big tech have worked to the detriment of news media and the public, as Google’s protest against the Spanish copyright law (2020, p. 13) and blocking of Google News in Spain, as well as YouTube’s blocking of videos in Germany due to a dispute with GEMA (a government-mandated author rights organisation), reveal. Demands for levelling the playing field, enabling fairer negotiation conditions and facilitating competition and economic independence are becoming louder. The EU lawmaker is currently working on and negotiating the Digital Services Act (DSA), the Digital Markets Act (DMA) and the Democracy Action Plan (DAP), which aim to meet those demands. Despite their promising potential, the outcome remains unclear, in large part due to big tech’s lobbying impact in Brussels.

Considering the power play in Australia, and with EU lawmakers in the midst of negotiating stricter regulations for big tech, Europeans should ask: what would happen if Facebook and Google shut off access to EU-based news media? To avoid a situation similar to Australia’s, where news media concentration is amplified and free information flows and news consumption are impeded by existing dependencies, Europeans should focus regulatory and policy efforts on creating counterbalancing powers. To do so, the news media’s dependency on big tech must be broken up by supportive government policies that enhance opportunities for diverse media funding and innovation and that reform rules to avoid new forms of media concentration. Hence, Europeans should learn from the Australian case, direct policies towards enhancing media independence and media innovation, and put laws in place that facilitate a diverse media landscape to counter big tech’s dominant power.

Acknowledgement

Thank you to Professor Natali Helberger, Professor Claes de Vreese, Tomás Dodds, and Valeria Resendez.

References

Helberger, N. (2018). Challenging Diversity - Social Media Platforms and a New Conception of Media Diversity. In M. Moore, & D. Tambini (Eds.), Digital Dominance: The Power of Google, Amazon, Facebook, and Apple (pp. 153–175). Oxford University Press.

Helberger, N. (2020). The Political Power of Platforms: How Current Attempts to Regulate Misinformation Amplify Opinion Power. Digital Journalism, 8(6), 842–854. https://doi.org/10.1080/21670811.2020.1773888

Meese, J., & Hurcombe, E. (2020). Facebook, news media and platform dependency: The institutional impacts of news distribution on social media. New Media & Society. https://doi.org/10.1177/1461444820926472

Footnotes

1. Helberger (2018, p. 163) argues that balancing negotiation power between (legacy) media and large digital platforms, as well as enabling a fair level playing field, is essential to deal with platform power and to ensure media independence and diversity. Helberger (2020) adds that imbalances in opinion power, namely the ability to influence the process of public opinion formation, can pose a danger to a pluralistic media landscape (and ultimately democracy).

Black box algorithms and the rights of individuals: no easy solution to the “explainability” problem


1. Introduction

Recent advances in the development of machine learning (ML) algorithms, combined with the massive amounts of data used to train them, have dramatically changed their utility and scope of application. Software tools based on these algorithms are now routinely used in criminal justice systems, financial services, medicine, research and even in small businesses. Many decisions affecting important aspects of our lives are now made by algorithms rather than humans. Clearly, there are many advantages to this transformation. Human decisions are often biased and sometimes simply incorrect. Algorithms are also cheaper and easier to adjust to changing circumstances.

But algorithms have not proven to be a panacea. Despite promises to the contrary, several instances of bias and discrimination have been discovered in algorithmic decision-making (Buiten, 2019, p. 42), which is particularly disturbing in the case of criminal justice (Huq, 2019; Richardson et al., 2019). Of course, once discovered, such bias can be removed and algorithms can be validated as non-discriminatory before they are deployed. But there is still widespread uneasiness—particularly among legal experts—about the use of these algorithms. Most of these algorithms are self-learning, and their designers have little control over the models generated from the training data. In fact, computer scientists were formerly not very interested in studying these models because they were (and are) often extraordinarily complex (the reason they are often referred to as “black boxes”). The standard approach was that as long as an algorithm worked correctly, no one bothered to analyse how it worked 1.

This approach changed once tools based on ML algorithms became ubiquitous and began directly affecting the lives of ordinary people (Pasquale, 2015). If the decision about how many years one will spend in prison is made by an algorithm, the convicted person should have the right to know how this decision was made. 2 In other words, there is a clear need for transparency and accountability in automatic decision-making (ADM) algorithms (Larsson & Heintz, 2020).

In recent years, many published papers have addressed the interpretability (variously defined) of models generated by ML algorithms. It has been argued that interpretability is not a monolithic notion: the subjectivity of each interpretation, due to different levels of human understanding, implies that there must be a multitude of dimensions that together constitute interpretability (Chakraborty et al., 2017). However, Zachary Lipton (2018) suggests that not only is the concept of interpretability muddled, it is also badly motivated. The approval of EU Regulation 2016/679 (the General Data Protection Regulation, or GDPR) in 2016 prompted discussion of a related legal concept, the right to explanation. If this right is indeed mandated by the GDPR (in effect since 2018), then software companies conducting business in Europe 3 are immediately liable if they are not able to satisfy it.

The aim of this paper is to answer the question of whether and to what extent—given the specificity of ML systems—it is possible to provide information that would demonstrate algorithmic fairness, and as a result, compliance with the right to explanation. The first section analyses the concept of explanation within its legal as well as psychological context. We then demonstrate—using a case study of a music recommendation system—that the interpretability of “black box” algorithms is a challenging technical problem for which no solutions have yet been found. To that end, we show that models created by ML algorithms are inherently so complex that they cannot be “explained” in a meaningful way to an ordinary user of such systems. Instead, rather than looking “inside” an algorithm, we propose focussing on its statistical fairness and correctness. A promising way to achieve this goal may be to introduce event logging mechanisms and certification schemes, which are currently being used very successfully in the IT sector.
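
To illustrate what such event logging might look like in practice, here is a minimal Python sketch; it is not taken from any existing system, and the file name, field names and credit-scoring example are hypothetical. The idea is simply that every automated decision leaves an auditable trace of its inputs, the model version and the output:

```python
import json
import hashlib
from datetime import datetime, timezone

LOG_PATH = "adm_decisions.log"  # hypothetical append-only log file

def log_decision(model_version: str, features: dict, decision: str) -> str:
    """Append one decision event to the log and return its identifier.

    Each record stores what an auditor would need to review the decision:
    a timestamp, the model version, the input features and the output.
    A hash of the record serves as a simple integrity check.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "features": features,
        "decision": decision,
    }
    payload = json.dumps(record, sort_keys=True)
    record_id = hashlib.sha256(payload.encode("utf-8")).hexdigest()[:16]
    with open(LOG_PATH, "a", encoding="utf-8") as log_file:
        log_file.write(json.dumps({"id": record_id, **record}) + "\n")
    return record_id

# Example: logging a fictitious credit decision.
event_id = log_decision(
    model_version="credit-model-1.4",
    features={"income": 32000, "existing_loans": 2},
    decision="rejected",
)
print("logged decision event", event_id)
```

A log of this kind does not open the “black box”, but it gives auditors and certification bodies the material needed to check a system’s statistical fairness and correctness after the fact.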

2. What is the right to explanation

One of the goals of the GDPR was to adapt EU regulations to modern methods of data processing, such as cloud computing or big data. 4 Hence, the EU legislature introduced a number of new provisions—including the widely discussed right to data portability (de Hert et al., 2018)—and expanded existing regulations (Hoofnagle et al., 2019), such as provisions on the right to information and automated decision-making.

According to the EU data protection model, every person has the right to know both the scope of data processed about them and the purpose of such processing. Furthermore, the data controller is required to provide them with this information “in a concise, transparent, intelligible and easily accessible form, using clear and plain language” (GDPR, 2016, Art. 12(1)).

In the EU legal system, both the right to the protection of personal data and the right to privacy have been included in the catalogue of fundamental rights (CFR, 2012). It should be noted that, although the two rights are closely related, they are in fact independent. This means, in particular, that—at least within the scope of EU law—data protection laws may be infringed even if privacy has not been affected in any way. Undoubtedly, one of the main goals of establishing dedicated data protection regulations is to guarantee the rights and freedoms of individuals in the digital era and protect them from new types of threats arising from rapid technological development and the globalisation of modern IT services.

Article 22 of the GDPR is aligned with this goal; it introduces the right not to be subject to a decision made as a result of automated data processing that legally affects an individual or otherwise has a significant impact upon them. This rule was also enshrined in Directive 95/46, the GDPR’s predecessor, which was in place for over 20 years. However, since bulk algorithmic processing of personal data has developed rapidly only within the last two decades, the practical significance of this provision was limited. The situation has changed with the growth in profiling, including profiling for purposes other than advertising products and services (Data Is Power, 2017). It is worth noting that Article 22 of the GDPR does not explicitly provide for an individual’s right to an explanation of an automated decision. Instead, it sets out the general principle that an individual may object to automated decision-making (Malgieri & Comandé, 2017, p. 246).

In the case of automated decision-making, the EU legislature has extended the information obligation imposed on data controllers by introducing, in Article 15(1)(h) of the GDPR, the requirement to provide “meaningful information” about the logic involved in such decisions, taking into account the “significance and the envisaged consequences of such processing for the data subject”. It is this provision that is the source of the term “right to explanation”, though the phrase itself does not appear directly in the wording of the regulation. This interpretation is confirmed by Recital 71 of the GDPR, which states that processing based on automated decisions should always be subject to suitable safeguards, including the “right to obtain an explanation of the decision reached after such assessment and to challenge the decision”.

Hence, the question arises at the outset as to whether the right to explanation is in fact a separate (per se) right of an individual or just an element of a broader right—the right to information. Some scholars have questioned the very existence of such a right (Wachter et al., 2017), while others have pointed out that, regardless of how the right to explanation is defined, it is not “illusory” (Selbst & Powles, 2017). Undoubtedly, the right to explanation serves a specific purpose—to enable an individual to challenge the correctness of a decision that has been made by an algorithm. Without understanding what criteria and factors the decision was based on, this entitlement cannot be exercised in practice. Indeed, failure to provide a procedure to challenge the decision, including legal action, would deprive individuals of a key fundamental right—the right to a fair trial. It should be noted that, within the GDPR, only automated decisions that legally affect or otherwise significantly impact an individual are addressed. This is an important condition, the omission of which may lead to false conclusions about the legal scope of the right to explanation. However, Michael Veale and Lilian Edwards, referring to the Article 29 Working Party’s position, advise a broad interpretation of this condition, showing that commonly used online price comparison services can also have “significant effects” on individuals (Veale & Edwards, 2018, p. 401).

The term “to obtain an explanation” used in the context of an automated decision may suggest that the obligation of a controller using automated decision-making is to explain how the algorithm reaches a specific result, which, according to Article 13(1) of the GDPR, should be presented in a transparent and intelligible form, using “clear and plain language”. A significant part of the controversy surrounding the right to explanation relates precisely to the possibility of meeting this condition.

Before trying to identify the source of the difficulty, the term “explanation” in the context of decision-making needs to be clarified. Decision-making tools are based almost exclusively on classification algorithms. Classification algorithms are “trained” with data obtained from past decisions to create a model which is then used to arrive at future decisions. It is this model that requires an explanation, not the algorithm itself (in fact, very similar models may be generated by quite different algorithms).
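
To make the algorithm/model distinction concrete, the following minimal sketch (in Python, using scikit-learn and entirely hypothetical loan data) shows that the learning algorithm is generic code, while the fitted model is the specific artefact that actually makes decisions and would be the object of any explanation.

```python
# Minimal sketch of the algorithm/model distinction (hypothetical loan data).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))            # e.g. income, debt ratio, years employed
y = (X[:, 0] - X[:, 1] > 0).astype(int)  # past accept/reject decisions

algorithm = DecisionTreeClassifier(max_depth=4)  # the generic learning algorithm
model = algorithm.fit(X, y)                      # the specific model it produces

# It is this fitted model, not the DecisionTreeClassifier code, that decides:
print(model.predict([[0.2, 0.5, 1.0]]))
```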

When a user submits their information to a decision-making tool, an answer is generated—such as a number, a yes/no answer, or a category such as “high risk”. From the wording of Recital 71 (which states that the user has the right to challenge the decision) it is clear that the right to explanation is provided for cases where the answer given by the tool is different from what the user expected or hoped for. The most straightforward question an individual may then ask is: “Why X?”. When the user asks “Why X?”, having expected a different answer (“Y”), they in fact mean to ask: “Why X rather than Y?”. This type of question calls for a contrastive explanation (Miller et al., 2017). The answer that needs to be provided to the user must contain not only the explanation as to why the information provided by the user generated answer X, but also what information must change in order to generate answer Y (the one the user was expecting).
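
A toy illustration of such a contrastive (“why X rather than Y?”) explanation is sketched below: it simply searches for the smallest change to a single input feature that flips the model’s answer. It reuses the hypothetical fitted `model` from the previous sketch; real counterfactual explanation methods are considerably more sophisticated.

```python
import numpy as np

def contrastive_explanation(model, x, desired, step=0.1, max_steps=50):
    """Search for a one-feature change that flips the prediction to `desired`."""
    for i in range(len(x)):                     # try each feature in isolation
        for direction in (+1.0, -1.0):
            x_new = x.astype(float).copy()
            for _ in range(max_steps):
                x_new[i] += direction * step
                if model.predict([x_new])[0] == desired:
                    return i, x_new[i] - x[i]   # feature index and required change
    return None                                 # no simple contrastive answer found

x = np.array([0.2, 0.5, 1.0])
current = model.predict([x])[0]
other = 1 - current
print("model says:", current)
print("to obtain", other, "instead, change:", contrastive_explanation(model, x, desired=other))
```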

When people ask “Why X?”, they are looking for the cause of X. Thus, if X is a negative decision for a loan application, an answer would need to specify what information in an application (the so-called “features” used as input in the model) caused X. It should also be remembered that the decision-making tool making a decision for a user is replacing a human that used to make such decisions. In fact, a person reporting a decision to the user may not clearly state that the decision is the verdict of an algorithm (judges in the US routinely use software-based risk assessment tools to help them in sentencing). The user may thus expect that the explanation provided uses the language of social attribution (Miller et al., 2017), that is, explains the behaviour of the algorithm using folk psychology.

3. A case study: Building a music recommendation system

As argued in the introduction, algorithm interpretability is a challenging task for their designers. Three barriers to the transparency of algorithms in general are usually distinguished: (1) intentional concealment, whose objective is the protection of intellectual property; (2) lack of technical literacy on the part of users; (3) intrinsic opacity, which arises from the nature of ML methods. A right to explanation is probably void when trade secrets are at stake (see Recital 63 of the GDPR; see Article 29 Working Party, 2017, p. 17), but the other two barriers still need to be addressed. In fact, these two barriers depend on each other: the complexity of ML methods positively correlates with the level of technical literacy required to comprehend them.

The most obvious solution to the second barrier would be implementing educational programmes aimed at transferring knowledge about the functioning of modern technologies. This could be achieved with stronger education programmes in computational thinking, and by providing independent experts to advise those affected by algorithmic decision-making (Lepri et al., 2018). The effectiveness of this solution, however, is questionable: even if it were possible to improve technical literacy education (which seems very unlikely given previous experience in this area), that still leaves 80% of the population who completed their education many years ago.

As a solution to the last barrier, namely, the lack of transparency relating to the nature of ML methods, some sort of evidence gathering based on registering the key parameters of the algorithm should be sufficient (Wachter et al., 2017). Indeed, collecting this type of data would certainly help to understand how a system arrived at a specific decision. That said, it would still be completely unrealistic to expect a layperson to grasp these concepts.

Over the last few years much work has been done on “black box” model explanation. Some of this work (Adler et al., 2016; Baehrens et al., 2010; Lou et al., 2013; Montavon et al., 2018; Simonyan et al., 2014; Vidovic et al., 2015) has been aimed specifically at experts. The interpretability of a model is a key element of a robust validation procedure in applications such as medicine or self-driving cars. But there has also been some innovative work on model explanation alone (Datta et al., 2016; Fong & Vedaldi, 2017; Lakkaraju et al., 2019; Ribeiro et al., 2016, 2018; Shrikumar et al., 2016; Tamagnini et al., 2017; Yosinski et al., 2015; Zintgraf et al., 2017). Most of these papers are addressed to experts, with the aim of providing insights into the models they create or use. In fact, only in the last three papers mentioned above were explanations tested on people, and even then a certain level of sophistication was expected on their part (from the ability to interpret a graph or bar chart to completing a postgraduate course on ML). Most importantly, though, all of these works provide explanations of certain aspects of a model (for example, showing what features or attributes most influence the decision of an algorithm). None of them attempt to explain fully the two contrasting paths (“why X rather than Y”) in a model that lead to distinct classification results (which, as stated above, is necessary for a contrastive explanation).

Indeed, explaining the black box model of an ADM algorithm is much harder than is normally assumed. To illustrate this point, we describe in this section recent work we were involved in (Shahbazi et al., 2018) on designing a song recommendation system for KKBOX, Asia’s leading music streaming service provider.

KKBOX provided a training data set that consisted of information from listening sessions for each unique user-song pair within a specific timeframe. The information available to the algorithm included information about the users, such as identification number, age, gender, etc., and about the songs, such as length, genre, singer, etc. The training and test data were selected from users’ listening history in a given time period and contained around 7 and 2.5 million unique user-song pairs respectively.

The quality of a recommendation system’s predictions relies on two principal factors: predictive features available from past data (for example, what songs the user has listened to the most) and an effective learning algorithm. Very often, these features are only implicit in the training data and the algorithm is not able to extract them by itself. Feature engineering is an approach that exploits the domain knowledge of an expert to extract from the data set features that should generalise well to the unseen data in the test set. The quality and quantity of the features have a direct impact on the overall quality of the model. In this case, certain statistical features were created (or extracted, because they were not explicitly present in data), including the number of sessions per user, the number of songs per session and the length of time a user had been registered with KKBOX.
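
The sketch below illustrates, under heavily simplified assumptions, the kind of feature derivation described above: statistics such as sessions per user, songs per session and registration tenure are computed from raw listening logs. The column names and the pandas pipeline are hypothetical and are not those of the KKBOX data set or of the cited work.

```python
import pandas as pd

# Hypothetical raw listening logs and user table (not the real KKBOX schema).
logs = pd.DataFrame({
    "user_id":    [1, 1, 1, 2, 2],
    "session_id": [10, 10, 11, 20, 20],
    "song_id":    ["a", "b", "c", "a", "d"],
})
users = pd.DataFrame({
    "user_id": [1, 2],
    "registration_date": pd.to_datetime(["2015-01-01", "2017-06-01"]),
}).set_index("user_id")

# Derived features that are only implicit in the raw data.
derived = logs.groupby("user_id").agg(
    n_sessions=("session_id", "nunique"),
    n_plays=("song_id", "size"),
)
derived["songs_per_session"] = derived["n_plays"] / derived["n_sessions"]
users["days_registered"] = (pd.Timestamp("2018-01-01") - users["registration_date"]).dt.days

features = derived.join(users["days_registered"])
print(features)
```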

As a result, the number of features available to the algorithm was increased by a factor of about 10, to 185. And this is the key point: some of these derived features turned out to be extremely important in determining a user’s taste in songs and, as a result, the recommendation that was provided. But it should be emphasised that none of these features were explicitly present in the original data. The paradox is that if someone asked for an explanation of how the model worked, the answer would have to be based on features not present in the source data.

But this is only part of the story. The solution provided did not use a single algorithm to make a prediction. In total, five different algorithms were used, all of them very complex. Thus, here is another key point: the final model was the weighted average of all five models’ predictions. Again, it should be stressed that the result was not the outcome of just one algorithm. Figure 1 shows the complexity of one of these algorithms in the form of a simplified neural net structure. 5
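
A minimal sketch of this kind of blending step is shown below: the final score is a weighted average of the probabilities predicted by several fitted models. The models, weights and helper function are placeholders for illustration, not the actual ensemble used in the cited work.

```python
import numpy as np

def blend_predictions(models, weights, X):
    """Weighted average of binary-class probabilities from several fitted models."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                               # normalise weights
    probs = np.stack([m.predict_proba(X)[:, 1] for m in models])  # (n_models, n_samples)
    return w @ probs                                              # blended score per sample
```

Because each constituent model can itself be a deep network or a boosted forest, the blended score has no single decision path that could simply be reported back to the user.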

Figure 1: Structure of one of the algorithms used in the recommendation system (Shahbazi et al., 2018)

The model that was generated by these algorithms was also extremely large and complex. Since gradient boosting decision tree algorithms were used, the resulting model was a forest of such decision trees. 6 The forest contained over 1,000 trees, each with 10-20 children at each node and at least 16 nodes deep. 7
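
To give a sense of scale, the rough sketch below (on a synthetic data set, using scikit-learn rather than the tooling of the cited work) trains a modest gradient-boosted model and counts its trees and decision nodes; even this small configuration typically produces hundreds of trees and tens of thousands of nodes.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-in for a real training set.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
gbm = GradientBoostingClassifier(n_estimators=300, max_depth=6).fit(X, y)

n_trees = sum(len(stage) for stage in gbm.estimators_)
n_nodes = sum(t.tree_.node_count for stage in gbm.estimators_ for t in stage)
print(f"{n_trees} trees, {n_nodes} decision nodes in total")
```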

The question arises as to how a user can understand this model. One can begin by assuming that a user wants an explanation for why song X was recommended rather than song Y. There will be multiple trees with the X recommendation as well as the Y recommendation. But which one offers the right choice? These multiple trees cannot be generalised as this has already been done by the algorithm (one of the most difficult aspects of algorithms based on decision trees is their optimisation, that is, generating the simplest, most general trees). Indeed, an ordinary user would not be able to comprehend the model, let alone understand an explanation that uses vocabulary entirely foreign to them. It is up to the experts to verify the explanation and convey this verification to the user.

The ADM models are often even more complex than the system described above. Machine learning is heuristics-driven and no one expects rigorous mathematical proofs of the correctness of its algorithms. What often happens is that, if a model generated by an algorithm does not correctly classify the test data, a designer will place another algorithmic layer on top of it in the hope that it improves the results. Sometimes it does but at this point no one would be able to explain why this had happened. As Ali Rahimi put it in a recent keynote talk at the Conference on Neural Information Processing Systems (Rahimi, 2017, n.p.): “Machine learning has become alchemy (…) many designers of neural nets use technology they do not really understand”. If the people who design these algorithms do not understand them, how can anyone else?

4. Who needs the explanations anyway?

The juxtaposition of legal requirements arising from the GDPR with the specificity of ML systems has led to serious doubt about the actual usefulness of the right to explanation of an automated decision. Proponents of the view that the right to explanation is useless in the world of machine learning systems highlight two important arguments: one of a technological nature and the other of a social nature. First of all, as stated above, the way ML systems work makes it difficult (or even impossible) to present the criteria used by an algorithm when resolving a given case. It should be remembered that the decisions made by ML systems largely depend on the data used in the system learning process (this is related to the so-called incremental effect). 8 This conclusion is based not only on the presumption that understanding algorithms is too difficult for people, but also on the fact that, in general, the way algorithms operate and process information is qualitatively different from how humans operate and process information and, as such, the term “interpretability” has a different meaning for people and for ML algorithms (Krishnan, 2019). However, even if the technological limitation is overcome, another problem becomes apparent: the average individual’s lack of knowledge and expertise in analysing and evaluating the very complex results of operations carried out by advanced ML algorithms, where highly specialised knowledge is needed.

The latter issue will be analysed first. It can be reduced to the following argument: It is not necessary to explain the decisions made by the algorithms because no one will understand the explanation in the first place. If this were true, the same reasoning could be applied to the problem of analysing flaws and defects related to the operation of other advanced systems and products, such as cars and airplanes. Most users do not understand how a CPU works, but they are not denied the right to determine whether it was a processor failure that caused a plane to crash. Technology is becoming more and more complex every year, and this is true not only of the IT world. Most people do not understand the medical therapies they undergo, economic processes that affect their financial position or legislation—even though they are obliged to abide by it. At the same time, if an individual considers that they have suffered harm, or that their rights have been undermined, they can take their case to court. One does not have to be a professor of medicine to claim compensation for medical malpractice. 9 The scope or existence of this right should not be contingent upon whether the wrong diagnosis was made by a medical practitioner or by an algorithm. If the court decides that expert knowledge is needed to resolve a given case, it will appoint expert witnesses to assess the evidence gathered in that case. In this way, expert witnesses can help determine the causes of a plane crash, whether medical malpractice took place, or who has liability for a leaking roof in a house. Experts familiar with modern decision-making systems should be able to analyse the results of an algorithm’s operation in the same way. 10 However, for this to be possible, individuals affected by such a decision must have the right to know how this decision was reached. Depriving them of this right would effectively condone the practice of unknown decision-makers making non-transparent decisions according to unknown criteria, with no real possibility of challenging such decisions. This is a Kafkaesque world, incompatible with the principles of a democratic society.

5. Possible (and feasible) solutions

Assuming a general consensus that an individual should be able to challenge decisions taken automatically, the next step that needs to be addressed is to overcome the technical difficulty in determining (reconstructing) the criteria that were taken into account by the algorithm while formulating its decision. This problem should not be underestimated. As illustrated in Section 3, a relatively simple recommendation system used by a music provider demonstrates that in the era of big data systems, even seemingly straightforward decisions (“which song to recommend to a user”) are made with the use of very advanced algorithms. Society expects that IT systems will work not only faster than people, but also more efficiently and effectively, which means that algorithms will be able to solve complex problems with a speed unattainable for humans, and that they will also be able to solve problems that people could not otherwise solve at all (Hecht, 2018). Algorithm predictions are made in all applications of ML systems, including those extremely critical for individuals, such as medical diagnostics (Hoeren & Niehoff, 2018). However, due to the almost complete opacity of algorithm functioning, any attempt to trace their mode of operation, even by an expert in the field, if not actually impossible, would be affected by such a large margin of error as to make any results wholly unreliable (Burrell, 2016). In order to understand the correctness of a decision, an expert or even a group of experts, would have to not only learn the logic of the algorithm but also trace previous decisions and familiarise themselves with the system’s learning (training) process. Due to the increasing complexity of this type of algorithm, the scale of this problem will only escalate.

Providing an explanation that is understandable to humans also requires assessing the quality of the data on which an algorithm is based. Classification algorithms need data to learn how to make predictions. The training set must be sufficiently large and representative of the data the system will encounter in operation. For example, the data set for the KKBOX recommendation system described in Section 3 contained information on 30,000 users, 360,000 songs and 7 million user-song pairs. One of the main sources of AI success has been the emergence of ‘big data’, that is, freely and automatically collected data widely available for anyone to use. However, it is important to note that the amount of data alone is not sufficient to generate correct predictions; the data must also be representative. In ADMs the problem may be further compounded by uncritical analysis, leading to discriminatory conclusions (Barocas & Selbst, 2016).

The data used by ADMs must therefore be validated to ensure lack of bias. Obviously, this is not an easy task. First, the data sets used by ADM systems are huge and cannot be analysed “manually”. To automate this process, the type of bias that might impact further processing should be defined in advance. Second, most of the data used by ADMs stems from past decisions made by humans, which could conceivably be biased along racial or gender lines. Therefore, when considering possible technical implementations of the right to explanation in the context of ADM, the problem of ensuring adequate quality of data should also be addressed. In short, it is necessary not only to analyse the mechanisms used for confirming the correctness of an algorithm itself, but also the existence of safeguards that ensure the processed data is trustworthy.
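
As a deliberately simplified illustration of the kind of data check implied here, the sketch below compares positive-outcome rates across a protected attribute in the historical decisions used for training. The column names and the single summary statistic are hypothetical; real bias audits involve far more than this.

```python
import pandas as pd

# Hypothetical historical decisions used to train an ADM system.
past_decisions = pd.DataFrame({
    "gender":   ["f", "m", "f", "m", "f", "m"],
    "approved": [0,    1,   0,   1,   1,   1],
})

# Approval rate per group; a large gap flags data that needs closer scrutiny.
rates = past_decisions.groupby("gender")["approved"].mean()
print(rates)
print("max disparity:", rates.max() - rates.min())
```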

There are at least two possible solutions to this problem. The first would require mandatory registration of the key parameters of those ADM systems whose decisions have legal ramifications for individuals (as in the case of Article 22 of the GDPR). The second way to validate the operation of an algorithm is not so much an attempt to trace the correctness of its decisions as a formal evaluation of the entire system through certification measures. The following sections will discuss both proposals, together with an analysis of their main advantages and limitations.

5.1. An event logging subsystem

A proven solution, used by IT system designers in cases where it was necessary to trace (reconstruct) the operation of an algorithm at a later stage, is the recording of significant processing parameters. A typical example of such a mechanism is the flight recorder, a key element used to determine the course of flight events. This proposal therefore aims to introduce an obligation to record (log) the reasons for decisions made by an ML algorithm. Proponents of such a solution highlight the ability to trace the correct operation of the system and thus the accuracy of the conclusions reached—what Margot Kaminski describes as “qualified transparency”: to provide individuals, experts and regulators with different, but appropriate, sets of information related to algorithmic decision-making (Kaminski, 2019).

The recording of relevant parameters is relatively simple to implement, does not increase the costs of deploying and maintaining the system, and does not require time-consuming validation procedures. These are important benefits because, when considering any proposals related to fulfilling regulatory requirements, one should not lose sight of their economic consequences. ML systems are mostly developed for global application. The introduction of regulations whose implementation would require significant costs to be borne by technology providers could lead to a distortion of market competition or result in providers’ relocation to jurisdictions where such regulations have not been implemented.

In addition to being straightforward to implement, the logging of system parameters can also be easily secured cryptographically to ensure the consistency and integrity of recorded data. Taking into account the type of ML system or sensitivity of data processed, logs can be maintained by a specific service provider or a trusted third party—avoiding the risk of the data being changed without authorisation. Moreover, there is no obstacle to such data being stored in systems supervised by public entities; in this way, the relevant parameters of, for example, a machine-based credit scoring system could be securely stored under the oversight of a financial market supervisor. This, in turn, opens up the possibility of introducing sector-specific requirements that would define a minimum set of parameters to be recorded by automatic decision-making systems and used for the provision of services in regulated markets. Under this approach, a person challenging the correctness of a decision taken or wishing to exercise their right to explanation of an automatic decision (Article 22 of the GDPR) would have access to the set of key parameters that influenced the final decision. In turn, the supervisory authority could have access to a wider (and more detailed) set of parameters with which it could analyse not only individual cases but also the regularity and legality of the operation of the whole system.
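
A minimal sketch of such a tamper-evident log is given below, assuming nothing about any particular ADM product: each entry records a decision’s key parameters and is chained to the hash of the previous entry, so any later modification of a record can be detected when the log is verified. All identifiers and field names are illustrative.

```python
import hashlib
import json
import time

class DecisionLog:
    """Append-only, hash-chained log of the key parameters of automated decisions."""

    def __init__(self):
        self.entries = []
        self._prev_hash = "0" * 64  # genesis value

    def record(self, decision_id, inputs, model_version, outcome):
        entry = {
            "timestamp": time.time(),
            "decision_id": decision_id,
            "inputs": inputs,              # the key parameters to be disclosed later
            "model_version": model_version,
            "outcome": outcome,
            "prev_hash": self._prev_hash,
        }
        digest = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = digest
        self._prev_hash = digest
        self.entries.append(entry)

    def verify(self):
        """Return True if no entry has been altered or removed from the chain."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev_hash"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True

log = DecisionLog()
log.record("D-001", {"income": 42000, "debt_ratio": 0.31}, "credit-model-v7", "rejected")
print(log.verify())  # True unless an entry is tampered with
```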

The solution outlined above does have its weaknesses. First of all, it cannot be applied to all types of machine learning algorithms—in particular, deep neural networks, whose feature weights and complex interactions are not directly interpretable and which therefore offer no user-interpretable parameters that could be recorded.

ML systems are also not ‘static’—with new data, the prediction model generated by an algorithm will change. As a result, the inference process will be modified (e.g. new parameters will be included or pre-existing parameters omitted) and event logging mechanisms will change as well. In traditional IT solutions, it is the main user of the system who determines the set of data to be recorded and also indicates how often such recording should be done. Both the scope of data and the frequency of ML recording are criteria which cannot be defined in advance. Practically speaking, it is the system itself (or one of its components) that should be designed to determine what parameters are to be recorded and when. However, this goes against the idea behind this safeguard—to ensure transparency. Since it is not the developer who would establish strict and unchangeable criteria for recording key parameters, but the system itself, this mechanism could also be prone to error or external manipulation. As a result, there would need to be a formal evaluation of the recording process itself. In other words, the attempt to solve the problem of the transparency of an ML system would be replaced by the problem of ensuring the transparency of the event logging subsystem.

Another limitation of this solution is the difficulty of taking the context of the analysis into account. It should be remembered that the operation of an algorithm depends not only on the input data and internal processing procedures (the results of which are also easy to save), but also on previous analyses—that is, on the whole tree of decisions made earlier. Understanding the current result of an algorithm may therefore require the review of a huge knowledge base describing previous decisions made by the system. Without this information, simply saving the current parameters used in the inference might not allow one to reconstruct (and thus verify the correctness and fairness of) the inference performed. The more an algorithm is based on machine learning mechanisms, the more this problem will complicate the use of logging as a way of ensuring system transparency.

A third limitation that needs to be discussed is the non-obvious relationship between the stored parameters and the internal logic of an algorithm. Even assuming that the two previously mentioned obstacles can be overcome, and that the recording of key parameters allows the full and precise reproduction of the initial state and results of subsequent processing steps, the problem of access to the internal logic of an algorithm will subsequently become apparent. ML systems, like other highly specialised technologies, are subject to intellectual property protection (Gervais, 2020). The effectiveness of the protection of various AI technology components is a significant problem affecting the growth of this market. Without access to the source code—and thus to the logic of an AI algorithm—even detailed parameters of its operation will not be sufficient to fully understand the decision-making process whose correctness is to be assessed.

Another issue to be clarified is the adequacy of this measure in achieving its intended purpose. In fact, advocates of the transparency of processing expect the reliability (credibility) of algorithms’ operation to be ensured. It seems, however, that ensuring the transparency of the system will not always be a sufficient guarantee of processing reliability—and thus the protection of an individual’s rights. Ensuring that processing is fair must include not only confirmation of the correctness of the processing carried out but also its compliance with legal or ethical standards. After taking into account these additional limitations, it may turn out that a properly functioning IT system, which identifies objectively correct relationships between data, cannot be considered trustworthy. It will not be possible to reveal this limitation solely by recording the processing parameters. These parameters alone will not reveal a defect relating to the external data on which an algorithm is based.

5.2. Certification frameworks

A second way to validate the operation of an algorithm is not so much an attempt to trace the correctness of its individual decisions as a formal evaluation of the entire system through certification. This approach proposes the creation of a national (or international) certification framework for machine learning systems. The purpose of such a framework would not only be to ensure that systems used to make automated decisions were designed, built and tested in compliance with applicable norms and standards, but also to make sure that their mode of operation (the reliability of decisions made) was confirmed statistically.

In the IT industry, certification mechanisms have been used for years to confirm the authenticity and integrity of software systems (Heck et al., 2010). The use of an external certification mechanism (independent of the provider or user) in relation to machine learning systems could also help to eliminate the risk of unauthorised interference in the way a system works. Furthermore, certification would not have to be mandatory—it could be an optional measure. To encourage ML system providers to participate in this framework, the legislature could introduce a number of legal presumptions based on the premise that decisions made by a certified system are correct. As with any legal presumption, a party challenging such a decision could contest it in court, but they would be required to prove the malfunction of the system. Certification would therefore be a mechanism that obviates the necessity to later prove the correctness and fairness of a system in litigation.

The proposal to introduce certification of advanced IT systems is not a new one and has already been defined, for instance, in relation to artificial intelligence (AI) systems. Matthew Scherer (2016) suggested regulating the AI market with a supervisory body that would issue certifications for AI systems (including tests of new versions of software agents). According to his proposal, certification was not a prerequisite for putting a system into operation but rather a manifestation of soft law regulation. This would provide an incentive for developers by limiting the liability for damage caused by their systems (Scherer, 2016). A similar idea was mooted 20 years earlier by Curtis Karnow. The model he proposed was simpler and primarily involved the creation of the Turing Registry (a hypothetical list of “safe” AI agents), without reference to any regulatory aspects (Karnow, 1996).

It is worth noting that the implementation of a certification framework for systems making automated decisions is a solution that can be reconciled with the current wording of GDPR provisions. An element of every formal IT system certification framework is an assessment of whether the documentation provided is complete and up to date. It can be expected that in the case of ML systems, such documentation would contain not only a technical description of the environment and the algorithms used, but also a high-level description of the system's operating principles—prepared in a simple and readable manner, compliant in this respect with Article 15 of the GDPR.

It appears, therefore, that the introduction of a certification framework may be helpful in solving both of the problems discussed above. On the one hand, this solution would take into account the specificity of ML systems and would be technically feasible; on the other, it would not require people who want to challenge automated decisions to have specialised knowledge in the field of data analysis or the structure of expert systems.

However, the proposal to use certification frameworks also requires the resolution of several important problems. Firstly, it should be remembered that different certification mechanisms are used in the IT industry. In general, they can be divided into those confirming the correctness of software development and maintenance processes (process certification) and those intended to confirm the authenticity and integrity of software (code certification) (Eloff & von Solms, 2000). In both areas, different norms and standards are used.

Code certification makes it possible to ensure that no third party has interfered with and changed the structure of the computer software. However, such certification only applies to software supplied (or implemented) by the manufacturer (developer), and therefore does not confirm lack of interference with the memory structure of the ML system being run. In particular, it does not in any way refer to the possibility of poisoning the ML logic by deliberate manipulation or feeding the system with badly prepared data. Although system certification mechanisms have been used in the IT sector for several decades, they have so far been used mainly to validate systems that process sensitive data, e.g. in the area of state security (Lipner, 2015). This is due to the simple fact that formal certification of an IT system is a very time-consuming and costly process (Kaluvuri et al., 2014). Wide application of existing certification frameworks, such as the Common Criteria (ISO/IEC, 2009), would therefore not fully reflect the needs of the ML market, and also seems problematic for commercial reasons (see generally, Mellado et al., 2007). It is difficult to imagine that European technology providers would conduct formal certification that might delay their product launch onto the market, whereas the activities of entities operating in other jurisdictions would not be limited in this way.

With regard to process certification in the IT industry, for years the reference frameworks have been the ISO/IEC 20000 and ISO/IEC 27001 families of standards (Siponen & Willison, 2009). Management systems built on their basis may be subject to formal certification. However, it should be remembered that in this scenario certification would ensure that the development, implementation and maintenance of IT systems were carried out with best practice in mind, and in a way that minimised identified risks. Moreover, management systems are part of soft law regulation, so they are mainly the source of internal requirements in the compliance area of the service provider and do not lay down legally binding obligations towards the system users. Process certification can also be used to establish a secure supply chain, in which many actors are de facto responsible for the proper operation of an ADM system. In this case, it would be possible to introduce standards dedicated to particular categories of entities, e.g. data brokers, companies responsible for data cleaning and quality assurance processes or those involved in the ADM training process. These standards could be subject to a formal evaluation of conformity by an independent external body, in a similar way to the current certification of management systems.

While certification is a good way to regulate the introduction and operation of ADM systems, there are currently no certification schemes that can be applied directly to this end. What is more, there are not even any legal regulations—at either EU or member state level—that could form the basis for introducing such certification schemes. Even Regulation 2019/881, which creates a framework for certification in the area of cybersecurity, cannot be regarded as such. The main application of the regulation is to improve the security of products used by critical infrastructure operators and digital service providers (Rojszczak, 2020). The main area of application of ML systems, in turn, is the mass consumer market. Hence, it seems that before it is possible to address in detail a future certification framework for ADM systems, it will be necessary to discuss the establishment of new EU regulations that could form the basis of such programmes.

Reuben Binns (2018) aptly notes that current approaches to fair machine learning are typically focused on interventions at the data preparation, model-learning or post-processing stages. Although certification seems to be a promising solution to the problem of confirming the correct operation of ADM algorithms, it will not overcome the significant limitation strictly related to the very nature of statistical analysis. As noted earlier, the right to an explanation is seen not only as a means of confirming the correctness of the decision but also a means of establishing the reasons for not taking the decision that the applicant had expected (the “why X and not Y?” problem presented earlier). As a result, even if a specific algorithm generates statistically correct results, which are confirmed in the certification procedure, its operation can still be questioned because an individual will be deprived of the possibility of ascertaining what circumstances determined the unexpected, or unwelcome, outcome.

6. Conclusions

Black box algorithms make decisions that affect human lives. This trend is not expected to change in the coming years. Automated decisions will be made not only on an ever-increasing scale, but also with ever-increasing intensity—as a result of which there will also be increasing public pressure to develop effective control mechanisms, including mechanisms that make it possible to challenge the decisions made in individual cases.

Numerous researchers have criticised the very concept of a right to explanation, pointing out the lack of precision of the EU legislature (Wachter et al., 2017) and questioning the usefulness of this right in practice (Edwards & Veale, 2017). Due to different definitions of and approaches to the “explainability” problem of ML systems, Cynthia Rudin (2019, p. 206) has stated that “the field of interpretability/explainability/comprehensibility/transparency in ML has strayed away from the needs of real problems.”

Today, the right to explanation of an automated decision may be perceived as one of the less important elements of the GDPR, with limited practical significance. However, this perception will change soon. ML systems are entering new areas of the economy as well as public administration. Hence, the wording and limits of applicability of the law laid down in the GDPR will undoubtedly be subject to recurrent interpretation, including interpretation by the Court of Justice of the European Union.

This would therefore seem an opportune moment to begin discussing the need for a comprehensive regulation on how ML systems are developed, implemented and supervised. Drawing on the experience of the IT sector, it seems most appropriate to introduce a regulatory model in which various types of certification mechanisms will play a leading role. The basis for such a model may be a certification scheme for ML systems—allowing for different certification schemes for systems operating in different markets. It will certainly be necessary to distinguish a specific category of systems whose decisions may affect fundamental rights and freedoms. Future legislation should also promote the use of soft law measures, such as certification based on international standards or codes of conduct, to support the development of industry standards and self-regulation mechanisms. An example of such soft law is the ISO/IEC CD 23053 (2020), a draft international standard that is intended to establish a framework for artificial intelligence systems using machine learning. Regardless of the certification, in the case of less advanced ML systems, it may be sufficient to use standardised (e.g. resulting from recommendations issued by competent supervisory authorities) procedures for recording key systems parameters. This proposal may additionally be combined with the establishment of a dedicated supervisory authority, competent to moderate the development of an AI market and—by introducing various regulatory mechanisms, including certification—ensuring their safe use (Tutt, 2016).

It should not be expected therefore that a single, universally-accepted certification scheme for ADM systems will be developed. It is also unlikely that such a uniform standard will be developed within the EU in the near future. The reason for this is not only the lack of consensus between member states on the need to establish EU regulation in this area but also the different digital maturity of individual national markets. Hence, it seems more probable that a set of different legal safeguards which can be applied in particular EU countries will be developed in order to ensure that the dynamic development of technology—including the spread of ADM—does not adversely affect the area of fundamental rights. This trend is already being observed today (Malgieri, 2019), and the problem of implementing the right to explanation of decisions taken automatically is one of the main areas of legislative activity.

References

Adler, P., Falk, C., Friedler, S. A., Rybeck, G., Scheidegger, C., Smith, B., & Venkatasubramanian, S. (2016, December). Auditing black box models for indirect influence. 2016 IEEE 16th International Conference on Data Mining (ICDM). https://doi.org/10.1109/icdm.2016.0011

Anderson, C. (2008, June 23). The End of Theory: The Data Deluge Makes the Scientific Method Obsolete. Wired. https://www.wired.com/2008/06/pb-theory/

Article 29 Working Party. (2017). Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679 (WP251rev.01).

Baehrens, D., Schroeter, T., Harmeling, S., Kawanabe, M., Hansen, K., & Müller, K.-R. (2010). How to Explain Individual Classification Decisions. Journal of Machine Learning Research, 11, 1803–1831. https://www.jmlr.org/papers/volume11/baehrens10a/baehrens10a.pdf

Barocas, S., & Selbst, A. D. (2016). Big data’s disparate impact. California Law Review, 104(3), 671–732. https://doi.org/10.15779/Z38BG31

Binns, R. (2018). Fairness in machine learning: Lessons from political philosophy. Proceedings of Machine Learning Research, 81, 149–159. http://proceedings.mlr.press/v81/binns18a.html

Burrell, J. (2016). How the machine “thinks”: Understanding opacity in machine learning algorithms. Big Data & Society, 3(1), 1–12. https://doi.org/10.1177/2053951715622512

Chakraborty, S., Tomsett, R., Raghavendra, R., Harborne, D., Alzantot, M., Cerutti, F., Srivastava, M., Preece, A., Julier, S., Rao, R. M., Kelley, T. D., Braines, D., Sensoy, M., Willis, C. J., & Gurram, P. (2017). Interpretability of deep learning models: A survey of results. In 2017 IEEE SmartWorld, Ubiquitous Intelligence Computing, Advanced Trusted Computed, Scalable Computing Communications, Cloud Big Data Computing, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI) (pp. 1–6). https://doi.org/10.1109/UIC-ATC.2017.8397411

Charter of Fundamental Rights of the European Union (2012).

Data is power: Towards additional guidance on profiling and automated decision-making in GDPR. (2017). [Report]. Privacy International. https://privacyinternational.org/report/1718/data-power-profiling-and-automated-decision-making-gdpr

Datta, A., Sen, S., & Zick, Y. (2016, May). Algorithmic Transparency via Quantitative Input Influence: Theory and Experiments with Learning Systems. 2016 IEEE Symposium on Security and Privacy (SP). https://doi.org/10.1109/sp.2016.42

Edwards, L., & Veale, M. (2017). Slave to the Algorithm? Why a ‘Right to an Explanation’ Is Probably Not the Remedy You Are Looking For. Duke Law & Technology Review, 16(1), 18–84. https://dltr.law.duke.edu/2017/12/04/slave-to-the-algorithm-why-a-right-to-an-explanation-is-probably-not-the-remedy-you-are-looking-for/

Eloff, M. M., & von Solms, S. H. (2000). Information Security Management: An Approach to Combine Process Certification And Product Evaluation. Computers & Security, 19(8), 698–709. https://doi.org/10.1016/S0167-4048(00)08019-6

Fong, R. C., & Vedaldi, A. (2017, October). Interpretable explanations of black boxes by meaningful perturbation. 2017 IEEE International Conference on Computer Vision (ICCV). https://doi.org/10.1109/iccv.2017.371

Gervais, D. (2020). Is Intellectual Property Law Ready for Artificial Intelligence? GRUR International, 69(2), 117–118. https://doi.org/10.1093/grurint/ikz025

Greze, B. (2019). The extra-territorial enforcement of the GDPR: A genuine issue and the quest for alternatives. International Data Privacy Law. https://doi.org/10.1093/idpl/ipz003

Hecht, J. (2018). Managing expectations of artificial intelligence. Nature, 563(7733), 141–143. https://doi.org/10.1038/d41586-018-07504-9

Heck, P., Klabbers, M., & Eekelen, M. (2010). A software product certification model. Software Quality Journal, 18(1), 37–55. https://doi.org/10.1007/s11219-009-9080-0

de Hert, P., & Czerniawski, M. (2016). Expanding the European data protection scope beyond territory: Article 3 of the General Data Protection Regulation in its wider context. International Data Privacy Law, 6(3), 230–243. https://doi.org/10.1093/idpl/ipw008

de Hert, P., Papakonstantinou, V., Malgieri, G., Beslay, L., & Sanchez, I. (2018). The right to data portability in the GDPR: Towards user-centric interoperability of digital services. Computer Law & Security Review, 34(2), 193–203. https://doi.org/10.1016/j.clsr.2017.10.003

Hoeren, T., & Niehoff, M. (2018). Artificial Intelligence in Medical Diagnoses and the Right to Explanation. European Data Protection Law Review, 4(3), 308–319. https://doi.org/10.21552/edpl/2018/3/9

Hoofnagle, C. J., Sloot, B., & Borgesius, F. Z. (2019). The European Union general data protection regulation: What it is and what it means. Information & Communications Technology Law, 28(1), 65–98. https://doi.org/10.1080/13600834.2019.1573501

Huq, A. Z. (2019). Racial Equity in Algorithmic Criminal Justice. Duke Law Journal, 68(6), 1043–1134. https://scholarship.law.duke.edu/dlj/vol68/iss6/1

ISO/IEC. (2009). ISO/IEC 15408-1:2009, Information technology—Security techniques—Evaluation criteria for IT security—Part 1: Introduction and general model.

ISO/IEC. (2020). ISO/IEC CD 23053, Framework for Artificial Intelligence (AI) Systems Using Machine Learning (ML) [Draft international standard].

Kaluvuri, S. P., Bezzi, M., & Roudier, Y. (2014). A Quantitative Analysis of Common Criteria Certification Practice. In C. Eckert, S. K. Katsikas, & G. Pernul (Eds.), Trust, Privacy, and Security in Digital Business (Vol. 8647, pp. 132–143). Springer International Publishing. https://doi.org/10.1007/978-3-319-09770-1_12

Kaminski, M. E. (2019). The Right to Explanation, Explained. Berkeley Technology Law Journal, 34(1), 188–218. https://doi.org/10.15779/Z38TD9N83H

Karnow, C. E. A. (1996). Liability For Distributed Artificial Intelligences. Berkeley Technology Law Journal, 11(1), 147–204. https://btlj.org/data/articles2015/vol11/11_1/11-berkeley-tech-l-j-0147-0204.pdf

Krishnan, M. (2019). Against Interpretability: A Critical Examination of the Interpretability Problem in Machine Learning. Philosophy & Technology. https://doi.org/10.1007/s13347-019-00372-9

Lakkaraju, H., Kamar, E., Caruana, R., & Leskovec, J. (2019, January). Faithful and Customizable Explanations of Black Box Models. Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society. https://doi.org/10.1145/3306618.3314229

Larsson, S., & Heintz, F. (2020). Transparency in artificial intelligence. Internet Policy Review, 9(2). https://doi.org/10.14763/2020.2.1469

Lepri, B., Oliver, N., Letouzé, E., Pentland, A., & Vinck, P. (2018). Fair, transparent, and accountable algorithmic decision-making processes. Philosophy & Technology, 31, 611–627. https://doi.org/10.1007/s13347-017-0279-x

Lipner, S. B. (2015). The Birth and Death of the Orange Book. IEEE Annals of the History of Computing, 37(2), 19–31. https://doi.org/10.1109/MAHC.2015.27

Lipton, Z. C. (2018). The mythos of model interpretability. Communications of the ACM, 61(10), 36–43. https://doi.org/10.1145/3233231

Lou, Y., Caruana, R., Gehrke, J., & Hooker, G. (2013). Accurate intelligible models with pairwise interactions. Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining - KDD ’13. https://doi.org/10.1145/2487575.2487579

Malgieri, G. (2019). Automated decision-making in the EU Member States: The right to explanation and other “suitable safeguards” in the national legislations. Computer Law & Security Review, 35(5), 105327. https://doi.org/10.1016/j.clsr.2019.05.002

Malgieri, G., & Comandé, G. (2017). Why a Right to Legibility of Automated Decision-Making Exists in the General Data Protection Regulation. International Data Privacy Law, 7(4), 243–265. https://doi.org/10.1093/idpl/ipx019

Mellado, D., Fernández-Medina, E., & Piattini, M. (2007). A common criteria based security requirements engineering process for the development of secure information systems. Computer Standards & Interfaces, 29(2), 244–253. https://doi.org/10.1016/j.csi.2006.04.002

Miller, T., Howe, P., & Sonenberg, L. (2017). Explainable AI: Beware of Inmates Running the Asylum Or: How I Learnt to Stop Worrying and Love the Social and Behavioural Sciences. ArXiv. http://arxiv.org/abs/1712.00547

Montavon, G., Samek, W., & Müller, K.-R. (2018). Methods for interpreting and understanding deep neural networks. Digital Signal Processing, 73, 1–15. https://doi.org/10.1016/j.dsp.2017.10.011

Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Harvard University Press.

Rahimi, A. (2017). NIPS 2017 Test-of-Time Award presentation. Conference on Neural Information Processing Systems. https://www.youtube.com/watch?v=ORHFOnaEzPc

Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation).

Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). “Why should I trust you?”: Explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining - KDD ’16. https://doi.org/10.1145/2939672.2939778

Ribeiro, M. T., Singh, S., & Guestrin, C. (2018). Anchors: High Precision Model-Agnostic Explanations. Thirty-Second AAAI Conference on Artificial Intelligence. https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/view/16982

Richardson, R., Schultz, J. M., & Crawford, K. (2019). Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice. New York University Law Review, 94, 15–55. https://www.nyulawreview.org/wp-content/uploads/2019/04/NYULawReview-94-Richardson_etal-FIN.pdf

Rojszczak, M. (2020). The Evolution of EU Cybersecurity Model: Current State and Future Prospects. In B. J. Pachuca-Smulska, E. Rutkowska-Tomaszewska, & E. Bani (Eds.), Public and private law and the challenges of new technologies and digital markets (Vol. 1, pp. 295–312). C. H. Beck.

Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5), 206–215. https://doi.org/10.1038/s42256-019-0048-x

Scherer, M. U. (2016). Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies. Harvard Journal of Law & Technology, 29(2), 353–400. http://jolt.law.harvard.edu/articles/pdf/v29/29HarvJLTech353.pdf

Selbst, A. D., & Powles, J. (2017). Meaningful information and the right to explanation. International Data Privacy Law, 7(4), 233–243. https://doi.org/10.1093/idpl/ipx022

Shahbazi, N., Chahhou, M., & Gryz, J. (2018). Truncated SVD-based Feature Engineering for Music Recommendation. WSDM Cup 2018 Workshop, Los Angeles. WSDM Cup 2018 Workshop, Los Angeles. https://wsdm-cup-2018.kkbox.events/pdf/2_WSDM-KKBOX_Nima_Shahbazi.pdf

Shrikumar, A., Greenside, P., Shcherbina, A., & Kundaje, A. (2016). Not Just a Black Box: Learning Important Features Through Propagating Activation Differences. ArXiv. http://arxiv.org/abs/1605.01713

Simonyan, K., Vedaldi, A., & Zisserman, A. (2014, April 19). Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps. 2nd International Conference on Learning Representations, Workshop Track Proceedings. http://arxiv.org/abs/1312.6034

Siponen, M., & Willison, R. (2009). Information security management standards: Problems and solutions. Information & Management, 46(5), 267–270. https://doi.org/10.1016/j.im.2008.12.007

Tamagnini, P., Krause, J., Dasgupta, A., & Bertini, E. (2017). Interpreting black box Classifiers Using Instance-Level Visual Explanations. Proceedings of the 2nd Workshop on Human-In-the-Loop Data Analytics, 1–6. https://doi.org/10.1145/3077257.3077260

Tutt, A. (2016). An FDA for Algorithms. Administrative Law Review, 69(1), 83–123. https://www.jstor.org/stable/44648608

Veale, M., & Edwards, L. (2018). Clarity, surprises, and further questions in the Article 29 Working Party draft guidance on automated decision-making and profiling. Computer Law & Security Review, 34(2), 398–404. https://doi.org/10.1016/j.clsr.2017.12.002

Vidovic, M. M.-C., Görnitz, N., Müller, K.-R., Rätsch, G., & Kloft, M. (2015). Opening the Black Box: Revealing Interpretable Sequence Motifs in Kernel-Based Learning Algorithms. In Machine Learning and Knowledge Discovery in Databases (pp. 137–153). Springer International Publishing. https://doi.org/10.1007/978-3-319-23525-7_9

Wachter, S., Mittelstadt, B., & Floridi, L. (2017). Why a right to explanation of automated decision-making does not exist in the general data protection regulation. International Data Privacy Law, 7(2), 76–99. https://doi.org/10.1093/idpl/ipx005

Yosinski, J., Clune, J., Nguyen, A. M., Fuchs, T. J., & Lipson, H. (2015). Understanding Neural Networks Through Deep Visualization. http://arxiv.org/abs/1506.06579

Zarsky, T. (2017). Incompatible: The GDPR in the Age of Big Data. Seton Hall Law Review, 47(4), 995–1020.

Završnik, A. (2019). Algorithmic justice: Algorithms and big data in criminal justice settings. European Journal of Criminology. https://doi.org/10.1177/1477370819876762

Zintgraf, L. M., Cohen, T. S., Adel, T., & Welling, M. (2017). Visualizing Deep Neural Network Decisions: Prediction Difference Analysis. ArXiv. http://arxiv.org/abs/1702.04595

Footnotes

1. This is how Chris Anderson summarised this approach: “Forget taxonomy, ontology, and psychology. Who knows why people do what they do? The point is they do it, and we can track and measure it with unprecedented fidelity. With enough data, the numbers speak for themselves.” (Anderson, 2008, n.p.)

2. Advanced algorithms have been used in criminal justice systems, both in the United States and increasingly in Europe (Završnik, 2019).

3. It should be remembered that, due to the so-called territorial scope of application, the provisions of the GDPR should also be applied by entities having their headquarters in third countries (that is, outside the EEA) but directing their services to the market of at least one of the member states (de Hert & Czerniawski, 2016). The issue of the cross-border application of the GDPR is another practical problem in the enforcement of EU data protection legislation (Greze, 2019).

4. It is disputable to what extent this goal has been achieved. Tal Zarsky points out that “the GDPR fails to properly address the surge in Big Data practices. The GDPR’s provisions are—to borrow a key term used throughout EU data protection regulation—incompatible with the data environment that the availability of Big Data generates” (Zarsky, 2017, p. 996).

5. Each of these steps has not been explained in detail as the key point is simply to present the complexity of the entire prediction process, not its technical aspects.

6. Nodes in a decision tree store conditions that have to be satisfied (for example, she must be under 15 years of age) if a user is to be recommended a particular song.

7. It took almost 128GB of RAM to derive the gradient boosting decision tree model and around 28 hours on 4 Tesla T4 GPUs to create the deep neural network model.

8. The incremental effect consists of changing the operation of the algorithm as a result of providing new information to the database. The algorithm “learns” on the basis of the new information, which may lead to a different interpretation of the information processed previously. Hence, the result of the algorithm is variable over time, which means that by providing the same data for analysis, different outcomes can be obtained. This leads to the conclusion that, in the case of ML algorithms, attempting to confirm their correct operation by processing the same data set at another time is not a good strategy.

9. It should be remembered that nowadays medicine is one of the main areas of application for ML algorithms (Hoeren & Niehoff, 2018).

10. Cf. the examples discussed by Jenna Burrell, which she uses to “illustrate how the workings of machine learning algorithms can escape full understanding and interpretation by humans, even for those with specialized training, even for computer scientists” (Burrell, 2016, p. 10).

A step back to look ahead: mapping coalitions on data flows and platform regulation in the Council of the EU (2016-2019)

Section I: Introduction

The objective of this article is to inform our understanding of upcoming policy developments with novel data on the policy-making process of recent EU Digital Single Market (DSM) legislative files. This study draws on a new data set collected as part of a multi-year research programme on the power of member states in negotiations of the Council of the EU (or “Council”). This novel data set includes information on the initial policy preferences and issue salience of all member states and EU institutions on the main controversial issues of the following legislative negotiations: the regulation on the free flow of non-personal data, the European electronic communication code directive, and the directive on copyright in the DSM.

During these negotiations, member states have discussed extensively the opportunity to (de-)regulate data flows, and introduce new (legal and financial) obligations on internet platforms. These policy controversies will remain at the centre of the EU’s political agenda for years to come, with the launch of negotiations on the Data Governance Act (DGA), Digital Services Act (DSA) and Digital Markets Act (DMA).

These new legislative processes need to be seen as a continuation of previous EU negotiations, and would thus benefit from being understood in light of the coalition patterns previously mobilised by member states in the Council. Often considered the most powerful institution of the EU legislative system, the Council is nonetheless known for the opacity of its policy-making processes, which has greatly limited academic attempts to uncover its inner workings (Naurin & Wallace, 2008). This research explores this black box, building on a public data set (Arregui & Perarnaud, 2021) based on 145 interviews conducted in Brussels with Council negotiators and EU officials between 2016 and 2020.

By highlighting the main controversies and coalition patterns between member states on the regulation of data flows and internet platforms as part of three negotiations, this research provides relevant analytical tools to approach future EU digital policy-making processes. It underlines in particular how the ability of certain member states to form and maintain coalitions may determine decision outcomes. Given its success in the adoption process of the free flow of data regulation, the “digital like-minded group” (or the D9+ group) could indeed be activated in the course of the next negotiations on the DGA, DSA and DMA. This paper also argues that the capacity of large member states, such as Germany and France, to formulate their policy preferences early in the process could be a key determinant of their bargaining success. Moreover, while the Council is expected to remain strongly divided regarding the regulation of data flows and internet platforms, this paper indicates why the European Parliament (EP) could have a significant role in these discussions. It also signals that member states are not equal in their capacity to engage with Members of the European Parliament (MEPs), thus suggesting that the ones with more structured channels of engagement with the EP may be more likely to be successful in upcoming negotiations.

The following section offers a brief literature review on EU negotiations and Council policy-making processes in relation to the DSM. Then, the methodological approach and data set are presented. The empirical section is divided into two parts. The first maps the constellation of preferences and issue salience of member states in the three legislative files, and the second uncovers the main coalition patterns. The findings are then discussed in light of their implications for upcoming EU negotiations.

Section II: Unpacking EU digital negotiations

In recent years, EU digital policies have attracted growing attention from scholars, partly due to the acceleration of EU legislative activities on key internet-related issues, such as data protection and cybersecurity. This trend reflects a broader pattern of states’ increasing engagement with internet governance and policy-making to exercise power in and through cyberspace (Deibert & Crete-Nishihata, 2012; DeNardis, 2014; Harcourt et al., 2020; Radu et al., 2021).

The literature has acknowledged the increasingly active role of the EU in internet policies, as illustrated by recent regulatory and policy initiatives in the fields of data governance (Borgogno & Colangelo, 2019), privacy (Ochs et al., 2016; Bennett & Raab, 2020; Laurer & Seidl, 2020), copyright (Meyer, 2017; Schroff & Street, 2018) and cybersecurity (Christou, 2019). Recent research has also investigated the nature and determinants of a 'European approach' in regulating large internet companies and safeguarding competition (Radu & Chenou, 2015), in balancing competing policy objectives such as national security and data protection (Dimitrova & Brkan, 2018), or in promoting its ‘digital sovereignty’ (Pohle & Thiel, 2020).

EU decisions have direct implications for member states, companies and citizens, but also for third countries (Bradford, 2020), as illustrated by the recent reform of the EU data protection framework (Bendiek & Römer, 2018). Reflecting its ambition to increase its 'cyber power' (Cavelty, 2018) on the international stage, the EU has progressively established cyber partnerships with third countries to engage on digital issues (Renard, 2018), building on the recognised role of the EU over the past two decades in public policy aspects of internet governance (Christou & Simpson, 2006).

But EU digital policy can also be seen as a field of struggle (Pohle et al., 2016), with major divides among governments. As emphasised by Timmers in the case of EU cybersecurity policies, the wide diversity of interests of member states can be challenging for EU policy-making (Timmers, 2018) and can thus usher in competing political dynamics in the Council. Though the literature provides refined accounts of the discourse and role of the EU in internet governance debates, political scientists’ attempts to unpack the complex political processes and controversies structuring EU digital policies, and to identify the "winners and losers" of policy developments from the perspective of national governments, remain more limited.

Exploring this gap, this study draws on recent research on the decision-making system of the EU, grounded in rational choice institutionalist analysis. Due to the intergovernmental design of the Council, a great part of the scholarship on the Council embraces a rationalist perspective, giving national decision-makers the lead role and assuming that member states determine their actions according to their national preferences and their own calculation of utility (Naurin & Wallace, 2008). This scholarship assumes that negotiation outcomes are shaped by strategic interactions between goal-seeking governments, with bounded rationality, operating within a set of institutional constraints (Lundgren et al., 2019).

It is well established that a major part of the decisions in the Council is adopted by consensus, making the voting phase of the decision-making process less relevant than the bargaining phase. This is why recent research on member states’ influence in the EU decision-making system focuses on the actual negotiation processes at play (Thomson et al., 2006; Thomson, 2011). This scholarship is primarily driven by the Decision-making in the European Union (DEU) project, followed more recently by the Economic and Monetary Union (EMU) Positions data set (Wasserfallen et al., 2019), which has led to more generalisable findings on power distribution and bargaining processes in the Council.

I will argue that these analytical tools are well suited to the study of EU negotiations on DSM policies. They offer an established methodology to map the constellation of preferences on key controversial issues, and to document the determinants and patterns of coalitions in the Council. The following section describes the methodological steps taken to collect the data on which this article's analysis draws, as well as the structure of the data set.

Section III: Data and methodology

This research draws on a new data set documenting recent EU legislative processes, and covering the initial preferences of member states and EU institutions on controversial policy issues, as well as their decision outcomes. The DEU III data set (Arregui & Perarnaud, 2021) builds on 145 semi-structured interviews conducted in Brussels with representatives of member states and EU institutions, and covers 16 recent negotiations. Four of them are directly related to the DSM. They consist of the adoption processes of the regulation on the free flow of non-personal data (EU Regulation 2018/1807), the Geoblocking regulation (EU Regulation 2018/302), the European electronic communication code directive (EU Directive 2018/1972), and the directive on copyright in the DSM (EU Directive 2019/790). The selection criteria for the legislative dossiers were the negotiation rules (qualified majority voting in the Council), the adoption period (between 2016 and 2019), and the high level of ‘controversiality’ of the policy issues under discussion.

In this data set, information on actors’ policy positions and their salience is represented spatially using ‘scales’ according to an established methodology (Thomson et al., 2006; Arregui & Perarnaud, 2021). During face-to-face interviews conducted in Brussels, negotiators were asked to identify the main controversies raised among member states once the Commission had introduced the legislative proposal. Subsequently, the policy experts had to locate the positions of all actors along the policy scale. The experts were also asked to estimate the level of salience that actors attached to each controversial issue. Every estimation provided had to be justified through evidence and substantive arguments. A number of validity and reliability tests on the DEU III data set (for instance, systematically comparing experts’ judgments and documents) have corroborated the previous validity and reliability analysis of the DEU I data set by Thomson et al. (2006).
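
As an illustration of how such estimates can be handled analytically, the following minimal sketch encodes a hypothetical controversy in this format (positions and salience on 0–100 scales; the member states and values are invented, not taken from the DEU III data set) and computes a salience-weighted mean position, a simple summary often used as a baseline in spatial models of bargaining.

    from dataclasses import dataclass

    @dataclass
    class ActorEstimate:
        actor: str
        position: float  # location on the 0-100 controversy scale
        salience: float  # importance the actor attaches to the issue (0-100)

    # Invented estimates for a hypothetical 'derogations' controversy.
    estimates = [
        ActorEstimate("EE", position=0, salience=90),   # broad free-flow principle
        ActorEstimate("NL", position=0, salience=70),
        ActorEstimate("DE", position=80, salience=85),  # extensive derogations
        ActorEstimate("FR", position=100, salience=95),
    ]

    # Salience-weighted mean position: a rough baseline for where a compromise might land.
    weighted_mean = sum(a.position * a.salience for a in estimates) / sum(a.salience for a in estimates)
    print(round(weighted_mean, 1))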

This methodological approach has its own weaknesses, already identified in the literature (Princen, 2012). In relation to the study of actors’ influence, the first limitation is the clear focus of the DEU data set on the negotiation phase of the EU policy cycle. As the literature on agenda-setting (Princen, 2009) suggests, national governments and other stakeholders can invest significant resources into the preparatory steps of legislative negotiations, dynamics that are not thoroughly addressed in the data set. Similarly, this data set does not investigate the implementation of EU legislation and the actual compliance of member states with decision outcomes. Yet, we know that governments can repeat the same influence efforts observed during negotiations once legislative decisions are actually adopted (Blom-Hansen, 2014). Though not immune from traditional biases related to expert surveys and spatial models of politics, this data set allows us to analyse and compare the structure of the constellation of preferences in the Council across the range of policy controversies under study. In a prior analysis of this data set, Perarnaud (forthcoming) shows that bargaining success is unevenly distributed among member states when looking at DSM negotiations, and suggests that these asymmetries relate in part to variations in the resources and coordination mechanisms that can be mobilised by national governments in Brussels. By contrast, the present research follows a policy-oriented approach and focuses on the political controversies that shaped these negotiations, with a view to informing our analysis of upcoming legislative processes.

The following section presents the main policy controversies under study. The analysis focuses in particular on three overarching issues: the regulation of data flows, the introduction of new legal obligations for internet platforms, and new financial obligations for internet platforms. Then, the coalition patterns observed in these negotiations are described, with an emphasis on the ‘digital like-minded group’ in the Council.

Section IV: Analysis

The three EU negotiations illustrate the extent to which member states were divided regarding the regulation of data flows, the introduction of new legal obligations for internet platforms, as well as new financial obligations for internet platforms.

Data flows

The adoption process of the regulation on the free flow of non-personal data in the EU was characterised by sharp divisions in the Council. This is well illustrated by numerous protracted influence efforts led by several member states, even prior to the publication of the legislative proposal by the European Commission in September 2017.

Before the actual launch of its proposal, certain member states had repeatedly called on the European Commission to propose a legislative initiative on the free flow of data. For instance, in a letter sent in December 2016 to Donald Tusk (the European Council President), sixteen heads of state and prime ministers had asked for measures to end data localisation practices (supported by Belgium, Bulgaria, Czech Republic, Denmark, Estonia, Finland, Ireland, Latvia, Lithuania, Luxembourg, Netherlands, Poland, Slovakia, Slovenia, Sweden and the United Kingdom 1). France and Germany were initially opposed to this initiative. After the legislative proposal was internally drafted by the services of the European Commission, France was reportedly concerned about its scope and successfully pressured the Commission to delay its official publication between 2016 and 2017 2. Indeed, the Regulatory Scrutiny Board (RSB), the body with the power to issue opinions to the college of commissioners on draft legislation, adopted two consecutive negative opinions on the Commission draft proposal in 2016 and 2017. One of the main arguments given to justify these two negative opinions was that the draft proposal also covered personal data, and thus overlapped with the newly adopted General Data Protection Regulation (GDPR).

After the negotiations officially started, the main controversial issue dividing member states concerned the scope of the derogations which could interfere with the principle of free flow of non-personal data. While some advocated for only very limited derogations to the principle of free flow of data, others favoured extensive derogations for different purposes (security, culture, public archives). Proponents of the principle of the free flow of data wanted to maintain the scope of the regulation as broad as possible, as envisioned initially by the European Commission. Estonia, Denmark, Ireland, Czech Republic, Poland, the United Kingdom, Netherlands and Sweden were the most active member states in this group. These member states coordinated their influence efforts at the Brussels level as part of the “digital like-minded group”, a coalition that will be further detailed in the following sections. Though sharing a common goal, these member states were driven by different motives. For instance, for Estonia, one of the main driving forces behind this reform, the free flow of data was both an economic and a political priority. Due to its very digitalised economy and society, the Estonian government saw an economic and political interest in supporting more data transfers within the EU, while increasing its resilience against potential attacks by third countries on its data infrastructures 3. Interest groups representing the tech industry in Brussels were also advocating for the removal of data localisation obstacles (DIGITALEUROPE, 2017). The free flow of data in the EU was considered a very positive change for most internet platforms, as it could lead to the formal establishment of the free flow of (non-personal) data between the EU and third countries (such as the United States).

A range of member states were, however, much less positive towards the establishment of a fully-fledged principle of free flow of data within the EU. France and Germany supported a number of derogations to this principle, in particular for the storage of public data, as well as exemptions for national security purposes. Their position could be explained by their competitive disadvantage in cloud and data storage solutions at the global level, but also by their concerns over cybersecurity and intellectual property 2. Though with less issue salience, their call for more derogations to the principle of free flow of data was backed by Spain, Hungary, Austria and Greece in the first steps of the negotiations. This process thus highlighted two diverging approaches in the Council regarding the regulation of non-personal data flows, a controversial issue that appeared salient for France, Germany, Estonia, Denmark, Ireland, Czech Republic, Poland and the United Kingdom.

New legal obligations for internet platforms

A range of recent EU negotiations have considered the opportunity and possibility of introducing new legal obligations for internet platforms. I will focus here in particular on the recent directive on copyright in the DSM and the reform of the EU telecommunications code. These two legislative instruments have contemplated the introduction of new obligations for internet platforms from different angles. Like the regulation on the free flow of data, both were part of the European Commission’s DSM strategy.

The electronic communication code directive was proposed on 14 September 2016 by the European Commission (COM/2016/0590). One of the key provisions of this complex directive proposed to include new communication services, known as over-the-top (OTT) services, within its scope. The extent to which these services needed to be regulated by the same rules as telecom operators was thus a very controversial issue. The European Commission proposal consisted of including OTTs within the scope of the directive, while providing for derogations for certain types of services. The draft directive differentiated between number-based services, which connect users or companies via the public phone network, and number-independent services, which do not route communication through the public telephone network and do not use telephone numbers as identifiers (such as Facebook Messenger, Gmail or Apple FaceTime). The Commission had proposed to explicitly bring number-based communication services within the scope of the end user rights provisions of the directive, but to include number-independent ones only for a limited set of provisions.

Member states were strongly divided on the extension of telecom rules to OTT service providers. For instance, France and Spain had a maximalist approach and wanted to introduce wide-ranging requirements for OTTs. Germany and Poland supported a similar stance, but with less salience. Spain even proposed that these services should be covered by the telecom general authorisation regime in each member state, like any other telecom provider 4. On the other side of the spectrum, several member states (Sweden, Finland, Denmark, Czech Republic, Netherlands, Luxembourg, Ireland, United Kingdom and Belgium) claimed that regulating number-independent services was unjustified and could hamper innovation in the EU. In its proposal, the European Commission had intended to take a step in the direction of France and Spain, without alienating other member states initially opposed to this inclusion 5.

The adoption process of the copyright directive, and in particular of its article 17 (formerly known as ‘article 13’), generated even more divides between member states (Bridy, 2020; Dusollier, 2020), and is also relevant to map the constellation of member states’ preferences in relation to the regulation of internet platforms. Member states were divided over the obligations and rules that internet platforms should follow to protect rights holders’ content. They had different views on how to address right holders’ difficulties in preventing copyright infringements online, and on the type of obligations and liabilities that should be placed on internet platforms. Several member states (including Denmark, Finland, the Netherlands, Czech Republic, Estonia, Luxembourg, Sweden, the UK) argued for a light-touch regulation of internet platforms, while agreeing with a requirement for platforms to introduce a redress and complaint mechanism for users. Though in favour of the Commission approach, this group opposed a provision aiming to introduce technical requirements for providers of ‘large amounts’ of copyrighted content, which would oblige platforms to use ‘automatic filters’ (or ex ante measures) to control for copyright infringements 6. For Denmark, Finland, Ireland and the Netherlands, this issue was highly salient due to the costs and legal risks it would potentially impose on the large and small digital companies headquartered on their territory. Germany supported new ex ante measures to protect copyright holders, but with a derogation for small and medium enterprises (SMEs) and start-ups with a turnover of less than € 20 million per year. The position of Germany was highly salient given the strong interests of German copyright holders in forcing platforms to monitor content uploads, combined with the German ministry of economy’s red line to maintain a carve-out for small platforms. On the other side of the political spectrum, a number of member states, led by France and Spain, promoted the introduction of more stringent obligations for platforms (regardless of their size), via the introduction of civil liabilities for internet platforms, and ex ante measures to protect copyright holders. The high salience of France and Spain was driven by the interests of their large content industries in regulating and holding internet platforms liable, and forcing them to pay their ‘fair share’ 7. Other member states (such as Portugal, Cyprus and Greece) did support the position defended by France, but without significant interests at stake.

New financial obligations for internet platforms

The adoption process of the copyright directive also led to heated discussions on the introduction of new financial obligations for internet platforms, in particular a “neighbouring right” that would allow press publishers to receive financial compensation when their work is used by internet platforms such as Google News (Papadopoulou & Moustaka, 2020). A large number of member states were opposed to the introduction of a fully-fledged neighbouring right for press publishers. This group was led by Finland, the Netherlands and Poland, which perceived this change as detrimental to their digital sector, and in particular to news-related start-ups. These member states were also concerned that, by regulating the practice of hyperlinking, the new right would restrict users’ access to information and affect the core functioning of the internet. Other member states, led by France and Germany, pushed for the introduction of a neighbouring right for press publishers established in the EU. The high salience of this issue for France and Germany relates to the significant interest of their national press publishers and news agencies (including the Bertelsmann media group in Germany) in being compensated for the use of their works 8. Germany had already introduced a similar provision in its national legislation 9. Other member states such as Portugal, Italy and Spain also supported this introduction, though investing less political capital, due to the more limited benefits this new right would bring to their publishing industries.

These cases illustrate the overall distribution of preferences and issue salience in recent EU negotiations on data flows and the regulation of internet platforms. The next section will look at actual patterns of coordination between member states with similar policy preferences, in order to assess their ability to form and maintain coalitions around common interests.

Section V: Member states’ coalition patterns on DSM files in the Council

As is commonly understood, and as the previous section clearly illustrates, member states tend to be significantly divided regarding the regulation of data flows and internet platforms at the EU level. However, we also know that not all member states are equally equipped to advance their policy preferences at the EU level (Panke, 2012). Not all national negotiators benefit from the same level of ‘network capital’ with their counterparts (Naurin & Lindahl, 2010), nor do they possess the same administrative resources to formulate and defend their positions (Kassim et al., 2001). These asymmetries are partly determined by a set of domestic factors, such as political stability, financial means, administrative legacies, and bargaining style. In a forthcoming article, I argue that they have a direct, though not systematic, effect on the bargaining success of member states (Perarnaud, forthcoming). Qualified majority voting rules in the Council incentivise member states to find coalition partners, either to build political momentum around common policy preferences or to constitute a blocking minority to challenge competing dynamics. But coalition dynamics require a particular set of resources and capabilities that not all member states can mobilise to the same extent. Coalitions can thus be considered key to analysing previous and future EU policy-making processes, and this section will present the coalition patterns observed as part of the three DSM legislative files under study.

The digital like-minded group

The digital like-minded group is both an intriguing and little-known coalition at the Council level. Though it shares features with other issue-based Council groupings (see for instance the Green Growth Group in the context of EU environmental negotiations), it is particularly worth studying given its novelty and relative success in advancing policy preferences in the context of DSM policies.

The digital like-minded group is a largely Brussels-based informal group of national negotiators, which gathers mostly at attaché level, though ambassadors of the respective member states can also meet under this format. The coalition was originally launched when the European Commission released its DSM strategy in 2015 3. It mobilised concretely for the first time during negotiations on the NIS directive (EU Directive 2016/1148), notably to devise common strategies regarding the possible introduction of new obligations for internet services. Representatives of Denmark and Estonia in Brussels initiated this coalition 3, and by 2019 it gathered 17 member states in total. Despite its influence activities, the group is an informal alliance and does not have any public presence.

Member states in the digital like-minded group share a liberal approach to internal market and digital issues, and have in common that they are digitally “ambitious”, though not necessarily digitally advanced. For instance, Bulgaria belongs to the digital like-minded group, despite being one of the least digitalised member states in the EU (according to the Digital Economy and Society Index). The coalition can mobilise both during and prior to negotiations. For instance, before the adoption of the Commission proposal on the free flow of data, a number of joint letters from heads of state were forwarded by this group (Fioretti, 2016). These efforts to liaise with EU leaders are also exemplified by a number of more recent letters, for instance on the ‘Digital Single Market mid-term review’ in 2017 10 or in the preparation of EU leaders’ meetings in 2019 11. As part of legislative negotiations, the digital like-minded group can meet regularly to discuss text compromises and help align member states’ influence efforts. This strategy proved successful in the negotiations on the free flow of data, in which the high level of coordination between members of the ‘digital like-minded group’ allowed for the formulation of effective strategies to contain the initial concerns forcefully expressed by France, and to dismantle a blocking minority (Perarnaud, forthcoming). The format of the digital like-minded group is only used by member states when they share a rather homogeneous position on a particular subject matter. For instance, members of the digital like-minded group were significantly divided in the context of the copyright directive, and thus could not leverage this format.

Among the digital like-minded group, negotiators from the most digitally advanced member states appear prominent (Sweden, Denmark, Finland, Estonia, Belgium, the Netherlands, Luxembourg, Ireland and the UK). This group of member states also meets at ministerial level, in what is known as the Digital 9 (D9) group, which was launched by the Swedish minister for EU affairs and trade, Ann Linde, on 5 September 2016. According to an interviewee 12, this new initiative partly stemmed from recommendations published in a report on the “European Digital Front-Runners”, carried out by the Boston Consulting Group and funded by Google, which invited member states considered as digital front-runners to coordinate their action “at both political and policy levels […] for true digitization success and to communicate their strong commitment to the execution of the digital agenda” (Alm et al., 2016). This development illustrates the indirect lobbying channels that corporate actors can use to secure influence in relation to the DSM, in addition to their more traditional repertoire of actions at the EU level (Laurer & Seidl, 2020; Christou & Rashid, 2021).

Since October 2017, “digitally advanced” member states have met at ministerial level as part of a larger group, known as the D9+, which also includes Czech Republic and Poland (The Digital Hub, 2018). The D9 initiative was not perceived positively by all member states within and outside this group. Some negotiators argued that these nine member states did not hold sufficient voting power to influence legislative negotiations, while certain member states, such as Poland and Czech Republic, still wanted to cooperate with like-minded member states but found themselves excluded. These considerations were the main drivers behind the creation of the D9+ group. Yet the overlap between these formats was identified as a challenge by two interviewees 13.

The Franco-German alliance

In contrast to the “digital front-runners”, France and Germany did not appear to coordinate their influence efforts in the three negotiations under study. This can be partly explained by differences in their national preferences with respect to the regulation of internet platforms. Though they generally share common concerns, Germany and France had different, and sometimes divergent, interests to defend. Moreover, the repeated delays in the formulation of the German position did not allow for strong coordination mechanisms between Paris and Berlin on key controversial issues. The position of Germany on article 13 of the copyright directive (article 17 in the final text) was agreed two years after the publication of the Commission proposal, and the German government only agreed on its own position in the negotiations on the free flow of data regulation during the final stages of the adoption of the Council’s position in 2018. These delays mainly originate from the horizontal coordination structure between ministers and Länder (Sepos, 2005), especially as the positions of the German ministry of justice and the ministry of economy can be difficult to reconcile on digital matters. The Franco-German alliance on DSM policies was thus only visible at a high level on a limited number of occasions 14, and rarely at the level of negotiators.

Though the research fieldwork did not allow for interviews with all the negotiators involved in these processes, it should be noted that there was no evidence of other Council coalitions mobilised by member states. The literature on coalitions in the Council indicates that national representatives generally coordinate their position and strategy with their counterparts on an ad hoc basis depending on the issues at stake (Ruse, 2013), as illustrated here by the cases of Spain and Italy. Both could be considered pivotal in the shaping of these decisions. Spain’s progressive shift in the negotiations on the free flow of data regulation is, for instance, essential to understanding the outcomes of these dynamics, and shows how member states not belonging to coalition groups can still exert influence in EU policy-making processes.

Section VI: Conclusion

These negotiation processes highlight the competing interests of EU member states and institutions with regard to key controversial aspects of the EU’s DSM, and the coalitions they mobilise to amplify their message in Brussels.

These descriptive findings can be of great use for understanding the next political sequence initiated by the European Commission over the regulation of data flows and internet platforms. They provide new insights on the structure and salience of member states’ preferences on these controversial issues, and the mechanisms at their disposal to gain influence.

While the observation of coalition patterns was specifically focused on the Council, it should be noted that member states can also approach other EU institutions to advance their preferences. Previous studies (Panke, 2012; Bressanelli & Chelotti, 2017) show that not all Brussels-based national negotiators regularly engage with MEPs in order to channel national interests and ‘tame’ the different political positions voiced inside the EP. These variations partly relate to differences in the size of national delegations in the EP, but also to member states’ negotiation style and administrative resources (Perarnaud, forthcoming). Interestingly, the member states with the most structured channels of communication with EU institutions are the ones that have expressed the highest salience on issues related to data flows and internet platform regulation. As shown by previous studies (Panke, 2012), France, the Netherlands, Sweden, Finland and Czech Republic are indeed among the member states with the most powerful connections with the EP and the European Commission. Given the existing divides among member states in the Council, the European Parliament could have a greater say in the upcoming negotiations, thus possibly giving more leverage to the member states with structured mechanisms to liaise with MEPs.

The negotiations on the DSA, DMA and DGA legislative dossiers will generate significant debates and controversies at the EU level. Many detailed proposals and ideas are being voiced to shape the EU’s approach on a myriad of policy issues related to the regulation of data flows, competition and content moderation (Graef & van Berlo, 2020; Gillespie et al., 2020). Policy proposals presented in the context of this new phase should take into account the informal power balance between member states in the Council, and existing asymmetries in their capabilities to defend national positions in Brussels. As it appears that strong coordination mechanisms between the most digitally advanced countries of the EU have granted them significant influence over large member states in recent years (Perarnaud, forthcoming), future research on these processes should carefully study coordination processes and coalition patterns, as well as their implications. The recent joint statement released by the D9+ group (D9+ group, 2019) ahead of the DSA negotiations is a good indicator of the relevance of studying the distribution of preferences and coalition patterns as part of this new political sequence.

Acknowledgements

I thank Roxana Radu, Oles Andriychuk and the editors for their valuable comments on previous versions of this article.

References

Alm, E., Colliander, N., Deforche, F., Lind, F., Stohne, V., & Sundström, O. (2016). Digitizing Europe: Why Northern European frontrunners must drive the digitization of the EU economy [Report]. Boston Consulting Group. https://www.beltug.be/news/4922/European_Digital_Front-Runners_The_Boston_Consulting_Group_publishes_new

Arregui, J., & Perarnaud, C. (2021). The Decision-Making in the European Union (DEUIII) Dataset (1999-2019) [Data set]. Repositori de dades de recerca. https://doi.org/10.34810/DATA53

Bendiek, A., & Römer, M. (2018). Externalizing Europe: The global effects of European data protection. Digital Policy, Regulation and Governance, 21(1), 32–43. https://doi.org/10.1108/DPRG-07-2018-0038

Bennett, C. J., & Raab, C. D. (2020). Revisiting the governance of privacy: Contemporary policy instruments in global perspective. Regulation & Governance, 14(3), 447–464. https://doi.org/10.1111/rego.12222

Blom-Hansen, J. (2014). Comitology choices in the EU legislative process: Contested or consensual decisions? Public Administration, 92(1), 55–70. https://doi.org/10.1111/padm.12036

Borgogno, O., & Colangelo, G. (2019). Data sharing and interoperability: Fostering innovation and competition through APIs. Computer Law & Security Review, 35(5), 105314. https://doi.org/10.1016/j.clsr.2019.03.008

Bradford, A. (2020). The Brussels effect: How the European Union rules the world. Oxford University Press. https://doi.org/10.1093/oso/9780190088583.001.0001

Bressanelli, E., & Chelotti, N. (2017). Taming the European Parliament: How Member States Reformed Economic Governance in the EU (Research Paper No. 2017/54). Robert Schuman Centre for Advanced Studies.

Bridy, A. (2020). The Price of closing the ‘value gap’: How the music industry hacked EU copyright reform. Vanderbilt Journal of Entertainment & Technology Law, 22(2), 323–358. https://scholarship.law.vanderbilt.edu/jetlaw/vol22/iss2/4

Cavelty, M. D. (2018). Europe’s cyber-power. European Politics and Society, 19(3), 304–320. https://doi.org/10.1080/23745118.2018.1430718

Christou, G. (2019). The collective securitisation of cyberspace in the European Union. West European Politics, 42(2), 278–301. https://doi.org/10.1080/01402382.2018.1510195

Christou, G., & Rashid, I. (2021). Interest group lobbying in the European Union: Privacy, data protection and the right to be forgotten. Comparative European Politics, 19, 380–400. https://doi.org/10.1057/s41295-021-00238-5

Christou, G., & Simpson, S. (2006). The internet and public–private governance in the European Union. Journal of Public Policy, 26(1), 43–61. https://doi.org/10.1017/S0143814X06000419

Deibert, R., & Crete-Nishihata, M. (2012). Global Governance and the Spread of Cyberspace Controls. Global Governance, 18(3), 339–361. https://doi.org/10.1163/19426720-01803006

DeNardis, L. (2014). The global war for internet governance. Yale University Press. https://doi.org/10.12987/yale/9780300181357.001.0001

DIGITALEUROPE. (2017, September 19). Free Flow of Data proposal brings the Digital Single Market strategy back on track [Press release]. DIGITALEUROPE.

Dimitrova, A., & Brkan, M. (2018). Balancing National Security and Data Protection: The Role of EU and US Policy‐Makers and Courts before and after the NSA Affair. JCMS: Journal of Common Market Studies, 56, 751–767. https://doi.org/10.1111/jcms.12634

Directive (EU) 2016/1148 of the European Parliament and of the Council of 6 July 2016 concerning measures for a high common level of security of network and information systems across the Union.

Directive (EU) 2018/1972 of the European Parliament and of the Council of 11 December 2018 establishing the European Electronic Communications Code (Recast).

Directive (EU) 2019/790 of the European Parliament and of the Council of 17 April 2019 on copyright and related rights in the Digital Single Market and amending Directives 96/9/EC and 2001/29/EC (2019).

Dusollier, S. (2020). The 2019 Directive on Copyright in the Digital Single Market: Some progress, a few bad choices, and an overall failed ambition. Common Market Law Review, 57(4), 979–1030. https://kluwerlawonline.com/JournalArticle/Common+Market+Law+Review/57.4/COLA2020714

European Commission. (2021). Digital Economy and Society Index (DESI) 2019. European Commission Directorate-General for Communications Networks, Content and Technology. http://semantic.digital-agenda-data.eu/dataset/DESI

Fioretti, J. (2016, May 23). EU countries call for the removal of barriers to data flows. Reuters. https://www.reuters.com/article/uk-eu-digital-data-idUKKCN0YE06O

Gillespie, T., Aufderheide, P., Carmi, E., Gerrard, Y., Gorwa, R., Matamoros-Fernández, A., Roberts, S. T., Sinnreich, A., & Myers West, S. (2020). Expanding the debate about content moderation: Scholarly research agendas for the coming policy debates. Internet Policy Review, 9(4).

Graef, I., & Van Berlo, S. (2020). Towards smarter regulation in the areas of competition, data protection, and consumer law: Why greater power should come with greater responsibility. European Journal of Risk Regulation, 25(1). https://doi.org/10.1017/err.2020.92

D9+ group. (2019). D9+ non-paper on the creation of a modern regulatory framework for the provision of online services in the EU. https://www.gov.pl/web/digitalization/one-voice-of-d9-group-on-new-regulations-concerning-provision-of-digital-services-in-the-eu

Harcourt, A., Christou, G., & Simpson, S. (2020). Global Standard Setting in Internet Governance. Oxford University Press.

The Digital Hub. (2018). Minister Breen chairs meeting of D9+ EU countries. https://www.thedigitalhub.com/news/minister-breen-chairs-meeting-d9-eu-countries/

Kassim, H., Menon, A., & Peters, B. G. (Eds.). (2001). The National Coordination of EU Policy: The European Level. Oxford University Press.

Laurer, M., & Seidl, T. (2020). Regulating the European Data‐Driven Economy: A Case Study on the General Data Protection Regulation. Policy & Internet. https://doi.org/10.1002/poi3.246

Lundgren, M., Bailer, S., Dellmuth, L. M., Tallberg, J., & Târlea, S. (2019). Bargaining success in the reform of the Eurozone. European Union Politics, 20(1), 65–88. https://doi.org/10.1177/1465116518811073

Meyer, T. (2017). The Politics of Online Copyright Enforcement in the EU. Palgrave Macmillan.

Naurin, D., & Lindahl, R. (2010). Out in the cold? Flexible integration and the political status of Euro opt-outs. European Union Politics, 11(4), 485–509. https://doi.org/10.1177/1465116510382463

Naurin, D., & Wallace, H. (2008). Unveiling the Council of the European Union. Palgrave Macmillan UK.

Ochs, C., Pittroff, F., Büttner, B., & Lamla, J. (2016). Governing the internet in the privacy arena. Internet Policy Review, 5(3), 1–13. https://doi.org/10.14763/2016.3.426

Panke, D. (2012). Lobbying Institutional Key Players: How States Seek to Influence the European Commission, the Council Presidency and the European Parliament. Journal of Common Market Studies, 50, 129–150. https://doi.org/10.1111/j.1468-5965.2011.02211.x

Papadopoulou, M.-D., & Moustaka, E.-M. (2020). Copyright and the Press Publishers Right on the Internet: Evolutions and Perspectives. In T.-E. Synodinou, P. Jougleux, C. Markou, & T. Prastitou (Eds.), EU Internet Law in the Digital Era: Regulation and Enforcement (pp. 99–136). Springer International Publishing. https://doi.org/10.1007/978-3-030-25579-4_5

Perarnaud, C. (Forthcoming). Why do negotiation processes matter? Informal Capabilities as Determinants of EU Member States’ Bargaining Success in the Council of the EU [Doctoral dissertation]. Universitat Pompeu Fabra.

Pohle, J., Hösl, M., & Kniep, R. (2016). Analysing internet policy as a field of struggle. Internet Policy Review, 5(3). https://doi.org/10.14763/2016.3.412

Pohle, J., & Thiel, T. (2020). Digital sovereignty. Internet Policy Review, 9(4). https://doi.org/10.14763/2020.4.1532

Princen, S. (2009). Agenda-setting in the European Union. Palgrave Macmillan. https://doi.org/10.1057/9780230233966

Princen, S. (2012). The DEU approach to EU decision-making: A critical assessment. Journal of European Public Policy, 19(4), 623–634. https://doi.org/10.1080/13501763.2012.662039

Proposal for a directive of the European Parliament and of the Council establishing the European Electronic Communications Code (Recast), COM/2016/0590 final (2016).

Radu, R., & Chenou, J. M. (2015). Data control and digital regulatory space(s): Towards a new European approach. Internet Policy Review, 4(2). https://doi.org/10.14763/2015.2.370

Radu, R., Kettemann, M. C., Meyer, T., & Shahin, J. (2021). Normfare: Norm entrepreneurship in internet governance. Telecommunications Policy, 45(6), 102148. https://doi.org/10.1016/j.telpol.2021.102148

Regulation (EU) 2018/302 of the European Parliament and of the Council of 28 February 2018 on addressing unjustified geo-blocking and other forms of discrimination based on customers’ nationality, place of residence or place of establishment within the internal market and amending Regulations (EC) No 2006/2004 and (EU) 2017/2394 and Directive 2009/22/EC, (2018).

Regulation (EU) 2018/1807 of the European Parliament and of the Council of 14 November 2018 on a framework for the free flow of non-personal data in the European Union.

Renard, T. (2018). EU cyber partnerships: Assessing the EU strategic partnerships with third countries in the cyber domain. European Politics and Society, 19(3), 321–337. https://doi.org/10.1080/23745118.2018.1430720

Ruse, I. (2013). (Why) Do neighbours cooperate? Institutionalised coalitions and bargaining power in EU Council negotiations. Budrich UniPress.

Schroff, S., & Street, J. (2018). The politics of the Digital Single Market: Culture vs. Competition vs. Copyright. Information, Communication & Society, 21(10), 1305–1321. https://doi.org/10.1080/1369118X.2017.1309445

Sepos, A. (2005). The national coordination of EU policy: Organisational efficiency and European outcomes. Journal of European Integration, 27(2), 169–190. https://doi.org/10.1080/07036330500098227

Thomson, R. (2011). Resolving Controversy in the European Union. Cambridge University Press.

Thomson, R., Stokman, F. N., Achen, C. H., & König, T. (Eds.). (2006). The European Union Decides. Cambridge University Press.

Timmers, P. (2018). The European Union’s cybersecurity industrial policy. Journal of Cyber Policy, 3(3), 363–384. https://doi.org/10.1080/23738871.2018.1562560

Wasserfallen, F., Leuffen, D., Kudrna, Z., & Degner, H. (2019). Analysing European Union decision-making during the Eurozone crisis with new data. European Union Politics, 20(1), 3–23. https://doi.org/10.1177/1465116518814954

Appendix 1: Extracts from the DEU III data set (Arregui & Perarnaud, 2021)

The following figures illustrate the distribution of the policy positions (axis X), and their intensity (axis Y), for all member states and EU institutions on the main controversial issues under study. The vertical arrow indicates where research respondents located the decision outcome on the policy scale.

Figure 1: Structure of the controversy on ‘derogations’ during the adoption process of the regulation on free flow of non-personal data (2017/0228/COD).

Figure 2: Structure of the controversy on the ‘value gap’ during the adoption process of the directive on copyright in the DSM (2016/0280/COD).

Figure 3: Structure of the controversy on the introduction of a ‘neighbouring right’ for press publishers during the adoption process of the directive on copyright in the DSM (2016/0280/COD).

Figure 4: Structure of the controversy on the inclusion of ‘OTTs’ during the adoption process of the European electronic communication code directive (2016/0288/COD).
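
For readers who wish to reproduce the general layout of these figures, the following minimal plotting sketch uses made-up positions and salience values (not taken from the DEU III data set) and marks an assumed decision outcome with a dashed vertical line.

    import matplotlib.pyplot as plt

    # Hypothetical positions (x axis) and salience (y axis) on 0-100 scales.
    positions = {"FR": 100, "DE": 80, "ES": 70, "NL": 5, "EE": 0, "DK": 0}
    salience = {"FR": 95, "DE": 85, "ES": 40, "NL": 70, "EE": 90, "DK": 75}
    outcome = 30  # assumed location of the decision outcome on the policy scale

    fig, ax = plt.subplots(figsize=(6, 3))
    for state, pos in positions.items():
        ax.scatter(pos, salience[state])
        ax.annotate(state, (pos, salience[state]), xytext=(3, 3), textcoords="offset points")

    ax.axvline(outcome, linestyle="--", label="Decision outcome")
    ax.set_xlabel("Policy position (0-100 scale)")
    ax.set_ylabel("Issue salience")
    ax.set_xlim(-5, 105)
    ax.legend()
    plt.tight_layout()
    plt.show()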

Footnotes

1. The United Kingdom (UK) was still part of the European Union at the time, until formally leaving in January 2020.

2. Interview with MS representative, 19/09/2018, Brussels.

3. Interview with MS representative, 11/09/2018, Brussels.

4. Interview with MS representative, 20/06/2018, Brussels.

5. Interview with an EU official, 10/05/2019, Brussels.

6. Interview with MS representative, 20/09/2018, Brussels.

7. Interview with MS representative, 07/09/2018, Brussels.

8. Interview with Council Secretariat official, 11/09/2018, Brussels.

9. Germany adopted in 2013 an ancillary copyright law for press publishers (‘Presseverleger-Leistungsschutzrecht, Achtes Gesetz zur Änderung des Urheberrechtsgesetzes’, 7 May 2013).

10. For more, see: https://euractiv.eu/wp-content/uploads/sites/2/2017/06/170620_HOSGs-EUCO-digital-letter-FINAL.pdf

11. For more, see: https://images.politico.eu/wp-content/uploads/2019/03/Leadersjointletter_MarchEUCO_260219.pdf

12. Interview with MS representative, 18/09/2018, Brussels.

13. Interviews with MS representatives, 14/09/2018 and 18/09/2018, Brussels.

14. See for instance the Franco-German joint statement released in the context of a high-level ministerial meeting on October 2015: https://www.economie.gouv.fr/files/files/PDF/Declaration_conference_numerique_finale_FR.pdf

Recommender systems and the amplification of extremist content

Introduction

Recent years have seen a substantial increase in policymakers' concern about the role and impact of personalisation algorithms on social media users. A key concern is that users are shown more content with which they agree at the expense of cross-cutting viewpoints, creating a false sense of reality and potentially damaging civil discourse (Vīķe‐Freiberga et al., 2013). While human beings have always tended to gravitate towards opinions and individuals that align with their own beliefs at the expense of others (Sunstein, 2002), so-called “filter bubbles” are singled out as particularly problematic because they are proposed to artificially exacerbate this confirmation bias without a user’s knowledge (Pariser, 2011).

The filter bubble debate is complex and often ill-defined, encompassing research into timelines, feeds, and news aggregators (Bruns, 2019), and often trying to establish the importance of these algorithms relative to a user’s own personal choice (see e.g., Bakshy, Messing, and Adamic, 2015; Dylko et al., 2017). We focus our contribution on one specific aspect of the debate—the role of social media platforms’ recommendation systems and their interaction with far-right extremist content. Policymakers have articulated concern that these algorithms may be amplifying problematic content to users, which may exacerbate the process of radicalisation (HM Government, 2019; Council of the European Union, 2020). This has also become a concern in popular news media, spawning articles which highlight the importance of recommender systems in The Making of a YouTube Radical (Roose, 2019), or which refer to the platform as The Great Radicalizer (Tufekci, 2018) or as having radicalised Brazil and caused Jair Bolsonaro’s election victory (Fisher & Taub, 2019). Research into this phenomenon (discussed in more detail below) does seem to support the notion that recommendation systems can amplify extreme content, yet the empirical research tends to focus on a single platform—YouTube—or is observational or anecdotal in nature.

We contribute to this debate in two ways. Firstly, we conduct an empirical analysis of the interaction between recommendation systems and far-right content on three platforms—YouTube, Reddit, and Gab. This analysis provides a novel contribution by being the first study to account for personalisation in an experimental setting, which has been noted as a limitation by previous empirical research (Ledwich and Zaitsev, 2019; Ribeiro et al., 2019). We find that one platform—YouTube—does promote extreme content after a user interacts with far-right materials, but the other two do not. Secondly, we contextualise these findings within the policy debate, surveying the existing regulatory instruments and highlighting the challenges they face, before arguing that a co-regulatory approach may offer the ability to overcome these challenges while providing safeguards and respecting democratic norms.

Filter bubbles

Originally articulated by Eli Pariser, the filter bubble concept posits that personalisation algorithms may act as “autopropaganda” by invisibly controlling what web users do and do not see, promoting ideas that users already agree with and, in doing so, dramatically amplifying confirmation bias (Pariser, 2011). He claims that the era of personalisation began in 2009, when Google announced that it would begin to filter search results based on previous interactions, creating a personalised universe of information for each user which may contain increasingly less diverse viewpoints and therefore increase polarisation within society. Since Pariser’s original contribution, online platforms have further developed and emphasised their personalisation algorithms beyond “organic” interactions. DeVito notes that Facebook’s News Feed is ‘a constantly updated, personalized machine learning model, which changes and updates outputs based on your behaviour…Facebook’s formula, to the extent that it actually exists, changes every day’ (DeVito, 2016, p. 16), while YouTube’s marketing director told a UK House of Commons select committee that around 70% of content watched on the platform was derived from recommendations rather than users’ organic searches (Azeez, 2019). Many scholars have highlighted concerns over such algorithms: that they carry the biases of their human designers (Bozdag, 2013); that they conflate user satisfaction with retention (Seaver, 2018), which may blind users to important social events (Napoli, 2015); that people are not aware of how they restrict information (Eslami et al., 2015; Bucher, 2017); and that even their creators do not fully understand how they operate (Napoli, 2014; DeVito, 2016). In the context of extremism, each of these concerns may cause alarm, as they posit a situation in which users may become insulated with confirming information in an opaque system.

In turn, a widespread concern has grown in policy circles. The EU Group on Media Freedom and Pluralism suggests that personalisation may have adverse effects: ‘Increasing filtering mechanisms makes it more likely for people to only get news on subjects they are interested in, and with the perspective they identify with…Such developments undoubtedly have a potentially negative impact on democracy’ (Vīķe‐Freiberga et al., 2013, p. 27). More recently, the filter bubble effect has been blamed as a critical enabler of perceived failures of democracy, such as the populist electoral victories of Donald Trump and Jair Bolsonaro, and the Brexit referendum (Bruns, 2019). In the US, lawmakers have been considering legislative options to regulate social media algorithms. The proposed bipartisan “Filter Bubble Transparency Act”, which at the time of writing is awaiting passage through the US Senate, claims to ‘make it easier for internet platform users to understand the potential manipulation that exists with secret algorithms’ (Filter Bubble Transparency Act, 2019, S, LYN19613).

Importantly for the aims of this paper, the role of personalisation algorithms in promoting extreme content has been highlighted by policymakers as a pressing concern. After the Christchurch terror attack in 2019, New Zealand Prime Minister Jacinda Ardern and French President Emmanuel Macron brought together several heads of state and tech companies to propose the Christchurch Call. The Call committed governments to, amongst other things, ‘review the operation of algorithms and other processes that may drive users towards and/or amplify terrorist and violent extremist content’ (Christchurch Call, 2019). The Call was signed by the European Commission, the Council of Europe, and 49 nation states. The UK government has also signalled personalisation algorithms as problematic in its Online Harms White Paper, noting that they can lead to echo chambers and filter bubbles which can skew users towards extreme and unreliable content (HM Government, 2019). Similarly, the UK’s Commission for Countering Extremism called on tech companies to ‘ensure that their technologies have a built-in commitment to equality, and that their algorithms and systems do not give extremists the advantage from the start by feeding existing biases’ (Commission for Countering Extremism, 2019, p. 86). In 2020, the EU Counter-Terrorism Coordinator released a report which argued that online platforms are a conduit for polarisation and radicalisation because their recommendation systems promote content linked to strong negative emotions, including extreme content (Council of the European Union, 2020).

Many scholars have criticised filter bubbles as a problematic concept. Bruns notes that Pariser’s original formulation is founded on anecdotes for which there is scant empirical evidence. He argues that the disconnect between the public understanding of the concept and the scientific evidence has all the hallmarks of a moral panic, which distracts from more important matters such as changing communications landscapes and increasing polarisation (Bruns, 2019). Munger and Phillips also criticise the prominent theory of YouTube radicalisation via algorithm that is often advanced in the media. They argue that this theory is tantamount to the now-discredited “hypodermic needle” model of mass communications and instead posit a “supply” and “demand” framework which emphasises both the affordances that the platform offers and a greater focus on the audience (Munger and Phillips, 2019).

Despite concerns by academics and policymakers, there is limited empirical evidence as to the extent (or harmfulness) of filter bubbles (Zuiderveen Borgesius et al., 2016); research tends to suggest that social media users have a more diverse media diet than non-users (Bruns, 2019). Studies have shown that personalisation algorithms do filter towards an ideological position and can increase political polarisation (Bakshy, Messing, and Adamic, 2015; Dylko et al., 2018), but that they play a smaller role than users’ own choices. Several studies have also examined news recommendation systems, finding that concerns over personalised recommendations are overstated (Haim, Graefe, and Brosius, 2018), that such systems do not reduce the diversity of content shown to users (Möller et al., 2018), and that recommendations are more likely to be driven by factors such as time or date than by past behaviour (Courtois, Slechten, and Coenen, 2018).

A key part of most critiques of the filter bubble as a concept is that it lacks clarity. Little distinction is made between the similar, yet different, concepts of filter bubbles and echo chambers (Zuiderveen Borgesius et al., 2016; Bruns, 2019). This is problematic because it hinders the ability to conduct robust research: studies may offer different findings because they employ radically different definitions of these concepts (Bruns, 2019). It is worthwhile to consider the distinction drawn by Zuiderveen Borgesius and colleagues, who distinguish between two types of personalisation: firstly, self-selected personalisation, in which the user chooses to encounter like-minded views and opinions (i.e., an “echo chamber”), and secondly, pre-selected personalisation, which is driven by platforms without the user’s deliberate choice, input, knowledge or consent—i.e., a “filter bubble” (Zuiderveen Borgesius et al., 2016). This conceptual confusion also extends to discussions of involvement in terrorism and extremism: Whittaker (2020) argues that studies have frequently posited a causative relationship between online echo chambers and radicalisation—with little empirical evidence—and are rarely clear as to whether they refer to users’ own choices or the effects of algorithms.

This conceptual distinction is important for the present study because social media recommendation systems do not fit easily into either the self-selected or pre-selected categories. On all three of the platforms under consideration in the present study (YouTube, Reddit, and Gab), content is pre-selected for users, who then have the option to engage with it. This differs, for example, from Facebook’s News Feed, in which users do not have this option. Given this ambiguity, it is worthwhile to be clear about the phenomena under consideration—recommendation systems—which are defined by Ricci, Rokach and Shapira (2011) as software tools and techniques providing users with suggestions for items they may wish to utilise. They expand upon this by stating that ‘the suggestions relate to various decision-making processes, such as what items to buy, what music to listen to, or what online news to read’ (Ricci, Rokach and Shapira, 2011, p. 1). A user can navigate these platforms without utilising these systems—even if recommendations happen to make up most of the traffic. Moreover, we define content amplification as the promotion of certain types of content—in this case far-right extreme content—at the expense of more moderate viewpoints.

Extremist content and recommendation systems

There are a handful of existing studies which explore the relationship between recommendation algorithms and extremist content, with a predominant focus on YouTube. O’Callaghan et al. (2015) conducted an analysis of far-right content, finding that more extreme content can be offered to users, who may find themselves in an immersive ideological bubble in which they are excluded from content that does not fit their belief system. Ribeiro et al. (2019) conducted an analysis of two million recommendations related to three categories: the Alt-right, the Alt-lite, and the Intellectual Dark Web. They find that YouTube’s recommendation algorithm frequently suggests Alt-lite and Intellectual Dark Web content, and that once in these communities, it is possible to reach the Alt-right through recommended channels. This, they argue, supports the notion that YouTube has a “radicalisation pipeline”. Schmitt et al. (2018) study YouTube recommendations in the context of counter-messages designed to dissuade individuals from extremism. Using two seed campaigns (#WhatIS and ExitUSA), they find that the universe of related videos had a high crossover with radical propaganda, particularly for the #WhatIS campaign, which had several keywords with thematic overlaps with Islamist propaganda (for example, “jihad”). All three studies’ findings suggest that extremist content may be amplified via YouTube’s recommendation algorithm.

Conversely, Ledwich and Zaitsev (2019) find that YouTube’s recommendation algorithm actively discourages users from visiting extreme content, which they claim refutes popular “radicalisation” claims. They develop a coding system for channels based on ideological categories (e.g., “Conspiracy”, “Revolutionary”, “Partisan Right”) as well as whether the channel is part of the mainstream news or an independent YouTuber. They find no evidence of migration to extreme right channels—their data suggest that the recommendation algorithm appears to restrict traffic towards these categories and that users are instead directed towards mainstream channels. Importantly, all four of these studies generate data by leveraging YouTube’s application programming interface (API) to retrieve recommendations of extremist content. This form of data collection does not mimic the typical user-platform relationship, which involves users interacting with content, being repeatedly exposed to recommendations, and continuing to interact with content from which the algorithm can learn and tailor recommendations accordingly. As such, these studies do not consider personalisation, which is at the core of the filter bubble hypothesis. This is acknowledged by both Ribeiro et al. (2019) and Ledwich and Zaitsev (2019) as a limitation.

Other studies have taken qualitative or observational approaches. Gaudette et al. (2020) studied Reddit’s upvoting and downvoting algorithm on the subreddit r/The_Donald by taking a sample of the 1,000 most upvoted posts and comparing them to a random sample, finding that the upvoted sample contained extreme discourse which facilitated “othering” towards two outgroups—Muslims and the left. The authors argue that the upvoting algorithm plays a key role in facilitating an extreme collective identity on the subreddit. Baugut and Neumann (2020) conduct interviews with 44 radical Islamists on their media diet, finding that many individuals began with only a basic interest in ideology but followed platforms’ recommendations when they were shown radical propaganda, which propelled them to engage in violence. Both Berger (2013) and Waters and Postings (2018) observe that Twitter and Facebook respectively suggest radical jihadist accounts for users to follow after an individual begins to engage with extreme content, arguing that the platforms inadvertently create a network which helps to connect extremists.

Looking at the existing literature, several inferences can be made. Firstly, little is known about the relationship between recommendation systems and the promotion of extremist content. Of the few studies that exist, a majority do suggest that these algorithms can promote extreme content, but the research tends to focus on one platform—YouTube—and mostly analyses the potential interaction between users and content, or relies on collecting potential recommendations from platform APIs rather than actual interactions based on personalisation. The focus on a single platform is significant because research may be driven by what is convenient to researchers, given YouTube’s open API, rather than by following the trail of extreme content. Secondly, turning to the previous section and research into filter bubbles more broadly, despite theoretical apprehensions which can be alarmingly applied to the promotion of extremist content, the evidence base for this concern is limited: most studies tend to play down the effect, either suggesting that it does not exist or that users’ own choices play a bigger role in their decision-making.

Methodology

This study aims to empirically analyse whether social media recommendation systems can promote far-right extremist content on three platforms—YouTube, Reddit, and Gab. The far right was chosen as the most appropriate ideology because its content remains accessible on the internet: platforms have not been able to apply the same de-platforming methods against it as they used on jihadist content (Conway, 2020). Each of the platforms in question has been noted as hosting extreme far-right material, for example: YouTube (O’Callaghan et al., 2015; Lewis, 2018; Ottoni et al., 2018; Munger and Phillips, 2020; Van Der Vegt et al., 2020), Reddit (Conway, 2016; Copland, 2020; Gaudette et al., 2020), and Gab (Berger, 2018b; Conway, Scrivens, and Macnair, 2019; Nouri, Lorenzo-Dus, and Watkin, 2019).

Our methodology adds an important contribution to the existing literature. Rather than using the results from an API to generate data on potential recommendations, as is common in the existing literature (O’Callaghan et al., 2015; Ledwich and Zaitsev, 2019; Ribeiro et al., 2019), we engage with social media recommendation systems via automated user accounts (bots) to observe content adjustments (recommendations). Retrieving potential recommendations via the API does not mimic the user's relationship to the platform because the algorithm has nothing to learn about an individual user and no personalisation takes place. Our methodology, by contrast, uses automated agents which create behavioural data that the algorithm uses to personalise recommendations. We therefore effectively recreate the conditions in which real users find themselves when using a platform with a personalised recommendation system. A similar research design was employed by Haim, Graefe and Brosius (2018), who created personalised accounts on Google News in order to study the political bias of news recommendations. However, there are no studies that utilise this methodology to assess the amplification of extremist content. Given the platforms’ divergent architectures, we utilise three different designs, explained below.

YouTube

We investigate whether extreme content was promoted after applying specific treatments. To do this, we created three identical accounts subscribed to the same 20 YouTube channels: 10 far-right channels identified in the academic literature and 10 apolitical content producers (for example, sport or weather). We subjected these accounts to three different treatments:

  1. Interacting predominantly with far-right channels—the extreme interaction account (EIA);
  2. Interacting predominantly with apolitical channels—the neutral interaction account (NIA); and
  3. Doing nothing at all—the baseline account (BA).

Data were collected by visiting the YouTube homepage twice per day. Using the recommendation data, we constructed two variables of interest—the share and the rank of extreme, fringe, and moderate content. For the share of content, we divide the number of recommended items in a given category by the total number of items in the recommendation set. To determine rank, the item that appears in the top left is ranked “1”, with the rank increasing continuously for each item to the right and below. YouTube offers 18 recommended videos, so the data have a ranking of 1-18. This serves as a measure of algorithmic prioritisation, as we can assume that content that is more easily accessible (i.e., more visible) will be consumed and viewed more.
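
As an illustration only (this is a minimal sketch rather than the study’s own analysis code; the category labels and example data are hypothetical), the share and rank variables can be computed from a single recommendation set along the following lines:

```python
from collections import Counter

# One recommendation set: the 18 videos shown on the YouTube homepage,
# ordered top-left to bottom-right. Category labels are illustrative only.
recommendation_set = [
    "extreme", "moderate", "fringe", "moderate", "moderate", "extreme",
    "moderate", "fringe", "moderate", "moderate", "moderate", "fringe",
    "moderate", "moderate", "extreme", "moderate", "fringe", "moderate",
]

def share_by_category(rec_set):
    """Share of each category = count of that category / total items in the set."""
    counts = Counter(rec_set)
    total = len(rec_set)
    return {category: count / total for category, count in counts.items()}

def ranks_by_category(rec_set):
    """Rank of each item: 1 for the top-left slot, increasing to the right and downwards."""
    ranks = {}
    for position, category in enumerate(rec_set, start=1):
        ranks.setdefault(category, []).append(position)
    return ranks

print(share_by_category(recommendation_set))   # e.g. {'extreme': 0.167, ...}
print(ranks_by_category(recommendation_set))   # e.g. {'extreme': [1, 6, 15], ...}
```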

For the first week, the three accounts did not interact with any content and simply visited the homepage twice a day to collect data. This serves as a baseline against which the treatments can be compared. After a week, the three different treatments were applied. Twenty videos were chosen (one from each channel), and all the accounts watched one to kick-start the recommendation algorithm. Each time an account visited the YouTube homepage, ten videos were randomly chosen from the recommendation tab: the EIA watched seven videos from far-right channels and three from neutral ones, while the NIA watched seven from neutral channels and three from far-right ones. If this was not possible because not enough videos from neutral or extreme channels were present, videos were watched twice until the quota was met. If no video from any of the initial 20 channels appeared in a given session, the account would watch one of the initial 20 videos used to kick-start the algorithm. A minimal sketch of this selection procedure is shown below.
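
The sketch below is our own minimal reconstruction of that selection logic, not the scripts used in the study; the video dictionaries, channel sets and helper names are hypothetical, and the handling of an entirely empty category is our assumption.

```python
import random

def choose_videos(recommended, extreme_channels, neutral_channels, treatment, kickstart_videos):
    """Select the ten videos an account watches in one session: seven from the
    treatment's preferred category and three from the other, re-watching videos
    when a category is under-represented and falling back to the kick-start
    videos if no subscribed channel appears at all."""
    extreme_pool = [v for v in recommended if v["channel"] in extreme_channels]
    neutral_pool = [v for v in recommended if v["channel"] in neutral_channels]

    if not extreme_pool and not neutral_pool:
        # No video from the initial 20 channels appeared: watch a kick-start video instead.
        return [random.choice(kickstart_videos)]

    if treatment == "EIA":            # extreme interaction account
        quotas = [(extreme_pool, 7), (neutral_pool, 3)]
    else:                             # neutral interaction account (NIA)
        quotas = [(neutral_pool, 7), (extreme_pool, 3)]

    watchlist = []
    for pool, quota in quotas:
        if not pool:
            continue                  # an entirely empty category is skipped (our assumption)
        if len(pool) >= quota:
            watchlist.extend(random.sample(pool, quota))
        else:
            # Not enough videos in this category: some are watched twice until the quota is met.
            watchlist.extend(pool)
            watchlist.extend(random.choices(pool, k=quota - len(pool)))
    return watchlist

# Hypothetical usage: video dicts with a "channel" key, channel sets, and a seed video list.
session = choose_videos(
    recommended=[{"id": i, "channel": f"channel_{i % 5}"} for i in range(18)],
    extreme_channels={"channel_0", "channel_1"},
    neutral_channels={"channel_2", "channel_3"},
    treatment="EIA",
    kickstart_videos=[{"id": "seed", "channel": "channel_0"}],
)
print(len(session))
```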

Quasi-Poisson models were used to estimate rate ratios and expected frequency counts, testing whether extremist or fringe content was more or less prevalent after treatments were applied (Agresti, 2013). To test for differences in the rank of content, we used Wilcoxon rank sum tests, a non-parametric alternative to the unpaired two-samples t-test, chosen due to the non-normality of the rank distribution (Kraska-Miller, 2013).
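
For illustration, a quasi-Poisson model of this kind can be fitted as a Poisson GLM whose dispersion is estimated from the Pearson chi-squared statistic, with rate ratios recovered by exponentiating the coefficients (a rate ratio above 1 means the content category is more frequent after treatment). The sketch below uses statsmodels and SciPy with hypothetical data; it is not the authors’ actual analysis script.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import ranksums

# Hypothetical per-session counts of Extreme videos, before and after treatment.
df = pd.DataFrame({
    "extreme_count":  [0, 1, 0, 2, 1, 3, 4, 2, 5, 3],
    "post_treatment": [0, 0, 0, 0, 0, 1, 1, 1, 1, 1],
})

# Quasi-Poisson: Poisson GLM with the scale (dispersion) estimated via Pearson chi2.
X = sm.add_constant(df["post_treatment"])
model = sm.GLM(df["extreme_count"], X, family=sm.families.Poisson())
result = model.fit(scale="X2")
rate_ratios = np.exp(result.params)   # RR > 1: more extreme content after treatment
print(result.summary())
print(rate_ratios)

# Wilcoxon rank sum test comparing ranks of Extreme vs Moderate recommendations
# (hypothetical ranks; 1 = most visible slot).
extreme_ranks = [5, 3, 7, 4, 6, 2, 8]
moderate_ranks = [10, 12, 9, 14, 11, 13]
stat, p_value = ranksums(extreme_ranks, moderate_ranks)
print(stat, p_value)
```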

Reddit

The design for the Reddit experiment is almost identical to that of YouTube. We created three identical accounts that followed the same selection of far-right (including male supremacist) and apolitical subreddits. We left these accounts dormant for one week to collect baseline data, before applying three treatments:

  1. The Extremist Interaction Account (EIA), which interacted predominantly with far-right content;
  2. The Neutral Interaction Account (NIA), which interacted predominantly with neutral content; and
  3. The Baseline Account (BA), which did not interact at all.

As with YouTube, the two variables of interest were the share and rank of extreme and fringe content, determined by where content appeared on Reddit’s “Best” timeline. Reddit offers 25 posts from top to bottom, giving the top result a rank of 1 through to the bottom result a rank of 25. The same procedure was followed as above: we collected data twice per day by logging in and viewing the recommended posts, with the EIA interacting with seven posts from far-right subreddits and three from apolitical ones, and the NIA interacting with seven apolitical and three far-right posts. Again, Quasi-Poisson models were used to estimate rate ratios and expected frequency counts, and Wilcoxon rank sum tests were used to test differences in rank.

Gab

Gab’s architecture is fundamentally different—and substantially more basic—requiring a simpler approach. One function that Gab offers is the ability to choose between three different types of news feed: “Popular”, “Controversial”, and “Latest”. Although not made explicit by the platform, we judge the first two to be algorithmically driven by non-chronological factors, possibly related to Gab’s upvote and downvote system. “Latest”, by contrast, is by definition based entirely or primarily on the most recent posts, which offers the ability to analyse how algorithmically recommended posts compare against a timeline driven primarily by recency. We collected data from each of the three timeline options for three of Gab’s topics—News, Politics, and Humour—creating nine different investigations in total. We then assessed how much extreme content appeared in each timeline.

The experiments on YouTube and Reddit are relatively similar in aims and scope, while Gab’s investigation diverges. Therefore, the research questions are as follows:

  • RQ1: Does the amount of extreme content increase after applying treatments? (YouTube & Reddit)

  • RQ2: Is extreme content better ranked by the algorithm after applying treatments? (YouTube & Reddit)

  • RQ3: Do Gab’s different timelines promote extreme content? (Gab)

Data were collected over a two-week period in January/February 2019. For YouTube and Reddit, the bots logged in twice per day, which created 28 different sessions, for a total of 1,443 videos (of which 949 were unique), and 2,100 posts on Reddit (of which 834 were unique). Unfortunately, during the data collection period, Gab experienced several technical issues resulting in disruptions to the site, meaning that the authors were only able to log in for five sessions. This still resulted in 1,271 posts being collected, of which 746 were unique, which we deemed adequate for an exploratory investigation.

Coding

Two members of the team coded the data using the Extremist Media Index (EMI), which was developed by Donald Holbrook and consists of three levels: Moderate, Fringe, and Extreme (Holbrook, 2015, 2017b, 2017a). For an item to be categorised as Extreme, it must legitimise or glorify the use of violence or involve stark dehumanisation that renders its targets sub-human. In Holbrook’s research, this category also includes four sub-levels relating to the specificity and immediacy of the violence, but for this study the sample sizes were too small to produce reliable results at that granularity. To be deemed Fringe, content had to be radical, but without justifications of violence; anger or blame might also be expressed towards an out-group, and content may include profanity-laden nicknames that go beyond political discourse (e.g., “libtards” or “feminazis”), or historical revisionism. All other content was deemed Moderate, which can include references to specific out-groups if deemed part of acceptable political discourse.

For inter-rater reliability, the two coders categorised a random sample of 35 pieces of content from each of the three platforms (105 in total). The two raters agreed in 80.76% of cases—74.3% on YouTube, 85.7% on Reddit, and 81.8% on Gab—yielding a Krippendorff’s alpha value of 0.74 (YouTube = 0.77, Reddit = 0.72, Gab = 0.73). These values are deemed acceptable to draw tentative conclusions from the data. The coders then categorised the remaining content on each platform. Only the original post/video was taken into account (i.e., not comments underneath or outward links). On YouTube, many of the videos were several hours long; raters therefore made their decision based on the first five minutes.
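
For illustration, percentage agreement and Krippendorff’s alpha can be computed along the following lines. This is a sketch assuming the third-party krippendorff Python package rather than whatever tool the coders actually used, and the ratings below are hypothetical.

```python
import numpy as np
import krippendorff  # third-party package (pip install krippendorff) — assumed here

# Hypothetical EMI ratings from two coders for ten items (0=Moderate, 1=Fringe, 2=Extreme).
coder_a = [0, 0, 1, 2, 0, 1, 1, 0, 2, 0]
coder_b = [0, 0, 1, 2, 0, 0, 1, 0, 2, 1]

# Simple percentage agreement.
agreement = np.mean(np.array(coder_a) == np.array(coder_b))
print(f"Percentage agreement: {agreement:.1%}")

# Krippendorff's alpha for nominal data; rows are coders, columns are coded items.
alpha = krippendorff.alpha(reliability_data=[coder_a, coder_b],
                           level_of_measurement="nominal")
print(f"Krippendorff's alpha: {alpha:.2f}")
```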

Two thirds of the collected data on YouTube was rated as Moderate (n=949), while 28% was Fringe (409) and 6% was Extreme (85). On Reddit, almost four-fifths of the content was classified as Moderate (1,654), while 20% was Fringe (416) and less than 2% was Extreme (30). On Gab, 64% of posts were deemed Moderate (810), with 29% Fringe (366) and 7% Extreme (95).

Results

RQ1: Does the amount of extreme content increase after applying treatments?

On YouTube, we found that the account that predominantly interacted with far-right materials (the EIA) was twice as likely to be shown Extreme content, and 1.39 times more likely to be recommended Fringe content. Conversely, the NIA and BA were 2.96 and 3.23 times less likely, respectively, to be shown Extreme content. These findings suggest that when users interact with far-right content on YouTube, it is further amplified to them in the future.

Reddit’s recommendation algorithm, on the other hand, does not appear to promote Extreme content for the EIA; none of the models show statistically significant effects, suggesting that interacting with far-right content does not increase the likelihood that a user is recommended further extreme content.

Figure 1: Predicted frequency of content per session (YouTube) for the EIA condition from Quasi-Poisson model. 95% confidence interval shown.

RQ2: Is extreme content better ranked by the algorithm after applying treatments?

As with RQ1, we found that YouTube prioritises Extreme content: it ranked such content significantly higher (i.e., in more visible positions) than Moderate content. For the EIA, the median rank of Extreme content was 5, while that of Moderate content was 10. There was no significant difference between Fringe and Extreme content or between Fringe and Moderate content, and no significant difference between the EMI categories in the NIA or BA.

The results for RQ2 on Reddit also mirror those of RQ1: we found no statistically significant differences between any of the variables in the EIA. We did observe that the NIA decreases the average rank of Fringe content on the platform, which points to a minor filtering effect.

Figure 2: Ranking by EMI scores for the EIA condition (YouTube). Test comparisons are Wilcoxon rank sum tests. * p < 0.05.

RQ3: Do Gab’s timelines promote Extreme content?

The exploratory investigation on Gab did not yield any differences in the promotion of Extreme content across the nine observations (three timelines × three topics). The content in the “Latest” and “Controversial” timelines showed no statistically significant differences between any of the EMI categories, while the “Popular” timeline shows a prioritisation of Fringe content above Moderate, but no statistically significant promotion of Extreme posts.

Figure 3: Ranking by EMI scores in the “Popular” timelines on Gab. Test comparisons are Wilcoxon rank sum tests. ** p < 0.01, * p < 0.05.

Policy discussion

In this section, we synthesise our empirical findings with the existing literature and policy concerns. We assess the current regulatory approaches to the problem, finding that where explicit legislation exists, it is mostly focused on algorithmic transparency, and that there is currently a gap in understanding of how to deal with the problem of borderline content. We argue that a move towards co-regulation between states, social media platforms, and other stakeholders may help to address some of these problems and concerns, including a lack of accountability, a homogenous approach to dealing with borderline content, and the knowledge gaps caused by the slow-moving pace of legislation.

Our research suggests that YouTube can amplify extreme content to its users after a user begins to interact with far-right content. From a policy perspective, this could affirm the concerns of several stakeholders that have highlighted this problem in recent years (HM Government, 2019; Council of the European Union, 2020). For RQ1, we found that after applying a treatment of predominantly far-right content, users were significantly more likely to be recommended videos that were more extreme; the converse was also true, as interacting predominantly with neutral content made extreme content less likely to be shown. For RQ2, we found that after applying an extreme treatment, far-right content was ranked higher on average than moderate content. This is in keeping with the empirical literature on YouTube, which suggests that recommended videos may promote extreme content (O’Callaghan et al., 2015; Schmitt et al., 2018; Ribeiro et al., 2019; Baugut and Neumann, 2020). Importantly, for RQ1 and RQ2, we add a novel contribution to the wider literature by creating experimental conditions with a baseline and controls that account for personalisation, rather than relying on content that could be recommended to users.

It is also important to note that we find minimal filtering of extreme content on both Reddit and Gab. However, we do find extreme and fringe content on all three platforms—consistent with research documenting the far right’s presence on these sites (O’Callaghan et al., 2015; Berger, 2018b; Lewis, 2018; Conway, Scrivens, and Macnair, 2019; Nouri, Lorenzo-Dus, and Watkin, 2019; Copland, 2020; Gaudette et al., 2020). For Reddit and Gab, the lack of algorithmic promotion suggests that other factors, possibly related to the platforms’ other affordances or their user bases, drive extreme content. To reiterate Munger and Phillips’s (2019) point, it is important to consider both the “supply” and the “demand” of radical content on social media platforms. This shows the interrelated nature of the debates surrounding algorithmic promotion of extremism and content removal, which, as we expand on below, brings its own set of challenges with which policymakers must deal when considering appropriate regulatory responses.

Regulatory approaches

Despite repeated concerns from policymakers, the amplification of extremist content by algorithms is currently a blind spot in social media regulation, and addressing it presents challenges to legislators. One challenge is that current and planned national regulation is focused on the moderation and removal of illegal content rather than on amplification. While the UK Online Harms White Paper (2019) initially addressed the question of content amplification, subsequent consultations have seemingly relegated its importance. The German NetzDG (2017) avoids the issue of the amplification of extremist content by focusing solely on the removal of illegal hate speech. Both pieces of legislation are focused on regulating the removal of harmful—or, in the case of NetzDG, illegal—content from social media through heavy fines if a platform does not implement notice and take-down mechanisms. This is mirrored in the proposed legislation from the European Parliament on preventing terrorist content online (European Parliament, 2019).

At the time of writing, the UK Online Harms Bill has not been presented to Parliament; the following is therefore derived from the Online Harms White Paper (HM Government, 2019) and accompanying responses. The Bill proposes a duty of care for platforms which covers terrorism and extremism. In the white paper, the UK government referred to the issues surrounding the operation of algorithms and their role in amplifying extreme content: “Companies will be required to ensure that algorithms selecting content do not skew towards extreme and unreliable material in the pursuit of sustained user engagement” (HM Government, 2019, p. 72). The white paper affirmed that the proposed regulator would have the power to inspect algorithms in situ to understand whether they lead to bias or harm. This power is analogous to the form of algorithmic auditing advocated by Mittelstadt, who argues for the “prediction of results from new inputs and explanation of the rationale behind decision, such as why a new input was assigned a particular classification” (Mittelstadt, 2016, p. 4994). This can, in principle, be implemented at each stage of the algorithm’s development and lines up well with the regulator’s proposed power to review the algorithm in situ. Mittelstadt also advocates impact auditing that investigates the “types, severity, and prevalence of effects of an algorithm’s outputs”, which can be conducted while the algorithm is in use (Mittelstadt, 2016, p. 4995). Reference is also made to the regulator requiring companies to demonstrate how algorithms select content for children, and to provide the means for testing the operation of these algorithms, which implies the development of accountable algorithms (Kroll et al., 2016).
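
To make concrete what such impact auditing could involve in practice, the sketch below is purely illustrative: the toy recommender, the controlled test profiles and the category labels are hypothetical stand-ins, not a description of any platform’s system or of a prescribed regulatory tool. The audit simply measures how often each content category is returned to different test profiles, which is the kind of output a regulator reviewing an algorithm in situ might examine.

```python
from collections import Counter
import random

CATEGORIES = ["moderate", "fringe", "extreme"]

def toy_recommender(profile):
    """Hypothetical stand-in for a platform recommender: returns 18 items whose
    category mix is skewed by how much far-right content the profile has watched."""
    weights = [1.0,
               0.2 + profile["far_right_watch_share"],
               0.05 + profile["far_right_watch_share"] / 2]
    return random.choices(CATEGORIES, weights=weights, k=18)

def impact_audit(recommender, profiles, sessions=100):
    """Impact audit: measure the prevalence of each category across many
    sessions for each controlled test profile."""
    report = {}
    for name, profile in profiles.items():
        counts = Counter()
        for _ in range(sessions):
            counts.update(recommender(profile))
        total = sum(counts.values())
        report[name] = {cat: counts[cat] / total for cat in CATEGORIES}
    return report

profiles = {
    "neutral_user": {"far_right_watch_share": 0.0},
    "far_right_user": {"far_right_watch_share": 0.7},
}
print(impact_audit(toy_recommender, profiles))
```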

However, in the final government response to the consultation, which sets out the current plans for the Bill, algorithms are barely mentioned. The response states that search engines should ensure that “algorithms and predictive searches do not promote illegal content”, referring specifically to child sexual exploitation images (HM Government, 2020, para 1.3). This has been implemented in the interim Code of Practice on child sexual exploitation images and abuse. No such reference is made in the duty of care for terrorist material or the corresponding interim Code of Practice on terrorist content and activity online.

Transparency

Where legislation does address content amplification explicitly, regulation is largely limited to transparency requirements. Recommender systems are explicitly addressed in Article 29(1) of the EU Digital Services Act (2021). This requires very large platforms which use recommender systems to set out:

In their terms and conditions, in a clear, accessible and easily comprehensible manner, the main parameters used in their recommender systems, as well as any options for the recipients of the service to modify or influence those main parameters that they may have made available, including at least one option which is not based on profiling. (2021, Art. 29(1))

The following paragraph specifies further that, where several options are available pursuant to Article 29(1), very large online platforms "shall provide an easily accessible functionality on their online interface allowing the recipient of the service to select and to modify at any time their preferred option for each of the recommender systems that determines the relative order of information presented to them". In its current iteration, Facebook’s ‘Why am I seeing this?’ tool likely meets these requirements (Sethuraman, 2019). These provisions only apply to very large platforms, defined as having an audience of at least 10% of the EU population (DSA 2020, para 54); smaller platforms may therefore escape transparency requirements. Additionally, the wording of Article 29 DSA gives social media platforms a large amount of discretion as to which parameters they make available for users to modify or influence, effectively affirming the platforms themselves as the leaders of recommender system regulation (Helberger, 2021). It is worth remembering that the transparency afforded to users by the ‘Why am I seeing this?’ tool is not total transparency: the data available to users, as well as to researchers, is subject to the politics of visibility and politics of knowledge within Facebook, implemented via changes at an interface and software level (Stefanija & Pierson, 2020, p. 112).

Like the DSA, where the Online Harms Bill does address the regulation of algorithms, it does so in pursuit of transparency. The UK government established a multi-stakeholder Transparency Working Group which included representatives from civil society and industry, including Facebook, Google, Microsoft and Twitter. Discussions and recommendations in this group did not go far beyond the specifics of transparency reports (HM Government, 2020, para 2.2). One exception is the recommendation that reporting include, where appropriate, information on the use of algorithms and automated processes in content moderation. However, it was noted that certain information about the operation of companies’ algorithms is commercially sensitive or could pose risks to user safety if publicly disclosed (HM Government, 2020, para 6.21). The EU Counter-Terrorism Coordinator refines this point by stating that companies’ transparency reports should include detailed information concerning their recommendation practices: whether blocked or removed illegal and borderline content was promoted by the platform’s algorithms, including the number of views such content received, data on how often it was recommended to users, and whether human oversight was involved (Council of the European Union, 2020).

The German implementation of the Audiovisual Media Services Directive, the Medienstaatsvertrag (2020), does address the issue of content amplification, but heavy regulation is reserved for media platforms such as Netflix or Amazon Prime. So-called media intermediaries, such as YouTube or other video hosting sites, are given a lighter touch. In terms of transparency, for example, media platforms must disclose the way selection criteria are weighted, the functioning of the algorithm, how users can adjust and personalise the sorting, and the reasoning behind content recommendations (2020, pp. 78-90). By contrast, media intermediaries must only disclose the selection criteria that determine the sorting and presentation of content. These disclosures must be made in easily recognisable, directly accessible, and constantly available formats (ibid, pp. 91-96). Facebook’s ‘Why am I seeing this?’ tool goes further than its media intermediary obligation by disclosing the functioning of the algorithm and explaining the reasoning behind content recommendations. One interesting provision prohibits media intermediaries from discriminating against journalistic and editorial content or treating it differently without appropriate justification; if a provider of such content believes that it has been discriminated against, it can file a claim with the relevant state broadcasting authority (ibid, p. 94). It is difficult to imagine how discrimination could be proven in practice, however, and it is unclear how this interacts with the objective of recommender systems and search engines, which is precisely to discriminate between content (Nelson and Jaursch, 2020).

While these attempts at increased transparency are welcome, they do not solve the issue of the algorithmic promotion of borderline, but legal, extremist material. A user who is presented with clear information on why a particular video was recommended to them may reconsider watching it, but they may also disregard that information—particularly if the recommended video is something they are likely to find agreeable. Regulation of this issue must therefore move beyond transparency and into the question of recommending borderline content, which takes legislators into the more ethically challenging area of deciding what counts as borderline.

The case of borderline content

Policymakers have suggested that algorithms promote legal, yet borderline content that can be harmful and lead to radicalisation. As a solution, many have argued that platforms should restrict the flow of legal, yet potentially harmful content to their audiences. The EU Counter-Terrorism Coordinator notes that recommendation systems are specifically designed to target users rather than just organise content generally, and argues that there should therefore be no exemption from liability (Council of the European Union, 2020). While there is an argument that platforms such as YouTube are not the neutral intermediaries they claim to be (Suzor, 2018), this would seem to take that argument to an impractical conclusion.

Although studies, including ours, do find extreme and fringe content within platforms’ recommendations, they use neither a legal definition of extremism nor one that mirrors platforms’ terms of service. Extremism is a difficult phenomenon to define and identify and is subject to a great degree of academic debate (for example, see: Schmid, 2013; Berger, 2018a). This invariably leads to a sizable grey zone of borderline content that represents a challenge for regulation (Bishop et al., 2019; van Der Vegt et al., 2019; Conway, 2020). In the face of such liability, over-removal is a potential problem, leading to concerns around free speech. The Coordinator attempts to address this problem by citing platforms’ ability to automatically find, limit and remove copyrighted content. However, this is not a reasonable comparison, given the much larger grey area around extreme content and the freedom of expression issues that arise.

It is unclear whether this action would be contrary to EU law. Article 14 of the e-Commerce Directive exempts intermediaries from liability for the content they manage if they fulfil certain conditions, including removing illegal content as quickly as possible once they are aware of its illegal nature and playing a neutral, merely technical and passive role towards the hosted content (e-Commerce Directive, 2000, Art. 14). The European Court of Justice takes a case-by-case approach to whether a hosting intermediary has taken a passive role; however, the use of algorithms or automatic means to select, organise or present information would not be sufficient to automatically meet the active-role standard (Google France and Google, 2010, paras 115-120). This was echoed by the European Commission in stating that the mere fact that an intermediary hosting service provider ‘takes certain measures relating to the provision of its services in a general manner does not necessarily mean that it plays an active role in respect of the individual content items it stores’ (European Commission, 2017, p. 11).

This approach has been criticised by Tech Against Terrorism, who argue that discussions of removing legal content from recommendations are misplaced and misunderstand the nature of terrorists’ use of the internet. They argue that it has harmful implications for freedom of speech and the rule of law and raises serious concerns over extra-legal norm-setting. Moreover, the definitional subjectivity of concepts like extremism (as laid out above) or “harmful” makes this difficult to operationalise (Tech Against Terrorism, 2021). They assert that norm-setting should instead come from consensus-driven mechanisms led by democratically accountable institutions.

Some platforms have unilaterally taken steps to remove borderline content from their recommendations. In 2017, YouTube announced that content that did not clearly violate its policies but was deemed potentially extreme (it gives the examples of inflammatory religious or supremacist content) would appear behind a warning and would not be eligible for monetisation, recommendation, comments or endorsements (Walker, 2017). YouTube argues that this makes the content harder to find, striking the right balance between free speech and access to information without amplifying extreme viewpoints. Facebook has adopted a similar approach; it lists a range of factors that may cause it to remove otherwise permitted content from recommendations, including, relevantly for this discussion, content from accounts that have recently violated the platform’s community standards and accounts associated with offline movements tied to violence (Facebook, n.d.). Reddit has a policy of “quarantining” subreddits that are grossly offensive, which removes them from the platform’s recommendation system and forces users to opt in to see their content (Reddit, 2021). One example is r/The_Donald, which Gaudette et al. (2020) identify as hosting problematic extremist content; however, the platform has been accused of only applying these measures after the subreddit had received negative media attention (Romano, 2017).

Currently, choosing to remove legal, yet potentially problematic content from recommendations is a choice for individual platforms—i.e., self-regulation. However, the EU Counter-Terrorism Coordinator argues, this can be problematic if platforms are unable or unwilling to de-amplify content: ‘Some companies have no incentive to promote a variety of viewpoints or content…since recommending polarising content remains the most efficient way to expand watch time and gather more data on customers, to better target advertising and increase the returns’ (Council of the European Union, 2020, p. 5). This speaks to the lack of a strong natural coincidence between the public and private interest in this area. Absent such a natural coincidence, one or more external pressures sufficient to create it are needed (Gunningham and Rees, 1997, p. 390). As discussed above, there are also issues with regulation which holds platforms liable for this type of content; we therefore advocate the use of co-regulation to achieve this coincidence.

Towards co-regulation

We define co-regulation as a regulatory scheme which combines elements of self-regulation (and self-monitoring) with elements of traditional public authority regulation. A key aspect of a co-regulatory regime is the self-contained development of binding rules by the co-regulatory organisation and that organisation’s responsibility for those rules (Palzer, 2003). While this concept encompasses a range of regulatory phenomena, each co-regulatory regime consists of a complex interaction of general legislation and a self-regulatory body (Marsden, 2010, p. 222). In the EU context, it is defined as a mechanism whereby a community legislative act entrusts the attainment of the objectives defined by the legislative authority to parties which are recognised in the field.

There are opportunities to implement co-regulation in present and upcoming legislation. Under the UK Online Harms Bill, the government will set objectives for the regulator’s (Ofcom’s) codes of practice in secondary legislation to provide clarity for the framework. Ofcom will have a duty to consult interested parties on the development of the codes of practice (HM Government, 2020, para. 31). The requirements for consultation are low, meaning that not only social media companies but also other stakeholders such as civil society can participate. However, as highlighted above, the UK government’s final consultation response, which sets out the current plans for the Online Harms Bill, makes little mention of plans to regulate recommender systems or other forms of content amplification; it remains unclear whether they will be addressed in the development of the codes of practice.

The DSA also provides a mechanism for very large platforms to cooperate in drawing up codes of conduct, thus providing another avenue for co-regulation of this issue (DSA 2020, Art. 35). A co-regulatory scheme in which government, industry and civil society can fully participate would be the appropriate venue to find a solution to the amplification of borderline and extremist content. This lines up with previous work on the increasingly shared responsibilities between states and companies, and with the trend of social media platforms adopting measures which are increasingly similar to administrative law (Heldt, 2019). Of particular note is the framework of cooperative responsibility sketched out by Helberger, Pierson, and Poell (2018), which emphasises the need for dynamic interaction between platforms, users and public institutions in realising core public values in these online sectors. This still leaves the ethical and practical question of who decides what is inappropriate to amplify, but doing so in a consensual manner has clear benefits.

The primary benefit of a co-regulatory approach is that it avoids the respective drawbacks of purely self-regulatory and purely public authority approaches (Palzer, 2003). One issue with self-regulation is that there may be an accountability gap, as the social media companies in question are responsible for holding themselves accountable (Campbell, 1998). At present, that may mean little is done to tackle the amplification of borderline and extremist content. Sufficiently bad press may provoke a company to act, but such action could be subject to short-termism, as the company acts in its immediate economic self-interest, which could lead to hasty and arbitrary decisions (Gunningham and Rees, 1997). Thus far, initiatives to regulate social media platforms have been mostly self-regulatory, such as the Global Internet Forum to Counter Terrorism, the Facebook Oversight Board, or the above-mentioned methods of removing content from recommendations.

Public authority regulatory approaches may lead to a knowledge gap, as states’ attempts to regulate technology may be outdated by the time they are implemented (Ayres and Braithwaite, 1992). For example, the Online Harms Bill was proposed in 2017 and, as of the writing of this paper, has still not been introduced to Parliament. A co-regulatory approach provides the opportunity to implement novel solutions such as algorithmic auditing and accountable algorithms. A regulatory body such as Ofcom could work with social media platforms to conduct developmental auditing in order to build adequate and sufficient safeguards into their algorithms. It could analyse the impact an algorithm has on the average user, potentially through methodologies similar to those used in this paper, although on a much larger scale. Another benefit of a co-regulatory approach is that it avoids leaving the responsibility for filling in the details of the law to programme developers. This avoids the problem of a developer designing a wide-ranging algorithm to solve a political problem on which they likely have little substantive expertise, and with little possibility of political accountability (Kroll et al., 2016). Wachter and Mittelstadt have advocated a right to ‘reasonable inferences’ to close the accountability gap posed by big data inferences which damage privacy or reputation, or which are used in important decisions despite having low verifiability (Wachter and Mittelstadt, 2019). Should such a right be established, a co-regulatory body could audit or help design algorithms which keep inferences and subsequent recommendations, nudges and manipulations to a reasonable level.

Conclusion

We anticipate that the role of social media recommendation algorithms in amplifying extremist content will continue to be a point of policy concern. It seems inevitable that news organisations will continue to publish stories highlighting instances of unsavoury content being recommended to users, and policymakers will continue to be concerned that this is harmful and may exacerbate radicalisation trajectories. This article has sought to bring clarity to this debate in two ways. Firstly, it has provided the first empirical assessment of the interaction between extremist content and platforms’ recommendation systems in experimental conditions that account for personalisation. The findings suggest that one platform—YouTube—may promote far-right materials after a user interacts with such content. The other two platforms—Reddit and Gab—showed no signs of amplifying extreme content via their recommendations.

Secondly, we contextualise these findings within the policy debate. At first glance, our research seems to support policy concerns regarding radical filter bubbles. However, we argue that our findings also point towards other problematic aspects of contemporary social media: more focus needs to be paid to the online radical milieu and the audience of extremist messaging. Despite algorithmic amplification repeatedly being singled out by policymakers as problematic, there are currently few regulatory instruments in place to address it. Moreover, where regulatory policy does exist, it tends to focus on transparency, which, while welcome, is only a partial solution to the amplification of extreme content. We argue that policy is yet to fully grapple with the difficulties of “grey area” content as it relates to content amplification. Currently, platforms are left to self-regulate in this area, and policymakers argue that they could do more. However, self-regulation can be problematic because of the lack of coincidence between public and private interests. We argue that a movement towards co-regulation offers numerous benefits, as it can narrow the accountability gap while maintaining the opportunity for novel solutions from industry leaders.

Acknowledgments

We are grateful to our reviewers Amélie Heldt, Jo Pierson, Francesca Musiani, and Frédéric Dubois, who each helped improve this article through the peer-review process.

References

Agresti, A. (2013). Categorical Data Analysis. John Wiley & Sons.

Ayres, I., & Braithwaite, J. (1992). Responsive regulation: Transcending the deregulation debate. Oxford University Press.

Azeez, W. (2019, May 15). YouTube: We’ve Learnt Lessons From Christchurch Massacre Video. Yahoo Finance UK. https://uk.finance.yahoo.com/news/you-tube-weve-learnt-lessons-from-christchurch-massacre-video-163653027.html

Bakshy, E., Messing, S., & Adamic, L. (2015). Exposure to ideologically diverse news and opinion on Facebook. Science, 348(6239), 1130–1132. https://doi.org/10.1126/science.aaa1160

Baugut, P., & Neumann, K. (2020). Online propaganda use during Islamist radicalization. Information Communication and Society, 23(11), 1570–1592. https://doi.org/10.1080/1369118x.2019.1594333

Berger, J. M. (2013). Zero Degrees of al Qaeda. Foreign Policy. http://foreignpolicy.com/2013/08/14/zero-degrees-of-al-qaeda/.

Berger, J. M. (2018a). Extremism. MIT Press.

Berger, J. M. (2018b). The Alt-Right Twitter Census.

Bishop, P. (2019). Response to the Online Harms White Paper. Swansea University, Cyber Threats Research Centre. https://www.swansea.ac.uk/media/Response-to-the-Online-Harms-White-Paper.pdf

Bozdag, E. (2013). Bias in algorithmic filtering and personalization. Ethics and Information Technology, 15(3), 209–227. https://doi.org/10.1007/s10676-013-9321-6

Bruns, A. (2019). Filter bubble. Internet Policy Review, 8(4). https://doi.org/10.14763/2019.4.1426

Bucher, T. (2017). The algorithmic imaginary: Exploring the ordinary affects of Facebook algorithms. Information, Communication & Society, 20(1), 30–44. https://doi.org/10.1080/1369118X.2016.1154086

Campbell, A. J. (1998). Self-regulation and the media. Fed. Comm. LJ, 51.

Christchurch Call. (2019). The Call. https://www.christchurchcall.com/call.html

Commission for Countering Extremism. (2019). Challenging Hateful Extremism.

Conway, M. (2016). Violent Extremism and Terrorism Online in 2016: The Year in Review. VOX-Pol.

Conway, M. (2020). Routing the Extreme Right: Challenges for Social Media Platforms. RUSI Journal. https://doi.org/10.1080/03071847.2020.1727157

Conway, M., Scrivens, R., & Macnair, L. (2019). Right-Wing Extremists’ Persistent Online Presence: History and Contemporary Trends. ICCT Policy Brief.

Copland, S. (2020). Reddit quarantined: Can changing platform affordances reduce hateful material online? Internet Policy Review, 9(4), 1–26. https://doi.org/10.14763/2020.4.1516

Council of the European Union. (2020). The Role of Algorithmic Amplification in Promoting Violent and Extremist Content and its Dissemination on Platforms and Social Media.

Courtois, C., Slechten, L., & Coenen, L. (2018). Challenging Google Search filter bubbles in social and political information: Disconforming evidence from a digital methods case study. Telematics and Informatics, 35(7), 2006–2015. https://doi.org/10.1016/j.tele.2018.07.004

DeVito, M. A. (2016). From Editors to Algorithms. Digital Journalism, 1–21. https://doi.org/10.1080/21670811.2016.1178592

Dylko, I. (2018). Impact of Customizability Technology on Political Polarization. Journal of Information Technology and Politics, 15(1), 19–33. https://doi.org/10.1080/19331681.2017.1354243

Eslami, M. (2015). “I always assumed that I wasn’t really that close to [her]”. CHI’15 Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, 153–162. https://doi.org/10.1145/2702123.2702556

E.U. Commission. (2000). Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services. https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=CELEX%3A32000L0031.

E.U. Commission. (2017). Tackling Illegal Content Online; Towards an enhanced responsibility of online platforms. https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=47383

E.U. Commission. (2020). Single Market for Digital Services (Digital Services Act) and amending Directive 2000/31/EC. https://www.euractiv.com/wp-content/uploads/sites/2/2020/12/Digital_Services_Act__1__watermark-3.pdf.

European Parliament. (2019). Terrorist content online should be removed within one hour, says EP [Press Release]. European Parliament. https://www.europarl.europa.eu/news/en/press-room/20190410IPR37571/terrorist-content-online-should-be-removed-within-one-hour-says-ep.

Facebook Help Centre. (n.d.). What are recommendations on Facebook? https://www.facebook.com/help/1257205004624246

Gaudette, T. (2020). Upvoting Extremism: Collective identity formation and the extreme right on Reddit. New Media and Society. https://doi.org/10.1177/1461444820958123

H.M. Government. (2020). The Government Report on Transparency Reporting in relation to Online Harms [Report]. The Stationery Office. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/944320/The_Government_Report_on_Transparency_Reporting_in_relation_to_Online_Harms.pdf

Gunningham, N., & Rees, J. (1997). Industry self‐regulation: An institutional perspective. Law & Policy, 19(4), 363–414. https://doi.org/10.1111/1467-9930.t01-1-00033

Haim, M., Graefe, A., & Brosius, H. B. (2018). Burst of the Filter Bubble? Effects of personalization on the diversity of Google News. Digital Journalism, 6(3), 330–343. https://doi.org/10.1080/21670811.2017.1338145

Helberger, N., Pierson, J., & Poell, T. (2018). Governing online platforms: From contested to cooperative responsibility. The Information Society, 34(1), 1–14. https://doi.org/10.1080/01972243.2017.1391913

Heldt, A. (2019). Let’s Meet Halfway: Sharing New Responsibilities in a Digital Age. Journal of Information Policy, 9, 336–369. https://doi.org/10.5325/jinfopoli.9.2019.0336

H.M. Government. (2019). Online Harms White Paper [White Paper]. The Stationery Office. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/793360/Online_Harms_White_Paper.pdf.

Holbrook, D. (2015). Designing and Applying an “Extremist Media Index”. Perspectives on Terrorism, 9(5), 57–68.

Holbrook, D. (2017a). Terrorism as process narratives: A study of pre-arrest media usage and the emergence of pathways to engagement. Terrorism and Political Violence (in press).

Holbrook, D. (2017b). What Types of Media Do Terrorists Collect? International Centre for Counter-Terrorism.

Kraska-Miller, M. (2013). Nonparametric Statistics for Social and Behavioral Sciences. CRC Press.

Kroll, J. A., Barocas, S., Felten, E. W., Reidenberg, J. R., Robinson, D. G., & Yu, H. (2016). Accountable algorithms. U. Pa. L. Rev, 165, 633.

Ledwich, M., & Zaitsev, A. (2019). Algorithmic Extremism: Examining YouTube’s Rabbit Hole of Radicalization. http://arxiv.org/abs/1912.11211.

Lewis, R. (2018). Alternative Influence: Broadcasting the Reactionary Right on YouTube. https://datasociety.net/research/media-manipulation.

Marsden, C. T. (2010). Net Neutrality: Towards a Co-regulatory Solution. Bloomsbury Academic.

Mittelstadt, B. (2016). Automation, algorithms, and politics| auditing for transparency in content personalization systems. International Journal of Communication, 10, 12.

Möller, J., Trilling, D., Helberger, N., & van Es, B. (2018). Do not blame it on the algorithm: An empirical assessment of multiple recommender systems and their impact on content diversity. Information, Communication & Society, 21(7), 959–977. https://doi.org/10.1080/1369118X.2018.1444076

Munger, K., & Phillips, J. (2019). A Supply and Demand Framework for YouTube Politics [Preprint]. Penn State Political Science.

Munger, K., & Phillips, J. (2020). Right-Wing YouTube: A Supply and Demand Perspective. International Journal of Press/Politics. https://doi.org/10.1177/1940161220964767

Napoli, P. M. (2014). Automated media: An institutional theory perspective on algorithmic media production and consumption. Communication Theory, 24(3), 340–360. https://doi.org/10.1111/comt.12039

Napoli, P. M. (2015). Social media and the public interest: Governance of news platforms in the realm of individual and algorithmic gatekeepers. Telecommunications Policy, 39(9), 751–760. https://doi.org/10.1016/j.telpol.2014.12.003

Nelson, M., & Jaursch, J. (2020). Germany’s new media treaty demands that platforms explain algorithms and stop discriminating. Can it deliver? Algorithm Watch. https://algorithmwatch.org/en/new-media-treaty-germany/

Nouri, L., Lorenzo-Dus, N., & Watkin, A. (2019). Following the Whack-a-Mole Britain First’s Visual Strategy from Facebook to Gab. Global Research Network on Terrorism and Technology, 4.

O’Callaghan, D. (2015). Down the (White) Rabbit Hole: The Extreme Right and Online Recommender Systems. Social Science Computer Review, 33(4), 459–478. https://doi.org/10.1177/0894439314555329

Ottoni, R. (2018). Analyzing Right-wing YouTube Channels: Hate, Violence and Discrimination. Proceedings of the 10th ACM Conference on Web Science. https://doi.org/10.1145/3201064.3201081

Palzer, C. (2003). Self-monitoring v. Self-regulation v. Co-regulation. In W. Closs, S. Nikoltchev, & European Audiovisual Observatory (Eds.), Co-regulation of the media in Europe (pp. 29–31).

Pariser, E. (2011). The filter bubble: What the Internet is hiding from you. Penguin Press.

Reddit Help. (2021). Quarantined Subreddits. https://reddit.zendesk.com/hc/en-us/articles/360043069012-Quarantined-Subreddits

Ribeiro, M. H. (2019). Auditing Radicalization Pathways on YouTube [Preprint]. arXiv. http://arxiv.org/abs/1908.08313

Ricci, F., Rokach, L., & Shapira, B. (2011). Recommender Systems Handbook. Springer. https://doi.org/10.1007/978-0-387-85820-3

Romano, A. (2017). Reddit’s TheRedPill, notorious for its misogyny, was founded by a New Hampshire state legislator. Vox. https://www.vox.com/culture/2017/4/28/15434770/red-pill-founded-by-robert-fisher-new-hampshire

Schmid, A. P. (2013). Radicalisation, De-Radicalisation, Counter-Radicalisation: A Conceptual Discussion and Literature Review [Research Paper]. International Centre for Counter-Terrorism. http://www.icct.nl/download/file/ICCT-Schmid-Radicalisation-De-Radicalisation-Counter-Radicalisation-March-2013.pdf.

Schmitt, J. B. (2018). Counter-messages as prevention or promotion of extremism?! The potential role of YouTube. Journal of Communication, 68(4), 758–779. https://doi.org/10.1093/joc/jqy029

Seaver, N. (2018). Captivating algorithms: Recommender systems as traps. Journal of Material Culture. https://doi.org/10.1177/1359183518820366

Sethuraman, R. (2019). Why Am I Seeing This? We Have an Answer for You. Facebook. https://about.fb.com/news/2019/03/why-am-i-seeing-this/

Stefanija, A. P., & Pierson, J. (2020). Practical AI Transparency: Revealing Datafication and Algorithmic Identities. Journal of Digital Social Research, 2(3), 84–125.

Sunstein, C. R. (2002). The law of group polarization. The Journal of Political Philosophy, 10(2), 175–195. https://doi.org/10.1002/9780470690734.ch4

Suzor, N. (2018). Digital constitutionalism: Using the rule of law to evaluate the legitimacy of governance by platforms. Social Media + Society, 4(3). https://doi.org/10.1177/2056305118787812

Tech Against Terrorism. (2021). Content personalisation and the online dissemination of terrorist and violent extremist content [Position paper]. https://www.techagainstterrorism.org/wp-content/uploads/2021/02/TAT-Position-Paper-content-personalisation-and-online-dissemination-of-terrorist-content1.pdf

van Der Vegt, I. (2020). Online influence, offline violence: Language Use on YouTube surrounding the “Unite the Right” rally. Journal of Computational Social Science, 4, 333–354. https://doi.org/10.1007/s42001-020-00080-x

van Der Vegt, I., Gill, P., Macdonald, S., & Kleinberg, B. (2019). Shedding Light on Terrorist and Extremist Content Removal (Paper No. 3). Global Research Network on Terrorism and Technology. https://rusi.org/explore-our-research/publications/special-resources/shedding-light-on-terrorist-and-extremist-content-removal

Vīķe‐Freiberga, V., Däubler-Gmelin, H., Hammersley, B., & Maduro, L. M. P. P. (2013). A Free and Pluralistic Media to Sustain European Democracy [Report]. EU High Level Group on Media Freedom and Pluralism.

Wachter, S., & Mittelstadt, B. (2019). A Right to Reasonable Inferences: Re-Thinking Data Protection Law in the Age of Big Data and AI. Columbia Business Law Review, 2, 494–620. https://doi.org/10.7916/cblr.v2019i2.3424

Walker, K. (2017, June). Four Steps We’re Taking Today to Fight Terrorism Online [Blog post]. Google. https://www.blog.google/around-the-globe/google-europe/four-steps-were-taking-today-fight-online-terror/

Waters, G., & Postings, R. (2018). Spiders of the Caliphate: Mapping the Islamic State’s Global Support Network on Facebook [Report]. Counter Extremism Project. https://www.counterextremism.com/sites/default/files/Spiders%20of%20the%20Caliphate%20%28May%202018%29.pdf

Whittaker, J. (2020). Online Echo Chambers and Violent Extremism. In S. M. Khasru, R. Noor, & Y. Li (Eds.), The Digital Age, Cyber Space, and Social Media: The Challenges of Security & Radicalization (pp. 129–150). Dhaka Institute for Policy, Advocacy, and Governance.

Zuiderveen Borgesius, F. J., Trilling, D., Möller, J., Bodó, B., Vreese, C. H., & Helberger, N. (2016). Should we worry about filter bubbles? Internet Policy Review, 5(1). https://doi.org/10.14763/2016.1.401

Statutes cited:

Bundestag (2017), Network Enforcement Act (Netzwerkdurchsetzungsgesetz, NetzDG).

Bundestag (2020), State Treaty on the modernisation of media legislation in Germany (Medienstaatsvertrag) https://ec.europa.eu/growth/tools-databases/tris/en/index.cfm/search/?trisaction=search.detail&year=2020&num=26&dLang=EN

Filter Bubble Transparency Act. (2019). S, LYN19613

French National Assembly. (2019). Lutte Contre la Haine sur Internet [Law on combating hate on the internet].

Cases cited:

Case C-236/08 Google France SARL and Google Inc. v Louis Vuitton Malletier SA [2010] R.P.C. 19

Footnotes

1. It is worth noting that this study was commissioned and undertaken by Facebook employees and studied the platform in question.

2. In their paper, they use the Anti-Defamation League's description of the Alt-Right as: "A loose segment of the white supremacist movement consisting of individuals who reject mainstream conservatism in favor of politics that embrace racist, anti-Semitic and white supremacist ideology" (Ribeiro et al., 2019, p. 2).

3. They argue that the Alt-lite label was created to demarcate individuals and content that engage in civic nationalism but deny a link to white supremacy.

4. A group of contrarian academics and podcast hosts who discuss and debate a range of social issues such as abortion, LGBT issues, identity politics, and religion.

5. The authors follow the policy of Vox-Pol and J. M. Berger by not identifying the names of accounts in this research, both to avoid increasing their exposure and to protect privacy. See Berger (2018).

Information interventions and social media


Introduction

When speaking to a group of reporters in 2018, the chairman of the UN Independent International Fact-Finding Mission on Myanmar, Marzuki Darusman, was clear that social media had a "determining role" in suspected acts of genocide in the country, arguing "[i]t has… substantively contributed to the level of acrimony, dissention and conflict…Hate speech is certainly part of that. As far as the Myanmar situation is concerned, social media is Facebook, and Facebook is social media" (Miles, 2018, n.p.). This view was further outlined with supporting evidence in the UN Mission's report later that year, which also referred to reports from human rights observers going back to at least 2012 that identified the role of social media in provoking violence by promoting anti-Rohingya discourse along with inaccurate and inflammatory images of violence.

Myanmar is not an isolated case of social media's role in violent conflict. The use of social media has been deeply intertwined with the decade-long war in Syria (O'Neil, 2013), while in the Central African Republic, online hate speech has been directly blamed for provoking mass atrocities between Christians and Muslims (Schlein, 2018). In Sri Lanka, rumours on social media are widely regarded as having provoked a number of religious attacks, including the 2019 Easter Sunday church and hotel bombings (Fisher, 2019). This follows a longer legacy of the role of mass media in violent conflict, from the use of newspapers in Nazi propaganda campaigns (Herzstein, 1978) to the more recent conflicts involving radio in Rwanda and satellite television in Somalia (Allen and Stremlau, 2005; Stremlau, 2018).

In some cases of conflict involving mass media, international actors— including the United Nations and African Union—have undertaken ‘information interventions’, a term that came into its own in the mid-1990s in response to the ongoing conflict in the Balkans, and the use of radio in the Rwandan genocide (Metzl, 1997; Price and Thompson, 2002). Measures that fall under the broad umbrella of information intervention include the use of force to close newspapers or bomb radio transmitters, or softer interventions such as peace broadcasting (which entails supplementing existing media content with programming aimed at bridging divisions and encouraging reconciliation between conflicting parties), and conflict-sensitive journalism training.

While the harder forms of information interventions (such as the shutdowns of outlets) have been applied to mass media, in this article we focus specifically on the relevance of information interventions for online communications, and social media in particular. In doing so, we are primarily concerned with information interventions as a tool to forcibly silence certain voices or outlets, for example, the shutdown of social media sites, or even a partial or complete internet shutdown, on the part of international actors (including the United Nations (UN), the African Union (AU), or other multilateral organisations) to halt mass atrocities. If the state is the perpetrator of the violence, there might be a greater argument for external intervention, but if non-state actors are involved, the state itself might wish to respond by temporarily blocking social media or blocking the internet (although we recognise that rarely are conflicts so clear). Our focus here, however, is primarily on the potential role of international actors, rather than the state response. In other words, in cases where social media are misused to instigate violence and spread online hate that encourages genocide or mass atrocities, do international actors have a responsibility to launch an information intervention? And what exactly does an information intervention look like in the context of information spreading online?

The growing prominence of social media in disseminating disinformation, spreading hate and inciting violence prompts urgent questions about whether—and to what extent—the doctrine of information intervention can be applied during mass atrocities when violence and hate are promoted through social media channels. As Tufekci (2018) also underlines, social media can be drivers of hate and radicalisation. Their role in spreading hate and violence raises the question of how this can be mitigated. We recognise that the doctrine of information intervention needs to account for the peculiarities of the digital information landscape and the affordances of social media platforms. Unlike traditional media outlets, which are often physically located in the countries where mass atrocities occur, social media companies typically operate from outside those countries; in the absence of cooperation from a particular company, the only way to restrict such content may be to limit access to particular sites or to shut down the internet. The blunter the tool, such as an internet shutdown, the more problematic it is, as it will have wider implications for society, not least in terms of expression, governance, and commerce (Marchant and Stremlau, 2020a, b).

Thus, against a worrying backdrop of inflammatory voices and online incitement to violence, we unpack what information interventions might mean where social media are involved, and to what extent they can be justified according to international law. While engaging with the notion of 'intervention', we focus on whether international actors could be given the legitimacy to intervene in a target country to shut down social media activities as part of an effort to halt violent conflict. States have agreed that international law, including the principles of sovereignty and non-intervention, applies to states' activities in cyberspace (UNGA, 2013); at the same time, as observed by Efrony and Shany (2018, pp. 583-657), states rely on a 'policy of silence and ambiguity' to ensure broad margins of flexibility within the digital realm.

We are aware that other forms of intervention may occur, such as social media companies limiting or blocking content associated with an escalation of violence, as in the case of Facebook in Myanmar (Perrigo, 2021). But at present, social media companies have not demonstrated a consistent ability to effectively moderate content, particularly in the global south, and even in cases of genocide. Despite efforts to develop artificial intelligence capabilities to proactively identify and take down content, companies are still dependent on human monitors, of whom they simply do not have enough to address the scale of content being posted daily and the diverse contexts in which hate speech occurs (Barrett, 2020). As the UN Human Rights Council noted in its report on Myanmar, international law is clear about expressions of hate that must be prohibited (in contrast with those that may be prohibited and those that should be protected). Our concern is with those expressions that must be prohibited, including incitement to commit genocide or incitement to violence. 1

We explore this issue primarily through the lens of the United Nations. Although we recognise that other international and non-governmental organisations, including regional organisations such as the African Union, could also play a critical role, we begin with the applicability of Chapter VII of the UN Charter, which outlines the UN Security Council's powers to maintain peace, along with Chapters VI and VIII of the Charter, which set out the responsibilities to protect populations from genocide, war crimes, ethnic cleansing and crimes against humanity. We also underline that intervening in social media is not just a matter of legitimacy but a measure that could significantly impact the digital information ecosystem of the target state.

In the first section of this article, we outline the doctrine of information intervention, grounding our arguments in a historical review of the debate. We consider information intervention by, firstly, analysing the principle of non-intervention under international law and, secondly, examining how the human rights law framework provides legal justification to prevent mass atrocities, thus triggering the responsibility to protect and authorising intervention under Chapter VII.

In the second section, we turn to information intervention in the context of social media, focusing specifically on the unique aspects of social media relative to traditional media outlets, such as radio and television, with regard to spreading hate and escalating violent conflicts. In particular, this comparative analysis highlights the peculiarities of the social media environment as well as the politics of online content moderation on a global scale.

On this basis, we offer a new framework for information intervention in the context of online media. Specifically, we consider how legal justifications are affected by the social media environment, and what suitable measures might be adopted in this new and emerging context. Drawing on the work of Metzl, who nearly 25 years ago argued for the need for the UN to establish a unit to intervene when mass media (and particularly radio) is involved in genocide, we conclude by revisiting the need for an international mechanism, including what we refer to as a possible Information Intervention Council, that would proceduralise interventions. This new institution, or mechanism, would ideally be grounded in an international system, like the AU or UN, affording it legitimacy and accountability in international law. The implications of our argument are significant. We do not want to be misconstrued as advocating for the widespread use of censorship or internet shutdowns, which we have seen increasing over the years as blunt tools for addressing everything from concerns around electoral fraud, to hate speech, to the leaking of exam papers (Henley, 2018). Rather, we believe that empowering an international council to intervene would reduce arbitrary shutdowns, because claims of hate speech and its association with offline violence would be independently scrutinised. This would offer more legitimate options for addressing the most severe cases, while exposing for what they are the instances when shutdowns are used for other reasons (they are often a tool for autocratic governments to silence voices they dislike).

The doctrine of information intervention

Weapons, troops, tanks and aircraft are not the only instruments of harm in violent conflict, with propaganda and communication channels—such as radio and television—having long demonstrated an ability to weaponise hate against minorities or targeted groups (Larson and Whitton, 1963). Media outlets have helped to guide and organise entire military forces and to spread propaganda aimed at attracting new converts. In the 1960s, when the debate was focused on nuclear disarmament, Whitton and Larson noted that 'while in past years we have often heard the phrase 'the propaganda of disarmament', we should now hold forth as an urgent need 'the disarmament of propaganda'' (1963, p. 1).

Information interventions are strategic efforts to interfere in (whether disrupting, manipulating or altering) a communications environment within a community, region or state afflicted by mass atrocities, in order to prevent the dissemination of violence-inciting speech. The characteristics of such an intervention can be assessed by observing its duration, goals and degree. The intervention can take place at various stages of a conflict. For example, it may attempt to tackle (in advance) the conveyance of messages that could lead to conflict escalation. However, the use of force (e.g. the takedown of a media outlet) has rarely been employed as a preventive measure, as it would be difficult to garner support for such interference in national sovereignty based solely on unsubstantiated assumptions. Interventions at this stage are therefore likely to be softer, focusing on offering alternative voices or perspectives, or training journalists (what often falls under the broader umbrella of media development).

During cases of escalating violence, an information intervention might consist of media and conflict monitoring of the target state; peace broadcasting, which seeks to provide an outlet for non-violent voices to counter inflammatory messaging; or media shutdowns, which involve censoring the media whose messages are provoking conflict, such as the bombing of radio towers (Larson and Whitton, 1963, p. 17). In situations where conflict is winding down—or at least where this is the hope—intervention usually focuses on measures promoting what the international community or funders believe to be a democratic and sustainable media environment. Thus, in conflict situations, information intervention consists of both short- and long-term strategies aimed at stabilising the media environment within a specific country (Larson and Whitton, 1963, pp. 185-186).

In terms of the level of interference a target state is subject to, international law has a role when assessing the legality of more interventionist measures, such as media restrictions or shutdowns. Our primary focus in this article is on what Metzl referred to as the ‘third step’, which requires the taking down or closure of a particular outlet or platform, or an internet shutdown, in situations of severe violence (e.g. genocide). In this article, we focus on the challenges arising from the legal basis justifying this level of information intervention, especially when censoring social media in the target state by relying on internet shutdowns.

Historical evidence concerning the role of media in escalating violent conflicts might constitute legitimate grounds justifying information intervention. Intervening in situations of genocide and violence promoted by the media could also be justifiable from a moral perspective. As suggested by Metzl:

We need to explore what can be done between the impossible everything and the unacceptable nothing. The political cost of doing everything is usually prohibitive. The moral cost of doing nothing is astronomical. If we accept that we are not going to do everything possible to stem a given conflict, what can we do to have as much impact as we are willing to have? (Metzl, 2002, pp. 41-42).

However, historic or moral legitimacy is not necessarily the same as legal legitimacy. The principle of non-intervention in international law aims to protect the sovereignty of each country and is recognised as a peremptory norm (i.e., jus cogens). The legal rank of this principle is one of the primary challenges to media intervention and specifically the use of force to block the spread of certain information. Members of the international community cannot lawfully intervene without the authorisation of the UN Security Council, except in exceptional cases like self-defence. There is, therefore, a clash between the principle of non-intervention aimed at safeguarding national sovereignty, and the need to address speech that fuels severe conflict, including genocide.

Before addressing the challenges raised by social media in spreading violence and hate online, it is necessary to outline the legal justifications that form the basis of information intervention.

The principle of non-intervention

Interventions raise serious challenges for state sovereignty. Recent conflicts, from Syria to Iraq, have fuelled debate on the boundaries of the principle of non-intervention (Chinkin and Kaldor, 2017; Davis et al., 2015), even if outside the framework of information intervention. The same principle could be extended to international telecommunications law governing territorial sovereignty as it relates to the protection of airwaves and the flow of information (Rajadhyaksha, 2006, p. 1).

The principle of non-intervention is enshrined in the UN Charter and, similar to the ban on the use of force, is derived from and supports the idea of state sovereignty. As a legal principle, it first appeared when the League of Nations was created, stipulating mutual respect for territorial integrity and sovereignty, and non-interference in the internal affairs of other states. These provisions have been included in the UN Charter (1945, Art. 2), 2 while in Nicaragua v United States (ICJ, 1986, p. 1), the International Court of Justice ('ICJ') considered the principle of non-intervention to be a general tenet of customary international law.

The principle of non-intervention can, however, be restricted in crucial ways relevant for digital information interventions. The prohibition established by the UN Charter is not absolute, with Chapter VII allowing the UN (through the Security Council) to intervene in domestic situations provided there is a threat to international peace and security (Frowein and Krisch, 2002, pp. 701-716). Chapter VII is the only part of the UN Charter empowering the Security Council to make binding decisions that apply to all UN members (Öberg, 2005, p. 879). More specifically, this process involves two steps (UN Charter, 1945, Art. 39). First, the Security Council must determine that a threat to or breach of peace, or an act of aggression, has occurred. Second, the Security Council must propose measures aimed at maintaining international peace and security that accord with the UN’s purposes and principles (Art. 24(2)). The Security Council can decide what measures are to be employed when giving effect to its decisions (Art. 41), including the ‘complete or partial interruption of economic relations and of rail, sea, air, postal, telegraphic, radio, and other means of communication, and the severance of diplomatic relations’ (UN Charter, 1945, n.p.).

Should the Security Council consider the aforementioned measures inadequate, or should they prove inadequate in maintaining or restoring international peace, it can order that further measures be implemented based on the use of force (Art. 42). Within this framework, measures such as radio jamming would be included as 'the most benign form of humanitarian intervention' (Metzl, 1997, p. 628), whereas the shutdown of a media tower through the use of military force would fall under the scope of Article 42. As a result, the latter measure could only be authorised by the UN Security Council once initial measures had failed to meet their objective of restoring peace and security in the area of intervention.

The general principle underlying such measures is that any action taken must be consistent with the 'purposes and principles of the United Nations'. However, the boundaries of this are ill-defined, and the Security Council enjoys absolute discretion in deciding what actions or events constitute a breach of peace, a threat to peace, or an act of aggression (Whittle, 2015, p. 671; King, 1996, p. 509).

Therefore, Security Council authorisation is the first step in assessing an information intervention’s compliance with international law and, particularly, the principle of non-intervention. Should the Security Council order an information intervention, it would no longer violate the non-intervention norm. This is because once a state enters into an international treaty, it is bound by its terms, and within the framework of the UN Charter, (almost) all recognised states are parties to a treaty binding them to Security Council decisions regarding threats to international peace and security. As a result, under Chapter VII, they have effectively consented to the Security Council intervening in their sovereign affairs in situations where it is necessary to restore peace and security. This includes information interventions.

This legal architecture could appear controversial, as the rights to territorial integrity and political independence are guaranteed by the UN Charter. Furthermore, a primary principle of international radio law is the prohibition of 'harmful interference', with media jamming, for instance, potentially falling into this category (Preamble of Radio Regulations, 2016). Nevertheless, according to Blinderman:

If international law precluded a state from voluntarily delegating fragments of its sovereignty to a multinational treaty organization, the international system could not operate. As such, courts have long recognized that a state’s consent to a particular treaty covering a specific matter forecloses its ability to claim that the matter is exclusively within its domestic jurisdiction (2002, p. 111).

In contrast, when states intervene in the domestic affairs of another nation without receiving authorisation from the UN or the consent of the target state, they run the risk of violating the target state’s sovereign rights. While it might be argued that a particular information intervention is based on humanitarian need and the responsibility to protect, this can be challenged. As Shen (2001, n.p.) argues:

There is no commonly acceptable standard of what humanitarianism means and what human rights embrace under international law. In the absence of common understanding, the concepts of ‘humanitarianism’ and ‘human rights’ are bound to be abused if the international community allows humanitarian intervention, or favours individual human rights over national sovereignty. The consequences of this kind of abuse use would be too dreadful to contemplate. One of the consequences of placing human rights above state sovereignty and therefore permitting humanitarian intervention, would be that the ordinary and predictable short comings (sic) of third-world states would be attacked as human rights violations. Such domestic problems would provide excuses and opportunities for major powers to intervene and to ‘dominate’ weaker states.

Therefore, beyond the framework of Chapter VII, states are not authorised to intervene in the internal affairs of a sovereign nation, except in self-defence to protect their national interests or at the invitation of the target state (Thomas, 1999). From this perspective, any intervention outside the framework of Chapter VII, other than self-defence, is a violation of state sovereignty. If an intervention is based on a regime of consent, the boundaries of its authority are legitimated and determined by the conditions set by the UN or the target country.

Hate speech and mass atrocities

The boundaries of the non-intervention principle raise the question of whether and when information intervention can be justified when seeking to prevent mass atrocities provoked by online hate speech and disinformation (HLEG, 2018; Wardle and Derakhshan, 2017). International law does not preclude the UN Security Council deciding what kind of speech or incitement satisfies the threshold required to trigger the Chapter VII mechanism. As a result of this discretion, changes in the global political environment—such as those that took place in Rwanda or Bosnia—allow for the translation of legal considerations into policy objectives.

Although certain communication channels can enable the spread of hate speech, the degree of danger may not always rise to the level of a threat to international peace and security. In general, while there is an international presumption in support of the free flow of ideas and information, this has been mitigated by international human rights law, where the protection of free speech is subject to certain conditions (Farrior, 2002, p. 69). Though the right to freedom of expression is enshrined across international, regional and national bills of rights, it is subject to exceptions that protect other rights (e.g. dignity) or that pursue legitimate interests enshrined by the Universal Declaration of Human Rights (1948, Art. 7, 19, 29, and 30) and the International Covenant on Civil and Political Rights (ICCPR, 1966). As is well known, the right to free speech is protected by international human rights law but its exercise is not absolute. With specific regard to hate speech, the International Convention on the Elimination of Racial Discrimination (ICERD) bans incitement to racial hatred and discrimination (Art. 4).

It is the UN Convention on the Prevention and Punishment of the Crime of Genocide (1948) that provides the most persuasive statement in support of information intervention. Although it does not directly address hate speech, this Convention states that 'direct and public incitement to commit genocide' is a punishable crime (Art. 3). 3 The inclusion of incitement as a punishable crime could therefore support the need 'for preventive, pre-emptive and pro-active measures to predict and intervene in potential mass suffering due in part to hate speech propagated by incendiary media' (Erni, 2009, p. 867). While international agreements and covenants do not provide guidelines for determining an 'information intervention threshold', the responsibility to protect ('R2P') regime offers an important point of reference (Bellamy, 2014), and addresses whether and to what extent international actors should intervene in situations where state actors fail (either voluntarily or involuntarily) to protect their population from mass atrocities or genocide.

The repeated failure to prevent genocides and atrocities after the Second World War eventually led to a rethinking of the notion of state sovereignty and, ultimately, to the Responsibility to Protect (also known as R2P). In the aftermath of the violence in Rwanda and the NATO intervention in Kosovo without the authorisation of the UN Security Council (an intervention that has been described as 'illegal but legitimate' (Independent International Commission on Kosovo, 2002)), the International Commission on Intervention and State Sovereignty (ICISS) solidified the 'responsibility to protect' in 2001. The World Summit and the UN institutions swiftly followed suit in making use of the term (UNGA Resolution 60, 2005). The ICISS report made clear that with sovereignty come responsibilities (Glanville, 2013), among them the responsibility of a state, its population, and also the international community to respond to violations of human rights, and particularly mass atrocities.

Crucially, the World Summit clarified that the R2P principle should be implemented within the framework of the UN Charter. As a result, R2P does not permit a state to use force against another state without authorisation by the UN Security Council. Concern has been expressed that unilateral humanitarian intervention is simply another way for some countries to exert their political and technological dominance over less powerful states (Shen, 2001, p. 1). The use of jamming technology, for example, raises serious sovereignty issues, particularly for developing countries, with information intervention measures easier to implement against small-scale actors compared to states with consolidated media outlets (Varis, 1970; Metzl, 1997, p. 19).

There is growing weariness with interventions. The hubris of the 1990s and early 2000s, when Metzl and others were initially writing about information interventions, has shifted. Terms such as 'democracy promotion', 'peacebuilding' and 'post-conflict reconstruction' are associated with large, expensive and mostly failed initiatives in countries such as Iraq or Afghanistan. Meanwhile, interventions in countries such as Myanmar, Libya, Syria, Somalia and Yemen have been less ambitious than advocates of the responsibility to protect have called for.

In this context, it is important to consider whether information interventions, particularly in reference to social media, can legally be based on R2P and/or humanitarian grounds in countries afflicted by violent conflicts. The primary argument in favour would be that sovereign powers are bound to respect the rights of their communities and the limitations placed on their authority. For instance, the right to life and security should be protected against threats such as torture or genocide. Since not only the principle of non-intervention but also respect for human rights constitutes jus cogens, sovereign nations cannot hide behind the former when violating the latter.

Outside the framework of Chapter VII, the debate shifts to other exceptional grounds that might justify intervention in the media environment of a target state, including protecting the intervening state's national interest in terms of security or, in cases of gross human rights violations—such as genocide—a humanitarian intervention (Holzgrefe et al., 2003). Nevertheless, the lack of UN authorisation remains the most significant obstacle to intervening in an information space; authorisation is ultimately a political decision, and getting all Security Council members to agree in the current political climate would be difficult.

A new framework for digital information interventions?

The period from the mid-1990s to the early 2000s saw information interventions intensely debated. Since then, media environments have changed significantly, particularly with regard to the emergence of social media. The doctrine of information intervention, which has traditionally dealt with mass media outlets based in the target state, has not kept pace with these changes.

While evidence of social media's ability to disseminate messages of hate and violence worldwide is compelling, there are particularities that make the direct translation of information interventions from mass media to new media challenging. Unlike traditional media, such as radio and television, which are usually subject to extensive regulation, social media have not been regulated to the same extent. The near-infinite extent of the digital environment makes monitoring its boundaries complex, and this difficulty extends to tackling online hate and disinformation that could lead to offline violence and mass atrocities. While states may be considered the primary legitimate authorities when it comes to implementing and enforcing binding norms, this idea of exclusive control is challenged at the international level, where states cannot exercise their sovereign powers externally. In the absence of cooperation, we have seen how, especially in countries in Africa and Asia, governments have reacted to the spread of online hate by criminalising speech or shutting down the internet (De Gregorio & Stremlau, 2020; Clark et al., 2017).

The shift from traditional media outlets to social media also reveals further challenges. The business model of social media is based not on the creation of content but on its organisation and on the accumulation of data, which platforms use to provide tailored profiling services that attract advertising revenue. As such, their primary goal is not to protect human rights by providing platforms for free speech, but to profit from users' data. Immunity or exemption from liability for hosting third-party content makes this system particularly profitable. As service providers, social media are usually exempted from responsibility for the organisation and hosting of online content. In order to manage their online spaces and profile users, social media companies use automated technologies to organise content and enforce community rules (Gillespie, 2018). The increasing involvement of online platforms in organising content and user data through artificial intelligence reflects a shift in their role toward becoming more active curators and content providers. Social media companies largely 'govern' the digital spaces where information flows (Bloch-Wehba, 2019; Klonick, 2018), and this does not change even in situations of conflict or violence, where these actors can determine how to moderate hate and disinformation according to their own ethical, business and legal frameworks.

This framework helps explain the need for a new approach within the information intervention doctrine as we shift from considering traditional media outlets to social media. As already underlined, Chapter VII can be used to authorise international intervention in the media environment of a target state without violating the principle of non-intervention. At first glance, this would seem to allow the doctrine of information intervention to be applied to social media promoting mass atrocities. Nonetheless, any information intervention measure must take into consideration the network architecture and the modalities through which the dissemination of online hate and violence can be limited, with specific regard to internet shutdowns.

The following two subsections explore, first, the challenges of extending information interventions to a social media context, particularly with regard to content moderation, and, second, the potential for establishing an information intervention council within the UN framework.

Intervention and social media

In cases where social media are involved in the escalation of violent conflicts, the UN Security Council could, in theory, authorise an intervention under Chapter VII due to a breach of international peace and security. The international community would then have a legitimate basis to shut down the internet or limit access to social media as part of its response to mass atrocities. However, even if such authorisation were forthcoming (which would undoubtedly be a challenge), such an intervention would require due care.

Unlike content-producing media outlets that are usually aware of the peculiarities of the local media environment, social media host third-party content which, due to the scale of content moderation required, is not subject to the same degree of granular decision-making. Moreover, social media companies generally do not have a substantial (or any) presence in the country involved in violent conflict, nor do they always have staff familiar with the local context and the language of online content. The prospect of information intervention in this field could lead to a more robust framework of content moderation, with social media actors applying greater safeguards and care in order to avoid interventions by the international community.

However, any such information intervention runs the risk of social media actors choosing not to operate in conflict-affected countries. This would result in collateral censorship (Balkin, 1999, p. 2295; Wu, 2011, p. 293) that could involve not just the deletion of content, but the wholesale removal of specific social media spaces. Unlike traditional media outlets, which operate within a specific region and play an important role in providing information to those in that area, the presence of a social media platform is purely down to business opportunities. Therefore, social media companies could be incentivised to cease operating in regions where information interventions might be enacted, since such interventions could provoke financial and reputational losses. In particular, international recognition of social media's involvement in escalating violent conflict could lead to social media companies declining to provide services to countries afflicted by such conflicts. Effectively, this could mean the creation of a 'social media vacuum' in some areas of the world.

The consequences of such a situation could be serious, especially in countries where social media is the most popular way people experience the internet. This is the case in many countries in Africa. Social media are used every day by billions of users and, particularly in more closed regimes, are often a valuable source for connecting with others and accessing international information. Information intervention as put forward by Metzl focuses on ‘democratic objectives’, such as peacekeeping in the short run, and the building of civil and democratic media space in the long run. However, information intervention could potentially have more authoritarian, or protectorate, implications, such as imposing external sovereign powers over a target state’s media. Here, the line between information intervention and censorship can become blurred, with the real test being whether or not the measures address the responsibility to protect (Thompson, 2002, p. 56). 4

Unlike with peace broadcasting, radio jamming, or the seizure of broadcasting towers, the international community cannot proportionately fight the spread of online hate speech or disinformation without the cooperation of social media companies. Even though it is possible to rely on access providers to restrict access to this content, only the companies themselves can granularly intervene in the architecture of their digital spaces, including their proprietary algorithms (De Gregorio, 2018, p. 65). Where such companies decline to cooperate, or fail to devote significant resources and attention to concerns about hate speech and disinformation, particularly in Africa and Asia, limiting access to the internet has become a primary tool for governments, whether by shutting it down, slowing it down, or relying on access providers to discriminate against particular internet traffic. Without the direct cooperation of social media companies, information interventions may struggle to adopt the scale of the approach proposed by Metzl, based on monitoring, peace broadcasting and intervention. This could impede attempts to tackle the spread of hate and violence, as international actors may wait until events are deemed sufficiently serious before shutting down social media or the internet in the target state.

All this suggests that the cooperation of social media companies is an important, but not essential, component to addressing online content promoting hate and violence. Related to this is the risk of collateral censorship. Should social media be subject to pervasive information intervention measures, it is possible that companies—in seeking to evade responsibility and avoid interference from the international community—will decrease the degree of tolerance for hosted content and/or implement blanket content moderation technologies that rapidly (but less accurately) detect hate speech and violent content. The cooperation of social media companies in removing hate speech content from target states would require them to invest additional financial and human resources, especially for states requiring moderation in different languages where specific language policies are less compatible with blanket corporate content policies.

The lack of direct engagement and efforts to avoid offline harms on the part of social media companies derives not only from the fact that they are often exempt from secondary liability with regard to the content they host (Floridi and Taddeo, 2017; Dinwoodie, 2017), but also from the lack of direct obligation to respect human rights. Within the framework of international law, state actors are the only entities permitted to become parties to (and therefore subject to the obligations of) human rights treaties (Reinisch, 2005, p. 37), whereas social media companies—in the absence of any constraining legal instruments adopted by state actors—are unfettered by the need to protect human rights (Carillo-Santarelli, 2017; Clapham, 2006). Even if we can identify responsibilities of online platforms according to the Guiding Principles on Business and Human Rights (2011) and the Rabat Plan of Action (2013), these instruments do not introduce binding obligations for online platforms but require states to intervene to protect human rights.

While this paradigm aims to protect individual liberties, it also carries serious risks when private actors start to exercise new forms of power outside the boundaries of regulation (Knox, 2008, p. 1). In the past, it was thought the private sphere must be protected from the state—rather than from private actors—through the recognition of rights and liberties. Global dynamics and, especially, digital technologies have led private actors to gather power in new and significant areas (De Gregorio, 2019). In the era of globalisation, this concentration of power in the hands of transnational private actors has raised serious issues for the protection of human rights (Teubner, 2006). There is increasing pressure on private actors to comply with international human rights law when moderating online content (Kaye, 2019), particularly given that social media exercise regulatory functions in the digital environment (Report of the Special Rapporteur to the Human Rights Council on online content regulation, A/HRC/38/35, 2018). 5 Doing so would allow platforms to apply a universal frame of reference in their content moderation activities.

It is possible that regional developments in international criminal law in Africa could, in the future, fill this gap. The Malabo Protocol (2014), for example, aims to add an international criminal law section within the African Court of Justice and Human Rights. This would allow for the prosecution of crimes against humanity and genocide, including hate speech. Based on the precedents set in Nuremberg and Rwanda, an extension of the prosecution of crimes against humanity and genocide could open the way to greater responsibility for online platforms. In the past, both the Nuremberg trials and the UN International Criminal Tribunal for Rwanda convicted media content providers and executives (Lafraniere, 2003). Nonetheless, even if the Malabo Protocol were to enter into force, it is unlikely that the failure of social media companies to tackle hate speech could qualify as an offence (Irving, 2019), as social media are not content providers, but organise content published by users. Moreover, making social media companies liable could encourage overly broad censorship of content as companies seek to escape responsibility. Therefore, it is unlikely that this approach would make social media more accountable for the spread of online hate and violence.

Within this existing framework, it is necessary to focus on an alternative paradigm of information intervention for cases where social media are involved, one that treats cooperation with social media companies as the first and primary step and shutdowns as the exception.

Establishing an information intervention council

The doctrine of information intervention in its traditional form sits awkwardly within the present social media environment. The fact that policy has been slow to adapt to the online environment should not excuse a failure to intervene against the spread of online hate and violence, especially when such content is correlated with offline violence as significant as genocide.

In the multi-stakeholder environment of internet governance, public actors share responsibility for defining the international legal and political framework within which they operate. Therefore, a first step toward defining a doctrine of digital information intervention would involve establishing an appropriate international body within the framework of an international organisation such as the UN or AU. In the case of the UN, such a body would be responsible for addressing how international law deals with decentralised public actors operating on a global scale, as well as potentially working in collaboration with the Special Adviser on the Responsibility to Protect or the Special Adviser on the Prevention of Genocide (a role created in 2004).

The body would be responsible for conducting research, establishing guidelines for media intervention in violent conflicts, and verifying the role played by the media environment in a potential target state. This would support the development of guidelines to nudge the private sector to comply with specific standards. There are, for example, 'due-diligence guidelines' promoting a specific code of conduct for companies involved in the import, processing and sale of minerals extracted in places such as the Democratic Republic of Congo, aimed at mitigating the risk of the conflict in the eastern part of the country spreading (Security Council, Resolution 1952, 2010). In addition, the UN Security Council has also promoted new public-private partnerships to address global challenges like terrorism, referring especially to the role of social media (Resolution 2354, 2017). This is, however, just a first step since these measures are left to the discretion of private actors, thus raising questions about effectiveness and enforcement.

Given that the aim of the body would not be to settle disputes or interpret international law, it should not be structured like an international tribunal, but rather take the form of a dynamic council ('Information Intervention Council' or 'IIC') hosting members committed to addressing specific situations. In addition to members representing the international organisation (such as permanent members of the UN Security Council), temporary members should include representatives of social media companies operating in conflict zones, members of a target state's government, scholars and experts in the media and responsibility to protect fields, as well as members of civil society organisations. This mix of membership would provide the opportunity—from a short-term perspective—of addressing challenging situations in a comprehensive way, allowing members to come up with concrete solutions, rather than simply making declarations about the behaviour of particular states or social media. The presence of permanent members would guarantee continuity and, in ensuring that standards and guidelines for target states and social media involved in violent conflict zones are adhered to, strengthen the body's legitimacy.

The IIC would also contribute to ‘proceduralising’ how violence and hate speech within the social media environment are addressed, clarifying how potential target states and social media should behave, in particular defining the conditions whereby these actors should notify and/or report to the IIC regarding the dissemination of online hate and violent content. Such a system would allow granular information to be gathered, thereby informing the proportionality of any measures required to address specific concerns within a target state. The participation of social media companies would also make possible a more proportionate approach to media interventions and shutdowns, including—depending on the capacity and capabilities of the company involved—addressing content deemed objectionable by the Council. This process would increase the transparency and accountability of social media companies involved in moderating online content, with the IIC monitoring measures implemented by social media companies aimed at tackling hate speech in target countries. This process would also help to avert reliance on internet shutdowns as a blunt and general measure.

A key challenge in establishing such an international body would be how to encourage the various stakeholders to participate. There are several reasons why the various communities concerned might choose to participate in an IIC. States afflicted by violent conflict may view participation as an opportunity to be heard at the international level, and also to monitor external interference in their media environment. States may also see an advantage—particularly in the context of losing sovereignty over their digital media environment—in being able to draw on a broader international framework when addressing online hate speech and violent content. And freedom of expression advocates, including civil society groups, may appreciate the safeguards reducing the justifications for arbitrary shutdowns or censorship. Given concerns that online hate and violence might undermine social media companies' public standing and business, participation in the IIC would provide them with an opportunity to demonstrate their respect for human rights and peace on a global scale. This could help to address the issue of enforcement, since social media companies would be incentivised to participate and to tackle the spread of online hate.

Developing an international framework would also help mitigate more extreme information intervention measures—such as internet shutdowns—which are often implemented in an ad hoc way and rarely through formal policy or legal channels. Limiting internet shutdowns is particularly relevant due to the effects they produce on populations, including on opportunities for expression. The collaborative collection of relevant information by the IIC would help inform whether, and to what extent, intervention in the media environment of a target state is required. In addition, it would provide target states—which may lack remedies—with an alternative system and would also reduce the risk of collateral censorship as well as the use of internet shutdowns. Finally, the IIC would facilitate social media companies' participation in defining the practices to be implemented. As a result, even without solving all the issues around information intervention, this bottom-up approach would support greater shared responsibility among all stakeholders.

Conclusion

Social media have demonstrated their ability to influence speech transnationally, and it is clear that the internet (along with other technologies) can play a role in both enhancing and challenging freedoms and rights. Within this framework, digital information interventions can play a crucial role in countering genocide and mass atrocities. While mass media, including television and radio, have long been recognised as key actors in the escalation of violent conflicts, the scale of dissemination and the degree of accountability of the digital actors involved are different. Although the doctrine of information intervention initially evolved to address concerns about the role of mass media in conflict, it can provide inspiration for adjusting legal frameworks, and core foundational tenets such as the Responsibility to Protect, to address the risks arising from the spread of hate speech and disinformation through social media channels. Nevertheless, the peculiarities of social media require a different approach, one that includes the responsibilities of social media companies and has accountable content moderation at its core. Social media companies can be both tools of intervention and barriers to intervention. An IIC could therefore play an important role in increasing the degree of proceduralisation of information intervention and in avoiding disproportionate interference with states' sovereignty and human rights. There are limits to the role of an IIC with regard to stakeholder participation, the complexity of dealing with escalation, and the effectiveness of its guidelines. However, the establishment of such a system, within regional or international bodies, would increase global awareness while providing a framework to address the spread of online hate and disinformation that escalates offline harms, including genocide and ethnic cleansing.

References

Allen, T., & Stremlau, N. (2005). Media policy, peace and state reconstruction (Discussion Paper No. 8). Crisis States Research Centre, London School of Economics and Political Science. http://eprints.lse.ac.uk/28347/

Article 19. (1996). Broadcasting genocide: Censorship, propaganda and state-sponsored violence in Rwanda, 1990–1994.

Article 19. (2019). The Social Media Councils: Consultation Paper. https://www.article19.org/wp-content/uploads/2019/06/A19-SMC-Consultation-paper-2019-v05.pdf

Balkin, J. M. (1999). Free speech and hostile environments. Columbia Law Review, 99(8), 2295–2320. https://doi.org/10.2307/1123612

Barrett, P. M. (2020). Who Moderates the Social Media Giants? A Call to End Outsourcing [Report]. NYU Stern Center for Business and Human Rights. https://bhr.stern.nyu.edu/tech-content-moderation-june-2020

Barrie, S. (2019, December 19). Mass Atrocities in the Age of Facebook—Towards a Human Rights-Based Approach to Platform Responsibility [Blog post]. OpinioJuris. http://opiniojuris.org/2019/12/16/mass-atrocities-in-the-age-of-facebook-towards-a-human-rights-based-approach-to-platform-responsibility-part-one/

Bellamy, A. J. (2014). The Responsibility to Protect: A Defence. Oxford University Press. https://doi.org/10.1093/acprof:oso/9780198704119.001.0001

Blinderman, E. (2002). International Law and Information Intervention. In M. Price & M. Thompson (Eds.), Forging Peace: Intervention, Human Rights and the Management of Media Space (pp. 104–138). Edinburgh University Press. https://www.jstor.org/stable/10.3366/j.ctvxcrszn.7

Bloch-Wehba, H. (2019). Global platform governance: Private power in the shadow of the state. SMU Law Review, 72(1), 27–80. https://scholar.smu.edu/smulr/vol72/iss1/9/

Carillo-Santarelli, N. (2017). Direct International Human Rights Obligations of non-State Actors: A Legal and Ethical Necessity. Wolf Legal Publishers.

Chinkin, C., & Kaldor, M. (2017). International law and new wars. Cambridge University Press.

Clapham, A. (2006). Human Rights Obligations of Non-State Actors. Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199288465.001.0001

Clark, J., Faris, R., Morrison-Westphal, R., Noman, H., Tilton, C., & Zittrain, J. (2017). The Shifting Landscape of Global Internet Censorship [Research Publication]. Berkman Klein Center for Internet & Society Research Publication. http://nrs.harvard.edu/urn-3:HUL.InstRepos:33084425

Davis, M. C. (2015). International Intervention in the Post-Cold War World. Routledge. https://doi.org/10.4324/9781315498171

De Gregorio, G. (2018). From constitutional freedoms to the power of the platforms: Protecting fundamental rights online in the algorithmic society. European Journal of Legal Studies, 11(2), 65–103. http://ejls.eui.eu/wp-content/uploads/sites/32/2019/05/4-EJLS-112-De-Gregorio.pdf

De Gregorio, G., & Stremlau, N. (2020). Internet shutdowns and the limits of law. International Journal of Communication, 14, 4224–4243. https://ijoc.org/index.php/ijoc/article/view/13752

Dinwoodie, G. B. (Ed.). (2017). Secondary liability of internet service providers. Springer International Publishing. https://doi.org/10.1007/978-3-319-55030-5

Efrony, D., & Shany, Y. (2018). A rule book on the shelf? Tallinn manual 2.0 on cyberoperations and subsequent state practice. American Journal of International Law, 112(4), 583–657. https://doi.org/10.1017/ajil.2018.86

Erni, J. N. (2009). War, ‘incendiary media’ and international human rights law. Media, Culture & Society, 31(6), 867. https://doi.org/10.1177/0163443709343792

Facebook. (2019). Draft Charter: An Oversight Board for Content Decisions. Facebook Newsroom US. https://fbnewsroomus.files.wordpress.com/2019/01/draft-charter-oversight-board-for-content-decisions-1.pdf

Farrior, S. (2002). Hate Propaganda and International Human Rights Law. In M. Price & M. Thompson (Eds.), Forging Peace: Intervention, Human Rights and the Management of Media Space (pp. 69–103). Edinburgh University Press. https://www.jstor.org/stable/10.3366/j.ctvxcrszn.6

Fisher, M. (2019, April 21). Sri Lanka Blocks Social Media, Fearing More Violence. The New York Times. https://www.nytimes.com/2019/04/21/world/asia/sri-lanka-social-media.html

Floridi, L., & Tadeo, M. (Eds.). (2017). The Responsibilities of Online Service Providers. Springer. https://doi.org/10.1007/978-3-319-47852-4

Frowein, J. A., & Krisch, N. (2002). Introduction to chapter VII. In The Charter of the United Nations (pp. 701–716). Oxford University Press.

Gillespie, T. (2018). Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions that Shape Social Media. Yale University Press.

Glanville, L. (2013). Sovereignty and the responsibility to protect: A new history. University of Chicago Press. https://doi.org/10.7208/chicago/9780226077086.001.0001

Guiding Principles on Business and Human Rights. (2011).

Henley, J. (2018, June 22). Algeria blocks internet to prevent students cheating during exams. The Guardian. https://www.theguardian.com/world/2018/jun/21/algeria-shuts-internet-prevent-cheating-school-exams

Herzstein, R. E. (1978). The war that Hitler won: The most infamous propaganda campaign in history. Putnam Publishing Group.

Holzgrefe, J. L., Keohane, R. O., & Tesón, F. R. (2003). Humanitarian Intervention: Ethical, Legal and Political Dilemmas. Cambridge University Press.

Independent International Commission on Kosovo. (2002). Kosovo Report: International Responses, Lessons Learned. Oxford University Press. https://doi.org/10.1093/0199243093.001.0001

International Commission on Intervention and State Sovereignty. (2001). The Responsibility to Protect: Report of the International Commission on Intervention and State Sovereignty. International Development Research Centre.

Irving, E. (2019). Suppressing Atrocity Speech on Social Media. AJIL Unbound, 113, 256–261. https://doi.org/10.1017/aju.2019.46

Kaye, D. (2019). Speech Police: The Global Struggle to Govern the Internet. Columbia Global Reports.

King, F. P. (1996). Sensible Scrutiny: The Yugoslavia Tribunal’s Development of Limits on the Security Council’s Powers Under Chapter VII of the Charter. Emory International Law Review, 10, 509.

Klonick, K. (2018). The New governors: The People, rules, and processes governing online speech. Harvard Law Review, 131, 1598–1670. https://harvardlawreview.org/2018/04/the-new-governors-the-people-rules-and-processes-governing-online-speech/

Knox, J. H. (2008). Horizontal Human Rights Law. American Journal of International Law, 102(1), 1–47. https://doi.org/10.1017/S0002930000039828

Lafraniere, S. (2003, December 3). Court Finds Rwanda Media Executives Guilty of Genocide. The New York Times. https://www.nytimes.com/2003/12/03/international/africa/court-finds-rwanda-media-executives-guilty-of-genocide.html

Larson, A., & Whitton, B. (1963). Propaganda towards Disarmament in the War of Words. World Rule of Law Center, Duke University; Oceana Publications.

Manila Principles on Intermediary Liability and the DCPR Best Practices on Platforms’ Implementation on the Right to Effective Remedy. (2017). https://www.intgovforum.org/multilingual/index.php?q=filedepot_download/4905/1550

Marchant, E., & Stremlau, N. (2020a). A Spectrum of Shutdowns: Reframing Shutdowns from Africa. International Journal of Communication, 14, 4327–4342. https://ijoc.org/index.php/ijoc/article/view/15070

Marchant, E., & Stremlau, N. (2020b). The Changing Landscape of Internet Shutdowns in Africa – Introduction. International Journal of Communication, 14, 4216–4223. https://ijoc.org/index.php/ijoc/article/view/11490

Metzl, J. F. (1997a). Information intervention: When switching channels isn’t enough. Foreign Affairs, 15–20.

Metzl, J. F. (1997b). Rwandan genocide and the international law of radio jamming. American Journal of International Law, 91(4), 628–651. https://doi.org/10.2307/2998097

Miles, T. (2018, March 12). UN investigators cite Facebook role in Myanmar crisis. Reuters. https://www.reuters.com/article/us-myanmar-rohingya-facebook-idUSKCN1GO2PN

Nicaragua v United States (International Court of Justice 1986).

Öberg, M. D. (2005). The legal effects of resolutions of the UN Security Council and general assembly in the jurisprudence of the ICJ. European Journal of International Law, 16(5), 879–906. https://doi.org/10.1093/ejil/chi151

O’Neil, P. (2013, September 18). Why the Syrian uprising is the first social media war. The Daily Dot. http://www.dailydot.com/politics/syria-civil-social-media-war-youtube/

Patrikarakos, D. (2017). War in 140 characters: How social media is reshaping conflict in the twenty-first century. Hachette UK.

Perrigo, B. (2019, October 23). Facebook Says It’s Removing More Hate Speech Than Ever Before. But There’s a Catch. Time. https://time.com/5739688/facebook-hate-speech-languages/

Preamble of Radio Regulations. (2016).

Price, M. E., & Thompson, M. (Eds.). (2002). Forging peace: Intervention, human rights, and the management of media space. Edinburgh University Press. https://www.jstor.org/stable/10.3366/j.ctvxcrszn

Protocol on Amendments to the Protocol on the Statute of the African Court of Justice and Human Rights. (2014).

Rabat Action Plan. (2013).

Rajadhyaksha, M. (2006). Genocide on the Airwaves: An Analysis of the International Law Concerning Radio Jamming. Journal of Hate Studies, 5(1). https://doi.org/10.33972/jhs.43

Reinisch, A. (2005). The Changing International Legal Framework for Dealing with Non-State Actors. In P. Alston (Ed.), Non-State Actors and Human Rights. Oxford University Press.

Santa Clara Principles on Transparency and Accountability in Content Moderation. (2018).

Schlein, L. (2018, June 2). Hate Speech on Social Media Inflaming Divisions in CAR. Voice of America. https://www.voanews.com/africa/hate-speech-social-media-inflaming-divisions-car

Shen, J. (2001). The non-intervention principle and humanitarian interventions under international law. International Legal Theory, 7.

Stecklow, S. (2018, August 15). Why Facebook is losing the war on hate speech in Myanmar. Reuters. https://www.reuters.com/investigates/special-report/myanmar-facebook-hate/

Stremlau, N. (2018). Media, Conflict, and the State in Africa. Cambridge University Press. https://doi.org/10.1017/9781108551199

Teubner, G. (2006). The Anonymous Matrix: Human Rights Violations by ‘Private’ Transnational Actors. The Modern Law Review, 69(3), 327–346. https://doi.org/10.1111/j.1468-2230.2005.00587.x

Thomas, G. (1999). NATO and International law. On Line Opinion. https://www.onlineopinion.com.au/view.asp?article=1647&page=0

Tufekci, Z. (2018). YouTube, the great radicalizer. The New York Times. https://www.nytimes.com/2018/03/10/opinion/sunday/youtube-politics-radical.html

United Nations. (1945). Charter of the United Nations.

United Nations Convention on the Prevention and Punishment of the Crime of Genocide. (1948).

United Nations General Assembly. (1965). Declaration on the Inadmissibility of Intervention in the Domestic Affairs of States and the Protection of their Independence and Sovereignty, GA Res. 2131/XX.

United Nations General Assembly. (1970). Declaration on Principles of International Law concerning Friendly Relations and Co-operation among States in accordance with the Charter of the United Nations (GA Res. 2625 (XXV)).

United Nations General Assembly. (2005). Resolution 60, 2005 World Summit Outcome (A/RES/60/1).

United Nations General Assembly. (2009). The responsibility to protect (A/RES/63/308).

United Nations General Assembly. (2013). Group of Governmental Experts on Developments in the Field of Information and Telecommunications in the Context of International Security (A/68/98). https://undocs.org/A/68/98

United Nations Guiding Principles on Business and Human Rights. (2011).

United Nations Human Rights Council. (2018). Report of the detailed findings of the Independent International Fact-Finding Mission on Myanmar. https://www.ohchr.org/Documents/HRBodies/HRCouncil/FFM-Myanmar/A_HRC_39_CRP.2.pdf

United Nations Human Rights Council. (2021). Report of the Special Rapporteur on minority issues, Fernand de Varennes (A/HRC/46/57). https://undocs.org/A/HRC/46/57

United Nations International Covenant on Civil and Political Rights. (1966).

United Nations Report of the Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression. (2018).

United Nations Report of the Special Rapporteur to the Human Rights Council on online content regulation. (2018).

United Nations Security Council, Resolution 1952. (2010).

United Nations Security Council, Resolution 2354. (2017).

United Nations Universal Declaration of Human Rights. (1948).

Uyheng, J., & Carley, K. M. (2020). Bots and online hate during the COVID-19 pandemic: Case studies in the United States and the Philippines. Journal of Computational Social Science, 3, 445–468. https://doi.org/10.1007/s42001-020-00087-4

Varis, T. (1970). The Control of Information by Jamming Radio Broadcasts. Cooperation and Conflict, 5(3), 168–184. https://doi.org/10.1177/001083677000500303

Wardle, C., & Derakhshan, H. (2017). Information disorder: Toward an interdisciplinary framework for research and policy making (Report DGI(2017)09). Council of Europe. https://rm.coe.int/information-disorder-toward-an-interdisciplinary-framework-for-researc/168076277c

Whittle, D. (2015). The Limits of Legality and the United Nations Security Council: Applying the Extra-Legal Measures Model to Chapter VII Action. European Journal of International Law, 26(3), 671. https://doi.org/10.1093/ejil/chv042

Wu, F. T. (2011). Collateral Censorship and the Limits of Intermediary Immunity. Notre Dame Law Review, 87, 293.

Footnotes

1. Further hate speech that must be prohibited includes “any advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence; and all dissemination of ideas based on racial superiority or hatred, and on incitement to racial discrimination”. Still concerning but where restrictions must be targeted and proportionate include speech that “presents a serious danger for others and for their enjoyment of human rights…[or] be necessary in a democratic society for the respect of the rights or reputation of others or for the protection of national security or public order”.

2. See, also, United Nations General Assembly, Declaration on the Inadmissibility of Intervention in the Domestic Affairs of States and the Protection of their Independence and Sovereignty, GA Res. 2131/XX, 21 December 1965; United Nations General Assembly, Declaration on Principles of International Law concerning Friendly Relations and Co-operation among States in accordance with the Charter of the United Nations, GA Res. 2625 (XXV), 24 October 1970.

3. This Article includes a) genocide; b) conspiracy to commit genocide; c) direct and public incitement to commit genocide; d) attempt to commit genocide; e) complicity in genocide.

4. Metzl’s approach to information has been criticised as representing ‘a fashionable means of enhancing United States predominance within the international system, using information technology’.

5. See, also, Report of the Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression, A/73/348 (2018); Guiding Principles on Business and Human Rights (2011).


Pandemic platform governance: Mapping the global ecosystem of COVID-19 response apps


Introduction

On 11 March 2020, the World Health Organisation (WHO) officially declared the coronavirus (COVID-19) outbreak a global pandemic. By definition, a pandemic signals an ‘out of control’ contagion that threatens an entire population and implies a shift away from containment strategies towards extraordinary governance conditions (French et al., 2018). The WHO further stated: ‘it’s a crisis that will touch every sector, so every sector and every individual must be involved in the fight’ (WHO, 2020a, p. 3). Given the central role of platforms and apps in everyday life (van Dijck et al., 2018; Morris and Murray, 2018), this call to action would also necessarily involve working with big tech companies. Almost immediately, however, concerns were raised by civil society organisations and academic researchers about the development of apps to intervene in the COVID-19 crisis. These ranged from risks to civil liberties, given the apps’ potentially excessive surveillance capacities, to doubts about their actual effectiveness, particularly for digital contact-tracing, among other concerns (Ada Lovelace Institute, 2020; Kitchin, 2020; Privacy International, 2020). For major platform companies such as Google and Apple, therefore, getting ‘involved in the fight’ would include making carefully negotiated decisions about how to regulate their emerging COVID-19 app ecosystems, and how to balance the concerns and priorities of multiple stakeholders.

Critical questions regarding how platforms govern stem in part from a recognition that as intermediating or multi-sided techno-economic systems, platform companies like Apple and Google have begun to resemble political actors by utilising a layering of interrelated yet distinct mechanisms to control and exploit innovation (van Dijck et al., 2018; Klonick, 2017; Suzor, 2018). Platforms like app stores, for instance, use both technical and legal regulatory means to govern their relationship with third-party software developers, end-users, and other stakeholders (Eaton et al., 2011; Gillespie, 2015; Greene and Shilton, 2018; Tiwana et al., 2010), while navigating ‘external’ legal frameworks from national and supranational institutions (Gorwa, 2019). Moreover, from a public policy perspective, platform corporations are also increasingly understood as political actors beyond strictly the terms of market power, since they have become powerful gatekeepers of societal infrastructure that requires new forms of regulatory engagement (Khan, 2018; Klonick, 2017; Suzor, 2018). This is especially the case due to their entanglement with public communication, education, and healthcare, among other domains. Indeed, as a recent European Commission report on platform power observes, ‘the COVID-19 crisis has made the societal and infrastructural role taken up by platforms even more apparent’ (Busch et al., 2021, p. 4).

The exceptional conditions of the pandemic have produced equally exceptional responses from platform companies concerning the development of COVID-19 apps. Their interventions have, accordingly, shaped the complex and dynamic relations between software developers, users, and governments during the crisis. This article presents an exploratory systematic empirical analysis of this COVID-19 app ecosystem and draws attention to how layered platform governance and power relations have mediated the app response to the pandemic as a singular global emergency. We use the term ‘ecosystem’ to refer to a platform and the collection of (mobile) apps connected to it (Tiwana et al., 2010). Both the Android and iOS mobile platforms technically produce distinct COVID-19 app ecosystems with their own apps, despite being organisationally interconnected since many developers produce apps for both Android and iOS.

The numerous socio-political risks and issues identified with COVID-19 apps suggest an obvious need for critical observation of this domain of platform activity (Rieder and Hofmann, 2020). Rapid research outputs have assessed how the powerful global technology sector ‘mobilised to seize the opportunity’ and how the pandemic ‘has reshaped how social, economic, and political power is created, exerted, and extended through technology’ (Taylor et al., 2020). Critical commentators, moreover, have drawn attention to how specific protocological interventions by platform companies, such as the development of the GAEN (Google/Apple Exposure Notification) system, demonstrated the significant asymmetries between national governments and platform companies controlling these processes (Veale, 2020). Likewise, Milan et al. have explored the ‘technological reconfigurations in the datafied pandemic’ from the perspective of underrepresented communities (2020). Efforts to broadly map, document and categorise COVID-19 apps, meanwhile, have mainly originated from computer science with an interest in security and cryptography (Ahmed et al., 2020; Levy and Stewart, 2021; Samhi et al., 2020; Wang et al., 2020) or from public health research aiming to evaluate apps according to policy-related frameworks (Davalbhakta et al., 2020; Gasser et al., 2020). Other scoping studies have been conducted by the European Commission (Tsinaraki et al., 2020), yet such research has not systematically analysed platforms and app stores’ mediating role in socio-technical innovation and control (Eaton et al., 2011). Albright’s study is notable for stressing how ‘hundreds of public health agencies and government communication channels simultaneously collapsed their efforts into exactly two tightly controlled commercial marketplaces: Apple’s iOS and Google’s Play stores’ (2020, n.p.). However, a comprehensive empirical analysis of the specific ways that platform governance has played out in the emergence of COVID-19 apps has largely been missing.

Drawing from multi-situated app studies (Dieter et al., 2019), we address this gap by empirically mapping COVID-19 apps across Google’s Play store and Apple’s App Store ecosystems. By analysing apps in multiple infrastructural situations, moreover, we draw attention to how platform governance is layered across different dimensions. Specifically, this includes: the algorithmic sorting of COVID-19 apps; the kinds of actors involved in app development; the types of app responses; the geographic distribution of the apps; the responsivity of their development (i.e., how quickly apps are released or updated); how developers frame their apps and address their users; and the technical composition of the apps themselves. While we recognise the above-mentioned importance of the GAEN protocol used to facilitate digital contact-tracing through mobile apps, it is not included in this study because it had not yet been widely implemented at the time of this analysis. 1 Similarly, while access to mobile device sensors (e.g. GPS sensors, Bluetooth adapters, etc.) is governed and controlled on the level of Google and Apple’s mobile operating systems (i.e. on the level of Android and iOS) as well as through app permissions requested from users, this study focused primarily on the governance by app stores. 2 Finally, we offer an assessment of our findings across these layers concerning key themes in discussions of platform governance, particularly around the dominance and public legitimacy of platforms as private governors, and suggest some implications for policy considerations that stem from the eventfulness of global crisis-driven platform interventions.

App stores’ responses to the COVID-19 pandemic

On 14 March 2020, three days after the initial pandemic declaration, Apple announced significant restrictive changes to its App Store policies. Apple would now evaluate all apps developed for the coronavirus disease with a heightened degree of attention. Reiterating their mantra of the App Store as ‘a safe and trusted space’, Apple affirmed a commitment ‘to ensure data sources are reputable’ as ‘Communities around the world are depending on apps to be credible news sources’ (Apple Developer, 2020a, n.p.). This would mean only accepting authoritative apps ‘from recognized entities such as government organisations, health-focused NGOs, companies deeply credentialed in health issues, and medical or educational institutions’ (Apple Developer, 2020a, n.p.). For Apple, this also meant that ‘Entertainment or game apps with COVID-19 as their theme will not be allowed’ (Apple Developer, 2020a, n.p.). On the same day, Google published an editorial campaign page on Google Play titled ‘Coronavirus: Stay Informed’ with a list of recommended apps for being ‘informed and prepared’ about coronavirus, including apps from organisations like Centers for Disease Control and Prevention (CDC), American Red Cross, News360, the WHO, and Twitter (Google Play, 2020, n.p.). Shortly before this ‘Stay Informed’ campaign, Google/Alphabet CEO Sundar Pichai had outlined measures in place across their range of services to deal with the unique challenges of the crisis, stressing that Google Play’s policies already would prohibit app developers from ‘capitalizing on sensitive events’ and restrict the distribution of medical or health-related apps that are ‘misleading or potentially harmful’ (Pichai, 2020, n.p.).

As the pandemic spread and intensified throughout the year, both companies continued to update their editorial and policy positions for managing COVID-19 apps, while elaborating a set of regulatory mechanisms, and developing new standards and techniques to control what had become an exceptional niche of software development activity. In May 2020, Google Play released its official developer guidelines for COVID-19 apps. In addition to setting Google up as an information matchmaker, ‘connecting users to authoritative information and services’, Google outlined economic limits on COVID-19 apps – that is, any apps that meet their eligibility requirements (Google Help, 2020b) – noting they could ‘not contain any monetisation mechanisms such as ads, in-app products, or in-app donations’ (Tolomei, 2020, n.p.). Similarly, it restricted content that contained ‘conspiracy theories, misleading claims, “miracle cures” or dangerous treatments, or any patently false or unverifiable information’ (Google Help, 2020b, n.p.). In an update to their App Store Review Guidelines, meanwhile, Apple required that apps providing services ‘in highly-regulated fields’, such as healthcare, ‘should be submitted by a legal entity that provides the services, and not by an individual developer’ and that medical apps ‘must clearly disclose data and methodology to support accuracy claims relating to health measurements’, while also introducing new policies for collecting health-related data (Apple Developer, 2020b, n.p.). To ensure this, Apple claims that ‘every app is reviewed by experts’ based on its App Store Review Guidelines (Apple Developer, 2020b, n.p.). Both stores also added new pandemic-related requirements to their general app store policies (e.g., around health and medical advice) and expedited the app review process so that COVID-19 apps could be approved more quickly (Google Help, 2020a, n.p.; Google Help, 2020b, n.p.; Tolomei, 2020, n.p.).

Such policy changes indicated a suspension of ‘business-as-usual’ for COVID-19 apps, as particular mechanisms around competition and monetisation – typically central to the app economy – were altered by the platform companies to support the emergence of a unique space of software development. Moreover, these policy changes are also implemented through different layers of technical agency, from unique modes of algorithmic curation (i.e., Google’s editorial filter) to new protocols for developers (e.g., GAEN). In this respect, they signal broader changes that ultimately extend throughout the platform infrastructure. In what follows, we map how these layered changes initiate a form of pandemic platform governance that unfolds through an interplay between a platform’s affordances for app development, the emergence of app ecosystems around platforms, and the platform’s regulatory mechanisms, which together simultaneously enable generativity and control (Eaton et al., 2011; Tiwana et al., 2010). That is, these governance mechanisms become central to the creation, evolution, and regulation of the COVID-19 app ecosystems that have emerged around Google’s Android and Apple’s iOS mobile platforms. In turn, they support the efforts of a heterogeneous network of third-party actors that aim to intervene in and manage the unfolding pandemic as a crisis – whether or not these aims were ultimately achieved.

Demarcating pandemic app ecosystems

Since app stores are the primary environments for distributing mobile apps, we can use them to locate, demarcate, and characterise collections of mobile apps (Dieter et al., 2019). Our research focused on the two most popular app stores worldwide, Google Play for Android apps and Apple’s App Store for iOS apps, 3 and queried their supported countries and locations for [COVID-19]-related search terms. We first compared the results and analysed the types of actors behind the development of COVID-19 apps based on the developer listed for the app 4 and information on the app details page, and second compared what type of responses they offer to the pandemic by examining available information in the app stores, including developer name, developer identifier, app descriptions, app icons, app screenshots, and developer websites. In both cases, apps can belong to multiple categories as they may offer various response types and may be developed in collaboration between different actors. Third, we examined app development responsivity across countries by retrieving all app version updates to account for the release dynamics in pandemic crisis responses. This responsiveness is enabled by the generative conditions provided by platforms that enable unprompted innovation (Zittrain, 2008), but stresses the capacity of developers, rather than of platforms, to respond quickly in the face of the uncertainties of the pandemic. Fourth, we conducted a content analysis of the app descriptions to examine how developers rhetorically position their apps in terms of techniques used, and how they engage with data and privacy issues. Finally, we examined the building blocks developers use in their app software packages to build COVID-19 apps. Due to the strict technical governance of iOS apps by Apple, we focused on the embedded software development kits (SDKs, i.e., collections of software libraries and tools commonly used by app developers) in Android apps. We used the AppBrain API to retrieve the embedded SDKs. 5 We collected all the data in mid-2020 when most countries already had one or more apps listed in the app stores. Google Play data were collected on 29 June (editorial subset) and on 16 July (non-editorial subset); Apple’s App Store data were collected on 20 July. Versions were retrospectively retrieved from App Annie.
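To make the querying step concrete, the sketch below approximates the per-country keyword search on the iOS side using Apple's public iTunes Search API. This endpoint is only a stand-in for the custom scrapers used in the study, and the country codes, search terms and polling delay are illustrative choices rather than the study's actual parameters.

```python
import json
import time
import urllib.request
from urllib.parse import urlencode

# Illustrative subset of storefronts and search terms (not the study's full lists).
COUNTRIES = ["us", "gb", "de", "in", "br"]
TERMS = ["covid", "covid-19", "corona", "coronavirus"]

def search_app_store(term: str, country: str, limit: int = 100) -> list:
    """Query Apple's public iTunes Search API for iOS apps matching a term."""
    params = urlencode({"term": term, "country": country,
                        "entity": "software", "limit": limit})
    with urllib.request.urlopen(f"https://itunes.apple.com/search?{params}") as resp:
        return json.load(resp).get("results", [])

apps = {}  # de-duplicate results across countries and terms by app id
for country in COUNTRIES:
    for term in TERMS:
        for record in search_app_store(term, country):
            entry = apps.setdefault(record["trackId"], {
                "name": record["trackName"],
                "developer": record.get("sellerName", ""),
                "countries_seen": set(),
            })
            entry["countries_seen"].add(country)
        time.sleep(1)  # be polite to the API

print(f"{len(apps)} unique iOS apps found across {len(COUNTRIES)} storefronts")
```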

In the initial phase of demarcating our data sets, we noticed that both stores have distinct logics and mechanisms for surfacing, organising, and ranking apps. We queried the 150 supported Google Play ‘countries’ and the 140 supported App Store ‘countries and regions’ for [COVID], [COVID-19], [corona], and related keywords using custom-built app store scrapers. 6 Apple's App Store returned ranked lists of 100 apps per country for our search queries, resulting in a total source set of 248 unique iOS apps. Google Play, however, did not produce such ranked lists. Instead, it rerouted all COVID-19 queries to a relatively small set of pre-selected apps in each local store.

Typically, app stores are organised through an algorithmic logic of sorting and ranking, complemented with an editorial logic of ‘best of’ and ‘editor’s choice’ lists (Dieter et al., 2019; Gillespie, 2014). For COVID-19-related search queries, Google Play solely relies on an editorial strategy (i.e., a search query filter) to surface a highly curated set of COVID-19 apps per country. A user searching for COVID-19-related terms is automatically redirected to Google’s editorially curated list of COVID-19 apps, and specifically those of the user’s home location only. We found that we could easily circumvent this editorial filter by exposing it to simple misspellings (e.g., [COVIID], [coronna], etc.), after which Google Play returned a more extensive list of relevant apps. Consequently, we captured two complementary source sets for Google Play: (a) an ‘editorial’ set of app responses per country with 247 unique apps, and (b) a ‘non-editorial’ set of 163 additional apps through misspellings. These 163 ‘additional’ apps were present in Google Play, but Google Play's editorial filter prevents these apps from surfacing for standard [COVID-19] search queries. In addition, there are also apps that are not included in our data set (e.g., the German luca response app) because they do not mention ‘coronavirus’, ‘COVID-19’, ‘pandemic’, or related keywords (Google Help, 2020b), despite being part of the pandemic response. While this is a limitation to our method, it also attests to the governance of this app ecosystem through controlling the terms used on app details pages (as only apps from recognised sources are eligible to use COVID-19-related keywords in their titles or descriptions).
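A minimal sketch of how the two Google Play source sets can be separated is given below: it contrasts the results of a standard query with those of a misspelled one. Here scrape_play_search() is a placeholder for the custom Google Play scraper used in the study, and the package names it returns are invented purely to make the example runnable.

```python
# Placeholder for the custom Google Play scraper; returns canned, invented
# package names so the demarcation logic can be demonstrated end to end.
def scrape_play_search(query: str, country: str) -> set:
    canned = {
        ("covid", "xx"): {"org.example.official_app", "org.example.who_info"},
        ("coviid", "xx"): {"org.example.official_app",
                           "org.example.regional_tracker",
                           "org.example.symptom_diary"},
    }
    return canned.get((query, country), set())

def demarcate(country: str):
    editorial = scrape_play_search("covid", country)    # curated editorial results
    broad = scrape_play_search("coviid", country)       # misspelling bypasses the filter
    return editorial, broad - editorial                 # (a) editorial, (b) non-editorial

editorial, non_editorial = demarcate("xx")
print("editorial set:", sorted(editorial))
print("non-editorial set:", sorted(non_editorial))
```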

The global ecosystem of pandemic response apps

In what follows, we present results from our analysis of the [COVID-19]-related app ecosystems of Google Play (Android) and App Store (iOS).

Source sets and actor types identified

We first compared the app distribution in our data sets and the different actors involved in their production. Figure 1 shows the distribution of COVID-19 apps across both stores and further distinguishes between the editorial and non-editorial Google Play apps. Individual apps are colour-coded to represent actor types: government, civil society, health authority, academic, and private actors.

Figure 1: Demarcated source sets (Google Play and App Store). Light green: Android app ecosystem (Google Play source set); light blue: iOS app ecosystem (App Store source set). Illustration: authors and DensityDesign Lab.

The most striking finding is the large number of apps that feature in only one store. While the apps shared across stores (N=136) tend to be made by government actors, many government-made apps are only available in one store. About 70% (N=134) of government apps within the Google Play editorial set do not have an iOS equivalent in the App Store. While more fine-grained analysis is needed to understand these differences, one likely factor is the different market shares of the respective mobile operating systems and app stores across countries. To illustrate, Android has a 95% market dominance in India (Statcounter, 2021), and this country produced the highest number of Android COVID-19 apps overall, as we detail below. Another contributing factor is Android’s more permissive (open) architecture, as compared to Apple’s restrictive (closed) iOS architecture style and governance (Eaton et al., 2011); specifically, the more permissive use of sensors on Android devices, which are key to developing contact-tracing applications. The variance suggests divergent national strategies for implementing apps across platforms, which has consequences for users who may be presented with a different selection of COVID-19 apps based on their mobile operating system and corresponding app store.

There are also notable differences in the composition of actors developing COVID-19-related apps in each store (Figure 2). Government-produced apps are the most prevalent in both stores, positioning governments as key official and recognised sources outlined in the app stores’ policies. However, they are significantly more prevalent in Google Play (65%, N=267), and even more so in the Google Play editorial set (79%, N=195), compared to the App Store (48%, N=121). One outcome of Google’s editorial strategy is an increased presence and visibility of these government-made apps, yet curiously 42% of Google Play’s government-made apps did not make it into the editorial source set, indicating that being a government actor alone is not enough to make the editorial list.

In contrast, private actor apps are relatively more prevalent in the App Store (41%) than Google Play (32%). The privately-developed iOS apps are predominantly from commercial actors offering healthcare solutions. While most also exist as Android apps, they do not surface in our Google Play data sets, signalling how Google and Apple have different criteria for retrieving health companies and organisations as official and recognised sources. Additionally, the COVID-19 app response conditions gave rise to governmental actors seeking app development collaborations with private actors for Android (N=26) and iOS (N=12) apps. These collaborations were often explicitly mentioned in the app description. Further, a small but significant number of apps have been developed with the involvement of academic researchers (e.g., Covid Symptom Study); civil society actors (e.g., Stopp Corona from the Austrian Red Cross, or the WHO apps); or health authorities (e.g., the French Covidom Patient to monitor COVID-19 patients after a hospital visit). While smaller in number, the presence of these other actor types contributes to the credibility and legitimacy of the apps and the ecosystem at large.

Figure 2: Actor types identified behind [COVID-19]-related apps (Android and iOS), based on the listed developer names and app descriptions. Note: apps can belong to multiple categories. Illustration: authors.

Geographical distribution of apps by country

After exploring the distribution of apps and actor types across platforms, we focused on their geographical distribution. The App Store’s ranked lists of apps are less country-specific and show a high overlap between countries and regions. Google Play, whose editorial filter surfaces only country-specific COVID-19 apps, allows for a more distinctive geographic image (Figure 3). In this store, we find that most countries offer a small selection of country-specific apps, coupled with two WHO apps (OpenWHO: Knowledge for Health Emergencies and WHO Info). As early as 15 February, a month before the pandemic was officially declared, the WHO stated that ‘we’re not just fighting an epidemic; we’re fighting an infodemic’ (Zarocostas, 2020, p. 676). To combat COVID-19 dis/misinformation, the WHO had begun working closely with more than 50 major platform companies, including Google, to implement solutions to fight the emerging infodemic (WHO, 2020b). This collaboration, initiated by the WHO, resulted in ensuring that ‘science-based health messages from the organisation or other official sources appear first when people search for information related to COVID-19’ on participating platforms (WHO, 2020b, n.p.), as we observe in Google Play with the surfacing of the WHO apps.

Figure 3: Geographical distribution of [COVID-19]-related Android apps by country or region. Illustration: authors and DensityDesign Lab.

Measured in terms of downloads, most countries have one primary, government-provided app among their country-specific apps. There are, however, notable exceptions. While India has one dominant government-provided app (Aarogya Setu), which was made mandatory for government and private sector employees during the early stages of the pandemic, India offers 61 apps in total, far more than any other country. Upon closer inspection, we found that India had a multi-tiered response with many apps developed for specific regions and developed by local governments (Bedi and Sinha, 2020). In contrast, countries such as Taiwan, Denmark, Iceland, Portugal, and Uruguay offered only one app (in addition to the WHO apps), all of which are government-provided. We also see countries where non-government apps are dominant or highly prevalent (Philippines, Thailand, Mauritius, Netherlands, Canada) or where the dominant app involves multiple actors in their production, including collaborations between governmental and private actors (Germany, Czechia, Austria, Kyrgyzstan). In some countries, we found multiple apps reflecting a regional or state-based app response, strategies with multiple apps with distinctive features, or competing (non-governmental) apps and strategies.

It is worth noting two final observations about geographical distribution. First, China is notably missing from our study because Google Play is unavailable there. To battle the pandemic, China has relied on Health Code, a mini-programme developed by Alipay and WeChat, which generates a colour-based health code for travelling (Liang, 2020). Instead of developing new COVID-19 apps, China integrated Health Code into two dominant mobile payment apps. Second, the two WHO apps surface for every country, with one notable exception: the United States. Not only did the WHO apps not make it to the editorial list, but direct search queries for these apps redirected to the US editorial list where the WHO apps did not feature. In April 2020, President Trump halted funding to the WHO after criticising its handling of the COVID-19 pandemic. A few months later, in July, President Trump moved to officially withdraw the US from the WHO. The omission of the two WHO apps in the US may reflect broader geopolitical dynamics and suggests that the editorialisation of Google Play’s app ecosystem may not be conducted by Google alone. The editorial lists reflect a generally benevolent platform strategy to steer users to what is perceived to be the most appropriate apps; however, in this case, we see the editorial logic used for more overtly political purposes with the emergence of censorship (even though these WHO apps exist in the US store).

Pandemic response types

To understand the type of responses COVID-19 apps offer, we inquired into what kind of apps these actors built. This allows us to identify which response types are dominant, and which emerge with the distinct governance mechanisms of each store and the actors in each ecosystem.

While contact-tracing apps have received the most attention in news reporting, we found many different response types (Figure 4(a)). In both stores, 50–60% of all apps offer news and information on the pandemic, developed by various types of actors (Figure 4(b) and (c)). The prominence of authoritative information, updates and data may result from the WHO’s collaboration with platform companies to ‘immunize the public against misinformation’ by connecting users to official sources (WHO, 2020b).

At the time of the analysis, over 20% of apps engage with contact-tracing and exposure notification, which are typically built by government actors or in collaboration with private actors (Figure 4(b) and (c)). We find a diversity of potential surveillance forms beyond contact-tracing: over 48% of apps offer different kinds of symptom checkers or reporting tools, ranging from keeping a diary to the solicitation of medical and personal data. They are connected to private companies, academic research, or aligned with public healthcare. About 15% of all apps offer tools for remote healthcare developed by governmental and private actors.

Figure 4(a) to (c): Comparison of response types represented by [COVID-19]-related apps (Google Play vs App Store). Note: apps can belong to multiple categories. Illustration: authors.

We also found new categories compared to existing literature, such as mental health apps to deal with psychological pressures during the pandemic. We further found apps soliciting data for research studies, such as the German Corona-Datenspende, which lets users donate data from various devices to assist academic studies on COVID-19. When comparing the two stores, we find that networked medicine apps (for healthcare workers to communicate and interact within a system) are more prevalent in the App Store, while crisis communication, quarantine compliance, and informant apps (to report people breaking COVID-19 rules to authorities) are mostly or only available in Google Play.

Notably, quarantine compliance, informant, movement permit, and crisis communication apps are primarily built by government actors. We found apps facilitating crowd-sourced state surveillance in Argentina, Chile, and Russia. These ‘social monitoring’ apps enable reporting on the suspicious behaviour of others. In Bangladesh and India, governmental apps call on citizens to report ‘possibly affected people’ to ‘free the country’ as part of their ‘citizen responsibility’. In Lithuania and India, we observed the gamification of a pandemic where users can participate in daily health monitoring or symptom tracking to collect points to receive rewards or discounts.

Developer responsivity

To analyse how rapidly the COVID-19 app ecosystem emerged and evolved, we examined how responsive app developers have been to the pandemic. We use the term responsivity as a measure or proxy for the dynamics of software updates during the crisis and its openness to unprompted innovation (Zittrain, 2008). Responsivity is defined by how quickly apps are released and is measured by the number of app updates per time interval. It captures a sense of how actively a country/developer is working on those apps and how invested countries are in the response that the app represents.
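A minimal sketch of how such a responsivity measure can be computed from release histories is shown below. The app identifiers and release dates are invented for illustration; in the study, version histories were retrieved retrospectively from App Annie.

```python
from collections import Counter
from datetime import date

# Invented release histories (initial launch plus updates) per app.
releases = {
    "org.example.coronapp": [date(2020, 3, 7), date(2020, 3, 21),
                             date(2020, 4, 2), date(2020, 4, 18)],
    "org.example.healthinfo": [date(2020, 4, 1)],
}

def responsivity(release_dates) -> Counter:
    """Count releases per calendar month as a proxy for development activity."""
    return Counter(d.strftime("%Y-%m") for d in release_dates)

for app_id, dates in releases.items():
    per_month = responsivity(dates)
    print(app_id, dict(per_month), "| total releases:", sum(per_month.values()))
```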

Figure 5 shows the Android apps per country plotted on a timeline, indicating when countries first introduced them in transparent circles and updated them in coloured squares. It shows that early app development commenced almost immediately after the official declaration of the pandemic with most countries launching their apps in March–April 2020. Interestingly, we found that several apps existed before the crisis started. These are primarily pre-existing e-government apps, medical apps for communicating with health professionals and apps providing healthcare information. While conforming with the new platform policies of Apple and Google that prioritise releases from official and recognised entities, these repurposed apps signal the developers’ agile response in using existing apps and app functionalities to deal with the crisis.

Figure 5: Responsivity of [COVID-19]-related app developers by country (Android only), 2013 – August 2020. Circles are initial releases (i.e., app launches); squares are any additional releases (i.e., app updates); scaled by the total number of releases. Data: App Annie. Illustration: authors and DensityDesign Lab.

Existing research on ‘app evolution’ has found that around 14% of apps are updated regularly on a bi-weekly basis (McIlroy et al., 2015), while developers abandon the vast majority of apps shortly after being released (Tiwana, 2015). By contrast, surveying the average pace of updates for the COVID-19 apps per country demonstrates a high level of responsivity, particularly in India, Brazil, and the United Arab Emirates. Zooming into specific examples such as Colombia’s CoronApp (the most frequently updated app in our data) reveals how agile development has coordinated with ongoing government injunctions to handle the pandemic. Inspecting the changelogs (‘What’s New’) reveals recurring efforts to synchronise app functionalities with state emergency decrees.

From an inverse perspective on responsivity, a relative absence of development activity can also prompt further research into pandemic governance. Denmark and the UK show limited responsivity, which may indicate delays in developing COVID-19 apps, not least due to public controversies. In June 2020, Denmark’s data protection agency prohibited its app from processing personal data until further notice (Amnesty International, 2020). The app has since relaunched after addressing multiple privacy issues. England and Wales, meanwhile, initially experimented with an app that used a centralised approach to data collection, but this was eventually abandoned (Sabbagh and Hern, 2020). The findings can thus additionally reflect cases of backlash and legal contestation, specifically related to data protection and privacy.

Finally, an essential aspect of pandemic app store governance is the degree to which the app stores actively enforce their policies by removing apps. While it is difficult to establish whether the developer or the app store removed an app, and for what reason, two large-scale analyses found that after 1.5–2 years, Google Play (Wang et al., 2018) and the App Store (Lin, 2021) removed almost half of the apps in their stores. In our data set, Google Play removed only 7.5% (N=31) and the App Store only 6.0% (N=15) of all apps after eight months. This is even lower than the study of COVID-19 apps by Samhi et al. (2020), which observed that 15% of COVID-19-related apps had been removed in the first two weeks after data collection in June 2020. COVID-19 apps are subject to ‘an increased level of enforcement’ during the app review phase and are thus likely more thoroughly screened and removed sooner (Google Help, 2020b).
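One simple way to approximate such a removal check on the iOS side is sketched below, assuming a list of previously collected track ids: Apple's public lookup endpoint returns an empty result for delisted apps, although this cannot distinguish developer-initiated withdrawal from store enforcement. The ids shown are placeholders, not apps from the source set.

```python
import json
import urllib.request

# Placeholder iOS track ids standing in for apps collected in an earlier wave.
APP_IDS = [1234567890, 1098765432]

def is_removed(track_id: int) -> bool:
    """Treat an empty lookup result as a removed (or delisted) app."""
    url = f"https://itunes.apple.com/lookup?id={track_id}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["resultCount"] == 0

removed = [i for i in APP_IDS if is_removed(i)]
print(f"removed: {len(removed)}/{len(APP_IDS)} ({len(removed)/len(APP_IDS):.1%})")
```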

Discursive positioning of response apps

In the next step, we analysed how the apps discursively present themselves to users and how they engage with existing technology and data and privacy debates. Textual app descriptions address users in particular ways to inform them about the apps’ functionalities and use cases, and persuade users to download them. We examined whether apps explicitly mentioned specific techniques and data/privacy concerns in their descriptions, and measured their keyword frequency. The techniques listed in Figure 6(a) and (b) indicate how developers convey different COVID-19 app responses to users. It includes prominent terms like location, notification, track/trace, alongside implementation terms like GPS, Bluetooth, alert, smart, or platform, and even mentions of machine-learning algorithms and artificial intelligence to identify COVID-19 symptoms. We also found related terms such as video, chat, messaging, and bots – often used in relation to remote healthcare and diagnosis. Overall, the distribution of these terms is similar in both app ecosystems, suggesting a similar discourse around techniques is used.
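A minimal sketch of this keyword-frequency measure is shown below; the term list and the two descriptions are invented for illustration, and the same counting logic generalises to the data/privacy-related vocabulary discussed further on.

```python
from collections import Counter

# Illustrative technique-related vocabulary and app descriptions; the study
# measured keyword frequency across the full Android and iOS description sets.
TECHNIQUE_TERMS = ["location", "notification", "bluetooth", "gps",
                   "trace", "track", "alert", "chat", "bot"]

descriptions = {
    "org.example.tracer": "Uses Bluetooth to trace contacts and send exposure alerts.",
    "org.example.info_app": "Daily notifications and a chat bot with official updates.",
}

def term_resonance(descs, terms) -> Counter:
    """Number of apps whose description mentions each term at least once."""
    counts = Counter()
    for text in descs.values():
        lowered = text.lower()
        counts.update(term for term in terms if term in lowered)
    return counts

print(term_resonance(descriptions, TECHNIQUE_TERMS).most_common())
```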

Figure 6(a) and (b): Resonance of technique-related terms used in [COVID-19]-related app titles and/or descriptions (Android and iOS). Illustration: authors.

Next, we identified the presence of terms related to data/privacy solutions or concerns. Figures 7(a) and (b) show relatively high use of terms describing how apps deal with collected data, including anonymous, encrypted, sensitive, or locally stored data. We also find occasional claims that apps delete data, securely transmit data via HTTPS, or process data adhering to the EU General Data Protection Regulation (mostly European apps). As such, these apps express their compliance with the app stores’ policies, which have additional requirements for collecting and using personal or sensitive data to support COVID-19-related (research) efforts (Apple Developer, 2020b; Google Help, 2020b). Overall, we observe that the app response to the pandemic is primarily framed as a data/privacy-sensitive one. Half of iOS app descriptions (N=126) and 40% of Android apps (N=158) mention data/privacy terms, showing how app developers address their users’ potential privacy concerns. It bears emphasising, of course, that the mere presence of these discourses does not mean the operations of these apps conform to such stated capacities and values (Kuntsman, Miyake and Martin, 2019).

Figure 7(a) and (b): Resonance of data/privacy-related terms used in [COVID-19]-related app titles and/or descriptions (Android and iOS). Illustration: authors.

Development of response apps

Finally, we inquired into the development of apps from a technical perspective, drawing attention to software development kits (SDKs) as the building blocks for mobile app development, enabling developers to implement particular frameworks and external functionalities. In this context, Google and Apple are essential players with their app stores as means of distribution, and their central role as infrastructure providers offering and controlling the means of production. They function as an ‘obligatory passage point’ for the production and distribution of apps in which their SDKs function as mechanisms of generativity and control, enabling platforms to govern the development of apps (Blanke and Pybus, 2020; Pybus and Coté, 2021; Tilson et al., 2012). This analysis, however, only focuses on Android apps due to Apple’s very restrictive technical governance of iOS apps.

For our 410 Android apps, we find 7,335 SDKs in total, with an average of 19 SDKs per app (28 apps returned no data from AppBrain). Seventy-nine apps contain no libraries at all, suggesting that they have not been built with standard development tools such as Android Studio and may have been coded from scratch, or perhaps that developers are cautious about implementing third-party code in this ecosystem. Among these are apps from the Indian, Nepalese, and Vietnamese governments. The high average number of SDKs shows developers’ reliance on these libraries for building apps and for accessing (third-party) functionality. Figure 8 shows that the majority of the embedded SDKs are development tools (98.4%, N=7,217), followed by advertising network libraries (1.06%, N=78) and social libraries (0.54%, N=40). The main development tools are embedding user interface components, networking, app development frameworks, Java utilities, databases, and analytics. We find very few advertising libraries due to Google’s policy restrictions on COVID-19 app monetisation. Interestingly, we found most of them in apps built by governments. For example, we detected Google’s AdMob SDK in government-made apps from India, Qatar, and Singapore, and the Outbrain SDK in government-made apps from Australia, Argentina, Italy, and the United Arab Emirates.
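The aggregation behind these figures can be sketched as follows, assuming the embedded libraries per app have already been retrieved (in the study, via the AppBrain API). The package names and type tags below are invented for illustration.

```python
from collections import Counter

# Invented per-app SDK lists with AppBrain-style type tags, purely to
# illustrate the aggregation over the real 410-app Android set.
app_sdks = {
    "org.example.coronapp": [
        {"name": "androidx.appcompat", "type": "development tool"},
        {"name": "com.google.maps", "type": "development tool"},
        {"name": "com.squareup.retrofit2", "type": "development tool"},
        {"name": "com.google.ads.admob", "type": "advertising network"},
    ],
    "org.example.info_app": [
        {"name": "com.facebook.sdk", "type": "social"},
    ],
}

type_counts = Counter(sdk["type"] for sdks in app_sdks.values() for sdk in sdks)
avg_per_app = sum(len(sdks) for sdks in app_sdks.values()) / len(app_sdks)
no_libraries = sum(1 for sdks in app_sdks.values() if not sdks)

print("embedded SDKs by type:", dict(type_counts))
print(f"average SDKs per app: {avg_per_app:.1f}; apps without libraries: {no_libraries}")
```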

Figure 8: Software libraries embedded in [COVID-19]-related apps (Android only). Nodes are library tags (left), library types, their developers or owners, and their open-source availability (right); scaled by the number of occurrences. Highlighted are libraries developed/owned by Google (dark green). Illustration: authors and DensityDesign Lab.

When looking at the developers behind the SDKs, we find 134 unique actors. We observe a strong dependency on Google, as 56% of all apps rely on at least one Google-owned SDK and the average app relies on 11 Google-owned SDKs (Figure 9). We further find 70 individual developers, most of them on GitHub, offering specific solutions such as data serialisation, data conversion and image cropping. Some 81% of all apps use one or more open source libraries, with the average app using 15 open source SDKs. We find that Google dominates the means of production by owning the most libraries: not just the ‘core’ Android ones, but also those used to embed maps and app analytics. By focusing on the ownership of these libraries, we highlight the material conditions of platforms and apps like Google as ‘service assemblages’ (Blanke and Pybus, 2020), which reveals some of the deeper ways in which pandemic platform governance, and platform power more generally, manifests.
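A similar sketch, again over invented data, shows how the Google-dependency and open-source shares reported here can be derived once each embedded library is annotated with its owner and licence status (in the study, these attributes came from AppBrain metadata).

```python
# Invented per-app SDK annotations (owner and open-source flag), for illustration only.
app_sdks = {
    "org.example.coronapp": [
        {"name": "androidx.appcompat", "owner": "Google", "open_source": True},
        {"name": "com.google.maps", "owner": "Google", "open_source": False},
        {"name": "com.squareup.retrofit2", "owner": "Square", "open_source": True},
    ],
    "org.example.info_app": [
        {"name": "com.github.example.imagecrop", "owner": "individual", "open_source": True},
    ],
}

google_per_app = {app: sum(s["owner"] == "Google" for s in sdks)
                  for app, sdks in app_sdks.items()}
dependent = [app for app, n in google_per_app.items() if n > 0]
share_google = len(dependent) / len(app_sdks)
avg_google = (sum(google_per_app[a] for a in dependent) / len(dependent)) if dependent else 0.0

open_share = sum(any(s["open_source"] for s in sdks)
                 for sdks in app_sdks.values()) / len(app_sdks)

print(f"{share_google:.0%} of apps embed at least one Google-owned SDK "
      f"({avg_google:.1f} such SDKs per dependent app on average); "
      f"{open_share:.0%} use one or more open-source libraries")
```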

Figure 9: Developers behind software libraries embedded in [COVID-19]-related apps by country or region (Android only). Circles (pies) are library developer distributions per country; horizontal axis: continents; vertical axis: % of open source libraries. Illustration: authors and DensityDesign Lab.

Conclusion: Governing the pandemic response

A key starting point for our analysis of COVID-19 apps was to go beyond the critical analysis of single apps within a national context. As we have shown, COVID-19 apps also need to be understood relationally, situated within infrastructures and embedded in the context of platform governance. Such an understanding recognises from the beginning that platform companies occupy a central role in app ecosystems, exercised through diverse mechanisms and agencies that operate across different layers (Gorwa, 2019), and mediated by the relationships between governments, citizens and other actors.

In this article, we demonstrated and discussed how the two dominant COVID-19 app ecosystems have taken shape during the pandemic through acts of exceptional platform governance. We observed unique techniques of control determining which apps make it into the stores, how they are positioned and accessed in the stores, who they are developed by, and what kinds of functionality they may have (including restrictions on ads and other economic features). Nevertheless, the platforms’ technical affordances have provided generative means for a diversity of responses to emerge, with individual apps negotiating these governing conditions as part of their development.

First, we observed a broad alignment of states, international organisations and platform companies in terms of the recognised need to act or get ‘involved in the fight’. While tensions over competition, privacy, taxation or content moderation have come to predominantly define the relations between platform companies and national governments (e.g. Busch et al., 2021; Gorwa, 2019; Khan, 2018; Klonick, 2018; Suzor, 2018), the pandemic re-directs these powerful actors around a global threat in specific ways. This includes the related infodemic and the need to maintain the perception of legitimate authority during the roll-out of apps whose data-gathering powers may otherwise face strong resistance. While such tensions obviously remain, they are thrown into relief by the context of the crisis (as the omission of the WHO apps in the US demonstrates), which allows for a unique empirical mapping of the asymmetries, power relations and points of potential negotiation that shape platform governance more generally.

Second, pandemic platform governance has initially supported the production of app ecosystems which are partially ‘sandboxed’ from the economic activity that typically constitutes platform scenarios. Although COVID-19 apps without a doubt further entrench the economic dominance of platforms overall, during this early period we observe a heightening of their role as ‘regulatory intermediators’ within this specific niche by connecting citizens with government services and other authorities (Busch, 2020). In the case of Google, for instance, this intermediation is heavily steered through specialised modes of editorialisation. How this role changes over time, however, should remain subject to ongoing critical observation.

Third, this repurposing of platform infrastructures for ostensibly public ends significantly intensifies the intermediation of platform companies and governments. Platform companies increasingly act as a quasi-critical global infrastructure (yet with limited public oversight), organising and managing the emerging app ecosystem across national contexts while also providing the means of distribution (stores) and production (with SDKs, but also in the case of the GAEN protocols). For their part, national governments are cast in the role of complementors, developing apps under the regulatory conditions of the platform companies, often in partnership with other actors. How governments act in this novel role varies significantly in terms of the apps they develop (app responses), their partnerships (actor types), and ongoing activity (responsivity).

Fourth, several aspects of the COVID-19 app ecosystem help legitimise the production and distribution of apps to respond to the pandemic. Within the apps’ descriptions, we detect discourses around specific digital technologies, data and privacy; with apps signalling their technical competence, awareness of data protection issues and data policies. Whether the apps actually abide by these stated claims is another question, yet it is telling that both solutionist and privacy protection discourses are mobilised within this niche for purposes of persuasion and reassurance. How these kinds of discourses might contribute to further blurring distinctions between figures of the user and citizen is a point for further inquiry.

Finally, within the context of the pandemic, mobile app platforms have facilitated heterogeneous configurations of governance, while still systematically shaping the activities of complementors. That is, despite the tightening of platform control under pandemic conditions, there exists a wide diversity of pandemic app responses that can raise different issues within distinct spheres of sovereign governance and authority. Thus, with platform companies acting as facilitators, we see a diverse range of national strategies, exceptions and outliers. While the operations of pandemic platform governance are global in scale, they can nevertheless produce scenarios where Argentinian citizens are snitching on each other through informant apps, United Kingdom citizens participate in academic symptom studies, and US citizens are uniquely denied access to the WHO information apps.

Pandemic platform governance, therefore, foregrounds how platforms have adopted and negotiated their new role as a marketplace serving commercial interests in ordinary times and additional public interests in exceptional circumstances. While precedents for this role exist in e-government and e-health apps and services, the pandemic has accelerated and intensified these dynamics. By mapping the ecosystems of available COVID-19 apps, therefore, we learn how mobile platforms have responded to the global pandemic and infodemic with additional extraordinary measures to demarcate public interest niches from the wider commercial environment of the app store. The question for policymakers and citizens is how this new governance might continue to evolve in future now that platforms have come to play a key role in mediating public values and global governmental responses to the pandemic.


Acknowledgements

Authors listed in alphabetical order. Thanks to Jason Chao and Stijn Peeters for their assistance in developing the Google Play and App Store scrapers for this study, and to Jason for uploading the Android application package (APK) files to the Internet Archive’s ‘COVID-19_Apps’ collection. Thanks also to Giovanni Lombardi, Angeles Briones, Gabriele Colombo, and Matteo Bettini (DensityDesign Lab) for their assistance with some of the graphics included in this article. Further, we thank those who participated in our data sprints during the 2020 Digital Methods Summer School (University of Amsterdam) and the ‘Exploring COVID-19 app ecologies’ (Aarhus University) and ‘Mapping the COVID-19 App Space’ workshops (Centre for Digital Inquiry). Finally, we thank the editors and reviewers, Michael Veale, Kaspar Rosager Ludvigsen, Angela Daly, and Frédéric Dubois, whose constructive and attentive comments greatly improved the article.

Data availability

The data that support the findings of this study are openly available in the Open Science Framework (OSF) at https://doi.org/10.17605/osf.io/wq3dr. Additionally, the available Android application package (APK) files of the COVID-19 Android apps covered in this study are openly available and preserved in the ‘COVID-19_Apps’ collection of the Internet Archive at https://archive.org/details/COVID-19_Apps.

References

Ada Lovelace Institute. (2020). Exit through the App Store? [Rapid evidence review]. Ada Lovelace Institute. https://www.adalovelaceinstitute.org/news/exit-through-the-app-store-uk-technology-transition-covid-19-crisis/

Ahmed, N., Michelin, R. A., Xue, W., Ruj, S., Malaney, R., Kanhere, S. S., Seneviratne, A., Hu, W., Janicke, H., & Jha, S. K. (2020). A Survey of COVID-19 Contact Tracing Apps. IEEE Access, 8, 134577–134601. https://doi.org/10.1109/ACCESS.2020.3010226

Albright, J. (2020, October 28). The Pandemic App Ecosystem: Investigating 493 Covid-Related iOS Apps across 98 Countries [Medium Post]. Jonathan Albright. https://d1gi.medium.com/the-pandemic-app-ecosystem-investigating-493-covid-related-ios-apps-across-98-countries-cdca305b99da

Amnesty International. (2020, June 15). Norway halts COVID-19 contact tracing app a major win for privacy. Amnesty International, News. https://www.amnesty.org/en/latest/news/2020/06/norway-covid19-contact-tracing-app-privacy-win/

Bedi, P., & Sinha, A. (2020). A Survey of Covid 19 Apps Launched by State Governments in India. The Centre for Internet and Society. https://cis-india.org/internet-governance/stategovtcovidapps-pdf

Blanke, T., & Pybus, J. (2020). The Material Conditions of Platforms: Monopolization Through Decentralization. Social Media + Society, 6(4). https://doi.org/10.1177/2056305120971632

Busch, C. (2020). Self-regulation and regulatory intermediation in the platform economy. In M. C. Gamito & H.-W. Micklitz (Eds.), The role of the EU in transnational legal ordering: Standards, contracts and codes (pp. 115–134). Edward Elgar Publishing.

Busch, C., Graef, I., Hofmann, J., & Gawer, A. (2021). Uncovering blindspots in the policy debate on platform power. European Commission Expert Group for the Observatory on the Online Platform Economy.

Davalbhakta, S., Advani, S., Kumar, S., Agarwal, V., Bhoyar, S., Fedirko, E., Misra, D. P., Goel, A., Gupta, L., & Agarwal, V. (2020). A Systematic Review of Smartphone Applications Available for Corona Virus Disease 2019 (COVID19) and the Assessment of their Quality Using the Mobile Application Rating Scale (MARS). Journal of Medical Systems, 44(9), 164. https://doi.org/10.1007/s10916-020-01633-3

Apple Developer. (2020). Ensuring the Credibility of Health & Safety Information. News and Updates. https://developer.apple.com/news/?id=03142020a

Apple Developer. (2021). App Store Review Guidelines. https://developer.apple.com/app-store/review/guidelines/

Dieter, M., Gerlitz, C., Helmond, A., Tkacz, N., van der Vlist, F., & Weltevrede, E. (2019). Multi-Situated App Studies: Methods and Propositions. Social Media + Society, 5(2), 1–15. https://doi.org/10.1177/2056305119846486

Eaton, B., Elaluf-Calderwood, S., Sorensen, C., & Yoo, Y. (2011). Dynamic structures of control and generativity in digital ecosystem service innovation: The cases of the Apple and Google mobile app stores. London School of Economics and Political Science. http://eprints.lse.ac.uk/47436/

French, M., Mykhalovskiy, E., & Lamothe, C. (2018). Epidemics, Pandemics, and Outbreaks. In A. J. Treviño (Ed.), The Cambridge Handbook of Social Problems (pp. 59–78). Cambridge University Press. https://doi.org/10.1017/9781108550710.005

Gasser, U., Ienca, M., Scheibner, J., Sleigh, J., & Vayena, E. (2020). Digital tools against COVID-19: Taxonomy, ethical challenges, and navigation aid. The Lancet Digital Health, 2(8), 425–434. https://doi.org/10.1016/S2589-7500(20)30137-0

Gillespie, T. (2014). The relevance of algorithms. In T. Gillespie, P. J. Boczkowski, & K. A. Foot (Eds.), Media technologies: Essays on communication, materiality, and society (pp. 167–194). MIT Press.

Gillespie, T. (2015). Platforms Intervene. Social Media + Society, 1(1). https://doi.org/10.1177/2056305115580479

Google Help. (2021). Requirements for coronavirus disease 2019 (COVID-19) apps. Play Console Help. https://support.google.com/googleplay/android-developer/answer/9889712?hl=en

Google Play. (2020, March 14). Coronavirus: Stay informed [App store]. Google Play. https://play.google.com/store/apps/topic?id=campaign_editorial_3003109_crisis_medical_outbreak_apps_cep

Gorwa, R. (2019). What is platform governance? Information, Communication & Society, 22(6), 854–871. https://doi.org/10.1080/1369118X.2019.1573914

Greene, D., & Shilton, K. (2018). Platform Privacies: Governance, Collaboration, and the Different Meanings of “Privacy” in iOS and Android Development. New Media & Society, 20(4), 1640–1657. https://doi.org/10.1177/1461444817702397

Google Help. (2021). Inappropriate Content. Policy Center. https://support.google.com/googleplay/android-developer/answer/9878810

Khan, L. M. (2018). Sources of tech platform power. Georgetown Law Technology Review, 2(2), 325–334. https://georgetownlawtechreview.org/sources-of-tech-platform-power/GLTR-07-2018/

Kitchin, R. (2020). Civil Liberties or Public Health, or Civil Liberties and Public Health? Using Surveillance Technologies to Tackle the Spread of COVID-19. Space and Polity, 24(3), 362–381. https://doi.org/10.1080/13562576.2020.177058

Klonick, K. (2018). The New governors: The People, rules, and processes governing online speech. Harvard Law Review, 131, 1598–1670. https://harvardlawreview.org/2018/04/the-new-governors-the-people-rules-and-processes-governing-online-speech/

Kuntsman, A., Miyake, E., & Martin, S. (2019). Re-thinking digital health: Data, appisation and the (im)possibility of ‘opting out’. Digital Health, 5, 1–16. https://doi.org/10.1177/2055207619880671

Levy, B., & Stewart, M. (2021). The evolving ecosystem of COVID-19 contact tracing applications [Preprint]. ArXiv. http://arxiv.org/abs/2103.10585

Liang, F. (2020). COVID-19 and Health Code: How Digital Platforms Tackle the Pandemic in China. Social Media + Society, 6(3). https://doi.org/10.1177/2056305120947657

Lin, F. (2021). Demystifying Removed Apps in iOS App Store [Preprint]. ArXiv. http://arxiv.org/abs/2101.05100

McIlroy, S., Ali, N., & Hassan, A. E. (2016). Fresh apps: An empirical study of frequently-updated mobile apps in the Google Play store. Empirical Software Engineering, 21(3), 1346–1370. https://doi.org/10.1007/s10664-015-9388-2

Milan, S., Treré, E., & Masiero, S. (Eds.). (2020). COVID-19 from the Margins: Pandemic Invisibilities, Policies and Resistance in the Datafied Society. Institute of Network Cultures. https://networkcultures.org/blog/publication/covid-19-from-the-margins-pandemic-invisibilities-policies-and-resistance-in-the-datafied-society/

Morris, J. W., & Murray, S. (2018). Appified: Culture in the Age of Apps. University of Michigan Press.

Pichai, S. (2020, March 6). Coronavirus: How we’re helping [Blog post]. The Keyword. https://blog.google/inside-google/company-announcements/coronavirus-covid19-response/

Privacy International. (2021). Fighting the Global Covid-19 Power-Grab. Privacy International Campaigns. https://privacyinternational.org/campaigns/fighting-global-covid-19-power-grab

Pybus, J., & Coté, M. (2021). Did you give permission? Datafication in the mobile ecosystem. Information, Communication & Society. https://doi.org/10.1080/1369118X.2021.1877771

Rieder, B., & Hofmann, J. (2020). Towards platform observability. Internet Policy Review, 9(4). https://doi.org/10.14763/2020.4.1535

Sabbagh, D., & Sinha, A. (2020). UK abandons contact-tracing app for Apple and Google model. The Guardian. https://www.theguardian.com/world/2020/jun/18/uk-poised-to-abandon-coronavirus-app-in-favour-of-apple-and-google-models

Samhi, J., Allix, K., Bissyandé, T. F., & Klein, J. (2021). A First Look at Android Applications in Google Play related to Covid-19 [Preprint]. ArXiv. http://arxiv.org/abs/2006.11002

Statcounter. (2021). Mobile Operating System Market Share India. Statcounter Global Stats. https://gs.statcounter.com/os-market-share/mobile/india

Suzor, N. (2018). Digital constitutionalism: Using the rule of law to evaluate the legitimacy of governance by platforms. Social Media + Society, 4(3). https://doi.org/10.1177/2056305118787812

Taylor, L., Sharma, G., Martin, A., & Jameson, S. (Eds.). (2020). Data Justice and COVID-19: Global Perspectives. Meatspace Press. https://shop.meatspacepress.com/product/data-justice-and-covid-19-global-perspectives

Tilson, D., Sorensen, C., & Lyytinen, K. (2012). Change and Control Paradoxes in Mobile Infrastructure Innovation: The Android and iOS Mobile Operating Systems Cases. 2012 45th Hawaii International Conference on System Sciences, 1324–1333. https://doi.org/10.1109/HICSS.2012.149

Tiwana, A. (2015). Platform Desertion by App Developers. Journal of Management Information Systems, 32(4), 40–77. https://doi.org/10.1080/07421222.2015.1138365

Tiwana, A., Konsynski, B., & Bush, A. A. (2010). Research Commentary—Platform Evolution: Coevolution of Platform Architecture, Governance, and Environmental Dynamics. Information Systems Research, 21(4), 675–687. https://doi.org/10.1287/isre.1100.0323

Tolomei, S. (2020, April 6). Google Play updates and information: Resources for developers. [Blog post]. Android Developers Blog. https://android-developers.googleblog.com/2020/04/google-play-updates-and-information.html

Tsinaraki, C., Mitton, I., Dalla Benetta, A., Micheli, M., Kotsev, A., Minghini, M., Hernandez, L., Spinelli, F., & Schade, S. (2020). Analysing mobile apps that emerged to fight the COVID-19 crisis (JRC 123209). European Commission. https://ec.europa.eu/jrc/communities/en/community/citizensdata/document/analysing-mobile-apps-emerged-fight-covid-19-crisis

van Dijck, J., Poell, T., & de Waal, M. (2018). The Platform Society (Vol. 1). Oxford University Press. https://doi.org/10.1093/oso/9780190889760.001.0001

Veale, M. (2020). Sovereignty, privacy and contact tracing protocols. In L. Taylor, G. Sharma, A. Martin, & S. Jameson (Eds.), Data Justice and COVID-19: Global Perspectives (pp. 34–39). Meatspace Press.

Wang, H., Li, H., Li, L., Guo, Y., & Xu, G. (2018). Why are Android Apps Removed From Google Play? A Large-Scale Empirical Study. 2018 IEEE/ACM 15th International Conference on Mining Software Repositories (MSR), 231–242.

Wang, L., He, R., Wang, H., Xia, P., Li, Y., Wu, L., Zhou, Y., Luo, X., Guo, Y., & Xu, G. (2020). Beyond the Virus: A First Look at Coronavirus-themed Mobile Malware [Preprint]. ArXiv. http://arxiv.org/abs/2005.14619

W.H.O. (2020a). Virtual press conference on COVID-19. World Health Organization. https://www.who.int/docs/default-source/coronaviruse/transcripts/who-audio-emergencies-coronavirus-press-conference-full-and-final-11mar2020.pdf

W.H.O. (2020b, August 25). Immunizing the public against misinformation. World Health Organization. https://www.who.int/news-room/feature-stories/detail/immunizing-the-public-against-misinformation

Zarocostas, J. (2020). How to fight an infodemic. The Lancet, 395(10225), 676. https://doi.org/10.1016/S0140-6736(20)30461-X

Zittrain, J. (2008). The Future of the Internet—And How to Stop It. Yale University Press.

Footnotes

1. While the Google-Apple Exposure Notification (GAEN) protocols were introduced on 20 May 2020, we found that only 8 out of the 410 Android apps in our source set included the GAEN API in their AndroidManifest.xml file by November.

2. While not discussed in this article, the collected data and information about the permissions requested by each app is openly available in the Open Science Framework (OSF).

3. Google’s mobile platform Android has a 71.18% market share worldwide, followed by Apple’s iOS with 28.19% (Statcounter, 2021). As a consequence of the platform companies tightly connecting their app stores to their mobile operating systems, Google’s Play store (except in China) and Apple’s App Store have become the key distribution channels of apps worldwide.

4. For the purposes of this article, we interpret the ‘developer name’ listed on the app store details page as the actor responsible for the development of that app. However, the actor listed as the ‘developer’ on the app details page is not necessarily the same as the actual developer of that app (e.g. when the ‘developer’ merely listed the app in the app store, without having developed it).

5. AppBrain API specification, https://www.appbrain.com/info/help/api/specification.html

6. The app store scrapers have been developed by the App Studies and Digital Methods Initiatives and are available at: http://appstudies.org/tools/.

The new frontier of platform policy


Introduction

A great deal of academic literature has considered the policies and enforcement actions of online platforms with respect to content moderation (Klonick, 2018; Gillespie, 2018; Suzor, 2019; Douek, 2021), but there remains little, if any, literature addressing platform policies and enforcement actions targeting off-platform harassment and abuse by users. While few platforms explicitly have policies of this nature, live-streaming platform Twitch has become a pioneer in this space, having developed and enforced such a policy since 2018 (Twitch, 2018). 1 This policy was most notably put into effect during the summer of 2020 when the gaming world experienced an outpouring of sexual assault and harassment allegations that reached all corners of the industry, from major publishers and developers to broadcasters to community event organisers (Martens, 2020; Hall, 2020). While mainstream news media primarily focused on allegations about powerful men within large game development companies, a simultaneous outpouring of stories concerned Twitch streamers at various levels of popularity (D’Anastasio, 2020). In response, Twitch indefinitely suspended a number of streamers that had been identified in public sexual harassment and assault allegations (Kastrenakes, 2020; Hernandez, 2020).

While these were not the first enforcement actions taken by Twitch for off-platform abuse, the comparatively large number of bans over a short period of time associated with a wave of sexual abuse revelations suggests Twitch’s policy may represent a new frontier of platform policy: enforcing policies against off-platform abuse. Twitch’s active enforcement of policies against off-platform abuse goes beyond issues of content moderation to raise questions about the competence and accountability of platforms in imposing consequences on users for their behaviour regardless of where it occurs. In this article “abuse” is meant to signify a broad category of behaviour, whether online or offline, that targets and harms individuals, such as assault, harassment, bullying, communicating threats, or disclosing personal information about an individual, but does not include, for example, membership in a criminal organisation or terrorist group, or publishing hate speech or dangerous misinformation, even if some of the same concerns raised here may apply.

Policies against off-platform abuse may well be a positive step in producing healthier communities and ensuring consequences for harmful acts regardless of where they occur. In Twitch’s case, such policies follow long-standing problems with misogyny and harassment levelled at women on the platform (Taylor, 2018; Kastrenakes, 2020), and Twitch has stated that such policies are aimed at providing an environment that feels safe for all users (Twitch, 2018). Nonetheless, policies that undertake to investigate and sanction off-platform abuse, even in the absence of legal sanction or widespread reporting, raise problems not present in on-platform content moderation and make it harder to balance the positive aims of such policies with accountability and fairness. These problems do not mean that such policies should not exist; rather, they suggest that where such policies are implemented, a significant commitment must be made to balance the goals of the policy with accountability and fairness in its enforcement. This article will discuss three such problems with respect to policies against off-platform abuse, like those of Twitch, with a special focus on sexual assault following the events of the summer of 2020.

First, platforms have little experience in setting policy for off-platform abuse and may lack competence in investigating and verifying such behaviour. Unlike with on-platform content moderation, platforms lack special access to the facts of the alleged conduct. Instead, they must rely on evidence provided by third parties, yet they lack expertise in obtaining, verifying, and weighing such evidence, which may increase the likelihood of error or undermine the effectiveness of a policy. Second, enforcing off-platform policies for abuse will often require engaging with highly sensitive information and events. Although not all investigations follow a direct report to Twitch (many of those in the summer of 2020 began after public allegations rather than reports), many will nonetheless be based on direct reports. This heightens the need to be clear about the decision-making and investigatory process following such reports, and the privacy-invasive nature of such investigations heightens the dangers of failing to protect victim or complainant privacy and safety. Third, the impact of this kind of enforcement on sanctioned users is also potentially greater than in cases of content moderation enforcement actions. As the bans in the summer of 2020 demonstrate, a common enforcement action taken for off-platform behaviour is an indefinite account suspension, which can have significant and long-lasting impacts on a user’s social connectedness, wellbeing, and ability to earn an income (Gillespie, 2018, p. 176). Where enforcement follows a public accusation, it may both increase public attention to the allegations and be seen as a confirmation of those allegations, which could exacerbate public stigma.

These three elements of enforcement for off-platform abuse exacerbate the core problems of accountability and transparency that plague social media content moderation (Klonick, 2018; Balkin, 2018; Suzor, 2019; Douek, 2019) by increasing the challenge of balancing the objectives of the policy with heightened needs for due process, transparency, and confidentiality in enforcement given the increased stakes and risk of errors.

This article does not aim to fully resolve these tensions, although it suggests some areas where Twitch’s policies could be more transparent and accountable than at present. It also points to the need for considerable future public discussion and research in this area as it remains possible that policies of this nature could become more common among platforms, especially those that create asymmetric relationships between content creators and their audiences. Crafting the proper balance between the safety-based objectives of policies against off-platform abuse and fairness and accountability in enforcement will require a broader public discussion about the proper role of platforms in sanctioning such behaviour. Regulators may want to take cognisance of the important difference between policies aimed at on-platform behaviour and those aimed at off-platform behaviour in fashioning regulations that impose procedural accountability obligations, including the EU’s Digital Services Act (Proposal DSA).

This article proceeds as follows. Section 1 reviews Twitch’s policy with respect to off-platform abuse and its history and practice of enforcement. Section 2 discusses the unique nature of policies against off-platform abuse, including the significant potential impacts upon both the reporting and reported individuals and the complexity and uncertainty introduced by the need to make factual determinations based on external evidence. Section 3 briefly explores the potential for policies against off-platform abuse to spread to other kinds of social media platforms. The article concludes by considering how lawmakers might respond to these challenges when crafting accountability requirements for platforms.

1. Twitch and policies against off-platform abuse

a. Policy background

Launched in 2011 as a spinoff of the “lifestreaming” website Justin.tv, Twitch is a video-streaming platform originally focused on the live-streaming of video game play, although it has since branched out to numerous other content areas. Twitch is now the 35th most visited site on the internet at the time of writing (Alexa, n.d.) and commands the largest share of the online live-streaming market (Iqbal, 2021). Content on Twitch is, like YouTube, provided by third-parties (who are often individual users) rather than created by Twitch itself (Taylor, 2018). Almost anyone can create an account and, provided they have access to streaming software and the proper hardware, begin live-streaming on their own “channel”. The company was purchased by Amazon in 2014, and Twitch’s current business model relies primarily on advertising embedded in streams as well as optional subscriptions to either Twitch as a whole (via Twitch Turbo) or individual channels (Iqbal, 2021).

Streamers on Twitch are able to earn money through the platform’s Affiliate and Partner programmes, with Partner being the more lucrative of the two. Partners and Affiliates are able to earn money from channel subscriptions and advertising on their streams, as well as from a form of payment from viewers to streamers known as “Bits”. Popular streamers may also have separate contracts with Twitch that may provide different benefits and terms in exchange for streaming exclusively on the platform (Gilbert, 2020). Twitch has over two million broadcasters and over 27,000 Partners (Twitch, n.d.-d). Twitch thus supports a sizable community of streamers, many of whom earn their entire living from streaming on the platform (Taylor, 2018; Wiltshire, 2019).

All streamers and users on Twitch, including Partners and Affiliates, are required to abide by Twitch’s Community Guidelines (Twitch, n.d.a). As Twitch is a live-streaming service that primarily broadcasts ephemeral content, unlike text-based platforms it does not focus its enforcement of its Community Guidelines on the removal of content. While it uses machine-learning tools to prevent some content contrary to its Community Guidelines from being broadcasted, its primary method of enforcement is at the account-level (Twitch, 2020c). Account enforcements can range from warnings to temporary suspensions to indefinite suspensions. According to Twitch’s first ever transparency report, concerning the year 2020, Twitch carried out over 1.1 million account enforcements during the second half of 2020 alone (Twitch, 2020c). However, the report did not specify how many of these enforcements were merely warnings compared with account suspensions.

Twitch made the decision in February 2018 to make enforcement for certain off-platform behaviours, including assault and harassment, an express part of its enforcement mandate in order to better protect its community (Twitch, 2018). This meant the inclusion of a relatively simple line in its policies that stated “[w]e may take action against persons for hateful conduct or harassment that occurs off Twitch services that is directed at Twitch users” (Twitch, n.d.a). Beyond potentially applying Twitch’s broader hateful conduct and harassment policy to the off-platform behaviour of its users, when or how this policy would be enforced remained unclear.

Twitch stated that the reason for going after off-platform conduct was that “ignoring conduct when we are able to verify and attribute it to a Twitch account compromises one of our most important goals: every Twitch user can bring their whole authentic selves to the Twitch community without fear of harassment” (Twitch, 2018). It’s understandable that Twitch would want to prevent giving a platform to perpetrators of abuse and harassment, which may exacerbate the harms experienced by victims. This is especially true for a company at the centre of a gaming culture that has often been observed to be exclusionary and hostile to women’s presence and involvement (Taylor, 2018). As Twitch CEO Emmett Shear later stated in an internal company email following the 2020 sexual harassment and abuse allegations, the company wanted to “set a higher standard for ourselves and those with power and influence on our service” (Shear, 2020).

It’s unclear how often this policy has been enforced since 2018, as Twitch does not generally comment on enforcement actions. Twitch’s 2020 transparency report contained no information about enforcement of policies for off-platform behaviour (Twitch, 2020c). Details of its enforcement are thus limited to sporadic news reports that, by their nature, skew towards more popular streamers. 2

Despite some earlier cases, it wasn’t until the summer of 2020 that the application of Twitch’s off-platform policy to in-person interactions was truly tested. Over that summer, a “#MeToo” movement swept across the gaming industry, with an overwhelming number of individuals coming forward with personal stories of sexual harassment and assault in all areas of the industry, from game development to tournament organising to streaming (Schreier, 2020). While many stories did not identify perpetrators, many did, and dozens of men, and some women, were named as perpetrators. One streamer, Jessica Richey, created a Google Docs document that categorised over 400 personal stories (Lorenz & Browning, 2020).

In June and July, Twitch indefinitely suspended the accounts of several prominent streamers that had been identified in these stories without public comment. These included, for example, Gonzalo "ZeRo" Barrios, a major personality and former top competitor in the fighting game community. Barrios was banned shortly after several women within the community publicly alleged that Barrios had engaged in sexual misconduct, including sending sexually explicit messages to minors, and after Barrios admitted to some of that conduct (Galiz-Rowe, 2020). Others banned at the time included popular streamer Brad “BlessRNG” Jolly; high-level fighting-game competitor Nairoby ‘Nairo’ Quezada; and at least several others, including those going by the monikers “iAmSp00n”, “SayNoToRage”, “DreadedCone”, “Wolv21”, and “WarwitchTV” (Hernandez, 2020; Kastrenakes, 2020; Walker, 2020). As Twitch does not comment on individual enforcements, it is not known if these actions followed reports to Twitch, or whether Twitch took action based solely on public disclosures.

On 24 June 2020 Twitch provided a general response to the revelations and allegations in a short blog post, which expressly referred to investigations for behaviour that took place off of the Twitch platform:

We are reviewing each case that has come to light as quickly as possible, while ensuring appropriate due diligence as we assess these serious allegations. We’ve prioritised the most severe cases and will begin issuing permanent suspensions in line with our findings immediately. In many of the cases, the alleged incident took place off Twitch, and we need more information to make a determination (Twitch, 2020b).

Since then, Twitch has not made any public statements concerning its investigations or processes with respect to these cases, nor has it made any public statements explaining its actions in certain cases. Twitch did not elaborate on what ‘due diligence’ entailed, and it did not publicly provide reasons for why numerous other Twitch streamers accused of serious misconduct have not received account suspensions. It is also not clear if any of the accused were provided with the opportunity to respond.

This level of opacity is not unusual for the company. Twitch had already been the subject of much public commentary alleging it is inconsistent in content policy enforcement and fails to provide sufficient reasons for its enforcement decisions, even to those facing penalties (Asarch, 2019; Geigner, 2019). Indeed, Twitch’s lack of clarity and consistency in content moderation was addressed directly by Twitch CEO Emmett Shear during the company’s annual convention, TwitchCon, in 2019, where he promised to increase transparency around policy enforcement processes broadly. He stated that “[n]o matter how good your decision-making process is, if people can’t understand it they can’t fully trust it. We’re going to really focus on increasing that transparency so people can trust the process” (Shanley, 2019). However, while Twitch has improved transparency in some ways, such as through its new transparency reports, much remains hidden. It remains difficult or impossible to find information concerning Twitch’s internal policy development process, the size or operating procedure of its content moderation team, or how often it indefinitely suspends accounts.

b. The current policy

In an apparent response to the challenges of enforcing its policy that arose during the summer of 2020, Twitch significantly updated its policy regarding off-platform behaviour in April of 2021. Twitch announced that it had hired an unnamed outside law firm with expertise in workplace and campus sexual assault cases to carry out its investigations (Twitch, 2021). Whereas the policy previously contemplated offline harassment or hate broadly, this update limited the policy to enforcement against relatively egregious offline misbehaviours including:

  • Deadly violence and violent extremism
  • Terrorist activities or recruiting
  • Explicit and/or credible threats of mass violence […]
  • Carrying out or deliberately acting as an accomplice to non-consensual sexual activities and/or sexual assault
  • Sexual exploitation of children […]
  • Actions that would directly and explicitly compromise the physical safety of the Twitch community [and]
  • Explicit and/or credible threats against Twitch (Twitch, n.d.-b).

In the blog post announcing the policy update, Twitch explained that “we only take action when there is evidence, which may include links, screenshots, video of off-Twitch behavior, interviews, police filings or interactions, that have been verified by our law enforcement response team or our third party investigators” (Twitch, 2021). However, official law enforcement or criminal justice action is not necessary for Twitch to enforce its policy. Instead, the company will take action where there is a “preponderance of evidence” that the behaviour took place. According to one reporter, Twitch stated that those under investigation will have an opportunity to respond (Newton, 2021), although that does not appear in the blog post or official policy. Additionally, contrary to the previous version of the policy, the new policy also states that it may take enforcement action for violations that target non-Twitch users. The policy will also apply to acts that occurred even before the policy violator was a Twitch user (Twitch, n.d.-b).

Twitch stated that its enforcement actions may include account suspensions, including “indefinite” suspensions. It’s not entirely clear if there is a meaningful distinction between an indefinite and a permanent suspension. In the summer of 2020, Twitch said it had been issuing “permanent” suspensions in response to its investigations (Twitch, 2020b). Presumably these terms are largely synonymous, and the term “indefinite” is used only to leave open the vague possibility that a suspension could be rescinded.

It’s also not clear whether Twitch will now only begin an investigation following a direct complaint, but a channel to report directly to Twitch’s new Off-Service Investigations Team was made available (Twitch, n.d.-b).

It should be noted that, while the high-profile bans for which any information is available have been directed at streamers, the policy is not limited to streamers, and theoretically all users, whether streamers or viewers, are subject to the policy. Whether any Twitch user that is not a regular streamer has been suspended under the off-platform policy is unknown, as Twitch neither publicly discloses any information about these suspensions, nor would such a user’s suspension likely attract public notice.

While it is understandable that Twitch will not provide information on individual enforcement decisions due to privacy and confidentiality concerns (Twitch, 2021), there is also little information on what either those who report violations or those under investigation can expect. Twitch did not provide information on how evidence will be evaluated or verified, what degree of involvement either party will have in the decision, or the availability of appeals if new evidence comes to light.

The policy as it stands is perhaps limited compared to its earlier incarnation, as the behaviours listed here appear to correspond with widely criminalised behaviours, whereas the previous policy could presumably apply to many non-criminal behaviours. It should be noted that the list of prohibited off-platform behaviours provided by Twitch and reproduced above is not exhaustive, and Twitch suggested that this policy may expand to include other behaviour beyond those listed currently in the policy (Newton, 2021).

Some of the current categories of prohibited behaviour in the new policy correspond to some existing policies that consider off-platform behaviour of major social media companies like Twitter and Facebook, such as with respect to engaging in terrorist activities or recruiting (Twitter, n.d.; Facebook, n.d.-a). However, the focus of this article is those policies aimed at discrete off-platform abuse, especially sexual assault, that may require the platform to independently investigate. It is this kind of policy that puts Twitch, as Twitch Chief Operating Officer Sara Clemens said when discussing this policy, in “uncharted territory” (Newton, 2021). At the time, she stated she was unaware of any other platform with a similar policy (Newton, 2021).

Indeed, what makes Twitch’s policy unusual and raises unique accountability concerns is that it targets off-platform abuse with identifiable victims and that it undertakes to investigate behaviour even in the absence of law enforcement action, judicial action, or widespread reporting. The existence of these concerns does not depend on whether the conduct the policies target takes place on another platform or offline, nor on whether that conduct rises to the level of a criminal offense.

The policy, of course, has many potential benefits. Most obviously, as Twitch has stated, it is aiming to ensure a safe environment for its users and to protect individuals from abuse and harassment (Twitch, 2018). In this sense, the threat of policy enforcement serves as a deterrent to misbehaviour and enforcement of the policy serves as a means of incapacitation to protect Twitch users from being victimised on the platform, even if it cannot directly stop off-platform harm from occurring.

Perhaps more important than either of these functions is the expressive value of the policies: how they signal social norms and set expectations and perceptions. As Cass Sunstein notes with respect to law, the expressive value of rules lies in their ability to change behaviour and norms and to set the expectations of people (Sunstein, 1996). As mentioned earlier, Twitch has long-standing problems with harassment targeting women and other minority groups on the platform (Taylor, 2018; Kastrenakes, 2020). The policy may thus communicate that Twitch is a less abusive and more welcoming environment for all users and potential users. Such a communication of norms may set certain expectations of the environment on Twitch, which may itself change social behaviour (Reynolds, Subašić, & Tindall, 2015). Moreover, even to the extent that it does not change behaviour, it serves as a signal that Twitch is taking action and, thus, may create the perception that the platform is safer for users. This naturally aligns with Twitch’s own interest in continuing to grow the platform. There is presumably far more potential for growth in a service that appears inviting to a broad spectrum of people than one that appears to prioritise a small subset of abusive individuals.

This is especially true for a live-streaming service. Twitch is somewhat unlike other social media platforms in its affordances and modes of engagement. While the affordances of platforms such as Facebook and Twitter superficially create a rough parity between users by providing all users with the same basic set of tools with which to interact, at any given time Twitch inherently divides users between streamers and audience members, creating a relationship more akin to that between a broadcaster and the public. This asymmetric relationship is exacerbated by the fact that popular Twitch streamers can have concurrent viewership in the tens or even hundreds of thousands, and can earn lucrative incomes from the platform. While Twitch’s policies apply to any user regardless of whether that user is a streamer, it follows that higher standards of behaviour might be expected on a service that provides some streamers a privileged position of relative power and influence, and where broadcasters may be seen as representatives of the platform. By failing to prevent perpetrators of harmful conduct from continuing to broadcast (or at least failing to appear to do so), Twitch may be seen as rewarding and enabling such conduct and allowing further traumatisation of victims.

Finally, Twitch presumably also wants to maintain a space in which advertisers feel comfortable promoting their products and services on the platform. Advertising plays a key role in influencing the content policies of major platforms (Caplan & Gillespie, 2020; Klonick, 2018, p. 1627), and advertisers have increasingly pressured social media platforms to moderate harmful or controversial content (Caplan & Gillespie, 2020; Hsu & Lutz, 2020). As Twitch noted in its transparency report, “[a]dvertising is an important part of Twitch, and brands that advertise on Twitch want to know how we are making our users safer, and promoting a more positive and less harmful environment” (Twitch, 2020c). Ensuring brand safety for advertisers is likely another key motivator of Twitch’s policy against off-platform abuse.

Thus, there are many reasons that policies of this nature can play an important role on Twitch and other platforms. The argument of this paper is not that such policies should not exist; rather it is that they raise new challenges that exacerbate accountability and transparency concerns, and that additional steps may be necessary to ensure accountability to users. I now turn to a discussion of some of those challenges.

2. Special concerns with policies against off-platform abuse

The academic platform literature over the past several years has identified numerous accountability deficits of traditional platform content moderation (Bloch-Wehba, 2019; Balkin, 2018; Suzor, 2019; Douek, 2019). As content moderation is increasingly being understood as a form of governance (Klonick, 2018; Gorwa, 2019) that typically attempts to balance the interests of users against a variety of content harms, an increasing consensus of academic commentators recognises that obligations with respect to transparency, error-correction, and fair decision-making processes attach to the development and enforcement of content moderation policy (Douek, 2019; Suzor, 2019; Bunting, 2018). Mark Bunting refers to these principles as “procedural accountability”, which aims to “encourage intermediaries to embed a concern for all the relevant impacts of their governance into the processes by which that governance is designed and executed, without specifying what the right rules may be for their particular context” (2018, p. 176). Similarly, Hannah Bloch-Wehba calls for the application of global administrative law norms, including those of transparency, due process, and public participation, to content governance (2019).

Such norms include those based on the rule of law and due process or procedural fairness, such as having clear policies, consistent enforcement, publicly transparent enforcement, notice to those impacted by a decision, the right for those impacted to provide evidence and argument, provision of reasons for a decision, and the opportunity to appeal that decision (Fuller, 2000; Benvenisti, 2014). These norms perform a variety of functions, such as reducing error, mitigating arbitrariness, protecting fundamental rights such as freedom of expression, 3 and promoting public legitimacy and trust in the policy enforcement process (Douek, 2019; Suzor, 2019, pp. 144-7). A central principle of natural justice and due process (or procedural fairness) is that the greater the impact of a decision upon an individual, the greater the need for attendant procedural safeguards (R v. Secretary of State for the Home Department, 1993; Mathews v. Eldridge, 1976; Baker v. Canada, 1999). For example, it’s an internationally-accepted principle of administrative law and natural justice that individuals be entitled to participate in decisions that affect their lives (Benvenisti, 2014, p. 161). The opportunity of the person affected by a decision to provide evidence and argument reduces error rates by ensuring that errors can be captured and all evidence considered (Mullan, 2001, p. 148; R v. Secretary of State for the Home Department, 1993), while also playing a role in increasing trust in the process for all impacted parties (Mullan, 2001, p. 148; Douek, 2019).

In the case of indefinite account suspensions (a common enforcement action for violations of Twitch’s off-platform policy), the impacts on users can be great. As Gillespie explains, “removal from a social media account matters. For a user, being suspended or banned … can have real consequences—detaching her from her social circle and loved ones, interrupting her professional life, and impeding her access to other platforms” (2018, p. 176). In the case of Twitch, it can also disrupt or terminate an individual’s primary source of income. Such decisions should not, therefore, be made lightly.

Numerous civil society initiatives, including those of the Santa Clara Principles, Ranking Digital Rights, and the Electronic Frontier Foundation, aim to ensure that content moderation by platforms, and especially large platforms, meet basic standards based on rule of law and due process principles (“Santa Clara Principles,” n.d.; Ranking Digital Rights [RDR], 2021; Gebhart, 2019), and a number of platforms have made significant commitments towards meeting these requirements (RDR, 2021). Additionally, governments are looking to legislate forms of procedural accountability for platforms: the UK Online Harms White Paper’s approach to legal but harmful content is to require companies to enforce their terms consistently and transparently, and to provide redress mechanisms (UK Department for Digital, 2020). Meanwhile, the EU’s proposed Digital Services Act prescribes graduated procedural requirements for online intermediaries depending on their type and size, including presenting clear public policies, providing reasons for enforcement actions, and offering appeal mechanisms (Proposal DSA). However, steps to improve procedural accountability come with numerous costs and trade-offs (Stewart, p. 192), and it follows that the procedural design of any given platform decision should be based upon a balancing of the impact of the decision, the risk of error, and other costs. This is certainly true for platforms engaging in policy enforcement, where heightened fairness entails financial costs, delays in responding to potential harm, a lack of flexibility, and possible impacts on user privacy, among others (Douek, 2019). The balance is difficult to strike with respect to content moderation; it is certainly more difficult with respect to policies aimed at off-platform abuse.

This is significant, because while the same concerns that motivate accountability for content moderation are present, including consistency and confidence in enforcement, remedying errors, and user interests, policies aimed at off-platform abuse raise at least three additional concerns. These additional concerns make it significantly more difficult to establish good policy, minimise error in enforcement, minimise the negative impacts of error, and establish public trust and the trust of victims.

a. Defining and verifying off-platform behaviour

The first concern relates to the competence of platforms: platforms typically apply their policies with respect to content that is carried on their services. However, policies such as Twitch’s may apply to behaviour that neither manifests as content nor occurs on the platform, such as sexual assault.

The most important ramification of this is the factual uncertainty it generates. Platform content moderation typically does not face the problem of determining whether the behaviour in question actually occurred. Social media platforms often have full visibility over all content on their platforms, and can directly tie any user-generated content to the account holder that created it. Such platforms have access to the content in its entirety, avoiding the risk of relying only on selectively chosen or edited material.

By comparison, when dealing with off-platform behaviour, the platform must ascertain what behaviour actually occurred, whether it differed from the reported accounts, and any surrounding context. In this, the platform faces considerable evidentiary problems of the kind more commonly faced by trial courts, owing to its inherent reliance on external evidence not in its possession. Such decisions demand acquiring evidence and determining the veracity of that evidence. This increases decision-making complexity by requiring both investigation and evidentiary assessment: extra steps in the process of policy enforcement in which platforms like Twitch presumably lack expertise. In some cases, they may also lack access to the relevant evidence, or, as in Twitch’s case, need cooperation from law enforcement or other platforms to complete an investigation (Twitch, 2021).

Consider, for example, the case of professional fighting game player Nairoby “Nairo” Quezada. Quezada was indefinitely suspended from Twitch in September of 2020 following accusations that he had engaged in a sexual relationship with a minor and after releasing a statement apologising for that conduct (Walker, 2020). However, shortly after, some accounts came to light that called into question the credibility of the accusations (Michael, 2020). Quezada then released a statement in October of 2020 denying the allegations against him, and alleging that he had, in fact, been the victim of sexual assault rather than the perpetrator. He claimed he hadn’t understood what had happened to him at the time, and only realised that he was the victim following therapy (Quezada, 2020). He claims to have since filed an appeal with Twitch to restore his account (Quezada, 2021).

The full details of this situation remain obscure, and few credible news reports are available on the subject. Twitch, as per its policy, did not comment on the suspension, nor has it commented on the appeal. There is thus no way of knowing what evidence Twitch relied upon in issuing the suspension. If it was on the basis of Quezada’s initial confession, it raises the question of what should happen when such a confession is retracted. Indeed, what evidence would be necessary upon appeal to have his account restored?

Regardless of the truth of either the initial allegations or Quezada’s claims, the events surrounding Quezada’s suspension reflect the considerable uncertainty introduced by the investigatory process and the reliance on third-party statements and reports. This increases the potential for error in decision-making by introducing new opportunities for error to arise. This is especially the case where, as with Twitch, the standard of proof is the “preponderance of evidence” (Twitch, 2021): a standard used in common law civil litigation (although often called the “balance of probabilities” standard in British English) that requires sufficient evidence that the prohibited behaviour was more likely to have occurred than not. Such a standard has no direct Continental European civil law equivalent as civil law typically requires the conviction of the judge (Schweizer, 2016, p. 218). Regardless, it appears Twitch will apply the preponderance of evidence standard globally. This relatively low standard is balanced in common law civil courts by considerable procedural safeguards, as well as the availability of appeals (Harper et al., 2017).

Indeed, where the potential for error is greater, the need for error reduction and correction mechanisms is greater (Gertmann, 2018). In addition to acquiring outside expertise, error reduction can be aided by ensuring the opportunity for all parties involved to provide evidence and be heard by an impartial decision-maker (Mullan, 2001). Error correction mechanisms for platforms typically involve the opportunity to appeal an adverse decision should the initial decision appear faulty on some basis (Douek, 2019). In the case of Twitch, the extent to which the accused can participate in the decision remains unclear, and while appeals for account suspensions are available (Twitch, n.d.-c), the details of the appeal procedure remain hidden.

Twitch has certainly turned its mind to the extra evidentiary problems created by policies aimed at off-platform misbehaviour: Twitch stated that they have hired an outside law firm with experience investigating workplace and campus sexual assault to assist in investigations (Twitch, 2021). However, both the name of the firm and the investigatory process remain secret. While it may be preferable that Twitch is leveraging existing expertise to mitigate the evidentiary problems, it remains impossible to know how much such expertise can actually mitigate these concerns without considerably more information. Further, given the heightened possibility of error, it may be reasonable for Twitch to offer robust due process to the accused. This may include a clear process and evidentiary standard for appeal as well as the opportunity to be heard and provide contrary evidence once the initial investigation is complete. Should that process not exist at present, taking steps to implement it would not appear to undermine the goals of Twitch’s policy.

In addition to the considerable evidentiary problems such policies raise, policies of this nature also make it harder to determine their scope and content, as they may demand that platforms determine what kinds of non-speech behaviour warrant enforcement action. While platforms like Twitch have significant experience enforcing policies against various kinds of speech and content, some of which may apply to off-platform behaviour, they presumably have little experience determining appropriate responses to off-platform activity. Twitch has mitigated this problem by limiting enforcement for off-platform activity to behaviour that generally amounts to criminal activity, but it will become a significant issue should Twitch expand this policy to other misbehaviours, as it has suggested is likely (Newton, 2021).

Indeed, sanctions imposed against off-platform behaviour implicate different underlying normative concerns. When platforms moderate content, they can prevent harms from occurring directly through their moderation actions. This is especially true where action is taken against content ex ante. For example, with respect to hate speech or harassing content, if a platform removes that content or reduces its spread, it directly decreases the harms arising from that content by preventing users from coming into contact with the material or by ending their ongoing exposure to it. While the normative values of content moderation may include those familiar to criminal justice systems, such as rehabilitation (Jhaver et al., 2019) and deterrence (Srinivasan et al., 2019), as well as incapacitation in the case of account suspensions, the benefit of immediate harm reduction is sufficient on its own to justify moderation activities. In contrast, policies aimed at off-platform behaviour are less likely to directly mitigate or prevent harms flowing from the incident(s) giving rise to the enforcement action. Platforms’ enforcement actions here can only ever be ex post, and they have little direct control over the extent of any harm caused by the precipitating incident. Instead, their justification might lie much closer to criminal justice principles (Cohen, 1981), such as deterrence by warning of penalties for poor behaviour, denunciation by sending a signal to the community about what behaviour is not tolerated, and incapacitation by preventing an offender from committing future harms on the platform and re-traumatising victims thereon. Indeed, incapacitation appears to be one of Twitch’s primary motivations for its off-platform behaviour policy (Twitch, 2021). For victims, enforcement may also serve a retributive or vindicating role (Heydon & Powell, 2016). The potential for differing underlying values behind off-platform policies suggests that any such policies may best be developed separately from those aimed at on-platform behaviour, with considerable community input into what is necessary to create a safe environment.

b. Victims’ and complainants’ interests

A critical concern raised by policies targeting off-platform abuse is the heightened need to protect the interests of victims and/or complainants. As demonstrated in the summer of 2020, many instances of policy-violating off-platform conduct will involve highly sensitive and potentially traumatising events, such as sexual assault. In these cases, serving victims’ and complainants’ interests will require both protecting their privacy and ensuring that, where reports are made directly to Twitch, they are received and reviewed through a clear process that promises accountability. This is especially the case since, as the new policy makes clear, those impacted by the off-platform abuse may not be Twitch users and may have little understanding of the platform.

While it does not appear that all investigations will be triggered by direct reports, since previous suspensions were issued in response to public allegations, Twitch’s new policy appears to highlight a direct reporting option (Twitch, n.d.-b), and it thus seems that enforcement actions will often come in response to such reports. Reports may also, presumably, be initiated by those who were not themselves targets of the abuse, creating a potential distinction between victims and complainants.

Where a complaint is made directly to Twitch, fully protecting victims’ or complainants’ interests in cases of off-platform behaviour can prove difficult because, unlike with decisions made for on-platform conduct, the potential reliance on victim statements and evidence provided by complainants will typically be privacy-invasive. Victims may prefer to remain anonymous when reporting or having their case reported (Powell & Cauchi, 2011). In some cases, victims or other complainants may fear reprisals should the report become known to the perpetrator. At the same time, such anonymity may undermine the possibility of a full investigation, and it naturally inhibits the ability of a decision-maker to seek and receive meaningful input from the alleged policy violator. For these reasons, while numerous jurisdictions and institutions make anonymous or confidential sexual assault reporting available in criminal cases, such reports are typically used to assess criminal patterns and trends, not to initiate individual investigations, which would require that the victim or complainant be identified (Heydon & Powell, 2016).

The criminal context demonstrates the tension, in some cases, between protecting victims’ and complainants’ privacy and safety and allowing the alleged policy violator to participate in the decision-making process. These trade-offs create significant challenges for policy development, as platforms that enforce rules against off-platform abuse may have to balance the privacy of the parties involved against fairness to the alleged policy violator. Twitch appears to have prioritised privacy, stating that it ensures all investigations remain confidential and that only those involved will be notified of any decision. Unfortunately, it remains unclear what degree of notice or participation will be available to those subject to a decision before it is made.

This recalls the literature on campus sexual assault, where the lack of appropriate due process for the accused has received scrutiny (Gerstmann, 2018; Harper et al., 2017). In that context, it has been argued that robust due process requirements themselves can play an important role in assisting victims in recovering from trauma by providing a responsive and thorough system and increasing the legitimacy of the reporting process (Harper et al., 2017).

A tension may also arise with respect to providing reasons for decisions. While public decision-making can increase trust in the system by demonstrating its effective operation, the privacy of the individuals involved may be undermined by any public statement on the matter. While identifying a specific victim in a decision would often be a considerable violation of privacy, even tying an enforcement action to a report of a specific breach may be enough to connect the victim or other involved parties to the incident. At the same time, failure to make basic reasons for enforcement actions public may weaken confidence in the system by making it appear capricious and arbitrary. This may undermine users’ feeling of safety, as it may not be clear that the platform is following through on its policy, and it may harm victims’ and complainants’ interests and chill reporting by leaving it unclear whether reports were seriously investigated. A clear pathway to report policy violations, with the promise that complainants will be heard and taken seriously, is likely to be critical if victims and other interested parties are to use reporting options and if users are to feel safe on the platform. Indeed, victims of sexual assault often report that participation, voice, and validation are central to their justice interests when reporting to police (Daly, 2014, p. 387). Victims or other potential complainants may choose not to report if they see no evidence that such interests will be respected.

However, while some tension exists between due process and privacy with respect to transparency in individual cases, complainants’ interests are better served by fully explaining, in general terms, the process through which reports are received, investigated, and evaluated. Both parties stand to benefit from a well-articulated policy and process for enforcement against off-platform behaviour, which increases confidence in the system and, in turn, confidence in safety.

Unfortunately, there remains a dearth of specifics about how a report is handled by Twitch. There is little ground on which to conclude that it has a robust process that provides fairness or certainty to the victim or the accused. It remains unclear even whether all reports will receive a follow-up, let alone how investigations proceed, or what those who report violations, or those who are investigated, can expect from the process. Remedying these defects by more fully explaining the process may go some way in improving accountability.

c. Increased impact of adverse decisions

A third problematic difference between policies aimed at off-platform abuse and policies aimed at on-platform misconduct is the potential for the increased impact of an adverse decision. As Twitch’s past enforcement actions make clear, the most common sanction imposed upon those found to have violated policies aimed at off-platform misbehaviour is an indefinite account suspension. Twitch has stated that it can take other actions in response to violations of its off-platform conduct policy, including removing a streamer’s Partner status or preventing streamers from engaging in promotional activity (Shear, 2020), although it remains unclear how often such actions are taken. Given the ephemeral nature of content on Twitch, the platform often engages in account-level enforcement for content violations, but indefinite account suspensions appear to be the primary enforcement option for off-platform abuse. Indefinite suspensions have the potential to affect individuals more severely than content-level actions, since they deny access to a vehicle for self-expression altogether and impose significant social and financial costs (Gillespie, 2018, p. 176). This is especially true where one’s income is largely based on access to a platform, as it is for many streamers (Taylor, 2018).

Furthermore, where enforcement actions follow public allegations of harassing or hateful off-platform conduct, such as those made over the summer of 2020, the enforcement may be seen as a form of confirmation of those allegations, or at least increase their visibility in the media. Either outcome may contribute to public stigma. While such stigma may be justified, its possibility nonetheless increases the potential impact of Twitch’s decisions. This possibility remains speculative, as it is impossible to know what impact, if any, Twitch’s determinations have on public opinion. Certainly, where a public accusation is accompanied by significant corroborating accounts or evidence, or is admitted to by the perpetrator, Twitch’s actions will likely have little to no effect. However, where the accusations are denied by the alleged perpetrator but enforcement action is taken nonetheless, individuals could conclude that Twitch conducted proper due diligence on the accusations and that its findings are therefore credible, even if the basis for its decision is unknown.

For example, after Quezada filed his appeal with Twitch, one prominent fighting game community member said with respect to the possibility of Twitch rescinding the suspension: “[Twitch will] not make a decision without doing everything possible and exhausting all the information. If Twitch comes out and decides to unban Nairo, I guess that they’ve found enough evidence” (Chen, 2021). Regardless of the degree to which assumptions of this nature are made, it is likely that account suspensions following high-profile accusations will attract additional media attention. Indeed, the Twitch account suspensions during the summer of 2020 did attract the attention of news outlets that referred directly to various allegations (BBC News, 2020; Kastrenakes, 2020; Walker, 2020).

As previously discussed, the greater the impact of a determination upon an individual, the greater the need for procedural safeguards. The reality that enforcement actions for off-platform behaviour typically result in permanent or indefinite account-level sanctions, and carry the danger of increasing the public stigma associated with publicised accusations, militates towards increased due process and procedural fairness. This may include the opportunity for impacted parties to be informed of the evidence against them, to be heard, and to provide evidence. It may also include the provision of reasons for a decision to the parties involved and the opportunity to appeal a decision that reveals errors or bias, or that has failed to consider all of the relevant evidence (Gerstmann, 2018).

While Twitch allows for appeals in the case of account suspensions (Twitch, n.d.-c), it remains entirely unclear how they are considered, especially for suspensions that relate to off-platform conduct. It is also not clear what Twitch communicates to the parties involved before and after the relevant decision in a given appeal. To date, Twitch has not stated whether it provides reasons for enforcement decisions to either party. Transparency around these issues would be a positive first step.

d. The need for open discussion about off-platform policies

As the relatively cursory discussion above indicates, there are important differences between platform content moderation policies and policies aimed at off-platform behaviour like sexual assault. Indeed, the latter heighten the impact of decisions on all affected parties, accentuating the tension between due process and privacy, while also increasing the possibility of error and calling into question the very purposes of platform policy. As difficult as developing content moderation policy is, crafting and enforcing policies aimed at off-platform behaviour in an accountable and fair manner appears to be even more difficult. But the difficulty of establishing accountable processes and policies does not obviate the need for careful consideration and robust processes. Where platforms do choose to create such policies, it is incumbent upon them to carefully balance the trade-offs and to make themselves accountable to their users.

How to do so remains an unresolved question. The fact that Twitch has narrowed the scope of its policy to apply only to behaviours that are likely criminal, and the fact that it has hired outside expertise, suggest awareness of these difficulties. Nonetheless, Twitch has to date offered little public accountability in its enforcement of policies against off-platform behaviour, and the recent policy update does not suggest that this will change. The processes and decisions remain opaque. The lack of transparency with respect to the enforcement process undermines confidence in the system, and thus vitiates the feeling of safety that Twitch is attempting to create through its policies and enforcement actions. At a minimum, Twitch should more fully explain how it handles reports, investigates complaints, makes decisions, and considers appeals. Explanations should clarify what those who make a report can expect, who reaches the decision, whether there is an opportunity for an individual subject to a decision to offer evidence or challenge existing evidence, and how appeals will be considered. Depending on how robust these processes are at present, more may need to be done to create a reliable and trustworthy system.

Indeed, as the creation and enforcement of platform policies against off-platform behaviour is a new and little-studied issue, what is needed is a broader public conversation that can inform the creation and enforcement of these policies. Creating policy with little public consultation and no transparency, as Twitch is doing, is a recipe for poor development and implementation. The norms of content moderation have changed enormously over the past decade, much of which is due to the open engagement of users, media, politicians, civil society, and academics (Klonick, 2018, pp. 1648–58). Twitch and other platforms considering or enforcing policies against off-platform behaviour should take the opportunity to begin a broader engagement process that takes into account the interests of victims, users, and streamers, and that makes use of the expertise of civil society and academia.

A final concern arises should policies against off-platform behaviour be widely adopted by other platforms. Should this happen, a finding of harassing, abusive, or other harmful behaviour in any aspect of one’s life could result in significant political and social disenfranchisement. As Casey Newton put it jokingly in discussing Twitch’s new policy, “[w]hat’s next, a social credit score that follows you around the web the way it does the Chinese internet?” (Newton, 2021). Despite the humorous intent, quips like this raise important questions about the role of non-state actors and the public in sanctioning individual conduct that are beyond the scope of this article. Indeed, answering these questions involves complex interrogation of the role of private enterprise in policing norms of behaviour, the risks to user privacy of records or allegations of behaviour following them across platforms, and the dangers of past behaviours leading to widespread de-platforming. This will not be an issue, however, if policies of this kind remain limited to a small number of platforms.

3. A new frontier of platform policy?

There may be good reason to be sceptical that policies against off-platform behaviour will spread widely beyond Twitch and similar services. As discussed earlier, as a live-streaming service, Twitch may be more similar to a broadcaster than other social media sites like Facebook or Twitter, and thus feel and project a greater responsibility for those it allows to broadcast. It might therefore be reasonable to suspect that policies against off-platform abuse are likely to be limited to platforms that similarly create an asymmetry between a content creator and an audience.

Indeed, YouTube does take enforcement action against some off-platform behaviour by video creators through its Creator Responsibility Policy (YouTube, n.d.). That policy, which does not apply to non-video creators, states that “if we see that a creator’s on- and/or off-platform behavior harms our users, community, employees or ecosystem, we may take action to protect the community” (YouTube, n.d.). Examples of off-platform behaviour that may give rise to an enforcement action include participating in sexual abuse or violence, and enforcement can range from removal from YouTube’s recommendations to channel demonetisation to account suspension. While YouTube offers virtually no information about its complaint-handling, investigation, or decision-making process, other than to say that a “team of experts” is involved (YouTube, n.d.), it has enforced this policy a number of times against creators, including for sexual misconduct and assault (Crowley, 2021; Godwin, 2021). Recently, for example, popular beauty influencer James Charles was “temporarily” removed from YouTube’s Partner Program after he admitted to sending sexually explicit messages to sixteen-year-old boys (Godwin, 2021).

Beyond video-based platforms, both Patreon and Medium currently have somewhat analogous policies. Crowdfunding platform Patreon’s Community Guidelines expressly contemplate enforcement for bullying or harassment in “real-life interactions” (Patreon, n.d.). This policy has been enforced, for example, in banning one creator for revealing private information about another individual on a different platform (Kelly, 2019). Similarly, the Rules of the online publishing platform Medium currently state that it may “consider off-platform action in assessing a Medium account, and restrict access or availability to that account” (Medium, 2019). It is not clear how that policy has been enforced to date.

It is notable that the affordances of both Patreon and Medium similarly create clear asymmetries between users (i.e. between creators and patrons and between authors and readers, respectively). But too much stock should not be placed in a clear distinction between platforms that create such asymmetries and those that do not. While platforms like Facebook and Twitter appear to create functional parity between users, many of their modern affordances, as well as the simple reality that some users have far more reach than others, can create similar asymmetries. Twitter, notably, creates asymmetrical relationships by allowing one account to follow another without requiring a reciprocal follow, which can allow some individuals and organisations to amass large followings without following many accounts themselves (Paul & Friginal, 2019). Twitter has also recently begun to roll out various monetisation options for its users, including the ability for users with large followings to charge for extra content under its Super Follows programme (Koksal, 2021). And while Facebook’s ‘Friends’ relationship has been categorised as creating a symmetrical relationship (Paul & Friginal, 2019), Facebook’s Pages, for example, are designed to allow individuals to follow a single individual or business without the reciprocal relationship associated with being Facebook Friends. Facebook has also increasingly implemented content monetisation options, such as offering fan subscriptions, video advertising, and methods to allow fans to support Facebook creators (Facebook, n.d.-c). Further, Facebook has its own live-streaming service and Twitch rival, Facebook Gaming, although it appears to lack similar policies (Facebook, n.d.-b). Affordances across both Twitter and Facebook can thus create similarly asymmetric relationships, and the growing monetisation options available to creators on these platforms increasingly position them in relationships similar to those of streamers on Twitch. It may then stand to reason that even these platforms will eventually face pressures similar to those faced by Twitch in sanctioning off-platform abuse, at least with respect to these aspects of their services.

At present, major social media platforms such as Twitter, Facebook, and Reddit (Twitter, n.d.; Facebook, n.d.-a; Reddit, n.d.) do not have policies analogous to Twitch’s off-platform abuse policy. They do, however, have policies against off-platform behaviour in some respects. For example, Facebook and Twitter, amongst others, prevent terrorist organisations and other violent criminal organisations from using their services for any purpose, while Facebook also prohibits any individual involved in mass or multiple murder, human trafficking, or organised crime from using its service. Naturally, enforcement of these policies involves consideration of acts and behaviour that occur beyond the enforcing platform, although they do not raise the same issues as those discussed here.

Further, major platforms have begun looking to off-platform conduct when enforcing their existing content policies, in order to determine the relevant context in which to understand potential violations. For example, in banning Donald Trump from their platforms, a number of platforms, including Twitter and Facebook, took into account the real-world impacts of Trump’s statements both on and off of their platforms, including the violence of 6 January 2021 at the United States Capitol (Kelly, 2021; Twitter, 2021). With pressure to apply the same rules to other world leaders (Morrison, 2021) and growing support for de-platforming based on the real-world impacts of harmful speech (Mystal, 2021; Bedingfield, 2021), it is likely that, even if these companies do not establish explicit policies against off-platform abuse, investigating off-platform behaviour, along with some of the difficulties associated with it, will increasingly become an element of their policy work.

Conclusion

This article has argued that policies against off-platform abuse are a relatively new phenomenon that raises unique challenges in balancing community-safety objectives with accountability and fairness. While such policies may be justified on the basis of limiting the potential for future harm and signalling the standards of the community, they also create new challenges in ensuring accountability in platform policy enforcement. These include making factual determinations, providing safe reporting mechanisms, and protecting victim privacy and safety, all while ensuring transparency and fairness to all parties when meting out sanctions with potentially great impact on the sanctioned user. These challenges do not necessarily indicate that such policies should not exist, but rather that extra steps should be taken to balance the goals of the policy with fairness and accountability to users. This article has outlined some possible suggestions in the case of Twitch.

As countries around the world increasingly attempt to regulate the creation and enforcement of platform content policy through requirements of transparency and due process, they may also want to consider to what extent these regulations do, and should, apply to policies aimed at off-platform misbehaviour and abuse. For example, the recently proposed Digital Services Act in the European Union would require internet intermediaries to provide disclosures concerning their content policies and to publish transparency reports about content actions. Web hosts and platforms would have to provide notice and reasons for content removal decisions, while online platforms beyond a size threshold would have to provide internal appeals mechanisms (Proposal DSA, 2020). Notably, however, while some of these provisions might apply to the disabling of accounts for behaviour that occurred off of the platform in question (e.g. the requirement to provide reasons),4 it does not appear that others, such as the requirement to provide an appeals process, would apply to policies aimed at off-platform behaviour.5

Similarly, the UK’s Online Harms White Paper approach considers transparency and user redress mechanisms, but is largely focused on ensuring complaint mechanisms for pieces of content and content removals, rather than account actions. The proposed Online Safety Bill based on the White Paper makes no mention of off-platform behaviour (Minister of State for Digital and Culture, 2021). In the United States, the bipartisan proposal for increased procedural accountability on interactive computer services, the PACT Act, focuses solely on content removal when it mandates transparency reporting and a complaint and appeals mechanism (PACT Act, 2021). Policies aimed at off-platform conduct do not appear to be included.

If governments are concerned with ensuring not only that platforms remove harmful content, but also that they protect the interests of users in continuing to use central platforms for modern discourse, they may want to consider the role of platforms in disabling access for individuals based upon conduct that took place off the platform in question. To ensure effective regulation, it is critical that academics, civil society, platforms, and governments begin a wider discussion of how and when policies against off-platform behaviour should be developed and enforced. Platforms like Twitch should begin this process by being transparent about their processes and by seeking public and civil society input on their policy development.

Acknowledgements

The author wishes to express his gratitude to Ariel Katz and Jack Enman-Beech for early discussions on this topic. The author would also like to thank the editors and reviewers of Internet Policy Review for their comments, including evelyn douek, Andrew Zolides, Frédéric Dubois, and Balázs Bodó, whose insights greatly improved this article. Any mistakes remain with the author.

References

Alexa. (n.d.). Top sites. Retrieved 6 April 2021, from https://www.alexa.com/topsites

Asarch, A. (2019, August 13). Twitch’s continuous struggle with moderation shines a light on platform’s faults. Newsweek. https://www.newsweek.com/twitch-over-party-ninja-stream-porn-moderation-faults-1454148

Baker v. Canada (Minister of Citizenship and Immigration), 2 SCR 817 (Supreme Court of Canada 1999).

Balkin, J. M. (2018). Free speech is a triangle. Columbia Law Review, 118, 2011–2056.

BBC News. (2020, June 25). Twitch starts banning users over abuse. BBC News. https://www.bbc.com/news/newsbeat-53179288

Bedingfield, W. (2021). Deplatforming works, but it’s not enough to fix Facebook and Twitter. Wired. https://www.wired.co.uk/article/deplatforming-parler-bans-qanon

Benvenisti, E. (2014). The law of global governance. Hague Academy of International Law.

Bloch-Wehba, H. (2019). Global platform governance: Private power in the shadow of the state. SMU Law Review, 72(1), 27–80. https://scholar.smu.edu/smulr/vol72/iss1/9/

Bunting, M. (2018). From editorial obligation to procedural accountability: Policy approaches to online content in the era of information intermediaries. Journal of Cyber Policy, 3(2), 165–186. https://doi.org/10.1080/23738871.2018.1519030

Caplan, R., & Gillespie, T. (2020). Tiered governance and demonetization: The shifting terms of labor and compensation in the platform economy. Social Media + Society, 6(2), 1–13. https://doi.org/10.1177/2056305120936636

Chen, J. (2021). Tuesday 10.10—Justin Wong Talks Fatherhood And Becoming A Fighting Game God [Video]. YouTube. https://www.youtube.com/watch?v=iS48c4qPUGk

Cohen, S. A. (1981). An introduction to the theory, justifications and modern manifestations of criminal punishment. McGill Law Journal, 27, 73–91.

Colombo, C. (2020). Twitch sparks outrage after Onision is quietly unbanned. Dexerto. https://www.dexerto.com/entertainment/twitch-sparks-outrage-after-onision-is-quietly-unbanned-1430801/

Crowley, J. (2021, January 22). Why exactly has controversial YouTuber Onision been demonetized? Newsweek. https://www.newsweek.com/onision-demonetized-youtube-1563434

Daly, K. (2014). Reconceptualizing sexual victimization and justice. In I. Vanfraechem, A. Pemberton, & F. M. Ndahinda (Eds.), Justice for victims: Perspectives on rights, transition and reconciliation (pp. 378–395). Routledge. https://doi.org/10.4324/9780203094532-30

D’Anastasio, C. (2020, June 26). Twitch confronts its role in streaming’s #MeToo reckoning. Wired. https://www.wired.com/story/twitch-streaming-metoo-reckoning-sexual-misconduct-allegations/

Douek, E. (2019). Verified accountability: Self-regulation of content moderation as an answer to the special problems of speech regulation (Paper No. 1903; Aegis Series). https://www.hoover.org/research/verified-accountability

Douek, E. (2021). Governing online speech: From “posts-as-trumps” to proportionality & probability. Columbia Law Review, 121(3), 759–834. https://columbialawreview.org/content/governing-online-speech-from-posts-as-trumps-to-proportionality-and-probability/

Facebook. (n.d.-a). Community standards. https://www.facebook.com/communitystandards/

Facebook. (n.d.-b). Gaming community guidelines. Facebook Gaming. https://www.facebook.com/fbgaminghome/creators/gaming-community-guidelines

Facebook. (n.d.-c). How can I make money on Facebook? Facebook for Business. https://www.facebook.com/business/learn/lessons/how-make-money-facebook

Fuller, L. L. (2000). The morality of law. Yale University Press.

Galiz-Rowe, T. (2020, August 2). Popular super smash bros. Streamer ZeRo has been banned from twitch. GameSpot. https://www.gamespot.com/articles/popular-super-smash-bros-streamer-zero-has-been-ba/1100-6480106/

Gebhart, G. (2019, June). Who has your back? Censorship edition 2019. Electronic Frontier Foundation. https://www.eff.org/wp/who-has-your-back-2019

Geigner, T. (2019, September 20). Content moderation at scale especially doesn’t work when you hide all the rules. Techdirt. https://www.techdirt.com/articles/20190918/10465243018/content-moderation-scale-especially-doesnt-work-when-you-hide-all-rules.shtml

Gerstmann, E. (2018). Campus sexual assault: Constitutional rights and fundamental fairness. Cambridge University Press. https://doi.org/10.1017/9781108671255

Gilbert, B. (2020, September 20). Ninja just signed a multi-year contract that keeps him exclusive to Amazon-owned Twitch. Business Insider. https://www.businessinsider.com/ninja-signs-multi-year-exclusivity-contract-with-amazon-twitch-2020-9

Gillespie, T. (2018). Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions that Shape Social Media. Yale University Press.

Godwin, C. (2021, April 20). James Charles: YouTube temporarily demonetises beauty influencer. BBC News. https://www.bbc.com/news/world-us-canada-56811134

Gorwa, R. (2019). What is platform governance? Information, Communication & Society, 22(6), 854–871. https://doi.org/10.1080/1369118X.2019.1573914

Grayson, N. (2020, June 26). Twitch finally starts banning streamers accused of sexual abuse. Kotaku. https://kotaku.com/twitch-finally-starts-banning-streamers-accused-of-sexu-1844164469

Hall, C. (2020, July 3). Evo Online canceled following accusations of sexual abuse against CEO. Polygon. https://www.polygon.com/2020/7/3/21312536/evo-online-canceled-joey-cuellar-mr-wizard-sexual-abuse

Harper, S., Maskaly, J., Kirkner, A., & Lorenz, K. (2017). Enhancing Title IX due process standards in campus sexual assault adjudication: Considering the roles of distributive, procedural, and restorative justice. Journal of School Violence, 16(3), 302–316. https://doi.org/10.1080/15388220.2017.1318578

Hernandez, P. (2020, June 25). Twitch starts banning streamers over sexual abuse allegations. Polygon. https://www.polygon.com/2020/6/25/21302983/twitch-sexual-abuse-assault-harassment-bans

Heydon, G., & Powell, A. (2016). Written-response interview protocols: An innovative approach to confidential reporting and victim interviewing in sexual assault investigations. Policing and Society, 28(6), 631–646. https://doi.org/10.1080/10439463.2016.1187146

Hsu, T., & Lutz, E. (2020, August 1). More than 1,000 companies boycotted Facebook. Did it work? The New York Times. https://www.nytimes.com/2020/08/01/business/media/facebook-boycott.html

Iqbal, M. (2021, March 29). Twitch revenue and usage statistics. Business of Apps. https://www.businessofapps.com/data/twitch-statistics/

Jhaver, S., Bruckman, A., & Gilbert, E. (2019). Does transparency in moderation really matter?: User behavior after content removal explanations on Reddit. Proceedings of the ACM on Human-Computer Interaction, 3. https://doi.org/10.1145/3359252

Kastrenakes, J. (2020, June 25). Twitch reckons with sexual assault as it begins permanently suspending streamers. The Verge. https://www.theverge.com/2020/6/25/21303185/twitch-sexual-harassment-assault-permanent-bans-streamers

Kaye, D. (2019). Speech Police: The global struggle to govern the internet. Columbia Global Reports.

Kelly, M. (2019, November 26). Controversial YouTuber banned from Patreon after alleged doxxing. The Verge. https://www.theverge.com/2019/11/26/20984785/onision-doxxing-patreon-deplatformed-twitter-youtube

Kelly, M. (2021, January 7). Facebook bans Trump ‘indefinitely’. The Verge. https://www.theverge.com/2021/1/7/22218725/facebook-trump-ban-extended-capitol-riot-insurrection-block

Klonick, K. (2018). The New governors: The People, rules, and processes governing online speech. Harvard Law Review, 131, 1598–1670. https://harvardlawreview.org/2018/04/the-new-governors-the-people-rules-and-processes-governing-online-speech/

Lorenz, T., & Browning, K. (2020, June 23). Dozens of women in gaming speak out about sexism and harassment. The New York Times. https://www.nytimes.com/2020/06/23/style/women-gaming-streaming-harassment-sexism-twitch.html

Martens, T. (2020, July 9). Resignations and reckoning: Game industry’s existential quest for a more inclusive space. Los Angeles Times. https://www.latimes.com/entertainment-arts/story/2020-07-09/game-industry-reckoning-sexual-harassment-ubisoft-chris-avellone

Mathews v. Eldridge, 424 U.S. 319 (US Supreme Court 1976).

Medium. (2019, November 25). Medium rules. Medium Policy. https://policy.medium.com/medium-rules-30e5502c4eb4

Michael, C. (2020, September 15). CaptainZack allegedly lied about taking ‘hush money’ from Nairo. Dot Esports. https://dotesports.com/fgc/news/captainzack-allegedly-lied-about-taking-hush-money-from-nairo

Minister of State for Digital and Culture. (2021). Draft Online Safety Bill (CP 405). https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/985033/Draft_Online_Safety_Bill_Bookmarked.pdf

Morrison, S. (2021, January 20). Facebook and Twitter made special world leader rules for Trump. What happens now? Vox. https://www.vox.com/recode/22233450/trump-twitter-facebook-ban-world-leader-rules-exception

Mullan, D. J. (2001). Administrative law. Irwin Law.

Mystal, E. (2021, January 22). Twitter and Facebook just proved that deplatforming works. The Nation. https://www.thenation.com/article/politics/twitter-facebook-free-speech/

Newton, C. (2021, April 7). Twitch calls in the cavalry. Platformer. https://www.platformer.news/p/twitch-calls-in-the-cavalry

Patreon. (n.d.). Community Guidelines. Patreon. https://www.patreon.com/policy/guidelines

Paul, J. Z., & Friginal, E. (2019). The effects of symmetric and asymmetric social networks on second language communication. Computer Assisted Language Learning, 32(5–6), 587–618. https://doi.org/10.1080/09588221.2018.1527364

Powell, M. B., & Cauchi, R. (2011). Victims’ perceptions of a new model of sexual assault investigation adopted by Victoria Police. Police Practice and Research, 14(3), 228–241. https://doi.org/10.1080/15614263.2011.641376

Proposal DSA. (2020). Proposal for a Regulation of the European Parliament and of the council on a single market for digital services (Digital Services Act) and amending Directive 2000/31/EC COM/2020/825 final. https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:52020PC0825&from=en

Quezada, N. (2020, October 28). My statement [Medium Post]. Nairo. https://nairoby.medium.com/my-statement-9a091682fff3

Quezada, N. (2021, March 3). On the topic of Twitch, I do want to be clear that I’m not looking for a handout or anything [Tweet]. Twitter. https://twitter.com/NairoMK/status/1367222559826202630

R v. Secretary of State for the Home Department, ex p. Doody, No. 8 (UK House of Lords 1993).

Ranking Digital Rights. (2021). 2020 Ranking Digital Rights corporate accountability index. https://rankingdigitalrights.org/index2020

Reddit. (n.d.). Reddit Content Policy. Reddit Inc. https://www.redditinc.com/policies/content-policy

Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression. UNHRC, 38th Sess, UN Doc A/HRC/38/35. (2018).

Reynolds, K. J., Subašić, E., & Tindall, K. (2015). The problem of behaviour change: From social norms to an ingroup focus. Social and Personality Psychology Compass, 9(1), 45–55. https://doi.org/10.1111/spc3.12155

Schreier, J. (2020, June 24). Video game industry rocked by outpouring of sexual misconduct allegations. Bloomberg. https://www.bloomberg.com/news/articles/2020-06-24/video-game-industry-rocked-by-outpouring-of-sexual-misconduct-allegations

Schweizer, M. (2016). The civil standard of proof—What is it, actually? The International Journal of Evidence & Proof, 20(3), 217–234. https://doi.org/10.1177/1365712716645227

Shanley, P. (2019, September 30). Twitch CEO on war for streaming talent, transparent moderation plans. The Hollywood Reporter. https://www.hollywoodreporter.com/news/twitchs-emmett-shear-streaming-talent-wars-moderation-plans-1244171

Shear, E. (2020, June 22). There’s been a lot of important conversation happening over the previous couple days, and I’ve heard your voices [Tweet]. Twitter. https://twitter.com/eshear/status/1275234049070526464

Srinivasan, K. B., Danescu-Niculescu-Mizil, C., Lee, L., & Tan, C. (2019). Content removal as a moderation strategy: Compliance and other outcomes in the ChangeMyView community. Proceedings of the ACM on Human-Computer Interaction, 3. https://doi.org/10.1145/3359265

Stewart, R. B. (2016). Global standards for national societies. In S. Cassese (Ed.), Research handbook on global administrative law (pp. 175–195).

Sunstein, C. (1996). On the expressive function of law. University of Pennsylvania Law Review, 144, 2021–2053. https://doi.org/10.2307/3312647

Suzor, N. P. (2019). Lawless: The Secret Rules That Govern Our Digital Lives. Cambridge University Press. https://doi.org/10.1017/9781108666428

Taylor, T. L. (2018). Watch me play: Twitch and the rise of game live streaming. Princeton University Press.

The Santa Clara principles on transparency and accountability in content moderation. (n.d.). https://santaclaraprinciples.org/

Twitch. (n.d.-a). Community guidelines: Hateful conduct and harassment. Twitch Legal. https://www.twitch.tv/p/en/legal/community-guidelines/harassment/

Twitch. (n.d.-b). Off-service conduct policy. Twitch Legal. https://www.twitch.tv/p/en/legal/community-guidelines/off-service-conduct-policy/

Twitch. (n.d.-c). About account enforcements and chat bans. Twitch Help. https://help.twitch.tv/s/article/about-account-suspensions-dmca-suspensions-and-chat-bans?language=en_US

Twitch. (n.d.-d). Frequently asked questions. Twitch Partnership Program. https://www.twitch.tv/p/en/partners/faq/

Twitch. (2018, February 8). Twitch community guidelines updates [Blog post]. Twitch Blog. https://blog.twitch.tv/en/2018/02/08/twitch-community-guidelines-updates-f2e82d87ae58

Twitch. (2020a, December 9). Introducing our new hateful conduct & harassment policy [Blog post]. Twitch Blog. https://blog.twitch.tv/en/2020/12/09/introducing-our-new-hateful-conduct-harassment-policy/

Twitch. (2020b, June 24). An update to our community [Blog post]. Twitch Blog. https://blog.twitch.tv/en/2020/06/24/an-update-to-our-community/

Twitch. (2020c). Transparency Report 2020 [Report]. https://www.twitch.tv/p/en/legal/transparency-report/

Twitch. (2021, April 7). Our plan for addressing severe off-service misconduct [Blog post]. Twitch Blog. https://blog.twitch.tv/en/2021/04/07/our-plan-for-addressing-severe-off-service-misconduct

Twitter. (n.d.). Rules and policies. Twitter Help. https://help.twitter.com/en/rules-and-policies#general-policies

Twitter. (2021, January 8). Permanent suspension of @realDonaldTrump [Blog post]. Twitter Blog. https://blog.twitter.com/en_us/topics/company/2020/suspension.html

U.K. Department for Digital, Culture, Media & Sport & U.K. Home Office. (2020). Consultation outcome: Online harms white paper: Full government response to the consultation. https://www.gov.uk/government/consultations/online-harms-white-paper/outcome/online-harms-white-paper-full-government-response

U.S. PACT Act To Require Transparency, Accountability, and Protections for Consumers Online, United States Congress, 117 (2021).

Walker, I. (2020, September 11). Twitch bans Smash champion after he admits to having sex with minor. Kotaku. https://kotaku.com/twitch-bans-smash-champion-after-he-admits-to-having-se-1845027272

Wiltshire, A. (2019, November 11). What does it take to make a living on Twitch? PC Gamer. https://www.pcgamer.com/what-does-it-take-to-make-a-living-on-twitch/

YouTube. (n.d.). Creator responsibility. Google Support. https://support.google.com/youtube/answer/7650329?hl=en

Zolides, A. (2020). Gender moderation and moderating gender: Sexual content policies in Twitch’s community guidelines. New Media & Society, 1–19. https://doi.org/10.1177/1461444820942483

Footnotes

1. While academic work on Twitch’s policy enforcement is limited, a notable exception is Taylor’s detailed account of the rise of the platform, which addresses on-platform policy enforcement, although not off-platform policy (2018). Other academic treatments of Twitch’s policies often concern the use of policies to control women’s attire and sexual content (Zolides, 2020; Ruberg, 2020).

2. One example of the enforcement of off-platform policies prior to the summer of 2020 is the suspension of Gregory ‘Onision’ Jackson in January of 2020, following a series of allegations of abuse and of grooming minors. His account was controversially restored in October of 2020 without public comment from Twitch (Colombo, 2020).

3. Note that my use of freedom of expression here does not refer solely to constitutional rights to freedom of expression against government limitation. Freedom of expression values can be engaged by private action even where no recognised right is infringed. Corporations are increasingly expected to comply with international human rights law, including social media companies with respect to freedom of expression (Report of the Special Rapporteur, 2018). As David Kaye, the former United Nations Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression put it, the idea that human rights law applies only to governments and not to companies is “rapidly becoming an archaic way of thinking about the structure of international governance” (Kaye, 2019, pp. 119–290).

4. The requirement for the provision of reasons in Article 15 applies to situations in which “a provider of hosting services decides to remove or disable access to specific items of information provided by the recipients of the service.” Presumably, this could apply for reasons beyond content violations on the platform.

5. The requirement that platforms provide an internal complaint-handling system in Article 17 applies only to “decisions taken by the online platform on the ground that the information provided by the recipients [of the service] is illegal content or incompatible with its terms and conditions.” This would not appear to apply to actions taken for off-platform behaviour as the provision specifies it applies only to actions based on the information provided by the recipients of the service.

Germany: A change of (dis)course in digital policy

Civil society coalition

Fax machines in health departments, schools without email addresses, millions spent on screwed-up apps—the pandemic has unforgivingly revealed that "business as usual" in digital policy making endangers our future. The focus must no longer be on security interests or the profit margins of tech companies, but on the common good. This is what F5, named after the reload key, stands for—a new coalition calling for a new perspective on digital politics.

A closer look at recent debates in Germany shows how crucial civil society voices have been in preventing detrimental developments: among the most notable examples are the resistance to upload filters in the EU Copyright Directive, the (il)legal framework of Germany's foreign intelligence service, and data retention policies—all of which were overhauled thanks to strong resistance from civil society in Germany and Europe. All too often, political decision-making has lagged behind the rapid change of the digital market. Only now are political decision-makers beginning to take steps to oblige platforms like Facebook and YouTube to become more transparent and to implement a common European framework on platforms’ response to hate speech. They realised equally late that it is an aberration to leave cloud infrastructure entirely to private companies. Invariably, it has been organisations like ours that have ensured that policies were set back on track for the benefit of users and citizens.

Change of perspective on digital policy

We—F5, a new coalition pulling together AlgorithmWatch, the Society for Civil Liberties, the Open Knowledge Foundation Germany, Reporters Without Borders Germany and Wikimedia Deutschland—are calling for a change of perspective: digital policy must finally centre on promoting the common good. It is an appalling waste of scarce resources when government and public authorities create precedents in the form of policies whose detrimental effects on society later have to be mitigated. Much better policy-making would be possible if rules for the digital age were not conceived in non-transparent procedures, all too often driven by lobbyists, business consultants and the sales departments of tech giants—leaving watchdog organisations to build up enormous pressure afterwards to rein policies back in for the benefit of all.

Instead, a democratic, open, inclusive and transparent digital policy process must focus on the common good. But this can only succeed if more voices are heard and involved. Our organisations understand and connect the diversity of technological and societal change. Strengthening and institutionalising these diverse voices is one of the goals of our alliance.

We are committed to protecting the right to secure and confidential communications. This fundamental right, taken for granted in analogue life, is increasingly being eroded in the digital realm. Police and intelligence services across Europe have been granted ever-increasing powers to collect and access data and to intercept private messages and calls, often without appropriate reforms of the oversight structures. Now member states are going even further in calling for the development of additional technical means to circumvent encryption, a step that would undermine the rights and security of millions of Europeans and send a catastrophic message to repressive states worldwide.

Effective control of platforms and algorithms

We advocate for European platform regulation that promotes freedom of expression, freedom of information and civic discourse on the internet. For this to succeed, meaningful transparency obligations need to be imposed on impactful interaction structures like YouTube, Facebook and Twitter. Platforms specifically must be legally required to provide users with easy-to-use ways of submitting notices and of appealing content moderation decisions. Safeguards must be put in place to avoid the risk of platforms overblocking legitimate content; at the very least, meaningful human oversight and review processes must counteract the one-sided incentives for platforms to delete legitimate content rather than be fined for inaction. Only these measures will ensure that users can enjoy digital self-determination. At the same time, the state must not abandon its responsibility: we need new ways to bring the judiciary to bear in sanctioning digital wrongdoing, instead of in effect delegating both the policing and the enforcement of democratically agreed rules almost entirely to private companies.

Increasingly, automated decision-making (ADM) systems, often labelled as artificial intelligence (AI) systems, are being used in the selection of job applicants, for medical diagnoses, to detect welfare fraud, to assess creditworthiness, and the like. The Artificial Intelligence Act, which is currently being negotiated within the European Union, acknowledges that so-called AI systems can come with high or, in some scenarios, even unacceptable risks for individuals, communities and societies. Among other things, it puts the use of AI in credit scoring or in the field of employment and labour into the high-risk category, because these usage scenarios affect essential aspects of a person’s life plan and autonomy. But self-assessments of risks by developers—often corporate actors—and deployers, as currently foreseen by the AI Act, will not suffice to ensure that the use of these systems is guided by individual autonomy and the common good. If not combined with reliable enforcement mechanisms and accountability frameworks, the new rules risk becoming toothless. In the public sector, authorities need to be obliged to systematically evaluate risks relating to a system through an impact assessment and to provide information on all ADM systems in use within public registers. The risks that come with the use of an ADM system can only be assessed on a case-by-case basis, and not via pre-determined risk categories.

Digitisation and transparency for the public good

In order for our society to benefit as much as possible from the ideas, skills and diversity of civic engagement, however, there also needs to be greater support for common good-oriented digital projects. Liability risks for volunteer-run platforms must be lowered, and self-governance must be promoted. Freedom of information should be strengthened through transparency laws on all levels, and it must become a benchmark for any political reform to work towards free access to knowledge for all. Where public money is spent, by default such investments should favour the output of freely usable content, regardless of whether it’s software, educational materials, data, or another type of content. And public investments must favour diversely accessible, sustainable structures that are strongly committed to democratic values and human rights.

Our free and open digital society thrives on preconditions that neither the state nor companies alone can guarantee. F5 aims to be a major civil society counterweight to dominant business interests, and to policy-making that tends to lose sight of the common good in digitisation. We will establish an additional dedicated line of communication between civil society groups and policymakers at the federal level, in the form of regular high-level, high-expertise events, with a pilot planned for 29 September, right after the German general elections. In launching this event series, our main goal is the future viability of a democratic digital society. This is the measure by which we will evaluate the actions of politicians and companies, and, of course, our own.

Governing “European values” inside data flows: interdisciplinary perspectives


Papers in this special issue

Governing “European values” inside data flows: interdisciplinary perspectives
Kristina Irion, University of Amsterdam
Mira Burri, University of Lucerne
Ans Kolk, University of Amsterdam
Stefania Milan, University of Amsterdam

Safeguarding European values with digital sovereignty: an analysis of statements and policies
Huw Roberts, University of Oxford
Josh Cowls, University of Oxford
Federico Casolari, University of Bologna
Jessica Morley, University of Oxford
Mariarosaria Taddeo, University of Oxford
Luciano Floridi, University of Oxford

Mitigating the risk of US surveillance for public sector services in the cloud
Jockum Hildén, University of Helsinki

Extraterritorial application of the GDPR: promoting European values or power?
Oskar Josef Gstrein, University of Groningen
Andrej Janko Zwitter, University of Groningen

Governing the shadow of hierarchy: enhanced self-regulation in European data protection codes and certifications
Rotem Medzini, The Hebrew University of Jerusalem

Personal data ordering in context: the interaction of meso-level data governance regimes with macro frameworks
Balázs Bodó, University of Amsterdam
Kristina Irion, University of Amsterdam
Heleen Janssen, University of Amsterdam
Alexandra Giannopoulou, University of Amsterdam

Embedding European values in data governance: a case for public data commons
Jan J. Zygmuntowski, Kozminski University
Laura Zoboli, University of Warsaw
Paul F. Nemitz, European Commission

Policy strategies for value-based technology standards
Amelia Andersdotter, Council of European National Top-Level Domain Registries (CENTR)
Lukasz Olejnik, Independent researcher

Value Sensitive Design and power in socio-technical ecosystems
Mattis Jacobs, Universität Hamburg
Christian Kurtz, Universität Hamburg
Judith Simon, Universität Hamburg
Tilo Böhmann, Universität Hamburg

Beyond the individual: governing AI’s societal harm
Nathalie A. Smuha, KU Leuven

What rights matter? Examining the place of social rights in the EU’s artificial intelligence policy debate
Jędrzej Niklas, Cardiff University
Lina Dencik, Cardiff University

Governing “European values” inside data flows: interdisciplinary perspectives

The European conundrum

Digitalisation has set into motion a deep transformation of our societies, cultures and economies (Castells, 2010), while challenging territoriality-based sovereignty (Hamelink, 1994) and eroding traditional regulatory configurations (Cohen, 2019). Global digital connectedness is in many fundamental ways beneficial, as it facilitates cross-border communications, seamless trade and production along global value chains, and enables novel types of technology-driven innovation and new business models (Henke et al., 2016; Burri, 2021a). However, recent years have also exposed the downsides of such data-based internationalisation for equity, dependency and sustainability, as digital activities are mediated by actors who deploy these technologies to serve particular interests, public or private, with sometimes problematic effects that manifest themselves in, and help co-create, our information civilisation.

Scholarship across disciplinary boundaries has nurtured a critical discourse offering conceptual perspectives on trust, power, justice and authority in digital societies (e.g., Cohen, 2019; Zuboff, 2019). These distinct perspectives speak to the literature on important human rights and societal values that are contested and re-negotiated in the digital realm (Zalnieriute and Milan, 2019), including privacy and surveillance (Farrell and Newman, 2019), control over internet infrastructure (DeNardis, 2014), and the fairness of algorithms (Pasquale, 2015). There is a heightened awareness of the risks from the digital (dis)intermediation of societal values that undergird social cohesion, the public sphere and democratic institutions (Van Dijck, Poell, and De Waal, 2018). As a result, the discourses on transnational interdependencies and responsibilities have become less deterministic and polarised, and more nuanced and pluralistic (Irion, 2021; Yakovleva and Irion, 2020).

In the last decade, the cross-border movement of digital products and services, as well as their underlying data, has been disruptive to a range of European legal frameworks. Against this backdrop the European Union (EU) has been re-assessing how to better integrate its digital internal market paradigm with the individual rights and public values that are foundational to “the European project”. The substance of these rights and values is codified in primary EU law, notably the Treaty on European Union (2012), the Treaty on the Functioning of the European Union (2012), and the Charter of Fundamental Rights of the European Union (2000), as further developed and implemented in secondary EU law (European Commission, 2021). The EU certainly had a head start in the field of personal data protection regulation, which has become a very influential model across the world. So much so that the General Data Protection Regulation (GDPR) is a frequently cited example of Anu Bradford’s (2020) Brussels effect, which connotes the externalisation of the EU’s regulatory standards. Differences with the EU over transatlantic flows of personal data create new fissures in the US’s geopolitical strategies for the digital economy (Irion, 2015; 2020b), while China emerges as a digital power to be reckoned with.

With its quest for digital sovereignty the EU embraces a new assertive rhetoric (Von der Leyen, 2020), juxtaposing its “value-based” approach vis-à-vis a more market-based US approach and a top-down, state-centric Chinese one. This contestation is notable in the idea of the “four internets”: within the US, the “DC commercial internet” as counterpart to the existing “Silicon Valley open internet”, the “Brussels bourgeois internet”, and the “Beijing paternal internet” (with the “Moscow spoiler” as a parasitic strategy) (O’Hara and Hall, 2020). While the “Brussels bourgeois” label seems slightly dismissive, O’Hara and Hall (2020) describe its substance as a vision of a “well-ordered, self-regulating, responsible [...] more or less open Internet, on which good behavior is the norm. Trolling, privacy invasion and fake news should be marginalized or regulated away by a strong civil society whose members are trustworthy and trusting” (n.p.). Others refer to the EU’s striving to set ethical standards for novel technologies “for good” (Kalff and Renda, 2019), fully compatible with sustainability (as per the European Green Deal) and more broadly with the Sustainable Development Goals (European Commission, 2021). Kalff and Renda (2019, pp. 187-188) note in this context that “Europe’s unique balance between freedom (‘of’, not ‘from’), and justice explains its unique legal and economic tradition. Fairness, reasonableness, good faith, pre- and post-contractual obligations are time-tested principles and part of the heritage of continental Europe”.

How to leverage this potential, if it indeed exists, is the key question, however. Despite the size of the single market and substantive economic activity, the EU faces the problem of “delivering” on its well-intended “plethora of promising policies” (Kalff and Renda, 2019), especially in the absence of home-region big tech firms and large-scale online service providers. Smaller-scale and/or alternative ecosystems, infrastructures and architectures are yet to be developed within and by the Union, and a coordinated EU data governance approach embodying European values and digital sovereignty is still lacking. At the same time, there are concerns about regulation stifling innovation and competition, particularly focused on the GDPR and the Digital Services Act (Cennamo and Sokol, 2021), but also about how it may in fact favour big tech firms if the specifics of their business models and ecosystems and their geopolitical embeddedness are not properly accounted for (Jacobides, Bruncko, and Langen, 2020; cf. Ciulli and Kolk, 2019). Attempts are being made to bridge competition policy and data protection law (Kira, Sinha, and Srinivasan, 2021). As a result, EU digital sovereignty has not only an external dimension. Internally, the EU seeks to leverage the constitutive role of the digital realm for European integration, to build itself constitutionally and increase its legitimacy.

Grounding European values in a transnational digital setting

Building on the debate outlined above, this special issue assembles interdisciplinary perspectives on governing the digital in ways that safeguard “European values” while adequately performing in a transnational digital environment. Instead of arguing why critical European values require protection, it asks how to effectively ground these values in a transnational digital setting. The special issue seeks to identify which institutions, regulatory formations and governance fora can be harnessed without disrupting otherwise beneficial data flows (e.g. Burri, 2021b). With this in mind, we invited abstracts for multidisciplinary contributions, and the resulting draft papers were presented and discussed at an international workshop held online on 29 January 2021. The final set of reviewed contributions is included in this special issue.

The guest editors of the special issue are a multidisciplinary group of four academics, affiliated with the Amsterdam Centre for European Studies (ACES), the Amsterdam Business School, the Institute for Information Law (IViR), and the DATACTIVE research project at the University of Amsterdam, as well as the research project “The Governance of Big Data in Trade Agreements” at the University of Lucerne. The resulting compilation of research articles not only addresses a wide spectrum of possible interventions, namely at the levels of digital technologies, data governance and regulatory design, but also contests the inclusion or exclusion of values in EU digital public policy. The articles contribute to the debate by introducing new issues and questions, critically framing and discussing them, advancing current thinking and offering recommendations for further research and policy-making. The ambition of this endeavour is to foster a new research agenda on the governance of public interests in transnational digital technologies.

The special issue starts with a contribution on EU digital sovereignty that queries its potential to enhance the protection of European values. In their article “Safeguarding European values with digital sovereignty: an analysis of statements and policies”, which offers an excellent entry point to EU digital policy, Huw Roberts, Josh Cowls, Federico Casolari, Jessica Morley, Mariarosaria Taddeo, and Luciano Floridi interrogate the meaning and the use of digital sovereignty across EU policy fields. In their understanding, digital sovereignty constitutes “a form of legitimate, controlling authority”. Using content analysis of EU documents, the authors trace the policy areas and measures that are most closely associated with digital sovereignty. This analysis identifies the five key areas that EU institutional actors most frequently mention as important for strengthening digital sovereignty, i.e. data governance; constraining platform power; digital infrastructures; emerging technologies; and cybersecurity. The authors assess the EU’s ability to exercise legitimate control and discuss the efficacy of EU digital policy in the areas that have been most closely associated with EU digital sovereignty. The article concludes with recommendations on how the EU can address the identified deficits to further strengthen the EU’s digital sovereignty as a vehicle to protect European values.

The other nine articles in the special issue are grouped around four themes, each containing more than one contribution. The first theme, “Lessons from the General Data Protection Regulation”, critically engages with the GDPR as a model for EU digital rulemaking. The contributions tackle enduring compliance issues for digital public services that use cloud infrastructure, the GDPR’s extraterritorial application and the role of regulatory intermediaries in enforcing data protection standards. The second theme, “Joining-up data ordering with rights-preserving governance”, argues in favour of introducing data governance schemes which operate below the legislative framework but align well with European values. The third theme, “Value design in digital architectures”, looks at the role of EU public policy in harnessing standardisation more effectively and explores the positive contribution of value-sensitive design of digital technologies. The fourth theme, “A European approach to artificial intelligence”, addresses the conceptualisation of AI’s societal harm and the underrepresentation of social rights in the current EU proposal for an Artificial Intelligence Act.

Lessons from the General Data Protection Regulation

As a corollary to digital sovereignty, EU law demands untangling personal data flows from the unfettered surveillance authorities of foreign governments. The convoluted legal situation surrounding cross-border transfers of personal data is most pronounced in the EU-US relationship, although it extends well beyond it (Burri, 2021b). The jurisprudence of the Court of Justice of the EU (CJEU), which has twice invalidated the legal bases for the transfer of personal data from the EU to the US, has repercussions for the cloud computing sector. The article “Mitigating the risk of US surveillance for public sector services in the cloud” by Jockum Hildén analyses the intricate legal situation from the perspective of public authorities in EU member states, which are avid customers of US-incorporated cloud service providers. After reviewing the legal framework, the article documents how public authorities in the Netherlands and Sweden seek to mitigate the risks for cloud-based public sector services. The Dutch example shows how innovatively combining EU data protection law and public procurement rules helped to renegotiate the terms of service of a major cloud service provider. Nonetheless, the public sector would benefit from more legal certainty in its procurement and use of cloud-based services.

The contribution “Extraterritorial application of the GDPR: promoting European values or power?”, authored by Oskar Gstrein and Andrej Zwitter, explores the GDPR’s extraterritorial application and questions whether it genuinely promotes European values. The authors contend that it is somewhat counterintuitive to assume that the GDPR could wield that much authority abroad while large internet platforms accumulate virtually unrestrained socio-economic power. The authors question whether unilateralism as embodied in the GDPR can cope with transnational data flows, legal plurality and the complexity of the digital sphere. They advocate a more sustainable multilateral strategy that emphasises value-driven harmonisation, pointing to the modernised Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data (Convention 108+) of the Council of Europe as a better avenue to safeguard European values.

In his article “Governing the shadow of hierarchy: enhanced self-regulation in European data protection codes and certifications”, Rotem Medzini explores two types of regulatory intermediaries, namely monitoring and certification bodies. Set against political science and regulatory governance theories, this contribution looks at the arrangements that allow these private actors to act as regulatory intermediaries whereby they monitor codes and assess conformity with certifications. The author traces rigorously why these regulatory intermediaries have been introduced and how they have been shaped during the legislative process leading up to the GDPR. According to two case studies of codes of conduct and certification, rule-takers can only adopt codes and certifications that are pre-approved and intermediated by accredited private actors. In both cases the GDPR mixes enforced self-regulation—through accreditation and the ratification of criteria—with components of enhanced self-regulation—through regulatory intermediation.

Joining-up data ordering with rights-preserving governance

The recognition that data ought to be governed as a resource has inspired an entirely new strand of research into the conception and practices of data ordering and governance. Data governance here denotes any regime that attracts data in order to process it further and extract value from it, following the logic of that particular regime. Balázs Bodó, Alexandra Giannopoulou, Kristina Irion, and Heleen Janssen make a conceptual differentiation between three levels of data governance, i.e. the macro, meso, and micro level. Their contribution “Personal data ordering in context: the interaction of meso-level data governance regimes with macro frameworks” charts the interdependence between these levels and the consequences for the governance of personal data. An international comparison between macro-level regimes identifies distinct approaches to data ordering and governance in the EU, US and China. In the context of global competition the authors argue that meso-level data governance regimes determine the success or failure of the EU’s approach with its rights-oriented model. Providing legal recognition to “data intermediaries”, as put forward in the proposal for a Data Governance Act, could be a way to construct data governance regimes at the meso-level in support of the EU’s fundamental rights approach. Yet, if the EU comes to prioritise data ordering over value-preserving data governance, there is a risk that the fundamental rights preserving macro framework of the EU will be compromised.

As a corollary to the article above, Jan Zygmuntowski, Paul Nemitz, and Laura Zoboli take as a starting point “an ecosystem of trust” and inquire which data governance model creates conditions for data stewardship guided by European values and rights. Their contribution “Embedding European values in data governance: a case for public data commons” leverages science and technology studies, critical data studies and institutional economics in order to derive key conditions for the establishment of common European data spaces and a comprehensive data-sharing framework that serve the public interest. In doing so, the authors synthesise a rich body of literature into an analytical framework and a conceptual critique of four data governance models. Under the favoured model, which is conceptually linked to Ostrom’s (1990) common-pool resources framework, data becomes a common good that is protected collectively and governed by specific rules that safeguard European rights and values. Public data commons, the authors conclude, will serve the public interest in support of European technological sovereignty while increasing data sharing.

Value design in digital architectures

Internet governance implicates not only content regulation but also the frameworks that govern all layers of the internet’s communication model (e.g. Werbach, 2002). Digital architecture plays a critical role, as it embeds certain values that are in contestation between private and public actors and may ultimately enable or hinder policy implementation (e.g. Benkler, 2000). The contribution “Policy strategies for value-based technology standards” by Amelia Andersdotter and Lukasz Olejnik acknowledges this complex interplay and explores technical standardisation in Europe and the role of non-formal, industry-driven standards bodies, such as the World Wide Web Consortium (W3C), the Internet Engineering Task Force (IETF) or the Institute of Electrical and Electronics Engineers (IEEE). Starting from the European Commission’s formulation of “European values”, the authors carefully map the interdependencies between the enforcement of codified societal norms and industry priorities. They argue in particular that the EU should devote more resources towards absorbing already existing innovation and standardisation into its compliance mechanisms and that shaping standardisation with European values is only possible through the lens of a human-centric approach to technologies. The authors highlight the need for enhanced cooperation between standardisation bodies and regulators, as well as the difficulties associated with the interface of standard-setting and innovation.

In the article “Value Sensitive Design and power in socio-technical ecosystems”, Mattis Jacobs, Christian Kurtz, Judith Simon and Tilo Böhmann explore the role of Value Sensitive Design (VSD) as a valuable framework that allows technology creators to account for values in the design of technical artefacts. However, they argue, power distribution within the process of technology design can potentially hinder the approach. The authors thus identify four factors that contribute to determining the impact of power distribution on VSD, namely: the level of decentralisation of the ecosystem; whether VSD is applied at the core or periphery; temporality, that is to say when VSD can be exercised; and the phase of VSD (conceptual, empirical, and technical) in which power can be exercised. Adopting a power-sensitive ecosystem perspective, Jacobs and colleagues explain how technology projects that aim to keep values at heart should account for power. Here the authors recognise how new regulatory initiatives and oversight institutions can support VSD practitioners and even form the basis for a close cooperation that can reveal problematic practices of powerful actors in socio-technical ecosystems and thereby lay the foundation for further regulatory action.

European approach to artificial intelligence

Artificial intelligence (AI) has been one of the policy areas where the EU has newly positioned itself as a regulatory entrepreneur, seeking to promote a “European approach” that reaps the benefits of technological innovation while safeguarding fundamental rights and key values (European Commission, 2021a). Against the backdrop of these initiatives, the contribution by Nathalie Smuha, “Beyond the individual: governing AI’s societal harm”, argues for a more nuanced policy design that distinguishes different types of harm arising in the context of AI (individual, collective and societal). She enriches the current AI discourse by conceptualising AI’s societal harm in particular and argues for a shift in perspective beyond the individual, towards a regulatory approach that addresses AI’s effects on society at large. By making an analogy to environmental law and policy, Smuha identifies distinct “societal” mechanisms that the EU could employ in this context: public oversight mechanisms to increase accountability; public monitoring mechanisms to ensure independent information gathering about AI’s societal impact; and procedural rights with a societal dimension, including a right of access to information, access to justice, and participation in public decision-making on AI, regardless of the demonstration of individual harm.

In “What rights matter? Examining the place of social rights in the EU’s artificial intelligence policy debate”, Jędrzej Niklas and Lina Dencik explore the role of social rights in the EU debate on AI policy. The commitment to “European” values, they argue, rarely results in consistent policy frameworks. However, new policy areas, such as AI policy, allow us to take a closer look at what concerns, interests and priorities shape the European project today. The authors embark on a systematic analysis of the submissions to the public consultation on the EU White Paper on AI, open to citizens and stakeholders in 2020. They find that social rights occupy a marginal position in the EU’s policy debate on emerging technologies, whereas human rights, with an emphasis on individual privacy and non-discrimination, take centre stage. These concerns are often translated into design solutions or procedural safeguards and a commitment to market creation—at the expense of key questions of economic inequality and redistribution.

Concluding remarks

The articles in this special issue testify to the complexities of data governance in the public interest. As the current debate about the GDPR shows, its extraterritorial enforcement and effectiveness are controversial and its transnational appeal as a regulatory model has not been conclusively established so far. Yet, the GDPR is only one piece of the data governance puzzle that the EU must solve in a way that safeguards its core values and establishes it as a “regulatory entrepreneur” able to put in place functioning legal frameworks along the spectrum from hard law to softer co- and self-regulatory instruments. This EU project is ongoing and there are several important legal initiatives in the pipeline, such as the proposals for a Digital Services Act, a Digital Markets Act, a European Data Act, and an Artificial Intelligence Act, all of which require regulatory designs that are robust enough to address the conundrum sketched in the first part of this editorial. How to uphold and implement European values in a complex and competitive international landscape characterised by rapid technological developments is a crucial concern for policy-makers and societies, and a highly relevant area for further investigation.

When passing new legislation the EU should not only clearly conceptualise digital sovereignty but also define forward-looking strategies that can deal with transnational configurations of actors, digital infrastructures and algorithms, as well as current geopolitical and sustainability challenges. This requires sustained accompanying efforts to cultivate value-sensitive design at the level of digital technologies, to reduce asymmetries of power and knowledge between providers and users, and to generate acceptance of and compliance with a shared set of values throughout digital ecosystems. At the same time, other actors from business and civil society should also play an active role in contributing to European values and to setting standards. Public scrutiny by academics and civil society has shown enormous potential to hold providers of digital technology accountable, which needs to be recognised and strengthened. This will continue to be an area of contestation given that scalable alternative business models and decentralised digital infrastructures, also feasible for the many smaller-scale entities in the EU, are not yet available. With this special issue we seek to advance a research agenda that harnesses multidisciplinary research to re-conceptualise and ground public interest governance of transnational digital technologies in Europe and beyond.

Acknowledgements

This special issue would not have been possible without the expert reviewers who provided invaluable comments on the draft papers, as well as the discussants and other participants reflecting on the drafts during the international workshop. We are deeply grateful to the managing editor of Internet Policy Review for his relentless support throughout the production of this special issue.

The international workshop and this special issue received financial support from the Amsterdam Centre for European Studies (ACES) at the University of Amsterdam, as well as the research project “The Governance of Big Data in Trade Agreements” at the University of Lucerne, sponsored by the Swiss National Science Foundation.

References

Benkler, Y. (2000). From consumers to users: Shifting the deeper structures of regulation toward sustainable commons and user access. Federal Communications Law Journal, 52(3), 561–579. http://www.fclj.org/wp-content/uploads/2013/01/benkler1.pdf

Bradford, A. (2020). Digital Economy. In The Brussels Effect: How the European Union Rules the World. Oxford University Press. https://doi.org/10.1093/oso/9780190088583.003.0006

Burri, M. (2021a). Interfacing Privacy and Trade. Case Western Reserve Journal of International Law, 53(1), 35. https://scholarlycommons.law.case.edu/jil/vol53/iss1/5

Burri, M. (Ed.). (2021b). Big Data and Global Trade Law (1st ed.). Cambridge University Press. https://doi.org/10.1017/9781108919234

Castells, M. (2010). The Rise of the Network Society (2nd ed.). Blackwell.

Cennamo, C., & Sokol, D. D. (2021, March). Can the EU regulate platforms without stifling innovation. Harvard Business Review. https://hbr.org/2021/03/can-the-eu-regulate-platforms-without-stifling-innovation

Ciulli, F., & Kolk, A. (2019). Incumbents and business model innovation for the sharing economy: Implications for sustainability. Journal of Cleaner Production, 214, 995–1010. https://doi.org/10.1016/j.jclepro.2018.12.295

Cohen, J. E. (2019). Between Truth and Power: The Legal Constructions of Informational Capitalism. Oxford University Press. https://doi.org/10.1093/oso/9780190246693.001.0001

DeNardis, L. (2014). The global war for internet governance. Yale University Press. https://doi.org/10.12987/yale/9780300181357.001.0001

European Commission. (2021a). Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions 2030 Digital Compass: The European way for the Digital Decade, COM/2021/118 final.

European Commission. (2021b). Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain Union legislative acts, COM(2021) 206 final.

Farrell, H., & Newman, A. L. (2019). Of privacy and power: The transatlantic struggle over freedom and security. Princeton University Press.

Hamelink, C. J. (1994). The Politics of World Communication. Sage Publishing.

Henke, N., Bughin, J., Chui, M., Manyika, J., Saleh, T., Wiseman, B., & Sethupathy, G. (2016). The Age of Analytics: Competing in a Data-Driven World [Report]. McKinsey Global Institute. https://www.mckinsey.com/business-functions/mckinsey-analytics/our-insights/the-age-of-analytics-competing-in-a-data-driven-world

Irion, K. (2015). Accountability unchained: Bulk data retention, preemptive surveillance, and transatlantic data protection. In M. Rotenberg, J. Scott, & J. Horwitz (Eds.), Privacy in the modern age: The search for solutions (pp. 78–92). The New Press.

Irion, K. (2020, July 24). Schrems II and Surveillance: Third Countries’ National Security Powers in the Purview of EU Law [Blog post]. European Law Blog. https://europeanlawblog.eu/2020/07/24/schrems-ii-and-surveillance-third-countries-national-security-powers-in-the-purview-of-eu-law/

Irion, K. (2021). Panta Rhei: A European Perspective on Ensuring a High Level of Protection of Human Rights in a World in Which Everything Flows. In M. Burri (Ed.), Big Data and Global Trade Law (pp. 231–242). CUP.

Jacobides, M. G., Bruncko, M., & Langen, R. (2020). Regulating Big Tech in Europe: Why, so what, and how understanding their business models and ecosystems can make a difference [White Paper]. Evolution Ltd. https://www.evolutionltd.net/post/regulating-big-tech-in-europe

Kalff, D., & Renda, A. (2019). Hidden Treasures. Mapping Europe’s sources of competitive advantage in doing business. CEPS.

Kira, B., Sinha, V., & Srinivasan, S. (2021). Regulating digital ecosystems: Bridging the gap between competition policy and data protection. Industrial and Corporate Change. https://doi.org/10.1093/icc/dtab053

O’Hara, K., & Hall, W. (2020). Four internets. Communications of the ACM, 63(3), 28–30. https://doi.org/10.1145/3341722

Ostrom, E. (1990). Governing the Commons: The Evolution of Institutions for Collective Action (p. 280). Cambridge University Press. https://doi.org/10.1017/CBO9780511807763

Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Harvard University Press.

van Dijck, J., Poell, T., & de Waal, M. (2018). The Platform Society. Oxford University Press. https://doi.org/10.1093/oso/9780190889760.001.0001

von der Leyen, U. (2020, September 16). State of the Union Address by President von der Leyen at the European Parliament Plenary. European Commission, Press Corner. https://ec.europa.eu/commission/presscorner/detail/en/SPEECH_20_1655

Werbach, K. (2002). A Layered Model for Internet Policy. Journal of Telecommunications and High Technology Law, 1, 37–67. http://jthtl.org/content/articles/V1I1/JTHTLv1i1_Werbach.PDF

Yakovleva, S., & Irion, K. (2020). Pitching trade against privacy: Reconciling EU governance of personal data flows with external trade. International Data Privacy Law, 10(3), 201–221. https://doi.org/10.1093/idpl/ipaa003

Zalnieriute, M., & Milan, S. (2019). Internet Architecture and Human Rights: Beyond the Human Rights Gap: Internet Architecture and Human Rights. Policy & Internet, 11(1), 6–15. https://doi.org/10.1002/poi3.200

Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. Profile Books.

Personal data ordering in context: the interaction of meso-level data governance regimes with macro frameworks


This paper is part of Governing “European values” inside data flows, a special issue of Internet Policy Review guest-edited by Kristina Irion, Mira Burri, Ans Kolk, Stefania Milan.

Note: all authors contributed equally to the development of ideas and to the writing of this article.

1. Introduction

Data can be extracted and processed by private parties and governments at unprecedented scale, speed and efficiency. The fate of such data is under intense debate, which takes place at multiple levels: from individual, micro-level strategies, via the meso-level approaches experimented with by data-sharing organisations such as firms and municipalities, to how countries, competing at the global level, define strategic frameworks around data at the macro-level. Although data is not entirely lawless, there is much uncharted terrain, opening spaces for competing logics of data governance.

1.1. Levels of data strategies: micro-, meso- and macro-level approaches

While the production, use and trade of data may seem opaque at best and chaotic at worst, they are certainly not without structure. In the last decade a number of different data governance models have emerged, both at the macro-level and at the more context-specific meso-level. At the macro-level, there are substantial political differences between the United States (US), the European Union (EU), and, for example, China about the role data is envisioned to play in the economy, or in the organisation of the social-political order (e.g. Aaronson and Leblond, 2018; O’Hara and Hall, 2018; Goldfarb and Trefler, 2018). These differences play out in the political, legal and economic frameworks that define (personal) data governance at the macro-level, such as the EU General Data Protection Regulation (GDPR) (Granger and Irion, 2019), the piecemeal, sector-specific, but generally business-friendly approach which characterises the US (Chander, 2014), or the Chinese approach which harnesses its social credit system as a disciplinary mechanism (Backer, 2019; Mac Síthigh and Siems, 2019).

At the meso-level, there is considerable variation in the technical, legal and normative frameworks that govern the production, extraction and exploitation of data. Different firms, industries, governments and municipalities, and a diverse group of techno-legally driven communities have come up with their own data governance practices, frameworks and technologies, such as data sharing agreements, data trusts and cooperatives, or distributed ledgers and personal data stores. The large variation between approaches to governing data can be attributed to the field being relatively nascent, and to the fact that ‘good’ governance of data (Mann, Devitt, and Daly, 2019) depends on the highly specific conditions in which data is being extracted, used and traded. This paper looks at data from a broad perspective and interrogates how different meso-level data governance regimes develop in the context of their macro-environment.

Various stakeholders have defined their own approaches to how they organise their data-related practices. The bulk of meso-level governance regimes were developed by economic actors, often before any overarching macro-level framework emerged, and are shaped by technical capacities and business interests. A second set of data governance logics emerged in the public sector. Making public sector information available to the public in general, and for commercial uses in particular, has released large caches of information with relatively few restrictions. Last but not least, a number of governance models, such as distributed ledgers or data commons, have emerged as counter-practices, defined in opposition to dominant public or private data regimes. Some of these counter-initiatives are heavily technological in nature, such as individual data control technologies developed by crypto-libertarian communities.

By now, we may have entered the next stage of consolidation, where economic, geopolitical and ideological differences over data play out and are contested to the point where more successful data governance frameworks crowd out others. We argue that this consolidation process is also a product of the interaction between vertical layers of data governance: macro-level political regimes can favour particular meso-level strategies at the expense of others, while pressure from organisations operating at the meso-level influences macro-level legal frameworks on data.

1.2. Moving forward at macro-level is largely shaped by meso-level approaches to data

Despite all this variety, the dominant meso-level practices seem to suffer from serious shortcomings, independent of how the data is being treated. On the one hand, the problems with the dominant data appropriation logics are well known. A substantial part of our social and economic interactions takes place within often private, largely inaccessible and opaque technological and business-driven ecosystems. The fact that this happens at scale creates immense social, economic and political power, and information asymmetries between those who control data and other businesses, governments, individuals and communities. On the other hand, even in those cases where data is on the move and widely traded, serious issues have emerged. The current macro-level data governance frameworks for data trade have largely failed to produce transparent and functioning data markets. It is nearly impossible to ensure that individual rights are not breached in the course of, or as a result of, such transactions, and there are indications of irregular and shady data markets, while regular practices of data sharing and trade may remain underdeveloped. In short, the current macro-level data governance regimes produce inadequate results both when the data is static and when it is the subject-matter of transactions.

1.3. Research objective and plan

The hypothesis of this work is that any solution to the aforementioned issues must appear as an alternative data governance logic at the meso-level. European policymakers, public and private sector organisations and civil society have to focus on exactly this data governance space between macro-level data governance frameworks and data producers, because this is where the different logics and visions of data ordering and governance compete for social, political and economic recognition, adoption and success. The research presented here was carried out to underpin this claim. This article uses socio-political-legal research methods and a literature review to interrogate the interaction between macro- and meso-level governance of personal data. That said, we do not attempt to conceptualise an all-encompassing global data governance framework. Our discussion paints a deliberately limited picture of macro- and meso-level data governance regimes, as our work focuses on personal data protection and flows thereof.

The article is structured as follows. In section 2, we introduce the leading macro-level regimes that govern personal data and their interactions, with a special focus on the EU approach. In section 3, we turn to the discussion of meso-level data governance frameworks. We start by spelling out the expectations vis-à-vis a good enough data governance framework, and then match the currently competing alternatives against this background. In section 4, we conclude with an analysis of how the macro- and meso-level frameworks may interact, so that the outcome of the competition at the meso-level results in successful governance frameworks that closely map the characteristics of good enough data governance.

2. Macro-level approaches to governing personal data

In the digital era, data has emerged as a key asset in the global economic competition among world powers. Macro-level regimes on personal data, in particular, catalyse ideological differences. On the one hand, the treatment of personal data carries the often pre-digital social, economic and political conditions which produced data-related regulation in the past. On the other hand, national and regional ambitions, strategies and priorities in the global competition for economic power, political hegemony and innovation play out in macro-level approaches to personal data.

Whilst the interconnectedness of the global digital ecosystem generates increasing interdependence between countries and regions, distinct approaches to data ordering and governance remain. It has been argued that the US, China, and the EU have constructed contrasting data realms (Aaronson and Leblond, 2018), in which domestic legal traditions and variations of capitalism have configured distinct approaches to data governance. O’Hara and Hall (2018) label the US approach libertarian and commercial, that of China authoritarian, and that of the EU, which emphasises human dignity, as—indeed—“bourgeois”, whereby this framing implies a certain fallacy, namely that the EU tries to compete on ethics and values instead of unleashing the economic power of data.

Goldfarb and Trefler (2018), who attest to a fundamental regulatory tension between countries’ approaches to data, see a comparative economic and innovation advantage for countries with a lax regulatory framework for data. Strict data privacy protection, for example, is often considered fundamentally at odds with the insatiable appetite of big data and machine learning applications for exactly that data (O’Hara and Hall, 2018). Yet, in championing fundamental rights, the EU is largely regulating digital platforms and online services that are supplied from outside the EU, notably from the US. This export of EU rules is contested by political and commercial stakeholders who emphasise digital innovation and the free flow of (personal) data, which can help evade claims of authority and jurisdiction (Irion, 2021).

We now provide a brief overview of these approaches to personal data governance in the EU, the US and China in order to highlight the dynamics between the macro- and meso-levels. On the one hand, macro-level legal regimes pre-structure meso-level data governance by public and private entities in these jurisdictions. On the other hand, stakeholders seek to influence macro-level outcomes that endorse their preferred meso-level approach to data governance. This tension is most explicit in the EU context of personal data rules and its contestation.

2.1. EU macro-level data governance approach

Europe conceives of the digital world through its history and commands respect for fundamental rights and European values as a basis for forging trust in the digital transformations that European societies are undergoing. To the European Commission (2020a), “this digital Europe should reflect the best of Europe - open, fair, diverse, democratic, and confident”. EU policy on data seeks to design governance models that enable regulators, industries, communities and others to process data for their own and/or others’ interests, in line with democratic standards, the rule of law, and societal needs more generally. The EU envisions trustworthy, responsible and human-centric data governance, subject to full compliance with the EU’s strict data protection rules, while enabling data to foster innovation and drive economic growth (European Commission, 2020a).

2.1.1. The EU mixed approach: fundamental rights and free flow of data

The data strategies of the EU and its member states (European Commission 2020a) put data at the centre of the digital transformation. As the European data strategy (ibid.) stipulates: “In order to release Europe’s potential, we have to find our European way, balancing the flow and wide use of data, while preserving high privacy, security, safety and ethical standards”. In broad strokes, the macro-level approach of the EU is characterised by strong fundamental rights safeguards, an EU internal market in which data can circulate freely and, increasingly, data sharing obligations either for specific types of data or sectors. In the following, we will briefly revisit the main features of European data law, which pre-structure the data governance regimes in Europe.

2.1.2. Fundamental rights approach to personal data

Article 8 of the EU Charter of Fundamental Rights enshrines the fundamental right to data protection, among a range of other fundamental rights, such as the right to privacy, non-discrimination, or freedom of expression. The right to data protection is given further substance at ordinary EU legislative level in the General Data Protection Regulation (GDPR; Regulation (EU) 2016/679), which guarantees fundamental rights of individuals while contributing to the EU’s internal market objective. The GDPR guarantees a high level of personal data protection by offering individuals transparency and mechanisms to control the processing of their data, including rights pertaining to their data, while imposing a range of obligations and responsibilities on those who determine the purposes and means of the processing of personal data. The GDPR applies across nearly all sectors in society, both public and private.

As a key concept, Article 4(1) of the GDPR defines personal data as “any information relating to an identified or identifiable natural person (‘data subject’)”. The tendency to apply a broad interpretation in defining personal data is aligned with the CJEU’s repeated affirmations of “ensuring effective and complete control of data subjects”, which is the aim of data protection law (CJEU, 2014; Irion, 2016). However, the concept of personal data “comes with considerable legal uncertainty” (Drexl, 2019), as it relates to another unclear concept—namely that of identifiability. As the contours of the concept of identifiability remain foggy, it has been claimed that this broad interpretation of personal data—even if welcome—could lead to all data being considered as personal, inherently triggering the application of data protection law (Purtova, 2018). This regime can apply simultaneously to other data types, such as machine-generated data, public sector data, or derivative data, alongside other data regulatory regimes. As a result, the distinction between personal and non-personal data is far from clear-cut (Finck and Pallas, 2020).
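To illustrate how broad identifiability can be in practice, the following minimal Python sketch (our own illustration, not drawn from the article or from any particular case) shows why replacing a direct identifier with a hash, a common form of pseudonymisation, does not necessarily take records outside the concept of personal data: anyone holding, or able to reconstruct, the mapping from identifiers to hashes can link the records back to individuals. All names in the snippet are hypothetical.

```python
import hashlib

# Hypothetical pseudonymisation: replace a direct identifier (an email
# address) with its SHA-256 hash.
def pseudonymise(email: str) -> str:
    return hashlib.sha256(email.lower().encode()).hexdigest()

records = [
    {"id": pseudonymise("alice@example.com"), "visits": 12},
    {"id": pseudonymise("bob@example.com"), "visits": 3},
]

# A party holding a list of plausible email addresses can rebuild the
# mapping and re-link the "pseudonymous" records to individuals, which is
# why such records may still relate to an identifiable person.
candidates = ["alice@example.com", "carol@example.com", "bob@example.com"]
lookup = {pseudonymise(e): e for e in candidates}

for record in records:
    print(lookup.get(record["id"], "<not re-identified>"), record["visits"])
```

The point is illustrative only: whether pseudonymised data counts as personal data ultimately turns on the legal test of identifiability discussed above.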

2.1.3. Open data and data sharing

Open data and data sharing are a central policy objective in the EU framework. The re-use of public sector data, including data held by public undertakings and data from publicly funded research, is regulated by the Open Data Directive (Directive (EU) 2019/1024). Data sharing in the private sector is based on emerging, circumstantial or sector-specific arrangements, addressing for example payment service providers, electricity network data or agricultural data. New initiatives by the European Commission (2020a) aim to extend this model to the establishment of European data spaces, where data can seamlessly flow across sectors and domains, in compliance with EU norms.

2.1.4. Digital sovereignty

Recently, EU policy has become more concerned with digital sovereignty, which refers to Europe’s ability to act independently in the global digital environment (Madiega, 2020; Roberts et al., 2021). The EU and its member states highlight many digital sovereignty issues across domains and sectors, such as computing power, control over EU data and secure connectivity. The EU ‘Strategy for Data’ (European Commission, 2020a) stresses the link between digital sovereignty and jurisdiction. In the globally interconnected digital ecosystem, there are increasingly competing claims of jurisdiction, for example with non-EU companies, stemming from extraterritorial disclosure requests by third country governments (Irion, 2012; Madiega, 2020). At the EU level, there is now more attention to jurisdiction in the cross-border supply of digital services, especially in the field of cloud services (European Union, 2020; Roberts et al., 2021). Meanwhile, European policy is still inconclusive as to how better recognition for data sovereignty can be reconciled with cross-border data flows and digital trade.

2.2. US macro-level data governance approach

In contrast to the EU approach, the US approach lacks a federal-level, comprehensive data protection regulation for the private sector (Chander, Kaminski, and McGeveran, 2021). Instead, federal law on privacy is specific to particular sectors and activities, such as the Children’s Online Privacy Protection Act (COPPA), the Gramm-Leach-Bliley Act (GLBA) concerning data collected by the financial services industry, the Fair Credit Reporting Act (FCRA) concerning credit data, and the Health Insurance Portability and Accountability Act (HIPAA) protecting health information. This sectoral regulation (instead of overarching federal legal provisions), the fact that the federal legislative branch has largely been weakened, and the (partly) ‘lenient’ legislative approach to data privacy are often seen as drivers that have facilitated the emergence of a multibillion-dollar data industry in just a few years (Chander, 2014; Willis et al., 2018).

The outcome is a particular techno-legal hybrid, in which lenient data privacy legislation permits sophisticated private regimes that maximise value extraction from personal data. Consider, for example, the growth of marketing technology firms from 150 in 2011 to more than 8,000 in 2020 (Brinker, 2020). This industry grows on, enables, and profits almost exclusively from the extraction, analysis and sharing of data. Its highly sophisticated technological solutions move around and monetise data with extreme efficiency. This is possible because there are few legal hurdles hindering such data flows. From the beginning there have been few restrictions on the export of personal data, and such rules as do exist have often been rather malleable to business needs.

In the international context, the US business-led paradigm on personal data has not only established a commercial stronghold domestically but has also been able to expand at a global scale. Online platforms, which have become paradigmatic of today’s digital ecosystem, testify to the powerful economies of scale and scope that can be built on data. Political scientists caution against the concentration of data in very large corporations that can scale up their data-based operations for their private benefit (Spiekermann et al., 2019). The European Commission likewise states that “in the US, the organisation of the data space is left to the private sector, with considerable concentration effects” (European Commission, 2020a). This concentration of centrally held data in the walled gardens of large internet companies (e.g. online platforms) is increasingly seen as an impediment to open competition in a global, data-agile economy (European Commission, 2020a).

In recent years, there has been a surge of data privacy laws at the state level, with the 2018 adoption of the California Consumer Privacy Act (CCPA) being the most influential legislative initiative (Chabinsky and Pittman, 2020; Chander, Kaminski, and McGeveran, 2021). State legislation on data privacy is believed to spur the US legislature’s efforts to pass a federal consumer data privacy law that would in turn preempt state laws. The latest developments signify a re-evaluation of an individual’s right to data privacy in commercial settings; however, such legislation is not believed to go as far as the EU’s GDPR (Chander, Kaminski, and McGeveran, 2021).

2.3. China’s macro-level approach to personal data

The Chinese macro-level approach to data is currently an inward-facing regime, which combines private sector interests with the coercive powers of an authoritarian state. The Chinese social credit system epitomises the inward-facing character of a data regime that serves as a totalitarian, reputation-based control of all aspects of life of a billion-plus population (Mac Síthigh and Siems, 2019). The system is based on the reputation ratings of individuals, businesses, and public and private institutions, which are aggregated through a tightly controlled cooperation of public and private entities. Positive and negative ratings accumulate through good or bad behaviour, such as blood donations or late payment of bills, purchases of liquor or contentious books, customer satisfaction, and regime-critical or supportive social media posts. These ratings are then used to regulate access to a growing number of private and public services, from childcare to high-speed travel and even low interest rates. This approach substitutes “governance through measurement, assessment, and reward for obligation to obey the command of statute, regulation, or administrative decision” (Backer, 2019, p. 210).

The Chinese social credit system prioritises social control, communal interests, integrity, transparency, and accountability at the expense of the privacy and personal autonomy of the individual. It is argued that the system rewards honesty with economic opportunities, such as financial credit, while the blacklists may encourage individuals, such as debtors, to comply with court judgments (Mac Síthigh and Siems, 2019). All the while, the same system is purportedly designed to keep government officials impartial, transparent and accountable. This use of data serves to reinforce and enforce norms that already exist in the country’s legal and extralegal norm system, and to address the shortcomings and inefficiencies of traditional state institutions (Dai, 2020). In a more sceptical interpretation, it extends and reinforces the powers of an authoritarian state over all aspects of the individual and the social.

In addition, China’s 2017 Cybersecurity Law imposes several restrictions which aim to safeguard cybersecurity, cyberspace sovereignty and national security (Gao, 2021). Among others, this law requires operators of critical information infrastructure to locally store personal information and important data collected and generated in their operations within China (ibid.). What constitutes critical information infrastructure is broadly defined and covers many online activities. As a result, most personal data collected by Chinese online operators has to be stored locally in China. That means that Chinese operators can receive personal data in the context of their domestic and international activities, but this data cannot leave China without government permission. In late 2020, the Chinese government published the draft of a new comprehensive data protection law. This draft contains several GDPR-style principles, such as transparency, fairness, purpose limitation, data minimisation, limited retention, data accuracy and accountability (Yin and Zhang, 2020).
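Purely as an illustration of how the localisation rule described above operates, and simplifying the legal position considerably, the following Python sketch expresses it as a pre-transfer check; the data structure and function names are hypothetical and not drawn from the law or the cited sources.

```python
from dataclasses import dataclass

@dataclass
class TransferRequest:
    operator_is_critical_infrastructure: bool  # broadly defined under the 2017 law
    data_collected_in_china: bool
    destination_country: str                   # e.g. "CN", "DE"
    has_government_permission: bool

def transfer_allowed(req: TransferRequest) -> bool:
    """Toy approximation of the localisation rule sketched in the text."""
    if req.destination_country == "CN":
        return True  # storing and processing within China is the default
    if req.operator_is_critical_infrastructure and req.data_collected_in_china:
        # Cross-border transfer only with explicit government permission.
        return req.has_government_permission
    return True  # operators outside the (broad) scope are not covered by this toy check

print(transfer_allowed(TransferRequest(True, True, "DE", False)))  # False
print(transfer_allowed(TransferRequest(True, True, "DE", True)))   # True
```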

2.4. EU’s interface with other macro-level data governance approaches

In the presence of a global digital ecosystem, the EU regime on personal data does not operate in isolation but co-exists and interacts with legal regimes in other parts of the world. Data can easily be moved across EU borders and regions, and be stored and processed in a decentralised manner, while being accumulated and oftentimes turned into a “proprietary” resource. There are different approaches to governing personal data and the flow of data across borders.

The GDPR has been heralded as a successful global standard-setter, rendering it an often-cited example of the so-called “Brussels effect” (Bradford, 2012; Gady, 2014). Clearly the GDPR has inspired data protection legislation elsewhere in the world (Greenleaf, 2012); however, the EU approach has also been contested for its procedural formalism in protecting personal data and for a certain lack of effective enforcement (Bamberger and Mulligan, 2011; Granger and Irion, 2018). It is unlikely that the US approach to consumer data privacy will converge with the EU’s fundamental rights approach to personal data protection; the US is rather believed to incubate its own template for consumer privacy protection (Chander, Kaminski, and McGeveran, 2021). Likewise, the Chinese initiative to introduce better protection for personal data in the private sector is not about individuals’ empowerment and fundamental rights, as it does not aim to reduce the government’s control over all publicly and privately held personal data.

It turns out that the interaction between different legal regimes at the macro-level also matters for data governance, given that certain approaches are clearly designed to extract data from other regimes whenever possible. The US data governance framework is the most open, bearing in mind, however, that the US dominates the commercial internet and has incubated the platform economy. The EU would be regarded as semi-open because it seeks to maintain the protection of fundamental rights by placing conditions on the export of personal data (European Commission, 2017). China, by contrast, treats the personal data its local digital technology companies have gathered as a national resource. This creates particular dynamics among these jurisdictions: the US private sector still benefits most from the flow of personal data across borders, China does not partake and instead aims to incubate its own digital champions, and the EU struggles to reconcile its high level of personal data protection with cross-border data flows.

In this context, the semi-open EU data protection law appears rather exposed and inconsequential because it has not yet forged usable legal interfaces with other data realms that would prevent circumvention of its rules. For meso-level data governance approaches embedded in EU data law, incubating good enough data governance practices has been a challenge, just like anywhere else. It is possible that the macro-level governance of personal data in the EU does not stimulate enough meso-level approaches that internalise the protection of personal data differently than the prevailing business logics do.

3. Meso-level approaches to govern data

Technological innovation has yielded highly sophisticated technological infrastructures to collect, store, analyse and trade vast amounts of data and their derivatives. These developments have fuelled a rapid, parallel innovation of meso-level data governance approaches. Though still fluid and dynamic, meso-level governance practices have started to coalesce around a number of basic models: (1) business and platform logics to data governance; (2) public sector information logics; (3) technological and legal mechanisms that seek to empower individuals; and (4) community-based data logics.

In this section, these models will be introduced, compared and assessed from a ‘good governance’ perspective. The section starts from the assumption that the approaches are in essence competing with each other for adoption by data subjects, businesses and communities. Their success in this process depends on a number of factors: compliance with the EU macro-framework, cost, ease of use, and/or efficiency. But, as we spell out in this section, good data governance is more than that; as the US macro framework shows, if data governance approaches compete nearly exclusively on business efficiency terms, the most successful approach may not be the most socially beneficial, just or desirable one. We first present a tentative list of desirable data governance properties, based on the relevant literature. Then we introduce the four competing models and compare them against these properties.

3.1. Some dimensions of good enough data governance

There is to date no generally accepted, comprehensive list of requirements for good enough data governance, but there is an emerging body of literature that emphasises certain goals and practices speaking to the quality of data governance regimes (Daly et al., 2019; Hardinges et al., 2019; Hardjono, Shrier, and Pentland, 2019; Langford et al., 2020; Mozilla Insights et al., 2020). These include:

  • Safeguards of normative interests. Good data governance must ensure the adequate protection of fundamental rights (privacy, non-discrimination, freedom of speech, the right to an effective remedy and access to a court, etc.) (Trenham and Steer, 2019) and of other public interest objectives, and safeguard equitable benefits from the use of data.
  • Minimise negative and maximise positive externalities to individuals, communities, and society as a whole. Data governance should contribute to a just distribution of the power and benefits generated from the use of the data. According to Lovett et al. (2019), this requires that data governance be respectful of particular cultural and social sensitivities, especially when data can be linked to well-defined groups and communities based on ethnicity, language, religious beliefs, or other elements of shared identity. We would add that these considerations are readily applicable to many other, western forms of community and social organisation.
  • Scalability and interoperability. Data governance must be able to scale according to the number of data subjects, data users, and the amount of data (Mozilla Insights et al., 2020). If the model does not scale well, the cost of shifting between different governance models must not be prohibitive. In general, different data governance models need to be interoperable and must not limit the choice and mobility of various stakeholders.
  • Context-sensitivity and sectoral fit. Data governance models should reflect the specific limitations, concerns, and sensitivities of the context as defined by the data subjects, the data in question, or the potential data uses. Non-personal, machine-generated data in the energy sector may require a different data governance regime than, say, the learning analytics data of children from vulnerable groups.
  • Proportional and transparent trade-offs between risks and benefits. Every choice between different governance models, and every decision taken within one, may lead to unforeseen harm and to uncertainty regarding benefits. Risk impact assessments must establish transparency about how potential harms and benefits are distributed across stakeholders and define standards of acceptable and unacceptable risk. Effective and proportional mitigating measures need to be in place to address negative effects and preserve trust in the governance model.
  • Transnational capacity. Transnational data flows and jurisdictional data sovereignty are contentious economic and political issues. While transnational capacity should preclude practices of data extraction, it should not stand in the way of global interconnectivity and international participation in value creation from the data. For example, data governance should ensure that data can be queried and used in ways that contribute to value creation without the data itself being transferred to, sold to or shared with third parties (Hardjono, Shrier and Pentland, 2019); a minimal sketch of this pattern follows this list.
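
To make the transnational capacity property concrete, here is a minimal sketch, in Python, of the query-without-transfer pattern: a data holder answers approved aggregate queries in place and returns only the result, so the raw records never leave its custody. The DataEnclave class and its method names are hypothetical illustrations, not part of any existing framework.

    from statistics import mean

    class DataEnclave:
        """Hypothetical data holder that answers queries without exporting raw records."""

        def __init__(self, records: list[dict]):
            self._records = records             # raw data stays inside the enclave
            self._allowed = {"mean", "count"}   # only aggregate queries are permitted

        def query(self, kind: str, field: str) -> float:
            """Run an approved aggregate query in place and return only the result."""
            if kind not in self._allowed:
                raise PermissionError(f"query type '{kind}' is not permitted")
            values = [r[field] for r in self._records if field in r]
            if kind == "count":
                return float(len(values))
            return float(mean(values))

    # Usage: the data user learns an aggregate, never the underlying records.
    enclave = DataEnclave([{"usage_kwh": 12.4}, {"usage_kwh": 9.8}, {"usage_kwh": 11.1}])
    print(enclave.query("mean", "usage_kwh"))   # ~11.1
    print(enclave.query("count", "usage_kwh"))  # 3.0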

These data governance specific considerations must be provided for by the institutional design of governance itself. Even if data collection and use take place within the walled gardens of a corporate data controller, similar internal data governance mechanisms must be in place to comply with, at a minimum, that controller’s internal rules, the GDPR and other applicable rights. For data intermediaries and open technical infrastructures, the question of governance is equally, and perhaps even more directly, relevant, whether at the data or at the technology level.

3.2. Competing logics of data governance

Having identified a set of requirements for good enough data governance, we now compare and evaluate the competing governance models. The meso-level data governance models have been grouped according to their main logic: (1) business and platform logics to data governance; (2) public sector information logics; (3) technological and legal mechanisms that seek to empower individuals; and (4) community-based data logics.

3.2.1. Business and platform logics to data governance

Most of the data practices that have been successful in terms of business performance evolved in the US context (Varian and Shapiro, 1999). The already powerful credit rating and marketing industries provided the templates for monetising the newly discovered forms of digital data (Lauer, 2017). Emanating from the US, a particular data-driven business logic took hold that treats personal data as an asset or resource from which to extract maximum profitable value. It developed out of a res nullius perspective, in which societal and individual normative interests and safeguards, as well as sensitivity to the context of particular societal sectors, are largely absent. Businesses prefer governance models that help maximise such value extraction, optimise accuracy, enhance the speed of processing, and enlarge the computational resources that enable them to use (potentially rights-invasive) techniques such as automated decision-making or AI. These governance models are designed to deliver services in a highly competitive market (Baumer, 2014), and are supported by legitimised business secrets that help secure and further strengthen the market and information position of businesses vis-à-vis individual stakeholders in the data market (Janssen et al., 2020b). This business model has greatly contributed to the nearly unlimited growth of economically successful data-driven businesses and platforms in the US and across the globe.

EU-based businesses and platforms are largely driven by the same incentives, interests and logics as their US counterparts, but they have had to internalise the EU’s data protection framework. The GDPR forces them to comply with rights and interests of individuals that are external to their own logics. However, despite EU-wide uniform legal mechanisms intended to secure equitable data governance across the EU, platform and business logics have largely dominated meso-level data governance models during the last decade, also within the EU. Several causes have created the conditions for this dominance of business and platform logics; here, we mention the most important ones.

EU law, particularly when cast in the form of a regulation, generally aims at uniform application in order to secure a level playing field across the Union. For instance, the GDPR, the EU Charter of Fundamental Rights and the case law pertaining to these rights apply uniformly in all member states. However, given that the GDPR only became applicable in 2018, today’s practices have largely taken shape under its predecessor, the 1995 Data Protection Directive (DPD), and the attendant enforcement structures of national data protection authorities in the member states. Not all national regulators were stringent on DPD compliance and enforcement, which led platforms and businesses to settle where compliance and enforcement were implemented least strictly.

While the GDPR’s uniform norms gradually take hold, the legacy of national oversight mechanisms (the GDPR again tasks national bodies with oversight) still carries on. Data protection authorities in the member states may lack the means, in terms of expertise, staffing, access to an organisation’s intentions, motivations and behaviours, or insight into an organisation’s complex technical systems, to properly fulfil their oversight task. Notably, (foreign) businesses and platforms have calculated and settled in the member states with the ‘weakest’ or ‘most favourable’ regulator, which enlarges their opportunities to create their own data governance models, as the chances that the regulator will enforce strictly are minimal (Venkataramakrishnan, 2021).

The success of the now dominant business and platform logics of meso-level data governance, which are not bound by EU rules unless they operate under EU law, has caused concern among EU businesses that the EU’s rights- and values-driven governance frameworks are too restrictive, putting them at a competitive disadvantage compared to their US-based and Chinese competitors.

Where new space for data governance at the meso-level opens up—much of it under the pressure of opening up data markets and data sets held by public organisations—the current platform and business logics are likely to continue shaping and moulding meso-level data governance approaches. However, regulators, at least those in the EU, are increasingly wary of platform and business logics at the meso-level that underpin proprietary data concentration. The European Commission has recently tabled three bills (the Data Governance Act, the Digital Markets Act and the Digital Services Act), which aim to set new rules for digital and data-driven businesses.

3.2.2. Public sector information logics

Governments and the public sector are important producers of data, and it is in line with international best practice to release public sector information (PSI) for reuse. In its 2008 Recommendation, the Organisation for Economic Co-operation and Development (OECD) enshrines “openness as the default rule to facilitate access and re-use” of PSI (OECD, 2008). The rationale for setting PSI free is simple and compelling: “to increase returns on public investments in public sector information and increase economic and social benefits from better access and wider use and re-use, in particular through more efficient distribution, enhanced innovation and development of new uses” (ibid.).

Next to considerations that the public has a right to access information and that public sector data is an important resource that can benefit society, another argument for opening up public sector data is that such data are generated with public funds, meaning that they should not be kept exclusive and that no new charges should be levied for their re-use. The Open Data Directive, which stimulates the re-use of open data for commercial or non-commercial purposes, is aligned with the European fundamental rights framework. As is customary, the Open Data Directive is without prejudice to the GDPR, which protects individuals’ personal data. A similar logic has, by extension, been applied to publicly funded scientific research data, for which the Open Data Directive requires member states to adopt open access policies. A related European initiative is the European Open Science Cloud (EOSC), currently being developed with the help of EU funds in order to create an environment for hosting and processing research data pursuant to the FAIR principles (Findable, Accessible, Interoperable, Reusable) (European Commission, 2020b). The EOSC, which is still under construction, has been designated as one of the nine European Data Spaces envisioned by the European Data Strategy (European Commission, 2020a). Once fully operational, the EOSC too will be opened up beyond the research community and connected with the wider public and private sectors (European Commission, 2020a).

Considerations of scalability and interoperability are incorporated into the legal framework. The Open Data Directive seeks to enable access to and re-use of open data for all interested actors in the market, thereby giving recognition to the non-rival property of data. With this in mind, the Directive significantly limits the use of exclusive arrangements over access to data between public sector bodies or public undertakings and third parties. Also, the principle of ‘open by design and by default’ seeks to shift the mechanism of access to publicly held data away from having to make a request towards proactive release of such data. How this gets translated into practice depends on member states’ public sector bodies and public undertakings living by the principle of ‘open by design and by default’. The Directive gives due prominence to access “by electronic means, in formats that are open, machine-readable, accessible, findable and re-usable, together with their metadata” (European Parliament and the Council, 2019).
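
As an illustration of what such a machine-readable release can look like in practice, the snippet below builds a DCAT-flavoured metadata record for a hypothetical dataset in Python. This is a sketch rather than an official profile: the field names only loosely follow the DCAT vocabulary, and the publisher and URL are placeholders.

    import json

    # Illustrative, DCAT-flavoured metadata record accompanying an open dataset release.
    # Field names loosely follow the DCAT vocabulary; this is not a validated profile.
    dataset_metadata = {
        "title": "Municipal air quality measurements",
        "description": "Hourly NO2 and PM10 readings from fixed monitoring stations.",
        "license": "CC-BY-4.0",
        "publisher": "Example Municipality",                         # hypothetical publisher
        "distribution": [{
            "format": "CSV",                                         # open, machine-readable format
            "accessURL": "https://data.example.org/airquality.csv",  # placeholder URL
        }],
        "keyword": ["air quality", "environment", "open data"],
        "temporalResolution": "PT1H",
    }

    print(json.dumps(dataset_metadata, indent=2))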

The Open Data Directive is premised on the overwhelmingly positive feedback loop that open data generates in a data-driven economy and society. One critique, however, holds that open data policies disproportionately benefit those private actors that command the necessary capabilities to extract value from ‘big data’, and that this—echoing broader critiques of public data—promotes inequality (Spiekermann et al., 2019). Kitchin (2013), in his ‘Four critiques of open data initiatives’, argues that “the real agenda of business interested in open data is to get access to expensively produced data for no cost, whilst […] weakening [governments] position as the producer of such data”. Collington (2019, p. 8) argues that the costs of producing open data “fall largely on the public sector and society, but the surplus value so often comes to be realised by large digital platform companies and the financial services industry”. There is also a geopolitical argument to be made: EU open data are released to the world and not only to European taxpayers, so that beneficiaries in third countries do not contribute to European societies. Currently there is very little research available about value creation from open data and how it benefits European societies.

Moreover, while publicly funded data has to be opened up and released, private sector data is conventionally treated as an exclusive resource that is constitutionally protected under the freedom to conduct a business (Article 16, EU Charter of Fundamental Rights). However, there is a nascent school of thought highlighting that business-to-government data sharing should also be better enabled (High-Level Expert Group, 2020). Relatedly, what has become known as reverse PSI (Poullet, 2020) is the idea of introducing mandatory data sharing obligations on private sector actors for data that is of high interest to the public sector and society. The French Digital Republic Bill is a case in point, as it contains a list of privately held data which have to be shared with the public sector and disclosed as a public record.

3.2.3. Technology-based data governance logics

The technological toolbox, another approach shaping data governance at the meso-level, aims at facilitating individual data autonomy (Pohle and Thiel, 2020; Summa, 2020) and is rapidly expanding. The consistent objective and claim of these approaches is to empower individuals by giving them tools to manage their data and, ultimately, to achieve informational self-determination. In particular, decentralised technical tools, such as personal data stores (PDSs) and distributed ledgers, are gaining momentum, positioning themselves as novel meso-level socio-technical alternatives for new data governance strategies.

Whether stemming from private companies with commercial interests, from public actors and government initiatives, or from bottom-up community projects, distributed ledgers and related decentralised design approaches are becoming more established in the global data governance space. Similarly, technological solutions such as PDSs, or technical architectures offered by private platforms that seek to assist individuals in managing their data, are emerging in the data marketplace. Overall, their objectives are similar: to empower individual users with more transparency and control over the processing of their personal data.

Many of these technological intermediaries seek to tackle the growing information and power asymmetries between big platforms and individual users by bringing data processing close to individuals. Rather than bringing the data to big platforms for processing, the compute is brought to the data—hence the decentralisation aspect. Decentralised data processing and 'self-sovereign identity' solutions are progressively receiving institutional, social, and regulatory attention for their potential to reshape current data governance (European Commission, 2020a). Thanks to the decentralised architecture on which they are based, blockchains also offer tamper-proof record-keeping abilities. This positions the technology in the data marketplace, potentially supporting the objective of individual empowerment over data capture, data analytics and data sharing.

Self-sovereign identity technologies. Popularised by the German Constitutional Court’s Population Census judgment (1983), the right to informational self-determination is formally defined as "the authority of the individual to decide himself, on the basis of the idea of self-determination, when and within what limits information about his private life should be communicated to others" (Gutwirth, 2009, p. 45). It requires that restrictions on this right by the state have a basis in law, and that any restriction be necessary and proportionate to the aim it pursues. The latest European Commission document on the creation of a European strategy for data highlights the need "to give individuals the tools and means to decide at a granular level what is done with their data" (European Commission, 2020a, p. 10). In particular, it highlights the promise that decentralised tools such as distributed ledgers, personal data stores and other technical architectural designs might help individuals "manage data flows and usage, based on individual free choice and self-determination" (p. 11).

Within the technological realm of tools for individual empowerment, self-sovereign identity is gaining popularity. The term "sovereign source authority" was first used on the blog The Moxy Tongue in 2012 to contest the dependent relationship between individual identity and the state, and to propose decoupling the existence of individual identities from the act of identity registration by or through state actors (The Moxy Tongue, 2012, n.p.). The concept was taken up by Christopher Allen (2016), who used it to describe a principle-based framework that would create a decentralised system of user-centric, self-administered, interoperable digital identities. This system is driven by ten foundational principles, following Kim Cameron’s Laws of Identity (2005): 1) Existence, 2) Control, 3) Access, 4) Transparency, 5) Persistence, 6) Portability, 7) Interoperability, 8) Consent, 9) Minimalisation, and 10) Protection. It constitutes the latest evolution of digital identity representations, further separating it from centralised and federated models, and aiming to decouple identity issuance from the state in order to bring it under the full control of the citizen (Giannopoulou and Wang, 2021, p. 3). Ultimately, self-sovereign identity "makes the citizen entirely responsible for the management, exploitation and protection of one’s data" (Herian, 2018, p. 115). While implementations of the principles vary substantially, it can be said that self-sovereign identity aims to "enable a model of identity management that puts individuals at the centre of their identity-related transactions, allowing them to manage a host of identifiers and personal information without relying upon any traditional kind of centralized authority" (Fry and Renieris, 2020, n.p.).

Self-sovereign identity is "an identity management system created to operate independently of third-party public or private actors, based on decentralised technological architectures, and designed to prioritise user security, privacy, individual autonomy and self-empowerment" (Giannopoulou and Wang, 2021, p. 2). Its aim is to transcribe autonomy and individual control into technological design terms. User-centric technological design over the storage of, and access controls to, personal identity data thus appears to be the essence of any self-sovereign identity solution. Naturally, the degree to which these design choices manifest varies depending on the objective in question. The objective of this architecture is to create the conditions for data empowerment by design, giving data subjects the ability both to physically store the encrypted keys that unlock their identity features, and to exercise access and use control over the whole or parts of their identity. The purported benefit of this design is that these features also prioritise security, encryption, and data minimisation by design.
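
As an illustration only, the following sketch shows the holder-side selective disclosure idea behind many self-sovereign identity designs: the full credential stays with the user, who presents only the attributes a verifier requests. All names (CredentialWallet, the attribute fields, the signature string) are hypothetical, and the hash commitment merely stands in for the cryptographic proofs that real systems use.

    import hashlib
    import json

    class CredentialWallet:
        """Hypothetical holder-side wallet: the full credential stays with the user,
        who discloses only the attributes a verifier asks for."""

        def __init__(self, credential: dict, issuer_signature: str):
            self._credential = credential              # stored locally, under user control
            self._issuer_signature = issuer_signature

        def present(self, requested: list[str]) -> dict:
            """Build a presentation containing only the requested attributes."""
            disclosed = {k: v for k, v in self._credential.items() if k in requested}
            # A commitment to the full credential lets a verifier check integrity
            # without seeing undisclosed attributes (a stand-in for a real proof).
            commitment = hashlib.sha256(
                json.dumps(self._credential, sort_keys=True).encode()
            ).hexdigest()
            return {"disclosed": disclosed,
                    "credential_commitment": commitment,
                    "issuer_signature": self._issuer_signature}

    # Usage: a verifier asks for age status only; name and address are not revealed.
    wallet = CredentialWallet(
        {"name": "A. Example", "over_18": True, "address": "Example Street 1"},
        issuer_signature="signature-from-issuer",
    )
    print(wallet.present(["over_18"]))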

Multiple projects promise to deliver individual ‘data sovereignty’ in a technological solution: one that embodies individual autonomy over one’s personal data and individual control over its processing lifecycle. These solutions aim to achieve a network of interoperable identities by redesigning the way authorisations in data flows currently operate. In practice, there is a considerable number of actors in this field, which has come to fall under the—now general—denomination of ‘self-sovereign identity’. While recognising that these projects are “still in their infancy”, the Commission highlights the field’s potential and examines what regulatory environment would be appropriate to moderate these projects and accompany them towards their purported goal.

Personal data store technologies. Personal data store platforms provide individuals with a technical device (the personal data store, or PDS) that allows them to manage and take decisions over data capture, and over who can access and undertake data analytics on the data held in that device (Janssen et al., 2020a). Individuals can also manage the transfer (the actual data sharing) of their data to an organisation. This can be raw data or aggregated data. Through the device, data processing happens close to the individual (hence the ‘decentralisation’ aspect), rather than within the walled gardens of large, data-driven internet companies, out of the individual’s sight. In addition to the technical component, PDSs often entail terms of service that govern the PDS system, operating as a means to ensure that an organisation’s behaviour is compliant with an individual’s preferences and with the PDS platform’s requirements.

A PDS’s empowerment aspirations are generally compliant with the GDPR’s guiding principles, and aim to improve transparency and an individual’s management of and control over the processing of their personal data (Janssen et al., 2020a). Yet the effectiveness of PDSs in their quest to empower individuals and to tackle the current information and power asymmetries has recently been questioned (Janssen et al., 2020b). Once personal data moves beyond the device, control over how data is processed by data recipients is largely lost. While decentralised data management might offer helpful user-oriented data management tools, PDSs remain grounded in the mistaken idea that, with sufficient information presented in the right way, individuals will be able to overcome systemic asymmetries of information and power that were largely created by business logics at the same meso-level where PDSs operate. That is, PDSs do not alter the unequal distribution of understanding, knowledge, prediction, and risk assessment over a business’ data processing that business logics have created. In all, decentralising data governance does not necessarily imply decentralisation of control (Janssen et al., 2020b).
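
A minimal sketch may make the access-control idea behind a PDS more tangible: the individual sets per-organisation, per-purpose policies, and the store releases only the fields those policies cover while logging every request. The class name and policy format are invented for illustration; actual PDS platforms implement far richer policy languages and technical safeguards.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class PersonalDataStore:
        """Hypothetical PDS: the individual sets per-organisation, per-purpose policies,
        and every access decision is recorded in a local log."""
        data: dict
        policies: dict            # e.g. {("acme-insurer", "quotation"): ["age", "postcode"]}
        access_log: list = field(default_factory=list)

        def request_access(self, org: str, purpose: str, fields: list[str]) -> dict:
            allowed = set(self.policies.get((org, purpose), []))
            granted = {f: self.data[f] for f in fields if f in allowed and f in self.data}
            self.access_log.append({
                "time": datetime.now(timezone.utc).isoformat(),
                "org": org, "purpose": purpose,
                "requested": fields, "granted": sorted(granted),
            })
            return granted

    # Usage: only the fields covered by the individual's policy are released.
    pds = PersonalDataStore(
        data={"age": 34, "postcode": "1011", "health_record": "redacted"},
        policies={("acme-insurer", "quotation"): ["age", "postcode"]},
    )
    print(pds.request_access("acme-insurer", "quotation", ["age", "health_record"]))
    # -> {'age': 34}; health_record is withheld and the request is logged either way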

Decentralised non-personal data exchange technologies. The European Commission’s focus on facilitating the sharing of private sector data (e.g., of SMEs) underlines the need to develop new data governance for technological projects able to provide new architectural ideas for creating reliable data exchanges. Against this backdrop, decentralised data exchanges have recently emerged as a technological infrastructure solution, with the objective of ensuring data traceability, transparency and trust between data sharing parties. These exchanges use a decentralised architecture which, with the help of a distributed network of participating nodes, avoids the storage and processing of data by centralised intermediaries. Decentralised data sharing ecosystems are designed to facilitate all types of data flows (such as machine-generated data) on a large scale, without putting at risk trust between transacting parties or trust in the quality of the data. This is supported by technological safeguards and by-design encryption techniques, as well as governance choices, which aim to diffuse asymmetric power dynamics among transacting parties.
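
The traceability claim can be illustrated with a simple hash-chained log of sharing events, in which altering any past entry breaks the chain. This single-node sketch is only a stand-in for the distributed ledgers such exchanges actually rely on; the SharingLedger class and the event fields are hypothetical.

    import hashlib
    import json

    class SharingLedger:
        """Minimal hash-chained log of data-sharing events: altering any past entry
        breaks the chain, making tampering detectable (a stand-in for a distributed ledger)."""

        def __init__(self):
            self.entries = []

        def record(self, event: dict) -> str:
            prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
            payload = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
            entry_hash = hashlib.sha256(payload.encode()).hexdigest()
            self.entries.append({"event": event, "prev": prev_hash, "hash": entry_hash})
            return entry_hash

        def verify(self) -> bool:
            prev = "0" * 64
            for e in self.entries:
                payload = json.dumps({"event": e["event"], "prev": prev}, sort_keys=True)
                if e["prev"] != prev or hashlib.sha256(payload.encode()).hexdigest() != e["hash"]:
                    return False
                prev = e["hash"]
            return True

    # Usage: record two sharing events, then confirm the chain is intact.
    ledger = SharingLedger()
    ledger.record({"from": "sensor-operator", "to": "research-lab", "dataset": "machine-data-v1"})
    ledger.record({"from": "sensor-operator", "to": "oem", "dataset": "machine-data-v1"})
    print(ledger.verify())  # True; editing any past entry would make this False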

There is a wide variety of projects and companies that attempt to meet these considerations and expectations. For example, private companies such as the Ocean Protocol promise to deliver a data-ordering, blockchain-based framework that would support distributed data marketplaces according to sector-specific or general data needs. Sector-specific data marketplaces for automobile data, health data, or, more broadly, research data are also being developed.

The variety of tools facilitating efficient data exchanges promises to deliver on the recognised market and innovation potential of organised sector-specific or general-purpose data marketplaces. When, as has consistently been shown, the legal shortcomings in facilitating non-personal data exchanges between businesses, or between businesses and institutions, cannot be remedied through effective legal reform, attention shifts to technological infrastructures. Blockchain data exchanges are a prominent example of such infrastructures.

3.2.4. Community-based logics: bottom-up data intermediaries

The idea of data commons, data cooperatives, or data trusts (see the detailed taxonomy, and the analysis of the legal consequences of these terms, below), termed “data institutions” by the Open Data Institute (ODI), is gaining traction in policy and practice. Notable is the initiative of the European Commission (2020c) to introduce and regulate “data intermediaries” in its proposal for a Regulation on European Data Governance (the “Data Governance Act”). In essence, data intermediaries aim to institute an intermediating governance layer between data subjects on the one hand, and data recipients (natural or legal persons who seek to use that data for commercial or non-commercial purposes) on the other. A data intermediary has a number of roles and responsibilities, which it exercises on behalf of, among other beneficiaries, data subjects and data users, via its own agency (a minimal sketch of these roles follows the list below):

  • It collects data from individual data subjects/sources;
  • It stores/processes data of individual data subjects, or facilitates data sharing and access arrangements, both legal, and technical, with data users and/or third parties;
  • It enters into agreements with third parties and authorises/licences the use of the data aggregates and derivatives;
  • It monitors, prevents unauthorised uses, and enforces agreements; and
  • It captures value from data use and redistributes value to data subjects.
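
As a rough illustration of these roles, the sketch below models a cooperative-style intermediary that pools members' data, licenses aggregate access to vetted users, and redistributes the fees it collects. All names and the flat-fee licensing logic are hypothetical simplifications; real intermediaries would add legal agreements, auditing and far finer-grained controls.

    class DataIntermediary:
        """Hypothetical cooperative-style intermediary: pools members' data, licenses
        aggregate access to vetted users, and redistributes the fees it collects."""

        def __init__(self, licence_fee: float):
            self.licence_fee = licence_fee
            self.members: dict[str, list[dict]] = {}   # member id -> contributed records
            self.licences: set[str] = set()            # authorised data users
            self.revenue = 0.0

        def contribute(self, member_id: str, records: list[dict]) -> None:
            self.members.setdefault(member_id, []).extend(records)

        def grant_licence(self, data_user: str) -> None:
            """Stand-in for entering an agreement that authorises a third-party use."""
            self.licences.add(data_user)
            self.revenue += self.licence_fee

        def aggregate(self, data_user: str, field: str) -> float:
            """Only licensed users get access, and only to an aggregate, never raw records."""
            if data_user not in self.licences:
                raise PermissionError(f"{data_user} holds no licence")
            values = [r[field] for recs in self.members.values() for r in recs if field in r]
            return sum(values) / len(values)

        def redistribute(self) -> dict[str, float]:
            """Split accumulated revenue equally among contributing members."""
            share = self.revenue / len(self.members) if self.members else 0.0
            return {m: share for m in self.members}

    # Usage: two members pool data, one licensed user queries an aggregate, value is shared.
    coop = DataIntermediary(licence_fee=100.0)
    coop.contribute("member-a", [{"commute_km": 12}, {"commute_km": 8}])
    coop.contribute("member-b", [{"commute_km": 20}])
    coop.grant_licence("city-planning-office")
    print(coop.aggregate("city-planning-office", "commute_km"))  # 13.33...
    print(coop.redistribute())                                   # {'member-a': 50.0, 'member-b': 50.0}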

There are multiple domains in which similar arrangements exist. For example, in scientific research, scientists have long been aware of the need to define the conditions, and infrastructures of data sharing arrangements, and designed bespoke systems to fit their needs (Wilbanks and Friend, 2016). One of the key features of these arrangements is that they reflect the very specific situations in which the sharing of often highly sensitive data, such as health data, must be facilitated among a defined group of stakeholders, such as medical professionals, researchers, commercial companies, public health bodies, etc.

In the current EU regulatory landscape, individuals face limitations similar to those of intellectual property (IP) rights holders, which arise from the comparable nature of the two information markets. Both legal frameworks create and allocate legal entitlements in information to its producer, that is, the individual and the IP rights holder respectively. In both cases the meaningful exercise of those rights is limited by transaction costs. Just as it is costly and difficult for an individual IP rights holder to monitor the use of their creations, negotiate terms of use with IP users, and enforce their rights vis-à-vis unlicensed users (Landes and Posner, 2003), individuals’ right to personal data protection is difficult to monitor (Giannopoulou, 2020). Solove (2013) argues that individuals cannot possibly keep up with privacy self-management given the sheer size of this task and the information and power asymmetries involved. Despite the substantial differences between the nature, substance, purpose and destiny of the two fields of law (copyright and personal data protection), the market failures in the two information markets are surprisingly similar.

Within the copyright domain, the solution to the transaction cost problem was to aggregate individual rights in Collective Rights Management Organisations (CRMOs) (Handke, 2014). To facilitate the licensing of copyrighted works where it was not always possible, feasible, or efficient for individual rights holders to do so, rights holders formed collective entities, which created pools of copyrighted works under a collective agency. CRMOs license the pooled works on behalf of their members or, through extended collective licensing, on behalf of all rights holders. CRMOs are legally empowered to license the pooled intellectual property, monitor and enforce copyrights, and collect and distribute remuneration among their members. CRMOs also address the imbalance in negotiating power between often powerful IP user organisations (such as broadcasters or digital platforms) and individual creators. This raises the question of whether collective rights management-style intermediary institutions could address the problems of power imbalance and the practical erosion of data subject rights in the data domain, and whether such an approach would be legally (or technically) possible.

Several exemplar data intermediaries have already been identified; the Open Data Institute, in its report on data trusts (Hardinges et al., 2019), lists various expressions of collective data agency of individual data subjects. Data trusts are modelled after legal trusts: trustees of a data trust take on responsibility (with some liabilities) to steward data for an agreed purpose. Data cooperatives are mutual organisations owned and democratically controlled by members, who delegate control over data about them. Data commons follow the institutional models developed around common pool resources, such as forests and fisheries. Research partnerships provide access to data to universities and other research organisations. The umbrella term data collaboratives refers to intermediaries that facilitate collaborations between private data companies and the public sector (B2G) with the goal of engaging in data sharing activities in order to “generate public value”.

All these different approaches establish a ‘middleman’ with its own agency to remove some of the friction and transaction costs from data use, by establishing legal and/or technological vehicles of data stewardship. Unlike traditional data controllers, who collect and use data from individuals largely for their own benefit and tend to capture most of the value from such data use, data intermediaries are supposed to be independent from prospective data users. Data intermediaries might, depending on their purpose, the parties involved, and the data held, be controlled by the stakeholders involved in the intermediary (e.g., data cooperatives). The data intermediary’s obligations and responsibilities over decisions taken regarding the data processing are supposed to be directed towards the beneficiaries.

These governance approaches are thought to have various benefits, such as the ability to balance conflicting views and incentives about the terms and conditions under which data can be shared and accessed. Collective data governance arrangements in data intermediaries may carry a legally binding responsibility to address the interests of individuals, citizens and other beneficiaries. Decisions over data use and sharing can be more open, participatory and deliberative, giving people a say they would otherwise not have, and the benefits of data use and sharing can be distributed more widely, ethically and equitably. One significant purported benefit of such intermediaries is that they might create entities with size, clout, and negotiating power comparable to the giants of the digital economy, which may seek to receive raw data or data aggregates produced and held by the intermediary. In this way, data intermediaries could balance out the information and power asymmetries currently tipped in favour of digital businesses (Delacroix and Lawrence, 2019), and the concept of informational self-determination would be cast in a new light: that of empowerment through the collective.

However, issues remain with the various expressions of data intermediaries. We still lack a legal form that would best circumscribe the purpose of data intermediaries. The UK common law trust might be appealing, but as a common law concept it is not immediately applicable in continental legal systems. Other legal forms, such as cooperatives or associations, have their own limitations, which in similar cases, such as collective rights management in the copyright domain, have been overcome with special legal mandates tailored to the particularities of the necessary intermediation. Endowing these new data intermediaries with data, and with rights to exercise powers on behalf of their members, might however be difficult, as not every right assigned to natural persons can be mandated or transmitted to a legal entity. Where data intermediaries are not effective, or where imperfections occur in their management of the processing of their members’ data, substantial technical and legal opportunities may remain for prospective data users to bypass the data intermediaries when acquiring data. The use of data intermediaries for commercial purposes has not yet been regulated; the draft Data Governance Act (Chapter III) and the German legislative proposal regulating approved consent management services and end-user settings both propose a restrictive approach on this point (§26, Entwurf eines Gesetzes zur Regelung des Datenschutzes und des Schutzes der Privatsphäre in der Telekommunikation und bei Telemedien of 2021).

Also, and this seems to be the most consequential issue, all these arrangements assume that it is not only possible but desirable to clearly define the group of data subjects, the scope of the personal data, and the purposes that make up the collective arrangement. But such hard boundaries are rare; more porous community or stakeholder boundaries are prevalent. Extended collective licensing arrangements, common in the copyright domain, were developed successfully to address similar challenges. If the intermediary-layer idea gathers momentum, similar extended powers may also need to be considered in the data space.

4. Conclusion

Digital data practices are in rapid flux and development. The same applies to the efforts to create some order in the creation, extraction, use, and trade of data. From a technical and business perspective, the data space is unified and global, with intense competition over unclaimed data resources. The legal landscape, meanwhile, is inconsistent and fragmented, and enforcement often struggles to keep up with the latest developments. States are torn between conflicting objectives: on the one hand, opening up their data wealth for relatively unregulated reuse, and on the other, defining data as a basis of sovereignty and competitive advantage. Individuals are consistently victims of extractive and even abusive data practices, even where they enjoy strong data protection rights. Technologists seek to offer tools of self-protection. Communities try to organise coordinated collective action at scale.

As we have noted earlier, the interaction between meso- and macro-level frameworks is bidirectional: macro-frameworks can shape all meso-level governance logics and favour certain ones among them, while local stakeholders—and increasingly also globally operating technology corporations—influence the macro-structures through political participation, economic activity, and various counter-practices.

The biggest tension in this relationship today is the apparent mismatch between the success criteria for companies competing in the global data economy at the meso-level and the semi-open fundamental rights approach of the EU’s macro-level framework. Many signs point to this tension: the success of US firms which could accumulate clout before being subjected to EU rules; EU businesses voicing their concerns that the EU frameworks are too restrictive, thus putting them at a competitive disadvantage vis-à-vis US and Chinese competition; and the political declarations of EU institutions which pay lip service to European values while continuously seeking to expand data access and sharing arrangements for economic ends in order to compete with the rest of the world. This comes to the fore in the language used in EU policy documents emphasising the need to balance competitiveness and fundamental rights considerations in the data space (e.g., European Commission, 2020a).

The meso-level governance logics are under dual pressure. On the one hand, they need to comply with not one but multiple macro-regimes if they want to do business in those jurisdictions. On the other, meso-regimes also compete with each other for adoption by citizens and public and private stakeholders in the face of a still pervasive business logic of data accumulation and concentration. Under these conditions, we see a real danger that the winning meso-logic will be the one that is most successful within the framework of global economic competition. Such an outcome would increase the pressure on the European macro-framework and slowly compromise its human rights and values-based attributes until, at some point, it starts to emulate its macro-competitors, such as the US and China. Given that Europe’s macro-level framework is still much better aligned with the good data governance practices aspired to at the meso-level, such an outcome would be dramatic for their ability to persevere and succeed. Yet the Commission’s proposal for a Data Governance Act, which gives some legal recognition to “data intermediaries”, could be a step in the right direction.

The plethora of new data governance logics, especially data intermediaries, certain technological frameworks, and their hybrids, might offer an alternative path, in which the fundamental rights-based EU framework can deliver and empower meso-level data governance institutions that carry these values in their DNA. The EU is a major economic and political power, distinguished by a values-based approach grounded in Enlightenment values. As there are signs that the EU’s macro-approach to personal data protection is being cautiously copied by its global competitors, the conditions are there for the EU to become the largest exporter of value-sensitive meso-level governance logics as well. For that, it is necessary to better define and delineate the properties of good meso-level data governance within the EU context. This paper has taken the first steps towards doing so.

References

Aaronson, S. A., & Leblond, P. (2018). Another digital divide: The rise of data realms and its implications for the WTO. Journal of International Economic Law, 21(2), 245–272. https://doi.org/10.1093/jiel/jgy019

Allen, C. (2016, April 25). The path to self-sovereign identity [Blog post]. Life With Alacrity. https://www.lifewithalacrity.com/2016/04/the-path-to-self-soverereign-identity.html#dfref-1212

Andrejevic, M. (2011). The work that affective economics does. Cultural Studies, 5, 604–620. https://doi.org/10.1080/09502386.2011.600551

Backer, L. C. (2019). China’s Social Credit System: Data-Driven Governance for a ‘New Era.’ Current History, 118(809), 209–214. https://doi.org/10.1525/curh.2019.118.809.209

Bamberger, K. A., & Mulligan, D. K. (2011). Privacy on the Books and on the Ground. Stanford Law Review, 63, 247–316. https://www.stanfordlawreview.org/print/article/privacy-on-the-books-and-on-the-ground/

Baumer, E. P. S. (2014). Toward Human-Centred algorithm design. Big Data & Society, 4(2). https://doi.org/10.1177/2053951717718854

Bradford, A. (2012). The Brussels effect. Northwestern University Law Review, 107(1), 1–68. https://scholarlycommons.law.northwestern.edu/nulr/vol107/iss1/1/

Cameron, K. (2005, May). The laws of identity [Blog post]. Kim Cameron’s Identity Weblog. https://www.identityblog.com/?p=352

Chander, A. (2014). How Law Made Silicon Valley. Emory Law Journal, 63(3), 639–694. https://scholarlycommons.law.emory.edu/elj/vol63/iss3/3/

Chander, A., Kaminski, M., & McGeveran, W. (2021). Catalyzing Privacy Law. Minnesota Law Review, 105, 1732–1802. https://minnesotalawreview.org/wp-content/uploads/2021/04/3-CKM_MLR.pdf

Collington, R. (2019). Digital Public Assets: Rethinking Value and Ownership of Public Sector Data in the Platform Age [Discussion Paper]. Common Wealth. https://uploads-ssl.webflow.com/5e2191f00f868d778b89ff85/5e3bfa10722cc53f4c3cd817_Digital-Public-Assets-Common-Wealth.pdf

Daly, A., Devitt, S. K., & Mann, M. (Eds.). (2019). Good Data. Institute of Network Cultures. https://networkcultures.org/wp-content/uploads/2019/01/Good_Data.pdf

Decision of the 1. Senate, 1 BvR 209/83-NJW 1984 (Bundesverfassungsgericht 15 December 1983).

Delacroix, S., & Lawrence, N. D. (2019). Bottom-up Data Trusts: Disturbing the ‘one size fits all’ approach to data governance. International Data Privacy Law, 9(4), 236–252. https://doi.org/10.1093/idpl/ipz014

Drexl, J. (2019). Legal Challenges of the Changing Role of Personal and Non-Personal Data in the Data Economy. In A. Di Franceschi & R. Schulze (Eds.), Digital Revolution—New Challenges for Law: Data Protection, Artificial Intelligence, Smart Products, Blockchain Technology and Virtual Currencies (pp. 19–41). C.H. Beck; Nomos.

Entwurf eines Gesetzes zur Regelung des Datenschutzes und des Schutzes der Privatsphäre in der Telekommunikation und bei Telemedien of 2021, German Bundestag (2021). https://dsgvo-gesetz.de/ttdsg/

European Commission. (2017). Communication from the Commission to the European parliament and the Council: Exchanging and protecting Personal Data in a Globalised World (COM(2017)7 final).

European Commission. (2020a). Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions. A European strategy for data (COM(2020)66). https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52020DC0066

European Commission. (2020b). Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions. A new ERA for Research and Innovation (COM/2020/628 final). https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=COM:2020:628:FIN

European Commission. (2020c). Proposal for a Regulation of the European Parliament and of the Council on European data governance (Data Governance Act) (COM/2020/767 final). https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52020PC0767

Directive 2003/98/EC of the European Parliament and of the Council of 17 November 2003 on the re-use of public sector information, OJ L 345 90 (2003). http://data.europa.eu/eli/dir/2003/98/oj

Directive (EU) 2019/1024 of the European Parliament and of the Council of 20 June 2019 on open data and the re-use of public sector information, OJ L 172 56 (2019). http://data.europa.eu/eli/dir/2019/1024/oj

European Union. (2020). Declaration Building the next generation cloud for businesses and the public sector in the EU. https://ec.europa.eu/digital-single-market/en/news/towards-next-generation-cloud-europe

Finck, M., & Pallas, F. (2020). They who must not be identified—Distinguishing personal from non-personal data under the GDPR. International Data Privacy Law, 10(1), 11–36. https://doi.org/10.1093/idpl/ipz026

Fry, E., & Renieris, E. (2020, March 31). SSI? What we really need is full data portability [Blog post]. Women in Identity. https://womeninidentity.org/2020/03/31/data-portability/

Fuchs, C. (2011). New Media, Web 2.0 and Surveillance. Sociology Compass, 5(2), 134–147. https://doi.org/10.1111/j.1751-9020.2010.00354.x

Gady, F.-S. (2014). EU/U.S. Approaches to Data Privacy and the “Brussels Effect”: A Comparative Analysis. In International Engagement on Cyber IV: A Post-Snowden Cyberspace.

Gao, H. (2021). Data Regulation with Chinese Characteristics. In M. Burri (Ed.), Big Data and Global Trade Law (pp. 245–267). Cambridge University Press. https://doi.org/10.1017/9781108919234.017

Giannopoulou, A. (2020). Algorithmic systems: The consent is in the detail? Internet Policy Review, 9(1). https://doi.org/10.14763/2020.1.1452

Giannopoulou, A., & Wang, F. (2021). Self-sovereign identity. Internet Policy Review, 10(2). https://doi.org/10.14763/2021.2.1550

Goldfarb, A., & Trefler, D. (2018). How Artificial Intelligence impacts labour and management. In World Trade Report 2018: The future of world trade (p. 140) [Opinion piece]. World Trade Organization. https://www.wto.org/english/res_e/publications_e/opinionpiece_by_avi_goldfarb_and_dan_trefler_e.pdf

Google Spain, Case C-131/12 (European Court of Justice 13 May 2014).

Granger, M.-P., & Irion, K. (2018). The right to protection of personal data: The new posterchild of European Union citizenship? In H. de W. Vries & M.-P. Granger (Eds.), Civil Rights and EU Citizenship (pp. 279–302). Edward Elgar Publishing. https://doi.org/10.4337/9781788113441.00019

Greenleaf, G. (2012). The Influence of European Data Privacy Standards outside Europe: Implications for Globalization of Convention 108. International Data Privacy Law, 2(2), 68–92. https://doi.org/10.1093/idpl/ips006

Gutwirth, S., Poullet, Y., De Hert, P., de Terwangne, C., & Nouwt, S. (Eds.). (2009). Reinventing data protection? Springer. https://doi.org/10.1007/978-1-4020-9498-9

Handke, C. (2014). Collective administration. In Handbook on the Economics of Copyright. Edward Elgar Publishing.

Hardinges, J., Wells, P., Blandford, A., Tennison, J., & Scott, A. (2019). Data Trusts: Lessons from Three Pilots [Report]. Open Data Institute. https://theodi.org/article/odi-data-trusts-report/

Hardjono, T., Shrier, D., & Pentland, A. (Eds.). (2019). Trusted Data (Revised And Expanded). MIT Press.

Herian, R. (2018). Regulating Blockchain. Critical perspectives in law and technology. Routledge. https://doi.org/10.4324/9780429489815

Hess, C., & Ostrom, E. (2003). Ideas, artifacts, and facilities: Information as a common-pool resource. Law and Contemporary Problems, 66(1/2), 111–145. https://scholarship.law.duke.edu/lcp/vol66/iss1/5/

High-Level Expert Group on Business-to-Government Data Sharing. (2020). Towards a European strategy on business-to-government data sharing for the public interest [Final report]. European Union. https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=64954

Mozilla Insights, van Geuns, J., & Brandusescu, A. (2020). Shifting Power Through Data Governance [Report]. https://drive.google.com/file/d/1XLlGWRbm2bu48GgTFjG2aU4DSCL0U1s9/view

Irion, K. (2012). Government Cloud Computing and National Data Sovereignty. Policy & Internet, 4(3), 40–71. https://doi.org/10.1002/poi3.10

Irion, K. (2016). A special regard: The Court of Justice and the fundamental rights to privacy and data protection. In U. Faber, K. Feldhoff, K. Nebe, K. Schmidt, & U. Waßer (Eds.), Gesellschaftliche Bewegungen—Recht unter Beobachtung und in Aktion: Festschrift für Wolfhard Kohte (pp. 873–890). Nomos.

Irion, K. (2021). Panta Rhei: A European Perspective on Ensuring a High Level of Protection of Human Rights in a World in Which Everything Flows. In M. Burri (Ed.), Big Data and Global Trade Law (pp. 231–242). CUP.

Janssen, H., Cobbe, J., Norval, J., & Singh, J. (2020). Decentralised data processing: Personal Data Stores and the GDPR. International Data Privacy Law, 10(4), 356–384. https://doi.org/10.1093/idpl/ipaa016

Janssen, H., Cobbe, J., & Singh, J. (2020). Personal information management systems: A user-centric privacy utopia? Internet Policy Review, 9(4). https://doi.org/10.14763/2020.4.1536

Kitchin, R. (2013, November 27). Four critiques of open data initiatives [Blog post]. London School of Economics Impact of Social Sciences. https://blogs.lse.ac.uk/impactofsocialsciences/2013/11/27/four-critiques-of-open-data-initiatives/

Landes, W. M., & Posner, R. A. (2003). The economic structure of intellectual property law. Harvard University Press.

Langford, J., Poikola, A., Janssen, W., & Lähteenoja, V. (2020). Understanding MyData Operators [Paper]. MyData Global. https://mydata.org/wp-content/uploads/sites/5/2020/04/Understanding-Mydata-Operators-pages.pdf

Lauer, J. (2017). Creditworthy. A History of Consumer Surveillance and Financial Identity in America. Columbia University Press.

Lovett, R., Lee, V., Kukutai, T., Cormack, D., Rainie, S. C., & Walker, J. (2019). Good data practices for Indigenous data sovereignty and governance. In A. Daly, S. K. Devitt, & M. Mann (Eds.), Good Data (pp. 26–36). Institute of Network Cultures. https://networkcultures.org/wp-content/uploads/2019/01/Good_Data.pdf#page=28

Mac Síthigh, D., & Siems, M. (2019). The Chinese social credit system: A model for other countries? Modern Law Review, 82(6), 1034–1071. https://doi.org/10.1111/1468-2230.12462

Madiega, T. (2020). Digital sovereignty for Europe (EPRS Ideas Paper, Briefing PE 651.992). European Parliamentary Research Service. https://www.europarl.europa.eu/RegData/etudes/BRIE/2020/651992/EPRS_BRI(2020)651992_EN.pdf

Mann, M., Devitt, S. K., & Daly, A. (2019). What Is (in) Good Data? In A. Daly, S. K. Devitt, & M. Mann (Eds.), Good Data (pp. 8–23). Institute of Network Cultures. https://networkcultures.org/wp-content/uploads/2019/01/Good_Data.pdf#page=10

Micheli, M., Ponti, M., Craglia, M., & Berti Suman, A. (2020). Emerging models of data governance in the age of datafication. Big Data & Society, 7(2). https://doi.org/10.1177/2053951720948087

OECD. (2008). OECD Recommendation of the Council for Enhanced Access and More Effective Use of Public Sector Information. https://www.oecd.org/sti/44384673.pdf

O’Hara, K., & Hall, W. (2018). Four internets: The geopolitics of digital governance (No. 206; CIGI Papers). https://www.cigionline.org/sites/default/files/documents/Paper%20no.206web.pdf

Ostrom, E. (1990). Governing the Commons: The Evolution of Institutions for Collective Action (p. 280). Cambridge University Press. https://doi.org/10.1017/CBO9780511807763

Ostrom, E. (1994). Neither market nor state: Governance of common-pool resources in the twenty-first century [Lecture]. Lecture Series No. 2, International Food Policy Research Institute.

Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Harvard University Press.

Pohle, J., & Thiel, T. (2020). Digital sovereignty. Internet Policy Review, 9(4). https://doi.org/10.14763/2020.4.1532

Poullet, Y. (2020). From open data to reverse PSI – A new European policy facing GDPR (No. 11; European Public Mosaic). Public Administration School of Catalonia. http://www.crid.be/pdf/public/8586.pdf

Purtova, N. (2018). The Law of Everything. Broad Concept of Personal Data and Future of EU Data Protection Law. Law, Innovation and Technology, 10(1), 40–81. https://doi.org/10.1080/17579961.2018.1452176

Roberts, H., Cowls, J., Casolari, F., Morley, J., Taddeo, M., & Floridi, L. (2021). Safeguarding European values with digital sovereignty: An analysis of statements and policies. Internet Policy Review, 10(3).

Solove, D. (2013). Privacy Self-Management and the Consent Dilemma. Harvard Law Review, 126, 1888–1903. https://harvardlawreview.org/2013/05/introduction-privacy-self-management-and-the-consent-dilemma/

Spiekermann, K., Slavny, A., Axelsen, D., & Lawford-Smith, H. (2019). Big Data Justice: A Case for Regulating the Global Information Commons. Journal of Politics, 83(2), 1–38. https://doi.org/10.1086/709862

Summa, H. A. (2020, March). ‘Building your own internet’: How GAIA-X is Paving the Way to European Data Sovereignty. Dotmagazine. https://www.dotmagazine.online/issues/cloud-and-orientation/build-your-own-internet-gaia-x

The Moxy Tongue. (2012, February 15). What is ‘sovereign source authority’? [Blog post]. The Moxy Tongue. https://www.moxytongue.com/2012/02/what-is-sovereign-source-authority.html

Trenham, C., & Steer, A. (2019). The Good Data Manifesto. In A. Daly, S. K. Devitt, & M. Mann (Eds.), Good Data (pp. 37–53). Institute of Network Cultures. https://networkcultures.org/wp-content/uploads/2019/01/Good_Data.pdf#page=39

Varian, H. R., & Shapiro, C. (1999). Information Rules. A Strategic Guide to the Network Economy. Harvard Business School Press.

Venkataramakrishnan, S. (2021, February 9). Irish data regulator under fire over dated software. Financial Times. https://www.ft.com/content/9484b8fe-ccca-4707-8ea5-87e883b7490f

Wagner, B., & Janssen, H. (2021, January 4). A first impression of regulatory powers in the Digital Services Act [Blog post]. Verfassungsblog. https://verfassungsblog.de/regulatory-powers-dsa/

Wilbanks, J., & Friend, S. (2016). First, design for data sharing. Nature Biotechnology, 34, 377–379. https://doi.org/10.1038/nbt.3516

Willis, D., & Kane, P. (2018, November 5). How Congress stopped working. ProPublica; The Washington Post. https://www.propublica.org/article/how-congress-stopped-working

Wong, J., Henderson, T., & Ball, K. (2020, July 29). Data Protection for the Common Good: Developing a framework for a data protection-focused data commons. Data for Policy Conference. https://doi.org/10.5281/zenodo.3965670

Yin, K., & Zhang, G. (2020, October 26). A Look at China’s Draft of Personal Data Protection Law [Blog post]. The International Association of Privacy Professionals. https://iapp.org/news/a/a-look-at-chinas-draft-of-personal-data-protection-law/

Zuboff, S. (2015). Big other: Surveillance capitalism and the prospects of an information civilization. Journal of Information Technology, 30, 75–89. https://doi.org/10.1057/jit.2015.5

Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. Profile Books.

Governing the shadow of hierarchy: enhanced self-regulation in European data protection codes and certifications

This paper is part of Governing “European values” inside data flows, a special issue of Internet Policy Review guest-edited by Kristina Irion, Mira Burri, Ans Kolk, Stefania Milan.

1. Introduction

The governance of European values around issues of data protection is continually on global political, regulatory, academic, and business agendas. The policies that European policymakers and national regulators adopt address the concern that information technology (IT) companies and political actors use private information to track, misinform, and affect individuals’ political and commercial preferences. Data protection policies also increasingly influence business practices: they shape organisational structures and policies, assign tasks to corporate actors, and require the appointment of compliance officers. For multinational organisations, data protection policies can impact business decisions on information flow, information processing, and the location of data centres. New modes of governance and self-regulation emerge from these policies and involve regulatory intermediaries, i.e., actors who work in conjunction with policymakers and regulators to influence the behaviour of regulated organisations: controllers and processors alike (together: rule-takers). In the emerging modes of governance, both these groups of actors—the regulatory intermediaries and the rule-takers—can self-regulate in the shadow of European and national hierarchies.

Political and social scientists have long suggested that self-regulation can exist in the shadow of coercive hierarchies (Black, 1996, p. 27; Héritier & Lehmkuhl, 2008). This occurs when policymakers initiate steps to legislate, for instance where there are no preexisting laws, or where regulators threaten executive decisions, for instance by reregulating towards tighter rules as a precondition for creating an industry more willing to engage in self-regulation (Héritier & Eckert, 2008, p. 114). Political scientists and legal scholars therefore explain that self-regulation in the shadow of hierarchy involves both the delegation of tasks and responsibilities from policymakers to private actors to formulate and impose regulations (Black, 1996, p. 27) and mechanisms to continuously threaten or induce compliance (Héritier & Lehmkuhl, 2008, p. 2). Such relationships, especially when they constrain or incentivise self-regulation in the shadow of hierarchy, raise questions of politics and policy about whether the threat or incentive will materialise, and questions about the mode of governance that would ensure the long-term commitment of all involved stakeholders. This paper process-traces the adoption of two such (sub-)regimes that exist in the shadow of European and national hierarchies: the requirements under the General Data Protection Regulation (GDPR) to rely on private monitoring and certification bodies when adopting data protection codes of conduct or certifications. 1

The adoption of the GDPR in 2016, and consequently the emergence of a new European data protection regime, offers an interesting case study for the new modes of governance in the shadow of hierarchy. Until 2016, the old data protection regime and its core legislation—the European Data Protection Directive (EDPD)—assigned sole regulatory competence to the national data protection supervisory authorities (DPAs) to monitor and to enforce data protection rules (Newman, 2008). Under this regime, controllers needed to notify DPAs of their self-regulatory practices, and DPAs in turn ratified and then registered the controllers’ processing operations in public registries. Conversely, while the GDPR maintains the leading regulatory position of DPAs in monitoring and enforcing data protection rules (see Article 57), it also adopts a regime of enhanced self-regulation via regulatory intermediation that permits private bodies to interpret, monitor, and sometimes even enforce data protection rules (see Articles 41(1) and 43(1)). These data protection rules must first be defined, either in codes of conduct or certification (see Articles 40 and 42), and then the private bodies must receive accreditation from a public authority (see Articles 41(2) and 43(2)). 2 Only after the standardisation and accreditation phases can the private bodies monitor and assess conformity with the codes and certifications. Once accredited and certified, monitoring bodies, certification bodies, and the rule-takers that they monitor or certify, can all act in the shadow of the hierarchical decisions of European policymakers and national regulators.

Two preliminary clarifications are needed regarding the role of DPAs in creating a shadow of hierarchy. First, while a threat by European policymakers to amend the GDPR in order to nudge regulatory intermediaries and rule-takers towards self-regulation is possible, the more immediate ‘threat’ can originate from the DPAs. The GDPR clarifies that the existence of codes and certifications, as well as the delegation of responsibilities to monitoring and certification bodies, are without prejudice to the tasks and powers of the DPAs (see Articles 41(1), 41(4), 43(1), and 43(7)). As regulators, the DPAs have the final word, although their decisions remain subject to effective judicial remedies (Article 78). Second, Articles 40(1) and 42(3) of the GDPR specify that the purposes of the codes and certifications are, respectively, to contribute to the proper application of the GDPR and to permit rule-takers to demonstrate compliance. Even though the codes and certifications are enhanced by the empowerment of private monitoring and certification bodies, a demonstration of compliance does not ensure compliance with the GDPR (Leenes, 2020). DPAs can therefore require higher compliance standards than those demonstrated; still, given the continuous debate on whether DPAs have sufficient resources to successfully regulate data protection (European Commission, 2020), monitoring and certification bodies can assist the DPAs to regulate from a distance and can free up the DPAs’ time and resources.

I therefore ask, how have European policymakers established the two regulatory arrangements that permit private bodies to act as regulatory intermediaries in order to monitor codes and assess conformity with certifications in the shadow of hierarchy? The paper thereafter asks what the similarities and differences in the design of the two sub-regimes are, and concludes by addressing how hierarchical decisions can impact the self-regulation that exists in the sub-regimes’ shadows. To answer these questions, I chose to use the process-tracing methodology as it has previously been used to empirically and theoretically study European integration (Pierson, 1996). To apply the methodology, I first examined the European regimes for codes of conduct and certification prior to the adoption of the EDPD and the GDPR. I started with these initial decisions in order to understand whether they created path dependencies for policymakers and regulators. From there, I obtained documents for the process-tracing through formal freedom of information (FOI) requests and from European Council documents leaked by civil activists (n=466). Additional documents included formal and online publications by European institutions such as the European Commission, the European Data Protection Board (EDPB), and the DPAs. Materials on the specification of the codes and certification schemes were retrieved from the websites of the European institutions and of the owners of the relevant codes and certifications. I also participated in a workshop hosted by the European Commission on ‘Data protection certification mechanisms and standards: industry needs and views on the new GDPR certification’.

Based on the documents I gathered, the second step of the analysis involved tracing how the policy outcomes regarding codes of conduct and certification in general, and the reliance on monitoring and certification bodies in particular, came about. This step involved answering why the precise policy outcome became dominant over other policy alternatives, which policymakers were involved in the decision-making process, and how power was distributed amongst them and other parties (Van Den Bulck, 2012, p. 18). I searched the documents for behind-the-scenes political bargaining, and for decisions and arguments made by policymakers in the European Council, the European Parliament, and the European Commission both in favour of and against the decisions to adopt codes and certifications into the regime and to enhance them by relying on monitoring and certification bodies. During the process-tracing and document analysis I also looked for policy and regulatory decisions that laid the groundwork for the decision to include the certifications and codes of conduct in the proposal for the GDPR. At the next stage, I adopted an inductive qualitative approach aimed at comparing the similarities and differences between data protection codes of conduct and certification. Due to the length of the criteria that the GDPR provides and the additional specification by the EDPB, I separated the comparison into three parts, one for each phase: standardisation, accreditation, and certification. The final stage of the research included drawing conclusions about how regulators can impact self-regulation that exists in their shadow through regulating via intermediaries instead of using direct modes of regulation.

The next section addresses the theoretical framework of self-regulation: how regulatory regimes incorporate self-regulatory components, and how regulatory intermediation can also be used to create regimes of self-regulation. Section 3 then describes how the two self-regulatory sub-regimes in the shadow of European hierarchies have emerged, and Section 4 compares the two sub-regimes. Thereafter, I draw conclusions about how regulators can impact self-regulation that exists in their shadow by regulating via the intermediaries instead of using direct modes of regulation.

2. Theoretical framework

Self-regulation refers to the process through which individual organisations or the regulated industry design formal or informal rules and procedures and thereafter enforce these rules and procedures on themselves (Porter & Ronit, 2006). Self-regulatory regimes usually benefit from a greater degree of experience and efficiency, yet they tend to suffer from a lack of accountability and legitimacy (Ogus, 1995). And while self-regulation can be voluntary (Black, 1996), policymakers and regulators can overcome deficiencies in its accountability and legitimacy by introducing self-regulatory components into public regulatory regimes. They can mix and match different policy mechanisms and constraints to mandate rule-takers to self-regulate (enforced self-regulation; Ayres & Braithwaite, 1992). Policymakers can also decide to share responsibilities between government actors and private bodies in order to overcome regulatory shortfalls (co-regulation; Levi-Faur, 2011) or to regulate the manner in which private actors self-regulate (meta-regulation; Gilad, 2010). Policymakers and regulators can additionally coerce rule-takers to self-regulate by threatening to adopt constraining rules (Black, 1996, p. 27; Héritier & Lehmkuhl, 2008) or induce them to consider social values or environmental concerns (Schneider & Scherer, 2019). In doing so, policymakers and regulators make the self-regulatory regimes more public, offer to introduce democratic accountability, and aim to ensure that the new forms of regulation better consider long-term policy goals.

However, these mechanisms and constraints that mandate rule-takers to self-regulate tend to focus on the direct and hierarchical relationships between regulators and regulated organisations. Conversely, another method of influencing self-regulation is to introduce an intermediary to indirectly regulate and affect the self-regulatory practices of rule-takers. When actors in a self-regulatory regime move beyond self-regulation mechanisms and rely on independent regulatory intermediaries to constrain their conduct and improve policy implementation, I call such a form of self-regulation via regulatory intermediation ‘enhanced self-regulation’ (Medzini, 2021b). The term ‘enhanced’ indicates that actors can delegate responsibilities to regulatory intermediaries in order to improve the credibility of self-regulation, for example towards accountability in data protection (Medzini, 2021a).

The literature on regulatory governance defines regulatory intermediaries broadly as any actor that affects the behaviour of rule-takers and makes some aspect of the regulation of regulated organisations indirect (Abbott et al., 2017, p. 19). Regulatory intermediaries can enter the regulatory regime for either functional or political reasons. Policymakers, regulators, and regulated organisations might decide to rely on intermediaries due to the capacities they possess that other actors lack, or due to their legitimacy to regulate. At the same time, the same actors might decide to rely on regulatory intermediaries for political reasons. Regulatory actors, including the intermediaries, might consider the mechanism of regulatory intermediation as a way to capture the regulatory regime, to gain regulatory rents, or to direct decisions away from the public interest and towards special interests (Marques, 2019).

The literature on regulatory intermediation explains that regulatory regimes can also include more than one group of intermediaries. In such regimes, one group of regulatory intermediaries (I1) can regulate another group of regulatory intermediaries (I2), although the two groups might have different and unrelated functions. A leading example of a regulatory intermediation regime in which intermediaries have interrelated functions is the tripartite standards regime (TSR; Loconto & Busch, 2010). TSRs are defined by three separate phases: standardisation, accreditation, and certification. For example, policymakers and regulators first need to approve the criteria for accreditation and certification (the standardisation phase). They would then allow one group of intermediaries (I1) to accredit another group of intermediaries (I2) (the accreditation phase) in order for the accredited bodies to certify (the certification phase). 3 Such an approach creates multiple levels of oversight and consequently an indirect relationship between policymakers and regulated organisations. The literature further explains that European policymakers previously adopted such a TSR approach as part of the European ‘New Approach’ to standardisation; an approach which seeks to open the European market to products without threatening the safety of European consumers (Galland, 2017).
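
To make the layered structure of a TSR concrete, the following toy sketch is my own illustration rather than something drawn from the cited literature; all class, attribute, and function names in it are hypothetical.

```python
# Illustrative toy sketch (hypothetical names) of a tripartite standards regime (TSR):
# standardisation -> accreditation -> certification, with two intermediary groups.

from dataclasses import dataclass


@dataclass
class Criteria:
    """Standardisation phase: criteria approved by the regulator."""
    name: str
    approved_by_regulator: bool = False


@dataclass
class SecondLevelIntermediary:
    """I2: for example, a monitoring or certification body."""
    name: str
    accredited: bool = False

    def certify(self, rule_taker: str, criteria: Criteria) -> bool:
        # Certification phase: only an accredited body may assess conformity,
        # and only against criteria the regulator has already approved.
        return self.accredited and criteria.approved_by_regulator


@dataclass
class FirstLevelIntermediary:
    """I1: for example, an accreditation body."""
    name: str

    def accredit(self, body: SecondLevelIntermediary, criteria: Criteria) -> None:
        # Accreditation phase: I1 regulates I2 against the approved criteria.
        if criteria.approved_by_regulator:
            body.accredited = True


# The three phases in order: approve criteria, accredit I2, certify a rule-taker.
criteria = Criteria("data protection certification", approved_by_regulator=True)
accreditor = FirstLevelIntermediary("national accreditation body")
certifier = SecondLevelIntermediary("certification body")
accreditor.accredit(certifier, criteria)
print(certifier.certify("rule-taker Ltd", criteria))  # True
```

The only point of the sketch is that conformity decisions pass through two intermediary layers before they reach rule-takers, so the regulator's relationship with regulated organisations is indirect.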

At the same time, one consequence of having several phases of intermediation is the introduction of increased complexity. Besides there being more intermediaries that can be captured, more intermediaries mean there are more actors who can hold, or more critically fail to hold, regulated organisations to account. For example, intermediaries can fail to conduct proper oversight or to rectify noncompliance (accountability forum drift; Schillemans & Busuioc, 2015, pp. 201–205). To explain how European policymakers have adopted the two sub-regimes of enhanced self-regulation, the next section traces the process by which European policymakers introduced certification and monitoring bodies into the European data protection regime.

3. The origins of the European codes of conduct and certification

3.1. Codes of conduct before and during the old regime

The use of codes of conduct as policy instruments in the European data protection regime can be traced to national legislation adopted during the 1970s and 1980s in response to the introduction of electronic data processing (Mayer-Schönberger, 1997). According to Francesca Bignami (2011), codes of conduct were popular in Britain, Germany, and the Netherlands, but not in France and Italy. While the codes existed at the national level, their adoption at the supranational level did not happen so easily. To begin with, European policymakers did not always see eye to eye on the urgency of having European data protection rules or mechanisms of self-regulation. 4 Primarily, the European Commission disagreed with the resolutions passed by the European Parliament due to the potential cost of the resolutions to the private sector. According to Abraham Newman (2008, pp. 112–16), the actions taken by transgovernmental policy entrepreneurs against the emergence of data havens resulted in a much-needed policy shift. They nudged European policymakers to propose the adoption of a European-wide data protection framework—the EDPD.

During the deliberations on the EDPD, the European Commission and the European Council did not agree on the purpose of using codes of conduct. The Commission envisioned that sectoral codes could enable the free flow of personal information throughout the European Community. The codes would contribute to the Commission’s objectives of establishing an adequate level of protection throughout the Community and preventing barriers to information flows. The Commission also sought to use codes of conduct as a source for additional initiatives (European Council, 1991a), which could be considered when it proposed new sector-specific legislation and measures. While the Commission did not prevent the use of national codes of conduct (European Council, 1991b, pp. 17–18), it wanted member states to encourage their business circles to participate in drawing up European codes (European Council, 1990). In contrast, while the Council agreed with the Commission on the need to achieve harmonisation, the delegations strove to retain sufficient discretion for implementation in light of their special national and sectoral characteristics. For instance, delegations proposed using the codes to exempt a large majority of cases from the broad notification requirement that was embedded early on into the draft EDPD (European Council, 1992a). The Council later also adopted the Dutch position that the purpose of codes was to supplement or to interpret data protection laws, and not to introduce derogations or new limitations (European Council, 1992b).

Following additional consultation with the European Parliament, the Commission amended its proposal for the EDPD. The new draft included provisions on national codes. In return for more authority and certainty, the new draft tasked DPAs with ensuring that the trade associations that submit codes are in fact representative and that the codes are well thought through. As the codes would bind neither third parties nor the courts, national codes would only improve implementation (European Commission, 1992, pp. 36–7). The new draft EDPD also shifted decision-making about the codes from the Commission to the regulators: DPAs would decide on national codes, and the Article 29 Working Party (WP29) would decide on Community codes. Consequently, instead of serving the Commission’s original purpose for Community codes, the codes became a mechanism for contributing to the proper application of national legislation (European Council, 1993, p. 5). A later version of the EDPD joined the two articles that separately addressed national and Community codes into one article: Article 27 of the EDPD (European Council, 1994).

The practice of adopting codes of conduct under the EDPD was a direct continuation of the events that occurred during the deliberations among European policymakers. Researchers have observed that codes of conduct were mostly adopted at the national level, though with great variance between countries (Robinson et al., 2009). While in some countries, such as Denmark, the DPAs and industry collaborated on the process of drafting codes, in other countries, such as Ireland and Greece, codes tended to have a more binding effect (Vander Maelen, 2020, p. 236). Meanwhile, at the European Community level, WP29 formally issued decisions on three codes of conduct (Vander Maelen, 2020, p. 235). WP29 approved a code by the Federation of European Direct and Interactive Marketing (FEDMA) which addressed the use of personal data in direct marketing (Article 29 Working Party, 2003, 2010). It issued an opinion that a code by the International Air Transport Association (IATA) to address transborder data flows of personal data used in the international air transport of passengers and of cargo should be read as a ‘suggested framework’ (Article 29 Working Party, 2001), and it rejected a standard by the World Anti-Doping Agency (WADA; Vander Maelen, 2020, p. 235). Unsurprisingly, the Commission made public its disappointment that only a few organisations had applied for Community codes (European Commission, 2003).

3.2. Certification during the old data protection regime

Unlike codes of conduct, certification schemes had no formal provisions under the EDPD. They existed either as private or as regulatory solutions (Kamara et al., 2019). Whereas most certification schemes are private, two schemes were managed by, or received the approval of, either the European Commission or national regulators who were members of the WP29. 5 The first scheme was the US–EU Safe Harbor Agreement (SHA). The SHA was signed by the European Commission and the US Department of Commerce (DoC) in order to allow American companies—whose federal legal system provides no comprehensive data protection regime—to process personal data of Europeans. Companies needed to self-certify, annually and to the DoC, that they would adhere to the seven principles embedded in the SHA. Because organisations made these commitments publicly, any misrepresentation could be enforced by the Federal Trade Commission as an ‘unfair and deceptive’ trade practice (Bennett & Raab, 2006, pp. 167–69). 6 An adequacy decision by the European Commission bound the European member states and their regulators until the European Court of Justice (ECJ) invalidated it in 2015, finding that the SHA failed to provide adequate safeguards for the personal information of Europeans. 7 The Privacy Shield Framework that replaced the SHA, and which offered stronger obligations and more effective protections for individuals, was invalidated by the ECJ in 2020. 8

The second European-wide certification scheme was the European Privacy Seal (EuroPriSe). EuroPriSe was established in 2007 as a voluntary privacy certification for IT products and services. It built upon the EDPD, national and European legislation, European court rulings, and policy papers adopted by the WP29. EuroPriSe draws its legitimacy from two sources. First, European policymakers and national regulators recognised and participated in EuroPriSe. The European Commission and the Directorate‑General for Communications Networks, Content and Technology (DG Connect) supported EuroPriSe through the eTEN programme for the deployment of e-services in Europe. Also, three DPAs—the DPAs of the German state of Schleswig-Holstein (ULD), the French CNIL, and the Spanish APDCM—comprised one third of the EuroPriSe consortium. Second, EuroPriSe’s evaluation and certification procedures also strengthened its legitimacy. Its consortium trains independent privacy and IT-security experts to evaluate candidate products and services. Their evaluation reports are then forwarded to an impartial certification body, which validates their methodology, consistency, and completeness. If the evaluation shows compliance with the EuroPriSe criteria, a two-year certification and seal are issued. From 2009 to 2013, ULD ran EuroPriSe; since 2013, it has been run as a private enterprise.

3.3. The adoption of a ‘new’ data protection regime

By the end of the 2000s, rapid technological developments and the processes of globalisation challenged European policymakers. They found that the existing legal and regulatory framework structured around the EDPD could neither cope with these developments nor offer sufficient harmonisation (European Commission, 2010; European Council, 2011). The adoption of the Lisbon Treaty, and with it the now-binding EU Charter of Fundamental Rights, provided an opportunity to start promoting a new, comprehensive approach. This new approach would still build on the principles enshrined in the EDPD, yet it would also introduce new principles such as accountability, allow the Commission to further encourage the use of codes of conduct that had rarely been used under the EDPD, and formally establish European certification schemes (European Commission, 2010, pp. 12–13). Achievements reached during the development of such schemes could also enable the European Union to remain a driving force behind global data protection standards (European Commission, 2010, p. 16).

The Commission decided to resolve the legal and regulatory shortcomings of the existing governance framework around data protection by promoting a modernised legal framework (European Commission, 2012). The Commission preferred this alternative over two others. The first alternative was to amend the EDPD using soft action, interpretative communications, and EU-wide self-regulatory initiatives. The second alternative was to establish a central, European data protection authority. The Commission identified that a modernised legal framework had the potential for a positive impact on the identified policy problems, would not result in high compliance costs, and would not generate strong opposition. 9 The modernised legal framework alternative also included two consistency mechanisms. First, a single DPA would lead the regulation of rule-takers, while permitting other DPAs to object to decisions with pan-European implications through the newly established European Data Protection Board (EDPB). Second, the Commission suggested awarding itself the competency to enact delegated and implementing acts to ensure, among other outcomes, an openness to future technological developments and to give general validity to self-regulatory initiatives such as codes and certifications (European Commission, 2012, pp. 87–93).

When the European Council received the proposal for the GDPR, it started by criticising several decisions made by the Commission. The Council and its delegations primarily disapproved of the Commission’s decision to use a regulation instead of a directive. They also criticised the Commission’s proposal to give itself the competency to enact implementing and delegated acts, which spread across almost 50 provisions (European Council, 2012a). The Cypriot Presidency and the delegations argued that actions by the EDPB, as well as the use of codes of conduct, could make the reliance on implementing and delegated acts redundant (European Council, 2012b). The Cypriot Presidency also sought to better understand the member states’ positions on the possible administrative burdens that the proposal had raised, especially for small and medium-sized enterprises (SMEs), and on whether a risk-based approach should be adopted to assess the rule-takers’ obligations. The Cypriot Presidency later noted that the delegations had reached a consensus that a merely horizontal, ‘risk-based’ obligation would be insufficient and that there was instead also a need to define, on an article-by-article basis, the exact content and scope of the rule-takers’ obligations (European Council, 2012c). For example, it was suggested that a stronger linkage between risk-assessment processes and the articles on codes of conduct and certification would promote their wider use (European Council, 2013a).

One development during the Council’s deliberations on Chapter IV of the proposal for the GDPR—which deals with the obligations of rule-takers—was the inclusion of private institutions in the regulatory process. Based on a German proposal, it was suggested that private institutions could receive accreditation from the DPAs based on detailed criteria provided by the GDPR and recommendations made by the EDPB. Accredited private institutions could then monitor rule-takers against approved codes of conduct; in return, the DPAs could lodge complaints against the institutions, subject them to administrative fines, and revoke their accreditation. As the deliberations continued, the Council also introduced similar provisions for assessing conformity through accredited certification bodies. The delegations further agreed that codes and certifications would confirm compliance with the legal requirements of the GDPR (European Council, 2013b). As the Council moved to discuss Chapter V of the proposal—which deals with the transfer of personal data to non-European countries and organisations—the compromise text explicitly provided that rule-takers could transfer personal information if they applied appropriate safeguards, including the use of approved codes or certifications. Such appropriate safeguards would not require any additional authorisation from the DPAs (European Council, 2014a, p. 3).

The European Parliament had a different approach to certification. 10 It proposed, based on the original suggestion by the Commission, that qualified and impartial auditors accredited by the DPAs could assess rule-takers so that the DPAs could certify that their processing operations complied with the GDPR. Rule-takers that passed the certification process would receive the European Data Protection Seal, which would be valid for five years (European Council, 2014b, pp. 123–25). The Parliament also suggested that while the EDPB could certify that standards for data-protection-enhancing technologies comply with the regulation, the Commission could specify the criteria for awarding certifications, set the accreditation criteria for auditors, and lay down technical standards for certification mechanisms. Following the discussions in the trilogue between the Commission, the Council, and the Parliament, the Council’s position was adopted. The agreement nevertheless included provisions for the European Data Protection Seal, and it placed limitations on the period for which accreditations and certifications could be awarded.

4. A comparison between European data protection codes of conduct and certification

As the process-tracing shows, policymakers introduced codes of conduct and certification mechanisms into the new European data protection regime to serve similar functions. The codes of conduct and certification mechanisms help rule-takers manage their risk-based obligations, as well as make assurances and apply appropriate safeguards for international data transfers. Additionally, while codes of conduct and certification have distinct origins within the European data protection regime, both mechanisms emerged as regulatory governance sub-regimes structured around enhanced self-regulation via regulatory intermediation, which allows self-regulation in the shadow of European and national hierarchies. Regulators first need to approve the accreditation and certification criteria and then accredit private certification and monitoring bodies in order for them to certify and monitor rule-takers according to the pre-approved criteria. Some differences between codes of conduct and certification do, however, exist.

Codes of conduct originated from national data protection legislation and were used as an industry-level, market-based, self-regulatory mechanism. During the discussions to adopt the EDPD, the Commission suggested repurposing the codes in order to further pan-European implementation, but the Council disagreed and preferred to maintain the codes’ national and interpretative characteristics. The limited added value of adopting codes of conduct arguably lowered the stakeholders’ interest in relying on them. Conversely, during the deliberations about the GDPR, it was the Council that pushed to have codes with European-wide and extra-territorial application. The Council saw the codes as a way of contributing to the proper application of the GDPR, thus replacing the need to assign to the Commission the competency of adopting numerous delegated and implementing acts. A combination of the institutional regulatory arrangements around the EDPB—the ‘one-stop-shop’ principle—and the consistency mechanism could now help push national codes to the European level.

Data protection and privacy certification mechanisms, in turn, are mostly considered market-led and self-regulatory. The EDPD did not formally recognise certification schemes as viable regulatory mechanisms. Nevertheless, given the broad discretion that the EDPD awarded the DPAs, the DPAs could have decided to establish and support their own certification schemes. The Commission, which supported one such DPA-led certification, introduced certifications into the proposed GDPR even though the policy option it originally chose did not include certification as a mechanism. The Council then revised the Commission’s proposal and ensured that national regulators, and consequently also the EDPB, would replace the Commission in approving the criteria for certification. European policymakers consequently also limited the ability of DPAs to establish certification schemes that do not follow the procedures set out by the GDPR. While private actors might choose to adopt certification schemes that have not received DPA approval, they have no guarantee that the DPAs would acknowledge such schemes when considering whether the actors infringed European or national data protection rules.

Table 1: A comparison of codes of conduct and certification

Purpose specification

  Codes of conduct:
  1. To demonstrate compliance (Article 24.3)
  2. To contribute to the proper application of the GDPR and to specify its application (Articles 40.1 and 40.2)
  3. To safeguard personal data transfers to third countries and international organisations (Article 46.2(e))
  4. To engage in risk-mitigation and risk negotiation (Article 83.2(j))

  Certification:
  1. To demonstrate compliance (Article 24.3)
  2. To demonstrate compliance with Privacy-by-Design requirements (Article 25.3)
  3. To safeguard personal data transfers to third countries and international organisations (Articles 42.2 and 46.2(f))
  4. To engage in risk-mitigation and risk negotiation (Article 83.2(j))
  5. Cannot be used to certify data protection officers

Mode of self-regulation

  Codes of conduct: Voluntary industry-level self-regulation

  Certification: Voluntary single-corporate self-regulation

The GDPR sets the basic accreditation criteria for both monitoring and certification bodies. Both accreditation processes require that the certification and monitoring bodies be experienced and independent, have procedures for assessing the eligibility of rule-takers, have procedures to handle complaints, and have no conflict of interest. For monitoring bodies, the DPAs define the requirements for accreditation, which the EDPB then has to approve. Once the requirements for monitoring bodies are approved, the DPAs can accredit them. In addition to the criteria set by the GDPR, certification bodies also need to show that their procedures can periodically assess eligibility and that they respect the certification criteria. If member states decide that national accreditation bodies (NABs) will award accreditation—instead of, or together with, the DPAs—then the accreditation requirements also need to complement the requirements set by Regulation (EC) 765/2008 and the EN-ISO/IEC 17065/2012 standard. 11

Table 2: A comparison of the accreditation criteria for codes of conduct and certification

Who accredits?

  Codes of conduct: Data protection supervisory authorities

  Certification:
  1. Data protection supervisory authorities
  2. National accreditation bodies
  3. A joint accreditation

Who sets the accreditation criteria?

  Codes of conduct:
  1. Basic criteria by the GDPR (Article 41(2))
  2. DPAs define requirements for accreditation
  3. EDPB approves the requirements

  Certification:
  1. Basic criteria by the GDPR (Article 43(2))
  2. Requirements set by either the DPAs or the EDPB
  3. Where accreditation is by NABs:
    1. EN-ISO/IEC 17065/2012
    2. Additional requirements and technical rules set by DPAs that complement Regulation (EC) 765/2008

Basic criteria for accreditation

  Codes of conduct:
  1. Independence and expertise
  2. Having procedures and structures to assess eligibility
  3. Having procedures and structures to handle complaints about infringement
  4. No conflict of interest

  Certification:
  1. Independence and expertise
  2. Having procedures and structures to periodically assess eligibility
  3. Having procedures and structures to handle complaints about infringement
  4. No conflict of interest
  5. Respect for the certification criteria

Duration

  Codes of conduct: Until revocation

  Certification: Up to five years (renewable) or until revocation

The criteria for approving certifications and codes are, or at least should be, distinct from the criteria for their accreditation. The GDPR details several criteria for approving codes. First, codes need to be submitted by a representative body or a trade association. This is to achieve and maintain their industry-level self-regulatory nature. 12 Second, codes need to contain details of their purpose, scope, and applicability. Among other details, the codes must specify the application of the GDPR, facilitate its effective application, and provide sufficient safeguards to mitigate risks (EDPB, 1/2019, pp. 14–17). 13 Third, codes should also consider the specific features and needs of the relevant sector, specifically the characteristics of the SMEs in that sector. Fourth, while the GDPR’s wording states that the monitoring of codes may be carried out by monitoring bodies, the EDPB (2019) interprets this statement as a requirement. Therefore, the codes need to include mechanisms that enable accredited bodies to monitor compliance. Fifth, the codes should have appropriate review mechanisms to ensure that they remain up to date. Lastly, if the codes are also to apply to non-European controllers and processors, they need to include binding and enforceable commitments to apply appropriate safeguards. 14 When codes are ready, the DPAs review them in order to approve them or to provide an opinion on them. Codes with a European-wide application also have to be reviewed by the EDPB before the Commission can give them European-wide validity. Codes are valid until they are revoked.

Conversely, the GDPR gives little information on the criteria for approving certifications. One major difference, in comparison with the codes, is that the GDPR does not assign exclusivity to the actors who can own certification schemes (EDPB, 2018b, p. 6). Additionally, European policymakers explain that the purpose of certification is to demonstrate compliance with the GDPR. Hence, the EDPB added that certifications should address the data protection principles of lawful processing, data subjects’ rights, and the obligations of rule-takers (EDPB, 2018a). Furthermore, as with codes, certifications should also consider the specific needs of SMEs and should require certified rule-takers to provide information and access to certification bodies. 15 Lastly, as with codes, if non-European controllers and processors are allowed to rely on these certifications, the certifications should also include binding and enforceable commitments to apply appropriate safeguards. 16 Once the certification criteria are drawn up, the DPAs or the EDPB need to approve them, and if the latter does, the certifications may receive the title of ‘European Data Protection Seal’. After the criteria are approved, the Commission can specify the requirements to be considered for the data protection certification mechanisms. The Commission may also lay down technical standards to promote and recognise these certification mechanisms.

Once both the accreditation and certification criteria are approved, private bodies can be accredited and certified. European policies have therefore clarified which actors can award accreditation, assess conformity with certifications, and monitor compliance with the codes. The policies specify that only DPAs, as public bodies, can assess conformity with the accreditation criteria and accredit monitoring bodies. The monitoring bodies then monitor compliance with the codes and issue decisions on suspension or exclusion from them. The codes might also allow the monitoring bodies to take additional action against rule-takers. Only monitoring bodies can take such actions: DPAs, in this regard, cannot take action under the codes in cases of infringement, and they cannot add rule-takers to the codes or decide who should be excluded from them. DPAs can either investigate whether rule-takers who violated the codes also violated the GDPR, or they can decide to take action against the monitoring bodies. Such actions can include the administrative fines usually aimed at rule-takers. A decision to revoke a monitoring body’s accreditation does not have to mean that the codes themselves become void.

As with the accreditation of monitoring bodies, only public bodies can accredit certification bodies. However, unlike with the codes, the GDPR enables member states to decide whether their DPA, their NAB—or both together—would accredit certification bodies. In that regard, NABs usually benefit from having greater expertise in accreditation, while DPAs have greater expertise in data protection. Accreditations for certification bodies are awarded for a maximum of five years and can be renewed. Certifications and codes also differ significantly with regard to the actors who can assess and award certification: both the DPAs and the accredited certification bodies can certify rule-takers. 17 Certifications are awarded for a maximum of three years. When a certification body decides to award a certification, it has to provide its reasoning to the DPA. DPAs can sanction both the certification bodies and the rule-takers for infringing the criteria. For both, such sanctions can reach 10 million Euro or, in the case of an undertaking, 2% of its total worldwide annual turnover of the preceding financial year, whichever is higher.
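
For a sense of scale, the Article 83(4) ceiling is simply the higher of the flat amount and the turnover-based amount. The short sketch below is my own illustration; the turnover figures in it are hypothetical.

```python
# Illustrative sketch of the Article 83(4) GDPR fine ceiling: up to EUR 10 million
# or, for an undertaking, up to 2% of total worldwide annual turnover of the
# preceding financial year, whichever is higher. Turnover figures are hypothetical.

def article_83_4_ceiling(worldwide_annual_turnover_eur: float) -> float:
    """Return the maximum administrative fine available under Article 83(4)."""
    return max(10_000_000.0, 0.02 * worldwide_annual_turnover_eur)


print(article_83_4_ceiling(5_000_000))      # small body: 10000000.0 (flat ceiling applies)
print(article_83_4_ceiling(2_000_000_000))  # large undertaking: 40000000.0 (2% ceiling applies)
```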

Table 3: A comparison of the certification phase for codes and certification

Who monitors or certifies?

  Codes of conduct: Accredited monitoring bodies

  Certification:
  1. Accredited certification bodies
  2. DPAs

The function of accredited bodies*

  Codes of conduct:
  1. Monitor compliance
  2. Suspend or exclude from the code
  3. Other actions or sanctions (as defined in the code)

  Certification:
  1. Assessment
  2. Issue or renew certification
  3. Provide reasons for granting or withdrawing certifications

Basic criteria for codes of conduct and certification

  Codes of conduct:
  1. Specification for the application of the GDPR (Article 40(2))
  2. Facilitation of the effective application of the GDPR
  3. Contain suitable and effective safeguards to mitigate risks
  4. Having mechanisms for allowing accredited bodies to monitor and overall effective oversight
  5. Consideration for the specific features and needs of market sectors or SMEs
  6. For non-Europeans: having binding and enforceable commitments to apply appropriate safeguards
  7. When feasible, consultation with stakeholders, including data subjects
  8. Review mechanisms

  Certification:
  1. Consideration for the specific needs of SMEs
  2. Provide information and access to processing activities
  3. For non-Europeans: having binding and enforceable commitments to apply appropriate safeguards
  4. Specified requirements set by the Commission
  5. Technical standards set by the Commission
  6. Cannot be used to certify people (e.g., data protection officers)

Who prepares the codes or certification?

  Codes of conduct:
  1. Member states, the DPAs, the EDPB, and the Commission encourage the drawing up of codes
  2. Associations or bodies that represent rule-takers prepare the codes
  3. DPAs provide an opinion on the draft codes or approve them
  4. Supranational application: the EDPB also provides an opinion and the Commission gives EU validity (via implementing acts)

  Certification:
  1. The member states, the DPAs, the EDPB, and the Commission encourage the establishment of certification mechanisms
  2. The DPAs or the EDPB approve the criteria
  3. The Commission specifies requirements and defines technology standards

Duration

  Codes of conduct: Unlimited (until exclusion)

  Certification: For three years (or until withdrawal)

Who can join the codes or receive certification?

  Codes of conduct:
  1. Rule-takers
  2. Non-European controllers and processors

  Certification:
  1. Rule-takers
  2. Non-European controllers and processors

Who can be fined up to 10 million Euro or 2% of worldwide annual turnover of the undertaking (Article 83(4))?

  Codes of conduct: Monitoring bodies

  Certification:
  1. Rule-takers
  2. Certification bodies
  3. Unclear about non-Europeans

* This does not take away the authority of DPAs to monitor and sanction rule-takers.

5. Conclusions

The two case studies of codes of conduct and certification under the European data protection regime have shown that policymakers can establish self-regulation in the shadow of hierarchy by enhancing the regime with regulatory intermediaries. Under this new enhanced data protection regime, rule-takers can only use codes or certifications that have been previously approved by European or national regulators. Rule-takers must also rely on accredited private bodies to monitor the codes and depend on them to assess conformity to certification schemes. Rule-takers can use both mechanisms to manage their risk-based obligations and to show their commitment to applying appropriate safeguards. In addition, rule-takers who adhere to the enhanced codes and certifications may also benefit from the ability to transfer data internationally, knowing that the codes and certifications provide appropriate safeguards (Article 46), and that regulators can consider adherence to codes and certification as a factor for reducing administrative fines (Article 83.2(j)).

Policymakers and regulators can also benefit from the successful adoption of codes or certifications and the consequential development of a regime of enhanced self-regulation. Regulators can directly regulate the intermediaries, and with them the benefits and certainty that their mechanisms may provide. Regulators who successfully regulate through the intermediaries can also free up their limited time and resources: by tracking the work of the intermediaries they can indirectly nudge rule-takers towards compliance, and they only need to respond to cases of noncompliance. Regulators may even decide to disregard or sanction the use of any certification scheme or regulatory intermediation that has not been approved or ratified by European or national regulators. The overall result is that private actors, whether they seek accreditation, undergo a conformity assessment to obtain certification, or establish new or join existing codes of conduct, must self-regulate in the shadow of the hierarchy of European and national regulators.

Differences between the two hierarchical modes of governance do, however, exist. Codes of conduct under the GDPR are a form of industry self-regulation that can enable autonomy as well as reduce the compliance costs for rule-takers. Industry or sectoral codes of conduct only work if trade associations or other bodies that represent a group of rule-takers understand that the sector can benefit either from setting best practices or from having consistency in how data protection rules apply in their sector (Bennett & Raab, 2006, p. 156). The codes would not work if DPAs decided that every violation of a code also meant a violation of the GDPR. Regulators making such a decision might be able to use their full investigative and corrective capacities, yet they risk harming the self-regulatory nature embedded in the codes. Another counterproductive step might be to always follow the EDPB’s interpretation that the codes must have a monitoring body. For instance, if representative bodies seek to establish codes of conduct that consider the specific needs of micro, small, and medium-sized enterprises, then requiring them to establish and finance a monitoring body with detailed procedures for oversight might reintroduce the costs that the codes aim to reduce. The flexibility originally embedded in the regulatory intermediation around the codes enables the market to make decisions that reduce costs. A careful interpretation should therefore ensure that the costs saved are not lost in establishing and maintaining the codes.

Conversely, certification is a self-regulatory mechanism that focuses on the individual organisation. Even if certification owners draft certification criteria and the regulators approve them, this does not mean that private bodies will adopt them. The ability of DPAs to sanction both certification bodies and certified rule-takers, as well as their ability to direct how certification bodies act, might give them more influence over the certification process. However, these actions might also result in too many restrictions, which would disincentivise private actors from relying on certification schemes. Therefore, regulators must ensure that there are sufficient incentives not only for individual organisations to receive certification but also for certification bodies to take on the additional risks and costs of assessing conformity and issuing certifications. Each private body that seeks to become a certification body needs to be able to balance the risks and benefits of undergoing a conformity assessment to receive accreditation. Each rule-taker must similarly decide for itself whether to voluntarily undergo a conformity assessment to receive certification. One option DPAs have is to assess how they can rely with greater confidence on the periodic nature of the accreditation and certification processes. The periodic assessments provide DPAs with more points of interaction with the certification bodies and the certified rule-takers. Therefore, DPAs should balance their ability to regulate at a distance through accreditation, their ability to regulate at the point of awarding or renewing certifications, and their ability to regulate directly at the point of investigating a possible infringement. A decision to simultaneously use the full investigative and corrective capacities at all three points of decision might unbalance the self-regulatory nature of the certification mechanisms.

This paper therefore refocuses the debate on self-regulation in the shadow of hierarchy around the enhancement of self-regulation via regulatory intermediation and the decisions of the DPAs acting as regulators. It suggests that DPAs should adopt a meta-regulatory approach of regulating at a distance through regulatory intermediation. The paper shows how both codes of conduct and certification under the European data protection regime mix enforced self-regulation, through accreditation and the ratification of criteria, with components of enhanced self-regulation, through regulatory intermediation. It suggests how policymakers and meta-regulators might use regulatory arrangements and decisions to incentivise and constrain intermediaries, and thereafter also the regulated organisations, to self-regulate in the shadow of hierarchical decisions of the meta-regulator.

The above comparative analysis and suggestions are, nevertheless, limited by the scope of the two case studies, the methodological approach used to study them, and the overall European data protection regime. Hence, future research should adopt a similar methodological approach to trace and assess how other regulatory regimes use regulatory intermediaries to induce self-regulation in the shadow of hierarchy. Scholars should also address how regulatory arrangements and decisions under other regulatory regimes can either constrain or incentivise both intermediaries and rule-takers to join the regime. Lastly, researchers should study whether and how the decisions and actions of policymakers and regulators impact or sanction self-regulatory practices that do not exist under the approved hierarchical arrangements that permit rule-takers to self-regulate in their shadow.

References

Abbott, K. W., Levi-Faur, D., & Snidal, D. (2017). Theorizing Regulatory Intermediaries: The RIT Model. The ANNALS of the American Academy of Political and Social Science, 670(1), 14–35. https://doi.org/10.1177/0002716216688272

Article 29 Working Party. (2001). Working Document on IATA Recommended Practice 1774 Protection for Privacy and Transborder Data Flows of Personal Data Used in International Air Transport of Passengers and of Cargo.

Article 29 Working Party. (2003). Opinion 3/2003 on the European Code of Conduct of FEDMA for the Use of Personal Data in Direct Marketing.

Article 29 Working Party. (2010). Opinion 4/2010 on the European Code of Conduct of FEDMA for the Use of Personal Data in Direct Marketing.

Ayres, I., & Braithwaite, J. (1992). Responsive regulation: Transcending the deregulation debate. Oxford University Press.

Bennett, C. J., & Raab, C. D. (2017). The Governance of Privacy: Policy instruments in global perspective (1st ed.). Routledge. https://doi.org/10.4324/9781315199269

Bignami, F. (2011). Cooperative Legalism and the Non-Americanization of European Regulatory Styles: The Case of Data Privacy. American Journal of Comparative Law, 59(2), 411–461. https://doi.org/10.5131/AJCL.2010.0017

Black, J. (1996). Constitutionalising Self-Regulation. The Modern Law Review, 59(1), 24–55. https://doi.org/10.1111/j.1468-2230.1996.tb02064.x

European Commission. (1992). Amended proposal for a Council Directive on the protection of individuals with regard to the processing of personal data and on the free movement of such data.

European Commission. (2003). Report from the Commission—First Report on the implementation of the Data Protection Directive (95/46/EC).

European Commission. (2010). A comprehensive approach on personal data protection in the European Union.

European Commission. (2012). Impact Assessment accompanying the document Regulation of the European Parliament and of the Council on the protection of individuals with regard to the processing of personal data and on the free movement of such data (General Data Protection Regulation) and Directive of the European Parliament and of the Council on the protection of individuals with regard to the processing of personal data by competent authorities for the purposes of prevention, investigation, detection or prosecution of criminal offenses or the execution of criminal penalties, and the free movement of such data.

European Commission. (2020). Data protection as a pillar of citizens’ empowerment and the EU’s approach to the digital transition—Two years of application of the General Data Protection Regulation.

European Council. (1990). Protection of individuals in relation to the processing of personal data in the Community and information security.

European Council. (1991a). Protection of individuals in relation to the processing of personal data in the Community and information security.

European Council. (1991b). Protection of individuals in relation to the processing of personal data in the Community and information security.

European Council. (1992a). Protection of individuals in relation to the processing of personal data in the Community and information security.

European Council. (1992b). Protection of individuals in relation to the processing of personal data in the Community and information security.

European Council. (1993). Amended proposal for a Council Directive on the protection of individuals with regard to the processing of personal data and on the free movement of such data.

European Council. (1994). Amended proposal for a Council Directive on the protection of individuals with regard to the processing of personal data and on the free movement of such data.

European Council. (2011). Council conclusions on the Communication from the Commission to the European Parliament and the Council – a comprehensive approach on personal data protection in the European Union.

European Council. (2012a). Proposal for a regulation of the European Parliament and of the Council on the protection of individuals with regard to the processing of personal data and on the free movement of such data (General Data Protection Regulation).

European Council. (2012b). Proposal for a regulation of the European Parliament and of the Council on the protection of individuals with regard to the processing of personal data and on the free movement of such data (General Data Protection Regulation) – Questionnaire on administrative burdens, delegated/implementing acts and flexibility in data protection rules for the public sector.

European Council. (2012c). Data protection package – report on progress achieved under the Cyprus Presidency.

European Council. (2013a). Proposal for a regulation of the European Parliament and of the Council on the protection of individuals with regard to the processing of personal data and on the free movement of such data (General Data Protection Regulation) – implementation of risk-based approach.

European Council. (2013b). Proposal for a regulation of the European Parliament and of the Council on the protection of individuals with regard to the processing of personal data and on the free movement of such data (General Data Protection Regulation) – essential elements of the one-stop-shop mechanism.

European Council. (2014a). Proposal for a Regulation of the European Parliament and of the Council on the protection of individuals with regard to the processing of personal data and on the free movement of such data (General Data Protection Regulation)—Outcome of the European Parliament’s first reading.

European Council. (2014b). Proposal for a regulation of the European Parliament and of the Council on the protection of individuals with regard to the processing of personal data and on the free movement of such data (General Data Protection Regulation) – partial general approach on Chapter V.

European Data Protection Board. (2018a). Guidelines 1/2018 on certifications and identifying certification criteria in accordance with Articles 42 and 43 of the Regulation (2016/679).

European Data Protection Board. (2018b). Guidelines 4/2018 on the accreditation of certification bodies under Article 43 of the General Data Protection Regulation (2016/679).

European Data Protection Board. (2019). Guidelines 1/2019 on Codes of Conduct and Monitoring Bodies under Regulation 2016/679.

European Data Protection Board. (2021a). Opinion 16/2021 on the draft decision of the Belgian Supervisory Authority regarding the “EU Data Protection Code of Conduct for Cloud Service Providers” submitted by Scope Europe.

European Data Protection Board. (2021b). Opinion 17/2021 on the draft decision of the French Supervisory Authority regarding the European code of conduct submitted by the Cloud Infrastructure Service Providers (CISPE).

European Data Protection Board. (2021c). Guidelines 04/2021 on codes of conduct as tools for transfers.

Galland, J.-P. (2017). Big Third-Party Certifiers and the Construction of Transnational Regulation. The ANNALS of the American Academy of Political and Social Science, 670(1), 263–279. https://doi.org/10.1177/0002716217694589

Gilad, S. (2010). It runs in the family: Meta-regulation and its siblings. Regulation & Governance, 4(4), 485–506. https://doi.org/10.1111/j.1748-5991.2010.01090.x

Héritier, A., & Eckert, S. (2008). New Modes of Governance in the Shadow of Hierarchy: Self-regulation by Industry in Europe. Journal of Public Policy, 28(1), 113–138. https://doi.org/10.1017/S0143814X08000809

Héritier, A., & Lehmkuhl, D. (2008). The Shadow of Hierarchy and New Modes of Governance. Journal of Public Policy, 28(1), 1–17. https://doi.org/10.1017/S0143814X08000755

Kamara, I., Leenes, R., Lachud, E., Stuurman, K., Lieshout, M., & Bodea, G. (2019). Data Protection Certification Mechanisms: Study on Articles 42 and 43 of the Regulation (EU) 2016/679 [Study]. European Commission.

Leenes, R. (2020). Article 42 certification. In C. Kuner, L. Bygrave, C. Docksey, & L. Drechsler (Eds.), The EU General Data Protection Regulation (GDPR): A commentary (pp. 732–743). Oxford University Press. https://doi.org/10.1093/oso/9780198826491.003.0081

Levi-Faur, D. (2011). Chapter 1: Regulation and Regulatory Governance. In D. Levi-Faur (Ed.), Handbook on the Politics of Regulation. Edward Elgar Publishing. https://doi.org/10.4337/9780857936110.00010

Loconto, A., & Busch, L. (2010). Standards, techno-economic networks, and playing fields: Performing the global market economy. Review of International Political Economy, 17(3), 507–536. https://doi.org/10.1080/09692290903319870

Loconto, A. M. (2017). Models of Assurance: Diversity and Standardization of Modes of Intermediation. The ANNALS of the American Academy of Political and Social Science, 670(1), 112–132. https://doi.org/10.1177/0002716217692517

Marotta-Wurgler, F. (2016). Self-regulation and competition in privacy policies. The Journal of Legal Studies, 45(S2), 13–39. https://doi.org/10.1086/689753

Marques, J. C. (2019). Private regulatory capture via harmonization: An analysis of global retailer regulatory intermediaries. Regulation & Governance, 13(2), 157–176. https://doi.org/10.1111/rego.12252

Mayer-Schönberger, V. (1997). Generational Development of Data Protection in Europe. In P. E. Agre & M. Rotenberg (Eds.), Technology and Privacy: The New Landscape. The MIT Press. https://doi.org/10.7551/mitpress/6682.003.0010

Medzini, R. (2021a). Enhanced self-regulation: The case of Facebook’s content governance. New Media & Society, 146144482198935. https://doi.org/10.1177/1461444821989352

Medzini, R. (2021b). Credibility in enhanced self‐regulation: The case of the European data protection regime. Policy & Internet, 13(3), 366–384. https://doi.org/10.1002/poi3.251

Newman, A. L. (2008). Building Transnational Civil Liberties: Transgovernmental Entrepreneurs and the European Data Privacy Directive. International Organization, 62(01). https://doi.org/10.1017/S0020818308080041

Ogus, A. (1995). Rethinking Self-Regulation. Oxford Journal of Legal Studies, 15(1), 97–108. https://doi.org/10.1093/ojls/15.1.97

Pierson, P. (1996). The Path to European Integration: A Historical Institutionalist Analysis. Comparative Political Studies, 29(2), 123–163. https://doi.org/10.1177/0010414096029002001

Porter, T., & Ronit, K. (2006). Self-Regulation as Policy Process: The Multiple and Criss-Crossing Stages of Private Rule-Making. Policy Sciences, 39(1), 41–72. https://doi.org/10.1007/s11077-006-9008-5

Robinson, N., Graux, H., Botterman, M., & Valeri, L. (2009). Review of the European Data Protection Directive. RAND Corporation. https://www.rand.org/pubs/technical_reports/TR710.html.

Schillemans, T., & Busuioc, M. (2015). Predicting Public Sector Accountability: From Agency Drift to Forum Drift. Journal of Public Administration Research and Theory, 25(1), 191–215. https://doi.org/10.1093/jopart/muu024

Schneider, A., & Scherer, A. G. (2019). State Governance Beyond the ‘Shadow of Hierarchy’: A social mechanisms perspective on governmental CSR policies. Organization Studies, 40(8), 1147–1168. https://doi.org/10.1177/0170840619835584

van den Bulck, H. (2013). Tracing media policy decisions: Of stakeholders, networks and advocacy coalitions. In M. E. Price, S. Verhulst, & L. Morgan (Eds.), Routledge handbook of media law (pp. 17–34). Routledge. https://doi.org/10.4324/9780203074572-7

Vander Maelen, C. (2020). Codes of (mis)conduct? An appraisal of articles 40-41 GDPR in view of the 1995 data protection directive and its shortcomings. European Data Protection Law Review (EDPL), 6(2), 231–242. https://doi.org/10.21552/edpl/2020/2/9

Legislation:

Directive 95/46/EC of the European Parliament and of the Council on the Protection of Individuals with Regard to the Processing of Personal Data and on the Free Movement of Such Data. OJ No. L281, 24 October 1995.

Regulation (EC) 765/2008 of the European Parliament and of the Council of 9 July 2008 setting out the requirements for accreditation and market surveillance relating to the marketing of products and repealing Regulation (EEC) No 339/93. OJ L 218, August 13, 2008.

Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation). OJ L 119, April 27, 2016.

Footnotes

1. To date, the European Data Protection Board (EDPB) has registered three codes of conduct: two with named monitoring bodies and one conditional on naming a monitoring body. Additionally, in May 2019, the EDPB issued two opinions on draft decisions regarding European codes of conduct for cloud service providers (EDPB, 2021a) and cloud infrastructure service providers (EDPB, 2021b). No certifications have yet been registered.

2. According to Colin Bennett and Charles Raab (2006, pp. 153–155), codes of conduct or practice are a set of rules that provide guidance about correct procedures and behavior. Meanwhile, according to Loconto (2017, pp. 117–118) certification occurs when independent third-party actors both attest to the target’s compliance and determine conformity with a standard. An accreditation offers another level of determination, wherein accreditors determine conformity of the certifiers with another set of standards.

3. Certification and accreditation provide a statement of conformity following a process of attestation and determination. The mechanism that differentiates certification and accreditation from first-party conformity (self-reporting) or second-party verification is regulatory intermediation. Certification and accreditation occur when one or more third parties conducts the attestation and then determines conformity with the criteria.

4. While the Council of Europe adopted the Convention for the Protection of Individuals with Regard to Automatic Processing of Personal Data of 1981, the Convention lacked mechanisms for self-enforcement, and it was unable to create harmonisation among the European member states (Newman, 2008, pp. 109–10).

5. Some DPAs also introduce national-level certification schemes. A leading example is the French CNIL Labels, aimed at providing assurances that a product or a procedure corresponds to the French data protection act and the CNIL’s regulations. The CNIL issued four labels: 1) for auditing procedures; 2) for certifying training courses on data protection; 3) for digital safe boxes; and 4) for data protection governance procedures. According to the CNIL, the Labels would transform into certification schemes.

6. In practice, a review of a sample of 249 privacy policies has shown that, due to poor monitoring and weak enforcement mechanisms, almost all firms misrepresented their claims of adherence to the SHA (Marotta-Wurgler, 2016).

7. Case C-362/14 Maximillian Schrems v Data Protection Commissioner (6 October 2015).

8. Case C-311/18 Data Protection Commissioner v Facebook Ireland and Maximillian Schrems (16 July 2020).

9. The Commission did, however, adopt from the two unselected policy alternatives the reliance on certification schemes and the abolishment of notification obligations.

10. With regards to codes of conduct, the European Parliament slightly amended the Commission’s proposal but for the most part kept it unchanged (European Council, 2014b, pp. 122-23).

11. The EDPB has clarified that it is best that the DPAs would also follow the requirements set by Regulation (EC) 765/2008 and the EN-ISO/IEC 17065/2012 standards. It reasoned that doing so would contribute to a harmonised approach to accreditation (EDPB, 2018b).

12. Member states, the DPAs, the EDPB, and the Commission can only encourage representative bodies to draw up the codes.

13. Article 40(2) provides 12 non-exhaustive examples for the possible purpose of using codes of conduct. The codes should also indicate 1) how they meet a particular need of a sector or a processing activity; 2) how they facilitate the application of the GDPR; 3) how they specify the application of the GDPR; 4) how they provide sufficient safeguards; and 5) how they provide effective mechanisms for monitoring compliance (EDPB 1/2019, p. 14). Additionally, in July 2021, the EDPB adopted for public consultation “Guidelines 04/2021 on codes of conduct as tools for transfers”. Guidelines 04/2021 specify how codes of conduct can be approved and then used for the purpose of providing appropriate safeguards to transfer data to third countries. The EDPB clarifies that codes can be drawn up only for the purpose of specifying the application of the GDPR (“GDPR codes”), only for the purpose of data transfer to third party countries (“codes intended for transfers”), or for both purposes.

14. According to the EDPB, only when the Commission decides that a code has European-wide validity can non-European controllers and processors rely on the code (EDPB 1/2019, p. 21).

15. The EDPB further explained that certifications need to be produced in a transparent manner. They should include supporting documents and descriptions of corrective actions (EDPB 2018a, p. 7).

16. Guidelines 04/2021 only apply to codes of conduct and not to certifications. The EDPB has yet to issue similar guidelines for certifications.

17. The EDPB clarified that certification bodies are accredited locally and are based on the decision about where to offer certifications. When the certification body seeks to certify against European Data Protection Seals, it would need to seek accreditation based on the location of its EU headquarters. Schemes that are intended for a single member state cannot receive the title of a European Data Protection Seal (EDPB, 2018a).

Mitigating the risk of US surveillance for public sector services in the cloud

This paper is part of Governing “European values” inside data flows, a special issue of Internet Policy Review guest-edited by Kristina Irion, Mira Burri, Ans Kolk, Stefania Milan.

1. Introduction

Never has it been so easy to share data. Decentralised operations can run seamlessly thanks to cloud services, which allow for real-time updates of databases and other documentation. Digitising public services has been an integral part of the European Union’s digital strategies for over a decade (EC, 2010). However, the US surveillance scandals and associated regulatory frameworks make the digitisation of public services in Europe difficult. The use of cloud services entails a loss of “data sovereignty”, as control over data and infrastructures is relinquished to the service provider (Irion, 2012). In cases where transborder data flows are required, jurisdictional issues arise. Transborder flows of personal data often give rise to conflicts between fundamental rights and the competences of surveillance authorities in third countries such as India, China, or the US. As US companies provide the most popular cloud-based services, it is especially difficult to reconcile the requirements to secure data used for public services in the EU with the US regulatory framework. The Clarifying Lawful Overseas Use of Data Act (CLOUD Act) grants law enforcement the power to compel US-based companies to disclose data on their servers if they obtain a warrant. 1 The proposed solution to this issue is a treaty between the EU and the US, under which law enforcement authorities in the EU member states could also be granted access to data held in the US (EC, 2019; Vazquez Maymir, 2020). While such an agreement enables reciprocity and might be seen as an adequate political solution, it only resolves the formal conflict with article 48 of the GDPR, 2 but not necessarily the problems associated with a lack of respect for fundamental rights.

In Sweden, this has led to a deadlock, where public authorities cannot readily move their operations to the cloud and use the services of US companies because sensitive personal data of Swedish citizens could be transferred to US law enforcement without Swedish judicial review, which is illegal under Swedish law (eSam, 2019). Furthermore, foreign court orders outside the scope of mutual legal assistance treaties are not regarded as a legal basis for transfers under the GDPR (European Data Protection Supervisor and the European Data Protection Board, 2019, p. 3). Similarly, a comprehensive impact assessment of Microsoft Office in the Netherlands revealed that the software’s data collection and transfers of telemetry data posed significant risks when used by governmental organisations (Privacy Company, 2018). As a result, Microsoft provided some new settings and adjusted its contracts with the Dutch government—but even those solutions were not completely satisfactory (Privacy Company, 2020). The situation is further complicated by Schrems II, which invalidated the Privacy Shield agreement and stated that transfers to third countries need to be protected by additional safeguards, without specifying what those might be. 3 Even if Microsoft or other cloud service providers are able to accommodate the security needs of the public sector, it is questionable whether they can insulate themselves from the data to such a degree that absolutely no data is within their control.

The purpose of this contribution is to provide an overview of the legal challenges associated with the use of US cloud services in the public sector in the EU. It shows how both administrative law and the EU fundamental rights framework together raise questions on the legality of using such services. Nonetheless, based on the Dutch and Swedish cases, this contribution also highlights to what extent these challenges can be mitigated with technical, organisational, and contractual measures. These can be specified in public procurement provisions.

The present contribution proceeds as follows. Section two briefly outlines how US law enforcement and the intelligence community may (legally) access data held by US cloud service providers. Section three presents to what degree this legal framework is incompatible with European fundamental rights as argued by the Court of Justice of the European Union (CJEU) in Schrems II. This ruling is further analysed in light of the European Data Protection Board’s (EDPB) (2020) recommendations on supplementary measures, the European Commission’s (2020) draft standard contractual clauses (SCCs) and the European Data Protection Supervisor’s (EDPS) and the EDPB’s (2020) joint opinions on said SCCs. Section four discusses how this presents a challenge for public services wishing to use the services of US cloud providers. This is demonstrated through two case studies: the evaluation of the Dutch government’s contract with Microsoft, and the debate surrounding the use of cloud services in the Swedish public sector. The contribution concludes with a discussion on what consequences this has for the future of the digitisation of the public sector in Europe.

2. US access to EU data

The Snowden revelations laid bare the extent to which data on US services were subject to the intelligence gathering operations of the National Security Agency (NSA). What critics of the intelligence community had long suspected was fundamentally confirmed by the leaked documents. The leaks had a major impact on diplomatic relations between the US and the EU member states and, importantly for the focus of this contribution, they put a dent in trust in US tech companies (see Daskal, 2018, p. 236). The surveillance scandal has left a permanent stain on the US tech industry, which desperately tries to rid itself of the image that any data stored on its servers is automatically accessible by the NSA. Google’s (2020) and Microsoft’s (2020a) transparency reports, with their adjoined Frequently Asked Questions, are testimony to this. The existing data sharing frameworks on passenger name records and banking data between the US and the EU have also demonstrated that a global (or at least a transatlantic) framework agreement on the protection of personal data is needed (Vara, 2014, p. 260; Mitsilegas, 2016). In particular, EU citizens’ access to justice in the US has been questioned.

In the aftermath of the NSA revelations the US government engaged in diplomatic damage control. President Obama famously issued Presidential Policy Directive 28 (PPD-28), stating that “All persons should be treated with dignity and respect, regardless of their nationality or wherever they might reside, and all persons have legitimate privacy interests in the handling of their personal information” (The White House, 2014). The consequences of PPD-28 are hard to measure—while the intelligence community is bound by presidential directives, nothing stops the president from overturning the directive and not making the decision public (Dwyer, 2002). Fahey (2019) has demonstrated that the degree of transparency in transatlantic relations is highly dependent on the political landscape in the US, showing that the friendly relations under Obama were highly challenged under Trump. In this environment, guarantees based on presidential directives appear fraught. If, however, for the sake of argument, one takes the directive at face value, it establishes a principle that is not recognised by the US Supreme Court: that non-US persons have fundamental rights not just inside but also outside the US.

The crux of the matter is that the Fourth Amendment of the US Bill of Rights granting “The right of the people to be secure in their persons, houses, papers, and effects” does not apply to foreign nationals abroad (Veneziano, 2019; De Filippi, 2013). The constitutional limits on national surveillance do not apply to foreign intelligence gathering operations. Instead, intelligence activities are regulated by Executive Order (EO) 12,333, which was issued by President Reagan (The White House, 1981). The relevant provisions can be found in section 2.3, which states (among other things) that the collection, retention and dissemination of foreign intelligence information, including information concerning corporations or other commercial organisations, is permissible.

EO 12,333 remains in force, but the surveillance capabilities of the intelligence community have been somewhat modified by section 702 of the Foreign Intelligence Surveillance Act (FISA). 4 Importantly, US tech companies are compelled to assist the government in the following manner:

… the Attorney General and the Director of National Intelligence may direct, in writing, an electronic communication service provider to—

(A) immediately provide the Government with all information, facilities, or assistance necessary to accomplish the acquisition in a manner that will protect the secrecy of the acquisition and produce a minimum of interference with the services that such electronic communication service provider is providing to the target of the acquisition. 5

In other words, US tech companies are compelled to assist the US government and cannot inform their customers that they have been targeted. They can nevertheless publish statistics on governmental requests in their transparency reports. This obviously has profound consequences for US cloud service providers that wish to have the European public sector as their customers—there is no guarantee that the data the European public authorities upload to the cloud will be left alone.

On 13 January 2021, the NSA released a document on the guidelines that govern signals intelligence, the so-called SIGINT Annex. 6 While mostly focused on the protections afforded to US persons, it did include some protections for non-US persons abroad, the main restriction being that data collection must be limited to foreign intelligence requirements, support for military operations, or the protection of the safety of a US person held captive (Kris, 2021, p. 25). There are also further requirements to filter non-pertinent information (Kris, 2021, p. 77). However, foreign intelligence is a very broad category, and such a limitation does not in itself provide any safeguards for the fundamental rights of foreign nationals.

It is not only the intelligence community that wishes to gain access to data held by cloud service providers, but also law enforcement more broadly speaking. The CLOUD Act, which amended the Stored Communications Act, makes it possible for law enforcement to request access to records held by US companies abroad if they can obtain a warrant. The CLOUD Act differs from the surveillance capabilities regulated by EO 12,333 and FISA section 702 in two important ways—first, each request is subject to judicial review, and second, law enforcement will have to demonstrate probable cause to obtain a warrant. 7 According to Microsoft’s (2019) transparency report its enterprise customers are hardly ever targeted by US law enforcement, but individual, regular user accounts across the world are regularly subject to law enforcement requests.

However, from the perspective of public authorities there appear to be fewer concrete concerns related to law enforcement access—but as will be demonstrated later in this contribution, the procedure as such might make the arrangement incompatible with the laws of EU member states. Woods (2018, p. 400) has argued that such conflicts of laws should be taken into account by the court considering the warrant based on comity principles recognised in US law. In short, the principles maintain that courts should consider any conflicts of laws that might arise and refrain from issuing decisions that undermine the laws of another nation. However, whether courts would actually be prone to taking European data protection rights into account, for instance when considering whether law enforcement should be granted access to data held by a US company abroad, is uncertain at best. The European Data Protection Supervisor (EDPS) and the European Data Protection Board (EDPB) (2019, p. 2) have issued an impact assessment in which they cast doubt over whether companies subject to CLOUD Act warrants will challenge them with reference to common law comity.

To conclude, the US surveillance framework is wide-reaching and lacks safeguards for foreign citizens outside the territory of the US. While the public authorities’ access to personal data on US soil may be further limited by law, the power to limit foreign surveillance rests with the president, which makes the US surveillance framework unpredictable and subject to sudden shifts.

3. The question of additional safeguards

3.1 Inadequate decisions, says the CJEU

The US legal framework clearly enables governmental access to data held by US companies—to such a degree that the CJEU has invalidated not one but two Commission (2000; 2016) adequacy decisions based on the Safe Harbor agreement (Schrems I) 8 and the Privacy Shield arrangement (Schrems II). The agreements had enabled international data transfers from the EU to the US even though the latter did not formally offer an adequate level of protection of EU data. 9 Without going into the details of either the Safe Harbor agreement or the Privacy Shield arrangement, it is necessary to point out the conflict of laws that gave rise to the Court’s invalidation of the two decisions.

The central problem with the US surveillance regime outlined in the previous section is that it undermines the right to privacy, the right to data protection, and the right to an effective remedy and to a fair trial as stated by articles 7, 8 and 47 of the Charter of Fundamental Rights of the European Union. Essentially, both decisions were invalidated because the US surveillance framework did not recognise the fundamental rights of non-US persons. Whereas the Commission viewed PPD-28 as testimony to the privacy rights of Europeans, the court did not agree to this conclusion:

It follows therefore that neither Section 702 of the FISA, nor E.O. 12333, read in conjunction with PPD‑28, correlates to the minimum safeguards resulting, under EU law, from the principle of proportionality, with the consequence that the surveillance programmes based on those provisions cannot be regarded as limited to what is strictly necessary. (Schrems II, p. 184)

The invalidation of the Privacy Shield decision did not come as a surprise to data protection lawyers (Krouse, 2018; Callahan-Slaughter, 2016). The surveillance conducted by the US governmental agencies was still neither proportionate nor necessary by European standards, it was not based on European Union or member state laws, and a newly instated Privacy Shield Ombudsperson for handling data was not seen as equivalent to the right to effective redress according to article 47 of the Charter. However, the court did not invalidate Standard Contractual Clauses (SCCs), a contractual arrangement for transferring data to so-called third countries that, for one reason or another, are not able to offer EU data adequate protection. Yet, given that contractual terms bind only the parties to the contract, it is clear that the SCCs will not put a stop to US governmental access to data. Instead, the court referred to recital 109 of the GDPR, which states that controllers can add “other clauses or additional safeguards”, and added that “[the standard data protection clauses] may require, depending on the prevailing position in a particular third country, the adoption of supplementary measures by the controller in order to ensure compliance with that level of protection” (Schrems II, p. 133). Data protection lawyers have struggled with what, exactly, these supplementary measures might be.

In sum, the US approach that limits fundamental rights to its residents in combination with its propensity to use blanket surveillance measures pose a significant problem for the European fundamental rights regime. While the CJEU did leave the door open for some protective measures, exactly what would be considered a sufficient safeguard in light of the legal requirements to grant a third country access to personal data was not addressed by the court.

3.2 A tale of two interpretations

A few months after the Schrems II decision, the EDPB issued its own recommendations on the topic of supplementary measures. From the perspective of EU entities using US cloud services, the news was not good. In the recommendations, the Board stated that if the legal regime in a third country allows for public authorities’ access to data in a manner which goes beyond what is necessary and proportionate in a democratic society, no effective safeguards can be found. Specifically,

where unencrypted personal data is technically necessary for the provision of the service by the processor, transport encryption and data-at-rest encryption even taken together, do not constitute a supplementary measure that ensures an essentially equivalent level of protection if the data importer is in possession of the cryptographic keys. (EDPB 2020, p. 27)

Essentially, the conclusion was that no US Software-as-a-Service (SaaS) solutions would be able to fulfil the conditions of Schrems II. For a SaaS solution to be able to work, the service provider in question needs to have access to the cryptographic keys, at least momentarily. No contractual, organisational or technical measures can currently remedy this problem. Importantly, the EDPB (2020, p. 14) clarified that an assessment should be based on “objective factors”, that is, the legal framework or factual capabilities of public authorities in the third country in question, and not “subjective ones such as the likelihood of public authorities’ access to your data in a manner not in line with EU standards.”
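
To make the technical point concrete, the following minimal sketch in Python (using the cryptography package; the upload and download functions are hypothetical stand-ins, not any provider’s actual API) illustrates a storage-only arrangement in which the encryption key never leaves the data exporter:

from cryptography.fernet import Fernet

# Hypothetical stand-in for a cloud object store: a plain dictionary.
# A real provider would hold these blobs on its servers, but it only
# ever receives ciphertext.
remote_store = {}

def upload(name, blob):
    remote_store[name] = blob

def download(name):
    return remote_store[name]

# The key is generated and kept on-premises by the data exporter
# (e.g. an EU public authority); it is never sent to the provider.
key = Fernet.generate_key()
cipher = Fernet(key)

document = b"Sensitive case file contents"
upload("case-file", cipher.encrypt(document))

# Decryption happens locally after download. Without the key, the
# provider cannot produce intelligible data, even under compulsion.
assert cipher.decrypt(download("case-file")) == document

The limitation is exactly the one the EDPB identifies: as soon as the provider has to process the document rather than merely store it, it needs access to the plaintext, at least momentarily, and this arrangement no longer holds.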

The Commission would nevertheless not support this conclusion. In its draft SCCs, the Commission stated that when controllers or processors warrant that they have no reason to believe that personal data will be disclosed to public authorities, they should take due account of

(i) the specific circumstances of the transfer, including the content and duration of the contract; the scale and regularity of transfers; the length of the processing chain, the number of actors involved and the transmission channels used; the type of recipient; the purpose of processing; the nature of the personal data transferred; any relevant practical experience with prior instances, or the absence of requests for disclosure from public authorities received by the data importer for the type of data transferred. (Commission 2020, clause 2)

The conclusion was the polar opposite of what the EDPB had recommended. As expected, the EDPB and the EDPS (2020) rejected the Commission’s interpretation of Schrems II and recommended that the Commission delete “the content and duration of the contract”; “the scale and regularity of transfers”; “the number of actors involved and the transmission channels used”; and “any relevant practical experience with prior instances, or the absence of requests for disclosure from public authorities received by the data importer” from the clause. The Commission (2021) adopted the updated SCCs on 4 June 2021, but did not accept these suggestions, leaving the disputed wording in recital 20.

In a direct response to the Schrems II ruling, the NSA (2020) published its internal targeting procedures. While most of it is concerned with demonstrating in what way US persons are not targeted, there are some parts which address how foreign intelligence is acquired:

NSA must also reasonably assess, based on the totality of the circumstances, that the target is expected to possess, receive, and/or is likely to communicate foreign intelligence information concerning a foreign power or foreign territory authorized for targeting under a certification or authorization executed by the Director of National Intelligence and the Attorney General in the manner prescribed by section 702 (NSA, 2020, p. 4)

If one follows the Commission’s subjective assessments of unauthorised access, the risk analysis a public authority must make is whether, “based on the totality of the circumstances”, the documents it uploads to the cloud might be regarded as foreign intelligence from a US perspective. This obviously rules out defence departments, ministries, and possibly members of parliament, but what about the provision of welfare services? While the processed personal data may be highly sensitive for the person concerned, it is not likely that this type of information would be deemed “foreign intelligence”, unless the person in question were of interest for other reasons.

Schrems II has led to some difficult discussions at the EU level, given that the EU institutions (EUIs) themselves use US cloud services. On 27 May 2021, the EDPS (2021) opened two investigations on the EUI’s use of Amazon Web Services and Microsoft Office 365. Moreover, Max Schrems’ organisation noyb (2020) filed a complaint with the EDPS against the European Parliament on behalf of six MEPs, claiming that the Parliament’s EcoCare site illegally transferred personal data to the US through the installation of cookies.

While the EU institutions are attempting to find common ground relating to how Schrems II should be interpreted in practice, the use of US cloud services in the EU remains extensive in both the public and private sectors. In the following section, I will look at two concrete cases where data transfers to US cloud service providers have been put into question at the national level.

4. Moving the public sector to the cloud: some fundamental challenges

4.1 The case of Microsoft Office in the Netherlands

It is an established fact that the global cloud market is heavily dominated by US firms (Synergy Research Group, 2020). This is especially true of SaaS solutions. When it comes to all-encompassing productivity software used for editing documents, drafting presentations, and analysing spreadsheets, there are only two major players on the market: Google and Microsoft. While there is scant information on public sector SaaS adoption, SaaS revenues represent two thirds of total public cloud revenues on the EU market (DESI, 2020, p. 9). A Swedish study of cloud use in the public sector revealed that 53 per cent of Swedish public authorities used SaaS (Pensionsmyndigheten, 2015, p. 60). Given that Microsoft does not even sell its Office software to enterprise customers as a stand-alone product, that percentage is likely to be much higher today.

Public contracts are regulated by national procurement legislation and EU directives if the tenders exceed certain monetary thresholds. In the public procurement procedure, a public authority specifies certain criteria that the tenderer should fulfil. Price, economic standing, and technical and professional ability are the most important criteria, but whether price is valued higher than quality depends on the contract in question. For SaaS, the quality assessment should not only include a review of the functionality of the software, but also its level of security and data processing practices.

It speaks volumes that the Dutch government has a dedicated team just for handling its Microsoft contracts, Strategic Vendor Management Microsoft (SLM Rijk). In 2018, SLM Rijk commissioned a data protection impact assessment of Office 365 from Privacy Company, a consultancy specialised in data protection. In their initial report, Privacy Company (2018, p. 107) concluded that “the processing of diagnostic data about the use of the mobile Office apps and the Controller Connected Experiences leads to five high data protection risks. Only Microsoft can effectively mitigate these risks. Government organisations are advised to create policies for their employees to not use Office Online and the mobile Office apps”. One of the biggest problems was that Microsoft had retained controller status for the mobile and online apps, therefore effectively deciding how the data was being processed without the Dutch authorities having a say. 10

Some of the high risks mentioned in the report related to the lack of transparency into what data gets transferred to Microsoft, no way to disable “connected experiences” 11 and employee access to mobile apps, unlawful collection of data through connected experiences, a lack of purpose limitation for the mobile apps and the connected experiences, and not enough control over sub-processors of data (Privacy Company, 2018, pp. 104-5).

The findings from the impact assessment prompted Microsoft to introduce new tools for transparency and to limit the scope of data collection and the use of sub-processors, and new audit rights were granted to the Dutch government (Privacy Company, 2019). As a result, no high risks for using Office software remained according to Privacy Company, but the online and mobile apps were still problematic. A follow-up study in June 2020 concluded that while high risks related to the web and mobile apps remained, Microsoft had agreed to limit their data collection and, crucially, only act as a processor for the mobile and web apps (Privacy Company, 2020, pp. 134-136). These actions, in combination with measures taken at the governmental level, would mitigate the remaining data protection risks. The report recognises that the risk for unauthorised US governmental access remains but determines that the “likelihood … is remote”, resulting in a low risk for data subjects (Privacy Company, 2020, p. 126).

As a result of the impact assessments and the negotiations with the Dutch government, Microsoft (2020b) updated its “Online Services Data Protection Addendum (DPA)” worldwide. In the addendum, Microsoft states that it is primarily regarded as a processor and not a controller for the main online services (Microsoft, 2020b, p. 8). It is worth highlighting that Microsoft used to be the controller for the web and mobile apps. This is a welcome development for European public authorities that wish to use Microsoft’s services. It is nevertheless necessary to limit data flows to Microsoft, which means that some IT expertise is still needed in-house. The primary lesson from SLM Rijk is that the complexity of SaaS solutions requires expert knowledge to properly assess whether or not the services offered are GDPR compliant. Furthermore, it is not enough to review the contracts; data flows need to be examined as well. A combination of legal and technical expertise is necessary to ensure that what is stated on paper also holds true in bits.

Given the complexity of the matter it is doubtful whether smaller municipalities and other authorities are able to negotiate cloud contracts and limit settings to the bare minimum in a way that enables GDPR compliance. Whereas a small administrative entity could, with relative ease, buy a few hundred copies of productivity software, cloud service contracts are no less complex if they concern a hundred users or a hundred thousand users. While the controllership and therefore responsibility for personal data must lie with a public authority, it is not realistic that each public authority negotiates its contracts separately—at least not on the level of data flows and retention policies.

It is also worth noting that insulating the service provider completely from the content is not possible with SaaS—something Microsoft (2020c) also admits. Since full insulation requires that the customer controls the encryption key, web apps are not supported. Microsoft states that “Hold Your Own Key” is “typically suitable only for a small number of documents”. This leads to one major organisational challenge and one major legal challenge.

Microsoft’s description of its Hold Your Own Key service hints at the organisational problem: if the most secure measures are put in place and functionality suffers, employees may push back significantly. As the Clinton email scandal demonstrated, individuals might go against internal regulations and security policies if an external service is easier to use. While serving as secretary of state, Hillary Clinton sent emails containing classified information from her private email server (Labott, 2015). Even though public sector employees may be instructed not to use the cloud features for certain documents, it is possible that less technically oriented employees fail to understand the difference between using Office software on the desktop and in the browser.

The legal challenge is that if Microsoft controls the encryption keys, Microsoft remains at least partly “in control” of customer data, thus being in a position where it can be forced to cooperate with either US law enforcement or the intelligence community, which would be contrary to the EDPB’s recommendations. Whether this hypothetical possibility is a concrete risk for the fundamental rights of EU residents depends on the public authority in question.

4.2 Swedish administrative law and the CLOUD Act

Whereas the data protection impact assessment of Microsoft Office was mostly focused on the GDPR, in Sweden a debate has surfaced surrounding the use of cloud services with reference to the national law on secrecy and publicity. 12 The main point of concern in the Swedish context has not been the foreign intelligence gathering operations but the impact of the CLOUD Act. Different public authorities, cloud service providers and law firms have taken turns debating whether using a US cloud service constitutes an unlawful disclosure of classified information (see eSam, 2019; Frydlinger & Olstedt Carlström, 2020; Delphi, 2020; Westling Palm & Öberg, 2020). A city’s use of Microsoft 365 was even reported to the parliamentary ombudsman (Dataskydd.net, 2020), although the ombudsman decided not to take up the case with reference to ongoing governmental investigations (Justitieombudsmannen, 2020).

According to the Swedish law of secrecy and publicity, certain types of information are regarded as classified, and in order for the information to be disclosed, the authority in control of the information must make an assessment of whether the disclosure could cause harm to either the Swedish public interest or an individual. The threshold for harm depends on the information and the context, and it applies to both personal data and non-personal data. The problem boils down to this: does uploading confidential information to a US cloud service provider’s server in itself constitute an unlawful disclosure of confidential information based on the fact that US cloud service providers can be compelled to disclose information on their customers abroad? Some argue that the legal requirement introduced by the CLOUD Act means that any contract between a cloud service provider and a public authority specifying the secrecy of information will be rendered null and void (Westling Palm & Öberg, 2020). Others argue that the likelihood that US law enforcement would access documents in this way is so low that such an interpretation of the law is absurd (Frydlinger & Olstedt Carlström, 2020). In essence, the Swedish debate anticipated the discussion on supplementary measures after Schrems II: should risk assessments be based on the legal framework, or on the practical circumstances?

In a way, both sides are correct—given the billions of Microsoft customers across the world, it is not likely that US law enforcement would request access to documents held by a small Swedish municipality. On the other hand, if that situation were to occur, the Swedish municipality could not stop the cloud service provider from turning over the documents, which would be a breach of the law of secrecy and publicity. While there is no clear way out of this dilemma, some contractual measures could mitigate the concerns. As a first step, cloud service providers could be contractually obliged to redirect law enforcement agencies directly to the customers, as Microsoft (2020b, p. 7) promises to do in its data protection addendum. Daskal (2018, p. 235) highlights that corporations ultimately decide if they wish to challenge or comply with governmental access requests. It is therefore possible to make this a contractual obligation.

Furthermore, following Woods (2018), it is possible to challenge CLOUD Act warrants with reference to common law comity. The EDPS and the EDPB (2019) have doubts regarding whether companies would actually challenge warrants in this way, but they did not address the possibility of adding such a requirement to the cloud contract—when faced with a warrant regarding information held by a public authority, the cloud service provider should always challenge the request with reference to common law comity. Given the very low occurrence of extraterritorial requests for enterprise customer data (see Microsoft, 2019), such terms could provide a sufficient layer of contractual safeguards. While such terms are meaningless in the face of FISA requests that a) tend to be secret and b) do not require a warrant, at least classified information that holds no foreign intelligence value could be better protected from unsanctioned governmental access.

Somewhat paradoxically, it appears that the most mundane feature—typing away on Google Docs or Microsoft Word in a browser—would be the most challenging to incorporate in a way which is consistent with European fundamental rights. Cloud service providers already offer to store data within Europe, and although that type of requirement is quite meaningless in the face of US governmental access to European data, the data is at least fairly secure from physical intrusion. The data can be further encrypted, and the keys held by the customers, which insulates the cloud service provider from the content. This is a lot less challenging than creating new SaaS solutions, given the enormous advantage especially Microsoft has in productivity software. In fact, a study commissioned by the Swedish Competition Authority indicated that many public authorities tend to specify in their policy documents and procurement procedures that they prefer the proprietary standards and products provided by US technology companies over open standards (Lundell, Gamalielsson, & Tengblad, 2016, pp. 100-105). Another study by the Swedish Legal, Financial and Administrative Services Agency (2019, p. 50) concluded that Google and Microsoft are the only providers of web-based productivity software that can provide the necessary functionality.

The Swedish debate surrounding the CLOUD Act shows that clearing the hurdles associated with GDPR compliance is often not enough—public authorities process personal data for a wide variety of reasons, and national regulatory frameworks may add further restrictions to how data can be processed. The Swedish law on secrecy and publicity was not drafted with cloud services in mind, but as a filter for the otherwise far-reaching transparency of public documents. The underlying idea behind the law is that the administrative entities that gather sensitive data should make risk assessments of whether specific information can be transferred to the public domain. It is ill-fitted for handling routine submissions of large quantities of documents to subcontractors that are only supposed to store the data. Rather, this type of risk analysis is more appropriate to be conducted in the course of a rigorous data protection impact assessment. Here, the question of scale and scope becomes imperative. When a journalist requests access to a file held by the child protective services, the person responsible for the file is the right individual to make the assessment—s/he will know what the concrete risks are for the people mentioned in the file. When the child protective services transfer their entire database to the cloud, they might not possess the necessary expertise to properly gauge the risks. While the nature of the data is important also in that case, the risk analysis takes completely different variables into account. To put it bluntly, child protective services are not trained for making assessments of what constitutes foreign intelligence according to the US intelligence community.

4.3 Selective legal compliance

At the same time as the Schrems II ruling invalidated the Privacy Shield and introduced strict requirements for transferring data to third countries using SCCs, millions of European employees and students are still using, as if nothing had changed, productivity software that the US intelligence community could require access to. To use Svantesson’s (2017, p. 220) terminology, both European customers and US cloud service providers are engaging in “selective legal compliance”. A few months after Schrems II, the EDPS (2020) issued a new strategy for how the EU institutions (EUIs) could comply with the ruling. In it, the EDPS (2020, p. 8) “strongly encourages EUIs to ensure that any new processing operations or new contracts with any service providers does not involve transfers of personal data to the United States”. Given the way SaaS solutions operate, ensuring that not a single transfer of personal data occurs is a daunting task. While data at rest can quite easily remain in Europe—the big technology companies all have data centres in Europe—it is significantly harder to stop all flows of telemetry data associated with the services (see also Christakis, 2020, pp. 69-70). While the Dutch example has shown that at least telemetry data may be limited at the organisational level, web-based and mobile applications need to transfer data to the US for functional purposes. Based on the experiences from the Dutch and Swedish cases, it is nevertheless possible to draw a few conclusions.

First, it is evident that cloud service contracts need to include provisions that force US service providers to challenge law enforcement requests. While there are no guarantees that such protests will be taken into account in US courts, it at least contractually hinders companies from cooperating voluntarily. Second, significant attention should be devoted to scrutinising who determines the means and purposes of the processing, so that public authorities remain controllers for all personal data. Third, to satisfy the EDPB’s conditions, US service providers should be insulated from content and telemetry data to the furthest degree possible. In practice this means limiting the service provider’s access to data, and not submitting encryption keys to the service provider. However, this means that SaaS will not work, which leads to a fourth point: if mobile and web app functionality is needed, it is essentially not possible to comply with Schrems II according to the EDPB’s interpretation. However, the Commission’s interpretation that a subjective assessment is compatible with Schrems II indicates that a detailed risk analysis that thoroughly analyses the nature of the processing and the personal data involved might be sufficient. Lastly, the Dutch case shows that it is necessary to periodically assess the data flows to make sure that no data leaks occur.

5. Towards a European cloud?

After the Snowden revelations a lot more attention has been devoted to analysing the US regulatory framework on governmental access to data. This is a welcome development, because this screening has also resulted in increased knowledge of how personal data is processed, used, and transferred. Nevertheless, it is worth asking whether there is a risk that too much attention is devoted to data transfers that, despite their impermissibility, are unlikely to cause real harm. Is the telemetry data of productivity software the right focus? As US commentators are often keen to point out, European intelligence agencies also engage in significant surveillance operations (Schwartz & Peifer, 2017), but these fall under the list of permissible exceptions in the GDPR and other laws. The latest EU e-evidence proposal is also testimony to European law enforcement agencies’ ambition to access data across borders (Vazquez Maymir, 2020). A consequence might be that more attention is devoted to telemetry data transferred to the US than content data in the EU. Still, claims of hypocrisy fail to consider that at least in Europe, the subjects of surveillance have access to justice, which is not dependent on the nationality of the appellant (Vara, 2014, p. 260).

It is nonetheless clear that for some public authorities, using SaaS by US providers will never be an option due to the sensitivity of the data they process. But it is equally true that a lot of documents get processed that will never be of interest to the US intelligence community. The problem is that public authorities in Europe will not know if and when a person in their files will become a person of interest for the US intelligence community. Most public authorities are presently incapable of making this risk assessment themselves. In October 2020, the Commission Nationale de l'Informatique et des Libertés (CNIL, 2020) decided that the Health Data Hub needed to relocate its data following the Schrems II ruling. The purpose of the Health Data Hub is to centralise all health registries in France. The French government had negotiated a contract with Microsoft, which had ensured that the data was being stored on European soil. However, due to the same issues presented in this contribution, the CNIL did not see this as a sufficient safeguard. Microsoft controlled the encryption keys and could therefore potentially unlock the database, should the US intelligence community so request.

The CNIL has proposed that a potential solution would be to licence the Microsoft product to a European company that does not have significant activity in the US and is therefore protected from FISA or EO 12,333 orders. This way European customers could benefit from the Microsoft product without risking data breaches. It is perfectly imaginable that cloud-based infrastructure or platforms could operate in this way, but it is less likely to work in a SaaS environment that requires constant updates to a range of products. Unfortunately, it appears as if this problem will not be solved until there is a global or transatlantic political solution (see Mitsilegas, 2016). While international frameworks for regulating mass surveillance have been presented, they are not likely to be successful (Gstrein, 2020). In a world of global interdependence (Farrell and Newman, 2019), it is problematic that the country which is home to the most widely used IT services does not recognise the fundamental rights of people of other nations.

6. Conclusion

This contribution has pointed out that public authorities are facing overwhelming legal challenges when they are using US cloud services that provide more functionality than simple data storage. The only way to guarantee compliance with EU data protection jurisprudence is to insulate the service provider completely from the data, which effectively strips the service of any added functionality and renders SaaS completely unusable. Fundamentally, the present dilemma can be summarised in five points:

  1. The US Supreme Court has not granted, and probably will not grant, non-nationals outside its territory fundamental rights.
  2. The US is unlikely to limit surveillance to what is necessary and proportionate by European standards.
  3. The US is unlikely to grant non-US persons access to justice in a manner which fulfils the Charter’s requirements.
  4. Cloud services with other functionality than data storage require that the service provider has, at least momentarily, access to data in the clear.
  5. EU data localisation has no effect, because US-based companies are subject to the demands of US public authorities.

Does this, or should this, mean that no US-based cloud services can be used? While the EDPB’s answer appears to be yes, such a conclusion would lead to a situation where cloud-based software solutions that originate from third countries are unavailable for the public sector in the EU. This has ramifications also for the private sector, which would need to consider the risks in continuing with a practice that the European data protection authorities have, in essence, deemed incompatible with fundamental rights.

The Dutch and Swedish cases demonstrate that there are contractual, organisational, and technical steps available to minimise the risks involved in using US cloud services, but the requirement that no personal data whatsoever can be processed in the clear by a US company is virtually impossible to satisfy without breaking the services completely. Furthermore, the measures are complex, and smaller administrative units cannot perform these tasks alone. The Dutch example has shown that a centralised public procurement procedure is a better option. Large public contracts are far more attractive for cloud service providers, which puts public authorities in a better negotiating position. In the Dutch case the commissioned data protection impact assessments were used to renegotiate the service terms and contracts, effectively raising Microsoft’s data protection standards in the process.

All this goes to show that a transatlantic solution is urgently needed—but for the reasons outlined above, a completely satisfactory one is unlikely. The global frameworks that have been proposed have failed to materialise, and while the US has taken steps to accommodate the needs of EU member states, significant issues remain with the US approach to mass surveillance and EU citizens’ lack of judicial redress.

Acknowledgements

I would like to thank reviewers Marieke de Goede and Marijn Sax for their thoughtful comments and helpful suggestions, editors Ans Kolk and Kristina Irion for their guidance, Thorsten Wetzling for his recommendations, and managing editor Frédéric Dubois for his stylistic remarks. I also want to thank Matthew D. Green for helping me understand cryptography in the cloud.

References

C‐311/18 Data Protection Commissioner vs Facebook Ireland Ltd, Maximillian Schrems (Schrems II), ECLI:EU:C:2020:559 (Court of Justice of the European Union 16 July 2020). https://curia.europa.eu/juris/document/document.jsf?text=&docid=228677&pageIndex=0&doclang=en&mode=req&dir=&occ=first&part=1

C‐362/14, Maximillian Schrems vs Data Protection Commissioner (Schrems I), ECLI:EU:C:2015:650 (Court of Justice of the European Union 6 October 2015). https://curia.europa.eu/juris/document/document.jsf?text=&docid=169195&pageIndex=0&doclang=en&mode=req&dir=&occ=first&part=1

Callahan-Slaughter, A. (2016). Lipstick on a pig: The future of transnational data flow between the EU and the United States. Tulane Journal of International and Comparative Law, 25(1), 239–258.

Christakis, T. (2020). European Digital Sovereignty: Successfully Navigating Between the “Brussels Effect” and Europe’s Quest for Strategic Autonomy [Study (Preprint)]. Multidisciplinary Institute on Artificial Intelligence; Grenoble Alpes Data Institute. https://doi.org/10.2139/ssrn.3748098

Clarifying Lawful Overseas Use of Data Act (CLOUD Act) of 2018. 18 U.S.C. § 2701 et seq. (n.d.).

Commission Nationale de l’Informatique et des Libertés. (2020). Conseil d’état. Section du contentieux. Refere l. 521-2 CJA. Memoire en observations. https://cdn2.nextinpact.com/medias/observations-de-la-cnil-8-octobre-2020-1---1-.pdf

Daskal, J. (2018). Borders and Bits. Vanderbilt Law Review, 71(1), 179–240. https://vanderbiltlawreview.org/lawreview/2018/01/borders-and-bits/

Dataskyddnet. (2020). JO-anmälan av Göteborgs Stad. https://dataskydd.net/files/JO-Anmalan_Goteborgs_stad_molntjanster.pdf

De Filippi, P. (2013). Foreign clouds in the European sky: How US laws affect the privacy of Europeans. Internet Policy Review, 2(1). https://doi.org/10.14763/2013.1.113

Delphi. (2020, May 28). Replik på eSams uttalanden om ”röjande-begreppet” enligt OSL [Blog post]. Delphi Tech Blog. https://www.delphi.se/sv/tech-blog/replik-pa-esams-uttalanden-om-rojande-begreppet-enligt-osl/.

Digital Economy Society Index 2020: Integration of digital technology. (2020). [Report]. European Commission. https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=67076

Dwyer, C. M. (2002). The US Presidency and national security directives: An overview. Journal of Government Information, 29(6), 410–419. https://doi.org/10.1016/j.jgi.2002.05.001

eSAM. (2019). Kompletterande information om molntjänster. https://www.esamverka.se/download/18.4c1250a116d1bb3a3f094fe1/1568977769756/Kompletterande%20information%20om%20molnfr%C3%A5gan%202019-09.pdf

Commission Decision of 26 July 2000 pursuant to Directive 95/46/EC of the European Parliament and of the Council on the adequacy of the protection provided by the safe harbour privacy principles and related frequently asked questions issued by the US Department of Commerce (notified under document number C(2000) 2441) (Text with EEA relevance.), Pub. L. No. 2000/520/EC:, OJ L 215 7 (2000). https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:32000D0520.

European Commission. (2010). Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions A Digital Agenda for Europe. https://eur-lex.europa.eu/legal-content/en/ALL/?uri=CELEX%3A52010DC0245.

Commission Implementing Decision (EU) 2016/1250 of 12 July 2016 pursuant to Directive 95/46/EC of the European Parliament and of the Council on the adequacy of the protection provided by the EU-U.S. Privacy Shield (notified under document C(2016) 4176) (Text with EEA relevance), OJ L 207/1 (2016). https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:32016D1250.

European Commission. (2019). Security Union: Commission receives mandate to start negotiating international rules for obtaining electronic evidence. https://ec.europa.eu/commission/presscorner/api/files/document/print/en/ip_19_2891/IP_19_2891_EN.pdf.

European Commission. (2020). ANNEX to the COMMISSION IMPLEMENTING DECISION on standard contractual clauses for the transfer of personal data to third countries pursuant to Regulation (EU) 2016/679 of the European Parliament and of the Council. https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/12741-Commission-Implementing-Decision-on-standard-contractual-clauses-for-the-transfer-of-personal-data-to-third-countries.

Commission Implementing Decision (EU) 2021/914 of 4 June 2021 on standard contractual clauses for the transfer of personal data to third countries pursuant to Regulation (EU) 2016/679 of the European Parliament and of the Council (Text with EEA relevance), Pub. L. No. C/2021/3972, OJ L 199, 31 (2021). http://data.europa.eu/eli/dec_impl/2021/914/oj.

European Data Protection Board. (2020). Recommendations 01/2020 on measures that supplement transfer tools to ensure compliance with the EU level of protection of personal data. https://edpb.europa.eu/sites/edpb/files/consultation/edpb_recommendations_202001_supplementarymeasurestransferstools_en.pdf.

European Data Protection Supervisor. (2020). Strategy for Union institutions, offices, bodies and agencies to comply with the ‘Schrems II’ Ruling [Strategy]. European Data Protection Supervisor. https://edps.europa.eu/sites/edp/files/publication/2020-10-29_edps_strategy_schremsii_en_0.pdf

European Data Protection Supervisor. (2021). The EDPS opens two investigations following the “Schrems II” Judgement [Press Release]. Press & Publications. https://edps.europa.eu/press-publications/press-news/press-releases/2021/edps-opens-two-investigations-following-schrems_en

European Data Protection Supervisor & European Data Protection Board. (2019). ANNEX. Initial legal assessment of the impact of the US CLOUD Act on the EU legal framework for the protection of personal data and the negotiations of an EU-US Agreement on cross-border access to electronic evidence. European Data Protection Supervisor. https://edps.europa.eu/sites/edp/files/publication/19-07-10_edpb_edps_cloudact_annex_en.pdf

Fahey, E. (2019). Transparency in transatlantic trade and data law. In V. Abazi & G. Rosen (Eds.), Foreign Policy Secrets in the Age of Transparency. Oxford University Press.

Farrell, H., & Newman, A. L. (2019). Of privacy and power: The transatlantic struggle over freedom and security. Princeton University Press.

Frydlinger, D., & Olstedt Carlström, C. (2020). Molntjänster, offentlighet och sekretess i offentlig sektor: Utredning om och förslag till lagstiftning rörande offentlig sektors möjligheter att använda publika molntjänster [Study]. Cirio Advokatbyrå AB. https://cirio.se/assets/uploads/images/hero-images/Molntjanster-offentlighet-och-sekretess-i-offentlig-sektor-Cirio-12-maj-2020-002.pdf

Google. (2020). Global requests for user information. Google Transparency Report. https://transparencyreport.google.com/user-data/overview

Gstrein, O. (2020). Mapping power and jurisdiction on the internet through the lens of government-led surveillance. Internet Policy Review, 9(3). https://doi.org/10.14763/2020.3.1497

Executive Order 12,333, Pub. L. No. 46 FR 59941, 3 CFR, 1981 Comp. (1981).

Irion, K. (2012). Government Cloud Computing and National Data Sovereignty. Policy & Internet, 4(3), 40–71. https://doi.org/10.1002/poi3.10

Justitieombudsmannen. (2020). Dnr. 551-2020.

Kris, D. S. (2021). The NSA’s New SIGINT Annex. Journal of National Security Law & Policy. https://jnslp.com/2021/01/19/the-nsas-new-sigint-annex/

Krouse, W. (2018). The inevitable demise of privacy shield: How to prepare. Computer and Internet Lawyer, 35(6), 19–22.

Labott, E. (2015, July 24). Official: Clinton emails included classified information. CNN. https://edition.cnn.com/2015/07/24/politics/hillary-clinton-email-justice-department.

Lundell, B., Gamalielsson, J., & Tengblad, S. (2016). IT-standarder, inlåsning och konkurrens: En analys av policy och praktik inom svensk förvaltning (Commissioned research report 2016:2). Konkurrensverket (Swedish Competition Authority). https://www.konkurrensverket.se/globalassets/publikationer/uppdragsforskning/forsk_rapport_2016-2.pdf

Microsoft. (2019). Law Enforcement Requests Report. Requests received for all Microsoft Services from July to December 2019. Microsoft. https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE4sg0d

Microsoft. (2020a). Hold your own key (HYOK) details for Azure Information Protection. Microsoft Docs. https://docs.microsoft.com/en-us/azure/information-protection/configure-adrms-restrictions

Microsoft. (2020b). US National Security Orders Report [Report]. Microsoft. https://www.microsoft.com/en-us/corporate-responsibility/us-national-security-orders-report.

Microsoft. (2020c). Microsoft Online Services Data Protection Addendum. Microsoft; Internet Archive. https://www.microsoftvolumelicensing.com/Downloader.aspx?DocumentId=17880

Mitsilegas, V. (2016). Surveillance and digital privacy in the transatlantic war on terror: The case for a global privacy regime. Columbia Human Rights Law Review, 47(3), 1–77.

noyb. (2020). Complaint under article 63(1), 67 Regulation 2018/1725. Noyb Case-No: C-035. noyb. https://noyb.eu/sites/default/files/2021-01/NOYB%20COMPLAINT%20C035_Redacted.pdf

NSA. (2020). NSA’s 2019 Section 702 Targeting Procedures, Sep. 17, 2019. Office of the Director of National Intelligence (ODNI). https://www.intelligence.gov/assets/documents/702%20Documents/declassified/2019_702_Cert_NSA_Targeting_17Sep19_OCR.pdf.

Palm, K. W., & Öberg, N. (2020, May 26). Kommentar till kritisk rapport om molntjänster i offentlig sektor. eSam. https://www.esamverka.se/aktuellt/nyheter/nyheter/2020-05-26-kommentar-till---kritisk-rapport-om-molntjanster-i-offentlig-sektor.html

Pensionsmyndigheten. (2015). Molntjänster i staten. En ny generation av outsourcing [Report]. Pensions Myndigheten. https://www.pensionsmyndigheten.se/content/dam/pensionsmyndigheten/blanketter---broschyrer---faktablad/publikationer/svar-p%C3%A5-regeringsuppdrag/2016/Uppdrag%20att%20analysera%20potentialen%20f%C3%B6r%20anv%C3%A4ndning%20av%20molntj%C3%A4nster%20i%20staten%20.pdf

Privacy Company. (2018). DPIA diagnostic data in Microsoft Office Proplus [DPIA report]. Ministry of Justice and Security. https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=&ved=2ahUKEwjPm9DxwrLzAhUiRkEAHfr3A84QFnoECAIQAQ&url=https%3A%2F%2Fwww.rijksoverheid.nl%2Fbinaries%2Frijksoverheid%2Fdocumenten%2Frapporten%2F2018%2F11%2F07%2Fdata-protection-impact-assessment-op-microsoft-office%2FDPIA%2BMicrosoft%2BOffice%2B2016%2Band%2B365%2B-%2B20191105.pdf&usg=AOvVaw2739WkmYX_ksXqQ5Wj1njb

Privacy Company. (2019). DPIA Office 365 ProPlus version (June 2019) [DPIA Report]. Ministry of Justice and Security. https://www.government.nl/documents/publications/2019/07/22/dpia-office-365-proplus-version-1905

Privacy Company. (2020). DPIA Office 365 for the Web and mobile Office apps. Data protection impact assessment on the processing of diagnostic data [DPIA report]. Ministry of Justice and Security. https://www.rijksoverheid.nl/binaries/rijksoverheid/documenten/rapporten/2020/06/30/data-protection-impact-assessment-office-365-for-the-web-and-mobile-office-apps/DPIA+Office+for+the+Web+and+mobile+Office+apps+30+June+2020.pdf

Schwartz, P. M., & Peifer, K. (2017). Transatlantic data privacy law. Georgetown Law Journal, 106(1), 115–180. https://www.law.georgetown.edu/georgetown-law-journal/in-print/volume-106/volume-106-issue-1-november-2017/transatlantic-data-privacy-law/

Svantesson, D. J. B. (2017). Solving the internet jurisdiction puzzle. Oxford University Press. https://doi.org/10.1093/oso/9780198795674.001.0001

Swedish Legal, Financial and Administrative Services Agency. (2019). Webbaserat kontorsstöd. (Pre-study report Dnr 23.2-6283-18.).

Synergy Research Group. (2020, May 7). Amazon & Microsoft Lead the Cloud Market in all Major European Countries [News release]. GlobeNewswire. https://www.globenewswire.com/news-release/2020/05/07/2029605/0/en/Amazon-Microsoft-Lead-the-Cloud-Market-in-all-Major-European-Countries.html

The White House. (2014, January 17). Presidential Policy Directive 28: Signals Intelligence Activities [Statement]. The White House Briefing Room. https://obamawhitehouse.archives.gov/the-press-office/2014/01/17/presidential-policy-directive-signals-intelligence-activities.

Vara, J. (2014). Transatlantic counterterrorism cooperation agreements on the transfer of personal data: A test for democratic accountability in the EU. In E. Fahey & D. Curtin (Eds.), A Transatlantic Community of Law: Legal Perspectives on the Relationship between the EU and US Legal Orders (pp. 256–288). Cambridge University Press. https://doi.org/10.1017/CBO9781107447141.017

Vazquez Maymir, S. (2020). Anchoring the Need to Revise Cross-Border Access to E-Evidence. Internet Policy Review, 9(3). https://doi.org/10.14763/2020.3.1495

Veneziano, A. (2019). Applying the US Constitution Abroad, from the Era of the US Founding to the Modern Age. Fordham Urban Law Journal, 46(3), 602–640. https://ir.lawnet.fordham.edu/ulj/vol46/iss3/4

Woods, A. K. (2018). Litigating Data Sovereignty. Yale Law Journal, 128(2), 328–406.

Funding

This research was funded by the Academy of Finland, decision no. 320895. The contribution was written during a visiting fellowship at the European University Institute.

Declaration of novelty and no competing interests

By submitting this manuscript I declare that this manuscript and its essential content have not been published elsewhere and are not under consideration for publication in another outlet.

No competing interests exist that have influenced or can be perceived to have influenced the text.

Footnotes

1. 18 U.S.C. § 2701 et seq.

2. Article 48 specifies that “Any judgment of a court or tribunal and any decision of an administrative authority of a third country requiring a controller or processor to transfer or disclose personal data may only be recognised or enforceable in any manner if based on an international agreement.”

3. Case C‑311/18, Data Protection Commissioner vs Facebook Ireland Ltd, Maximillian Schrems, (Schrems II) ECLI identifier: ECLI:EU:C:2020:559.

4. 50 U.S.C. ch. 36 § 1801 et seq.

5. 50 U.S.C. ch. 36 § 1881(a)(i)(1)(A).

6. Procedures governing the conduct of DoD intelligence activities: Annex governing signals intelligence information and data collected pursuant to section 1.7(c) of E.O. 12333, https://assets.documentcloud.org/documents/20454757/redacted-annex-dodm-524001-a.pdf.

7. 18 U.S. Code § 2703(c)

8. C‑362/14, Maximillian Schrems v. Data Protection Commissioner 6 October 2015, (Schrems I) ECLI identifier: ECLI:EU:C:2015:650

9. Adequacy decisions were issued with reference to article 25 of the Data Protection Directive (95/46/EC) and are now issued with reference to article 45 of the General Data Protection Regulation (2016/679).

10. According to EU jurisprudence, a controller defines the nature and purpose of the processing, while the processor only processes the personal data on documented instructions, see article 28 of the GDPR.

11. Services include Smart Lookup, Office Store, and 3D Maps.

12. Sw. Offentlighets- och sekretesslag (2009:400)(OSL).


Extraterritorial application of the GDPR: promoting European values or power?


Section 1. Introduction

In the recent history of the European Union (EU) few legislative acts gained as much attention as the 2016 General Data Protection Regulation (GDPR; Kantar, 2019). Pursuant to Article 97 GDPR, the EU Commission published an evaluation of the regulation on 24 June 2020, in which it states: ‘The GDPR has already emerged as a key reference point at international level and acted as a catalyst for many countries around the world to consider introducing modern privacy rules. This trend towards global convergence is a very positive development that brings new opportunities to better protect individuals in the EU when their data is transferred abroad while, at the same time, facilitating dataflows’ (European Commission, 2020a, p. 12).

On the one hand, the regulation has been hailed as the new global ‘gold standard’ (Rustad & Koenig, 2019, p. 366). On the other hand, the attention it receives is surprising when considering the substantive provisions of GDPR in the larger context of the historic development of data protection law (Hoofnagle et al., 2019, pp. 69–72; Rustad & Koenig, 2019, pp. 368–369). Core principles and requirements such as Articles 5 and 6 GDPR are only incremental improvements on what was already established across many European countries in the 1970s and 1980s (Ukrow, 2018, pp. 239–247). Certainly, some elements, such as a ‘right to be forgotten’ (RTBF; Article 17 GDPR), a right to data portability (Article 20 GDPR), or the requirement for mechanisms to mitigate the risks of automated individual decision-making (‘artificial intelligence’; Article 22 GDPR), are innovative. However, it is precisely these provisions that require more detailed interpretation by courts and by national data protection authorities via the European Data Protection Board (EDPB). Additionally, the precise interpretation of these rights is subject to intensive academic discourse and scrutiny (see e.g. Wachter et al., 2017).

Combining the arguments that the core principles of GDPR are well known and that the innovative elements require better understanding, one might conclude that it is probably not the substantive dimension of the regulation that explains its impact (Hoofnagle et al., 2019, pp. 66, 97). Rather, it seems that procedural and architectural elements of the framework require attention (Rojszczak, 2020, pp. 31–34). From an intra-EU perspective, the establishment of the GDPR marks a shift towards almost fully harmonised European law, which entails direct effects for the individual (‘data subject’). In other words, the role of member states in interpreting its provisions is being limited as interpretation becomes more centralised (European Data Protection Board, 2018, p. 4). At the same time, the emergence of this ‘unified block’ also has consequences for actors outside the EU, especially since the regulation contains, with Article 3, a provision on territorial scope with considerable extraterritorial effect (de Hert & Czerniawski, 2016, pp. 236–240).

This contribution analyses central provisions and mechanisms of the GDPR that result in the extraterritorial effect of the framework. These include Article 3 of the GDPR, as well as the legal regime that enables the European Commission to establish whether personal data is ‘adequately protected’ in other countries of the world. We investigate whether internal unification combined with extraterritorial reach is beneficial for the promotion of European values in data flows inside and outside the EU in the longer term. While scholars had already started to speculate about the effects of extraterritorial application before the GDPR became applicable (de Hert & Czerniawski, 2016, p. 230), recent European and national jurisprudence on the RTBF (Gstrein, 2020, pp. 136–139), as well as criticism of the Irish data protection authority’s seeming lack of rigour in enforcing European values in transatlantic relations—as highlighted by the Court of Justice in ‘Schrems II’ (Tracol, 2020)—raise the question of whether the extraterritorial effect in fact overburdens citizens and businesses, as well as public institutions and political actors.

Considering options for a better future with high and effective data protection standards, we suggest that, rather than relying on an extraterritorial effect that rests on a power-based approach via the ‘Brussels Effect’, the universal protection and promotion of European values will be more sustainable if value-based strategies are adopted. These could manifest in enhanced cooperation and traditional harmonisation of legal frameworks, with the objective of building broader international consensus around central regulatory principles, institutional requirements, and effective safeguards and remedies for those affected by the abuse of personal data. Certainly, some will doubt whether European data protection standards have the potential to form the basis for a broader multilateral agreement. Nevertheless, comparative research already shows that most of the 145 national frameworks around the globe regulating privacy and data protection at the end of 2020 apply the principle-based and technology-neutral ‘omnibus model’, replicating the distinctive essence of European data protection laws in their respective legal systems (Greenleaf, 2021a). In other words, while the extraterritorial effect of the GDPR has only applied since 2018, countries around the world started much earlier to enact and upgrade national laws to mirror what is ‘arguably the world’s best practice’ (Greenleaf, 2021a, p. 5).

Therefore, in the area of data protection it might be best for the promotion of European values if the EU continues to develop and deliver high standards, while actively engaging in international fora and multilateral exchange—as long and as far as this opens avenues to establish value-based governance frameworks. At the same time, effective and comprehensive enforcement of existing provisions on member states’ territories is important to maintain credibility. In conclusion, we argue that extraterritorial application of European data protection law is not a preferable strategy to promote European values sustainably. Rather, it raises false expectations about the universality and enforceability of individual rights.

Section 2. The ‘Brussels Effect’ and European values

In 2012 Anu Bradford introduced the concept of the ‘Brussels Effect’, which describes ‘Europe’s unilateral power to regulate global markets’ (Bradford, 2012, p. 3). She argues that any political actor able to leverage and combine the five factors of market size, regulatory capacity, stringent standards (e.g. consistent approach to data protection), inelastic targets (e.g. non-mobile consumers), and non-divisibility (e.g. mass-production cost advantage for manufacturers and service providers) will be able to set the global regulatory standard for a certain regulatory area. According to her theory, the EU was able to increasingly establish such standards since the 1990s and therefore has become the ‘global regulatory hegemon’ (Bradford, 2020, pp. 25, 64). In simple terms, most global corporations adopt the European requirements for designing their products and services since this allows them to stick to a single regulatory regime. Even if this regime requires more costly adjustments compared to others, producers prefer the EU model since it enables them to operate and refine only one mode of production that is globally accepted. Therefore, products and services designed to comply with EU standards can be marketed globally.

According to Bradford’s studies, examples of areas where the effect can be witnessed include market competition, consumer health and safety, environmental law and the digital economy (Bradford, 2020, pp. 99–231). Her analysis includes the development of the GDPR with the extraterritorial effect that is relevant in the context of this article (Bradford, 2020, pp. 131–169). While all five factors are relevant for the establishment of the GDPR as a global standard, the extraterritorial effect is mainly created by a combination of market size and the non-mobile ‘data subjects’ that represent the inelastic targets in the context of Bradford’s theory.

For the purposes of this article, we consider the ‘Brussels Effect’ a power-based approach, since it combines elements of political and economic capability to determine societal and normative developments in a particular area. The theory emphasises economic scale and political influence, which are more important than European values as such. On a philosophical level, the five factors of the Brussels Effect and many of the EU’s efforts to spread GDPR norms follow the principles of the framework developed by Immanuel Kant for the establishment of perpetual peace. Specifically, the second part of the definitive articles, which refers to a federal union of sovereign republican states joined by common interests in trade rather than by global civil rights (Weltbürgerrecht), indicates the means through which joint norms should be established—a balance of economic interests rather than a shared belief in values (Kant, 1796). We contrast this power-based approach with value-based strategies that emerge, for instance, from human rights law. Kant has been criticised by cosmopolitan scholars, among them Jürgen Habermas, for failing to transcend power politics and for being unable to believe in any moral motivation to create and maintain a federation of free states (Habermas, 2000, p. 171). After all, Kant’s view of the nature of man is still one that is determined by greed and violence, albeit one that can be compelled by reason (Zwitter, 2015).

To define value-based strategies, it is necessary to consider the concept of ‘value’. In his seminal work ‘Being and Nothingness’ (L'être et le néant), first published in 1943, the French philosopher Jean-Paul Sartre suggests that a value is an entity that exists in the human mind as what it currently is (Dasein), and as what it lacks (manqué) (Sartre, 2020, pp. 136–162). He uses the parable of the moon for illustration. Over time, it will appear as a crescent moon and ‘grow’ until it appears as a full disc. Regardless of its present form or colour, humans all over the earth refer to this ever-changing entity as one and the same. This common reference object is also one that enables a discourse amongst global citizens. This discourse allows one to transcend the provincial limits of particular forms of our lives and specific ethical norms, onto a level where a greatest common denominator can be universally agreed upon. The important difference between norms established through discourse rather than through power is the free consent of all parties and their belief in that norm. A value-based strategy, founded on the free consent and belief of the consenting parties, might, we argue, be a stronger foundation for realising common norms and for establishing lasting relationships between all parties to the agreement.

Now moving from philosophical considerations to the perspective of European integration, regional institutions such as the Council of Europe and the EU were historically established to ‘achieve greater unity between the States of Europe through respect for the shared values of pluralist democracy, the rule of law and human rights’ (Polakiewicz, 2021, p. 2). These three overarching categories of values can also be identified in Article 2 of the Treaty on European Union, which contains an overview of the values of the EU. When it comes to human rights, including privacy, the European Convention on Human Rights (ECHR) has become the central legal reference framework in Europe since the Second World War. The ECHR is usually described as a ‘living instrument’ since the interpretation of the rights (values) it enshrines changes over time (Theil, 2017, pp. 589–590). It is also essential for the protection of fundamental rights in the EU, according to Article 6 paragraphs 2 and 3 of the Treaty on European Union. Therefore, human rights treaties such as the ECHR and the later developed and corresponding (see Article 53) Charter of Fundamental Rights of the EU (CFEU) enshrine ever-changing values that are observed and interpreted on a case-by-case basis by institutions such as the Court of Justice of the European Union in Luxembourg (CJEU). From the perspective of EU law, regulatory frameworks such as the GDPR need to mirror the human rights (or values) enshrined in the ECHR and CFEU (e.g. GDPR recitals 1, 2, 4, 104).

Whereas power-based approaches focus on elements of political and economic capability to determine societal developments, value-based strategies, such as the ECHR and CFEU, emphasise human dignity, which is considered as the root of modern human rights law (Petersen, 2020). This common norm established as universally valid through discourse provides a stronger and longer-lasting foundation than a power-based approach which focuses on the means (of power capabilities) to achieve norm universality.

We argue that power-based approaches that result in extraterritorial effect do not primarily address the fundamental values at stake. At the same time, this approach to extraterritorial application of norms disrespects the sovereignty and rights of actors that are subject to it (Kamminga, 2020). Certainly, as in the case of the GDPR, some power-based attempts might come with an opportunity to replace less dignified approaches to data protection—such as the protection of personal data as a mere consumer right (Bradford, 2020, pp. 140–141)—with ones that do address it with human dignity at their core (see also Art 1 CFEU). In other words, we acknowledge that the GDPR has had a very positive influence on the strengthening of data protection rights. However, this emphasis on the substance of the right (or the essence of the value) is not a given. In the case of the data protection regulation it is the result of an incremental development of substantive privacy and data protection standards that has taken place over more than fifty years. This process started with the first regional data protection law in Hesse in Germany in 1970 and has continued since then on many different political and institutional levels (González Fuster, 2014, pp. 213–248; Greenleaf, 2021a, p. 3; Ukrow, 2018, pp. 239–340; van der Sloot, 2014, pp. 307–310).

In conclusion, the extraterritorial application of a power-based approach will only allow European values to be governed within data flows for as long as the political actor promoting this position (1) is able to align the ‘effect’ factors and (2) continues to require the value-promoting outcome through the regulatory framework. The Brussels Effect and all of its alleged benefits are potentially exchangeable with a ‘Beijing Effect’, to name just one example. Bradford herself doubts that the Chinese authorities will be able to achieve similar authority, essentially because the relative growth rate of the Chinese economy might be closer to that of EU countries by the time the institutional capabilities needed to create the effect are in place. At the same time, the average age in future Chinese society will be higher, and the Brussels Effect will have already influenced standards all over the world, including in China itself (Bradford, 2020, pp. 266–270). One of the core differences might be that many norms spread extraterritorially by the EU already align more closely with universal normative principles and might, therefore, be more readily accepted. Even if the Brussels Effect does not disappear anytime soon, the question remains whether the value of data protection and privacy can be guaranteed at a high level should the EU and its member states change their political priorities. Before going on to illustrate this conflict between a power-based approach and a value-based strategy in the case studies below, we consider the extraterritorial effect of the GDPR by analysing its legal architecture.

Section 3. Legal architecture and extraterritorial application

The GDPR, adopted in 2016, replaced the European Community’s Data Protection Directive 95/46/EC of 1995 (DPD). Data flows have become increasingly global and relevant for business and governance since the time the DPD was drafted and negotiated. This created the need for more detailed regulation (Kuner, 2010, pp. 246–247) and the requirement to reconsider territorial scope when developing new legal frameworks (de Hert & Czerniawski, 2016, p. 230). The territorial restriction of application has gradually been loosened to address the changed technicalities around the collection, storage, processing and sharing of personal data. In fact, Svantesson rightly flags that the term ‘territorial scope’ has become misleading on the one hand, while remaining essential for the applicability and enforceability of the GDPR on the other. Hence, territorial scope should not be understood literally. Rather, the concept expresses how GDPR positions itself in the international data sphere, particularly when it comes to the protection of personal data created through the monitoring and profiling of persons by corporations and public entities (Svantesson, 2019, p. 74). In this section we analyse Article 3 GDPR, which defines the territorial scope of EU data protection law. Additionally, we briefly analyse Articles 44-50 GDPR, which regulate transfers of personal data from the EU to third countries and international organisations, with a particular focus on the adequacy decisions specified in Article 45 GDPR. This article sets forth the procedure and standards that allow the European Commission to assess whether non-EU countries and territories have an adequate level of data protection when compared with the GDPR (Kuner, 2019, p. 774).

3.1. Article 3 GDPR

In principle, Article 3 and the corresponding recitals 22-25 of GDPR trigger territorial application via two elements: the presence of a relevant establishment of a controller or processor on EU territory, or the targeting or monitoring of data subjects associated with the EU (Van Alsenoy, 2018, pp. 78–79). Article 3 GDPR consists of three paragraphs. 1 In summary, paragraph 1 remains relatively close to the historic nucleus of Article 4 DPD, whereas paragraphs 2 and 3 shift the focus clearly beyond the territory of the EU (Svantesson, 2019, pp. 85–95).
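
For readers who prefer a schematic overview, the two triggers can be condensed into a short illustrative sketch. The Python below is purely a reading aid under our own simplifying assumptions: the field and function names are hypothetical, and it deliberately ignores the interpretive questions (what counts as an ‘establishment’, ‘offering’ or ‘monitoring’) discussed in the remainder of this section.

```python
from dataclasses import dataclass

@dataclass
class ProcessingOperation:
    established_in_eu: bool            # controller or processor has an establishment in the Union (Art. 3(1))
    in_context_of_establishment: bool  # processing occurs in the context of that establishment's activities
    offers_goods_or_services: bool     # offering goods or services to data subjects in the Union (Art. 3(2)(a))
    monitors_behaviour_in_eu: bool     # monitoring behaviour taking place within the Union (Art. 3(2)(b))
    member_state_law_by_pil: bool      # member state law applies by virtue of public international law (Art. 3(3))

def territorially_applicable(op: ProcessingOperation) -> bool:
    """First-pass reading of Article 3 GDPR; a reading aid, not a legal test."""
    if op.established_in_eu and op.in_context_of_establishment:
        return True  # Art. 3(1): applies regardless of where the processing itself takes place
    if op.offers_goods_or_services or op.monitors_behaviour_in_eu:
        return True  # Art. 3(2): targeting or monitoring of data subjects in the Union
    return op.member_state_law_by_pil  # Art. 3(3): e.g. diplomatic missions and consular posts

# An overseas platform that merely monitors the behaviour of users in the Union
# already falls within the territorial scope under this reading.
overseas_platform = ProcessingOperation(False, False, False, True, False)
assert territorially_applicable(overseas_platform)
```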

Keeping in mind that the EU is first and foremost an economic community, the point of departure of territorial scope is an establishment on the territory of the EU, which is effectively exercising activities in which personal data is being processed. Both criteria named in Article 3 have been subject to considerable jurisprudence of the CJEU in cases such as Google Spain (C-131/12), Weltimmo (C-230/14) and Verein für Konsumenteninformation (C-191/15) (Van Alsenoy, 2018, pp. 80–83). This might have sparked the desire of policymakers to expand the territorial scope further once it was clear the DPD would be replaced with GDPR. Hence, Article 3 paragraph 1 GDPR includes not only the ‘controller’ of the data processing operation, but also the ‘processor’. The EDPB attempted to clarify these concepts through non-legally binding guidelines which were adopted on 2 September 2020. There it states that a controller must decide on both purpose and means of the use of personal data, whereas a processor processes data on behalf of the controller, providing technical and organisational support. A processor can be a natural or legal person, as well as a public authority, agency or another body (European Data Protection Board, 2020a, pp. 3–4). Finally, an element of extraterritorial application was added to paragraph 1 by stating that the rules of GDPR apply regardless of whether processing takes place on Union territory or not (Van Alsenoy, 2018, pp. 79–80).

Nevertheless, the most radical shift towards extraterritorial application comes in paragraph 2 of Article 3 GDPR. While the heritage provision in the DPD took the use of certain equipment as reference point, the GDPR focuses on data gathering from European data subjects. As mentioned in recital 14 of the GDPR, the concept of data subject is not limited to natural persons with EU citizenship, permanent residence, or any other legal status (European Data Protection Board, 2018, p. 14). The framework applies to any data subject in the Union if the goods or services are offered to this individual, regardless of where the offer ‘comes from’, or whether goods or services provided are ‘free’. Furthermore, GDPR also applies if data subjects are ‘monitored’ in their behaviour. While the formulation of the paragraph makes clear that the intention of the drafters of the GDPR was to give it an extraordinarily broad territorial scope, it also creates considerable challenges when trying to interpret and apply it (Svantesson, 2019, p. 95). As noted by Gömann, ‘it seems unlikely that the monitoring approach of Article 3(2)(b) GDPR will in practice provide for much more than a declaration of political intent’ (Gömann, 2017, p. 588). With such a broad coverage, it is difficult to think of an operation involving personal data carried out by a significant actor in the international data sphere which is not within the territorial scope of GDPR, as most globally available digital services and platforms will at least potentially have to consider that they target EU data subjects.

Moving on to Article 3 paragraph 3 GDPR, the historic background and the corresponding recital suggest that this provision has a specific and limited scope which only relates to the communication of EU member states with their diplomatic missions and consular posts. This also seems to be confirmed by the EDPB in the guidelines on territorial scope adopted on 12 November 2019, where the examples mention a consulate of an EU member state operating in the Caribbean, or a cruise ship serving customers on the high sea (European Data Protection Board, 2018, pp. 22–23). Nevertheless, since the legally binding text of the provision itself is not very specific or limited and seems to be based on questionable interpretations of public international law (Svantesson, 2019, pp. 92–95), it is not helpful in limiting and precisely understanding the territorial scope of GDPR either.

The extensive territorial scope of the GDPR makes it difficult to define its effective—or even intended—reach. There is no significant limiting factor on its scope. Certainly, EU legislators attempted to create a framework for comprehensive protection of the rights of data subjects with an eye towards establishing a level economic playing field for competition in data-driven services across the EU and worldwide (European Data Protection Board, 2018, p. 4). However, this results in a situation where global actors in the digital sphere—such as large digital platforms or manufacturers of consumer electronics which market their products in several regions—have to decide whether GDPR applies in its entirety, with all compliance requirements for their operations, or not at all. This conclusion is in line with the power-based approach that we defined in Section 2. Therefore, it is fair to state that with such an extensive territorial scope, GDPR is a polarising factor in the international data sphere, with actors outside the traditional scope of EU regulations having to comply with one of the most demanding data protection regulations globally. While it allows the EU to demand high standards when it comes to the protection of individual rights, it also raises questions of legitimacy and practicality, as well as of legal certainty and enforceability.

Briefly addressing the aspects of legitimacy and practicality, de Hert and Czerniawski (2016, p. 240) proposed to establish a ‘centre of gravity’ test for the application of the GDPR, using factors such as minimum connection of the activity, purpose and enforceability. Similarly, a ‘layered approach’ can be found in the work of Svantesson. This entails consideration of the harm being caused for individuals by a specific data operation, taking into account how essential the infringed provisions of GDPR are, as well as balancing the cost and effects of enforcement with a final proportionality assessment (Svantesson, 2019, pp. 95–96).

When it comes to legal certainty and enforceability, there have been demands to deliver more guidance on key terms since the drafting of Article 3 GDPR (Van Alsenoy, 2018, p. 97). As we showed throughout this section, the EDPB has attempted to respond to these demands with Guidelines 07/2020 on the concepts of controller and processor, as well as with Guidelines 03/2018 on territorial scope. Nevertheless, these guidelines are ultimately not sufficient, for two reasons. First, they themselves do not provide the amount of detail required. As guidelines, they need to keep a relatively high level of abstraction, frequently merely clarifying the applicable provisions and recitals within the regulation. However, it is the vague nature and wording of the articles in the GDPR that is the key problem. Secondly, the guidelines are not legally binding. While it is commendable that European data protection authorities try to establish certainty, the authority to shape EU law is vested in the legislative bodies (the European Parliament and the Council of the European Union), and the authority to interpret it rests with the CJEU according to Article 19 paragraph 1 TEU.

3.2. International data transfers and adequacy decisions

To be able to comprehensively analyse the case studies in Section 4 we also have to consider the regime that regulates international data transfers. According to Article 44 GDPR, personal data can only be transferred out of the EU if the principles of the regulation are upheld. Chapter V contains Articles 44-50 GDPR to regulate such transfers and, according to Kuner, establishes a three-tiered structure that puts adequacy decisions at the top, appropriate safeguards in the middle and derogations at the bottom (Kuner, 2019, p. 774). An adequacy decision can be roughly described as a ‘data bridge’, which has the purpose of facilitating the international flow of data. The main advantage is that it renders other individual agreements creating a legal basis for transfers unnecessary. Examples for such alternatives are standard contractual clauses, binding corporate rules, or individual consent for single instances of data collection and processing. If the EU Commission finds that there is an adequate protection of personal data in another country, this simplifies business activities and other data-driven cooperation.
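
The three-tiered structure described by Kuner can be pictured as a simple ordered check. The sketch below is our own illustration and nothing more: the function and argument names are hypothetical, each tier in reality involves a detailed legal assessment, and, as the second case study shows, Schrems II has raised questions about the conditions under which the middle tier can be relied upon.

```python
def transfer_basis(adequacy_decision: bool,
                   appropriate_safeguards: bool,
                   derogation_applies: bool) -> str:
    """Ordered reading of Chapter V GDPR (Articles 44-50): adequacy first,
    appropriate safeguards (e.g. standard contractual clauses or binding
    corporate rules) second, derogations for specific situations last."""
    if adequacy_decision:
        return "Article 45: transfer on the basis of an adequacy decision"
    if appropriate_safeguards:
        return "Article 46: transfer subject to appropriate safeguards"
    if derogation_applies:
        return "Article 49: derogation for specific situations"
    return "No lawful basis: under Article 44 the transfer may not take place"

# After the invalidation of an adequacy decision, transfers drop to the second
# tier, which is the post-Privacy Shield situation discussed in Section 4.2.
print(transfer_basis(adequacy_decision=False,
                     appropriate_safeguards=True,
                     derogation_applies=False))
```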

From a legal perspective, adequacy decisions are implementing acts issued by the European Commission, based on an assessment of the formal level of protection of personal data in another country. In other words, they do not involve the Council of the European Union (the ministers of the member states) or the European Parliament, but the Commission has to consult the EDPB (Greenleaf, 2021b, p. 23; Kuner, 2019, p. 785). As we will further outline in the second case study, the case law of the CJEU has become increasingly influential in outlining the criteria for adequate protection in countries outside the EU. This is especially the case with regard to the ‘Schrems I’ (C-362/14) and ‘Schrems II’ (C-311/18) judgments dealing with complaints against the European Commission adequacy decisions known as ‘Safe Harbour’ and ‘Privacy Shield’ in regard to transatlantic data flows with the United States. These judgments have heavily influenced the interpretation and application of the GDPR, and a detailed description of them in the context of extraterritoriality is therefore appropriate (Tzanou, 2020, pp. 100–114). Since there is currently some uncertainty about how the criteria developed by the CJEU in these judgments can best be implemented and enforced (e.g. by the Irish Data Protection Authority against Facebook; Busvine & Humphries, 2021), it has become unavoidable to start a process of updating alternatives such as standard contractual clauses for the international data transfers of private corporations (Boardman, 2020). Additionally, on 15 December 2020 the EDPB adopted guidelines for international data transfers between public bodies in application of Article 46(2)(a) and Article 46(3)(b) GDPR (European Data Protection Board, 2020b). For the purposes of this article, we will not discuss them in further detail since the effects surrounding the declaration of adequacy are most relevant here.

We acknowledge that from a legal perspective an adequacy assessment only covers whether personal data can leave the EU, which raises the question whether adequacy decisions actually have extraterritorial effect. However, we argue that such a mono-disciplinary analysis neglects their political and economic character. Taking an interdisciplinary perspective, adequacy decisions do have extraterritorial effect since they provide an incentive—along the lines of the power-based approach—to update or revise national data protection laws. This has most recently been demonstrated in the cases of Japan and South Korea, which both have updated and aligned their national laws with the GDPR in attempts to have privileged access to the EU single market (Greenleaf, 2021b, p. 23). The adequacy decision for Japan was adopted on 23 January 2019 and is the first under the GDPR framework (Commission Implementing Decision (EU) 2019/419), while the talks with South Korea were successfully concluded on 30 March 2021 (European Commission, 2021c). Both adequacy assessments are part of a larger political package that mainly consists of a Free Trade Agreement. Fahey and Mancini argue that the Japanese adequacy decision was a ‘side-product of the EU-Japan Economic Partnership Agreement […]: despite the EU’s initial goal of excluding data from the trade negotiations, Japan insisted on data dialogues and the EU eventually accommodated the demands’ (Fahey & Mancini, 2020, p. 99). Certainly, rigid fundamental rights-inspired interpretations of data protection may create possibly unwelcome burdens for certain political branches of the EU itself (Ryngaert & Taylor, 2020, p. 7). However, this demand for flexibility in political negotiations with international partners raises doubts as to whether the EU is able to consistently apply European values such as human rights when scrutinising adequacy. At least the question of standardisation of adequacy procedures emerges, which could help to guide the Commission in producing more consistent adequacy assessments. This necessity for more procedural standardisation is not only visible in the context of the second case study in section 4.2 that covers the persistent uncertainty in transatlantic data flows as the United States effectively refuses to directly safeguard the rights of EU data subjects through changes of their laws and institutions. As Drechsler (2020) outlines, the difficulty to assess adequacy on the basis of values—such as the EU human rights catalogue—also exists when considering the larger context of the EU data protection package that includes the EU Law Enforcement Directive 2016/680.

Section 4. Case studies

We will now consider whether extraterritorial application of the GDPR promotes European values or power in the context of two case studies. We propose that discussion of the developments around the RTBF is particularly relevant since this individual right has been hailed as one of the central mechanisms that enables individuals to control personal data, although its vague territorial scope has been a challenge from inception (Ausloos, 2020, pp. 98–104). The question of territorial scope and platform governance has also come up in the prominent Glawischnig-Piesczek case (C-18/18), which was decided by the CJEU on 3 October 2019. This case originated in 2016, when the former leader of the Austrian Green Party, Eva Glawischnig-Piesczek, started a procedure against another Facebook user who insulted her on the platform. The user posted inappropriate comments that criticised the political position of Glawischnig and the Green Party on migration issues (Kuczerawy & Rauchegger, 2020, pp. 1496–1498). Questions around the responsibility and role of Facebook in removing this inappropriate content from the platform led to the CJEU case, with the result that identical and similar comments needed to be deleted for all users globally. This finding was finally implemented by the Austrian courts with a decision of the court of last instance of 15 September 2020 (Oberster Gerichtshof, 2020b, 2020a). While some aspects of this case show similarities to the discussion around the territorial scope of a RTBF, the case is not relevant in the context of this article since it does not relate to the GDPR or data protection. The relevant legal frameworks in the Glawischnig-Piesczek case include the EU eCommerce Directive (especially Article 15 paragraph 1 of Directive 2000/31/EC), the EU Directive on Copyright in the Digital Single Market 2019/790, as well as, potentially, the proposed EU Terrorist Content Regulation and the proposed EU Digital Services Act (Kuczerawy & Rauchegger, 2020, pp. 1495–1496). In order to keep the analysis focused and remain within the GDPR framework, we have therefore decided not to elaborate further on this case. Finally, and in addition to what has been outlined in Section 3.2, discussion of the adequacy regime is particularly relevant since it allows one to outline the intended territorial scope of GDPR, as well as how the EU positions itself in data protection-related matters against other influential actors, such as the United States.

4.1. Extraterritoriality and the ‘right to be forgotten’

One of the key promises of GDPR was the effective and comprehensive protection of individual rights in the digital sphere. This relates not only to traditional aspects such as transparency, fairness and notification (van der Sloot, 2014, pp. 310–314), but also to more novel and challenging scenarios such as the deletion of personal data from the entirety of the internet. This RTBF for the digital age was first envisaged by Viktor Mayer-Schönberger in 2007 (Mayer-Schönberger, 2011, p. ix), and subsequently integrated in the first proposal for GDPR by the European Commission as an extension of a ‘right to erasure’ at the beginning of 2012. Since that time much has been written about the desirability of a RTBF, as well as the final Article 17 GDPR (Ausloos, 2020).

Well before the GDPR was finalised, the discussion on how to operationalise a RTBF started to crystallise around the responsibilities of search engine operators (SEOs) for how to structure links in search results. This affected Google in particular due to its market dominance in the EU. In the seminal Google Spain judgment of 13 May 2014 (C-131/12) the interpretation of the concepts of processor and controller by the CJEU was central (Van Alsenoy, 2018, pp. 81–83), as well as the balancing act between privacy and freedom of expression (Gstrein, 2017, pp. 9–10). While territorial scope was also an issue in Google Spain, this aspect took the spotlight more recently in the case of Google vs CNIL, which was decided in Luxembourg on 24 September 2019 (C‑507/17; see Samonte, 2020, pp. 841–844). In principle, there are three options for territorial scope: delisting can be limited to EU territory, enforced as a universal norm which has to be applied globally on all versions of a search engine and for all users, or implemented ‘glocally’ (Padova, 2019, pp. 21–29). This last approach does not mean that a service has to have servers physically located on EU territory, but that measures such as geolocation of the user, based on the monitoring of Internet Protocol (IP) addresses or GPS location, could be used to determine the physical position of the user and serve or hide search results accordingly. This could require reducing the privacy of search engine users, while making it possible to uphold the local or cultural expectations of one region (e.g. not showing swastikas in search results in countries where this is forbidden for political and historical reasons). The extent to which this affects other regions varies with the detailed technical and organisational implementation, which is largely left to SEOs (Powles & Chaparro, 2015).
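
The ‘glocal’ option can be illustrated with a short sketch. Everything in it is an assumption made for illustration only: `geolocate_ip()` stands in for whatever combination of IP geolocation and other signals an SEO actually uses, the country list is abridged, and no real search engine interface is implied. The point is simply that delisting is applied per request, based on the inferred location of the user rather than on the domain version of the search engine.

```python
from dataclasses import dataclass

EU_EEA = {"AT", "BE", "DE", "ES", "FR", "IE", "IT", "NL", "PL", "SE"}  # abridged list

@dataclass
class Result:
    url: str
    title: str

def geolocate_ip(ip: str) -> str:
    """Hypothetical helper returning an ISO country code for an IP address;
    real engines combine IP data with other signals."""
    return "FR" if ip.startswith("192.0.2.") else "US"  # placeholder logic only

def serve_results(results: list[Result], delisted_urls: set[str], client_ip: str) -> list[Result]:
    """Apply delisting per request, based on the user's inferred location."""
    if geolocate_ip(client_ip) in EU_EEA:
        return [r for r in results if r.url not in delisted_urls]
    return results  # users located outside the EU/EEA still see the links
```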

Briefly summarising the complex procedure, the French data protection authority (Commission Nationale de l'Informatique et des Libertés or CNIL) was not satisfied with the implementation of the delisting of links in search results adopted by Google in the aftermath of the Google Spain judgment from 2014. The argument of CNIL essentially boils down to the universality of an individual right enshrined in GDPR. According to the authority, such a right can only truly manifest itself if enforced on all versions of a search engine, even those operated outside the EU. If links to search results are not removed on all versions, an individual travelling back and forth between France and the United States who is seeking personal information about a business partner for instance, could access controversial information easily when in the United States, while this is more difficult in France. Such extraterritorial application of the GDPR was heavily contested (Keller, 2018) and Google itself tried to limit the territorial reach of delisting to the European versions of its search engine. Additionally, it adopted some technical measures to tie search results to regions, such as the analysis of user IP-addresses. After fighting over the implementation of delisting in French courts, the issue went back to the CJEU (Gstrein, 2020, pp. 130–133). 

In contrast to the ground-breaking judgment from 2014, the 2019 decision of the CJEU took place in greatly changed circumstances. The GDPR was finalised and in force, which brought considerable requirements for corporations and public institutions to comprehensively overhaul their privacy policies and data practices (Linden et al., 2020, p. 62). Concordantly, even the European Commission acknowledges in its review of GDPR from June 2020 that the enforcement of the regulation is a challenge for data protection authorities (European Commission, 2020a, p. 5). Given the EU-internal pressure not to overburden institutions of member states by making them guardians of data subject rights all over the globe, plus the external pressure not to interfere too strongly in international data flows and business, the restraint in Google vs CNIL makes sense politically. 

However, courts like the CJEU are supposed to interpret the law, not to make political decisions. Nevertheless, the judges essentially avoided further defining the substantive nature and territorial scope of delisting in Google vs CNIL. While the Grand Chamber under President Lenaerts seemed to favour a ‘glocal’ approach, it did not provide any firm interpretations and left a space of discretion for the authorities of member states (see paragraphs 64-72 of C‑507/17). This vacuum of guidance on the European level was quickly seized by the German Federal Constitutional Court, which published two judgments on the RTBF shortly after the CJEU, on 6 November 2019. The German judges not only further defined the substantive nature of the right in the context of the German legal order, they also sent an implicit message to the rest of the EU: digital rights such as a RTBF ought to be shaped through dialogue between the EU and its member states, and not be the product of a hierarchy with Brussels/Luxembourg at the top (Gstrein, 2020, pp. 136–139).

One can interpret these events from an intra-EU perspective, where they demonstrate that progressive and consistent leadership on data flow-related rules is essential for European institutions to be able to shape the dynamic of events, as well as to preserve European unity. At the same time, however, such a focus on internal power struggles misses the point that the RTBF is not an exclusively European concept. While the EU and the jurisprudence of the CJEU have certainly been instrumental in making the RTBF a broadly known concept, it exists in many countries around the world and similar protections are enshrined in the majority of data protection frameworks of G20 member states (Erdos & Garstka, 2021, pp. 308–310; Gstrein, 2020, pp. 141–143). Hence, ‘the robust realization of a RTBF online will certainly require transnational consensus-building and coordination extending well beyond the EU Member States’ (Erdos & Garstka, 2021, p. 296). In the context of this article, we interpret this finding as a call for the development of a value-driven strategy to achieve more international consensus on the substantive dimension and territorial application of the right.

4.2. Inadequacy of data bridges without pillars

As outlined in Section 3.2, one of the central tools for positioning the European digital space in relation to other regions is the adequacy decision, which is regulated in Article 45 GDPR. Currently, the EU Commission has fourteen adequacy decisions in place (e.g. for Switzerland, Israel, Japan and two for the United Kingdom), with the likely positive decision for South Korea imminent at the time of writing (European Commission, 2021a). However, the most discussed and contested decisions so far are those relating to the United States. Two adequacy decisions by the Commission have already been declared void by the CJEU, most recently bringing an end to the ‘EU-US Privacy Shield’ with the judgment in Schrems II of 16 July 2020 (C-311/18; Tracol, 2020, p. 1).

While the end of Privacy Shield was not surprising for many experts, it created considerable uncertainty for more than 5,300 companies that relied on it as a legal basis for their data transfers (Propp & Swire, 2020). According to the Annual Governance Report 2019 of the International Association of Privacy Professionals (IAPP), the Privacy Shield was used by 60 percent of respondents and was surpassed only by standard contractual clauses, used by 88 percent (IAPP, 2019). However, the discussion as to what extent Schrems II also invalidates the use of alternatives to an adequacy decision is still ongoing among legal experts, and the onus to prove compliance with legal requirements is on the businesses and public institutions transferring data between the regions (Irion, 2020; Propp & Swire, 2020). Ad hoc, a combination of standard contractual clauses and additional technical and organisational measures (e.g. the use of strong encryption) seems like a viable strategy (Christakis, 2021b; Tracol, 2020, pp. 9–11; European Data Protection Board & European Data Protection Supervisor, 2021).

Schrems II has many aspects worth analysing, but in the context of this article we focus on the consequences of the judgment for the territorial scope of GDPR. As the CJEU reiterates at paragraph 52 of the judgment, the territorial aspect is essential since, according to the complaint of Austrian digital rights activist Max Schrems, the United States ‘did not ensure adequate protection of the personal data held in [their] territory against the surveillance activities in which the public authorities were engaged.’ The investigation of such a claim puts the CJEU in a delicate position for two reasons.

First, any scrutiny of the Privacy Shield entails an assessment of the protection of the personal data of EU data subjects against surveillance by US authorities. Whereas the CJEU refrained in 2015 from analysing and discussing the details of US law (e.g. Section 702 of the Foreign Intelligence Surveillance Act or Executive Order 12333) in Schrems I and instead focused on the characteristics of a valid adequacy decision, Schrems II contains detailed findings on the necessity and proportionality of some US surveillance programmes (see C-311/18, paragraphs 165, 166, 178 to 184, 191 and 192; Tracol, 2020, p. 7; Tzanou, 2020, pp. 109–114). Hence, it may not be entirely surprising that the judgment has also been described as a ‘mix of judicial imperialism and Eurocentric hypocrisy’ (Baker, 2020). Secondly, the CJEU lacks the competence to carry out a similar assessment of governmental surveillance in an EU member state (Christakis, 2021a). While the EU Fundamental Rights Agency has highlighted in a research report that intelligence laws in European states remain complex, with potential to improve oversight as well as effective individual remedies (European Union Agency for Fundamental Rights, 2018, pp. 9–10), Article 4(2) of the Treaty on European Union excludes national security from the competences of EU institutions.

The CJEU certainly tries to leverage the power of European data protection law through the Schrems II judgment to create higher protection standards for EU data subjects. Remembering the fierce defence of the autonomy of EU law in the CJEU opinion on accession to the European Convention on Human Rights in 2014 (Halberstam, 2016), it seems likely that the Grand Chamber of the CJEU is following a carefully considered strategy. The question is to what degree this is a value- or power-based strategy; we return to this aspect and to alternatives in the discussion and conclusion.

Regardless of the answer, the current levels of legal and political uncertainty make it increasingly attractive to keep personal data in the EU (Tracol, 2020, p. 11). While the European Commission has announced that it will start work on a third iteration of the EU-US data bridge (European Commission, 2020b), it is also obvious that the pillars of this bridge will only stand if political concessions are made on the American side with regard to the establishment of effective and accessible individual remedies for GDPR data subjects. In other words, the judgments in Schrems I and Schrems II gradually build pressure on the United States to change its own regulatory framework and institutions, in a way that could resemble the paths taken by Japan and South Korea. At the same time, the question emerges whether the European Commission might be more inclined to grant adequacy if the question of data protection becomes part of a larger political package that might involve economic benefits. Potentially, the prospect of adequacy could result in an upgrade of the US privacy regime, which could embrace some or all of the basic principles of the GDPR. Alternatively, some US-based commentators are optimistic that the requirements of Schrems II can be met with relatively little adjustment and reconfiguration of existing judicial and administrative institutions (Propp & Swire, 2020). Whichever route the European Commission and the other actors involved choose, any new framework will ultimately most likely have to withstand another test before the CJEU.

Section 5. Discussion

As Van Alsenoy puts it, ‘[e]xtraterritoriality and data protection make for a controversial mix. Different attitudes towards privacy, coupled with a lack of global consensus on jurisdictional boundaries, fuel an intense debate among those advocating jurisdictional restraint and those emphasizing the need to ensure effective protection’ (Van Alsenoy, 2018, p. 77). As has been shown in Section 3.1., Article 3 GDPR is a vague provision that creates legal and political uncertainty. The current design of the legal framework results in friction when it comes to the precise scope of individual rights and makes it challenging to guarantee consistency and stability in international data flows. Additionally, adequacy decisions and the GDPR regime regulating international data flows might be strongly influenced by economic policy, which comes with the danger that the values underpinning the GDPR are not consistently respected and protected by the EU. For instance, Greenleaf criticised the lack of consistency and the level of rights protection of the draft agreement for the Japanese adequacy decision, questioning whether there is a discounted version of adequacy under certain circumstances (Greenleaf, 2018b). While one might welcome that it was possible for the two systems to open up to each other (Miyashita, 2020, p. 13), the question arises which institutional safeguards are in place to guarantee consistent application of high data protection standards. The procedure to assess adequacy seems to lack standardisation, as a comparison with South Korea shows (Greenleaf, 2018a). To counter the negative side effects of a power-based approach, the EU system currently relies on members of civil society such as Max Schrems to check the decisions, which leads to lengthy legal procedures with uncertain outcomes.

At the same time, as the increasing number of jurisdictions outside the EU and the United States with a RTBF demonstrates, there might be more potential for international harmonisation and consensus on the rights of data subjects than expected. While there seems to be little desire for power-based European leadership on data protection, the principles and rights enshrined in the GDPR inspire legislators across the world to adopt similar provisions. Even some US states such as California have recently begun to update their regulatory frameworks, taking into account some GDPR features and principles (Rothstein & Tovino, 2019, p. 5; Chander et al., 2021).

5.1. Alternative multilateral frameworks

Treaties that establish individual rights as their object of fulfilment occupy a special place in public international law. In their traditional form, they create a triangular relationship between participating states and their citizens. The duty-bearer remains the state, which is obliged to respect and protect the stipulated rights of the individuals it is responsible for (Zwitter & Lamont, 2014, pp. 363–365). Hence, such treaties ultimately create substantively harmonised national legal frameworks, which hinge on reciprocity and mutual respect as methods of enforcement at the international level. This preserves the sovereignty of states, yet makes it challenging to enforce individual rights if remedies are not effective at the national level, or if the cause of infringement lies beyond the territory of the state. While there is an emerging realisation that privacy should be treated as a universal human right and guaranteed across and beyond territorial borders, the manifestation of this insight still requires time (Irion, 2020).

When searching for existing frameworks capable of establishing and harmonising high data protection standards at the global level, the only legally binding international treaty is the Council of Europe Convention 108 for the protection of individuals with regard to automatic processing of personal data (Cannataci, 2018, pp. 21–22). The Convention has been discussed as a global standard, in contrast to portrayals of the GDPR as a gold standard (Mantelero, 2020, pp. 1–3). The recently overhauled ‘Convention 108+’ shares many principles, individual rights and features with the GDPR, but allows each signing state to adopt corresponding national laws which further define the principles. This modernised framework was opened for signature in Strasbourg on 25 June 2018 (Ukrow, 2018, p. 240), and states which are not members of the Council of Europe can also join it. As of September 2021, 30 states have signed Convention 108+, of which 13 have already ratified it (Council of Europe, 2021). Hence, a potentially more sustainable and multilateral strategy for promoting European values inside international data flows than power-based extraterritorial application might be to emphasise value-driven harmonisation more strongly, approaching other frameworks with an open mind that seeks to identify common denominators where they exist. However, the relationship between the EU and the Council of Europe is complex. Polakiewicz recently highlighted again the lack of consistency, transparency and clarity when it comes to the voting and speaking rights of the EU, as well as financial arrangements (Polakiewicz, 2021, p. 18).

5.2. A future without allies?

While the cases presented in this article focus almost exclusively on the current relationship between the EU and the United States, it should be added that data flows to and from other countries and regions increasingly face similar challenges. For instance, during the work on this article adequacy decisions were adopted with regard to the United Kingdom, addressing the consequences of Brexit. This process was launched by the European Commission on 19 February 2021, and the EDPB presented opinions relating to a GDPR and an EU Law Enforcement Directive adequacy decision on 16 April 2021 (European Commission, 2021b; European Data Protection Board, 2021). On 28 June 2021 the Commission announced the adequacy decisions, which are based on the GDPR and the Law Enforcement Directive. It remains to be seen how fruitful the relationship between the two parties can become over the long term, especially in areas where the United Kingdom might seek to deviate from data protection standards governing the development of data-driven services, or opt for extensive data use for surveillance (European Commission, 2021d; Korff, 2021). It does seem possible that the Council of Europe will gain a more important role in the relationship between EU member states and the United Kingdom after Brexit, especially when it comes to safeguarding the right to privacy of individuals, which is also protected by the ECHR framework.

Additionally, the intense economic cooperation of many EU countries with the People’s Republic of China leads to questions around the treatment of data flows and the standards applied to personal data. In June 2020, reports emerged of a court case in Düsseldorf, Germany, in which a former manager of Huawei was not given access to personal data stored by the company in China that might have been relevant to support his position in the case. The labour court found that Huawei had to pay €5,000 in compensation for the non-material damage suffered by the former employee, based on Article 5(2) in connection with Article 82 GDPR. However, it remains to be seen whether this decision of first instance (ArbG Düsseldorf v. 5.3.2020 - 9 Ca 6557/18) will be upheld, as Huawei has appealed (Wybitul, 2020).

The question of how to guarantee effective enforcement of high data protection standards certainly remains essential. As the CJEU judgments on the EU-US adequacy decisions and the surrounding political and societal developments have demonstrated over the last years, the current approach to establishing European values inside data flows exceeds the capabilities of the GDPR on the one hand and increasingly reduces its implementation to a battlefield on the other. This not only reveals that the promises on the universality of the rights of data subjects enshrined in the regulation are unrealistic. Such a limited approach also fails to address the overarching issue, which is that the protection of personal data under current circumstances is systematically threatened. While it should be welcomed that the GDPR re-emphasised the importance of data protection, and that EU data protection authorities now have more competences and powers to address an urgent problem, it is also clear that the task is overwhelming. One should not forget that ‘[t]he need to ensure trust and the demand for the protection of personal data are certainly not limited to the EU. Individuals around the world increasingly value the privacy and security of their data’ (European Commission, 2020a, p. 2).

It is also noteworthy that the public institutions trying to provide certainty are the national data protection authorities, in the form of the EDPB, as well as the CJEU. However, as has been shown throughout this article, the powers transferred to them by constitutional law and the EU treaties limit their possibilities. The function of the court is to interpret European law, and the core task of data protection authorities is to independently monitor the situation and enforce the legal order when necessary. This requires that the legal frameworks in place are designed in a way that is consistent and serves a clear purpose grounded in fundamental constitutional provisions and values such as human rights. In that regard, the extraterritorial effect of the GDPR and the associated enforcement can only overwhelm the authorities and produce undesired side effects. This brings us to the last point, which is the lack of political leadership. Specifically, more attention needs to be paid to crafting clear legal provisions that establish certainty, even if this makes an EU-internal compromise harder to achieve during legislative negotiations. It is clear that extraterritorial application and unilateral standard-setting face severe limitations, with the potential to harm the original cause in the long term. At the same time, while there are only a limited number of options available to establish reliable multilateral governance frameworks for the protection of personal data, there is still potential for more cooperation that must be explored and acted upon.

5.3. Limitations of the value versus power dichotomy

Throughout this article we have treated power-based and value-based approaches as mutually exclusive. We have done so in order to highlight that the GDPR should not uncritically be considered the only positive force for the establishment of high and universal global privacy and data protection standards. We have outlined our thinking in the preceding sections and flagged areas and cases where we believe caution is warranted when applying and enforcing the GDPR. As we have shown in the introduction and throughout by referring to the work of Greenleaf and others, consensus around the substantive core of the regulation is increasingly building. At the same time, European institutions are overburdened with enforcing the regulation globally, and political tensions are building, which threatens consistency and the credibility of the EU.

Nevertheless, treating power- and value-based approaches as mutually exclusive falls short of the complex reality of internet governance. In order to shape digital spaces, states cannot rely on traditional patterns of territorial sovereignty and depend more strongly on private actors and their powerful platforms. It has been argued that the GDPR is one of the most powerful symbols of a ‘digital constitutionalism’ of the EU, through which it aims to protect essential values such as human rights and democracy even beyond the borders of the member states. However, the question remains whether in a next phase this leads to what De Gregorio describes as ‘privacy universalism’—including a lack of legal certainty and imperial tendencies—or ‘digital humanism’ with human dignity at its core (De Gregorio, 2021, pp. 63–70). We would very much opt for the latter, which in the European context has been achieved after the Second World War through the establishment of multi-level governance mechanisms with international, supranational and national layers that mutually reinforce efforts to promote values and check power. Institutions such as the ECtHR or the CJEU were able to control institutions and authorities of the other layers in cases where values were threatened. Over the long term, this system has worked reasonably well for a Europe that goes beyond the EU. Ideally, a similar dynamic could also be established gradually at the global level. In our view, the mutually reinforcing process that led to the establishment of the substantive principles of the GDPR, with influences from the international, supranational and national layers, is as important as the legislative end product.

Certainly, the international community has so far achieved too little when it comes to the development of detailed international standards for privacy protection. We do not ignore the fact that proceedings in multilateral fora can be dominated by power-based approaches that fail to deliver the desired results. Organisations such as the United Nations can be heavily influenced by single actors who leverage their power and influence to undermine sincere discussions about values and principles. Nevertheless, the EU and its member states, too, will only be able to sustainably pursue a value-based strategy if their own political interests are balanced and checked by institutions and actors from all governance layers. Finally, as we have outlined above, a value-based strategy grounded in the free consent and conviction of the involved parties might be a stronger foundation for realising common norms and for establishing lasting relationships. A recent report on internet and jurisdiction in Latin America and the Caribbean formulated it in the following way: ‘Is there room for cross-fertilization, or is this mere replication?’ (Economic Commission for Latin America and the Caribbean (ECLAC) et al., 2020, p. 15). In order to answer this question, political actors within and beyond Europe would have to decide on and engage in international fora where constructive exchange is possible.

Section 6. Conclusion

This article has explored whether extraterritorial application of the GDPR is promoting European values inside data flows. While the regulation has received considerable attention internationally and has had a positive influence on the level of data protection globally, we argued that the significant extent of extraterritorial application in the GDPR is not a viable long-term strategy to guarantee respect, protection and promotion of European values. Rather than keeping the function of the GDPR limited to the essential issue—the protection of personal data—such far-reaching extraterritorial application transforms the regulation into a battlefield for legal, economic and political conflicts.

As we have discussed in the analysis of the legal architecture of Article 3 GDPR, the provision contains vague language and is difficult to interpret and implement. It contains passages that read like political statements (Gömann, 2017, p. 588), which require additional interpretation by the EDPB, the CJEU and academics. However, each of these parties lacks the democratic legitimacy to make such far-reaching decisions, which are essential for the applicability of the regulation. This becomes particularly apparent in the discussion about the territorial scope of the RTBF. Additionally, the failed attempts to establish an adequate framework for data transfers between the EU and the United States demonstrate that there is still a considerable gap between the normative aspirations in the regulation and the political reality. It is not impossible to bridge this gap, and the consistency of the CJEU in upholding high standards for data protection, as well as the increased demands of civil society to protect personal data, make it unlikely that convenient political trade-offs will create lasting solutions.

Ultimately, the question remains whether the next evaluation report on the GDPR by the European Commission—planned for 2024 (European Commission, 2020a, p. 14)—will reflect a governance strategy for the digital sphere driven by the protection of power or by the promotion of values. Pursuing the latter depends not only on upholding and further clarifying existing frameworks, but also on creating safe venues for substantive dialogue to establish broader international consensus, as well as on a commitment to high and effective protection of human rights, guaranteed internationally regardless of individual privilege or status.

Acknowledgments

We are grateful to Liz Harvey for reviewing this manuscript.

References

Ausloos, J. (2020). The Right to Erasure in EU Data Protection Law (1st ed.). Oxford University Press. https://doi.org/10.1093/oso/9780198847977.001.0001

Baker, S. A. (2020, July 21). How Can the U.S. Respond to Schrems II? [Blog post]. Lawfare. https://www.lawfareblog.com/how-can-us-respond-schrems-ii

Boardman, R. (2020). European Commission publishes proposed replacement SCCs. International Association of Privacy Professionals. https://iapp.org/news/a/european-commission-publishes-proposed-replacement-sccs/

Bradford, A. (2012). The Brussels effect. Northwestern University Law Review, 107(1), 1–68. https://scholarlycommons.law.northwestern.edu/nulr/vol107/iss1/1/

Bradford, A. (2020). The Brussels effect: How the European Union rules the world. Oxford University Press. https://doi.org/10.1093/oso/9780190088583.001.0001

Busvine, D., & Humphries, C. (2021). Facebook faces prospect of ‘devastating’ data transfer ban after Irish ruling. Reuters. https://www.reuters.com/business/legal/facebook-data-transfer-ruling-irish-court-due-friday-2021-05-14/

Cannataci, J. (2018). Big Data and Open Data—Annual Report to the 73rd session of the General Assembly [Annual report]. United Nations Special Rapporteur on the right to privacy. https://undocs.org/A/73/438

Chander, A., Kaminski, M., & McGeveran, W. (2021). Catalyzing Privacy Law. Minnesota Law Review, 105, 1732–1802. https://minnesotalawreview.org/wp-content/uploads/2021/04/3-CKM_MLR.pdf

Christakis, T. (2021a). Squaring the Circle? International Surveillance, Underwater Cables and EU-US Adequacy Negotiations (Part 1). European Law Blog. https://europeanlawblog.eu/2021/04/12/squaring-the-circle-international-surveillance-underwater-cables-and-eu-us-adequacy-negotiations-part1/

Christakis, T. (2021b). Squaring the Circle? International Surveillance, Underwater Cables and EU-US Adequacy Negotiations (Part 2). European Law Blog. https://europeanlawblog.eu/2021/04/13/squaring-the-circle-international-surveillance-underwater-cables-and-eu-us-adequacy-negotiations-part2/

European Commission. (2021a). Data protection: Draft UK adequacy decision. European Commission. https://ec.europa.eu/commission/presscorner/detail/en/ip_21_661

European Commission. (2021b). Joint Statement by Commissioner Reynders and Yoon Jong In, Chairperson of the Personal Information Protection Commission of the Republic of Korea. European Commission.

European Commission. (2021c). Commission adopts adequacy decisions for the UK. European Commission.

Council of Europe. (2021). Chart of signatures and ratifications of Treaty 223. https://www.coe.int/en/web/conventions/full-list/-/conventions/treaty/223/signatures

De Gregorio, G. (2021). The rise of digital constitutionalism in the European Union. International Journal of Constitutional Law, 19(1), 41–70. https://doi.org/10.1093/icon/moab001

Drechsler, L. (2020). Comparing LED and GDPR Adequacy: One Standard Two Systems. Global Privacy Law Review, 1(2), 93–103.

Economic Commission for Latin America and the Caribbean (ECLAC), Internet & Jurisdiction Policy Network (I&JPN), & Souza, C. A. (2020). Internet & Jurisdiction and ECLAC Regional Status Report 2020 (Report LC/TS.2020/141). https://www.cepal.org/en/publications/46421-internet-jurisdiction-and-eclac-regional-status-report-2020

Erdos, D., & Garstka, K. (2021). The ‘right to be forgotten’ online within G20 statutory data protection frameworks. International Data Privacy Law, 10(4), 294–313. https://doi.org/10.1093/idpl/ipaa012

European Commission. (2020a). Data protection as a pillar of citizens’ empowerment and the EU’s approach to the digital transition—Two years of application of the General Data Protection Regulation. European Commission.

European Commission. (2020b, July 16). Opening remarks by VP Jourová and Commissioner Reynders. Press Corner. https://ec.europa.eu/commission/presscorner/detail/en/statement_20_1366

European Commission. (2021). Adequacy decisions: How the EU determines if a non-EU country has an adequate level of data protection. https://ec.europa.eu/info/law/law-topic/data-protection/international-dimension-data-protection/adequacy-decisions_en

European Data Protection Board. (2018). Guidelines 3/2018 on the territorial scope of the GDPR (Article 3)—Version adopted after public consultation. European Data Protection Board. https://edpb.europa.eu/our-work-tools/our-documents/riktlinjer/guidelines-32018-territorial-scope-gdpr-article-3-version_en

European Data Protection Board. (2020a). Guidelines 07/2020 On the concepts of controller and processor in the GDPR. https://edpb.europa.eu/our-work-tools/public-consultations-art-704/2020/guidelines-072020-concepts-controller-and-processor_en

European Data Protection Board. (2020b). Guidelines 2/2020 on articles 46 (2) (a) and 46 (3) (b) of Regulation 2016/679 for transfers of personal data between EEA and non-EEA public authorities and bodies. https://edpb.europa.eu/our-work-tools/our-documents/guidelines/guidelines-22020-articles-46-2-and-46-3-b-regulation_en

European Data Protection Board. (2021, April 16). EDPB Opinions on draft UK adequacy decisions [News release]. News. https://edpb.europa.eu/news/news/2021/edpb-opinions-draft-uk-adequacy-decisions_en

European Data Protection Board & European Data Protection Supervisor. (2021). Joint Opinion 2/2021 on the European Commission’s Implementing Decision on standard contractual clauses for the transfer of personal data to third countries. https://edpb.europa.eu/sites/default/files/files/file1/edpb_edps_jointopinion_202102_art46sccs_en.pdf

European Union Agency for Fundamental Rights. (2018). Surveillance by intelligence services: Fundamental rights safeguards and remedies in the European Union. Volume II: summary. Publications Office of the European Union. https://data.europa.eu/doi/10.2811/84431

Fahey, E., & Mancini, I. (2020). The EU as an intentional or accidental convergence actor? Learning from the EU-Japan data adequacy negotiations. International Trade Law and Regulation, 26(2), 99–111.

Gömann, M. (2017). The new territorial scope of EU data protection law: Deconstructing a revolutionary achievement. Common Market Law Review, 54(2), 567–590.

González Fuster, G. (2014). The Right to the Protection of Personal Data and EU Law. In The Emergence of Personal Data Protection as a Fundamental Right of the EU (pp. 213–252). Springer International Publishing. https://doi.org/10.1007/978-3-319-05023-2_7

Greenleaf, G. (2018a). Japan and Korea: Different Paths to EU Adequacy. Privacy Laws & Business International Report, 156, 9–11.

Greenleaf, G. (2018b). Japan: EU adequacy discounted. Privacy Laws & Business International Report, 155, 8–10.

Greenleaf, G. (2021a). Global data privacy laws 2021: Despite COVID delays, 145 laws show GDPR dominance. Privacy Laws & Business International Report, 169(1), 3–5.

Greenleaf, G. (2021b). Global data privacy laws 2021: Uncertain paths for international standards. Privacy Laws & Business International Report, 169(1), 23–27.

Gstrein, O. J. (2017). The Right to Be Forgotten in the General Data Protection Regulation and the aftermath of the “Google Spain” judgment (C-131/12). PinG Privacy in Germany, 1. https://doi.org/10.37307/j.2196-9817.2017.01.06

Gstrein, O. J. (2020). Right to be forgotten: European data imperialism, national privilege, or universal human right? Review of European Administrative Law, 13(1), 125–152. https://doi.org/10.7590/187479820X15881424928426

Habermas, J. (2000). Kant’s Idea of Perpetual Peace: A Two Hundred Years Historical Remove. In C. Cronin & P. Greiff (Eds.), The Inclusion of the Other: Studies in Political Theory (pp. 165–201). MIT Press.

Halberstam, D. (2016). Opinion 2/13 of the Court (C.J.E.U). International Legal Materials, 55(2), 267–306. https://doi.org/10.5305/intelegamate.55.2.0267

De Hert, P., & Czerniawski, M. (2016). Expanding the European data protection scope beyond territory: Article 3 of the General Data Protection Regulation in its wider context. International Data Privacy Law, 6(3), 230–243. https://doi.org/10.1093/idpl/ipw008

Hoofnagle, C. J., van der Sloot, B., & Borgesius, F. Z. (2019). The European Union general data protection regulation: What it is and what it means. Information & Communications Technology Law, 28(1), 65–98. https://doi.org/10.1080/13600834.2019.1573501

International Association of Privacy Professionals. (2019). IAPP-EY Privacy Governance Report 2019 [Report]. International Association of Privacy Professionals. https://iapp.org/resources/article/iapp-ey-annual-governance-report-2019/

Irion, K. (2020, July 24). Schrems II and Surveillance: Third Countries’ National Security Powers in the Purview of EU Law [Blog post]. European Law Blog. https://europeanlawblog.eu/2020/07/24/schrems-ii-and-surveillance-third-countries-national-security-powers-in-the-purview-of-eu-law/

Kamminga, M. T. (2020). Extraterritoriality. In Max Planck Encyclopedias of International Law. Oxford Public International Law. https://opil.ouplaw.com/view/10.1093/law:epil/9780199231690/law-9780199231690-e1040?prd=MPIL

Kant, I. (1796). Zum ewigen Frieden: Ein philosophischer Entwurf (Bibliograph. aktualisierte Ausg.). Reclam.

Kantar. (2019). The General Data Protection Regulation (Special Eurobarometer 487a) [Report]. European Commission. https://ec.europa.eu/commfrontoffice/publicopinion/index.cfm/ResultDoc/download/DocumentKy/86886

Keller, D. (2018, September 10). Don’t Force Google to Export Other Countries’ Laws. The New York Times. https://www.nytimes.com/2018/09/10/opinion/google-right-forgotten.html

Korff, D. (2021, June 17). Initial comments on the EU Commission’s final GDPR adequacy decision on the UK [Blog post]. Data protection and digital competition. https://www.ianbrown.tech/2021/06/17/initial-comments-on-the-eu-commissions-final-gdpr-adequacy-decision-on-the-uk/

Kuczerawy, A., & Rauchegger, C. (2020). Injunctions to remove illegal online content under the eCommerce Directive: Glawischnig-Piesczek. Common Market Law Review, 57, 1495–1526.

Kuner, C. (2010). Data Protection Law and International Jurisdiction on the Internet (Part 2). International Journal of Law and Information Technology, 18(3), 227–247. https://doi.org/10.1093/ijlit/eaq004

Kuner, C. (2019). Article 45. Transfers on the basis of an adequacy decision. In C. Kuner, L. A. Bygrave, & C. Docksey (Eds.), The EU General Data Protection Regulation (GDPR): A commentary (pp. 771–796). Oxford University Press.

Linden, T., Khandelwal, R., Harkous, H., & Fawaz, K. (2020). The Privacy Policy Landscape After the GDPR. Proceedings on Privacy Enhancing Technologies, 2020(1), 47–64. https://doi.org/10.2478/popets-2020-0004

Mantelero, A. (2020). The future of data protection: Gold standard vs. Global standard. Computer Law & Security Review, 40. https://doi.org/10.1016/j.clsr.2020.105500

Mayer-Schönberger, V. (2011). Delete: The virtue of forgetting in the digital age; with a new afterword by the author. Princeton University Press.

Miyashita, H. (2020, July 3). EU-Japan mutual adequacy decision [Blog post]. Blogdroiteuropéen. https://blogdroiteuropeen.com/2020/07/03/eu-japan-mutual-adequacy-decision-by-hiroshi-miyashita/

Padova, Y. (2019). Is the right to be forgotten a universal, regional, or ‘glocal’ right? International Data Privacy Law, 9(1), 15–29. https://doi.org/10.1093/idpl/ipy025

Petersen, N. (2020). Human Dignity, International Protection. In Max Planck Encyclopedias of International Law. Oxford University Press. https://opil.ouplaw.com/view/10.1093/law:epil/9780199231690/law-9780199231690-e809?prd=MPIL

Polakiewicz, J. (2021). A Council of Europe perspective on the European Union: Crucial and complex cooperation. Europe and the World: A Law Review, 5(1). https://doi.org/10.14324/111.444.ewlj.2021.30

Powles, J., & Chaparro, E. (2015, February 18). How Google determined our right to be forgotten. The Guardian. http://www.theguardian.com/technology/2015/feb/18/the-right-be-forgotten-google-search

Propp, K., & Swire, P. (2020, August 13). After Schrems II: A Proposal to Meet the Individual Redress Challenge [Blog post]. Lawfare. https://www.lawfareblog.com/after-schrems-ii-proposal-meet-individual-redress-challenge

RIS - 6Ob195/19y—Entscheidungstext—Justiz (OGH, OLG, LG, BG, OPMS, AUSL), (Oberster Gerichtshof 15 September 2020). https://www.ris.bka.gv.at/Dokument.wxe?Abfrage=Justiz&Gericht=&Rechtssatznummer=&Rechtssatz=&Fundstelle=&AenderungenSeit=Undefined&SucheNachRechtssatz=True&SucheNachText=True&GZ=&VonDatum=&BisDatum=12.11.2020&Norm=&ImRisSeitVonDatum=&ImRisSeitBisDatum=&ImRisSeit=Undefined&ResultPageSize=100&Suchworte=6Ob195%2f19y&Position=1&SkipToDocumentPage=true&ResultFunctionToken=c426c30b-ff1d-49c5-8896-dac2cb78a0c1&Dokumentnummer=JJT_20200915_OGH0002_0060OB00195_19Y0000_000

Rojszczak, M. (2020). Does global scope guarantee effectiveness? Searching for a new legal standard for privacy protection in cyberspace. Information & Communications Technology Law, 29(1), 22–44. https://doi.org/10.1080/13600834.2020.1705033

Rothstein, M. A., & Tovino, S. A. (2019). California Takes the Lead on Data Privacy Law. The Hastings Center Report, 49(5), 4–5. https://doi.org/10.1002/hast.1042

Rustad, M. L., & Koenig, T. H. (2019). Towards a Global Data Privacy Standard. Florida Law Review, 71(2), 365–454.

Ryngaert, C., & Taylor, M. (2020). The GDPR as Global Data Protection Regulation? American Journal of International Law, 114, 5–9. https://doi.org/10.1017/aju.2019.80

Samonte, M. (2020). Google v. CNIL: The Territorial Scope of the Right to Be Forgotten Under EU Law. European Papers - A Journal on Law and Integration, 4(3), 839–851. https://doi.org/10.15166/2499-8249/332

Sartre, J.-P. (2020). Being and Nothingness. Routledge.

van der Sloot, B. (2014). Do data protection rules protect the individual and should they? An assessment of the proposed General Data Protection Regulation. International Data Privacy Law, 4(4), 307–325. https://doi.org/10.1093/idpl/ipu014

Svantesson, D. J. B. (2019). Article 3. Territorial scope. In C. Kuner, L. A. Bygrave, & C. Docksey (Eds.), The EU General Data Protection Regulation (GDPR): A commentary (pp. 74–99). Oxford University Press.

Theil, S. (2017). Is the ‘Living Instrument’ Approach of the European Court of Human Rights Compatible with the ECHR and International Law? European Public Law, 23(3), 587–614.

Tracol, X. (2020). “Schrems II”: The return of the Privacy Shield. Computer Law & Security Review, 39. https://doi.org/10.1016/j.clsr.2020.105484

Tzanou, M. (2020). Schrems I and Schrems II: Assessing the Case for the Extraterritoriality of EU Fundamental Rights. In F. Fabbrini, E. Celeste, & J. Quinn (Eds.), Data Protection Beyond Borders: Transatlantic Perspectives on Extraterritoriality and Sovereignty. Hart Publishing. https://doi.org/10.5040/9781509940691

Ukrow, J. (2018). Data protection without frontiers? On the relationship between EU GDPR and amended CoE Convention 108. European Data Protection Law, 2, 239–247. https://doi.org/10.21552/edpl/2018/2/14

Van Alsenoy, B. (2018). Reconciling the (extra)territorial reach of the GDPR with public international law. In G. Vermeulen & E. Lievens (Eds.), Data protection and privacy under pressure. Transatlantic tensions, EU surveillance, and big data (pp. 77–100). Maklu.

Wachter, S., Mittelstadt, B., & Floridi, L. (2017). Why a right to explanation of automated decision-making does not exist in the General Data Protection Regulation. International Data Privacy Law, 7(2), 76–99. https://doi.org/10.1093/idpl/ipx005

Worldwide Obligation of Facebook to Cease and Desist from the Publication of Photographs of Dr. Eva Glawischnig-Piesczek in Connection with Defamatory Insults and/or Words of Equivalent Meaning, (Oberster Gerichtshof 15 September 2020). https://www.ogh.gv.at/en/uncategorized/worldwide-obligation-of-facebook-to-cease-and-desist-from-the-publication-of-photographs-of-dr-eva-glawischnig-piesczek-in-connection-with-defamatory-insults-and-or-words-of-equivalent-meaning/

Wybitul, T. (2020, June 24). Arbeitsgericht Düsseldorf: 5.000 Euro immaterieller Schadensersatz wegen Datenschutzverstößen [Blog post]. CRonline. Portal zum IT-Recht. https://www.cr-online.de/blog/2020/06/24/arbeitsgericht-duesseldorf-5-000-euro-immaterieller-schadensersatz-wegen-datenschutzverstoessen/

Zwitter, A. (2015). Peace and Peace Orders: Augustinian Foundations in Hobbesian and Kantian Receptions. In H. Gärtner, J. W. Honig, & H. Akbulut (Eds.), Democracy, Peace, and Security (pp. 59–80).

Zwitter, A., & Lamont, C. K. (2014). Enforcing Aid in Myanmar: State Responsibility and Humanitarian Aid Provision. In A. Zwitter, C. K. Lamont, H.-J. Heintze, & J. Herman (Eds.), Humanitarian Action: Global, Regional and Domestic Legal Responses (pp. 349–374). Cambridge University Press. https://doi.org/10.1017/CBO9781107282100.022

Footnotes

1. Article 3 GDPR: ‘1. This Regulation applies to the processing of personal data in the context of the activities of an establishment of a controller or a processor in the Union, regardless of whether the processing takes place in the Union or not. 2. This Regulation applies to the processing of personal data of data subjects who are in the Union by a controller or processor not established in the Union, where the processing activities are related to: (a) the offering of goods or services, irrespective of whether a payment of the data subject is required, to such data subjects in the Union; or (b) the monitoring of their behaviour as far as their behaviour takes place within the Union. 3. This Regulation applies to the processing of personal data by a controller not established in the Union, but in a place where Member State law applies by virtue of public international law.’

Safeguarding European values with digital sovereignty: an analysis of statements and policies


This paper is part of Governing “European values” inside data flows, a special issue of Internet Policy Review guest-edited by Kristina Irion, Mira Burri, Ans Kolk, Stefania Milan.

Introduction

Governments’ interest in the “datafied society” (Hintz et al., 2018) as an object of policy and regulation is nothing new, with a long-held recognition that governance protocols (policies, ethics frameworks, and regulations) can be used to reshape the technological infrastructure underpinning society and hence its nature (Floridi, 2018; van Dijck & Poell, 2016). However, the widespread adoption of the term “sovereignty”—a concept loaded with legal and political connotations—to describe authority over the digital is a more recent phenomenon (see Section 2). In particular, this term has gained traction in the context of the European Union (EU), which will be the focus of this paper.

In her 2020 State of the Union Address at the European Parliament, Ursula von der Leyen, President of the European Commission, stated that

this is [the European Union’s] opportunity to make change happen by design, not by disaster or by diktat from others in the world […] it is about Europe’s digital sovereignty on a small and large scale (European Commission, 2020d).

This statement is both a signal of intent and a reflection of a newfound policymaking agenda within the EU. Digital sovereignty is seen as a basis for strengthening the EU’s role in an interconnected world, promoting its core interests, and protecting the fundamental values upon which the Union is based, namely, human dignity, freedom, democracy, equality, the rule of law and respect of human rights (art. 2 TEU). As we shall see, von der Leyen’s speech was not an isolated case. The term “digital sovereignty” has become increasingly popular among the EU’s main institutional actors. However, digital sovereignty lacks a clear definition and is deployed inconsistently across EU policy documents. Even fundamental points are unclear, such as whether digital sovereignty is something that the EU already holds, or whether it is a goal towards which the EU should strive. This lack of conceptual clarity is problematic as it raises questions over the exact aims of the EU and hinders the ability of the EU to garner support behind, and successfully enact, a clear policy agenda.

This paper aims to help dispel the aforementioned conceptual confusion by analysing the concept of digital sovereignty both at a high level and as it is used within the context of the EU. This clarifying analysis has four aims: i) to outline what is understood by digital sovereignty when it is discussed by EU institutional actors and place this within a broader conceptual framework; ii) to map the critical policy measures that the EU has taken, and/or has proposed to take, that are purported to implicitly or explicitly strengthen its digital sovereignty; iii) to assess the extent to which the EU’s current policy measures are actually effective in strengthening digital sovereignty, as we conceptualise it; and iv) to propose policy solutions that go above and beyond existing policy measures.

The paper is structured to reflect these aims. In section 1, we highlight the difficulties that arise when seeking to understand what digital sovereignty for the EU might consist of. Section 2 offers a precise conceptual understanding of digital sovereignty, which can be applied to the EU policy-making context. Section 3 analyses a corpus of 180 EU web pages that mentioned the term “digital sovereignty” within the past year to understand the specific policy areas of importance for EU institutional actors and the associated policy measures they have presented as furthering the aim of strengthening digital sovereignty. Section 4 draws the theoretical and empirical analysis together by assessing the extent to which the policy measures identified in section 3 actually strengthen the EU’s digital sovereignty, as we conceptualise it in section 2. Section 5 concludes by exploring how the EU can strengthen its digital sovereignty and by providing specific recommendations for EU policymakers to this end.

1. Digital sovereignty in the European Union

EU institutional actors have referred to the concept of digital sovereignty for several years (Reding, 2016). However, the term is only more recently gaining high-profile traction among policymakers (Timmers, 2019a). Recently, several of the EU’s political institutions have put forward definitions of digital sovereignty and related terms. Ursula von der Leyen, President of the European Commission, has referred to “tech sovereignty” as the capacity of Europe “to make its own choices, based on its values, respecting its own rules” (von der Leyen, 2020). A similar understanding of “digital sovereignty” is implicit in the statement of Charles Michel, President of the European Council, who sees digital sovereignty as a means for achieving strategic autonomy which “is about being able to make choices […] this means reducing our dependencies, to better defend our interests and our values” (European Council, 2021). The German Presidency of the Council of the European Union (July-December 2020), in its programmatic manifesto, stressed that EU digital sovereignty involves a strengthening “of [the EU’s] broad research base and foster[ing] its growing digital infrastructure and economy, while making sure the continent’s core democratic values also apply in the digital age” (Germany’s Presidency of the Council of the European Union, 2020). Finally, the European Parliament’s Think Tank defines digital sovereignty as “Europe’s ability to act independently in the digital world” (Madiega, 2020).

From these statements, it is evident that digital sovereignty is a concept keenly promoted by the EU’s political institutions. There is a clear emphasis in these definitions on the EU having the capacity to act independently, in line with its values, with respect to digital technology. However, what constitutes digital sovereignty is not always clearly defined, and the institutions variously speak of “establishing” (European Parliament, 2020), “retaining” (European Commission, 2021a), “defending” (European Commission, 2020e), “bolstering” (Germany’s Presidency of the Council of the European Union, 2020), and “achieving” (European Council, 2020) digital sovereignty—all terms that have different policymaking connotations. Moreover, the extent to which it is viable that choices could be made by the EU “independently” is unclear, and the identities of those actors with whom the EU is competing (i.e., those who might also claim digital sovereignty) are frequently unstated.

Considering digital sovereignty with respect to adjacent concepts also evidences this confusion. Some analysts have proposed a substantive difference between “tech” and “digital” sovereignty, whilst for others the terms are synonymous (Burwell & Propp, 2020). Similarly, whilst it is clear that the EU’s aim of digital sovereignty relates to strategic autonomy, the distinction and relationship between the two concepts are often unclear (Timmers, 2019b). Digital sovereignty has been forwarded as a means of furthering strategic autonomy (European Council, 2021) and as the end goal of a policy of strategic autonomy (Eager et al., 2020). However, others consider these terms as interchangeable (European Commission, 2020b).

The lack of clarity and coherence amongst the statements of EU institutions and policymakers as to how digital sovereignty should be understood makes it challenging to assess whether the EU is successfully strengthening its digital sovereignty through policy measures that are said to be performing this function. To evaluate the EU’s digital sovereignty agenda systematically, identify its strengths and weaknesses, and monitor its progress, the first, necessary steps are to provide a clear, detailed and robust definition of “digital sovereignty”, and to offer a more granular set of criteria as a basis for assessing whether these aims are being met.

There are two approaches for developing a definition of digital sovereignty in the context of the EU’s policy agenda. The first is bottom-up and consists of using existing statements by EU policymakers to infer a shared concept that may be specific to the European context. However, as we have shown, statements by EU institutional actors are inconsistent, suggesting that such an exercise may yield a concept of digital sovereignty that is fragmented internally. These statements may also be misaligned externally with the usage of the expression by external actors (e.g., other states), which may only result in further conceptual confusion and the inability to reconcile “European” digital sovereignty with that adopted by other actors. For these reasons, we follow an alternative approach, top-down, which is to propose a more general definition of digital sovereignty that is applicable, among other things, to the EU context and can also be used to assess the EU’s policy of digital sovereignty in comparative contexts, for example when the digital sovereignty of the US or China is in question. We start by providing a conceptual analysis of digital sovereignty, in the next section, and then proceed to ground it empirically in the EU context in section 3.

2. Defining digital sovereignty

Whilst the concept of sovereignty has been widely discussed across academic literature (Philpott, 2016), attempts to define digital sovereignty have been more limited. Some scholarship has defined digital sovereignty through analysing the specific EU context (Burwell & Propp, 2020; Leonard & Shapiro, 2019) yet, as mentioned, this approach is inadequate for developing a generalisable concept. Other authors have analysed digital sovereignty conceptually, but these works have focused on outlining the analytical confusion or discursive practices surrounding the term (Couture & Toupin, 2019; Pohle & Thiel, 2020) rather than putting forward a conceptual understanding that can be effectively operationalised for assessing the EU’s digital sovereignty agenda.

In this article, we follow the understanding of digital sovereignty we developed elsewhere as constituting “a form of legitimate, controlling authority” (Cowls et al., forthcoming; Floridi, 2020) over—in the digital context—data, software, standards, services, and other digital infrastructure, amongst other things. Two subsidiary elements of our definition need to be explicated for it to be an analytically useful concept for assessing digital sovereignty in the EU, namely control and legitimacy as they pertain to authority. We follow Floridi’s (2020, p. 371) definition of control as

the ability to influence something (e.g., its occurrence, creation, or destruction) and its dynamics (e.g., its behaviour, development, operations, or interactions), including the ability to check and correct for any deviation from such influence. In this sense, control comes in degrees and above all can be both pooled and transferred.

Concerning legitimacy, we follow Franck et al. (1990, p. 24) in defining this as

a property of a rule or rule-making institution which itself exerts a pull towards compliance on those addressed normatively because those addressed believe that the rule or institution has come into being and operates in accordance with generally accepted principles of right process.

A notion of public consent underpins the definition of legitimacy we adopt, yet there are several ways in which consent can be understood. Here, we understand consent in terms of the tripartite framework of democratic legitimacy in the European context developed by Schmidt (2013) (itself a modification of Scharpf (1999)), which consists of input into the decision-making process that underpins a rule or institution (for instance, democracy), the throughput or quality of the process by which decisions are made (including accountability), and the output or effectiveness of a rule in achieving the goals of what citizens care about.

The overarching definition of digital sovereignty as “a form of legitimate, controlling authority” is agnostic with respect to where, when and by whom digital sovereignty can be held. The definition provides a more flexible view of digital sovereignty insofar as it moves beyond traditional state-centric definitions of sovereignty by acknowledging that authority may also be held by international or supranational bodies, as in the case of the EU. It also provides conceptual space for considering the involvement of private sector companies exerting close control—albeit with questionable, and questioned, legitimacy (Taylor, 2021)—over various aspects of digital life. Furthermore, the definition recognises that multiple agents may simultaneously hold digital sovereignty, indicating the possibility of viewing sovereignty as something which can be shared. We also move away from traditional notions of (digital) sovereignty as necessarily being defined by a fixed physical geography and instead view digital sovereignty as something that can be held across political communities and spatial networks that are not limited to the nation-state (Agnew, 2005). This is in recognition of the fact that not all historical polities have been territorially organised, and that contemporary governance of the digital is, in many areas, not territorially bound. In short, we assume that the concept of “sovereignty” has finally and completely detached itself from that of “sovereign”. This helps one understand that it is not the case, for example, that “in a lot of ways Facebook is more like a government than a traditional company”, as claimed by Facebook CEO Mark Zuckerberg in 2018 (Farrell et al., 2018), but rather that companies like Facebook and national governments are redefining, through their interactions and equilibria, the sense in which an agent can hold and exercise sovereignty.

One more clarification is necessary before grounding this definition of digital sovereignty in the context of the EU. “Sovereignty” relates to the similar but distinct concept of “governance”. In this article, we shall assume that digital sovereignty is the authority to set rules that regulate and govern action (relying, we have argued, on legitimacy and control), and hence that the (digital) governance process involves the exercise of the capacities afforded, a priori, by sovereignty. Sovereignty thus captures the capacity of an actor to act (it is something that is held), while governance concerns the interactions of sovereign actors and the nature of the act itself (it is something that is done).

3. Mapping policies for advancing European digital sovereignty

We now turn to grounding this conception of digital sovereignty empirically in the context of the EU. To do so, we assess a corpus of web pages within the subdomains of the European Commission, European Council, and European Parliament that explicitly mentioned the term “digital sovereignty” between 10 March 2020 and 10 March 2021 (N=180). The analysis is designed to ascertain the policy areas and associated measures that EU institutional actors consider most relevant to strengthening European digital sovereignty. Identifying these policies serves two purposes: first, it details the areas of digital sovereignty that are of importance to EU institutional actors; second, it provides the foundation for assessing whether the actions taken by the EU, which are claimed to strengthen digital sovereignty, do so in reality.

To build this corpus, we used Google’s site-specific search function to collect relevant web pages from the European Commission (N = 80), Council (N = 70), and Parliament (N = 30) official websites.1 We analysed each result to understand the most common contexts in which digital sovereignty is spoken about by representatives of these three main EU governance institutions (see Annex for a full discussion of methodology). In absolute terms, the data set that resulted from the search query was small enough to allow a manual analysis of each of the 180 articles. However, it contains (to the best of our knowledge) all references to “digital sovereignty” from each website, which reassures us about the completeness of the data set concerning digital sovereignty discourse in the European governance context. This analysis identifies the five key areas and aspects of digital technology that institutional actors most frequently mentioned as important for strengthening digital sovereignty: data governance; constraining platform power; digital infrastructures; emerging technologies; and cybersecurity. In the remainder of this section, we assess each of these in turn. In each case, we identify the stated relevance of each aspect of digital technology to the strengthening of European digital sovereignty, and note the associated policy measures and initiatives, planned or enacted, cited by institutional actors.
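
Purely to illustrate how such a site-restricted collection could be approximated programmatically, the sketch below issues the kind of queries described above via Google's Custom Search JSON API. It is an assumption-laden approximation, not the authors' actual procedure: the API key, search engine ID and domain list are placeholders the reader would have to supply, and the manual review and date filtering described in the Annex would still be needed.

```python
# Rough sketch of collecting site-restricted search results for the phrase
# "digital sovereignty" on the three EU institutional domains.
# Assumes a Google Programmable Search Engine (Custom Search JSON API);
# API_KEY and ENGINE_ID are placeholders, not real credentials.
import requests

API_KEY = "YOUR_API_KEY"      # placeholder
ENGINE_ID = "YOUR_ENGINE_ID"  # placeholder

SITES = ["ec.europa.eu", "consilium.europa.eu", "europarl.europa.eu"]

def search_site(site: str, start: int = 1) -> dict:
    """Return one page (up to 10 items) of results for one institutional domain."""
    params = {
        "key": API_KEY,
        "cx": ENGINE_ID,
        "q": f'site:{site} "digital sovereignty"',
        "start": start,
    }
    response = requests.get("https://www.googleapis.com/customsearch/v1", params=params)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    for site in SITES:
        page = search_site(site)
        urls = [item["link"] for item in page.get("items", [])]
        print(site, len(urls), "results on first page")
```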

3.1. Data governance

In many web pages (N=33), meaningful European control over data is presented as integral to the EU’s digital sovereignty. This is often portrayed as a necessity for ensuring that infringements on individual privacy rights are curtailed and for maximising the societal and industry benefits that can be gained from personal data. For example, it is claimed that data governance regulation will

provide for more control for citizens and companies over the data they generate, [which] will strengthen Europe’s digital sovereignty in the area of data (European Commission, 2020g, p. 1, emphasis ours).

The European Data Strategy is one initiative frequently presented in the data set as a means of strengthening European digital sovereignty. It seeks to ensure access to more data for the European economy and to provide citizens and companies with more control over their data, through measures such as encouraging open data, developing data pools and facilitating data sharing (European Commission, 2020g). Specific existing measures cited include the Open Data Directive (n. 2019/1024), which encourages making public sector data more freely available within the EU (Berlin Declaration on Digital Society, 2020), and the Single Digital Gateway Regulation (n. 2018/1724), which was seen to lessen bureaucracy and ease cross-border data flows through its “Only Once Principle”, thus “relaunching our economy [and] also ... building the EU’s digital sovereignty” (European Commission, 2020h).

Earlier data governance measures are also outlined as strengthening EU digital sovereignty. This includes the General Data Protection Regulation (n. 2016/679, GDPR), which strengthened digital sovereignty by “putting individuals in control of their data” and making the EU “a standard-setter in privacy and data protection”, as per a recent European Parliament report (Madiega, 2020).

The European Commission’s Regulation on the Free Flow of Non-Personal Data (n. 2018/1807, FFDR), which was passed by the European Parliament and the Council in 2018 and which has applied since May 2019, is another policy mentioned as strengthening digital sovereignty in the data set (EURAXESS, 2020). Unlike the GDPR, this regulation is exclusively market-centred: it prohibits national rules requiring non-personal data to be stored or processed in a specific member state, in effect neutering data localisation efforts by individual member states. Before this, at least 22 measures were present that explicitly imposed restrictions on transferring data to another member state, and a further 35 measures that could indirectly cause localisation within a member state (Bauer et al., 2016). 2

3.2 Constraining platform power

A second policy area outlined as a priority for strengthening digital sovereignty, albeit less frequently (N=13), is increasing EU control over large, non-European technology companies, in particular “platforms”. The aim is to ensure that relevant measures are effective and enforceable against those companies, and thus that they respect EU law and values when operating in Europe. As Internal Market Commissioner Thierry Breton outlined in a July 2020 speech, (digital) sovereignty is (among other things)

about making sure that anyone who invests, operates and bids in Europe respects our rules and values […] [and about] […] protecting our companies against predatory and sometimes politically motivated foreign acquisitions (European Commission, 2020b, emphasis ours).

The perceived misbehaviour of large American technology companies—and the scarcity of European alternatives—has given momentum to large-scale policy responses by the EU. The Commission’s proposals for a Digital Services Act (DSA) and a Digital Markets Act (DMA) represent clear moves in this direction. The DSA ascribes “cumulative obligations” to platforms that scale with their size, such that the largest (American) platforms will bear the greatest responsibility. The DMA uses “a set of narrowly defined objective criteria” to define “gatekeepers”, who will face obligations to ensure interoperability and competitiveness in how they operate.

European Council President Charles Michel has referred to both acts—and the Commission’s antitrust competition policy—as “unique and undeniable strengths” with respect to European digital sovereignty. Moreover, Michel warned that while the EU is “determined to take up these challenges with the US […] if necessary, we are ready to lead the way on our own” (European Council, 2021).

3.3 Digital infrastructure

A third policy area that consistently recurs in the data set (N=34) is the need to improve core digital infrastructure, such as data storage capacity. References to strengthening infrastructure in the data set portray doing so as necessary for enabling the EU to act with less reliance on foreign technologies; ensuring that EU companies and data are not subjected to third-country laws on account of foreign data storage (European Commission, 2020c); improving the competitiveness of EU companies (European Commission, 2020f); and empowering citizens and businesses (European Commission, 2021c).

A high-profile example of current EU policy in this area is the Gaia-X project, an initiative by the European Commission, Germany, France, and many others, introduced to resolve the problem of reliance on foreign cloud infrastructures. More recently, in October 2020, member states signed a declaration of cooperation on developing a competitive EU cloud infrastructure (European Commission, 2020f), facilitating data storage within the EU.

Finally, connectivity has been highlighted as a necessary infrastructural requirement for furthering EU digital sovereignty by enabling all EU citizens to have full access to digital opportunities and technologies. In the documents we analysed, the Connecting Europe Facility (CEF)—an EU funding instrument that seeks to promote growth, jobs and competitiveness through targeted investment in infrastructure, including digital service infrastructures and broadband networks—was often explicitly mentioned. Connectivity across the EU is also seen as necessary for supporting positive economic outcomes, particularly in light of the disruption caused by Covid-19 (European Commission, 2021c).

3.4. Emerging technologies

Emerging technologies constitute the policy area most frequently referenced across documents (N=42). These technologies are seen as underpinning “most of the key value chains of the future” (European Commission, 2020c), and it is therefore considered essential to ensure that their development is in line with EU values (European Commission, 2020b).

Underlying technologies, such as microelectronics, are an area of particular interest amongst policymakers. Semiconductor technologies are one of seven areas where coordinated plans from member states are encouraged under the €750 billion NextGenerationEU recovery plan, adopted in December 2020, which seeks to stimulate the EU’s economy in the wake of Covid-19 (Regulation n. 2020/2094). Following this, the European Initiative on Processors and Semiconductor Technologies, a joint declaration by 20 member states, was announced; it seeks to lessen the EU’s reliance on externally sourced microprocessor technologies through increased investment along the semiconductor value chain. Similarly, the Electronic Components and Systems for European Leadership (ECSEL) initiative seeks to grow Europe’s semiconductor capabilities, amongst other things, and has been cited as “proof” of Europe’s potential to hold digital sovereignty in the field of microelectronics (European Commission, 2020b).

Another emerging technology on which the EU is focusing is supercomputing, with President von der Leyen promising an investment of €8 billion to develop the next generation of supercomputers (European Commission, 2020d). Supercomputing is seen as a prerequisite for being competitive in the areas of cloud technologies, AI, and cybersecurity.

Finally, the EU is trying to expand its capacity to develop and regulate artificial intelligence (AI). Von der Leyen has recently stressed that

Artificial intelligence is a prime example of digital sovereignty. It is an example of our ambition to apply European standards and values to technology deployed in Europe. (European Commission, 2020e)

The EU’s AI strategy, outlined in documents such as the Communication on Artificial Intelligence for Europe and the draft Artificial Intelligence Act, seeks to ensure that AI is governed in line with EU values, while also promoting competitiveness through improved R&D and skills and through partnerships with member states and the private sector.

3.5. Cybersecurity

Cybersecurity is the fifth area where EU policymakers have outlined the necessity of further strengthening digital sovereignty (N=24). Strong cybersecurity is seen as a prerequisite for all the other policy areas already identified, because the protection of data, infrastructures, and businesses is necessary both for a functioning and competitive EU digital economy and for the protection of EU values. This is why cybersecurity is described as “a pillar of the European sovereignty for the future” by the EU Science Hub (2020).

Several existing initiatives have sought to strengthen the EU’s digital sovereignty by making it a “standard setter in the field of cybersecurity” (Madiega, 2020), including the Network and Information Security Directive (n. 2016/1148, NIS Directive), which provides legal measures to boost the overall level of cybersecurity in the Union, and the Cybersecurity Act which established an EU-wide cybersecurity certification scheme (EURAXESS, 2020). More recently, the EU’s Cybersecurity Strategy for the Digital Decade, published in December 2020, aims to bolster Europe's collective resilience against cyber threats and “describes how the EU can harness and strengthen all its tools and resources to be technologically sovereign [which is] founded on the resilience of all connected services and products” (European Commission, 2021b). The strategy has three pillars: resilience, technological sovereignty and leadership; building operational capacity to prevent, deter and respond; advancing a global and open cyberspace through increased cooperation.

4. Assessing efforts to strengthen the EU’s digital sovereignty

In this section, we assess the extent to which the EU is actually furthering digital sovereignty, as per our definition in section 2, understood in terms of legitimate control. We focus our analysis specifically on the five areas that were repeatedly emphasised as important by the EU’s political institutions, whilst recognising that the concept of digital sovereignty is not limited to these areas, particularly when other state and non-state actors are considered. For example, establishing a “sovereign internet” through controlling what content can be viewed online is an area of digital sovereignty that is critical to China and Russia (McKune & Ahmed, 2018), but was not mentioned by EU institutional actors in our data set. Here we address these two components of our conceptual definition, control and legitimacy, in relation to the five areas identified above.

4.1. Control

Let us begin by considering the extent to which the EU has already succeeded in exerting control over the five forms of digital technology it most commonly identifies as impacting its digital sovereignty. First, the EU’s data governance approach represents the most developed and instantiated policy measures for strengthening the EU’s digital sovereignty. Internally, governance measures assert control over member states, by regulating data flows; this includes the FFDR prohibiting data localisation by member states. Externally, control is asserted by the GDPR and the related case-law of the European Court of Justice (in particular the Schrems saga, wherein European judges have imposed restrictions upon international data transfer, invalidating the approach elaborated by the Commission), ensuring that data flows are, as far as possible, subject to the control of the EU and the respect of its values. The implementation of the GDPR is also an example of the so-called “Brussels Effect”, which refers to the ability of the EU to regulate the global marketplace unilaterally on account of the size and attractiveness of its market. The territorial extension (Scott, 2014) of the GDPR provides the EU with external control as a regulatory superpower, with incentive mechanisms pushing both the private sector and other governments to follow the EU’s regulatory approach (Bradford, 2020). In this sense, market mechanisms provide the EU with a form of control over private sector companies.

Although data governance measures are the most developed area of the digital sovereignty agenda, their efficacy has been questioned. Measures such as the Open Data Directive, the FFDR and the Single Digital Gateway Regulation improve data efficiencies for governments, companies and individuals respectively, but the extent to which they significantly enhance EU control is doubtful; these measures may modestly improve individual control and EU competitiveness, but it seems unlikely that this approach, by itself, meaningfully challenges the dominance of US technology companies. The efficacy of the GDPR has also been questioned on account of its enforcement, which relies heavily on member states’ national authorities, leading to possible inconsistencies amongst EU states (European Parliament, 2021; Wagner & Janssen, 2021) and difficulties in cross-border cooperation, and because data protection authorities are generally under-resourced (Massé, 2020). Moreover, the fines given to companies that violate the GDPR frequently take a long time to materialise and typically pale in comparison to the revenues of major private sector technology firms.

Turning to the second area we identified in the data set, constraining the power of (non-European) platforms, efforts are mostly still getting underway. As noted, the DSA and DMA have not yet been cast into law, but the intention of the proposals is clear: the creation of a regulatory framework that allows EU authorities to target and sanction technology companies for a range of controversial practices that fly in the face of EU interests. The successful enactment of these measures would enhance control by allowing targeted interventions; however, given that they are in the early stages of the law-making process, it is difficult to determine the likelihood of their success. Caution is merited, since it is not uncommon for the legislature to water down EU legal acts, as was the case with the restrictions on the use of remote biometric surveillance in the drafting of the proposed EU AI Act.

However, even if they are passed into law without difficulty, constraining the power of—that is to say, controlling—large platforms is likely to require more than the measures contained in the DMA and DSA. This point is best exemplified by the recent response of the EU and its member states to Covid-19 digital contact tracing. The development and deployment of accurate, effective, and widely available digital contact tracing apps requires a complex socio-technical system, involving both hardware and software as well as analogue capabilities such as laboratory tests for Covid-19 (Morley et al., 2020). At least 19 EU member states turned to Apple and Google—the two companies that control the operating systems, and hence the APIs, of most of the mobile phones on which the apps could run—to provide at least the “exposure notification” functionality via API (i.e., Apple and Google provide the functionality to alert individuals when they have been near an individual who has tested positive for Covid-19). Whilst states had control over the risk-scoring algorithm used by individual apps (e.g., deciding the threshold level for risky contact) and over what individuals must do if they are notified about a Covid-19 contact, Apple and Google held complete sway over which phone models were compatible with the API (not all were), how the API worked, and, crucially, for how long they would make the API available. This means that, despite the efforts of EU states to exert greater control over the companies, the latter were able to design the technical framework for the system and thus determine key trade-offs between, for example, preserving privacy and sharing data with public health authorities. Apple and Google also maintained the ability, in principle, to turn off the contact tracing mechanisms of all those states using the API. In this case, existing regulatory measures did little to protect EU member states from the influence of US technology companies over the digital elements of the pandemic response (Sharon, 2020), and it is unlikely that the provisions contained in the DSA and DMA would have overcome these issues, even had they been in place.
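The division of control described here can be made concrete with a schematic sketch: a vendor-controlled exposure record, whose availability and contents member states could not alter, sits beneath a state-configurable risk-scoring layer. The field names, weights and threshold below are illustrative assumptions and are not drawn from the actual Apple/Google exposure notification specification.

```python
# Schematic illustration of the split in control described above.
# Field names, weights and thresholds are illustrative assumptions, not the real API.
from dataclasses import dataclass


@dataclass
class ExposureEvent:
    # Produced by the vendor-controlled layer: which phones participate, how
    # proximity is estimated, and whether this data keeps flowing at all is
    # decided by the platform operators, not by member states.
    duration_minutes: float
    attenuation_db: float  # rough proxy for distance between devices


# The layer member states could configure: the risk-scoring rule and the
# threshold at which a notification is triggered (hypothetical national choice).
RISK_THRESHOLD = 15.0


def risk_score(events: list[ExposureEvent]) -> float:
    """Weight close, long contacts more heavily (illustrative weighting)."""
    return sum(
        e.duration_minutes * (1.0 if e.attenuation_db < 55 else 0.5)
        for e in events
    )


def should_notify(events: list[ExposureEvent]) -> bool:
    return risk_score(events) >= RISK_THRESHOLD


# Example: a 20-minute close contact triggers a notification under these
# hypothetical settings, while a brief distant contact does not.
print(should_notify([ExposureEvent(20, 50)]))  # True
print(should_notify([ExposureEvent(5, 70)]))   # False
```

The point of the sketch is simply that the tunable layer sits on top of an interface whose existence, behaviour and longevity remained entirely in the vendors’ hands.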

This cautionary tale suggests that any regulatory measures must be accompanied by, among other things, the strengthening of EU infrastructures and industries, the third and fourth areas identified in our analysis. In both of these areas, progress has thus far been relatively limited. Improved cloud infrastructures within Europe would provide more opportunity for EU data to be stored within domestic infrastructures, strengthening digital sovereignty by ensuring that data is leveraged for emerging technologies and governed according to European rules and standards, and thereby enhancing EU competitiveness (European Commission, 2020f). However, Gaia-X is not a cloud provider. It is a non-profit organisation conceived as a platform joining up the services of European businesses, and it does not seek to compete directly with non-European technology companies. In fact, Amazon and Google were among the 300 businesses involved in establishing the Gaia-X project (Delcker & Heikkilä, 2020). The Declaration of Cooperation on Cloud by EU member states is a strong signal of intent to improve EU cloud capabilities, yet it is unclear when or how these measures will materialise.

Efforts to support the emerging technologies industry are plagued by similar uncertainty. The intention to strengthen digital sovereignty through increased investment is a step in the right direction for semiconductor technologies, supercomputing and AI. However, investment still pales in comparison to that of the EU’s economic competitors, specifically the US and China, which has led to calls for further investment in these areas (Brattberg et al., 2020). For AI in particular, given that the US and China have more permissive environments for innovation (at the expense of ethics), it is questionable whether the EU will be able to develop and deploy these technologies in a “competitive” manner (Roberts, Cowls, Hine, et al., 2021).

Finally, in terms of control over cybersecurity—the fifth area—the EU has been exerting increasing control in recent years, by focusing on developing Europe-wide cybersecurity standards and certification for companies providing digital technologies and services within the EU. This began with the 2019 Cybersecurity Act and the NIS Directive (Taddeo & Floridi, 2018) and continues with the 2021 Cybersecurity Strategy. Internally, the 2019 Cybersecurity Act helped member states improve their cybersecurity capabilities and strengthened ENISA as a forum for capability building, operational support, and standardisation. Externally, the NIS Directive required international providers to adopt EU standards to access the EU market. The proposed revisions to the NIS Directive and the 2021 Cybersecurity Strategy may successfully enhance this kind of control. However, control in the context of cybersecurity is only one element needed to foster the resilience of systems and the stability of cyberspace; international collaboration and the regulation of state behaviour in cyberspace are crucial to this end. It is therefore reassuring that the strategy envisages forms of international collaboration to define international norms and standards that reflect EU core values. Ultimately, how much control the EU will have in the cybersecurity area will depend on how much leadership it exerts in these international regulatory efforts (Taddeo, 2017).

4.2. Legitimacy

The second fundamental aspect of our definition of digital sovereignty is the normative consideration of legitimacy. We saw above that in the European context, the public consent that the criterion of legitimacy requires can be thought of in three senses, namely input (political), throughput (procedural) and output (performance and efficacy) legitimacy.

In each of these senses, the EU’s digital sovereignty agenda can be considered at least somewhat deficient. The EU doubtless has input legitimacy, because its functioning is based on representative democracy (art. 10 TEU). However, this input legitimacy is limited by the lack of direct input that citizens have into the selection of its leadership or its policy agenda, by the limitations that flow from the lack of a single shared language and media, and by the traditionally “de-politicised”, non-partisan nature of decision-making in the EU. The absence of a government that “the people” can directly vote out as a sign of disapproval is particularly troubling in this regard (Schmidt, 2013). Read alongside the intense corporate campaigning and lobbying that shapes many of the legislative actions above—with the “Big Five” American technology companies reported to have spent €19m lobbying the EU in 2020 alone (Nicolás, 2021)—the actual input of citizens, and the relative impact of this input, can be called into question. This raises critical questions about whether the digital sovereignty agenda has been sufficiently developed “by the people”.

Throughput legitimacy is also present within the EU, in the sense that relevant documents surrounding process and effectiveness are regularly published (Schmidt & Wood, 2019). However, throughput legitimacy is hamstrung by the perception that EU decision-making is insufficiently open and transparent. The largely opaque meetings between the European Commission, Council and Parliament in cases where the Council disagrees with amendments proposed by the Parliament (commonly known as trilogue meetings) are a particularly problematic example. For instance, it was through the trilogue mechanism that a political agreement was reached on the €7.5 billion Digital Europe Programme, which includes funding for supercomputing, cybersecurity, and AI (European Commission, 2020a). This mechanism is not provided for in the EU treaties and may undermine transparency in the legislative process. Significantly, the European Court of Justice has highlighted that “a lack of information and debate … is capable of giving rise to doubts in the minds of citizens, not only as regards the lawfulness of an isolated act, but also as regards the legitimacy of the decision-making process as a whole” (European Court of Justice, 2008, para 59). With this in mind, the EU courts have held that the work of the trilogues must also be available for access insofar as it constitutes a decisive stage in the legislative process (European General Court, 2018). Nonetheless, some argue that transparency over how negotiations are conducted remains deficient because of the limited amount of information provided (Pennetreau & Laloux, 2021).

This leaves output legitimacy, which is achieved when the digital sovereignty agenda is enacted “for the people”, judged in terms of the effectiveness of the measures (Schmidt, 2013). As outlined above in our discussion of control, the EU’s policy measures, as well as the relevant case-law of the European Court of Justice, have made substantial progress in some areas, such as ensuring better protection of personal data and respect for the right to privacy, whilst remaining more limited in others. More generally, the absence of a clear definition of digital sovereignty amongst EU policymakers, and of associated assessment criteria, undermines efforts to demonstrate that digital sovereignty has been effectively enacted “for the people”.

The potential limitations to the legitimacy of the EU’s digital sovereignty agenda are problematic, especially given that other, non-state actors are competing to control critical aspects of digital life (Floridi, 2020; Pasquale, 2017). Consider the large technology companies which, as we have seen, exert considerable de facto control over various aspects of digital life, exemplified in the case of Covid-19 tracing apps. This control increasingly risks being seen as legitimate, a perception that may be sustained by the competing loyalty that members of the public feel towards technology companies as users, in contrast with their affinity with the state as citizens (Culpepper & Thelen, 2020). This is problematic, not least because it could shield these companies from stricter governance requirements of the sort we identify here. That, in turn, could ultimately threaten fundamental EU values: technology companies are already undermining workers’ rights, with gig economy companies threatening the ability to bargain collectively (Tan et al., 2020; Tassinari & Maccarrone, 2020), and algorithmic bias is leading to discriminatory outcomes (Tsamados et al., 2021).

Conclusion and recommendations

Our analysis suggests that the range of policy measures adopted by the EU to strengthen its digital sovereignty is a promising first step, but falls short when assessed against the conception of digital sovereignty that we put forward in section 2. In particular, the EU continues to lack sufficient control over digital technologies to ensure that European values are safeguarded. Moreover, longstanding questions over the institutional legitimacy of EU policy making have been joined by questions concerning the increasingly murky role of technology companies as “legitimate” actors, to the extent that the idea of individuals as citizens is under increasing strain in an age when private sector actors command greater loyalty from those same individuals qua users. At a higher level, as our analysis has highlighted, the EU still lacks a clear, coherent vision of digital sovereignty, with different actors from different EU institutions emphasising different domains in which, and mechanisms by which, digital sovereignty should be sought and strengthened. With all this in mind, we propose that the EU prioritise three steps to strengthen Europe’s digital sovereignty and safeguard its values. These steps, and the associated recommendations that we propose, are structured around the three core deficiencies our analysis has identified: a lack of clarity and coherence around what is meant by digital sovereignty; limits on the EU’s control of digital technology; and threats to its legitimacy in this area.

First, an important step is for the EU to establish a common understanding of, and position on, digital sovereignty throughout the bloc. By “pooling” sovereignty in pursuit of coherent goals for the digital sphere across the bloc, the EU may be able to maximise its digital sovereignty and hence safeguard its interests and values more effectively. We hope that the definition of digital sovereignty proposed in this article and elsewhere (Floridi, 2020) may provide a good foundation for EU institutional actors to use the term “digital sovereignty” in a more precise manner.

However, even if the EU’s position on digital sovereignty were made clearer, this alone would remain inadequate for effectively controlling large technology companies, most of which are not European. The EU should therefore strengthen its global reach in this domain by elaborating legal instruments capable of extending their application beyond the Union’s borders; the above-mentioned GDPR approach could represent a benchmark in this respect. At the same time, the EU needs to equip itself with a stronger toolkit to promote and support European technology companies that align with the EU’s values and to prevent the continued widening of the capability gap between European and non-European companies. A policy of national champions, similar to that adopted in China (Roberts, Cowls, Morley, et al., 2021), has been proposed by some EU institutional actors as a means of boosting competitiveness (Calenda, 2020; Volpicelli, 2020). However, previous efforts to enact similar policies in the EU in the 1980s were disappointing and did little to improve international competitiveness (Strange, 1996). It may thus prove more fruitful for the EU to assess how member states have maintained a world-leading position in some industries, such as car and aerospace manufacturing, and to determine the extent to which related policies can be adapted to foster successful technology companies. The corollary is equally worth stating: the EU should also work to identify the similarities and differences between the technology sector and more established sectors, with respect to the effectiveness of the investment capacity, regulatory measures, and policy instruments currently at its disposal. The pre-eminence of Silicon Valley is just the most pronounced of these tech-sector-specific characteristics. There are many more, some of which may be more naturally advantageous to EU governance and more reflective of “European values”, such as the EU’s stated focus on trust and trustworthiness in the context of AI, or potential points of convergence with the EU’s globally ambitious green agenda.

Finally, the EU’s digital sovereignty agenda is presently undermined by the perceived limitations of the legitimacy of its policy-making processes. Unethical outcomes can arise if the EU is unable to introduce sufficiently strong regulatory measures on account of the perceived “pseudo-” or merely “quasi-” legitimacy held by technology companies through their ubiquity, their scope, and their users’ loyalty to or reliance on them. A clearer digital sovereignty agenda, aligned with member states’ understanding of the term, may provide a good foundation, as per our first recommendation. To improve legitimacy still further, however, the EU should strive to strengthen transparency within governance and support open democracy initiatives of co-design and co-ownership of policies, thus stimulating social acceptability and public support. 3 One method for doing this is through futures and foresight techniques, which can be used to envision the type of digital sovereignty towards which EU citizens want to strive. At the same time, a more deliberate course of action should be elaborated at the EU level to improve the digital awareness of EU citizens, allowing them to play a more active role in shaping the relevant measures.

These are only a narrow slice of the wide range of further steps that the EU could take as digital technologies increasingly affect the lives and livelihoods of EU citizens. Nonetheless, the current efforts identified in our analysis represent an important first step in the EU’s efforts to maximise its digital sovereignty. However, as the capacity and complexity of digital technologies continue to grow, it will be increasingly necessary to introduce new and better measures to ensure that the interests of EU citizens are protected and European values are safeguarded. Arguably, such changes would require a more effective allocation of competences between the Union and its member states, a result which cannot be achieved without amending the existing EU Treaties.

A first occasion for inclusive, high-level reflection on the next steps to be taken towards an agenda for Europe’s digital sovereignty is the ongoing Conference on the Future of Europe, where citizens, stakeholders, social partners and academia are empowered to have their say on the EU’s future policies and ambitions.

Annex

We used Google’s site search function to collect web pages from the European Commission’s, Council’s, and Parliament’s official websites, dated between 10 March 2020 and 10 March 2021, that explicitly mentioned the term “digital sovereignty”. A period of one year was selected to ensure that the results analysed reflected only the most recent discussions of digital sovereignty in the EU context. The searches returned 83 web pages for ec.europa.eu, 88 for europarl.europa.eu, and 67 for consilium.europa.eu, totalling 238 web pages. Once duplicate entries were filtered out, our final corpus consisted of 180 web pages, with 80, 70 and 30 returned from the Commission, Parliament and Council respectively.

We analysed these webpages to understand the policy areas that were being discussed in relation to furthering digital sovereignty and the associated measures that were being referenced as helping to achieve this. Five key themes emerged from our analysis, which were: data governance, constraining platform power, digital infrastructures, emerging technologies and cybersecurity. The table below outlines the frequency with which each policy area was cited as being of importance with respect to strengthening digital sovereignty.

It is important to acknowledge potential limitations in our methodology, particularly in relation to the frequency of mentions. Although we filtered duplicate web pages from our results, we did not remove multiple results that referenced the same speeches by prominent EU figures. This led to the policy areas mentioned in some speeches being repeated multiple times across different web pages, in different contexts. Accordingly, some policy areas registered a significantly higher frequency of mentions owing to the recurrence of these speeches across the data set. We did not consider this “echoing” to be problematic. Indeed, it is to be expected that explicit references to digital sovereignty by senior figures will be influential in shaping the policy measures pursued by the EU institutions that they represent. Whilst we could have further filtered our data to remove these examples, we believed that doing so would distort the relative significance of each policy area.

Policy area | European Commission (total = 80) | European Parliament (total = 70) | European Council (total = 30) | Total | Example policy measures
Data governance | 16 | 15 | 2 | 33 | European Data Strategy; GDPR; FFDR; SDGR; Open Data Directive; Data Governance Act
Constraining platform power | 2 | 10 | 1 | 13 | Digital Services Act; Digital Markets Act
Digital infrastructures | 18 | 9 | 7 | 34 | Gaia-X; Connecting Europe Facility; Joint Declaration on Cloud
Emerging technologies | 20 | 16 | 6 | 42 | White Paper on AI; European High Performance Computing Joint Undertaking; ECSEL; European Industrial Strategy
Cybersecurity | 9 | 10 | 5 | 24 | Network and Information Security Directive; European Cybersecurity Act; European Cybersecurity Strategy
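Purely as an illustration of the workflow described in this Annex, the following minimal sketch (in Python) shows how site-restricted queries can be composed, how collected pages can be de-duplicated by URL, and how coded themes can be tallied per institution to produce counts of the kind reported in the table above. The file name and column labels are hypothetical and do not correspond to any material used in the study.

```python
# Minimal sketch of the corpus-building and counting steps described above.
# Assumes the manually coded results have been exported to a CSV with the
# hypothetical columns: url, institution, coded_themes (semicolon-separated).
import csv
from collections import Counter, defaultdict

SITES = ["ec.europa.eu", "europarl.europa.eu", "consilium.europa.eu"]
TERM = '"digital sovereignty"'

# Site-restricted queries of the kind entered into Google's search interface.
queries = [f"site:{site} {TERM}" for site in SITES]


def load_corpus(path: str):
    """Read coded pages and drop duplicate URLs, mirroring the de-duplication step."""
    seen, rows = set(), []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if row["url"] not in seen:
                seen.add(row["url"])
                rows.append(row)
    return rows


def theme_frequencies(rows):
    """Count how often each policy area is coded, per institution and overall."""
    per_institution = defaultdict(Counter)
    overall = Counter()
    for row in rows:
        for theme in filter(None, row["coded_themes"].split(";")):
            per_institution[row["institution"]][theme.strip()] += 1
            overall[theme.strip()] += 1
    return per_institution, overall


# Example usage (with a hypothetical export file):
# rows = load_corpus("digital_sovereignty_corpus.csv")
# per_institution, overall = theme_frequencies(rows)
# print(overall.most_common())
```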

References

Agnew, J. (2005). Sovereignty Regimes: Territoriality and State Authority in Contemporary World Politics. Annals of the Association of American Geographers, 95(2), 437–461. https://doi.org/10.1111/j.1467-8306.2005.00468.x

Bauer, M., Ferracane, M. F., & van der Marel, E. (2016). Tracing the Economic Impact of Regulations on the Free Flow of Data and Data Localization (No. 30; Global Commission on Internet Governance Paper Series). https://www.cigionline.org/publications/tracing-economic-impact-regulations-free-flow-data-and-data-localization

Berlin Declaration on Digital Society and Value-Based Digital Government at the ministerial meeting during the German Presidency of the Council of the European Union on 8 December 2020. (2020). European Commission. https://ec.europa.eu/isa2/sites/isa/files/cdr_20201207_eu2020_berlin_declaration_on_digital_society_and_value-based_digital_government_.pdf

Bradford, A. (2020). The Brussels effect: How the European Union rules the world. Oxford University Press. https://doi.org/10.1093/oso/9780190088583.001.0001

Brattberg, E., Csernatoni, R., & Rugova, V. (2020). Europe and AI: Leading, Lagging Behind, or Carving Its Own Way? [Paper]. Carnegie Endowment for International Peace. https://carnegieendowment.org/2020/07/09/europe-and-ai-leading-lagging-behind-or-carving-its-own-way-pub-82236

Burwell, F. G., & Propp, K. (2020). The European Union and the Search for Digital Sovereignty: Building “Fortress Europe” or Preparing for a New World? [Issue Brief]. Atlantic Council Future Europe Initiative. https://www.atlanticcouncil.org/wp-content/uploads/2020/06/The-European-Union-and-the-Search-for-Digital-Sovereignty-Building-Fortress-Europe-or-Preparing-for-a-New-World.pdf

Calenda, C. (2020). Report on a New Industrial Strategy for Europe [Report]. European Parliament. https://www.europarl.europa.eu/doceo/document/A-9-2020-0197_EN.html

Couture, S., & Toupin, S. (2019). What does the notion of “sovereignty” mean when referring to the digital? New Media & Society, 21(10), 2305–2322. https://doi.org/10.1177/1461444819865984

Culpepper, P. D., & Thelen, K. (2020). Are We All Amazon Primed? Consumers and the Politics of Platform Power. Comparative Political Studies, 53(2), 288–318. https://doi.org/10.1177/0010414019852687

Custers, B., Dechesne, F., Sears, A. M., Tani, T., & Hof, S. (2018). A comparison of data protection legislation and policies across the EU. Computer Law & Security Review, 34(2), 234–243. https://doi.org/10.1016/j.clsr.2017.09.001

Eager, J., Whittle, M., Smit, J., Cacciaguerra, G., & Lale-Demoz, E. (2020). Opportunities of Artificial Intelligence. Study for the Committee on Industry, Research and Energy, Policy Department for Economic, Scientific and Quality of Life Policies, European Parliament.

EU Science Hub. (2020). Cybersecurity—Our digital anchor—A European perspective [Report]. European Commission. https://ec.europa.eu/jrc/en/facts4eufuture/cybersecurity-our-digital-anchor

EURAXESS. (2020). A Europe fit for the digital age. https://euraxess.ec.europa.eu/worldwide/south-korea/europe-fit-digital-age

European Commission. (2020a). Commission welcomes agreement on Digital Europe Programme. https://ec.europa.eu/commission/presscorner/detail/en/ip_20_2406

European Commission. (2020b). Speech by Commissioner Thierry Breton at Hannover Messe. https://ec.europa.eu/commission/presscorner/detail/en/SPEECH_20_1362

European Commission. (2020c). Europe: The keys to sovereignty. https://ec.europa.eu/commission/commissioners/2019-2024/breton/announcements/europe-keys-sovereignty_en

European Commission. (2020d, September 16). State of the Union Address by President von der Leyen. https://ec.europa.eu/commission/presscorner/detail/en/SPEECH_20_1655

European Commission. (2020e). Statement by the President at ‘Internet, a new human right’. https://ec.europa.eu/commission/presscorner/detail/en/statement_20_2001

European Commission. (2020f). Towards a next generation cloud for Europe. https://ec.europa.eu/digital-single-market/en/news/towards-next-generation-cloud-europe

European Commission. (2020g). Regulation on data governance – Questions and Answers. https://ec.europa.eu/commission/presscorner/detail/en/QANDA_20_2103

European Commission. (2020h). The Once Only Principle System: A breakthrough for the EU’s Digital Single Market. https://ec.europa.eu/info/news/once-only-principle-system-breakthrough-eus-digital-single-market-2020-nov-05_en

European Commission. (2021a). Key Digital Technologies: New partnership to help speed up transition to green and digital Europe. https://ec.europa.eu/digital-single-market/en/news/key-digital-technologies-new-partnership-help-speed-transition-green-and-digital-europe

European Commission. (2021b). Cybersecurity Strategy. https://digital-strategy.ec.europa.eu/en/policies/cybersecurity-strategy

European Council. (2020). Remarks by President Charles Michel after the Special European Council meeting on 2 October 2020. https://www.consilium.europa.eu/en/press/press-releases/2020/10/03/remarks-by-president-charles-michel-after-the-special-european-council-meeting-on-2-october-2020/

European Council. (2021, February 3). Digital sovereignty is central to European strategic autonomy—Speech by President Charles Michel at ‘Masters of Digital 2021’ [Online event]. https://www.consilium.europa.eu/en/press/press-releases/2021/02/03/speech-by-president-charles-michel-at-the-digitaleurope-masters-of-digital-online-event/

European Parliament. (2020). EU institutions establish common priorities for 2021 and until next elections. https://www.europarl.europa.eu/news/en/press-room/20201217IPR94201/eu-institutions-establish-common-priorities-for-2021-and-until-next-elections

Farrell, H., Levi, M., & O’Reilly, T. (2018, April 9). Mark Zuckerberg runs a nation-state, and he’s the king. Vox. https://www.vox.com/the-big-idea/2018/4/9/17214752/zuckerberg-facebook-power-regulation-data-privacy-control-political-theory-data-breach-king

Floridi, L. (2018). Soft ethics, the governance of the digital and the General Data Protection Regulation. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133). https://doi.org/10.1098/rsta.2018.0081

Floridi, L. (2020). The Fight for Digital Sovereignty: What It Is, and Why It Matters, Especially for the EU. Philosophy & Technology, 33(3), 369–378. https://doi.org/10.1007/s13347-020-00423-6

Franck, T. M. (1990). The Power of Legitimacy Among Nations. Oxford University Press.

Hintz, A., Dencik, L., & Wahl-Jorgensen, K. (2018). Digital citizenship in a datafied society. Polity.

Leonard, M., & Shapiro, J. (2019). Strategic sovereignty: How Europe can regain the capacity to act [Policy Brief]. European Council on Foreign Relations. https://ecfr.eu/publication/strategic_sovereignty_how_europe_can_regain_the_capacity_to_act/

Madiega, T. (2020). Digital sovereignty for Europe (Briefing PE 651.992; EPRS Ideas Papers). European Parliamentary Research Service.

Massé, E. (2020). Two years under the GDPR: An Implementation Progress Report. Access Now. https://www.accessnow.org/alarm-over-weak-enforcement-of-gdpr-on-two-year-anniversary/

McKune, S., & Ahmed, S. (2018). The Contestation and Shaping of Cyber Norms Through China’s Internet Sovereignty Agenda. International Journal of Communication, 12, 3835–3855. https://ijoc.org/index.php/ijoc/article/view/8540

Morley, J., Cowls, J., Taddeo, M., & Floridi, L. (2020). Ethical guidelines for COVID-19 tracing apps. Nature, 582(7810), 29–31. https://doi.org/10.1038/d41586-020-01578-0

Nicolás, E. S. (2021). ‘Big Five’ tech giants spent €19m lobbying EU in 2020. https://euobserver.com/science/151072

Pasquale, F. (2017, December 6). From territorial to functional sovereignty: The case of Amazon [Blog post]. Law and Political Economy Blog. https://lpeblog.org/2017/12/06/from-territorial-to-functional-sovereignty-the-case-of-amazon/

Pennetreau, D., & Laloux, T. (2021). Talkin’ ‘bout a Negotiation: (Un)Transparent Rapporteurs’ Speeches in the European Parliament. Politics and Governance, 9(1), 248–260. https://doi.org/10.17645/pag.v9i1.3823

Philpott, D. (2016). Sovereignty. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/sum2016/entries/sovereignty/

Pohle, J., & Thiel, T. (2020). Digital sovereignty. Internet Policy Review, 9(4). https://doi.org/10.14763/2020.4.1532

Reding, V. (2016). Digital Sovereignty: Europe at a Crossroads. EIB Institute. https://institute.eib.org/wp-content/uploads/2016/01/Digital-Sovereignty-Europe-at-a-Crossroads.pdf

Roberts, H., Cowls, J., Hine, E., Morley, J., Taddeo, M., Wang, V., & Floridi, L. (2021). Governing Artificial Intelligence in China and the European Union: Comparing Aims and Promoting Ethical Outcomes (SSRN Scholarly Paper).

Roberts, H., Cowls, J., Morley, J., Taddeo, M., Wang, V., & Floridi, L. (2021). The Chinese approach to artificial intelligence: An analysis of policy, ethics, and regulation. AI & SOCIETY, 36(1), 59–77. https://doi.org/10.1007/s00146-020-00992-2

Scharpf, F. (1999). Governing in Europe: Effective and Democratic? Oxford University Press. https://doi.org/10.1093/acprof:oso/9780198295457.001.0001

Schmidt, V. A. (2013). Democracy and Legitimacy in the European Union Revisited: Input, Output and ‘Throughput’. Political Studies, 61(1), 2–22. https://doi.org/10.1111/j.1467-9248.2012.00962.x

Schmidt, V. A., & Wood, M. (2019). Conceptualizing throughput legitimacy: Procedural mechanisms of accountability, transparency, inclusiveness and openness in EU governance. Public Administration, 97(4), 727–740. https://doi.org/10.1111/padm.12615

Sharon, T. (2020). Blind-sided by privacy? Digital contact tracing, the Apple/Google API and big tech’s newfound role as global health policy makers. Ethics and Information Technology. https://doi.org/10.1007/s10676-020-09547-x

Strange, S. (1996). The Retreat of the State: The Diffusion of Power in the World Economy. Cambridge University Press. https://doi.org/10.1017/CBO9780511559143

Taddeo, M. (2017). Deterrence by Norms to Stop Interstate Cyber Attacks. Minds and Machines, 27(3), 387–392. https://doi.org/10.1007/s11023-017-9446-1

Taddeo, M., & Floridi, L. (2018). Regulate artificial intelligence to avert cyber arms race. Nature, 556(7701), 296–298. https://doi.org/10.1038/d41586-018-04602-6

Tan, Z. M., Aggarwal, N., Cowls, J., Morley, J., Taddeo, M., & Floridi, L. (2020). The Ethical Debate about the Gig Economy: A Review and Critical Analysis. https://doi.org/10.2139/ssrn.3669216

Tassinari, A., & Maccarrone, V. (2020). Riders on the Storm: Workplace Solidarity among Gig Economy Couriers in Italy and the UK. Work, Employment and Society, 34(1), 35–54. https://doi.org/10.1177/0950017019862954

Taylor, L. (2021). Public Actors Without Public Values: Legitimacy, Domination and the Regulation of the Technology Sector. Philosophy & Technology. https://doi.org/10.1007/s13347-020-00441-4

Timmers, P. (2019a). Challenged by ‘Digital Sovereignty’. Journal of Internet Law, 23(6).

Timmers, P. (2019b). Ethics of AI and Cybersecurity When Sovereignty is at Stake. Minds and Machines, 29(4), 635–645. https://doi.org/10.1007/s11023-019-09508-4

Tsamados, A., Aggarwal, N., Cowls, J., Morley, J., Roberts, H., Taddeo, M., & Floridi, L. (2021). The ethics of algorithms: Key problems and solutions. AI & SOCIETY. https://doi.org/10.1007/s00146-021-01154-8

van Dijck, J., & Poell, T. (2016). Understanding the promises and premises of online health platforms. Big Data & Society. https://doi.org/10.1177/2053951716654173

Volpicelli, G. (2020, February 20). Who will really benefit from the EU’s big data plan? Wired UK. https://www.wired.co.uk/article/eu-tech-data-industrial

von der Leyen, U. (2020). Shaping Europe’s digital future. European Commission. https://ec.europa.eu/commission/presscorner/detail/en/AC_20_260

Footnotes

1. The specific domains were ec.europa.eu, consilium.europa.eu, europarl.europa.eu. Technically these are third level domains within the second-level europa.eu domain.

2. For example, it was the introduction of this regulation that forced the UK to adopt a data-offshoring policy for the National Health Service (NHS), whereas an implicit policy of data localisation had been in place before.

3. This need not necessarily be limited to policy-making only with respect to digital technology.

Embedding European values in data governance: a case for public data commons


This paper is part of Governing “European values” inside data flows, a special issue of Internet Policy Review guest-edited by Kristina Irion, Mira Burri, Ans Kolk, Stefania Milan.

1. Introduction

Once only a grim reality envisioned by alarmists, the struggle over data has become a daily challenge and a leading topic in contemporary public debate. This struggle encompasses both personal data, which can be used to identify a person, and non-personal data 1. The proposal of a digitally inclusive society 2 is under threat from unrestrained, highly disruptive developments in the fields of data extraction, algorithmic profiling, and the training of artificial intelligence (AI) on troves of big data (Isin & Ruppert, 2015). This new digital economy is described in a variety of ways: as cognitive capitalism, where a systematic process of privatisation of information in different forms allows for the maximisation of profits (Bauwens et al., 2019; Fumagalli et al., 2019); as data colonialism, in which data is used to subjugate and transform social relations (Couldry & Mejias, 2019); or as a form of surveillance capitalism, in which various technological apparatuses monitor humans and try to predict and control human behaviour for the sake of profit (Zuboff, 2019). Regardless of the theoretical approach, the key assumption that current data mining practices are failing individuals and society remains at the heart of the problem.

A stark example of governance failures is the collaboration between Alphabet’s DeepMind Health and the Royal Free NHS Foundation Trust (Powles & Hodson, 2017; Winter & Davidson, 2019). Details about 1.6 million patients were shared with inadequate efforts to secure data protection and clarity over usage, resulting in a (legally ambiguous) violation of contextual integrity (Nissenbaum, 2010). A further consequence was significant value leakage to a private company, Google, which has become the largest collector of health and patient data worldwide. In this respect, one can speculate that Google bought DeepMind to get to the NHS data, and not just for its programming and development skills (Nemitz & Pfeffer, 2020).

On the other hand, an insufficient level of data sharing remains a key challenge for innovative organisations, especially where data held by private actors is relevant to the provision of public services (Hofheinz & Osimo, 2017). According to Mitchell and Brynjolfsson (2017), new types of public-private partnerships should be established, including tools to incentivise the collection of data for use in the public interest. The pooling of such data by public administrations or trustworthy intermediaries is still rare, so its value is not captured for public benefit. Similarly, the debate on how to “unlock private data for good” begins with creating conditions for access to private data in order to achieve legitimate public policy goals (Alemanno, 2018).

Although efforts have been made to disrupt the highly intrusive processes of data mining and datafication, these efforts have largely consisted of bottom-up activism, unlikely to scale and stabilise into an arrangement that leverages the digital economy for public purposes (Beraldo & Milan, 2019). The public sphere therefore needs an ‘ecosystem of trust’ (Mulgan & Straub, 2019), which could achieve objectives such as citizen empowerment, the improvement of public welfare, and the re-use of data for the common good, all while safeguarding individual rights. Such governance is most notably the aim of the European Union (EU), which seeks to align the development of technology with a set of core values constitutive of European society. Prior efforts to regulate transnational data flows exposed the insufficient scope of the European legal regime compared to Brussels’ normative objectives. Hence the growing necessity for a comprehensive data governance framework that effectively enforces European values and rights, where they apply, even beyond the EU territory.

In light of the context framed above, the key research question posed by this paper is: which data governance model creates the conditions for data stewardship guided by European values and rights? The analysis is largely informed by science and technology studies (STS), critical data studies (CDS) and institutional economics, specifically common-pool resources (CPR) theory. A constitutionalist perspective has been adopted with regard to European rights and values.

The article begins with an overview of current challenges in the European legal system with regards to data. Subsequently, it presents four data governance models conceptualised through a data infrastructure lens and critically examines two prominent, yet highly arguable, paradigms related to data. The following section scrutinises the current understanding of data commons and their stewardship, arguing that public data commons are best positioned to secure European values and rights, while increasing data sharing. Given a strong legal mandate for such a model, whether it succeeds depends largely on mechanisms governing access/excludability and power asymmetries of participating actors. The conclusion summarises the results and opens space for further investigation.

The article aims to contribute not only to academic discussions, but also to contemporary policy considerations. It is especially relevant in the context of the European quest for technological sovereignty and in view of the impact of data-driven technologies (such as AI) on human rights (Council of the European Union, 2020; European Commission, 2020a). The digital indeed has a constitutive role in the current phase of European integration, which the European Commission has reaffirmed with its Data Strategy (European Commission, 2020b) and the subsequent proposal for a Data Governance Act (European Commission, 2020c). We invite scholars to study the conditions, tools and design principles that will aid the establishment of common European data spaces and of a comprehensive data-sharing framework that serves the public interest.

2. European rights and values in the context of data

2.1. Constitutive values and rights of the European Union

Present primary and secondary EU law provides a rich texture of rights and values for data, as well as balances between them. Many of the past and present discussions on the ethics of AI or on digital charters of rights seem to reinvent the wheel as to basic rights and values. It is sometimes conveniently ignored that, in the constitutional and legal history of Europe, the values and rights on which there is consensus have been identified and set down in a binding way, in processes which carry higher legitimacy than any of the present debates on the ethics of AI. 3 Thus, the challenge today is not to re-invent catalogues of values and potential rights, but to actualise and make operational the rights and values which have already been laid down in the rich texture of European law—from the anti-discrimination acquis to data protection. 4

However, European rights and values are not limited to individual rights. Indeed, the values of European constitutionalism, most notably the triad of human rights, rule of law, and democratic procedures, encompass collective rights and guarantees of institutions. Article 2 of the Treaty on EU also acknowledges values that refer to economic conditions and societal relations, such as equality, solidarity, and tolerance (European Union, 1992/2002). Both individual and collective rights, as well as institutions, erode in a regime where regulators fail to act in the public interest by enforcing the law as it stands (Nemitz, 2018). In this context, it is critical to understand how these rights and values are balanced within the data economy, with data protection as the benchmark.

The protection of personal data, and therefore of data belonging to identified or identifiable individuals, has often been labelled a priori as overriding. This approach can be read as a consequence of the imbalance that exists between the power of entities (whether business entities or not) that process and have access to large amounts of personal data, and the power of the individuals to whom that personal data belongs to control their information (European Union Agency for Fundamental Rights, 2018). However, it must be recognised that the fundamental rights to privacy and to the protection of personal data are not absolute and can be restricted by law under certain conditions. Both the European Court of Human Rights and the Court of Justice of the European Union (CJEU) have consistently held that a balance between the fundamental rights to privacy and the protection of personal data and other rights is necessary in the application and interpretation of Article 8 ECHR and Articles 7 and 8 of the EU Charter of Fundamental Rights (Court of Justice of the European Union, 2008, 2011; European Court of Human Rights, 2012).

2.2. The challenges of balance and enforcement

A balance must be struck between these fundamental rights and other EU values, human rights, and public and private interests, to ensure basic rights such as social rights, freedom of expression, freedom of the press, or freedom of access to information. By way of example, the GDPR (General Data Protection Regulation) explicitly requires EU member states to reconcile by law the right to the protection of personal data with the right to freedom of expression and information, including for journalistic, academic, artistic, or literary purposes.

Specifically, it asks to “adopt legislative measures which lay down the exemptions and derogations necessary for the purpose of balancing those fundamental rights” (see Recital 153 GDPR). However, nothing stands in the way of the same legislator, in full respect of the Charter of Fundamental Rights and the Treaties, adjusting this balance in later legislation. In this context, the European Data Protection Supervisor (EDPS) has issued several opinions (Wiewiórowski, 2019), including on initiatives aimed at broadening the sharing of information for law enforcement purposes.

Furthermore, the GDPR states that the protection of personal data cannot be a justification for restricting the free movement of personal data within the Union (see Art. 1(3) GDPR). This also represents the result of a balancing act, although justified by the consistent level of data protection achieved in the EU internal market.

The CJEU, as a rule, does not consider the economic interest of a data controller to prevail over the interest of the individual data subject in data protection. The Google Spain case (Court of Justice of the European Union, 2014) is worth mentioning in this respect: the interest of a private individual in having content about him de-indexed by Google was held to prevail over the economic interest of the search engine operator in processing that content, and over the interest of the general public, given that neither the person nor the information in question was of public interest. In the Manni case (Court of Justice of the European Union, 2017), the CJEU had to balance the EU rules on data protection against Mr. Manni’s commercial interest in removing information related to the bankruptcy of his former company from the commercial register. In this case, the CJEU established that the public interest in accessing the information prevailed over Manni’s right to obtain the erasure of the data.

Although the terms for balancing rights and values in Europe predate the data economy, the shortcomings in the enforcement of existing individual and collective rights are today’s biggest weakness in the data economy. The logic of data protection would be to enforce the rules most rigorously against those who collect and process the most personal data, given that the risk of non-compliance rises with the amount of data collected and the complexity of the processing involved. It is also plain that central tenets of today’s data economy have been described in detail as illegal, without effective enforcement to restore the law. Today’s reality is that Data Protection Authorities (DPAs) have not yet given themselves rational common parameters for prioritising their interventions, such as a principle of investigating ex officio, first of all, the collectors of the largest amounts of personal data, since the more personal data is collected, the more important full compliance with the GDPR becomes for the public interest.

In the effort to shape a novel data order in the EU, more focus is therefore needed on creating and sustaining the infrastructure for effective rights enforcement. It is evident by now that even a small set of values may need many rights and institutions. Since there are already conflicting approaches to regulatory design in the European digital market (Ó Fathaigh & van Hoboken, 2019), it is necessary to critically examine the proposed data governance models in light of the properties of data themselves.

3. Critical discussion of data governance models

3.1. Four models for data governance

The rising popularity of data governance is closely connected with the growing recognition of the value of data. Only a decade ago, data governance was understood as control over and the management of data (DAMA International, 2009), with an understanding that it is primarily an internal, company task, centred on coherence and clarity of information. More recent definitions of data governance incorporate numerous elements outside strict control, and tend to embrace a more holistic approach, defining data governance as a cross-functional framework, with specified rights, obligations, and formalised procedures for their application (Abraham et al., 2019). Thus, instead of guiding a single company or organisation, data governance encompasses the entirety of transnational and trans-organisational data flows, from the macro-level of nation-states to the micro-level of citizens.

A paradigm shift in governance, from markets to inter-organisational, networked arrangements, has been observed across different types of economic resources 5, with a dimension related to multi-stakeholder engagement (Jemielniak & Przegalińska, 2020). Although data silos formed by a few technology companies are still the prevalent mode of data governance, with the profit motive clearly being the dominant rationale in data flows, there are pioneering attempts at creating collaborative governance regimes with the public interest in mind.

Following Micheli et al. (2020), we identify four distinct models of data governance encompassing the plurality of both proposed and piloted options 6. These models are: 1) personal data sovereignty 7, where data owners control data as a personal object and individually decide on selling their information; 2) data collaboratives, which encourage businesses to establish partnerships and exchange data; 3) data cooperatives, which empower individuals to voluntarily pool data for mutual benefit; and 4) public data commons, which rely on public actors to aggregate and steward citizen and commercial data (Micheli et al., 2020).

Despite their varying characteristics, the emerging models of data governance can be juxtaposed and assessed based on their main differences and similarities. The models outlined by Micheli et al. (2020), analysed along parameters (dimensions) inspired by Mulgan and Straub (2019), demonstrate that the dimensions of governance mechanisms and governance goals are in fact derivatives of two other dimensions: stakeholder control and value allocation (see Table 1).

Table 1: Analytical matrix of data governance models

| Value allocation / Governance goals | Stakeholder control / Governance mechanisms |                             |
|                                     | Individual / Rights-based                   | Institutional / Trust-based |
| Private / Growth-driven             | Personal data sovereignty                   | Data collaboratives         |
| Public / Welfare-driven             | Data cooperatives                           | Public data commons         |

Governance mechanisms are closely intertwined with the type of stakeholders engaged in decision-making (Borrás & Edler, 2014), as data subjects have different mechanisms for exercising control than data controllers. For example, legal instruments may secure individual rights and claims to power over data; GDPR rights being a prime example (Calzada, 2019). Data subjects therefore maintain individual control based on data-related rights, whereas data controllers establish trust-based mechanisms like contractual arrangements and processes (e.g., monitoring, verification of access), which rely on and support the credibility of the engaged institutions.

Since governance goals, the value-based objectives for creating governance schemes (Winter & Davidson, 2019), find their realisation in the final distribution of gains from data sharing, the outcome is in fact an economic value allocation for the benefit of either a specific agent or a broadly defined public interest. The public interest might, and often will, benefit the individual agent in the long term, so the dialectical tension is, again, about balancing the order of values rather than about choosing winners.

As data can be used multiple times and by varying actors, it may in theory be possible to combine models to some extent. One should also consider that the processing of data leads to the creation of new data (OECD, 2019) and gives the original data holder or producer a better understanding of the added value of new uses of data. This may in turn set off a learning process that allows the original holder or producer to make better use of the data, to their own benefit too.

3.2. Critique of two paradigms in data governance

Yet the four data governance models do not seem equally capable of fostering data sharing while securing important values and rights. To provide a more nuanced description, we examine the complex socio-technical ecosystem in which data is produced, collected, and processed. The analysis draws largely on STS and CDS, which investigate infrastructure not merely as a positivist, value-free, objective materialisation of scientific progress (Iliadis & Russo, 2016), but as heterogeneous, closely interlinked data assemblages (ecosystems) consisting of ‘technological, political, social, and economic apparatuses’ governing the production and circulation of data (Kitchin & Lauriault, 2014). The previous approach often tended to be either naive techno-determinism, characterised by Shoshana Zuboff (2015) as “the idea that technology must not be impeded if society is to prosper”, or a particular agenda made opaque by pseudoscientific claims and declarations of ethical practices (Levy & Johns, 2016). However, the ubiquity of data in contemporary society requires going beyond some of the valid, yet narrow, perspectives that have dominated the discussion until now. Two lines of critique seem especially promising and well grounded in existing literature from various disciplines, encompassing STS, CDS, and institutional economics.

3.2.1. Data-as-a-commodity

First, the idea that data are commodities is dubious, to say the very least. Ontologically speaking, data are digital reflections of existing reality, not autonomous things-in-themselves (in Kantian terms, existing independently of observation). Rather, data “are already embedded in the social”, since increasingly “every aspect of the world is rendered in a new symbolic dimension” (Zuboff, 2015). Scholars increasingly call attention to the fact that the commodification of data “dematerializes it from the facts or processes or so-called natural person from which it is derived” (Käll, 2020, p. 3). The same principle holds for data collected from objects, spaces, or nature. Data, as a mere technology-based recording of reality, may misrepresent, oversimplify, or distort it. Nevertheless, digital systems rely on such input to function.

The rights-based approach currently fails to secure the integrity of both personal and non-personal data, and of the data ecosystem that produces them. A broader consideration cannot overlook the fact that every system requires reproducibility if it is to survive, especially human society and its social reproduction (Federici, 2012). Industry or nature must exist to produce industrial and other non-personal data; non-personal data often depends heavily on the social reproduction of the workers operating machines, while data about nature depends on being captured by economic or public actors.

By commodifying data, it is possible to distance it from individuals, processes, and material conditions required for its production, hence dis-embedding data from wider social relations and values (Jessop, 2007). That, in turn, allows control over fictitious property, and extraction of value for profit; critical to the continuous expansion of capital to new territories (Hardt & Negri, 2017). This concept was evident to Dan Schiller in 1988, when he assessed that in capitalist logic, the value of information “stems uniquely from its transformation into a commodity - a resource socially revalued and redefined through progressive historical application of wage labour and the market” (Schiller, 1988, p. 41).

The real subsumption of existing production and consumption in the market sphere, mediated by novel technologies, is closely followed by a colonising move to capture the data and value from spheres of life which remain non-commodified, starting with nature and progressing into the most inner lives of humans; their dreams, emotions, and experiences (Couldry & Mejias, 2019; Zuboff, 2019). It is Polanyi’s “Great Transformation” occurring again, this time on digital platforms and led by cognitive capitalists to “rewire the flows of data and ultimately money and power” (Kenney et al., 2020). The rights-based approach aims to protect from the perils of data misuse, but it does not break from the fictitious commodity form of data.

Creating and enclosing virtual networks, digital spaces, and data-producing technological apparatuses has serious consequences for social relations, as it fundamentally distorts and reshapes them around the needs of commercial success (Fuchs, 2016). Consequently, the governance goal of an empowered individual results in self-commodification, and the noble aim of sharing, under market conditions, “effectively transforms the social context of what used to be a favour and turns it into something to be bought or sold” (Jemielniak & Przegalińska, 2020). Data governance founded on the premise of a widespread, market-based treatment of data as a commodity could “lead to another form of the tyranny of fine print” (Kerry & Morris Jr, 2019), where any online transaction would involve the sale of data (e.g., about the transaction).

It has also been argued that, in the presence of externalities whereby data reveals relevant information about other data owners, prices will typically be depressed in the long term: leakage or sales of data by some users allow inferences about others (through statistical analysis or machine learning), thus diminishing the cost of further surveillance (Acemoglu et al., 2019). Data access and sharing in the public interest are still marginal in the EU digital marketplace and should be incentivised (IDC & Open Evidence, 2017; Kerber, 2017); open data represents a negligible segment of the data realm (Blackman & Forge, 2017). However, even increased competition and open data sharing would not redress the problem of excessively low prices and the oversharing of data; only the presence of a mediator (third-party authority) would improve efficiency.
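To make this externality concrete, the following minimal sketch (our illustration, not the model of Acemoglu et al., 2019) assumes two users whose attributes are correlated: once user A sells their data, a buyer can infer much of user B's attribute, so the marginal value of B's data collapses. The correlation value and the pricing rule are illustrative assumptions.

```python
# Minimal sketch of the data externality described above (illustrative only).
rho = 0.8      # assumed correlation between A's and B's attributes
var_b = 1.0    # buyer's prior uncertainty about B before acquiring any data

# For jointly Gaussian attributes, observing A shrinks the variance of B to:
var_b_given_a = (1 - rho**2) * var_b

# If the price a buyer is willing to pay is proportional to the uncertainty the
# data resolves, B's data now commands only a fraction of its stand-alone value.
print(f"Residual uncertainty about B: {var_b_given_a:.2f}")          # 0.36
print(f"Relative value of B's data:  {var_b_given_a / var_b:.0%}")   # 36%
```

Under this toy assumption, the more data is leaked or sold, the cheaper the remaining data becomes, which is why the argument points to a mediating third party rather than to market forces alone.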

This mediator (data intermediary) introduces mechanisms of trust into data governance. Instead of relying on personal discipline and consent, which might in fact be considered a “victimisation discourse” that individualises systemic problems (Janssen et al., 2020), institutional control is based on trust in the institution to represent data subjects’ rights and aims. Our assertion is not that rights are unnecessary, but that for governance they are inadequate on their own. Moving away from the practices of data-as-a-commodity requires complementary efforts to find systemic solutions not at the individual but at the societal level.

3.2.2. Neglect of public interest

Second, the public interest is a commonly missed part of the equation: the struggle is not between data protection and profit, but between different private allocations of value and public, collective gains. Championing data protection has undoubtedly led to the establishment of individual rights to data in the EU and increased social awareness of infringements committed by digital platforms (Rossi, 2018). However, data protection is insufficient as an overarching governance aim, because the negative consequences of different data governance models, as well as their positive network or spillover effects, reach much wider than privacy alone.

There exist intersecting interests in personal data, many of which fall into the category of public interest, involving, for example, security, medical, financial, and environmental issues (Kerry & Morris Jr, 2019). When examining the relationship between personal data protection and public interest needs, some argue that the latter should always prevail. The GDPR provides for legal grounds and derogations from the protection of individual rights, under certain conditions, in the event of the performance of a public interest task. It entrusts EU member states with the responsibility of determining several public interest strands, and laying down the legal basis for usage of personal data in the public interest. It is also very specific on occasion, for example, when it comes to health and the clear priority given to use of personal data, even without consent, to avoid pandemics like that of COVID-19 at present.

While any interest limiting the right to data protection needs to be laid down in law for a balanced approach (Borgesius et al., 2015), the current non-uniform scenario, with different member states laying down their own public interest laws, hinders a unified approach at the EU level. The outcome is neither uniform protection of fundamental rights in the EU nor a well-functioning internal market. Although such national variations do not prevent the free flow of data within the EU internal market, they certainly do not help; they may sometimes push the data controller (the entity determining the purposes and means of the processing) to confine processing to the territory of a specific member state or states. However, a certain amount of pluralism in data governance is not negative per se, and may even be desirable where it reflects the inherent characteristics of the specific interests at stake. Consequently, in some cases, such as data protection in the workplace and its relation to labour law, or the relationship between the freedom of the press and data protection, the GDPR explicitly allows for a certain degree of variation, as an exception to the general rule that harmonisation is the core condition for both a functioning internal digital market and a level playing field for the protection of fundamental rights.

Although the EU’s position is to allow the processing of personal data in the public interest, considering the need to monitor the pandemic and its spread, the GDPR also points out that member states retain the autonomy to maintain or introduce national provisions further specifying the application of the rules governing the processing of personal data carried out in the performance of a task in the public interest.

In summary, it is clear in European law that important public interests can be placed above data protection. This is only possible provided that it is done in the form of parliamentary law, and in a way that is necessary in a democratic society as well as proportionate to the public interest pursued. There is always a need for a legal basis (EU or member state law) that precisely meets a public interest objective, that is proportionate to the legitimate aim being pursued, and that details specific provisions to adapt the application of the GDPR rules.

Moreover, it has become evident that even issues linked to data protection are not solely about data protection itself. As mentioned above, single players may control data sets of enormous value from a public policy point of view (Alemanno, 2018), without any duty to ensure access or sharing in the public interest. Furthermore, in the case of dominant players, abusive conduct may be carried out through the exploitation of the personal data collected. In this sense, one may recall the case involving Facebook in Germany, which the Bundeskartellamt (the federal antitrust office) found to have abused its dominant position by making the use of its social networking service conditional on users granting broad permission to collect and process their personal data (Bundeskartellamt, 2019).

The protection of personal data may be seen as an obstacle to data sharing (Zoboli, 2020). First, the application of the GDPR to any data sharing operation involving mixed databases entails a direct obligation to secure GDPR rights in the process of data sharing, which substantially inhibits companies from starting data sharing initiatives. In practical terms, this means that, to share data, all the obligations of data controllers and processors, and the rights of data subjects, must be fulfilled. In practice, however, company data governance techniques are now so sophisticated, and so well supported by automation tools, that they allow the precise separation of personal from non-personal data and, beyond that, the identification and tracking of the rights and privileges granted or acquired for every piece of data.
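As a purely hypothetical illustration of such tooling, the sketch below tags each record with machine-readable metadata (whether it is personal data, its legal basis, and sharing permissions) so that a mixed database can be separated before a sharing operation. The field names and categories are our assumptions, not an existing standard or any provision of the GDPR.

```python
# Hypothetical sketch of rights-aware data tagging for mixed databases.
from dataclasses import dataclass, field
from typing import Optional, Set

@dataclass
class DataRecord:
    payload: dict
    is_personal: bool                          # personal vs non-personal data
    legal_basis: Optional[str] = None          # e.g. "consent", "public_interest"
    sharing_permissions: Set[str] = field(default_factory=set)

records = [
    DataRecord({"sensor": "line_3", "temp_c": 81.2}, is_personal=False),
    DataRecord({"user_id": "u-123", "route": "A-B"}, is_personal=True,
               legal_basis="consent", sharing_permissions={"research"}),
]

# Non-personal data can be shared without GDPR controller duties; personal data
# is released only where a legal basis and a matching permission are recorded.
non_personal = [r for r in records if not r.is_personal]
personal_for_research = [r for r in records
                         if r.is_personal and "research" in r.sharing_permissions]
print(len(non_personal), len(personal_for_research))   # 1 1
```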

Both the company that shares the data and the one that receives it must have a legal basis for the processing of personal data and must ensure an adequate level of security throughout the overall data set, in compliance with Article 5(1)(f) of the GDPR. In addition, data protection or data security might be invoked by controllers as a means of restricting access for other stakeholders, with the aim of protecting an organisation’s competitive advantage of asymmetrical access to data (Calo, 2017).

There are proposals to make access to data mandatory where it constitutes a barrier to market entry, while fully respecting the GDPR (BEUC, 2020). Acting in the public interest requires an intervention in the complex socio-technical ecosystem, shaping it to explicitly increase the scope and quality of public services acting in accordance with European values, and to reinforce welfare-driven data sharing (Zygmuntowski, 2018). The ambition to fuel AI development, and other data-based innovation, will remain unsatisfied if private interest of data controllers is prioritised over public interest; and this is true for both personal and non-personal data.

In this context, the concept of “reverse-PSI” plays a role, that is, granting public sector bodies access to reuse privately held data (Poullet, 2020). This would thus be the mirror image of the Public Sector Information regulation—currently embodied in the Open Data Directive—which governs the access to and reuse of public data. A statement in this direction has been made by the Expert Group on Business-to-Government Data Sharing appointed by the Commission, whose report moves in the direction of defining a strategy to increase the sharing of data collected by the private sector for the benefit of public authorities (High-Level Expert Group on Business-to-Government Data Sharing, 2020).

Moreover, one of the key findings of the Furman Review on competition in digital markets is that ex ante rules bring advantages, such as clarifying procedures (Furman et al., 2019), which underlines the need to expand the set of by-design principles. This is primarily because an ex ante regime specifies what can and cannot be done, which directly shapes the architecture of digital systems and leads to a different design of data flows and governance mechanisms. The regulatory interest thus evolves from data access solutions (both horizontal and sectoral) towards data governance systems with new institutions (Kerber, 2020). An example of such a governance tool is the algorithmic impact assessment, proposed by various stakeholders to condition access to data and its algorithmic processing on the creation of value aligned with the public interest (Nemitz, 2018).
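To illustrate the difference in a deliberately simplified, hypothetical way, the sketch below encodes an ex ante rule directly into the access architecture: data flows only if the declared purpose falls within a recognised public-interest category and an algorithmic impact assessment is on file. The categories and checks are our assumptions, not provisions of any instrument discussed here.

```python
# Hypothetical ex ante access check embedded in a data governance system.
from dataclasses import dataclass

PUBLIC_INTEREST_PURPOSES = {"public_health", "official_statistics", "mobility_planning"}

@dataclass
class AccessRequest:
    requester: str
    purpose: str
    impact_assessment_filed: bool   # stands in for an algorithmic impact assessment

def grant_access(req: AccessRequest) -> bool:
    """Ex ante rule: both conditions must hold before any data is released."""
    return req.purpose in PUBLIC_INTEREST_PURPOSES and req.impact_assessment_filed

print(grant_access(AccessRequest("city_lab", "mobility_planning", True)))   # True
print(grant_access(AccessRequest("ad_broker", "behavioural_ads", True)))    # False
```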

At the EU level, a shift towards an ex ante regulatory approach appears to underpin the EU Digital Package and, specifically, the proposals for the Digital Services Act (DSA, European Commission, 2020d) and the Digital Markets Act (DMA, European Commission, 2020e) published in December 2020. More specifically, the DSA aims to introduce proportionate and balanced rules to better regulate the digital ecosystem, and its core is an innovative framework for the transparency and liability of online platforms. The DMA, in turn, addresses the economic imbalances and unfair trading practices that may be implemented by online platforms acting as gatekeepers. This regulation follows from the assumption that large online platforms can control important platform ecosystems in the digital economy, allowing them to leverage their advantages, including access to huge amounts of data (European Commission, 2020a).

In summary, both the critique of the individualistic, rights-based approach and the critique of the growth-driven, strictly private allocation of value point to the necessity of institutional, trust-based mechanisms with a public interest focus, aimed at enhancing common welfare in practical concordance with the protection of the fundamental rights of individuals.

4. Towards public data commons

4.1. Characteristics of data commons

The systemic level of the data assemblage must be re-conceptualised, putting more emphasis on institutional solutions with the public interest in mind. For the purpose of governance, the most pressing questions of value, goals, stakeholder control, and mechanisms of enforcement suggest that “it is useful to think less about data as a commodity to be bought and sold, and more as a shared asset or common good” (Bass, 2020, n.p.). This claim can be interpreted in two ways, as Purtova (2021) points out: as a normative claim (commons vs silos) or in the sense of a common-pool resources (CPR) framework. Discussing the inherent properties of data and the data assemblage, we opt for the latter.

There is renewed interest in studies on CPR, partly because the insights are applicable to novel challenges related to digital data (Bollier & Helfrich, 2012). The tensions between public value and appropriation have been deeply studied by Elinor Ostrom, who defined CPR as resources which have a high level of subtractability (rival consumption may threaten them), and a low level of excludability (difficulty in limiting consumption) (Ostrom, 1990). Although due to their replicative nature data would seem to be a pure public good (Mitchell & Brynjolfsson, 2017), it is increasingly viewed as commons that should be governed collectively, thus unlocking the different uses and values of data for different stakeholders while protecting their rights (Coyle et al., 2020).

When Hess and Ostrom discussed knowledge commons, they pointed to technological progress as the facilitator of the ability to exploit the commons. Their explanation is that the “ability to capture the previously uncapturable creates a fundamental change in the nature of the resource, with the resource being converted from a non-rivalrous, non-exclusionary public good into a common-pool resource that needs to be managed, monitored, and protected, to ensure sustainability and preservation” (Hess & Ostrom, 2007). What was once true for knowledge and the possibility to capture it (e.g., via intellectual property rights), became true as well for data with the expansion of surveillance apparatuses.

Subtractability is thus reinterpreted to mean not only the depletion of physical resources but also conditions in which overconsumption endangers the sustainability of the commons. It emerges once the data ecosystem moves from the periphery of our socio-economic system to its very fore. Capturing and processing data has far-reaching consequences for the very data subjects who produce that data. When rights are harmed by misuse, trust in digital systems declines. Similarly, overconsumption by dominant actors is no longer non-distortionary, since it affects the distribution of data-related gains (e.g., innovation surplus, business insights) and leads to market capture. Although data consumption is not rivalrous per se, its outcomes contribute to rivalry in a capitalist economy.

If the usage of data commons is uncontrolled, and in particular if no stewardship values are built into the governance model, it will ultimately result in problems such as the reinstated enclosure of data, or the disempowerment of individuals vis-à-vis digital companies (Purtova, 2017). These negative externalities diminish the general sustainability of the data ecosystem. A data commons does not physically deplete, but it is “threatened by undersupply, inadequate legal frameworks, pollution, lack of quality or findability” (Dulong de Rosnay & Stalder, 2020). Purtova’s (2021) claim that “one’s ‘enjoyment’ of data does not lead to its deterioration or depletion” does not consider the far-reaching consequences that arise when a data processor enjoys the data in an unsustainable manner.

It is not collective management or sharing itself that constitutes data as CPR, but the nature of data. Contrary to the calls for “moving beyond one-size-fits-all solutions” (Carballa Smichowski, 2019), trial and error scrutiny of different data governance models allows for moving towards the solution that takes the characteristics of data into account.

4.2. Governance of data commons

Commons-based peer production mediated by digital technologies has been studied and championed for many years now as an alternative mode of production (Bauwens et al., 2019; Benkler, 2006; Kostakis & Bauwens, 2014), although the problems of long-term, sustainable governance, value allocation, and data flows, as well as the risk of corporate actors co-opting open access commons, have not been well considered (Bauwens & Niaros, 2017; Papadimitropoulos, 2018). Bodó claims that the successes of corporate, extractive practices are in fact “the failure of the commons-based peer production movement, because it refused to think about value” (Bodó, 2019). Indeed, while Wikipedia is still held up as a prime model of public-interest peer production, other peer production ecosystems, such as GitHub (bought by Microsoft), Red Hat (bought by IBM), and SSRN (bought by Reed Elsevier, now RELX), have become part of an ever more profit-dominated digital ecosystem. Wikipedia, which receives substantial funding from the big digital corporates and their staff, is spared in order to maintain a fading historic dream: the claim that freedom from democratically made legislative rules, plus technology, will lead to the best public interest solutions, embodied in its most poetic form in John Perry Barlow’s Declaration of the Independence of Cyberspace.

Digital commons scholars point out that for a commons to reproduce both its objects (data) and its subjectivities (trust, mutual aid), it has to interact with the market system outside the commons value circuit itself (De Angelis, 2017). However, oversharing or leakage of data to for-profit companies increases the risk of value extraction and drives a negative feedback loop, in which top talent is attracted only to those companies and innovation happens for the sake of a limited customer group (or shareholders alone) instead of the common good (Weber, 2017). The cost burden for accessing data commons must therefore be proportionate to the means of a given actor and relevant to the true aims of data usage.

A fine line needs to be recognised between stewarding CPR in the public interest and governance models that protect dominant power. Data protection, for instance, and rules of access to data must be addressed in a way that also allows access for smaller and more agile entities, instead of creating a silo that would again conform to the procedures of the biggest industry players. As in other CPR governance schemes, data commons are not synonymous with unrestricted, open access; to remain sustainable they may be bound by specific rules as to who may benefit from sharing and for what aims (Hess & Ostrom, 2007). Designing rules for who gets excluded, why, and with what consequences is thus central to considerations of data commons (Prainsack, 2019). Access and exclusion are intertwined here, and designing both cannot avoid addressing the power asymmetries that undermine sustainability.

For these reasons, the notion of data stewardship has gained considerable attention lately, departing from a strictly technical understanding towards a definition that encompasses sustainable, responsible management oriented towards improving people’s lives and securing the public interest (Verhulst et al., 2020). Balancing embedded values with the ability to control data usage is difficult, but it carries the possibility of deploying a repeatable framework that utilises automated decision-making systems and protocols (O’Hara, 2019).

Successful cases of data sharing in scientific research communities were to a large extent governed according to CPR principles, in contrast with a few cases ridden with conflict (Fisher & Fortmann, 2010). Where data sharing caused considerable conflict between data producers and users, this was due to the absence of mechanisms predicted by theory: unclear appropriation rules and boundaries, little influence on collective-choice arrangements, and a lack of recognition of rights to organise. Web-based data commons repositories also employ a variety of control policies and tools, guided by commons theory (Eschenfelder & Johnson, 2014). These cases show that stewarding data commons is both effective and sustainable; yet scaling such solutions means changing their scope from narrow and industry-limited to societal, and thus public.

4.3. Designing public data commons

To solve the data governance conundrum, we argue for institutional, trust-based mechanisms and welfare-driven value allocation. Taking into account the characteristics of data commons, this requires an intervention to establish public data commons, defined as a trusted data sharing space established in the public interest. Primarily, this includes safeguarding (or even advancing) European values and rights by active participation of public actors in stewarding data.

Mariana Mazzucato rightly asks: “why (should) the public’s data not be owned by a public repository that sells the data to the tech giants, rather than vice versa?” (Mazzucato, 2019, n.p.). Indeed, there is a growing body of literature calling for digital public infrastructure, such as public platforms or data spaces (Bass et al., 2018; Hall & Pesenti, 2017; Morozov & Bria, 2018), mindful of natural monopolies and network effects on digital platforms (Belleflamme & Peitz, 2016). Public data commons unwind the corporate extraction of data by creating capacity in the public sector to shape socially beneficial AI and to regain technological sovereignty (Zygmuntowski, 2020). Enforcing interoperability might be one way to encourage data sharing with the public data commons; several countries have already adopted obligatory data sharing mandates, departing from voluntary measures (Zoboli, 2020). In line with the reverse-PSI approach mentioned above, in France, for example, private actors are obliged to open specific categories of data in their possession under certain conditions (see Loi du 7 octobre 2016 pour une république numérique, Art. 17 and following). The common element of these categories is that the data are of public interest and include, for instance, data generated in the context of procurement or commercial data for the development of official statistics.

It is, however, essential not to undermine the possibilities of data sharing, knowledge spillovers, and the re-utilisation of data by different societal actors. Instead of monopolising data for other ends, the role of the public sector is to be the facilitator and custodian, setting out governance rules and objectives and enforcing them, while inviting or mandating different data ecosystem stakeholders and granting them participatory mechanisms. Collaborative governance acts as a systemic feedback loop, preventing the dominant organisation (which can be a public actor, though at present it will often be a private one) from rigging the system so that it continues with “business as usual” at the expense of other stakeholders (Susha & Gil-Garcia, 2019).

Following the considerations discussed in this article, such governance should rest on a set of principles, such as European values and rights, embedded so as to guide rules of access and excludability as well as discrete choices of architecture design. Data-related rights would become functionalities of the public data commons’ interfaces and procedures. Their execution would not be at the mercy of data controllers but would be a public service initiated by the commons on behalf of the data subject. In this sense, the public data commons resembles proposals for data trusts (Hardinges et al., 2019; O’Hara, 2019), yet its scope and main governance goal are far more universal, given the role of public administration. These considerations are summarised in Table 2, where we also point to technological, legal, and institutional solutions to the main challenges of public data commons. Figure 1 presents a proposal for the governance design of public data commons that leverages these findings.

Table 2: Overview of public data commons characteristics

| Issue | Design choice | Key problem | Ostrom principles | Possible solutions |
| Privacy | Data protection by design; Secure computing | Preventing data leakage | 1. Clearly defined boundaries | ‘Move algorithm to data’: Open Algorithms (OPAL) principles (Hardjono & Pentland, 2017); Differential privacy; Federated storage & learning |
| Value | Public welfare-driven allocation | Benefits decoupled from public interest | 2. Congruence between benefits and costs | Funding mission-oriented innovation (Mazzucato, 2018); Licences granting returns to the public |
| Control | Trust-based institutional mechanisms | Power asymmetry between stakeholders | 3. Collective-choice arrangements | Collaborative governance empowering data stakeholders; Consensus building for strategic decisions |
| Access & exclusion | Accountability to preserve data commons | Detecting malicious intent; Appropriate sanctions | 4. Monitoring; 5. Graduated sanctions | Algorithmic impact assessment; Monitoring and auditing access; Graduated fining & denylisting violators |

“Ostrom principles” refer to Ostrom (1990).
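As one concrete illustration of the ‘possible solutions’ listed in the Privacy row above, the sketch below applies the standard Laplace mechanism of differential privacy to a counting query. The epsilon value, the query, and the toy data are illustrative assumptions, not parameters proposed in this article.

```python
# Minimal differential privacy sketch (Laplace mechanism) for a counting query.
import numpy as np

def dp_count(values, threshold, epsilon=0.5, sensitivity=1.0):
    """Return an epsilon-differentially private count of values above a threshold.

    Adding or removing one individual changes the true count by at most
    `sensitivity`, so Laplace noise with scale sensitivity/epsilon suffices.
    """
    true_count = sum(v > threshold for v in values)
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

incomes = [21_000, 35_000, 52_000, 48_000, 90_000]   # toy personal data
print(dp_count(incomes, threshold=40_000))            # noisy count near 3
```

Techniques of this kind, combined with federated storage and ‘moving the algorithm to the data’, would allow the commons to answer queries in the public interest without releasing raw personal records.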

Figure 1: Governance design of public data commons

It is understandable that personal data sovereignty, data pods, and self-determination through data seem to champion digital emancipation. They promise conscious, active citizens who bargain effectively on markets while leveraging their rights to stay in control of data governance. However, data ownership means little if the owned property is sold on uneven terms, in a world of growing inequalities. Indeed, “decentralising processing does not necessarily imply decentralising power” (Janssen et al., 2020). To mitigate this problem, countermeasures have been proposed, such as Jaron Lanier’s data labour unions (Arrieta-Ibarra et al., 2018), multi-client privacy administrators (Betkier, 2019), and consent champions (Ruhaak, 2020). This approach, combining personal data sovereignty with institutional mediation, still does not question the commodification of data and its social impact; it merely finds ways to secure more equitable distribution and control within a strictly market environment (Morozov, 2015).

Establishing public data commons requires ambitious cooperation between regulators, civil society, and businesses willing to benefit from the new data governance regime. It is unlikely that we can get either value allocation or stakeholder control right in isolation without drastic losses to the other. Inadequate cooperation on allocating value to the public results in the deadlock observed at present, in which stakeholders lack the trust to set common standards and procedures and to exchange valuable data (European Commission, 2018; Mitchell & Brynjolfsson, 2017). Conversely, too little cooperation on securing trust-based institutional mechanisms prevents data cooperatives from scaling up to meaningfully boost universal welfare and offer a significant alternative to monopolistic data practices (Sandoval, 2020).

Public data commons are a model yet to be further explored, but one that might better inform how to structure common European data spaces (European Commission, 2018) or build a federated collaboration between national data repositories stewarded by EU member states. Similarly, the Data Governance Act aims to unlock the value of data while protecting individual rights. Unfortunately, these actions so far fall short of ambitions because they overemphasise private value creation and cling to the usual governance model of strictly market competition (between proposed data intermediaries). We believe that balancing value creation and data stewardship requires creating institutions, setting standards, and then inviting data stakeholders to collaboratively govern. This differs very much from the ex post logic of ordering existing actors.

Public data commons could act entrepreneurially, leveraging their role as operators of digital utilities to ensure equal terms and the protection of rights in data flows, but they could also serve as enablers of innovation and custodians of public interest performance monitoring. Such a scenario will not happen without sufficient public infrastructure, democratic oversight, and an expansion of governance using new legal tools and drawing on past experiences of institution-building (Sadowski et al., 2021). This amounts to the kind of systemic change that is increasingly called for: one that recognises the need to rejuvenate the European welfare state through thorough digitalisation, the expansion of fundamental rights, and increased collaboration. It remains to be hoped that the orientation towards the public interest outlined above will find its way into a new set of European rules for good data governance, based on the rule of law, on fundamental rights, and on the primacy of democracy.

5. Conclusion

In the search for a distinctively European understanding of technological sovereignty, encompassing both citizens’ rights and technological artefacts that “tackle real (not only commercial) problems based on open codes” (Calzada, 2019, n.p.), there is a need for a more inclusive, broad governance design, addressing data across sectors and regulatory instruments. Given the fluidity of data, its replicability, and its own multipurpose nature, the thinking of the past will not serve public policy purposes anymore. If the same data can be used at the same time for a wide variety of purposes, both private and public, only a systematic view of all elements of governance will make optimised policy-making possible. Treating data as mere commodities or neglecting public interest has severe shortcomings, so the data governance of the future must consider that data are digital commons produced in an ecosystem that requires sustainability to thrive.

This highly complex system will only work with an elaborate set of rules; without them, current data extraction will bring an increased concentration of power and a rise in societal harms. We argue that public data commons, leveraging multi-stakeholder, collaborative governance policies, will increase data sharing while safeguarding European rights and values, in the triangle between individual rights, economic growth and innovation, and the public interest, as enshrined in European primary and secondary law.

We hope that more scholars will join the effort to study the design principles, tools, and conditions for data commons to be stewarded in the public interest, since this is the challenge of our time. It is a natural and necessary extension of existing legal protections in the digital age, establishing institutions for a European society venturing into the twenty-first century.

References

Abraham, R., Schneider, J., & vom Brocke, J. (2019). Data governance: A conceptual framework, structured review, and research agenda. International Journal of Information Management, 49, 424–438. https://doi.org/10.1016/j.ijinfomgt.2019.07.008

Acemoglu, D., Makhdoumi, A., Malekian, A., & Ozdaglar, A. (2019). Too much data: Prices and inefficiencies in data markets (Working Paper No. 26296; NBER Working Paper Series). National Bureau of Economic Research. https://doi.org/10.3386/w26296

Alemanno, A. (2018). Big data for good: Unlocking privately-held data to the benefit of the many. European Journal of Risk Regulation, 9(2), 183–191. https://doi.org/10.1017/err.2018.34

Arrieta-Ibarra, I., Goff, L., Jiménez-Hernández, D., Lanier, J., & Weyl, E. G. (2018). Should we treat data as labor? Moving beyond ‘Free’. AEA Papers and Proceedings, 108, 38–42. https://doi.org/10.1257/pandp.20181003

Asociación Nacional de Establecimientos Financieros de Crédito (ASNEF) and Federación de Comercio Electrónico y Marketing Directo (FECEMD) v. Administración del Estado (joined cases), (Court of Justice of the European Union 2011).

Barlow, J. P. (1996). A declaration of the independence of cyberspace. https://www.eff.org/cyberspace-independence

Bass, T. (2020). It’s time to think about our data as a common good. British Council. https://www.britishcouncil.org/anyone-anywhere/explore/communities-connections/rethinking-data

Bass, T., Sutherland, E., & Symons, T. (2018). Reclaiming the smart city: Personal data, trust and the new commons [Report]. Nesta. https://www.nesta.org.uk/report/reclaiming-smart-city-personal-data-trust-and-new-commons/

Bauwens, M., Kostakis, V., & Pazaitis, A. (2019). Peer to peer: The commons manifesto. University of Westminster Press.

Bauwens, M., & Niaros, V. (2017). Value in the commons economy: Developments in open and contributory value accounting. P2P Foundation.

Becker, R., Thorogood, A., Ordish, J., & Beauvais, M. J. S. (2020). COVID-19 research: Navigating the european general data protection regulation. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3593579

Belleflamme, P., & Peitz, M. (2016). Platforms and network effects (Working Paper No. 16–14). University of Mannheim, Department of Economics; RePEc IDEAS. https://ideas.repec.org/p/mnh/wpaper/41306.html

Benkler, Y. (2006). The wealth of networks: How social production transforms markets and freedom (p. 515). Yale University Press.

Beraldo, D., & Milan, S. (2019). From data politics to the contentious politics of data. Big Data & Society, 6(2). https://doi.org/10.1177/2053951719885967

Betkier, M. (2019). Privacy online, law and the effective regulation of online services (1st ed.). Intersentia. https://intersentia.com/en/effective-privacy-management-for-internet-services.html

BEUC. (2020). Digital services act (ex ante rules) and new competition tool: Response to public consultations [Position paper]. BEUC. The European Consumer Organisation. https://www.beuc.eu/publications/digital-services-act-ex-ante-rules-and-new-competition-tool-response-consultations/html

Blackman, C., & Forge, S. (2017). Data Flows – Future Scenarios [In-depth analysis / research report]. Centre for European Policy Studies. https://www.ceps.eu/ceps-publications/data-flows-future-scenarios/

Bodó, B. (2019). Was the Open Knowledge Commons Idea a Curse in Disguise? – Towards Sovereign Institutions of Knowledge. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3502119

Bollier, D., & Helfrich, S. (Eds.). (2012). The wealth of the commons: A world beyond market and state (p. 442). Levellers Press. http://wealthofthecommons.org/

Borgesius, F. Z., Gray, J., & van Eechoud, M. (2015). Open data, privacy, and fair information principles. Berkeley Technology Law Journal, 30(3), 2073–2131. https://doi.org/10.15779/Z389S18

Borrás, S., & Edler, J. (2014). The governance of change in socio-technical and innovation systems: Three pillars for a conceptual framework. In S. Borrás & J. Edler (Eds.), The governance of socio-technical systems: Explaining change (pp. 23–48). Edward Elgar Publishing.

Bundeskartellamt. (2019). Decision B6-22/16. https://www.bundeskartellamt.de/SharedDocs/Entscheidung/EN/Entscheidungen/Missbrauchsaufsicht/2019/B6-22-16.pdf?__blob=publicationFile&v=5

Calo, R. (2017). Artificial intelligence policy: A roadmap. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3015350

Calzada, I. (2019). Technological sovereignty: Protecting citizens’ digital rights in the AI-driven and post-GDPR algorithmic and city-regional european realm. Regions. https://doi.org/10.1080/13673882.2018.00001038

Camera di Commercio, Industria, Artigianato e Agricoltura di Lecce v. Salvatore Manni, C-398/15 (Court of Justice of the European Union 2017).

Carballa Smichowski, B. (2019). Alternative data governance models: Moving beyond one-size-fits-all solutions. Intereconomics, 54(4), 222–227. https://doi.org/10.1007/s10272-019-0828-x

Couldry, N., & Mejias, U. A. (2019). The costs of connection: How data is colonizing human life and appropriating it for capitalism (p. 323). Stanford University Press.

Council of the European Union. (2020). Presidency conclusions—The charter of fundamental rights in the context of artificial intelligence and digital change (Note 11481/20 FREMP 87 JAI 776). Council of the European Union. https://www.consilium.europa.eu/media/46496/st11481-en20.pdf

Coyle, D., Diepeveen, S., Wdowin, J., Kay, L., & Tennison, J. (2020). The value of data: Policy implications [Report]. Benett Institute for Public Policy, Cambridge; Open Data Institute. https://www.bennettinstitute.cam.ac.uk/media/uploads/files/Value_of_data_Policy_Implications_Report_26_Feb_ok4noWn.pdf

DAMA International. (2009). The DAMA guide to the data management body of knowledge. Technics Publications.

De Angelis, M. (2017). Omnia Sunt Communia: On the commons and the transformation to postcapitalism (p. 436). Zed Books.

Diekert, F. K. (2012). The tragedy of the commons from a game-theoretic perspective. Sustainability, 4(8), 1776–1786. https://doi.org/10.3390/su4081776

Dulong de Rosnay, M., & Stalder, F. (2020). Digital commons. Internet Policy Review, 9(4). https://doi.org/10.14763/2020.4.1530

Eschenfelder, K. R., & Johnson, A. (2014). Managing the data commons: Controlled sharing of scholarly data. Journal of the Association for Information Science and Technology, 65(9), 1757–1774. https://doi.org/10.1002/asi.23086

European Commission. (2018). Towards a common european data space (COM(2018) 232 final). European Commission. https://ec.europa.eu/digital-single-market/en/news/public-consultation-building-european-data-economy

European Commission. (2020a). Digital Services Act package: Ex ante regulatory instrument for large online platforms with significant network effects acting as gate-keepers in the European Union’s internal market (Inception Impact Assessment Ares(2020) 287 7647).

European Commission. (2020d). Proposal for a Regulation of the European Parliament and of the Council on a Single Market for Digital Services (Digital Services Act).

European Commission. (2020e). Proposal for a Regulation of the European Parliament and of the Council on contestable and fair markets in the digital sector (Digital Markets Act), COM(2020) 842 final. https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52020PC0842&from=en

European Commission. (2020b). A European strategy for Data. European Union. https://eur-lex.europa.eu/legal-content/EN/TXT/?qid=1593073685620&uri=CELEX%3A52020DC0066

European Commission. (2020c). Proposal for a Regulation of the European Parliament and of the Council on European data governance (Data Governance Act) (COM/2020/767 final). https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52020PC0767

Treaty on European Union (Consolidated Version), Treaty of Maastricht, C 325/5 Official Journal of the European Communities (2002).

European Union Agency for Fundamental Rights. (2018). Handbook on European data protection law. Publications Office of the European Union.

Federici, S. (2012). Revolution at point zero: Housework, reproduction, and feminist struggle. PM Press.

Fisher, J. B., & Fortmann, L. (2010). Governing the data commons: Policy, practice, and the advancement of science. Information and Management, 47(4), 237–245. https://doi.org/10.1016/j.im.2010.04.001

Frischmann, B. (2012). Infrastructure: The social value of shared resources. Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199895656.001.0001

Fuchs, C. (2016). Critical theory of communication. University of Westminster Press.

Fumagalli, A., Giuliani, A., Lucarelli, S., Vercellone, C., Dughera, S., & Negri, A. (2019). Cognitive capitalism, welfare and labour: The commonfare hypothesis. Routledge. https://doi.org/10.4324/9781315623320

Furman, J., Coyle, D., Fletcher, A., Marsden, P., & McAuley, D. (2019). Unlocking digital competition. Report of the digital competition expert panel [Report]. HM Treasury.

Google Spain SL, Google Inc. v. Agencia Española de Protección de Datos (AEPD), C-131/12 (Court of Justice of the European Union 2014).

Hall, W., & Pesenti, J. (2017). Growing the artificial intelligence industry in the UK. Department for Digital, Culture, Media & Sport and Department for Business, Energy & Industrial Strategy.

Hardinges, J., Wells, P., Blandford, A., Tennison, J., & Scott, A. (2019). Data Trusts: Lessons from Three Pilots [Report]. Open Data Institute. https://theodi.org/article/odi-data-trusts-report/

Hardjono, T., & Pentland, S. (2017, October 24). Open Algorithms for Identity Federation [Preprint]. ArXiv:1705.10880 [Cs]. http://arxiv.org/abs/1705.10880

Hardt, M., & Negri, A. (2017). Assembly. Oxford University Press.

Hess, C., & Ostrom, E. (2007). A framework for analyzing the knowledge commons. In C. Hess & E. Ostrom (Eds.), Understanding knowledge as a commons: From theory to practice (pp. 41–81). MIT Press. https://ieeexplore.ieee.org/servlet/opac?bknumber=6267279

High-Level Expert Group on Business-to-Government Data Sharing. (2020). Towards a European strategy on business-to-government data sharing for the public interest [Final report]. European Union. https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=64954

Hofheinz, P., & Osimo, D. (2017). Making Europe a Data Economy: A new framework for free movement of data in the digital age [Policy Brief]. The Lisbon Council. https://lisboncouncil.net/wp-content/uploads/2020/08/LISBON-COUNCIL-Making-Europe-A-Data-Economy.pdf

IDC & Open Evidence. (2017). Final report: European data market (Study SMART 2013/0063). European Commission (Directorate-General for Communications Networks, Content and Technology). https://digital-strategy.ec.europa.eu/en/library/final-results-european-data-market-study-measuring-size-and-trends-eu-data-economy

Iliadis, A., & Russo, F. (2016). Critical data studies: An introduction. Big Data & Society, 3(2). https://doi.org/10.1177/2053951716674238

Isin, E. F., & Ruppert, E. (2015). Being Digital Citizens. Rowman & Littlefield.

Janssen, H., Cobbe, J., & Singh, J. (2020). Personal information management systems: A user-centric privacy utopia? Internet Policy Review, 9(4). https://doi.org/10.14763/2020.4.1536

Jemielniak, D., & Przegalinska, A. (2020). Collaborative society (p. 256). MIT Press.

Jessop, B. (2007). Knowledge as a fictitious commodity: Insights and limits of a polanyian perspective. In A. Buğra & K. Ağartan (Eds.), Reading karl polanyi for the twenty-first century (pp. 115–133). Palgrave Macmillan US. https://doi.org/10.1057/9780230607187_7

Käll, J. (2020). The materiality of data as property. Harvard International Law Journal, 61. https://harvardilj.org/2020/04/the-materiality-of-data-as-property/

Kenney, M., Zysman, J., & Bearson, D. (2020). What polanyi teaches us about the platform economy and structural change. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3678967

Kerber, W. (2017). Rights on data: The EU communication “Building a european data economy” from an economic perspective. In S. Lohsse (Ed.), Trading data in the digital economy: Legal concepts and tools. Hart Publishing.

Kerber, W. (2020). From (horizontal and sectoral) data access solutions towards data governance systems. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3681263

Kerry, C. F., & Morris Jr, J. B. (2019, June 26). Why data ownership is the wrong approach to protecting privacy [Blog post]. Brookings Techtank. https://www.brookings.edu/blog/techtank/2019/06/26/why-data-ownership-is-the-wrong-approach-to-protecting-privacy/

Kitchin, R., & Lauriault, T. P. (2014). Towards critical data studies: Charting and unpacking data assemblages and their work (Working Paper No. 2; The Programmable City). Maynooth University.

Kostakis, V., & Bauwens, M. (2014). Network society and future scenarios for a collaborative economy (p. 87). Springer. https://doi.org/10.1057/9781137406897

Kostyuk, N. (2015, February). The digital prisoner’s dilemma: Challenges and opportunities for cooperation. 2013 World Cyberspace Cooperation Summit IV, WCC4 2013. https://doi.org/10.1109/WCS.2013.7050508

Levy, K. E., & Johns, D. M. (2016). When open data is a Trojan Horse: The weaponization of transparency in science and governance. Big Data & Society, 3(1). https://doi.org/10.1177/2053951715621568

Ma, Y., Lan, J., Thornton, T., Mangalagiu, D., & Zhu, D. (2018). Challenges of collaborative governance in the sharing economy: The case of free-floating bike sharing in Shanghai. Journal of Cleaner Production, 197, 356–365. https://doi.org/10.1016/j.jclepro.2018.06.213

Mazzucato, M. (2018). Mission-oriented innovation policies: Challenges and opportunities. Industrial and Corporate Change, 27(5), 803–815. https://doi.org/10.1093/icc/dty034

Mazzucato, M. (2019). The value of everything: Making and taking in the global economy. Penguin.

Micheli, M., Ponti, M., Craglia, M., & Berti Suman, A. (2020). Emerging models of data governance in the age of datafication. Big Data & Society, 7(2). https://doi.org/10.1177/2053951720948087

Mitchell, T., & Brynjolfsson, E. (2017). Track how technology is transforming work. Nature, 544(7650), 290–292. https://doi.org/10.1038/544290a

Morozov, E. (2015, January). Socialize the Data Centres! New Left Review, 91. https://newleftreview.org/issues/II91/articles/evgeny-morozov-socialize-the-data-centres

Morozov, E., & Bria, F. (2018). Rethinking the smart city: Democratizing urban technology (No. 5; City Series). Rosa Luxemburg Stifung, New York Office. https://www.rosalux.de/fileadmin/rls_uploads/pdfs/sonst_publikationen/rethinking_the_smart_city.pdf

Mulgan, G., & Straub, V. (2019). The new ecosystem of trust: How data trusts, collaboratives and coops can help govern data for the maximum public benefit [Paper]. Nesta. https://www.nesta.org.uk/blog/new-ecosystem-trust/

Nemitz, P. (2018). Constitutional democracy and technology in the age of artificial intelligence. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133). https://doi.org/10.1098/rsta.2018.0089

Nemitz, P., & Pfeffer, M. (2020). Prinzip mensch: Macht, freiheit und demokratie im zeitalter der künstlichen intelligenz. Dietz Verlag.

Nissenbaum, H. (2010). Privacy in Context: Technology, Policy, and the Integrity of Social Life. Stanford University Press.

Ó Fathaigh, R., & van Hoboken, J. (2019). European Regulation of Smartphone Ecosystems. European Data Protection Law Review, 5(4), 476–491. https://doi.org/10.21552/edpl/2019/4/6

OECD. (2019). Going digital: Shaping policies, improving lives. OECD Publishing. https://doi.org/10.1787/9789264312012-en

O’Hara, K. (2019). Data trusts. Ethics, architecture and governance for trustworthy data stewardship (White Paper No. 1). Web Science Institute. https://eprints.soton.ac.uk/428276/1/WSI_White_Paper_1.pdf

Ostrom, E. (1990). Governing the Commons: The Evolution of Institutions for Collective Action (p. 280). Cambridge University Press. https://doi.org/10.1017/CBO9780511807763

Papadimitropoulos, E. (2018). Commons-based peer production in the work of Yochai Benkler. TripleC, 16(2), 835–856. https://doi.org/10.31269/triplec.v16i2.1009

Poullet, Y. (2020). From open data to reverse PSI – A new European policy facing GDPR (No. 11; European Public Mosaic). Public Administration School of Catalonia. http://www.crid.be/pdf/public/8586.pdf

Powles, J., & Hodson, H. (2017). Google DeepMind and healthcare in an age of algorithms. Health and Technology, 7(4), 351–367. https://doi.org/10.1007/s12553-017-0179-1

Prainsack, B. (2019). Logged out: Ownership, exclusion and public value in the digital data and information commons. Big Data & Society, 6(1). https://doi.org/10.1177/2053951719829773

Productores de Música de España (Promusicae) v. Telefónica de España SAU, C-275/06 (Court of Justice of the European Union 2008).

Purtova, N. (2017). Health Data for Common Good: Defining the Boundaries and Social Dilemmas of Data Commons. In S. Adams, N. Purtova, & R. Leenes (Eds.), Under Observation: The Interplay Between eHealth and Surveillance (Vol. 35, pp. 177–210). Springer International Publishing. https://doi.org/10.1007/978-3-319-48342-9_10

Purtova, N. (2021, July). Important questions to answer before talking of ‘data commons’ [Written input]. Socializing Data Value: Reflections on the State of Play [Roundtable], IT for Change, India office. https://itforchange.net/sites/default/files/2021-06/Nadezhda-Purtova-Socializing-Data-Value-Provocation.pdf

Rossi, A. (2018). How the Snowden revelations saved the EU General Data Protection Regulation. International Spectator, 53(4), 95–111. https://doi.org/10.1080/03932729.2018.1532705

Ruhaak, A. (2020, February 13). When one affects many: The case for collective consent [Essay]. Mozilla Foundation. https://foundation.mozilla.org/en/blog/when-one-affects-many-case-collective-consent/

Sadowski, J., Viljoen, S., & Whittaker, M. (2021). Everyone should decide how their digital data are used—Not just tech companies. Nature, 595(7866), 169–171. https://doi.org/10.1038/d41586-021-01812-3

Sandoval, M. (2020). Entrepreneurial activism? Platform cooperativism between subversion and co-optation. Critical Sociology, 46(6), 801–817. https://doi.org/10.1177/0896920519870577

Schiller, D. (1988). How to think about information. In V. Mosco & J. Wasko (Eds.), The political economy of information. University of Wisconsin Press.

Susha, I., & Gil-Garcia, J. R. (2019). A Collaborative Governance Approach to Partnerships Addressing Public Problems with Private Data. Proceedings of the 52nd Hawaii International Conference on System Sciences, 2892–2901. https://doi.org/10.24251/HICSS.2019.350

Taylor, L. (2017). What is data justice? The case for connecting digital rights and freedoms globally. Big Data & Society, 4(2). https://doi.org/10.1177/2053951717736335

Verhulst, S. G., Zahuranec, A. J., Young, A., & Winowatan, M. (2020). Wanted: Data stewards. (Re-)defining the roles and responsibilities of data stewards for an age of data collaboration [Position paper]. TheGovLab. https://thegovlab.org/static/files/publications/wanted-data-stewards.pdf

Von Hannover v. Germany (no. 2), Nos. 40660 (European Court of Human Rights 2012).

Weber, S. (2017). Data, development, and growth. Business and Politics, 19(3), 397–423. https://doi.org/10.1017/bap.2017.3

Wiewiórowski, W. (2019, December). Sharing is caring? That depends… [Blog post]. European Data Protection Supervisor. https://edps.europa.eu/press-publications/press-news/blog/sharing-caring-depends_en

Winter, J. S., & Davidson, E. (2019). Big data governance of personal health information and challenges to contextual integrity. Information Society, 35(1), 36–51. https://doi.org/10.1080/01972243.2018.1542648

Yilma, K. (2017). Digital privacy and virtues of multilateral digital constitutionalism—Preliminary thoughts. International Journal of Law and Information Technology, 25(2), 115–138. https://doi.org/10.1093/ijlit/eax001

Zoboli, L. (2020). Fueling the european digital economy: A regulatory assessment of B2B data sharing. European Business Law Review, Forthcoming. https://doi.org/10.2139/ssrn.3521194

Zuboff, S. (2015). Big other: Surveillance capitalism and the prospects of an information civilization. Journal of Information Technology, 30, 75–89. https://doi.org/10.1057/jit.2015.5

Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. Profile Books.

Zygmuntowski, J. J. (2018). Commoning in the digital era: Platform cooperativism as a counter to cognitive capitalism. Praktyka Teoretyczna Numer, 1(27). https://doi.org/10.14746/prt.2018.1.7

Zygmuntowski, J. J. (2020). Kapitalizm sieci. Roz:Ruch Publisher.

Footnotes

1. In this paper, the term data includes both of these categories, with differentiation as necessary.

2. Understood as a society in which the public interest guides the usage of data and allows for the emergence of equal digital citizens. See Isin & Ruppert, 2015.

3. For example, compare initiatives such as the Charter of Digital Rights championed by the Portuguese Presidency in the Council of the EU to the Charter of Fundamental Rights of the European Union.

4. See the Presidency Conclusions on the Charter of Fundamental Rights in the context of Artificial Intelligence and Digital Change (Council of the European Union, 2020, p. 3): “We want to ensure that the design, development, deployment and use of new technologies uphold and promote our common values and the fundamental rights guaranteed by the EU Charter of Fundamental Rights”.

5. Especially with the emergence of such trends as collaborative consumption, peer production, platform cooperativism and crowd sharing, as well as the introduction of participatory governance schemes in municipalities across the world.

6. We adopt different terms, closer to existing literature than those found in Micheli et al. (2020). We use data collaboratives instead of data sharing pools, as found in e.g., Verhulst et al. (2020). Moreover, we opt for public data commons in place of public data trusts because we find that in terms of their properties data are commons; see further discussion.

7. The term sovereignty here denotes personal autonomy rather than the sovereignty of a state actor. Self-sovereign identity is a notion traced back to libertarian and cypherpunk ideas of the early internet era; it thus refers to independence and self-determination outside the state (Barlow, 1996).

Beyond the individual: governing AI’s societal harm

This paper is part of Governing “European values” inside data flows, a special issue of Internet Policy Review guest-edited by Kristina Irion, Mira Burri, Ans Kolk, Stefania Milan.

Introduction

Artificial Intelligence (AI), an umbrella term for a range of technologies that are considered to demonstrate ‘intelligent’ behaviour, plays an increasingly important role in all domains of our lives. A distinction is often made between reasoning-based or code-driven AI on the one hand, and data-driven or learning-based AI on the other hand (High-Level Expert Group on AI, 2019a). The former covers techniques that rely primarily on the codification of symbols and rules, based on which the system ‘reasons’ (using a top-down approach to design the system’s behaviour), whereas the latter covers techniques that rely primarily on large amounts of data, based on which the system ‘learns’ (using a bottom-up approach to design the system’s behaviour) (Hildebrandt, 2018; Dignum, 2019). The distinction between the two should, however, not be seen as strict; models can be hybrid and incorporate elements of both techniques. In this paper, the focus lies on data-driven AI systems, given their reliance on data flows.

Spurred by increased computing capacity and data availability, AI systems can be deployed in a manner that generates significant benefits to individuals, groups and society at large. At the same time, they also raise the possibility of significant harm, for instance by breaching fundamental rights or causing other adverse effects (O’Neil, 2017; Russell, 2019; Yeung, 2019b; Solow-Niederman, 2020; Muller, 2020; Gebru, 2020).

The question can be asked to what extent the challenges raised by AI are new or a mere reiteration of the challenges raised by other new technologies. While AI is certainly not unique in terms of the risks it entails, as argued elsewhere, its most promising features are also liable to exacerbate these risks (Smuha, 2021). These features include, amongst others, AI systems’ self-learning ability and hence potential unpredictability, the vast scale and speed at which they can operate, their ability to distil information from data that may escape the human eye, the opportunity they provide to delegate human authority and control over the execution of—sometimes highly sensitive—decisions and tasks, as well as their opaque decision-making processes, which render it difficult to assess and evaluate compliance with legislation (Mittelstadt et al., 2016; O’Neil, 2017; Solow-Niederman, 2019).

An increasing number of actors—including researchers, journalists, private companies, public entities and civil society organisations—are trying to map how the development and use of AI can cause harm, for instance by breaching fundamental rights or causing other adverse effects (Zuiderveen Borgesius, 2018; Crawford et al., 2019; Yeung et al., 2020; CAHAI, 2020; European Union Agency for Fundamental Rights, 2020; Hao, 2021). Furthermore, across the world, regulators are starting to assess the extent to which existing laws are able to counter these harms, or whether new regulatory measures may be needed to secure protection therefrom (Council of Europe, 2019; UNESCO, 2020). The European Commission, for instance, is currently conducting such an assessment as regards the EU legal order (European Commission, 2020a; 2020c), and recently proposed a new AI-specific regulation (European Commission, 2021).

The legal assessments currently undertaken, and the risk analysis related thereto, focus primarily on AI’s adverse impact on individuals and—to a lesser extent—on specific groups or collectives of individuals (Mittelstadt, 2017; Taylor et al., 2017). The use of AI systems can, however, also cause societal harm, which can be distinguished from—and which can transcend—individual harm and collective harm. The fact that certain uses of AI systems, for instance, risk harming the democratic process, eroding the rule of law or exacerbating inequality goes beyond the concern of (the sum of) individuals and affects society at large (Bayamlıoğlu & Leenes, 2018; Zuiderveen Borgesius et al., 2018; Brkan, 2019).

While the societal impact of AI systems is increasingly discussed—particularly under the influence of STS studies—AI-enabled societal harm has so far been less examined from a legal perspective. Such examination is more difficult to conduct, as the contours of AI’s impact on societal interests are less tangible and hence more difficult to conceptualise in legal terms (Van der Sloot, 2017; Yeung, 2019a). As a consequence, policymakers risk making an incomplete analysis of the legal gaps they should tackle to secure comprehensive protection against AI’s adverse effects. In addition, by overlooking this societal dimension, the legal measures they may propose to address gaps in the legal framework can likewise prove inadequate.

This risk is particularly salient given the predominantly individualistic focus of the current legal framework, in areas such as data protection law, but also procedural law more generally (van der Sloot & van Schendel, 2021). Indeed, the manner in which EU law currently addresses AI-related harm primarily hinges on private enforcement, relying on individuals to challenge potentially harmful practices. These challenges can in principle only be initiated by individuals able to establish the infringement of an individual right—such as the right to data protection or non-discrimination—or another directly suffered, demonstrable harm to their private interests. Societal harm is, however, not always reducible to instances of individual harm (Kutz, 2000). Moreover, in some cases, even when both types of harm do overlap, individual harm may be negligible or indiscernible, and hence an insufficient ground to challenge the harmful practice. The following conundrum hence arises: how can we reconcile the need to protect societal interests adversely impacted by AI with a legal system that primarily focuses on individual rights and remedies?

This conundrum is not unique to the AI-context. An analogy can, for example, be drawn with environmental harm, which likewise encompasses a societal dimension that cannot always be reduced to demonstrable individual harm. This resulted in the creation of new legal mechanisms to safeguard environmental interests at the EU level (van Calster & Reins, 2017). Accordingly, to secure protection against AI’s societal harms, a shift from an individualistic approach towards one that also embodies a societal perspective is warranted. This shift first requires a legal conceptualisation of AI’s societal harms, based on which gaps in protection can be identified and addressed. The importance of considering the societal adverse impact of the use of AI and other data-driven technologies was already stressed by other scholars (Hildebrandt, 2018; Yeung, 2019a; Cohen, 2019; Véliz, 2020; Viljoen, 2020; van der Sloot & van Schendel, 2021). In this paper, I build thereon with the aim of clarifying the legal protection gap in EU law and identifying mechanisms that EU policymakers can consider when tackling this issue.

To this end, I start by distinguishing the three abovementioned types of harm that AI systems can generate (2). Although the societal harms raised by AI can be very diverse, I identify some common features through which they can be conceptualised (3). Next, I venture into a parallel with a legal domain specifically aimed at protecting such an interest: EU environmental law (4). Based on the legal mechanisms with a distinct societal dimension adopted under environmental law, I draw a number of lessons for EU policymaking in the context of AI (5). Finally, I briefly evaluate the European Commission’s proposed AI regulation in light of those lessons (6), before providing concluding remarks (7).

2. Individual, collective and societal harm

For the purpose of this paper, harm is conceptualised as a wrongful setback to or thwarting of an interest (Feinberg, 1984), under which I also include harm in the non-physical sense, such as the breach of a right. 1 On this basis, I distinguish three types of interests—and hence three types of harm—that should be considered in the context of AI governance: individual harm, collective harm and societal harm. These harms are connected to each other, yet they can also be assessed in their own right. Evidently, the use of AI systems can also generate individual, collective and societal benefits, by positively impacting these respective underlying interests. In this paper, however, I focus on AI’s potential harms rather than its benefits. As the terms individual harm, collective harm and societal harm have been used in different ways by different authors, in what follows I provide a description of my understanding of each.

Individual harm occurs when one or more interests of an individual are wrongfully thwarted. 2 This is the case, for instance, when the use of a biased facial recognition system—whether in the context of law enforcement or in other domains—leads to wrongful discrimination against people of colour. Of course, the thwarting of such interest does not occur in isolation from a social, historical and political context (Winner, 1980; Simon, 1995)—after all, AI systems are socio-technical systems (Hasselbalch, 2019; High-Level Expert Group on AI, 2019b; Theodorou & Dignum, 2020; Ala-Pietilä & Smuha, 2021). Nevertheless, in this scenario, at the receiving end of the harm stands an identifiable individual.

Collective harm occurs when one or more interests of a collective or group of individuals are wrongfully thwarted. Just as a collective consists of the sum of individuals, so does this harm consist of the sum of harms suffered by individual members of the collective. The use of the abovementioned biased facial recognition system, for instance, can give rise to collective harm, in so far as it thwarts the interest of a specific collective of people—namely people of colour who are subjected to the AI system—not to be discriminated against. The collective dimension thus arises from the accumulation of similarly thwarted individual interests. The harmed individuals can be complete strangers to each other (like in the above example, where only their skin colour connects them) or they can be part of an (in)formal group.

Societal harm occurs when one or more interests of society are wrongfully thwarted. In contrast with the above, societal harm is thus not concerned with the interests of a particular individual or the interests shared by a collective of individuals. Instead, it concerns harm to an interest held by society at large, going over and above the sum of individual interests. This can be illustrated with a few examples.

3. AI and societal harm

3.1 Three examples: impact on equality, democracy and the rule of law

Let’s start by revisiting the above example of the facial recognition system. First, by making use of such a biased system and wrongfully thwarting the interest of an individual of colour, the system’s deployer can cause individual harm. The accumulation, at the collective level, of the harm done to individuals of colour entails collective harm. Yet a third type of harm is at play. Whether or not individuals are people of colour, and whether or not they are subjected to the particular AI system, they share a higher interest to live in a society that does not discriminate against people based on their skin colour and that treats its citizens equally. That interest is different from the interest not to be discriminated against, and can hence be distinguished from the individual or collective harm done to those directly subjected to the AI system. In other words, societal harm may well include instances of individual and collective harm, but has an impact beyond them. It can hence be assessed as a sui generis type of harm. 3

Besides the interest of equality, the deployment of AI systems can adversely impact a range of other societal interests. Consider this second example. AI systems can be used to collect and analyse personal data for profiling purposes, and subsequently subject individuals to targeted manipulation (Zuiderveen Borgesius et al., 2018; Brkan, 2019). Scandals like Facebook/Cambridge Analytica made it painfully clear that psychographic targeting can be deployed to shape political opinions with the aim of influencing election outcomes (Isaak & Hanna, 2018). Since individuals—in their capacity as product or service users—continue to share ever more data about themselves in different contexts, data flows steadily increase. Accordingly, these manipulative practices can occur at an ever-wider scale and can yield ever more effective results. The potential harm that can ensue—whether it is election interference, hate-mongering, or societal polarisation—is not limited to the individual who is directly manipulated, but indirectly affects the interests of society at large.

Of course, this example does not occur within a legal vacuum. While in the example above the right to non-discrimination might offer a certain level of solace, in this example some recourse can be found in data protection laws. However, as evoked above, these rights are primarily focused on preventing individual harm. In the case of the EU General Data Protection Regulation (GDPR), individuals are given ‘control’ of their personal data by being equipped with a mix of rights, including, for instance, the right to access their personal data and obtain information on the processing thereof, the right to have their data rectified and erased, and the right not to be subject to a decision based solely on automated processing. Their personal data can only be processed where an appropriate legal basis exists, such as their consent (Cate & Mayer-Schonberger, 2013; Van Hoboken, 2019; Tamo-Larrieux et al., 2020). And while the GDPR not only enshrines individual rights but also imposes certain obligations directly upon data processors and controllers, individuals can, to a certain extent, ‘waive’ such protection through their consent.

This raises two issues. First, the presence of a proper legal basis for data gathering and analysis is often disputed – even when it concerns consent. Despite legal obligations to this end, privacy notices can be anything but reader-friendly or effective, and may leave individuals unaware of what precisely they are consenting to—and which of the endless list of third-party vendors can use their data (Schermer et al., 2014; Van Alsenoy et al., 2014; Zuiderveen Borgesius, 2015; Barrett, 2018; Bietti, 2020). Second, even assuming that the individual carefully reads, understands, and consents to the use of her data, this still leaves uninvolved and unprotected all those who may be indirectly harmed by the subsequent practices enabled by that data, and hence leaves unaddressed the potential societal harm—such as the breach of integrity of the democratic process. As van der Sloot and van Schendel (2021) state: a legal regime that addresses incidental data harms only on an individual level runs the risk of leaving unaddressed the underlying causes, allowing structural problems to persist. Furthermore, besides harm at the societal level, it can also engender new types of indirect individual or collective harms. In this regard, Viljoen (2020) rightly emphasises that an overly individualistic view of the problem fails to acknowledge data’s relationality, and the way in which data obtained from individual A can subsequently be used to target individual B even without having obtained similar data from the latter.

The problem, however, goes further still. Consider a third example. AI systems can be used in the context of law enforcement, public administration or the judicial system to assist public officials in their decision-making processes (Kim et al., 2014; Liu et al., 2019; Zalnieriute et al., 2019; AlgorithmWatch, 2020). These systems embody norms that are guided by the legal rules applicable in a specific situation (e.g., the allocation of welfare benefits, the detection of tax fraud, or the assessment of the risk of recidivism). The outcome of these decisions, which are liable to significantly affect legal subjects, hence depends on the way in which legal rules are embedded in the algorithmic system (Hildebrandt, 2015; Binns, 2018). In the case of learning-based AI systems, this process hinges on the system’s design choices and the data it is being fed—choices that often remain invisible. Moreover, the internal decision-making process of learning-based systems is often non-transparent and non-explainable, which renders the decisions more difficult to explain and to contest (Pasquale, 2015; Ananny & Crawford, 2016; Bayamlıoğlu, 2018).

In addition, the public officials in charge of taking the final decision, and accountable for its legality, may lack the knowledge to assess the justifiability of the AI system’s outcome (Brownsword, 2016; Hildebrandt, 2018; Bayamlıoğlu & Leenes, 2018). This risks diminishing the accountability of public decision-makers for the legality of such decisions, and thereby undermining the broader societal interest of the rule of law (Binns, 2018; Zalnieriute et al., 2019; Buchholtz, 2020). The fact that, today, the outcomes generated by public AI systems are often merely informative or suggestive rather than decisive is but cold comfort amidst substantial backlogs, which reduce a thorough review of the decision by public officials to a mere source of delay.

In this example too, different types of harm can be distinguished, which are not entirely reducible to each other. An individual subjected to an AI-informed decision taken by a public actor can suffer individual harm for a range of reasons. The correlations drawn by the AI system can be inapplicable, biased or incorrect, but it is also possible that the legal rule applicable to the situation was erroneously embedded in the system. Assuming the individual’s awareness of the problem, and depending on the context, she may be able to invoke a right to challenge the AI-informed decision—such as the right to good administration, the right to a fair trial, or the right to privacy (as in the Dutch SyRI case 4 for instance). Yet the above-described impact on the rule of law also leads to societal harm, which is firmly connected to, yet different from, potential individual harm. It also affects all those who are not directly interacting with or subjected to the decision-making process of the specific AI system, and hence goes over and beyond any (accumulative) individual harm.

In sum, an overly individualistic focus on the harm raised by AI risks overlooking its societal dimension, which should equally be tackled. Importantly, the above should not be read as an affirmation that all individual and collective harms raised by AI can already be tackled by mere reliance on individual rights. Also with regard to these harms, legal gaps in protection exist and merit being addressed (European Commission, 2020w; CAHAI, 2020; Smuha, 2020). In this article, however, the focus lies on legal protection against AI’s societal harms.

3.2 AI’s societal harm: common features and concerns

The three examples referred to above—AI-based facial recognition, AI-based voter manipulation, and AI-based public decision-making—concern three different AI applications impacting (at least) three different societal interests: equality, democracy and the rule of law. It is hence not possible, nor desirable, to reduce AI’s potential for societal harm to a monolithic concern. Nevertheless, an examination of the issues raised by such harm reveals some commonalities that are useful for the purpose of a legal conceptualisation.

First, in each case, a particular individual harm occurs and is typically safeguarded by an accompanying individual remedy for protection against such harm. In the first example, an individual’s right to non-discrimination is at play; in the second, an individual’s right to data protection; in the third, an individual’s right to good administration or a fair trial. Yet the potential breach of the right can often only be invoked by the individual concerned, not by a third party. 5 More importantly, the individual harm will often remain unnoticed given the opacity of the way in which AI systems are designed and operate. This opacity is often also accompanied by a lack of transparency about how AI systems are used by the product or service provider. As a consequence, it is not only difficult to be aware of the harm, but it may be even more difficult to demonstrate it and establish a causal link. I call this the knowledge gap problem. Moreover, even if there is awareness, the individual harm may be perceived as insignificant, or in any case as too small in proportion to the costs that may be involved in challenging it. Hence, the individual is unlikely to be prompted to challenge the problematic practice. I call this the threshold problem. Furthermore, as noted above, an individual can also consent to the practice or otherwise acquiesce, hence seemingly waiving the opportunity to invoke a protective right in exchange for perceived personal gains—thereby also undermining actions by third parties to invoke this right on their behalf. I call this the egocentrism problem.

Second, in each case, in addition to individual harm, there is also an instance of societal harm, as adverse effects occur not only to the individuals directly subjected to the AI system, but to society at large too. In the third example, the integrity of the justice system is an interest shared not only by individuals appearing before a court, but also by those who never set foot therein. This societal harm concerns a different interest than the individual harm. The harm suffered by an individual who faces discrimination, manipulation or an unjust judicial or administrative decision is different from the societal harm of the unequal treatment of citizens, electoral interference, or erosion of the rule of law. Nevertheless, the current legal framework primarily focuses on legal remedies for the individuals directly subjected to the practice, rather than for ‘society’.

Third, unlike individual or collective harm, societal harm will often manifest itself only at a later stage, namely over the longer term rather than in the immediate period following the AI system’s use. This gap in time—especially when combined with the opacity of the AI system’s functioning and use—not only complicates the identification of the harm itself, but also of the causal link between the harm and the AI-related practice. An additional obstacle in this regard concerns the ‘virtual’ nature of the harm. In contrast with, for instance, the harm caused by a massive oil leak or a burning forest, the societal harm that can arise from the use of certain AI applications is not as tangible, visible, measurable or predictable—which renders the demonstration of both individual and collective harm challenging.

Fourth, societal harm typically does not arise from a single occurrence of the problematic AI practice. Instead, it is often the widespread, repetitive or accumulative character of the practice that can render it harmful from a societal perspective (Kernohan, 1993). This raises further obstacles. For instance, it may be challenging to instil a sense of responsibility in those who instigate a certain practice if it is only the accumulative effect of their action, together with the actions of other actors, that causes the harm (Kutz, 2000). Moreover, this phenomenon also gives rise to the difficulty of the ‘many hands’ problem (Thompson, 1980; van de Poel et al., 2012; Yeung, 2019a). Besides the accumulative effects of a certain type of conduct, the opacity of different types of (interacting) conduct by many hands can also contribute to the harm, and to the difficulty of identifying and addressing it (Nissenbaum, 1996).

In the context of the widespread use of AI systems, not one but three levels of this problem come to mind. First, at the level of the AI system, multiple components developed and operated by multiple actors can interact with each other and cause harm, without it being clear which component or interaction is the direct contributor to that harm. Second, at the level of the organisation or institution deploying the system (whether in the public or private sector), different individuals may contribute to a process in many different ways, whereby the resulting practice can cause harm. Last, this issue manifests itself at the level of the network of organisations and institutions that deploy the problematic AI application. The scale of these networks and their potential interconnectivity and interplay render the identification of the problematic cause virtually impossible—even more so if there isn’t necessarily one problematic cause. A broadened perspective of AI-enabled harm is hence required not only on the suffering side, but also on the causing side of the harm.

Finally, as already alluded to above, the societal harm that can arise from the mentioned AI applications is not easily expressible in current human rights discourse (Yeung, 2019a). While a particular human right may well be impacted, a one-on-one relationship between such a right and the societal harm in question can be lacking. This is related to the fact that many human rights embody an individualistic perspective of harm. As Karen Yeung clarifies, however, what is at stake is the destabilisation “of the social and moral foundations for flourishing democratic societies” which enable the protection of human rights and freedoms in the first place (Yeung, 2019a). Hence, our intuitive recourse to the legal remedies provided by human rights law in the context of AI’s harms will only offer partial solace. Furthermore, even this partial solace is on shaky grounds, in light of what I denoted above as the knowledge gap problem, the threshold problem, and the egocentrism problem.

By no means does this imply that human rights have become obsolete in this field—quite the contrary. As argued elsewhere, an overly individualistic interpretation of human rights overlooks the fact that most human rights also have a clear societal dimension, not least because their protection can be considered a societal good (Smuha, 2020). Moreover, in some cases, individual human rights have been successfully invoked to tackle societal challenges, such as in the 2019 Dutch Urgenda case 6. However, since in these cases a demonstration of individual harm is typically still required, this will not be a uniformly accepted or comprehensive solution. Accordingly, while human rights remain both an essential normative framework and an important legal safeguard against AI-enabled harm, relying on their private enforcement may be insufficient (van der Sloot & van Schendel, 2021). Policymakers assessing AI’s risks should hence broaden their perspective as regards both the analysis of harms and the analysis of legal gaps and remedies. While this shift is starting to take place as regards privacy—which an increasing number of scholars have convincingly argued should be conceptualised in broader terms, as not just an individual but also a societal good (Lane et al., 2014; Cohen, 2019; Véliz, 2020; Bamberger & Mayse, 2020; Viljoen, 2020)—it must be applied to the wider range of societal interests that can be adversely impacted by the use of AI.

This broadened analysis is not only needed to better understand AI’s impact on society. Instead, policymakers should also draw thereon when assessing the legal gaps in the current legislative framework and identifying measures to tackle those gaps. Given its specific features, addressing societal harm may require different measures than addressing individual or collective harm. Legal remedies that hinge solely on (collectives of) individuals who may or may not be able to challenge potential right infringements will not always provide sufficient protection when societal interests are at stake, hence leading to a legal protection gap. Countering societal harm will also require ‘societal’ means of intervention to safeguard the underlying societal infrastructure enabling human rights, democracy and the rule of law. Given the EU’s intentions to address some of the adverse effects generated by AI, in what follows, I take a closer look at what role EU law can play in tackling this legal protection gap.

4. Countering societal harm: environmental law as a case study

To answer the above question, I argue that inspiration can be drawn from a legal domain that is specifically aimed at protecting a societal interest: (EU) environmental law. While a polluting practice or activity—whether undertaken by a public or private actor—is liable to cause individual harm, the interest to secure a clean and healthy environment is one that is shared by society at large. An individual living in city A might be unaware of, choose to ignore, or be indifferent to the polluting practice. Yet the adverse effects that will ensue from the practice are likely to go over and above the individual or collective level, as it can also give rise to harm for people living in city B, country C and region D, and for future generations.

Since a number of similarities exist between environmental harm and the societal harms described above 7, the solutions provided by EU environmental law could be explored by analogy. Indeed, just as in the examples of societal harm raised by the use of AI, a period of time may lapse between a polluting practice and the tangible manifestation of the environmental harm. Moreover, in many cases, a single instance of the polluting practice can be relatively benign, yet it may be the accumulative or systemic nature of the practice that causes the environmental harm at the societal level. In addition, access to an effective legal remedy by individuals may be challenging when relying on individual rights, especially when individual harm is insignificant, difficult to demonstrate or unknown.

Early on, however, environmental harm was recognised as affecting a societal interest rather than a (mere) individual one. In other words: since the protection of the environment is important for society at large, it should not solely hinge upon the ability or willingness of individuals to go to court. As a consequence, mechanisms have gradually been adopted in this area to enable ‘societal’ types of intervention. 8 These mechanisms typically do not require the demonstration of individual harm or the breach of an individual’s right, but can be invoked in the interest of society. Moreover, given the acknowledgment that the interest is of societal importance, the protection provided is often focused on harm prevention ex ante rather than mere mitigation or redress ex post. Accordingly, a closer look at some of the mechanisms adopted in the context of environmental law could concretise what a shift in mindset from a merely individualistic to a societal-interest perspective looks like.

Just like matters relating to the internal market, the EU’s environmental policy is a shared competence between the Union and member states (O’Gorman, 2013; van Calster & Reins, 2017). It addresses a diverse range of issues falling under environmental protection, from water pollution and sustainable energy to air quality. The first EU environmental intervention was primarily driven by an internal market logic: if member states maintain different environmental standards for products and services, this can constitute an obstacle to trade within the EU single market. Furthermore, the transboundary nature of the harm heightened the need for cross-border action. Interestingly for our analogy, at that time, EU primary law did not contain an explicit legal basis for environmental policy and the Union instead relied on its general ‘internal market functioning’ competence (Langlet & Mahmoudi, 2016). Such a legal basis was only created at a later stage, in the margin of subsequent Treaty revisions (currently reflected in Article 11 and Articles 191-193 TFEU). EU environmental law is also heavily influenced by international law. 9

My aim here is not to comprehensively discuss the numerous environmental directives and regulations that the EU adopted over the past decades, nor to argue that environmental law is a perfect model for (AI) regulation. Instead, I merely want to draw attention to the fact that the understanding of a problem as one that affects a societal interest also shapes its legal mechanisms. Below, I discuss three examples of protection mechanisms that merit attention.

First, rather than relying on private enforcement only 10, environmental law has also enshrined several public oversight mechanisms to ensure that the actions of public and private actors alike do not adversely impact the environment. Such public oversight has for instance taken the shape of verifying compliance with the environmental standards adopted over the years through various legislative instruments. One of the core merits of these standards concerns the creation of metrics and the benchmarking of environmental performance, thereby ensuring common impact-measurement methods across Europe. Environmental aspects have also been integrated into European technical standardisation more broadly (European Commission, 2004). In addition, an important oversight mechanism was introduced through the obligation to conduct an environmental impact assessment (EIA) 11. Such an assessment must be undertaken for all programmes and projects that are likely to have significant effects on the environment and is meant to ensure that the implications for the societal interest of a clean environment are taken into account before a decision is taken (Langlet & Mahmoudi, 2016). An essential element of the assessment is public participation. This participation does not hinge on the risk of individual harm, but is meant to protect the interests of society at large. Besides raising broader awareness of the issues at stake, this process also enhances both the transparency and accountability of decision-making processes (O’Faircheallaigh, 2010).

Second, public monitoring mechanisms have been put in place, for instance through the establishment of the European Environment Agency (EEA). The agency’s task is to provide independent information on the environment to all those developing, adopting, implementing and evaluating environmental policy—as well as to society at large. It works closely together with national environmental agencies and environmental ministries, and with the European environment information and observation network (Eionet). 12 Such public monitoring not only ensures that potential adverse effects on the environment are kept track of, but it also contributes to narrowing the knowledge gap, in light of the fact that most members of society—including public bodies—often lack the expertise to make such an analysis by themselves.

Third, a number of procedural rights were introduced with a clear ‘societal’ dimension, as they can be relied upon by all members of society without the need to demonstrate (a risk of) individual harm (Anton & Shelton, 2011; van Calster & Reins, 2017). Three of these—introduced through the respective three pillars of the Aarhus Convention—can be pointed out in particular. First, everyone has the right to access environmental information held by public authorities, without a need to justify the access request (Krämer, 2012; Madrid, 2020). This includes information not only on the state of the environment, but also on policies or actions that were taken. In addition, public authorities are required to actively disseminate such information. Second, a right to public participation in environmental decision-making was established. Public authorities have an obligation to ensure that members of society can comment on environment-related decisions—such as the approval of projects or plans that can affect the environment—and these comments must be taken into account by the decision-maker. Information must also be provided on the final decision taken, including the reasons or justification—hence ensuring accountability and opening the door to challengeability. Third, an ‘access to justice’ right was created (Poncelet, 2012; Hadjiyianni, 2021), which provides all members of society with the right to challenge public decisions that were made without respecting the two other rights, or without complying with environmental law more generally. 13 Since these rights can also be invoked by those who are only indirectly impacted by the potentially harmful actions, they can be considered societal rights.

5. Lessons for EU policymakers

Given the parallels identified between environmental harm and (other) harms potentially caused by the use of AI systems, policymakers aiming to address the legal protection gap arising from the overreliance on individual remedies can draw inspiration from the above mechanisms. These mechanisms exemplify how the inclusion of societal remedies can complement individual routes for redress and thereby broaden and strengthen the protection of societal interests. When translated to the context of AI, the following lessons can be drawn.

First, for those AI applications that can adversely affect societal interests, EU policymakers should consider the introduction of public oversight and enforcement mechanisms. Rather than solely relying on those individuals who are able and willing to go to court and challenge an infringement of their individual rights, the onus can be shifted to developers and deployers of AI systems to comply with certain obligations that are subjected to public scrutiny (Yeung, 2019a). Mandatory obligations should not merely reiterate the need to comply with human rights, but should introduce specific process- or outcome-oriented standards tailored to the societal interest at stake (van der Sloot & van Schendel, 2021). Moreover, they should allow organisations to demonstrate compliance with these standards, for instance through audits and certifications—without losing sight of the limitations of algorithmic auditing (Burrell, 2016; Ananny & Crawford, 2016; Galdon Clavell et al., 2020). Public enforcement can take various forms, from a centralised to a decentralised approach, at national or EU level—or in a hybrid set-up. The sensitivity of the societal interest at stake, the expertise and resources required for effective oversight, and the legal basis that will be relied upon are all factors that will influence the enforcement mechanism’s shape. 14

An additional element to consider concerns the introduction of mandatory ex ante impact assessments—modelled on the existing environmental impact assessments or data protection impact assessments—for AI applications that can affect societal interests. Importantly, this assessment should not only focus on the impact of the AI system’s use on human rights, but also on broader societal interests such as democracy and the rule of law (CAHAI, 2020). AI developers and deployers would hence need to anticipate the potential societal harm their systems can generate, as well as rationalise and justify their systems’ design choices in light of those risks. For transparency and accountability purposes, the impact assessments should be rendered public. Policymakers could provide further guidance as regards the assessment’s format, scope and methodology. Furthermore, where AI projects can have a considerable impact on societal interests—especially in the public sector, but potentially also in private settings that have de facto become part of the public sphere—societal participation in the assessment process should be foreseen, analogously to the way this right exists in environmental impact assessments.

Concretely, a public administration that considers implementing an AI system in its public decision-making processes would be required to conduct a human rights, democracy and rule of law impact assessment prior to designing or procuring the system. This assessment would be published, for instance on the administration’s website, and all interested stakeholders would be able to provide feedback on the assessment to ensure its comprehensiveness. When ultimately greenlighted, the system would need to be designed in a way that adheres to certain standards and obligations, for instance in terms of documentation and logging, verification of unjust bias, or accuracy and robustness. Finally, once deployed, the system should be regularly evaluated and independently auditable, and the necessary information to carry out such audits (or at the very least the outcomes of the independent auditor’s report) should be made available to the public.

Second, EU policymakers should consider establishing a public monitoring mechanism—for instance by mandating an existing or new agency—to map and gather information on the potential adverse effects of AI systems on specific societal interests. Just as the European Environment Agency provides information about the state of the environment, there is a need to assess and disseminate independent and impartial information about the impact of AI systems on various societal interests—and in particular on society’s normative and institutional infrastructure—both in the short and in the longer term. This information can not only help bridge the knowledge gap stemming from information asymmetries between AI deployers and AI affectees, but it can also inform the public on how the accumulative and systematic deployment of certain AI systems might affect societal interests. In turn, societal actors—from policymakers to civil society organisations—can use this information to raise awareness of the issues at stake, improve policies, and assess whether the protective measures adopted achieve their objectives.

Importantly, monitoring mechanisms will need to rely on specific metrics and criteria to measure and assess potential (adverse) effects of certain uses of AI. Yet obtaining such metrics can be a challenge for more abstract interests such as the rule of law or societal freedom. Given that AI’s societal harms often manifest themselves in an intangible manner, they diverge in this respect from typical environmental harms (like oil spills or air pollution), which can more easily be measured and monitored. This is also relevant when setting compliance standards, which require verifiable criteria. That said, even ‘abstract’ interests like the rule of law can be broken down into more concrete elements that each have an effect on its overall vitality. Inspiration can, for instance, be drawn from the World Justice Project’s Rule of Law Index (World Justice Project, 2020) or the European Commission’s first Rule of Law Report (European Commission, 2020b), in which the rule of law situation in various countries is assessed based on their respective systems and policies. Establishing appropriate frameworks and methodologies to map and evaluate the impact of AI systems on specific societal interests could be an important task for any future monitoring entity, and constitute a useful input for evidence-based AI policies.

Third, EU policymakers should consider strengthening procedural societal rights in the context of AI. These could include a right to access information held by public authorities on publicly used AI systems, without a need to justify the access request. As indicated above, such rights should go beyond existing access to documents rights and ideally also enable citizens, researchers, civil society organisations, media and other interested parties to audit the system, all the while respecting applicable privacy laws and justifiable public interest exceptions like national security. Certain pieces of information could also be required to be made available proactively. Importantly, given the public sector’s heavy reliance on private actors as regards AI systems and their underlying data, this right should also apply when AI systems are procured from private actors (Crawford & Schultz, 2019). In addition, a right to societal participation in public decision-making on AI-related projects could be established. For instance, when a public authority considers procuring, developing or deploying an AI system for a public service in a manner that risks affecting a societal interest, members of society could be given the opportunity to comment on this project and to receive information—and a justification—about the decision taken. This will not only enhance accountability for the use of AI systems in public services, but can also help ensure that the potential adverse impacts on society are more comprehensively anticipated and mitigated through stakeholder participation. Last, a societal ‘access to justice’ right could be introduced, providing all members of society—without the need to demonstrate individual or collective harm—the right to challenge public decisions that were made without completing a comprehensive impact assessment, complying with societal participation rights, or adhering to the laws and standards applying to the use of AI systems more generally.

Finally, it should be explored whether these societal rights could also be rendered applicable when AI systems are not deployed by public authorities, but in private settings where their use nevertheless significantly affects a societal interest. 15 Certainly, the legitimate interests of private commercial actors, such as business secrets or intellectual property rights, should be respected. Simultaneously, however, an argument can be made for the introduction of societal information, participation and justice rights also in those situations where AI systems are used in a de facto public environment—such as social media platforms that became part of the public sphere—and hence shape society’s infrastructure. 16

It can be noted that these mechanisms need not be established through one catch-all AI regulation. AI’s (societal) harms are manifold and can manifest themselves differently in specific contexts. Depending on the interest and domain at stake, mechanisms to counter them might require a tailored approach that complements (existing or new) horizontal regulation. The role of existing bodies, institutions and agencies should in this regard also be considered, both as regards relevant sectors and as regards relevant interests. For instance, when it comes to monitoring AI’s impact on societal interests, policymakers would need to assess whether the establishment of a new entity that bundles expertise to examine a range of interests is warranted, or whether instead reliance on (existing) specialised entities—from equality bodies and data protection authorities to sectoral agencies—is preferred.

6. The Commission’s proposal for an Artificial Intelligence Act: a brief evaluation

On 21 April 2021, the European Commission published a proposal for an EU regulation on Artificial Intelligence, the so-called Artificial Intelligence Act (European Commission, 2021). The proposal has the dual aim of safeguarding individuals’ fundamental rights against AI’s adverse effects, as well as harmonising member states’ rules to eliminate potential obstacles to trade on the internal market. To reach this aim, a new AI-specific legal regime is introduced, complementing rules that already apply in a technology-neutral manner (such as, for instance, the GDPR). The proposal distinguishes different categories of AI: (1) prohibited applications, which cannot be deployed unless certain exceptions apply (Title II), (2) high-risk AI systems, which need to comply with mandatory requirements prior to their placement on the market or their deployment (Title III), (3) applications that require transparency measures regardless of their risk-level (Title IV) and (4) applications that are not considered as high-risk AI systems, but may be subjected to voluntary codes of conduct that reflect similar requirements (Title IX).

Much can be said about the proposed regulation’s strengths and weaknesses, yet it goes beyond the scope of this paper to conduct a detailed analysis of whether it strikes the right balance and provides adequate protection against the various harms that the use of AI systems can raise. However, bearing in mind the three types of societal protection mechanisms suggested above to counter AI’s societal harm, in what follows, a brief assessment is made of the extent to which the proposal takes these mechanisms into consideration.

First, the proposal can be commended for introducing a public oversight and enforcement framework for high-risk AI applications. 17 This signals that the need for adequate safeguards against AI’s risks is a matter of importance to society at large, and not only to the potentially harmed individuals or collectives. The proposal thus bridges an important protection gap by shifting the burden from individuals to independent national supervisory authorities to ensure that a set of essential rules is respected. In this regard, the introduction of prohibitions and of ex ante requirements in terms of data quality, transparency and accountability for high-risk AI systems, the possibility to conduct independent audits, and the threat of substantial fines are important contributions. Moreover, a publicly accessible EU database of stand-alone high-risk AI systems will be established, managed by the European Commission, which can increase public transparency. At the same time, however, some caveats can be made.

As of today, many of the proposed requirements—for instance in terms of data quality or human oversight—still lack uniform implementation standards, which makes it difficult to assess compliance with them in a non-arbitrary fashion. This is especially important considering that enforcement is organised at the national level, which raises the risk that different protection standards may be applied by the different national supervisory authorities, and that citizens across the EU may not be equally protected 18 (in addition to the risk that not all authorities will be equipped with sufficient resources to fulfil their task, reminiscent of the problems faced in the context of the GDPR). 19 It can also be questioned whether the list of high-risk AI applications is sufficiently comprehensive, and whether some of those applications should not instead undergo an ex ante authorisation process by an independent authority rather than solely face the possibility of ex post control—especially in public contexts. In addition, while conducting a risk assessment seems to be required for high-risk AI systems, these assessments are not rendered public, nor is there a possibility for society to provide feedback thereon. Only national authorities seem to be able to request access to the AI systems’ (assessment) documentation. Finally, neither citizens nor civil society organisations are granted the right to file a complaint with the national supervisory authority in case they suspect non-compliance with the regulation. In other words, the shift from private to public enforcement might arguably have been taken somewhat too far, since the regulation appears to rely entirely on national authorities without envisaging any role for society. There is, however, no reason why both could not act in a complementary or cooperative manner.

Second, it can be noted that the proposed regulation does not explicitly provide for a public monitoring mechanism to map and disseminate independent information on the potential (adverse) effects of AI systems on specific societal interests, such as equality or the rule of law. The establishment of market surveillance authorities is proposed to monitor compliance with the regulation’s requirements, yet at first sight these authorities are not meant to research, or collect information on, the societal impact that the implementation of AI—for instance in public infrastructures—could generate over the longer term. And while the aforementioned EU database could certainly increase transparency about existing stand-alone high-risk AI systems, it still does not fit the bill of the second societal protection mechanism outlined in the section above. At this stage, it remains to be seen whether the proposed European Artificial Intelligence Board—which will be chaired by the Commission and composed of the national supervisory authorities as well as the European Data Protection Supervisor—could play a role in this regard. 20

Third, as regards the introduction of procedural rights with a societal dimension, the proposed regulation is entirely silent. In fact, the drafters of the proposal seem to have been very careful not to establish any new rights in the regulation, and only to impose obligations. This means that, if an individual wishes to challenge the deployment of an AI system that breaches the requirements of the regulation or adversely affects a societal interest, she will still need to prove individual harm. As already mentioned above, she will also not be able to lodge a complaint with the national supervisory authority to investigate the potentially problematic practice—unless the national authority decides to provide this possibility of its own motion. Besides the lack of a right of access to justice, the proposed regulation also does not provide for a right of access to information or a right to societal participation in public decision-making on AI-related projects.

Finally, and more generally, it is clear that the proposal remains imbued by concerns relating almost exclusively to individual harm, and seems to overlook the need for protection against AI’s societal harms. This manifests itself in at least four ways: (1) the prohibitions of certain manipulative AI practices in Title II are carefully delineated so as to require individual harm—and in particular, “physical or psychological harm” 21—thus excluding the prohibition of AI systems that cause societal harm; (2) as regards the high-risk systems of Title III, their risk is only assessed by looking at the system in an isolated manner, rather than taking into account the cumulative or long-term effects that its widespread, repetitive or systemic use can have; (3) the Commission can only add high-risk AI systems to the list of Annex III in case these systems pose “a risk of harm to the health and safety, or a risk of adverse impact on fundamental rights” 22, thus excluding risks to societal interests; (4) certain AI practices that are highly problematic from a societal point of view—such as AI-enabled emotion recognition 23 or the AI-enabled biometric categorisation of persons 24—are currently not even considered as high-risk.

In sum, while the Commission’s proposal constitutes a firm step towards enhanced protection against AI’s adverse effects, it does not fully reflect the societal dimension thereof. Moreover, it does not provide a role for societal actors to challenge the way in which AI systems can cause societal harm, whether by ensuring the collection of and access to relevant information, granting the possibility to give feedback and participate in public decision-making, or facilitating access to justice. One can wonder whether this is not a missed opportunity, given the importance of the interests at stake. At the same time, as mentioned previously, mechanisms to counter AI’s societal harms need not be tackled in a single regulation, but may well be established or strengthened through various actions. Moreover, it should be kept in mind that this proposal is subject to change, as both the European Parliament and the Council of the European Union will now have the opportunity to propose alterations. 25 In other words, there is still scope for further improvement, and it can be hoped that EU policymakers will place the societal dimension of AI’s harm more prominently on their radar.

7. Conclusions

The development and use of AI applications risks not only causing individual and collective harm, but also societal harm—or wrongful setbacks to societal interests. The societal impact of AI systems is increasingly acknowledged, yet the fact that it cannot always be equated with the sum of individual harms often still remains overlooked. As a consequence, policymakers aiming to analyse legal gaps in the legislative framework applicable to AI systems and other data-driven technologies often fail to account for this discrepancy, and thereby also pay insufficient attention to the legal protection gap arising from an overreliance on individual remedies to safeguard adversely affected societal interests.

I argued above that, even in those circumstances when a specific human right could be invoked in a context where societal harm ensues—for instance because an individual’s right to data protection or right to non-discrimination has been infringed—there is still a risk that this opportunity runs aground. First, the individual may not know that a right was infringed, as many AI systems operate in a non-transparent manner (the knowledge gap problem). Second, the individual could be aware of and opposed to the practice, but the individual harm may be disproportionately small compared to the effort and resources it would take to challenge the practice in court—even if the societal harm arising therefrom may be significant (the threshold problem). Third, the individual may have consented to the use of her personal data or to the use of the AI system, for instance in exchange for a particular service, and might be unaware of—or unbothered by—the fact that this action may subsequently cause societal harm (the egocentrism problem).

To bridge the legal protection gap that leaves societal interests vulnerable to AI-enabled harm, I argue that policymakers should shift their perspective from an individualistic one to a societal one, by recognising the societal interests threatened by the use of AI as sui generis interests. This means that AI’s adverse effects on societal interests should not only be included in AI-related harm analyses, but also in proposals for new legislation. Such legislation could help counter societal harm more effectively by introducing legal remedies with a societal dimension. Drawing on the domain of environmental law, which aims to protect one of the most recognised societal interests, several such mechanisms were identified. While the European Commission’s proposal for an AI regulation seems to head in the right direction by introducing a public oversight structure for certain AI applications, the proposal does not incorporate the proposed societal mechanisms to bridge the legal protection gap as regards AI’s societal harm and could benefit from further improvement.

To conclude this article, five caveats merit being spelled out at this stage, and can at the same time be read as a future research agenda. First, it has been stressed that a very diverse set of societal interests can be adversely impacted by the use of AI systems. Hence, a one-size-fits-all approach to tackle AI’s societal harms is unlikely to succeed. While some mechanisms can contribute to the protection of several societal interests, other interests may require more tailored solutions.

Second, it should be borne in mind that AI systems are but one technology amongst many others, and that the societal harm their use might generate will not always be unique to their features. Indeed, many societal risks are not specific to AI, and it may be counterproductive to treat them as such (Smuha, 2021). As a consequence, in some instances, introducing societal remedies that are not necessarily AI-specific—while nevertheless covering the harm they raise—could be more effective.

Third, the above overview of environmental legislation only focused on three types of legal mechanisms and should not be understood as an exhaustive parallel; other mechanisms could also provide inspiration. At the same time, given its broad nature, the overview did not contain an analysis of how these mechanisms fall short; it is certainly not argued that they are infallible, nor that the analogy between AI’s societal harm and environmental harm always stands. 26 Instead, my primary aim is to draw attention to their underlying societal logic.

Fourth, the specific shape and scope of potentially new legal mechanisms, as well as the feasibility of introducing them at the EU level, will depend on the legal basis that can be invoked—which will in turn also determine whether they can withstand the subsidiarity and proportionality test. As noted above, at this stage, it remains to be seen whether the legal basis on which the Commission’s regulatory proposal for AI relies is in fact appropriate to counter AI’s societal impact beyond interests related to personal data protection or the internal market.

Last, the role of EU member states—and their potential desires to maintain jurisdictional competence to assess and address AI’s impact on societal interests—will also need to be considered. Although, in principle, all member states adhere to the values set out in Article 2 TEU, in practice, their cultural and political interpretations of these values can at times diverge. The degree of this divergence will be an important factor in the process to adopt EU-wide policies in this domain.

Regardless of these caveats, societal interests will not be adequately protected as long as policymakers do not look beyond the individual when analysing the shortcomings of the current legal framework. They should hence shift their perspective by including an assessment of AI’s potential to cause societal harm, and ensure effective governance mechanisms to tackle it. This paper aims to contribute to such an assessment.

References

Ala-Pietilä, P., & Smuha, N. A. (2021). A Framework for Global Cooperation on Artificial Intelligence and Its Governance. In B. Braunschweig & M. Ghallab (Eds.), Reflections on Artificial Intelligence for Humanity (pp. 237–265). Springer International Publishing. https://doi.org/10.1007/978-3-030-69128-8_15

AlgorithmWatch. (2020). Automating Society Report [Report]. AlgorithmWatch; Bertelsmann Stiftung. https://automatingsociety.algorithmwatch.org/wp-content/uploads/2020/10/Automating-Society-Report-2020.pdf

Ananny, M., & Crawford, K. (2018). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society, 20(3), 973–989. https://doi.org/10.1177/1461444816676645

Anton, D. K., & Shelton, D. (2011). Environmental Protection and Human Rights. Cambridge University Press. https://doi.org/10.1017/CBO9780511974571

Bamberger, K. A., & Mayse, A. (2020). Privacy in Society: Jewish Law Insights for the Age of Big Data. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3731770

Barrett, L. (2018). Model(Ing) Privacy: Empirical Approaches to Privacy Law & Governance. Santa Clara High Tech Law Journal, 35(1). https://digitalcommons.law.scu.edu/chtlj/vol35/iss1/1

Bayamlioglu, E. (2018). Contesting Automated Decisions. European Data Protection Law Review, 4(4), 433–446. https://doi.org/10.21552/edpl/2018/4/6

Bayamlıoğlu, E., & Leenes, R. (2018). The ‘rule of law’ implications of data-driven decision-making: A techno-regulatory perspective. Law, Innovation and Technology, 10(2), 295–313. https://doi.org/10.1080/17579961.2018.1527475

Bietti, E. (2020). Consent as a Free Pass: Platform Power and the Limits of the Informational Turn. Pace Law Review, 40(1), 310–398. https://digitalcommons.pace.edu/plr/vol40/iss1/7

Binns, R. (2018). Algorithmic Accountability and Public Reason. Philosophy & Technology, 31(4), 543–556. https://doi.org/10.1007/s13347-017-0263-5

Brkan, M. (2019). Artificial Intelligence and Democracy. Delphi – Interdisciplinary Review of Emerging Technologies, 2(2), 66–71. https://doi.org/10.21552/delphi/2019/2/4

Brownsword, R. (2016). Technological management and the Rule of Law. Law, Innovation and Technology, 8(1), 100–140. https://doi.org/10.1080/17579961.2016.1161891

Buchholtz, G. (2020). Artificial Intelligence and Legal Tech: Challenges to the Rule of Law. In T. Wischmeyer & T. Rademacher (Eds.), Regulating Artificial Intelligence (pp. 175–198). Springer International Publishing. https://doi.org/10.1007/978-3-030-32361-5_8

Burgers, L. E. (2020). Justitia, the People’s Power and Mother Earth: Democratic legitimacy of judicial law-making in European private law cases on climate change [PhD Thesis, University of Amsterdam]. https://pure.uva.nl/ws/files/52346648/Front_matter.pdf

Burrell, J. (2016). How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data & Society, 3(1), 205395171562251. https://doi.org/10.1177/2053951715622512

CAHAI (Council of Europe Ad Hoc Committee on Artificial Intelligence). (2020). Feasibility Study [CAHAI(2020)23]. Council of Europe. https://rm.coe.int/cahai-2020-23-final-eng-feasibility-study-/1680a0c6da

Cate, F. H., & Mayer-Schonberger, V. (2013). Notice and consent in a world of Big Data. International Data Privacy Law, 3(2), 67–73. https://doi.org/10.1093/idpl/ipt005

Cohen, J. E. (2017). Affording Fundamental Rights: A Provocation Inspired by Mireille Hildebrandt. Critical Analysis of Law, 4(1), 76–90. https://scholarship.law.georgetown.edu/facpub/1964

Cohen, J. E. (2019). Between Truth and Power: The Legal Constructions of Informational Capitalism. Oxford University Press. https://doi.org/10.1093/oso/9780190246693.001.0001

Conaghan, J. (2002). Law, harm and redress: A feminist perspective. Legal Studies, 22(3), 319–339. https://doi.org/10.1111/j.1748-121X.2002.tb00196.x

Council of Europe. (2019). Terms of reference for the Ad hoc Committee on Artificial Intelligence (CAHAI).

Crawford, K., Dobbe, R., Dryer, T., Fried, G., Green, B., Kaziunas, E., Kak, A., Mathur, V., McElroy, E., Sánchez, A. N., Raji, D., Rankin, J. L., Richardson, R., Schultz, J., West, S. M., & Whittaker, M. (2019). AI Now 2019 Report [Report]. AI Now Institute. https://ainowinstitute.org/AI_Now_2019_Report.pdf

Crawford, K., & Schultz, J. (2019). AI systems as state actors. Columbia Law Review, 119(7), 1941–1972. https://columbialawreview.org/content/ai-systems-as-state-actors/

Dignum, V. (2019). Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way. Springer International Publishing. https://doi.org/10.1007/978-3-030-30371-6

Durkheim, E. (1925). L’éducation morale. Alcan.

European Commission. (2004). Communication from the Commission to the Council, the European Parliament and the European Economic and Social Committee—Integration of Environmental Aspects into European Standardisation {SEC(2004)206}. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52004DC0130

European Commission. (2020a). Inception Impact Assessment on the Proposal for a legal act of the European Parliament and the Council laying down requirements for Artificial Intelligence.

European Commission. (2020b). Rule of Law Report—The Rule of Law Situation in the European Union.

European Commission. (2020c). White Paper on Artificial Intelligence—A European approach to excellence and trust (White Paper COM(2020) 65 final). European Commission. https://eur-lex.europa.eu/legal-content/en/ALL/?uri=CELEX:52020DC0065

European Commission. (2021). Proposal for a Regulation of the European Parliament and the Council laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union legislative acts.

European Union Agency for Fundamental Rights. (2020). Getting the future right – Artificial intelligence and fundamental rights [Report]. Publications Office of the European Union. https://doi.org/10.2811/774118

Feinberg, J. (1984). Harm to Others. In The Moral Limits of the Criminal Law—Volume 1: Harm to Others. Oxford University Press.

Galdon Clavell, G., Martín Zamorano, M., Castillo, C., Smith, O., & Matic, A. (2020). Auditing Algorithms: On Lessons Learned and the Risks of Data Minimization. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 265–271. https://doi.org/10.1145/3375627.3375852

Gebru, T. (2020). Race and Gender. In M. D. Dubber, F. Pasquale, & S. Das (Eds.), The Oxford Handbook of Ethics of AI (pp. 251–269). Oxford University Press. https://doi.org/10.1093/oxfordhb/9780190067397.013.16

Hadjiyianni, I. (2021). Judicial protection and the environment in the EU legal order: Missing pieces for a complete puzzle of legal remedies. Common Market Law Review, 58(3), 777–812. https://kluwerlawonline.com/journalarticle/Common+Market+Law+Review/58.3/COLA2021050

Hänold, S. (2018). Profiling and Automated Decision-Making: Legal Implications and Shortcomings. In M. Corrales, M. Fenwick, & N. Forgó (Eds.), Robotics, AI and the Future of Law (pp. 123–153). Springer Singapore. https://doi.org/10.1007/978-981-13-2874-9_6

Hao, K. (2019, June 6). Training a single AI model can emit as much carbon as five cars in their lifetimes. MIT Technology Review. https://www.technologyreview.com/2019/06/06/239031/training-a-single-ai-model-can-emit-as-much-carbon-as-five-cars-in-their-lifetimes/

Hao, K. (2021, March 11). He Got Facebook Hooked on AI. Now He Can’t Fix Its Misinformation Addiction. MIT Technology Review. https://www.technologyreview.com/2021/03/11/1020600/facebook-responsible-ai-misinformation/

Hasselbalch, G. (2019). Making sense of data ethics. The powers behind the data ethics debate in European policymaking. Internet Policy Review, 8(2). https://doi.org/10.14763/2019.2.1401

High-Level Expert Group AI. (2019). A definition of AI: Main capabilities and scientific disciplines. European Commission. https://digital-strategy.ec.europa.eu/en/library/definition-artificial-intelligence-main-capabilities-and-scientific-disciplines

High-Level Expert Group on Artificial Intelligence. (2019). Ethics guidelines for trustworthy AI [Report]. European Commission. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai

Hildebrandt, M. (2015). Smart Technologies and the End(s) of Law. Edward Elgar. https://doi.org/10.4337/9781849808774

Hildebrandt, M. (2018). Algorithmic regulation and the rule of law. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2128), 20170355. https://doi.org/10.1098/rsta.2017.0355

Hoofnagle, C. J., van der Sloot, B., & Borgesius, F. Z. (2019). The European Union general data protection regulation: What it is and what it means. Information & Communications Technology Law, 28(1), 65–98. https://doi.org/10.1080/13600834.2019.1573501

Isaak, J., & Hanna, M. J. (2018). User Data Privacy: Facebook, Cambridge Analytica, and Privacy Protection. Computer, 51(8), 56–59. https://doi.org/10.1109/MC.2018.3191268

Jørgensen, R. F. (Ed.). (2019). Human Rights in the Age of Platforms. The MIT Press. https://doi.org/10.7551/mitpress/11304.001.0001

Kaminski, M. E. (2019). Binary Governance: Lessons from the GDPR’s Approach to Algorithmic Accountability. Southern California Law Review, 92(6), 1529–1616. https://southerncalifornialawreview.com/2019/09/01/binary-governance-lessons-from-the-gdrps-approach-to-algorithmic-accountability-article-by-margot-e-kaminski/

Kernohan, A. (1993). Accumulative Harms and the Interpretation of the Harm Principle. Social Theory and Practice, 19(1), 51–72. https://doi.org/10.5840/soctheorpract19931912

Kim, G.-H., Trimi, S., & Chung, J.-H. (2014). Big-data applications in the government sector. Communications of the ACM, 57(3), 78–85. https://doi.org/10.1145/2500873

Krämer, L. (2012). Transnational Access to Environmental Information. Transnational Environmental Law, 1(1), 95–104. https://doi.org/10.1017/S2047102511000070

Kutz, C. (2000). Complicity: Ethics and Law for a Collective Age (1st ed.). Cambridge University Press. https://doi.org/10.1017/CBO9780511663758

Lane, J., Stodden, V., Bender, S., & Nissenbaum, H. (Eds.). (2014). Privacy, Big Data, and the Public Good—Frameworks for Engagement. Cambridge University Press. https://doi.org/10.1017/CBO9781107590205

Langlet, D., & Mahmoudi, S. (2016). EU Environmental Law and Policy. Oxford University Press. https://doi.org/10.1093/acprof:oso/9780198753926.001.0001

Liu, H.-W., Lin, C.-F., & Chen, Y.-J. (2019). Beyond State v Loomis: Artificial intelligence, government algorithmization and accountability. International Journal of Law and Information Technology, 27(2), 122–141. https://doi.org/10.1093/ijlit/eaz001

Madrid, J. Z. (2020). Access to Environmental Information held by the Private Sector under International, European and Comparative Law. KU Leuven.

Mittelstadt, B. (2017). From Individual to Group Privacy in Big Data Analytics. Philosophy & Technology, 30(4), 475–494. https://doi.org/10.1007/s13347-017-0253-7

Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2), 205395171667967. https://doi.org/10.1177/2053951716679679

Muller, C. (2020). The Impact of AI on Human Rights, Democracy and the Rule of Law. In CAHAI Secretariat (Ed.), Towards regulation of AI systems (CAHAI(2020)06) (pp. 23–31). Council of Europe. https://www.coe.int/en/web/artificial-intelligence/cahai

Nissenbaum, H. (1996). Accountability in a computerized society. Science and Engineering Ethics, 2(1), 25–42. https://doi.org/10.1007/BF02639315

O’Faircheallaigh, C. (2010). Public participation and environmental impact assessment: Purposes, implications, and lessons for public policy making. Environmental Impact Assessment Review, 30(1), 19–27. https://doi.org/10.1016/j.eiar.2009.05.001

O’Gorman, R. (2013). The Case for Enshrining a Right to Environment within EU Law. European Public Law, 19(3), 583–604.

O’Neil, C. (2017). Weapons of Math Destruction. Penguin Books Ltd.

Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Harvard University Press.

Poncelet, C. (2012). Access to Justice in Environmental Matters—Does the European Union Comply with its Obligations? Journal of Environmental Law, 24(2), 287–309. https://doi.org/10.1093/jel/eqs004

Russell, S. (2019). Human Compatible: Artificial Intelligence and the Problem of Control. Viking.

Schermer, B., Custers, B., & van der Hof, S. (2014). The Crisis of Consent: How Stronger Legal Protection May Lead to Weaker Consent in Data Protection. Ethics and Information Technology, 16(2), 171–182. https://doi.org/10.1007/s10676-014-9343-8

Simon, T. W. (1995). Democracy and Social Injustice: Law, Politics, and Philosophy. Rowman & Littlefield.

Smuha, N. A. (2020). Beyond a Human Rights-Based Approach to AI Governance: Promise, Pitfalls, Plea. Philosophy & Technology. https://doi.org/10.1007/s13347-020-00403-w

Smuha, N. A. (2021). From a ‘race to AI’ to a ‘race to AI regulation’: Regulatory competition for artificial intelligence. Law, Innovation and Technology, 13(1), 57–84. https://doi.org/10.1080/17579961.2021.1898300

Solow-Niederman, A. (2019). Administering Artificial Intelligence. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3495725

Strubell, E., Ganesh, A., & McCallum, A. (2019). Energy and Policy Considerations for Deep Learning in NLP [Preprint]. ArXiv. http://arxiv.org/abs/1906.02243

Tamo-Larrieux, A., Mayer, S., & Zihlmann, Z. (2020). Not Hardcoding but Softcoding Privacy. https://www.alexandria.unisg.ch/262254/

Taylor, L., Floridi, L., & van der Sloot, B. (Eds.). (2017). Group Privacy: New Challenges of Data Technologies. Springer International Publishing. https://doi.org/10.1007/978-3-319-46608-8

Theodorou, A., & Dignum, V. (2020). Towards ethical and socio-legal governance in AI. Nature Machine Intelligence, 2(1), 10–12. https://doi.org/10.1038/s42256-019-0136-y

Thompson, D. F. (1980). Moral Responsibility of Public Officials: The Problem of Many Hands. American Political Science Review, 74(4), 905–916. https://doi.org/10.2307/1954312

UNESCO. (2020). Outcome document: First draft of the Recommendation on the Ethics of Artificial Intelligence (SHS/BIO/AHEG-AI/2020/4 REV.2). Ad Hoc Expert Group (AHEG) for the preparation of a draft text of a recommendation on the ethics of artificial intelligence. https://unesdoc.unesco.org/ark:/48223/pf0000373434

van Alsenoy, B., Kosta, E., & Dumortier, J. (2014). Privacy notices versus informational self-determination: Minding the gap. International Review of Law, Computers & Technology, 28(2), 185–203. https://doi.org/10.1080/13600869.2013.812594

van Calster, G., & Reins, L. (2017). EU Environmental Law. Edward Elgar Publishing.

van de Poel, I., Nihlén Fahlquist, J., Doorn, N., Zwart, S., & Royakkers, L. (2012). The Problem of Many Hands: Climate Change as an Example. Science and Engineering Ethics, 18(1), 49–67. https://doi.org/10.1007/s11948-011-9276-0

van der Sloot, B. (2017). Privacy as Virtue: Moving Beyond the Individual in the Age of Big Data (1st ed.). Intersentia. https://doi.org/10.1017/9781780686592

van der Sloot, B., & van Schendel, S. (2021). Procedural law for the data-driven society. Information & Communications Technology Law, 30(3), 304–332. https://doi.org/10.1080/13600834.2021.1876331

Véliz, C. (2020). Privacy is Power. Bantam Press.

Viljoen, S. (2020). Democratic Data: A Relational Theory For Data Governance. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3727562

Winner, L. (1980). Do Artifacts Have Politics? Daedalus, 109(1), 121–136. https://www.jstor.org/stable/20024652

World Justice Project. (2020). Rule of Law Index 2020 [Report]. World Justice Project. https://worldjusticeproject.org/sites/default/files/documents/WJP-ROLI-2020-Online_0.pdf

Wrigley, S. (2018). Taming Artificial Intelligence: “Bots,” the GDPR and Regulatory Approaches. In M. Corrales, M. Fenwick, & N. Forgó (Eds.), Robotics, AI and the Future of Law (pp. 183–208). Springer. https://doi.org/10.1007/978-981-13-2874-9_8

Yeung, K. (2019a). Responsibility and AI - A study of the implications of advanced digital technologies (including AI systems) for the concept of responsibility within a human rights framework (Study DGI)2019)05). Council of Europe. https://rm.coe.int/responsability-and-ai-en/168097d9c5

Yeung, K. (2019b). Why Worry about Decision-Making by Machine? In K. Yeung, Algorithmic Regulation (pp. 21–48). Oxford University Press. https://doi.org/10.1093/oso/9780198838494.003.0002

Yeung, K., Howes, A., & Pogrebna, G. (2020). AI Governance by Human Rights–Centered Design, Deliberation, and Oversight: An End to Ethics Washing. In M. D. Dubber, F. Pasquale, & S. Das (Eds.), The Oxford Handbook of Ethics of AI (pp. 75–106). Oxford University Press. https://doi.org/10.1093/oxfordhb/9780190067397.013.5

Zalnieriute, M., Moses, L. B., & Williams, G. (2019). The Rule of Law and Automation of Government Decision‐Making. The Modern Law Review, 82(3), 425–455. https://doi.org/10.1111/1468-2230.12412

Zuiderveen Borgesius, F. J. (2015). Behavioural Sciences and the Regulation of Privacy on the Internet. In A. Alemanno & A.-L. Sibony (Eds.), Nudge and the Law: A European Perspective (pp. 179–208). Hart Publishing. https://doi.org/10.5040/9781474203463

Zuiderveen Borgesius, F. J. (2018). Discrimination, artificial intelligence, and algorithmic decision-making [Study]. Council of Europe, Directorate General of Democracy. https://rm.coe.int/discrimination-artificial-intelligence-and-algorithmic-decision-making/1680925d73

Zuiderveen Borgesius, F. J., Möller, J., Kruikemeier, S., Ó Fathaigh, R., Irion, K., Dobber, T., Bodo, B., & De Vreese, C. (2018). Online Political Microtargeting: Promises and Threats for Democracy. Utrecht Law Review, 14(1), 82. https://doi.org/10.18352/ulr.420

Footnotes

1. Some, but not all, wrongful setbacks to interests are protected by law. It should also be noted that the concept of harm is not static. It changes over time, along with the normative framework of a given society. See in this regard Conaghan, 2002.

2. These interests can, for instance, be physical, psychological, financial, social or legal in nature.

3. See in this regard also Emile Durkheim’s conceptualisation of society as a sui generis entity (Durkheim, 1925).

4. NJCM et al. and FNV v Staat der Nederlanden, Rechtbank Den Haag, 5 February 2020, C-09-550982-HA ZA 18-388, ECLI:NL:RBDHA:2020:865.

5. Article 80 of the General Data Protection Regulation, however, establishes a right for data subjects to mandate certain organisations to lodge a complaint with the national supervisory authority on their behalf. Furthermore, it also allows member states to provide that these organisations—independently of a data subject's mandate—have the right to lodge a complaint with the authority if they consider that the data subject’s rights under the regulation were infringed. Hence, under this scenario, a third (specifically qualified) party could in fact undertake independent action. Nevertheless, this third party must still demonstrate that a data subject’s right was infringed. Evidence of individual harm thus remains a requirement, which complicates matters in case of potential consent or lack of knowledge.

6. Staat der Nederlanden v Stichting Urgenda, Hoge Raad, 20 December 2019, 19/00135, ECLI:NL:HR:2019:2006.

7. It can also be pointed out that a healthy environment is one of the societal interests that can be adversely affected by the use of data-driven AI systems, given the considerable energy consumption they entail (Hao, 2019; Strubell et al., 2019).

8. Some even argue that EU environmental law focuses too much on the societal dimension of the harm, and would benefit from the recognition of an individual right to environment (O’Gorman, 2013).

9. For instance, and of relevance to the discussion on access to justice for societal interests, the UNECE Convention on Access to Information, Public Participation in Decision-making and Access to Justice in Environmental Matters (Aarhus Convention), signed on 25 June 1998, had a substantial impact on EU environmental law.

10. As raised above, the fact that individual rights do not provide comprehensive protection against environmental harm does not diminish their importance—and the importance of private enforcement mechanisms—to tackle such harm (see e.g. Burgers, 2020). Also in the context of AI, public enforcement should be seen as complementary to (rather than substituting) private enforcement (see in this regard also Kaminski, 2019).

11. In this regard, two directives are of particular importance, namely Directive 2011/92/EU, known as the “Environmental Impact Assessment” (EIA) Directive, and Directive 2001/42/EC, known as the “Strategic Environmental Assessment” (SEA) Directive.

12. Besides the 27 EU member states, Eionet also counts Iceland, Liechtenstein, Norway, Switzerland, Turkey and the six West Balkan countries.

13. While this right has been established in theory, in practice various jurisdictions, including the European Union itself, have not fully implemented it, attracting considerable criticism (Hadjiyianni, 2021). See also the findings and recommendations of the UN committee overseeing compliance with the Aarhus Convention, published in 2017, concluding that the EU did not fully comply with its obligations on access to justice, accessible at: https://unece.org/fileadmin/DAM/env/pp/compliance/CC-57/ece.mp.pp.c.1.2017.7.e.pdf.

14. It can be noted that, to some extent, inspiration can also be drawn from the GDPR (Kaminski, 2019). Despite its predominantly individualistic focus as described above, the GDPR does not entirely lay the onus on the individual, but sets out a number of mechanisms for public oversight (for instance by virtue of the role assigned to national data protection authorities) and for accountability (for instance by virtue of transparency obligations upon personal data controllers and processors) (Hoofnagle et al., 2019). While the individual, collective and societal risks raised by AI can certainly not be reduced to a personal data protection problem, the GDPR currently does function as one of the most relevant pieces of legislation to regulate AI (Hänold, 2018; Wrigley, 2018), and could provide further lessons in this context.

15. Evidently, if the existence of a potential adverse impact on a societal interest dictates the applicability of the above rights, an adequate delineation of ‘societal interests’ becomes pressing. There is no sui generis definition of a ‘societal interest’ under European Union law. Nevertheless, primary EU legislation does offer some clues about the societal interests that the EU legal order upholds. Article 2 of the Treaty on European Union (TEU), for instance, lists the European Union’s values, which both EU institutions and EU member states must comply with. These concern “respect for human dignity, freedom, democracy, equality, the rule of law and respect for human rights, including the rights of persons belonging to minorities”, which are “common to the Member States in a society in which pluralism, non-discrimination, tolerance, justice, solidarity and equality between women and men prevail.” Of course, the fact that the EU Treaties express some of the EU’s societal interests does not mean that the Union automatically has the competence to take legal action in this field. The Union’s competences are limited to those specifically conferred to it by the member states and are exhaustively listed in the Treaties.

16. The concern for societal harm caused by this type of private actor seems to be one of the main drivers behind the European Commission’s proposed Digital Services Act. See for instance recital 94: ‘Given the importance of very large online platforms, in view of their reach and impact, their failure to comply with the specific obligations applicable to them may affect a substantial number of recipients of the services across different Member States and may cause large societal harms, while such failures may also be particularly complex to identify and address.’

17. It can be noted that this is in line with the proposal for a Digital Services Act, which also covers the use of certain AI systems on online platforms and which likewise establishes a public oversight mechanism, thus recognising the societal interests at stake. Interestingly, in the recitals of this proposal, explicit use is made of terms like ‘societal harm’, ‘societal risks’ and ‘societal concerns’, albeit without precisely defining what is meant therewith.

18. It can however be noted that a European Artificial Intelligence Board will be established with representatives of the national supervisory authorities, which will serve inter alia to “contribute to uniform administrative practices in the member states”, as per the proposed Article 58 of the regulation.

19. Consider in this regard the report published by Access Now in May 2020, in which the lack of adequate resources for certain data protection authorities is highlighted. The report is accessible at: https://www.accessnow.org/cms/assets/uploads/2020/05/Two-Years-Under-GDPR.pdf.

20. While not explicitly mentioned in the proposal, it can be noted that the European Commission already announced its intention to establish an expert group that could contribute to the European Artificial Intelligence Board’s work.

21. See in this regard Article 5(1)(a) and (b) of the proposal.

22. See in this regard Article 7 of the proposed regulation.

23. The only exception concerns AI systems intended to be used by public authorities “as polygraphs and similar tools or to detect the emotional state of a natural person” either in the context of migration, asylum and border control management, or in the context of law enforcement. See in this regard Annex III, point 6(b) and 7(a).

24. The insertion of “biometric categorisation of natural persons” in the title of Annex III, point 1, does appear to indicate that the Commission could include such systems as ‘high-risk’ upon a revision of this Annex at a later stage, if this inclusion can be justified pursuant to Article 7 of the proposal.

25. The proposed regulation, relying on articles 16 and 114 of the Treaty of the Functioning of the European Union as its legal basis, will in principle need to be adopted through the ordinary legislative procedure.

26. Consider, for instance, the difference in terms of tangibility between societal harms raised by AI and societal harms of environmental nature—and hence also in terms of measurability, quantifiability and verifiability.

What rights matter? Examining the place of social rights in the EU’s artificial intelligence policy debate


This paper is part of Governing “European values” inside data flows, a special issue of Internet Policy Review guest-edited by Kristina Irion, Mira Burri, Ans Kolk, Stefania Milan.

Introduction

The entrenchment and establishment of particular rights has from the outset been part of the advancement of the European project and how the European Union (EU) has defined itself. References to ‘European values’ are often rooted in an understanding of this commitment to rights seen to uphold certain principles about democracy and the relationship between market, state and citizens. Although the notion that Europe is premised on a set of exceptional values is contentious, Foret and Calligaro argue that European values can be understood as those ‘values enshrined in the treaties and asserted by European institutions in their discourses’ (2018, p. 2). These treaties and institutional discourses do not always translate into consistent policy agendas and geopolitical activity, but they provide a window into what is considered valuable and for whom. This is particularly relevant in new policy areas, such as emerging technologies, where concrete conceptualisations of different fundamental rights are still being formulated. In these circumstances, we are provided with an opportunity to explore policy debates as indications of the priorities and concerns that make up the European integration project as it is shaped by different strategic interests and self-understandings.

In this paper we approach the question of what rights matter in EU policy debates by looking at the discourses of different stakeholders in the policy debate surrounding AI. We do so with a particular focus on the place of social rights as a growing, but historically neglected aspect of the governance discourse surrounding emerging technologies. Rights-based approaches in the governance of technologies, especially optimisation technologies 1, have tended to prioritise human rights understood in terms of individual privacy, non-discrimination and procedural safeguards pertaining to consent and transparency as significant entry-points for regulation (Gangadharan, 2019). Whilst these are important areas for engaging with technology, ample research demonstrates how impacts on social and economic rights, such as the right to work, social security, healthcare, or education, constitute a crucial component of the societal tensions surrounding developments in AI (Alston, 2019). Yet despite these rights being important for the European project, they have received marginal attention in AI policy and governance debates.

As a way to uncover how social rights are understood in the EU’s policy debate on AI, we use the public consultation on the White Paper for AI Strategy as a case study for examining concerns and priorities amongst different stakeholder groups. The engagement with the White Paper for AI Strategy is an important discussion in this regard, as it forms part of a discourse on AI that from the outset positioned policy concerns in relation to the protection of fundamental rights and so-called ‘European values’. We start by outlining the historical and theoretical context out of which social rights emerge, situating them in relation to the broader pursuit of the ‘European social model’ following World War II and the subsequent creation and integration of the EU. We then go on to discuss some of the ways social rights intersect with optimisation technologies, and the role of rights-based approaches in concerns about data justice, particularly in areas such as employment and social welfare. Against this backdrop we outline the emergence of AI policy in the EU as an introduction to our study of the submissions to the public consultation on the White Paper on AI Strategy from key stakeholder groups, including civil society, public authorities and business associations. In analysing the dominant themes of these submissions, we argue that social rights are relatively muted within the AI policy debate despite the profound significance AI policy has for the articulation of resource distribution and economic inequality. Whilst concerns about social rights manifest themselves in discourses pertaining to public services and employment, they do so predominantly in a procedural context that emphasises fair data collection or the right to redress rather than in material or distributive terms. Moreover, as an indication of what actually informs ‘European values’, social rights are marginalised in favour of geopolitical concerns about the single market, regional competition and technological innovation.

Social rights and the European project

Although they are sometimes perceived as elusive, social rights have a firm role in the broader discussion on the evolution of citizenship, most famously perhaps in T. H. Marshall’s three dimensions that include civil, political and social citizenship (Marshall, 1950). At the same time, social rights are steeped in ambiguity of both a political and legal nature that relates, in part, to the division of rights into different categories that we see play out particularly at the international level (Ssenyonjo, 2009). Post World War II international human rights regimes, for example, adopted separate treaties for civil and political rights (such as freedom of religion, the right to assembly or to privacy) and economic, social and cultural rights (including the right to work, health or social security). 2 While both categories of rights are part of international human rights, civil rights have dominated the discourse and practice of human rights, often becoming their synonym and leaving social rights in the position of ‘poor stepsister’ (Alston, 2005).

One of the key differences between these categories of rights revolves around state-citizen relations. In the case of civil rights, the emphasis is on issues of individual freedom, especially from state interference, whereas the implementation of social rights often requires state intervention, incurring budgetary expenses and limiting private property or economic freedom (Eide, 2001). Furthermore, social rights have been considered dysfunctional in terms of their legal structure, making their judicial assessment harder to carry out (Langford, 2009). They also intersect with other categories of rights in some circumstances. For example, in the constitutional practice of some countries, the right to life is used to confirm the protection of access to medical care or medicines. On the other hand, conflicts can arise between rights, especially when the individual freedom vs. state intervention binary comes into play (Toebes, 1999).

While the scope of social rights might be contested, for the purposes of this article we follow Marshall’s early understanding of social citizenship to include ‘(t)he right to share to the full in the social heritage and to live the life of a civilized being according to the standards prevailing in the society’ and the ‘universal right to real income which is not proportionate to the market value of the claimant’ (Marshall, 1950, pp. 11 and 47). In other words, social rights are strongly connected to public services, fair working conditions, equality, and a guarantee of social protection delivered through universal systems and wealth redistribution measures such as minimum income or progressive taxation (Katrougalos, 2007; Moyn, 2018). Legal jurisprudence in the field of international social rights indicates that such rights are structurally complex and consist of various obligations of the state, such as the guarantee of non-discrimination and procedural standards, but most of all ensuring the availability of public services and welfare (e.g. UN CESR, 2008).

In Europe, the emergence of the modern welfare state has been associated with a strong commitment to social rights since the 19th century (Esping-Andersen, 1990). Furthermore, the dual crises of the global recession and the Second World War ushered in a widespread consensus around the need for state institutions to play a permanent role in mitigating the harms of the market economy through social reforms that ensured social protection, access to employment and decent care (Judt, 2007). Today, the recognition of social rights, along with civil and political rights, is part of what Fabre (2005) has called a ‘European culture of social justice’. This model is visible in national constitutions and international treaties, including the European Social Charter from 1961.

At the same time, the place of social rights in the European integration project (including the European Communities [EC] and later the European Union [EU]) has been ambiguous, sometimes based on contradictions and marked by fluctuations that raise questions about their place within common ‘European values’. From the outset of the EC, the ‘social question’ has been the subject of disputes between the ordoliberal model of European integration and the vision of ‘Social Europe’ (Dodo, 2015). A commitment to social rights has been a prominent feature in what has characterised the European model, but the advancement of a European integration project was always primarily oriented towards the formation of a common market (Maduro, 1999; Kenner, 2003; Garben, 2020). Already in 1957, the Treaty of Rome included commitments such as the guarantee of equal pay between women and men, community coordination of paid holiday schemes, and the establishment of a European Social Fund. Yet, until the 1980s, practical initiatives concerning social policy from the European Commission remained limited; this only really changed after 1989 with the proposal for a European Charter of Social Rights, which initiated various policy debates and was partially incorporated into the Treaty of Amsterdam, which referred to ‘fundamental social rights’ (art. 136 EC, see also Maduro, 1999). In this context, social rights were presented as a component of eligibility rules that would allow for migration within the EU and the standardisation of national social security systems, constitutive of a ‘market making’ rather than a ‘market breaking’ imperative (Katrougalos, 2007; Maduro, 1998). Far from a traditional welfare model with comprehensive social redistribution mechanisms, the EU’s social legislation set up minimal common standards (Demertzis, 2011). Social rights eventually became inscribed in the European Charter of Fundamental Rights under the category described by Menendez (2003) as ‘rights to solidarity’. Most recently, the UK’s exit from the EU (Brexit) coincided with the adoption of a so-called European Pillar of Social Rights—a non-binding instrument that proclaims different rights related to equal opportunities, access to the labour market, fair working conditions, social protection and inclusion (Plomien, 2018). We now turn to the relationship of such rights with emerging technologies.

Optimisation technologies and social rights

Whilst there is widespread recognition that the rapid development and deployment of data-centric technologies has significant transformative implications, the question as to what these are and how they should be addressed is still a point of contention. Initial concerns about the mass collection of data have tended to focus on issues of surveillance and privacy, prominent in public debate particularly in the immediate aftermath of the Snowden leaks in 2013 (Hintz et al., 2018). These events made clear the limitations of existing legislation and fed into a long-standing discussion about the need for further protection of privacy and personal data and better oversight in the handling and processing of data by both corporate and state actors (Lyon, 2015). Some of these concerns have subsequently been translated into the 2018 General Data Protection Regulation (GDPR), intended to give a new impetus to the protection of fundamental rights in the context of dynamically developing digital technologies and services (de Hert & Papakonstantinou, 2016).

The focus on privacy has been particularly dominant in relation to optimisation technologies, but there has also been a growing emphasis on issues such as harmful profiling, automated sorting, and biases embedded in data and algorithms that lead to forms of discrimination (Gandy, 1992). Both privacy and non-discrimination have become significant organisational concepts for policy debates on optimisation technologies. Yet in assessing the transformative potentials of such technologies, both privacy and non-discrimination policies also have limitations (Mann & Matzner, 2019; Schermer, 2011). In part, the way these priorities have been operationalised has been critiqued for lending itself to design solutions that seek remedies in efforts such as ‘privacy-by-design’ or bias mitigation that, although useful, rarely address the contextual nature of technologies or their operative logics (Powles, 2018; Hoffmann, 2019). Furthermore, a more ‘holistic evaluation’ of the impact of optimisation technologies has been said to be needed in order to consider international human rights law in earnest (McGregor et al., 2019). This requires a broader suite of considerations that go beyond citizens’, political and consumer rights. Moreover, a focus on individual rights struggles to account for the structural transformations that are brought to bear with the advent of optimisation technologies. These different concerns have particular relevance for areas that have historically been central to the European social model, such as the role of labour and the protection of the welfare state. Whilst not obviously part of mainstream rights-based approaches concerned with data and computational infrastructures, these areas are receiving increasing attention in discussions on automation and AI (Dencik, 2021).

One of the most prominent themes in this regard is the growing orientation towards the so-called ‘future of work’, which has often focused on anxieties about the automation of work, potential mass job losses, wage reductions or global workplace restructuring (Arntz et al., 2016; Frey & Osborne, 2017). These discussions have provided impetus for new policy initiatives focused on redistribution and income guarantees, such as a universal basic income and public services or new wage policies (Standing, 2016; Portes et al., 2017; McGaughey, 2018). At the same time, debates about the impact of emerging technologies on actual job quality and the position of workers are also a growing focus, such as the impact of algorithmic management or increased workplace surveillance (Stefano, 2018; Wood, 2021). The focus on precarity at the intersection of optimisation technologies and work has also informed debates on the future of the welfare state more broadly. This question encompasses not only ways to secure workers’ rights or income guarantees, but increasingly focuses on the ways in which data infrastructures are shaping public services, including eligibility checks, risk assessments, and profiling (Dencik & Kaun, 2020; AlgorithmWatch, 2019; Eubanks, 2018). In his report to the General Assembly, the UN Special Rapporteur on extreme poverty and human rights, Philip Alston, describes these developments as the advent of the ‘digital welfare state’ that is already a reality or is emerging in many countries across the globe. In these states, ‘systems of social protection and assistance are increasingly driven by digital data and technologies that are used to automate, predict, identify, surveil, detect, target and punish’ (Alston, 2019, n.p.). Such systems have frequently been implemented in a context of spending cuts, reduction in services and new behavioural requirements, whilst at the same time being perceived as void of policy implications, which exempts them from much scrutiny or public debate (Alston, 2019).

These different areas of concern point to the relevance of social rights in the context of datafication and the advent of optimisation technologies, even if they are rarely directly addressed. While privacy and data protection across work and welfare have been part of this debate, social rights as a constructive frame have seldom been a dominant focus. It remains unclear how these can effectively be translated into policy debates and shape legislative agendas in relation to data infrastructures and emerging technologies. As a way to explore this further, we now turn to the recent policy debate on AI in the EU.

The case of European artificial intelligence policy

Over the last few years, the EU has been actively engaging in a range of policy initiatives that have focused on the development of AI within Europe and includes investments and financial policies, regulation of AI systems, international cooperation and other activities. Importantly, European AI policy should be seen as part of a larger ecosystem of institutional and legal interventions regarding communications and digital technologies that has a long history and dates back to the early 1970s (Mărcuț, 2017). It is not the intention to detail these here, but it is worth noting that the interest in AI started to gain traction in 2017 and 2018 with the adoption of the first communications of the European Commission and resolutions of the European Parliament on AI (see Niklas & Dencik, 2020). This has continued with publications from the High-Level Expert Group on Artificial Intelligence in 2019 on ethics guidelines, a five-year plan on digital policy from the European Commission titled Shaping Europe’s Digital Future, published in 2020, and other strategic documents such as the White Paper on AI and the recently published AI draft regulation (European Commission, 2020b and 2021a). All these different institutional, legal and budgetary efforts together constitute what Ryan Calo (2017) refers to as ‘AI policy’, a distinctive area of policymaking that addresses different challenges tied to AI and similar technologies, including justice and equity, safety and certification, privacy and power dynamics, taxation or displacement of labour.

Within Europe, AI policy plays out along the lines of what Jasanoff (2009) describes as the dualistic nature of liberal state interventions in technology and innovation: informed, on the one hand, by a principle of public funding for research that grants significant autonomy to scientists and, on the other, by a recognised need for regulatory intervention before new products enter the market. This dynamic is evident, for example, in discussions concerning tensions between the need for binding legislation and business-preferred ethical principles and soft guidelines (Wagner, 2019).

Among the documents that make up European AI policy is the White Paper published in February 2020 as part of the five-year strategy Shaping Europe’s Digital Future. White papers initiate debate in a given area, contain ideas for possible actions (sometimes outlining several options) and are used for consultation with stakeholders and institutions before legislative proposals are formulated (Overy, 2009). The scope of the White Paper on AI is broad and covers legislative, financial, educational and scientific activities. It outlines a strategy containing goals and concrete action plans, together with an estimated timeline for their implementation. It is not our aim to provide a comprehensive review of the White Paper here, but it is worth highlighting a few noteworthy aspects that inform our analysis.

AI is defined through its main components: algorithms and data. The two pillars of the European strategy are the so-called ‘ecosystems’ of excellence and trust. The ecosystem of excellence includes strategies for funding and economic growth, research support and incentives for the adoption of AI systems by the public and private sectors. The ecosystem of trust focuses on the risks that AI systems create for fundamental rights, product safety and liability in what is considered a risk-based approach. Such an approach entails an assessment of ‘high’- and ‘low’-risk applications that should inform interventions and requirements, e.g. obligations to keep records of data, quality requirements for training models and transparency rules for consumers. The White Paper also makes suggestions for voluntary labelling schemes, conformity assessments and new governance structures that involve cooperation between national authorities.

The articulation of rights in the White Paper primarily concerns privacy, personal data protection, consumer rights and non-discrimination. The emphasis on non-discrimination distinguishes the AI policy from many existing policy discourses on rights and technology that have prioritised privacy and personal data, leaving discrimination issues aside (Mann & Matzner, 2019). It is important to note that discrimination in the White Paper is primarily interpreted as a problem of bias, data quality and specific technological architecture. The paper also notes that AI systems can support ‘the democratic process and social rights’, but there are no further mentions of such rights beyond rare references to healthcare, public services or employment. For example, the White Paper refers to discrimination ‘in access to employment’, ‘the rejection of an application for social security benefits’ or the use of AI systems to ‘improve healthcare’.

Whilst the White Paper serves as an illustration of regulatory approaches to AI and a proposed institutional framework for research and innovation in this area, it is also indicative of a wider set of discourses that are part of asserting the meaning of the European project and how the EU seeks to define itself. As Jasanoff (2007, p. 92) notes in relation to the EU’s biotechnology policy, policies on technology ‘became a site of interpretive politics, in which important elements of European identity were debated along with the goals and strategies of European research’. Similarly, the White Paper on AI Strategy makes frequent references to notions such as ‘European values’, ‘European data’ and ‘digital sovereignty’ that denote a close connection between narrower regulatory and funding initiatives and a broader articulation of the EU’s geopolitics and vision for the relationship between European institutions and citizens. This is the case not least in its positioning as an alternative to the ‘surveillance capitalism’ of the US and the ‘technological authoritarianism’ of China (European Commission, 2020a). In this sense, the White Paper reveals a certain set of priorities. Yet in order to understand the AI policy debate in broader terms it is important to engage with the different stakeholder interests and concerns that shape this debate. As a way to further explore how social rights feature in the AI policy debate, we therefore now go on to examine stakeholder perspectives with regard to the White Paper.

Methods

In order to examine the place of social rights in the EU’s AI policy debate, we conducted a qualitative content analysis of documents submitted to the public consultation on the White Paper on AI Strategy (European Commission, 2020d). The process of public consultation in the European Union invites various social actors, such as non-governmental organisations, trade unions, enterprises and academics, to participate in the policy or regulatory process. These consultations are intended to make policy-making more democratic and sensitive to the voices of civil society, and to increase the legitimacy of new political decisions (Rasmussen & Toshkov, 2013). However, they have also been accused of prioritising the involvement of particular groups of actors and of requiring specific expertise, which places limitations on their results (Persson, 2007). They are also bound by particular structures, such as online consultations that often use standardised questionnaires, shaping the extent of problem-definition and inclusivity (Quittkat, 2011). This is a significant aspect to consider in the analysis of any public consultation process and informs some of the conclusions we are able to draw.

Organised and structured by the European Commission, the public consultation on AI (which ran from February to June 2020) attracted a very high number of contributions (1,215) from individual citizens, business organisations, trade unions, civil society and academia (European Commission, 2020d). All contributions were published on the official EU webpage and consist of two types of content: a) answers to the online questionnaire, and b) policy papers, briefs and other materials attached to the submission. Contributors could submit either or both. The variety of actors that provided submissions offers an opportunity to examine the different values, priorities, interests and narratives that form part of the AI policy debate as articulated in the documents submitted. To make such a qualitative analysis possible we created a sample of a cross-section of actors representing different areas of interest, focusing on organisations and groups rather than individual citizens as most representative of different stakeholder perspectives. Sampling comparably across each type of actor, we ended up with submissions from 74 organisations in total, selected with regard to diversity, the nature of the contribution, and relevance to social rights concerns (Tab. 1). The last factor was determined by an initial keyword search (Tab. 3). We also prioritised those organisations that, apart from answering the questionnaire, attached additional opinions, briefs and reports. Such an approach allowed us to analyse a richer data set containing more extensive evaluations of the policy proposals, values and recommendations of the particular organisations. For the analysis, we also included four ‘opinions’ created by European agencies, such as the European Data Protection Supervisor, that were not part of the public consultation process but are part of the White Paper debate.

Table 1: Submissions in White Paper consultations (by type)
Type of actors | Number of analysed submissions (by actors) | Number of total submissions (by actors)
Companies | 15 | 222
Research organisations (academia and think tanks) | 15 | 152
NGOs | 12 | 138
Business associations | 12 | 130
Trade unions | 10 | 22
Public authorities | 10 | 73
Citizens | – | 406
Others | – | 72
Table 2: List of organisations cited in the article
Acronym/Organisation | Full name
AMI | The International Association of Mutual Benefit Societies
Amnesty Int. | Amnesty International
AN | Access Now
EDF | European Disability Forum
EDRi | European Digital Rights
EFPIA | European Federation of Pharmaceutical Industries and Associations
EPHA | European Public Health Alliance
EPSU | European Federation of Public Service Unions
EUROCITIES | –
EWL | European Women’s Lobby
FRA | Fundamental Rights Agency of the European Union
Government of Ireland | –
industriAll | –
LO | Landsorganisationen i Sverige
NJCM | The Dutch section of the International Commission of Jurists
OGB | Österreichischer Gewerkschaftsbund
REIF | Représentations des Institutions Françaises de sécurité sociale
HCAI | Institute for Human-Centered Artificial Intelligence, Stanford University
UGICT | Union générale des ingénieurs, cadres et techniciens CGT
UNI | Union Network International-Europa

We conducted a thematic data analysis, following the six steps recommended by Braun and Clarke (2006) and using qualitative data coding software (NVivo). First, we identified prominent concepts and initial findings. Second, based on this first reading of the collected data and previous research on social rights and optimisation technologies, we developed a list of codes that summarise and capture the crucial aspects of the given concepts. These codes were assigned to particular sentences or larger segments of text. Initial codes were then defined and grouped in a way that helped identify connections between them. They focused on different aspects of the texts: descriptions of particular phenomena, normative statements about the role of technology in society, or recommendations for new laws or budget policies regarding AI. We ended up with a group of codes that were focused on particular problems and represent four areas of interest: a) social rights and policies (access to public services, work and employment, welfare administration), b) human rights and justice (discrimination, privacy, due process, transparency), c) narratives about AI systems (beneficial, critical) and d) approaches to European AI policy (critiques, recommendations, approval). After analysing the materials from each group of actors participating in the consultation, we prepared a summary for that group. Summaries covered the role of human rights in the documents, political recommendations, issues related to social policies, and the general approach to AI. These summaries and the comparisons between them also allowed us to capture significant differences between specific actors participating in the consultation, e.g. between NGOs and companies.

Importantly, drawing on an interpretative policy analysis approach, we understand policy debates as a set of discourses constituting a conglomerate of various narratives, frames and understandings, where policy issues such as rights, regulations or institutions are seen as social constructs (Hajer, 1993). In this sense, we also approach rights as discursive and sociological rather than legal phenomena and are less interested in the legal interpretations and normative content of specific rights. We predominantly want to explore how rights and ‘rights talk’ build political discourses, set priorities and indicate decisions about values.
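
For readers interested in the structure of the coding scheme, the grouping of codes into the four areas of interest can be illustrated schematically. The sketch below is purely illustrative and is not the authors' actual NVivo workflow; the code names, actor labels and example data are hypothetical stand-ins for the codebook described above.

```python
from collections import Counter

# Illustrative mapping of individual codes to the four areas of interest;
# the real codebook was developed iteratively in NVivo.
CODEBOOK = {
    "access to public services": "social rights and policies",
    "work and employment": "social rights and policies",
    "welfare administration": "social rights and policies",
    "discrimination": "human rights and justice",
    "privacy": "human rights and justice",
    "due process": "human rights and justice",
    "transparency": "human rights and justice",
    "AI as beneficial": "narratives about AI systems",
    "AI critique": "narratives about AI systems",
    "policy critique": "approaches to European AI policy",
    "policy recommendation": "approaches to European AI policy",
    "policy approval": "approaches to European AI policy",
}

def summarise_by_theme(coded_segments):
    """Tally coded text segments per actor type and thematic area.

    `coded_segments` is a list of (actor_type, code) pairs, e.g. exported
    coding references from qualitative analysis software.
    """
    tallies = Counter()
    for actor_type, code in coded_segments:
        theme = CODEBOOK.get(code, "uncategorised")
        tallies[(actor_type, theme)] += 1
    return tallies

# Hypothetical example: two coded segments from an NGO submission and one
# from a trade union submission.
example = [
    ("NGO", "discrimination"),
    ("NGO", "policy recommendation"),
    ("trade union", "work and employment"),
]
print(summarise_by_theme(example))
```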

Findings

As a way of outlining how social rights feature in the consultation on the EU’s White Paper on AI Strategy, we start by briefly describing the structure of the online questionnaire used in the consultation and the results of our search for keywords relating to fundamental rights and policies in the answers to that questionnaire (Tab. 3).

The questionnaire was divided into three sections, with a total of 16 closed questions, 10 open questions and additional space for comments (European Commission, 2020b). Each participant could also provide additional documents such as policy briefs, reports or more elaborate position papers. Section one included questions related to the ‘ecosystem of excellence’ and covered issues such as support for the development and uptake of AI, research excellence, and financing for start-ups. Section two concerned the regulation of AI and section three raised questions about safety and liability. As part of these two sections, a limited number of questions pertained to human rights, including answer options such as ‘AI may breach fundamental rights’ or ‘The use of AI may lead to discriminatory outcomes’, and one question referred to workers’ rights. In this sense, the questionnaire provided limited scope for human rights concerns to be raised and made no overt reference to social rights.

The analysis of responses to the questionnaire (especially the open-ended ones) using keyword searches shows that human rights were still an important part of the consultation. When writing about potential threats and problems, participants noted violations of human rights in general terms, and of privacy and non-discrimination in particular. Social and labour rights were very rarely included in the responses. Keyword searches specifically related to social policies show that mentions of healthcare or education were most prominent, with work less so and social security or protection almost entirely absent. Whilst this may illustrate certain priorities, it may also reflect a focus on AI-related educational skills and healthcare innovation.

Table 3: Keywords relating to rights and policies in answers to the online questionnaire as part of the public consultation on the White Paper on AI Strategy
Keyword | Frequency
Human rights and social rights
Social rights/ social and economic rights/welfare rights | 1
Labour rights/ workers rights | 2
Collective bargaining | 4
Human rights | 217
Fundamental rights | 249
Privacy | 226
GDPR | 186
Data protection | 147
Discrimination | 196
Social policies and employment
Workers | 49
Welfare | 8
School | 18
Trade union | 38
Housing | 8
Healthcare | 129
Social security/protection | 1
Welfare state | 1
Education | 108
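
The frequencies in Table 3 were produced from the textual answers to the questionnaire. As a rough illustration of the kind of search involved (not the authors' actual tooling; the file path and the abbreviated keyword list below are hypothetical), a case-insensitive frequency count over exported consultation responses could be sketched as follows:

```python
import csv
import re
from collections import Counter
from pathlib import Path

# Hypothetical location of the exported consultation answers (one text file
# per submission); the real data set is available from the Commission's
# consultation page and is not reproduced here.
SUBMISSIONS_DIR = Path("consultation_answers")

# Abbreviated keyword list mirroring the categories in Table 3; the published
# analysis used a longer list covering rights and social policy terms.
KEYWORDS = [
    "social rights", "labour rights", "collective bargaining",
    "human rights", "fundamental rights", "privacy", "GDPR",
    "data protection", "discrimination", "workers", "welfare",
    "healthcare", "social security", "education",
]

def count_keywords(texts, keywords):
    """Count case-insensitive occurrences of each keyword across all texts."""
    counts = Counter()
    for text in texts:
        lowered = text.lower()
        for kw in keywords:
            # word-boundary match avoids counting occurrences inside longer words
            pattern = r"\b" + re.escape(kw.lower()) + r"\b"
            counts[kw] += len(re.findall(pattern, lowered))
    return counts

if __name__ == "__main__":
    texts = [p.read_text(encoding="utf-8") for p in SUBMISSIONS_DIR.glob("*.txt")]
    frequencies = count_keywords(texts, KEYWORDS)
    with open("keyword_frequencies.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["Keyword", "Frequency"])
        for kw, freq in frequencies.most_common():
            writer.writerow([kw, freq])
```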

This initial analysis indicates, in simplified terms, some of the priorities in the discussion on the White Paper. To further explore the place of social rights in the EU's policy debate on AI, we next draw on our qualitative analysis of the submissions and present four central themes that emerged from it. The first theme engages with the privileging of human rights in discussions on AI, whilst the second shows how rights are operationalised in the context of the dual efforts of strategic investment and a risk-based approach. The final two themes focus particularly on how the intersection between social rights and technology is understood in relation to two policy areas: workplace relations and public services.

Human rights as a starting point

References to human rights and fundamental rights were very prominent in the submissions. All NGOs and trade unions, and most research institutions and public authorities, privileged a concern with human rights, with business organisations referring to them less. For some organisations human rights were an important starting point and normative basis for regulatory intervention and technological development, as stated by the Fundamental Rights Agency of the European Union (FRA): ‘fundamental rights frameworks and other legal commitments are the best starting point for any evaluation of the opportunities and challenges brought by new technologies’ (FRA, 2020, p. 1). In this context, rights were often a manifestation of certain normative claims that are part of delineating an understanding of what ‘European values’ are.

Whilst references to rights in the submissions encompassed a wide range of rights and freedoms relevant to the impact of AI, there was a particular focus on data protection and non-discrimination (mentioned in more than 40 contributions), in line with previous policy debates on emerging technologies. At the same time, most submissions referred to fundamental rights in general terms, without explaining specific challenges or naming particular rights. Where specific challenges were expressed, these were most prominent in contributions from migrant groups and organisations representing people with disabilities, women, ethnic minorities, or the elderly. Many submissions noted that AI discrimination is different from other, non-technical forms of discrimination, as for example outlined by EDRi: ‘due to greater scales of operation, increased unlikelihood that humans will challenge its decisions (automation bias), and lower levels of transparency about how such decisions are made’ (EDRi, 2020, p. 5). While often focused on the issue of biases and data processing, some organisations also explained that technologies may lead to discrimination because they are applied to certain groups, sectors of society or ‘problem districts’ (NJCM, 2020, p. 9).

We found explicit references to social rights in 15 of the submissions analysed; these engaged with the framework of rights to health, social security or work in the context of the use of AI systems. Such references came from organisations such as those representing healthcare insurance institutions, which noted: ‘AI should not have a negative impact on the social rights guaranteed by the Pillar of Social Rights’ (AMI, 2020, p. 8). Other organisations, like trade unions, NGOs representing people with disabilities, or associations representing social security organisations, referred to the need to respect labour rights or to provide better safeguards for workers or social security recipients. Several of these submissions addressed those issues exclusively through the frame of non-discrimination. For example, an organisation focused on healthcare suggested that ‘machine-generated decisions could potentially exacerbate existing health inequalities, discrimination and exclusion’ (EPHA in European Commission, 2020e, n.p.). Other submissions noted that algorithm-driven discrimination could affect access to public transportation, employment or social security, predominantly concentrating on ‘inherent bias embedded into software’ (EWL, 2020, p. 3) or inadequate consideration of certain groups: ‘An AI based solution for transport services will most likely dismiss the way which persons with disabilities travel’ (EDF, 2020, p. 4). Social rights were also referenced in contrasting terms that stressed how AI systems may be beneficial for advancing social rights, such as the submission from the Irish government, which stated: ‘AI-based diagnostic systems will improve living standards and quality of life’ (Government of Ireland, 2020, p. 16).

Operationalising human rights: from accountability to public investment

The engagement with rights language is not only indicative of normative priorities, but also suggests specific policy initiatives. For example, most submissions from NGOs in our sample argued for new requirements to increase transparency or accountability, such as ‘mandatory human rights impact assessments’ (EDRi, 2020, p. 12), a ‘disclosure scheme for AI/ADM systems deployed in the public sector’ (AN, 2020, p. 7) and ‘clear measures for enforcement’ (Amnesty Int., 2020, p. 3), as well as bans on particular technologies: ‘the EU must establish red lines to ban applications of AI which are incompatible with fundamental rights’ (AN, 2020, p. 8). Some of those instruments create direct links with social rights, such as the proposal for a risk assessment that includes ‘social discrimination, and impact on working conditions’ (UGICT, 2020, p. 10) as a response to the question of how to give human rights more concrete meaning in the development of AI. With regard to business organisations, rights were often operationalised in terms of particular organisational and technical procedures focused especially on biases. Google, for example, explained how discrimination is addressed within its operations, ‘from fostering an inclusive workforce that embodies critical and diverse knowledge, to assessing training datasets for potential sources of bias, to training models to remove or correct problematic biases’ (Google, 2020, p. 21).

When it comes to investment efforts, human rights concerns were highlighted by NGOs (at least seven) as a necessary inclusion to ensure trust: ‘Ecosystem of excellence must include trust’ (EDRi in European Commission, 2020e, n.p.). In particular, NGOs, trade unions or research institutes advocated for greater participation or evaluation methods that included fundamental rights, such as the suggestion from EWL that investing in and developing technology should include ‘gender budgeting, impact assessments and well-funded monitoring frameworks’ (EWL, 2020, p. 4). Beyond these procedural safeguards, some organisations also engaged with the question of how decisions about resource allocation should be made: ‘initiatives on research should ensure that the public interest is taken into account and that priorities are not simply set by the private sector but by broader social and environmental policy objectives’ (EPSU in European Commission, 2020e, n.p.). Relatedly, some saw public investment as an opportunity to challenge a ‘surveillance-based business model’ (Amnesty Int., 2020, p. 4) and data monopolies, and made suggestions for ‘mandatory nonexclusive licensing of machine-collected data’ (industriAll, 2019, p. 5) or ‘legislative action to ensure access and use of business to government (B2G) data sharing’ (EUROCITIES, n.d., p. 1). These discourses are indicative of a perceived role for the public sector in technological innovation as a way of ensuring fundamental rights.

Employment: from the automation of jobs to algorithmic management

References to social rights in the submissions centred on two main areas: employment and public services. With regard to the intersection of AI and employment, the most engagement came, unsurprisingly, from trade unions, which focused particularly on transformations in the labour market: ‘AI and robotics significantly impact the labour market and the way of working, not only because older jobs and tasks transform or disappear, and new ones emerge but also because of change in the nature of human work in relation to AI systems’ (UGICT, 2020, p. 1). Two of the trade union submissions also highlighted the particular impact this would have on different groups of workers, such as blue-collar workers and women. In response, many of the submissions (including all trade unions and some business associations) called for educational programmes and re-skilling, such as the American Chamber of Commerce, which wanted ‘significant investments in education, life-long learning and reskilling to ensure our workforce is ready for the jobs of tomorrow’ (AmCham, 2020, p. 3). For trade unions such changes are essential for the ‘transition to a fair workplace of the future’ (UNI, 2020, p. 1), and they called for an ‘individual worker’s right to training, preferably guaranteed by collective agreements’ (industriAll, 2019, p. 6). The focus on automation also engaged with resource distribution, such as the call for a ‘European transition fund to support those workers and regions negatively impacted by AI and more generally by the digitalisation of industry’ (industriAll, 2019, p. 6) or the need for ‘transferable benefits and commit[ment] to increasing support for those navigating the future of the labor… like universal basic income and adaptive social safety nets’ (HCAI, 2020, p. 16).

In addition to the restructuring of the labour market, the submissions also focused on the impact of AI on management and working conditions, where rights-based approaches were particularly prominent in questions of data governance, privacy, worker surveillance and algorithmic decision-making. All of the analysed submissions from trade unions highlighted that AI systems facilitate the possibility to ‘supervise all workers, permanently, and to detect all occasions of noncompliance with prescriptions, in real time’ (industriAll, 2019, p. 2). Automated systems for hiring, firing and performance-related decisions can ‘deprive the worker from any possibility to discuss, present arguments to support their case and gain redress’ (industriAll, 2019, p. 3). Responses ranged from a focus on data protection, ‘[workers] must have the right to control the personal data that AI has generated about them’ (OGB, 2020, p. 3), to a greater role for unions in the implementation of AI systems: ‘Negotiating the algorithm should become a real practice’ (UNI, 2020, p. 1). Suggestions here involve collective bargaining agreements that cover data governance issues, but with the recognition that meaningful participation and social negotiation require institutional support and capacity building for trade unions. Some unions also saw a need for more prominent engagement with ethical concerns in the development of technology, advocating for tech workers’ ‘right to know what they are building and to contest unethical or harmful uses of their work’ (UGICT, 2020, p. 9), whilst others suggested greater regulation of the use of AI in the workplace, calling for a ‘system of regulation of the application of AI technologies for employment and management decisions’ (HCAI, 2020, p. 17).

Public services: providing access to benefits, healthcare and education

The other significant area of engagement with social rights was in relation to AI and public services. A diverse range of actors (business associations, NGOs, research institutions) referred in their submissions to the way automated systems are used by the public sector in areas like social security, healthcare or education. Only six submissions linked those issues with a language of rights, although they did provide an indication of the normative expectations for AI in those areas, predominantly seeing AI as advancing social rights. For example, in describing the use of AI in public administration, some noted that AI can ensure ‘health workers spend their limited time in the most productive way’ (EPHA, 2019, p. 3), provide ‘better, faster and more customised care to patients’ (EFPIA, 2020, p. 1), ‘support and improve decision making’ (REIF, 2020, p. 3), ‘help to inform policy direction and actions’ (Government of Ireland, 2020, p. 11) or even ‘assist in deploying resources in a more accurate, strategic, and affordable way’ (AMI, 2020, p. 2). Some submissions also highlighted that AI and intensive data processing may bring particular benefits for ‘the most underserved and marginalised groups’, where ‘linking of relevant administrative datasets of homeless people using different social welfare and health services could enable better observational studies, predictive analytics (e.g. service-use patterns)’ and ‘… help illuminate the complex, intersectional reality of discrimination and exclusion’ (EPHA, 2019, p. 8).

At the same time, some submissions also noted the challenges of implementing AI systems in public administration. One NGO stated: ‘(automatic) “optimization” of medical resources and waiting times for medical procedures…[may be] of high-risk to the fundamental rights of patients (right to health, right to life)’ (NJCM, 2020, p. 16). With regard to a risk-based approach, several of the submissions therefore suggested that public services should fall into the high-risk category of AI applications: ‘Social security as a whole must be defined as a high-risk sector by virtue of its primordial nature in the life of all Europeans’ (REIF, 2020, p. 3). Other policy recommendations included legal bans on the ‘use of AI to solely determine access to or delivery of essential public services (such as social security, policing, migration control)’ (AN, 2020, p. 9). Similarly, another submission cautioned against any general deployment of AI: ‘(t)he uptake of any technology, particularly in the public sector, should not be a standalone goal or value in itself’ and ‘AI solutions must be evaluated against non-AI approaches’ (AN, 2020, p. 3). Interestingly, one submission raised the question of how AI impacts on more fundamental principles associated with public services: ‘What happens to the values that are built into our welfare systems when, for example, processes are automated?’ (LO, 2020, p. 2), whilst another put this into the context of how investment is carried out: ‘Public services need to be able to control the introduction of AI and make the necessary investment without being led or constrained by the private sector’ (EPSU in European Commission, 2020e, n.p.). As noted above, these comments speak to the perceived close association between a strong public sector and the safeguarding of social rights.

The place of social rights in the EU’s AI policy debate

The White Paper on AI and the submissions to the public consultation provide a useful indication of the different priorities and interests that are shaping the AI policy debate in Europe. When it comes to the question of social rights, it is noteworthy that their place in current AI policy is limited. The White Paper does not lack a ‘rights language’; however, the clear priority remains privacy, various transparency safeguards and specific understandings of non-discrimination. It is also in relation to non-discrimination that we see most engagement with social rights, such as unequal access to public services and care. Furthermore, submissions to the public consultation on the White Paper on AI Strategy do illustrate significant normative expectations of the role of the public sector in relation to AI innovation and a continued emphasis on employment regulation, both of which are indicative of the sustained relevance of the European social model in defining ‘European values’. This has especially been a result of trade unions and some NGOs beginning to engage more in policy debates surrounding optimisation technologies.

This framing of priorities is reflective of the wider EU AI policy debate. For example, the 2018 communication ‘Artificial Intelligence for Europe’ demonstrated little engagement with social rights or social policies, but did outline an agenda to address changes in labour markets caused by AI, including ‘ensuring access for all citizens, including workers and the self-employed, to social protection, in line with the European Pillar of Social Rights’ (European Commission, 2018, p. 11). The High-Level Expert Group on Artificial Intelligence, in its discussion of a commitment to fundamental rights, does not engage much with social rights, but mentions workers in the context of the need for consultation on AI and power imbalances (HLEG on AI, 2019). On the other hand, it is worth adding that digital technology has been highlighted in recently adopted documents on European social policy. For example, ‘A Strong Social Europe for Just Transitions’ noted that AI will generate structural changes in the job market and supports the advancement of digital skills, a commitment to developing digital technologies in ways that avoid ‘new patterns of discrimination or new risk to workers’ physical and mental health’, as well as improved working conditions for platform workers (European Commission, 2020c, p. 9). Similarly, in the Action Plan for the Pillar of Social Rights, the post-pandemic recovery package encompasses an agenda on the digitisation of the workplace, including ‘issues related to surveillance, the use of data, and the application of algorithmic management tools’, setting a minimum standard on the right to disconnect, implementing a Digital Education Plan and making social security fit for technological change (European Commission, 2021b, p. 19).

There are many different ways in which we might explain this limited conversation about social rights in the EU’s AI policy debate. First of all, social rights hold an awkward position in European integration, also in relation to the historical trajectory of the welfare state and the broader discussion on European identity and values (Katrougalos, 2007; Dodo, 2014). Whilst a social agenda within the EU has evolved over decades, the notable priority given to market creation advances a political environment that makes some debates possible and others not. Social rights occupy a controversial place in policy debates, which makes them a less favourable frame for actors that prioritise individual freedoms or lack expertise in areas of social welfare or employment. Furthermore, the character of the policy process on technology prioritises the regulation of risks and the allocation of resources for innovation as its main concerns (Jasanoff, 2009). Such a focus favours procedural and budgetary questions rather than, for example, the character of work or the sustainability of public services. It also prioritises certain kinds of actors and language that can engage with these priorities. With regard to civil society, for example, this means that very often actors with a particular techno-centric focus tend to respond to policy consultations and play an essential role in setting the agenda (Gangadharan & Niklas, 2019). This has played out in a framing of issues that privileges data protection and non-discrimination as the dominant human rights concerns, as evidenced in our analysis. Both of these issues have become widely recognised as spaces for policy intervention that engage with questions of data processing, algorithmic bias and the transparency of computational models. This specific nature of the discussion on technology policy also undoubtedly influenced the public consultation on the White Paper, which from the very beginning provided limited space for engagement with social rights.

Moreover, setting priorities in rights discourses is a political matter, often associated with a broader economic and political context. This also means that how issues are understood creates certain parameters for the nature of responses. For example, the discussion of discrimination in AI debates, which has tended to favour a focus on data and algorithmic bias, has led to concerns about the presence of ‘happy talk’ on inclusion and diversity (Benjamin, 2019) and about the drive towards an atomistic and techno-centric response to automated inequality (Hoffmann, 2019). These outcomes can be the result of many factors, including particular corporate involvement, the priorities of civil society or specific approaches to the topic in the media. Whilst rights-based approaches in general can be said to always have limitations (see also Hoffmann, 2020), the marginalisation of social rights within the EU’s AI policy debate should be seen as a political struggle over the meaning of ‘European values’ that goes beyond technology policy and touches upon the wider political priorities of the European project.

Nonetheless, social rights remain a relevant component of European integration and continue to be significant for addressing harmful market practices and for informing regulatory mechanisms (Kapczynski, 2019), even if, as Moyn (2018) points out, there is also a need to engage with bigger structural questions about institutions, mechanisms of redistribution, taxation and control over infrastructures, public money and funds. Social rights can indeed broaden the horizon of political struggles, introducing new dynamics into discussions about budgetary policies and the architecture of institutions, and reformulating the position of people who receive benefits or use healthcare and other public services (Yamin, 2008). Both Yamin and Kapczynski argue that, in relation to a ‘narrow understanding of human rights’, social rights play a significant role in confronting matters of political economy, can ‘articulate claims to public prerogatives and infrastructures’ and can reconstruct existing market mechanisms (Kapczynski, 2019). On this reading, social rights are integral to the creation of egalitarian social institutions, which gives them renewed relevance in light of neoliberal marketisation and widespread austerity agendas.

Conclusion

The policy debate on AI within Europe provides significant insights into how ‘European values’ are being constructed and what priorities are shaping approaches to technology innovation and regulation. Concerns about the turn to data infrastructures across areas of social life have tended to focus on particular human rights issues, such as privacy and more recently non-discrimination, which are often translated into design solutions or procedural safeguards. At the same time, funding and intervention in the advancement of technology have been informed by an overarching commitment to the creation of a common market that can compete globally. These dynamics continue to play out in current AI policy debates. Although the characteristics of the ‘European culture of justice’ have historically been associated with a social model that contrasts with other parts of the world (most notably the US) through its commitment to employment regulation and access to public services, an engagement with social rights in the context of emerging technologies has been largely absent or limited at best. Despite a growing recognition of the significance of social rights in addressing the impacts of AI advancements, they continue to occupy a marginal and awkward position in the EU’s policy debates.

Yet certain openings for a discussion on social rights are emerging, particularly around questions of the future of work (including automation) and the use of optimisation technologies in the public sector, healthcare or education. Often this is bound up with an emphasis on non-discrimination. As we have seen, the increased involvement of trade unions and some NGOs that have not traditionally been prominent in policy discussions on technology has meant that there is an emerging, albeit limited, engagement with social rights concerns in the most recent consultation on the White Paper on AI Strategy, particularly in relation to transformations in work and in public administration. Whilst these concerns speak to the continued relevance of the European social model, they rarely translate into a social rights frame that can effectively be operationalised in relation to AI, relying instead on design solutions or procedural safeguards. Rather, interests in redistribution and equality may need to engage with structural changes that involve the power relations of institutions, political economy and broader forms of governance not easily captured by rights-based approaches. Insisting on such an engagement as part of establishing any ‘European values’ in relation to technology that claim a commitment to (data) justice will continue to be a huge challenge.

References

AlgorithmWatch. (2019). Automating society: Taking stock of automated decision making in the EU [Report]. AlgorithmWatch; Bertelsmann Stiftung. https://algorithmwatch.org/wp-content/uploads/2019/01/Automating_Society_Report_2019.pdf

Alston, P. (2005). Assessing the strengths and weaknesses of the European Social Charter’s Supervisory System. In G. Búrca, B. Witte, & L. Ogertschnig (Eds.), Social rights in Europe. Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199287994.003.0004

Alston, P. (2017). The Populist challenge to human rights. Journal of Human Rights Practice, 9(1), 1–15. https://doi.org/10.1093/jhuman/hux007

Alston, P. (2019). Report of the Special Rapporteur on extreme poverty and human rights (A/74/493). UN Special Rapporteur on extreme poverty and human rights. https://undocs.org/A/74/493

American Chamber of Commerce (AmCham). (2020). AmCham EU’s response to the White Paper on Artificial Intelligence.

Amnesty International (Amnesty Int). (2020). Amnesty International Submission to the European Commission’s Consultation on Artificial Intelligence.

Arntz, M., Gregory, T., & Zierahn, U. (2016). The risk of automation for jobs in OECD Countries: A comparative analysis (Working Paper No. 189; OECD Social, Employment and Migration Working Papers). Organisation for Economic Cooperation and Development. https://doi.org/10.1787/5jlz9h56dvq7-en

Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim code. Cambridge Polity Press.

Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101. https://doi.org/10.1191/1478088706qp063oa

Calligaro, O., & Foret, F. (2018). Analysing European values: An introduction. In O. Calligaro & F. Foret (Eds.), European values: Challenges and opportunities for EU governance. Routledge.

Calo, R. (2017). Artificial Intelligence Policy: A Primer and Roadmap. University of California, Davis Law Review, 51(2), 399–435. https://lawreview.law.ucdavis.edu/issues/51/2/Symposium/51-2_Calo.pdf

Committee On Economic, Social And Cultural Rights. (2008). General Comment no. 19. The right to social security (art. 9) (E/C.12/GC/19).

Demertzis, V. (2011). The European social model(s) and the self image of Europe (F. Cerutti & S. Lucarelli, Eds.). Routledge.

Dencik, L. (2021). Toward data justice unionism? A labour perspective on AI governance. In P. Verdegem (Ed.), AI for Everyone? Critical Perspectives (pp. 267–284). University of Westminster Press.

Dencik, L., & Kaun, A. (2020). Datafication and the welfare state. Global Perspectives, 1(1). https://doi.org/10.1525/gp.2020.12912

Dencik, L., Redden, J., Hintz, A., & Warne, H. (2019). The ‘golden view’: Data-driven governance in the scoring society. Internet Policy Review, 8(2). https://doi.org/10.14763/2019.2.1413

Dodo, M. K. (2014). Historical evolution of the social dimension of the European integration: Issues and future prospects of the European Social Model. L’Europe En Formation, 372(2), 51. https://doi.org/10.3917/eufor.372.0051

Eide, A. (2001). Economic, Social, and Cultural Rights as Human Rights. In A. Eide, C. Krause, & A. Rosas (Eds.), Economic, social, and cultural rights: A textbook (2nd revised). Nijhoff Publishers.

Esping-Andersen, G. (1990). The three worlds of welfare capitalism. Polity Press.

Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. St Martin’s Press.

Eurocities. (2020). People-centred Artificial Intelligence (AI) in cities. Response to EU’s white paper on AI.

European Commission. (2018). Communication from the Commission: Artificial Intelligence for Europe (COM/2018/237 final). European Commission. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52018DC0237&qid=1629380391776

European Commission. (2020a). Communication: A European strategy for data (Communication COM/2020/66 final). European Commission. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52020DC0066

European Commission. (2020b). Consultation outcome [Spreadsheet and documents]. White Paper on Artificial Intelligence - a European Approach. https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/12270-White-Paper-on-Artificial-Intelligence-a-European-Approach/public-consultation_en

European Commission. (2020c). White Paper on Artificial Intelligence—A European Approach. Public consultation. European Union. European Commission. https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/12270-White-Paper-on-Artificial-Intelligence-a-European-Approach/public-consultation_en

European Commission. (2020d). Communication from the commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions: A Strong Social Europe for Just Transitions (COM/2020/14 final). https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52020DC0014

European Commission. (2020e). White Paper on Artificial Intelligence—A European approach to excellence and trust (White Paper COM(2020) 65 final). European Commission. https://eur-lex.europa.eu/legal-content/en/ALL/?uri=CELEX:52020DC0065

European Commission. (2021b). Communication: The European Pillar of Social Rights Action Plan (COM/2021/102 final). https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=COM:2021:102:FIN

European Commission. (2021a). Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts (COM(2021) 206 final; 2021/0106(COD)). https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52021PC0206

European Digital Rights (EDRi). (2020). Rights-based Artificial Intelligence Regulation [Paper]. European Digital Rights. https://edri.org/wp-content/uploads/2020/06/AI_EDRiRecommendations.pdf

European Federation of Pharmaceutical Industries and Associations (EFPIA). (2020). EFPIA responses to the European Approach for AI.

European Public Health Alliance (EPHA). (2020). Moving Beyond the Hype. EPHA Reflection Paper on Big Data and Artificial Intelligence.

European Services Workers Union. (2020). UNI Europa ICTS specific responses to the Consultation on the White Paper on Artificial Intelligence.

European Union Agency for Fundamental Rights (FRA). (2020). Public Consultation on the European Commission’s White Paper on Artificial Intelligence – a European Approach: Contribution by the European Union Agency for Fundamental Rights.

European Women’s Lobby. (2020). Recommendations on Artificial Intelligence and Gender Equality.

Fabre, C. (2005). Social rights in European constitutions. In G. Búrca, B. Witte, & L. Ogertschnig (Eds.), Social rights in Europe. Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199287994.003.0002

Frey, C. B., & Osborne, M. A. (2017). The future of employment: How susceptible are jobs to computerisation? Technological Forecasting and Social Change, 114, 254–280. https://doi.org/10.1016/j.techfore.2016.08.019

Gandy, O. H. (1995). It’s discrimination, stupid! In J. Brook (Ed.), Resisting the virtual life: The culture and politics of information (pp. 35–47). City Lights Books.

Gangadharan, S. P. (2019). What do just data governance strategies need in the 21st century? [Keynote]. Data Power Conference, Bremen, Germany.

Gangadharan, S. P., & Niklas, J. (2019). Decentering technology in discourse on discrimination. Information, Communication & Society, 22(7), 882–899. https://doi.org/10.1080/1369118X.2019.1593484

Garben, S. (2020). Balancing social and economic fundamental rights in the EU legal order. European Labour Law Journal, 11(4), 364–390. https://doi.org/10.1177/2031952520927128

Google. (2020). Consultation on the white paper on AI – a European approach. Google’s submission.

Hajer, M. A. (1993). Discourse coalitions and the institutionalization of practice: The case of acid rain in Great Britain. In F. Fischer & J. Forester (Eds.), The Argumentative Turn in Policy Analysis and Planning (pp. 43–76). Duke University Press.

Hert, P., & Papakonstantinou, V. (2016). The new General Data Protection Regulation: Still a sound system for the protection of individuals? Computer Law & Security Review, 32(2), 179–194. https://doi.org/10.1016/j.clsr.2016.02.006

High-Level Expert Group on Artificial Intelligence. (2019). Ethics guidelines for trustworthy AI [Report]. European Commission. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai

Hintz, A., Dencik, L., & Wahl-Jorgensen, K. (2018). Digital citizenship in a datafied society. Polity.

Hoffmann, A. L. (2019). Where fairness fails: Data, algorithms, and the limits of antidiscrimination discourse. Information, Communication & Society, 22(7), 900–915. https://doi.org/10.1080/1369118X.2019.1573912

Hoffmann, A. L. (2020). Terms of inclusion: Data, discourse, violence. New Media & Society. https://doi.org/10.1177/1461444820958725

industriAll. (2019). Artificial Intelligence: Humans must stay in command [Policy brief].

Institute for Human-Centered Artificial Intelligence (HCAI). (2020). Input on the European Commission White Paper “On Artificial Intelligence – A European approach to excellence and trust”. Institute for Human-Centered Artificial Intelligence, Stanford University.

International Association of Mutual Benefit Societies (AIM). (2020). Artificial Intelligence: Great Potentials and Some Challenges for Healthcare.

Government of Ireland. (2020). Ireland’s National Submission to the Public Consultation on the EU White Paper on Artificial Intelligence.

Jasanoff, S. (2007). Designs on nature: Science and democracy in Europe and the United States. Princeton Univ. Press.

Jasanoff, S. (2009). Governing Innovation: The Social Contract and the Democratic Imagination. Seminar, 597, 18.

Judt, T. (2007). Postwar: A history of Europe since 1945. Pimlico.

Kapczynski, A. (2019). The Right to Medicines in an Age of Neoliberalism. Humanity: An International Journal of Human Rights, Humanitarianism, and Development, 10(1), 79–107. https://doi.org/10.1353/hum.2019.0003

Katrougalos, G. (2007). The (Dim) Perspectives of the European Social Citizenship (Working Paper No. 05/07; Jean Monnet Working Paper Series). NYU School of Law. https://jeanmonnetprogram.org/paper/the-dim-perspectives-of-the-european-social-citizenship/

Kenner, J. (2003). Economic and social rights in the EU legal order: The mirage of indivisibility. In T. K. Hervey & J. Kenner (Eds.), Economic and social rights under the EU Charter of Fundamental Rights. Hart.

Kulynych, B., Overdorf, R., Troncoso, C., & Gürses, S. (2020). POTs: Protective optimization technologies. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 177–188). https://doi.org/10.1145/3351095.3372853

Landsorganisationen i Sverige (LO). (2020). LOs svar på Kommissionens öppna samråd: Om artificiell intelligens—En EU-strategi för spetskompetens och förtroende [LO’s response to the Commission’s open consultation: On artificial intelligence—An EU strategy for excellence and trust].

Langford, M. (2009). The justiciability of social rights: From practice to theory. In M. Langford (Ed.), Social rights jurisprudence: Emerging trends in international and comparative law (pp. 3–45). Cambridge University Press. https://doi.org/10.1017/CBO9780511815485.003

Lyon, D. (2015). Surveillance After Snowden. Polity Press.

Maduro, M. P. (1999). Striking the Elusive Balance Between Economic Freedoms and Social Rights in the EU. In P. Alston (Ed.), The EU and Human rights. Oxford University Press.

Mann, M. (2020). Technological politics of automated welfare surveillance: Social (and data) justice through Critical Qualitative Inquiry. Global Perspectives, 1(1). https://doi.org/10.1525/gp.2020.12991

Mann, M., & Matzner, T. (2019). Challenging algorithmic profiling: The limits of data protection and anti-discrimination in responding to emergent discrimination. Big Data & Society, 6(2). https://doi.org/10.1177/2053951719895805

Mărcuț, M. (2017). Crystalizing the EU digital policy: An exploration into the Digital Single Market. Springer. https://doi.org/10.1007/978-3-319-69227-2

Marshall, T. (1950). Citizenship and social class. In Citizenship and social class: And other essays. Cambridge University Press.

McGaughey, E. (2018). Will robots automate your job away? Full employment, basic income, and economic democracy (Working Paper No. 496). Centre for Business Research, University of Cambridge. https://www.cbr.cam.ac.uk/wp-content/uploads/2020/08/wp496.pdf

McGregor, L., Murray, D., & Ng, V. (2019). International human rights law as a framework for algorithmic accountability. International and Comparative Law Quarterly, 68(02), 309–343. https://doi.org/10.1017/S0020589319000046

Menendez, A. J. (2003). The Sinews of peace: Rights to solidarity in the Charter of Fundamental Rights of the European Union. Ratio Juris, 16(3), 374–398. https://doi.org/10.1111/1467-9337.00241

Moyn, S. (2018). Not enough: Human rights in an unequal world. The Belknap Press of Harvard University Press.

Nederlands Juristen Comité voor de Mensenrechten (NJCM). (2020). Consultation EU White paper on Artificial Intelligence. https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/12270-White-Paper-on-Artificial-Intelligence-a-European-Approach/public-consultation_en

Niklas, J., & Dencik, L. (2020). European Artificial Intelligence policy: Mapping the institutional landscape (Data Justice Working Paper). Cardiff University. https://datajusticeproject.net/wp-content/uploads/sites/30/2020/07/WP_AI-Policy-in-Europe.pdf

Access Now (AN). (2020). Submission to the Consultation on the “White Paper on Artificial Intelligence—A European approach to excellence and trust”.

Österreichischer Gewerkschaftsbund, AK Europa (OGB). (2020). Weissbuch zur künstlichen Intelligenz – ein europäisches Konzept für Exzellenz und Vertrauen [White paper on artificial intelligence – a European approach to excellence and trust] [Position paper]. Österreichischer Gewerkschaftsbund.

Overy, P. (2009). European union: Guide to tracing working documents. Legal Information Management, 9(2), 107–111. https://doi.org/10.1017/S1472669609000279

Persson, T. (2007). Democratizing European chemicals policy: Do consultations favour civil society participation? Journal of Civil Society, 3(3), 223–238. https://doi.org/10.1080/17448680701775648

Plomien, A. (2018). EU social and gender policy beyond Brexit: Towards the European Pillar of Social Rights. Social Policy and Society, 17(2), 281–296. https://doi.org/10.1017/S1474746417000471

Portes, J., Reed, H., & Percy, A. (2017). Social prosperity for the future: A proposal for Universal Basic Services (IGP Working Paper Series). UCL Institute for Global Prosperity. https://mronline.org/wp-content/uploads/2019/08/universal_basic_services_-_the_institute_for_global_prosperity_.pdf

Powles, J. (2018). The Seductive diversion of ‘solving’ bias in Artificial Intelligence. https://onezero.medium.com/the-seductive-diversion-of-solving-bias-in-artificial-intelligence-890df5e5ef53

Quittkat, C. (2011). The European Commission’s online consultations: A success story? JCMS: Journal of Common Market Studies, 49(3), 653–674. https://doi.org/10.1111/j.1468-5965.2010.02147.x

Rasmussen, A., & Toshkov, D. (2013). The effect of stakeholder involvement on legislative duration: Consultation of external actors and legislative duration in the European Union. European Union Politics, 14(3), 366–387. https://doi.org/10.1177/1465116513489777

Représentations des Institutions Françaises de sécurité sociale (REIF). (2020). Position REIF. Stratégie européenne pour les données et approche européenne sur l’intelligence artificielle [REIF position. European strategy for data and European approach to artificial intelligence].

Schermer, B. (2011). The limits of privacy in automated profiling and data mining. Computer Law and Security Review, 27(45). https://doi.org/10.1016/j.clsr.2010.11.009

Seamster, L., & Charron-Chénier, R. (2017). Predatory inclusion and education debt: Rethinking the racial wealth gap. Social Currents, 4(3), 199–207. https://doi.org/10.1177/2329496516686620

Ssenyonjo, M. (2009). Economic, social and cultural rights in international law. Hart.

Standing, G. (2016). The precariat: The new dangerous class (Revised). Bloomsbury Academic.

Stefano, V. D. (2018). ‘Negotiating the algorithm’: Automation, artificial intelligence and labour protection (Working Paper No. 246; Employment Working Paper Series). International Labour Office, Employment Policy Department. https://ilo.userservices.exlibrisgroup.com/discovery/delivery/41ILO_INST:41ILO_V2/1254389610002676

Toebes, B. C. A. (1999). The right to health as a human right in international law. Intersentia.

Union générale des ingénieurs, cadres et techniciens CGT (UGICT). (2020). Consultation on the White Paper on Artificial Intelligence—A European Approach. https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/12270-White-Paper-on-Artificial-Intelligence-a-European-Approach/public-consultation_en

Wagner, B. (2019). Ethics as an escape from regulation. From “ethics-washing” to ethics-shopping? In E. Bayamlioglu, I. Baraliuc, L. A. W. Janssens, & M. Hildebrandt (Eds.), Being Profiled: Cogitas Ergo Sum. 10 Years of ‘Profiling the European Citizen‘ (pp. 84–88). Amsterdam University Press. https://doi.org/10.2307/j.ctvhrd092.18

Wood, A. J. (2021). Algorithmic Management Consequences for Work Organisation and Working Conditions (Technical Report No. 2021/07; JRC Working Papers Series on Labour, Education and Technology). European Commission Joint Research Centre. https://ec.europa.eu/jrc/sites/default/files/jrc124874.pdf

Yamin, A. E. (2008). Will we take suffering seriously? Reflections on what applying a human rights framework to health means and why we should care. Health and Human Rights, 10(1), 45–63. https://doi.org/10.2307/20460087

Footnotes

1. We are following Kulynych et al. (2020) in describing a set of different data- and algorithm-driven technologies as ‘optimisation technologies’ that are ‘developed to capture and manipulate behavior and environments for the extraction of value’ (p. 1) and operate within an optimisation logic that prioritises technological performance and cost minimisation.

2. In this article we use shorter terms for each group of rights: for civil and political rights—civil rights, and for economic, social and cultural rights—social rights.
