
Geopolitics, jurisdiction and surveillance


Papers in this special issue

Geopolitics, jurisdiction and surveillance
Monique Mann, Deakin University
Angela Daly, University of Strathclyde

Mapping power and jurisdiction on the internet through the lens of government-led surveillance
Oskar J. Gstrein, University of Groningen

Regulatory arbitrage and transnational surveillance: Australia’s extraterritorial assistance to access encrypted communications
Monique Mann, Deakin University
Angela Daly, University of Strathclyde
Adam Molnar, University of Waterloo

Internationalising state power through the internet: Google, Huawei and geopolitical struggle
Madison Cartwright, University of Sydney

Public and private just wars: distributed cyber deterrence based on Vitoria and Grotius
Johannes Thumfart, Vrije Universiteit Brussel

Going global: Comparing Chinese mobile applications’ data and user privacy governance at home and abroad
Lianrui Jia, University of Toronto
Lotus Ruan, University of Toronto

Transnational collective actions for cross-border data protection violations
Federica Casarosa, European University Institute

The legal geographies of extradition and sovereign power
Sally Kennedy, Deakin University
Ian Warren, Deakin University

Anchoring the need to revise cross-border access to e-evidence
Sergi Vazquez Maymir, Vrije Universiteit Brussel

Geopolitics, jurisdiction and surveillance

Introduction

With this special issue we offer critical commentary and analysis of the geopolitics of data, transnational surveillance and jurisdiction, and reflect upon the question of whether and how individual rights can be protected in an era of ubiquitous transnational surveillance conducted by private companies and governments alike. The internet presents the sovereign nation state with a number of challenges, and opportunities, for exercising power and regulating extraterritorially. These practices are shaped and influenced by geopolitical relations between states. Certainly, the trans-jurisdictional nature of the internet means that the legal geographies of the contemporary digital world require rethinking, especially in light of calls for a more sophisticated and nuanced approach to understanding the sovereignty to govern, and to protecting individual rights, in the electronic age (Johnson & Post, 1996; Goldsmith & Wu, 2006; Brenner, 2009; Hildebrandt, 2013; Svantesson, 2013; 2014; 2017; DeNardis, 2014). These issues raise a host of additional contemporary and historical questions about attempts by the US to exert power over extraterritorial conduct in various fields including crime, intellectual property, surveillance and national security (see e.g., Bauman et al., 2014; Boister, 2015; Schiller, 2011). Yet dynamics are shifting with the emergence of China as a new technological superpower, and with the regulatory efforts of the European Union (for example via the General Data Protection Regulation). The emergence of large transnational corporations providing critical virtual and physical infrastructure adds private governance to this equation, bringing further new dimensions to the rule of law and to self- and co-regulation (see for example Goldsmith & Wu, 2006; DeNardis & Hackl, 2015; Brown & Marsden, 2013; Daly, 2016).

The idea for this special issue emerged from a workshop that we co-convened in 2016 in which we sought to explore a range of questions: the impact of domestic and international cybercrime, data protection and intellectual property laws on sovereignty and extraterritoriality; the geopolitical impacts of domestic and international surveillance and cybercrime laws such as the Council of Europe’s Convention on Cybercrime (Budapest Convention), the recent United States Clarifying Lawful Overseas Use of Data (CLOUD) Act and other lawful access regimes including European Union e-Evidence proposals; the application of due process requirements in the contemporary policing of digital spaces; the objectives of justice in the study of private governance in online environments; and the implications of these transnational developments for current and future policy and regulation of online activities and spaces.

Since 2016, we have witnessed striking developments in the geopolitical and geoeconomic relationships between states, global technology companies, their transnational surveillance practices, and corresponding governance frameworks. In particular, the rise of China and the globalisation of its internet industry has been a major development in this period, along with the Trump presidency in the US and the ensuing trade war (Daly, in press). Just in the weeks prior to the publication of this special issue, there was a significant escalation of tensions between the US and China, played out via the restriction of social media companies’ access to the US market. On 6 August 2020, Donald Trump issued executive orders banning transactions with ByteDance (TikTok’s parent company) and Tencent (WeChat’s parent company) that are subject to US jurisdiction, stating that “the spread in the United States of mobile applications developed and owned by companies in the People’s Republic of China (China) continues to threaten the national security, foreign policy, and economy of the United States”. Surveillance and the sharing of US citizens’ data with the Chinese Communist Party, the protection of intellectual property from corporate espionage, and Chinese censorship and disinformation were cited as justifications supporting the purge. Subsequently, Trump issued a further executive order requiring that ByteDance sell off all of TikTok’s US-based assets. These types of geopolitical struggles are examined further in Cartwright’s timely contribution to this special issue on ‘Internationalising state power through the internet’ (Cartwright, 2020).

Further to the recent US-Chinese tensions, in the month prior to publication of this collection, the Court of Justice of the EU (CJEU) handed down its landmark decision in Data Protection Commissioner v Facebook Ireland Limited, Maximillian Schrems (Schrems II) (2020), invalidating the EU-US Privacy Shield (following Schrems I, which invalidated the predecessor EU-US Safe Harbour agreement in 2015), with significant ramifications for the transfer of EU citizens’ data to the US as a consequence of extensive US state surveillance and insufficient safeguards protecting privacy. The exact impacts of this decision on transborder data transfers are yet to be fully understood, but will undoubtedly be significant. At the same time, the US is negotiating executive agreements under its Clarifying Lawful Overseas Use of Data (CLOUD) Act that allow authorised states to access the content of communications held by US technology companies without prior judicial authorisation, and allow the US to compel US technology companies to provide access to data stored outside US jurisdiction (as per the initial Microsoft case rendered moot by the introduction of the CLOUD Act; see further Warren, 2015; Svantesson, 2017; Mann & Warren, 2018; Mulligan, 2018).

This all comes at a time when nations, and indeed regions, are asserting their “digital sovereignty” through data localisation initiatives that limit transborder data flows, as witnessed recently with France and Germany enacting their plans for European digital sovereignty (ANSSI, n.d.) and the corresponding launch of the GAIA-X cloud computing project (GAIA-X, n.d.) that creates a European data infrastructure independent of both China and the US.

China has also started asserting itself legally beyond its territorial borders. Hong Kong’s controversial new National Security Law includes provisions which criminalise secession, subversion, terrorism, and collusion with foreign powers and, via Art 38, purports to apply to persons who are not Hong Kong permanent residents committing these offences even if they are based in other countries. In addition, Art 43 enables the Hong Kong Police Force, when investigating national security crimes, to direct service providers to remove content and provide other assistance. How these provisions will be applied to Hong Kong’s transnational internet (which to date has included both Chinese and Western internet companies and services, including some which are banned in mainland China) remains unclear; some US-based companies such as Facebook and Twitter have already announced their suspension of compliance with data requests from the Hong Kong authorities (Liao, 2020).

Taken together, these most recent developments highlight the significance of the geopolitical and geoeconomic dimensions of data, private-public surveillance interests, and associated impacts for human rights and international trade. They also demonstrate that extraterritoriality is no longer just a feature of US internet law and policy and equally that national sovereignty is no longer just a feature of Chinese internet law and policy.

These dimensions become more relevant with the concurrent reinforcement of physical borders amid a new global crisis brought by the COVID-19 pandemic, which also has significant implications for cross-border information sharing and data storage (e.g., immunity passports, or contact tracing applications with data stored in the cloud; see Taylor et al., 2020). Certainly, expanded surveillance and information collection by states and private companies have proven to be central to the global response to the bio(in)security created by the pandemic, with significant extraterritorial implications (Privacy International, 2020). For example, one of the main criticisms levelled at the Australian COVIDSafe contact tracing application was that Amazon was contracted to host the contact tracing information on Amazon Web Services (AWS), with the potential for the US to access the data via the US technology company. In response, and like Germany and France, the Australian government is considering the development of a “sovereign cloud” for the storage of Australia’s data (Besser & Welch, 2020; Sadler, 2020). Nevertheless, the pandemic response has also demonstrated the transnational corporate power of Google and Apple as key gatekeepers to the operation of government-backed COVID contact tracing apps, despite the questionable or unproven effectiveness of these apps in automating contact tracing (Braithwaite et al., 2020). Google and Apple have even become the source of apps that offer improved data protection when compared to the in-house attempts of various European governments to create their own apps (Daly, in press), yet they simultaneously cement their infrastructural power (Veale, 2020).

Main contributions to this special issue

With these brief introductory remarks in mind, we turn to the overview of the papers and their main contributions to this issue. We open the collection with Oskar J. Gstrein’s contribution ‘Mapping power and jurisdiction on the internet through the lens of government-led surveillance’ (Gstrein, 2020), which examines governance frameworks for the regulation of government-driven surveillance to avoid the ‘balkanisation’ of the internet. Two proposals are analysed, namely, the ‘Working Draft Legal Instrument on Government-led Surveillance and Privacy’, presented to the United Nations Human Rights Council, and the proposal for a ‘Digital Geneva Convention’ (DGC) by Microsoft’s Brad Smith. The article questions whether it is possible to create an internet based on human rights principles and values. Interlinked with issues of human rights online, our own contribution (along with Adam Molnar) on ‘Regulatory arbitrage and transnational surveillance’ (Mann, Daly, & Molnar, 2020) examines developments regarding encryption law and policy within ‘Five Eyes’ (FVEY) countries, specifically the Telecommunications and Other Legislation Amendment (Assistance and Access) Act 2018 (Cth)¹ in Australia. We argue that this new law is significant both domestically and internationally, given that its extraterritorial reach enables the development of new ways for Australian law enforcement and security agencies to access encrypted telecommunications via transnational providers, and allows Australian authorities to assist foreign counterparts in both enforcing and potentially circumventing their domestic laws. We show that deficiencies in Australian human rights protections are the ‘weak link’ in the FVEY alliance, which creates the possibility for regulatory arbitrage that exploits these new surveillance powers to undermine encryption, at a global scale, via Australia.

Madison Cartwright’s article ‘Internationalising state power through the internet: Google, Huawei and geopolitical struggle’ (Cartwright, 2020) shows how the US has exploited the international market dominance of US-based internet companies to internationalise its own state power through surveillance programmes. Using Huawei as a case study, Cartwright also examines how Chinese companies threaten the dominance of US companies as well as the geopolitical power of the US state, and how, in response, the US has sought to shrink the ‘geo-economic space’ available to Huawei by using its firms, such as Google, to disrupt Huawei’s supply chains. The analysis demonstrates how states may use internet companies to exercise power and authority beyond their borders. The extraterritorial exercise of power by non-state actors is explored further in ‘Public and private just wars: distributed cyber deterrence based on Vitoria and Grotius’ (Thumfart, 2020). In Johannes Thumfart’s contribution, the role of non-state actors in cyber attacks is considered from the perspective of just war theory. He argues that private and public cyber deterrence capacities form a system of distributed deterrence that is preferable to state-based deterrence alone.

In ‘Going global: Comparing Chinese mobile applications’ data and user privacy governance at home and abroad’, Lianrui Jia and Lotus Ruan argue that differential levels of privacy and data protection demonstrate the importance of jurisdictional influences in the regulatory environment, which in turn shape the global expansion of Chinese internet companies (Jia & Ruan, 2020). They examine the governance of Chinese mobile applications at a global scale, and their comparative analysis of international-facing versus Chinese-facing versions of Chinese mobile apps demonstrates that greater levels of data protection are afforded to users located outside China than to those within. Continuing with the theme of transnational data protection, in ‘Transnational collective actions for cross-border data protection violations’ Federica Casarosa examines alternative forms of enforcement, specifically transnational collective actions in the European Union, as an avenue to empower data subjects and achieve remedies for data protection infringements (Casarosa, 2020). Casarosa uses the Cambridge Analytica-Facebook scandal to highlight the multijurisdictional and cross-border nature of data protection violations, examines some of the limits of existing redress mechanisms under the EU’s General Data Protection Regulation (GDPR), and argues for greater scope for transnational collective actions in which associations or non-government organisations represent claimants from various jurisdictions.

Cross-border access to data is a central concern for transnational online policing. In the contribution on ‘The legal geographies of extradition and sovereign power’, Sally Kennedy and Ian Warren raise a series of questions about the access, use and exchange of digital evidence under mutual legal assistance treaty (MLAT) requirements (Kennedy & Warren, 2020). Via a case study concerning a Canadian citizen facing extradition to the US, they show how US sovereignty and criminal enforcement powers are advanced, with implications for global online criminal investigations. Their analysis shows a need for clearer transnational data exchange protocols or the possibility of shifting prosecution forums to the source of online harm, arguing that this would promote fairness for those accused of online crimes with a cross-jurisdictional aspect. Matters of e-evidence are further explored in ‘Anchoring the need to revise cross-border access to e-evidence’, in which Sergi Vazquez Maymir examines the European Commission’s e-evidence package, including the ‘Proposal for a Regulation on European Production and Preservation Orders’ and its associated impact assessment. He critically analyses the arguments and evidence supporting the EPO regulation and the policy shift away from Mutual Legal Assistance to direct cooperation. Vazquez Maymir argues that the problems associated with cross-border access to e-evidence are framed in terms of technical and efficiency considerations, and that in doing so, the political and economic motivations are lost.

Conclusion

Utilising, and in some cases exploiting, information communication technology to exert private and public power across multiple jurisdictions undoubtedly creates new challenges for traditional forms of regulatory governance and the protection of human rights. Each of the papers in this collection raises and speaks to critical questions about the type of internet that we want (free, open, unified and decentralised?), and the role that states and companies (should) play in creating it. The papers demonstrate the significance of the internet as a forum for geopolitical struggle and the weaponisation of jurisdiction, especially with extraterritorial reach, for states to extend their power beyond their own borders directly, and via transnational companies.

While the US, due to historical reasons as the birthplace of the internet and the de facto international hegemon in the 1990s/2000s, has been the focus for private and public extensions of political and economic power via the internet, the increasing multipolarity of the world is reshaping the relationship between jurisdiction and power online, and with it technology law and policy, as can be seen through this collection’s contributions. The EU has been gaining prominence as a ‘regulatory superpower’, especially since the introduction of the GDPR, and the emergence of China as a global internet player is now also apparent through the globalisation of its internet services and the extraterritorial reach of the new Hong Kong National Security Law. Increasing attention ought to be paid to such developments beyond the US and EU, particularly from BRICS countries, and to how they interact with, and impact upon, global internet governance and internet law and policy in the West as well.

Acknowledgements

Mann received funding as part of her Vice-Chancellor’s Research Fellowship in Technology and Regulation, and from the Intellectual Property and Innovation Law (IPIL) Programme, at Queensland University of Technology. This supported the original workshop, copy-editing and editorial assistance.

Angela Daly would like to thank University of Strathclyde Scholarly Publications and Research Data/Open Access@Strathclyde, and in particular Pablo de Castro, for making a financial contribution to support this special issue being made available on an open access basis. She would also like to thank the Queensland University of Technology IPIL Programme for financially supporting the original workshop.

We would like to thank Dr Kayleigh Murphy for her excellent editorial assistance. We would especially like to acknowledge and thank Frédéric Dubois and the entire Internet Policy Review team for their enthusiasm and support in publishing this collection. We thank the participants at the workshop we held at QUT in 2016, and the international peer-reviewers that contributed their expertise and constructive comments on the papers (including ones that did not make it into the final collection): Songyin Bo, Balázs Bodó, Evelien Brouwer, Lee Bygrave, Jonathan Clough, Robert Currie, Jake Goldenfein, Samuli Haataja, Blayne Haggart, Danielle Ireland-Piper, Tamir Israel, Martin Kretschmer, Joanna Kulesza, Robert Merkel, Adam Molnar, Gavin Robinson, Stephen Scheel, James Sheptycki, Nic Suzor, Dan Svantesson, Peter Swire, Johannes Thumfart, Natasha Tusikov and Janis Wong.

References

ANSSI. (n.d.). The European Digital Sovereignty—A Common Objective for France and Germany. https://www.ssi.gouv.fr/en/actualite/the-european-digital-sovereignty-a-common-objective-for-france-and-germany/

Bauman, Z., Bigo, D., Esteves, P., Guild, E., Jabri, V., Lyon, D., & Walker, R. B. J. (2014). After Snowden: Rethinking the impact of surveillance. International Political Sociology, 8(2), 121–144. https://doi.org/10.1111/ips.12048

Besser, L., & Welch, D. (2020, April 23). Australia’s coronavirus tracing app’s data storage contract goes offshore to Amazon. ABC News. https://www.abc.net.au/news/2020-04-24/amazon-to-provide-cloud-services-for-coronavirus-tracing-app/12176682

Boister, N. (2015). Further reflections on the concept of transnational criminal law. Transnational Legal Theory, 6(1), 9–30. https://doi.org/10.1080/20414005.2015.1042232

Braithwaite, I., Callender, T., Bullock, M., & Aldridge, R. W. (2020). Automated and partly automated contact tracing: A systematic review to inform the control of COVID-19. The Lancet Digital Health. https://doi.org/10.1016/s2589-7500(20)30184-9

Brenner, S. W. (2009). Cyber Threats: The emerging fault lines of the nation state. Oxford University Press. https://doi.org/10.1093/acprof:oso/9780195385014.001.0001

Brown, I., & Marsden, C. T. (2013). Good governance and better regulation in the information age. MIT Press.

Cartwright, M. (2020). Internationalising state power through the internet: Google, Huawei and geopolitical struggle. Internet Policy Review, 9(3). https://doi.org/10.14763/2020.3.1494

Casarosa, F. (2020). Transnational collective actions for cross-border data protection violations. Internet Policy Review, 9(3). https://doi.org/10.14763/2020.3.1498

Daly, A. (In press). Neo-Liberal Business-As-Usual or Post-Surveillance Capitalism With European Characteristics? The EU’s General Data Protection Regulation in a Multi-Polar Internet. In R. Hoyng & G. P. L. Chong (Eds.), Communication Innovation and Infrastructure: A Critique of the New in a Multipolar World. Michigan State University Press.

Daly, A. (2016). Private Power, Online Information Flows and EU Law: Mind the Gap. Hart.

Data Protection Commissioner v Facebook Ireland Limited, Maximillian Schrems (C‑311/18), (The Court of Justice of the European Union (Grand Chamber) 2020). http://curia.europa.eu/juris/document/document.jsf?text=&docid=228677&pageIndex=0&doclang=EN&mode=lst&dir=&occ=first&part=1&cid=9745404

DeNardis, L. (2014). The global war for internet governance. Yale University Press. https://doi.org/10.12987/yale/9780300181357.001.0001

DeNardis, L., & Hackl, A. M. (2015). Internet governance by social media platforms. Telecommunication Policy, 39, 761–770. https://doi.org/10.1016/j.telpol.2015.04.003

Executive Order on Addressing the Threat Posed by TikTok. (2020). The White House. https://www.whitehouse.gov/presidential-actions/executive-order-addressing-threat-posed-tiktok/

GAIA-X. (n.d.). GAIA-X: A Federated Data Infrastructure for Europe. https://www.data-infrastructure.eu/GAIAX/Navigation/EN/Home/home.html

Goldsmith, J., & Wu, T. (2006). Who controls the internet? Illusions of a borderless world. Oxford University Press.

Gstrein, O. (2020). Mapping power and jurisdiction on the internet through the lens of government-led surveillance. Internet Policy Review, 9(3). https://doi.org/10.14763/2020.3.1497

Hildebrandt, M. (2013). Extraterritorial jurisdiction to enforce in cyberspace?: Bodin, Schmitt, Grotius in cyberspace. University of Toronto Law Journal, 63(2), 196–224. https://doi.org/10.3138/utlj.1119

Johnson, D., & Post, D. (1996). Law and borders: The rise of law in cyberspace. Stanford Law Review, 48(5), 1367–1402. https://doi.org/10.2307/1229390

Kennedy, S., & Warren, I. (2020). The legal geographies of extradition and sovereign power. Internet Policy Review, 9(3). https://doi.org/10.14763/2020.3.1496

Liao, R. (2020, July 8). The tech industry comes to grips with Hong Kong’s national security law. TechCrunch. https://techcrunch.com/2020/07/08/hong-kong-national-security-law-impact-on-tech/

Mann, M., Daly, A., & Molnar, A. (2020). Regulatory arbitrage and transnational surveillance: Australia’s extraterritorial assistance to access encrypted communications. Internet Policy Review, 9(3). https://doi.org/10.14763/2020.3.1499

Mann, M., & Warren, I. (2018). The digital and legal divide: Silk Road, transnational online policing and southern criminology. In K. Carrington, R. Hogg, J. Scott, & M. Sozzo (Eds.), The Palgrave handbook of criminology and the global south (pp. 245–260). Palgrave MacMillan. https://doi.org/10.1007/978-3-319-65021-0_13

Mulligan, S. P. (2018). Cross-Border Data Sharing Under the CLOUD Act (No. 7-5700 R45173; CRS Report). Congressional Research Service. https://fas.org/sgp/crs/misc/R45173.pdf

Order Regarding the Acquisition of Musical.ly by ByteDance Ltd. (2020). The White House. https://www.whitehouse.gov/presidential-actions/order-regarding-acquisition-musical-ly-bytedance-ltd/

Privacy International. (2020). Tracking the Global Response to COVID-19. https://privacyinternational.org/examples/tracking-global-response-covid-19

Sadler, D. (2020, July 7). Government Finally Backs Sovereign Cloud Capability. Innovation Aus. https://www.innovationaus.com/govt-finally-backs-sovereign-cloud-capability/

Schiller, D. (2011). Special commentary: Geopolitical-economic conflict and network infrastructures. Chinese Journal of Communication, 4(1), 90–107. https://doi.org/10.1080/17544750.2011.544085

Svantesson, D. (2013). A ‘layered approach’ to the extraterritoriality of data privacy laws. International Data Privacy Law, 3(4), 278–286. https://doi.org/10.1093/idpl/ipt027

Svantesson, D. (2014). Sovereignty in international law – how the internet (maybe) changed everything, but not for long. Masaryk University Journal of Law and Technology, 8(1), 137–155. https://journals.muni.cz/mujlt/article/view/2651

Svantesson, D. J. B. (2017). Solving the internet jurisdiction puzzle. Oxford University Press. https://doi.org/10.1093/oso/9780198795674.001.0001

Taylor, L., Sharma, G., Martin, A., & Jameson, S. (Eds.). (2020). Data Justice and COVID-19: Global Perspectives. Meatspace Press. https://shop.meatspacepress.com/product/data-justice-and-covid-19-global-perspectives

Thumfart, J. (2020). Private and public just wars: Distributed cyber deterrence based on Vitoria and Grotius. Internet Policy Review, 9(3). https://doi.org/10.14763/2020.3.1500

Vazquez Maymir, S. (2020). Anchoring the Need to Revise Cross-Border Access to E-Evidence. Internet Policy Review, 9(3). https://doi.org/10.14763/2020.3.1495

Veale, M. (2020, July 1). Privacy is not the Problem with the Apple-Google Contact-Tracing Toolkit. The Guardian. https://www.theguardian.com/commentisfree/2020/jul/01/apple-google-contact-tracing-app-tech-giant-digital-rights

Warren, I. (2015). Surveillance, criminal law and sovereignty. Surveillance & Society, 13(2), 300–305. https://doi.org/10.24908/ss.v13i2.5679

Footnotes

1. Cth stands for Commonwealth, which means “federal” legislation, as distinct from state-level legislation.


Mapping power and jurisdiction on the internet through the lens of government-led surveillance


This paper is part of Geopolitics, jurisdiction and surveillance, a special issue of Internet Policy Review guest-edited by Monique Mann and Angela Daly.

Introduction

On 6 June 2020, seven years had passed since Edward Snowden began his revelations of governmental misuse of surveillance capabilities (Edgar, 2017, p. 75). The impact and meaning of his actions are still the subject of lively discussion. Snowden recently refuelled this discourse with the publication of a memoir entitled Permanent Record (2019), but despite considerable echo in the press and civil society campaigns, not much seems to have changed (see e.g., Stoycheff et al., 2019, pp. 613-614). At first glance this is surprising, since actors such as the United Nations and the Council of Europe have undertaken several efforts to support human rights, democracy and the rule of law in the digital age (De Hert & Papakonstantinou, 2018, pp. 526-527; Terwangne, 2014, pp. 118-119). Furthermore, the United Kingdom (UK) and South Africa have admitted in court to conducting bulk surveillance (Privacy International, 2019). On closer inspection, however, Snowden’s activities might unintentionally have catalysed a process which leads to more fragmentation of the digital space.

In a section entitled ‘The dark side of hope’ in his 2018 book on the culture of surveillance, David Lyon describes the ex-employee of a contractor of the United States National Security Agency (NSA) as a thoughtful technical expert, who believes in the potential for democratic and human development through the internet. In addition, Lyon points out that Snowden revealed some of the secret mechanisms contributing to creating a world in which the majority of humans remain poor and dependent. First and foremost, underlying Snowden’s revelations is the fact that ‘today’s surveillance undoubtedly contributes to and facilitates this world’ (Lyon, 2018, p. 143). From this perspective, it is not surprising that many societies and states choose to reduce exposure of ‘their’ data to a multilateral setting with complex and hard-to-control implications. Rather, they shift the focus onto immediate political and economic interests. This is probably not the reaction that Snowden and his supporters might have hoped for, since it will most likely not result in more protection for human rights in the short term. However, it is the easiest answer to a myriad of complex questions.

The motives of states vary as they engage in this process of reassurance. Some focus on strengthening internal security and stability, which typically takes the form of creating legal frameworks with the objective of facilitating access to personal data for law enforcement agencies. Concrete examples are the e-evidence package of the European Union (EU), and the United States’ Clarifying Lawful Overseas Use of Data Act (CLOUD Act). Whereas e-evidence tries to make investigations across EU member states easier by removing the requirement for review by local national authorities as ‘middlemen’ (Buono, 2019, pp. 307-312), the CLOUD Act enables the US administration to sign agreements with states like the UK and Australia to make cross-border data access quicker and easier (Galbraith, 2020, pp. 124-128). Both of these legal frameworks aim at circumventing the complex and time-consuming mutual legal assistance procedures enshrined in traditional international mutual legal assistance treaties (MLATs) (see also Vazquez Maymir, 2020, this issue).

Other countries have started to focus increasingly on maintaining external security and national sovereignty. For example, the Russian Federation has enacted regulation to strengthen the autonomy of the Russian part of the internet (Sherwin, 2019), which follows an earlier law requiring that servers containing personal data of Russian citizens be located on state territory (Anishchuk, 2014). If one reacts sceptically to these types of decisions, one should take into account that discussions on ‘digital sovereignty’ are not only taking place in seemingly ‘inward-looking’ countries, but also in seemingly ‘open-minded’ ones such as Germany (Fox, 2018; Gräf et al., 2018). At the same time, the People’s Republic of China seems not to distinguish between political and economic power, and pursues regulation that also limits the international exchange of economic data flows (Yang, 2019).

These observations create the impression that the interest in investing in the further development of a multi-stakeholder mechanism for internet governance is limited. This mechanism gathers states/governments, the private sector, civil society, intergovernmental organisations, international private organisations, as well as the academic and technical communities to shape the global internet through interactive discourse (Hill, 2014, p. 29). Organisations such as the Internet Corporation for Assigned Names and Numbers (ICANN), the Internet Governance Forum (IGF), or the International Telecommunication Union (ITU) take prominent roles in this process. They differ considerably when it comes to democratic anchorage, legitimacy, representation, as well as regulatory authority, which results in undeniable tension with institutions of traditional governance. While some national administrations fear loss of sovereignty, several groups of civil society feel under- or non-represented (Ewert et al., 2020). Hence, the decentralised and universal nature of the internet, both paradigms that the multi-stakeholder mechanism traditionally supports (Hill, 2014, pp. 16-46), has come under scrutiny. Scholars such as O’Hara and Hall argue that there are already four internets: a heavily regulated European ‘bourgeois’ version that prides itself on its dedication to human rights and ethics; the US version with its strong focus on monetisation and economic activity; the authoritarian internet of China, where social cohesion and surveillance mean the same thing; and the internet of misinformation and hacking associated with the Russian Federation, North Korea and other ‘rogue states’ (O’Hara & Hall, 2018).

If anything, this makes it clear that the nature of the internet is still developing. Since John Perry Barlow published the ‘Declaration of the Independence of Cyberspace’ (1996), the digital sphere has evolved considerably. The internet has become an important layer of social interaction, which also means it has become more than a mere passage for data flows. While the idea of regulating it in a manner similar to what Hugo Grotius suggested for the High Seas in 1608 seems appealing (Hildebrandt, 2013, p. 211) - particularly to those primarily concerned about the free passage of data packages - the technical infrastructure that constitutes the internet cannot be qualified as res nullius. In contrast to the High Seas, the internet does not lend itself neutrally to any kind of activity. In this context, Melvin Kranzberg’s first law is not the only thing that comes to mind (‘technology is neither good nor bad; nor is it neutral’ (Kranzberg, 1995, p. 5)). As inhabitants of industrialised and more developed countries face the looming threat of being replaced by ‘superintelligent’ autonomous machines (West, 2018, pp. 19-41), the rushed and ill-considered adoption of ‘digital identities’ might further widen the inequality gap with less-developed areas of the world (Gstrein & Kochenov, 2020). Hence, as the rapid mass adoption of information technology and autonomous systems continues, an urgent question emerges: is it possible to create a trustworthy and universal digital space, based on human rights principles and values, which might be able to prevent a dystopian future with limited opportunities for information exchange and undignified data-driven economies whose purpose is to fuel fantasies of political hegemony?

The objective of this article is to identify potentially useful regulatory strategies that at least mitigate or avoid (further) balkanisation/fragmentation of the digital space. This process of fragmentation has also been described as a movement towards a ‘splinternet’. Once it is completed, the remaining digital information networks will have turned away from the three central principles of openness (universal cross-border data transfer), trust (faith in other users), and decentralisation (which supports resilience and freedom) (Ananthaswamy, 2011, p. 42). The lens through which the regulatory strategies will be identified is government-led surveillance, as defined in the Methodology section. After analysis of two proposals for the regulation of government-led surveillance from a universal perspective, the available strategies to manage power and jurisdiction on the internet will be discussed. At the end of this exercise it will be concluded whether regulatory frameworks that respect, protect and promote the internet as one global space are feasible under current circumstances.

Methodology

Before starting to map out salient issues, a few brief remarks on the selection of proposals and the methodology of this submission are necessary. The topic of surveillance is immensely complex, with much historic context to consider (Galič et al., 2017). Additionally, surveillance as a phenomenon is not purely government-driven. Much has been written about the profiling of individuals for commercial gain, the impact of ‘surveillance capitalism’ (Zuboff, 2019) on individual and collective autonomy, and how new technologies such as artificial intelligence influence democracy and the formation of the public (Nemitz, 2018). Other scholars propose the consideration of surveillance ‘as cultural imaginaries and practices’, which makes surveillance a subject of everyday life (Lyon, 2018, p. 127).

While these are valid approaches, this submission will predominantly leave out major corporate and non-government-related aspects. For this piece the term ‘surveillance’ is defined as ‘any monitoring or observing of persons, listening to their conversations or other activities, or any other collection of data referring to persons regardless whether this is content data or metadata’; furthermore, government-led is to be interpreted as ‘surveillance [that] is carried out by a state, or on its behalf, or at its order’ (Cannataci, 2018a, p. 9). These definitions emphasise the relationship between the citizen/resident and the state, which is typical for international human rights law. Furthermore, this concentration makes the topic more manageable in volume, and allows the essence of available regulatory strategies to be distilled. However, this does not suggest that regulation is all that is required to create a trustworthy and universal digital space. Nor does applying government-led surveillance as a lens, with its focus on states and individuals, mean that the actions of corporations are entirely irrelevant. As supporters or enablers of state surveillance - which has been claimed to reach structural dimensions embedded in the telecommunications infrastructure of powerful western actors (Gallagher & Moltke, 2018) - their actions remain relevant. The United Nations stressed such responsibility in a General Assembly resolution on privacy in the digital age on 14 November 2018 (2018, pp. 6-7).

In this setting focused on regulatory frameworks for government-led surveillance, the available regulatory strategies are explored on the basis of two recent proposals which attempt to address the issues at a universal level: the ‘Working Draft Legal Instrument on Government-led Surveillance and Privacy’ (‘LI’) (Cannataci, 2018a), which was presented to the United Nations Human Rights Council in March 2018, and the proposal for a ‘Digital Geneva Convention’ (DGC) presented by Microsoft’s president Brad Smith (2017).

General nature of the proposals

Both proposals are based on the premise that the internet is an independent and universal/global space which should be accessible and open to all individuals on the planet regardless of where they come from. By emphasising this enabling and empowering dimension from an individual perspective, the proposals call on states to overhaul the international legal framework in order to address government-led surveillance in more detail and more effectively (Cannataci, 2018b, p. 22; Smith, 2017). Smith starts with the observation that cyber-attacks, cyber-espionage and cyber-warfare have become widespread and dangerous (see also Thumfart, 2020). Since many of these are directly or indirectly led by state powers that carry out surveillance, he proposes to address this new reality with the development of a DGC to protect private citizens and entities, or in other words ‘non-combatants’ (Smith, 2017). His proposal relates to the Fourth Geneva Convention for the protection of civilians in warfare from 1949 (Mansell & Openshaw, 2010, p. 23). Since the appropriateness, effectiveness, and timeliness of the Geneva Conventions of 1949 are being questioned due to the emergence of asymmetric warfare and international terrorism in today’s conflict scenarios (Gordon, 2010, pp. 95-96; Ratner, 2008), it might be surprising to see such a concrete demand spearheaded by the president of one of the most influential technology corporations. Smith (2017) suggests enshrining six principles in a novel instrument of public international law: (1) No targeting of tech companies, the private sector or critical infrastructure; (2) Assist private sector efforts to detect, contain, respond to, and recover from events; (3) Report vulnerabilities to vendors rather than stockpile, sell or exploit them; (4) Exercise restraint in developing cyberweapons and ensure that they allow only for limited and precise use and are non-reusable; (5) Commit to non-proliferation activities relating to cyberweapons; and (6) Limit offensive operations to avoid a mass event. In terms of impact, the DGC has not gained traction within the community of states at the time of writing. However, it was followed up with the signing of a ‘Cybersecurity Tech Accord’ by 34 technology and security companies in 2018 (Smith, 2018).

The principles of the DGC have to be evaluated critically, keeping in mind that Smith is associated with a corporation that has strong relationships with some of the most powerful states. On the one hand, this scepticism vis-à-vis Smith’s intentions was recently reinforced by the award of a US$ 10 billion contract by the United States Pentagon which allows Microsoft to provide an advanced cloud infrastructure for the United States Army that might also be used for surveillance (Conger et al., 2019). On the other hand, when focusing on the substance of his proposal one can also discover aspects that support openness, trust and decentralisation of the internet by making the internet less dependent on traditional governance mechanisms and institutions. Hence, the DGC is potentially able to support the creation of a digital space which is universal and safe(r) for its users. For example, the United States NSA recently shared the discovery of a major security vulnerability in Windows systems that could have facilitated large-scale surveillance if kept secret (Newman, 2020). In this sense the agency delivered on the demands enshrined in principles two and three of the DGC. Furthermore, it seems fair to assume that corporations focus predominantly on the relationship with their users/clients, who do not consist only of governments. Their business opportunities increase with the ability to act in stable and universal legal frameworks across the globe, independent of territorial restraints. This also means there is an interest in checks and balances that control public institutions as they carry out surveillance, since this strengthens the independence of the private sector. Furthermore, aspects like citizenship or residency do not matter as much for corporations as they do for states, and in traditional human rights/civil liberties law, when it comes to applicability and enforcement (Nowak, 2018, pp. 273-275).

Smith’s proposal is based on a perspective in which technology corporations are independent and neutral actors which promote the internet as a universally accessible layer of societal interaction. Whether this is a sincere, desirable or achievable objective remains a matter of speculation. Regardless, the DGC as a blueprint for a regulatory instrument is in itself a worthy object of study for the purposes of this article. While Smith and Microsoft’s intentions might be more or less motivated by economic short-term gains in their day-to-day business, the DGC can be scrutinised as a self-standing set of principles that has its own strengths and weaknesses, which will be outlined below.

Although the DGC and the LI have the common goal of catalysing the evolution of international public law to achieve regulation of government-led surveillance, the process through which the LI emerged differs significantly. In its essence it is a co-production of an EU-funded research project (MAPPING, i.e., Managing Alternatives for Privacy, Property and Internet Governance) and the initiative of the inaugural UN Special Rapporteur on the Right to Privacy (SRP), Joseph A. Cannataci (2018a, p. 3). The text itself, which is a blueprint of sorts for an international agreement between states, is based on earlier surveillance-related research carried out in other EU-funded projects. Those insights were combined in a first draft with international and European human rights law principles, such as those enshrined in the International Covenant on Civil and Political Rights of the United Nations (ICCPR), the European Convention on Human Rights (ECHR) and the Charter of Fundamental Rights of the EU (CFEU). Furthermore, developments in the modernisation of the Council of Europe’s Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data (Convention 108; updated by protocol CETS No. 223 on 10 October 2018 to become Convention 108+) as well as the EU General Data Protection Regulation (GDPR) were included. Finally, the findings of landmark judgments of national courts, the European Court of Human Rights (ECtHR) and the Court of Justice of the European Union (CJEU) were also taken into consideration.

Although international human rights law standards of the UN such as Article 12 of the Universal Declaration of Human Rights or Article 17 of the ICCPR underpin the provisions of the LI as well, one might criticise this heavy influence of European standards, regulation, and jurisprudence on a proposal that was developed to potentially become the basis for a global regulatory framework. Nevertheless, the initial idea for this project was to start with a relatively concrete and substantive text that would allow the mapping of salient issues in this area, as well as creating the ability for concrete discussions about them. Furthermore, international comparative research on the development of data protection and privacy law shows that European standards have heavily influenced regulation in the 142 states that had specific privacy-related regulation at the end of 2019, the majority of which are located outside the EU, the territories of Council of Europe member states, and the United States. Additionally, the relevance of European activity is underlined by the fact that the number of specific laws across the world has risen significantly following the attention that the GDPR and Convention 108+ have received (Greenleaf & Cottier, 2020, pp. 24-26).

The final published version of the LI consists of 18 articles accompanied by explanatory remarks (‘memorandum’). Before the LI was presented to the Human Rights Council in March 2018 it went through a process of thirteen substantive iterations which were based on feedback received in ten meetings carried out in different locations in Europe and the United States from April 2016 to February 2018, as well as written submissions and verbal feedback (Cannataci, 2018a, pp. 1, 38). This feedback was sourced from a multi-stakeholder community consisting of corporations, members of the law enforcement and intelligence community, members of civil society organisations, academics and other experts on the topic of government-led surveillance (Cannataci, 2018a, p. 2).

Salient issues

Due to the limited amount of space in this article it is not possible to discuss the proposals in their entire breadth and depth. Furthermore, rather than considering them as detailed and sufficient solutions to the complex issue, it might be more appropriate to use them as indicators (or ‘maps’) which together might be capable of highlighting why it is so difficult to regulate government-led surveillance. The proposals are probably best understood as attempts to reveal the substantive issues which should be resolved in order to establish the basis for progress. Although some may claim that it is unlikely that global consensus on these matters will be achieved at all given the relevance of cultural differences across and within regions (Vecellio & Segate, 2019, pp. 83-88), it is precisely this audacious aspiration of becoming the basis for a discussion on detailed universal standards which bears the potential to highlight links, gaps and frictions.

What is (mass-)surveillance?

Probably the most discussed issue in the aftermath of the Snowden revelations is what government-led (mass-)surveillance is, could be, and should be (Buono & Taylor, 2017). The use of ‘Big Data’ to support national security, law enforcement and public order requires new and adjusted safeguards to protect individuals and their rights (Broeders et al., 2017, pp. 311-321). The DGC principles do not explicitly provide a conceptual definition of mass surveillance. Reading principles one, three, four, five and six together and combining their content into one statement, surveillance seems to be based on the use of cyberweapons which exploit security vulnerabilities. Furthermore, cyberweapons should not be used to target the private sector, and their use can potentially result in mass events.

Article 2 paragraph 1 of the LI defines surveillance as ‘any monitoring or observing of persons, listening to their conversations or other activities, or any other collection of data referring to persons regardless whether this is content data or metadata.’ It goes on to add that ‘surveillance is carried out by a state, or on its behalf, or at its order’ (Cannataci, 2018a, p. 9). To understand this broad definition better, it should be added that the LI uses a distinction between ‘surveillance data’ and ‘non-surveillance data’ which stems from the understanding that data is produced either by systems that are predominantly operated to carry out surveillance, or by systems which have another primary purpose. The obligations stemming from the use of these different kinds of data differ in the LI, and it remains open whether it is always possible to make such a categorical distinction in practice. There might be cases in which the repurposing of historical surveillance data, or the reinterpretation or recombination of openly available data with confidential data, is not covered by the provisions of the LI. Potentially, the LI is too strongly focused on the generation and initial processing of data, thereby underestimating the amount of usable information already available. In this regard, it might also be interesting to consider how the trend towards open source intelligence (OSINT) impacts the work of states when carrying out surveillance (Hassan & Hijazi, 2018, pp. 341-345).

Article 3 paragraph 6 exempts the surveillance of military personnel from the scope of the LI. Hence, this is the only area in which the LI does not mandate that states create a dedicated law for the authorisation of surveillance which also comes with the obligation to install corresponding oversight institutions, safeguards, and remedies (Cannataci, 2018a, pp. 9, 11). An important consequence of this exemption is that all other surveillance activities - those carried out within a state through law enforcement agencies, those carried out externally through security and intelligence services, as well as those carried out through (private) mandated entities working in both domains - are subject to the same principles and rules according to the LI.

This very broad definition of surveillance covers many activities, which makes a clear distinction between ‘targeted’ and ‘mass’/‘bulk’ surveillance potentially less relevant. Indeed, it might be more useful to focus the discussion on the concept of surveillance as such, since the emphasis on ‘bulk’ or ‘mass’ capabilities often seems to add little in substance. Ultimately, such narrowing of the focus could transform the public dialogue into a mock debate that is predominantly held to capture attention. Especially when considering the rich case law encapsulated in the ECtHR’s factsheet on ‘mass surveillance’ (2019), it needs instead to be decided on a case-by-case basis whether any kind of surveillance is legitimate (i.e., proportionate and necessary). The relevant substantive criteria for deciding whether ‘bulk surveillance’ measures are compliant with human rights law also need to be as strict and coherent as those for targeted surveillance measures.

Substantive scope

While it has already been established that both proposals transcend territorial borders - which might support making progress on otherwise very complex problems relating to data localisation and differing regulatory requirements due to incompatible standards and fragmentation, as highlighted in particular through the CJEU Schrems case (Kuner, 2017, pp. 917-918) - this feature in turn results in the requirement to define the substantive scope of the instruments more granularly. Since the kinds of action they target are less clear, it is all the more important to define them in principle. Smith’s DGC attempts to resolve this issue by embedding the proposal in a description or narrative of a constantly ongoing hostile conflict in the digital domain, which results in a predominantly negative and almost implicit definition of the substantive scope. As a consequence, one has to share Smith’s underlying assumptions and worldview before being able to engage substantively with the content.

If one were to define the substantive scope positively, however, there are at least two perspectives which need to be taken into account: the individual/user perspective, and the perspective of the collective/regulatory institutions. When it comes to the user, French sociologist Frédéric Martel (2014, Épilogue) has made the valid argument that even in times of globalisation, users with different cultural backgrounds will continue to use the internet differently, depending on factors such as shared language, shared script, cultural perception, personal interest, etc. From the outset, this has nothing to do with interconnected infrastructure or technological features. Rather, it is the digital expression of Ludwig Wittgenstein’s famous quote: ‘The limits of my language mean the limits of my world’ (1974, p. 68). In other words, one can complain about the fact that the government of the People’s Republic of China puts technical restrictions in place which make it very difficult for its citizens and residents to access censored content from abroad. However, to be consistent, one would then also have to complain only on behalf of those who are able to read the Latin alphabet and have a sufficient command of English, for example. It is very difficult to establish who truly feels limited in their personal development (Wang & Mark, 2015, pp. 19-20). Certainly, limiting technical capabilities and putting policies in place that aim at restricting the interests of those who crave access to censored content will hamper the existence of a truly universal and global internet. Yet it remains uncertain how many users are genuinely interested in using such a network, at least for a significant amount of time, or when it would seem reasonable to expect so.

Turning to the institutional/regulatory perspective, the issue does not get much easier. As technology is constantly evolving, it makes a difference whether the substantive scope is defined as the ‘internet’ or the ‘digital domain’, the latter of which might for example also include digital information that is not transferred through the use of internet protocol addresses. One could think of the transfer of digital data using technologies such as Bluetooth or Near-field Communication (NFC). In Article 1 sentence 1, the LI refers to ‘surveillance through digital technology’ (Cannataci, 2018a, p. 7), which means it adopts a relatively broad scope. This wording is the result of an attempt to align the substantive scope with the mandate of the SRP, which is tied to the UN’s work on human rights in the digital age (Cannataci, 2018a, p. 8). Still, one has to question whether this substantive scope is precise enough for an instrument that attempts to be of global relevance and have a cross-cultural dimension. Another practical issue that might be interesting to consider in the context of government-led surveillance is the question of what should happen if ‘traditional surveillance’ using non-digital means is combined with digital surveillance during a surveillance operation. In such situations it might even be relevant to have universal standards for both the digital and non-digital domains.

Necessary and proportionate

Even strong proponents of individual autonomy, privacy and data protection will agree that some form of government-led surveillance is necessary under limited circumstances, which typically include ‘the prevention, investigation, detection or prosecution of crime […] increasing public safety or protecting state security’, as stated in Article 3 paragraph 3 LI (Cannataci, 2018a, p. 11). It is worthwhile comparing this provision with Article 8 paragraph 2 ECHR, which states that the right to respect for private and family life may be limited if it ‘[…] is necessary in a democratic society in the interests of national security, public safety or the economic well-being of the country, for the prevention of disorder or crime, for the protection of health or morals, or for the protection of the rights and freedoms of others’ (emphasis added). Hence, the LI contains fewer objectives that legitimise surveillance, and notably does not include the economic well-being of the state. The LI’s memorandum explains this by noting that governments may still legislate to punish criminal offences affecting economic well-being if such crimes are serious enough (Cannataci, 2018a, p. 14).

While this discussion of the legitimate objectives that make surveillance necessary is interesting, more public attention was dedicated to the proportionality of surveillance measures in the aftermath of Edward Snowden’s revelations. The Electronic Frontier Foundation, one of the leading US civil society organisations, has developed 13 principles aimed at ensuring proportionality, including judicial authority, due process, user notification, transparency and oversight (Electronic Frontier Foundation, 2015). While the formulation of such general principles seems helpful, this might be one area where a much more granular understanding of the terms is needed. Although proportionality is a concept frequently used in international public law, there seems to be very little shared understanding of what it means objectively (Newton & May, 2014, pp. 28-32). The LI makes an attempt in this direction by proposing a three-step test in paragraph 5 of the preamble: the general usefulness of the measure for the purpose; the use of the least invasive measure available; and proportionality stricto sensu, that is, weighing what needs to be sacrificed or affected if the feasible and least invasive measure is applied (Cannataci, 2018a, p. 3).

In contrast, the proposal for a DGC only touches upon this issue by requiring the targeted use of cyberweapons as well as restraint in their development (Smith, 2017). Without taking a position on which kind of test is ‘the right one’, these considerations highlight that it is paramount to explore what the terms ‘necessary’ and ‘proportionate’ mean objectively and universally if any kind of international regulatory framework for this area is to be developed. If it is impossible to reach such a detailed understanding through clear definitions and detailed procedures, at the very least an institution needs to be appointed with the authority to decide the matter on a case-by-case basis.

Transborder access to personal data

Since both instruments have a universal scope, it is particularly interesting to see how they address the question of transborder access to data. This issue was recently highlighted by the Microsoft Ireland case, which developed after the United States government attempted to access data stored on a server located in Ireland, bypassing lengthy traditional mutual legal assistance arrangements. Smith’s corporation was thus at the centre of the issue, although a similar case was also pending against Google at the time (Currie, 2017). Before the US Supreme Court decided the case, the situation was resolved by the US legislator with the introduction of the CLOUD Act (Daskal, 2018). Smith proposes in the DGC that technology companies remain neutral in this regard, focusing solely on the relationship with their users (Smith, 2017). Whether Microsoft and other technology corporations truly lived up to this principle during the discussion of the CLOUD Act is very questionable, however, since Microsoft, Google and others supported the legislation when it was debated in the United States Congress (Brody, 2018). Furthermore, the DGC does not explicitly address the issue of cross-border access. Hence, the pledge for neutrality remains abstract and weak. In the end, the DGC is set in a landscape where government activities are omnipresent in the digital domain, and where high information security standards would allow individuals to enjoy more privacy, regardless of where their data is physically located in the world.

The LI dedicates an entire provision, Article 16, to this issue. It envisions that its signatories would set up a new international institution consisting of experts from all participating states, which would also monitor the implementation of the LI more generally. These experts would have the capacity to respond to cross-border demands swiftly, in procedures where the individuals affected would be represented by a ‘Human Rights Defender’ (Cannataci, 2018b, pp. 32-35). While the proposed institution is compelling due to its completeness and its potential to deliver a valid multilateral solution to the cross-border access problem, it remains highly doubtful whether states would be willing to transfer that much sovereign power to an international body whose actions would have significant implications for the success of criminal investigations and, potentially, intelligence service activities. As the discussion within the EU about the e-evidence package shows, even in a structured regional cooperation bloc the establishment of such a central authority is not envisaged, and many gaps and issues remain to be resolved (Biasotti et al., 2018, pp. 375-420).

Strategies for regulation

While the presented proposals are commendable in that they aim to support the ideal of the internet as a universal space dedicated to personal autonomy and individual development, the previous section makes clear that this requires a much more detailed substantive understanding of key terms and concepts among the international community. While the LI is certainly more comprehensive and detailed than the DGC, it struggles to set a clear substantive scope, to define the subject matter, and to deliver solutions on the understanding of terms such as ‘necessary and proportionate’. Key topics such as transborder access to personal data are addressed, but the whole document can only claim to be a modest step towards (more) international consensus. Hence, before universal regulation through institutions like the UN seems realistic, more detailed understanding and broad agreement on the substantive issues is required. To enable more progress at the UN level, De Hert and Papakonstantinou (2018, pp. 529-531) have suggested creating a dedicated privacy agency which could work in a similar way to the World Intellectual Property Organisation. However, this seems politically almost impossible to achieve under the current circumstances outlined in the introduction. Furthermore, attempts by the UN to gain more control over the internet have failed in the recent past. Just before the Snowden revelations, the International Telecommunication Union (ITU) was used as a vessel for an attempt to replace the open multi-stakeholder mechanism with an open or closed model of traditional, state-focused multilateral governance. During negotiations in the context of the 2012 World Conference on International Telecommunications, this attempt was blocked by a coalition led by Western countries, mainly due to concerns that repressive regimes might gain too much control over the internet (Glen, 2014, pp. 643-652). Finally, there might also be valid factual reasons to have different standards and concepts, as Martel’s research with its focus on the user perspective highlights (2014, Épilogue). Maybe the idea of a borderless internet-driven world was, and remains, an illusion, as Goldsmith and Wu (2006) proposed well over a decade ago, emphasising the role of governments and the many layers that enable the existence of the digital space.

Nevertheless, assuming for a moment that there could be more substantive consensus, the crucial question of effective enforcement also remains largely unanswered in the proposals. The DGC does not address this aspect in detail and mostly calls on states to refrain from certain actions, while the LI develops in Article 16 a perspective on oversight of its provisions through the establishment of a powerful international body operating in a centralised manner. How delicate such an attribution and redistribution of sovereign power can become is illustrated by the ongoing discussion on the ‘rule of law’ in the EU (Weiler, 2016, pp. 313-326). Furthermore, the German civil society organisation Stiftung Neue Verantwortung has recently presented recommendations to improve intelligence oversight (Vieth & Wetzling, 2019), which is another important element in guaranteeing individuals’ rights and freedoms.

Hence, it seems that at the time of writing, universal regulation of government-led surveillance is predominantly an academic (idealistic? utopian?) endeavour. It is worthwhile to map the current landscape to potentially pave the way for progress, but given the complexity of the issue, universal regulation is unrealistic as a practical solution. States and regional political actors instead seem to prefer to use traditional assets and strategies to shape the internet and to claim jurisdiction and power. Furthermore, restrictions on the use of the Android operating system by companies like Huawei (Afhüppe & Scheuer, 2019), or the struggle to establish trust in the deployment of 5G mobile networks, suggest that this is not about to change in the near future (Matsumoto, 2019; see also Cartwright, 2020).

What other alternatives are there, leaving aside the re-territorialisation of the internet, which threatens to fragment it? Is it possible to regulate topics such as government-led surveillance with strategies that rely neither on extraterritorial effect nor on the reinforcement of physical borders? Such a third way between universal frameworks and national regulation might be the establishment of ‘blocs of trust’. One international agreement employing this strategy is Convention 108+ of the Council of Europe; another is the Budapest Convention on Cybercrime. These international treaties allow for a traditional harmonisation of national laws, while being open to non-member states of the Council of Europe, which can either accede to them or autonomously implement their principles in national regulation (Ukrow, 2018; Polakiewicz, 2019). Additionally, their provisions contain many useful and detailed principles, but are drafted in a way that avoids excessive regulatory detail, respecting the sovereignty of states.

It is unclear how successful this strategy can be. It will therefore be particularly interesting to observe the push to establish Convention 108+ as a new global regulatory baseline for data protection, one that is less demanding and detailed than the EU General Data Protection Regulation, but more concrete than the abstract provision on privacy in Article 17 of the ICCPR at the UN level. Maybe the areas of data protection and privacy can become pioneers in this regard, which could in turn inspire the regulation of surveillance as well. However, initial research on the requirements to adapt or (re-)model data protection laws finds that it will already be difficult for Latin American and African countries to join the 47 member states of the Council of Europe in signing up to Convention 108+, and for Asian countries it will be even more difficult, although South Korea and Thailand seem to have legal frameworks that are already mostly in line with the treaty (Greenleaf, 2020). Moreover, this is only an analysis of requirements in terms of legal provisions, leaving aside the procedures and dynamics of political negotiation processes, as well as the question of why and how countries should develop the political will to accede.

Conclusion

Developing blueprints for international instruments that might ultimately help to create a more harmonious and detailed framework for the regulation of surveillance on and through the internet is a commendable effort. However, in the absence of political will to develop overarching multilateral proposals on the matter, other strategies for regulation are needed. Hence, it seems that the only available alternative to such an unlikely, truly universal international framework is the development of regional frameworks which are open to third states, either through accession and ratification or through the modelling of national laws on such agreements. The Council of Europe has been successful in developing such texts in the past (Gstrein, 2019, pp. 89-90), but has also recently struggled to provide a productive forum for political exchange itself (Rankin, 2019). Certainly, this approach comes with the danger that, instead of falling back to the national level, the internet might become separated along regional lines. Still, if none of the multilateral options gain considerable influence and develop further, the only possible outcome is fragmentation of the digital space, which would also reverse many achievements made in recent times.

Acknowledgement

The author thanks Taís F. Blauth for reviewing the manuscript.

References

Afhüppe, S., & Scheuer, S. (2019, September 2). Huawei-Chairman wirbt für europäisches Ökosystem als Konkurrenz zu Google und Apple. Handelsblatt. https://www.handelsblatt.com/technik/it-internet/interview-huawei-chairman-wirbt-fuer-europaeisches-oekosystem-als-konkurrenz-zu-google-und-apple/24970238.html

Ananthaswamy, A. (2011). Age of the splinternet. New Scientist, 211(2821), 42–45. https://doi.org/10.1016/S0262-4079(11)61710-7

Anishchuk, A. (2014, July 4). Russia passes law to force websites onto Russian servers. Reuters. https://www.reuters.com/article/us-russia-internet-bill-restrictions-idUSKBN0F91SG20140704

Barlow, J. P. (1996). A Declaration of the Independence of Cyberspace. https://www.eff.org/nl/cyberspace-independence

Biasotti, M. A., Mifsud Bonnici, J. P., Cannataci, J., & Tudorica, M. (2018). The way forward: A roadmap for the European Union. In M. A. Biasotti, J. P. Mifsud Bonnici, J. Cannataci, & F. Turchi (Eds.), Handling and exchanging electronic evidence across Europe (pp. 375–420). Springer. https://doi.org/10.1007/978-3-319-74872-6_18

Brody, B. (2018, February 7). Tech Giants Back U.S. Bill Governing Cross-Border Data Searches. Bloomberg.

Broeders, D. (2017). Big Data and security policies: Towards a framework for regulating the phases of analytics and use of Big Data. Computer Law & Security Review, 2017(33), 309–323. https://doi.org/10.1016/j.clsr.2017.03.002

Buono, I., & Taylor, A. (2017). Mass Surveillance in the CJEU: Forging a European consensus. Cambridge Law Journal, 76(2), 250–253. https://doi.org/10.1017/S0008197317000526

Buono, L. (2019). The genesis of the European Union’s new proposed legal instrument(s) on e-evidence. ERA Forum, 19. https://doi.org/10.1007/s12027-018-0525-4

Cannataci, J. (2018a). Report to the Human Rights Council, A/HRC/37/62. United Nations, Office of the High Commissioner for Human Rights. https://ap.ohchr.org/documents/dpage_e.aspx?si=A/HRC/37/62

Cannataci, J. (2018b). Report to the Human Rights Council, A/HRC/37/62, Appendix 7 Working Draft Legal Instrument on Government-led Surveillance and Privacy. United Nations, Office of the High Commissioner for Human Rights. https://www.ohchr.org/Documents/Issues/Privacy/SR_Privacy/2018AnnualReportAppendix7.pdf

Cartwright, M. (2020). Internationalising state power through the internet: Google, Huawei and geopolitical struggle. Internet Policy Review, 9(3). https://doi.org/10.14763/2020.3.1494

Conger, K., Sanger, D. E., & Shane, S. (2019, December 9). Microsoft Wins Pentagon’s $10 Billion JEDI Contract, Thwarting Amazon. New York Times. https://www.nytimes.com/2019/10/25/technology/dod-jedi-contract.html

Currie, R. J. (2017). Cross-Border Evidence Gathering in Transnational Criminal Investigation: Is the Microsoft Ireland Case the “Next Frontier”? Canadian Yearbook of International Law, 54, 63–97. https://doi.org/10.1017/cyl.2017.7

Daskal, J. (2018). Microsoft Ireland, the CLOUD Act, and International Lawmaking 2.0. Stanford Law Review, 71. https://www.stanfordlawreview.org/online/microsoft-ireland-cloud-act-international-lawmaking-2-0/

De Hert, P., & Papakonstantinou, V. (2018). Moving Beyond the Special Rapporteur on Privacy with the Establishment of a New, Specialised United Nations Agency: Addressing the Deficit in Global Cooperation for the Protection of Data Privacy. In D. J. Svantesson & D. Kloza (Eds.), Trans-Atlantic Data Privacy Relations as a Challenge for Democracy. Cambridge University Press.

Edgar, T. (2017). Beyond Snowden: Privacy, Mass Surveillance, and the Struggle to Reform the NSA. Brookings Institution Press.

European Court of Human Rights. (2019). Factsheet – Mass surveillance. https://www.echr.coe.int/Documents/FS_Mass_surveillance_ENG.pdf

Ewert, C., Kaufmann, C., & Maggetti, M. (2020). Linking democratic anchorage and regulatory authority: The case of internet regulators. Regulation & Governance, 14, 184–202. https://doi.org/10.1111/rego.12188

Electronic Frontier Foundation. (2015). 13 International principles on the application of human rights in communication surveillance.

Fox, D. (2018). Digitale Souveränität. Datenschutz und Datensicherheit, 2018(5), 271.

Galbraith, J. (2020). United States and United Kingdom Sign the First Bilateral Agreement Pursuant to the CLOUD Act, Facilitating Cross-Border Access to Data. American Journal of International Law, 114(1), 124–128. https://doi.org/10.1017/ajil.2019.80

Galič, M., Timan, T., & Koops, B.-J. (2017). Bentham, Deleuze and Beyond: An Overview of Surveillance Theories from the Panopticon to Participation. Philosophy & Technology, 30(1), 9–37. https://doi.org/10.1007/s13347-016-0219-1

Gallagher, R., & Moltke, H. (2018, June 25). The Wiretap Rooms. The Intercept. https://theintercept.com/2018/06/25/att-internet-nsa-spy-hubs/

Glen, C. M. (2014). Internet Governance: Territorializing Cyberspace? Politics & Policy, 5(42), 635–657. https://doi.org/10.1111/polp.12093

Goldsmith, J., & Wu, T. (2006). Who controls the internet? Illusions of a borderless world. Oxford University Press.

Gordon, S. (2010). Civilian Protection – What’s left of the norm? In S. Perrigo & J. Whitman (Eds.), The Geneva Convention under Assault. Pluto Press.

Gräf, E., Lahmann, H., & Otto, P. (2018). Die Stärkung der digitalen Souveränität, Diskussionspapier May 2018 i.RightsLab [Discussion Paper]. iRights.Lab. https://irights-lab.de/wp-content/uploads/2018/05/Themenpapier_Souveraenitaet.pdf

Greenleaf, G. (2020). How far can Convention 108+ ‘globalise’?: Prospects for Asian accessions. Computer Law & Security Review. https://doi.org/10.1016/j.clsr.2020.105414

Greenleaf, G., & Cottier, B. (2020). 2020 ends a decade of 62 new data privacy laws. Privacy Laws & Business International Report, 163, 24–26.

Gstrein, O. J. (2019). The Council of Europe as an Actor in the Digital Age: Past Achievements, Future Perspectives. In J. Jungfleisch (Ed.), Festschrift der Mitarbeiter*Innen und Doktorand*Innen zum 60. Geburtstag von Univ.-Prof. Dr Thomas Giegerich (pp. 77–90). Alma Mater Verlag Saarbrücken.

Gstrein, O. J., & Kochenov, D. (2020). Digital Identity and Distributed Ledger Technology: Paving the Way to a Neo-Feudal Brave New World? Frontiers in Blockchain, 3. https://doi.org/10.3389/fbloc.2020.00010

Hassan, N., & Hijazi, R. (2018). Open Source Intelligence Methods and Tools. Apress.

Hildebrandt, M. (2013). Extraterritorial jurisdiction to enforce in cyberspace?: Bodin, Schmitt, Grotius in cyberspace. University of Toronto Law Journal, 63(2), 196–224. https://doi.org/10.3138/utlj.1119

Hill, R. (2014). The internet, its governance, and the multi-stakeholder model. Info, 16(2), 16–46. https://doi.org/10.1108/info-05-2013-0031

Kranzberg, M. (1995). Technology and History: “Kranzberg’s Laws”. Bulletin of Science, Technology & Society, 15(1), 5–13. https://doi.org/10.1177/027046769501500104

Kuner, C. (2017). Reality and Illusion in EU Data Transfer Regulation Post Schrems. German Law Journal, 18(4), 881–918. https://doi.org/10.1017/S2071832200022197

Lyon, D. (2018). The Culture of Surveillance: Watching As a Way of Life. Polity Press.

Mann, M., Daly, A., & Molnar, A. (2020). Regulatory arbitrage and transnational surveillance: Australia’s extraterritorial assistance to access encrypted communications. Internet Policy Review, 9(3). https://doi.org/10.14763/2020.3.1499

Mansell, W., & Openshaw, K. (2010). The History and Status of the Geneva Conventions. In S. Perrigo & J. Whitman (Eds.), The Geneva Convention under Assault. Pluto Press.

Martel, F. (2014). Smart – Enquête sur les internets. Editions Stock.

Matsumoto, F. (2019, August 31). Huawei to cut engineers in Australia and restructure after 5G ban. Financial Times. https://www.ft.com/content/d88608f4-ca02-11e9-af46-b09e8bfe60c0

Nemitz, P. (2018). Constitutional democracy and technology in the age of artificial intelligence. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133), 20180089. https://doi.org/10.1098/rsta.2018.0089

Newman, L. H. (2020, January 14). Windows 10 Has a Security Flaw So Severe the NSA Disclosed It. Wired. https://www.wired.com/story/nsa-windows-10-vulnerability-disclosure/

Newton, M., & May, L. (2014). Proportionality in International Law. Oxford University Press.

Nowak, M. (2018). A World Court of Human Rights. In G. Oberleitner (Ed.), International Human Rights Institutions, Tribunals, and Courts. Springer Nature. https://doi.org/10.1007/978-981-10-5206-4_10

O’Hara, K., & Hall, W. (2018). Four internets: The geopolitics of digital governance (No. 206; CIGI Papers). https://www.cigionline.org/sites/default/files/documents/Paper%20no.206web.pdf

Polakiewicz, J. (2019, February 26). Reconciling security and fundamental rights. Conference on Criminal Justice in Cyberspace. https://www.coe.int/en/web/dlapil/-/conference-on-criminal-justice-in-cyberspace

Privacy International. (2019, August 15). Two states admit bulk interception practices: Why does it matter? [Blog post]. https://privacyinternational.org/node/3164

Rankin, J. (2019, May 17). Council of Europe votes to maintain Russia’s membership. The Guardian.

Ratner, S. (2008). Geneva Conventions. Foreign Policy, 165, 26–32. https://www.jstor.org/stable/25462268

Sherwin, E. (2019, April 16). Russia’s parliament votes to unplug internet from world. Deutsche Welle. https://www.dw.com/en/russias-parliament-votes-to-unplug-internet-from-world/a-48334411.

Smith, B. (2017). The need for a Digital Geneva Convention [Blog post]. Microsoft on the Issues. https://blogs.microsoft.com/on-the-issues/2017/02/14/need-digital-geneva-convention/#sm.0001hkfw5aob5evwum620jqwsabzv

Smith, B. (2018). 34 companies stand up for cybersecurity with a tech accord [Blog post]. Microsoft on the Issues. https://blogs.microsoft.com/on-the-issues/2018/04/17/34-companies-stand-up-for-cybersecurity-with-a-tech-accord/

Snowden, E. (2019). Permanent Record. Pan Macmillan.

Stoycheff, E., Liu, J., Xu, K., & Wibowo, K. (2019). Privacy and the Panopticon: Online mass surveillance’s deterrence and chilling effects. New Media & Society, 21(3), 602–619. https://doi.org/10.1177/1461444818801317

de Terwangne, C. (2018). The work of revision of the Council of Europe Convention 108 for the protection of individuals as regards the automatic processing of personal data. International Review of Law, Computers & Technology, 28(2), 118–130. https://doi.org/10.1080/13600869.2013.801588

Thumfart, J. (2020). Private and public just wars: Distributed cyber deterrence based on Vitoria and Grotius. Internet Policy Review, 9(3). https://doi.org/10.14763/2020.3.1500

U.K. Government. (2016). Operational case for bulk powers. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/504187/Operational_Case_for_Bulk_Powers.pdf

Ukrow, J. (2018). Data protection without frontiers? On the relationship between EU GDPR and amended CoE Convention 108. European Data Protection Law Review, 4(2), 239–247. https://doi.org/10.21552/edpl/2018/2/14

United Nations. (2018). The right to privacy in the digital age (A/C.3/73/L.49/Rev.1). United Nations, General Assembly.

Vazquez Maymir, S. (2020). Anchoring the need to revise cross-border access to e-evidence. Internet Policy Review, 9(3). https://doi.org/10.14763/2020.3.1495

Vecellio Segate, R. (2019). Fragmenting Cybersecurity Norms Through the Language(s) of Subalternity: India in ‘the East’ and the Global Community. Columbia Journal of Asian Law, 32(2).

Vieth, K., & Wetzling, T. (2019). Data-driven intelligence oversight. Recommendations for a System Update [Report]. Stiftung Neue Verantwortung. https://www.stiftung-nv.de/sites/default/files/data_driven_oversight.pdf

Wang, D., & Mark, G. (2015). Internet censorship in China: Examining user awareness and attitudes. ACM Transactions on Computer-Human Interaction, 22(6). https://doi.org/10.1145/2818997

Weiler, J. H. H. (2016). Epilogue: Living in a Glass House. In C. Closa & D. Kochenov (Eds.), Reinforcing Rule of Law Oversight in the European Union (pp. 313–326). Cambridge University Press.

West, D. M. (2018). The future of work. Brookings Institution Press.

Wittgenstein, L. (1974). Tractatus Logico Philosophicus (D. F. Pears & B. F. McGuiness, Trans.). Routledge.

Yang, Y. (2019, April 21). Trade war with US delays China’s rules curbing data transfers. Financial Times. https://on.ft.com/2UYQ6Eo

Zuboff, S. (2019). Surveillance Capitalism and the Challenge of Collective Action. New Labor Forum, 28(1), 10–29. https://doi.org/10.1177/1095796018819461

The legal geographies of extradition and sovereign power


This paper is part of Geopolitics, jurisdiction and surveillance, a special issue of Internet Policy Review guest-edited by Monique Mann and Angela Daly.

Introduction

Article 24 of the Budapest Convention on Cybercrime (2001) (the ‘Budapest Convention’) contains specific provisions on extradition for various online offences, including crimes related to child pornography, computer-related fraud, and infringements of copyright (Clough, 2014; Council of Europe, 2001). It introduces jurisdictional requirements that supplement, and in some cases replace, those forged through pre-existing bilateral and multilateral extradition arrangements that are incorporated into national laws. A key objective for the 47 member nations of the Council of Europe - and 28 other nations that have signed and ratified the Budapest Convention (Council of Europe, 2020), which include the United States (US), Canada and Australia - is to enhance cooperation in the investigation and prosecution of transnational online offending. This includes fast-tracking the extradition process for specified online offences involving terms of imprisonment of 12 months or more (Clough, 2014). These processes work in much the same way as the European Arrest Warrant (EAW) (Warren & Palmer, 2015, pp. 324-338).

While the trans-geographic nuances of online activity generate new forms of digital and data sovereignty where the ownership and control of information moves beyond any single nation state (see Couture & Toupin, 2019), extradition remains wedded to legal principles based on physical territory and national sovereignty. The voluntary nature of ratification also has the potential to limit the reach and enforceability of cooperative transnational treaties. In light of these issues, we argue that the fast-tracking mechanisms of the Budapest Convention do not address the inherent legal and geographic conflicts associated with established extradition procedures that tend to be slow, cumbersome, politically sensitive, and doctrinally technical.

Our central focus in this paper is to demonstrate how complex due process issues are embedded within the process of extradition. These fundamental issues are characterised by conflicting legal geographies that pre-date, and are not reconciled by, the Budapest Convention or other bilateral and multilateral arrangements seeking to fast-track the extradition process, irrespective of the nations involved. We demonstrate that, even with enhanced transnational online surveillance capabilities, investigators and prosecutors must be sensitive to the geographic impacts of due process that pre-date the digital age and have been forged through the historical development of extradition law. This convergence of law and geography has significant ramifications in light of the unknown international scale of global online surveillance by any nation (see Geist, 2015), the impact of this issue on the world’s national legal systems (Svantesson, 2017), and the evolving role of extradition in reflecting the apparent willingness of certain nations, such as the US, to commence criminal proceedings for a wide range of offences to protect narrow commercial, moral or law enforcement interests (Bauman et al., 2014). These objectives, we argue, serve to undermine the very types of transnational justice cooperation envisaged by the Budapest Convention.

Extradition must be viewed in light of its ongoing political and legal ramifications that reflect, and contribute to, the degree of international comity between nations. For example, the US and Canada have a long history of contentious bilateral extradition arrangements pre-dating the digital age (Miller, 2016), which inform contemporary approaches to extradition for online and many other forms of offending. The US ratified the Budapest Convention in 2006, while Canada did so in 2015. This temporal split delays the coordination of the Budapest Convention requirements between these two nations, which also have to negotiate distinct approaches to due process that affect domestic criminal procedures for evidence collection, the apprehension and questioning of suspects, as well as the exchange of evidence and fugitives. Each of these factors can potentially undermine transnational justice cooperation and magnify the difficulties of determining an extradition request.

In addition, Article 24(6) of the Budapest Convention incorporates the principle of aut dedere aut judicare, alternatively known as “extradite or prosecute”. This is a central aspect of continental European extradition law that can offset prejudicial disparities “in domestic legal systems with respect to both substantive law and procedure”, and the “potential for bias and prejudice against the surrendered person, based solely on his [sic] foreign origin and nationality” (Plachta, 1999, p. 88). However, this principle is based on nationality, rather than on a key aspect of the legal geography that characterises many forms of contemporary cyber offending, where the suspected offender can commit all or most of the wrongful activity outside of the affected jurisdiction (Mann, Warren, & Kennedy, 2018). Significantly, extradition laws and procedures historically developed in line with the presumption that an extraditee had fled the jurisdiction where the harmful act occurred.

As we explain, the contemporary legal geography of extraterritorial crimes involves a direct tension between theories of subjective and objective territoriality. These are central yet highly problematic rationales for asserting jurisdictional power beyond recognised national geographic boundaries. Our argument is framed in light of a leading Canadian case involving a request for extradition issued by the US in 2012 regarding the offence of child luring via the internet. The suspect was physically located in Canada at all times during the incident, but it became clear during approximately seven and a half years of legal proceedings in Canada that US enforcement surveillance had identified several child victims who were not mentioned in the initial extradition request. While this case pre-dates Canada’s ratification of the Budapest Convention, we believe it aptly demonstrates the hazards of fostering transnational justice cooperation through distinct national systems, which encompasses both extradition and the transfer of criminal evidence through mutual legal assistance processes. We argue the separation of these issues reflects a different form of due process, or “rule-with-law” (Bowling & Sheptycki, 2015), that ultimately favours granting an extradition request, even if there are discernible due process and human rights concerns linked to the surrender of crime suspects to jurisdictions where their prior connection is limited (Mann et al., 2018).

Our argument proceeds in four parts. First, we outline how subjective and objective territoriality are embedded and highly problematic geographic aspects of extradition, and discuss their relationship to mutual legal assistance processes. Second, we provide a detailed description of the Canadian cases scrutinising the US request for the extradition of Marco Viscomi for alleged child luring. Of particular importance is the range of legal and factual issues considered by Canadian courts, and arguments questioning whether Viscomi could be sufficiently identified as the correct suspect via his subscription to the internet service provider (ISP) address identified by US authorities, an issue that has received limited scholarly attention to date. Third, we discuss the importance of focusing on due process of law in transnational cyber investigations, while at the same time suggesting that prevailing views of the mobility of data must be divorced from the idea that individuals facing extraterritorial criminal charges should be considered equally mobile. We consider this fosters a form of “rule-with-law” (Bowling & Sheptycki, 2015) that prioritises bilateral and multilateral interests in crime control over the protection of individuals who are sought for extradition. We conclude by suggesting the power of any nation to assert extraterritorial criminal jurisdiction is preserved through extradition processes (Svantesson, 2017), while greater credence should be given to holding transnational trials in the geographic location where the harm emanated (Dugard & Van den Wyngaert, 1998; Mann et al., 2018).

Transnational data, extradition and mutual legal assistance

Many authors claim that questions of internet jurisdiction require reformulation due to the inherently un-territorial nature of global online data flows (Daskal, 2015; Svantesson, 2017). We disagree. This is because due process of law was built into many established stages of the criminal process, including domestic extradition laws and procedures, that sought to deal with transnational offending before the advent of the global World Wide Web. The significant question is whether and how these laws are upheld or subverted in any individual case. For example, the famed Kim Dotcom case, which remains unresolved at the time of writing despite eight years of hearings in the New Zealand (NZ) court system, involved significant questions about the legality of the search of Dotcom’s residence by NZ police acting on a request by US authorities. While the parameters of a lawful search were clear under NZ law (Boister, 2017a; Palmer & Warren, 2013), the transnational nature of the offence added increased pressure on NZ investigators, resulting in the selective use of existing laws, or slippages in conventional notions of due process that became a form of “rule-with-law” (Bowling & Sheptycki, 2015). This can also extend to the tactical use of extradition in cases with or without online components, or where a suspect is wanted by one jurisdiction, yet transiting through another to a third destination (United States of America v. Meng, 2019). We suggest that rather than introducing new legal requirements to deal with the intricacies of cyber activity, existing extradition laws and due process requirements can appropriately balance the interests of the requesting nation in obtaining justice for transnational cybercrime suspects who might never have entered the jurisdiction where the effects of the wrongful act have occurred (Mann et al., 2018).

Despite these arguments, judicial practice in most Western nations remains tethered to the prevailing view that those suspected of online criminal activity should be considered as geographically mobile as the digital harms and evidence associated with their behaviour. We suggest the fast-track extradition measures in Article 24 of the Budapest Convention reflect this view. This logic can produce a complex set of legal geographies associated with sovereign power, particularly during extradition processes involving online child sex offences, where questions of bilateral and multilateral political comity risk undermining individual due process and human rights protections under the laws of extradition (Arnell, 2013; Arnell, 2018; Blakesley, 2008; Dugard & Van den Wyngaert, 1998; Murchison, 2007). We suggest that a renewed emphasis on established geographies of extradition is required to fully appreciate the limits of altering these processes for online offences. This requires understanding the distinction between subjective and objective territoriality, which was consolidated by the Harvard Draft Extradition Convention (see Burdick, 1935) and is regularly cited by leading scholars as a central geographic aspect of extradition law (Blakesley, 1984).

Subjective territoriality can serve as a bar to extradition by locating the trial where any key element of the crime has emanated, regardless of whether the effect is felt elsewhere. The forum bar test in English extradition law reflects this principle (Mann et al., 2018). By contrast, objective territoriality, or the “effects test”, allows a nation to assert jurisdiction extraterritorially by commencing prosecution where the harm was experienced (Raustiala, 2009). Objective and subjective territoriality apply to various forms of criminal conduct, and have been invoked inconsistently by some jurisdictions, such as the US, to assert jurisdiction over offences with limited territorial impact (Blakesley, 2008, p. 137). However, most extradition requests are predicated on subjective territoriality, under the assumption that the offender is a fugitive from the location where the harm was committed (Abelson, 2009). Where the offence has occurred remotely, objective territoriality might make logical sense in terms of the nature of victimisation (Svantesson, 2017). However, this principle also raises many concerns about potential bias, because objective territoriality emphasises conceptions of harm, justice and penalty that are forged solely from the perspective of where the effects are experienced. This emphasis can serve to undermine due process for suspects located offshore at the time of the offence (Mann et al., 2018).

We contend objective territoriality is less tenable in a digital age, where different nations are likely to share concurrent jurisdiction for the same conduct (Burdick, 1935, p. 93; Mann et al., 2018), and online crime suspects could be particularly disadvantaged by being extradited to a nation simply because it has chosen to exercise jurisdiction over their allegedly criminal offshore conduct. This logic becomes especially problematic as the impacts of most forms of transnational cyber offending are experienced in multiple locations, or in a single jurisdiction outside an alleged offender’s immediate geographic setting. This enables the domestic surveillance, investigative processes and criminal laws of the requesting nation to be enforced transnationally, which can have problematic due process implications for the legality of cross-border operations involving multiple police agencies with different domestic investigative and surveillance powers (Bowling & Sheptycki, 2015), as demonstrated by the Kim Dotcom case (Boister, 2017a; Palmer & Warren, 2013). Moreover, shifting the trial forum, rather than extraditing an alleged offshore suspect, is less problematic in light of the global convergence of domestic cybercrime laws under instruments such as the Budapest Convention.

The inherent complexity of extradition and mutual legal assistance offsets the idea that both processes can simply be fast-tracked for certain types of crime. Each is activated by a series of formal requests between nations that are mediated by the judicial and executive arms of government. This creates a relatively complicated structure of judicial review in a receiving state if an extradition or mutual legal assistance request is challenged by a suspect. Whilst politicisation can create uneven or hierarchical relationships between nations, in some cases, such as the Gary McKinnon case in the UK, political oversight can offer an important accountability measure to foster bilateral legal cooperation, or potentially block the surrender of individuals or evidence for important humanitarian reasons (Mann et al., 2018). However, the primary responsibility for determining the legality and enforceability of an extradition request lies with the courts of the nation that receives the request, while model rules for mutual legal assistance offer considerable latitude for the “expedited preservation and disclosure of stored computer data, production of stored computer data and search and seizure of computer data” at the transnational level (Clough, 2014, p. 731). The admissibility of such evidence will ultimately be scrutinised at the trial location.

In supranational jurisdictions, such as the EU, streamlined procedures can simplify and fast-track the exchange of fugitives or evidence, including electronic evidence. For example, the EAW and European Evidence Warrant (EEW) operate in conjunction with the European Investigation Order (EIO). This structure aims to provide more direct transnational justice cooperation by attempting “to remove geographic boundaries through a form of centralisation” based on mutual trust in the operation of the established justice institutions of nations that are part of the EU (Warren & Palmer, 2015, p. 341). However, this regime is also criticised for prioritising the interests of the EU and national justice agencies at the expense of preserving the due process rights of individuals subjected to these streamlined procedures (Gless, 2015). It also raises significant questions about whether trust in transnational legal relations and international comity can extend beyond the EU, while retaining some degree of protection for individuals suspected of engaging in transnational crimes.

Although these cooperative transnational processes are forged through bilateral or multilateral agreements, including the Budapest Convention, that are subsequently incorporated into the domestic laws of Anglo-Western jurisdictions, there are growing concerns that many jurisdictions are too willing to accede to the enforcement interests of powerful nations, such as the US. This is particularly evident in cases where an alleged online offender might never have physically entered the country (Mann et al., 2018; Palmer & Warren, 2013) or where digital platforms associated with the offence are owned, operated or governed by US laws and surveillance protocols (Geist, 2015; Goldsmith & Wu, 2006; Warren, 2015). Thus, rather than being un- or trans-territorial (Daskal, 2015), much online data in the English-speaking world is subject to the superior surveillance, enforcement and regulatory power of US corporate and law enforcement interests (Mann & Warren, 2018; Warren, 2015; Zuboff, 2019). This does not mean the US is the only jurisdiction exercising extraterritorial criminal enforcement and surveillance powers in similar ways. However, extradition and mutual legal assistance become benchmarks for determining where a criminal trial should proceed in light of various forms of extraterritorial online surveillance. This increasingly occurs in circumstances where justice officials of the receiving state might have been totally unaware of the alleged online misconduct, or where domestic laws in the jurisdiction issuing the extradition request establish different due process and human rights protections for citizens compared with non-citizens (US Department of Justice, 2019).

Case studies documenting extradition and mutual legal assistance indicate it can be viable to shift the location of a criminal trial to the source of the harm to facilitate prosecution whilst simultaneously protecting individual rights (Mann et al., 2018). This can prevent undue hardship when surrendering a suspect who has never entered the requesting country to face a potentially lengthy criminal trial or sentence if they are ultimately convicted. The idea of a “forum bar”, which blocks extradition if a substantial proportion of the offence occurred in the jurisdiction where the extraditee is located, alters the jurisdictional geography of the incident by recognising the offence can be prosecuted at the source of harm (Mann et al., 2018). This relies on evidence obtained at the source of the crime, as well as evidence shared by foreign enforcement agents being shifted to accommodate the location of the suspect, rather than moving the individual to accommodate the interests of justice. This emphasis can streamline the otherwise lengthy series of appeals on technical aspects of extradition and mutual legal assistance, even if it remains outside the accepted contemporary norms of transnational justice cooperation.

Our examination of these issues involves the case of Canadian citizen Marco Viscomi. His challenges to extradition reinforce the complexity and time-consuming nature of these processes that must also consider the requirements of the Canadian Mutual Legal Assistance in Criminal Matters Act (1985) (MLACMA) and Charter of Rights and Freedoms (1982) (the Charter). These laws predate the Budapest Convention, which did not apply to Viscomi pending Canada’s formal ratification in 2015. However, the decisions affecting Viscomi are important for highlighting the legal, surveillance and jurisdictional geographies that support US criminal law enforcement interests, while revealing equivalent problems with the inherent structure of extradition that can occur beyond these two nations.

The complex case of Marco Viscomi

Between May 2013 and November 2019, Marco Viscomi appeared before the Canadian judicial system on 13 reported occasions to challenge the US request for his extradition to face a charge of child luring. Child luring is the Canadian equivalent of US charges of sexual coercion of a minor and transporting visual depictions of sexually explicit conduct involving a minor through a computer. However, it is unclear whether either of these offences amount to child grooming, which is a notable omission from the Budapest Convention (Clough, 2014, p. 702). If convicted under US law, Viscomi would be subject to a mandatory term of up to 30 years’ imprisonment. The appendix to this paper highlights the grounds for each decision, and demonstrates how Canadian courts have classified the evidentiary and procedural elements of the extradition request. Our emphasis in this section documents the legal and factual issues considered by the Canadian courts based on the allegations contained in the US extradition request.

US authorities claimed Viscomi communicated with a 17-year-old female located in Virginia Beach on 5 and 6 January 2012 via the internet chat room Tiny Chat. This communication progressed to a Skype video call, where Viscomi could see the young woman, but she could not see him. During the course of the online conversation, it is alleged Viscomi “coerced, threatened, extorted and otherwise manipulated this naïve young woman” into exposing her breasts and engaging in explicitly sexual and violent activities with her 13-year-old sister for his own “voyeuristic pleasure” (United States of America v. Viscomi, 2013, para. 2). The Skype session lasted approximately one hour and ten minutes, and Canadian authorities later discovered Viscomi had captured sexually explicit images of the two US victims on his computer.

A two-month investigation commenced after the girls’ father reported the incident to US police, who conducted a forensic examination of the victim’s computer. This led to an administrative subpoena being issued to Skype that revealed the screen and account names of the suspect, which were then linked by US authorities to an Internet Protocol (IP) address connected to Zing Networks, an ISP operating in Ontario. US police then made a request under the Canadian Personal Information Protection and Electronic Documents Act (2000) (PIPEDA) for Zing Networks to “voluntarily disclose any information needed to satisfy [the] government request” (United States of America v. Viscomi, 2014, para. 21). Common law at the time determined that an equivalent request by Canadian authorities did not require a search warrant (R v. Ward, 2012).

On 7 March 2012, US authorities shared details of the investigation with police in Ontario, who commenced their own inquiries. At the same time, US authorities were investigating another cross-border child exploitation case in Wisconsin linked to the same IP address, which involved “virtually identical predatory methods” (R v. Viscomi, 2016b, para. 61). This investigation was disclosed to authorities in Ontario, but the major focus of Viscomi’s extradition and mutual legal assistance claims involved the evidentiary and legal issues associated with the Virginia Beach communications only.

The information provided to Ontario police by US authorities before the formal extradition request led to search warrants being issued at Viscomi’s family home and student residence. After seizing three laptops and external hard drives, which were forensically examined in Canada, Viscomi was charged with two counts of child luring, extortion and uttering threats, which proceeded through the Canadian judicial system for approximately four and a half months. These charges were withdrawn on 10 August 2012 when the US issued its extradition request. On 16 August 2012, Viscomi was apprehended under an extradition arrest warrant based on the Record of the Case. This is the requesting state's summary of the evidence that supports the allegations, which at this stage in Viscomi’s case only contained evidence obtained by the US authorities. He was also denied bail “for the protection of the public” due to the “horrific” facts of the case, including the “systemic psychological and physical abuse of children … [and] sadistic, sexualised conduct … which verges on torture” (Viscomi v. Ontario (Attorney General), 2014, para. 6).

Our discussion of the progression of cases focuses on three key legal issues raised by Viscomi that questioned his eligibility for extradition, and the legality of the evidence obtained from both US and Canadian searches. These issues raise doubts about the connection between the ISP account and Viscomi’s identity as the person who unlawfully communicated with the US victims, the process of evidentiary exchange between US and Canadian authorities, and the human rights implications of Viscomi’s surrender.

i. ISP and identity

A key element of any extradition request is the ability to identify the suspect. Viscomi’s identification was determined via his ownership of the Canadian ISP account. US authorities verified this through the chat log obtained from the victim’s computer in Virginia Beach, which recorded screen and account names that were traced back to Viscomi’s IP and residential addresses. Canadian authorities later determined this evidence matched Viscomi’s Ontario driver’s licence. Viscomi claimed this evidence did not sufficiently prove he was using the Canadian ISP account at the time of the incident. This contradiction between competing interpretations of the ownership and use of the Ontario ISP demonstrates several intersecting aspects of legal geography that were examined by Canadian courts. For example, the decision that Viscomi was the user of the account impacted the decision to deny bail, which can be granted in Canadian extradition proceedings unless detention is considered necessary to ensure attendance in court, for public safety or to maintain confidence in the administration of justice (see United States of America v. Meng, 2019, para. 22). These questions also inform rulings about the legality of evidence collected and exchanged through formal and informal trans-jurisdictional communications between the Canadian and US authorities (Bowling & Sheptycki, 2015; Palmer & Warren, 2013).

Viscomi claimed the US ISP evidence did “not logically connect him to the offence [and] amounts to no more than speculation that he may have been the perpetrator” (United States of America v. Viscomi, 2013, para. 8). However, the first Magistrate’s ruling in 2013 supported a “reasonable inference” that Viscomi was the offender, which would justify proceeding to trial under Canadian law, even though it could not be conclusively proved he was the Skype user at the time of the US offences (United States of America v. Viscomi, 2013, para. 17). Notably, this allegation did not rest on any evidence collected from two Ontario search warrants that was later conveyed to US authorities under the mutual legal assistance sending procedure.

However, during the course of these proceedings, Canadian common law governing ISP evidence changed. The 2013 ruling in Viscomi was governed by the precedent established in the Ontario case R v. Ward (2012). This case ruled that Ontario police did not require a warrant to obtain ISP information. This ruling was subsequently overturned by the Canadian Supreme Court in R v. Spencer (2014), which determined that any information obtained from an ISP amounts to a search under section 8 of the Charter. If an ISP search is conducted without a warrant, any evidence can be excluded from trial in a Canadian court. Viscomi sought the “benefit of this change in the law, in order to argue retrospectively” that the warrantless search, and “all the subsequent warranted seizures that relied on it”, violated his section 8 Charter rights (Viscomi v. Ontario (Attorney General), 2014, para. 46).

This argument raises a temporal dimension to Viscomi’s claims, as Spencer was handed down after Canadian evidence had been transferred to US authorities under the MLACMA proceedings. The key ruling on this issue was handed down in June 2015. It supported Viscomi’s claim that ownership and use of the ISP account were separate and insufficient to infer he had committed the alleged US offences.

The evidence could reasonably lead to a finding that Marco Viscomi … was the subscriber to the IP address at the time the crime was committed utilizing that IP address. However, on that evidence alone, it was simply too great a leap to draw the inference that he was the user of the IP address at the relevant time. (United States of America v. Viscomi, 2015, para. 18, emphasis in original)

Thus, Canadian law favoured the view that “information regarding the subscriber and the IP address cannot, without more, provide the necessary link to draw an inference about who used that IP address at a particular time” (United States of America v. Viscomi, 2015, para. 29). This meant US authorities required a stronger factual connection between the identity of the person involved in the Skype conversation and the holder of the Canadian ISP account. Presumably, this could only be possible by transferring the evidence obtained by Canadian authorities from the searches of Viscomi’s computers. However, rather than specifying this requirement, the 2015 ruling indicated the initial decision regarding the connection between a subscriber and user of an ISP involved “a misapprehension of the evidence”. In other words, there was:

nothing … to establish that the subscriber’s residential address and the address associated with the IP address are one and the same. Indeed, there is no evidence to explain what an IP address is, in the context of this case, or how it worked. We do not know on this record whether an IP address identifies a particular subscriber only, or a particular device only, or whether it identifies a particular residential address at which the IP address is located, or even whether the IP address is limited to one particular residential location or could have been used at different locations. (United States of America v. Viscomi, 2015, para. 25, emphasis in original)

This case also examined Viscomi’s claim that a retroactive application of the 2014 Spencer ruling mandated the exclusion of evidence from a warrantless ISP search under section 24(2) of the Charter. However, this argument was rejected because the Canadian police in Spencer believed a warrant was not required, and the Charter breach was not considered sufficiently serious in light of the alleged offending to deem the evidence from the ISP inadmissible. This may mean that ISP evidence could still sustain prosecution under Canadian law, or support a police investigation into Viscomi’s activities by US authorities, as the protection of the Charter does not apply outside the physical territory of Canada. The only way Canadian courts would accept an extraterritorial extension of the Charter would be for US police to commence “a lawful procedure in making contact with a Canadian entity” that could legally convert them “into Canadian actors” (United States of America v. Viscomi, 2015, para. 49). However, this result is questionable, as it would create “no basis for distinguishing between the conduct of Canadian and foreign officials in cases involving international police cooperation” (United States of America v. Viscomi, 2015, para. 49). This would ultimately render any legal and procedural distinctions between US and Canadian police search, seizure and evidentiary requirements irrelevant when dealing with transnational cyber-investigations.

ii. Mutual legal assistance and the exchange of evidence

The Canadian MLACMA is a key aspect of cooperative “investigative” procedure that gives life to the bilateral mutual legal assistance treaty (MLAT) between the US and Canada. The MLACMA procedure entitles US police “to obtain information about a US crime from a witness located in Canada who is willing to voluntarily assist” (Viscomi v. Ontario (Attorney General), 2014, para. 43), and supplements the PIPEDA request that led to Zing Networks disclosing Viscomi’s ISP details. However, evidence from the Canadian police searches must comply with the MLACMA “investigative” procedures, which aim to ensure the transnational exchange of evidence remains “expeditious” and “confidential” (R v. Viscomi, 2015, para. 30).

Canadian courts identify a “duty by treaty … to maintain the confidentiality of the MLAT application” and offer the “widest measure of mutual legal assistance” to limit the potential for a suspect to “meddle” in a transnational investigation (Viscomi v. Ontario (Attorney General), 2014, para. 26). This aims to prevent the “loss of any evidence that has not yet been seized, [by] tipping off suspects, associates or accomplices in Canada or abroad” to ensure “the successful expeditious completion of the investigation” (R v. Viscomi, 2015, para. 52). As a key component of transnational investigative procedure, MLACMA processes must remain confidential, which enables law enforcement agencies “to quickly complete an investigation before the suspects become aware”, while fostering a “legitimate interest in protecting the secrecy” of collaborative police processes (R v. Viscomi, 2015, para. 36).

Viscomi claimed he had a right to know about and legally challenge the sending procedure that enabled US police to issue the second extradition request. This was accompanied by a second Record of the Case containing evidence originally collected during the search of Viscomi’s computer by police in Ontario that was sent to US authorities under the Canadian MLACMA. Viscomi claimed the confidential nature of the gathering and sending procedures under the Canadian MLACMA was unlawful, because he had no opportunity to scrutinise or contest the procedure in open court.

The second Record of the Case contained an expansive list of evidence, including images of and chat logs with the Virginia Beach victims involving the Skype screen name “Jamie Paisley” that corresponded with the January 2012 incident, images of and chat records with other young women, and links to IP addresses connected to Viscomi. This evidence also disclosed a common methodology, involving threats to install a remotely activated Trojan virus onto the victims’ computers if they did not follow the Skype user’s instructions (see R v. Viscomi, 2016a, paras. 40-42; R v. Viscomi, 2016c, para. 14). The obvious need to intercept and prosecute such transnational conduct highlights why both mutual legal assistance and extradition procedures must be conducted as expeditiously as possible (R v. Viscomi, 2015, paras. 25-30; R v. Viscomi, 2016a, para. 57).

However, MLAT procedures can also promote undue secrecy, as transnational criminal investigations are not grounded in a clear body of neutral laws that enshrine due process (Boister, 2017a; Bowling & Sheptycki, 2015; Palmer & Warren, 2013). While a right to know about and legally challenge a sending order under the MLACMA could compromise a complex cyber-investigation, the timing of this form of evidence disclosure also had a direct bearing on Viscomi’s ability to contest extradition. Canadian courts found no “air of reality” to Viscomi’s claim that the original Canadian search warrants were invalid (R v. Viscomi, 2016a, para. 73), a claim which, if accepted, would have meant that unlawfully obtained evidence had been transferred to US authorities to support their investigation. This is a problem revealed in other transnational cyber-investigations instigated by the US, yet conducted according to the policing laws of other nations, such as the Kim Dotcom case (see Palmer & Warren, 2013). Moreover, because the second US Record of the Case openly disclosed the evidence transferred from Canada, the legality of the confidential exchange of evidence under the MLACMA has been consistently upheld. This did not prevent Viscomi from raising concerns about the ability of transnational evidence exchange to violate his fundamental rights under the Canadian Charter.

iii. Human rights arguments

The relationship between extradition and human rights law is predicated on mutual trust that the justice systems in each participating jurisdiction operate according to common agreed standards (Marin, 2011). Thus, specific human rights protections within extradition treaties are generally limited (Boister, 2003). The application of international human rights safeguards can also be difficult to achieve in domestic courts (Murchison, 2007), even though they often play a crucial role in protecting individuals (Rose, 2002). Therefore, domestic rights protections such as the Charter, US Constitution and other national due process mechanisms play an important role in protecting individual rights during extradition. However, human rights protections become more difficult to balance alongside legal discussions of standard extradition principles, such as the rule of specialty, which confines surrender to only those charges listed in the Record of the Case. In the Viscomi case, this generated a delicate responsibility for Canadian extradition courts in balancing:

the rights of Mr. Viscomi, … with the court’s gatekeeper responsibility to ensure that the extradition process does not cripple the operation of the extradition proceedings. This careful balancing must respect the rights of the individual without losing sight of the importance of honouring Canada’s international treaty obligations. (R v. Viscomi, 2016a, para. 62)

Viscomi unsuccessfully raised concerns about Charter violations stemming from the Ontario police searches, the subsequent disclosure of this evidence to US authorities, and its use in the second Record of the Case as admissible evidence supporting his extradition. For example, the court hearing this issue in March 2016 stated it was not “directly concerned” with any allegations of a Charter breach, but instead examined standard domestic MLACMA requirements concerning the disclosure of evidence (R v. Viscomi, 2016a, para. 68). These complexities are magnified in cases involving online criminal investigations, as both domestic and internationally recognised human rights protections can be viewed by law enforcement agencies as unduly restricting their capacity to suppress serious crime (Arnell, 2018; Bowling & Sheptycki, 2015). In Canada, extraditees ultimately bear the “onus of establishing a Charter breach” (R v. Viscomi, 2016a, para. 63), although the standard for meeting recognised international human rights requirements, including those that are incorporated into the domestic laws of most nations and throughout the EU, is extremely high (Mann et al., 2018).

Any potential Charter breaches resulting from the Canadian police investigation into Viscomi were considered by the courts to lie “at the lower end of the spectrum of misconduct … and had no impact on the lawfulness of the search and seizure of the computer equipment” (R v. Viscomi, 2016b, para. 157). This means there was no Charter violation associated with the legitimacy of confidentially sending this evidence to US authorities, or its later use in the second Record of the Case.

A further series of human rights issues are tied to the pre-trial, trial and post-conviction processes in the requesting jurisdiction. Viscomi did not raise any specific human rights concerns, including possible physical or mental conditions that might be exacerbated by his surrender to the US. However, such issues have grounded human rights protections against extradition in other prosecutions commenced against individuals who were located in other jurisdictions at the time online incidents were detected by US authorities (see Mann et al., 2018). The Canadian courts also appear unconcerned about the potential violation of the specialty principle, with Viscomi’s “uncharged conduct” considered as a possible “aggravating factor in sentencing on a conviction for the crimes for which [he] is committed for extradition” (United States of America v. Viscomi, 2019, para. 49). Further, there appears to be no issue with the US relying “on evidence about other victims on sentencing … [as] Canadian courts are similarly entitled in sentencing to take into account surrounding circumstances that could support a separate charge” (United States of America v. Viscomi, 2019, para. 50).

Such technicalities associated with reviewing the merits of the US prosecution case in a Canadian extradition forum are magnified by the rule of non-inquiry (Bassiouni, 2014; King, 2015; Pyle, 2001). Ordinarily, overseas courts examining extradition requests are unlikely to undertake a detailed examination of the operation of justice in a requesting state, because:

it is not the responsibility of an extradition judge to cull out cases that may be viewed, on all of the evidence, as weak or unlikely to result in the conviction of the person sought … all of those issues are for the trier of fact in the foreign jurisdiction. (United States of America v. Viscomi, 2013, para. 13)

In other words, the reluctance of foreign extradition courts to inquire into US evidence collection or imprisonment practices renders many potentially valid human rights claims subject only to vague notions of political trust and comity. These political concepts underpin various forms of mutual cooperation that potentially undermine the notion of due process associated with many contemporary forms of transnational criminal law enforcement (Mann, et al., 2018; Warren, 2015).

Conclusion

In June 2019, Viscomi’s arguments concerning the search warrants, evidence disclosure, the infringement of sections 6(1) and 7 of the Charter, the violation of the specialty principle, the use of extrinsic evidence from the second Record of the Case for future prosecutions and sentences, and the potential for civil commitment were again rejected by a Canadian court. It was decided there was “no unfairness” associated with any previous court decisions and the Minister’s order favouring extradition “is entitled to a high level of deference on judicial review and should only be interfered with in the clearest of cases” (United States of America v. Viscomi, 2019, paras. 37, 59). This highlights that extradition is more an expression of the political content of transnational law than a question of due process. Such reasoning is also a symptom of the highly technical nature of both extradition and MLAT procedures, which potentially undermine the very forms of transnational enforcement cooperation they are designed to foster. The Supreme Court of Canada declined to intervene in Viscomi’s case in November 2019 and he was extradited to Virginia to stand trial in April 2020 (Daugherty, 2020). In August 2020 Viscomi pled guilty to two counts of producing child pornography. At a sentencing hearing set for January 2021 he faces a minimum of 15 years imprisonment and a maximum of 60 years, to be served in the US (Harper, 2020).

Viscomi’s extradition proceedings took approximately seven and a half years to resolve in the Canadian courts before the US extradition request was ultimately granted. This time frame sits uneasily with the rhetoric of expeditious procedures for dealing with transnational cyber-investigations and their obvious benefits for victims, offenders and justice agencies in multiple jurisdictions. However, calls for the development of new modes for dealing with transnational cybercrime and related jurisdictional issues require caution. EU experience suggests there is considerable disquiet over the ready transfer of crime suspects across national jurisdictional borders to face trial in potentially unfamiliar geographic locations or legal cultures (see Gless, 2015).

Transnational cybercrime is possible through the mobility of digital computing technologies and data flows (Daskal, 2015). This creates an illusion that cybercrime suspects are geographically located where the effects of their activities are felt. When viewed in this way, it would make sense to place more emphasis on clearer MLAT procedures governing the collection and transfer of digital evidence, rather than simplifying extradition procedures through the removal of due process protections (Boister, 2017b). Our analysis also suggests due process remains important in all cases, whether this is promoted through clearer transnational data exchange protocols or shifting the prosecution forum to the source of online harm. While international comity dictates that the US had a viable claim for prosecution in this case, it would also have been perfectly feasible to hold the trial at the source of the harm, given that Canada had instigated criminal charges against Viscomi that were abandoned only after the US extradition request. We suggest this contradiction can only be reconciled with greater attention to the types of extraterritorial harms that justify extradition, and consideration of when a forum bar can help promote both international comity and fairness for those accused of offshore crimes.

References

Abelson, A. (2009). The prosecute/extradite dilemma: Concurrent criminal jurisdiction and global governance. UC Davis Journal of International Law & Policy, 16(1), 1–38. https://jilp.law.ucdavis.edu/issues/volume-16-1/Abelson.pdf

Arnell, P. (2013). The European human rights influence upon UK extradition—Myth debunked. European Journal of Crime, Criminal Law and Criminal Justice, 21(3–4), 317–337. https://doi.org/10.1163/15718174-21042032

Arnell, P. (2018). The contrasting evolution of the right to a fair trial in UK extradition law. International Journal of Human Rights, 22(7), 869–887. https://doi.org/10.1080/13642987.2018.1485655

Bassiouni, M. C. (2014). International extradition: United States law and practice (6th ed.). Oxford University Press. https://doi.org/10.1093/law/9780199917891.001.0001

Bauman, Z., Bigo, D., Esteves, P., Guild, E., Jabri, V., Lyon, D., & Walker, R. B. J. (2014). After Snowden: Rethinking the impact of surveillance. International Political Sociology, 8(2), 121–144. https://doi.org/10.1111/ips.12048

Blakesley, C. L. (1984). A conceptual framework for extradition and jurisdiction over extraterritorial crimes. Utah Law Review, 1984(4), 685–761.

Blakesley, C. L. (2008). Extraterritorial jurisdiction. In M. C. Bassiouni (Ed.), International criminal law: Volume II: Multilateral and bilateral enforcement mechanisms (3rd ed., pp. 85–152). Martinus Nijhoff Publishers.

Boister, N. (2003). Transnational criminal law? European Journal of International Law, 14(5), 953–976. https://doi.org/10.1093/ejil/14.5.953

Boister, N. (2017a). Global simplification of extradition: Interviews with selected extradition experts in New Zealand, Canada, the US and EU. Criminal Law Forum, 29(3), 327–375. https://doi.org/10.1007/s10609-017-9342-7

Boister, N. (2017b). Law enforcement cooperation between New Zealand and the United States: Serving the internet ‘pirate’ Kim Dotcom up on a silver platter? In S. Hufnagel & C. McCartney (Eds.), Trust in international police and justice cooperation (pp. 193–220). Hart Publishing.

Bowling, B., & Sheptycki, J. (2015). Global policing and transnational rule with law. Transnational Legal Theory, 6(1), 141–173. https://doi.org/10.1080/20414005.2015.1042235

Burdick, C. K. (1935). Codification of international law: Part 1 – Extradition. American Journal of International Law Supplement, 29, 15–434.

Clough, J. (2014). A world of difference: The Budapest Convention on Cybercrime and the challenges of harmonisation. Monash Law Review, 40(3), 698–736. https://www.monash.edu/__data/assets/pdf_file/0019/232525/clough.pdf

Council of Europe. (2001). Convention on Cybercrime. https://www.coe.int/en/web/conventions/full-list/-/conventions/rms/0900001680081561

Council of Europe. (2020). Chart of Signatures and Ratifications of Treaty 185: Convention on Cybercrime. https://www.coe.int/en/web/conventions/full-list/-/conventions/treaty/185/signatures

Couture, S., & Toupin, S. (2019). What does the notion of “sovereignty” mean when referring to the digital? New Media & Society, 21(10), 2305–2322. https://doi.org/10.1177/1461444819865984

Daskal, J. (2015). The un-territoriality of data. Yale Law Journal, 125(2), 326–398.

Daugherty, S. (2020). After seven-year extradition fight, Canadian man arrives in Norfolk for ‘sextortion’ trial. Daily Press. https://www.dailypress.com/news/crime/vp-nw-canadian-extradition-viscomi-20200106-azg6n2bgavce7b4uva74ffl37q-story.html

Dugard, J., & Wyngaert, C. (1998). Reconciling extradition with human rights. American Journal of International Law, 92(2), 187–212. https://doi.org/10.2307/2998029

Geist, M. (Ed.). (2015). Law, privacy and surveillance in Canada in the post-Snowden era. University of Ottawa Press.

Gless, S. (2015). Bird’s-eye view and worm’s eye view: Towards a defendant-based approach in transnational criminal law. Transnational Legal Theory, 6(1), 117–140. https://doi.org/10.1080/20414005.2015.1042233

Goldsmith, J., & Wu, T. (2006). Who controls the internet? Illusions of a borderless world. Oxford University Press.

Harper, J. (2020, August 12). Canadian man pleads guilty in ‘sextortion’ case involving Virginia Beach sisters. The Virginian Pilot. https://www.pilotonline.com/news/crime/vp-nw-viscomi-plea-20200812-2fg4bngvnrhy3evomo3w52xgmq-story.html

Mann, M., & Warren, I. (2018). The digital and legal divide: Silk Road, transnational online policing and southern criminology. In K. Carrington, R. Hogg, J. Scott, & M. Sozzo (Eds.), The Palgrave handbook of criminology and the global south (pp. 245–260). Palgrave MacMillan. https://doi.org/10.1007/978-3-319-65021-0_13

Mann, M., Warren, I., & Kennedy, S. (2018). The legal geographies of transnational cyber-prosecutions: Extradition, human rights and forum shifting. Global Crime, 19(2), 107–124. https://doi.org/10.1080/17440572.2018.1448272

Marin, L. (2011). “A spectre is haunting Europe”: European citizenship in the area of freedom, security and justice. Some reflections on the principles of discrimination (on the basis of nationality), mutual recognition, and mutual trust originating from the European Arrest Warrant. European Public Law, 17(4), 705–728.

Miller, B. W. (2016). Borderline crime: Fugitive criminals and the challenge of the border, 1819–1914. University of Toronto Press.

Murchison, M. (2007). Extradition’s paradox: Duty, discretion, and rights in the world of non-inquiry. Stanford Journal of International Law, 43(2), 295–318.

Palmer, D., & Warren, I. (2013). Global policing and the case of Kim Dotcom. International Journal of Crime, Justice and Social Democracy, 2(3), 105–119. https://doi.org/10.5204/ijcjsd.v2i3.105

Plachta, M. (1999). (Non-)Extradition of nationals: A neverending story? Emory International Law Review, 13(1), 77–160.

Pyle, C. H. (2001). Extradition, politics, and human rights. Temple University Press.

Raustiala, K. (2009). Does the Constitution follow the flag?: The evolution of territoriality in American law. Oxford University Press.

Rose, T. (2002). A delicate balance: Extradition, sovereignty, and individual rights in the United States and Canada. Yale Journal of International Law, 27(1), 193–215. https://digitalcommons.law.yale.edu/yjil/vol27/iss1/7/

Svantesson, D. J. B. (2017). Solving the internet jurisdiction puzzle. Oxford University Press. https://doi.org/10.1093/oso/9780198795674.001.0001

United States Department of Justice. (2019). Promoting public safety, privacy, and the rule of law around the world: The purpose and impact of the CLOUD Act. US Department of Justice.

Warren, I. (2015). Surveillance, criminal law and sovereignty. Surveillance & Society, 13(2), 300–305. https://doi.org/10.24908/ss.v13i2.5679

Warren, I., & Palmer, D. (2015). Global criminology. Thomson Reuters.

Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. Profile Books.

Cases

R. v. Spencer, 2014 SCC 43, [2014] S.C.R. 212, No. 34644 (Supreme Court of Canada 13 June 2014). http://canlii.ca/t/g7dzn

R v. Viscomi 2014 ONCA 765, No. M44253 (Court of Appeal for Ontario 2014). http://canlii.ca/t/gf4hc

R v. Viscomi 2015 ONSC 61, (Ontario Superior Court of Justice 9 January 2015). http://canlii.ca/t/gfws4

R v. Viscomi 2016 ONSC 1830, (Ontario Superior Court of Justice 17 March 2016). http://canlii.ca/t/gnrp4

R v. Viscomi 2016 ONSC 5423, (Ontario Superior Court of Justice 1 September 2016). http://canlii.ca/t/gt778

R v. Viscomi 2016 ONSC 6658, (Ontario Superior Court of Justice 25 October 2016). http://canlii.ca/t/gvkfz

R v. Ward 2012 ONCA 660, 112 OR (3d) 321, (Court of Appeal for Ontario 2 October 2012). http://canlii.ca/t/ft0ft

United States of America v. Meng 2018 BCSC 2255, No. 27761–1 (Supreme Court of British Columbia 11 December 2018). http://canlii.ca/t/hwmhm

United States of America v. Viscomi 2013 ONSC 2829, (Ontario Superior Court of Justice 24 May 2013). http://canlii.ca/t/fxnq8

United States of America v. Viscomi 2014 ONCA 879, M44414 (C57211) (Court of Appeal for Ontario 5 December 2014). http://canlii.ca/t/gfjwc

United States of America v. Viscomi 2015 ONCA 484, No. C57211, C57910, C59973, C59982 (Court of Appeal for Ontario 30 June 2015). http://canlii.ca/t/gjsrg

United States of America v. Viscomi 2016 ONCA 980, M47285 (C62967) (Court of Appeal for Ontario 23 December 2016). http://canlii.ca/t/gwljk

United States of America v. Viscomi 2019 ONCA 490, No. C62967, C64283 (Court of Appeal for Ontario 14 June 2019). http://canlii.ca/t/j0zt0

Viscomi v. Attorney General of Canada; Attorney General of Ontario 2015 SCC 397, (Supreme Court of Canada 17 December 2015). http://canlii.ca/t/gmmrw

Viscomi v. Attorney General of Canada (on behalf of the United States of America) 2019 SCC 38760., (Supreme Court of Canada 28 November 2019). http://canlii.ca/t/j3n64

Viscomi v. Ontario (Attorney General) 2014 ONSC 5262, (Ontario Superior Court of Justice 11 September 2014). http://canlii.ca/t/g8zzk

Appendix

Table 1: Summary of the progression of the Viscomi case through the Canadian judicial system

Case citation | Legal & factual issue | Outcome
United States of America v. Viscomi 2013 ONSC 2829 [24 May] | Can ISP records identify Viscomi as the offender to justify extradition? | Extradition certified: ISP evidence is sufficient to establish identity.
Viscomi v. Ontario (Attorney General) 2014 ONSC 5262 [11 Sep] | Was Viscomi entitled to evidence presented during the ex parte MLACMA hearing? | Dismissed: No relevant non-disclosure.
R v. Viscomi 2014 ONCA 765 [31 Oct] | Was the previous decision concerning the ex parte MLACMA decision correct? | Dismissed: Court does not have jurisdiction due to procedural regulations.
United States of America v. Viscomi 2014 ONCA 879 [5 Dec] | Should Viscomi receive bail? | Bail denied: No arguable ground for appeal.
R v. Viscomi 2015 ONSC 61 [9 Jan] | Do ss. 18 & 20 of the MLACMA violate ss. 7 & 8 of the Charter? | Dismissed: MLACMA is not unconstitutional & contains protections that comply with ss. 7 & 8 of the Charter.
United States of America v. Viscomi 2015 ONCA 484 [30 Jun] | Was the evidence sufficient for the Magistrate to infer Viscomi was the user of the IP address? | Decision set aside: Magistrate misapprehended the evidence.
Viscomi v. Attorney General of Canada; Attorney General of Ontario 2015 SCC 397 [17 Dec] | Was the dismissal of the ruling upholding constitutional validity correct? | Dismissed: Previous ruling was correct.
R v. Viscomi 2016a ONSC 1830 [17 Mar] | Does Viscomi have the right to further disclosure prior to the second extradition hearing? | Dismissed: Additional disclosure would not impact the extradition decision.
R v. Viscomi 2016b ONSC 5423 [1 Sep] | Is the Canadian gathered evidence inadmissible due to Charter breaches? | Dismissed: Searches were lawful & any Charter breaches were minimal.
R v. Viscomi 2016c ONSC 6658 [25 Oct] | Does the Canadian and US gathered evidence identify Viscomi as the offender to justify extradition? | Extradition certified: Sufficient evidence.
United States of America v. Viscomi 2016 ONCA 980 [23 Dec] | Should Viscomi receive bail? | Bail denied: Potential flight risk & Viscomi may access the internet to reoffend.
United States of America v. Viscomi 2019 ONCA 490 [14 Jun] | Did the Magistrate or Minister err in any way? | Dismissed: No errors were made in either stage of extradition.
Viscomi v. Attorney General of Canada (on behalf of the United States of America) 2019 SCC 38760 [28 Nov] | Was the previous dismissal correct? | Dismissed: Court declined to hear appeal.

Internationalising state power through the internet: Google, Huawei and geopolitical struggle


This paper is part of Geopolitics, jurisdiction and surveillance, a special issue of Internet Policy Review guest-edited by Monique Mann and Angela Daly.

Introduction

This article examines the importance of the market dominance of private companies in geopolitical struggles between states. In particular, it assesses how the United States (US) has used the market dominance of its internet companies to attempt to restrict the growth of ‘geo-economic space’ for Chinese competitors. This is more broadly in response to the challenge that China now poses to US hegemony over global communication systems. Overall, the article argues that dominant private firms can have geopolitical significance, acting as conduits through which states can exercise power.

It is true that modern corporations have enormous economic power, which they can leverage for private political authority in the international economy (Büthe and Mattli, 2011; Elbra, 2014; Haufler, 2006; Quack and Dobusch, 2013; Tusikov, 2019b). However, international firms can also be used as conduits for internationalising state power through the extra-territorial application of state authority (Crasnic, Kalyanpur, and Newman, 2017; Farrell and Newman, 2019; Tusikov, 2016, 2019a). Whilst multinational corporations are integral to global internet governance, it is more accurate to say that American multinational corporations are (Tusikov, 2016). The US government can use this to its advantage. Other economic powers, such as the European Union (EU), have sought to exercise their own authority over US firms through reforms such as the General Data Protection Regulation (GDPR).

The US has internationalised its authority through private companies to pursue its security interests, including by harvesting data collected through the international operations of US internet firms (Farrell and Newman, 2019; Mann and Warren, 2018). Through programmes such as PRISM, the National Security Agency (NSA) and other law enforcement agencies have accessed data directly from major US internet firms, without having to make a request to the firms or having to obtain individual court orders (Greenwald and MacAskill, 2013). These firm-focused programmes were combined with so-called ‘upstream’ methods which harvested information from fibre optic cables and other infrastructure as data was in transit. Security agencies are able to achieve this because, as the NSA explained, “[m]uch of the world's communications flow through the U.S.” (National Security Agency & Special Source Operations, 2013, p. 2, PDF).

However, in order for states to use private companies to internationalise their power, they must be able to exercise authority over these companies. That is, states must be able to compel a firm to, for example, grant security agencies access to its data. One way a state can achieve this is through controlling a company’s access to its market – that is, utilising the state’s ‘market power’ (Tusikov, 2019a). However, other conditions, both exogenous and endogenous, are crucial in enabling states to effectively leverage market power (Crasnic et al., 2017; Farrell and Newman, 2010; Kaczmarek and Newman, 2011; Newman and Posner, 2011). This article focuses specifically on two such conditions: sunk cost, or the level of investment in a market, and jurisdictional substitutability, that is, the availability of alternative markets (Crasnic et al., 2017). These conditions determine whether a state can exercise authority over private firms.

However, this is only useful in internationalising state power if these firms are internationally dominant in their respective markets. For example, the Society for Worldwide Interbank Financial Telecommunication (SWIFT) is a Belgian-based firm which handles approximately 80 percent of all global wire transfer traffic. Despite being a Belgian company, SWIFT has a data processing centre based in the US, where copies of transactions are temporarily stored. Following the September 11 attacks, the US Treasury subpoenaed SWIFT data as part of its Terrorist Finance Tracking Program (de Goede, 2012). The authority of the US to subpoena SWIFT’s data, the location of its data centre in the US, and the international reach of the firm (80 percent of the market) made SWIFT a useful tool for global surveillance.

The spatial location of a firm’s operations will thus determine how useful it is to a state in internationalising state power. The firm must be sufficiently embedded within a state’s territory in order for that state to exercise authority over it, while it must also have sufficient presence in foreign markets for the state to be able to internationalise that authority through the firm’s operations. The so-called ‘transnational’ corporation is not placeless, and where it operates matters. If states have authority over firms that are particularly central to international economic activity, they can ‘weaponise’ them for geopolitical and security ends by coercing unfriendly states (Crasnic et al., 2017; Farrell and Newman, 2019).

This article analyses how control over internet companies has empowered the US in responding to the geopolitical threat posed by China. Whilst the internet has emerged over the past few decades, dominance over international information and communication infrastructure has been a source of conflict between great power states for over a century (Hills, 2002; Powers and Jablonski, 2015). Control over international information flows is important to the geopolitical power of great power states and has typically been dominated by the world’s hegemonic state (Hills, 2002). Geopolitical power refers to how the spatial allocation of resources between states shapes international politics.

The ‘transnationality’ of the internet does not diminish the role of geopolitics because, as will be discussed, internet businesses and infrastructure remain geographically concentrated within specific territories. Furthermore, the prominent role of private interests in the geopolitics of communications is not novel to the internet. Past technological innovations, such as submarine telegraph cables, broadcast radio and the telephone, all relied heavily on private firms, which would work with states to “alter established international power relations” (Hills, 2002, p. 7). Thus, emerging market competitors from other countries threaten not just the commercial interests of US firms but the geopolitical influence of the US state. In this way, the growing international competitiveness of Chinese internet and technology companies, such as Huawei, potentially threatens US national security by enabling the same sort of surveillance programmes that the US is known to engage in.

The article begins by discussing the literature on international market regulation to establish what conditions are required to exercise extraterritorial authority over multinational corporations. This section will also examine how extraterritorial authority can be used to internationalise state power. Second, the article illustrates that the US has considerable leverage over its internet firms, and that these firms have extensive international reach. The article then argues that, taken together, the extraterritorial authority of the US and the international operations of internet firms make them useful conduits for internationalising US power. Last, the article analyses how the US has used this power to respond to an emerging threat to its hegemonic position from China, through the growing international dominance of Chinese companies, in particular Huawei. The article examines how the US has sought to leverage its incumbent position in the market to disrupt Huawei’s supply chains in an effort to slow its growth in international markets.

Extraterritorial authority and state power

To illustrate how the US has internationalised its power through internet firms the article draws on the international market regulation literature. This literature analyses how domestic laws can establish rules in internationally-exposed markets (Farrell and Newman, 2010). Globalisation creates opportunities for states with large markets, such as the US, Europe and China, to extend and apply their regulations extraterritorially and effect enforcement in other jurisdictions (Farrell and Newman, 2015, 2016; Kaczmarek and Newman, 2011). However, market size is a necessary but not a sufficient variable in enabling states to do this. Domestic institutional capacity, such as the expertise and capabilities of regulators, is also important in allowing states to effectively leverage their market (Bach and Newman, 2010; Farrell and Newman, 2010).

As Crasnic et al. have illustrated, another important factor in the ability of states to apply extraterritorial authority is their access to multinational businesses and their capacity to “reach through the affiliate or subsidiary structure into the business practices of the corporate group” (2017, p. 911). This capacity is determined by two variables. The first is sunk cost: the investment of the firms in the state’s market and thus the costs of exiting the market in an effort to avoid regulatory oversight. The second is jurisdictional substitutability: the availability of other markets with less stringent oversight that still provide comparable business opportunity. Therefore, states benefit from having a number of multinational firms with high sunk costs located within their national jurisdiction, with few suitable options for relocation.

The above two variables determine if a state has high or low levels of extraterritorial authority over a given firm, provided that the state has the institutional capacity to exercise this authority. However, the ability of states to internationalise state power through the firm depends on another variable: the dominance of that firm in international markets. Market dominance here refers to high levels of market share, as well as dominance over small but crucial elements of supply chains, in at least two different markets.

Table 1 illustrates how these two variables, extraterritorial authority and international market dominance, interact. As it shows, if a state has high levels of extraterritorial authority over a firm with high levels of international market dominance then the state has the ability to use the firm to internationalise its power, as the US was able to do with SWIFT as discussed above. However, if the state has low extraterritorial authority over a firm with high international market dominance then it may be subject to the internationalised power of other states. Firms with low levels of international market dominance primarily operate in a single domestic market and cannot be used to internationalise state power.

Table 1: Internationalising state power through firms

 | Extraterritorial authority: High | Extraterritorial authority: Low
International market dominance: High | The firm can be used to internationalise state power. | The state may be subject to the internationalised power of other states using the firm.
International market dominance: Low | The firm primarily operates domestically and cannot be used to internationalise state power. | The firm primarily operates in a foreign domestic market and cannot be used to internationalise state power.
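The logic of Table 1 can also be read as a simple two-variable decision rule. The sketch below is a minimal, purely illustrative rendering of that rule in Python; the Firm fields, the "high"/"low" labels and the example inputs are assumptions introduced here for exposition, not measures or classifications drawn from the article.

```python
from dataclasses import dataclass

@dataclass
class Firm:
    # Both attributes are hypothetical, coarse labels ("high" or "low"),
    # standing in for the article's qualitative assessments.
    name: str
    extraterritorial_authority: str  # home state's grip via sunk costs and low substitutability
    international_dominance: str     # market share or chokepoint position in at least two markets

def quadrant(firm: Firm) -> str:
    """Map a firm onto the four outcomes sketched in Table 1."""
    high_auth = firm.extraterritorial_authority == "high"
    high_dom = firm.international_dominance == "high"
    if high_dom and high_auth:
        return "Conduit: the home state can internationalise its power through the firm."
    if high_dom and not high_auth:
        return "Exposure: the state may be subject to other states' power exercised via the firm."
    if high_auth:
        return "Domestic only: the firm cannot be used to internationalise state power."
    return "Foreign domestic market only: the firm cannot be used to internationalise state power."

if __name__ == "__main__":
    # Hypothetical examples echoing the surrounding discussion.
    print(quadrant(Firm("US search platform", "high", "high")))
    print(quadrant(Firm("Foreign app popular with US users", "low", "high")))
```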

It is important to make two further observations at this point. First, most international industries do not function as competitive markets but are dominated by a select few oligopolistic corporations (Mikler, 2011, 2018). This means that the ability to exercise extraterritorial authority over a few market-leading firms can have wide-reaching effects on the entire international market. Second, these dominant firms are territorially embedded, in terms of sales, assets, employment and ownership, within a select few host states, the most prominent of which is the US (Mikler, 2011, 2018; Starrs, 2013). In other words, the most dominant firms have high sunk costs in powerful states. The internationalisation of corporations thus provides a powerful asset through which states can internationalise their power.

Extraterritorial authority and state power on the internet

The ability of the US state to realise its power through its internet firms is crucial to its mass online surveillance programmes. As the Snowden leaks revealed in 2013, US internet companies, including Microsoft, Google, Yahoo, Facebook and Apple, have all been used in US surveillance (Greenwald and MacAskill, 2013; National Security Agency and Special Source Operations, 2013, PDF). For example, Microsoft helped the NSA and the Federal Bureau of Investigation (FBI) gain access to encrypted information on its email, cloud storage and online voice chat services (Greenwald, MacAskill, Poitras, Ackerman, and Rushe, 2013).

Meanwhile, the NSA piggybacked on Google’s cookies to identify targets for offensive hacking operations (Soltani, Peterson, and Gellman, 2013). The NSA also received or intercepted computer network devices such as routers being exported from the US and implanted them with backdoor surveillance tools (Greenwald, 2014). It was later revealed that after the Snowden leaks, in 2015, Yahoo built custom software to enable the NSA and the FBI to search incoming Yahoo email for specific information (Menn, 2016).

The above demonstrates that the US clearly can exercise authority over its internet firms in order to assist in online surveillance. Yet despite this, these firms have not sought to exit the US market in order to dodge this authority. Nor has the reputational damage from the Snowden and other leaks been sufficient to trigger market exit.

This should not be surprising because many of these companies are heavily invested in the US and thus have high sunk costs. For example, Alphabet (i.e. Google) holds 77 percent of its assets and employs 77 percent of its workers in the US and Microsoft holds 56 percent of its assets and employs 60 percent of its workers in the US. Amazon has lower sunk costs in the US, holding 29 percent of its assets and employing 29 percent of its workers there, though this remains a considerable investment in the US (UNCTAD, 2019, XLS). Furthermore, the domestic market provides an outsized share of revenues for American firms: 46 percent of Alphabet’s revenue came from within the US; 51 percent of Microsoft’s revenue came from within the US; 61 percent of Amazon’s revenue came from within the US; and 43 percent of Facebook’s revenue came from within the US (Facebook Inc, 2019; UNCTAD, 2019, XLS). The high dependence on the US market means that jurisdictional substitutability is low.

However, substitutability is about more than sales and revenue. Internet firms have emerged within the US market and have developed their business models to match its regulatory environment, namely as it relates to liability for online intermediaries. This is particularly true for the US copyright regime. The Digital Millennium Copyright Act of 1998 plays an important role in this, along with other legislation and public law which are unique to the US market, such as fair use (Samuelson, 2015). Of course, the US regulations could be changed, and if they were the high sunk costs and large dependence on the US market would discourage market exit. However, the US regulatory framework is, broadly speaking, favourable for internet firms when compared to alternative markets.

US internet companies have even complained to the US Trade Representative that copyright liability on internet intermediaries in Europe constitutes a trade barrier. The opposition to copyright law in Europe drove many internet companies to lobby against the Anti-Counterfeiting Trade Agreement, which included the EU in the negotiations. Meanwhile, US internet companies have attempted to export US-style copyright laws internationally, through US trade agreements (Cartwright, 2019).

In addition to high sunk costs and low jurisdictional substitutability, internet firms have also benefited from a sort of jurisdictional protection which has further increased the leverage of the US. ‘Jurisdictional protection’ has involved internet firms trying to avoid the extraterritorial application of laws from other countries by claiming that the US has primary authority over them. For example, in June 2017 the Canadian Supreme Court ruled that Google is required to globally delist content when removing it from its Canadian subsidiary, in a case involving counterfeit goods sold online (Supreme Court of Canada, 2017). However, Google prevented the enforcement of this ruling over its global network after a US District Court found in its favour in November 2017. The court found that Google meets the requirements for immunity from liability under Section 230 of the Communications Decency Act – a US law. In the ruling the judge opined that “the Canadian order undermines the policy goals of Section 230 and threatens free speech on the global internet” (United States District Court, 2017, p. 6, PDF).

Google has similarly resisted efforts by European authorities to have their regulations apply across Google’s global network. For example, in 2015 French authorities ordered Google to ensure that content removed under Europe’s so-called ‘right to be forgotten’ laws was effective throughout Google’s global network. However, Google refused to do this, only removing links from traffic coming from France. In 2019 the Court of Justice of the EU (CJEU) found that EU laws should not apply outside of the EU, and therefore Google would only be compelled to remove content across EU member states and not globally (Court of Justice of the European Union, 2019). Meanwhile, other European regulations, such as the General Data Protection Regulation (GDPR) and the recent Directive on Copyright in the Digital Single Market, have suffered from lack of enforcement in the face of intransigent US internet companies (Vinocur, 2019; Willsher, 2019). By contrast, US internet companies have willingly applied US domestic law across their global networks, including the Digital Millennium Copyright Act’s so-called ‘take down’ requirement which, like the examples above, requires companies to remove content from their networks. Both Google and Facebook apply these take-downs globally.
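The difference between a geographically scoped removal (such as delisting only for traffic from France) and a globally applied one (such as a take-down under the Digital Millennium Copyright Act) can be illustrated schematically. The sketch below is a hypothetical, heavily simplified model of that distinction; the class names, jurisdiction codes and request structure are assumptions made for illustration and do not describe any company’s actual systems.

```python
from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class RemovalOrder:
    url: str
    scope: Set[str] = field(default_factory=set)  # e.g. {"FR"}; an empty set means applied globally

@dataclass
class SearchRequest:
    query: str
    requester_country: str  # e.g. inferred from the requester's network location

def filter_results(results: List[str], orders: List[RemovalOrder],
                   request: SearchRequest) -> List[str]:
    """Drop URLs covered by an order that applies in the requester's jurisdiction."""
    blocked = {
        order.url
        for order in orders
        if not order.scope or request.requester_country in order.scope
    }
    return [url for url in results if url not in blocked]

if __name__ == "__main__":
    orders = [
        RemovalOrder("http://example.org/delisted-page", scope={"FR"}),  # geo-scoped order
        RemovalOrder("http://example.net/notice-and-takedown"),          # applied globally
    ]
    results = [
        "http://example.org/delisted-page",
        "http://example.net/notice-and-takedown",
        "http://example.com/unaffected",
    ]
    print(filter_results(results, orders, SearchRequest("query", "FR")))  # both removals apply
    print(filter_results(results, orders, SearchRequest("query", "US")))  # only the global removal applies
```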

However, as table 1 illustrates, the ability to exercise authority over firms is only useful in internationalising state power if these firms are internationally dominant. This is certainly the case for US internet companies. Table 2 below shows the companies which both appear on the 2019 Fortune 500 list of largest corporations in the world by revenue, and which own a website that is among the top 50 most visited in the world (in July 2019). As the table illustrates, all of these firms are based in either the US or China, and together they account for half of the top 50 most visited websites. Table 2 also shows the share of traffic each domain receives from its home market. As the table shows, top domains from US-based firms receive more traffic from outside the US market than from within it (with the exceptions of Amazon.com and eBay.com). This illustrates that the dominance of US-based firms is a result of their international competitiveness, not just the size of the US market.

Table 2: Top internet firms by traffic

Firm | Country HQ | Domain | Alexa rank | % of traffic from home market
Alphabet Inc. | US | Google.com | 1 | 20.00%
Alphabet Inc. | US | Youtube.com | 2 | 14.40%
Alphabet Inc. | US | Blogspot.com | 22 | 9.70%
Alphabet Inc. | US | Google.co.hk | 30 | 5.40%
Alphabet Inc. | US | Google.co.in | 42 | 0.70%
Amazon.com | US | Amazon.com | 12 | 67.30%
Amazon.com | US | Twitch.tv | 37 | 31.30%
Amazon.com | US | Amazon.co.jp | 46 | 8.10%
Amazon.com | US | Imdb.com | 49 | 32.60%
Microsoft | US | Live.com | 18 | 14.40%
Microsoft | US | Bing.com | 25 | 44.90%
Microsoft | US | Office.com | 32 | 33.60%
Microsoft | US | Microsoft.com | 33 | 26.30%
Microsoft | US | Msn.com | 47 | 20.50%
Facebook | US | Facebook.com | 5 | 29.90%
Facebook | US | Instagram.com | 24 | 29.30%
Alibaba Group Holding | China | Tmall.com | 3 | 88.60%
Alibaba Group Holding | China | Taobao.com | 8 | 88.50%
Alibaba Group Holding | China | Login.tmall.com | 9 | 88.60%
Alibaba Group Holding | China | Pages.tmall.com | 19 | 88.30%
Alibaba Group Holding | China | Alipay.com | 23 | 90%
Alibaba Group Holding | China | Aliexpress.com | 44 | *
JD.com | China | Jd.com | 14 | 88.20%
Tencent Holdings | China | Qq.com | 6 | 87%
Tencent Holdings | China | Soso.com | 41 | 98.70%

Source: (Alexa, 2019; Fortune Magazine, 2019). * No data as home market was ranked too low.

US internet companies dominate other online markets as well. In 2019, Google’s Chrome and Apple’s Safari had a combined 79 percent share of the global internet browser market (StatCounter, 2020) 1. Meanwhile, 16 of the 20 most installed apps through Google Play – the app marketplace for the Android operating system – are developed by either Google itself or Facebook. Of the 52 apps in the Google Play store which have over a billion installs, 34 are developed by either Facebook, Google or Microsoft (Androidrank, 2020). With US internet companies both being under the authority of the US state and being internationally dominant, they can thus be placed in the upper left quadrant of table 1 above. This makes them useful for the US in internationalising its power through surveillance programmes that have harvested information from these companies and their international networks.

Internationalised power and geopolitical struggles

With high extraterritorial authority and international market dominance, internet companies can be powerful tools for the US state. However, this is increasingly under threat from China, which has developed a home-grown internet sector of its own through targeted industrial policy, aided by information sovereignty and online censorship policies which have restricted the market access of foreign internet companies (i.e. the ‘Great Firewall’) (Hong, 2017, pp. 123-146; Powers and Jablonski, 2015). As Powers and Jablonski argue, “China is well on its way to having a popular and robust de facto intranet system” (Powers and Jablonski, 2015, p. 169), with 96 percent of all pageviews in China being of sites hosted within China. This is reflected in table 2 above, which shows that whilst Chinese firms have a strong presence on the internet, they nevertheless have low levels of international market dominance, mostly relying on their home market. However, China is increasingly moving from the techno-nationalist emphasis on developing home-grown industries and technologies, to becoming more outwardly focused and more willing to pursue its own internet governance preferences – though with varying levels of success (Higgins, 2017; Hong, 2017, pp. 141-145; Suttmeier, Yao and Tan, 2009).

China’s growing demands for “more power in allocation and control decisions about critical information resource” indicates “mounting geopolitical tensions centered on communications” (Hong, 2017, p. 11). First, in challenging the US hegemonic position, China is seeking to create a new ‘geo-economic space’ to maintain export markets for its emerging technology giants (Hong, 2017, p. 138). This is evident through international institution building, such as free trade agreements, as well as through China’s ambitious Belt and Road Initiative (BRI). The BRI includes a so-called ‘digital silk road’ alongside its transportation infrastructure, which is being developed and built by Chinese firms (Shen, 2017). This ‘digital road’ has also “expanded the geographical range, organized specific policy funds, and coordinated an extensive network of resources for corporate China to go global” (Shen, 2017, p. 2688). Alibaba, for example, is rapidly expanding its cloud computing business overseas, with a strong emphasis on BRI countries (Shen, 2017, p. 2689).

Second, as these geo-economic spaces are established Chinese firms are becoming more internationally competitive. For example, popular Chinese video-sharing app Douyin, owned by the Chinese parent company ByteDance, was launched internationally as ‘TikTok’ in 2017. TikTok later merged with the US-owned video sharing app Musical.ly after it was bought by ByteDance. In November 2019, TikTok had been downloaded over a combined 1.5 billion times across the Apple App Store and Google Play store, up from 500 million just 16 months earlier (Chapple, 2019). Another Chinese technology company enjoying increased growth is Huawei. In December 2015, Huawei’s share of the international mobile vendor market was just two percent; by December 2019 it had risen to ten percent. In Europe, Huawei’s market share grew from four percent to 18 percent during the same period (StatCounter, 2020). As companies such as Huawei and ByteDance continue to emerge and gain market dominance, they have the potential to increase the geopolitical power of China by moving from the lower left to the upper left quadrant in table 1.

The US fears that this potential will be realised. In October 2019, Senators Tom Cotton and Charles (‘Chuck’) Schumer wrote to the Director of National Intelligence requesting an assessment of the national security risks posed by TikTok’s collection of user data. The letter noted that while “the company has stated that TikTok does not operate in China and stores U.S. user data in the U.S., ByteDance is still required to adhere to the laws of China” (Schumer and Cotton, 2019, PDF). In other words, they were concerned not only by the company’s growing international presence and collection of data, but also by the potential for China to exercise authority over ByteDance and thus internationalise state power. Two months later the Department of Treasury’s Committee on Foreign Investment in the US opened an investigation into both TikTok’s use of its users’ data and its acquisition of Musical.ly (Espinosa de los Monteros Pereda, 2019).

On 6 August 2020, the Trump Administration responded to the security concerns over TikTok. In an executive order, President Trump declared that TikTok’s “data collection threatens to allow the Chinese Communist Party access to Americans’ personal and proprietary information” (Trump, 2020a). In response, the executive order issued a ban on ‘transactions’ with ByteDance, to be specified by the Secretary of Commerce within 45 days of the order. On the same day, similar restrictions were placed on other Chinese internet companies, including WeChat (Trump, 2020b). Reports prior to the executive order indicated that TikTok could be sold to an American company, Microsoft (a known PRISM participant), in order to avoid the ban (Isaac, Swanson and Rappeport, 2020). This would cut off ByteDance’s international dominance, thereby curtailing its ability to internationalise Chinese state power for surveillance purposes and, with it, the national security threat posed to the US.

The US has also long held concerns about the growing international reach of Huawei, and its perceived links to the Chinese government. In 2012, the House Intelligence Committee published an investigation into Huawei and ZTE’s (another Chinese telecommunications company) involvement in the US telecommunications market. The report argued that the two companies posed a major threat for the US, as “to the extent these companies are influenced by the state, or provide Chinese intelligence services access to telecommunication networks, the opportunity exists for further economic and foreign espionage by a foreign nation-state already known to be a major perpetrator of cyber espionage” (Rogers and Ruppersberger, 2012, p. iv). That is, the report raised the concern that allowing Huawei or ZTE to gain a significant position in the US market would allow China to internationalise its power through the companies and undermine the US national interest. US intelligence officials would later claim, in February 2020, that Huawei has indeed had ‘backdoor’ access to mobile phone networks through its telecommunications products since 2009 - which the company denies (Pancevski, 2020).

The House Intelligence Committee report advised US companies against trading with Huawei and ZTE, and recommended executive and legislative action to stall their growth in the US market (Rogers and Ruppersberger, 2012, pp. iv-vii). The following year, Huawei’s Chief Executive Officer responded by beginning to exit the US market voluntarily, saying that “[i]f Huawei gets in the middle of U.S.-China relations”, and causes problems, “it’s not worth it” (in Harris and Fish, 2013). In addition to preventing the ‘espionage’ detailed in the report, given the size and importance of the US market, this has also presumably hampered Huawei’s international growth.

However, as the report also noted in 2012, the growing global presence of Chinese companies is also a concern. Whilst the US can restrict access to its own market, limiting the growth of these firms in third markets is more complicated. In particular, Huawei’s deepening involvement in 5G infrastructure has become a major issue for the US, among others. An executive order from President Trump on 15 May 2019, though not specifically mentioning Huawei, articulated the US concerns:

[F]oreign adversaries are increasingly creating and exploiting vulnerabilities in information and communications technology and services, which store and communicate vast amounts of sensitive information, facilitate the digital economy, and support critical infrastructure and vital emergency services, in order to commit malicious cyber-enabled actions, including economic and industrial espionage against the United States and its people. (Trump, 2019, PDF)

The response by the US has been to attempt to shrink the ‘geo-economic space’ available to Huawei by pressuring other states to ban the company from their national 5G rollouts. This has included traditional tools of statecraft. For example, the US has leveraged its market power by warning the United Kingdom that a post-Brexit trade deal with the US would not be possible if Huawei were to provide equipment for the 5G network there (Isaac, 2019). The US has also threatened to withhold intelligence from Germany should it allow Huawei involvement in its network (Pancevski and Germano, 2019). The United Kingdom would eventually ban Huawei from its 5G rollout in July 2020; however, Germany and other European states remain open to the company playing some role in their networks (Baker and Chalmers, 2020). This illustrates the growing international market dominance of Huawei, and perhaps the waning influence of the US.

However, in addition to this bilateral pressure, the US has also been able to use its authority over US-based software and hardware firms to unilaterally disrupt Huawei's business. In this way, private US-based internet firms have become tools of statecraft, acting as conduits through which the US state can exercise power. The day after the Trump Administration issued the above executive order, the US Department of Commerce placed Huawei and 68 of its affiliates 2 on the 'Entity List', on the basis that there is "reasonable cause to believe that Huawei has been involved in activities contrary to the national security or foreign policy interests of the United States" (Bureau of Industry and Security, 2019). Being on the Entity List means that US firms must receive a licence in order to trade with Huawei.

Because Huawei has no presence in the US market, the ban will have limited direct impact on its revenue or sales there. However, Huawei is vulnerable to the US market for other reasons. Huawei relies heavily on US-based firms such as Intel, Qualcomm, and Xilinx for semiconductors and other components used in its products (King, Bergen, and Brody, 2019). Huawei phones also use the Android operating system. The dominance of US-based firms in Huawei's supply chains therefore means that the ban acts as a chokepoint on the firm, cutting it off from the vital hardware and software it needs to operate. The US has sought to protect this advantage, blocking takeovers of US semiconductor manufacturers on national security grounds – including proposed takeovers by Chinese companies (Woodhouse, 2018).

In response to Huawei being placed on the Entity List, Google suspended some of its business with Huawei on 19 May 2019, immediately ending Huawei's access to Android operating system updates. New Huawei products, meanwhile, would only have access to the Android Open Source Project version of the Android operating system, meaning that proprietary Google mobile applications such as the Google Play store, Gmail and YouTube would not be available (Moon, 2019). Losing access to Google apps is less of a problem for Huawei in its main market, China, where Google itself is effectively banned. However, it is a major problem for Huawei's presence in foreign markets, particularly Europe, where Huawei has a growing market share and where Android accounts for 72 percent of the mobile phone market, as it does globally (StatCounter, 2020).

However, a day after Google's suspension, the US Department of Commerce issued a temporary general licence, giving Huawei a 90-day reprieve so that businesses could prepare. The temporary general licence allows "certain activities necessary to the continued operations of existing networks and to support existing mobile services" (US Department of Commerce, 2019a). This has allowed Google to continue updating existing Huawei devices; however, semiconductor manufacturers remain unable to supply Huawei with components to manufacture new devices.

The temporary general licence has since been extended four times; however, the Department of Commerce considers the latest renewal, made on 15 May 2020, to be the last. It has urged companies to prepare to apply for individual licences in order to trade with Huawei once the general licence expires on 3 August (US Department of Commerce, 2020). On 15 May 2020 the Department also further restricted Huawei's access to semiconductors by placing export restrictions on foreign manufactured components, mainly from Taiwan and South Korea, which use US software and/or US chip-making technology (Davis and Ferek, 2020).

However, despite the temporary general licences offered to Google, Huawei is moving to a permanently Google-free position in order to maintain the future reliability of its supply chains. Its newest flagship model, which has no access to the Google Play store, was launched in select international markets in late 2019 (Smith, 2020). As part of its commitment to moving away from Google, Huawei has joined three other companies to develop an alternative, and ultimately a competitor, to the Google Play store for the international market (Kirton, 2020). The success or failure of this initiative will ultimately determine how effective the Trump Administration's ban is in limiting the growth of Huawei in international markets over the long term. In the short term, there is some evidence that the ban has hurt Huawei: despite a surge in sales in its home market of China, Huawei's mobile sales have slumped elsewhere (Pham, 2019).

In the meantime, the US continues to restrict the 'geo-economic space' available to Chinese-owned internet companies. In addition to the TikTok and WeChat bans discussed above, the US has also expanded its so-called 'Clean Network' programme in an attempt to further sideline Chinese companies from telecommunications networks, mobile application stores, smartphones, cloud-based systems, and submarine internet cables. The Clean Network programme encourages other countries and private actors to shun 'untrusted' Chinese vendors, with the explicit goal of securing national data against the "CCP's [Chinese Communist Party's] surveillance state" (Pompeo, 2020). The programme is thus directly aimed at hampering the ability of Chinese companies to gain international market dominance and, by extension, the ability of the Chinese state to internationalise its power and conduct surveillance abroad.

As Chinese companies become internationally competitive, their market dominance increases. The US is concerned that this will enable Chinese technology and internet companies to access information on behalf of the Chinese state, much as the US is known to use its own internet companies to conduct surveillance. In response to this threat the US has leveraged its incumbent position, using the dominance of its firms in supply chains to shrink the 'geo-economic space' available to Chinese companies. The ability of the US to impose export restrictions even on foreign manufactured goods, because they use US technology and software, illustrates how exercising authority over firms with high market dominance can internationalise state power.

Conclusion

This article analysed the role of internet companies in the geopolitical struggle between the US and China. It illustrated how the US has been able to use internet firms to internationalise state power to conduct surveillance. The article began by arguing that internationalising state power through multinational firms depends on two variables. First, states must be able to exercise high levels of extraterritorial authority over firms. If a firm has high levels of sunk costs in a national market and/or if the firm has few alternative markets to exploit, then exiting that market to avoid regulatory oversight is costly. This increases the leverage of the state over the firm, creating high levels of extraterritorial authority – although this must be coupled with the institutional capacity to exercise this authority. Second, the firm subject to a state’s extraterritorial authority must also be internationally dominant, either through having a high overall market share or by having a high share in a small but crucial part of a market’s supply chain.

The article then examined the growing threat of China to US hegemony over the internet, focusing specifically on China's efforts to internationalise its technology companies such as Huawei. This has included work by China to create 'geo-economic space' in which its companies can build international market dominance. The US sees these efforts not only as an economic risk to the competitiveness of its industries, but also as a security and political threat. In response, the US has sought to shrink this geo-economic space, both by denying access to its own market and by encouraging others to do the same. Finally, the US has also used the international market dominance of its companies to directly disrupt Huawei's operations and the appeal of its products in third markets.

References

Alexa. (2019). The top 500 sites on the web. https://www.alexa.com/topsites

Androidrank. (2020). List of Android Most Popular Google Play Apps. Androidrank. https://www.androidrank.org/android-most-popular-google-play-apps?start=1&sort=4&price=all&category=all

Bach, D., & Newman, A. L. (2010). Governing lipitor and lipstick: Capacity, sequencing, and power in international pharmaceutical and cosmetics regulation. Review of International Political Economy, 17(4), 665–695. https://doi.org/10.1080/09692291003723706

Baker, L., & Chalmers, J. (2020). As Britain bans Huawei, U.S. pressure mounts on Europe to follow suit. Reuters. https://www.reuters.com/article/us-britain-huawei-europe/as-britain-bans-huawei-u-s-pressure-mounts-on-europe-to-follow-suit-idUSKCN24F1XG

Bureau of Industry and Security, Commerce. (2019). Addition of Entities to the Entity List, A Rule by the Industry and Security Bureau on 05/21/2019. Federal Register. https://www.federalregister.gov/documents/2019/05/21/2019-10616/addition-of-entities-to-the-entity-list.

Büthe, T., & Mattli, W. (2011). The New Global Rulers: The Privatization of Regulation in the World Economy. Princeton University Press. https://doi.org/10.23943/princeton/9780691144795.001.0001

Cartwright, M. (2019). Business conflict and international law: The political economy of copyright in the United States. Regulation & Governance. https://doi.org/10.1111/rego.12272

Chapple, C. (2019, November 14). TikTok Clocks 1.5 Billion Downloads on The App Store and Google Play [Blog post]. Sensor Tower Blog. https://sensortower.com/blog/tiktok-downloads-1-5-billion

Crasnic, L., Kalyanpur, N., & Newman, A. (2017). Networked liabilities: Transnational authority in a world of transnational business. European Journal of International Relations, 23(4), 906–929. https://doi.org/10.1177/1354066116679245

Davis, B., & Ferek, K. S. (2020, May 15). U.S. Moves to Cut Off Chip Supplies to Huawei. The Wall Street Journal. https://www.wsj.com/articles/u-s-moves-to-cut-off-chip-supplies-to-huawei-11589545335

De Goede, M. (2012). The SWIFT affair and the global politics of European security. Journal of Common Market Studies, 50(2), 214–230. https://doi.org/10.1111/j.1468-5965.2011.02219.x

Donnan, S., Leonard, J., & King, I. (2019, July 23). Trump meets with tech CEOs and takes a step toward easing Huawei ban. The Los Angeles Times. https://www.latimes.com/business/technology/story/2019-07-23/trump-moves-toward-easing-huawei-ban

Elbra, A. D. (2014). Interests need not be pursued if they can be created: Private governance in African gold mining. Business and Politics, 16(2), 247–266. https://doi.org/10.1515/bap-2013-0021

Facebook Inc. (2019). Form 10-K: Annual Report Pursuant to Section 13 or 15 (D) of the Securities Exchange Act of 1934 for the fiscal year ended December 31, 2018. Facebook Inc.

Farrell, H., & Newman, A. (2015). The new politics of interdependence: Cross-national layering in trans-Atlantic regulatory disputes. Comparative Political Studies, 48(4), 497–526. https://doi.org/10.1177/0010414014542330

Farrell, H., & Newman, A. (2016). The new interdependence approach: Theoretical development and empirical demonstration. Review of International Political Economy, 23(5), 713–736. https://doi.org/10.1080/09692290.2016.1247009

Farrell, H., & Newman, A. L. (2010). Making global markets: Historical institutionalism in international political economy. Review of International Political Economy, 17(4), 609–638. https://doi.org/10.1080/09692291003723672

Farrell, H., & Newman, A. L. (2019). Weaponized Interdependence: How Global Economic Networks Shape State Coercion. International Security, 44(1), 42–79. https://doi.org/10.1162/isec_a_00351

Fortune Magazine. (2019). The Fortune Global 500, 2019. Fortune Magazine. https://fortune.com/global500/2019/search/

Google LLC, successor in law to Google Inc. V Commission nationale de l’informatique et des libertés (CNIL), (Court of Justice of the European Union 2019).

Google L.L.C. v. Equustek Solutions Inc., (United States District Court, Northern District of California, San Jose Division 2017).

Google LLC v. Equustek Solutions Inc, SCC 34, [2017] 1 S.C.R. 824 (Supreme Court of Canada 2017).

Greenwald, G. (2014). No place to hide: Edward Snowden, the NSA, and the US surveillance state. Macmillan.

Greenwald, G., & MacAskill, E. (2013, June 7). NSA Prism program taps in to user data of Apple, Google and others. The Guardian. https://www.theguardian.com/world/2013/jun/06/us-tech-giants-nsa-data

Greenwald, G., MacAskill, E., Poitras, L., Ackerman, S., & Rushe, D. (2013, July 12). Microsoft handed the NSA access to encrypted messages. The Guardian. https://www.theguardian.com/world/2013/jul/11/microsoft-nsa-collaboration-user-data

Harris, S., & Fish, I. S. (2013, December 2). Accused of Cyberspying, Huawei Is ‘Exiting the US Market’. Foreign Policy. https://foreignpolicy.com/2013/12/02/accused-of-cyberspying-huawei-is-exiting-the-u-s-market/

Haufler, V. (2006). Global Governance and the Private Sector. In C. May (Ed.), Global Corporate Power (pp. 85–103). Lynn Reinner Publishers.

Higgins, V. (2015). Beyond neo-techno-nationalism: An introduction to China’s emergent third way: Globalised adaptive ecology, emergent capabilities and policy instruments. In Alliance Capitalism, Innovation and the Chinese State (pp. 115–145). Palgrave Macmillan. https://doi.org/10.1057/9781137529657_5

Hills, J. (2002). The struggle for control of global communication: The formative century. University of Illinois Press.

Hong, Y. (2017). Networking China: The Digital Transformation of the Chinese Economy. University of Illinois Press. https://doi.org/10.5406/illinois/9780252040917.001.0001

Isaac, A. (2019, July 13). US tells Britain: Fall into line over China and Huawei, or no trade deal. The Telegraph. https://www.telegraph.co.uk/business/2019/07/13/us-tells-britain-fall-line-china-huawei-no-trade-deal/

Isaac, M., Swanson, A., & Rappeport, A. (2020, July 31). Microsoft Said to Be in Talks to Buy TikTok, as Trump Weighs Curtailing App. The New York Times. https://www.nytimes.com/2020/07/31/technology/tiktok-microsoft.html

Kaczmarek, S. C., & Newman, A. L. (2011). The long arm of the law: Extraterritoriality and the national implementation of foreign bribery legislation. International Organization, 65(4), 745–770. https://doi.org/10.1017/S0020818311000270

King, I., Bergen, M., & Brody, B. (2019, May 19). Top US Tech Companies Begin to Cut Off Vital Huawei Supplies. Bloomberg. https://www.bloomberg.com/news/articles/2019-05-19/google-to-end-some-huawei-business-ties-after-trump-crackdown

Kirton, D. (2020, February 6). Exclusive: China’s mobile giants to take on Google’s Play store—Sources. Reuters. https://www.reuters.com/article/us-china-mobile-exclusive/exclusive-chinas-mobile-giants-to-take-on-googles-play-store-sources-idUSKBN20018H

Mann, M., & Warren, I. (2018). The digital and legal divide: Silk Road, transnational online policing and southern criminology. In K. Carrington, R. Hogg, J. Scott, & M. Sozzo (Eds.), The Palgrave handbook of criminology and the global south (pp. 245–260). Palgrave MacMillan. https://doi.org/10.1007/978-3-319-65021-0_13

Menn, J. (2016, October 5). Exclusive: Yahoo secretly scanned customer emails for U.S. intelligence – sources. Reuters. https://www.reuters.com/article/us-yahoo-nsa-exclusive-idUSKCN1241YT

Mikler, J. (2011). The illusion of the ‘power of markets’. The Journal of Australian Political Economy, 68, 41–61.

Mikler, J. (2018). The political power of global corporations. Polity Press.

Monteros Pereda, E. (2019, December 2). TikTok Under Investigation for Posing a Threat to National Security—Is Chinese Tech Running Out of Time in the U.S.? JOLT Digest. https://jolt.law.harvard.edu/digest/tiktok-under-investigation-for-posing-a-threat-to-national-security-is-chinese-tech-running-out-of-time-in-the-u-s

Moon, A. (2019, May 19). Exclusive: Google suspends some business with Huawei after Trump blacklist—Source. Reuters. https://www.reuters.com/article/us-huawei-tech-alphabet-exclusive/exclusive-google-suspends-some-business-with-huawei-after-trump-blacklist-source-idUSKCN1SP0NB

National Security Agency, and Special Source Operations. (2013). https://snowdenarchive.cjfe.org/greenstone/collect/snowden1/index/assoc/HASH01f5/323b0a6e.dir/doc.pdf.

Newman, A. L., & Posner, E. (2011). International interdependence and regulatory power: Authority, mobility, and markets. European Journal of International Relations, 17(4), 589–610. https://doi.org/10.1177/1354066110391306

Pancevski, B. (2020, February 12). U.S. Officials Say Huawei Can Covertly Access Telecom Networks. The Wall Street Journal. https://www.wsj.com/articles/u-s-officials-say-huawei-can-covertly-access-telecom-networks-11581452256

Pancevski, B., & Germano, S. (2019, March 11). Drop Huawei or See Intelligence Sharing Pared Back, US Tells Germany. The Wall Street Journal. https://www.wsj.com/articles/drop-huawei-or-see-intelligence-sharing-pared-back-u-s-tells-germany-11552314827

Pham, S. (2019, November 14). Huawei phones are still red hot in China. But the Google app ban is hurting sales overseas. CNN. https://edition.cnn.com/2019/11/14/tech/huawei-sales-mate-30/index.html

Pompeo, M. (2020, August 5). Announcing the Expansion of the Clean Network to Safeguard America’s Assets [Press statement]. U. S. Department of State. https://www.state.gov/announcing-the-expansion-of-the-clean-network-to-safeguard-americas-assets/

Powers, S. M., & Jablonski, M. (2015). The real cyber war: The political economy of internet freedom. University of Illinois Press. https://doi.org/10.5406/illinois/9780252039126.001.0001

Quack, S., & Dobusch, L. (2013). Framing standards, mobilizing users: Copyright versus fair use in transnational regulation. Review of International Political Economy, 20(1), 52–88. https://doi.org/10.1080/09692290.2012.662909

Rogers, M., & Ruppersberger, D. (2012). Investigative Report on the U.S. National Security Issues Posed by Chinese Telecommunications Companies Huawei and ZTE [Report]. House Intelligence Committee.

Samuelson, P. (2015). Possible futures of fair use. Washington Law Review, 90(2), 815–868. https://digitalcommons.law.uw.edu/wlr/vol90/iss2/9/

Schumer, C., & Cotton, T. (2019). Letter to The Honorable Joseph Maguire. https://www.democrats.senate.gov/imo/media/doc/10232019%20TikTok%20Letter%20-%20FINAL%20PDF.pdf

Shen, H. (2018). Building a digital silk road? Situating the Internet in China’s belt and road initiative. International Journal of Communication, 12, 2683–2701. https://ijoc.org/index.php/ijoc/article/view/8405

Smith, C. (2020, January 30). Huawei is done with Google for good. BGR. https://bgr.com/2020/01/30/huawei-android-ban-p40-pro-and-other-phones-wont-get-google-apps/

Soltani, A., Peterson, A., & Gellman, B. (2013, December 10). NSA uses Google cookies to pinpoint targets for hacking. The Washington Post. https://www.washingtonpost.com/news/the-switch/wp/2013/12/10/nsa-uses-google-cookies-to-pinpoint-targets-for-hacking/

Starrs, S. (2013). American Economic Power Hasn’t Declined—It Globalized! Summoning the Data and Taking Globalization Seriously. International Studies Quarterly, 57(4), 817–830. https://doi.org/10.1111/isqu.12053

StatCounter. (2020). GlobalStats. https://gs.statcounter.com/

Suttmeier, R. P., Yao, X., & Tan, A. Z. (2009). Standards of power? Technology, institutions, and politics in the development of China’s national standards strategy. Geopolitics, History, and International Relations, 1(1), 46–84.

Trump, D. J. (2019). Executive Order 13873: Securing the Information and Communications Technology and Services Supply Chain. Homeland Security Digital Library. https://www.hsdl.org/?view&did=825242

Trump, D. J. (2020a). Executive Order on Addressing the Threat Posed by TikTok. The White House. https://www.whitehouse.gov/presidential-actions/executive-order-addressing-threat-posed-tiktok/

Trump, D. J. (2020b). Executive Order on Addressing the Threat Posed by WeChat. The White House. https://www.whitehouse.gov/presidential-actions/executive-order-addressing-threat-posed-wechat/

Tusikov, N. (2016). Chokepoints: Global Private Regulation on the Internet. University of California Press. https://doi.org/10.1525/california/9780520291218.001.0001

Tusikov, N. (2019a). How US-made rules shape internet governance in China. Internet Policy Review, 8(2). https://doi.org/10.14763/2019.2.1408

Tusikov, N. (2019b). Regulation through “bricking”: Private ordering in the “Internet of Things”. Internet Policy Review, 8(2). https://doi.org/10.14763/2019.2.1405

United Nations Conference on Trade and Development. (2019). World Investment Report: Annex table 19. The world’s top 100 non-financial MNEs, ranked by foreign assets, 2018. World Investment Report: Annex Tables. https://unctad.org/Sections/dite_dir/docs/WIR2020/WIR2020Tab19.xlsx

United States Department of Commerce. (2019a, May 20). Department of Commerce Issues Limited Exemptions on Huawei Products [Press release]. https://www.commerce.gov/news/press-releases/2019/05/department-commerce-issues-limited-exemptions-huawei-products

United States Department of Commerce. (2019b, August 19). Department of Commerce Adds Dozens of New Huawei Affiliates to the Entity List and Maintains Narrow Exemptions through the Temporary General License [Press release]. https://www.commerce.gov/news/press-releases/2019/08/department-commerce-adds-dozens-new-huawei-affiliates-entity-list-and

United States Department of Commerce. (2020, May 15). Department of Commerce Issues Expected Final 90-Day Extension of Temporary General License Authorizations [Press release]. https://www.commerce.gov/news/press-releases/2020/05/department-commerce-issues-expected-final-90-day-extension-temporary

Vinocur, N. (2019, December 12). ‘We have a huge problem’: European tech regulator despairs over lack of enforcement. Politico. https://www.politico.com/amp/news/2019/12/27/europe-gdpr-technology-regulation-089605

Willsher, K. (2019, October 17). France accuses Google of flouting EU copyright law meant to help news publishers. Los Angeles Times. https://www.latimes.com/business/story/2019-10-17/france-accuses-google-ignoring-copyright-law

Woodhouse, A. (2018, February 23). US blocks Chinese takeover of semiconductor equipment company. Financial Times. https://www.ft.com/content/b3ad7274-1842-11e8-9376-4a6390addb44

Footnotes

1. Browser market share is based on volume of internet usage. Market share shown is the average of the monthly market share in 2019.

2. This was later expanded to 114 affiliates on 19 August 2019 (US Department of Commerce, 2019b).

Regulatory arbitrage and transnational surveillance: Australia’s extraterritorial assistance to access encrypted communications

This paper is part of Geopolitics, jurisdiction and surveillance, a special issue of Internet Policy Review guest-edited by Monique Mann and Angela Daly.

Introduction

Since the Snowden revelations in 2013 (see e.g., Lyon, 2014; Lyon, 2015) an ongoing policy issue has been the legitimate scope of surveillance, and the extent to which individuals and groups can assert their fundamental rights, including privacy. There has been a renewed focus on policies regarding access to encrypted communications, which are part of a longer history of the ‘cryptowars’ of the 1990s (see e.g., Koops, 1999). We examine these provisions in the Anglophone ‘Five Eyes’ (FVEY) 1 countries - Australia, Canada, New Zealand, the United Kingdom and the United States (US) - with a focus on those that attempt to regulate communications providers. The paper culminates with the first comparative analysis of recent developments in Australia. The Australian developments are novel in the breadth of entities to which they may apply and their extraterritorial reach: they attempt to regulate transnational actors, and may implicate Australian agencies in the enforcement - and potential circumvention - of foreign laws on behalf of foreign law enforcement agencies. This latter aspect represents a significant and troubling development in the context of FVEY encryption-related assistance provisions.

We explore this expansion of extraterritorial powers, which extends the reach of all FVEY nations via Australia by requesting or coercing assistance from transnational technology companies as "designated communications providers", and by allowing foreign law enforcement agencies to ask their Australian counterparts to make such requests. Australia has unique domestic legal arrangements, including an aggressive stance on mass surveillance (Molnar, 2017) and an absence of comprehensive constitutional or legislated fundamental rights at the federal level (Daly & Thomas, 2017; Mann et al., 2018), and it has recently enacted the Telecommunications and Other Legislation Amendment (Assistance and Access) Act 2018 (Cth) 2, the focus of this article. We demonstrate that Australia’s status as the ‘weak link’ in the FVEY alliance enables the introduction of laws less likely to be constitutionally or otherwise legally permissible elsewhere. We draw attention to the extraterritorial reach of the Australian provisions, which affords the possibility for other FVEY members to engage in regulatory arbitrage to exploit the weaker human rights protections and oversight measures in Australia.

Human rights and national security in Australia

Australia has a well-documented track record of ‘hyper legislation’ of national security measures (Roach, 2011), having passed more than 64 terrorism-specific laws since 9/11 that have been recognised as having serious potential to encroach on democratic rights and freedoms (Williams & Reynolds, 2017). Some of these laws have involved digital and information communications infrastructures and their operators, such as those facilitating Australian security and law enforcement agencies’ use of Computer Network Operations (Molnar, Parsons, & Zouave, 2017) and the introduction of mandatory data retention obligations on internet service providers (Suzor, Pappalardo, & McIntosh, 2017). Australia’s role as a leading proponent of stronger powers against encrypted communications is consistent with this history.

Yet, unlike any of the other FVEY members, Australia has no comprehensive enforceable human rights protection at the federal level (Daly & Thomas, 2017; Mann et al., 2018). 3 Australia does not have comprehensive constitutional rights (like the US and Canada), a legislated bill of rights (like NZ and the UK), or recourse to regional human rights bodies (like the UK with the European Convention on Human Rights) (see Table 1).

Given this situation, we argue Australia is a ‘weak link’ among the FVEY partners because its legal framework allows for a more vigorous approach to legislating for national security at the expense of human rights protections, including but not limited to privacy (Williams & Reynolds, 2017; Mann et al., 2018). Australia’s status as a human rights ‘weak link’ affords the ‘legal possibility’ of measures which may be ‘legally impossible’ in other jurisdictions, including those of the other FVEY countries, given their particular domestic and regional rights protections.

Encryption laws in the Five Eyes

FVEY governments have made frequent statements regarding their surveillance capabilities ‘going dark’ due to encryption, with consequences for their ability to prevent, detect and investigate serious crimes such as terrorism and the dissemination of child exploitation material (Comey, 2014). This is despite evidence that the extensive surveillance powers that these agencies maintain are mostly used for the investigation of drug offences (Wilson & Mann, 2017; Parsons & Molnar, 2017). Further, there is an absence of evidence that undermining encryption will improve law enforcement responses (Gill, Israel, & Parsons, 2018), coupled with disregard for the many legitimate uses of encryption (see e.g., Abelson et al., 2015), including the protection of fundamental rights (see e.g., Froomkin, 2015).

It is important to note, as per Koops and Kosta (2018), that communications may be encrypted by different actors at different points in the telecommunications process. Where, and by whom, encryption is applied will affect which actors have the ability to decrypt communications, and accordingly where legal obligations to decrypt may lie, or be actioned. For example, in some scenarios the service provider maintains the means of decrypting the communications, but this would not be the case where the software provider or end user has the means to decrypt (i.e., ‘at the ends’). More recently, the focus has shifted to communications providers offering encrypted services or facilitating a third party offering such services over their networks. These actors can be forced to decrypt communications either via ‘backdoors’ (i.e., deliberate weaknesses or vulnerabilities) built into the service, or via legal obligations to provide assistance. The latter scenario is not a technical backdoor per se, but could be conceptualised as a ‘legal’ means to acquire a ‘backdoor’, as the government agency will obtain covert access to the service and communications therein, thus having a similar outcome to a technical backdoor. It is these measures which are the focus of our analysis. We provide a brief overview of the legal situation in each FVEY country (Table 1), before turning to Australia as our main focus.
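
To make the distinction concrete, the following minimal sketch (our illustration, not drawn from any of the legislation or products discussed; it assumes Python with the third-party 'cryptography' package installed) contrasts provider-managed encryption, where the service provider holds the key and can therefore comply with a decryption obligation, with encryption applied 'at the ends', where the provider only relays ciphertext it has no means to decrypt.

# Illustrative sketch only: contrasting provider-managed encryption with
# end-to-end encryption, using the 'cryptography' package's Fernet scheme.
from cryptography.fernet import Fernet

message = b"message from Alice to Bob"

# Scenario 1: provider-managed encryption.
# The service provider generates and holds the key, so it can decrypt the
# communication on receipt of a lawful request.
provider_key = Fernet.generate_key()
provider = Fernet(provider_key)
ciphertext = provider.encrypt(message)
assert provider.decrypt(ciphertext) == message  # the provider can comply

# Scenario 2: encryption applied 'at the ends'.
# The key exists only on the users' devices (a hypothetical arrangement for
# illustration); the provider merely relays opaque ciphertext.
endpoint_key = Fernet.generate_key()   # held by Alice and Bob only
alice = Fernet(endpoint_key)
relayed_ciphertext = alice.encrypt(message)
# Without endpoint_key the provider cannot recover the plaintext; only the
# endpoints can, or the product itself would have to be altered to capture
# keys or plaintext (a 'backdoor').
bob = Fernet(endpoint_key)
assert bob.decrypt(relayed_ciphertext) == message

In the second scenario a decryption obligation can only be met by the endpoints, or by altering the product itself, which is why the provisions surveyed below generally attach to encryption that the provider has applied.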

United States

The legal position in the US on compelling decryption depends, at least in part, on the actor targeted. The US has no specific legislation dealing with encryption, although other laws on government investigatory and surveillance powers may be applicable (Gonzalez, 2019). Forcing an individual to decrypt data or communications has generally been considered incompatible with the Fifth Amendment to the US Constitution (i.e., the right against self-incrimination), although there is no authoritative Supreme Court decision on the issue (Gill, 2018). Furthermore, the US government may be impeded by arguments that encryption software constitutes ‘speech’ protected by the First Amendment, as well as by Fourth Amendment protections (Cook Barr, 2016; Gonzalez, 2019; see also Daly, 2017).

For communications providers, the US has a provision in the Communications Assistance for Law Enforcement Act (CALEA), § 1002 on Capability Requirements for telecommunications providers, which states that providers will not be required to decrypt or ensure that the government can decrypt communications encrypted by customers, unless the provider has provided the encryption used (see e.g., Koops & Kosta, 2018). 4

In an attempt to avoid the difficulty of forcing individuals to decrypt, and the CALEA requirements’ application only to telecommunications companies, attention has turned to technology companies, including equipment providers. Litigation has been initiated against companies that refuse to provide assistance, the most notable instance being the FBI-Apple dispute concerning the locked iPhone of one of the San Bernardino shooters (Gonzalez, 2019). Ultimately the FBI was able to unlock the iPhone without Apple’s assistance by relying on a technical solution from Cellebrite (Brewster, 2018), thereby engaging in a form of ‘lawful hacking’ (Gonzalez, 2019). Absent a superior court ruling, or legislative intervention, the legal position regarding compelled assistance remains uncertain (Abraha, 2019).

Canada

Canada does not have specific legislation that provides authorities with the power to compel decryption. Canadian authorities have, however, imposed requirements on wireless communications providers through spectrum licensing conditions in the form of the Solicitor General Enforcement Standards for Lawful Interception of Telecommunications (SGES). Standard 12 obliges providers to decrypt any communications they have encrypted on receiving a lawful request, but excludes end-to-end encryption “that can be employed without the service provider’s knowledge” (Gill, Israel, & Parsons, 2018, p. 59; West & Forcese, 2020). It appears the requirements only apply to encryption applied by the operator itself, can involve a bulk rather than case-by-case decryption requirement, do not require the operator to develop “new capabilities to decrypt communications they do not otherwise have the ability to decrypt”, and do not prevent operators from employing end-to-end encryption (Gill, Israel, & Parsons, 2018, p. 60; West & Forcese, 2020).

There are provisions of the Canadian Criminal Code which give operators immunity from civil and criminal liability if they cooperate with law enforcement ‘voluntarily’ by preserving or disclosing data, even without a warrant (Gill, Israel, & Parsons, 2018, p. 57). There are also production orders and assistance orders that can be issued under the Criminal Code to oblige third parties to assist law enforcement and disclose documents and records, which could, in theory, be used to target encrypted communications (Gill, Israel, & Parsons, 2018, pp. 62-63), but West and Forcese (2020, p. 13) cast doubt on this possibility. There are also practical limitations, including the fact that many digital platforms and service providers do not have a physical presence in Canada, and thus are effectively beyond the jurisdiction of Canadian authorities (West & Forcese, 2020). Here, Mutual Legal Assistance Treaties (MLATs) could be used, although their use is notoriously beset with delay, and may only be effective if the other jurisdiction has its own laws to oblige third parties to decrypt data or communications (West & Forcese, 2020).

The Canadian Charter of Rights and Freedoms has a number of sections relevant to how undermining encryption can interfere with democratic freedoms, namely sections 2 (freedom of expression), 7 (security of the person), 8 (right against unreasonable search and seizure), and the right to silence and protection from self-incrimination contained in sections 7, 11 and 14 (West & Forcese, 2020). Case law from Canadian courts suggests that individuals cannot be compelled to decrypt their own data (Gill, 2018, p. 451). The Charter implications of BlackBerry’s assistance to the Canadian police in the R v Mirarchi 5 case were never ruled on, as the case was dropped (Gill, Israel, & Parsons, 2018, p. 58).

In the absence of a legislative proposal before the Canadian Parliament, it is difficult to surmise how, and whether, anti-encryption powers would run up against human rights protections. Yet any concrete proposal would likely face scrutiny in the courts given the impacts on Canadians’ Charter-protected rights.

New Zealand

In New Zealand, provisions in the Telecommunications (Interception Capability and Security) Act 2013 (TICSA) require network operators to ensure that their networks can be technically subjected to lawful interception (Cooper, 2018). 6 Section 10(3) requires that public telecommunications network operators, on receipt of a lawful request, must decrypt encrypted communications carried by their networks, if the operator has provided the means of encryption. Subsection 10(4) states that an operator is not required to decrypt communications that have been encrypted using a publicly available product supplied by another entity, and the operator is not under any obligation to ensure that a surveillance agency has the ability to decrypt communications.

It appears these provisions may entail that an operator cannot provide end-to-end encryption on its services if its networks are to remain subject to lawful interception - that is, the operator must maintain the cryptographic key where encryption is managed centrally by the service provider (Global Partners Digital, n.d.) and engineer a ‘back door’ into the service (Cooper, 2018). However, the NGO NZ Council for Civil Liberties considers the impact of this provision to be theoretical, as most services are offshore and the provision does not apply extraterritorially (Beagle, 2017). Yet section 38 of TICSA allows the responsible minister to make “service providers” (discussed below) subject to provisions such as this on the same basis as “network operators”, which may give section 10 an extraterritorial reach (Keith, 2020).

There is a further provision in section 24 of TICSA that places both network operators and service providers (defined as anyone, whether in New Zealand or not, who provides a communications service to an end user in New Zealand) under obligations to provide ‘reasonable’ assistance to surveillance agencies with interception warrants or lawful interception authorities, including the decryption of communications where they were the source of the encryption. Such companies do not have to decrypt encryption they have not provided, nor “ensure that a surveillance agency has the ability to decrypt any telecommunication” (TICSA s 24(4)(b)). It is unclear what “reasonable assistance” entails, and how it would apply to third party app providers such as WhatsApp (to which section 24 would prima facie apply, but not section 10 in the absence of a section 38 decision). It is also unclear how this provision would be enforced against offshore companies (Dizon et al., 2019, pp. 74-75).

There are further provisions in the Search and Surveillance Act 2012 which affect encryption. Section 130 includes a requirement that “the user, owner, or provider of a computer system […] offer reasonable assistance to law enforcement officers conducting a search and seizure including providing access information”, which could be used to force an individual or business to decrypt data and communications (Dizon et al., 2019, p. 61). There is a lack of clarity as to how the privilege against self-incrimination operates in this context (Dizon et al., 2019, pp. 62-63). There is also a lack of clarity about what “reasonable assistance” may entail for companies, which will likely be third parties unable to avail themselves of the protection against self-incrimination (Dizon et al., 2019, pp. 65-66).

New Zealand has human rights protections enshrined in its Bill of Rights Act 1990, and section 21 contains the right to be secure against unreasonable searches and seizures. However, it “does not have higher law status and so can be overridden by contrary legislation…but there is at least some effort to avoid inconsistencies” (Keith, 2020). There is also the privilege against self-incrimination, “the strongest safeguard available in relation to encryption as it works to prevent a person from being punished for refusing to provide information that could lead to criminal liability” (Dizon et al., 2019, p. 7). There is no freestanding right to privacy in the New Zealand Bill of Rights, and so aspects of privacy must be found via other recognised rights (Butler, 2013), or may be protected via data protection legislation and New Zealand courts’ “relatively strong approach to unincorporated treaties, including human rights obligations” (Keith, 2020).

Despite being part of the FVEY communiques on encryption mentioned below, Keith (2020) views New Zealand’s domestic approach as more “cautious or ambivalent”, with “no proposal to follow legislation enacted by other Five Eyes countries”.

United Kingdom

The most significant law is the UK’s Investigatory Powers Act 2016 (henceforth IPA). 7 Section 253 allows a government minister, subject to approval by a 'Judicial Commissioner', to issue a ‘Technical Capability Notice’ (TCN) to any communications operator (which includes telecommunications companies, internet service providers, email providers, social media platforms, cloud providers and other ‘over-the-top’ services), whether UK-based or anywhere else in the world, imposing obligations on that provider. Such an obligation can include the operator having to remove “electronic protection applied by or on behalf of that operator to any communications or data”. The government minister must also consider technical practicalities such as whether it is ‘practicable’ to impose requirements on operators, and for the operators to comply. Section 254 provides that Judicial Commissioners conduct a necessity and proportionality test before approving a TCN. This means that a provider receiving a TCN would not be able to provide end-to-end encryption for its customers, and must ensure there is a method of decrypting communications. In other words, the provider must centrally manage encryption and maintain the decryption key (Smith, 2017a).

In November 2017, the UK Home Office released a Draft Communications Data Code of Practice for consultation, which clarified that a TCN would not require a telecommunications operator to remove encryption per se, but “it requires that operator to maintain the capability to remove encryption when subsequently served with a warrant, notice or authorisation” (UK Home Office, 2017, p. 75). Furthermore, it was reiterated that an obligation to remove encryption can only be imposed where “reasonably practicable” for the communications provider to comply with, and the obligation can only pertain to encryption that the communications provider has itself applied, or in circumstances when this has been done, for example, by a contractor on the provider’s behalf.

Later, in early 2018, after analysing responses to the Draft Code, the UK Home Office introduced draft administrative regulations to the UK Parliament, which were passed in March 2018. These regulations affirm the Home Office’s previous statements that TCNs require that operators “maintain the capacity” to disclose communications data on receipt of an authorisation or warrant, and such notices can only impose obligations on telecommunications providers to remove “electronic protection” applied by, or on behalf of, the provider “where reasonably practicable” (Ni Loideain, 2019, p. 186). This would seem to entail that encryption methods applied by the user are not covered by this provision (Smith, 2017b). However, Keenan (2019) argues that the regulations may “compel […] operators to facilitate the ‘disclosure’ of content by targeting authentication functions” which may have the effect of secretly delivering messages to law enforcement.

While some of the issues identified above with the UK’s TCNs may be clarified by these regulations, other issues remain. For example, the situation remains unclear for a provider wanting to offer end-to-end encryption to its customers without holding the means to decrypt them. Practical questions remain about how the provisions can be enforced against providers which may not be geographically based in the UK, such as technology companies and platforms which may or may not maintain offices in the UK. To date, there is also no public knowledge of whether any TCNs have been made, approved by Judicial Commissioners, and complied with by operators (Keenan, 2019).

In addition to TCNs, section 49 of the Regulation of Investigatory Powers Act (2000) (RIPA) allows law enforcement agencies in possession of a device to issue a notice to the device user or device manufacturer to compel them to unlock encrypted devices or networks (Keenan, 2019). The law enforcement officer must obtain permission from a judge on the grounds that it is “necessary in the interests of national security, for the purpose of preventing or detecting crime, or where it is in the interest of the economic well-being of the United Kingdom” (Keenan, 2019). Case law on section 49 notices in criminal matters has generally not found the provision’s use to force decryption to violate the privilege against self-incrimination, in sharp distinction to the US experience (Keenan, 2019).

It is unclear whether these provisions would withstand a challenge before the European Court of Human Rights on the basis of incompatibility with ECHR rights, especially Article 6 (right to a fair trial) and Article 8 (right to privacy).

Australia

In Australia the encryption debate commenced in June 2017 when then-Australian Prime Minister Turnbull (in)famously stated that “the laws of mathematics are very commendable, but the only law that applies in Australia is the law of Australia” (Pearce, 2017, para. 8). This remark, interpreted colloquially as a ‘war on maths’ (Pearce, 2017), gestured at an impending legislative proposal that would introduce provisions to weaken end-to-end encryption.

In August 2018, the Five Eyes Alliance met in a ‘Five Country Ministerial’ (FCM) and issued a communique that stated: “We agreed to the urgent need for law enforcement to gain targeted access to data, subject to strict safeguards, legal limitations, and respective domestic consultations” (Australian Government Department of Home Affairs, 2018, para. 18). The communique was accompanied by a Statement of Principles on Access to Evidence and Encryption, assented to by all FVEY governments (Australian Government Department of Home Affairs, 2018). The statement affirmed the important but non-absolute nature of privacy, and signalled a “pressing international concern” posed by law enforcement’s inability to access encrypted content. FVEY partners also agreed to abide by three principles in the statement: mutual responsibility; the paramount status of the rule of law and due process; and freedom of choice for lawful access solutions. “Mutual responsibility” relates to industry stakeholders being responsible for providing access to communications data. The “freedom of choice” principle relates to FVEY members encouraging service providers to “voluntarily establish lawful access solutions to their products and services that they create or operate in our countries”, with the possibility of governments “pursu[ing] technological, enforcement, legislative or other measures to achieve lawful access solutions” if they “continue to encounter impediments to lawful access to information” (Australian Government Department of Home Affairs, 2018, paras. 34-35).

In the month following this meeting, the Australian government introduced what became the Telecommunications and Other Legislation Amendment (Assistance and Access) Act 2018 (Cth) (or ‘AA Act’), which was subsequently passed by the Australian Parliament in December 2018. The Act amends pre-existing surveillance legislation in Australia, including the Telecommunications Act 1997 (Cth) and the Telecommunications (Interception and Access) Act 1979 (Cth). It includes a series of problematic reforms that have extraterritorial reach beyond the Australian jurisdiction. 8

Specifically, three new mechanisms which seem (at least at face value) to be inspired by the UK’s IPA are introduced into the Telecommunications Act: Technical Assistance Requests (TARs), 9 Technical Assistance Notices (TANs) 10 and Technical Capability Notices (TCNs). 11 TARs can be issued by Australian security agencies 12 that may “ask the provider to do acts or things on a voluntary basis that are directed towards ensuring that the provider is capable of giving certain types of help.” 13 TARs escalate to TANs, which compel assistance and impose penalties for non-compliance. The Australian Attorney-General can also issue TCNs, which “may require the provider to do acts or things directed towards ensuring that the provider is capable of giving certain types of help” or to actually do such acts and things.

While the language of the Australian TCNs is similar to that of the UK IPA, there is a much longer and more broadly worded list of “acts or things” that a provider can be asked to do on receipt of a TCN. 14 Although, as per section 317ZG, “systemic weaknesses” cannot be introduced, 15 there is still a significant potential impact on the security and privacy of encrypted communications. An important distinction between the Australian and UK TCNs is that the Australian notices are issued by the executive and are not subject to judicial oversight (Table 1).

The AA Act has extraterritorial reach beyond Australia in two main ways. The first is via obligations imposed on “designated communications providers” located outside Australia. “Designated communications providers” is defined extremely broadly to include, inter alia, carriers, carriage service providers, intermediaries and ancillary service providers, any provider of an “electronic service” with one or more end-users in Australia, and any provider of software likely to be used in connection with such a service. It also includes any “constitutional corporation” 16 that manufactures, installs, maintains or supplies devices for use, or likely to be used, in Australia, or develops, supplies or updates software that is capable of being installed on a computer or device that is likely to be connected to a telecommunications network in Australia (Ford & Mann, 2019). Thus a very wide range of providers from Australia and overseas will fall within these definitions (McGarrity & Hardy, 2020). Failure to comply with notices may result in financial penalties for companies, yet it is not clear how such penalties may be enforced against companies which are not incorporated or located in Australia. Where a TAR is issued, it provides designated communications providers with civil immunity 9 from damages that may arise from the request (for example, rendering phones or devices useless), which may incentivise compliance prior to escalation to an enforceable TAN or TCN (Ford & Mann, 2019).

The second aspect of the AA Act’s extraterritorial reach is the provision of assistance by Australian law enforcement to their counterparts via the enforcement of foreign laws. The TARs, TANs, and TCNs all involve “assisting the enforcement of the criminal laws of a foreign country, so far as those laws relate to serious foreign offences”. 17 This is also reinforced by further amendments to the Mutual Assistance in Criminal Matters Act 1987 (Cth) that bypass MLAT processes and provide a conduit to the extraterritorial application of Australia’s surveillance laws. That is, Australian law enforcement agencies are able to assist foreign governments, through their requests for Australian assistance, including in the form of accessing encrypted communications and/or designing new ways to access encrypted communications (as per TCNs), for the enforcement of their own criminal laws. 18 This may operate as a loophole through which foreign law enforcement agencies circumvent their own legal system’s safeguards and capitalise on Australia’s lack of a federal human rights framework (Ford & Mann, 2019).

Table 1: Overview of anti-encryption measures in each FVEY country
 

Relevant law/s
United States: Communications Assistance for Law Enforcement Act § 1002.
Canada: No specific legislation that provides authorities the power to compel decryption; narrow obligation in Solicitor General Enforcement Standards for Lawful Interception of Telecommunications (SGES) Standard 12.
New Zealand: Telecommunications (Interception Capability and Security) Act 2013 sections 10 and 24.
United Kingdom: Investigatory Powers Act 2016 section 253.
Australia: Telecommunications and Other Legislation Amendment (Assistance and Access) Act 2018 (Cth) section 317A.

Entities targeted
United States: Applies only to “telecommunications companies”.
Canada: Applies only to “wireless communication providers”.
New Zealand: Section 10 applies to “network operators”; section 24 applies to “network operators” and “service providers”.
United Kingdom: Any “communications operator” (which includes telecoms companies, internet service providers, email providers, social media platforms, cloud providers and other ‘over-the-top’ services).
Australia: The definition of “designated communications provider” is set out in section 317C. It includes but is not limited to “a carrier or carriage service provider”, a person who “provides an electronic service that has one or more end-users in Australia”, or a person who “manufactures or supplies customer equipment for use, or likely to be used, in Australia”.

Statutory obligations imposed on target
United States: Companies will not be required to decrypt, or ensure that the government can decrypt, communications encrypted by customers, unless the provider itself has provided the encryption used.
Canada: Providers must decrypt any communications they have encrypted themselves on receiving a lawful request. Seems not to apply to end-to-end encryption not applied by the provider.
New Zealand: Operators, on receipt of a lawful request to provide interception, must decrypt encrypted communications carried by their networks, if the operator has provided the means of encryption (s 10). Operators and providers must provide “reasonable” assistance to surveillance agencies with interception warrants or lawful interception authorities, including the decryption of communications when they have provided the encryption (s 24).
United Kingdom: Operators are obliged to do certain things, which can include the removal of “electronic protection applied by or on behalf of that operator to any communications or data”. It is unclear whether a provider receiving a TCN would be able to provide end-to-end encryption for its customers.
Australia: Providers may be issued with Technical Assistance Requests (TARs), Technical Assistance Notices (TANs) and/or Technical Capability Notices (TCNs). TARs can be issued by Australian security agencies that may “ask the provider to do acts or things on a voluntary basis that are directed towards ensuring that the provider is capable of giving certain types of help.” TARs escalate to TANs, which compel assistance and impose penalties for non-compliance. The Australian Attorney-General can also issue TCNs, which “may require the provider to do acts or things directed towards ensuring that the provider is capable of giving certain types of help” or to actually do such acts and things.

Human rights protections
United States: US Constitution, notably the Fourth and Fifth Amendments; also the First Amendment in terms of cryptographic code as a possible form of protected free speech.
Canada: Canadian Charter of Rights and Freedoms: section 2 (freedom of expression), section 7 (security of the person), section 8 (right against unreasonable search and seizure), and the right to silence and protection from self-incrimination contained in sections 7, 11 and 14.
New Zealand: Human Rights Act 1993.
United Kingdom: Human Rights Act 1998; European Convention on Human Rights.
Australia: No comprehensive protection at the federal level; no right to privacy in the Australian Constitution.

Approval mechanisms for encryption powers’ exercise
United States: N/A.
Canada: Minister of Public Safety (executive branch).
New Zealand: Powers subject to interception warrants or other lawful interception authority; “indirect” judicial supervision (Keith, 2020).
United Kingdom: Approval by Judicial Commissioner.
Australia: Approval by administrative or executive officer (TCNs are approved by the Attorney-General). If a warrant or authorisation was previously required for the activity, it is still required after these reforms.

Extraterritorial application
United States: Does not apply extraterritorially.
Canada: Does not apply extraterritorially.
New Zealand: Section 10 does not apply extraterritorially unless a section 38 decision is made; section 24 applies to both NZ providers and foreign providers providing a service to any end-user in NZ.
United Kingdom: Applies to both UK-based and foreign-based communications operators.
Australia: Applies to both Australian and foreign-based providers. Providers can receive notices to assist with the enforcement of foreign criminal laws.

Relevant court cases
United States: Apple-FBI.
Canada: R v Mirarchi.
New Zealand: None known.
United Kingdom: None known.
Australia: Not applicable.

Discussion

The recent legislative developments in Australia position it as a leading actor in ongoing calls for a broader set of measures to weaken or undermine encryption. The AA Act introduces wide powers for Australian law enforcement and security agencies to request, or mandate assistance in, communications interception from a wide category of communications providers, internet and equipment companies, both in Australia and overseas, and permits foreign agencies to make requests to Australian agencies to use these powers in the enforcement of foreign laws. Compared to the other FVEY jurisdictions’ laws in Table 1, the AA Act’s provisions cover the broadest category of providers and companies, permit the broadest category of assistance acts, and have the weakest oversight mechanisms and no protections for human rights.

Australia’s AA Act also gives these provisions the broadest and most significant extraterritorial reach of the FVEY equivalents. While New Zealand and the UK also extend their assistance obligations to foreign entities, Australia’s AA Act goes further in providing assistance to foreign law enforcement agencies. This is a highly worrying development, since the AA Act facilitates the paradoxical enforcement of foreign criminal laws, and circumvention of foreign human rights laws, on behalf of foreign law enforcement agencies, through inter alia the coercion of transnational technology companies into designing new ways of undermining encryption at a global scale via Australian law in the form of TCNs.

The idea of jurisdiction shopping by FVEY law enforcement agencies may be applicable here: Australia has enacted powers with extraterritorial consequences that could operate to serve the wider FVEY alliance, especially given the lack of judicial oversight of TCNs and Australia’s weak human rights protections. Jurisdiction shopping concerns the strategic pursuit of legislative, policy and operational objectives in specific venues to achieve outcomes that may not be possible in other venues due to the local context. 19

The AA Act provisions expand legally permissible extraterritorial measures to obtain encrypted communications, and in theory, this enables FVEY partners to ‘jurisdiction shop’ to exploit the lack of human rights protections in Australia. This is not the first time Australia has been an attractive jurisdiction shopping destination. One previous example relates to Operation Artemis run by the Queensland Police where a website used for the dissemination of child exploitation material was relocated to Australian servers so that police could engage in a controlled operation and commit criminal offences (including the dissemination of child exploitation material) without criminal penalty (Høydal, Stangvik, & Hansen, 2017; McInnes, 2017). 20

Australia emerges as a strategic forum for FVEY partners to implement new laws and powers with extraterritorial reach because, unlike other FVEY members, Australia has no meaningful human rights protections that would prevent gross invasions arising from measures that undermine encryption, and only weak oversight mechanisms (McGarrity & Hardy, 2020). These considerations also relate to the pre-existing use of ‘regulatory arbitrage’ by FVEY members, in which information is legally accessed and intercepted in one of the FVEY countries with weaker human rights protections and then transferred to, and used in, other FVEY countries with more restrictive legal frameworks (Citron & Pasquale, 2010). This situation may allow authorisation for extraterritorial data gathering to, in effect, be funnelled through the ‘weak link’ of Australia. Thus, the AA Act presents an opportunity for FVEY partners to engage in further regulatory arbitrage by jurisdiction shopping their requests to access encrypted communications, and to mandate that designated communications providers (i.e., transnational technology companies) design and develop new ways to access encrypted communications, via Australia.

However, it is difficult to ascertain the extent to which the FVEY partners are indeed exploiting the Australian ‘weak link’, for two reasons. First, the FVEY alliance operates in a highly secretive manner. Second, the AA Act severely restricts transparency through the introduction of secrecy provisions and enhanced penalties for unauthorised disclosure, and the absence of judicial authorisation for the exercise of the powers (Table 1). There is very limited ex-post aggregated public reporting on the exercise of the powers. One of the few mechanisms is the Australian Department of Home Affairs’ annual report on the operation of the Telecommunications (Interception and Access) Act 1979 (Cth). The 2018-2019 report stated that seven TARs were issued: five to the Australian Federal Police and two to the New South Wales Police. Cybercrime and telecommunications offences were the two most common categories of crime for which the TARs were issued, with the notable absence of any terrorism offences, the main rationale supporting the introduction of the powers. In the Australian Senate Estimates process in late 2019, it was revealed that the TAR powers had been used on a total of 25 occasions up to November 2019 (Sadler, 2020a). 21 The fact that only TARs have been issued may indicate that designated communications providers are complying with requests in the first instance, so there is no need to escalate to enforceable notices.

One possible, and as yet unresolved, countervailing development to the AA Act within the FVEY countries is the US Clarifying Lawful Overseas Use of Data (CLOUD) Act, which aims to facilitate US and foreign law enforcement access to data held by US-based communications providers in criminal investigations, bypassing MLAT procedures (Abraha, 2019; see also Gstrein, 2020, this issue; Vazquez Maymir, 2020, this issue). Bilateral negotiations between the US and Australia regarding mechanisms for accessing (via US technology companies) and sharing e-evidence under the CLOUD Act are underway, and there have been early questions and debates (Bogle, 2019; Hendry, 2020) as to whether Australia will meet the CLOUD Act’s requirements. Specifically, the CLOUD Act allows “foreign partners that have robust protections for privacy and civil liberties to enter into executive agreements with the United States to use their own legal authorities to access electronic evidence” (Department of Justice, n.d.). CLOUD agreements between the US and foreign governments should not include any obligations forcing communications providers to maintain data decryption capabilities, nor should they include any obligation preventing providers from decrypting data. 22 It is uncertain whether Australia would meet the CLOUD Act’s requirements given its aforementioned weak human rights framework and the absence of judicial oversight in the authorisation of the anti-encryption powers.

These concerns appear to have motivated the current Australian opposition party, Labor, to introduce a private member’s bill into the Australian Parliament in late 2019 to ‘fix’ some aspects of the AA Act, despite Labor’s bipartisan support for the law’s passage at the end of 2018. Notable fixes sought include enhanced safeguards, such as judicial oversight, and clarification that TARs, TANs, and TCNs cannot be used to force providers to build systemic weaknesses and vulnerabilities into their systems, including by implementing or building a new decryption capability. At the time of writing, the Australian Parliament is considering the bill, although it is unlikely to pass given the government has indicated it will vote down Labor’s proposed amendments (Sadler, 2020b).

Conclusion

Laws to restrict encryption occur in the context of regulatory arbitrage (Citron & Pasquale, 2010). This paper has analysed new powers that allow Australian law enforcement and security agencies to request or mandate assistance in accessing encrypted communications, and that permit foreign agencies to make requests to Australian agencies to use these powers in the enforcement of foreign laws, taking advantage of a situation where there is less oversight and fewer human rights or constitutional protections. The AA Act presents new opportunities for FVEY partners to leverage access to (encrypted) communications via Australia’s ‘legal backdoors’, which may undermine protections that would otherwise exist within local legal frameworks. This represents a troubling international development for privacy and information security.

Acknowledgements

The authors would like to acknowledge Dr Kayleigh Murphy for her excellent research assistance, and the Computer Security and Industrial Cryptography (COSIC) Research Group at KU Leuven, the Law Science Technology Society (LSTS) Research Group at Vrije Universiteit Brussel, and the Department of Journalism at Maria Curie-Skłodowska University (Lublin, Poland) for the opportunity to present and receive feedback on this research. Finally, we thank Tamir Israel, Martin Kretschmer, Balázs Bodó, and Frédéric Dubois for their comprehensive peer-review comments and editorial review.

References

Abelson, H., Anderson, R., Bellovin, S. M., Benaloh, J., Blaze, M., Diffie, W., Gilmore, J., Green, M., Landau, S., Neumann, P. G., Rivest, R. L., Schiller, J. I., Schneier, B., Specter, M. A., & Weitzner, D. J. (2015). Keys under doormats: Mandating insecurity by requiring government access to all data and communications. Journal of Cybersecurity, 1(1), 69–79. https://doi.org/10.1093/cybsec/tyv009

Abraha, H. H. (2019). How Compatible is the US ‘CLOUD Act’ with Cloud Computing? A Brief Analysis. International Data Privacy Law, 9(3), 207–215. https://doi.org/10.1093/idpl/ipz009

Australian Constitution. https://www.aph.gov.au/about_parliament/senate/powers_practice_n_procedures/constitution

Australian Government Department of Home Affairs. (2018). Five country ministerial 2018. Australian Government Department of Home Affairs. https://www.homeaffairs.gov.au/about-us/our-portfolios/national-security/security-coordination/five-country-ministerial-2018

Australian Government, Department of Home Affairs. (2019). Telecommunications (Interception and Access) Act 1979: Annual Report 2018-19 [Report]. Australian Government, Department of Home Affairs. https://parlinfo.aph.gov.au/parlInfo/download/publications/tabledpapers/c424e8ec-ce9a-4dc1-a53e-4047e8dc4797/upload_pdf/TIA%20Act%20Annual%20Report%202018-19%20%7BTabled%7D.pdf;fileType=application%2Fpdf#search=%22publications/tabledpapers/c424e8ec-ce9a-4dc1-a53e-4047e8dc4797%22

Beagle, T. (2017, July 2). Why we support effective encryption [Blog post]. NZ Council for Civil Liberties. https://nzccl.org.nz/content/why-we-support-effective-encryption

Bell, S. (2013, November 25). Court rebukes CSIS for secretly asking international allies to spy on Canadian suspects travelling abroad. The National Post. https://nationalpost.com/news/canada/court-rebukes-csis-for-secretly-asking-international-allies-to-spy-on-canadian-terror-suspects

Bogle, A. (2019, October 31). Police want Faster Data from the US, but Australia’s Encryption Laws Could Scuttle the Deal. ABC News. https://www.abc.net.au/news/science/2019-10-31/australias-encryption-laws-could-scuttle-cloud-act-us-data-swap/11652618

Brewster, T. (2018, February 26). The Feds Can Now (Probably) Unlock Every iPhone Model in Existence. Forbes. https://www.forbes.com/sites/thomasbrewster/2018/02/26/government-can-access-any-apple-iphone-cellebrite/#76a735e8667a

Butler, P. (2013). The Case for a Right to Privacy in the New Zealand Bill of Rights Act. New Zealand Journal of Public & International Law, 11(1), 213–255.

Citron, D. K., & Pasquale, F. (2010). Network Accountability for the Domestic Intelligence Apparatus. Hastings Law Journal, 62, 1441–1494. https://digitalcommons.law.umaryland.edu/fac_pubs/991/

Comey, J. B. (2014). Going Dark: Are Technology, Privacy, and Public Safety on a Collision Course? Federal Bureau of Investigation. https://www.fbi.gov/news/speeches/going-dark-are-technology-privacy-and-public-safety-on-a-collision-course

Constitution Act, (1982). https://laws-lois.justice.gc.ca/eng/const/page-15.html

Cook Barr, A. (2016). Guardians of Your Galaxy S7: Encryption Backdoors and the First Amendment. Minnesota Law Review, 101(1), 301–339. https://minnesotalawreview.org/article/note-guardians-of-your-galaxy-s7-encryption-backdoors-and-the-first-amendment/

Cooper, S. (2018). An Analysis of New Zealand Intelligence and Security Agency Powers to Intercept Private Communications: Necessary and Proportionate? Te Mata Koi: Auckland University Law Review, 24, 92–120.

Daly, A. (2017). Covering up: American and European legal approaches to public facial anonymity after SAS v. France. In T. Timan, B. C. Newell, & B.-J. Koops (Eds.), Privacy in Public Space: Conceptual and Regulatory Challenges (pp. 164–183). Edward Elgar.

Daly, A., & Thomas, J. (2017). Australian internet policy. Internet Policy Review, 6(1). https://doi.org/10.14763/2017.1.457

Department of Justice. (n.d.). Frequently Asked Questions. https://www.justice.gov/dag/page/file/1153466/download

Dizon, M., Ko, R., Rumbles, W., Gonzalez, P., McHugh, P., & Meehan, A. (2019). A Matter of Security, Privacy and Trust: A study of the principles and values of encryption in New Zealand [Report]. New Zealand Law Foundation and University of Waikato.

Ford, D., & Mann, M. (2019). International Implications of the Telecommunications and Other Legislation Amendment (Assistance and Access) Act 2018. Australian Privacy Foundation. https://privacy.org.au/wp-content/uploads/2019/06/APF_AAAct_FINAL_040619.pdf

Froomkin, D. (2015). U.N. Report Asserts Encryption as a Human Right in the Digital Age. The Intercept. https://theintercept.com/2015/05/28/u-n-report-asserts-encryption-human-right-digital-age/

Gill, L. (2018). Law, Metaphor and the Encrypted Machine. Osgoode Hall Law Journal, 55(2), 440–477. https://doi.org/10.2139/ssrn.2933269

Gill, L., Israel, T., & Parsons, C. (2018). Shining a Light on the Encryption Debate: A Canadian Fieldguide [Report]. Citizen Lab; The Canadian Internet Policy & Public Interest Clinic. https://citizenlab.ca/2018/05/shining-light-on-encryption-debate-canadian-field-guide/

Global Partners Digital. (n.d.). World Map of Encryption Law and Policies. https://www.gp-digital.org/world-map-of-encryption/

Gonzalez, O. (2019). Cracks in the Armor: Legal Approaches to Encryption. Journal of Law, Technology & Policy, 2019(1), 1–46. http://illinoisjltp.com/journal/wp-content/uploads/2019/05/Gonzalez.pdf

Gstrein, O. (2020). Mapping power and jurisdiction on the internet through the lens of government-led surveillance. Internet Policy Review, 9(3). https://doi.org/10.14763/2020.3.1497

Hendry, J. (2020, January 14). Home Affairs Rejects Claims Anti-Encryption Laws Conflict with US CLOUD Act. IT News. https://www.itnews.com.au/news/home-affairs-rejects-claims-anti-encryption-laws-conflict-with-us-cloud-act-536339

Holyoke, T., Brown, H., & Henig, J. (2012). Shopping in the Political Arena: Strategic State and Local Venue Selection by Advocates. State and Local Government Review, 44(1), 9–20. https://doi.org/10.1177/0160323X11428620

Høydal, H. F., Stangvik, E. O., & Hansen, N. R. (2017, October 7). Breaking the Dark Net: Why the Police Share Abuse Pics to Save Children. VG. https://www.vg.no/spesial/2017/undercover-darkweb/?lang=en

Council of Europe. (2010). European Convention on Human Rights. https://www.echr.coe.int/Documents/Convention_ENG.pdf

Investigatory Powers Act 2016 (UK), Pub. L. No. 2016 c. 25 (2016). http://www.legislation.gov.uk/ukpga/2016/25/contents/enacted

Investigatory Powers (Technical Capability) Regulations 2018 (UK). http://www.legislation.gov.uk/ukdsi/2018/9780111163610/contents

Keenan, B. (2019). State access to encrypted data in the United Kingdom: The ‘transparent’ approach. Common Law World Review. https://doi.org/10.1177/1473779519892641

Keith, B. (2020). Official access to encrypted communications in New Zealand: Not more powers but more principle? Common Law World Review. https://doi.org/10.1177/1473779520908293

Telecommunications Amendment (Repairing Assistance and Access) Bill 2019, (2019) (testimony of Kristina Keneally). https://parlinfo.aph.gov.au/parlInfo/download/legislation/bills/s1247_first-senate/toc_pdf/19S1920.pdf;fileType=application%2Fpdf

Koops, B.-J. (1999). The Crypto Controversy: A Key Conflict in the Information Society. Kluwer Law International.

Koops, B.-J., & Kosta, E. (2018). Looking for some light through the lens of “cryptowar” history: Policy options for law enforcement authorities against “going dark”. Computer Law & Security Review, 34(4), 890–900. https://doi.org/10.1016/j.clsr.2018.06.003

Ley, A. (2016). Vested Interests, Venue Shopping and Policy Stability: The Long Road to Improving Air Quality in Oregon’s Willamette Valley. Review of Policy Research, 33(5), 506–525. https://doi.org/10.1111/ropr.12190

Lyon, D. (2014). Surveillance, Snowden, and Big Data: Capacities, Consequences, Critique. Big Data & Society, 1(2). https://doi.org/10.1177/2053951714541861

Lyon, D. (2015). Surveillance After Snowden. Polity Press.

Mann, M., & Daly, A. (2019). (Big) data and the north-in-south: Australia’s informational imperialism and digital colonialism. Television and New Media, 20(4), 379–395. https://doi.org/10.1177/1527476418806091

Mann, M., Daly, A., Wilson, M., & Suzor, N. (2018). The Limits of (Digital) Constitutionalism: Exploring the Privacy-Security (Im)Balance in Australia. International Communication Gazette, 80(4), 369–384. https://doi.org/10.1177/1748048518757141

McGarrity, N., & Hardy, K. (2020). Digital surveillance and access to encrypted communications in Australia. Common Law World Review. https://doi.org/10.1177/1473779520902478

McInnes, W. (2017, October 8). Queensland Police Take Over World’s Largest Child Porn Forum in Sting Operation. Brisbane Times. https://www.brisbanetimes.com.au/national/queensland/queensland-police-behind-worlds-largest-child-porn-forum-20171007-gywcps.html

Molnar, A. (2017). Technology, Law, and the Formation of (il)Liberal Democracy? Surveillance & Society, 15(3/4), 381–388. https://doi.org/10.24908/ss.v15i3/4.6645

Molnar, A., Parsons, C., & Zouave, E. (2017). Computer network operations and ‘rule-with-law’ in Australia. Internet Policy Review, 6(1). https://doi.org/10.14763/2017.1.453

Murphy, H., & Kellow, A. (2013). Forum Shopping in Global Governance: Understanding States, Business and NGOs in Multiple Arenas. Global Policy, 4(2), 139–149. https://doi.org/10.1111/j.1758-5899.2012.00195.x

Mutual Assistance in Criminal Matters Act 1987 Compilation No. 35, (2016). https://www.legislation.gov.au/Details/C2016C00952

Nagel, P. (2006). Policy Games and Venue-Shopping: Working the Stakeholder Interface to Broker Policy Change in Rehabilitation Services. Australian Journal of Public Administration, 65(4), 3–16. https://doi.org/10.1111/j.1467-8500.2006.00500a.x

New Zealand Bill of Rights Act 1990. http://www.legislation.govt.nz/act/public/1990/0109/latest/DLM224792.html

Ni Loideain, N. (2019). A Bridge Too Far? The Investigatory Powers Act 2016 and Human Rights Law. In L. Edwards (Ed.), Law, Policy and the Internet (2nd ed., pp. 165–192). Hart.

Parsons, C. A., & Molnar, A. (2017). Horizontal Accountability and Signals Intelligence: Lesson Drawing from Annual Electronic Surveillance Reports. SSRN. http://dx.doi.org/10.2139/ssrn.3047272

Pearce, R. (2017, July 27). Australia’s War on Maths Blessed with Gong at Pwnie Awards. ComputerWorld. https://www.computerworld.com.au/article/625351/australia-war-maths-blessed-gong-pwnie-awards/

Pfefferkorn, R. (2020, January 30). The EARN IT Act: How to Ban End-to-End Encryption Without Actually Banning It [Blog post]. The Center for Internet Society. https://cyberlaw.stanford.edu/blog/2020/01/earn-it-act-how-ban-end-end-encryption-without-actually-banning-it

Pralle, S. (2003). Venue Shopping, Political Strategy, and Policy Change: The Internationalization of Canadian Forest Advocacy. Journal of Public Policy, 23(3), 233–260. https://doi.org/10.1017/S0143814X03003118

Regulation of Investigatory Powers Act 2000, Pub. L. No. 2000 c. 23 (2000). http://www.legislation.gov.uk/ukpga/2000/23/contents

Roach, K. (2011). The 9/11 Effect: Comparative Counter-Terrorism. Cambridge University Press.

Sadler, D. (2020a, February 3). Encryption laws not used to fight terrorism [Blog post]. InnovationAus. https://www.innovationaus.com/encryption-laws-not-used-to-fight-terrorism/?fbclid=IwAR2fdjBwK827idNXHY4X5-5Xk3d8LZJBjSVJrLMutxBn6XeWXTvzyNhsVtg

Sadler, D. (2020b, February 14). No encryption fix until at least October [Blog post]. InnovationAus. https://www.innovationaus.com/no-encryption-fix-until-at-least-october/?fbclid=IwAR0HdUHyy2ArihJC6lEze0H_rxvJnB4ryNknGMAlsWf4PeibIpJXJYD--dI

Search and Surveillance Act, (2012). http://www.legislation.govt.nz/act/public/2012/0024/latest/DLM2136536.html

Smith, G. (2017, May 8). Back doors, black boxes and #IPAct technical capability regulations [Blog post]. Graham Smith’s Blog on Law, IT, the Internet and Online Media. http://www.cyberleagle.com/2017/05/back-doors-black-boxes-and-ipact.html

Smith, G. (2017, May 29). Squaring the circle of end to end encryption [Blog post]. Graham Smith’s Blog on Law, IT, the Internet and Online Media. https://www.cyberleagle.com/2017/05/squaring-circle-of-end-to-end-encryption.html

Solicitor General. (2008). Solicitor General’s Enforcement Standards for Lawful Interception of Telecommunications. https://perma.cc/NQB9-ZHPY

Suzor, N., Pappalardo, K., & McIntosh, N. (2017). The Passage of Australia’s Data Retention Regime: National Security, Human Rights, and Media Scrutiny. Internet Policy Review, 6(1). https://doi.org/10.14763/2017.1.454

Telecommunications Act, (1997). https://www.legislation.gov.au/Details/C2017C00179

Telecommunications and Other Legislation Amendment (Assistance and Access) Act 2018, Pub. L. No. 148 (2018). https://www.legislation.gov.au/Details/C2018A00148

Telecommunications (Interception Capability and Security) Act 2013 (NZ), (2013). http://www.legislation.govt.nz/act/public/2013/0091/22.0/DLM5177923.html

United Kingdom Home Office. (2017). Communications Data Draft Code of Practice. https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/663675/November_2017_IPA_Consultation_-_Draft_Communications_Data_Code_of_Pract....pdf

Telecommunications: Assistance Capability Requirements, 47 U.S.C. § 1002 (1994). https://www.law.cornell.edu/rio/citation/108_Stat._4280

Vazquez Maymir, S. (2020). Anchoring the Need to Revise Cross-Border Access to E-Evidence. Internet Policy Review, 9(3). https://doi.org/10.14763/2020.3.1495

West, L., & Forcese, C. (2020). Twisted into knots: Canada’s challenges in lawful access to encrypted communications. Common Law World Review. https://doi.org/10.1177/1473779519891597

Williams, G., & Reynolds, D. (2017). A charter of rights for Australia (4th ed.). NewSouth Press.

Wilson, M., & Mann, M. (2017, September 7). Police Want to Read Encrypted Messages, but They Already Have Significant Power to Access our Data. The Conversation. https://theconversation.com/police-want-to-read-encrypted-messages-but-they-already-have-significant-power-to-access-our-data-82891

Zuan, N., Roos, C., & Gulzau, F. (2016). Circumventing Deadlock Through Venue-shopping: Why there is more than just talk in US immigration politics in times of economic crisis. Journal of Ethnic and Migration Studies, 42(10), 1590–1609. https://doi.org/10.1080/1369183X.2016.1162356

Footnotes

1. The FVEY partnership is a comprehensive intelligence alliance formed after the Second World War, formalised under the UKUSA Agreement (see e.g., Mann & Daly, 2019).

2. Cth stands for Commonwealth, which means “federal” legislation, as distinct from state-level legislation.

3. At the state and territory level, Victoria, Queensland and the Australian Capital Territory have human rights laws; however, the surveillance powers examined in this article are subject to Commonwealth jurisdiction, rendering these state and territory based protections inapplicable. See: Charter of Human Rights and Responsibilities Act 2006 (Vic); Human Rights Act 2019 (QLD); Human Rights Act 2004 (ACT).

4. However, the draft EARN IT bill currently before the US Congress, if enacted, may impact negatively upon providers’ ability to offer end-to-end encrypted messaging. See Pfefferkorn (2020).

5. R v Mirarchi involved BlackBerry providing the Canadian police with a key which allowed them to decrypt one million BlackBerry messages (Gill, Israel, & Parsons, 2018, pp. 57-58). The legal basis and extent of BlackBerry’s assistance to the Canadian police was unclear from the ‘heavily redacted’ court records (West & Forcese, 2020).

6. For a full picture of New Zealand legal provisions which may affect encryption see Dizon et al. (2019).

7. For additional provisions in UK law which may be relevant to encryption see Keenan (2019).

8. The analysis presented here focuses on Schedule 1 of the AA Act. Schedule 2 of the AA Act introduces computer access warrants that allow law enforcement to covertly access and search devices, and to conceal the fact that devices have been accessed.

9. S 317G.

10. S 317L.

11. S 317T.

12. Namely ‘the Director‑General of Security, the Director‑General of the Australian Secret Intelligence Service, the Director‑General of the Australian Signals Directorate or the chief officer of an interception agency’.

13. Namely ‘ASIO, the Australian Secret Intelligence Service, the Australian Signals Directorate or an interception agency’.

14. For example, “removing one or more forms of electronic protection that are or were applied by, or on behalf of, the provider”, “installing, maintaining, testing or using software or equipment” and “facilitating or assisting access to… a facility, customer equipment, electronic services and software” are included in the list of ‘acts or things’ that a provider may be asked to do via these provisions. The complete list of ‘acts or things’ is set out in section 317E.

15. According to AA Act s 317B, a systemic vulnerability means “a vulnerability that affects a whole class of technology, but does not include a vulnerability that is selectively introduced to one or more target technologies that are connected with a particular person” and a systemic weakness means “a weakness that affects a whole class of technology, but does not include a weakness that is selectively introduced to one or more target technologies that are connected with a particular person.”

16. A category which, according to paragraph 51(xx) of the Australian Constitution, comprises “foreign corporations, and trading or financial corporations formed within the limits of the Commonwealth”.

17. S 317A; Table 1.

18. AA Act s 15CC(1); Surveillance Devices Act 2004 (Cth) ss 27A(4) and (4)(a).

19. Analyses of policy venue shopping have been conducted in relation to a range of policy areas, inter alia, immigration, environmental, labour, intellectual property, and rehabilitation policies (see e.g., Ley, 2016; Holyoke, Brown, & Henig, 2012; Pralle, 2003; Zuan, Roos, & Gulzau, 2016; Nagel, 2006; Murphy & Kellow, 2013). According to Pralle (2003, p. 233) a central “component of any political strategy is finding a decision setting that offers the best prospects for reaching one’s policy goals, an activity referred to as venue shopping”. Further, Murphy and Kellow (2013, p. 139) argue that policy venue shopping may be a political strategy deployed at global levels where “entrepreneurial actors take advantage of ‘strategic inconsistencies’ in the characteristics of international policy arenas”.

20. A further example demonstrating regulatory arbitrage between FVEY members, from the perspective of Canada, came to light in 2013, when Canada’s domestic security intelligence service (CSIS) was found by the Federal Court to have ‘breached duty of candour’ by not disclosing its leveraging of FVEY networks when it applied for warrants during an international terrorism investigation involving two Canadian suspects (Bell, 2013).

21. It should be noted that, due to the overlapping time frames and aggregated nature of reporting, the 25 occasions on which the powers were used may also include some of the seven occasions reported in the most recent Home Affairs annual report.

22. CLOUD Act s 105 (b) (3). Note: the US Department of Justice claims the CLOUD Act is “encryption neutral” in that “neither does it prevent service providers from assisting in such decryption, or prevent countries from addressing decryption requirements in their own domestic laws” (Department of Justice, n.d.).

Going global: Comparing Chinese mobile applications’ data and user privacy governance at home and abroad


This paper is part of Geopolitics, jurisdiction and surveillance, a special issue of Internet Policy Review guest-edited by Monique Mann and Angela Daly.

In February 2019, the short video sharing and social mobile application TikTok was fined a record-setting US$ 5.7 million by the US Federal Trade Commission for violating the Children’s Online Privacy Protection Act by failing to obtain parental consent and provide parental notification. TikTok agreed to pay the fine (Federal Trade Commission, 2019). This settlement signals several significant developments. Owned by the Chinese internet company ByteDance, TikTok is popular worldwide, predominantly among young mobile phone users, whereas most commercially successful Chinese internet companies are still based in the Chinese market. Such global reach and commercial success make Chinese mobile applications pertinent sites of private governance on a global scale (see Cartwright, 2020, this issue). China-based mobile applications therefore need to comply with domestic statutory mechanisms as well as with the privacy protection regimes and standards of the jurisdictions into which they expand, such as the extraterritorial application of Article 3 of the EU’s General Data Protection Regulation (GDPR).

To examine how globalising Chinese mobile apps respond to varying data and privacy governance standards when operating overseas, we compare the Chinese and overseas versions of four sets of China-based mobile applications: (1) Baidu mobile browser, a mobile browser with a built-in search engine owned and developed by the Chinese internet company Baidu; (2) Toutiao and TopBuzz, mobile news aggregators developed and owned by ByteDance; (3) Douyin and TikTok, mobile short video-sharing platforms developed and owned by ByteDance, with the former only available in Chinese app stores and the latter exclusively in international app stores; and (4) WeChat and Weixin, a social application developed and owned by the Chinese internet company Tencent. Together, these four mobile applications represent the global reach of flagship China-based mobile apps and a wide range of functions: search and information, news content, short videos, and social networking. They also represent a mix of more established (Baidu, Tencent) and up-and-coming (ByteDance) Chinese internet companies. Lastly, the sample demonstrates varying degrees of commercial success: all offer services globally, with the Baidu browser the least commercially successful and TikTok the most successful.

An earlier study shows that Chinese web services had a poor track record in privacy protection: back in 2006, before China had a national regime of online privacy protection in place, few of the 82 commercial websites surveyed in China posted a privacy disclosure, and even fewer followed the four fair information principles of notice, choice, access and security (Kong, 2007). These four principles are intended to enhance self-regulation of the internet industry by providing consumers with notice, control, security measures, and the ability to view and contest the accuracy and completeness of data collected about them (Federal Trade Commission, 1998). In 2017, only 69.6 percent of the 500 most popular Chinese websites had disclosed their privacy policies (Feng, 2019). These findings suggest a significant gap between data protection requirements on paper and protection in practice (Feng, 2019). In a recent study, Fu (2019) finds some improvement in the poor privacy protection track record of the three biggest internet companies in China (Baidu, Alibaba, and Tencent). Her study shows that BAT’s privacy policies are generally compliant with the Chinese personal information protection provisions but pay insufficient attention to transborder data flows and to changes of ownership (such as mergers and acquisitions) (Fu, 2019). Moreover, the privacy policies of BAT offer more notice than choice: users are either forced to accept the privacy policy or forego use of the web services (Fu, 2019, p. 207). Building on these findings, this paper asks: does the same app differ in data and privacy protection measures between its international and Chinese versions? How are these differences registered in the app’s user interface design and privacy policies?

In the following analysis, we first outline the evolving framework of data and privacy protection that governs the design and operation of China-based mobile apps. The next section provides a background overview of the key functions, ownership information, and business strategies of the examined apps. The walkthrough of app user interface design studies how a user experiences privacy and data protection features at various stages of app usage. Last, we compare the privacy policies and terms of service of the two versions of the same China-based apps to identify differences in data and privacy governance. We find that not only do different apps vary in data and privacy protection, but the international and Chinese versions of the same app also show discrepancies.

Governance ‘of’ globalising Chinese apps

Law and territory have always been at the centre of debates on the regulation and development of the internet (Goldsmith & Wu, 2006; Kalathil & Boas, 2003; Steinberg & Li, 2016). Among others, China has been a strong proponent of internet sovereignty in global debates about internet governance and digital norms. The 2010 white paper titled The Internet in China enshrines the concept of internet sovereignty in the governing principles of the Chinese internet. It states: “within Chinese territory the internet is under the jurisdiction of Chinese sovereignty” (State Council Information Office, 2010). The principle of internet sovereignty was later reiterated by the Cyberspace Administration of China (CAC), the top internet-governing body since 2013, to recognise that “each government has the right to manage its internet and has jurisdiction over information and communication infrastructure, resources and information and communication activities within their own borders” (CAC, 2016).

Under the banner of internet sovereignty, the protection of data and personal information in China takes a state-centric approach, which comes in the form of government regulations and government-led campaigns and initiatives. The appendix outlines key regulations, measures and drafting documents. Without an overarching framework for data protection, China’s approach is characterised by a “cumulative effect” (de Hert & Papakonstantinou, 2015), composed of a multitude of sector-specific legal instruments promulgated in a piecemeal fashion. While earlier privacy and data protection measures were dispersed across various government agencies, laws and regulations, the first national standard for personal data and privacy protection was put forth only in 2013. The promulgation of the Cybersecurity Law in 2016 was a major step forward in the nation’s privacy and data protection efforts, despite the policy priority of national security over individual protection. Article 37 of the Cybersecurity Law stipulates that personal information and important data collected and produced by critical information infrastructure providers during their operations within the territory of the People’s Republic of China shall be stored within China. Many foreign companies have complied, either as a preemptive goodwill gesture or as a legal requirement, in order to access, compete, and thrive in the Chinese market. For example, in 2018, Apple came under criticism for moving the iCloud data generated by users with a mainland Chinese account to data management firm Guizhou-Cloud Big Data, a data storage company of the local government of Guizhou province (BBC, 2016). LinkedIn, Airbnb (Reuters, 2016), and Evernote (Jao, 2018) stored mainland user data in China even prior to the promulgation of the Cybersecurity Law. The Chinese government has also asked transnational internet companies to form joint ventures with local companies to operate data storage and cloud computing businesses, such as Microsoft Azure’s cooperation with Century Internet and Amazon AWS’s partnership with Sinnet Technology (Liu, 2019).

The Chinese state intervenes in a wide range of online activities, imposing, among other things, data localisation requirements on domestic and foreign companies (McKune & Ahmed, 2018). The Chinese government justifies data localisation requirements on the grounds of national security and the protection of personal information, arguing that the transfer of personal and sensitive information overseas may undermine the security of data (Xu, 2015). Others point to the recurring themes of technological nationalism and independence underlying the Cyberspace Administration of China’s prioritisation of security over personal privacy and business secrets (Liu, 2019). As captured in President Xi’s statement that “without cybersecurity comes no national security”, data and privacy protection is commonly framed as an issue of internet security (Gierow, 2014).

There is a growing demand among internet users for the protection of personal information, and a growing number of government policies pertaining to the protection of personal information in China (Wang, 2011). Since 2016, the Chinese government has played an increasingly active role in enforcing a uniform set of rules and standardising the framework of privacy and data protection. As of July 2019, there were 16 national standards, 10 local standards and 29 industry standards in effect providing guidelines on personal information protection. However, there is no uniform law or national authority to coordinate data protection in China. The right to privacy or the protection of personal information (the two are usually interchangeable in the Chinese context) often comes as an auxiliary article alongside the protection of other rights. Whereas jurisdictions such as the EU have set up Data Protection Authorities (DPAs), independent public entities that supervise compliance with data protection regulations, in China the application and supervision of data protection have fallen to private companies and state actors respectively. User complaints about violations of data protection laws are mostly submitted to, and handled by, the private companies themselves rather than an independent agency. This marks the decisive difference underlying China’s and the EU’s approaches to personal data processing: in China, data protection is aimed exclusively at the individual as consumer, whereas in the EU the recipient of data protection is regarded as an individual or data subject, and the protection of personal data is both a fundamental right and conducive to the trade of personal data within the Union, as stipulated in Article 1 of the General Data Protection Regulation (de Hert & Papakonstantinou, 2015).

The minimal pre-existing legal framework and the self-regulatory regime of privacy and data protection by Chinese internet platform companies have given rise to rampant poor privacy and data protection practices, even among the country’s largest and leading internet platforms. Different Chinese government ministries have tackled the poor data and privacy practices of mobile apps and platforms in rounds of “campaign style” (运动式监管) regulation, a top-down approach often employed by the Chinese government to provide solutions to emerging policy challenges (Xu, Tang, & Guttman, 2019). For instance, Alibaba’s payment service Alipay, its credit scoring system Sesame Credit, Baidu, Toutiao, and Tencent have all shown poor track records of data and privacy protection and have come under government scrutiny (Reuters, 2018). Alipay was fined by the People’s Bank of China in 2018 for collecting users’ financial information outside the scope defined in the Cybersecurity Law (Xinhua, 2018). The Ministry of Industry and Information Technology publicly issued a warning to Baidu and ByteDance’s Toutiao for failing to properly notify users about which data they collect (Jing, 2018).

As China experienced exponential mobile internet growth, mobile apps stand out as a salient regulatory target. The Cyberspace Administration of China put forth the Administrative Rules on Information Services via Mobile Internet Applications in 2016, which distinguish the duties of mobile app stores from those of mobile apps. Mobile apps, in particular, bear six regulatory responsibilities: 1) enforce real name registration and verify the identity of users through their cell phone number or other personally identifiable information; 2) establish data protection mechanisms to obtain consent and disclose the collection and use of data; 3) establish comprehensive information gatekeeping mechanisms to warn, limit or suspend accounts that post content violating laws or regulations; 4) safeguard privacy during app installation processes; 5) protect intellectual property; and 6) obtain and store user logs for sixty days.

As more China-based digital platforms join the ranks of the world’s largest companies by user population, market capitalisation and revenues (Jia & Winseck, 2018), scholarly studies have begun to grapple with the political implications of their expansion. Existing studies draw attention to the distinctions between global and domestic versions of the same Chinese websites and mobile applications in information control and censorship activities, and show that Chinese mobile apps and websites are lax and inconsistent in content control when they go global (Ruan, Knockel, Ng, & Crete-Nishihata, 2016; Knockel, Ruan, Crete-Nishihata, & Deibert, 2018; Molloy & Smith, 2018). To ameliorate these dilemmas, some China-based platforms have designed different versions of their products that serve domestic and international users separately. Yet the data and privacy protection of Chinese mobile apps remains under-studied, especially as they embark on a global journey. This issue is ever more pressing as Chinese internet companies that have been successful at growing their international businesses, such as Tencent and ByteDance, struggle to provide a seamless experience for international users while complying with data and content regulations at home.

Methods

We employ a mixed-method approach to investigate how globalising Chinese mobile apps differ in data and privacy governance between their Chinese versions and the international versions accessed through Canadian app stores. While Baidu Search, TikTok, WeChat, and TopBuzz do not appear to have region-based features, the actual installation package may or may not differ depending on where a user is based and downloads the apps from. First, we conducted an overview of the tested mobile apps and their functions, looking at ownership, revenue, and user population. Each app’s function and business model has a direct bearing on its data collection and usage. Second, to study how mobile apps structure and shape end users’ experience with regard to data and privacy protection, we deployed the walkthrough method (Light, Burgess, & Duguay, 2018). We tested both the Android and iOS versions of the same app. In the case of the China-based apps (i.e., Douyin and Toutiao), we downloaded the Android version from the corresponding official website of each service and the iOS version from the Chinese regional Apple App Store. For the international-facing apps (i.e., TikTok and TopBuzz), we downloaded the Android versions from the Canadian Google Play Store and the iOS versions from the Canadian Apple App Store. Baidu and WeChat do not offer separate versions for international and Chinese users; instead, the distinction is made when users register their account. After downloading each app, we systematically stepped through two stages in the usage of the apps: app entry and registration, and discontinuation of use. We conducted the walkthrough on multiple Android and Apple mobile devices in August 2019.

In addition, we conducted a content analysis of the privacy policies and terms of service of each mobile app. These documents demonstrate governance by mobile apps as well as governance of mobile apps within particular jurisdictions. They are also key legal documents that set the conditions of users’ participation online and lay claim to the institutional power of the state (Stein, 2013). We examined a total of 15 privacy policies and terms of service in Chinese and English, retrieved in July 2019: Baidu (2), Weixin (2), WeChat (2), TopBuzz (2), TikTok (3), Douyin (2), and Toutiao (2). We then analysed the documents along five dimensions: data collection, usage, disclosure, transfer, and retention. For data collection, we looked for items that detailed the types of information collected, the app’s definition of personally identifiable information, and the possibility of opting out of the data collection process; for data usage, we looked for terms and conditions that delineated third-party use; for disclosure, we looked at whether the examined app would notify its users in case of privacy policy updates, mergers and acquisitions, and data leakages; for data transfer and retention, we examined whether the app specified security measures such as encryption of user data, emergency measures in case of data leaks, terms and conditions of data transfer, and the specific location and duration of data retention.
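For readers who want a concrete sense of how such a coding scheme can be laid out, the sketch below tabulates the five dimensions against a policy text as a first-pass screen. It is an illustrative example only: the dimension names follow the description above, but the keyword indicators, function names and sample input are hypothetical and do not reproduce the authors’ coding instrument, which relied on close manual reading of each document.

```python
# Illustrative sketch only: tabulating the five coding dimensions
# (collection, usage, disclosure, transfer, retention) for a policy text.
# The indicator keywords below are hypothetical placeholders, not the
# authors' actual coding instrument.

DIMENSIONS = {
    "collection": ["information collected", "personally identifiable", "opt out"],
    "usage":      ["third party", "third parties"],
    "disclosure": ["notify", "merger", "acquisition", "data breach"],
    "transfer":   ["transfer", "encryption", "cross-border"],
    "retention":  ["retain", "retention", "stored for", "duration"],
}

def code_document(text: str) -> dict:
    """Flag which dimensions a policy text touches on, as a first pass
    before manual reading (keyword presence is no substitute for close reading)."""
    lowered = text.lower()
    return {dim: any(term in lowered for term in terms)
            for dim, terms in DIMENSIONS.items()}

if __name__ == "__main__":
    # Hypothetical input standing in for a retrieved privacy policy.
    sample = "We may transfer and retain your data on third party servers."
    print(code_document(sample))
```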

Research limitations

Due to network restrictions, our walkthrough is limited to the Canadian-facing versions of these China-based apps. For each mobile app we studied, its parent company offers only one version of an international-facing app and one version of a China-facing app on its official website. Yet, even though there is only one international-facing app for each of the products we analysed, it remains to be tested whether the app interface, including the app’s notification settings, differs when downloaded and/or launched in different jurisdictions. Moreover, our research is based on a close reading of the policy documents put together by mobile app companies. It does not indicate whether these companies actually comply with their policy documents in the operation of their services, nor does it address the pitfalls of the notice and consent regime (Martin, 2013). Existing research has already shown that, under the Android system, there are many instances of potential inconsistency between what an app’s policy states and what the code of the app appears to do (Zimmeck et al., 2016).

Overview of apps

Baidu Search

Baidu App is the flagship application developed by Baidu, one of China’s leading internet and platform companies. The Baidu App provides a search function but also feeds users highly personalised content based on data and metadata generated by users. Often regarded as the Chinese counterpart of Google, Baidu centres its main business on online search, online advertising and artificial intelligence. In 2018, the daily active users of the Baidu app reached 161 million, a 24% jump from 2017. Although Baidu has embarked on many foreign ventures and expansion projects, according to its annual report the domestic market accounted for 98% of Baidu’s total revenue in 2016, 2017, and 2018. Based on this revenue composition, Baidu’s business model is online advertising. The major shareholders of Baidu are its CEO Robin Yanhong Li (31.7%) and Baillie Gifford (5.2%), an investment management firm headquartered in Edinburgh, Scotland.

TikTok vs Douyin, TopBuzz vs Toutiao

TikTok, Douyin, TopBuzz and Toutiao are among the flagship mobile apps in ByteDance’s portfolio. ByteDance represents a new class of up-and-coming Chinese internet companies competing for global markets through diversification and the merger and acquisition of foreign apps. ByteDance acquired the US video app Flipagram in 2017 and the France-based News Republic in 2017, and invested in the India-based news aggregator Dailyhunt. TikTok, first created in 2016, was rebranded following ByteDance’s US$ 1 billion acquisition of Musical.ly in 2018. The Chinese version of TikTok, Douyin, was released by ByteDance in 2016 and is the leading short-video platform in the country. The Douyin app has several features particular to the Chinese market and regulation. For example, the #PositiveEnergy hashtag was integrated into the app in an effort to align with the state’s political agenda of promoting Chinese patriotism and nationalism (Chen, Kaye, & Zeng, 2020). Douyin also differs from TikTok in its terms of service, which state that content undermining the regime, overthrowing the socialist system, inciting secessionism, or subverting the unification of the country is forbidden on the platform (Chen, Kaye, & Zeng, 2020; Kaye, Chen, & Zeng, 2020). No such rule exists on TikTok. ByteDance’s Chinese news and information app Toutiao was launched in 2012, followed by its English version TopBuzz in 2015 for the international market.

Dubbed the “world’s most valuable startup” (Byford, 2018), ByteDance has secured investment from Softbank and Sequoia Capital. ByteDance has made successful forays into North American, European and Southeast Asian markets, reaching 1 billion monthly active users globally in 2019 (Yang, 2019). It is one of the most successful and truly global China-based mobile app companies. The company focuses on using artificial intelligence (AI) and machine learning algorithms to source and push content to its users. To accelerate its global reach, ByteDance has recruited top-level management from Microsoft and Facebook for AI and global strategy development.

Both apps and their overseas versions have received considerable legal and regulatory scrutiny. In 2017, Toutiao was accused of spreading pornographic and vulgar information by the Beijing Cyberspace and Informatisation Office. In the 2018 Sword Net Action, China’s National Copyright Administration summoned Douyin to better enforce copyright law and to put in place a complaint mechanism for reporting illegal content (Yang, 2018). Reaching millions of young users, TikTok was temporarily banned by an Indian court for “degrading culture and encourag[ing] pornography” and by Indonesia’s Ministry of Communication and Information Technology for spreading pornography, inappropriate content and blasphemy. TikTok attempted to resolve the bans by building data centres in India and hiring more content moderators (Sharma & Niharika, 2019).

WeChat/Weixin

WeChat, or Weixin, is China’s most popular mobile chat app and the fourth largest in the world. It is a paradigmatic example of the infrastructurisation of platforms, whereby the app bundles and centralises many different functions, such as digital payment, group buying and taxi hailing, into one super-app (Plantin & de Seta, 2019). Owned by Tencent, one of China’s internet behemoths, WeChat has a user base of 1 billion, though Tencent has not updated the number of its international users since 2015 (Ji, 2015). WeChat’s success was built upon Tencent’s pre-existing social networking advantages.

Unlike ByteDance, which separates its domestic and international users by developing two different versions of its major products (i.e., the internationally-facing TikTok can only be downloaded in international app stores, whereas Douyin can only be downloaded in Chinese app stores and Apple’s China-region App Store), Tencent differentiates WeChat (international) and Weixin (domestic) users by the phone number a user originally signs up with. In practice, users download the same WeChat/Weixin app from either international or Chinese app stores. The app then determines whether the user is an international or Chinese user during the account registration process. Apart from certain functionalities, such as Wallet, that are exclusive to Chinese users, the overall design of the app and the processes of account registration and deletion are the same for international and domestic users.

App walkthrough

We conducted app walkthroughs to examine and compare user experience in data and privacy protection during the app registration and account deletion process. Figure 1 compares the walkthrough results. 

Android-iOS difference

Registration processes for Baidu, Douyin, Toutiao and WeChat differ between the Android and iOS versions. The Android and iOS registration processes for TopBuzz and TikTok are similar, and are therefore recorded in a single timeline in Figure 1. In general, app registration on iOS devices involves more steps than on Android, meaning that the apps need to request more function-specific authorisation from users. In the Android versions, access to certain types of data is granted by default when users install and use the app; users need to change authorisations within the app or in the device’s privacy settings. For example, TopBuzz and TikTok, both owned by ByteDance, set push notifications as the default option without prompting for user consent. If users want to change the setting, they need to do so via their device’s privacy settings.

“Ask until consent”

All Chinese versions of the apps prompt a pop-up window displaying a summary of the privacy notice, whereas the Canadian versions do not. However, the pop-up privacy notification does not give users the option of continuing to use the app without ticking “I agree”: if a user does not agree with the privacy reminder, the app shows the notice again until consent is obtained to proceed to the next step. This reflects the failure of the notice and choice approach to privacy protection, in which users are left with no choice but to accept the terms or relinquish use of the app (Martin, 2013). It also mirrors and reaffirms existing findings on the lack of choice when users do not agree with a privacy notice. For Douyin, TikTok, Toutiao, TopBuzz, and Baidu, users can still use limited app functions if they do not sign up for an account. However, these apps will still collect information during use, such as device and location information, as per their privacy policies. WeChat and Weixin, on the other hand, mandate the creation of an account to use app services.

Real name registration

For all examined apps, users can register with either a cell phone number or an email address in the international version. However, in all domestic versions, a cell phone number is mandatory to sign up for services. This is a key difference between the international and domestic versions. The main reason is that Article 24 of China’s Cybersecurity Law requires internet companies to comply with the real name registration regulation. During account registration, all apps request access to behavioural data (location) and user data (contacts). The real name registration process mandated under Chinese law differs in intent and in practice from the policies of US-based internet companies and platforms. For example, Facebook, YouTube, the now-defunct Google+, Twitter and Snapchat have different policies on whether a user has the option of remaining anonymous or creating an online persona that masks their identity to the public (DeNardis & Hackl, 2015, p. 764). The decisions made on the part of internet companies and digital platforms can jeopardise the online safety and anonymity of minority populations and have the potential to stifle freedom of expression. In the Chinese context, however, real name registration is overseen and enforced by different levels of government for the purposes of governance and control, following the principle of “real identity on the backend and voluntary compliance on the front end”: apps, platforms, and websites must collect personally identifying information, while it is up to users to decide whether to use their real name as their screen name.

Account deletion

For all apps examined, users need to go through multiple steps to reach the account deletion options: WeChat 5 steps, Douyin 6 steps, TikTok 4 steps, TopBuzz 3 steps. The more steps it takes, the more complicated it is for users to de-register and delete the data and metadata generated on the app. All Chinese versions of the tested apps prompt an “account in secure state” notification during the account deletion process. As a security measure, an account is in a secure state when it shows no suspicious changes, such as a password change or the unlinking of the mobile phone number, within a short period before the request. A secure state is a prerequisite for account removal. The domestic versions also have screening measures so that only accounts with a “clean history” can be deleted; a clean history means the account has not been blocked and has not engaged in any previous activities that are against laws and regulations. TikTok also offers a 30-day deactivation period before the account is deleted, and TopBuzz requires users to tick “agree” on privacy terms during account deletion. TopBuzz also offers a re-participation option by soliciting the reasons why users delete their accounts.

Figure 1: Walkthrough analysis

Content analysis of privacy policies and terms of service

Table 1: Cross-border regulation

Company | Regions | Privacy policy application scope | Laws and jurisdictions referred | Specific court that legal proceedings must go through
Baidu | - | Part of larger organization | Relevant Chinese laws, regulations | Beijing Haidian District People's Court
TopBuzz | EU | Part of larger organization | GDPR and EU | No
TopBuzz | Non-EU | Part of larger organization | US, California Civil Code, Japan, Brazil | Singapore International Arbitration Center
Toutiao | - | For Toutiao | Relevant Chinese laws, regulations | Beijing Haidian District People's Court
Douyin | - | For Douyin | Relevant Chinese laws, regulations | Beijing Haidian District People's Court
TikTok | US | For TikTok | Yes | Unspecified
TikTok | EU | For TikTok | Yes | Unspecified
TikTok | Global | For TikTok | No | Unspecified
WeiXin | - | For Weixin | Relevant Chinese laws, regulations | Shenzhen Nanshan People's Court
WeChat | US | For WeChat | No | American Arbitration Association
WeChat | EU | For WeChat | - | The court of the user's place of residence or domicile
WeChat | Other | For WeChat | - | Hong Kong International Arbitration Centre

We retrieved and examined the privacy policies and terms of service of all apps as of July 2019. Baidu has only one set of policies covering both domestic and international users. WeChat/Weixin, TopBuzz/Toutiao and TikTok/Douyin have designated policies for domestic and international users, respectively. TikTok’s privacy policies and terms of service are the most region-specific, with three distinct documents for US, EU, and global users (excluding the US and EU). TopBuzz distinguishes EU and non-EU users, with jurisdiction-specific items for users based in the US, Brazil, and Japan in its non-EU privacy policy. Most policies and terms of service refer to the privacy laws of the jurisdictions served, but WeChat’s and TikTok’s global privacy policies are vague in that they do not explicitly name the applicable laws and regulations, referring to them only as “relevant laws and regulations”. Compared to the Canadian versions of the same apps, the Chinese versions provide clearer and more detailed information about the specific court where disputes are to be resolved.

Table 2: Storage and transfer of user data

Company | Regions | Storage of data | Location of storage | Duration of storage | Data transfer
Baidu | - | Yes | PRC | Unspecified | Unspecified
TopBuzz | EU | Yes | Third-party servers in US & Singapore (Amazon Web Services) | Browser behavior data stored for 90 days; varies according to jurisdictions | Yes
TopBuzz | Non-EU | Yes | US and Singapore | Unspecified | Yes
Toutiao | - | Yes | PRC | Unspecified | No
Douyin | - | Yes | PRC | Unspecified | Transfer with explicit consent
TikTok | US | Unspecified | Unspecified | Unspecified | Unspecified
TikTok | EU | Yes | Unspecified | Unspecified | Yes
TikTok | Global | Unspecified | Unspecified | Unspecified | Unspecified
WeiXin | - | Yes | PRC | Unspecified | Unspecified
WeChat | - | Yes | Canada, Hong Kong | - | Unspecified

In terms of data storage, as shown in Table 2, most international versions of the examined apps store user data in foreign jurisdictions. For example, WeChat’s international-facing privacy policy states that the personal information it collects from users will be transferred to, stored at, or processed in Ontario, Canada and Hong Kong. The company explains explicitly why it chooses the two regions: “Ontario, Canada (which was found to have an adequate level of protection for Personal Information under Commission Decision 2002/2/EC of 20 December 2001); and Hong Kong (we rely on the European Commission’s model contracts for the transfer of personal data to third countries (i.e., the standard contractual clauses), pursuant to Decision 2001/497/EC (in the case of transfers to a controller) and Decision 2010/915/EC (in the case of transfers to a processor).” Only Baidu stores user data in mainland China regardless of users’ jurisdiction of residence. However, Baidu’s policies do not specify where and for how long transnational communications between users based in China and users based elsewhere will be stored, and they are particularly ambiguous about how long data will be retained. Governed by the GDPR, privacy policies serving EU users are more comprehensive than others in disclosing whether user data will be transferred.

All apps include mechanisms through which users can communicate their concerns or file complaints about how the company may be retaining, processing, or disclosing their personal information. Almost all apps, with the exception of Baidu, provide an email address and a physical mailing address to which users can direct such communications. TikTok provides the name of an EU representative in its EU-specific privacy policy, though the contact email provided is the same as the one given in TikTok’s other international privacy policies.

Table 3: Privacy disclosure

Company | Regions | Last policy update date | Access to older versions | Notification of update? | Complaint mechanism | Complaint venue
Baidu | - | No | No | No | Yes | Legal process through local court
TopBuzz | EU | No | Yes | Yes | - | No privacy officer listed
TopBuzz | Non-EU | No | No | Yes | Yes | No privacy officer listed
Toutiao | - | Yes | No | Yes | Yes | No privacy officer listed
Douyin | - | Yes | No | Yes | Yes | Email and physical mailing address
TikTok | US | Yes | No | Yes | Yes | No privacy officer listed
TikTok | EU | Yes | No | Yes | Yes | An EU representative is listed
TikTok | Global | Yes | No | Yes | Yes | Email and a mailing address
WeiXin | - | Yes | No | Yes | Yes | Contact email and location of Tencent Legal Department
WeChat | - | Yes | No | Yes | Yes | Contact email of Data Protection Officer and a physical address

Baidu only mentions that any disputes should be resolved via legal process through a local court, which increases the difficulty for users, especially international users, who wish to resolve a dispute with the company. WeChat/Weixin is another interesting case: unlike ByteDance, which distinguishes its domestic and international users by providing them with two different versions of its apps, Tencent’s overseas and domestic users use the same app. Users receive different privacy policies and terms of service based on the phone number they signed up with. In addition, the company’s privacy policy and terms of service differentiate international and domestic users not only by their place of residence but also by their nationality. Tencent’s terms of service for international WeChat users state that if the user is “(a) a user of Weixin or WeChat in the People’s Republic of China; (b) a citizen of the People’s Republic of China using Weixin or WeChat anywhere in the world; or (c) a Chinese-incorporated company using Weixin or WeChat anywhere in the world,” he or she is subject to the China-based Weixin terms of service. However, neither WeChat nor Weixin explains in these documents how the apps identify someone as a Chinese citizen. As a result, even Weixin users residing overseas need to go through the complaint venue outlined in the Chinese privacy policy rather than taking their complaint to the company’s overseas operations.

Our analysis of these apps’ data collection practices shows some general patterns in both the domestic and international versions. All apps mention the types of information they may collect, such as name, date of birth, biometrics, address, contacts, and location. However, none of the apps, except WeChat for international users, offers a clear definition or examples of what counts as personally identifiable information (PII). As for disclosure of PII, all apps state that they will share necessary information with law enforcement agencies and government bodies. TikTok’s privacy policy for international users outside the US and EU appears to be the most permissive when it comes to sharing user information with third parties or company affiliates. All the other apps surveyed state that they will request users’ consent before sharing PII with any non-government entities. TikTok’s global privacy policy states that it will share user data, without asking for user consent separately, with “any member, subsidiary, parent, or affiliate of our corporate group”, “law enforcement agencies, public authorities or other organizations if legally required to do so”, as well as with third parties.

Conclusion

This study shows that not only do different Chinese mobile apps vary in data and privacy protection, but the Chinese domestic and international versions of the same app also vary in their data and privacy protection standards. More globally successful China-based mobile apps have better and more comprehensive data and privacy protection standards. In line with previous findings (Liu, 2019; Fazhi Wanbao, 2018), our research shows that Baidu, compared to the other apps, has the most unsatisfactory data and privacy protection measures. ByteDance’s apps (TopBuzz/Toutiao and TikTok/Douyin) are more attentive to users from different geographical regions, designating jurisdiction-specific privacy policies and terms of service. In this case, the mobile app’s globalisation strategies and aspirations play an important part in the design and governance of mobile app data and privacy protection. ByteDance is the most internationalised company compared to Baidu and Tencent. ByteDance’s experience of dealing with fines from US, Indian and Indonesian law enforcement and regulatory authorities has helped revamp its practices overseas. For instance, TikTok updated its privacy policy after the Federal Trade Commission’s fine in February 2019 (Alexander, 2019). Faced with probing from US lawmakers and a ban from the US Navy, TikTok released its first transparency report in December 2019, and the company is set to open a “Transparency Center” in its Los Angeles office in May 2020, where external experts will oversee its operations (Pappas, 2020). Tencent, with an expanding array of overseas users, was also among the first to comply with the GDPR, updating its privacy policy to meet the GDPR’s requirements on 29 May 2018, shortly after the regulation came into force.

For China-based internet companies that eye global markets, expanding beyond China means that they must provide a compelling experience for international users and comply with laws and regulations in the jurisdictions where they operate. In this regard, nation states and the ecosystems of internet regulation they design have a powerful impact on how private companies govern their platforms. Our analysis suggests that nation-based regulation of online spaces has at times spilled beyond its territory (e.g., Tencent’s WeChat/Weixin distinguishing domestic and international users based on their nationality). However, the effects of state regulation on transnational corporations are not monolithic. They vary depending on how integrated a platform is into a certain jurisdiction, where its main user base is, and what its globalisation strategies are. For example, ByteDance’s TikTok is more responsive to international criticism and public scrutiny than the other applications in this study, potentially because of the app’s highly globalised presence and revenue streams.

Secondly, this paper highlights that, in addition to app makers, other powerful actors and parties shape an app’s data and privacy protection practices. One such actor is the mobile app store owner (e.g., Google Play and the Apple App Store). As the walkthrough analysis demonstrates, the app interface design and permission requests on Apple iOS do a better job of informing and notifying mobile phone users about data access. The Android versions of the tested apps in some cases set user consent for push notifications as the default, so it takes individual effort to navigate the settings and learn how to opt out or withdraw consent. The examined apps running on Android are also more lenient in requesting data from users than their iOS counterparts. The gatekeeping function of the mobile app platforms that host these apps and set the standards for app designers and privacy protection further indicates a more nuanced and layered conceptualisation of corporate power in understanding apps as situated digital objects. This also shows that, in a closely interconnected platform ecosystem, some platform companies are more powerful than others, given their infrastructural reach in hosting content and providing cloud computing and data services (van Dijck, Nieborg, & Poell, 2019). Even though Tencent, ByteDance and Baidu are powerful digital companies in China, they still rely on the Google Play store and Apple’s App Store for the domestic and global distribution of their apps, and are therefore subject to the governance of these mobile app stores (see Cartwright, 2020, this issue). Another example is mini-programmes, the “sub-applications” hosted on WeChat, whose developers and apps are subject to WeChat’s privacy policies and developer agreements. This shows that apps are always situated in, and should be studied together with, the complex mobile ecosystem and their regional context (Dieter et al., 2019). Therefore, we should consider the relational and layered interplay between different levels of corporate power in co-shaping the data and privacy practices of mobile apps.

As shown in the analysis, the international-facing version of the same China-based mobile app provides relatively higher levels of data protection to app users in the European Union than its Chinese-facing version. This further highlights the central role of nation states and the importance of jurisdiction in the global expansion of Chinese mobile apps. As non-EU organisations, Chinese app makers are subject to the territorial scope of the GDPR (Article 3) when offering services to individuals in the EU. On the other hand, Chinese-facing apps have operationalised Chinese privacy regulations in app design and privacy policies, complying with rules such as real name registration. Through the analysis of terms of service and privacy policies, this paper shows that China-based mobile apps are generally in compliance with laws and data protection frameworks across different jurisdictions. However, there is a lack of detailed explanation of data retention and storage when users are in transit: for example, when EU residents travel outside the EU, do they have the same level of privacy protection as when residing in the EU? On average, EU users of these four sets of China-based mobile apps are afforded greater transparency and control over how their data is used, stored and disclosed than users in other jurisdictions. Under China’s privacy regulation regime, which is itself full of contradictions and inconsistencies (Lee, 2018; Feng, 2019), data and privacy protection is weak for domestic Chinese users. Certain features of the apps, such as the “account in secure state” requirement during account deletion in the domestic versions, also show the prioritisation of national security over the individual right to privacy as a key doctrine in China’s approach to data and privacy protection under the banner of internet sovereignty. This, however, is not unique to China, as national security and privacy protection are portrayed in many policy debates and policymaking processes as a zero-sum game (Mann, Daly, Wilson, & Suzor, 2018). The latest restrictions imposed by the Trump administration on TikTok and WeChat in the US, citing concerns over the apps’ data collection and data sharing policies (Yang & Lin, 2020), are just another example of the conundrum China-based apps face in the course of global expansion and of the global geopolitics centred on mobile and internet technologies. To be sure, data and privacy protection is one of the biggest challenges if China-based apps continue to expand overseas, and it will entail a steep learning curve and possibly a reorganisation of company operations and governance structures.

References

Alexander, J. (2019, February 27). TikTok will pay $5.7 million over alleged children’s privacy law violations. The Verge. https://www.theverge.com/2019/2/27/18243312/tiktok-ftc-fine-musically-children-coppa-age-gate

Balebako, R., Marsh, A., Lin, J., Hong, J., & Cranor, L. F. (2014, February 23). The Privacy and Security Behaviors of Smartphone App Developers. Network and Distributed System Security Symposium. https://doi.org/10.14722/usec.2014.23006

BBC News. (2018, July 18). Apple iCloud: State Firm Hosts User Data in China. BBC News. https://www.bbc.com/news/technology-44870508

Byford, S. (2018, November 30). How China’s Bytedance Became the World’s Most Valuable Startup. The Verge. https://www.theverge.com/2018/11/30/18107732/bytedance-valuation-tiktok-china-startup

C.A.C. (2016, December 27). Guojia Wangluo Anquan Zhanlue [National Cyberspace Security Strategy]. Xinhuanet. http://www.xinhuanet.com/politics/2016-12/27/c_1120196479.htm

Cartwright, M. (2020). Internationalising state power through the internet: Google, Huawei and geopolitical struggle. Internet Policy Review, 9(3). https://doi.org/10.14763/2020.3.1494

Chen, J. Y., & Qiu, J. L. (2019). Digital Utility: Datafication, Regulation, Labor, and Didi’s Platformization of Urban Transport in China. Chinese Journal of Communication, 12(3), 274–289. https://doi.org/10.1080/17544750.2019.1614964

Chen, X., Kaye, D. B., & Zeng, J. (2020). #PositiveEnergy Douyin: Constructing ‘Playful Patriotism’ in a Chinese Short-Video Application. Chinese Journal of Communication. https://doi.org/10.1080/17544750.2020.1761848

de Hert, P., & Papakonstantinou, V. (2015). The Data Protection Regime in China. [Report]. European Parliament. https://www.europarl.europa.eu/RegData/etudes/IDAN/2015/536472/IPOL_IDA(2015)536472_EN.pdf

Deibert, R., & Pauly, L. (2017). Cyber Westphalia and Beyond: Extraterritoriality and Mutual Entanglement in Cyberspace. Paper Prepared for the Annual Meeting of the International Studies Association.

DeNardis, L., & Hackl, A. M. (2015). Internet Governance by Social Media Platforms. Telecommunications Policy, 39(9), 761–770. https://doi.org/10.1016/j.telpol.2015.04.003

Dieter, M., Gerlitz, C., Helmond, A., Tkacz, N., Vlist, F., & Weltevrede, E. (2019). Multi-Situated App Studies: Methods and Propositions. Social Media + Society, 1–15.

van Dijck, J., Nieborg, D., & Poell, T. (2019). Reframing Platform Power. Internet Policy Review, 8(2). https://doi.org/10.14763/2019.2.1414

Federal Trade Commission. (1998). Privacy Online: A Report to Congress [Report]. Federal Trade Commission. https://www.ftc.gov/sites/default/files/documents/reports/privacy-online-report-congress/priv-23a.pdf

Federal Trade Commission. (2013). Mobile Privacy Disclosures: Building Trust Through Transparency [Staff Report]. Federal Trade Commission. https://www.ftc.gov/reports/mobile-privacy-disclosures-building-trust-through-transparency-federal-trade-commission

Federal Trade Commission. (2019, February 27). Video Social Networking App Musical.ly Agrees to Settle FTC Allegations That it Violated Children’s Privacy Law [Press release]. Federal Trade Commission. https://www.ftc.gov/news-events/press-releases/2019/02/video-social-networking-app-musically-agrees-settle-ftc

Feng, Y. (2019). The Future of China’s Personal Data Protection Law: Challenges and Prospects. Asia Pacific Law Review, 27(1), 62–82. https://doi.org/10.1080/10192557.2019.1646015

Fernback, J., & Papacharissi, Z. (2007). Online Privacy as Legal Safeguard: The Relations Among Consumer, Online Portal and Privacy Policy. New Media & Society, 9(5), 715–734. https://doi.org/10.1177/1461444807080336

Flew, T., Martin, F., & Suzor, N. (2019). Internet Regulation as Media Policy: Rethinking the Question of Digital Communication Platform Governance. Journal of Digital Media & Policy, 10(1), 33–50. https://doi.org/10.1386/jdmp.10.1.33_1

Fu, T. (2019). China’s Personal Information Protection in a Data-Driven Economy: A Privacy Policy Study of Alibaba, Baidu and Tencent. Global Media and Communication, 15(2), 195–213. https://doi.org/10.1177/1742766519846644

Fuchs, C. (2012). The Political Economy of Privacy on Facebook. Television & New Media, 13(2), 139–159. https://doi.org/10.1177/1527476411415699

Gierow, H. J. (2014). Cyber Security in China: New Political Leadership Focuses on Boosting National Security (Report No. 20; China Monitor). merics. https://merics.org/en/report/cyber-security-china-new-political-leadership-focuses-boosting-national-security

Gillespie, T. (2018a). Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions that Shape Social Media. Yale University Press.

Gillespie, T. (2018b). Regulation Of and By Platforms. In J. Burgess, A. Marwick, & T. Poell (Eds.), The SAGE Handbook of Social Media (pp. 254–278). SAGE Publications. https://doi.org/10.4135/9781473984066.n15

Goldsmith, J., & Wu, T. (2006). Who Controls the Internet? Illusions of a Borderless World. Oxford University Press.

Gorwa, R. (2019). What is platform governance? Information, Communication & Society, 22(6), 854–871. https://doi.org/10.1080/1369118X.2019.1573914

Greene, D., & Shilton, K. (2018). Platform Privacies: Governance, Collaboration, and the Different Meanings of “Privacy” in iOS and Android Development. New Media & Society, 20(4), 1640–1657. https://doi.org/10.1177/1461444817702397

Jao, N. (2018, February 8). Evernote Announces Plans to Migrate All Data in China to Tencent Cloud. Technode. https://technode.com/2018/02/08/evernote-will-migrate-data-china-tencent-cloud/

Jia, L., & Winseck, D. (2018). The Political Economy of Chinese Internet Companies: Financialization, Concentration, and Capitalization. International Communication Gazette, 80(1), 30–59. https://doi.org/10.1177/1748048517742783

Kalathil, S., & Boas, T. (2003). Open Networks, Closed Regimes: The Impact of the Internet on Authoritarian Rule. Carnegie Endowment for International Peace.

Kaye, B. V., Chen, X., & Zeng, J. (2020). The Co-evolution of Two Chinese Mobile Short Video Apps: Parallel Platformization of Douyin and TikTok. Mobile Media & Communication. https://doi.org/10.1177/2050157920952120

Knockel, J., Ruan, L., Crete-Nishihata, M., & Deibert, R. (2018). (Can’t) Picture This: An Analysis of Image Filtering on WeChat Moments [Report]. Citizen Lab. https://citizenlab.ca/2018/08/cant-picture-this-an-analysis-of-image-filtering-on-wechat-moments/

Kong, L. (2007). Online Privacy in China: A Survey on Information Practices of Chinese Websites. Chinese Journal of International Law, 6(1), 157–183. https://doi.org/10.1093/chinesejil/jml061

Lee, J.-A. (2018). Hacking into China’s Cybersecurity Law. Wake Forest Law Review, 53, 57–104. http://wakeforestlawreview.com/wp-content/uploads/2019/01/w05_Lee-crop.pdf

Light, B., Burgess, J., & Duguay, S. (2018). The Walkthrough Method: An Approach to the Study of Apps. New Media & Society, 20(3), 881–900. https://doi.org/10.1177/1461444816675438

Liu, J. (2019). China’s Data Localization. Chinese Journal of Communication, 13(1). https://doi.org/10.1080/17544750.2019.1649289

Logan, S. (2015). The Geopolitics of Tech: Baidu’s Vietnam. Internet Policy Observatory. http://globalnetpolicy.org/research/the-geopolitics-of-tech-baidus-vietnam/

Logan, S., Molloy, B., & Smith, G. (2018). Chinese Tech Abroad: Baidu in Thailand [Report]. Internet Policy Observatory. http://globalnetpolicy.org/research/chinese-tech-abroad-baidu-in-thailand/

Mann, M., Daly, A., Wilson, M., & Suzor, N. (2018). The Limits of (Digital) Constitutionalism: Exploring the Privacy-Security (Im)Balance in Australia. International Communication Gazette, 80(4), 369–384. https://doi.org/10.1177/1748048518757141

Martin, K. (2013). Transaction Costs, Privacy, and Trust: The Laudable Goals and Ultimate Failure of Notice and Choice to Respect Privacy Online. First Monday, 18(12). https://doi.org/10.5210/fm.v18i12.4838

McKune, S., & Ahmed, S. (2018). The Contestation and Shaping of Cyber Norms Through China’s Internet Sovereignty Agenda. International Journal of Communication, 12, 3835–3855. https://ijoc.org/index.php/ijoc/article/view/8540

Nissenbaum, H. (2011). A Contextual Approach to Privacy Online. Dædalus, 140(4), 32–48. https://doi.org/10.1162/DAED_a_00113

Pappas, V. (2020, March 11). TikTok to Launch Transparency Center for Moderation and Data Practices [Press release]. TikTok. https://newsroom.tiktok.com/en-us/tiktok-to-launch-transparency-center-for-moderation-and-data-practices

Plantin, J.-C., Lagoze, C., Edwards, P., & Sandvig, C. (2016). Infrastructure Studies Meet Platform Studies in the Age of Google and Facebook. New Media & Society, 20(1), 293–310. https://doi.org/10.1177/1461444816661553

Plantin, J.-C., & Seta, G. (2019). WeChat as Infrastructure: The Techno-nationalist Shaping of Chinese Digital Platforms. Chinese Journal of Communication, 12(3). https://doi.org/10.1080/17544750.2019.1572633

Reuters. (2016, November 1). Airbnb Tells China Users Personal Data to be Stored Locally. Reuters. https://www.reuters.com/article/us-airbnb-china/airbnb-tells-china-users-personal-data-to-be-stored-locally-idUSKBN12W3V6

Reuters. (2018, January 12). China Chides Tech Firms Over Privacy Safeguards. Reuters. https://www.reuters.com/article/us-china-data-privacy/china-chides-tech-firms-over-privacy-safeguards-idUSKBN1F10F6

Ruan, L., Knockel, J., Ng, J., & Crete-Nishihata, M. (2016). One App, Two Systems: How WeChat Uses One Censorship Policy in China and Another Internationally (Research Report No. 84). Citizen Lab. https://citizenlab.ca/2016/11/wechat-china-censorship-one-app-two-systems/

Sharma, I., & Niharika, S. (2019, July 22). It Took a Ban and a Government Notice for ByteDance to Wake Up in India. Quartz India. https://qz.com/india/1671207/bytedance-to-soon-store-data-of-indian-tiktok-helo-users-locally/

State Council Information Office. (2010). The Internet in China. Information Office of the State Council of the People’s Republic of China. http://www.china.org.cn/government/whitepaper/node_7093508.htm

Stein, L. (2013). Policy and Participation on Social Media: The Cases of YouTube, Facebook and Wikipedia. Communication, Culture & Critique, 6(3), 353–371. https://doi.org/10.1111/cccr.12026

Steinberg, M., & Li, J. (2016). Introduction: Regional Platforms. Asiascape: Digital Asia, 4(3), 173–183. https://doi.org/10.1163/22142312-12340076

Fazhi Wanbao. (2018, January 6). Shouji Baidu App Qinfanle Women de Naxie Yinsi [What privacy of ours has the mobile Baidu app infringed]. 163. http://news.163.com/18/0106/17/D7G2O0T200018AOP.html

Wang, H. (2011). Protecting Privacy in China: A Research on China’s Privacy Standards and the Possibility of Establishing the Right to Privacy and the Information Privacy Protection Legislation in Modern China. Springer Science & Business Media. https://doi.org/10.1007/978-3-642-21750-0

Wang, W. Y., & Lobato, R. (2019). Chinese Video Streaming Services in the Context of Global Platform Studies. Chinese Journal of Communication, 12(3), 356–371. https://doi.org/10.1080/17544750.2019.1584119

West, S. M. (2019). Data Capitalism: Redefining the Logics of Surveillance and Privacy. Business & Society, 58(1), 20–41. https://doi.org/10.1177/0007650317718185

Xu, D., Tang, S., & Guttman, D. (2019). China’s Campaign-style Internet Finance Governance: Causes, Effects, and Lessons Learned for New Information-based Approaches to Governance. Computer Law & Security Review, 35, 3–14. https://doi.org/10.1016/j.clsr.2018.11.002

Xu, J. (2015). Evolving Legal Frameworks for Protecting the Right to Internet Privacy in China. In J. Lindsay, T. M. Cheung, & D. Reveron (Eds.), China and Cybersecurity: Espionage, Strategy, and Politics in the Digital Domain (pp. 242–259). Oxford Scholarship Online. https://doi.org/10.1093/acprof:oso/9780190201265.001.0001

Yang, J., & Lin, L. (2020). WeChat and Trump’s Executive Order: Questions and Answers. The Wall Street Journal. https://www.wsj.com/articles/wechat-and-trumps-executive-order-questions-and-answers-11596810744.

Yang, W. (2018, September 15). Online Streaming Platforms Urged to Follow Copyright Law. ChinaDaily. http://usa.chinadaily.com.cn/a/201809/15/WS5b9c7e90a31033b4f4656392.html

Yang, Y. (2019, June 21). TikTok Owner ByteDance Gathers 1 Billion Monthly Active Users Across its Apps. South China Morning Post. https://www.scmp.com/tech/start-ups/article/3015478/tiktok-owner-bytedance-gathers-one-billion-monthly-active-users

Zimmeck, S., Wang, Z., Zou, L., Iyengar, R., Liu, B., Schaub, F., & Reidenberg, J. (2016, September 28). Automated Analysis of Privacy Requirements for Mobile Apps. 2016 AAAI Fall Symposium Series. http://pages.cpsc.ucalgary.ca/~joel.reardon/mobile/privacy.pdf

Appendix

Current laws, regulations and drafting measures for data and privacy protection in China

Year | Title | Government ministries | Legal effect | Main takeaway
2009 | General Principles of the Civil Law | National People's Congress | Civil law | Lays the foundation for the protection of personal rights including personal information, but privacy protection comes as an auxiliary article
2010 | Tort Liabilities Law | Standing Committee of the National People's Congress | Civil law | -
2012 | Decision on Strengthening Online Personal Data Protection | Standing Committee of the National People's Congress | General framework | Specifies the protection of personal electronic information or online personal information for the first time
2013 | Regulation on Credit Reporting Industry | State Council | Regulation | Draws a boundary of what kinds of personal information can and cannot be collected by the credit reporting business
2013 | Telecommunication and Internet User Personal Data Protection Regulations | Ministry of Industry and Information Technology | Department regulation | Provides industry-specific regulations on personal information protection duties
2013 | Information Security Technology Guidelines for Personal Information Protection with Public and Commercial Services Information Systems | National Information Security Standardization Technical Committee; China Software Testing Center | Voluntary national standard | Specifies what "personal general information" (个人一般信息) and "personal sensitive information" (个人敏感信息) entail respectively; defines the concepts of "tacit consent" (默许同意) and "expressed consent" (明示同意) for the first time
2014 | Provisions of the Supreme People's Court on Several Issues concerning the Application of Law in the Trial of Cases involving Civil Disputes over Infringements upon Personal Rights and Interests through Information Networks | Supreme People's Court | General framework | Defines what is included in the protection of "personal information", with a specific focus on regulating online search of personal information and online trolls
2015 | Criminal Law (9th Amendment) | Standing Committee of the National People's Congress | Criminal law | Criminalises the sale of any citizen's personal information in violation of relevant provisions; criminalises network service providers' failure to fulfil network security management duties
2016 | Administrative Rules on Information Services via Mobile Internet Applications | Cyberspace Administration of China | Administrative rules | Reiterates app stores' and internet app providers' responsibilities to comply with the real-name verification system and content regulations regarding national security and public order; mentions data collection principles (i.e., legal, justifiable, necessary, expressed consent)
2017 | Cybersecurity Law | Standing Committee of the National People's Congress | Law | Requires data localisation; provides definitions of "personal information"; defines data collection principles; currently the most authoritative law protecting personal information
2017 | Interpretation of the Supreme People's Court and the Supreme People's Procuratorate on Several Issues concerning the Application of Law in the Handling of Criminal Cases of Infringing on Citizens' Personal Information | Supreme People's Court | General framework | Defines "citizen personal information", what activities equate to "providing citizen personal information", and the legal consequences of illegally providing personal information
2017 | Information Security Technology: Guide for De-Identifying Personal Information | Standardization Administration of China | Drafting | Provides a guideline on de-identification of personal information
2018 | Information Security Technology: Personal Information Security Specification | Standardization Administration of China | Voluntary national standard (currently under revision) | Lays out granular guidelines for consent and how personal data should be collected, used, and shared
2018 | E-Commerce Law | Standing Committee of the National People's Congress | Law | Provides generally-worded personal information protection rules for e-commerce vendors and platforms
2019 | Measures for Data Security Management | Cyberspace Administration of China | Drafting | Proposes new requirements with a focus on the protection of "important data", defined as "data that, if leaked, may directly affect China's national security, economic security, social stability, or public health and security"
2019 | Information Security Technology: Basic Specification for Collecting Personal Information in Mobile Internet Applications | Standardization Administration of China | Drafting | Provides guidelines on minimal information for an extensive list of applications ranging from navigation services to input software
2019 | Measures for Determining Illegal Information Collection by Apps | - | Drafting stage | -

Digital commons


This article belongs to Concepts of the digital society, a special section of Internet Policy Review guest-edited by Christian Katzenbach and Thomas Christian Bächle.

Introduction

Commons are holistic social institutions for governing the (re)production of resources. They represent a comprehensive and radical approach to organise collective action, placing it “beyond market and state” (Bollier & Helfrich, 2012). As De Angelis (2017, p. 10) put it, they are characterised by “a plurality of people (a community) sharing resources and governing them and their own relations and (re)production processes through horizontal doing in common, commoning". Thus, they form a third way of organising society and the economy that differs from both market-based approaches with their orientation toward prices, and from bureaucratic forms of organisation with their orientation toward hierarchies and commands.

The model has been applied to tangible and intangible resources, to local initiatives (e.g., a shared garden, educational material created by a school) and to resources governed by global politics (e.g., climate, internet infrastructure).

In our proposed definition (see also Stalder, 2010), the digital commons are a subset of the commons, where the resources are data, information, culture and knowledge which are created and/or maintained online. They are shared in ways that avoid their enclosure and allow everyone to access and build upon them. The notion of the digital commons lies at the heart of digital rights, the political fight to expand, rather than restrict, access to information, culture and knowledge (Kapczynski & Krikorian, 2010). Unlike tangible commons (such as urban gardens, forests or meadows), the digital commons (such as free software or Wikipedia) are not affected by overuse or material exclusivity. However, their existence can still be threatened by undersupply, inadequate legal frameworks, pollution, lack of quality or findability.

The traditional and the digital commons provide a socially progressive alternative for producing and sharing resources and for organising collective action across a wide range of domains, with a focus on sustainability and democracy. While not all resources can or should be governed as commons, we claim that this approach can provide political inspiration beyond the digital domain where it is currently applied, with the potential to improve life by expanding access to resources and creating new areas of collective self-governance, at global and local levels, both offline and online.

In the following, we focus on the holistic character of the digital commons as an approach to governance, that is, how their economic, social, legal and cultural dimensions relate to one another, in contrast to both market and public provision of resources. We highlight how fundamental an alternative the commons can be, particularly in relation to current problems of capitalism such as data-driven surveillance, platform monopolies and the increasingly authoritarian orientation of even many democracies.

We begin with a short history of the commons and the differences between traditional and digital commons. We then introduce the main fields in which the digital commons emerged historically (free software, free culture, cultural heritage, science, data and public sector information). We then adopt disciplinary perspectives to analyse each of the four dimensions which shape each other and together constitute the commons. Digital commons rely on open licensing rules, and we study the legal models preserving sharing and access, which constitute the originality of the digital commons compared to the standard copyright used by firms focusing on exclusivity. We then study cultural models, which have an impact on authorship and creativity, leading to original economic peer production models, the third pillar of commons studied holistically. Lastly, these three dimensions depend on governance by communities, presented as a fourth, overarching dimension.

To conclude, we observe emerging fields where the digital commons become a highly relevant model for producing alternatives to both centrally controlled state politics and surveillance capitalism, oriented towards autonomy and control. In a final section, we analyse examples of platforms enabling decentralised participation for citizens and sovereignty over personal data.

From traditional to digital commons

Commons have existed across cultures as crucial institutions of traditional, rural communal life (Ostrom, 1990). Through the process of enclosure, which started in the 13th century in England, most of the common land was turned into private property and thus removed from communal use (Linebaugh, 2008). While some traditional commons in remote areas have survived (Nanchen & Borgeat, 2015), they have long been marginalised in theory and in social practice.

In recent years, the theory and practice of commons and commoning have made a remarkable return, highlighted by the Nobel Memorial Prize awarded to Elinor Ostrom in 2009. Ostrom (1990) identified, across a large number of case studies, institutional features and governance factors that allow commons to flourish. Initially, this perspective had been formulated with regard to traditional common pool resources (such as fisheries, meadows, etc.), but since then the framework has been applied to knowledge commons (Hess & Ostrom, 2007) and to digital commons (Frischmann, Madison & Strandburg, 2014).

There are three main factors that contribute to an increased interest in the commons. First, the ecological crisis creates high urgency to develop alternatives to economic growth and new modes of managing natural resources. This has led not only to renewed interest in traditional local commons, but also to conceiving new “global commons” such as the atmosphere or the oceans (pioneered by the International Union for Conservation of Nature and Natural Resources et al., 1980). The digital commons are part of both this problem—since internet infrastructure and consumer electronics needed for the production of digital resources, being commons or not, carry a large environmental cost—and the solution. Their governance model can serve as inspiration, thus contributing to the development of alternatives elsewhere (Rifkin, 2014).

Second, the accumulation of negative effects of untethered commodification and marketisation in the wake of neoliberal policies (particularly social exclusion and inequality) has spurred a broad search for innovative alternatives to austerity, particularly in urban areas (Borch & Kornberger, 2015).

Third, on the internet, new digital commons were emerging from several sources and social contexts, dealing with a wide range of complex knowledge and information resources (Benkler, 2006; Bollier, 2009). Newly developed community-based production models were effectively countering the negative externalities of capitalism on intangible resources, such as enclosure and commodification of knowledge.

The new paradigm of producing informational goods as commons emerged first in the field of software development. In 1984, the programmer Richard M. Stallman founded the free software movement to counter the rise of proprietary software and promote the “four freedoms” (Stallman, 1996) related to code: 0) the freedom to run the software for any purpose; 1) the freedom to study and change the programme without restrictions; 2) the freedom to distribute copies of the programme; and 3) the freedom to distribute changes of the programme. As FLOSS (Free, Libre and Open Source Software) projects grew and proliferated, they established the practical example that complex, knowledge-intensive informational resources can be managed as commons in Ostrom’s sense (Schweik & English, 2012), and that these commons are stable and reliable over long periods of time, capable of competing directly with market-based commodity production (Weber, 2004). Much of the current internet backbone relies on FLOSS (for example, the Domain Name System), and the wide availability and use of open source web servers (first CERN httpd (1990), then Apache (1995) and now also nginx (2004)) has played an important role in the spread and rapid innovation of the World Wide Web. By the end of the 1990s, the tension between conventional notions of property (as enshrined in copyright law) and the growing popularity of collaborative cultural practices online (such as remixing and file sharing) rose to the surface and spilled over into the mainstream.

In part as a response to the increasingly aggressive assertion of copyright by the cultural industries, which sued customers for performing everyday acts online, and in part drawing inspiration from the free software movement, the free culture (or open content) movement began to take shape (Boyle, 1997; Lessig, 2001). The largest free culture project is Wikipedia, an encyclopedia that is cooperatively written and financed by donations from readers who are part of the community (Dobusch & Kapeller, 2018). Since the project started in 2001, it has become the most popular and comprehensive global reference source, with 25 billion page views across its more than 150 language versions and its various sister projects (in June 2020). Other online creation communities, such as those sharing photos on Flickr, can also be governed as digital commons (Fuster Morell, 2010).

GLAM stands for Galleries, Libraries, Archives and Museums, a community of digital commons advocates and projects working to digitise public domain works of our cultural heritage without unnecessary legal (Mazzone, 2011), economic, or technical restrictions on their access and reuse by the public. The public domain comprises creative works to which copyright no longer applies, because it has expired, been expressly waived, or is inapplicable (Dusollier, 2010). While the public domain is legally and conceptually separate from the digital commons, in practice public domain works constitute an important source from which commoning practices can draw, all the more so as public domain books and artworks are being digitised (Boyle, 2008; Dulong de Rosnay & De Martin, 2012). In its strict legal meaning, the public domain begins after the term of copyright expires, typically 70 years after the death of the author. However, some libraries and museums decide, through the contractual terms of use of their websites, to reserve some rights over the reuse of digitised reproductions of the public domain works they preserve, while others (such as within the Europeana consortium) release them under public domain conditions without any restriction on reuse, or may even collaborate with Wikipedia (see the projects of the GLAM-Wiki initiative) to release high-resolution reproductions directly into the Wikimedia Commons repository (Dulong de Rosnay, 2011).

Another genre of works that is increasingly part of the digital commons is scientific articles and books (Suber, 2016). Since research is largely funded by external sources, it is economically sustainable to dedicate science to the commons. However, a large part of the written output of research is still enclosed in journals controlled by private publishers, which commodify the free labour of publicly-paid researchers and sell it to academic libraries. A developing trend since the early 2000s has been the movement for open access to science, which relies on three economic models to govern scientific output as commons: the green open access model, where authors are authorised by publishers to upload their articles, or a pre-print version, to open access institutional repositories to make them accessible to the public for free; the gold open access model, where articles are directly accessible under free and open conditions, with or without article processing charges depending on the publisher’s policy; and finally, the diamond or platinum model (Fuchs & Sandoval, 2013; Normand, 2018), where institutions or libraries finance gold open access journals or books.

The movement for open science and scientific commons also encompasses data. One rationale for open access to scientific data and for data reproducibility is that science will work better if other scientists can review, verify, and reuse the data from a study (Royal Society, 2012). Opening scientific data also helps ensure it remains available (Vines et al., 2014). Another justification for open science and open data, also valid for state-supported culture, heritage and education, is that their production is already covered by public funds, which makes copyright restrictions that remove them from the commons an unnecessary incentive (Suber, 2016). Many legislators and funders have created policies requiring the research they sponsor to be released under open access and open data conditions; these are listed in the Registry of Open Access Repository Mandates and Policies (ROARMAP). Whether researchers effectively comply with these policies, and whether the policies need better enforcement mechanisms, remains to be seen (Larivière et al., 2018).

Four dimensions of digital commons

Commons are managed by different socio-economic arrangements than the standard market and state models. They rely on a holistic combination of legal frameworks, transformed practices of authorship, economic models and modes of governance.

In this section, we are going to present the legal, authorship, economic and social models that inform and govern sustainable digital commons.

While there are many commonalities between the digital and the tangible commons, one of the fundamental differences between them is that in the former the resource is, by and large, non-rival: there is no danger of overuse. Therefore, the boundaries (to reuse Ostrom’s 1990 terminology) of the community for the digital commons tend to be drawn loosely. Everyone who adheres to the relevant governing rules, for example the conditions of use prescribed in a licence, is allowed to use the resource and can thus be regarded as part of the community at large. In other words, producers and users are not separated. Like tangible commons, digital commons are in need of ongoing maintenance. They face a danger of pollution, which degrades their quality (for example, vandalism or the inclusion of wrong facts in Wikipedia pages) or destroys them altogether, and a danger of underproduction, and thus need to be curated, sustained and preserved through governance and participation rules.

Law and licensing

Western, liberal law in general is oriented towards creating individual rights, protecting private property and enabling market exchanges (Söderberg, 2002; Capra & Mattei, 2015; Dulong de Rosnay, 2016). It was not designed to support commons and can thus be inadequate for regulating the digital commons, where community, shared resources and non-market relationships are central. The liberal conception of intellectual property, a legal fiction, aimed at implementing this model to regulate intangible creations (Hettinger, 1989; Dutfield & Suthersanen, 2004). For creative works such as text or music, copyright law has been designed to protect individual property by granting original authorship. From this derive the claim to individual ownership and the right to produce, the control of distribution by the cultural industry, and the conditions of use of the works by the public and subsequent authors. Only works which are no longer covered by copyright are free to use (Lessig, 2001; Boyle, 2008).

The legal mechanisms of the digital commons have a completely different philosophy: instead of focusing on providing an economic incentive or reward to individual creators to share by restricting the rights of the public, they aim at preserving copyrightable works against private enclosure, allowing access to knowledge for all (Kapczynski & Krikorian, 2010). This allows creative production and transformation processes to be led by future, unidentified peers and groups. Private instruments (Elkin-Koren, 2005; Dusollier, 2007), in the form of free and open licences or contracts with the public, have been designed to counter the automatic assignment of exclusive rights to initial authors by copyright laws, and to offer more rights to the public than the copyright rules applied by default. After free and open licensing was originally developed to support the collaborative development of software (Berry, 2008), a large number of open licensing schemes have been designed to support the development of the digital commons for cultural and scientific works (Guadamuz, 2006), as well as data and databases. While any free and open licence guarantees everybody the rights to use, transform and share a resource, some provide this right unconditionally, others reserve the rights of commercial exploitation, and some require users to put all derivative works under the same licence in order to preserve the freedoms for subsequent users. The latter are called “copyleft” licences and are meant to support creative generativity and avoid private enclosures.

The GNU General Public License (GPL) is the first and most well-known copyleft licence for free software. Its creators, the computer scientist Richard Stallman and the lawyer Eben Moglen, devised the concept of “copyleft” to counter copyright: software licensed under the GPL carries the four freedoms mentioned above.
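To make the copyleft mechanism more concrete, the following minimal sketch (a hypothetical file, not taken from any real project) shows how free software projects commonly declare the GPL in a source file, combining a machine-readable SPDX licence identifier with a short human-readable notice; any derivative of such a file that is distributed to others must be released under the same terms:

    # SPDX-License-Identifier: GPL-3.0-or-later
    #
    # Hypothetical example file for illustration only. It may be redistributed
    # and/or modified under the terms of the GNU General Public License as
    # published by the Free Software Foundation, either version 3 of the
    # License, or (at your option) any later version. Because the GPL is a
    # copyleft licence, derivative works distributed to others must be
    # released under the same terms, preserving the four freedoms for
    # downstream users.

    def greet() -> str:
        """Trivial placeholder standing in for the actual licensed code."""
        return "Hello, commons!"

    if __name__ == "__main__":
        print(greet())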

Creative Commons (CC) licences are the most prominent licensing scheme transposing this model to non-software works such as text, music, images and videos. Creative Commons licences are private governance tools (Elkin-Koren, 2005) to manage the bundle of rights granted by copyright to authors, such as the rights of reproduction, commercial exploitation, modification, exclusion and alienation (Dulong de Rosnay, 2016). While all require attribution and allow for the non-commercial sharing of works, not all of them allow for modification of works, and only a couple include a copyleft (“share alike”) clause. One variant, called CC0, allows one to voluntarily dedicate a work or a database to the public domain, renouncing copyright as far as legally possible. Another CC instrument, the Public Domain Mark, allows expert institutions to identify works which are already in the public domain, such as cultural heritage.
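As an illustration of how these licence elements combine, the short sketch below (the helper function is hypothetical; the element codes and URL pattern follow Creative Commons’ published naming scheme) assembles the canonical licence URL from the options a licensor selects. The “share alike” element is the copyleft-style clause, and it cannot be combined with “no derivatives”, since a licence that forbids modification has no derivative works to relicense:

    # Illustrative sketch: map chosen Creative Commons licence elements to
    # the canonical licence URL. Attribution ("by") is part of all six
    # standard CC licences; "sa" (share alike) is the copyleft-style clause.

    CC_VERSION = "4.0"  # current CC licence suite

    def cc_license_url(non_commercial: bool = False,
                       no_derivatives: bool = False,
                       share_alike: bool = False) -> str:
        """Build the creativecommons.org URL for the chosen licence elements."""
        if no_derivatives and share_alike:
            raise ValueError("'no derivatives' and 'share alike' cannot be combined")
        elements = ["by"]
        if non_commercial:
            elements.append("nc")
        if no_derivatives:
            elements.append("nd")
        if share_alike:
            elements.append("sa")
        return f"https://creativecommons.org/licenses/{'-'.join(elements)}/{CC_VERSION}/"

    # Example: a copyleft ("share alike") licence that allows commercial reuse.
    print(cc_license_url(share_alike=True))  # https://creativecommons.org/licenses/by-sa/4.0/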

While the issue of adapting an individual legal culture of property to community rights has been solved by copyleft and open licensing options, some other legal and governance questions have not been addressed by licensing instruments (Elkin-Koren, 2006; Chen, 2009). The wish to accommodate different models of openness and national legal frameworks prompted the development of many different options, leading to legal issues. Some licensing options make different digital commons incompatible with each other (Katz, 2005). More problematic for the legal sustainability of the digital commons, the issue of legal responsibility in case of copyright violation has been left out of the scope of open licensing, which could create substantial complications for the sustainability of the digital commons and deter institutions from reusing them in their own works. Indeed, if works are distributed without liability on the part of the original licensor and may contain copyright infringement, institutions could refrain from using such works in order to avoid legal risks. While the digital commons recognise collaborative authorship modes, as presented in the next section, and support a vision of the incremental building of a collective, shared culture based on public domain works as cultural commons, their legal instruments are still based on liberal legal culture and fail to acknowledge the contribution and appropriation of non-Western, indigenous cultures and works of folklore (Chen, 2011) into global commons.

Authorship

The alternative legal framework is complemented by a transformation in the notions and practices of creativity. Conventionally, liberal theory conceived creativity as the capacity of the individual, exercised in isolation by an unusually gifted person, the (white male) genius (Woodmansee, 1984). Many cultural tropes, from the writer struggling with the empty page, to the artist secluded in her atelier, to the inventor with his personal “eureka” moment, reflect and popularise this notion. This model of the creative process underlies copyright and justifies attributing a creative work to a single person and affording him (and only much later, her) sole ownership of the work, which is seen as an “original”, that is, as something new, a beginning without precedent. While this notion has long dominated the cultural field and the public imagination, for complex knowledge-intensive goods it was never seen as adequate. In 1942, Robert K. Merton (1973, p. 273) defined “communism”, understood as “common ownership of goods [as] a[n] … integral element of the scientific ethos”, because “the substantive findings of science are a product of social collaboration and are assigned to the community”.

Since the late 1960s, postmodern literary theories, using notions such as intertextuality, started to question ideas of individual authorship and reveal the collective dimension of literary work (Woodmansee, 1992). While these theories remained confined to relatively specialised audiences for a long time, they started to resonate with the experience within digital networks (Turkle, 1995), where collaboration and transformation of third party works were technically supported and culturally accepted. The free software movement started out as a cultural revolt in which the encroachment of intellectual property was seen as threatening long-held values of community and cooperation (Stallman, 1985). Within networked culture more implicitly and the commons more explicitly, creativity is understood less as the faculty of an individual genius, and more as a balance between individual contribution and collective enablement (Stalder, 2018). This points to a more comprehensive transformation of subjectivity, away from standard liberal notions starting from, and centering around, the individual—separate from his or her environment—to different configurations, which some have started to call “networked individualism”, through which the collective (the network) and the singular (the individual) are co-constituted (Nyiri, 2005; Rainie & Wellman, 2012). All of this rubs against notions of individual authorship which are deeply rooted in Western countries, both legally and culturally. It indicates the depth of the challenge that the commons poses to the framework of Western modernity.

Economics, new models of production

In “The Tragedy of the Commons”, Hardin (1968) famously claimed that resources not managed as private (or state) property were subject to overuse by individual, profit-maximising economic actors. Ostrom (1990) successfully refuted this idea by showing that commons as economic institutions provide successful, long-term alternatives to both market- and state-oriented approaches to the (re)production of resources. Since these institutions emerge from local self-organisation, their variability is high, and Ostrom deliberately never tried to distill a universal “model” from them, but focused on a number of “design principles” (McGinnis & Ostrom, 1992), ranging from the definition of boundaries around the resource and the community and the design of conflict resolution mechanisms, to the recognition of the rights of the commons by external actors; these are discussed as governance rules in the next section.

Even Hardin (1994) eventually acknowledged that the tragedy of the commons only applies to “unmanaged commons”, by which he meant simple open access resources with no use constraints. Of course, such resources are not commons, because it is shared management that makes a resource a commons. For the same reason, public goods are distinct from commons: they are usually defined as goods that are non-excludable and non-rivalrous, such as a lighthouse (Coase, 1974) or national defence. The resulting free-rider problem makes them unattractive for market players, and hence their provision is often regarded as a function of the state.

It was Benkler (2002, p. 369) who, focusing on the digital commons, postulated the emergence of a new mode of production, which he called commons-based peer production: “Its central characteristic is that groups of individuals successfully collaborate on large-scale projects following a diverse cluster of motivational drives and social signals, rather than either market prices or managerial commands”. Benkler noted that this new mode of production emerged at the centre of the most advanced knowledge industry (e.g., software development), largely due to its superior way of assigning human resources (self-selection), which allows it to draw on motivations and skills that can neither be organised by top-down management nor captured by price signals.

Initially, commons-based peer production was widely seen as a fundamental alternative to the market (Benkler, 2002). Today, only the more radical approaches still pursue this line of inquiry (e.g., Vercellone, 2015; Morozov, 2019). For more mainstream economists, the relationship between commons-based and market-oriented production has become more central, as many commercial firms both contribute to the digital commons and use common-pool resources in their commercial strategies (Sadi et al., 2015). The aim is to develop “open strategies” that integrate commons-based production and various kinds of crowd-based inputs into company management (Birkinshaw, 2017).

However, without strong mechanisms to govern appropriation from the digital commons, it is not certain that large companies benefiting from it will contribute back. The sharing economy, while initially also working with notions of non-market exchange (for example, couch-surfing used to be a non-commercial community platform (Schöpf, 2015)), has been overtaken by capitalist approaches that redefine “sharing” as the short-term rental of granular resources (such as a room in an apartment, a taxi ride and so on) and has lost all relation to the commons (Slee, 2015).

Governance of digital commons

Governance issues were at the heart of the Ostromian perspective on the commons and the aforementioned eight design principles (Ostrom, 1990; McGinnis & Ostrom, 1992) are, in essence, challenges of governance that need to be solved by communities for a commons to survive as a social institution.

A body of scholarship on the digital commons aims at adapting the Ostromian governance design principles to the specifics of the intangible, online knowledge economy (Hess & Ostrom, 2007; Schweik & English, 2012; Dulong de Rosnay & Le Crosnier, 2012; Frischmann, Madison, & Strandburg, 2014; Bollier & Helfrich, 2015). Applying the same methodology as for traditional commons, these authors rely on case studies of communities, observing specificities that might serve as lessons for better governing other digital commons.

In the context of the digital commons, community boundaries are constituted not only by producers, but also by potential users. A licence based on opening copyright defines and allocates rights of access and reuse, the digital equivalent of the inclusion in and exclusion from the community that characterises tangible commons. The challenge for the digital commons is not the exhaustion of finite resources, but rather ensuring the availability of the digital resource for all to use while avoiding its exclusive appropriation. Governance of open source software as commons includes the definition of legal constraints (O’Mahony, 2003). Communities can also develop guidelines and procedures to fight against pollution or to protect information quality, as in the case of the Wikimedia community acting against disinformation (Saez-Trumper, 2019).

But communities, even when they have explicit boundaries, are not legal entities. Surprisingly, in Ostromian institutional analysis, this plays almost no role. Yet, to overcome this limitation many digital commons have created their own foundations as “boundary organisations” (O’Mahony & Bechky, 2008) capable of performing legal, financial, technological and governance services that the community itself cannot provide. Foundations play an important role in the governance of the commons, often leading to an explicit division of labour between community volunteers and foundation staff with a professionalisation of certain functions (Fuster Morell, 2011). Today, most large digital commons are governed by a hybrid community-foundation structure.

As for participation and social norms, digital commons communities should also focus on fostering the participation of volunteers in the various aspects of producing, and caring for, the digital commons resources. If the tasks are too difficult, or if the culture is not inclusive, the project will be neither sustainable nor representative of a diversity of points of view.

Digital commons projects, particularly the usually highly structured and often explicitly hierarchical ones, give greater centrality to some people who are seen as contributing more significantly to the shared resource than others (O’Neil, 2009). This can go as far as awarding the core figure (often the founder) the status of “benevolent dictator”. The dictatorial powers are, in fact, sharply limited, because the communities are voluntary and there is no exclusivity over the digital resource. Leadership and decision-making can be more or less open (De Noni, Ganzaroli, & Orsi, 2011). Centralisation around one individual endangers the project in case of departure of the core contributor. Moreover, even within stable and well-functioning digital commons, significant governance challenges remain, such as the potentially inefficient nature of large-scale participatory processes (Jemielniak, 2016) and the persistent underrepresentation of women and people of colour (Dunbar-Hester, 2020).

Decision-making should be participatory, and digital platforms can help digital commons communities to coordinate, trace debates and reach consensus. Collective democratic participation in the drafting of the rules is deemed to ensure greater respect for those rules, since the community is more likely to follow rules it has designed itself. For digital commons, framing shared values, such as supporting a non-commercial culture, can help to develop those rules. There are different political understandings of the commons and, as such, there are different approaches to how digital commons should be governed as “open” resources and to the degree of openness to choose.

This is also visible in the open government data sector, where the notion of openness (Tkacz, 2012) will vary according to different trends “across the political spectrum” (Bates, 2012, n.p.). This political choice will influence the governance of data and statistics generated or collected by public institutions, by citizens using those public services, or by public procurement services and applications developed by commercial actors.

On the one hand, according to the more libertarian wing of the open data movement, public sector information must be placed under licences which do not impose any legal friction or restrictions on downstream reusers, even commercial ones, in order to foster innovation and growth (Gray, 2014). Open government data will then be made available under conditions as close as possible to the public domain. On the other hand, the more socially-oriented models of the commons (Broumas, 2017) propose to retain some rights for the public by favouring copyleft licensing and to develop other policies that preserve the digital commons and avoid private appropriation and commodification without a return to the community, the state or the general public.

Depending on their political values, projects, citizens and municipalities will make different decisions when governing, for instance, user data about public transportation as a digital commons. On the one hand, the data can be made available without any legal restriction or prescription, to support all possible downstream innovation and to let the market develop numerous apps, including by private companies which might not release them under a free licence. On the other hand, municipalities and citizens can choose not to release public transport data into the public domain, or to mandate, through public procurement, that transport apps reusing this data release their product as free software. The degree of reciprocity of the appropriability regime has also been observed as a feature of open source software communities (De Noni, Ganzaroli, & Orsi, 2011).

Within the communities governing both tangible and digital commons, not all tasks are dedicated to the production and use of the resource. Besides activities of caring, community-building, communication and governance, and similarly to tangible commons, monitoring respect for shared governance rules can translate into roles of quality control, accountability, moderation or editing. Where physical meetings are limited, other arenas to develop trust and to resolve conflicts have emerged, such as chatrooms and online conferences.

And since rules need to be recognised by higher-level authorities and to be compatible with the applicable legal frameworks, legal mechanisms are needed not only to recognise the legitimacy of the commons, as Ostrom showed, but also to defend the digital commons from enclosure or appropriation. As presented in the next section, current controversies around the digital commons concern the capture of public investment in open data, and of personal data, by huge corporations that rely on these openly available resources in order to commodify them.

In order to sustain their activities, digital commons projects can rely on each other: as Ostrom identified, smaller, local communities need to be embedded in, and interact with, broader networks. A fruitful ecology of interoperable projects can collaborate, reuse each other’s components and rely on one another, pass the threshold of local micro-initiatives, and perhaps develop joint advocacy activities so that legal regulation recognises the needs of the digital commons. But advocacy led by digital commoners goes beyond purely digital stakes, and communities are nowadays fighting larger struggles than the expansion of intellectual property.

Since 2000, advocates for access to knowledge (Kapczynski & Krikorian, 2010), together with supporters of the digital commons (a network of civil society organisations such as Free Software, Wikimedia, Creative Commons and Communia), have been fighting for regulation and social institutions that balance private and public interests, and that preserve and enlarge the digital commons. In the 2010s, digital rights activists (Postigo, 2012) achieved some important successes, including the defeat of the ACTA (Anti-Counterfeiting Trade Agreement) trade agreement, through actions such as the blackout of the most famous digital commons, Wikipedia, as a means of protest (Powell, 2016). This blackout strategy of hiding portions of Wikipedia, similar to a strike, was later used again in support of freedom of panorama, a right of the public threatened by the exclusive rights that exist in some countries over the publication of photographs of buildings in the (physical) public domain (Dulong de Rosnay & Langlais, 2017).

Moving forward in a world faced with climate emergency, extreme right-wing politics, systemic inequalities, and a pandemic, we are convinced that the digital commons needs to engage with larger power imbalances and with social movements, such as the green new deal, which crosses environmental and technology battles, in order to develop more sustainable alternatives to capitalism.

Emerging issues

Beyond creative and functional works of authorship, the model of the digital commons is expanding to more fields, with projects trying to apply the holistic framework to new domains. The governance of cities and of personal data exemplifies more recent instances of digital commons.

Urban democratic participation and commons

The commons is increasingly investigated as a political institution and as a way to expand democratic participation beyond the framework of representative democracy. This builds on one of the central aspects of the commons, self-governance, whereby commoners come together through various online and offline fora in order to define rules of collaboration and ways to resolve conflicts based on these rules (De Angelis, 2017). Stalder (2018) sees the renewal of democracy through the commons as a way of countering the crises of the institutions of liberal democracy and their tendency to be absorbed into post-democratic frameworks. In 2017, the city of Barcelona started to implement a new commons-oriented participatory platform, decidim (Stark, 2017), while progressive governments from Iceland (Landemore, 2014) to Taiwan (Horton, 2018) used crowd-sourcing efforts to drive policy development. As an alternative to smart cities based on central governance and surveillance capitalism, commoners, hackers, cryptographers and sociologists provide commoning tools for citizens to participate online (the D-Cent and Decode EU-funded projects), based on self-governance, decentralised, bottom-up values and data commons (the last type of digital commons we analyse below).

Beyond politics in a narrower sense, the commons is seen as an enabling condition for increasing participation, flexibility and collaboration throughout society, in areas ranging from education (“open educational resources”) to various kinds of civic engagement (“civic media”) (Gordon & Mihailidis, 2016).

Data commons and personal data

Control over data has emerged as an increasingly central techno-political issue. The notion of the data commons has been proposed against the increasing centralisation and commodification of data in the hands of a small number of companies (Morozov, 2015). This notion recognises that value is generated by pooling data (through data mining, the internet of things, algorithmic decision-making and artificial intelligence) and argues that the underlying data can be governed as a commons rather than be handed over to the state and/or to surveillance capitalism. The concept of data commons tries to counter the tendency towards the centralisation of economic and political power that comes with the currently dominant model of amassing these pools of data as privately held assets (Goldstein et al., 2018). The notion is, however, still underdeveloped and lacks a conceptual and legal framework, as the data commons is sometimes viewed as simply a collection of “open data” resources. It is therefore necessary to differentiate more clearly between open data (available to all) and data commons, where the modes of access to the data can be segmented between members of the commons and outsiders. The constitution of data commons also needs to overcome the apparent contradiction between personal data and property, and between privacy and open access: a personal data commons would not lead to sharing personal information, but to governing its reuse according to the values of the digital commons. The Data Commons Manifesto (2019) is an attempt to formulate such a view.
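
To illustrate the distinction between open data and a data commons with segmented access, the following toy sketch (entirely hypothetical and not drawn from any of the projects cited here; the class, names and access rules are invented for illustration) shows how a community might grant its members record-level access while outsiders only receive aggregates, so that personal information is not shared as such but its reuse is governed by the commons.

```python
# Purely illustrative sketch of segmented access to a data commons,
# as opposed to open data available to all. All names and rules are
# hypothetical.

from statistics import mean

class DataCommons:
    def __init__(self, records, members):
        self._records = records          # e.g., individual energy readings
        self._members = set(members)     # the community governing the commons

    def read(self, requester: str):
        if requester in self._members:
            # Members get record-level access under the community's rules.
            return list(self._records)
        # Outsiders only receive an aggregate, so personal data is not shared.
        return {"average": mean(self._records), "count": len(self._records)}

commons = DataCommons(records=[3.2, 4.1, 2.8], members={"alice", "bob"})
print(commons.read("alice"))    # full records for a member
print(commons.read("vendor"))   # aggregate only for an outsider
```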

The notion of the data commons, in its most ambitious political form, is part of a larger quest for what has been called “technological sovereignty”. The sovereign here is not the isolated individual, but the city as a collective, that is the community of citizens who should be able to exercise “full control and autonomy of their Information and Communications Technologies (ICTs), including service infrastructures, websites, applications and data, in compliance with and with the support of laws that protect the interests of municipalities and their citizens” (Bria & Bain, 2019). Data commons and commons-based practices applied to personal data can be enforced by democratic platforms mentioned in the previous section, but also by cooperative platforms and open science citizen projects.

Outlook: from the digital to social commons

As the social and ecological crises escalate, we no longer see the digital commons as just a concept essential to the debates over copyright reform for the 21st-century digital society, over free software or over open data. Rather, it challenges the very character of contemporary societies. Its inclusive model of sharing and participation outlines a comprehensive alternative to surveillance capitalism (Doctorow, 2020; Zuboff, 2019) and to the digital colonisation of social life (Couldry & Mejias, 2019).

This also means that the (digital) commons cannot succeed on their own, but are part of a comprehensive vision of a participatory, democratic and ecological society. This requires the transformation of business models, infrastructure, governance mechanisms and social attitudes, for example as part of a “green new deal” (Rifkin, 2019). These are all necessary to rebalance the relation between individual and collective rights, in which both the singular and the collective need to be understood as what Donna Haraway (2016) calls “entities-in-assemblages”. The digital commons offers a set of ideas, practices and experiences that can inform other areas that might not be thought of as digital, but that are increasingly based on digital infrastructures allowing new commons-based institutions to function effectively.

References

Bates, J. (2012). ‘This is what modern deregulation looks like’: Co-optation and contestation in the shaping of the UK’s Open Government Data Initiative. Journal of Community Informatics, 8(2). https://doi.org/10.15353/joci.v8i2.3038

Benkler, Y. (2002). Coase’s Penguin, or, Linux and The Nature of the Firm. The Yale Law Journal, 112(3), 369. https://doi.org/10.2307/1562247

Benkler, Y. (2006). The Wealth of Networks: How Social Production Transforms Markets and Freedom. Yale University Press.

Berry, D. M. (2008). Copy, rip, burn: The politics of copyleft and open source. Pluto Press.

Birkinbine, B. J. (2018). Commons Praxis: Toward a Critical Political Economy of the Digital Commons. TripleC: Communication, Capitalism & Critique, 16(1), 290–305. https://doi.org/10.31269/triplec.v16i1.929

Birkinshaw, J. (2017). Reflections on open strategy. Long Range Planning, 50(3), 423–426. https://doi.org/10.1016/j.lrp.2016.11.004

Bollier, D. (2008). Viral spiral: How the commoners built a digital republic of their own. New Press.

Bollier, D., & Helfrich, S. (Eds.). (2012). The wealth of the commons: A world beyond market and state. Levellers Press.

Bollier, D., & Helfrich, S. (Eds.). (2015). Patterns of commoning. Commons Strategies Group; Off the Common Books.

Borch, C., & Kornberger, M. (Eds.). (2015). Urban Commons: Rethinking the City. Routledge. https://doi.org/10.4324/9781315780597

Boyle, J. (1997). A Politics of Intellectual Property: Environmentalism for the Net? Duke Law Journal, 47(1), 87–116. https://doi.org/10.2307/1372861

Boyle, J. (2008). The public domain: Enclosing the commons of the mind. Yale University Press.

Broumas, A. (2017). Social Democratic and Critical Theories of the Intellectual Commons: A Critical Analysis. TripleC: Communication, Capitalism & Critique, 15(1), 100–126. https://doi.org/10.31269/triplec.v15i1.783

Capra, F., & Mattei, U. (2015). The ecology of law: Toward a legal system in tune with nature and community (First). Berrett-Koehler Publishers.

Chen, S.-L. (2009). To surpass or to conform – what are public licenses for? Journal of Law, Technology & Policy, 2009(1), 107–139.

Chen, S.-L. (2011). Collaborative Authorship: From Folklore to the Wikborg. Journal of Law, Technology and Policy, 2011(1), 131–167.

Coase, R. H. (1974). The Lighthouse in Economics. The Journal of Law and Economics, 17(2), 357–376. https://doi.org/10.1086/466796

Couldry, N., & Mejias, U. A. (2019). The costs of connection: How data is colonizing human life and appropriating it for capitalism. Stanford University Press.

De Angelis, M. (2017). Omnia Sunt Communia: On the commons and the transformation to postcapitalism. Zed Books.

Dobusch, L., & Kapeller, J. (2018). Open strategy-making with crowds and communities: Comparing Wikimedia and Creative Commons. Long Range Planning, 51(4), 561–579. https://doi.org/10.1016/j.lrp.2017.08.005

Doctorow, C. (2020, August 26). How to Destroy ‘Surveillance Capitalism’ [Blog post]. OneZero. https://onezero.medium.com/how-to-destroy-surveillance-capitalism-8135e6744d59

Dulong de Rosnay, M. (2011). Access to digital collections of public domain works: Enclosure of the commons managed by libraries and museums. Proceedings of the 13th Biennial Conference of the International Association for the Study of the Commons (IASC). https://halshs.archives-ouvertes.fr/halshs-00671628

Dulong de Rosnay, M. (2016). Peer to party: Occupy the law. First Monday, 21(12). https://doi.org/10.5210/fm.v21i12.7117

Dulong de Rosnay, M., & De Martin, J. C. (Eds.). (2012). The Digital Public Domain: Foundations for an Open Culture. Open Book Publishers.

Dulong de Rosnay, M., & Langlais, P.-C. (2017). Public artworks and the freedom of panorama controversy: A case of Wikimedia influence. Internet Policy Review, 6(1). https://doi.org/10.14763/2017.1.447

Dulong de Rosnay, M., & Le Crosnier, H. (2012, September 12). An Introduction to the Digital Commons: From Common-Pool Resources to Community Governance. Building Institutions for Sustainable Scientific, Cultural and Genetic Resources Commons. https://halshs.archives-ouvertes.fr/halshs-00736920

Dunbar-Hester, C. (2020). Hacking diversity: The politics of inclusion in open technology cultures. Princeton University Press.

Dusollier, S. (2007). Sharing Access to Intellectual Property through Private Ordering. Chicago Kent Law Review, 82, 1391–1435.

Dusollier, S. (2010). Scoping Study On Copyright And Related Rights And The Public Domain (Study CDIP/4/3/REV./STUDY/INF/1). World Intellectual Property Organization. https://www.wipo.int/ip-development/en/agenda/news/2010/news_0007.html

Dutfield, G., & Suthersanen, U. (2004). The innovation dilemma: Intellectual property and the historical legacy of cumulative creativity. Intellectual Property Quarterly, 4, 370 – 380.

Elkin-Koren, N. (2005). What Contracts Cannot Do: The Limits of Private Ordering in Facilitating a Creative Commons. Fordham Law Review, 74(2), 375–422. https://ir.lawnet.fordham.edu/flr/vol74/iss2/3

Elkin-Koren, N. (2006). Creative Commons: A Skeptical View of a Worthy Pursuit. In P. B. Hugenholtz & L. Guibault (Eds.), The Future Of The Public Domain. Kluwer Law International.

Frischmann, B. M., Madison, M. J., & Strandburg, K. J. (Eds.). (2014). Governing knowledge commons. Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199972036.001.0001

Fuchs, C., & Sandoval, M. (2013). The Diamond Model of Open Access Publishing: Why Policy Makers, Scholars, Universities, Libraries, Labour Unions and the Publishing World Need to Take Non-Commercial, Non-Profit Open Access Serious. TripleC: Communication, Capitalism & Critique, 11(2), 428–443. https://doi.org/10.31269/triplec.v11i2.502

Fuster Morell, M. (2010). Governance of online creation communities. Provision of infrastructure for the building of digital commons [PhD Thesis]. European University Institute.

Fuster Morell, M. (2011). The Wikimedia Foundation and the Governance of Wikipedia’s Infrastructure: Historical Trajectories and its Hybrid Character. In G. Lovink & N. Tkacz (Eds.), Critical point of view: A Wikipedia reader (pp. 325–341). Institute of Network Cultures.

Gordon, E., & Mihailidis, P. (Eds.). (2016). Civic media: Technology, design, practice. MIT Press.

Gray, J. (2014, September). Towards a Genealogy of Open Data. Conference of the European Consortium for Political Research, Glasgow. https://doi.org/10.2139/ssrn.2605828

Guadamuz, A. (2006). Open science: Open source licences for scientific research. North Carolina Journal of Law and Technology, 7(2), 321–366. https://osf.io/9xmsk/

Haraway, D. J. (2016). Staying with the trouble: Making kin in the Chthulucene. Duke University Press.

Hardin, G. (1968). The Tragedy of the Commons. Science, 162, 1243–1248. https://doi.org/10.1126/science.162.3859.1243

Hardin, G. (1994). The tragedy of the unmanaged commons. Trends in Ecology & Evolution, 9(5), 199. https://doi.org/10.1016/0169-5347(94)90097-3

Hess, C., & Ostrom, E. (2011). Understanding Knowledge as a Commons: From Theory to Practice. MIT Press.

Hettinger, E. C. (1989). Justifying Intellectual Property. Philosophy & Public Affairs, 18(1), 31–52. https://www.jstor.org/stable/2265190

Horton, C. (2018, August 21). The simple but ingenious system Taiwan uses to crowdsource its laws. MIT Technology Review. https://www.technologyreview.com/s/611816/the-simple-but-ingenious-system-taiwan-uses-to-crowdsource-its-laws/

Jemielniak, D. (2016). Wikimedia movement governance: The limits of a-hierarchical organization. Journal of Organizational Change Management, 29(3), 361–378. https://doi.org/10.1108/JOCM-07-2013-0138

Kapczynski, A., & Krikorian, G. (Eds.). (2010). Access to knowledge in the age of intellectual property. Zone Books.

Katz, Z. (2005). Pitfalls of open licensing: An analysis of Creative Commons licensing. IDEA: The Intellectual Property Law Review, 46(3), 391–413.

Landemore, H. (2014). Inclusive Constitution-Making: The Icelandic Experiment. Journal of Political Philosophy, 23(2). https://doi.org/10.1111/jopp.12032

Larivière, V., & Sugimoto, C. R. (2018). Do authors comply when funders enforce open access to research? Nature, 562(7728), 483–486. https://doi.org/10.1038/d41586-018-07101-w

Lessig, L. (2001). The Future of Ideas: The Fate of the Commons in a Connected World. Random House.

Linebaugh, P. (2008). The Magna Carta manifesto: Liberties and commons for all. University of California Press.

Mazzone, J. (2011). Copyfraud and other abuses of intellectual property law. Stanford University Press.

McGinnis, M. D., & Ostrom, E. (1992). Design Principles for Local and Global Commons. Presented at the Linking Local and Global Commons. Harvard Center for International Affairs. http://hdl.handle.net/10535/5460

Merton, R. K. (1973). The Normative Structure of Science. In The Sociology of Science: Theoretical and Empirical Investigations (pp. 267–278). University of Chicago Press.

Morozov, E. (2015, January). Socialize the Data Centres! New Left Review, 91. https://newleftreview.org/issues/II91/articles/evgeny-morozov-socialize-the-data-centres

Morozov, E. (2019, March). Digital Socialism? New Left Review, 116/117, 33–67. https://newleftreview.org/issues/II116/articles/evgeny-morozov-digital-socialism

Nanchen, E., & Borgeat, M. (2015). Bisse der Savièse: A Journey Through Time to the Irrigation Systems in Valais, Switzerland. In D. Bollier & S. Helfrich (Eds.), Patterns of Commoning (pp. 61–64). The Commons Strategy Group.

Normand, S. (2018). Is Diamond Open Access the Future of Open Access? The IJournal: Graduate Student Journal of the Faculty of Information, 3(2). https://theijournal.ca/index.php/ijournal/article/view/29482

Nyíri, K. (2005, May 27–28). The Networked Mind [Talk]. The Mediated Mind – Rethinking Representation, The London Knowledge Lab. http://www.hunfi.hu/nyiri/Nyiri_Networked_Mind_London_2005.pdf

O’Mahony, S. (2003). Guarding the commons: How community managed software projects protect their work. Research Policy, 32(7), 1179–1198. https://doi.org/10.1016/S0048-7333(03)00048-9

O’Mahony, S., & Bechky, B. A. (2008). Boundary Organizations: Enabling Collaboration among Unexpected Allies. Administrative Science Quarterly, 53(3), 422–459. https://doi.org/10.2189/asqu.53.3.422

O’Neil, M. (2009). Cyberchiefs: Autonomy and authority in online tribes. Pluto Press.

Ostrom, E. (1990). Governing the Commons. Cambridge University Press.

Postigo, H. (2012). The digital rights movement: The role of technology in subverting digital copyright. MIT Press.

Powell, A. B. (2016). Network exceptionalism: Online action, discourse and the opposition to SOPA and ACTA. Information, Communication & Society, 19(2), 249–263. https://doi.org/10.1080/1369118X.2015.1061572

Rainie, H., & Wellman, B. (2012). Networked: The new social operating system. MIT Press.

Rifkin, J. (2014). Zero Marginal Cost Society: The Rise of the Collaborative Commons and the End of Capitalism. St. Martin’s Press.

Rifkin, J. (2019). The Green New Deal: Why the fossil fuel civilization will collapse by 2028, and the bold economic plan to save life on Earth. St. Martin’s Press.

Royal Society Science Policy Centre. (2012). Science as an open enterprise. The Royal Society. https://royalsociety.org/topics-policy/projects/science-public-enterprise/report/

Sadi, M. H., Dai, J., & Yu, E. (2015). Designing Software Ecosystems: How to Develop Sustainable Collaborations?: Scenarios from Apple iOS and Google Android. In A. Persson & J. Stirna (Eds.), Advanced Information Systems Engineering Workshops (Vol. 215, pp. 161–173).

Saez-Trumper, D. (2019). Online Disinformation and the Role of Wikipedia. ArXiv. https://arxiv.org/abs/1910.12596

Schöpf, S. (2015). The Commodification of the Couch: A Dialectical Analysis of Hospitality Exchange Platforms. TripleC: Communication, Capitalism & Critique, 13(1). https://doi.org/10.31269/triplec.v13i1.480

Schweik, C. M., & English, R. C. (2012). Internet success: A study of open-source software commons. MIT Press.

Slee, T. (2015). What’s yours is mine: Against the sharing economy. OR Books.

Söderberg, J. (2002). Copyleft vs. Copyright: A Marxist Critique. First Monday, 7(3). https://doi.org/10.5210/fm.v7i3.938

Stalder, F. (2010). Digital Commons. The Human Economy: A Citizen’s Guide. Polity Press.

Stalder, F. (2018). The digital condition. Polity Press.

Stallman, R. (1985). The GNU Manifesto. GNU Project. http://www.gnu.org/gnu/manifesto.html

Stallman, R. (1996). What Is Free Software? GNU Project. https://www.gnu.org/philosophy/free-sw.html.en

Suber, P. (2016). Knowledge unbound: Selected writings on open access, 2002-2011. MIT Press.

The International Union for Conservation of Nature and Natural Resources, United Nations Environment Programme, World Wildlife Fund, Food and Agriculture Organization of the United Nations, & UNESCO (Eds.). (1980). World conservation strategy: Living resource conservation for sustainable development. IUCN.

Tkacz, N. (2012). From open source to open government: A critique of open politics. Ephemera: Theory and Politics in Organization, 12(4), 386–405. http://wrap.warwick.ac.uk/53295/

Turkle, S. (1995). Life on the Screen. Identity in the Age of the Internet. Simon & Schuster.

Vercellone, C. (2015). From the Crisis to the ’Welfare of the Common’ as a New Mode of Production. Theory, Culture & Society, 32(7–8), 85–99. https://doi.org/10.1177/0263276415597770

Vines, T. H., Albert, A. Y. K., Andrew, R. L., Débarre, F., Bock, D. G., Franklin, M. T., Gilbert, K. J., Moore, J.-S., Renaut, S., & Rennison, D. J. (2014). The Availability of Research Data Declines Rapidly with Article Age. Current Biology, 24(1), 94–97. https://doi.org/10.1016/j.cub.2013.11.014

Weber, S. (2004). The Success of Open Source. Harvard University Press.

Woodmansee, M. (1984). The Genius and the Copyright: Economic and Legal Conditions of the Emergence of the ‘Author’. Eighteenth-Century Studies, 17(4), 425. https://doi.org/10.2307/2738129

Woodmansee, M. (1992). On the Author Effect: Recovering Collectivity. Cardozo Arts & Entertainment Law Journal, 10, 279–292. https://scholarlycommons.law.case.edu/faculty_publications/283

Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. Profile Books.

Smart technologies

This article belongs to Concepts of the digital society, a special section of Internet Policy Review guest-edited by Christian Katzenbach and Thomas Christian Bächle.

Introduction: the rise of smart ‘anywares’

A wide range of products and services is currently discussed under the heading of ‘smart’, loosely overlapping with the technologies of (cloud) robotics, the Internet of Things (IoT) and, more generally, with hardwired applications of AI. 1 More concretely, we can think of semi-autonomous vehicles (smart cars), cyberphysical infrastructures such as homes animated with sensor technologies that enable surreptitious adaptation of temperature, light, and all kinds of services (smart homes), or energy networks that afford real-time demand-side energy supply systems (smart grids), and myriad online architectures that require responsive and adaptive interaction, such as collaborative platforms (smart office) and e-learning (smart education). Most of these systems are hybrids of online and offline, as exemplified by crime mapping (smart policing), social security fraud detection (smart forensics), the sharing economy (smart services), remote healthcare (smart healthcare) and initiatives in the realm of computational law that employ the alluring notion of smart law to sell their services. We seem to have arrived in the era of smart everyware (Greenfield, 2006), confronted with myriad smart anywares.

In this article I first ground the concept of smartness in the history of artificial intelligence and cybernetics (sections 1 and 2), highlighting the different strands in the research domains that enabled the development of smart technologies. Though this will bring out the smartness of those selling the idea of computational systems as cognitive engines, there is more to smart technologies than some social constructivists may wish to believe. 2 Grounded in postphenomenological philosophy and embodied enaction, I will develop a layered approach to the kind of machine agency involved in smart technologies (section 3), which allows me to reject the confounding of human and machine agency while nevertheless acknowledging that their computational inferences are used to anticipate our behaviours, thus introducing agential affordances into our environments. Coupled with nudge theory, such affordances have resulted in attempts to manipulate consumers (behavioural advertising, credit rating) and in further manipulation of a political realm that has been reconfigured as a marketplace under the influence of neoliberal economics (section 4). In the final part I address the need to confront the affordances of smart environments, taking their machine agency seriously without buying into either marketing speak or social constructivist relativism (section 5).

1. Smart-arse technologies: avoiding the AI debate

Why would one call a technology smart? The Cambridge English Dictionary (2020) tells us that smart is about ‘having a clean, tidy, and stylish appearance (mainly in the UK)’, and/or refers to someone who is ‘intelligent, or able to think quickly or intelligently in difficult situations (mainly in the US)’. The geographical difference raises some interesting questions about cultural inclinations, but that may be merely a smart-arse move on my side, taking note that a smart-arse is defined in the same dictionary as ‘someone who is always trying to seem more clever than other people in a way that is annoying’. The answer must be situated in the computer-related adjective, which is defined as ‘[a] smart machine, weapon, etc. uses computers to make it work so that it is able to act in an independent way: Until the advent of smart weapons, repeated attacks were needed to ensure the destruction of targets’ (Cambridge Dictionary 2020). This must be what my article here aims to describe: technologies with a mindless mind of their own, ‘getting on with things’ such that we have time to do more important things. Let’s not forget, however, that the term normally refers to something aesthetic (perhaps to sleek interfaces that intuitively guide our usage based on a clean, tidy and stylish interface) and that it otherwise refers to a clever type of acuity, or intelligence. Acting in an independent way comes close, though one wonders in what sense smart weapons act independently, considering the fact that they are designed by humans with specific computational skills in mind and are also used by humans to destroy very precise targets. The independence seems to only refer to how these smart technologies achieve the goals set for them, even if one could say that they may be able to develop subgoals if they ‘think’ it will help to achieve our goals (Boulanin & Verbruggen, 2017). One could ask who the smart-arse is here: the human or the machine.

McCarthy, the AI pioneer who coined the term artificial intelligence (AI) in the famous 1956 Dartmouth conference (Leslie, 2019), certainly was smart. One of the other founding fathers of what we now refer to as AI, Herbert Simon, preferred to stick to the term ‘complex information processing’ (Leslie, 2019 at 32), because it supposedly better described the operations they were conceptualising. McCarthy, however, had the luminous insight that nobody would invest much in something so dry as ‘complex information processing’, whereas the mysterious idea of ‘artificial intelligence’ might trigger some real interest (Leslie, 2019 at 32) – let’s acknowledge McCarthy’s skills in the realm of public relations (PR). One of the reasons why I used the term smart technologies in one of my books was to avoid the endless debates about whether machines can be intelligent, what is meant by intelligence, and whether and, if so, how machine intelligence differs from both animal and human intelligence. Perhaps the more intriguing question is why it was not simply called machine or computational intelligence. I would actually claim that human intelligence itself is deeply artificial (Hildebrandt, 2020b; Plessner & Bernstein, 2019). Speaking of machine or computational intelligence would probably highlight the difference that McCarthy wished to ignore, as becomes clear in the (cl)aim of those early pioneers who professed that (Moor, 2006):

[t]he study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.

Against such overreach, I dare say that the advantage of the term smart technologies is first and foremost that it avoids metaphysical debates on (1) whether machines ‘think’, (2) whether it suffices to answer that question in terms of observable behaviour only, and perhaps also (3) the debate on whether intelligence requires ‘thinking’ in the first place, considering that viruses and reptiles do very well in terms of independent navigation – even though we may think that they don’t think. 3 Speaking of smart instead of AI technologies allows us to be more pragmatic about their added value, steering free from the pseudo-religious (and thereby pseudo-scientific) claims around ‘artificial intelligence’ (Natale & Ballatore, 2020).

This may, however, result in those advocating smart technologies getting away with even bolder claims about their added value, precisely because the threat of a competing intelligence is sidetracked. Adopting the term may therefore be a smart move to downplay the extent to which smart technologies are capable of reconfiguring our choice architecture. This reconfiguration is often meant to manipulate our intent, as the entanglement of nudge theory and machine learning demonstrates in the realm of behavioural targeting (for advertising, recruiting, insurance, admission to college or parole decisions). Such entanglement hides the choices made at the backend of computing systems by those who stand to gain from their employment, or those intent on serving our best interests (though preferably behind our backs, see e.g. Yeung, 2017).

This confronts us with the question of ‘smart for whom?’, which is a political question. Ignoring it will not make such backend choices neutral. Perhaps we should dub these technologies ‘smart-arse technologies’, highlighting their ability to manipulate our environment and our selves, without, however, attributing thoughts or intent to technologies that can only follow their masters’ script. Their masters may, however, actually be at a loss, due to the interacting network effects and emergent behaviours these technologies generate (known as the Goodhart effect) (Strathern, 1997).

2. Smart technologies: embracing the cybernetic approach

Being at a loss is deeply human. Machines are never at a loss, because they lack the mind for it. But machines or technologies can help humans to gain control, if they somehow get smart about how to achieve their goals (I am glad to leave ‘they’ undefined here). The aim of increased control, often achieved by developing remote control, is key to another branch of what became AI, namely cybernetics (Wiener, 1948) or operations research (Pickering, 2002). Cybernetics is focused on developing the right kind of feedback loops, such that a technology is capable of adapting its behaviour to changing circumstances, based on its perception. This fits well with the idea that perception is triggered by the need to anticipate how one’s actions will affect one’s environment, and how this will in turn affect one’s action-potential, familiar in phenomenological research and affordance theory (Kopljar, 2016; Gibson, 2014). A narrower, machinic understanding often describes cybernetics as a specific type of regulation, entailing standard-setting, behaviour monitoring and behaviour modification. This obviously accords with computational systems, which require formalisation and thrive on disambiguated instructions. One could say that these systems require formalised regulation in order to regulate their environment, even if their approach is data-driven and based on machine learning. Even in that case input must be formalised (digitised, labelled or clustered) and the patterns mined are based on mathematical hypotheses (Blockeel, 2010). Cyber is Greek for steering (Hayles, 1999 at 8), and though steering can be done in many ways, computational systems necessarily reduce the art of steering to computational ‘algorithmics’. One could surmise that the notion of smart technologies is connected with the idea of remote control over an environment, for instance in the case of hardware failure detection combined with automated interventions once failure is detected.
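
As a rough illustration of this machinic reading of cybernetics, the sketch below (a hypothetical Python example; the constants, function names and the failure scenario are all invented for illustration) implements the triplet of standard-setting, behaviour monitoring and behaviour modification as a loop that compares sensor readings against a norm and triggers an automated intervention when the norm is exceeded.

```python
# Minimal sketch of a cybernetic control loop: standard-setting,
# behaviour monitoring, behaviour modification. All names and values
# are hypothetical; this is an illustration, not a real system.

import random

STANDARD_MAX_TEMP = 70.0   # standard-setting: the norm the system enforces

def read_sensor() -> float:
    """Monitoring: stand-in for reading a hardware temperature sensor."""
    return random.uniform(50.0, 90.0)

def throttle_device() -> None:
    """Modification: stand-in for an automated intervention."""
    print("intervention: throttling device to prevent failure")

def control_loop(cycles: int = 5) -> None:
    for _ in range(cycles):
        observed = read_sensor()              # monitor behaviour
        if observed > STANDARD_MAX_TEMP:      # compare against the standard
            throttle_device()                 # modify behaviour
        else:
            print(f"observed {observed:.1f} within norm, no action")

if __name__ == "__main__":
    control_loop()
```

SMART-style monitoring of hardware, discussed next, follows the same pattern, though originally without the automated intervention.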

In point of fact, SMART technologies originally referred to the acronym of self-monitoring, analysis and reporting technology (SMART), mainly used for failure prediction in hardware (Gaber et al., 2017). This is interesting because such SMART-ness remains focused on monitoring and reporting, without necessarily including an automated or even autonomous response in order to repair. In that sense the current use of the term smart is different as it refers to systems capable of responding to changes in their environment based on input data, and it is precisely the notion of control and the ability to steer that are central to what we now call smart technologies, both within the context of online and offline systems. Smart technologies are digital technologies of control, and the most important question will be whose control they facilitate. We will return to this question in section 4.

3. Levels of being smart: the diversity of non-human agents

The term smart is not a technical term within computer science, law, the life sciences or social science. It may have originated as a marketing term, meant to lure investors and potential users by offering an intuitive way to speak about computational systems whose real-world interventions supposedly achieve efficiency, convenience, or novel products and services. Though ‘stuff’ may be sold under the heading of smart even if it has no adaptive qualities, the core idea is that smart technologies do have some level of agency, in the practical sense of being able to respond to input data with behaviours that are conducive to our goals (to whom “our” refers is the key question; see the final section).

Smartness, however, is not a categorical attribute of a technology, as there are different levels of being smart. Much depends on the extent to which its adaptive behaviours have been premeditated by the developer, which corresponds to the extent to which a technology can be said to express its own agency. At the same time, much depends on the complexity of the environment, taking note of the fact that environments are often stabilised or even closed to ensure the safe functioning of a smart technology. In robotics the environment of the robot, called its ‘envelope’, is usually developed in tandem with the robot, aligning its ‘degrees of freedom’ with the physical properties of the envelope (Floridi, 2014a). In this way developers can more easily foresee the scenarios they must code for, preventing undefined behaviours of the device. Many roboticists foresee that self-driving cars will only hit the road if the road infrastructure is sealed off from, for example, pedestrians, making sure the car runs within a relatively predictable environment, meeting only other self-driving cars on connected roads (Brooks, 2018).

If we acknowledge that technologies may exhibit a specific type of agency, we can move away from discussions of whether they ‘have’ humanesque agency or not. My take is that they don’t, because they are embodied otherwise and that has consequences (Pfeifer & Bongard, 2007). Also, why invest in human-like agents if we already have wonderful ways to reproduce? Aficionados such as Kurzweil (2005) will claim that artificial agents will, however, soon outperform us and reach a singularity where humanity becomes a historical footnote to an intelligence too advanced for us to even conceive (though he did, nevertheless). I would say that here we enter the realm of pseudo-religious devotion if not hubris, observing that most believers in such a singularity come from outside the domain of computer science, uninformed by the limits imposed by physics, complexity and much more (Walsh, 2017). It makes more sense to acknowledge that plants, animals and smart technologies all have different but nevertheless pivotal types of agency, and thereby differ from stones, screwdrivers and volcanoes. This allows us to focus on figuring out what kind of agency is typical for smart technologies and to raise the crucial question of how it may affect human agency. In this section I discuss three levels of smartness; in the next I probe the implications for human agency.

Based on a series of seminal definitions of agency (Floridi & Sanders, 2004; Pfeifer & Bongard, 2007; Russell et al., 2010; Steels, 1995; Varela et al., 1991) I have compiled a definition that depicts agency as (Hildebrandt, 2015 at 22):

the capability:

  • to act autonomously,
  • to perceive and respond to changes in the environment,
  • to endure as a unity of action over a prolonged period of time and
  • to adapt behaviours to cope with new circumstances.

Let’s note that the term autonomous here does not refer to moral autonomy and does not assume a human-like mind; it simply refers to an entity capable of goal-oriented behaviour without intervention of its creator, designer or engineer. Let’s also note that the most crucial aspect of agency is the combination of perception and action: a simple thermostat perceives temperature and, if a threshold is met, it acts by triggering the heating system. Though nobody would imply that a thermostat can think, is autonomous in the sense of deciding which temperature we prefer, or is in need of human rights to protect its dignity, a thermostat differs substantially from a water tap that only provides water after we open it. The thermostat has a minimal type of agency. Interestingly, we would not call such a thermostat smart, because it does not learn from our behaviour: it does not adapt to the way we set its temperature.

A smart thermostat, however, responds to our behaviour by adapting its own behaviour in line with how it profiles us. This may suit us, or suit those who profit from the data it provides about our behaviour. The smart thermostat may not be designed to endure as a unity of action over a prolonged period of time, which would imply self-assessment and self-repair. Similarly, it may not be designed to reconfigure its architecture to survive in a changing environment. And this brings me back to the first aspect of agency: autonomy.

Here we need to distinguish between automation and autonomy. A simple and a smart thermostat both display automation, but their level of autonomy differs. Though one could claim that both are capable of ‘goal-oriented behaviour without intervention of its creator, designer or engineer’, the smart thermostat has a broader palette of goal-oriented behaviour due to its learning capacity. The difference between automation (which does not imply agency) and autonomy (which does) is usually sought in the ability of a system to respond to its environment in a way that sustains its own endurance and prevents or repairs failure modes, notably in the face of uncertainty (Endsley, 2017). This means that autonomy here refers to a technology capable of behaving as an adaptive unity of action over the course of time, highlighting the overlap and the interaction with the different aspects of machine agency or smartness.
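
The contrast can be made concrete in a toy sketch (purely illustrative and heavily simplified; the classes, learning rate and values are invented for this article): a fixed-threshold thermostat merely automates a rule premeditated by its designer, whereas a minimally ‘smart’ variant adapts its setpoint to how occupants override it, that is, it profiles their behaviour.

```python
# Toy contrast between automation and a minimal form of adaptive agency.
# Purely illustrative; real smart thermostats are far more complex.

class SimpleThermostat:
    """Automation: a fixed rule, premeditated entirely by the designer."""
    def __init__(self, setpoint: float):
        self.setpoint = setpoint

    def act(self, room_temp: float) -> str:
        return "heat on" if room_temp < self.setpoint else "heat off"

class LearningThermostat(SimpleThermostat):
    """A minimally 'smart' variant: it adapts its setpoint to user overrides,
    i.e., it profiles the occupants' behaviour (learning rate is arbitrary)."""
    def observe_override(self, user_setpoint: float, rate: float = 0.2) -> None:
        self.setpoint += rate * (user_setpoint - self.setpoint)

simple = SimpleThermostat(setpoint=20.0)
smart = LearningThermostat(setpoint=20.0)

for user_choice in [22.0, 22.5, 23.0]:        # occupants keep turning the heat up
    smart.observe_override(user_choice)

print(simple.act(19.0), simple.setpoint)            # 'heat on' 20.0, unchanged rule
print(smart.act(19.0), round(smart.setpoint, 2))    # adapted setpoint
```

The point is not the code itself but the asymmetry it makes visible: the adaptive variant generates behavioural data that may serve the occupants, the provider, or both.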

This has led me to distinguish three levels of machine agency (Hildebrandt, 2015, chapter 2):

  1. Agency of logic-based systems that are by definition deterministic and explainable, though in real life their interaction with unforeseen events may cause unforeseeable behaviour; the more complex their decision tree and their environment, the more difficult it may be to reconstruct a correct explanation. A simple thermostat is a fine example of such a system, but so is a fraud detection system based entirely on a complex but preconceived decision tree. This is the type of agency closely related to good old-fashioned artificial intelligence (GOFAI), grounded in knowledge representation and symbol manipulation (Haugeland, 1997, at 16).
  2. Agency of machine learning systems that are capable of perceiving their environment (in terms of data) and of improving their performance of a specified task based on such perception. Such systems thrive on iterative feedback loops that are validated in terms of pattern recognition, which is ultimately grounded in statistical correlations expressed as mathematical functions. Examples of these types of systems abound: from e-learning to fintech, and from load balancing in smart energy networks to facial recognition technology, machine translation and personal digital assistants such as Siri or Alexa. This type of agency is closely related to cybernetics and to ‘artificial intelligence: a modern approach’ (AIMA), grounded in inductive inferences and predictive analytics (Russell, Norvig, & Davis, 2010).
  3. Agency of multi-agent systems (MASs), where a large number of agents interact, each based on its own goals or scripts, which results in emergent behaviours of the overall system due to network effects and complex dynamics (Vázquez-Salceda, Dignum, & Dignum, 2005); a toy sketch of such emergence follows this list. The agents can be logic-based or based on machine learning, and the environment that a MAS navigates and regulates can be closed or open, predictable or unpredictable. Interestingly, a MAS will often be a distributed system that in itself constitutes an environment for humans and other types of agents, thus blurring the borders between an agent, an agent system and an environment. Whereas a smart fridge may be an identifiable agent in my home environment, my smart home, consisting of interacting agents working together to better serve my needs (or those of the provider), is an environment rather than an agent. This is precisely where things become complicated, opaque and potentially turbulent. Some have termed this situation, where the environment itself takes on agency, an onlife world (Floridi, 2014b). Others have highlighted how the continuous and reflexive updating of one’s environment may reshape our own agency, though some do, and others do not, distinguish between the nature of expectations among human agents and those between a human and a smart environment (Dittrich, 2002; Esposito, 2017).
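
To make the notion of emergence slightly more tangible, the toy sketch below (entirely hypothetical, not taken from the cited work; the thresholds and the load signal are arbitrary) lets a handful of agents follow trivial local scripts while reacting to an aggregate signal they jointly produce; the oscillation the system exhibits is specified in none of the individual scripts.

```python
# Toy multi-agent system: each agent follows its own trivial script,
# yet the aggregate shows dynamics no single script specifies.
# Entirely illustrative; parameters are arbitrary.

class Agent:
    def __init__(self, threshold: float):
        self.threshold = threshold
        self.active = False

    def step(self, load: float) -> None:
        # Local rule, no global coordination: switch on when the shared
        # load is below this agent's threshold, off when it is above.
        self.active = load < self.threshold

agents = [Agent(threshold=t) for t in (0.3, 0.5, 0.7, 0.9)]
load = 0.0
for tick in range(6):
    for a in agents:
        a.step(load)
    load = sum(a.active for a in agents) / len(agents)  # emergent aggregate
    print(f"tick {tick}: load={load:.2f}, active={[a.active for a in agents]}")
```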

Obviously, this tripartite categorisation is not the ultimate way to frame different levels of smartness, while noting that these levels interact and morph where smart technologies meet. This is where the difference between smart technologies and smart infrastructures becomes pivotal, especially since so many smart infrastructures are data-driven and surreptitiously reconfigure our choice architecture and our life world on the basis of our behavioural data. As indicated, this has been framed under the headings of Ambient Intelligence, the Internet of Things, cloud robotics, smart cities, connected cars or smart energy grids. All these headings refer to what computer science now refers to as cyberphysical infrastructures (Suh et al., 2014). As such infrastructure increasingly configures our environments, it becomes critical infrastructure. This will in turn augment our dependence on networked smart technologies and the energy requirements they involve, while also raising a number of issues of distributive justice, both with regard to access and with regard to potential unfair distribution of risks.

Although there seems to be an urge to discuss these societal implications in terms of ethics, I prefer to end this article with a discussion of the political economy of smart technologies, including some pointers to the importance of democracy and the rule of law. The main reason for this preference is the need to counter attempts to engage in ‘ethics-washing’ and ‘ethics shopping’ (Wagner, 2018) by teasing out the power relationships that are at stake. This is not to suggest that ethics is not relevant here (Hildebrandt, 2020a, chapter 11).

4. A political economy of smart technologies

For many decades economic analysis has been in thrall to neo-classical economics, originating from the Chicago school of e.g. Milton Friedman, Gary Becker and Richard Posner, and influenced by the Austrian-British philosopher and economist Hayek (2007), whose 1944 The Road to Serfdom celebrated an unconstrained market libertarianism. Becker’s rational choice theory inspired the Chicago school of political economy, applying the atomistic type of methodological individualism that grounds rational choice theory when used in, for instance, crime control and antitrust policy. Those unimpressed by the somewhat metaphysical and unsubstantiated claims of libertarian market fundamentalism usually address the conglomerate of neo-classical economic policies as ‘neoliberalism’ (Blakely, 2020). Under that heading I would also include the application to economics of Kahneman and Tversky’s (1979) models of cognitive science, for which Kahneman later won the Nobel Prize. This particular strand of neoliberalism identifies as behavioural economics, popularised as nudge theory by Thaler and Sunstein (2008). Nudge theory, in turn, fits very well with machine learning applications that aim to manipulate ‘users’ of smart technologies into specific behaviours, whether purchasing, voting or eating. Both nudge theory and machine learning thrive on an atomistic variant of methodological individualism that is closely aligned with utilitarianism (Hildebrandt, 2020a, chapter 11). Nudge theory’s alliance with behavioural economics has resulted in libertarian paternalism (Sunstein, 2016), though not everyone buys into the sneaky tyranny it professes (Yeung, 2017). Meanwhile most of our free services feed on behavioural advertising, which is firmly grounded in the pseudo-science of behavioural nudging by way of real-time bidding systems, ultimately built on the quicksand of the same contested branch of cognitive science mentioned above (see Gigerenzer (2018) and Lepenies and Małecka (2019) on the pitfalls of nudge theory). The unwieldy marriage between machine learning and nudge theory has nevertheless been a major success for the rather deep pockets of big tech companies (Frederik & Martijn, 2019; Lomas, 2019a, 2019b).

To properly understand the political economy of smart technologies it may be more interesting to study the work of Karl Polanyi (2012), whose seminal The Great Transformation is inspiring a new generation of scholars. For instance, Benkler (2018), Britton-Purdy et al. (2020) and Cohen (2019) are intent on a better understanding of how law and the rule of law are implicated in the rise of economic superpowers such as big tech. They demonstrate the need to reinvent the countervailing powers of Montesquieu's checks and balances, or call for new versions of Polanyi's counter-movements to put a halt to predatory capitalism. It is interesting that some of these scholars have been studying the advent of smart technologies for decades, notably investigating how the smart aspects of big tech products and services invalidate previous forms of legal protection, playing into the hands of those who control the backend systems.

In my own Smart Technologies and the End(s) of Law (Hildebrandt, 2015), I argue that we must learn to interact with smart systems, instead of perpetuating the illusion that we are using them when these systems are often using us (as data engines for their data-driven nudging). Moving from usage to interaction will require keen attention to effective and actionable transparency (Schraefel et al., 2020) and to what some have called 'throttling mechanisms' that slow down high-frequency profiling by big tech platforms (Ohm, 2020). It seems clear that the kind of transparency and the human timescale that are needed to reorient cybernetic control back to individual human beings will require legislation, supervisory oversight and dedicated case law to rebalance current power relationships (Hildebrandt, 2018). Though developers of smart technologies often speak of human-centred design, the current economic incentive structure reduces such humane approaches to PR about 'the human in the loop'. Instead, we need an approach that puts the machine back in its proper place: smart technologies should be 'in the loop', while human beings navigate an animated technological landscape that serves them instead of spying on them (Zuboff, 2019).
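Ohm's (2020) proposal is a legal and regulatory one, but the basic intuition behind a throttling mechanism can be pictured as a simple rate limiter. The sketch below is only a toy illustration under my own assumptions: the class name, the thresholds and the very notion of a 'profiling call' are hypothetical and not taken from the cited work. It merely shows how a fixed budget of profiling requests per time window reintroduces a slower, human timescale into an otherwise high-frequency process.

    import time
    from collections import deque

    class ProfilingThrottle:
        """Toy rate limiter: allow at most `max_calls` profiling requests
        per `window` seconds for a given subject (illustrative only)."""

        def __init__(self, max_calls: int, window: float):
            self.max_calls = max_calls
            self.window = window
            self.calls = deque()  # timestamps of recently allowed calls

        def allow(self) -> bool:
            now = time.monotonic()
            # drop timestamps that have fallen out of the sliding window
            while self.calls and now - self.calls[0] > self.window:
                self.calls.popleft()
            if len(self.calls) < self.max_calls:
                self.calls.append(now)
                return True
            return False

    # hypothetical policy: at most 10 profiling calls per minute per person
    throttle = ProfilingThrottle(max_calls=10, window=60.0)
    for i in range(25):
        print(f"request {i}: {'served' if throttle.allow() else 'deferred'}")

The point of such a mechanism is not technical sophistication but the deliberate slowing down of profiling so that oversight, contestation and human reflection become possible at all.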

5. Conclusion: human acuity and smart machines

Speaking of smart technologies could have the advantage of steering clear of the snake-oil narratives of AI, while still hinting at the cybernetics that inspires their design. But as it is not a technical term it can mean anything to anybody, and this becomes a drawback when it is used to sell systems based on unsubstantiated claims, as often happens under the heading of AI or machine learning (Giles, 2018; Lipton & Steinhardt, 2018).

Users and those forced to live with these systems (in smart cities, under the surveillance of smart policing, at the mercy of smart recruiting) should ask themselves "smart for whom?", "smart in what sense?" and "smart compared to what?". Though many so-called smart applications are sold as 'outperforming humans', such claims deserve scrutiny (Cabitza et al., 2017): these bombastic claims rest on performance metrics that measure accuracy against a data set, which may say little about real-world performance (e.g. Caruana et al., 2015). Humans do not live in a data set; they navigate the material and institutional fabric of a real world, and they flourish based on an acuity that is not within the remit of smart machines.
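The gap between a headline accuracy figure and real-world usefulness can be made concrete with a minimal, purely hypothetical sketch (the numbers are mine, not drawn from any of the cited studies): a model that always predicts the majority class scores 99% accuracy on an imbalanced data set while detecting none of the cases that actually matter.

    # Hypothetical illustration: high accuracy on a data set, zero real-world value.
    labels = [1] * 10 + [0] * 990      # 1% positive cases, 99% negative cases
    predictions = [0] * len(labels)    # a 'model' that always predicts negative

    accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
    detected = sum(1 for p, y in zip(predictions, labels) if p == 1 and y == 1)

    print(f"accuracy: {accuracy:.1%}")                          # 99.0%
    print(f"positives detected: {detected} of {sum(labels)}")   # 0 of 10

Whatever metric is reported, it is computed inside the closed world of the data set; the claim that a system 'outperforms humans' quietly imports that closed world into the open one.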

Some will say that whether such acuity will remain beyond the reach of machines is an open question; others are certain that artificial general intelligence is around the corner. Still others focus their attention on how smart technologies affect human agency, taking a more pragmatic and simultaneously concerned approach. Referring to technologies as smart hopefully avoids metaphysical beliefs in technologies that cannot but follow a script, even if that script allows them to reorganise their internal structure to perform better on a given task. What matters is that human beings must be able to anticipate how the machinic agency of clever engineering feats profiles them, noting the difference between the fragile beauty of human acuity and the precocious foresight of machines that results from human ingenuity.

References

Benkler, Y. (2018). The Role of Technology in Political Economy: Part 1 [Blog post]. The Law and Political Economy Project. https://lpeproject.org/blog/the-role-of-technology-in-political-economy-part-1/

Blakely, J. (2020). How Economics Becomes Ideology: The Uses and Abuses of Rational Choice Theory. In P. Róna & L. Zsolnai (Eds.), Agency and Causal Explanation in Economics (pp. 37–52). Springer International Publishing. https://doi.org/10.1007/978-3-030-26114-6_3

Blockeel, H. (2010). Hypothesis Space. In C. Sammut & G. I. Webb (Eds.), Encyclopedia of Machine Learning (pp. 511–513). Springer US. https://doi.org/10.1007/978-0-387-30164-8_373

Boulanin, V., & Verbruggen, M. (2017). Article 36 Reviews: Dealing with the Challenges posed by Emerging Technologies [Report]. Stockholm International Peace Institute. https://www.sipri.org/publications/2017/other-publications/article-36-reviews-dealing-challenges-posed-emerging-technologies

Britton-Purdy, J., Singh Grewal, D., Kaczynski, A., & Rahman, K. S. (2020). Building a Law-and-Political-Economy Framework: Beyond the Twentieth-Century Synthesis. The Yale Law Journal, 129(6), 1784–1835. https://www.yalelawjournal.org/feature/building-a-law-and-political-economy-framework

Brooks, R. (2018, January 1). My Dated Predictions [Blog post]. Robots, AI, and Other Stuff. https://rodneybrooks.com/my-dated-predictions/

Cabitza, F., Rasoini, R., & Gensini, G. F. (2017). Unintended Consequences of Machine Learning in Medicine. JAMA, 318(6), 517–518. https://doi.org/10.1001/jama.2017.7797

Cambridge English Dictionary. (2020). Smart. In Cambridge English Dictionary Online. https://dictionary.cambridge.org/dictionary/english/smart

Caruana, R., Lou, Y., Gehrke, J., Koch, P., Sturm, M., & Elhadad, N. (2015). Intelligible Models for HealthCare: Predicting Pneumonia Risk and Hospital 30-day Readmission. Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1721–1730. https://doi.org/10.1145/2783258.2788613

Cohen, J. E. (2019). Between Truth and Power: The Legal Constructions of Informational Capitalism. Oxford University Press. https://doi.org/10.1093/oso/9780190246693.001.0001

Dittrich, P. K. (2002). On the Scalability of Social Order—Modeling the Problem of Double and Multi Contingency Following Luhmann. Journal of Artificial Societies and Social Simulation, 6(1). http://jasss.soc.surrey.ac.uk/6/1/3.html

Dolgin, E. (2019). The Secret Social Lives of Viruses. Nature, 570(7761), 290–292. https://doi.org/10.1038/d41586-019-01880-6

Endsley, M. R. (2017). From Here to Autonomy: Lessons Learned From Human–Automation Research. Human Factors, 59(1), 5–27. https://doi.org/10.1177/0018720816681350

Esposito, E. (2017). Artificial Communication? The Production of Contingency by Algorithms. Zeitschrift Für Soziologie, 46(4), 249–265. https://doi.org/10.1515/zfsoz-2017-1014

Floridi, L. (2014a). Agency: Enveloping the World. In The Fourth Revolution: How the infosphere is reshaping human reality. Oxford University Press.

Floridi, L. (2014b). The Onlife Manifesto—Being Human in a Hyperconnected Era. Springer. https://doi.org/10.1007/978-3-319-04093-6

Floridi, L., & Sanders, J. W. (2004). On the Morality of Artificial Agents. Minds and Machines, 14(3), 349–379. https://doi.org/10.1023/B:MIND.0000035461.63578.9d

Frederik, J., & Martijn, M. (2019, November 6). The new dot com bubble is here: It’s called online advertising. The Correspondent. https://thecorrespondent.com/100/the-new-dot-com-bubble-is-here-its-called-online-advertising/13228924500-22d5fd24

Gaber, S., Ben-Harush, O., & Savir, A. (2017). Predicting HDD failures from compound SMART attributes. Proceedings of the 10th ACM International Systems and Storage Conference, 1. https://doi.org/10.1145/3078468.3081875

Gibson, J. J. (2014). The Ecological Approach to Visual Perception (1st ed.). Routledge. https://doi.org/10.4324/9781315740218

Gigerenzer, G. (2018). The Bias Bias in Behavioral Economics. Review of Behavioral Economics, 5(3–4), 303–336. https://doi.org/10.1561/105.00000092

Giles, M. (2018, September 13). Artificial intelligence is often overhyped—And here’s why that’s dangerous. MIT Technology Review. https://www.technologyreview.com/2018/09/13/240156/artificial-intelligence-is-often-overhypedand-heres-why-thats-dangerous/

Greenfield, A. (2006). Everyware: The dawning age of ubiquitous computing. New Riders.

Haugeland, J. (Ed.). (1997). Mind Design II: Philosophy, Psychology, and Artificial Intelligence (2nd Revised, Enlarged ed.). The MIT Press.

Hayek, F. A. (2007). The Road to Serfdom: Text and Documents – The Definitive Edition (B. Caldwell, Ed.). University of Chicago Press.

Hayles, N. K. (1999). How we became posthuman: Virtual bodies in cybernetics, literature, and informatics. University of Chicago Press.

Hildebrandt, M. (2015). Smart Technologies and the End(s) of Law. Edward Elgar. https://doi.org/10.4337/9781849808774

Hildebrandt, M. (2018). Primitives of Legal Protection in the Era of Data-Driven Platforms. Georgetown Law Technology Review, 2(2), 252–273. https://georgetownlawtechreview.org/primitives-of-legal-protection-in-the-era-of-data-driven-platforms/GLTR-07-2018/

Hildebrandt, M. (2020a). Law for Computer Scientists and Other Folk. Oxford University Press. https://doi.org/10.1093/oso/9780198860877.001.0001

Hildebrandt, M. (2020b). The Artificial Intelligence of European Union Law. German Law Journal, 21(1), 74–79. https://doi.org/10.1017/glj.2019.99

Ihde, D. (1990). Technology and the Lifeworld: From Garden to Earth. Indiana University Press.

ITU. (2005). The Internet of Things. International Telecommunication Union.

Kahneman, D., & Tversky, A. (1979). Prospect Theory: An Analysis of Decision under Risk. Econometrica, 47(2), 263–292. https://doi.org/10.2307/1914185

Kopljar, S. (2016). How to think about a place not yet: Studies of affordance and site-based methods for the exploration of design professionals’ expectations in urban development processes [PhD Thesis, Lund University]. http://lup.lub.lu.se/record/4bdb64d0-8551-44c7-aabc-26888e0cfbb4

Kurzweil, R. (2005). The singularity is near: When humans transcend biology. Viking.

Latour, B. (1992). Where Are the Missing Masses? The Sociology of a Few Mundane Artifacts. In W. E. Bijker & J. Law (Eds.), Shaping Technology / Building Society (pp. 225–258). MIT Press.

Lepenies, R., & Małecka, M. (2019). Behaviour Change: Extralegal, Apolitical, Scientistic? In H. Straßheim & S. Beck (Eds.), Handbook of Behavioural Change and Public Policy (pp. 344–360). Edward Elgar Publishing. https://doi.org/10.4337/9781785367854.00032

Leslie, D. (2019). Raging robots, hapless humans: The AI dystopia. Nature, 574(7776), 32–33. https://doi.org/10.1038/d41586-019-02939-0

Lipton, Z. C., & Steinhardt, J. (2018). Troubling Trends in Machine Learning Scholarship. ArXiv. http://arxiv.org/abs/1807.03341

Lomas, N. (2019a, January 20). The case against behavioral advertising is stacking up. TechCrunch. https://techcrunch.com/2019/01/20/dont-be-creepy/

Lomas, N. (2019b, May 31). Targeted ads offer little extra value for online publishers, study suggests. TechCrunch. https://social.techcrunch.com/2019/05/31/targeted-ads-offer-little-extra-value-for-online-publishers-study-suggests/

Moor, J. (2006). The Dartmouth College Artificial Intelligence Conference: The Next Fifty Years. AI Magazine, 27(4), 87–87. https://doi.org/10.1609/aimag.v27i4.1911

Natale, S., & Ballatore, A. (2020). Imagining the thinking machine: Technological myths and the rise of artificial intelligence. Convergence, 26(1), 3–18. https://doi.org/10.1177/1354856517715164

Ohm, P. (2020). Throttling machine learning. In M. Hildebrandt & K. O’Hara (Eds.), Life and the Law in the Era of Data-Driven Agency (pp. 214–229). Edward Elgar. https://doi.org/10.4337/9781788972000.00019

Pfeifer, R., & Bongard, J. (2007). How the Body Shapes the Way We Think. A New View of Intelligence. MIT Press.

Pickering, A. (2002). Cybernetics and the Mangle: Ashby, Beer and Pask. Social Studies of Science, 32(3), 413–437. https://doi.org/10.1177/0306312702032003003

Plessner, H., & Bernstein, J. M. (2019). Levels of Organic Life and the Human: An Introduction to Philosophical Anthropology (M. Hyatt, Trans.). Fordham University Press. https://doi.org/10.5422/fordham/9780823283996.001.0001

Polanyi, K. (2012). The Great Transformation: The Political and Economic Origins of Our Time. Amereon Ltd.

Russell, S. J., Norvig, P., & Davis, E. (2010). Artificial intelligence: A modern approach. Prentice Hall.

schraefel, m. c., Gomer, R., Gerding, E., & Maple, C. (2020). Rethinking transparency for the Internet of Things. In M. Hildebrandt & K. O’Hara (Eds.), Life and the Law in the Era of Data-Driven Agency (pp. 100–116). Edward Elgar. https://doi.org/10.4337/9781788972000.00012

Steels, L. (1995). When are robots intelligent autonomous agents? Robotics and Autonomous Systems, 15, 3–9. https://doi.org/10.1016/0921-8890(95)00011-4

Strathern, M. (1997). 'Improving ratings': Audit in the British university system. European Review, 5(3), 305–321.

Suh, S. C., Tanik, U. J., Carbone, J. N., & Eroglu, A. (Eds.). (2014). Applied Cyber-Physical Systems. Springer. https://doi.org/10.1007/978-1-4614-7336-7

Sunstein, C. R. (2016). The Ethics of Influence: Government in the Age of Behavioral Science. Cambridge University Press.

Thaler, R. H., & Sunstein, C. R. (2008). Nudge: Improving decisions about health, wealth, and happiness. Yale University Press.

van den Berg, B. (2010). The Situated Self: Identity in a World of Ambient Intelligence. Wolf Legal Publishers.

Varela, F. J., Thompson, E., & Rosch, E. (1991). The Embodied Mind. Cognitive Science and Human Experience. MIT Press.

Vázquez-Salceda, J., Dignum, V., & Dignum, F. (2005). Organizing Multiagent Systems. Autonomous Agents and Multi-Agent Systems, 11(3), 307–360. https://doi.org/10.1007/s10458-005-1673-9

Wagner, B. (2018). Ethics as an Escape from Regulation: From 'Ethics-Washing' to 'Ethics-Shopping'? In E. Bayamlioglu, I. Baraliuc, L. Janssens, & M. Hildebrandt (Eds.), Being Profiled: Cogitas Ergo Sum: 10 Years of Profiling the European Citizen (pp. 84–87). Amsterdam University Press. https://doi.org/10.2307/j.ctvhrd092.18

Walsh, T. (2017). The Singularity May Never Be Near. AI Magazine, 38(3), 58–62. https://doi.org/10.1609/aimag.v38i3.2702

Weiser, M. (1991). The Computer for the 21st Century. Scientific American, 265(3), 94–104. https://doi.org/10.1145/329124.329126

Wiener, N. (1948). Cybernetics: Or Control and Communication in the Animal and the Machine. MIT Press.

Winch, P. (1958). The Idea of a Social Science. Routledge & Kegan Paul.

Yeung, K. (2017). 'Hypernudge’: Big Data as a mode of regulation by design. Information, Communication & Society, 20(1), 118–136. https://doi.org/10.1080/1369118X.2016.1186713

Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. Profile Books.

Footnotes

1. One could write a history of these loosely overlapping concepts, for instance noting the rise of earlier concepts such as ubiquitous computing (Weiser, 1991) and ambient intelligence (van den Berg, 2010), which preceded the term Internet of Things (ITU, 2005).

2. A good pointer to how we might steer free from both technological determinism and social constructivism may still be Latour (1992), though my own position is rooted in e.g., Winch (1958), Ihde (1990), Gibson (2014), and Varela, Thompson, and Rosch (1991).

3. One may suggest that viruses don't independently navigate their environment. I use the term navigate in a broader sense than intended physical movement (even humans navigate their environment in a broader sense, e.g. their institutional environment, and much physical movement is induced by our autonomic nervous system rather than being the result of conscious intent). The 2020 pandemic has shown the extraordinary intelligence of viral navigation at a global scale; on the viral communication that informs such navigation, see e.g. Dolgin (2019).


Digital sovereignty


This article belongs to Concepts of the digital society, a special section of Internet Policy Review guest-edited by Christian Katzenbach and Thomas Christian Bächle.

In July 2020, the German government, in its official programme for its presidency of the European Council, announced its intention “to establish digital sovereignty as a leitmotiv of European digital policy” (The German Presidency of the EU Council, 2020, p. 8). This is just one of the many recent episodes, albeit a very prominent one, in which the term digital sovereignty has been used by governments to convey the idea that states should reassert their authority over the internet and protect their citizens and businesses from the manifold challenges to self-determination in the digital sphere.

At first glance, the digital transformation and the global technical infrastructure of the internet seem to challenge sovereignty. The principles of territoriality and state hierarchy appear opposed to the diffuse, flexible, forever shifting constellations of global digital networks. What is more, digital applications and communication practices have created a momentum that seems to defy legal governance and control. Therefore, the growth of digital networks in the 1990s made the disappearance of the state an immediately plausible scenario. This was most famously captured in John Perry Barlow’s bold Declaration of the Independence of Cyberspace (Barlow, 1996). Yet, while this reference is still very much alive in public discourse, today it is more often framed as a threat than a promise. To counter risks to their authority, states have made it possible to enforce national laws and undertake governmental interventions in the digital sphere. Over the years, they have created and reformed technical and legal instruments to address issues of digital governance (Goldsmith & Wu, 2006). In addition, they have successfully convinced their publics that sovereignty and state authority are necessary to protect “vital goods” ranging from security to prosperity, cultural rules and media control. As a result, in many countries, citizens today expect their governments to protect their privacy online or to combat online disinformation and cybercrime. But the various calls for digital sovereignty in the last few years, in both centralised/authoritarian countries and liberal democracies, do more than reaffirm state authority and intervention in the digital sphere. The concept of digital sovereignty has become a powerful term in political discourse that seeks to reinstate the nation state, including the national economy and the nation’s citizens, as a relevant category in the global governance of digital infrastructures and the development of digital technologies. We can expect the concept of digital sovereignty to continue to gain even more political currency in the years to come, given the broad deployment of highly invasive digital technologies ranging from artificial intelligence to the “Internet of Things”.

To date, the concept of digital sovereignty has been widely used in political discourse but rarely scrutinised in academic research, with a small but growing number of exceptions (Couture & Toupin, 2019; Mueller, 2010, 2019; Pohle, 2020c; Pohle & Thiel, 2019; Thiel, 2014, 2019; Glasze & Dammann, in press; Peuker, 2020). To understand where the concept comes from and where it is headed, we proceed in two steps. First, we reconstruct key controversies that define the relationship between sovereignty and digital networks. We then analyse how the concepts of sovereignty and statehood re-emerged and how digital sovereignty was elevated to a cherished form of sovereignty in its own right. Second, we systematise the various claims to digital sovereignty, thereby highlighting the concept's internal tensions and contradictions. By tracing these dynamics of politicisation, we attempt to show that sovereignty is a discursive practice in politics and policy rather than the legal and organisational concept it is traditionally conceived to be.

The relationship between sovereignty and the digital: a reconstruction

The political concept of sovereignty, understood as the power enjoyed by a governing body to rule over itself, free from any interference by outside sources or bodies, is derived from the Latin word superanus, which means "over" or "superior". Whereas the traditional theory of sovereignty, as proposed in the sixteenth century by French political philosopher Jean Bodin, concerned the ruler's authority to make final decisions, Jean-Jacques Rousseau recast the concept so that it focused on popular sovereignty rather than monarchical sovereignty; over time, it became increasingly associated with democracy, the rule of law and territoriality. Today, sovereignty primarily means a state's independence vis-à-vis other states (external sovereignty) as well as its supreme power to command all powers within the territory of the state (internal sovereignty). Understood as democratic sovereignty, it encompasses popular sovereignty and citizens' right to exercise self-determination by making use of their inalienable rights. Crucial to all of these meanings is a geographical specification, that is, the restriction of sovereignty to a specific territory, which is seen as a functional prerequisite for authority to be exercised effectively (Grimm, 2015). 1

Ever since Bodin, sovereignty has been seen as a central concept for understanding politics. But in the 1990s, this importance seemed to wane, leading to talk of a post-sovereign world in which states would no longer be the most important and ultimately superior source of power and where democracy would be more closely associated with pluralism and participation than with the capacity of a demos to govern itself (MacCormick, 1999). This predicted decline in state importance strongly influenced the early stages of the internet’s development and governance. The idea of state sovereignty was particularly challenged by two different, yet related, discursive strands that significantly shaped public and academic discourses: cyber exceptionalism and multi-stakeholder internet governance. Yet, in more recent years, policy actors have successfully sought to justify and reaffirm sovereignty in the digital sphere against these two perspectives.

Two challenges: cyber exceptionalism and internet governance

The first challenge, cyber exceptionalism, suggests that the digital realm is qualitatively distinctive from the analogue world and that digital spaces therefore need to be treated differently from all previous technological innovations. This perspective was especially popular during the rise of the commercial internet in the 1990s but is still evident in public and academic discourse. Cyber exceptionalist thinking is based on the assumption that the growing importance of computer-aided network communication implies the demise of state sovereignty (Katz, 1997). Although the internet's actual development did not take place outside of concrete legal spaces and would not have been possible without the incentives provided by markets, regulatory regimes or public research infrastructures (Mazzucato, 2011), cyber exceptionalism—which most often takes the form of cyber libertarianism (Keller, 2019)—was the formative ideology in those early days with a strong cultural and economic backing in Silicon Valley (Barbrook & Cameron, 1996; Turner, 2006).

As actors who greatly distrust established political institutions, cyber libertarians argue that digitally mediated forms of politics will prompt a decentralised organisation of societies. This should enable a better tailored response to the complex demands of governing modern societies than is offered by traditional forms of political organisation. In this view, external sovereignty, law and territoriality are expected to matter less in the context of transnational networks. The arguments for this are manifold. First, the complexity of nested responsibilities and the global reach of networks cannot be addressed properly within national jurisdictions; second, legislative procedures are too slow to keep up with the pace of innovation of digital technologies and the associated business models; and third, digital technologies enable individuals to evade liability, because attribution becomes a shaky construct in the digital world (Post, 2007). Hence, in contrast to a world bound by territories and sovereign nations, the world invoked by cyber libertarianism requires the existence of cyber sovereignty, with cyberspace as a new and autonomous virtual realm that is independent of governmental interference (Barlow, 1996). 2

The cyber exceptionalists and cyber libertarian positions still resonate today—for example, in the debates about cryptocurrencies (Pistor, 2020). But the main claim, namely that the rise of digital networks as such will lead to a demise of territorial conceptions of sovereignty, has lost its attraction. The infrastructures and the management of digital communication have steadily been transformed, making it easier to observe and steer digital flows. This trend has been reinforced by the commercialisation of the internet, as it has given rise to walled gardens and created new agents interested in a fine-grained, less anonymous and less horizontal architecture, which allows for intervention at many points (DeNardis, 2012; Deibert & Crete-Nishihata, 2012).

At least from the year 2000 onwards, a second, related but less confrontational challenge to sovereignty in its original sense emerged: multi-stakeholder internet governance. Here, the focus is not on states' shortcomings at regulating digital matters, but on the different and non-sovereign roles that states have to play in a regulatory ideal that views the administration of the internet as the task of those directly affected by it. Taking their origins in the technical community, characterised by expertise and meritocratic decision-making, a multiplicity of decentralised processes emerged, which were designed to serve the development and application of shared norms, rules and procedures to maintain and develop the internet (Klein, 2002; Chenou, 2014). In this vision, self-governance would take place in a multi-stakeholder governance structure based on the principles of openness, inclusion, bottom-up collaboration and consensual decision-making. This form of coordination, it was argued, could counteract the need for a central decision-making authority (Hofmann, 2016; Raymond & DeNardis, 2015).

While multi-stakeholder internet governance has become established as a relatively autonomous field in the global policy arena, it is characterised by conflicts of various kinds. Its external conflicts are often rooted in the fact that the multi-stakeholder governance model continues to explicitly reject established government-dominated international institutions and seeks to replace them with the principle of transnationalism. Conversely, representatives of some states have insisted on putting the authority to make binding decisions on internet governance issues in the hands of multilateral institutions and, hence, subjecting them more heavily to state control (Musiani & Pohle, 2014; Glen, 2014). Internal conflicts in the field are caused by increasingly obvious coordination problems due to the multitude of often parallel internet governance processes as well as the thematic shift away from primarily technological matters towards more openly political or social questions (Malcolm, 2008). Furthermore, the idea of multi-stakeholder internet governance has often been accused of being associated with neoliberal thinking (Chenou, 2014). Thus, hopes of a lasting or expansive change in how transnational politics is done have not been fulfilled. Given the increasing attempts of both authoritarian and democratic nations to more strongly regionalise the development of digital networks, it is doubtful whether the efforts towards reforming multi-stakeholder internet governance will find the acceptance that would be necessary to preserve the model and its principles (Voelsen, 2019b). Therefore, multi-stakeholder internet governance cannot be seen as the future of governance as such, nor as a dichotomous alternative to decision-making by sovereign states, but rather as a parallel governance model adapted for non-binding coordination processes.

Resurgence of sovereignty as a principle of digital policy-making

In many respects, the public imaginary of digital communications as somehow hostile to state sovereignty and the practical challenges of enforcing sovereign power in the digital realm have remained (Mueller, 2010). But the arguments for dismissing state sovereignty have significantly weakened; instead, various actors have started to proclaim the need to establish sovereignty in the digital realm. The justifications for these calls are manifold.

First, it is often argued that the real challenge to state sovereignty is no longer to be found in the amorphous organisational qualities of decentralised networks, but in the enormous power of the corporate actors that thrive in our commercialised internet environment and that hold the material and immaterial power of owning vital societal structures. The internet’s commercial focus has come to centre on advertising and the exploitation of network effects (Christl, 2017). Intermediaries and digital platforms play such a dominant role in making content available that the open internet protocols that digital communications rely upon become meaningless (Pasquale, 2016; Srnicek, 2017; Hindman, 2018). Today, it is not just the enormous resources that those intermediaries command, but also the way in which they exercise control, that makes them one of the biggest challenges to the concept of democratic sovereignty (Staab, 2019; Zuboff, 2019). Internet corporations provide the infrastructures of our societies and, therefore, interfere with state matters at highly sensitive points. Examples abound: whether we are talking about the creation and regulation of markets or the provision and structuring of public communication, today’s digital economy significantly differs from older constellations for ordering societies — to a point where many of the powerful corporate actors can be described as quasi-sovereign. The emergence of these corporate powerhouses, which appear to be largely unaccountable via traditional political mechanisms, has—especially in Europe—given rise to a new, more structural and often more expansive thinking about the demands and domains of democratic self-governance (van Dijck, 2020).

A second justification for enlarging and pushing digital sovereignty becomes most obvious when we look at the slightly paradoxical response of governments to Edward Snowden's 2013 revelations regarding the massive global surveillance practices of the United States' intelligence services and their allies (Tréguer, 2017, 2018; Steiger et al., 2017). Snowden revealed the mostly unconstrained exercise of hegemonic power and the enormous possibilities for data gathering, data analysis and data control by intelligence agencies and tech companies in the United States and other Western countries. Surprisingly, their decision to behave as sovereign yet non-territorial entities did not lead to a critique of power agglomeration as such (Hintz & Dencik, 2016). Instead, it triggered the demand for a decoupled digital sphere that allows for exclusive national control over communications, data and regulation. Ever since the Snowden revelations, demands for national (or regional) digital sovereignty have been invoked by actors who highlight the risks of foreign surveillance and manipulation, citing examples ranging from disinformation (Tambiama, 2020) to telecommunication infrastructure (Voelsen, 2019a) and industrial policy (Hobbs et al., 2020).

If we sum up the observations made so far, we can see how (state) sovereignty, traditionally thought to be the bedrock of modern politics, has become a contested concept. Yet, it then slowly but forcefully found a way to accommodate itself in the digital age. Nowadays, justifications for insisting on sovereignty abound. Especially in international relations we can see a resurrection of sovereignty as a geopolitical claim, which has set in motion a race to establish and expand the scope of sovereignty. Nevertheless, digital sovereignty needs to be actively explained and adjusted in order to fit our networked societies with their wide range of communications, strong transnational ties and pluralist understandings of democracy.

Political discourse(s) on digital sovereignty

Today, the concept of digital sovereignty is being deployed in a number of political and economic arenas, from more centralised and authoritarian countries to liberal democracies. It has acquired a large variety of connotations, variants and changing qualities. Its specific meaning varies according to the different national settings and actor arrangements but also depending on the kind of self-determination these actors emphasise (Pohle, 2020c; Lambach, 2019; Wittpahl, 2017). Focusing on this last factor, we can systematise digital sovereignty claims by distinguishing whether they address the capacity for digital self-determination by states, companies or individuals. What the different discursive layers resulting from this variety of claims share is their prescriptive and normative nature; rather than referring to existing instruments or specific practices, they usually formulate aspirations or recommendations for action. 3

State autonomy and the security of national infrastructures

In the most prominent category of digital sovereignty claims, the emphasis is on the idea that a nation or region should be able to take autonomous actions and decisions regarding its digital infrastructures and technology deployment. The majority of these claims relate to the geographical restriction of sovereignty to a specific territory and to states' efforts to ensure the security of digital infrastructures and their authority over digital communication matters pertaining to their territories and citizens.

We can identify two strands of this line of thinking. On the one hand, powers outside of the liberal world have experienced the rise of networked communication as a threat to existing political systems. China was the first country to respond to this by propagating and developing its idea of digital sovereignty—mostly framed as cyber sovereignty or internet sovereignty (Creemers, 2016, 2020; Jiang, 2010; Zeng et al., 2017). The underlying ideas were later adapted by other authoritarian and semi-authoritarian countries, most prominently Russia (Budnitsky & Jia, 2018; Stadnik, 2019; Nocetti, 2015). On the other hand, early on, Western states also addressed the need for control and independence in digital matters. Here the justification for creating architectures of control was mostly security-driven. As global networks emerged, states became more and more aware of their vulnerabilities, expressed in matters of infrastructural control. Computer security was then translated into national security and expanded to ever more areas (Nissenbaum, 2005; Hansen & Nissenbaum, 2009). In this process, the role and capacities of democratic states, and of infrastructural control, have grown strongly (Cavelty & Egloff, 2019)—although these practices have often conflicted with liberal-democratic ideals of society and older understandings of technology as inclusive and pluralistic (Möllers, 2020). Since the 2013 Snowden revelations, the focus on state autonomy and security has become a core element of digital sovereignty discourses.

Prime examples of government-fostered practices and ideas resulting from this discursive strand are the many recent proposals towards data localisation. They seek to restrict the storage, movement and/or processing of data to specific areas and jurisdictions and are typically justified by the need to limit the access that foreign intelligence and commercial agencies may have to specific types of data, for example, industrial or personal data. It is often assumed, but rarely clearly stated, that many such proposals are also driven by other motivations, such as the increased accessibility of citizens' data to domestic intelligence actors and law-enforcement agencies and the wish to generate revenues for actors like local internet service providers (Chander & Le, 2015; Hill, 2014). In many countries, including Brazil and India—two important emerging economies—proposals towards data localisation have so far only been realised in fragmented form or remain limited to specific contexts (Panday & Malcolm, 2018; Selby, 2017). An emblematic case of a proposed data localisation initiative in Europe is the Schengen Routing idea, that is, the proposal to avoid routing data flows within Europe via exchange points and routes outside of Europe (Glasze & Dammann, in press, p. 11). The idea, which was proposed by Deutsche Telekom, the largest internet provider in Germany and the largest telecommunications organisation in the European Union, was hotly debated both in the public and the political sphere but ultimately failed to garner sufficient political support (Kleinhans, 2013).

Present in both authoritarian and democratic countries, claims and proposed measures emphasising the autonomy and self-determination of states and the security of critical digital infrastructures have been met with fierce criticism. Both policy actors and observers, such as academics and technical experts, fear that efforts focusing on IT security and the regulation of internet issues on the national level would interfere with the open and universally accessible nature of the internet (Maurer et al., 2014) and ultimately lead to the re-territorialisation of the global internet, causing its fragmentation into national internet segments (Drake et al., 2016; Mueller, 2017). This, in turn, may have important negative economic and political impacts for the countries concerned due to their digital and geographical isolation (Hill, 2014).

Economic autonomy and competition

There is a second category of digital sovereignty claims, which is closely related, yet different from the focus on state autonomy. This emphasises the high and often opposing economic stakes surrounding the digital environment and focuses on the autonomy of the national economy in relation to foreign technology and service providers. Like the previous category of assertions, claims focusing on economic self-determination have been primarily spurred by the perceived market dominance of technology companies from the United States and increasingly also China (Steiger et al., 2017, p. 11). Likewise, the specific measures and instruments that governments apply to compensate for these imbalances in the digital economy partly overlap with measures seeking to strengthen the security of technological systems and national autonomy (Baums, 2016). But in contrast to the first category, these measures are usually part of a nation’s larger economic and industrial policy strategy, aiming at the digital transformation of entire sectors of the economy. As such, they concern both traditional industries and sectors (telecommunications, media, logistics) and new IT-related economic sectors, and primarily aim to promote the innovative power of the domestic economy and to nurture local competitors (Bria, 2015). In addition, a growing number of instruments centre on digital trade and seek to regulate commerce and data flows delivered via digital networks (Burri, 2017; Ferracane, 2017).

A prime example of an initiative that seeks to strengthen economic autonomy is the European cloud service Gaia-X, which was announced jointly by France and Germany in 2019 and is yet to be launched (BMWi, 2020). The project plans to connect small and medium-sized cloud providers in Europe through a shared standard that allows them to offer an open, secure and trustworthy European alternative to the world's biggest (often US-based) cloud service providers (e.g., Amazon, Google, Microsoft), while at the same time respecting European values and data protection standards. The initiative is heavily promoted by policy actors as an important step towards European data sovereignty (BMBF, 2019a; Summa, 2020)—another closely related concept. But it has already been criticised for being an overly ambitious and purely state-driven project that does not offer real innovation and that will have to compete for market acceptance with more established providers (Lumma, 2019; Mahn, 2020).

As with the previous category, the goal to achieve more independence from foreign technologies and to promote the innovative power of the domestic industry is a central element of discourses on digital sovereignty in both authoritarian and democratic countries. In democratic countries, some measures are additionally justified by the aim to protect consumers by offering technological services that respect user rights and domestic laws and norms such as data protection regulations (Hill, 2014; Maurer et al., 2014, p. 8). In many emerging economies, such as India, the proposed measures are also often clearly directed at what has been described by both policy actors and scholars as digital imperialism or digital colonialism. Both terms refer to the overly dominant position of Western technology corporations in the Global South which leads to new forms of hegemony and exploitation (Pinto, 2018; Kwet, 2019; PTI, 2019). Unsurprisingly, such claims and initiatives have been met with scepticism and repudiation by some Western countries, where policy and business actors have been quick to label such ideas and practices digital protectionism, meaning the "erection of barriers or impediments to digital trade" (Aaronson, 2016, p. 8; see also Aaronson & Leblond, 2018). But while in the United States, where the notion of digital sovereignty has principally a negative connotation (Couture & Toupin, 2019, p. 2313), a wide variety of policies are considered potentially protectionist—including censorship, filtering, localisation and intellectual property-related measures and regulations to prevent disinformation and to protect privacy—in other regions and countries, such as Europe and Canada, narrower definitions that account for specific trade restrictions due to privacy concerns and cultural exceptions have been proposed (Aaronson, 2016, p. 10).

User autonomy and individual self-determination

In recent years, a third category of digital sovereignty claims has emerged. This is primarily present in the discourses of democratic countries and a particularly strong component of the policy debate on digital sovereignty in Germany (Pohle, 2020a, p. 7ff.; Glasze & Dammann, in press, p. 13). Emphasising the importance of individual self-determination, these claims focus on the autonomy of citizens in their roles as employees, consumers, and users of digital technologies and services. An interesting aspect of this category is the departure from a state-centred understanding of sovereignty. Instead of viewing sovereignty as the prerequisite to exercise authority in a specific territory, actors view it as the ability of individuals to take actions and decisions in a conscious, deliberate and independent manner. By strengthening these capacities, individuals should be protected as consumers and strengthened in their rights as democratic citizens (Gesellschaft für Informatik, 2020; VZBV, 2014). Discursive claims by policy makers and civil society actors in this category also refer to user sovereignty and digital consumer sovereignty, thereby replacing the control over users and citizens that digital sovereignty measures entail in authoritarian regimes with the goal of strengthening domestic internet users' capacity for self-determination (Pohle, 2020c, p. 8ff.; SVRV, 2017).

The proposed means to achieve this kind of sovereignty in the digital sphere include economic incentives for user-friendly and domestic technology development, but also the introduction of technical features allowing for effective encryption, data protection and more transparent business models. In addition, a large majority of measures targeting individual self-determination seek to enhance users’ media and digital literacy, thus strengthening the competences and confidence of users and consumers in the digital sphere. In Germany, for example, a recently created innovation fund by the Federal Ministry of Education and Research (the “Human-Technology-Interaction for Digital Sovereignty” fund) builds on the idea that digital literacy means more than being technologically knowledgeable or competent in the use of digital tools. Rather, it is understood as the critical or conscious engagement of users with the technology and their own data (Datenbewusstsein, see BMBF, 2019b).

An interesting aspect of this discursive category of digital sovereignty is the reference made by tech activists and social movements to users' technological or digital sovereignty. Their perspective contradicts a state-centred understanding of sovereignty and instead emphasises the need for users to better understand commercial and state powers in the digital sphere and to appropriate their technologies, data and content (Couture & Toupin, 2019, p. 2315ff). This could be done either by prioritising open and free software and services or by users protecting themselves from the exploitation of their personal data by tech companies through data protection and encryption practices (Haché, 2014, 2018; Cercy & Nitot, 2016). While some facets of this perspective and some of the proposed measures may align with the claims to individual self-determination that we can see in democracies, the underlying beliefs are, however, different. Moreover, references made and measures suggested by policymakers seeking to increase user sovereignty need to be evaluated very carefully. In many instances, citizens are being reduced to consumers of digital services rather than valued in their capacity as democratic citizens. In addition, the focus on the autonomy and security of consumers might obfuscate measures that primarily serve security and economic purposes, leading to a situation in which fundamental user rights—such as privacy or freedom of expression—are restricted rather than enforced.

Sovereignty in the networked world

This essay has argued not only that advocates of the concept of digital sovereignty, so popular in political and public discourse nowadays, had to reverse some of their early beliefs about the governability of a networked world, but also that the idea of sovereignty itself has shifted as it has risen to prominence. The issue is no longer cyber sovereignty as a non-territorial challenge to sovereignty that is specific to the virtual realm of the internet. Today, digital sovereignty has become a much more encompassing concept, addressing not only issues of internet communication and connection but also the much wider digital transformation of societies. Digital sovereignty is—especially in Europe—now often used as a shorthand for an ordered, value-driven, regulated and therefore reasonable and secure digital sphere. It is presumed to resolve the multifaceted problems of individual rights and freedoms, collective and infrastructural security, political and legal enforceability and fair economic competition (Bendiek & Neyer, 2020).

Traditionally, sovereignty has largely been understood in terms of enforceable law backed by clear structural arrangements, such as the state monopoly on violence. In this context, the state is conceived of as a more or less coherent actor, capable, independent and hence autonomous. Although sovereignty has always been imperfect—Stephen Krasner famously depicted it as "organized hypocrisy" (Krasner, 1999)—the means of sovereign power in the Westphalian system have been rather straightforward. But due to digitalisation, globalisation and platformisation the situation has become more complicated. The digital sovereignty of a state cannot be reduced to its ability to set, communicate and enforce laws. Rather than relying on the symbolic representation and organisational capacity of the state, digital sovereignty is deeply invasive. In many instances, the idea of strengthening digital sovereignty means not only actively managing dependencies, but also creating infrastructures of control and (possible) manipulation. Therefore, we believe that much more reflection and debate is needed on how sovereign powers can be held democratically accountable with regard to the digital. It is not sufficient to propose that the power of large digital corporations could be tamed by subjecting them to democratic sovereignty, as has been suggested by many democratic governments worldwide. Likewise, we should not simply equate (digital) sovereignty with the ability to defend liberal and democratic values, as is often done by policy actors in Europe. Digital sovereignty is not an end in itself. Instead, we have to put even more thought into the procedural framework of how sovereign power can be held accountable and opened up to public reflection and control in order to truly democratise digital sovereignty.

References

Aaronson, S. A. (2016). The digital trade imbalance and its implications for internet governance (Paper No. 25; Global Commission on Internet Governance). Centre for International Governance Innovation.

Aaronson, S. A., & Leblond, P. (2018). Another digital divide: The rise of data realms and its implications for the WTO. Journal of International Economic Law, 21(2), 245–272. https://doi.org/10.1093/jiel/jgy019

Barbrook, R., & Cameron, A. (1996). The Californian ideology. Science as Culture, 6(1), 44–72. https://doi.org/10.1080/09505439609526455

Barlow, J. P. (1996). A Declaration of the Independence of Cyberspace. Electronic Frontier Foundation. https://www.eff.org/cyberspace-independence

Baums, A. (2016). Digitale Standortpolitik in der Post-Snowden-Welt. In M. Friedrichsen & P.-J. Bisa (Eds.), Digitale Souveränität: Vertrauen in der Netzwerkgesellschaft (pp. 223–235). Springer VS. https://doi.org/10.1007/978-3-658-07349-7_20

Bendiek, A., & Neyer, J. (2020). Europas digitale Souveränität. Bedingungen und Herausforderungen internationaler politischer Handlungsfähigkeit. In M. Oswald & I. Borucki (Eds.), Demokratietheorie im Zeitalter der Frühdigitalisierung (pp. 103–125). Springer VS.

BMBF. (2019a). "GAIA-X": Ein neuer Datenraum für Europa. Bundesministerium für Bildung und Forschung. https://www.bmbf.de/de/gaia-x-ein-neuer-datenraum-fuer-europa-9996.html

BMBF. (2019b). Mensch-Technik-Interaktion für digitale Souveränität. Bundesministerium für Bildung und Forschung. https://www.technik-zum-menschen-bringen.de/foerderung/bekanntmachungen/digisou

BMWi. (2020). GAIA-X: A Federated Data Infrastructure for Europe. Bundesministerium für Wirtschaft und Energie. https://www.bmwi.de/Redaktion/EN/Dossier/gaia-x.html

Bria, F. (2015). Public policies for digital sovereignty. Platform Cooperativism Consortium conference, New York. https://www.academia.edu/19102224/Public_policies_for_digital_sovereignty

Budnitsky, S., & Jia, L. (2018). Branding Internet sovereignty: Digital media and the Chinese–Russian cyberalliance. European Journal of Cultural Studies, 21(5), 594–613. https://doi.org/10.1177/1367549417751151

Burri, M. (2017). The Regulation of Data Flows through Trade Agreements. Georgetown Journal of International Law, 48(1), 408–448.

Cavelty, M. D., & Egloff, F. J. (2019). The Politics of Cybersecurity: Balancing Different Roles of the State. St Antony’s International Review, 15(1), 37–57. https://www.ingentaconnect.com/content/stair/stair/2019/00000015/00000001/art00004

Cercy, N., & Nitot, T. (2016). Numérique: Reprendre le contrôle. Framasoft.

Chander, A., & Le, U. P. (2015). Data Nationalism. Emory Law Journal, 64(6), 677–739.

Chenou, J.-M. (2014). From cyber-libertarianism to neoliberalism: Internet exceptionalism, multi-stakeholderism, and the institutionalisation of internet governance in the 1990s. Globalizations, 11(2), 205–223. https://doi.org/10.1080/14747731.2014.887387

Christl, W. (2017). Corporate Surveillance In Everyday Life. How Companies Collect, Combine, Analyze, Trade, and Use Personal Data on Billions [Report]. Cracked Labs. http://crackedlabs.org/en/corporate-surveillance

Couture, S., & Toupin, S. (2019). What does the notion of “sovereignty” mean when referring to the digital? New Media & Society, 21(2), 2305–2322. https://doi.org/10.1177/1461444819865984

Creemers, R. (2016). The Chinese cyber-sovereignty agenda (M. Leonard, Ed.). European Council on Foreign Relations.

Creemers, R. (2020). China’s Conception of Cyber Sovereignty. In D. Broeders & B. van den Berg (Eds.), Governing Cyberspace: Behavior, Power and Diplomacy (pp. 107–145). Rowman & Littlefield.

Deibert, R. J., & Crete-Nishihata, M. (2012). Global governance and the spread of cyberspace controls. Global Governance, 18(3), 339–361. https://doi.org/10.1163/19426720-01803006

DeNardis, L. (2012). Hidden Levers of Internet Control. Information, Communication & Society, 15(5), 720–738. https://doi.org/10.1080/1369118X.2012.659199

Drake, W. J., Cerf, V. G., & Kleinwächter, W. (2016). Internet Fragmentation: An Overview (Future of the Internet Initiative) [White Paper]. World Economic Forum. https://www.weforum.org/reports/internet-fragmentation-an-overview.

Ferracane, M. (2017). Restrictions on Cross-Border Data Flows: A Taxonomy (Working Paper No. 1/2017). European Centre for International Political Economy. https://doi.org/10.2139/ssrn.3089956

Gesellschaft für Informatik. (2020). Schlüsselaspekte Digitaler Souveränität [Working Paper]. Gesellschaft für Informatik. https://gi.de/fileadmin/GI/Allgemein/PDF/Arbeitspapier_Digitale_Souveraenitaet.pdf

Glasze, G., & Dammann, F. (in press). Von der „globalen Informationsgesellschaft“ zum „Schengenraum für Daten“ – Raumkonzepte in der Regierung der „digitalen Transformation“ in Deutschland. In T. Döbler, C. Pentzold, & C. Katzenbach (Eds.), Räume digitaler Kommunikation (forthcoming). Halem.

Glen, C. M. (2014). Internet Governance: Territorializing Cyberspace? Politics & Policy, 5(42), 635–657. https://doi.org/10.1111/polp.12093

Goldsmith, J., & Wu, T. (2006). Who controls the internet? Illusions of a borderless world. Oxford University Press.

Grimm, D. (2015). Sovereignty: The Origin and Future of a Political and Legal Concept. Columbia University Press.

Haché, A. (2014). La Souveraineté technologique (Vol. 1). Dossier ritimo. https://www.ritimo.org/La-Souverainete-technologique.

Haché, A. (2018). La Souveraineté technologique—Volume 2. Dossier ritimo. https://www.ritimo.org/La-Souverainete-Technologique-Volume2.

Hansen, L., & Nissenbaum, H. (2009). Digital Disaster, Cyber Security, and the Copenhagen School. International Studies Quarterly, 53(4), 1155–1175. https://doi.org/10.1111/j.1468-2478.2009.00572.x

Hill, J. F. (2014). The Growth of Data Localization Post-Snowden: Analysis and Recommendations for U.S. Policymakers and Industry Leaders. Lawfare Research Paper Series, 2(3), 1–41.

Hindman, M. (2018). The Internet trap: How the digital economy builds monopolies and undermines democracy. Princeton University Press. https://doi.org/10.23943/princeton/9780691159263.001.0001

Hintz, A., & Dencik, L. (2016). The politics of surveillance policy: UK regulatory dynamics after Snowden. Internet Policy Review, 5(3). https://doi.org/10.14763/2016.3.424

Hobbs, C. (Ed.). (2020). Europe’s digital sovereignty: From rulemaker to superpower in the age of US-China rivalry. European Council on Foreign Relations. https://ecfr.eu/publication/europe_digital_sovereignty_rulemaker_superpower_age_us_china_rivalry/.

Hofmann, J. (2016). Multi-stakeholderism in Internet governance: Putting a fiction into practice. Journal of Cyber Policy, 1(1), 29–49. https://doi.org/10.1080/23738871.2016.1158303

Jiang, M. (2010). Authoritarian Informationalism: China’s Approach to Internet Sovereignty. SAIS Review of International Affairs, 30(3), 71–89. https://doi.org/10.1353/sais.2010.0006

Johnson, D. R., & Post, D. G. (1996). Law and Borders—The Rise of Law in Cyberspace. Stanford Law Review, 48(5), 1367–1402. https://doi.org/10.2307/1229390

Katz, J. (1997). Birth of a Digital Nation. In Wired. https://www.wired.com/1997/04/netizen-3/.

Keller, C. I. (2019). Exception and Harmonization: Three Theoretical Debates on Internet Regulation (2020(2); HIIG Discussion Paper Series). Alexander von Humboldt Institut für Internet und Gesellschaft. https://doi.org/10.2139/ssrn.3572763

Klein, H. (2002). ICANN and Internet Governance: Leveraging Technical Coordination to Realize Global Public Policy. The Information Society, 18(3), 193–207. https://doi.org/10.1080/01972240290074959

Kleinhans, J.-P. (2013, November 13). Schengen-Routing, DE-CIX und die Bedenken der Balkanisierung des Internets. Netzpolitik. https://netzpolitik.org/2013/schengen-routing-de-cix-und-die-bedenken-der-balkanisierung-des-internets/.

Krasner, S. D. (1999). Sovereignty: Organized Hypocrisy. Princeton University Press. https://doi.org/10.2307/j.ctt7s9d5

Kukutai, T., & Taylor, J. (2016). Indigenous data sovereignty: Toward an agenda. ANU Press.

Kwet, M. (2019). Digital colonialism: US empire and the new imperialism in the Global South. Race & Class, 60(4), 3–26. https://doi.org/10.1177/0306396818823172

Lambach, D. (2019). The Territorialization of Cyberspace. International Studies Review, 22(3), 482–506. https://doi.org/10.1093/isr/viz022

Lumma, N. (2019). Die „europäische Cloud“ ist eine Kopfgeburt, die nicht überleben wird. Gründerszene Magazin. https://www.gruenderszene.de/technologie/gaia-x-europaeische-cloud-wird-scheitern

MacCormick, N. (1999). Questioning Sovereignty: Law, State, and Nation in the European Commonwealth. Oxford University Press.

Mahn, J. (2020). Die digitale europäische Idee. Gaia-X: Wie Europa in der Cloud unabhängig werden soll. Magazin für Computertechnik, 14. https://www.heise.de/select/ct/2020/14/2015610312088025860

Malcolm, J. (2008). Multi-stakeholder governance and the Internet Governance Forum. Terminus Press.

Maurer, T., Morgus, R., Skierka, I., & Hohmann, M. (2014). Technological Sovereignty: Missing the Point? [Paper]. New America; Global Public Policy Institute. http://www.digitaldebates.org/fileadmin/media/cyber/Maurer-et-al_2014_Tech-Sovereignty-Europe.pdf

Mazzucato, M. (2011). The entrepreneurial state. Demos. http://oro.open.ac.uk/30159/1/Entrepreneurial_State_-_web.pdf

Möllers, N. (2020). Making Digital Territory: Cybersecurity, Techno-nationalism, and the Moral Boundaries of the State. Science, Technology, & Human Values, 46(1), 112–138. https://doi.org/10.1177/0162243920904436

Mueller, M. (2017). Will the internet fragment?: Sovereignty, globalization and cyberspace. Polity.

Mueller, M. (2020). Against Sovereignty in Cyberspace. International Studies Review, 22(4), 779–801. https://doi.org/10.1093/isr/viz044

Mueller, M. L. (2010). Networks and States: The Global Politics of Internet Governance. MIT Press. https://doi.org/10.7551/mitpress/9780262014595.001.0001

Musiani, F., & Pohle, J. (2014). NETmundial: Only a Landmark Event If “Digital Cold War” Rhetoric Abandoned. Internet Policy Review, 3(1). https://doi.org/10.14763/2014.1.251

Nissenbaum, H. (2005). Where Computer Security Meets National Security. Ethics and Information Technology, 7(2), 61–73. https://doi.org/10.1007/s10676-005-4582-3

Nocetti, J. (2015). Contest and conquest: Russia and global internet governance. International Affairs, 91(1), 111–130. https://doi.org/10.1111/1468-2346.12189

Panday, J., & Malcolm, J. (2018). The Political Economy of Data Localization. Partecipazione e conflitto, 11(2), 511–527. https://doi.org/10.1285/i20356609v11i2p511

Pasquale, F. (2016). Two narratives of platform capitalism. Yale Law & Policy Review, 35(1), 309–321. https://ylpr.yale.edu/two-narratives-platform-capitalism

Peuker, E. (2020). Verfassungswandel durch Digitalisierung. Mohr Siebeck.

Pinto, R. Á. (2018). Digital Sovereignty or Digital Colonialism? New tensions of privacy, security and national policies. Sur, 15(27), 15–27. https://sur.conectas.org/en/digital-sovereignty-or-digital-colonialism/

Pistor, K. (2020). Statehood in the digital age. Constellations, 27(1), 3–18. https://doi.org/10.1111/1467-8675.12475

Pohle, J. (2020a). Digitale Souveränität. In T. Klenk, F. Nullmeier, & G. Wewer (Eds.), Handbuch Digitalisierung in Staat und Verwaltung (pp. 1–13). Springer. https://doi.org/10.1007/978-3-658-23669-4_21-1

Pohle, J. (2020b). Digital sovereignty – a new key concept of digital policy in Germany and Europe [Research paper]. Konrad Adenauer Stiftung. https://www.kas.de/en/single-title/-/content/digital-sovereignty

Pohle, J., & Thiel, T. (2019). Digitale Vernetzung und Souveränität: Genealogie eines Spannungsverhältnisses. In I. Borucki & W. J. Schünemann (Eds.), Internet und Staat: Perspektiven auf eine komplizierte Beziehung (pp. 57–80). Nomos.

Post, D. G. (2007). Governing Cyberspace: Law. Santa Clara High Technology Law Journal, 24(4), 883–913. https://digitalcommons.law.scu.edu/chtlj/vol24/iss4/5/

P.T.I. (2019, January 20). India’s data must be controlled by Indians: Mukesh Ambani. mint. https://www.livemint.com/Companies/QMZDxbCufK3O2dJE4xccyI/Indias-data-must-be-controlled-by-Indians-not-by-global-co.html

Raymond, M., & DeNardis, L. (2015). Multistakeholderism: Anatomy of an inchoate global institution. International Theory, 7(3), 572–616. https://doi.org/10.1017/S1752971915000081

Selby, J. (2017). Data localization laws: Trade barriers or legitimate responses to cybersecurity risks, or both? International Journal of Law and Information Technology, 25(3), 213–232. https://doi.org/10.1093/ijlit/eax010

Srnicek, N. (2017). The challenges of platform capitalism. Understanding the logic of a new business model. Juncture, 23(4), 254–257. https://doi.org/10.1111/newe.12023

Staab, P. (2019). Digitaler Kapitalismus: Markt und Herrschaft in der Ökonomie der Unknappheit. Suhrkamp.

Stadnik, I. (2019). Internet Governance in Russia–Sovereign Basics for Independent Runet. 47th Research Conference on Communication, Information and Internet Policy (TPRC47). https://doi.org/10.2139/ssrn.3421984

Steiger, S., Schünemann, W. J., & Dimmroth, K. (2017). Outrage without Consequences? Post-Snowden Discourses and Governmental Practice in Germany. Media and Communication, 5(1), 7–16. https://doi.org/10.17645/mac.v5i1.814

Summa, H. A. (2020, March). How GAIA-X is Paving the Way to European Data Sovereignty. Dotmagazine. https://www.dotmagazine.online/issues/cloud-and-orientation/build-your-own-internet-gaia-x

SVRV (Advisory Council for Consumer Affairs). (2017). Digitale Souveränität. Sachverständigenrat für Verbraucherfragen.

Tambiama, M. (2020). Digital sovereignty for Europe (EPRS Ideas Papers, pp. 1–12) [Briefing]. European Parliamentary Research Service. https://www.europarl.europa.eu/RegData/etudes/BRIE/2020/651992/EPRS_BRI(2020)651992_EN.pdf

The German Presidency of the EU Council. (2020). Together for Europe’s recovery: Programme for Germany’s Presidency of the Council of the European Union (1 July to 31 December 2020). Council of the European Union.

Thiel, T. (2014). Internet und Souveränität. In C. Volk & F. Kuntz (Eds.), Der Begriff der Souveränität in der transnationalen Konstellation (pp. 215–239). Nomos.

Thiel, T. (2019). Souveränität: Dynamisierung und Kontestation in der digitalen Konstellation. In J. Hofmann, N. Kersting, C. Ritzi, & W. J. Schünemann (Eds.), Politik in der digitalen Gesellschaft: Zentrale Problemfelder und Forschungsperspektiven (pp. 47–61). Transcript.

Tréguer, F. (2017). Intelligence reform and the Snowden paradox: The case of France. Media and Communication, 5(1), 17–28. https://doi.org/10.17645/mac.v5i1.821

Tréguer, F. (2018). US Technology Companies and State Surveillance in the Post-Snowden Context: Between Cooperation and Resistance. (Research Report No. 5; UTIC Deliverables). SciencesPo. https://halshs.archives-ouvertes.fr/halshs-01865140

Turner, F. (2006). From counterculture to cyberculture: Stewart Brand, the Whole Earth Network, and the rise of digital utopianism. University of Chicago Press.

van Dijck, J. (2020). Governing digital societies: Private platforms, public values. Computer Law & Security Review, 36. https://doi.org/10.1016/j.clsr.2019.105377

Voelsen, D. (2019a). 5G, Huawei und die Sicherheit unserer Kommunikationsnetze – Handlungsoptionen für die deutsche Politik (Report No. 5; SWP-Aktuell). Stiftung Wissenschaft und Politik. German Institute for International and Security Affairs. https://doi.org/10.18449/2019A05

Voelsen, D. (2019b). Cracks in the internet’s foundation: The future of the internet’s infrastructure and global internet governance (Research Paper No. 14). Stiftung Wissenschaft und Politik. German Institute for International and Security Affairs. https://doi.org/10.18449/2019RP14

VZBV (Federation of German Consumer Organisations). (2014). Digitalisierung: Neue Herausforderungen für Politik und Verbraucher [Press release]. https://www.vzbv.de/pressemitteilung/digitalisierung-neue-herausforderungen-fuer-politik-und-verbraucher

Wittpahl, V. (Ed.). (2017). Digitale Souveränität: Bürger, Unternehmen, Staat. Springer Vieweg. https://doi.org/10.1007/978-3-662-55796-9

Zeng, J., Stevens, T., & Chen, Y. (2017). China’s Solution to Global Cyber Governance: Unpacking the Domestic Discourse of “Internet Sovereignty”. Politics & Policy, 45(3), 432–464. https://doi.org/10.1111/polp.12202

Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. Profile Books.

Footnotes

1. Over the last decades, there have been many attempts to apply the concept of sovereignty to political entities other than states, such as supranational and sub-national institutions or indigenous peoples (e.g., Kukutai & Taylor, 2016). These derivative usages of the term often equate sovereignty with autonomy and thereby deemphasise aspects of control and legitimation. While we believe that these broader understandings are important and can partly explain the popularity of the concept of digital sovereignty, we stick to a more traditional political understanding of the term.

2. A less pointed but still deeply state-sceptical variant of cyber exceptionalism is networked independence, a discursive stream frequently found in legal discourse and aligned with the discourse on globalisation and global governance. It argues that state sovereignty is in decline because of the dysfunctional fragmentation of a static order bound to geographical territories (Johnson & Post, 1996).

3. The proposed systematisation results from a structured qualitative analysis of selected policy documents applying the word digital sovereignty and similar terms (such as tech sovereignty, digital resilience, digital autonomy, etc.), which does not claim to be comprehensive. We use selected examples of policy texts and proposed measures to illustrate the different layers of digital sovereignty claims.

Cybersecurity


This article belongs to Concepts of the digital society, a special section of Internet Policy Review guest-edited by Christian Katzenbach and Thomas Christian Bächle.

Introduction

Cybersecurity1 covers the broad range of technical, organisational and governance issues that must be considered to protect networked information systems against accidental and deliberate threats. It goes well beyond the details of encryption, firewalls, anti-virus software, and similar technical security tools. This breadth is captured in the widely used International Telecommunication Union (ITU) definition (ITU-T, 2008, p. 2):

Cybersecurity is the collection of tools, policies, security concepts, security safeguards, guidelines, risk management approaches, actions, training, best practices, assurance and technologies that can be used to protect the cyber environment and organization and user’s assets. Organization and user’s assets include connected computing devices, personnel, infrastructure, applications, services, telecommunications systems, and the totality of transmitted and/or stored information in the cyber environment. Cybersecurity strives to ensure the attainment and maintenance of the security properties of the organization and user’s assets against relevant security risks in the cyber environment

The importance of cybersecurity has increased as so many government, business, and day-to-day activities around the world have moved online. But especially in emerging economies, “[m]any organizations digitizing their activities lack organizational, technological and human resources, and other fundamental ingredients needed to secure their system, which is the key for the long-term success” (Kshetri, 2016, p. 3).

The more technically-focused information security is still in widespread use in computer science. But as these issues have become of much greater societal concern while “software is eating the world” (Andreessen, 2011), cybersecurity has become more frequently used, not only in the rhetoric of democratic governments, as in the 2000s, but also in the general academic literature (shown in Figure 1):

Figure 1: Academic articles with cybersecurity/cyber-security/cyber security versus information security, data security and computer security in title, keywords or abstract of Web of Science indexed publications over time. Small numbers of records exist for both information security and computer security in the database since 1969. Data from Web of Science.

Barely used in academic literature before 1990 (except in relation to the Control Data CYBER 205 supercomputer from the late 1970s), cyber became ubiquitous as a prefix, adjective and even noun by the mid-1990s, with Google Scholar returning results across a broad range of disciplines with titles such as ‘Love, sex, & power on the cyber frontier’ (1995), ‘Surfing in Seattle: What cyber-patrons want’ (1995), ‘The cyber-road not taken’ (1994) and even the ‘Cyber Dada Manifesto’ (1991).

It evolved from Wiener’s cybernetics, a “field of control and communication theory, whether in the machine or in the animal” (1948)—derived from the Greek word for ‘steersman’—with an important intermediate point being the popular usage of cyborg, a contraction of cybernetic organism, alongside the Czech-derived robot (Clarke, 2005, section 2.4). The notion of a ‘governor’ of a machine goes back to the mid-19th century, with J. C. Maxwell (better known for his theory of electromagnetism) noting in 1868 that it is “a part of a machine by means of which the velocity of the machine is kept nearly uniform, notwithstanding variations in the driving-power or the resistance” (Maxwell, 1868, p. 270)—what Wiener called homeostasis.

The use of cyberspace to refer to the electronic communications environment was coined in William Gibson’s 1982 short story Burning Chrome (“widespread, interconnected digital technology”) and popularised by his 1984 science fiction novel Neuromancer (“a graphic representation of data abstracted from the banks of every computer in the human system […] lines of light ranged in the nonspace of mind, clusters and constellations of data […] a consensual hallucination experienced by millions”). Cyberspace’s arrival in legal and policy discussions was spearheaded by John Perry Barlow’s Declaration of the Independence of Cyberspace (1996). But by 2000, Gibson declared cyberspace was “evocative and essentially meaningless ... suggestive ... but with no real meaning” (Neale, 2000).

Despite its ubiquity in present-day national security and defence-related discussions, Wagner and Vieth found: “Cyber and cyberspace, however, are not synonymous words and have developed different meanings [...] Cyber is increasingly becoming a metaphor for threat scenarios and the necessary militarisation” (2016). Matwyshyn suggested the term is “the consequence of a cultural divide between the two [US] coasts: ‘cybersecurity’ is the Washington, D.C. legal rebranding for what Silicon Valley veterans have historically usually called ‘infosec’ or simply ‘security’” (2017, p. 1158). Cybersecurity issues have, to many whose interests are served by the interpretation, become national security issues (Clarke, 2016; Kemmerer, 2003; Nissenbaum, 2005).

A review by Craigen et al. (2014) found cybersecurity used in a range of literature and fields from 2003 onwards, including software engineering, international relations, crisis management and public safety. Social scientists interacting with policymakers, and academics generally applying for research and translation funding from government sources and interacting with the defence and signals intelligence/information security agencies that are the cybersecurity centres of expertise in many larger governments, have further popularised the term, 2 which appears in similar form in many languages, as shown in Appendix 1.

Looking beyond academia to literature more widely, Figure 2 shows computer security was most prevalent in the Google Books corpus from 1974, overtaken by information security in 1997 and by cybersecurity in 2015 (with cyber security increasingly popular since 1996, but cyber-security negligible over the entire period). Computer (Ware, 1970), system, and data (Denning, 1982) security were all frequently used as closely-related terms in the 1970s (Saltzer & Schroeder, 1975). 3

Figure 2: Google n-gram analysis (Lin et al., 2012) of the usage of variants of information security over time. Cybersecurity encompasses cybersecurity, cyber security and cyber-security. Retrieved using ngramr (Carmody, 2020).
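
For readers wishing to reproduce a comparison like Figure 2 without R and the ngramr package, a minimal Python sketch follows. It assumes the public JSON endpoint behind the Google Books Ngram Viewer (books.google.com/ngrams/json) remains available and accepts these query parameters; this is an assumption about an undocumented interface rather than a stable API.

import json
import urllib.parse
import urllib.request

TERMS = ["cybersecurity", "cyber security", "information security",
         "data security", "computer security"]

def ngram_frequencies(terms, year_start=1970, year_end=2019, smoothing=3):
    # Query the (undocumented) JSON endpoint behind the Ngram Viewer web interface.
    query = urllib.parse.urlencode({
        "content": ",".join(terms),
        "year_start": year_start,
        "year_end": year_end,
        "smoothing": smoothing,
    })
    url = "https://books.google.com/ngrams/json?" + query
    with urllib.request.urlopen(url) as response:
        data = json.load(response)
    # Each entry pairs an 'ngram' label with a 'timeseries' of yearly relative frequencies.
    return {entry["ngram"]: entry["timeseries"] for entry in data}

if __name__ == "__main__":
    for term, series in ngram_frequencies(TERMS).items():
        print(f"{term}: latest relative frequency {series[-1]:.3e}")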

This trend is unfortunate, since “using the term ‘cybersecurity’ seems to imply that information security issues are limited to code connected to the Internet [but] physical security of machines and human manipulability through social engineering are always key aspects of information security in both the private and public sector” (Matwyshyn, 2017, p. 1156).

Cybersecurity in early context

In computer science, attacks on the security of information systems are usually concerned with:

  • Breaching the confidentiality of systems, with data exposed to unauthorised actors;
  • Undermining the integrity of systems, and disruption of the accuracy, consistency or trustworthiness of information being processed;
  • Affecting the availability of systems, and rendering them offline, unusable or non-functional.

Together, confidentiality, integrity and availability are called the CIA triad, and have been the basis of information security since the late 1970s (Neumann et al., 1977, pp. 11–14). Echoing this history decades later, the Council of Europe’s 2001 Budapest Convention on Cybercrime set out in its first substantive section “Offences against the confidentiality, integrity and availability of computer data and systems”.
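
To make the integrity (and authenticity) leg of the triad concrete, here is a minimal, illustrative Python sketch using only the standard library; the shared key, message and amounts are purely hypothetical placeholders. Real systems would combine such a check with encryption for confidentiality and redundancy for availability.

import hashlib
import hmac

SHARED_KEY = b"example-shared-secret"  # hypothetical; in practice a securely provisioned key

def sign(message: bytes) -> bytes:
    # HMAC-SHA256 tag: any tampering with the message changes the tag.
    return hmac.new(SHARED_KEY, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    # Constant-time comparison avoids leaking information through timing.
    return hmac.compare_digest(sign(message), tag)

message = b"transfer 100 EUR to account 42"
tag = sign(message)
print(verify(message, tag))                             # True: integrity intact
print(verify(b"transfer 999 EUR to account 666", tag))  # False: tampering detected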

Cybersecurity across disciplines

The study and practice of cybersecurity spans a range of disciplines and fields. In this article, we consider three of the main angles important to cybersecurity practice: technical aspects; human factors; and legal dimensions. This is necessarily an incomplete list—notably, the topic is also the subject of study by those who are interested in, for example, how it reconfigures organisational structures (information systems), or relationships between actors such as states (international relations), and significant non-state actors such as organised crime gangs (criminology).

Technical aspects

Many technical domains are of direct relevance to cybersecurity, but the field designed to synthesise technical knowledge in practical contexts has become known as security engineering: “building systems to remain dependable in the face of malice, error, or mischance” (Anderson, 2008, p. 3). It concerns the confluence of four aspects—policy (the security aim), mechanisms (technologies to implement the policy), assurance (the reliability of each mechanism) and incentives (of both attackers and defenders). Security engineers may be intellectually grounded in a specialised technical domain, but they require a range of bridging and boundary skills between other disciplines of research and practice.

A daunting (and worsening) challenge for security engineers is posed by the complexities of the sociotechnical environments in which they operate. Technological systems have always evolved and displayed interdependencies, but today infrastructures and individual devices are networked and co-dependent in ways which challenge any ability to unilaterally “engineer” a situation. Systems are increasingly servitised (e.g., through external APIs), with information flows not under the control of the system engineer, and code subject to constant ‘agile’ evolution and change which may undermine desired system properties (Kostova et al., 2020).

Human factors and social sciences

The field of human factors in cybersecurity grew from the observation that much of the time “hackers pay more attention to the human link in the security chain than security designers” (Adams & Sasse, 1999, p. 41), leaving many sensitive systems wide open to penetration by “social engineering” (Mitnick & Simon, 2002).

It is now very problematic to draw cybersecurity’s conceptual boundaries around an organisation’s IT department, software vendors and employer-managed hardware, as in practice networked technologies have permeated and reconfigured social interactions in all aspects of life. Users often adapt technologies in unexpected ways (Silverstone & Hirsch, 1992) and create their own new networked spaces (Cohen, 2012; Zittrain, 2006), reliant on often-incomprehensible security tools (Whitten & Tygar, 1999) that merely obstruct individuals in carrying out their intended tasks (Sasse et al., 2001). Networked spaces to be secured—the office, the university, the city, the electoral system—cannot be boxed-off and separated from technology in society more broadly. Communities often run their networked services, such as a website, messaging group, or social media pages, without dedicated cybersecurity support. Even in companies, or governments, individuals or groups with cybersecurity functions differ widely in location, autonomy, capabilities, and authority. The complexity of securing such a global assemblage, made up of billions of users as well as hundreds of millions of connected devices, has encouraged a wider cross-disciplinary focus on improving the security of these planetary-scale systems, with social sciences as an important component (Chang, 2012).

Research focussed on the interaction between cybersecurity and society has also expanded the relevant set of risks and actors involved. While the term cybersecurity is often used interchangeably with information security (and thus in terms of the CIA triad), this only represents a subset of cybersecurity risks.

Insofar as all security concerns the protection of certain assets from threats posed by attackers exploiting vulnerabilities, the assets at stake in a digital context need not just be information, but could, for example, be people (through cyberbullying, manipulation or intimate partner abuse) or critical infrastructures (von Solms & van Niekerk, 2013). Moreover, traditional threat models in both information and cybersecurity can be limited. For example, domestic abusers are rarely considered as a threat actor (Levy & Schneier, 2020) and systems are rarely designed to protect their intended users from the authenticated but adversarial users typical in intimate partner abuse (Freed et al., 2018).

The domain of cyber-physical security further captures the way in which cybersecurity threats interact with physically located sensors and actuators. A broader flavour of definition than has been previously typical is used in the recent EU Cybersecurity Act (Regulation 2019/881), which in Article 2(1) defines cybersecurity as “the activities necessary to protect network and information systems, the users of such systems, and other persons affected by cyber threats” [emphasis added]. The difficult interaction between information systems, societies and environments is rapidly gaining traction in the research literature.

Research at the intersection of human–computer interaction and cybersecurity has also pointed to challenges of usability and acceptability in deploying approaches developed in fields such as security engineering. Consider the encryption of information flowing across the internet using Transport Layer Security (TLS), a protocol which is able to cryptographically authenticate the endpoints and protect the confidentiality and integrity of transmitted data. TLS raises usability challenges in relation to developers’ and administrators’ understanding of how it works and thus how to correctly implement it (Krombholz et al., 2017, 2019) as well as challenges with communicating its properties—and what to do in its absence—to end users in their web browsers (Felt et al., 2015; Reeder et al., 2018). Focusing on the user experience of the web browser, Camp (2013) suggests principles of translucent security: high security defaults, single-click override, context-specific settings, personalised settings, and use-based settings.
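
As an illustration of the engineering detail behind these usability findings, the following minimal Python sketch (standard library only; example.com is a placeholder host) shows what a correctly configured TLS client does: it verifies the server’s certificate and hostname before any application data flows. The safe behaviour comes from the defaults of ssl.create_default_context(); the usability research cited above suggests it is precisely such defaults that developers and administrators struggle to preserve when deploying TLS themselves.

import socket
import ssl

hostname = "example.com"  # placeholder host
context = ssl.create_default_context()  # secure defaults: certificate and hostname verification

with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        # Authentication: the certificate chain was validated during the handshake.
        print("negotiated protocol:", tls.version())
        print("server certificate subject:", tls.getpeercert()["subject"])
        # Confidentiality and integrity: application data is now encrypted and authenticated.
        tls.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        print(tls.recv(256))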

Related challenges faced by both users and developers or other specialists are found widely across the cybersecurity field, including passwords (e.g., Naiakshina et al., 2019) and encrypted email (Whitten & Tygar, 1999). The field of usable security seeks a fit between the security task and the humans expected to interact with it (Sasse et al., 2001). Without an understanding of issues such as these, the techniques used can bring at best a false sense of security, and at worst, entirely new threat vectors.
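
To illustrate the kind of developer task such password studies examine, below is a minimal Python sketch (standard library only) of salted, iterated password hashing; the iteration count and parameters are illustrative assumptions rather than a recommendation.

import hashlib
import hmac
import os

ITERATIONS = 600_000  # illustrative work factor, not a recommendation

def hash_password(password: str) -> tuple[bytes, bytes]:
    # A fresh random salt per password defeats precomputed (rainbow-table) attacks.
    salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, key

def check_password(password: str, salt: bytes, key: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, key)

salt, key = hash_password("correct horse battery staple")
print(check_password("correct horse battery staple", salt, key))  # True
print(check_password("guess", salt, key))                         # False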

Legal dimensions

While few laws explicitly state they are governing cybersecurity, cybersecurity–related provisions are found in an extremely wide array of instruments. Law might incentivise or require certain cybersecurity practices or standards; apply civil or criminal sanctions, or apportion liability, for persons experiencing or taking action which leads to cybersecurity breaches; mandate practices (such as information sharing or interoperability) that themselves have cybersecurity implications; or create public advisory or enforcement bodies with cybersecurity responsibilities.

Data protection and privacy laws generally contain varied provisions with cybersecurity implications. They are, at the time of writing, present in 142 countries around the world (Greenleaf & Cottier, 2020) as well as promoted by the Council of Europe’s Convention 108+ and model laws from several international organisations, such as the Commonwealth (Brown et al., 2020). They often, although not always, span both the public and private sectors, with common stipulations including the creation of an independent supervisory authority; overarching obligations to secure ‘personal’ data or information, often defined by reference to its potential identifiability; data breach notification requirements; obligations to design in enforcement of data protection principles and appoint a data protection officer; and rights that can be triggered by individuals to access, manage and if they wish, erase identifiable data that relates to them.

Other specific laws also contain cybersecurity breach notification (to users and/or regulators) and incident requirements scoped beyond personal data, such as the European eIDAS Regulation (Regulation 910/2014, concerning identity and trust providers) and Network and Information Security Directive (Directive 2016/1148, concerning essential infrastructure, including national infrastructure such as electricity and water as well as ‘relevant digital service providers’, meaning search engines, online marketplaces and cloud computing). While lacking an omnibus federal data protection law, all 50 US states have some form of data breach law, although their precise requirements vary (Kosseff, 2020, Appendix B).

In the EU, the law that would seem the most likely candidate for a horizontal regime is the 2019 Cybersecurity Act (Regulation 2019/881). It, however, provides little of real substantive interest, mainly increasing the coordination and advisory mandates of ENISA, the EU’s cybersecurity agency, and laying the foundation for a state-supported but voluntary certification scheme.

A grab-bag of highly specific cybersecurity laws also exists, such as the California Internet of Things Cybersecurity Law, aimed mostly at forbidding devices from using generic passwords (Cal. Civ. Code § 1798.91.04). These reactive, ad-hoc instruments are often not technologically neutral: they may have clarity and legal certainty in the current situation, but may not be sustainable as technologies change, for example, away from passwords (Koops, 2006). On the other hand, generic laws have also, over time, morphed into cybersecurity laws. The Federal Trade Commission in the US penalises companies for exceptionally poor data security practices under the prohibition of “unfair or deceptive practices” in the FTC Act (15 U.S.C. § 45).

There are, however, limits to the ability of generic laws to morph into cybersecurity laws. Computer misuse laws emerged in legal regimes in part due to the limitations of existing frameworks in capturing digital crime. Before the mid-1980s, the main avenue to prosecuting computer misuse in the US was theft (Kerr, 2003), a rationale which proved strained and unpredictable. The UK saw unsuccessful attempts to repurpose the law of forgery against unauthorised password use (R v Gold [1988] AC 1063), leading to the passing of the Computer Misuse Act 1990.

The US has struggled with the concept of ‘unauthorised’ access in its law. Offences in the Computer Fraud and Abuse Act (CFAA) of 1984 typically occur when individuals enter systems without authorisation, or where they exceed authorised access, mimicking laws of trespass (Kerr, 2016). But the notion of authorisation in digital systems quickly becomes tricky. If a website is designed such that sensitive information is discoverable by typing in a long URL (a problematic “security through obscurity” approach), without any authentication mechanism, is there implicit authorisation? Is an address bar more like a password box, where guessing someone else’s reveals an intent to access material one is not authorised to see, or more like a telephone keypad or a map, where the user is simply exploring?

The CFAA has also created tensions based on its interaction with a site’s terms of service (ToS). This tension centres on whether authorisation is revoked based on statements in these long, legalistic documents that few actually read. For example, such documents often preclude web scraping in broad, vague language (Fiesler et al., 2020), and despite over sixty legal opinions over the last two decades, the legal status of scraping remains “characterized as something just shy of unknowable, or a matter entirely left to the whims of courts” (Sellars, 2018, p. 377). This becomes highly problematic for firms, researchers or journalists, as computer misuse law may effectively turn potential civil liability for breach of contract into criminal liability under the CFAA.

As a consequence, scholars such as Orin Kerr have argued that only the bypassing of authentication requirements, such as stealing credentials or spoofing a log-in cookie, should be seen as creating a lack of authorisation under the CFAA (Kerr, 2016). This contrasts with messy existing case law, which includes prosecution on the basis that an IP address was changed (as IP addresses often do by design) to avoid a simple numeric IP block. Even if this argument were accepted, contingent and subjective social aspects of cybersecurity law would remain, both in computer misuse and in other areas.

Legal instruments around cybercrime and cybersecurity more generally continue to develop—the Council of Europe’s Budapest Convention on Cybercrime was concluded in 2001, seeking to harmonise cybercrime legislation and facilitate international cooperation, and drawing on experiences and challenges of earlier cybersecurity and cybercrime law. It has been ratified/acceded to by 65 countries including the US, which has only ever ratified three Council of Europe treaties. However, the further development of legal certainty in areas of cybersecurity will require yet clearer shared norms of how computing systems, and in particular, the internet, should be used.

Cybersecurity’s broader impact

Here, we select and outline just two broader impacts of cybersecurity—its link to security-thinking in other domains of computing and society, and its effect on institutional structures.

(Cyber)securitisation

While computer security narrowly focussed on the CIA triad, the cybersecurity concept expanded to encompass national security, the use of computers for societally harmful activities (e.g., hatred and incitement to violence; terrorism; child sexual abuse), and attacks on critical infrastructures, including the internet itself (Nissenbaum, 2005). The privileged role of technical experts and discourse inside computer security has given technical blessing to this trend of securitisation (Hansen & Nissenbaum, 2009, p. 1167).

Security is not new to technification, as ‘Cold War rationality’ showed (Erickson et al., 2013). Yet not only have technical approaches arguably been able to take a more privileged position in cybersecurity than any other security sector (Hansen & Nissenbaum, 2009, p. 1168), their success in raising salience through securitisation has resonated widely across computing issues.

For example, privacy engineering has a dominant strand focussing on quantitative approaches to confidentiality, such as minimising theoretical information leakage (Gürses, 2014); while algorithmic fairness and anti-discrimination engineering has also emerged as a similar (and controversial) industry-favoured approach to issues of injustice (Friedler et al., 2019; see Gangadharan & Niklas, 2019). Gürses connects the engineering of security, privacy, dependability and usability—an ideal she claims “misleadingly suggests we can engineer social and legal concepts” (Gürses, 2014, p. 23).

These echoes may have their origins in the very human dimensions of these fast-changing areas, as organisations seek to apply or redeploy employees with security skill sets shaped by strong professional pressures to these recently salient problems (DiMaggio & Powell, 1983), as well as the hype-laden discourse of cybersecurity identified as fuelling a range of problems in the field (Lee & Rid, 2014). While these areas may not yet be able to be considered securitised, insofar as neither privacy nor discrimination is commonly politically positioned as an existential threat to an incumbent political community (Buzan et al., 1998; Cavelty, 2020; see Hansen & Nissenbaum, 2009), neither can they be said to be unaffected by the way cybersecurity and national security, and the forms of computing knowledge and practice considered legitimate in those domains, have co-developed over recent decades.

Institutions

Requirements of cybersecurity knowledge and practice have led states to create new institutions to meet perceived needs for expertise. The location of this capacity differs. In some countries, there may be significant public sector capacity and in-house experts. Universities may have relevant training pipelines and world-leading research groups. In others, cybersecurity might not be a generic national specialism. In these cases, cybersecurity expertise might lie in sector-specific organisations, such as telecommunications or financial services companies, which may or may not be in public hands.

Some governments have set up high-level organisations to co-ordinate cybersecurity capacity-building and assurance in public functions, such as the Australian Cyber Security Centre, the Canadian Centre for Cyber Security, the National Cyber Security Centre (UK and Ghana—soon to become an Authority) and the Cyber Security Agency (Singapore). A new Cybersecurity Competence Centre for the EU is set to be based in Bucharest. Relatedly, and sometimes independently or separately, countries often have cybersecurity strategy groups sitting under the executive (Brown et al., 2020).

Cybersecurity agencies can find themselves providing more general expertise than simply security. During the COVID-19 pandemic, for example, the first version of the UK’s National Health Service (NHS) contact tracing app for use in England had considerable technical input from the government’s signals intelligence agency GCHQ and its subsidiary body the National Cyber Security Centre, which was considered a data controller under UK data protection law (Levy, 2020). Relatedly, these agencies have also been called upon to give advice in various regimes to political parties who are not currently in power—a relationship that would be challenging in countries where peaceful transitions of power cannot be easily taken for granted, particularly given many of these institutions’ close links with national security agencies which may have politically-motivated intelligence operations (Brown et al., 2020).

National Computer Security Incident Response Teams (CSIRTs) are a relatively recent form of institution, which act as a coordinator and a point of contact for domestic and international stakeholders during an incident. Some of these have been established from scratch, while others have been elevated from existing areas of cybersecurity capacity within their countries (Maurer et al., 2015). These expert communities, trusted clearing houses of security information, are found in many countries, sectors and networks, with 109 national CSIRTs worldwide as of March 2019 (International Telecommunication Union, 2019).

CSIRTs can play important international roles, although as they are infrequently enshrined in or required by law, they often occupy a somewhat unusual quasi-diplomatic status (Tanczer et al., 2018). Under the EU’s Network and Information Security Directive, however, all 27 member states must designate a national CSIRT, with ENISA playing a coordinating role.

Some researchers have expressed a more sceptical view of CSIRTs, with Roger Clarke telling the authors: “Regrettably, in contemporary Australia, at least, the concept has been co-opted and subverted into a spook sub-agency seeking ever more power to intrude into the architecture and infrastructure of telecommunications companies, and whatever other ‘critical infrastructure’ organisations take their fancy. Would you like a real-time feed of the number-plates going under toll-road gantries? Easily done!” (personal communication, September 2020).

Conclusion

Understanding cybersecurity is a moving target, just like understanding computing and society. Exactly what is being threatened, how, and by whom are all in flux.

While many may still look on with despair at the insecurities in modern systems, few computing concepts excite politicians more. It is hardly surprising to see the language of security permeate other computing policy concepts as a frame. Politicians talk of keeping the internet safe, dealing with privacy breaches, and defending democracies against information warfare. This makes cybersecurity an important concept for scholars to study and understand, and its legal and institutional adventures instructive for the development of neighbouring domains (although perhaps not always as the best template to follow). Its tools and methodological approach are also a useful training ground for interdisciplinary scholars to gain the skills required to connect and work across social, legal and technical domains.

In a 2014 review, three Canadian Communications Security Establishment science and learning advisers (Craigen et al., 2014) concluded cybersecurity is “used broadly and its definitions are highly variable, context-bound, often subjective, and, at times, uninformative”. In 2017, Matwyshyn noted “‘cyberized’ information security legal discourse makes the incommensurability problems of security worse. It exacerbates communication difficulty and social distance between the language of technical information security experts on the one hand, and legislators, policymakers and legal practitioners on the other” (Matwyshyn, 2017, p. 1150).

It is not clear the situation has since improved in this regard. Cybersecurity has become a catch-all term, attached to the prevention of a very wide range of societal harms seen to be related to computing and communications tools now omnipresent in advanced economies, and increasingly prevalent in emerging economies. There are concerns this has led to a militarisation (Wagner & Vieth, 2016) or securitisation of the concept, and hence of the measures states take as a result. (The UK Ministry of Defence trumpeted the launch of its “first cyber regiment” in 2020.) And the large-scale monitoring capabilities of many cybersecurity tools have led to serious concerns about their impact on human rights (Korff, 2019).

Meanwhile, many computer and social scientists publicly mock 4 the notion of cyber and cyberspace as a separate domain of human action (Graham, 2013). Rid (2016, chapter 9) noted even Wiener “would have disdained the idea and the jargon. The entire notion of a separate space, of cordoning off the virtual from the real, is getting a basic tenet of cybernetics wrong: the idea that information is part of reality, that input affects output and output affects input, that the line between system and environment is arbitrary”. Matwyshyn concluded “[s]ecurity experts fear that in lieu of rigorously addressing the formidable security challenges our nation faces, our legal and policy discussions have instead devolved into a self-referential, technically inaccurate, and destructively amorphous ‘cyber-speak,’ a legalistic mutant called ‘cybersecurity’” (p. 1154).

We have described how notions relating to the protection of information systems—and all the societal functions those systems now support—have become increasingly significant in both academic literature and the broader public and policy discourse. The development of the “Internet of Things” will add billions of new devices over time to the internet, many with the potential to cause physical harm, which will further strengthen the need for security engineering for this overall system (Anderson, 2018).

There appears little likelihood of any clear distinctions developing at this late stage between information security and cybersecurity in practice. It may be that the former simply falls out of common usage in time, as computer security slowly has since 2010—although those with offensive security capabilities (a.k.a. state hacking) still stick resolutely with cyber.

Anderson suggests the continued integration of software into safety-critical systems will require a much greater emphasis on safety engineering, and protection of the security properties of systems like medical devices (even body implants) and automotive vehicles for decades—in turn further strengthening political interest in the subject (2021, p. 2).

Martyn Thomas, a well-known expert in safety-critical system engineering, told us (personal communication, September 2020):

Rather than attackers increasingly finding new ways to attack systems, the greater threat is that developers increasingly release software that contains well-known vulnerabilities – either by incorporating COTS (commercial off-the-shelf) components and libraries with known errors, or because they use development practices that are well known to be unsafe (weakly typed languages, failure to check and sanitise input data, etc.). So, the volume of insecure software grows, and the pollution of cyberspace seems unstoppable.

Powerful states (particularly the US) have since at least the 1970s used their influence over the design and production of computing systems to introduce deliberate weaknesses in security-critical elements such as encryption protocols and libraries (Diffie & Landau, 2010), and even hardware (Snowden, 2019). The US CIA and NSA Special Collection Service “routinely intercepts equipment such as routers being exported from the USA, adds surveillance implants, repackages them with factory seals and sends them onward to customers” (Anderson, 2020, p. 40). It would be surprising if other states did not carry out similar activities.

In the long run, as with most technologies, we will surely take the cyber element of everyday life for granted, and simply focus on the safety and security (including reliability) of devices and systems that will become ever more critical to our health, economies, and societies.

Acknowledgements

The authors thank Roger Clarke, Alan Cox, Graham Greenleaf, Douwe Korff, Chris Marsden, Martyn Thomas and Ben Wagner for their helpful feedback, and all the native speakers who shared their linguistic knowledge.

References

Adams, A., & Sasse, M. A. (1999). Users are not the enemy. Communications of the ACM, 42(12), 40–46. https://doi.org/10.1145/322796.322806

Anderson, R. (2008). Security Engineering: A Guide to Building Dependable Distributed Systems (2nd ed.). Wiley.

Anderson, R. (2018). Making Security Sustainable. Communications of the ACM, 61(3), 24–26. https://doi.org/10.1145/3180485

Anderson, R. (2020). Security Engineering: A Guide to Building Dependable Distributed Systems (3rd ed.). Wiley.

Andreessen, M. (2011, August 20). Why Software Is Eating The World. The Wall Street Journal. https://www.wsj.com/articles/SB10001424053111903480904576512250915629460

Baran, P. (1960). Reliable Digital Communications Systems Using Unreliable Network Repeater Nodes (P-1995 Paper). The RAND Corporation. https://www.rand.org/pubs/papers/P1995.html

Barlow, J. P. (1996). A declaration of the independence of cyberspace. https://www.eff.org/cyberspace-independence

Bell, D. E., & LaPadula, L. J. (1973). Secure Computer Systems: Mathematical Foundations (Technical Report No. 2547; Issue 2547). MITRE Corporation.

Biba, K. J. (1975). Integrity Considerations for Secure Computer Systems (Technical Report MTR-3153). MITRE Corporation.

Brown, I., Marsden, C. T., Lee, J., & Veale, M. (2020). Cybersecurity for elections: A Commonwealth guide on best practice. Commonwealth Secretatiat. https://doi.org/10.31228/osf.io/tsdfb

Buzan, B., Wæver, O., & De Wilde, J. (1998). Security: A new framework for analysis. Lynne Rienner Publishers.

Camp, L. J. (2013). Beyond usability: Security Interactions as Risk Perceptions [Position paper]. https://core.ac.uk/display/23535917

Carmody, S. (2020). ngramr: Retrieve and Plot Google n-Gram Data (1.7.2) [Computer software]. https://CRAN.R-project.org/package=ngramr

Cavelty, M. D. (2020). Cybersecurity between hypersecuritization and technological routine. In E. Tikk & M. Kerttunen (Eds.), Routledge Handbook of International Cybersecurity (1st ed., pp. 11–21). Routledge. https://doi.org/10.4324/9781351038904-3

Chang, F. R. (2012). Guest Editor’s Column. The Next Wave, 19(4). https://www.nsa.gov/Portals/70/documents/resources/everyone/digital-media-center/publications/the-next-wave/TNW-19-4.pdf

Clark, D. D., & Wilson, D. R. (1987). A Comparison of Commercial and Military Computer Security Policies. 1987 IEEE Symposium on Security and Privacy, 184–194. https://doi.org/10.1109/SP.1987.10001

Clarke, R. (2005, May 9). Human-Artefact Hybridisation: Forms and Consequences. Ars Electronica 2005 Symposium, Linz, Austria. http://www.rogerclarke.com/SOS/HAH0505.html

Clarke, R. (2016). Privacy Impact Assessments as a Control Mechanism for Australian National Security Initiatives. Computer Law & Security Review, 32(3), 403–418. https://doi.org/10.1016/j.clsr.2016.01.009

Clarke, R. (2017). Cyberspace, the Law, and our Future [Talk]. Issue Launch of Thematic Issue Cyberspace and the Law, UNSW Law Journal, Sydney. http://www.rogerclarke.com/II/UNSWLJ-CL17.pdf

Cohen, J. E. (2012). Configuring the Networked Self: Law, Code, and the Play of Everyday Practice. Yale University Press. http://juliecohen.com/configuring-the-networked-self

Craigen, D., Diakun-Thibault, N., & Purse, R. (2014). Defining Cybersecurity. Technology Innovation Management Review, 4(10), 13–21. https://doi.org/10.22215/timreview/835

Denning, D. E. R. (1982). Cryptography and data security. Addison-Wesley Longman Publishing Co., Inc.

Diffie, W., & Landau, S. (2010). Privacy on the Line: The Politics of Wiretapping and Encryption. MIT Press. https://library.oapen.org/handle/20.500.12657/26072

DiMaggio, P. J., & Powell, W. W. (1983). The Iron Cage Revisited: Institutional Isomorphism and Collective Rationality in Organizational Fields. American Sociological Review, 48(2), 147. https://doi.org/10.2307/2095101

Erickson, P., Klein, J. L., Daston, L., Lemov, R. M., Sturm, T., & Gordin, M. D. (2013). How Reason Almost Lost its Mind: The Strange Career of Cold War Rationality. The University of Chicago Press. https://doi.org/10.7208/chicago/9780226046778.001.0001

Felt, A. P., Ainslie, A., Reeder, R. W., Consolvo, S., Thyagaraja, S., Bettes, A., Harris, H., & Grimes, J. (2015). Improving SSL Warnings: Comprehension and Adherence. Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems - CHI, 15, 2893–2902. https://doi.org/10.1145/2702123.2702442

Fiesler, C., Beard, N., & Keegan, B. C. (2020). No Robots, Spiders, or Scrapers: Legal and Ethical Regulation of Data Collection Methods in Social Media Terms of Service. Proceedings of the International AAAI Conference on Web and Social Media, 14(1), 187–196.

Freed, D., Palmer, J., Minchala, D., Levy, K., Ristenpart, T., & Dell, N. (2018). “A Stalker’s Paradise”: How Intimate Partner Abusers Exploit Technology. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Paper No. 667. https://doi.org/10.1145/3173574.3174241

Friedler, S. A., Scheidegger, C., Venkatasubramanian, S., Choudhary, S., Hamilton, E. P., & Roth, D. (2019). A comparative study of fairness-enhancing interventions in machine learning. Proceedings of the Conference on Fairness, Accountability, and Transparency - FAT*, 19, 329–338. https://doi.org/10.1145/3287560.3287589

Gangadharan, S. P., & Niklas, J. (2019). Decentering technology in discourse on discrimination. Information, Communication & Society, 22(7), 882–899. https://doi.org/10.1080/1369118X.2019.1593484

Global Cyber Security Capacity Centre. (2016). Cybersecurity Capacity Maturity Model for Nations (CMM) Revised Edition. Global Cyber Security Capacity Centre, University of Oxford. https://doi.org/10.2139/ssrn.3657116

Graham, M. (2013). Geography/internet: Ethereal alternate dimensions of cyberspace or grounded augmented realities? The Geographical Journal, 179(2), 177–182. https://doi.org/10.1111/geoj.12009

Greenleaf, G., & Cottier, B. (2020). 2020 ends a decade of 62 new data privacy laws. Privacy Laws & Business International Report, 163, 24–26.

Grossman, W. (2017, June). Crossing the Streams: Lizzie Coles-Kemp. Research Institute for the Science of Cyber Security Blog.

Gürses, S. (2014). Can you engineer privacy? Communications of the ACM, 57(8), 20–23. https://doi.org/10.1145/2633029

Hansen, L., & Nissenbaum, H. (2009). Digital Disaster, Cyber Security, and the Copenhagen School. International Studies Quarterly, 53(4), 1155–1175. https://doi.org/10.1111/j.1468-2478.2009.00572.x

International Telecommunication Union. (2019, March). National CIRTs Worldwide [Perma.cc record]. https://perma.cc/MSL6-MSHZ

ITU-T. (2008, April 18). X.1205: Overview of cybersecurity. International Telecommunication Union. https://www.itu.int/rec/T-REC-X.1205-200804-I

Kabanov, Y. (2014). Information (Cyber-) Security Discourses and Policies in the European Union and Russia: A Comparative Analysis (Working Paper No. 2014-01). Centre for German and European Studies (CGES). https://zdes.spbu.ru/images/working_papers/wp_2014/WP_2014_1–Kabanov.compressed.pdf

Kanwal, G. (2009). China’s Emerging Cyber War Doctrine. Journal of Defence Studies, 3(3).

Kemmerer, R. A. (2003). Cybersecurity. 25th International Conference on Software Engineering, 2003. Proceedings, 705–715. https://doi.org/10.1109/ICSE.2003.1201257

Kerr, O. S. (2003). Cybercrime’s Scope: Interpreting Access and Authorization in Computer Misuse Statutes. New York University Law Review, 78(5), 1596–1668.

Kerr, O. S. (2016). Norms of Computer Trespass. Columbia Law Review, 116, 1143–1184.

Koops, B.-J. (2006). Should ICT Regulation Be Technology-Neutral? In B.-J. Koops, C. Prins, M. Schellekens, & M. Lips (Eds.), Starting Points for ICT Regulation: Deconstructing Prevalent Policy One-liners (pp. 77–108). T.M.C. Asser Press.

Korff, D. (2019). First do no harm: The potential of harm being caused to fundamental rights and freedoms by state cybersecurity interventions. In Research Handbook on Human Rights and Digital Technology. Elgar.

Kosseff, J. (2020). Cybersecurity law (2nd ed.). Wiley. https://doi.org/10.1002/9781119517436

Kostova, B., Gürses, S., & Troncoso, C. (2020). Privacy Engineering Meets Software Engineering. On the Challenges of Engineering Privacy By Design. ArXiv. http://arxiv.org/abs/2007.08613

Krombholz, K., Busse, K., Pfeffer, K., Smith, M., & Zezschwitz, E. (2019). ‘If HTTPS Were Secure, I Wouldn’t Need 2FA’—End User and Administrator Mental Models of HTTPS. 2019 IEEE Symposium on Security and Privacy (SP), 246–263. https://doi.org/10.1109/sp.2019.00060

Krombholz, K., Mayer, W., Schmiedecker, M., & Weippl, E. (2017). ‘I Have No Idea What I’m Doing’—On the Usability of Deploying HTTPS. Proceedings of the 26th USENIX Security Symposium, 1339–1356. https://www.usenix.org/conference/usenixsecurity17/technical-sessions/presentation/krombholz

Kshetri, N. (2016). Cybersecurity and Development. Markets, Globalization & Development Review, 1(2). https://doi.org/10.23860/MGDR-2016-01-02-03

Lee, R. M., & Rid, T. (2014). OMG Cyber! The RUSI Journal, 159(5), 4–12. https://doi.org/10.1080/03071847.2014.969932

Levy, I. (2020). High level privacy and security design for NHS COVID-19 Contact Tracing App. National Cyber Security Centre. https://www.ncsc.gov.uk/files/NHS-app-security-paper%20V0.1.pdf

Levy, K., & Schneier, B. (2020). Privacy threats in intimate relationships. Journal of Cybersecurity, 6(1). https://doi.org/10.1093/cybsec/tyaa006

Lin, Y., Michel, J.-B., Aiden, E. L., Orwant, J., Brockman, W., & Petrov, S. (2012). Syntactic annotations for the Google Books ngram corpus. Proceedings of the ACL 2012 System Demonstrations, 169–174.

Matwyshyn, A. M. (2017). CYBER! Brigham Young University Law Review, 2017(5), 1109. https://digitalcommons.law.byu.edu/lawreview/vol2017/iss5/6/

Maurer, T., Hohmann, M., Skierka, I., & Morgus, R. (2015). National CSIRTs and Their Role in Computer Security Incident Response [Policy Paper]. New America; Global Public Policy Institute. http://newamerica.org/cybersecurity-initiative/policy-papers/national-csirts-and-their-role-in-computer-security-incident-response/

Maxwell, J. C. (1868). On Governors. Proceedings of the Royal Society of London, 16, 270–283.

Miller, B. (2010, March 1). CIA Triad [Blog post]. Electricfork. http://blog.electricfork.com/2010/03/cia-triad.html

Mitnick, K. D., & Simon, W. L. (2002). The Art of Deception: Controlling the Human Element of Security. Wiley.

Moyle, E. (2019). CSIRT vs. SOC: What’s the difference? In Ultimate guide to cybersecurity incident response [TechTarget SearchSecurity]. https://searchsecurity.techtarget.com/tip/CERT-vs-CSIRT-vs-SOC-Whats-the-difference

Naiakshina, A., Danilova, A., Gerlitz, E., Zezschwitz, E., & Smith, M. (2019). ‘If you want, I can store the encrypted password’: A Password-Storage Field Study with Freelance Developers. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 1–12. https://doi.org/10.1145/3290605.3300370

Neale, M. (2000, October 4). No Maps for These Territories [Documentary]. Mark Neale Productions.

Neumann, A. J., Statland, N., & Webb, R. D. (1977). Post-processing audit tools and techniques. In Z. G. Ruthberg (Ed.), Audit and evaluation of computer security (pp. 2–5). National Bureau of Standards. https://nvlpubs.nist.gov/nistpubs/Legacy/SP/nbsspecialpublication500-19.pdf

Nissenbaum, H. (2005). Where Computer Security Meets National Security. Ethics and Information Technology, 7(2), 61–73. https://doi.org/10.1007/s10676-005-4582-3

Reeder, R. W., Felt, A. P., Consolvo, S., Malkin, N., Thompson, C., & Egelman, S. (2018). An Experience Sampling Study of User Reactions to Browser Warnings in the Field. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 18, 1–13. https://doi.org/10.1145/3173574.3174086

Rid, T. (2016). Rise of the Machines: The lost history of cybernetics. Scribe.

Saltzer, J. H., & Schroeder, M. D. (1975). The protection of information in computer systems. Proceedings of the IEEE, 63(9), 1278–1308. https://doi.org/10.1109/PROC.1975.9939

Sasse, M. A., Brostoff, S., & Weirich, D. (2001). Transforming the ‘Weakest Link’—A Human/Computer Interaction Approach to Usable and Effective Security. BT Technology Journal, 19(3), 122–131. https://doi.org/10.1023/a:1011902718709

Sellars, A. (2018). Twenty Years of Web Scraping and the Computer Fraud and Abuse Act. Boston University Journal of Science & Technology Law, 24(2), 372. https://scholarship.law.bu.edu/faculty_scholarship/465/

Silverstone, R., & Hirsch, E. (1992). Consuming Technologies: Media and Information in Domestic Spaces. Routledge. https://doi.org/10.4324/9780203401491

Snowden, E. (2019). Permanent Record. Pan Macmillan.

Solms, R., & Niekerk, J. (2013). From information security to cyber security. Computers & Security, 38, 97–102. https://doi.org/10.1016/j.cose.2013.04.004

Tanczer, L. M., Brass, I., & Carr, M. (2018). CSIRTs and Global Cybersecurity: How Technical Experts Support Science Diplomacy. Global Policy, 9(S3), 60–66. https://doi.org/10.1111/1758-5899.12625

Wagner, B., & Vieth, K. (2016). Was macht Cyber? Epistemologie und Funktionslogik von Cyber. Zeitschrift für Außen- und Sicherheitspolitik, 9(2), 213–222. https://doi.org/10.1007/s12399-016-0557-1

Ware, W. (1970). Security Controls for Computer Systems: Report of Defense Science Board Task Force on Computer Security (R609-1) [Report]. The RAND Corporation. https://doi.org/10.7249/R609-1

Whitten, A., & Tygar, J. D. (1999). Why Johnny can’t encrypt: A usability evaluation of PGP 5.0. Proceedings of the 8th Conference on USENIX Security Symposium, 8. https://www.usenix.org/legacy/events/sec99/full_papers/whitten/whitten.ps

Wiener, N. (1948). Cybernetics: Or Control and Communication in the Animal and the Machine. MIT Press.

Zittrain, J. L. (2006). The Generative Internet. Harvard Law Review, 119, 1974–2040. http://nrs.harvard.edu/urn-3:HUL.InstRepos:9385626

Appendix 1 – Cybersecurity in other languages 5

Table 1: Terms for cybersecurity (via Google Translate on 14 September 2020, checked by native speakers).

Afrikaans: kubersekuriteit
Arabic: الأمن الإلكتروني
Bengali: সাইবার নিরাপত্তা
Bulgarian: киберсигурност
Chinese: 网络安全
Danish: computersikkerhed
Dutch: cyberbeveiliging
Finnish: kyberturvallisuus
Farsi: امنیت شبکه (or امنیت سایبری / امنیت رایانه)
French: la cyber-sécurité
German: Cybersicherheit (sometimes IT-Sicherheit, Informationssicherheit, or Onlinesicherheit in Austria)
Greek: κυβερνασφάλεια
Hindi: साइबर सुरक्षा
Bahasa Indonesia: keamanan siber
Italian: sicurezza informatica
Japanese: サイバーセキュリティ
Portuguese: cíber segurança
Marathi: सायबर सुरक्षा
Romanian: securitate cibernetica
Russian: кибербезопасность
Spanish: ciberseguridad or (more popularly) seguridad informática
Swahili: usalama wa mtandao
Swedish: cybersäkerhet (or, commonly, IT-säkerhet)
Urdu: سائبر سیکورٹی
Xhosa: ukhuseleko

One important difference between European languages is that some (such as English) differentiate security and safety, while others (such as Swedish and Danish) do not. One sociologist of security noted: “it does frame how you understand the concepts, particularly structure. When you're talking about access control in Swedish it's a different logic than when you talk about it in Anglo-Saxon languages […] In the Scandinavian view of the world there is always a much more socio-technical bent for thinking about security” (Grossman, 2017).

Footnotes

1. The authors use cybersecurity, not cyber security, throughout this text, as it is the form most commonly used in computer science, even in Britain.

2. The second author must admit he has not been immune to this.

3. Ware’s 1970 report begins: “Although this report contains no information not available in a well stocked technical library or not known to computer experts, and although there is little or nothing in it directly attributable to classified sources…”

4. For hundreds of thousands of further examples, see the Twitter hashtag #cybercyber, the @cybercyber account, and Google search results for “cyber cyber cyber", as well as the “cyber song” and video Unsere Cyber Cyber Regierung - Jung & Naiv: Ultra Edition.

5. According to Google Translate, confirmed or updated by native speakers consulted by the authors, including the top-15 most spoken languages according to Wikipedia. With thanks to Eleftherios Chelioudakis, Francis Davey, Fukami, Andreas Grammenos, Hamed Haddadi, Werner Hülsmann, Douwe Korff, Sagwadi Mabunda, Bogdan Manolea, Matthias Marx, Veni Markovski, Grace Mutung'u, Yudhistira Nugraha, Jan Penfrat, Judith Rauhofer, Kaspar Rosager, Eric Skoglund, Anri van der Spuy and Mathias Vermeulen for many of these translations!

Algorithmic bias and the Value Sensitive Design approach

This article belongs to Concepts of the digital society, a special section of Internet Policy Review guest-edited by Christian Katzenbach and Thomas Christian Bächle.

1. Introduction

When, in 2016, investigative journalists at ProPublica published a report indicating that a software system used in US courts was racially biased, a lively debate ensued. In essence, the journalists had found that COMPAS, a decision support tool used by judges and parole officers to assess a defendant's likelihood to re-offend, was systematically overestimating the recidivism risk of black defendants while underestimating that of white defendants (see Angwin et al., 2016). Northpointe, the company that developed COMPAS, disputed the allegations, arguing that its assessment tool was fair because it predicted recidivism with roughly the same accuracy regardless of defendants' ethnicity (see Dieterich et al., 2016). The ProPublica journalists, in turn, held that an algorithmic model cannot be fair if it produces serious errors, that is, false positives (i.e., false alarms) and false negatives (i.e., missed detections), more frequently for one ethnicity than for another, triggering a debate about the very idea of programming fairness into a computer algorithm (see, e.g., Wong, 2019). To date, over 1,000 academic papers have cited the ProPublica article, 1 and its findings have been discussed in popular news outlets around the globe.
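
The statistical crux of this dispute can be made tangible with a small, purely hypothetical calculation; the numbers below are invented for illustration and are not the COMPAS data. They show that a risk score can satisfy predictive parity (equal precision across groups) while still producing very different false-positive and false-negative rates, which is why both sides could point to the statistics in their favour. A minimal sketch in Python:

```python
# Purely illustrative, synthetic counts (not the COMPAS data): equal predictive
# parity across groups can coexist with unequal false-positive/false-negative rates.

groups = {
    # (true positives, false positives, false negatives, true negatives)
    "group_A": (300, 200, 100, 400),
    "group_B": (150, 100, 250, 500),
}

for name, (tp, fp, fn, tn) in groups.items():
    ppv = tp / (tp + fp)   # precision: P(re-offends | flagged high risk)
    fpr = fp / (fp + tn)   # false-positive rate among those who do not re-offend
    fnr = fn / (fn + tp)   # false-negative rate among those who do re-offend
    print(f"{name}: PPV={ppv:.2f}  FPR={fpr:.2f}  FNR={fnr:.2f}")

# Both groups have PPV = 0.60 (the fairness notion invoked by Northpointe),
# yet group_A's false-positive rate is 0.33 against group_B's 0.17 (the kind
# of disparity at the heart of ProPublica's critique).
```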

But the ProPublica case was not a one-off. Rather, it marked the beginning of a series of reports and studies that found evidence for algorithmic bias in a wide range of application areas: from hiring systems (Dastin, 2018) to credit scoring (O'Neil, 2016) to facial recognition software (Buolamwini and Gebru, 2018). Cases such as these, which highlight the potential for automated discrimination based on characteristics such as age, gender, ethnicity, or socio-economic status, have reinvigorated old debates regarding the relationship between technology and society (see, e.g., Winner, 1980), questioning the neutrality of algorithms and inviting discussions about their power to structure and shape, rather than merely reflect, society. However, if technologies are not morally neutral and if the values and disvalues embedded in them have tangible consequences for both individuals and society at large, would this not imply that algorithms should be designed with care and that one should seek not only to detect and analyse problems, but to proactively engage with them through mindful design decisions? 2 Such questions, which are now being discussed within the computer science community, are not new; they have a long and often neglected history within computer science itself—e.g., through research in participatory design—as well as in other fields and disciplines such as computer ethics, philosophy of technology, history of science, or science and technology studies (STS). The most principled attempt to design responsibly and sensitively to human values, however, is the Value Sensitive Design (VSD) approach, which emerged out of this intellectual landscape in the mid-1990s and has been expanded and refined ever since. More recently, and as a result of increased awareness that "data is not a panacea" and that algorithmic techniques can "affect the fortunes of whole classes of people in consistently unfavorable ways" (Barocas and Selbst, 2016, p. 673), interest in the VSD methodology has been growing, raising the question: what insights can the approach offer to ongoing debates about bias and fairness in algorithmic decision-making and machine learning?

This article provides a brief overview of the key features of Value Sensitive Design (Section 2), examines its contributions to understanding and addressing issues around bias in computer systems (Section 3), outlines the current debates on algorithmic bias and fairness in machine learning (Section 4), and discusses how such debates could profit from VSD-derived insights and recommendations (Section 5). Relating these debates on values in design and algorithmic bias to research on cognitive biases, we conclude by stressing our collective duty to not only detect and counter biases in software systems, but to also address and remedy their societal origins (Section 6).

2. Value Sensitive Design: a brief overview

Value Sensitive Design as a theoretically grounded methodology emerged against the backdrop of the rapid computerisation of the 1990s and as a response to a perceived need for a design approach that would account for human values and social context throughout the design process (see Friedman and Hendry, 2019). Indeed, Friedman's (1997) seminal edited book Human Values and the Design of Computer Technology already provided an impressive demonstration of how to conceptualise and address issues around agency, privacy, and bias in computer systems, emphasising the need to "embrace value-sensitive design as part of the culture of computer science" (ibid.: p. 1). At its core, the VSD approach offers a concrete methodology for how to intentionally embed desired values into new technologies. It consists of three iterative phases, namely conceptual-philosophical, empirical, and technical investigations (see Friedman et al., 2006; Flanagan et al., 2008): 3

Conceptual-philosophical investigations encompass both the identification of relevant human values and the identification of relevant direct and indirect stakeholders. Regarding the former, careful working conceptualisations of specific values are meant to (a) clarify fundamental issues raised by the project at hand and (b) enable comparisons across VSD-based studies and research teams. While VSD defines human values relatively broadly as "what is important to people in their lives, with a focus on ethics and morality" (Friedman and Hendry, 2019, p. 24), Friedman et al. (2006, p. 364f) have provided a heuristic list of human values with ethical import 4 that are often implicated in system design. Regarding the latter, by not only considering direct but also indirect stakeholders, VSD aims to counter the frequent neglect of non-users in technology design, that is, of groups which may not use a technology themselves, but who are nonetheless affected by it (see Oudshoorn and Pinch, 2005; Wyatt, 2005). Given that values are often interrelated—consider, e.g., the ongoing debate about the relationship between privacy and security—and that what is important to one group of stakeholders may or may not be important to another group, conceptual investigations are also concerned with the relative importance of different values as well as potential trade-offs between conflicting values.

Empirical investigations make use of a wide range of quantitative and qualitative social science methods (e.g., surveys, interviews, observations, experiments) to provide a better understanding of how stakeholders actually conceive and prioritise values in specific socio-technical contexts. Cultural, historical, national, ethnic, and religious affiliations may play a role in this process and can determine how value conflicts are handled and resolved (see Flanagan et al., 2008, p. 328). Moreover, empirical investigations may reveal differences between espoused practice (what is said) and actual practice (what people do), enabling a more nuanced analysis of design decisions and their impact on usage, thereby complementing the conceptual investigations outlined above. Ultimately, it is through this empirical mode of inquiry that a more situated understanding of the socio-technical system can be derived, facilitating not only the observation of stakeholders' usage and appropriation patterns, but also whether the values envisioned in the design process are fulfilled, amended, or subverted.

Technical investigations are premised on the assumption that any given technological design provides "value suitabilities" (Friedman and Hendry, 2019, p. 34) in that it supports certain values and activities more readily than others. Following Friedman et al. (2008), investigations into these suitabilities can take one of two forms: in the first form, technical investigations focus on how existing technological properties can support or hinder specific human values. This approach bears similarities to the empirical mode, but instead of focusing on individuals, groups, or larger social systems, the emphasis is on the technology itself. In the second form, technological investigations involve the proactive design of systems to support and realise values identified in the conceptual investigation. If, for instance, privacy is a value that ought to be preserved, technical mechanisms must be implemented that further and promote privacy protections rather than diminish them. As specific designs will prioritise certain values over others, technical investigations can reveal both existing (first form) or prospective (second form) value hierarchies, thus adding another layer of insight to the analysis.

Through these three modes of investigation, VSD aims to contribute to the critical analysis of socio-technical systems and the values that have been—intentionally or unintentionally—embedded into them. Accordingly, VSD on the one hand serves as an analytical tool to open up valuation processes within technology design and development that are usually black-boxed or neglected. On the other hand, it provides a constructive tool that enables and supports the realisation of specific desired values in the design and development of new technologies. 5

3. Bias in computer systems

Long before the current debate about algorithmic bias and its consequences, Friedman and Nissenbaum (1996) had already pioneered an analysis of bias in computer systems, arguing that such systems are biased if they "systematically and unfairly discriminate against certain individuals or groups of individuals in favor of others [by denying] an opportunity for a good or [assigning] an undesirable outcome to an individual or groups of individuals on grounds that are unreasonable or inappropriate" (ibid.: p. 332). For Friedman and Nissenbaum it was important to develop a better understanding of bias in computer systems, not least because they considered biased systems to be "instruments of injustice" and stressed that "freedom from bias should be counted among the select set of criteria according to which the quality for systems in use in society should be judged" (ibid.: p. 345f). A good understanding of biases would allow us to identify potential harms in a system and either avoid them in the process of design or correct them if the system is already in use. To this end, Friedman and Nissenbaum provided a taxonomy of biases that remains highly relevant and useful for today's debate on algorithmic bias and discrimination (see, e.g., Dobbe et al., 2019; Cramer et al., 2018). Based on the respective origin of bias, they specified three different types of biases, namely preexisting bias, technical bias, and emergent bias.

According to Friedman and Nissenbaum (1996), preexisting bias has its roots in social institutions, practices, and attitudes and usually exists prior to the creation of the system. It can either originate from individuals who have significant input into the design of the system (individual preexisting bias) or from prejudices that exist in society or culture at large (societal preexisting bias). Importantly, such biases mostly enter a system implicitly and unconsciously rather than through conscious effort.

Technical bias, in turn, arises from technical constraints or considerations. Sources of technical bias may include limitations of computer tools (e.g., in terms of hardware, software, or peripherals), the use of algorithms that have been developed for a different context, and the unwarranted formalisation of human constructs, that is, the attempt to quantify the qualitative and discretise the continuous.

Finally, emergent bias is bias that arises in a context of use, typically some time after a design is completed, as a result of (a) new societal knowledge or changing cultural values that are not or cannot be incorporated into the system design or (b) a mismatch between the users—their expertise and values—assumed in the system design and the actual population using the system.

In sum, Friedman and Nissenbaum's taxonomy of biases is meant to enable designers and researchers to identify and anticipate bias in computer systems by considering individual and societal worldviews, technological properties, and the contexts of use. Their analysis of biases foregrounds the value-laden nature of computer systems and stresses the possibility of mitigating or eliminating potential harms through proactively engaging with the design and development of the systems, which is one of the main objectives of the Value Sensitive Design approach. Consequently, their analysis reflects the double function of VSD as a tool for the analysis of (dis)values in existing technologies and for the construction of novel technologies that account for specific desired values. These two functions, the analytical and the constructive, are also central in recent research on bias and fairness in machine learning.

4. Algorithmic bias and fairness in machine learning

When mathematician Cathy O'Neil published her popular book Weapons of Math Destruction in 2016, the message was clear: Mathematical models, she wrote, can "encod[e] human prejudice, misunderstanding, and bias into the software systems that increasingly manag[e] our lives. [...] Their verdicts, even when wrong and harmful, [are] beyond dispute or appeal. And they ten[d] to punish the poor and the oppressed in our society, while making the rich richer" (O'Neil, 2016, p. 3). To support this claim, O'Neil works through a number of cases—from crime prediction software and personalised online advertising to college ranking systems and teacher evaluation tools to credit, insurance, and hiring algorithms—demonstrating the punitive power such systems can have on those who already suffer from social inequalities and emphasising the task to "explicitly embed better values into our algorithms, creating 'Big Data' models that follow our ethical lead" (ibid.: p. 204). O'Neil's book, along with a few other academic and non-academic texts, was at the forefront of a movement that sought to push back against the depiction of algorithms as fair and objective, showcasing their potential to "so[w] injustice, until we take steps to stop them" (ibid.: p. 203).

In the computer science community, where research on bias and discrimination in computational processes was conducted even prior to the current debate on the impacts of "Big Data" and artificial intelligence (see, e.g., Custers et al., 2013), attempts to detect and prevent such biases intensified. An example of this is the FAT/ML 6 meeting, organised annually from 2014 onwards, which, in light of a growing recognition that techniques such as machine learning raise "novel challenges for ensuring non-discrimination, due process, and understandability in decision-making," sought to "provid[e] researchers with a venue to explore how to characterize and address these issues with computationally rigorous methods" (FAT/ML, 2018). Other events such as the DAT (Data and Algorithmic Transparency) Workshop in 2016 or the FATREC Workshop on Responsible Recommendation in 2017 followed, and the FAT/ML meeting was eventually succeeded by the FAT* and later the ACM FAccT Conference, which seeks to bring together "researchers and practitioners interested in fairness, accountability, and transparency in socio-technical systems" (ACM FAccT Conference, 2021). As mentioned, this research and the VSD approach find common ground in their twofold objective to (a) identify bias and discrimination in algorithmic systems (analytical objective) and (b) create and design fair algorithmic systems (constructive objective). 7

With respect to (a), researchers of algorithmic bias have proposed different frameworks for understanding and locating the sources of algorithmic biases, thereby delineating ways to mitigate or correct them (see, e.g., Baeza-Yates, 2018; Mehrabi et al., 2019; Olteanu et al., 2019). Barocas and Selbst (2016), for instance, provide a detailed description of the different ways that biases can be introduced into a machine learning system, including (i) through problem specification, where the definition of target variables rests on subjective choices that may systematically disadvantage certain populations over others; (ii) through the training data, where biased data sets can lead to discriminatory models and harmful results; (iii) through feature selection, where the reductive representation of real-world phenomena may result in inaccurate determinations and adverse effects; and (iv) through proxy variables, where specific data points are highly correlated with class membership, facilitating disparate treatment and potentially leading to less favorable outcomes for members of disadvantaged groups. In a similar vein, Danks and London (2017) identify different forms of algorithmic bias in autonomous systems, namely (i) training data bias, (ii) algorithmic focus bias, (iii) algorithmic processing bias, (iv) transfer context bias, and (v) interpretation bias. The parallels between such recent approaches and Friedman and Nissenbaum's earlier work become most apparent when considering their common goal to sound a "call for caution" (Barocas and Selbst, 2016, p. 732), provide "a taxonomy of different types and sources of algorithmic bias" (Danks and London, 2017, p. 4691), and offer a "framework for understanding and remedying it" (Friedman and Nissenbaum, 1996, p. 330). In either case, the designation and characterisation of different types of biases is thus seen as a key element of the common analytical objective to recognise and remedy such biases in existing algorithmic systems.
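
The role of proxy variables in particular lends itself to a compact demonstration. The sketch below is a minimal, synthetic illustration rather than a description of any of the systems or studies cited above; the data, feature names, and the use of scikit-learn are our own assumptions. A classifier that never sees the protected attribute can still produce diverging selection rates, because a correlated 'neutral' feature and historically biased labels carry the group information.

```python
# Minimal synthetic illustration of a proxy variable (invented data, no real system).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)               # protected attribute, never shown to the model
zip_code = group + rng.normal(0, 0.3, n)    # 'neutral' feature strongly correlated with group
skill = rng.normal(0, 1, n)                 # genuinely job-relevant feature
# historical labels encode past disadvantage for group 1
hired = (skill + 0.8 * (1 - group) + rng.normal(0, 0.5, n)) > 0.8

X = np.column_stack([zip_code, skill])      # protected attribute excluded from the features
model = LogisticRegression().fit(X, hired)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: selection rate = {pred[group == g].mean():.2f}")
# The model never sees `group`, yet selection rates diverge, because `zip_code`
# proxies for group membership and the labels reflect historical bias.
```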

With respect to (b), and in addition to the analytical task of identifying and mitigating bias, there is also a more constructive aspiration in the machine learning community to design fair algorithms. Kearns and Roth, for instance, describe this aspiration as the "science of socially aware algorithm design" that looks at how algorithms can "incorporate – in a quantitative, measurable, verifiable manner – many of the ethical values we care about as individuals and as a society" (2019, p. 18). Alternatively, research on algorithmic fairness has been characterised as "translat[ing non-discrimination] regulations mathematically into non-discrimination constraints, and develop[ing] predictive modeling algorithms that would be able to take into account those constraints, and at the same time be as accurate as possible." (Žliobaitė, 2017, p. 1061) In other words, algorithmic fairness research does not only aim at identifying and mitigating bias, but more proactively at building the value of fairness into algorithmic systems. Such research generally proceeds from some predefined fairness metrics or fairness constraints, and then aims to develop algorithmic systems that are optimised according to the proposed metrics or satisfy the specified constraints. This process can either take place (i) in the pre-process stage, where input data are modified to ensure that the outcomes of algorithmic calculations when applied to new data will be fair, (ii) during the in-process stage, where algorithms are modified or replaced to generate fair(er) output, or (iii) in the post-process stage, where the output of any model is modified to be fairer. 8 Once again, there are obvious parallels between such computational approaches and VSD's goal of "influencing the design of technology early in and throughout the design process" (Friedman and Hendry, 2019, p. 4). In both cases, the adoption of a proactive orientation is indicative of a shared commitment to progress and improvement through ethical, value-based design. It is a constructive agenda that aims at contributing to responsible innovation rather than taking a purely analytical, after-the-fact approach. As Friedman and Hendry (2019, p. 2) put it: "While empirical study and critique of existing systems is essential, [VSD] is distinctive for its design stance – envisioning, designing, and implementing technology in moral and ethical ways that enhance our futures."
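
To make the post-process stage concrete, the sketch below adjusts group-specific decision thresholds so that selection rates are approximately equal across groups. Demographic parity is used here only as one possible fairness constraint, and the data and threshold rule are invented for illustration; the sketch is not a recommendation of any particular fairness definition, and pre- and in-process interventions would instead act on the training data or the learning objective.

```python
# Simplified post-processing sketch: per-group decision thresholds are chosen
# so that selection rates are (approximately) equal across groups. This uses
# demographic parity purely as one illustrative fairness constraint.
import numpy as np

def equalise_selection_rates(scores, group, target_rate):
    """Return a per-group threshold so each group's selection rate is ~target_rate."""
    thresholds = {}
    for g in np.unique(group):
        s = np.sort(scores[group == g])
        # pick the score at the (1 - target_rate) quantile of this group's scores
        thresholds[g] = s[int((1 - target_rate) * len(s))]
    return thresholds

rng = np.random.default_rng(1)
group = rng.integers(0, 2, 5_000)                # synthetic group membership
scores = rng.beta(2, 5, 5_000) + 0.15 * group    # synthetic model scores, shifted for group 1

thresholds = equalise_selection_rates(scores, group, target_rate=0.3)
decisions = scores >= np.vectorize(thresholds.get)(group)

for g in (0, 1):
    print(f"group {g}: threshold={thresholds[g]:.2f}, "
          f"selection rate={decisions[group == g].mean():.2f}")
```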

5. Discussion

Despite the conceptual similarities outlined above and the fact that the VSD literature is often cited by the FAT (Fairness, Accountability, and Transparency in socio-technical systems) community, the uptake and integration of some of VSD's core ideas in computer science remain inadequate in several important respects.

First, concerns have been raised that the current literature on fairness in machine learning tends to focus too narrowly on how individual or societal biases can enter algorithmic systems—concentrating mostly on what Friedman and Nissenbaum refer to as "preexisting bias"—while ignoring other sources of bias such as technical bias or emergent bias. In response to this, Dobbe and co-authors (2019) have stressed the need for a broader view on algorithmic bias that takes into account all the categories of Friedman and Nissenbaum's (1996) taxonomy and considers "risks beyond those pre-existing in the data" (Dobbe et al., 2019, p. 2). Thus, in order to better fulfill the analytical objective of identifying and mitigating bias in algorithmic systems, it is important that the academic machine learning community does not resort to VSD in an eclectic, piecemeal manner, but rather draws on the full breadth of the proposed frameworks.

Second, it is important to remember that concepts such as fairness are by no means self-explanatory or clear-cut. Verma and Rubin (2018), for instance, point out that more than twenty different notions of fairness have been proposed in AI-related research in the last few years, a lack of agreement that calls into question the very idea of operationalising fairness when seeking to design fair algorithms. Although the idea of fairness and the related concept of 'equality of opportunity' have been extensively discussed in philosophical research (see, e.g., Ryan, 2006; Hooker, 2014; Arneson, 2018), Binns (2018) has argued that most fairness measures in machine learning research tend to be undertheorized from a philosophical perspective, resulting in approaches that focus "on a narrow, static set of prescribed protected classes [...] devoid of context" (ibid.: p. 9). Last but not least, Corbett-Davies and Goel (2018) have highlighted the divergence between formalised notions of fairness and people's common understanding of fairness in everyday decision contexts. What follows from these objections is that attempts to formalise and operationalise fairness in specific ways can be contested on numerous grounds.

Unfortunately, this contestability is often disregarded or downplayed in the presentation of technical solutions, 9 even though recent years have shown a trend toward more interdisciplinary approaches that are conscious of the need to broaden the analytical scope. Proper utilisation of VSD could support such efforts as the method not only requires diligent investigations of the values at stake (see, in particular, the philosophical and technical investigations in the VSD method), but also calls for the involvement of interdisciplinary research teams that include, for example, philosophers, social scientists, or legal scholars. Of course, such interdisciplinary approaches can be challenging and resource intensive, but ethical design ultimately demands more than mechanical, recipe-based treatments of FAT requirements (see Keyes et al., 2019). Striving for truly value-sensitive designs implies being sensitive to the manifold meanings of values in different societal and cultural contexts and requires recognising, relating, and applying different disciplinary competences.

Finally, and on a related note, there is not only a need to expand the breadth of disciplinary perspectives, but also to widen the scope of the object of investigation itself. Simply put, instead of focusing more narrowly on fairness, accountability, and transparency in machine learning, research on algorithmic bias should also account for (a) the broader socio-technical system in which technologies are situated and (b) the different logics and orders that these algorithmic technologies produce and engender. Regarding the former, Gangadharan and Niklas (2019) have warned that the techno-centric focus on embedding fairness in algorithms, which is based on the idea that technical tweaks will suffice to prevent or avoid discriminatory outcomes, runs the danger of ignoring the wider social, political, and economic conditions in which unfairness and inequality arise. Regarding the latter, Hoffmann (2019, p. 910) reminds us that work on algorithmic bias does not only demand sustained attention to system failures but also to "the kinds of worlds being built – both explicitly and implicitly – by and through design, development, and implementation of data-intensive, algorithmically-mediated systems". What would thus be needed is greater attention to the "broader institutional, contextual, and social orders instantiated by algorithmically mediated systems and their logics of reduction and optimization" (ibid.). The FAT community has already made strides in this direction, with the ACM FAT* Conference 2020 explicitly seeking "to sustain and further improve the high quality of computer science research in this domain, while simultaneously extending the focus to law and social sciences and humanities research" (ACM FAcct Conference, 2020). Nevertheless, we believe that a more comprehensive uptake of VSD, which has been conceptualised as an interdisciplinary approach from the very start, could support this process.

6. Concluding remarks

This paper has offered a concise review of the methodology of Value Sensitive Design and the taxonomy of biases proposed by Friedman and Nissenbaum (1996). It has shown that both VSD and the taxonomy of biases remain highly relevant for current research on bias and fairness in socio-technical systems. Despite its usefulness, however, VSD is often taken up only partially and crucial insights—e.g., regarding the conceptual underpinnings of values, the need to consider both users and non-users of a technology, 10 or the importance of interdisciplinarity—are lost. Consequently, it would be advisable to intensify efforts to revitalise and deepen the uptake of Value Sensitive Design in Fairness, Accountability, and Transparency (FAT) and related research. Fortunately, there is indeed a trend to expand the debates and move the discussion beyond the technical domain.

Clearly, the review of VSD and research on algorithmic bias in this paper does not fully capture the evolving debate. Moreover, it is important to note that research on biases goes well beyond the purview of VSD and computer science. Indeed, psychology and the cognitive sciences have long studied cognitive biases (Gigerenzer et al., 2012; Kahneman, 2011) and implicit biases (Holroyd et al., 2017). 11 While Friedman and Nissenbaum's notion of preexisting bias has, to some extent, accounted for implicit biases, the relationship between human cognitive biases and bias in computer systems requires further analyses. Especially in the context of automated decision-making (ADM), where human decisions are complemented—or even replaced—by machine decisions, human cognitive biases can have interesting ramifications for the design and use of ADM systems.

Firstly, cognitive biases can be causally related to biased automated decision-making. Cognitive limitations and biases may for instance contribute to the formation of societal stereotypes, prejudices and unwarranted preferences, or poor decision-making practices (e.g., through the defective interpretation of probabilities), which are fed into ADM systems through training data, thereby hiding while at the same time reproducing and reinforcing such biases in seemingly neutral machines.

Secondly, and conversely, ADM systems can also reduce and/or eliminate cognitive biases by accounting for and possibly correcting flaws in human reasoning (see, e.g., Savulescu and Maslen, 2015; Sunstein, 2018). In this respect, if designers and researchers of ADM systems can a) identify the sources of cognitive biases and b) counter them through specific methodological choices in designing and implementing the system, such systems can be conceived as tools to both disclose cognitive biases in human decision-making and to reduce or even prevent their negative impacts through sophisticated human-machine interaction in decision-making.

Finally, unwarranted delegation of human decision-making to machines can be a cognitive bias in itself, known as automation bias (Mosier and Skitka, 1996) or automation complacency (Parasuraman and Manzey, 2010). Automation bias is characterised by the human tendency to over-trust and over-rely on allegedly neutral machines, such that people follow wrong (or questionable) 'decisions' from the machines without seeking further corroborative or contradictory information, or even discount information from other existing sources (Skitka et al., 1999). Relatedly, automation complacency describes human operators' belief in the system's reliability, thereby causing them to pay insufficient attention to monitoring the process and to verifying the outputs of the system. Thus, recognising the dangers of automation bias and automation complacency—i.e., of overreliance on automated decision-making—brings us right back to Friedman and Nissenbaum's early warnings regarding biases in seemingly accurate, neutral, and objective computer systems, and their timely request to actively expose and counter them for better design and informed public discourse on the merits and limitations of such software tools. However, improving our tools will only bring us so far—accounting for values and countering bias also requires us to acknowledge and remedy existing inequalities and injustices in our societies and to concede that not all decision-making processes should be conducted by algorithms.

References

ACM FAccT Conference. (2020). ACM FAT* Conference 2020. https://facctconference.org/2020/

ACM FAccT Conference. (2021). ACM FAccT Conference 2021. https://facctconference.org/2021/index.html

Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine Bias. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

Arneson, R. (2018). Four Conceptions of Equal Opportunity. The Economic Journal, 128(612), 152–173. https://doi.org/10.1111/ecoj.12531

Baeza-Yates, R. (2018). Bias on the web. Communications of the ACM, 61(6), 54–61. https://doi.org/10.1145/3209581

Barocas, S., & Selbst, A. D. (2016). Big data’s disparate impact. California Law Review, 104(3), 671–732. https://doi.org/10.15779/Z38BG31

Bellamy, R. K. E., Dey, K., Hind, M., Hoffman, S. C., Houde, S., Kannan, K., & Zhang, Y. (2019). AI Fairness 360: An extensible toolkit for detecting and mitigating algorithmic bias. IBM Journal of Research and Development, 63(4/5), 1–15. https://doi.org/10.1147/JRD.2019.2942287

Binns, R. (2018). Fairness in machine learning: Lessons from political philosophy. Proceedings of Machine Learning Research, 81, 149–159. http://proceedings.mlr.press/v81/binns18a.html

Brey, P. (2010). Values in technology and disclosive computer ethics. In L. Floridi (Ed.), The Cambridge handbook of information and computer ethics (pp. 41–58). Cambridge University Press. https://doi.org/10.1017/CBO9780511845239.004

Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of Machine Learning Research, 81, 1–15. http://proceedings.mlr.press/v81/buolamwini18a.html

Corbett-Davies, S., & Goel, S. (2018). The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning. ArXiv. https://arxiv.org/abs/1808.00023

Cramer, H., Garcia-Gathright, J., Springer, A., & Reddy, S. (2018). Assessing and Addressing Algorithmic Bias in Practice. Interactions, 25(6), 58–63. https://doi.org/10.1145/3278156

Custers, B., Calders, T., Schermer, B., & Zarsky, T. (Eds.). (2013). Discrimination and Privacy in the Information Society Data Mining and Profiling in Large Databases. Springer. https://doi.org/10.1007/978-3-642-30487-3

Danks, D., & London, A. J. (2017). Algorithmic bias in autonomous systems. Proceedings of the 26th International Joint Conference on Artificial Intelligence, 4691–4697. https://dl.acm.org/doi/10.5555/3171837.3171944

Dastin, J. (2018). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G

Davis, J., & Nathan, L. P. (2014). Value sensitive design: Applications, adaptions, and critique. In J. Hoven, P. E. Vermaas, & I. Poel (Eds.), Handbook of Ethics, Values, and Technological Design (pp. 1–26). Springer. https://doi.org/10.1007/978-94-007-6970-0_3

Dieterich, W., Mendoza, C., & Brennan, T. (2016). COMPAS Risk Scales: Demonstrating accuracy equity and predictive parity [Technical report]. Northpointe. https://www.documentcloud.org/documents/2998391-ProPublica-Commentary-Final-070616.html

Dobbe, R., Dean, S., Gilbert, T., & Kohli, N. (2018). A Broader View on Bias in Automated Decision-Making: Reflecting on Epistemology and Dynamics. 2018 Workshop on Fairness, Accountability and Transparency in Machine Learning. http://arxiv.org/abs/1807.00553

FAT/ML. (2018). Fairness, Accountability, and Transparency in Machine Learning. https://www.fatml.org/

Flanagan, M., Howe, D. C., & Nissenbaum, H. (2008). Embodying Values in Technology: Theory and Practice. In J. Hoven & J. Weckert (Eds.), Information Technology and Moral Philosophy (pp. 322–353). Cambridge University Press. https://doi.org/10.1017/CBO9780511498725.017

Friedler, S. A., Scheidegger, C., Venkatasubramanian, S., Choudhary, S., Hamilton, E. P., & Roth, D. (2019). A comparative study of fairness-enhancing interventions in machine learning. Proceedings of the Conference on Fairness, Accountability, and Transparency - FAT*, 19, 329–338. https://doi.org/10.1145/3287560.3287589

Friedman, B. (Ed.). (1997). Human Values and the Design of Computer Technology. Cambridge University Press.

Friedman, B., & Hendry, D. G. (2019). Value Sensitive Design: Shaping Technology with Moral Imagination. MIT Press.

Friedman, B., Kahn, P. H., & Borning, A. (2006). Value Sensitive Design and Information Systems. In P. Zhang & D. Galetta (Eds.), Human-Computer Interaction in Management Information Systems: Foundations (pp. 348–372). M.E. Sharpe.

Friedman, B., & Nissenbaum, H. (1996). Bias in Computer Systems. ACM Transactions on Information Systems, 14(3), 330–347. https://doi.org/10.1145/230538.230561

Gangadharan, S. P., & Niklas, J. (2019). Decentering technology in discourse on discrimination. Information, Communication & Society, 22(7), 882–899. https://doi.org/10.1080/1369118X.2019.1593484

Gigerenzer, G., Fiedler, K., & Olsson, H. (2012). Rethinking Cognitive Biases as Environmental Consequences. In P. M. Todd & G. Gigerenzer (Eds.), Ecological Rationality: Intelligence in the World (pp. 80–110). Oxford University Press. https://doi.org/10.1093/acprof:oso/9780195315448.001.0001

Hoffmann, A. L. (2019). Where fairness fails: Data, algorithms, and the limits of antidiscrimination discourse. Information, Communication & Society, 22(7), 900–915. https://doi.org/10.1080/1369118X.2019.1573912

Holroyd, J., Scaife, R., & Stafford, T. (2017). What is Implicit Bias? Philosophy Compass, 12, e12437. https://doi.org/10.1111/phc3.12437

Hooker, B. (2014). Utilitarianism and fairness. In B. Eggleston & D. Miller (Eds.), Cambridge Companion to Utilitarianism (pp. 280–302). Cambridge University Press. https://doi.org/10.1017/CCO9781139096737.015

Kahneman, D. (2011). Thinking, Fast and Slow. Allen Lane.

Kearns, M., & Roth, A. (2020). The ethical algorithm: The science of socially aware algorithm design. Oxford University Press.

Keyes, O., Hutson, J., & Durbin, M. (2019). A Mulching Proposal: Analysing and Improving an Algorithmic System for Turning the Elderly into High-Nutrient Slurry. Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems, 1–11. https://doi.org/10.1145/3290607.3310433

Kluttz, D. N., Kohli, N., & Mulligan, D. K. (2020). Shaping Our Tools: Contestability as a Means to Promote Responsible Algorithmic Decision Making in the Professions. In K. Werbach (Ed.), After the Digital Tornado: Networks, Algorithms, Humanity (pp. 137–152). Cambridge University Press. https://www.cambridge.org/core/books/after-the-digital-tornado/shaping-our-tools-contestability-as-a-means-to-promote-responsible-algorithmic-decision-making-in-the-professions/311281626ECA50F156A1DDAE7A02CECB

Lepri, B., Oliver, N., Letouzé, E., Pentland, A., & Vinck, P. (2018). Fair, transparent, and accountable algorithmic decision-making processes. Philosophy & Technology, 31, 611–627. https://doi.org/10.1007/s13347-017-0279-x

Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., & Galstyan, A. (2019). A Survey on Bias and Fairness in Machine Learning. ArXiv. https://arxiv.org/abs/1908.09635

Mosier, K. L., & Skitka, L. J. (1996). Human Decision Makers and Automated Decision Aids: Made for Each Other? In R. Parasuraman & M. Mouloua (Eds.), Automation and Human Performance: Theory and Applications (pp. 201–220). Lawrence Erlbaum Associates.

Olteanu, A., Castillo, C., Diaz, F., & Kıcıman, E. (2019). Social data: Biases, methodological pitfalls, and ethical boundaries. Frontiers in Big Data, 2. https://doi.org/10.3389/fdata.2019.00013

O’Neil, C. (2016). Weapons of Math Destruction. How Big Data Increases Inequality and Threatens Democracy. Crown Publishers.

Oudshoorn, N., & Pinch, T. (2005). How Users and Non-users Matter. In N. Oudshoorn & T. Pinch (Eds.), How Users Matter: The Co-Construction of Users and Technology (pp. 1–28). MIT Press.

Parasuraman, R., & Manzey, D. H. (2010). Complacency and Bias in Human Use of Automation: An Attentional Integration. Human Factors, 52(3), 381–410. https://doi.org/10.1177/0018720810376055

Ryan, A. (2006). Fairness and Philosophy. Social Research, 73(2), 597–606. https://www.jstor.org/stable/40971838

Savulescu, J., & Maslen, H. (2015). Moral enhancement and artificial intelligence: Moral AI? In J. Romportl, E. Zackova, & J. Kelemen (Eds.), Beyond Artificial Intelligence: The Disappearing Human-Machine Divide (pp. 79–95). Springer. https://doi.org/10.1007/978-3-319-09668-1_6

Simon, J. (2017). Value-sensitive design and responsible research and innovation. In S. O. Hansson (Ed.), The ethics of technology methods and approaches (pp. 219–235). Rowman & Littlefield.

Skitka, L. J., Mosier, K. L., & Burdick, M. (1999). Does Automation Bias Decision-Making? International Journal of Human–Computer Studies, 51, 991–1006. https://doi.org/10.1006/ijhc.1999.0252

Sunstein, C. R. (2019). Algorithms, correcting biases. Social Research, 86(2), 499–511. https://www.muse.jhu.edu/article/732187

Verma, S., & Rubin, J. (2018). Fairness definitions explained. Proceedings of the International Workshop on Software Fairness, 1–7. https://doi.org/10.1145/3194770.3194776

Winner, L. (1980). Do Artifacts Have Politics? Daedalus, 109(1), 121–136. https://www.jstor.org/stable/20024652

Wong, P. H. (2019). Democratizing Algorithmic Fairness. Philosophy & Technology, 33, 225–244. https://doi.org/10.1007/s13347-019-00355-w

Wong, P. H., & Simon, J. (2020). Thinking About ‘Ethics’ in the Ethics of AI. IDEES, 48. https://revistaidees.cat/en/thinking-about-ethics-in-the-ethics-of-ai/

Wyatt, S. (2005). Non-Users Also Matter: The Construction of Users and Non-Users of the Internet. In N. Oudshoorn & T. Pinch (Eds.), How Users Matter: The Co-Construction of Users and Technology (pp. 67–79). MIT Press.

Žliobaitė, I. (2017). Measuring discrimination in algorithmic decision making. Data Mining and Knowledge Discovery, 31(4), 1060–1089. https://doi.org/10.1007/s10618-017-0506-1

Footnotes

1. See the article's citation count on Google Scholar at https://scholar.google.com/scholar?cites=9718961392046448783&as_sdt=2005&sciodt=0,5&hl=en.

2. In this paper, we use the term value to refer to "[t]hose things that people find valuable that are both ideal and general" and the term disvalue to refer to "those general qualities that are considered to be bad or evil" (Brey, 2010, p. 46).

3. The following paragraphs are a reworked and expanded version of section 1 in "Value-Sensitive Design as a Methodology" (Simon, 2017).

4. Examples of such "values with ethical import" include privacy, meaning "the right of an individual to determine what information about himself or herself can be communicated to others"; autonomy, meaning "people's ability to decide, plan, and act in ways that they believe will help them to achieve their goals"; or informed consent, which refers to "garnering people's agreement, encompassing criteria of disclosure and comprehension (for 'informed') and voluntariness, competence, and agreement (for 'consent')" (Friedman et al., 2006, p. 364).

5. Of course, like any mature methodology, the Value Sensitive Design approach has also been subject to a good deal of critique (see Friedman and Hendry, 2019, p. 172f). For a comprehensive review of these critiques, see Davis and Nathan (2014).

6. The acronym FAT/ML stands for Fairness, Accountability and Transparency in Machine Learning.

7. From a VSD perspective, the development of a "fair" algorithmic system would entail the embedding of specific values such as fairness, accountability, or transparency into the system.

8. See Lepri et al. (2018), Bellamy et al. (2019), and Friedler et al. (2019) for an overview of these techniques.

9. For a detailed discussion of the concept of contestability and the importance of contestable design, see Kluttz et al., 2020.

10. For a more detailed discussion on the need to also take non-users into account, see Wong (2019) and Wong and Simon (2020).

11. It should be noted that cognitive bias and implicit bias do not necessarily carry the negative moral connotation that bias has in VSD.

Towards platform observability

1. Introduction

Platforms are large-scale infrastructures specialised in facilitating interaction and exchange among independent actors. Whether understood economically as two- or multi-sided markets (Langley & Leyshon, 2017) or with an eye on online media as services that ‘host, organize, and circulate users’ shared content or social interactions’ (Gillespie, 2018, p. 18), platforms have not only become highly visible and valuable companies but also raise important social challenges. While intermediaries have in one form or another existed for millennia, contemporary platforms are relying on digital technologies in (at least) two fundamental ways. First, platforms ‘capture’ (Agre, 1994) activities by channelling them through designed functionalities, interfaces, and data structures. Uber, for example, matches riders with drivers in physical space, handles payment, and enforces ‘good behaviour’ through an extensive review system covering both parties. This infrastructural capture means that a wide variety of data can be generated from user activity, including transactions, clickstreams, textual expressions, and sensor data such as location or movement speed. Second, the available data and large numbers of users make algorithmic matching highly attractive: ranking, filtering, and recommending have become central techniques for facilitating the ‘right’ connections, whether between consumers and products, users and contents, or between people seeking interaction, friendship, or love.
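
To make this mechanism concrete, the following toy sketch ranks items for a user by collapsing captured behavioural signals into a single relevance score; the item attributes, signal names, and weights are invented for illustration only and do not describe how any actual platform ranks content.

```python
# Toy ranking sketch: combine captured interaction signals into a single
# relevance score per item and sort. Signal names and weights are invented.
from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    predicted_click: float   # model estimate derived from the user's clickstream
    predicted_dwell: float   # model estimate of expected time spent on the item
    recency: float           # freshness signal in [0, 1]

def rank(items, weights=(0.6, 0.3, 0.1)):
    """Order items by a weighted sum of signals, highest score first."""
    w_click, w_dwell, w_recency = weights
    score = lambda it: (w_click * it.predicted_click
                        + w_dwell * it.predicted_dwell
                        + w_recency * it.recency)
    return sorted(items, key=score, reverse=True)

feed = rank([
    Item("a", 0.30, 0.10, 0.9),
    Item("b", 0.05, 0.80, 0.2),
    Item("c", 0.20, 0.50, 0.5),
])
print([it.item_id for it in feed])   # ['c', 'a', 'b']
```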

Digital platforms host social exchange in ways that Lawrence Lessig (1999) summarised under the famous slogan ‘code is law’, which holds that technical means take part in regulating conduct and shaping outcomes. The combination of infrastructural capture and algorithmic matching results in forms of socio-technical ordering that make platforms particularly powerful. As Zuboff (2019, p. 15) discusses under the term surveillance capitalism, the tight integration of data collection and targeted ‘intervention’ has produced ‘a market form that is unimaginable outside the digital milieu’. The rising power of platforms poses the question of what kind of accountability is necessary to understand these processes and their consequences in more detail. Matching algorithms, in particular, represent ordering mechanisms that do not follow the same logic as traditional decision-making, leading to considerable uncertainty concerning their inner workings, performativities, and broader social effects.

So far, most regulatory approaches to tackling these questions seek to create accountability by ‘opening the black box’ of algorithmic decision-making. A recent EU regulation on fairness in platform-to-business relations, for example, proposes transparency as its principal means. 1 The public debate about the upcoming EU Digital Services Act indeed shows that calls for transparency of algorithmic power have gained support across parliamentary factions and stakeholder groups. 2 The ‘Filter Bubble Transparency Act’—a US legislative proposal that seeks to protect users from being ‘manipulated by algorithms driven by user-specific data’—focuses more specifically on platforms as media, but again relies on transparency as a guiding principle. 3 The German Medienstaatsvertrag (‘State Media Treaty’), which has recently been ratified by all state parliaments, explicitly requires platform operators to divulge criteria for ranking, recommendation, and personalisation ‘in a form that is easily perceivable, directly reachable, and permanently available’. 4 This widespread demand for disclosure and explanation articulates not only justified concerns about the opacity of platforms but also testifies to the glaring lack of information on their conduct and its social, political, and economic repercussions.

In this paper, we likewise take up the challenge posed by platform opacity from the angle of accountability but seek to probe the conceptual and practical limitations of these transparency-led approaches to platform regulation. Echoing the critical literature on transparency as a policy panacea (e.g., Etzioni, 2010; Ananny & Crawford, 2018), we propose the concept of observability as a more pragmatic way of thinking about the means and strategies necessary to hold platforms accountable. While transparency and observability are often used synonymously (e.g. August & Osrecki, 2019), we would like to highlight their semantic differences. Unlike transparency, which nominally describes a state that may exist or not, observability emphasises the conditions for the practice of observing in a given domain. These conditions may facilitate or hamper modes of observing and impact the capacity to generate external insights. Hence, while the image of the black box more or less skips the practicalities involved in opening it, the term observability intends to draw attention to and problematise the process dimension inherent to transparency as a regulatory tool.

While observability incorporates similar regulatory goals to transparency, it also deviates in important respects, most importantly by understanding accountability as a complex, dynamic ‘social relation’ (Bovens, 2007, p. 450), which is embedded in a specific material setting. The goal is not to exchange one concept for the other but to sharpen our view for the specificities of platform power. At the risk of stating the obvious, regulatory oversight needs to take into account the material quality of the objects under investigation. Inspecting the inner workings of a machine learning system differs in important ways from audits in accounting or the supervision of financial markets. Rather than nailing down ‘the algorithm’, understood as a singular decision mechanism, the concept of observability seeks to address the conditions, means, and processes of knowledge production about large-scale socio-technical systems. In the everyday life of platforms, complex technologies, business practices, and user appropriations are intersecting in often unexpected ways. These platform dynamics result in massive information asymmetries that affect stakeholder groups as well as societies at large. Regulatory proposals need to take a broader view to live up to these challenges.

Our argument proceeds in three steps. In the next section, we retrace some of the main problems and limitations of transparency, paying specific attention to technical complexity. The third section then discusses the main principles guiding the observability concept and provides concrete examples and directions for further discussion. We conclude by arguing for a policy approach to promoting observability, emphasising that institutional audacity and innovation are needed to tackle the challenges raised by digital platforms.

2. Limitations to transparency

Much of the debate around our insufficient understanding of platforms and their use of complex algorithmic techniques to modulate users’ experience has centred on the metaphor of a ‘black box’. Although Frank Pasquale, whose Black Box Society (2015) has popularised the term beyond academia, prefers the broader concept of intelligibility, the talk of black boxes is often accompanied by demands for transparency. The regulatory proposals mentioned above are largely organised around mechanisms such as explanations, disclosures, and—more rarely—audits 5 that would bring the inner workings of the machine to light and thereby establish some form of control. But these calls for transparency as a remedy against unchecked platform power encounter two sets of problems. First, the dominant understanding of transparency as information disclosure faces important limitations. Second, the object under scrutiny itself poses problems. Platforms are marked by opacity and complexity, which effectively challenges the idea of a black box whose lid can be lifted to look inside. This section discusses both of these issues in turn.

2.1. Accountability as mediated process

Transparency has a long tradition as a ‘light form’ (Etzioni, 2010) of regulation. It gained new popularity in the 1970s as a neoliberal governance method, promising better control of organisational behaviour through inspection (August & Osrecki, 2019). Transparency is seen as an essential means of oversight and of holding commercial and public entities to account: only if powerful organisations reveal relevant information about their actions are we able to assess their performance. This understanding of transparency implies a number of taken-for-granted assumptions, which link information disclosure to visibility, visibility to insight, and insight to effective regulatory judgement (Ananny & Crawford, 2018, p. 974). According to this view, transparency is able to reveal the truth by reflecting the internal reality of an organisation (Albu & Flyverbom, 2019, p. 9) and thereby creating ‘representations that are more intrinsically true than others’ (Ananny & Crawford, 2018, p. 975). Making the opaque and hidden visible creates truth, and truth enables control, which serves as a ‘disinfectant’ (Brandeis, 1913, p. 10) capable of eliminating malicious conduct. Transparency is considered crucial for the accountability of politics because seeing, just as in the physical world, is equated with knowing: ‘what is seen is largely what is happening’, as Ezrahi (1992, p. 366) summarises this view. These assumptions also inform current considerations on platform regulation.

However, recent research on transparency has shown that transparency does more and different things than shedding light on what is hidden. The visibility of an entity and its procedures is not simply a disclosure of pre-existing facts, but a process that implies its own perspective. While transparency requirements expect ‘to align the behavior of the observed with the general interest of the observers’, empirical studies found that ‘transparency practices do not simply make organizations observable, but actively change them’ (August & Osrecki, 2019, p. 16). As Flyverbom (2016, p. 15) puts it, ‘transparency reconfigures - rather than reproduces - its objects and subjects’. The oversight devices used to generate visibility shape what we get to see (Ezrahi, 1992; Flyverbom, 2016), which calls into question the idea that accurate disclosure alone grants direct, unmediated access to reality.

From a social science perspective, transparency should not be regarded as a state or a ‘thing’ but as the practice ‘of deciding what to make present (i.e. public and transparent) and what to make absent’ (Rowland & Passoth, 2015, p. 140). Creating visibility and insights as part of regulatory oversight consists of specific procedures, which involve choices about what specifically should be exposed and how, what is relevant and what can be neglected, which elements should be shown to whom and, not least, how the visible aspects should be interpreted (Power, 1997). In their critique of transparency-led approaches to algorithmic accountability, Ananny & Crawford (2018) moreover argue that there is a distinct lack of sensitivity to fundamental power imbalances, strategic occlusions, and false binaries between secrecy and openness, as well as a broad adherence to neoliberal models of individual agency.

In light of these criticisms, it may not come as a surprise that regulatory transparency obligations often fall short of their goals and create significant side-effects instead. Among the most common unintended outcomes are bureaucratisation, generalised distrust, and various forms of ‘window dressing’ designed to hide what is supposed to be exposed to external review. Informal organisational practices emerge and coexist with official reports, accounts, and presentations (August & Osrecki, 2019, p. 21). While the critical literature on regulatory failures of transparency obligations is increasing, these insights have yet to make an impact on regulatory thinking. Most regulatory proposals resort to traditional ideas of external control through transparency and frame transparency as a straightforward process of disclosure. As a result, they fail to grapple with the complex and conflictual task of creating meaningful understanding that can serve as an effective check on platform power.

Taken together, a social science perspective on this key ideal of regulation suggests that making platforms accountable requires a critical engagement with the achievements and shortcomings of transparency. It needs to take on board efforts to combine different forms of evidence and, above all, to become attentive to the selective and mediated character of knowledge-building. Similar to the flawed logic of ‘notice and consent’ in the area of privacy protection, which holds that informing individuals of the purposes of data collection allows them to exercise their rights, a superficial understanding of transparency in the area of platform regulation risks producing ineffective results (see Obar, 2020; Yeung, 2017).

2.2. Opacity, complexity, fragmentation

A second set of complications for transparency concerns algorithms and platforms as the actual objects of scrutiny. Large-scale technical systems, in particular those incorporating complex algorithmic decision-making processes, pose severe challenges for assessing their inner workings and social effects. One obvious reason for this is indeed their opacity. As Burrell (2016, p. 2) argues, opacity may stem from secrecy practices, lack of expertise in reading code, and the increasing ‘mismatch between mathematical optimization in high-dimensionality characteristic of machine learning and the demands of human-scale reasoning’. The last point in particular introduces significant challenges to transparency understood as information disclosure or audit. Even if decision procedures behind automated matchmaking can sometimes still be meticulously specified, platforms nowadays mainly deploy statistical learning techniques. These techniques develop decision models inductively and ‘learn programs from data’ (Domingos, 2012, p. 81), based on an arrangement between data, feedback, and a given purpose (see Rieder, 2020).

In the canonical example of spam filtering, users label incoming emails as spam or not spam. Learning consists in associating each word in these messages with these two categories or ‘target variables’. Since every word contributes to the final decision to mark an incoming message as spam or not spam, the process cannot be easily traced back to singular factors. Too many variables come into play, and these algorithms are therefore not ‘legible’ in the same way as more tangible regulatory objects. With regard to regulatory oversight, this means that transparency in the sense of reconstructing the procedure of algorithmic decision making ‘is unlikely to lead to an informative outcome’, as Koene et al. (2019, p. II) conclude. Audits are unable to find out ‘what the algorithm knows because the algorithm knows only about inexpressible commonalities in millions of pieces of training data’ (Dourish, 2016, p. 7). There is a large gulf between the disclosure of ‘fundamental criteria’ mandated by regulatory proposals like the Medienstaatsvertrag and the technical complexities at hand.
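To make the contrast with singular, traceable decision rules concrete, the following minimal sketch (in Python, using scikit-learn and invented toy data; it is an illustration, not any platform’s actual filter) trains a bag-of-words classifier of the kind described above. Even in this tiny example, the learned model consists of one weight per observed word, and the classification of a new message results from the combined contribution of all of them rather than from a single criterion that could simply be disclosed.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy training data: user-labelled messages (1 = spam, 0 = not spam)
messages = [
    "win a free prize now",
    "cheap pills buy now",
    "meeting moved to tuesday",
    "minutes from the call attached",
]
labels = [1, 1, 0, 0]

vectoriser = CountVectorizer()
X = vectoriser.fit_transform(messages)       # one feature (column) per observed word
model = LogisticRegression().fit(X, labels)  # one learned weight per word

print(X.shape[1], "words,", model.coef_.size, "learned weights")
print(model.predict(vectoriser.transform(["free prize, buy now"])))  # likely [1]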

Even if regulators were given access to data centres and source code, the process of sense-making would not be straightforward. Reading the gist of an algorithm from complex code may run into difficulties, even if no machine learning is involved. As Dourish (2016) shows, the presence of different programming languages and execution environments adds further complications, and so do the many subsystems and modules that concrete programmes often draw on. Algorithmic decision procedures ‘may not happen all in one place’ (Dourish, 2016, p. 4) but can be distributed over many different locations in a large programme or computer network. In the case of online advertising, for example, the placement of a single ad may entail a whole cascade of real-time auctions, each drawing on different algorithms and data points, each adding something to the final outcome. The result is a continuously evolving metastable arrangement. Thus, time becomes a crucial analytical factor, causing considerable difficulties for the ‘snapshot logic’ underlying most audit proposals.

For these reasons, algorithms turn out to be difficult to locate. In his ethnographic study of a recommender system, Seaver (2017) observes that even in small companies it can be a challenge for staff members to explain where exactly ‘the algorithm’ is. As Bogost (2015) quips, ‘[c]oncepts like “algorithm” have become sloppy shorthands, slang terms for the act of mistaking multipart complex systems for simple, singular ones’. What is referred to as ‘algorithm’, i.e. the actual matchmaking technique, may thus only be a small component in a much larger system that includes various other instances of ordering, ranging from data modelling to user-facing interfaces and functions that inform and define what users can see and do. YouTube, for example, not only fills its recommendation pipeline with a broad array of signals generated from the activities of billions of users but actually uses two different deep learning models for ‘candidate generation’ (the selection of hundreds of potential videos from the full corpus) and ‘ranking’ (the selection and ordering of actual recommendations from the candidate list) (see Covington et al., 2016). The fuzzy, dynamic, and distributed materiality of contemporary computing technologies and data sets means that algorithmic accountability is harder to put into practice than the call for transparency suggests. Regulatory proposals such as disclosures, audits, or certification procedures seeking to establish effective control over their functionality and effects assume properties that algorithmic systems may often not meet. Suffice it to say that technical complexity also facilitates the attempts at dissimulation and ‘window dressing’ mentioned above.

Yet, as if this were not difficult enough, our understanding of platform accountability must extend beyond the oversight of algorithms and platform conduct if it is to be meaningful. The ordering power of platforms also encompasses shared or distributed accomplishments (see Suchman, 2007) to which platforms, users and content providers each contribute in specific ways. As Rahwan et al. (2019, p. 477) argue, machine behaviour ‘cannot be fully understood without the integrated study of algorithms and the social environments in which algorithms operate’. The actions of users, for example, provide the data that shape algorithmic models and decisions as part of machine learning systems. In the same vein, platform behaviour cannot be reduced to platform conduct, that is, to the policies and design decisions put in place by operators. It must include the evolving interactions between changing social practices and technical adjustments, which may, in turn, be countered by user appropriations. As use practices change, algorithmic decision models change as well. Platform companies are therefore neither fully in control of actual outcomes, nor fully aware of what is happening within their systems.

Finally, the effects of platforms can only be sufficiently addressed if we consider what is being ordered. For example, ranking principles considered beneficial in one cultural domain, e.g. music recommendation, may have troubling implications in another, e.g. the circulation of political content. Accountability thus has to consider what is made available on platforms and how ordering mechanisms interact with or shape the content and its visibility. This again requires a broader view than what algorithm audits or broad technical disclosures are able to provide.

Taken together, research on the properties of algorithms and algorithmic systems suggests that regulatory proposals such as ‘opening the black box’ through transparency, audit, or explainability requirements reflect an insufficient understanding of algorithms and the platform architectures they enable. Algorithms can neither be studied nor regulated as single, clear-cut, and stable entities. Rather, their behaviour and effects result from assemblage-like contexts whose components are not only spatially and functionally distributed but also subject to continuous change, which is partly driven by users or markets facilitated by platforms. Given the ephemeral character of algorithms on the one side and the enormous generative and performative power of algorithmic systems on the other, the question arises as to what concepts, strategies, and concrete tools might help us to comprehend their logics and to establish effective political oversight. Such an approach needs to take on board the critique of transparency as a regulatory tool and consider accountability as a continuous interaction and learning process rather than a periodic undertaking. It should recognise that the legibility of algorithmic systems significantly differs from that of other objects or areas of regulation; and it should take into account that any form of review is not only selective but also shapes the object under investigation. Thus, the debate on platform regulation needs to become reflexive with regard to the specific materiality of the regulatory field and the constitutive effects of studying it.

3. Principles of observability

This section seeks to flesh out an understanding of observability as a step toward tackling the problems platform accountability currently faces. While the term is regularly used in the literature on transparency (e.g., Bernstein, 2012; Albu & Flyverbom, 2019; August & Osrecki, 2019), we seek to calibrate it to our specific goals: the challenges raised by platforms as regulatory structures need to be addressed more broadly, beginning with the question of how we can assess what is happening within large-scale, transnational environments that heavily rely on technology as a mode of governance. Who gets treated how on large online platforms, how are connections between participants made and structured, what are the outcomes, and—crucially—who can or should be able to make such assessments? Rather than a binary between transparency and opacity, the question is how to foster the capacity to produce knowledge about platforms and ‘platform life’ in constructive ways. The increasingly technological nature of our societies requires not just penalties for law infringements, but a deeper and well-informed public conversation about the role of digital platforms. This includes attention to the larger impacts of the new kinds of ordering outlined above, as well as a sensitivity to the ideological uses of transparency, which may serve ‘as a tool to fight off the regulations opposed by various business groups and politicians from conservative parties’ (Etzioni, 2010, p. 2). We therefore position observability as an explicit means of, not an alternative to, regulation. As van Dijck et al. (2018, p. 158) underline, ‘[r]egulatory fixes require detailed insights into how technology and business models work, how intricate platform mechanisms are deployed in relation to user practices, and how they impact social activities’. Our concept of observability thus seeks to propose concrete actions for how to produce these insights. While some of the more concrete strategies we discuss may come out of self-regulation efforts, effective and robust observability clearly requires a regulatory framework and institutional support. In what follows, we outline three principles that inform the concrete conceptual and practical directions observability seeks to emphasise.

3.1. Expand the normative and analytical horizon

The first principle concerns the research perspective on platforms and argues that a broader focus is needed. This focus takes into consideration how digital platforms affect societies in general, ranging from everyday intimacy to economic and labour relations, cultural production, and democratic life. Given that platformisation transforms not only specific markets but ‘has started to uproot the infrastructural, organizational design of societies’ (van Dijck, 2020, p. 2), it seems crucial to develop knowledge capacities beyond critical algorithm studies and include platform conduct, behaviour, and effects across relevant social domains in our agendas. As Powles and Nissenbaum (2018) have recently argued for artificial intelligence systems, limiting our focus to the important yet narrow problems of fairness and biases means that ‘vast zones of contest and imagination are relinquished’, among them the question of whether the massive efforts in data collection underlying contemporary platform businesses are acceptable in the first place. The ability to say no and prohibit the deployment of certain technologies such as political micro-targeting of voters or face recognition requires robust empirical and normative evidence of their harm to democracies.

While investigations into misinformation and election tampering are important, there are other long-term challenges waiting to be addressed. Recent studies on surveillance capitalism (Zuboff, 2019), digital capitalism (Staab, 2019), informational capitalism (Cohen, 2019), the platform society (van Dijck et al., 2018), or the ‘dataist state’ (Fourcade & Gordon, 2020) aim to capture and make sense of the ongoing structural changes of societies and economies, including the power shifts these imply. EU commissioner Vestager recently evoked Michel Foucault’s notion of biopower when addressing novel data-based techniques of classifying, sorting, and governing (Stolton, 2019). While the term addresses a set of political technologies that emerged in the 19th century to manage the behaviour of populations by means of specific regimes of knowledge and power, digital platforms’ considerable reach and fine-grained ‘capture’ (Agre, 1994) of everyday activities invites comparison. The deep political and social repercussions these conceptual frames highlight require broader forms of social accountability (Bovens, 2007) than disclosures or audits are able to provide.

How can researchers, regulators, and civil society expand their capacity to study, reflect and act on these developments? The concept of observability starts from the recognition of a growing information asymmetry between platform companies, a few data brokers, and everyone else. The resulting data monopoly deprives society of a crucial resource for producing knowledge about itself. The expanding data sets on vast numbers of people and transactions bear the potential for privileged insights into societies’ texture, even if platforms tend to use them only for operational purposes.

AirBnB’s impact on urban development, Uber’s role in transforming transportation, Amazon’s sway over retail, or Facebook and Twitter’s outsized influence on the public sphere cannot be assessed without access to relevant information. It is symptomatic that companies refuse access to the data necessary for in-depth, independent studies and then use the lack of in-depth, independent studies as evidence of a lack of harm. New modes of domination are unfolding as part of analytics-driven business models and the unprecedented information asymmetries they bring about. Powles and Nissenbaum (2018) therefore argue that we need ‘genuine accountability mechanisms, external to companies and accessible to populations’. An essential condition and experimental construction site for such accountability mechanisms would be the institutionalisation of reliable information interfaces between digital platforms and society—with a broad mandate to focus on the public interest.

We propose the concept of public interest as a normative reference for assessing platform behaviour and regulatory goals. However, public interest is neither well defined nor without alternatives. 6 We prefer public interest over the closely related notion of the common good because the former refers to an internationally established mandate in media regulation and could thus inform the formulation of specific requirements or ‘public interest obligations’ for platforms as well (Napoli, 2015, p. 4). Furthermore, the concept speaks to our specific concern with matters of governance of platform life. The use of public interest spans different disciplinary and regulatory contexts, and it is open to flexible interpretation. Yet, the often-criticised vagueness of the concept has the advantage of accommodating the broad range of existing platforms. As a normative framework it can be used to critically assess the design of multiple-sided markets as much as the impact of digital intermediaries on the public sphere. Approaches to defining and operationalising public interest depend on the context. In economic theory, public interest is suspected of functioning as a ‘weapon’ for justifying regulatory intervention into markets for the purpose of enhancing social welfare (Morgan & Yeung, 2007). Correcting failing markets constitutes a minimalist interpretation of public interest, however. In politics, public interest is associated with more diverse social goals, among them social justice, non-discrimination, and access to social welfare; or more generally the redistribution of resources and the maintenance of public infrastructures. With regard to the public sphere and the media sector, public interest refers to protecting human rights such as freedom of information and freedom of expression, fostering cultural and political diversity, and not least sustaining the conditions for democratic will formation through high-quality news production and dissemination (Napoli, 2015).

What these different understandings of public interest have in common is a focus on both procedural and substantive aspects. Obviously, public interest as a frame of reference for assessing and regulating digital platforms is not a given. Rather, the meaning and principles of public interest have to be constantly negotiated and reinterpreted. As van Dijck (2020, p. 3) reminds us, such battles over common interest do not take place in a vacuum; they are ‘historically anchored in institutions or sectors’ and ‘after extensive deliberation’ become codified in more or less formal norms. From a procedural point of view, public interest can also be defined as a practice, which has to meet standards of due process such as inclusiveness, transparency, fairness, and right to recourse (Mattli & Woods, 2009, p. 15). In terms of substance, the notion of public interest clearly privileges the collective common welfare over that of individuals or private commercial entities. In this respect, it entails a departure from the neoliberal focus on individual liberty toward collective freedoms. Thereby it also extends the space of policy options beyond ‘notice and consent’ to more far-reaching regulatory interventions (Yeung, 2017, p. 15). We see similar conceptual adjustments toward public interest in other areas such as the discourse on data protection. As Parsons (2015, p. 6) argues, it is necessary to recognise ‘the co-original nature of [...] private and public autonomy’ to understand that mass surveillance is not merely violating citizens’ individual rights, but ‘erodes the integrity of democratic processes and institutions’ (p. 1).

To conclude, the concept of observability emphasises the societal repercussions of platformisation and suggests public interest as a normative horizon for assessing and regulating them. It problematises the poor conditions for observing platform life and its effects, and suggests levelling out, in institutionalised ways, the information asymmetry between platforms and platform research. Thus, we think of observability as one possible ‘counter power’ in the sense of Helberger (2020, p. 9), who calls for establishing ‘entirely new forms of transparency’. First and foremost, observability therefore seeks to improve the informational conditions for studying the broader effects of platformisation. Over the next two sections, we discuss the modalities for such an approach.

3.2. Observe platform behaviour over time

Building on the arguments laid out in section two, the second principle of observability holds that the volatility of platforms requires continuous observation. While ex ante audits of technical mechanisms and ex post analysis of emblematic cases are certainly viable for more restricted systems, the dynamic and distributed nature of online platforms means that intermittent inspections or disclosures are insufficient, thwarted by the object’s transient character. Traditional forms of information sharing through transparency reports, legal inquiries, and regulated and structured disclosures, similar to those that exist for stock markets, can still be part of an observability framework, as can investigative reporting and whistleblowing. However, to tackle the specific challenges of digital platforms, more continuous forms of observation need to be envisaged.

When terms of service, technical design, or business practices change, the ‘rules of the game’ change as well, affecting platform participants in various ways. Projects like TOSBack 7 use browser plugins and volunteer work to track and observe changes in platforms’ terms of service continuously, that is, while they are happening and not after some complaint has been filed. These are then distilled into more readable forms to accommodate wider audiences. The joint Polisis 8 and PriBot 9 projects pursue similar goals, drawing on artificial intelligence to interpret privacy policies and deal with the limitations of volunteer work. Such efforts should be made easier: a recent proposal by Cornelius (2019) suggests making terms of service contracts available as machine-readable documents to facilitate ongoing observation and interpretation. Similar approaches can be imagined for other areas of platform conduct, including technical tweaks or changes in business practices.
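As a rough illustration of what such continuous observation involves, the following Python sketch polls a (hypothetical) terms-of-service page, compares it to the last stored copy, and prints a diff when something has changed; a project like TOSBack adds archiving, annotation, and distillation into readable summaries on top of this basic loop.

import difflib
import pathlib
import requests

POLICY_URL = "https://platform.example/terms"     # hypothetical placeholder
CACHE_FILE = pathlib.Path("terms_last_seen.txt")  # local copy of the last snapshot

def check_for_changes() -> None:
    current = requests.get(POLICY_URL, timeout=30).text
    previous = CACHE_FILE.read_text() if CACHE_FILE.exists() else ""
    if current != previous:
        diff = difflib.unified_diff(
            previous.splitlines(), current.splitlines(),
            fromfile="previous", tofile="current", lineterm="",
        )
        print("\n".join(diff))          # in practice: archive the diff and notify reviewers
        CACHE_FILE.write_text(current)  # store the new snapshot for the next run

if __name__ == "__main__":
    check_for_changes()  # run periodically via a scheduler for continuous observation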

However, to account for the distributed and dynamic character of platform life, as it emerges from the interaction between policies, design choices, and use practices, continuous observation needs to reach beyond legal and technical specifications. Bringing the space of distributed outcomes into view is by no means easy, but the importance of doing so is increasingly clear. In their discussion of algorithms as policies, Hunt and McKelvey (2019, p. 330) indeed argue that the ‘outcomes of these policies are as inscrutable as their intentions - under our current system of platform governance, it is beyond our reach to know whether algorithmic regulation is discriminatory or radicalizing or otherwise undermines the values that guide public policy’. Here, observability does not alter the underlying normative concerns but asks how platform reality can be sufficiently understood to make it amenable to normative reasoning in the first place. As platforms suck the bulk of online exchange into their increasingly centralised infrastructures, we need the capacity to probe not merely how algorithms work, but how fundamental social institutions are being reshaped. Answering these questions requires studying technical and legal mechanisms, use practices, and circulating units such as messages together. Given that our first goal is to understand rather than to place blame, there is no need to untangle networks of distributed causation from the outset. Entanglement and the wide variety of relevant questions we may want to ask mean that observability thus favours continuous and broad access to knowledge-generating facilities.

There are at least four practical approaches that align with what we are aiming at. First, platforms have occasionally entered into data access agreements with researchers, journalists, NGOs, and so forth. Facebook is a case in point. The company’s Data for Good 10 programme, which builds ‘privacy-preserving data products to help solve some of the world's biggest problems’, shares data with approved universities and civil society groups. The recently launched Social Science One initiative 11, a collaboration with the US Social Science Research Council, is supposed to grant selected researchers access to both data and funding to study ‘the impact of social media on elections and democracy’ (King & Persily, 2019, p. 1). While these initiatives are good starting points, they have been plagued by delays and restrictions. Scholars have rightly criticised the fact that the scope and modalities for access remain in the hands of the platforms themselves (Hegelich, 2020; Suzor et al., 2019). The central question is thus how to structure agreements in ways that reduce the asymmetries between platforms and third parties. Without a legal framework, companies can not only start and stop such initiatives at will but are also able to control parameters coming into play, such as thematic scope, coverage, and granularity.

Accountability interfaces providing continuous access to relevant data constitute a second direction. Facebook’s Ad Library 12, for example, is an attempt to introduce carefully designed observability, here with regard to (political) advertisement. Despite the limitations of the existing setup (see Leerssen et al., 2019), machine-readable data access for purposes of accountability can enable third-party actors to ask their own questions and develop independent analytical perspectives. While tools like Google Trends 13 are not designed for accountability purposes, a broader understanding of the term could well include tools that shed light on emergent outcomes in aggregate terms. There are already working examples in other domains, as the German Market Transparency Unit for Fuels 14, a division of the Federal Cartel Office, shows. It requires gas stations to communicate current prices in real time so that they can be made available on the web and via third-party apps. 15 Well-designed data interfaces could both facilitate observability and alleviate some of the privacy problems other approaches have run into. One could even imagine sandbox-style execution environments that allow third parties to run limited code within platforms’ server environment, allowing for privacy-sensitive analytics where data never leaves the server.
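The following sketch indicates, in deliberately simplified form, what such a privacy-sensitive interface might look like: third-party code receives only aggregate counts, and groups below a minimum size are suppressed so that individual-level records never leave the platform. All names, thresholds, and the data layout are hypothetical assumptions made for the sake of illustration.

from collections import Counter
from typing import Iterable

MIN_GROUP_SIZE = 100  # suppress small groups to limit re-identification risk

def aggregate_query(records: Iterable[dict], group_by: str) -> dict:
    """Return per-group counts, omitting groups below the disclosure threshold."""
    counts = Counter(record[group_by] for record in records)
    return {group: n for group, n in counts.items() if n >= MIN_GROUP_SIZE}

# Hypothetical use inside a platform-hosted sandbox: the raw records stay on the
# server, and only the aggregate result is returned to the external researcher.
# ad_records = load_platform_records()            # never exposed directly
# print(aggregate_query(ad_records, "country"))   # e.g. {'DE': 1520, 'FR': 890}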

Developer APIs are data interfaces made available without explicit accountability purposes. These interfaces have been extensively repurposed to investigate the many social phenomena platforms host, ranging from political campaigning (e.g. Larsson, 2016) to crisis communication during disasters (e.g. Bruns & Burgess, 2014), as well as the technical mechanisms behind ranking and recommendation (e.g., Airoldi et al., 2016; Rieder et al., 2018). Depending on the platform, developer APIs provide data access through keyword searches, user samples, or other means. Twitter’s random sample endpoint 16, which delivers representative selections of all tweets in real time (Morstatter et al., 2014), is particularly interesting since it allows researchers to observe overall trends while reducing computational requirements. One of the many examples of exploiting a data interface beyond social media is David Kriesel’s project BahnMining 17, which uses the German railroad’s timetable API to analyse train delays and challenge the official figures released by Deutsche Bahn.
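A minimal sketch of this kind of repurposing might look as follows: a script pulls a sample of public posts from a developer endpoint and computes aggregate hashtag frequencies as a crude indicator of circulating topics. The endpoint, parameters, and response format are hypothetical placeholders rather than the documented interface of any particular platform.

from collections import Counter
import requests

SAMPLE_ENDPOINT = "https://api.platform.example/v1/posts/sample"  # hypothetical
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"                                # obtained from the platform

def hashtag_trends(limit: int = 1000) -> Counter:
    """Fetch a sample of public posts and count the hashtags they contain."""
    response = requests.get(
        SAMPLE_ENDPOINT,
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        params={"limit": limit},
        timeout=30,
    )
    response.raise_for_status()
    counts = Counter()
    for post in response.json().get("posts", []):           # assumed response structure
        counts.update(tag.lower() for tag in post.get("hashtags", []))
    return counts

# print(hashtag_trends().most_common(20))  # an aggregate view of what circulates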

But the so-called ‘APIcalypse’ (Bruns, 2019) that followed the Facebook-Cambridge Analytica scandal has led to restrictions in data access, rendering independent research much more difficult. Even before Facebook-Cambridge Analytica, working with developer APIs regularly created issues of reliability and reproducibility of results, research ethics, and privacy considerations (see Puschmann, 2019). Generally, developer interfaces are not designed for structured investigations into the layers of personalisation and localisation that may impact what users actually see on their screens. YouTube’s ‘up next’ column is a case in point: while the API does make so-called ‘related videos’ available, it leaves out the personalised recommendations that constitute a second source for suggested videos. Research on YouTube’s recommender system, for example a study by PEW 18, is therefore necessarily incomplete. But the fact that developer APIs enable a wide variety of independent research on different topics means that in cases where privacy concerns can be mitigated, they are worth extending further. A structured conversation between platforms and research organisations about possible long-term arrangements is necessary, and independent regulatory institutions could play a central role here.

Finally, due to API limitations, researchers have been relying on scraping, a set of techniques that glean data from end-user interfaces. Search engines, price snipers, and a whole industry of information aggregators and sellers rely on scraped data, but there are many non-commercial examples as well. Projects like AlgoTransparency 19, run by former YouTube employee Guillaume Chaslot, regularly capture video recommendations from the web interface to trace what is being suggested to users. Roth et al. (2020) have recently used a similar approach to study whether YouTube indeed confines users to filter bubbles. Such high-profile questions call for empirical evidence, and since research results may change as quickly as systems evolve, continuous monitoring is crucial. While scraping does not demand active cooperation from the platforms under scrutiny, large-scale projects do require at least implicit acquiescence because websites can deploy a whole range of measures to thwart scraping.
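In its simplest form, scraping of the kind used by such projects amounts to fetching a public page and extracting the recommended items it displays, as in the following sketch. The URL and the CSS selector are hypothetical and stand in for whatever structure the interface happens to have at a given moment; real interfaces change frequently and often load content via JavaScript, which would require a headless browser instead of a plain HTTP request.

import requests
from bs4 import BeautifulSoup

PAGE_URL = "https://platform.example/watch?v=12345"  # hypothetical target page
SELECTOR = "div.recommended a.title"                 # assumed page structure

def scrape_recommendations() -> list:
    """Fetch the page and return the titles shown in the recommendation sidebar."""
    html = requests.get(PAGE_URL, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    return [link.get_text(strip=True) for link in soup.select(SELECTOR)]

if __name__ == "__main__":
    for title in scrape_recommendations():
        print(title)  # archive with a timestamp to trace what is suggested over time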

Although more precarious than API-based approaches, taking data directly from the user interface allows for the explicit study of personalisation and localisation. Data retrieved through scraping may also serve to verify or critique data obtained through the previously mentioned techniques. Not unlike the panels assembled by analytics companies like Nielsen for their online products 20, the most promising platform-centred crowd-sourcing projects ask volunteers to install custom-built browser plugins to ‘look over their shoulder’. The Datenspende project, a collaboration between several German state-level media authorities, the NGO AlgorithmWatch, the Technical University Kaiserslautern, and Spiegel Online, recruited 4,500 volunteers before the German parliamentary elections in 2017 to investigate what users actually see when they look for party and candidate names on Google Search and Google News. 21 The same approach was later used to scrutinise the SCHUFA 22, Germany’s leading credit bureau, and most recently Instagram 23.

There are many other areas where scraping has been productively used. The $heriff project 24, for example, also deployed browser plugins to investigate price discrimination practices on retail websites like Amazon (Iordanou et al., 2017). Even regulators have to resort to scraping: a recent study by the French Conseil Supérieur de l’Audiovisuel used the accounts of 39 employees and four fictitious users to study YouTube’s recommendation system. 25 The City of Amsterdam already began scraping data from AirBnB in 2017 26, analysing consequences for the housing market and compliance by landlords with rules on short-term rentals. Given that sample quality, scale, and the dependence on platform acquiescence are significant disadvantages under current conditions, a legal framework regulating access to platform data would increase the practical viability of this approach. The current ambiguities risk creating chilling effects that discourage smaller research projects in particular. NYU’s Ad Observer 27, a tool that uses browser plugins and scraping to investigate ad targeting on Facebook to compensate for the limitations of the above-mentioned Ad Library, tells a cautionary tale. The researchers recently received a cease and desist letter from the company, putting the whole project in peril (Horwitz, 2020).

However, it should be stated that not all forms of access to platform data further the public interest. Across all four of these approaches we encounter serious privacy concerns. While there are areas where data access is unproblematic, others may require restricting access to certain groups, anonymising data, using aggregate statistics, or exploring innovative models such as sandbox environments. These are not trivial problems; they raise the need for innovative and experimental approaches supported by institutional oversight. From a legal perspective, a recent interpretation of the GDPR by the European Data Protection Supervisor 28 clarified that research in the public interest must have leeway if done in accordance with ethical best practices. Still, concrete measures will need to be the subject of broader conversations about the appropriate balance to strike, which may lead, in certain cases, to more restrictions rather than fewer.

3.3. Strengthen capacities for collaborative knowledge creation

In his analysis of accountability as a social relation, Bovens (2007, p. 453) argues that ‘transparency as such is not enough to qualify as a genuine form of accountability, because transparency does not necessarily involve scrutiny by a specific forum’. Given their deep and transversal impact, the question as to how knowledge about platforms is generated and how it circulates through society is crucial. In this section, we argue that effective accountability requires the participation of different actors and the generation of different forms of knowledge.

Our argument starts from the fact that platform companies have largely treated information about their systems, what users are posting or selling, and what kinds of dynamics emerge from their interactions as private assets. They heavily invest in sophisticated analytics to provide insights and pathways for corporate action. Product development, optimisation, and detection and moderation of all kinds of illegal or ‘undesirable’ content have become important tasks that fully rely on evolving observational capabilities. While platforms would be able to facilitate knowledge creation beyond such operational concerns, the existing information asymmetries between those collecting and mining private data and society at large make this highly unlikely. Instead, platforms provide businesses and individual users with deliberately designed ‘market information regimes’ (Anand & Peterson, 2000) consisting of analytics products and services that provide information about the larger market and one’s own standing.

Creators on YouTube, for example, are now able to gauge how their videos are faring, how the choice of thumbnails affects viewer numbers, or how advertisers are bidding on keywords within the platform interface. But such interfaces are ‘socially and politically constructed and [...] hence fraught with biases and assumptions’ (Anand & Peterson, 2000, p. 270), privileging operational knowledge designed to boost performance over broader and more contextualised forms of insight. The narrow epistemological horizon of platform companies thus needs to be supplemented by inquiries that contextualise and question this business model. The problematic monopolisation of analytical capacities legitimises our demand for a more inclusive approach, which would open the locked-up data troves to qualified external actors. However, there simply is no one-size-fits-all approach able to cover all types of platforms, audiences, and concerns. Researchers, journalists, and activists are already engaged in ‘accountability work’, covering a range of questions and methods. Regulators add to this diversity: competition and antitrust inquiries require different forms of evidence than concerns regarding misinformation or radicalisation. We may therefore prefer to speak of ‘accountabilities’ in plural form.

There are many approaches coming from the technical disciplines that promise to enhance understanding. Emerging research fields like ‘explainable AI’ (e.g. Doran et al., 2017) seek to make primary ordering mechanisms more accountable, even if the question of what ‘explainable’ means when different audiences ask different questions remains open. Other strategies like the ‘glass box’ approach (Tubella & Dignum, 2019) focus on the monitoring of inputs and outputs to ‘evaluate the moral bounds’ of AI systems. A particularly rich example for image classification, from Google researchers, comes in the form of an ‘activation atlas’, which seeks to communicate how a convolutional neural network ‘sees’. 29 But since platforms are much more than contained ordering mechanisms, the problem of how to make their complexity readable, how to narrate what can be gleaned from data (see Dourish, 2016), remains unsolved. However, researchers in the humanities and social sciences have long been interested in how to make sense of quantitative information. Work on ‘narrating numbers’ (Espeland, 2015), ‘narrating networks’ (Bounegru et al., 2017), or the substantial research on information visualisation (e.g. Drucker, 2014) can serve as models. But as Sloane & Moss (2019) argue in their critique of current approaches to AI, there is a broader ‘social science deficit’, and the one-sided focus on quantitative information is part of the problem. The marginalisation of qualitative methods, such as ethnographic work that tries to elucidate both the context within which platforms make decisions and the meaning actors ascribe to practices and their effects, limits knowledge production.

Journalists also have unique expertise when it comes to forms of knowledge generation and presentation. A recent example is the work by Karen Hao and Jonathan Stray 30 on the controversial COMPAS system, 31 which questions the very possibility of fair judgements by allowing users to ‘play’ with the parameters of a simplified model. Likewise, NGOs have long worked on compound forms of narration that combine different data sources and methods for purposes of accountability. Greenpeace’s Guide to Greener Electronics, which includes a grade for companies’ willingness to share information, or the Ranking Digital Rights 32 project are good examples of the translation of research into concrete political devices. Accountability, understood as an inherent element of democratic control, cannot be reduced to a forensic process that transposes ‘facts’ from obscurity into the light. It needs to be considered as an ongoing social achievement that requires different forms of sense-making, asking for contributions from different directions and epistemological sensitivities. Access to machine-readable data, our focus in the previous section, has limitations, but also allows different actors to develop their own observation capacities, adapting their analytical methods to the questions they want to ask.

We are aware that increased understanding of platform life would prompt reactions and adaptations by different stakeholders gathering around platforms, including actors seeking to ‘game’ the system and even platform owners themselves. Making the constant negotiations between these actors more visible may nonetheless have the advantage that the process of establishing boundaries of acceptable behaviour can be engaged with more explicitly. As Ziewitz (2019, p. 713) argues for the field of search engine optimisation (SEO), ‘the moral status of reactive practices is not given, but needs to be accomplished in practice’. Distributing this ‘ethical work’ over a wider array of actors could thus be a step toward some modest form of ‘cooperative responsibility’ (Helberger et al., 2018), even if fundamental power asymmetries remain.

Observability thus raises the complicated question of how data and analytical capacities should be made available, to whom, and for what purpose. This clearly goes beyond data access. As Kemper & Kolkman (2019) note, there is ‘no algorithmic accountability without a critical audience’, and the capacity for critique requires more than a critical attitude. For this reason, frameworks for data access should ‘go hand-in-hand with the broader cultivation of a robust and democratic civil society, which is adequately funded and guaranteed of its independence’ (Ausloos et al., 2020, p. 86). And Flyverbom (2016, p. 115) reminds us that transparency, understood as a transformative process, cannot succeed ‘without careful attention to the formats, processes of socialization, and other affordances of the technologies and environments in which they play out’. Monitoring platforms on a continuous basis may thus call for considerable resources if done well. Governmental institutions, possibly on a European level, could play a central role in managing data access, in making long-term funding available for research, and in coordinating the exchange between existing initiatives. But given the complexity of the task, regulators will also have to build ‘in-house’ expertise and observational capacity, backed by strong institutional support.

The capacity to make sense of large and complex socio-technical systems indeed relies on a number of material conditions, including access to data, technical expertise, computing power, and not least the capacity to connect data-analytical practices to social concerns. Such a capacity is typically produced as a collective effort, through public discourse. The quality of observability depends on such discourses exploring which forms of knowledge allow concerned actors to arrive at genuinely meaningful interpretations.

4. Conclusion: toward platform observability

This article developed the concept of observability to problematise the assumptions and expectations that drive our demands for transparency of platform life. Observability is not meant to be a radical departure from the call for transparency. Rather, it draws practical conclusions from the discrepancy we noted between the complexity of the platform machinery and the traditional idea of shedding light on and seeing as a way of establishing external oversight. In a nutshell, we are suggesting observability as a pragmatic, knowledge-focused approach to accountability. Observability stresses technical and social complexities, including the distributed nature of platform behaviour. Moreover, it regards continuous and collaborative observation within a normative framework as a necessary condition for regulating the explosive growth of platform power. We see three main directions where further steps are needed to move closer to the practical realisation of these principles.

Regulating for observability means working toward structured information interfaces between platforms and society. 33 To account for quickly changing circumstances, these interfaces need to enable continuous observation. To allow for a broader set of questions to be asked, a broad range of data has to be covered. And to bring a wider variety of epistemological sensitivities into the fold, they need to be sufficiently flexible. What constitutes suitable and sufficient access will have to be decided on a per-platform basis, including the question of who should be able to have access in the first place. But the examples we briefly discussed in section 3.2—and the many others we left out—show that there is already much to build on. The main goal, here, is to develop existing approaches further and to make them more stable, transparent, and predictable. Twitter’s new API 34, which now explicitly singles out academic research use cases, is a good example of a step in the right direction, but these efforts are still voluntary and can be revoked at any time. Without binding legal frameworks, platforms can not only terminate such initiatives at will but also control relevant modalities such as thematic scope and depth of access. Realigning the structural information asymmetries between platforms and society thus requires curtailing the de facto ownership over data that platforms collect about their users.

Observability as part of regulation requires engaging with the specific properties of algorithmic systems and the co-produced nature of platform behaviour. The complex interactions between technical design, terms of service, and sometimes vast numbers of both users and ‘items’ mean that the concept of a singular algorithm steering the ordering processes at work in large-scale platforms is practically and conceptually insufficient. If techniques like machine learning are here to stay, regulatory approaches will have to adapt to conditions where the object of regulation is spread out, volatile, and elusive. The pressing questions are not restricted to how and what to regulate, but also encompass the issue of what platforms are doing in the first place. While normative concepts such as algorithmic fairness or diversity are laudable goals, their focus seems rather narrow considering the fundamental change of markets and the public sphere that platforms provoke. We therefore suggest the broader concept of public interest as a normative benchmark for assessing platform behaviour, a concept obviously in need of specification. But whatever set of norms or values are chosen as guiding principles, the question remains how to ‘apply’ them, that is, how to assess platform behaviour against public interest norms. Observation as a companion to regulation stresses the fact that we need to invest in our analytical capacities to undergird the regulatory response to the challenges platforms pose. Likewise, the existing approaches to studying platforms should be supplemented with specific rights to information. Together, these elements would constitute important steps towards a shared governance model (see Helberger et al., 2018), where power is distributed more equally between platforms and their constituencies.

Institutionalising processes of collective learning refers to the need to develop and maintain the skills that are required to observe platforms. A common characteristic of the data-collecting projects mentioned above is their ephemeral, experimental, and somewhat amateurish nature. While this may sound harsh, it should be obvious that holding platforms to account requires ‘institution-building’, that is, the painstaking assembly of skills and competence in a form that transposes local experiments into more robust practices able to guarantee continuity and accumulation. While academic research fields have their own ways of assembling and preserving knowledge, the task of observing large-scale platforms implies highly specialised technical and logistical feats that few organisations are able to tackle. Material resources are only one part of the equation, and the means to combat discontinuity and fragmentation are at least equally important. One form of institutional incorporation of observability would therefore be something akin to ‘centres of expertise’ tasked with building the capacity to produce relevant knowledge about platforms. Such centres could act as an ‘important bridge builder between those holding the data and those wishing to get access to that data’ (Ausloos et al., 2020, p. 83). Pushing further, a European Platform Observatory, 35 driven by a public interest mandate, equipped with adequate funding, and backed by strong regulatory support, could be a way forward for platform accountability.

Holding platforms to account is a complex task that faces many challenges. However, given their rising power, it is quickly becoming a necessity. The concept of observability spells out these challenges and suggests steps to tackle them, taking a pragmatic, knowledge-based approach. The goal, ultimately, is to establish observability as a ‘counter power’ to platforms’ outsized hold on contemporary societies.

Acknowledgements

This work was, in part, inspired by discussions we had as members of the European Commission’s Observatory on the Online Platform Economy. We would also like to thank Joris van Hoboken, Paddy Leerssen, and Thomas Poell for helpful comments and feedback.

References

Agre, P. E. (1994). Surveillance and Capture: Two Models of Privacy. The Information Society, 10(2), 101–127. https://doi.org/10.1080/01972243.1994.9960162

Albu, O. B., & Flyverbom, M. (2019). Organizational Transparency: Conceptualizations, Conditions, and Consequences. Business & Society, 58(2), 268–297. https://doi.org/10.1177/0007650316659851

Anand, N., & Peterson, R. A. (2000). When Market Information Constitutes Fields: Sensemaking of Markets in the Commercial Music Industry. Organization Science, 11(3), 270–284. https://doi.org/10.1287/orsc.11.3.270.12502

Ananny, M., & Crawford, K. (2018). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society, 20(3), 973–989. https://doi.org/10.1177/1461444816676645

August, V., & Osrecki, F. (2019). Transparency Imperatives: Results and Frontiers of Social Science Research. In V. August & F. Osrecki (Eds.), Der Transparenz-Imperativ: Normen – Praktiken – Strukturen (pp. 1–34). Springer. https://doi.org/10.1007/978-3-658-22294-9

Bernstein, E. S. (2012). The Transparency Paradox: A Role for Privacy in Organizational Learning and Operational Control. Administrative Science Quarterly, 57(2), 181–216. https://doi.org/10.1177/0001839212453028

Bogost, I. (2015, January 15). The Cathedral of Computation. The Atlantic. https://www.theatlantic.com/technology/archive/2015/01/the-cathedral-of-computation/384300/

Bovens, M. (2007). Analysing and Assessing Accountability: A Conceptual Framework. European Law Journal, 13(4), 447–468. https://doi.org/10.1111/j.1468-0386.2007.00378.x

Brandeis, L. D. (1913, December 20). What publicity can do. Harper’s Weekly.

Bruns, A. (2019). After the ‘APIcalypse’: Social media platforms and their fight against critical scholarly research. Information, Communication & Society, 22(11), 1544–1566. https://doi.org/10.1080/1369118X.2019.1637447

Bruns, A., & Burgess, J. (2013). Crisis communication in natural disasters: The Queensland floods and Christchurch earthquakes. In K. Weller, A. Bruns, J. Burgess, M. Mahrt, & C. Puschmann (Eds.), Twitter and Society (pp. 373–384). Peter Lang.

Burrell, J. (2016). How the machine “thinks”: Understanding opacity in machine learning algorithms. Big Data & Society, 3(1), 1–12. https://doi.org/10.1177/2053951715622512

Cohen, J. E. (2019). Between Truth and Power: The Legal Constructions of Informational Capitalism. Oxford University Press. https://doi.org/10.1093/oso/9780190246693.001.0001

Cornelius, K. B. (2019). Zombie contracts, dark patterns of design, and ‘documentisation’. Internet Policy Review, 8(2). https://doi.org/10.14763/2019.2.1412

Covington, P., Adams, J., & Sargin, E. (2016). Deep Neural Networks for YouTube Recommendations. Proceedings of the 10th ACM Conference on Recommender Systems, 191–198. https://doi.org/10.1145/2959100.2959190

Domingos, P. (2012). A few useful things to know about machine learning. Communications of the ACM, 55(10), 78–87. https://doi.org/10.1145/2347736.2347755

Doran, D., Schulz, S., & Besold, T. R. (2017). What Does Explainable AI Really Mean? A New Conceptualization of Perspectives. ArXiv. http://arxiv.org/abs/1710.00794

Douglass, B. (1980). The Common Good and the Public Interest. Political Theory, 8(1), 103–117. https://doi.org/10.1177/009059178000800108

Dourish, P. (2016). Algorithms and their others: Algorithmic culture in context. Big Data & Society, 3(2). https://doi.org/10.1177/2053951716665128

Espeland, W. (2015). Narrating Numbers. In R. Rottenburg, S. E. Merry, S.-J. Park, & J. Mugler (Eds.), The World of Indicators: The Making of Governmental Knowledge through Quantification (pp. 56–75). Cambridge University Press. https://doi.org/10.1017/CBO9781316091265.003

Etzioni, A. (2010). Is Transparency the Best Disinfectant? Journal of Political Philosophy, 18(4), 389–404. https://doi.org/10.1111/j.1467-9760.2010.00366.x

Ezrahi, Y. (1992). Technology and the civil epistemology of democracy. Inquiry, 35(3–4), 363–376. https://doi.org/10.1080/00201749208602299

Flyverbom, M. (2016). Transparency: Mediation and the Management of Visibilities. International Journal of Communication, 10, 110–122. https://ijoc.org/index.php/ijoc/article/view/4490

Fourcade, M., & Gordon, J. (2020). Learning Like a State: Statecraft in the Digital Age. Journal of Law and Political Economy, 1(1), 78–108. https://escholarship.org/uc/item/3k16c24g

Gillespie, T. (2018). Custodians of the Internet. Yale University Press.

Hegelich, S. (2020). Facebook needs to share more with researchers. Nature, 579, 473–473. https://doi.org/10.1038/d41586-020-00828-5

Helberger, N. (2020). The Political Power of Platforms: How Current Attempts to Regulate Misinformation Amplify Opinion Power. Digital Journalism, 8(3). https://doi.org/10.1080/21670811.2020.1773888

Helberger, N., Pierson, J., & Poell, T. (2018). Governing online platforms: From contested to cooperative responsibility. The Information Society, 34(1), 1–14. https://doi.org/10.1080/01972243.2017.1391913

Horwitz, J. (2020, October 23). Facebook Seeks Shutdown of NYU Research Project Into Political Ad Targeting. The Wall Street Journal. https://www.wsj.com/articles/facebook-seeks-shutdown-of-nyu-research-project-into-political-ad-targeting-11603488533

Hunt, R., & McKelvey, F. (2019). Algorithmic Regulation in Media and Cultural Policy: A Framework to Evaluate Barriers to Accountability. Journal of Information Policy, 9, 307–335. https://doi.org/10.5325/jinfopoli.9.2019.0307

Iordanou, C., Soriente, C., Sirivianos, M., & Laoutaris, N. (2017). Who is Fiddling with Prices?: Building and Deploying a Watchdog Service for E-commerce. Proceedings of the Conference of the ACM Special Interest Group on Data Communication - SIGCOMM, 17, 376–389. https://doi.org/10.1145/3098822.3098850

Kemper, J., & Kolkman, D. (2019). Transparent to whom? No algorithmic accountability without a critical audience. Information, Communication & Society, 22(14), 2081–2096. https://doi.org/10.1080/1369118X.2018.1477967

King, G., & Persily, N. (2019). A New Model for Industry–Academic Partnerships. PS: Political Science & Politics, 53(4), 703–709. https://doi.org/10.1017/S1049096519001021

Langley, P., & Leyshon, A. (2017). Platform capitalism: The intermediation and capitalisation of digital economic circulation. Finance and Society, 3(1), 11–31. https://doi.org/10.2218/finsoc.v3i1.1936

Larsson, A. O. (2016). Online, all the time? A quantitative assessment of the permanent campaign on Facebook. New Media & Society, 18(2), 274–292. https://doi.org/10.1177/1461444814538798

Leerssen, P., Ausloos, J., Zarouali, B., Helberger, N., & Vreese, C. H. (2019). Platform Ad Archives: Promises and Pitfalls. Internet Policy Review, 8(4), 1–21. https://doi.org/10.14763/2019.4.1421

Lessig, L. (1999). Code: And other laws of cyberspace. Basic Books.

Mattli, W., & Woods, N. (2009). In Whose Benefit? Explaining Regulatory Change in Global Politics. In W. Mattli & N. Woods (Eds.), The Politics of Global Regulation (pp. 1–43). https://doi.org/10.1515/9781400830732.1

Morgan, B., & Yeung, K. (2007). An introduction to Law and Regulation. Cambridge University Press. https://doi.org/10.1017/CBO9780511801112

Morstatter, F., Pfeffer, J., & Liu, H. (2014). When is it Biased? Assessing the Representativeness of Twitter’s Streaming API. ArXiv. http://arxiv.org/abs/1401.7909

Napoli, P. M. (2015). Social media and the public interest: Governance of news platforms in the realm of individual and algorithmic gatekeepers. Telecommunications Policy, 39(9), 751–760. https://doi.org/10.1016/j.telpol.2014.12.003

Obar, J. A. (2020). Sunlight alone is not a disinfectant: Consent and the futility of opening Big Data black boxes (without assistance). Big Data & Society, 7(1). https://doi.org/10.1177/2053951720935615

Parsons, C. (2015). Beyond Privacy: Articulating the Broader Harms of Pervasive Mass Surveillance. Media and Communication, 3(3), 1–11. https://doi.org/10.17645/mac.v3i3.263

Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Harvard University Press.

Power, M. (1997). The audit society: Rituals of verification. Oxford University Press. https://doi.org/10.1093/acprof:oso/9780198296034.001.0001

Powles, J., & Nissenbaum, H. (2018, December 7). The Seductive Diversion of ‘Solving’ Bias in Artificial Intelligence. OneZero. https://onezero.medium.com/the-seductive-diversion-of-solving-bias-in-artificial-intelligence-890df5e5ef53

Puschmann, C. (2019). An end to the wild west of social media research: A response to Axel Bruns. Information, Communication & Society, 22(11), 1582–1589. https://doi.org/10.1080/1369118X.2019.1646300

Rahwan, I., Cebrian, M., Obradovich, N., Bongard, J., Bonnefon, J.-F., Breazeal, C., Crandall, J. W., Christakis, N. A., Couzin, I. D., Jackson, M. O., Jennings, N. R., Kamar, E., Kloumann, I. M., Larochelle, H., Lazer, D., McElreath, R., Mislove, A., Parkes, D. C., Pentland, A. ‘Sandy’, … Wellman, M. (2019). Machine behaviour. Nature, 568(7753), 477–486. https://doi.org/10.1038/s41586-019-1138-y

Rieder, B. (2020). Engines of Order. A Mechanology of Algorithmic Techniques. Amsterdam University Press. https://doi.org/10.2307/j.ctv12sdvf1

Rieder, B., Matamoros-Fernández, A., & Coromina, Ò. (2018). From ranking algorithms to ‘ranking cultures’: Investigating the modulation of visibility in YouTube search results. Convergence, 24(1), 50–68. https://doi.org/10.1177/1354856517736982

Roth, C., Mazières, A., & Menezes, T. (2020). Tubes and bubbles topological confinement of YouTube recommendations. PLOS ONE, 15(4). https://doi.org/10.1371/journal.pone.0231703

Rowland, N. J., & Passoth, J.-H. (2015). Infrastructure and the state in science and technology studies. Social Studies of Science, 45(1), 137–145. https://doi.org/10.1177/0306312714537566

Sandvig, C., Hamilton, K., Karahalios, K., & Langbort, C. (2014). Auditing algorithms: Research methods for detecting discrimination on internet platforms. Data and Discrimination: Converting Critical concerns into productive inquiry, a Preconference at the 64th Annual Meeting of the International Communication Association, Seattle, WA. https://pdfs.semanticscholar.org/b722/7cbd34766655dea10d0437ab10df3a127396.pdf

Seaver, N. (2017). Algorithms as culture: Some tactics for the ethnography of algorithmic systems. Big Data & Society, 4(2), 1–12. https://doi.org/10.1177/2053951717738104

Sloane, M., & Moss, E. (2019). AI’s social sciences deficit. Nature Machine Intelligence, 1(8), 330–331. https://doi.org/10.1038/s42256-019-0084-6

Staab, P. (2019). Digitaler Kapitalismus: Markt und Herrschaft in der Ökonomie der Unknappheit. Suhrkamp.

Stolton, S. (2019, November 20). Vestager takes aim at ‘biopower’ of tech giants. EURACTIV. https://www.euractiv.com/section/copyright/news/vestager-takes-aim-at-biopower-of-tech-giants/

Suchman, L. A. (2007). Human-Machine Reconfigurations: Plans and Situated Actions (2nd ed.). Cambridge University Press. https://doi.org/10.1017/CBO9780511808418

Suzor, N. P., Myers West, S., Quodling, A., & York, J. (2019). What Do We Mean When We Talk About Transparency? Toward Meaningful Transparency in Commercial Content Moderation. International Journal of Communication, 13, 1526–1543. https://ijoc.org/index.php/ijoc/article/view/9736/0

van Dijck, J. (2020). Governing digital societies: Private platforms, public values. Computer Law & Security Review, 36. https://doi.org/10.1016/j.clsr.2019.105377

van Dijck, J., Poell, T., & De Waal, M. (2018). The platform society: Public values in a connective world. Oxford University Press. https://doi.org/10.1093/oso/9780190889760.001.0001

Yeung, K. (2017). 'Hypernudge’: Big Data as a mode of regulation by design. Information, Communication & Society, 20(1), 118–136. https://doi.org/10.1080/1369118X.2016.1186713

Ziewitz, M. (2019). Rethinking gaming: The ethical work of optimization in web search engines. Social Studies of Science, 49(5), 707–731. https://doi.org/10.1177/0306312719865607

Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. Profile Books.

Footnotes

1.https://eur-lex.europa.eu/eli/reg/2019/1150/oj

2. See, for example, the response by AlgorithmWatch and other signatories to the European Commission’s planned Digital Services Act: https://algorithmwatch.org/en/submission-digital-services-act-dsa/.

3.https://www.congress.gov/bill/116th-congress/senate-bill/2763/all-info

4.https://www.rlp.de/fileadmin/rlp-stk/pdf-Dateien/Medienpolitik/ModStV_MStV_und_JMStV_2019-12-05_MPK.pdf

5. The ACM’s Statement on Algorithmic Transparency and Accountability (https://www.acm.org/binaries/content/assets/public-policy/2017_usacm_statement_algorithms.pdf), for example, explicitly mentions ‘auditability’ as a desirable principle.

6. For a discussion of the intricate history of ideas behind the concepts of the common good and public interest in the anglo-american realm and a definition of the latter see Douglass (1980, p. 114): ‘the public interest would come to mean what is really good for the whole people. And in a democratic society, this would mean what is really good for the whole people as interpreted by the people.’

7.https://tosback.org

8.https://pribot.org/polisis

9.https://pribot.org

10.https://dataforgood.fb.com/

11.https://socialscience.one

12.https://www.facebook.com/ads/library/

13.https://trends.google.com

14.https://www.bundeskartellamt.de/EN/Economicsectors/MineralOil/MTU-Fuels/mtufuels_node.html

15.https://creativecommons.tankerkoenig.de / https://de.wikipedia.org/wiki/Markttransparenzstelle_für_Kraftstoffe

16.https://developer.twitter.com/en/products/tweets/sample

17.https://www.heise.de/newsticker/meldung/36C3-BahnMining-offenbart-die-nackte-Wahrheit-hinter-der-DB-Puenktlichkeitsquote-4624384.html

18.https://www.pewinternet.org/2018/11/07/many-turn-to-youtube-for-childrens-content-news-how-to-lessons/

19.https://algotransparency.org

20.https://www.nielsen.com/us/en/solutions/measurement/online/

21.https://algorithmwatch.org/datenspende-unser-projekt-zur-bundestagswahl/

22.https://algorithmwatch.org/openschufa-warum-wir-diese-kampagne-machen/

23.https://algorithmwatch.org/instagram-algorithmus/

24.http://sheriff-v2.dynu.net/views/manual

25.https://www.csa.fr/Informer/Toutes-les-actualites/Actualites/Pourquoi-et-comment-le-CSA-a-realise-une-etude-sur-l-un-des-algorithmes-de-recommandations-de-YouTube

26.https://publicaties.rekenkamer.amsterdam.nl/handhaving-vakantieverhuurbestuurlijk-rapport/

27.https://adobserver.org

28.https://edps.europa.eu/sites/edp/files/publication/20-01-06_opinion_research_en.pdf

29.https://distill.pub/2019/activation-atlas/

30.https://www.technologyreview.com/s/613508/ai-fairer-than-judge-criminal-risk-assessment-algorithm/

31.https://www.technologyreview.com/s/607955/inspecting-algorithms-for-bias/

32.http://rankingdigitalrights.org

33. This aligns with Sandvig et al. (2014, p. 17), who call for ‘regulation toward auditability’.

34.https://blog.twitter.com/developer/en_us/topics/tools/2020/introducing_new_twitter_api.html

35. The European Commission is already hosting an Observatory on the Online Platform Economy (https://platformobservatory.eu/)—of which both authors are members—and it plans to create a digital media observatory (https://ec.europa.eu/digital-single-market/en/news/commission-launches-call-create-european-digital-media-observatory). However, both bodies have a thematically restricted mandate and lack any regulatory authority.

Platform developmentalism: leveraging platform innovation for national development in Latin America

Introduction

In the last few years, international development scholars have explored the rise of platform businesses in developing countries (Galo, 2016; Hira & Reilly, 2017; Koskinen et al., 2018). By platform businesses, I mean multi-sided platforms (Hagiu & Wright, 2011) that broker logistics (Reilly, 2017) between actors within a marketplace through digital means (Kenney & Zysman, 2016; Srnicek, 2017). The adoption of platform models has resulted in the widespread reorganisation of business practices across many sectors, with important implications for incumbent industries, labour, and social processes. Within this new field of study, theories about the relationship between the process of platformisation and social change or development outcomes are only just beginning to emerge. Among these, Ezeomah and Duncombe (2019) suggest focusing on how platformisation of business processes reshapes the efficiency or effectiveness of value chains; Koskinen, Bonina, and Eaton (2018) look at the implications for labour; Artopoulos (2019) asks whether e-commerce platforms potentiate small and medium enterprises; and IT for Change offers a global social justice framework (Gurumurthy, 2018).

In this paper, I consider the policy conditions that shape the developmental potential of the platform economy from a developmental state point of view (Johnson, 1982; Evans, 1995; Haggard, 2018), and I explore this approach using case material from Latin America. Unlike neoliberal regulatory approaches, which use policy to strike a balance between competing interests (Levi-Faur, 2005; Osborne & Gaebler, 1992; Jayasuriya, 2001), developmentalism suggests that states can and should establish complementary social and economic goals for the nation, and then govern with corporate and social actors to achieve these goals in ways that produce both economic growth and social benefits. The approach to platform policymaking offered here differs fundamentally from the regulatory recommendations offered by the G20 (Schwarzer et al., 2019) and the Inter-American Development Bank (2017) for the platform economy.

I start by briefly reviewing the idea of the developmental state and relating it to the notion of platformisation. I then consider the structural barriers facing state-led developmental interventions in the Latin American context. Taking this into consideration, I offer a set of concepts for comparative analysis of the politics that surround policy-making for developmental governance. Finally, I demonstrate the relevance of these concepts by exploring instances of policy struggle resulting from platform technology disruptions in the transportation, lodging and fintech sectors in Chile, Colombia, Mexico and Peru. These cases offer further insights into the nature of platform developmentalism and raise key issues that policymakers would need to keep in mind when attempting to leverage platformisation to achieve national development objectives. In the conclusions, I draw from these illustrations to suggest some specific areas for future research based on this approach.

The developmental state and Latin America

The extensive literature on the developmental state has been detailed in focused literature reviews (Bagchi, 2000; Routley, 2012). My efforts to apply the concept to Latin America, and also to post-industrial economies, position me outside of the purist camp which views the developmental state as a specific formation that occurred in Asia during the post-war period. Instead, I leverage the concept as a framework for socially oriented policy-making (Singh & Ovadia, 2018).

With this in mind, broadly speaking, developmentalism studies the role of the state in fostering economic growth that can be leveraged to generate social benefits. Early works in this field emphasised the role of industrial policy in driving rapid transformations in Asia’s late industrialising countries (Johnson, 1982; Evans, 1995). The developmental state literature offered a contrasting narrative to the neoliberal, small state regulatory approach promoted by the Washington Consensus, 1 both by highlighting the positive potential of a strong, interventionist state, and by prioritising shared and equitable growth (Pérez Caldentey, 2008). These ‘strong states’ had sufficient bureaucratic expertise, as well as autonomy from social and economic pressures (Evans, 1995), to solve coordination problems that undermine economic growth (Amsden, 1989; Haggard, 2018).

In terms of social policy, early work in the genre focused on the absorption of surplus labour and the upskilling of the workforce. Pérez Caldentey (2008) argues that while the Washington Consensus contemplates state provision of education, healthcare and pensions to support workers, it ignores the developmentalist idea that “growth is a precondition for the improvement of welfare” (p. 49). And, as Bagchi (2000) argues, “increases in inequality tend to depress income growth […] Thus without a drastic change in policies leading to increased construction of social and physical capital, these developing countries will remain doomed to poverty and social disarray” (pp. 435-6). Recent works have thus sought to recover and expand the social role of state economic management, taking into consideration the constraints posed by globalisation. In order to reduce social exclusion, the social developmental state should strive to engage with global markets in productive ways while also mitigating their negative impacts on society (Sandbrook et al., 2007, pp. 232-4).

Latin American countries have a notable history of industrial policy, and several countries in the region took up new forms of economic management in the early 2000s as part of the region’s Pink Tide 2 and post-Washington Consensus moment (Bresser-Pereira & Theuer, 2012). However, as Schneider (1999) argues, while Latin American states often have developmental objectives, these plans take place in the context of personalistic bureaucratic appointments, powerful business interests, and social exclusion from decision-making. These factors undermine the state’s best developmentalist intentions, and raise the question of whether Latin America offers examples of developmental states or merely developmental policies.

In this context, rather than succumbing to debates over ideal types, I treat the developmental state literature as a benchmark for governmental efforts to move towards greater state intervention in economic management with a view to advancing social policy. This implies studying regulatory projects that challenge established policy approaches and entrenched interests. The technological innovations offered by platformisation may cause just such a disruption, and can therefore be studied as moments of potential for social change.

Platform developmentalism

A developmental approach aligns with van Dijck et al.’s (2018) argument that “The responsibility for a balanced platform society rests […] with all actors involved in its construction […] [but] supranational, national, and local governments have a special responsibility in this regard” (p. 6). However, despite devoting a chapter to governance, van Dijck et al. leave us with little concrete foundation for policy research, observing expansively in their concluding paragraph that “who governs the platform society and how it should be governed based on what values is complex and multifaceted” (p. 163). As one reviewer from the Indian Journal of Human Development observed, “The writers though do not offer a clear solution, successfully elaborate and analyse the prevailing situation” (Shipra, 2019, p. 238). Meanwhile, Miconi (2020) critiques van Dijck et al. (2018) for not taking a clear position on the exploitative dynamics of platformisation, noting that its analysis ought to “lead to a more radical conclusion about the nature of platform society, with structure fatally becoming more relevant than agency” (p. 792).

Some recent works have begun to suggest developmental approaches to platform regulation. Since platforms are themselves innovations, and platformisation also requires business and technological innovations to support uptake, innovation policy is one part of the equation. Katz argues that Latin America’s governments need to find new ways to resolve inefficiencies in private incubation of digital companies, since these offer high returns on investment: “Industrial policies and technologies are central to effectively reducing gaps in the digital sector so that their positive effect on growth and productivity can manifest its full potential” (Katz, 2015, p. XIX; see also OECD, 2016).

This is closely related to regulation for value chain innovations in the platform economy. Platform business models offer the potential to enhance supply chain efficiency in ways that drive economic growth (Ezeomah & Duncombe, 2019). However, as Artopoulos et al. (2019) point out, the state must create the conditions that enable small and medium enterprises to participate in newly platformised value chains by ensuring sufficient connectivity, but more importantly by “redefining the architecture of domestic commerce given the possibilities offered by e-commerce” (p. 279).

This raises questions about how to regulate the global data value chain across the base data layer, the cloud computing layer, the intelligence layer, and the consumer-facing intelligence services layer (Singh, 2020) so that countries are able to harness big data and artificial intelligence to support developmental objectives in the platform economy (Gurumurthy et al., 2020). Such regulation might enhance or defend national information systems so that they are available to support the above mentioned processes of innovation and supply chain management. This might mean nationalising certain parts of the data supply chain (data sovereignty), legislating data property rights and the benefits that flow from them, or supporting the development of domestic capacity.

Thirdly, platformisation is widely acknowledged to undermine labour standards (Graham et al., 2017), but despite this, several developing country governments have looked to the gig economy as a potential means for labour absorption (ibid., p. 138). Socially minded labour market regulations for the platform economy would drive labour absorption across a spectrum of skill sets within the platform economy, and also devise a means to upskill the market so that workers can transition to higher paying jobs over time (Parthasarathy & Matilal, 2019). Fostering national data and information systems is one way that governments can upskill the platform economy labour market.

While they offer a clear vision of what platform developmentalism might imply, these works do not always draw a clear line from economic policy to social gains. For example, Katz (2015) suggests that the growth of the digital sector could benefit people by giving them better access to basic services, better access to the political process, by reducing prices, and by increasing salaries (p. 131), but the suggested gains fall short of upward mobility or redistribution of wealth. Even platform economy labour policy tends to focus on ameliorating the worst impacts of labour outsourcing rather than proposing developmental alternatives.

What would it mean to adopt a fundamentally developmentalist approach to platform policy making? We can arrive at a better understanding by contrasting regulatory and developmental approaches to regulating personal data, which is a fundamental factor of production for the platform economy (Srnicek, 2017, p. 23). In Latin America personal data regulations take a regulatory approach, which, much like the General Data Protection Regulation of the European Union, seeks to strike a balance between a personal need for privacy, and a corporate need for access to personal data (Rodriguez & Alimonti, 2020). The question driving this type of policy is thus What is the right balance between the privacy rights of individuals and the ability of firms to innovate and grow? The resulting regulatory frameworks are arm’s length and business-centric.

Since a developmental state aims to resolve the coordination problems that undermine a nation’s transition towards an inclusive equilibrium between business and social interests, the question driving policy-making would instead be What is the best way for the community to invest its data resources and mobilise platformisation? This would imply situating data and information systems at the centre of national development efforts and engaging in hands-on policies that advance the stewardship of data resources and information systems at a national level. In distinction to the view put forward by the Data Commons Manifesto (tecnopolítica, 2020), a data commons in this case implies governing the tensions between communal and individual data, open and owned data, and data extractivism and data social responsibility, with a view to achieving developmental ends. Policy would work to cultivate data systems, enforce data incentive mechanisms, and produce strategic data sets that support the resolution of economic and social coordination problems through platformisation. One of the main advantages of this approach is that it situates the social production of platforms at the centre of industrial policy-making for the achievement of developmental objectives within a particular local context.

The Latin American context

Advancing policy of this kind would require the buy-in of both public and private sector actors, but in Latin America this is a difficult proposition. As a result of the historical trajectories which have given rise to the Latin American expression of capitalism:

The structural situation in the region confirms that the institutional trajectory of inequality prevails as a product of the reproduction of benefits concentrated in elite power compacts and extended social habits that support rentier comportment and favour informality, with negative consequences for the distribution of income and the possibilities that this might create. (Hernández López, 2017, n.p.).

In this context, economic policy is heavily guided by multinational corporations and diversified business groups that benefit from the continued presence of low skill labour to extract a profit (Cimoli & Rovira, 2008; Schneider, 2008, 2009, 2013; Serna & Bottinelli, 2019; Egan, 2010). There is every suggestion that platform enterprises will align with this logic given the lean business practices (Srnicek, 2017, p. 39) that lie at the heart of platform business models.

Because business conglomeration and low labour costs are effective strategies to mitigate business risk and generate profits, both domestic and multinational firms work to sustain these strategies through political influence. As Schneider explains “political systems and practices in Latin America are remarkably accommodating for business interests, especially narrow or individual interests of big business” (Schneider, 2013, p. 148). Business groups that have emerged out of the family holdings of bygone eras are able to influence politics due to key structural advantages, including money, growth through acquisitions, and, with growing transnationalisation, the threat of shifting their investments abroad (López García, 2017). Thus, the region’s political systems tend to favour insiders who press governments to sustain economic institutions that maintain long-standing relations between corporations and labour.

These structural conditions make it very difficult for governments in the region to intervene in the economy to promote economic growth, let alone to pursue social agendas. One effect is the region’s low rates of innovation, which undermine economic development (Castillo et al., 2016, p. 21). 3 For example, Saucedo et al. (2016) draw a correlation between the structural factors described above as they manifest in Argentina, Brazil and Mexico, and low levels of innovation in these three countries, in comparison to analogous economies with different structural conditions in other parts of the world.

The case of Chile offers a concrete example of how these dynamics manifest in policy processes throughout the region. In this country, there are several large firms that combine resource extraction with service offerings, such as Matte (forestry, mining, energy, and banking), Angelini (forestry, mining, fishery, and gas) or Luksic (mining, energy, beverages, and banking). As a result, despite Chile’s strong GDP, 65% of its exports are primary materials, and 80% of those are copper, so the economy is undiversified, and products exhibit low complexity (Guzmán, 2016). These firms could invest in developing new markets, which would in turn create new and higher paid jobs, but the structure of their businesses does not incentivise the pursuit of new innovations. Meanwhile, past policies undermined innovation. For example, the Taxable Profits Fund (FUT, Fondo de Utilidades Tributables), created in 1986, allowed companies to defer tax payments on profits they reinvested. It resulted in an accumulation of US$ 200 billion over the course of a decade; however, in that same period private R&D was only around US$ 300 thousand per year (Jara Román, 2019).

Since 2000, Chilean governments have pursued policies to incentivise investment in innovation, but their efforts have been blocked by business elites. As Bril-Mascarenhas and Madariaga (2017) explain, the Lagos government created the National Council for Innovation and Competitiveness (CNIC) in 2005 to manage an Innovation for Competitiveness Fund (FIC) with the aim of leveraging taxes from mining wealth to drive economic diversification. The initiative was actively opposed by business interests who used legislative influence to water down the relevant tax law, and took control of the board of the CNIC to shape its policy interventions. Once pro-business leader Piñera came to power in 2010, “the CNIC was de facto depleted of its original purpose” and its agenda was modified to “fostering a ‘culture of innovation’ among individual entrepreneurs rather than strengthening the policy and institutional environment for firms to upgrade their productive capacities” (Bril-Mascarenhas & Madariaga, 2017, n.p.). They conclude that “we cannot understand why executive-branch leaders in post-authoritarian Chile were not able to deviate from a policy path originated in the mid-1970s without considering how big business used its leverage to circumscribe the set of ‘viable’ industrial policy alternatives and bias policy making toward its preferred small-state policies” (n.p.).

This case is emblematic of the relationship between business interests and the state throughout Latin America. It shows that Latin American regulators do pursue developmental policies in an effort to drive growth and social change, however, they face strong structural challenges to the effective realisation of these plans even when they might advance economic interests.

This is a pattern that repeats throughout the region, and is evident in all four of the countries considered as case examples below: Chile, Colombia, Mexico, and Peru. For example, in Colombia, elites focused on advancing transnational capitalism have captured local policy-making processes to the detriment of fundamental development of the local supply chain, undermining the productivity of local companies (Franz, 2018). In their in-depth work on the political economy of Peru, Crabtree and Durand (2017) show that private sector co-optation of state power has become the norm, and is used to prevent state intervention in the marketplace. Finally, Ondetti explains that Mexico has the lightest tax burden in Latin America as a result of “the resistance of an exceptionally politically mobilized economic elite, which has resulted in the defeat or dilution of repeated attempts at reform” (2017, p. 47). Indeed, in his first speech as President of Mexico in 2018, left-leaning leader López-Obrador announced that under his mandate, “The government will no longer be a committee at the service of a rapacious minority” of “rapacious elites” nor would it be a “simple facilitator of pillaging, as it has been” (López-Obrador, 2018).

Platform developmentalism and policy disruption in Latin America

Under these conditions, the ability to enact developmental policy depends on the relationship between the state and economic actors. Hegemonic coalitions differ between countries, and this explains historical differences in the organisation of capitalism and in development outcomes. In Brazil, the state has led the development of internal markets; in Mexico, liberal capitalism focuses on export markets; and in Chile, state regulation of capitalism drives export markets. Each of these models arises according to how hegemonic coalitions have guided capitalism within each country, and how business interests have adapted to these realities (Bizberg & Théret, 2012; Gaitán & Boschi, 2015). This points to the importance of comparative analysis for studying the relationship between state and market in platform policymaking and the resulting platform economy (Sheahan, 2002; Boschi, 2011).

Biber et al. (2017) provide a useful framework for comparative study of platform policy-making. They argue that not all technological disruptions require policy reform, but in the platform economy, technological disruptions frequently cause collateral policy disruptions, which in turn give rise to policy windows that could lead to new regulatory approaches (p. 1580). When faced with such a policy disruption, they say, regulators should be neutral between different business interests, while taking into consideration the health, safety, environmental, privacy or distributional consequences of business activities (p. 1608). If health and safety outweigh neutrality between incumbents and newcomers, then policymakers would be justified in blocking the newcomer to the advantage of the incumbent. But if innovation benefits outweigh the need to be neutral, policymakers would be justified in favouring the newcomer to the detriment of the incumbent. Finally, they model four types of policy disruption: end runs, exemptions, gaps or solutions (p. 1565). In an end run, innovators argue that existing policy does not apply to them because their business is fundamentally new. Exemptions are cases in which innovators are exploiting exceptions in existing policies. Gaps cover cases in which innovators raise entirely new policy concerns, and there is no applicable policy. Finally, solutions are cases in which the innovator offers better social welfare outcomes, but faces barriers under existing regulations.

Layering-in a developmentalist logic, if innovations serve to unlock structural barriers to development then they should be pursued, and if they reinforce those barriers they might well be avoided. In particular, applying a platform developmentalist logic, the state would favour or even incentivise innovations that advance the stewardship of data resources and the creation of information systems in ways that promote balanced growth. If national development objectives outweigh narrow business interests, governments would be justified in crafting regulatory frameworks that prioritise the achievement of development gains. In the case of end runs, exemptions or gaps, a regulator would evaluate the economic and social potential of the new model, and either block the innovation if it undermines developmental gains, or introduce new regulations to support the innovation if it can be leveraged to advance them. In the case of solutions the regulator could either give the innovators a free pass (no new policy), or introduce new regulatory structures that better enable people to leverage these new benefits.

If policy resulting from platform disruptions falls outside of these frameworks, then we can surmise that either there has been a failure of governance, or the process has been co-opted by special interests. According to Fairfield (2015), these special interests can be expressed through instrumental power, structural power, electoral influence and social forces. The first two are the domain of businesses. Instrumental power is expressed through “favorable relationships with policymakers that enhance access and create bias in favor of business interests” (ibid., p. 420). Structural power is the ability of businesses to mobilise investment decisions to influence state decisions. Electoral influence is the pressure that elected officials feel to serve the public, as when they feel pressure to support Uber because it is well liked by users. Lastly, people affected by technological innovations can create pressure through popular mobilisation, as when taxi unions have protested against the introduction of Uber. The latter two forms of pressure could in theory legitimise a neutral position by policymakers, however these forces can be co-opted by business interests as well. In particular, the ability of platform businesses to mobilise electoral power has been well documented by observers around the world (Gillespie, 2010) and in Latin America (Torres Castro, 2016).

Analysis of policy disruptions

By applying Biber et al.’s framework and Fairfield’s analysis of special interests, it is possible to explore the dynamics shaping the emergence of platform policies in Latin America, and consider whether platform developmentalism is leveraging these new technological innovations to advance economic and social goals in the region. Case material drawn from a review of policymaking in the transportation, lodging and fintech sectors of Chile, Colombia, Mexico, and Peru demonstrates how the ideas presented above can be used to analyse platform policymaking from a developmental point of view. These accounts are meant to be illustrative of the potential of the framework, rather than comprehensive in their analysis. They reveal specific themes of importance to developmental policy-making in the platform space: policy autonomy and capture in the case of transportation policy, how society values and leverages resources in the case of lodging, and state-market collaborations in the case of fintech. Based on these presentations, the conclusions offer possible avenues for future research that could explore platform developmentalism in greater specificity and with comparative rigour.

These four countries were selected as sources of case material to conduct an initial exploration of the model presented above because of their advanced uptake of platformisation, and also because they are particularly strong examples of the structural dynamics discussed above. They rank among the top six Latin American countries for ICT sector value added as a share of GDP 2010-2017 according to UNCTAD (2019, p. 74); among the top six Latin American countries in terms of the economic impact of digitalisation 2005-2013 (Katz, 2015, p. 167); among the top six Latin American countries in terms of job creation from digitisation (2005-2013) (ibid, p. 170), and among the top six in terms of financing for digital innovation (ibid, p. 282). Brazil, Argentina and Costa Rica also figure prominently in these classifications, however, Gaitán and Boschi (2015) argue that in Chile, Colombia, Mexico and Peru, government plays only a “subsidiary role” in terms of regulating economic activity (p. 180) in contrast to other Latin American states where governments have pursued more interventionist roles such as Argentina and Brazil, or Bolivia, Ecuador and Venezuela. Costa Rica, for its part, is well known as an exceptional case in Latin America (Sandbrook et al., 2007). The industries were selected because, according to a study conducted by Barros for the Government of Chile, they have been the most affected by platform disruption in Latin America (Barros, 2018).

Transportation: policy autonomy and transnational policy capture

The contrast between taxi regulation in Chile and Peru offers insights into the autonomy and capacity of policymakers in the two countries where platform policy-making is concerned. In July 2019, Chile’s Senate approved a new law to regulate ride-hailing (Cámara de Diputados de Chile, 2018). The law defines “Transportation Application Companies” as offering a service that allows a user to contract a driver for the purposes of moving from one place to another. Companies must be legally constituted in the country, registered as a tax paying entity, and offer civil and life insurance for both drivers and passengers. They must also provide a means to receive complaints, which implies having local staff in the country. The law creates a public registry with information about companies, drivers and vehicles. Drivers must have a professional taxi licence, and their vehicles must meet minimal technical, safety and signage requirements.

Meanwhile, a legal project with broad similarities was approved by the Congress of Peru in 2018 (Congreso de la República de Perú, 2016). It aimed to regulate entities that administer the digital intermediation of transportation services that allow passengers to access a paid transportation service. Peru’s legal project focused on consumer protection. It contemplated a registry, and required companies to constitute themselves legally within the country and to have “at least one administrative officer in the country, and a central telephone line with 24-hour a day client service” (Article 4). The project included basic requirements around car insurance and licensing, but was less comprehensive than the Chilean law in terms of taxation, insurance, public registries, licensing and safety.

In Chile, the traditional taxi sector has been vocal in defending its position within the marketplace, and has demanded that regulators step up to protect its standing (Díaz, 2019). Chile’s taxi sector took the form of unions with vested privileges, so both drivers and the companies they work for stood to lose through the introduction of ride-hailing apps. In Peru, in contrast, taxis have a long history of operating illegally, and efforts to regulate the sector have faced continual challenges (Cohen, 1987; Pashley & Hidalgo López, 2014). Uber’s arrival in Peru did provoke protests in September 2016, at which drivers argued that the 30% commission charged by the company was too high. However, there is little other evidence of protest in the country, and working tables held by the Peruvian Congress about its legal project struggled to identify official representatives of the taxi sector to participate (Valga Gutierrez, 2020, p. 38). Meanwhile, ride-hailing has been widely embraced in Peru by both drivers and passengers (Valga Gutierrez, 2020, p. 35).

The public sector faced different pressures around taxi regulation in these two countries. Chile faced an end run situation, in which innovators raised the same underlying policy concerns as incumbents, but were working outside of regulatory frameworks. Chile’s new law takes a neutral position on taxi market conditions that balances benefits and risks for all actors within this space.

Peru, on the other hand, is in some ways facing a solution situation, since the introduction of ride hailing offered a means to address the long-time problem of informality in the sector. This aligns with von Vacano’s (2017) argument that the introduction of ride-hailing systems into countries with highly informal taxi services can create improvements by formalising employment and service offerings (p. 100). If we apply the logic of Biber et al.’s framework (2017), new regulation would not be strictly necessary in Peru’s case, since the introduction of ride-hailing offers benefits, and those benefits may outweigh the costs. Peru’s legal project was introduced to enhance ride-hailing by ensuring protection of consumer rights in this new space: “The majority of the congressional leaders in the debate agreed and had the same reading of the problem: offer security given high rates of citizen violence and defend the rights of consumers” (Valga Gutierrez, 2020, p. 39). But despite being a pro-market regulation, this legal project failed to pass into law. Why is this?

Chile and Peru’s legal projects reveal a significant difference in business strategy between Uber and its main competitor, Cabify (personal correspondence, April 10, 2019). Cabify sets up offices and constitutes itself legally in countries where it operates, whereas Uber inserts its application into local markets from afar. While both are multinationals, this sets up a contest between a localised operation and a global operation. The introduction of legislation that localises operations provides a competitive advantage to Cabify, which is already positioned within Peru and Chile to operate under these conditions.

Chile’s new law gives Cabify a competitive advantage against Uber, but not against incumbent taxi companies. Chilean incumbents gain by already having the means to adhere to insurance, safety, training and signage regulations. And the law can be justified on consumer rights and customer safety grounds. This dynamic allowed Chilean regulators to sustain an impartial regulatory approach.

In Peru, on the other hand, when the legal project was sent from Congress to the executive it was rejected by President Vizcarra (Vizcarra & Villanueva, 2018). According to a local source, Uber leveraged its membership in ComexPeru to pressure Vizcarra to reject the law (personal correspondence, April 10, 2019). Comex represents foreign business interests in Peru and has considerable power given the openness of Peru’s economy. It launched a media campaign on Uber’s behalf which argued that “the collaborative economy introduces healthy competition” in ways that “help to resolve Peruvian challenges such as inequality, self-sufficiency and informality” (Gestión, 2019a) and chastised regulators for exhibiting “technophobia” and “wanting to corset new relations created by the digital economy into traditional models” (Castro, 2019). Meanwhile, Uber introduced new safety features to better serve its clients, a move which the press characterised as “getting ahead of Congress” (Gestión, 2019b). These interventions bore fruit because Comex has “weight in the Ministry of Finance and that, at the end of the day, this is the entity which most influences approval of legislative initiatives within the Executive” (Valga Gutierrez, 2020, p. 40).

These two cases suggest that the Chilean government has more regulatory autonomy than the Peruvian one when it comes to addressing the platform economy, particularly vis-à-vis transnational companies. However, Chile continued to endorse the pro-free market regulatory approach that is typical of that country. In comparison, the inability to pass even a consumer protection law in Peru demonstrates the very narrow space to manoeuvre for platform policymakers in that country. And yet, within the context of a pro-market regulatory state, the fact that the Peruvian congress contemplated a policy that might favour companies that at least establish some operations in the country suggests that there is a desire to intervene in the economy in proactive ways. Overall, this analysis raises questions about the type of capacity and negotiating tactics required to advance more developmentalist policies in the platform economy, particularly in the face of powerful multinational actors.

Lodging: leveraging societal resources for socially conscious forms of production

Where at first the disruptions caused by platformisation in the lodging sector appear to be easily resolved by striking a new balance between stakeholders, on closer examination this case demonstrates how platformisation challenges the definition of resources and how they might be mobilised for development.

In all four countries, early regulatory efforts by municipalities and federal actors in the digital lodging sector raised concerns in the traditional hotel sector. For example, in Mexico City a 3% tax on AirBnB bookings (CDMX, 2017) was challenged by the Mexican Association of Hotels and Motels, which pointed out that traditional hotels pay 16% value added tax plus 35% income tax (Martínez, 2017). And, in Colombia, the Ministry of Commerce, Industry and Tourism’s policy only required hosts to register their operations with the state (Vida, 2017), while the traditional hotel sector was required to register operations with tourism, migration and tax authorities and meet significant minimum requirements for health and safety (Corrales, 2018). The Colombian Association for Hotels and Tourism (Cotelco) noted a 1.8% drop in hotel occupancy and a 2.8% drop in hotel employment between 2016 and 2018, even as tourist numbers rose by 17% (Corrales, 2018, p. 14), prompting it to suggest that the Colombian policy promoted unfair competition (Cifuentes, 2020).

Representatives of the hotel sector have made various efforts to defend their established position by expressing instrumental power in policy spaces. For example, in Mexico, the current Tourism Secretary, Miguel Torruco Marqués, came to the role after serving simultaneously as President of the Mexican Association of Hotels and Motels for the Federal District (2012-2017) and Secretary of Tourism for the Federal District. As of 2019, he was carrying out an audit of digital lodging in Mexico and working with the Ministry of Finance and Public Credit to regulate digital lodging platforms (Herrera, 2019); however, to date no policy proposals have emerged.

In Peru, the former Minister of Tourism, Rogers Valencia Espinoza, came to his position after serving as President of the Regional Council for Tourism of Cusco, and as a member of the Peruvian Association for Hotels, Restaurants and Associated Entities (AHORA). In 2018, he proposed a Ministerial Resolution to the Peruvian Congress (Congreso de la República de Perú, 2018a) that would have made it illegal to operate short-term rentals unless all formal hotels were already full (Guerrero, 2018). ComexPerú (discussed earlier), which also represents AirBnB, issued statements in opposition to the new regulation (Riofrío, 2018), and legal experts pointed out that the proposed reform impinged on several other legal rights (Guerrero, 2018). Valencia Espinoza was shuffled into a lesser role as Minister of Foreign Trade and Tourism in April 2018, and the proposed regulatory reform was shelved.

In these examples, lodging innovations have been treated like an end run, and proposed policy solutions have gravitated towards levelling the playing field for lodging providers. But in fact, the policy dilemma is considerably more complex in this case. Digital lodging has become a mechanism that drives the transformation of housing stock into an investment asset, since platform innovations allow property owners to earn more from short-term rentals than from long-term tenants. This has led to the emergence of both professional AirBnB property managers and investors, and AirBnB investment schemes. For example, in Chile, a 2019 background document produced by a Member of Congress notes that “40% of advertisements belong to hosts who have multiple advertisements which, since it is unlikely that they live in all of their properties, correspond to commercial operators of informal accommodations” (Winter Etcheberry, 2019). In Colombia, developers such as Korn Architects have begun to build AirBnB apartment towers as an investment opportunity (Portafolio, 2018).

These types of schemes, if they prove successful, could offer opportunities for small scale investment that facilitate upward mobility, but only if lodging operations do not become concentrated in a few hands. To complicate matters, there are no restrictions on foreign property ownership in any of the four countries under consideration, which opens the housing stock to the possibility of foreign capitalisation. Allowing housing to become an investment asset has created well-documented problems with housing affordability and availability in cities around the world (Kusisto & Grant, 2019) which is not in the best interests of social equity. This means that the regulatory dilemma in the lodging sector is not just about levelling the playing field between hoteliers and homeowners. It is more broadly about protecting housing as a matter of equity and quality of life, and also regulating real estate to protect against housing bubbles or collapses. What at first appeared to be an end run in the eyes of hotel associations and some policymakers has revealed itself to be a policy gap, since digital lodging has created entirely new concerns.

Some legal projects are beginning to emerge to address these policy gaps. In October 2019 the Chilean Chamber of Deputies saw a motion to modify Law 20.423 Institutional System for the Development of Tourism (Winter Etcheberry, 2019). It aimed to “regulate temporary accommodations to ensure the interests of the users and the communities that experience their effects” (p. 2). It proposed limiting short-term rentals to 90 days per unit per year as a means to control the incentive mechanisms driving investment in housing stock. It also proposed new municipal fines, as well as changes to the regulation of condominiums that would limit individuals to a single vote, regardless of the number of units they own. 4 This proposal might be considered developmental in that it seeks to protect the economic potential of micro-lodging for small-scale operators, while also limiting concentration of power in the hands of housing investors. It is also interesting to note that the motion presenting the legal project explicitly cites corporate actors’ unwillingness to share data about their operations with the state, and the difficulties that this creates for crafting regulation (ibid., p. 3).

The Chilean example can be contrasted with a recent municipal legal project put forward in July 2020 that proposes a ban on the short-term rental of condominium units in Mexico City. The document argues that “For no motive can they be used for temporary accommodation such as what is offered by platforms like AirBnB and other similar modalities which are in contravention of condominium, business, sanitation, civil protection, fiscal and other norms” (Ortega & Navarrete, 2020). Here, rather than exploring socially relevant applications of the platform economic form, the policy seeks to engineer a separation between social and productive spaces.

Because the platform lodging sector presents policy gaps that demand novel regulatory approaches, there may be greater possibility for policy innovations that include more developmental orientations. However the brief exploration presented here suggests the complexity of this proposition. A developmentally minded policymaker could perhaps create opportunities to leverage housing stock for investment, economic growth, economic diversification, and microenterprise if this is in the interest of the community; but such a policy would also need to resist pressures from international investors and tourism operators, as well as real estate companies and property holding elites. This raises questions about how platform developmentalism could or should leverage community resources, and how best it can incentivise socially beneficial forms of capitalisation.

Fintech: crafting state-market collaborations that advance developmental objectives

Fintech demonstrates the challenges and potential of state-market collaborations to advance developmental objectives. The term refers to the use of digital technologies to create innovations in financial business models (Anagnostopoulos, 2018). These include crowd-based means to access capital, online or mobile banking applications, and new ways of processing transactions in the marketplace. Fintech is often itself platformised, as in the case of crowd-based financial services, and it also drives the larger platform ecosystem by enabling digital transactions.

Mexico’s new Fintech Law, which came into force in March 2018 (Cámara de Diputados de México, 2018), is the most advanced framework among the four countries under consideration (Aleman, 2020). Mexico designed an entirely new legal framework in which “fintech activities are regulated similarly to other financial services, such as banking and securities services” (Bolaños & Botello, 2019, p. 25). The new law creates standards for the operation of crowdfunding, lending and equity schemes, and the operation of virtual wallets. It also establishes a regulatory sandbox framework for developing new fintech products or services under the supervision of the banking authority; these sandboxes are meant to offer a way for regulators and innovators to learn and grow together. Finally, the law also allows banks to invest in or acquire fintech companies, or companies that develop fintech innovations.

Mexico’s law offers a response to issues that have been the subject of intense debate within the fintech sector in the countries under study. When fintech innovations such as crowdfunding first arose in the region, they created an end run situation. Fintech innovators claimed that existing regulations did not apply to them; regulators often claimed that they did. In Chile, for example, crowdfunding initiatives exist without the benefit of crowdfunding regulation; however, “Chilean law forbids raising funds from the public (except for banks) and crowdfunding has been seen as breaching such a restriction” (Peralta & Noriega, 2019, p. 6). 5 This situation leaves both innovators and regulators in a difficult position.

In these countries, existing regulatory frameworks, as well as the banking sector they refer to, were developed to support big capital and big investments. In contrast, fintech innovations have been oriented towards servicing small scale banking and micro transactions. Given that financial inclusion has been historically low in these countries, this casts fintech in a favourable light. However regulators have been cautious about the potential risks of digital banking innovations. For example, in the debates that arose around Mexico’s fintech law, the Mexican National Commission for Protection and Defence of Users of Financial Services (CONDUSEF) argued that informal banking mechanisms contradicted the goals of financial inclusion by putting citizens at risk (Foro Jurídico, 2017). To complicate matters, it is difficult for authorities to keep up with the innovations produced by the fintech sector, and in particular, to distinguish serious innovators from those looking to exploit regulatory loopholes for personal gain.

Some regulators have managed these contradictions by asserting authority over crowd-based initiatives and creating regulatory sandboxes that allow corporate actors to experiment with financial innovations under the supervision of the regulator. Sandbox frameworks are often paired with regulations that allow banks to invest in or purchase fintech companies, thus providing a means to fund fintech innovations. For example, in 2018 the Colombian Supervisor of Finance issued Decree 1357, which regulated debt and equity crowdfunding; it also created a fintech sandbox and issued Decree 2443, which authorised credit institutions, financial services entities and capitalisation companies to invest in fintech companies that develop technologies directly related to the corporate activities of the investor.

But this move provoked pushback from Colombian fintech start-ups. The new regulatory arrangement effectively formalises fintech, and ties access to investment financing to the needs of existing financial institutions. A representative of industry association Colombia Fintech complained about the new scenario, saying that “The Supervisor doesn’t want to put itself in problems with the banks, so it is taking a lukewarm position, but fintech companies do need a regulatory framework, because each time they try to do something, the Bank of the Republic, the Ministry of Finance, or the very same Supervisor of Finance, tells them that they can’t do it because it is too risky” (quoted in García, 2018). This can be read as a call for greater transparency in the operations of the Supervisor of Finance, which is perceived to be in the pocket of Colombia’s traditional banking sector. If this is true, then it suggests that incumbent banking players may be influencing policy in ways that undermine competition.

Peru has taken a different approach. There are currently no restrictions on investment by Peruvian financial entities in fintech companies, and there is no regulatory sandbox. Instead, fintech innovation has been left to the private sector. 6 At the end of 2018, the Peruvian fintech company Culqi, which created payment gateway services for authorising credit cards at point of sale, was purchased by Banco de Crédito del Perú. This transaction was considered a success because it generated new business for a large Peruvian company, while at the same time facilitating access to international credit card transactions at competitive rates (Alcázar & Sánchez, 2019). This could be considered a developmental outcome since it drives economic growth while also reducing costs for small businesses and consumers.

But if platform developmentalism takes seriously the need to put data and information systems at the centre of national development planning, Peru’s fintech deal needs careful analysis, because credit card companies are key sources of data for global consumer data brokerages (Christl, 2017). Indeed, a study of fintech sandboxes in 28 countries by the Consultative Group to Assist the Poor (CGAP) and the World Bank found that sandbox projects overwhelmingly focus on payments services and market infrastructure, and that “most sandbox-tested innovations do not target excluded and underserved customers at the base of the pyramid” (Jenik et al., 2019, n.p.). 7 The report concludes by recommending that regulators create financial inclusion themed regulatory sandboxes so that state and private actors can co-design policies and technologies that together address barriers to financial inclusion. Given the growing value of data, regulators could also use sandboxes to experiment with novel forms of data intermediation such as data trusts in order to ensure responsible capitalisation of this resource within the national economy. Two observations arise from this analysis: the significance of novel models of state-business collaboration to platform developmentalism, and the need to consider the relationship between data and money when creating policy for national economic and social well-being.

Conclusions

The idea of platform developmentalism challenges us to shift the notion of a platform society (van Dijck et al., 2018) from an ontological to a normative position, and to address platform business models as strategic choices rather than social outcomes. It means asking, given the technologies at our disposal, what kind of society do we want? How can these technologies best be applied to mobilise society’s resources to achieve those outcomes?

This means moving away from self-regulation by business interests, as well as the elision of societal governance with platform governance. Rather than looking at data and information systems as a way to connect local markets with global capitalist circuits, platform developmentalism argues that governments should steward their data and information resources as potential sources of economic growth and social wellbeing. Achieving this goal implies taking up new approaches to policy-making. It would require shifting away from small state regulatory approaches and towards more interventionist forms of state management.

In the four countries considered here, Chile, Colombia, Mexico and Peru, this would mean taking on powerful actors who have a vested interest in protecting limited state intervention in economic affairs. The challenges of overcoming historically embedded hegemonic coalitions and fundamentally shifting a nation’s approach to regulating capitalism are too complex to address here. What can be said is that the role of the ‘digital developmental state’, as Heeks (2018, p. 12) calls it, is a pressing research agenda. This paper offers guidance for researching “the processes and structures – particularly the politics and political economy – of digital economy policy-making and implementation in developing countries” (ibid., p. 11). In doing so, it moves the developmental research agenda away from treating platforms as a technological intervention (Koskinen et al., 2019) and instead focuses on the context in which platformisation creates effects.

With this in mind, if we are to find new regulatory pathways for the platform economy, I agree with van Dijck et al.’s (2018) call to analyse “how societal values form the heart of debates over private gain versus public interest” in platform business models (p. 140). They go on to say that “Articulating which values are contested by whom in which context may help reshape the current platform ecosystem in ways that make it more responsive to public concerns” (ibid.). However, while this is a valuable research agenda, I am unsure why citizens would put their trust in the rather vague, faceless and, for Latin American countries, often foreign platform ecosystems. In that regard, if people are to understand platformisation and put pressure on governments to properly regulate this business model, community-engaged research is required on personal data literacy (D’Ignazio, 2017), especially where it serves to articulate local social values around data and information systems (Reilly, 2020).

These can in turn serve as alternative frameworks for locally relevant policy research. In particular, platformisation involves a reorganisation of the boundary between social space and productive space, as when our cars or homes, indeed our personal identity, become the subjects of new forms of commodification. This has important implications for developmentalism, which by definition must seek a balance between economic growth and social equity. Where does the community draw the line when it comes to leveraging its resources for productivity? Here research is needed into the implications of governance frameworks that put people at the centre of regimes of production based on personal data and information systems (Lehtiniemi & Haapoja, 2020).

Meanwhile, since the platform ecosystem is dominated by global corporations, research is urgently required about how small states can best negotiate with transnational platform actors to extract benefits for their nation. Egan (2010) points out that the balance of power between states and multinationals operates differently in resource extraction versus manufacturing. The same will be true of platformisation, because data resources are valued, cultivated and traded differently. When should states open their borders to platform multinationals, and how can they best position themselves for that negotiation to extract maximum developmental benefits?

Finally, as platformisation involves rapid technological innovation, it is often argued that governments should leave the heavy lifting to the private sector. But the experience of fintech shows us that this need not be the case. Platform developmentalism requires innovative models of corporate-government collaboration that can mobilise investment in ways that advance community agendas. Some readers may wonder if platform developmentalism means putting more data in the hands of state officials, but I do not think this need necessarily be the case. Rather, drawing on Jenik et al. (2019), research into sandbox models could explore regulatory avenues that allow private actors to work with personal data through fiduciary arrangements in ways that reinvest in community wellbeing. This would represent a true recognition of the close relationship between data and economic value in the platform economy.

To conclude, the technocentric assumption that platform innovation will somehow lead directly to development is a shallow proposition that ignores the political economy of the digital economy. Where platformisation can open pathways beyond structural barriers, it may offer fundamental opportunities to advance development goals. These opportunities will differ from one sector to another, and will take shape differently depending on existing hegemonic coalitions. Comparative analysis of policy struggles and their outcomes can offer a powerful means to better understand how platformisation is reshaping development processes, while also offering insights into the types of policies required to put platformisation to work for the betterment of communities.

Acknowledgements

This paper was written with the support of the Social Sciences and Humanities Research Council (SSHRC) of Canada. Thank you to Luis Lozano-Paredes for his assistance with field research.

References

Alcázar, R., & Sánchez, M. (2019). Peru. In Lloreda Camacho & Co (Ed.), LATAM Fintech Regulation (2nd ed., pp. 34–41). Lloreda Camacho & Co. https://lloredacamacho.com/wp-content/uploads/2019/12/LATAMFINTECHREGULATION-NE-EN-111219.pdf

Aleman, X. (2020). Fintech regulations in Latin America could fuel growth or freeze out startups. TechCrunch. https://techcrunch.com/2020/05/27/fintech-regulations-in-latin-america-could-fuel-growth-or-freeze-out-startups/

Amsden, A. H. (1989). Asia’s Next Giant: South Korea and Late Industrialization. Oxford University Press. https://doi.org/10.1093/0195076036.001.0001

Anagnostopoulos, I. (2018). Fintech and regtech: Impact on regulators and banks. Journal of Economics and Business, 100, 7–25. https://doi.org/10.1016/j.jeconbus.2018.07.003

Andrea, M. (2020). Book Review: The Platform Society: Public Values in a Connective World. International Journal of Communication, 14, 781–783. https://ijoc.org/index.php/ijoc/article/view/14271

Artopoulos, A., Cancela, V., Huarte, J., & Rivoir, A. (2019). El último kilómetro del e-commerce. Segunda brecha (digital) del desarrollo informacional. In A. L. Rivoir & M. J. Morales (Eds.), Tecnologías Digitales: Miradas Críticas de la Apropiación en América Latina (pp. 259–282). CLACSO.

Bagchi, A. (2000). The past and the future of the developmental state. Journal of World-Systems Research, 2, 398–442. https://doi.org/10.5195/jwsr.2000.216

Barros, C. (2018). Tecnologías disruptivas: Regulación de plataformas digitales. La Comisión Nacional de Productividad. http://www.comisiondeproductividad.cl/2018/04/tecnologias-disruptivas-regulacion-de-plataformas-digitales/

Biber, E., Light, S. E., Ruhl, J. B., & Salzman, J. (2017). Regulating business innovation as policy disruption: From the Model T to Airbnb. Vanderbilt Law Review, 70(5), 1561–1625. https://cdn.vanderbilt.edu/vu-wp0/wp-content/uploads/sites/278/2017/10/06190338/Regulating-Business-Innovation-as-Policy-Disruption.pdf

Bizberg, I., & Théret, B. (2012). La diversidad de los capitalismos latinoamericanos: Los casos de Argentina, Brasil y México. Noticias de la Regulación, 61, 1–22.

Bolaños, I., & Botello, M. (2019). Mexico. In Lloreda Camacho & Co (Ed.), LATAM Fintech Regulation (2nd ed., pp. 25–33). Lloreda Camacho & Co. https://lloredacamacho.com/wp-content/uploads/2019/12/LATAMFINTECHREGULATION-NE-EN-111219.pdf

Boschi, R. R. (2011). Variedades de Capitalismo, Política e Desenvolvimento na América Latina. Editora UFMG.

Bresser-Pereira, L., & Theuer, D. (2012, May 25). Latin America: After the neoliberal years, is the developmental state back in? Latin American Studies Association Annual Congress (LASA-2012), San Francisco. http://www.bresserpereira.org.br/papers/2012/373-Developmental-State-Back-in-Latin-America-Theuer.pdf

Bril-Mascarenhas, T., & Madariaga, A. (2017). Business power and the minimal state: The defeat of industrial policy in Chile. Journal of Development Studies, 55(6), 1047–1066. https://doi.org/10.1080/00220388.2017.1417587

Regula a las aplicaciones de transporte remunerado de pasajeros y los servicios que a través de ellas se presten, (2018). https://www.camara.cl/pley/pley_detalle.aspx?prmID=12456&prmBOLETIN=11934-15=

Ley para regular las instituciones de tecnología financiera, Pub. L. No. DOF 09-03-2018 (2018). https://static1.squarespace.com/static/58d2d686ff7c50366a50805d/t/5ac450630e2e72d53c20f091/1522815078233/LRITF_090318.pdf

Castillo, M., Rovira, S., Peres, W., Porcile, G., Rodríguez, A., Brossard, F., Rodrigues, M., Patiño, A., & Valderrama, P. (2016). Science, technology and innovation in the digital economy: The state of the art in Latin America and the Caribbean. Conference on Science, Innovation and Information and Communications Technologies of ECLAC, San Jose. https://repositorio.cepal.org/bitstream/handle/11362/40840/S1600832_en.pdf?sequence=1&isAllowed=y

C.D.M.X. (2017, May 11). Regula Gobierno de la Ciudad de México operación de AirBnB. Finanzas CDMX. http://www.finanzas.cdmx.gob.mx/comunicacion/nota/regula-gobierno-de-la-ciudad-de-mexico-operacion-de-airbnb

Christl, W. (2017). Corporate Surveillance in Everyday Life [Report]. Cracked Labs. https://crackedlabs.org/en/corporate-surveillance

Cifuentes, L. F. (2020). Cotelco destaca acuerdo entre el Gobierno y la Plataforma Airbnb. RCN Radio. https://www.rcnradio.com/colombia/cotelco-destaca-acuerdo-entre-el-gobieno-y-la-plataforma-airbnb

Cimoli, M., & Rovira, S. (2008). Elites and Structural Inertia in Latin America: An Introductory Note on the Political Economy of Development. Journal of Economic Issues, 42(2), 327–347. https://doi.org/10.1080/00213624.2008.11507142

Cohen, R. (1987). To become cabdriver in Lima, Peru, start car, start cruising. The Wall Street Journal.

Congreso Perú. (2016). Proyecto de ley que crea y regula el servicio privado de transporte a través de plataformas tecnológicas. http://www.leyes.congreso.gob.pe/Documentos/2016_2021/Proyectos_de_Ley_y_de_Resoluciones_Legislativas/PL0150520170608.pdf

Disponen la prepublicación del proyecto de Reglamento de Establecimientos de Hospedaje en el Portal Institucional del Ministerio. Resolution 170-2018-MINCETUR, (2018). https://hiperderecho.org/wp-content/uploads/2018/06/proyecto_reglamento_hospedajes_mincetur-2.pdf

Corrales, M. (2018). ¿Hay cama para tanta gente? Un Análisis sobre la regulación de AirBnB en Colombia [Undergraduate Thesis, Facultad de Ciencias Jurídicas]. http://hdl.handle.net/10554/36479

Crabtree, J., & Durand, F. (2017). Peru: Elite Power and Political Capture. Zed Books.

Diaz, R. (2019, April 5). ¿Final conflicto entre taxistas y ubers? Esto busca la ley de aplicaciones de transporte que pasó al Senado. El Definido. https://eldefinido.cl/actualidad/pais/10986/Fin-al-conflicto-entre-taxistas-y-ubers-Esto-busca-la-ley-de-aplicaciones-de-transporte-que-paso-al-Senado/

D’Ignazio, C. (2017). Creative data literacy: Bridging the gap between the data-haves and data-have nots. Information Design Journal, 23, 6–18. https://doi.org/10.1075/idj.23.1.03dig

Egan, P. (2010). Hard bargains: The impact of multinational corporations on economic reform in Latin America. Latin American Politics and Society, 52(1), 1–32. https://doi.org/10.1111/j.1548-2456.2010.00072.x

El Comercio. (2016, September 3). Taxistas realizaron caravana de protesta contra Uber. El Comercio. https://elcomercio.pe/lima/taxistas-realizaron-caravana-protesta-uber-fotos-254353-noticia/

Evans, P. (1995). Embedded Autonomy: States and Industrial Transformation. Princeton University Press.

Ezeomah, B., & Duncombe, R. (2019). The Role of Digital Platforms in Disrupting Agricultural Value Chains in Developing Countries. In P. Nielsen & H. C. Kimaro (Eds.), Information and Communication Technologies for Development. Strengthening Southern-Driven Cooperation as a Catalyst for ICT4D (Vol. 551, pp. 231–247). Springer International Publishing. https://doi.org/10.1007/978-3-030-18400-1_19

Fairfield, T. (2015). Structural power in comparative political economy: Perspectives from policy formulation in Latin America. Business and Politics, 17(3), 411–441. https://doi.org/10.1515/bap-2014-0047

Foro Jurídico. (2017, September 29). Ley Fintech: Acelerando la inclusión financiera. Foro Jurídico. https://forojuridico.mx/ley-fintech-acelerando-la-inclusion-financiera/

Franz, T. (2018). Power balances, transnational elites, and local economic governance: The political economy of development in Medellín. Local Economy: The Journal of the Local Economy Policy Unit, 33(1), 85–109. https://doi.org/10.1177/0269094218755560

Gaitán, F., & Boschi, R. (2015). State-business-labour relations and patterns of development in Latin America. In M. Ebenau, I. Bruff, & C. May (Eds.), New Directions in Comparative Capitalisms Research (pp. 172–188). Palgrave Macmillan. https://doi.org/10.1057/9781137444615_11

Galo, I. (2016). Economía Colaborativa En América Latina [Report]. Interamerican Development Bank. https://publications.iadb.org/es/economia-colaborativa-en-america-latina

García, C. (2018, September 10). Fuerte polémica por regulación para firmas de tecnología financiera. El Tiempo. https://www.eltiempo.com/economia/sector-financiero/cual-es-la-regulacion-fintech-en-colombia-266774

Gestión. (2019a). Comex: Economía digital puede ayudar a resolver retos como la desigualdad en Perú. Gestión. https://gestion.pe/economia/comex-economia-digital-ayudar-resolver-retos-desigualdad-peru-274430-noticia/

Gestión. (2019b, September). Uber se adelanta al Congreso e implementa botón de pánico. Gestión. https://gestion.pe/economia/empresas/uber-adelanta-congreso-e-implementa-boton-panico-243587-noticia/

Gillespie, T. (2010). The politics of ‘platforms’. New Media & Society, 12(3), 347–364. https://doi.org/10.1177/1461444809342738

Graham, M., Hjorth, I., & Lehdonvirta, V. (2017). Digital labour and development: Impacts of global digital labour platforms and the gig economy on worker livelihoods. Transfer: European Review of Labour and Research, 23(2), 125–162. https://doi.org/10.1177/1024258916687250

Guerrero, C. (2018, June). Mincetur busca prohibir que peruanos alquilen sus viviendas a través de Airbnb [Blog post]. Hiperderecho. https://hiperderecho.org/2018/06/mincetur-busca-prohibir-que-peruanos-alquilen-sus-viviendas-a-traves-de-airbnb/

Gurumurthy, A. (2018). Polities for the platform economy: Current trends and future directions [Report]. IT for Change. https://itforchange.net/platformpolitics/wp-content/uploads/2018/09/Mid_Project_Reflections_2018.pdf

Gurumurthy, A., Bharthur, D., Chami, N., & Narayan, V. (2020). Unskewing the Data Value Chain: A Policy Research Project for Equitable Platform Economies [Background Paper]. IT for Change. https://itforchange.net/unskewing-data-value-chain-a-policy-research-project-for-equitable-platform-economies

Guzmán, J. A. (2016). El capitalismo jerárquico de Chile difícilmente puede ser defendido por los partidarios del libre mercado. Centro de Investigación Periodística. https://ciperchile.cl/2016/05/04/el-capitalismo-jerarquico-de-chile-dificilmente-puede-ser-defendido-por-los-partidarios-del-libre-mercado/

Haggard, S. (2018). Developmental States. Cambridge University Press. https://doi.org/10.1017/9781108552738

Hagiu, A., & Wright, J. (2011). Multi-sided platforms (Working Paper No. 12–024). Harvard Business School. https://hbswk.hbs.edu/item/6681.html

Heeks, R. (2018a). Digital platforms in the Global South: Foundations and research agenda. (Development Implications of Digital Economies) [Working Paper]. Centre for Development Informatics, Global Development Institute, University of Manchester. https://diodeweb.files.wordpress.com/2018/10/digital-platforms-diode-paper.pdf

Heeks, R. (2018b). Digital economies and development: A research agenda. (Development Implications of Digital Economies) [Briefing]. Centre for Development Informatics, Global Development Institute, University of Manchester. https://diodeweb.files.wordpress.com/2018/10/digital-economy-and-development-research-agenda.pdf

Hernández López, M. (2017). Variedades de capitalismo, implicaciones para el desarrollo de América Latina. Economía Teoría y Práctica. Nueva Época, 46, 195–226. https://doi.org/10.24275/etypuam/ne/462017/hernandezlopez

Herrera, M. (2019). Sectur y la SHCP buscan generar marcos regulatorios para plataformas de hospedaje. Inmobiliare. https://inmobiliare.com/sectur-y-la-shcp-buscan-generar-marcos-regulatorios-para-plataformas-de-hospedaje/

Hira, A., & Reilly, K. (2017). The emergence of the sharing economy: Implications for development. Journal of Developing Societies, 33(2), 175–190. https://doi.org/10.1177/0169796X17710071

Jara Román, S. (2019, September 2). El secreto informe tributario de los super ricos que el SII hizo desaparecer. El Desconcierto. https://www.eldesconcierto.cl/2019/09/02/el-secreto-informe-tributario-de-los-super-ricos-que-el-sii-hizo-desaparecer/

Jayasuriya, K. (2001). Globalization and the changing architecture of the state: The politics of the regulatory state and the politics of negative co-ordination. Journal of European Public Policy, 8(1), 101–123. https://doi.org/10.1080/1350176001001859

Johnson, C. (1982). MITI and the Japanese Miracle. Stanford University Press.

Katz, R. (2015). El ecosistema y la economía digital en América Latina. Fundación Telefónica; Editorial Ariel. https://repositorio.cepal.org/bitstream/handle/11362/38916/1/ecosistema_digital_AL.pdf

Kenney, M., & Zysman, J. (2016). The rise of the platform economy. Issues in Science and Technology, XXXII(3). http://issues.org/32-3/the-rise-of-the-platform-economy/.

Koskinen, K., Bonina, C., & Eaton, B. (2018). Digital platforms in the Global South: Foundations and research agenda (Paper No. 08; Development Implications of Digital Economies). Centre for Development Informatics, Global Development Institute, University of Manchester. https://diodeweb.files.wordpress.com/2018/10/digital-platforms-diode-paper.pdf

Kusisto, L., & Grant, P. (2019, April 2). Affordable housing crisis spreads throughout world; Shortages persist despite millions of dollars invested and hundreds of thousands of units built. The Wall Street Journal.

Lehtiniemi, T., & Haapoja, J. (2020). Data agency at stake: MyData activism and alternative frames of equal participation. New Media & Society, 22(1), 87–104. https://doi.org/10.1177/1461444819861955

Levi-Faur, D. (2005). The global diffusion of regulatory capitalism. Annals of the American Academy of Political and Social Science, 598(1), 12–32. https://doi.org/10.1177/0002716204272371

López García, A. (2017, January 23). Oligarchic politics in Latin America. VoxUkraine. https://voxukraine.org/en/oligarchic-politics-en/.

López-Obrador, A. (2018). Primer discurso como Presidente [Speech]. https://www.youtube.com/watch?v=eKIZLDH3t_U.

Martínez, E. (2017, July 20). Hoteleros quieren regulación federal para Airbnb y HomeAway. El Financiero. https://www.elfinanciero.com.mx/empresas/hoteleros-quieren-regulacion-federal-para-airbnb-y-homeaway.html

O.E.C.D. (2016). Start-up Latin America 2016: Building an Innovative Future. Assessment and recommendations (Development Centre Studies). OECD Publishing. https://www.oecd.org/dev/americas/Startups2016-Assessment-and-Recommendations.pdf

Ondetti, G. (2017). El poder de las preferencias. Las élites económicas y tributación baja en México. Revista Mexicana de Ciencias Políticas y Sociales, 62(231), 47–76. https://doi.org/10.1016/S0185-1918(17)30038-7

Ortega, E., & Navarrete, F. (2020, July 20). Diputada de Morena propone ley que prohíbe rentar condominios en Airbnb en la CDMX. El Financiero. https://www.elfinanciero.com.mx/empresas/diputada-de-morena-propone-ley-que-prohibe-rentar-condominios-en-airbnb-en-la-cdmx

Osborne, D., & Gaebler, T. (1992). Reinventing Government: How the Entrepreneurial Spirit is Transforming the Public Sector. Addison-Wesley.

Parthasarathy, B., & Matilal, O. (2019). The platform economy and digital work: A developmental state perspective (Paper No. 9; Development Implications of Digital Economies). Centre for Development Informatics, Global Development Institute, University of Manchester. https://diodeweb.files.wordpress.com/2019/04/developmental-state-diode-paper-final.pdf

Pashley, A., & Hidalgo López, C. (2014, July 16). Por qué los taxistas de Lima ven con recelo la nueva normativa que rige la apariencia de los taxis. Global Voices. https://es.globalvoices.org/2014/07/16/por-que-los-taxistas-de-lima-ven-con-recelo-la-nueva-normativa-que-rige-la-estetica-de-los-taxis/

Peralta, D., & Noriega, F. (2019). Colombia. In Lloreda Camacho & Co (Ed.), LATAM Fintech Regulation (2nd ed., pp. 11–17). Lloreda Camacho & Co. https://lloredacamacho.com/wp-content/uploads/2019/12/LATAMFINTECHREGULATION-NE-EN-111219.pdf

Pérez Caldentey, E. (2008). The concept and evolution of the developmental state. International Journal of Political Economy, 37(3), 27–53. https://doi.org/10.2753/IJP0891-1916370302

Portafolio. (2018). Nuevo edificio en Bogotá se arrendará bajo la modalidad de Airbnb. Portafolio. https://www.portafolio.co/negocios/airbnb-tendra-un-nuevo-edificio-en-bogota-y-busca-inversionistas-522593

Quirós, F. (2019, March 29). Afirman que, en materia de regulación Fintech, Chile es el país que menos avanzó en la región. Cointelegraph. https://es.cointelegraph.com/news/in-terms-of-fintech-regulation-chile-is-the-least-advanced-country-in-the-region-acoording-to-fintechile

Reilly, K. (2017, June 9). Logistics brokerage, development and the urban economy. Presented at Hacia Ciudades Colaborativas: Desarrollo Urbano Integral Y Economía Colaborativa, Buenos Aires, Argentina [Talk]. CIPPEC XII Foro Lideres de Ciudades. https://www.youtube.com/watch?v=U68garIa2NE

Reilly, K. (2020). The challenge of decolonizing big data through Citizen Data Audits [1/3] [Blog post]. BigDataSur. https://data-activism.net/2020/04/bigdatasur-the-challenge-of-decolonizing-big-data-through-citizen-data-audits-1-3/

RioFrío, M. (2018, June). ¿En qué consiste la norma que traba el despegue de Airbnb? El Comercio. https://elcomercio.pe/economia/negocios/consiste-norma-traba-desplieuge-airbnb-noticia-529144-noticia/

Rodriguez, K., & Alimonti, V. (2020). A look-back and ahead on data protection in Latin America and Spain. [Blog post]. Electronic Frontier Foundation Deeplinks. https://www.eff.org/deeplinks/2020/09/look-back-and-ahead-data-protection-latin-america-and-spain

Routley, L. (2012). Developmental states: A review of the literature (Working Paper No. 03). School of Environment and Development, University of Manchester. https://doi.org/10.2139/ssrn.2141837

Sandbrook, R., Edelman, M., Heller, P., & Teichman, J. (2007). Social Democracy in the Global Periphery: Origins, Challenges, Prospects. https://doi.org/10.1017/CBO9780511491139

Saucedo, E., Rullan Rosanis, S., & Villafuerte, L. (2016). Hierarchical capitalism in Latin America: Comparative Analysis with Other Economies. International Journal of Business and Economic Sciences Applied Research, 8(3), 69–82. https://hrcak.srce.hr/161613

Schneider, B. (1999). The desarrollista state in Brazil and Mexico. In M. Woo-Cumings (Ed.), The Developmental State. Cornell University Press.

Schneider, B. (2008). Economic liberalization and corporate governance: The resilience of business groups in Latin America. Comparative Politics, 40(4), 379–397. https://doi.org/10.5129/001041508X12911362383237

Schneider, B. (2009). A comparative political economy of diversified business groups, or how states organize big business. Review of International Political Economy, 16(2), 178–201. https://doi.org/10.1080/09692290802453713

Schneider, B. R. (2013). Hierarchical Capitalism in Latin America: Business, Labor, and the Challenges of Equitable Development. Cambridge University Press. https://doi.org/10.1017/CBO9781107300446

Serna, M., & Bottinelli, E. (2018). El poder fáctico de las élites empresariales en la política latinoamericana. CLACSO.

Sheahan, J. (2002). Alternative models of capitalism in Latin America. In E. Huber (Ed.), Models of Capitalism. Lessons for Latin America. Penn State University Press.

Shipra. (2019). Book Review: José van Dijck, Thomas Poell and Martijn De Waal, The Platform Society: Public values in a connective world. Indian Journal of Human Development, 13(2), 235–38. https://doi.org/10.1177/0973703019870883

Singh, N., & Ovadia, J. (2018). The theory and practice of building developmental states in the Global South. Third World Quarterly, 39(6), 1033–1055. https://doi.org/10.1080/01436597.2018.1455143

Singh, P. J. (2020). Breaking up Big Tech: Separation of its data, cloud and intelligence layers. (Working Paper No. 09). Data Governance Network. https://datagovernance.org/files/research/1595858876.pdf

Srnicek, N. (2016). Platform capitalism. Polity Press.

Tecnopolítica. (2020, May 14). Data Commons Manifesto. https://tecnopolitica.net/en/content/data-commons-manifesto

Torres Castro, C. M. (2016). Comparación de Factores que han Afectado la Regulación de Uber en Bogotá, Colombia y Ciudad de México, México [Civil Engineering Degree Project, Universidad de los Andes]. https://repositorio.uniandes.edu.co/bitstream/handle/1992/15139/u754541.pdf

U.N.C.T.A.D. (2019). Digital Economy Report 2019. Value Creation and Capture: Implications for Developing Countries [Report]. United Nations.

Vacano, M. (2017). Sharing economy’ versus ‘informal sector’: Jakarta’s motorbike taxi industry in turmoil. ANUAC, 6(2), 97–101. https://doi.org/10.7340/anuac2239-625X-3076

Valga Gutierrez, A. (2020). ¿Políticas para la economía del futuro? La economía colaborativa y las plataformas digitales en el Perú: Un análisis de la respuesta del Congreso de la República (2014-2019) [Faculty of Social Sciences, Pontificia Universidad Católica del Perú]. http://hdl.handle.net/20.500.12404/16790

van Dijck, J., Poell, T., & de Waal, M. (2018). The Platform Society: Public Values in a Connective World. Oxford University Press.

Vida. (2017). Anfitriones de Airbnb tienen que formalizarse: Mincomercio. El Tiempo. https://www.eltiempo.com/vida/viajar/anfitriones-de-airbnb-tienen-que-formalizarse-con-registro-nacional-de-turismo-117

Vizcarra, M., & Villanueva, C. (2018). Oficio No 363-2018-PR Letter from the President of the Republic to Daniel Salavrry Villa, President of the Congress of the Republic. http://www.leyes.congreso.gob.pe/Documentos/2016_2021/Observacion_a_la_Autografa/OBAU0150520181228.pdf

Winter Etcheberry, G. (2019). Moción: Modifica la ley No 20.423, del Sistema Institucional para el Desarrollo del Turismo, y la ley No 19.537, sobre Copropiedad Inmobiliaria, para regular los servicios de alojamiento temporal ofrecidos a través de plataformas digitales. In 12978-02. https://www.camara.cl/verDoc.aspx?prmTipo=SIAL&prmID=50725&formato=pdf

Footnotes

1. The Washington Consensus refers to a set of free-market policies promoted by the World Bank, International Monetary Fund, and the Inter-American Development Bank in response to economic crises in developing countries sparked by the debt crisis of the 1980s.

2. The Pink Tide (‘Ola Rosada’) describes a wave of left-wing electoral victories that took place in Latin America from 1998-2010.

3. I focus on innovation here because of its obvious link to platformisation. Platformisation is the result of digital innovations, and its adoption also requires business innovation.

4. This project was remitted to committee, and as of September 2020, it had not yet returned to Congress for discussion.

5. Chile is otherwise considered to be the least advanced in terms of fintech regulation in Latin America (Quirós, 2019).

6. Alcázar and Sánchez (2019, p. 35) report that new fintech regulations are being considered by the Peruvian Superintendent of Banking. See: Ley que declara de interés nacional y necesidad pública la regulación de la tecnología financiera. Legal Project 3403/2018-CR. Congreso de la República de Perú. September 28, 2018. http://www.leyes.congreso.gob.pe/Documentos/2016_2021/Proyectos_de_Ley_y_de_Resoluciones_Legislativas/PL0340320180918.pdf

7. https://www.cgap.org/blog/do-regulatory-sandboxes-impact-financial-inclusion-look-data

A non-discrimination principle for rankings in app stores

Introduction

When consumers search for applications on their smartphones, they often begin their discovery by entering queries into the search function of app stores. Based on those queries, the algorithms of app store operators generate search results, giving certain applications a more visible position than others. Consumers are more likely to select applications that have a top position in the rankings. Dogruel et al. (2015) show that the first five search results in app stores receive an estimated 87% of the traffic through search. This means that discriminatory rankings of search results can limit consumer choice and the ability of app developers to compete within app stores.

App stores serve as a gateway for connecting app developers and consumers. They turn from gateways into gatekeepers when they limit the ability of app developers to reach consumers (Bostoen, 2018, p. 11). The vertical integration of app stores gives them the economic incentive to give their own applications a competitive advantage in the rankings of search results (Krämer & Schnurr, 2018, p. 523). Nicas and Collins (2019) report that Apple has favoured its own applications over those of competitors in the rankings in the App Store. Others argue that rankings based on popularity favour large and established players, placing small and new businesses on a lower competitive footing (Pandey et al., 2005, p. 1).

Due to high legal standards, EU competition law currently does not allow the European Commission to intervene effectively against discriminatory rankings in app stores. To complement EU competition law, the EU legislator adopted the Platform-to-Business Regulation (Regulation (EU) 2019/1150, P2B Regulation) in June 2019. The P2B Regulation contains various transparency obligations for the rankings of search results in app stores. However, the P2B Regulation does not contain a prohibition of discrimination between app developers in those rankings.

Referring to the gatekeeper position of platforms, some scholars argue in favour of a non-discrimination principle for the rankings of search results (Pasquale, 2008, pp. 266-267). Bostoen (2018) draws parallels with EU telecom regulation, including net neutrality. The Open Internet Access Regulation (Regulation (EU) 2015/2120, OI Regulation), which has applied since 30 April 2016, introduces an obligation for providers of internet access services (ISPs) to treat traffic within their networks without discrimination. The EU legislator justifies this ex ante non-discrimination principle by reference to the gatekeeper position of ISPs. The OI Regulation seeks to prevent discriminatory practices by ISPs that reduce the incentives for online businesses to compete and innovate within the internet ecosystem.

Based on parallels with the non-discrimination principle in the OI Regulation, this article proposes a new ex ante EU-wide regulatory regime that prohibits app store operators from treating applications differently in the rankings of search results without objective justification. The article arrives at a (non-exhaustive) list of forbidden and permitted ranking rationales and variables for app stores. Ranking rationales refer to the economic or legal logic of app store operators for engaging in specific ranking practices. Permitted ranking rationales, such as those based on quality and price, reflect consumer choice and the parameters of effective competition between app developers in a market economy. Ranking rationales that potentially limit consumer choice and distort the digital level playing field include self-favouring without objective justification and the popularity of applications. The proposed framework could serve as a source of inspiration for a prohibition of discriminatory rankings under the new Digital Markets Act.

This article is structured as follows. Firstly, the article describes how Apple ranks search results within the App Store (section 1). Apple's ranking practices are used as a case study in this article. Secondly, the article explores the need for new ex ante sector-specific regulation of discriminatory rankings in app stores, complementing EU competition law (section 2). Based on theoretical and empirical literature, this article identifies several shortcomings of the enforcement of EU competition law against discriminatory rankings in app stores. The article also shows that the P2B Regulation does not contain a prohibition of such discriminatory practices. Supported by doctrinal legal research, this article derives a new framework from the OI Regulation for the regulation of rankings in app stores (section 3). The proposed non-discrimination principle is then applied to the case of Apple's rankings within the App Store (section 4). Finally, some of the counter-arguments, as identified in the literature on the regulation of online search, are rebutted (section 5).

Section 1: Apple's ranking practices in the App Store

Introduction

In 2008, Apple launched the App Store and enabled app developers to produce apps for its operating system, iOS (ACM, 2019, p. 20). App store operators are active in two-sided markets, which are subject to indirect network effects (Evans, 2003, p. 192). Indirect network effects mean that Apple's platform is more attractive to consumers when there is a range and diversity of popular applications available in its App Store. In order to convince app developers to launch their applications in the App Store, Apple must attract a sufficiently large consumer base. An important way for app store operators to attract consumers is to make it easier for consumers to find their preferred applications through the search function in the app store. Data from Sensor Tower shows that the majority of app downloads in the App Store originates from search (Briskman, 2018).

The ranking of apps within the App Store

When consumers want to find an application on their iPhone, they can enter queries into the search function of the App Store. Based on about 42 variables, Apple's algorithms rank the applications for a given search query and provide search results to the consumer (Nicas & Collins, 2019). Although Apple's algorithms are largely a “black box”, Apple has published the following rationales and variables for the ranking of organic search results:

  • The first ranking rationale is the text relevance for a given search query entered by the consumer. This ranking rationale includes ranking variables such as the name of the application, keyword field and the selected primary category of the application (Apple, 2020).
  • The second ranking rationale concerns the quality of the applications. This rationale covers variables such as the average rating of consumers and the quality of reviews (Apple, 2020).
  • The third ranking rationale is the popularity of applications. This ranking rationale covers variables such as the number of downloads and the number of ratings and reviews (Apple, 2020). The use of these variables may favour large and established over small and new app developers in downstream markets, because popularity often correlates with the market positions of firms (Pandey et al., 2005, p. 1); the illustrative sketch following this list returns to this point.
  • The fourth ranking rationale is the personalisation of search results in the App Store. This rationale consists of variables such as the search and purchase history of consumers (Apple, 2020).
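
To make these rationales more concrete, the following minimal sketch illustrates how a ranking score could, in principle, combine such variables. All weights, variable names and values here are hypothetical and are not drawn from Apple's undisclosed algorithm; the sketch is only meant to show why a popularity variable such as download counts can systematically advantage large, established applications over new entrants.

    from dataclasses import dataclass

    @dataclass
    class App:
        name: str
        text_relevance: float     # 0-1: hypothetical match between query and app metadata
        avg_rating: float         # 1-5: average consumer rating
        downloads: int            # lifetime downloads (popularity proxy)
        personal_affinity: float  # 0-1: hypothetical fit with this user's history

    # Hypothetical weights for the four published rationales; Apple's actual
    # weights and scoring function are not public.
    WEIGHTS = {"relevance": 0.4, "quality": 0.3, "popularity": 0.2, "personalisation": 0.1}

    def score(app: App, max_downloads: int) -> float:
        """Combine the four rationales into a single illustrative ranking score."""
        quality = (app.avg_rating - 1) / 4  # normalise 1-5 stars to 0-1
        popularity = app.downloads / max_downloads if max_downloads else 0.0
        return (WEIGHTS["relevance"] * app.text_relevance
                + WEIGHTS["quality"] * quality
                + WEIGHTS["popularity"] * popularity
                + WEIGHTS["personalisation"] * app.personal_affinity)

    apps = [
        App("incumbent streaming app", text_relevance=0.8, avg_rating=4.2,
            downloads=500_000_000, personal_affinity=0.5),
        App("new entrant streaming app", text_relevance=0.9, avg_rating=4.7,
            downloads=2_000_000, personal_affinity=0.5),
    ]
    max_dl = max(a.downloads for a in apps)
    for app in sorted(apps, key=lambda a: score(a, max_dl), reverse=True):
        print(f"{app.name}: {score(app, max_dl):.3f}")
    # Despite better relevance and ratings, the entrant ranks below the
    # incumbent once the popularity variable is weighted in.

In this illustration the new entrant scores higher on text relevance and consumer ratings, yet the incumbent's download volume dominates the final ordering, which is precisely the concern raised about the popularity rationale.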

Potentially discriminatory ranking practices

Reportedly, Apple has systematically favoured its own applications in the rankings of the App Store. Based on 600 searches on six iPhone models in the US, Mickle (2019) found that Apple's applications are ranked first in 95% of searches for applications that generate revenue through subscriptions or sales. Nicas and Collins (2019) report that Spotify used to be ranked first in the App Store for the search term “music” in the United States. However, the launch in June 2016 of Apple's own music-streaming service, Apple Music, caused a drop in the rankings for Spotify. Apple Music has appeared first in the rankings in the category “music” since its introduction (Nicas & Collins, 2019).

Apple has acknowledged that it does not apply consumer ratings and reviews to its own pre-installed applications, while these variables do affect the rankings of competitors' applications (Mickle, 2019, p. 6). However, Apple has stated that it does not favour its own applications over others in the rankings of its search results. Apple claims that the high rankings of its own applications are based on “user behaviour data” and can be explained by the “strong connection” of consumers with Apple's services (Mickle, 2019, p. 1).

Apple's self-favouring ranking practices and the use of the popularity ranking rationale raise concerns. When online businesses expect that a platform will discriminate against them, they are less likely to compete and innovate (Khan, 2018, p. 1008). EU competition law currently does not deal effectively with discriminatory ranking practices in app stores (section 2). At the same time, the P2B Regulation does not contain ex ante rules that prohibit discriminatory rankings in app stores (section 2).

On 15 December 2020, the European Commission published a proposal for a new Digital Markets Act (DMA). The proposed DMA bans self-favouring ranking practices by app stores that will be designated as “gatekeepers” under the DMA (section 3). It remains to be seen whether the proposed prohibition will also be included in the final text of the DMA, which requires acceptance of the prohibition by the European Parliament and the Council in the legislative procedure.

Section 2: The need for new ex ante sector-specific regulation complementing EU competition law

Limitations of EU competition law

The European Commission and the Netherlands Authority for Consumers and Markets (ACM) have announced competition law investigations into several practices of Apple surrounding its App Store. Yet, these authorities do not seem to focus their investigations on discriminatory rankings in Apple's App Store.

Geradin and Katsifis (2020) argue that Apple's self-favouring ranking practices (section 1) could potentially be qualified as an abuse of dominance under EU competition law. Nevertheless, for several reasons, this article argues that EU competition law provides the European Commission with limited means to intervene effectively ex post against discriminatory rankings in app stores. Firstly, the substantive legal standards for intervening against self-favouring ranking practices are unclear. The Google Search (Shopping) decision from the European Commission shows that self-favouring ranking practices of dominant platforms can constitute an exclusionary abuse under EU competition law (European Commission, 2017, para 341). However, the decision leaves legal uncertainty as to what substantive legal standards determine whether a specific differential ranking practice amounts to abusive self-favouring (Zingales, 2019, pp. 407-408).

Secondly, the substantive legal standards for intervention are presumably too high in cases where the use of the popularity ranking rationale systematically favours large over small app developers in app stores. One important reason is that EU competition law aims to protect effective competition, but does not seek to safeguard a level playing field for all firms (Graef, 2019, p. 480). For example, if a new app developer is not (yet) as efficient as the dominant app store operator in the downstream market, EU competition law generally provides little protection to this app developer.

Thirdly, the long average duration of EU competition law cases undermines the ability of the European Commission to prevent restrictions of competition in downstream markets by dominant platforms (Van Gorp & De Bijl, p. 201). Cases regarding abuse of dominant position have an estimated average duration of 61 months (Dethmers & Blondeel, 2017, p. 161). This is due to the laborious work of defining the relevant market, evaluating dominance and establishing a theory of harm, which is especially complicated in digital markets. Posner (2000) argues that lengthy procedures are particularly problematic in digital markets where conditions change rapidly.

Some scholars, such as Lundqvist (2019), have argued that the EU legislator should not turn to ex ante sector-specific regulation, but should wait for EU competition law to develop new tests and tools. There are various initiatives to improve the effectiveness of the enforcement of EU competition law against discriminatory practices of “digital gatekeepers” (Furman, 2019, p. 41). The European Commission currently explores the need for a new competition tool to ensure “timely and effective intervention against structural competition problems across markets”, which includes competition problems related to digital gatekeepers (European Commission, 2020, p. 1). Scholars have also proposed several legal shortcuts to enable speedier interventions against platforms that act as digital gatekeepers. For example, Van Gorp and De Bijl (2019) propose a policy option where a discriminatory practice of a vertically integrated platform gives rise to a legal presumption of an abuse of dominance (Van Gorp & De Bijl, 2019, pp. 43-44).

Although these initiatives and proposals seem promising, some scholars have rightly pointed out that it is currently unclear when a platform should be considered a “digital gatekeeper” under EU competition law (Graef, 2018, p. 486). The European Commission and European courts have not yet defined and applied this concept in EU competition law cases (Alexiadis & De Streel, 2020, pp. 5-6). The shortcomings of EU competition law, as mentioned in this section, therefore also justify the exploration of complementary ex ante regulation by the EU legislator.

The proposal for the DMA acknowledges that EU competition law provides limited means to intervene timely and effectively against a number of harmful practices of digital gatekeepers (Proposal DMA, consideration 5). The DMA seeks to complement EU competition law and includes, inter alia, an ex ante prohibition of self-favouring rankings of app stores that are designated as “gatekeeper” under the DMA (section 3). In contrast to EU competition law, the proposal for the DMA sets out a framework for determining whether a specific platform must be considered a digital gatekeeper (Proposal DMA, Article 3).

The Platform-to-Business Regulation

To complement EU competition law, the EU legislator adopted the P2B Regulation in June 2019. The Regulation contains ex ante rules to improve fairness, transparency and effective redress possibilities in the commercial relationship between providers of online intermediary services (platforms) and businesses that provide their services through these platforms (online businesses). The Regulation mentions that platforms “serve as a gateway to consumers” and are “crucial for the commercial success of undertakings who use such services to reach consumers” (Regulation (EU) 2019/1150, considerations 2 and 12). The P2B Regulation seeks to promote fairness and transparency in this relationship of economic dependency between platforms and online businesses, with the aim of enhancing trust in the platform economy (Regulation (EU) 2019/1150, considerations 2 and 3).

To achieve its goals, the P2B Regulation introduces, inter alia, various transparency requirements for the rankings of platforms such as app stores. The transparency obligations in the P2B Regulation seek to strike a balance between 1) the interest of online businesses in gaining an adequate understanding of the functioning of the algorithms, and 2) the interest of app stores in preventing imitation and “gaming” of the algorithms (Regulation (EU) 2019/1150, consideration 27).

The first transparency obligation means that app stores must be transparent about the main ranking variables and changes to those variables. Article 5(1) of the Regulation imposes an obligation on app stores to describe in their terms and conditions 1) the number and type of main variables used to rank services on their platform and 2) the reasons for the relative importance of those variables as compared to other variables. Furthermore, app stores are required to notify online businesses about changes to the main ranking variables in their terms and conditions (Regulation (EU) 2019/1150, Article 3(2) in conjunction with Article 5(1)). Article 3(2) of the P2B Regulation states that the notification must generally be provided at least 15 days before the changes in the main variables are applied. The main ranking variables are to be selected by the app stores themselves (European Commission, 2020, paras 39-42), which may cover those currently published by Apple (section 1).

The second transparency obligation entails that app stores should provide transparency regarding decisions to lower the rankings of applications. Article 4(1) of the P2B Regulation states that app stores must provide a “statement of reasons” when they decide to demote the rankings of a specific application (Regulation (EU) 2019/1150, consideration 22). The statement must enable the app developer to challenge the demotion in rankings within the internal complaint-handling process of the app store (Regulation (EU) 2019/1150, consideration 22). Furthermore, under Article 12 of the Regulation, a large app store and app developer also have the possibility to solve a dispute concerning the demotion in rankings through mediation. If the app developer successfully challenges the decision to lower its rankings, Article 4(3) of the Regulation obliges the app store to correct the demotion “without undue delay”. However, the P2B Regulation does not contain any rules as to when a demotion in rankings would be unjustified.

Article 7(1) of the P2B Regulation contains the third transparency obligation, which entails that app stores should be transparent about the differential treatment between their own applications and those of competitors. Article 7(3) mentions that such differential treatment includes the favouring of own applications in the rankings of search results. Article 7(1) states that the app store must be transparent about the “main economic, commercial or legal considerations for such preferential treatment”.

The P2B Regulation does not contain an ex ante prohibition of discrimination in the rankings of search results in app stores. The European Commission has stated that a “fully binding solution (…) prohibiting the trading practices in question (…)” is not adopted because it was considered “disproportionate” (European Commission, 2018, p. 2). One reason for this could be that most obligations in the P2B Regulation apply to all online platforms, regardless of their size. This article proposes an ex ante prohibition of discriminatory rankings that would only apply to app stores that act as digital gatekeepers. The non-discrimination principle for rankings in app stores, as proposed in this article, can be used as a source of inspiration for a prohibition of discriminatory rankings under the DMA.

Section 3: The parallels with the Open Internet Access Regulation

Justifications for deriving inspiration from the Open Internet Access Regulation

As stated above, the European Commission published a proposal for a new Digital Markets Act (DMA) on 15 December 2020. The goal of the DMA is to ensure the contestability and fairness of digital markets (Proposal DMA, Article 1(1)). Article 6(1), under d, of the proposed DMA obliges digital platforms designated as gatekeepers to “refrain from treating more favourably in ranking services and products offered by the gatekeeper itself (...) compared to similar services or products of third party and apply fair and non-discriminatory conditions to such ranking”. The aim behind this proposed obligation is to prevent digital gatekeepers from undermining the contestability of services (e.g. music streaming services) offered through their platforms (Proposal DMA, consideration 48). The European Commission has mentioned in earlier documents that ex ante regulation in the telecom sector can be a useful source of inspiration for the new legal framework in the DMA (European Commission, 2020, p. 4). Based on parallels with EU telecom legislation, this article provides proposals for the way the EU legislator can shape a prohibition of discriminatory rankings in the DMA. It also shows how the European Commission can apply this prohibition to app stores when implementing and enforcing the DMA in the future.

Drawing parallels with the OI Regulation, this article proposes new ex ante EU regulation that forbids app store operators from differentiating in the rankings of search results without objective justification. The motivation for drawing parallels with the OI Regulation is twofold. Firstly, both ISPs and app store operators are gatekeepers of the internet. This gives these companies the power to discriminate against competitors, which reduces competitors' incentives to compete and innovate. Secondly, the OI Regulation contains a non-discrimination principle, which provides a useful source of inspiration for the regulation of rankings in app stores. Based on parallels with the OI Regulation, the EU legislator is able to provide flexibility to app store operators to engage in ranking practices that reduce search costs for consumers, while prohibiting those practices that limit consumer choice and distort the level playing field.

The article acknowledges that app store operators and ISPs differ in various ways from each other. These companies offer different types of services and operate in different markets. For example, a number of ISPs used to be public companies, which were privatised in the EU from the 1980s onwards (Savin, 2018, p. 13). App store operators do not share this history with ISPs. Possibly, app store markets are also more dynamic in terms of innovation than the markets in which ISPs operate. Despite these differences, drawing parallels with the OI Regulation enables the EU legislator to formulate clearly defined prohibitions of specific discriminatory ranking practices in app stores. The OI Regulation sets out two legal frameworks to assess if the use by ISPs of a specific rationale for differential treatment of traffic violates the non-discrimination principle. These legal frameworks can be used to formulate specific permitted and forbidden ranking rationales for app stores (section 4).

The gatekeeper position of ISPs and app store operators

Providers of internet access services

ISPs can be considered gateways for connecting online businesses and consumers. Two conditions can be formulated for the gateway position of ISPs. Firstly, the internet access services that ISPs provide are crucial for online businesses to reach consumers. Secondly, consumers have few alternatives besides their internet access service for accessing the applications of online businesses.

For a specific group of online businesses, such as music and video streaming services, internet connectivity is indispensable for reaching consumers with their applications. It is crucial for this group of firms that an ISP provides them with an internet access service that meets the quality of service requirements of their applications. If an ISP blocks or throttles the traffic of these applications, this affects the quality of experience of the consumer (European Commission, 2015, p. 2). It makes the service less attractive to consumers, potentially leading to a decrease in consumers and a loss of revenues for the online business.

Consumers also have few alternatives besides the internet access service of their current ISP to access the applications of their choice. Due to high economic and legal barriers to entry in telecom markets (Hauge & Jamison, 2009, pp. 23-25), only a limited number of ISPs are active in the national telecom markets of EU member states (Lear et al., 2017, pp. 40 and 53). Another factor that makes consumers dependent on their ISP is the relatively high cost of switching between ISPs (Hauge & Jamison, 2009, p. 25). Data for the Netherlands shows that only 11% of Dutch consumers with mobile subscriptions switched between mobile providers in a 12-month period in 2019-2020 (ACM, 2020, p. 18).

ISPs are thus gateways for connecting a specific group of online businesses and consumers. The OI Regulation seeks to prevent ISPs from turning into gatekeepers that pick winners and losers on the internet (European Commission, 2015, p. 2). The vertical integration of ISPs gives them the economic incentive to treat the traffic of their own applications more favourably than the traffic of competitors. With the adoption of the OI Regulation, the EU legislator responded to the frequently reported discriminatory practices of ISPs of blocking or throttling the traffic of Voice over IP services and peer-to-peer services (BEREC, 2012, p. 8).

App stores

App stores can also be regarded as a gateway for connecting online businesses and consumers. Firstly, app stores provide a service that is crucial for app developers to reach consumers. Secondly, consumers have few alternatives other than app stores for accessing the applications of app developers. This section illustrates this based on the case study of Apple's App Store.

For app developers, Apple's App Store is crucial to reach iPhone users. Hyrynsalmi et al. (2016) show that multi-homing by app developers between app stores is relatively uncommon. For the App Store, 2.6 to 4.9% of the applications are offered in one or more other app stores (Hyrynsalmi et al., 2016, p. 122). Even if an app developer is multi-homing between different app stores, it is still crucial for an app developer to be present in the App Store. The decision to withdraw from the App Store would make it difficult for the app developer to serve iPhone users, which would likely result in a loss of consumers and revenues (Geradin & Katsifis, 2020, p. 32). Within Apple's ecosystem, app developers have few alternative channels to reach iPhone users (ACM, 2019, pp. 43-46). For example, another option for app developers could be to offer content via web-apps. However, apps provided through the App Store give better access to the hardware functionalities of an iPhone (e.g., the camera or GPS), while web-apps do not offer the possibility of “swiping” (ACM, 2019, p. 43).

Consumers that use the iPhone also have few alternatives besides Apple's App Store to access the applications of their choice. Most consumers use one smartphone (Höppner et al., 2019, p. 6). Due to the high barriers to entry in these markets, consumers have few options when choosing their preferred operating system (European Commission, 2018). When consumers use an iPhone, Apple's App Store is the only app store available on iOS (ACM, 2019, p. 21). ACM (2019) shows that consumers have few alternatives to the App Store for accessing apps on their iPhone, except some limited possibilities for “tech-savvy” consumers (e.g. “jailbreaking”). The costs for consumers to switch between operating systems are high because of, inter alia, the effort to learn how to use another operating system (European Commission, 2018, para 527). As a result, consumers may become “locked-in” to Apple's ecosystem beyond the life cycle of their iPhone (ACM, 2019, p. 55). For example, in the Netherlands, only 9% of the consumers who bought another smartphone in 2018 voluntarily switched between operating systems (ACM, 2019, p. 53).

App store operators turn from gateways into gatekeepers when their rankings of search results limit the ability of app developers to reach consumers. As indicated in the introduction, Dogruel et al. (2015) show that the first five search results in app stores receive an estimated 87% of the traffic through search. 1 At the same time, Apple increasingly plays the “dual role” of a platform operator and a market participant in downstream markets (Khan, 2019, pp. 983-984). This vertical integration gives Apple an economic incentive to favour its own downstream applications over those of competitors. Apple's self-favouring ranking practices (section 1) provide an example of an app store operator turning into a gatekeeper. Another example is when the use of the popularity ranking rationale results in a systematic advantage for large app developers (section 1).

The non-discrimination principle in the Open Internet Access Regulation

In Telenor Hungary (2020), Advocate General Sánchez-Bordona considered the protection of an open internet to be the primary aim of the OI Regulation. The internet is considered open when it is an “open platform for innovation with low access barriers for end-users” (OI Regulation, consideration 3). To achieve this aim, the OI Regulation lays down a non-discrimination principle for the treatment of traffic by ISPs. This allows ISPs to differentiate between traffic within their networks, as long as the consumers' freedom to access the applications of their choice (i.e., consumer choice) is not limited and the level playing field is not distorted. This contrasts with a strict neutrality principle, which would require the equal treatment of all traffic by ISPs. The EU legislator seeks to give room to ISPs to handle traffic efficiently and provide innovative connectivity services, while safeguarding an open internet (ACM, 2018, p. 10).

Similarly, the proposed non-discrimination principle for rankings of search results within app stores aims to find a balance between 1) giving app store operators the flexibility to engage in ranking practices that reduce search costs for consumers, and 2) prohibiting those practices that limit consumer choice and distort the level playing field (section 4).

The two legal frameworks in the Open Internet Access Regulation

The OI Regulation sets out two legal frameworks for assessing if ISP practices violate the non-discrimination principle. In Telenor Hungary (2020), Advocate General Sánchez-Bordona stated that both frameworks aim to protect the right of end users to open internet access, as laid down in Article 3(1) of the Regulation. This right consists of 1) the right for consumers and business users of internet access services to access the applications of their choice and 2) the right for content and application providers (online businesses) to provide their applications via the internet access service. The first element seeks to protect consumer choice, while the second element seeks to safeguard a level playing field on the internet. It aims to ensure that small and new players, such as digital start-ups, can compete on an equal footing with big and established players (European Commission, 2015, p. 2).

The first legal framework applies to agreements and commercial practices of ISPs. The second legal framework deals with technical measures of ISPs to differentiate between traffic within the network. Central to both frameworks are the potential and actual effects of a practice on the open internet. In Telenor Hungary (2020), the European Court of Justice ruled that the first framework requires authorities to assess the effects of the practice on the rights of end-users as laid down in Article 3(1), while the second framework does not.

Agreements and commercial practices

Article 3(2) of the OI Regulation sets out a legal framework for practices relating to the commercial relationship between ISPs and end users. This category includes 1) agreements between ISPs and end-users on commercial and technical conditions of internet access services, and 2) any commercial practices of ISPs. One specific example of a commercial practice is “zero-rating”. Zero-rating refers to a practice where the ISP does not count the use of certain applications towards the monthly maximum data that consumers can use for mobile internet (Krämer & Peitz, 2018, p. 502).

Article 3(2) states that these agreements and commercial practices are allowed, as long as they do not limit the exercise of the end users' right to open internet access as laid down in Article 3(1). A zero-rating service violates the non-discrimination principle when it limits consumer choice and distorts the digital level playing field. National regulatory authorities must assess this on a case-by-case basis, based on multiple factors (BEREC, 2020, paras 42-48). In particular, the authorities should evaluate whether the zero-rating is open to all applications within a certain category (e.g. music streaming). They must assess the barriers to entering the zero-rating scheme and the number of small businesses that have entered it (Krämer & Peitz, 2018, p. 508). In other words, the zero-rating scheme may not be based on the rationale of favouring large and popular businesses over small and new ones.

Technical treatment of traffic

Article 3(3) of the OI Regulation contains a legal framework for the technically differential treatment of traffic within the networks of ISPs. These practices vary from giving temporary priority to certain traffic (e.g. VoIP) over other traffic (e.g. e-mail) in a situation of network congestion, to the practice of throttling or blocking traffic of specific applications. The European Court of Justice ruled in Telenor Hungary (2020) that national regulatory authorities can assess these technical measures—in parallel—under Article 3(2) and 3(1), to the extent that these measures are included as technical conditions in the agreements with end users.

Article 3(3) contains the general rule that ISPs must treat all traffic equally and without discrimination in technical terms. Under Article 3(3), second paragraph, ISPs are allowed to take reasonable traffic management measures. This provision allows ISPs to differentiate technically between traffic in order to handle traffic efficiently and prevent congestion in the network. To be considered reasonable, traffic management measures must meet several cumulative requirements. One of these requirements is that the rationale for these measures must be based on objective technical requirements of traffic, and may not be based on commercial considerations. This ban covers the prioritisation of traffic from the ISP's own applications without objective technical justification. In other words, the OI Regulation categorically bans the prioritisation of traffic based on the rationale of self-favouring without objective justification.
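To make the distinction concrete, the following is a minimal, purely illustrative sketch (the packet fields, traffic categories and app names are hypothetical, not drawn from the Regulation): a toy scheduler that prioritises traffic either by the objective technical requirements of its category, which Article 3(3) permits, or by the identity of the provider, which it forbids.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    app: str
    category: str      # e.g. "realtime" (VoIP, gaming) or "bulk" (e-mail, downloads)
    own_service: bool  # whether the application belongs to the ISP itself

def technical_priority(p: Packet) -> int:
    # Permitted rationale: based on the objective technical requirements of the traffic category.
    return 0 if p.category == "realtime" else 1

def commercial_priority(p: Packet) -> int:
    # Forbidden rationale: self-favouring without objective technical justification.
    return 0 if p.own_service else 1

queue = [
    Packet("ISPMail", "bulk", own_service=True),
    Packet("RivalVoIP", "realtime", own_service=False),
]

print([p.app for p in sorted(queue, key=technical_priority)])   # RivalVoIP first: allowed
print([p.app for p in sorted(queue, key=commercial_priority)])  # ISPMail first: forbidden
```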

Section 4: Application of the framework to Apple's ranking practices

The non-discrimination principle in the OI Regulation can be used as a source of inspiration for the regulation of discriminatory rankings in app stores. Based on the case study of Apple's App Store, a list of permitted and forbidden ranking rationales is developed. The article formulates several permitted ranking rationales that do not limit consumer choice or distort the level playing field. The article proposes a categorical ban of self-favouring ranking rationales without objective justification and an effects-based prohibition of the popularity ranking rationale. A public authority in the EU could monitor and enforce the proposed regulatory framework. The article leaves open whether this should be a new or existing public authority.

Permitted ranking rationales

It is the very business of app store operators to rank applications when consumers search for their preferred application. However, Chandler (2007) rightly argues that online businesses should not be subject to discriminatory ranking rationales that the consumer would not use. When app store operators use ranking rationales based on text relevance, quality and personalisation (section 1), this merely reflects consumer choice and the parameters of effective competition in a market economy. The same reasoning applies to rationales related to the price and legality of content. The use of these ranking rationales reduces search costs for consumers, which is welfare-enhancing (Martens, 2016, p. 20). Under the proposed non-discrimination principle, these would constitute permitted ranking rationales.

Consumers may have different notions of the quality of applications and the desirability of personalised search results. The EU legislator could facilitate consumer choice by requiring app stores to implement a functionality that enables consumers to opt in and out of ranking variables relating to quality and personalisation, as illustrated in the sketch below.
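The following is a minimal sketch of what such an opt-in/opt-out functionality could look like, assuming a simple weighted-sum scoring model; the variable names, weights and example apps are hypothetical and only illustrate the idea.

```python
from dataclasses import dataclass

@dataclass
class App:
    name: str
    text_relevance: float   # match between the search query and the app's metadata
    quality: float          # e.g. average rating, low crash rate
    personalisation: float  # fit with this consumer's profile

# Hypothetical weights; a real app store would tune these.
WEIGHTS = {"text_relevance": 0.5, "quality": 0.3, "personalisation": 0.2}

def score(app: App, prefs: dict) -> float:
    """Weighted sum over ranking variables, skipping those the consumer opted out of."""
    total = app.text_relevance * WEIGHTS["text_relevance"]
    if prefs.get("use_quality", True):
        total += app.quality * WEIGHTS["quality"]
    if prefs.get("use_personalisation", True):
        total += app.personalisation * WEIGHTS["personalisation"]
    return total

def rank(apps, prefs):
    return sorted(apps, key=lambda a: score(a, prefs), reverse=True)

apps = [App("MusicA", 0.8, 0.6, 0.9), App("MusicB", 0.8, 0.9, 0.4)]
print([a.name for a in rank(apps, {"use_personalisation": False})])  # ranked without personalisation
print([a.name for a in rank(apps, {})])                              # ranked with all variables
```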

Forbidden ranking rationales

This section discusses the proposed framework for self-favouring ranking rationales without objective justification, the popularity ranking rationale, and a circumvention by app store operators of the forbidden ranking rationales.

Unequal application of ranking variables

Apple has stated that it does not apply consumer ratings and reviews to its own pre-installed applications, while these variables do affect the rankings of competitors' applications (section 1). An Apple spokesperson argued that “pre-installed apps don't need to be rated because they're already integrated into the iPhone” (Mickle, 2019, p. 6). Based on this information, Apple's justification for the unequal application of ranking variables does not follow from one of the formulated permitted ranking rationales such as quality or price.

If Apple cannot provide an objective justification for excluding its own applications from the application of certain ranking variables, then it should be regarded as the forbidden ranking rationale of self-favouring without objective justification. The higher position of Apple's own applications would then not be based on the merits of its applications, but would give Apple an artificial competitive advantage. The use of such a ranking rationale would violate the proposed non-discrimination principle, as it would unjustifiably steer consumers and distort the level playing field in favour of Apple's own applications.

Variables favouring own applications

Apple has reportedly engaged in ranking practices where it favoured its own applications (e.g. Apple Music) over those of competitors (e.g. Spotify). As Apple's algorithms are largely a “black box”, there is not much public information available about their exact functioning (section 1). This article discusses various scenarios that have been reported in the context of other markets (i.e. airlines and e-commerce platforms). This section shows how the proposed non-discrimination principle would deal with each of the scenarios.

The first scenario is that Apple favours its own applications in the App Store's rankings based on permitted ranking rationales such as quality. In this scenario, Apple is able to provide an objective justification for its ranking practices. The ranking practice would be allowed under the proposed non-discrimination principle.

The second scenario is that its own applications are ranked more favourably than those of competitors based on variables relating to Apple’s own identity. In the past, airlines in the United States applied variables relating to their own identity in their Customer Reservation System (CRS). When travel agents searched for flights through CRS, the airlines' own flights were often ranked first, even though other companies offered lower prices or better service (Edelman, 2011, p. 27). Eventually, the United States Department of Justice intervened and introduced rules prohibiting variables relating to the airlines' own identity (Edelman, 2011, p. 27).

A third scenario is that Apple uses ranking variables relating to the profitability for its own business. Mattioli (2019) reports that Amazon had plans to add “proxies for profit” to the search algorithms used on its marketplace. These proxies are variables that correlate with the profitability of rankings for Amazon's own business, and they might not be observable for consumers and online businesses (Mattioli, 2019, p. 6).

These last two scenarios would mean that Apple uses ranking variables relating to its own identity or profitability, which fall under the forbidden ranking rationale of self-favouring without objective justification. The use of such variables would give Apple an artificial competitive advantage in the rankings of search results that is not based on the merits of the applications. It would unjustifiably steer consumers and distort the level playing field in favour of Apple's own applications over those of competitors. Therefore, it would constitute an infringement of the proposed non-discrimination principle.

Popularity ranking rationale

App developers operate in “winner-takes-most” markets, where 3% of the app developers get more than 80% of app downloads (Hyrynsalmi et al., 2016, p. 125). Apple uses the popularity ranking rationale, such as a variable for the number of downloads. The use of such variables may favour large and established over small and new businesses in downstream markets (section 1).

The use of a popularity ranking rationale leads to a self-reinforcing mechanism or “entrenchment effect” (Pandey et al., 2005, p. 1). Large and established online businesses generally receive relatively many clicks and downloads, giving them a higher position in the rankings of search results. These higher rankings lead to more clicks and downloads for these firms, improving their rankings further. This self-reinforcing mechanism could reduce the ability of new and small online businesses to challenge large and established players—even if they have a superior offer (Pandey et al., 2005, p. 1).
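The dynamic can be illustrated with a toy simulation; all numbers below are invented for illustration and are not drawn from the cited studies. Apps are repeatedly ranked by download count, higher positions attract most new downloads, and an early lead compounds over time even though the later entrant offers higher quality.

```python
# Hypothetical catalogue: download counts and intrinsic quality are illustrative only.
apps = {
    "Incumbent": {"quality": 0.6, "downloads": 10_000},
    "Newcomer":  {"quality": 0.9, "downloads": 100},
}

# Illustrative click shares for ranking positions 1 and 2
# (cf. the concentration of traffic on the top results reported by Dogruel et al.).
POSITION_SHARE = [0.8, 0.2]

def popularity_ranking(catalogue):
    """Rank apps purely by their download counts (the popularity rationale)."""
    return sorted(catalogue, key=lambda name: catalogue[name]["downloads"], reverse=True)

for _ in range(50):  # simulate 50 rounds of consumers searching and downloading
    ranking = popularity_ranking(apps)
    for position, name in enumerate(ranking):
        # New downloads depend mostly on position, moderated somewhat by quality.
        apps[name]["downloads"] += int(1_000 * POSITION_SHARE[position] * apps[name]["quality"])

print(popularity_ranking(apps))                                    # the incumbent stays on top...
print({name: data["downloads"] for name, data in apps.items()})    # ...despite its lower quality
```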

Based on parallels with the OI Regulation, the proposed non-discrimination principle aims to protect the ability of small and new players to challenge large incumbents within app stores (section 3). Drawing parallels with the assessment of zero-rating under the OI Regulation, the EU legislator could introduce an effects-based prohibition of popularity ranking variables. The effects of using popularity ranking variables on consumer choice and the level playing field could be assessed on the basis of multiple factors. These factors could include the number of small players ranked in top positions within a certain category of applications (e.g. music streaming) and the availability of alternative channels for small and new businesses to reach consumers. To provide legal certainty, the EU legislator could formulate factors that give rise to the legal presumption that the non-discrimination principle is not violated. Such factors could include whether app store operators provide consumers with a functionality to switch popularity variables on and off.

Circumvention of the forbidden ranking rationales

App store operators may attempt to circumvent the forbidden ranking rationales. One way could be that the app store operator tweaks the weights attached to the ranking variables in such a way that this results in a higher ranking of its own apps. Another way to circumvent the proposed non-discrimination principle could be that the app store operator uses a permitted ranking rationale, which nevertheless leads to a systematic advantage for its own apps when it is combined with other terms and conditions applicable to the app store. For example, Apple imposes an obligation on app developers that offer digital goods and services in the App Store to use its payment systems. This obligation is combined with a 30% commission in the first year (ACM, 2019, p. 96). This commission drives up the prices of these applications for consumers, while this is not the case for Apple's own applications. As a result, the use of the permitted ranking rationale of price could still favour Apple's own apps over those of competitors in the rankings of search results.
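The weight-tweaking route can be illustrated with the same kind of hypothetical weighted-sum scoring sketched above; all apps, variables and weights are invented for illustration. The operator never scores its own app differently on any individual variable, yet shifting weight towards a variable on which its own app happens to excel flips the ranking in its favour.

```python
def weighted_score(app_scores: dict, weights: dict) -> float:
    """Score an app as a weighted sum over its (unchanged) per-variable scores."""
    return sum(app_scores[var] * weights[var] for var in weights)

apps = {
    "OperatorMusic":  {"text_relevance": 0.7, "quality": 0.6, "engagement": 0.9},
    "RivalStreaming": {"text_relevance": 0.8, "quality": 0.9, "engagement": 0.5},
}

neutral_weights = {"text_relevance": 0.4, "quality": 0.4, "engagement": 0.2}
tweaked_weights = {"text_relevance": 0.2, "quality": 0.2, "engagement": 0.6}

for label, weights in [("neutral", neutral_weights), ("tweaked", tweaked_weights)]:
    ranking = sorted(apps, key=lambda a: weighted_score(apps[a], weights), reverse=True)
    print(label, ranking)
# neutral  -> the rival ranks first on relevance and quality
# tweaked  -> the operator's own app ranks first, without any per-app score changing
```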

The OI Regulation prohibits any agreements and practices of ISPs that circumvent the goals and provisions of the Regulation (see for example BEREC, 2020, para 126). Similarly, the proposed framework would forbid app store operators to engage in ranking practices that circumvent the forbidden ranking rationales. The proposal for the DMA also seeks to ensure that the proposed prohibition of self-favouring rankings is not circumvented (Proposal DMA, Article 11). To that end, the current text of the DMA prohibits “any measure that may have an equivalent effect to the differentiated or preferential treatment in ranking” (Proposal DMA, consideration 49).

Section 5: Rebutting counter-arguments

Rankings have a limited impact on consumer choice

An argument against the proposed framework can be that discriminatory ranking practices of online platforms have little impact on consumer choice and the economic performance of online businesses. The argument goes that consumers with strong preferences will find the application they are looking for, even when their preferred application has a low ranking in the search results (Manne & Wright, 2012, p. 176). Consumers with weak preferences are to a larger degree influenced by the rankings, but their lack of preferences suggests little welfare loss (Manne & Wright, 2012, p. 177).

However, a study by Narayanan and Kalyanam (2015) suggests that consumers with weak preferences and lesser-known businesses are more strongly affected by online rankings. Consumers generally have weaker preferences for applications that are relatively unknown, while incumbents have had the time to form consumer preferences. New businesses generally belong to the category of lesser-known companies and are thus more strongly affected by their position in the rankings.

Consumers with weak preferences arguably require additional protection, because they are more vulnerable to the influence that app store operators exert through their rankings of search results. The proposed non-discrimination principle aims to ensure that discriminatory rankings do not limit the ability of new app developers with superior offerings to challenge incumbents.

Regulation hampers innovation of search algorithms

Another concern with the proposed non-discrimination principle might be that regulation that intervenes in the ranking variables of platforms could reduce innovation in search algorithms (Crane, 2014, p. 401, p. 405). This would decrease the ability of platforms to differentiate themselves from each other, resulting in a loss of competition between platforms.

The proposed framework seeks to protect the innovation and competition in downstream markets, such as the market for music streaming services. More specifically, the framework aims to protect the ability of small and new app developers to compete with large and established players. This admittedly imposes limitations on the direction of upstream innovation and competition by app store operators. For example, introducing new self-favouring ranking variables without objective justification is not allowed (section 4). These limitations are, however, necessary for the protection of downstream innovation and competition.

Some scholars, such as Renda (2015), have argued that the rules in the OI Regulation could impede the development of innovative network technologies such as 5G. However, BEREC (2018) considers that the OI Regulation “leaves considerable room for the implementation of 5G technologies”. The reason is that the non-discrimination principle in the OI Regulation leaves room for the technically differential treatment of traffic based on objective technical requirements of traffic (section 3).

A similar argument can be made against the claim that regulation would stifle innovation of search algorithms. The proposed non-discrimination principle leaves room to develop innovative ranking variables for which there is potential consumer demand, while protecting consumer choice and the level playing field. For example, app store operators could explore the implementation of new ranking variables representing quality, including privacy protection.

Competition between platforms gives incentive to serve consumers

Some scholars argue that competition between platforms forces them to provide high quality search results (Goldman, 2011, p. 101). The value of a platform depends on the presence of a diversity of applications for which there is consumer demand (Farrell & Weiser, 2003, p. 101). Platforms would therefore in principle have no incentive to discriminate against competing applications.

In principle, platforms have an interest in attracting a diversity of applications to increase the value of the platform. This might be different when an application becomes a sufficient competitive threat to the platform's business. In that case, the incentive for the platform to discriminate and avoid competition may dominate (Krämer & Schnurr, 2018, p. 524).

Two factors may lead consumers to show limited switching behaviour in response to a decrease in the quality of search results. Firstly, online platforms, such as Apple, often operate in highly concentrated markets characterised by high switching costs and network effects (Stucke & Ezrachi, 2017, pp. 76-77). Secondly, Patterson (2013) argues that search results can often be regarded as a so-called “credence good”. This means that it is often hard for consumers to observe the quality of search results even after the search (Patterson, 2013, pp. 11-12). The combination of these factors may reduce the incentive for app stores to produce high quality search results.

Discriminatory rankings are difficult to detect for authorities

The search algorithms of app stores are largely a “black box” (Pasquale, 2010, p. 170) and are protected as trade secrets by app store operators. Although the P2B Regulation introduces various transparency requirements for app stores (section 2), these requirements will most likely leave some deviations from the proposed framework undetected. For example, the inclusion of “proxies for profit” in the algorithms (Mattioli, 2019, p. 6) and more subtle ways of circumventing the forbidden ranking rationales (section 4) would be difficult for public authorities to detect.

Therefore, the enforcement of the proposed framework would only be effective when it is supplemented by a different transparency regime than the one that currently applies under the P2B Regulation. Pasquale (2010) has for example proposed a regime of “qualified transparency”, which would allow a government agency to detect discriminatory rankings while protecting the trade secrets of platforms. The regime could give a government agency the competence to audit the algorithms of an app store that acts as a gatekeeper when a reasonable suspicion of discriminatory rankings arises. The observed sudden drop of Spotify (section 1) could for example give rise to such a reasonable suspicion. The current text of the proposed DMA gives the European Commission the power to obtain access to, and explanations of, algorithms when this is necessary for the implementation and enforcement of the provisions of the DMA, including the prohibition of self-favouring rankings (Proposal DMA, Article 19 and Article 21 in conjunction with consideration 69).

To increase legal certainty, the auditability regime could include a safe harbour for app store operators that follow a “due diligence procedure” to test for possible sources of discrimination in their algorithms (Zingales, 2019, pp. 414-415). The auditability regime could for example be executed by an independent body of technical experts, which would assist the authority that enforces the proposed non-discrimination principle (Pasquale, 2010, pp. 168-169). The proposal for the DMA provides the European Commission with the power to appoint “independent external experts and auditors to assist the Commission to monitor the obligations and measures and to provide specific expertise or knowledge to the Commission” (Proposal DMA, Article 24(2)). The future will show whether this results in the creation of an independent body of technical experts that assists the Commission when it uses its proposed power to access algorithms for the implementation of the DMA.
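As a very simplified sketch of what such a due-diligence or audit test could look like (the data, field names and threshold below are hypothetical), an auditor could compare the positions the app store actually serves with the positions that the permitted ranking variables alone would predict, and flag a systematic residual advantage for the operator's own apps.

```python
def predicted_rank(apps, permitted_score):
    """Rank apps using only the permitted ranking variables (here: a single quality score)."""
    ordered = sorted(apps, key=permitted_score, reverse=True)
    return {app["name"]: pos for pos, app in enumerate(ordered, start=1)}

def audit_self_favouring(served_ranking, apps, permitted_score, threshold=1.5):
    """Flag when the operator's own apps are served systematically above the position
    that the permitted variables alone would predict."""
    predicted = predicted_rank(apps, permitted_score)
    gaps = [predicted[app["name"]] - served_ranking[app["name"]]
            for app in apps if app["own_app"]]
    mean_gap = sum(gaps) / len(gaps)  # positive = served above the predicted position
    return mean_gap, mean_gap > threshold

# Illustrative data: positions actually served vs. a single permitted variable (quality).
apps = [
    {"name": "OperatorMaps", "own_app": True,  "quality": 0.55},
    {"name": "RivalMaps",    "own_app": False, "quality": 0.90},
    {"name": "RivalNav",     "own_app": False, "quality": 0.80},
]
served_positions = {"OperatorMaps": 1, "RivalMaps": 2, "RivalNav": 3}

print(audit_self_favouring(served_positions, apps, lambda app: app["quality"]))
# (2.0, True): the operator's own app sits two positions above what quality alone predicts.
```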

Conclusion

Based on parallels with the Open Internet Access Regulation, this article proposes new ex ante regulation for rankings of search results in app stores. The proposed framework contains a prohibition on app store operators differentiating between app developers in the rankings without objective justification. The article formulates permitted ranking rationales, such as those based on text relevance, price, quality and the legality of content. The use of these ranking rationales merely reflects consumer choice and the parameters of effective competition within a market economy. The article also formulates a categorical ban on self-favouring ranking rationales without objective justification and an effects-based prohibition of popularity ranking rationales. These prohibitions are essential to protect consumer choice and the ability of small and new app developers to compete against big players in app stores. The article provides recommendations on how the EU legislator can shape a prohibition of discriminatory rankings in the DMA, and how this prohibition can be applied to app stores by the public authority that will implement and enforce the provisions of the DMA.

This article proposes a regulatory framework that focusses on discrimination in the rankings of app stores. Future research could explore whether the proposed non-discrimination principle can also be applied to the ranking practices of other platforms, such as Google Search and Amazon. Future research could also investigate which ranking rationales and variables caused the reported drop of Spotify in the rankings. The article raises the question of whether the EU legislator should also introduce an auditability regime for search algorithms in app stores; future research could address how such a regime should be shaped. There are ongoing discussions among policymakers and scholars about how EU competition law should be adapted to the needs of the digital economy. In its current shape, EU competition law does not deal effectively with discriminatory rankings in app stores. In line with the proposal for the DMA, the regulatory framework proposed in this article aims to complement EU competition law to ensure a more effective protection of consumer choice and the digital level playing field within app stores in the future.

Acknowledgements

The author wishes to express his gratitude to University researcher Juha Vesala and Professor Taina Pihlajarinne from the Faculty of Law, University of Helsinki. This article has benefited from their comments and input. The author would also like to thank the editors and reviewers of Internet Policy Review for their comments, which helped to improve the article further. Discussions with doctoral students at the University of Helsinki, including Olli Honkkila and Tone Knapstad, have also benefited the article. Any mistakes are the author’s sole responsibility. Finally, the author wants to give special thanks to Helmi Liikanen.

References

Apple (2020). App Store. https://developer.apple.com/app-store/discoverability

Apple (2020). App Store & Privacy. https://support.apple.com/en-us/HT210584

Alexiadis, P., & De Streel, A. (2020). Designing an EU Intervention Standard for Digital Platforms [Robert Schuman Centre for Advanced Studies Research Paper No. 2020/14]. European University Institute. https://cadmus.eui.eu/bitstream/handle/1814/66307/RSCAS%202020_14.pdf?sequence=1&isAllowed=y

Body of European Regulators for Electronic Communications (2012). A view of traffic management and other practices resulting in restrictions to the open Internet in Europe. Findings from BEREC's and the European Commission's joint investigation. Report, BoR (12) 30. https://berec.europa.eu/files/document_register/2012/7/BoR12_30_tm-snapshot.pdf

Body of European Regulators for Electronic Communications (2018). BEREC Opinion for the evaluation of the application of Regulation (EU) 2015/2120 and the BEREC Net Neutrality Guidelines. Opinion, BoR (18) 244. https://berec.europa.eu/eng/document_register/subject_matter/berec/opinions/8317-berec-opinion-for-the-evaluation-of-the-application-of-regulation-eu-20152120-and-the-berec-net-neutrality-guidelines

Body of European Regulators for Electronic Communications (2020). BEREC Guidelines on the Implementation of the Open Internet Regulation. Guidelines, BoR (20) 112. https://berec.europa.eu/eng/document_register/subject_matter/berec/regulatory_best_practices/guidelines/9277-berec-guidelines-on-the-implementation-of-the-open-internet-regulation

Bostoen, F. (2018). Neutrality, fairness or freedom? Principles for platform regulation. Internet Policy Review, 7(1), 1-19. https://doi.org/10.14763/2018.1.785

Briskman, J. (2018, May 14). App Store Browsing Has Grown to Drive 15% of Downloads Since iOS 11's Launch. Sensor Tower. https://sensortower.com/blog/app-store-download-sources

Chandler, J. (2007). A Right to Reach an Audience: An Approach to Intermediary Bias on the Internet. Hofstra Law Review, 35(3), 1095-1137. https://scholarlycommons.law.hofstra.edu/hlr/vol35/iss3/6/

Crane, D. (2014). After Search Neutrality: Drawing a Line between Promotion and Demotion. Journal of Law and Policy for the Information Society, 9(3), 397-406. https://heinonline.org/HOL/Page?handle=hein.journals/isjlpsoc9&div=16&g_sent=1&casa_token=&collection=journals

Crémer, J., de Montjoye, Y., & Schweitzer, H. (2019). Competition policy for the digital era. Publications Office of the European Union. https://ec.europa.eu/competition/publications/reports/kd0419345enn.pdf

Dethmers, F., & Blondeel, J. (2017). EU enforcement policy on abuse of dominance: Some Statistics and Facts. European Competition Law Review, 38(4), 147-164. http://awa2018.concurrences.com/IMG/pdf/eu_enforcement_policy_on_abuse_of_dominance_some_statistics_and_facts_f_dethmers_and_j_blondeel.pdf

Dogruel, L., Joeckel, S., & Bowman, N. (2015). Choosing the right app: An exploratory perspective on heuristic decision processes for smartphone app selection. Mobile Media & Communication, 3(1), 125-144. https://doi.org/10.1177%2F2050157914557509

Dogruel, L., Joeckel, S., & Bowman, N. (2020, October 13). Personal communication with the authors.

Edelman, B. (2011). Bias in Search Results?: Diagnosis and Response. The Indian Journal of Law and Technology, 7, 16-32. http://ijlt.in/archive/volume7/2_Edelman.pdf

European Commission (2015). Roaming Charges and Open Internet: questions and answers. https://ec.europa.eu/commission/presscorner/detail/sv/MEMO_15_5275

European Commission (2017). Case AT.39740 Google Search (Shopping). Commission Decision, C(2017) 4444. https://ec.europa.eu/competition/antitrust/cases/dec_docs/39740/39740_14996_3.pdf

European Commission (2018). Case AT.40099 Google Android. Commission Decision, C(2018) 4761. https://ec.europa.eu/competition/antitrust/cases/dec_docs/40099/40099_9993_3.pdf

European Commission (2018). Commission Staff Working Document Executive Summary of the Impact Assessment Accompanying the document Proposal for a Regulation of the European Parliament and the Council on promoting fairness and transparency for business users of online intermediation services, COM(2018) 238 final. https://ec.europa.eu/digital-single-market/en/news/impact-assessment-proposal--promoting-fairness-transparency-online-platforms

European Commission (2020). Inception Impact Assessment New Competition Tool, Ares(2020)2877634. https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/12416-New-competition-tool

European Commission (2020). Inception Impact Assessment Digital Services Act package, Ares(2020)2877647. https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/12418-Digital-Services-Act-package-ex-ante-regulatory-instrument-of-very-large-online-platforms-acting-as-gatekeepers

European Commission (2020). Commission Notice Guidelines on ranking transparency pursuant to Regulation (EU) 2019/1150 of the European Parliament and of the Council, 2020/C 424/01. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52020XC1208%2801%29

Evans, D. (2003). Some Empirical Aspects of Multi-sided Platform Industries. Review of Network Economics, 2(3), 191-209. https://www.rnejournal.com/articles/evans_final_sept03.pdf

Farrell, J., & Weiser, P. (2003). Modularity, Vertical Integration, and Open Access Policies: Towards a Convergence of Antitrust and Regulation in the Internet Age. Harvard Journal of Law and Technology 17(1), 85-134. https://scholar.law.colorado.edu/cgi/viewcontent.cgi?article=1539&context=articles

Furman, J., Coyle, D., Fletcher A., McAuley, D., & Marsden, P. (2019). Unlocking digital competition. Digital Competition Expert Panel. https://www.gov.uk/government/publications/unlocking-digital-competition-report-of-thedigital-competition-expert-panel

Geradin, D., & Katsifis, D. (2020, April 22). The antitrust case against the Apple App Store. SSRN. https://ssrn.com/abstract=3583029

Goldman, E. (2011). Revisiting Search Engine Bias. William Mitchell Law Review, 38(1), 95-110. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1860402

Graef, I. (2019). Differentiated Treatment in Platform-to-Business Relations: EU Competition Law and Economic Dependence. Yearbook of European Law, 38(1), 448-499. https://doi.org/10.1093/yel/yez008

Hauge, J., & Jamison, M. (2009). Analyzing Telecommunications Market Competition: Foundations for Best Practice. Public Utility Research Center. https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.542.6468&rep=rep1&type=pdf

Höppner, T., Westerhoff, P., & Weber, J. (2019, May 13). Taking a Bite at the Apple: Ensuring a Level-Playing-Field for Competition on App Stores. SSRN. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3394773

Hyrynsalmi, S., Suominen, A., & Mäntymäki, M. (2016). The influence of developer multi-homing on competition between software ecosystems. The Journal of Systems and Software, 111, 119-127. https://doi.org/10.1016/j.jss.2015.08.053

Joined Cases C‑807/18 and C‑39/19 Telenor Magyarország Zrt. v Nemzeti Média- és Hírközlési Hatóság Elnöke. (Advocate General Sánchez-Bordona 4 March 2020). http://curia.europa.eu/juris/document/document.jsf;jsessionid=CBF3663352B8851833B319C7C296A784?text=&docid=224082&pageIndex=0&doclang=EN&mode=req&dir=&occ=first&part=1&cid=409056

Joined Cases C-807/18 and C-39/19, requests for a preliminary ruling under Article 267 TFEU from the Fővárosi Törvényszék (Budapest High Court, Hungary), made by decisions of 11 September 2018, received at the Court on 20 December 2018 and 23 January 2019, respectively, in the proceedings Telenor Magyarország Zrt. v Nemzeti Média- és Hírközlési Hatóság Elnöke. (European Court of Justice 15 September 2020). http://curia.europa.eu/juris/document/document.jsf?text=&docid=231042&pageIndex=0&doclang=EN&mode=lst&dir=&occ=first&part=1&cid=7957333

Khan, L. (2019). The Separation of Platforms and Commerce. Columbia Law Review, 119(4), 973-1098. https://columbialawreview.org/wp-content/uploads/2019/05/Khan-THE_SEPARATION_OF_PLATFORMS_AND_COMMERCE-1.pdf

Krämer, J., & Schnurr, D. (2018). Is there a need for platform neutrality regulation in the EU? Telecommunications Policy, 42(7), 514-529. https://doi.org/10.1016/j.telpol.2018.06.004

Krämer, J., & Peitz, M. (2018). A fresh look at zero-rating. Telecommunications Policy, 42(7), 501-513. https://doi.org/10.1016/j.telpol.2018.06.005

Lear, DIW Berlin & Analysys Mason (2017). Economic impact of competition policy enforcement on the functioning of telecoms markets in the EU. Publications Office of the European Union. https://op.europa.eu/en/publication-detail/-/publication/5a579e1c-969e-11e7-b92d-01aa75ed71a1

Lundqvist, B. (2019). Regulating competition in the digital economy. In B. Lundqvist and M. Gal (Eds.), Competition Law for the Digital Economy (1st ed., pp. 2-28). Edward Elgar Publishing Limited.

Manne, G., & Wright, J. (2012). If Search Neutrality is the Answer, What's the Question? Columbia Business Law Review, 2012, 151-238. https://laweconcenter.org/wp-content/uploads/2012/01/Manne-Wright-If-Search-Neutrality-Is-the-Answer-Whats-the-Question-2012.pdf

Martens, B. (2016). An Economic Policy Perspective on Online Platforms [Institute for Prospective Technological Studies Digital Economy Working Paper 2016/05]. Joint Research Center. https://ec.europa.eu/jrc/sites/jrcsh/files/JRC101501.pdf

Mattioli, D. (2019, September 16). Amazon Changed Search Algorithm in Ways That Boost Its Own Products. Wall Street Journal. https://www.wsj.com/articles/amazon-changed-search-algorithm-in-ways-that-boost-its-own-products-11568645345

Mickle, T. (2019, July 23). Apple Dominates App Store Search Results, Thwarting Competitors. Wall Street Journal. https://www.wsj.com/articles/apple-dominates-app-store-search-results-thwarting-competitors-11563897221

Narayanan, S., & Kalyanam, K. (2015). Position Effects in Search Advertising and their Moderators: A Regression Discontinuity Approach. Marketing Science, 43(3), 388-407. https://doi.org/10.1287/mksc.2014.0893

Nicas, J., & Collins, K. (2019, September 9). How Apple's Apps Topped Rivals in the App Store It Controls. New York Times. https://www.nytimes.com/interactive/2019/09/09/technology/apple-app-store-competition.html

Pandey, S., Roy, S., Olston, C., Cho, J., & Chakrabarti, S. (2005). Shuffling a Stacked Deck: The Case for Partially Randomized Ranking of Search Engine Results. arXiv. https://arxiv.org/abs/cs/0503011

Pasquale, F. (2008). Internet Nondiscrimination Principles: Commercial Ethics for Carriers and Search Engines. The University of Chicago Legal Forum, 2008, 263-300. https://chicagounbound.uchicago.edu/uclf/vol2008/iss1/6

Pasquale, F. (2010). Beyond Innovation and Competition: The Need for Qualified Transparency in Internet Intermediaries. Northwestern University Law Review, 104(1), 105-173. https://digitalcommons.law.umaryland.edu/cgi/viewcontent.cgi?article=2348&context=fac_pubs

Patterson, M. (2013). Google and Search-Engine Market Power. Harvard Journal of Law & Technology Occasional Article Series, 2013, 1-23. https://jolt.law.harvard.edu/assets/misc/Patterson.pdf

Proposal for a Regulation of the European Parliament and the Council on contestable and fair markets in the digital sector (Digital Markets Act) COM(2020) 842 final. https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52020PC0842&from=nl

Regulation (EU) 2015/2120 of the European Parliament and the Council of 25 November 2015 laying down measures concerning open internet access and amending Directive 2002/22/EC on universal service and users’ rights relating to electronic communications networks and services and Regulation (EU) No 531/2012 on roaming on public mobile communications networks within the Union [2015] OJ L310/1. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32015R2120

Regulation (EU) 2019/1150 of the European Parliament and the Council of 20 June 2019 on promoting fairness and transparency for business users of online intermediation services [2019] OJ L186/57. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32019R1150

Renda, A. (2015). Antitrust, Regulation and the Neutrality Trap: A plea for smart, evidence-based internet policy. CEPS. https://www.ceps.eu/ceps-publications/antitrust-regulation-and-neutrality-trap/

Savin, A. (2018). EU Telecommunications Law (2nd ed.). Edward Elgar Publishing.

Stucke, M., & Ezrachi, A. (2017). When Competition Fails to Optimize Quality: A Look at Search Engines. Yale Journal of Law and Technology, 18(1), 70-110. https://digitalcommons.law.yale.edu/cgi/viewcontent.cgi?article=1120&context=yjolt

The Netherlands Authority for Consumers and Markets (2018). 5G and the Netherlands Authority for Consumers and Markets. https://www.acm.nl/sites/default/files/documents/2018-12/5g-and-acm.pdf

The Netherlands Authority for Consumers and Markets (2019). Market study into mobile app stores. https://www.acm.nl/sites/default/files/documents/market-study-into-mobile-app-stores.pdf

The Netherlands Authority for Consumers and Markets (2020). Consumentenonderzoek telecommarkt 2020. https://www.acm.nl/sites/default/files/documents/2020-09/consumentenonderzoek-telecommarkt-2020.pdf

Van Gorp, N., & De Bijl, P. (2019). Digital Gatekeepers: Assessing Exclusionary Conduct. e-Conomics. https://www.government.nl/binaries/government/documents/reports/2019/10/07/digital-gatekeepers/Digital+Gatekeepers.pdf

Zingales, N. (2019). Antitrust intent in an age of algorithmic nudging. Journal of Antitrust Enforcement, 7(3), 386-418. https://doi.org/10.1093/jaenfo/jnz010

Footnotes

1. The estimate of 87% is based on observations of 49 smartphone users from the United States and Germany viewing 189 apps in the Google Play Store from three predetermined categories of apps on a laboratory smartphone. In personal correspondence, the authors mentioned they “do not expect any different findings” were they to replicate the study (L. Dogruel et al., personal communication, 13 October 2020).

Towards a ban of discriminatory rankings by digital gatekeepers? Reflections on the proposal for a Digital Markets Act


December 2020 may mark a fundamental change in the regulation of the market power of digital platforms in both the EU and the US. The US Federal Trade Commission (FTC) sued Facebook for illegal monopolisation of the personal social networking market and requested a district court to order Facebook to divest Instagram and WhatsApp. On 15 December 2020, the European Commission published the long-awaited and ambitious proposals for a Digital Services Act (DSA) and a Digital Markets Act (DMA), which lay down a new EU regulatory framework for digital platforms. The DSA deals with the transparency and liability of digital platforms, while the DMA seeks to address the economic imbalances and unfair practices of powerful digital platforms. This commentary offers several first reflections on the goals of the proposed DMA and its prohibition of discriminatory rankings.

The proposal for the DMA imposes a wide range of obligations on digital platforms (“providers of core platform services” 1) that the European Commission designates as “gatekeeper” under the DMA. 2 One of these obligations is a prohibition of discriminatory rankings by digital gatekeepers. Based on parallels with the European Union’s Open Internet Access Regulation, my research paper from December 2020 (Brouwer, 2020) gives recommendations for the way that the Commission can apply this prohibition to app stores when implementing and enforcing the DMA.

Will the DMA make enforcement against digital gatekeepers more effective?

The proposal for the DMA contains a critical note about the functioning of EU competition law in the digital economy. It acknowledges that EU competition law does not allow the Commission to intervene in a timely and effective manner against a number of harmful practices of digital gatekeepers (Explanatory Memorandum DMA, p. 3, 8). For example, the investigation by the Commission in the Google Search (Shopping) case took more than six years. 3 In other cases, the high legal standards of competition law may even prevent the Commission from intervening at all. This is, for example, the case when a dominant platform excludes from the market a digital start-up that is not yet as efficient as the dominant platform. The reason is that EU competition law seeks to safeguard effective competition and not a level playing field for all firms (e.g. Graef, 2019, p. 480). The DMA complements EU competition law and aims to protect the “contestability and fairness of digital markets” (Proposal DMA, Article 1(1), recital 10). This includes that the DMA seeks to level the playing field between digital gatekeepers and other online businesses such as digital start-ups (Explanatory Memorandum DMA, p. 10).

The proposal for the DMA seems to give the European Commission several “legal shortcuts” to make interventions against harmful practices of digital gatekeepers more effective than under EU competition law. The proposed DMA does not require the Commission to define a relevant market, establish dominance, or formulate a theory of harm, all of which are very complex exercises in digital markets (Crémer et al., 2019). Instead, the obligations in the DMA apply to digital platforms that the Commission designates as “gatekeeper”. This designation process can start from the moment that the DMA enters into force (Proposal DMA, recital 16).

The proposed DMA considers digital platforms a “gatekeeper” if they meet three cumulative criteria: 1) they have a significant impact on the internal market, 2) they operate one or more important gateways to end users, and 3) they enjoy or are expected to enjoy an entrenched and durable position in their operations (Proposal DMA, Article 3(1)). When digital platforms provide a core platform service in at least three member states and meet specified quantitative thresholds based on, e.g., turnover and the number of active users during the last three years, this leads to the rebuttable presumption of a gatekeeper position under the DMA (Proposal DMA, Article 3(2) in conjunction with Article 3(4)). 4 If a digital platform is designated as a “gatekeeper”, a list of unfair practices is presumed to be harmful to the contestability of digital markets (Proposal DMA, Article 5, Article 6 in conjunction with recital 15). Contrary to EU competition law, the proposed DMA gives the Commission the power to intervene against these listed practices without the need to prove the actual, likely or presumed effects of these practices on competition (Proposal DMA, recital 10). It must be noted that the DMA does not limit the ability of the Commission to intervene under the EU competition rules (Proposal DMA, Article 1(6)).

What will be the scope of the ban of discriminatory rankings under the DMA?

One of the presumably harmful practices of digital gatekeepers is discriminatory rankings. Article 6(1), under d, of the proposed DMA prescribes that digital platforms, designated as gatekeepers, must “refrain from treating more favourably in ranking services and products offered by the gatekeeper itself (...) compared to similar services or products of third party and apply fair and non-discriminatory conditions to such ranking”. The aim behind this proposed prohibition is to prevent digital gatekeepers from undermining the contestability of services provided through their platforms (Proposal DMA, recital 48).

Several observations can be made in relation to this prohibition of discriminatory rankings in the proposal for the DMA. Firstly, the provision seems to introduce a categorical ban of “any form of differentiated or preferential treatment in ranking (…) whether through legal, commercial or technical means, in favour of products or services it offers itself (…)” (Proposal DMA, recital 49). My research paper proposes to limit such a prohibition to those differential rankings for which the digital platform cannot provide an objective justification based on, e.g., differences in quality or price (Brouwer, 2020, p. 15). For example, Apple has reportedly not applied consumer ratings and reviews to its own pre-installed apps, whereas these variables do determine the rankings of competitors’ apps (Mickle, 2019, p. 6). If Apple cannot provide an objective justification for this reported practice, it would constitute a discriminatory ranking practice under my proposed framework.

Secondly, Article 6(1), under d, proposed DMA in addition obliges digital gatekeepers to “apply fair and non-discriminatory conditions to such ranking”. In my view, this obligation should be read in conjunction with the provisions of the Platform-to-Business Regulation (Regulation (EU) 2019/1150, P2B Regulation). 5 The P2B Regulation aims to ensure fairness in the commercial relationship between digital platforms and businesses that provide services on these platforms. To that end, the P2B Regulation prescribes that digital platforms must, inter alia, be transparent in their terms and conditions about their main ranking parameters and any differential treatment between their own services and those of competitors (P2B Regulation, Article 5, Article 7). Ranking conditions should be considered “unfair” when these do not comply with the transparency requirements in the P2B Regulation.

Thirdly, the obligation for digital platforms to apply “non-discriminatory” conditions to rankings must also be interpreted in accordance with EU law. Under EU law, non-discrimination means that “comparable situations should not be treated differently and different situations should not be treated in the same way unless such treatment is objectively justified” (see Open Internet Access Regulation, recital 8). Amazon’s alleged practice of giving a lower ranking on its e-commerce platform to sellers that do not pay for its logistics service “Fulfilment by Amazon” (see e.g. AGCM, 2019) could potentially be considered a discriminatory ranking condition under the DMA. The question remains whether, as proposed in my research paper, this provision would forbid the use of the popularity ranking rationale (e.g. number of clicks and downloads) when this results in a systematic advantage for large over small and new businesses (Brouwer, 2020, p. 17) and therefore undermines the contestability of digital markets. 6

The detection of discriminatory rankings under the DMA

The prohibition of discriminatory rankings is one of the obligations that is “susceptible of being further specified” (Proposal DMA, Article 6). This means that the European Commission has the possibility to engage in a “regulatory dialogue” with the gatekeeper to ensure an effective implementation of the prohibition of discriminatory rankings (Proposal DMA, recital 58). My research paper identifies potential circumventions of the prohibition of discriminatory rankings 7, which may be difficult to detect for the Commission (Brouwer, 2020, p. 20). For example, a gatekeeper could use “proxies for profit” (Mattioli, 2019, p. 6), or tweak the weights of ranking variables (Brouwer, 2020, p. 18), with the aim of giving their own services a higher position in the rankings. During a regulatory dialogue, the Commission can use its power to access documents about the design of the algorithms, or get access to and explanations about the algorithms, to ensure that the digital gatekeeper does not circumvent the prohibition of discriminatory rankings in the DMA (Proposal DMA, Article 19 in conjunction with recital 29).

If the Commission finds in a market investigation that a digital gatekeeper has “systematically infringed” 8 the prohibition of discriminatory rankings (or other obligations in the DMA), the DMA gives the Commission the power to impose “any behavioural or structural remedies which are proportionate to the infringement committed and necessary to ensure compliance” with the DMA (Proposal DMA, Article 16(1)). If a number of strict conditions are met, 9 the remedy could even require the digital gatekeeper to divest a part of its business (Proposal DMA, Article 16(1) in conjunction with recital 64). The future will show whether this results in legal actions comparable to those that the FTC has recently taken against Facebook in the US. In any case, if adopted, the proposal for the DMA will mark a fundamental change in the regulation of the market power exercised by digital gatekeepers.

References

Autorita' Garante della Concorrenza e del Mercato (2019). A528 - Amazon: investigation launched on possible abuse of a dominant position in online marketplaces and logistic services. https://en.agcm.it/en/media/press-releases/2019/4/A528

Brouwer, D. (2020). A non-discrimination principle for rankings in app stores. Internet Policy Review, 9(4). https://doi.org/10.14763/2020.4.1539

Crémer, J., de Montjoye, Y., & Schweitzer, H. (2019). Competition policy for the digital era. Publications Office of the European Union. https://ec.europa.eu/competition/publications/reports/kd0419345enn.pdf

Graef, I. (2019). Differentiated Treatment in Platform-to-Business Relations: EU Competition Law and Economic Dependence. Yearbook of European Law, 38(1), 448-499. https://doi.org/10.1093/yel/yez008

Mattioli, D. (2019, September 16). Amazon Changed Search Algorithm in Ways That Boost Its Own Products. Wall Street Journal. https://www.wsj.com/articles/amazon-changed-search-algorithm-in-ways-that-boost-its-own-products-11568645345

Mickle, T. (2019, July 23). Apple Dominates App Store Search Results, Thwarting Competitors. Wall Street Journal. https://www.wsj.com/articles/apple-dominates-app-store-search-results-thwarting-competitors-11563897221

Proposal for a Regulation of the European Parliament and the Council on contestable and fair markets in the digital sector (Digital Markets Act) COM(2020) 842 final. https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52020PC0842&from=nl

Regulation (EU) 2015/2120 of the European Parliament and the Council of 25 November 2015 laying down measures concerning open internet access and amending Directive 2002/22/EC on universal service and users’ rights relating to electronic communications networks and services and Regulation (EU) No 531/2012 on roaming on public mobile communications networks within the Union [2015] OJ L310/1. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=uriserv:OJ.L_.2015.310.01.0001.01.ENG

Regulation (EU) 2019/1150 of the European Parliament and the Council of 20 June 2019 on promoting fairness and transparency for business users of online intermediation services [2019] OJ L186/57. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32019R1150

Footnotes

1. Article 2(2) of the proposed DMA lists a number of “core platform services”, including online intermediation services (e.g. app stores), online search engines, social networking, and operating systems. Pursuant to Article 17, the Commission can update this list based on a market investigation.

2. Based on Article 10 in conjunction with Article 17, the Commission can update the obligations and add new practices to those listed in the DMA where the Commission identifies in a market investigation that these practices are unfair or limit the contestability of core platform services.

3. Counting from the announcement of the opening of the investigation until the issuing of the prohibition decision.

4. The digital platform carries the burden of proof to show that it does not meet the three criteria of gatekeepers, as laid down in Article 3(1). When the platform presents “sufficiently substantiated evidence” that it does not fulfil these criteria, the Commission shall conduct a market investigation into the gatekeeper position based on various qualitative elements, such as network effects, economies of scale, and lock-in effects (Proposal DMA, Article 3(4) in conjunction with Article 3(6)).

5. This view seems to be supported by Explanatory Memorandum DMA, p. 3, where it is mentioned that the proposed DMA “builds on the existing P2B Regulation” and that the “definitions used in the present proposal are coherent with that Regulation”.

6. It should be noted, however, that recital 48 of the proposed DMA seems to indicate that the prohibition of discriminatory rankings is targeted at vertically integrated gatekeepers that have an economic incentive to favour their own downstream services over those of competitors.

7. Recital 49 of the proposed DMA indicates that circumventions of the prohibition of discriminatory rankings are also forbidden under the DMA.

8. According to Article 16(3), a gatekeeper shall be presumed to have engaged in a systematic non-compliance “where the Commission has issued at least three non-compliance or fining decisions (…) against a gatekeeper in relation to any of its core platform services within a period of five years prior to the adoption of the decision opening a market investigation in view of the possible adoption of a decision pursuant to this Article.”

9. Recital 64 of the proposed DMA mentions that such a remedy can only be imposed if 1) it is proportionate, 2) there is no less burdensome and equally effective behavioural remedy available, and 3) there is a substantial risk that the systematic non-compliance results from the very structure of the digital gatekeeper.


Regulation of news recommenders in the Digital Services Act: empowering David against the Very Large Online Goliath

Nowadays it is difficult to imagine the online world without recommendation algorithms. They filter and classify the growing abundance of information, prioritising content according to predefined ranking criteria. The result of that process is a recommendation to users on what content best matches their interests or personal profile. We encounter recommendation algorithms on a daily basis. They suggest products and services on e-commerce websites such as Amazon, help you find the love of your life on a dating platform, help you discover music and films on services such as Spotify or Netflix, give personalised recommendations on news websites, and match you with content on social media platforms such as YouTube and Facebook. As such, recommender algorithms are the engines behind the internet’s knowledge infrastructure.

Because of their importance for the way users find and access information online, recommender algorithms are a source of great power in the algorithmic society—a power that comes with responsibilities, and with systemic risks that worry policymakers and academics alike, particularly when these algorithms are in the hands of a few internet giants. Concerns range from the potential polarising effect and the ability to create filter bubbles of selective exposure to information (Möller et al., 2018; Pariser, 2011; Zuiderveen Borgesius et al., 2016) to the potential abuse of the enormous commercial and political gatekeeper power that control over recommender algorithms entails (Helberger, 2020; Napoli, 2019), or the potentially invasive or even manipulative effect that data-driven recommendations can have on users’ privacy, autonomy and informational self-determination (Eskens, 2020). As such, it is to be welcomed that the European Commission, in its proposal for a Digital Services Act (DSA), has devoted considerable attention to news recommenders and even included a specialised provision for them: Art. 29 of the DSA specifically addresses the recommender systems 1 used by what are referred to as ‘Very Large Online Platforms’ 2 (VLOPs) and focuses on the information and options users have to influence recommendations by online giants such as Facebook and Google. The following commentary offers a number of critical reflections on the goals behind Art. 29 DSA and the likelihood that it will realise those goals, as well as on what a more ambitious vision of news recommender regulation could look like. We conclude with a number of concrete suggestions for addressing more effectively not only the risks but also the opportunities that news recommenders present.

The intended goal of the provision

Art. 29 of the draft DSA should be read in the context of recital 62, which explains that the provision is a response to the significant impact that recommenders can have on the behaviour of individuals and on their ability to retrieve and interact with information. Finding ways of dealing not only with economic power but also with power over the way people form opinions is a challenge that has occupied academics and policymakers for the past few years (Moore & Tambini, 2018). What, then, is the solution the Commission suggests to tame the internet Goliaths of this world?

The solution the Commission suggests is to empower users—we call them David—with information. Art. 29 of the draft DSA requires Very Large Online Platforms to explain in their terms and conditions both what the main parameters of their recommender system are and what options users have to modify or influence those parameters, including the possibility, where available, to choose an option not based on profiling (Art. 29 (1) DSA), and, where platforms do allow users to choose, to create an “easily accessible functionality on their online interface allowing the recipient of the service to select and to modify at any time their preferred option for each of the recommender systems that determines the relative order of information presented to them” (Art. 29 (2) DSA).

As such, the planned Art. 29 is a rather fundamental step beyond the present focus of the General Data Protection Regulation (GDPR) on users’ ability to exercise control over their data. The DSA seeks to extend users’ control to the recommendation metrics that the algorithm is optimised for. One may wonder, as the European Data Protection Supervisor (EDPS) does, whether informing consumers by adding a couple of lines to the terms and conditions is the most effective way of telling David (who is notoriously unwilling to read terms and conditions) how to stand up to Goliath (EDPS, 2021). Instead, the EDPS’s suggestion to offer that information on a prominent part of the website does indeed seem to be the better way forward.

Information as a means of exercising power and influence is meaningless without choice, and from that perspective it is to be welcomed that the provision speaks of David’s ability to choose between different metrics—provided platforms are willing to offer that choice to consumers. What the draft Art. 29 of the DSA does not do is oblige platforms to offer users the possibility to choose between, modify or implement parameters, including the ability to choose an option not based on profiling. One may wonder what incentives VLOPs would have to offer users the ability to choose between metrics, particularly if one of these options is not to be based on profiling, the core element of the business model of many of these platforms. The lack of incentives is especially problematic because Art. 29 only applies to very large online platforms, which are also the least sensitive to mere calls from users for more choice. To conclude, David is on his own here. But should one of the platforms decide to provide him with a slingshot, he will be the first to know!

Real and fake choices

As enticing as the idea of democratising algorithmic recommendations is, it is questionable whether offering users some choice to modify or influence a recommender’s parameters is really likely to make the difference between digital hegemony and a society that takes back control. A recommender engine bases its decisions on an extremely large set of data, including not only everything that the user has done and liked in the past but also all the clicks made by the user’s peers and other users of the platform (collaborative filtering) and a whole range of metadata (content-based filtering) (Karimi, Jannach, & Jugovac, 2018). The few parameters that users will be able to influence are only a selection from an enormous pool of parameters that shape the ultimate recommendations, including many that are far beyond users’ control (others could be, for example, general popularity, paid content or date of publication; see Covington et al., 2016; DeVito, 2017). 3 In most state-of-the-art recommender systems, which are usually some form of machine learning model, it is not even entirely clear what the ‘main parameters’ in the model are and what their effects are. Rather than empowering users, the DSA is far more likely to create a fake sense of transparency. Or, to stick with our metaphor: empowering David through Art. 29 of the DSA is a bit like handing him the slingshot and the stone but forgetting to mention that the slingshot is unfortunately lacking the sling.
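To illustrate this imbalance, consider the following minimal sketch (the signal names and weights are hypothetical assumptions, not taken from any actual platform or from the DSA): a recommendation score blends many weighted signals, and a user-facing ‘non-profiling’ option of the kind Art. 29 envisages can realistically only switch off the user-specific ones.

```python
# Hypothetical recommendation score; signal names and weights are invented for illustration.

FEATURE_WEIGHTS = {
    "collaborative_similarity": 0.35,  # clicks and likes of the user's peers
    "content_match": 0.25,             # metadata match with the user's profile
    "general_popularity": 0.20,        # beyond the user's control
    "recency": 0.10,                   # beyond the user's control
    "paid_promotion": 0.10,            # beyond the user's control
}

PERSONAL_SIGNALS = {"collaborative_similarity", "content_match"}

def score(item_signals: dict, personalisation: bool = True) -> float:
    """Weighted sum of ranking signals; the user toggle only masks the profiling-based ones."""
    total = 0.0
    for name, weight in FEATURE_WEIGHTS.items():
        if not personalisation and name in PERSONAL_SIGNALS:
            continue  # the 'option not based on profiling' drops only these two signals
        total += weight * item_signals.get(name, 0.0)
    return total

# The same item, scored with and without the user's 'non-profiling' choice.
item = {"collaborative_similarity": 0.8, "content_match": 0.6,
        "general_popularity": 0.9, "recency": 0.5, "paid_promotion": 1.0}
print(score(item, personalisation=True), score(item, personalisation=False))
```

Even in this toy version, most of what determines the final order remains outside the user’s reach, which is the gap between the formal choice Art. 29 offers and actual control.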

Let’s talk about personalisation

The draft Art. 29 of the DSA emphasises the importance of options, but it is surprisingly vague regarding the question of what those options actually should be, or how they could align with public values and fundamental rights. The only guidance the provision offers is that those options should relate to a user’s ‘preferred option’ … that determines the relative order of information presented (Art. 29 (2)) and that one of those options could be not based on profiling (Art. 29 (1)). This formulation of Art. 29 is problematic for at least three reasons.

First, the provision fails to engage with the ways in which users can be enabled to realise public values on platforms. Simply offering users the possibility of choosing their preferred option for ranking content does not address the underlying polarisation and power issues in Very Large Online Platforms’ recommenders. At worst, the provision reinforces precisely the kind of filter bubbles that gave rise to concerns about recommenders in the first place, by optimising for personal relevance. This optimisation for personal relevance and satisfaction is precisely what a growing body of research into recommendation systems decries as overly simplistic (Bernstein et al., 2020; Nechushtai & Lewis, 2019). In a more optimistic scenario, users’ preferences do align with normative goals such as diversity in recommenders (Bodó et al., 2019; Harambam et al., 2019). Effectively involving users in platform governance, however, requires a nuanced insight into how their personal goals relate to public values, and into how control mechanisms can be designed in such a way that they will be used productively. The DSA avoids these issues by leaving it up to platforms to decide what options, if any, users are given.

Secondly, Art. 29 (1) of the DSA chimes with the growing criticism of data-driven profiling. The EDPS, in its opinion on the DSA, is even more explicit, suggesting an outright ban on certain forms of profiling (EDPS, 2021). But is profiling always bad? It is true that profiling can be a basis for a range of digital malpractices online, but profiling users and offering personalised services can also be a way to help users find their way through the digital abundance of online information and, as such, is considered by users as a potentially very useful tool. Studies have even demonstrated that, under certain conditions, users are equally or even more likely to appreciate algorithmic recommendations over recommendations from, for example, news editors (Thurman et al., 2018). Also, recommenders come in very different shapes and sizes, and on very different kinds of platforms. Profiling music or movie lovers and issuing personalised recommendations for all the songs they might also like could have very different implications for society than profiling news users.

The point is this: recommender systems can fulfil a crucial role in democratic society and can not only endanger but also contribute to the realisation of fundamental rights and public values, yet this nuance has been completely omitted from the DSA. Also symptomatic is the framing of Art. 26 of the DSA (risk assessment), which urges platforms to assess only the risks to, and potential negative effects on, fundamental rights and public values. Instead, the DSA should urge platforms to also consider how their recommendation algorithms could create opportunities for making platforms better, more democratic and fairer places.

This leads us to our third point: Art. 29 of the DSA is characterised by a complete lack of vision and regard for broader public and societal values. It does nothing to create incentives for platforms (and other services using recommendations) to invest in developing richer parameters that optimise for the realisation of public values and medium-term goals. Until now, the most frequently used Key Performance Indicators (KPIs) have assessed short-term user engagement, such as clicks or time spent on a page. These KPIs are, in turn, often inspired by technological and business demands rather than the societal and democratic mission of the media. News recommender systems, however, should also be oriented towards public values such as media diversity, inclusivity and fostering tolerance. Nor does Art. 29 do anything to encourage alternative providers of recommendation algorithms and logics (e.g. news media) to challenge the dominant recommendation algorithms on platforms and offer users a real choice. Why not include an obligation for platforms to enable users to choose between different recommendation algorithms, including some from third parties? That would be a true step towards curbing the central communication power of Very Large Online Goliaths.
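By way of illustration of what a public-value oriented KPI could look like in practice, the sketch below implements a simple greedy re-ranking that trades predicted engagement off against source diversity; the data, the 0.7/0.3 weighting and the function names are hypothetical assumptions on our part, not a method prescribed by the DSA or by the cited literature.

```python
# Hypothetical diversity-aware re-ranking; data and weights are invented for illustration.

def rerank(candidates, k=3, engagement_weight=0.7, diversity_weight=0.3):
    """Greedily pick items, rewarding sources not yet shown (a crude diversity KPI)."""
    selected, shown_sources = [], set()
    pool = list(candidates)
    while pool and len(selected) < k:
        def gain(item):
            novelty = 0.0 if item["source"] in shown_sources else 1.0
            return engagement_weight * item["predicted_clicks"] + diversity_weight * novelty
        best = max(pool, key=gain)
        selected.append(best)
        shown_sources.add(best["source"])
        pool.remove(best)
    return selected

articles = [
    {"title": "A", "source": "outlet1", "predicted_clicks": 0.9},
    {"title": "B", "source": "outlet1", "predicted_clicks": 0.8},
    {"title": "C", "source": "outlet2", "predicted_clicks": 0.6},
]
print([a["title"] for a in rerank(articles)])  # C moves above B despite fewer predicted clicks
```

Nothing in Art. 29 pushes platforms towards KPIs of this kind; it merely asks them to disclose whichever parameters they already optimise for.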

Conclusions

The DSA takes a fundamental step by moving beyond empowering users to control their data to enabling them to control the recommendation logic. Unfortunately, the proposed Art. 29 of the DSA lacks any vision of how empowering users to exercise control over the recommendation logic can contribute to realising public values or mitigate the potential risks of recommenders to fundamental rights and the public sphere. Recommender systems are essentially framed as threats that must have an off switch, and the provision follows the long-standing and long-criticised traditions of empowering users through terms and conditions, while relying on platforms to voluntarily offer users the means to translate information into concrete choices.

This is a missed opportunity. A more effective and more forward-looking version of Art. 29 of the DSA would create incentives for platforms to build recommenders that not only optimise for short-term clicks and immediate user satisfaction but also contribute in the longer term to the realisation of public values such as media diversity. A more effective version of Art. 29 would equip users with more than just some information hidden away in the terms of use and would oblige platforms to provide users with a real choice.

References

Bernstein, A., Vreese, C. D., Helberger, N., Schulz, W., & Zweig, K. A. (2020). Diversity, Fairness, and Data-Driven Personalization in (News) Recommender System (p. 8). Schloss Dagstuhl – Leibniz-Zentrum für Informatik. https://drops.dagstuhl.de/opus/volltexte/2020/11986/

Bodó, B., Helberger, N., Eskens, S., & Möller, J. (2019). Interested in Diversity. Digital Journalism, 7(2), 206–229. https://doi.org/10.1080/21670811.2018.1521292

Covington, P., Adams, J., & Sargin, E. (2016). Deep Neural Networks for YouTube Recommendations. In Proceedings of the 10th ACM Conference on Recommender Systems (RecSys ’16). ACM.

DeVito, M. A. (2017). From Editors to Algorithms. Digital Journalism, 5(6), 753–773. https://doi.org/10.1080/21670811.2016.1178592

EDPS. (2021). Opinion 1/2021 on the Proposal for a Digital Services Act. Brussels, 10 February 2021.

Eskens, S. (2020). The personal information sphere: An integral approach to privacy and related information and communication rights. Journal of the Association for Information Science and Technology, 71(9), 1116–1128. https://doi.org/10.1002/asi.24354

Harambam, J., Bountouridis, D., Makhortykh, M., & van Hoboken, J. (2019). Designing for the better by taking users into account: A qualitative evaluation of user control mechanisms in (news) recommender systems. Proceedings of the 13th ACM Conference on Recommender Systems - RecSys ’19, 69–77. https://doi.org/10.1145/3298689.3347014

Helberger, N. (2020). The Political Power of Platforms: How Current Attempts to Regulate Misinformation Amplify Opinion Power. Digital Journalism, 1–13. https://doi.org/10.1080/21670811.2020.1773888

Möller, J., Trilling, D., Helberger, N., & Es, B. van. (2018). Do not blame it on the algorithm: An empirical assessment of multiple recommender systems and their impact on content diversity. Information, Communication & Society, 21(7), 959–977. https://doi.org/10.1080/1369118X.2018.1444076

Moore, M., & Tambini, D. (Eds.). (2018). Digital Dominance: The Power of Google, Amazon, Facebook, and Apple. Oxford University Press.

Napoli, P. M. (2019). Social Media and the Public Interest: Media Regulation in the Disinformation Age. Columbia University Press.

Nechushtai, E., & Lewis, S. C. (2019). What kind of news gatekeepers do we want machines to be? Filter bubbles, fragmentation, and the normative dimensions of algorithmic recommendations. Computers in Human Behavior, 90, 298–307. https://doi.org/10.1016/j.chb.2018.07.043

Pariser, E. (2011). The Filter Bubble: What the Internet Is Hiding from You. Penguin Press.

Thurman, N., Moeller, J., Helberger, N., & Trilling, D. (2018). My friends, editors, algorithms, and I: Examining audience attitudes to news selection. Digital Journalism.

Zuiderveen Borgesius, F. J., Trilling, D., Moeller, J., Bodó, B., de Vreese, C. H., & Helberger, N. (2016). Should We Worry About Filter Bubbles? Internet Policy Review, 5(1). https://doi.org/10.14763/2016.1.401

Footnotes

1. According to Art. 2 (o) of the draft DSA, recommenders are defined as follows: “‘recommender system’ means a fully or partially automated system used by an online platform to suggest in its online interface specific information to recipients of the service, including as a result of a search initiated by the recipient or otherwise determining the relative order or prominence of information displayed”.

2. Art. 25 (1) of the draft DSA defines Very Large Online Platforms as “online platforms which provide their services to a number of average monthly active recipients of the service in the Union equal to or higher than 45 million.”

3. For more information from Facebook on how the news feed algorithm works, see: https://about.fb.com/news/2021/01/how-does-news-feed-predict-what-you-want-to-see/

Civil legal personality of artificial intelligence. Future or utopia?

1. Preliminary remarks

Technology associated with artificial intelligence is developing rapidly. As a consequence, artificial intelligence is being applied in many spheres of life and increasingly affects the functioning of society. The actions of artificial intelligence may cause damage (e.g. autonomous vehicles that cause traffic accidents). The occurrence of such damage in practice has prompted experts to consider whether artificial intelligence should be qualified as a legal entity bearing liability, a legal entity being understood as a subject of rights and obligations under the law.

Rules of civil law, especially those relating to liability for damage that results from somebody’s fault or from risk (strict liability), were formed before artificial intelligence appeared and mostly before its significant recent development. In Poland they are included in the Civil Code (hereinafter: PCC), which addresses issues associated with liability and which was adopted in 1964 and is still in force today, though with certain amendments. Therefore, no provisions that would directly refer to artificial intelligence and the legal consequences of its actions have been introduced into Polish civil law. This also applies to European law. Within the European Union, initiatives have been taken to consider the possibility of applying existing regulations of member states to artificial intelligence and to formulate conclusions regarding the need for legislative changes.

This paper presents an analysis of existing regulations in the context of their potential application to artificial intelligence, and formulates conclusions as to whether these regulations can be applied to artificial intelligence given its nature. An examination of the possibility of attributing the status of an entity before the law to artificial intelligence is adopted as the starting point for this analysis, because only when artificial intelligence is considered a legal entity can it bear independent liability for the damage it causes. Considering artificial intelligence as a legal entity requires an examination both of the technology currently in use (e.g. autonomous vehicles) and of technology that may appear in the future (e.g. fully independent robots that can deal with every aspect of life).

The analysis leads to a determination of whether artificial intelligence may (now or in the future) bear liability for the damage it causes. A negative answer to this question means that we need to establish which other person does or will bear liability for the actions of artificial intelligence. That person’s liability for damage has the nature of tort liability; however, it results not only from this person’s own actions, but also from the risk of bearing liability for another person or thing (e.g. an animal, or, in this case, artificial intelligence). A relevant solution must take into account primarily the compensatory function of tort liability, i.e. the possibility of redressing the harm caused to the aggrieved party.

This paper addresses the issue of tort liability, which is one of the two regimes of liability for damage, the other being contractual liability. Tort liability results from a prohibited act, caused mainly through the individual’s own fault or based on risk; in this case, the prohibited act is the event causing the damage (Art. 415–449 of PCC). The underlying event for contractual liability is the non-performance or improper performance of a contract executed between parties, which causes damage. Contractual liability remains beyond the scope of this study.

With reference to artificial intelligence, the risk created by its activity is of primary importance: any person, not only an individual bound to artificial intelligence by a contract, may suffer injury as a result of a tort involving artificial intelligence. Incidents involving tort-related damage caused by artificial intelligence already occur in practice, e.g. traffic accidents caused by autonomous vehicles.

The reflections included in this paper concern the issue of liability for actions of artificial intelligence against the background of Polish and European civil law scholarship, especially the Principles of European Tort Law. The conclusions formulated in the paper relating to legislative changes are thus applicable to all national legal orders informed by European principles of civil law (European Group on Tort Law, 2005). The starting point of these reflections is to establish the possibility of attributing the status of a legal entity to artificial intelligence under the provisions of Polish civil law, while considering ethical issues, the degree to which artificial intelligence would be subordinate to humans, and the types of risks associated with its actions (Bryson et al., 2017, p. 273ff; Teubner, 2018, p. 106ff).

Such an approach to the issues of artificial intelligence leads to the formulation of de lege ferenda conclusions. They refer both to Polish law and to other national legal orders that are affected by the European principles of civil law (European Group on Tort Law, 2005).

The research on which this paper is based was conducted using various methods, in particular the interpretation of applicable laws, the analytical method and, in an auxiliary role, the comparative method. The interpretation and analysis concern national and European legislation in force. The comparative method was used to analyse Polish law vis-à-vis foreign law.

2. Artificial intelligence

Artificial intelligence is defined inconsistently. Sometimes it is perceived broadly as a field of science primarily related to computer science and robotics. In a narrower sense, artificial intelligence is the ability of an IT system to correctly interpret external data, to learn from it, and to use the experience gained in this way to accomplish specific tasks. This ability includes the capacity to adapt flexibly to external conditions (Wang, 2008, p. 362; Kaplan & Haenlein, 2019, pp. 15–25; Kok et al., 2002, p. 1095ff).

An analysis of legal solutions relating to the consequences of actions of artificial intelligence and the possibility of attributing liability for damage to it requires recognising that artificial intelligence, for the purposes of this study, means the ability of an IT system to interpret data correctly, to learn from this data and to use the experience acquired in this manner to carry out specific tasks. From the perspective of the analysis performed in this article, whether this IT system is merely analytical, human-inspired or humanoid is irrelevant, as is the device or object in which it is placed and the purpose it serves. Therefore, issues concerning liability for damage caused by artificial intelligence refer equally to artificial intelligence located in computers, cars or robots.

Artificial intelligence is no longer just a vision of the future—we are surrounded by it. The most common devices equipped with artificial intelligence include mobile phones, computers and cars, which can even drive autonomously. Artificial intelligence is used extensively to create so-called bots, i.e. programmes whose purpose is to replace people (Grimme et al., 2017, p. 279; Klopfenstein et al., 2017, pp. 555-565). Bots perform their tasks primarily in the services market, in the absence of human beings. One of the bots tested by producers of IT tools for internet communication was a Microsoft product called ‘Tay’, which interacted with the public via Twitter. The bot created its entries based on interactions with the users of this platform. However, within a few hours of operation it began to publish offensive entries, so the project was closed (Neff & Nagy, 2016, p. 4915).

Artificial intelligence is also placed in robots that have a physical or even humanoid shape. Sophia, created by Hanson Robotics’ scientists from Hong Kong, is a human-like robot. She is endowed with artificial intelligence, thanks to which she is able to learn and adapt to human behaviour. She has given many interviews around the world and has also obtained citizenship of Saudi Arabia (Retto, 2017, p. 3).

Natural language processing models that are able to write texts the way a human would, such as GPT-3, are also significant from the point of view of artificial intelligence. This algorithm was created by the research laboratory OpenAI. The text samples it formulates are difficult to distinguish from texts prepared by humans (Zagórna, 2020).

The listed examples of the application of artificial intelligence are evidence of its rapid development. Therefore, a question arises about how artificial intelligence will function in the future. This concerns primarily the legal aspects of its actions, ethical issues, the degree to which artificial intelligence is subordinate to humans, and the threats associated with its functioning.

3. Civil law entities

A legal entity should be understood as an entity participating in legal relations that has rights and obligations under the given legal system with respect to other entities and tangible or intangible objects (Wolter et al., 2001, p. 157). Article 1 of the Polish Civil Code provides that this code regulates the civil law relations between natural and legal persons. Every human being is a natural person. Such a person's legal position is determined by the following attributes: legal capacity, capacity to perform acts in law, surname and first name, place of residence, marital status, personal status and personal rights (Art. 8 – 32 of PCC). Legal persons include the State Treasury and organisational units that are granted legal personality by provisions of law. The law does not indicate any general characteristics of legal persons on the basis of which an organisational unit could be classified in this category. Legal personality is acquired from the moment of entry into the relevant register (Art. 37 of PCC).

Civil law entities also include organisational units with legal capacity, despite the fact that the Act does not grant them legal personality (Art. 33¹ of PCC). These entities are not listed in Art. 1 of PCC. However, under Art. 33¹ of PCC there is no doubt that they are civil law entities. Nevertheless, the inaccuracy in this respect led to an organisational unit with legal capacity being recognised as a legal entity in the draft of the new Civil Code (Art. 31 of the draft) (Machnikowski, 2017, p. 47). Under the current law, legal persons (and organisational units with legal capacity) have the following attributes: legal capacity, capacity to perform acts in law, name, seat and personal rights (Art. 33–43 of PCC).

These regulations show that the category of legal entities (i.e. entities that have rights and obligations under the law) is broad and includes two types of entities—natural persons and legal persons. A natural person is a legal entity, but he or she is not a legal person. Similarly, a legal person is a legal entity, but is not a natural person.

The most important attributes of natural persons, legal persons and organisational units to which, pursuant to Art. 33¹ of PCC, the provisions on legal persons apply are legal capacity and the capacity to perform acts in law. Legal capacity is the possibility of being a subject of rights and obligations in the field of civil law, while the capacity to perform acts in law is the possibility of acquiring rights and incurring obligations in the field of civil law through one’s own actions (Ziemianin & Kuniewicz, 2007, p. 75). The procedural consequence of having legal capacity and the capacity to perform acts in law is that the Code of Civil Procedure grants natural persons, legal persons and organisational units with legal capacity the capacity to be a party to court proceedings and to perform acts in court proceedings (Art. 64 and 65 of the Polish Code of Civil Procedure). Therefore, these entities may be parties to civil proceedings and carry out procedural acts.

An analysis of the provisions of the Civil Code leads to the conclusion that legal personality is equivalent to having legal capacity. Therefore, deciding whether artificial intelligence can be a legal entity requires a reference to its ability to obtain legal personality.

4. Artificial intelligence as a legal entity

I. Natural person

Civil law scholars and commentators have often assumed that people are legal entities by their very nature (Pilich, 2018, art. 8; Targosz, 2004, p. 1). In comparison with other living organisms, they are distinguished by biological properties, social skills and individual character. Therefore, a person is someone, not something. Thus, it is assumed that human legal capacity is an inherent feature, as is dignity. However, this view needs to be supplemented. There have been times in history when certain people were denied legal capacity, e.g. slaves under Roman law (see Shumway, 1901) 1. Despite this, each person has legal capacity because they are human. However, this capacity also derives from the law, because it has been confirmed by legal regulations. These provisions confer legal capacity upon newborn babies the moment they enter the world. For example, Article 8 of the Polish Civil Code stipulates that every human being has legal capacity from the moment of birth. Similarly, § 1 of the German Civil Code (BGB) provides that the legal capacity of a human being begins at birth.

Therefore, one may conclude that the provisions of the law reflect the basic ethical and moral principles of society. The legal order is built on the premise of a certain system of values. For modern democratic countries, this system should be based on a culturally neutral understanding of humanity (Chauvin, 2020). Human dignity results from this humanity, while in the sphere of civil law, legal capacity is understood as the possibility of acquiring rights and obligations resulting from such dignity. The legal capacity of a human, and in consequence his or her status as a legal entity, is conferred by the law. Therefore, it derives from legal regulations, although it confirms the ideologically neutral human dignity.

Age or incapacitation does not affect a human’s legal capacity. A small child, for example, is an entity before the law. It will not, naturally, be able to sign a contract independently, but this does not affect its legal capacity. The possibility of executing a contract independently results from the attribute of the capacity to perform acts in law. This attribute is secondary in relation to the status of a legal entity itself.

It is different in the case of other entities upon whom provisions of civil law currently confer the status of a legal entity, e.g. commercial companies. Their status as legal entities results from provisions of the law, and therefore has a solely norm-based nature. Such status will be discussed later.

Artificial intelligence is an element of an IT system that is created by humans to perform specific tasks. Therefore, it cannot be said that it has inherent biological properties or social skills (as is the case with the legal personhood of natural persons). Even if these features can be attributed to it, they are programmed by its creator. Of course, artificial intelligence may then be subject to certain social processes, but this occurs as a consequence of human activities. By the very nature of artificial intelligence, it is not possible to speak of its birth either. Currently, it is also difficult to imagine robots creating social structures. Any behaviours aimed at this goal can therefore be, as it seems, only a consequence of human programming. Artificial intelligence can imitate human beings perfectly. However, it cannot be assumed that it is human. The legal capacity of artificial intelligence is not natural. Hence it can only be normative, i.e. derived from and established by the provisions of the law. Therefore, artificial intelligence is certainly not a natural person 2.

II. Legal person

Given that any legal capacity of artificial intelligence would result from provisions of the law, one should perhaps consider the possibility of adapting artificial intelligence to the requirements of legal persons.

In connection with the development of the concept of legal persons over the years, legal scholars and commentators have formulated theories regarding the essence of a legal person. One concept holds that a legal person, like a natural person, is a real entity; the opposite concept holds that a legal person is only a construct of the law (Wolter et al., 2001, p. 202; Radwański & Olejniczak, 2013, pp. 180-182). Leaving the analysis of these theories outside the scope of this article, it should be emphasised that a legal person is a certain organisation whose activities depend, directly or indirectly, on the intent of natural persons. From a legal point of view, the action of a natural person is the action of a legal person only if the natural person acts as a body of the legal person in the manner provided for in the Act and in the statute based on it (Art. 38 of PCC). Under civil law, it is possible for a legal person to be liable for damages. For example, a commercial company that deals with renovations may cause damage through improper performance of a renovation. Nevertheless, this liability results from statutory regulations, and in some cases the responsibility lies with natural persons acting for the legal person. It is the people sitting on its bodies who act on behalf of the legal person. For example, members of the management board of a limited liability company are liable for the obligations of the company if enforcement against the company’s assets is ineffective, i.e. the company’s assets are not sufficient to cover the debt.

The concept of granting legal personality to artificial intelligence is widely discussed in legal and philosophical literature. Within the European Union, initiatives are being taken to consider the possibility of applying the current legal regulations of the member states to artificial intelligence and to formulate conclusions as to the need for legislative changes (inter alia European Commission, 2019). These initiatives expressed the view that granting legal personality to artificial intelligence is unnecessary, since the responsibility for its actions should be borne by existing persons (European Commission, 2019, p. 4). The Polish position on the legal personality of artificial intelligence, expressed in the key points of the strategy for artificial intelligence in Poland (Ministerstwo Cyfryzacji, 2018), is sceptical. According to these assumptions, granting legal personality to artificial intelligence does not seem beneficial due to the lack of a concept regarding the principles of liability. We do not know how artificial intelligence, as an independent legal entity, should bear liability: are people supposed to bear this liability? Is it supposed to have its own funds to pay compensation? Would it be necessary to maintain a register of such artificial intelligence? Therefore, according to the authors of these assumptions, the legal personality of artificial intelligence should be opposed. Liability for AI actions should instead be attributed to its creators, operators or possible end users 3.

This state of affairs is primarily the result of the impossibility of predicting how artificial intelligence will function in the future. However, it seems that the speed of development of artificial intelligence technology necessitates a careful analysis of the possibility of granting legal capacity to artificial intelligence. Such an attempt can be made with respect to those types of artificial intelligence that currently function or appear able to function in the near future. This is because it may turn out that, in the future, artificial intelligence will be completely independent of humans, and thus no one alive today will naturally be liable for its actions.

Common features of artificial intelligence and legal persons include the fact that legal capacity can only be granted to them by law; in contrast with natural persons, they do not obtain it in a natural way. However, there are clear differences between artificial intelligence and a legal person. Artificial intelligence cannot be considered an organisational unit whose acts can only be performed through its bodies. The issue of artificial intelligence boils down to determining the relationship between artificial intelligence acting alone, without the help of a natural person, and the human creator or owner of the robot. Therefore, it should be recognised that the concept of legal personality cannot be applied directly to artificial intelligence, because AI is not an organisational unit acting through bodies stipulated by statute.

III. Electronic person

A view was formulated in a study commissioned by the European Parliament according to which artificial intelligence may be another, new legal entity: an electronic person (Nevejans, 2016, p. 14). Electronic personality would to some extent be modelled on legal personality, in particular in that legal capacity would derive from provisions of the law. The actions of such a person would require the introduction of appropriate, detailed legal regulations. Given this nature of its personality, an electronic person could acquire legal capacity upon its entry in the appropriate register. The actions of an electronic person, although undertaken independently, would, to a certain extent, burden natural persons, e.g. persons listed in this register (as is the case with the liability of members of a company’s management board in relation to the activity of that company). These people would primarily include programmers, creators and owners. The scope of their liability for an electronic person would be determined on the basis of legal regulations, but also on the basis of the manual (rules) for operating the robot. With the development of artificial intelligence, it cannot be excluded that the concept of ownership would change. Representation similar to the statutory representation of minors or commercial law companies could then apply to electronic persons.

Referring to the ethical aspects of its functioning in society, it should be assumed that the personality of artificial intelligence is not justified by its very essence. The basis for granting legal personality to an electronic person in the future may be the advanced level of technological development of artificial intelligence, which could make it impossible to predict the way it works. Artificial intelligence whose actions are predictable can be framed in detailed provisions on liability (e.g. liability for a product or an animal). The more autonomously artificial intelligence acts, the broader the concept of liability that should be applied to it. This concept could indeed result from legal personality.

The concept of an electronic person has, however, been received sceptically by European experts (Open letter, 2018). Therefore, it seems that the status of artificial intelligence as a legal entity gains importance only if it may lead to an easier assignment of liability for the actions of artificial intelligence. The aim is to avoid a situation in which, in the future, no one will be responsible for its actions. The concept of this status should, however, be similar to that of legal persons, for whose actions humans are liable, and not to the concept of natural persons, who bear liability themselves. As has been mentioned, such status seems admissible only where it would make it possible to specify the principles of liability for the actions of artificial intelligence.

As a side note, determining who may bear liability for the future actions of artificial intelligence depends on how artificial intelligence will function tomorrow. Therefore, it is important to emphasise the ethics of creating artificial intelligence.

5. Tort liability

Under the Polish Civil Code, tort liability covers liability for culpable human behaviour (tort in the strict sense) and for other types of tort, e.g. torts caused by things or animals (Czachórski, 1994, p. 144). This means that the legislator links the obligation to repair the damage with a person’s actions or omissions, or with another phenomenon if it was the cause of the damage (Śmieja, 2009, pp. 338-340). On this basis, tort liability covers liability for one’s own deeds (Art. 415 of PCC) and liability for other people’s deeds, including liability for negligent supervision (Art. 427 of PCC), liability for fault in choosing the performer of a task (Art. 429 of PCC), liability for a subordinate (Art. 430 of PCC), as well as liability for damage caused by animals (Art. 431 of PCC) and liability for damage caused by a dangerous product (Art. 449¹ et seq. of PCC 4). These types of tort liability are based on the principles of fault, risk and equity.

Civil law scholars and commentators distinguish intentional fault and unintentional fault (Ohanowicz & Górski, 1970, p. 126). Intentional fault can be attributed to the perpetrator when they acted with the intention of causing unlawful effects (dolus directus), or when they acted without such an intention, but were aware that unlawful effects could arise and agreed to their creation (dolus eventualis). Unintentional fault can be attributed to the perpetrator when they act carelessly, that is, they do not exercise due diligence, which causes unlawful effects (Longchamps de Berier, 1939, p. 232). The perpetrator bears responsibility on a fault basis, including for their own deeds. For example, when throwing a stone, the perpetrator breaks a neighbour’s window.

The principle of risk shapes tort liability (strict liability) where a debtor bears liability for accidental damage, that is, damage caused not through their own fault (Nowakowski, 1979, p. 108). Liability for a subordinate, as long as the damage was the subordinate’s fault, and liability for a dangerous product are examples of liability based on the principle of risk. The principle of equity is associated with bearing liability due to strong ethical motives set out by the principles of community life, for example in the case of liability for animals (Szpunar, 1985, p. 43). The principles of community life are rules of fair, reliable and loyal conduct, principles of equity and ethics.

Depending on the type of tort liability, different conditions must be met. Under the Polish Civil Code, the conditions for liability for one’s own deeds include damage (i.e. damage to goods or interests protected by law, arising against the will of the injured party (Radwański, 1997, p. 83)), an act violating the law or the principles of community life, and a causal relationship between the damage and this act. The conditions for liability for other people’s deeds include damage and causation, while the remaining conditions depend on the type of tort liability.

In the case of liability for negligent supervision, the premises include a violation of the law or of the rules of community life by the person who caused the damage, a lack of supervision or improper supervision over that person, a causal relationship between the damage and the lack of or improper supervision, and the fault of the person obliged to supervise. The basis for liability for fault in choosing the performer of a task is, apart from the damage, entrusting activities to be performed by another person, a violation of the law or the rules of community life by the performer of the task, a causal relationship between the damage and the unlawful action of the person who performed the entrusted activity, an incorrect choice of the performer of the task and fault in choosing the performer. Liability for a subordinate is based on entrusting the subordinate with the performance of activities on behalf of the supervisor, the subordinate causing damage to a third party, a violation of the law or the rules of community life by the subordinate, a causal relationship between the damage and that violation, and the fault of the subordinate. Liability for animals is associated with the occurrence of damage as a result of an animal’s behaviour, where there is a causal relationship between the damage and this behaviour. Conversely, the conditions for liability for a dangerous product include damage, the placing of the product on the market and causation (see Ziemianin & Kitłowski, 2013, p. 195ff).

European rules on tort liability, developed in connection with the attempt to create a European Civil Code, were shaped similarly. As part of the harmonisation of European private law, the following projects were created: the Draft Common Frame of Reference (von Bar et al., 2009) and, in relation to tort liability, the Principles of European Tort Law, hereinafter PETL (European Group on Tort Law, 2005). European provisions of civil law stipulate an obligation to compensate for damage in three cases: damage caused by one’s own fault, damage caused by dangerous activities on the basis of risk (strict liability) and damage caused by others (liability for others) (Art. 1:101 of PETL). In accordance with Art. 3:201 of these principles, the scope of liability depends on the following circumstances: the foreseeability of the damage to a reasonable person at the time of the activity, taking into account in particular the closeness in time or space between the damaging activity and its consequence, or the magnitude of the damage in relation to the normal consequences of such an activity; the nature and the value of the protected interest; the basis of liability; the extent of the ordinary risks of life; and the protective purpose of the rule that has been violated.

In accordance with the Principles of European Tort Law, liability on the basis of fault consists of intentional or negligent violation of the required standard of conduct (Art. 4:101 of PETL). The required standard of conduct is that of a reasonable person who takes into account the nature and value of the protected interest involved, the dangerousness of the activity, and the expertise to be expected of a person carrying it on (Art. 4:102 of PETL).

Liability on the basis of risk includes mainly abnormally dangerous activities. Article 5:101 of PETL provides that a person who carries on an abnormally dangerous activity is strictly liable for the damage characteristic to the risk presented by the activity and resulting from it. Pursuant to this provision, an activity is abnormally dangerous if it creates a foreseeable and highly significant risk of damage even when all due care is exercised in its management and this risk is not a matter of common usage.

Draft European provisions in the scope of private law also provide for liability for others. Article 6:101 of PETL provides that a person in charge of another who is a minor or subject to mental disability is liable for damage caused by the other unless the person in charge shows that they maintained the required standard of conduct in supervision. Conversely, under Art. 6:102 of PETL, a person is liable for damage caused by their auxiliaries acting within the scope of their functions, provided that they violated the required standard of conduct; this provision does not, however, apply to independent contractors.

The principles mentioned above are detailed in the Draft Common Frame of Reference. It also introduces rules on liability for damage caused by others (Art. VI.–3:104, 3:201 of DCFR), by animals (Art. VI.–3:203 of DCFR) and by products (Art. VI.–3:204 of DCFR). In terms of liability rules, these provisions are consistent with the Polish provisions.

Under the above-mentioned provisions of Polish and European civil law, liability is borne by a civil law entity. Therefore, artificial intelligence can only be held liable if it is granted the status of an entity before the law. Until that happens, liability for artificial intelligence will be borne by natural or legal persons, e.g. as in the case of animals, minors or mentally disabled persons 5.

6. Tort liability for damage caused by artificial intelligence

Under the present legal framework, and with the current level of technological development, there are no grounds for granting legal personality to artificial intelligence. The lack of legal personality results in the inability to bear responsibility for one’s own deeds. This means that if currently existing artificial intelligence causes damage, another person should be responsible for it. The concept of bearing responsibility for other people’s deeds, as already indicated above, is not foreign to civil law. It is reflected in both the Polish Civil Code and the European rules on tort liability. In this regard, it is necessary to analyse which rules of tort liability can be applied in the event of damage caused by artificial intelligence, and who should be liable for such damage. The starting point of this analysis is to consider whether the current provisions of the Polish Civil Code and the European principles of tort law correspond to the specifics of damage caused by artificial intelligence.

Under the Polish Civil Code, liability for other people’s deeds is related to the need for one person to redress damage caused by another person. In consequence, a legal entity (a natural person, legal person or organisational unit with legal capacity) will be responsible for someone else’s act if it can be accused of negligent supervision over another person who cannot be held liable (Art. 427 of PCC), if it entrusts the performance of a task to another person (Art. 429 of PCC), or if it is the superior of the person to whom it entrusts the performance of the task (Art. 430 of PCC). In each of these cases, the person causing the damage is a legal entity. These provisions, in their current wording, cannot therefore be applied to artificial intelligence (see Bosek, 2019, p. 13).

Article 427 of PCC provides that anyone who, under the law or contract, is obliged to supervise a person who cannot be held liable due to age or mental or physical condition, is obliged to redress the damage caused by that person, unless the obligation of supervision has been fulfilled or the damage would also arise even with supervision being exercised with due care. This provision also applies to persons who, without a legal or contractual obligation, take permanent care of a person who cannot be held liable due to age, or mental or physical condition. Pursuant to this article, it is the supervisor who is liable for negligent supervision in the event of damage caused by a minor or mentally disabled persons, because these persons—in accordance with Art. 425 and 426 of PCC—cannot be held liable. Art. 427 of PCC in its current wording cannot be applied to damage caused by artificial intelligence, as it is neither mentally disabled nor a minor. It is also—as indicated above—not a legal entity. It should be noted, however, that the rule of liability normalised in this provision could apply to liability for artificial intelligence. Minors and mentally disabled persons are individuals whose actions cannot be fully predicted, similarly to actions of artificial intelligence with a higher degree of independence.

Articles 429 and 430 of PCC relate to the issue of liability in the event of entrusting the performance of a task to another person. The first of these articles stipulates that a person who entrusts the performance of tasks to another person, is responsible for damage caused by the perpetrator in the performance of the entrusted task, unless the person entrusting the performance of the task is not at fault in the choice or that the performance of the task was entrusted to a person, enterprise or factory which perform such acts within the scope of their professional activity. According to the second of the above-mentioned articles, anyone who, on their own account, entrusts the performance of a task to a person who, while performing the task, is under this person’s supervision and is obliged to follow the instructions of that person is liable for damage caused by that person when performing the entrusted tasks.

Under Art. 429 of PCC a legal entity may entrust the performance of a task to any person, but is liable for the actions of that person if they are at fault when choosing the performer of the task. This provision does not apply to artificial intelligence, because it relates only to damage caused by the legal entity. However, it seems that responsibility on the basis of fault in the selection of artificial intelligence, whose task would be to perform a specific action, could rest on the person who made this choice, contrary to the manufacturer’s recommendations regarding the scope of skills of artificial intelligence. Acting against the given creator’s recommendations would then be a basis for liability for artificial intelligence.

In contrast, Art. 430 of PCC includes within its scope situations in which damage is caused by a subordinate who follows the instructions of a supervisor during the performance of tasks. This responsibility is based on the principle of risk, thus has the nature of strict liability. Under this provision, a subordinate is a natural person. Therefore, this does not apply to artificial intelligence. However, it can be assumed that the development of autonomous devices containing artificial intelligence will require consideration of similar principles of liability. However, this may only apply to situations in which artificial intelligence is so advanced that it responds to the user’s instructions.

The status of technologically advanced artificial intelligence seems similar to that of an animal. In both cases, it is an entity over whose behaviour a natural person does not have full control. The liability for an animal may be based on Art. 415 or Art. 431 of PCC. Article 415 of PCC, which refers to liability for one’s own deeds, concerns the use of an animal as a tool. The basis of liability in this situation is fault. In contrast, if an animal causes damage by itself, the liability for its behaviour results from Art. 431 of PCC.

Pursuant to § 1 of this article, anyone who keeps or uses an animal is obliged to redress the damage caused by it, regardless of whether it was under supervision, had strayed or had escaped, unless neither the keeper nor any person for whom the keeper is responsible is at fault. However, pursuant to § 2, even if the person who keeps or uses the animal is not liable under the preceding paragraph, the aggrieved party may demand full or partial compensation if it follows from the circumstances, and especially from a comparison of the financial condition of the aggrieved party and that of the keeper, that the rules of community life so require (see Art. 431 § 2 PCC). As a rule, liability for an animal is therefore based on the principle of fault but, in a subsidiary role, also on the principle of equity.

The above provisions do not currently apply to artificial intelligence. They concern animals and cannot be interpreted in a way that would extend their scope, because the rules of legal interpretation adopted under Polish law do not allow such a broad reading. Nevertheless, the widespread use of artificial intelligence in the future may require similar rules of liability.

Therefore, none of the provisions listed above can be applied to liability for damage caused by artificial intelligence. The situation is different in the case of the legal regulations regarding liability for damage caused by a dangerous product (Art. 449¹ of PCC). In accordance with Art. 449¹ § 2, a product means a movable thing, even if it is attached to another thing; animals and electricity are also considered products. Under § 3 of this article, a product is dangerous if it does not guarantee the safety that could be expected based on its normal use. The circumstances at the time the product is placed on the market, and especially the manner in which the product is presented on the market and the information provided to the consumer regarding the product’s properties, determine whether the product is dangerous. A product cannot be considered unsafe only because a similar, improved product is placed on the market at a later time.

These regulations will only apply if artificial intelligence is classified as a product (Barton, 2019). However, artificial intelligence understood as a computer programme or application is not a thing, because under the provisions of the Polish civil code a thing must be tangible and separated from nature (e.g. a literary work is not a thing) (see Dubis, 2016, p. 920). Therefore, only a device equipped with artificial intelligence can be a thing. The provisions of Art. 449¹ et seq. of PCC apply to damage caused by such a device; they will apply, for example, to damage caused by autonomous vehicles. The legislator should, however, consider extending the definition of a product.

Responsibility for a dangerous product lies with the manufacturer who placed the product on the market, provided that there is a causal relationship between the damage and the placing of the product on the market. The mere placing on the market of a product that caused damage may thus be a basis for being held liable. This responsibility is based on the principle of risk, so it is strict liability. The manufacturer may, however, be released from liability if it has not placed the product on the market or if the product has been placed on the market outside the scope of its business activity (Art. 449³ § 1 of PCC), as well as when the dangerous properties of the product came to light only after it had been placed on the market, unless they resulted from a cause inherent in the product. The manufacturer is also not liable if the dangerous properties of the product could not have been foreseen given the state of science and technology at the time of the placement of the product on the market, or if these properties resulted from the application of legal provisions (Art. 449³ § 2 of PCC). In the case of products with built-in artificial intelligence, e.g. electronic cars, the exclusion of the manufacturer’s liability therefore requires proof that the dangerous properties could not have been foreseen at the production stage and that the product could not have been designed differently.

Legal regulations on liability for damage caused by a dangerous product are currently the only ones applicable to devices equipped with artificial intelligence, although liability on this basis is subject to restrictions. For example, if an autonomous car causes an accident due to a system error resulting from, e.g., a faulty design, liability for the damage can be attributed to the manufacturer. If the accident occurred due to a change made to the product by its user (the car owner), e.g. when the owner modifies the software settings on their own, then the owner should be liable for the damage.

As pointed out above, the manufacturer or the owner may be liable for an accident caused by an autonomous car, but such liability must be subject to limits, just as the liability of a natural person driving a car is limited. In each road accident, the cause of the accident and its circumstances will have to be assessed.

The possibility of attributing responsibility for damage caused by artificial intelligence is most questionable in situations in which it operates completely independently of the creator, operator or customer (see Vladeck, 2014, p. 122ff). The controversy in this regard relates in particular to the lack of a causal relationship between the damage and the actions of these people (Barton, 2019, p. 1ff). However, it should be emphasised that the law of obligations currently provides for tort liability also in cases where there is no causal relationship between the damage and the actions of the person held responsible. An example is liability for an animal. Causation in this case is established by regulation, i.e. it results from a provision of the law (Machnikowski, 2015, p. 394). If there were no such provision, there would likewise be no natural basis for attributing liability to a human for the behaviour of another living being.

The above de lege ferenda conclusions regarding changes in civil law correspond to the demands expressed by members of the European expert group on artificial intelligence (European Commission, 2019, p. 3ff). They point out that both the user and the manufacturer may be responsible for artificial intelligence: the user is obliged to use the technology properly, while the role of the manufacturer is to introduce “good” artificial intelligence to the market. According to the proposed concept, in the future artificial intelligence could be treated as a helper, for whom the person entrusting it with a task is responsible. This concept fits within the current civil law rules on tort liability for the actions of others. To supplement this view, it should be emphasised that legal provisions should be adapted to the relevant technological solutions so as to protect society and comply with human rights. So as not to act as a brake on technological development, legal provisions cannot run too far ahead of the technology itself; they should also take into account technological developments that are beneficial to society (see Rommetveit et al., 2020, p. 47ff).

It cannot be ruled out that conferring the status of a legal subject on artificial intelligence will prove beneficial for society in the future. The analysis of the Polish and European regulations leads to the conclusion that artificial intelligence which acquired legal personality would bear responsibility itself, just like a natural person, with fault being the primary basis for attributing this responsibility to it. Such a concept of liability would not, however, remove all doubts about compensation for damage caused by artificial intelligence. The question arises whether artificial intelligence which received a legal personality regulated by law would be able to redress the damage itself, e.g. whether it would have adequate financial resources. The current state of technology does not allow this question to be answered. The assumption of a legally-regulated rather than inherent personality leads to the conclusion that the responsibility of artificial intelligence would be more similar to the responsibility of a legal person. Therefore, every electronic legal entity of this kind would have to have certain funds, collected e.g. from compulsory insurance paid by manufacturers and users of artificial intelligence (similar to a company, which possesses funds contributed initially by shareholders). Having such independent funds is necessary for remedying damage independently.

7. Final remarks

The conducted analysis of civil law leads to the conclusion that there are currently no grounds to grant legal personality to artificial intelligence. Therefore, liability for damages caused by artificial intelligence must be borne by natural persons, legal persons or organisational units with legal capacity. The principles of this responsibility should depend on the type of artificial intelligence and its technological advancement. Civil law must be changed in this respect and adapted to the requirements that artificial intelligence sets for the law, because none of the above-mentioned provisions can refer directly and comprehensively to artificial intelligence, which is already operating and may cause damage (as in the case of autonomous cars). These changes should take place gradually, but some de lege ferenda conclusions should be taken into account as soon as possible. Currently, compensation for damage caused by artificial intelligence can only be made on the basis of regulations on dangerous products. However, they do not apply to all types of artificial intelligence—even those known today, if they cannot be classified as a product.

In the future, the compensatory function of tort liability may justify granting legal personality to artificial intelligence. This should involve the simplification of the rules of liability for devices whose operation will be completely unpredictable. It may transpire that only such a solution will make it possible to redress the damage suffered by an injured party.

There is no doubt that the creation of a coherent and comprehensive concept of legal personality of artificial intelligence will require the cooperation of experts from various fields, primarily lawyers, IT specialists and philosophers. The purpose of their work should be to shape artificial intelligence in such a way that it works for the benefit of humankind within the established legal regulations. It cannot be ruled out that with the development of technology, the legislative solutions previously proposed will prove to be insufficient and ineffective. Nevertheless, the changes in law should be gradual, taking into account the specificity of artificial intelligence. Some changes, concerning inter alia extension of the definition of a product, should be adopted without delay.

References

Act of 17 November 1964 – the Code of Civil Procedure (Dz. U. (Journal of Laws) of 2019, item 1460 as amended).

Act of 23 April 1964 – the Civil Code (Dz. U. (Journal of Laws) of 2019, item 1145, as amended).

Barton, J. T. (2019). Introduction to AI and IoT issues in product liability litigation. Thomson Reuters Westlaw.

Bosek, L. (2019). Perspektywy rozwoju odpowiedzialności cywilnej za inteligentne roboty. Forum Prawnicze, 2(52). https://doi.org/10.32082/fp.v2i52.200

Bryson, J., Diamantis, M., & Grant, D. (2017). Of, for, and by the people: The legal lacuna of synthetic persons. Artificial Intelligence and Law, 25(3), 273–291. https://doi.org/10.1007/s10506-017-9214-9

Chauvin, T. (2020). Godność człowieka jako źródło podmiotowości prawnej i granica władz. Edukacja prawna, 1(175), 5–11.

Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products (OJ L 210 of 7 August 1985), amended with the Directive 1999/34/EC (OJ L 141 of 4 June 1999).

Ministerstwo Cyfryzacji. (2018). Założenia do strategii AI w Polsce. https://www.gov.pl/web/cyfryzacja/ai.

Czachórski, W. (1994). Zobowiązania. Zarys wykładu. Wydawnictwo Naukowe PWN.

van Dijk, N. (2020). In the hall of masks: Contrasting modes of personification. In M. Hildebrandt & K. O’Hara (Eds.), Life and the law in the era of data-driven agency (pp. 230–251). Edward Elgar Publishing.

Dubis, W. (2016). Title VII. Performance of obligations. Dział III. Wykonanie i skutki niewykonania zobowiązań z umów wzajemnych. In E. Gniewek & P. Machnikowski (Eds.), Kodeks cywilny. Komentarz (5th ed., pp. 917–921). C.H. Beck.

European Commission. (2019). Liability for artificial Intelligence and other emerging digital technologies [Report]. Publications Office of the European Union. https://doi.org/10.2838/573689

European Group on Tort Law. (2005). Principles of European tort law. Text and commentary. Springer.

Grimme, C., Preuss, M., Adam, L., & Trautmann, H. (2017). Social bots: Human-like by means of human control? Big Data, 5(4). https://doi.org/10.1089/big.2017.0044

Kaplan, A., & Haenlein, M. (2019). Siri, Siri, in my hand: Who’s the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence. Business Horizons, 62(1), 15–25. https://doi.org/10.1016/j.bushor.2018.08.004

Klopfenstein, L., Delpriori, S., Malatini, S., & Bogliolo, A. (2017). The rise of bots: A survey of conversational interfaces, patterns, and paradigms. In O. Mival (Ed.), DIS ’17: Proceedings of the 2017 conference on designing interactive systems (pp. 555–565). Association for Computing Machinery. https://doi.org/10.1145/3064663.3064672

Kok, J. N., Boers, E. J. W., Kosters, W. A., Putten, P., & Poel, M. (2002). Artificial intelligence: Definition, trends, techniques and cases. In J. N. Kok (Ed.), Encyclopedia of life support systems (pp. 270–299). Eolss Publishers.

Longchamps de Berier, R. (1999). Polskie prawo cywilne: Zobowiązania (Vol. 2). Ars boni et aequi.

Machnikowski, P. (2015). Prawo zobowiązań w 2025 roku. Nowe technologie, nowe wyzwania. In A. Olejniczak, J. Haberko, A. Pyrzyńska, & D. Sokołowska (Eds.), Współczesne problemy prawa zobowiązań (pp. 379–396). Wolters Kluwer.

Machnikowski, P. (Ed.). (2017). Kodeks cywilny. Księga pierwsza. In Część ogólna. Projekt Komisji Kodyfikacyjnej Prawa Cywilnego przyjęty w 2015 r. Z komentarzem członków Zespołu Problemowego KKPC. C.H. Beck.

Neff, G., & Nagy, P. (2016). Talking to bots: Symbiotic agency and the case of Tay. International Journal of Communication, 10, 4915–4931. https://ijoc.org/index.php/ijoc/article/view/6277

Nevejans, N. (2016). European civil law rules in robotics [Study]. Publications Office of the European Union. https://doi.org/10.2861/946158

Nowakowski, Z. (1979). Wina i ryzyko jako podstawy odpowiedzialności. In Z. Radwański (Ed.), Studia z prawa zobowiązań (pp. 103–115). Państwowe Wydawnictwo Naukowe.

Ohanowicz, A., & Górski, J. (1970). Zarys prawa zobowiązań. Państwowe Wydawnictwo Naukowe.

Open letter to the European Commission: Artificial intelligence and robotics. (2018). https://www.politico.eu/wp-content/uploads/2018/04/RoboticsOpenLetter.pdf.

Pegani, A. (2016). Sztuczny Człowiek. Wizerunek w wybranej literaturze oraz filmie. Rozpisani.

Pilich, M. (2018). Przedmowa & Ustawa z dnia 23 kwietnia 1964 r. – Kodeks cywilny. In J. Gudowski (Ed.), Kodeks cywilny. Część ogólna. Komentarz do wybranych przepisów (pp. 8–22). Wolters Kluwer.

Radwański, Z. (1997a). Prawo cywilne – część ogólna. C.H. Beck.

Radwański, Z. (1997b). Zobowiązania. Część ogólna. C.H. Beck.

Retto, J. (2017). Sophia, first citizen robot of the world.

Rommetveit, K., Dijk, N., & Gunnarsdóttir, K. (2020). Make way for the robots! Human- and machine-centricity in constituting a European public–private partnership. Minerva, 58(1), 47–69. https://doi.org/10.1007/s11024-019-09386-1

Shumway, E. (1901). Freedom and slavery in Roman law. The American Law Register, 49, 636–653. https://doi.org/10.2307/3306244.

Śmieja, A. (2009). Ogólna charakterystyka odpowiedzialności z tytułu czynów niedozwolonych. In A. Olejniczak (Ed.), System prawa prywatnego. Prawo zobowiązań – część ogólna (Vol. 6, pp. 335–363). C.H. Beck; Instytut Nauk Prawnych PAN.

Świerczyński, M., & Żarnowiec, Ł. (2019). Prawo właściwe dla odpowiedzialności za szkodę spowodowaną przez wypadki drogowe z udziałem autonomicznych pojazdów. Zeszyty Prawnicze, 19(2), 101–135. https://doi.org/10.21697/zp.2019.19.2.03

Szpunar, A. (1985). Odpowiedzialność za szkody wyrządzone przez zwierzęta i rzeczy. Wydawnictwo Prawnicze.

Targosz, T. (2004). Nadużycie osobowości prawnej. Zakamycze.

Teubner, G. (2018). Digital personhood? The status of autonomous software agents in private law.

Vladeck, D. C. (2014). Machines without principals: Liability rules and artificial intelligence. Washington Law Review, 89(1), 117–150.

von Bar, C., Clive, E., & Schulte-Nölke, H. (Eds.). (2009). Principles, definitions and model rules of European private law. Draft common frame of reference (DCFR). Articles and comments. Sellier.

Wang, P. (2008). What do you mean by “AI”? In P. Wang, B. Goertzel, & S. Franklin (Eds.), Proceedings of the 2008 conference on Artificial General Intelligence 2008: Proceedings of the First AGI Conference (Frontiers in Artificial Intelligence and Applications) (Vol. 171, pp. 362–373). IOS Press.

Wolter, A., Ignatowicz, J., & Stefaniuk, K. (2001). Prawo cywilne. Zarys części ogólnej. Lexis Nexis.

Zagórna, A. (2020, June 9). GPT-3, czyli SI dobra dla ludzi. Podobno. https://www.sztucznainteligencja.org.pl/gpt-3-czyli-si-dobra-dla-ludzi-podobno.

Ziemianin, B., & Kitłowski, E. (2013). Prawo zobowiązań. Część ogólna. Wolters Kluwer.

Ziemianin, B., & Kuniewicz, Z. (2007). Prawo cywilne. Część ogólna. Ars boni et aequi.

Footnotes

1. For historical development of personification, see van Dijk (2020).

2. In the context of these considerations, it is possible to recall the concepts of artificial intelligence personifying human beings, which can be discussed from the point of view of social predictions. See Pegani (2016).

3. Resolution no. 196 of the Council of Ministers on establishing “A policy for the development of artificial intelligence in Poland from 2020”, Monitor Polski 2021, item 23.

4. These provisions are an implementation into the Polish legal system of the Council Directive 85/374/EEC.

5. More on this in section 5 of this study.

Cryptoeconomics

This article belongs to the Glossary of decentralised technosocial systems, a special section of Internet Policy Review.

Definition

Cryptoeconomics describes an interdisciplinary, emergent and experimental field that draws on ideas and concepts from economics, game theory and related disciplines in the design of peer-to-peer cryptographic systems. Cryptoeconomic systems try to guarantee certain kinds of information security properties using incentives and/or penalties to regulate the distribution of efforts, goods and services in new digital economies.

Cryptoeconomics is an embryonic field at present and can be taken to include several areas of focus: information security engineering, mechanism design, token engineering and market design. This portmanteau of cryptography and economics raises questions regarding the epistemic novelty of cryptoeconomics, as distinct from its constituent components.

Origin

The term cryptoeconomics entered casual usage in the formative years of the Ethereum developer community in 2014–15. The phrase is typically attributed to Vitalik Buterin, with the earliest public usage being in a 2015 talk by Vlad Zamfir entitled “What is Cryptoeconomics” (Zamfir, 2015). For Buterin, cryptoeconomics is intended “as a methodology for building systems that try to guarantee certain kinds of information security properties” (Buterin, 2017, pp. 46-56). For Zamfir, the focus is more broadly on the distribution of efforts, goods and services in new digital economies: “A formal discipline that studies protocols that govern the production, distribution, and consumption of goods and services in a decentralized digital economy. Cryptoeconomics is a practical science that focuses on the design and characterization of these protocols” (Zamfir, 2015, 00:00:58). The term is uncommon amongst Bitcoin developers, but is occasionally used to discuss adversarial scenarios such as state-sponsored defensive mining and transaction censorship (Voskuil, 2018).

Cryptoeconomics was coined by the Ethereum community but was initially inspired by the use of economic incentives in the Bitcoin protocol (Nakamoto, 2008). Bitcoin mining is designed with the intention that it would be more profitable and attractive to contribute to the network than to attack it. With the development of Ethereum as the first successful general-purpose blockchain protocol, the idea of using economic incentives was also generalised as an approach to achieve a broad variety of behavioural and information security outcomes for decentralised systems. This has led to experimentation with the use of cryptographic techniques and incentives in organisational, financial, market and monetary experiments (Davidson et al., 2016; Halaburda et al., 2018; Voshmgir, 2019).
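
The incentive claim above can be made concrete with the attacker-success calculation from the Bitcoin white paper (Nakamoto, 2008): an attacker controlling a minority share of hashing power has an exponentially vanishing chance of rewriting confirmed history, which is why contributing honestly is the more reliable way to earn block rewards. The following Python sketch implements that published formula; the parameter names (q for the attacker’s share of hashing power, z for the number of confirmations) follow the white paper, and the numerical example is illustrative only.

```python
from math import exp, factorial

def attacker_success_probability(q: float, z: int) -> float:
    """Probability that an attacker with hash-power share q ever catches up
    from z blocks behind (formula from Nakamoto, 2008, section 11)."""
    p = 1.0 - q                   # share of hashing power held by honest nodes
    lam = z * (q / p)             # expected attacker progress while z honest blocks are mined
    total = 1.0
    for k in range(z + 1):
        poisson = exp(-lam) * lam ** k / factorial(k)
        total -= poisson * (1.0 - (q / p) ** (z - k))
    return total

# Illustrative check: a 10% attacker waiting out 6 confirmations.
print(round(attacker_success_probability(q=0.10, z=6), 6))  # ~0.000243
```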

Motivation for the development of cryptoeconomics arises from the need to solve specific information security, organisational and economic problems that manifest in cryptographic systems. Examples include incentive alignment between stakeholder participants in permissionless networks and developing viable alternative approaches to distributed consensus other than proof-of-work, which is also commonly referred to as blockchain mining. In this sense, the portmanteau cryptoeconomics (or crypto-economics) as a combination of cryptography and economics raises an interesting question regarding epistemic reducibility. Can cryptoeconomics be fully deconvoluted—in other words, retro-synthesised—into its constituent namesakes; is it a mere combination or greater than the sum of its parts? A particular respondent’s answer might fall along the lines of their proclivity towards general-purpose blockchain networks and / or proof-of-work.

The aforementioned affinity to decentralisation as an axiomatic aim and primary concept originates from a longer history of the development of peer-to-peer systems as a means to establish autonomous networks (Brekke, 2020). With the invention of Bitcoin, economic ideas were added to the toolbox of computer engineers developing leaderless systems. For some, the motivation was to enable economic autonomy and fair distribution of efforts and rewards within such decentralised networks, what scholar of money and the internet Swartz calls infrastructural mutualism. For others, the promise of provably scarce and unforgeable virtual commodities—digital metallism—was the main attraction (Swartz, 2018). Adherents to the digital metallist ideology often draw upon economic and monetary concepts typically associated with libertarianism and the US far right (Golumbia, 2016).

Evolution

Over time there has been a broadening in the scope of what can be considered cryptoeconomics as the variety of consensus systems and token types has proliferated. The different approaches to cryptoeconomics are beginning to settle into distinct layers of a cryptoeconomic 'stack': 'layer 1' referring to the information security of a network protocol such as proof-of-work and proof-of-stake; and 'layer 2' referring to the token, market or mechanism capacities offered by emerging cryptoeconomic platforms (Alsindi, 2019).

In recent years a number of networks affording general-purpose computation with facile smart contracting and token creation capabilities have emerged. This layer 2 cryptoeconomics entails the creation of notionally valuable economic assets that are not connected to the underlying security properties of the network substrate; examples include ERC20-type Ethereum tokens, Non-Fungible Tokens (NFTs) and, more recently, Decentralised Finance (DeFi) synthetic tokens. Whilst having notional economic value, these assets provide negligible security benefits to the base layer of the network: the abstracted non-native assets of 'layer 2' may even increase the incentive to attack 'layer 1', as has been discussed in relation to ledger forks (Alsindi, 2019) and Initial Coin Offering launches, and sudden market-moving events are seen regularly in the hyper-financialised DeFi sector (Daian et al., 2019).

The scope and definition of cryptoeconomics are still undergoing epistemic formation (0x Salon & Alsindi, 2020); at present the field comprises several specific areas of focus:

Information security engineering: Where the primary focus of the cryptoeconomic endeavour is on the security properties of peer-to-peer 'layer 1' protocols.

Mechanism design: Where the focus is specifically on the use of incentives for behavioural engineering of rational agents in a game theoretical setting (Brown-Cohen et al., 2018).

Token engineering: Where the primary focus is on the functionality and properties exhibited by tokens used in a system. Tokens might for example grant token holders specific rights (such as service access or voting privileges as commonly encountered with the ERC-20 pseudo-standard), be fungible or non-fungible such as NFTs, be generated and distributed through mining, or through airdrops. Different token designs are understood to encourage different types of behaviours and organisational properties (Voshmgir, 2019).

Market design: Where the focus is on employing blockchain protocols and tokens in order to experiment with new kinds of markets that generate specific types of outcomes. For example, bonding curves determine the price of tokens depending on the supply or other factors, with an aim to influence the behaviour of investors (Titcomb, 2019); a simple illustration follows below.
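
To make the market design idea more tangible, the toy Python sketch below prices a token with a simple linear bonding curve: the spot price rises with circulating supply, and the cost of minting new tokens is the area under the price curve. The linear shape and all parameter values are illustrative assumptions rather than a description of any deployed system; see Titcomb (2019) for the augmented variants used in practice.

```python
def spot_price(supply: float, slope: float = 0.001) -> float:
    """Linear bonding curve: price per token grows with circulating supply."""
    return slope * supply

def mint_cost(supply: float, amount: float, slope: float = 0.001) -> float:
    """Reserve needed to mint `amount` new tokens at circulating supply `supply`:
    the area under the price curve between supply and supply + amount."""
    return slope * ((supply + amount) ** 2 - supply ** 2) / 2

# Illustrative only: early buyers pay far less per token than later ones.
print(mint_cost(supply=0, amount=100))       # 5.0
print(mint_cost(supply=10_000, amount=100))  # 1005.0
```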

Issues currently associated with the term

Cryptoeconomics is generally understood to combine cryptographic techniques and economics. However, much of the field of cryptoeconomics “shows an interesting but also alarming characteristic: its underlying economics is remarkably conventional and conservative” (Virtanen et al., 2018). Of the long-standing and broad field of economics and the associated fields of political economy, monetary theory, finance and the social study of finance, most literature on cryptoeconomics draws only on an overly formalist approach to the contested field of game theory (Green & Viljoen, 2020). Virtanen et al. (2018, n.p.) quote a revealing tweet from the influential Nick Szabo: “An economist or programmer who hasn’t studied much computer science, including cryptography, but guesses about it, cannot design or build a long-term successful cryptocurrency. A computer scientist and programmer who hasn’t studied much economics, but applies common sense, can.” This means that the potential of cryptoeconomic approaches may be more reformist than revolutionary; “in spite of their noble intentions, these projects do not in fact break with the current financial paradigm” (Lotti, 2016, p. 105).

More recent characterisations of cryptoeconomics take a broader societal outlook, for example focusing on the economics of new organisational forms (Davidson et al., 2016), the design of economic space (Virtanen et al., 2018), or on economic and monetary design that draws on mutual credit systems (Brock et al., 2018) and commons approaches (De Filippi & Hassan, 2015; Catlow, 2019). There is, in other words, much broader economic experimentation taking place with and through peer-to-peer cryptographic systems; however, the work explicitly labelled cryptoeconomic often implies narrow and formalist approaches limited to Austrian school economics, right-wing monetary ideas and game theory, as is especially apparent in the usage of the term in reference to Bitcoin (Golumbia, 2016; Voskuil, 2018).

One of the ongoing challenges encountered in cryptoeconomics is inherent to mechanism design and market design economics more generally (Ossandón, 2019): namely, the contradiction between the promise of deterministic outcomes in theory and the complex, emergent behaviours and effects of the systems in real deployments. On the one hand, the market design approach in cryptoeconomics promises to deliver specific properties (information security or behavioural outcomes). On the other hand, the simple rules of the system designs produce complexity and unintended outcomes (Voshmgir & Zargham, 2019). This contradiction was off-handedly commented on by Ethereum developer Floersch when discussing the Casper proof-of-stake approach: "[W]e have this complex behavior emerging from really simple economic rules, and this actually not specific to Casper by any means, this is any protocol that are messing around with economics" (Floersch, 2017, pp. 12-18).

This contradiction—of emergent complexity and unintended effects—is nevertheless “productive” for those seeking to promote economic approaches to social problems: the promise of deterministic outcomes makes the models convincing and attractive from a formalist perspective (Green & Viljoen, 2020), while the complexity obscures any “failures” of the design (Nik-Khah & Mirowski, 2019). These shortcomings are instead relegated to being a problem “of the social” or “with humans”, or blamed on an implementation that was not sufficiently faithful to the protocol, or even on a protocol implementation that was not expansive or radical enough. This contradiction is extensively covered in political economy and economic history, and comprises one of the main critiques of the Austrian school of economics in particular (Mirowski & Nik-Khah, 2018; Heilbroner, 1998); it also relates to what are called the performative aspects of economics. From an information security perspective, the incorporation of economic incentives into protocol design in this sense radically increases the complexity of peer-to-peer systems, and correspondingly also leads to an increased attack surface and a wider variety of hypothetical vulnerabilities (Alsindi, 2019).

Conclusion

In summary, cryptoeconomics refers to an emerging field that employs economic concepts in the design of peer-to-peer cryptographic systems. The origins of the field lie in specific information security problems arising out of such systems. Competing approaches draw from a much wider field of economic and political economic thinking, including mutual credit systems and commons frameworks, in order to address questions of organisation and societal outcomes more broadly.

References

0x Salon, & Alsindi, W. Z. (2020). 0x002 Report: Trespasser Theory: Aside on Cryptoeconomic Systems—A case study in attempted epistemic formation? [Report]. 0x Salon. https://doi.org/10.21428/49968aaa.9160a130#aside-on-cryptoeconomic-systems---a-case-study-in-attempted-epistemic-formation

Alsindi, W. Z. (2019). TokenSpace: A Conceptual Framework for Cryptographic Asset Taxonomies. Parallel Industries. https://doi.org/10.21428/0004054f.ccff3c19

Brekke, J. K. (2020). Hacker-engineers and Their Economies: The Political Economy of Decentralised Networks and ‘Cryptoeconomics’. New Political Economy. https://doi.org/10.1080/13563467.2020.1806223

Brock, A., Atkinson, D., Friedman, E., Harris-Braun, E., Mcguire, E., Russell, J. M., Perrin, N., Luck, N., & Harris-Braun, W. (2017). Holo Green Paper [White Paper]. Holo. https://files.holo.host/2017/12/Holo-Green-Paper.pdf

Brown-Cohen, J., Narayanan, A., Psomas, C., & Weinberg, S. M. (2018). Formal Barriers to Longest-Chain Proof-of-Stake Protocols. ArXiv. https://arxiv.org/abs/1809.06528

Buterin, V. (2017). Introduction to Cryptoeconomics, Ethereum Foundation [Talk]. https://youtu.be/pKqdjaH1dRo

Catlow, R. (2019). Decentralisation and Commoning the Arts. Free/Libre, Technologies, Arts and the Commons. Unconference Proceedings, 50–55. http://www.unrf.ac.cy/files/unconference-proceedings-phygital.pdf#page=50

Daian, P., Goldfeder, S., Kell, T., Li, Y., Zhao, X., Bentov, I., Breidenbach, L., & Juels, A. (2019). Flash Boys 2.0: Frontrunning, Transaction Reordering, and Consensus Instability in Decentralized Exchanges. ArXiv. https://arxiv.org/abs/1904.05234

Davidson, S., De Filippi, P., & Potts, J. (2016). Economics of Blockchain. https://doi.org/10.2139/ssrn.2744751

De Filippi, P., & Hassan, P. (2015). Measuring Value in the Commons-Based Ecosystem: Bridging the Gap Between the Commons and the Market. In G. Lovink, N. Tkacz, & P. De Vries (Eds.), MoneyLab Reader: An Intervention in Digital Economy (pp. 74–91). Institute of Network Cultures. https://networkcultures.org/wp-content/uploads/2015/04/MoneyLab_reader.pdf#page=76

Floersch, K. (2017). Casper Proof of Stake [Talk]. Cryptoeconomics and Security Conference, Berkeley. https://youtu.be/ycF0WFHY5kc.

Golumbia, D. (2016). The politics of Bitcoin. Software as right-wing extremism. University of Minnesota Press.

Green, B., & Viljoen, S. (2020). Algorithmic Realism: Expanding the Boundaries of Algorithmic Thought. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAT ’20), 19–31. https://doi.org/10.1145/3351095.3372840

Halaburda, H., Haeringer, G., Gans, J., & Gandal, N. (2018). The Microeconomics of Cryptocurrencies (Research Paper No. 2018-10–02). NYU Stern School of Business, Baruch College Zicklin School of Business. https://doi.org/10.2139/ssrn.3274331

Heilbroner, R. (1998). The self‐deception of economics. Critical Review, 12(1–2), 139–150. https://doi.org/10.1080/08913819808443490

Lotti, L. (2016). Contemporary art, capitalization and the blockchain: On the autonomy and automation of art’s value. Finance and Society, 2(2), 96. https://doi.org/10.2218/finsoc.v2i2.1724

Mirowski, P., & Nik-Khah, E. (2018). The Knowledge We Have Lost In Information – The History Of Information in Modern Economics. Oxford University Press. https://doi.org/10.1093/acprof:oso/9780190270056.001.0001

Nakamoto, S. (2008). Bitcoin: A peer-to-peer electronic cash system [White Paper]. https://bitcoin.org/bitcoin.pdf

Nik-Khah, E., & Mirowski, P. (2019). On Going the Market One Better: Economic Market Design and the Contradictions of Building Markets for Public Purposes. Economy and Society, 48(2), 268–294. https://doi.org/10.1080/03085147.2019.1576431

Ossandón, J. (2019). Notes on Market Design and Economic Sociology. Economic Sociology, 20(2), 31–39. http://hdl.handle.net/10419/200967

Swartz, L. (2018). What was Bitcoin, what will it be? The techno-economic imaginaries of a new money technology. Cultural Studies, 32(4), 623–650. https://doi.org/10.1080/09502386.2017.1416420

Titcomb, A. (2019). Deep Dive: Augmented Bonding Curves [Blog post]. Giveth Medium. https://medium.com/giveth/deep-dive-augmented-bonding-curves-3f1f7c1fa751

Virtanen, A., Lee, B., Wosnitzer, R., & Bryan, D. (2018). Economics Back into Cryptoeconomics [Blog post]. Econaut Medium. https://medium.com/econaut/economics-back-into-cryptoeconomics-20471f5ceeea

Voshmgir, S. (2019). Token Economy: How Blockchains and Smart Contracts Revolutionize the Economy. BlockchainHub Berlin.

Voshmgir, S., & Zargham, M. (2019). Foundations of Cryptoeconomic Systems [Working Paper]. Institute for Cryptoeconomics, Vienna University of Economics and Business. https://epub.wu.ac.at/7309/

Voskuil, E. (2018). Cryptoeconomics [Wiki Page]. The Bitcoin Development Library. https://github.com/libbitcoin/libbitcoin-system/wiki/Cryptoeconomics

Zamfir, V. (2015). What Is Cryptoeconomics? Cryptocurrency Research Group Cryptoeconomicon. https://youtu.be/9lw3s7iGUXQ?t=58

Blockchain governance

This article belongs to the Glossary of decentralised technosocial systems, a special section of Internet Policy Review.

Definition

Blockchain governance can be regarded as the integration of norms and culture, the laws and the code, the people and the institutions that facilitate coordination and together determine a given organisation.

Origin and competing definitions

The importance of governance is well recognised in the information technology (IT) industry (ITSM Library, 2008) and this term is widely used in academic, economic and policy debates. In the blockchain space, this term has been tightly linked to Decentralised Autonomous Organisations (DAOs) (Buterin, 2013). Unfortunately, there is no common understanding, or generally accepted formal definition of governance, when associated with blockchain-based technologies. In pursuit of a formalisation of this term, before going more deeply into its evolution in the context of blockchain technology, we will briefly chart out a few common definitions.

The origins and most common approaches to governance are thoroughly dealt with by Hufty (2011), and as stated by Bevir (2011), at the most general level, governance can be associated with “theories and issues of social coordination and the nature of all patterns of rule”. The Oxford English Dictionary defines governance as “the action or fact of governing a nation, a person, an activity, one's desires, etc.; direction, rule; regulation.” In an economics context, governance is defined as “the use of institutions, structures of authority and collaboration to allocate resources and coordinate the effort and activity in society or in the economy” (Bell, 2002).

On the other hand, from an IT perspective, governance is composed of the leadership and the set of structures and processes that ensure that an organisation’s IT supports and extends the organisation’s strategy and objectives in a manner focused on achieving better alignment between business and IT (van Bon, de Jong & Pieper, 2008). In contrast, Margaret Blair notes that corporate governance is “the whole set of legal, cultural, and institutional arrangements that determine what publicly traded corporations can do, who controls them, how that control is exercised, and how the risks and returns from the activities they undertake are allocated” (1995, p. 3), as quoted in Clarke (2012). However, the meaning of corporate governance can vary considerably according to the values, institutions, culture and objectives pursued by each organisation, as well as the corporate governance system in the jurisdiction where the corporation is registered (Pollman, 2019; Norbäck & Persson, 2009). Corporate governance is not just about accountability; it also has an important role in enabling strategising, value creation and innovation, as highlighted by Kraakman et al. (2017).

In this context, Morell (2014) presents the term community governance, defined as the direction, control and coordination of a dynamic process which evolves over time and manages several aspects of power, classified into eight interrelated categories ranging from 'cultural principles/social norms' and 'formal rules or policies' to 'infrastructure provision'.

Academic review of the term in the blockchain domain

Despite the gap in the literature due to the lack of a formal, comprehensive and holistic definition of what governance means in different domains, several papers on governance take approaches that are applied, or could be applied, to blockchain technology.

For example, Reijers et al. (2016) explore how blockchain technology enables the configuration of specific forms of political organisation using the Ethereum network as a case study, based on the idea that the blockchain can act as a legal framework that provides the basis for online interactions of any kind in terms of governance.

Similarly, Davidson et al. (2016) share the idea that, by eliminating the need for trust in agreed contracts through consensus and transparency, blockchains enable a new type of governance for autonomous organisations with the legal coordination properties of a market. Further, the governance attached to these decentralised autonomous organisations could be implemented as blockchain-based software systems through smart contracts (i.e., small pieces of code deployed on the blockchain) (De Filippi & Wright, 2018). However, the fact that the blockchain is operated autonomously could itself raise problems for corporate governance, such as corporate record-keeping and the maintenance and upgrading of blockchains themselves (Yermack, 2017).

Another approach is the use of notions of governance of the commons derived from the study of natural resources, particularly the work of the Nobel laureate Elinor Ostrom (1990), as the basis for blockchain-based self-governance (Rozas et al., 2018). They identify and conceptualise six affordances that blockchains may provide: tokenisation, formalisation and decentralisation of rules, autonomous automatisation, decentralisation of power over the infrastructure, increase in transparency, and codification of trust.

Data governance is another less explored approach presented by Micheli et al. (2020). In their work, governance is defined as the power relations between all the actors affected by, or having an effect on, the way data is accessed, controlled, shared and used, the various socio-technical arrangements set in place to generate value from data, and how such value is redistributed between actors.

Finally, in another line of research, Karjalainen (2020) presents an informative survey of governance models in blockchain-based decentralised networks. It is worth highlighting that consensus mechanisms inherent in blockchain transactions have been excluded from this study.

Usage of the term ‘blockchain governance’

We find relevant visions of governance in the context of blockchain, for instance, in the works presented by Finck (2018) and Reijers et al. (2018). However, as mentioned earlier, academic research on blockchain governance is still somewhat sparse (see also Pelt et al., 2020). And while governance is a much-discussed topic at blockchain conferences, such as Ethereum Devcon, the annual conference for all Ethereum developers, researchers, thinkers, and makers (DevCon Archive, n.d.); the Community Ethereum Development Conference (EDCON, n.d.); the Ethereum Community Conference (EthCC, n.d.); and DAOfest, an event series focused on advancing the technology and adoption of decentralised governance globally (DAOfest, n.d.), the written record still comprises mostly blog posts and social media entries of dubious quality.

As stated previously, all governance is ultimately a social construct, comprising not simply laws (or bylaws), but also norms, culture, institutions, and individuals. Despite impassioned claims to the contrary, this is no different in regard to blockchains.

To understand the (mis-)usage of the notion of blockchain governance, we must first consider what specifically blockchains bring to the table: they enable systems in which adherence to procedure is automatically enforced, relying neither on norms nor a legal system, and leaving no room for individual discretion. This strict separation of enforceable procedure on the one hand and norms and discretion on the other is genuinely novel, but its import is exaggerated. Among the more enthusiastic supporters of blockchain technology, we observe a tendency to wilfully ignore all questions of norms and culture and equate governance entirely with coded procedures (code is law). Once all governance is reduced to procedure, it is hard to resist the claim that blockchains change everything.

This mixture of confusion and hubris is exemplified nicely in Singh (2020), who introduces "standard" governance as being either direct governance or representative governance, thus conflating governance with voting procedures, and asserting that everything is different with the blockchain: "We can broadly categorize the governance types into two major categories: Standard Governance and Blockchain Governance" (n.p.).

A further ambiguity stems from the fact that blockchain governance is used in two related but distinct contexts—governance of the chain itself vs governance using the chain. Additionally, usage in the first context is further complicated by the highly polarised and politicised nature of the blockchain space where we observe different factions reinterpreting and redefining the phrase to fit their outlook.

In this first usage, blockchain governance refers to governance of the blockchain (i.e. the specific question of making consensus-relevant changes to the software running a blockchain). Consensus relevance here means a change to the internal rules of the blockchain that must be applied (i.e., software must be updated) by all relevant participants in the blockchain network such as cryptocurrency exchanges, wallet software providers, miners, and users. If a large enough portion of the network does not apply the changes, then the network splits into two: those following the new rules and those following the old rules—this is called a hard fork 1.
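
A minimal sketch may help to illustrate why a partially adopted rule change splits the network: nodes that installed the update accept blocks that the remaining nodes reject, so the two groups extend different chains from the same history. The rule being changed here (a maximum block size) and all values are invented purely for illustration and do not describe any particular network.

```python
# Illustrative only: two validation rule sets diverging over a single block.
OLD_MAX_BLOCK_SIZE = 1_000_000   # hypothetical pre-upgrade consensus rule
NEW_MAX_BLOCK_SIZE = 2_000_000   # hypothetical post-upgrade consensus rule

def block_is_valid(block_size: int, upgraded: bool) -> bool:
    limit = NEW_MAX_BLOCK_SIZE if upgraded else OLD_MAX_BLOCK_SIZE
    return block_size <= limit

nodes = {"node_a": True, "node_b": True, "node_c": False}  # node_c never upgraded
candidate_block_size = 1_500_000  # valid under the new rules, invalid under the old

new_chain = [n for n, upgraded in nodes.items() if block_is_valid(candidate_block_size, upgraded)]
old_chain = [n for n in nodes if n not in new_chain]
print("nodes following the new chain:", new_chain)   # ['node_a', 'node_b']
print("nodes staying on the old chain:", old_chain)  # ['node_c']
```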

Examples of this approach include: (i) Curran (2020), who uses blockchain governance to mean, vaguely, whatever process leads to consensus-relevant changes in the software, and who hails hard forks as a safety valve allowing users to choose their own fork if things go awry; and (ii) Rajarshi (2020), where governance is conflated with voting procedures and hard forks are hailed as enabling "much more flexibility in operation than traditional structures" because "a user is free to choose which blockchain to follow."

In this context, we typically observe the introduction of a strict separation of governance into off-chain governance and on-chain governance.

The main idea of on-chain governance is to use coded procedures within a blockchain that represent voting procedures by which decisions about consensus-relevant software upgrades are mediated through the consensus system itself. Usage of the term in industry is neatly summarised by Frankenfield (2018): "On-chain governance is a system for managing and implementing changes to cryptocurrency blockchains. In this type of governance, rules for instituting changes are encoded into the blockchain protocol. Developers propose changes through code updates and each node votes on whether to accept or reject the proposed change".
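
A toy rendering of that description, with invented validator names and an assumed supermajority threshold: the proposed change is adopted only if the stake-weighted votes recorded on-chain exceed the threshold, so the outcome is computed by the protocol rather than negotiated off-chain.

```python
# Illustrative only: stake-weighted, on-chain approval of a protocol change.
stakes = {"validator_1": 40, "validator_2": 35, "validator_3": 25}
votes = {"validator_1": True, "validator_2": False, "validator_3": True}

APPROVAL_THRESHOLD = 0.66  # hypothetical supermajority requirement

total_stake = sum(stakes.values())
stake_in_favour = sum(stakes[v] for v, in_favour in votes.items() if in_favour)

adopted = stake_in_favour / total_stake >= APPROVAL_THRESHOLD
print(f"{stake_in_favour}/{total_stake} of stake in favour -> adopted: {adopted}")
# 65/100 of stake in favour -> adopted: False
```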

Proponents of this way of doing things disparage the off-chain (human) world as outdated in its reliance on people, norms, and culture to achieve governance, specifically alleging that procedures might be ill-defined or opaque: "off-chain collectives that organize over phone calls or at conferences, which either leads to shadow hierarchies where only a few, unwritten people make decisions" (Petrowski, 2020, n.p.). Central to this line of thought is the idea that anything on-chain is transparent and thus fair, while anything off-chain is hidden and potentially nefarious. This stands in contrast to the Bitcoin notion that all consensus-relevant changes are bad because they represent human involvement and, inasmuch as code is law, they are breaking the law (De Filippi & Wright, 2018). On this view, on-chain governance only aids and abets such law-breaking; the goal is not coordinated updates to the network, but immutability.

The other context in which blockchain governance is used ignores the previous question entirely and focuses on using the blockchain to achieve governance. It presupposes the existence of a functioning blockchain network such as Ethereum, which can be leveraged to deploy smart contracts that encode the procedures of a decision-making paradigm. The blockchain is used to force/guarantee adherence to procedure, but the decisions being made have nothing to do with the blockchain itself (i.e., upgrading, avoiding hard forks). Rather, the goal of this form of on-chain governance is to enable the creation and operation of DAOs (i.e., organisations whose bylaws are written in code and enforced by the blockchain).

Once a DAO has been deployed to a blockchain, its rules can no longer be changed—short of a hard fork of the underlying network. Envisioning the need for future changes, DAO authors must incorporate the rules-for-changing-the-rules in the original deployment. We may think of this as analogous to an ordinary legislative process, coupled with a process for amending the constitution that the legislation is based on.

Current prominent examples of DAO platforms such as Aragon (Aragon, n.d.) and Daostack (DAOstack, n.d.) place heavy emphasis on a process in which proposals—usually to reallocate cryptocurrency funds—are put forward, a voting procedure then determines passage of the proposal, and eventually the funds are moved. This all happens on the blockchain, though off-chain communication and discussion are alluded to. Other examples such as Colony (Rea et al., 2020) take a more holistic view of governance, involving primarily off-chain interactions between human beings to come up with ideas and make decisions, and reserving the blockchain for enforcement, as opposed to decision making, whenever this is feasible.
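
The proposal-vote-execute pattern described above can be sketched in a few lines of Python. The class and parameter names below are invented for illustration, and the quorum and majority values are arbitrary; real platforms such as Aragon or DAOstack implement far richer (and audited) versions of this logic as smart contracts.

```python
# Illustrative only: a token-weighted funding proposal in a toy treasury DAO.
class ToyDAO:
    def __init__(self, balances: dict[str, int], treasury: int,
                 quorum: float = 0.5, majority: float = 0.5):
        self.balances = balances     # governance-token holdings per member
        self.treasury = treasury     # funds controlled by the DAO
        self.quorum = quorum         # share of all tokens that must vote
        self.majority = majority     # share of cast votes needed to pass

    def decide(self, amount: int, votes: dict[str, bool]) -> bool:
        total = sum(self.balances.values())
        cast = sum(self.balances[m] for m in votes)
        in_favour = sum(self.balances[m] for m, yes in votes.items() if yes)
        passed = (cast / total >= self.quorum
                  and in_favour / cast > self.majority
                  and amount <= self.treasury)
        if passed:
            self.treasury -= amount  # enforcement: funds move only if the vote passes
        return passed

dao = ToyDAO({"alice": 60, "bob": 30, "carol": 10}, treasury=1_000)
print(dao.decide(amount=200, votes={"alice": True, "bob": False}))  # True
print(dao.treasury)                                                 # 800
```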

It is worth noting that all DAO projects are ultimately a mixture of off-chain and on-chain elements, echoing the idea that even with blockchains and cryptocurrencies, governance consists of more than coded procedures.

Conclusion

As we have seen, the concept of blockchain governance is still under development and can be understood differently depending on the application domain under discussion.

In a broad sense, blockchain governance can be regarded as the integration of norms and culture, the laws and the code, the people and the institutions that facilitate coordination and together determine a given organisation. Importantly it refers to the entirety of motivations, rules, and activities that feed into the establishment of choices and subsequently deciding on them, and includes, but is not limited to, any coded on-chain rules that guide these processes.

However, blockchain governance also refers to two distinct dimensions: off-chain governance vs on-chain governance.

When referring strictly to smart contracts, one should specify that one is referring specifically to the on-chain elements of the governance system in question. Further care should also be taken to clarify whether one is talking about governance of a blockchain’s own consensus-relevant rules, or whether the governance system in question is merely using a blockchain to enforce on-chain rules in an otherwise unrelated off-chain domain.

References

Aragon. (n.d.). https://aragon.org/

Bell, S. (2002). Economic governance and institutional dynamics. Oxford University Press.

Bevir, M. (Ed.). (2011). The SAGE handbook of governance. SAGE. https://doi.org/10.4135/9781446200964

Blair, M. M. (1995). Ownership and Control: Rethinking Corporate Governance for the Twenty-First Century. Brookings Institution Press.

Buterin, V. (2013). Ethereum whitepaper: A next-generation smart contract and decentralized application platform [White Paper]. https://ethereum.org/en/whitepaper/

Clarke, T., & Branson, D. (Eds.). (2012). The SAGE handbook of corporate governance. SAGE Publications.

Curran, B. (2020, July 30). What is Blockchain Governance? Complete Beginner’s Guide. Blockonomi. https://blockonomi.com/blockchain-governance/

DAOfest. (n.d.). https://daofest.io

DAOstack. (n.d.). https://daostack.org/

Davidson, S., De Filippi, P., & Potts, J. (2016). Economics of Blockchain. https://doi.org/10.2139/ssrn.2744751

De Filippi, P., & Wright, A. (2018). Blockchain and the law: The rule of code. Harvard University Press.

DevCon Archive: The annual conference for all Ethereum developers, researchers, thinkers, and makers. (n.d.). https://archive.devcon.org/

EDCON: Community Ethereum Development Conference. (n.d.). https://edcon.io/

EthCC: Ethereum Community Conference. (n.d.). https://ethcc.io/

Finck, M. (2019). Blockchain regulation and governance in Europe. Cambridge University Press. https://doi.org/10.1017/9781108609708

Frankenfield, J. (2018). On-Chain Governance. Investopedia. https://web.archive.org/web/20200224034437/https://www.investopedia.com/terms/o/onchain-governance.asp

Hufty, M. (2011). Governance: Exploring Four Approaches and Their Relevance to Research. In U. Wiesmann (Ed.), Research for Sustainable Development: Foundations, Experiences, and Perspectives (pp. 165–183). Geographica Bernensia. http://nccr-north-south.ch/Upload/8_Hufty.pdf

ITSM Library. (2008). IT Service Management Global Best Practices (Vol. 1). Van Haren Publishing.

Karjalainen, R. (2020). Governance in Decentralized Networks. https://doi.org/10.2139/ssrn.3551099

Kraakman, R., Armour, J., Davies, P., Enriques, L., Hansmann, H., Hertig, G., Hopt, K., Kanda, H., Pargendler, M., Ringe, W.-G., & Rock, E. (2017). The Anatomy of Corporate Law: A Comparative and Functional Approach. Oxford University Press. https://doi.org/10.1093/acprof:oso/9780198739630.001.0001

Micheli, M. (2020). Emerging models of data governance in the age of datafication. Big Data & Society, 7(2). https://doi.org/10.1177/2053951720948087

Morell, M. F. (2014). Governance of Online Creation Communities for the Building of Digital Commons: Viewed through the Framework of Institutional Analysis and Development. In B. M. Frischmann, M. J. Madison, & K. J. Strandburg (Eds.), Governing Knowledge Commons (pp. 281–312). Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199972036.003.0009

Norbäck, P.-J., & Persson, L. (2009). The Organization of the Innovation Industry: Entrepreneurs, Venture Capitalists, and Oligopolists. Journal of the European Economic Association, 7(6), 1261–1290. https://doi.org/10.1162/JEEA.2009.7.6.1261

Ostrom, E. (1990). Governing the Commons: The Evolution of Institutions for Collective Action. Cambridge University Press. https://books.google.co.uk/books?id=4xg6oUobMz4C

Pelt, R., Jansen, S., Baars, D., & Overbeek, S. (2020). Defining Blockchain Governance: A Framework for Analysis and Comparison. Information Systems Management, 38(1), 21–41. https://doi.org/10.1080/10580530.2020.1720046

Petrowski, J. (2020, June). Polkadot Governance. Polkadot. https://polkadot.network/polkadot-governance/

Pollman, E. (2019). Startup Governance. University of Pennsylvania Law Review, 168(1), 155. https://scholarship.law.upenn.edu/faculty_scholarship/2135/

Rajarshi, M. (2020). What is Blockchain Governance: Ultimate Beginner’s Guide. Blockgeeks. https://blockgeeks.com/guides/what-is-blockchain-governance-ultimate-beginners-guide/

Rea, A., Kronovet, D., Fischer, A., & du Rose, J. (2020). Colony [White Paper]. https://colony.io/whitepaper.pdf

Reijers, W., O’Brolcháin, F., & Haynes, P. (2016). Governance in Blockchain Technologies & Social Contract Theories. Ledger, 1, 134–151. https://doi.org/10.5195/ledger.2016.62

Reijers, W., Wuisman, I., Mannan, M., De Filippi, P., Wray, C., Rae-Looi, V., Vélez, A. C., & Orgad, L. (2018). Now the code runs itself: On-chain and off-chain governance of blockchain technologies. Topoi. https://doi.org/10.1007/s11245-018-9626-5

Rozas, D., Tenorio-Fornés, A., Díaz-Molina, S., & Hassan, S. (2018). When Ostrom Meets Blockchain: Exploring the Potentials of Blockchain for Commons Governance. https://eprints.ucm.es/id/eprint/59643/1/SSRN-id3272329.pdf

Singh, N. (2020, August 19). Blockchain Governance Principles: Everything You Need To Know. 101 Blockchains. https://101blockchains.com/blockchain-governance/

van Bon, J., de Jong, A., & Pieper, M. (Eds.). (2008). IT service management global best practices (Vol. 1). Van Haren.

Yermack, D. (2017). Corporate Governance and Blockchains. Review of Finance, 21(1), 7–31. https://doi.org/10.1093/rof/rfw074

Footnotes

1. This term itself is not well defined. Thus hard fork may refer to a network split where different actors in the network follow different rules, whether due to an update that was not universally installed or due to a software flaw; but it is also used to describe a successful network upgrade that could have led to a split but did not.

Decentralized Autonomous Organization

This article belongs to the Glossary of decentralised technosocial systems, a special section of Internet Policy Review.

Definition

A DAO is a blockchain-based system that enables people to coordinate and govern themselves mediated by a set of self-executing rules deployed on a public blockchain, and whose governance is decentralised (i.e., independent from central control).

Origin and evolution of the term

Organisation theory has abundant literature on decentralised organisations of several kinds (Shubik, 1962; Beckhard, 1966; Freeland & Baker, 1975). Yet, the first references to an actual Decentralized Autonomous Organization (DAO) only emerged in the 1990s, to describe multi-agent systems in an internet-of-things (IoT) environment (Dilger, 1997) or nonviolent decentralised action in the counter-globalisation social movement (Schneider, 2014).

However, the modern meaning of DAOs can be traced back to the earlier concept of a Decentralized Autonomous Corporation (DAC), coined a few years after the appearance of Bitcoin (Nakamoto, 2008). The DAC concept was used mostly informally in online forums and chats by early cryptocurrency enthusiasts, who used both “decentralized” and “distributed” autonomous corporations interchangeably. It was only in 2013 that the term became more widely adopted and publicly discussed on a variety of websites (S. Larimer, 2013; D. Larimer, 2013), in particular by the co-founder of Bitcoin Magazine Vitalik Buterin 1 (Buterin, 2013b).

DACs were described as a new corporate governance form, using tokenised tradable shares as a means of providing dividends to shareholders. Such corporations were described as “incorruptible”, running “without any human involvement” and with “publicly auditable” bylaws as “open source software distributed across the computers of their stakeholders” (S. Larimer, 2013). According to this definition, anyone could become a stakeholder in a DAC by simply “buying stock in the company or being paid in that stock to provide services for the company”. As a result, the owners of a DAC stock would be entitled to “a share of its profits, participation in its growth, and/or a say in how it is run” (ibid.). Such a definition reflects the maximalist view of many blockchain advocates, who consider that “DACs don’t need regulation” because “you don’t want to regulate them, and happily you can’t” (ibid.).
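As a rough, hedged illustration of the share-and-dividend mechanics described above, the toy Python sketch below records tradable token balances and pays profits out pro rata to holders. Identifiers and figures (ToyDAC, the 60/40 split, the 1000 profit) are invented for the example and do not describe any actual DAC.

# A toy model of tokenised, tradable shares with pro-rata dividends.
class ToyDAC:
    def __init__(self, initial_shares):
        # e.g. {"alice": 60, "bob": 40}
        self.shares = dict(initial_shares)

    def transfer(self, sender, recipient, amount):
        # shares are freely tradable between stakeholders
        assert self.shares.get(sender, 0) >= amount
        self.shares[sender] -= amount
        self.shares[recipient] = self.shares.get(recipient, 0) + amount

    def pay_dividend(self, profit):
        # each holder receives a share of profits proportional to their stake
        total = sum(self.shares.values())
        return {holder: profit * held / total for holder, held in self.shares.items()}

# Hypothetical usage:
dac = ToyDAC({"alice": 60, "bob": 40})
dac.transfer("alice", "carol", 10)
print(dac.pay_dividend(1000))   # {'alice': 500.0, 'bob': 400.0, 'carol': 100.0}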

The term was inherently linked to corporate governance and therefore was too restrictive for many blockchain-based applications with a more general purpose. Thus, several alternatives to the term appeared, leading to the emergence of decentralized applications (dapps) (Johnston, 2013), and later to the generalisation of DAOs as a replacement for DACs (Buterin, 2014).

While some argue that Bitcoin is effectively the first DAO (Buterin, 2014; Hsieh et al., 2018), the term is today understood as referring not to a blockchain network in and of itself, but rather to organisations deployed as smart contracts on top of an existing blockchain network. Although there had been several earlier attempts at instantiating a DAO (Tufnell, 2014), the first DAO to attract widespread attention was “TheDAO” (DuPont, 2017), a venture capital fund launched in 2016 on the Ethereum blockchain. Despite the short life of the experiment 2, TheDAO has inspired a variety of new DAOs (e.g., MolochDAO, MetaCartel), including several platforms aimed at facilitating DAO deployment with a DAO-as-a-service model, such as Aragon, DAOstack, Colony or DAOhaus.

The DAO concept has enabled other derived terms: the term Decentralized Collaborative Organization (DCO) typically refers to a DAO with strengthened collaborative aspects (Hall, 2015; Schiener, 2015; Davidson, De Filippi, & Potts, 2018); a more elaborate concept derived from those attempts is the Distributed Cooperative Organization (DisCO), which highlights its co-op and democratic nature (Troncoso & Utratel, 2019).

Definitions in the field

There are multiple coexisting definitions of DAOs in use within the industry. The most relevant are the following:

  • Buterin, in the Ethereum white paper (Buterin, 2013a, p. 23), defines a DAO as a “virtual entity that has a certain set of members or shareholders which [...] have the right to spend the entity's funds and modify its code”. That is, the aim is to replicate “the legal trappings of a traditional company or nonprofit but using only cryptographic blockchain technology for enforcement” (ibid.).
  • Some of the most popular DAO platforms, such as DAOstack and Aragon, define a DAO similarly as “a network of stakeholders with no central governing body” (https://daostack.io), “which is regulated by a set of automatically enforceable rules on a public blockchain” (https://aragon.org/dao). Conversely, other DAO platforms have opted for a different terminology as a proxy for a DAO, such as the “colonies” of Colony (https://colony.io) or DAOhaus’ “magic internet communities” (http://daohaus.club).

In the academic literature on DAOs, although some works avoid picking a definition (Norta et al., 2015) or refer to industry definitions (DiRose & Mansouri, 2018), multiple attempts have been made at providing a specific definition of DAOs. Most of these definitions include the following distinctive characteristics:

  • DAOs enable people to coordinate and govern themselves online. 3 Although no mention is made as to the minimum size of the group, the term “organization” is generally understood to refer to an entity comprising multiple people acting towards a common goal 4, rather than a legally registered organisation.
  • A DAO's source code is deployed on a blockchain with smart contract capabilities, such as Ethereum—arguably always a public 5 blockchain.
  • A DAO’s smart contract code specifies the rules for interaction among people 6—although it is unclear to what extent there may be other governance mechanisms that can affect or overrule such code. 7
  • Since these rules are defined using smart contracts, they are self-executed independently of the will of the parties. 8
  • A DAO's governance should remain independent from central control: 9 e.g., some definitions specifically refer to self-governed (De Filippi & Hassan, 2018), self-organising (Singh & Kim, 2019), or peer-to-peer and democratic control (Hsieh et al., 2018).
  • Since they rely on a blockchain, DAOs inherit some of its properties, such as transparency, cryptographic security, and decentralisation. 10

Current open discussions

While the academic literature on DAOs is still fairly limited, there is a significant number of papers from the field of computer science focusing on blockchain technology as a technical platform for building new blockchain-based business models, such as decentralised exchanges (Lin et al., 2019; Bansal et al., 2019) or market-based platforms such as prediction markets (Clark et al., 2014) that operate as decentralised organisations with automated governance (Jentzsch, 2016; Singh & Kim, 2019). Yet, a DAO can be deployed to fulfil many different types of functions. A DAO can, for example, be used to create a virtual entity that operates as a crowd-funding platform, a ride-sharing platform, a fully automated company, or a fully automated decision-making apparatus. It is therefore important to understand that a DAO is not a particular type of business model or a particular type of organisation, but a concept that can be used to refer to a wide variety of things.
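As one hedged illustration of this breadth of functions, the Python sketch below expresses crowd-funding-style rules of the sort a DAO's smart contracts could encode: contributions are pooled, released to a beneficiary only if a funding goal is met by a deadline, and otherwise refundable. The names, parameters and the rule itself are assumptions made for the example, not a description of any existing DAO or platform.

# A toy model of crowd-funding rules enforced mechanically.
class ToyCrowdfund:
    def __init__(self, beneficiary, goal, deadline):
        self.beneficiary = beneficiary
        self.goal = goal
        self.deadline = deadline          # e.g. a block height or a timestamp
        self.contributions = {}

    def contribute(self, backer, amount, now):
        # contributions are only accepted before the deadline
        assert now <= self.deadline
        self.contributions[backer] = self.contributions.get(backer, 0) + amount

    def finalise(self, now):
        # the outcome follows from the encoded rule, not from a discretionary
        # decision by the beneficiary or by any single backer
        assert now > self.deadline
        raised = sum(self.contributions.values())
        if raised >= self.goal:
            return ("payout", self.beneficiary, raised)
        return ("refund", dict(self.contributions))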

In terms of governance, diverse scholars have recently started investigating the opportunities of blockchain technology and smart contracts to experiment with open and distributed governance structures (Leonhard, 2017; Rozas et al., 2018; Hsieh et al., 2018; Jones, 2019), along with the challenges and limitations of doing so (Garrod, 2016; DuPont, 2017; Scott et al., 2017; Chohan, 2017; Verstraete, 2018; Minn, 2019; Hütten, 2019). There is also an emerging body of literature from the fields of economic and legal theory concerning DAOs. While most of these works focus on the new opportunities of decentralised blockchain-based organisations in the realm of economics and governance (Davidson et al., 2016, 2018; Sims, 2019; Rikken et al., 2019; Kaal, 2020), others focus on the legal issues of DAOs from either a theoretical (De Filippi & Wright, 2018; Reijers et al., 2018) or practical perspective (Rodrigues, 2018; Werbach, 2018; Riva, 2019).

The political discourse around DAOs is more pronounced, at least in the context of many existing blockchain communities (Scott, 2015; Swartz, 2017; DuPont, 2019). Various authors have pointed out that DAOs could be used to further economic and political decentralisation in ways that may enable a more democratic and participatory form of governance (Swan, 2015; Atzori, 2015; Allen et al., 2017; Tapscott & Tapscott, 2017). However, as the limitations of blockchain-based governance came to light, especially in the aftermath of the aforementioned TheDAO hack (DuPont, 2017; Reijers et al., 2018; Mehar et al., 2019), the public discourse around DAOs has shifted from describing DAOs as a technical solution to a governance problem (Jentzsch, 2016; Voshmgir, 2017) to a discussion on how DAOs could change the nature of economic and political governance in general (Davidson et al., 2016; Beck et al., 2018; Zwitter & Hazenberg, 2020; De Filippi et al., 2020).

The use of the term “decentralized autonomous organization” or DAO is now fairly established in the blockchain space, yet there are still many misconceptions and unresolved issues in the discussion around the term.

(1) First of all, with regard to the “decentralization” aspect of a DAO, it is unclear whether decentralisation only needs to be established on the infrastructural layer (i.e. at the level of the underlying blockchain-based network) or whether it also needs to be implemented at the governance level (i.e. the DAO should not be controlled by any centralised actor or group of actors).

(2) Second, it is unclear whether a DAO must be fully autonomous and fully automated (i.e., the DAO should operate without any human intervention whatsoever), or whether the concept of “autonomy” should be interpreted in a weaker sense (i.e., while the DAO, as an organisation, may require the participation of its members, its governance should not be dependent on the whims of a small group of actors).

(3) Third, there are some debates as to when the community of actors interacting with a smart contract can be regarded as an actual “organization” (independently of any legal recognition). For instance, it is unclear whether the mere act of transacting with a smart contract qualifies as an organisational activity, or whether a stronger degree of involvement is necessary, such as having a governance model or collective interactions amongst participants.

The latter has triggered important discussions in the blockchain and legal fields as to whether a DAO could be considered an entity separate from the humans who operate it (i.e., as a legal person), or whether it can only be considered such once it is recognised as a legal person by the law. Yet, the common understanding today is that the “autonomous” nature of a DAO is incompatible with the notion of legal personhood, as legal personhood can only be established if there are one or more identified actors responsible for the actions of a particular entity. The discussion on whether a DAO should be recognised as a legal person has important implications in the legal field, as it determines the extent to which a DAO can be considered a legal entity separate from its human actors, and therefore the extent to which these actors can be shielded from the liabilities of the DAO.

References

Allen, D. W., Berg, C., Lane, A. M., & Potts, J. (2017). The economics of crypto-democracy. 26th International Joint Conference on Artificial Intelligence, Melbourne. https://doi.org/10.2139/ssrn.2973050

Atzori, M. (2015). Blockchain technology and decentralized governance: Is the state still necessary? https://doi.org/10.2139/ssrn.2709713

Bansal, G., Hasija, V., Chamola, V., Kumar, N., & Guizani, M. (2019, December). Smart Stock Exchange Market: A Secure Predictive Decentralized Model. 2019 IEEE Global Communications Conference (GLOBECOM). https://doi.org/10.1109/GLOBECOM38437.2019.9013787

Beck, R. (2018). Beyond bitcoin: The rise of blockchain world. Computer, 51(2), 54–58. https://doi.org/10.1109/MC.2018.1451660

Beck, R., Müller-Bloch, C., & King, J. L. (2018). Governance in the blockchain economy: A framework and research agenda. Journal of the Association for Information Systems, 19(10). https://aisel.aisnet.org/jais/vol19/iss10/1

Beckhard, R. (1966). An Organization Improvement Program in a Decentralized Organization. The Journal of Applied Behavioral Science, 2(1), 3–25. https://doi.org/10.1177/002188636600200102

Buterin, V. (2013a). Ethereum whitepaper: A next-generation smart contract and decentralized application platform [White Paper]. https://blockchainlab.com/pdf/Ethereum_white_paper-a_next_generation_smart_contract_and_decentralized_application_platform-vitalik-buterin.pdf

Buterin, V. (2013b, September 13). Bootstrapping A Decentralized Autonomous Corporation: Part I. Bitcoin Magazine. https://bitcoinmagazine.com/articles/bootstrapping-a-decentralized-autonomous-corporation-part-i-1379644274

Buterin, V. (2014, May 6). DAOs, DACs, DAs and More: An Incomplete Terminology Guide [Blog post]. Ethereum Foundation Blog. https://blog.ethereum.org/2014/05/06/daos-dacs-das-and-more-an-incomplete-terminology-guide/

Chohan, U. (2017). The Decentralized Autonomous Organization and Governance Issues (Notes on the 21st Century) [Discussion Paper]. University of New South Wales. https://doi.org/10.2139/ssrn.3082055

Clark, J., Bonneau, J., Felten, E. W., Kroll, J. A., Miller, A., & Narayanan, A. (2014, June). On decentralizing prediction markets and order books. 13th Annual Workshop on the Economics of Information Security, Pennsylvania State University. https://econinfosec.org/archive/weis2014/papers/Clark-WEIS2014.pdf

Davidson, S., De Filippi, P., & Potts, J. (2016a). Disrupting governance: The new institutional economics of distributed ledger technology. https://dx.doi.org/10.2139/ssrn.2811995

Davidson, S., De Filippi, P., & Potts, J. (2016b). Economics of Blockchain. https://doi.org/10.2139/ssrn.2744751

Davidson, S., De Filippi, P., & Potts, J. (2018). Blockchains and the economic institutions of capitalism. Journal of Institutional Economics, 14(4), 639–658. https://doi.org/10.1017/S1744137417000200

De Filippi, P., & Hassan, S. (2016). Blockchain technology as a regulatory technology: From code is law to law is code. First Monday, 21(12). https://doi.org/10.5210/fm.v21i12.7113

De Filippi, P., Mannan, M., & Reijers, W. (2020). Blockchain as a confidence machine: The problem of trust & challenges of governance. Technology in Society, 62. https://doi.org/10.1016/j.techsoc.2020.101284

De Filippi, P., & Wright, A. (2018). Blockchain and the law: The rule of code. Harvard University Press.

Dilger, W. (1997). Decentralized autonomous organization of the intelligent home according to the principle of the immune system. 1997 IEEE International Conference on Systems, Man, and Cybernetics. Computational Cybernetics and Simulation, 351–356. https://doi.org/10.1109/ICSMC.1997.625775

DiRose, S., & Mansouri, M. (2018). Comparison and analysis of governance mechanisms employed by blockchain-based distributed autonomous organizations. 2018 13th Annual Conference on System of Systems Engineering (SoSE), 195–202. https://doi.org/10.1109/SYSOSE.2018.8428782

DuPont, Q. (2018). Experiments in algorithmic governance: A history and ethnography of “The DAO,” a failed decentralized autonomous organization. In M. Campbell-Verduyn (Ed.), Bitcoin and Beyond: Cryptocurrencies, Blockchains, and Global Governance (pp. 157–177). Routledge. https://doi.org/10.4324/9781315211909-8

DuPont, Q. (2019). Cryptocurrencies and blockchains. John Wiley & Sons.

El Faqir, Y., Arroyo, J., & Hassan, S. (2020). An overview of decentralized autonomous organizations on the blockchain. Proceedings of the 16th International Symposium on Open Collaboration, 1–8. https://doi.org/10.1145/3412569.3412579

Franklin, S., & Graesser, A. (1996). Is it an Agent, or just a Program?: A Taxonomy for Autonomous Agents. In International Workshop on Agent Theories, Architectures, and Languages (pp. 21–35). Springer. https://doi.org/10.1007/BFb0013570

Freeland, J. R., & Baker, N. R. (1975). Goal partitioning in a hierarchical organization. Omega, 3(6), 673–688. https://doi.org/10.1016/0305-0483(75)90070-5

Garrod, J. Z. (2016). The real world of the decentralized autonomous society. TripleC: Communication, Capitalism & Critique, 14(1), 62–77. https://doi.org/10.31269/triplec.v14i1.692

Hall, J. (2015). The Future of Organization, Deep Code [Blog post]. Deep Code Medium. https://medium.com/deep-code/the-future-of-organization-b26219e5fc95

Hsieh, Y. Y., Vergne, J. P., Anderson, P., Lakhani, K., & Reitzig, M. (2018). Bitcoin and the rise of decentralized autonomous organizations. Journal of Organization Design, 7(1), 1–16. https://doi.org/10.1186/s41469-018-0038-1

Hütten, M. (2019). The soft spot of hard code: Blockchain technology, network governance and pitfalls of technological utopianism. Global Networks, 19(3), 329–348. https://doi.org/10.1111/glob.12217

Jentzsch, C. (2016). Decentralized autonomous organization to automate governance [White Paper].

Johnston, D. (2013). The General Theory of Decentralized Applications, Dapps. David Johnston CEO. https://github.com/DavidJohnstonCEO/DecentralizedApplications

Jones, K. (2019). Blockchain in or as governance? Evolutions in experimentation, social impacts, and prefigurative practice in the blockchain and DAO space. Information Polity, 24(4), 469–486. https://doi.org/10.3233/IP-190157

Kaal, W. A. (2020). Decentralized Corporate Governance via Blockchain Technology. Annals of Corporate Governance, 5(2), 101–147. https://doi.org/10.1561/109.00000025

Larimer, D. (2013a). Bitcoin and the Three Laws of Robotics. [Blog post]. Let’s Talk Bitcoin. https://letstalkbitcoin.com/blog/post/bitcoin-and-the-three-laws-of-robotics

Larimer, D. (2013b). DAC Revisited [Blog post]. Let’s Talk Bitcoin. https://letstalkbitcoin.com/blog/post/dac-revisited

Leonhard, R. (2017). Corporate Governance on Ethereum’s Blockchain. https://dx.doi.org/10.2139/ssrn.2977522

Lin, L. X., Budish, E., Cong, L. W., He, Z., Bergquist, J. H., Panesir, M. S., & Wu, H. (2019). Deconstructing Decentralized Exchanges. Stanford Journal of Blockchain Law & Policy.

Mehar, M. I., Shier, C. L., Giambattista, A., Gong, E., Fletcher, G., Sanayhie, R., & Laskowski, M. (2019). Understanding a revolutionary and flawed grand experiment in blockchain: The DAO attack. Journal of Cases on Information Technology (JCIT), 21(1), 19–32. https://doi.org/10.4018/JCIT.2019010102

Minn, K. T. (2019). Towards Enhanced Oversight of “Self-Governing” Decentralized Autonomous Organizations: Case Study of the DAO and Its Shortcomings. NYU Journal of Intellectual Property & Entertainment Law, 9, 139.

Nakamoto, S. (2008). Bitcoin: A peer-to-peer electronic cash system.

Norta, A., Othman, A. B., & Taveter, K. (2015). Conflict-resolution lifecycles for governed decentralized autonomous organization collaboration. Proceedings of the 2015 2nd International Conference on Electronic Governance and Open Society: Challenges in Eurasia, 244–257. https://doi.org/10.1145/2846012.2846052

Reijers, W., Wuisman, I., Mannan, M., De Filippi, P., Wray, C., Rae-Looi, V., Vélez, A. C., & Orgad, L. (2018). Now the code runs itself: On-chain and off-chain governance of blockchain technologies. Topoi. https://doi.org/10.1007/s11245-018-9626-5

Rikken, O., Janssen, M., & Kwee, Z. (2019). Governance challenges of blockchain and decentralized autonomous organizations. Information Polity, Preprint, 1–21. https://doi.org/10.3233/IP-190154

Riva, S. (2019). Decentralized Autonomous Organizations (DAOs) as Subjects of Law–the Recognition of DAOs in the Swiss Legal Order [Master’s Thesis].

Rodrigues, U. R. (2018). Law and the Blockchain. Iowa Law Review, 104, 679.

Rozas, D., Tenorio-Fornés, A., Díaz-Molina, S., & Hassan, S. (2018). When Ostrom Meets Blockchain: Exploring the Potentials of Blockchain for Commons Governance. https://eprints.ucm.es/id/eprint/59643/1/SSRN-id3272329.pdf

Schiener, D. (2015). Reposium: The future of Wikipedia as a DCO. Medium. https://medium.com/@DomSchiener/reposium-dco-the-future-of-wikipedia-4be080cfa027

Schneider, N. (2014). Are You Ready to Trust a Decentralized Autonomous Organization? Shareable. https://www.shareable.net/are-you-ready-to-trust-a-decentralized-autonomous-organization/

Scott, B. (2015). Visions of a techno-leviathan: The politics of the bitcoin blockchain.

Scott, B., Loonam, J., & Kumar, V. (2017). Exploring the rise of blockchain technology: Towards distributed collaborative organizations. Strategic Change, 26(5), 423–428. https://doi.org/10.1002/jsc.2142

Shubik, M. (1962). Incentives, Decentralized Control, the Assignment of Joint Costs and Internal Pricing. Management Science, 8(3), 325–343. https://doi.org/10.1287/mnsc.8.3.325

Sims, A. (2019). Blockchain and Decentralised Autonomous Organisations (DAOs): The Evolution of Companies? https://dx.doi.org/10.2139/ssrn.3524674

Singh, M., & Kim, S. (2019). Blockchain technology for decentralized autonomous organizations. In Advances in Computers (Vol. 115, pp. 115–140). Elsevier. https://doi.org/10.1016/bs.adcom.2019.06.001

Swartz, L. (2017). Blockchain dreams: Imagining techno-economic alternatives after Bitcoin. In M. Castells (Ed.), Another economy is possible: Culture and economy in a time of crisis. Polity Press.

Tapscott, D., & Tapscott, A. (2017). How blockchain will change organizations. MIT Sloan Management Review, 58(2), 10.

Troncoso, S., & Utratel, A. M. (2019). If I Only Had a Heart: A DisCO Manifesto. DisCO. https://disco.coop/manifesto/

Tufnell, N. (2014). Bitcloud wants to replace the internet. Wired UK. https://www.wired.co.uk/article/bitcloud

Verstraete, M. (2018). The Stakes of Smart Contracts. Loyola University Chicago Law Journal, 50.

Voshmgir, S. (2017). Disrupting governance with blockchains and smart contracts. Strategic Change, 26(5), 499–509. https://doi.org/10.1002/jsc.2150

Werbach, K. (2018). Trust, but verify: Why the blockchain needs the law. Berkeley Technology Law Journal, 33, 487. https://doi.org/10.15779/Z38H41JM9N

Zwitter, A., & Hazenberg, J. (2020). Decentralized Network Governance: Blockchain Technology and the Future of Regulation. Frontiers in Blockchain, 3, 12. https://doi.org/10.3389/fbloc.2020.00012

Footnotes

1. Vitalik Buterin would later co-found the Ethereum platform in 2014.

2. This open-source project attracted 11,000 investors and roughly USD 150 million; the funds were managed by the deployed code, theoretically safe from managerial corruption. However, a bug in its code introduced vulnerabilities that an attacker exploited to drain around USD 50 million, which prompted a fork of the Ethereum blockchain to restore the funds.

3. See e.g. Singh and Kim (2019, p. 119), who describe a DAO as “a novel scalable, self-organizing coordination on the blockchain, controlled by smart contracts”.

4. See e.g. El Faqir, Arroyo, and Hassan (2020, p. 2) according to whom a DAO is made up of “people with common goals that join under a blockchain infrastructure that enforces a set of shared rules”.

5. See e.g. Hsieh et al. (2018, p. 2) claiming that a DAO should be deployed on a “public network”.

6. See e.g. De Filippi and Hassan (2018, p. 12), describing a DAO as a “self-governed organization controlled only and exclusively by an incorruptible set of rules, implemented under the form of a smart contract”.

7. See e.g. the definition by Singh and Kim (2019, p. 119) of a DAO as “an organization whose essential operations are automated agreeing to rules and principles assigned in code without human involvement”. However, this definition is called into question by Reijers, Wuisman, Mannan, De Filippi and colleagues (Reijers et al., 2018), who distinguish between “on-chain” and “off-chain” governance in the governance structure of DAOs.

8. See also De Filippi & Wright (2018, p. 146), according to whom a DAO “represents the most advanced state of automation, where a blockchain-based organization is run not by humans or group consensus, but rather entirely by smart contracts, algorithms, and deterministic code”.

9. See e.g. Hsieh et al. (2018, p. 2) describing DAOs as “non-hierarchical organizations that perform and record routine tasks on a peer-to-peer, cryptographically secure, public network, and rely on the voluntary contributions of their internal stakeholders to operate, manage, and evolve the organization through a democratic consultation process”.

10. “A decentralized, transparent, and secure system for operation and governance among independent participants” which “can run autonomously” (Beck, 2018, p. 57).
