Big Tech, data and the public good: trade-offs and harms

The pandemic has exposed the multitude of dangers in trusting private entities with public data, demonstrating the need for carefully thought out regulation, argue Jenna Harb and Kate Henne from the Justice and Technoscience (JusTech) Lab at the ANU School of Regulation and Global Governance.


2020 has been no ordinary year. Coping with crises, including bushfires and COVID-19, individuals and institutions have had to adapt quickly and under immense pressure. Digital technologies have featured prominently in responses. Among them are apps like COVIDSafe, which is purported to assist with contact tracing, epidemiological modelling to support better understanding of patterns of transmission, and digital conferencing platforms to enable remote working.

These methods of managing the massive disruptions to daily life have generated huge amounts of data. Many leaders have framed the collection and analysis of data as necessary for effective decision-making in these times of crisis, both now and moving forward. In doing so, these proposals invite new modes of population governance, increased surveillance, and growing reliance on digital platforms—trends that have enhanced technology firms’ market share during the pandemic.

Data is a key consideration within bigger questions of governance. As others explain, we are facing a series of critical junctures. As we come to recognise the limitations of contemporary ideas and approaches to governance, many “data-driven” solutions seem like promising alternatives. But what potential trade-offs, unintended consequences, and harms do they present? What relationships are required, or eroded, when leveraging data for problem-solving, particularly as they emerge against a backdrop of inequalities being exacerbated by the pandemic?

Conversations for Robust Responses

To consider these questions, the ANU School of Regulation and Global Governance hosted a panel of experts who have studied disruptive technologies, internet governance, and surveillance. Their insights helped to identify important challenges regarding the use of data for governance and the governance of data. Their reflections considered who becomes involved in decision-making about data, as well as how ideas – such as data justice and data sovereignty – might support fairness and accountability in ways that the language of privacy and rights alone cannot.


The literature on critical junctures offers important reminders that new pathways are often informed by past trajectories. Earlier trends in which private actors took on responsibilities once associated with governmental duties have paved the way for the involvement of Big Tech and other data analytic companies, such as Palantir, in service delivery and provision. 

As Natasha Tusikov noted, these firms rarely have direct expertise in the areas they are contracted for—such as health. Instead, they often leverage their expertise in data collection and analysis as reason enough for their involvement. In doing so, their legitimation strategies can negate or undermine the insights of topic-specific experts.

These concerns become compounded when considering the patterns of regulating Big Tech, which all panellists raised as an important issue for further scrutiny. The tendency to rely on self-regulation may give the appearance of meaningful reform and enforcement, but it can also be understood as a strategy meant to pre-empt stronger interventions. 

Risk and the Challenges of Regulating Data

As scholars of regulation have long advised, effective regulatory approaches require accounting for a wide range of actors and tools, with self-regulation only one consideration among many. Without these other measures, companies with clear profit motives can go largely unchecked. Thus, their provision of digital services often poses risks disproportionate to the problems they are meant to solve.

Given the many dangers of improper management of personal data—especially for marginalized populations who have more on the line—trusting Big Tech’s assurances that they are enforcing safeguards can be problematic. Deficient data protection and security practices have been found in numerous digitized social assistance programs.

Data has been stored in ways that leave it vulnerable to compromise and exploitation by malicious parties. Because in-need populations, such as refugees and the impoverished, must divulge their data in order to access essential goods and services, these conditions instantiate the exceptional powers of technology companies.

Because technology firm executives are less accountable to legislation and policy, they can deploy experimental technologies before those technologies have been fully vetted, tested and understood. In essence, this makes welfare populations test subjects facing potentially high-stakes consequences. The combination of these conditions reinforces a datafied crisis response during critical junctures, which are ripe for coercion.

Data, therefore, is a relational phenomenon: data is an exchange or an extractive relationship, and understandings of data are based on relations with other people and institutions, as well as relations with broader power dynamics informed by capitalism, gender, racism, and nationalism.


Charting a Path Ahead

Many of the concerns addressed during the panel went beyond “technological solutionism”, the common practice of treating complex social issues as if they were problems that can be solved through technical fixes. Such practices sidestep deeper structural inequalities that are inextricably linked to systems of ableism, colonialism, economic disparities, heteronormativity and racism.

Proponents of design justice have argued that there are always trade-offs in the design and use of digital platforms and tools. The use of data inevitably carries risks, so decisions must be made about how to distribute these risks and who becomes privileged in such decision-making. It is our responsibility to ask, and be explicit about, who is being left out and who is benefiting, rather than sweeping these essential questions under the rug.

As recent developments in Australia and the United States attest, there is an appetite for stronger interventions. The next steps are not simply to call for stronger regulation or to uncritically replicate models used in other domains.

Julia Powles offers important words of caution when considering regulatory alternatives: it is essential to think carefully about the design and implementation of checks and balances, as even seemingly more radical proposals like data cooperatives and trusts may not prevent more powerful actors from gaining influence. The challenge, then, is to interrogate and identify what forms of regulation can be employed in the service of counteracting larger – often structural – power asymmetries.