The opacity of AI systems and its challenges for democratic legitimacy in public decision-making

Citizenship
Democracy
Political Theory
Decision Making
Ethics
Normative Theory
Technology
Big Data
Maria Carolina Jimenez
University of Geneva

Abstract

This paper focuses on AI-driven decision-making systems deployed by governments to manage their resources and to engage with citizens in the provision of public services (e.g. welfare benefits, health care, policing and the administration of justice). The use of AI systems by public administrations is often advertised as a cost-cutting tool and as an instrument to combat traditional institutional dysfunctions such as inefficiency, under-staffing, corruption and human bias. While AI offers considerable potential for progress, an emerging body of literature highlights the challenges that AI-driven decision-making may raise for public sector ethics. A common trait of these challenges is that each involves some form of “epistemological opacity” that undermines the capacity of humans to explain and justify decisions based on AI systems, to detect errors or unfairness, and to adopt corrective measures. As a result, public officers and citizens may take the outcomes of AI systems at face value, basing their actions and deliberations (wholly or in part) on information that cannot be scrutinized or, where necessary, corrected. This paper contributes to an emerging but still underdeveloped strand of normative political theory that studies how AI-driven decision-making is reshaping the conceptualization and assessment of interactions between citizens and public institutions. The overall goal of the paper is to analyze how four sources of “epistemological opacity” affecting AI systems (algorithmic, legal, illiteracy, discursive) may undermine the democratic legitimacy of public decisions that are based on such systems or that seek to regulate their use. Broadly speaking, legitimacy is the property that grounds the exercise of political authority, where authority standardly means the right to rule (Ceva 2013). In this paper, democratic legitimacy is understood as a distinctive form of political authority grounded in the recognition of citizens as joint legislators. The paper offers a conception of democratic legitimacy conditional on the capacity of decision-making procedures and outcomes to realize the principle of public equality, which requires citizens’ control over public decision-making as well as respect for their equal status as political decision-makers. Specifically, the paper argues that the “epistemological opacity” affecting AI-driven decision-making systems brings about a mistreatment of citizens by undermining the conditions of possibility of a cognitive environment conducive to democratic deliberation and decision-making. The main conjecture is that these different sources of “epistemological opacity” are causing the disengagement of citizens and public officers from public decision-making, either because they directly undermine necessary conditions for democratic legitimacy (co-authorship, accountability, publicity), or because they hide from the public eye instances of illegitimate automation and privatization of decisional power. Based on a conceptualization of AI systems as socio-technical artifacts, the paper offers a taxonomy of the sources of epistemological opacity affecting them, as well as a normative conception of democratic legitimacy, both of which may contribute to efforts in various fields (e.g. “AI fairness”, “explainable AI”, “e-government”) to better adapt technological tools to the equality requirements distinctive of public decision-making in democratic societies.