In a 2017 report on the ethical matters raised by algorithms, the French Data Protection Authority (CNIL, “Commission Nationale de l’Informatique et des Libertés”) stated: “The algorithm without data is blind. Data without algorithms is dumb”.1 The discussion that follows shows just how true this assertion is.
In judicial processes, the main threat posed by automation is the application of faulty or biased algorithms to litigation. Discriminatory data can become encoded in AI-driven programmes and ultimately foster new forms of inequality: algorithms trained on such data tend to reproduce the bias they contain, which can then reinforce discrimination. This runs counter to the very objective of algorithmic assistance in decision-making, which is to reach fairer and more objective decisions.
To give an example, a risk usually associated with the proliferation of automated systems is the increased threat of mass surveillance, with potentially catastrophic consequences for privacy and autonomy. The typical illustration of such drawbacks is predictive policing, which is already a reality in some countries such as the United States of America. Predictive policing technology uses algorithms to predict where and when crimes are most likely to occur, in order to guide law enforcement; on the other side of the Atlantic, for instance, PredPol uses a machine-learning algorithm to calculate such predictions. Predicted patterns in burglary risk, for example, might encourage police units to patrol a certain neighbourhood more intensively.2 Similarly, if most individuals convicted of robbery in a given jurisdiction are non-white, an algorithm used to assist criminal trials will tend to associate black and minority defendants with robbery more frequently than white defendants, which in turn confirms the biased statistical trend, and so on. This chain of biases layered on statistical trends is dangerous for impartiality and, more generally, for justice.
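To make this feedback loop concrete, the following is a minimal, purely illustrative sketch in Python. The districts, figures and allocation rule are invented for the example and do not describe PredPol or any real deployment; the point is only to show how patrol allocation driven by past records keeps reproducing an initial skew even when the underlying behaviour of two districts is identical.

```python
# Purely illustrative simulation of the feedback loop described above.
# Two districts have the SAME underlying offence rate, but district A starts
# with more recorded incidents only because it was patrolled more heavily.
# Each round, patrols are allocated in proportion to past records, and more
# patrolling means more offences are recorded.

TRUE_OFFENCES_PER_ROUND = 100   # identical real offences in each district (invented figure)
recorded = {"A": 60, "B": 40}   # historical records, already skewed by past patrols

for round_number in range(1, 6):
    total_records = sum(recorded.values())
    # Patrol share mirrors the share of past recorded incidents.
    patrol_share = {d: recorded[d] / total_records for d in recorded}
    for district in recorded:
        # Detection scales with patrol presence: more patrols, more records.
        recorded[district] += TRUE_OFFENCES_PER_ROUND * patrol_share[district]
    shares = {d: round(100 * recorded[d] / sum(recorded.values())) for d in recorded}
    print(f"round {round_number}: recorded share per district = {shares}")
```

Each round the recorded shares come out at 60% and 40% again: the system “confirms” its own historical bias and never rediscovers the equal ground truth.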
In the UK, the South Wales Police (SWP) used a predictive profiling tool based on automated facial recognition. With regard to this tool, the Court of Appeal of England and Wales found that the SWP had failed to consider adequately whether its trial could have a discriminatory impact, and had not taken reasonable steps to establish whether the facial recognition software contained biases related to race or sex. According to the Court, “all police forces that intend to use [the software] in the future would wish to satisfy themselves that everything reasonable which could be done had been done in order to make sure that the software used does not have a racial or gender bias” (R (on the application of Bridges) v Chief Constable of South Wales Police [2020] EWCA Civ 1058, §201).
This underlying issue, observed in the context of predictive justice, could lead to tragic consequences and major imbalances in the application of the law. It also poses severe threats to fundamental liberties such as the principle of non-discrimination, equality before the law, the right to private life, and the right to a fair trial or due process. In English law, the Equality Act 2010 is the main piece of legislation protecting individuals against discrimination, and it should serve as a legal safeguard against biased algorithmic outcomes. In European countries more generally, including France, the European Convention on Human Rights is the paramount text protecting the fundamental rights and freedoms of citizens.
In principle, mistakes in the data on which AI systems are based should be avoided, so as to render the system as reliable as possible and to avoid discriminatory effects. Hence, the input data of algorithmic systems should be correct. However, taking true facts as a basis for future predictions can itself amount to a biased approach. For instance, it may be true that few women hold high-level positions in particular companies; taking this as a sign of women’s inability to reach such positions in the future is nonetheless a biased conclusion.3
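The same point can be shown with a deliberately simplified, hypothetical sketch, again in Python and with invented figures. The historical records below are assumed to be factually accurate, yet a naive predictor built on them projects past under-representation forward as though it measured merit.

```python
# Hypothetical illustration: the input data are factually accurate, yet a naive
# predictor trained on them treats past under-representation as if it measured
# future ability. All figures are invented.

# Accurate historical records: (candidate_gender, reached_senior_role)
history = [("M", True)] * 40 + [("M", False)] * 60 \
        + [("F", True)] * 5 + [("F", False)] * 95

def past_success_rate(records, gender):
    """Share of past candidates of the given gender who reached a senior role."""
    outcomes = [promoted for g, promoted in records if g == gender]
    return sum(outcomes) / len(outcomes)

def predicted_success(gender):
    """A naive 'predictor' that scores candidates by their group's past rate."""
    return past_success_rate(history, gender)

print(f"predicted success, male candidate:   {predicted_success('M'):.0%}")   # 40%
print(f"predicted success, female candidate: {predicted_success('F'):.0%}")   # 5%

# Both figures faithfully reflect the historical record, but ranking future
# candidates on this basis turns the effects of past discrimination into a
# prediction about future capability.
```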
Thus, in some instances the truthfulness of the input data is not a sufficient safeguard, and accurate but skewed data should be disregarded to prevent discriminatory outcomes. How are the French and English systems responding to these issues, and what tools can be adopted to prevent the risks linked to automation?
1 CNIL, ‘How can humans keep the upper hand? The ethical matters raised by algorithms and artificial intelligence’, December 2017, page 18
2 Janneke Gerards, Raphaële Xenidis, ‘Algorithmic discrimination in Europe: Challenges and opportunities for gender equality and non-discrimination law’ (2021), page 40
3 CNIL, ‘How can humans keep the upper hand? The ethical matters raised by algorithms and artificial intelligence’, December 2017, page 40