INTRODUCTION
This article stems from the sessions held in the conference series on the relationship between law and science fiction entitled "Human Rights from Black Mirror". This series, sponsored by the University of San Pablo-Tucumán (Argentina) and the University of Medellín (Colombia), proposed an international dialogue between professors and students from different universities in both countries, in order to give an account of the current legal problems present in different episodes of the popular Netflix science fiction series.
Specifically, in the second session of this seminar, the authors spoke about the profiling of people, Big Data and the new identities generated by a society guided by algorithms. Those themes gave rise to this reflection on the relationship between the use of such technologies and crime prevention, which led us to the study of the literary and film classic Minority Report, an essential work for the study of criminal law in fiction.
We will begin this article by giving a brief overview of Minority Report and its dystopian nature, highlighting the main legal-criminal problems raised by the film. Subsequently, we propose to define the existing relations between law, ethics, and the machines to which we are already handing over -or proposing to hand over- part of the administration of justice: Artificial Intelligence (AI). Then, we will consider whether the instruments used in Minority Report to prevent crime can be understood as AI. We will continue by dwelling on the ethical-political implications of the use of a partially deterministic crime prevention system for law in general, and for criminal law theory in particular. Finally, from a Foucauldian perspective, we will analyse the ways in which power is produced and reproduced through the hypothetical shaping of an administration of justice guided by algorithmic procedures.
1. THE DYSTOPIA/UTOPIA OF MINORITY REPORT
Minority Report is a science fiction film directed by Steven Spielberg in 2002. It is based on the short story of the same name by Philip K. Dick, published in 1956¹. Set in the year 2054, it stars Captain John Anderton, head of the Washington D.C. PreCrime force. This police organisation is in charge of stopping crimes before they happen, thanks to the precognitive visions of three mutant siblings with certain mental disorders. These are called "PreCogs" (Precognitives).
The PreCogs, beyond their superhuman powers, seem to have a severe mental disability that does not allow them to communicate normally or to describe their visions themselves. They are seen as a sort of idiot savants² (Velo Dalbrenta, 2019). To make them useful to the PreCrime cause, the mutants are connected to a machine that can translate and visualise their visions. In this way, the PreCrime division arrests future perpetrators of crime, sentencing them to the penalty they would have deserved if they had actually committed the crime.
In a few words, there is a crime without a crime. As Dick (2002) expressed through Anderton, the commission of crime becomes absolute metaphysics. On this issue, Gazzolo (2019) explains that it is not that there is an absence of crime for the system, but rather that the fact does not coincide with its verification in reality. The action takes place in a future time that will never be present, thanks to the intervention of PreCrime. A person is thus blamed for committing a crime in a timeline in which it will not occur (thanks to the arrest allowed by the premonition).
We can now see why Spielberg's film has aroused so much debate, both among philosophers and jurists. On the one hand, the classic debate between determinism and free will is present: were the actions of these future criminals inevitable and already determined to happen, or did various circumstances in their lives lead them to those decisions, leaving them the power to decide otherwise? (Quian Quiroga, 2018).
As far as legal theory is concerned, we are facing the rupture of two central foundations of law. If we accept determinism, this calls into question the idea that only acts committed by subjects with full individual freedom can be judged. Even more palpably, and in the criminal field itself, it is a matter of judging acts that have not been committed but are likely to be committed. Consequently, those criminals have done nothing reprehensible to the legal system and have broken no law, yet they will necessarily become offenders in their future. The PreCrime system aims to circumvent the problem of human will, accepting as a necessity the certain probability that a person will commit the crime; provided, of course, that the PreCogs sentence him or her to that destiny (Velo Dalbrenta, 2019).
It is no wonder, then, that Minority Report is often labelled a dystopia. The very thought that one can be guilty without committing any criminal offence exposes us to a global fear of our own future. Likewise, it induces us to distrust ourselves, since we may be determined to be the "bad people" in the eyes of positivised moral norms.
But from another point of view, as far as administrators, officials and crime prevention agencies are concerned, such a system can certainly be seen as utopian. In the absence of corruption among the "machinists", technicians and interpreters of the PreCogs (corruption which, in the film, ultimately does occur), the precognitive system would allow for a world without murder, robbery and rape. People who have lost loved ones or suffered trauma through these reprehensible actions could cease to fear violence from other humans. The fear, however, would become focused on who controls the system: the state and the surveillance society.
With this brief overview, we realise that the premise of PreCrime is neither black nor white. In the field of crime prevention, it relates mainly to the ethical aspects of handing over to a machine the definition of how justice should act. That is why we take this movie as a concrete point of analysis to address central topics of the contemporary interaction between science and philosophy of law, thinking about cinema as a tool for philosophical study (Falzón, 2005).
2. LAW, ETHICS AND AI
Artificial Intelligence is often a difficult concept to define. To be precise, in this paper we will understand it as the ability of machines to make decisions, just as a human would, but through the use of algorithms and the processing of data.
We find it equally important to differentiate AI as a science from AI as knowledge engineering. Mira Mira (2008) explains that understood as a science, AI has an analytical task that encompasses a set of facts associated with neurology and cognition, which generates a computable theory of human knowledge; while understood as engineering, we would be dealing with an applied branch that has the mission of using what is studied in the analytical branch to create formal models of cognitive processes and thus program the systems and machines that will use this technology. This distinction will be useful to show how ethics plays an important role, both in the theorisation of the discipline and in its technical application.
As is clear, AI attempts to replicate human intelligence artificially and, in a way, to improve it. The philosophical debate is no stranger to this situation. In this field, the greatest attention is given to ethics, a branch of philosophy that focuses on the study of human behaviour to disentangle the morally right from the wrong (Zavadivker, 2011).
Certainly, this branch has been devoted to the study of humans. Until a few decades ago, no one would have thought of the possibility of an artificial being that could develop behaviours that could be judged and circumscribed within an ethical framework. This is because we need autonomous beings with the capacity to discern and decide, in order to apply judgements to their behaviour.
This capacity usually brings with it a correlative responsibility, since a free being must be accountable for the acts it commits. So far, we do not see an entity outside humanity that meets all the necessary qualities, but AI seems to at least raise the possibility of coming close.
AI gives rise to a new subject to be studied in ethics, and this is a fascinating disciplinary change. Of course, it is still speculative, but it is getting closer. We already have tools that, although they need human help to function, presuppose a necessary minimum of autonomy to fulfil their tasks.
To actually work, they require patterns of behaviour that conform to moral decisions. For a simple example, consider a car assembly robot arm, as sketched below. It has the simple task of assembling a part of the vehicle, but if it works alongside human labourers, it may have sensors attached that detect whether they are in its vicinity, so as not to endanger them. The arm "decides" to stop working if it sees a human being getting in its way, because it was programmed to do so. Of course, it was not the AI that decided this, but its creator, the engineer, the code.
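A minimal sketch of that safety rule, with a simulated sensor, might look as follows. All names here (RobotArm, ProximitySensor) are hypothetical, and the point is precisely that the "moral decision" to stop is a rule encoded in advance by the programmer:

```python
import random

class ProximitySensor:
    def human_detected(self) -> bool:
        # Simulated reading; a real sensor would query hardware here.
        return random.random() < 0.1

class RobotArm:
    def step_assembly(self) -> None:
        print("assembling...")

    def halt(self) -> None:
        print("human nearby: halting")

def control_loop(arm: RobotArm, sensor: ProximitySensor, steps: int) -> None:
    for _ in range(steps):
        if sensor.human_detected():
            arm.halt()  # the "decision" was made in advance, in the code
        else:
            arm.step_assembly()

control_loop(RobotArm(), ProximitySensor(), steps=10)
```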
This picture often repeats itself. The more complex the machine's task becomes, the more autonomy it needs to perform its function better and to reduce the workload of its human creator. For this, AI requires new tools that bring its actions closer to morally acceptable behaviour, not only to perform tasks of greater complexity, but also to make the machine's actions more predictable.
The big debate is whether the acts performed by these machines should be the responsibility of their creators or operators or whether, when the time comes, the machine itself can be blamed. What should be clear is that, even if automatons are never held responsible, this does not preclude the need for discernment between "good" and "bad" decisions to be present in AI programming.
Although we share the idea that law is an autonomous technical tool for regulating human conduct, it is impossible to deny its relationship with morality. This is not because there are common orders of social control, but rather because the content of law is formed by ethical-political decisions that have their origin in the moral order. So, when a society decides that stealing or killing is forbidden, this is an ethical-political decision, just as when it stipulates taxing a product or exempting others. All of this is in response to a public morality that the law tries to echo in its regulations.
AI puts legislators in an uncomfortable place, as their regulatory projects tend to move more slowly than the technology develops. As Martínez García (2018) argues, AI adds to and modifies natural intelligence itself, transforming the mindset of the latter. New technological breakthroughs end up changing our very existence and the way we see the world.
Leaving aside the debate on the machine as a subject of law, we are left with the idea of using this tool as a judge, or at least as an assistant in the administration of justice. The temptation is very strong, since by configuring ethical-normative values and numerically defining actions, a utilitarian³ calculation could be made that allows for a more objective justice in the search for the good of the majority (Salvi, 2020).
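To make the idea concrete, a minimal sketch of such a calculation might look as follows; the actions and numeric utilities are, of course, invented for illustration, and real ethical-normative values could hardly be reduced to three integers:

```python
# Toy utilitarian calculus: each action is scored with invented utilities
# for each affected party, and the machine picks the aggregate maximum.
actions = {
    "grant_release": [+3, +1, -2],  # utilities: offender, family, victim
    "deny_release": [-3, -1, +2],
}

def aggregate_utility(utilities: list[int]) -> int:
    # Classical utilitarian rule: sum welfare across everyone affected.
    return sum(utilities)

best = max(actions, key=lambda a: aggregate_utility(actions[a]))
print(best)  # the action that "objectively" maximises the good of the majority
```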
But before delving into this matter, it is necessary to define whether PreCogs can really be understood as an AI.
3. ARE PRECOGS AN AI?
"The talent absorbs everything; the esp-lobe shrivels the balance of the frontal area. But what do we care? We get their prophecies. They pass on what we need. They don't understand any of it, but we do". John Anderton (Dick, 2002).
It is not made very clear in the film how the PreCrime machine actually works. Clearly, this is not a work of hard science fiction⁴, nor does it try to be one; moreover, when the original story was written, computer technology had not yet reached the levels that so overwhelm us today.
However, from what we see in the film, the three children seem to have some kind of mental disorder that causes them to have visions of future crimes while they dream. These visions do not appear clearly, and the PreCogs themselves cannot express them. Moreover, it gives the impression that only the older sister can communicate in a comprehensible way, while the younger twins lack this ability, being categorised, as we explained, as having some degree of disability or severe autism.
Hence, we understand that, although they have a kind of supernatural power, their precognition is useless without the machine to which they are connected. The complex device to which they are linked induces sleep, records, and translates the children’s dream images. So, we are talking about something that can be categorised as biotechnology.
Strictly speaking, it would not be an AI, since its central engine is a strange and unexplained natural intelligence of the PreCogs. But in practice, without the artificial element, this intelligence would not be able to communicate or transmit itself to other people. Therefore, while the mechanical artefact may be categorised as a mere translating instrument, it is also what gives materiality and telos to the power of precognition. It is the computer that processes and presents the data necessary to ultimately know the future and prevent crimes.
We conclude then that PreCogs, although full of inexplicable mysticism, can be understood from many points of view as an AI or mixed AI. Moreover, we agree with Stănilă (2020) when she states that the scenarios of debate that Minority Report opens up for criminology are sustained if one replaces PreCogs with contemporary AIs.
4. DETERMINISM AND LEGAL THEORY
One issue that generates a lot of friction is the proposal to hand over the administration of justice to machines. Clearly the main incentive for this is the quest for greater efficiency and speed in the processes, as well as to guarantee greater impartiality by trying to eliminate the human factor.
Systems such as Prometea in Argentina (Corvalán, 2018) or COMPAS in the USA (Miró Llinares, 2018) are tools increasingly accepted by judges. These instruments are seen as improving the efficiency of the state bureaucracy.
The COMPAS system (Correctional Offender Management Profiling for Alternative Sanctions) is the most famous tool in this field. It is software used by the US justice system to calculate the risk of a person reoffending, based on various data and questionnaires given to the offender. Judges then use these predictions to decide whether or not to grant release.
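Since COMPAS's actual model is proprietary, the following is only a sketch of the general shape of an actuarial risk score -a weighted sum of record and questionnaire features passed through a logistic function- with feature names and weights invented for illustration:

```python
import math

# Invented weights over invented features: illustrative only, not COMPAS.
WEIGHTS = {
    "prior_arrests": 0.30,
    "age_at_first_offense": -0.05,
    "failed_appearances": 0.40,
}
BIAS = -1.5

def risk_score(features: dict[str, float]) -> float:
    """Return a pseudo-probability of reoffending between 0 and 1."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1 / (1 + math.exp(-z))  # logistic function

offender = {"prior_arrests": 4, "age_at_first_offense": 19, "failed_appearances": 1}
print(f"risk = {risk_score(offender):.2f}")  # a judge would see a risk band
```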
The question becomes more difficult when considering the possibility of replacing judges and other justice operators with AI. The biggest critics argue that it would be impossible for "robot judges" to solve "difficult cases", where the solution cannot be reached simply by subsuming a fact under a norm.
Leaving aside the possibility that it would never be accepted, let us suppose that it is possible. This being so, one must think about what ethical standards should be set in this hypothetical AI. One might think that it should be legislated in a way that favours its efficacy. The rules should, perhaps, take valid forms in systems of deontic logic⁵, so that decisions have greater predictability. The evidence system should move to a digital language, as should the entire judicial process, abandoning habits to which the new "robot judge" cannot adapt.
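To illustrate only -this is a sketch in standard deontic logic, not a proposal for an actual rule language- where O, P and F stand for "obligatory", "permitted" and "forbidden", a criminal prohibition could be given a valid form such as:

```latex
% Standard deontic logic (SDL) equivalences:
F(p) \equiv O(\neg p)        % "p is forbidden" means "not-p is obligatory"
P(p) \equiv \neg O(\neg p)   % "p is permitted" means "not-p is not obligatory"
% Hence a rule such as "killing is forbidden" would be encoded as:
F(\mathit{kill}) \equiv O(\neg \mathit{kill})
```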
It is still difficult to envisage such a scenario, although its use in minor cases is already being considered, with Estonia and China being exemplary cases (Cancio Fernández, 2020; Cárdenas Krenz, 2021). Moreover, "preventive justice" tools have already existed in criminal law for decades, such as the use of pre-trial detention or the legislation of criminal offences based on the risk of harm -and not on the production of certain harm- (Böhm, 2016).
Criminal law is often the branch of law that gets the most public attention. It is no coincidence that the vast majority of normative systems have had at their regulatory centre the aim of preventing us from killing each other. Law -as a normative system and a tool of social control- is no exception.
Throughout its history, society organised in different political models has tried to prevent crime. Criminal law has been a tool of deterrence, marking the sanctions that would be received by those who violated the most important legal goods for the community.
But AI may provide new opportunities to make the preventive task more effective. The problem is that the proposed measures may change the way the law works and leave privacy in a difficult position. In this regard, government security systems around the world already use cameras, drones, satellites, facial recognition and so many other technologies in addition to traditional policing. These tools are often the protagonists of various science fiction dystopias, and they already exist.
To begin with, if all patterns of behaviour considered to be detrimental or illegal were somehow configured, profiles could be created that would allow potential criminals to be caught in advance. Not to mention a Minority Report scenario in which, by detecting the crime early, irreversible consequences for the victims are avoided.
Here we see a criminal law that focuses on the perpetrator, and that condemns people who have not committed a reprehensible action in the real world. The key issue is how to develop the algorithm that the machine will use to define the potential criminal. As Miró Llinares (2018) states, the potential for algorithmic discrimination in the equitable application of criminal justice is a serious risk. Tools such as the profiling of individuals, collectives or even urban areas are already used to identify anti-government messages, to distribute police patrols, or to select where to place speed cameras. The danger is that, lacking a global understanding and the characteristic emotionality of a human, the machine that should be more impartial will end up being more biased than the mind of its own creator.
This already happens, for example, with the COMPAS system discussed above. The patterns it uses are questioned, as it is often accused of clear discrimination in how it measures the possibility of recidivism. As Quian Quiroga (2018) points out, what is worrying is not only the number of obvious errors in this algorithm, but also that a detailed analysis of 7,000 cases showed that the tool tends to assign higher risk scores to people of colour than to white people.
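A minimal sketch of the kind of audit behind such findings -comparing false positive rates across groups- might look like this, with the case records invented for illustration:

```python
from collections import defaultdict

# Invented records: (group, predicted_high_risk, actually_reoffended)
cases = [
    ("A", True, False), ("A", True, False), ("A", False, False), ("A", True, True),
    ("B", True, False), ("B", False, False), ("B", False, False), ("B", False, True),
]

false_pos = defaultdict(int)       # labelled high risk, but did not reoffend
non_reoffenders = defaultdict(int)

for group, predicted, reoffended in cases:
    if not reoffended:
        non_reoffenders[group] += 1
        if predicted:
            false_pos[group] += 1

for group in sorted(non_reoffenders):
    rate = false_pos[group] / non_reoffenders[group]
    print(f"group {group}: false positive rate = {rate:.2f}")
# Unequal rates (here 0.67 vs 0.33) are the signature of the bias described above.
```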
If an even stronger system were to be developed -one that could detect future offenders before the wrongdoing takes place- the ethical framework applied would be of the utmost relevance to the future of the social order. Unfortunately, it is virtually impossible to imagine a scenario in which those who fall outside the canons decided in the configuration are not victims of the system. The only advantage, of course, would be avoiding crimes and saving victims, at the cost of unseen repression and the victimisation of groups stigmatised by the algorithm.
More distantly, perhaps very speculatively, the fear of this all-powerful algorithmic machine rebelling is latent. Elbert (2000) addresses this question in more detail. If algorithmic robots, no longer so much physical or android but rather in the form of computers like HAL 9000⁶, and no longer dedicated to the protection of a crew but to the protection of the whole society, were to rebel and, instead of serving us, oppress us, it would be truly terrifying. Hypothetical situations can be envisaged, such as the one seen in Alex Proyas's 2004 film I, Robot, in which, in order to protect humanity, the supercomputer V.I.K.I. (Virtual Interactive Kinetic Intelligence) concludes that it is better to curtail the freedom of human beings because of the self-destructive tendencies of our species.
No less important, from a theoretical point of view, the idea of judging only external acts performed in freedom is left behind. The whole theory and philosophical conception of contemporary criminal law, influenced by Kantian ideas, would become merely a machine for the categorical application of a social morality positivised by law and algorithms. There would remain room only for collective or state freedom, with criminal law having only the task of choosing the punishable offences, leaving aside the discussions of conduct realised in the physical present that have so characterised the theory of crime. This change may be more than welcomed by a large sector of the doctrine -and by the continental criminal systems themselves- that continues to uphold the fiction of an ostensible aprioristic certainty of the law, which clearly cannot exist in the current circumstances (Velo Dalbrenta, 2013).
These kinds of insights lead us to what Andronico (2021) calls the "predictive justice" of the digital world. This type of justice, based on the use of data and on machines that collect and analyse it, can predict the law to be applied and the judicial outcome of a crime. Predictive justice is justice without process, justice that anticipates litigation. And it is a justice that does not look to the future, but thinks only in the present, the only thing that can be calculated by the machine.
Given the possibility of absolute foreseeability of the crime, only the elements of definition, unlawfulness and accountability would matter for this new theory of crime. The action would no longer exist; it would be metaphysical.
In today’s reality, we cannot be sure of the future, which creates chaos and insecurity, so it is normal to opt for the existence of free will. But if we could be sure of a high percentage of the things people will do, wouldn’t that change our conception of how we act? And how we punish and prevent? (Quian Quiroga, 2018).
Another classic view of criminal law theory is modified here. If people are destined to be guilty, it means that they were not free to choose to be innocent. Human beings would then be nothing more than chess pieces in the universe. Wouldn't the new criminals in fact be victims, punished by destiny to be the social deviants?
5. FROM THE PANOPTICON TO THE ALGORITHMIC REPUBLIC
As we have seen, the possibility of introducing into our legal system an algorithm that identifies potential criminals is a weighty issue that affects all branches of public power. Consequently, it seems pertinent to study, under the premises of Michel Foucault (1995), the ways in which power is constituted and reproduced, and the reasons why our system could mutate into a clearly authoritarian criminal law.
The effectiveness of the algorithm will depend on two elements: firstly, information -or rather data- on the person who might commit the crime; secondly, data on subjects who have already committed different crimes. Within this mass of loaded information, similarities and differences of all kinds will be sought, generating relationships and causalities, which the algorithm itself will use to produce results. Briefly, if we know what characteristics or relationships a series of criminals have in common, we will look for the same in other citizens.
The algorithm works on the basis of human characteristics. When we talk about people, the relevant information is, for example, their ethnicity, economic situation, age, criminal record, or perhaps the area in which they live or used to live (Kleinberg et al., 2018; Završnik, 2021). The human being is fragmented into discrete data points that form the logical sequences from which the algorithm later produces its results.
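A sketch of this fragmentation, with fields and thresholds invented for illustration, might reduce a person to a feature vector matched against a pattern learned from past offenders; note how a characteristic of place, not of action, raises the score:

```python
from dataclasses import dataclass

@dataclass
class CitizenProfile:
    age: int
    prior_records: int
    neighbourhood_crime_rate: float  # offences per 1,000 residents

def offender_pattern_match(p: CitizenProfile) -> float:
    """Toy score: how closely a profile matches a pattern learned from offenders."""
    score = 0.0
    score += 0.5 if p.prior_records > 0 else 0.0
    score += 0.3 if p.neighbourhood_crime_rate > 50 else 0.0  # where one lives
    score += 0.2 if p.age < 25 else 0.0
    return score  # a real system would learn such weights from historical data

citizen = CitizenProfile(age=22, prior_records=1, neighbourhood_crime_rate=61.0)
print(f"match = {offender_pattern_match(citizen):.1f}")  # 1.0: flagged
```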
In this way, the legal accusation will be directed at a subject defined by the regular actions he or she carries out or by his or her particular characteristics. The specific act of the felony is only indirectly relevant to the law. Killing is morally reprehensible, and people who kill are prosecuted by the law. But if we catch them before they do it, then we are not prosecuting people who commit murder, but people who potentially could. The real reproach becomes directed at the characteristics or acts that generate that potentiality.
The preventive criminal justice model leaves aside the axiological idea of justice in favour of a disciplinary-normalising pedagogy. To understand this better, we will use the Foucauldian concepts of “disciplinary power” and “normalisation”.
Foucault understands that, at least since the 18th century, various mechanisms of power began to populate modern Europe. This is disciplinary power, which is not an exclusive instrument of sovereign states, but flows throughout society in various intersubjective relations. This power is applied to bodies, and aims to classify, control, catalogue, give meaning to and punish them. This disciplining of people is normalisation.
The application of this power is material and escapes the abstractions of modern states’ laws. It is not possible to comprehend discourses of disciplinary order without understanding contextually situated practices of power (Aguilar, 2020). Therefore, we see discipline in action in institutions such as psychiatric hospitals, factories, schools, and, clearly, in prisons. These spaces do not seek to exclude, but rather to fix individuals. That is, to normalise them. Psychiatric hospitals indicate the problem and the cure of the insane, the factory ties the worker to a system of production, and the school connects the pupil to an apparatus of knowledge transmission (Foucault, 1996).
With the creation of the prison, citizens began to be educated not to commit acts that were considered crimes: not to steal, defraud, rape, or kill. With a preventive criminal justice system, however, the citizen must be careful not to act abnormally. Thus, the legal system of the algorithm functions as a disciplinary power over people. Fear induces them to act as expected, to be normal. To be normal is to be the opposite of the algorithm's description of those who are now criminals.
Foucault (1995, p. 199) explains that the disciplinary society establishes a binary order: normal-abnormal, harmless-dangerous, innocent-guilty; and at the same time performs a coercive assignment: who they are, where they should be, and so on. And what better way to discipline people than to let them know that everything is known about them, and that anyone who is not normal in their ways necessarily falls into the net of the algorithm as a systemic flaw that must be resolved. In Minority Report, committing a premeditated murder was pure metaphysics; there was no way one could pull it off. In the algorithmic community, the danger of abnormality leads us to act within the expected parameters. "Hence the major effect of the Panopticon: to induce in the inmate a state of conscious and permanent visibility that assures the automatic functioning of power" (Foucault, 1995, p. 201). By "Panopticon", Foucault refers to the prison architecture formulated by Jeremy Bentham (1995) in the 18th century, which consisted of a circular structure with a central tower housing a guard who could, in theory, observe all prisoners, creating the sense that they were constantly being watched. It is not only an architectural proposal, but also a philosophical principle of surveillance, control, and domination (Valencia Grajales & Marin Galeano, 2017). Foucault (1995, p. 205) sees this cruel construction as an ideal representation of the functioning of power in society. We will return to this point when we discuss surveillance.
The film is not blind to this criterion of socially imposed discipline. It makes clear that to evade the system one must walk in the shadows, losing even one's eyes and any trace of one's material being in order to do so. Similarly, in the society of the algorithm, sacrifices will have to be made to circumvent the system and manage to live in freedom as an abnormal person.
In relation to these normal subjects, Foucault speaks of fabricated individuals⁷. An effective algorithm is unthinkable in a community of heterogeneous individuals. The community's homogeneity is achieved by imbuing it with disciplinary-normalising power. This power is productive: its aim is to control behaviour and to form the individuals most useful to society, for which it is necessary to make use of data collection and storage and, obviously, of hierarchical surveillance.
Unlike the old methods of data collection and documentation that required a third party to produce data (such as interrogation or torture), in our time data is produced autonomously by citizens themselves. Today we are users in every sense: users of social networks, websites and other digital applications. We are the ones who upload our location, tastes, images of our private life, conversations, friendships, and all kinds of information to the database every day (Camacho et al., 2021; Kreso et al., 2021). Nowadays, it is enough to stop for longer than usual in front of a shop with GPS activated to receive advertisements for that shop's products (Lagua et al., 2021). This automatic, self-imposed uploading by subjects of their own information casts a veil of invisibility over the way this power works: nobody owns the information, nobody demands it from me, and therefore it would seem that nobody else could use it.
The central concept then emerges: surveillance. Foucault explains that "Surveillance thus becomes a decisive economic operator both as an internal part of the production machinery and as a specific mechanism in the disciplinary power" (1995, p. 175). The perfect disciplinary device, the Panopticon, would make it possible to see everything immediately, says the author. It sets up an ideal architecture in which one knows one is seen at all times and may be suspicious of anyone. This surveillance is no longer vertical; its intelligence lies in the fact that every individual can observe the other. The very massiveness of data and eyes is the only protection for the subject: even if the anthill is being observed, finding the ant is not an easy task. And we are not referring to the productive or educational space, but to the social order in any environment.
Well, the advantage of power having a horizontal order and distribution is the effect it has on the individuals themselves. As our peers watch over us, and as we ourselves provide them with the information to do so, we enter into a game of self-discipline in which police or military activities, or the king's gaze, are no longer necessary. Far more effective is the eye of the neighbour, the finger of the colleague or our own conscience. In this way we unify the private and the public plane, and by showing our constant activity, we have no way out other than to behave like a normal and expected citizen.
This state of digital panopticism has its causes. The film under analysis perfectly represents the premise necessary to get there: the government puts the special PreCrime police power into operation in the wake of the countless deaths in the state of Virginia. Today, too, any means could be used to justify surveillance in our world: for example, the insecurity of the streets or the number of infections of some new virus. The Panopticon is only a model that power uses for its own reproduction.
Foucault explains how this system was used particularly with a view to production within the framework of the capitalist system⁸. And when we speak of production, we refer especially to the human part. The Panopticon serves to monitor and document how workers work, when and how they rest, with whom they socialise, whether they respect the rules, and whether they use the tools properly. In the same way, it educates pupils within the school system, creates prisoners within the prison system, and produces parents and children within family households.
The algorithm comes to occupy the place of the watchman in the Panopticon. Its presence functions as a coercive power over society, a power that tries to produce a type of citizen. It is not a tool for solving social problems: if crimes are committed out of a need for food, the algorithm will be more the farmer's scythe than the farmer's grain. And if the algorithm condemns the poor, foreign or coloured population of the country, it will certainly be because the engineer seeks to do so. This is because whoever sets up the algorithm understands that the conditions of the system lead those minorities to commit more crimes; or because only those actions mainly committed by the abnormal minorities qualify as crimes. If today we have a class society, a class criminal justice system, and a class legal framework, it is to be expected that we will also manufacture an algorithm that responds to these parameters: a classist algorithm, much more selfish than utilitarian.
The algorithm, like the Panopticon, reveals the intentions of power, or the intentions towards power. This system may well be the solution to insecurity and death. The algorithm could be managed transparently by demo-liberal governments and give birth to "algorithmic republics", which base their decisions on objective data known to the members of the political and social community: something like a scaffolding of algorithms genuinely understood as a public thing. However, unless the forms of production and reproduction of power are revised, it is clear that the cost will be high for those who do not adapt to social morality, and the algorithm will function as one more tool of disciplinary power in an unequal structure.
CONCLUSIONS
Undoubtedly, the issues raised in Minority Report are more alive than ever. To give a current example, during the SARS-CoV-2 pandemic we saw the use of facial recognition, tracking and behavioural profiling techniques to try to stop the spread of the Coronavirus (Van Natta et al., 2020; Lin & Hou, 2020). Analogously to what is discussed in this article, attempts are being made to generate elements to prevent unwanted behaviour, sometimes by punishing and limiting individual freedom before judging those who actually transmit the virus and violate the criminal laws that deal with such behaviour.
Crime prevention is nothing more than trying to avoid what is considered morally undesirable. Minority Report puts us in the position of having the opportunity to do so without suffering the consequences of crime. But at the same time, it immerses us in the idea that we are not really free, or at least that, like a falling ball, if someone doesn’t catch us, we are sure to hit the ground. If we don’t stop the murderer, the murder will happen; if we don’t stop the thief, the theft will happen. If this is so, and we have the tools to foresee and prevent it, why not do it?
It is impossible to imagine that prevention agencies would not use these tools once they can reach them. Leaving aside the magical element of precognition, it is not difficult to imagine them using tools to profile the behaviour of individuals, assign a number, and use it to define who is or is not dangerous to society⁹. These technologies inevitably lead to a surveillance society and a mega-Panopticon, which attacks all the kinds of privacy and intimacy that constitutions claim to defend (Echeverría, 2019). Not least, an enormous surveillance weapon is placed in the hands of the state, a political institution that holds a monopoly on violence as a constitutive, essential, and sustaining element of its authority (Ibarra, 2017). This artefact of power is guided by preventive justice within criminal law, developing a utilitarianism of probabilities that allows a de facto social determinism in the name of accomplishing order and social welfare.
It should be borne in mind that the application of these algorithms is not intrinsically evil. These technologies cannot be blamed for stigma or violence within law, justice, or governments. It is also necessary to keep in mind that the debate does not end in the theory of criminal law, and that mentioning the enormous risks arising from its misapplication is only the first step in thinking about the great benefits it could bring to the administration of justice in the future. Fear should never close the doors to new technologies, especially when it is known that sooner or later these doors will give way under their own weight. The judicial systems carry an enormous social debt due to the slowness of decisions, the enormous public expenditure, the political and media pressure on decisions, and a lack of transparency. We believe it is the task of researchers to be, perhaps, seers of dystopias when they describe the range of possibilities opened up by the application of AI to justice, but only with the aim that state officials, in their mission to build a more efficient judicial system, prescribe a less violent future. In other words, thinking the worst in order to reach the best.
If the governments choose not to limit the use of these instruments, the debate will be about how the profiling algorithm will be implemented and by whom. Transparency must be the guiding principle so that at least, if the dogma that only free and material acts are judged disappears, legal certainty and equality before the law can survive in the hypothetical “algorithmic republics” of the future. This would require a criminal justice system that breaks away from its traditional hermeticism, characterised by a reluctance to be observed, to produce data on its work and to be held accountable for its performance (Kostewein, 2021).
For a general principle of transparency to be effective, not only must the basic scientific knowledge and technical characteristics of the tools discussed here be disseminated; the degree of public understanding of the relationship between science and politics must also be emphatically discussed (Collins & Pinch, 1993). Without an understanding of how technologies are used, and of the fact that science itself is surrounded by non-epistemic values in its practice, transparency will be empty and merely formal. If a path of ignorance of the social implications is chosen, leaving the engineering work to a few, "data dictatorship" will be a better nickname for the governments to come.