AI, Hume and a guillotine: The dangers of machine-learning loops


Anyone who has ever engaged with a lawyer will have noticed how binary lawland is. Lawyers like to divide the basic facts of life into dichotomies: lawful/unlawful, just/unjust, fair/unfair, proportional/disproportional, and so on.

A less well-known distinction in legal science is that between the descriptive and the normative. An illustration helps.

Descriptive statement: John was walking his dog without a leash in the Boboli gardens. John’s dog bit a child.

Normative statement: No dog should be walked in the Boboli gardens without a leash.

The first statement describes something that happened. John had a dog, and he took it for a walk without a leash in the Boboli gardens. The dog bit a child.

The second statement is different. While the first statement is an is – what John did with his dog – the second statement is an ought – what John should or should not do with his dog.

The crucial difference

Why does this matter? Legal and moral philosophers call our attention to the gap between descriptive and normative statements. To put it concretely: can I say, “dogs should be forbidden in the Boboli gardens because they can bite children”? Philosophers suggest that although it seems absolutely logical to say yes, we are skipping a few steps by doing so.

What drives us to intuitively accept the normative inference is not actually the account of a child being bitten by a dog. To see why, take law enforcement: we train police dogs to bite criminals. We accept dogs biting people as a good thing in some situations.

In reality, a hidden value choice lies behind the idea that we ought to forbid dogs in the Boboli gardens. We believe it is better (moral/normative) to forbid dogs in the Boboli gardens because a dog bit a child. But in reality, we are thinking that it is better (moral/normative) to forbid dogs in the Boboli gardens because we cherish human physical integrity (especially that of children).

Scottish philosopher David Hume described this conflation of the is and the ought (aka the naturalistic fallacy) and made the point that the two must be severed; this has become known as Hume’s guillotine. Formally, Hume’s guillotine states that if one has access only to descriptive statements of a given reality (is statements), then it is not possible to infer any normative or moral statement (ought statements) from them. (Some disagree; see Searle.) This law can be summarised in the catchphrase: No ought from an is.

Now to AI

What does this have to do with AI? Artificial intelligence, and in particular machine learning, runs on algorithms. An algorithm is a structured decision-making process that can be automated by a computational procedure to generate a specified decisional outcome. (Lagioia and Sartor provide a useful typology of machine learning.) Traditionally, the instructions that make up an algorithm are written out, step by step, by a human programmer. In itself, an algorithm is neither good nor bad.
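To make that concrete, here is a minimal, purely illustrative sketch of such a hand-coded algorithm in Python. It is not taken from the article: the rule, the function name and the inputs are invented; the point is only that every decision step is an explicit instruction written by a person.

```python
# A traditional, hand-coded algorithm: the decision procedure is spelled out
# by a human as explicit instructions. (Illustrative only; the rule is invented.)
def leash_decision(in_boboli_gardens: bool, dog_on_leash: bool) -> str:
    """Return 'violation' if a dog is off the leash inside the gardens."""
    if in_boboli_gardens and not dog_on_leash:
        return "violation"
    return "allowed"

print(leash_decision(in_boboli_gardens=True, dog_on_leash=False))  # -> violation
```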

But the most sophisticated algorithms in use today no longer depend on such instructions being given to them. An increasingly popular approach uses deep learning technology and neural networks. In these systems, the machine is given a goal (for example, to predict the odds of children being bitten by dogs in the Boboli gardens) and it autonomously develops its own parameters to define the best way to achieve that goal.

Deep learning algorithms feed on data. If one wants to make the prediction above, one could feed the machine thousands of examples of gardens around the world, some of which let dogs walk freely and some of which mandate the use of leashes, and the machine would extract a prediction of how likely it is for a certain regulatory change to achieve the desired result (preventing dogs from harming children).
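As a hedged sketch of what such a pipeline might look like (all data, feature names and numbers below are invented, and this is not any system the article refers to), one could train a simple statistical model on historical park records and then ask it for the estimated risk under each policy:

```python
# Minimal sketch: learn, from synthetic historical park data, how leash rules
# relate to dog-bite incidents, then estimate the risk under each policy.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
leash_required = rng.integers(0, 2, n)          # 1 = leash rule in force
visitors_hundreds = rng.normal(5.0, 1.5, n)     # daily visitors, in hundreds (invented)
# Synthetic "history": bites are rarer where leashes are required.
logit = 0.3 * visitors_hundreds - 2.0 * leash_required - 1.5
bite_reported = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([leash_required, visitors_hundreds])
model = LogisticRegression().fit(X, bite_reported)

# Query the model for a garden like the Boboli (invented visitor figure),
# first without and then with a leash requirement.
scenarios = np.array([[0, 6.0], [1, 6.0]])
print(model.predict_proba(scenarios)[:, 1])     # estimated bite probability per policy
```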

In some countries, the authorities are relying on deep learning technology to assist decision-making in matters of public interest. Deep learning algorithms are deployed to assess criminal recidivism, rank professors, or evaluate high-school applications. In all these cases, the idea is the same: data is fed into complex algorithms, which extract the parameters of what a good professor, a good student or a better inmate ought to be.

Recipe for no change

With deep learning, the algorithm’s output is deterministically guided by is statements, in other words by descriptions and facts (data) from the past. If we feed a machine massive amounts of data describing the existing state of affairs of a given society and ask it to predict the future on that basis, the machine will ossify the norms and values underpinning the data. In fact, it will reinforce them. This is a problem in all areas of public policy where reform – that is, changing the state of affairs – is the ambition.

The point is best driven home with an example:

A machine-learning algorithm was developed to evaluate PhD candidates. It was fed thousands of applications from previous candidates, including their pictures. Each application was labelled as successful or unsuccessful, and through supervised learning the machine extracted the following parameters as relevant features of a good PhD candidate (a rough sketch of this kind of setup follows the list):

  • High grades;
  • Enrolment in the national top 1% of universities;
  • White.
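
A minimal sketch of how this can happen (entirely synthetic data; the feature names and weights are invented to mimic the illustration above, not taken from any real system): if past admission decisions were correlated with a protected attribute, a supervised model trained on those decisions learns that attribute as a “relevant feature”.

```python
# Sketch: a model trained on biased historical decisions learns the bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
grades = rng.normal(0, 1, n)              # standardised grade score (invented)
top_university = rng.integers(0, 2, n)    # 1 = top-ranked university
white = rng.integers(0, 2, n)             # protected attribute
# Synthetic, biased "history": past committees favoured white applicants.
logit = 1.0 * grades + 0.8 * top_university + 1.5 * white - 1.5
admitted = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([grades, top_university, white])
model = LogisticRegression().fit(X, admitted)

for name, coef in zip(["grades", "top_university", "white"], model.coef_[0]):
    print(f"{name:>15}: {coef:+.2f}")     # the protected attribute gets a large weight
```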

The system in this illustration was not designed to be racist; the data that was fed to it was biased. Unfortunately, this problem is widespread in much of the AI in use today, from racist facial recognition systems to misogynistic job-application software to classist AI credit assessments. What is common to all of them is that they use the is from the past to predict the is of the future. The result is to accentuate the vicious circle whereby, for example, poor people have less access to credit because they are poor; and because they have less access to credit, they remain poor.
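That vicious circle can be sketched as a toy simulation (all numbers invented): a lender that decides purely on the basis of past outcomes keeps denying credit to the poorer half of the population, which widens the very gap that the next round of training then “confirms”.

```python
# Toy "is loop": decisions based on past data reinforce the state of affairs.
import numpy as np

rng = np.random.default_rng(2)
wealth = rng.lognormal(mean=3.0, sigma=1.0, size=1000)   # hypothetical population

for generation in range(5):
    threshold = np.median(wealth)             # learnt from the past: "people like this repaid"
    gets_credit = wealth > threshold
    wealth = wealth * np.where(gets_credit, 1.10, 0.98)  # credit compounds wealth
    ratio = wealth[gets_credit].mean() / wealth[~gets_credit].mean()
    print(f"generation {generation}: wealth gap (with/without credit) = {ratio:.1f}x")
```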

Machine-made inertia

But this in itself does not exemplify Hume’s guillotine. Although it looks like we are deriving an ought from an is, we are actually only extracting an is from another is. The machine is not saying that it is better to hire white people or that poor people should not be entitled to credit. It has no normative/moral data, only raw descriptive statements. It is merely saying that, according to the data, the best candidates were white (as those were the ones chosen) and that, if that reflects what a good candidate is, then in the future you should again choose white candidates. It is merely using the past to guide your future choices towards the objective you have given it (to select the best candidates).

However, just as in the Boboli example, the deeper logic is hidden. The ought is still there – whether it is good or bad to admit only white people to PhD programmes – and that should be the most important question.

What is worrisome is that by inadvertently embedding is loops in AI systems in the course of technological experimentation, competition and innovation, we risk actually “scaling” normative choices that societies no longer support. At no moment in our history have we shaped the future by only looking at the past for guidance. To do so would simply make social change, political reform and economic progress impossible.

If we start to rely on ubiquitous machine-learning loops to inform our future actions, what effect might this have on our normative oughts? David Hume’s guillotine, “No ought from an is”, is a powerful reminder to beware the tyranny of the past. It forces us to continue the moral debates about what future we want as a society. AI systems, with their promises of efficiency, might be obfuscating such debates. We must identify which values from the past we wish to part with before we feed them to technologies that act as massive replication systems. We must create boundaries for the uses of AI. In short, human moral reasoning is fundamental if we do not want AI to build the future we dream of on a past that we disparage.

 

Francisco de Abreu Duarte is a PhD Researcher in the Department of Law at the EUI. His doctoral research is on the rise of digital constitutional orders, and asks the question “How have Facebook, Google or Amazon become the new authorities?” Francisco has published in the European Journal of International Law and contributes regularly to several blogs.