Artificial intelligence and inequality: a solution or a weapon?
Artificial intelligence (AI) algorithms emerged as an innovation meant to replace human discretion with mathematical calculation, especially when important decisions have to be made. For example, AI algorithms can be used in the labour market, specifically in the hiring process. Ideally, if it is a machine selecting whom to hire, problems of human bias, such as racial or gender discrimination, should be solved.
What we are instead observing in reality is exactly the opposite. Machines are in a way legitimising the biases that are systemic and endemic in our society. The question, thus, is whether artificial intelligence is helping us to solve human biases or is instead camouflaging them through technology.
How artificial intelligence algorithms work
Artificial intelligence algorithms are used to predict outcomes that we need to know but cannot observe, given our computational limitations as human beings. For example, the police need to know in which geographical areas crimes are most likely to occur. The task of an artificial intelligence algorithm is thus to predict in which neighbourhoods criminal activity will be most intense, and therefore where police officers should be sent.
In the labour market, employers making new hires need to know how potential workers will perform once hired. Unfortunately, this cannot be known beforehand. The task of artificial intelligence algorithms is to predict the future performance and behaviour of potential new hires and so help employers make their hiring decisions.
But how do algorithms predict those outcomes? The answer is data. We feed the computer lots of data. The machine digests it, learns from it, and makes its predictions.
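To make this concrete, here is a minimal sketch of what such a prediction looks like in practice. It is not any specific employer's system: the features, records, and library choice (scikit-learn) are assumptions made purely for illustration.

```python
# A minimal, hypothetical sketch of a hiring-prediction model.
# The features and records below are invented for illustration only.
from sklearn.linear_model import LogisticRegression

# Historical records: [years_of_experience, past_promotions]
X_train = [
    [2, 0], [5, 1], [7, 2], [1, 0],
    [6, 2], [3, 1], [8, 3], [4, 1],
]
# Labels: 1 = "performed well", 0 = "performed poorly"
y_train = [0, 1, 1, 0, 1, 0, 1, 1]

# The model learns patterns from the historical data it is fed...
model = LogisticRegression()
model.fit(X_train, y_train)

# ...and predicts how a new candidate is expected to perform.
new_candidate = [[4, 1]]
print(model.predict(new_candidate))  # e.g. [1] -> predicted to perform well
```

The key point is that the model can only learn whatever patterns the historical records contain; it has no way of knowing whether those records are fair.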
Artificial intelligence as a weapon of inequality
In the section above, we have said that machines learn from the data that they are fed. But what if there are problems with the data? Herein lies the harm of artificial intelligence.
The data we use for training artificial intelligence algorithms represent reality, the society in which we now live and operate.
Employers needing to hire a manager would use an algorithm fed with data on a labour market in which white men have been the most successful social group in history, at the expense of women and blacks. The machine would indeed suggest hiring a white man as a manager.
In the same fashion, the algorithm used to predict crime rates in certain neighbourhoods is fed with data recording that poor neighbourhoods experience intense criminal activity. The algorithm will then send more police officers to those neighbourhoods, with the result that more black people will be arrested, since they are more likely to live in poor areas. It is thus clear how a feedback loop is generated: people at a power disadvantage, who have been discriminated against by society, will also suffer discrimination by artificial intelligence.
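The feedback loop can be seen in a small, purely hypothetical simulation: two neighbourhoods with the same underlying crime rate, one of which starts out with more recorded arrests. Patrols are allocated according to past records, so the initially over-policed neighbourhood keeps accumulating more arrests. All numbers here are invented for illustration.

```python
# Hypothetical simulation of a predictive-policing feedback loop.
import random

random.seed(0)

# Both neighbourhoods have the SAME underlying crime rate.
true_crime_rate = {"A": 0.10, "B": 0.10}

# Historical records start out skewed against neighbourhood B.
recorded_arrests = {"A": 10, "B": 30}

for year in range(10):
    # Patrols are allocated in proportion to past recorded arrests.
    total = sum(recorded_arrests.values())
    patrols = {n: round(100 * recorded_arrests[n] / total) for n in recorded_arrests}

    # More patrols -> more crimes observed and recorded, even at equal crime rates.
    for n in recorded_arrests:
        observed = sum(random.random() < true_crime_rate[n] for _ in range(patrols[n]))
        recorded_arrests[n] += observed

# Neighbourhood B keeps accumulating roughly three times as many recorded
# arrests as A, despite identical underlying crime rates.
print(recorded_arrests)
```

In this toy model the initial imbalance in the records is never corrected: the data produced by policing decisions feed straight back into the next round of policing decisions.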
The salient issue with how we use artificial intelligence is thus the following: as a society, we are using an innovative tool, artificial intelligence, to codify and legitimise systemic biases that have their roots in the past.
Thoughts on the future of artificial intelligence
Existing academic research is indeed doing a masterful job of identifying and correcting the biases that dominate the data we use to train artificial intelligence algorithms. However, this is just a tiny piece of a huge and important problem.
My research shows that even if we train artificial intelligence algorithms on unbiased, i.e. completely objective, data to predict the labour-market outcomes of interest, the algorithms still discriminate against women, judging them as potentially less successful than men.
From an inequality point of view, an examination of AI predictions is worrying. The evidence that artificial intelligence algorithms discriminate despite unbiased training data shows how the issues related to artificial intelligence are deeply rooted in our culture.
One source of the bias codified in artificial intelligence algorithms is indeed the data we use. But bias is also unconsciously perpetuated by the people who build the algorithms themselves. It is not by chance that most computer science engineers are white men and that algorithms discriminate against women and blacks. Finally, bias may also be intentional, when people shape AI algorithms to their own preferences for personal profit: whether employers want to hire more people like themselves (white, male), or police departments hope to maximise arrests among certain populations, the technology can be swayed.
The evidence shows that AI algorithms, while perhaps efficient, are not by any means fair, and, as the codification of data rooted in existing power asymmetries, do little to level the playing field for those at the margins of society.
It is time to step back from a state of mind that aims above all at maximizing profits, and step towards a deployment of machines programmed to make fair decisions for all, especially for those who historically have suffered unfair discrimination. It is time to challenge power relationships, not legitimise them under cover of technology. But are we ready?
Elena Pisanelli is a PhD researcher in the Department of Political and Social Sciences. She has a Master of Research in Public Policy and Social Change from Collegio Carlo Alberto and a master's degree in economics from the University of Bergamo. Her research on artificial intelligence examines the link between algorithmic bias and labour market consequences, specifically in terms of its impact on gender discrimination.