The Bias of Artificial Intelligence

When it comes to artificial intelligence (AI), most people think of robot supremacy, humanised machines or computer chips out to harm us. In reality, AI is already embedded in many everyday systems in the form of algorithms: handwriting recognition, speech recognition, car navigation. AI is already part of our lives. It answers our questions about the opening hours of the restaurant around the corner, whether men are more creditworthy than women, or which skin colour has a higher potential for criminality. Most of the time we trust its decisions. Yet these results can be discriminatory and raise serious questions in today's society. It is therefore all the more important to ask: are the decisions an AI takes ethically justifiable, and can guidelines help programmers make ethical decisions? It was precisely this question of fairness, among others, that we discussed with Dr. Baur from Baur Consulting.

For an algorithm to behave ethically, one must first understand how the system works, and this is where the problem usually arises. How does an AI recognise a cat, and does it even understand what a cat is? Given a large dataset, the algorithm decides on the basis of millions of parameters, and with the help of machine learning it can learn from its mistakes and steadily improve by adjusting those parameters. Mathematically, it is essentially a regression, only much larger and non-linear. Eventually, however, the system becomes so complex that it is no longer possible to say which parameter triggers which decision on the selected data.
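
To make "learning from mistakes" concrete, here is a minimal sketch (a toy example invented for this article, not anything from the interview): a tiny classifier whose parameters start at zero and are nudged step by step by the gap between its predictions and the truth. Real image recognisers work on the same principle, just with millions of parameters and non-linear layers stacked on top.

```python
# A tiny logistic-regression classifier trained by gradient descent:
# the same "adjust parameters from mistakes" loop that, scaled up,
# drives modern machine learning.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 examples with 2 features each, labelled 0 or 1.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w = np.zeros(2)  # the model's parameters, adjusted during training
b = 0.0

for step in range(500):
    # Prediction: a weighted sum squashed into a probability (sigmoid).
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    # The error between prediction and truth drives the update.
    w -= 0.5 * X.T @ (p - y) / len(y)
    b -= 0.5 * np.mean(p - y)

accuracy = np.mean((p > 0.5) == y)
print(f"learned weights: {w}, accuracy: {accuracy:.2f}")
```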

The ethical question does not start with the output of an AI; it begins with the data, the so-called training data, that is selected by humans. If the data shows that men earn more than women, it is clear why a man is classified as more creditworthy than a woman. The selected data can reflect biased human decisions along lines such as social inequality, gender or skin colour; another source of bias is the absence of marginalised groups from the data altogether. Still, it is arguably easier to correct a biased system than to correct the biases of every individual human being.
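
A hedged sketch of that mechanism, using entirely synthetic numbers invented for illustration: the historical labels below depend only on income, yet because the data encodes a pay gap, the trained model ends up approving men far more often than women.

```python
# Synthetic (invented) historical record with a built-in pay gap.
import numpy as np

rng = np.random.default_rng(1)
n = 1000

gender = rng.integers(0, 2, size=n)       # 0 = woman, 1 = man
income = rng.normal(50 + 10 * gender, 8)  # men drawn from a higher mean

# Historical creditworthiness labels depend only on income ...
label = (income + rng.normal(0, 4, size=n) > 58).astype(float)

# ... yet a model trained on this record produces unequal outcomes.
# (Same tiny logistic regression as above, one feature: income.)
x = (income - income.mean()) / income.std()
w, b = 0.0, 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))
    w -= 0.5 * np.mean((p - label) * x)
    b -= 0.5 * np.mean(p - label)

# Final predictions, then approval rates split by gender.
p = 1.0 / (1.0 + np.exp(-(w * x + b)))
approved = p > 0.5
print(f"approval rate, women: {approved[gender == 0].mean():.0%}")
print(f"approval rate, men:   {approved[gender == 1].mean():.0%}")
```

The model never sees gender at all; it simply reproduces the inequality already present in the income data.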

Do ethical guidelines help to control these biases? The problem is that guidelines are often written by ethicists but are meant to be applied by programmers. It is no surprise that these two worlds do not speak the same language, which makes the guidelines difficult to implement.

The solution to this problem is not so simple. What is clear, however, is that on the one hand we need to be much more mindful of our biases, and on the other we need diverse teams working on such technologies and learning from each other. We must be clear about what we are teaching machines and what data we are giving them. Only then can discussions about ethical issues arise and their results be implemented in a comprehensible way.

Author: Priscilla Alexandra Laube

Image source: https://www.sciencegeist.ch/news/597
