Artificial Intelligence – friend or foe?
Do you remember the TikTok video that went viral about a year ago, in which differently shaped wooden blocks were pushed through the square hole of a shape-sorting toy instead of through the holes matching their individual shapes?
Most of us were probably amused but also a bit irritated. I mean, it isn't wrong to get all the differently shaped wooden blocks into the box the way it's done in the video, but also: this is not how the task should be done! This funny little video can serve as a metaphor for how Artificial Intelligence (AI) can operate in a formally correct way while its output just isn't right. In many cases this can be traced back to the historical data used to train the AI, which lacks certain perspectives and variety. In the case of the wooden blocks, the 2D view of a shape alone may not be sufficient to decide which hole it belongs in, or the range of shapes the AI was trained on may not cover the diversity of shapes it is later applied to. With the wooden blocks, it's obvious what errors are being made, and if an AI were performing the task, it would probably be easy to improve it by adding some restrictions to the algorithm. When AI is applied in other contexts, however, the biases and missing perspectives in the historical data are not as easy to spot. Because AI is mostly just "statistics on steroids", it reliably reproduces the biases found in its historical data wherever it is applied.

Have you ever considered that something like the shape-sorting example might happen to your CV when you apply for a job? If you haven't, you definitely should from now on, because AI is widely used in recruiting. What if you weren't invited to an interview for the job you really badly wanted simply because your CV didn't fit through the square hole of the lid? All you know about the recruiting process is that a rejection letter arrived immediately. Maybe no human ever saw that extra certificate which definitely qualifies you for the job but for some reason wasn't readable for the AI. In one widely known example, a company's recruiting AI cemented the gender inequalities already observed in the company by replicating that bias. Applying AI to humans raises questions about ethics and responsibility. What is the framework within which we can ethically apply AI to humans? Who is responsible if your CV didn't make it through the square hole even though you perfectly matched the criteria for the job? How do we clean and prepare data before we feed it to an AI? How do we recognize and compensate for biases created by human error before an AI replicates them with absolute reliability?

Undoubtedly, there are areas where we can apply AI without worrying about replicating inequalities in our society. An example of this is predicting the apple or cherry harvest in order to set prices accordingly before the harvest. And then there are areas where AI is already applied successfully but some people aren't happy with the results: my brother's Spotify always ends up playing Anastasia even though he really doesn't like her songs. I personally am perfectly happy with my Spotify AI, but sometimes I wish I could get suggestions of songs and artists the AI couldn't predict I might like, either because I have never listened to the genre before or because people who listen to similar music as I do, statistically speaking, absolutely despise that sort of music. These examples might sound harmless, like minor inconveniences in the application of AI. But what happens when people who believe in so-called "alternative facts" are only fed "alternative facts" because the recommendation algorithm has learned what this group of people wants to hear, see and read? How does AI influence our society when we never come into contact with opinions, facts and feelings from outside our own digital bubbles? What happens if, instead of eliminating bias in AI, we end up adapting ourselves to the biases, squeezing into the square hole in order to get that job we really want?
I do not believe that I am remotely able or qualified to conclusively answer these questions, or to suggest a way to address and counteract the problems that arise with the rise of AI. I personally try to meet these challenges and questions with curiosity and an attempt to understand the principles behind AI: What statistics caused me to no longer be able to order from Zalando without paying in advance? Could it be because I moved to a different neighborhood? What attributes could an AI be looking for in my CV? Which politicians are able to understand the ethical issues of our digitalized society, and therefore the importance of legislation in that area? What should our society's values look like in a decade? How should those values be reflected in business models, institutions and legislation? These are questions I must answer for myself, perhaps raising my voice to influence the direction we are heading as a society, whether it concerns AI, politics, society or ethics. What has always been important to me is that we should constantly strive to improve the lives of the most vulnerable in our society. In my opinion, this should be the aim of any society, and this collective value should also be a guiding principle in the application of AI. But then again, this is probably a political question that cannot be answered with a universal truth. And it raises the further question of how we improve the lives of the most vulnerable members of our society, which is again a question of ethics.
AI reflects the current state of us and our society. It does not envision where we should head as a society, how we should evolve our values, or how we should live together. How foolish we would look if we used AI in parts of our lives in which we would like to evolve. And how foolish we would be if we didn't use AI in areas of our lives that aren't prone to bias and aren't lacking diversity.
If we categorically reject AI as a foe, we miss the opportunity to help shape the framework for how and where AI is used. We should get to know AI and make friends with it, in order to prevent it from becoming our foe.
Author: Martina Holzmann
Image Source: Kevin Ku on Unsplash
