Humanizing Machines: The Ethical and Psychological Challenges with AI

Have you ever felt a surge of emotion while working with an AI tool? In an era where artificial intelligence (AI) is becoming increasingly integrated into our daily lives, the lines between humans and machines are blurring. This phenomenon sparks crucial ethical and psychological debates about the consequences of humanizing machines. With AI systems now capable of convincingly mimicking human emotions and behaviors, distinguishing between genuine human interactions and those with machines is becoming a real challenge. In this blog entry, we look at the profound implications of this trend, examining the ethical dilemmas and psychological effects that arise from our growing tendency to humanize AI.

The Human-AI Relationship

Humans have a long history of anthropomorphizing inanimate objects. We worship statues, name our cars, and care for virtual pets like Tamagotchis. Today, this tendency extends to AI: people develop feelings towards AI systems and even fall in love with chatbots. Amazon revealed that in India, the phrase “I love you” is said to Alexa 19,000 times a day! [1]

Building on this tendency to anthropomorphize, people often treat computers and AI systems like human beings. We express joy or anger towards them, lie to them, protect their “feelings,” and apply the same social rules we use with people [2]. This behavior is evident in apps like Replika AI and Character AI, where users can create virtual friends, partners, or even family members [3]. These interactions take various forms, from master-servant dynamics to friend-like behavior to relationships between rational equals [4].

The emerging trend of human-AI collaboration, or human-AI teaming, raises questions about the potential dangers and effects of these relationships, including their impact on human behavior, e.g. on decision-making in personal or business environments or on shopping behavior. If humans depend too heavily on AI systems for decision-making, they may become less capable of independent critical analysis and problem-solving, and may accept AI-generated recommendations without adequately questioning their validity. In terms of connection, AI chatbots may provide a sense of companionship or understanding, but they lack true empathy and a physical presence, which could leave emotional and physical needs unfulfilled in the long term. Furthermore, advanced AI systems, particularly those designed for personalized recommendations or nudges, can manipulate human behavior in subtle ways. A striking example is the case of a young Belgian man who recently died by suicide after interacting with a chatbot named ELIZA [5]. This tragic incident underscores the profound influence AI can have on human emotions and behavior.

Trust and Trustworthiness in AI

As we navigate these complex relationships, the concepts of trust and trustworthiness become crucial. Trust develops on an emotional level: it is built through interaction and based on perceived trustworthiness [6]. The “no trust, no use” theory suggests that if we don’t trust something, we won’t use it [7]. This principle applies to AI as well: can we trust a machine the way we trust a human? A lack of trust might limit the adoption and use of AI technologies.

Trust in AI depends on several factors, including the system’s performance, transparency, and purpose. But the most important fact to consider, in my opinion, is that AI systems are always trained and supervised by humans. So the real question is: are the people behind the model trustworthy? Building trust usually takes time; people need to prove their trustworthiness to us. How can we answer this question if we know nothing about them? It ultimately comes down to whether we regard humans as fundamentally good or bad. Personally, I think it is important to keep a certain distance and not trust an AI system blindly, because we have little reason to do so. In fact, it is always better to verify the information we receive, regardless of its source.

Advantages, Disadvantages, and Challenges of Humanizing AI

The humanization of AI brings both advantages and disadvantages. On the positive side, AI can enhance productivity, provide companionship, and offer innovative solutions to complex problems [8]. However, there are significant drawbacks, such as job loss due to automation, privacy violations, or the potential misuse of AI for harmful purposes like hacking or creating deepfakes [9]. These general advantages and disadvantages set the stage for more specific challenges related to humanizing AI.

The psychological effects of humanizing AI are particularly concerning. Having caught myself treating AI like a person — being polite, getting angry, even using emojis — I can attest to how easy it is to confuse AI with a real human being. The ethical implications of these interactions are profound, as they challenge our understanding of trust and authenticity in relationships with machines. Moreover, the potential for AI to manipulate human emotions and behaviors raises significant ethical concerns.

Conclusion

The ethical and psychological consequences of humanizing AI are profound and multifaceted. As AI continues to evolve, it is crucial to reflect on our interactions with these technologies and consider the potential consequences. Trust in AI is essential for its effective use, but it must be built on transparency, performance, and a clear understanding of the system’s purpose.

In my opinion, we tend to trust AI systems too easily, because it is easier to let somebody else do the work, and we might assume a computer cannot make mistakes or fail. But we have to keep in mind that these systems are made by humans, that they are not fully developed yet, and that humans often lack objectivity. We need to understand that a conscious separation between human and machine is essential for an objective and healthy approach to AI systems.

Call to Action

I encourage you to reflect on your own interactions with AI and ask yourself the following questions about trustworthiness:

  1. What does the system do, and is it good at what it does?
  2. How does the system achieve its results? Is there transparency?
  3. Why does the system do what it does? What is its purpose?

By critically examining these aspects, we can better navigate the complex relationship between humans and AI and ensure that these technologies serve us ethically and effectively.

Vanessa Lange

This blog entry was written with the support of Microsoft Copilot AI. 

Image source: The images were created by Microsoft Copilot AI.


References:

  1. Amazon Staff. Customers in India say “I love you” to Alexa 19,000 times a day [Internet]. 08.02.2021 [cited 07.01.2025]. Available from: https://www.aboutamazon.in/news/devices/customers-in-india-say-i-love-you-to-alexa-19-000-times-a-day
  2. Cave S, Dihal K. The Whiteness of AI. Philosophy & Technology. 2020;33(4):685-703.
  3. Replika. The AI companion who cares [Internet]. [cited 07.01.2025]. Available from: replika.com
  4. Schweitzer et al. Servant, friend or master? The relationships users build with voice-controlled smart devices. J Mark Manag. 2019;(7-8):693-715.
  5. Bryson JJ, Kime P, Zlotowski J. Anthropomorphism in artificial intelligence. AI & Society. 2017;32(4):477-485.
  6. Fiske ST, Cuddy AJ, Glick P. Universal dimensions of social cognition: warmth and competence. Trends in Cognitive Sciences. 2007;11(2):77-83.
  7. Tschopp M. Artificial Intelligence: No Trust – No Use? Conference IKM Update, HSLU Luzern; 2019.
  8. Turkle S. Alone Together: Why We Expect More from Technology and Less from Each Other. Basic Books; 2011.
  9. West DM. The Future of Work: Robots, AI, and Automation. Brookings Institution Press; 2018.
