The Human Element in AI

When Technology Feels Human: A Look at Anthropomorphism

Have you ever apologized to ChatGPT or felt bad for your robot vacuum when it bumped into a wall? If so, you’ve experienced anthropomorphism firsthand. This phenomenon, where we attribute human-like traits to non-human entities, is more common than you might think and reveals much about our emotional connections and interactions with artificial intelligence (AI).

Understanding Anthropomorphism

Anthropomorphism is when we assign human traits—like intentions, emotions, or behaviours—to non-human things. It’s a natural human tendency rooted in our evolution and psychology. Instead of being based on what the object or animal is actually like, it reflects how we interpret them through a human lens (Salles, Evers, and Farisco 2020). This might manifest as waving at your cat, assuming it understands the gesture, or talking to your car when it refuses to start, as if the vehicle can understand your frustration and react to it.

The philosophical exploration of anthropomorphism is not new. As early as the 18th century, David Hume, in The Natural History of Religion, observed this universal human tendency, writing: 

“There is an universal tendency among mankind to conceive all beings like themselves, and to transfer to every object, those qualities, with which they are familiarly acquainted, and of which they are intimately conscious.”

The origins of anthropomorphism stretch far back into human evolution, serving as an adaptive trait that helped our ancestors interpret and respond to their environment. By projecting human-like traits onto animals, natural forces, and inanimate objects, early humans could make sense of complex and often unpredictable phenomena, fostering a sense of control and connection to the world around them (Mithen and Boyer 1996).

“Anthropomorphism turns the typical human-to-object interaction into a process similar to human-to-human interaction” (Wan and Chen 2021).

This tendency to “humanize” objects, animals, or modern technologies offers profound insights into how we perceive and connect with the world around us. When it comes to AI, this habit reveals both our desire to make technology more relatable and the challenges of understanding its actual capabilities.

The Connection Between Anthropomorphism and AI

Modern AI systems, like generative AI models, are designed to mimic human-like responses, creating a fertile ground for anthropomorphism. Trained on vast datasets and powered by neural networks, these systems generate outputs that often feel eerily human. And let’s face it—that’s a big part of why we’re so hooked. It’s not just about what these systems can do; it’s the almost magical feeling of interacting with something that feels alive. Research confirms that anthropomorphism enhances users’ understanding of AI technology, promotes acceptance, and increases perceived competence during interactions (Placani 2024).

However, it also highlights a fundamental challenge: bridging the gap between human intelligence and the intelligence we are creating.

Human Intelligence vs. Machine Intelligence

Intelligence is central to what it means to be human. Every milestone in human civilization—from mastering fire to cultivating food and exploring the cosmos—is a testament to the unique power of human intellect. Professor Stephen Hawking, speaking at the launch of the Leverhulme Centre for the Future of Intelligence on October 19, 2016, stated:

“I believe there is no deep difference between what can be achieved by a biological brain and what can be achieved by a computer. It therefore follows that computers can, in theory, emulate human intelligence—and exceed it.”

While Hawking’s insights inspire reflection, current AI remains limited to machines designed to assist with specific tasks, lacking consciousness or emotional depth (Korteling et al. 2021).

Human cognitive frameworks are inherently designed to connect with others through shared experiences and context. Our communication and intelligence are deeply rooted in collective understanding and empathy. AI systems, by contrast, lack access to shared contexts or a sense of purpose. Despite their advanced capabilities, this gap underscores the limitations of machine intelligence and its inability to truly mirror human thought (Lawrence 2024).

Undoubtedly, AI has made remarkable strides, from generating fluent text to improving once-infamous video renditions of “Will Smith eating spaghetti”. However, large language models still lack true understanding of the words they produce. These systems operate on patterns derived from training data and are prone to errors, or so-called “hallucinations”.
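The point about “operating on patterns” can be made concrete with a deliberately tiny sketch: a bigram model that only records which word follows which in its training text, then generates by sampling from those counts. The corpus and function names here are illustrative inventions, and real language models are vastly more sophisticated, but the principle is the same — fluent-looking output produced purely from statistical patterns, with no understanding attached.

```python
from collections import defaultdict
import random

# Toy "training data": the model will learn nothing but
# which word tends to follow which.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Record every observed word-to-next-word transition.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=5, seed=0):
    """Generate text by repeatedly sampling a word seen after the last one."""
    random.seed(seed)
    words = [start]
    for _ in range(length - 1):
        options = follows.get(words[-1])
        if not options:  # dead end: no continuation was ever observed
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))
```

Every sentence this sketch emits is locally plausible, because each word pair was seen in training; none of it is “understood”, and a dead end or an odd combination is the toy-scale analogue of a hallucination.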

Artificial intelligence = the ability of a system to identify, interpret, make inferences, and learn from data to achieve predetermined organisational and societal goals (Mikalef and Gupta 2021).

This implies that AI is most effective when applied in contexts where it can help achieve predetermined organizational and societal goals.

Navigating Anthropomorphism and AI’s Future

Anthropomorphism acts as a double-edged sword. While it can enhance user experience and foster trust, it also serves as a cautionary reminder. Overestimating AI’s capabilities by projecting consciousness or emotions onto it can lead to misconceptions about its functionality and foster unrealistic expectations (Lawrence 2024). Consider examples where users form emotional bonds with AI companions instead of other humans or how human-like chatbots can subtly manipulate user decisions.

This disparity highlights a fundamental difference between human cognition and machine functionality, prompting us to reconsider how we navigate our evolving relationship with AI.

Key Questions to Remember

To counteract these risks, it’s essential to stay grounded and ask critical questions:

  • What does the system do?
  • How does it achieve its results?
  • Why does it do what it does?

By focusing on the practical, non-human nature of AI, we can better understand its capabilities and limitations, ensuring that our interactions with it remain informed and intentional.

Harnessing AI While Staying Grounded

Artificial intelligence offers thrilling possibilities to revolutionize many fields and advance humanity in extraordinary ways. However, recognizing AI’s limitations is essential to ensure technology complements humanity rather than competes with it. By balancing our natural tendency to anthropomorphize with a clear understanding of AI’s true capabilities, we can thoughtfully navigate this era of human-machine coexistence.

Human intelligence represents just one of many possible forms of general intelligence. AI systems can exhibit unique abilities that differ from human cognition. This diversity in intelligence challenges us to redefine how we collaborate with machines in ways that enrich both technological innovation and human creativity.

Arianna Rotundo

Inspired by the speeches during the Digital Food Business Week 2025 from:

Alix Rübsam on Exponential Technologies and Responsible AI
Marisa Tschopp on Ethics & Trust in AI

Further literature used:

Korteling, J. E. (Hans), G. C. Van De Boer-Visschedijk, R. A. M. Blankendaal, R. C. Boonekamp, and A. R. Eikelboom. 2021. ‘Human- versus Artificial Intelligence’. Frontiers in Artificial Intelligence 4 (March): 622364. https://doi.org/10.3389/frai.2021.622364.

Lawrence, Neil D. 2024. The Atomic Human: Understanding Ourselves in the Age of AI. Random House.

Mikalef, Patrick, and Manjul Gupta. 2021. ‘Artificial Intelligence Capability: Conceptualization, Measurement Calibration, and Empirical Study on Its Impact on Organizational Creativity and Firm Performance’. Information & Management 58 (3): 103434. https://doi.org/10.1016/j.im.2021.103434.

Mithen, Stephen, and Pascal Boyer. 1996. ‘Anthropomorphism and the Evolution of Cognition’. The Journal of the Royal Anthropological Institute 2 (4): 717–21.

Placani, Adriana. 2024. ‘Anthropomorphism in AI: Hype and Fallacy’. AI and Ethics 4 (3): 691–98. https://doi.org/10.1007/s43681-024-00419-4.

Salles, Arleen, Kathinka Evers, and Michele Farisco. 2020. ‘Anthropomorphism in AI’. AJOB Neuroscience 11 (2): 88–95. https://doi.org/10.1080/21507740.2020.1740350.

Wan, Echo Wen, and Rocky Peng Chen. 2021. ‘Anthropomorphism and Object Attachment’. Current Opinion in Psychology 39 (June): 88–93. https://doi.org/10.1016/j.copsyc.2020.08.009.

Image source:

OpenAI. 2025. AI-generated image: Fusion of human and digital worlds with a human and robot smiling. Generated using DALL·E via ChatGPT. https://chat.openai.com/.
