Ethics & Trust in Artificial Intelligence

How much of your life are you willing to trust to a machine?

AI has quietly become part of our everyday lives. It's always available, stores huge amounts of information, and can help us instantly. Most of us use it daily without thinking about what that trust really costs. After the lecture on ethics and trust in AI, I started wondering: when does convenience turn into emotional dependence? AI gives us knowledge, helps us communicate, and makes tasks easier, but it can also create fake relationships that replace real human connections. In the lecture, we discussed how people sometimes treat chatbots like friends or therapists, and the problems this can cause with trust, privacy, and emotions. The main question we kept returning to was not whether AI is helpful, but which parts of our feelings and relationships we are willing to give to machines.

Emotional chatbots are a striking example. People treat them as friends—or even equals—sharing their deepest thoughts and vulnerabilities. It’s easy to see why: being “heard” at any hour, without judgment, feels comforting. But AI cannot take responsibility or provide real human care. In fields like mental health, this isn’t just a philosophical problem—it’s dangerous. Are we confusing synthetic empathy with real support? And if we do, are we putting our emotional well-being on the line for something that can’t truly care?

The paradox of closeness is even more unsettling. The more human-like AI becomes, the more it can make us uneasy. Platforms like Replika show how people experiment with AI friendships, blurring the line between human and synthetic relationships. But every step into this synthetic intimacy carries real risks: privacy breaches, emotional dependency, and social isolation. I can't help but ask myself: how much of our emotional energy should we pour into something incapable of accountability or care?

Ultimately, trusting AI requires conscious reflection. The lecture highlighted three simple but powerful questions: What does it do? How does it work? Why does it act this way? These aren’t just technical prompts—they’re warnings. AI should support human judgment, not replace it.

The bigger challenge, and the more frightening one, is balancing AI's immense potential with the responsibility we take on every time we invite it into our lives. Reflecting on this, I wonder whether we are truly aware of the trust we give, or whether convenience and comfort are quietly setting the limits of intimacy, and even reshaping what it means to be human. Personally, I think AI can be an incredible tool if used thoughtfully, but we must stay attentive and consciously decide which parts of our emotions and relationships remain only human.

Knecht Patricia

Sources:

OpenAI ChatGPT. https://chatgpt.com/

European Commission (2019). Ethics Guidelines for Trustworthy AI. https://www.aepd.es/sites/default/files/2019-12/ai-ethics-guidelines.pdf

Image source: AI-generated with ChatGPT
