Artificial intelligence (AI) is no longer just a technical tool; it has become the new digital therapist's couch of contemporary society. In 2025, emotional therapy and companionship emerged as the main uses of generative AI, according to a Harvard Business Review survey, surpassing traditionally technical applications such as task automation, translation, and code development.
This shift reflects a growing need for emotional support and existential guidance in an increasingly fast-paced, lonely, and anxious world. AI has gone from tool to confidant, especially among young people in countries such as China and Taiwan, where factors such as social stigma and lack of access to mental health services lead thousands of people to turn to virtual assistants such as ChatGPT in search of comfort, relief, and meaning.
Loneliness as emotional fuel
The growing attachment to "digital therapists" is directly linked to a modern phenomenon: the loneliness epidemic. Even amid the hyperconnectivity of social networks, many people feel disconnected in their real affective bonds. That makes the idea of an always-available interlocutor attractive: one who passes no judgment and demands no emotional reciprocity.
In this context, AI presents itself as a quick and discreet solution. It "listens," it responds, it kindly suggests paths forward. It is understandable that someone in psychological distress would see it as an alternative. However, what looks like support can become an emotional trap.
Invisible risks: what is at stake?
Although AI systems are trained to simulate empathy, they do not feel, do not understand deeply, and have no subjectivity. They operate on the basis of patterns, probabilities, and pre-formulated responses; there is no clinical listening, no symbolic elaboration.
The main risks include:
- False sense of therapeutic care: Users may mistake interaction with AI for real therapy, forming illusory bonds and failing to seek qualified professionals to deal with trauma, depression, anxiety, or other disorders.
- Worsening of emotional conditions: By trying to handle complex situations alone, mediated by an AI, users may aggravate their symptoms, receive inappropriate responses, or normalize harmful behaviors.
- Privacy and leakage of sensitive data: Unlike a human therapist, an AI is not bound by a code of professional ethics with legal guarantees of confidentiality. Conversations can be collected, analyzed, and eventually used to train algorithms or target advertising, violating the right to privacy.
- Emotional dependence on the machine: Emotionally vulnerable users can develop attachment bonds with the AI that make it harder to build real human relationships and deepen social isolation.
Complementary use, not a substitute
Despite the risks, AI can play a role as a complementary tool in emotional care, never as a substitute for traditional therapy. It can help organize a self-care routine, offer educational content about mental health, and provide initial support in moments of crisis, provided there is discernment and clear limits.
In this sense, the five main uses of generative AI in 2025, according to the Harvard Business Review survey, are:
- Therapy and emotional companionship
- Organizing one's life
- Finding purpose
- Self-directed learning
- Code creation
It is important to highlight that even for the number-one use, therapy and emotional companionship, the experts' recommendation is clear: AI may be an initial point of support, but it should never be someone's only emotional pillar.
The illusion of digital healing
The most subtle danger is perhaps that of therapeutic illusion. AI responds with simulated empathy, phrases carefully crafted to make the user feel welcomed. This creates the impression of emotional progress, but without any real confrontation of internal conflicts. As a result, many people postpone or avoid seeking professional help, believing they are improving when, in fact, they are only silencing their pain with superficial comfort.
In addition, there is the ethical risk of companies developing emotionally engaging AIs to increase user retention, that is, creating emotional dependence as a market strategy, something deeply worrying.
Challenges for Digital Regulation and Responsibility
As these technologies advance, so does the urgency of ethical and legal regulations that protect the most vulnerable users. Countries with data protection laws (such as the LGPD in Brazil and the GDPR in the European Union) are already discussing whether emotional chatbots should be classified as medical or psychological devices, requiring licensing and oversight.
Other important questions include:
- Who is responsible for emotional harm caused by an AI?
- Is the user aware that they are interacting with a machine?
- Is there transparency about the terms of use and the risks involved?
Artificial intelligence is transforming the way we relate to the world and to ourselves. But while it offers innovative solutions, it also imposes new emotional, ethical, and social challenges. By turning AI into a digital therapist, we risk reducing human pain to a software problem, when in fact it is part of a deeply subjective, relational, and human journey.
Therefore, AI can be a powerful ally in mental health, as long as it is not mistaken for a real substitute for human contact, active listening, and a genuine therapeutic process. As with any technology, responsible use is what defines its impact.
Want to dig deeper into the subject, ask a question, leave a comment, or share your experience on this topic? Write to me on Instagram: .
*This text does not necessarily reflect the opinion of Jovem Pan.