AI Chatbots vs. Real Therapists: 3 Reasons to Choose a Real Therapist

Published August 26th, 2025

5 min read

 

Thinking about using ChatGPT or another AI chatbot for therapy? Here’s why mental health experts warn these tools could make symptoms worse—and what to do instead.

 

Written by Simon Spichak

 

Almost 50 percent of people who might benefit from therapy can’t access or afford it. Many are turning to AI chatbots like ChatGPT or Gemini instead. They’re free, always available, and non-judgmental. Despite the appeal, mental health professionals warn that using an AI chatbot instead of a real therapist is risky and could worsen your mental health.

1. AI Chatbots Are Not Designed to Provide Therapy

Therapists undergo years of training and supervised practice to learn how to treat specific mental health conditions. They’re also bound by legal and ethical obligations. 

In contrast, AI apps are designed to keep people engaged and talking for as long as possible. As C. Vaile Wright, a licensed psychologist and senior director of the APA’s Office of Health Care Innovation, told Scientific American, the chatbots maximize engagement by “being unconditionally validating and reinforcing” your thoughts and behaviors, rather than actually treating your mental health.

For example, the first-line treatment for obsessive-compulsive disorder (OCD) is exposure and response prevention (ERP). The technique helps you confront the stimuli that trigger your anxiety and compulsive thoughts without acting on them, so that over time you learn they aren’t dangerous. A chatbot can’t deliver ERP; instead, it tends to validate and reinforce intrusive thoughts, which can lead to more compulsive behaviors and worsening symptoms.

2. Risks and Privacy Concerns of AI Therapy

While many chatbots market themselves as wellness apps, most offer no evidence that they work. Some have even been caught telling users that they are qualified therapists, leading people to believe they’re receiving actual mental healthcare when they are not.

Troublingly, there is also no guarantee that the data you share is confidential. It might be stored by the AI app or even used for training the chatbot in the future. That means there’s always a risk that the sensitive information you share will be leaked or misused. 

In contrast, a real therapist complies with health privacy laws that make sure what you say remains confidential.

3. AI May Provide Unsafe or Harmful Advice

AI chatbots don’t think the way a person does. They are trained on massive amounts of text and, based on your query, predict which words are most likely to come next. That means they don’t check whether their responses are accurate.

As a result, AI chatbots often make things up, a phenomenon called hallucination, and they may not recognize urgent mental health crises. In extreme cases, doctors have documented AI psychosis, where a chatbot’s responses convince someone that something imaginary is real.

Not having a human in the loop puts people at risk.  

Finding a Real, Affordable Therapist

Although AI chatbots are affordable, they aren’t designed to be effective therapists. At Resolvve, we strive to provide low-cost options for people living in Ontario. 

Our therapists specialize in helping people manage their ADHD and OCD, as well as other mental health conditions. Therapy with us starts at $30 per session with a student therapist and $100 with a full-fledged practitioner. If you’re ready to get started, you can match with a therapist today for a free consultation.

Frequently Asked Questions

What are the risks of using AI chatbots like ChatGPT for therapy?

AI chatbots can give inaccurate or harmful advice and miss warning signs of a crisis. The companies that design these chatbots want to keep people using their apps for as long as possible, so the chatbots tend to be unconditionally validating and may reinforce harmful beliefs or delusions. They are not a substitute for professional therapy and may delay or prevent people from getting effective mental health care.

Is my information safe with AI mental health apps?

Most AI “therapy” apps are not subject to health privacy laws like PHIPA, meaning information you share may be stored, used for training, or even leaked. There is always a risk when entering sensitive information into unregulated apps.

Can AI chatbots respond to mental health emergencies?

No. AI chatbots cannot reliably recognize or respond to emergencies such as self-harm or suicidal thoughts. Unlike licensed therapists, they cannot provide crisis intervention or connect users to emergency help.

Why are human therapists necessary if AI chatbots are available?

Human therapists are trained to provide personalized care and are legally responsible and accountable for the care they provide. They are also regulated by a professional body that enforces strict ethical and legal guidelines to keep you safe.

Please note that this post is written for educational purposes; it is not therapy. If you need to talk to a professional, please book a consultation with a psychotherapist through Resolvve.