Can AI Replace Your Therapist? Pros and Cons of AI Therapy

The Rise of AI Therapy Conversations

Everyone spends their time online differently, and a black hole of mine is Reddit. I can easily get lost in subreddits reading about countless things, but one thing I see with increasing frequency is people discussing how they use ChatGPT for therapy. As a therapist, I have thoughts on this: I think it's a mixed bag, and there is a lot to be concerned about.

The Potential Role of AI in Therapy

I think there is a place for AI in therapy: it could help someone open up a bit, find some resources, and become more open to seeking therapy. But there are also potential drawbacks that people are not considering, and I am going to try to give space for both.

Pros and Cons of ChatGPT as a Therapeutic Tool

Pros

  • ChatGPT therapy is free(ish).
  • It has a large amount of data in its LLM and can give some helpful reflections.
  • It’s available all the time.
  • Through its archive of knowledge about you, it can draw on past conversations.
  • It’s not time limited and can feel very private.
  • It’s accessible to the world.
  • It can offer exercises and things to try in the real world.
  • It's helpful for short-term coping, journaling, or talking through a rough moment.

Cons

  • The data you share with any LLM is not private and could be used in a variety of ways that could cause harm.
  • AI therapy can fuel delusions and give dangerous advice.
  • There are worries about how it’s affecting children’s mental health.
  • Conversations, even stripped of identifiable data, could be re-identified. AI can be hacked (and has been) and can run malicious commands.
  • ChatGPT keeps logs of all queries, even if they are anonymous.
  • Conversations with AI could become part of litigation or be used to deny healthcare coverage or life insurance.
  • Should those conversations become public, it could have life-threatening implications for those individuals. We saw this happen in Europe.
  • Chatbot addictions are causing problems in relationships.
  • People are being committed after spiraling into psychosis linked to AI.
  • Could this data be used for surveillance of the public or influence campaigns?
  • Imagine talking with AI about depression and then, in an almost Minority Report-like way, seeing ads about depression everywhere: products on Amazon about depression, emails about medications you could take.
  • Companies like OpenAI (the maker of ChatGPT) could easily be undercut by cheaper rivals, and because they are in growth-at-all-costs mode, they are less likely to put guardrails in place (without policy forcing them), and they could cause harm. This could be due to bias or even directly telling people to harm themselves (which has been documented).
  • Kids are using AI from a very young age with no supervision and this is concerning.
  • AI companies may share data with other sources like Palantir, which was recently tapped to gather data on Americans. This could be solely to connect a lot of data sources, or it could be used in malicious ways. How will companies use similar data? Could it be to screen someone before they interview?
  • It's not a licensed professional, and there is a lot of nuance that goes into being a good therapist. The things people's bodies tell you that their words don't, and the feeling you get being in the room with them, feed into intuition about what's going on, and none of that is captured by AI. It has no emotional intuition.
  • In a high-stakes situation it may give generic advice that feels very surface-level, and that could spell trouble for someone in crisis.
  • It could easily miss signs of suicidal ideation or a manic episode.

I know some of the concerns I mentioned above might seem paranoid, but all of this is moving very quickly, and we are already seeing some unexpected Space Odyssey-type things happening. The FT reported recently:

[Screenshot of the FT report, June 6, 2025]

The Future of AI in Mental Health: Proceed with Caution

I think there is absolutely a place for AI in the therapy world, but it must be used very carefully. As a therapist, I am excited to see how it could be used to aid insurance billing and scheduling, to pair therapists with good-fit clients across a variety of medical systems, and to follow up on care to prevent oversights. AI could also be wonderful for helping to train support staff in different environments like churches, support groups, schools, and nonprofits, so they can serve their communities' needs and know when to refer out.

I wish AI therapy could be trusted, but in an era of weakening democracies and a lack of privacy and protections around personal data (Europe is somewhat better in this regard), I won't feel comfortable with this until more protections and guardrails are put in place, and I think it's important to keep well-trained therapists involved in therapy.

What Experts Are Saying

AI tools may appear helpful, but without clinical oversight, they can give advice that's inappropriate or even harmful. The illusion of empathy from a chatbot is not the same as trained therapeutic care.

Dr. John Torous, Director of Digital Psychiatry at Harvard

AI systems are being deployed without transparency, accountability, or adequate testing, especially in domains where human lives are at stake.

Timnit Gebru, AI ethics researcher

There is a long-standing pattern of putting tech out first, and patching problems later — but you can’t beta-test with human trauma.

Margaret Mitchell (formerly at Google AI)

AI-driven tools in mental health must not be seen as replacements for human care… Risk of data misuse, misdiagnosis, and depersonalized care is significant.

World Health Organization

Photo by Markus Winkler on Unsplash

Author: William Schroeder, LPC

Looking for Some More Guidance?

Sign up for our newsletter to get one or two emails a month with tips on parenting, relationships, anxiety, ADHD, and everything in between.