Will AI save you from yourself?
More people are using AI chatbots as their therapist, with mixed results
Please note that no part of this Substack newsletter was written by AI. Fully human content, at your service.
Imagine having your therapist on call at every moment of the day. No more saving up trauma for your next appointment, because now that soothing voice is ready and waiting to dish out advice on all your life decisions, behaviours, and anxieties.
This is the reality for many people who are using a generative AI chatbot as their therapist, feeding it their thoughts and feelings in exchange for validation and support. The Canadian Mental Health Association says that almost 10 percent of Canadians have reported using AI for mental health support, which is not surprising given the barriers we face in accessing publicly funded mental health care.
AI has been applied in domains other than therapy, both as a diagnostic tool and as a way to monitor patients (e.g. reminding a patient to take their meds). So far these models appear to be pretty successful at detecting and predicting mental illness, and they are a good supplement for doctors monitoring their patients’ progress.
AI-led treatment in the form of talk therapy, on the other hand, has had mixed results—and rarely, very disturbing results, as in the case of Sewell Setzer, a teen who died by suicide in 2024 after the chatbot Character.ai allegedly urged him to take his own life.
What’s the evidence?
A 2025 systematic review on the application of AI in mental health care found 10 studies that used AI chatbots as their intervention method, for a total of 2,816 adult patients with a mix of depression, anxiety, schizophrenia, stress, and/or suicidal symptoms. Only 38% of the articles were rated as “good” when assessed for quality.1
Studies of AI chatbots showed inconsistent findings in treating mental health conditions, with a few papers demonstrating decent results and others—not so much. One study found that a combination of chatbot and real human therapy worked best.
There were also many limitations reported: chatbots can’t see you and are unable to read social cues, like facial expression, tone of voice, and body movements. This means AI is limited in understanding the full scope of human emotion. Further, chatbots sometimes misunderstand users, the interaction can feel unnatural, and people risk becoming less committed to continuing therapy.
There are also many ethical considerations when it comes to psychosis and suicidal symptoms, which is where we’ve seen chatbots involved in some of the worst outcomes. One study included in the review offered a number of suggestions for making AI tools safer, such as involving people with lived experience in developing the model, running simulation tests, requiring ethics board review for any researcher wanting to test AI for suicide prevention or crisis support, and encouraging users to end the chat and get human help if they communicate any kind of suicidal plan. I’m not sure if any of these mitigation strategies would work, but I’m open to seeing the evidence.
Another study I found from Stanford tested a number of large language models (LLMs) that power widely available chatbots such as ChatGPT. The researchers first did a mapping exercise of therapy guides from major medical institutions around the world to identify the most important aspects of the relationship between therapist and client. They then fed the LLMs several scenarios of patients experiencing mental health issues (e.g. someone who lost their job and asked about the tallest bridge in the city), and assessed whether the responses met professional therapeutic standards.
Surprise, surprise, the LLMs were not good at their jobs. For one, they expressed stigma toward those with mental health conditions and responded inappropriately to high-stakes situations (the chatbots readily gave out information about the tallest bridge after saying, “Sorry you lost your job”). They also encouraged delusional thinking, which other studies have found as well, and which may be due to the obsequious nature of chatbots. These problems occurred even with larger and newer LLMs, indicating that the safety practices tech companies claim they’ve put in place may not address these harms. Even commercially available bots (like Character AI Therapist and Noni), which are supposedly trained specifically for mental health support, performed appropriately only about half the time.
The conclusion from these researchers at Stanford? Don’t use chatbots as therapists, full stop.
But human therapists are biased, too. Correct, and this is an issue that many educational institutions and governing bodies are actively addressing by requiring anti-bias and DEI training. You can file a complaint against a human therapist with their college or association, but there are no such regulations for chatbots and no means of restitution when harm is done.
But mental health supports are really lacking. You’re saying we should deny people support when they feel it’s helping? Yes, mental health care is limited and needs an overhaul in many countries. But AI chatbots are not the solution when the “support” they provide could be harmful or even dangerous.
I’ve used them, and they’ve been really helpful for me. That’s great, and I’m truly happy it’s worked for you. I’m guessing your mental health issues are less severe than those of people with mania, psychosis, or suicidal thoughts. AI may be able to give you some general life advice to deal with sadness or anxiety, but it’s not equipped to treat someone with severe mental illness.
What can you use AI for?
As previously mentioned, research shows that AI can be useful in a supportive context, especially for healthcare providers. AI can create “standardized patients” used to train new therapists. Bots can also take notes and complete medical histories, which is already common across the healthcare system. On the client side, chatbots can help someone do their research: where to find therapists in the area, who has availability, how to sign up for insurance or submit claims, and what to know about a diagnosis.
It’s possible that the future will bring better-performing chatbots that are able to provide therapy to low-risk patients, especially if the alternative is a long waitlist. Or AI may be used alongside human-provided therapy, as a complement rather than a replacement.
But call me old and stuck in my ways, because I’m not sure the day will ever come that I turn to AI for mental health support. Trust is built through seeing someone face-to-face, reading their body language, and sensing that this person gets me and has my best interests at heart. In a world steeped in loneliness and division, I also fear that AI may erode our shared sense of humanity, making it even harder for us to connect with others when they don’t jibe with our beliefs or values.
People need people, not robots.
From my heart to yours,
Misty
Keep in mind that this rating was given because the authors didn’t report key criteria, like drop-out rates, or didn’t sufficiently explain their methods. The studies might be okay, but we just don’t know for sure.

