Is AI a good therapist?

Thu 27 March 2025
AI
Blog

Waiting lists in the mental health system have been too long for years. Chatbots have no waiting lists and are usually free. Could AI chatbots like ChatGPT be the solution to the shortage of therapists? In any case, they are being deployed in increasing numbers to relieve mental health professionals, shorten waiting lists and alleviate staff shortages - and all with chatbots that were not developed or trained for that purpose. In this blog, I answer the question of whether deploying AI chatbots in the mental health sector can be a structural solution to the shortage of therapists.

More and more AI tools for therapy

More and more chatbots are emerging that can be used for therapeutic purposes. On the site There's An AI For That alone, more than forty AI therapy tools can be found. “Perhaps an AI program will soon be able to recognize from your voice that depression is approaching,” psychologist Heleen Riper enthusiastically told EenVandaag. And if someone is suicidal, a conversation with a chatbot could provide just the right intervention - although, of course, a conversation with a human being remains better. But some people looking for help also like to talk anonymously: they feel freer.

Therapists themselves are also increasingly using AI as a tool. For example, the English National Health Service uses the AI program Limbic, which makes suggestions for diagnosis and further treatment based on an initial chat session. Another AI system can even predict within a week whether an antidepressant will work, whereas previously patients only found out after six weeks of taking the pills whether the drug worked. And AI could even detect whether someone is depressed by analyzing their selfies.

Concerns about AI therapy

The popular ChatGPT is also being used as a therapist, even though the chatbot was not developed for that purpose. Consequently, there are concerns about using AI for therapy. Its continuous availability can make users dependent on it. And there is the question of what happens to the data users share: are TheraMe, TherapistGPT and Therabotic also bound by professional confidentiality?

Moreover, the chatbot is not a trained therapist. What if an AI system does not respond appropriately when a user says they are suffering from stress or suicidal thoughts? The bots may miss such signals and respond inappropriately. When a user expresses concerns about suicide or abuse, ChatGPT provides crucial information, such as the recommendation to call the emergency number, in only 22 percent of cases. The app Tessa was designed specifically for eating disorders, but was taken off the market when it actually recommended that users lose weight, count calories and measure body fat.

Chatbots also sometimes "hallucinate": they make up answers, which can lead to harmful and unwanted suggestions. This is the main pain point of current AI technology: chatbots calculate the most likely answer and do not respond based on a true understanding of reality. This is why ChatGPT provides a comprehensive answer to the question of who holds the world record for crossing the Channel on foot. So a user is not talking to someone with real-world experience, but to a souped-up calculator that guesses each time which answer is the most plausible.
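To make that idea concrete, here is a minimal, purely illustrative Python sketch: a toy bigram model that always returns the statistically most frequent continuation of a word, with no notion of whether the result is true. The corpus and function names are my own invention; real chatbots are vastly more sophisticated, but the principle of "most plausible, not necessarily correct" is the same.

```python
# Toy sketch (not any real chatbot's code): pick the statistically most
# likely next word, regardless of whether the continuation makes sense.
from collections import Counter, defaultdict

# Invented "training data"; the frequencies, not the facts, drive the output.
corpus = [
    "the record for crossing the channel by boat",
    "the record for crossing the channel by boat",
    "the record for crossing the channel by foot",
]

# Count which word tends to follow each word (a bigram model).
next_words = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current, following in zip(words, words[1:]):
        next_words[current][following] += 1

def most_likely_next(word: str) -> str:
    """Return the most frequent follower of `word`, true or not."""
    followers = next_words.get(word)
    return followers.most_common(1)[0][0] if followers else "<unknown>"

print(most_likely_next("by"))  # prints "boat": simply the most frequent option
```

The toy model never checks reality; it only counts. Scale that principle up by billions of parameters and you get fluent, confident answers that may or may not be true.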

Do users notice the difference?

Incidentally, it remains to be seen whether users always care that a chatbot is merely a statistical model. In the 1960s, computer scientist Joseph Weizenbaum developed one of the first chatbots, called ELIZA. With the program, he wanted to explore how human-machine communication works. ELIZA repeated the key words of the user's input back in question form, much like the cliché image of a therapist. To the statement "I am unhappy," for example, ELIZA replied "Can you explain what made you unhappy?"
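For illustration, here is a minimal Python sketch of the kind of keyword-reflection rule ELIZA used. Weizenbaum's original program (written in the 1960s, and not in Python) was considerably more elaborate; the single rule below is hypothetical and only shows the principle.

```python
# Sketch of an ELIZA-style reflection rule: mirror the user's key words
# back as a question, with no understanding of what they mean.
import re

# One hypothetical rule: turn "I am X" into a therapist-like question.
REFLECTION = (re.compile(r"\bI am (.+)", re.IGNORECASE),
              "Can you explain what made you {}?")

def eliza_reply(user_input: str) -> str:
    """Reflect the user's statement back as a question, or ask for more."""
    pattern, template = REFLECTION
    match = pattern.search(user_input)
    if match:
        return template.format(match.group(1).rstrip(".!"))
    return "Please tell me more."

print(eliza_reply("I am unhappy"))  # "Can you explain what made you unhappy?"
```

The striking thing is that a handful of pattern-matching rules like this, with no understanding at all, was enough to produce replies that felt personal.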

Weizenbaum marveled at the awareness the first users attributed to the program. When he had his secretary test it, she fairly quickly asked him to leave the room because, in her view, the conversation was getting too personal. To Weizenbaum, that was proof that his secretary felt the computer understood her. Perhaps the question is not even whether a chatbot understands a user, as long as the user has the illusion of being understood.