Years ago, my therapist recommended that I read a dog training book, telling me that “the same principles work for humans.” I thanked her, but said it seemed too condescending to train my husband like a dog. “No,” she laughed, “the book is to help you practice.” If therapists refer their patients to generalizable frameworks (in this case, even dog training), couldn’t an AI chatbot function as a therapist and give the same advice? The short answer is yes, but we don’t know at what risk.
Finding a therapist is difficult. The pandemic brought a sharp rise in depression and anxiety, worsening a global shortage of mental health professionals. Therapy can be expensive, and even for those who can afford it, asking for help takes effort: reaching out, making time, and coordinating with another person. Enter therapy bots: an alternative that eliminates almost all of that overhead.
Woebot and other therapy chatbots such as Wysa and Youper are growing in popularity. These 24/7 couch companions rely on methods such as cognitive behavioral therapy, which has a specific structure and well-established exercises. The premise makes sense, and human-computer interaction research shows that people can develop rapport with, a personal connection to, and trust in a chatbot. They might even trust a bot more than a person, for example because they feel less judged.
But while existing bots use established therapeutic frameworks, their effectiveness may depend on how well the user engages with them, something a human professional finds easier to guide. To date, there has been very little research on whether therapy bots work, whether they are good or bad for people, and for whom.
Woebot was criticized in 2018 for unwittingly appearing to condone the sexual exploitation of children. That problem has been fixed, but it won’t be the last. New generative AI methods might make a bot’s responses seem less scripted, but they carry the problem that no one can predict exactly what the bot might say, which is particularly risky in a therapeutic setting. AI-based text systems are notorious for sexism, racism, and misinformation.
Even with predefined, rules-based responses, it’s easy to hurt people seeking mental health support, many of whom are vulnerable or fragile. Bots are designed, for example, to recognize suicidal language and refer the user to human help, but there are many other situations in which a bot’s response may be wrong or taken the wrong way.
Good therapists are adept at knowing when and how (and how hard) to nudge someone in a particular direction. They read between the lines, observe gestures, and notice changes in tone, all of which inform their responses. They strike a difficult balance between meeting patients where they are and moving them forward. It’s a skill so hard that even human therapists stumble.
Bad human therapists are undoubtedly harmful. The profession has seen everything from dangerous advice to therapists who swindle their clients out of their life savings. But it has also built safeguards against harm: codes of ethics, licensing requirements, and other protections. And entrusting sensitive data collected in a mental health context to an individual is different from entrusting it to a company. Human therapists can make mistakes, but they don’t pose risk at scale. The promise of these therapy bots is exactly that: scale.
The biggest selling point is improved access to therapy, and it’s a compelling one. Lowering the barrier to mental health services is undoubtedly helpful, but we don’t yet know whether the benefits outweigh the risks. In the meantime, there are ways to support people without trying to recreate human therapists.
Ironically, a better solution may be simpler technology. In the 1960s, Joseph Weizenbaum created a chatbot named ELIZA that mostly responded to users with simple, reflective questions. Interactive formats like that can make traditional journaling, a technique many therapists recommend, more accessible. There are also mood-tracking and meditation apps that help people along their mental health journeys.
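For a sense of how simple that kind of technology is, here is a rough, hypothetical sketch of an ELIZA-style exchange in Python. Nothing below comes from Weizenbaum’s original script or from any commercial app; it just illustrates the idea of hand-written rules that mirror a user’s words back as a question.

```python
import re

# Hypothetical ELIZA-style rules: each pattern captures part of the user's
# sentence, and the template mirrors it back as a question.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "i'm": "you're"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"because (.*)", re.I), "Is that the real reason?"),
]

def reflect(fragment: str) -> str:
    # Swap first-person words for second-person ones ("my job" -> "your job").
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(user_input: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(reflect(match.group(1)))
    # Fallback: a neutral prompt that keeps the person writing, much like a journaling cue.
    return "Tell me more about that."

print(respond("I feel stuck in my job"))  # -> "Why do you feel stuck in your job?"
```

Because every possible response is written by a person in advance, a system like this can never say anything its designers didn’t anticipate, which is exactly the property the newer generative bots give up.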
Some therapy bot creators distance themselves from the replacement narrative, saying that they, too, simply provide an additional tool. But their tool is not designed to help therapists serve clients better, or to be used as an intervention alongside therapy. It is mainly positioned as an alternative for people who can’t or don’t want to see a therapist. Maybe that will turn out fine, but app designers need to be honest about what they’re doing.
It’s possible that therapy bots could be of great help to people. But we should be wary of any product rushed to market with insufficient research, especially AI-powered apps that may carry all sorts of known and unknown harms. This week, when I asked my therapist what she thinks of the bots, her main concern was simple: don’t trust anyone who’s in it for the money.