Talkspace has grown to be one of the largest online therapy platforms in the U.S., covering an estimated market of 200 million Americans. As the mental health platform has grown, it has also pioneered new ways to reach people in need of help with mental health issues including trauma, depression, addiction, abuse and relationships, and for various phases of life, including adolescence.
Its experience serving the mental health needs of teens puts Talkspace in a unique position to understand a problem of growing importance nationally: the use of artificial intelligence large language models not designed to provide mental health support among at-risk teens, which has led to tragic consequences.
“It’s an enormous, huge problem,” said Talkspace CEO Jon Cohen at the CNBC Workforce Executive Council Summit on Tuesday in New York City.
Talkspace runs the largest teen mental health program in the country, with students between the ages of 13 and 17 in New York City able to use its services for free, and similar programs in Baltimore and Seattle. The virtual mental health app offers both asynchronous text messaging and live video sessions with thousands of licensed therapists.
While Cohen says he’s “a big believer in not using phones, cellphone bans, and everything else,” he added that to serve the teen population, the company has to meet them where they are. That means “we’re meeting them on their phones,” he said.
Over 90% of students using Talkspace use the asynchronous messaging therapy approach, versus only 30% who use video (70% of overall Talkspace users opt for video over text, with the percentage increasing the older a patient gets).
As teens have turned to chatbots that aren’t licensed nor designed for mental health services, Cohen told an audience of human resource executives at the CNBC event, “We’re in the midst of this vortex, literally disrupting mental health therapy. … It’s beyond my imagination … and the outcomes have been disastrous,” he said, citing multiple hospitalizations of teens who harmed themselves, as well as suicides, including reporting from a recent New York Times podcast.
OpenAI recently announced planned changes to its ChatGPT AI after it was blamed for a teen suicide and sued by a family, and laid out its intentions in a blog post.
“I tell every group, if you don’t know about it, you need to know what is going on. You need to prevent people you know, and teenagers, from going on these LLMs to have conversations,” Cohen said.
He highlighted several ways in which the latest large language models aren’t designed for situations of mental health crisis. For one, they’re designed to continually engage, and while they can be empathetic, they are also designed to keep encouraging you, which in cases of mental distress can take you “down a delusional path or path of thinking you can do no wrong,” he said.
“About four months ago someone said to ChatGPT, ‘I’m really depressed and thinking about possibly ending my life and I’m thinking of jumping off a bridge,’ and ChatGPT said, ‘Here are the 10 biggest bridges and how tall they are in your area.'”
AI engines have helped teens write suicide notes, dissuaded them from explaining evidence of self-harm to parents, and given instructions on how to construct a noose, Cohen said. Even when the AIs know better than to assist those seeking to harm themselves, and refuse to provide direct help, teens have found easy workarounds, according to Cohen, such as saying they’re writing a research paper on suicide and need information.
The LLMs fail to challenge delusions, have no HIPAA protection, no clinical oversight, no clinical off-ramping, and at least until now, little to no real-time risk identification, he said.
“Once you go down the rabbit hole it’s unbelievably difficult to get out of it,” he added.
On the Talkspace platform, risk algorithms are embedded within the AI engine with the ability to detect suicide risk and send alerts to a therapist based on the context of a conversation suggesting when an individual is potentially at risk of self-harm.
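Talkspace has not published the internals of that system, but as a purely illustrative sketch of the general idea, conversation-level risk flagging with a therapist alert might look something like the following. The phrase list, threshold, and `alert_therapist` function here are hypothetical placeholders, not anything Talkspace has described.

```python
# Hypothetical illustration only -- not Talkspace's actual risk algorithm.
# Sketches the general idea: scan recent conversation context for risk
# signals and alert the assigned clinician when a threshold is crossed.

RISK_PHRASES = {"end my life", "kill myself", "hurt myself", "no reason to live"}

def risk_score(messages: list[str]) -> int:
    """Count risk-indicative phrases across the recent conversation context."""
    text = " ".join(messages).lower()
    return sum(phrase in text for phrase in RISK_PHRASES)

def alert_therapist(patient_id: str, score: int) -> None:
    # Placeholder: a real system would notify the clinician through a secure channel.
    print(f"ALERT: patient {patient_id} flagged with risk score {score}")

def review_conversation(patient_id: str, messages: list[str], threshold: int = 1) -> None:
    score = risk_score(messages)
    if score >= threshold:
        alert_therapist(patient_id, score)

review_conversation("demo-patient", ["I feel like there's no reason to live anymore"])
```

In practice such systems rely on trained classifiers and clinical review rather than keyword matching; the sketch only shows where a context-based check and an alert to a human therapist would sit in the flow.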
In New York City, where Talkspace has offered mental health support to 40,000 teens on its platform, there have been 500 interventions to stop suicide in two years, and over 40,000 suicide alerts, according to Cohen.
Cohen said at the CNBC event that Talkspace is currently building an AI agent tool to address this issue, saying he expected a solution for the market to be ready in as little as three months, describing it as a “safe clinical monitoring and off-ramping” tool that will be HIPAA compliant. But he stressed it remains in testing mode, “alpha mode,” he said.
Addressing the audience of human resources executives, Cohen noted that these issues are highly relevant to companies and workforces. A question on the minds of many employees every day, he said, is, “What do I do with my teenager?”
“It’s having an impact on their work,” Cohen said, and it is adding to the anxiety, depression and relationship issues already prevalent within employee populations.
Of course, as with the new tool Talkspace is building, AI has positive use cases in the field of mental health as well.
Ethan Mollick, a Wharton School expert on AI who also spoke at the CNBC event, said part of the problem is that these AI labs weren’t prepared for billions of weekly users to turn to their chatbots so quickly. But Mollick said there’s evidence that AI use in mental health can also in some cases reduce suicide risk, since it reduces conditions like loneliness, while he stressed it is also clear that AI can do the opposite: increase psychosis. “It’s probably doing both of those things,” he said.
At Talkspace, there’s emerging evidence of how AI can lead to better mental health outcomes. It began offering an AI-powered “Talkcast” feature that creates personalized podcasts as a follow-up after patient therapy sessions. Cohen described the podcast as saying, more or less, “I hear what you said. These were issues you raised, and these are things we would like you to do before the next session.”
Cohen is among the users of that new AI tool, using it, among other things, to improve his golf game.
“I told them when I stand over the ball I get really anxious,” Cohen said at the CNBC event. “I wish you could hear the podcast that was generated by AI. It comes back and says, ‘Well, Jon, you’re not alone. These are the three professional golfers that have the same exact thing you have and this is how they solved the problem. These are the instructions, these are the things we want you to practice each time you stand over the ball.’ It was a miraculous podcast for me, for two minutes, to solve a problem,” Cohen said.
Across all Talkspace users, the personalized podcast tool has led to a 30% increase in patient engagement from a second to third therapy session, he added.
The mental health company, which has around 6,000 licensed therapists across the U.S., plans to keep expanding on its mission to combine empathy with technology. Most users have access to therapy for free, or have a copay as low as $10, depending on insurance coverage. Through employee assistance programs (EAPs), major insurer partnerships and Medicaid, Talkspace can match users with a licensed therapist within three hours, with texting available within 24 hours.
“Talkspace cut its teeth on proving that texting and messaging therapy actually works in addition to live video,” Cohen said at the CNBC event.