How AI Could Expand And Improve Access To Mental Healthcare


A collaborative approach between AI and behavioral health providers could lead to more personalized care, improving patient outcomes, access, and efficiency.

Hailey Fowler and John Lester

5 min read

Editor's note: This article originally appeared in the World Economic Forum on Nov. 1, 2024.

Mental health is a major global public health concern. While an estimated one billion people suffered from mental health or substance abuse disorders before COVID-19, the pandemic accelerated the problem, resulting in a 25%-27% rise in depression and anxiety, according to the World Health Organization. About half of the world's population is expected to experience a mental health disorder during their lifetime, researchers at Harvard Medical School and the University of Queensland found.

Exacerbating the problem is a shortage of qualified professionals to treat patients. Globally, there are 13 mental health workers per 100,000 people, according to WHO’s Mental Health Atlas. The split is especially stark between developed and underdeveloped economies: the number of mental health workers in high-income countries can be 40 times higher than in low-income countries. The workforce shortage significantly limits access to care, notably in low- and middle-income nations, and contributes to an estimated 85% of people with a mental illness not receiving treatment, a study in the International Journal of Mental Health Systems reported.

The gap between supply and demand highlights an urgent need for alternative solutions. Just as telehealth expanded access to care for many conditions, artificial intelligence has the potential to make care available to more mental health patients, many of whom are eager to try it for therapy, according to a recent survey of 16,000 people in 16 countries by the Oliver Wyman Forum.

The applicability of AI tools will vary depending on the severity of a patient’s condition — both the underlying diagnosis and the symptoms they experience at a given point in time. But in many cases, AI could increase access to much-needed care. It can also be used to analyze data and help clinicians treat patients in real time with more personalized insights and guidance.

The rise of AI therapy

AI is already used in mental health treatment in a variety of ways. AI can combine insights from sources such as medical textbooks, research papers, electronic health record systems, and clinical documentation to help mental health professionals recommend treatments and predict how patients will respond. Meanwhile, self-diagnosis apps, chatbots, and other conversational therapy tools that leverage generative AI lower barriers to access for patients experiencing less severe episodes.

Many consumers are willing to try AI to manage their mental health. In fact, 32% of respondents to the Oliver Wyman Forum survey said they would be interested in using AI instead of a person. At the high end, 51% of respondents from India expressed a willingness to use AI-generated therapy, compared to 24% in the US and France. Interest in receiving support from a human-like therapist to manage mental health is highest in countries with fewer mental health professionals per capita, demonstrating how AI can extend access, especially in developing markets.

Younger people are more willing to leverage the technology for this purpose — 36% of Gen Zers and millennials reported interest in using AI for mental health, compared to 28% of other generations. This generational difference tracks with prevalence and attitudes toward mental health more generally: Generation Z is around two times more likely to struggle with mental health issues and twice as likely as non-Gen Zers to attend therapy as a result, according to an Oliver Wyman Forum report. Their proactiveness and interest in using AI could kickstart widespread development of AI therapy tools in the future.

Although generative AI has no actual emotions, consumers perceive it as a trusted, emotional confidant, perhaps due to its constant availability and its ability to consistently replicate empathy through patterns in underlying data. Our research shows that five times more respondents reported generative AI made them feel like they had a reliable confidant for sharing personal thoughts and seeking life advice, compared to a human. Of the four out of five people who said they preferred AI in at least one scenario, around 15% said they believe AI is more emotionally intelligent than humans.

Proceeding with caution

There are, however, risks, and a thorough understanding of the technology is required before deploying AI tools to ensure care is safe and effective. Importantly, AI-assisted mental healthcare should be viewed as part of a multipronged approach, rather than a direct substitute for human-to-human interactions.

It’s important to recognize that the severity of a patient’s condition fluctuates, sometimes during the same episode. Understanding the spectrum of behavioral health conditions is a must before deploying AI tools. Health systems can use that knowledge to provide more personalized care. And digital health companies must embed functionalities that direct patients to human support if the app detects a condition is worsening. The goal is to reduce the risk of a situation in which the patient gets and acts on bad advice, potentially leading to self-harm.

Confidentiality and security are other critical issues. Data can be, and often is, sold for marketing purposes, and regulators around the globe are putting AI firms and companies offering AI-enabled technology on notice. The US Federal Trade Commission warned that it would pursue cases against companies that loosen data policies to allow for greater mining of customer data. Meanwhile, member states of the European Union earlier this year unanimously approved the Artificial Intelligence Act, which sets out strict new standards for regulating AI, as well as significant penalties for violators.

Beyond data mining and privacy concerns, security is a high priority for healthcare organizations. Healthcare is a top target for cybercriminals, and consumers must understand the risks before downloading apps onto their devices. Awareness and recognition of these drawbacks will help ensure that generative AI is used safely, protecting patients and their rights to confidential treatment.

To maximize the benefits of the technology, generative AI should complement human providers rather than replace them. Ultimately, a collaborative approach between AI and human providers will allow professionals to focus even more on empathetic, personalized care, improving patient outcomes, access, and efficiency.