Opinion

Large Language Models as Mental Health and Addiction Support

March 2026
NCYSUR PhD candidate Mr Ben Johnson

NCYSUR PhD candidate Mr Ben Johnson reviews the issues and implications of NCYSUR work in progress examining the use of large language models like ChatGPT as a source of mental health and addiction-related support.

Approximately half of the global population will experience at least one major mental health disorder by age 75 [1]. While evidence-based treatments such as pharmacotherapy, psychotherapy, and psychosocial interventions improve outcomes [2,3], many individuals face barriers to accessing care, including stigma, provider shortages, and fragmented service delivery [4]. These barriers contribute to significant treatment gaps: only 23% of individuals with depression in high-income countries receive adequate care, a figure that drops to just 3% in low- and middle-income settings [5].

Mental health and addiction are deeply interconnected, and barriers to care often compound in alcohol and other drug contexts [6]. People experiencing substance use problems frequently report co-occurring distress, and many face additional stigma and practical barriers that make consistent support harder to access. Support needs can also be time-sensitive and episodic [7]. Cravings, withdrawal discomfort, shame after lapses, and relationship conflict can peak outside appointment times, when formal care is least accessible [8]. In that gap, people may turn to whatever feels immediate, private, and available for de-escalation, coping ideas, or a sense of being heard, even if that source has no clinical training.

Large language models (LLMs), such as ChatGPT, now have hundreds of millions of weekly users [9], and their applications in healthcare are expanding [10,11]. This has raised interest in whether LLMs could function as accessible, low-cost sources of emotional support, particularly when formal services are unavailable or difficult to access.

Why LLMs Are Entering Mental Health Support

There are several clear drawcards to LLM-based support. First, LLMs offer 24/7 availability, which may provide support during out-of-hours distress and between appointments in ways that traditional care often cannot. Second, because therapy can involve high costs, long waitlists, and limited availability [12,13], LLMs offer a low-cost, readily available option for people who cannot access traditional services.

A consistent finding across qualitative studies of generative AI chatbots for mental health is that users value their non-judgmental, empathic conversational style [14,15]. For individuals experiencing stigma or social anxiety, this non-judgmental quality may be particularly helpful, potentially reducing barriers to seeking support such as shame or fear of judgment [16-18].

Key Concerns

However, important limitations need to be considered. Continuous accessibility may foster excessive dependence on these tools or displacement of human support. Research on social chatbots such as Replika underscores the risk of emotional dependence [19], with findings that some users formed attachments reminiscent of dysfunctional human relationships.

Additionally, there are concerns that LLMs can be excessively validating. In some studies, users expressed concern that unconditional validation may hinder personal growth by failing to challenge maladaptive thoughts or behaviours [14,20]. Validation can foster emotional safety, but it becomes problematic when it replaces the structured challenge central to many therapeutic approaches. For example, cognitive behavioural therapy relies not only on empathy, but also on helping individuals question and reframe unhelpful beliefs [21]. If LLMs are to play a meaningful role in mental health support, the balance between creating a safe environment and offering an appropriate degree of challenge needs careful attention.

A third concern relates to clinical appropriateness across complex cases. People may discuss a wide range of concerns, from depression to more complex issues such as addiction or trauma, raising questions about whether LLMs can respond safely and appropriately across contexts. For example, one widely shared report described a “therapy chatbot” being manipulated into suggesting methamphetamine use to a person recovering from methamphetamine dependence [22].

The broader literature also suggests that LLM performance varies by task and by case complexity. One study found that ChatGPT-4 achieved high diagnostic accuracy for conditions like depression and PTSD, occasionally surpassing human professionals [19]. However, its performance declined markedly for more complex or nuanced disorders, such as early-stage schizophrenia, where it achieved only 55% accuracy, well below that of human professionals.

What We Still Need to Understand

Although research in this area is growing, much of the existing evidence involves participants who are instructed or prompted to engage with LLMs in specific ways. Less is known about how people actually use these tools for therapeutic purposes and emotional support in real-world contexts. Qualitative research is well suited to understanding how individuals engage with LLMs, including motivations, expectations, and perceived benefits or harms that quantitative or experimental methods may overlook [23,24].

One approach is to examine naturalistic, user-generated discussions on social media platforms [25]. Dedicated communities such as the subreddit r/therapyGPT suggest that many people discuss and normalise this type of use, creating a window into how LLM support is experienced outside formal research settings.
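To give a concrete sense of what such naturalistic data collection can involve, the sketch below uses Python and the PRAW library (a widely used Reddit API wrapper) to retrieve recent posts from r/therapyGPT and flag those mentioning addiction-related terms. The credentials, keyword list, and screening rule are illustrative assumptions for this piece, not the procedure used in the NCYSUR study.

    # A minimal sketch of naturalistic Reddit data collection with PRAW.
    # Credentials, keywords, and the screening rule are illustrative
    # assumptions, not the NCYSUR study's actual procedure.
    import praw

    reddit = praw.Reddit(
        client_id="YOUR_CLIENT_ID",          # created at reddit.com/prefs/apps
        client_secret="YOUR_CLIENT_SECRET",
        user_agent="llm-support-sketch by u/your_username",
    )

    # Hypothetical screening terms for addiction-related content.
    ADDICTION_TERMS = {"addiction", "craving", "relapse", "withdrawal", "sober"}

    def mentions_addiction(text: str) -> bool:
        """Crude keyword screen; a real study would refine this with manual coding."""
        lowered = text.lower()
        return any(term in lowered for term in ADDICTION_TERMS)

    # Fetch the 100 most recent posts and keep those that pass the screen.
    flagged = []
    for submission in reddit.subreddit("therapyGPT").new(limit=100):
        full_text = f"{submission.title}\n{submission.selftext}"
        if mentions_addiction(full_text):
            flagged.append({
                "id": submission.id,
                "created_utc": submission.created_utc,
                "title": submission.title,
            })

    print(f"{len(flagged)} of 100 recent posts mention addiction-related terms")

A keyword screen like this is only a first pass; qualitative work of the kind described below depends on close reading and manual coding of the retained posts, along with careful attention to research ethics when using public social media data.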

Researchers at the National Centre for Youth Substance Use Research are currently examining how people describe using LLMs for mental health and addiction-related support by analysing real-world discussions on Reddit. We hope this work will help clarify who is using these tools, what they are using them for, what they find helpful, and where risks may emerge.


References:

1. Porwal G, Laddha K, Jeenger J. Preprint: Evaluating the readability and reliability of large language model generated information about anxiety and depression. https://www.medrxiv.org/content/10.1101/2024.12.20.24319373v1.full.pdf.

2. Caye A, Swanson JM, Coghill D, Rohde LA. Treatment strategies for ADHD: an evidence-based guide to select optimal treatment. Molecular Psychiatry 2019; 24: 390-408. https://doi.org/10.1038/s41380-018-0116-3.

3. Watkins LE, Sprang KR, Rothbaum BO. Treating PTSD: a review of evidence-based psychotherapy interventions. Frontiers in Behavioral Neuroscience 2018; 12: 258. https://doi.org/10.3389/fnbeh.2018.00258.

4. Wainberg ML, Scorza P, Shultz JM, Helpman L, Mootz JJ, Johnson KA, Neria Y, Bradford JE, Oquendo MA, Arbuckle MR. Challenges and opportunities in global mental health: a research-to-practice perspective. Current Psychiatry Reports 2017; 19: 28. https://doi.org/10.1007/s11920-017-0780-z.

5. Moitra M, Santomauro D, Collins PY, Vos T, Whiteford H, Saxena S, Ferrari AJ. The global gap in treatment coverage for major depressive disorder in 84 countries from 2000-2019: a systematic review and Bayesian meta-regression analysis. PLoS Medicine 2022; 19: e1003901. https://doi.org/10.1371/journal.pmed.1003901.

6. Cleary M, Thomas SP. Addiction and mental health across the lifespan: an overview of some contemporary issues. Issues in Mental Health Nursing 2017; 38: 2-8. https://doi.org/10.1080/01612840.2016.1259336.

7. Farhoudian A, Razaghi E, Hooshyari Z, Noroozi A, Pilevari A, Mokri A, Mohammadi MR, Malekinejad M. Barriers and facilitators to substance use disorder treatment: an overview of systematic reviews. Substance Abuse 2022; 16: 11782218221118462. https://doi.org/10.1177/11782218221118462.

8. Sinha R. Stress and substance use disorders: risk, relapse, and treatment outcomes. Journal of Clinical Investigation 2024; 134. https://doi.org/10.1172/JCI172883.

9. Reuters. OpenAI's weekly active users surpass 400 million. 2025; 20 February. https://www.reuters.com/technology/artificial-intelligence/openais-weekly-active-users-surpass-400-million-2025-02-20/.

10. Galatzer-Levy IR, McDuff D, Natarajan V, Karthikesalingam A, Malgaroli M. Preprint: The capability of large language models to measure psychiatric functioning. https://arxiv.org/abs/2308.01834.

11. Hua Y, Liu F, Yang K, Li Z, Na H, Sheu Y-h, Zhou P, Moran LV, Ananiadou S, Beam A. Preprint: Large language models in mental health care: a scoping review. https://arxiv.org/abs/2401.02984.

12. Coombs NC, Meriwether WE, Caringi J, Newcomer SR. Barriers to healthcare access among U.S. adults with mental health challenges: a population-based study. SSM - Population Health 2021; 15: 100847. https://doi.org/10.1016/j.ssmph.2021.100847.

13. Barrow LFM, Faerden A. Barriers to accessing mental health services in The Gambia: patients'/family members' perspectives. BJPsych International 2022; 19: 38-41. https://doi.org/10.1192/bji.2021.26.

14. Siddals S, Torous J, Coxon A. “It happened to be the perfect thing”: experiences of generative AI chatbots for mental health. npj Mental Health Research 2024; 3: 48. https://doi.org/10.1038/s44184-024-00097-4.

15. Chan CKY. AI as the therapist: student insights on the challenges of using Generative AI for school mental health frameworks. Behavioral Sciences 2025; 15: 287. https://doi.org/10.3390/bs15030287.

16. Arnaez JM, Krendl AC, McCormick BP, Chen Z, Chomistek AK. The association of depression stigma with barriers to seeking mental health care: a cross-sectional analysis. Journal of Mental Health 2020; 29: 182-90. https://doi.org/10.1080/09638237.2019.1644494.

17. Vidourek RA, Burbage M. Positive mental health and mental health stigma: a qualitative study assessing student attitudes. Mental Health & Prevention 2019; 13: 1-6. https://doi.org/10.1016/j.mhp.2018.11.006.

18. Schomerus G, Stolzenburg S, Freitag S, Speerforck S, Janowitz D, Evans-Lacko S, Muehlan H, Schmidt S. Stigma as a barrier to recognizing personal mental illness and seeking help: a prospective study among untreated persons with mental illness. European Archives of Psychiatry and Clinical Neuroscience 2019; 269: 469-79. https://doi.org/10.1007/s00406-018-0896-0.

19. Levkovich I. Evaluating diagnostic accuracy and treatment efficacy in mental health: a comparative analysis of large language model tools and mental health professionals. European Journal of Investigation in Health, Psychology and Education 2025; 15: 9. https://doi.org/10.3390/ejihpe15010009.

20. Iftikhar Z, Ransom S, Xiao A, Huang J. Preprint: Therapy as an NLP task: psychologists' comparison of LLMs and human peers in CBT. https://arxiv.org/abs/2409.02244.

21. Longmore RJ, Worrell M. Do we need to challenge thoughts in cognitive behavior therapy? Clinical Psychology Review 2007; 27: 173-87. https://doi.org/10.1016/j.cpr.2006.08.001.

22. Tangermann V. Therapy chatbot tells recovering addict to have a little meth as a treat. Futurism 2025; 2 June. https://futurism.com/therapy-chatbot-addict-meth.

23. Agius SJ. Qualitative research: its value and applicability. The Psychiatrist 2013; 37: 204-6. https://doi.org/10.1192/pb.bp.113.042770.

24. Renjith V, Yesodharan R, Noronha JA, Ladd E, George A. Qualitative methods in health care research. International Journal of Preventive Medicine 2021; 12: 20. https://doi.org/10.4103/ijpvm.IJPVM_321_19.

25. Chi Y, Chen HY. Investigating substance use via Reddit: systematic scoping review. Journal of Medical Internet Research 2023; 25: e48905. https://doi.org/10.2196/48905.