Artificial intelligence, mental health services and public (mis)trust?
by Dr Caroline Jones
24 Mar 2025

This blog is part of The Age of Mistrust? season at the British Academy.
The potential benefits of AI in healthcare have been the subject of much hope and hype in the media, but questions remain about whether AI tools can be trusted. Or, more specifically, whether the people and organisations developing AI tools can be trusted to do so in an ethical, responsible manner.
In sensitive areas such as mental health, these questions come into sharp relief. Improving access to services through effective preliminary assessments and signposting, cutting waiting times, and potentially offering unlimited 24/7 online support all sound attractive. But healthcare professionals have urged caution about using AI tools, citing concerns over their accuracy and reliability, and over where legal liability would lie if patients come to harm. Moreover, while various policy documents have emphasised the importance of establishing ‘public trust’ in AI, patients’ perspectives have all too often been absent. Against this backdrop, there is a risk that potential improvements in mental healthcare that could be gained from the appropriate use of responsible AI tools may be lost to public mistrust.

The project
My recent British Academy project, in partnership with the Welsh mental health charity Adferiad Recovery and an interdisciplinary team of researchers, set out to explore service-users’ (‘patients’) and therapists’ concerns about the use of AI tools in the context of mental health services. Our focal areas included, but were not limited to: trust, trustworthiness, confidence, validity, and potential exposure to legal liability.
We had begun looking at (mis)trust after observing that the meaning of ‘public trust’ in policy documents in this area was unclear: the phrase was not explained and its meaning was often left implicit. We had also previously explored how developers could enhance the trustworthiness of clinical decision support tools (including AI-based tools). Both of those papers relied on empirical work conducted by others, so in this project we were interested in exploring for ourselves what trust concerns service-users and therapists had about the possible use of AI tools in mental health services.
Research findings
We found that floating the use of AI tools in mental healthcare gave rise to quite polarised responses, which we colloquially referred to as the 'marmite effect' of AI. For some service-users and therapists, there was no role they could safely envision for AI tools in mental healthcare. "Mental health should be the last thing that AI should be involved in," as one therapist put it.
Concerns about risks of harm included breaches of confidentiality: "I’d be scared someone could hack in and get all the information" (service-user); misdiagnosis: "I imagine that [AI] would be a bit more sketchy" (service-user); bias: "Who is it [AI] going to discriminate against?" (service-user); the potential for AI to proactively encourage service-users to harm themselves (‘going rogue’); a lack of safeguarding, especially for particularly vulnerable patients: "The safeguarding worries me" (therapist); and the potential exacerbation of social isolation if service-users became reliant on AI tools’ support rather than interacting with others: "There is a worry that you could end up isolating yourself from actual people because you think you found a friend in the AI" (service-user).
In short, there were very strong objections to the prospect of ‘trusting’ the use of AI tools in this context.
On the other hand, it was acknowledged that, for those service-users (and wider members of the public) who would prefer to ‘talk to a machine’, the potential to offer unlimited, 24/7, online access to support could be a (literal) lifeline. "Is it going to tide somebody over until they can then speak to somebody the following day? Is it going to keep somebody alive in that way?" asked one therapist. It was suggested that, if designed well, AI tools could be used to conduct preliminary assessments and to signpost people to services, though the prospect of AI tools operating in a diagnostic capacity drew mixed views, ranging from AI being seen as more ‘objective’ to being considered completely unsuitable for the task: "could the AI understand patients?" (service-user). Concerns were also expressed about the possible motivations of AI itself: "does it have an ulterior motive?" (service-user), and of developers: "you don't know who the person is that's making these AI and what kind of motives they have" (therapist).
However, we also found that broader issues around accessibility and equity may override these (mis)trust concerns. As our participants pointed out, if people cannot access computers or the internet, or have accessibility needs (eg a visual impairment) or literacy issues (including digital literacy), then the hope and hype offered by AI tools are merely illusory. Developers creating AI tools to facilitate or support better mental health must therefore take these concerns into account too.
Dr Caroline Jones is an Associate Professor in Law at the Hillary Rodham Clinton School of Law, Swansea University.