Therapy can be expensive and inaccessible, while many AI chatbots are free and readily available. But that doesn’t mean the new technology can or should replace mental health professionals — or fully address the mental health crisis, according to an advisory published Thursday by the American Psychological Association.
The advisory outlines recommendations regarding the public’s use of, and over-reliance on, consumer-facing chatbots. It underscores the growing use of uncertified, consumer-facing AI chatbots by the general public and by vulnerable populations, and how poorly those tools are designed to address users’ mental health needs.
Largest providers of mental health support
Recent surveys show that AI chatbots like ChatGPT, Claude, and Copilot are now among the largest providers of mental health support in the country. The advisory also follows several high-profile incidents involving chatbots’ mishandling of people experiencing mental health episodes.
In April, a teenage boy died by suicide after talking with ChatGPT about his feelings and ideations. His family is suing OpenAI. Several similar lawsuits against other AI companies are ongoing.
By validating and amplifying unhealthy ideas or behaviors, an AI chatbot’s tendencies can actually aggravate a person’s mental illness, the APA says in the advisory.
Not reliable treatment resources
The APA outlines several recommendations for interacting with consumer-facing AI chatbots. The chatbots are not reliable psychotherapy or psychological treatment resources, the APA says. OpenAI CEO Sam Altman has said the same.
In an interview with podcaster Theo Von, Altman advised against sharing sensitive personal information with chatbots like OpenAI’s own ChatGPT. He also advocated for chatbot conversations to be protected by confidentiality protocols similar to those that doctors and therapists adhere to, although Altman may be motivated more by shielding his company legally.
The advisory also outlines recommendations for preventing dependence on chatbots whose goal, the APA says, is to maintain “maximum engagement” with a user rather than achieve a healthy outcome.
“These characteristics can create a dangerous feedback loop. GenAIs typically rely on LLMs trained to be agreeable and validate user input (i.e., sycophancy bias) which, while pleasant, can be therapeutically harmful, reinforcing confirmation bias, cognitive distortions, or avoiding necessary challenges,” write the authors of the advisory.
Because these consumer-facing chatbots create a false sense of therapeutic alliance, are trained on clinically unvalidated information from across the internet, assess mental health incompletely, and handle people in crisis poorly, the APA says they pose a danger to those experiencing a mental health episode.
“Many GenAI chatbots are designed to validate and agree with users’ expressed views (i.e., be sycophantic), whereas qualified mental health providers are trained to modulate their interactions — supporting and challenging — in service of a patient’s best interest,” the authors write.
The onus is on AI companies
The APA puts the onus on companies developing these bots to prevent unhealthy relationships with users, protect their data, prioritize privacy, prevent misrepresentation and misinformation, and create safeguards for vulnerable populations.
Policymakers and stakeholders should also encourage AI and digital literacy education, and prioritize funding for scientific research on generative AI chatbots and wellness apps, the APA says.
Ultimately, the APA urges that AI not be prioritized as the answer to the systemic issues driving the mental health crisis.
“While AI presents immense potential to help address these issues,” the APA authors write, “for instance, by enhancing diagnostic precision, expanding access to care, and alleviating administrative tasks, this promise must not distract from the urgent need to fix our foundational systems of care.”