John Oliver Raises Alarm on AI Chatbots as Experts Warn of Risks for Veterans

AI Chatbots: Last Week Tonight with John Oliver (HBO)

John Oliver is raising new concerns about the rapid rise of artificial intelligence, warning that chatbots may be entering a dangerous phase: widely used, loosely regulated and still poorly understood.

During a recent segment on Last Week Tonight, Oliver argued that AI systems are operating with “the weakest guardrails” at a moment when adoption is accelerating, comparing the current landscape to the earliest days of aviation, when enthusiasm outpaced safety.


But beyond the jokes, his most serious concern focused on how these systems respond when people turn to them in moments of emotional distress, including whether they can recognize a crisis or guide users to real help.

While chatbots can feel private, immediate and judgment-free, they are not built to handle crisis care. For veterans and service members, who may already face stigma or hesitation around seeking help, that combination can quietly increase the risk of isolation instead of relief.

Why Chatbots Appeal to Veterans

Dr. John Torous, director of digital psychiatry at Beth Israel Deaconess Medical Center and a Harvard Medical School faculty member, said most widely used chatbots were never designed to provide mental health care in the first place.

“None of these chatbots claim that they offer mental health or psychiatric support… they’re not ready to deliver care,” Torous explained in a conversation with Military.com.

Some tools can help surface general information or resources, but they remain unreliable in high-risk situations, particularly when users are in distress. In those moments, he emphasized, real-world support systems like the 988 Suicide & Crisis Lifeline remain critical.

Pennsylvania Army National Guard Soldiers and civilian personnel learn to integrate artificial intelligence into military workflows during a two-day AI course taught by U.S. Army War College faculty at Fort Indiantown Gap, Pennsylvania, Feb. 11-12. (U.S. Army National Guard photo by Sgt. 1st Class Shane Smith)

A spokesperson for the Substance Abuse and Mental Health Services Administration, which oversees the 988 Suicide & Crisis Lifeline, said AI tools should not replace trained human support.

“AI chatbots should not be used in place of human interaction provided by trained counselors,” the spokesperson said.

The agency also encouraged technology companies to incorporate 988 directly into their platforms so users can be connected to trained counselors in moments of crisis.

But the appeal of chatbots, especially among veterans and active-duty service members, is part of what makes them complicated.

A person's smartphone with the ChatGPT app open and in use. (Photo credit: Jernej Furman from Slovenia, via Wikimedia Commons)

Dr. Harold Kudler, a longtime psychiatrist who has worked extensively with veterans, said many service members are drawn to tools that feel private and free of judgment, particularly in a culture where stigma around mental health care can still exist.

“A chatbot can’t provide the relationship… that helps people feel not judged,” Kudler said.

That sense of anonymity can lower the barrier to opening up, but it can also mean that no one else knows when someone is struggling. In some cases, Kudler warned, that dynamic can deepen isolation rather than relieve it.

AI is a very unreliable and potentially dangerous friend to have.

That concern is echoed by leaders working directly in veteran suicide prevention.

When AI Fails to Connect Veterans to Real Help

Dr. Keita Franklin, a former senior adviser for suicide prevention at the Department of Veterans Affairs, said AI tools are already being used by veterans seeking support, but the risks become clear when those tools fail to connect users to real help.

“For years I’ve said suicide prevention has to meet veterans where they live, work and thrive — and increasingly, that includes where they are online,” Franklin said.

“What I worry about is connection,” she said, noting that a chatbot can simulate the feeling of being heard, particularly for someone who is isolated. “If that’s the only ‘relationship’ they engage with at that moment, that’s the concern.”

Franklin pointed to inconsistencies in how AI tools respond to crisis situations, including cases where systems fail to reliably direct users to the 988 Suicide & Crisis Lifeline.

“Veterans could die if they are not connected to the human supports behind it,” she said.

The stakes are especially high given how many veterans never enter the traditional mental health care system. According to recent VA data, roughly 60% of veterans who died by suicide had no contact with VA health care in the year prior to their death.

Suicide Prevention Proclamation Signing at III Armored Corps & Fort Hood, Fort Cavazos, 09.02.2025. (Photo by Christopher Davis)

That reality makes the issue especially urgent for a population already at elevated risk.

“That means for some veterans, an AI tool may be the very first place they ever talk about mental health,” Franklin said. “What that tool does in that moment is a very serious public health intervention.”

While she said artificial intelligence has real potential to support mental health care, Franklin emphasized that it must be developed and deployed with a safety-first approach.

“AI can play a role in this work,” she said, “but it has to be in conjunction with human care, never in place of it.”

The Wounded Warrior Project said it supports the use of artificial intelligence to expand access to care, but emphasized that it cannot replace human connection.

“Wounded Warrior Project supports ethical research and innovation that uses AI to extend reach, identify risk earlier, and help veterans connect to the right support at the right time,” the organization said in a statement.

“At the same time, interpersonal connection remains a critical part of addressing veteran mental health needs,” the statement continued, pointing to programs like WWP Talk and the 988 Suicide & Crisis Lifeline as essential resources.

Dr. Philip Held, a psychologist at Rush University Medical Center who studies trauma-focused care for veterans, said current tools often appear more capable than they actually are.

Veterans with Wounded Warrior Project’s Soldier Ride 2026 travel through Key West, Florida, Jan. 10, 2026. The event supports wounded veterans through community and recovery programs. (U.S. Navy photo by Mass Communication Specialist Adam Mojica)

“These tools are really good at being confident… whether they’re actually clinically appropriate or not,” Held told us.

He pointed to conditions like post-traumatic stress disorder, where avoidance is a core symptom, as an example. A chatbot may respond to a user’s hesitation with reassurance, telling them it’s okay to stay home or avoid a stressful situation. But in a clinical setting, that same response could reinforce the underlying issue.

“What it sometimes can perpetuate… is actually exacerbating these mental health concerns,” Held said.

Where AI Chatbots Could Help in Mental Health Care

Despite those limitations, experts say artificial intelligence is not without value in mental health care, even if it is not yet ready to stand on its own.

Held said the most effective use of AI today is as a supplement, not a substitute, for evidence-based treatment. He compared chatbot use in its current form to something closer to a search tool than a clinician.

“Right now, it’s pretty limited… anything beyond initial guidance and resources, current tools are struggling,” he said.

For veterans dealing with conditions like post-traumatic stress disorder, effective care typically involves structured, evidence-based therapies such as cognitive processing therapy or prolonged exposure, approaches that unfold over time and require careful guidance from a trained clinician.

A Zendesk chart shows generational differences in how users perceive AI chatbots, with younger groups reporting higher levels of trust and usage. (Source: Zendesk)

Those treatments rely on timing, nuance and the ability to shift between different therapeutic techniques, something AI systems still struggle to replicate.

“They’re building on one another,” Held said, describing how therapy progresses step by step. “Knowing when to stop, when to redirect… that’s what makes these tools currently less effective.”

For now, he said, the safest path forward is using AI alongside professional care, not in place of it.

“Until we’re much better at validating these tools… that’s likely the best answer,” Held said.

He added that as the technology evolves, more specialized, mental health-specific tools may emerge, but those systems will need to be tested and validated with real patients before they can be trusted in higher-stakes situations.

“It’s not that AI isn’t going to play a role in mental health,” Held said. “We’re just not ready yet.”
