AI hallucinations: What are they, and are they always a bad thing?

Hallucinations are a regular focus of conversations about healthcare AI. But what do they actually mean in practice? That was the topic of a panel discussion held at the MedCity INVEST Digital Health conference in Dallas last week.
Soumi Saha, senior vice president of government affairs at Premier Inc., said AI hallucinations are AI “using its imagination,” which can sometimes harm patients by providing incorrect information.
Jennifer Goldsack, founder and CEO of the Digital Medicine Society, described AI hallucinations as the technology spouting “nonsense.” Randi Seigel, partner at Manatt, Phelps & Phillips, defined them as when AI makes something up, “but that sounds like a fact, so you don’t want to question it.” Finally, Gigi Yuen, chief data and AI officer at Cohere Health, said hallucination is when AI is “not grounded” and “not humble.”
But are hallucinations always bad? Saha posed this question to the panelists, wondering whether hallucinations can help people “determine potential gaps in the data or gaps in the study,” pointing to where more work needs to be done.
Yuen said hallucinations are bad when users don’t know the AI is hallucinating.
But, she added, “I would be happy to have brainstorming conversations with my AI chatbot” if it is transparent with her about where what it says comes from.
Goldsack compared AI hallucinations to missing data in clinical trials, arguing that missing data can actually tell researchers something. For example, in a mental health clinical trial, missing data may actually show that someone is doing well because they are “living life” rather than recording their symptoms every day. Yet when data goes missing, the healthcare industry often reaches for accusatory language, framing it as patient noncompliance rather than reflecting on what the missing data actually means.
She added that the healthcare industry tends to place a lot of “value judgments” on technology, even though technology “has no values.” So when the healthcare industry encounters AI hallucinations, humans need to be curious about why the hallucinations are happening and apply critical thinking.
“If we can’t make these tools useful to us, then I’m not clear how we can really have a sustainable healthcare system in the future,” Goldsack said. “So, I think it’s our responsibility to be curious, to observe these things, and to think about how we compare and contrast with other legal frameworks, at least as a starting point.”
Meanwhile, Seigel of Manatt, Phelps & Phillips highlighted the importance of weaving AI into medical and nursing students’ curricula, including how to understand it and how to question it.
“It can’t just be a course you click through in your annual training, where you spend three hours being told how to use AI. … I think it has to be iterative, not something taught once and then refreshed with courses you click through during every other annual training,” she said.