‘Whisper,’ an OpenAI Tool Used by Doctors, Is Hallucinating: Study

ChatGPT-maker OpenAI introduced Whisper two years ago as an AI tool that transcribes speech to text. Now, AI healthcare company Nabla and its 45,000 clinicians use the tool to help transcribe medical conversations across more than 85 organizations, such as the University of Iowa Health Care.

However, new research shows that Whisper has been “hallucinating,” or inserting statements that no one said into transcripts of conversations, raising the question of how quickly medical facilities should adopt AI tools that still produce errors.

According to the Associated Press, a University of Michigan researcher found hallucinations in 80% of Whisper transcriptions. An unnamed developer found hallucinations in half of more than 100 hours of transcriptions. Another engineer found inaccuracies in almost all of the 26,000 transcripts they generated with Whisper.

Faulty transcriptions of conversations between doctors and patients could have “really grave consequences,” Alondra Nelson, professor at the Institute for Advanced Study in Princeton, NJ, told AP.

“Nobody wants a misdiagnosis,” Nelson stated.

Earlier this year, researchers at Cornell University, New York University, the University of Washington, and the University of Virginia published a study that tracked how many times OpenAI’s Whisper speech-to-text service hallucinated when it had to transcribe 13,140 audio segments with an average length of 10 seconds. The audio was sourced from TalkBank’s AphasiaBank, a database featuring the voices of people with aphasia, a language disorder that makes it difficult to communicate.

The researchers found 312 instances of “entire hallucinated phrases or sentences, which did not exist in any form in the underlying audio” when they ran the experiment in the spring of 2023.
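This is not the study's actual tooling, but the core check it describes, flagging text present in a transcript that does not exist in any form in the underlying audio, can be sketched with a word-level diff against a ground-truth transcript. The function name and the toy strings below are illustrative assumptions.

```python
import difflib

def hallucinated_spans(reference: str, transcript: str) -> list[str]:
    """Return word spans that appear in the model's transcript but have no
    counterpart in the ground-truth text -- i.e. candidate hallucinations.
    (Illustrative sketch; assumes a human reference transcript exists.)"""
    ref_words = reference.lower().split()
    hyp_words = transcript.lower().split()
    matcher = difflib.SequenceMatcher(a=ref_words, b=hyp_words)
    spans = []
    for op, _i1, _i2, j1, j2 in matcher.get_opcodes():
        if op == "insert":  # words the model added with no source in the audio
            spans.append(" ".join(hyp_words[j1:j2]))
    return spans

# Toy example: the model invents an instruction that was never spoken.
ref = "the patient reports mild chest pain"
hyp = "the patient reports mild chest pain take two pills daily"
print(hallucinated_spans(ref, hyp))  # → ['take two pills daily']
```

A real evaluation would also need audio alignment and human review, since substitutions and paraphrases are harder to classify than pure insertions.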

Among the hallucinated transcripts, 38% contained harmful language, like violence or stereotypes, that did not match the context of the conversation.

“Our work demonstrates that there are serious concerns regarding Whisper’s inaccuracy due to unpredictable hallucinations,” the researchers wrote.

The researchers say the study could also point to a hallucination bias in Whisper: a tendency to insert inaccuracies more often for a particular group, and not just for people with aphasia.

“Based on our findings, we suggest that this kind of hallucination bias could also arise for any demographic group with speech impairments yielding more disfluencies (such as speakers with other speech impairments like dysphonia [disorders of the voice], the very elderly, or non-native language speakers),” the researchers stated.

Whisper has transcribed seven million medical conversations through Nabla, per The Verge.
