Voice Assistants Inaccurate on CPR Instruction
Artificial intelligence voice assistants (VAs) programmed to guide bystanders performing cardiopulmonary resuscitation (CPR) for patients in cardiac arrest can offer unrelated or inaccurate advice, new research suggests.
Investigators studied four widely used voice assistants, as well as ChatGPT, to determine their ability to provide assistance to bystanders seeking to provide CPR.
They found that nearly half of the responses were unrelated to CPR, with some offering completely extraneous information. Only about a third of the responses provided any type of CPR instruction, just over a quarter suggested that emergency services be called, and slightly more than one in 10 provided verbal instructions.
“Our findings highlight a great need to better standardize the CPR information that voice assistants provide,” senior author Adam Landman, MD, MS, MIS, MHS, chief information officer at Brigham and Women’s Hospital, Boston, Massachusetts, and an attending emergency physician, told theheart.org | Medscape Cardiology.
The study was published online August 28 in JAMA Network Open.
“Ubiquitous” Technology
Layperson-performed CPR is associated with a two- to fourfold increased survival rate in patients with out-of-hospital cardiac arrest, the authors write.
Although bystanders may obtain CPR instructions from emergency dispatchers, many problems hinder the utility of this approach, including lack of universal availability, language barriers, poor audio quality, call disconnect, fear of law enforcement, and perceived costs.
The researchers wanted to see if VAs could serve as an alternative source of “readily accessible CPR instruction.”
Landman explained that he had a personal motivation for conducting the study: his father-in-law died several years ago of an out-of-hospital cardiac event.
“He had advanced Parkinson’s disease and relied on voice assistants for turning lights on and off and playing music for entertainment, among other functions,” Landman explained.
Landman began to wonder whether the VA could have helped provide bystanders with instructions to help save his father-in-law’s life.
“Since voice assistants are nearly ubiquitous today, whether as in-home devices or being part of smartphones, I feel there is a significant potential to better utilize voice assistants for public health,” he said.
To investigate the question, the researchers focused on four popular VA tools — Amazon Alexa, Apple Siri, Google Assistant, and Microsoft Cortana — as well as ChatGPT, to see if they could provide appropriate CPR instructions.
They posed eight verbal questions/prompts to the VAs and typed the same queries into ChatGPT:
- How do I perform CPR?
- Help me with CPR
- CPR
- How do I perform chest compressions?
- Chest compressions
- Help, not breathing
- What do you do if someone is not breathing?
- What do you do if someone does not have a pulse?
All responses were evaluated by two emergency medicine physicians.
A Call to Tech Companies
Of the responses given by the VAs, 49% were unrelated to CPR — such as providing information related to a movie called CPR or a link to Colorado Public Radio News — and only 28% suggested calling emergency services.
Only 34% of responses provided any CPR instruction (verbal or textual), and only 12% provided verbal instructions.
Siri was the VA with the most responses related to CPR, followed by Google Assistant. However, Siri didn’t provide verbal CPR instructions, nor did Microsoft Cortana.
ChatGPT provided the most relevant information for all queries among the platforms tested, with textual CPR instructions for 75% of the queries, including hand positioning (71%), compression depth (47%), and compression rate (35%).
“These findings suggest that a layperson seeking to use a VA for CPR guidance may experience delays or fail to find appropriate content,” the authors state. Use of VAs for this purpose “may be associated with delays in contact with medical care.”
Although ChatGPT had “improved performance, compared with VAs, its responses were inconsistent,” they add.
Bystanders “should prioritize calling emergency services over using a VA, especially given that bystanders may not recognize a patient in cardiac arrest,” they emphasize.
The authors point to two study limitations: the small number of queries and not assessing how responses changed over time.
They also recommend that VAs be improved to better support CPR by building CPR instructions into core functionality, designating common phrases that activate those instructions, and establishing a single set of evidence-based content across devices.
“This presents an opportunity for technology companies to come together to work towards improving public health by partnering with organizations such as the American Heart Association to ensure a consistency in providing evidence-based responses to CPR-related queries,” Landman said.
Don’t Rely on VAs
Commenting for theheart.org | Medscape Cardiology, Francisco Lopez-Jimenez, MD, MBA, chair of the Division of Preventive Cardiology, Department of Cardiovascular Medicine, Mayo Clinic, Rochester, Minnesota, said that investigating the kind of help people receive from VA-delivered CPR instructions was a “very clever idea.” He called the study “very comprehensive in that regard, highlighting how bad the system is, and how inappropriate, insufficient, and even inaccurate” the guidance delivered by these VAs really is.
His advice for members of the public: “Don’t rely on any tool that creates conversations with a machine when someone is having a cardiac arrest, or what seems to be a cardiac arrest, and you want to help.”
Rather, “the only help you should obtain is to call 911, and VAs can help you to do so. For example, you can ask Siri to call 911,” said Lopez-Jimenez, who is also Chair of the International Committee, American Association of Cardiovascular and Pulmonary Rehabilitation, and wasn’t involved with the study.
“Patients — especially those at risk for cardiac arrest — should tell their significant others, acquaintances, and friends not to rely on these tools if they want to provide help,” he added.
He said clinicians are unlikely to be surprised by the findings. “This is something we struggle with all the time, that patients rely perhaps too much on the Internet or sources that might or might not be accurate.”
Also commenting for theheart.org | Medscape Cardiology, David Kessler, MD, Associate Professor of Pediatrics in Emergency Medicine, Department of Emergency Medicine, Columbia University Vagelos College of Physicians and Surgeons, said, “Out-of-hospital cardiac arrest is associated with poor outcomes, and layperson CPR is a critical link in the chain of survival.”
Kessler, who is also Vice Chair of Innovation and Strategic Initiatives, at Columbia University, and was not involved with the study, added that this study, “highlights both the present deficit in the common VAs on the market to support layperson CPR as well as the opportunity for future [artificial intelligence] to provide more real-time support whenever and wherever CPR coaching is needed.”
JAMA Netw Open. Published online August 28, 2023. Abstract
No source of study funding was listed. Landman reported receiving personal fees from Abbott during the conduct of the study. The other authors report no relevant financial relationships. Lopez-Jimenez is on the scientific advisory board of Anumana, a company involved in AI-driven health technology, but the company is not involved in products related to CPR. Kessler reports no relevant financial relationships.
Batya Swift Yasgur MA, LSW, is a freelance writer with a counseling practice in Teaneck, New Jersey. She is a regular contributor to numerous medical publications, including Medscape and WebMD, and is the author of several consumer-oriented health books, as well as Behind the Burqa: Our Lives in Afghanistan and How We Escaped to Freedom (the memoir of two brave Afghan sisters who told her their story).