KLAS-CHIME report offers AI insights from early adopters

A new joint report from KLAS and CHIME polled early adopters of artificial intelligence and machine learning tools, asking how the technology is affecting their clinical, financial and operational goals.

WHY IT MATTERS
The study is based on interviews with IT leaders at 57 organizations – CIOs, CMIOs, data scientists and more – that are using AI across a variety of use cases, from clinical decision support to patient engagement to revenue cycle management. It asked them about the tangible gains the technology has helped them achieve, gleaned insights about a handful of leading vendors, and identified common best practices for AI adoption.

KLAS focused on purpose-built AI vendors – those concentrating primarily on analytics and AI, with a dedicated, standalone product – and analytics platforms with AI infrastructure. It did not assess EHR vendors with AI capabilities or other vendors whose IT applications have some AI capabilities but aren’t meant to be standalone AI applications.

The research firm defines AI as software that “provides machine learning or natural language processing capabilities for healthcare-related clinical, operational, or financial areas.”

Specifically, machine learning tools for structured data use “algorithms and statistical models to effectively perform tasks without requiring explicit instructions, relying instead on patterns and inference to determine results.” NLP, meanwhile, applied to unstructured notes, “enables software solutions to understand, process, and analyze natural language, whether speech or text.”
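To make that distinction concrete, here is a minimal, hypothetical sketch (not drawn from the KLAS report) of the two capabilities the definitions describe: a model learned from structured, tabular data through patterns and inference, and unstructured note text converted into features a model can use. The column names, labels and example notes are invented for illustration, and scikit-learn simply stands in for whatever engine a vendor actually ships.

```python
# Minimal illustration of the report's two definitions. All data,
# column names and labels below are invented for this example.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.feature_extraction.text import TfidfVectorizer

# Machine learning on structured data: the model infers patterns from
# tabular features rather than following hand-written rules.
encounters = pd.DataFrame({
    "age": [67, 54, 81, 45],
    "prior_admissions": [2, 0, 5, 1],
    "length_of_stay_days": [4, 2, 9, 3],
    "readmitted_30d": [1, 0, 1, 0],   # illustrative outcome label
})
model = LogisticRegression().fit(
    encounters[["age", "prior_admissions", "length_of_stay_days"]],
    encounters["readmitted_30d"],
)

# NLP on unstructured data: free-text notes must be parsed into
# numeric features before any downstream modeling can happen.
notes = [
    "Patient denies chest pain; started metformin 500 mg daily.",
    "Follow-up for hypertension, blood pressure well controlled.",
]
note_features = TfidfVectorizer().fit_transform(notes)
print(note_features.shape)  # documents x extracted vocabulary terms
```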

The report specified the capabilities AI vendors worth their salt should be able to provide for their healthcare customers: some critical (prebuilt healthcare models, a machine learning platform to create models, NLP and free-text functionality) and others that are “nice to have” (supervised and unsupervised learning, forecasting, modeling multimedia).

Based on its polling of those 57 IT decision-makers, KLAS also rated customer satisfaction for six leading vendors: Jvion, DataRobot, KenSci, Clinithink, IBM Watson Health and Health Catalyst. Researchers noted, however, that “because healthcare AI is such a new market,” only one of those six – Jvion – “has enough evaluations (at least 15) to be considered fully rated. Findings on all other vendors are based on limited data (6–15 evaluations).”

Among its findings:

  • Clients of DataRobot like its “customer-focused partnership approach,” and its support for successful AI deployments. “Many clients say DataRobot’s robust training and certification process helps internal teams without a data science background successfully build models,” according to KLAS.
  • Likewise, KenSci “collaborates closely with clients, providing data science expertise, listening to constructive feedback from customers, and incorporating changes,” said researchers in the report, but “most customers acknowledge KenSci has work to do to mature their platform.”
  • As for IBM Watson Health, “today, most validated customers use Watson’s ML and NLP technology with a clinical focus and fairly narrow scope,” said researchers, who noted its comprehensive database of indexed medical publications, and that it’s able to be deployed for “highly valuable use cases, such as medical research, genomics analysis, and education for cancer patients.”
  • KLAS estimated that Jvion “has by far the largest client base in the healthcare AI market,” and likewise features “the largest offering of prebuilt healthcare content for machine learning models/vectors.” The report noted that “some customers cite implementation challenges, though they give Jvion credit for staying engaged.”
  • Customers of Health Catalyst use its AI-powered analytics for “highly diverse use cases,” researchers report, something that’s “partly attributable to the way the vendor helps customers build models and is invested in driving outcomes.”
  • And Clinithink, with its “ready-to-use NLP algorithms for clinical notes and SNOMED,” is lauded by its clients for the ability to process unstructured data. KLAS notes, however, that the vendor is “still building out their product portfolio from an NLP engine, so clients mention many features still need to be developed.”

KLAS also looked at two other vendors: SAS, which has a “large healthcare presence in academic medical centers,” and Symphony AyasdiAI, whose unsupervised learning capabilities, “a fairly unique offering in healthcare AI today,” are used for clinical variation management and research.

THE LARGER TREND
In addition to its vendor assessments, the KLAS report drew on its conversations with end-users to help clear up some lingering misconceptions about how AI can work for healthcare.

For one, it dispelled the notion that building data models is the most time-consuming task for AI deployment. Instead, researchers said, it’s important to be clear-eyed about the realities of data governance.

“Healthcare data is hard to clean and comes from many sources, and your organization may not have the expertise to feed the right variables or features into your models,” they explained. “Vendors and tools can help, but you need to do your own evaluation of the time and effort required to be successful with your models.”

Likewise, it’s a mistake to assume that, “once the model is built, it will run itself,” according to KLAS. In fact, healthcare organizations need to work to “ensure long-term model applicability, accuracy, and performance. Healthcare data is often heterogeneous; prebuilt models might work well in some cases, but you may have specific demographic situations to which they won’t apply, generating misleading results.”
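That caveat can be checked directly: compare a model’s scores with observed outcomes for each subgroup separately, not just in aggregate. Below is a minimal sketch of that idea under assumed data; the column names (age_band, risk_score, outcome) and the AUC metric are illustrative choices, not the report’s method.

```python
# Hypothetical check of model applicability across demographic
# subgroups; the data and column names are invented for illustration.
import pandas as pd
from sklearn.metrics import roc_auc_score

scored = pd.DataFrame({
    "age_band":   ["<65", "<65", "<65", "65+", "65+", "65+"],
    "risk_score": [0.2, 0.7, 0.6, 0.4, 0.9, 0.3],
    "outcome":    [0, 1, 0, 1, 1, 0],
})

# An acceptable overall score can hide a subgroup where a prebuilt
# model does not apply and its output is misleading.
print("overall AUC:", roc_auc_score(scored["outcome"], scored["risk_score"]))

for band, group in scored.groupby("age_band"):
    print(band, "AUC:", roc_auc_score(group["outcome"], group["risk_score"]))
```

In practice a check like this would run on an ongoing basis, as part of the long-term monitoring of applicability, accuracy and performance the report describes.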

And healthcare providers shouldn’t expect an easy deployment of a turnkey AI technology that will offer immediate improvements.

“There are plenty of user-friendly products that can help you build models quickly, and some vendors can provide deep data-science and healthcare expertise,” said researchers. “But if you want tangible outcomes from the data, you need to consider operational aspects, like how many departments, resources, and facilities across the care continuum need to be involved.” That enterprise-wide buy-in – crucial hard work related to people and process – is key to making AI models work.

ON THE RECORD
“While technology is important, success with AI is perhaps even more dependent on an organization’s operations and change management,” according to the KLAS-CHIME report. “The following best practices come from some of the industry’s most successful AI users.

“Embed AI in the workflow: When creating models, observe clinicians’ workflows and find the appropriate places in which to embed models or insights so that they are located within users’ regular routines and are not disruptive. Promote AI insights to clinicians as extra information to act on, not extra hoops to jump through.

“Bring together experts on AI, data science, modeling, analytics, and subject matter: Promote interdisciplinary collaboration. An AI project cannot be successfully rolled out unless all groups work closely together.

“Take ownership for driving change management and operationalizing insights: Take a social engineering approach to get staff engaged in implementing changes. Report progress and successes to staff to encourage adoption.”

Twitter: @MikeMiliardHITN
Email the writer: [email protected]

Healthcare IT News is a publication of HIMSS Media.
