Decoding the AI Enigma: Illuminating Healthcare’s Path Ahead

AI, Healthcare
4 minute read

Jeffrey Sullivan, Chief Technology Officer of Consensus Cloud Solutions, thinks of artificial intelligence (AI) the same way he does art: He knows it when he sees it. However, getting a solid definition of AI can be a challenge, in part because the goal posts keep moving. 

Sullivan was one of three panelists representing different types of healthcare technology partners who shared their perspectives on AI during the 2023 Healthcare Information and Management Systems Society (HIMSS) Gulf Coast Annual Regional Conference (GC3), Nov. 15–17, in Biloxi, Miss. 

Defining AI in Healthcare

Traditional discussions around AI in healthcare have often centered on whether it would replace human intelligence. In contrast, Sullivan points out that AI promises to deliver human-scale intelligence for specific tasks, such as detecting cancer on a CT scan or transforming unstructured data from digital faxes into structured data. Generative AI is what’s now “blazing across every screen,” including EHRs, as companies like Epic embed it in their technology, he said. 

When thinking about the use of AI in EHRs, Madelaine Yue, Vice President of Solutions Delivery at Experis Health Solutions, distinguishes between the augmented and generative sides of AI. Whereas augmented AI, which is already embedded in EHRs today, makes processes and steps easier for humans, generative AI is moving toward replacing human tasks such as creating customized patient education material. 

Although such tasks still require human intervention, “[It’s] taking us to that next step, which, ideally, if we’re able to keep growing this in the right way, there’s going to be more opportunities to replace certain tasks to embrace that efficiency,” she said.

Benefits and Use Cases 

Yue sees AI benefiting the healthcare industry in two ways: optimizing diagnosis and treatment, and aiding communications and consumerism. 

Pointing to a study in which the diagnostic accuracy of AI was compared with that of human radiologists, Yue said the research found that AI produced more false positives, whereas humans detected more complex conditions that warranted immediate treatment. Although AI helped to maximize the efficiency of the radiologists, “We’re not quite [to a point] where AI can actually just do all of this [by] itself,” Yue said. 

However, AI has proven helpful in its ability to mine data across EHRs and create pre-treatment plans that can then be validated by a physician. AI has also had a significant impact from a population health perspective because of its ability to predict readmission risks based on EHR data. 

Other benefits of AI include its ability to produce relatable, empathetic patient education materials and to predict drug shortages at hospitals and health systems. It has also proven useful in recruiting for clinical trials, according to Mason Ingram, Director of Payer Policy with Premier, which is leveraging the technology to match the most appropriate patients to clinical studies.

Challenges, Pitfalls, and Tips for Mitigation 

Although Sullivan and Ingram see AI as being helpful in terms of its ability to perform administrative tasks, they both think greater regulation is needed around the use of AI in the healthcare sector, especially for clinical purposes. To that end, the White House issued an executive order late last year to establish new standards for AI safety and security. 

Other challenges include the high computational cost of applying AI at commercial scale and the need for greater understanding around how it might create disparity in affordability and accessibility. There’s also a push to hire people with specialized training and skills to use AI across the industry, including within health systems. “How is [AI] going to be reimbursed? That’s going to drive complete changes in how it’s going to be adopted and what potential disparities can occur there,” Yue said. 

AI “hallucination” is another challenge for which AI developers are trying to find solutions. Hallucinations are incorrect or misleading results produced by generative AI models, stemming from insufficient training data, incorrect assumptions made by the model, biases in the data used to train the model, or the very nature of generative large language models. The use of watermarking, or content authentication, is one way to safeguard AI-generated diagnoses by flagging them for physician review. Another way to protect against hallucination is to have good data, according to Ingram. 

As AI adoption within the healthcare sector increases, the panelists stressed the importance of healthcare organizations becoming educated about the implications of the technology, especially as the industry shifts toward value-based care.