The ascent of artificial intelligence (AI) in healthcare is undeniable, with both in-house and third-party solutions entering the market. However, a critical concern emerges post-deployment: monitoring the health and real-world impact of these AI products often takes a back seat. The lack of rigorous monitoring in many AI applications poses potential risks to patients and operational efficiency.
Bommae Kim, lead data scientist at Hackensack Meridian Health, draws attention to this oversight in the AI lifecycle, noting that the focus tends to fall on technology and performance during development, at the expense of crucial post-deployment stages such as adoption and impact evaluation.
Kim, who holds a PhD in quantitative methods and a master’s in behavioral science, is scheduled to address this issue at the upcoming HIMSS24 conference in Orlando, Florida. She outlines a robust monitoring framework covering four key areas: product pipeline, model performance, user behaviors, and business impact.
“To address this gap, we developed a robust monitoring framework covering four key areas: product pipeline, model performance, user behaviors, and business impact,” said Kim. “The objectives are to detect potential issues, prevent critical errors, and measure program effectiveness for both in-house and third-party AI solutions.”
Despite the buzz around AI, Kim notes, “unfortunately, the importance of monitoring is often overlooked in contrast to the development and deployment of AI solutions.”
To streamline the process, Hackensack has designed a dashboard template featuring standardized metrics and a unified data model. This monitoring system facilitates discussions on AI product impacts among product teams, stakeholders, and leadership. An automated alert system promptly notifies product teams of concerning patterns, preventing erroneous results from reaching end-user applications and ensuring patient safety and compliance.
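To make the alerting idea concrete, here is a minimal Python sketch of how such a check over standardized metrics might look. It is illustrative only and assumes a simple unified schema (product, metric, date, value); the record fields, thresholds, and function names are hypothetical and are not drawn from Hackensack's actual implementation.

```python
# Hypothetical sketch of an automated alert check over a unified metrics schema.
# It scans model-performance records and flags any metric that falls below a
# floor or drifts sharply from its recent baseline. All names and thresholds
# are illustrative assumptions, not Hackensack Meridian Health's system.

from dataclasses import dataclass
from statistics import mean
from typing import List

@dataclass
class MetricRecord:
    product: str   # e.g. "readmission-risk-model" (illustrative name)
    metric: str    # e.g. "auroc" or "alert_acceptance_rate"
    date: str      # ISO date of the monitoring window
    value: float

def check_alerts(records: List[MetricRecord],
                 floor: float = 0.70,
                 drift_tolerance: float = 0.05,
                 baseline_window: int = 14) -> List[str]:
    """Return human-readable alerts for metrics that breach a floor or drift."""
    alerts = []
    # Examine each (product, metric) series independently.
    keys = {(r.product, r.metric) for r in records}
    for product, metric in sorted(keys):
        series = sorted((r for r in records
                         if r.product == product and r.metric == metric),
                        key=lambda r: r.date)
        latest = series[-1]
        baseline = [r.value for r in series[-(baseline_window + 1):-1]]
        if latest.value < floor:
            alerts.append(f"{product}/{metric} below floor: {latest.value:.3f}")
        elif baseline and (mean(baseline) - latest.value) > drift_tolerance:
            alerts.append(f"{product}/{metric} drifted: "
                          f"{mean(baseline):.3f} -> {latest.value:.3f}")
    return alerts
```

In practice, a check like this would run on a schedule against the monitoring data model and route any returned alerts to the responsible product team before degraded results reach end-user applications.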
Kim and the team at Hackensack stress the significance of evaluating and monitoring AI applications to ensure effective, safe, and fair AI-powered patient care and operations.
“AI solution monitoring facilitates meaningful discussions with stakeholders on the real-world impact of AI solutions, an aspect that is often overlooked,” said Kim. “This monitoring framework can help optimize the use of AI by designing integrated workflows and enhancing user adoption, rather than solely focusing on AI model development.”
Kim’s session, titled “Monitoring the Health and Real-World Impact of AI Applications,” is scheduled for March 12, from 4:15-4:45 p.m. in Room W307A at HIMSS24 in Orlando. Interested participants can learn more and register for the event.