Why agentic AI is not ready for prime time in healthcare

Agentic AI is a hot topic in many industries, including healthcare. Everyone seems to be buzzing about AGI, envisioning Jarvis from Iron Man: artificial intelligence systems with human-level intelligence and decision-making capabilities.

The excitement is understandable – suddenly, there is a real opportunity to build models and agents that mimic human tasks and can interact with humans. Businesses are eager for this capability, especially in industries facing labor shortages, where AI systems promise to automate tasks more efficiently, without errors or compensation.

But when it comes to healthcare, I think we need to take a minute and evaluate on a deeper level. Healthcare is a highly regulated industry, and for good reason: it cannot bear some of the risks and failures tolerated in other industries. We need to evaluate agentic AI through the specific lens of the healthcare industry and its workflows, assess the needs of providers, healthcare organizations, and patients, and recognize that healthcare is not monolithic – some workflows will be better suited for implementing AI than others.

Understanding this requires honesty about the strengths and weaknesses of agentic AI. LLMs excel at creative generation – ChatGPT, for example, can instantly write a song about Taylor Swift's latest album in the style of Dr. Seuss with minimal prompting. By contrast, rules-based engines excel at structured, deterministic output. Consider an autonomous car: "When the traffic light is red, then stop." What is interesting about agentic AI is that it falls somewhere in the middle – partly rules-based, partly creative – and navigating that middle ground, and deciding where in healthcare workflows its risks and rewards make it appropriate, is a huge challenge.

The most useful heuristic I have found on this topic is to look at risk and consequence. I define risk as the probability that something fails, and consequence as the result of that failure.

In workflows where the stakes are high, you don't want an agentic process owning them – the reality is that every AI model fails at some point, and when healthcare outcomes are on the line, the cost of failure can be too high.
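As an illustrative sketch only (not from the article), the risk-and-consequence heuristic can be expressed as a toy decision helper. The function name and thresholds below are hypothetical, chosen purely to show the shape of the reasoning:

```python
def automation_candidate(failure_rate: float, consequence: str) -> bool:
    """Toy illustration of the risk/consequence heuristic.

    failure_rate: estimated probability the agent fails at the task (0 to 1).
    consequence: severity of a failure -- "low", "moderate", or "severe".
    Thresholds are made up for illustration, not clinical guidance.
    """
    if consequence == "severe":
        # Patient-safety-critical workflows: keep a human in the loop
        # no matter how rarely the model fails.
        return False
    if consequence == "moderate":
        return failure_rate < 0.01
    return failure_rate < 0.05  # routine, low-consequence tasks

# Mapping the article's examples onto the sketch (failure rates invented):
print(automation_candidate(0.001, "severe"))  # advance directives -> False
print(automation_candidate(0.02, "low"))      # EHR data extraction -> True
```

The point of the sketch is that consequence dominates: a severe-consequence workflow is disqualified even at a very low failure rate, which matches the examples that follow.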

Here are two examples where an agentic workflow does not work:

  • Authoring or defining advance directives (end-of-life plans). This is a creative and interpretive workflow that requires empathy, human experience, and judgment about when to guide and when to listen, because the sources of information (people and their circumstances) are not all equal. It is also a situation where you cannot afford mistakes of any shape or form.
  • Managing triage in the ER – a chaotic, fast-moving environment. People are best suited to make quick decisions there; there is no time to feed data into an agent.

However, here are two examples of where agentic AI does work in healthcare:

  • Using an agent to unlock EHR data and automate a range of tasks that require navigating a user interface. Enterprise software has done this before – it used to be called RPA, or robotic process automation – but agent-powered processes can now do it more resiliently.
  • Reviewing patient charts to ensure clinicians don't miss emerging chronic diseases.

Currently, I think agentic AI is at a stage where, if it starts telling doctors what to do and taking over decisions, it will fail – and worse, it will damage (by association) trust in AI in medicine. Humans must stay in the loop wherever patient safety, empathy, and human judgment take precedence over cost savings and potential efficiency gains. But automating tasks like annotation and data extraction (the tedious, manual processes in healthcare that don't require human judgment) is a great place to start implementing AI. The healthcare industry should look at how AI can find and surface relevant contextual data so that human clinicians can make informed decisions – freeing up providers' time to reduce burnout and allowing them to deliver more personalized care that improves patient outcomes.


Isaac Park's formal education in software development began after a youth spent tinkering with technology through high school. After moving to Durham, North Carolina, he graduated from Duke University with a bachelor's degree in computer science. Isaac began his tech career as a software developer building front-end frameworks, then moved into product management, guiding stakeholders and technical teams through projects from inception to final release.

In 2009, he co-founded Pathos Ethos, an innovation and product studio building digital products that transformed businesses in the healthcare and defense verticals, and coached startup and corporate innovation teams – from launching software products generating millions of dollars to building native mobile applications used by more than one million people. At the end of 2022, he stepped back from Pathos Ethos and joined the Pratt School of Engineering, serving as an instructor in product management and innovation at the Christensen Family Center for Innovation. In 2023, he co-founded Keebler Health, an AI-native healthcare technology company, where he currently serves as CEO.

This post appears through the MedCity Influencers program. Anyone can publish their perspective on business and innovation in healthcare on MedCity News through MedCity Influencers. Click here to find out how.
