6 Questions to Ask Before Integrating AI into Clinical Workflow

The emergence of large language models (LLMs) prompted a research team to compare how the technology performs at identifying potential drug-drug interactions. A retrospective analysis found that a traditional clinical decision support tool identified 280 clinically relevant interactions, while the AI identified only 80.
Such research is a good example of why healthcare providers are cautious about incorporating AI into clinical practice. Respondents to the 2024 Healthcare IT Spending study by Bain and KLAS cited regulatory, legal, cost, and accuracy concerns, all of which carry added weight when patient safety is at stake.
However, the study also found that AI continues to attract healthcare providers. Respondents were optimistic about implementing generative AI and more willing to try technologies that improve outcomes.
Integrating AI into clinical workflows raises a long-standing central dilemma: how do we use technology to improve care while minimizing risk?
Let’s consider this question through the lens of clinical decision support, particularly drug information for prescribers. Technology has supported clinicians’ understanding of drug safety for decades, because no clinician can keep pace with the growing and evolving body of evidence. PubMed, for example, now contains more than 30 million citations and adds roughly 1 million new citations each year.
Technology can help. Content databases monitor the world’s literature, regulatory updates, and clinical guidelines. Editorial teams rigorously assess quality, integrate the findings into the content, and make it available to clinicians at the point of care.
A sound decision support system provides trustworthy, evidence-based information that clinicians have carefully and accurately curated from the universe of medical literature available today. This gives clinician users the latest relevant evidence to inform specific patient care decisions at the point of care. AI can enhance the experience by surfacing information in these systems faster and with fewer clicks, especially when it is purpose-built for that task.
General-purpose AI vs. purpose-built AI
In recent years, LLMs such as ChatGPT have taken a central role in conversations about AI. These tools demonstrate impressive comprehension and reasoning abilities in natural language.
But simply adding general-purpose AI tools to these decision support systems and pointing them at a body of clinical content will not deliver the benefits many people expect. Research offers a cautionary tale for anyone who assumes a general-purpose LLM can substitute for a purpose-built decision support system in evaluating drug-drug interactions.
For example, one study found that ChatGPT missed clinically important potential drug interactions. In another, ChatGPT could identify potential drug interactions but scored poorly at predicting severity and onset and at providing high-quality documentation. These findings point to the shortcomings of systems that are not purpose-built for clinicians making patient care decisions.
A few simple questions can help healthcare organizations determine whether the decision support AI they are considering is purpose-built for clinicians:
- Who is this AI designed for? Purpose-built AI is defined by its focus: it targets a narrow audience and concentrates on the issues that matter most to that audience. Done right, these systems should outperform general-purpose systems in their area of expertise.
- What data is this AI trained on? Direct citation of evidence must be a core part of any answer a decision support tool provides. A general-purpose AI system may scour the internet for relevant content, but it may incorporate flawed evidence that has never undergone expert review. Many publications do not make their full text freely available online, so an LLM may miss the details of key works, creating evidence gaps. The system should also be updated frequently to incorporate the latest findings and regulatory materials. Finally, users should have clear visibility into the information the AI draws on to produce its answers.
- How does this AI interpret my question? In health care, users may ask questions that contain ambiguous acronyms or incomplete follow-ups. For example, if someone types “vancomycin” on its own, it looks like an isolated fragment. But if the previous question was “monitoring parameters for cephalosporins,” the correct interpretation is clearly “monitoring parameters for vancomycin.” The AI system should tell the user how it has interpreted the question, so the user knows from the start whether the AI is even answering the right question. A clarification mechanism lets users refine their queries before the AI provides answers.
- Does this AI provide multiple appropriate answers? A common scenario for nurses and pharmacists is determining whether multiple drugs can be combined in different intravenous (IV) solutions for administration. A simple chat response may offer only a single answer, but clinicians may need several options, especially when a patient has limited IV access. Clinicians need systems that let them use their best judgment to manage medications safely.
- Will this AI recognize its limitations? AI technologies are improving every day, but they still have limits. Finding answers quickly matters, but expectations should be realistic. For example, a user may ask a question that effectively requires the AI to perform a meta-analysis, which is difficult to do both accurately and quickly enough to support decisions at the point of care. AI systems must recognize and be transparent about their limitations rather than risk fabricating answers that endanger patient safety.
- Are clinicians involved in developing this AI? Clinicians must stay in the driver’s seat for any tool, technology, or process that affects patient safety. Period. Clinicians bring essential perspective to technology development and to the ongoing feedback loops that continuously improve these systems. Clinician and user testing should validate key components of every clinical decision support tool.
A collaborative approach delivers better results
Ultimately, purpose-built AI focuses on the outcome that matters: helping clinicians access trusted information in the delivery of care. Humans and AI working together achieve better results than either alone.
Image: Mr. Cole_photographer, Getty Images
Sonika Mathur is executive vice president and general manager of Micromedex, a clinical decision support technology for drug information. Sonika has over 20 years of experience in clinical decision support, technology-enabled care delivery, and patient engagement. Prior to joining Merative, she led initiatives for Cityblock Health and Elsevier’s clinical solutions.
This post appears through the MedCity Influencers program. Anyone can publish their perspective on business and innovation in healthcare on MedCity News through MedCity Influencers. Click here to find out how.