
In the era of artificial intelligence, when inaction becomes negligence

For decades, medical malpractice centered on “mistakes”: wrong diagnoses, botched surgeries, incorrect medications. But I see a more insidious and rapidly emerging threat: “errors of omission.” We are on the verge of being held accountable not only for what we do wrong but also for what we fail to do, especially when readily available, life-saving technology goes unused.

Changes in the standard of care

Medical standards of care are not static; they evolve with scientific discoveries and technological advancements. What was considered cutting-edge yesterday is standard practice today, and today’s innovations will be tomorrow’s expected norm. Artificial intelligence is accelerating this evolution at an unprecedented rate. The question is no longer whether AI will transform healthcare, but when AI’s absence will be considered an oversight.

Whose responsibility is it to usher in this new era? While the responsibility is collective, the chief medical information officer (CMIO) and chief medical officer (CMO) are in the vanguard. They serve as the critical bridge between clinical practice and technological innovation. Their mission goes beyond maintaining IT infrastructure; it includes identifying, reviewing and strategically integrating technologies that can measurably improve patient care, enhance security and increase efficiency. With so many AI tools on the table, the task is not simply adopting new ones; it is redefining what constitutes optimal care.

The cost of missed opportunities: a case study in lung cancer

Consider the tragic case of lung cancer. For too long, diagnosis has come at an advanced stage, significantly limiting treatment options and survival rates. Imagine a scenario in which a patient (let’s call her Sarah) develops a persistent cough. Her chest X-ray is deemed “unremarkable.” A few months later, she is diagnosed with stage III lung cancer. Now imagine a world (a reality we are rapidly approaching) in which AI-driven diagnostic tools, integrated into radiology workflows, flag the subtle abnormalities on that initial X-ray, prompting further investigation and an early, stage I diagnosis.

The difference between a stage I and a stage III diagnosis is not just a matter of clinical staging; it is often the difference between life and death, between curative and palliative care. Patients and their families are increasingly aware of these technological advances. Lawsuits have begun to surface from patients alleging delayed diagnoses, arguing that hospitals failed to take advantage of existing technology that could have detected their conditions earlier. Legal scholars and medical ethicists are actively debating the implications of omitting AI from diagnostic processes, and “failure to use AI” claims are expected to increase as the technology becomes more ubiquitous and demonstrably effective.

Just as advanced robotic surgical platforms have become the benchmark for complex procedures, artificial intelligence is quickly becoming the benchmark for advanced diagnostics, risk stratification and proactive intervention. Patient expectations are changing: if the data exists and AI can analyze it to prevent harm, why not use it?

Ethical and financial imperatives

The costs of inaction go far beyond legal settlements. Preventable suffering and death carry a profound moral burden. Trust erodes in healthcare institutions that are slow to adopt innovations that protect patients. And there are long-term financial consequences: longer hospital stays, readmissions, and more complex and expensive treatments that could have been avoided with earlier intervention.

Investing in AI isn’t just about gaining a competitive advantage; it is about delivering on our fundamental commitment to do no harm and to provide the best care possible, a commitment that extends beyond the exam room to how the entire system works. When providers are hampered by outdated tools that delay critical surgeries or slow the discharge process, the promise of “the best care” is broken. Giving staff the technical support they need to accomplish this mission, and ensuring patients receive timely, high-quality care, is an ethical imperative.

Overcoming barriers to AI adoption

Of course, barriers to AI adoption exist: the initial investment, the complexity of integrating with legacy systems, the need for strong data governance, and the natural skepticism of clinicians accustomed to traditional approaches.

Leading academic institutions recently released impressive frameworks for evaluating and implementing artificial intelligence solutions, including Stanford University’s FURM and Wake Forest University’s FAIR-AI. These ambitious efforts typically draw on deep technical expertise, multiple governance committees and multidisciplinary leadership.

Yet for every Stanford or Wake Forest, there are dozens of smaller hospitals that simply lack the staff and infrastructure needed to replicate these processes. Academic medical centers make up less than 5% of U.S. hospitals, which means the vast majority of patients receive care in environments with tight budgets, lean IT teams, and limited governance structures.

Frameworks such as FURM and FAIR-AI can be refined and adapted into lightweight toolkits suitable for adoption by smaller organizations. We also need to share resources (e.g., rigorous academic research, governance models, standard assessment methodologies) to enable all health systems to efficiently and safely deploy AI to improve patient care.

A call to action: Shaping the future of healthcare

The malpractice scenario sketched above is not some distant dystopian fantasy; it is our immediate reality. Healthcare leaders, especially CMIOs and CMOs, must actively advocate for the strategic adoption of AI. We must educate our clinicians, invest in the necessary infrastructure, and foster a culture that embraces innovation as a cornerstone of patient safety. The days of passive observation are over. The future of medical liability will increasingly depend on whether we seize the opportunity to leverage artificial intelligence to improve care, or whether we allow errors of omission to define our legacy. The lives of our patients and the integrity of our institutions depend on decisive action today.

Dr. David Atashroo is Chief Perioperative Medical Officer at Qventus. In this role, he leads the design and direction of Qventus perioperative solutions, which use artificial intelligence and automation to optimize operating room utilization and drive strategic surgical growth. He holds an MD degree from the University of Missouri-Columbia and received his plastic surgery training at the University of Kentucky before completing a postdoctoral fellowship at Stanford University School of Medicine. In addition to his role at Qventus, Dr. Atashroo continues his clinical practice at UCSF.

This article appeared through the MedCity Influencers program, through which anyone can share their perspective on healthcare business and innovation on MedCity News.
