
'Free' AI pilots are costing health systems millions

The phrase “nothing is free” usually refers to hidden, intangible costs such as reputational damage or mental anguish. But in healthcare, the hidden costs of so-called “free” AI pilots are far more concrete.

Recent headlines paint a disturbing picture of artificial intelligence adoption. The State of AI in Business 2025 report from the Massachusetts Institute of Technology (MIT), for example, found that 95% of generative AI pilots fail. MIT calls this the “GenAI divide”: most companies rely on general-purpose tools that impress in demos but fall apart in real workflows, while only a few integrate AI deeply enough to have a meaningful, sustained impact.

Nowhere is this divide more apparent than in healthcare. Every healthcare system in America is inundated with “free trials” from AI vendors. The pattern is familiar: a demo piques a decision-maker’s interest, the team gets approval to dive in, organizational overhead starts to creep up, employees devote their time to the pilot, and before long the opportunity costs accumulate. In 2022, Stanford University reported that “free” models (those that require custom data extraction or further training to be suitable for clinical use) can cost up to $200,000 and still fail to translate into clinical benefit in the form of better care or lower costs.

Multiply that across dozens of pilots, and the cost of failure can quickly soar into the millions of dollars.

Artificial intelligence has been positioned as healthcare’s savior over the past few years. When these expensive experiments fail to deliver, trust in the technology erodes; every stalled or abandoned pilot reinforces the perception that the technology is more hype than help. But the problem isn’t that AI can’t live up to its promise. The American Medical Association, for example, found that clinicians who had access to the right automated tools reported lower levels of burnout.

When deployed correctly, AI can reduce administrative burdens, streamline communication, and provide meaningful support for clinicians’ workflows and decision-making. Pilots are crucial because they demonstrate whether AI tools can actually deliver these improvements in practice. But they must be rigorously implemented and measured. Not all AI is created equal; choosing the right tool for the right job is key, but more important is how leaders set the conditions for success once they adopt the tools. Without clear goals and shared accountability, AI pilots can quickly become an exercise in hope rather than strategy.

Running pilots on hope is an expensive way to innovate. Artificial intelligence is powerful, but it requires structure to succeed. Three disciplines can reverse the current trajectory.

Three disciplines for AI adoption

First, design discipline. Before agreeing to another pilot, healthcare leaders must define the tool’s purpose: what problem it solves, when it should be used, and where it fits into the workflow. Most importantly, leaders should ask why they need it at all. Without that answer as a guiding principle, measurement becomes impossible and adoption can lag or fail entirely.

Second, results discipline. Every pilot should start with a definition of success that is grounded in the organization’s priorities and is both specific and measurable. That might mean reducing report turnaround time, easing administrative burden, or improving patient access. For example, an AI model designed to flag patients at risk for breast cancer and encourage follow-up would need to show that it successfully flags risk, gets patients scheduled for critical follow-up care, and helps detect potential cancers earlier.

Finally, partnership discipline. As with any solution, the easiest option is to default to the biggest vendor or the one with the broadest catalog. But size and scale alone don’t guarantee success; far from it. In fact, as MIT argues in its recent report, general-purpose generative AI tools tend to fail precisely because they are not designed for the complexity of specific workflows, and healthcare workflows are especially complex. Successful organizations will choose partners who understand their domain, help define outcomes, and share accountability for the results.

In other words, don’t choose the cheapest or the largest solution; choose the right one. Make the wrong choice, and you are essentially running a do-it-yourself project while assuming all of the cost and risk. Make the right one, and you are building a path to sustainable success.

AI in healthcare won’t fail because the technology is bad or broken. It will fail because leaders adopt it without discipline, a framework, or the right partners. The hidden costs of “free” are too high to keep learning the same lesson.

Photo: Damon_Moss, Getty Images


Demetri Giannikopoulos is chief innovation officer at Rad AI, a leader in generative artificial intelligence in healthcare. He has more than 20 years of experience in healthcare technology focused on driving the adoption of artificial intelligence in complex clinical environments, with deep expertise in using AI to bridge the gap between regulatory requirements, innovative AI products, and provider needs. Demetri has contributed to national guidance such as BRIDGE, a framework designed to accelerate the adoption of artificial intelligence in healthcare, and serves as a working group member of the Health Artificial Intelligence Alliance. He also serves as a patient advocate on the ACR’s Patient- and Family-Centered Care Quality Experience Committee and as a Patient-Centered Outcomes Research Institute (PCORI) Ambassador.

This post appears through the MedCity Influencers program. Anyone can publish their perspective on business and innovation in healthcare on MedCity News through MedCity Influencers. Click here to find out how.
