How do digital health leaders view Trump’s new AI action plan?

The White House released “America’s AI Action Plan” last week, which outlines a range of federal policy recommendations aimed at strengthening the U.S.’s leadership in international AI diplomacy and security. The plan seeks to cement American AI dominance through deregulation, expansion of AI infrastructure and a “try-first” culture.
Here are some of the measures included in the plan:
- Deregulation: The plan aims to repeal federal and state rules that hinder the development of artificial intelligence, and federal funds may be withheld from states with restrictive AI regulations.
- Innovation: The proposal calls for regulatory sandboxes — controlled environments in which companies can test new technologies.
- Infrastructure: The White House plan calls for rapid buildout of the country’s AI infrastructure and offers companies tax benefits to support it. This includes fast-tracking permitting for data centers and expanding the electric grid.
- Data: The program aims to create industry-specific data usage guidelines to accelerate AI deployment in key areas such as healthcare, agriculture and energy.
Healthcare AI leaders are cautiously optimistic about the action plan’s pro-innovation stance, and they welcome its advocacy for better AI infrastructure and data exchange standards. However, experts still have some concerns about the plan, such as its lack of attention to AI safety and patient consent, and its failure to mention key healthcare regulators.
Overall, experts believe the plan will ultimately be a net positive for the development of healthcare AI – but they do see room for improvement.
Easing data center regulations
Ahmed Elsayyad is CEO of Ostro, which sells AI-driven engagement technology to life sciences companies, and he sees the plan as an overall benefit for AI startups. This is mainly due to the plan’s focus on scaling up infrastructure such as data centers, energy grids and semiconductor capacity, he said.
Training and running AI models requires enormous computing power, which translates into high energy consumption – and some states are trying to rein in those growing levels of consumption.
Elsayyad noted that local governments and communities have considered regulations restricting data center construction due to concerns about strain on the grid and environmental impacts – but the White House’s AI action plan is designed to remove these regulatory barriers.
Nothing on AI safety
However, Elsayyad is concerned about the plan’s silence on AI safety.
He wishes the plan placed more emphasis on AI safety, which is a major priority in the AI research community – leading companies such as OpenAI dedicate substantial computing resources to safety efforts.
“OpenAI famously said they will allocate 20% of their computing resources to AI safety research,” Elsayyad said.
He pointed out that AI safety is a “major topic” in the digital health community. Responsible AI use, for example, is frequently discussed at industry events, and organizations focused on healthcare AI safety, such as the Coalition for Health AI and the Digital Medicine Society, attract thousands of members.
Elsayyad said he was surprised that the new federal action plan did not mention AI safety, and that adding language and funding around it would make the plan more balanced.
He is not alone in noticing the absence of AI safety from the White House plan – Adam Farren, CEO of EHR platform Canvas Medical, was also struck by the lack of attention to it.
“I think there needs to be work toward requiring AI solution providers to offer transparent benchmarking and safety evaluations of the products they deploy on the clinical front lines, and that feels like it’s missing,” Farren said.
He pointed out that AI is fundamentally probabilistic and requires continuous evaluation. He said he supports mandatory frameworks for assessing the safety and accuracy of AI, especially in higher-stakes use cases such as drug recommendations and diagnosis.
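To illustrate the idea of continuous evaluation that Farren describes, here is a minimal sketch of a recurring accuracy check against clinician-adjudicated labels. The function name, the 0.95 floor, and the batch format are all hypothetical – neither the action plan nor Farren specifies any particular framework.

```python
# Hypothetical sketch: flag a probabilistic clinical AI tool when a batch
# of its outputs falls below an agreed accuracy floor. In practice the
# labels would come from clinician review, and the threshold would be
# set per use case (e.g., stricter for drug recommendations).

def evaluate_batch(predictions, labels, min_accuracy=0.95):
    """Compare model outputs against reviewed labels; report whether the
    batch clears the safety floor."""
    if len(predictions) != len(labels):
        raise ValueError("predictions and labels must align")
    correct = sum(p == l for p, l in zip(predictions, labels))
    accuracy = correct / len(labels)
    return {"accuracy": accuracy, "passes": accuracy >= min_accuracy}

# Example: 9 of 10 drug-interaction flags matched the reviewer's judgment.
result = evaluate_batch([1, 1, 0, 1, 0, 1, 1, 1, 0, 1],
                        [1, 1, 0, 1, 0, 1, 1, 1, 1, 1])
print(result)  # accuracy 0.9 -> fails a 0.95 floor
```

Because model behavior drifts as inputs and usage patterns change, a check like this would run on every new batch rather than once at deployment – which is the core of Farren’s point.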
No mention of ONC
Farren noted that the action plan does not mention the Office of the National Coordinator for Health Information Technology (ONC), even though it names “tons” of other agencies and regulators.
This surprised him, given that ONC is the primary regulator responsible for health IT and providers’ medical record systems.
“[The ONC] is just not mentioned anywhere. That seems like a miss to me, because one of the fastest growing applications of AI in healthcare right now is AI scribes. Doctors are using them to transcribe visits when they see patients – and fundamentally, that’s a software product that should sit under the ONC, which has experience regulating those products,” Farren said.
He added that ambient scribes are just one of many AI tools being embedded in providers’ software systems. For example, providers are adopting AI models to improve clinical decision-making, flag medication errors and streamline coding.
A call for technical standards
Leigh Burchell, president of the EHR Association and vice president of policy and public affairs at Altera Digital Health, believes the plan is largely positive, especially its focus on innovation and its recognition of the need for technical standards.
Technical data standards – such as those developed by organizations like HL7 and those overseen by the National Institute of Standards and Technology (NIST) – ensure that healthcare software systems can exchange and interpret data consistently and accurately. Burchell said these standards make it easier for AI tools to integrate with EHRs and use clinical data in ways that are useful to providers.
“We do need standards. Healthcare technology is complex, and it’s about exchanging information in a way that can be easily consumed on the other end so that it can be acted on. That requires standards,” she said.
Without standards, AI systems can miscommunicate and perform poorly across different environments, Burchell added.
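A small example of what these standards buy in practice: when EHR data arrives in a known shape – here, a simplified JSON record loosely modeled on an HL7 FHIR Patient resource – a consuming AI tool can validate and parse it reliably. The validator below is hypothetical; real FHIR validation is far more involved and relies on the full specification.

```python
# Illustrative sketch, not a real FHIR validator. The field names follow
# FHIR Patient conventions (resourceType, id, name, birthDate), but the
# required-field set chosen here is an assumption for the example.
import json

REQUIRED_FIELDS = {"resourceType", "id", "name", "birthDate"}

def validate_patient(raw_json):
    """Check that a JSON document looks like a simplified Patient record."""
    resource = json.loads(raw_json)
    if resource.get("resourceType") != "Patient":
        return False, "not a Patient resource"
    missing = REQUIRED_FIELDS - resource.keys()
    if missing:
        return False, f"missing fields: {sorted(missing)}"
    return True, "ok"

record = json.dumps({
    "resourceType": "Patient",
    "id": "example",
    "name": [{"family": "Doe", "given": ["Jane"]}],
    "birthDate": "1980-01-01",
})
ok, msg = validate_patient(record)
```

Because every conforming system emits the same structure, the same validation and parsing code works regardless of which EHR produced the record – the consistency Burchell describes.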
Little consideration of patient consent
Burchell also raised concerns that the AI action plan does not adequately address patient consent, especially whether patients have a say in how their data is used for AI.
“We have seen state laws on how AI should be regulated. Where should there be transparency? Where should there be information about the training data used? Should the patient be notified when AI is used in their diagnostic process or treatment decisions? The plan doesn’t address this,” she explained.
In fact, the plan indicates that the federal government may in the future withhold funds from states whose laws hinder AI innovation, Burchell noted.
But without clear federal regulations, states will fill the gap with their own AI laws, which can create a fragmented, burdensome landscape. To address this, she called for a coherent federal framework to provide more consistent guardrails on issues such as transparency and patient consent.
While the White House’s AI action plan lays the groundwork for faster innovation, Burchell and other experts believe it must be accompanied by stronger safeguards to ensure responsible and equitable use of AI in healthcare.
Image source: Mr. Cole_photographer, Getty Images