HEALTHCARE & MEDICARE

AI agents – autonomous, task-specific systems designed to perform functions with little to no human intervention – have gained traction in healthcare. The industry is under tremendous pressure to reduce costs without compromising the quality of care, and health tech experts believe agentic AI could be a scalable way to achieve that difficult goal.

But according to a cybersecurity and data privacy lawyer, this category of AI carries greater risk than its predecessors.

Lily Li, founder of the law firm Metalaw, notes that agentic AI systems are, by definition, designed to act on behalf of consumers or organizations – which removes humans from the loop for potentially important decisions or tasks.

“If there are hallucinations or errors in the output, or biases in the training data, these mistakes will have a real-world impact,” she said.

For example, AI agents could make mistakes such as refilling a prescription incorrectly or mistriaging patients in the emergency department – errors that could lead to injury or even death.

These hypothetical scenarios highlight the legal gray area that opens up when liability shifts away from the licensed provider.

“Even if the AI agent makes the ‘right’ medical decision but the patient responds poorly to treatment, it is not clear whether existing malpractice insurance will cover the claim if no licensed physician was involved,” Li said.

She notes that healthcare leaders are operating in complicated territory – saying she believes society needs to address the potential risks of agentic AI, but should hold these tools accountable only to the extent that they cause more death or harm than human physicians do.

Li also pointed out that cybercriminals could use agentic AI systems to launch new kinds of attacks.

To avoid these hazards, she recommends that healthcare organizations build AI-specific risks into their risk assessment models and policies.

“Healthcare organizations should first review the quality of the underlying data to eliminate existing errors and biases that the model would otherwise learn. Then, put guardrails around the types of actions the AI can take – such as rate limits on AI requests, geographic restrictions, limits on the kinds of requests that can be made, and audit logs that flag harmful or malicious behavior,” she said.
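
As a concrete illustration of those guardrails, the sketch below wraps agent actions in a simple allow-list, rate limit, and audit log. All of the names here (GuardedAgent, ALLOWED_ACTIONS, the action types) are invented for this example – it is a minimal sketch of the pattern Li describes, not a production control or any specific framework.

```python
# Minimal guardrail sketch: an allow-list of action types, a per-minute
# rate limit, and an audit log that records allowed and blocked attempts.
# All names and action types here are hypothetical illustrations.
import time
import logging
from collections import deque

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("agent_audit")

ALLOWED_ACTIONS = {"refill_reminder", "schedule_followup"}  # hypothetical
MAX_REQUESTS_PER_MINUTE = 10

class GuardedAgent:
    def __init__(self):
        self._recent = deque()  # timestamps of recent requests

    def _within_rate_limit(self) -> bool:
        now = time.monotonic()
        # Drop timestamps older than 60 seconds, then check the count.
        while self._recent and now - self._recent[0] > 60:
            self._recent.popleft()
        return len(self._recent) < MAX_REQUESTS_PER_MINUTE

    def perform(self, action: str, payload: dict) -> bool:
        if action not in ALLOWED_ACTIONS:
            audit_log.warning("BLOCKED disallowed action %r payload=%r",
                              action, payload)
            return False
        if not self._within_rate_limit():
            audit_log.warning("BLOCKED rate-limited action %r", action)
            return False
        self._recent.append(time.monotonic())
        audit_log.info("ALLOWED action %r payload=%r", action, payload)
        # ... hand off to the underlying agent/model here ...
        return True

agent = GuardedAgent()
agent.perform("refill_reminder", {"patient_id": "demo-123"})
agent.perform("prescribe", {"drug": "example"})  # blocked: not on allow-list
```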

She also urged AI companies to adopt standard communication protocols for their AI agents that support encryption and authentication, to prevent malicious use of these tools.
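
One common way to get authentication into agent-to-agent messaging is to sign each message so the receiver can verify who sent it. The sketch below uses an HMAC over the message body for that purpose; this is an illustrative assumption, not any specific standard protocol, and a real deployment would also encrypt traffic in transit (for example, with TLS) and manage the shared key through a proper secrets store.

```python
# Minimal sketch of authenticated agent-to-agent messaging: the sender
# signs the message body with an HMAC, and the receiver recomputes the
# signature to verify authenticity and detect tampering. Encryption of
# the payload itself is out of scope here.
import hmac
import hashlib
import json

SHARED_KEY = b"demo-key-rotate-me"  # hypothetical; use a managed secret

def sign_message(payload: dict) -> dict:
    body = json.dumps(payload, sort_keys=True).encode()
    sig = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return {"body": body.decode(), "signature": sig}

def verify_message(message: dict) -> bool:
    expected = hmac.new(SHARED_KEY, message["body"].encode(),
                        hashlib.sha256).hexdigest()
    # compare_digest avoids timing side channels when checking signatures.
    return hmac.compare_digest(expected, message["signature"])

msg = sign_message({"action": "refill_reminder", "patient_id": "demo-123"})
assert verify_message(msg)           # authentic message passes
msg["body"] = msg["body"].replace("demo-123", "demo-999")
assert not verify_message(msg)       # tampered message is rejected
```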

In Li's eyes, the future of agentic AI in healthcare may depend less on its technical capabilities than on the industry's ability to build trust and accountability around these models.

Photo: Weiquan Lin, Getty Images
