
Medical AI Is Already In Hospitals. Who Is Watching Its Safety?

I previously examined how clinical artificial intelligence (AI) is beginning to challenge a regulatory framework originally designed for medical devices like pacemakers and CT scanners that rarely change once they are approved. AI software, by contrast, can be updated over time, improving performance or expanding its role in patient care. The question is no longer whether that difference exists but how it should be addressed.

This issue came into focus on Dec. 29, when the U.S. Food and Drug Administration opened public comment on a citizen petition. The petition proposes shifting oversight of certain radiology AI tools away from repeated FDA review of each update and toward ongoing monitoring as the software evolves, moving more responsibility into the period after the AI is already in use.

This shift raises a more fundamental question: if safety is evaluated after AI is already in clinical use, who is responsible for ensuring that safety in practice, and what mechanisms are needed to drive accountability?

Regulators can establish requirements and manufacturers can monitor performance, yet once AI is in use, patients, physicians and health systems are the ones who live with the consequences. As these tools continue to evolve in clinical care, it becomes less clear who is responsible, even as the stakes grow higher.

Speed Versus Safety: How To Hit A Moving Target

Clinical AI is increasingly designed to change over time. Software updates may improve performance, expand what the AI can do, or reflect new medical knowledge. These changes are often necessary to maintain clinical relevance, but they also challenge regulatory models built around fixed products.

Kei Nakagawa, MD, director of strategic impact and growth at UC San Diego Health, said lifecycle oversight only works if expectations after deployment are clear and enforceable.

“If we reduce pre-market friction, post-market surveillance isn’t optional,” he said. “We’re dealing with a new species of clinical AI, and the framework has to work beyond the ivory towers.”

Chris Wood, CEO of RevealDx, wrote in the company’s public letter that CADx tools raise distinct safety concerns. He said many people may not realize the petition extends beyond detection and triage.

“Almost everybody I talk to thinks this petition only references detection and triage, but the side effects, as written, could be dire because it goes beyond CADe into CADx.”

His comments highlight the broader shift the petition would set in motion: rather than relying on repeated FDA review, safety would be evaluated after the AI is already in use, placing more responsibility on real-world clinical settings.

Health Systems' Capacity For Oversight

Some institutions have dedicated governance committees, validation protocols, and informatics infrastructure to monitor AI performance. Others do not.

Andrew Menard, executive director for radiology strategy and innovation at the Johns Hopkins Health System, said the FDA’s decision to invite public comment highlights the degree of uncertainty about how adaptive AI should be governed.

In his conversations with peers, he has seen opinions divide between those who believe existing regulatory pathways create unworkable burdens and those who worry that shifting oversight could create new risks. At the center of that divide, he said, is a practical question of responsibility. “Academic centers generally have the people and expertise to monitor AI safety,” he said. “The real risk is what happens in smaller places without the same resources.”

Brenton Hill, head of operations and general counsel at the Coalition for Health AI (CHAI), a healthcare coalition bringing together leaders and experts, said lifecycle oversight will only work if safety monitoring reaches beyond major academic centers.

“If we’re going to let innovation move faster, we have to build safety nets that work for small hospitals, not just the biggest systems,” he said.

Hill said consensus within the CHAI health provider community is that shared governance models across health systems of all sizes and geographies will be critical to ensure safety nationwide.

Are Physicians The Final Safeguard?

Scott Mahanty, MD, a practicing radiologist in a large multi-site private practice, supports the petition’s direction but said its success depends on robust enforcement mechanisms.

“That is why, in my view, the only acceptable version of this petition is one that comes with teeth,” he said. “Minimum elements for post-market plans, explicit expectations around handling performance drift, attention to under-represented populations, and clear requirements for how key real-world findings are communicated back to users.”

Others emphasized that not all AI carries the same level of risk.

Lauren Nicola, MD, CEO of Triad Radiology and chief medical officer of RevealDx, said the distinction comes down to whether physicians can independently confirm what the AI is telling them.

“With some tools, doctors can double-check the answer right away,” she said. “With others, they can’t. That makes the risk very different.”

She added that radiologists do not have the time or resources to act as regulators themselves.

Yet as AI becomes more deeply embedded in clinical workflows, physicians may increasingly serve as the final safeguard.

The Missing Roadmap: The Future Of Clinical AI Regulation

By elevating the petition and inviting public comment, the FDA signaled that existing regulatory tools may no longer fully align with software that continues to evolve after deployment.

Lifecycle oversight is likely inevitable, but it shifts more responsibility to what happens after AI is already in use. As more safety checks happen in clinical practice, responsibility moves beyond regulators and into the hands of manufacturers and the clinicians who rely on these tools every day. The question is no longer just how medical AI is regulated, but who owns safety once it becomes part of patient care.

Ownership alone is not enough. There must be clear rules for how AI performance is checked and what happens when problems appear. As these tools spread to hospitals of varying sizes and resources, the real test will be whether health systems know not only who is responsible, but how to keep patients safe over time. The petition underscores the urgent need for manufacturers, health systems, clinicians and regulators to define that shared responsibility as AI becomes part of everyday care.

This blog originally appeared on forbes.com and is reposted with permission.
