As Healthcare AI Changes On The Fly, FDA Reconsiders How To Keep It Safe

Imagine a medical AI tool that helps radiologists spot early signs of cancer. It works well today. Six months later, the software updates. A year later, it updates again. Each version changes slightly, but the tool is already in use across hospitals, influencing real clinical decisions.
That reality is creating a growing problem for regulators. The U.S. Food and Drug Administration’s approval process was designed for medical devices that stay largely the same after launch. Clinical AI does not operate this way; it is built to evolve.
On Dec. 29, the FDA took a notable step by opening public comment on a citizen petition that asks whether the agency’s current approach can keep up. It was under no obligation to do so, but it did. The petition proposes a new way to oversee certain radiology AI tools as they change over time, rather than requiring a full regulatory reset with every update. The comment period closes Feb. 27.
The question now is how regulators can scale oversight to match clinical AI’s growing role in patient care without weakening the safety checks patients rely on.
The petition, submitted by Rubrum Advising on behalf of Harrison.ai, does not argue for weaker oversight. Instead, it challenges whether a system that reviews products in isolated snapshots rather than over time remains the right tool for governing clinical AI that updates frequently, expands in scope and operates across diverse populations and workflows.
“This isn’t about weakening oversight,” said Thalia Mills, senior strategy manager at Harrison.ai. “It’s about aligning regulatory tools with how clinical AI evolves, while keeping guardrails in place.”
Why Radiology Sits At The Center Of The Debate
Radiology does not represent just one clinical use case for AI; it serves as the proving ground.
Peer-reviewed research shows that radiology accounts for more than 75% of all FDA-cleared clinical AI tools in the United States. As of Jan. 30, the FDA’s public inventory lists 1,357 AI-enabled medical devices, the majority focused on radiology.
Widespread use does not always mean strong clinical evidence. A JAMA Network Open analysis found that while radiology represents roughly three-quarters of FDA-authorized AI devices, most entered the market through the FDA’s faster 510(k) review process, which allows new tools to reach the market by showing they are substantially equivalent to existing ones. Fewer than one-third incorporated clinical testing, and only a small number were tested in real clinical settings before approval.
This imbalance has turned radiology into both a lighthouse and a stress test for AI regulation. Innovation moves fastest here. Clinical demand runs high. The limits of episodic oversight appear first.
What The Petition Actually Proposes
The proposal focuses on four established radiology AI categories: computer-assisted diagnosis (CADx), computer-assisted detection (CADe), computer-assisted triage (CADt) and combined detection and diagnosis systems. These tools detect, prioritize or help characterize abnormalities on medical imaging.
Under the petition, manufacturers could qualify for a conditional, partial exemption from submitting a new 510(k) for certain future capabilities, but only after demonstrating regulatory competence through at least one relevant clearance. The traditional 510(k) pathway would remain available.
Crucially, the proposal does not remove oversight. It preserves existing quality system requirements and special controls while strengthening expectations for postmarket monitoring, transparency and training. The shift is not toward less scrutiny, but toward lifecycle accountability and real-world performance.
“I’m pro smart regulation that enables smart tools to reach physicians, who ultimately make the decisions,” said Dimitry Tran, co-founder of Harrison.ai. “If you make clinical claims, you need evidence and you should expect audits. Trust is everything. If physicians lose it, the entire sector pays.”
Josh Duncan, chief growth officer at Harrison.ai, framed the implications beyond any single vendor.
“This is a rising tide,” he said. “It lifts vendors, health systems, radiologists and ultimately patients. Every AI company in this space is trying to answer the same question: how do we make the biggest real-world impact on patient care?”
Mills also drew a clear boundary around what the proposal does not attempt to solve. The evolution at issue is not autonomous learning in production, but the growing pace and scope of manufacturer-controlled software updates, with guardrails designed to preserve auditability and safety.
Why Postmarket Oversight Becomes The Fulcrum
Postmarket surveillance sits at the center of this debate. Done well, it does more than monitor safety; it reveals whether a tool performs as intended across populations, workflows and time.
In practical terms, this means tracking how often the AI flags the wrong finding, whether clinicians override its suggestions and whether patient outcomes actually improve after deployment.
That same performance insight also surfaces value. When systems track false positives, missed findings, workflow impact and downstream outcomes, they expose whether a technology improves care or simply adds noise. In that sense, rigorous postmarket oversight links safety, effectiveness and value in ways siloed premarket review cannot.
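To make that concrete, here is a minimal sketch, in Python, of the kind of metrics a postmarket monitoring pipeline might compute from deployment logs. The record fields, baseline figure and drift tolerance are hypothetical illustrations, not anything specified by the petition or the FDA.

    # Hypothetical postmarket monitoring sketch. Field names, thresholds and
    # alerting logic are illustrative assumptions, not FDA requirements.
    from dataclasses import dataclass

    @dataclass
    class CaseRecord:
        ai_flagged: bool          # the AI flagged a finding on this study
        confirmed: bool           # the finding was confirmed on follow-up
        clinician_overrode: bool  # the radiologist rejected the AI suggestion

    def false_flag_rate(records):
        """Share of AI flags not confirmed on follow-up (a false positive proxy)."""
        flagged = [r for r in records if r.ai_flagged]
        return sum(not r.confirmed for r in flagged) / len(flagged) if flagged else 0.0

    def override_rate(records):
        """Share of AI flags the clinician chose to override."""
        flagged = [r for r in records if r.ai_flagged]
        return sum(r.clinician_overrode for r in flagged) / len(flagged) if flagged else 0.0

    def drift_alert(baseline, current, tolerance=0.05):
        """True when real-world performance drifts past the allowed tolerance."""
        return (current - baseline) > tolerance

    # Compare a window of deployment logs against a premarket baseline.
    records = [
        CaseRecord(ai_flagged=True,  confirmed=True,  clinician_overrode=False),
        CaseRecord(ai_flagged=True,  confirmed=False, clinician_overrode=True),
        CaseRecord(ai_flagged=False, confirmed=False, clinician_overrode=False),
    ]
    current = false_flag_rate(records)
    print(f"false flags: {current:.2f}, overrides: {override_rate(records):.2f}, "
          f"drift alert: {drift_alert(baseline=0.30, current=current)}")

In a real program, counters like these, aggregated across sites and time windows, are what would let a manufacturer or regulator see performance drift before it reaches patients.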
That approach carries risk. If postmarket monitoring becomes under-resourced or inconsistently enforced, regulators could lose visibility into real-world performance drift. If manufacturers stretch “in-scope” updates too far, meaningful changes could evade scrutiny. Lifecycle oversight works only if the FDA retains audit authority and the willingness to intervene when performance degrades.
Why The FDA Elevated This Petition
The FDA does not need to publicly elevate every citizen petition or solicit broad external input. By doing so here, the agency is acknowledging that these questions deserve serious public scrutiny.
This debate does not pit innovation against safety. It reflects a simpler reality: static regulatory tools strain when applied rigidly to dynamic clinical software.
My view is that lifecycle-based oversight for clinical AI is not only reasonable but inevitable. The real question is where scrutiny adds clarity rather than friction. Requiring full resubmission for every incremental, well-characterized update may create the appearance of rigor without improving safety. At the same time, broad exemptions without enforceable postmarket evidence would be a mistake.
The right threshold is not how often software changes, but whether those changes alter clinical risk, decision boundaries or downstream responsibility.
With the comment period now open, clinicians, health systems, patients and AI developers have an opportunity to weigh in not just on this petition, but on how U.S. medical device regulation should evolve as AI becomes foundational to clinical care.
The outcome will shape AI oversight far beyond radiology.
