
From Patchwork to Posture: Federal Signals on AI Regulation

ICYMI: In December 2025, following extended debate, including prior attempts at a state-level AI moratorium, President Trump signed an executive order – “ENSURING A NATIONAL POLICY FRAMEWORK FOR ARTIFICIAL INTELLIGENCE” – outlining a federal posture on state AI regulation.

This is an important signal for innovation in the United States, particularly given the very real risk of a fragmented patchwork of state AI laws, which we have already seen emerge in states like Colorado, Utah, California and roughly 15 others.

We have seen this movie before. A useful analogy is California’s Proposition 65, which mandates disclosure of potentially dangerous chemicals in consumer products, ranging from garden hoses to children’s toys. The result was a lowest-common-denominator compliance approach: if all you had to do was disclose risk, the easiest path was to label everything. Warnings became ubiquitous (Fig. 1), often saying very little of practical value and simply being ignored because of their prevalence.

A hidden risk of the patchwork is that many of these approaches treat AI as a single category, or a handful of very broad ones. In healthcare alone, AI could be an ambient AI scribe with speech recognition, a pixel-based medical device, non-device clinical decision support (CDS), device CDS, a direct-to-patient mental health chatbot, and thousands of other use cases. AI in any environment will span nearly every risk profile imaginable.

Fig. 1 - Quail egg label from my daughter’s birthday party

Why AI is different

With AI, the “lowest common denominator” problem is far less obvious and far more consequential. Instead of a single warning label, compliance risk is distributed across many states, each advancing different definitions, thresholds, and enforcement mechanisms.

In the executive order, Colorado is explicitly called out, largely due to its algorithmic discrimination clause, which has already created uncertainty for developers and deployers of AI systems.

An interesting footnote here is Governor Jared Polis’s comments when signing the Colorado legislation. He explicitly noted that the law “creates a complex compliance regime for all developers and deployers of AI doing business in Colorado.”

The broader regulatory landscape

Other states, particularly California, have been highly active on the regulatory front, including recent legislation such as AB 2013, which mandates transparency about the data used to train generative AI systems. Governor Newsom also vetoed other AI bills, noting “Adaptability is critical as we race to regulate a technology still in its infancy. This will require a delicate balance,” even while acknowledging “Proactive guardrails should be implemented.”

This contrast highlights the challenge. States are acting in good faith, but they are moving at different speeds, with different assumptions, and with different risk tolerances.

It is also telling that firms such as Manatt have had to dedicate sustained effort to tracking and synthesizing state-level AI regulation, simply because the pace and variability have made it increasingly difficult for even well-resourced organizations to keep up. I recently joined Manatt for a webinar titled “Navigating the New Frontier: AI Policy Trends Impacting Health Tech Innovation,” which underscored just how fragmented and fast-moving this landscape has become, particularly for companies operating in healthcare.

Why the executive order matters

While the regulatory pathway outlined in the executive order may still be unclear in its details, the principle is significant. It serves as a clear marker that artificial intelligence governance is best managed at the federal level, rather than through a fragmented patchwork of state laws. 

For an industry moving as quickly as AI, clarity, consistency and predictability are not luxuries. They are prerequisites for responsible innovation. A federal approach could serve as a springboard for safe and agile innovation. 

Importantly, the EO is likely to face significant legal hurdles and implementation challenges, given the blurry lines between executive authority and states’ rights. This will keep the regulatory landscape unsettled, likely until the legislative branch offers clear regulation with codified federal preemption.

Healthcare specific implications

One of the most significant considerations in healthcare will be what happens at the site level. Even if a federal executive order signals a shift away from state-by-state AI regulation, it is far from clear that health systems, academic medical centers and integrated delivery networks will feel comfortable stepping back from state-level requirements that have already passed or are on the verge of implementation.

Healthcare governance processes are already fragmented and highly complex. AI oversight often sits across multiple committees, including compliance, legal, IT security, clinical leadership, ethics boards and innovation councils, each with different risk thresholds and incentives. In that environment, many sites may choose the most conservative path forward, continuing to comply with state-level regulations even if their long-term enforceability becomes uncertain.

There is also a practical reality at play. Once governance criteria, review workflows and documentation standards are built into institutional processes, they are difficult to unwind. Health systems are unlikely to rapidly reverse course, particularly when patient safety, bias concerns, and reputational risk are involved. As a result, site-level governance may continue to resemble a patchwork in practice, even if the legal framework becomes more centralized.

For AI developers and vendors operating in healthcare, this creates a familiar challenge. Federal clarity may reduce long-term regulatory risk, but near-term adoption will still be shaped by local interpretation, institutional caution and legacy governance structures. In many ways, the success of federal AI leadership in healthcare will depend not just on preemption, but on whether it meaningfully simplifies the operational reality faced by health systems on the ground.
