HIMSS Reflections: From Hype to Real-World Help in Healthcare AI

I was only able to attend the preconference and the first day of HIMSS this year, but in that whirlwind 48 hours of sessions and meetings, one consistent theme kept surfacing. Across nearly every discussion in the AI in Healthcare Forum, the conversation around healthcare AI felt like it was maturing.

The question I’ve heard repeatedly this year, from health system boardrooms to discussions in the U.S. Senate, is how we move from AI hype to real-world help.

That shift was noticeable throughout the forum. The discussion was less about what AI might someday be capable of and more about the practical work of deploying it inside health systems. Speakers spent far more time on governance, workflow integration and how organizations actually measure whether these tools are improving care. In other words, the conversation felt less like a technology discussion and more like a discussion about operations.

A Focus on Infrastructure

A number of sessions focused on the less glamorous side of AI deployment: the infrastructure required to make these systems work reliably inside large health systems. Leaders from organizations such as Mass General Brigham discussed the governance structures they have built around AI safety, security and lifecycle oversight.

In healthcare, success with AI is rarely about the algorithm alone. It depends on the operational frameworks, validation processes and monitoring structures that surround it. As these systems move from experimentation into everyday use, they begin to resemble other clinical capabilities that must be managed and evaluated over time. 

As Laura Edell, Chief Innovation and Technology Officer and Senior Partner at C-Suite Growth Advisors, noted from a solutions engineering perspective, organizations often move too quickly to solutioning before clearly defining and scoping the underlying problem.

Strategy vs. Execution

Another point that came up repeatedly was the gap between executive AI strategy and what actually happens inside clinical workflows. Many early AI initiatives began as innovation pilots. While these pilots often demonstrated technical promise, they frequently struggled to scale because they were disconnected from day-to-day clinical workflows or lacked clear operational ownership. 

During the fireside chat on aligning AI strategy with clinical reality, Suresh Balu, Associate Dean for Innovation and Partnership and Executive Program Director at the Duke Institute for Health Innovation, shared an observation from real-world deployments: the hardest failures are not when clinicians revolt against a system, but when a use case quietly fades into the background and stops being used.

Several panels emphasized that sustainable adoption requires coordination across three levels of a health system: executive leadership defining priorities and return on investment, clinical leaders ensuring the tools support real care delivery and operational teams responsible for implementation and change management. When those layers align, AI can move beyond experimentation and become part of the organization’s infrastructure.

The Importance of Workforce Engagement 

Many clinicians are now encountering AI systems in their daily work, yet the level of familiarity and confidence with these tools varies widely. Speakers emphasized that adoption depends heavily on building AI literacy within the workforce and providing clear frameworks for when and how these systems should be used. Healthcare has long relied on evidence-based practice and professional judgment, and the same expectations apply to AI. 

As Brian Weirich, Chief Nursing Innovation Officer at Bon Secours Mercy Health, aptly observed, health systems often find themselves trying to solve old problems with new solutions. Clinicians need transparency about how systems function, training on how they should be interpreted, and confidence that the tools are designed to support their decision-making rather than replace it.

The forum also highlighted the importance of ensuring that innovation is not limited to large academic medical centers. Community hospitals and rural health systems often operate with fewer resources and less access to specialized expertise, yet they face many of the same clinical challenges. 

Several discussions explored how AI systems can help extend expertise across geographic boundaries, enabling clinicians in smaller or more remote settings to access decision support and operational tools that were previously available only in larger institutions. If implemented thoughtfully, these technologies could help narrow rather than widen gaps in access to care.

As a dad-joke connoisseur, one of my favorite moments came when session host Brian Spisak delivered one that concluded with a rather “fabulous” punchline (IYKYK). It was a brief moment of levity and a welcome ice-breaker, reminding us that the people implementing these technologies are still human and navigating a period of rapid change in healthcare.

In personal conversations throughout the first day of the conference, the emphasis was on moving away from demonstrations of technical capability and toward the practical work of integrating these tools into real clinical environments. Health systems are increasingly focused on governance, workflow integration, workforce readiness and measurable outcomes.

AI implementation in healthcare isn’t a technology challenge; it’s an operational one.

With healthcare’s existing regulatory structure, robust early adoption of test use cases, and decades of effort to address the access crisis, it is heartening to see that the industry has moved past the stage where the technology itself is the story. The harder and more important work now is proving where it actually helps.

If you missed the chance to connect with Rad AI at HIMSS, we can still continue the conversation.

Join the thousands of radiologists who trust Rad AI

Request a demo