What We’re Watching, Listening to, and Reading in December

This month, we’re exploring how leading voices across healthcare are approaching the core challenges of responsible AI: building governance frameworks that keep AI aligned with patient outcomes, understanding the cognitive biases that shape both human and machine decision-making, and navigating emerging questions of liability when radiologists and algorithms disagree.

Together, these perspectives reinforce one truth: the future of AI in healthcare depends on trust, built through transparency, validation, and shared accountability between humans and technology. If you missed last month’s edition, you can catch up on more conversations about how radiologists are leading responsible AI adoption: What We’re Reading (and Watching): How Radiologists Are Shaping Responsible AI →

1. Governing AI: What Responsible Use Looks Like at Scale

Watch: AI in the NHS 2025 - Keynote address: Dr. Andrew Bindman

In this talk, Kaiser Permanente’s Chief Medical Officer, Andrew Bindman, MD, shares how a large integrated health system tackles AI with a “responsible use” playbook. He argues that the hard part isn’t inventing AI but making it fit real-world workflows and data: cleaning and unifying enterprise data, watching for biased training signals, and scaling tools only when they demonstrably improve outcomes, efficiency, and clinician experience without widening disparities.

2. Unlearning Bias: What Diagnostic Error Teaches Us About AI

Listen: From Hindsight Bias to Machine Bias: Dr. Laura Zwaan on Learning from Mistakes

This episode of NEJM AI Grand Rounds explores why diagnostic errors are so hard to define and measure, and how hindsight and other cognitive biases shape what we call “error” in medicine. Laura Zwaan, PhD, connects those same biases to AI, arguing that large models can inherit human-like framing and confirmation biases, with significant implications for using AI as a second opinion or quality assurance tool in radiology.

3. Who’s Liable When AI Disagrees with a Radiologist?

Read: If AI finds an abnormality that a radiologist misses, who's at fault?

This AuntMinnie piece uses a legal-style vignette to explore what happens when an AI system flags a finding that a radiologist overlooks, and how jurors assign responsibility when humans and algorithms disagree. It’s a perfect companion to Dr. Bindman’s talk on AI governance and data quality and the NEJM AI Grand Rounds episode on diagnostic error, underscoring that the real challenge isn’t just building powerful models but deciding how they fit into workflows, accountability structures, and human–AI decision-making in radiology.

Continuing the conversation

Want more insights like these? Subscribe to our newsletter for curated reads, upcoming webinars, and practical insights on AI in healthcare.

Join the thousands of radiologists who trust Rad AI

Request a demo