Inside Engineering: How Rad AI Builds Solutions for Real-World Radiology

When assessing radiology AI solutions, teams are getting more attuned to what looks good in a demo versus what will actually work in practice. In this Q&A, we go inside the Rad AI engineering team to hear that perspective firsthand.
Ryan Hood, VP of Engineering at Rad AI, brings experience across a wide range of health tech environments, from early-stage startups to large-scale platforms. He spent nearly eight years at Amazon, working in settings that varied dramatically: from the Grand Challenge organization, a small, skunkworks-style team focused on moonshot projects, to AWS AI, where he helped build processes as the organization grew from a handful of teams to thousands of employees. That range shaped his perspective on how team structure, culture, and leadership must evolve as teams grow and products move from zero-to-one experimentation to real-world adoption at scale.
Hood joined Rad AI to return to healthcare and work closer to the clinicians and patients the technology serves, bringing his experience in scaling systems responsibly to a company entering its next phase of growth.
Radiology teams are under intense, multifaceted pressures. How do those realities shape the way you prioritize engineering work and decide which problems to tackle first?
We're deeply aware of these pressures—the volume increases, staffing shortages, and rising expectations for quality and speed. But I want to be clear—we do not view these as short-term problems. We believe they'll continue and likely intensify over the next decade. That long-term lens changes how we think about solutions.
When you recognize this is not a temporary hump to get over, you realize that a handful of small fixes will not cut it. For us, it's more of an obsession with reaching a north star state—one where efficiency improves significantly without placing additional burden on the radiologist.
As you can imagine, we receive many feature requests. A big part of how we evaluate them comes down to two questions: (1) Will this meaningfully improve quality and efficiency? (2) Does it impact a large number of customers or just a few? We need to get to all of these requests eventually, but we ruthlessly prioritize the ones that move the needle the most on efficiency. Those always rise to the top.
Many AI solutions show promise in pilots but struggle in everyday clinical use. From your perspective, what separates AI that looks good in theory from AI that actually becomes part of routine radiology practice?
Most AI solutions try to boil the ocean. They often oversimplify the use case, or the job the human is actually doing.
For example, in radiology, people are often surprised to hear that we don't do any computer vision or AI on the image itself. Our AI is solely focused on improving the radiology workflow within reporting. On the surface, it certainly makes sense that AI could be useful for analyzing images in a clinical environment. But when you really dig into the details and get granular, several hurdles appear that make the entire effort non-trivial.
Our approach at Rad AI is different. We go very deep with radiologists into their workflow, and when we use AI, we're intentional about where we place it and the specific problems we're trying to solve. We get granular with radiologists to validate with objective data, and we have several CMIOs on staff who help us iterate on features before they go into production.
With Rad AI Impressions, for instance, we customize every detail to match each radiologist's style. We spend a tremendous amount of effort getting this right and continuing to improve—as opposed to scattering AI across random spots in the workflow without validating whether those are actually pain points.
We get as granular as possible on understanding the problem, we are intentional about the approach we take with AI to solve it, and we validate with proper data and expert radiologists before features ever see the light of day.
AI sovereignty and “black box” concerns are becoming more prominent topics in healthcare. What’s your POV on this, and how do we build solutions with this in mind?
We understand that explainability and transparency in AI are not just desired—they are essential. For us, that means grounding our AI output by providing evidence of the source of each sentence or fragment.
We did this with one of our most recent reporting features, called Prior Summarization. It summarizes all priors for a particular patient while still allowing the radiologist to review prior exams in detail if needed. We provide a direct link from each sentence fragment to its source in the prior, which significantly reduces the risk of hallucinations the AI might produce. But just as importantly, it gives the radiologist full transparency into the source of that portion of the summary. They can see the evidence and verify it themselves.
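For illustration, here is a minimal sketch of what sentence-level grounding can look like. The data structure, field names, and check below are assumptions for the sake of the example, not Rad AI's internal implementation: each summary fragment carries a pointer to the prior report and the character span it was drawn from, and a simple check confirms the cited span actually exists before the fragment is presented as grounded.

```python
from dataclasses import dataclass

@dataclass
class SourcedFragment:
    """One sentence fragment of a summary, tied to its evidence in a prior report.

    Field names here are illustrative, not an actual Rad AI schema.
    """
    text: str              # fragment shown to the radiologist
    prior_report_id: str   # which prior exam the evidence came from
    char_start: int        # start offset of the evidence span in that report
    char_end: int          # end offset of the evidence span

def verify_grounding(fragment: SourcedFragment, prior_reports: dict[str, str]) -> bool:
    """Check that the cited span actually exists in the referenced prior report.

    A fragment whose evidence span cannot be located is flagged rather than shown
    as grounded, which is one way to keep ungrounded text out of the summary.
    """
    source_text = prior_reports.get(fragment.prior_report_id, "")
    return 0 <= fragment.char_start < fragment.char_end <= len(source_text)

# Usage: link the fragment back to its source span so the radiologist can verify it.
priors = {"CT-2023-001": "No acute intracranial abnormality. Stable 4 mm lung nodule."}
frag = SourcedFragment("Stable 4 mm lung nodule.", "CT-2023-001", 36, 60)
print(verify_grounding(frag, priors))  # True if the cited span is present
```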
As AI governance becomes increasingly important, what strategies are in place to manage ethical challenges, such as accountability for errors or data privacy?
Regarding data privacy, we undergo annual third-party HIPAA compliance auditing alongside SOC 2 Type II certification. We implement comprehensive security measures, including encryption, role-based access controls, audit logging, and secure data deletion protocols.
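As a simple illustration of how role-based access control and audit logging can work together, here is a hedged sketch; the roles, permissions, and logger below are placeholders chosen for the example, not Rad AI's actual access model.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("audit")

# Role-to-permission mapping; roles and permissions are placeholders.
ROLE_PERMISSIONS = {
    "radiologist": {"read_report", "sign_report"},
    "admin": {"read_report", "manage_users"},
}

def is_authorized(user_role: str, action: str, resource_id: str) -> bool:
    """Allow the action only if the role grants it, and record the decision."""
    allowed = action in ROLE_PERMISSIONS.get(user_role, set())
    audit_log.info(
        "%s role=%s action=%s resource=%s allowed=%s",
        datetime.now(timezone.utc).isoformat(), user_role, action, resource_id, allowed,
    )
    return allowed

print(is_authorized("radiologist", "read_report", "study-123"))  # True, and logged
```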
We employ a risk-based governance approach that prioritizes high-risk use cases for additional scrutiny, actively mitigates bias through diverse training datasets, and maintains transparent communication about our AI's capabilities and limitations. Every model deployment follows a rigorous checklist that considers patient safety, bias potential, and real-world impact before going live. This multi-layered approach ensures we harness AI's power to improve patient care while never compromising ethical responsibilities or patient safety.
If a radiology leader could only focus on one question when evaluating new technology this year, what should it be?
How much efficiency gain will radiologists actually see when using this technology?
This is a metric we measure internally. We establish a baseline for how long reads take in a radiologist's current software, then track how long those same tasks take in our reporting software—and continue measuring as we release new features. This gives us transparency into the level of improvement radiologists can expect when switching to our reporting solution, as well as the efficiency gains they will see with each new release.
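As a rough illustration of that measurement, the sketch below compares median read times from a baseline period against read times for comparable studies in the new reporting software. The function and numbers are hypothetical, not Rad AI's internal tooling.

```python
from statistics import median

def efficiency_gain(baseline_seconds: list[float], current_seconds: list[float]) -> float:
    """Percentage reduction in median read time versus the baseline.

    `baseline_seconds` are read times captured in the radiologist's prior software;
    `current_seconds` are read times for comparable studies in the new reporting tool.
    """
    baseline = median(baseline_seconds)
    current = median(current_seconds)
    return (baseline - current) / baseline * 100

# Example: a 90-second median baseline dropping to 63 seconds is a 30% gain.
print(round(efficiency_gain([88, 90, 95], [60, 63, 70]), 1))  # 30.0
```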
But efficiency isn't just about speed—it's also about reducing repetitive tasks, minimizing manual calculations prone to error, and streamlining tedious work like inserting recommendations or pulling historical patient findings from previous reports. A strong reporting solution also serves as the integration point for other AI tools; without it, radiologists are left juggling multiple disconnected systems.
Ultimately, efficiency has to be the primary focus. Given the burnout, staffing shortages, and increasing volumes we discussed earlier, that's what matters most. Certain AI prototypes and features can look flashy and interesting on the surface, but the real proof is in the data.
Continuing the Conversation
Want to learn more? In the previous Q&A, Hood answered six of the most common questions we hear about engineering AI for radiology, including how we think about transparency, clinician oversight, and building assistive, not replacement, technology.

