AI has firmly entered clinical practice, but its arrival has exposed a different fault line than many expected. The tension isn't about whether the technology works; it's about who controls it, how it's implemented, and whether it actually reflects how medicine is practiced.
At the center of that conversation is David Rhew, an infectious disease physician by training and now Global Chief Medical Officer at Microsoft, a role that gives him a vantage point few clinicians ever see.
In this episode of How I Doctor, Graham Walker and David unpack why physicians are adopting AI at record rates while simultaneously expressing deep dissatisfaction with how it’s being deployed.
Rather than positioning AI as a single, all-knowing system, David offers a different mental model: one that mirrors how clinicians already work. He explains why the future of medical AI is modular and agentic, with multiple purpose-built systems handling discrete tasks such as differentials, testing strategy, guideline alignment, and cost awareness, while clinicians remain responsible for synthesis and decision-making. This approach, he argues, is not only more effective but also safer and more aligned with how care teams function today.
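To make the idea concrete, here is a minimal, hypothetical sketch of that pattern in Python: several narrow, purpose-built agents each handle one discrete task, and their outputs are laid side by side for a clinician to synthesize. Every name and interface here is an illustrative assumption, not something from the episode or any real product.

```python
# A hypothetical sketch of the modular, agentic pattern described above:
# each agent covers one discrete task, and no agent makes the final call.
# All names are illustrative; real agents would wrap purpose-built models.

from dataclasses import dataclass
from typing import Callable


@dataclass
class AgentOutput:
    task: str        # the discrete task this agent covers
    findings: str    # the agent's task-specific output
    confidence: str  # a coarse self-reported confidence label


def differential_agent(case: str) -> AgentOutput:
    return AgentOutput("differential", f"Candidate diagnoses for: {case}", "medium")


def testing_agent(case: str) -> AgentOutput:
    return AgentOutput("testing strategy", f"Suggested workup for: {case}", "medium")


def guideline_agent(case: str) -> AgentOutput:
    return AgentOutput("guideline alignment", f"Relevant guidelines for: {case}", "high")


def cost_agent(case: str) -> AgentOutput:
    return AgentOutput("cost awareness", f"Cost considerations for: {case}", "low")


AGENTS: list[Callable[[str], AgentOutput]] = [
    differential_agent,
    testing_agent,
    guideline_agent,
    cost_agent,
]


def run_case(case: str) -> list[AgentOutput]:
    """Fan the case out to every agent; synthesis stays with the clinician."""
    return [agent(case) for agent in AGENTS]


if __name__ == "__main__":
    for output in run_case("65yo with fever and productive cough"):
        print(f"[{output.task}] ({output.confidence}) {output.findings}")
```

The point of the structure is that no single component returns a final answer: each agent contributes one bounded piece of the picture, and the decision remains with the human who sees all of them together.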
The conversation then shifts beyond documentation and decision support into a broader systems view. David shares concrete examples of AI-enabled screening identifying advanced disease in patients who would not have sought care until a catastrophic event occurred. These use cases reframe AI not as a productivity tool but as a mechanism for earlier detection, smarter triage, and more responsible allocation of scarce clinical resources.
Throughout the episode, David returns to a consistent principle: AI should reduce cognitive burden, not add to it. Its role is not to replace judgment, but to surface what’s actionable, filter noise, and allow clinicians to focus on patients rather than interfaces. When AI is designed to work with physicians rather than around them, it has the potential to make medicine more humane, more sustainable, and more honest about its limitations.
This is not a conversation about hype or inevitability. It’s about implementation, accountability, and agency. AI will not fix medicine on its own, but as David makes clear, if physicians remain engaged and lead its application thoughtfully, it may finally help the system work the way clinicians always hoped it would.
Poorly designed AI tools remove autonomy, add cognitive burden, and erode trust. David makes it clear that physician resistance is not about fear of technology but about frustration with systems that don't reflect real workflows. He explains how misaligned incentives and metrics have caused otherwise promising tools to fail. The episode emphasizes that physician leadership and governance are essential if AI is to help rather than harm care delivery.