Medicine isn’t struggling with AI because doctors are technophobic. It’s struggling because AI arrived in a system already stretched thin, under-designed for nuance, and desperate for relief.
In this episode of How I Doctor, Dr. Graham Walker sits down with Robert Wachter to interrogate a harder truth: the biggest risks of AI in medicine may come not from what the technology gets wrong, but from how uncritically we learn to trust it.
As the chair of medicine at UCSF and one of healthcare's most credible translators between technology and clinical reality, Bob has spent years thinking publicly about digital transformation, from EHRs to AI. What he sees now is familiar. Like earlier waves of health IT, AI promises relief from cognitive overload, inefficiency, and wasted time. And in many cases, it delivers. But unlike prior tools, generative AI doesn't just automate tasks; it participates in reasoning. That shift changes what it means to be a physician, especially when the technology is good enough to be useful but not good enough to be right all the time.
The tension shows up most clearly in de-skilling. Bob is candid that some de-skilling is inevitable and even appropriate. Doctors no longer read maps. Physical exams have evolved. But clinical judgment is different. When AI becomes the first pass and the human becomes the reviewer, the system depends on clinicians retaining the very skills the technology tempts them to offload. Lose those skills too early, Bob warns, and medicine risks a quiet inversion: the AI becomes the authority, and the human becomes the liability.
That concern extends beyond individual practice into medical education. Graham and Bob explore whether trainees should be shielded from AI answers or forced to think before seeing them. Once the answer is given, something essential about learning may already be gone. And yet, withholding “the best available information” feels increasingly unnatural in a culture optimized for speed and efficiency.
Rather than rejecting AI or surrendering to it, Bob argues for a middle path: informed optimism.
- Compare AI not to a perfect system, but to the one we actually have.
- Acknowledge its failures without letting fear freeze progress.
- Build guardrails locally, stay humble about what machines can and can't do, and remain vigilant about how efficiency gains are distributed.
AI may not replace physicians, but if medicine isn’t thoughtful, it could reshape the work in ways that leave clinicians faster, busier, and further from the parts of care that matter most.
Thank you to our wonderful sponsor for supporting the podcast:
Sevaro is a physician-led telestroke and neurology company that delivers rapid, reliable virtual neurology coverage. Learn more at https://sevaro.com/
Bob describes de-skilling as both inevitable and dangerous. Some skills can safely disappear, but clinical reasoning and judgment cannot, at least not yet. The episode explores how AI oversight requires more expertise, not less, and why losing that expertise too early could create a "death spiral" in which humans make systems worse rather than safer.