
What Doctors Get Wrong About AI with Robert Wachter, MD

Offcall Team

Key Podcast Moments

  • Robert Wachter explains how AI-driven “de-skilling” could quietly undermine medical judgment if clinicians stop practicing fundamentals too soon.
  • Graham and Bob unpack why trust in AI systems may arrive faster than we expect—and why that’s both rational and dangerous.
  • A candid discussion on why healthcare organizations should partner with tech companies on AI rather than try to build everything themselves.
  • Wachter argues that the biggest mistake medicine could make is abandoning AI after early failures instead of comparing it to the deeply flawed system we already have.

Medicine isn’t struggling with AI because doctors are technophobic. It’s struggling because AI arrived in a system already stretched thin, under-designed for nuance, and desperate for relief.

In this episode of How I Doctor, Dr. Graham Walker sits down with Robert Wachter to interrogate a harder truth: the biggest risks of AI in medicine may come not from what the technology gets wrong, but from how uncritically we learn to trust it.

As the chair of medicine at UCSF and one of healthcare’s most credible translators between technology and clinical reality, Bob has spent years thinking publicly about digital transformation, from EHRs to AI. What he sees now is familiar. Like earlier waves of health IT, AI promises relief from cognitive overload, inefficiency, and wasted time. And in many cases, it delivers. But unlike prior tools, generative AI doesn’t just automate tasks; it participates in reasoning. That shift changes what it means to be a physician, especially when the technology is good enough to be useful but not good enough to be right all the time.

The tension shows up most clearly in de-skilling. Bob is candid that some de-skilling is inevitable and even appropriate. Doctors no longer read maps. Physical exams have evolved. But clinical judgment is different. When AI becomes the first pass and the human becomes the reviewer, the system depends on clinicians retaining the very skills the technology tempts them to offload. Lose those skills too early, Bob warns, and medicine risks a quiet inversion: the AI becomes the authority, and the human becomes the liability.

That concern extends beyond individual practice into medical education. Graham and Bob explore whether trainees should be shielded from AI answers or forced to think before seeing them. Once the answer is given, something essential about learning may already be gone. And yet, withholding “the best available information” feels increasingly unnatural in a culture optimized for speed and efficiency.

Rather than rejecting AI or surrendering to it, Bob argues for a middle path: informed optimism.

  • Compare AI not to a perfect system, but to the one we actually have.
  • Acknowledge its failures without letting fear freeze progress.
  • Build guardrails locally, stay humble about what machines can and can’t do, and remain vigilant about how efficiency gains are distributed.

AI may not replace physicians, but if medicine isn’t thoughtful, it could reshape the work in ways that leave clinicians faster, busier, and further from the parts of care that matter most.

Thank you to our wonderful sponsor for supporting the podcast:

Sevaro is a physician-led telestroke and neurology company that delivers rapid, reliable virtual neurology coverage. Learn more at https://sevaro.com/

Top 4 Takeaways

1. De-skilling Is the Risk No One Is Owning

Bob describes de-skilling as both inevitable and dangerous. Some skills can safely disappear, but clinical reasoning and judgment cannot, at least not yet. The episode explores how AI oversight requires more expertise, not less, and why losing that expertise too early could create a “death spiral” where humans make systems worse rather than safer.


Written by Offcall Team

Offcall Team is the official Offcall account.
