The Most Dangerous Mistake Physicians Make With AI

Offcall Team

The problem isn’t just hallucinations

When physicians think about AI risk, hallucinations usually come first. The concern is straightforward. AI can generate information that sounds correct but is completely fabricated.

That is a real issue. It is also not the most dangerous one.

A more subtle and clinically relevant problem is sycophancy. AI systems are designed to be helpful. That means they are trained to respond in ways that align with the user’s intent. They try to give you what you are asking for, even if your question is flawed. In everyday applications, this feels useful. In medicine, it can be risky.

For more context, watch Introducing AI Residency for Clinicians - Part 1.

Resources:

  • Session slides
  • Dr. Michael Hobbs' AI 101 Guide

How sycophancy shows up in clinical reasoning

The analogy from the session is simple and effective.

“If a patient says ‘I have a fever,’ that’s wide open… if they add more detail, it narrows it considerably.”

AI behaves the same way. The information you provide shapes the output.

“The model isn’t going to push back the way a good colleague would.”

Unlike a colleague, it does not interrupt or challenge your assumptions unless explicitly prompted. That creates a situation where errors are reinforced rather than questioned.
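
One way to recreate that pushback is to ask for it explicitly in the prompt. Below is a minimal sketch, assuming the OpenAI Python client; the system-prompt wording, the model name, and the example question are illustrative placeholders, not a vetted clinical workflow.

```python
# A minimal sketch, assuming the OpenAI Python client (pip install openai).
# The system-prompt wording, model name, and example question are
# illustrative placeholders, not vetted clinical guidance.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SKEPTICAL_COLLEAGUE = (
    "Act as a skeptical colleague, not an agreeable assistant. "
    "Before answering, state the assumptions embedded in my question, "
    "challenge any that are weakly supported, and name at least one "
    "alternative explanation I may be anchoring away from."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whatever model you have access to
    messages=[
        {"role": "system", "content": SKEPTICAL_COLLEAGUE},
        {"role": "user", "content": "65M, fever and productive cough, likely pneumonia. Best empiric antibiotic?"},
    ],
)
print(response.choices[0].message.content)
```

The point is the system message, not the specific API: any chat interface that accepts custom instructions can carry the same "push back on me" framing.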

Why this matters in real-world practice

Clinical reasoning often relies on tension. Good medicine involves:

  • Questioning assumptions
  • Considering alternatives
  • Being open to being wrong

AI removes much of that tension.

If you anchor early, AI will:

  • Reinforce the anchor
  • Build a coherent explanation
  • Make it sound convincing

Errors that sound convincing are harder to catch.

A practical framework for evaluating AI output

Ask:

  • Is the response complete?
  • Is it traceable to a source?
  • Is it accurate?
  • Does it answer the question?
  • Does it acknowledge uncertainty?

AI systems tend to avoid saying “I don’t know.” You have to explicitly give the model permission to say so.
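
The same prompting lever works here. Below is a minimal sketch, again assuming the OpenAI Python client; the instruction wording and the ask_with_uncertainty helper are hypothetical illustrations, not Offcall guidance.

```python
# A minimal sketch of explicitly licensing uncertainty in the prompt.
# Assumes the OpenAI Python client; the instruction wording and the
# helper name ask_with_uncertainty are hypothetical illustrations.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

UNCERTAINTY_LICENSE = (
    "If you are not confident, say 'I don't know' instead of guessing. "
    "For each factual claim, cite a source or state that you cannot trace one, "
    "and label your overall confidence as high, moderate, or low."
)

def ask_with_uncertainty(question: str, model: str = "gpt-4o") -> str:
    """Send a question with an explicit license to admit uncertainty."""
    response = client.chat.completions.create(
        model=model,  # placeholder model name
        messages=[
            {"role": "system", "content": UNCERTAINTY_LICENSE},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask_with_uncertainty("What is the sensitivity of D-dimer for ruling out PE in pregnancy?"))
```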

Treat AI like a trainee

“Think of AI as the world’s best chief resident… occasionally wrong in ways that are hard to spot.”

You would not:

  • Sign a trainee’s note without reviewing it
  • Accept a plan without verifying it
  • Assume correctness based on tone

The same approach applies here.

Bottom line

The biggest risk of AI is not that it produces incorrect answers. It is that it produces answers that align with your thinking even when your thinking is wrong. That makes AI both a powerful tool and a potentially dangerous one. The solution is not to avoid it, but to build the habit of questioning it.

Written by Offcall Team

Offcall Team is the official Offcall account.

