The Most Dangerous Mistake Physicians Make With AI
The problem isn’t just hallucinations
When physicians think about AI risk, hallucinations usually come first. The concern is straightforward. AI can generate information that sounds correct but is completely fabricated.
That is a real issue. It is also not the most dangerous one.
A more subtle and clinically relevant problem is sycophancy. AI systems are designed to be helpful. That means they are trained to respond in ways that align with the user’s intent. They try to give you what you are asking for, even if your question is flawed. In everyday applications, this feels useful. In medicine, it can be risky.
For more context, watch Introducing AI Residency for Clinicians - Part 1
How sycophancy shows up in clinical reasoning
The analogy from the session is simple and effective.
“If a patient says ‘I have a fever,’ that’s wide open… if they add more detail, it narrows it considerably.”
AI behaves the same way. The information you provide shapes the output.
“The model isn’t going to push back the way a good colleague would.”
Unlike a colleague, it does not interrupt or challenge your assumptions unless explicitly prompted. That creates a situation where errors are reinforced rather than questioned.
Why this matters in real-world practice
Clinical reasoning often relies on tension. Good medicine involves:
- Questioning assumptions
- Considering alternatives
- Being open to being wrong
AI removes much of that tension.
If you anchor early, AI will:
- Reinforce the anchor
- Build a coherent explanation
- Make it sound convincing
Errors that sound convincing are harder to catch.
A practical framework for evaluating AI output
Ask:
- Is the response complete?
- Is it traceable to a source?
- Is it accurate?
- Does it answer the question?
- Does it acknowledge uncertainty?
AI systems tend to avoid saying “I don’t know.” You have to explicitly allow for that.
Treat AI like a trainee
“Think of AI as the world’s best chief resident… occasionally wrong in ways that are hard to spot.”
You would not:
- Sign a trainee’s note without reviewing it
- Accept a plan without verifying it
- Assume correctness based on tone
The same approach applies here.
Bottom line
The biggest risk of AI is not that it produces incorrect answers, but that it produces answers that align with your thinking even when your thinking is wrong. That makes it a powerful tool and a potentially dangerous one. The solution is not to avoid AI, but to develop the habit of questioning it.