When physicians think about AI risk, hallucinations usually come first. The concern is straightforward: AI can generate information that sounds correct but is completely fabricated.
That is a real issue. It is also not the most dangerous one.
A more subtle and clinically relevant problem is sycophancy. AI systems are designed to be helpful. That means they are trained to respond in ways that align with the user’s intent. They try to give you what you are asking for, even if your question is flawed. In everyday applications, this feels useful. In medicine, it can be risky.
Resources: For more context, watch Introducing AI Residency for Clinicians - Part 1.
The analogy from the session is simple and effective.
“If a patient says ‘I have a fever,’ that’s wide open… if they add more detail, it narrows it considerably.”
AI behaves the same way. The information you provide shapes the output.
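To make the fever analogy concrete, here is a minimal sketch of how added detail narrows a prompt. `ask_model` is a hypothetical stand-in for whatever chat interface you actually use (here it only prints the prompt), and the case details are invented for illustration.

```python
# A minimal sketch of how added clinical detail narrows a prompt.
# ask_model is a hypothetical stand-in for whatever chat interface
# you actually use; here it just prints the prompt it would send.

def ask_model(prompt: str) -> None:
    print(f"--- prompt ---\n{prompt}\n")

# Wide open: the model must guess at context and will fill the gaps
# with plausible-sounding assumptions.
ask_model("My patient has a fever. What could it be?")

# Narrowed: each added detail constrains the space of answers.
ask_model(
    "58-year-old man, post-op day 3 after a colectomy, fever of 38.9 C, "
    "rising white count, no cough or dysuria. What is the differential, "
    "ranked by likelihood?"
)
```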
“The model isn’t going to push back the way a good colleague would.”
Unlike a colleague, it does not interrupt or challenge your assumptions unless explicitly prompted. That creates a situation where errors are reinforced rather than questioned.
Clinical reasoning often relies on tension: colleagues who push back, differentials that compete, assumptions that get challenged. AI removes much of that tension.
If you anchor early, AI will follow your lead, elaborating on the diagnosis you suggest rather than questioning it. Errors that sound convincing are harder to catch.
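As a sketch of what early anchoring looks like in practice, compare an anchored prompt with a neutral one. The prompts are invented examples, not prescribed phrasing, and `ask_model` remains a hypothetical placeholder.

```python
# A sketch of early anchoring versus neutral framing, reusing the same
# hypothetical ask_model placeholder. The prompts are invented examples.

def ask_model(prompt: str) -> None:
    print(f"--- prompt ---\n{prompt}\n")

# Anchored: the question presupposes the diagnosis, so a sycophantic
# model will elaborate on pneumonia rather than challenge it.
ask_model("This is probably pneumonia, right? What antibiotic should I start?")

# Neutral: the same case, phrased to leave the diagnosis open and to
# invite competing explanations.
ask_model(
    "Fever, productive cough, left lower lobe infiltrate on X-ray. "
    "List the differential, and for each item name the finding that "
    "would argue against it."
)
```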
Ask questions that invite disagreement rather than confirmation, and give the model explicit permission to be uncertain. AI systems tend to avoid saying “I don’t know”; you have to explicitly allow for it.
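One way to do that, sketched below, is to prepend an explicit permission to be uncertain before the clinical question. The preamble wording is an assumption about phrasing that tends to help, not a vendor-specific feature, and `ask_model` is again a hypothetical placeholder.

```python
# A sketch of explicitly permitting uncertainty. The preamble wording
# is an assumption, not a vendor feature; ask_model is hypothetical.

def ask_model(prompt: str) -> None:
    print(f"--- prompt ---\n{prompt}\n")

PREAMBLE = (
    "If you are not confident in an answer, say 'I don't know' and "
    "explain what information you would need. Do not guess in order "
    "to be agreeable. If my framing of the case seems wrong, say so."
)

ask_model(
    PREAMBLE + "\n\nCould this rash be a reaction to the ACE inhibitor "
    "I started last week?"
)
```

The point is not the exact wording but the explicit permission: without it, a model trained to be helpful is biased against admitting uncertainty.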
“Think of AI as the world’s best chief resident… occasionally wrong in ways that are hard to spot.”
You would not sign off on a chief resident’s plan without reviewing it yourself. The same approach applies here: verify the output before acting on it.
The biggest risk of AI is not that it produces incorrect answers, but that it produces answers that align with your thinking even when your thinking is wrong. That makes it both a powerful tool and a potentially dangerous one. The solution is not to avoid AI; it is to develop the habit of questioning it.