When physicians talk about AI publicly, the conversation often centers on safety: liability, hallucinations, and the fear that a machine might make a dangerous clinical mistake.
But behind closed doors and in the data, doctors are worried about something very different.
According to the 2025 Physicians AI Report, physicians’ deepest fears around AI have little to do with malpractice or diagnostic errors. Instead, they revolve around how AI will be used on them, not for them.
The real concern isn’t that AI will replace doctors. It’s that it will change the practice of medicine in ways that erode autonomy, meaning, and trust, while quietly shifting power away from clinicians and toward administrators who don’t understand the work.
This article unpacks the three fears that surfaced most strongly in the survey:

- That AI's efficiency gains will be captured by the system rather than returned to physicians or patients.
- That optimizing for speed, standardization, and metrics will erode the art of medicine.
- That control over how AI is used will rest with administrators rather than clinicians.

Together, they explain why enthusiasm for AI often coexists with deep unease.
The most consistent—and emotionally charged—fear expressed by physicians is not job loss. It’s exploitation of efficiency.
Doctors understand that AI can save time. Many already experience it firsthand through documentation tools and general AI assistants. But instead of imagining that saved time being returned to patient care or physician well-being, many fear it will be reclaimed by the system.
The unspoken expectation they worry about is simple:
If you can see patients faster, you should see more patients.
Physicians are deeply skeptical that productivity gains will be returned to them as time for patient care or for their own well-being.
Instead, they anticipate increased patient volume, tighter schedules, and higher expectations, without meaningful negotiation or benefit-sharing.
This fear reflects lived experience. Over decades, efficiency improvements in medicine have rarely resulted in less work. They’ve resulted in more throughput.
AI, in this framing, becomes not a relief, but a lever.
Medicine is not just a technical discipline. It is relational, interpretive, and deeply human.
Many physicians expressed concern that AI, especially when optimized for speed, standardization, and metrics, will erode the art of medicine, the relational and interpretive work that makes the discipline human.
Doctors worry that as AI systems become embedded in workflows, subtle pressures will emerge.
This fear isn’t anti-technology. It’s about what gets valued.
If AI optimizes for what can be measured (time, volume, coding accuracy), then what can't be measured risks being marginalized. The art of medicine doesn't show up cleanly in dashboards.
Physicians worry that once care is optimized primarily for efficiency, something essential may be lost—and difficult to recover.
Perhaps the most corrosive fear revealed in the survey is about control.
Doctors are not afraid of AI itself. They are afraid of who decides how it’s used.
Repeatedly, physicians expressed frustration with the prospect of decisions about AI being made for them, by people far removed from the clinical front line.
This fear is magnified by the broader adoption context.
From the physician’s perspective, this creates a dangerous dynamic: tools that reshape clinical work are being controlled by people who don’t practice medicine—and don’t experience the consequences firsthand.
AI, in this scenario, becomes a management instrument, not a clinical one.
One of the most powerful themes in the survey is anxiety about how AI is framed at the executive level.
Physicians repeatedly referenced concerns that AI would be positioned primarily as a cost-cutting and productivity tool.
Even when AI is introduced under the banner of “efficiency” or “innovation,” doctors often hear a subtext: do more with less.
This creates mistrust.
When physicians sense that AI is being used to extract more labor rather than support care, resistance becomes emotional, not technical. The fear isn’t that AI will fail. It’s that it will succeed in the wrong way.
Notably absent from the top concerns is malpractice liability.
That doesn’t mean physicians are unconcerned about safety, but it suggests that safety fears are secondary to structural ones. Doctors trust their own judgment. They assume responsibility for clinical decisions regardless of tools.
What they don’t trust is how systems behave once efficiency is unlocked.
In other words, physicians believe they can manage AI risk clinically. They are less confident it will be managed ethically or equitably at the organizational level.
When physician resistance to AI appears, it’s often misinterpreted as technophobia.
The survey suggests something far more human:
AI amplifies existing tensions in healthcare: between care and cost, autonomy and standardization, professionalism and productivity.
Doctors aren’t resisting AI. They’re resisting what it might enable in the wrong hands.
The fears outlined in the survey point clearly to what physicians actually want: a genuine voice in how AI is deployed, and assurance that efficiency gains will be returned to patient care and physician well-being rather than converted into throughput.
The most important insight from the survey is not that doctors are afraid of AI.
It’s that they are afraid of what happens when AI is introduced without trust, alignment, or clinician voice.
AI has the potential to restore time, reduce burnout, and improve care. But if it is framed primarily as a cost-cutting tool, controlled by administrators and disconnected from clinical reality, it risks deepening cynicism and accelerating disengagement.
The future of AI in healthcare will be shaped less by algorithms than by intent.
Doctors are watching closely.
And what they fear most is not malpractice; it's being optimized out of the very profession they devoted their lives to practicing.
Download the 2025 Physicians AI Report