Opinions

What Worries Me Most About AI After 26 Years As a Practicing Doctor

Elizabeth Vainder, M.D., F.A.A.P.


For 26 years, I have practiced medicine because I believe caring for people is one of the most meaningful responsibilities a person can accept.

That belief is why I cannot stay quiet about what is happening in healthcare right now.

Across the country, hospitals and health systems are rapidly embedding artificial intelligence directly into clinical care. These are not simply digital scribes or tools that summarize charts after a visit. Increasingly, these systems are designed to suggest diagnoses, recommend treatments, and shape the clinical thinking of the physician sitting across from your mother, your child, your spouse, or you.

One leading platform proudly advertises a perfect score on the United States Medical Licensing Examination and is reportedly used by 40% of U.S. physicians. I understand why that sounds reassuring. We are conditioned to believe that high test scores equal competence.

But medicine is not a multiple-choice exam.

The USMLE is built around questions with right answers. Real patients rarely present that way.

Real medicine is the exhausted parent holding a feverish child at 2 a.m. It is the elderly patient who cannot afford the medication you know would help them most. It is the frightened family sitting in a rural emergency department while labs are delayed and decisions still have to be made, knowing the nearest hospital is miles away.

It is uncertainty, complexity, incomplete information, and human judgment.

A perfect score on a standardized test tells us very little about performance in the messy and deeply human reality of patient care. And that reality is exactly where these tools are now being deployed.

For decades, physicians have struggled with insurance companies inserting themselves between doctors and patients. Every physician in America understands the frustration of prior authorizations, denied treatments, and administrative interference in clinical decisions. We fought that battle, and we are still fighting it every day.

Now another layer is emerging, one that does not simply influence access to care but reaches directly into the physician's thought process itself. Unlike insurers, however, this layer remains largely unregulated and poorly understood.

Physicians themselves are signaling concern. Surveys consistently show that doctors worry most about accuracy and misinformation in clinical AI systems. Liability remains another unresolved question. If a physician signs the chart, the physician owns the outcome.

But does the software company? That question rarely receives a clear answer.

I want to be equally clear about something else. I use AI in my own practice. I believe these tools hold enormous promise when applied thoughtfully and with meaningful physician oversight. But there is an important difference between a tool that supports clinical judgment and one that quietly substitutes for it.

This distinction is especially important for younger physicians and trainees who may not yet have enough experience to recognize when the machine is wrong. AI systems can produce recommendations with remarkable fluency and confidence. But confidence is not the same thing as wisdom. We must also remember that adoption is not validation, and speed is not safety.

Widespread use does not eliminate the need for scrutiny. Healthcare innovation often moves faster than oversight, but medicine is not a typical industry. Mistakes here are not inconveniences. They affect human lives, families, trust, and safety.

One day, every one of us will be a patient. The people we love will be patients too.

AI systems will increasingly participate in those moments of vulnerability and decision making. Before we hand over more clinical reasoning to systems we do not fully regulate, fully understand, or fully hold accountable, we should be asking ourselves: Are we truly comfortable with that?

Written by Elizabeth Vainder, M.D., F.A.A.P.

I’m a practicing board-certified pediatrician with over 25 years of experience caring for children and families in South Florida. As a bilingual, Hispanic physician, I’ve had the privilege of building trust with families across cultures and backgrounds, which has shaped how I practice and how I advocate. I’m also the founder of DrVCares Pediatrics, a concierge membership practice, and DrVCares, a pediatric education platform for parents.
