Copyright © 2026 Offcall All Rights Reserved

How to Start Using AI Without Making a Dangerous Mistake

Offcall Team

Generative AI is already in the exam room, whether clinicians invited it or not. Colleagues are using it to draft notes, look up drug interactions, and synthesize research. Patients are using it to interpret their own labs before appointments. Health systems are beginning to build it directly into clinical workflows. The question is no longer whether to engage with these tools, but rather whether you understand them well enough to use them without getting burned.

Resources:

  • Session slides
  • Dr. Michael Hobbs' AI 101 Guide

This session is part of Offcall's AI Residency series. The next session is Wednesday, May 6.

The sycophancy problem is a clinical problem

Large language models are trained to be helpful, which means they are designed to give you the answer you seem to want. In consumer applications, that's a feature. In medicine, it's a hazard.

If you lead a model in the wrong direction, it will follow you there. The clinical analogy is direct: a patient who says "I have a fever" produces a wide-open differential. A patient who adds "my ear hurts and my kid has strep" narrows it considerably. LLMs work the same way: what you put in shapes what comes out, errors included. The model isn't going to push back the way a good colleague would.

This is distinct from hallucination, though both matter. Models will confidently generate information that is simply fabricated — including citations, vital signs, and clinical details you never provided — and it will look entirely plausible. Models have improved, but the underlying risk hasn't gone away. The output will still look good.

A framework for evaluating outputs

A useful habit for assessing any AI output: Is it complete? Is it traceable? Is it accurate? Does it fit what you asked for? And is it transparent about uncertainty?

That last point is especially important. These models struggle to say "I don't know." You can prompt them to do so explicitly — telling the model upfront that uncertainty is an acceptable, preferred response. Asking a model to show its sources doesn't guarantee accuracy, but it changes the character of the output in useful ways and gives you something to verify.
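One way to build that instruction in consistently is to bake it into a reusable system prompt rather than retyping it each time. The sketch below is illustrative only; the `build_system_prompt` helper and its wording are assumptions for this example, not a validated template from the session.

```python
def build_system_prompt(clinical_context: str) -> str:
    """Compose a system prompt that makes uncertainty an acceptable,
    preferred response and asks for verifiable sources."""
    return (
        "You are assisting a licensed clinician. "
        f"Context: {clinical_context}. "
        "If you are not confident, say so explicitly; \"I don't know\" "
        "is preferred over a guess. "
        "Cite the guideline or source for every factual claim so the "
        "clinician can verify it."
    )

prompt = build_system_prompt("outpatient adult primary care")
```

The point is not the exact phrasing but the habit: uncertainty permission and a request for sources go in every clinical prompt by default, not just when you remember.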

Think of it the way you'd approach a trainee's note: trust but verify, every time.

PHI and the habit that protects you

Keep protected health information out of your prompts. This applies regardless of platform unless your organization has a specific HIPAA-compliant agreement with that vendor. Default to whatever tools your institution has already vetted, and build the habit of scanning every prompt for identifiable information before you send it. Good prompting habits and good data hygiene are, in practice, the same habit.
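That scanning habit can even be partially automated. The sketch below is a minimal illustration, not a de-identification tool: the pattern names and regexes are assumptions chosen for the example, and they catch only a few obvious identifier formats. Real PHI handling requires institutionally vetted tooling.

```python
import re

# Illustrative patterns only; real de-identification needs vetted tools.
PHI_PATTERNS = {
    "date": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:\s]*\d+\b", re.IGNORECASE),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the categories of possible identifiers found in a prompt."""
    return [name for name, pat in PHI_PATTERNS.items() if pat.search(prompt)]

hits = scan_prompt(
    "62-year-old male, MRN: 4471253, seen 03/12/2026 for chest pain"
)
# hits flags the MRN and the date; a clean prompt returns an empty list
```

A scan like this is a seatbelt, not a substitute for the habit itself: it will miss names, addresses, and rare identifiers, so the manual read-before-send check still applies every time.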

Liability hasn't moved

When AI gets something wrong, the clinician owns it. State medical boards have been consistent on this, and technology companies have no incentive to change it. "The AI told me so" is not a defensible clinical position — and it won't be for the foreseeable future. That's precisely why physicians need to be the ones shaping how these tools enter clinical workflows, not as a rubber stamp, but with genuine authority over how and when they're used.

The best way in is to start

The fastest way to develop good judgment about these tools is to use them in low-stakes settings first. Plan a trip. Draft an email. Upload a guideline to NotebookLM and generate an audio overview for your commute. These interactions build the intuition you'll need to recognize when something is going wrong — before the stakes are higher.

Think of AI as the world's best chief resident: tremendous knowledge, generally excellent, occasionally wrong in ways that are hard to spot, and ultimately working under your judgment. You wouldn't sign a resident's note without reading it. Don't treat AI-generated output any differently.

Written by Offcall Team

Offcall Team is the official Offcall account.

Tags: webinar, AI

