There is a simple principle that came up repeatedly in the session:
“The best way to learn generative AI is just to start using generative AI.”
That advice is correct. It just needs context.
Not all use cases carry the same level of risk. The safest way to start is outside of clinical care. Low-stakes use builds familiarity. It helps you understand how these tools behave before you rely on them in higher-stakes environments.
To see how clinicians are being introduced to this approach, watch "Introducing AI Residency for Clinicians - Part 1."
Before using AI clinically, use it for everyday tasks. Ask it to plan a trip, draft an email, summarize an article, or explain a topic you are unfamiliar with.
These interactions help you build intuition. You start to recognize when outputs feel off. You begin to understand how different prompts produce different results. You notice where the model struggles.
That intuition is what allows you to safely identify issues later in clinical contexts.
AI outputs are highly dependent on inputs.
“What you put in is very impactful to what you get out.”
Vague prompts tend to produce generic responses. More detailed prompts tend to produce more useful ones.
A simple structure can help: give the model relevant background, state the task clearly, and specify the output you want.
Adding context reduces the likelihood of hallucinations. It also improves relevance. At the same time, more detail increases responsibility. The more context you provide, the more you need to ensure it is accurate and appropriate to share.
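As a rough illustration, here is a minimal Python sketch of that structure. The field names and the sample prompt are illustrative assumptions, not something prescribed in the session:

```python
# A minimal sketch of a "background + task + format" prompt structure.
# The exact fields are an assumption; adapt them to your own workflow.

def build_prompt(background: str, task: str, output_format: str) -> str:
    """Combine context, a clear ask, and a target format into one prompt."""
    return (
        f"Background: {background}\n"
        f"Task: {task}\n"
        f"Format: {output_format}"
    )

# A vague prompt invites a generic answer:
vague = "Summarize this article."

# A structured prompt tends to produce a more useful one:
specific = build_prompt(
    background="I am a hospitalist who has not yet read the attached review.",
    task="Summarize the key recommendations and any points of controversy.",
    output_format="Five bullet points in plain language.",
)
print(specific)
```

The difference between the two prompts above is exactly the tradeoff described: the second is more work to write, and everything in it has to be accurate and appropriate to share, but it gives the model far more to work with.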
One of the most important habits to build early is protecting patient data.
“Keep protected health information out of your prompts.”
Unless your institution has a formal agreement with an AI vendor, you should assume that any data entered into these tools is not protected.
That means no names, no medical record numbers, no dates of birth, and no other details that could identify a patient.
This is less about the tool and more about the habit. Scanning your prompt before submitting it should become automatic, just like checking a medication dose before ordering it.
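For readers who want to make that habit mechanical, here is a minimal sketch of a pre-submission check, assuming simple regex heuristics. Real identifier detection is much harder than this; the patterns below are illustrative assumptions that catch only obvious cases, and they are no substitute for reading your own prompt:

```python
# A minimal pre-submission scan for patterns that *might* be identifiers.
# These regexes are illustrative heuristics, not a PHI detector.
import re

PATTERNS = {
    "date": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "long number (MRN?)": re.compile(r"\b\d{6,}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.\w+\b"),
}

def flag_possible_phi(prompt: str) -> list[str]:
    """Return warnings for substrings that may identify a patient."""
    return [
        f"possible {label}: {m.group(0)}"
        for label, rx in PATTERNS.items()
        for m in rx.finditer(prompt)
    ]

for warning in flag_possible_phi(
    "Pt DOB 04/12/1962, MRN 00483921, needs discharge summary."
):
    print(warning)  # review and scrub before submitting
```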
AI does not change liability. The clinician remains responsible for decisions made using AI outputs.
“When AI gets something wrong, the clinician owns it.”
This is consistent with how medicine already works. You are responsible for decisions informed by guidelines, colleagues, or trainees. AI is no different. It is another input, not a replacement for judgment.
The most practical way to think about AI is as a highly capable assistant.
“Think of it like the world’s best assistant… really good at many skills.”
It can draft, summarize, explain, organize information, and generate options quickly.
It cannot take responsibility for a decision, guarantee its output is accurate, or exercise clinical judgment.
That distinction should guide how it is used.
AI is already part of the clinical environment. The difference between using it safely and using it poorly comes down to habits.
Over time, these habits turn AI from a potential risk into a useful tool.