The rise of generative AI tools like ChatGPT, Claude, and Gemini has created both excitement and anxiety in healthcare. For physicians, these tools offer compelling possibilities for streamlining workflows and enhancing patient care — but they also raise critical questions about safety, compliance, and clinical judgment.
Can you trust AI-generated medical content? How do you maintain HIPAA compliance? And where should busy clinicians even begin?
This guide draws from insights shared in Offcall's recent physician-led AI webinar featuring Offcall co-founder Graham Walker, MD; emergency medicine chief resident Allyssa (Ally) Abel, MD, MPH; and Abridge senior physician executive Reid F. Conant, MD. The session emphasized putting physicians in control of AI adoption while avoiding common pitfalls that could compromise patient safety or regulatory compliance.
What AI Actually Does
Generative AI tools are not medical databases or search engines. They're large language models (LLMs) that predict and generate text based on patterns learned from vast training datasets. Because they produce plausible-sounding text rather than retrieve verified facts, they can state errors with complete confidence, and that fundamental difference shapes how you should use them.
Your Role as the Expert
Think of AI as a sophisticated writing assistant or brainstorming partner — never as a replacement for your clinical expertise. You remain the final authority on all medical decisions and content validation.
Why This Matters
Consumer AI platforms store your inputs and may use them to improve their models. Once you submit protected health information (PHI), you cannot take it back, and you may already have violated HIPAA regulations.
What Counts as PHI
PHI includes any information that could identify a patient: names, dates, medical record numbers, contact information, photos, and even distinctive clinical details that could make a case recognizable to others.
The Safe Approach
Unless your institution has explicitly approved a specific AI tool through a Business Associate Agreement (BAA) or formal IT approval, treat all AI platforms as non-secure and PHI-prohibited.
Administrative Tasks
Drafting emails, meeting summaries, and administrative documents that contain no patient information.
Educational Uses
Summarizing published literature, generating study questions, and explaining unfamiliar concepts in plain language.
Professional Development
Polishing CVs, cover letters, and presentation outlines.
With proper safeguards and institutional support, you can consider more advanced applications, such as institutionally approved documentation tools.
Apply this simple test: Could this task compromise patient safety, privacy, or legal compliance? If yes, don't use AI without explicit institutional approval.
Start with tasks that are low-stakes, free of PHI, and easy to verify yourself.
Common Pitfalls to Avoid
The "Just This Once" Trap
Never compromise on PHI rules, even for seemingly harmless cases. Habits form quickly, and compliance violations can have serious consequences.
Over-Reliance Syndrome
Don't let AI become a crutch. Continue developing your own clinical reasoning and communication skills.
Isolation Adoption
AI implementation works best as a team effort. Lone-wolf approaches miss opportunities for shared learning and institutional safeguards.
Start Your AI Journey Today
Questions for Self-Assessment
Before you begin, ask yourself: Has my institution approved a specific AI tool? Would this task expose any PHI? Can I personally verify the accuracy of the output?
Generative AI will continue evolving rapidly. By starting with safe, low-risk applications now, you're building the skills and judgment needed for more sophisticated uses as technology and regulations mature.
The goal isn't to become an AI expert overnight — it's to develop informed comfort with these tools so you can make deliberate choices about when and how to use them in your practice.
Most importantly, don't let uncertainty paralyze you. As the webinar speakers emphasized: "Kick the tires. See what works. But do it safely."
On/Offcall is the weekly dose of information and inspiration that every physician needs.
Want to learn more about implementing AI in your practice? Join our physician community for ongoing discussions and updates on AI best practices. Subscribe to our newsletter and follow us on LinkedIn to receive invitations to future AI webinars for physicians.
You can also read our free AI Resource Guide, which gives you the tools, tips, tricks, and resources you need to get up to speed and get started with AI today.