There is a version of the AI-in-medicine conversation that stays safely abstract, full of potential and timelines and thought leadership that asks nothing of anyone. This episode is not that conversation.
When Dr. Graham Walker sat down with Dr. Byron Crowe, Chief Medical Officer of Doctronic and first author of the Society for General Internal Medicine's position statement on generative AI, the program they were discussing wasn't a pilot proposal or a research concept. It's already running. In Utah, right now, an AI system is autonomously renewing prescriptions under a first-of-its-kind state-approved regulatory framework.
Byron didn't arrive at this work as a technologist who learned some medicine. He trained as an internist, taught at Harvard Medical School, and spent years thinking carefully about what AI can and cannot do in clinical settings before concluding that the right response to a broken system isn't caution for its own sake, but rather building something better and being willing to own the outcome. The Doctronic model has four layers: AI chat for general health questions, an AI consult that takes a full history and produces a differential and preliminary plan before any physician enters the room, human care delivered by a nationwide physician practice, and autonomous care that is currently limited to prescription renewals in Utah under a graduated autonomy model that begins with physician review of every single refill before moving toward full autonomy with retrospective oversight.
The conversation Graham and Byron have about liability is the one most health AI companies refuse to have in public. Graham names it directly: the pattern in health tech is to show physicians every upside, and the moment something goes wrong, the responsibility lands on the doctor. Byron's response is the through line of the entire episode. Doctronic is a care delivery organization. They see patients. And when their system acts autonomously, they take responsibility for that too. Liability, Byron argues, is not the same as responsibility. And careworthiness, the moral standard he uses to evaluate every deployment decision, isn't about whether the model passed a test. It's about whether the physicians who built it would stand behind every decision it made, knowing everything they knew at the time.
What makes this episode essential listening for skeptical physicians isn't that Byron resolves every concern. There's no way he could. It's that he takes the concerns seriously, engages them on clinical terms, and makes a case that is harder to dismiss than most. The status quo he's asking physicians to compare his system against isn't an ideal version of medicine. It's the actual one: portal messages clicked through by staff, refills processed without a conversation, patients who interact with the healthcare system less in a year than they will interact with Doctronic's AI in a single encounter. The question Byron keeps returning to is the one Bob Wachter has asked about every healthcare innovation: compared to what?
Thank you to our wonderful sponsors for supporting the podcast:
Sevaro is a physician-led telestroke and neurology company that delivers rapid virtual neuro coverage that’s reliable. Learn more at https://sevaro.com/
Evidently - Leading AI-powered clinical data intelligence https://evidently.com/
For physicians trying to separate the signal from the noise on AI in clinical medicine, this episode is a rare conversation grounded in peer-reviewed research, real regulatory experience, and genuine clinical accountability. These are the four ideas most worth sitting with.