To accompany the release of The 2025 Physicians AI Report, Offcall co-founder Dr. Graham Walker steps out from behind the interviewer’s desk for a special mailbag episode, answering the questions doctors are actually asking about AI: How much are clinicians really using it? When should we trust it? Who should control it? And what will it mean for the future of practice?
More than 1,000 physicians were polled in The 2025 Physicians AI Report, making it one of the most detailed looks yet at how clinicians are adopting and feeling about AI inside their actual workflows. The findings are striking.
Graham walks through these insights with the candor only a working ER physician can offer. He explains why doctors trust AI for documentation but hesitate to let it steer clinical decision-making. He breaks down the incentives that make clinicians worry about payer-controlled AI. He shares the everyday frustrations — like fax machines, EHR limitations, and needless administrative complexity — that make physicians crave practical, low-friction tools rather than moonshot diagnostics.
And he issues a clear call to action: if clinicians aren’t vocal participants in shaping AI adoption, the technology will be deployed to them instead of with them, repeating the mistakes of the last decade.
👉 You can explore every chart and insight at The 2025 Physicians AI Report, available now at: https://2025-physicians-ai-report.offcall.com/
This episode isn’t about hype. It’s a grounded look at where AI genuinely supports better care, where it still falls short, and what physicians need to ensure the future of medicine is built around clinical judgment and not bureaucracy.
The data reveals a quiet revolution underway: clinicians are using AI daily for writing notes, drafting documents, patient communication, and simplifying routine tasks. This widespread, organic adoption contrasts sharply with the sluggish pace of formal institutional rollouts — and signals that physicians will lead innovation from the ground up, whether their organizations keep pace or not.
Physicians overwhelmingly fear AI placed in the hands of payers or administrators. As Graham explains, AI is neutral; the incentives behind it are not. A payer-controlled model that nudges toward fewer admissions or fewer tests poses a fundamentally different risk than a physician-centered tool designed to improve care. The report makes clear: clinicians trust AI when it supports their judgment, not when it threatens to override it.
The top clinician priority isn’t diagnostic support; it’s eliminating administrative drag. Documentation, messaging, paperwork, and workflow bottlenecks remain the most painful parts of modern practice. AI that meaningfully reduces this burden is welcomed. AI that replaces clinical reasoning is approached with caution for good reason.
Graham’s ideal AI isn’t a black box that issues orders; it’s a trusted colleague — a second set of eyes that notices what you might have missed, nudges you toward differentials, and supports safer care without stripping autonomy. The lesson from The 2025 Physicians AI Report is clear: when physicians help shape the tools, adoption rises, trust increases, and outcomes improve. When they are sidelined, resistance grows.
On/Offcall is the weekly dose of information and inspiration that every physician needs.
Graham Walker:
Welcome to How I Doctor, where we're bringing joy back to medicine. Today's episode is a little bit different, so let's dive right in and welcome our guest, me. I hear from physicians every day on shift in DMs, texts, emails about AI and healthcare. People ask me, will it take my job? How should I be using it? How shouldn't I be using AI? And so we decided to take all these anecdotes and survey physicians to get some actual data. Off calls releasing our biggest project yet, a confidential AI survey of over a thousand physicians. And I'll be honest, some of these numbers stopped me in my tracks. We're used to being told by vendors, by administrators, by teams that still use fax machines, what AI should be doing for us. But this survey suggests that physicians are already using AI way more than anyone realizes, and they like it way more than anyone wants to admit.
Now, of course, there's some selection bias at play here. This isn't a randomized sample. We aren't double blinded to the results. I'm always amazed at Offcall's reach, but we are certainly skewed towards online, engaged doctors. I don't think that means the data's wrong. I think it's showing the early trends that are likely to continue. Here are just a few statistics. Two out of three physicians say they're using AI every single day, which means if you've ever whispered a question to ChatGPT at 3:00 AM on shift, congratulations, you're not weird, you're in the majority. But here's the twist. 81% of doctors are frustrated with how their organizations are deploying AI, which is the most physician thing ever. Yeah, I use it constantly, but it's not set up how I want to use it, which tells you everything about how physicians think versus how healthcare organizations think.
So today, the tables are turned. I'm on the spot. Producer Chad is here and he's going to hit me with the AI questions that are trending most among physicians. We're going to break down what doctors are actually asking me, what the data actually shows, and what this all means for the future of practicing medicine.
Chad Parizman:
So let's just jump into it. I think a lot of these were pretty fascinating findings to the team. 89% of physicians are saying they're using AI at least once a week.
GW:
Yeah, it's like two thirds, I think, every day, and then almost 90% at least once a week.
CP:
Yeah. So that's almost everybody is at least dabbling with it at this point. But only 42% are saying that they're more likely to stay in medicine because of AI, which is a topic we've been talking about for the past year on this podcast. So does that surprise you? And do those numbers feel like a disconnect?
GW:
No, they don't surprise me and they don't feel like a disconnect, Chad. We have this term you learn in genetics, necessary but not sufficient. So I view the AI stuff as a necessary improvement, but it's not sufficient to make the work acceptable, tolerable, meaningful. Enjoyable is probably the right word. There's so much stuff that AI will never be able to fix because it's not a problem that AI is going to address, ever. And so that doesn't surprise me one bit. There are so many problems and challenges in practicing medicine and delivering healthcare today that I think there's plenty that AI can absolutely help with, and will help with, and will become bread and butter stuff that we don't know how to practice without, but there will still probably always be a lot of stuff that is just not an AI problem that AI maybe just can't help with.
CP:
And we've got some questions about this later, but you're genuinely in the camp that AI is not going to replace physicians anytime soon, right?
GW:
Yeah, that's generally my take. I'm a cautious optimist that AI can tremendously help physicians with their work. I think every physician will tell you that some portion of their work could be automated, improved, simplified, because a lot of it is kind of a pattern that we see and we do the same thing over and over again. My hope is that AI helps physicians be able to spend more time understanding their patients' needs and their values and stuff, as opposed to having to spend that time collecting information about how they sprain their ankle or something like that.
CP:
Let's just zoom out a little bit. When you first started seeing these numbers come in, and now that we've done the analysis on them, what was your gut reaction when you first saw some of these pretty big numbers and some pretty clear headlines out of them?
GW:
I was surprised, honestly. I think it's just the early part of the trend. And if you think of that innovation diffusion curve, the numbers that we're seeing from the survey, it's not like they're going to go down. It's just that these people are early. And physicians all want to feel very much like we're right in the middle of the bell curve. There's a phrase one of my old attendings once taught me, shout out Dr. Andy Leifer, who's now retired. Andy said, "Well, you don't want to be the first person to be using a medicine or a drug, but you also don't want to be the last person, either." There's some risk from trying new things, but there's also some risk from not changing the way you practice and improving the way you practice, and keeping up with the literature and adopting that in your workflow, too.
So I view that exact same thing about AI is you don't want to be the first person to use AI, but you also don't want to be the last, either.
CP:
We said two thirds of physicians are using it daily, but we also saw 81% are dissatisfied with how their organizations are deploying it. So back to this, it seems like physicians tend to ask for forgiveness after the fact rather than asking for permission. So this to me seems a little bit contradictory. They are using it, but they don't really like how they're being given permission to use it. So what do you think is actually going on here with those sets of numbers?
GW:
Well, first, physicians like to be in control of everything. We are control freaks to some degree. I'm kind of joking, but we also have large responsibilities for our patients' lives and their outcomes, so there's a reason we like to be in control. I think physicians want to move faster and try out new stuff, especially when they see tools like ChatGPT being helpful in their daily lives and see it in the media every fricking day. So the idea is, I hear about this technology in society, maybe I'm using it in my personal life or with my family or planning a trip or something like that, and I can't use it the way I want to use it in my work life, and maybe other people in other industries are using it for their work life. I think that's where this comes from.
CP:
On that note, back to this autonomy and control, again, the numbers show three quarters of people said that they have no influence on which AI tools their organizations are adopting, but when they actually do get to choose, it's overwhelmingly positive, or at least neutral reactions, to the tools. When you're talking with colleagues, when you're working with a big system inside of Kaiser, how are you thinking about that? Again, this probably is not surprising. Does this give physicians more firepower to say, "Hey, invite us in and we'll actually be happy with these tools"?
GW:
Physicians want a seat at the AI table. I think they want to be heard. I don't think any physician's going to say that the one cardiologist or the pediatrician should be the one that decides which tool everybody else uses. It's like I prefer Google Docs to Microsoft Word, but that's not always an option in an organization or a company that you're a part of, and sometimes it is. So I feel like there's some appropriate amount of tension there. I think the physician also would want a tool that has been vetted or tested or works with their population. And I think the messaging is important, to communicate that clearly. Physicians are fairly rational actors, I think, most of the time. If you say, "Yeah, sorry, you can't use a Mac, you have to use a PC," and then you explain why, I think a physician would say, "Oh, okay." And in the same way, you could say, "Hey, we decided to use Microsoft Copilot instead of ChatGPT," or something like that.
CP:
I know you've been part of technology rollouts, probably as someone that helped make a decision, and then you've been given some of these tools, or they've been thrown upon you, and maybe, to your point, not your first choice. You don't really love Microsoft Word, but everyone just gets used to using it. When we asked about fears from physicians, we thought maybe it was going to be about AI replacing them or patients. And actually, the number one fear was AI ending up in the hands of payers and administrators. And we pulled out this one quote: "The most sophisticated AI will end up controlled by third party payers and bureaucracy, not physicians." And that was their biggest fear. One, is that just a valid fear that doctors should have right now?
GW:
Yeah, I think so. All technologies are arguably neutral, Chad. I will say that. AI is a technology. I love this quote: you create the ship, you create the shipwreck, right? When you create electricity, you now have a new risk. Someone can get electrocuted. You didn't used to be able to get electrocuted. AI is a neutral technology, and AI in particular will do whatever its master tells it to do, to some degree. If a payer builds an AI and says, "Yeah, you have to use this one," physicians are probably going to be a little skeptical of which side the AI is on. I was speaking to this med group, and a hospitalist asked me, "Well, so is the AI going to make there be more patients that I'm admitting to the hospital or fewer patients? Which way is it going to go?" And I said, "Well, it could go either way."
I think that physicians are in the unique position of hopefully being less biased. We've taken this Hippocratic Oath thing, and certainly I don't want to claim that physicians are angelic saints and perfect either, but when you compare that to a payer or a health system that has a very hard financial benefit that's arguably attached to getting care or not getting care, getting admitted to the hospital or not getting admitted to the hospital, I think the patient and the physician feel way closer to what the "right" answer is. You could say there's going to be a health system bot that's going to say, "Yeah, admit them to the hospital," but they have an incentive because they can collect money from the insurance company. And the insurance company bot's going to say, "No, that doesn't meet our criteria for admission to the hospital." And they're going to disagree and say the opposite.
But I think the clinician and certainly the patient are, that unit combined, are hopefully closer to the right answer than a bot made by a payer or a health system. I think. I don't actually know.
CP:
Would you rather have a tool that was foisted upon you by an administrator or by this third party, or no tool at all and continue to have to do these things like going rogue and using it on the sly?
GW:
I think it really depends on the workflow, Chad. There are so many things that a physician does through any clinical day, right? If you were going to propose, "Hey, this third party's offering you a tool that'll fill out all your paperwork," then yeah, I'm going to totally choose that tool over no tool at all. If it's a tool that's going to help decide if a patient should be admitted to the hospital, maybe I don't want that tool at all, and that's for a couple reasons. One is certainly the risk of bias from the tool. But the other one is, if I'm ultimately still responsible, medically and legally, for the patient, then how does that tool help me?
I already knew the patient should be admitted to the hospital. If the bot agrees with me, that's fine, but if it disagrees with me, I'm still going to disagree with the bot and still admit the patient. So then what was the point of asking the bot in the first place? That's the situation where I think there's less adoption by clinicians where they say, "Oh, I don't really like this tool. I don't find it very helpful."
CP:
That's a really good point. Like do I want the tool at all is the first thing you should be asking.
GW:
Yeah, I think so. And then I think there's certain situations where again, every physician's like, this documentation stuff is terrible. I don't want to do it. It would be a relief if I could do it less. I think that's why AI scribes are such an obvious place for people to find value.
CP:
For the people that were surveyed, we broke out all the different tools that they were using, and over half are using some type of documentation tool. And we asked, what's your favorite tool for documentation? And it was very close. It was Abridge, then DAX Copilot, then Ambience. So it seems like AI scribes specifically are the ones resonating the most with physicians right now. What do you think is making that work so well?
GW:
I think it's the rare tool that reduces documentation time or burden, especially what I'll call the clerical documentation burden on physicians, the most out of any of the current AI solutions, to my knowledge. And I think that's such a pain point. Physicians find charting to be tedious, tedious, tedious, but accurate, good charting is important. And it's necessary, right? It's a tool. It's a reflection of what you did and how you thought about the patient. And it's also a billing document, hopefully, for the physicians as well. So I think that AI scribes reduce that burden the most and probably improve the quality of the note the most as well, hopefully, out of all the current AI options.
CP:
Do you have a thought on the ones, on the couple that were mentioned there about who's leading the way right now?
GW:
Yeah. So Abridge was number one, which is awesome. I'm glad Abridge is well-represented. I'm lucky to be able to work with them every day because they're an Offcall partner. I will also say it's hard to use an AI scribe if it's not available within your EHR, too. So I don't know what the breakdown of the respondents' EHR systems or AI scribe availability was. I think people are often comparing different types of AI scribes. I tell people, even if my health system made me use Suki or DAX Copilot or Abridge, I think I would probably be happy with any of them. They're all such a tremendous upgrade from typing notes, or dictating into Dragon, or asking my MA or an associate to hand document stuff for me. So I think there's less competition between the AI scribes, and more that these AI scribes need to be readily available or affordable for health systems to adopt them.
CP:
If you're a leader listening to this, the idea of affordability, I guess, explain to them why does price matter so much when physicians are already dealing with all these problems with charting and documentation?
GW:
Yeah, I think we had some questions in the survey about this and we can leave them for folks to read as well. I think there is a certain amount of disconnect, sometimes, from physicians who have really pressing needs for these tools. They find the tools very valuable, AI scribes in particular. People told us in the survey, and correct me if I'm wrong, Chad, but I think I'm remembering this correctly, somebody was like, "Yeah, I want to try a scribe, but there's no budget for that." To some physicians, that feels like being a line cook or a chef, and somebody's saying, "Hey, would you like a knife in the kitchen?" And the chef's like, "Yeah, of course I would like a knife in the kitchen." And the accountant's like, "Well, I don't know if we have any money to get you a knife," and the chef's like, "What the hell? How do I make food without a knife?"
I think that is fundamentally a failure of communication or education, of leadership not listening to the people who are doing the work. I don't think that's necessarily the reality. I'm probably biased because I work with Abridge, but I think AI scribes absolutely can be cheaper than paying a scribe or paying an MA to scribe. And I think the physician experience is oftentimes better as well with the AI scribe, because it's always there, it's always available, and it's always going to try its best to capture every word, whereas a human scribe might be having a bad day or not able to capture every word perfectly.
CP:
I've got a quote right here from the data, from a respondent who said, "Organizations will spend hundreds of dollars per hour on traveling nurses, but won't find the money for AI scribes which vastly improve physician job satisfaction and likely patient care."
GW:
Oh, damn. I think Abridge and all the AI scribe companies are trying to calculate and tell their story of the ROI. And I agree with the AI scribe companies. I think there is tremendous ROI there. Again, I think that's a failure of leadership to communicate, or to understand why the physician would be comparing those two and think it's a problem that you're funding one and not the other.
CP:
All right. So there's a ton more data in the release that we're about to do. We'll have a link in the show notes, and people can click through to the article on Offcall.
GW:
Cool.
CP:
So let's just... Your usual rapid fire. Given everything that we've heard, all these numbers, what's your honest assessment of the current state of AI in medicine right now?
GW:
Based on these numbers, I think it's being used more than physicians are talking about publicly, and it's only going to continue to grow.
CP:
What's the thing that the media is getting wrong when they're talking about AI in general, but also AI in medicine?
GW:
I think that doctors are generally cautiously optimistic about it, but we are also unwilling to risk our patients' safety for it.
CP:
And then let's just talk to some of the folks that we know are listening. So tell healthcare executives one thing that they're like, "You need to know this based on the survey data."
GW:
Oh, I think that your physicians don't feel like they have a seat at the table, that they don't feel like they're heard.
CP:
And then on the flip side, if you could tell all these, the AI companies themselves, the ones that are building tools for physicians, what do they need to understand from this dataset?
GW:
Understand that physicians are not all the same people. We don't do the same work. And so if you've talked to a pediatrician, you have no idea what a cardiologist does, or a pathologist, or an ER doctor, or a primary care doctor. And so if you say you've built an AI solution for physicians, unless it's something very general that we all do, like write notes, you haven't built a physician tool. You've built a tool for a certain group of people with certain tasks.
CP:
Awesome. And we've definitely heard that from a lot of the guests we've had on recently, as well. And then last, the biggest group that we know is listening right now, other physicians, your colleagues, the folks that you see every day. What's the headline for them?
GW:
I think the headline is it's a really exciting time to be in medicine and I think the future's really bright if we are all active participants in this, but that requires us to go above and beyond and be vocal and share our experiences with AI and how we want it to work and what would be helpful to us and what wouldn't be helpful.
CP:
Amazing. Dr. Graham Walker, thank you so much for being a guest on How I Doctor today.
GW:
On my own podcast.
CP:
Any last words?
GW:
Thank you for having me, Chad.
No, I'm excited for the future and I really believe that AI can be tremendously helpful if we deploy it in a safe and ethical and accurate way. Thanks for joining me today.
For interviews with physicians creating meaningful change, check out offcall.com/podcast. You can find How I Doctor on Apple, Spotify, or wherever you listen to podcasts. We'll have new episodes weekly. This has been, and continues to be, Dr. Graham Walker. Stay well, stay inspired and practice with purpose.
To make sure you don’t miss an episode of How I Doctor, subscribe to the show wherever you listen to podcasts. You can also read the full transcript of the episode below.
Offcall exists to help restore balance in medicine and improve the wealth and wellbeing of physicians. Sign up for Offcall here to bring about more physician compensation transparency and join our physician movement.