Dr. Sarah Gebauer is an anesthesiologist, a health tech founder, and one of the clearest voices on what AI means for frontline care. But before all that, she’s a clinician who understands what it feels like to spend your nights charting instead of sleeping, and now she’s helping doctors imagine a different way forward.
In this special episode of How I Doctor, Graham Walker hosted a live webinar with Sarah, recorded in front of hundreds of physicians on LinkedIn. Together, they demystify the fast-moving world of AI in healthcare. From scribes and diagnostic tools to what’s coming next with predictive alerts and documentation automation, their goal isn’t hype. It’s clarity. What’s real, what’s useful, and what every clinician needs to know right now.
Physicians, visit Offcall.com/AIGuide to download Graham and Sarah's AI Guide. For everyone else who doesn't have an NPI number, you can download the guide at https://docsend.com/v/3t6h4/offcallwebinar
In the first part of their webinar, three core lessons emerged that every physician should hear. These takeaways aren't just about tech; they're about trust, time, and the future of clinical practice.
Physicians don’t need to master computer science, but they do need to understand how AI is already integrated into their daily tools—from automated alerts to documentation support. The journey begins with building foundational literacy so clinicians can critically evaluate where AI helps and where it still falls short.
Graham and Sarah break down AI into two core types: deterministic AI, which produces consistent, predictable outputs (ideal for clinical decision support), and probabilistic or generative AI, which creates flexible, creative outputs (useful for summarization or charting). Understanding the difference is essential for knowing where each type is most useful—and where it may be risky.
From early warning systems at Kaiser to AI-powered scribes that reduce after-hours charting, the episode highlights how predictive AI is already improving care. These tools analyze subtle patterns across hundreds of variables—often catching signs of deterioration or complexity that human clinicians may miss.
On/Offcall is the weekly dose of information and inspiration that every physician needs.
Connect further with Sarah on LinkedIn
To make sure you don’t miss an episode of How I Doctor, subscribe to the show wherever you listen to podcasts. You can also read the full transcript of the episode below.
Offcall exists to help restore balance in medicine and improve the wealth and wellbeing of physicians. Sign up for Offcall here to bring about more physician compensation transparency and join our physician movement.
Graham Walker:
Welcome to How I Doctor, where we're bringing joy back to medicine. A huge thank you to everybody who attended our first webinar, AI Secrets for Clinicians, which I hosted on LinkedIn with my friend and Valara Health founder, Dr. Sarah Gebauer. We had a great turnout, with over 200 thoughtful questions and comments about how AI is impacting medicine today. You'll hear highlights from that webinar, and stick around to the end, where we'll answer a few questions that we've gotten from fellow clinicians as well. Also, Sarah and I created a special resource guide that features a roundup of AI tools, communities, YouTube resources, prompting tips, and educational sessions that we personally think you can use and will be extremely helpful as you start to understand AI in your clinical practice. Physicians, visit Offcall.com/AIGuide to download it now. And for everybody else who doesn't have an NPI number, we're adding a link you can use in the show notes. While you're there, be sure to subscribe to our On/Offcall weekly newsletter, where we share exclusive interviews with physicians, happenings from around the world of medicine, and webinars and future events. And now here's Sarah and I getting into AI fundamentals and real-world applications in healthcare. What we're going to talk about: Sarah, you want to go over what we're hoping this does for people?
Sarah Gebauer:
Sure. A lot of physicians I talk to, and that Graham talks to, are really intimidated by AI, as he mentioned. And so the thought behind this was: we want to give people the real clinical pearls about AI. What do you actually need to know? How is it being used? What can you do to learn more about it? So we're going to try to hit the high points, but we really want your questions and comments along the way, to make sure that you're guiding us into the appropriate areas as well.
GW:
Yeah, I think this can be really intimidating and honestly scary. It was kind of scary when I first started looking at it. So we really want this to be totally approachable and easy to access. Jared STR in the chat said he wants a prompting webinar. So Jared, I have a couple ideas for prompts in the resource guide, but yeah, I think an entire how-to on prompting is probably a great idea too. So this is kind of what we're going to cover today. We'll just go over some fundamentals. We will not get overly technical, but I do think it's helpful to understand a little bit about how AI works and how it doesn't work. And again, we'll tie this all back to stuff that you already use as a clinician on a daily basis, so it's not scary and intimidating. We'll talk about some real-world ways that AI is being used. Then we'll talk about the jagged edge, where AI works really well and where it's maybe not so hot, and then ways that you can get started and take some action. Sarah, you want to take this one?
SG:
Yeah, I mean, I think we've all heard so much about AI lately, and it's really hard to separate what is meaningful and what's not. A lot of products and tools that we've been using for years have what we would call AI in them, and have for a long time. Now that AI is hot, I think people will often put it in the marketing when the tool hasn't actually changed; just the marketing of it has changed. And then generative AI is one type of AI, but it's not all of AI. It is where a lot of the most profound changes and advancements have come, and that's why it's in the news so much more often. But it goes way beyond being a chatbot, and there's a much wider net to AI in general. So the simple definition, which Graham came up with and I love, is: computers make predictions and find patterns. This is actually what I tell my kids when we talk about AI too. Anything that has a pattern, AI is going to be good at; it's going to be better at finding patterns than we are. And so if you think about it like that, you can basically predict what it'll do well in healthcare as well.
GW:
Yeah, I agree with your first point especially, Sarah. Netflix recommending a show to you, that is AI. Your iPhone or your Android device, as you're typing, predicting what word it thinks you're going to type next, that is AI. So AI is here. It's actually been here since probably the mid-2010s in lots of products; Instagram, all these tools are using AI. It's just really starting to make a larger difference because of generative AI, and we're starting to see how we could use it in medicine, and hopefully that's why you're here. This is a way oversimplification, but I think it's helpful to think of AI in kind of two flavors. Let's start on the right, to be different, since Sarah mentioned generative AI. So think of generative AI: if you've used a chatbot, this is probabilistic AI, meaning that there's a probability applied to the ChatGPT or Claude output.
So you can type in the exact same thing and it's going to spit out different outputs, different responses. There's some kind of creativity there. And you actually can control the creativity in generative AI and make something extremely creative, to the point that it sometimes doesn't even make sense, or extremely rigid and extremely boring, where it's very regimented in how it responds. Probabilistic or generative AI powers chatbots and content creation. So if you think of ChatGPT or Microsoft Copilot, or creating an image or a video, if you've seen some of those demos, that is all generative AI. It is generating something from a prompt, from the ether, from nothing. And then on the left side is deterministic AI. Personally, I think this is the area that will be actually even more profound in the practice of medicine: the same input always gives the same output.
If you use a model, one plus one always equals two. It's never 2.01, it's never the color blue, it's never a meatball sandwich recipe. It's always two. And you can imagine that is a lot of what we want in medicine. We want something that follows a predictable pattern: if they're 87 and they're from a nursing home and they are hypoxic, it's going to give you the same answer for every single 87-year-old from a nursing home who's hypoxic. That's really good for clinical decision support and algorithms, and typically predictive AI. Deterministic AI is a bit more transparent and auditable, and we'll talk about that in a second.
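To make the distinction concrete, here is a minimal Python sketch of the two flavors. The dose rule and the token probabilities are invented purely for illustration, not clinical guidance:

```python
import random

# Deterministic: the same input always yields the same output,
# like a weight-based dose calculation or a clinical score.
def acetaminophen_dose_mg(weight_kg: float) -> float:
    return 15.0 * weight_kg  # 15 mg/kg for illustration: same weight in, same answer out

# Probabilistic (generative): the model assigns probabilities to possible
# next tokens and samples one, so the same prompt can yield different text.
def next_word(candidates: dict[str, float], temperature: float = 1.0) -> str:
    # Higher temperature flattens the distribution (more "creative");
    # temperature near zero makes the most likely word win almost every time.
    weights = [p ** (1.0 / temperature) for p in candidates.values()]
    return random.choices(list(candidates), weights=weights, k=1)[0]

print(acetaminophen_dose_mg(70))                          # always 1050.0
dist = {"pneumonia": 0.5, "bronchitis": 0.3, "PE": 0.2}   # made-up probabilities
print([next_word(dist, temperature=1.5) for _ in range(5)])   # varies run to run
print([next_word(dist, temperature=0.01) for _ in range(5)])  # almost always "pneumonia"
```

The temperature knob in the sketch is the same "creativity control" Graham mentions: crank it up and the output gets unpredictable; turn it toward zero and the output becomes rigid and repetitive.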
SG:
And I'll just add that now a lot of the tools have a component of each, so it's not necessarily all one or all the other. There are lots of tools, especially in healthcare, that will have a generative AI tool with a deterministic tool around it to give it safeguards and to help decrease the risk that the AI tool will come up with something totally crazy. So you might see that too, and I would expect that's going to become even more common.
GW:
Yeah, I think the reason I call out these two different types of models is that there's different stuff that we all do in the practice of medicine, and so different models are going to be helpful with different things. Deterministic stuff is going to be helpful with math, with medication dosage, with protocols; and then probabilistic stuff is going to be good with AI scribes and taking text and summarizing it, ways of taking a lot of content and changing it in some way. That's what generative AI is literally meant to do.
SG:
And it's really good at it.
GW:
Yeah, it's shockingly good at it. I'll talk about this example, and then Sarah, you can do the next one. Predictive AI is trying to find patterns in noise, and this is just an example of a predictive model actually finding a pneumothorax. Now, these models can't look at images like we do, where photons are hitting our retinas, but these models can look at pixel differences. And you can imagine, if it sees enough chest x-rays that have a pneumothorax on them, it starts to find predictions like, oh, hey, normally there's supposed to be lung markings here and there's not; maybe that's one way for me to predict a pneumothorax. Or, hey, there's a line of black pixels and then a subtle line of gray pixels where there's normally not a line; maybe that's another way that I can detect a pneumothorax. So you can see there's different ways that these models are trying to learn: highlighting, up here on the top left, what's important, looking at grayscale, or looking at a single color, or combining these things to say, hey, not only is there a pneumothorax, there's actually a mass here in the right middle lobe as well.
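As an aside, here is a toy numpy sketch of the kind of pixel-level cue Graham describes. Real radiology models are deep networks trained on thousands of labeled studies; this only shows how an abrupt brightness change can be found mechanically:

```python
import numpy as np

# Toy 8x8 "chest x-ray": mid-gray "lung markings" on the left, with a dark
# region on the right where markings are absent.
img = np.full((8, 8), 0.6)
img[:, 5:] = 0.1

# Compare each pixel to its right-hand neighbor: the difference is large
# exactly where gray abruptly meets black -- a crude version of the
# "line of pixels where there's normally not a line" cue described above.
response = np.zeros((8, 7))
for r in range(8):
    for c in range(7):
        response[r, c] = img[r, c] - img[r, c + 1]

print(np.argmax(np.abs(response), axis=1))  # the edge is found at column 4 in every row
```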
SG:
So this is a slide from Graham's wonderful company, MDCalc, and it's the Wells criteria for PE. We've all seen this many times. As you can see, it takes seven factors, gives you a score, and then makes recommendations from there. This is an example of predictive AI, but instead of those seven, you can also have hundreds of inputs, in a way that you couldn't 10 or 15 years ago. It would just have been too hard, (a) for humans to identify all the possible factors, and (b) for the computer to actually run those kinds of programs. So the Wells criteria distill what are likely the most important pieces. But as clinicians, when we walk by somebody's room, we take in so much information about them just by looking at them. Are they lying down? Are they awake? Are they reading with their glasses on and their feet crossed on the bed, or are they completely asleep? All this information that we're taking in, these hundreds of data points, we don't really realize it. But that's something that AI is able to do well: to look at those and find what might be important, where before we really had to narrow down to such a small number. And that's why it can be so powerful.
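Because the Wells criteria are a fixed rule set, they are easy to express as a deterministic function. A sketch using the standard published point values (illustrative only; verify against MDCalc before any clinical use):

```python
def wells_pe_score(
    signs_of_dvt: bool,               # clinical signs/symptoms of DVT (+3)
    pe_most_likely: bool,             # PE is #1 diagnosis, or equally likely (+3)
    heart_rate_over_100: bool,        # +1.5
    immobilization_or_surgery: bool,  # immobilization >=3 days or surgery in past 4 weeks (+1.5)
    prior_dvt_or_pe: bool,            # +1.5
    hemoptysis: bool,                 # +1
    malignancy: bool,                 # treatment within 6 months, or palliative (+1)
) -> tuple[float, str]:
    score = (3.0 * signs_of_dvt + 3.0 * pe_most_likely
             + 1.5 * heart_rate_over_100 + 1.5 * immobilization_or_surgery
             + 1.5 * prior_dvt_or_pe + 1.0 * hemoptysis + 1.0 * malignancy)
    risk = "low" if score < 2 else "moderate" if score <= 6 else "high"
    return score, risk  # same inputs, same score, every time: deterministic

print(wells_pe_score(False, True, True, False, False, False, False))  # (4.5, 'moderate')
```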
GW:
I'll talk about this one. This is a model that we use in our facilities at Kaiser Permanente, where I practice. It's called the Advance Alert Monitor tool. It takes, I want to say, around a hundred variables from inpatients, and it's predicting who may need the ICU or who may die, and it tries to then take action. So you can imagine subtle changes that maybe the doctors, the humans, have trouble noticing: oh, hey, the hemoglobin's just slowly trending down over the last few days, and the heart rate's slowly trending up, but it's still technically normal. Those trends, you can imagine: take all the data from the EHR, a hundred data points, and that's labs and vitals and flowsheets, and maybe even the note from the doctor, oh, patient appears mildly sicker today, whatever it is. It's able to run these numbers and make a prediction every hour: hey, the patient in room 1741 looks like they're not doing so hot, what are we doing to address this? And it's been very successful at reducing mortality and at addressing unexpected ICU transfers, code blues, and rapid responses. And then finally, it's actually been effective at increasing palliative care referrals, because it's again bringing to the forefront of the clinician's attention, hey, this person's not doing so well, and maybe helping them prioritize a palliative care consult or a goals-of-care conversation with the patient or their family.
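Here is a toy sketch of the idea behind trend-based deterioration alerts. This is not Kaiser's actual model, just an illustration on synthetic data of how slope features plus a classifier can flag a patient whose individual values are all still technically normal:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
hours = np.arange(48)  # 48 hourly observations

def trend_features(hgb, hr):
    # Slopes capture "slowly trending" changes that are easy to miss when
    # every single value still falls inside the normal range.
    return [np.polyfit(hours, hgb, 1)[0],  # hemoglobin slope (g/dL per hour)
            np.polyfit(hours, hr, 1)[0],   # heart-rate slope (bpm per hour)
            hr[-1]]                        # most recent heart rate

X, y = [], []
for _ in range(200):
    # Stable patient: flat hemoglobin and heart rate, plus noise.
    X.append(trend_features(13 + rng.normal(0, 0.2, 48), 80 + rng.normal(0, 3, 48)))
    y.append(0)
    # Deteriorating patient: hgb drifts down, HR drifts up, values still "normal".
    X.append(trend_features(13 - 0.02 * hours + rng.normal(0, 0.2, 48),
                            80 + 0.4 * hours + rng.normal(0, 3, 48)))
    y.append(1)

model = LogisticRegression().fit(X, y)
new_patient = trend_features(13 - 0.015 * hours, 80 + 0.3 * hours)
print(f"predicted risk of deterioration: {model.predict_proba([new_patient])[0, 1]:.2f}")
```

A production system would use hundreds of such features, careful validation, and a workflow for acting on the alert, but the core loop (engineer trend features, score hourly, flag the outliers) is the same shape.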
SG:
Those are all great examples, and I completely, completely agree. And the reason I like that example especially is because it points out that there are things that humans are just not very good at, which is taking a lot of different variables that are just subtly different and collating that information into an action, whereas that pattern recognition, again, is something AI is great at. So using AI in the places where we know humans, and the healthcare system in general, are often not very good, that's exactly where we want to use these kinds of tools.
GW:
Yeah, let's augment the human capability. We don't need to replace it. There's just stuff that machines are better at, and that's fine. We trust the automated CBC machine every day to decide if the white count needs a manual diff or if the automated differential is fine.
GW:
Sarah, you want to talk about AI scribes?
SG:
Yeah, so AI scribes have been all in the news lately. This one is from Abridge; it's featured because the founder is a physician, but there are many, many companies these days that will listen in to physician conversations. They basically transcribe the conversations and then summarize them into the format of a note that a physician might write. The advantages of this are a few-fold. One is that it may capture better and more complete information than a physician is able to type during a visit, for example. There is some evidence that, for some users, it decreases the amount of pajama time, the amount of time they spend after hours doing charting. And also, just the burnout of having so many notes to write and to do every day is a real burden on a lot of clinicians. So I certainly have friends who are primary care docs who have said they just won't practice anywhere ever again that doesn't have this technology, because it reduces their workload so much. And you can see why it would work well with AI. It is able to summarize; like we said, summarization is one of its strong points. And because it knows patterns well, it's trained on the pattern of a physician note, and it can organize that information into that pattern very successfully.
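A minimal sketch of the two-stage pipeline Sarah describes. The two helper functions are hypothetical stand-ins for a real speech-to-text service and a real LLM API; actual vendors add safeguards (PHI handling, accuracy checks, mandatory clinician review) that are omitted here:

```python
NOTE_PROMPT = """You are drafting a clinic note for physician review.
From the visit transcript below, write Subjective, Objective, Assessment,
and Plan sections. Do not invent findings that are not in the transcript.

Transcript:
{transcript}
"""

def transcribe_audio(audio_path: str) -> str:
    # Stand-in: a real scribe calls a medical speech-to-text model here.
    return "Doctor: What brings you in? Patient: A cough for three weeks..."

def llm_complete(prompt: str) -> str:
    # Stand-in: a real scribe calls a generative model here.
    return "Subjective: 3 weeks of cough...\nObjective: ...\nAssessment: ...\nPlan: ..."

def draft_note(audio_path: str) -> str:
    transcript = transcribe_audio(audio_path)                       # stage 1: transcribe
    return llm_complete(NOTE_PROMPT.format(transcript=transcript))  # stage 2: summarize into note shape

print(draft_note("visit_recording.wav"))  # a draft only; the physician edits and signs
```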
GW:
Yeah, I've got several different AI scribe companies in the resource guide that we'll send out and link to, all physician-created and -founded; Abridge is just one of many. And then I've got a list of, I think, every single one from the company called Elion, which does healthcare tech reviews and collation. This is another example of how you might use generative AI in healthcare. The first thing, I think it's important to say: ChatGPT, Claude, all of these chatbots, I would say you probably, or certainly, should not be using these with PHI or PII. There's certainly some nuance in areas where you could technically use it, but because of HIPAA, and because these tools are vacuuming up everything you share with them, and you cannot easily get information that you've shared deleted, just a callout: you probably should not be putting PHI or PII into these tools, for HIPAA reasons.
Obviously, there's ways you can get around that: you can delete the age, you can delete identifying characteristics. But just a word of caution for everybody; I don't want anybody to violate HIPAA or get in trouble with their hospital or their health system or whatever. But this is a way that I frequently use ChatGPT and other models: I will give it a paper, in this case, a paper about ECGs, STEMI, and occlusion myocardial infarction, and just ask it to summarize the paper for me. Now, there's lots of ways to prompt this to get a better response, a more accurate response. You could say, hi, I'm an ER doctor, tell me what's relevant in this paper for me as an ER doctor. And you're going to get a different response than if Sarah says, I'm an anesthesiologist, what does this matter to me as an anesthesiologist?
But again, this is able to take a 10-page PDF and summarize it and simplify it. And this certainly is not the same as reading the paper; hopefully you all understand that. But it helps me to understand: is this worth my time to actually read the whole paper? Oh, this summary looks really good; it's more thoughtful and it's a better summary than the abstract, and it maybe goes into more detail than the abstract. And then it helps me decide: should I spend 20 or 30 minutes to read this whole thing, versus, yeah, this is probably a dud.
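For example, a role-specific prompt along these lines (adapt the specialty and the asks to your own context) tends to get a more useful summary than a bare "summarize this paper":

```text
I'm an emergency physician. Summarize the attached paper on ECG findings in
occlusion myocardial infarction. Give me: (1) the study design and population,
(2) the key findings with effect sizes, (3) the main limitations, and
(4) whether anything here should change my ED practice. Flag anything you
are unsure about, and do not invent numbers that are not in the paper.
```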
SG:
So I think we have a lot of really great questions.
GW:
Yeah, you want to pause there, Sarah, and take some questions?
SG:
Yeah. First of all, I want to give a shout-out to BJ Lee, a med school classmate of ours who's on the call, and also to all the other folks who have joined. So we are reading through the comments; we appreciate your participation. That's wonderful. And the questions are great; it's honestly tough to choose amongst them. Graham, do you want to take the first shot at choosing one to answer?
GW:
Yeah, let's see. Should I go here to comments or just questions? Let's see.
SG:
While you're looking, I know someone asked about AI scribes becoming commodities, and yeah, I think that is certainly something that's happening in the market. From what I've seen, more and more AI scribe companies are going into CDI, clinical documentation improvement, and they are also doing computer use now: being able to do things like orders, so if a physician says, I'm going to order your TSH levels and I'm going to get a chest x-ray, it puts in those orders on a preliminary basis for physician review prior to sending them. And I think that will go a pretty long way in helping decrease the time that physicians spend on the computer, because it's an underappreciated fact that the note is one part, but a lot of the clicking around in the chart is doing orders and sending consults and a lot of the other pieces of using the EHR that are not just writing the note. So they're expanding into that area, and then also, as everything ends up in healthcare, into the revenue side, to be able to capture the billing appropriately.
GW:
Yeah. Hi, Dave, thanks for that question. I agree, Sarah. I think there's going to be consolidation and acquisition, some things will go bankrupt, and there'll be some specialization: some will go after large health systems, some will go after psychologists or social workers and find niches. And then I think, like Sarah said, some will actually try to become more of a platform. So they won't just do notes; they'll do notes and billing, and try to add features to hold onto their market share as well. Let's see, I've got a question: what would I recommend to med students today? Oh, wow. What an interesting time to be a medical student. Sarah and I were med students 20-something years ago. It's a really good question, what would I do as a med student today? I think I would understand how to do things the old way and the new way. So I think it's critically important that you as a medical student understand how to critically assess a paper on your own, so that you can decide as a physician, hey, yeah, this makes sense to me, I agree with this paper, or not. But the other thing: I work with med students and residents, and I also learn from students and residents, who have their own ways of learning and finding information as well. So I think there's things that medical students can certainly teach us about using AI effectively, in ways that I probably have not even thought about.
SG:
The other thing I'd say is that I know a lot of med students who are motivated to get involved in the development of these AI tools, in evaluating them and being on the governance committees, and basically being involved in the process more with AI. And I think that's really important: one, not being intimidated by it, and two, physicians need to be part of the future and part of the solution. So the earlier you can get exposure to those kinds of aspects, the better you're going to set yourself up for understanding what new technology means, and which of it is hype and which of it is real.
GW:
And med students know way more about medicine than, I mean, you may think you don't know that much, but I promise you, you know way more than 98% of the population just by being in medical school and taking med school classes. So you can be an extremely valuable asset to a startup, or you can create your own company. I mean, there's tremendous opportunity as a med student. Okay, let's keep going, because this is my favorite slide.
SG:
I was just about to say, it was my favorite slide.
GW:
Oh, okay. So Sarah, tell us what we're looking at here and why is this even on the screen?
SG:
Yeah. We are looking at both parrots and guacamole, and then chihuahuas and blueberry muffins. And for humans, it's relatively easy to tell these apart, but for computers, it's actually surprisingly hard, because they have a lot of the same colors, and their edges aren't necessarily in the same spots. If you think about looking at a photo, and all the different angles that you can look at a dog from and still know it's a dog, computers can't see that automatically, and so they have to be taught. And so they have to learn to differentiate between these images that, you can see, would look pretty similar. I actually didn't realize how similar parrots were to guacamole until Graham showed me the slide, so that was a great learning point as well. But how it does this is to go through, really pixel by pixel, in kind of a nine-pixel square, and try to predict what's going to be in the pixel next to it, and then learn from that and decide if it was right or not. And in that way, it can figure out where the edges of things are, and where the edges are likely to be in different kinds of images of different kinds of things. This is also self-driving car technology; this is how that was able to advance so quickly. The image-based technologies in medicine are also far ahead of many of the other technologies; the AI there was developed sooner than the rest.
GW:
And Sarah, the way these things work is, you give this computer program, say, a hundred images of parrots and a hundred images of guacamole, and you say, okay, these are parrots, these are guacamole. And then you give it an image where you don't tell it the answer, and you say, hey, figure it out. And so just imagine this computer program coming up with a million hypotheses about how it can figure this out and find a pattern. Just give it enough time, give it enough server flops, and it will make a best prediction. And it's not that far off from how maybe you figured out which patients have a kidney stone, or which patients have strep throat versus viral pharyngitis. You see enough of these over time, you get feedback about, oh, that strep test was negative, I was sure it was going to be positive. You get that feedback iteratively, and your brain over time gets better at predicting these. AI is doing the exact same thing.
SG:
Exactly.
GW:
We're just going to very briefly go over two different types of AI, and again, this is an oversimplification. I'll cover this one, Sarah, and then you want to do neural networks? So machine learning: think of it as a little bit of a simpler version of AI. And again, this is similar to the Wells criteria, or if you've ever used an appendicitis prediction score: humans took maybe 50 variables that they thought mattered for appendicitis, white count, radiation of the pain, sudden-onset versus gradual-onset pain, vomiting, fever, age, all this stuff. And then they pared down to the factors that mattered, and they came up with a score that you can add: oh, this gets one point, this gets two points. Obviously, a very simple version made for a simple human to add points to. Now imagine the model. You don't give the model 50 things, you give it 500 or 5,000, and you just have it run for a week and you say, okay, figure out the best way to predict appendicitis. And so you might get something that's like, okay, if you're between the ages of 50 and 60, these are the things that matter the most, but if you're female and 23 years old, this thing matters the most. And you eventually build up this best-fit model that will predict appendicitis better than all the other models, from the data that you've given it. That's kind of how machine learning works. Now we'll talk about another type of AI, called a neural network, or deep learning.
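A toy illustration of this in Python: a decision tree handed synthetic data learns its own splits, and different variables end up mattering for different subgroups. The data and the hidden "ground truth" rule are made up; this is not a validated appendicitis model:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
n = 1000
age = rng.uniform(5, 80, n)
wbc = rng.normal(9, 3, n)            # white blood cell count
migration = rng.integers(0, 2, n)    # pain migrated to the RLQ (0/1)
X = np.column_stack([age, wbc, migration])

# Made-up ground truth where WBC matters most in older patients
# and pain migration matters most in younger ones.
y = np.where(age > 50, wbc > 12, migration == 1).astype(int)

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(export_text(tree, feature_names=["age", "wbc", "migration"]))
# The printed tree branches on age first, then uses different features in
# each branch -- the "best fit" the model found on its own.
```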
SG:
So neural networks: this is really what people are talking about these days when they're talking about AI; this is really where the innovation has happened. I think it's helpful to point out that machine learning is what we've been using for a long time, and it encompasses neural networks; it encompasses a lot of kinds of AI. So that is kind of the big circle, and neural networks are a circle inside of that. And neural networks are, especially by doctors, best understood as neurons. These are different kinds of neurons: they move forward, they try to predict what the next outcome might be, and they try it over and over, kind of like a little kid forming neurons; they try something over and over and see what works. And then when it works, they assign it a probability that it will work in the future,
that it will be that thing in the future. And those probabilities are called weights. So you might hear the term model weights, and that is the probability that has been assigned in this large neural network that is created by these systems. So it can be a black box, and that's one of the difficult parts about it. With the earlier machine learning models, you were giving it the data, saying, these are the variables that we think might be important, go for those. With this, it really learns on its own, and so it might be considering things that you had never thought of, or that are completely wrong. So there was a paper not too long ago that showed that an AI system could predict, based on a chest x-ray, which patients were most likely to eat refried beans. It did that based on age and demographic data and where the patient lived, and who was most likely to eat refried beans in that part of the country, for example. So it may be basing decisions on things that you don't want it to base decisions on, and that's one of the challenges with this technology: you don't know why it's giving you specific information.
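For the curious, here is a single "neuron" in plain numpy: the weights start random, each prediction's error nudges them, and repetition tunes them to the hidden pattern, the same trial-and-error loop Sarah describes (on made-up data; real networks stack millions of these):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))            # 200 examples, 3 input features
true_w = np.array([2.0, -1.0, 0.5])      # hidden pattern the neuron must discover
y = (X @ true_w > 0).astype(float)

w = rng.normal(size=3)                   # the "model weights", initially random
for step in range(500):
    pred = 1 / (1 + np.exp(-(X @ w)))    # sigmoid "firing" probability
    grad = X.T @ (pred - y) / len(y)     # how wrong we are, per weight
    w -= 0.5 * grad                      # nudge weights toward fewer errors

print(w)  # after training, w points in the same direction as the hidden pattern
```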
GW:
Yeah. Let's pause for a second before we get into our next section and just take a few questions.
SG:
These questions are so good.
GW:
I know, there's too many. And feel free to email me, and I'm happy to answer more later as well, or send me a message.
SG:
Questions afterwards too.
GW:
Let's see. There's one about AI and consent; that's good. How are patients consented to the use of these gen AI tools in these use cases? This is a fantastic question, thank you, Jono. So it depends a bit on the health system and the local laws. There are some places that say every time generative AI, or any AI, is used, you need to inform the patient. It's interesting, I think we're in a bit of a transition period. I fully agree that patients need to give consent to have their data and their information used for generative AI, but I think in five or 10 years, we're going to look back and think, oh, that's crazy that we had to consent people, because AI is everywhere, and how do we live without it? We don't ask for people's consent for something like using the Wells score.
I don't, because we take that as just a standard tool that helps us take better care and use more evidence in our practice. So I do think it's important that we talk about consent with patients. Certainly, whenever anybody uses an AI scribe, I think they should get consent from the patient to be recorded and have their data used. I think it's pretty rare that patients don't consent to that use case in particular; maybe one in 20 or fewer patients is like, nah, I think I'm good, and that's fine, that's not the end of the world. Sarah, do you have thoughts about the consent discussion as an ethicist?
SG:
Well, I have talked to a lot of lawyers. So these kinds of questions are being resolved really at the state level at this point. Colorado and California, for example; California was one of the first to have AI consent laws, where people have to be notified if they're interacting with a chatbot, for instance. So there's really a patchwork of state laws being implemented right now; there's not a federal mandate and not a federal standard at this point. So it's really kind of case by case, and it makes it hard for vendors, and for a lot of health systems that cover more than one state; they're trying to make sure they're complying with all the different state regulations. But I think ethically, people want to know if they're talking to a person or not. There is interesting data about whether they like the interaction more if it's a chatbot or not: when they don't know it's a chatbot, they often like it better than talking to a person, and when they know it's a chatbot, they then don't like it as much. So that knowledge does influence patient experience; I think that's a piece of it too. It does change people's opinion, so they should know. But I agree with Graham about where we'll be in five or 10 years; already, when I interact with the United chatbot, I assume that it's a chatbot.
GW:
Yeah, I do too. At this point, I assume it's just a chatbot, and maybe eventually I get to a human. Or do I? Or is it just a better chatbot?
SG:
A chatbot with a name. Yep.
GW:
Yeah. Thank you so much for listening. To watch the full webinar, click the link in the show notes or visit Offcall.com/AIGuide. We had so many great questions during the event, so I wanted to take some time to answer a few more here. This first one comes from Sarah Pasteur, head of Primary Care Advancement at Elation Health. Sarah asks: thoughts on how AI literacy might impact the healthcare workforce? So Sarah, I think AI literacy is going to be a dividing line. It's not doctors versus machines, but there are going to be doctors who understand AI and leverage it, and people that are following along. So it's not adapt or perish; it's lead or follow. The more clinicians know about how AI works, and its limitations and its potential, the more they can influence how it gets integrated into their care. This is why I talk about this stuff so much.
It is so critical that physicians are not ceding the design and the workflow of these clinical tools to people who have never run a code, who've never managed a chronic disease. So I think we need to talk about this stuff more and build our AI literacy muscles, just like we all learned how evidence-based medicine works and how to critically appraise a journal article; we need to learn that for AI models. That's one of the reasons we're doing this with Offcall. And honestly, the good news is doctors are really good at learning things, and we're really good at learning things quickly. Sarah, thanks for the question. The next question comes from Dr. Kavita Jay, the chief of physician education at Kaiser Permanente Santa Clara Medical Center. Kavita asks: who is the legal responsibility going to fall on, and how, physicians versus AI versus AI companies?
How does this play out? And Dr. Noel Ping asked a similar question. Noel's an investor and an innovator, and Noel asks how to make sure that liability concerns don't lead to AI in health being underused. So let's just kind of dive into this. I think liability is something that's either going to throttle or accelerate adoption of AI. If nobody's accountable, it's not going to get trusted. If doctors are accountable for the decisions made by a black-box algorithm, they're going to reject it, or they're going to be really cautious and adopt it very, very slowly. And if bad outcomes happen because an AI model is bad or faulty or inaccurate or biased, patients aren't going to want AI either. So we've got to have clear frameworks on how to do this, how to do it responsibly, and we honestly needed them yesterday. Kavita, to your point, the Federation of State Medical Boards published a paper in May of 2024.
It was like a white paper or a policy statement that said, from their view, the doctor is still ultimately responsible. So this organization, which represents the medical boards that license physicians in all 50 states, says that it's ultimately still our problem, still our responsibility. And if that's the case, if that's the way the rest of the industry falls and says, yeah, it's on the doctors, well, that will shape how AI gets deployed and gets adopted as well. One more question here from David Lopez. David's the director of Psychology Squared and is trying to help get clinicians started on AI today. David asks: how do we manage the double standards around AI errors? AI tools may make mistakes, and even fewer mistakes than people, yet people have more difficulty tolerating AI errors. So I agree, David: we have zero tolerance for AI mistakes, even though physicians, doctors, psychologists make mistakes too.
But I think it's really that it's violating our sense of control. I think it's also dangerous to just assume that humans don't make mistakes. Honestly, I don't know how this one's going to play out. The thing I do know: if doctors, psychologists, pharmacists, if healthcare workers are not part of the conversation, someone else is going to decide this for us, and it'll be decided in a far worse way than if we had a seat at the table. So that's why I think it's so important that we speak out, that we say how AI can impact us for the good, and what the challenges are and what our worries are as well. This has been, and continues to be, Dr. Graham Walker. Thanks for watching and listening.