Move Over LLMs! AI Legends Yann LeCun and Alex LeBrun Debut AMI Labs' Bold Ambitions for World Models in Healthcare

Offcall Team

Key Podcast Moments

  • “LLMs Predict Words. World Models Predict Patients.” Yann LeCun explains why large language models are fundamentally stochastic and why predicting the next token isn’t the same as understanding physiology, causality, or treatment consequences.
  • The ER Mental Model Analogy: Graham describes how physicians mentally simulate treatment options before acting — and LeCun argues that this internal “patient model” is exactly what AI must replicate to move beyond chatbots.
  • Why 80% Accuracy Fails in Healthcare: Alex LeBrun shares how physicians stop trusting a tool after just a few errors, underscoring why reliability — not novelty — determines whether AI succeeds in clinical practice.
  • Language Is a Crude Projection of Biology: The group explores why training AI on notes alone is inherently limiting: “Heart rate 133 bpm” is a reductive description of a complex physiological process, and text cannot fully capture the dynamics of living systems.

Yann LeCun and Alex LeBrun joined Dr. Graham Walker to introduce AMI Labs and explain why they believe the next generation of AI in healthcare must move beyond large language models. While today’s LLMs are powerful and often surprisingly capable, Yann argues they are fundamentally designed to predict the next word — not to understand the world.

Large language models generate outputs by estimating probabilities. They predict what token is most likely to come next based on patterns in massive amounts of text. That approach has proven useful for summarization, documentation, and conversational interfaces. But in medicine, the stakes are different. Clinical decisions require reliability and the ability to reason about consequences. As Yann explains, an LLM produces actions “blindly,” without simulating what will happen next. It can appear intelligent, but it is not modeling cause and effect in the way physicians do.
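
To make that distinction concrete, here is a minimal sketch of next-token generation. The toy bigram table (TOY_MODEL) and the sample_next helper are invented for illustration; a real LLM replaces the lookup table with a trained transformer, but the loop is the same: sample from P(next token | context), append, repeat. Nothing in the loop simulates consequences.

```python
import random

# Toy next-token sampler. TOY_MODEL is an invented bigram table standing
# in for a trained network: it maps the last two tokens to a probability
# distribution over the next token. The loop just emits whatever is
# statistically likely to come next -- no cause and effect anywhere.
TOY_MODEL = {
    ("patient", "heart"): {"rate": 0.7, "sounds": 0.2, "failure": 0.1},
    ("heart", "rate"): {"133": 0.5, "elevated": 0.3, "stable": 0.2},
}

def sample_next(context, model):
    """Sample from P(next token | last two tokens of context)."""
    dist = model.get(tuple(context[-2:]), {"<end>": 1.0})
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights)[0]

tokens = ["patient", "heart"]
while tokens[-1] != "<end>":
    tokens.append(sample_next(tokens, TOY_MODEL))
print(" ".join(tokens))  # e.g. "patient heart rate 133 <end>"
```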

Yann introduces the concept of “world models” as a different architectural approach. Instead of predicting tokens, a world model predicts how the state of the world changes in response to actions. In clinical terms, that means anticipating how a patient’s physiology might evolve after an intervention. During the conversation, Graham describes the moment when a physician pauses and mentally runs through possible treatment paths like fluids versus vasopressors or admission versus discharge. Yann responds that this is exactly the type of internal model AI would need to develop: a structured representation capable of simulating outcomes rather than generating text.
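
Neither guest describes AMI Labs' architecture in code, but the interface Graham and Yann are circling can be sketched: a transition function that maps (state, action) to a predicted next state, which can be queried counterfactually before anyone acts. Everything below (PatientState, predict, the transition numbers) is a hypothetical placeholder, not clinical logic.

```python
from dataclasses import dataclass, replace

# Sketch of a world-model interface: predict how a patient state evolves
# under a candidate action, then compare simulated outcomes before
# committing. The numeric shifts are invented for illustration only.
@dataclass(frozen=True)
class PatientState:
    heart_rate: int               # bpm
    mean_arterial_pressure: int   # mmHg

def predict(state: PatientState, action: str) -> PatientState:
    """Toy transition function: (state, action) -> predicted next state."""
    if action == "fluids":
        return replace(state, heart_rate=state.heart_rate - 15,
                       mean_arterial_pressure=state.mean_arterial_pressure + 5)
    if action == "vasopressors":
        return replace(state,
                       mean_arterial_pressure=state.mean_arterial_pressure + 15)
    return state  # e.g. "observe" leaves the state unchanged

# Simulate before acting, the way Graham describes pausing in the ER:
state = PatientState(heart_rate=133, mean_arterial_pressure=55)
for action in ("fluids", "vasopressors", "observe"):
    print(f"{action:12s} -> {predict(state, action)}")
```

The point of the sketch is the shape of the interface, not the physiology: a model you can ask “what happens if I do X?” rather than one that emits the next plausible sentence.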

Alex adds a practical dimension from his experience building Nabla. In healthcare, accuracy thresholds are unforgiving. An AI system that is wrong even a small percentage of the time will quickly lose clinician trust. He describes how physicians will ignore a tool after encountering just a few incorrect outputs, much like they tune out unreliable EHR alerts. For AI to succeed beyond documentation, it must move from probabilistic suggestions to dependable reasoning systems. Yann argues that the real world is continuous, high-dimensional, and dynamic. It is far richer than the discrete tokens used in LLM training. World models, in contrast, aim to learn structured internal representations of those dynamics.
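
Alex's point about unforgiving thresholds is easy to quantify with back-of-the-envelope arithmetic: per-output accuracy compounds across repeated uses, so a tool that looks reliable on any single query still produces at least one error almost every shift. The 20-use window below is an illustrative assumption, not a figure from the episode.

```python
# Back-of-the-envelope version of the threshold argument. Treat 20 uses
# as roughly one busy shift (an illustrative assumption).
for accuracy in (0.80, 0.95, 0.99):
    p_error = 1 - accuracy ** 20  # chance of at least one wrong output
    print(f"{accuracy:.0%} accurate per output -> "
          f"{p_error:.0%} chance of an error within 20 uses")
```

At 80% per-output accuracy, an error in 20 uses is a near certainty (about 99%); even at 99%, it is roughly one shift in five.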

Alex and Yann emphasize that this shift will take time. Building generalized world models capable of understanding complex systems is still a research effort. But they envision a future where AI assistants function less like autocomplete engines and more like collaborative reasoning partners — systems that simulate, plan, and advise, while physicians remain in control. In that framing, the goal is not replacement, but amplification.

Thank you to our wonderful sponsor for supporting the podcast:

Sevaro is a physician-led telestroke and neurology company that delivers rapid, reliable virtual neurology coverage. Learn more at https://sevaro.com/

Graham’s Takeaways On AI World Models

Here are the biggest takeaways for me about world models and their potential application to healthcare:

Join Offcall to keep reading and access exclusive resources for and by the medical community.

Written by Offcall Team, the official Offcall account.
