
Learning Bayesian Statistics

Alexandre Andorra
Latest episode

200 episodes

  • Learning Bayesian Statistics

    The Hidden Geometry of Hierarchical Models

    2026-05-13 | 3 min.
    Today's clip is from Episode 157 featuring Stefan Radev. In this conversation, Alex and Stefan dig into one of the hardest open problems in simulation-based inference — hierarchical models.

    The core idea: when you move from flat to hierarchical models, you're no longer estimating one set of parameters. You have local parameters that vary by location (or subject, or city) and global parameters that capture what's shared across all of them. And you don't just want each separately — you want the full joint posterior, because that's where the Bayesian magic of shrinkage actually lives.
    Stefan builds the problem from the ground up, starting with the simplest hierarchical case -- a two-level model -- and then scaling up. He uses electoral forecasting in France as the example: cities nested inside departments nested inside the whole country.

    Now your simulator has to cover all three levels. If that simulator is slow (think: brain emulators, minutes per sample), scaling to hundreds of groups becomes completely intractable. Memory issues, specialized network requirements, the works.

    The key insight: this problem has structure you can exploit. The joint posterior factorizes in a particularly nice way — each local parameter depends on its own local data and on the global parameters. That means instead of cramming everything into one giant high-dimensional vector and hoping a neural network figures it out, you can decompose the problem. Estimate local parameters conditioned on local data and the globals. Use composition.
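The factorization described above can be sketched numerically. The toy two-level Gaussian hierarchy below is a hypothetical stand-in, not a model from the episode: the joint log posterior is the global prior plus one independent (local prior × local likelihood) term per group, which is exactly the structure a composed architecture can exploit.

```python
import math

# Toy two-level Gaussian hierarchy (illustrative, not from the episode):
#   global mean mu ~ Normal(0, 1)
#   local effect theta_j ~ Normal(mu, 1) for each group j
#   data y_ji ~ Normal(theta_j, 1)

def log_normal(x, mean, sd):
    return -0.5 * ((x - mean) / sd) ** 2 - math.log(sd * math.sqrt(2 * math.pi))

def log_joint(mu, thetas, groups):
    # p(mu, theta | y) factorizes: the global prior, plus one
    # independent (local prior x local likelihood) term per group.
    total = log_normal(mu, 0.0, 1.0)            # global prior
    for theta_j, y_j in zip(thetas, groups):
        total += log_normal(theta_j, mu, 1.0)   # local prior given globals
        total += sum(log_normal(y, theta_j, 1.0) for y in y_j)  # local data
    return total

groups = [[0.9, 1.1], [-0.2, 0.1, 0.0]]
print(log_joint(mu=0.3, thetas=[1.0, 0.0], groups=groups))
```

Because each group's term touches only its own data and the globals, you can estimate the local parameters group by group instead of in one giant vector.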

    The takeaway: hierarchical models aren't just "harder flat models" -- they have a geometry that demands a different architecture. Respecting that structure is what makes amortized inference scale.

    Get the full discussion here

    Support & Resources
    → Support the show on Patreon: https://www.patreon.com/c/learnbayesstats
    → Bayesian Modeling Course (first 2 lessons free): https://topmate.io/alex_andorra/1011122

    Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work at https://bababrinkman.com/ !
  • Learning Bayesian Statistics

    #157 Amortized Inference & BayesFlow in Practice, with Stefan Radev

    2026-05-06 | 1 h 18 min.

    Takeaways:

    Q: What is simulation-based inference and what does "sim-to-real" mean?
    A: Simulation-based inference (SBI) uses a mechanistic simulator as an epistemic tool: you train a neural network on a large number of labeled simulations and then deploy it on real, unlabeled data. The "sim-to-real" framing captures the key asymmetry -- your network never sees real data during training, only simulations, but it generalizes to real observations at inference time. This is the opposite of the more common "synthetic-for-ML" approach, where fake data is used purely to augment real training data.
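A deliberately crude sketch of the sim-to-real idea, with a made-up linear-Gaussian simulator and a least-squares fit standing in for the neural network (this is not BayesFlow's API): the estimator is fit only on simulations, then applied to "real" observations it never saw during training.

```python
import numpy as np

rng = np.random.default_rng(0)

# Cartoon of sim-to-real SBI (illustrative only):
# simulator: theta ~ prior, x = theta + noise. Train a cheap amortized
# estimator theta_hat(x) purely on simulations, then deploy on real data.

# 1. Simulate labeled training pairs (theta, x) from prior + simulator.
theta_train = rng.normal(0.0, 1.0, size=10_000)            # prior draws
x_train = theta_train + rng.normal(0.0, 0.5, size=10_000)  # simulator output

# 2. "Train" the amortized estimator (here a least-squares fit of theta
#    on x; real SBI would train a neural network on these same pairs).
slope = np.cov(x_train, theta_train)[0, 1] / np.var(x_train)

# 3. Deploy on real, unlabeled observations -- never seen in training.
x_real = np.array([1.2, -0.4, 0.3])
theta_hat = slope * x_real
print(theta_hat)
```

The asymmetry from the answer above is visible in the code: labels (`theta`) exist only on the simulated side; the real data arrives unlabeled at inference time.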

    Q: What is the amortized inference agent skill and what does it do?
    A: It's an open-source AI agent skill, co-developed by Stefan and Alexandre, that teaches an AI coding agent to run a complete, state-of-the-art amortized inference workflow. Because amortized inference is recent enough that it's underrepresented in LLM training data, vanilla agents tend to get it wrong. The skill injects the right methodology: it guides the agent to set up the simulator, choose the right network architecture, run a pilot, train with appropriate diagnostics, and produce an actionable report -- without the user needing to know the details.

    Q: What is calibration coverage and why should you never skip it?
    A: Calibration coverage tells you whether your posterior uncertainty is honest -- whether your credible intervals actually contain the true parameter at the right frequency. A model can show poor parameter recovery yet still be well-calibrated (because it's falling back on the prior), or it can appear to recover parameters while being poorly calibrated. Running calibration diagnostics both in-sample and out-of-sample is especially revealing for hierarchical models, which often appear to underfit in-sample but generalize much better out-of-sample thanks to shrinkage.
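One way to make the coverage idea concrete is a simulation check on a model whose posterior is known exactly. The conjugate Normal setup below is a hypothetical illustration; the recipe is the point: draw a true parameter from the prior, simulate data, build a credible interval, and count how often it covers the truth.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical coverage check for a conjugate Normal model:
# theta ~ Normal(0, 1), y | theta ~ Normal(theta, 1) with n observations.
# Posterior is Normal(n*ybar/(n+1), 1/(n+1)); since inference is exact,
# 90% credible intervals should cover the true theta ~90% of the time.
n, trials, z90 = 5, 2000, 1.6449  # z-quantile for a central 90% interval

hits = 0
for _ in range(trials):
    theta = rng.normal()                      # draw truth from the prior
    y = rng.normal(theta, 1.0, size=n)        # simulate data
    post_mean = n * y.mean() / (n + 1)        # conjugate posterior
    post_sd = (1.0 / (n + 1)) ** 0.5
    lo, hi = post_mean - z90 * post_sd, post_mean + z90 * post_sd
    hits += lo <= theta <= hi                 # did the interval cover truth?

coverage = hits / trials
print(f"empirical 90% coverage: {coverage:.3f}")
```

A miscalibrated approximate posterior would push this number well away from 0.90 -- too low means overconfident intervals, too high means underconfident ones.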

    Full takeaways here

    Chapters:
    00:00:00 How does amortized inference fit into the Bayesian workflow?
    00:12:03 What does "sim-to-real" mean in simulation-based inference?
    00:15:57 Why is amortized inference particularly suited to psychology and neuroscience?
    00:21:51 What is the amortized inference agent skill?
    00:39:00 What is calibration coverage and how do you interpret it?
    00:41:50 How do you decide what to do next after your first training run?
    00:44:53 How do actionable insights make Bayesian workflows more usable?
    00:49:08 What are the unique challenges of hierarchical models in amortized inference?
    01:00:51 What is the current state of BayesFlow's support for hierarchical models?
    01:05:00 What are the main failure modes of amortized inference and how do you handle model misspecification?

    Thank you to my Patrons for making this episode possible!

    Links from the show
  • Learning Bayesian Statistics

    How to Design Better Experiments with Expected Information Gain

    2026-05-01 | 5 min.
    Today's clip is from Episode 156 featuring Adam Foster. In this conversation, Adam explains Expected Information Gain (EIG) -- the scoring function at the heart of optimal Bayesian experimental design.

    The core idea: when designing an experiment, you need a way to compare possible designs and pick the best one. EIG is that score -- it tells you how much information you expect to gain about your model parameters from a given design. The higher the EIG, the better the design.

    Adam builds intuition for EIG from two directions that sound completely different but lead to the same place. First, the Bayesian angle: simulate datasets from your prior predictive distribution, run inference on each, measure how much uncertainty dropped, and average across datasets. Second, a classic puzzle -- the 12 prisoners balance scale problem -- where the best weighing strategy turns out to be the one that makes all three outcomes (tip left, tip right, balance) equally likely. This maximizes outcome entropy, which is exactly what EIG does: it steers you toward designs where every possible result narrows down your hypotheses as fast as possible.
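The balance-scale intuition is easy to verify in a few lines: score each candidate weighing by the Shannon entropy of its outcome distribution. The outcome probabilities below are made-up illustrations.

```python
import math

# A weighing has outcome probabilities p = (tip left, tip right, balance).
# The design whose outcomes are closest to equally likely has the highest
# entropy, i.e. it rules out the most hypotheses per weighing on average.

def entropy(p):
    return -sum(q * math.log2(q) for q in p if q > 0)

designs = {
    "balanced weighing": (1/3, 1/3, 1/3),   # all outcomes equally likely
    "lopsided weighing": (0.6, 0.3, 0.1),   # one outcome dominates
    "useless weighing":  (1.0, 0.0, 0.0),   # outcome known in advance
}
for name, p in designs.items():
    print(f"{name}: {entropy(p):.3f} bits")
```

The uniform design attains the maximum log2(3) ≈ 1.585 bits per weighing; a weighing whose result is already known in advance carries zero information.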

    The takeaway: good experimental design isn't about intuition or convention -- it's about making your data work as hard as possible, and EIG gives you a rigorous way to do that.

    Get the full discussion here

  • Learning Bayesian Statistics

    #156 Bayesian Experimental Design & Active Learning, with Adam Foster

    2026-04-25 | 1 h 16 min.

    Takeaways:
    Q: What is Bayesian experimental design and what problem does it solve?
    A: It's the practice of using a Bayesian model to decide how to collect data before you collect it. Most statistical thinking starts with a fixed dataset. Bayesian experimental design sits upstream -- you have control over experimental parameters (which questions to ask, which reagents to mix, which conditions to test) and you want to choose them optimally. The Bayesian angle is to ask: what new data would most reduce my current uncertainty?

    Q: When should you actually use Bayesian experimental design?
    A: When two conditions hold: you have active control over how data is collected (not just passive observation), and you have a Bayesian model whose prior predictive distribution gives a reasonable picture of what typical data might look like. It's especially valuable when data collection is expensive or irreversible -- when the "committal step" of running an experiment has real cost, it's worth doing the analysis first.

    Q: What is expected information gain (EIG) and why is it central to Bayesian experimental design?
    A: EIG is the score you assign to a candidate experimental design -- the amount of information you expect to gain about your model parameters by running an experiment with that design. You compute it by simulating datasets from your prior predictive, doing Bayesian inference on each, and averaging how much the uncertainty decreased. What's remarkable is that you can derive the same quantity from two completely different starting points -- reducing parameter uncertainty, or maximizing outcome uncertainty while correcting for noise -- and arrive at the same formula. That convergence is why EIG keeps being rediscovered independently across fields.
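The recipe in this answer can be sketched end to end with a conjugate model, so the per-dataset inference step is exact. Everything below is a hypothetical illustration -- a Beta-Binomial design problem, with posterior variance as a simple stand-in for entropy:

```python
import numpy as np

rng = np.random.default_rng(2)

# Monte Carlo sketch of the EIG recipe: the "design" is how many trials n
# to run on a success probability p ~ Beta(1, 1). Conjugate updates make
# each per-dataset inference exact.

def expected_uncertainty_drop(n, sims=4000):
    prior_var = 1.0 / 12.0                    # Var of Beta(1, 1)
    drops = []
    for _ in range(sims):
        p = rng.beta(1, 1)                    # 1. draw parameter from prior
        k = rng.binomial(n, p)                # 2. simulate a dataset
        a, b = 1 + k, 1 + n - k               # 3. exact conjugate posterior
        post_var = a * b / ((a + b) ** 2 * (a + b + 1))
        drops.append(prior_var - post_var)    # 4. uncertainty reduction
    return float(np.mean(drops))              # 5. average over datasets

for n in (1, 5, 20):
    print(f"n={n:2d} trials: expected variance reduction "
          f"{expected_uncertainty_drop(n):.4f}")
```

As expected, the score increases with n: a larger experiment buys more uncertainty reduction, and the same loop would let you compare qualitatively different designs, not just sample sizes.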

    Full takeaways here

    Chapters:
    00:00:00 What is Bayesian experimental design and why does it matter?
    00:06:02 What problem does Bayesian experimental design actually solve?
    00:08:54 When should practitioners use Bayesian experimental design?
    00:12:00 Is Bayesian experimental design changing how scientists work in practice?
    00:15:04 What are the limitations of Bayesian experimental design?
    00:17:55 What is expected information gain (EIG) and how does it work?
    00:21:05 How do you compute expected information gain in practice?
    00:23:48 What is active learning and how does it connect to Bayesian experimental design?
    00:41:02 What is active learning by disagreement?
    00:48:57 What is deep adaptive design and when should you use it?
    00:56:02 How is Bayesian experimental design applied in protein dynamics and quantum chemistry?
    01:01:58 What does a practical Bayesian experimental design workflow look like?

    Thank you to my Patrons for making this episode possible!

    Links from the show
  • Learning Bayesian Statistics

    Pricing Under Uncertainty: A Bayesian Workflow

    2026-04-16 | 5 min.
    Today's clip is from Episode 152 of the podcast, featuring Daniel Saunders. In this conversation, Daniel explores how Bayesian decision theory handles real-world risk aversion beyond the textbook maximum expected utility framework.

    The key insight: classical Bayesian decision theory assumes risk neutrality, but in practice, people and businesses are risk-averse. Using a pricing optimization example, Daniel shows how uncertainty varies dramatically across price points—lower prices have predictable demand, while higher prices create wide uncertainty in profits. This asymmetry matters when you want safer decisions.

    Daniel introduces exponential utility functions—a technique from economics that models diminishing returns on money. By adjusting a risk-aversion parameter, you can see how increasing risk aversion shifts optimal decisions away from high-uncertainty, high-profit scenarios toward more predictable outcomes.
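A minimal sketch of that idea, with made-up profit distributions rather than a real pricing model: two candidate prices share the same expected profit, but exponential utility increasingly favors the predictable one as the risk-aversion parameter a grows.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two hypothetical pricing decisions with equal expected profit but very
# different spread. Exponential utility U(x) = (1 - exp(-a*x)) / a
# penalizes the spread; a = 0 recovers risk-neutral expected profit.

safe_profit = rng.normal(100.0, 5.0, size=50_000)    # low price: predictable
risky_profit = rng.normal(100.0, 60.0, size=50_000)  # high price: volatile

def expected_utility(profits, a):
    if a == 0.0:
        return profits.mean()                 # risk-neutral limit
    return np.mean((1.0 - np.exp(-a * profits)) / a)

for a in (0.0, 0.01, 0.05):
    u_safe = expected_utility(safe_profit, a)
    u_risky = expected_utility(risky_profit, a)
    pick = "safe" if u_safe >= u_risky else "risky"
    print(f"risk aversion a={a}: prefer the {pick} price")
```

At a = 0 the two options are essentially tied; as a increases, the high-variance option's expected utility collapses, which is exactly the shift away from high-uncertainty scenarios described above.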

    The broader lesson: optimal decision-making requires separating the modeling process from the decision process, allowing you to build in constraints and risk adjustments that pure expected utility maximization would miss.

    Get the full discussion here

About Learning Bayesian Statistics
Are you a researcher or data scientist / analyst / ninja? Do you want to learn Bayesian inference, stay up to date, or simply understand what Bayesian inference is? Then this podcast is for you! You'll hear from researchers and practitioners of all fields about how they use Bayesian statistics, and how in turn YOU can apply these methods in your modeling workflow.

When I started learning Bayesian methods, I really wished there were a podcast out there that could introduce me to the methods, the projects and the people who make all that possible. So I created "Learning Bayesian Statistics", where you'll get to hear how Bayesian statistics are used to detect dark matter in outer space, forecast elections or understand how diseases spread and can ultimately be stopped.

But this show is not only about successes -- it's also about failures, because that's how we learn best. So you'll often hear the guests talking about what *didn't* work in their projects, why, and how they overcame these challenges. Because, in the end, we're all lifelong learners!

My name is Alex Andorra, by the way. By day, I'm a senior data scientist. By night, I don't (yet) fight crime, but I'm an open-source enthusiast and core contributor to the Python packages PyMC and ArviZ. I also love Nutella, but I don't like talking about it -- I prefer eating it. So, whether you want to learn Bayesian statistics or hear about the latest libraries, books and applications, this podcast is for you -- just subscribe! You can also support the show and unlock exclusive Bayesian swag on Patreon!
Podcast website
