Doom Debates!

Liron Shapira
Latest episode

154 episodes

  • Live Q&A: Bernie Sanders Wakes Up to AI Doom, Dwarkesh's $20,000 Questions, Caller Debates the Alignment Problem!

    2026-04-28 | 2 h 12 min.
    Multiple live callers join this month's Q&A as I react to Dwarkesh Patel's $20K blog prize, debate the orthogonality thesis from first principles, and welcome Bernie Sanders aboard the Doom Train.
    Timestamps
    00:00:00 — Cold Open
    00:01:00 — Welcome to Doom Debates Live!
    00:01:30 — What Do You Think of Open Source Models Out-Benchmarking OpenAI and Anthropic?
    00:04:55 — Michael Cheers Joins: What If We Don't Give AIs Full Situational Awareness?
    00:11:55 — Thoughts on Mythos' Hacking Abilities?
    00:15:43 — Liron Reacts to Dwarkesh Patel's $20K AI Questions
    00:23:28 — Pretraining Goals vs RL Training Goals
    00:28:58 — Mental Model of Yudkowsky-ians & the IABIED Claim
    00:37:24 — You Can't Hide Reality from a Superintelligence (The Truman Show Analogy)
    00:42:57 — Back to Dwarkesh's Questions: When Do AI Labs Start Making Money?
    00:48:50 — Upcoming Guests Reveal!
    00:51:35 — Will Lancer Joins: Is The Yudkowskian Thesis Credible?
    01:27:03 — Back to Answering Questions from the Chat
    01:33:28 — The Cameraman Always Survives Analogy
    01:40:52 — Liron's Banger Response to Roon's Tweet
    01:47:00 — Nuance About Pausing AI Development
    01:50:57 — Capitalism Isn't Going to Steer Us to an Alignment Solution
    01:53:10 — Is Optimization Equivalent to Intelligence?
    01:57:21 — BREAKING: Bernie Sanders on the Existential Threat of AI
    02:01:12 — Spoiler for the Upcoming Mike Israetel Episode
    02:01:57 — $500 Bet on AI Unemployment
    02:05:46 — Misuse, Surveillance, and the Real Costs of Pausing AI
    02:11:04 — Wrap-Up
    Links
    Nick Bostrom, Deep Utopia: Life and Meaning in a Solved World (Amazon) — https://www.amazon.com/Deep-Utopia-Meaning-Solved-World/dp/1646871642
    Yudkowsky & Soares, If Anyone Builds It, Everyone Dies (book) — https://www.amazon.com/If-Anyone-Builds-Everyone-Dies/dp/0316571253
    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
    Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏


    Get full access to Doom Debates at lironshapira.substack.com/subscribe
  • Emad Mostaque Has A 50% P(Doom) & A Plan To Lower It

    2026-04-21 | 1 h 43 min.
    Emad Mostaque helped kick off the modern AI revolution as the head of Stability AI, the company behind Stable Diffusion. Unlike most AI CEOs, he doesn't sugarcoat the risks of AGI development.
    He explains his 50% P(Doom), why we have less than 1,000 days to get our act together, and how his new startup, Intelligent Internet, aims to be a countervailing force against doom of all kinds.
    Timestamps
    00:00:00 — Cold Open
    00:00:39 — Introducing Emad Mostaque
    00:02:08 — How Emad Got Involved in AI Development
    00:05:46 — The 60-Second Pitch for Intelligent Internet
    00:09:29 — What’s Your P(Doom)?™
    00:13:32 — Why ASI Doesn’t Need Massive Compute
    00:15:56 — AGI Timelines: Cognitive Labor Going to Zero
    00:17:29 — Is There a Ceiling Above Human Intelligence?
    00:41:22 — Corporations as Slow, Dumb AIs
    00:50:19 — Emad’s Mainline Doom Scenario
    00:55:28 — Jailbreaks and “Mecha-Hitler” Latent Spaces
    00:59:56 — The Last Economy: How to Navigate Economic Doom
    01:08:57 — The Coming Unemployment Spike
    01:15:13 — Why Isn’t Google Stock Mooning?
    01:25:05 — Intelligent Internet as a Solution: Bitcoin for the Intelligence Age
    01:33:13 — Can an Aligned Network Stop a Rogue ASI?
    01:36:20 — Is the Pause AI Proposal Too Late?
    01:40:50 — Are We Facing Russian Roulette Odds with AI?
    01:42:29 — Wrap-Up
    Links
    Emad Mostaque, The Last Economy (Amazon) — https://www.amazon.com/Last-Economy-Guide-Intelligent-Economics/dp/103693411X
    Intelligent Internet (ii.inc) — https://ii.inc
    Emad Mostaque, Wikipedia — https://en.wikipedia.org/wiki/Emad_Mostaque
    Emad Mostaque on X — https://x.com/EMostaque
    Pause Giant AI Experiments open letter (Future of Life Institute) — https://futureoflife.org/open-letter/pause-giant-ai-experiments/
  • Did Eliezer Yudkowsky Really Call for VIOLENCE? — Debate with John Alioto

    2026-04-18 | 1 h 1 min.
    My guest, John Alioto, is an independent AI engineer with a computer science degree from UC Berkeley and 25 years building real-world AI systems at companies like Microsoft and Google.
    In the wake of an attack on Sam Altman’s property, John Alioto came on the show to argue that Yudkowsky’s words are violent rhetoric that helped create this moment. Since I completely disagree with that characterization, we had a lot of fuel for a passionate debate.
    For the record, here’s my position on why AI doomers are NOT “calling for violence”:
    Are we acting like we actually think there’s an urgent extinction risk? Yes.
    Are we calling for lawless violence? Absolutely not. At least not me, or the leaders of the movement, or anyone I’ve ever personally interacted with.
    Are we calling for violence as a last resort if a government policy has been established and then egregiously violated? Yes… but that’s just standard for any governance proposal! A proposal for a strictly enforced treaty isn’t a call for violence — it’s a call for doing everything we can to make sure no one breaks the treaty, with zero violence, unless rogue actors decide to break the treaty and bring the consequences on themselves.
    Thanks to John for having an extremely respectful and good-faith debate on this heated subject.

    Timestamps
    00:00:00 — Cold Open
    00:00:37 — Introducing John Alioto
    00:03:02 — Setting the Stage: Recent Acts of Violence & AI Discourse
    00:05:53 — Eliezer Yudkowsky's 2023 TIME Article
    00:11:16 — John's Two-Part Argument
    00:14:37 — Conditional on High P(Doom), Is Eliezer's Policy Bad?
    00:17:46 — Be Like Carl Sagan — Win in the Arena of Ideas
    00:21:12 — No Carve-Outs for Non-Signatories
    00:26:15 — Hypothetical: What If the UN Voted for a Treaty?
    00:30:46 — What's the Correct Interpretation of Eliezer's TIME Article?
    00:32:42 — Liron's Interpretation: Same Structure as Any International Law Proposal
    00:42:23 — What Should Eliezer Have Written? "Airstrikes" vs "Strong Deterrent"
    00:49:57 — How John Would Rewrite the TIME Piece
    00:50:54 — Carve-Outs: Allies, Civilians, Consequences
    00:52:52 — Debate Wrap-Up
    00:56:27 — Last Q: Does High P(Doom) Imply Violence?
    00:59:40 — Closing Thoughts

    Links
    John P. Alioto on X (Twitter) — https://x.com/jpalioto
    Eliezer Yudkowsky, “Pausing AI Developments Isn’t Enough. We Need to Shut It All Down” (Time Magazine, March 2023) — https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/
  • Are AI Doomers “Calling for Violence”? Debate with Steven Balik

    2026-04-16 | 51 min.
    Are AI safety advocates like Eliezer Yudkowsky at fault for the recent attacks on Sam Altman because they are “calling for violence”?
    I invited Steven Balik to join me on this emergency episode to hash it out.
    Steven is an activist short seller and data engineering professional whose Substack is popular among Silicon Valley VCs and hedge funds.
    Links
    Steven Balik on X (Twitter) — https://x.com/laurenbalik
    Steven Balik, “The Talmudic Stock Bubble, AI Psychosis, & Esoterrorism” (Substack, October 2025) — https://laurenbaliksalmanacandrevue.substack.com/p/the-talmudic-stock-bubble-ai-psychosis
    Eliezer Yudkowsky, “Pausing AI Developments Isn’t Enough. We Need to Shut It All Down” (Time, March 2023) — https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/
    Timestamps
    00:00:00 — Cold Open
    00:00:52 — Introducing Steven Balik
    00:01:24 — Setting the Stage: Molotov Cocktail Incident
    00:03:31 — Steven’s Opening Position
    00:06:10 — Is Eliezer Yudkowsky “Calling for Violence”?
    00:07:25 — Steven on AI, Yudkowsky, the Zizians & Escalating Rhetoric
    00:12:16 — Focusing on the Time Article
    00:18:51 — Who’s Responsible for the Violence?
    00:25:33 — Debating the Key Quote in Yudkowsky’s Time Article
    00:31:07 — Liron Passes the Ideological Turing Test
    00:45:42 — Liron & Steven Find Common Ground
    00:46:57 — Why Does Steven Call Eliezer Yudkowsky an “Esoterrorist”?
    00:48:51 — Wrapping Up: Deescalating the Situation
  • Tristan Harris and Ted Tremper are WAKING UP Humanity to AI Extinction!

    2026-04-14 | 1 h 27 min.
    The AI Doc: Or How I Became an Apocaloptimist could become the most important movie of our generation. It’s a new film from the producers of the Oscar-winning Everything Everywhere All at Once.
    Tristan Harris is a subject in the film who is well-known for his role in Netflix's The Social Dilemma. He is the co-founder of the Center for Humane Technology.
    Ted Tremper is a producer on the film who is the interim Executive Director of the Creators' Coalition on AI.
    Timestamps
    00:00:00 — Cold Open
    00:01:20 — Introducing Tristan Harris and Ted Tremper
    00:04:31 — The Genesis of The AI Doc
    00:12:48 — Tristan’s Journey From Social Media to AI
    00:14:30 — Updating From AI Skeptic to AI Risk Aware
    00:20:31 — How They Convinced the AI CEOs to Agree to be Interviewed
    00:28:58 — Ted’s Journalism Advice, Working on Borat Subsequent Moviefilm
    00:30:37 — Tristan, What’s Your P(Doom)™?
    00:34:30 — The Resource Curse: What AI Revenue Does to a Society
    00:44:10 — Ted, What’s Your P(Doom)™?
    00:46:34 — Reacting to Demis Hassabis' Statement That AGI Development Is Inevitable
    00:49:52 — Liron Sharpens the Criticism Towards AGI Builders
    00:55:35 — AGI Developers Claim to Want International Cooperation, But Have They Really Tried?
    01:04:30 — What Should Be the Single Takeaway for Concerned Viewers?
    01:11:40 — Building a Coalition Against Superintelligence Development: From Bernie to Bannon
    01:19:52 — Take Action at TheAIDocGetInvolved.com
    01:24:40 — Tristan’s Closing Message: We’ve Done This Before
    Links
    Watch The AI Doc — https://www.focusfeatures.com/the-ai-doc-or-how-i-became-an-apocaloptimist
    Get Involved with The AI Doc Community — https://theaidocgetinvolved.com/
    Tristan Harris, Wikipedia — https://en.wikipedia.org/wiki/Tristan_Harris
    Ted Tremper, IMDb — https://www.imdb.com/name/nm3998229/
    Center for Humane Technology — https://www.humanetech.com/
    “The Intelligence Curse” by Luke Drago and Rudolf Laine — https://intelligence-curse.ai/

More podcasts in Business

About Doom Debates!

It's time to talk about the end of the world. With your host, Liron Shapira. lironshapira.substack.com
Podcast website
