
80,000 Hours Podcast

Rob, Luisa, and the 80000 Hours team
Latest episodes

312 episodes

  • 2025 Highlight-o-thon: Oops! All Bests

    2025-12-29 | 1 h 40 min.

    It’s that magical time of year once again — highlightapalooza! Stick around for one top bit from each episode we recorded this year, including:

      • Kyle Fish explaining how Anthropic’s AI Claude descends into spiritual woo when left to talk to itself
      • Ian Dunt on why the unelected House of Lords is by far the best part of the British government
      • Sam Bowman’s strategy to get NIMBYs to love it when things get built next to their houses
      • Buck Shlegeris on how to get an AI model that wants to seize control to accidentally help you foil its plans

    …as well as 18 other top observations and arguments from the past year of the show.

    Links to learn more, video, and full transcript: https://80k.info/best25

    It's been another year of living through history, whether we asked for it or not. Luisa and Rob will be back in 2026 to help you make sense of whatever comes next — as Earth continues its indifferent journey through the cosmos, now accompanied by AI systems that can summarise our meetings and generate adequate birthday messages for colleagues we barely know.

    Chapters:

      • Cold open (00:00:00)
      • Rob's intro (00:02:35)
      • Helen Toner on whether we're racing China to build AGI (00:03:43)
      • Hugh White on what he'd say to Americans (00:06:09)
      • Buck Shlegeris on convincing AI models they've already escaped (00:12:09)
      • Paul Scharre on a personal experience in Afghanistan that influenced his views on autonomous weapons (00:15:10)
      • Ian Dunt on how unelected septuagenarians are the heroes of UK governance (00:19:06)
      • Beth Barnes on AI companies being locally reasonable, but globally reckless (00:24:27)
      • Tyler Whitmer on one thing the California and Delaware attorneys general forced on the OpenAI for-profit as part of their restructure (00:28:02)
      • Toby Ord on whether rich people will get access to AGI first (00:30:13)
      • Andrew Snyder-Beattie on how the worst biorisks are defence dominant (00:34:24)
      • Eileen Yam on the most eye-watering gaps in opinions about AI between experts and the US public (00:39:41)
      • Will MacAskill on what a century of history crammed into a decade might feel like (00:44:07)
      • Kyle Fish on what happens when two instances of Claude are left to interact with each other (00:49:08)
      • Sam Bowman on where the Not In My Back Yard movement actually has a point (00:56:29)
      • Neel Nanda on how mechanistic interpretability is trying to be the biology of AI (01:03:12)
      • Tom Davidson on the potential to install secret AI loyalties at a very early stage (01:07:19)
      • Luisa and Rob discussing how medicine doesn't take the health burden of pregnancy seriously enough (01:10:53)
      • Marius Hobbhahn on why scheming is a very natural path for AI models — and people (01:16:23)
      • Holden Karnofsky on lessons for AI regulation drawn from successful farm animal welfare advocacy (01:21:29)
      • Allan Dafoe on how AGI is an inescapable idea but one we have to define well (01:26:19)
      • Ryan Greenblatt on the most likely ways for AI to take over (01:29:35)
      • Updates Daniel Kokotajlo has made to his forecasts since writing and publishing the AI 2027 scenario (01:32:47)
      • Dean Ball on why regulation invites path dependency, and that's a major problem (01:37:21)

    Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
    Music: CORBIT
    Coordination, transcripts, and web: Katy Moore

  • #232 – Andreas Mogensen on what we owe 'philosophical Vulcans' and unconscious beings

    2025-12-19 | 2 h 37 min.

    Most debates about the moral status of AI systems circle the same question: is there something that it feels like to be them? But what if that’s the wrong question to ask? Andreas Mogensen — a senior researcher in moral philosophy at the University of Oxford — argues that so-called 'phenomenal consciousness' might be neither necessary nor sufficient for a being to deserve moral consideration.

    Links to learn more and full transcript: https://80k.info/am25

    For instance, a creature on the sea floor that experiences nothing but faint brightness from the sun might have no moral claim on us, despite being conscious. Meanwhile, any being with real desires that can be fulfilled or not fulfilled can arguably be benefited or harmed. Such beings arguably have a capacity for welfare, which means they might matter morally. And, Andreas argues, desire may not require subjective experience. Desire may need to be backed by positive or negative emotions — but as Andreas explains, there are some reasons to think a being could also have emotions without being conscious.

    There’s another underexplored route to moral patienthood: autonomy. If a being can rationally reflect on its goals and direct its own existence, we might have a moral duty to avoid interfering with its choices — even if it has no capacity for welfare. However, Andreas suspects genuine autonomy might require consciousness after all. To be a rational agent, your beliefs probably need to be justified by something, and conscious experience might be what does the justifying. But even this isn’t clear.

    The upshot? There’s a chance we could just be really mistaken about what it would take for an AI to matter morally. And with AI systems potentially proliferating at massive scale, getting this wrong could be among the largest moral errors in history.

    In today’s interview, Andreas and host Zershaaneh Qureshi confront all these confusing ideas, challenging their intuitions about consciousness, welfare, and morality along the way. They also grapple with a few seemingly attractive arguments which share a very unsettling conclusion: that human extinction (or even the extinction of all sentient life) could actually be a morally desirable thing.

    This episode was recorded on December 3, 2025.

    Chapters:

      • Cold open (00:00:00)
      • Introducing Zershaaneh (00:00:55)
      • The puzzle of moral patienthood (00:03:20)
      • Is subjective experience necessary? (00:05:52)
      • What is it to desire? (00:10:42)
      • Desiring without experiencing (00:17:56)
      • What would make AIs moral patients? (00:28:17)
      • Another route entirely: deserving autonomy (00:45:12)
      • Maybe there's no objective truth about any of this (01:12:06)
      • Practical implications (01:29:21)
      • Why not just let superintelligence figure this out for us? (01:38:07)
      • How could human extinction be a good thing? (01:47:30)
      • Lexical threshold negative utilitarianism (02:12:30)
      • So... should we still try to prevent extinction? (02:25:22)
      • What are the most important questions for people to address here? (02:32:16)
      • Is God GDPR compliant? (02:35:32)

    Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
    Coordination, transcripts, and web: Katy Moore

  • #231 – Paul Scharre on how AI-controlled robots will and won't change war

    2025-12-17 | 2 h 45 min.

    In 1983, Stanislav Petrov, a Soviet lieutenant colonel, sat in a bunker watching a red screen flash “MISSILE LAUNCH.” Protocol demanded he report it to superiors, which would very likely trigger a retaliatory nuclear strike. Petrov didn’t. He reasoned that if the US were actually attacking, they wouldn’t fire just five missiles — they’d empty the silos. He bet the fate of the world on a hunch that his machine was broken. He was right.

    Paul Scharre, the former Army Ranger who led the Pentagon team that wrote the US military’s first policy on autonomous weapons, has a question: what would an AI have done in Petrov’s shoes? Would an AI system have been flexible and wise enough to make the same judgement? Or would it immediately launch a counterattack?

    Paul joins host Luisa Rodriguez to explain why we are hurtling toward a “battlefield singularity” — a tipping point where AI increasingly replaces humans in much of the military, changing the way war is fought with speed and complexity that outpaces humans’ ability to keep up.

    Links to learn more, video, and full transcript: https://80k.info/ps

    Militaries don’t necessarily want to take humans out of the loop. But Paul argues that the competitive pressure of warfare creates a “use it or lose it” dynamic. As former Deputy Secretary of Defense Bob Work put it: “If our competitors go to Terminators, and their decisions are bad, but they’re faster, how would we respond?” Once that line is crossed, Paul warns we might enter an era of “flash wars” — conflicts that spiral out of control as quickly and inexplicably as a flash crash in the stock market, with no way for humans to call a timeout.

    In this episode, Paul and Luisa dissect what this future looks like:

      • Swarming warfare: Why the future isn’t just better drones, but thousands of cheap, autonomous agents coordinating like a hive mind to overwhelm defences.
      • The Gatling gun cautionary tale: The inventor of the Gatling gun thought automating fire would reduce the number of soldiers needed, saving lives. Instead, it made war significantly deadlier. Paul argues AI automation could do the same, increasing lethality rather than creating “bloodless” robot wars.
      • The cyber frontier: While robots have physical limits, Paul argues cyberwarfare is already at the point where AI can act faster than human defenders, leading to intelligent malware that evolves and adapts like a biological virus.
      • The US-China “adoption race”: Paul rejects the idea that the US and China are in a spending arms race (AI is barely 1% of the DoD budget). Instead, it’s a race of organisational adoption — one where the US has massive advantages in talent and chips, but struggles with bureaucratic inertia that might not be a problem for an autocratic country.

    Paul also shares a personal story from his time as a sniper in Afghanistan — watching a potential target through his scope — that fundamentally shaped his view on why human judgement, with all its flaws, is the only thing keeping war from losing its humanity entirely.

    This episode was recorded on October 23–24, 2025.

    Chapters:

      • Cold open (00:00:00)
      • Who’s Paul Scharre? (00:00:46)
      • How will AI and automation transform the nature of war? (00:01:17)
      • Why would militaries take humans out of the loop? (00:12:22)
      • AI in nuclear command, control, and communications (00:18:50)
      • Nuclear stability and deterrence (00:36:10)
      • What to expect over the next few decades (00:46:21)
      • Financial and human costs of future “hyperwar” scenarios (00:50:42)
      • AI warfare and the balance of power (01:06:37)
      • Barriers to getting to automated war (01:11:08)
      • Failure modes of autonomous weapons systems (01:16:28)
      • Could autonomous weapons systems actually make us safer? (01:29:36)
      • Is Paul overall optimistic or pessimistic about increasing automation in the military? (01:35:23)
      • Paul’s takes on AGI’s transformative potential and whether natsec people buy it (01:37:42)
      • Cyberwarfare (01:46:55)
      • US-China balance of power and surveillance with AI (02:02:49)
      • Policy and governance that could make us safer (02:29:11)
      • How Paul’s experience in the Army informed his feelings on military automation (02:41:09)

    Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
    Music: CORBIT
    Coordination, transcripts, and web: Katy Moore

  • AI might let a few people control everything — permanently (article by Rose Hadshar)

    2025-12-12 | 1 h

    Power is already concentrated today: over 800 million people live on less than $3 a day, the three richest men in the world are worth over $1 trillion, and almost six billion people live in countries without free and fair elections. This is a problem in its own right.

    There is still substantial distribution of power, though: global income inequality is falling, over two billion people live in electoral democracies, no country earns more than a quarter of global GDP, and no company earns as much as 1%.

    But in the future, advanced AI could enable much more extreme power concentration than we’ve seen so far. Many believe that within the next decade the leading AI projects will be able to run millions of superintelligent AI systems thinking many times faster than humans. These systems could displace human workers, leading to much less economic and political power for the vast majority of people; and unless we take action to prevent it, they may end up being controlled by a tiny number of people, with no effective oversight. Once these systems are deployed across the economy, government, and the military, whatever goals they’re built to have will become the primary force shaping the future. If those goals are chosen by the few, then a small number of people could end up with the power to make all of the important decisions about the future.

    This article by Rose Hadshar explores this emerging challenge in detail. You can see all the images and footnotes in the original article on the 80,000 Hours website.

    Chapters:

      • Introduction (00:00)
      • Summary (02:15)
      • Section 1: Why might AI-enabled power concentration be a pressing problem? (07:02)
      • Section 2: What are the top arguments against working on this problem? (45:02)
      • Section 3: What can you do to help? (56:36)

    Narrated by: Dominic Armstrong
    Audio engineering: Dominic Armstrong and Milo McGuire
    Music: CORBIT

  • #230 – Dean Ball on how AI is a huge deal — but we shouldn’t regulate it yet

    2025-12-10 | 2 h 54 min.

    Former White House staffer Dean Ball thinks it's very likely some form of 'superintelligence' arrives in under 20 years. He thinks AI being used for bioweapon research is "a real threat model, obviously." He worries about dangerous "power imbalances" should AI companies reach "$50 trillion market caps." And he believes the agricultural revolution probably worsened human health and wellbeing.

    Given that, you might expect him to be pushing for AI regulation. Instead, he’s become one of the field’s most prominent and thoughtful regulation sceptics. He was recently the lead writer on Trump’s AI Action Plan, before moving to the Foundation for American Innovation.

    Links to learn more, video, and full transcript: https://80k.info/db

    Dean argues that the wrong regulations, deployed too early, could freeze society into a brittle, suboptimal political and economic order. As he puts it, “my big concern is that we’ll lock ourselves in to some suboptimal dynamic and actually, in a Shakespearean fashion, bring about the world that we do not want.”

    Dean’s fundamental worry is uncertainty: “We just don’t know enough yet about the shape of this technology, the ergonomics of it, the economics of it… You can’t govern the technology until you have a better sense of that.”

    Premature regulation could lock us in to addressing the wrong problem (focusing on rogue AI when the real issue is power concentration), using the wrong tools (using compute thresholds when we should regulate companies instead), through the wrong institutions (captured AI-specific bodies), all while making it harder to build the actual solutions we’ll need (like open source alternatives or new forms of governance).

    But Dean is also a pragmatist: he opposed California’s AI regulatory bill SB 1047 in 2024, but — impressed by new capabilities enabled by “reasoning models” — he supported its successor SB 53 in 2025.

    And as Dean sees it, many of the interventions that would help with catastrophic risks also happen to improve mundane AI safety, make products more reliable, and address present-day harms like AI-assisted suicide among teenagers. So rather than betting on a particular vision of the future, we should cross the river by feeling the stones and pursue “robust” interventions we’re unlikely to regret.

    This episode was recorded on September 24, 2025.

    Chapters:

      • Cold open (00:00:00)
      • Who’s Dean Ball? (00:01:22)
      • How likely are we to get superintelligence soon, and how bad could it be? (00:01:54)
      • The military may not adopt AI that fast (00:10:54)
      • Dean’s “two wolves” of AI scepticism and optimism (00:17:48)
      • Will AI self-improvement be a game changer? (00:28:20)
      • The case for regulating at the last possible moment (00:33:05)
      • AI could destroy our fragile democratic equilibria. Why not freak out? (00:52:30)
      • The case AI will soon be way overregulated (01:02:51)
      • How to handle the threats without collateral damage (01:14:56)
      • Easy wins against AI misuse (01:26:54)
      • Maybe open source can be handled gracefully (01:41:13)
      • Would a company be sued for trillions if their AI caused a pandemic? (01:47:58)
      • Dean dislikes compute thresholds. Here's what he'd do instead. (01:57:16)
      • Could AI advances lead to violent conflict between the US and China? (02:02:52)
      • Will we see a MAGA-Yudkowskyite alliance? Doomers and the Right (02:12:29)
      • The tactical case for focusing on present-day harms (02:26:51)
      • Is there any way to get the US government to use AI sensibly? (02:45:05)
      • Having a kid in a time of AI turmoil (02:52:38)

    Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
    Music: CORBIT
    Coordination, transcripts, and web: Katy Moore


About 80,000 Hours Podcast

Unusually in-depth conversations about the world's most pressing problems and what you can do to solve them. Subscribe by searching for '80000 Hours' wherever you get podcasts. Hosted by Rob Wiblin and Luisa Rodriguez.