
The Neuron: AI Explained

Latest episode

68 episodes

  • GPT 5.4 LIVE Test & Learn to Code in 2026: What's Essential vs. What AI Handles Now

    2026-03-06 | 2 h
    Ryan Carson taught over 1,000,000 people how to code at Treehouse and spent 25% of his entire life doing it. Now he says everything about that process needs to change.

    In this livestream, Ryan joins Corey Noles and Grant Harvey to rethink programming education from scratch for a world in which AI agents can write production code, pass competitive coding challenges, and ship features while you sleep.

    We'll cover:
    🧠 What’s still fundamental when agents handle the syntax
    🔄 Where beginners should start in 2026 (it’s not where you think)
    🚀 The new hard parts: deployment, databases, security, and getting your app on the internet
    ⭐ Ryan’s viral 3-file system for building with AI agents (5,000+ GitHub stars)
    🧪 Why “vibe coding” gets you a prototype but not a product
    🛠️ The skills that separate someone who prompts from someone who ships

    Ryan is the founder of Treehouse (raised $23M, taught 1M+ students, acquired 2021), Builder in Residence at Amp (Sourcegraph's coding agent), and is currently building Untangle, a real production app, almost entirely with AI tools.

    Whether you're a complete beginner curious about coding in 2026 or an experienced developer rethinking your workflow, this one's for you.

    🔗 LINKS & RESOURCES:
    • Ryan Carson's website: https://www.ryancarson.com/
    • Ryan's articles on agent workflows: https://www.ryancarson.com/articles
    • Code Factory workflow: https://x.com/ryancarson/status/2023452909883609111
    • Agent teams in OpenClaw: https://x.com/ryancarson/status/2020931274219594107
    • Agents that ship while you sleep: https://x.com/ryancarson/status/2016520542723924279
    • Ryan's newsletter: https://ryancarson.substack.com/
    • Untangle: https://untangle-us.com/
    • Amp (Sourcegraph coding agent): https://ampcode.com/

    🗞️ Subscribe to The Neuron newsletter: https://theneuron.ai
  • AI Is Helping Build the Power Source It Desperately Needs (Brandon Sorbom w/ Commonwealth Fusion Systems)

    2026-03-03 | 1 h 3 min.
    AI data centers are going to double their power consumption by 2030—so where's all that energy coming from? One answer is fusion, the same process that powers the sun.
    In this episode of The Neuron, we're joined by Brandon Sorbom, Chief Science Officer and Co-founder of Commonwealth Fusion Systems, to explore how his company is racing to build the world's first commercial fusion power plant—and how AI is helping them get there faster.
    Brandon explains why fusion has been "30 years away" for decades, what changed with high-temperature superconducting magnets, and why fusion is fundamentally safer than fission (hint: fusion is "default off"). We dive into CFS's collaborations with Google DeepMind and NVIDIA, what it takes to wrangle 10,000 unique parts, and when we might actually see fusion on the grid.
    You'll learn:
    • What fusion actually is (and why it's not nuclear fission)
    • Why high-temperature superconducting magnets changed everything
    • How AI is accelerating plasma control and simulation
    • The safety profile that makes fusion regulated like an MRI, not a reactor
    • When CFS expects to hit Q > 1 (net energy) and beyond
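    For context on that last bullet, Q is the standard plasma-physics fusion gain factor (this is the conventional definition, not one given in the episode): the ratio of fusion power produced to the external heating power supplied.

```latex
Q \;=\; \frac{P_{\text{fusion}}}{P_{\text{heating}}},
\qquad Q > 1 \;\text{(scientific breakeven: more fusion power out than heating power in)}
```

    A commercial plant needs Q well above 1, since capturing the output and running the plant itself both consume additional energy.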
    To learn more about Commonwealth Fusion Systems, visit https://cfs.energy.
    For more practical, grounded conversations on AI and emerging tech, subscribe to The Neuron newsletter at https://theneuron.ai
  • BONUS: Gemini 3 Flash (Smartest, Cheapest AI) with Google DeepMind's Logan Kilpatrick

    2026-02-27 | 1 h 59 min.
    From the YT live archives: Google just dropped Gemini 3 Flash—a model that outperforms Gemini 2.5 Pro (their last top model) while running 3x faster at less than 1/4 the cost. It's frontier-level reasoning at Flash-level speed, and it's rolling out globally right now.

    We're sitting down with Logan Kilpatrick from Google DeepMind to explore what this actually means for developers, knowledge workers, and anyone trying to figure out how AI fits into their workflow.

    What we'll cover:
    🔥 Live demos – Logan will show us Gemini 3 Flash in action, from coding to multimodal understanding

    ⚡ What's now possible – Use cases that weren't practical with previous models (or weren't possible at all)

    🛠️ Building together – We might wire up a tool live if Logan's game (we've got ideas)

    💰 Intelligence too cheap to meter – We'll dig into the economics: when AI gets this powerful and this affordable, does it change the hiring calculus?

    On that last point: right now, data shows AI is raising wages for AI-impacted roles because workers who use AI effectively can command higher salaries. But what happens when frontier intelligence costs $0.50 per million tokens? When does “intelligence as a commodity” flip from “AI makes workers more valuable” to “why hire a human?” We’ll see if we can get Logan’s take on this topic!
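    To make the arithmetic behind that question concrete, here is a minimal sketch (the $0.50-per-million-token price is the hypothetical figure from the paragraph above, not a published Gemini price):

```python
def completion_cost_usd(tokens: int, usd_per_million: float = 0.50) -> float:
    """Cost of generating `tokens` tokens at a flat per-million-token price."""
    return tokens * usd_per_million / 1_000_000

# At $0.50 per million tokens, a long 2,000-token answer costs a tenth of a cent,
# and a full million tokens (hours of agent output) costs fifty cents.
print(completion_cost_usd(2_000))      # 0.001
print(completion_cost_usd(1_000_000))  # 0.5
```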

    Key specs on Gemini 3 Flash:
    • Outperforms Gemini 2.5 Pro across most benchmarks
    • 3x faster than 2.5 Pro
    • Less than 1/4 the cost of Gemini 3 Pro
    • 1M token context window
    • Advanced visual and spatial reasoning with code execution
    • 78% on SWE-bench Verified (agentic coding)
    • Rolling out globally in the Gemini app, AI Mode in Search, and developer platforms

    Logan has been at the center of Google's push to make frontier AI accessible to millions of developers. If you're shipping products, building with AI, or just trying to wrap your head around where this is all going, this conversation will give you clarity.
  • Diffusion for Text: Why Mercury Could Make LLMs 10x Faster

    2026-02-24 | 48 min.
    Diffusion models changed how we generate images and video—now they’re coming for text.

    In this episode, we sit down with Stefano Ermon, Stanford computer science professor and founder of Inception Labs, to unpack how diffusion works for language, why it can generate in parallel (instead of token-by-token), and what that means for latency, cost, and real-time AI products.

    We talk through:
    • The simplest mental model for diffusion: generate a full draft, then refine it by “fixing mistakes”
    • Why today’s autoregressive LLM inference is often memory-bound, and why diffusion can shift it toward a more GPU-friendly compute profile
    • Where Mercury wins today (IDEs, voice/real-time agents, customer support, EdTech: anywhere humans can’t wait)
    • What changes (and what doesn’t) for long context and architecture choices
    • The real-world way to evaluate models in production: offline evals + the gold-standard A/B test
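    That “full draft, then refine” mental model can be made concrete with a toy sketch. This is our illustration, not Mercury's actual algorithm: a real text-diffusion model predicts every token with a neural network at each refinement step, while this toy simply reveals a known target string to show the parallel update pattern.

```python
import random

def toy_parallel_refine(target: str, steps: int = 4, seed: int = 0) -> str:
    """Toy 'draft then fix mistakes' loop: start fully masked and, on each
    pass, update EVERY position at once (unlike autoregressive decoding,
    which emits exactly one token per sequential step)."""
    rng = random.Random(seed)
    draft = ["_"] * len(target)  # the all-masked initial draft
    for step in range(1, steps + 1):
        for i in range(len(draft)):  # one parallel pass over all positions
            if draft[i] == "_" and rng.random() < step / steps:
                draft[i] = target[i]  # "fix" this position
    return "".join(draft)

# The number of refinement passes is fixed (steps=4) no matter how long the
# output is; autoregressive decoding needs one sequential step per token.
print(toy_parallel_refine("hello world"))  # hello world
```

The latency argument falls out of the loop structure: total passes stay constant as the output grows, which is why parallel refinement can beat token-by-token generation on long outputs.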

    Stefano also shares what’s next on Mercury’s roadmap—especially around stronger planning and reasoning for agentic use cases.

    Try Mercury + learn more: inceptionlabs.ai

    For more practical, grounded conversations on AI systems that actually work, subscribe to The Neuron newsletter at https://theneuron.ai.
  • Can AI Improve Customer Service Without Killing Jobs? Crescendo Thinks So

    2026-02-20 | 57 min.
    Customer service is one of the industries most impacted by AI — but what if AI alone isn’t the answer?

    In this episode of The Neuron Podcast, Grant Harvey and Corey Noles sit down with Matt Price, Founder & CEO of Crescendo, to explore how AI and humans working together can outperform automation alone. After spending 13+ years at Zendesk, Matt is now building an AI-native customer experience platform that automates up to 90% of tickets with 99.8% accuracy — without sacrificing empathy, trust, or outcomes.

    We cover:
    • Why LLMs are the biggest shift in customer service since the telephone
    • Why bolting AI onto old CX workflows fails
    • How Crescendo’s multimodal AI can chat, talk, see images, and control devices in one conversation
    • Real-world examples (like smart sprinkler troubleshooting via voice + vision + APIs)
    • Why Crescendo combines AI agents with forward-deployed human experts
    • How outcome-based pricing aligns incentives around real customer satisfaction
    • How AI is reshaping (not eliminating) customer service jobs
    • Why “deflection” is the wrong mindset for CX — and what replaces it
    • What customer support roles look like in an AI-native future

    This is a deep dive into the next generation of customer experience, where AI handles scale and speed — and humans deliver judgment, empathy, and innovation.

    Subscribe to The Neuron newsletter for weekly conversations with the builders and leaders shaping the future of AI and work: https://theneuron.ai


About The Neuron: AI Explained

The Neuron covers the latest AI developments, trends and research, hosted by Grant Harvey and Corey Noles. Digestible, informative and authoritative takes on AI that get you up to speed and help you become an authority in your own circles. Available every Tuesday on all podcasting platforms and YouTube. Subscribe to our newsletter: https://www.theneurondaily.com/subscribe

