
Unsupervised Learning with Jacob Effron

by Redpoint Ventures
Latest episode

91 episodes

  • Ep 84: OpenAI’s Chief Scientist on Continual Learning Hype, RL Beyond Code, & Future Alignment Directions

    2026-04-09 | 58 min.
    Jakub Pachocki, OpenAI's Chief Scientist, sits down with Jacob to cover the full arc of where AI research stands today and where it's headed. The conversation spans the explosive growth of coding agents and what it signals about near-term AI capability, the use of math and physics benchmarks as proxies for general intelligence, how reinforcement learning is being extended beyond easily-verified domains toward longer-horizon tasks, and what it means to run a research organization at the precise moment the models themselves are starting to accelerate the research. Jakub shares a candid take on the competitive landscape, why chain-of-thought monitoring is one of the most promising tools in the alignment toolkit, and — with unusual directness — why the concentration of power enabled by highly automated AI organizations is a societal problem that doesn't yet have an obvious solution.

     

    (0:00) Intro

    (1:53) Research Intern Capability Timelines

    (4:59) Math Breakthroughs

    (7:59) RL Beyond Verifiable Tasks

    (12:32) RL vs In-Context

    (19:01) Allocating Compute Internally

    (28:18) AI for Science

    (31:40) Pattern Matching

    (33:23) Solving the Hardest Math Problems

    (37:40) Chain of Thought Monitoring

    (44:33) Generalization and Value Alignment in Models

    (47:57) Inside OpenAI

    (51:55) Quickfire

     

    With your co-hosts: 

    @jacobeffron 

    - Partner at Redpoint, Former PM Flatiron Health 

    @patrickachase 

    - Partner at Redpoint, Former ML Engineer LinkedIn 

    @ericabrescia 

    - Former COO GitHub, Founder Bitnami (acq’d by VMware) 

    @jordan_segall 

    - Partner at Redpoint
  • Ep 83: Owning the System of Record, AI-Native Org Charts, & Why ITSM is The Most Vulnerable Legacy Category

    2026-04-02 | 54 min.
    Serval is one of the fastest-growing AI-native enterprise software companies right now, and this episode is a rare inside look at the deliberate architectural, go-to-market, and talent decisions behind that growth. Jake Stauch breaks down why he made the contrarian bet to build a full system of record rather than layer on top of existing tools, why ITSM is more vulnerable to AI disruption than CRM, ERP, or HRIS, and how Serval is winning Fortune 500 deals against a $14B incumbent with a fraction of the resources. Beyond the product, Jake gets into the organizational decisions that underpin Serval's velocity — why recruiting is the #1 job of every employee, how to prevent talent bar decay as you scale from 8 to 200 people, and how the role of the manager is shifting as ICs own more scope than ever. Threading it all together is a founder's honest account of what it means to build a horizontal software company when the models are improving, the infrastructure is shifting, and the window to displace a legacy incumbent is open but won't stay open forever.

     

    With your co-hosts: 

    @jacobeffron 

    - Partner at Redpoint, Former PM Flatiron Health 

    @patrickachase 

    - Partner at Redpoint, Former ML Engineer LinkedIn 

    @ericabrescia 

    - Former COO GitHub, Founder Bitnami (acq’d by VMware) 

    @jordan_segall 

    - Partner at Redpoint
  • Ep 82: Behind Legora's $550M Raise, Model Competition, Doubling Revenue Every Quarter, & US Expansion

    2026-03-11 | 54 min.
    Max Jungestål, CEO of Legora, joins Jacob Effron and Logan Bartlett to discuss the company's $550M Series D and share a candid account of what building an AI-native company at speed actually looks like from the inside.

    Max argues that the AI application layer requires a fundamentally different operating model than traditional SaaS, one built on low ego, constant reinvention, and a willingness to watch nine months of work get washed away by a model update. He walks through how step-function improvements in the underlying models, particularly Opus 4.5 and 4.6, have repeatedly forced Legora to rebuild core product features from scratch, and why he sees that as a feature, not a bug.

    On the legal industry, Max offers a ground-level view of how AI is actually diffusing through law firms, less through top-down mandates and more through competitive pressure between firms and, increasingly, from enterprise clients demanding efficiency from their outside counsel. He pushes back on the viability of AI-native law firms, dismisses outcome-based pricing as harder than it looks, and makes the case for why foundation model competition creates tailwinds rather than threats for a company with Legora's depth.

    The episode closes with a detailed look at the US expansion strategy, including the deliberate cultural decisions, like flying all New York hires to Stockholm for onboarding, that Max believes are the real source of Legora's compounding advantage.

     

    (0:00) Intro

    (1:16) Legora's Series D Story

    (3:24) Why You Need Low Ego to Build in AI

    (5:58) From 60% to 100% Accuracy in One Summer

    (7:04) Law Firm Economics Shift

    (14:09) Pricing Seats Vs Outcomes

    (18:31) Why Foundation Models Entering Legal Helps Legora

    (30:10) Convincing a 75-Year-Old Partner to Go All In

    (33:02) Hiring Legal Engineers

    (34:32) Running an AI-Native Company

    (35:57) The Opus 4.5 Christmas Breakthrough

    (40:02) Building With Customers

    (44:01) All In On US Expansion

    (51:22) Stockholm Startup DNA

     

    With your co-hosts: 

    @jacobeffron 

    - Partner at Redpoint, Former PM Flatiron Health 

    @patrickachase 

    - Partner at Redpoint, Former ML Engineer LinkedIn 

    @ericabrescia 

    - Former COO GitHub, Founder Bitnami (acq’d by VMware) 

    @jordan_segall 

    - Partner at Redpoint
  • Ep 81: Ex-OpenAI Researcher On Why He Left, His Honest AGI Timeline, & The Limits of Scaling RL

    2026-01-29 | 1 h 2 min.
    This episode features Jerry Tworek, a key architect behind OpenAI's breakthrough reasoning models (o1, o3) and Codex, discussing the current state and future of AI. Jerry explores the real limits and promise of scaling pre-training and reinforcement learning, arguing that while these paradigms deliver predictable improvements, they're fundamentally constrained by data availability and struggle with generalization beyond their training objectives. He reveals his updated belief that continual learning—the ability for models to update themselves based on failure and work through problems autonomously—is necessary for AGI, as current models hit walls and become "hopeless" when stuck. Jerry discusses the convergence of major labs toward similar approaches driven by economic forces, the tension between exploration and exploitation in research, and why he left OpenAI to pursue new research directions. He offers candid insights on the competitive dynamics between labs, the focus required to win in specific domains like coding, what makes great AI researchers, and his surprisingly near-term predictions for robotics (2-3 years) while warning about the societal implications of widespread work automation that we're not adequately preparing for.
     
    (0:00) Intro
    (1:26) Scaling Paradigms in AI
    (3:36) Challenges in Reinforcement Learning
    (11:48) AGI Timelines
    (18:36) Converging Labs
    (25:05) Jerry’s Departure from OpenAI
    (31:18) Pivotal Decisions in OpenAI’s Journey
    (35:06) Balancing Research and Product Development
    (38:42) The Future of AI Coding
    (41:33) Specialization vs. Generalization in AI
    (48:47) Hiring and Building Research Teams
    (55:21) Quickfire
     
    With your co-hosts: 
    @jacobeffron 
    - Partner at Redpoint, Former PM Flatiron Health 
    @patrickachase 
    - Partner at Redpoint, Former ML Engineer LinkedIn 
    @ericabrescia 
    - Former COO GitHub, Founder Bitnami (acq’d by VMware) 
    @jordan_segall 
    - Partner at Redpoint
  • AI Vibe Check: The Actual Bottleneck In Research, SSI’s Mystique, & Spicy 2026 Predictions

    2025-12-18 | 1 h 18 min.
    Ari Morcos and Rob Toews return for their spiciest conversation yet. Fresh from NeurIPS, they debate whether models are truly plateauing or if we're just myopically focused on LLMs while breakthroughs happen in other modalities.
    They reveal why infinite capital at labs may actually constrain innovation, explain the narrow "Goldilocks zone" where RL actually works, and argue why U.S. chip restrictions may have backfired catastrophically—accelerating China's path to self-sufficiency by a decade. The conversation covers OpenAI's code red moment and structural vulnerabilities, the mystique surrounding SSI and Ilya's "two words," and why the real bottleneck in AI research is compute, not ideas.
    The episode closes with bold 2026 predictions: Rob forecasts Sam Altman won't be OpenAI's CEO by year-end, while Ari gives 50%+ odds a Chinese open-source model will be the world's best at least once next year.
     
    (0:00) Intro
    (1:51) Reflections on NeurIPS Conference
    (5:14) Are AI Models Plateauing?
    (11:12) Reinforcement Learning and Enterprise Adoption
    (16:16) Future Research Vectors in AI
    (28:40) The Role of Neo Labs
    (39:35) The Myth of the Great Man Theory in Science
    (41:47) OpenAI's Code Red and Market Position
    (47:19) Disney and OpenAI's Strategic Partnership
    (51:28) Meta's Superintelligence Team Challenges
    (54:33) US-China AI Chip Dynamics
    (1:00:54) Amazon's Nova Forge and Enterprise AI
    (1:03:38) End of Year Reflections and Predictions
     
    With your co-hosts:
    @jacobeffron  
    - Partner at Redpoint, Former PM Flatiron Health
    @patrickachase  
    - Partner at Redpoint, Former ML Engineer LinkedIn
    @ericabrescia  
    - Former COO GitHub, Founder Bitnami (acq’d by VMware)
    @jordan_segall  
    - Partner at Redpoint


About Unsupervised Learning with Jacob Effron

We probe the sharpest minds in AI in search of the truth about what’s real today, what will be real in the future, and what it all means for businesses and the world. If you’re a builder, researcher, or investor navigating the AI world, this podcast will help you deconstruct and understand the most important breakthroughs and see a clearer picture of reality. Follow this show and consider enabling notifications to stay up to date on our latest episodes. Unsupervised Learning is a podcast by Redpoint Ventures, an early-stage venture capital fund that has invested in companies like Snowflake, Stripe, and Mistral. Hosted by Redpoint investor Jacob Effron alongside Patrick Chase, Jordan Segall, and Erica Brescia.
Podcast website

