
The MAD Podcast with Matt Turck

Matt Turck

Available episodes

  • GitHub CEO: The AI Coding Gold Rush, Vibe Coding & Cursor
    AI coding is in full-blown gold-rush mode, and GitHub sits at the epicenter. In this episode, GitHub CEO Thomas Dohmke tells Matt Turck how a $7.5B acquisition in 2018 became a $2B ARR rocket ship, and reveals how Copilot was born from a secret AI strategy years before anyone else saw the opportunity. We dig into the dizzying pace of AI innovation: why developer tools are suddenly the fastest-growing startups in history, how GitHub’s multi-model approach (OpenAI, Anthropic Claude 4, Gemini 2.5, and even local LLMs) gives you more choice and speed, and why fine-tuning models might be overrated. Thomas explains how Copilot keeps you in the “magic flow state,” and how even middle schoolers are using it to hack Minecraft. The conversation then zooms out to the competitive battlefield: Cursor’s $10B valuation, Mistral’s new code model, and a wave of AI-native IDE forks vying for developer mind-share. We discuss why 2025’s “coding agents” could soon handle 90% of the world’s code, the survival of SaaS, and why the future of coding is about managing agents, not just writing code.
    GitHub
    Website - https://github.com/
    X/Twitter - https://x.com/github
    Thomas Dohmke
    LinkedIn - https://www.linkedin.com/in/ashtom
    X/Twitter - https://twitter.com/ashtom
    FIRSTMARK
    Website - https://firstmark.com
    X/Twitter - https://twitter.com/FirstMarkCap
    Matt Turck (Managing Director)
    LinkedIn - https://www.linkedin.com/in/turck/
    X/Twitter - https://twitter.com/mattturck
    (00:00) Intro (01:50) Why AI Coding Is Ground Zero for Generative AI (02:40) The $7.5B GitHub Acquisition: Microsoft’s Strategic Play (06:21) GitHub’s Role in the Azure Cloud Ecosystem (10:25) How GitHub Copilot Beat Everyone to Market (16:09) Copilot & VS Code Explained for Non-Developers (21:02) GitHub Models: Multi-Model Choice and What It Means (25:31) The Reality of Fine-Tuning AI Models for Enterprise (29:13) The Dizzying Pace and Political Economy of AI Coding Tools (36:58) Competing and Partnering: Microsoft’s Unique AI Strategy (41:29) Does Microsoft Limit Copilot’s AI-Native Potential? (46:44) The Bull and Bear Case for AI-Native IDEs Like Cursor (52:09) Agent Mode: The Next Step for AI-Powered Coding (01:00:10) How AI Coding Will Change SaaS and Developer Skills
    --------  
    1:04:46
  • Inside the Paper That Changed AI Forever - Cohere CEO Aidan Gomez on 2025 Agents
    What really happened inside Google Brain when the “Attention is All You Need” paper was born? In this episode, Aidan Gomez — one of the eight co-authors of the Transformers paper and now CEO of Cohere — reveals the behind-the-scenes story of how a cold email and a lucky administrative mistake landed him at the center of the AI revolution. Aidan shares how a group of researchers, given total academic freedom, accidentally stumbled into one of the most important breakthroughs in AI history — and why the architecture they created still powers everything from ChatGPT to Google Search today. We dig into why synthetic data is now the secret sauce behind the world’s best AI models, and how Cohere is using it to build enterprise AI that’s more secure, private, and customizable than anything else on the market. Aidan explains why he’s not interested in “building God” or chasing AGI hype, and why he believes the real impact of AI will be in making work more productive, not replacing humans. You’ll also get a candid look at the realities of building an AI company for the enterprise: from deploying models on-prem and air-gapped for banks and telecoms, to the surprising demand for multimodal and multilingual AI in Japan and Korea, to the practical challenges of helping customers identify and execute on hundreds of use cases.
    Cohere
    Website - https://cohere.com
    X/Twitter - https://x.com/cohere
    Aidan Gomez
    LinkedIn - https://ca.linkedin.com/in/aidangomez
    X/Twitter - https://x.com/aidangomez
    FIRSTMARK
    Website - https://firstmark.com
    X/Twitter - https://twitter.com/FirstMarkCap
    Matt Turck (Managing Director)
    LinkedIn - https://www.linkedin.com/in/turck/
    X/Twitter - https://twitter.com/mattturck
    (00:00) Intro (02:00) The Story Behind the Transformers Paper (03:09) How a Cold Email Landed Aidan at Google Brain (10:39) The Initial Reception to the Transformers Breakthrough (11:13) Google’s Response to the Transformer Architecture (12:16) The Staying Power of Transformers in AI (13:55) Emerging Alternatives to Transformer Architectures (15:45) The Significance of Reasoning in Modern AI (18:09) The Untapped Potential of Reasoning Models (24:04) Aidan’s Path After the Transformers Paper and the Founding of Cohere (25:16) Choosing Enterprise AI Over AGI Labs (26:55) Aidan’s Perspective on AGI and Superintelligence (28:37) The Trajectory Toward Human-Level AI (30:58) Transitioning from Researcher to CEO (33:27) Cohere’s Product and Platform Architecture (37:16) The Role of Synthetic Data in AI (39:32) Custom vs. General AI Models at Cohere (42:23) The AYA Models and Cohere Labs Explained (44:11) Enterprise Demand for Multimodal AI (49:20) On-Prem vs. Cloud (50:31) Cohere’s North Platform (54:25) How Enterprises Identify and Implement AI Use Cases (57:49) The Competitive Edge of Early AI Adoption (01:00:08) Aidan’s Concerns About AI and Society (01:01:30) Cohere’s Vision for Success in the Next 3–5 Years
    --------  
    1:02:24
  • AI That Ends Busy Work — Hebbia CEO on “Agent Employees”
    What if the smartest people in finance and law never had to do “stupid tasks” again? In this episode, we sit down with George Sivulka, founder of Hebbia, the AI company quietly powering 50% of the world’s largest asset managers and some of the fastest-growing law firms. George reveals how Hebbia’s Matrix platform is automating the equivalent of 50,000 years of human reading — every year — and why the future of work is hybrid teams of humans and AI “agent employees.” You’ll get the inside story on how Hebbia went from a stealth project at Stanford to a multinational company trusted by the Department of Defense, and why their spreadsheet-inspired interface is leaving chatbots in the dust. George breaks down the technical secrets behind Hebbia’s ISD architecture (and why they killed RAG), how they process billions of pages with near-zero hallucinations, and what it really takes to sell AI into the world’s most regulated industries. We also dive into the future of organizational design, why generalization beats specialization in AI, and how “prompting is the new management skill.” Plus: the real story behind AI hallucinations, the myth of job loss, and why naiveté might be the ultimate founder superpower.
    Hebbia
    Website - https://www.hebbia.com
    Twitter - https://x.com/HebbiaAI
    George Sivulka
    LinkedIn - https://www.linkedin.com/in/sivulka
    Twitter - https://x.com/gsivulka
    FIRSTMARK
    Website - https://firstmark.com
    Twitter - https://twitter.com/FirstMarkCap
    Matt Turck (Managing Director)
    LinkedIn - https://www.linkedin.com/in/turck/
    Twitter - https://twitter.com/mattturck
    (00:00) Intro (01:46) What is Hebbia (02:49) Evolving Hebbia’s mission (04:45) The founding story and Stanford’s inspiration (09:45) The rise of agent employees and AI in organizations (12:36) The future of AI-powered work (15:17) AI research trends (19:49) Inside Matrix: Hebbia’s flagship AI platform (24:02) Why Hebbia isn’t just another chatbot (28:27) Moving beyond RAG: Hebbia’s unique architecture (34:10) Tackling hallucinations in high-stakes AI (35:59) Research culture and avoiding industry groupthink (39:40) Innovating go-to-market and enterprise sales (41:57) Real-world value: Cost savings and new revenue (43:49) How AI is changing junior roles (45:55) Leadership and perspective as a young founder (47:16) Hebbia’s roadmap: Success in the next 3 years
    --------  
    48:24
  • AI Eats the World: Benedict Evans on What Really Matters Now
    What if the “AI revolution” is actually… stuck in the messy middle? In this episode, Benedict Evans returns to tackle the big question we left hanging a year ago: Is AI a true paradigm shift, or just another tech platform shift like mobile or cloud? One year later, the answer is more complicated — and more revealing — than anyone expected. Benedict pulls back the curtain on why, despite all the hype and model upgrades, the core LLMs are starting to look like commodities. We dig into the real battlegrounds: distribution, brand, and the race to build sticky applications. Why is ChatGPT still topping the App Store charts while Perplexity and Claude barely register outside Silicon Valley? Why did OpenAI just hire a CEO of Applications, and what does that signal about the future of AI products? We go deep on the “probabilistic” nature of LLMs, why error rates are still the elephant in the room, the future of consumer AI (is there a killer app beyond chatbots and image generators?), the impact of generative content on e-commerce and advertising, and whether “AI agents” are the next big thing — or just another overhyped demo. And we ask: What happened to AI doomerism? Why did the existential risk debate suddenly vanish, and what risks should we actually care about?
    Benedict Evans
    LinkedIn - https://www.linkedin.com/in/benedictevans
    Threads - https://www.threads.net/@benedictevans
    FIRSTMARK
    Website - https://firstmark.com
    X/Twitter - https://twitter.com/FirstMarkCap
    Matt Turck (Managing Director)
    LinkedIn - https://www.linkedin.com/in/turck/
    X/Twitter - https://twitter.com/mattturck
    (00:00) Intro (01:47) Is AI a Platform Shift or a Paradigm Shift? (07:21) Error Rates and Trust in AI (15:07) Adapting to AI’s Capabilities (19:18) Generational Shifts in AI Usage (22:10) The Commoditization of AI Models (27:02) Are Brand and Distribution the Real Moats in AI? (29:38) OpenAI: Research Lab or Application Company? (33:26) Big Tech’s AI Strategies: Apple, Google, Meta, AWS (39:00) AI and Search: Is ChatGPT a Search Engine? (42:41) Consumer AI Apps: Where’s the Breakout? (45:51) The Need for a GUI for AI (48:38) Generative AI in Social and Content (51:02) The Business Model of AI: Ads, Memory, and Moats (55:26) Enterprise AI: SaaS, Pilots, and Adoption (01:00:08) The Future of AI in Business (01:05:11) Infinite Content, Infinite SKUs: AI and E-commerce (01:09:42) Doomerism, Risks, and the Future of AI
    --------  
    1:15:09
  • Jeremy Howard on Building 5,000 AI Products with 14 People (Answer AI Deep-Dive)
    What happens when you try to build the “General Electric of AI” with just 14 people? In this episode, Jeremy Howard reveals the radical inside story of Answer AI — a new kind of AI R&D lab that’s not chasing AGI, but instead aims to ship thousands of real-world products, all while staying tiny, open, and mission-driven. Jeremy shares how open-source models like DeepSeek and Qwen are quietly outpacing closed-source giants, and why the best new AI is coming out of China. You’ll hear the surprising truth about the so-called “DeepSeek moment,” why efficiency and cost are the real battlegrounds in AI, and how Answer AI’s “dialogue engineering” approach is already changing lives — sometimes literally. We go deep on the tools and systems powering Answer AI’s insane product velocity, including Solve It (the platform that’s helped users land jobs and launch startups), Shell Sage (AI in your terminal), and Fast HTML (a new way to build web apps in pure Python). Jeremy also opens up about his unconventional path from philosophy major and computer game enthusiast to world-class AI scientist, and why he believes the future belongs to small, nimble teams who build for societal benefit, not just profit.
    Fast.ai
    Website - https://www.fast.ai
    X/Twitter - https://twitter.com/fastdotai
    Answer.ai
    Website - https://www.answer.ai/
    X/Twitter - https://x.com/answerdotai
    Jeremy Howard
    LinkedIn - https://linkedin.com/in/howardjeremy
    X/Twitter - https://x.com/jeremyphoward
    FIRSTMARK
    Website - https://firstmark.com
    X/Twitter - https://twitter.com/FirstMarkCap
    Matt Turck (Managing Director)
    LinkedIn - https://www.linkedin.com/in/turck/
    X/Twitter - https://twitter.com/mattturck
    (00:00) Intro (01:39) Highlights and takeaways from ICLR Singapore (02:39) Current state of open-source AI (03:45) Thoughts on Microsoft Phi and open source moves (05:41) Responding to OpenAI’s open source announcements (06:29) The real impact of the DeepSeek ‘moment’ (09:02) Progress and promise in test-time compute (10:53) Where we really stand on AGI and ASI (15:05) Jeremy’s journey from philosophy to AI (20:07) Becoming a Kaggle champion and starting Fast.ai (23:04) Answer.ai mission and unique vision (28:15) Answer.ai’s business model and early monetization (29:33) How a small team at Answer.ai ships so fast (30:25) Why Devin AI agent isn't that great (33:10) The future of autonomous agents in AI development (34:43) Dialogue Engineering and Solve It (43:54) How Answer.ai decides which projects to build (49:47) Future of Answer.ai: staying small while scaling impact
    --------  
    55:02

About The MAD Podcast with Matt Turck

The MAD Podcast with Matt Turck is a series of conversations with leaders from across the Machine Learning, AI & Data landscape, hosted by Matt Turck, a leading AI and data investor and Partner at FirstMark Capital.