Every disaster follows the same pattern: highly automated systems, humans relegated to opening doors and watching screens, and then those same humans suddenly asked to intervene in a crisis they no longer understand.
Bryan Reimer has spent 25 years studying this exact failure mode. His warning is simple: the more you automate, the less capable you become at supporting that automation. Your skills atrophy. Your assumptions grow dangerous. Your attention wanders. And when the system reaches its boundaries, you're not there anymore.
Now apply that to every knowledge worker in your organization using ChatGPT.
This episode is a blueprint for navigating what Bryan calls "the year of the human". Not because machines are failing. Because we're finally waking up to the real question: Are we building AI that replaces us, or AI that amplifies what we're capable of?
🎙️ Guest
Bryan Reimer is a Research Scientist at MIT AgeLab and Associate Director of the New England University Transportation Center. He's published over 350 academic papers, advises AI Sweden and Autoliv, and just released the book "How to Make AI Useful – Moving Beyond the Hype to Real Progress in Business, Society and Life."
What makes his perspective rare: he's watched the automation trap play out across three decades of disasters. Self-driving cars. Aviation. Nuclear plants. The patterns are identical. And they're now showing up in every enterprise deploying AI without understanding the human factors underneath.
🔥 Key Insights
✅ The Automation Paradox: Better systems, worse humans
When automation does the work, we stop learning. Our neural activity drops. Our expertise erodes. We begin making assumptions about what the system is doing (and it's not always what we think). Most critically, we trust it just a little too much. This isn't speculation. It's documented across every safety-critical domain. And it's happening right now in your organization with chatbots.
✅ Copilot vs. Autopilot: The fork that defines your future
Some people ask ChatGPT to write their essay. They're not building skills; they're outsourcing thinking. Others "jam" with it: back and forth, iterating, treating it like an intellectual sparring partner. Same tool. Completely different outcome. The first path leads to atrophy. The second creates what Bryan calls "superworkers" who can do with AI what a team of 20 couldn't do before.
✅ 2026: The year of the human
After years of tech-first thinking, Bryan predicts a pivot. Time Magazine made AI its person of the year in 2025. But the winners in 2026 won't be those deploying more AI to replace humans. They'll be the organizations deploying AI that enhances what their teams can actually do. The competitive advantage isn't automation. It's amplification.
✅ Unlearn as much as you learn
75% of major organizations still aren't using AI in any meaningful way. Some actively forbid it. The resistance isn't about technology; it's about mindset. History repeats itself, but that doesn't mean we want it to. Leaders need to unlearn old assumptions about control, measurement, and what productivity even means. A 35-hour work week might not be crazy. Swedish unicorns prove you can build world-class companies while taking summers off.
✅ Learn to play more
There's no textbook for this. When we were children, we went to sandboxes. We experimented. We tried to create things we'd never seen before. That's the only path forward with AI: create low-risk environments where you and your team can experiment without financial consequences. Ask Claude, Gemini, or ChatGPT weird questions about things you already know well. That's how you learn when it's right, when it's wrong, and where the edges really are.
▶️ Listen now
Bryan's final thought: We started inventing technology to help us, not replace us. Maybe it's time to remember that.