PoddsändningarTeknologiExploring Information Security - Exploring Information Security

Exploring Information Security - Exploring Information Security

Timothy De Block
Exploring Information Security - Exploring Information Security
Senaste avsnittet

140 avsnitt

  • Exploring Information Security - Exploring Information Security

    Inside Cambodia's Scam Compounds: Pig Butchering, Organized Crime, and Protecting Your Life Savings

    2026-02-24 | 39 min.
    Summary:

    Timothy De Block sits down with former FBI agent Scott Augenbaum to discuss his eye-opening trip to Cambodia, which has become the "online scam capital of the world". They dive into the terrifying evolution of "pig butchering" scams, how Chinese organized crime and geopolitical investments have fueled a massive criminal ecosystem, and why the ultimate vulnerability is still human psychology. Scott explains the massive scale of these operations and shares the single most important step you can take to avoid losing your money to these syndicates.

    Key Topics Discussed

    The Ground Zero of Scams: Scott discusses his trip to Sihanoukville, Cambodia, a city filled with scam compounds hiding in plain sight behind casino facades and fortress-like buildings with their backs facing the street.

    The Pivot to "Pig Butchering": How China's 2018 ban on online gambling and the 2020 COVID-19 casino shutdowns forced organized crime to pivot to massive, highly organized cryptocurrency and romance advanced-fee scams.

    A Geopolitical Nightmare: The complexities of combating these compounds when they are backed by Chinese investment and infrastructure (such as a highway built using Huawei routers). This dynamic leaves local law enforcement hesitant to intervene and limits the FBI's power.

    The Anatomy of a $5.2 Million Scam: Scott breaks down a devastating case of "pig butchering," detailing how scammers use fake simulated trading apps, "spot gold trading," and artificial intelligence to fatten victims up before stealing millions.

    The Double Crisis: The conversation acknowledges the horrifying human trafficking of compound workers—often lured from underdeveloped nations by fake jobs—while also focusing on the victims in the US and globally who are losing billions.

    The "Cancer Drug" Problem: Why organizations and individuals often only invest in security after they've been breached to meet compliance requirements.

    One Essential Tip: The absolute necessity of understanding social engineering and enabling Two-Factor Authentication (2FA) on all mission-critical accounts, such as home routers, cellular providers, iCloud, and Gmail.

    Memorable Quotes

    "If you're not going to make money through gambling, you're going to make money through the old-fashioned way, scamming." — Scott Augenbaum

    "We don't need to make information security people smarter... We need to get the end users up to taking it seriously." — Scott Augenbaum

    "I deal with people who want to buy the cancer drug after they had cancer. They don't want to buy it before because well, that's too much work." — Scott Augenbaum

    Resources Mentioned

    Book: The Secret to Cyber Security by Scott Augenbaum.

    Special Offer: Scott is generously offering a free audio or electronic copy of his book to listeners. Reach out to him directly to claim it.

    Contact Scott: [email protected].
  • Exploring Information Security - Exploring Information Security

    What are the AI Vulnerabilities We Need to Worry About

    2026-02-17 | 52 min.
    Episode Summary

    Timothy De Block sits down with Keith Hoodlet, Security Researcher and founder of Securing.dev, to navigate the chaotic and rapidly evolving landscape of AI security.

    They discuss why "learning" is the only vital skill left in security, how Large Language Models (LLMs) actually work (and how to break them), and the terrifying rise of AI Agents that can access your email and bank accounts. Keith explains the difference between inherent AI vulnerabilities—like model inversion—and the reckless implementation of AI agents that leads to "free DoorDash" exploits. They also dive into the existential risks of disinformation, where bots manipulate human outrage and poison the very data future models will train on.

    Key Topics

    Learning in the AI Era:

    The "Zero to Hero" approach: How Keith uses tools like Claude to generate comprehensive learning plans and documentation for his team.

    Why accessible tools like YouTube and AI make learning technical concepts easier than ever.

    Understanding the "Black Box":

    How LLMs Work: Keith breaks down LLMs as a "four-dimensional array of numbers" (weights) where words are converted into tokens and calculated against training data. * Open Weights: The ability for users to manipulate these weights to reinforce specific data (e.g., European history vs. Asian Pacific history).

    AI Vulnerabilities vs. Attacks:

    Prompt Injection: "Social engineering" the chatbot to perform unintended actions.

    Membership Inference: Determining if specific data (like yours) is in a training set, which has massive implications for GDPR and the "right to be forgotten".

    Model Inversion: Stealing weights and training data. Keith cites speculation that Chinese espionage used this technique to "shortcut" their own model training using US labs' data.

    Evasion Attacks: A technique rather than a vulnerability. Example: Jason Haddix bypassing filters to generate an image of Donald Duck smoking a cigar by describing the attributes rather than naming the character.

    The "Agent" Threat:

    Running with Katanas: Giving AI agents access to browsers, file systems (~/.ssh), and payment methods is a massive security risk.

    The DoorDash Exploit: A real-world example where a user tricked a friend's email-connected AI bot into ordering them free lunch for a week.

    Supply Chain & Disinformation:

    Hallucination Squatting: AI generating code that pulls from non-existent packages, which attackers can then register to inject malware.

    The Cracker Barrel Outrage: How a bot-driven disinformation campaign manufactured fake outrage over a logo change, fooling a major company and the news media.

    Data Poisoning: The "Russian Pravda network" seeding false information to shape the training data of future US models.

    Memorable Quotes

    "It’s like we’re running with... not just scissors, we’re running with katanas. And the ground that we're on is constantly changing underneath our feet." — Keith Hoodlet

    "We never should have taught runes to sand and allowed it to think." — Keith Hoodlet

    "The biggest bombshell here is that we are the vulnerability. Because we're going to get manipulated by AI in some form or fashion." — Timothy De Block

    Resources Mentioned

    Books:

    Active Measures: The Secret History of Disinformation and Political Warfare by Thomas Rid.

    The Intelligent Investor by Benjamin Graham.

    Thinking, Fast and Slow by Daniel Kahneman.

    Churchill: A Life by Martin Gilbert.

    Videos & Articles:

    3Blue1Brown (YouTube): "But what is a neural network?" (Deep Learning Series) .

    Keith’s Blog: "Life After the AI Apocalypse".

    About the Guest

    Keith Hoodlet is a Security Researcher at Trail of Bits and the creator of Securing.dev. A self-described "technologist who wants to move to the woods," Keith specializes in application security, threat modeling, and deciphering the complex intersection of code and human behavior.

    Website: securing.dev

    Mastodon: Keith on Infosec.Exchange
  • Exploring Information Security - Exploring Information Security

    [RERELEASE] How to make time for a home lab

    2026-02-10 | 22 min.
    Chris (@cmaddalena) and I were asked the question on Twitter, "How do you make time for a home lab?" We answered the question on Twitter, but also decided the question was a good topic for an EIS episode. Home labs are great for advancing a career or breaking into information security. To find the time for them requires making them a priority. It's also good to have a purpose. The time I spend with a home lab is often sporadic and coincides with research on a given area.
  • Exploring Information Security - Exploring Information Security

    [RERELEASE] How to build a home lab

    2026-02-03 | 30 min.
    Chris (@cmaddy) and I have submitted to a couple of calls for training at CircleCityCon and Converge and BSides Detroit this summer on the topic of building a home lab. I will also be speaking on this subject at ShowMeCon. Home labs are great for advancing a career or breaking into information security. The bar is really low on getting started with one. A gaming laptop with decent specifications works great. For those with a lack of hardware or funds there are plenty of online resources to take advantage of.
  • Exploring Information Security - Exploring Information Security

    How to Build an AI Governance Program with Walter Haydock

    2026-01-27 | 30 min.
    Summary:

    Timothy De Block sits down with Walter Haydock, founder of StackAware, to break down the complex world of AI Governance. Walter moves beyond the buzzwords to define AI governance as the management of risk related to non-deterministic systems—systems where the same input doesn't guarantee the same output.

    They explore why the biggest AI risk facing organizations today isn't necessarily a rogue chatbot or a sophisticated cyber attack, but rather HR systems (like video interviews and performance reviews) that are heavily regulated and often overlooked. Walter provides a practical, three-step roadmap for organizations to move from chaos to calculated risk-taking, emphasizing the need for quantitative risk measurement over vague "high/medium/low" assessments.

    Key Topics & Insights

    What is AI Governance?

    Walter defines it as measuring and managing the risks (security, reputation, contractual, regulatory) of non-deterministic systems.

    The 3 Buckets of AI Security:

    AI for Security: AI-powered SOCs, fraud detection.

    AI for Hacking: Automated pentesting, generating phishing emails.

    Security for AI: The governance piece—securing the models and data themselves.

    The "Hidden" HR Vulnerability:

    While security teams focus on hackers, the most urgent vulnerability is often in Human Resources. Tools for firing, hiring, and performance evaluation are highly regulated (e.g., NYC Local Law 144, Illinois AI Video Interview Act) yet frequently lack proper oversight.

    How to Build an AI Governance Program (The First 3 Steps):

    Establish a Policy: Define your risk appetite (what is okay vs. not okay).

    Inventory Systems (with Amnesty): Ask employees what they are using without fear of punishment to get an accurate picture.

    Risk Assessment: Assess the inventory against your policy. Use a tiered approach: prioritize regulated/cyber-physical systems first, then confidential data, then public data.

    Quantitative Risk Management:

    Move away from "High/Medium/Low" charts. Walter advocates for measuring risk in dollars of loss expectancy using methodologies like FAIR (Factor Analysis of Information Risk) or the Hubbard Seiers method.

    Emerging Threats:

    Agentic AI: The next 3-5 years will be defined by "non-deterministic systems interacting with other non-deterministic systems," creating complex governance challenges.

    Regulation Roundup:

    Companies are largely unprepared for the wave of state-level AI laws coming online in places like Colorado (SB 205), California, Utah, and Texas.

    Resources Mentioned

    ISO 42001: The global standard for building AI management systems (similar to ISO 27001 for info sec).

    Cloud Security Alliance (CSA): Recommended for their AI Controls Matrix.

    Book: How to Measure Anything in Cybersecurity Risk by Douglas Hubbard and Richard Seiers.

    StackAware Risk Register: A free template combining Hubbard Seiers and FAIR methodologies.

Fler podcasts i Teknologi

Om Exploring Information Security - Exploring Information Security

The Exploring Information Security podcast interviews a different professional each week exploring topics, ideas, and disciplines within information security. Prepare to learn, explore, and grow your security mindset.
Podcast-webbplats

Lyssna på Exploring Information Security - Exploring Information Security, AI Sweden Podcast och många andra poddar från världens alla hörn med radio.se-appen

Hämta den kostnadsfria radio.se-appen

  • Bokmärk stationer och podcasts
  • Strömma via Wi-Fi eller Bluetooth
  • Stödjer Carplay & Android Auto
  • Många andra appfunktioner
Sociala nätverk
v8.7.0 | © 2007-2026 radio.de GmbH
Generated: 2/25/2026 - 3:04:47 AM