Exploring Information Security

Timothy De Block

Latest episode

139 episodes

  • What are the AI Vulnerabilities We Need to Worry About

    2026-2-17 | 52 min.
    Episode Summary

    Timothy De Block sits down with Keith Hoodlet, Security Researcher and founder of Securing.dev, to navigate the chaotic and rapidly evolving landscape of AI security.

    They discuss why "learning" is the only vital skill left in security, how Large Language Models (LLMs) actually work (and how to break them), and the terrifying rise of AI Agents that can access your email and bank accounts. Keith explains the difference between inherent AI vulnerabilities—like model inversion—and the reckless implementation of AI agents that leads to "free DoorDash" exploits. They also dive into the existential risks of disinformation, where bots manipulate human outrage and poison the very data future models will train on.

    Key Topics

    Learning in the AI Era:

    The "Zero to Hero" approach: How Keith uses tools like Claude to generate comprehensive learning plans and documentation for his team.

    Why accessible tools like YouTube and AI make learning technical concepts easier than ever.

    Understanding the "Black Box":

    How LLMs Work: Keith breaks down LLMs as a "four-dimensional array of numbers" (weights) where words are converted into tokens and calculated against training data.

    Open Weights: The ability for users to manipulate these weights to reinforce specific data (e.g., European history vs. Asian Pacific history).

    AI Vulnerabilities vs. Attacks:

    Prompt Injection: "Social engineering" the chatbot to perform unintended actions.

    Membership Inference: Determining if specific data (like yours) is in a training set, which has massive implications for GDPR and the "right to be forgotten".

    Model Inversion: Stealing weights and training data. Keith cites speculation that Chinese espionage used this technique to "shortcut" their own model training using US labs' data.

    Evasion Attacks: A technique rather than a vulnerability. Example: Jason Haddix bypassing filters to generate an image of Donald Duck smoking a cigar by describing the attributes rather than naming the character.

    The "Agent" Threat:

    Running with Katanas: Giving AI agents access to browsers, file systems (~/.ssh), and payment methods is a massive security risk.

    The DoorDash Exploit: A real-world example where a user tricked a friend's email-connected AI bot into ordering them free lunch for a week.

    Supply Chain & Disinformation:

    Hallucination Squatting: AI generating code that pulls from non-existent packages, which attackers can then register to inject malware.

    The Cracker Barrel Outrage: How a bot-driven disinformation campaign manufactured fake outrage over a logo change, fooling a major company and the news media.

    Data Poisoning: The "Russian Pravda network" seeding false information to shape the training data of future US models.
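    The hallucination-squatting risk above can be sketched as a pre-install gate. This is a minimal illustration, not a tool discussed on the episode; the allowlist contents and function names are invented for the example.

```python
# Minimal sketch of a pre-install gate against hallucination squatting:
# AI-generated code may import packages that don't actually exist, and an
# attacker can register those names on a public index with malicious code.
# One mitigation is to resolve dependencies only against a vetted allowlist.
# The allowlist below is hypothetical, not a real policy.

VETTED_PACKAGES = {"requests", "numpy", "cryptography"}  # hypothetical internal allowlist

def audit_dependencies(requested: list) -> dict:
    """Split requested package names into vetted and suspect buckets."""
    vetted = [p for p in requested if p.lower() in VETTED_PACKAGES]
    suspect = [p for p in requested if p.lower() not in VETTED_PACKAGES]
    return {"vetted": vetted, "suspect": suspect}

# A plausibly hallucinated name gets flagged for human review
# instead of being installed blindly.
report = audit_dependencies(["requests", "requests-proxy-auth"])
print(report["suspect"])  # ['requests-proxy-auth']
```

    The point is procedural, not technical: anything an LLM "recommends" installing gets treated as untrusted input until a human or a policy check clears it.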

    Memorable Quotes

    "It’s like we’re running with... not just scissors, we’re running with katanas. And the ground that we're on is constantly changing underneath our feet." — Keith Hoodlet

    "We never should have taught runes to sand and allowed it to think." — Keith Hoodlet

    "The biggest bombshell here is that we are the vulnerability. Because we're going to get manipulated by AI in some form or fashion." — Timothy De Block

    Resources Mentioned

    Books:

    Active Measures: The Secret History of Disinformation and Political Warfare by Thomas Rid.

    The Intelligent Investor by Benjamin Graham.

    Thinking, Fast and Slow by Daniel Kahneman.

    Churchill: A Life by Martin Gilbert.

    Videos & Articles:

    3Blue1Brown (YouTube): "But what is a neural network?" (Deep Learning Series).

    Keith’s Blog: "Life After the AI Apocalypse".

    About the Guest

    Keith Hoodlet is a Security Researcher at Trail of Bits and the creator of Securing.dev. A self-described "technologist who wants to move to the woods," Keith specializes in application security, threat modeling, and deciphering the complex intersection of code and human behavior.

    Website: securing.dev

    Mastodon: Keith on Infosec.Exchange
  • [RERELEASE] How to make time for a home lab

    2026-2-10 | 22 min.
    Chris (@cmaddalena) and I were asked on Twitter, "How do you make time for a home lab?" We answered there, but also decided it was a good topic for an EIS episode. Home labs are great for advancing a career or breaking into information security. Finding time for them requires making them a priority. It also helps to have a purpose: the time I spend with a home lab is often sporadic and coincides with research in a given area.
  • [RERELEASE] How to build a home lab

    2026-2-03 | 30 min.
    Chris (@cmaddy) and I have submitted to a couple of calls for training, at CircleCityCon and at Converge and BSides Detroit, this summer on the topic of building a home lab. I will also be speaking on this subject at ShowMeCon. Home labs are great for advancing a career or breaking into information security. The bar for getting started is really low: a gaming laptop with decent specifications works great. For those lacking hardware or funds, there are plenty of online resources to take advantage of.
  • How to Build an AI Governance Program with Walter Haydock

    2026-1-27 | 30 min.
    Summary:

    Timothy De Block sits down with Walter Haydock, founder of StackAware, to break down the complex world of AI Governance. Walter moves beyond the buzzwords to define AI governance as the management of risk related to non-deterministic systems—systems where the same input doesn't guarantee the same output.

    They explore why the biggest AI risk facing organizations today isn't necessarily a rogue chatbot or a sophisticated cyber attack, but rather HR systems (like video interviews and performance reviews) that are heavily regulated and often overlooked. Walter provides a practical, three-step roadmap for organizations to move from chaos to calculated risk-taking, emphasizing the need for quantitative risk measurement over vague "high/medium/low" assessments.

    Key Topics & Insights

    What is AI Governance?

    Walter defines it as measuring and managing the risks (security, reputation, contractual, regulatory) of non-deterministic systems.

    The 3 Buckets of AI Security:

    AI for Security: AI-powered SOCs, fraud detection.

    AI for Hacking: Automated pentesting, generating phishing emails.

    Security for AI: The governance piece—securing the models and data themselves.

    The "Hidden" HR Vulnerability:

    While security teams focus on hackers, the most urgent vulnerability is often in Human Resources. Tools for firing, hiring, and performance evaluation are highly regulated (e.g., NYC Local Law 144, Illinois AI Video Interview Act) yet frequently lack proper oversight.

    How to Build an AI Governance Program (The First 3 Steps):

    Establish a Policy: Define your risk appetite (what is okay vs. not okay).

    Inventory Systems (with Amnesty): Ask employees what they are using without fear of punishment to get an accurate picture.

    Risk Assessment: Assess the inventory against your policy. Use a tiered approach: prioritize regulated/cyber-physical systems first, then confidential data, then public data.
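    The tiered triage in step 3 can be sketched in a few lines. The field names, tier labels, and inventory entries below are invented for illustration; they are not from Walter's actual methodology.

```python
# Toy sketch of the tiered risk-assessment step: given an inventory of
# AI systems, assess regulated or cyber-physical systems first, then
# systems touching confidential data, then public-data systems.
# Tier labels and system names here are hypothetical.

TIER_ORDER = {"regulated": 0, "cyber_physical": 0, "confidential": 1, "public": 2}

def triage(inventory: list) -> list:
    """Order systems so the highest-risk tier is assessed first."""
    return sorted(inventory, key=lambda s: TIER_ORDER[s["data_class"]])

inventory = [
    {"name": "marketing-chatbot", "data_class": "public"},
    {"name": "video-interview-scorer", "data_class": "regulated"},
    {"name": "contract-summarizer", "data_class": "confidential"},
]
print([s["name"] for s in triage(inventory)])
# ['video-interview-scorer', 'contract-summarizer', 'marketing-chatbot']
```

    Note how the HR tool (the video-interview scorer) lands at the top of the queue, matching the episode's point that regulated HR systems are often the most urgent and most overlooked risk.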

    Quantitative Risk Management:

    Move away from "High/Medium/Low" charts. Walter advocates measuring risk in dollars of loss expectancy, using methodologies like FAIR (Factor Analysis of Information Risk) or the Hubbard-Seiersen method.
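    The dollars-of-loss framing reduces to a simple expected-value calculation. This is a deliberately minimal illustration of the idea, not FAIR's full taxonomy, and the numbers are invented.

```python
# Toy illustration of quantitative risk expressed in dollars, in the
# spirit of FAIR: annualized loss expectancy (ALE) is the expected
# event frequency per year times the expected loss per event.
# The inputs below are made up for the example.

def annualized_loss_expectancy(events_per_year: float, loss_per_event: float) -> float:
    """Expected dollars lost per year from a given risk scenario."""
    return events_per_year * loss_per_event

# e.g. a breach expected once every 4 years, costing ~$200,000 per event
ale = annualized_loss_expectancy(0.25, 200_000)
print(f"${ale:,.0f} per year")  # $50,000 per year
```

    A dollar figure like this can be compared directly against the cost of a control, which a "High/Medium/Low" label cannot.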

    Emerging Threats:

    Agentic AI: The next 3-5 years will be defined by "non-deterministic systems interacting with other non-deterministic systems," creating complex governance challenges.

    Regulation Roundup:

    Companies are largely unprepared for the wave of state-level AI laws coming online in places like Colorado (SB 205), California, Utah, and Texas.

    Resources Mentioned

    ISO 42001: The global standard for building AI management systems (similar to ISO 27001 for info sec).

    Cloud Security Alliance (CSA): Recommended for their AI Controls Matrix.

    Book: How to Measure Anything in Cybersecurity Risk by Douglas Hubbard and Richard Seiersen.

    StackAware Risk Register: A free template combining the Hubbard-Seiersen and FAIR methodologies.
  • Exploring Cribl: Sifting Gold from Data Noise for Cost and Security

    2026-1-20 | 33 min.
    Summary:

    Timothy De Block and Ed Bailey, a former customer and current Field CISO at Cribl, discuss how the company is tackling the twin problems of data complexity and AI integration. Ed explains that Cribl's core mission, derived from the French word "crible" (a sieve, to screen or sift), is to provide data flexibility and cost management by routing the most valuable data to expensive tools like SIEMs and everything else to cheap object storage. The conversation covers the 40x productivity gains from their "human in the loop" AI, Cribl Co-Pilot, and their expansion into "agentic AI" to fight back against sophisticated threats.

    Cribl's Core Value Proposition

    Data Flexibility & Cost Management: Cribl's primary value is giving customers the flexibility to route data from "anywhere to anywhere". This allows organizations to manage costs by classifying data:

    Valuable Data: Sent to high-value, high-cost SIEM platforms (Splunk, Elastic).

    Retention Data: Sent to inexpensive object storage (3 to 5 cents per gig).

    Matching Cost and Value: This approach ensures the most valuable data gets the premium analysis while retaining all data necessary for compliance, addressing the CISO's fear of missing a critical event.

    SIEM Migration and Onboarding: Cribl mitigates the risk of disruption during SIEM migration, a major concern for CISOs, by acting as an abstraction layer. This can dramatically accelerate migration: one large insurance company migrated to a next-gen SIEM in five months, a process their CISO projected would otherwise have taken two years.

    Customer Success Story (UBA): Ed shared a story where his team used Cribl Stream to quickly integrate an expensive User and Entity Behavior Analytics (UBA) tool with their SIEM in two hours for a proof-of-concept. This saved 9-10 months and the deployment of 100,000 agents, providing 100% value from the UBA tool in just two weeks.
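    The route-by-value idea above can be expressed as a few lines of pseudocode-style Python. This is a schematic of the concept only, not Cribl's actual pipeline configuration; the source names and classification rule are invented.

```python
# Schematic of value-based routing: classify each event and send
# high-value security data to the SIEM, while everything else lands
# in low-cost object storage for compliance retention.
# HIGH_VALUE_SOURCES is a hypothetical rule, not a real Cribl config.

HIGH_VALUE_SOURCES = {"auth", "edr", "firewall"}

def route(event: dict) -> str:
    """Return the destination for a single event."""
    if event.get("source") in HIGH_VALUE_SOURCES:
        return "siem"          # premium analysis (e.g. Splunk, Elastic)
    return "object_storage"    # cheap retention (cents per GB)

events = [{"source": "auth"}, {"source": "debug_logs"}]
print([route(e) for e in events])  # ['siem', 'object_storage']
```

    The economics follow directly: only the events worth premium analysis pay SIEM ingest rates, while the long tail is still retained if an investigation ever needs it.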

    AI Strategy and Productivity Gains

    "Human in the Loop AI": Cribl's initial AI focus is on Co-Pilot, which helps people use the tools better. This approach prioritizes accuracy and addresses the fact that enterprise tooling is often difficult to use.

    40x Productivity Boost: Co-Pilot Editor automates the process of mapping data into complex, esoteric data schemas (for tools like Splunk and Elastic). This reduced the time to create a schema for a custom data type from approximately a week to about one hour, representing a massive gain in workflow productivity.

    Roadmap Shift to Agentic AI: Following CriblCon, the roadmap is shifting toward "agentic AI" that operates in the background, focused on building trust through carefully controlled and validated value.

    AI in Search: The Cribl Search product has built-in AI that suggests better ways for users to write searches and utilize features, addressing the fact that many organizations fail to get full value from their searching tools because users don't know how to use them efficiently.

    Challenges and Business Model

    Data Classification Pain Point: The biggest challenge during deployment is that many users "have never really looked at their data". This leads to time spent classifying data and defining the "why" (what is the end goal) before working on the "how".

    Vendor Pushback and MSSP Engagement: Splunk previously sued Cribl, though the resulting damages were only one dollar, showing that some incumbent vendors push back on cost management. Cribl is, however, highly engaged with MSSP/MDR providers because its flexibility dramatically lowers their integration costs and time, letting them get paid faster and offer a wider suite of services.

    Pricing Models: Cribl offers two main models:

    Self-Managed (Stream & Edge): Uses a topline license (based on capacity/terabytes purchased).

    Cloud (Lake & Search): Uses a consumption model (based on credits/what is actually used).

    Empowering the Customer: Cribl's mission is to empower customers by opening choices and enabling their goals, contrasting with other vendors where it's "easy to get in, the data never gets out".

About Exploring Information Security

The Exploring Information Security podcast interviews a different professional each week, exploring topics, ideas, and disciplines within information security. Prepare to learn, explore, and grow your security mindset.

Podcast website
