
Exploring Information Security

Timothy De Block

Available episodes

5 of 122 results
  • Exploring AI, APIs, and the Social Engineering of LLMs
    Summary: Timothy De Block is joined by Keith Hoodlet, Engineering Director at Trail of Bits, for a fascinating, in-depth look at AI red teaming and the security challenges posed by Large Language Models (LLMs). They discuss how prompt injection is effectively a new form of social engineering against machines, exploiting the inherent human biases and logical flaws in the training data. Keith breaks down the mechanics of LLM inference, the rise of middleware for AI security, and cutting-edge attacks using everything from emojis and bad grammar to weaponized image scaling. The episode stresses that the fundamental solutions (logging, monitoring, and robust security design) are timeless principles being applied to a terrifyingly fast-moving frontier.
    Key Takeaways:
    The Prompt Injection Threat
      • Social engineering the AI: Prompt injection exploits the LLM's vast training data, which includes all of human history in digital format, movies and fiction included. Attackers use techniques that mirror social engineering to trick the model into doing something it's not supposed to, such as a customer service chatbot issuing an unauthorized refund.
      • Business logic flaws: Successful prompt injections are often tied to business logic flaws or a lack of proper checks and guardrails, similar to vulnerabilities seen in traditional applications and APIs.
      • Novel attack vectors: Attackers are finding creative ways to bypass guardrails:
        • Image scaling: Trail of Bits discovered how to weaponize image scaling to hide prompt injections within images that appear benign to the user but surface as visible text to the model when downscaled for inference.
        • Invisible text: Attacks can use white text, zero-width characters (which don't show up when displayed or highlighted), or Unicode character smuggling in emails or prompts to covertly inject instructions.
        • Syntax and emojis: Research has shown that bad grammar, run-on sentences, or even a simple sequence of emojis can successfully trigger prompt injections or jailbreaks.
    Defense and Design
      • LLM security is API security: Since LLMs rely on APIs for their "tool access" and to perform actions (like sending an email or issuing a refund), security comes down to the same principles used for APIs: proper authorization, access control, and eliminating misconfiguration.
      • The middleware layer: Some companies use middleware that sits between their application and the frontier LLMs (like GPT or Claude) to handle system prompting, guardrailing, and prompt filtering, effectively acting as a Web Application Firewall (WAF) for LLM API calls.
      • Security design patterns: To defend against prompt injection, security design patterns are key:
        • Action-Selector pattern: Instead of a text field, users click on pre-defined buttons that limit the model to a very specific set of safe actions.
        • Code-Then-Execute pattern (CaMeL): A first LLM writes code (e.g., Pythonic code) based on the natural-language prompt, and a second, quarantined LLM executes that safer code.
        • Map-Reduce pattern: The prompt is broken into smaller chunks, processed, and then passed to another model, making it harder for a prompt injection to persist across the process.
      • Timeless hygiene: The most critical defenses are logging, monitoring, and alerting. You must log prompts and outputs and monitor for abnormal behavior, such as a user suddenly querying a database thousands of times a minute or asking a chatbot to write Python code.
    Resources & Links Mentioned:
      • Trail of Bits: blog.trailofbits.com and trailofbits.com
      • Weaponizing image scaling against production AI systems
      • Call Me A Jerk: Persuading AI to Comply with Objectionable Requests
      • Design Patterns for Securing LLM Agents against Prompt Injections (paper)
      • CaMeL prompt injection defense
      • Defending LLM applications against Unicode character smuggling
      • Logit-Gap Steering: Efficient Short-Suffix Jailbreaks for Aligned Large Language Models
      • LLM explainer: 3Blue1Brown has a great short video explaining how Large Language Models work
      • Lakera Gandalf: a game for learning how to use prompt injection against AI
      • Keith Hoodlet's personal sites: securing.dev and thought.dev
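The Action-Selector pattern from the takeaways can be sketched in a few lines: the application only ever dispatches actions from a fixed allowlist, so an injected instruction has nothing to grab onto. This is an illustrative sketch, not code from the episode; the action names and the refund scenario are invented.

```python
# Minimal sketch of the Action-Selector pattern: the model (or user) can
# only select from pre-defined safe actions; anything else is refused.
# SAFE_ACTIONS and dispatch() are hypothetical names for illustration.

SAFE_ACTIONS = {
    "check_order_status": lambda order_id: f"Order {order_id}: shipped",
    "request_human_agent": lambda order_id: "Connecting you to support",
}

def dispatch(action_name, order_id):
    """Execute a selected action only if it is on the allowlist."""
    handler = SAFE_ACTIONS.get(action_name)
    if handler is None:
        # A prompt-injected "issue_refund" (or anything else) never runs.
        return "Refused: action not permitted"
    return handler(order_id)
```

The key design choice is that free-form model output never reaches an execution path; the model's role is reduced to picking a key in a dictionary the developer controls.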
    --------  
    52:13
  • How to Prepare a Presentation for a Cybersecurity Conference
    Summary: Join Timothy De Block for a special, behind-the-scenes episode where he rehearses his presentation, "The Hitchhiker's Guide to Threat Modeling." This episode serves as a unique guide for aspiring and experienced speakers, offering a candid look at the entire preparation process, from timing and slide design to audience engagement and controlled chaos. In addition to public speaking tips, Timothy provides a concise and practical overview of threat modeling, using real-world examples to illustrate its value.
    Key Presentation Tips & Tricks:
      • Practice for time: Rehearse the presentation multiple times to ensure the pacing is right. Timothy suggests aiming to run a little longer than the allotted time during practice, as adrenaline and nerves on the day of the talk will often make a person speak more quickly.
      • Use visuals strategically: Pacing and hand gestures can improve the flow of a talk. Be careful with distracting visuals, such as GIFs; don't leave them up too long while you are speaking.
      • Stand out as a speaker: Be willing to do shorter talks, such as 30-minute sessions, as many speakers prefer hour-long slots. He notes that a clever or intriguing title for your presentation is important, and humor or pop-culture references can help.
    --------  
    1:01:09
  • Exploring the Rogue AI Agent Threat with Sam Chehab
    Summary: In a unique live recording, Timothy De Block is joined by Sam Chehab from Postman to tackle the intersection of AI and API security. The conversation goes beyond the hype of AI-created malware to focus on a more subtle yet pervasive threat: "rogue AI agents." The speakers define these as sanctioned AI tools that, when misconfigured or given improper permissions, can cause significant havoc by misbehaving and exposing sensitive data. The episode emphasizes that this risk is not new, but an exacerbation of classic hygiene problems.
    Key Takeaways:
      • Defining "rogue AI agents": Sam Chehab defines a rogue AI agent as a sanctioned AI tool that misbehaves due to misconfiguration, often exposing data it shouldn't have access to. He likens it to an enterprise search tool in the early 2000s that crawled an intranet and surfaced things it wasn't supposed to.
      • The AI-API connection: An AI agent comprises six components, and the "tool" component is where it interacts with APIs. The speakers note that an agent's APIs are its "arms and legs," and are often where it gets into trouble.
      • The importance of security hygiene: The core of the solution is to go back to basics with good hygiene: build APIs with an OpenAPI spec, enforce schemas, and use single-purpose logins for integrations to improve traceability.
      • The rise of the "citizen developer": The conversation highlights a new security vector: non-developers, or "citizen developers," in departments like HR and finance building their own agents using enterprise tools. These individuals often lack security fundamentals, and their workflows are a ripe area for risk.
      • AI's role in development: AI can augment a developer's capabilities, but a human is still needed in the process. The Veracode report notes that AI-generated code is secure only about 45% of the time, roughly on par with human-written code. The best approach is to use AI to fix specific lines of code pre-commit, rather than having it write entire applications.
    Resources & Links Mentioned:
      • Postman State of the API Report: discusses API trends and security; will be released on October 8th, with a follow-up episode teased to dive into its findings.
      • Veracode: the 2025 GenAI Code Security Report, cited in the discussion of AI-generated code.
      • GitGuardian: the State of Secrets Sprawl report.
      • Cloudflare: mentioned as a service for API Shield and monitoring API traffic.
      • News sites: Sam Chehab recommends Security Affairs, The Hacker News, Cybernews, and Information Security Magazine for staying up to date.
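The "enforce schemas" hygiene point above can be made concrete as a pre-flight check on an agent's tool-call arguments before any API is invoked. This is a minimal sketch under invented assumptions: the schema format, the refund tool, and the function names are illustrative, not Postman's or any real product's API.

```python
# Hedged sketch: validate an AI agent's tool-call arguments against a
# declared parameter schema so a misconfigured or manipulated agent
# cannot call a tool with missing, extra, or mistyped arguments.
# REFUND_TOOL_SCHEMA and validate_tool_call are hypothetical names.

REFUND_TOOL_SCHEMA = {
    "amount": float,    # expected type for each declared parameter
    "order_id": str,
}

def validate_tool_call(schema, args):
    """Accept only calls whose arguments exactly match the schema."""
    if set(args) != set(schema):
        return False            # missing or unexpected parameters
    return all(isinstance(args[k], t) for k, t in schema.items())
```

In practice this kind of check is generated from an OpenAPI spec rather than hand-written, but the principle is the same: the schema, not the agent, decides what a valid call looks like.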
    --------  
    39:01
  • A conversation with Kyle Andrus on Info Stealers and Supply Chain Attacks
    Summary: In this episode, Timothy De Block sits down with guest Kyle Andrus to dissect the ever-evolving landscape of cyber threats, with a specific focus on info stealers. The conversation covers everything from personal work-life balance and career burnout to the increasing role of AI in security. They explore how info stealers operate as a commodity in the cybercriminal world, the continuous cat-and-mouse game with attackers, and the challenges businesses face in implementing effective cybersecurity measures.
    Key Takeaways:
      • The AI revolution in security: AI is improving job efficiency and security, particularly in data analytics, behavioral tracking, and automating low-level tasks like SOC operations and penetration testing, which frees security professionals to focus on more complex work. The speakers also highlight the potential for AI misuse, such as insider threat detection, and the "surveillance state" implications of tracking employee behavior.
      • The info stealer threat: Info stealers are a prevalent threat, often appearing as "ClickFix" or fake-update campaigns that trick users into granting initial access or providing credentials. The data they collect, including credentials and session tokens, is sold on the dark web for as little as two to ten dollars, fueling further attacks by cybercriminals who buy access rather than perform initial reconnaissance themselves.
      • The human and business challenge: As security controls improve, attackers are increasingly relying on human interaction to compromise systems; cybercriminals, "like water, follow the path of least resistance." The episode also highlights the significant challenge small and medium-sized businesses face in balancing risk mitigation with operational costs.
      • Software supply chain attacks: The discussion touches on supply chain attacks, like the npm package breach and the Salesloft Drift breach, which targeted third parties and smaller companies with less mature security controls. They note the challenges of using Software Bills of Materials (SBOMs) to assess the trustworthiness of open-source components.
      • Practical cybersecurity advice: The hosts discuss the need to rethink cybersecurity advice for non-tech-savvy individuals, as much of the current guidance is impractical and burdensome. While Timothy De Block sees the benefit of browser-based password managers when MFA is enabled, Kyle Andrus generally advises against storing passwords in browsers and recommends dedicated, more secure password managers.
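The SBOM point above is less abstract than it sounds: an SBOM such as a CycloneDX JSON document is just structured metadata you can enumerate before deciding whether to trust a dependency. A minimal sketch follows; the component data is made up, though the `bomFormat` and `components` fields follow the CycloneDX layout.

```python
# Illustrative sketch: enumerate third-party components from a
# CycloneDX-style SBOM fragment. The package data here is invented.
import json

sbom_json = """
{
  "bomFormat": "CycloneDX",
  "components": [
    {"name": "left-pad", "version": "1.3.0", "type": "library"},
    {"name": "lodash", "version": "4.17.21", "type": "library"}
  ]
}
"""

def list_components(raw):
    """Return (name, version) pairs for every component in the SBOM."""
    bom = json.loads(raw)
    return [(c["name"], c["version"]) for c in bom.get("components", [])]
```

Enumerating components is the easy part; as the episode notes, the hard part is judging the trustworthiness of what the list reveals.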
    --------  
    41:29
  • The Winding Path to CISO: Rob Fuller's Leadership Journey
    Summary: In this episode, Timothy De Block sits down with Rob Fuller, Vice President of Cybersecurity, for a candid discussion about Rob's journey into cybersecurity leadership. Rob shares his unique path from the Marine Corps to a Fortune 10 company, revealing the struggles and lessons learned along the way. The conversation delves into the critical role of visibility, the importance of continuous learning, and invaluable advice for those aspiring to leadership roles in the security industry.
    Key Takeaways:
      • From "noob" to VP: Rob shares the humorous origin of his online handle, "Mubix," which came from a mistyped name in an MMORPG. He recounts his initial struggle to transition into leadership, including turning down a director position at General Electric over a perceived lack of experience, until his wife reminded him of his past leadership roles in the Marine Corps and community groups.
      • Leadership is a different career path: Moving into a leadership role requires a complete mindset shift and is a distinct career path from a technical one. Rob learned a crucial lesson about career advancement: while diligence and relationships are important, visibility is paramount. He also notes the importance of a manager understanding they are part of two teams: their direct reports and their peer group of fellow leaders.
      • The value of continuous learning: Rob recommends Surrounded by Idiots by Thomas Erikson for understanding different communication styles and the importance of adapting in management. He is also actively pursuing advanced degrees and certifications like CISSP and NACD to meet the requirements for director and CISO roles in large companies.
      • Aspiring to CISO: Rob's ultimate goal is to become a CISO, as he believes it's the only role that allows for the implementation of comprehensive, widespread cybersecurity solutions.
      • Advice for career starters: For those looking to enter cybersecurity, Rob and Timothy advise being open to any IT job, including the help desk, as an entry point. They also stress the importance of actively participating in local groups and conferences like hacker meetups and B-Sides, as this networking and volunteering can significantly increase your chances of getting hired.
      • Blue team experience is gold: Both agree that blue team (security operations) experience is highly valuable for aspiring pentesters, as it teaches crucial skills like scripting, queries, networking, and evasion techniques that make them more effective in red team roles.
    Resources & Links Mentioned:
      • The Five Dysfunctions of a Team by Patrick Lencioni
      • Surrounded by Idiots by Thomas Erikson
      • Fredericksburg Hackers Meetup
      • CISSP certification
      • NACD (National Association of Corporate Directors) certification
    --------  
    44:30


About Exploring Information Security

The Exploring Information Security podcast interviews a different professional each week, exploring topics, ideas, and disciplines within information security. Prepare to learn, explore, and grow your security mindset.
Podcast website
