Shai Alon, Director of AI Innovation at Orca Security, throws cold water on the AI hype train and exposes the terrifying vulnerabilities lurking beneath the surface. Buckle up for a wild ride as Shai reveals how AI agents can be manipulated 😈, data can be poisoned ☠️, and systems can be completely compromised. Forget the theoretical - this talk dives into live demos 💻, real-world attacks, and the unsolved challenges facing AI security today.
You'll witness firsthand the dangers of LLM01: Prompt Injection, where carefully crafted inputs can trick AI into making disastrous decisions. Shai also unveils the sinister world of LLM03: Training Data Poisoning, demonstrating how malicious data can infect the heart of AI models, leading to unpredictable and harmful outcomes.
But that's not all. This talk takes you deep into the world of agentic AI apps, exposing the risks of LLM02: Insecure Output Handling, where AI's ability to generate and execute code opens up a Pandora's box of vulnerabilities. Discover how seemingly innocent AI agents can be hijacked to bypass security measures, leak sensitive data, and even wreak havoc within your cloud infrastructure.
Don't miss this eye-opening presentation that explores the OWASP LLM Top 10 and the concept of jailbreaking AI models. Shai Alon doesn't just highlight the problems - he provides insights into mitigation strategies and sparks a critical conversation about the future of AI security.
---
📌 Resources:
Find the code from the demos on GitHub: github.com/shaialon/ai-securi...
Connect with Shai: / shaialon
---
✅ Show Notes:
0:00 Intro: Shai Alon takes the stage to discuss the hidden dangers of AI.
2:04 The AI Revolution: A look at the rapid adoption of AI across industries and the exciting possibilities it offers.
5:38 The Security Nightmare: Exposing the vulnerabilities of AI that most people ignore.
8:13 Demo 1: Headset Support Center Refund Bot - Witnessing the dangers of LLM01: Prompt Injection in action.
14:00 LLM03: Training Data Poisoning - How malicious data can poison AI models and lead to unexpected outcomes.
17:08 Demo 2: AI Data Scientist - Exploring the power and peril of agentic AI apps.
19:07 LLM02: Insecure Output Handling - AI's ability to write and execute code creates a massive attack surface.
23:27 Authorization Bypass: AI agents can be tricked into granting access to sensitive data.
25:14 SQL Injection and Remote Code Execution: Witnessing the devastating impact of exploiting AI code generation.
29:49 Mitigation Strategies: Practical steps to protect your AI applications from these emerging threats.
32:51 OWASP LLM Top 10: A deep dive into the most critical vulnerabilities facing AI applications.
34:53 AI Tooling, education and engineering
38:22 The Role of AI Firewalls: Evaluating the strengths and limitations of perimeter defenses.
39:49 Jailbreaking AI Models: Bypass AI model guardrails & examining the shared responsibility model between AI providers and developers.
41:54 The Unprepared Cybersecurity Industry: A call to action for developers and security professionals.
42:29 Q&A: Shai Alon answers questions from the audience, addressing practical concerns about the combination of human expertise, AI regulation
---
🙏 Special Thanks:
@Google for Startups for providing the fantastic venue.
#AICybersecurity #AIsecurity #LLM #PromptInjection #DataPoisoning
Негізгі бет Ойын-сауық 🤯 AI Security EXPOSED! Hidden Risks of AI Agents: 💉 Prompt Injection ☣️ Data Poisoning
Пікірлер: 8