🛡️
AI Security: Prompt Injection Defense & Hardening
An intensive technical training for developers, security engineers, and architects deploying LLM-powered applications in production. You will learn to map the attack surface of your AI systems, implement three defense layers (input validation, output filtering, tool sandboxing), analyze 5 documented real-world security incidents, and assemble a production security pipeline aligned with the OWASP Top 10 for LLM Applications. Every module includes production-ready Python code showing vulnerable examples alongside their hardened versions.
Duration
2 days
Level
Advanced
Price
9.99 EUR/month (all courses included)
Max group
10 participants
What you will learn
+ Classify any LLM attack using the OWASP Top 10 for LLM Applications (2025) taxonomy
+ Build a STRIDE threat model for an existing LLM application
+ Implement three defense layers: input validation, output filtering, tool sandboxing
+ Deploy guardrails using the open-source libraries llm-guard and promptfoo
+ Reproduce and analyze the 5 most exploited production injection vectors
+ Assemble a complete security pipeline with logging, anomaly detection, and incident response
+ Conduct a red-team audit with Garak (an open-source LLM vulnerability scanner)
Course program
Module 1: Attack Taxonomy & Threat Modeling
3h00
- OWASP LLM Top 10 (2025): 4 injection categories
- Reproducing 3 real attacks in a safe sandbox
- STRIDE adapted for LLM applications
- Workshop: threat model your own application
Module 2: Defense Layer 1: Input Validation
3h00
- Regex sanitization and blocklists
- Semantic LLM classifier for intent detection
- Structural prompt/data separation (XML tagging)
- Workshop: validation pipeline for a RAG chatbot
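The layered approach covered in this module can be sketched in a few lines of standard-library Python. This is an illustrative outline, not the course's actual code: the blocklist patterns, function names, and tag names are all placeholder examples, and the module explains why pattern matching alone is insufficient and must be combined with structural separation.

```python
import re

# Illustrative blocklist of common injection phrases (real deployments
# layer this with a semantic classifier, as covered in the module).
BLOCKLIST_RE = re.compile(
    r"ignore (all )?previous instructions"
    r"|you are now"
    r"|reveal your system prompt",
    re.IGNORECASE,
)

def sanitize(user_input: str) -> str:
    """Reject input matching known injection patterns."""
    if BLOCKLIST_RE.search(user_input):
        raise ValueError("potential prompt injection detected")
    return user_input

def wrap_untrusted(doc: str) -> str:
    """Structurally separate untrusted data from instructions via XML
    tagging, escaping angle brackets so the document cannot close the
    tag and smuggle its own instructions into the prompt."""
    escaped = doc.replace("<", "&lt;").replace(">", "&gt;")
    return f"<untrusted_data>\n{escaped}\n</untrusted_data>"
```

The escaping step matters: without it, a retrieved document containing a literal `</untrusted_data>` tag could break out of the data section and be interpreted as instructions.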
Module 3: Defense Layer 2: Output Filtering
3h00
- Prompt leakage detection in model outputs
- Guardrails with llm-guard and promptfoo
- JSON schema validation for structured outputs
- Workshop: guardrails integration on an existing LLM
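Two of the techniques above can be sketched with nothing but the standard library (the canary token and function names are illustrative assumptions, not the course's actual code): a canary string embedded in the system prompt flags leakage if it ever appears in an output, and structured outputs are parsed and checked against required keys before reaching downstream code.

```python
import json

# Illustrative canary: a unique token planted in the system prompt.
# If the model echoes it, (part of) the prompt has leaked.
SYSTEM_PROMPT_CANARY = "CANARY-7f3a"

def filter_output(raw: str) -> str:
    """Block model outputs that echo the system-prompt canary."""
    if SYSTEM_PROMPT_CANARY in raw:
        raise ValueError("prompt leakage detected in model output")
    return raw

def validate_structured(raw: str, required: set[str]) -> dict:
    """Minimal structural check for JSON outputs: parse, then verify
    the required keys are present before handing data to callers."""
    data = json.loads(raw)
    if not isinstance(data, dict) or not required.issubset(data):
        raise ValueError("output does not match expected schema")
    return data
```

Production pipelines would typically replace the manual key check with full JSON Schema validation and add the llm-guard scanners covered in the workshop; the sketch only shows the control points.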
Module 4: Production Security Pipeline
2h30
- Assembling all 3 defense layers
- Secure logging and anomaly detection
- Rate limiting and incident response
- Red-teaming with Garak: automate vulnerability testing
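Of the pipeline components above, rate limiting is the most self-contained to illustrate. Below is a minimal per-user token-bucket sketch in standard-library Python (class and parameter names are illustrative, not the course's actual code); each user regains `rate` requests per second up to a burst of `capacity`, which throttles the rapid-fire probing typical of automated injection attacks.

```python
import time
from collections import defaultdict

class TokenBucket:
    """Per-user token bucket: `rate` tokens replenished per second,
    bursting up to `capacity`. One token is spent per request."""

    def __init__(self, rate: float = 1.0, capacity: int = 5):
        self.rate = rate
        self.capacity = capacity
        # Every unseen user starts with a full bucket.
        self.tokens = defaultdict(lambda: float(capacity))
        self.last = defaultdict(time.monotonic)

    def allow(self, user: str) -> bool:
        """Return True if this request is within the user's budget."""
        now = time.monotonic()
        elapsed = now - self.last[user]
        self.last[user] = now
        # Replenish proportionally to elapsed time, capped at capacity.
        self.tokens[user] = min(
            self.capacity, self.tokens[user] + elapsed * self.rate
        )
        if self.tokens[user] >= 1.0:
            self.tokens[user] -= 1.0
            return True
        return False
```

In the full pipeline, a rejected request would also be logged for the anomaly-detection stage, since a user repeatedly hitting the limit is itself a signal worth alerting on.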
Module 5: Real-World Case Studies: 5 Production Incidents
2h30
- Bing Chat 'Sydney': system prompt leakage (2023)
- Air Canada: chatbot policy manipulation with legal ruling (2024)
- ChatGPT Plugin: data exfiltration via Markdown rendering (2023)
- AI code review agents: injection via PR descriptions (2024)
- RAG supply chain injection via poisoned documents (2025)
Ready to get started?
9.99 EUR/month — All courses included, cancel anytime