AI is being deployed at scale across enterprise, government, and critical infrastructure. Every LLM application has an attack surface. Every ML pipeline has trust assumptions that can be exploited. Every agentic AI system has an autonomy boundary that can be manipulated. The security practitioner who cannot assess these systems is increasingly irrelevant in 2026. XAIHP is built for offensive security professionals who want to extend their capability into the most underserved and fastest-growing attack surface in the industry.
Across eight instructor-led days, participants build AI offensive security capability from first principles: AI and ML system architecture for security professionals, prompt injection and jailbreaking techniques against production LLMs, indirect prompt injection in agentic AI systems, data poisoning and training data attacks, model extraction and membership inference, adversarial examples and evasion attacks, LLM application security testing methodology, AI infrastructure security assessment, and AI red team report production aligned to MITRE ATLAS and OWASP LLM Top 10.
On Day 8, participants conduct a supervised AI red team exercise against a deployed LLM application with RAG pipeline and agentic capabilities. They attempt prompt injection, indirect injection through documents, data exfiltration from the vector database, and privilege escalation through the agentic system. A senior practitioner observes methodology, technique selection, and report quality. XAIHP certificate and Practitioner Assessment Report issued together. Aligned with MITRE ATLAS, OWASP LLM Top 10, OWASP ML Security Top 10, NIST AI RMF, and EU AI Act security testing requirements.