3-Day Instructor-Led Programme
Understand LLM attack techniques including prompt injection, jailbreaking and model risks in real-world systems. This mentor-led programme uses practical scenarios to test, analyse and strengthen AI system security.
Duration
3 Days
Price
$1,998
Large Language Models (LLMs) introduce new security risks that organisations must understand and manage. Attack techniques such as prompt injection and jailbreaking can lead to data leakage, policy bypass, and unintended system behaviour.
This programme focuses on understanding common LLM attack patterns including direct and indirect prompt injection, jailbreak techniques, model extraction risks, and model poisoning concepts. Participants learn how these attacks occur and how to design controls to prevent and detect them.
Delivered as a mentor-led programme using practical scenarios, this course provides hands-on experience in analysing attack techniques within authorised testing environments and implementing safeguards to strengthen AI system security.
Authorised LLM security testing labs, attack simulations, and defence design exercises.
Mentors guide threat modelling, attack analysis, and secure AI system design practices.
Develop AI security testing and defence capability for LLM systems.
Analyse LLM attack patterns and risks
Evaluate prompt injection and jailbreak techniques
Implement controls to mitigate AI security threats
Design secure LLM-based systems
Communicate risks and mitigation strategies
Evaluate system resilience against attacks
Basic understanding of AI or LLM concepts.
Familiarity with cybersecurity fundamentals.
Interest in AI security and risk management.
Step-by-step learning journey from basics to professional practice
Master these in-demand skills through hands-on practice
A clear view of the roles this programme supports, what typically comes next, and where learners progress over time
Choose the learning format that works best for you and your team
Instructor-Led Training
Join live instructor-led sessions from anywhere. Interactive, engaging, and flexible.
Price per person
Group enrolments and early planning options available.
All prices are exclusive of VAT where applicable. Group enrolments and custom packages available on request.
Not everyone learns best in a group. If you want focused guidance, faster clarity, and confidence you can use on the job, our 1-to-1 Fast-Track Training gives you private, mentor-led support tailored to your experience and goals.
"Many learners choose 1-to-1 when they want understanding, not memorisation."
Everything you need to know about the certification exams
You will receive an Xcademia certificate of completion based on participation and successful completion of labs and scenario simulations.
Everything you need to know about this course
Security professionals, AI engineers, and anyone responsible for AI system risk and protection.
Take the next step in your professional development