3-Day Instructor-Led Programme
An advanced programme for experienced red team and security professionals. Participants learn to use AI-powered offensive tooling in authorised red team exercises, simulate deepfake attacks, build AI threat models, and translate AI red team findings into defensive improvements. By the end, you will be able to conduct AI-augmented red team engagements, generate controlled deepfake attacks in authorised exercises, and produce AI threat models covering synthetic attack vectors.
Duration
3 Days
Price
$4,297
Experimental malware families can now modify their behaviour mid-attack using language-model-based components. AI-assisted code analysis has surfaced hundreds of severe vulnerabilities in short timeframes. Autonomous reconnaissance operates faster than any human red team. Senior red team professionals who do not understand and apply these capabilities are testing defences against a threat model that is already obsolete. This three-day advanced programme closes that gap.
Over three mentor-led days, participants examine how attackers deploy AI-powered offensive tooling across the full attack lifecycle. They conduct authorised AI-assisted red team exercises spanning spearphishing, vishing, synthetic identity bypass, and automated vulnerability discovery; produce AI threat models covering synthetic attack surfaces; and develop the skills to translate AI red team findings into concrete defensive capability improvements.
The programme concludes with a complete AI red team engagement report capstone: participants conduct, document, and present a full AI-augmented engagement covering multiple attack vectors in a controlled lab environment. This course is aligned with MITRE ATLAS, UK Computer Misuse Act requirements for authorised testing, GDPR constraints on synthetic media generation in testing, and AI red team industry standards.
An AI-powered spearphishing campaign against simulated targets, a voice-clone vishing exercise against a simulated helpdesk, a synthetic identity bypass attempt, an automated vulnerability discovery demonstration, and a full AI red team engagement report capstone.
Senior practitioner instruction on authorised AI offensive tooling, legal boundary management, threat model design, and instructor critique of engagement reports and defensive translation quality.
AI-augmented red team methodology, authorised deepfake attack simulation, AI threat model production, prompt injection red teaming, Computer Misuse Act boundary management, and translation of offensive AI findings into defensive improvements.
Use AI-powered offensive tools to test organisational defences against synthetic threats in authorised engagements.
Generate controlled deepfake attacks in authorised red team exercises against simulated targets.
Design AI red team scenarios covering all synthetic attack vectors for board-level wargaming.
Identify AI-exploitable weaknesses in authentication, verification, and communication workflows.
Produce a complete AI threat model covering synthetic attack vectors for your target environment.
Brief leadership on the offensive AI threat landscape from a hands-on practitioner perspective.
Translate AI red team findings into specific defensive capability improvements with measurable impact.
Minimum three years of professional penetration testing or red team experience.
Solid understanding of social engineering, phishing, vulnerability assessment, and offensive security methodology.
Familiarity with AI and machine learning concepts, preferably with some exposure to AI offensive tooling.
Step-by-step learning journey from core concepts to professional practice
Master these in-demand skills through hands-on practice
Choose the learning format that works best for you and your team
Instructor-Led Training
Join live instructor-led sessions from anywhere. Interactive, engaging, and flexible.
Price per person
Group enrolments, early planning options, and custom packages are available on request. All prices are exclusive of VAT where applicable.
Not everyone learns best in a group. If you want focused guidance, faster clarity, and confidence you can use on the job, our 1-to-1 Fast-Track Training gives you private, mentor-led support tailored to your experience and goals.
"Many learners choose 1-to-1 when they want understanding, not memorisation."
Everything you need to know about certification
You will receive an Xcademia certificate of completion, awarded on the basis of participation and successful performance in the labs and scenario simulations.
Everything you need to know about this course
Senior red teamers, penetration testers, threat intelligence leads, security architects, and CISO advisory teams with significant offensive security experience who need to understand and apply AI-powered offensive techniques in authorised contexts.
Take the next step in your professional development