The EU AI Act 2024 is the most significant regulatory development in artificial intelligence to date. High-risk AI systems require conformity assessments, risk management systems, and ongoing monitoring. General-purpose AI models carry transparency obligations. Prohibited AI practices carry fines of up to EUR 35 million or 7% of global annual turnover, whichever is higher. Few organisations are ready, and few training providers offer a practical instructor-led programme that teaches professionals how to govern AI systems against this regulatory reality. XAIG fills that gap.
Across five instructor-led days, participants build AI governance capability across the complete governance lifecycle: EU AI Act risk tier classification and legal obligations, ISO 42001 AI Management System implementation across all clauses, NIST AI RMF application across all four functions, AI risk assessment methodology, AI supply chain and third-party governance, AI transparency documentation, human oversight architecture, AI incident response, and regulatory audit preparation. Every session uses real AI system case studies from healthcare, financial services, hiring, and law enforcement contexts.
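The risk-tier classification exercise above can be sketched as a simple decision rule. The tier names follow the EU AI Act, but the keyword mapping below is purely illustrative course material, not a legal determination; real classification requires case-by-case legal analysis against Annex III and the prohibited-practices list.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"      # transparency obligations apply
    MINIMAL = "minimal"

# Hypothetical keyword maps, loosely inspired by the Act's categories.
# A production system would never classify by keyword matching.
PROHIBITED_PRACTICES = {"social scoring", "subliminal manipulation"}
HIGH_RISK_DOMAINS = {"hiring", "credit scoring", "medical diagnosis", "law enforcement"}
TRANSPARENCY_DOMAINS = {"chatbot", "deepfake"}

def classify(use_case: str) -> RiskTier:
    """Assign a use-case description to an EU AI Act risk tier (illustrative only)."""
    uc = use_case.lower()
    if any(p in uc for p in PROHIBITED_PRACTICES):
        return RiskTier.PROHIBITED
    if any(d in uc for d in HIGH_RISK_DOMAINS):
        return RiskTier.HIGH
    if any(d in uc for d in TRANSPARENCY_DOMAINS):
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```

In the classroom version of this exercise, participants replace the keyword sets with the actual Annex III categories and argue borderline cases, which is where the real learning happens.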
On Day 5, participants design an AI governance framework for a simulated enterprise operating multiple AI systems across risk tiers, including at least one high-risk system under the EU AI Act. A senior practitioner assesses framework design, risk assessment quality, and regulatory documentation. The XAIG certificate and Practitioner Assessment Report are issued together. The programme is aligned with the EU AI Act 2024, ISO/IEC 42001:2023, NIST AI RMF 1.0, the OECD AI Principles, and UK AI Safety Institute frameworks.