Talki Academy
Intermediate · 2 days · EU AI Act 2026

AI Act 2026: Practical Implementation Playbook

Move beyond compliance checklists. Build real risk assessments, DPIA templates, audit logging, and drift monitoring for EU AI systems — with working code for every deliverable.

This training is eligible for OPCO funding. Reference price: EUR 1,490.

What you will build

  • Classify any AI system (Gen AI, RAG, voice agents) under the EU AI Act with documented evidence
  • Complete a DPIA for AI systems — pre-filled templates for three scenarios
  • Generate system cards and technical documentation that satisfy Annex IV
  • Run automated bias detection and production drift monitoring suites
  • Implement Article 12 compliant audit logging with PII protection
  • Produce a full risk assessment for a RAG system with actual risk scores

Who this is for

  • Compliance officers and DPOs preparing for August 2026 high-risk system obligations
  • Developers building AI systems deployed to EU users
  • Legal teams advising organizations on AI Act implementation
  • CTOs and product managers launching AI features that may be high-risk

Prerequisites

  • Basic understanding of GDPR (data subjects, legal basis, processor vs. controller)
  • Familiarity with at least one AI deployment pattern (chatbot, RAG, scoring model)
  • Python basics for running the code examples (you do not need to be a developer)

Modules

Risk Taxonomy: Classify Your AI Systems Correctly

2h30

Apply the AI Act's four-tier risk model to real system architectures — generative AI, RAG pipelines, voice agents — and produce a defensible classification with documented evidence.

Risk Taxonomy: Classify Your AI Systems Correctly

By the end of this module you will: correctly classify a generative AI chatbot, a RAG pipeline, and a voice agent under the AI Act; identify which classification triggers high-risk obligations; and produce a classification evidence document that satisfies an audit.

The EU AI Act's classification regime is risk-based, not technology-based. The same underlying model — say, a Claude API call — can be minimal risk in one context (summarising internal documents) and high risk in another (scoring job candidates). Classification errors in either direction are costly: misclassify downward and you face fines of up to EUR 15 million or 3% of global turnover; misclassify upward and you impose unnecessary compliance overhead on teams. The classification decision must be documented, reviewed annually, and defensible to a market surveillance authority.

The Four Risk Tiers with Real Deployment Examples

  • Unacceptable Risk (PROHIBITED from February 2025): Real-time remote biometric identification in public spaces without judicial authorisation; social scoring systems that assign citizens a reputation score affecting access to services; AI that exploits psychological vulnerabilities to manipulate behaviour. Example prohibited system: a retail analytics platform that uses facial recognition to match shoppers against a police database and alert security staff in real time.
  • High Risk (full conformity assessment required before August 2026): Automated CV screening and candidate ranking (Annex III point 4); AI-assisted credit scoring used in lending decisions (Annex III point 5(b)); medical device software that influences diagnostic or treatment decisions (high-risk via Annex I, as a safety component of a product covered by the Medical Device Regulation); AI used in critical infrastructure management — energy grids, water systems (Annex III point 2). Example high-risk system: a recruitment SaaS that takes CVs and outputs a ranked shortlist, even if a human makes the final hire decision.
  • Limited Risk (transparency obligations only): Chatbots interacting with natural persons must disclose they are AI. Deepfake content must be labelled. AI-generated text published to inform the public on matters of public interest must be disclosed. Example: a customer support chatbot built on Claude must tell users they are speaking with an AI — but the system itself requires no conformity assessment.
  • Minimal Risk (no mandatory obligations): Product recommendation engines, spam filters, AI-powered search, content translation, grammar correction. Example: an internal document summarisation tool using a RAG pipeline over company knowledge base — minimal risk if it does not make consequential decisions about individuals.

Classifying Generative AI Systems (GPAI)

The AI Act adds a separate track for General Purpose AI (GPAI) models — models trained on broad data and usable for many tasks. If you deploy a GPAI model (Claude, GPT-4, Llama 3) or build on top of one, classification works differently: the model itself falls under GPAI obligations (provider responsibility), but your application layer falls under the four-tier risk model. A GPAI model with systemic risk (training compute above 10^25 FLOPs) has additional requirements: adversarial testing, incident reporting, and energy consumption disclosure.

RAG Pipeline Classification Decision Tree

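Here is a minimal sketch of that decision tree in code. The questions, their order, and the domain list are teaching simplifications, not an official checklist — adapt them with your legal review.

```python
# Illustrative decision tree for classifying a RAG pipeline under the
# EU AI Act's four-tier risk model. Questions and domain list are a
# teaching sketch, not an official checklist.

HIGH_RISK_DOMAINS = {"employment", "credit", "education",
                     "essential_services", "law_enforcement", "justice"}

def classify_rag_pipeline(decision_domain: str | None,
                          output_influences_decisions: bool,
                          user_facing_chat: bool) -> tuple[str, str]:
    """Return (risk_tier, rationale) for a RAG deployment."""
    # Step 1: do the outputs feed a decision in an Annex III domain?
    if output_influences_decisions and decision_domain in HIGH_RISK_DOMAINS:
        return ("high", f"Outputs influence decisions in the "
                        f"'{decision_domain}' domain (Annex III) — "
                        "full conformity assessment required.")
    # Step 2: does it interact with natural persons as a chatbot?
    if user_facing_chat:
        return ("limited", "Interacts with natural persons — AI disclosure "
                           "obligation applies (transparency only).")
    # Step 3: internal tooling making no consequential decisions.
    return ("minimal", "No consequential decisions about individuals and "
                       "no direct user interaction — no mandatory obligations.")

if __name__ == "__main__":
    print(classify_rag_pipeline("employment", True, False))  # high
    print(classify_rag_pipeline(None, False, True))          # limited
    print(classify_rag_pipeline(None, False, False))         # minimal
```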

Voice Agent Classification: A Worked Example

Voice agents present a multi-layered classification problem because they combine three distinct AI components: speech recognition (STT), language model inference (Claude/GPT), and text-to-speech synthesis (ElevenLabs/Azure). Each layer can have different risk levels. The key is to classify the system as a whole based on its purpose and outputs — not its components individually. A voice agent that cold-calls customers to offer loan restructuring is high-risk (credit domain, automated outbound). A voice agent that answers FAQ calls for a software company is limited risk (chatbot transparency obligation only).

Classification rule of thumb for voice agents: if the voice agent's output influences a financial, employment, or health decision — even indirectly — treat it as high-risk and run a full conformity assessment. The cost of over-classification is documentation overhead. The cost of under-classification is a fine of up to EUR 15 million.

🛠️ Exercise 1: Risk Classification Workbench

Run this code against three pre-loaded real-world scenarios, then add your own system as Scenario 4. The classifier mirrors the logic used by compliance teams in actual EU AI Act audits. Pay attention to how a single parameter change shifts the risk tier — and the resulting obligations.

Practical exercise

Run the code and examine each classification. Then: (1) In Scenario 1, change `uses_biometric_data=False` — does risk stay high? Why? (2) In Scenario 2, change `decision_domain` to `None` — count how many obligations disappear. (3) Add your own real or hypothetical system as Scenario 4 and defend your classification in writing.

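The workbench below is an illustrative sketch: the field names (`uses_biometric_data`, `decision_domain`, and so on) and the obligation lists are simplified teaching constructs, not statutory definitions. Notice in `classify` that the Annex III domain check alone is enough to reach the high tier.

```python
# Risk Classification Workbench — a teaching sketch.
from dataclasses import dataclass

HIGH_RISK_DOMAINS = {"employment", "credit", "education", "health",
                     "law_enforcement", "justice", "critical_infrastructure"}

@dataclass
class AISystem:
    name: str
    decision_domain: str | None          # Annex III domain the output feeds
    uses_biometric_data: bool = False
    interacts_with_persons: bool = False
    generates_content: bool = False

def classify(system: AISystem) -> tuple[str, list[str]]:
    obligations: list[str] = []
    tier = "minimal"
    # Biometric use or an Annex III domain each suffices for high risk.
    if system.uses_biometric_data or system.decision_domain in HIGH_RISK_DOMAINS:
        tier = "high"
        obligations += ["Risk management system (Art. 9)",
                        "Technical documentation (Art. 11 / Annex IV)",
                        "Automatic event logging (Art. 12)",
                        "Human oversight mechanism (Art. 14)",
                        "Conformity assessment before market placement",
                        "EU database registration"]
    if system.interacts_with_persons:
        tier = "limited" if tier == "minimal" else tier
        obligations.append("Disclose to users they interact with AI (Art. 50)")
    if system.generates_content:
        tier = "limited" if tier == "minimal" else tier
        obligations.append("Label AI-generated content (Art. 50)")
    return tier, obligations

SCENARIOS = [
    AISystem("CV screening & ranking SaaS", decision_domain="employment",
             uses_biometric_data=True),
    AISystem("Loan-restructuring voice agent", decision_domain="credit",
             interacts_with_persons=True),
    AISystem("Internal document summariser (RAG)", decision_domain=None),
]

for s in SCENARIOS:
    tier, obligations = classify(s)
    print(f"\n{s.name} -> {tier.upper()} risk")
    for o in obligations:
        print(f"  - {o}")
```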

🛠️ Exercise 2: Generate Your Article 11 Technical Documentation

High-risk AI systems must have their technical documentation drawn up before they are placed on the market or put into service — in practice, before production launch. Market surveillance authorities request this document during audits — teams that cannot produce it face immediate non-compliance findings. Fill in the fields below for a real or hypothetical CV screening tool.

Practical exercise

Fill in all TODO fields for a real or hypothetical CV screening tool. Pay particular attention to §4 performance metrics — these must be disaggregated by demographic group. After completing, count how many fields you needed to research vs. already knew. The fields you had to research are your compliance blind spots.

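A skeleton to start from — the section headings paraphrase Annex IV, and this is a sketch rather than a guaranteed-complete template. Running it counts the fields you still owe.

```python
# Annex IV technical documentation skeleton — fill every TODO for your
# own system. Section headings paraphrase Annex IV; verify coverage
# against the Annex text itself.
ANNEX_IV_DOC = {
    "1_general_description": {
        "intended_purpose": "TODO: e.g. rank CVs for software roles",
        "provider": "TODO",
        "versions_and_dependencies": "TODO: model, libraries, hardware",
    },
    "2_development_process": {
        "training_data_provenance": "TODO: sources, period, licences",
        "data_cleaning_and_labelling": "TODO",
        "design_choices_and_tradeoffs": "TODO",
    },
    "3_monitoring_and_control": {
        "human_oversight_measures": "TODO: who can override, and how",
        "expected_lifetime_and_maintenance": "TODO",
    },
    "4_performance_metrics": {
        "accuracy_overall": "TODO",
        "accuracy_by_demographic_group": "TODO: disaggregate by group",
        "robustness_tests": "TODO",
        "foreseeable_misuse": "TODO",
    },
    "5_risk_management": {
        "identified_risks": "TODO",
        "mitigations": "TODO: link to risk register entries",
    },
}

# Count the compliance gaps: any field still starting with TODO.
missing = [f"{sec}.{fld}" for sec, fields in ANNEX_IV_DOC.items()
           for fld, val in fields.items() if str(val).startswith("TODO")]
print(f"{len(missing)} fields still to complete:")
for m in missing:
    print(" -", m)
```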

Quiz available

Finish reading this module, then test your knowledge with the quiz.


DPIA Templates: Pre-filled for Three Common Scenarios

2h

Write compliant Data Protection Impact Assessments for the three AI deployment patterns that most frequently require one: RAG pipelines processing personal data, voice agents recording conversations, and automated scoring systems.

DPIA Templates: Pre-filled for Three Common Scenarios

By the end of this module you will have a complete DPIA template for each of the three scenarios below — ready to submit to your DPO or supervisory authority, with all legally required sections filled.

A DPIA is mandatory under GDPR Article 35 whenever AI processing is 'likely to result in a high risk to the rights and freedoms of natural persons.' For AI systems, CNIL and equivalent authorities consider three triggers automatic: (1) systematic monitoring of individuals at scale, (2) processing sensitive categories of data (health, biometric, political), (3) automated decision-making with legal or similarly significant effects. If your AI system hits any of these, a DPIA is required before you process a single record in production.

DPIA Scenario 1 — RAG Pipeline Processing Employee Personal Data

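A pre-filled starting point for Scenario 1, shown as JSON so it can live in version control. The structure is an illustrative convention — GDPR Article 35 mandates content, not format — and the entity names, retention periods, and risk entries are placeholders to adapt.

```json
{
  "dpia": {
    "system": "Internal RAG assistant over HR knowledge base",
    "controller": "TODO: legal entity",
    "processing_description": "Employee questions and retrieved HR documents are sent to a third-party LLM API; responses may contain personal data.",
    "lawful_basis": "Article 6(1)(f) legitimate interest — TODO: balancing test reference",
    "data_categories": ["employee names", "roles", "HR case summaries"],
    "data_subjects": ["employees", "former employees"],
    "recipients": ["LLM API provider (processor)", "vector store host"],
    "transfers_outside_eea": "TODO: yes/no + safeguard (SCCs, adequacy decision)",
    "retention": "Query logs 90 days; embeddings until source document deletion",
    "risks": [
      {
        "id": "R-01",
        "description": "Personal data leaves the EEA via API calls",
        "likelihood": "possible",
        "severity": "significant",
        "mitigations": ["EU-region endpoint", "PII redaction before embedding"]
      },
      {
        "id": "R-02",
        "description": "RAG answers reveal HR data to unauthorised employees",
        "likelihood": "likely",
        "severity": "significant",
        "mitigations": ["document-level access control enforced at retrieval time"]
      }
    ],
    "dpo_opinion": "TODO",
    "review_date": "TODO: annual"
  }
}
```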

DPIA Scenario 2 — Voice Agent Recording Customer Conversations

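And a starting point for Scenario 2. The `special_category_check` field reflects a point that trips teams up: voice recordings only become biometric data under GDPR Article 9 when processed to uniquely identify a person. Again, field names and retention periods are illustrative placeholders.

```json
{
  "dpia": {
    "system": "Customer support voice agent with call recording",
    "processing_description": "Inbound calls are transcribed (STT), processed by an LLM, and answered via TTS; audio and transcripts are stored.",
    "lawful_basis": "TODO: consent announced at call start, or legitimate interest with an objection route",
    "data_categories": ["voice recordings", "transcripts", "phone numbers"],
    "special_category_check": "Voice data is biometric data under Art. 9 GDPR only if processed to uniquely identify a person — document that no voiceprint matching occurs.",
    "retention": "Audio 30 days, transcripts 12 months — TODO: justify both periods",
    "risks": [
      {
        "id": "R-01",
        "description": "Caller is unaware the agent is an AI",
        "mitigations": ["disclosure in the opening message (AI Act Art. 50)"]
      },
      {
        "id": "R-02",
        "description": "Sensitive data volunteered by the caller is stored in transcripts",
        "mitigations": ["automatic PII/health-term redaction", "shortened audio retention"]
      }
    ]
  }
}
```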

🛠️ Exercise 3: DPIA Risk Matrix — Employee Monitoring Scenario

Your company wants to deploy facial recognition for employee time-tracking. Before the DPO can approve, you must complete a GDPR Article 35 risk matrix. This exercise walks you through assessing three real risks that supervisory authorities look for in biometric system DPIAs.

Practical exercise

Complete the three risk entries: set likelihood, severity, add two concrete mitigations each, then set residual_risk after mitigations. When done: (1) Does R-002 (discriminatory error rates) justify requiring demographic benchmark testing as a deployment gate? (2) For R-003 (scope creep), what technical control would you implement to make expansion impossible without triggering a new DPIA?

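A scaffold for the exercise — a sketch assuming a CNIL-style four-level likelihood × severity grid, which is a common convention rather than a legal requirement. The three risk entries match the instructions above; everything marked TODO is yours to complete.

```python
# GDPR Article 35 risk matrix for a facial-recognition time-tracking
# proposal. Scales follow a common CNIL-style 1-4 grid (illustrative).
LIKELIHOOD = {"negligible": 1, "limited": 2, "significant": 3, "maximum": 4}
SEVERITY = LIKELIHOOD  # same 1-4 scale

risks = [
    {
        "id": "R-001",
        "description": "Biometric templates breached or exfiltrated",
        "likelihood": "TODO",          # pick a key from LIKELIHOOD
        "severity": "TODO",
        "mitigations": ["TODO: mitigation 1", "TODO: mitigation 2"],
        "residual_risk": "TODO",       # re-assess after mitigations
    },
    {
        "id": "R-002",
        "description": "Discriminatory error rates — higher false "
                       "rejections for some demographic groups",
        "likelihood": "TODO",
        "severity": "TODO",
        "mitigations": ["TODO", "TODO"],
        "residual_risk": "TODO",
    },
    {
        "id": "R-003",
        "description": "Scope creep — attendance system later reused "
                       "for performance monitoring",
        "likelihood": "TODO",
        "severity": "TODO",
        "mitigations": ["TODO", "TODO"],
        "residual_risk": "TODO",
    },
]

def score(risk: dict) -> int | None:
    """Likelihood x severity, or None while fields are still TODO."""
    try:
        return LIKELIHOOD[risk["likelihood"]] * SEVERITY[risk["severity"]]
    except KeyError:
        return None

for r in risks:
    s = score(r)
    print(f"{r['id']}: score={s if s else 'unassessed'} — {r['description']}")
```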

Quiz available

Finish reading this module, then test your knowledge with the quiz.


Documentation Patterns: System Cards, Technical Records, and Model Cards

1h30

Build the three documentation artifacts required by AI Act Article 11: a system card (business-level), a technical record (engineering-level), and a model card (model-level). Includes automation scripts.

Documentation Patterns: What the AI Act Actually Requires

AI Act Article 11 requires high-risk AI systems to maintain technical documentation before market placement — and keep it updated throughout the system's lifecycle. The regulation specifies what must be documented (Annex IV) but not the format. This module establishes three practical document types that together satisfy Annex IV, are maintainable by engineering teams, and can be version-controlled alongside code.

The System Card (Business-Level, for DPO and Management)

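One workable layout, as a sketch. The field set is a convention we find maintainable — the AI Act mandates the content of Annex IV, not this schema — and all values shown are examples.

```yaml
# System card — business-level view for the DPO and management.
# Field set is an illustrative convention; values are examples.
system_card:
  name: cv-screening-assistant
  version: 2.3.0
  owner: "TODO: product owner"
  intended_purpose: "Rank incoming CVs for software engineering roles"
  risk_tier: high                      # per classification evidence doc
  annex_iii_reference: "point 4 — employment"
  users: ["recruiters", "hiring managers"]
  affected_persons: ["job applicants"]
  human_oversight: "Recruiter reviews every shortlist and can override it"
  known_limitations:
    - "Lower parsing accuracy on non-PDF CVs"
    - "Not validated for non-EU labour markets"
  linked_documents:
    dpia: "privacy-register/dpia-2026-004"
    technical_record: "docs/annex-iv-tech-record.md"
    model_card: "docs/model-card.md"
  review:
    last_reviewed: 2026-03-01
    next_review: 2027-03-01
```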

Automated Documentation Generation

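A minimal generation script, assuming PyYAML is installed and the system card above is saved as `system_card.yaml` (both assumptions — adapt the paths). Keeping the YAML as the single source of truth means the rendered document is rebuilt on every release instead of drifting out of date.

```python
# Generate the human-readable system card from the YAML source of
# truth. A minimal sketch assuming PyYAML and the layout shown above.
from datetime import date
from pathlib import Path

import yaml

def render_system_card(yaml_path: str, out_path: str) -> None:
    card = yaml.safe_load(Path(yaml_path).read_text())["system_card"]
    lines = [
        f"# System Card: {card['name']} v{card['version']}",
        f"_Generated {date.today().isoformat()} — do not edit by hand._",
        "",
        f"**Intended purpose:** {card['intended_purpose']}",
        f"**Risk tier:** {card['risk_tier']} ({card['annex_iii_reference']})",
        f"**Human oversight:** {card['human_oversight']}",
        "",
        "## Known limitations",
        *[f"- {item}" for item in card["known_limitations"]],
        "",
        "## Linked compliance documents",
        *[f"- {k}: `{v}`" for k, v in card["linked_documents"].items()],
    ]
    Path(out_path).write_text("\n".join(lines))

if __name__ == "__main__":
    render_system_card("system_card.yaml", "SYSTEM_CARD.md")
```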

Testing Frameworks: Bias Detection and Drift Monitoring

2h

Build automated test suites for demographic bias and production drift — the two most common causes of AI Act non-compliance findings discovered during post-market monitoring.

Testing Frameworks: Bias Detection and Drift Monitoring

The AI Act requires ongoing post-market monitoring (Article 72) — not just pre-deployment testing. You need automated tests that run in production and alert you when performance degrades, bias increases, or the input distribution shifts. This module builds a complete monitoring stack using open-source tools: Evidently for drift detection, Great Expectations for data quality, and a custom fairness testing framework.

Demographic Parity Test Suite

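A dependency-free sketch of a demographic parity test, using the four-fifths rule as the alert threshold — a common convention, not an AI Act requirement. The sample data is synthetic; wire in your model's real decisions.

```python
# Demographic parity test — minimal sketch. Group labels and the
# sample records below are synthetic placeholders.
from collections import defaultdict

def selection_rates(records: list[dict]) -> dict[str, float]:
    """records: [{'group': str, 'selected': bool}, ...]"""
    totals, selected = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        selected[r["group"]] += int(r["selected"])
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact(rates: dict[str, float]) -> float:
    """Ratio of the lowest to the highest group selection rate."""
    return min(rates.values()) / max(rates.values())

def test_demographic_parity(records, threshold: float = 0.8):
    # Four-fifths rule: flag if any group's rate falls below 80%
    # of the most-selected group's rate.
    rates = selection_rates(records)
    ratio = disparate_impact(rates)
    print(f"Selection rates: {rates}")
    print(f"Disparate impact ratio: {ratio:.2f} (threshold {threshold})")
    assert ratio >= threshold, (
        f"FAIL: ratio {ratio:.2f} below {threshold} — investigate "
        "before deployment")

if __name__ == "__main__":
    # Synthetic CV screening outcomes; replace with real decisions.
    sample = ([{"group": "A", "selected": True}] * 45 +
              [{"group": "A", "selected": False}] * 55 +
              [{"group": "B", "selected": True}] * 40 +
              [{"group": "B", "selected": False}] * 60)
    test_demographic_parity(sample)
```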

Production Drift Monitor with Evidently

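A drift check sketch written against Evidently's 0.4-series `Report` API — the API and result-dict keys differ across Evidently versions, so verify against your installed release. The reference dataset is the snapshot saved at deployment; the current dataset is a recent production window.

```python
# Drift monitor sketch (Evidently 0.4-series API assumed).
import pandas as pd
from evidently.metric_preset import DataDriftPreset
from evidently.report import Report

def check_drift(reference: pd.DataFrame, current: pd.DataFrame,
                html_out: str = "drift_report.html") -> bool:
    report = Report(metrics=[DataDriftPreset()])
    report.run(reference_data=reference, current_data=current)
    report.save_html(html_out)        # attach to the compliance record
    # First metric in the preset is the dataset-level drift summary.
    summary = report.as_dict()["metrics"][0]["result"]
    drifted = summary["dataset_drift"]
    print(f"Drifted columns: {summary['number_of_drifted_columns']}"
          f"/{summary['number_of_columns']} — dataset drift: {drifted}")
    return drifted

if __name__ == "__main__":
    import numpy as np
    rng = np.random.default_rng(0)
    # Synthetic RAG telemetry: reference vs. a shifted current window.
    ref = pd.DataFrame({"query_length": rng.normal(120, 30, 1000),
                        "retrieval_score": rng.beta(8, 2, 1000)})
    cur = pd.DataFrame({"query_length": rng.normal(180, 40, 1000),
                        "retrieval_score": rng.beta(8, 2, 1000)})
    if check_drift(ref, cur):
        print("ALERT: schedule model/index review (Article 72 monitoring)")
```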

Audit Logging: What to Log, How to Structure It, How Long to Keep It

1h30

Implement the logging infrastructure required by AI Act Article 12 for high-risk systems. Covers log schema design, immutability requirements, and a FastAPI middleware implementation.

Audit Logging for AI Act Article 12

AI Act Article 12 requires high-risk AI systems to 'automatically log events' to enable post-market monitoring and incident investigation. The logs must be: (1) automatically generated — not manually written, (2) sufficient to reconstruct the system's operation at any given time, (3) retained for a period appropriate to the system's purpose — Article 19 sets a floor of at least six months, and sector rules (for example in financial services) or your documentation retention policy can require considerably longer. This module shows the complete logging stack from schema design to infrastructure.

AI Act Compliant Log Schema

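An illustrative record layout: raw inputs and operator identities never enter the log — only salted hashes and decision summaries do — so the audit trail itself does not become a PII liability. Field names are a convention for this course, not Article 12 wording.

```python
# Article 12 log record sketch: enough fields to reconstruct an
# inference without storing raw personal data in the log itself.
import hashlib
import json
import uuid
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone

def pseudonymise(value: str, salt: str = "rotate-me-quarterly") -> str:
    """One-way hash so logs carry no directly identifying data.
    In production, load the salt from secret management."""
    return hashlib.sha256(f"{salt}:{value}".encode()).hexdigest()[:16]

@dataclass
class InferenceLogRecord:
    system_id: str                   # e.g. "cv-screener"
    system_version: str
    model_version: str
    event_type: str                  # "inference" | "override" | "error"
    user_ref: str                    # pseudonymised operator id
    input_hash: str                  # hash of the input, never the input
    output_summary: str              # decision/score, not free text
    human_override: bool = False
    event_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: str = field(default_factory=lambda:
                           datetime.now(timezone.utc).isoformat())

    def to_json(self) -> str:
        return json.dumps(asdict(self), sort_keys=True)

record = InferenceLogRecord(
    system_id="cv-screener", system_version="2.3.0",
    model_version="TODO-model-id",   # placeholder
    event_type="inference",
    user_ref=pseudonymise("recruiter@example.com"),
    input_hash=pseudonymise("full CV text goes here"),
    output_summary="rank=3/40 score=0.81",
)
print(record.to_json())
```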

Retention setup for Article 12 compliance: For high-risk systems in most domains, keep logs for the system's operational lifetime + 10 years. Use S3 Object Lock (COMPLIANCE mode) or Azure Immutable Blob Storage to prevent deletion. Test your log retention policy annually — an unreachable log is equivalent to no log during an audit.
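For the S3 route, a minimal boto3 sketch. The bucket name, region, and the 10-year period are placeholders, and note that Object Lock can only be enabled at bucket creation — it cannot be retrofitted onto an existing bucket.

```python
# Sketch: create a log bucket with Object Lock in COMPLIANCE mode
# using boto3 (assumed installed and credentialed).
import boto3

s3 = boto3.client("s3", region_name="eu-west-1")

# Object Lock must be enabled when the bucket is created.
s3.create_bucket(
    Bucket="ai-act-audit-logs-example",       # placeholder name
    CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
    ObjectLockEnabledForBucket=True,
)

# COMPLIANCE mode: nobody, including the root account, can delete
# objects or shorten retention until the period expires.
s3.put_object_lock_configuration(
    Bucket="ai-act-audit-logs-example",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Years": 10}},
    },
)
```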


Capstone: Complete Risk Assessment for a Production RAG System

2h30

Work through a complete AI Act + GDPR risk assessment for a legal document analysis RAG system — the type most frequently scrutinised by regulators. Produce actual risk scores, identify mitigations, and generate the compliance certificate.

Capstone: End-to-End Risk Assessment for a Legal RAG System

Scenario: Your company has built a RAG system that analyses legal contracts and flags non-standard clauses. Lawyers use it to prioritise review effort. You are about to roll it out to 40 law firms across the EU. Before go-live, you need a complete risk assessment. Work through each section below and fill in the assessment for your own system.

A legal contract analysis RAG system sits in an interesting compliance position: it touches legal advice (Annex III point 8 of the AI Act covers AI used in 'administration of justice and democratic processes'), processes confidential business information, and its outputs influence legal decisions. Let's assess it systematically.

Step 1 — Risk Classification with Evidence Score

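A sketch of evidence-weighted classification: each criterion carries a documented observation and a score, so the final tier is backed by an audit trail rather than a bare assertion. The criteria, weights, and thresholds here are illustrative — calibrate them with your compliance team.

```python
# Capstone step 1: classification backed by an evidence score.
# Criteria, weights, and thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class Evidence:
    criterion: str
    observation: str
    points_toward_high_risk: int   # 0-3

evidence = [
    Evidence("Annex III domain",
             "Outputs used by lawyers in contract review, not in court "
             "proceedings (Annex III point 8 arguably not triggered)", 1),
    Evidence("Effect on individuals",
             "Flags clauses; no direct legal effect on natural persons", 1),
    Evidence("Human oversight",
             "Every flag reviewed by a qualified lawyer before action", 0),
    Evidence("Personal data in corpus",
             "Contracts contain counterparty names and signatory data", 2),
    Evidence("Autonomy",
             "Prioritisation only; cannot approve or reject a contract", 0),
]

score = sum(e.points_toward_high_risk for e in evidence)
max_score = 3 * len(evidence)
print(f"Evidence score: {score}/{max_score}")
for e in evidence:
    print(f"  [{e.points_toward_high_risk}] {e.criterion}: {e.observation}")

# Illustrative thresholds — agree these with compliance up front.
tier = "high" if score >= 8 else "limited" if score >= 4 else "minimal"
print(f"Provisional classification: {tier.upper()} "
      "(document the rationale and obtain compliance sign-off)")
```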

Step 2 — Checklist Before Go-Live

  • Risk classification documented and signed off by compliance officer — evidence file stored in version control
  • DPIA completed and approved by DPO — stored in privacy register with expiry date
  • Conformity assessment completed (for high-risk systems) — certificate issued
  • EU AI Act database registration submitted (mandatory before August 2026 for high-risk systems)
  • Human oversight mechanism tested — override workflow verified end-to-end
  • Audit logging running — sample log reviewed by compliance team, retention policy confirmed
  • Bias test suite running — baseline report generated, thresholds documented
  • Drift monitor configured — first reference dataset snapshot saved
  • Data processing agreements signed with all sub-processors (LLM API, vector store, cloud provider)
  • User-facing disclosure implemented — 'This is an AI system' notice verified in UI
  • Incident response plan documented — who to notify, within what timeframe (Article 73: serious incidents reported within 15 days of awareness to the market surveillance authority)
  • Post-market monitoring schedule set — minimum quarterly review for high-risk systems

Save this checklist as a GitHub issue template or JIRA ticket template. Run it for every new AI deployment and at every major version release. Attach the completed checklist to your release PR — it becomes the audit trail.
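A minimal sketch of the GitHub variant — the template path follows GitHub's issue-template convention, and the condensed item list is illustrative; keep the canonical checklist in one place and regenerate the template from it.

```python
# Render the go-live checklist as a GitHub issue template so it is
# instantiated for every deployment. Paths and items are illustrative.
from pathlib import Path

CHECKLIST = [
    "Risk classification documented and signed off",
    "DPIA completed and approved by DPO",
    "Conformity assessment completed (high-risk systems)",
    "EU AI Act database registration submitted",
    "Human oversight override workflow verified end-to-end",
    "Audit logging running, retention policy confirmed",
    "Bias test suite baseline report generated",
    "Drift monitor reference snapshot saved",
    "DPAs signed with all sub-processors",
    "User-facing AI disclosure verified in UI",
    "Incident response plan documented (Article 73 timelines)",
    "Post-market monitoring schedule set",
]

template = "\n".join(
    ["---",
     "name: AI go-live compliance checklist",
     "about: Run before every AI deployment or major release",
     "---",
     ""] +
    [f"- [ ] {item}" for item in CHECKLIST])

out = Path(".github/ISSUE_TEMPLATE/ai-golive-checklist.md")
out.parent.mkdir(parents=True, exist_ok=True)
out.write_text(template + "\n")
print(f"Wrote {out}")
```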


Ready to implement AI Act compliance?

On-site or remote. Groups of 4 to 12. Contact us to schedule a session for your team.

Contact us