
AI Act 2026: 15-Point Compliance Checklist

An actionable AI Act 2026 compliance checklist: risk classification, mandatory documentation, transparency requirements, human oversight, conformity assessment, and penalties. A guide for CTOs, DPOs, and compliance officers.

By Talki Academy · Updated April 3, 2026

The European AI Act entered into force on August 1, 2024, and applies in stages: the bans on prohibited practices and the AI literacy obligations have applied since February 2, 2025, the rules for general-purpose AI models since August 2, 2025, and most remaining obligations, including the high-risk requirements, apply from August 2, 2026. If your company develops or uses AI systems — chatbots, recruitment tools, LLM assistants, recommendation systems — you are affected.

This 15-point checklist lets you verify your compliance. It is aimed at CTOs, DPOs, compliance officers, legal managers, and startup founders. Each point includes a concrete action to take and a real-world example.

🎯 Goal of this article: Provide a clear AI Act 2026 compliance roadmap with concrete actions and practical examples. Reading time: 20 minutes. Implementation time: 2 to 8 weeks depending on your AI maturity.

Risk Classification (Points 1-4)

The AI Act classifies AI systems according to 4 risk levels. Your risk level determines the applicable obligations. Always start by classifying your systems.

✅ Point 1: Identify all your AI systems

Action: Map all AI systems used in your organization.

Include:

  • SaaS tools with integrated AI (CRM, ERP, HR tools, recruitment software)
  • LLMs used directly (ChatGPT, Claude, Copilot, Gemini)
  • Internally developed systems (chatbots, recommendation engines, ML models)
  • AI APIs integrated into your products (image recognition, translation, transcription)

Concrete example: An e-commerce SME uses: ChatGPT (writing product descriptions), HubSpot with AI (lead scoring), an internally developed chatbot (customer support), and Google Translate API (website translation). Total: 4 AI systems to document.

✅ Point 2: Check for unacceptable risk systems (PROHIBITED)

Action: Confirm you are not using any system prohibited by the AI Act.

Prohibited systems (Article 5):

  • Subliminal manipulation causing harm (AI dark patterns)
  • Exploitation of vulnerabilities related to age, disability, or social or economic situation
  • Social scoring by public authorities
  • Real-time remote biometric identification in publicly accessible spaces by law enforcement (except strict exceptions)
  • Risk assessment for committing crimes based solely on profiling

Example: A system that analyzes children's emotions to recommend addictive content = PROHIBITED. If this applies to you, stop immediately and consult a specialized attorney.

✅ Point 3: Identify your high-risk systems

Action: Determine whether your systems fall into the “high-risk” category (Annex III of the AI Act).

High-risk domains:

  • Human resources: AI used for recruitment, performance evaluation, promotion, termination.
  • Access to essential services: credit scoring, solvency assessment, insurance pricing based on behavioral data.
  • Education: automated exam grading, academic orientation, cheating detection.
  • Law enforcement: recidivism risk assessment, predictive crime analysis.
  • Critical infrastructure: water, electricity, transport management.
  • Migration and borders: identity verification, fake document detection.

Example: Your AI tool automatically sorts CVs and rejects 80% of candidates without human intervention = high-risk system. Obligations: complete technical documentation, rigorous testing, CE marking, human oversight, registration in the EU database.

✅ Point 4: Classify limited and minimal risk systems

Action: Distinguish limited risk (transparency obligations only) from minimal risk (almost no obligation).

Limited risk:

  • Conversational chatbots (must identify themselves as AI)
  • Emotion recognition systems
  • AI-generated or manipulated content (deepfakes, synthetic images)

Minimal risk:

  • Spam filters
  • Product recommendations without significant impact
  • Professional use of ChatGPT/Claude for general tasks (writing, summaries)

Example: Your customer support chatbot = limited risk. It must clearly display “I am an AI assistant” at the beginning of the conversation (transparency obligation, Article 50).

Mandatory Documentation (Points 5-7)

Compliance is proven through documentation. In case of audit, it is the first thing authorities will ask for.

✅ Point 5: Technical documentation for high-risk systems

Action: For each high-risk system, create complete technical documentation (Article 11).

Mandatory content:

  • General system description (objective, operation, limitations)
  • Training data: sources, collection methods, potential biases
  • Model architecture and development methods
  • Performance metrics and test results
  • Risk management measures (security, robustness, accuracy)
  • Human oversight procedures
  • Logs of system modifications and versions

Concrete example: Your recruitment AI must document: the dataset used (how many CVs, from which sources), the bias metrics tested (male/female balance, diversity), model performance (precision, recall), and tests performed to detect unintentional discrimination. Recommended format: PDF or Markdown stored in a versioned repository (GitLab, GitHub).
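To keep this documentation consistent across systems, it helps to enforce a common structure. Below is a minimal sketch of such a structure in TypeScript; the interface and field names are our own illustration, not an official Article 11 schema.

// Illustrative structure for Article 11 technical documentation.
// Field names are our own; the AI Act prescribes content, not a schema.
interface TechnicalDocumentation {
  systemName: string;
  description: string;           // objective, operation, known limitations
  trainingData: {
    sources: string[];
    collectionMethods: string;
    knownBiases: string[];
  };
  architecture: string;          // model type and development methods
  performanceMetrics: Record<string, number>;  // e.g. precision, recall
  riskMeasures: string[];        // security, robustness, accuracy measures
  humanOversight: string;        // who supervises and how
  changelog: { version: string; date: string; changes: string }[];
}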

✅ Point 6: AI systems registry (provider obligations)

Action: If you are the provider of a high-risk system, register it in the EU database (Article 71).

When to register:

  • Before placing the system on the market
  • For each substantial update to the system

Information to provide: provider name and contact details, system description, risk level, CE marking, notified body that conducted the conformity assessment.

Important note: The European database will be managed by the European Commission. Access to the registration portal is planned for mid-2026. If you are only a deployer (you use a third-party system like ChatGPT), this obligation does not apply to you.

✅ Point 7: Create an internal AI usage registry (all deployers)

Action: Even if your systems are minimal risk, document your AI uses internally. This is good governance practice and supports your AI literacy obligations under Article 4.

Recommended content:

  • List of all AI systems used (name, provider, use case)
  • Classification by risk level
  • User services/departments
  • Training received by users
  • Deployment date and last audit date

Simple template (Google Sheets or Notion):

AI System   | Provider | Use            | Risk Level | Trained Users
ChatGPT     | OpenAI   | Email writing  | Minimal    | Yes (12/2025)
HubSpot AI  | HubSpot  | Lead scoring   | Limited    | In progress

Why this matters: In case of a CNIL audit or AI Act inspection, this registry proves you have AI governance in place. Without it, you will struggle to demonstrate compliance, and AI Act violations can be fined up to €15M or 3% of worldwide turnover (Article 99).
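If a spreadsheet feels too loose, the same registry can live in a structured file under version control. A minimal TypeScript sketch, mirroring the template table above (the type and entries are our own illustration):

// Internal AI usage registry, mirroring the spreadsheet template above
type RiskLevel = "minimal" | "limited" | "high" | "prohibited";

interface AiSystemEntry {
  name: string;
  provider: string;
  useCase: string;
  riskLevel: RiskLevel;
  usersTrained: boolean;       // "In progress" maps to false until complete
  lastAuditDate?: string;      // ISO date, optional until the first audit
}

const registry: AiSystemEntry[] = [
  { name: "ChatGPT", provider: "OpenAI", useCase: "Email writing",
    riskLevel: "minimal", usersTrained: true, lastAuditDate: "2025-12-01" },
  { name: "HubSpot AI", provider: "HubSpot", useCase: "Lead scoring",
    riskLevel: "limited", usersTrained: false },
];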

Transparency Requirements (Points 8-10)

Article 50 imposes transparency obligations for certain AI systems. Users must know when they are interacting with an AI.

✅ Point 8: Disclosure of AI use in interactions

Action: If your users interact with an AI chatbot or assistant, it must clearly identify itself as AI.

Legal obligation (Article 50(1)):

  • Inform the user they are communicating with an AI
  • This information must be clear, visible, and displayed at the start of the interaction
  • Exception: if it is obvious from context (visible humanoid robot, etc.)

Recommended implementation:

// Example chatbot disclaimer, sent as the first message of every conversation
const AI_DISCLAIMER =
  "👋 Hello! I'm an AI assistant. I can help with your questions, " +
  "but my answers may contain errors. For any important decision, " +
  "please consult a member of our human team.";

Bad example: A chatbot that introduces itself with a human first name (“Hi, I'm Sophie, how can I help?”) without mentioning it's an AI = violation of Article 50. Possible fine: up to €15M or 3% of worldwide turnover.

✅ Point 9: Labeling AI-generated content

Action: If you publish AI-generated or manipulated content (images, videos, audio, text), you must clearly label it.

Obligation (Article 50(2) and 50(4)):

  • Synthetic images/videos: watermark or visible caption (“AI-generated image”)
  • Generated audio (voice deepfake): disclaimer at the beginning or in metadata
  • Text: clear mention when AI-generated text is published to inform the public on matters of public interest

Concrete example: You use Midjourney to create blog illustrations. Add a caption under each image: “AI-generated illustration (Midjourney)”. If you publish a deepfake video (even humorous), display a clear warning at the beginning.

Special case — marketing content: If your AI generates marketing emails, you are not required to indicate “This email was written by AI” UNLESS the email claims to be written by a specific person (human signature). In that case, transparency is mandatory.
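As a minimal implementation sketch for web content, a helper can attach the required label wherever an AI-generated image is rendered (the function and its parameters are our own illustration):

// Minimal sketch: render an image with the required AI-generation label
function aiGeneratedFigure(src: string, alt: string, tool: string): string {
  return `<figure>
  <img src="${src}" alt="${alt}">
  <figcaption>AI-generated illustration (${tool})</figcaption>
</figure>`;
}

// Usage: aiGeneratedFigure("/img/hero.png", "Abstract cover art", "Midjourney")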

✅ Point 10: Detecting and disclosing deepfakes

Action: If you create or distribute deepfakes (manipulated videos or audio), label them explicitly.

Deepfake definition (AI Act): Audio or video content that has been generated or manipulated by AI in a way that deceptively resembles real persons, objects, or events.

Specific obligations:

  • Display a clear, visible warning (video overlay, audio disclaimer)
  • Include technical metadata enabling automated detection (C2PA, Content Credentials)
  • Retain evidence of labeling (logs, archived versions)

Example: You create an advertising video with a CEO speaking in 5 languages (voice cloned by ElevenLabs). Add a discreet overlay: “AI-generated voice”. If you don't = Article 50 violation + GDPR risk if the voice is recognizable.
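For web playback, the visible warning can be overlaid on the player itself. A minimal browser-side sketch, assuming the video sits in a position:relative container (our own illustration; it complements, but does not replace, machine-readable metadata such as C2PA):

// Minimal sketch: overlay a visible AI-content warning on a <video> element.
// Complements, but does not replace, machine-readable metadata (C2PA).
function addAiWarning(video: HTMLVideoElement, label = "AI-generated voice"): void {
  const badge = document.createElement("div");
  badge.textContent = label;
  badge.style.cssText =
    "position:absolute;top:8px;left:8px;padding:4px 8px;" +
    "background:rgba(0,0,0,.7);color:#fff;font-size:14px;border-radius:4px;";
  video.parentElement?.appendChild(badge);
}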

Human Oversight (Points 11-12)

High-risk systems must include effective human oversight (Article 14). The human must be able to understand, intervene, and stop the system.

✅ Point 11: Implement effective human oversight

Action: For each high-risk system, clearly define who oversees the system and how.

The 3 pillars of human oversight (Article 14):

  • Understanding: The human must understand the system's capabilities and limitations.
  • Intervention: The human can intervene during system use (correct, adjust, stop).
  • Override: The human can ignore or reverse a system decision.

Concrete example (recruitment AI):

  • Understanding: The recruiter completed 4 hours of training on the system's operation, potential biases, and limitations.
  • Intervention: The recruiter can manually add a candidate rejected by the AI to the shortlist if they detect an error.
  • Override: Every final decision (interview invitation, final rejection) is manually validated by a human. The AI proposes, the human decides.

Non-compliant counter-example: System that automatically rejects 90% of CVs with no possibility for the recruiter to see rejected candidates or understand why = Article 14 violation. In case of a candidate complaint, possible fine + legal liability.
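In code, the “AI proposes, the human decides” principle often takes the form of an approval gate: the model's output is stored as a recommendation, and nothing final happens until a named human validates it. A minimal sketch (types and names are our own illustration):

// Minimal human-in-the-loop gate: the AI proposes, a human decides
type Recommendation = {
  candidateId: string;
  aiScore: number;
  aiVerdict: "shortlist" | "reject";
};
type FinalDecision = Recommendation & {
  reviewer: string;
  approved: boolean;
  reviewedAt: Date;
};

function finalize(rec: Recommendation, reviewer: string, approved: boolean): FinalDecision {
  // Every final decision carries the human reviewer's identity (audit trail)
  return { ...rec, reviewer, approved, reviewedAt: new Date() };
}

// The AI verdict alone never triggers a rejection email:
// only a FinalDecision signed by a named reviewer does.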

✅ Point 12: Train human supervisors

Action: People who supervise high-risk systems must receive specific training (Article 4).

Recommended training content:

  • Technical operation of the system (without being a data scientist)
  • Known potential biases and limitations
  • Intervention and escalation procedures
  • Legal obligations (AI Act, GDPR, non-discrimination)
  • Case studies: common errors and how to detect them

Recommended duration: 1 to 2 days of training for supervisors of high-risk systems. Our AI Governance and GDPR Compliance training covers these aspects and is OPCO-fundable.

Conformity Assessment (Points 13-14)

✅ Point 13: Conduct a conformity assessment (high-risk systems)

Action: Before placing a high-risk system on the market, carry out the conformity assessment required by Article 43. For most Annex III systems this is a self-assessment based on internal control (Annex VI); certain systems, such as remote biometric identification, may require an audit by a notified body.

Who must do this: Only providers of high-risk systems. If you are a deployer (you use a third-party system), verify that the provider has obtained CE marking.

Assessment process:

  1. Complete technical documentation (see Point 5)
  2. Performance, security, robustness testing
  3. Audit by an independent notified body (where required)
  4. Drawing up the EU declaration of conformity
  5. Affixing CE marking to the system

Estimated cost: Between €15,000 and €50,000 depending on system complexity. Timeline: 3 to 6 months. List of notified bodies available on the European Commission website (NANDO database).

✅ Point 14: Continuously monitor systems after deployment

Action: Implement a post-market monitoring system to detect drift (Article 72).

Metrics to monitor (high-risk systems):

  • Performance: Model accuracy, error rate, latency
  • Bias: Decision distribution by gender, age, origin (if applicable)
  • User feedback: Complaints, decision challenges
  • Incidents: Bugs, outages, unexpected behaviors

Notification obligation: If you detect a serious incident (discriminatory bias, security breach, systemic error), you must notify the competent national authorities within 15 days (Article 73).

Example: Your credit AI systematically refuses loan applications from people in a certain postal code (geographic bias as proxy for ethnic origin). You detect this in your logs. You must: (1) suspend the system immediately, (2) notify the CNIL and the AI Act authority within 15 days, (3) fix the bias, (4) retest before restoration.
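A simple way to catch the kind of geographic bias described above is to compute approval rates per group from your decision logs and flag outliers. A minimal sketch (the threshold and field names are our own illustration, not a regulatory standard):

// Minimal drift check: flag groups whose approval rate deviates strongly
interface DecisionLog { group: string; approved: boolean }  // e.g. group = postal code

function flagBias(logs: DecisionLog[], maxGap = 0.2): string[] {
  const byGroup = new Map<string, { total: number; ok: number }>();
  for (const log of logs) {
    const s = byGroup.get(log.group) ?? { total: 0, ok: 0 };
    s.total += 1;
    if (log.approved) s.ok += 1;
    byGroup.set(log.group, s);
  }
  const overall = logs.filter(l => l.approved).length / logs.length;
  // Flag any group whose approval rate is more than maxGap below the overall rate
  return [...byGroup.entries()]
    .filter(([, s]) => overall - s.ok / s.total > maxGap)
    .map(([group]) => group);
}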

Penalties and Remedies (Point 15)

✅ Point 15: Know the applicable penalties and prepare your defense

Action: Understand the financial risks and prepare your compliance evidence.

Penalty schedule (Article 99):

Violation                                              | Maximum Fine
Use of a prohibited system (Article 5)                 | €35M or 7% of worldwide turnover
Non-compliance of a high-risk system (Articles 8-15)   | €15M or 3% of worldwide turnover
Transparency violation (Article 50)                    | €15M or 3% of worldwide turnover
AI literacy failures (Article 4)                       | Set by member states (Article 99(1))
Provision of incorrect information to authorities      | €7.5M or 1% of worldwide turnover

Supervisory authority: Each EU member state designates its own national supervisory authorities for the AI Act. In several countries the data protection authority plays a central role (for example, the CNIL in France). These authorities can conduct on-site inspections, request documents, and impose fines. AI Act inspections began in March 2026.

How to prepare your defense:

  • Keep all compliance evidence (training certificates, AI registry, AI charter, oversight logs)
  • Document all decisions and system updates (GitLab, Notion, SharePoint)
  • Designate an AI Act officer in your organization (often the DPO or CTO)
  • Take out professional liability insurance covering AI risks (new 2026 products from AXA, Hiscox, etc.)

Decision Tree: What Risk Level for Your AI System?

Use this decision tree to quickly classify your AI systems. Start at the top and follow the branches according to your answers.

🌳 AI ACT DECISION TREE

❓ Does your system subliminally manipulate users or exploit vulnerabilities?

├─ YES → PROHIBITED (stop immediately)

└─ NO → Continue ⬇️


❓ Is your system used for: recruitment, credit, education, justice, critical infrastructure?

├─ YES → HIGH RISK (complete documentation, CE assessment, human oversight)

└─ NO → Continue ⬇️


❓ Does your system interact directly with users (chatbot, emotion recognition, deepfakes)?

├─ YES → LIMITED RISK (transparency obligations, AI disclaimer)

└─ NO → MINIMAL RISK (user training, internal registry recommended)
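The same tree can be expressed as a classification helper, for instance to pre-fill the risk level column of your internal registry. A simplified sketch (the traits and function are our own illustration; real classification requires legal review):

// Simplified sketch of the decision tree above; not a substitute for legal review
type Risk = "prohibited" | "high" | "limited" | "minimal";

interface SystemTraits {
  manipulatesOrExploits: boolean;  // subliminal manipulation, vulnerability exploitation
  highRiskDomain: boolean;         // recruitment, credit, education, justice, infrastructure
  userFacing: boolean;             // chatbot, emotion recognition, deepfakes
}

function classify(t: SystemTraits): Risk {
  if (t.manipulatesOrExploits) return "prohibited";
  if (t.highRiskDomain) return "high";
  if (t.userFacing) return "limited";
  return "minimal";
}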

Ambiguous cases: If you hesitate between two categories, apply the precautionary principle and classify at the higher level. You can also consult the guidelines from the European AI Office (ai-office.ec.europa.eu) or engage a specialized consulting firm (Deloitte, PwC, KPMG offer AI Act audits).

Next Steps: Your 8-Week Action Plan

Here is a concrete roadmap to go from non-compliance to AI Act compliance in 8 weeks. Adapt according to your organization's maturity.

📅 Weeks 1-2: Initial Audit

  • Map all AI systems (Point 1)
  • Classify each system by risk level (Points 2-4)
  • Identify obvious compliance gaps

📅 Weeks 3-4: Documentation and Governance

  • Create the internal AI usage registry (Point 7)
  • Draft the company AI charter
  • For high-risk systems: start technical documentation (Point 5)

📅 Weeks 5-6: Transparency and Training

  • Implement chatbot disclaimers (Point 8)
  • Add watermarks on AI content (Point 9)
  • Train all AI users (2-4 hour session)
  • Train supervisors of high-risk systems (1-2 days)

📅 Weeks 7-8: Monitoring and Final Validation

  • Set up post-deployment monitoring (Point 14)
  • Internal compliance audit (full checklist)
  • If high-risk system: engage notified body for CE assessment (Point 13)
  • Final documentation and archiving (providers of high-risk systems must retain technical documentation for 10 years, Article 18)

Need help? Our AI Governance and GDPR Compliance training covers this entire checklist in 1 day. It includes: documentation templates, model AI charter, practical cases, updated regulatory monitoring. OPCO-fundable (potential out-of-pocket cost: €0).

Frequently Asked Questions

Is my customer support chatbot a high-risk system under the AI Act?

It depends on its use. If your chatbot only answers support questions (FAQ, order tracking), it is a limited or minimal risk system. But if the chatbot makes decisions that significantly affect user rights (automatic refusal of refunds, account blocking without recourse), it may be classified as high-risk. The key criterion: is there human oversight before any impactful decision?

What is the difference between provider and deployer in the AI Act?

A provider develops or commercializes an AI system (example: OpenAI providing ChatGPT). A deployer uses an AI system developed by a third party in the context of their professional activity (example: your company using ChatGPT to write emails). Both roles have obligations under the AI Act, but providers bear the heavier ones (CE conformity, registration in the EU database).

Does the AI Act apply to companies outside the EU?

Yes, if the AI system is used in the EU or produces effects on persons located in the EU. Example: a US startup offering an AI recruitment tool to French companies must comply with the AI Act. This is the same extraterritoriality principle as the GDPR.

How much does AI Act compliance cost for an SME?

For a minimal risk system (standard use of ChatGPT, Claude, etc.): between €2,000 and €5,000 (staff training, drafting the AI charter, documenting uses). For a high-risk system: between €15,000 and €50,000 (technical audit, complete technical documentation, conformity assessment by notified body, security testing). Talki Academy training programs are OPCO-funded, which can reduce out-of-pocket costs to zero in many cases.
