The EU AI Act entered into force on August 1, 2024. Its first obligations (the Article 5 prohibitions and the AI-literacy duties) became applicable on February 2, 2025, and the bulk of the regime, including the high-risk obligations, became enforceable on August 2, 2026. If your company develops or uses AI systems — chatbots, recruitment tools, LLM assistants, recommendation engines — you are in scope.
This 15-point checklist helps you verify compliance. It targets CTOs, DPOs, compliance officers, legal leads, and startup founders. Each point includes a concrete action and a real-world example.
🎯 Goal of this article: Provide a clear AI Act 2026 compliance roadmap with concrete actions and practical examples. Reading time: 20 minutes. Implementation time: 2-8 weeks depending on your AI maturity.
Risk Classification (Points 1-4)
The AI Act classifies AI systems into 4 risk levels. Your risk level determines applicable obligations. Always start by classifying your systems.
✅ Point 1: Identify All Your AI Systems
Action: Map all AI systems used in your organization.
Include:
- SaaS tools with embedded AI (CRM, ERP, HR tools, recruitment software)
- LLMs used directly (ChatGPT, Claude, Copilot, Gemini)
- In-house developed systems (chatbots, recommendation engines, ML models)
- AI APIs integrated into your products (image recognition, translation, transcription)
Concrete example: An e-commerce SME uses: ChatGPT (product description writing), HubSpot with AI (lead scoring), an in-house chatbot (customer support), and Google Translate API (website translation). Total: 4 AI systems to document.
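The mapping exercise can be kept as structured data from day one, which makes the later classification and registry steps mechanical. A minimal sketch in Python (entries mirror the hypothetical SME above; the field names are illustrative, not mandated by the Act):

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str       # system or product name
    provider: str   # vendor, or "in-house"
    use_case: str   # what it is used for
    category: str   # e.g. embedded SaaS AI, direct LLM use, in-house, API

# Hypothetical inventory for the e-commerce SME described above
inventory = [
    AISystem("ChatGPT", "OpenAI", "product description writing", "direct LLM use"),
    AISystem("HubSpot AI", "HubSpot", "lead scoring", "embedded SaaS AI"),
    AISystem("Support chatbot", "in-house", "customer support", "in-house system"),
    AISystem("Google Translate API", "Google", "website translation", "integrated API"),
]

print(f"{len(inventory)} AI systems to document")  # → 4 AI systems to document
```

Keeping the inventory as data (rather than a one-off document) lets you re-run the classification whenever a new tool is adopted.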
✅ Point 2: Check for Unacceptable Risk Systems (PROHIBITED)
Action: Confirm you are not using any system prohibited by the AI Act.
Prohibited systems (Article 5):
- Subliminal manipulation causing harm (AI dark patterns)
- Exploitation of vulnerabilities related to age, disability, or social situation
- Social scoring by public authorities
- Real-time biometric identification in public spaces (strict exceptions only)
- Predicting the risk that an individual will commit a criminal offence based solely on profiling or personality traits
Example: A system that analyzes children's emotions to recommend addictive content = PROHIBITED. If this is your case, stop use immediately and consult a specialized lawyer.
✅ Point 3: Identify Your High-Risk Systems
Action: Determine if your systems fall into the "high-risk" category (Annex III of the AI Act).
High-risk domains:
- Human resources: AI used for recruitment, performance evaluation, promotion, termination.
- Access to essential services: credit scoring, creditworthiness assessment, insurance pricing based on behavioral data.
- Education: automated exam grading, academic guidance, cheating detection.
- Law enforcement: recidivism risk assessment, predictive crime analysis.
- Critical infrastructure: water, electricity, transport management.
- Migration and borders: identity verification, fake document detection.
Example: Your AI tool automatically sorts CVs and rejects 80% of candidates without human intervention = high-risk system. Obligations: complete technical documentation, rigorous testing, CE marking, human oversight, EU registry.
✅ Point 4: Classify Limited-Risk and Minimal-Risk Systems
Action: Distinguish limited risk (transparency obligations only) from minimal risk (almost no obligations).
Limited risk:
- Conversational chatbots (must identify as AI)
- Emotion recognition systems (note: in workplaces and educational institutions these are outright prohibited)
- AI-generated or manipulated content (deepfakes, synthetic images)
Minimal risk:
- Spam filters
- Product recommendations without significant impact
- Professional use of ChatGPT/Claude for general tasks (writing, summaries)
Example: Your customer support chatbot = limited risk. It must clearly display "I am an AI assistant" at the start of the conversation (transparency obligation, Article 50).
Mandatory Documentation (Points 5-7)
Compliance is proven through documentation. In case of audit, this is the first thing authorities will request.
✅ Point 5: Technical Documentation for High-Risk Systems
Action: For each high-risk system, create comprehensive technical documentation (Article 11).
Mandatory content:
- General system description (purpose, operation, limitations)
- Training data: sources, collection methods, potential biases
- Model architecture and development methods
- Performance metrics and test results
- Risk management measures (security, robustness, accuracy)
- Human oversight procedures
- Logs of modifications and system versions
Concrete example: Your recruitment AI must document: the dataset used (how many CVs, from what sources), bias metrics tested (gender balance, diversity), model performance (precision, recall), and tests performed to detect unintended discrimination. Recommended format: PDF or Markdown stored in a versioned repository (GitLab, GitHub).
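The bias metrics mentioned above can start very simply. A sketch of a selection-rate disparity check (all screening outcomes below are invented for illustration; a real audit needs proper statistical tests and legal review):

```python
from collections import Counter

# Invented screening outcomes: (group label, shortlisted?)
decisions = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

totals, selected = Counter(), Counter()
for group, shortlisted in decisions:
    totals[group] += 1
    selected[group] += shortlisted

# Selection rate per group, and the ratio of worst to best rate;
# a ratio well below 1.0 is a signal to investigate, not a verdict.
rates = {g: selected[g] / totals[g] for g in totals}
disparity = min(rates.values()) / max(rates.values())
print(rates, f"disparity ratio = {disparity:.2f}")
```

Logging a metric like this per model version gives you exactly the kind of test evidence the technical documentation must contain.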
✅ Point 6: AI Systems Registry (Provider Obligations)
Action: If you are a provider of a high-risk system, register it in the EU database.
When to register:
- Before placing the system on the market
- For each substantial system update
Information to provide: provider name and contact details, system description, risk level, CE marking, notified body that performed conformity assessment.
Important note: The EU database is managed by the European Commission. Access to the registration portal is expected mid-2026. If you are only a deployer (you use a third-party system like ChatGPT), this obligation generally does not apply to you; the main exception is deployers of high-risk systems that are public authorities, which must also register (Article 49).
✅ Point 7: Create an Internal AI Usage Registry (All Deployers)
Action: Even if your systems are minimal risk, document your AI usage internally. The AI Act does not mandate this registry as such, but it is core governance practice and directly supports the AI-literacy obligation of Article 4.
Recommended content:
- List of all AI systems used (name, provider, use case)
- Classification by risk level
- User services/departments
- Training received by users
- Commissioning date and last audit date
Simple template (Google Sheets or Notion):
| AI System | Provider | Use Case | Risk Level | Users Trained |
|---|---|---|---|---|
| ChatGPT | OpenAI | Email writing | Minimal | Yes (12/2025) |
| HubSpot AI | HubSpot | Lead scoring | Limited | In progress |
Why this matters: In case of an AI Act inspection by your market surveillance authority, this registry demonstrates that you have AI governance in place. Gaps in governance can contribute to fines of up to 3% of global revenue (Article 99).
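The spreadsheet template above translates directly into a machine-checkable registry. A small sketch (field names and entries are illustrative) that flags systems whose users are not yet trained:

```python
# Illustrative registry entries mirroring the template above
registry = [
    {"system": "ChatGPT", "provider": "OpenAI", "use_case": "Email writing",
     "risk": "minimal", "users_trained": True},
    {"system": "HubSpot AI", "provider": "HubSpot", "use_case": "Lead scoring",
     "risk": "limited", "users_trained": False},
]

# Governance check: which systems still have untrained users?
training_gaps = [e["system"] for e in registry if not e["users_trained"]]
print("Training gaps:", training_gaps)  # → Training gaps: ['HubSpot AI']
```

A check like this can run in CI or a scheduled job, so the registry never silently goes stale.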
Transparency Requirements (Points 8-10)
Article 50 imposes transparency obligations for certain AI systems. Users must know when they interact with AI.
✅ Point 8: Disclose AI Usage in Interactions
Action: If your users interact with a chatbot or AI assistant, it must clearly identify itself as AI.
Legal obligation (Article 50(1)):
- Inform the user they are communicating with AI
- This information must be clear, visible, and displayed at the start of interaction
- Exception: if it's obvious from context (visible humanoid robot, etc.)
Recommended implementation:
```
// Example chatbot disclaimer message (text to display)
"👋 Hello! I'm an AI assistant. I can help with your questions, but my responses may contain errors. For any important decision, please consult a member of our human team."
```
Bad example: Chatbot presenting itself with a human name ("Hello, I'm Sophie, how can I help?") without mentioning it's AI = violation of Article 50. Possible fine: up to EUR 15M or 3% of global revenue.
✅ Point 9: Label AI-Generated Content
Action: If you publish AI-generated or manipulated content (images, videos, audio, text), you must clearly indicate it.
Obligation (Article 50):
- Synthetic images/videos: watermark or visible caption ("AI-generated image")
- Generated audio (voice deepfake): disclaimer at the beginning or in metadata
- Text: clear mention when AI-generated text is published to inform the public on matters of public interest
Concrete example: You use Midjourney to create blog illustrations. Add a caption under each image: "AI-generated illustration (Midjourney)". If you publish a deepfake video (even humorous), display a clear warning at the beginning.
Special case — marketing content: If your AI generates marketing emails, you're not required to state "This email was written by AI" UNLESS the email claims to be written by a specific person (human signature). In that case, transparency is mandatory.
✅ Point 10: Detect and Disclose Deepfakes
Action: If you create or distribute deepfakes (manipulated videos or audio), explicitly label them.
Deepfake definition (AI Act, Article 3(60)): Image, audio, or video content generated or manipulated by AI that appreciably resembles existing persons, objects, places, or events and would falsely appear to a person to be authentic or truthful.
Specific obligations:
- Display a clear and visible warning (video overlay, audio disclaimer)
- Include technical metadata enabling automated detection (C2PA, Content Credentials)
- Retain evidence of labeling (logs, archived versions)
Example: You create an advertising video with a CEO speaking in 5 languages (voice cloned by ElevenLabs). Add a discreet overlay: "AI-generated voice". If you don't = violation of Article 50 + GDPR risk if the voice is recognizable.
Human Oversight (Points 11-12)
High-risk systems must include effective human oversight (Article 14). Humans must be able to understand, intervene, and stop the system.
✅ Point 11: Implement Effective Human Control
Action: For each high-risk system, clearly define who oversees the system and how.
The 3 pillars of human oversight (Article 14):
- Understanding: Humans must understand the system's capabilities and limitations.
- Intervention: Humans can intervene during system use (correct, adjust, stop).
- Override: Humans can ignore or reverse a system decision.
Concrete example (recruitment AI):
- Understanding: The recruiter completed 4 hours of training on how the system works, its potential biases, and its limitations.
- Intervention: The recruiter can manually add a candidate rejected by AI to the shortlist if they detect an error.
- Override: Any final decision (interview invitation, definitive rejection) is manually validated by a human. AI proposes, human disposes.
Counter-example (non-compliant): System that automatically rejects 90% of CVs without the recruiter being able to see rejected candidates or understand why = violation of Article 14. In case of candidate complaint, possible fine + legal liability.
✅ Point 12: Train Human Overseers
Action: People who oversee high-risk systems must receive specific training (Article 4).
Recommended training content:
- Technical operation of the system (without being a data scientist)
- Potential biases and known limitations
- Intervention and escalation procedures
- Legal obligations (AI Act, GDPR, non-discrimination)
- Case studies: common errors and how to detect them
Recommended duration: 1-2 days of training for high-risk system supervisors. Government-funded training programs available in most EU member states.
Conformity Assessment (Points 13-14)
✅ Point 13: Conduct Conformity Assessment (High-Risk Systems)
Action: Before placing a high-risk system on the market, conduct the conformity assessment required by Article 43. For most Annex III systems this is a self-assessment based on internal control (Annex VI); remote biometric identification systems generally require an independent notified body.
Who must do this: Only providers of high-risk systems. If you are a deployer (you use a third-party system), verify the provider obtained CE marking.
Assessment process:
- Complete technical documentation (see Point 5)
- Performance, security, robustness testing
- Audit by an independent notified body (where required)
- Obtain conformity certificate
- Affix CE marking on the system
Estimated cost: Between EUR 15,000 and 50,000 depending on system complexity. Timeline: 3-6 months. List of notified bodies available on the European Commission website (NANDO database).
✅ Point 14: Continuously Monitor Systems After Deployment
Action: Implement post-market monitoring to detect drift (Article 72).
Metrics to monitor (high-risk systems):
- Performance: Model accuracy, error rate, latency
- Bias: Decision distribution by gender, age, origin (if applicable)
- User feedback: Complaints, decision challenges
- Incidents: Bugs, failures, unexpected behavior
Notification obligation: If you detect a serious incident (discriminatory bias, security breach, systemic error), you must report it to the market surveillance authority of the member state concerned immediately, and no later than 15 days after becoming aware of it (Article 73).
Example: Your credit AI systematically denies loan applications from people in certain postal codes (geographic bias as a proxy for ethnic origin). You detect this in your logs. You must: (1) suspend the system immediately, (2) notify the market surveillance authority within 15 days, (3) correct the bias, (4) re-test before resuming service.
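A monitoring check that catches this kind of geographic bias can be a few lines over your decision logs. A sketch with invented data (a real pipeline would use statistical significance tests rather than a fixed threshold):

```python
from collections import defaultdict

# Invented decision logs: (postal-code prefix, approved?)
applications = [
    ("750", True), ("750", True), ("750", False),
    ("931", False), ("931", False), ("931", False),
]

stats = defaultdict(lambda: [0, 0])  # prefix -> [approved, total]
for prefix, approved in applications:
    stats[prefix][0] += approved
    stats[prefix][1] += 1

overall = sum(ok for _, ok in applications) / len(applications)
# Flag areas whose approval rate is less than half the overall rate
alerts = [p for p, (a, t) in stats.items() if a / t < overall / 2]
print(f"overall approval {overall:.0%}, areas to investigate: {alerts}")
```

Running this on a schedule, with results archived, is exactly the kind of post-market monitoring evidence Article 72 expects.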
Penalties and Remediation (Point 15)
✅ Point 15: Understand Applicable Penalties and Prepare Your Defense
Action: Understand financial risks and prepare compliance evidence.
Penalty schedule (Article 99):
| Violation | Maximum Fine |
|---|---|
| Use of prohibited system (Article 5) | EUR 35M or 7% global revenue |
| High-risk system non-compliance (Articles 8-15) | EUR 15M or 3% global revenue |
| Transparency violation (Article 50) | EUR 15M or 3% global revenue |
| Breach of deployer obligations (Article 26) | EUR 15M or 3% global revenue |
| Provision of incorrect information to authorities | EUR 7.5M or 1.5% global revenue |
Note: For SMEs and start-ups, the lower of the fixed amount and the percentage applies (Article 99(6)).
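Because every tier reads "EUR X or Y% of global revenue", the binding cap depends on company size: for most companies the higher of the two applies, while for SMEs Article 99(6) applies the lower. A quick sketch with a hypothetical revenue figure:

```python
def max_fine(fixed_eur: int, pct: float, global_revenue_eur: int,
             sme: bool = False) -> float:
    # Non-SMEs: whichever is higher; SMEs: whichever is lower (Article 99(6)).
    amounts = (fixed_eur, pct * global_revenue_eur)
    return min(amounts) if sme else max(amounts)

# Hypothetical company with EUR 2bn global revenue, prohibited-practice tier:
print(f"EUR {max_fine(35_000_000, 0.07, 2_000_000_000):,.0f}")  # → EUR 140,000,000
```

For a large group, the percentage cap therefore dominates the fixed amount by a wide margin.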
Supervisory authority: Each EU member state has designated a national market surveillance authority for the AI Act. These authorities can conduct on-site inspections, request documents, and impose fines. AI Act inspections began in March 2026.
How to prepare your defense:
- Keep all compliance evidence (training certificates, AI registry, AI charter, oversight logs)
- Document all decisions and system updates (GitLab, Notion, SharePoint)
- Designate an AI Act point person in your organization (often the DPO or CTO)
- Subscribe to professional liability insurance covering AI risks (new 2026 offerings from major insurers)
Decision Tree: What Risk Level for Your AI System?
Use this decision tree to quickly classify your AI systems. Start at the top and follow branches based on your answers.
🌳 AI ACT DECISION TREE
❓ Does your system subliminally manipulate users or exploit vulnerabilities?
├─ YES → PROHIBITED (cease immediately)
└─ NO → Continue ⬇️
❓ Is your system used for: recruitment, credit, education, justice, critical infrastructure?
├─ YES → HIGH RISK (complete documentation, CE assessment, human oversight)
└─ NO → Continue ⬇️
❓ Does your system directly interact with users (chatbot, emotion recognition, deepfakes)?
├─ YES → LIMITED RISK (transparency obligations, AI disclaimer)
└─ NO → MINIMAL RISK (user training, internal registry recommended)
Ambiguous cases: If you hesitate between two categories, apply the precautionary principle and classify at the higher level. You can also consult the European AI Office guidelines (ai-office.ec.europa.eu) or engage a specialized consulting firm (Deloitte, PwC, KPMG offer AI Act audits).
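The tree above can be expressed as a short function for first-pass triage. This is a simplification for illustration only; real classification requires legal analysis under Articles 5 and 6 and Annex III:

```python
def classify(prohibited_practice: bool, high_risk_domain: bool,
             user_facing: bool) -> str:
    if prohibited_practice:   # Article 5: manipulation, exploitation, scoring
        return "PROHIBITED"
    if high_risk_domain:      # Annex III: recruitment, credit, education, etc.
        return "HIGH RISK"
    if user_facing:           # chatbots, emotion recognition, deepfakes
        return "LIMITED RISK"
    return "MINIMAL RISK"

print(classify(False, True, True))  # a recruitment screening tool → HIGH RISK
```

Note that the branches are ordered: a recruitment tool that also chats with candidates is high risk, not merely limited risk, which matches the precautionary principle above.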
Next Steps: Your 8-Week Action Plan
Here's a concrete roadmap to go from non-compliance to AI Act compliance in 8 weeks. Adapt based on your organization's maturity.
📅 Weeks 1-2: Initial Audit
- Map all AI systems (Point 1)
- Classify each system by risk level (Points 2-4)
- Identify obvious compliance gaps
📅 Weeks 3-4: Documentation and Governance
- Create internal AI usage registry (Point 7)
- Draft company AI charter
- For high-risk systems: start technical documentation (Point 5)
📅 Weeks 5-6: Transparency and Training
- Implement chatbot disclaimers (Point 8)
- Add watermarks to AI content (Point 9)
- Train all AI users (2-4 hour session)
- Train high-risk system supervisors (1-2 days)
📅 Weeks 7-8: Monitoring and Final Validation
- Set up post-deployment monitoring (Point 14)
- Internal compliance audit (complete checklist)
- If high-risk system: engage notified body for CE assessment (Point 13)
- Final documentation and archiving (providers must retain technical documentation for 10 years after market placement, Article 18)
Frequently Asked Questions
Is my customer support chatbot a high-risk AI system under the AI Act?
It depends on its use case. If your chatbot only answers support questions (FAQs, order tracking), it's limited or minimal risk. But if the chatbot makes decisions that significantly affect user rights (automatic refund denial, account blocking without recourse), it may be classified as high-risk. The key criterion: is there human oversight before any impactful decision?
What's the difference between provider and deployer in the AI Act?
A provider develops or commercializes an AI system (e.g., OpenAI providing ChatGPT). A deployer uses a third-party AI system in a professional context (e.g., your company using ChatGPT to write emails). Most AI Act obligations apply to both roles, but providers have additional responsibilities (CE marking, EU database registration).
Does the AI Act apply to companies outside the EU?
Yes, if the AI system is used in the EU or produces effects on people located in the EU. Example: A US startup offering an AI recruitment tool to French companies must comply with the AI Act. It's the same extraterritoriality principle as GDPR.
How much does AI Act compliance cost for an SME?
For minimal-risk systems (typical use of ChatGPT, Claude, etc.): EUR 2,000-5,000 (staff training, AI charter, usage documentation). For high-risk systems: EUR 15,000-50,000 (technical audit, complete technical documentation, notified body conformity assessment, security testing). Training programs can often be funded by government grants, reducing out-of-pocket costs significantly.
📚 Additional Resources:
- Full AI Act text: eur-lex.europa.eu
- European AI Office guidelines: digital-strategy.ec.europa.eu
- AI Act FAQ: digital-strategy.ec.europa.eu