On August 2, 2026, the obligations for high-risk AI systems under Annex III of the EU AI Act (Regulation (EU) 2024/1689) become enforceable. That's roughly four months away. Companies without a compliance framework in place by then risk fines of up to €15 million or 3% of global annual revenue, whichever is higher (Art. 99(4)).
Over the past few years, I've built more than a dozen compliance and legal tech solutions for European enterprises. This article is what I tell my own clients: no consultant-speak, no abstract frameworks. Just a concrete, month-by-month plan.
The August 2026 Deadline - What's at Stake
The EU AI Act entered into force on August 1, 2024. It's being phased in gradually, and the next major milestone affects all high-risk AI systems under Annex III. These are systems in areas such as:
- HR and employment: AI-assisted candidate screening, performance evaluation, termination decisions (Annex III, point 4)
- Credit assessment: Scoring models at banks and insurers (Annex III, points 5(b) and 5(c))
- Biometric categorization: Biometric classification based on sensitive attributes, emotion recognition outside the prohibited workplace and education contexts (Annex III, point 1)
- Critical infrastructure: AI in energy, water, or transport networks (Annex III, point 2)
- Education: Automated exam grading, access decisions (Annex III, point 3)
Starting August 2026, these systems are subject to comprehensive obligations: risk management systems (Art. 9), data governance (Art. 10), technical documentation (Art. 11), record-keeping (Art. 12), transparency (Art. 13), human oversight (Art. 14), and accuracy and robustness requirements (Art. 15).
Important: High-risk AI systems in regulated products under Annex I (medical devices, machinery, toys, etc.) have until August 2, 2027. This article focuses on the Annex III deadline in August 2026.
The Fines Are Not Theoretical
The AI Act's penalty structure is tiered (Art. 99):
- Prohibited practices (Art. 5): up to €35 million or 7% of annual turnover
- High-risk violations: up to €15 million or 3% of annual turnover
- False statements to authorities: up to €7.5 million or 1% of annual turnover
Each member state must designate a national market surveillance authority (Art. 70). Authorities won't start proactively auditing companies on August 3, 2026. But complaints from employees, applicants, or consumers will come. And when they do, what matters is whether you can produce a documented compliance system or not.
What's ALREADY in Effect - and What Many Companies Don't Know
Before we get to the countdown, here's an uncomfortable truth: Two stages of the AI Act are already in force. Have been for months.
Prohibited Practices Since February 2, 2025
Since February 2025, certain AI practices have been banned in the EU (Art. 5). These include:
- Social scoring: Rating individuals based on social behavior across multiple contexts (Art. 5(1)(c))
- Subliminal manipulation: AI systems that manipulate behavior through subliminal techniques (Art. 5(1)(a))
- Exploitation of vulnerabilities: Targeting vulnerabilities due to age, disability, or a specific social or economic situation (Art. 5(1)(b))
- Emotion recognition in the workplace and educational institutions (with exceptions, Art. 5(1)(f))
- Real-time remote biometric identification in publicly accessible spaces by law enforcement (with narrow exceptions, Art. 5(1)(h))
Practical example: If your company uses a tool that analyzes candidate videos for "emotional suitability," that may have been illegal since February 2025. This isn't a future risk. It's a current one.
AI Literacy Since February 2, 2025
Also since February 2025, Art. 4 applies: Providers and deployers of AI systems must ensure their personnel have adequate AI literacy. This doesn't mean "everyone needs to take an AI course." It means: Anyone developing, deploying, or overseeing AI systems must understand what the system does, what risks exist, and how to monitor it.
Ask yourself: Have your HR staff using an AI-powered recruiting tool received documented training? If not, there's already a compliance gap.
GPAI Obligations Since August 2, 2025
Since August 2025, obligations for providers of General-Purpose AI Models apply (Art. 51-56). This primarily affects model providers themselves (OpenAI, Google, Mistral, etc.), not most companies. But if you internally train your own foundation model or fine-tune an open-source model and make it available to third parties, you might qualify as a GPAI provider yourself.
The Compliance Countdown: April to August 2026
Four months isn't much, but it's enough if you work systematically. Here's the roadmap.
April 2026: Inventory and Classification
Goal: Complete overview of all AI systems in the company, classified by risk level.
- Create an AI inventory. Go through every department and document: Which tools use AI? Not just the obvious ones (ChatGPT, Copilot). Think about recruiting software, credit scoring, automated quality control, predictive maintenance, fraud detection systems.
- Perform classification. For each system, check: Does it fall under Annex III? Is it a prohibited system under Art. 5? Is it a GPAI model? Or is it minimal risk (e.g., a spam filter)?
- Clarify roles. Are you a Provider or Deployer of the system?
Deliverable: A spreadsheet with all AI systems, their risk class, and your role.
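To make that deliverable concrete, here is a minimal sketch in Python of what one inventory record could look like. Every field, class, and example value is my own illustrative shorthand, not terminology the Act prescribes:

```python
# Illustrative sketch of an AI inventory record. All names and example
# values are the author's shorthand, not terms defined in the AI Act.
from dataclasses import dataclass
from enum import Enum


class RiskClass(Enum):
    PROHIBITED = "prohibited (Art. 5)"
    HIGH_RISK = "high-risk (Annex III)"
    GPAI = "general-purpose AI model"
    MINIMAL = "minimal risk"


class Role(Enum):
    PROVIDER = "provider (Art. 3, point 3)"
    DEPLOYER = "deployer (Art. 3, point 4)"


@dataclass
class AISystem:
    name: str            # e.g. "CV screening tool"
    department: str      # owning business unit
    vendor: str          # third-party vendor or "in-house"
    purpose: str         # intended purpose, in one sentence
    risk_class: RiskClass
    role: Role
    annex_iii_point: str = ""  # e.g. "point 4" for HR systems


# One hypothetical entry; in practice this list comes out of the
# department-by-department survey described above.
inventory = [
    AISystem(
        name="CV screening tool",
        department="HR",
        vendor="ExampleVendor GmbH",
        purpose="Ranks incoming applications by fit to the job profile",
        risk_class=RiskClass.HIGH_RISK,
        role=Role.DEPLOYER,
        annex_iii_point="point 4",
    ),
]
```

Even if you stay in a spreadsheet, keeping exactly these columns makes the May gap analysis mechanical.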
May 2026: Gap Analysis
Goal: For every high-risk system, know what's missing.
Take the requirements from Chapter III, Section 2 of the Regulation and check system by system:
- Risk management system in place? (Art. 9)
- Data governance documented? (Art. 10)
- Technical documentation created? (Art. 11)
- Automatic logging enabled? (Art. 12)
- Instructions for use available for deployers? (Art. 13)
- Human oversight defined? (Art. 14)
- Accuracy, robustness, cybersecurity tested? (Art. 15)
Create a checklist for each system. Mark green (present), yellow (partial), red (missing). This gap analysis is your work plan for June and July.
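If you'd rather have a reproducible report than a hand-colored spreadsheet, the checklist can live in code. A minimal sketch; the requirement labels are this article's shorthand, not the Regulation's official wording:

```python
# Sketch of a per-system gap checklist over the Art. 9-15 requirements.
# Status values mirror the green/yellow/red marking described above.
from enum import Enum


class Status(Enum):
    GREEN = "present"
    YELLOW = "partial"
    RED = "missing"


# Shorthand labels for the Chapter III, Section 2 requirements.
REQUIREMENTS = {
    "Art. 9": "Risk management system",
    "Art. 10": "Data governance",
    "Art. 11": "Technical documentation",
    "Art. 12": "Automatic logging",
    "Art. 13": "Instructions for use",
    "Art. 14": "Human oversight",
    "Art. 15": "Accuracy, robustness, cybersecurity",
}


def gap_report(system: str, checklist: dict[str, Status]) -> None:
    """Print a traffic-light status line for each requirement."""
    print(f"Gap analysis: {system}")
    for article, label in REQUIREMENTS.items():
        # Anything not yet assessed counts as missing, not as fine.
        status = checklist.get(article, Status.RED)
        print(f"  {article:<8} {label:<40} {status.value}")


gap_report("CV screening tool", {
    "Art. 9": Status.YELLOW,
    "Art. 11": Status.GREEN,
})
```

The red rows across all systems are, quite literally, your June and July work plan.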
June 2026: Implement Frameworks
Goal: Build risk management and documentation for all high-risk systems.
- Risk management system (Art. 9): Iterative process - identify, assess, mitigate, monitor risks. Not a one-time document, but a living process.
- Technical documentation (Art. 11 in conjunction with Annex IV): Intended purpose, development process, training data, performance metrics, known limitations. A skeleton sketch follows the tip below.
- Data governance (Art. 10): How are training and test data selected, prepared, checked for bias?
Pragmatic tip: Start with the system that affects the most people. In most companies, that's HR software or a customer scoring system. A good template for the first system can be adapted for the rest.
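One way to put that tip into practice: generate the same documentation skeleton for every system from the elements listed above. A sketch; the section titles are this article's shorthand, not the official Annex IV headings, so extend them against the Regulation's text:

```python
# Sketch of a uniform documentation stub per system. Section titles are
# shorthand from this article, not the official Annex IV wording.
SECTIONS = [
    "Intended purpose",
    "Development process",
    "Training data and data governance",
    "Performance metrics",
    "Known limitations",
    "Risk management measures",
]


def doc_skeleton(system: str) -> str:
    """Return a Markdown stub for the system owner to fill in."""
    lines = [f"# Technical documentation: {system}", ""]
    for section in SECTIONS:
        lines += [f"## {section}", "", "_TODO_", ""]
    return "\n".join(lines)


print(doc_skeleton("CV screening tool"))
```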
July 2026: Test, Validate, Train
Goal: Systems tested, personnel trained, oversight mechanisms active.
- Run tests: Check accuracy and robustness (Art. 15). Bias testing with representative datasets. Document the results (see the sketch after this list).
- Establish human oversight (Art. 14): Who monitors the system? Who can stop it? Who reviews outputs? Define clear roles and escalation paths.
- Conduct training: Ensure Art. 4 literacy. Train operational users AND management. Document date, content, participants.
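To make the testing bullet above concrete: a simple, defensible starting point for bias testing is comparing outcome rates across subgroups and keeping the numbers. A minimal sketch; the group labels, data layout, and metric are illustrative assumptions, not something the Act mandates:

```python
# Sketch of a subgroup outcome comparison for Art. 15 testing. Group
# labels and the data layout are illustrative assumptions.
from collections import defaultdict


def selection_rates(records: list[dict]) -> dict[str, float]:
    """Share of positive outcomes per subgroup, e.g. for a screening tool."""
    totals: dict[str, int] = defaultdict(int)
    positives: dict[str, int] = defaultdict(int)
    for r in records:
        group = r["group"]                 # e.g. an age band
        totals[group] += 1
        positives[group] += r["selected"]  # 1 if the system selected them
    return {g: positives[g] / totals[g] for g in totals}


# Hypothetical evaluation records; in practice these come from a
# representative test dataset, as described above.
results = selection_rates([
    {"group": "A", "selected": 1},
    {"group": "A", "selected": 1},
    {"group": "A", "selected": 0},
    {"group": "B", "selected": 1},
    {"group": "B", "selected": 0},
    {"group": "B", "selected": 0},
])
for group, rate in results.items():
    print(f"group {group}: selection rate {rate:.0%}")
```

Record the numbers, the dataset, and the date. That record is the evidence an authority will ask for.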
August 2026: Go-Live Checks and Monitoring
Goal: Compliance demonstrable, monitoring active.
- Registration: High-risk AI systems must be registered in the EU database (Art. 49). Verify that registration is complete.
- Set up monitoring (Art. 72): A post-market monitoring system is mandatory for providers. Deployers must monitor usage and report incidents (Art. 26); a minimal logging sketch follows this list.
- Assemble compliance file: All documents in one place. When the authority asks, you must be able to deliver.
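For the monitoring bullet, an append-only incident log you could hand to an authority is a reasonable day-one setup. A minimal sketch; the field names are illustrative, and note that the Act's formal serious-incident reporting duties for providers sit in Art. 73:

```python
# Sketch of a minimal deployer-side incident log (Art. 26). Field names
# are illustrative; formal serious-incident reporting by providers is
# governed by Art. 73.
import json
from datetime import datetime, timezone


def log_incident(system: str, description: str, serious: bool,
                 path: str = "ai_incidents.jsonl") -> None:
    """Append one incident record to a JSON-lines file."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "description": description,
        "serious": serious,  # serious incidents trigger reporting duties
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")


log_incident(
    "CV screening tool",
    "Unusually low selection rate for one applicant group",
    serious=False,
)
```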
Provider vs. Deployer: Two Different Checklists
The AI Act strictly distinguishes between providers and deployers. Your obligations are different.
You're a Provider if you:
- develop an AI system and place it on the market under your own name (Art. 3, point 3)
- substantially modify an existing system (then you become a provider yourself, Art. 25)
Your core obligations: Risk management system, technical documentation, conformity assessment, CE marking, EU database registration, post-market monitoring.
You're a Deployer if you:
- use a third-party provider's AI system within your organization (Art. 3, point 4)
Your core obligations (Art. 26): Use according to instructions, ensure human oversight, input data quality, monitoring, incident reporting, data protection impact assessment (where required), inform affected persons.
Be careful with the distinction: If you deploy a purchased AI system under your own name, change its intended purpose, or substantially modify it, you become a provider yourself (Art. 25(1)). That shifts obligations significantly.
Quick Wins - What You Can Do THIS WEEK
You don't need to wait for a big project kickoff. Five things you can do immediately:
- Art. 5 quick check: Go through the list of prohibited practices. Are you using emotion recognition, social scoring, or subliminal manipulation anywhere? If yes: stop immediately. It's been illegal since February 2025.
- List AI systems: Send an email to all department heads: "Which tools with AI functionality are you using?" Not a perfect inventory, but a starting point.
- Review contracts with AI providers: Does anything mention compliance obligations, documentation, liability? If not, it needs to be added.
- Plan Art. 4 training: Identify employees who operationally use AI systems and plan basic training. An internal one-hour workshop can be enough.
- Establish accountability: Who in the company is responsible for AI compliance? If the answer is "nobody," change that today.
The Digital Omnibus: No Reason to Wait
In November 2025, the European Commission proposed postponing certain AI Act deadlines as part of the Digital Omnibus. This proposal is not applicable law. It still has to pass the European Parliament and Council, and even if adopted, it's unclear which deadlines would be affected and to what extent.
My clear advice: Plan based on current law. If a deadline extension comes, you'll have more time for fine-tuning. If not, you're prepared. Those waiting now for a possible postponement are playing compliance roulette.
When You Need External Help - and When You Don't
Not every company needs Big Four consulting for AI compliance. Here's my honest assessment:
You can handle internally:
- Creating the AI inventory (bringing IT and business units together)
- Conducting Art. 4 training (with good materials)
- Reviewing existing contracts with AI providers
- Clarifying internal responsibilities
- Integrating monitoring processes into existing compliance structures
External support makes sense for:
- Classifying complex systems: Is your system really "high-risk"? Borderline cases are tricky.
- Conformity assessment for providers: If you're a provider yourself, conformity assessment (Art. 43) is demanding.
- Technical documentation per Annex IV: The level of detail is high, especially for self-developed systems.
- Automatable compliance processes: If you have many AI systems, a tool-based approach beats manual Excel lists.
- Regulatory edge cases: Art. 6(3) allows providers, under certain conditions, to argue that their system doesn't pose high risk despite Annex III classification. That requires solid legal reasoning.
Conclusion
The EU AI Act is not a paper tiger. Two stages are already in effect, the third comes in August. Four months is tight, but doable if you start today.
The most important step is the first one: Know which AI systems you have and which risk class they fall under. Everything else builds on that.
Those who start now will make it. Those waiting for the Digital Omnibus will have a problem in August.