The EU AI Act entered into force on 1 August 2024. Since then, various obligations have been coming into effect step by step - and with them, the possibility of significant fines. Anyone who ignores deadlines and requirements risks penalties running into millions. This article explains which violations trigger which sanctions, how authorities proceed, and what companies should do now.
The Penalty Tiers Under Article 99
The EU AI Act defines three categories of violations, each carrying different penalties. The amount depends on the severity of the breach - and may be calculated based on a company's global annual turnover.
Category 1: Prohibited AI Practices (Article 5)
Anyone who places on the market, puts into service, or uses an AI system prohibited under Article 5 commits the most serious category of violation. This carries fines of up to €35 million or 7% of global annual turnover, whichever is higher.
These prohibitions have been in effect since 2 February 2025. Prohibited practices include:
- Manipulative AI systems that influence people's behavior in ways that cause harm to themselves or others
- Social scoring that leads to detrimental or disproportionate treatment (the ban covers public and private actors alike)
- Real-time remote biometric identification in public spaces by law enforcement (with narrow exceptions)
- AI systems that exploit vulnerabilities of persons (age, disability, socioeconomic situation)
An example: A company develops an app that specifically identifies emotionally vulnerable users and pushes them toward purchases through personalized dark patterns. If a national supervisory authority classifies this as a prohibited manipulative practice under Article 5(1)(a), it can impose a fine of up to €35 million or 7% of group turnover.
Category 2: Violations of High-Risk Requirements
Medium-severity violations primarily concern high-risk AI systems. These carry fines of up to €15 million or 3% of global annual turnover.
This category covers the core requirements for high-risk systems in Articles 8 to 15 (data quality, documentation, transparency, human oversight, cybersecurity), enforced through the corresponding obligations of providers, deployers, importers, distributors, and authorized representatives.
From 2 August 2026 - just a few months away - these requirements will be fully enforceable. High-risk AI systems under Annex III must be compliant by then. At the same time, transparency obligations from Article 50 apply to certain AI systems, such as chatbots or deepfakes.
An example: An HR tech provider deploys an AI-powered applicant management system classified as high-risk under Annex III. The system never underwent conformity assessment, there is no risk management system per Article 9, and the technical documentation required by Article 11 is missing. The competent market surveillance authority can impose fines of up to €15 million or 3% of turnover.
Category 3: False or Incomplete Information
Anyone who provides false, incomplete, or misleading information to authorities upon request risks fines of up to €7.5 million or 1% of global annual turnover.
This category is deliberately broad and covers all situations where companies fail to cooperate honestly with supervisory authorities. This can happen during market surveillance measures, information requests, or investigations.
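The "whichever is higher" logic behind the three tiers can be sketched in a few lines of Python. This is purely an illustrative calculation, not anything defined in the regulation itself; the tier names and the helper function are hypothetical:

```python
# Illustrative sketch of the Article 99 fine caps.
# Tier amounts: fixed maximum in euros, plus a percentage of
# global annual turnover - the applicable cap is the HIGHER of the two.
FINE_TIERS = {
    "prohibited_practice": (35_000_000, 0.07),   # Art. 5 violations
    "high_risk_obligation": (15_000_000, 0.03),  # high-risk requirements
    "false_information": (7_500_000, 0.01),      # misleading info to authorities
}

def max_fine(tier: str, global_annual_turnover: float) -> float:
    """Return the upper fine limit for a given tier and turnover."""
    fixed, pct = FINE_TIERS[tier]
    return max(fixed, pct * global_annual_turnover)

# A group with €1 billion in global annual turnover:
print(max_fine("prohibited_practice", 1_000_000_000))  # 70_000_000.0 - 7% exceeds €35M
print(max_fine("false_information", 100_000_000))      # 7_500_000 - fixed cap dominates
```

For large groups the turnover-based percentage quickly overtakes the fixed amount; for smaller companies the fixed caps are the binding limit.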
Special Rules for SMEs and Startups
Article 99(6) explicitly provides that for small and medium-sized enterprises and startups, each fine is capped at the lower of the fixed amount and the turnover-based percentage, and their economic viability must be taken into account. Authorities should act proportionally.
In practice, this means: A startup with 50 employees and €2 million annual turnover will not face the same fine as a tech giant with billions in revenue - even for comparable violations. For an SME, the lower of the two caps applies, and authorities have discretion to choose sanctions that are effective and dissuasive without being existentially threatening.
Penalties for Union Institutions
EU institutions and bodies are also subject to the regulation. Article 100 sets reduced caps for them:
- Up to €1.5 million for violations of prohibited practices
- Up to €750,000 for violations of high-risk requirements
- Up to €375,000 for false information
These amounts seem low compared to corporate sanctions, but they are substantial for public entities and are meant to ensure that state actors also take the regulation seriously.
Who Imposes the Fines?
Enforcement of the EU AI Act lies with national market surveillance authorities. In Germany, this is likely to be the Federal Network Agency (Bundesnetzagentur), which is already responsible for other digital laws (such as the Digital Services Act).
These authorities have extensive powers:
- They can require companies to provide information
- They may conduct on-site inspections
- They can temporarily ban market access
- They impose fines
Important: Authorities do not operate in isolation. They are part of the European Artificial Intelligence Board (AI Board) and coordinate across borders. A violation in one member state can become known in other countries - and lead to consequences there.
What Triggers an Investigation?
Supervisory authorities rarely become active without cause. Typical triggers include:
- Complaints from affected individuals or consumer protection organizations
- Media reports about problematic AI applications
- Tips from competitors
- Random market surveillance checks
- Reports from other authorities (e.g., data protection supervisors)
A company that publicly advertises an AI system while violating transparency obligations will quickly attract attention. The same applies to AI systems used in sensitive areas - such as HR, credit scoring, or law enforcement.
The Enforcement Timeline
The EU AI Act entered into force in one piece, but its provisions become applicable in stages. For companies, knowing the deadlines is crucial:
2 February 2025 (already effective)
Prohibitions under Article 5 and AI literacy obligations under Article 4 apply. Violations of the prohibitions are punishable with fines of up to €35 million or 7% of turnover - although the penalty provisions themselves only became applicable on 2 August 2025.
2 August 2025 (already effective)
Obligations for providers of general-purpose AI models (GPAI) under Articles 51 to 56 are enforceable. This includes transparency obligations, documentation, and - for models with systemic risk - model evaluations.
2 August 2026 (in four months)
High-risk requirements for AI systems under Annex III become fully enforceable. At the same time, transparency obligations from Article 50 apply (e.g., labeling AI-generated content).
2 August 2027
Requirements for AI systems embedded in regulated products under Annex I become enforceable.
Companies should not assume authorities will only become active from these deadlines. Market surveillance is already happening - and anyone who fails to have a system compliant by the deadline risks a fine from day one.
Digital Omnibus: Changes in the Pipeline
The European Commission proposed the so-called Digital Omnibus in November 2025 - a package that includes adjustments to the EU AI Act. This is currently only a proposal, not law.
A key change concerns enforcement: The Digital Omnibus would allow the Commission itself to intervene and impose fines in cases of serious cross-border violations - similar to the GDPR. Currently, this power lies exclusively with national authorities.
For companies, this would mean: A violation could be pursued not only at national level, but directly from Brussels. Negotiations between Commission, Parliament, and Council are ongoing. Adoption is realistically not before late 2026 or early 2027.
Until then: National market surveillance authorities are the key players.
How Fines Are Calculated
Article 99 does not provide a rigid formula but gives authorities discretion. Typical factors that influence calculation:
- Nature and severity of the violation
- Duration and frequency
- Intent or negligence
- Remedial measures already taken
- Previous violations
- Willingness to cooperate with the authority
- Impact on particularly vulnerable groups (e.g., children)
Anyone who self-reports a violation, quickly remedies it, and cooperates transparently with the authorities has a good chance of a lower penalty. Those who try to withhold information or obstruct an investigation should expect the full force of the sanctions regime.
Practical Examples
Example 1: Chatbot Without Labeling
An e-commerce company deploys an AI-powered chatbot that advises customers. The chatbot is not labeled as AI. From 2 August 2026, this violates Article 50(1). The competent authority orders the company to remedy the situation. The company does not respond. The authority imposes a fine of €500,000 for the transparency violation (up to €15 million or 3% of turnover would be possible in this tier).
Example 2: Manipulative AI App
A provider develops an app that uses psychological tricks to push users toward in-app purchases. The app uses AI to specifically target people in financially difficult situations. The supervisory authority classifies this as a prohibited practice - exploitation of vulnerabilities due to a person's social or economic situation under Article 5(1)(b). Fine: €20 million (below the cap of €35 million, but still substantial).
Example 3: Missing Documentation for High-Risk AI
A software provider sells an AI-based applicant management system that qualifies as a high-risk system under Annex III. During a market surveillance inspection, the authority finds: no technical documentation, no risk management system, no records on data quality. The provider cooperates, immediately takes the system offline, and submits the missing documentation. Fine: €2 million - well below the possible cap because the company cooperated.
What Companies Should Do Now
- Conduct AI inventory: Which AI systems are in use? Do they fall under the regulation? Are they high-risk?
- Watch the deadlines: 2 August 2026 is approaching. High-risk systems must be compliant by then.
- Create transparency: Chatbots, deepfakes, emotion recognition systems - all must be labeled from August 2026.
- Build documentation: Technical documentation, risk management system, records on data quality and testing - these are not optional extras, but mandatory.
- Reach out to supervisory authorities: If uncertain, proactively contact the competent authority. This shows good faith and can be helpful if issues arise.
- Establish internal processes: Compliance is not a one-time project. It requires ongoing monitoring, updates, and training.
Conclusion
The fines in the EU AI Act are not an empty threat. The caps of up to €35 million or 7% of global turnover even exceed GDPR penalties. National supervisory authorities have extensive powers and will use them.
Those who act now can minimize risks. Those who wait until the authority knocks on the door will pay - in the worst case, with a multi-million amount.
The EU AI Act is more than a set of rules. It is a signal: AI must be safe, transparent, and trustworthy. Companies that take this seriously are not only legally on the safe side - they also build trust with customers and partners. And that is priceless in the long run.