EU AI Act · April 3, 2026 · 5 min

GPAI Models Under the EU AI Act: Provider and User Obligations

Since August 2025, special rules apply to General-Purpose AI models. Who is affected, what obligations apply, and what does systemic risk mean?

Since August 2, 2025, the EU AI Act's rules for General-Purpose AI (GPAI) models have been in effect. If you provide a GPAI model, train with it, or integrate it into your systems, you need to understand who has which obligations - and what the consequences of non-compliance are.

What Is a GPAI Model?

According to Article 3(63), a GPAI model is an AI model trained on large amounts of data, displaying significant generality, and capable of performing a wide range of distinct tasks. Typical examples: GPT-4, Claude, Gemini, Llama, Mistral.

In practice, this definition covers all major foundation models that can be used for a wide variety of applications. Whether a model additionally poses systemic risk is governed by Article 51 (see below).

Who Is a GPAI Provider?

GPAI providers under the AI Act include:

- OpenAI (GPT-4, GPT-5)
- Google (Gemini)
- Anthropic (Claude)
- Meta (Llama)
- Mistral AI
- Developers of open-source models like Falcon, BLOOM, etc.

Key criterion: They provide the model and make it accessible to others.

NOT GPAI providers are companies that use a finished model:

- A startup integrating the OpenAI API into its software
- A law firm using Claude for document analysis
- A corporation embedding Gemini into its chatbot

These companies are deployers (operators), not GPAI providers. They have different AI Act obligations, but not the GPAI-specific ones from Articles 51-56.

Core Obligations for All GPAI Providers (Article 53)

Since August 2025, every GPAI provider must:

1. Create and maintain technical documentation

The documentation must include:

- Training methods and data used
- Computational resources for training and operation
- Model architecture and parameters
- Evaluation results
- Risk assessments

Documentation must be provided to the European Commission and national authorities upon request.

2. Provide information to downstream providers

GPAI providers must give developers who build on the model information about:

- Model capabilities and limitations
- Known risks and vulnerabilities
- Requirements for safe use
- Information about data sources

This information must be publicly accessible or available through API documentation.

3. Copyright and training data

According to Article 53(1)(c) and (d), GPAI providers must:

- Create and publish a copyright compliance policy for training data
- Publish a sufficiently detailed summary of the content used for training

This is particularly relevant after several lawsuits were filed against GPAI providers for alleged copyright violations. The obligation applies regardless of whether the model poses systemic risk.

Models with Systemic Risk (Article 51)

Some GPAI models are subject to stricter obligations if they pose systemic risk. This applies when:

- The model was trained with more than 10^25 FLOPs, OR
- The European Commission classifies the model as posing systemic risk based on its capabilities and dissemination.

10^25 FLOPs - what does that mean in practice? FLOPs (floating-point operations) measure the total amount of compute used to train a model. 10^25 FLOPs roughly corresponds to the training effort of:

- GPT-4 (presumably above this threshold)
- Gemini Ultra
- Claude Opus

Models like GPT-3.5, smaller Llama versions, or specialized models often fall below this threshold.
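To get a feel for the threshold, training compute can be estimated with the common scaling-law heuristic of roughly 6 × parameters × training tokens. This rule of thumb comes from the machine-learning literature, not from the AI Act itself, and the parameter and token counts below are purely illustrative:

```python
def training_flops(parameters: float, tokens: float) -> float:
    """Rough training-compute estimate via the common 6*N*D heuristic.

    A widely used approximation from the scaling-law literature -- not a
    calculation method prescribed by the AI Act.
    """
    return 6 * parameters * tokens

# Threshold above which systemic risk is presumed under the AI Act.
SYSTEMIC_RISK_THRESHOLD = 1e25

# Hypothetical model: 1 trillion parameters, 10 trillion training tokens.
flops = training_flops(1e12, 10e12)
print(f"{flops:.1e} FLOPs -> systemic risk presumed: {flops > SYSTEMIC_RISK_THRESHOLD}")
# prints: 6.0e+25 FLOPs -> systemic risk presumed: True
```

The estimate is coarse, but it shows why only the very largest frontier models currently cross the line.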

Additional obligations for models with systemic risk (Article 55):

1. Conduct model evaluations - including adversarial testing
2. Assess and mitigate systemic risks - e.g., abuse potential, cybersecurity risks
3. Report serious incidents - to the AI Office and national authorities
4. Ensure adequate cybersecurity - protection of model weights and infrastructure

These obligations apply in addition to the core obligations from Article 53.

Open-Source Exception (Article 53(2))

GPAI models released under a free and open-source license are exempt from the technical documentation and downstream information obligations (Article 53(1)(a) and (b)) - unless they pose systemic risk. The copyright policy and the training-data summary remain mandatory either way.

This means:

- An open-source model without systemic risk must still publish a copyright policy and a training-data summary, but is exempt from the documentation and information duties
- An open-source model WITH systemic risk (e.g., above 10^25 FLOPs) loses the exemption and carries the full set of obligations, including those of Article 55

Example: Meta releases its Llama models openly. If a version exceeds the FLOPs threshold, the systemic-risk obligations apply despite the open release.
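The interaction between the FLOPs threshold and the open-source carve-out can be captured in a small decision sketch. This is a deliberate simplification for illustration - the legal test has more nuance (for example, the Commission can also designate models below the threshold as posing systemic risk):

```python
def gpai_obligations(training_flops: float, open_source: bool) -> set[str]:
    """Simplified mapping of a GPAI model to its obligation buckets.

    Illustration only -- the AI Act's tests are more nuanced than a
    single FLOPs comparison and an open-source flag.
    """
    systemic_risk = training_flops > 1e25  # presumption threshold

    # Copyright policy and training-data summary apply to every provider.
    obligations = {"copyright policy", "training-data summary"}

    # Open-source models are exempt from these two unless systemic risk.
    if not open_source or systemic_risk:
        obligations |= {"technical documentation", "downstream information"}

    # Systemic-risk models carry the additional obligations.
    if systemic_risk:
        obligations |= {
            "model evaluations",
            "systemic-risk mitigation",
            "incident reporting",
            "cybersecurity",
        }
    return obligations

# Open-source model below the threshold: only the copyright-related duties.
print(sorted(gpai_obligations(5e24, open_source=True)))
# prints: ['copyright policy', 'training-data summary']
```

The same function shows the Llama-style edge case: passing a compute figure above 1e25 with `open_source=True` returns the full obligation set, because the exemption falls away.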

GPAI Code of Practice (Article 56)

A GPAI Code of Practice, developed under the facilitation of the European Commission's AI Office, was published in July 2025. It contains concrete measures through which GPAI providers can fulfill their obligations under Articles 53 and 55.

Important: Adherence to the Code of Practice can be relied on to demonstrate compliance. In effect, those who follow the Code are treated as compliant unless authorities show otherwise.

The Code is voluntary, but practical: it gives GPAI providers legal certainty and concrete guidance. Those who don't participate in the Code must demonstrate compliance with legal obligations by other means.

Enforcement by the AI Office (Article 88)

GPAI models are not supervised by national authorities but by the AI Office of the European Commission. This makes sense: large models have cross-border effects, and a central authority prevents fragmented national approaches.

The AI Office can:

- Request technical documentation
- Initiate investigations
- Demand model evaluations
- Determine violations and impose sanctions

National authorities are only responsible when GPAI models are used in high-risk systems - then they examine the deployer, not the GPAI provider.

Penalties for Violations (Article 101)

Violations of GPAI obligations can be expensive:

- Up to 15 million euros or
- 3% of the company's global annual turnover

The higher amount applies. For large tech corporations, this could mean billions.
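The "whichever is higher" rule is easy to compute; the turnover figures below are illustrative, not actual company data:

```python
def gpai_fine_cap(annual_turnover_eur: float) -> float:
    """Maximum fine for GPAI-obligation violations:
    the higher of EUR 15 million or 3% of global annual turnover."""
    return max(15_000_000, 0.03 * annual_turnover_eur)

# Hypothetical provider with EUR 200 billion annual turnover:
# the 3% prong dominates, so the cap is EUR 6 billion.
print(f"EUR {gpai_fine_cap(200e9):,.0f}")
# prints: EUR 6,000,000,000
```

For a small provider with, say, EUR 100 million turnover, 3% would be only EUR 3 million, so the fixed EUR 15 million cap applies instead.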

Penalties apply to:

- Missing or insufficient technical documentation
- Violation of information obligations toward downstream providers
- Non-fulfillment of systemic-risk obligations
- Failure to report serious incidents

What This Means for Companies

If you develop a GPAI model:

1. Check whether your model meets the GPAI definition
2. Fulfill the core obligations from Article 53 (technical documentation, information duties, copyright policy)
3. Check whether the FLOPs threshold is exceeded
4. If yes: implement the Article 55 obligations
5. Consider participating in the GPAI Code of Practice

If you use a GPAI model (e.g., the OpenAI API): You are not a GPAI provider but a deployer. Your obligations depend on what you use the model for:

- High-risk application (e.g., recruitment, credit assessment): obligations under Chapter III of the AI Act
- Other applications: transparency obligations under Article 50

The GPAI obligations affect OpenAI, Google, or Anthropic - not you.

If you fine-tune a GPAI model: This gets complicated. Fine-tuning can make you a GPAI provider if the result is an independent GPAI model. If you merely adapt an existing model for your use case, you remain a deployer. The line is blurred and depends on the extent of fine-tuning.

Outlook

The GPAI rules have been in force for eight months. The AI Office has already published initial guidance and is working on concrete enforcement measures. Companies should act now:

- GPAI providers: ensure compliance, review the Code of Practice
- Deployers: clarify whether fine-tuning makes you a provider
- Everyone: review contracts with model providers - who bears which responsibility?

The rules are complex, but the principle is simple: those who provide large models must create transparency and manage risks. Those who use them must understand what they're deploying. The AI Act makes both mandatory.


Author

Werner Plutat

Legal Engineer x AI
