EU AI Act · April 2, 2026 · 10 min read

Provider vs. Deployer: What Are My Obligations Under the EU AI Act?

Provider or deployer? The AI Act draws a strict distinction, with different obligations, costs, and risks on each side. Here's how to determine your role.

Why the Distinction Matters

The EU AI Act (Regulation (EU) 2024/1689) entered into force on August 1, 2024. Obligations for high-risk AI systems under Annex III apply from August 2, 2026. Non-compliance risks fines of up to €15 million or 3% of worldwide annual turnover, whichever is higher (Art. 99(4)).

But before you start working through obligations, you need to answer one question: Are you a Provider or a Deployer?

The answer determines everything. A Provider must build a risk management system, conduct conformity assessments, create technical documentation, and operate a quality management system. A Deployer must use the system according to instructions, ensure human oversight, and fulfill certain logging obligations. The effort differs by orders of magnitude.

The problem: Many companies don't know which side they're on. And some are Providers without realizing it.

When You Are a Provider (Art. 3(3))

A Provider is anyone who develops an AI system, or has one developed, and places it on the market or puts it into service under their own name or trademark. The definition in Art. 3(3) is deliberately broad. What matters is not whether you wrote the code yourself. What matters is who brings the system to market and takes responsibility for it.

Typical Provider Scenarios

You develop an AI system yourself. Your data science team trains a model for internal credit risk assessment. Even if you only use the system internally: once it's a high-risk system (creditworthiness assessment falls under Annex III(5)(b)), you're a Provider.

You commission development and market it under your name. An external service provider builds an AI-powered applicant management tool for you. Your logo is on the interface, your customers buy it from you. You're the Provider, not the service provider.

You distribute a white-label product under your own brand. The manufacturer built the system, but externally you appear as the supplier. Art. 3(3) ties the role to the name and trademark under which the system is offered: you're the Provider.

When You Are a Deployer (Art. 3(4))

A Deployer is anyone who uses an AI system under their own authority in the course of a professional activity. Art. 3(4) explicitly excludes personal, non-professional use, so private individuals using an AI app on their phone are not Deployers under the Regulation.

Typical Deployer Scenarios

You purchase a ready-made AI system and deploy it. Your company licenses an AI-powered fraud detection tool from a specialized provider. You configure it for your data, but you don't modify the model. You're a Deployer.

You use a SaaS solution with AI features. Your HR tool has integrated AI-powered pre-screening of applications. You use the function as the provider delivers it. You're a Deployer.

You deploy an open-source model unchanged. You download a pre-trained model and use it for a purpose intended by the model manufacturer, without modification. You're a Deployer.

The Trap: When a Deployer Becomes a Provider (Art. 25)

Art. 25 is the provision that will cause the most problems in practice. It governs when a Deployer must assume full Provider obligations. Three scenarios:

1. You Place the System on the Market Under Your Own Name or Trademark (Art. 25(1)(a))

You purchase an AI system, slap your logo on it, and resell it. Classic rebranding. From that moment, you're a Provider with all obligations. The original manufacturer doesn't disappear from the picture, but you can no longer rely solely on their declaration of conformity.

2. You Change the Intended Purpose of a High-Risk System (Art. 25(1)(b))

The provider developed and documented the system for a specific purpose. You use it for something else. Example: A system for analyzing customer behavior is repurposed for employee evaluation. The original conformity assessment doesn't cover this use case. You become the Provider for the new purpose.

3. You Make a Substantial Modification (Art. 25(1)(c))

"Substantial modification" is defined in Art. 3(23): a change not foreseen by the provider in the original conformity assessment that could affect the system's compliance with requirements or changes the intended purpose.

In practice, this means: if you modify a high-risk system such that the original risk assessment no longer applies, you're the Provider for the modified version. This particularly concerns retraining with your own data when it significantly changes system behavior.

Not every small configuration change crosses that threshold, but the bar is lower than many assume. Anyone who fine-tunes a pre-trained model with their own datasets and thereby changes its behavior in a way the original provider didn't anticipate should assume Art. 25 applies.

Provider Obligations in Detail

Anyone classified as a Provider of a high-risk AI system must fulfill a comprehensive set of obligations from August 2, 2026 (Annex III) or August 2, 2027 (Annex I). Here's the complete checklist:

Risk Management System (Art. 9) A continuous, iterative process throughout the system's lifecycle. Identification and analysis of known and foreseeable risks, assessment of potential misuse, implementation of appropriate risk mitigation measures.

Data Governance (Art. 10) Training, validation, and testing data must be prepared according to specific quality criteria. These include relevance, representativeness, accuracy, and completeness. For high-risk systems trained with data, datasets must have appropriate statistical properties, including with regard to the persons or groups on whom the system is intended to be used.

Technical Documentation (Art. 11) Technical documentation must be created before placing on the market to demonstrate conformity. Annex IV lists specific contents: general description, design specifications, monitoring, risk management, change log.

Record-Keeping / Logging (Art. 12) High-risk AI systems must be technically capable of automatically recording events (logs). Logs must enable traceability of system behavior. Minimum requirements are defined in Art. 12(2).
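If you're the one building that logging capability, one simple pattern is append-only, structured event records that tie each output to a model version and an input reference. Here's a minimal Python sketch; the field names and granularity are my assumptions, not something Art. 12 prescribes:

```python
import json
import logging
from datetime import datetime, timezone

# Illustrative sketch only: the schema below is an assumption about what
# "traceability of system behavior" could look like in practice.
logger = logging.getLogger("ai_system_events")
logger.setLevel(logging.INFO)
logger.addHandler(logging.FileHandler("ai_events.jsonl"))

def log_event(event_type: str, model_version: str, input_ref: str, outcome: str) -> None:
    """Append one traceable event record as a JSON line."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": event_type,        # e.g. "prediction", "override", "error"
        "model_version": model_version,  # ties the event to a specific model state
        "input_ref": input_ref,          # reference to the input, not the raw data
        "outcome": outcome,
    }
    logger.info(json.dumps(record))

# Example: one credit-scoring prediction (hypothetical identifiers)
log_event("prediction", "credit-scoring-v1.3", "application-2026-0001", "score=0.72")
```

Structured JSON lines also make it easier to hand logs to a market surveillance authority or to meet the Deployer-side retention duty discussed below.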

Transparency and Instructions for Use (Art. 13) The system must be designed so Deployers can understand and use it appropriately. Instructions for use must include intended purpose, performance metrics, known limitations, and measures for human oversight.

Human Oversight (Art. 14) The system must be designed to allow effective monitoring by natural persons. Oversight measures must be appropriate to the risk, autonomy level, and deployment context.

Accuracy, Robustness, and Cybersecurity (Art. 15) High-risk systems must achieve and maintain an appropriate level of accuracy, robustness, and cybersecurity throughout their lifecycle. Performance metrics must be specified in the instructions for use.

Quality Management System (Art. 17) Providers must establish a documented quality management system. This includes: strategies for regulatory compliance, design and development techniques and procedures, data management procedures, risk management, post-market monitoring, and corrective actions.

Conformity Assessment (Art. 43) A conformity assessment must be conducted before placing on the market. Depending on system type, internal control suffices (Annex VI) or a notified body must be involved (Annex VII). Biometric identification typically requires external assessment.

EU Declaration of Conformity and CE Marking (Art. 47, 48) After successful conformity assessment, the Provider issues an EU declaration of conformity and affixes the CE marking to the system. The declaration of conformity must contain the information specified in Annex V.

Registration (Art. 49) High-risk AI systems must be registered in the EU database before placing on the market. No registration, no lawful market placement.

Deployer Obligations in Detail

Deployer obligations are significantly leaner, but by no means trivial. Core provisions are in Art. 26 and 27.

Use According to Instructions (Art. 26(1)) Deployers must use the system as the Provider's instructions specify. This sounds trivial but has practical consequences: anyone using a system outside documented parameters violates their Deployer obligations and cannot rely on the Provider in case of damage.

Human Oversight (Art. 26(2)) Persons exercising human oversight must have the necessary competence, training, and authority. The Deployer must ensure these persons are actually capable of performing the oversight function.

Input Data (Art. 26(4)) Where the Deployer has control over input data, they must ensure it is relevant and sufficiently representative for the system's intended purpose.

Monitoring and Reporting Obligations (Art. 26(5)) Deployers must monitor system operation. If they identify a risk within the meaning of Art. 79, they must inform the Provider and the competent market surveillance authority. Serious incidents require direct reporting.

Log Retention (Art. 26(6)) Deployers must retain logs automatically generated by the system - for at least six months, unless other legal provisions require longer periods.
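For orientation, here's a rough retention check under the assumption that logs are stored as date-stamped files in one directory. Six months is only the Art. 26(6) floor, so anything older should be reviewed against other retention duties rather than deleted automatically:

```python
from datetime import datetime, timedelta, timezone
from pathlib import Path

# Assumption: logs live as files like "events-2026-08-02.jsonl" in one folder.
MIN_RETENTION = timedelta(days=183)  # roughly six months, the Art. 26(6) minimum

def retention_candidates(log_dir: str) -> list[Path]:
    """Return log files older than the six-month minimum; candidates for review, not auto-deletion."""
    now = datetime.now(timezone.utc)
    old = []
    for path in sorted(Path(log_dir).glob("events-*.jsonl")):
        modified = datetime.fromtimestamp(path.stat().st_mtime, tz=timezone.utc)
        if now - modified > MIN_RETENTION:
            old.append(path)
    return old
```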

Fundamental Rights Impact Assessment (Art. 27, Art. 26(9)) Deployers that are public bodies or private entities providing public services, as well as deployers of high-risk systems under Annex III(5)(b) and (c), must conduct a fundamental rights impact assessment before first use (Art. 27). This goes beyond a GDPR DPIA and covers the specific impacts of the AI system on affected persons. Where a GDPR DPIA is required, Deployers use the information in the Provider's instructions for use to carry it out (Art. 26(9)).

Information Obligations Toward Affected Persons (Art. 26(11-12)) Deployers operating in public administration or for certain high-risk systems (Annex III(1, 6, 7, 8)) must publicly register use of the AI system. For emotion recognition or biometric categorization systems, affected persons must be informed.

AI Literacy (Art. 4) Since February 2, 2025, the AI literacy obligation applies to all AI systems, not just high-risk ones. Deployers must ensure their staff operating and using AI systems have a sufficient level of AI literacy.

Practical Scenarios

"We use recruiting tool from Provider X" You deploy a finished AI system from an external provider for pre-screening applicants, without modification. Recruiting tools fall under Annex III(4). You're a Deployer. Your obligations: use according to instructions, human oversight by trained personnel, input data quality, log retention, fundamental rights impact assessment before deployment. The Provider is responsible for conformity assessment, technical documentation, and risk management system.

"We fine-tuned an LLM" You take a pre-trained large language model and retrain it with your own data to use it for a specific high-risk application (e.g., supporting credit decisions). The fine-tuning changes model behavior for a purpose the original model provider didn't specifically foresee. You've made a substantial modification within the meaning of Art. 3(23). Result: You become a Provider under Art. 25 - with all obligations, including conformity assessment and technical documentation for the modified system.

"We built our own system" Your company internally developed an AI system for creditworthiness assessment and deploys it. You're both Provider and Deployer. You must fulfill both obligation catalogs. The fact that the system never leaves the internal sphere doesn't change this - Art. 3(3) also covers putting into service for own purposes.

"We distribute a white-label product" A software manufacturer developed an AI system for document analysis. You license it, integrate it into your platform, and distribute it under your brand name to your customers. Under Art. 25(1)(a) you're a Provider. The manufacturer's declaration of conformity isn't enough. You must conduct your own conformity assessment or contractually ensure the original manufacturer does so in a way that covers your market entry. In practice, you need an ironclad contract with the manufacturer governing responsibilities, information obligations, and liability issues.

What You Should Do Now

Obligations for high-risk AI systems under Annex III apply from August 2, 2026. That sounds like plenty of time. It isn't.

Step 1: Inventory Your AI Systems Create a complete inventory of all AI systems in your organization. Not just the obvious ones, but also those embedded in SaaS tools, purchased platforms, and internal prototypes.
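One pragmatic way to keep that inventory consistent is one structured record per system. The sketch below is a Python dataclass; the fields are my assumptions about what Steps 2 through 4 will need, not a format the AI Act prescribes:

```python
from dataclasses import dataclass, field

# Hypothetical inventory record; adjust fields to your organization.
@dataclass
class AISystemRecord:
    name: str
    vendor: str                          # "internal" for self-developed systems
    embedded_in: str                     # e.g. the SaaS tool or platform it ships inside
    intended_purpose: str                # as documented by the provider
    modified_by_us: bool = False         # fine-tuning, retraining, repurposing
    distributed_under_own_brand: bool = False
    annex_iii_area: str | None = None    # e.g. "4 - employment"; None if not high-risk
    notes: list[str] = field(default_factory=list)

inventory = [
    AISystemRecord(
        name="Applicant pre-screening",
        vendor="Provider X",
        embedded_in="HR suite",
        intended_purpose="Ranking of incoming applications",
        annex_iii_area="4 - employment",
    ),
]
```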

Step 2: Classify Your Role Per System For each system: Are you Provider, Deployer, or both? Review the Art. 25 scenarios in particular. Have you modified, repurposed, or distributed systems under your own brand?
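As a triage aid, the role test described in this article can be sketched as a small function. It mirrors the logic of Art. 3(3), 3(4), and 25 as laid out above, uses made-up parameter names, and is no substitute for a legal assessment of each system:

```python
def classify_role(
    developed_or_commissioned: bool,
    distributed_under_own_brand: bool,
    repurposed_or_substantially_modified: bool,
    used_under_own_authority: bool,
) -> set[str]:
    """Rough Provider/Deployer triage per system; mirrors the article's reasoning, not legal advice."""
    roles: set[str] = set()
    if developed_or_commissioned or distributed_under_own_brand:
        roles.add("provider")   # Art. 3(3); rebranding per Art. 25(1)(a)
    if repurposed_or_substantially_modified:
        roles.add("provider")   # Art. 25(1)(b)-(c): new purpose or substantial modification
    if used_under_own_authority:
        roles.add("deployer")   # Art. 3(4): professional use under your own authority
    return roles

# Example: an in-house credit-scoring system used internally -> both roles
print(classify_role(True, False, False, True))   # {'provider', 'deployer'}
# Example: a purchased tool you fine-tuned with your own data -> both roles
print(classify_role(False, False, True, True))   # {'provider', 'deployer'}
```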

Step 3: Check the Risk Classification Does the system fall under Annex III? Under Annex I? Under the prohibited practices of Art. 5 (which have applied since February 2, 2025)? The risk classification determines your obligation catalog.

Step 4: Identify Your Gaps Compare your current state with requirements. Providers need a risk management system, technical documentation, a quality management system. Deployers need trained oversight personnel, log retention, fundamental rights impact assessments. What's missing?

Step 5: Review Contracts If you're a Deployer relying on Providers: Do existing contracts ensure you can fulfill your obligations? Are you receiving instructions for use, performance metrics, information on intended purpose? If not, contracts need renegotiation.

The EU AI Act penalizes those who start too late. Violations of high-risk obligations risk fines of up to €15 million or 3% of worldwide annual turnover (Art. 99(4)). Violations of prohibited practices: up to €35 million or 7% (Art. 99(3)). Supplying false or incomplete information to authorities: up to €7.5 million or 1% (Art. 99(5)).

The first step is always the same: Know where you stand. Everything else follows from that.


Author

Werner Plutat

Legal Engineer x AI

