EU AI Act · April 2, 2026 · 12 min

Documentation Requirements for High-Risk AI: What You Actually Need

Technical documentation, risk management, QMS, instructions for use - the AI Act requires a lot of paperwork. Here's the practical overview of what goes where.

The Documentation Landscape: What's Required and Why

Documentation is the backbone of high-risk regulation. The EU AI Act follows a clear principle: if an AI system is deployed in sensitive areas (recruitment, credit scoring, law enforcement, critical infrastructure), every decision must be traceable. Not reconstructed after the fact, but documented from the start.

The obligations are spread across multiple articles that form an integrated system:

  • Technical documentation under Art. 11 in conjunction with Annex IV
  • Risk management system under Art. 9
  • Data governance under Art. 10
  • Logging obligations under Art. 12
  • Transparency obligations under Art. 13
  • Quality management system under Art. 17
  • Conformity assessment under Art. 43
  • EU declaration of conformity under Art. 47
  • Registration in the EU database under Art. 49

That sounds like a lot. It is. But most of these requirements overlap in content. Build solid technical documentation and you've already covered 60-70% of the other obligations. The key is in the structure.

Technical Documentation (Art. 11 + Annex IV)

Annex IV is the core. It spells out exactly what the technical documentation must contain. The documentation must be created before placing the system on the market and continuously updated.

General Description and Intended Purpose

Every high-risk AI system needs a clear description of its intended purpose. This isn't marketing copy. What's required:

  • The exact function of the system and its intended purpose
  • Name and contact details of the provider
  • The version of the system
  • How the system interacts with hardware or software that isn't part of the system itself
  • Versions of relevant software or firmware

Concretely: if you operate an AI system for candidate pre-screening, "AI for HR" won't cut it. You describe: "System for automated pre-screening of applications based on CVs and cover letters. Evaluates suitability based on X criteria. Output: ranking score 0-100 and recommendation yes/no/maybe. Not intended for: final hiring decisions without human review."
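The intended-purpose description above can also be kept as a structured, versionable record next to the system itself, so it stays in sync with releases. A minimal sketch; all field names and values are illustrative, not terminology prescribed by the Act:

```python
# Hypothetical structured intended-purpose record (field names are our own
# choice, not mandated by Annex IV). Versioning it with the system keeps the
# description from drifting out of date.
intended_purpose = {
    "system": "Candidate pre-screening assistant",   # illustrative name
    "version": "2.3.1",
    "function": "Automated pre-screening of applications based on CVs and cover letters",
    "output": "Ranking score 0-100 and recommendation yes/no/maybe",
    "not_intended_for": ["Final hiring decisions without human review"],
    "provider_contact": "compliance@example-provider.eu",  # placeholder
}

for key, value in intended_purpose.items():
    print(f"{key}: {value}")
```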

Development Process and Design Decisions

Annex IV requires detailed description of the AI system's elements and development process:

  • Methods and steps of development, including use of pre-trained systems or third-party tools
  • Design specifications: overall architecture, algorithmic choices, key assumptions
  • Description of computational infrastructure (hardware requirements, computational capacity, expected latencies)

In practice: document why you chose a particular model. Why gradient boosting instead of neural networks? Why GPT-4 as base instead of Llama? These decisions must be traceable - not as an academic paper, but as a comprehensible justification.

Training, Validation, and Test Data

Art. 10 governs data governance; Annex IV requires that it be documented. Specifically required:

  • Description of the training, validation, and test datasets used
  • Data origin, scope, and key characteristics
  • How data was procured and selected
  • Labeling procedures, data cleaning, enrichment
  • Assessment of availability, quantity, and suitability of datasets
  • Examination for possible biases and measures to address them

This means: you need a data sheet per dataset. Not 200 pages, but enough for an auditor to trace where the data comes from and why it's suitable.
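One way to keep such a data sheet consistent across datasets is a small template. Here is a hypothetical sketch mirroring the Annex IV bullet points above; the class and field names are our own, not mandated terminology:

```python
from dataclasses import dataclass, field

# Hypothetical "datasheet per dataset" template. One instance per training,
# validation, and test dataset; every field maps to an Annex IV bullet above.
@dataclass
class DatasetSheet:
    name: str
    origin: str                 # where the data comes from
    size: int                   # number of records
    collection_method: str      # how data was procured and selected
    labeling_procedure: str
    cleaning_steps: list = field(default_factory=list)
    suitability_assessment: str = ""
    bias_findings: list = field(default_factory=list)
    mitigation_measures: list = field(default_factory=list)

# Illustrative example values:
sheet = DatasetSheet(
    name="training_v4",
    origin="Internal ATS exports, 2019-2024",
    size=48_000,
    collection_method="Stratified sample across job families",
    labeling_procedure="Dual annotation by HR staff, adjudicated disagreements",
    cleaning_steps=["deduplication", "PII redaction"],
    suitability_assessment="Covers all target roles; sparse for senior positions",
    bias_findings=["gender imbalance 68/32 in engineering roles"],
    mitigation_measures=["reweighting", "exclusion of proxy variables"],
)
```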

Performance Metrics and Known Limitations

The documentation must include:

  • Metrics for assessing accuracy, robustness, and cybersecurity (Art. 15)
  • Known and foreseeable limitations of the system
  • Expected results and degree of predictability
  • Specifications for input data (what data is needed, in what format)

Practically speaking: accuracy, precision, recall, F1-score - plus indication of where performance varies across subgroups. A creditworthiness assessment system that performs 15% worse for self-employed individuals than employees must document this. Don't hide it.
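The subgroup point above is mechanical to check: compute each metric overall and per group, and record the gap rather than hiding it. A minimal sketch with toy data (the numbers and the "employment type" attribute are illustrative):

```python
# Sketch of a per-subgroup performance check. The data is toy data; the point
# from the text is to compute metrics per subgroup and document the gap.

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the ground truth."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# (y_true, y_pred) pairs split by a hypothetical "employment_type" attribute
results = {
    "employed":      ([1, 0, 1, 1, 0, 1, 1, 0], [1, 0, 1, 1, 0, 1, 0, 0]),
    "self_employed": ([1, 0, 1, 1, 0, 1, 1, 0], [1, 1, 0, 1, 0, 0, 0, 0]),
}

scores = {group: accuracy(t, p) for group, (t, p) in results.items()}
gap = max(scores.values()) - min(scores.values())

# The gap goes into the Annex IV performance section, not under the rug.
print(scores, f"gap={gap:.2f}")
```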

Measures for Human Oversight

Art. 14 governs human oversight; Annex IV requires its documentation:

  • What measures for human oversight are provided
  • How the oversight person can override, stop, or correct the system
  • What technical means support oversight

An example: for a system for automated document review in insurance, you document that every rejection decision must be confirmed by a claims handler, that the handler can see the evaluation reasons, and that they can override the system decision with one click.

Accuracy, Robustness, Cybersecurity

Art. 15 requires that high-risk AI systems achieve an appropriate level of accuracy, robustness, and cybersecurity. Documentation must cover:

  • Tested and expected accuracy levels
  • Measures against errors, failures, and inconsistencies
  • Resilience against manipulation attempts (adversarial attacks)
  • Technical redundancy and failover concepts
  • Cybersecurity measures against unauthorized access and data manipulation

Risk Management Documentation (Art. 9)

The Iterative Risk Management Process

Art. 9 requires a risk management system that accompanies the AI system throughout its lifecycle. This isn't a one-time assessment before launch. It's an ongoing process with four phases:

  1. Identification and analysis of known and reasonably foreseeable risks
  2. Assessment of risks that may arise during intended use and reasonably foreseeable misuse
  3. Assessment of additional risks based on post-market monitoring (Art. 72)
  4. Appropriate and targeted risk mitigation measures

What Risk Assessment Looks Like in Practice

A risk assessment for a high-risk AI system isn't an abstract document. It contains concretely:

  • Risk catalog: Each identified risk with description, affected population group, and damage potential
  • Likelihood and severity: Quantified or categorized (high/medium/low)
  • Mitigation measures: What was done against each risk?
  • Test protocols: How was it verified that the measure works?

For an AI system in recruitment, a risk entry might read: "Risk: Gender-based discrimination through historically biased training data. Severity: High. Likelihood: Medium. Measure: Bias audit on protected attributes, regular fairness testing, exclusion of proxy variables. Verification: Quarterly adverse impact test using 4/5 rule."
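The 4/5 (80%) rule named in that verification step is simple arithmetic: each group's selection rate must be at least 80% of the highest group's rate. A minimal sketch with hypothetical numbers:

```python
# The 4/5 (80%) rule as a minimal check. Selection rates below are
# illustrative; in practice they come from the system's actual decisions.

def four_fifths_check(selection_rates):
    """Return groups whose selection rate is below 4/5 of the best group's rate."""
    benchmark = max(selection_rates.values())
    return {g: r for g, r in selection_rates.items() if r < 0.8 * benchmark}

rates = {"men": 0.40, "women": 0.28}   # hypothetical selection rates
flagged = four_fifths_check(rates)
print(flagged)  # women: 0.28 < 0.8 * 0.40 = 0.32, so the group is flagged
```

A flagged group doesn't prove discrimination on its own, but it triggers the documented mitigation and re-testing loop described in the risk entry.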

Residual Risks

Art. 9(4) is clear: remaining risks must be documented. No system is risk-free. Anyone claiming to have eliminated all risks either hasn't looked properly or is documenting incompletely.

Residual risks are accepted with justification and communicated to deployers in the instructions for use.

Quality Management System (Art. 17)

What QMS Means for AI Concretely

Art. 17 requires providers to have a quality management system. This isn't an ISO 9001 certificate on the wall. It's a documented system of strategies, procedures, and instructions covering the following areas.

Minimum Requirements

The QMS must include at least:

  • A strategy for compliance with regulatory requirements
  • Procedures for design development and control
  • Procedures for development, quality control, and quality assurance
  • Testing and validation procedures (before, during, and after development)
  • Technical specifications and standards
  • Systems and procedures for data management (including Art. 10)
  • The risk management system under Art. 9
  • Post-market monitoring under Art. 72
  • Procedures for reporting serious incidents (Art. 73)
  • Communication with authorities and notified bodies
  • Document management and retention system
  • Resource management and responsibilities
  • Accountability framework with clear responsibilities

In practice, a well-structured 15-25 page document addressing these points suffices. Not a consultant's novel.

Instructions for Use (Art. 13)

The instructions for use are directed at deployers of your system. Art. 13 requires that they clearly and comprehensibly contain the following information:

  • Name and contact details of the provider
  • Performance characteristics, capabilities, and limitations of the system
  • Intended purpose and contraindications (what the system is not intended for)
  • Changes the provider has made during the lifecycle
  • Measures for human oversight, including technical measures
  • Expected lifetime and necessary maintenance measures
  • Specifications for input data
  • Information on training and validation data (where relevant)

Instructions for use are the link between provider and deployer. If you document incompletely here, you shift compliance risks to your customers - and to yourself, because the provider is liable for incomplete information.

Logging and Monitoring Obligations (Art. 12 + Art. 72)

Art. 12 requires that high-risk AI systems enable automatic logging of events. Logs must include at least:

  • Time of each use (start and end)
  • Reference database against which input data is checked
  • Input data that led to a search
  • Identification of persons involved in human oversight
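The Act specifies what to log, not the format. A minimal sketch of an event record covering the four points above; the JSON schema and field names are our own choice, purely illustrative:

```python
import json
from datetime import datetime, timezone

# Hypothetical Art. 12-style event log record. Schema and field names are
# illustrative; the Act prescribes the content, not the representation.
def log_event(reference_db, input_ref, oversight_user):
    record = {
        "start": datetime.now(timezone.utc).isoformat(),
        "end": None,                      # filled in when the use ends
        "reference_database": reference_db,
        "input_data_ref": input_ref,      # pointer to input, not raw personal data
        "oversight_person": oversight_user,
    }
    return json.dumps(record)

print(log_event("applicant_db_v7", "inputs/4711.json", "j.doe"))
```

Storing a pointer to the input rather than the input itself keeps the log compatible with data-minimization obligations while still making each use traceable.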

Art. 72 supplements this with post-market monitoring: the provider must establish a system that systematically collects and analyzes data on the AI system's performance throughout its lifecycle. Results feed back into the risk management system.

Deployer Obligations (Art. 26 + Art. 27 FRIA)

Deployers aren't just passive users. Art. 26 requires their own documentation obligations:

  • Ensuring that input data is relevant and sufficiently representative
  • Monitoring operation according to instructions for use
  • Retaining automatically generated logs (at least 6 months, unless otherwise regulated)
  • Conducting a Fundamental Rights Impact Assessment (FRIA) under Art. 27 before deployment

The FRIA under Art. 27 is mandatory for deployers that are public sector bodies or private deployers providing public services. It covers:

  • Description of processes in which the system is deployed
  • Duration and frequency of intended use
  • Categories of affected persons
  • Specific risks to fundamental rights mentioned in Art. 27
  • Description of measures for human oversight
  • Measures if identified risks materialize

Practical Template: What a Documentation Folder Looks Like

A pragmatic folder structure for a high-risk AI system:

```
📁 [System_Name]_Documentation/
├── 01_General/
│   ├── System_Description_and_Intended_Purpose.pdf
│   ├── Version_History.xlsx
│   └── Provider_Contact_Details.pdf
├── 02_Technical_Documentation/
│   ├── Architecture_and_Design.pdf
│   ├── Algorithm_Decisions.pdf
│   ├── Infrastructure_Specifications.pdf
│   └── Third_Party_Components.pdf
├── 03_Data/
│   ├── Dataset_Training_Data.pdf
│   ├── Dataset_Validation_Data.pdf
│   ├── Dataset_Test_Data.pdf
│   ├── Bias_Analysis.pdf
│   └── Data_Governance_Procedures.pdf
├── 04_Risk_Management/
│   ├── Risk_Assessment_v[X].pdf
│   ├── Risk_Catalog.xlsx
│   ├── Residual_Risks.pdf
│   └── Mitigation_Measures.pdf
├── 05_Performance_and_Tests/
│   ├── Performance_Metrics.pdf
│   ├── Test_Protocols.pdf
│   ├── Known_Limitations.pdf
│   └── Fairness_Audit.pdf
├── 06_Human_Oversight/
│   ├── Oversight_Concept.pdf
│   └── Override_Procedures.pdf
├── 07_QMS/
│   ├── Quality_Management_Manual.pdf
│   └── Process_Descriptions.pdf
├── 08_Instructions_for_Use/
│   └── Instructions_for_Use_v[X].pdf
├── 09_Logging_and_Monitoring/
│   ├── Logging_Concept.pdf
│   └── Post_Market_Monitoring_Plan.pdf
├── 10_Conformity/
│   ├── Conformity_Assessment.pdf
│   ├── EU_Declaration_of_Conformity.pdf
│   └── EU_Database_Registration.pdf
└── 11_FRIA/ (if applicable)
    └── Fundamental_Rights_Impact_Assessment.pdf
```

This structure isn't a standard. It's a proposal that covers all requirements and is navigable for auditors.

Tools vs. Manual Documentation

Let me be direct: you can fulfill documentation obligations with Word, Excel, and a structured file folder. The EU AI Act doesn't prescribe specific software.

But the question is whether that scales.

When manual documentation works:

  • You have one or two high-risk AI systems
  • The team is small and responsibilities clear
  • Systems rarely change

When you should consider tools:

  • Multiple AI systems in operation simultaneously
  • Regular model updates that trigger new documentation cycles
  • Different teams (data science, legal, compliance) need to work on the same documentation
  • You need versioning and audit trails beyond "filename_v3_final_FINAL.docx"

There are now specialized platforms for AI governance and AI Act compliance. Some are mature, many aren't yet. Critically evaluate whether a platform actually maps the Annex IV documentation scope or just delivers a pretty dashboard.

My pragmatic advice: start now with the structure, not the tool. The folder structure above works with any system. If in six months you realize manual maintenance is too cumbersome, migrate. But the structure and content - you need those either way.

Conclusion

August 2, 2026 is no longer an abstract date. Documentation obligations for high-risk AI systems are extensive, but they're not rocket science. They follow clear logic: describe what your system does. Document how and with what it was developed. Prove you know the risks and address them. Give deployers everything they need for safe operation.

Anyone viewing this as a burdensome obligation has the wrong perspective. Good documentation isn't compliance overhead - it's the foundation for your AI system being trustworthy and deployable at all. And it's your ticket to the European market.

Author

Werner Plutat, Legal Engineer x AI


Does this topic affect your organization?

Let's clarify in 30 minutes how to implement these requirements with working technology, not slide decks.
