Why a Governance Framework Is Not Optional
The EU AI Act came into force on August 1, 2024, with a phased implementation timeline. The AI literacy obligation under Art. 4 has been enforceable since February 2, 2025. The heavy requirements for high-risk AI systems listed in Annex III will be enforced starting August 2, 2026.
For providers of high-risk AI systems, Art. 17 mandates a quality management system. Art. 9 requires a risk management system. Art. 14 demands human oversight. Art. 72 governs post-market monitoring. These aren't vague recommendations - they're concrete obligations with specific requirements for documentation, processes, and responsibilities.
But deployers are affected too. If you operate a high-risk AI system, you must ensure it runs according to the instructions for use, that human oversight functions properly, and that incidents get reported. Many companies underestimate this: they assume compliance is the provider's problem. Wrong. The regulation distributes obligations across both sides.
An AI governance framework answers exactly these questions: Who is responsible? Which processes apply? Which tools support implementation? Without this structure, legal requirements remain abstract - and when an audit comes, impossible to demonstrate.
The Three Pillars: People, Processes, Technology
Governance frameworks rarely fail because of missing technology. They fail because of unclear responsibilities and missing processes. That's why people and processes come before technology - not the other way around.
The three pillars aren't new. Every functioning internal control system (ICS) is based on this principle. The difference with AI governance: the technology you're trying to control is itself complex and dynamic. That requires specific roles, specific processes, and - yes - sometimes specific tools.
People: Roles and Responsibilities
AI Governance Lead / AI Officer
Someone needs to own this. In larger organizations, that's a dedicated role - an AI Officer or AI Governance Lead. In mid-sized companies, this function might sit with the Compliance Officer, Data Protection Officer, or CTO. The title doesn't matter. What matters is that one person carries overall responsibility and reports directly to executive management.
The tasks: building and maintaining the AI register, coordinating risk assessments, serving as the interface to business units, reporting to executive management, acting as the contact point for supervisory authorities. This isn't a side job. For a company with five or more AI systems, this role needs at least 50 percent capacity.
Risk Owner Per AI System
Every AI system needs a risk owner - someone from the business unit who knows the system, understands the deployment context, and owns the risk assessment. In practice, this is often the business unit leader who requested the system.
Example: An AI-powered applicant tracking system. The risk owner is the head of HR, not IT. IT ensures operational stability, but HR understands which decisions the system influences and which risks arise for affected individuals.
Roles for Human Oversight (Art. 14)
Art. 14 requires that high-risk AI systems be designed so that natural persons can effectively oversee them. In practice, this means: for every relevant system, you must define who reviews the outputs, who can intervene, and who has the authority to shut down the system.
This isn't a theoretical exercise. A credit scoring system that automatically issues rejections needs a human who can review and override the decision. This role must be named, trained, and equipped with the necessary access rights.
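What "named, trained, and equipped" means in practice can be captured in a simple record per system. A minimal sketch, assuming you keep these assignments in code or in your register; the field names and the example entries are illustrative, not prescribed by the Act:

```python
from dataclasses import dataclass

@dataclass
class OversightAssignment:
    system: str            # AI system the assignment applies to
    reviews_outputs: str   # who reviews the system's outputs
    can_override: str      # who may intervene and override a decision
    can_shut_down: str     # who has the authority and access rights to stop it
    last_trained: str      # ISO date of the role holder's most recent training

# Illustrative entry for the credit scoring example above.
credit_scoring = OversightAssignment(
    system="credit-scoring",
    reviews_outputs="Senior credit analyst",
    can_override="Head of credit risk",
    can_shut_down="Head of credit risk (deputy: IT operations lead)",
    last_trained="2025-03-14",
)
```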
Training and Competence (Art. 4)
Art. 4 requires that personnel working with AI systems possess a sufficient level of AI literacy. This obligation has been in force since February 2, 2025 - meaning now.
Concretely: training programs for all employees who use, operate, or monitor AI systems. Not generic e-learning modules about "What is AI?", but role-specific qualification. The risk owner needs different competencies than the end user. Both need training, but different kinds.
Document who completed which training when. During an audit by supervisory authorities, this is exactly what will be requested.
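A minimal sketch of such a training record, assuming a plain CSV export is enough for your audit trail; the fields and the example row are illustrative:

```python
import csv
from datetime import date

FIELDS = ["employee", "role", "training_module", "completed_on", "verified_by"]

records = [
    {"employee": "J. Weber", "role": "risk owner",
     "training_module": "Risk classification under the EU AI Act",
     "completed_on": date(2025, 2, 10).isoformat(),
     "verified_by": "AI Governance Lead"},
]

# Written as CSV so it can live next to the AI register and be handed
# to an auditor as-is.
with open("training_records.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(records)
```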
Processes: What Must Be Documented
AI System Inventory and Classification
You can't control what you don't know. The first process is therefore a systematic AI inventory: Which AI systems are deployed in the company? By whom? In which context? With which risk class?
The classification follows the logic of the EU AI Act: prohibited, high-risk (Annex III), limited risk, minimal risk. Most companies will discover that the majority of their AI applications fall into the "minimal risk" category - ChatGPT for email drafts, translation tools, simple automations. But that's precisely why inventorying matters: it identifies the few systems that actually fall under high-risk requirements.
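The first pass can be structured as a handful of screening questions. An illustrative sketch, assuming a deliberately simplified reading of the Act (Art. 5 prohibitions, Art. 6 with Annex III for high-risk, Art. 50 transparency cases, everything else minimal); the final classification still needs legal review:

```python
def classify(prohibited_practice: bool,
             annex_iii_use_case: bool,
             interacts_with_people: bool) -> str:
    """First-pass risk triage; escalate anything non-minimal for legal review."""
    if prohibited_practice:        # e.g. social scoring (Art. 5)
        return "prohibited"
    if annex_iii_use_case:         # e.g. employment, credit, education
        return "high-risk"
    if interacts_with_people:      # e.g. chatbots, generated content (Art. 50)
        return "limited risk"
    return "minimal risk"

print(classify(False, True, True))   # applicant screening -> "high-risk"
```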
Practical tip: Start with a simple query to all business units. "Which tools do you use that are based on AI?" The responses will surprise you. Shadow AI is real - business units purchase and use AI tools without IT or compliance knowledge.
Risk Assessment Methodology (Art. 9)
Art. 9 requires a risk management system that covers the entire lifecycle of a high-risk AI system. This isn't a one-time assessment, but a continuous process: identification, analysis, evaluation, and mitigation of risks.
Define a methodology that fits your company. If you already have risk management for data protection (DPIA under GDPR) or IT security, use the existing structure. Extend it with AI-specific criteria: discrimination risks, transparency, explainability, robustness.
An example of a pragmatic assessment matrix: likelihood (low/medium/high) times impact on affected individuals (low/medium/high). For each identified risk: Which measure will be taken? Who is responsible? By when?
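A minimal sketch of that matrix in code, assuming a three-point scale mapped to 1-3; the score thresholds are illustrative conventions, not regulatory values:

```python
LEVELS = {"low": 1, "medium": 2, "high": 3}

def risk_score(likelihood: str, impact: str) -> int:
    """Likelihood times impact on affected individuals, yielding 1..9."""
    return LEVELS[likelihood] * LEVELS[impact]

def priority(score: int) -> str:
    if score >= 6:
        return "mitigate immediately"
    if score >= 3:
        return "mitigate with deadline"
    return "accept and monitor"

# For each identified risk: measure, owner, deadline - as required above.
risk = {
    "description": "Indirect discrimination in applicant pre-screening",
    "score": risk_score("medium", "high"),              # 2 * 3 = 6
    "measure": "Quarterly bias audit of rejected applications",
    "owner": "Head of HR",
    "due": "2026-03-31",
}
print(risk["score"], "->", priority(risk["score"]))     # 6 -> mitigate immediately
```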
Change Management for AI Systems
AI systems change. Models get updated, training data shifts, deployment context expands. Every substantial change requires re-evaluation: Does the risk classification remain the same? Are the mitigation measures still sufficient?
Define what constitutes a "substantial change." A user interface update typically isn't. A new model or new training data is. Link the change management process to your existing IT change management - separate parallel structures are inefficient.
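That definition can be made executable as a simple gate. A sketch, assuming the distinction drawn above; the change categories are illustrative and should come from your own change taxonomy:

```python
SUBSTANTIAL = {"new_model", "new_training_data", "new_deployment_context"}
NOT_SUBSTANTIAL = {"ui_update", "performance_tuning", "bugfix"}

def requires_reassessment(change_type: str) -> bool:
    """Decide whether a change triggers re-evaluation of the risk assessment."""
    if change_type in SUBSTANTIAL:
        return True
    if change_type in NOT_SUBSTANTIAL:
        return False
    # Unclassified changes force a human decision instead of passing silently.
    raise ValueError(f"Unclassified change type: {change_type}")

print(requires_reassessment("new_model"))   # True -> re-run the risk assessment
```

The deliberate error for unknown change types is the point: anything not yet classified escalates to a person rather than slipping through.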
Incident Reporting and Escalation
What happens when an AI system makes an erroneous or discriminatory decision? Who gets informed? How quickly? What immediate measures kick in?
Define escalation levels. Level 1: The operator reports to the risk owner. Level 2: The risk owner informs the AI Governance Lead. Level 3: Report to executive management and potentially to supervisory authorities. Practice this process. A flowchart in SharePoint that's never been tested is worthless.
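The three levels translate into simple routing rules. A sketch, assuming incident severity is assessed on intake; the recipients and the severity scale are illustrative:

```python
ESCALATION = {
    1: ["risk owner"],                                  # operator reports
    2: ["risk owner", "AI Governance Lead"],
    3: ["risk owner", "AI Governance Lead",
        "executive management", "supervisory authority (if reportable)"],
}

def notify(severity: int) -> list[str]:
    """Return who must be informed for a given incident severity (1-3)."""
    return ESCALATION[max(1, min(severity, 3))]

print(notify(3))
```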
Vendor Management for AI Providers
Most companies don't develop AI systems themselves - they purchase them. This doesn't shift responsibility. As a deployer, you must ensure that your provider meets the requirements of the EU AI Act.
Check during procurement: Does the provider deliver the required technical documentation? Do they provide information on risk classification? Do the instructions for use meet the requirements of Art. 13? Integrate these checkpoints into your existing procurement process - as an extension, not a parallel structure.
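These checkpoints translate directly into a reusable checklist. A minimal sketch, assuming a simple yes/no review during vendor selection; the questions mirror the ones above, and the logging item anticipates Art. 12 (covered below):

```python
PROCUREMENT_CHECKLIST = [
    "Technical documentation provided?",
    "Risk classification stated and justified?",
    "Instructions for use meet Art. 13?",
    "Logs per Art. 12 accessible to the deployer?",
    "Contact point for incidents and updates named?",
]

def open_items(answers: dict[str, bool]) -> list[str]:
    """Return unresolved checkpoints that block procurement sign-off."""
    return [q for q in PROCUREMENT_CHECKLIST if not answers.get(q, False)]

print(open_items({"Technical documentation provided?": True}))
```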
Technology: What You Need (and What You Don't)
Registry and Inventory Tool
For the AI system inventory, you need a central register. This can be a structured Excel spreadsheet, a SharePoint list, or a specialized GRC tool. What matters: it must be maintained, it must be accessible, and it must contain the relevant fields - system name, provider, risk class, risk owner, risk assessment status, last review.
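A minimal sketch of one register entry with exactly these fields - whether it ends up as an Excel row, a SharePoint list item, or a record in code; the field names and values are illustrative:

```python
from dataclasses import dataclass

@dataclass
class RegisterEntry:
    system_name: str
    provider: str
    risk_class: str           # prohibited / high-risk / limited / minimal
    risk_owner: str
    assessment_status: str    # e.g. "open", "in progress", "complete"
    last_review: str          # ISO date of the last periodic review

entry = RegisterEntry(
    system_name="applicant-screening",
    provider="ExampleVendor GmbH",        # hypothetical provider
    risk_class="high-risk",
    risk_owner="Head of HR",
    assessment_status="complete",
    last_review="2025-11-04",
)
```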
For a company with fewer than 20 AI systems, Excel is absolutely sufficient. Once you're managing 50+ systems, conducting regular audits, and coordinating multiple locations, upgrading to a specialized tool makes sense.
Document Management
The EU AI Act requires extensive documentation: risk assessments, conformity declarations, training records, audit results. These documents must be versioned, accessible, and stored in a traceable manner.
Use what you have. SharePoint, Confluence, an existing DMS - everything works as long as the folder structure is clear and access rights are cleanly defined. You don't need an "AI Governance Documentation Tool" for €50,000 per year.
Monitoring and Logging (Art. 12)
Art. 12 requires that high-risk AI systems automatically create logs that enable traceability. This is primarily a requirement for the provider - but as a deployer, you must ensure these logs are accessible and evaluated.
Check during procurement whether the provider offers logging functionality. Define who evaluates the logs and how often. Automated monitoring with alerting makes sense for high-risk systems; for simpler systems, regular manual review suffices.
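A sketch of such a manual review step, assuming the provider exposes logs as structured records; the error-rate threshold is an illustrative assumption, not a regulatory value:

```python
def review_logs(records: list[dict], max_error_rate: float = 0.02) -> str:
    """Periodic log review with a simple alert threshold."""
    errors = sum(1 for r in records if r.get("outcome") == "error")
    rate = errors / len(records) if records else 0.0
    if rate > max_error_rate:
        return f"ALERT: error rate {rate:.1%} above {max_error_rate:.0%} - escalate to risk owner"
    return f"OK: error rate {rate:.1%} - documented in the review log"

# 2 errors in 100 decisions sits exactly at the threshold -> no alert.
print(review_logs([{"outcome": "ok"}] * 98 + [{"outcome": "error"}] * 2))
```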
When Excel Is Enough - and When It Isn't
The honest answer: for most mid-sized companies, Excel is sufficient in the initial phase. A well-structured spreadsheet with the right columns, clear responsibilities, and regular maintenance beats any expensive tool that nobody uses.
The switch to specialized software makes sense when: you're managing more than 50 AI systems, multiple teams are working in parallel, regulatory reporting should be automated, or external audits happen regularly. Before that, a dedicated tool is often over-engineering.
Integration With Existing Frameworks
Don't build a parallel world. AI governance isn't an isolated project - it's an extension of existing management systems.
If you've implemented ISO 27001 (Information Security), use the existing risk methodology, audit structures, and documentation processes. Extend them with AI-specific controls.
If you want to align with ISO 42001 (AI Management System): good. The standard offers a solid framework. But it doesn't replace the requirements of the EU AI Act - it helps you implement them. Map the overlaps and close the gaps.
Your existing internal control system (ICS) already has checkpoints for processes, access rights, and reporting. Add AI-specific controls: "Is the risk classification current?", "Was human oversight reviewed last quarter?", "Are all training records documented?"
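Those control questions can be managed like any other recurring ICS check. A sketch, assuming a quarterly cycle; the intervals and owners are illustrative:

```python
AI_CONTROLS = [
    {"check": "Risk classification current for all registered systems?",
     "owner": "AI Governance Lead", "interval": "quarterly"},
    {"check": "Human oversight reviewed and role holders still trained?",
     "owner": "Risk owner per system", "interval": "quarterly"},
    {"check": "All Art. 4 training records documented and complete?",
     "owner": "AI Governance Lead", "interval": "semi-annually"},
]

for control in AI_CONTROLS:
    print(f"[{control['interval']}] {control['check']} -> {control['owner']}")
```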
Integration doesn't just save effort - it increases acceptance. New parallel processes get ignored. Extensions of existing processes get lived.
Start Small, Scale Fast - A Pragmatic Rollout
Phase 1 (Months 1-2): Create the inventory. Identify and roughly classify all AI systems in the company. Appoint an AI Governance Lead. Analyze existing governance structures.
Phase 2 (Months 3-4): Conduct risk assessments for the identified high-risk systems. Appoint risk owners. Launch the training program for Art. 4 (if not already done - the obligation has been in force since February 2, 2025).
Phase 3 (Months 5-6): Document processes - change management, incident reporting, vendor management. Operationalize human oversight (Art. 14). Conduct first internal audits.
Phase 4 (Ongoing): Continuous improvement. Operationalize monitoring. Extend the framework to new AI systems. Regular reviews and adjustments.
This timeline is realistic for a mid-sized company. If you start today, you'll be operational before August 2, 2026. If you start in January 2026, it'll be tight.
Common Mistakes
Over-engineering. The framework must fit the company size. A mid-sized company with three AI systems doesn't need a 200-page governance manual and an AI governance platform for €100,000. It needs clear responsibilities, documented processes, and a maintained register.
Treating AI governance as purely an IT topic. AI governance is a topic for executive management, for compliance, for business units. IT supports, but doesn't lead. An AI system in HR has different risks than one in quality control - and the business unit understands these risks, not the IT manager.
Ignoring deployer obligations. Many companies think: "We don't develop AI, so the regulation doesn't affect us." Wrong. Whoever deploys high-risk AI systems is a deployer and has their own obligations - from human oversight to incident reporting. The regulation makes no distinction between in-house development and procurement.
Setting it up once and forgetting it. AI governance isn't a project with an end date. It's an ongoing process. AI systems change, regulatory requirements get more concrete, new systems are added. A framework that isn't regularly reviewed and updated is outdated within six months.
Treating training as a checkbox exercise. "Everyone completed the e-learning" isn't enough. Art. 4 requires sufficient AI literacy - not sufficient click rates. Training must be role-specific, and effectiveness must be verified. A risk owner who can't explain how risk classification works after training wasn't sufficiently trained.
Conclusion
Building an AI governance framework isn't rocket science. It's solid craftsmanship: clarify responsibilities, define processes, document, implement, review. The EU AI Act provides the framework - the implementation must fit the company.
Start with the people, not the technology. Use existing structures instead of building parallel worlds. Begin pragmatically with Excel and scale when necessary. And treat the topic for what it is: a leadership task, not an IT project.
The deadline is set. August 2, 2026. The clock is ticking.