EU AI Act · April 3, 2026 · 7 min read

Prohibited AI Systems Under the EU AI Act: What You Need to Know Now

The Article 5 prohibitions have been in force since February 2025. Emotion recognition at work, manipulative AI, social scoring - this article explains the eight prohibited practices.

Status: April 2026 - These prohibitions have been in force for over a year.

Since February 2, 2025, the prohibitions on certain AI practices under the EU AI Act (Regulation (EU) 2024/1689) have been enforceable law. These are not "coming soon" or "under preparation" - they are binding today, with severe penalties of up to €35 million or 7% of global annual turnover.

Companies developing or deploying AI systems must verify now whether their systems fall under these prohibitions. This article explains the eight prohibited practices from Article 5, highlights practical examples and gray areas, and provides concrete action steps.

The Eight Prohibited AI Practices Under Article 5(1)

Art. 5(1)(a): Subliminal, Manipulative, or Deceptive Techniques

What is prohibited: AI systems that deploy subliminal techniques beyond a person's consciousness, or purposefully manipulative or deceptive techniques, to materially distort behavior and thereby cause or be reasonably likely to cause significant harm.

Practical examples:
  • Dark patterns in apps personalized by AI to push users toward unwanted purchases
  • Voice assistants using subtle tone manipulation to influence buying decisions
  • AI-driven advertising that exploits psychological weaknesses to create purchase compulsion

The decisive threshold: "Significant harm" is the bar. Not every personalization or recommendation is prohibited. There must be material behavior distortion with demonstrable harm (e.g., financial loss, psychological distress, health consequences).

Gray area: Where does legitimate personalization end and manipulation begin? A recommendation algorithm suggesting products based on your interests is allowed. A system deliberately triggering your loss aversion to pressure you into a purchase is not.

Art. 5(1)(b): Exploiting Vulnerabilities

What is prohibited: AI systems that exploit vulnerabilities of specific groups due to age, disability, or socioeconomic situation to materially distort behavior and cause significant harm.

Practical examples:
  • Chatbots deliberately persuading seniors with cognitive impairments toward risky financial products
  • Gaming apps with AI-driven monetization targeting children
  • Lending platforms using manipulative designs to push economically vulnerable people toward overpriced loans

Important: Again, the threshold is "significant harm." A simplified interface for older adults is not prohibited. Deliberately exploiting disorientation is.

Art. 5(1)(c): Social Scoring

What is prohibited: Evaluation or classification of natural persons or groups over a period of time based on their social behavior or known, inferred, or predicted personal characteristics, where the resulting social score leads to detrimental treatment in contexts unrelated to the data's origin, or to treatment that is unjustified or disproportionate.

Practical examples:
  • A point system ranking citizens by "good behavior" and granting access to public services accordingly
  • Algorithms rating unemployment benefit recipients by "cooperativeness"
  • Systems scoring parents by parenting behavior to inform child welfare decisions

Clarification: Unlike the Commission's original 2021 proposal, the final text is not limited to public authorities - it covers social scoring by private actors as well. The prohibition only bites, however, where the score leads to detrimental treatment in unrelated contexts or to unjustified, disproportionate treatment. Ordinary creditworthiness assessment by banks is not social scoring in this sense; it may instead fall under other provisions (e.g., as a high-risk system under Annex III).

Art. 5(1)(d): Predictive Policing Based on Profiling

What is prohibited: Assessing the risk that a natural person will commit a criminal offense solely based on profiling or personality traits.

Practical examples:
  • An algorithm calculating "crime probability" solely from residence, age, and socioeconomic status
  • Systems inferring criminal propensity from Facebook likes
  • AI flagging people as "potential offenders" without considering concrete behavior

Important - the word "solely": Systems incorporating concrete behavioral data (e.g., suspicious transactions, known locations in specific investigations) are not inherently prohibited. The ban targets pure profiling without behavioral basis.

Gray area: Fraud detection systems analyzing transaction patterns are allowed. Systems identifying "risk persons" from demographic data without concrete grounds are prohibited.

Art. 5(1)(e): Untargeted Scraping for Facial Recognition Databases

What is prohibited: Untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases.

Practical examples:
  • Scraping Instagram, Facebook, or LinkedIn for facial databases
  • Harvesting public webcams or surveillance cameras for data collection
  • Mass capture of faces from YouTube videos

What does "untargeted" mean? The prohibition targets mass, indiscriminate collection. Targeted capture within specific investigations or with consent may be permitted.

Practice note: Many existing facial recognition databases were built exactly this way. Companies using such databases should urgently verify their provenance.

Art. 5(1)(f): Emotion Recognition in Workplace and Education

What is prohibited: Emotion recognition in the workplace and educational institutions, except for medical or safety reasons.

Practical examples (prohibited):
  • HR software analyzing emotions in video interviews to assess "honesty" or "motivation"
  • Exam proctoring software evaluating students' facial expressions
  • Call center software monitoring employee mood to measure "engagement"

Permitted exceptions:
  • Fatigue detection for truck drivers (safety)
  • Stress monitoring for pilots during training (safety)
  • Emotion recognition for patients with communication disorders (medical)

Common misunderstanding: "We only use sentiment analysis, not emotion recognition." If the system infers emotional states from facial expressions, voice, or body language, it falls under the prohibition. Sentiment analysis of pure text (e.g., emails, chat) is not affected.
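
Teams operationalizing this distinction in a procurement checklist sometimes encode it as a first-pass filter. Below is a minimal sketch in Python - the field names and modality labels are hypothetical, and it illustrates only the modality heuristic described above, not a legal test:

```python
from dataclasses import dataclass

# Biometric input channels that bring emotion inference under Art. 5(1)(f).
# Labels are illustrative placeholders, not a legally defined taxonomy.
BIOMETRIC_MODALITIES = {"facial_expression", "voice", "body_language"}

@dataclass
class SystemProfile:
    infers_emotions: bool        # does the system derive emotional states?
    input_modalities: set        # e.g. {"text"} or {"voice", "facial_expression"}
    context: str                 # "workplace", "education", "consumer", ...
    medical_or_safety_purpose: bool = False  # narrow exception in Art. 5(1)(f)

def flags_art_5_1_f(p: SystemProfile) -> bool:
    """True if the system matches the prohibited pattern and needs legal review."""
    if not p.infers_emotions or p.context not in {"workplace", "education"}:
        return False
    if p.medical_or_safety_purpose:
        return False  # exception may apply; document the justification anyway
    # Text-only sentiment analysis stays out; biometric inputs bring it in.
    return bool(p.input_modalities & BIOMETRIC_MODALITIES)

# A call-center mood monitor analyzing employee voice: flagged.
print(flags_art_5_1_f(SystemProfile(True, {"voice"}, "workplace")))  # True
# Text-only sentiment analysis of support tickets: not flagged.
print(flags_art_5_1_f(SystemProfile(True, {"text"}, "workplace")))   # False
```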

Art. 5(1)(g): Biometric Categorization Based on Sensitive Attributes

What is prohibited: Biometric categorization to infer or deduce sensitive attributes (race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation).

Practical examples:
  • AI attempting to infer "political leanings" from facial images
  • Systems predicting sexual orientation from photos
  • Algorithms inferring religious affiliation from biometric features

Important: This is not about systems that use these attributes (if lawfully collected and known), but systems that attempt to derive these attributes from biometric data.

Why is this prohibited? These systems are based on pseudoscientific assumptions and lead to discrimination. There is no reliable biometric basis for such inferences.

Art. 5(1)(h): Real-Time Remote Biometric Identification in Public Spaces by Law Enforcement

What is prohibited: Use of real-time remote biometric identification systems (e.g., live facial recognition) in publicly accessible spaces by or on behalf of law enforcement, with narrow exceptions.

Practical examples (generally prohibited):
  • Live facial recognition at demonstrations
  • Permanent surveillance of train stations with automatic identification
  • "Predictive policing" through real-time person capture

Permitted exceptions under Art. 5(1)(h), subject to the strict conditions of Art. 5(2)-(7):
  • Targeted search for specific victims of abduction or trafficking, and for missing persons
  • Prevention of specific, imminent threats to life or of terrorist attacks
  • Search for suspects of serious crimes listed in Annex II (murder, terrorism, serious violent crime)

Requirements for the exceptions:
  • Prior judicial or administrative authorization (except in emergencies)
  • Proportionality assessment
  • Limitation to what is necessary
  • Ex-post review in emergency cases

Penalties: Up to €35 Million or 7% of Turnover

The prohibitions in Article 5 represent the most severe violation category in the AI Act. Violations are subject to fines of up to €35 million or 7% of global annual turnover, whichever is higher.
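
To make "whichever is higher" concrete, here is a minimal calculation (illustrative only - the actual fine is set by the supervisory authority within this cap, per Art. 99(3)):

```python
def art_5_fine_cap(global_annual_turnover_eur: float) -> float:
    """Upper bound for Article 5 fines: EUR 35 million or 7% of
    worldwide annual turnover, whichever is higher (Art. 99(3))."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# EUR 2 billion turnover: 7% (EUR 140 million) exceeds the flat cap.
print(art_5_fine_cap(2_000_000_000))  # 140000000.0
# EUR 100 million turnover: the EUR 35 million flat cap dominates.
print(art_5_fine_cap(100_000_000))    # 35000000.0
```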

The prohibitions themselves have applied since February 2, 2025; the accompanying penalty provisions (Art. 99) have applied since August 2, 2025. Supervisory authorities in EU member states are already active. Companies cannot claim ignorance of the rules.

What Companies Must Do Now

1. Inventory: What AI Systems Are You Using?

Catalog all AI systems that you:
  • Developed in-house
  • Procured from third parties (SaaS, software licenses, APIs)
  • Are testing in pilot projects

Particularly critical:
  • HR tools (recruiting, performance evaluation, employee monitoring)
  • Customer scoring systems (creditworthiness, risk assessment)
  • Biometric systems (access control, time tracking, video surveillance)
  • Marketing automation (personalized advertising, pricing)
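
As a starting point for such an inventory, a structured record beats an ad-hoc spreadsheet. A minimal sketch in Python - the fields are hypothetical and should be adapted to your own asset register:

```python
from dataclasses import dataclass
from enum import Enum

class Origin(Enum):
    IN_HOUSE = "developed in-house"
    THIRD_PARTY = "procured (SaaS, license, API)"
    PILOT = "pilot project"

@dataclass
class AISystemRecord:
    name: str
    origin: Origin
    business_area: str           # e.g. "HR", "marketing", "access control"
    vendor: str | None = None    # None for in-house systems
    uses_biometric_data: bool = False
    infers_emotions: bool = False
    review_owner: str = ""       # person accountable for the Article 5 check

inventory = [
    AISystemRecord("video-interview-scoring", Origin.THIRD_PARTY, "HR",
                   vendor="ExampleVendor GmbH",
                   uses_biometric_data=True, infers_emotions=True,
                   review_owner="head-of-hr"),
]
```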

2. Assessment: Do Systems Fall Under the Prohibitions?

For each system, ask:
  • Does it analyze emotions in the workplace or in educational institutions? (Art. 5(1)(f))
  • Does it use biometric data to infer sensitive attributes? (Art. 5(1)(g))
  • Does it scrape facial images from public sources? (Art. 5(1)(e))
  • Does it manipulate behavior subliminally? (Art. 5(1)(a))
  • Does it exploit vulnerabilities of protected groups? (Art. 5(1)(b))
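
These questions translate directly into a first-pass screening routine. The sketch below is illustrative - a "yes" flags a system for in-depth legal review; it does not by itself establish a violation:

```python
# Maps each screening question to the Article 5 provision it probes.
# Question keys are hypothetical identifiers for the questions above.
SCREENING_QUESTIONS = {
    "emotion_recognition_in_workplace_or_education": "Art. 5(1)(f)",
    "biometric_inference_of_sensitive_attributes":   "Art. 5(1)(g)",
    "untargeted_scraping_of_facial_images":          "Art. 5(1)(e)",
    "subliminal_or_manipulative_techniques":         "Art. 5(1)(a)",
    "exploits_vulnerabilities_of_protected_groups":  "Art. 5(1)(b)",
}

def screen(answers: dict[str, bool]) -> list[str]:
    """Return the Article 5 provisions flagged for one system."""
    return [provision
            for question, provision in SCREENING_QUESTIONS.items()
            if answers.get(question, False)]

# Example: an HR video-interview tool that scores candidate emotions.
print(screen({"emotion_recognition_in_workplace_or_education": True}))
# -> ['Art. 5(1)(f)']
```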

3. Action: Adapt or Shut Down Systems

If a system falls under a prohibition:
  • Shut it down immediately if no exception applies
  • There is no "transition period" or "phase-out" - the prohibitions are already in force
  • Check whether functions can be removed (e.g., remove emotion recognition from a recruiting tool)
  • Evaluate alternative systems that work without prohibited functions

4. Documentation and Training

  • Document your review and decisions
  • Train employees who procure or develop AI systems
  • Establish a process for evaluating new AI systems before deployment

Common Misconceptions

"We only use emotion recognition internally, not for decisions." → Irrelevant. The use itself is prohibited, regardless of how results are used.

"Our vendor says the system is GDPR-compliant." → GDPR compliance does not mean AI Act compliance. They are two different regulatory frameworks.

"We anonymize the data." → Anonymization does not protect against the prohibition. A system scraping anonymized facial images from the web remains prohibited.

"It's just a pilot project." → The prohibitions apply to tests and pilots as well. There is no sandbox exception for Article 5.

"Our system does sentiment analysis, not emotion recognition." → Check carefully: Does the system only analyze text content (permitted) or also voice, facial expressions, body language (prohibited in workplace/education)?

Difference from High-Risk Systems

Not every biometric or AI-based assessment system is prohibited. Many fall into the "high-risk system" category (Annex III of the Regulation) and are permitted with strict obligations:

  • Biometric identification (except real-time remote identification in public spaces by law enforcement)
  • AI in critical infrastructure
  • Assessment systems in HR (e.g., CV screening without emotion recognition)
  • Creditworthiness assessment

High-risk systems may be deployed but must:
  • Be CE-marked
  • Have a risk management system
  • Maintain technical documentation
  • Ensure human oversight

The prohibitions in Article 5 are absolute - they apply regardless of risk management or certification.

Conclusion: Act Now

The prohibitions in Article 5 of the EU AI Act have been enforceable law for over a year. Companies deploying prohibited AI systems risk substantial penalties and reputational damage.

Our recommendation:
  1. Conduct an inventory of all AI systems (or have it conducted)
  2. Prioritize review of critical systems (HR, biometrics, customer scoring)
  3. Immediately shut down or adapt prohibited systems
  4. Establish procurement processes for new AI systems that verify AI Act compliance

The EU AI Act is complex, but the prohibitions in Article 5 are deliberately clear. When in doubt: better to check once too often than to have a prohibited system in operation.

---

Lexbeam Software supports you with AI Act compliance: From inventory through legal assessment to implementation of compliance processes. Contact us for an initial consultation.


Author

Werner Plutat

Legal Engineer x AI

