EU AI Act Article 5

EU AI Act (Regulation (EU) 2024/1689) Prohibited practices

Article 5 is a stop or redesign test, not a drafting exercise.

Teams should run prohibited practice screening before the product build hardens and again before material changes ship.

Author: Sorena AI
Published: Mar 4, 2026
Updated: Mar 4, 2026
Overview

Article 5 is the earliest and most severe operational gate in the AI Act. If a use case falls into a prohibited category, the answer is not better documentation. The answer is to stop, narrow, or redesign the use case. That makes early product review essential.

Section 1

What Article 5 is trying to prevent

The prohibited practices target AI uses that create unacceptable risk to health, safety, or fundamental rights. The core logic is not whether the model is advanced. It is whether the use case distorts autonomy, exploits vulnerability, creates abusive surveillance or categorisation, or applies AI in especially harmful law enforcement or social control settings.

This is why the screening must focus on actual use context, affected persons, and downstream actions rather than only on model type.

  • Manipulative or deceptive techniques causing significant harm.
  • Exploitation of vulnerabilities due to age, disability, or specific social or economic situation.
  • Social scoring that leads to detrimental or disproportionate treatment.
  • Certain biometric, emotion recognition, and criminal risk uses listed in Article 5.
Section 2

The prohibited practice categories teams should know cold

The practical screening list should include the categories most likely to surface in design or procurement: manipulative or deceptive systems, exploitation of vulnerability, social scoring, certain individual criminal offence risk assessments based only on profiling or personality traits, untargeted scraping of facial images from the internet or CCTV to create facial recognition databases, and biometric categorisation that infers sensitive attributes.

The Act also prohibits certain emotion recognition uses in workplace and education settings and tightly restricts real-time remote biometric identification in public spaces for law enforcement, subject to narrow exception pathways. The checklist below summarises the screen; a short sketch after it shows one way to encode the categories as screening questions.

  • Check for systems that steer users in ways they cannot reasonably resist or understand.
  • Check for use of sensitive biometric inference or facial scraping as a dataset strategy.
  • Check workplace and education tools for emotion recognition functions.
  • Send any public space biometric law enforcement scenario to specialist review.
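
As a minimal sketch only, the category screen can be kept as data so every category carries its own trigger question. The category keys, question wording, and the flagged_categories helper below are illustrative summaries invented for this example, not the Regulation's text or any official taxonomy.

```python
# Illustrative Article 5 category screen: keys and question wording are
# plain-language summaries for this sketch, not the Regulation's text.
ARTICLE_5_SCREEN = {
    "manipulation_or_deception":
        "Does the system steer users in ways they cannot reasonably resist or understand?",
    "exploitation_of_vulnerability":
        "Does it target people made vulnerable by age, disability, or a specific social or economic situation?",
    "social_scoring":
        "Does scoring of behaviour or traits lead to detrimental or disproportionate treatment?",
    "criminal_risk_profiling":
        "Is individual offence risk assessed only from profiling or personality traits?",
    "facial_scraping":
        "Are facial images scraped untargeted from the internet or CCTV to build a recognition database?",
    "biometric_categorisation":
        "Does biometric categorisation infer sensitive attributes?",
    "emotion_recognition":
        "Is emotion recognition used in a workplace or education setting?",
    "real_time_remote_biometric_id":
        "Is real-time remote biometric identification used in public spaces for law enforcement?",
}

def flagged_categories(answers: dict[str, bool]) -> list[str]:
    """Return the category keys a reviewer answered yes to."""
    unknown = set(answers) - set(ARTICLE_5_SCREEN)
    if unknown:
        raise ValueError(f"unknown categories: {sorted(unknown)}")
    return [category for category, hit in answers.items() if hit]
```

Any non-empty result from flagged_categories is a signal to route the use case into the formal gate described in the next section.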
Section 3

How to build a usable screening gate

The gate should run early in product design, at procurement for external systems, and again on material changes. It should ask a short set of hard questions on affected persons, vulnerable groups, biometric features, decision consequences, and deception or manipulation risk. If any answer is high concern, the feature should not progress without formal review; the sketch after the list below shows that stop rule in miniature.

This works best when product, legal, risk, and security teams review together. Article 5 issues are usually visible in the intended purpose, the data strategy, the user journey, or the target deployment environment.

  • Attach the gate to feature approval and procurement onboarding.
  • Use screenshots, prompts, and workflow maps as part of the review input.
  • Record the rationale for a not prohibited conclusion.
  • Block launch where a redesign decision is still open.
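
The stop rule itself fits in a few lines. The question names, the Concern scale, and the run_gate function below are assumptions made for illustration; the substantive rule they encode comes from the paragraph above: any single high-concern answer halts the feature pending formal review.

```python
from dataclasses import dataclass
from enum import Enum

class Concern(Enum):
    LOW = "low"
    HIGH = "high"

@dataclass
class GateAnswers:
    # One answer per hard question from the screening gate
    # (illustrative field names, not prescribed by the Act).
    affected_persons: Concern
    vulnerable_groups: Concern
    biometric_features: Concern
    decision_consequences: Concern
    manipulation_risk: Concern

def run_gate(answers: GateAnswers) -> str:
    """Apply the stop rule: any high-concern answer blocks progress."""
    if any(value is Concern.HIGH for value in vars(answers).values()):
        return "formal review required"
    return "may progress"
```

A feature with, say, manipulation_risk set to Concern.HIGH returns "formal review required" regardless of the other answers, which mirrors the single-veto design of the gate.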
Section 4

Evidence and escalation

Evidence for Article 5 should be simple and clear. Keep the screening result, the facts reviewed, the people who approved the decision, and any design changes made because of the review. If the use case is rejected, keep the rejection record too. That is proof that the control actually works. A minimal record schema is sketched after the list below.

Escalation should be mandatory wherever a use case involves biometrics, vulnerable groups, law enforcement use, education or workplace monitoring, or synthetic manipulation of public audiences.

  • Decision record with date, approvers, and supporting evidence.
  • Blocked or redesigned feature log for high concern cases.
  • Escalation roster with named legal, risk, and product owners.
  • Periodic review of near misses to improve the screening questions.
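
One way to keep this evidence consistent is a single record type per screening decision. The ScreeningRecord type and its field names below are illustrative choices matched to the list above; nothing in the Act mandates this schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ScreeningRecord:
    """Evidence for one Article 5 screening decision."""
    use_case: str
    decision: str                 # e.g. "not prohibited", "redesigned", "blocked"
    decided_on: date
    approvers: list[str]
    facts_reviewed: list[str]     # screenshots, prompts, workflow maps
    rationale: str                # why a "not prohibited" conclusion holds
    design_changes: list[str] = field(default_factory=list)
    escalated_to: list[str] = field(default_factory=list)  # named legal, risk, product owners
```

Keeping rejected and redesigned cases in the same structure as approvals makes the periodic near-miss review above a simple query rather than a document hunt.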
Recommended next step

Use EU AI Act (Regulation (EU) 2024/1689) Prohibited practices as a cited research workflow

Research Copilot can take EU AI Act (Regulation (EU) 2024/1689) prohibited practices work from clarifying scope and applicability with cited answers to a reusable workflow inside Sorena. Teams working on the EU AI Act can keep owners, evidence, and next steps aligned without copying this guide into separate documents.


Related guides

EU AI Act Applicability and Roles | Provider, Deployer, Importer Guide
Determine whether the EU AI Act applies, when output used in the Union brings a system into scope, and how to assign provider, deployer, importer.
EU AI Act Applicability Test | Scope, Role, and Obligation Routing
Run a practical EU AI Act applicability test that checks scope, exclusions, operator role, prohibited practices, high risk status, transparency triggers.
EU AI Act Checklist | Practical Compliance Checklist by Obligation
Use a detailed EU AI Act checklist covering inventory, role mapping, Article 5 screening, high risk controls, Article 50 disclosures, GPAI evidence, logging.
EU AI Act Compliance Program | Build an Operational AI Act Program
Build an EU AI Act compliance program that covers inventory, governance, AI literacy, prohibited practice gates, high risk controls, Article 50 product work.
EU AI Act Deadlines and Compliance Calendar | Exact Dates and Workplan
Track the exact EU AI Act dates, including entry into force on 1 August 2024, early obligations from 2 February 2025, GPAI obligations from 2 August 2025.
EU AI Act FAQ | Dates, High Risk, GPAI, Transparency, and Penalties
Get grounded answers to common EU AI Act questions on application dates, high risk status, provider versus deployer roles, transparency, and penalties.
EU AI Act GPAI and Foundation Model Obligations | Chapter V Guide
Understand EU AI Act obligations for general purpose AI model providers, including Article 53 documentation and copyright policy duties.
EU AI Act High Risk AI Use Cases by Industry | Annex III and Product Routes
See how EU AI Act high risk status appears across biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration.
EU AI Act High Risk Requirements Checklist | Articles 9 to 15 and Beyond
Use a detailed high risk AI checklist covering Article 9 risk management, Article 10 data governance, Annex IV technical documentation, logging, instructions.
EU AI Act Penalties and Fines | Article 99 and GPAI Fine Exposure
Understand EU AI Act penalty tiers, including Article 5 fines of up to EUR 35,000,000 or 7 percent of total worldwide annual turnover, whichever is higher.
EU AI Act Requirements | Prohibited, High Risk, Transparency, and GPAI
Get a grounded overview of EU AI Act requirements across Article 5 prohibited practices, Article 6 and Annex III high risk systems, Article 50 transparency, and GPAI duties.
EU AI Act Timeline and Phasing Roadmap | Practical Implementation Roadmap
Follow a practical EU AI Act roadmap that aligns workstreams to the phased application dates for prohibited practices, AI literacy, GPAI obligations.
EU AI Act Transparency, Labeling, and User Disclosures | Article 50 Guide
Implement EU AI Act Article 50 transparency duties for direct interaction notices, machine readable marking of synthetic outputs, deepfake disclosures.
EU AI Act vs ISO 42001 | What ISO 42001 Covers and What It Does Not
Compare the EU AI Act with ISO/IEC 42001:2023. Learn where ISO 42001 helps with AI policy, roles, risk assessment, impact assessment, documented information.
EU AI Act vs NIST AI RMF | How to Use AI RMF Without Missing AI Act Duties
Compare the EU AI Act with NIST AI RMF 1.0. Learn how the voluntary NIST AI RMF functions Govern, Map, Measure, and Manage relate to binding AI Act duties.