EU AI Act Article 50

EU AI Act (Regulation (EU) 2024/1689) Transparency and labeling

Article 50 is product work, design work, and evidence work.

Teams need to know where disclosures appear, when machine readable marking is required, and what proof to keep.

Author: Sorena AI
Published: Mar 4, 2026
Updated: Mar 4, 2026
Overview

Transparency is one of the most visible parts of the AI Act because users, deployers, buyers, and authorities can all see when it fails. The law covers several different situations, so the first implementation step is to map each trigger to the product or content surface where it actually appears.

Section 1

Interaction notices and exposed person notices

Article 50 requires providers of AI systems intended to interact directly with natural persons to inform those persons that they are interacting with an AI system, unless this is obvious from the point of view of a reasonably well-informed, observant, and circumspect person, given the circumstances and context of use. This is a user journey question, not a generic site footer question.

Deployers of emotion recognition systems and biometric categorisation systems must also inform the natural persons exposed to their operation, subject to the law enforcement exceptions in the Regulation. A minimal inventory check is sketched after the list below.

  • Map every surface where a user could reasonably mistake an AI interaction for a human one.
  • Check exposed person notices for biometric categorisation and emotion recognition deployments.
  • Review accessibility, localization, and timing of the notice.
  • Store screenshots and version history for each notice location.
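The first two checklist items become auditable when the surface inventory itself is machine checkable. Below is a minimal Python sketch of that idea; the surface names, fields, and the notion of a "notice copy version" are hypothetical illustrations, not terms from the Regulation.

```python
from dataclasses import dataclass

@dataclass
class Surface:
    """One place where a user can encounter the AI system (hypothetical model)."""
    name: str
    ai_interacts_directly: bool         # Article 50 style interaction trigger
    obvious_from_context: bool          # e.g. a pane clearly labeled as an AI assistant
    notice_copy_version: str | None = None  # None means no notice is configured

SURFACES = [
    Surface("support_chat_widget", True, False, "notice-v3"),
    Surface("voice_callback_bot", True, False, None),  # gap: no notice configured
    Surface("static_pricing_page", False, True),
]

def surfaces_missing_notice(surfaces: list[Surface]) -> list[str]:
    """Flag AI-facing surfaces that need a notice but have none configured."""
    return [
        s.name for s in surfaces
        if s.ai_interacts_directly
        and not s.obvious_from_context
        and s.notice_copy_version is None
    ]

if __name__ == "__main__":
    print(surfaces_missing_notice(SURFACES))  # ['voice_callback_bot']
```

Running a check like this in CI turns a missed notice into a build failure rather than a finding in an audit.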
Section 2

Machine readable marking of synthetic outputs

Providers of systems generating synthetic audio, image, video, or text content must ensure that outputs are marked in a machine readable format and detectable as artificially generated or manipulated, subject to the exceptions set out in the article. The article also requires the technical solution to be effective, interoperable, robust, and reliable as far as technically feasible, taking account of the content type, implementation cost, and the state of the art.

This is a technical implementation problem as much as a legal one. Teams should define where the marking is created, how it survives downstream handling, and how QA verifies it; a minimal sketch follows the checklist below.

  • Document the marking method for each modality.
  • Test detectability after normal export and platform handling.
  • Record where an exception for assistive functions or non-substantial edits is being relied on.
  • Keep release notes when marking logic changes.
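To illustrate the create-then-verify loop for marking, here is a minimal Python sketch that writes a marker into a PNG text chunk with Pillow and confirms it can still be read back. The "ai_generated" key is a hypothetical convention, not a standard, and plain text chunks are easily stripped by re-encoding; production systems will typically need something more robust, such as C2PA provenance metadata or watermarking.

```python
# Requires: pip install Pillow
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_marking(img: Image.Image, path: str) -> None:
    """Attach a machine readable synthetic-content marker as a PNG text chunk."""
    meta = PngInfo()
    meta.add_text("ai_generated", "true")           # hypothetical key, not a standard
    meta.add_text("generator", "example-model-v1")  # provenance hint for QA
    img.save(path, pnginfo=meta)

def marking_survives(path: str) -> bool:
    """QA check: re-open the file and confirm the marker is still detectable."""
    with Image.open(path) as reloaded:
        return reloaded.text.get("ai_generated") == "true"

if __name__ == "__main__":
    save_with_marking(Image.new("RGB", (64, 64), "gray"), "synthetic.png")
    print(marking_survives("synthetic.png"))  # True, until an export strips the chunk
```

The same verify step belongs in the QA pipeline after every export and platform-handling path the checklist above identifies.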
Section 3

Deepfakes and public interest text

Deployers of systems that generate or manipulate image, audio, or video content constituting a deepfake must disclose that the content has been artificially generated or manipulated. For artistic, satirical, fictional, or analogous works, the disclosure must still exist but can be adapted so it does not undermine the normal display or enjoyment of the work.

AI generated or manipulated text published to inform the public on matters of public interest also needs disclosure unless it has undergone human review or editorial control and a natural or legal person holds editorial responsibility for the publication. One way to encode this routing is sketched after the list below.

  • Identify every deepfake capable workflow across product, marketing, and content teams.
  • Define the disclosure treatment for creative or fictional works.
  • Define editorial responsibility checks for public interest text workflows.
  • Keep approval records for exceptions based on human review and editorial control.
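One way to make this routing testable is to encode it as a small decision function that content workflows call before publication. The flags below are a hypothetical encoding of the situations described above, not legal categories a tool can determine on its own; a real workflow would have counsel sign off on how each flag is set.

```python
from dataclasses import dataclass

@dataclass
class ContentItem:
    """Hypothetical flags a review workflow might attach to a piece of content."""
    is_deepfake: bool                   # AI generated or manipulated image, audio, or video
    is_artistic_or_satirical: bool
    is_public_interest_text: bool
    has_editorial_responsibility: bool  # human review plus a named editorial owner

def disclosure_treatment(item: ContentItem) -> str:
    """Route a content item to the disclosure treatment described above."""
    if item.is_deepfake:
        if item.is_artistic_or_satirical:
            return "adapted disclosure that does not undermine the work"
        return "standard deepfake disclosure"
    if item.is_public_interest_text:
        if item.has_editorial_responsibility:
            return "no text disclosure required; keep the editorial approval record"
        return "disclose that the text is AI generated or manipulated"
    return "no disclosure trigger in this section"

print(disclosure_treatment(ContentItem(False, False, True, False)))
# disclose that the text is AI generated or manipulated
```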
Section 4

Evidence, change control, and UX quality

The strongest transparency programs treat disclosures as reusable interface components with design system rules, accessibility review, and regression testing. That improves consistency and reduces the risk that a new feature quietly removes or hides a required notice.

Evidence should show where the disclosure appears, which copy version is live, what technical marking is used, and what event triggers re-review. A minimal regression check is sketched after the list below.

  • Disclosure component library with design and content ownership.
  • Release checklist requiring Article 50 review on model and feature changes.
  • QA scripts that verify notices and marking across key flows and devices.
  • Privacy minimised evidence showing notice display and control effectiveness.
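As a sketch of the QA-script idea, the pytest-style check below asserts that every key flow renders the disclosure component. The data attribute convention and the inline fixture are hypothetical; in practice the HTML would come from rendering real pages or component snapshots.

```python
# Run with: pytest test_disclosures.py
DISCLOSURE_MARKER = 'data-disclosure="ai-interaction"'  # hypothetical convention

KEY_FLOWS = {
    "support_chat": '<div data-disclosure="ai-interaction">You are chatting with an AI.</div>',
    "voice_callback": "<div>Connecting your call...</div>",  # regression: notice missing
}

def test_every_key_flow_renders_a_disclosure():
    """Fail the build if any key flow ships without the disclosure component."""
    missing = [name for name, html in KEY_FLOWS.items() if DISCLOSURE_MARKER not in html]
    assert not missing, f"flows missing a disclosure notice: {missing}"
```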
Recommended next step

Turn EU AI Act (Regulation (EU) 2024/1689) Transparency and labeling into an operational assessment

Assessment Autopilot can turn this guidance on EU AI Act (Regulation (EU) 2024/1689) transparency and labeling into a reusable operational assessment workflow inside Sorena. Teams working on the EU AI Act can keep owners, evidence, and next steps aligned without copying this guide into separate documents.


Related guides

EU AI Act Applicability and Roles | Provider, Deployer, Importer Guide
Determine whether the EU AI Act applies, when output used in the Union brings a system into scope, and how to assign the provider, deployer, and importer roles.
EU AI Act Applicability Test | Scope, Role, and Obligation Routing
Run a practical EU AI Act applicability test that checks scope, exclusions, operator role, prohibited practices, high risk status, transparency triggers.
EU AI Act Checklist | Practical Compliance Checklist by Obligation
Use a detailed EU AI Act checklist covering inventory, role mapping, Article 5 screening, high risk controls, Article 50 disclosures, GPAI evidence, logging.
EU AI Act Compliance Program | Build an Operational AI Act Program
Build an EU AI Act compliance program that covers inventory, governance, AI literacy, prohibited practice gates, high risk controls, Article 50 product work.
EU AI Act Deadlines and Compliance Calendar | Exact Dates and Workplan
Track the exact EU AI Act dates, including entry into force on 1 August 2024, early obligations from 2 February 2025, GPAI obligations from 2 August 2025.
EU AI Act FAQ | Dates, High Risk, GPAI, Transparency, and Penalties
Get grounded answers to common EU AI Act questions on application dates, high risk status, provider versus deployer roles, transparency.
EU AI Act GPAI and Foundation Model Obligations | Chapter V Guide
Understand EU AI Act obligations for general purpose AI model providers, including Article 53 documentation, copyright policy.
EU AI Act High Risk AI Use Cases by Industry | Annex III and Product Routes
See how EU AI Act high risk status appears across biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration.
EU AI Act High Risk Requirements Checklist | Articles 9 to 15 and Beyond
Use a detailed high risk AI checklist covering Article 9 risk management, Article 10 data governance, Annex IV technical documentation, logging, instructions.
EU AI Act Penalties and Fines | Article 99 and GPAI Fine Exposure
Understand EU AI Act penalty tiers, including Article 5 fines up to EUR 35,000,000 or 7 percent of worldwide annual turnover, whichever is higher.
EU AI Act Prohibited AI Practices | Article 5 Screening Guide
Screen AI systems against EU AI Act Article 5 prohibited practices, including manipulative and deceptive techniques, exploitation of vulnerabilities.
EU AI Act Requirements | Prohibited, High Risk, Transparency, and GPAI
Get a grounded overview of EU AI Act requirements across Article 5 prohibited practices, Article 6 and Annex III high risk systems.
EU AI Act Timeline and Phasing Roadmap | Practical Implementation Roadmap
Follow a practical EU AI Act roadmap that aligns workstreams to the phased application dates for prohibited practices, AI literacy, GPAI obligations.
EU AI Act vs ISO 42001 | What ISO 42001 Covers and What It Does Not
Compare the EU AI Act with ISO/IEC 42001:2023. Learn where ISO 42001 helps with AI policy, roles, risk assessment, impact assessment, documented information.
EU AI Act vs NIST AI RMF | How to Use AI RMF Without Missing AI Act Duties
Compare the EU AI Act with NIST AI RMF 1.0. Learn how the voluntary NIST AI RMF functions Govern, Map, Measure, and Manage relate to binding AI Act duties.