
EU AI Act: first draft Code of Practice on transparency of AI‑generated content

  • ILLIA PROKOPIEV
  • 3 days ago
  • 8 min read

A first draft EU Code of Practice under AI Act Article 50 sets out proposed measures for marking and detecting AI outputs and for labelling deepfakes and certain AI‑generated text.


Key points

  1. The European Commission published a first draft Code of Practice on 17 December 2025, with a target to finalise the Code by June 2026.

  2. The draft is intended to support compliance with the AI Act’s transparency duties for (i) providers of AI systems that generate synthetic content and (ii) deployers that publish deepfakes or certain AI‑generated/manipulated text.

  3. For providers, the draft pushes a multi‑layer approach to machine‑readable marking (metadata/signature, watermarking, fingerprinting/logging) paired with accessible detection tooling (interfaces or detectors with confidence scores).

  4. For deployers, the draft centres on consistent end‑user disclosure at first exposure, supported by a shared taxonomy and icon, plus detailed practices by modality (video, image, audio, text).

  5. The transparency rules for AI‑generated content are scheduled to apply from 2 August 2026, so design, procurement and publishing decisions made in 2025–2026 will influence implementation effort.


What the first draft is (and is not)


The draft Code of Practice is an EU‑level voluntary tool being developed to make the AI Act’s transparency obligations workable in practice. It is not legislation and it is not yet final. The text is explicitly positioned as a foundation for refinement and has been prepared through a broad drafting process involving industry, academia, civil society and Member States.


Under the AI Act, the AI Office is tasked with encouraging and facilitating EU‑level codes of practice on detection and labelling of AI‑generated or manipulated content. The Commission can approve codes of practice via implementing acts. If the Commission considers a code inadequate, it can adopt an implementing act specifying common rules for implementation. Those provisions explain why this draft is likely to influence market practice even before the final text is published.


The Commission is also preparing guidelines in parallel to clarify scope, definitions and exceptions that the draft Code does not attempt to settle.


Legal baseline: Article 50 AI Act, in practical terms


Article 50 of the AI Act sets out transparency duties that apply to certain AI systems and their use. The first draft Code focuses on three operational questions.


  1. If you provide a system that generates synthetic audio, images, video or text, what technical signals should be placed into the output so that it is machine‑readably marked and detectable as AI‑generated or manipulated?

  2. If you deploy a system to publish deepfakes, what does a “clear and distinguishable” disclosure look like across different formats (real‑time video, edited video, images, audio‑only)?

  3. If you publish AI‑generated/manipulated text to inform the public on matters of public interest, when does a disclosure apply, and what workflow controls are needed if you plan to rely on human review/editorial control with editorial responsibility?


Article 50 also sets a common usability bar: the relevant information needs to reach users in a clear and distinguishable manner, at the latest at the time of first interaction or exposure, and it needs to meet accessibility requirements.


Who is most affected


The draft is immediately relevant to:

  • providers of generative AI systems and general‑purpose AI systems that produce or transform content for the EU market; and

  • organisations deploying such systems in professional contexts to publish content to the public, including media, advertising, political communications, entertainment, social platforms, marketplaces, and enterprise communications teams.


What the draft asks of providers: multi‑layer marking, provenance chains and detection


The provider section is structured around two linked goals: embed signals into outputs and make those signals usable in downstream verification.


Multi‑layer machine‑readable marking


The draft moves away from “one technique to rule them all”. Signatories are asked to implement a multi‑layer set of active marking techniques, tailored by modality and output format.


The measures include:

  • provenance information and a digital signature embedded in metadata when the format supports it (typical in image, video and document files);

  • an imperceptible watermark interwoven within the content so that it is hard to separate from the content and it survives typical processing steps; and

  • fingerprinting/logging approaches that support fast lookup of known content in a repository, where a mark is not practical or where additional verification is helpful.


The draft also tackles operational realities of distribution. Metadata is frequently stripped by platforms and intermediate processing, so the draft includes measures for content that does not retain metadata and for multimodal outputs where text and images (or audio and video) need cross‑referenced markings.
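
To make the layered approach concrete, the sketch below shows one way a provider might record, purely for illustration, which marking layers were applied to a given output; the structure and field names are assumptions made for this article, not a format defined in the draft Code.

```python
# Illustrative only: an internal record of the marking layers applied to one output.
from dataclasses import dataclass
from typing import Optional

@dataclass
class MarkingRecord:
    content_id: str                         # internal identifier of the generated output
    modality: str                           # "image", "video", "audio" or "text"
    metadata_provenance: bool = False       # provenance info and digital signature in file metadata
    signature_key_id: Optional[str] = None
    imperceptible_watermark: bool = False   # watermark interwoven with the content itself
    fingerprint_hash: Optional[str] = None  # hash stored in a lookup repository

def marking_layers(record: MarkingRecord) -> list[str]:
    """List the marking layers present, e.g. to check that outputs in formats
    which commonly lose metadata still carry at least one other layer."""
    layers = []
    if record.metadata_provenance:
        layers.append("metadata/signature")
    if record.imperceptible_watermark:
        layers.append("watermark")
    if record.fingerprint_hash:
        layers.append("fingerprint/logging")
    return layers

# Example: a generated image carrying all three layers.
img = MarkingRecord(
    content_id="img-0001",
    modality="image",
    metadata_provenance=True,
    signature_key_id="example-key",
    imperceptible_watermark=True,
    fingerprint_hash="sha256:0f3a...",
)
print(marking_layers(img))  # ['metadata/signature', 'watermark', 'fingerprint/logging']
```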


A further feature is “structural marking” for open‑weight models. The draft discusses watermarking that is embedded into the model (during training or inference) to facilitate compliance by downstream users of open‑weight systems, while also recognising inherent security limitations where watermarking keys travel with the model.


Non‑removal and provenance chain transparency


Two measures are likely to drive immediate product and contracting work.


First, the draft expects providers to preserve existing marks and intrinsic provenance signals when AI‑generated content is used as an input and then transformed into a new output. Second, it calls for terms, acceptable‑use policies or documentation that prohibit deployers or other third parties from removing or tampering with marks.


Alongside that, the draft promotes a provenance chain concept: record and embed, where feasible, the origin and sequence of AI and human operations from AI‑assisted/modified content through to fully AI‑generated content, rather than treating each output as a clean slate.
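
The provenance chain idea can be pictured as an ordered log of operations. The sketch below is a minimal illustration under that assumption; the field names and the simple classification logic are invented for this article and are not taken from the draft.

```python
# Illustrative only: a provenance chain as an ordered list of AI and human operations.
from dataclasses import dataclass

@dataclass
class ProvenanceStep:
    actor: str        # "ai-model" or "human"
    operation: str    # e.g. "generated", "inpainted", "cropped", "colour-graded"
    tool: str         # system or model used, if any
    timestamp: str    # ISO 8601

chain = [
    ProvenanceStep("ai-model", "generated", "text-to-image model", "2026-01-10T09:00:00Z"),
    ProvenanceStep("human", "cropped", "photo editor", "2026-01-10T09:30:00Z"),
    ProvenanceStep("ai-model", "inpainted", "image editing model", "2026-01-10T09:45:00Z"),
]

# A downstream system could derive a coarse classification from the chain,
# rather than treating the final output as a clean slate.
ai_only = all(step.actor == "ai-model" for step in chain)
print("fully AI-generated" if ai_only else "AI-assisted/modified")
```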


Support for deployer labelling through “perceptible” options


The draft also anticipates deployer duties. It proposes that providers offer, within the system interface, an integrated option (enabled by default) that allows deployers to include a perceptible label at the moment of generation. That is intended to make downstream compliance easier when the deployer needs an on‑screen disclosure for deepfakes or certain public‑interest text.
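
As a rough illustration of what such an integrated option could look like on the provider side, the following sketch models a default-on generation setting; the option name and placement field are hypothetical.

```python
# Illustrative only: a provider-side toggle offering deployers a perceptible label
# at the moment of generation, enabled by default. Names are invented for this sketch.
from dataclasses import dataclass

@dataclass
class GenerationOptions:
    perceptible_label_enabled: bool = True  # default-on option for deployer-facing labelling
    label_position: str = "top-left"        # hypothetical placement setting

opts = GenerationOptions()
print(opts.perceptible_label_enabled)  # True by default
```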


Detection: interfaces, confidence scores and accessible results


Marking without detection has limited value. The draft therefore links marking to a detection commitment. Providers are asked to offer, free of charge, either an interface (API or UI) or a publicly available detector that allows users and other third parties to verify whether content was generated or manipulated by the provider’s system or model, with confidence scores. Where provenance information includes signals from multiple providers, the draft anticipates that the interface will disclose the complete set of provenance information available.
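
A hedged sketch of what such a verification response could contain, with a confidence score and a plain-language explanation, is shown below; the field names and structure are assumptions made for this article, not an interface specified in the draft.

```python
# Illustrative only: the shape a detection/verification result might take.
from dataclasses import dataclass, field

@dataclass
class DetectionResult:
    generated_by_this_provider: bool  # was the content generated or manipulated by this provider's system?
    confidence: float                 # confidence score between 0.0 and 1.0
    provenance_signals: list[dict] = field(default_factory=list)  # all provenance info found, possibly from several providers
    explanation: str = ""             # human-understandable summary of the result

def explain(result: DetectionResult) -> str:
    """Produce a plain-language explanation suitable for non-expert users."""
    verdict = "was" if result.generated_by_this_provider else "was not"
    return (f"This content {verdict} identified as generated or manipulated by our system "
            f"(confidence {result.confidence:.0%}). {result.explanation}").strip()

result = DetectionResult(
    generated_by_this_provider=True,
    confidence=0.97,
    provenance_signals=[{"provider": "provider-a", "layer": "watermark"}],
    explanation="A watermark matching our models was detected.",
)
print(explain(result))
```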


The draft goes further by proposing:

  • detectors for content generated before the model is placed on the market (important for continuity when models evolve);

  • forensic detectors that do not rely on the presence of active marks, as an additional line of defence; and

  • human‑understandable explanations and accessibility features in detection results, plus supporting documentation and training materials to help users interpret provenance data.


What the draft asks of deployers: shared disclosure cues plus modality‑specific practices


For deployers, the draft is built around consistency and user comprehension: disclosures need to be recognisable, timely and workable across formats where context is often lost.


Common taxonomy and icon


A core proposal is a shared taxonomy that distinguishes, at minimum, between fully AI‑generated content and AI‑assisted content. The intent is to align disclosure language and reduce “label inflation” where everything is tagged the same way.


The draft couples the taxonomy with an icon. While an EU‑wide icon is still under development, the draft describes an interim approach based on a two‑letter acronym for artificial intelligence (with language variants), with placement requirements focused on visibility at first exposure and a location that fits the content format. The longer‑term concept is an interactive EU icon that could offer additional detail on what was generated or manipulated, informed by machine‑readable provenance information.
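
The taxonomy and interim labelling idea can be illustrated with a small sketch; the class names, the language variants of the acronym and the label format below are placeholders chosen for this article, not wording taken from the draft.

```python
# Illustrative only: a minimal taxonomy plus an interim two-letter label.
from enum import Enum

class ContentClass(Enum):
    FULLY_AI_GENERATED = "fully AI-generated"
    AI_ASSISTED = "AI-assisted"

# Hypothetical language variants of a two-letter acronym for "artificial intelligence".
INTERIM_LABEL = {"en": "AI", "de": "KI", "fr": "IA"}

def disclosure_text(content_class: ContentClass, lang: str = "en") -> str:
    """Combine the interim acronym and the taxonomy class into a short disclosure string."""
    return f"[{INTERIM_LABEL.get(lang, 'AI')}] {content_class.value}"

print(disclosure_text(ContentClass.FULLY_AI_GENERATED, "en"))  # [AI] fully AI-generated
print(disclosure_text(ContentClass.AI_ASSISTED, "de"))         # [KI] AI-assisted
```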


Compliance, training, monitoring and cooperation


The draft expects deployers to build and maintain internal documentation describing their labelling practice and to train personnel involved in the creation, modification or distribution of in‑scope content. It also proposes monitoring and cooperation measures, including a confidential and secure channel through which third parties and individuals can flag mislabelled or unlabelled content, plus cooperation with competent authorities and other relevant actors (including media regulators, platforms and fact‑checking organisations).


Deepfakes: timing and placement by medium


The draft recognises that a deepfake disclosure cannot look the same across all media. It proposes different practices for:

  • real‑time video, where an icon needs to be persistently visible and paired with a clear disclosure at the start of the content;

  • non‑real‑time video, where options include disclosure at the start, a persistent icon, and end‑credits disclosures paired with other measures;

  • images, where placement needs to remain visible in common embedding/cropping contexts; and

  • audio‑only content, where audible disclosures become critical, with different approaches for very short clips and longer programming.
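
The practices listed above can be thought of as a simple mapping from medium to disclosure measures, as in the illustrative sketch below; the keys and phrasing are paraphrases made for this article, not text from the draft.

```python
# Illustrative only: deepfake disclosure placement as a modality-to-practice mapping.
DEEPFAKE_DISCLOSURE = {
    "realtime_video":     ["persistent icon", "clear disclosure at the start"],
    "non_realtime_video": ["disclosure at the start", "persistent icon",
                           "end-credits disclosure paired with other measures"],
    "image":              ["placement that stays visible when embedded or cropped"],
    "audio_short":        ["audible disclosure suited to very short clips"],
    "audio_long":         ["audible disclosure suited to longer programming"],
}

def required_practices(modality: str) -> list[str]:
    """Look up disclosure practices for a given medium; unknown media return an empty list."""
    return DEEPFAKE_DISCLOSURE.get(modality, [])

print(required_practices("realtime_video"))
```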


The AI Act itself limits the deepfake disclosure duty for evidently artistic, creative, satirical, fictional or analogous works to a disclosure of the existence of generated or manipulated content in a manner that does not hamper display or enjoyment. The draft reflects that constraint in icon placement expectations for creative works.


AI‑generated/manipulated text on matters of public interest


For text published to inform the public on matters of public interest, the AI Act provides an exception where the text has undergone human review/editorial control and a person holds editorial responsibility for the publication. The draft is designed to translate that into workflow practice.


In broad terms, the draft pushes deployers to:

  • decide upfront whether a given publication will follow a documented review chain under editorial responsibility, or whether it will be disclosed as AI‑generated/manipulated; and

  • place a disclosure in a fixed position so that a reader sees it at first exposure.


The draft also calls for documentation elements that support an “editorial responsibility” position, including identification of the responsible person, the concrete organisational/technical measures and human resources allocated to review, the date of review/approval, and a reference to the final approved version of the content.
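
Captured as a record, those documentation elements might look like the sketch below; the field names are assumptions made for this article rather than a template prescribed by the draft.

```python
# Illustrative only: documentation supporting an "editorial responsibility" position.
from dataclasses import dataclass

@dataclass
class EditorialReviewRecord:
    publication_id: str          # internal reference to the published text
    responsible_person: str      # person holding editorial responsibility
    review_measures: list[str]   # organisational/technical measures and human resources allocated to review
    review_date: str             # date of review/approval (ISO 8601)
    approved_version_ref: str    # reference to the final approved version of the content

record = EditorialReviewRecord(
    publication_id="article-2026-001",
    responsible_person="Editor-in-chief (example)",
    review_measures=["two-person review", "fact-checking pass", "AI-use log attached"],
    review_date="2026-02-01",
    approved_version_ref="cms://article-2026-001/v4",
)
```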


Accessibility and related legal duties


Accessibility is treated as a cross‑cutting requirement: disclosures and icons need to work for users with disabilities, including via alternative text for screen readers, captioning or sign‑language solutions where relevant, and appropriate colour contrast.


The draft also notes that transparency duties sit alongside other EU and national obligations that may apply to the creation and distribution of AI‑generated content, including data protection, consumer law, the Digital Services Act, intellectual property and media rules.


Issues flagged for further development in later drafts


The first draft is deliberately high‑level in places and identifies areas where further input is sought. Topics raised include technical approaches for marking AI‑generated software code and other hard‑to‑mark content (including very short texts), applicability to agentic systems and sector‑specific contexts (gaming/VR/voice assistants), shared verifier concepts, the design of an EU‑wide icon including audio‑only use, and the design of flagging systems for mis‑labelled content.


Practical next steps for 2025–2026


Providers


  • Build a modality map: which systems generate or materially manipulate audio, images, video and text, and where outputs lose metadata during distribution.

  • Choose a marking stack per modality and output channel (metadata/signature, watermarking, fingerprinting/logging) and define rules for preserving marks when AI content is re‑used as an input.

  • Define the detection offer and operating model: public verification interface, confidence scoring, lifecycle maintenance, and documentation/training for users.


Deployers


  • Identify publication use cases that trigger Article 50(4): deepfakes and public‑interest text, including campaigns and third‑party distribution.

  • Decide how disclosures will be applied at first exposure in each medium, including audio‑only and short‑form content where visual labels are not reliable.

  • Where relying on human review/editorial responsibility for public‑interest text, document the review chain and decision rights in a way that is auditable.


Timeline


Feedback on the first draft is being collected with a view to a second draft by mid‑March 2026 and finalisation by June 2026. The transparency rules for AI‑generated content are scheduled to apply from 2 August 2026.

If you would like to discuss how the draft Code maps to your product roadmap, procurement specifications or publishing workflows, contact us.


The information provided is not legal, tax, investment, or accounting advice and should not be used as such. It is for discussion purposes only. Seek guidance from your own legal counsel and advisors on any matters. The views presented are those of the author and not any other individual or organization. Some parts of the text may be automatically generated. The author of this material makes no guarantees or warranties about the accuracy or completeness of the information. 

 
 