EU AI Act 2026 for creators: 2 August deadline + 5-step compliance checklist

Direct answer: On 2 August 2026 the EU AI Act's Article 50 transparency rules become fully applicable. Three exceptions exempt most creators from labeling (artistic context, editorial control, technical assistance) — but deepfakes and AI text on public-interest matters always fall under disclosure. Below you'll find a 5-step checklist that makes you compliant in 30 minutes.
⚡ Compliant in 30 minutes. Jump to the 5-step checklist below, or try LinkDash for free — the AI Page Builder is §4-compliant by default.
Definitions: what does the EU AI Act actually say?
Short answer: Regulation (EU) 2024/1689 introduces four core terms. Each definition in one sentence, with source noted below.
- Provider. In one sentence: the company that builds an AI system and places it on the EU market — think OpenAI, Midjourney, ElevenLabs. Source: Article 3 + Article 50.2 (machine-readable watermarks in output).
- Deployer. In one sentence: the party using an AI system to publish content — that's you as a creator. Source: Article 3 + Article 50.4 (disclosure for deepfakes and public-interest text).
- Deepfake. In one sentence: realistic AI content that mimics a person and could be taken for real. Verbatim (Recital 134): artificially generated image, audio or video content "that would falsely appear to a person to be authentic or truthful."
- Editorial control. In one sentence: a real human reads the AI output, edits it, and publishes under their own name. Source: Recital 134; counts as a ground for exemption from §4 text disclosure.
- Public interest. In one sentence: text that informs society on matters that matter — journalism, politics, health, safety. Source: Article 50.4 + Recital 134 (no exhaustive definition).
What changes on 2 August 2026?
Short answer: Article 50 (transparency obligations for providers and deployers) becomes fully applicable. Concretely: you must disclose when content was artificially generated, in four specific scenarios.
The EU AI Act (Regulation (EU) 2024/1689) entered into force on 1 August 2024, with phased applicability of individual provisions:
- 2 February 2025: Ban on unacceptable-risk systems (in force)
- 2 August 2025: Rules for general-purpose AI models (in force)
- 2 August 2026: Article 50 transparency obligations — your deadline as a creator
- 2 August 2027: High-risk AI systems embedded in products
This matters for creators. Article 50 specifies when you must disclose that content was AI-generated. The European Commission published the second draft of its Code of Practice on marking AI content in March 2026 — a voluntary compliance tool with concrete examples. Final guidelines are expected in Q2 2026, just months before the deadline.
Which creator content falls under Article 50?
Short answer: Four scenarios — chatbots (§1), AI provider output (§2), emotion and biometric recognition (§3), and deepfakes + public-interest text (§4). For creators, §1, §2 and §4 are the relevant ones.
§1 — Chatbots and AI assistants on your page
Have an AI chatbot on your link-in-bio page (e.g. an "Ask me anything" widget powered by GPT)? Visitors must be informed they are interacting with an AI system — unless that's obvious from context. In practice a "Made by AI" or "Powered by AI assistant" label near the widget suffices.
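One way to make the §1 disclosure hard to miss is to bake it into the widget configuration itself, so the label is rendered before the visitor's first message. A minimal sketch — the config shape and wording here are invented for illustration, not any real widget API:

```typescript
// Hypothetical chat-widget config that carries its own AI disclosure.
// Field names and label text are placeholders; adapt to your widget.
interface ChatWidgetConfig {
  greeting: string;
  disclosure: string; // shown before the first interaction, per Article 50.1
}

function withAiDisclosure(greeting: string): ChatWidgetConfig {
  return {
    greeting,
    disclosure: "Powered by AI assistant — you are chatting with an AI, not a human.",
  };
}
```

The point of the pattern: the disclosure travels with the widget, so no page can mount the chatbot without it.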
§2 — AI providers, not you but the tool behind you
This paragraph imposes obligations on AI tool builders (OpenAI, Midjourney, ElevenLabs) to mark outputs in machine-readable formats as AI-generated. As a deployer you have no direct obligation here, but it influences which tools are "compliant". Tools without technical watermarking implementations may disappear from the EU market from August 2026.
§4 — Deepfakes and AI text on public-interest matters
This is where it gets concrete for you:
- Deepfakes — image, audio or video that's artificially generated and looks realistic — must always be labeled, with one exception: if the content is "evidently artistic, creative, satirical, fictional or analogous", a mention of the existence of the manipulation suffices, in a manner that doesn't "hamper the display or enjoyment of the work".
- AI text on public-interest matters — if you publish AI-generated text "with the purpose of informing the public on matters of public interest", you must disclose. However: this obligation falls away if the text "has undergone a process of human review or editorial control" and a natural or legal person holds editorial responsibility.
Verbatim from Article 50 §4 (official text): "Deployers of an AI system that generates or manipulates text which is published with the purpose of informing the public on matters of public interest shall disclose that the text has been artificially generated or manipulated. This obligation shall not apply (...) where the AI-generated content has undergone a process of human review or editorial control."
Which 3 exceptions exempt most creators?
Short answer: For most creators the editorial-review exception (§4), the artistic exception (§4) and the technical-assistance exception (§2) provide relief. By structuring your workflow so one of these three applies, you avoid almost all disclosure obligations.
- Editorial review exception (§4). If you review, edit and publish AI-generated text under your own editorial responsibility, you're exempt from the disclosure obligation. In practice: as long as you "read through, adjust and publish" — not copy-paste — you comply. Keep an audit trail: raw AI output, your edit, published version. Recital 134 confirms substantive human review is essential here.
- Artistic / satirical exception (§4). Making deepfakes within an evidently artistic or satirical frame (parody, art project, satirical clip)? A light mention of the existence of the manipulation suffices. A poster with "AI collage by [name]" in a corner is enough.
- Technical-assistance exception (§2). AI functions performing only "standard editing" or that don't "substantially alter the input data" are exempt. Examples: spell-check, color correction, automatic captions based on existing audio, noise reduction. These are assistive functions, not content generation.
When must you label AI content and when not?
Short answer: The comparison table below maps each creator-content type to Article 50 disclosure requirements. Remember: deepfakes with recognizable people require disclosure, technical assistance never does, and public-interest text only without editorial review.
| Content type | Disclosure required? | Exception applicable? |
|---|---|---|
| AI chatbot on your page | Yes (§1) | Only if context is obvious (rarely) |
| AI-generated avatar or profile photo of yourself | Yes, deepfake (§4) | No — recognizable depiction of a person |
| AI-generated stock image for a blog post | No, not a deepfake (no person mimicked) | n/a |
| Fully AI-written product description in your shop | Possibly (§4 public interest — usually not) | Yes, if you review and edit |
| AI-edited podcast captions | No (§2 technical assistance) | n/a |
| Satirical deepfake of a politician | Mention of existence of manipulation (§4 artistic exception) | Full label not required |
| Spell-check + color correction | No (§2) | n/a |
| AI-generated news article without review | Yes (§4 public interest) | None — review exception lapses |
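The table above can be sketched as a small decision helper. This is an illustrative simplification of Article 50, not legal logic — the category names, flags, and rules are my own shorthand for the scenarios in this guide:

```typescript
// Hypothetical sketch mapping the comparison table to code.
// All names are illustrative, not an official taxonomy.
type Content = {
  kind: "chatbot" | "image" | "audio" | "video" | "text" | "edit";
  depictsRealPerson?: boolean;  // recognizable existing person?
  looksRealistic?: boolean;     // could be mistaken for authentic?
  publicInterest?: boolean;     // journalism, politics, health, safety
  humanReviewed?: boolean;      // substantive editorial review applied
  evidentlyArtistic?: boolean;  // parody, art project, satire
};

type Verdict = "full disclosure" | "mention manipulation" | "no label";

function article50Verdict(c: Content): Verdict {
  // §1: chatbot interactions must be disclosed (unless obvious from context)
  if (c.kind === "chatbot") return "full disclosure";

  // §2: standard editing / technical assistance is exempt
  if (c.kind === "edit") return "no label";

  // §4 deepfakes: realistic media mimicking a real person
  if (["image", "audio", "video"].includes(c.kind)
      && c.depictsRealPerson && c.looksRealistic) {
    return c.evidentlyArtistic ? "mention manipulation" : "full disclosure";
  }

  // §4 text: public-interest text without editorial review
  if (c.kind === "text" && c.publicInterest && !c.humanReviewed) {
    return "full disclosure";
  }

  return "no label";
}
```

For example, the "satirical deepfake of a politician" row comes out as `"mention manipulation"`, while the "AI-generated news article without review" row comes out as `"full disclosure"` — matching the table.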
Edge cases: when are you in a grey area?
Short answer: Five scenarios sit between mandatory disclosure and exemption. Below is how to interpret each. Rule of thumb when genuinely in doubt: apply disclosure — it's always permitted and costs nothing.
1. AI-edited self-portrait with cosmetic tweaks
Photo of yourself with AI editing (skin corrections, background removed, color grading). Is this a deepfake? Usually not — if it's your own face and editing is cosmetic (not structurally changing), this falls under §2 "standard editing." But if AI structurally changes your face (different hair color, age, expression) you're in §4 deepfake territory. Conservative approach: add "AI-edited" in caption.
2. AI-generated stock photo with fictional people
A generated photo of a fictional person (no existing person mimicked) does not fall under Recital 134's deepfake definition — no one is misled. But if the photo looks realistic and visitors might think it's a real person, voluntary disclosure is wise. Many authors choose an unobtrusive "AI" text label in a corner.
3. AI translation of your own text into NL/DE/FR
You write in EN and have AI translate. Strictly the translation is AI-generated, but: if you review each translation and publish under your own name → §4 editorial-review exception applies, no labeling. If you let AI publish unseen → strictly speaking labeling required for public-interest content. Practical: always do a brief review pass so the exception holds.
4. AI-edited voice clone that doesn't sound fully realistic
Voice conversion where the result sounds clearly synthetic (auto-tune-level, robotic) sits outside the core of the deepfake definition, which hinges on whether the audio "could be falsely seen as real." Unmistakably synthetic voices fall outside §4; realistic voice clones of existing people fall under it. Rule of thumb: if listeners in a blind test would believe it's the real person, §4 applies.
5. Newsletter with AI-written summary of current news
Newsletter where the editorial summary is AI-generated. Does public interest apply? If you're "informatively" passing on current news, yes. Does the obligation lapse? Only if you genuinely do editorial review (not just typo fixes). Safest solution for newsletter creators: an "Includes AI assist" byline in every edition, regardless of topic.
Disclaimer: these interpretations are based on the current legal text (13 June 2024 version) and the second draft Code of Practice (5 March 2026). Final European Commission guidelines arrive in Q2 2026 and may refine some boundaries.
What does your creator type need to set up before 2 August?
Short answer: Different accents per creator archetype. Below five scenarios with concrete actions you can take today.
For the vlogger or YouTube Shorts creator
You probably use AI for automatic captions, noise reduction or thumbnail generation. Captions based on existing audio = §2 exception. But AI-generated thumbnails with fake footage of yourself or others = §4 deepfake = labeling required. Practical solution: add an "AI thumbnail" text label in a corner, or mention it in your video description.
For the musician or producer
AI mastering and autotune fall under §2 (technical assistance). But if you use Suno, Udio or similar tools to generate a vocal track that sounds like yourself or someone else, that's an audio deepfake and falls under §4. Disclosure: a mention in track title or description suffices, e.g. "[track name] (AI vocal)".
For the coach, consultant or B2B creator
You likely publish regular articles about your field. The editorial review exception is your best friend here: as long as you read, adjust and publish each AI draft under your own name, you meet the exception. Don't engineer a workflow where AI publishes unseen — that falls outside the exception.
For the e-commerce or shop creator
AI product descriptions don't fall under §4 if they don't carry a "public-interest" character — commercial copy usually doesn't. Voluntary disclosure is still smart. Google has indicated in its own guidance that transparency about AI use carries no ranking penalty and increases reader trust.
For the visual artist or designer
The artistic exception is central for you: if you use AI as part of an artistic work, a mention of the existence of AI manipulation is enough — nothing more. "Created with AI assistance" in the portfolio description suffices, as long as the work style is evidently artistic.
5-step checklist: become AI Act-compliant in 30 minutes
Short answer: Walk through these 5 steps and you'll be Article 50-compliant by 2 August 2026 without extra effort. Estimated time: 30 minutes for an average creator profile.
- Inventory which AI tools you use. ChatGPT, Suno, Midjourney, ElevenLabs, Adobe Firefly? Make a list. Per tool: check whether there's an EU compliance roadmap (OpenAI, Anthropic, Google and Microsoft each have one as of May 2026).
- Split your content into 3 buckets: (a) deepfakes (mandatory disclosure), (b) published text (review exception possible), (c) technical assistance (§2 exemption).
- Document your workflow. How do you review AI output? Save the raw + edited version as audit trail. A simple Notion or Markdown doc is enough.
- Add disclosure labels where needed. Text label "AI-generated" near avatars/heroes, a brief byline on articles, a mention in track titles. Placement: clearly visible at the latest at the time of first interaction or exposure (Article 50.5).
- Test your flow on one post. Open it on mobile and check: are the labels clearly visible on first view, without scrolling or extra clicks? Done — you're compliant.
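Step 3's audit trail can be as simple as one record per publication. A minimal sketch — the field names are my own, not prescribed by the Act; the one firm idea it encodes is that a byte-identical copy-paste cannot count as substantive review:

```typescript
// Illustrative audit-trail entry for AI-assisted text.
// Field names are placeholders, not a prescribed format.
interface AuditEntry {
  tool: string;          // e.g. "ChatGPT"
  createdAt: string;     // ISO timestamp
  rawOutput: string;     // AI draft, verbatim
  publishedText: string; // what actually went live
  reviewed: boolean;     // did the text change under human review?
  reviewer: string;      // who carries editorial responsibility
}

function makeAuditEntry(tool: string, rawOutput: string,
                        publishedText: string, reviewer: string): AuditEntry {
  return {
    tool,
    createdAt: new Date().toISOString(),
    rawOutput,
    publishedText,
    // An unchanged copy-paste would not support the §4 review exception,
    // so flag only entries where the published text actually differs.
    reviewed: rawOutput.trim() !== publishedText.trim(),
    reviewer,
  };
}
```

Dump these entries into the Notion or Markdown doc from step 3, one per post; that is your "raw + edited version" trail.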
⚡ Check in 30 seconds whether your content needs labeling.
The LinkDash AI Page Builder has human-in-the-loop built in — every AI suggestion goes through your editor review before publication. That places you automatically under the §4 editorial-review exception. No workflow rewiring, no extra labels, just work as you're used to.
Try LinkDash free → no credit card · 5-minute setup
What does AI-compliant disclosure look like in practice?
Short answer: Three patterns work: an inline text label, an icon-with-tooltip, or machine-readable metadata (only required for providers). Disclosure must be clearly visible from the moment of first interaction.
Article 50 §5 states that information must be "clear and distinguishable", "at the latest at the time of first interaction or exposure". The Code of Practice (March 2026 second draft) names three acceptable patterns:
- Inline text label — e.g. "AI-generated" or "Made with AI" visible alongside the content. For blog posts: in a short byline. For images: an unobtrusive corner label.
- Icon label with tooltip — a small AI icon that reveals a description on hover. Compromise between visibility and visual calm.
- Machine-readable metadata — required for providers (§2), not required for you as deployer. C2PA metadata is the emerging standard implemented by OpenAI, Adobe and Microsoft.
What doesn't suffice: disclaimers hidden in a footer, disclosure only after a second click, or vague terms like "partially handmade".
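The first two patterns are a few lines of markup. A sketch of what they could look like — class names, wording, and the helper functions are placeholders to adapt to your own pages, not a mandated format:

```typescript
// Hypothetical helpers that emit the two deployer-facing label patterns.
// Wording and class names are illustrative placeholders.

// Pattern 1: inline text label next to the content
function inlineLabel(text: string = "AI-generated"): string {
  return `<span class="ai-disclosure" role="note">${text}</span>`;
}

// Pattern 2: small icon whose description appears on hover;
// the title attribute gives a tooltip without any JavaScript
function iconLabelWithTooltip(
  description: string = "This content was generated with AI.",
): string {
  return `<span class="ai-disclosure-icon" title="${description}" aria-label="${description}">AI</span>`;
}
```

Whichever pattern you pick, keep it in the initial viewport of the content it describes — that is what "at the time of first interaction or exposure" rules out footers and second clicks.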
How much fine can you get for an AI Act violation?
Short answer: For Article 50 violations fines are lower than for prohibited practices — up to €15 million or 3% of global annual turnover for providers. For individual creators this is theoretically high; in practice enforcement focuses on large parties, not micro-influencers.
Penalties under the AI Act are tiered by violation type. Article 50 falls in the middle category (see Article 99). Prohibited practices (Article 5) carry fines up to 7% of global turnover; Article 50 substantially less. Enforcement will focus on platforms and large providers — not individual micro-influencers.
In the Netherlands, the Dutch Data Protection Authority (AP) and the RDI are designated as national supervisors. Enforcement will likely start with complaints and sample audits. Keep a simple audit trail: which tool you used, what review you did, what was published.
How LinkDash prepares you
Our AI Page Builder is built around human-in-the-loop: every AI suggestion passes through your editor review before going live. This brings most creator usage automatically under the editorial-review exception of §4.
For disclosure cases that do require a label (deepfake avatars, fully AI-published content), we're building an opt-in disclosure marker into the page editor. Available before 2 August 2026, starting on Premium and Creator Pro plans.
Privacy aspects of AI content are a separate hurdle. Read our guide to GDPR for link-in-bio if you use AI for visitor personalization or email capture.
Disclaimer + sources
This guide is informative, not legal advice. For specific compliance questions consult a specialist lawyer. As of 5 March 2026 the European Commission published the second draft of the Code of Practice; we update this article as soon as final guidelines appear in Q2 2026. Some exceptions (especially artistic and editorial) depend on interpretation — when in doubt: apply disclosure, which is always permitted.
Primary sources used in this guide:
- European Commission — AI Act overview (last updated 27 January 2026)
- Article 50 — official text (incl. Recitals 132-135)
- Code of Practice on marking AI content
- Article 99 — fines and sanctions
Frequently asked questions
What is a deepfake according to the EU AI Act?
Recital 134 of the Regulation defines a deepfake as artificially generated or manipulated image, audio or video content that "would falsely appear to a person to be authentic or truthful." Realistic AI portraits and generated voices of existing people always fall under this. Cartoons and clearly stylized portraits don't.
What's the difference between provider and deployer under Article 50?
A provider builds and distributes the AI system (OpenAI, Midjourney). A deployer uses the AI system to produce or publish output — that's you as a creator. §2 affects providers, §4 affects deployers. Both roles can overlap, but for creator content §4 mainly applies.
Am I allowed to use AI for my bio page?
Yes — Article 50 doesn't ban AI use, it requires transparency where disclosure is relevant. For most bio content (links, short bio, bio text) you fall under §2 (technical assistance) or §4 (review exception). Avatars that realistically depict you do require a label.
Do I need to label my newsletter as AI-generated?
Only if the text "informs the public on matters of public interest" and you haven't applied editorial review. Personal newsletter content (lifestyle, personal updates, product updates) typically doesn't fall under public interest. Journalistic newsletters publishing AI content without review do.
Does Article 50 only apply to EU companies, or also to creators outside the EU?
Article 50 applies to providers and deployers whose output is available in the EU — regardless of where the company is established. A US creator serving NL/DE/FR/BE followers and publishing AI-generated content falls under the rules.
What if I just use ChatGPT to write a blog post and review it myself?
Then you fall under the editorial-review exception of §4. Reading through, adjusting and publishing under your own name = compliant without labeling. Important: your review must be substantive (content edits, not just typo fixes), and you carry editorial responsibility.
Do I need to label my AI-generated profile photo?
If the photo looks realistic and depicts you (or someone else), it falls under §4 deepfake rules — disclosure required. Cartoons, abstract avatars or clearly stylized portraits don't fall under the deepfake definition. A brief "AI portrait" mention in your bio suffices.
What is a "Code of Practice" and do I need to sign one?
The Code of Practice on marking and labelling of AI-generated content is a voluntary tool with examples of compliance patterns. For individual creators signing isn't relevant — the code targets providers and large deployers.
Which AI tools are already "AI Act compliant"?
As of May 2026, OpenAI, Anthropic, Google and Microsoft have published compliance roadmaps, with C2PA watermarking emerging as the standard. Midjourney and open-source models lag — check before using a tool whether it has an EU compliance roadmap.
Can I be penalized if the tool I use isn't compliant?
No — §2 obligations apply to providers, not to you as deployer. But if the tool doesn't generate machine-readable markers, manual disclosure can be harder. Indirectly it affects your workflow.
Does the satire exception apply to commercial content too?
Not automatically. The artistic exception requires the work to be "evidently artistic, creative, satirical, fictional". A satirical advertisement may qualify, but a commercial spot with deepfake elements normally doesn't — full disclosure obligation applies there.
How do I know if my AI text falls under "public interest"?
Article 50 §4 says: text that "informs the public on matters of public interest". That includes journalism, political analysis, health information, regulatory updates. Product descriptions, personal updates and lifestyle content typically fall outside — when in doubt, add a brief "AI assist used" disclosure.
Ready to get your content compliant before 2 August?
The easiest route is to maintain your human-in-the-loop workflow. Create a free LinkDash account and build your bio page with AI suggestions you review — and you're compliant by default. Questions about a specific content situation? Read our guide to GDPR aspects of AI content.
Andreas
Founder of LinkDash
Ready to get started?
Create your own link-in-bio page for free with iDEAL, Wero and 100+ templates.
Start free