
EU AI Act for AI-built apps — what to ship before August 2026

The EU AI Act's second enforcement wave lands in August 2026. If your product uses a large language model, directly or via a wrapper, here is what you need to publish, document, and do before the deadline.

The EU AI Act (Regulation (EU) 2024/1689) entered into force in August 2024, with obligations phasing in through 2027. The August 2026 wave is the one most founders need to plan for: transparency obligations for AI systems serving EU users, documentation expectations for downstream providers building on general-purpose AI, and risk-management documentation for AI systems classified as high-risk.

What it is

The EU AI Act classifies AI systems into four risk tiers: unacceptable (banned), high-risk (strict requirements), limited-risk (transparency only), and minimal-risk (no obligations). Most startup AI features (chatbots, AI-generated content, recommendation engines) fall into limited-risk. High-risk includes AI used for recruiting, credit scoring, critical infrastructure, and law enforcement — if your product does any of those, the bar is much higher.
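The four tiers above can be sketched as a simple lookup. The tier names and Annex III categories come from the Act; the category labels and the mapping logic below are illustrative assumptions, not legal advice.

```python
# Illustrative sketch: map a product feature to its EU AI Act risk tier.
# Category labels are shorthand for Annex III uses; read the Annex itself
# before relying on any classification.

ANNEX_III_HIGH_RISK = {
    "biometric_identification",
    "critical_infrastructure",
    "education_admissions",
    "recruitment",
    "credit_scoring",
    "law_enforcement_profiling",
    "migration_border",
    "administration_of_justice",
}

PROHIBITED = {"social_scoring", "subliminal_manipulation"}

def risk_tier(use_case: str, interacts_with_humans: bool = True) -> str:
    """Return the Act's risk bucket for a feature."""
    if use_case in PROHIBITED:
        return "unacceptable"   # banned outright
    if use_case in ANNEX_III_HIGH_RISK:
        return "high-risk"      # conformity assessment required
    if interacts_with_humans:
        return "limited-risk"   # transparency obligations only
    return "minimal-risk"       # no obligations

print(risk_tier("recruitment"))  # high-risk
print(risk_tier("chatbot"))      # limited-risk
```

A general-purpose chatbot lands in limited-risk; the same underlying model used for recruitment screening would not.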

Vulnerable example

# Common EU AI Act mistakes

- No public model card listing which LLMs the product uses
- No disclosure that users are interacting with an AI (for chatbots)
- No human-oversight procedure for AI-generated decisions
- Using AI in a high-risk context (recruiting, credit) without conformity assessment
- No record of training-data provenance for GPAI downstream use
- Logging prompt+completion without a lawful basis or retention policy

Fixed example

# Minimum viable EU AI Act readiness

1. Publish a model card: list every AI model, its role, residency, retention
2. Disclose AI interaction in-product ("You are chatting with an AI")
3. Document human-oversight procedure (who can override, how)
4. Watermark or label AI-generated content (images, video, audio)
5. Record training-data categories if you fine-tune a GPAI model
6. Keep an incident log for AI safety events (bad outputs, abuse)
7. Build a risk-management document for any high-risk AI feature
8. Be ready to provide Article 11 technical documentation on request
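Item 1, the model card, can be as simple as a published JSON document. This is a hypothetical minimal shape; the Act requires the information, not any particular schema, so every field name (and the example model and residency values) is an assumption.

```python
import json

# Hypothetical minimal model card. Field names and values are illustrative,
# not a mandated schema.
model_card = {
    "models": [
        {
            "name": "gpt-4o",                       # example model
            "provider": "OpenAI",
            "role": "customer-support chat completions",
            "data_residency": "EU (Frankfurt)",     # assumed deployment
            "prompt_retention_days": 30,            # assumed retention
        }
    ],
    "ai_disclosure": "Users are told in-product they are chatting with an AI.",
    "human_oversight": "Support leads can override or escalate any AI response.",
    "last_reviewed": "2026-06-01",
}

print(json.dumps(model_card, indent=2))
```

Publishing this on a public URL covers the "model card" checklist item and gives auditors a stable artifact to point at.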

How Securie catches it

Securie's AI-feature specialist (on the Series-A roadmap) will cover prompt-injection detection, tool-scope abuse, and output-filtering failures. Today, the tool library includes guidance on what to publish before enforcement lands.

Checklist

  • Public model card listing every AI model used in the product
  • In-product disclosure that user is interacting with an AI system
  • Human-oversight procedure documented (who can override, timeline)
  • AI-generated output labeled (text, image, audio watermarking)
  • Risk-management document for any high-risk AI use (recruiting, credit, etc.)
  • Training-data provenance recorded if you fine-tune a GPAI model
  • Incident log for AI safety events with retention policy
  • Data Protection Impact Assessment (DPIA) if AI processes sensitive data
  • Conformity assessment for high-risk AI systems before EU deployment
  • CE marking for products classified as high-risk AI
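The incident-log item above can start as an append-only JSONL file. A minimal sketch, assuming a local file; the schema, path, and 365-day retention value are illustrative choices, not requirements from the Act.

```python
import datetime
import json
import pathlib

# Minimal append-only log for AI safety events (bad outputs, abuse).
# Path, schema, and retention value are illustrative assumptions.
LOG_PATH = pathlib.Path("ai_incidents.jsonl")
RETENTION_DAYS = 365  # set to match your own retention policy / DPIA

def log_incident(category: str, summary: str, model: str) -> dict:
    """Append one AI safety event as a JSON line and return the record."""
    record = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "category": category,   # e.g. "bad_output", "abuse", "jailbreak"
        "summary": summary,
        "model": model,
        "retention_days": RETENTION_DAYS,
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

log_incident("bad_output", "Chatbot stated an incorrect refund policy", "gpt-4o")
```

One line per event is enough to show a regulator you track safety incidents; move it to a database once volume justifies it.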

FAQ

When does the AI Act actually enforce against my indie SaaS?

The prohibitions took effect in February 2025. GPAI obligations (for model providers like OpenAI) started August 2025, as did the penalty provisions: fines reach up to €35 million or 7% of global turnover for prohibited practices, with lower caps for other breaches. High-risk system requirements and most downstream-use obligations start August 2026, and high-risk rules for AI embedded in Annex I regulated products extend to August 2027.

Is my LLM-powered chatbot a high-risk AI system?

Usually no. High-risk is reserved for specific Annex III uses: biometric identification, critical infrastructure, education admissions, recruitment, credit scoring, law-enforcement profiling, migration/border management, and administration of justice. A general-purpose chatbot is limited-risk — the main obligation is transparency (disclose that it's AI).
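That transparency obligation can be satisfied with a one-line disclosure shown when a chat session opens. A minimal sketch, assuming a session dict; the function name and disclosure wording are illustrative, not text the Act prescribes.

```python
# Hypothetical transparency wrapper: inject the AI disclosure once per
# session (Article 50 transparency obligation). Wording is illustrative.
AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."

def open_chat_session(session: dict) -> list:
    """Return messages to display; add the disclosure once per session."""
    messages = []
    if not session.get("ai_disclosed"):
        messages.append(AI_DISCLOSURE)
        session["ai_disclosed"] = True
    return messages
```

Showing the notice once, before the first exchange, is the point; repeating it on every message is not required.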

Do I need a conformity assessment?

Only for high-risk AI systems. For limited-risk (most chatbots and content-generation features), the obligation is transparency + documentation. Publish a model card, label AI-generated content, and you're mostly covered.

What is GPAI and does it apply to me?

General-Purpose AI (GPAI) means foundation models like GPT-4, Claude, Gemini. If you're a downstream user (you wrap them in your product), you have lighter obligations than if you train one. Document which GPAI you integrate and its capabilities; the upstream provider carries most of the compliance burden.

Does the UK follow the EU AI Act?

No. The UK is taking a different, lighter "pro-innovation" approach to AI regulation. But if you serve EU users from the UK, the EU AI Act still applies to that part of your product: its scope covers providers outside the EU whose systems are used in, or whose output reaches, the EU.