9 min read

The EU AI Act Hits in August: What Developers Actually Need to Do

Even calling an LLM API triggers legal obligations if you serve EU users. Fines up to €35M. The major deadline is August 2, 2026. Here's the practical checklist.

AI · Regulation · EU AI Act · Compliance

I used to think AI regulation was a big-tech problem. Then I read the EU AI Act and realized that my compliance chatbot — a TanStack Start app that calls the Anthropic API to answer questions about cannabis regulations — falls within scope. Not because I am a large company. Not because I trained a model. But because my application uses AI to generate outputs and serves users in the EU.

The major enforcement deadline is August 2, 2026 — five months from now. Fines go up to €35 million or 7% of global turnover. This is not a "prepare for next year" post. Two enforcement waves have already passed. If you ship AI features to EU users, this applies to you right now.

The Four Risk Levels

The EU AI Act classifies every AI system by risk. Your obligations depend on which level your application falls into.

| Risk Level | Examples | What You Must Do |
| --- | --- | --- |
| Unacceptable | Social scoring, mass surveillance, manipulative AI | Banned. Do not build it. |
| High | AI in hiring, credit scoring, law enforcement, medical devices | Full documentation, risk management, human oversight, EU registration |
| Limited | Chatbots, deepfake generators, AI content tools | Transparency: users must know they are interacting with AI |
| Minimal | Spam filters, game AI, recommendation engines | No specific obligations |

Most developer-built AI applications fall into the "Limited" risk category. That means your primary obligation is transparency — telling users they are interacting with an AI system, and labeling AI-generated content as such.

But here is where it gets complicated: if your AI application is used in a high-risk domain (hiring, healthcare, legal, finance, education), it may be classified as high-risk regardless of how simple the technology is. A chatbot that answers HR policy questions? That could be high-risk if it influences hiring decisions.

What Has Already Been Enforced

The EU AI Act entered into force in August 2024 and is being enforced in phases:

  • February 2, 2025: Prohibited AI practices are banned. Social scoring, manipulative AI, and real-time biometric surveillance in public spaces are illegal.
  • August 2, 2025: General-purpose AI model obligations are live. If you develop or fine-tune foundation models, transparency and documentation requirements apply.
  • August 2, 2026: High-risk AI system obligations are fully enforceable. Conformity assessments, risk management, and human oversight requirements kick in.

If your application falls under the "Limited" risk category, do not assume the deadline is someone else's problem: the Act's transparency requirements apply from the same August 2, 2026 general-application date. That wave covers limited-risk systems, not just high-risk ones.

The Developer Compliance Checklist

Based on the developer-focused guides I have read, here is the practical checklist:

For All AI Applications (Including "Minimal" Risk)

  1. Create an AI inventory. Document every AI system in your application — which models you use, what they do, and where the data flows. Over half of organizations still do not have this.

  2. Classify your risk level. Determine whether your AI systems are minimal, limited, high, or unacceptable risk. When in doubt, consult the Annex III list of high-risk use cases.

  3. Document your AI usage. Maintain records of which models you use, their versions, and what data they process. This is the foundation for everything else.
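The inventory from steps 1–3 can live as a simple typed record alongside the codebase. Here is a minimal sketch; the interface and field names are my own illustration, not a format prescribed by the Act:

```typescript
// One entry per AI system in the application. Keeping this as typed data
// means the inventory can be linted, diffed, and rendered into documentation.
interface AIInventoryEntry {
  name: string;                               // internal identifier for the feature
  model: string;                              // exact model and pinned version
  provider: string;                           // API vendor, or "self-hosted"
  purpose: string;                            // what the system does for users
  riskLevel: "minimal" | "limited" | "high";  // classification per the Act
  dataProcessed: string[];                    // categories of data sent to the model
}

const inventory: AIInventoryEntry[] = [
  {
    name: "compliance-chat",
    model: "claude model, version pinned in config",
    provider: "Anthropic API",
    purpose: "Answers user questions about cannabis regulations",
    riskLevel: "limited",
    dataProcessed: ["user questions", "retrieved regulation text"],
  },
];
```

If you cannot fill in every field for every AI feature you ship, that gap is itself the finding.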

For "Limited" Risk (Most Chatbots and AI Content Tools)

  1. Add transparency disclosures. Users must know they are interacting with AI. This is not optional.

```tsx
// In your chat interface
function ChatDisclaimer() {
  return (
    <div className="text-sm text-gray-500 mb-4">
      This assistant is powered by artificial intelligence.
      Responses are generated by an AI model and may not always be accurate.
    </div>
  );
}
```

  2. Label AI-generated content. If your application generates text, images, or other content, it must be identifiable as AI-generated.

  3. Provide opt-out mechanisms. Users should be able to request human assistance instead of AI interaction for significant decisions.
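The labeling obligation is easiest to satisfy if provenance travels with the content itself rather than being bolted on in the UI. A minimal sketch, with illustrative field names of my own choosing:

```typescript
// Attach machine-readable AI provenance to generated output before it is
// stored or displayed. Downstream views can then render a label consistently.
interface LabeledContent {
  body: string;
  generatedByAI: true;     // literal type: this record is always AI-generated
  modelVersion: string;    // which model produced it
  generatedAt: string;     // ISO 8601 timestamp
}

function labelAIContent(body: string, modelVersion: string): LabeledContent {
  return {
    body,
    generatedByAI: true,
    modelVersion,
    generatedAt: new Date().toISOString(),
  };
}
```

Storing the flag and model version at generation time also doubles as the audit trail the documentation requirements ask for.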

For "High" Risk (Hiring, Healthcare, Finance, Legal)

  1. Implement a risk management system. Document the risks of your AI system, mitigation measures, and monitoring procedures.

  2. Ensure human oversight. A human must be able to override, interrupt, or review AI decisions. This means building review interfaces, not just logging.

  3. Maintain technical documentation. Describe your model's capabilities, limitations, and intended use. Include information about training data, evaluation metrics, and known biases.

  4. Register in the EU database. High-risk AI systems must be registered in the EU's public database before deployment.

What This Means for API Users

Here is the part that surprises most developers: even if you just call an API, you have obligations. The EU AI Act defines distinct roles along the supply chain: providers (who build or substantially modify models), deployers (who use AI systems in their applications), and importers and distributors (who bring systems to the EU market).

If you call the OpenAI or Anthropic API in your application, you are a deployer. Your obligations include:

  • Using the AI system according to the provider's instructions
  • Implementing appropriate human oversight
  • Monitoring the system for unexpected risks
  • Reporting serious incidents to authorities
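The monitoring and documentation duties above can start as a thin wrapper around whatever client you use. A sketch, where `callModel` is a placeholder for your actual API client, not a real SDK function:

```typescript
// Wrap every model call so the deployer-side audit fields are captured:
// model, prompt, response, timestamp. The actual API client is injected.
interface AuditRecord {
  model: string;
  prompt: string;
  response: string;
  timestamp: string;
}

const auditLog: AuditRecord[] = [];

async function monitoredCall(
  callModel: (prompt: string) => Promise<string>,  // placeholder for your client
  model: string,
  prompt: string,
): Promise<string> {
  const response = await callModel(prompt);
  auditLog.push({
    model,
    prompt,
    response,
    timestamp: new Date().toISOString(),
  });
  return response;
}
```

In production the log would go to durable storage rather than an in-memory array, but the principle is the same: no model call happens outside the audit path.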

You cannot pass all responsibility to your API provider. The regulation holds you accountable for how you use the AI in your specific context.

Practical Steps I Have Taken

For Complai, here is what I have done:

  1. Added an AI disclosure to every chat interaction. The disclaimer appears before the first message and in the footer of every AI-generated response.

  2. Logged all AI interactions. Every query, every response, every model version. This creates an audit trail that satisfies documentation requirements.

  3. Implemented human escalation. When the AI's confidence is low or the question touches regulatory enforcement (not just regulation text), the system suggests contacting a human compliance expert.

  4. Documented the system. I wrote a one-page description of what the AI does, what model it uses, what data it processes, and what its limitations are. This lives alongside the codebase.

  5. Added content watermarking. AI-generated compliance summaries include a small footer: "Generated by AI. Verify against original regulatory text."
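Two of the steps above (escalation and watermarking) reduce to a few lines each. In this sketch the 0.7 confidence threshold and the "enforcement" topic check are my own illustrative assumptions, not values from the Act or from Complai's actual code:

```typescript
// Watermark step: append a fixed AI-provenance footer to every summary.
const AI_FOOTER = "Generated by AI. Verify against original regulatory text.";

function withWatermark(summary: string): string {
  return `${summary}\n\n${AI_FOOTER}`;
}

// Escalation step: suggest a human expert when the model is unsure, or when
// the question is about regulatory enforcement rather than regulation text.
function shouldEscalateToHuman(confidence: number, topic: string): boolean {
  return confidence < 0.7 || topic === "enforcement";
}
```

Both are deliberately dumb: a static footer and a threshold check are enough to satisfy the transparency obligation and create the escalation path.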

The total implementation time was about two days. The documentation took longer than the code changes.

The Uncomfortable Truth

Most developers I talk to are ignoring the EU AI Act. They assume it only applies to large companies, or that enforcement will be slow, or that their application is too small to matter.

The regulation has extraterritorial reach. If your AI application serves EU users — even if you are a solo developer in the United States — you are in scope. The enforcement mechanisms include cross-border cooperation between EU member states, and the penalties are calculated as a percentage of global turnover, not just EU revenue.

The compliance requirements for limited-risk applications are not burdensome. Add a disclaimer, label AI content, document your system. A weekend of work covers the basics. The cost of non-compliance — up to 7% of global revenue — is not worth the risk of ignoring it.

Start with the AI inventory. If you cannot list every AI system in your application and its risk classification, you are not compliant. That is the most basic step, and over half of organizations have not done it.

August 2 is coming. The checklist is short. Do it now.

