The EU AI Act (Regulation (EU) 2024/1689) is the world's first comprehensive legal framework for artificial intelligence. It entered into force on August 1, 2024, and most of its obligations apply from August 2, 2026. It covers every business that uses AI systems within the European Union – regardless of where the company is headquartered.

If you use ChatGPT, automated chatbots, AI-powered analytics, or any other AI tool in your business operations, this regulation applies to you. Here's everything you need to know.

Who Is Affected by the EU AI Act?

The regulation applies to three groups:

  • AI Providers: Companies that develop or place AI systems on the EU market
  • AI Deployers: Businesses that use AI systems in their operations (this is most companies)
  • Importers & Distributors: Companies that bring AI systems from outside the EU into the market

Key point: You don't need to be a tech company to be affected. A bakery using ChatGPT for marketing, a law firm using AI document review, or a retailer with a chatbot on their website – all fall under the EU AI Act as deployers.

The Four Risk Categories

The EU AI Act classifies AI systems into four risk levels based on their intended use, not the technology itself:

Minimal Risk

Spam filters, AI-powered games, basic recommendation systems. No specific obligations, but transparency is recommended.

Limited Risk

Chatbots, AI-generated content, emotion recognition (outside the prohibited workplace and education contexts). Transparency obligation: users must be told they are interacting with AI or viewing AI-generated content.

High Risk

AI in HR/recruiting, credit scoring, medical diagnosis, education assessment. Full compliance required: documentation, risk assessment, human oversight.

Prohibited

Social scoring, subliminal manipulation, real-time biometric mass surveillance. Completely banned – immediate cessation required.

Your Step-by-Step Compliance Roadmap

Step 1: Create an AI Inventory

Document every AI system used in your organisation. This includes cloud services and SaaS tools that use AI under the hood. Common examples many businesses overlook:

Department         Commonly Overlooked AI Systems
Marketing          ChatGPT for copywriting, Midjourney for images, AI-based email personalisation
Customer Service   Chatbots, automatic ticket categorisation, sentiment analysis
HR & Recruitment   CV screening tools, automated pre-selection, AI-assisted interview analysis
Finance            Fraud detection, automated credit checks, risk assessment tools
IT & Development   GitHub Copilot, AI-powered code reviews, automated testing
Administration     DeepL translations, document summarisation, transcription services
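An inventory like the one above can be kept as structured data so it is easy to query and keep current. The following is a minimal sketch in Python; the record fields and example entries are illustrative, not field names prescribed by the Act.

```python
from dataclasses import dataclass

# Hypothetical record structure; field names are illustrative only.
@dataclass
class AISystemRecord:
    name: str           # e.g. "ChatGPT"
    department: str     # e.g. "Marketing"
    purpose: str        # intended use - this drives the risk classification
    vendor: str         # provider of the system
    risk_category: str  # filled in during Step 2

inventory = [
    AISystemRecord("ChatGPT", "Marketing", "copywriting", "OpenAI", "unclassified"),
    AISystemRecord("CV screening tool", "HR", "candidate pre-selection", "ExampleVendor", "unclassified"),
]

# Quick view of what still needs a risk classification in Step 2:
unclassified = [r.name for r in inventory if r.risk_category == "unclassified"]
```

Even a flat spreadsheet works; the point is that every system, its purpose, and its classification status live in one place.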

Step 2: Classify Each System

For every AI system in your inventory, determine its risk category. Ask yourself:

  • What is the purpose of this AI system?
  • Who is affected by its outputs? (employees, customers, public)
  • Are decisions made based on AI results?
  • Is there human oversight before final decisions?

Important: The same AI service can fall into different risk categories depending on how you use it. ChatGPT for marketing copy is "limited risk", but ChatGPT for evaluating job applications would be "high risk".
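Because classification follows the intended use, a simple use-case lookup can make this repeatable across your inventory. A minimal sketch, assuming a hand-maintained mapping (the use-case keys below are examples, not an official list):

```python
# Illustrative mapping from intended use to risk tier. The four tier names
# follow the Act; the use-case keys are examples you would maintain yourself.
USE_CASE_RISK = {
    "marketing copy": "limited",
    "customer chatbot": "limited",
    "cv screening": "high",
    "credit scoring": "high",
    "social scoring": "prohibited",
    "spam filtering": "minimal",
}

def classify(use_case: str) -> str:
    """Return the risk tier for a known use case; unknown uses need human review."""
    return USE_CASE_RISK.get(use_case.lower(), "needs manual review")
```

Note how `classify("marketing copy")` and `classify("CV screening")` return different tiers even if both run on the same underlying AI service.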

Step 3: Implement Required Measures

Obligation                Minimal       Limited    High
AI Inventory              Recommended   Required   Required
Transparency Notice       –             Required   Required
Risk Assessment           –             –          Required
Technical Documentation   –             –          Required
Human Oversight           –             –          Required
Conformity Assessment     –             –          Required
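The obligations matrix can also be encoded as a lookup, so each inventory entry's risk tier yields its to-do list automatically. A minimal sketch mirroring the table above (obligation names are taken from the table; this is an aid, not legal advice):

```python
# Required measures per risk tier, mirroring the obligations matrix.
# For the minimal tier, an AI inventory is recommended but not required,
# so the required list is empty.
OBLIGATIONS = {
    "minimal": [],
    "limited": ["AI inventory", "transparency notice"],
    "high": [
        "AI inventory",
        "transparency notice",
        "risk assessment",
        "technical documentation",
        "human oversight",
        "conformity assessment",
    ],
}

def required_measures(risk_category: str) -> list[str]:
    """Return the measures required for a given risk tier."""
    return OBLIGATIONS[risk_category]
```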

Step 4: Set Up Documentation

Maintain records of your AI systems, their risk classifications, and the measures you've taken. This documentation serves as evidence of your due diligence if regulators come knocking.

Step 5: Monitor and Update

Schedule quarterly reviews: Are you using new AI tools? Has the purpose of any system changed? Are all transparency notices still current?
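One way to make the quarterly cadence concrete is to track a last-reviewed date per system and flag anything older than the review interval. A minimal sketch, with a hypothetical review log:

```python
from datetime import date, timedelta

# Hypothetical review log: system name -> date of last review.
last_reviewed = {
    "ChatGPT": date(2025, 1, 15),
    "CV screening tool": date(2024, 6, 1),
}

def overdue_for_review(log: dict[str, date], today: date, interval_days: int = 90) -> list[str]:
    """Return systems whose last review is older than the (quarterly) interval."""
    cutoff = today - timedelta(days=interval_days)
    return [name for name, reviewed in log.items() if reviewed < cutoff]
```

Run this at each review to see which entries in your AI inventory need attention first.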

Key Deadlines

Date               What Happens
February 2, 2025   Prohibited AI practices become enforceable
August 2, 2025     Rules for general-purpose AI models apply
August 2, 2026     Most remaining obligations take effect
August 2, 2027     High-risk AI systems embedded in regulated products (Annex I) must comply

Penalties for Non-Compliance

The fines under the EU AI Act are substantial:

  • Prohibited practices: Up to €35 million or 7% of global annual turnover, whichever is higher
  • High-risk violations: Up to €15 million or 3% of global annual turnover, whichever is higher
  • False information to authorities: Up to €7.5 million or 1% of global annual turnover, whichever is higher

For SMEs and startups, the lower of the two amounts applies – but the fines can still be significant enough to threaten a business.

Check Your Compliance Status Now

Not sure where your business stands? Our free compliance check analyses your specific AI usage and, in just 5 minutes, delivers:

  • Your individual risk classification
  • A list of your obligations under the EU AI Act
  • Concrete action recommendations
  • A PDF document as compliance evidence

Start Your Free Compliance Check

15 questions. 5 minutes. Instant results.

Check Now – It's Free