Millions of employees across Europe are using ChatGPT, Copilot, Gemini, and other generative AI tools every day at work. Writing emails, summarising documents, generating code, creating marketing copy: AI has become part of the daily workflow in virtually every industry.
But with the EU AI Act taking full effect on August 2, 2026, businesses can no longer treat these tools as casual utilities. As a deployer of AI systems, your company has concrete legal obligations, even if you simply hold a ChatGPT subscription.
Are You a "Deployer" Under the EU AI Act?
Yes, almost certainly. The EU AI Act defines a deployer as any organisation that uses an AI system in the course of a professional activity. This includes:
- Using ChatGPT or Copilot for internal tasks (drafting, summarising, analysing)
- Deploying an AI-powered chatbot on your website
- Using AI tools for customer service, HR, finance, or marketing
- Employing AI-based translation, transcription, or document review tools
You do not need to develop or sell AI software to be affected. Simply using it at work is sufficient to trigger deployer obligations.
What Risk Category Does ChatGPT Fall Under?
This is where most businesses get confused: the risk level depends on how you use the tool, not the tool itself. The same ChatGPT subscription can fall into different categories depending on the use case.
Limited Risk: Most Common Use Cases
Marketing copy, email drafting, summarising reports, brainstorming, coding assistance, translations. Obligation: Transparency. Inform users/clients when AI-generated content is used.
High Risk: Specific Contexts
Using ChatGPT to screen CVs, evaluate employee performance, assist in credit decisions, or support medical triage. These contexts require full high-risk compliance, including documentation, risk assessment, and human oversight.
Your Obligations as a ChatGPT-Using Business
1. Transparency: Tell People When AI Is Involved
If your customers or employees interact with AI-generated content or an AI system, you must clearly disclose this. Examples:
- Website chatbots must identify themselves as AI
- AI-generated emails or reports sent to clients should be labelled as such (or at minimum, your policy should be documented)
- AI-generated images or videos must be marked as synthetic
2. Maintain an AI Inventory
You need a documented record of every AI system used in your business. For each tool, record:
- Name of the AI system (e.g., ChatGPT Plus, Microsoft Copilot)
- Who uses it and in which department
- What it is used for
- Risk classification
- Any controls or oversight in place
| AI Tool | Typical Use | Risk Level | Key Obligation |
|---|---|---|---|
| ChatGPT / Copilot | Writing, summarising | Limited | Transparency disclosure |
| Midjourney / DALL-E | Image generation | Limited | Label synthetic media |
| CV Screening AI | HR recruitment | High | Full documentation + human review |
| AI Chatbot (customer-facing) | Customer service | Limited | Identify as AI to users |
| DeepL / AI Translation | Document translation | Minimal | Recommended: note in documents |
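An inventory like the one above can live in a spreadsheet, but keeping it as structured data makes audits easier. Below is a minimal sketch in Python; the `AIToolRecord` class, its field names, and the sample entries are illustrative assumptions, not prescribed by the AI Act.

```python
import csv
from dataclasses import dataclass, asdict, fields

@dataclass
class AIToolRecord:
    """One row of the AI inventory; field names are illustrative."""
    name: str         # e.g. "ChatGPT Plus", "Microsoft Copilot"
    department: str   # who uses it and where
    purpose: str      # what it is used for
    risk_level: str   # "Minimal", "Limited", or "High"
    controls: str     # oversight or safeguards in place

# Hypothetical sample entries mirroring the table above.
inventory = [
    AIToolRecord("ChatGPT Plus", "Marketing", "Email drafting", "Limited",
                 "AI-content disclosure in client emails"),
    AIToolRecord("CV Screening AI", "HR", "Candidate shortlisting", "High",
                 "Human review of every rejection"),
]

# Export the inventory to CSV so it can be shared with auditors.
with open("ai_inventory.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=[fld.name for fld in fields(AIToolRecord)])
    writer.writeheader()
    writer.writerows(asdict(rec) for rec in inventory)
```

A plain CSV export like this is enough to demonstrate that a documented record exists; the point is having one authoritative, dated list rather than any particular file format.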
3. Train Your Employees
The EU AI Act requires deployers to ensure that staff who work with AI have sufficient AI literacy. This does not mean everyone must become an AI expert, but employees should understand:
- What the AI system can and cannot do
- How to critically evaluate AI outputs (do not blindly trust the results)
- When to escalate or apply human judgement
4. Apply Human Oversight for High-Risk Uses
If you use AI in a high-risk context (HR decisions, financial assessments, safety-critical tasks), you must ensure a qualified human reviews and approves every significant decision before it is acted upon. An AI recommendation alone is not sufficient.
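One way to make that rule operational is to gate every high-risk AI recommendation behind a named human approver. The sketch below is a hypothetical illustration, not a reference implementation; all class and function names are invented for this example.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AIRecommendation:
    subject: str      # e.g. a candidate or application ID
    suggestion: str   # what the AI system recommends
    rationale: str    # the model's stated reasoning, shown to the reviewer

def apply_decision(rec: AIRecommendation, reviewer: Optional[str]) -> str:
    """Refuse to act on a high-risk recommendation without a named human reviewer."""
    if reviewer is None:
        return "BLOCKED: awaiting human review"
    return f"APPROVED by {reviewer}: {rec.suggestion}"

rec = AIRecommendation("candidate-042", "reject", "missing required certification")
print(apply_decision(rec, None))       # blocked: no human has signed off
print(apply_decision(rec, "j.smith"))  # proceeds once a reviewer is recorded
```

The design choice worth copying is that the reviewer's identity is recorded alongside the decision, which doubles as the documentation trail the Act expects.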
The €10,000 Question: What Happens If You Don't Comply?
Non-compliance with the EU AI Act can result in fines of up to €15 million or 3% of global annual turnover for most violations, rising to €35 million or 7% for prohibited AI practices. Most enforcement provisions apply from August 2026, with national supervisory authorities responsible for oversight.
Beyond fines, non-compliant businesses risk:
- Reputational damage if AI misuse becomes public
- Employee and customer trust erosion
- Mandatory suspension of AI system use pending compliance review
What You Should Do Right Now
- Take the free AI Act compliance check: identify your risk level in 5 minutes
- Audit your AI tools: list every AI service used across all departments
- Classify your use cases: is each use Minimal, Limited, or High Risk?
- Add transparency notices where required (chatbots, AI-generated content)
- Document everything: your compliance effort is itself part of the requirement
The good news: for most businesses using ChatGPT for everyday tasks like writing and summarising, the compliance burden is manageable. The key is being deliberate and documented about how you use these tools.
Not sure where your business stands?
Take our free 15-question EU AI Act compliance check and get your personalised risk classification in under 5 minutes.
Start Free Check