If your HR team uses any form of AI – whether it's an ATS (Applicant Tracking System) with smart filtering, a CV screening tool, a video interview analysis platform, or even ChatGPT to help draft job descriptions – the EU AI Act directly affects you.

HR is one of the sectors most explicitly targeted by the regulation. AI systems used in employment, worker management, and access to self-employment are listed as High Risk in Annex III of the EU AI Act – meaning they come with the most stringent compliance obligations short of a complete ban.

Why Is HR AI Classified as High Risk?

The EU AI Act classifies AI systems as high risk when their outputs significantly affect people's livelihoods, fundamental rights, or access to opportunities. Recruitment and workforce management AI clearly falls into this category because it directly determines:

  • Who gets called for an interview
  • Who gets hired or rejected
  • How employees are evaluated and promoted
  • Who is flagged for performance management or dismissal

Experience has repeatedly shown that AI systems trained on biased historical data can perpetuate or amplify discrimination – making transparency, oversight, and documentation not just a legal requirement, but an ethical imperative.

Which HR AI Tools Are Affected?

| Tool Type | Examples | AI Act Classification |
| --- | --- | --- |
| CV / Resume Screening | HireVue, Workday AI, custom ATS filters | High Risk |
| Video Interview Analysis | AI that analyses tone, expression, word choice | High Risk |
| Psychometric / Personality AI | AI-powered aptitude or personality assessments | High Risk |
| Performance Monitoring AI | Productivity tracking, performance scoring systems | High Risk |
| Workforce Planning AI | AI predicting redundancies, attrition, or promotions | High Risk |
| Job Description Writing (ChatGPT) | Drafting job ads with AI assistance | Limited Risk |
| Scheduling / Calendar AI | AI that optimises interview scheduling | Minimal Risk |
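As a starting point for your own inventory, the classification above can be captured in a simple lookup. The category names below are our own illustrative labels, not an official taxonomy from the Act:

```python
# Hypothetical lookup mapping HR AI tool categories to their likely
# EU AI Act risk class, mirroring the table above.
RISK_CLASS = {
    "cv_screening": "High Risk",
    "video_interview_analysis": "High Risk",
    "psychometric_assessment": "High Risk",
    "performance_monitoring": "High Risk",
    "workforce_planning": "High Risk",
    "job_description_drafting": "Limited Risk",
    "interview_scheduling": "Minimal Risk",
}

def classify(tool_category: str) -> str:
    """Return the risk class for a tool category. Unknown tools are
    deliberately flagged for review rather than silently assumed safe."""
    return RISK_CLASS.get(tool_category, "Unclassified - review needed")
```

Defaulting unknown tools to "review needed" rather than "Minimal Risk" keeps the inventory conservative, which is the safer posture for a deployer.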

What High-Risk Compliance Requires from HR Departments

1. Detailed Documentation

For every high-risk AI system used in HR, your organisation must maintain documentation that includes:

  • Intended purpose: What is the AI tool used for specifically? (e.g., "Initial CV screening for engineering roles")
  • Technical description: How does the system work? What data does it use?
  • Known limitations: What can the system not reliably do? Where might it make errors?
  • Performance metrics: Accuracy, bias testing results if available
  • Vendor information: Who provides the AI system and what compliance documentation have they supplied?
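One way to keep these records consistent across systems is a structured template. A minimal sketch follows – the field names are our own, not prescribed by the Act:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """Illustrative documentation template for one high-risk HR AI system.
    Field names are illustrative, not mandated wording from the AI Act."""
    name: str
    intended_purpose: str           # e.g. "Initial CV screening for engineering roles"
    technical_description: str      # how the system works, what data it uses
    known_limitations: list         # where it cannot be relied upon
    performance_metrics: dict       # accuracy, bias testing results if available
    vendor: str
    vendor_docs_received: bool = False  # has the vendor supplied compliance docs?

record = AISystemRecord(
    name="ATS CV filter",
    intended_purpose="Initial CV screening for engineering roles",
    technical_description="Keyword and profile matching against the job spec",
    known_limitations=["Non-standard CV layouts parse poorly"],
    performance_metrics={"accuracy": 0.91},
    vendor="Example Vendor Ltd",
)
```

Keeping one such record per system also gives you a ready answer when a regulator, works council, or candidate asks what a given tool does.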

2. Human Oversight – This Is Non-Negotiable

Perhaps the most important requirement: no significant HR decision may be made by AI alone. A qualified human must review and approve every decision where AI played a meaningful role. This means:

  • A recruiter must review AI-shortlisted candidates – not simply forward the AI's output
  • Managers must independently evaluate AI performance scores before acting on them
  • Employees must be able to request human review of any AI-influenced decision about them

Not Acceptable

AI screens 500 CVs → System auto-rejects 480 → Only 20 candidates advance without human review of rejections.

Compliant

AI screens 500 CVs → HR reviews AI rankings and reasoning → HR makes final shortlist decisions, overriding AI where needed.
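The compliant flow above can be sketched as a gate that never finalises a decision on the AI score alone – every candidate passes through a human reviewer who sees, and may override, the AI's suggestion. The threshold and data below are hypothetical:

```python
def shortlist(candidates, ai_scores, human_review):
    """Return the final shortlist. Each AI suggestion is passed to
    human_review, which makes the binding decision; no candidate is
    rejected purely on the AI score."""
    final = []
    for cand in candidates:
        ai_suggests_advance = ai_scores[cand] >= 0.5  # hypothetical threshold
        # The reviewer sees the AI suggestion and may override it.
        if human_review(cand, ai_suggests_advance):
            final.append(cand)
    return final

# Example: the reviewer overrides the AI's rejection of "carol".
scores = {"alice": 0.9, "bob": 0.2, "carol": 0.4}
decision = lambda cand, ai_ok: ai_ok or cand == "carol"
print(shortlist(scores.keys(), scores, decision))  # ['alice', 'carol']
```

The essential property is that `human_review` sits on the critical path for every candidate, including the ones the AI would have auto-rejected.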

3. Transparency Towards Candidates and Employees

People have the right to know when AI is used in decisions about them. Your obligations include:

  • Inform candidates in job postings or application confirmations that AI-assisted screening is used
  • Inform employees if AI monitors or evaluates their performance
  • Provide a clear contact point for people to request human review
  • Explain in plain language what the AI does and how it influences decisions

4. Bias Monitoring and Testing

High-risk AI systems must be regularly monitored for discriminatory outcomes. HR teams should:

  • Request bias audit reports from AI vendors
  • Monitor hiring and promotion outcomes by demographic group (where legally permissible)
  • Document any anomalies or bias-related findings and corrective actions taken
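One concrete metric for this monitoring is the widely used "four-fifths rule": compare selection rates across groups, and treat a ratio below 0.8 as a flag for review. A minimal sketch with purely illustrative numbers:

```python
def selection_rates(outcomes):
    """outcomes: {group: (selected, total)} -> {group: selection rate}"""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def adverse_impact_ratio(outcomes):
    """Ratio of the lowest group selection rate to the highest.
    Values below 0.8 (the 'four-fifths rule') warrant investigation."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Illustrative numbers only: 40% vs 24% selection rates.
data = {"group_a": (40, 100), "group_b": (24, 100)}
print(round(adverse_impact_ratio(data), 2))  # 0.6 -> below 0.8, flag for review
```

A low ratio is not proof of unlawful discrimination, but it is exactly the kind of anomaly the Act expects you to document and act on.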

5. Data Governance

AI systems used in HR must be trained on relevant, representative data that is as free of errors and bias as possible. As a deployer, you should:

  • Ask vendors how their AI was trained and what data was used
  • Ensure candidate and employee data fed into AI systems complies with GDPR
  • Implement data retention limits – do not store AI-processed HR data indefinitely
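Retention limits are easiest to honour when they are enforced mechanically. A sketch of a periodic purge follows; the 180-day window is an assumed internal policy, not a figure from the Act or GDPR:

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=180)  # assumed policy window, not a legal threshold

def purge_expired(records, now=None):
    """Keep only records whose 'processed_at' timestamp falls within the
    retention window; anything older is dropped (i.e. deleted)."""
    now = now or datetime.now(timezone.utc)
    return [r for r in records if now - r["processed_at"] <= RETENTION]

now = datetime(2025, 6, 1, tzinfo=timezone.utc)
records = [
    {"id": 1, "processed_at": datetime(2025, 5, 1, tzinfo=timezone.utc)},
    {"id": 2, "processed_at": datetime(2024, 1, 1, tzinfo=timezone.utc)},
]
print([r["id"] for r in purge_expired(records, now)])  # [1]
```

Whatever window you choose, document the rationale so the limit is defensible under both the AI Act and GDPR storage-limitation principles.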

Practical Checklist for HR Compliance

| Action | Status |
| --- | --- |
| List all AI tools used in HR (ATS, screening, performance monitoring) | □ To Do |
| Classify each as High Risk, Limited Risk, or Minimal Risk | □ To Do |
| Request technical and compliance documentation from vendors | □ To Do |
| Implement human review process for all AI-assisted decisions | □ To Do |
| Add AI transparency notice to job postings and employee policies | □ To Do |
| Define process for candidates/employees to request human review | □ To Do |
| Schedule regular bias monitoring reviews | □ To Do |
| Create internal AI policy document for HR | □ To Do |

What About Using ChatGPT to Write Job Descriptions?

Good news: using ChatGPT purely to help draft or improve job descriptions is classified as Limited Risk – not High Risk. The critical distinction is that a human writes the final version, reviews it, and posts it. The AI assists with language, not selection decisions.

However, if you then use a different AI tool to match candidates against that job description and score or rank them, that matching/scoring step becomes High Risk.

Vendor Responsibility vs. Your Responsibility

It's tempting to think: "If I buy an AI tool from a vendor, it's their problem to comply." This is incorrect. Under the EU AI Act:

  • Vendors (Providers) are responsible for the technical design, training, and documentation of the AI system
  • You (Deployer) are responsible for how you use it, your oversight processes, transparency to affected individuals, and ensuring it is used as intended

Both bear responsibility. Non-compliance by your vendor does not shield you from liability as a deployer.

Action: Where to Start

The first step is understanding your current situation. Many HR teams are surprised to discover how many AI systems they already use – often embedded in existing HR software they've used for years.

  1. Take our free compliance check to get your baseline risk assessment
  2. Audit every HR software tool for embedded AI features
  3. Prioritise the high-risk systems for immediate compliance action
  4. Engage your HR software vendors and request their EU AI Act compliance documentation

Is your HR department AI Act ready?

Take our free 5-minute compliance check to identify which of your HR AI tools are high-risk and what you need to do next.

Start Free Check