The European Union’s AI Act will force employers to rethink how artificial intelligence is used in human resources processes. Tools that rely on AI for recruitment, candidate screening or employee evaluation have been classified as high-risk systems under the regulation. This classification introduces additional obligations not only for providers of such technologies but also for companies that deploy them—ranging from human oversight and system monitoring to preparing HR teams to use algorithms responsibly.
According to data published last year by the eRecruiter platform, the use of artificial intelligence in recruitment processes has increased fourfold since 2023. Among nearly 500 companies surveyed, 25% reported using AI-based recruitment tools. Despite this rapid growth, only 15% of these companies had internal guidelines governing the ethical use of AI in recruitment, while 62% had not yet implemented systematic policies in this area.
Research conducted by the Polish HR Forum in its “HR Tech Changer 2025” study, based on responses from more than 300 industry professionals, shows that technological tools are most commonly used for candidate screening, recruitment processes, competency assessments and the management of candidate databases.
“Artificial intelligence is used in recruitment not only to screen candidates but also at later stages of the process,” said Nadia Winiarska, an employment expert at the Polish Confederation Lewiatan, in an interview with the Newseria news agency. “Systems are increasingly automated, with certain tasks delegated to technology. This improves the efficiency of HR departments and the entire recruitment process. In sectors such as manufacturing or logistics—where staffing needs must be constantly replenished—it is difficult to imagine human resource management and recruitment without AI.”
From the perspective of the labour market, the AI Act will have significant implications because the use of artificial intelligence in HR management systems has been classified as high-risk, requiring additional safeguards and oversight.
The AI Act (EU Regulation 2024/1689) is the first comprehensive set of rules governing artificial intelligence in the European Union. The regulation is being introduced gradually. It formally entered into force on 1 August 2024, but its key provisions are being implemented in stages. Since 2 February 2025, bans on certain prohibited AI practices and general provisions—including requirements related to AI literacy—have applied. From 2 August 2025, parts of the regulation concerning general-purpose AI models and supervisory frameworks came into force.
However, most obligations, particularly those concerning high-risk systems in HR, will apply starting 2 August 2026, with the final phase of implementation scheduled for 2 August 2027.
Annex III of the AI Act specifies that high-risk AI systems include those used for recruitment and personnel selection. This covers technologies used to create job advertisements, analyse and filter applications, and evaluate candidates.
In practice, this classification means companies will have to strengthen oversight of how such systems are implemented and operated. Employers will be required to monitor the functioning of AI tools, manage risks associated with their use, and clearly inform candidates when AI is involved in recruitment decisions—especially when it supports candidate screening.
An employer using such tools in the workplace will also be obliged to notify affected individuals before the system is launched or used.
“There are also additional obligations regarding training for personnel who interact with these systems,” Winiarska emphasized.
The AI Act also places strong emphasis on data quality and the mitigation of algorithmic bias. High-risk systems must follow strict data governance practices, including ensuring that datasets are representative and that potential bias is identified and mitigated.
This issue is particularly sensitive in HR contexts, where the risk of discrimination is significant. Algorithms often learn from historical data and patterns, which may include past biases or discriminatory practices.
Experts from eRecruiter note that if a company historically hired mostly men for a given role, an AI system trained on that data might automatically favour male candidates and reject female applicants in order to replicate past hiring patterns.
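The mechanism described above can be illustrated with a deliberately naive sketch (all data and names here are hypothetical, not drawn from any real system): a scorer that simply averages past hiring outcomes by group will reproduce a historical imbalance, giving identically qualified candidates different scores purely because of who was hired before.

```python
# Illustrative sketch only: a naive scorer "trained" on skewed
# historical hiring data reproduces the skew in its predictions.
from collections import defaultdict

# Hypothetical historical records: (gender, years_experience, hired)
history = [
    ("M", 5, True), ("M", 3, True), ("M", 4, True), ("M", 2, False),
    ("F", 5, False), ("F", 4, False), ("F", 6, True), ("F", 3, False),
]

def train(records):
    """Build a per-group hire-rate 'model' from past outcomes."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hires, total]
    for gender, _, hired in records:
        counts[gender][0] += int(hired)
        counts[gender][1] += 1
    return {group: hires / total for group, (hires, total) in counts.items()}

def score(model, gender):
    """Score a candidate using only the learned historical hire rate."""
    return model.get(gender, 0.0)

model = train(history)
# Two candidates with identical qualifications receive different scores
# solely because of the historical gender imbalance in the training data.
print(score(model, "M"))  # 0.75
print(score(model, "F"))  # 0.25
```

This is exactly the failure mode the AI Act's data-governance requirements target: without checks on representativeness and bias, the model's "accuracy" consists of faithfully replaying past discrimination.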
“Artificial intelligence in recruitment should be treated as a supporting tool,” Winiarska explained. “At the same time, these new technologies must be used responsibly to ensure that AI does not lead to undesirable outcomes such as discrimination.”
Cases have already been documented where flawed algorithms in recruitment processes resulted in discriminatory practices or favoured certain candidates whose CVs shared particular characteristics.
The Polish HR Forum study shows that 29% of HR professionals consider algorithmic errors and AI bias to be the biggest challenge in the use of HR technologies, while 19% are concerned about so-called AI hallucinations, where AI systems generate inaccurate or misleading results.
As artificial intelligence becomes more deeply integrated into recruitment and workforce management, the AI Act is expected to significantly reshape how organisations design, monitor and regulate their HR technologies—placing greater emphasis on transparency, accountability and human oversight.