
AI in Recruitment: The Potential Risks and Rewards for Employers

Authors: Alice Duffy | Co-authors: Shane Gallen, Niall Quinlan | Services: Employment, Pensions and Benefits, Digital Economy | Date: 21/01/2025

Throughout this AI series, we are exploring the legal issues connected with the introduction of AI in the workplace and setting out some practical tips for businesses in the wake of emerging AI regulation.

In this article, we explore considerations, risks and practical tips when using AI in the recruitment process.

How is AI used in recruitment?

AI tools are most commonly used for more repetitive administrative tasks during the recruitment process, such as drafting job descriptions, generating candidate update emails or scheduling interviews. However, AI can also play a more substantive role in candidate screening, and this is where issues may arise.

AI tools have the capability to assess and rank candidates by analysing video interviews or by screening CVs and application forms for keywords. Ultimately, this can give the AI tool decision-making power over which candidates proceed to the next stage of the recruitment process and which applications are rejected.

Recent reports have indicated that nearly a third of Irish employers are using artificial intelligence in their recruitment processes. If used correctly with proper human oversight, AI in the recruitment process can undoubtedly save recruiters and hiring managers time and allow them to fill vacancies faster and more efficiently. It can also enable employers to look at a much larger and broader talent pool, potentially resulting in a more diversified workforce. However, as this space continues to evolve and the law attempts to keep pace, it is critical that employers are aware of the risks and legal implications of using AI systems in their recruitment process.

Does AI reduce or increase the risk of bias during recruitment?

Supporters of AI in the workplace would argue that the risk of bias or discrimination arises with any human-run recruitment or HR process and that AI tools can be trained to eliminate these risks. However, at a basic level, if the data used to train the AI is biased, the AI may reinforce an existing bias and potentially amplify it. This could result in breaches of employment equality legislation in Ireland.

From a discrimination perspective, an employer must be able to show the reason it took a particular action, such as a decision not to recruit a particular applicant. The difficulty here is that the internal workings of the AI system are often invisible to the user and it can be particularly challenging to unpick why a decision was reached by AI and the importance assigned to various factors.

AI has significant potential to put groups who share a particular protected characteristic (such as age, gender, disability) at a disadvantage in comparison to others. For example, when AI systems are trained to prefer certain educational backgrounds or to rank applications based on the use of certain language or participation in a particular sport, this can have an indirect discriminatory effect on certain groups.  

If an employer cannot explain the reason for the decision (e.g. because it is unclear how the AI arrived at certain outcomes), an adverse inference may be drawn that the reason was a particular protected characteristic and the individual may succeed in their claim.

In general, direct discrimination can’t be objectively justified. However, in indirect discrimination cases, an objective justification defence may apply if it can be shown that the deployment of the AI system was a proportionate means of achieving a legitimate aim. The proportionality element of the defence can be challenging to establish in practice as it involves a careful balancing exercise to determine whether the discriminatory impact is outweighed by the needs of the user of the system.

Compensatory awards for the effects of discrimination in Ireland can be up to two years' pay or up to €13,000 for someone who is not an employee, such as a job applicant. There is also potentially significant reputational risk for businesses as a consequence of AI bias.

To reduce the risk of this arising, organisations should not overlook the importance of AI literacy and of keeping a human in the loop at all stages of the design and implementation of AI tools, including by running checks on the AI's output. The data used to train the AI tool should be checked for accuracy and potential bias. Human oversight is a key step in ensuring that AI systems are properly trained with appropriate data and that the AI operates free from bias.

Disability and Reasonable Accommodation in Recruitment

Employers should also be particularly mindful of the obligation to provide reasonable accommodation to disabled employees and applicants, and should assess whether the use of AI in a process may disadvantage candidates with certain disabilities. For example, an automated timed assessment used in a recruitment selection process to assess problem-solving skills could disadvantage a neurodivergent candidate, such as someone with ADHD or dyslexia, who may take longer to respond due to differences in information processing and therefore score lower than peers with the same skill level. Employers should consider what adjustments may be required to the recruitment tool in order to alleviate that disadvantage.

Legal considerations when using AI in recruitment

The EU AI Act ("the AI Act") designates most HR-related uses of AI as "high-risk", including their use in recruitment processes. The AI Act distinguishes between providers of AI and deployers of AI. Different obligations apply to each, and most employers will be designated as deployers, with less onerous obligations than providers. The responsibilities of deployers include:

  • Assigning competent personnel to oversee the AI system;
  • Ensuring input data quality and relevance;
  • Monitoring system performance and reporting incidents to providers;
  • Maintaining system logs;
  • Informing workers' representatives and workers that they will be subject to AI systems ahead of use; and
  • Complying with data requests for an explanation of the role of AI in a decision that has impacted individuals.

Employers should bear in mind that if an HR team significantly modifies an existing AI system or repurposes a non-high-risk system for use in a high-risk activity such as recruitment, the employer can be reclassified as a provider under the AI Act. This attracts even more onerous responsibilities than those of deployers.

Data Protection considerations

Employers should also be aware of their responsibilities under the General Data Protection Regulation and the increased likelihood of a candidate or employee submitting a Data Subject Access Request when AI is used in the recruitment process. It is imperative that decisions made by employers using AI are explainable by the employer and that the use of AI is transparent.

Data protection policies and privacy notices should be reviewed to ensure they reflect the lawful processing of employee and candidate data. Employers should also review commercial terms carefully when using AI systems supplied by a third party. Data subjects have the right not to be subject to fully automated decision-making where there is a legal or similarly significant effect on them; AI tools in recruitment therefore require an element of human oversight and monitoring. The use of data should not go beyond what is necessary and proportionate.

Best practice tips for employers

To help avoid bias and discrimination when implementing AI in the recruitment process, there are a number of practical steps employers should take:

  • Employers should ensure there is adequate human oversight of recruitment processes that use AI.
  • Employers should conduct an assessment of any potential bias risk and/or adverse impacts that the proposed usage may have on certain protected groups, any objective justifications for differential treatment and remedial steps that may be taken.
  • Staff should be trained on the acceptable use of AI in all HR processes and on how to spot bias or discrimination that AI systems may create.
  • There should be ongoing testing of AI systems with the aim of identifying and removing any bias.
  • AI systems in recruitment should be audited to ensure results produced are not discriminatory.
  • Employers should not rely solely on the assurances of AI system providers and should take their own steps to ensure compliance with the AI Act and equality legislation.
  • Employers should be able to explain to candidates how decisions have been made by AI systems and be fully transparent about the decision-making process.

Failure to ensure AI systems are fair and unbiased can lead to a breakdown in trust with employees and reputational damage. Employers will not attract the best talent if systems do not value diversity and select the best candidates using bias-free criteria.

Conclusion

Though the provisions of the AI Act governing high-risk AI systems do not come into force until 2 August 2026, it is imperative that employers prepare for their responsibilities as deployers of high-risk AI systems in advance of this date and ensure that AI use in their recruitment processes complies with the AI Act and existing equality legislation.

Matheson's Employment, Pensions and Benefits Group has previously prepared a useful guide on AI in the Workplace - What Are Employers Doing to Prepare? The team is available to guide you through the complexities of navigating the ever-increasing use of AI in the workplace and the new legislation being introduced in this area, so please do reach out to our team or your usual Matheson contact.

Our Digital Economy Group has published a comprehensive AI Guide for Businesses, which provides an overview of the AI Act and will help you to understand the scope of your new obligations. To get your copy of the Matheson AI Guide for Businesses, please email: AIGUIDE@matheson.com

This article was co-authored by Employment, Pensions and Benefits partner, Alice Duffy, senior associate, Shane Gallen, and trainee solicitor, Niall Quinlan.