
In AI We Trust: Bridging AI’s Trust Gap in the Workplace

Author: Alice Duffy | Co-authors: Jill Barrett, Cliona Coleman | Services: Employment, Pensions and Benefits; Digital Economy | Date: 01/04/2025

AI offers immense opportunities to revolutionise the world of work. However, an AI trust gap is emerging. A key hurdle for successful AI implementation in the workplace is employee trust, or rather lack of trust, in AI and the role it will play in their jobs. Despite AI being a tool that can increase productivity and innovation, it is increasingly being met with resistance rather than excitement in some workplaces.

In this article in our AI series, we explore the growing AI trust gap in the workplace, the reasons it has developed, and how organisations can address this issue.

Why is there a trust gap?

A recent global study by Workday, "Closing the AI Trust Gap", has shown that only 52% of employees welcome AI. One of the main reasons cited is a lack of trust that AI will be deployed responsibly. There is also uncertainty among employees that their organisation will implement AI in the right way. Nearly one-quarter (23%) of employees surveyed were not confident their organisation puts employee interests above its own when implementing AI.

This raises the question as to why employees may struggle with trusting AI. A common reason is the fear of job displacement and machines taking over human roles. Others may have data privacy concerns or a lack of confidence regarding the fairness and transparency of AI-driven decisions. Transparency is a critical issue when it comes to AI and the workplace – and is a key focus of the EU Artificial Intelligence Act (the “AI Act”).

There have been several high-profile examples of employees being unfairly judged by impersonal algorithms during screening stages or allegations that facial recognition software was ‘racially discriminatory’, so it is not unreasonable for employees to have their doubts. (For further information on AI bias and its hidden risks, see a previous article in our AI series.)

However, over half of those surveyed at Matheson's Digital Economy Group conference 'Regulating Tech, AI and Cybersecurity' in November 2024 agreed that AI would be 'very useful' or 'critical' for their business, and so bridging the AI trust gap is essential to unlock the transformational power of AI.

What can employers do to reduce the trust gap relating to AI in the workplace?

When it comes to closing the trust gap amongst employees, HR plays a pivotal role. HR teams are in a unique position within their business to address employees' concerns and to work with key stakeholders to adopt a comprehensive AI responsibility and governance framework. To work towards a more trusting relationship between employees and AI, some key focus areas for HR are as follows:

1. Training and education

Knowledge is power. If employees develop a deeper knowledge about the technology itself and the way in which it will be used, they will likely have less fear of the unknown. The AI Act requires that all employers using AI systems take measures to ensure that their employees are sufficiently trained and skilled when using the AI systems. Proper training and guidance may also help to clear up any misconceptions about the organisation’s intentions regarding AI deployment in the workplace and create greater understanding and confidence regarding ethical guardrails and risk governance.  

2. Develop Ethical AI Guidelines, Training and Governance

Create guidance and policies that govern the ethical use of AI, including privacy, bias mitigation, transparency, and accountability, as well as steps for handling AI-related concerns, incidents or failures. A team should be assigned to oversee AI implementation, monitor for unintended consequences such as biased decision-making or data protection concerns, and ensure appropriate training is provided to all employees using AI systems in the workplace.

3. Focus on / share positive outcomes

While it is important to be aware of negative outcomes and learn from AI pitfalls, it is crucial that employers highlight the positive outcomes too. Highlighting the successes and showing employees practical examples of how AI has positively impacted the workplace can help to build trust in AI. Until employees can see that it is possible for AI to positively impact their workload, it may be harder for them to adapt to automation. However, if they are shown a concrete example of automation being used to streamline a specific daily task and reduce their administrative workload, the employee will be significantly more likely to trust and embrace AI.

4. Communicate openly and create transparency

Meaningful communication is key to building trust amongst employees and reducing apprehension around the use of AI in the workplace. HR teams should focus on creating consistent messaging that is transparent, relevant and directly applicable to the employee's work. While HR should initiate the open communication, it should not be a one-way street. In order to create a genuine relationship of trust in the workplace, employees should be encouraged to share feedback on AI systems, ask questions and highlight any concerns they may have.

Employee consultation obligations should be identified and considered in relation to current and planned AI deployment and processes. Under the AI Act, employees and their representatives must be informed that they are subject to an AI system. HR practitioners can perform a valuable role in helping design engagement mechanisms that achieve the aims of this provision of the AI Act while preserving employee engagement practices that are already working well. This can sometimes be a complex balance and will require an approach tailored to the unique circumstances of each organisation.

Does the legislation provide any reassurance for employees?

The AI Act prohibits certain AI practices in the workplace, for example the use of AI systems to infer employees' emotions during interviews or task performance. Biometric categorisation systems that infer protected characteristics are also banned. AI systems used for key workplace decisions such as termination, performance evaluation and recruitment selection are considered high-risk and are therefore subject to more stringent requirements, such as human oversight. This should be comforting to employees, as it shows there are limits to how their employers can use automation in the workplace. Please see our previous article in the AI series, The EU AI Act: The Download for Employers.

Our Digital Economy Group has published a comprehensive AI Guide for Businesses, which provides an overview of the AI Act and will help you to understand the scope of your new obligations. To get your copy of the Matheson AI Guide for Businesses, please email: AIGUIDE@matheson.com.

What can we do to help?

Matheson's Employment, Pensions and Benefits Group is available to guide you through the complexities of navigating the ever-increasing use of AI in the workplace and the new legislation being introduced in this area, so please do reach out to our team or your usual Matheson contact.

This article was co-authored by Employment, Pensions and Benefits partner, Alice Duffy, senior associate, Jill Barrett and trainee solicitor, Cliona Coleman.