AI and Employment Law: Fairness, Transparency and Workplace Risk

AI tools are used in various aspects of the employment relationship and have been for some time. From job advertising and initial sifts in the recruitment process, to managing absences and determining rotas, automated decision-making is playing a part, and its role will only grow as further AI tools are deployed. Despite the opportunity for enhanced business productivity and time savings, over-reliance on AI tools without careful management risks, at best, undermining the personal nature of the employment relationship and the nuanced decision-making often required to manage a workplace empathetically and, at worst, discrimination and bias.

What is the impact of AI on recruitment and hiring decisions?

Employers are increasingly using AI tools to sift initial applications and CVs and to search social media profiles for key terms. This can lead to automated decision-making in which applications are rejected without any direct human involvement. Data collected by DemandSage in 2025 indicated that 87% of companies now use AI in recruitment, and these numbers will no doubt continue to increase. Under data protection legislation there is a right (in some circumstances) to human review of a decision made by a fully automated decision-making process, but this limited right does not provide sufficient protection against potential bias in the algorithms used.

Should employers be concerned about bias and discrimination in AI decisions?

AI bias is often a product of the way a tool has been modelled and the data fed into it during training and development. In a machine learning context, the potential problems were highlighted by Amazon's use of automated CV screening several years ago. Trained on Amazon's historical recruitment data, the algorithm, through machine learning, "taught" itself that male candidates were preferable to female candidates. Amazon abandoned the tool, but it stands as a warning of the discrimination that may arise. Where AI tools continue to develop as they receive new information, it becomes harder to know what the underlying algorithm is basing its decisions on, and correspondingly harder for employers to justify their decision-making process as that process becomes more opaque.

Issues can arise even before the application stage, when jobs are advertised. In 2023, Global Witness conducted research which revealed clear gender bias in Meta's Facebook algorithm: in the Netherlands, a receptionist role was advertised to female Facebook users in 97% of cases and a mechanic role to male Facebook users in 96% of cases, with similar statistics in France. In February this year, the Netherlands Institute for Human Rights found that Meta was not fulfilling its duty of care to its Dutch users by using algorithms with a discriminatory effect. Similarly, in a ruling last month, the French equalities regulator said that Meta's algorithms are sexist and in breach of France's anti-discrimination laws, giving the company three months to set out measures to rectify this. This sets a European precedent for treating algorithmic bias as discrimination in law, expanding the reach of equality regulation and making it clear that transparency, fairness and human supervision must be applied on an ongoing basis when deploying such systems.

Uber has also faced criticism in recent times over the facial identification AI software it required drivers to use when logging on to its driver app. The software used a photo comparison tool to verify drivers' identities on login by matching their pictures against those stored in its database, but it reportedly struggled to recognise individuals with darker skin tones accurately. This left numerous workers unable to access the app and secure work. Testing revealed that the software had a failure rate of 20.8% for females with darker skin and 6% for males. A tribunal claim brought by one such affected driver, who was removed from the app due to "continued mismatches" in the photos he was submitting, settled before its 17-day hearing listed late last year.

Other AI tools also have the potential to discriminate against employees with disabilities. For example, automated shift allocation tools that use AI to assess data on workers' past availability and productivity may offer fewer shifts, and consequently reduced pay, to an employee whose availability or productivity is affected by a disability. Employers must therefore be alive to these sorts of issues to avoid discrimination claims, which can be brought as one or more of: direct discrimination, indirect discrimination, harassment, discrimination arising from disability or failure to make reasonable adjustments (in a disability claim).

How can employers ensure compliance with employment laws and regulations when using AI?

The use of AI technology in, for example, a redundancy process would make it much more difficult for an employee to understand whether a decision to dismiss is rational and fair unless the AI model offers appropriate transparency and explainability. The laws protecting against unfair dismissal and discrimination require an employer to act fairly and appropriately, and without a transparent, explainable understanding of the underlying model, employers will find it difficult to defend claims, leaving them exposed. From both an employee relations and a risk management perspective, employers need to be certain they can show that the decisions they make are objective and non-discriminatory.

Key tips for employers using AI when making employment decisions

What's next for AI and Employment Law?

As highlighted in our recent Insight on AI and Regulation and Ethics, there are currently no specific UK AI laws in force. However, other frameworks are gathering momentum in the employment space. In April 2024, the Trades Union Congress published a draft Artificial Intelligence (Employment and Regulation) Bill, setting out a potential framework for regulating AI in the workplace. Employers in the EU will already be facing some AI-related obligations arising from the EU AI Act, some provisions of which are already in force.

With the recent decision of the French equalities regulator, there is growing recognition of the need for supervision, open scrutiny and accountability in automated decision-making. Greater regulation, and an influx of AI-related case law, appear likely.
