Do Androids Dream of Unfair Dismissal?
A federal judge in California recently considered an employment discrimination claim involving AI. Although the case turned on novel and largely unprecedented circumstances, disputes of this kind are likely to become more common in the workplace and may well find application in the South African workplace in future. The court considered a proposed class action brought by a job applicant, Derek Mobley, against Workday Inc., an American vendor of on‑demand financial management, human capital management and student information system software. The lawsuit claimed that Workday’s AI-powered hiring software perpetuates existing biases against job applicants on the grounds of race, age and disability, and Mobley claimed that he was overlooked for over a hundred jobs because of these biases.
The ruling is groundbreaking. Workday was not screening Mobley for direct employment with Workday itself; it relied on the results of its AI-powered hiring software to determine whether to put Mobley forward as a candidate to its clients, who were seeking candidates for employment, and was therefore acting in the traditional role of a recruitment agency. The California court nevertheless determined that Workday could be seen as an employer under federal anti-discrimination laws because it handles screening tasks typically done by its clients. While the court dismissed the claims of intentional discrimination and ruled that Workday is not an “employment agency”, it held that the company’s AI tools could make it liable for discrimination because they perform crucial hiring functions. The court took into account that Workday’s customers had delegated their traditional hiring functions, including the rejection of applicants, to the algorithmic decision-making tools provided by Workday.
This finding could have an impact on South African employers that use service providers such as Workday, whether the employer engages the service provider directly as a recruiter for an open position or uses software, tools or platforms bought or licensed from the service provider to identify or select candidates for employment. This is because, in South Africa, employees are protected against unfair discrimination by the Employment Equity Act (EEA), which provides that no person may unfairly discriminate, directly or indirectly, against an employee, in any employment policy or practice, on one or more grounds, including race, gender, sex, pregnancy, marital status, family responsibility, ethnic or social origin, colour, sexual orientation, age, disability, religion, HIV status, conscience, belief, political opinion, culture, language, birth or any other arbitrary ground. The EEA extends this protection to applicants for employment.
Given these protections against unfair discrimination, which are already entrenched in South African labour law and practice, and the broad scope of the grounds protected, a South African employer that engages a third party such as Workday as a recruiter, or that itself uses software, tools or platforms provided by such a third party, could in future be drawn into unfair discrimination claims. Where the employer engages a recruitment service, a court considering whether the recruiter’s use of AI or algorithmic decision-making tools results in unfair discrimination could take into account that the EEA provides that “no person”, not just an employer, may unfairly discriminate, even against an applicant for employment at the recruiter’s client. Any such claim would likely be brought against the potential employer, which could notionally, under South African law, be held responsible for the acts of the recruiter as its agent, or on the basis that it relied on or used the discriminatory results of the algorithmic decision-making tools.
In addition, if the potential employer itself uses software, tools or platforms provided by a third party such as Workday, and these tools or processes produce discriminatory results, not only could the potential employer face legal claims, but the provider of the algorithmic decision-making tools could also face claims as the developer of the tool. This aligns with the legislative landscape taking shape in various countries around the globe to manage the potential harms that can arise from the development and use of AI systems, under which the developers and deployers of AI systems could be held liable for, amongst other things, biases in their AI tools. For example, the EU AI Act classifies AI systems intended for the recruitment or selection of employees, or for employee performance monitoring and evaluation, as high-risk AI, as are AI systems that use automated decision-making to profile individuals in, among other contexts, the workplace. This class of AI attracts stringent obligations under that legislation, including requirements on the quality of the data used by the AI tool (to prevent bias and ensure accuracy), on managing cybersecurity risks, on transparency about how the AI tool works, and on human oversight to ensure consistency, prevent harm and ensure responsible use.
Algorithmic decision-making tools may also need to comply with specific South African laws, including the Protection of Personal Information Act (POPIA). Under section 71 of POPIA, a data subject may, as a general rule, not be subjected to a decision that has legal consequences for them, or that affects them to a substantial degree, where that decision is based solely on the automated processing of personal information intended to provide a profile of them. AI hiring tools that derive decisions and create candidate profiles solely through automated processing may therefore not be permitted.
Although the decisions of foreign courts are not binding on South African courts, foreign jurisprudence may be considered when a novel point of law arises, and it can be persuasive. The approach of the California court in the Workday case should therefore be noted.
South African employers should note that the use of AI by recruiters could inadvertently draw them into claims of unfair discrimination by employees or applicants for employment; recruiters could face similar claims directly. Not only should algorithmic decision-making tools be carefully calibrated to remove biases that could amount to discrimination on prohibited grounds, and to ensure compliance with applicable laws and regulations, but the contracts for services between the parties should also be carefully thought through to provide commercial protection should these legal risks materialise.