July 8, 2020

Legal Consensus Regarding Biases and Fairness in Machine Learning in Europe and the US

Author: Synthesized

Machine learning and AI are increasingly used to assess applicants for a variety of services within the financial, healthcare and public sectors, with advocates highlighting the speed and precision they bring to what have traditionally been time-consuming, paper-based processes. However, despite the obvious efficiency gains, there is a rising tide of concern about the possibility of unintentional (and potentially illegal) discrimination through the large-scale deployment of automated decision making. Recent high-profile examples include sentencing algorithms that exhibited racial bias, and gender bias in job advertising and credit applications. Should businesses deploying artificial intelligence today be concerned about the risk of breaching human rights law?

How Is It Possible for Machine Learning to Be Discriminatory?

Discrimination, in this legal context, can be defined as “when an individual is treated less favourably than someone else similarly situated on one of the grounds prohibited in international law”. These grounds vary slightly under different regulations, but normally include as a minimum an individual’s sex, gender, religion, race, age, disability, familial status and sexual orientation.

Whilst most would agree that the kinds of AI-driven applications normally implemented at scale are not designed to deliberately discriminate against individuals in the way a human might, it’s important to understand that international human rights legislation recognises two possible ways for discrimination to occur: direct and indirect. An application can be outwardly unbiased at the design level, but if the data used to develop it or to make decisions is imbalanced, the decisions it makes can have an inherent, unintended, indirectly discriminatory effect. What’s more, as many such systems ‘learn’ as they go, refining their analysis of data and therefore changing how they make decisions, even a model that starts out balanced can develop a negative impact over time.
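To make the indirect route concrete, a simple first check is to look at how each protected group is represented in the training data and at the historical outcome rates a model would learn from. The sketch below is illustrative only: it assumes a pandas DataFrame with hypothetical gender and approved columns, and the figures are made up.

```python
import pandas as pd

# Hypothetical historical loan decisions; column names and values are illustrative only.
df = pd.DataFrame({
    "gender":   ["male", "male", "male", "female", "male", "female"],
    "approved": [1, 1, 0, 0, 1, 0],
})

# Representation: what share of the training data each group contributes.
print(df["gender"].value_counts(normalize=True))

# Historical outcome rates: the share of positive decisions each group received.
print(df.groupby("gender")["approved"].mean())

# Large gaps in either measure are a warning sign that a model trained on this
# data may reproduce the imbalance as indirect discrimination, even if the
# protected attribute itself is never used as a model input.
```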

Legal Implications for Organisations Relying on AI

Individuals’ rights against discrimination are protected in international law, with local statutes such as the UK’s Equality Act and the US Civil Rights Act setting out the specific protected domains and the penalties for companies. The growth of privacy and data protection laws such as the GDPR also gives greater weight to individuals’ choices about how their data is used, and protection against potential harms from that use. This means that any business using machine learning systems that discriminate, directly or indirectly, would be acting unlawfully and could be ordered to compensate any person who felt they had been affected financially or emotionally. What’s more, if the software had been developed by a third party, that third party could also be liable for a discrimination claim, which would be costly and would carry the risk of reputational damage.

These are not just theoretical concerns. Amazon scrapped a machine learning recruiting engine it had been building since 2014 after discovering that, trained on historical (and largely male) successful-applicant data, it systematically favoured male applicants. A major ProPublica analysis exposed clear racial bias in a widely used scoring system employed by the US justice system to estimate reoffending risk and inform probation decisions, something that appears to directly contravene the US Civil Rights Act (1964). Concern is growing, with some US legislators even calling for AI-specific disclosure laws to prevent such discrimination in the future.

The UK and Europe have traditionally taken a more rigorous approach to data privacy and protection, and this still appears to be the case when it comes to automated decision making. European citizens are protected against discriminatory decision making under the broader EU Charter of Fundamental Rights, which states that “discrimination based on any ground such as sex, race, colour, ethnic or social origin, genetic features, language, religion or belief, political or any other opinion, membership of a national minority, property, birth, disability, age or sexual orientation shall be prohibited”.

Whilst this charter does not specifically cover machine learning decisions, it is reinforced by the EU Data Protection Convention and 2016’s General Data Protection Regulation (GDPR), which requires organisations to conduct a data protection impact assessment (DPIA) to identify, quantify and minimise the risks of new technological practices that have a high potential to impact an individual’s rights. Importantly, the GDPR treats fully automated decision making by organisations as high risk, and the UK Information Commissioner’s Office (ICO) has highlighted that data controllers have a responsibility to evaluate, document and take account of possible bias in the design phase to ensure artificial intelligence does not contravene the UK Equality Act (2010). Lawsuits under the GDPR in Denmark, the Netherlands and France have all led to the withdrawal of automated systems for welfare decision making and access control, citing infringement of individual rights.

Technical Approaches to Remove Discrimination

It’s clear that, in order to remain compliant with the law, organisations designing and deploying machine learning for decision making need to understand the risk and have clearly defined processes to mitigate discrimination. An important step is to ensure that the datasets used to build new applications are not themselves contributing to the problem, either through imbalanced data that leads to disproportionate decisions against under-represented groups, or through the use of historical data that encodes a pattern of discrimination previously applied to protected groups. In particular, the ICO recommends that organisations establish “clear policies and good practices for the procurement of high-quality training and test data ... especially if organisations do not have enough data internally or have reason to believe it may be unbalanced or contain bias”.
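One lightweight way to put such a policy into practice is to audit candidate training data (or a model’s decisions) with a simple group-rate comparison before it is used. The sketch below is not drawn from the ICO guidance itself; it applies the widely cited US “four-fifths” rule of thumb as a screening threshold, using hypothetical column names (sex, hired) and made-up figures.

```python
import pandas as pd

def disparate_impact_ratio(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Ratio of the lowest to the highest positive-outcome rate across groups.

    The US 'four-fifths' guideline treats a ratio below 0.8 as a signal of
    possible adverse impact that warrants further investigation.
    """
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()

# Illustrative data only: 'sex' and 'hired' are hypothetical column names.
applications = pd.DataFrame({
    "sex":   ["F", "F", "F", "F", "M", "M", "M", "M"],
    "hired": [0,   1,   0,   0,   1,   1,   0,   1],
})

ratio = disparate_impact_ratio(applications, "sex", "hired")
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.25 / 0.75 ≈ 0.33 here
if ratio < 0.8:
    print("Potential adverse impact: review the data and model before deployment.")
```

The same check can be re-run periodically on a deployed model’s decisions, since, as noted above, systems that continue to learn can drift away from an initially balanced state.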

Solving Discrimination Through AI

It’s important to remember that, as well as taking steps to ensure AI is not harmful, organisations can also choose to create and deploy machine learning solutions that improve outcomes for all members of society. Some are actively building tools to detect discrimination, image and speech recognition algorithms are improving accessibility, and machine learning’s ability to analyse and make predictions across large volumes of data is improving healthcare outcomes across the world. Whilst this blog has highlighted serious risks, careful evaluation and planning, combined with new data technologies, can ensure that applications advance, rather than restrict, the interests of the most vulnerable and do not come into conflict with human rights law.

Summary

Discrimination through the use of AI is a real issue with potentially far-reaching societal and ethical consequences. As more and more private and public sector organisations build and deploy automated decision-making systems, they need to be aware of the potential to introduce bias, the impact this can have on individuals and communities, and the significant legal implications if indirect discrimination occurs.

A key factor is identifying bias in the datasets used as inputs for AI development and providing access to data that does not contain these issues. Look out for our upcoming blog, which will examine this in more detail, or, to understand how to detect biases in your data and the implications for your models, contact our data experts at team@synthesized.io.