Mitigating Artificial Intelligence Bias Risk in Preparation for EU Regulation
Getting Ready for the European Union’s Artificial Intelligence Act
November 22, 2022
Critics say it is overzealous and will stifle innovation. Proponents say its firm boundaries will build trust and allow innovation to flourish. But whatever companies think of the European Union’s draft Artificial Intelligence Act,1 they need to be ready for it.
If you work in a law firm or corporate legal team, you may already be following the European Commission’s proposals to establish the world’s first region-wide legal framework for artificial intelligence (AI) systems. And if not, maybe you should be.
Published in April 2021, the proposed AI Act would ban “unacceptable” uses of AI that violate people’s fundamental rights and safety, and would impose strict rules on applications that pose a “high risk”. Firms that fail to comply could be hit with fines of up to €30mn or 6% of global revenue,2 whichever is higher.
The Commission followed up the draft in September 2022 with proposals for an AI Liability Directive and revised Product Liability Directive, which would make it easier for people to get compensation if they suffer AI-related damage, including discrimination.
Coupled with the General Data Protection Regulation, Digital Services Act and Digital Markets Act,3 the proposed rules are part of the EU’s strategy to set the global gold standard for ethical and trustworthy technology.
The World Is Watching
The implications are far-reaching, not only for software developers but for companies large and small that use AI, including those outside the EU whose systems are used within the bloc. More than four in 10 enterprises in Europe reported using at least one AI technology, according to a 2020 survey for the European Commission.4 Nor is the EU acting in isolation: in March this year China brought in tighter controls on the way tech companies can use recommendation algorithms, the draft Algorithmic Accountability Act of 2022 is under consideration in the US, and in October the White House published a blueprint for an AI Bill of Rights.5
With strict rules potentially applying in the EU as early as 2025, organisations that develop and deploy AI systems need to ensure they have robust governance structures and management systems in place to mitigate risk, particularly the risk of AI bias. In recent months, the major law firms have taken steps to advise their clients accordingly, but many organisations have yet to lay the groundwork for compliance.
The Rise of AI Bias
The draft AI Act takes a very broad view — critics say too broad — of what qualifies as an AI system. It covers:
- Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning.
- Logic and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems.
- Statistical approaches, such as Bayesian estimation, and search and optimisation methods.
The argument for such a broad definition is that the system itself is less important than the use case. In other words, the Act takes a broad view of what constitutes an AI system but a narrow, risk-based view of how it may be used.6
Beyond chatbots and self-driving vehicles, narrow AI in its many guises is being used every day to make decisions that affect people’s lives, from credit scores to job applications, university admissions to healthcare spending allocation.
Alongside the lauded successes are many horror stories about unintended consequences of algorithmic decision-making. A recruitment tool that discriminates against women, a chatbot that talks like a Nazi, facial recognition platforms that perform significantly worse on darker-skinned females than on lighter-skinned males7 — these are examples that have left some of the world’s largest companies red-faced.
One of the challenges when AI fails is the lack of “explainability”. In today’s era of deep learning algorithms and pervasive AI, the bias that human mistakes can introduce at any stage of the AI lifecycle is reproduced and amplified at a scale that is sometimes impossible to control or mitigate. When this happens, the forensic exercise needed to identify the trigger is not trivial and requires advanced technical expertise.
Distributed Accountability
So, who is culpable when outcomes are unfair and discriminatory? The people who trained the system? The people who selected the data? The people who built the software, often incorporating code written by different practitioners over many years and for different intended purposes?
The draft AI Act takes a hard line on accountability, placing obligations on providers, distributors, importers and even “users” of AI systems that are deemed to pose a high risk to people’s safety or fundamental rights. The word “users” here does not mean end-users (e.g. the general public) but individuals or bodies that use an AI system under their authority, for example a recruitment firm deploying an automated CV-screening tool. As for “providers”, this definition includes not only software developers but also the companies that commission them with a view to putting an AI system on the market or into use in the EU under their name or trademark. In other words, firms won’t be able to avoid responsibility by outsourcing projects. If they are involved in the building or deployment of a high-risk AI system, they will have at least some legal obligations.
The draft Act divides AI systems into four risk categories: unacceptable, high-risk, limited risk and minimal risk.
Unacceptable applications are prohibited outright. They comprise subliminal manipulation techniques, systems that exploit the vulnerabilities of specific groups, social scoring by public authorities and real-time remote biometric identification used by law enforcement in publicly accessible spaces.
The full force of the proposed regulation focuses on the “high-risk” category, which covers applications relating to transport, biometric ID, education, employment, welfare, private-sector credit scoring, law enforcement and immigration, among others. Before putting a high-risk AI system on the market or into service in the EU, providers would have to conduct a prior conformity assessment and meet a long list of requirements. These include setting up a risk management system and complying with rules on technical robustness, testing, training data, data governance and cybersecurity. Human oversight throughout the system’s lifecycle would be mandatory, and transparency would need to be built in so that users could interpret the system’s output — and challenge it if necessary. Frameworks that can serve as governance tools have already been developed. A relevant example is capAI, a procedure for conducting conformity assessments of AI systems in line with the EU Artificial Intelligence Act, developed through a collaboration between the Oxford Internet Institute and the Saïd Business School.8
Providers would also have to register their systems on an EU-wide database and put in place a compliant quality management system committing to post-market monitoring.
Users — including businesses deploying AI technology — would have to commit to inputting only data relevant to an AI system’s intended purpose and to monitoring the system’s operation. As well as keeping accurate records and carrying out data protection impact assessments, they would be obliged to inform the provider or distributor of any malfunctioning risks or serious incidents.
How Bias Infiltrates Systems
Companies that intend to develop or deploy AI systems need to act now to ensure their AI governance and risk-mitigation infrastructure will comply with the forthcoming rules.
This is not a job to be left solely to the technical experts or limited to legal and compliance departments. Rather it will require a cross-functional team led from the top and ideally including diversity, equality and inclusion specialists who can bring a human-centric perspective.
We expect machines to be objective decision-makers, but the reality is that human bias can, and inevitably will, infiltrate AI systems at multiple stages of their lifecycles, from before they are even conceived to many years into their deployment.
First, there’s the quality of the raw material: the data. The old adage of “rubbish in, rubbish out” applies to any analysis based on data, so data needs to be assessed for consistency, representativeness and usefulness. For example, if the data contains too few observations for a certain group, the training samples may include too few such cases and, even if they mirror the population overall, may fail to train the algorithm effectively for that group, leading to biased decision-making. So relevant questions must be asked. How old is the data? How was it gathered? For what original purposes? Does it accurately reflect the population that will be affected by the technology, or does it favour a particular gender, ethnicity, geographical area or other demographic? Implicit stereotypes, reporting bias, selection bias, group attribution errors, the halo effect and other problems can all creep in at this stage, compromising an AI system before it has even been built.
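A simple, practical first step is to audit how well each group is represented in the training data before any model is built. The sketch below is illustrative only: the file name, the “gender” column, the 5% threshold and the benchmark population shares are assumptions made for the example rather than anything prescribed by the draft Act.

```python
import pandas as pd

# Illustrative only: the file name, the "gender" column, the 5% threshold and
# the benchmark shares are assumptions, not requirements from the draft AI Act.
df = pd.read_csv("training_data.csv")

MIN_GROUP_SHARE = 0.05  # flag groups making up less than 5% of the sample

group_share = df["gender"].value_counts(normalize=True)
underrepresented = group_share[group_share < MIN_GROUP_SHARE]

if not underrepresented.empty:
    print("Groups that may be too small to train on reliably:")
    print(underrepresented)

# Comparing the sample against an external benchmark (e.g. census figures)
# helps to surface selection bias, not just small groups.
population_share = {"female": 0.51, "male": 0.49}  # assumed benchmark values
for group, expected in population_share.items():
    observed = group_share.get(group, 0.0)
    print(f"{group}: observed {observed:.2%} vs expected {expected:.2%}")
```

The same check can be repeated for every attribute relevant to the use case, with the results documented as part of the data-governance record.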
Then there’s the model: the coded algorithm that is trained to perform a specific task based on the supplied data. In a supervised scenario, when humans annotate the data for model training, they can compound problems with sampling bias, non-sampling errors, confirmation bias, anecdotal fallacy and more. These biases may then be propagated and amplified during the model training process and missed by the humans evaluating the model’s performance. When people see the output from an AI system, they may wrongly assume it to be objective and correct (so-called automation bias), thereby creating a feedback loop in which biased data is fed back into the system, replicating and magnifying the original shortcomings. The data scientists coding the AI are not immune to these biases either, and are often further misled by a performance-driven culture in which headline accuracy is treated as the only measure that matters.
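Judging a model by overall accuracy alone can mask precisely these problems. The sketch below, using invented labels and predictions, shows how accuracy and the rate of positive decisions can be broken down by group and compared; the “four-fifths” threshold at the end is a widely used rule of thumb, not a requirement of the draft Act.

```python
import pandas as pd

# Invented placeholder data: a protected attribute, true outcomes and model predictions.
results = pd.DataFrame({
    "gender": ["female", "female", "female", "male", "male", "male"],
    "y_true": [1, 0, 1, 1, 1, 0],
    "y_pred": [0, 0, 1, 1, 1, 0],
})

# Accuracy broken down by group, rather than a single headline figure.
accuracy_by_group = (results["y_true"] == results["y_pred"]).groupby(results["gender"]).mean()

# Share of positive decisions per group (a demographic-parity style check).
selection_rate = results.groupby("gender")["y_pred"].mean()

print("Accuracy by group:")
print(accuracy_by_group)
print("Selection rate by group:")
print(selection_rate)

# The "four-fifths rule" flags concern when the lowest selection rate falls
# below 80% of the highest.
disparate_impact = selection_rate.min() / selection_rate.max()
print(f"Disparate impact ratio: {disparate_impact:.2f}")
```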
The risks do not end there. As time passes, an AI system may be scaled to serve a broader population or adapted to meet a new objective. The data informing the system may become obsolete and, without monitoring and intervention, the potential for discrimination, with real-life consequences for individuals and reputational damage for system providers, only grows.
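Ongoing monitoring can start with something as simple as comparing the distribution of key inputs at training time with what the live system sees later. The snippet below sketches one common drift signal, the population stability index; the synthetic income figures and the 0.2 threshold are assumptions used purely for illustration.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Rough signal of drift between training-time and live distributions."""
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    e_idx = np.digitize(expected, edges[1:-1])  # bucket each value into 0..bins-1
    a_idx = np.digitize(actual, edges[1:-1])
    e_pct = np.bincount(e_idx, minlength=bins) / len(expected) + 1e-6
    a_pct = np.bincount(a_idx, minlength=bins) / len(actual) + 1e-6
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

# Synthetic, assumed data: applicant income at training time vs. in production.
rng = np.random.default_rng(0)
training_income = rng.normal(35_000, 8_000, 10_000)
live_income = rng.normal(42_000, 9_000, 10_000)  # the population has shifted

psi = population_stability_index(training_income, live_income)
print(f"PSI = {psi:.3f}")  # rules of thumb often treat values above 0.2 as material drift
```

Alerts from checks like this can then trigger the human review, retraining or retirement decisions that post-market monitoring is meant to support.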
Mitigating Bias
However, companies can do much to protect themselves by introducing a rigorous AI risk-assessment framework that addresses bias risk at every stage of a project, from design to retirement. That means understanding and documenting the intrinsic characteristics of the data, carefully deciding on the goal of the algorithm, using appropriate information to train it and capturing all model parameters and performance metrics in a model registry (a minimal sketch of such a registry record follows the list below). Providers can be confident of creating ethical systems that comply with the forthcoming rules and evolving ethical standards if they take a holistic approach characterised by:
- A technical framework to assess and monitor AI
- Adequate governance infrastructure to ensure the system is compliant with existing regulations
- Skills diversity in teams involved in the AI lifecycle
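As a rough illustration of the model-registry idea mentioned above, the sketch below captures a hypothetical high-risk system’s intended purpose, data provenance, parameters and metrics as a simple structured record. Every field name and value is invented for the example; none is taken from the draft Act.

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import date

@dataclass
class ModelRecord:
    """Hypothetical registry entry; field names are illustrative assumptions."""
    model_name: str
    version: str
    intended_purpose: str
    training_data_source: str
    data_collection_period: str
    parameters: dict = field(default_factory=dict)
    performance_metrics: dict = field(default_factory=dict)
    fairness_metrics: dict = field(default_factory=dict)
    human_oversight_contact: str = ""
    registered_on: str = date.today().isoformat()

record = ModelRecord(
    model_name="cv-screening-classifier",
    version="1.3.0",
    intended_purpose="Rank job applications for human review (high-risk use case)",
    training_data_source="Internal applicant-tracking exports",
    data_collection_period="2018-01 to 2021-12",
    parameters={"model_type": "gradient_boosting", "max_depth": 4, "n_estimators": 300},
    performance_metrics={"accuracy": 0.87, "auc": 0.91},
    fairness_metrics={"selection_rate_ratio_gender": 0.83},
    human_oversight_contact="hr-analytics-review@example.com",
)

# Versioned, append-only records like this make post-market monitoring and any
# later conformity audit considerably easier.
print(json.dumps(asdict(record), indent=2))
```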
Having attracted more than 3,000 amendments, the draft AI Act could undergo some heavy editing over the next year — perhaps even a change to the Commission’s very broad definition of an AI system. It is unlikely to become binding law before mid-2023, by which time tech-friendly Sweden will hold the Presidency of the Council of the European Union, a factor that could further influence the final version. A grace period of 24-36 months should then give companies time to comply with the new legislation.
In the meantime, there are lots of decisions to make. And they can’t be left to robots.
Footnotes:
1: https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
2: Article 71, https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:52021PC0206&from=EN
3: https://digital-strategy.ec.europa.eu/en/policies/digital-services-act-package
5: https://www.whitehouse.gov/ostp/ai-bill-of-rights/
6: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4064091
7: https://www.jumpstartmag.com/ai-gone-wrong-5-biggest-ai-failures-of-all-time/