How C-level can drive fairness in AI: by making (hard) choices

By Laurent Sorber
May 18th, 2020
6 minutes
AI · Fairness · Management

Executive summary

Fairness in artificial intelligence is a major concern in today’s ever-connected world: unless properly supervised, AI risks duplicating our society’s biases.

As AI becomes more embedded in life and business, we realize that our artificial intelligence is “all too human”: it reflects biases that it finds in the data it is trained on. Is AI bound to endlessly replicate our own shortcomings? Maybe not. Bias is more easily detected in AI than in humans. This is an advantage we can exploit.

Why fairness matters

Automated decision-making systems are being deployed on an unprecedented scale. With AI set to take or influence more and more decisions that affect social and economic opportunities, the idea of fairness in AI is hotly debated. Will AI hinder your (re-)entry to the labor market because you are a woman or because you have a migration background? There is widespread anxiety about this, and it may not be wholly unjustified.

How should we deal with that? How should we “regulate” AI? In part, AI is already regulated in Europe. When AI makes decisions that affect data subjects under the GDPR, the owner of the software should be able to explain how the software made those decisions. Art. 22 of the GDPR stipulates that data subjects have the right to “opt out” of any decision that was based “solely on automated processing” if this “produces legal effects concerning him or her or similarly significantly affects him or her”. And this is only the beginning.

Cutting-edge technology, age-old debate

What is important to understand about fairness in AI is that it will require making difficult choices. And those choices will define us as machine learning engineers and data scientists, but most importantly as humans. Today, when implementing AI solutions, most of the attention of an AI engineer goes to the utility of the model. When reporting to stakeholders, AI engineers tend to ask themselves: is the model I am delivering as accurate as possible?

The answer is that accuracy can no longer be the only major goal. Accuracy is a technical challenge that is very well understood. Once you add the requirement that the system also needs to be fair, you have to accept a trade-off: your system will necessarily become less “accurate”.

“A new, powerful, data-driven tool”

This is logical. In fact, you might find that the more accurate your model is, the more biased it becomes, because the AI learns from a real-world data set. It might steer women to nursing jobs, for example, and men to engineering jobs. A famous example of bias in AI is Amazon’s recent AI recruiting tool, which showed a bias against women. COMPAS, an algorithm used in the US to assess the likelihood that a defendant will reoffend, was and still is criticized by many for being biased against African Americans.

While these examples remind us that there is still a lot of work to be done, the debate around fairness in AI is also a great opportunity: AI gives us the means to implement a company’s chosen notion of fairness. Fairness policies in AI should be seen as a new, powerful, data-driven tool to sharpen and refine age-old questions about fairness, discrimination, and justice. It is precisely because machine learning models are unavoidably explicit that we are forced to define our values precisely and make choices.

How can you make AI “fair”?

An organization's preferences around fairness in AI should be discussed by its management team, maybe even at board level for important matters. It is not something that can be “decided” at the level of the developers. If you do not address fairness questions before starting an AI project, some developers might choose the “tech bro” solution: opt for “accuracy” and make sure it works technically, with little regard for the consequences or overarching fairness considerations.

But fairness questions can be prepared and answered. And then they can be implemented, explained and defended. Fairness in AI is too important to leave to a technology partner—it is a matter that needs to be handled by the organization itself.

“Fairness in machine learning starts with what you want to achieve”

Consider the attributes along which bias commonly occurs: gender, race, socioeconomic and migratory background, age, familial status, disability, medical history, and religion. Before starting an AI project, you need to decide as a company how you want to deal with them. There are multiple ways to approach the question. Fairness in machine learning starts with stating what you want to achieve: what should you define as the standard? Do you want your system to reflect the (implicit) biases present in society? Do you want to make sure that every group is represented proportionally? Or do you want to create equality in the outcome?

As a thought experiment, you could imagine an AI system that offers a list of all the possible biases it can find in the data—and then proceeds to ask to what extent the owner is willing to accept this bias, or how they want to correct the bias.
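To make that thought experiment a little more concrete, here is a minimal sketch of what such a bias report could look like: for every sensitive attribute, it lists each group's positive-outcome rate and the gap to the best-treated group, so the owner can decide, bias by bias, whether to accept or correct it. The column names and the toy data are purely illustrative assumptions, not part of any existing tool.

```python
import pandas as pd

def bias_report(df: pd.DataFrame, outcome: str, attributes: list[str]) -> pd.DataFrame:
    """For each sensitive attribute, list every group's positive-outcome rate
    and its gap to the best-treated group, as input for a fairness discussion."""
    rows = []
    for attr in attributes:
        rates = df.groupby(attr)[outcome].mean()
        for group, rate in rates.items():
            rows.append({
                "attribute": attr,
                "group": group,
                "positive_rate": round(rate, 3),
                "gap_to_best": round(rates.max() - rate, 3),
            })
    return pd.DataFrame(rows)

# Illustrative (made-up) historical hiring decisions.
history = pd.DataFrame({
    "gender":   ["M", "M", "F", "F", "M", "F", "M", "M"],
    "age_band": ["<40", ">=40", "<40", ">=40", "<40", "<40", ">=40", "<40"],
    "hired":    [1, 1, 0, 1, 1, 0, 0, 1],
})

print(bias_report(history, outcome="hired", attributes=["gender", "age_band"]))
```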

Dealing with AI biases in a business environment

To illustrate how to deal with bias in AI projects, we created an interactive walk-through of a worked scenario: as the manager of a construction company, you would like to use an AI recruiting tool to assist you in selecting candidates in a fair way. The scenario includes three concrete strategies for dealing with bias in AI (and we hint at a fourth one that we’ll describe in a follow-up article). Should you strive for equality of outcome, demographic parity, or equal opportunity? Have a look at our interactive tool and discover the business outcomes and consequences of each strategy.
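As a rough preview of what these strategies compare, the sketch below computes, for one set of hiring decisions, the quantity each strategy asks to equalise between genders: the number of hires (equal outcome), the selection rate (demographic parity), and the hiring rate among truly competent candidates (equal opportunity). The candidate table with columns gender, competent, and hired is a hypothetical stand-in, not the code behind our tool.

```python
import pandas as pd

def strategy_summary(candidates: pd.DataFrame) -> pd.DataFrame:
    """Per gender: hires (compared by equal outcome), selection rate (compared
    by demographic parity), and hiring rate among competent candidates
    (compared by equal opportunity)."""
    summary = candidates.groupby("gender").agg(
        pool_size=("hired", "size"),
        hires=("hired", "sum"),
        selection_rate=("hired", "mean"),
    )
    competent_hire_rate = (
        candidates[candidates["competent"] == 1]
        .groupby("gender")["hired"]
        .mean()
        .rename("competent_hire_rate")
    )
    return summary.join(competent_hire_rate)
```

Whichever of these columns you decide must be (approximately) equal across genders is, in effect, your fairness policy.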

Fairness Dashboard

As the manager, you are tasked with hiring 100 construction workers for an upcoming project. You would like to use an AI recruiting tool to assist you in selecting candidates in a way that is fair with respect to gender.

The visual shows the impact of your chosen fairness measure on the hiring recommendations made by your AI model. You can see the consequences of each fairness measure by clicking on the arrows.

The numbers above the line represent the candidates your model suggests you hire. The numbers below the line represent the remainder of your talent pool.

Pay close attention to how your choice of fairness measure influences the composition of your team in terms of gender, (true) competencies, and profits. Where do you draw the line?

Equal Outcome

Equal outcome means using quotas to ensure positive outcomes are distributed equally among men and women.

(Chart legend: incompetent men, competent men, incompetent women, competent women.)

Conclusion

Your AI model enforces equal quotas so that its hiring suggestions are balanced between women and men. Because of the gender imbalance in the talent pool, there are only 30 women available for hire, so the equal quota caps the men’s hires at 30 as well, resulting in 40 open positions.

The numbers

Number of hired candidates: 60 / 100
Number of hired female candidates: 30 / 30
Number of hired competent candidates: 31 / 144
Profit in million €: 3.1 / 10

Assumptions

1. We assume you are prepared to interview up to 3x the number of people you want to hire, in this case 300 people.
2. We assume your talent pool consists of 270 men (90%) and 30 women (10%).
3. We assume that your talent pool contains 135 competent men and 9 competent women. The proportions of men and women who are competent are slightly skewed on purpose to help illustrate the effect of your fairness choice.
4. We assume that every competent construction worker you hire brings in €100k of profit, regardless of gender.
5. Your AI hiring model tries to predict whether someone is competent or not, and, like any real-world decision-making model, sometimes fails to do so.
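Under assumptions 1 to 3, the quota mechanics of equal outcome already explain the headline numbers above: with only 30 women in the pool, the equal quota caps the men’s hires at 30 as well, leaving 40 positions open. Here is a minimal sketch of that arithmetic; the competent-hire count and the profit additionally depend on the model’s imperfect predictions (assumption 5), which are not simulated here.

```python
def equal_outcome_hires(n_positions: int, pool: dict[str, int]) -> dict[str, int]:
    """Split the open positions equally across groups; the smallest group caps
    the number hired from every group, so some positions may stay open."""
    quota = n_positions // len(pool)                  # 50 per gender for 100 positions
    hired_per_group = min(quota, min(pool.values()))  # capped by the 30 available women
    hires = {group: hired_per_group for group in pool}
    hires["open_positions"] = n_positions - hired_per_group * len(pool)
    return hires

print(equal_outcome_hires(100, {"men": 270, "women": 30}))
# {'men': 30, 'women': 30, 'open_positions': 40}
```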

At this point, you’re probably wondering about the fourth strategy mentioned in our tool (equalized odds). It is a specialized approach to fairness that embodies the idea that being fair also means being fair to the group of incompetent candidates. Sounds counterintuitive? Put yourself in the shoes of an incompetent woman. All things considered, wouldn’t you want the same chance to be selected as an incompetent man? A very interesting question for another time: we’ll take a deeper look at this specific strategy and its outcomes in an upcoming blog post.
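To give a first taste ahead of that post: equalized odds is usually formalised as requiring two rates to be equal across groups: the hiring rate among competent candidates (as in equal opportunity) and the hiring rate among incompetent candidates, so that the incompetent woman indeed gets the same chance as the incompetent man. A minimal check, reusing the same hypothetical candidate table as in the sketch above:

```python
import pandas as pd

def equalized_odds_gaps(candidates: pd.DataFrame) -> pd.Series:
    """Gap between genders in the hiring rate of competent candidates (true
    positive rate) and of incompetent candidates (false positive rate);
    equalized odds asks for both gaps to be close to zero."""
    tpr = candidates[candidates["competent"] == 1].groupby("gender")["hired"].mean()
    fpr = candidates[candidates["competent"] == 0].groupby("gender")["hired"].mean()
    return pd.Series({"tpr_gap": tpr.max() - tpr.min(),
                      "fpr_gap": fpr.max() - fpr.min()})
```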

How to live a “good” life

If you are not ready to answer these fairness questions, you should at least strive for transparency (explain how it works) and compliance with privacy (explain to individuals affected by the AI how the system makes its decisions). In the end, the holy grail of fairness in AI and machine learning is not about finding the best AI, the one that will be 100% accurate while also perfectly unbiased, because in general it cannot be done. It’s about asking yourself “How can we be a ‘good’ member of society? How can we use AI to bring ‘good’ to society by making conscious, moral choices that will be embedded in the software we create?”

Those aspects need to be addressed before developing AI. One essential facet is coming to terms with the fact that there is no standard definition of fairness. Such a definition would require, well, bias. This is true for human decisions, and this is true for machine decisions. Without a perfect fairness measure, a trade-off has to be made. Promoting fairness and reducing inequality transcends machine learning and AI and requires a multi-disciplinary approach, ultimately involving society as a whole. In the end, it’s all about the choices we make.
