Cutting-edge technology, age-old debate
What is important to understand about fairness in AI is that it requires making difficult choices. And those choices will define us as machine learning engineers and data scientists, but most importantly as humans. Today, when implementing AI solutions, most of an AI engineer’s attention goes to the utility of the model. When reporting to stakeholders, AI engineers tend to ask themselves: is the model I am delivering as accurate as possible?
The answer is that accuracy is no longer the only major goal. Accuracy is a technical challenge that is by now well understood. Once you add the requirement that the system also needs to be fair, you have to accept a trade-off: your system will necessarily become less “accurate”.
This is logical. In fact, you might find that the more accurate your model is, the more biased it becomes, because the AI learns from a real-world data set. It might steer women to nursing jobs, for example, and men to engineering jobs. A famous example of bias in AI is Amazon’s AI recruiting tool, which showed an aversion towards women. COMPAS, an algorithm used in the US to assess the likelihood that a defendant will reoffend, was and still is criticized by many for being biased against African Americans.
While these examples remind us that there is still a lot of work to be done, the debate around fairness in AI is also a great opportunity: AI allows us to implement a company’s chosen notion of fairness. Fairness policies in AI should be seen as a new, powerful, data-driven tool to sharpen and refine age-old questions about fairness, discrimination, and justice. It is precisely because machine learning models are so unforgivingly explicit that we are forced to define our values precisely and make choices.
How can you make AI “fair”?
An organization’s preferences around fairness in AI should be discussed by its management team, maybe even at board level for important matters. It is not something that can be “decided” at the level of individual developers. If you do not address fairness questions before starting an AI project, some developers might default to the “tech bro” solution: optimize for “accuracy” and make sure it works technically, with little regard for the consequences or overarching fairness considerations.
But fairness questions can be prepared and answered, and the answers can then be implemented, explained and defended. Fairness in AI is too important to leave to a technology partner: it is a matter that the organization itself needs to own.
“Fairness in machine learning starts with what you want to achieve”
Consider the dimensions along which bias commonly occurs, and the groups of people affected: gender, race, socioeconomic and migratory background, age, familial status, disability, medical history, and religion. Before starting an AI project, you need to decide as a company how you want to deal with them. There are multiple ways to approach the question. Fairness in machine learning starts with stating what you want to achieve: what do you define as the standard? Do you want to reflect the (implicit) societal biases in your system? Do you want to make sure that every group is treated representatively? Or do you want to create equality in the outcome?
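Written out a little more formally, the three options in that last question look roughly like this (a loose sketch, not the only possible formalisation, with Ŷ the model’s decision, Y the historical outcome, and A the group attribute):

```latex
% Option 1 -- reflect the data: fit the historical outcomes, biases included.
\hat{Y} \approx Y

% Option 2 -- representative treatment (demographic parity):
% every group is selected at the same rate.
P(\hat{Y} = 1 \mid A = a) = P(\hat{Y} = 1 \mid A = b)
\qquad \text{for all groups } a, b

% Option 3 -- equality in the outcome: every group receives the same
% number of positive decisions, regardless of its size.
\#\{\hat{Y} = 1,\ A = a\} = \#\{\hat{Y} = 1,\ A = b\}
\qquad \text{for all groups } a, b
```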
As a thought experiment, you could imagine an AI system that offers a list of all the possible biases it can find in the data, and then asks its owner to what extent they are willing to accept each bias, or how they want to correct it.
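Such a tool does not exist off the shelf, but a minimal sketch of the idea is easy to write down. Assuming a pandas DataFrame with a binary outcome column, the entirely hypothetical audit_bias helper below measures the outcome gap per sensitive attribute and then literally asks its owner what to do:

```python
import pandas as pd

def audit_bias(df: pd.DataFrame, outcome: str, sensitive_cols: list) -> None:
    """For each sensitive attribute, report how the positive-outcome rate
    differs between groups, then ask the owner to accept or correct it."""
    for col in sensitive_cols:
        rates = df.groupby(col)[outcome].mean()  # positive rate per group
        gap = rates.max() - rates.min()
        print(f"{col}: positive-outcome rate per group: {rates.to_dict()}")
        print(f"{col}: largest gap between groups: {gap:.0%}")
        decision = input(f"Accept this gap for '{col}', or correct it? [accept/correct] ")
        print(f"Recorded decision for '{col}': {decision}\n")

# Invented toy data: 'hired' is the outcome we audit.
candidates = pd.DataFrame({
    "gender":   ["F", "F", "F", "M", "M", "M", "M", "F"],
    "age_band": ["<40", ">=40", "<40", "<40", ">=40", "<40", "<40", ">=40"],
    "hired":    [0, 0, 1, 1, 1, 0, 1, 0],
})
audit_bias(candidates, outcome="hired", sensitive_cols=["gender", "age_band"])
```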
Dealing with AI biases in a business environment
To illustrate how to deal with bias in AI projects, we created an interactive walk-through of a worked scenario: as the manager of a construction company, you would like to use an AI recruiting tool to help you select candidates in a fair way. The scenario includes three concrete strategies for dealing with bias in AI (and hints at a fourth, which we will describe in a follow-up article). Should you strive for equality of outcome, demographic parity, or equal opportunity? Have a look at our interactive tool and discover the business outcomes and consequences of your strategies; a code sketch of the three strategies follows below.
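To make the three strategies tangible outside the tool, here is a minimal, self-contained sketch. The candidate pool, scores and quotas are all invented for illustration; each strategy picks from the same scored pool, but under a different constraint:

```python
import pandas as pd

# Hypothetical candidate pool for the construction-company scenario.
# 'score' is the model's suitability estimate, 'qualified' the (idealised)
# ground truth; all numbers are invented for illustration.
pool = pd.DataFrame({
    "group":     ["M"] * 8 + ["F"] * 4,
    "score":     [.9, .8, .8, .7, .6, .5, .4, .3, .85, .7, .5, .2],
    "qualified": [1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0],
})

def top_per_group(df, quota):
    """Take each group's best-scoring rows, up to its quota."""
    ranked = df.sort_values("score", ascending=False)
    return pd.concat(g.head(quota[name]) for name, g in ranked.groupby("group"))

def equality_of_outcome(pool, n_hires):
    # Same *number* of hires from each group, regardless of group size.
    k = n_hires // pool["group"].nunique()
    return top_per_group(pool, {g: k for g in pool["group"].unique()})

def demographic_parity(pool, n_hires):
    # Same hiring *rate* for each group (up to rounding):
    # quotas proportional to group size.
    rate = n_hires / len(pool)
    sizes = pool["group"].value_counts()
    return top_per_group(pool, {g: round(rate * n) for g, n in sizes.items()})

def equal_opportunity(pool, n_hires):
    # Same hiring rate among *qualified* candidates in each group
    # (up to rounding).
    qualified = pool[pool["qualified"] == 1]
    rate = n_hires / len(qualified)
    sizes = qualified["group"].value_counts()
    return top_per_group(qualified, {g: round(rate * n) for g, n in sizes.items()})

for strategy in (equality_of_outcome, demographic_parity, equal_opportunity):
    hired = strategy(pool, n_hires=4)
    print(f"{strategy.__name__:20s} -> {hired['group'].value_counts().to_dict()}")
```

On this toy pool, equality of outcome hires two men and two women, while demographic parity and equal opportunity both hire three men and one woman. Which split counts as “fair” is exactly the business choice the walk-through asks you to make.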
At this point, you’re probably wondering about the fourth strategy mentioned in our tool: equalized odds. It is a specialized approach to fairness that embodies the idea that being fair also means being fair to the group of incompetent candidates. Sounds counterintuitive? Put yourself in the shoes of an incompetent woman: all things considered, wouldn’t you want the same chance of being selected as an incompetent man? A very interesting question for another time; we’ll take a deeper look at this strategy and its outcomes in an upcoming blog post.
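Until then, the core of equalized odds fits in a few lines: it demands that both the true-positive rate (competent candidates who get selected) and the false-positive rate (incompetent candidates who get selected) are the same for every group. A minimal check, with invented numbers:

```python
import pandas as pd

# Invented selection results: 'competent' is the ground truth,
# 'selected' is the tool's decision.
results = pd.DataFrame({
    "group":     ["M"] * 6 + ["F"] * 6,
    "competent": [1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0],
    "selected":  [1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0],
})

for group, g in results.groupby("group"):
    tpr = g.loc[g["competent"] == 1, "selected"].mean()  # competent and selected
    fpr = g.loc[g["competent"] == 0, "selected"].mean()  # incompetent but selected
    print(f"{group}: TPR = {tpr:.2f}, FPR = {fpr:.2f}")

# Equalized odds holds when both rates match across groups -- including the
# FPR: incompetent women get the same chance of selection as incompetent men.
```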
How to live a “good” life
If you are not ready to answer these fairness questions, you should at least strive for transparency (explain how the system works) and compliance with privacy rules (explain to the individuals affected how the system makes decisions about them). In the end, the holy grail of fairness in AI and machine learning is not finding the best AI, the one that is 100% accurate while also perfectly unbiased, because in general such a model cannot exist. It’s about asking yourself: “How can we be a ‘good’ member of society? How can we use AI to bring ‘good’ to society by making conscious, moral choices that will be embedded in the software we create?”
Those questions need to be addressed before developing AI. One essential facet is coming to terms with the fact that there is no standard definition of fairness; such a definition would require, well, bias. This is true for human decisions, and it is true for machine decisions. Without a perfect fairness measure, a trade-off has to be made. Promoting fairness and reducing inequality are challenges that transcend machine learning and AI; they require a multi-disciplinary approach, ultimately involving society as a whole. In the end, it’s all about the choices we make.