Fairness & Bias in AI

How we can help you drive fairness in your organization’s AI systems

Why does fairness matter for AI systems?

Over the last decade, AI systems have been deployed at an unprecedented scale to make decisions for, or about, people with little human oversight. Many of the most successful automated decision-making systems today are probabilistic models trained on large amounts of data. Because these models are incentivised to mimic a dataset of past examples, they can faithfully reproduce, or even amplify, societal biases present in that data.

Even a rigorously tested model can reveal unintended, unfair blind spots when deployed in the real world. Biases in data seep into model parameters and predictions, which in turn influence the model’s data-generating process, i.e. the real world. Even properly trained large-scale AI systems often merely reflect the biased, flawed, and opaque decision-making of the humans who produced their training data. While this is mildly annoying for Netflix recommendations, the outcomes can be disastrous for use cases like politically motivated social media nudging and for critical applications such as health care and justice. It is our responsibility to act to prevent this from happening.

What does fairness mean for AI systems?

Fairness in AI is fundamentally about what we should do. There is no standard definition of fairness, whether for human or machine decisions. In the context of machine learning, fairness is a placeholder term for a variety of normative, egalitarian considerations. Designing fair algorithms means designating some actions or outcomes as good, desirable, or permissible and others as bad, undesirable, or impermissible. The real challenge of fairness in AI therefore lies not so much in pursuing technically satisfying fairness frameworks as in designing algorithmic systems compatible with human values.

What is Fairness?

Fairness is an umbrella term for the perceived appropriateness of how goods, benefits, and outcomes are distributed in society, groups, or organisations. It appears in ethics, politics, social justice, and economics, and is closely related to concepts like discrimination and equality. Expressing a fairness judgment means declaring a situation desirable or undesirable — a choice grounded in a set of values and beliefs.

What is Bias?

Bias has both technical and non-technical interpretations, which are important to distinguish in order to avoid unnecessary confusion:

  • Colloquially, bias is the disproportionate weight in favour of or against an idea or thing, usually in a way that is closed-minded, prejudicial, or unfair. People may develop biases for or against an individual, a group, or a belief.
  • Technically, bias is a systematic error. Think of statistical bias resulting from the non-representative sampling of a population, or from an estimation process that does not give accurate results on average.
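The technical sense of bias can be made concrete with a small sketch. The Python example below uses an invented, purely illustrative population: it estimates the population’s mean income twice, once from a uniform random sample and once from a sample that can only reach the larger subgroup. The second, non-representative sampling process produces a systematic error, exactly the statistical bias described above.

```python
import random

random.seed(0)

# Hypothetical population of 10,000 incomes: a majority subgroup earning
# around 50k and a minority subgroup earning around 30k (invented numbers).
population = [random.gauss(50_000, 5_000) for _ in range(8_000)] + \
             [random.gauss(30_000, 5_000) for _ in range(2_000)]

true_mean = sum(population) / len(population)

# Representative sample: drawn uniformly at random from the whole population.
representative = random.sample(population, 500)

# Non-representative sample: the sampling process only reaches the majority
# subgroup, e.g. because data collection misses part of the population.
biased = random.sample(population[:8_000], 500)

est_representative = sum(representative) / len(representative)
est_biased = sum(biased) / len(biased)

print(f"true population mean:   {true_mean:,.0f}")
print(f"representative sample:  {est_representative:,.0f}")  # close to the truth
print(f"non-representative one: {est_biased:,.0f}")          # systematically too high
```

However large we make the biased sample, the overestimate does not go away: unlike random noise, a systematic error does not average out with more data.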

What is Radix's vision of fairness and bias, and how can we help you?

  • We know that any real-world human-centered AI system is designed as a trade-off between fairness, privacy, accuracy, and transparency.
  • We have the technical know-how to address data bias and implement fairness measures.
  • We are aware that there is no perfect fairness metric.
  • We believe creating fair AI solutions requires context beyond the data — and, ultimately, making an explicit choice.
  • We present our clients with a menu of fairness options and assist them in their decision.
  • We think beyond data and models and take the bigger picture into account.

How to make fairness decisions: discover our fairness and AI dashboard ➞
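The claim that there is no perfect fairness metric can be illustrated directly. The sketch below (invented loan-approval data, all names and numbers hypothetical) computes two common group-fairness metrics — demographic parity, which compares approval rates across groups, and equal opportunity, which compares true-positive rates — and shows that a model can satisfy one while violating the other, which is why choosing a metric is a value judgment rather than a purely technical step.

```python
# Hypothetical loan decisions for two groups (illustrative numbers only).
# y_true = 1 if the applicant would actually repay; y_pred = 1 if approved.
group_a = {"y_true": [1, 1, 1, 0, 0, 0], "y_pred": [1, 1, 0, 1, 0, 0]}
group_b = {"y_true": [1, 1, 0, 0, 0, 0], "y_pred": [1, 0, 1, 1, 0, 0]}

def selection_rate(d):
    """Share of applicants approved — the basis of demographic parity."""
    return sum(d["y_pred"]) / len(d["y_pred"])

def true_positive_rate(d):
    """Share of creditworthy applicants approved — the basis of equal opportunity."""
    approved_among_worthy = [p for t, p in zip(d["y_true"], d["y_pred"]) if t == 1]
    return sum(approved_among_worthy) / len(approved_among_worthy)

# Gap of zero means the metric is perfectly satisfied between the groups.
dp_gap = abs(selection_rate(group_a) - selection_rate(group_b))
eo_gap = abs(true_positive_rate(group_a) - true_positive_rate(group_b))

print(f"demographic parity gap: {dp_gap:.3f}")  # both groups approved at the same rate
print(f"equal opportunity gap:  {eo_gap:.3f}")  # creditworthy applicants treated unequally
```

Here both groups are approved at the same rate, so demographic parity holds exactly, yet creditworthy applicants in one group are approved more often than in the other. Known impossibility results show that, outside of degenerate cases, such metrics cannot all be satisfied at once — hence the menu of options rather than a single prescription.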

In practice: some questions to ask before starting an AI project

Here are some questions the Radix engineers ask before starting a project:

  • In what context will the AI service be deployed? Are there ethical implications?
  • Will the AI service make automated decisions affecting people?
  • Can the AI predictions possibly disfavour a specific group of people?
  • Has the client given appropriate thought to the possible impact?
  • Can the AI service be easily transferred to a different use case that would violate the ethical context?

We have expertise in helping clients implement fairness in their AI projects. Find out more about our project approach here:

How does Radix make this concrete? See our project approach ➞

Ready to talk?

Wherever you are in your AI journey, we help you get ahead.