Towards fair machine learning models

Imagine your team has built a relatively accurate machine learning model: it predicts the right medical device for a user 92% of the time. But after digging a little deeper, you discover that for people from a specific ethnic background, the model is almost always wrong. What now?

In this post, we’ll explore why accuracy alone is not always a good metric to judge a model and how algorithmic fairness can help. Fairness is a topic that’s expansive, essential, and all too often neglected. There’s been a lot of news about AI being sexist and racist, but simply knowing a model is biased won’t solve the underlying issue. We’ll show you how technical approaches addressing fairness have evolved in recent years and what that means for you.

As the fairness and privacy researcher Cynthia Dwork has said, “Algorithms do not automatically eliminate bias.” In other words, fairness doesn’t just happen: rather, it’s the result of careful engineering, rigorous math, and a bit of thought-provoking ethical philosophy.

Why fairness matters

Discussions about fairness in machine learning tend to focus on how different models impact socially sensitive groups. Cathy O’Neil’s Weapons of Math Destruction and an eye-opening ProPublica article drew much-needed attention to the social biases that often get baked into our models, and to the way AI can perpetuate racism, sexism, and classism.

A table from the ProPublica article showing algorithmic bias in risk assessments for criminal sentencing

Fairness is an expansive topic though, and it’s relevant to just about any set of users. Which users get shown specific ads, get offered specific prices, receive rewards, get shorter call wait times, or are singled out by distinct propensity models?

The FAT* (fairness, accountability, and transparency) community has created a set of fairness principles designed to help organizations consistently explain model outcomes. Since the General Data Protection Regulation (GDPR) came into effect, these are no longer theoretical concerns: users now have a right to contest the output of an automated system if they believe they were treated unfairly.

Various fairness criteria have been proposed in recent years, but two approaches predominate: demographic parity (equivalent to removing disparate impact) and equality of opportunity.

Demographic parity requires that the decisions a model makes be statistically independent of a protected attribute (e.g. race, gender, or age). In other words, being a man or a woman should not affect whether you see an ad for a specific job (like a software engineering role).
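
To make this concrete, here is a minimal sketch of how one might measure the demographic parity gap for a set of binary decisions. The function name and example data are ours for illustration (not from any particular fairness library), and it assumes a binary protected attribute:

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-decision rates between two groups.

    y_pred: binary model decisions (0/1)
    group:  binary protected-attribute indicator (0/1)
    A gap near 0 means both groups receive the positive decision
    at roughly the same rate.
    """
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# Hypothetical decisions for ten applicants, five from each group.
print(demographic_parity_gap(
    y_pred=[1, 0, 1, 1, 0, 1, 0, 0, 0, 0],
    group= [0, 0, 0, 0, 0, 1, 1, 1, 1, 1],
))  # 0.4: group 0 receives the positive decision far more often
```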

Equality of opportunity is a bit more subtle: it requires that individuals who qualify for a good outcome should obtain that outcome with the same probability, regardless of whether they are a member of the protected group. For example, the percentage of individuals who are both qualified for a loan and end up receiving a loan should not differ across racial groups. Moritz Hardt has shown some potential issues with demographic parity, arguing that it fails to fully ensure fairness while also unnecessarily undermining the ideal predictor for a given classification task.

An illustration from a Moritz Hardt Medium post demonstrating how the same classifier can produce inverse outcomes for two groups
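
Equality of opportunity can be checked in the same spirit: instead of comparing overall decision rates, we compare true-positive rates, i.e. how often a genuinely qualified individual actually receives the positive outcome in each group. A minimal sketch, again with hypothetical names and a binary protected attribute:

```python
import numpy as np

def equal_opportunity_gap(y_true, y_pred, group):
    """Absolute difference in true-positive rates between two groups.

    Among individuals who actually qualify (y_true == 1), a fair model
    should grant the positive outcome at the same rate in each group.
    """
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    tpr_a = y_pred[(y_true == 1) & (group == 0)].mean()  # TPR, group 0
    tpr_b = y_pred[(y_true == 1) & (group == 1)].mean()  # TPR, group 1
    return abs(tpr_a - tpr_b)
```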

While having fairness criteria in place is essential, so is being attentive to all the potential issues that can come up across the different stages of the machine learning pipeline. Oversampling or undersampling data from a specific group can lead to a skewed dataset that doesn’t generalize well, resulting in serious failures such as a child abuse prediction model that’s inaccurate for poor families or a sexist image-recognition system.

In feature selection, that is, choosing which aspects of the data the model will learn from, including zip code when training a loan classification model likely amounts to using a proxy for age and race.
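
One quick, admittedly coarse way to spot such a proxy is to check how strongly a candidate feature predicts the protected attribute before training. The sketch below assumes a hypothetical applications.csv with zip_code and race columns:

```python
import pandas as pd

# Hypothetical loan-application data with a candidate feature (zip_code)
# and a protected attribute (race) that is excluded from training.
df = pd.read_csv("applications.csv")

# If knowing the zip code largely pins down the protected attribute,
# the feature acts as a proxy even though race itself is never used.
proxy_rates = pd.crosstab(df["zip_code"], df["race"], normalize="index")
print(proxy_rates)  # rows dominated by a single group indicate a strong proxy
```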

Finally, putting a model into production has its own pitfalls: it’s hard to predict where the model will fail, leading to unintended consequences. Being attentive to each of these stages is necessary for designing an approach that addresses fairness across the full range of development and deployment.
