How Suited removes bias & negates adverse impact in our AI

By Suited

Only 5% of lawyers are African American, and the ratio of men to women lawyers is nearly 2:1. Among senior-level managers in investment banking, 83% are male and 85% are white.

And that’s where the world sits today: on top of stark imbalances that many banks and law firms are ready and eager to address. AI can enable firms to create fairness and equity in the recruiting process.

For many businesses, the bias that causes and perpetuates this kind of disproportionate landscape begins during the hiring process. Studies have shown that people tend to prefer those who are similar to themselves, a tendency known as the “like-me” bias. “Like-me” criteria can stretch from education and perceived social standing all the way to race and gender. Paradoxically, the desire to hire people who mirror our own backgrounds and life experiences causes firms to sacrifice the creativity, inclusivity, and increased revenue that real diversity and representation bring. Firms that invest in increasing their racial, ethnic, and gender diversity are 15-35% more likely to have financial returns above their national industry medians.

For example, if a financial firm only considers finance majors with a 3.5+ GPA from the top 10 universities, it is likely considering a relatively homogeneous population with very little dimensionality, leading to selection bias. Instead, Suited allows firms to assess hundreds of characteristics that are distributed equally across racial, gender, ethnic, and socioeconomic groups. Using AI, we are able to identify high-potential candidates from diverse backgrounds who have the raw characteristics required to be successful at each individual firm.

But how can we ensure the machine learning models we create don’t themselves contain bias? In the world of AI, it is reasonable to assume that algorithms built in a vacuum of homogeneity will produce biased predictions. There are, however, scientific ways to mitigate bias and negate its adverse impact. Here’s how we do it:

01.

Diverse data collection

As mentioned, we create unique prediction models for each partner we work with. To initiate this process, we collect data from employees who have worked at our partner firm long enough to demonstrate their level of performance. We ask these employees to take our assessment, and their managers then provide a measure of employee performance, such as annual performance scores. In aggregate, the data contains a diverse enough set of employees to surface biasing factors, allowing us to identify and begin removing bias.
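
To make the shape of this training data concrete, here is a minimal sketch, assuming assessment responses and manager-provided performance scores arrive as two tables keyed by a hypothetical employee_id column. The file and column names are illustrative, not Suited’s actual schema.

```python
# Sketch: assemble a per-partner training table by joining assessment results
# with manager-provided performance scores. File names, the "employee_id" key,
# and column layout are illustrative assumptions, not Suited's actual schema.
import pandas as pd

assessments = pd.read_csv("assessments.csv")          # employee_id + measured attributes
performance = pd.read_csv("performance_scores.csv")   # employee_id + annual performance score

training_data = assessments.merge(performance, on="employee_id", how="inner")
print(training_data.head())
```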

02.

Synthetic data generation

However, we are not naive to the fact that the investment banking and legal industries do not, on their own, contain all the diversity we need to produce bias-free models. For example, the investment banking analyst workforce is 41% female and 59% male. So, to develop technology that helps solve the industry’s diversity problem, we use our existing data to programmatically generate “synthetic” data that balances out the under-representation present in our datasets. This new data is created by estimating attributes of the population in question based on the data we already have.

When training models, it is best practice to create balanced classes of sub-segments. As mentioned above, women are often underrepresented in the data we collect. Prior to building a model, we would generate a set of synthetic candidates similar to the existing set of female candidates until the proportion of men to women in the dataset becomes 1:1. We strive to achieve this 1:1 ratio for any gender, race, or age group, regardless of the percentage of the population they represent in our partner's workforce.
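
As an illustration of this oversampling step, the sketch below generates synthetic rows as noisy copies of under-represented rows until the groups are balanced. The column names, noise scale, and approach are simplified stand-ins for Suited’s actual generation method (off-the-shelf techniques such as SMOTE serve a similar purpose).

```python
# Sketch: oversample an under-represented group to a 1:1 ratio by creating
# "synthetic" rows as noisy copies of existing rows. Column names, the noise
# scale, and the approach itself are illustrative assumptions.
import numpy as np
import pandas as pd

def balance_group(df: pd.DataFrame, group_col: str, minority: str, majority: str,
                  noise_scale: float = 0.05, seed: int = 0) -> pd.DataFrame:
    """Append synthetic minority-group rows until the minority and majority counts match."""
    rng = np.random.default_rng(seed)
    minority_rows = df[df[group_col] == minority]
    n_needed = int((df[group_col] == majority).sum()) - len(minority_rows)
    if n_needed <= 0:
        return df
    # Resample existing minority rows, then perturb their numeric attributes slightly.
    sampled = minority_rows.sample(n_needed, replace=True, random_state=seed).copy()
    numeric_cols = sampled.select_dtypes(include="number").columns
    sampled[numeric_cols] = sampled[numeric_cols] + rng.normal(0, noise_scale, sampled[numeric_cols].shape)
    return pd.concat([df, sampled], ignore_index=True)

# Usage with a hypothetical assessment dataset:
# balanced = balance_group(assessments, group_col="gender", minority="female", majority="male")
```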

03.

Determine if certain questions are causing bias

Next, we determine whether a dominant population of people is causing specific questions to produce biased results. This is accomplished through a technique known as principal component analysis (PCA).

For example, those who are successful in fighting sports, like boxing, are likely to show low variance yet high values on an attribute like aggression. If we trained a model to predict success in Mixed Martial Arts (“MMA”), it would almost certainly discriminate against anyone from the Jain religion, which preaches a doctrine of peace and non-violence. Using PCA, we would pick up on the low standard deviation of aggression among those who are successful in MMA and consider eliminating the attribute of aggression from the model.

Applied to the legal space, some firms may find a similar trend. Let’s say a firm has many high-performing men who all score high on the attribute of aggression. Without a PCA, the machine may be partial to aggressive men, and because we want to give everyone a fair shot, we would adjust or remove this trait from the model so that aggressiveness does not influence the predictions.
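
For illustration, the sketch below approximates this kind of check: flag attributes whose spread among high performers is suspiciously small, and inspect PCA component loadings over the attributes. The attribute names, threshold, and generated data are hypothetical, not Suited’s production analysis.

```python
# Sketch: flag attributes whose spread among high performers is unusually small,
# then inspect PCA component loadings. Attribute names, the threshold, and the
# generated data are hypothetical.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

def low_variance_attributes(df: pd.DataFrame, threshold: float = 0.15) -> list:
    """Return attributes whose standard deviation is below the threshold."""
    stds = df.std()
    return stds[stds < threshold].index.tolist()

def pca_loadings(df: pd.DataFrame, n_components: int = 3) -> pd.DataFrame:
    """Fit a PCA and return component loadings for manual inspection."""
    pca = PCA(n_components=n_components).fit(df)
    return pd.DataFrame(pca.components_, columns=df.columns,
                        index=[f"PC{i + 1}" for i in range(n_components)])

# Made-up example: high performers cluster tightly on "aggression" (high value, low spread).
rng = np.random.default_rng(0)
high_performers = pd.DataFrame({
    "aggression": rng.normal(0.9, 0.05, 200),
    "curiosity":  rng.normal(0.5, 0.30, 200),
    "resilience": rng.normal(0.6, 0.25, 200),
})
print(low_variance_attributes(high_performers))  # ['aggression'] -> candidate for removal
print(pca_loadings(high_performers))
```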

04.

Determine if certain measured traits cause bias (and then adjust)

We start by visualizing the data to help us understand which of the traits we measure may be unintentionally causing bias. This visualization helps us spot potential problems that could cause the final algorithm to be biased against a group of candidates.

To confirm whether certain traits are causing bias, we increase or decrease the prevalence of the identified attributes and adjust the model's parameters accordingly, a practice known as hyper-parameter adjustment. If we determine that an attribute is causing adverse impact with statistical significance, we train our models to weight that particular trait as less important.
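
A minimal sketch of what such a check-and-adjust step could look like, assuming a simple two-sample significance test and a down-scaling factor as the adjustment; both are illustrative assumptions rather than Suited’s exact procedure.

```python
# Sketch: test whether an attribute differs significantly between two groups and,
# if it does, shrink its influence before retraining. The t-test and the scaling
# factor are illustrative assumptions, not Suited's exact procedure.
import pandas as pd
from scipy.stats import ttest_ind

def shrink_biasing_attribute(df: pd.DataFrame, attribute: str, group_col: str,
                             group_a: str, group_b: str,
                             alpha: float = 0.05, shrink: float = 0.25) -> pd.DataFrame:
    """Down-scale an attribute if its distribution differs significantly across groups."""
    a = df.loc[df[group_col] == group_a, attribute]
    b = df.loc[df[group_col] == group_b, attribute]
    _, p_value = ttest_ind(a, b, equal_var=False)
    if p_value < alpha:
        df = df.copy()
        df[attribute] = df[attribute] * shrink  # reduce the trait's influence downstream
    return df

# Usage with hypothetical column names:
# adjusted = shrink_biasing_attribute(data, "aggression", "gender", "male", "female")
```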

05.

Reweight the data so that the AI focuses on each group equally

Even when a certain group is not prevalent in the dataset, we can increase the importance of those underrepresented segments so the AI focuses on them just as much as on the better-represented segments. This allows us to compose a model that focuses equally on each segment of the population.

For example, let’s say we don’t have enough data from African American women in the dataset, not even enough to synthetically generate appropriate estimations (see above). To address this, instead of artificially creating samples, we tell the machine to assign more weight to the female African American data in the algorithm. That way, the machine knows that predictions associated with that data are just as important as those for better-represented populations.
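
As a sketch of this reweighting idea, the snippet below assigns each row a weight inversely proportional to its group’s frequency and passes those weights to a model’s fit call. The group column, model choice, and weighting scheme are illustrative assumptions.

```python
# Sketch: weight each row by the inverse of its group's frequency so that
# under-represented groups count just as much during training. The group column,
# model choice, and weighting scheme are illustrative assumptions.
import pandas as pd
from sklearn.linear_model import LogisticRegression

def inverse_frequency_weights(groups: pd.Series) -> pd.Series:
    """Give each row a weight of N / (K * n_g), so every group contributes equally overall."""
    counts = groups.value_counts()
    return groups.map(lambda g: len(groups) / (len(counts) * counts[g]))

# Usage with a hypothetical feature matrix X, labels y, and demographic column:
# weights = inverse_frequency_weights(data["demographic_group"])
# model = LogisticRegression(max_iter=1000).fit(X, y, sample_weight=weights)
```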

06.

Incorporate industry-wide models to reduce firm-related biases

When we produce a final model for our partners to use in their recruiting efforts, it is actually many models built on top of each other. In other words, we instruct the AI to look at attributes that are predictive both for specific firms and industry-wide. Most firms have bias, but they are likely not all biased in the same way, so incorporating additional models based on aggregated data reduces the possibility of causing adverse impact.

This works especially well with a signal like where a candidate went to college: we have so much data telling us that where a candidate or employee studied does not significantly impact their performance at work. So even if a firm has historically hired mostly from Ivy League schools, the aggregate data we feed the machine will outweigh the bias towards those universities.
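
A minimal sketch of one way such layering could work, blending a firm-specific model with an industry-wide model trained on aggregated data; the 50/50 blend and the model types are assumptions for illustration.

```python
# Sketch: blend a firm-specific model with an industry-wide model trained on
# aggregated data, so firm-level quirks carry less weight. The 50/50 blend and
# model types are assumptions for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

class BlendedModel:
    """Average the predicted probabilities of a firm-specific and an industry-wide model."""

    def __init__(self, firm_model, industry_model, firm_weight: float = 0.5):
        self.firm_model = firm_model
        self.industry_model = industry_model
        self.firm_weight = firm_weight

    def predict_proba(self, X) -> np.ndarray:
        firm_p = self.firm_model.predict_proba(X)
        industry_p = self.industry_model.predict_proba(X)
        return self.firm_weight * firm_p + (1 - self.firm_weight) * industry_p

# Usage with hypothetical firm-level and industry-level training sets:
# firm_model = LogisticRegression(max_iter=1000).fit(X_firm, y_firm)
# industry_model = LogisticRegression(max_iter=1000).fit(X_industry, y_industry)
# scores = BlendedModel(firm_model, industry_model).predict_proba(X_candidates)
```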

07.

Model testing

Throughout the process, we test each model repeatedly to make sure no discrimination is taking place. We will never deploy a model that does not meet the Equal Employment Opportunity Commission's technical guidelines.
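
One widely used check under those guidelines is the “four-fifths rule”: a group’s selection rate should be at least 80% of the rate for the most-favored group. Below is a minimal sketch of that check applied to hypothetical model recommendations; it is an illustration, not Suited’s full compliance testing process.

```python
# Sketch: the "four-fifths rule" check commonly applied under the EEOC's Uniform
# Guidelines, computed on hypothetical model recommendations. Column names are
# illustrative; this is not Suited's full compliance testing process.
import pandas as pd

def adverse_impact_ratio(selected: pd.Series, groups: pd.Series,
                         protected: str, reference: str) -> float:
    """Ratio of the protected group's selection rate to the reference group's selection rate."""
    protected_rate = selected[groups == protected].mean()
    reference_rate = selected[groups == reference].mean()
    return protected_rate / reference_rate

def passes_four_fifths_rule(selected: pd.Series, groups: pd.Series,
                            protected: str, reference: str) -> bool:
    """A ratio below 0.8 is commonly treated as evidence of adverse impact."""
    return adverse_impact_ratio(selected, groups, protected, reference) >= 0.8

# Usage with a hypothetical results table, where "selected" holds 1/0 recommendations:
# print(passes_four_fifths_rule(results["selected"], results["gender"], "female", "male"))
```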