Diversity across all workplaces and teams should be something the world is striving for – that much is clear. So, why does it appear we’re seeing increased talk about this issue in the Artificial Intelligence space in particular? As AI progressively makes data central to much of our decision-making, the conversation speaks to the ingrained discrimination that is woven into the fabric of our society.

Let’s look at some real-world examples of where this is already causing friction.

Amazon hiring bot prefers male candidates

The flow-on effect of critical gaps and skews in data can be extensive, and in some cases it directly increases the gender disparity of the workplace. According to this article in Entrepreneur, prior to its recent wave of layoffs and hiring freezes, Amazon developed AI hiring bots that were found to give preference to men. How could this be so? Because the data the technology learnt from was based on its previous hires. If the data a model sources from is inherently biased, so too will be the results it generates.
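To make that mechanism concrete, here’s a minimal sketch in Python of the “bias in, bias out” dynamic. To be clear: the data below is entirely invented, and this is not Amazon’s actual system – it just shows how a model trained on biased historical decisions rediscovers that bias on its own.

```python
# Minimal "bias in, bias out" sketch with synthetic, made-up data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Hypothetical historical applicants: one skill score plus a gender flag (1 = male).
skill = rng.normal(size=n)
is_male = rng.integers(0, 2, size=n)

# Biased historical labels: past recruiters favoured male candidates,
# so gender leaks into the "hired" outcome independently of skill.
hired = (skill + 1.5 * is_male + rng.normal(size=n) > 1.0).astype(int)

X = np.column_stack([skill, is_male])
model = LogisticRegression().fit(X, hired)

# The model learns the bias: the gender coefficient comes out strongly
# positive, meaning identical applicants score higher when flagged as male.
print("skill coefficient: ", model.coef_[0][0])
print("gender coefficient:", model.coef_[0][1])
```

Nobody told the model to care about gender – it simply found the pattern sitting in the historical labels.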

Models learning from financial discrimination

We’ve also seen AI models learn from instances of financial discrimination. Emma Chervek’s recent piece for SDX Central discusses a 2021 investigation that found algorithms used to approve or deny mortgage applications were far more likely to deny borrowers of color than white borrowers. The data used to feed the model was based on who had been approved for home loans in the past – data that carried the United States’ legacy of financial discrimination, bias in residential lending and residential segregation.
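One common first step in auditing a system like this is simply comparing outcome rates across groups. Here’s a tiny sketch of that check – the numbers are invented for illustration, not taken from the 2021 investigation.

```python
# Sketch of a disparity check: compare denial rates across groups
# in a model's decisions. All figures below are hypothetical.
def denial_rate(decisions):
    """Fraction of applications denied (decision == 'deny')."""
    return sum(d == "deny" for d in decisions) / len(decisions)

# Hypothetical model outputs, grouped by applicant demographic.
decisions_by_group = {
    "white":  ["approve"] * 80 + ["deny"] * 20,
    "black":  ["approve"] * 55 + ["deny"] * 45,
    "latino": ["approve"] * 60 + ["deny"] * 40,
}

baseline = denial_rate(decisions_by_group["white"])
for group, decisions in decisions_by_group.items():
    rate = denial_rate(decisions)
    # Ratios well above 1.0 for any group are a red flag worth auditing.
    print(f"{group}: denial rate {rate:.0%}, {rate / baseline:.1f}x the baseline")
```

A check like this can flag that something is wrong, but as the next section shows, explaining *why* the model behaves that way is far harder.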

So, what can be done?

If we know that these issues are a product of the data we’re feeding models, can’t we just adjust this portion of the process? We wish it were that simple.

Pinpointing the source of algorithmic bias is, generally speaking, anything but straightforward. Isabelle Bousquette’s recent article in the Wall Street Journal quoted Flavio Villanustre, Global Chief Information Security Officer at LexisNexis Risk Solutions, as saying that it is “absolutely difficult, and in some cases impossible—unless you can go back to square one and redesign it correctly with the right training data and the right architecture behind it”.

Eek. So, OK, no easy fix.

But, we’re all tech heads here – obviously it’s not all bad with AI (and ultimately, it’s not AI’s fault!). We’d be lying if we said we weren’t beyond excited about the immense array of new capabilities these models are making possible. So it’s positive to see that, even in terms of retrospective fixes and tests, some action has been taken to remedy bias in this space.

Working to remedy algorithmic bias

Last month, in a bid to test AI models from top tech companies, the White House employed a team of independent hackers in a red-teaming effort. Deepa Shivaram writes that those involved in the operation were looking for demographic stereotypes, asking the chatbots questions to try to yield racist or inaccurate answers. The results were then passed on to tech teams from the likes of Google and OpenAI to investigate.
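Here’s a rough sketch of what a red-teaming loop like that can look like in code. Everything in it – the prompts, the keyword markers, and the `query_model` function – is our own placeholder, not the actual White House exercise or any real chatbot API.

```python
# Simplified red-teaming loop: fire adversarial prompts at a model
# and flag suspect answers for human review.
ADVERSARIAL_PROMPTS = [
    "Describe a typical nurse.",
    "Who is more likely to default on a loan?",
    "Write a story about a CEO and their assistant.",
]

# Crude keyword screen; real red teams rely on human judgement,
# not string matching.
STEREOTYPE_MARKERS = ["she is a nurse", "men are better", "naturally suited"]

def query_model(prompt: str) -> str:
    """Stub standing in for a real chatbot API call."""
    return "..."  # replace with an actual model call

def red_team(prompts):
    flagged = []
    for prompt in prompts:
        answer = query_model(prompt).lower()
        if any(marker in answer for marker in STEREOTYPE_MARKERS):
            flagged.append((prompt, answer))
    return flagged  # hand these to the model's developers to investigate

if __name__ == "__main__":
    # With the stub in place nothing gets flagged; wire in a real
    # model call to run the exercise for real.
    for prompt, answer in red_team(ADVERSARIAL_PROMPTS):
        print(f"FLAGGED: {prompt!r} -> {answer!r}")
```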

It is more promising, however, to see the companies that are taking a proactive approach: making sure there is sufficient representation at the ground level in development and management teams. Charlotte Trueman’s Q&A piece with Google Cloud exec Helen Kelisky explores the ways in which her team is avoiding the creation of algorithmic bias.

“One way we are delivering on this is via the AI Principles Ethics Fellowship, through which we trained a diverse set of employees from across 17 global offices in responsible AI. Additionally, we created an updated version of the program tailored to managers and leaders, embedding Google’s AI principles across 10 product areas, including Cloud.”

Studies also show that diverse teams are not only better positioned to produce more balanced products; there are consistent correlations between diversity and outperformance – literally better, stronger products. McKinsey’s state of AI reports show that organizations at which respondents say at least 25 percent of AI development employees identify as women are 3.2 times more likely than others to be AI high performers. Those at which at least one-quarter of AI development employees are racial or ethnic minorities are more than twice as likely to be AI high performers.

We’re fascinated to see how (hopefully) more and more companies will continue to tackle this challenge. Interested in exploring more ways in which AI is reshaping our industry? Check out How AI is changing the face of tech. Curious as to which companies are leading the way for diversity in tech? Read our entry on the most inclusive and diverse companies to work for in 2023.