Bigotry in the Machine: Study Finds Bias in AI


The workforce that develops artificial intelligence products is in “a diversity crisis,” says a new report. As a result, the algorithms behind the technology often reflect those same biases.


According to Discriminating Systems, a report from NYU’s AI Now Institute, the employees of companies building AI solutions are, as in most of the technology sector, largely male and white. At Google, for example, women comprise just 10 percent of the AI research staff while the company’s overall workforce is just 2.5 percent black. Facebook and Microsoft don’t do much better: 4 percent of their employees are black, the report said.

While in recent years technology companies have been vocal about plans to develop more diverse pipelines and increase equality within their workforces, the report found AI providers have made “no substantial progress in diversity.” Power imbalances, harassment, discriminatory hiring and unfair compensation all remain widespread problems.

Bias at Home, Bias in AI

As a result—and not surprisingly—AI systems reflect the very biases they’re meant to combat. To name just one example, in October Amazon shut down a machine-learning recruiting tool that had developed a consistent bias against women. According to Reuters, the system compared applicants to the patterns it found in resumes Amazon had received over a 10-year period. Because the tech workforce is predominantly male, the system taught itself that male candidates were stronger than their female counterparts. Resumes that included the word “women’s” were downgraded, as were those of graduates of two all-women’s colleges.
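To make the mechanics concrete, here is a minimal sketch—not Amazon’s actual system, whose details were never published—of how a scoring model trained on historically skewed hiring outcomes can learn to penalize a word like “women’s” even though gender is never an explicit input. All resumes, outcomes and weights below are hypothetical.

```python
# Toy illustration (NOT Amazon's actual system): a word-scoring model
# trained on historically skewed hiring outcomes learns to penalize
# tokens that merely co-occur with the underrepresented group.
from collections import Counter

# Hypothetical historical data: (resume text, was_hired)
history = [
    ("captain chess club", True),
    ("software engineer java", True),
    ("software engineer python", True),
    ("captain women's chess club", False),
    ("women's college software engineer", False),
]

hired, rejected = Counter(), Counter()
for text, was_hired in history:
    (hired if was_hired else rejected).update(text.split())

def token_weight(token):
    # Naive score: positive if the token appears more often in hired
    # resumes, negative if more often in rejected ones (with add-one
    # smoothing so unseen tokens score zero-ish rather than crashing).
    h = hired[token] + 1
    r = rejected[token] + 1
    return (h - r) / (h + r)

def score(resume):
    return sum(token_weight(t) for t in resume.split())

# "women's" never caused a rejection, but because it co-occurs with
# historically rejected resumes, the model downgrades it anyway.
print(token_weight("women's"))   # -0.5: a negative weight
# The penalty can outweigh strong engineering tokens in the same resume:
print(score("software engineer women's chess club"))  # -0.1 overall
```

The point of the sketch is that the model never needed a “gender” field: a token that happens to correlate with historically rejected resumes picks up a negative weight all on its own.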

The report said that AI systems used to classify, detect and predict race and gender—which means pretty much every talent acquisition platform we can think of—are in “urgent need of re-evaluation.” Specifically, it calls out products that supposedly determine sexuality from headshots or assess competence by scanning “micro-expressions.” Said the report: “The commercial deployment of these tools is cause for deep concern.”

AI’s Higher Standard?

The issue isn’t a technical one. Amazon’s experience, for example, was just one demonstration of how an AI’s logic “echoes” the biases of the people who developed it and the data they used to train it.

That doesn’t surprise John Harney, CTO of DataScava, a New York-based provider of unstructured data-mining solutions. “We live in a biased world, so AI systems absorb this training data and may correctly replicate decisions humans make,” he told us. “Do we expect AI systems to somehow filter out bias when we can’t?” From a technical point of view, Harney believes, creating such filters can’t help but reduce a system’s intelligence, even though the biases they screen for produce unintended outcomes.

Harney then posed a pointed question: “Why is it more acceptable that we have teams of employees making biased decisions, but we’re horrified when a machine does it? Either we accept that programs make the same unfair decisions we would, or we stop using AI systems for such purposes.”

Human Challenges First

That kind of statement may astound the data scientists, architects, product managers and marketers who’ve convinced CHROs and CIOs that artificial intelligence now represents “table stakes” for HR technology products. But as one diversity specialist observed, a system that identifies signs of unconscious bias doesn’t come close to addressing the actual prejudice that exists in a person’s head. That’s an age-old problem, of course, and it means addressing bias in their systems requires vendors to first tackle it in their workforce.

An important early step, AI Now said, is for the industry to acknowledge just how severe a diversity problem it has, and admit corporate and social attempts to address it have been ineffective. “Existing methods have failed to contend with the uneven distribution of power and the means by which AI can reinforce such inequality,” the report said. A bit further on, it observes, “issues of discrimination in the workforce and in system-building are deeply intertwined.”

AI Now has four recommendations for the developers of AI products:

  • Emphasize transparency: Remedying AI bias is almost impossible when systems rely on the proverbial “black box,” the report said. The use of specific AI systems should be tracked and publicized.
  • Require rigorous testing across the lifecycle of AI systems in “sensitive domains.” Conduct pre-release trials and independent audits, and continuously monitor for bias, discrimination and similar issues.
  • Expand research on bias to learn more about how bias is reflected in AI and the contexts in which it’s used. 
  • Expand methods for addressing bias and discrimination by adding evaluations and risk assessments of whether certain AI systems should be designed at all.

The industry doesn’t seem to be ignoring the situation. AI Now says solutions providers are drawing on a body of work covering “fairness, accountability and transparency” in attempts to refine their systems so they produce “fair” results, as determined by “mathematical definitions.” They also face growing pressure to take a more ethical approach to AI development.
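As an illustration of what those “mathematical definitions” can look like, here is a minimal sketch of one widely used fairness metric, demographic parity, which compares a model’s selection rates across groups. The function and the screening data are our own hypothetical examples; the report doesn’t prescribe any particular metric.

```python
# Minimal sketch of one common "mathematical definition" of fairness:
# demographic parity, which asks whether a model selects candidates at
# similar rates across groups. Data below is hypothetical.

def demographic_parity_difference(predictions, groups):
    """Gap between the highest and lowest positive-outcome rate
    across groups; 0.0 means perfect parity."""
    rates = {}
    for g in set(groups):
        outcomes = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    return max(rates.values()) - min(rates.values())

# Hypothetical screening results: 1 = advanced to interview.
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["m", "m", "m", "m", "f", "f", "f", "f"]

# 0.75 vs. 0.25 selection rate: a 0.5 gap, far from parity.
print(demographic_parity_difference(preds, groups))
```

A gap of zero would mean both groups advance at the same rate; an audit of the kind the report recommends might flag any gap above a chosen threshold, though where to set that threshold is itself a policy decision, not a technical one.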

But that doesn’t go far enough, the report suggests. Developers, it says, need to understand not only how AI tools can be biased technically, but also how the tools are shaped by the cultures behind them and the biases of the people who build them. “By integrating these concerns, we can develop a more accurate understanding of how AI can be developed and employed in ways that are fair and just, and how we might be able to ensure both.”

That’s easier said than done. As we noted earlier, understanding bias is a lot easier than actually changing people’s minds, or modifying how human behavior impacts a machine’s logic.

You can read the report here.


