3 Things to Consider on the Ethical Use of AI in HR


Around the world, everyone is racing to create, sell or use the latest technological innovation for the workplace – artificial intelligence. Who can blame them when the alternative, some say, is the complete loss of relevance?

With the adoption of AI for HR, questions have arisen about how to take an ethical approach to using advanced technology. Given possible problems with privacy, bias and other issues, as well as a lack of rules and regulations, it's hard for business leaders to be sure they're taking the right approach to any number of challenges. Still, more than 75% of HR leaders say they're planning to use AI software in the next 12 months, according to a study by Paychex.

It’s clear that AI solutions can help organizations. But you have to ask: Is the benefit worth the potential risks?

Risks and Rewards

The rewards are obvious. Companies save time and money when they use AI instead of employees for specific efforts, like talent acquisition. Some studies have shown that using AI increases productivity, and Forbes has linked AI to better employee engagement.

On the flip side, AI can spell trouble when it comes to hiring. While it automates aspects of talent acquisition, such as candidate screening, AI may also introduce bias or overlook qualified candidates, observes SAP SuccessFactors Chief Marketing and Solutions Officer Aaron Green. If not designed and trained carefully, AI algorithms can perpetuate or amplify biases already present in the data they were trained on.

This, in turn, can lead to discrimination against certain groups of people. In March, for example, Workday found itself on the receiving end of a lawsuit contending that its AI and screening tools discriminate against workers who are Black, disabled or over 40.

“If the AI makes a decision that discriminates against an individual, whether it’s an employee or a robot that makes that decision, you are breaking the law,” said Siobhan Savage, CEO of the workplace intelligence platform Reejig. “Every single time that you make a decision that is unfair, you could be fined as a company. Now, imagine how many millions of decisions are being made by an algorithm within a company today.”

AI can also pose a risk to privacy, since HR systems rely on employee data that is often collected without transparency or informed consent.

Understanding Regulation

Despite AI's prominence in the cultural zeitgeist, few rules or regulations have been put in place to guide its use or implementation.

One of the first laws regarding bias in AI was recently implemented by New York City. Simone Francis, an attorney with employment law firm Ogletree Deakins, said the automated employment decision tools, or AEDT, law will “restrict the use of automated employment decision tools and artificial intelligence by employers and employment agencies by requiring that [they] be subjected to bias audits and [require] employers and employment agencies to notify employees and job candidates that such tools are being used to evaluate them.”

The law requires that a bias audit be conducted “no more than one year prior” to the AEDT being implemented so a third party can determine whether the software meets the law’s requirements, SHRM said.
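
To make the idea of a bias audit concrete: under the NYC rules, audits center on selection rates and impact ratios, where each category's selection rate is compared against the rate of the most selected category. The Python sketch below is a hypothetical illustration of that calculation, not the law's prescribed methodology. The data and group names are invented, and the 0.8 flag reflects the long-standing EEOC "four-fifths" rule of thumb rather than any threshold set by the AEDT law itself.

```python
from collections import defaultdict

# Hypothetical screening outcomes: (demographic_category, was_selected)
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

# Tally selections and totals per category.
counts = defaultdict(lambda: [0, 0])  # category -> [selected, total]
for category, selected in outcomes:
    counts[category][1] += 1
    if selected:
        counts[category][0] += 1

# Selection rate = selected / total evaluated, per category.
rates = {c: sel / total for c, (sel, total) in counts.items()}
top_rate = max(rates.values())

# Impact ratio = a category's rate relative to the most selected category's rate.
for category, rate in sorted(rates.items()):
    ratio = rate / top_rate
    flag = "  <- review" if ratio < 0.8 else ""  # four-fifths rule of thumb (EEOC guidance)
    print(f"{category}: selection rate {rate:.2f}, impact ratio {ratio:.2f}{flag}")
```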

The EEOC has also released guidance on AI's use in assessing employees and job applicants. The commission said the guidance is meant to educate employers, employees and other stakeholders about how equal employment opportunity laws apply when organizations use employment software, some of which incorporates algorithmic decision-making.

This is just the beginning. Experts agree there’s much more to come as laws catch up to AI’s use.

Where We Go from Here

The ethical use of AI is still evolving and may look different from company to company. However, many recommendations about its use share a common thread: A human must be involved in AI decision-making to ensure that it is fair and unbiased.

Savage recommends that all HR leaders learn the basics of AI, including how its decisions are made and what their implications are, so they can make informed choices and build better practices. In addition, she believes checks and balances should be in place to ensure the data used to train AI is balanced. Specifically, she said, the data set must be diverse and the AI "has to be trained in a good and fair way."
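
As a rough illustration of what such a check on training data might look like, the hypothetical Python sketch below reports each group's share of a data set and flags underrepresentation. The record format, group names and 20% tolerance are assumptions for the example, not anything Savage or Reejig prescribes.

```python
from collections import Counter

# Hypothetical training records, each tagged with a demographic group.
training_records = [
    {"group": "group_a"}, {"group": "group_a"}, {"group": "group_a"},
    {"group": "group_b"}, {"group": "group_b"},
    {"group": "group_c"},
]

def representation_report(records, min_share=0.2):
    """Print each group's share of the data and flag any below min_share."""
    counts = Counter(r["group"] for r in records)
    total = sum(counts.values())
    for group, n in sorted(counts.items()):
        share = n / total
        flag = "  <- underrepresented" if share < min_share else ""
        print(f"{group}: {n}/{total} records ({share:.0%}){flag}")

representation_report(training_records)
```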

“HR professionals need to be the ones to steer the wheel and ensure the technology is used in the right ways,” said Scott Litman, founder and COO of Lucy, an AI-powered “answer engine.” HR teams, he believes, “still need to be there to be the ultimate guardrail and bring human empathy and judgment to situations. AI can drive incredible efficiencies but, at the end of the day, there needs to be a human involved so that each scenario is handled appropriately and with integrity, empathy and ethics.”

