For job seekers, finding a new position is complicated not only by today’s economic uncertainty but also by the AI tools companies use in the hiring process, according to Insider. To cut costs and reduce the time spent on manual tasks, many employers have turned to automation to review applications. As a result, candidates are experiencing more anxiety around their job search and more trouble landing new roles.
A study from SHRM found that 42% of large employers use AI hiring support, meaning job seekers may be virtually interviewed or prescreened by an artificial intelligence program. Hiring platforms and job boards, such as LinkedIn, also use language-processing AI tools to filter applicants.
Discrimination and Bias
At the same time, research shows that these same AI tools may be injecting bias and discrimination into the hiring process. Researchers at the University of California, Berkeley, estimate that AI decision-making systems have a 44% chance of embedding gender bias and a 26% chance of displaying both gender and racial bias, and may be prone to screening out applicants with disabilities.
In addition, some AI tools have been shown to be unable to identify quality candidates because of limitations in their training and the datasets they were given. A 2021 Harvard Business School study found that 88% of executives know their AI tools screen out qualified candidates but continue to use them because they’re cost-effective.
As a result, legislation and guidelines are slowly being put into place to mitigate the effects of AI on hiring. For instance, the Equal Employment Opportunity Commission recently began an initiative to aid organizations in their use of AI to “ensure that these technologies are used fairly and consistently with federal equal employment opportunity laws.” Meanwhile, New York City’s law on AI, requiring bias audits for automated employment decision tools, will go into effect in July.
HR leaders and the C-suite need to be aware of the ethics of using AI tools to make hiring decisions. The Harvard Business Review said, “It is important to create internal processes based on how one’s organization defines fairness in algorithmic outcomes, as well as setting standards for how transparent and explainable AI decisions within the organization need to be.” Others recommend that people always have the final say in AI decision-making, adding a human touch and double-checking the work that’s been done.