AI: Best Practices and Pitfalls to Watch Out For


In this guest column, Bryq co-founder and CEO Markellos Diorinos takes an in-depth look at how AI can contribute to an employer’s recruiting efforts… and some pitfalls to look for.

Technology seems to be taking over the world, but is it also looking to take your job? Don’t worry. That’s not happening anytime soon, although artificial intelligence is quickly becoming an important tool to help recruiters find great talent. Nearly a quarter of the businesses surveyed by the Sage Group are already using AI to help streamline their hiring, while over half plan on taking advantage of AI technology within the year. 

AI is changing the way we hire people. However, improper use has led to allegations of problematic and biased AI systems. Let me share some of today’s best practices, as well as some common pitfalls to help you choose the system that’s right for you.


The Benefits of Artificial Intelligence

There are many ways AI can help make the recruiting process easier for hiring managers. If used properly, it can also make the application process fairer for candidates. AI can eliminate bias, source candidates and even help diversify your workplace. Tools such as chatbots, resume scanners, facial monitoring, cognitive skill tests and personality assessments are all being used to help streamline the hiring process. These systems are easy to implement and can often be integrated into an ATS.

AI is particularly useful for high-volume hiring. When trying to fill a large number of roles, you are going to receive a large number of applications. AI can easily ingest large volumes of data about candidates, such as assessment scores and cognitive abilities, and help you make sense of them. This prevents recruiters and hiring managers from missing out on great talent. AI delivers results that help them hire the candidates with the right skills for these open roles. 

The best argument for artificial intelligence is that when used properly, it can increase efficiency and effectiveness while diminishing bias in the hiring process. In theory, it has the potential to eliminate factors such as age, gender, name, home address, race, experience, and even education from the initial hiring process.
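To make the idea of blind initial screening concrete, here is a minimal sketch in Python. It is purely illustrative and not any vendor’s implementation; the field names and the candidate record are hypothetical. The point is simply that identifying attributes are stripped from the record before anything, human or algorithm, scores it.

    # Purely illustrative sketch of blind screening; field names are hypothetical.
    # Identifying attributes are removed before the candidate is scored.

    IDENTIFYING_FIELDS = {
        "name", "age", "gender", "home_address", "race",
        "years_experience", "education",
    }

    def blind_profile(candidate: dict) -> dict:
        """Return a copy of the record with identifying fields stripped out."""
        return {k: v for k, v in candidate.items() if k not in IDENTIFYING_FIELDS}

    candidate = {
        "name": "Jane Doe",
        "age": 42,
        "home_address": "123 Main St",
        "cognitive_score": 87,  # the kind of signal the screener actually uses
        "personality_profile": {"conscientiousness": 0.8, "openness": 0.6},
    }

    print(blind_profile(candidate))
    # {'cognitive_score': 87, 'personality_profile': {'conscientiousness': 0.8, 'openness': 0.6}}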

AI: Good Robot or Bad Robot?

Despite the many ways that we can benefit from AI, there have been a handful of cases that show the technology doing the exact opposite. The trouble is AI can give you inaccurate, misleading, and just plain biased data if it’s used incorrectly.

The U.S. court system’s “COMPAS” program is a perfect example. COMPAS, which stands for Correctional Offender Management Profiling for Alternative Sanctions, helps predict recidivism, the tendency of convicted offenders to reoffend after being released from prison. While the system has no feature in place to consider race, COMPAS scores told authorities that Black defendants were more likely than White defendants to reoffend. Actual outcomes showed that the AI’s output was far from the truth: Black defendants were nearly twice as likely as White defendants to be flagged as high-risk yet not go on to commit further crimes, while White defendants were far more likely to be labeled low-risk and then commit new crimes after their release.

We can see similar scenarios play out in recruitment. AI algorithms have shown bias against women at companies like Amazon, for example. In 2019, a complaint against HireVue was filed with the Federal Trade Commission regarding the company’s video interview software: Allegedly, its video-scanning algorithms improperly took a candidate’s facial expressions, speech and eye movements into account when considering whether or not they’d be a good fit for a role. The concern was that because most of the programmers training the systems were White men, the AI was possibly learning to disregard the voices and faces of women and candidates of color. While HireVue remained adamant that these factors made little difference in final assessment results, it quickly removed the features from its products.

Good Teachers Get Good Results

All of this raises the question, “Is it possible to get AI right?” The answer is certainly yes.

You can think of AI as a student, one who is very productive and diligent. As with a student, you need to show the AI exactly how a process works and what steps it needs to follow. The AI will diligently do what you (or the vendor you bought it from) taught it. However, it will not develop critical thinking or handle exceptions effectively, nor will it realize when it is getting things wrong. There must be a process that is proven to work before you can teach an AI to carry it out effectively.

This is one of the major pitfalls of many modern AI systems, resume screeners among them. Screening resumes is not only cumbersome, it is also very prone to bias. These tools automate the screening of information contained in a resume, even though resumes in and of themselves have been shown to be inaccurate and flawed. No amount of AI can fix a process that doesn’t work in principle.

The other common pitfall is what we use to teach our AI systems. Most of our existing results, the data we have at hand, come from our current processes, which have known weaknesses such as bias. This “dirty data” is inaccurate, inconsistent and irrelevant, and training an AI system on it is an effective way to perpetuate the inefficiencies and poor decisions of the past, not to discover what we are looking for now.
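A toy example makes the point. The data below is invented for illustration and does not come from any real system; it shows how a “model” trained only on past hiring decisions simply replays those decisions, including their bias, instead of learning what actually predicts success on the job.

    # Purely illustrative: invented historical data in which candidates from
    # group "B" were rarely hired regardless of how well they scored.
    from collections import defaultdict

    # (group, assessment_score, was_hired)
    history = [
        ("A", 85, True), ("A", 70, True), ("A", 60, True),
        ("B", 90, False), ("B", 88, False), ("B", 65, False),
    ]

    # A naive "screener" that just memorizes the historical hire rate per group.
    outcomes_by_group = defaultdict(list)
    for group, score, hired in history:
        outcomes_by_group[group].append(hired)

    def predicted_hire_rate(group: str) -> float:
        outcomes = outcomes_by_group[group]
        return sum(outcomes) / len(outcomes)

    print(predicted_hire_rate("A"))  # 1.0 -- always "hire"
    print(predicted_hire_rate("B"))  # 0.0 -- never "hire", even for the top scorer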

Many current AI implementations are so-called “black box” systems, which can make decisions but cannot explain how they made them. This leads to arbitrary decision-making and removes humans’ ability to validate and correct the decisions the AI makes. It is the equivalent of being denied entry to a restaurant without anyone telling you why.

Governments around the world are also looking to ensure that organizations are not misusing AI. Both the European Union and the U.S. government are cracking down on poor AI practices by implementing laws and mandates that require organizations to continually monitor and review their systems. The European Commission has proposed fines of up to 6% of annual worldwide turnover for AI violations. Similarly, the U.S. Federal Trade Commission has published guidelines companies must follow to ensure that their use of AI is truthful, fair and unbiased. These rules are designed to make companies hold themselves accountable for their AI practices.

When properly designed, trained, and deployed, artificial intelligence is an incredibly effective tool for hiring. It will improve your time-to-hire, diversify your workplace and save you hours of manual labor, freeing up time and money to improve other areas of your HR practice. AI will never find a perfect candidate by itself, but it will help you find the best candidate for the role based on their attributes. Rather than relying fully on technology, humans need to walk into the future hand in hand with it. If we work together with technology, the possibilities are endless.

Markellos Diorinos is the CEO and co-founder of Bryq, a talent intelligence platform created to eliminate the biases, time constraints and inefficient decisions that result from traditional workforce management processes. Bryq’s AI-assisted platform enables employers to create the ideal candidate profile, then blindly screens candidates on cognitive abilities and personality traits. Learn more at Bryq.com.

