Podcast: HR Examiner’s John Sumser on Intelligent Tools and the Ethics of AI


Transcript

Mark Feffer: Welcome to People Tech, the podcast of the HCM Technology Report. I’m Mark Feffer. This edition is brought to you by Ultimate Software, a leading innovator in HR technology. Ultimate’s dedicated to putting people first with cloud-based solutions designed to engage, motivate, and empower your workforce. Learn more at www.ultimatesoftware.com. Ultimate Software, people first.

Today I’m joined by John Sumser, principal analyst at HR Examiner and a widely followed observer of the HR tech industry. We’re going to talk about his 2020 Index of Intelligent Tools in HR Technology, which he’s titled The Birth Of HR As A System Science. John, thanks for being here.

John Sumser: Thanks Mark, it’s good to be here.


Mark: Let’s start with the notion of intelligent tools. What are they and how are they different from, say, AI or machine learning or other things people think of as advanced?

John: So, I’m of the opinion that there isn’t much in the way of AI out in the world just now. AI, for my money, is a general tool that approximates human intelligence. What we’ve built, and what’s moving into the market right now, is a series of things that are different from software we’ve used before. So I like to talk about them as intelligent tools. These are parts of your software deployment that will give you recommendations or probabilistic estimates of the likelihood of things happening. It’s easier to start to manage them if you understand that these are little bits of intelligence rather than one big, authoritative Bible.

Mark: What’s an example of an intelligent tool?

John: So there are tons of examples; there are about 50 different use cases that cross from recruiting to onboarding, to training, to workforce planning. Intelligent tools are all involved in processing large quantities of data. In recruiting, you do that as matching of job requirements to resumes.

You do that in onboarding as an ongoing, real-time analysis of how well somebody’s doing in their integration with the company. You do that in learning as testing and measuring topic retention in relation to the changes that are going on with the job. And in workforce planning, what you do is use existing patterns to try to forecast what the workforce is going to look like five years from now. So they’re all statistical analyses that are highly automated and able to run because we’ve got these huge quantities of data.
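To make the matching use case concrete, here is a minimal, hypothetical sketch of scoring resumes against a job’s requirements by simple term overlap. The function, names, and data are invented for illustration; production tools rely on far richer statistical models than this.

```python
# A minimal, hypothetical sketch of the kind of matching described above:
# scoring resumes against a job's required terms by simple overlap.
# All names and data here are invented for illustration only.

def match_score(job_terms: set[str], resume_terms: set[str]) -> float:
    """Jaccard similarity between required terms and resume terms (0.0 to 1.0)."""
    if not job_terms and not resume_terms:
        return 0.0
    overlap = job_terms & resume_terms
    return len(overlap) / len(job_terms | resume_terms)

job = {"python", "sql", "recruiting-analytics", "statistics"}
resumes = {
    "candidate_a": {"python", "sql", "statistics", "tableau"},
    "candidate_b": {"java", "recruiting-analytics"},
}

# Rank candidates by score -- the output is an estimate to inform a recruiter,
# not a verdict; a human still reviews it.
for name, terms in sorted(resumes.items(), key=lambda kv: -match_score(job, kv[1])):
    print(f"{name}: {match_score(job, terms):.2f}")
```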

Mark: All right. Now, one of the things that you say in the report is that intelligent tools have morphed over the last year in a pretty significant way, and that this morphing has changed the HR technology landscape. How so?

John: Well, the first instances of these tools were buried in marketing. The marketing rhetoric associated with these tools was so intense, so provocative, and so disturbingly off-base that when they went to market and early adopters purchased and utilized them, the results were other than the marketing message suggested they would be. And so in the last year, what’s happened is there’s been a better realization of what actually can be done and what the constraints are.

It was the case, for instance, in the early days of intelligent tools that people went to market claiming that they eliminate bias. Now only vendors who’ve had their heads in the sand continue to make the claim that you can eliminate bias. Because bias is… that’s a very rich topic, but bias is embedded in every piece of data that you get, and so you can’t really eliminate it. What you can do is understand it more.

Mark: Yeah. What does that mean for the people who are buying HR technology? I think they think of it at a less sophisticated level, in some ways, than vendors or analysts do. As they’re trying to make sense of the landscape, what should they be keeping in mind to keep that in perspective?

John: Well, there’s a really interesting thing. I have been looking hard in the last month or so at the human tendency to believe that computers are more powerful than they are. Anthropomorphization is the attributing of human characteristics to non-human things like machines or animals. People anthropomorphize their tools, and so if the computer says something, people give it more credibility simply because it came from a computer. This is the foundation of the “I heard it on the internet” meme: somehow, because it comes through a screen, it is more true.

When you have these intelligent tools, they are not really very sophisticated, and they’re particularly not very intelligent. They are single-purpose analytical modules that are good at solving a particular problem in a particular way, but what they’re going to give you is probabilistic information. And probabilistic information is like casino odds. With casino odds, if you get a blackjack hand, you hold at 16. You hold at 16 because the odds that you’re going to draw a card that will break you are high, but the odds that you’re going to win if you hold at 16 are relatively low.

If you bet on holding at 16, you should be prepared to lose even though that’s the proper recommendation. Just like if you have a weather forecast and it says, “There’s an 80% likelihood of rain today.” It’s a prudent idea to carry an umbrella, but the forecast wasn’t wrong if it doesn’t rain. And so consuming that kind of probabilistic information is the deepest difficulty that we’re going to have with these new tools. The thing is going to say something like, “Well, Mark Feffer, there’s an 80% chance that he’s going to work out really well on this next job.”

And the problem is that it’s not that he’s going to be able to do 80% of the job. It’s that there’s an 80% chance that it’s going to work out well and a 20% chance that it’s going to be a dismal failure. Right? So if you go in and you say, “Oh, it’s 80%. He’s going to be a success,” you’re setting yourself up for a real problem. What that turns out to mean, practically, is that when the machine makes a recommendation, you always need to double-check it and you always need to supplement it, because the way it works is that the machine makes the recommendation, but the human being gets the responsibility.

So nobody’s going to buy it the second or third time you say, “I made that mistake because the machine recommended it.” That’s not doing your job. Even though the machine makes a recommendation, if you follow it and it’s stupid, you’re going to be taken to task for it.
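As a rough illustration of the point about probabilistic scores, the short simulation below shows what an “80% chance of success” looks like when you act on it many times. The 0.8 figure, the number of hires, and the coin-flip outcome model are assumptions made purely for the example, not data from any real tool.

```python
# A minimal sketch of the probability point above: an "80% chance of success"
# still means roughly 1 in 5 outcomes go badly. Purely illustrative simulation;
# the 0.8 score and the outcome model are assumptions, not real vendor data.

import random

random.seed(42)

P_SUCCESS = 0.8   # the tool's claimed probability for a candidate
N_HIRES = 1_000   # imagine acting on that score 1,000 times

failures = sum(1 for _ in range(N_HIRES) if random.random() > P_SUCCESS)

print(f"Out of {N_HIRES} hires scored at {P_SUCCESS:.0%}, "
      f"{failures} ({failures / N_HIRES:.1%}) still did not work out.")
# Like the weather forecast: the 80% figure isn't "wrong" when a hire fails,
# but the human who followed the recommendation still owns the outcome.
```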

Mark: Great. And my last question for you is, we’ve been talking about your conclusions from last year and the landscape today, but what are you looking for this year? What are your expectations for the industry and the technology in 2020?

John: Well, so I think this is where we’re going to start the first conversations about the fact that every HR department needs an ethics organization. And every HR department needs an ethics organization because we’re going to be installing tools in our companies without exactly knowing what they do. And these tools are going to have input into the lives and livelihoods of the human beings in our organizations.

They’re going to make recommendations about how to coach. They’re going to make recommendations about how to improve performance. They’re going to make recommendations about all sorts of things. And what we don’t have currently is a good way of thinking about what could go wrong. So when I say ethics, I don’t mean “never take a coffee cup that’s worth more than five bucks.” I mean that you have to have a solid, disciplined process for anticipating what could go wrong and making decisions in light of what could go wrong.

Mark: And do you think that’s going to gain real traction?

John: I do. I do. I do. There are all sorts of signs. Accenture has opened a practice in the development of ethics committees inside of companies to handle AI, and the real meaty problem in AI is what happens in the HR department. So it’s going to be critical that the HR department takes the lead. And this is an area that’s adjacent to employment law: if law tells you what you can’t do, ethics is about figuring out what the right thing to do is. AI is going to force us to look very carefully at what the right thing to do is.

Mark: John, thank you again. It’s always a pleasure.

John: Thanks, Mark.

Mark: This has been PeopleTech, from the HCM Technology Report. This edition was brought to you by Ultimate Software, a leading innovator in HR technology. Learn more at www.ultimatesoftware.com. 

And to keep up with HR technology, visit the HCM Technology Report every day. We’re the most trusted source of news in the HR tech industry. Find us at www.hcmtechnologyreport.com. I’m Mark Feffer.


