Podcast: beqom’s CCO Aisling Teillard on AI, Bias and (of Course) ChatGPT

Transcript

Mark:

Welcome to PeopleTech, the podcast of the HCM Technology Report. I’m Mark Feffer. My guest today is Aisling Teillard. She’s the chief customer officer at the performance and rewards company beqom. She’s had a notable career in both HR and technology, so we’re going to talk about the impact AI might have on HR, look at legislation that touches AI, the realities of bias and, of course, ChatGPT, all on this edition of PeopleTech. Hi, Aisling. It’s nice to see you. So New York City has implemented regulations about using AI when you’re hiring, and I wondered if you could tell me what those are basically about?

Aisling:

Yeah. I think people are a little bit concerned about using AI in hiring. And AI in hiring is not new; it’s been around for quite a while. In fact, I remember at Unleash, maybe six years ago now, the early recruitment AI tools came to prominence, and people were going really strong for AI and using it in recruitment. But then they kind of discovered that some of the tools had bias built in, not intentionally, naturally, but it was just oversight. The people who had built the AI happened to be from a particular cohort, white technology guys, and they hadn’t really considered some of the implications of that. So when the AI was looking for candidates for a role, it naturally excluded certain segments of the population and so on.

I think New York are just being very careful about what they’re doing around AI, and I think that’s wise. But having said that, it’s moved on an awful lot in the past few years. And one of the things that I think we can look forward to with AI is that it’s going to automate the boring stuff for sure, all that kind of tedious stuff. What we have to be equally sure of is that it doesn’t eliminate the human where humans matter most, in those real touchpoint moments that really can’t be automated.

Mark:

And so what does it mean for employers? Is it really going to impact how they operate when they’re recruiting, or is this going to be a sort of quick thing to implement?

Aisling:

I think it’s going to impact them enormously, because I think AI is going to do a lot of the filtering for you. And it doesn’t matter whether it’s recruitment, whether it’s performance feedback, whether it’s all aspects of HR. I think AI will infiltrate HR technology very swiftly. In fact, it’s already there in an awful lot of solutions. One of the things that I think organizations might think about is using AI to do that filtering, to go, “Is this biased? Have I forgotten something? Are we looking for the inclusion factor throughout?” And I think that’s a really interesting way of using AI to almost check ourselves. Textio did a really cool study. I don’t know if you saw that one. They started analyzing feedback. They do an awful lot of work in recruitment, making sure that people are being inclusive in their job descriptions and so on.

And they did an analysis, and they looked at feedback that would go to candidates, for example, or feedback that would go to employees in performance feedback settings. And they found huge amounts of bias with AI. They saw that women were 11 times more likely to be described as abrasive in their performance feedback than their male colleagues. They saw that Asian men were seven times more likely to have the words brilliant or genius in their feedback. So they can look for these patterns and things, which was really, really interesting. But they also saw that it was naturally biasing itself. So they gave it roles, and they said, “Give me feedback for a receptionist role,” as such. And of course, the AI assumed it was a woman and made all these kinds of assumptions that were inherently built in, I suppose, to the AI. And it just highlighted very clearly and swiftly how we can go horribly wrong with this stuff when we don’t check it and when we don’t make sure that it’s set up for success.

Because very quickly, AI does what it’s told to do. If it’s programmed to work a certain way, that’s what it will do. And if we don’t have it set up the right way, and indeed, if we don’t have a four-eyes lens on it to make sure that it’s doing what we want it to do, I think we might have big trouble ahead.

Mark:

Today, everyone’s talking about ChatGPT, and it seems like that term is becoming almost interchangeable with artificial intelligence. I know that’s not correct, but it seems that’s how people are using it.

Aisling:

Yeah.

Mark:

So do these laws cover the use of ChatGPT at all, or is it a separate issue?

Aisling:

Yeah. Look, nearly half of companies are already using ChatGPT, and another 30% are planning to use it. So pretty much most of the organizations out there are using it in some shape or form, and 80% of companies are already using AI in some form in HR. I think ChatGPT is a really interesting phenomenon for us. I think when it first came out, people were like, “Oh, it’s a bit of a jumped-up Google,” and now they’re really seeing the use cases for it and how much it can actually edit language for you, shorten briefings for you. So if you are recruiting people and giving them feedback, or if you are performance-reviewing people and giving them feedback, and so on, it can play a powerful role.

I think what we have to be careful about is that we don’t eliminate the human in all of this, that we don’t kind of hand over our responsibilities to ChatGPT to start running things for us that we should run for ourselves, and that we don’t lose the humanity of the conversations that should be happening anyway, whether it’s with a recruitment candidate who spent time going through your processes, or whether it’s with your employees who now live in your world, where there’s some AI giving them feedback. Because very quickly, people are going to be able to recognize whether this is AI feedback or whether this is true humanity at play. And I think there’s still a place for meaningful and purposeful conversations in both scenarios.

Mark:

Well, that makes me think of a point, which is that everybody’s jumping in and playing with ChatGPT right now and using it in business, despite all the flaws that OpenAI itself talks about. So do you think that employers, or people in general, are aware of the AI’s shortcomings, and the legal issues and the compliance issues and all of those things that might impact them when they use the AI? Are they even aware of that right now?

Aisling:

I think there are an awful lot of people in organizations putting up red flags going, “Hey, we really have to think about this, all this information about our company being outside of our company.” The compliance, the privacy, the security people, I think, do have concerns, and they’re flagging them. But I think there’s a natural excitement around this stuff, and it’s really hard to stop people playing with it and then finding that it’s valuable and using it. And look, guilty, I’m using it for all sorts of things myself. So it’s kind of an impossible allure to resist. It’s a bit like when Google first came about. Everybody was searching on Google, and we all thought that was magic. And I think ChatGPT has the same magic about it. But I think maybe we’re not having the right conversations around the balance of when we’re comfortable as an organization using it.

I know internally at beqom, we’re really clear about when we use it, when we don’t, tracking usage. You really have to manage it, because I think otherwise there is a point where it will become an unhealthy play for the organization. But right now, I think people are just excited by it, and they’re using it and seeing lots of value in it. And there is lots of value in it. You can’t deny it. It’s like when any innovation comes along. People want to engage. They want to see what it can do for them. And it can do phenomenal things. In the very way that Google transformed the way we search, ChatGPT is transforming a lot of things: the way we do business, the way we communicate, the way we search for information, the way we even look up things in our own space.

Sometimes it’s actually better at describing things than we are. So it can be used for training, for onboarding. If you ask AI to give you a really good training on the basics of performance or of compensation, it’s actually pretty good. It’s actually got some pretty good answers in there.

Mark:

Let me go back to the New York laws for a second. For some reason, they really fascinate me. The law requires that employers audit their AI systems before using them in talent acquisition. And I keep wondering, well, what does that mean? How do you conduct an audit to figure out whether or not your AI is biased?

Aisling:

Yeah. I think they have to go after the decision-making tools in particular. And this is something that people using AI have to be really careful about: you have to show when AI is involved in the decision. Since AI is complex and it’s evolving, it’s important to equip employees with as much exposure to and training on AI as possible. Even in our own technology, we’re using some AI, but we’re very clear to say this was generated by AI, so people can track and audit when AI is involved. Because if you can’t track when AI was involved in a decision, you don’t know whether that decision was made by AI, and the consequences of it are down to an AI issue, or whether it was made by somebody in the business. That’s a very dangerous place for people to be.

So I understand why New York have gone down this road in introducing those restrictions, particularly around hiring decisions. I think they have a point: if, for example, your recruitment does become biased, if you’ve got it wrong, if you’ve inherently discriminated against a population, you want to know, was it AI that introduced that flaw on our behalf? And how did the AI get to that point? And what do we need to fix in it? Or actually, do we have a people problem here? I wouldn’t be against what they’re doing. I think it’s a really interesting take to make sure that employers are really thinking about these things and aren’t just pushing stuff across the line without thinking about the implications and the consequences of it.

Mark:

You know the story about Amazon and the AI it developed to screen resumes for technology folks. They started to run it, and what they discovered was that it was biased against women. It was downgrading anyone who’d gone to a women’s college, played certain kinds of sports, just all of that kind of thing. And over the course of a couple of years, they could never fix it. They finally just walked away from it. So I’m wondering, if you’ve implemented a system that does something like that, what do you do? How do you go about finding the issues and fixing them?

Aisling:

Yeah. Look, I think you’ve got to be very careful in the selection of vendors, to go, “Did they have an ethics committee when they were building the AI?” I think that’s super important. I don’t think you can just buy AI without knowing that these people have really checked themselves on the AI. And there are an awful lot of voices in the industry who’d be very strong about this. You’re probably familiar with Thomas Otter and his really strong opinions: anybody who’s out there selling AI tools needs to be able to trace and show how they checked it against all these biases. What is the oversight on the AI to make sure it doesn’t go there? And I think that’s the piece when you’re buying tools that incorporate AI: to be inquisitive about how they made sure it was going to be the inclusive tool that it needed to be when you’re hiring and so on. I think that’s really important. Because anybody can use AI.

This is available to everybody now. So it’s how you apply the AI and what kind of second checks you put in place to make sure it’s doing what it’s intended to do and not having those unintended consequences that we all fear. And I think we all have a concern about where this could go for everybody.

Mark:

I know I’ve been asking about bias a lot, but I’ve got to think that there are other challenges or problems with AI, at least at this point. Are there? And what are they?

Aisling:

Yeah. I think there are two lenses to look at AI through. I think it can do a lot of good, and we can’t unravel all the good stuff it can do for us. It can really make life very, very simple for people. But I think the unintended consequence of it is that we start to automate the relationship and put this middleware between us as people, and that it starts to take over the employment relationships between people. And at that point, if we’re overutilizing AI, we have to look at issues like trust, like the humanity between people, and how we ensure that we are still bridging relationships. You’re bringing somebody into your organization, or you’re managing that person’s career, or you’re helping them become a better version of themselves through progressing their growth, their career, their development. Yes, AI can help us do these things. But in the end, we do need to have those trusted relationships between people.

We need the tool to enable us to have those trusted relationships, but not to take over the relationship. And I think that’s the tipping point, isn’t it? There’s a beauty in AI being really helpful and being a bridge between us all to make life a bit easier for us. But then there’s the tipping point of AI actually going to the extreme and starting to take over, and then it’s feeling a little bit cold, a little bit soulless. If AI is managing me, then why work for AI instead of working for a company where they actually care about me? So I think we have to bridge that gap and balance it out to go, well, maybe it’s really easy to automate goals, for example. The setting up of goals can be tedious, and goals are often repetitive.

AI can help us there, but you don’t want AI to sit down and go, “What’s really important to you? Where do you want to take your career next?” That shouldn’t be an AI conversation. That should be a conversation between an employee and a manager. And I think we have to be careful and almost remap the employee journey to go, which parts make sense to be AI, where it’s really helpful to automate, and which parts should be human? I actually think we need to… Do you remember all those employee journeys we did back in the day, maybe four or five years ago, when everybody was mapping their employee journey according to engagement? I think we need to remap those journeys according to, where is AI helpful? Where is it not? Where do we take a stand and go, “No, this is a conversation that will always be human; this is a process that will always be human”?

And this is the one where AI can really help. I think we really need to rethink an awful lot of our processes, whether they’re recruitment or otherwise, around that journey.

Mark:

You’ve actually touched on my next question several times, but I want to ask it specifically. And it’s, what’s the human factor in all this? You were just sort of talking about people interacting with their manager rather than the AI, but I’m wondering, how much influence do humans have on the development and the ongoing operation of the application? And are there other issues that may not be evident about the human-AI interaction that employers should be thinking about?

Aisling:

Yeah. It’s such a complex space, isn’t it? I think that there are real moments where we need those human interactions. Because in the end, what drives us are trusted relationships. That’s why I come to work; that’s why I stay at this workplace. If I don’t have those trusted relationships, then chances are I’m probably going to get a bit cynical, or I’m going to leave, or I’m going to check out or not be so engaged. So I think we’ve got to be really careful to maintain trusted relationships with our human colleagues and have those conversations of, where do you want your career to go? How do you want to be developed? But maybe the AI can help in giving us an approach. I logged into our own goals genius product, which is AI-driven, and I was like, “I want to be better at this by the end of the year, give me some help.”

And it actually came up with four things that I could do better, development initiatives for me. And I was like, that’s helpful, because my boss wouldn’t have thought of those four things. So there are times, I think, where the AI is helping me as an employee and it’s helping the boss. Because if you want to be better at influencing or networking or relationship building or presentation skills, or whatever your need is, your boss can identify that and go, “Well, I think this would really help your career, to get better at this.” But they don’t necessarily know how to go about it. How do I get you to be a better influencer? I can’t just tell someone, “Go off and be a better influencer.” They need a little bit of help. So I think that’s where AI can be really, really helpful: it can help the boss come up with ideas, and help the employee think about how to go about these things.

But in the end, I want to have that conversation with my boss. I want to have the conversation to go, “Well, I’m thinking of doing these things to support my development. What do you think?” So there are still those human moments that I think we need to bridge together. And we can bridge them together through technology, but we have to have the human touchpoints. But AI can be super helpful too. So I think for me, it’s a big balance. It’s a balancing act, and we need to carve it out very intentionally. I think if we just let the AI take over and do its thing, we could end up in chaos. I think we have to be very intentional about which pieces of the employment relationship we carve out for AI and which pieces we keep human. I don’t know if I’m answering your question there. Am I?

Mark:

I know it’s kind of an amorphous question. Let me narrow it down a little bit, which is just that AI’s being used a lot in HR and talent acquisition. How do you think HR can strike the balance between the technology and their staff? You hear talk of empowering people rather than replacing them. Do you think HR today has a good handle on how they should approach that?

Aisling:

No. In honesty, I don’t. I think everybody’s still figuring it out, and people are at very different places with this. And I have the privilege, I suppose, of talking to HR professionals pretty much every day of the week, pretty much eight hours a day. And I really see them struggling with, “Well, what do we use? Where do we start?” Some are really fearful of it, going, “We don’t want to even look at it at all,” which I think is a mistake, because I think you could get left behind if you don’t look at it. And then other people are fully embracing it without thinking of the unintended consequences. I’m not sure we’ve figured it out. I think it’s still too early in the journey to go, “HR has this. We’ve got it sussed.” We don’t. We definitely don’t as a profession. I don’t think we have. But I don’t think any profession does, so I wouldn’t be too hard on us there.

I don’t think any profession has really mapped out the journey in meticulous detail, about when we’re using AI and when we’re not. I think that’s a journey that we’re all taking together as a society in many respects, not just organizationally. But I think everybody in society is figuring out where the play is, how much of it we want to indulge in, and how much of it feels like, oh God, the robots are taking over. And we can’t pretend that jobs won’t be lost. Jobs will absolutely be lost with AI. There is no doubt about that. I suppose our hope is that it’s the jobs that people don’t want, those tedious, repetitive jobs that can easily be absorbed by AI, and hopefully that enables us to level up. But I still think there’s a big skills issue coming, because we can’t just assume that all the people who are doing those repetitive jobs day in, day out, or indeed some of them not so repetitive, some of them quite creative…

If you think about Copilot and Microsoft, well, a lot of people across organizations sit around doing PowerPoints every day. They don’t really need to anymore. Copilot’s going to do it for you. And those were not repetitive jobs, because they were doing different styles of presentations for different boardroom packs or whatever. AI can do all that for you now. So I don’t think we’re thinking enough about how we re-skill those people now to do other things, because what they do today is very soon going to be redundant. And I think we have a bit of a skills crisis there, because can we get them upskilled to do entirely other things? You think about how the tech industry responded to its own crisis and how many people it let go out of the industry. Maybe if they’d thought differently, they could have re-skilled some of those people, because loads of recruiters left the tech industry.

Lots of them happened to be women, so now the tech industry is down again on its diversity stats. Well, could we start thinking a bit more smartly about other roles? There was one tech company, whose name escapes me, that re-skilled its recruiters and put them in the business. And then when the business came back, the recruiters went back to being recruiters, and the recruiters themselves said it was the best thing that ever happened. They understood the business so much more now. They can recruit much better for the business. It was a good thing for them, instead of just letting them all go. So I think we have to think about skills transferability and how we can start to upskill people in AI, so that people can still lead and manage AI, as opposed to the AI managing us and doing us all out of jobs. So I really think there’s a balancing act there for HR to think about, in terms of the skills that we need for the future and how we get ready for that.

Mark:

Well, Aisling, thanks very much for taking the time. It was great to meet you, and I really appreciate your talking with me.

Aisling:

Thanks so much. Thanks for having me, Mark. It’s been an absolute pleasure.

Mark:

My guest today has been Aisling Teillard, the co-founder and chief customer officer at beqom. And this has been PeopleTech, the podcast of the HCM Technology Report. We’re a publication of RecruitingDaily. We’re also a part of Evergreen Podcasts. To see all of their programs, visit www.evergreenpodcasts.com. And to keep up with HR technology, visit the HCM Technology Report every day. We’re the most trusted source of news in the HR tech industry. Find us at www.hcmtechnologyreport.com. I’m Mark Feffer.
