Today I speak with Elaine Pulakos, the CEO of PDRI by Pearson. Their business is assessments, so she spends a lot of time thinking about how technology can help them. Today, obviously, that means she thinks a lot about AI. There are a lot of questions in the space, covering everything from AI's role in best practices to how it can help develop assessments. We'll cover that and more, on this edition of PeopleTech.







Powered by the WRKdefined Podcast Network. 

[00:00:00] Welcome to PeopleTech, the podcast of Workforce AI.News, I'm Mark Feffer. Today I'm talking with Elaine Pulakos, the CEO of PDRI by Pearson. Their business is assessments, so she spends a lot of time thinking about how technology can help them out.

[00:00:28] Today, obviously, that means she thinks a lot about AI. There are a lot of questions in the space, covering everything from AI's role in best practices to how it can help develop assessments themselves. We'll cover that and more on this edition of PeopleTech. Hi Elaine, welcome.

[00:00:47] So first you're conducting a study right now that's looking at the use of generative AI in assessments. What do assessment providers think about generative AI right now? Not just in hiring but in performance management and all of the other areas it's involved in.

[00:01:10] Yes, I think, as I've talked to a lot of people in the field: we don't know. Everyone is optimistic; we look at these tools and they're remarkable. We can see in our initial experimentations with them the amazing things that they can do.

[00:01:32] Last week, for example, we wanted to create a realistic job preview, embed some assessment items into it, and see if we could train the AI to deliver all of this. And what it produced was extraordinary, from basically a very simple instruction.

[00:01:56] So clearly there's a lot of power there, there's a lot of potential there. But the assessment field in particular is a high-stakes situation, because we're hiring people. So there are implications for job candidates; we have to treat them fairly.

[00:02:15] There are implications for organizations; we have to make sure that we're actually identifying the best people for a job. So it's not just a fun use of AI, it's one that has consequences for both individual applicants and organizations.

[00:02:33] We have to be really careful, because we've seen some, I guess for lack of a better word, just poor uses and poor outcomes from AI that are irresponsible.

[00:02:48] So where we're at right now is: it looks great, lots of potential capabilities for our field. Not just assessments, but also performance management, when you think about scraping all kinds of data to provide more accurate performance information about people, data from their day-to-day work.

[00:03:11] There's high potential there, but goodness, there are privacy concerns, adoption concerns, fairness concerns. And I was trained as an industrial and organizational psychologist before I became a CEO, and that's still largely my true profession.

[00:03:30] So our field has a lot of best practices that we use to design and develop all of these systems. And what we find is that when we just very loosely try to leverage AI, let's say to support assessments, the AI doesn't necessarily know the best practices.

[00:03:52] It goes out there and scours all kinds of data so we'll see common practices.

[00:03:59] And I'll give you an example of that in a minute but we don't necessarily see the best practices that our field knows need to be incorporated in the development of our tools and use of the tools. So we worry about that.

[00:04:16] And I'll just give you a real quick example and then let you ask me some questions.

[00:04:22] But when it comes to item development for assessments, best practice would tell us, and we know this from our research, don't use "all of the above" or "none of the above" as response options. It's just not item-writing best practice.

[00:04:39] But when you ask AI tools to just write a question for you to assess a skill, or whatever you want to assess, you oftentimes will get "all of the above" and "none of the above."

[00:04:50] So that's just a very simple, I hope easy-to-understand example of how, when we're not curating our models, ensuring quality, and training them on best practices from our field, you can get poor-quality items. You can get weird things going on with the models.
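The curation Elaine describes can start as something as simple as an automated quality gate on generated items. As a minimal sketch (the item format and function name here are my own illustration, not PDRI's actual tooling), a post-generation check might flag drafts that use the banned response options:

```python
# Hypothetical post-generation QA filter: flag AI-drafted multiple-choice
# items that violate a known item-writing best practice (using
# "all of the above" / "none of the above" as response options).
BANNED_OPTIONS = {"all of the above", "none of the above"}

def flag_banned_options(item: dict) -> list[str]:
    """Return the response options in `item` that match a banned pattern.

    `item` is assumed to look like:
        {"stem": "...", "options": ["...", "...", ...]}
    """
    return [
        opt for opt in item.get("options", [])
        if opt.strip().rstrip(".").lower() in BANNED_OPTIONS
    ]

item = {
    "stem": "Which practice improves item quality?",
    "options": ["Pilot testing", "Expert review", "All of the above"],
}
print(flag_banned_options(item))  # ['All of the above']
```

In practice a filter like this would be one rule among many in a human-reviewed pipeline; the point is that best practices the model doesn't know can still be enforced after generation.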

[00:05:13] So we have to be careful that we're not going out too quickly and that we're really doing enough research to understand the outcomes and that they're going to be sensible outcomes and fair outcomes for candidates and good outcomes for organizations.

[00:05:31] Now, I was just reading that there's obviously a lot of talk in pretty much every industry about people rushing into AI. But the reality is that only about 5% of American businesses are using it or seriously looking to use it.

[00:05:51] How do the assessment business and assessment firms align, or not align, with that? How much are they talking, and how serious is the talk about AI? So I think in our field it's actually pretty serious.

[00:06:08] You know, our CEO, and this is the CEO of Pearson now, is very interested in seeing us be at the forefront of technology.

[00:06:19] I see other researchers in our field highly interested in testing out models and trying to train them to see if we can train them to replicate our best practices.

[00:06:32] And I think in our field it's actually important, because if you look at what's going on now, you may be familiar with this: both the federal government and large companies are looking to tear up their hiring models. They're looking to tear the paper ceiling.

[00:06:50] And what I mean by that is, there's, for example, the Chance to Compete Act, which would remove degree-based hiring from the requirements for getting a job in the federal government in favor of skills-based hiring.

[00:07:05] Well, if you think about this, think of all the different skills, at least today, that one might need to assess for any job opening. I mean, that's a lot of assessment development work.

[00:07:18] And if you're taking away degree requirements, you've got to have some way to assess who can actually do the job. The good news is, with skills-based hiring, you're not artificially excluding people who don't have degrees. But then how do you assess all these skills quickly?

[00:07:37] You know, how do you develop that many assessments that would be appropriate, valid, unbiased assessments? So we hope that generative AI can help us in our test development work.

[00:07:51] And in fact, some initial product development work has been done in my company to see if we can actually leverage AI effectively to develop assessment items. And I believe that we can. So again, there's a lot of promise there.

[00:08:07] But we need to scale. When we can no longer use degrees, we need to scale the efficiency and effectiveness of hiring, and we want to assess skills.

[00:08:16] I think we need some tools to help us get there, and AI holds great promise for being able to do that. But I'll say that we also need to make sure there are guardrails around our use of AI, and that we really understand it.

[00:08:34] And we can train the models using curated, high-quality data and infuse our best practices in the models. Because otherwise it's kind of a garbage-in, garbage-out situation.

[00:08:48] And I fear, and we've seen, people being treated unfairly, or just purely unexplainable assessment outcomes. You know, odd things get correlated with your ability to get a job, like your name being predictive of high performance, which is absurd. So there's something wrong there.

[00:09:12] And because we don't understand what's going on with the AI models and we can't explain it, then we can't fix it, which is why we need, I think a lot of work in training these models and pressure testing them to make sure they're working properly.

[00:09:28] Can you give me an example of how AI could be used to develop questions? Well, it's interesting. Can I give you my favorite example? Actually, it's to develop a type of question.

[00:09:42] But it's something that we've been thinking about that we're going to make part of our research program. I don't know if we're going to be able to do this, but I think it helps to explain where I think AI could be super helpful: in interviewing.

[00:09:55] So we know, you know, just about everybody has to go through an interview to get a job.

[00:10:02] It's rare, especially for, you know, mid- to higher-level jobs or entry-level professional jobs, that you can actually get hired without somebody, a hiring manager or HR person, talking to you.

[00:10:14] But what we know from our research is that unstructured job interviews can be quite biased and they don't often do a good job of identifying the best candidate for a job.

[00:10:23] But when we use a structured interview, and what I mean by that is standard questions, and then structured evaluation criteria that you can use to evaluate the responses in a standardized way,

[00:10:39] These provide very powerful, fair and accurate hiring tools and they're very predictive, you know, of future performance. So we're curious now about the extent to which we can leverage generative AI to help us in the interview process.
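The structure she describes, standard questions paired with standardized evaluation criteria, can be sketched as data. This is a hypothetical illustration (the class, question, and rating anchors are mine, not an actual PDRI instrument):

```python
from dataclasses import dataclass

# A structured interview pairs each standard question with anchored rating
# criteria, so every candidate is asked and scored the same way.
@dataclass
class StructuredQuestion:
    skill: str
    question: str
    rating_anchors: dict[int, str]  # score -> behavioral anchor

q = StructuredQuestion(
    skill="interpersonal skills",
    question="Tell me about a time you resolved a conflict with a coworker.",
    rating_anchors={
        1: "Blames others; no resolution described",
        3: "Describes a resolution but says little about their own role",
        5: "Clear actions, perspective-taking, and a concrete outcome",
    },
)

def anchor_for(anchors: dict[int, str], chosen: int) -> str:
    # The evaluator picks the anchor that best matches the response;
    # the anchor text is what makes the rating standardized.
    return anchors[chosen]

print(anchor_for(q.rating_anchors, 5))
```

The point of the design is that the judgment lives in the shared anchors rather than in each interviewer's head, which is what makes scores comparable across candidates.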

[00:10:57] So it's kind of easy to ask the AI. Let's say I want to assess whatever skill you're interested in, say interpersonal skills. You can ask the AI to create a question for you, you know, to assess interpersonal skills.

[00:11:14] But if you think about it, how do you know that's a good question? How do you know it's really getting at the skills that you want to assess, and that the question you're asking is really job-relevant and free of bias?

[00:11:28] And so I think AI can really help us there, but that's an example of where we need to really review and curate the content that's generated to make sure we're not just asking questions, but good structured interview questions.

[00:11:43] Then let's think about the next step. So somebody answers an interview question, and oftentimes, especially when you're using structured evaluation criteria, they actually don't give you enough information to make a judgment. So the interviewer needs to be really good at probing.

[00:12:01] They need to understand the candidate's response correctly, recognize job-relevant themes, identify which responses actually need further elaboration, and then create good probes that don't give away the answer but really do probe the candidate properly, so that you can elicit the information you need to evaluate the candidate.

[00:12:25] That's what a good structured interview does. The question becomes, you know, can we train an AI model to do that? To recognize job-relevant information and really create good probes on the fly where it needs to?

[00:12:41] We think we may be able to train a model to do that, and this is really interesting to us. But you'd have to do it in a way that also ensures you're avoiding bias, and you're monitoring performance over time to make sure it's not going awry.

[00:12:54] So you see, with AI, I don't know that we'll ever get rid of the human interviewer.

[00:13:02] But I do think, you know, along the lines of Microsoft Word's Copilot, I think we could provide a copilot that might do a better job of helping interviewers do good interviews, along the lines I've been talking about here. Because I've got to tell you, in my own experience,

[00:13:20] we can write structured interview questions, but training interviewers how to probe those properly and really elicit the information they need to evaluate the response, and then, I'll add a third complexity there, apply the rating criteria properly? That is hard to do. You know, some people do it well.

[00:13:42] Some people, even though we train them a lot, can never get there. But the point is, I do think there's promise in AI models potentially providing a copilot that could help interviewers do those kinds of things better.

[00:13:55] So that's one of the applications we're exploring as part of our research program. We're also exploring just very simplistic item development, like: I want to evaluate a plumber, their skill in plumbing.

[00:14:10] So knowledge questions, you know, tell me how you connect two pipes properly, whatever the case may be. I mean, you can ask AI tools right now to help you just generate questions.

[00:14:26] But again, I think we need a lot of work to make sure we're capturing the right questions, that we're really sampling properly the domain of things we need to be asking.

[00:14:37] We can't just let the AI run amok, because the models aren't developed well enough to do what we know is good assessment development practice today. But I think the jury's still out on how much it'll be adopted, how quickly, and how acceptable it's going to be.

[00:14:51] I was having a conversation with my husband, who's an engineer.

[00:15:06] And he works for a company that just set up a big AI research center here in the Washington area, which is where I live.

[00:15:14] I was telling him about this interview research, and I was all excited about it because I thought it was cool to explore what we could do with this.

[00:15:22] I was telling him initially that I thought maybe we could even train the AI to conduct the whole interview and evaluate the responses, and we might not even need to use a human.

[00:15:36] And he said to me, because we'd have to disclose this: if an AI tool called me up and interviewed me, I wouldn't take the interview. And I thought to myself, you know, Elaine, you're a psychologist and you've studied human behavior your whole life.

[00:15:52] And you know that the candidate experience is really important, and companies won't implement hiring practices that turn candidates off. So there's a big question out there in my mind about what people are going to accept, too.

[00:16:08] Because if your company uses an AI approach to hiring and people don't like that, then guess what? You might turn off the very people that you want to come to your company.

[00:16:20] So that's the kind of intangible variable here, I think, as a human race: What are we going to accept? How much of this are we going to accept? How much are we going to want?

[00:16:29] I don't know the answer to that, but it did temper my thinking, toward more of a copilot model, because I wouldn't like it either when I stopped to think about that angle.

[00:16:43] So there you go. I don't have the answer to that. I don't think any of us do.

[00:16:47] Well, that's actually a really good point though. And it makes me wonder about the dangers or the drawbacks of using AI, not just the technology itself, but people's reaction to it and sort of unintended consequences.

[00:17:07] What do you think are some of the biggest dangers, for lack of a better term, or the kinds of challenges outside of the technology that people using AI might run into?

[00:17:26] Well, you know, I think there's a few. Outside of using the technology, I'm not sure I understand that question. But I think there are a lot of potential downstream consequences of using the technology on people.

[00:17:46] I think you could actually hurt people. You know, if you have a biased AI, you not only hurt people, but companies will run into legal issues. Because surrounding hiring, for example, there's all sorts of not just professional standards,

[00:18:05] but legal standards. And there are class action suits that people in companies or outside of companies can bring if they feel that they're being treated unfairly.

[00:18:16] There are laws that prohibit discrimination that require your test to meet certain standards and be valid, especially if they don't treat everyone the same.

[00:18:27] And we know in the hiring field, there are times when certain demographic groups and gender groups get discriminated against or have gotten discriminated against with certain assessment programs.

[00:18:39] So companies can be hurt in that way if we don't get this right in the assessment field. Candidates can be hurt by not having access to employment opportunities.

[00:18:51] Then I think there's the whole psychology of using AI. What can you trust? You know, who am I talking to? The other thing that I worry about is privacy concerns.

[00:19:02] I mean, to be perfectly honest with you, you talked about performance management. I've actually written probably three or four books on performance management trying to get that right over the course of my whole career.

[00:19:14] And I'm not sure I've gotten it right yet, but no formal system really will do performance management for you. It is inherently a human activity.

[00:19:25] It's a manager having a conversation with his or her employee and coaching and trying to bring out the best in that employee and helping to solve problems.

[00:19:34] I mean, that's what good performance management looks like. And it requires a level of trust and willingness to disclose. Well, now everything I do at work, everything I say, is being recorded.

[00:19:46] And how is that data all being used? And how is that going to come back to help me or hurt me?

[00:19:53] I think there's some real danger there around privacy and around just destroying trust. I don't know. It just, I think we need to be careful and we need to take this a step at a time.

[00:20:08] I'd be nervous about implementing that kind of system in my organization because we have good trust between managers and employees. We have good conversations.

[00:20:19] And I'd be worried that if all of a sudden we're doing a data check on everything everybody does and somehow using that in random and potentially biased ways, I don't know how that would go. I would worry about that.

[00:20:34] Let me ask you one more question. You're a CEO, so you're a business person. You're also a psychologist. And I kind of assume you've just been observing business and talking to business people and all that kind of thing. Yes.

[00:20:52] Generally speaking, can you give me some perspective on AI right now? What does the world of business really seem to think of it? What are your colleagues, other executives, talking about? What are their concerns? Are they excited? Is it too expensive? Just anything like that. What strikes you?

[00:21:18] I think I've seen two things happen. One thing that's been really interesting to me is, and these are a lot of people who are running hiring programs.

[00:21:33] So I'll talk about executives and their view in just a second. But a lot of our customers are saying, what can you do for us in AI? Because I've got this goal that I need to find 10% savings through the efficiencies that can be created with AI.

[00:21:52] And I don't know what to say to my bosses. So I've had several people actually talk about how they're getting goals on finding efficiencies from AI. And we're looking to do that in our business as well.

[00:22:08] Like, how can we write items more quickly? But again, you know, that is only going to be ready for prime time after we are sure of what we're producing and can stand behind it.

[00:22:20] So I see some instances where companies seem to be coming out too soon and very cavalierly pushing down, you know, how can you use AI to make our processes better? You know, what are you going to do? Where can you find some savings and efficiencies?

[00:22:37] And honestly, I worry about that, because I'm not sure we're ready for prime time. So that's kind of one side of the coin.

[00:22:44] Most people I talk to are kind of wait and see. You know, you hear, oh, this is going to be transformative. This is going to get rid of jobs. This is going to change how we go to work every day.

[00:22:57] You know, to the point where I say, okay, how exactly? And they're not there yet. I heard a story, and I don't know if this is true. But some people were calling some of the big consultancies, you know, the big names that you would be familiar with that do this kind of work, and saying,

[00:23:16] Okay, I need to do something. What should I do? And what I heard is that none of the big consultancies are rushing to have solutions right now.

[00:23:28] Everybody's taking kind of a wait-and-see attitude, trying to advise around it. And I think they're taking this with some degree of caution, so that we don't have downstream consequences we don't expect. That's what I'm seeing, you know, so it's two ends of the continuum.

[00:23:47] But that's what I've been hearing in the market, both of those angles. It's moving so quickly, I have to say I honestly don't know what most people think. I know our approach, and I think it's a measured approach. That's all I can share with you.

[00:24:02] I think it's the right approach: let's see where we can apply these tools to help us. Let's not move too quickly and put out products that we're not 100% sure really embody our best practices, that we can stand behind, and that we know what they're doing.

[00:24:23] We've got to be able to explain them. It can't be a black box and we've also got to monitor them carefully so that if something changes, we know about it and can address it.

[00:24:36] So it needs to be a measured approach, and frankly, a lot of investment to do this. You know, we can't just leverage the AI model to do stuff.

[00:24:45] You know, I hope my whole explanation of what we need to do to train that interview copilot, for example, helped illustrate that, because it's a lot of steps and a lot of monitoring to make sure we can get the applications right.

[00:25:02] Elaine, thanks so much for taking the time and talking with me today. You bet. Anytime. Thank you. My guest today has been Elaine Pulakos, the CEO of PDRI by Pearson. And this has been PeopleTech, the podcast of WorkforceAI.News.

[00:25:30] To keep up with AI technology and HR, subscribe to WorkforceAI today. We're the most trusted source of news covering AI in the HR tech industry. Find us at www.WorkforceAI.News. I'm Mark Feffer.