[00:00:00] Welcome to PeopleTech, the podcast of WorkforceAI.news. I'm Mark Pfeffer. My guest today is Tracy Kantrowitz. She's the Chief Professional Services and Product Officer of Pearson's PDRI. Assessments, resumes, and AI, that's what we'll be talking about today.
[00:00:29] AI's impact on assessments, how assessments and resumes fit together or don't, and more on this edition of PeopleTech. Hey Tracy, I wanted to start by talking about resumes and assessments and how that all sort of fits. And I wonder if you can tell me what's happening in that regard now in the industry.
[00:00:58] Sure, yeah, absolutely. Well, thanks for inviting me to participate in this important conversation. I'm an industrial psychologist by training, and this is the work we do on a day-to-day basis: designing talent assessment programs for large organizations, both in the private and public sectors. And there's been a lot of focus on various hiring tools and their efficacy, their predictive validity, and their efficiency in recent years.
[00:01:26] And it's interesting to think about the focus on resumes. I think resumes are just a mainstay in the vast majority, if not all, hiring practices out there. Show me a job in which candidates don't have to submit a resume, right? But from a scientific perspective, if we take a step back, it's really one of the least valid hiring tools. So, you know, AI gets a lot of critique these days for having a sort of black box component to it.
[00:01:54] But really, if we think of the resume, it's kind of the ultimate black box. What candidates choose to put into them is generally unstructured, unstandardized, and at their discretion. And how they're evaluated by recruiters and hiring managers is often unknown and unstandardized. And really, you know, we're talking about making real decisions about who gets a job.
[00:02:17] So we're, you know, we're using hiring tools to make predictions about what candidates are most likely to show up and be successful on the job. And the resume has been seen as a data point to help do that, right? But it's really quite flawed in a lot of ways. Used everywhere, used all the time, but it could be fraught with just so many issues. That can lead to incomplete or biased and inaccurate decisions. And AI is really exacerbating that, right?
[00:02:45] Because AI is predicated on having quality data to work with in the first place. But we know from resumes that there's many problems. I mean, they're a self-report tool, so they may not be accurate or valid in that way.
[00:02:59] And without being able to verify candidates' actual skills, not just what they report, which is something that can be done with more objective assessments, there's just such huge potential for candidates to manipulate the information in a way to put their best foot forward, which is everyone's natural tendency. Another problem with resumes, and again, this gets exacerbated with AI applications, is that they're unstructured, right? So, you know, candidates choose what to put in a resume, right?
[00:03:28] So if they didn't have an outstanding GPA in college, they may not list that where others may. If they didn't get that graduate degree or undergraduate degree, they may neglect to include educational details. Some may include credentials or certifications or languages or hobbies, all sorts of things. And especially these days, candidates are including things like pictures or other information that may not be relevant to the job requirements.
[00:03:56] So generally, without standardized guidance on how to screen a resume for job-relevant criteria, hiring managers may be using implicit biases or gut feelings when it comes to whether an applicant meets job qualifications. And how does AI impact that? I mean, specifically, you mentioned it can have an impact, but what kind of mischief can it cause? Yeah.
[00:04:23] Well, I mean, for a while now, automated resume screening may be the only hiring tool that some companies are using to hire, right? If the volume is high or company resources are minimal. So, you know, AI adds a lot of efficiency to a process, right? Because in lieu of somebody having to manually review resumes, the AI can essentially do that.
[00:04:47] And as it has existed for a long time, automated resume screening has really been, you know, not much more than keyword searching. So savvy candidates, you know, they know to embed terms in their resumes from things like the job description. And they know that that's a strategy that will get them past that automated screener. So that's kind of been the more primitive application of automated screening.
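The primitive keyword screening described here can be sketched in a few lines. The keyword set, threshold, and function name below are illustrative assumptions, not any vendor's actual implementation; the point is how easily an embedded-terms strategy defeats it:

```python
import re

# Hypothetical keywords pulled from a job description.
JOB_KEYWORDS = {"python", "sql", "etl", "airflow"}

def keyword_screen(resume_text: str, threshold: int = 3) -> bool:
    """Pass the candidate if enough job-description keywords appear."""
    words = set(re.findall(r"[a-z+#]+", resume_text.lower()))
    return len(JOB_KEYWORDS & words) >= threshold

# A candidate who simply embeds the right terms sails through:
padded = "Experienced with Python, SQL, ETL pipelines, and Airflow."
print(keyword_screen(padded))  # True, regardless of real skill
```

Nothing here verifies that the candidate can actually do the work, which is the speaker's point about resume data quality.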
[00:05:10] But as AI has advanced and we see things like machine learning algorithms, you know, they're much more capable of producing more predictive results compared to the more primitive keyword searching. But, you know, that assumes that the data are of good quality in the first place, you know, which we can't really assume with resume data, given some of the things I mentioned, right?
[00:05:31] Around the unstructured nature, candidates are choosing to include or exclude information sort of at their own discretion versus there being a standardized practice for it. And that's on the input side, right? On the output side, we don't really know, commonly, how hiring managers are reviewing resumes, right? So, you know, some of the things that have been long discussed, right? Did the candidate go to my alma mater? The candidate may be viewed favorably in that regard.
[00:06:01] So there may be biases in place on the evaluation side. So it's really kind of an ultimate garbage in, garbage out situation, you know, even with more sophisticated AI applications. If the data aren't valid and reliable on the input and output sides, AI is not going to buy us really anything beyond keyword searching.
[00:06:21] I've been hearing a lot about assessments replacing resumes or the use of resumes or certainly having more weight in the whole decision-making process of hiring. How do you see them working together? Do you see assessments ever replacing the resume or how do you think that's going to play out? Yeah, it's a great question.
[00:06:46] You know, I think they do different things, you know, and especially in this day and age when a lot of companies' talent practices are moving, you know, beyond educational degrees. So this notion of, like, tearing the paper ceiling that's often discussed, you know, companies like Accenture and Chevron and Google, et cetera, among many more, you know, are kind of moving beyond educational credentials, which has clearly been a longstanding focus of the resume.
[00:07:16] You know, do they simply possess that degree, that credential to move past, you know, to the next stage of evaluation? So what are they supposed to do in lieu of using that? So that sort of movement, right, of skills-first procedures would suggest a minimal role for the resume going forward. So, like, what are companies left to do then? Assessments are a much more robust and scientifically evaluated hiring tool.
[00:07:43] You know, I mean, even from a scientific perspective, so the literature has been looked at, and, you know, even on their best day, right, resumes are not a very predictive source of information.
[00:07:55] So the meta-analyses that we've conducted in the field of industrial and organizational psychology show that things like years of job experience or education or reference checks, which are part and parcel of a lot of large companies' hiring programs, fall far below other predictors, like assessments, in their power to forecast ultimate job performance.
[00:08:19] So, again, I think it suggests a minimal role going forward for resumes, if at all. I think a lot of more sophisticated organizations are moving right to objective assessments to understand candidate skills. And that's something we know a lot more about. So the validity, so the predictive power, right, of assessments of job-relevant skills has been supported through more than a century of research.
[00:08:44] And they can be really efficiently implemented and used as an objective and standardized source of applicant qualifications. Things that resumes have done, sort of in terms of the efficiency, directing candidates right away to online assessments, putting into place things like automated passing scores so that candidates that achieve a minimal score would go on right to the next stage of a process, whether that's an interview of some sort, right?
[00:09:11] So the point is, you know, assessments can serve in the capacity that resumes have effectively in that they're automated, short, and efficient, right? So in as little as 30 minutes, in most cases, candidates can complete a short job-relevant assessment. We can get a lot more predictive power out of that. And it still saves the company time and resource on the front end of their hiring process.
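The automated passing scores mentioned above amount to a simple cutoff rule. The 0-100 scale, threshold value, and stage names in this sketch are hypothetical:

```python
def route_candidate(score: float, passing_score: float = 70.0) -> str:
    """Route a candidate based on an assessment score (assumed 0-100 scale).

    Candidates at or above the cutoff advance automatically to the next
    stage (e.g., an interview); others are screened out with no manual
    resume review needed on the front end.
    """
    return "advance_to_interview" if score >= passing_score else "not_selected"

print(route_candidate(82.5))  # advance_to_interview
print(route_candidate(61.0))  # not_selected
```

The efficiency claim in the interview rests on exactly this kind of automation: the assessment score, not a recruiter's read of a resume, drives the routing.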
[00:10:32] What does that do for bias? People are worried or talk about their concern with hiring managers looking at resumes and sort of inputting their own natural bias into their evaluation. Do assessments and AI help get around that, or are they just another flavor of it? Oh, they absolutely get around it.
[00:10:59] So assessments are selected based on their relevance to the job, right? So assessments that measure job-relevant skills provide that clear basis for the use of the tools in the first place, and they really remove biases completely. Because it's no longer idiosyncratic judgments or reviews of how a candidate stacks up relative to job requirements.
[00:11:23] Rather, it's objectively developed test questions and indicators of the relevant job skills and competencies that determine whether the candidate is suited for the job. So that really ameliorates a lot of the concerns and biases that we've had in the past. And that's traditional assessment, right? I mean, so you can imagine simplistic assessments, you know, a knowledge test, right? So does somebody have the data science knowledge that's needed for a job?
[00:11:50] You know, we can come up with very objective indicators of that. And, you know, in a more sophisticated way, AI is being increasingly applied to assessments to capture and score things like work samples, right? So imagine, like, a coding assessment in which a candidate has to demonstrate that they can write a piece of code to solve some kind of business need or challenge, or write to requirements.
[00:12:18] AI can be applied to open-ended code or text to judge the quality, the efficiency, whether the code compiles or not, et cetera. There's a whole range of things that AI can do for us in the open-ended assessment realm. And again, it's objective. It's, you know, the AI-based assessments are trained based on large data sets. We know what good looks like and what that correlation is with future job performance.
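One of the objective signals mentioned here, whether a submission actually runs and satisfies requirements, can be checked mechanically. This sketch stands in for that single signal only; the task, function name, and tests are hypothetical, and it is not the trained AI scoring of quality and efficiency the speaker describes:

```python
import os
import subprocess
import sys
import tempfile

def objective_code_check(submission: str, test_snippet: str) -> bool:
    """Run a candidate's code plus a test snippet in a subprocess.

    Captures one objective signal: does the code run and pass the tests?
    Production AI scoring would layer learned quality and efficiency
    judgments on top of checks like this.
    """
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(submission + "\n" + test_snippet)
        path = f.name
    try:
        result = subprocess.run([sys.executable, path],
                                capture_output=True, timeout=10)
        return result.returncode == 0
    finally:
        os.unlink(path)

# Hypothetical work-sample task: deduplicate a list, preserving order.
submission = "def dedupe(xs):\n    return list(dict.fromkeys(xs))"
tests = "assert dedupe([1, 1, 2]) == [1, 2]"
print(objective_code_check(submission, tests))  # True
```

Because the pass/fail criterion is fixed in advance, every candidate's submission is judged against the same yardstick, which is the objectivity argument being made in the interview.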
[00:12:47] And it allows for a much more objective evaluation than we have really ever had with more subjective self-report methods and resume screens. So shifting gears a bit, are there signs or any indications that you can see that candidates are using AI and, you know, possibly using it as a way to, I don't want to say game the system, but, you know, to assist them with their completion of assessments and their positioning?
[00:13:46] And so, you know, there's a lot of things that are available in the open-ended systems, et cetera. But you're right. The other side of the coin is that those tools are known and candidates will use them. And so, from a test security perspective, a cheating mitigation perspective, there are several things that we do, and that we advise the organizations we work with to do, to mitigate some of those risks. And, you know, there's a lot that can be done in the prevention realm.
[00:14:10] So in high stakes testing applications, say, for instance, we have functionality within our online assessment platform at PDRI that really minimizes candidates' ability to sort of navigate to other browsers or tabs, open up their favorite AI tool, things of that nature, and try to, you know, essentially cheat during the assessment.
[00:14:34] We also monitor assessment scores over time to try to flag things that may be indicative of higher rates of cheating. And we feed that information back to our client customers. So that's kind of been the more traditional way that, you know, cheating of any kind has been monitored and mitigated ever since, you know, the advent of online assessment. You know, I mean, we've been at this for 25 years doing unproctored online assessment.
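Monitoring assessment scores over time for signs of cheating can be as simple as a drift test on recent means. The z-threshold and the one-sided "scores drifting up" rule in this sketch are illustrative assumptions, not PDRI's actual method:

```python
from statistics import mean, stdev

def flag_score_drift(baseline: list[float],
                     recent: list[float],
                     z: float = 2.0) -> bool:
    """Flag when the mean of recent scores sits unusually far above the
    historical baseline, one simple indicator (among many) that something
    like widespread AI-assisted cheating may be inflating results."""
    mu, sigma = mean(baseline), stdev(baseline)
    # Standard error of the recent mean under the baseline distribution.
    se = sigma / len(recent) ** 0.5
    return mean(recent) > mu + z * se

baseline = [70, 72, 68, 71, 69, 70, 73, 67, 71, 70]
print(flag_score_drift(baseline, [78, 80, 79, 77]))  # True: suspicious jump
print(flag_score_drift(baseline, [70, 71, 69, 70]))  # False: within normal range
```

A flag like this wouldn't prove cheating on its own; as the interview notes, it's information fed back to client organizations for follow-up.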
[00:15:00] And we've known, you know, organizations are aware that some degree of risk comes along with that because the person taking the test may not be the person applying to the job or they may be using outside tools. But by and large, organizations have been willing to accept that risk because of the significant cost and time savings and efficiency. So in my view, AI is just sort of the next iteration of the risks associated with unproctored testing.
[00:15:27] And where the assessment field is really going is incorporating generative AI use into the assessment process. You know, as AI becomes more embedded in all of our lives and all of our just how we function at home and at work, we're assuming that candidates will use AI to complete an assessment. So we're incorporating that into our assessment design principles.
[00:15:52] So in essence, we're looking at how well do they generate prompts and query the AI, you know, as a tool or as a, you know, an assistant effectively in helping them complete a task. That's a job relevant skill that we can measure in an assessment, right? And in higher level sort of thinking skills. So can they do the critical thinking that's necessary for the job with the assistance of AI? So that's really where things are going with AI infused assessment.
[00:16:20] How are AI's capabilities changing your approach to product development? Are there capabilities that it's bringing to the table that you've thought about in the past but haven't been able to figure out how to do, or is it bringing completely new ideas to you? Oh, it's changing the game. I would say as an assessment practitioner, you know, I've been doing this for over 20 years, developing and validating and implementing assessments in large organizations.
[00:16:50] It's changing the game. You know, historically as a field, we've leveraged subject matter experts a lot. You know, when we go to develop a valid assessment and go to develop, you know, job-relevant questions, we will get what we call SMEs in a workshop, multiple workshops, multi-day workshops, and tap their knowledge, you know, for specialized job skills
[00:17:16] and what it means to have a solid assessment of a particular skill. Organizations are hard pressed these days to corral dozens of SMEs, you know, for a multi-hour, multi-day workshop to conduct that kind of activity. We've been doing research over the past year and have put AI-developed test questions alongside
[00:17:40] human-generated test questions. And in a blind comparison, humans evaluated the AI-generated questions higher in quality than the human-generated questions. So the AI is really outperforming the human in terms of the quality of the output of suitable test questions. Now, there's caveats to that, right? I mean, the human is absolutely in the loop in that process. You know, we structure the way in which the AI is used, the way it
[00:18:10] checks itself. Humans are reviewing the questions that come out of that AI process, et cetera. So I feel like it's a little misleading to say, ah, you know, AI is just doing a better job. It's with a lot of care and attention and design from the humans that that result occurs. So that's just one example of how it's changing things. There's, again, just so much efficiency that AI can help us with to more scalably and nimbly develop, update, and roll out assessments
[00:18:38] in ways that we haven't had in the past. So I anticipate that it's really paving the way for larger scale and better uses of assessments than we've seen in the past, because there's kind of been this upfront barrier of sorts, you know, in terms of, we need people to take the assessments. We need your SMEs to help us develop them. We need your SMEs to help decide how to score them, et cetera. Like, it's a very resource-intensive process
[00:19:08] that will reduce dramatically with the introduction of AI. Tracy, thank you very much. It was great to talk with you. Appreciate your time. And I hope sometime you'll come back and we can talk some more. Absolutely. Thanks so much, Mark.
[00:19:34] My guest today has been Tracy Kantrowitz, the chief professional services and product officer of PDRI. And this has been PeopleTech, the podcast of WorkforceAI.news. We're a part of the WRKdefined Podcast Network. Find them at www.wrkdefined.com. And to keep up with AI technology and HR,
[00:19:58] subscribe to WorkforceAI today. We're the most trusted source of news in the HR tech industry. Find us at www.workforceai.news. I'm Mark Pfeffer.