Bob is joined by Dr. Charles Handler to discuss various topics related to AI, skills-based hiring, and the challenges of keeping up with information overload. They explore the importance of defining skills in skills-based hiring and the need for verification at scale. They also touch on the value and risks of AI in the hiring process, particularly in the assessment phase. Dr. Handler shares his thoughts on reducing friction in hiring, the paradox of AI, and the potential future of assessment with automated inferential assessments and high-fidelity simulations. The conversation explores the challenges and potential of AI adoption and responsible AI practices. Bob and Charles discuss the timeline for mass adoption of AI, the role of AI in simulations and training, the need for policies and governance around AI, and the impact of AI on diversity, equity, and inclusion. They emphasize the importance of understanding the technical aspects of AI and recommend educational resources for building AI literacy.
Keywords
AI, skills-based hiring, information overload, verification, friction reduction, accuracy, fairness, assessment, inferential assessments, simulations, AI adoption, responsible AI, mass adoption, training, policies, governance, diversity, equity, inclusion, AI literacy
Takeaways
- Defining skills is crucial in skills-based hiring to ensure a shared understanding of what constitutes a skill.
- Verification of skills at scale is a challenge in skills-based hiring, and finding a solution for this is essential for accuracy.
- AI can reduce friction in the hiring process but may come at the cost of accuracy and fairness.
- Machine learning has been used in ATSs for resume parsing and matching, but there is still room for improvement.
- The future of assessment may involve automated inferential assessments based on digital exhaust and high-fidelity simulations.
- Mass adoption of AI may take time, but experimentation and progress are happening.
- AI can be used in simulations and training, such as military simulations and role-playing scenarios.
- Policies and governance around AI are necessary to ensure responsible AI practices.
- AI has the potential to impact diversity, equity, and inclusion in hiring and talent acquisition.
- Building AI literacy is important for understanding the technical aspects of AI and its implications.
Sound Bites
- "Skills-based hiring is about accuracy and fairness."
- "There will be a sinkhole in assessment with automated inferential assessments and high-fidelity simulations."
- "Is AI a fad? This is not crypto. This is here to stay."
- "The future is people training the agents how to have good competencies."
- "The baseline of where we are in the moment is still incredible and game-changing."
Chapters
00:00 Introduction and Background
03:03 Challenges of Keeping Up with AI and Information Overload
10:40 Skills-Based Hiring and the Half-Life of Skills
21:58 The Value and Risks of AI in the Hiring Process
22:18 AI's Role in Reducing Friction and the Paradox of Accuracy and Fairness
24:47 The Future of Assessment: Automated Inferential Assessments and High-Fidelity Simulations
25:59 The Timeline for Mass Adoption of AI
27:31 AI in Simulations and Training
34:18 The Need for Policies and Governance around AI
43:18 AI's Impact on Diversity, Equity, and Inclusion
48:13 Building AI Literacy
Dr. Charles Handler: https://www.linkedin.com/in/drcharleshandler/
Rocket-Hire: http://www.rocket-hire.com
Psych Tech @ Work: https://open.spotify.com/show/2euPiLjmRMoce5cvh5S3Jp?si=30050c48046e452d
For advisory work and marketing inquiries:
Bob Pulver: https://linkedin.com/in/bobpulver
Elevate Your AIQ: https://elevateyouraiq.com
Powered by the WRKdefined Podcast Network.
[00:00:00] Hey, you with the podcast in your ear!
[00:00:02] Just a moment.
[00:00:03] Have you already activated your mobile happy hour?
[00:00:05] It gives you unlimited data for at least one hour every day.
[00:00:10] It's brand new, for everyone who now gets mobile plus internet at home and on TV:
[00:00:16] Magenta 1 customers can activate the mobile happy hour in the My Magenta app
[00:00:21] and off you go.
[00:00:22] On the 5G network of Telekom.
[00:00:23] So don't wait: get the My Magenta app from Telekom and activate it now.
[00:00:39] Hey, it's Bob Pulver.
[00:00:41] In this episode of Elevate Your AIQ, I sat down with Dr. Charles Handler, President and Founder
[00:00:46] of RocketHire and host of the PsychTech at Work podcast.
[00:00:50] Charles and I delve into the intersection of AI, skills-based hiring, and the challenges
[00:00:54] of managing information overload.
[00:00:56] We discuss the critical role of defining skills in the hiring process, the balance between
[00:01:00] AI's benefits and risks, and the future of automated assessments and simulations.
[00:01:05] Charles also highlights the importance of policies for responsible AI use and the impact
[00:01:10] of AI on diversity and inclusion.
[00:01:12] Tune in to gain insights into AI's evolving role in HR and how to navigate this transformative
[00:01:17] technology in order to, you guessed it, elevate your AIQ.
[00:01:22] Thanks for listening.
[00:01:24] Hello everyone, welcome to another episode of Elevate Your AIQ.
[00:01:28] I'm your host Bob Pulver.
[00:01:29] With me today is my friend Dr. Charles Handler.
[00:01:32] Charles, how are you today?
[00:01:34] I'm doing great.
[00:01:35] Thanks so much for inviting me to be on the show, Bob.
[00:01:37] We always have such good, meaningful and interesting conversations.
[00:01:42] So I'm looking forward to that today.
[00:01:43] Absolutely.
[00:01:44] I appreciate you being here.
[00:01:45] Why don't you give the audience just a quick little background of just what you've been
[00:01:50] doing, how you wound up where you are today.
[00:01:53] And yeah, always interesting to hear the path that you've taken.
[00:01:56] Well, my mom met my dad in graduate school.
[00:02:00] So I have a doctorate in industrial organizational psychology and for, I don't know, 30 years,
[00:02:07] I've specialized in predictive hiring tools,
[00:02:10] as I call them now.
[00:02:11] And I've worked in a lot of capacities.
[00:02:13] I have somewhat of a unique perspective just because I've worked in so many angles of
[00:02:18] this thing.
[00:02:18] So I have a pretty holistic view, from legal compliance to the technology side of it,
[00:02:23] which I'm really passionate about to just looking at the overall market as a market
[00:02:28] analyst and to helping enterprise companies on a global level create strategy
[00:02:35] and implement programs.
[00:02:38] I do a lot of the standard job analysis and validation work, building assessments.
[00:02:43] And I like that stuff, but I really also love the innovation.
[00:02:46] And I'd say innovation and nowadays AI, that's really been my passion area and
[00:02:51] where I've been advising and learning constantly, trying to keep up.
[00:02:56] I don't know if it's just folks like us who have done a lot of different things in
[00:03:00] their career that makes it harder to keep up with everything or that's
[00:03:05] something that everyone goes through because we're all experiencing information
[00:03:09] overload.
[00:03:09] But I feel like sometimes being a generalist and doing a lot of different
[00:03:14] roles makes it worse just because you find a lot of different things of interest
[00:03:20] and also you start to connect dots across industries and domains that maybe
[00:03:26] other people just don't see.
[00:03:28] Yeah, I think no matter who you are and what you do, if you're paying attention
[00:03:33] to the world of AI and AI in your own field, I mean, I guess in a general
[00:03:38] or higher sense, we all have so much information coming at us all the time
[00:03:45] that it becomes very difficult to be able to keep up with anything.
[00:03:50] It's overwhelming and I subscribe to so many different, you know,
[00:03:55] Substacks and newsletters and stuff.
[00:03:57] And I feel like I skim those and I feel like I find great nuggets
[00:04:02] and I feel lucky.
[00:04:03] I'm like, oh, I'm glad I didn't miss that one.
[00:04:06] But yet there are so many that I probably did miss.
[00:04:09] So it's all you can do is absorb what you can, find the people
[00:04:14] and the outlets that you like the most, you know, and kind of stay tuned to those
[00:04:18] things. And so I read a lot about just general AI and large language models
[00:04:22] specifically. I'm trying, I'm not trying.
[00:04:24] I am learning and absorbing, you know, like a sponge and I'll read technical
[00:04:28] papers that push my understanding and I don't even read the whole thing
[00:04:32] because I'm like, all right, I get the nature of this.
[00:04:35] There's 15 pages of prompting and stuff that's going on.
[00:04:38] But but just really working hard to stay on top and the large language
[00:04:42] model stuff changes so quickly.
[00:04:45] And even if you just use it in chat, you know, you can experience it by just
[00:04:49] using ChatGPT. You know, 4 Omni to me is amazing.
[00:04:54] It's getting better and better.
[00:04:56] Like it's doing a lot of really good things.
[00:04:58] And I've discovered lately a really good way to manage information overload
[00:05:03] is Perplexity.
[00:05:05] I subscribe to Perplexity Pro.
[00:05:07] So I've almost completely replaced my Googling with Perplexity.
[00:05:12] But Google, Perplexity and ChatGPT all have their own unique things.
[00:05:19] They do best.
[00:05:20] So but I feel like when I need to research a topic, perplexity summarizes
[00:05:26] and pulls a bunch of articles with links and it's much more current.
[00:05:30] And I love it.
[00:05:31] So I would recommend that to anybody.
[00:05:32] I'm going to start my own Substack.
[00:05:34] So I've been kind of studying what Substack's all about.
[00:05:37] There's an overwhelming number of awesome authors on there.
[00:05:41] And one of the interesting models you see on there is people
[00:05:45] monetizing their content through paid subscriptions and stuff.
[00:05:49] I'm not quite ready to do that.
[00:05:50] I'm still trying to figure out, Bob, a name for my Substack.
[00:05:53] I'm stuck on the name.
[00:05:54] I have all this content ready to go, and I can't think of a good name.
[00:05:57] So maybe we can brainstorm on that.
[00:06:00] So this is separate from Rocket-Hire.
[00:06:02] It wouldn't be just.
[00:06:03] Yeah, I guess I forgot to mention thanks for prompting me.
[00:06:06] I forgot to mention that.
[00:06:08] Yeah, I have a company, Rocket-Hire, and we've been in business
[00:06:10] for 22 years. Basically everything I described in what I do is done
[00:06:16] within the context of Rocket-Hire.
[00:06:19] But it's really pretty much me and some associates.
[00:06:21] So as far as my market-facing brand, I'm really pushing on the
[00:06:26] Substack and the content stuff myself above anything.
[00:06:30] So there's a lot of choices to be made there.
[00:06:33] But anybody can be a content creator now.
[00:06:36] And there's a lot of good stuff that people have to say.
[00:06:39] Right.
[00:06:39] So I don't know.
[00:06:41] How do you keep up with it?
[00:06:42] I mean, do you have some similar experiences to mine?
[00:06:45] I do.
[00:06:45] I was actually just talking to a couple of guys yesterday about the
[00:06:49] fact that I signed up for all these different newsletters,
[00:06:55] you know, the Substacks and Mediums.
[00:06:55] And so now, and of course, that content doesn't just reside there.
[00:06:59] I get the nudges through emails and now my inbox is filled
[00:07:03] with all these newsletters.
[00:07:05] And you're right.
[00:07:05] The problem is I need curation on top of curation because
[00:07:11] it is sort of serendipitous.
[00:07:13] If I click on one, they happen to put in a good catchy title or lead headline
[00:07:19] and I'll go in and then three scrolls down.
[00:07:22] I see something that piques my interest or whatever.
[00:07:25] And so I'm reluctant to unsubscribe to any of these.
[00:07:29] They're all like you said, there's some really good writers
[00:07:32] and really bright people who are thinking about things in different ways.
[00:07:37] And so it is hard.
[00:07:39] I mean, one of the tools I used to love just to try to keep up with social
[00:07:42] feeds and social information.
[00:07:44] I think this came out right around the time that Flipboard came out,
[00:07:47] which is another great tool that I still use.
[00:07:49] But there was a tool called Nuzzle, this entrepreneur, Jonathan Abrams,
[00:07:53] came up with it, basically using your social network and your social graph.
[00:07:57] You could look at the things that you've identified that you find of interest
[00:08:01] and then look at your first degree connections and the things they flagged
[00:08:06] of interest. And then if you have enough time in your day,
[00:08:09] you could go beyond that to a second degree or whatever.
[00:08:12] But yeah, at least you could kind of gauge and sort of only commit
[00:08:16] to the things that you thought were most relevant to you.
[00:08:20] But I think to your point, there is a lot.
[00:08:22] And even for those folks who are just trying to keep up,
[00:08:26] even without getting into the technical stuff as you're doing,
[00:08:30] it's still a lot because there's an abundance of choice.
[00:08:34] Like you, I mean, I updated my Chrome default search engine to be
[00:08:38] Perplexity. So that way I don't really get any of the nonsense
[00:08:43] that was part of PageRank and all the SEO content.
[00:08:48] And it's much more around here's what I think you were asking for.
[00:08:52] And it tells me the sources and I can single click.
[00:08:55] I see top five sources of where this information came from and what have you.
[00:09:01] So I've found it to be invaluable.
[00:09:03] And I always felt like it was only a matter of time before search engines,
[00:09:07] not necessarily their demise, but certainly a significant challenge
[00:09:11] to the way that we discover information and knowledge.
[00:09:17] And so, yeah, so I think that whole market is ripe for disruption.
[00:09:23] It has been. Search is dead, by the way.
[00:09:25] And I want to plug one more thing real quickly that I just thought of.
[00:09:29] My very favorite thing to listen to now is something called This Week in Tech, or TWiT.
[00:09:34] It's about a two-and-a-half-hour-long panel discussion
[00:09:39] led by this guy, Leo Laporte.
[00:09:42] But man, oh man, I am like addicted to it.
[00:09:46] I learned so much and there's so many good.
[00:09:48] I don't know if you ever heard of it before, but I highly recommend it.
[00:09:52] It's like every week on Monday, it's three-some hours, but it's during my commute,
[00:09:57] which is about 20 minutes.
[00:09:59] I listen to a little bit of it every time.
[00:10:01] So anyway, highly recommend that to your listeners.
[00:10:04] Yeah, I've heard of it.
[00:10:05] I haven't actually listened to it, but it sounds like it's pretty informative.
[00:10:09] Yeah, it is.
[00:10:10] I guess if we correlate sort of absorbing all this information
[00:10:14] and translating that to actual sort of skills, do you think about,
[00:10:19] you know, there's a lot of the chatter these days, at least from the marketing teams
[00:10:24] of solution providers is around skills based hiring, skills based organizations.
[00:10:29] So with you with your background in psychology and, you know,
[00:10:34] behavioral and psychometric assessments, I mean, how do you think about
[00:10:38] the half life of skills and how are we properly sort of assessing skills?
[00:10:44] Because I feel like you and I, you know, being on sort of the back nine
[00:10:48] of our careers, you know, we're we're doing a lot of knowledge work.
[00:10:52] We interact with a lot of knowledge workers.
[00:10:55] But if knowledge is is that your fingertips and you're, you know,
[00:11:00] codifying, you know, knowledge and feeding it into these AI tools or whatever.
[00:11:05] I mean, at some point you've got to say, well,
[00:11:09] if I know where to find the information, even if I don't have it in my own
[00:11:12] human brain, you know, do do AI skills and the ability to not just
[00:11:17] know about it in terms of like AI literacy, but to practical use, you know,
[00:11:24] to make yourself more effective and efficient, making better decisions,
[00:11:28] things like that. I mean, it seems like at some point, you know,
[00:11:31] having the skills from, you know, the actual knowledge and expertise
[00:11:35] that you've acquired over potentially decades.
[00:11:38] Absolutely. Well, I have a lot to say about skills-based hiring,
[00:11:41] probably more than we can even cover here in this conversation,
[00:11:46] right, during the time we have.
[00:11:48] But what I would say is the very first thing you have to do in
[00:11:53] skills based hiring is define what a skill is.
[00:11:56] And we all have to share that same definition.
[00:11:59] Otherwise we're calling things skills that may not be skills or maybe
[00:12:03] everything that we're using as a signal for a job is a skill.
[00:12:06] It could be a hard skill, a soft skill.
[00:12:10] A lot of times you see knowledge slash skills, right?
[00:12:13] So sometimes knowledge is assumed to be a skill or that you have a skill
[00:12:17] because you have the knowledge.
[00:12:18] So one of the things for people who are steeped in measurement,
[00:12:23] which I am, is that you have to define very clearly and objectively
[00:12:27] what it is you're going to measure so everybody's on the same page.
[00:12:31] So first of all, I don't think we've done that as to what a skill is, right?
[00:12:36] Secondly, you know, there's different ways, right?
[00:12:39] So think about reducing friction in the hiring funnel.
[00:12:43] And you think about basically parsing a person out into a bunch of labels
[00:12:48] that you can compare them to other labels that would say they're good at a job
[00:12:53] or, you know, the fit for an organization or whatever you do.
[00:12:57] You know, there's a lot of friction there.
[00:12:58] So we've got inferential skills, right?
[00:13:01] We've got a cloud of words, ontologies.
[00:13:04] We take a job description.
[00:13:05] We take a resume.
[00:13:07] We may take a preponderance of stuff that's digital exhaust for a person,
[00:13:12] you know, on the web and infer skills.
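The inferential approach Charles describes, matching a resume against a job description through a shared skills vocabulary, can be sketched in a few lines. Everything below (the tiny ontology, the signal words, the scoring) is invented for illustration; real systems use large curated taxonomies and statistical matching rather than a word lookup:

```python
# Toy sketch of inferential skills matching: infer skills from free text by
# matching words against a small shared ontology, then score the overlap
# between a resume and a job description. The ontology is invented.

ONTOLOGY = {
    "python": {"python", "pandas", "numpy"},
    "sql": {"sql", "postgres", "mysql"},
    "communication": {"presented", "wrote", "communicated"},
}

def infer_skills(text: str) -> set[str]:
    """Return every ontology skill whose signal words appear in the text."""
    words = set(text.lower().split())
    return {skill for skill, signals in ONTOLOGY.items() if words & signals}

def match_score(resume: str, job_desc: str) -> float:
    """Fraction of the job's inferred skills that the resume also shows."""
    required = infer_skills(job_desc)
    return len(required & infer_skills(resume)) / len(required) if required else 0.0

resume = "Built dashboards with python and numpy and presented results to leadership"
jd = "Seeking analyst with sql and python experience"
print(match_score(resume, jd))  # 0.5: python inferred from both, sql only from the job
```

Nothing here is verified, of course; the score inherits whatever noise and embellishment the source texts contain, which is exactly why the verification step matters.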
[00:13:14] And you've got to do that in some sense because of the friction
[00:13:18] and how painful that friction is.
[00:13:20] But the whole thing can't work in my opinion unless you can verify those skills,
[00:13:25] right? And that's where it gets harder.
[00:13:27] So how do you verify skills at scale without any input from candidates
[00:13:32] without asking them to do anything?
[00:13:33] Because ultimately that's where the friction begins is,
[00:13:37] OK, now we've got to get 5,000 candidates to sit for a skills test.
[00:13:42] Nobody likes taking tests.
[00:13:43] So we're not going to be able to do that verification.
[00:13:46] And without that, we lack, you know, some accuracy.
[00:13:49] So whoever figures that out how to verify at scale will be, you know,
[00:13:53] a really well regarded person or a company or whatever.
[00:13:57] But, you know, at the same time, it's there's something like credentialing,
[00:14:02] right? So about 10 years ago, I don't know.
[00:14:05] Maybe it's more or maybe 12 years ago, credentialing was a big thing
[00:14:09] everybody was talking about, like, oh, we'll be able to verify skills of these credentials.
[00:14:13] And again, it's the same thing as defining a skill
[00:14:16] without kind of a universally accepted thing like MCSE, right?
[00:14:20] Microsoft Certified, what is it?
[00:14:22] Systems Engineer or Solutions Expert, whatever, right?
[00:14:25] Those are things that are credible courses and you should know your stuff
[00:14:28] if you get one. So those are great.
[00:14:30] And everybody kind of knows those, but we don't have.
[00:14:34] I was just looking up, you know, skills, verification, credentialing companies today
[00:14:39] on ChatGPT, asking it to list them for me.
[00:14:42] And before we move on, I need to let you know about my friend Mark Feffer
[00:14:47] and his show, People Tech.
[00:14:50] If you're looking for the latest on product development, marketing, funding,
[00:14:54] big deals happening in talent acquisition, HR, HCM,
[00:14:59] that's the show you need to listen to.
[00:15:02] Go to the WRKdefined network, search up People Tech.
[00:15:05] Mark Feffer, you can find them anywhere.
[00:15:10] You know, there's a lot of them.
[00:15:11] There's a lot I hadn't heard of.
[00:15:13] Everybody's kind of jockeying for the same thing to be the universal standard.
[00:15:18] I haven't seen credentialing go like I thought it would
[00:15:21] because it makes so much sense to me.
[00:15:23] You do have it in like any kind of technical medical field, the legal field.
[00:15:28] You have to take, you know, a very, very high quality,
[00:15:32] well-refereed or proctored, you know, exams to keep yourself current.
[00:15:37] So that's very valuable.
[00:15:39] But again, that's not the preponderance of the population.
[00:15:42] So the one last thing I'll say about skills-based hiring,
[00:15:46] it's very easy for companies to say they're going to do it.
[00:15:49] And by gum, it has a lot of benefits, you know, from an equity,
[00:15:54] fairness, hidden workforce you can discover.
[00:15:57] Good, good stuff.
[00:15:58] But organizations struggle to really implement it at scale.
[00:16:02] Part of it for the reasons I talked about.
[00:16:04] But I think there's also a, hey, we're going to adopt this kind of like
[00:16:08] DE&I was, you know, 15 years ago.
[00:16:11] We're just going to show you some videos.
[00:16:13] We're going to say we're doing it or not.
[00:16:15] And so I had the pleasure to work on a project and the frustration
[00:16:20] about, I don't know, seven or eight years ago
[00:16:23] called the Essential Competencies Project.
[00:16:25] And it was funded by the conference board.
[00:16:27] And I was working with somebody, we canvassed, I don't know,
[00:16:31] 20 enterprise companies.
[00:16:33] We ended up with like five.
[00:16:34] And the whole project was we're going to get an assessment provider.
[00:16:39] We're going to plug it into your hiring funnel for a particular job or jobs.
[00:16:44] And you're not going to see a resume.
[00:16:46] You're just going to see the assessment results.
[00:16:47] And then you're going to move that person forward based on those results
[00:16:51] in the initial ATS screening stuff, you know,
[00:16:54] that's more neutral.
[00:16:56] And it didn't work.
[00:16:57] We couldn't get one company, even though they had high level people
[00:17:01] who signed off when we got into the guts of, OK, now we've got to modify
[00:17:05] your hiring process.
[00:17:06] You can't use this tech stack you were always using.
[00:17:09] Your recruiters have to do this instead.
[00:17:12] Nobody did it.
[00:17:13] You know, that was a while back, but it taught me a real valuable lesson.
[00:17:18] Even your leadership can want to do it.
[00:17:21] The change management is.
[00:17:22] And we know HR technology is entrenched, man.
[00:17:26] It's hard to get it changed.
[00:17:29] So anyway, that's my long winded take.
[00:17:31] Like I said, I could talk forever about it.
[00:17:33] I guess part of what's frustrating about that to me is on some level,
[00:17:37] I know that that's the right approach and it's disappointing
[00:17:41] that we haven't been able to fully, you know, figure that out
[00:17:44] and flush that out.
[00:17:45] But you're right.
[00:17:46] I mean, if you can't validate the skills, then you could pretty much
[00:17:48] say that without the credentialing, like a standard credential,
[00:17:52] you're subject to potential, I guess, fraud, over-embellishing
[00:17:59] on what they're capable of doing.
[00:18:02] And even with some of the ways that some companies have approached it,
[00:18:07] through evaluation or other things, whether this person really has this level
[00:18:11] of this skill just gets really muddy.
[00:18:15] And then you fall prey to some of the human biases that are plaguing
[00:18:20] the entire talent lifecycle.
[00:18:22] So it can be tricky, but, you know, I guess the other big thing
[00:18:26] about why it's disappointing to me is I don't like the cat and mouse
[00:18:29] game of job descriptions to resumes because everything just
[00:18:35] gets foggy and manipulated.
[00:18:38] And it just seems kind of silly.
[00:18:40] I mean, I have had the chance to basically do like a rich text
[00:18:45] kind of assessment.
[00:18:47] And not only did they not ask for my resume, but it replaced
[00:18:50] the recruiter phone screen and probably derived insight from at
[00:18:56] least the first round interview.
[00:18:59] So in theory, it's a better way of assessing someone's
[00:19:02] potential to succeed on that team and in that role.
[00:19:06] You know, if only we could pass some of these not insignificant
[00:19:11] hurdles. Yeah, 100 percent, you know, hiring is about in my mind,
[00:19:16] it's about two things, accuracy and fairness and the extent that
[00:19:20] you have a signal and that signal is predictive of some kind
[00:19:24] of outcome that you want.
[00:19:26] The more noise that's there, the less of a
[00:19:30] direct fit there is, the less accuracy you have.
[00:19:34] Often then what gets substituted is biased predictors,
[00:19:39] biased signal, and then you lose your fairness.
[00:19:41] So we're always juggling that over my whole career and over the
[00:19:44] last, you know, 50 years of IO psychology, we're juggling
[00:19:48] fairness and accuracy all the time, trying to make sure we have
[00:19:52] both without compromising either.
[00:19:54] It's always hard and it applies to any kind of hiring,
[00:19:57] skills based hiring, interviewing, you name it, those are
[00:20:01] the, you know, those are the guiding stars.
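One concrete way the fairness half of that juggling act gets checked in practice is the four-fifths (80%) rule from US selection guidelines: if any group's selection rate falls below 80% of the highest group's rate, adverse impact is flagged. A minimal sketch with invented applicant numbers:

```python
# Four-fifths (80%) rule check for adverse impact in a selection process.
# outcomes maps each group to (number hired, number who applied); a group's
# impact ratio is its selection rate divided by the highest group's rate.
# The numbers used below are invented for illustration.

def impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    rates = {g: hired / applied for g, (hired, applied) in outcomes.items()}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

def flag_adverse_impact(outcomes: dict[str, tuple[int, int]]) -> list[str]:
    """Groups whose impact ratio falls below the 0.8 threshold."""
    return [g for g, ratio in impact_ratios(outcomes).items() if ratio < 0.8]

outcomes = {"group_a": (30, 100), "group_b": (18, 100)}
print(flag_adverse_impact(outcomes))  # ['group_b']: 0.18 / 0.30 = 0.6 < 0.8
```

Accuracy (validity) has to be tracked alongside a number like this; the tension described above is that optimizing either one alone tends to degrade the other.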
[00:20:03] With that, I mean, as AI has sort of entered the picture,
[00:20:07] you could argue, ATSs have had some level of AI and machine learning in
[00:20:10] their matching algorithms, et cetera, for quite a while.
[00:20:15] But I guess in modern terminology where AI is used more
[00:20:20] much more sort of broadly and generically, you know, how are
[00:20:24] you thinking about, I guess, both the value and the
[00:20:27] opportunity as well as the risks when it comes to having
[00:20:31] AI as part of the hiring process?
[00:20:34] Well, I mean, it's inevitable, right?
[00:20:36] I mean, there's so many advantages and it's all about
[00:20:38] reducing friction, which is lots of people, hopefully, that
[00:20:42] you want to evaluate and finding the very best ones,
[00:20:47] finding that best fit, excuse me, like we were talking
[00:20:49] about. So that I was just writing about this.
[00:20:53] I mean, AI is great at reducing friction in that
[00:20:55] it can go out there without anybody overseeing it and
[00:20:59] say, I'm delivering you the best matches or I'm going
[00:21:02] to go out and find these people or I'm going
[00:21:04] to screen these people. But what's the substance
[00:21:06] it's using? I mean, if you're using skills on
[00:21:09] top of these so-called empty calories, that's
[00:21:12] difficult. And, you know, if you're using resumes and
[00:21:15] job descriptions, you get into this kind of garbage
[00:21:18] in garbage out situation. So you're getting
[00:21:21] reduction of friction, but at the cost of accuracy
[00:21:23] and fairness. And that's the paradox of AI, in my
[00:21:26] opinion is like it can make things so much faster
[00:21:30] and easier for us. But that doesn't always
[00:21:33] mean that it's achieving other objectives, right?
[00:21:35] It's because you can't always have it all.
[00:21:39] And so I think we'll get better with that.
[00:21:42] You know, in some sense, machine learning is AI.
[00:21:44] We've been using machine learning for a long time
[00:21:46] as you alluded to, you know, ATSs have been
[00:21:49] using that with resumes. I mean, from the very
[00:21:51] beginning of resume parsing and matching
[00:21:55] in the early 2000s, once Monster and job
[00:21:58] boards happened, companies had no infrastructure
[00:22:01] or ability to deal with that thing happening.
[00:22:04] Right? There were just so many resumes coming in.
[00:22:06] So you could log into Monster and manage your stuff,
[00:22:09] but it wasn't really going into your ATS or anything.
[00:22:11] It was a difficult, you know, situation.
[00:22:14] So technology has arisen to the challenge,
[00:22:17] but it's still a challenge because friction
[00:22:20] reduction is not accuracy.
[00:22:22] I think AI will get continually better and better.
[00:22:25] And I think we will learn how to train it
[00:22:29] better. So ultimately, it only knows what we know.
[00:22:31] So if we're training it with bias, then it's going to have bias.
[00:22:34] But since we have the opportunity
[00:22:37] to train it with unbiased information,
[00:22:40] theoretically, we can do that. Right?
[00:22:42] I mean, you get into large language models,
[00:22:45] they've ingested the entire Internet,
[00:22:46] they've ingested your life, my life, etc.
[00:22:50] It gets a lot harder to do that,
[00:22:52] but I still think it's possible.
[00:22:53] And I think that will be working in that direction.
[00:22:57] I believe when it comes to assessment,
[00:22:59] I feel pretty strongly about this.
[00:23:00] I just don't know the time horizon.
[00:23:02] We're going to have what I call a sinkhole
[00:23:05] in the middle of assessment.
[00:23:06] And what's going to happen is we're going to have all these
[00:23:09] automated inferential assessments based on,
[00:23:12] you know, again, digital exhaust
[00:23:17] and information where an applicant doesn't have to do anything.
[00:23:20] And then on the other side of it,
[00:23:21] I believe we're going to have super high fidelity simulation.
[00:23:24] So, oh, you want this job?
[00:23:26] OK, well, you know, log on here and do part of the job
[00:23:29] and you'll have something that looks like you and I doing
[00:23:32] this exact thing and it'll be scored.
[00:23:34] And it'll be highly realistic and people, I think,
[00:23:37] would enjoy that.
[00:23:39] Maybe it's part of an interview, I don't know.
[00:23:41] But but I think we'll have that because it makes so much sense.
[00:23:44] I think sitting there answering personality questions
[00:23:47] on Likert scales, and even multiple choice questions.
[00:23:51] The multiple choice question is an interesting one, Bob,
[00:23:54] because I'm like, you know, it's a very effective way
[00:23:57] to test the body of knowledge.
[00:23:59] I don't know how.
[00:24:00] I think maybe your digital exhaust could do it.
[00:24:03] Maybe the simulations can too.
[00:24:04] But it's it's a very, you know, standard way to go.
[00:24:08] So I think that one will be the hardest one or the one
[00:24:11] with the longest tail to extinguish.
[00:24:13] But what we will be able to do is probably continue
[00:24:15] to up our game with adaptive testing and, you know,
[00:24:19] that kind of stuff where we can have shorter deals
[00:24:22] or we can we can have LLM sit there and role play
[00:24:25] with you and ask questions instead of multiple choice
[00:24:27] or something. I don't know.
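The adaptive-testing idea mentioned here, shorter tests that home in on a candidate's level, can be sketched as a loop that always serves the item closest in difficulty to the current ability estimate. This is a deliberately simplified stand-in: real computer-adaptive tests use item response theory and likelihood-based ability estimates, and the item difficulties and halving step size below are invented:

```python
# Simplified computer-adaptive testing loop: serve the unused item whose
# difficulty is nearest the current ability estimate, nudge the estimate up
# on a correct answer and down on a wrong one, and shrink the step each time.
# A real CAT would use item response theory, not this fixed halving schedule.

def adaptive_test(item_difficulties, answers_correctly):
    ability, step = 0.0, 1.0
    remaining = list(item_difficulties)
    while remaining:
        item = min(remaining, key=lambda d: abs(d - ability))  # most informative
        remaining.remove(item)
        ability += step if answers_correctly(item) else -step
        step /= 2  # smaller corrections as the estimate settles
    return ability

# Simulated candidate with true ability 0.7: answers correctly whenever the
# item is no harder than that level.
print(adaptive_test([-2, -1, 0, 1, 2], lambda d: d <= 0.7))  # 0.6875, near 0.7
```

The payoff is the one Charles points at: far fewer items for the same precision, because easy items are never wasted on strong candidates or hard items on weak ones.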
[00:24:29] But as we know it and we see it now,
[00:24:33] ultimately, it's not going to look like that.
[00:24:35] My guess across the board, 10 years,
[00:24:38] but it's impossible to know.
[00:24:40] I think, you know, just to see how someone,
[00:24:43] you know, operates potentially.
[00:24:46] Not just in a leadership position, but, you know,
[00:24:49] in pressure situations or how they resolve conflict
[00:24:54] and delegate authority, perhaps and things like that.
[00:24:57] It's just really kind of crazy
[00:25:01] where this is going,
[00:25:03] and it's going so fast that even in a couple of years,
[00:25:06] you know, who knows?
[00:25:08] Yeah, yeah.
[00:25:09] So here's one thing I say all the time.
[00:25:12] Just think about it like this.
[00:25:14] If ChatGPT never evolved past the GPT-4o that we're using now,
[00:25:18] it would still be a miracle.
[00:25:20] We would still be able to do all kinds of shit with it
[00:25:22] that would benefit us all.
[00:25:24] So the baseline of just where we are in the moment
[00:25:27] is still incredible and game changing, and it's not going to stop.
[00:25:32] But I think, you know, you listen to these tech shows
[00:25:34] and I think the large language models themselves
[00:25:37] may have some limitations.
[00:25:39] Also, we're going to run out of stuff to train them on,
[00:25:43] although new stuff's always coming out.
[00:25:44] So I don't know if I buy that one.
[00:25:45] The energy and resources it takes to train one of these things
[00:25:49] is insane.
[00:25:51] And so it's not accessible to individuals.
[00:25:54] A small model or a local model or whatever,
[00:25:56] or modifying one, you know,
[00:25:58] retrieval-augmented generation or fine-tuning on an existing model?
[00:26:01] Sure. But creating your own? You don't have the GPUs.
[00:26:05] You don't have that; you need a nuclear power plant.
[00:26:08] So, you know, people say there are other
[00:26:12] ways of structuring the engine
[00:26:15] that's doing the AI, essentially, that may take over.
[00:26:20] And you've also got the agent thing like you're talking about,
[00:26:22] where, you know, it's moved quickly into,
[00:26:26] we're going to have multiple agents, like LangChain.
[00:26:29] I don't know if you're familiar with that.
[00:26:31] You know, that makes so much sense, right?
[00:26:33] You're accomplishing one task with a bunch of AIs or little
[00:26:38] LLMs that are going off and doing individual things
[00:26:40] and coming back and producing a product.
[00:26:42] And so we'll see more and more of that.
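The multi-agent pattern described above, a LangChain-style fan-out, boils down to specialist workers plus an orchestrator. This sketch uses plain Python rather than the actual LangChain API, and the agent functions are stubs standing in for LLM calls:

```python
# Toy multi-agent fan-out: specialist "agents" each handle one subtask,
# and an orchestrator dispatches the work and assembles a single product.
# Each agent function is a stub where a real system would call an LLM.

def summarize_agent(doc):
    # Stand-in for an LLM call that condenses a document to one sentence.
    return doc.split(".")[0] + "."

def keyword_agent(doc):
    # Stand-in for an LLM call that extracts salient terms.
    words = [w.strip(",.").lower() for w in doc.split()]
    return sorted(set(w for w in words if len(w) > 6))

def orchestrate(doc):
    """Fan subtasks out to the specialist agents, then combine the results."""
    return {
        "summary": summarize_agent(doc),
        "keywords": keyword_agent(doc),
    }

doc = "Assessment is changing quickly. Simulations and inferential scoring may replace questionnaires."
product = orchestrate(doc)
```

Swapping each stub for a genuine model call, with the orchestrator itself possibly being a model, is essentially the "bunch of little LLMs going off and coming back with a product" idea.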
[00:26:45] You know, somebody, and I'm not going to claim
[00:26:49] that I made this up, but it blew my mind
[00:26:52] because I never thought about it.
[00:26:53] My friend, Georgi Yankov, he's at DDI.
[00:26:56] He's an I.O., but he's like
[00:26:59] clairvoyant in AI stuff. I think he was saying, yeah, you know,
[00:27:03] the future is people like myself and him.
[00:27:05] I.O.s we're going to be training the agents
[00:27:08] how to have good competencies, right?
[00:27:10] So in other words, we're going to train the agent
[00:27:13] how to display empathy or how to do this complex mix of traits
[00:27:19] and competencies so it can interact more like a human,
[00:27:22] right, because it may not be able to train itself, kind of thing.
[00:27:27] I hope I make sense there, right?
[00:27:29] So we may shift from evaluating people on these things
[00:27:32] to training the LLMs that are going to evaluate the people
[00:27:36] more accurately, transferring our knowledge as
[00:27:39] trained psychologists into the model.
[00:27:42] I mean, I'm doing that a little bit right now,
[00:27:45] training a role-playing LLM, right?
[00:27:48] So we've created a whole structure of dialogue
[00:27:51] that's set up to elicit certain responses,
[00:27:54] poke you to display a certain competency, right?
[00:27:56] Like I get mad at you and I'm a customer, right?
[00:28:00] There it goes, poking you on your empathy
[00:28:02] or your ability to maintain your composure or whatever.
[00:28:06] And then we measure that, score that, with another LLM.
[00:28:09] So that's a simplified version.
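The two-LLM setup described above, one model role-playing an irate customer to elicit a competency and a second model scoring the reply, might be skeletonized like this. Both LLM calls are replaced with hypothetical stubs (scripted probes and a keyword-count scorer); the structure, not the scoring, is the point:

```python
# Minimal sketch of role-play assessment: scripted "customer" probes
# stand in for the role-playing LLM, and a keyword heuristic stands in
# for the scoring LLM that rates each candidate reply for empathy.

PROBES = [
    "This is the third time my order has been wrong. Fix it now!",
    "Why should I believe you? Your company always says that.",
]

def score_empathy(reply):
    """Stand-in for a scoring LLM: count empathy markers in the reply."""
    markers = ("sorry", "understand", "frustrating", "apologize")
    return sum(1 for m in markers if m in reply.lower())

def run_roleplay(candidate_fn):
    """Play each scripted probe to the candidate and total the scores."""
    return sum(score_empathy(candidate_fn(probe)) for probe in PROBES)

# A composed candidate should outscore a curt, defensive one.
calm = run_roleplay(lambda p: "I'm sorry, I understand how frustrating this is.")
curt = run_roleplay(lambda p: "That's not my department.")
```

In a real deployment both stubs would be model calls with rubrics written by people like Charles, which is exactly the "psychologists training the agents" shift he describes.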
[00:28:12] I think we could probably talk about that for at least another hour,
[00:28:15] but it brings up an important point about the fact that
[00:28:18] it's not just that everyone is now a user
[00:28:22] of this type of AI, being generative AI
[00:28:28] because it's really part of the user interaction,
[00:28:31] the user interface, and the experience.
[00:28:34] It's that we all now have the capability to actually build
[00:28:38] essentially new AI sort of personas, if you will,
[00:28:42] also at our fingertips.
[00:28:44] So whether you're a ChatGPT Plus customer
[00:28:46] building custom GPTs, or you're working at an organization
[00:28:49] that's a Microsoft shop and you've got Copilot
[00:28:53] there and you can create custom Copilots,
[00:28:57] or you're on Amazon and you're playing around in PartyRock,
[00:29:01] with just all of these foundation models,
[00:29:04] the producers of these LLMs are giving
[00:29:08] the average person the ability to actually be a builder,
[00:29:11] not just a user.
[00:29:13] It elevates the importance of responsible AI practices,
[00:29:19] which is something I wanted to talk to you about
[00:29:20] because you and I have talked in the past about audits
[00:29:24] and anti-bias or bias mitigating practices
[00:29:28] and things like that.
[00:29:30] So when we think about that,
[00:29:33] I think about how do we give people just enough
[00:29:38] because they don't need to know every acronym
[00:29:40] that we just cited.
[00:29:43] They don't necessarily need to know what RAG is
[00:29:45] or how to fine-tune models on top of that methodology.
[00:29:52] But I do think that one of the core things
[00:29:54] that they do need to know is design, build, test
[00:29:58] and use it responsibly.
[00:30:01] And so I don't know if that means
[00:30:03] like we need compliance training on responsible AI,
[00:30:08] but it seems like for organizations that have been waiting
[00:30:13] to figure some of this out before setting policy
[00:30:17] and figuring out what's acceptable to use,
[00:30:20] which tools, do we let them use it?
[00:30:22] Do we also let them build it?
[00:30:24] There's all kinds of things that need to be considered.
[00:30:28] And so I guess I wonder,
[00:30:30] based on the companies that you've talked to
[00:30:33] and that you're working with,
[00:30:34] I mean, how are they appreciating the magnitude
[00:30:37] of the impact of what we're talking about
[00:30:41] and the need to make sure that we train it the right way
[00:30:46] with transparency and ethics and fairness
[00:30:49] and all of these attributes in mind?
[00:30:52] I think so.
[00:30:54] And I think what we have right now is a little bit of,
[00:30:56] hey, we're not ready to even implement a lot of this.
[00:31:01] So I make a fundamental distinction though too, right?
[00:31:05] And if you think about the totality of an organization,
[00:31:08] there's a lot of different areas of the business, right?
[00:31:11] So for instance, AI for supply chain and logistics,
[00:31:15] what's the ethical problem there?
[00:31:17] Like what is it possibly gonna do that's unfair
[00:31:20] or anything like you're just routing things
[00:31:22] or you're ordering things and you're optimizing.
[00:31:24] So those are really easy adoption use cases.
[00:31:28] They probably have a lot less hoops.
[00:31:29] When you start talking about people, right?
[00:31:32] Or even like financial data, sensitive stuff,
[00:31:35] people being some of the most,
[00:31:37] it gets a lot more difficult.
[00:31:39] And what I've seen and heard is companies saying,
[00:31:43] hey, we don't even have a policy for this stuff yet.
[00:31:45] Like let's take a little bit of a breather here
[00:31:49] and let's figure out what our policy is,
[00:31:52] how we're gonna manage this.
[00:31:53] At the same time, in hiring and recruitment,
[00:31:55] people are using a lot of these AI screening tools.
[00:31:59] So there's gotta be use cases
[00:32:01] where their company has a policy or not
[00:32:03] because we've been using those tools for a while.
[00:32:05] Maybe they come back and say,
[00:32:07] hey, we've got to look at everything
[00:32:08] which is what they should be doing.
[00:32:10] So a lot of companies are in the process
[00:32:13] of putting in place a chief AI officer, right?
[00:32:16] So there's a sort of creating-the-environment phase,
[00:32:20] getting prepared to use AI tools,
[00:32:23] that I think is going on now.
[00:32:25] And I think the adoption curve is still very much
[00:32:28] in the early stages, which I'm glad about.
[00:32:30] I did a project,
[00:32:32] I'm actually in the process of writing it up.
[00:32:34] It's pretty interesting.
[00:32:35] A colleague and I interviewed 20 IO psychologists
[00:32:38] at global enterprise companies
[00:32:40] about their testing programs and their AI policies, et cetera.
[00:32:45] And pretty much zero of them are heavily
[00:32:48] using any AI-type assessment tools, right?
[00:32:52] An interesting aside on that is we started that project
[00:32:55] like six months ago.
[00:32:57] We took the interview transcripts from an AI note taker
[00:32:59] and parsed them apart
[00:33:03] into rows and columns of text
[00:33:07] so that we could analyze it ourselves better,
[00:33:10] or so that we could have ChatGPT do it.
[00:33:13] But we started by just feeding the 20 transcripts
[00:33:15] as PDFs into ChatGPT and asking it questions.
[00:33:19] I have no need to go and put it all in a grid.
[00:33:23] Like it's nailing it, it's incredible.
[00:33:26] Anyway, as an aside,
[00:33:27] like the kind of stuff we can do with that is insane.
[00:33:30] But I think that it's good
[00:33:33] that companies are being careful.
[00:33:34] I think that AI regulation is gonna be a good thing,
[00:33:39] noting there's a spectrum: the New York City law
[00:33:42] is kind of garbage, EEOC stuff is what it is,
[00:33:48] and it applies no matter what.
[00:33:51] But the EU AI Act is gonna be just like GDPR in my opinion
[00:33:54] and anybody in the US who touches anybody in the EU
[00:33:58] with any kind of hiring thing
[00:33:59] is gonna have to go through a certified audit
[00:34:03] of all kinds of stuff.
[00:34:04] You're not gonna be able to get around that.
[00:34:06] So I feel like that's gonna be great
[00:34:09] and we've got to prepare for that inevitability.
[00:34:12] And that'll help us, but at the same time,
[00:34:16] internal governance is critical.
[00:34:19] Companies, if they really care,
[00:34:20] they're gonna have to have policies,
[00:34:21] they're gonna have to be careful.
[00:34:23] And I think right now there's a lot of questions,
[00:34:27] like, where are you gonna find the use cases
[00:34:30] with the least sensitivity, the most objectivity,
[00:34:35] and the highest levels of friction?
[00:34:39] And I think a lot of companies aren't prepared
[00:34:42] to understand the HR tech side of this really super well.
[00:34:47] You know, originally, I think when you and I first talked
[00:34:50] about the New York City bias law,
[00:34:52] I forget if it had gone into effect yet.
[00:34:55] We're approaching its anniversary date
[00:34:58] and people aren't knocking down our doors
[00:35:00] to get their audits done.
[00:35:03] No, right.
[00:35:04] Nor anyone's.
[00:35:06] You know what we see is vendors getting some kind
[00:35:10] of New York City audit when vendors
[00:35:12] wouldn't even be audited under that,
[00:35:14] but basically saying here, look at our data.
[00:35:16] We don't have any ratios that are out of whack.
[00:35:20] But as you know, just like with the EEOC,
[00:35:22] it's not about the vendor.
[00:35:24] It's about the local use of the tool.
[00:35:26] However, the EU AI Act's gonna change that.
[00:35:29] Vendors are gonna have to be accountable soon.
[00:35:31] Yeah, that's what I was gonna say.
[00:35:32] I do respect those who have taken the initiative
[00:35:36] to get out in front of this.
[00:35:38] I think for solution providers who have
[00:35:40] either AI-native solutions
[00:35:43] or have added AI capabilities to their solution,
[00:35:47] just to go out and say, you know what?
[00:35:49] This is important for our clients
[00:35:51] and our current clients and our future clients
[00:35:55] that we are considered a trusted partner.
[00:35:58] And people can say that,
[00:35:59] but if you don't want it to just be lip service
[00:36:01] or based on some historical metric,
[00:36:04] if you really want them to be trustworthy,
[00:36:06] then, all else being equal,
[00:36:09] say you had your short list of vendors
[00:36:10] and only one had taken the initiative
[00:36:14] to go ahead and get an independent audit,
[00:36:16] even though they weren't necessarily responsible for it.
[00:36:20] It just, I don't know,
[00:36:21] it gives people a little bit more comfort
[00:36:25] that this company is going to be with me
[00:36:28] and they're gonna stay on top of it.
[00:36:29] And if I do get audited as a user,
[00:36:32] as a talent acquisition team, for example,
[00:36:34] that they're gonna be there to support me
[00:36:37] and if the auditor needs some of their data,
[00:36:41] maybe you don't have enough demographic data
[00:36:44] and you need to go back to the vendor
[00:36:45] to supplement it or whatever,
[00:36:48] I think that's important.
[00:36:49] I also think that just because
[00:36:52] I guess where I wanted to go was like,
[00:36:54] when I think of the concept of responsible AI
[00:36:57] and that being a sort of umbrella term
[00:36:59] similar to like human centric AI
[00:37:03] that I think is more used in research
[00:37:05] and academic communities.
[00:37:08] But if you maintain that
[00:37:10] and you think of that as the overarching theme
[00:37:13] and collection of these concepts around fairness
[00:37:17] and explainability and transparency
[00:37:19] and bias mitigation and all these other concepts
[00:37:22] that we supposedly care about,
[00:37:25] then you should want that wherever your solution
[00:37:28] is deployed across the talent lifecycle.
[00:37:32] Because some might say, oh well,
[00:37:33] we're too early in the cycle, right?
[00:37:36] We're doing programmatic advertising, right?
[00:37:39] We're doing job boards or whatever.
[00:37:41] And so that stuff, you know,
[00:37:43] we're never gonna be on the hook for that.
[00:37:45] It's like, well, you know what?
[00:37:47] I disagree.
[00:37:49] You might not be the target of a lawsuit
[00:37:52] by a candidate, but you can't tell me
[00:37:55] that you're not using algorithms
[00:37:57] where maybe there's bias
[00:37:59] in who you're actually targeting
[00:38:01] with your programmatic advertising
[00:38:03] or who you even invite to come into your talent community.
[00:38:06] Or, I mean, not necessarily
[00:38:08] in a purposeful way,
[00:38:11] but you could be unintentionally excluding
[00:38:13] certain populations from even seeing a job
[00:38:19] that they're capable of doing.
[00:38:22] And so if they can't even throw their hat in the ring,
[00:38:25] so to speak, then it seems like a problem unto itself.
[00:38:29] A million percent.
[00:38:30] I say and I have model like graphics
[00:38:32] and stuff I've made about this.
[00:38:35] So the great Satan of all this
[00:38:37] in my estimation is programmatic job ads
[00:38:40] and recommendation engines
[00:38:42] because hiring is a probability game.
[00:38:44] It's a funnel.
[00:38:46] And if you don't put something in the funnel
[00:38:48] that you want to come out the other side,
[00:38:49] you can't get blood from a rock.
[00:38:51] So if you're excluding people of diverse backgrounds
[00:38:57] from seeing job ads
[00:38:58] because the preponderance of people in that job
[00:39:01] are white males,
[00:39:03] you're not gonna serve it up to that person.
[00:39:05] You'll never get that person.
[00:39:07] I mean, they may find it another way,
[00:39:08] but you're not attracting
[00:39:10] and getting in that person's face with your job.
[00:39:13] And so they'll never apply
[00:39:14] and they'll never get hired.
[00:39:15] And then bias compounds down the funnel.
[00:39:18] Every single decision at a layer in the funnel
[00:39:21] has a potential for bias.
[00:39:23] So by the time you get to the bottom of that funnel,
[00:39:26] you're not gonna be even cl...
[00:39:28] That's why we end up with such homogenous stuff.
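The "bias compounds down the funnel" point is easy to put numbers on. With hypothetical, made-up pass rates where one group clears each of four funnel stages at 80% of the other group's rate, the gap at the bottom is roughly 59%, not 20%:

```python
# Back-of-the-envelope compounding: a modest per-stage disparity
# multiplies through the funnel into a large end-to-end disparity.

def funnel_yield(pass_rates):
    """Fraction of a group surviving every stage of the funnel."""
    result = 1.0
    for rate in pass_rates:
        result *= rate
    return result

# Hypothetical stages: ad targeting, screening, assessment, interview.
# Group B passes each stage at 80% of group A's rate.
group_a = funnel_yield([0.50, 0.40, 0.50, 0.50])
group_b = funnel_yield([0.40, 0.32, 0.40, 0.40])
ratio = group_b / group_a  # 0.8 ** 4, i.e. about 0.41
```

A 0.8 impact ratio at any single stage might pass a four-fifths-rule check, yet the compounded funnel yields group B at only about 41% of group A's rate, which is the homogeneity Charles describes.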
[00:39:31] So one of the best ways in my mind to combat that.
[00:39:34] And I've seen this done really well,
[00:39:36] like at Nike. I used to work for Nike a bunch.
[00:39:39] They would go out in the community
[00:39:40] with their recruitment efforts
[00:39:42] and build and engage people
[00:39:44] that they want to fill certain jobs to say,
[00:39:48] we want diversity.
[00:39:49] So we go find diversity,
[00:39:51] we engage diversity with our brand,
[00:39:52] we sponsor events
[00:39:54] and we have recruitment representatives on the ground
[00:39:58] meeting people, building relationships.
[00:40:01] It costs more money, it's harder to scale,
[00:40:03] but at the end of the day,
[00:40:05] that overcomes what we're just talking about, right?
[00:40:07] Like then you don't even have to worry about a job ad.
[00:40:09] You're going out and getting what it is
[00:40:12] that you wanna have
[00:40:13] and making sure they get in your funnel.
[00:40:15] So that is not sitting back
[00:40:17] and letting the AI do it for you, you know?
[00:40:20] Yeah, you hit on two really, really important points
[00:40:23] that I always try to impress upon people.
[00:40:26] One is, if you care about DEI,
[00:40:29] and you can say, well, it was well intended
[00:40:32] but badly marketed or whatever.
[00:40:35] If you care about diversity, equity, inclusion
[00:40:38] and belonging, then you have to support responsible AI,
[00:40:44] because that's where we can actually keep tabs on all of this,
[00:40:47] and that's what's gonna be the overarching
[00:40:51] and prevalent mechanism. Following those principles
[00:40:55] is gonna have those downstream effects on DEI
[00:40:58] and other kinds of programs similar to that.
[00:41:01] But before we go, Charles,
[00:41:03] I mean, there's one question I ask all my guests
[00:41:05] which is the title of this podcast
[00:41:07] is called Elevate Your AIQ,
[00:41:10] what comes to mind in terms of getting people sort of ready?
[00:41:14] Yeah, two things I did.
[00:41:16] So A, back to consuming information, right?
[00:41:19] I took a Coursera class on prompt engineering
[00:41:22] and then there's a really good Microsoft course
[00:41:25] that has like fundamentals of AI
[00:41:27] and it's a self-guided course.
[00:41:29] There's like six or eight modules, it's free.
[00:41:32] Can't remember where I found it.
[00:41:34] I think someone posted on LinkedIn about it.
[00:41:35] I can't remember the name of it
[00:41:37] but it's sponsored by Microsoft.
[00:41:39] It's either IBM or Microsoft.
[00:41:41] I can't remember which one.
[00:41:43] I think it's IBM actually.
[00:41:44] It's a really good course
[00:41:46] and I knew a lot of the stuff
[00:41:47] but there was stuff I didn't know
[00:41:49] and I took it over about a week.
[00:41:51] So there's all kinds of free high quality
[00:41:53] education tools out there.
[00:41:55] You know, you and I talk about a lot of concepts.
[00:41:57] They're kind of flying around.
[00:41:59] We tie them together on some themes
[00:42:00] but from a purely technical standpoint,
[00:42:03] what the hell's going on behind the scenes?
[00:42:05] How does this stuff work?
[00:42:07] I'm still learning about the transformer model
[00:42:10] and diffusion models, of like how ChatGPT works,
[00:42:13] and if you understand that it's a prediction engine
[00:42:16] and all it's doing is predicting what the next word is,
[00:42:19] it blows your mind even more
[00:42:21] because you're like, wow, there's a lot of combinations.
[00:42:24] How does it know?
[00:42:25] But it's not a person.
[00:42:27] It doesn't have a personality.
[00:42:28] It's just the thing that can predict stuff.
[00:42:31] Supernatural math is kind of what I call it.
[00:42:33] But anyway, and I would suggest if you're getting into this
[00:42:37] that's a foundational and fundamental thing to do
[00:42:40] because then you can understand what transparency is
[00:42:43] and what explainability is at a deeper level.
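The "prediction engine" framing above can be made concrete with a toy bigram model: count which word follows which in some text, then predict the most frequent follower. Real models do this over tokens with a transformer rather than a count table, but the core task, predicting the next word, is the same.

```python
# Next-word prediction in miniature: a bigram count table. Given a word,
# predict the word that most often followed it in the training text.
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count which word follows which in the training text."""
    words = text.lower().split()
    counts = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        counts[a][b] += 1
    return counts

def predict_next(counts, word):
    """Return the most common word seen after `word`."""
    return counts[word.lower()].most_common(1)[0][0]

counts = train_bigrams(
    "the model predicts the next word and the next word follows a model"
)
```

Scale the count table up to billions of parameters over token contexts instead of single words and you get the "supernatural math" Charles is pointing at, with no personality required.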
[00:42:46] Love it, love it, awesome.
[00:42:48] Great to talk to you, as always.
[00:42:50] Thanks so much for coming on
[00:42:51] and sharing your perspective and insights.
[00:42:55] Really, really appreciate it.
[00:42:56] Yeah, 100%.
[00:42:58] Thanks so much for inviting me.
[00:42:59] It's always a great conversation.
[00:43:01] Congrats on having a podcast as a fellow podcaster.
[00:43:05] I know how much work it is,
[00:43:06] but it's also super rewarding
[00:43:08] because we get to have these focused conversations.
[00:43:11] So always a pleasure, Bob.
[00:43:13] Awesome, much appreciated.
[00:43:14] That's it. Thanks everyone for listening
[00:43:16] and we'll see you next time.