In Part 2 of Bob's conversation with Jeremy Lyons, they shift from AI’s impact on job seekers to how AI is changing the workplace itself. Bob and Jeremy dive into the debate over AI vs. human productivity, how AI-driven hiring can lead to bias and legal risks, and why AI literacy is becoming an essential skill in the workforce. They also explore how AI might create "one-person billion-dollar companies", what AI governance looks like in talent acquisition, and why companies need to think deeply about AI ethics before adopting new tools.


Keywords

AI in the workplace, AI productivity, AI compliance, hiring bias, AI literacy, AI in HR, AI governance, responsible AI, AI ethics, workforce transformation, automation in hiring


Key Takeaways

  • AI productivity is about effectiveness, not just speed – AI isn't just about working faster, but about better decision-making and surfacing insights.
  • AI hiring bias is real – If AI is trained on biased data, it can create legal risks, discrimination issues, and adverse hiring outcomes.
  • AI literacy is the next essential skill – Future job descriptions will require AI proficiency, just like Microsoft Office skills in the past.
  • AI-driven businesses are coming – There may soon be billion-dollar companies run by just one person and their AI agents.
  • AI governance is critical – Companies need to implement AI monitoring & compliance tools (like Warden AI) to ensure fair hiring practices.


Top Quotes

  • "AI isn’t replacing humans—it’s forcing humans to redefine their roles."
  • "AI productivity isn’t just about working faster—it’s about working smarter."
  • "There’s a reason AI governance tools like Warden AI exist—because bias in AI hiring is real."
  • "AI literacy will be the next ‘must-have’ skill in every job description."
  • "We’re not far from the era of billion-dollar companies run by one person and their AI."


Chapters

00:00 – Challenges with AI-Powered Hiring: Bias & Compliance Risks

10:09 – AI Productivity vs. Human Productivity: Where’s the Balance?

20:34 – AI Literacy: The New Essential Skill for Employees

30:52 – AI Governance & Legal Risks in Hiring Tech

40:13 – Final Thoughts: AI’s Place in Hiring & Workforce Development


Jeremy Lyons: https://www.linkedin.com/in/lyonsjeremy

RecOps RoundUp: https://recops.substack.com/

RecOps Collective: https://www.recopscollective.com/


For advisory work and marketing inquiries:

Bob Pulver: https://linkedin.com/in/bobpulver

Elevate Your AIQ: https://elevateyouraiq.com


Thanks to Warden AI (https://warden-ai.com) for their sponsorship and support of the show! Warden is an AI assurance platform for HR technology, helping demonstrate that AI-powered solutions are fair, compliant, and trustworthy. 

Powered by the WRKdefined Podcast Network. 

[00:00:00] Welcome to Elevate Your AIQ, the podcast focused on the AI-powered yet human-centric future of work. Are you and your organization prepared? If not, let's get there together. The show is open to sponsorships from forward-thinking brands who are fellow advocates for responsible AI literacy and AI skills development to help ensure no individuals or organizations are left behind. I also facilitate expert panels, interviews, and offer advisory services to help shape your responsible AI journey. Go to ElevateYourAIQ.com to find out more.

[00:00:28] Hey, it's Bob Pulver. Welcome back to Elevate Your AIQ. If you caught my last conversation with Jeremy Lyons, you probably noticed that we had a lot to say about AI in hiring. We're going to shift gears and continue the conversation talking about what happens after hiring and how AI is transforming work itself.

[00:00:57] So in this part of the conversation, we explore the AI productivity debate. Is AI really making us more efficient or just overloading us with more work? We also get into some serious topics like AI bias in hiring, how AI could change the structure of companies, and why AI literacy is about to become one of the most important skills in the workforce.

[00:01:17] If you want to stay ahead of the AI curve and understand how AI will shape the future of work, this is a great ending to the conversation with Jeremy. So enjoy it. We are back with Jeremy Lyons. Jeremy, great to have you back again to continue the conversation. We always have these lengthy and wide-ranging but very insightful chats. We were talking about being human-centric from a candidate perspective when we last spoke, and the misalignment.

[00:01:45] And so it's situations like that where people are just like, please explain what exactly you're looking for because on paper, this should have at least been worth a conversation. And if you don't want to have a conversation, then where's your AI interviewing tool where I can add color to this faulty gate that you have at the ATS stage?

[00:02:08] Yeah, well, I think that there's, there are a couple of things that I can certainly touch on in that one, I do think that there are a number of TA teams out there that have pivoted to also talking about very openly what's going on in their process.

[00:02:25] So they say, you know, we're going to close the application by this date. And then they say, we got X number of applications for this role. We decided to move forward with Y number of people because they were showing Z sorts of outcomes. Now, if a person is introspective, they'll go back, they'll look at their resume and they'll sort of say, hey, wait, I didn't say it like that.

[00:02:52] I think one of the more interesting things, and I spent about nine months last year writing job descriptions for a very big company using their job description logic. And one of the things it made me better at seeing, because I sort of would apply to jobs, and I do apply to jobs, both as market research and because they're jobs I'm interested in, is that sometimes the people who are in the know are not the people doing the filtering.

[00:03:21] And that includes hiring managers too. And this happens in recruiting operations a lot because it's still this new field that has certainly grown, and certain people have different impressions of it. And part of RecOps Collective is to give people that education of what is it, what does it do, what are the skills involved? There's that. But you touched on something really, really interesting, which is the EEO aspects of this.

[00:03:46] And something that I don't think gets talked about enough when it comes to the AI piece and biases in the AI piece: I ran a test actually on ChatGPT a couple of weeks or so ago that was sort of like, what do you know as your biases? Like, walk me through that. And then I prompted it back with a question like, but you're a machine, so don't give me the machine answer.

[00:04:12] But one of the things that also fascinates me, especially with large language models, NLP, AGI, you know, generative AI, all of that, that isn't talked about right now, is the power of your vocabulary

[00:04:32] and how that is going to show up depending on how you received your education and what you had available to you. And I always remember, I said this to somebody about two years ago when I started using ChatGPT to learn: wow, all those words I learned for the SATs are now really, really useful, but they also have this tremendous impact on the output I'm getting.

[00:05:01] And there was a case that I wrote about in a post last year where I talked about how words really matter in AI. And even if they're words that are very close to one another, like hate and abhor or love and adore, and those sorts of words, they mean different things to the AI because it has the ability to go back and use the dictionary definition for those words.

[00:05:26] And so if you go, for example, being Jewish myself, there's a concept called tzedakah. Okay, that's a word unto itself. And AI, if you say, what is tzedakah, will explain it to you. But if you go in, one of the really interesting parts, and I was having a conversation actually earlier today, is that most religions have in and outside words. So words that show that you are part of that group, and words to describe people who are not in that group.

[00:05:56] And so if you were to say, explain this concept to me as if I was a non-Jew, you will get one answer. If you go back and say, explain this to me as if I'm a Gentile, which is the technical word for non-Jew, which demonstrates you have to know that word, you have to be exposed to that word, you have to understand how to properly use it in a sentence,

[00:06:20] you actually get a different answer, every which way I've tested that across models. And so you have to go back and look at, what is the education divide in this country and in other countries? Because that's going to have an impact on the prompts that you use and the structure that you use, because those words now really start to matter when you're trying to look at the output.

[00:06:46] I don't think I hear that being talked about enough, but maybe it's because I don't run in linguist circles, and the people in my groups are not exactly talking about that. But I think that it's an important conversation to have. Yeah, for sure. You had a story for me. Yeah. About your new assistant, employee. I don't know how you wanted to describe it. You have new digital labor in your midst. Yes, yes.

[00:07:14] And I think that's a really interesting story. You know, so somebody asked me to do a presentation. And it was a presentation that I had done before. But they wanted, you know, a new and modern twist on it. And they wanted it for a different audience. So I went to a GPT that I built. And I said, you know, here are the specifications.

[00:07:43] Here are, you know, the outcomes that this group is looking for. I've sort of done my own version before. Here it is. And the GPT said to me, you know, okay, great. Like, do you want us to use your slide, or would you like to see something new? Which would be exactly how I would ask somebody if I was working with them, and they said, hey, look, I've got a presentation coming up. We need you to prepare the slide deck.

[00:08:11] You know, we did this presentation sort of like this, but we want to maybe see a new version of it and put it together. So, okay. I feed it the information and it says, you know, hey, I'm going to do this. And then I'll give you a list of, you know, what I changed, how I changed them, what the backing was. Okay. About 10 minutes go by. This is late at night. I said, hey, look, I'm going to go to sleep. You know, when do you think this is going to be done? It said, it'll be ready for you by morning. Okay.

[00:08:40] Very human type interaction. Come back in the morning. I say, hey, where is this at? So, you know, I opened it with something nice. I said, like, hey, how's your night been? How's the start to your day? It responded back, oh, you know, it's great. Blah, blah, blah, blah. I was like, all right, cool. And I said, you know, hey, look, what's the timeline on this? Because I want to get this done by the end of the day. It said, I'll get it to you in an hour, hour and a half. All right. Come back after an hour and a half.

[00:09:09] I'm like, hey. And it said, I will update you when this happens. I said, well, hey, where is it? It says, well, I'm still working on it. Okay. Well, can you hurry that up a little bit? How much more time? Oh, it'll be done in the next 30. What? So I run this conversation, and as you can imagine, because I can see your face, it keeps going. I started this thing at 9 p.m. the night before.

[00:09:37] It's 2:30 when I'm just sort of like, hey, look, just give me what you have. And it says, well, I'll send it to you as an email. And I go, you don't have the ability to send this as an email. Okay, you're right. I can't do it that way. Give me a drive link. I'm like, all right, here's a drive link. Doesn't do anything. I'm like, hey, where is this thing at? And so ultimately, what was a very funny, well, obviously, I was stressed out a little bit about it

[00:10:05] because I promised a deadline to myself and to other people about doing this. And I spent a little too much time on it, and I should have just pulled the plug. Which is, if you are managing somebody, you know that there's a certain point in time where you just go, you know what? You're not going to deliver this for me. You're just not going to. And you've been kind of trying to, but you're not going to. Bye. I'm going to do it myself. And so ultimately, I did that. But then the funnier part was, I then came back and asked another AI,

[00:10:35] how do I give good coaching feedback back to AI? And then I provided that feedback. Essentially, I reworked the feedback back to the GPT to sort of say, hey, you need to not make promises on things that you can't do. So, so confused. That's fascinating, first of all. But it's acting like a junior employee.

[00:11:02] Well, yeah, but it's not even just junior. Like, it could be junior and still like overachiever kind of, you know, junior. But like, there's no way that it would take as long as it's saying each time.

[00:11:20] And so it's almost like somebody, you know, tricked you and inserted, you know, system instructions into your custom GPT to say, whatever this guy asks for, just, you know, give him the runaround or procrastinate or whatever. It's just, it's so bizarre. Yeah, it was the most bizarre thing that I've ever been through. At a certain point in time, it switched from like, hey, I really want to get this done to like, I'm actually curious about your thought process.

[00:11:49] Not to go completely off subject since we've already been talking for so long, but that's why I like when AIs show their thought process. They show evidence, right, of where their thinking is going. I mean, it started with Perplexity just at least citing its sources, even though some of that was completely full of crap. But overall, I was generally pretty happy with Perplexity's results.

[00:12:17] And now if you go to Perplexity, it can show you its reasoning, where it's going, not just where it's going, but I see that this term is used in this context or whatever. It just came up on the last call I was on. Somebody used a term that I had never heard before. It's derived from something that happened in the medical field that medical students learn. Like grunt work, they called it scut or something like that. I was like, what is that? People were asking, is that an acronym? Like, what does that even mean?

[00:12:46] But yeah, it showed you, oh, it's derived here. And then, you know, maybe a decade ago, it started getting used in other contexts, like in consulting and whatever. I was like, I never heard of it. I'm sticking with grunt work, you know? I mean, I love going to Perplexity and asking, what's the etymology of a word or a concept, and seeing how that breaks down. I just feel like it's so good.

[00:13:10] And I mean, I think, especially given what recruiting operations is and how it borrows from a number of places, and how essentially you need to understand product management lingo, project management lingo, really all of the lingo, it's amazing to see how you can bring things together. I use the word comorbidity a lot in the work that I do. And that is a medical term.

[00:13:34] But it works to describe when we have data that we don't understand, but that could be explained by two sorts of things with two different outcomes. And when symptoms are sort of masquerading as something else, why not use a word that accurately describes it? Yesterday, I saw someone use the phrase AI obesity. So what, to describe a stack where you just have so many AI tools you don't know what to do?

[00:14:03] I was like, sorry, what, what are you saying now? It was, it was like overconsumption of, of AI and having and being overwhelmed by just everything that's out there and getting maybe a little bit of analysis paralysis or just going overboard.

[00:14:25] You're, you know, you're at like a buffet and you're just like, you want to sample all these things, and maybe you go too far down that path. I mean, eventually I could see it leading to this whole other topic that we could talk about another time. But like, are you using AI too much, such that you have now outsourced too much, going back to the using it in your interview or your assessment kind of scenario?

[00:14:51] Are you outsourcing too much to AI, such that you have lost your own ability to think critically and analytically about your life and your decisions and your work product and things like that? Which is a totally valid concern. And there's been some recent research about that, not in the context of calling it AI obesity, but that's sort of where I saw that. So it's just interesting.

Those three random examples of medical terminology leading into... Well, I'm sort of curious, with the, like, medical, you know, like WebMD, there's an actual ism now for people who show up and sort of tell their doctors, this is what WebMD says.

[00:15:35] And like, this is where I got the information from, but I do wonder if you're going to, if we're going to start to see something where it's sort of AI addiction, where it switches from like you actually are fully addicted in terms of how you use AI, how you incorporate AI. I don't know about you, but I don't know very many people that are hopeful about the world of work. And I'd like to change that.

[00:16:01] My name is Marcus Mossberger and I started the Hope at Work podcast where you'll find two things. Number one, really interesting guests. And number two, innovative ideas about the future of work. Check it out.

[00:16:56] There's a specific word that starts with a C that I'm totally blanking on right now, that it's sort of this, like, you cannot function without it. And that's going to be something where, you know, it kind of becomes a compulsion, if you will. But you know, who knows? The world is expanding. Technology is expanding.

[00:17:19] I think, to your other point too, one of the things that I love about Perplexity is that you could go in and you could use a whole bunch of different models to get your answer. And it's all centrally in one place. But like, do you need to go and use Grok? Do you need to go and use Llama? What are you doing? And I know that they just brought in DeepSeek. So it's like, well, which one am I using? Like, fundamentally, aren't they all going to become the same?

[00:17:44] And I think one of the really more interesting things, taking it back to the recruiting space and over-AI-ing something, is, all right, say you set up an implementation agent. Let's say you're shifting your ATS from Greenhouse to Ashby, or Ashby to Greenhouse. Whatever, pick a pair, that sort of thing.

[00:18:07] If your implementation is going to be done by agents, and you have the Greenhouse agent and the Ashby agent, and the agents sort of send it instantaneously so you don't have to spend these long sorts of periods, at what point in time will Greenhouse's agent basically learn all of Ashby's, and Ashby's learn all of Greenhouse's?

[00:18:28] And essentially now, when the programmers on both sides go in and say, hey, we want to make improvements, one side says, this is what Ashby's code base shows us to do and how we're going to do it. Do you want to push that sort of similar thing into this? And now you've got competitors essentially with each other's code bases. Now I'm sure that there's a way to block that, and there's going to be, you know, all that. But it's an interesting conversation to me.

[00:18:52] Well, it also gets into one of the concerns I've had with agentic AI in general, in the context of responsible AI and the traceability and observability of these systems. So let's say there's a whole bunch of agents throughout your tech stack, even just within TA.

[00:19:20] You could have conflicting information depending on whether the communication is, I guess, to your point, bidirectional, you know, how much of that is available and is factoring in. Because ultimately, if somebody in 2027 files an EEOC claim for discrimination, and there were all these agents running around, you know, talking to each other, is it just going to be a finger-pointing exercise?

[00:19:50] Well, you know, your agent was talking to my agent, and, well, I never said that. My agent never said that. My agent can't do that. Or it's just going to be this daisy chain, hot potato kind of scenario. And so you do need to go back and have the sort of agentic forensics to go in and try to figure it out.

[00:20:18] If there really is adverse impact, what was it that created it? And then, of course, you have to say, well, ultimately this was supposed to be a human-based decision. If it wasn't, then that's a problem. That's your first problem. Well, and I know that there are a number of, so Warden AI, Jeff Pole, and they are building a really interesting, you know, product. And I know that they just dropped in a whole feature set, I think, around diversity and stuff.

[00:20:43] But it's funny that you mentioned that because now I'm sort of thinking, you know, for a long time, forensic accounting wasn't a thing. So are we going to start to see forensic AI investigation as a thing? And that seems like, you know, one of those more interesting roles that pops out. And, you know, are you going to have people who are specialists in a specific field doing this forensic AI work to be able to see it?

[00:21:11] And then I think what brings up another really interesting question is if the AI is knowing you're doing forensic work, is it going to take a self-preservation aspect and send you down the wrong trail? Which, I mean, admittedly is a sci-fi idea and, you know, any conversation about technology can always devolve into the like, we're all going to die or we're all going to live in a utopia very quickly.

[00:21:36] But it's just sort of one of those thought experiments to me that I think is very fascinating and something that I explore. Yeah, I hadn't even gone that far. But yeah, no, I'm glad you brought up Warden. They're actually a sponsor of this show. Jeff and I have been working together a little bit. And yeah, the... Not a plant. I did not know that. No, I don't know. That's why it's so great. It's... I love that they added...

[00:22:02] They went beyond what current legislation talks about, at least from an AI legislation perspective. They are... I think a couple months ago, they started checking for age bias. And then, yeah, just recently, they are now looking at all kinds of disability bias as well. So, yeah. I mean, I love the innovation and the pace at which they're releasing features on their AI assurance platform. So, yeah.

[00:22:31] I think it's a really important space to watch for sure. Well, and I'll throw this out just because you now may need to think about this. And obviously, this is something to talk about for a long time. But I do know that this is like a thought in the entertainment space right now, which is sort of like, okay, well, if we have AI, what does the actor have in terms of rights to their image and likeness?

[00:22:54] Because now, essentially, we could be like, I want to see a Tom Cruise film after Tom Cruise is dead, and we have enough imagery and stuff like that to say, hey, I want Mission Impossible 27 and I don't need another actor. I'm just going to use the AI version of Tom Cruise in perpetuity.

[00:23:10] But it makes me wonder, it's sort of like, if we build these agents and we create these companies and the person who creates it dies, but the agents can sort of still function and still update because, you know, this person laid a really good foundational groundwork and maybe there was enough information to continue to pull from it to do this.

[00:23:30] What will that do when you essentially have companies where the founder is dead, there's nobody else there, and it's sort of like the estate runs this company? Like what's mergers and acquisitions going to look like in the AI verse and agents and all that? Yeah, there's so many interesting, you know, pathways that this could go.

[00:23:55] So I do think, you know, I was telling you before, I had a filmmaker on the show earlier and we were talking about that. We were talking about, you know, AI's impact on creatives and your likeness and your digital twin. It also sort of came up on the Talent Intelligence Collective call as well.

[00:24:15] We were talking about your commitment to one company or multiple, the hours that you put in. You know, they're paying you to work 40 hours, but what if you get your work done in 30? Like, are you paying for outcomes, or are you paying for how many hours someone's butt is in a seat, right? And so you're getting the value out of this person, and they can do it in 20 instead of 40, or 30 instead of 40.

[00:24:45] You know, who gets that benefit? Are you going to give them more work to get them back up to 40 hours, so they're completely burnt out? Or are you going to reinvest the ROI of your AI investments in other ways, including upskilling and reskilling those employees and, you know, maybe increasing our ever-shrinking tenures at these companies?

[00:25:12] Well, so there's two parts to this, and I'm looking it up right now. There is a designer who talks often about, you know, how do you set the price of something? You're not necessarily paying for the time worked, you're paying for the experience that this person has to be able to do the work very, very quickly for you.

[00:25:37] So what the cost becomes is, you know, are you looking for outcomes or are you looking for things like that? The second part of that entire conversation, and of course, the minute I start looking for it, I draw a blank and all of that, is... there, now I remember what it was.

[00:26:02] I was on Instagram, searching through a group there that I follow called, like, GPT Tricks or something like that. And somebody had posted the screenshots of it, and they asked sort of, who benefits from the mindset of work smarter, not harder? And the response that it gave was something that's really interesting. And if I find it, I'll send it to you.

[00:26:32] But essentially what the GPT was saying was, it benefits the employer, because it's not that you get less work, you get more work. So is it to the employee's benefit to work smarter and think of all these shortcut ways and how to get to the same outcome that they would get to? Or is it actually not beneficial to the employee to do that?

[00:26:57] And it's actually more like, you should try and find whatever that middle ground is and stay at that middle ground, because that is the equilibrium point you're going to hit on that curve where you're not doing more work, but not doing less work, but still having quality. And where do you sacrifice, and how do you do that?

[00:27:17] Yeah. I think for the average, you know, knowledge worker, or even deskless worker, I mean, these are important questions that organizations, as they think about their strategy, need to think very deeply about. Cause on the other call, I used the analogy of, like, athletes, right?

[00:27:39] Like, look at these enormous contracts that they get. People do the math just to have a point of reference. Like, well, this guy just got a $10 million contract. Well, these basketball players play 48 minutes a game times, you know, 80 games or whatever. So they're basically making, like, a hundred grand a minute. What?

[00:28:00] Well, yeah, that's, it's, it's a fun little exercise to do, but that's not how people who, you know, negotiate these contracts think about it. They think about the value that this person is bringing and the, and the potential outcomes and, you know, championships or, you know, merchandising revenue or whatever, you know, sold out stadiums, all these things, right?

[00:28:26] Like, it's what these people ultimately, you know, produce. I know we're not all professional athletes, but there are probably other examples. Like, you don't pay a good electrician $300 an hour just because he can fix something. They can fix it quicker. They also know what to fix. They know how to optimize their own time. So you're paying for their expertise, is what I'm saying.

[00:28:55] And not just, like, the time that they're actually at your house. Well, so this is kind of a funny thing. And I found that comment too. So I'll send it to you so you can hyperlink it for people if they're curious what the GPT said around this. So I worked for the Clippers during the summer when LeBron James was a free agent. And that summer taught me so much about the back end of sports that I had never thought about in my entire life.

[00:29:23] That almost made me unable to watch any sort of sporting event ever without thinking about it from a business perspective. One, and this is not woe is the athlete, was sports accounting, which is, as an athlete, you're not just paying taxes in your state. You're paying taxes where you get paid. And you get paid in different states when you are playing in those states.

[00:29:53] So that is an interesting thing. That's why states like Texas, Washington, and Florida can be really attractive to certain athletes: no state income tax. All right. That aside, I thought that was really an interesting takeaway.

[00:30:09] The other thing that it taught me was that in basketball, versus other sports, the ability of one player to not only completely change the team on the court, but in the business office, is very interesting. Because we worked with somebody who had previously worked for the Heat. And so when LeBron James joins the Heat and Chris Bosh joins the Heat, you would think, great.

[00:30:36] All these sellers are now going to be able to pick up the phone and call all these people and sell tickets better. That's going to be great. What I found out was, I came in the next day and she just looked dead on her face, like somebody had died. I said, what's wrong? And she said, everybody I know at the Heat just got fired.

[00:30:58] And that was because of LeBron James: you now no longer needed people to do outbound calling, or you needed fewer people to do outbound calling, because you could automate all the inbound calls. The tickets were selling themselves. That's a problem that we're sort of dealing with now with a lot of the AI stuff, which is, like, big companies who get millions of applications.

[00:31:28] If you can automate the inbound, why do you need people? You don't need that many people. And also, in terms of all of that, which is a very fascinating discussion and topic, well, what is the modern TA team going to look like with agents, with AI enablement, with these tools?

[00:31:52] And what I'm kind of seeing, based on a lot of the tools out there and what people are talking about, is something like this:

[00:32:20] A head of tech, and then maybe an employer brand person. So, OK, four or five people. And those people are overseeing all the agents underneath them; they're just there as the subject matter experts. That's a future that could possibly exist, and that's scary and that's uncomfortable. And I think it's important to talk about that.

[00:32:43] And I think it's also important to talk about what a company's AI strategy is, because if they are viewing AI as a copilot, then enablement is going to take a bigger chunk of the pie. How you train your employees is going to take a big chunk, along with how you get it set up, drive user adoption, and show it's not this big, scary thing.

[00:33:10] But if the strategy is we're going to replace people, all you're doing is playing a waiting game until the AI can do that, and then you will do that. And those two strategies are very, very hard for people. They're hard for me, but they're hard for everybody, because that sort of plays on fear. And I don't want to play on fear.

[00:33:34] But I think it's also important that we have these hard conversations about the tools, the tech, the space, and all those things, because it will help. I also know we've been talking for a long time, so if we want to cut this up into multiple episodes, we can do that. Yeah, at this point we might have to do a double episode, because we're way over any of the other shows. That's OK.

[00:33:58] So I think your point there, with this sort of skeleton crew operating TA, ties to the point I've been making for quite a while about AI literacy and upskilling and readiness overall,

[00:34:20] which is: if you like your career, or you're targeting some other career path, you've really got to think about the trajectory and speed of AI advancements in general.

[00:34:35] And then, specifically at your organization, position yourself to know enough about AI to augment your own capabilities, and to continuously improve it through system instructions, just like you would coach and mentor an employee.

[00:35:00] Before we move on, I need to let you know about my friend Mark Pfeffer and his show, People Tech. If you're looking for the latest on product development, marketing, funding, big deals happening in talent acquisition, HR, HCM, that's the show you need to listen to. Go to the Work Defined Network, search up People Tech, Mark Pfeffer. You can find them anywhere.

[00:35:24] And get yourself to the top quartile of that profession or that career path, because then you're at least positioning yourself, you're hedging your bets, right?

[00:36:13] You're positioning yourself so that, you know, I can still thrive because I am continually learning and adapting and growing as technology advances. And I think in some ways that's similar to the overall technology, you know, evolution, right? That's not just a sort of AI phenomenon.

[00:36:37] But now it's even more important and coming faster than ever. At some point, your organization is going to have to just look at the numbers. And if we're not seeing a precipitous drop in the experience or the value that we're providing to our customers or clients or stakeholders,

[00:37:01] and we can save a lot of money at the same time, and we don't have to worry about sick days or interpersonal conflicts or whatever, then the decision makes itself. You know, your own digital slacker aside, most of these agents are going to continue to be quite efficient and get better at following explicit and not-so-explicit instructions with some nudging and coaxing.

[00:37:31] And so I just think everyone needs to pay attention to what's going on around them. Yeah, I mean, you touched on a whole bunch of stuff there. The thing I want to focus on, and this is something I think about in the talent space a lot, is that people always love to say, oh, I'm in the top quartile of this.

[00:38:02] But what does that mean, and how do you find it? It's very easy in roles that are very metrics-driven, like sales roles: I've sold millions in deals, I was in the president's club at the last however-many companies I've been in, whatever. That's an easy thing to put metrics on. In sales, you could say my leads have driven billions of views, or things like that.

[00:38:30] And that can kind of get you to where you're talking about: OK, if I want to be the best, I have to go and get to a billion views. And if I'm continuously spinning this up and getting a billion views, then I know that what I'm doing is good, and that's comparison upward. For a lot of roles, that isn't necessarily talked about.

[00:38:52] So, thinking specifically about the recruiting operations space, there are these pillars within recruiting operations that we talk about continuously: you've got your ops, you've got your programs and initiatives, you've got your reporting and analytics, maybe you have employer branding, and then you've got strategy. Well, it's very hard to find somebody who is at the top of every one of those things at one time.

[00:39:18] But you might find somebody who's really good on the data and really good on the strategy, and they use the data and the strategy to fill in the gaps they have on programs and operations. So that makes it really hard in this space to say: what is a top rec ops person? Are you saying that this person is a top rec ops person because they're the zero-to-one type who builds?

[00:39:43] Or are you saying this is a person who can come in, really analyze, and then upskill our team? What makes that person good? You have to talk about that at the company level, and that's where the what-does-good-look-like piece really matters in that debate. So that's kind of how I think about it.

[00:40:07] I mean, in my own job search, too, you have to skills-map and understand the skills mapping for yourself. But you also have to have enough experience to say, I will work well in this type of environment versus I won't work well in that kind of environment. And that takes some level of experience to do.

[00:40:30] Because then you have to think about how you're going to market yourself with the skills you have and do all those things, which I think is very, very hard for some people. And it's certainly not the case that somebody sits you down when you're young and says, do it this way, because it will get you further ahead. There are some people who get that.

[00:40:52] And you certainly see those people rise very quickly, because in conversations, if I were to say to you, hey, Bob, what can I do to help get you to the next level, and you give me this wishy-washy answer, then I'm going to use your skills to better me and my position if we're co-workers, or something like that. But if you come to me and say, Jeremy, you know what I want to be? I want to be the best X. Now I know exactly how to help you.

[00:41:21] And I can do that, and I can connect you to the people you need to be connected to, to get the experience that's going to take you to that next level. And then the other thing is, you have to have the conversation with yourself: where do I really want to be? At least in capitalism, and at least in America, it's always get to the top, get to the top. But I've sort of been at the top, and I don't like it. There are certain things that I like to do.

[00:41:48] And once you get to the top, you don't get to do those things anymore. So you have to say, hey, look, I want to plateau at this level, because above it I'm not going to be happy, and below it I'm going to be happy but I might not be where I want to be. There's a balance in where I need to sit in order to do those things.

[00:42:08] Yeah, I'm glad we talked about this, because you're connecting two of the thoughts that I had that I hadn't really combined into one yet. Because, like I said, I have been saying, well, you like your job, whether you're a recruiter, a recruiting coordinator, or an HR VP. Whatever your role is, it doesn't even have to be in HR; you could be a consultant, you could be an analyst, right?

[00:42:35] So I guess my point is, as AI starts to take tasks off your plate, once it gets to a certain threshold, people are going to start to say, well, I just took 50% of the work off of 20 people's plates. So why on earth would I still need those 20 people, unless I'm actually going to reinvest all of that ROI into people? Which would be great, but it's unlikely, right?

[00:43:04] So then you say, well, we're going to cut a few people, and then we'll use some of that to invest in more AI and automation, and reinvest some of it in reskilling and upskilling, whatever. But to your point, it's not just about trying to stay ahead of that red line. Teams need cognitive diversity. Teams need a lot of different personalities.

[00:43:32] Some are type A, some are introverted, some think differently, like you and I, right? And so there's inherent value in those things that are in some ways less tangible, less quantifiable. But then you start thinking more deeply about what you're measuring, right?

[00:43:56] And you think not just about quality of hire at an individual level, but: have you improved the team's cohesiveness, the team's quality, and its overall output? Have we improved retention? Have we improved some of these other metrics that matter? We've got to think about that holistically. But you're also hitting on something that came up earlier, which is self-awareness.

[00:44:26] How well do you know yourself, what you're good at, and what you want? In some ways, you don't need AI to help you with that, but technically there are solutions that could help you figure that out, complemented by feedback you might get in the interview process, to do some more introspection. Yeah, well, I know we've got to close up shop a little bit.

[00:44:55] The last thought I would love to leave people with is this: most of what is being talked about with AI solutions actually starts with people first. And so you have to be able to think about, is it a software issue or is it a hardware issue, both from a technical perspective and from a human perspective.

[00:45:20] What I mean by hardware versus software is this: if it's a hardware issue and you're a human being, there are certain things that you cannot change. I cannot all of a sudden go from being five-six to clearing high-heel height and being six feet. It's just not going to happen. I will also say this: I have a stepbrother named Jeremy.

[00:45:44] My stepbrother Jeremy is six-foot-four and has 70 pounds on me. OK? I am not all of a sudden going to be the taller Jeremy; that hardware change is just not going to happen. By the way, everybody, if you want to see the two interlinked Jeremys, I can show you the photo, and you'll see there's a very big height difference. But if there is something that can be changed, I can straighten my hair, for example, or dye it a different color.

[00:46:14] That could be a hardware switch. The software piece, though, is what you have to look at in yourself: changing your own code. And there are just going to be certain things that are not going to get there. So with a lot of this use of AI, and any technology really, you have to go back and look at the people using it and how those people are thinking about how they're using it, and see if that is going to work out for you as well.

[00:46:44] Because if it is not, then you have to see how these things are commingling and coming together. And I think that what oftentimes gets left out of the equation is: how much analysis have we done going back to first principles? Why is this the solution we're using? How is it going to do that? And does it track back to all the why questions we asked in the kickoff? I think those are perfect parting words.

[00:47:12] And we're going to have to leave it there. We've already got two sizable episodes out of it. Two full episodes, and we didn't even get through all of our questions. We did not. We didn't get into details on neurodiversity, for sure, which I know is a topic near and dear to you. Jeremy, thank you so much. This has been fantastic. I did not expect this to go for so long. And like I said, this will probably be the closing for part two of this very long but fantastic discussion.

[00:47:42] So thank you for spending so much time with me and for the audience. Really appreciate it. Definitely. Thank you so much for having me, Bob.