In this episode we speak with Andrew Gadomski, Branch Chief (A), Workforce Planning & Strategy at the Cybersecurity and Infrastructure Security Agency, about AI's role in workplace compliance and ethics. We look at the importance of workforce composition, the skills that are in demand, and how this plays into evolving employer loyalty. We discuss HR's role in AI governance, de-risking, and the political influence on enforcement.

Connect with Andrew Gadomski here: https://www.linkedin.com/in/andrewgadomski/

Takeaways

  • Data plays a crucial role in decision-making, and organizations should use it to drive compliance and ethical practices.
  • Ethical AI in the workplace involves non-discrimination, responsible personnel actions, secure data handling, and reasonable accommodations.
  • AI can be used for similarity matching between position descriptions and candidate profiles, but veracity of data is essential for accurate results.
  • Organizations should focus on transparency and trust in AI processes, and consider the ethical implications of AI applications in different industries.
  • Third-party audited AI can provide more reliable and trustworthy results compared to internally developed AI solutions. The end goal of recruitment should be to fill jobs quickly, ideally within two weeks of posting.
  • Loyalty to employers has changed, and employees no longer fear leaving organizations. This shift has significant implications for company culture and management.
  • The role of a Chief AI Officer is crucial in industries such as healthcare and finance, where data integrity and compliance are paramount.
  • HR should play a key role in creating guardrails for AI and ensuring ethical and responsible use of AI in employment.
  • De-risking is an important aspect of AI governance, and organizations should prioritize compliance and risk management.
  • Government regulation and legislation are shaping the governance of AI, and organizations need to stay informed and compliant.

Chapters

00:00 Introduction and Background

01:19 Transition to Government Work

03:08 The Importance of Data in Decision-Making

04:04 Transition to Auditing and Compliance

05:27 Using AI for Compliance and Pay Equity

06:20 The Uncertainty of AI in Employment and Recruiting

07:18 The Link Between AI and Shareholder Value

08:16 Ethical AI and Compliance

09:00 Areas to Focus on for Ethical AI in Employment

10:22 The Role of AI in Decision-Making and Mobility

11:19 Using AI for Performance Improvement and Lateral Moves

12:17 The Challenges of Integrating AI Solutions

13:43 The Importance of Third-Party Audited AI

14:57 Defining Ethical AI in the Workplace

16:21 The Role of Reasonable Accommodations in Ethical AI

19:12 The Challenge of Trusting Data in AI

20:30 Veracity of Data in AI and HR

29:38 The End Goal: Filling Jobs Quickly

30:14 The Fear of Leaving

31:08 Changing Loyalty to Employers

32:17 The Shift in Job Application Volume

33:22 The Incentive to Leave Organizations

36:03 The Need for a Chief AI Officer

38:29 Creating Guardrails for AI

41:22 The Role of HR in AI Governance

43:49 The Importance of De-risking

45:17 Government Regulation and AI Governance

48:02 The Influence of Politics on Enforcement

51:27 Closing Remarks

Powered by the WRKdefined Podcast Network. 

[00:00:00] Are we moving people up, and are we doing that with any kind of adverse impact?

[00:00:07] Are we doing that with parity? Are we offering pay in this complete

[00:00:12] remuneration? So pay, benefits, weight, you know, everything.

[00:00:16] Right. Are we doing that with parity? Then are we using any kind of artificial intelligence

[00:00:23] to tag individuals who would be marked for performance improvement plans?

[00:00:29] All right. Do we have humans that are in between that and that?

[00:00:33] Are we using artificial intelligence to tag people for reductions in force?

[00:00:39] Are we using talent intelligence to go ahead and potentially look at lateral moves?

[00:00:47] So I think one of the things that you know, we're talking about skills based hiring.

[00:00:50] So much.

[00:00:51] You know what I like about isolved? Everything.

[00:00:55] isolved is people-centric. And in a people-centric world, you need a people-centric solution.

[00:01:00] isolved People Cloud is a comprehensive human capital management solution that helps you employ, enable and empower your workforce

[00:01:07] throughout the entire employment life cycle, from attracting to recruiting, to onboarding, from payroll and benefits to time and labor management.

[00:01:15] Transform your employee experience for a better today and a better tomorrow with isolved.

[00:01:21] For more information, go to isolvedhcm.com.

[00:01:27] This is William Tincup and Ryan Leary.

[00:01:29] You should know this podcast, because Andrew Gadomski is on today.

[00:01:33] So we're going to talk about all kinds of fun stuff around AI and compliance and auditing and ethical, moral,

[00:01:41] all kinds of good stuff, and especially as it relates to compliance and some of the governmental actions both here and abroad.

[00:01:48] And Andrew is an expert, and I love his analytical mind, of which I have about half.

[00:01:54] And so why don't we just start there? Andrew, what's going on in your world? What do you do?

[00:01:59] Yeah, so it's been interesting, right, over the last few years, both pre- and post-pandemic.

[00:02:07] As some people might know in December of 19, Aspen was asked to come into the federal government and work with the workforce planning and strategy group within the cyber security and infrastructure security agency.

[00:02:23] So this is the cyber arm and the security arm of the Department of Homeland Security.

[00:02:30] At the same time, we started really exiting out of let's call it dashboarding.

[00:02:41] Right.

[00:02:42] So since '13, you know, we started doing centralized dashboards and data lakes and cloud.

[00:02:52] You know, we were doing that for a long time. And there's like a point where, you know, you can see the writing on the wall that, you know, we went from dashboarding to gap analysis and risk analysis.

[00:03:04] And there's like a point where it's like, okay, you don't need to do that anymore, because if you're not doing it by now, if you haven't bought into the concept, you're not going to, right?

[00:03:14] If you're not using data to make decisions by now, you're just not. So that's fine.

[00:03:19] So we got out of waiting for those people to die.

[00:03:23] Yeah, right. You're just waiting for those people to leave.

[00:03:25] Yes.

[00:03:26] So we kind of got out of that business and started doing more root cause analysis. And at the same time, I'm working with the department.

[00:03:36] And then I had some medical stuff candidly.

[00:03:40] I'm a person with epilepsy. A lot of people don't know that.

[00:03:44] It was dormant for 35 years and then it wasn't.

[00:03:49] Wow.

[00:03:50] And that is a bizarre twist to a lifestyle.

[00:03:54] You know, no driving, no swimming, no walking. I mean, there's all kinds of weird stuff that was going on. I had to figure that out.

[00:04:03] And so it was easy for me to move into the government full time. They had an opening.

[00:04:12] They say, well, would you like to do this full time? And I said, sure.

[00:04:15] But there was a contingency. So I was able to keep Aspen as a business at the same time.

[00:04:22] But I had to flip it. I couldn't do workforce planning and strategy and root cause analysis for others while I'm doing it for the federal government.

[00:04:30] And so we moved and I moved the business. And I was slowly moving it anyway into auditing.

[00:04:37] And so now what we're doing is we're looking at organizations. And we're fortunate that we were able to turn the ship, which I think was actually easier for us because we were already working in other companies' data.

[00:04:50] And we're already hooked into their networks and working on their laptops and looking at Workday and Phenom and everything else.

[00:04:57] A lot of people know that we've been doing that a long time.

[00:05:00] So we're focused in on: are companies using artificial intelligence, and if they're doing that, is it efficient?

[00:05:08] And then same thing, but then compliance started to come up.

[00:05:13] And there was enacted legislation. There was pending legislation that was waiting to turn on.

[00:05:22] There was drafted legislation that was waiting to turn on if it got approved.

[00:05:27] And the same was happening for pay equity and wage transparency.

[00:05:32] So we saw an opportunity to exit the analytics business and move into the auditing business.

[00:05:39] And so now we've got customers where, you know, we are behind their firewall and we're looking at their usage of both:

[00:05:49] Are they exposing their wages the way they're supposed to be?

[00:05:53] Are they executing pay equity the way that they're supposed to be both legislatively and in terms of, you know, just de-risking their organization.

[00:06:02] And then I think what's more recent is well everyone's using AI. What the hell does that mean for employment and recruiting?

[00:06:09] Are we doing the right things? And I think what we've kind of run into.

[00:06:15] And I think we're probably going to go on this path anyway is that no one quite knows what the landscape looks like for AI.

[00:06:21] There's a bunch of techs out there who are saying, "we got it," and I think that if you start with those techs, it's great.

[00:06:29] I think that what you're trying to do and what we've learned to do is you've got compliance risk and you've got reputational risk.

[00:06:39] And so there's a group of employers who are saying, I don't need to do this until someone gets slammed.

[00:06:47] Right. Right.

[00:06:49] And then there's another group of employers who say, I'm going to do the right thing and rather than looking at diversity and inclusion, equity and accessibility as a page on my website, I'm going to show it with data.

[00:07:03] And I'm going to do it for real because that's what all this artificial intelligence legislation is about.

[00:07:10] It is about ensuring that individuals and underserved or protected classes are not being discriminated against or automation is not leaving them behind or advancing others in an unfair way.

[00:07:27] And so I think, you know, that's what's been going on.

[00:07:31] And I got to tell you that with all this talk that we're seeing and you guys see it, people are pulling back on diversity and inclusion, you know, we're starting to lose leaders.

[00:07:44] I'm like, you got to be kidding me.

[00:07:47] If there was a time... and I call it ABIDE: accessibility, belonging, inclusion, diversity and equity, okay? ABIDE.

[00:07:55] Just kind of like a cool pun, right?

[00:07:58] If there was a time to dig into this with data, it's now. Because now it's not even about the EEOC; you're getting slapped around based on your revenue.

[00:08:11] And I can't believe people are trying to back out of DEIA.

[00:08:18] I mean, I'm like, this is the one you want to see at the table.

[00:08:22] This is the one that actually links back to shareholder value in the street.

[00:08:27] Yeah, but what should they be looking at?

[00:08:29] What should they be digging into?

[00:08:33] So there's the pre-employment processes, right?

[00:08:39] So, you know, I call that the AEIOU and Y.

[00:08:45] Applied, evaluated, interviewed by the manager, offered, unnecessary, meaning that you didn't have to advance them,

[00:08:53] and yes. Okay.

[00:08:54] So easy to remember, right?

[00:08:56] You want to look at the different stages of those and how we're doing on conversion, and you want to look for adverse impact.

[00:09:02] Right.

[00:09:03] So you want parity.

[00:09:06] So, you know, if you don't know what adverse impact is, go Google it.

[00:09:11] But, you know, you have to look at that first, and then you have to look at it in terms of not just conversion, but speed.

[00:09:19] Yeah.

[00:09:20] And you can't even, what's that?

[00:09:23] It's unintended consequences as well.

[00:09:26] Right.

[00:09:27] And so it's just, it's good for operational value but that's the first thing you want to look at.
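The stage-by-stage conversion and parity check described above can be sketched in code. This is a minimal illustration of the EEOC four-fifths rule (the usual operational test for the adverse impact mentioned here); the group names and counts are invented:

```python
# Hypothetical funnel counts per applicant group (all numbers invented).
funnel = {
    "group_a": {"applied": 200, "interviewed": 60, "offered": 20},
    "group_b": {"applied": 180, "interviewed": 30, "offered": 8},
}

def conversion(counts: dict, stage_from: str, stage_to: str) -> float:
    """Conversion rate between two funnel stages."""
    return counts[stage_to] / counts[stage_from]

def four_fifths_check(rates: dict) -> bool:
    """EEOC four-fifths rule: pass only if every group's selection rate
    is at least 80% of the highest group's rate."""
    best = max(rates.values())
    return all(rate / best >= 0.8 for rate in rates.values())

interview_rates = {g: conversion(c, "applied", "interviewed") for g, c in funnel.items()}
print(interview_rates)                     # group_b converts at ~0.17 vs 0.30
print(four_fifths_check(interview_rates))  # False: 0.17 / 0.30 < 0.8
```

The same check would be repeated for each conversion (interviewed to offered, and so on) and, as noted above, looked at alongside speed, not just conversion.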

[00:09:32] Then I think the next thing you want to look at is this pay equity issue.

[00:09:40] And this is really outside of the control of your average TA leader.

[00:09:44] Right.

[00:09:45] You know, talent acquisition, you know, is in the "we help hiring" business.

[00:09:52] But they're not in the "we do a lot of personnel actions on employees" business.

[00:09:58] Yeah, it's not there.

[00:09:59] Because it's financial.

[00:10:01] It also touches ops and finance.

[00:10:03] That's right.

[00:10:04] And so it's a little bit out of their hands.

[00:10:07] It's a little bit out of their hands because, you know, you talk to your average...

[00:10:11] You guys know a bunch of them, but you talk to your average talent acquisition leader.

[00:10:15] You know, they can't, you know, the only reason they know about separations is because

[00:10:21] of the reactive response: we have a vacancy now.

[00:10:24] Right.

[00:10:25] Right.

[00:10:26] Right.

[00:10:27] So they're not looking at... they're looking a little bit at mobility, but they're not,

[00:10:31] they're not executing career ladders and designing, you know,

[00:10:35] knowledge, skills, abilities and tasks. They're not doing that stuff.

[00:10:40] So when you're looking, you have to look at your pre-employment processes.

[00:10:46] But now with AI, you have to look at your decision processes based on internal mobility such as promotions.

[00:10:53] So are we moving people up?

[00:10:58] And are we doing that with any kind of adverse impact?

[00:11:01] Are we doing that with parity?

[00:11:03] Are we offering pay, and that's complete remuneration?

[00:11:07] So pay, benefits, weight, you know, everything.

[00:11:11] Right.

[00:11:12] So are we using that with parity?

[00:11:15] Then are we using any kind of artificial intelligence to tag individuals who would be marked for performance improvement plans?

[00:11:24] Alright, do we have humans that are in between that and that?

[00:11:28] Are we using artificial intelligence to tag people for reductions in force?

[00:11:35] Are we using talent intelligence to go ahead and potentially look at lateral moves?

[00:11:42] So I think one of the things that you know, we're talking about skills based hiring so much.

[00:11:47] Okay.

[00:11:49] You hired all these people.

[00:11:53] And now you have a skills taxonomy.

[00:11:57] So you're going to use the skills taxonomy to figure out, well, I'm a project manager.

[00:12:02] Can I go work as a project manager in another division?

[00:12:07] So are you going to rock and roll on AI to make that decision based on the skills taxonomy?

[00:12:12] And just automatically assign that person and they rise to the top.

[00:12:16] Or are you going to have humans involved?

[00:12:19] And if you are, where? And did you document it?

[00:12:23] So everything I just talked about, from mobility to pay equity, to the transparency, to the exits, to the change of workforce.

[00:12:35] We didn't talk about reorganizations.

[00:12:37] That is so beyond the average talent acquisition leader.

[00:12:41] And that's because it's just not in their scope.

[00:12:44] Right.

[00:12:46] And what's bizarre is you look at a company like Workday.

[00:12:51] And they just acquired HiredScore, right?

[00:12:55] So good for Athena.

[00:12:57] That's awesome for her and her group.

[00:12:59] How long do you think it will take for Workday to wrap that into their one-ecosystem model?

[00:13:10] Make that work for the pre-employment processes.

[00:13:13] And then find another company that will do performance management and tag skills taxonomies, and have that roll into artificial intelligence.

[00:13:25] Not only that, two weeks prior to that they put 250 million into Paradox.

[00:13:32] Right.

[00:13:33] So there's another hook there, but I'll digress; that's just Workday doing their thing.

[00:13:41] I want to get your take on audited AI and ethical AI.

[00:13:50] And in particular, what I want to unpack with you... this is two years ago,

[00:13:54] so it's a little dated.

[00:13:55] I could only find one HR tech vendor in the world that had third-party audited AI.

[00:14:05] They had hired a university.

[00:14:08] They paid them a ton of money, they gave them all the data, and asked: is it doing what we say it's supposed to be doing?

[00:14:17] Right.

[00:14:18] And if not, what needs to be calibrated, recalibrated, et cetera.

[00:14:22] I don't know of anybody else that's doing that.

[00:14:25] I know a lot of folks that do it internally, right?

[00:14:28] Which I believe is, you know, who's watching the chickens in the chicken house type of stuff.

[00:14:36] Like I don't believe in that particular model.

[00:14:39] I think if they're going to do it, then it should be third party.

[00:14:44] That's my own personal belief.

[00:14:46] The ethical part.

[00:14:49] What I want to get your take on with ethical AI is: who's best positioned to actually say what ethical AI is?

[00:14:57] And is it?

[00:14:59] Is it related to work?

[00:15:02] So wherever you want to go.

[00:15:04] So first of all, ethical AI.

[00:15:07] You know, that's kind of like saying "computers."

[00:15:12] Well, they're all based on... well, as I understand large language models in particular,

[00:15:18] they're all based on a moral code.

[00:15:22] Right.

[00:15:23] Right.

[00:15:24] So, which I'm not sure I like or not.

[00:15:30] But ethical AI, first, it's around the data coming in, and

[00:15:35] are you using that data in a responsible way, in terms of insulating

[00:15:45] the people or the individuals or the companies that contributed to the data.

[00:15:51] So, you know, is "responsible" in your mind, at that particular point, subjective?

[00:15:56] No, I think that, you know...

[00:15:59] You know, ethical artificial intelligence, ethical AI, does not include sensitive and personally identifiable information.

[00:16:09] It does not include.

[00:16:12] It doesn't include any data that hasn't been voluntarily disclosed.

[00:16:17] Yeah, there is an open source intelligence.

[00:16:20] I mean, so that's some of those things.

[00:16:23] Yeah, but some of that is illegal, though.

[00:16:26] That's a legal issue, right?

[00:16:28] But that's part of it.

[00:16:29] Right.

[00:16:30] So part of it is, we have a structure as either a deployer of artificial intelligence,

[00:16:36] so this is a company that's deploying somebody else's, or we're a developer.

[00:16:42] So we have to train this, and did we do that in an ethical model? Did we get our data from the right places?

[00:16:49] In addition to ethical is this secure by design concept, right?

[00:16:53] So, now that we have this data that we're training on,

[00:16:57] did we actually put a shroud around it?

[00:17:00] Right.

[00:17:01] In a secure environment, because part of the ethics is: is it safe?

[00:17:05] Right.

[00:17:06] Ethical AI is also making sure that we're taking steps in terms of healthcare, finance, high risk areas.

[00:17:19] And saying what guardrails have we put in about what we are going to do and aren't going to do as we take this artificial intelligence and apply it to the vertical.

[00:17:30] So ethical AI in healthcare, when we're looking at it in terms of identification of cancer cells, or forecasting models around vaccines, and so on.

[00:17:46] There's a different set of ethics associated with those applications than, say, we're going to use artificial intelligence to determine

[00:17:58] what mortgage rate percentage we are going to offer to Andrew, William, and Ryan versus whoever.

[00:18:05] Mine would be higher.

[00:18:07] I'm at 13, 14%.

[00:18:11] And you probably get the best one.

[00:18:14] So I'm interested to... go ahead, give us your thoughts, and I'll come back.

[00:18:18] So I just think that the first thing is this concept of, you know, we've had some vendors kind of come out

[00:18:25] in the HR space and say, here's our ethical AI statement. Excellent, just disclose. You know, understanding that you have a problem and stating it out loud is the first way to get past a problem.

[00:18:38] So I think that's great, but I think that the ethical use just within employment is actually pretty straightforward.

[00:18:51] Yeah.

[00:18:52] I mean, I think it's around nondiscrimination, making sure that personnel actions are done responsibly, that you've locked up your data, what you've trained with.

[00:19:03] And transparency is baked into that, where the company whose product you're using is transparent about what's going on, or you're transparent to candidates, or otherwise, but there's a transparency layer.

[00:19:17] Whatever that is, I'm assuming it's taken care of.

[00:19:20] And I think what we miss on ethical AI is that we don't necessarily offer reasonable accommodations to get out of the AI.

[00:19:31] A question that I'll ask CHROs and heads of talent or business leaders when they're talking about AI related to employment: they'll say, so what do you think our strategy should be, what do you think we should do? And I say:

[00:19:44] The first thing we're talking about is reasonable accommodations.

[00:19:48] That's the first thing we're talking about, because no matter what, there's any number of jurisdictions where you have to disclose the use, and then you have to offer something that says, I don't want to be part of this, so assess me, or accept it.

[00:20:03] Like GDPR: you opt in or you opt out. So what's your plan?

[00:20:08] If your plan was "100% of our personnel actions are going to be done through artificial intelligence," I will tell you that there are any number of country-based laws that actually prohibit that business decision.

[00:20:27] That's where you start: you start by declaring reasonable accommodations.

[00:20:33] That's cool.

[00:20:40] So, interested here to get your take on the role of AI specific to employment practices. What is Andrew Gadomski's view on this?

[00:20:51] So first and foremost, there are different levels of AI. I remember, oh god, I think it was 2016, myself and this employment attorney, Brian Garrison, went out to like a RecruitCon or something.

[00:21:08] No, no, it wasn't a RecruitCon.

[00:21:09] It was literally a law compliance conference in Las Vegas.

[00:21:16] Oh yeah, that was a who's who. That was like going to HR Tech, right?

[00:21:21] The same people, right? We're all going out afterwards anyway.

[00:21:26] Yeah, so we did this presentation about the different levels of AI and what the legal impact was. And I think that, just like "AI" is a buzzword...

[00:21:39] Right, so automation is not the same as algorithmic, which is not the same as cognition or pure artificial intelligence.

[00:21:49] So when we talk about employment, and you mentioned Paradox. Paradox primarily, right, you know, and we all know a bunch of people over there. I mean, they started out as an automation and waterfall set of tools. And so that's different than, do we think this person

[00:22:08] is better than the other, right? Matching technology, stuff like that.

[00:22:13] Yeah, right. So you have similarity matching as a statistical model and as an algorithm; you know, it's really old. This is not new stuff.

[00:22:24] So I think that kind of concept is strong.

[00:22:29] I want to go ahead and match... I've got a position description. Let me be clear:

[00:22:36] I have a position description, not a job announcement. There's a difference.

[00:22:41] Oh yeah, one's a marketing...

[00:22:43] One is a marketing document; one is, right, on the inside of the organization, where you map the knowledge, skills, abilities and tasks longer term.

[00:22:53] So, you know, position descriptions are basically parking spots.

[00:23:00] Job ads are the billboards, and then the person who goes into the parking spot is driving a car, right? So the car has a certain set of skills and abilities, and it can go fast, or it can go on mountains, whatever it is, but that goes into that spot.

[00:23:17] That spot, where it is, what it's supposed to be, and how much you access it, are the knowledge, skills, abilities and tasks of that position. So you have a CEO.

[00:23:26] And their job, regardless of who the person is, is supposed to do X, Y and Z, just like a CFO is supposed to, and a project manager is supposed to do things that are different than an accountant, right?

[00:23:37] So once you have those position descriptions, if you want to go ahead and say, I've got a resume, I've got performance management, or I've got some sort of profile within the organization, and I want to do similarity matching between this in-depth position description, which is stable, by the way, that is a wildly stable document, it lasts a long time,

[00:24:05] versus the individual? Great. I find that to be really strong. In the agency, that's what we're doing: we're looking at things like, you know, there's any number of work roles, and you're looking at cyber incident responders, and there's a definition of that, which is a NICE standard, right?

[00:24:26] Okay, does this person match up to that? And we use, you know, large language models and that kind of thing, but that's against a very stable document, right?

[00:24:37] Versus: the manager decided to say, these are the three bullet points, and I've got to make sure that they install Workday in the next 18 months, and that's what's on the job announcement.

[00:24:50] I think the job announcement versus the resume is a weak use of AI.

[00:25:00] Weak use.
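The position-description matching discussed above can be sketched very simply. Real systems use embeddings or large language models, as noted; this pure-Python bag-of-words cosine similarity is only a stand-in to show the shape of the computation, and all the text is invented:

```python
import math
from collections import Counter

def cosine_similarity(a: str, b: str) -> float:
    """Bag-of-words cosine similarity between two texts (0.0 to 1.0)."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

# Invented position description and candidate profiles, for illustration only.
position = "incident responder monitors networks triages alerts contains intrusions"
candidates = {
    "cand_1": "security analyst who triages alerts and contains network intrusions",
    "cand_2": "marketing manager running campaigns and brand events",
}

scores = {name: cosine_similarity(position, text) for name, text in candidates.items()}
best = max(scores, key=scores.get)
print(best)  # cand_1 overlaps the position description far more than cand_2
```

Scoring against the stable position description, rather than the marketing-oriented job announcement, is the point made above: the reference text changes rarely, so scores stay comparable over time.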

[00:25:03] Generative AI: you want to go ahead and make up interview questions, and you want to make up job announcements, and, you know, you've got employment branding people who want to make 27 versions of the same Facebook post, or whatever you have.

[00:25:21] I mean, that's... so, you and I were at a dinner 100 years ago, when you were really doing a lot of great work on the analytics side, and we got into a conversation around data.

[00:25:32] Dirty data, and whether or not practitioners could trust the data.

[00:25:38] Yeah, which I thought was, I think it was a great conversation for us to have. I wish we would have broadcast it to other folks, because you're never going to have perfect data. This idea that you're going to be sitting on some type of perfect data and then be able to make perfect decisions is just...

[00:25:55] It's kind of a dumb concept, right? Yeah. What I want to bring you back to is not that conversation, because that's old hat. How much do you hear about that as it relates to AI, some of it being data and some of it being trust from practitioners?

[00:26:16] Artificial intelligence is a series of statistical models that are, you know, hyped up. And so there's training data that comes in, and then, if they're really smart about it, they're going to use what are called holdbacks, to say, okay, we trained it with this data, and then we're going to test the models: we're going to pull some of the data out, we're going to, you know, put some data back in.

[00:26:37] And then we're going to keep our confidence up, and we're going to look at different statistical models, and we're going to see how much error we have. So that's an old concept, you know, also known as area under the curve, right? So you want to look for the error.

[00:26:51] If you're doing that at a baseline, that is the minimum of artificial intelligence. If you're winging it outside of that, that's not AI; that's winging it.

[00:27:04] It's artificial, not so much intelligence. Maybe artificial... what's less than intelligence? Stupidity. Artificial stupidity. You know, it's artificial hype, right?

[00:27:17] Artificial hype, right.
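The holdback-and-error evaluation described above can be sketched in a few lines. This is a toy illustration with synthetic scores standing in for a trained model's outputs; the rank-sum formula for area under the ROC curve is standard, but all the data here is invented:

```python
import random

def auc(labels: list, scores: list) -> float:
    """Area under the ROC curve via the rank-sum (Mann-Whitney) statistic.
    Assumes no tied scores, which holds for continuous synthetic data."""
    ranked = sorted(zip(scores, labels))
    n_pos = sum(labels)
    n_neg = len(labels) - n_pos
    # 1-based ranks of the positive examples in score order.
    rank_sum = sum(i + 1 for i, (_, y) in enumerate(ranked) if y == 1)
    return (rank_sum - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

random.seed(0)
# Synthetic "model scores": positives tend to score higher than negatives.
pos = [(random.random() + 0.5, 1) for _ in range(50)]
neg = [(random.random(), 0) for _ in range(50)]

# The holdback: examples set aside up front and scored only at test time.
holdback = pos[40:] + neg[40:]
train = pos[:40] + neg[:40]  # what a real model would be fit on

holdout_auc = auc([y for _, y in holdback], [s for s, _ in holdback])
print(round(holdout_auc, 3))  # noticeably above the 0.5 chance level
```

An AUC near 0.5 on held-back data means the model is no better than guessing, which is the "look for the error" baseline described above.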

[00:27:19] But if you're going ahead and you're doing your training the way you should be, then what you need to do is make sure that the data that's coming in for, say, similarity matching doesn't have its own flaws.

[00:27:32] Right, and I think where we're struggling is, you know, you've got to get past all the blog posts on LinkedIn.

[00:27:42] You know, that "oh, this is the greatest solution ever." It's like, well, it can't be. This is why you need to have human interaction. I think the data veracity in HR is weak, right?

[00:27:59] Thereby AI outputs are going to have error or what we're quickly calling hallucinations.

[00:28:09] Would you say, for us, just as a point of clarity:

[00:28:12] are you talking about data literacy?

[00:28:14] Or are you thinking about it from the truth perspective?

[00:28:17] Okay, so data truth, which is: we have a structured set of data, and it says Andrew Gadomski, not Andrea Gadomski.

[00:28:27] Right right.

[00:28:29] Right.

[00:28:30] That kind of stuff. So it's accurate data. But, you know, that's where I think what we're seeing with AI is that the things we might be putting into it as an employer

[00:28:44] depend upon the service provider that we're using and the training that they executed, to line up with the data that we have, to come up with the outputs that we want.

[00:28:56] And I think when I look at organizations, you look at things like: how many interview dates are actually logged in the applicant tracking system, and how many are actually accurate?

[00:29:13] Right right.

[00:29:14] And unless you're using an automated tool to do all of that, you're depending upon manual entry.

[00:29:20] Right.

[00:29:21] You look at basics like that and you say well we don't even have veracity around our normal staffing processes.

[00:29:29] So what are the chances that we've got really strong veracity about stuff that's a lot more complex.

[00:29:37] I'm picking up what you're putting down here.

[00:29:39] And so it's kind of like, well... and by the way, you know, the matching is all subjective anyway.

[00:29:48] Oh yeah.

[00:29:49] Of course.

[00:29:50] So you know I think that from a speed perspective it's great.

[00:29:56] We're loading our own biases into the match, right?

[00:29:59] And we're not even talking about the right things with AI right now.

[00:30:04] So we need to kind of keep talking about it. Skipping to the end:

[00:30:09] So here's the end.

[00:30:11] The end is within two weeks of a job being posted it's filled.

[00:30:17] That's the end, right? So everybody wants that. The time period could be three weeks, but it could be two days.

[00:30:25] But the idea is that it's it opens it closes.

[00:30:29] Now the first problem with where we're going with AI is we're creating an environment where there's no fear of leaving.

[00:30:39] If I can fill a job in two weeks, then I can find one in two weeks, right? So the dynamics between the employee and the employer change, because the employee doesn't have a fear of leaving,

[00:30:53] and then the employer and the management team have a fear of people leaving at any time.

[00:31:00] It changes the total dynamics around your culture how you manage and those types of things.

[00:31:06] I haven't seen any of that stuff anywhere other than this podcast right here.

[00:31:12] Right.

[00:31:13] It's an interesting point. Let's go deeper on that.

[00:31:17] When did this change happen?

[00:31:19] When did you start to see this?

[00:31:20] Free agency in sports.

[00:31:22] Yeah, basically. I'm pretty sure we've said this before.

[00:31:26] Loyalty to the employer changed.

[00:31:29] Right, yeah, so.

[00:31:32] So this is when this started. This wasn't automated until pretty recently, so this is in the process of being automated, right, where we can see little tools coming up

[00:31:46] that allow you to apply to 200 jobs overnight, right.

[00:31:51] Right, and then we got people who make mistakes, where they don't put a limit on how many people come into a job.

[00:31:59] So you've got a thousand candidates that pop in, because they're all using artificial intelligence to apply, and then the recruiter walks in in the morning and it's like, what the heck?

[00:32:10] Like, I can't go through all these people. And they're all moaning about the work that they have to do, and then they say to their boss, well, we need to go ahead and get AI so

[00:32:20] I don't have to go through all of this, when it was just like, okay, just stop and cut it off at 50 applicants.

[00:32:27] You just need to hit the button, right. Right, well, yeah: LinkedIn apply, Indeed apply.

[00:32:34] We fell in love with volume, both sides.

[00:32:37] Candidates fell in love with volume because they can just apply. And I remember, all of us probably remember an era where recruiters would say, hey, I got 10 million people that applied to the job, like they were happy that they had the volume that applied for the job.

[00:32:55] Yeah, then you get drawn to the wrong thing.

[00:32:58] Then you get down to the 50 people that you looked at, and you put them into your community and do all that other crap that never worked.

[00:33:06] Yeah, so we were talking about when this pivoted. So go back, this brings us about 15 years.

[00:33:16] What we saw was that the ability to increase your annual income was greater when you left an organization, right, versus when you stayed. Correct.

[00:33:30] So maybe that's 20 years, let's just call it 15, and that became statistically viable for basically every occupation.

[00:33:39] So you were advised to move to another organization, right, if you wanted to make more. I mean, literally take it to a family dynamic, right.

[00:33:49] I have one child, but if I wanted another child, I know how much it costs to have four mouths to feed in a house rather than three, right? So I've got pressure to make more money or cinch my belt.

[00:34:06] So it made sense to leave an organization to gain more income based on, you know, your own internal pressures, but even just life-cycle events, right.

[00:34:18] And on top of that, since 1963 we've continually added dual-income houses, right? So what's happened is you make more money by leaving, and you have two people in a house working.

[00:34:34] Dual-income houses. So it becomes fashionable to leave.

[00:34:39] And now it's turned into: I can leave whenever I want, not just because of money, right? I need to have fair treatment, I need to make sure that I'm well, I need to have all the benefits. And that's absolutely true. So there's no social barrier. No.

[00:35:02] To, well, I've been with an organization for two years and if I leave, right, I can explain why I've left hundreds of ways.

[00:35:14] Well, you remember how we would talk about gaps in resumes. We talked about it for a decade, how do you explain the gap in a resume. There is no explaining a gap in a resume anymore.

[00:35:24] I got two questions you can handle anyway. One is, are you a buyer or seller of a chief AI officer? That's one. Okay, two: who's best suited to help with compliance around AI?

[00:35:46] Again, setting some of the boundaries or some of the guardrails, if you will. You and I, pre-show, talked about politicians and academics; neither of those might be the case. But so one is your bid on the potential emergence of a chief AI officer, what you think about that idea.

[00:36:11] Who should be writing that? Who should be helping us with the guardrails of AI? Okay, so absolutely you should have a chief AI officer if you're an organization that, number one, is in healthcare; number two, if you're in finance.

[00:36:28] You're dealing with any kind of regulated industry, public-oriented, consumer-based service where data would come in and there's a threat of artificial intelligence tainting that data, right? So.

[00:36:42] So there's this weird crossover: if you have a CISO, okay, if you have a chief information security officer.

[00:36:53] You probably need an artificial intelligence officer, because you're already in a market where you've actually said there's all kinds of threats to our organization. So now that means artificial intelligence: not only are you using it, you're protecting against it.

[00:37:09] It's not going to be the same skill set, but it needs to be a person that's coordinating their efforts with them.

[00:37:14] Right, got it. So I think then, if you don't have a chief security officer related to information, you might be a little less apt, right, to say, well, why do I need an artificial intelligence officer? That's probably just based on your maturity as an organization in terms of what you see as advantages and threats.

[00:37:35] So that's my take. I think you need to have one if you are a vendor. So listen in here, because you guys get heard by a lot of people.

[00:37:48] If you are on the floor of HR Tech and you have a booth, and you don't have someone whose job is executing artificial intelligence policy, but you have AI on the back of the booth or in...

[00:38:09] No, if it says AI on the squishy ball.

[00:38:14] Okay, on the sock.

[00:38:16] Okay, you need someone you can point to, and I don't care what they're titled, and say that position description includes understanding ethical AI, the use of AI, how it develops on our roadmap, and then, candidly, what is the competitive landscape, right? So this is an intelligence job.

[00:38:36] What are the things that we could be using, or are we not going to use, based on this developing technology? So if you're in the HR tech space, that's my advice there. If

[00:38:49] you are a head of talent acquisition or a CHRO and you're in one of these larger organizations, your chief artificial intelligence officer probably isn't going to get in your way very much.

[00:39:03] Similar to the way that, you know, Workday is rarely bought by the head of talent acquisition, right.

[00:39:11] It's really bought by the CHRO.

[00:39:14] It's bought by the CHRO.

[00:39:16] So I said, actually, it's rarely bought by the CHRO.

[00:39:20] That's right, it's more often bought by the CIO.

[00:39:24] Right. So I don't think that there's going to be a lot of engagement between the head of talent acquisition and a chief artificial intelligence officer in the larger corporation, and forgive me, but it's because it's not that important to them.

[00:39:45] It's important to them if it's a de-risking exercise. EU Artificial Intelligence Act? Right, great.

[00:39:55] If you are a healthcare organization in the state of Maryland, your chief artificial intelligence officer is going to talk to the CHRO,

[00:40:07] because that's a de-risking exercise around employment. And, you know, if you've got seven clinics and you're burning through 3,000 people a year as employees, and you're looking at 30, 40, 50 thousand applications.

[00:40:24] Yeah, there's a conversation, because you need to have: okay, we're using artificial intelligence, where are we storing the data, are we getting external observers, do we have an internal audit? I need to know those things.

[00:40:36] Because the CEO is asking me, as a chief artificial intelligence officer, how have you increased efficiency and how have you de-risked us? Those are the two things a chief AI officer answers.

[00:40:54] And the de-risking one is probably more important than the efficiency one, right. By the way, we talk about the de-risking

[00:41:05] Like it's brand new in HR.

[00:41:09] Yeah, but we talk more about the efficiency side. It's more of a comfortable conversation, the efficiency, like all this is going to make it efficient. But risk is boring, because it gets back to compliance. Like, HR is compliance, period, end of story.

[00:41:28] People are bored by compliance, but it's the stuff that makes HR HR. And so I think the de-risking is a great conversation.

[00:41:38] You had a second question. So the first one was, do you need a chief AI officer? And then the second one, which I didn't skip over, just haven't gotten to yet, is guardrails.

[00:41:49] What are the guardrails you need to have? Who's best suited to create the guardrails?

[00:41:55] Okay, you're talking about employment now.

[00:42:00] Yeah. Okay, well, there's a couple of things that influence that, right? If you're a global employer and you've got thousands of people, you need to have guardrails that are educated by the CHRO once they're informed that this is a de-risking exercise for our organization,

[00:42:23] and that is important for not only the CEO to know, but the board. And the reason being, as you talk about de-risking and you talk about legislation and you talk about workforce efficiency, you're not just

[00:42:37] talking about it to the board, but you're talking about it to the SEC, and you're talking about it to the regulators inside the EU around corporate sustainability and responsibility.

[00:42:48] So that's a big strategic conversation, right? So the guardrails start with HR, and they're informed by the legal organization, and they're informed by auditors who are independent, who are looking at the data, right? And they can be informed by their heads of talent and their heads of work design and that sort of thing.

[00:43:08] So it starts with the CHRO. It probably ends with investor relations, which says: we are responsible, we are following the rules, and that makes us a

[00:43:24] more innovative workforce that retains people better, so we have less risk, and that's why you should invest in us.

[00:43:35] That tracks for me.

[00:43:38] Ryan what do you got?

[00:43:40] I like it.

[00:43:41] I like the answer.

[00:43:43] I got nothing else to add here, man, Andrew.

[00:43:45] I'm glad you got one more.

[00:43:47] Yeah, outside of the guardrails, like who's going to create the laws? Like, we talked about the EU and what they're doing, and we've talked about some of the things that our country is doing, both federal, state, municipal, etc.

[00:44:03] My fear is politicians are going to write a lot of that stuff, maybe even be influenced by industry and probably some tech giants, etc.

[00:44:13] But my fear is, like a lot of things, I don't see politicians writing the things that need to be written.

[00:44:22] And so I would rather that be done by think tanks or by academics.

[00:44:31] I know I hear you.

[00:44:34] I'm not going to agree with you.

[00:44:36] I'm about to hit this on the head. So we're already in an era of what's known as the Brussels Effect.

[00:44:44] Because the EU and the team in Brussels launched this as much as three years ago, and the die has been cast.

[00:44:53] So the UK is matching up legislation, India has matching legislation. China published a 300-page strategy around artificial intelligence for the next 30 years and 300 trillion dollars of estimated revenue.

[00:45:10] I think it's 300 trillion over the next, like, 50 years, some crazy number. And I stopped reading after about page 60.

[00:45:19] I'm like, okay, this is... well.

[00:45:22] I don't need to get into this. So we've already got the major market players talking about artificial intelligence and how they're going to govern it.

[00:45:32] So, I mean, was it Senator Casey here in the United States? I think it's Senator Casey. You know, I haven't had enough coffee today.

[00:45:41] He's got a bill that's been drafted. No one's talking about this.

[00:45:45] He's got a federal bill that's been drafted talking about AI governance specifically in employment.

[00:45:54] Wow. And all it says is follow the rules of the EEOC. So it reinforces all of the EEOC laws and says, well, why don't we just pass a law that says all AI

[00:46:14] that talks about advancement and everything else follows these laws? Which seems basic.

[00:46:23] It's kind of like having an agency like OSHA, right? It's like, yes, we're not supposed to, you know... only the Joker jumps into the chemical vat.

[00:46:36] You're trying to kill your employees.

[00:46:39] Right. Like, we're going to write some rules down, because some of these rules are already there, but we're just going to go ahead and blanket it.

[00:46:47] I mean, if you listen to the EEOC commissioner, he's all over the place, by the way.

[00:46:54] Right. So Keith is out there saying, hey, we've already got these things on the books, and running an audit, following the rules, using AI might make it better for you, and that's fine. And he's right.

[00:47:07] You've got to follow the rules, and if you're using AI, it helps you reinforce the fact that you're following the rules. That's cool. I think, you know, if you look at what the president's come out with around artificial intelligence and governance and guidance.

[00:47:23] It basically says do the right thing, right?

[00:47:29] It basically says do the right thing. This drafted federal bill, applying AI to employment, is basically saying: continue to do the right thing, but prove it.

[00:47:45] Right. And that's basically what the EU has said: you've got to have either smart people or external people looking at your data, asking, are you doing the right thing?

[00:47:59] New York was bold enough to say you absolutely need to have an independent auditor. Of course, you know, about seven companies have complied with that.

[00:48:12] You know, but I think, again, everyone's not thinking about this stuff as compliance risk, right. Dumb question alert: the NLRB. I've always thought of the NLRB shifting with the politics of who's in office.

[00:48:33] So if the Republicans are in office, okay, it goes one way; Democrats in office, it goes another way. In my mind, I've kind of put the EEOC in a similar vein: it just depends on who's in office as to what gets enforced. The laws might be the laws, but whether or not we're actually going to put agents on it and enforce it seems to me, at least, to be political or weaponized in that way.

[00:49:00] My fault, I said it was a dumb question alert. So, do I have that wrong?

[00:49:07] Okay, so this is what I will tell you about government and its missions. Mission changes based on where the risk is for the United States.

[00:49:26] So if the risk is increasing for, say, certain types of discrimination, the government will probably execute more resources towards that, more budget towards that. So here's a great example, okay: we hadn't been talking about hate speech

[00:49:50] a lot until a few years ago, and then it became, okay, houses of worship are being attacked or under threat, so then what you have to do is start budgeting towards that. I mean, it's no different than, you know, look at the TSA, great example: before 9/11, you know, I could walk right through. So we shift based on the environment and the conditions. I think where we are

[00:50:19] with the EEOC is that we're paying very close attention to discriminatory acts: one, because we have more people in the workforce; two, because we have people staying in the workforce longer; and three, we've naturally spread into other urban areas. So there's this concept of, well, I live in a place where I'm part of an underserved population and I'm feeling discriminated against. So there's more...

[00:50:47] There's more recording of that, more documenting of that. And you have to realize that before, we were using paper to make complaints. Now I can go on my phone and swipe and make a complaint, right? So I agree with you that things will sway; I just don't think that's necessarily linked to the executive branch.

[00:51:14] I think it's hard to link it to the legislative branch, right. But I think what happens is, in the legislature, if there's a problem in California and it's getting a lot of noise in California,

[00:51:28] the members of the House and the Senate from California will start to pay more attention to it, right? Right.

[00:51:35] It's probably the same for New York in some ways, in that it pushes to the middle if something happens in either of those two areas. But, Andrew, we've kept you too long, and we could talk to you forever.

[00:51:47] You've got, like, a job and stuff to do. I've got a job and a business.

[00:51:53] Thank you so much for coming on the podcast, brother. We absolutely appreciate you. My pleasure.

[00:51:59] I appreciate you guys inviting me on. And hey, let me thank you, by the way, not just for inviting me: your words, your leadership, your advocacy in our space is longstanding.

[00:52:12] I feel very blessed and honored to have these dialogues with you. Thank you so much for continuing to push our industry, HR, to think differently. So thank you.