Ep 52: Responsible AI and Cognitive Diversity to Drive Talent Transformation with Jen Kirkwood
Elevate Your AIQ | January 31, 2025 | 00:57:36


Bob Pulver speaks with Jen Kirkwood, an expert in HR technology and IT, about her journey into responsible AI. They discuss the rapid pace of change in AI, the importance of transparency and governance in AI systems, and HR's critical role in managing talent and cybersecurity. Jen emphasizes the need for cognitive diversity in AI governance and decision-making, the importance of data quality in AI-driven hiring practices, and the role of responsible AI in promoting inclusivity. The conversation also highlights the challenges and opportunities AI presents for HR and the workforce, the value of neurodiversity in driving innovation, and the necessity of balancing technology with human talent in organizations.


Keywords

HR technology, AI, generative AI, workforce optimization, cybersecurity, talent management, responsible AI, data privacy, offboarding, cognitive diversity, hiring, diversity, HR, talent acquisition, data analytics, inclusivity, neurodiversity, technology transformation


Takeaways

  • Jen Kirkwood has over 28 years of experience in HR technology and IT.
  • She founded Talvana Consulting to focus on talent optimization.
  • AI is rapidly changing the landscape of HR and IT.
  • Transparency in AI systems is crucial for effective governance.
  • Offboarding processes are often neglected but essential for alumni relations.
  • Cognitive diversity is important in AI governance committees.
  • HR must be involved in AI decision-making processes.
  • Cybersecurity is a critical concern for HR data management.
  • Inclusivity in hiring practices is vital for innovation.
  • The future of work requires a focus on skill development and optimization. 
  • Data analytics maturity is crucial for effective hiring practices.
  • Historical bias in hiring data can perpetuate discrimination.
  • Responsible AI can uncover overlooked talent pools.
  • Organizations must prioritize AI ethics and governance.
  • HR needs to articulate business cases for responsible AI.
  • Neurodiversity can drive innovation and creativity.
  • Flexibility in management is essential for diverse teams.
  • Self-learning and digital literacy are key for organizational transformation.


Sound Bites

  • "AI models need to be embedded in every use case."
  • "AI is a team sport."
  • "The complexity of AI regulations is overwhelming."
  • "How do we hire more talented engineers?"
  • "Responsible AI is how we get through this."
  • "HR is the highest risk area."
  • "We are all responsible for responsible AI."
  • "You can't just cut everyone."
  • "Flexibility is a treasure in management."


Chapters

00:00 Introduction to Jen Kirkwood and Her Journey

02:50 The Intersection of HR and IT in AI

06:12 Challenges and Opportunities in AI for HR

08:57 The Importance of Transparency and Governance in AI

12:04 Navigating Offboarding and Alumni Relations

15:07 The Role of Cognitive Diversity in AI Governance

18:05 Cybersecurity and Responsible AI Practices

21:08 The Future of Work and Talent Optimization

32:24 The Importance of Data in AI Hiring Practices

34:42 Responsible AI: A Path to Inclusivity

35:41 The Rapid Evolution of AI in HR

39:00 AI Ethics and Governance in Organizations

41:22 The Role of HR in Responsible AI Implementation

45:54 Transforming Organizations Through AI and Inclusivity

51:31 Harnessing Neurodiversity for Innovation

55:31 Balancing Technology and Human Talent


Jen Kirkwood: https://www.linkedin.com/in/jenphillipskirkwood/

Talvana Consulting: https://talvanaconsulting.com/


For advisory work and marketing inquiries:

Bob Pulver: https://linkedin.com/in/bobpulver

Elevate Your AIQ: https://elevateyouraiq.com


Thanks to Warden AI (https://warden-ai.com) for their sponsorship and support of the show! Warden is an AI assurance platform that helps HR technology providers demonstrate their AI-powered solutions are fair, compliant, and trustworthy.

Powered by the WRKdefined Podcast Network. 

[00:00:00] Welcome to Elevate Your AIQ, the podcast focused on the AI-powered yet human-centric future of work. Are you and your organization prepared? If not, let's get there together. The show is open to sponsorships from forward-thinking brands who are fellow advocates for responsible AI literacy and AI skills development to help ensure no individuals or organizations are left behind. I also facilitate expert panels, interviews, and offer advisory services to help shape your responsible AI journey. Go to elevateyouraiq.com to find out more.

[00:00:28] Hey everyone, welcome back to Elevate Your AIQ. Jen Kirkwood and I have had a number of conversations over the last few years about responsible AI and workforce transformation. And after a lot of scheduling gymnastics, we finally got together in the studio. Jen recently left her high-profile and highly impactful role as an IBM consulting partner covering responsible AI and talent transformation and strategy, as well as being a member of IBM's AI ethics board, to start her own consultancy. So I was looking forward to hearing about this big change. Jen also recently joined the board

[00:01:09] at HCI, the Human Capital Institute. So congrats to Jen on both of these new adventures. Through her significant real-world experience collaborating with top business and government leaders, Jen has gained invaluable insights and perspective on this critical theme and its impact on the future of work. We explore the rapid evolution of AI in HR and why human centricity, fairness, and inclusivity must drive AI-powered talent decisions.

[00:01:32] From cognitive diversity to responsible governance, Jen offers actionable insights on how organizations can optimize hiring, reskilling, and decision-making while keeping people at the core of transformation. It certainly took a while to get Jen in the studio, but I assure you it was worth the wait. Enjoy the conversation and thanks again for listening. Hello, everyone. Welcome back to another episode of Elevate Your AIQ. I'm your host, Bob Pulver. And with me today, I have the absolute pleasure of speaking with Ms. Jen Kirkwood. How are you doing today, Jen?

[00:02:02] I'm doing great and thrilled and honored to be here. Thank you, Bob. Thank you so much for spending some time with me and for my listeners. I think we've talked about doing this for, what, 10 months? So it's awesome to finally have you in the studio. Yeah, it's your waiting list I have to get past. Right, right. Put it on me. Just to kick things off, why don't you tell folks a little bit about yourself and your background and what you're working on now?

[00:02:29] Well, I have spent over 28 years in the HR technology and IT business. And I do say both of those affectionately because I happen to think that my special skill, if you will, if I was a Marvel hero, is to really bridge that chasm between HR and IT.

[00:02:47] And it's more needed today than ever, with pressures and the acceleration of technology that has really whiplashed a lot of HR executives and, frankly, senior leadership outside of HR. So, you know, I've had such a passion for this area for so long, and I've enjoyed doing so many things with it. I've had a humbling experience, learning, with great partners around me that have finally encouraged me.

[00:03:16] Not finally. I've finally taken the step to lead and found my own business, which is called Talvana Consulting. And I happen to think of Talvana as talent focused, focusing on the needs of the workforce in a great transformation, a peaceful existence that is both revenue driving as well as cost reducing, but really optimizing work.

[00:03:43] There's so many different trends in the world of work with low birth rates, with talent scraping that we're doing, as well as, you know, the cultures themselves. And are they dwindling or are they amplifying with technology? That for me, it seems like a perfect recipe to now leverage all my partners, my network and my experience humbly to be able to work with these clients across the globe and the US. Wow. I'm really, really excited for you, Jen.

[00:04:11] And I know you're very well respected in both spaces, IT and HR, and the important intersection of the two. So I know it's not easy to break out and go off on your own. I'm sure it'll be a humbling experience, but you're going to do great. Thank you. Thank you.

[00:05:33] So congrats again.

[00:06:03] Yeah. So you mentioned before about the sort of whiplash and, you know, all the speed at which, you know, this change is coming and it's coming from multiple directions. And it's hard for people to make sense of it. I know you and I have had some sidebar discussions about some of the hype and perhaps myths about some of what's happening.

[00:06:27] As you look back on 2024 in particular, I think that was where it sort of hit the peak of some of the hype. What do you think about as you start this new venture? I mean, I'm sure you have some observations from last year that are going to play right into how you approach some of your engagements. Yes, I do.

[00:06:49] And the interesting thing is, you know, with what's going on in the landscape of the politics impacting principles like responsible AI or inclusion. There's so many different strengths to amplify in the world of AI and the responsible AI that needs to be embedded in HR technology, even if it's not created within HR, but HR technology itself.

[00:07:14] And, you know, Stack Overflow and GitHub are both reporting, as of last year, that among their respondents in IT, 97% of coders, developers, and assistants are now using or plan on using generative AI this year. And Stack Overflow reported a level further, saying they're going to use AI in workflows.

[00:07:37] And one of the biggest areas in the business is HR: payroll, the workforce optimization. So in my endeavors, we're focusing heavily on worker optimization, not just time and labor systems, but those that are connecting talent and how we optimize talent from, yes, recording time to understanding the pay policies, the time policies,

[00:08:05] and then the skills that are required. What do we have in our workforce today? What are the skills of the future that are going to be required for jobs, and how do jobs morph? And then how do we acquire new skills, as well as develop these skills with learning? That's really the hub of our focus right now, cross-sector. Yep.

[00:08:25] And I'm reminded of the criticality: not too many people think of time and labor systems at a senior level, until, as I remember from a couple years ago, a huge, dominant time and labor software company had a ransomware attack. And you may recall this. And it was a very big event.

[00:09:13] So it wasn't the action of the coding that was difficult in restoring systems. It was the action of interpreting and understanding what these systems were about. And that's that need for transparency. So that really shone a light on the time and labor issues. And yet one of the biggest areas where AI holds promise for senior leaders and the workforce is in time, productivity, and optimization.

[00:09:43] So as we unravel different things within time systems, payroll systems, HR technology systems, while we're trying to build on top of them with AI, that critical need for transparency on how does something work? What does the data look like? Where's the data coming from? Who has access to the data? And all the different collective policies around that are so embedded in just one AI case.

[00:10:10] Often it is underestimated by those outside of HR and IT on how great of a need it is for what I always call special teams. You need special teams from many different areas to help on these endeavors. So a couple thoughts.

[00:10:26] What you just described shines a little bit of light and color on why anything related to HR and someone's livelihood, their paycheck, their career progression, or whatever is considered high risk in the EU AI Act.

[00:10:44] It also made me think about the caution that people have around these, not an individual agent, but these systems of agents, agentic workflow. When you start stringing all of these things together, there's two big, I guess, flags that pop up. One is, do we have the observability to be able to provide that transparency that you just described?

[00:11:12] Do we know how all these systems are actually talking to each other, how they were all trained to interact with data, to interact with their neighboring agents or other systems that they need to talk to? But then also, I guess, again, putting my own responsible AI hat on, I think about the governance and assurance.

[00:11:34] Do we actually know that these same systems that were built over the last couple of years, a lot of them are looked at individually? Like if you do an audit on a system, it's usually an individual audit. You're auditing one particular software platform, right? But do they have the capability yet to actually do AI assurance or AI governance across that agentic workflow?

[00:12:03] I mean, you're bringing up the point that is, as everyone's rushing towards the holy grail of, I want to reduce costs, I want to optimize my workforce. The questions that senior leaders are asking is, how many of my recruiters or HR people can I cut? And that's a really real question that is often so misunderstood because there's so many ways in other areas to cut costs, to look at profitability.

[00:12:29] People shouldn't be the first thing because the human in the loop is the most critical element. So when we look at those things from AI models and getting predictive models to more of the assistants and having that conversation and that generative piece where assistants are creating new content, and that is, you know, from a private or public model where employees or candidates or managers are looking for answers.

[00:12:58] And if companies aren't serving them up a private generative AI model, whether it's their own enterprise OpenAI, or Claude, or Meta's Llama, then they're going to bring their own to work, even if the company hasn't done it. And I've seen so many different privacy exposures because they're putting information into the systems. Help me model compensation plans for next year. Easy one.

[00:13:22] That is always a hard task, but easy now to model more within generative agents. That threads all the way through the different systems. And systems are always managed by many different people and departments. In fact, just this January there was a company whose payroll system was exposed in a data privacy incident because of their generative model.

[00:13:49] It exposed the data, which was not secured at rest in the cloud, and they had a huge breach. So when we think about the intertwinedness, that's the word, intertwinedness of the systems, the people who run them, and the needs of those getting answers from them, it is a fantastic ball of yarn that I truly like to sift through and thread through. But it's extremely complicated.

[00:14:15] And that question of transparency is not just about responsible AI and ethics, are we inclusive, are we reducing bias. It's about cybersecurity threats. And we need to look at these systems, because there's so much to them, that HR is now a steward of cybersecurity. This is often the most highly coveted data in the organization. Most highly coveted.

[00:14:41] And every attack is always looking to try to penetrate that type of private information. So HR is the steward of the treasure of that organization. And everyone is trying to breach that, whether it's data security threats from outside, external or internal fraud, looking at time and productivity theft among employees, to then just looking at what skills we're acquiring, developing, and losing.

[00:15:10] You know, those types of models and those types of business outcomes are what we also have to include when we're thinking through that intertwinedness, and how responsible AI overall needs to be embedded in each and every use case, process, and evaluation. And that takes a lot of different depth and experience to think that wide. So no one should be surprised when people are standing back and watching and observing.

[00:15:36] And maybe their plan is to be a fast follower, but certainly I can understand why people would have reservations about being an early adopter. I do think some of it goes back to their own data and analytics maturity to get value out of the system and to have that system be trusted. But you're raising some interesting points that I hadn't thought deeply about.

[00:16:00] Like when I hear people talk about AI in the context of a shadow IT scenario or bring your own AI, BYO AI, I've heard a couple of times. When I hear that, I think they've got their personal mobile device that they're bringing that has AI on it. Or maybe someone has joined an organization, they've got some custom GPTs that they built and things like that.

[00:16:27] But it could be even more concerning than that, because if they've actually built agents and they think they can just show up and plug those agents into the systems, you have to know, in a cyber context, that you're not that point of failure. You're not the one creating a centerpiece of risk that could come into the organization. Couldn't agree more.

[00:16:53] So there's a story I just heard from a friend of mine who is offboarding from his company. And offboarding happens to be a process that nobody prioritizes, right? Someone's leaving the company or they're forced to leave the company. It's a goodbye, get out type of process usually. It's not the one that is like talent acquisition where we're really going to work on that candidate experience in Corkfest. Offboarding is, show them the door.

[00:17:18] However, in this offboarding process, he was trying to get answers to his 401k disbursements, his pension disbursements. And the multiple answers from third party administrators led to so much confusion, misinterpretation, because it had not even had the basic housekeeping from the AI that they put into their HR technology systems and the policies,

[00:17:44] that actually, when they traced back into it, it had cost the company more money. It was a leakage point for them in how they were disbursing or managing things with their TPA. And it all could have been cleaned up a lot better with AI, with conversational agents. I hesitate on agentic AI for HR right now, because agentic AI is more than just automation with AI.

[00:18:14] It's that the AI is making decisions. And so if agentic AI enters into an equation like, should we give them their 401k or not? Should we disburse it in a year, or should we do it now, or in six months? And how do we do that? That can lead to so many legal issues, which in his case it did, or it can lead to so much overcorrection once the company knows there's an issue.

[00:18:38] 401k calculations in the payroll world, an oldie but goodie, are still awful to try to backtrack, recalculate, catch up, and reconcile with the taxes. I mean, it's a nightmare. And yet it's such an overlooked process of cost control, cost control and culture. And as companies are building their alumni databases as well, which is a huge technology that's up and coming, we know that this goodbye could be another hello in a matter of months or a year.

[00:19:08] Well, I'm glad to hear that somebody is focusing on that because, yeah, in my experience, well, I haven't off-boarded from that many companies, but it does seem like it is just, you know, best of luck. You know, I'll take your computer and your badge now and, you know, maybe I'll see you around.

[00:19:37] I do think companies need to do a lot better job with that. I like to think of the talent lifecycle as applicant to alumni, not hired to retire.

[00:19:58] Your brand perception, your relationship, all starts with the experience that you have when you first apply to a job or first hear about a job. And then, yeah, alumni is a potential talent pool of people with institutional knowledge, you know, perhaps still loyal to the company and considering themselves an advocate even after leaving. I mean, I still, you know, repost IBM posts.

[00:20:27] I still read their Institute for Business Value research. I still have all kinds of IBM swag around the house for sure. So it's not like I left with any, like, you know, bad taste in my mouth. So I do think that's important. I like that. No, no, I agree with you. You know, there's so much focus on that candidate experience, and companies are hyper-focused on, you know, Eightfold's promise of a 40% reduction in time to hire, which is their latest stat.

[00:20:57] 40% reduction in time to hire. But if you look at the sourcing time and the candidate experience and everything that goes on before they're hired, that is so tremendously important. And we have low birth rates. I keep emphasizing this because while we have low birth rates, we have the highest need for new skills and developed skills.

[00:21:20] So we need to make sure these models, when we say are inclusive, they're inclusive to age, to neurodiversity, those with seen and unseen disabilities. Not just what people tend to, you know, politicize as gender and race, that we're looking at inclusivity for that innovation and those amazing candidates.

[00:21:39] And even some of the older generations are proving that they can ramp up in quicker timeframe with digital literacy and HR or IT than other generations that are very much scraped. So, you know, the opportunity right now to hone these skills to, yes, find optimization areas like crazy, but to do it with thought and on the right priorities.

[00:22:04] There is so much hidden opportunity for cost reduction, for the worker optimization, but also for the talent culture and farming that talent pool that we can't exclude anything. And so that's why that transparency is so critical to understanding how something works. What does the data look like?

[00:22:22] And when you look at vendor agreements, I do a lot of vendor evaluations for organizations, and you look at their contracts and their agreements, it is very difficult to get forthcoming information on how is the data stored? Yes. But what are the algorithms made up of? Who is determining the model? The weights of the factors? It used to be in a predictive model. Everyone used commute time until the pandemic happened, right?

[00:22:50] And yet, do you know how many organizations are still using that same flight risk model or attrition model with commute time? And yet someone, some human, had to determine that. So when you talked about the maturity scale of what HR was doing in data, there were some that were doing it well. I say doing it well: they were managing data, they were harnessing data. But we always have that Achilles heel of clean data. Do we actually ever have it?

[00:23:18] All the multiple systems and owners. But they were really starting to get great KPIs and great metrics and insight. Some very keen users out there, like Unilever and Cisco. I look at the things that Visier and One Model are doing, which are really savvy and moving HR's data trajectory really well.

[00:23:37] Now you layer on AI, assistant AI, agentic AI, and we start to get really gobsmacked with HR and making sure not just is HR up for it, but procurement, supply chain, finance, legal. All of these other organizations I've seen throughout hundreds of experiments. They're leading the charge on HR cases. And why?

[00:24:03] They're thinking that HR doesn't have the skills or the literacy to be leading this themselves. So AI is a team sport. It's special teams. And when you don't have HR in the middle of these teams, the negligence is huge. And we've seen it with the AI ethics cases on the IBM boards, where we have people that are designing time and labor systems or policies with AI that don't understand the downstream effect into the world of HR compliance.

[00:24:31] Or those that are trying to use recruiting models and they're saying, we're only a 200 person company in the United States. We're not global. We're not subject to the EU AI Act. Ah, but nay, nay, you are because recruiting is a global function. So those things sneak up on people and they underestimate the need for HR being in the loop, the HR expertise being in the loop, and that team sport.

[00:24:59] My head is spinning right now. You've triggered so many things. But one of the things that I wanted to make sure to hit on is, like you, I advise people to really think deeply about the cognitive diversity that you need on an AI governance committee and an AI ethics committee.

[00:25:22] Would you mind just sort of unpacking how you sort of frame this with clients in terms of looking past? Yeah, we know you have a legal team. We know you have a compliance, you know, governance, risk and compliance team, you know, embedded in HR or what have you. And then you've got, you know, the developers and the product teams like building some of these solutions. But there's a whole bunch of other people in the middle that might need to be brought to bear.

[00:25:52] Is the demographic makeup similar to how you might have people in like a design thinking exercise where you're trying to make sure that you understand pain points of, you know, different users? Or is it broader than that? And do these people actually influence features and functions and capabilities within the solutions that are being built? One would hope that it's being well thought of in terms of who do we need on a team.

[00:26:21] But with so many innovation hackathons and inspirational activities that companies are trying to really understand how could AI or generative AI, just AI in general, how can AI impact our company? That fear of missing out is haunting every senior leader right now.

[00:26:39] So the competitive pressure is on: 67% of CEOs and leaders are expecting to use generative AI at scale this year, if they're not already, because they've been doing experiments, failed fast, and moved on. Getting to scale is now the optimization piece. You would think it would be well thought out with all these areas.

[00:27:01] Unfortunately, I think that the regulations and the bias and inclusion efforts that should be first and foremost in thinking through these models are not treated as a business case. And I understand that. Between the competitive pressure, the regulations, doing the right thing, and then the bias and discrimination, there are a lot of different political themes, which I stay out of, that are changing in tone between countries.

[00:27:31] Japan has one of the widest AI regulations out there and they pride themselves that that helps their innovation. Whereas EU AI Act, I happen to think was very well thought out over years and years and it took a lot to get them there. But they've done a very nice model. Now we have over 120 in the United States, 120 plus new regulations, proposed regulations all around AI and the workforce.

[00:28:01] And that's states, cities, and some at the federal level, although it remains to be seen how much will continue at the federal level right now with this administration. Sure. Over 120. So the complexity oftentimes leads senior leaders to say: they're never going to be able to audit me. They're not going to have the expertise. I will just pay the penalties and interest, because it's cheaper than trying to figure out this mass chaos in the beginning. And I don't blame them.

[00:28:29] But I go back to the cybersecurity piece because when you have these models and we approach these models, whether they have an AI ethics board or not, whether they have the full functional team yet or not, we usually bring that piece in consultatively to say you've got to have this person, this person. And it's not always on job description. It's on who the individual is and the skills that they have. But it's got to be that wider thinking.

[00:28:54] We're bringing all of that into the room in that conversation to say, first and foremost, cybersecurity. And you have to thwart attacks. The data models, the data privacy on your organization's most coveted data is first and foremost in a responsible AI model. And that's not just IT. That is also legal and HR, as well as payroll if it's a separate function and not under HR. Those all have to be contemplated in those skills and making sure they're at the table.

[00:29:24] So in that framework that starts to develop, everybody is a steward of cybersecurity. And I don't mean just in coding and assistants and looking at the development cycle. I mean, are we farming the skills in our organization? Often there's this misnomer, published a lot by AI companies, stating, hey, with the code and the code assistants, you don't need as many software development engineers or special skills.

[00:29:51] And I will say again, nay, nay, that is not accurate, because if you look at SolarWinds, CIOs are now resistant to stepping into those positions because of the devastating personal financial catastrophe that fell on the CIO in that situation. Why would a CIO want to take on this personal financial responsibility? And it is the most critical element of using AI, because AI is a living, breathing thing.

[00:30:19] And the machine learning in it will keep going. So you need those authorities on code who can decipher a Trojan horse of a code assistant, of which there's over 70% in these open models out there, or closed models, that the public is using. How do you know what technology or code you're seeding in your systems, and that it's accurate and truthful and verified? You don't, unless you have that skill.

[00:30:48] So HR has to be there first and foremost, from the skills of the team to the skills that they need in their workforce to maintain an AI model responsibly, ethically, inclusively. The old recruiting example with a large retailer, I think we all know and use, where they programmed their AI model to recruit more talented engineers. And they recruited more white male engineers.

[00:31:16] It's a very famous case that the company publicized, which was Amazon. The first question that team actually was asking is not how do we hire more racially diverse or gender diverse engineers. It was how do we hire more talented engineers, period. And that open thinking that needs to be contemplated and, you know, it's being challenged by do we have a DE&I person or not.

[00:31:43] But you need those skills threaded throughout everyone, to be able to get the best candidate possible and the best skills possible. So part of that triggers something I wanted to go back to, which was: without having good knowledge about how these AI systems work, or even the forethought to say, are we measuring the right things?

[00:32:08] You may have some people in leadership who will continue to do things the way they've been done because they don't know anything about it. They have not achieved that sort of data and analytical maturity, particularly in HR.

[00:32:25] And that's fine if like core HR folks don't have those skills, but isn't that why you have a people analytics or workforce analytics team or the talent intelligence team, which traditionally looks outward or whatever. But the fact that those teams are underfunded, understaffed, may not even exist is just mind boggling to me, given the state of affairs.

[00:32:49] The other is exactly that: the Amazon case is a perfect example, where if you haven't taken a critical eye to your own data, you say, well, let's just train it on how we have hired. And let's make sure we're hiring people with the potential to succeed, matching that against our historical data.

[00:33:09] Well, you have to look at the data itself and whether that data is suspect and whether there's historical patterns of bias in the data, because now you're just going to train it to think like you, which is not what you want.

[00:33:22] But you don't know that until you look at the data and say, well, we've got a problem here because we didn't do it consciously, but we have a pattern of equating top talent with white males or whatever it is. I mean, that's why we definitely don't want to go down the DEI rabbit hole right now.

[00:33:45] But if you look at what its original intent was and take away the label, then how can responsible AI not help us power through this? Right. You're going to be able to get and discover all this new talent from talent pools that were overlooked, whether that's geographically or culturally or to your point earlier around different types of visible and non-visible challenges that people need to overcome or accommodations that are needed or whatever.

[00:34:13] But responsible AI is how we get through this because we will have the visibility and the transparency. And at some point in a few years, if the transparency is not there, then you know there's a problem. Very much, very much. And it's not I don't believe it takes a few years.

[00:34:30] The rate of acceleration, and the new headlines today with additional country races on AI and who's exceeding whom among tech companies, the pace is huge. You know, originally the industry thought that generative AI would be out in a few years, by 2028. It's out now. Agentic AI is out now. It's not a myth. It is out.

[00:34:56] And so, you know, you can say, well, when is AGI or the autonomous AI going to come out? Let's just focus on what we have right now and the thoughtful, responsible decision making and keeping that human in the loop at some point with these HR processes. Because HR is the highest risk area aside from military, biochemical, some health care models.

[00:35:20] HR is rated by some of the best academics, organizations, and business leaders around the world as the highest risk area. And when we think of inclusivity, we're trying to make sure models are bias-free, or that bias is mitigated, because no model can truly be bias-free today. It's all created by some team, some people, and it has to be managed appropriately by humans as well, since it has to be maintained.

[00:35:48] You can't let it sit in the corner. We look at the ways that people are using it in HR or talent acquisition. And you think at a root cause, I've got an applicant that's going to go through an assessment. It's so great. I can use this assessment in multiple languages now because generative AI has advanced language translations so much. However, before we move on, I need to let you know about my friend Mark Pfeffer and his show People Tech.

[00:36:15] If you're looking for the latest on product development, marketing, funding, big deals happening in talent acquisition, HR, HCM, that's the show you need to listen to. Go to the Work Defined Network, search up People Tech, Mark Pfeffer, you can find them anywhere. Hopefully it's not reading expressions. Two, is it connecting with a person with seen or unseen disabilities?

[00:36:43] And in an applicant stage, very few are willing to expose that or share that for whatever their reasons are. So how are you making sure your technology you're implementing is going to yield you, just from a business outcome, the best possible candidate? We always look at who are our top talent, but we really need to start looking at and triangulating who is my competition's top talent? How did they get that model?

[00:37:09] And with all the data brokers that have exposed all of our data out there, it's not too hard to figure out and to find out. We talked before about how some companies are just still taking a wait and see, slow rolling, what they do with AI. Do you think on the responsible AI front, if we look beyond HR and talent for a moment, following the path of cybersecurity or data privacy, right?

[00:37:38] These are broad issues. Anyone can be the weak link within an organization or even beyond if you've got access to certain systems. I was on a Gartner call and I posed the question because they were talking about AI governance and responsible AI, and they said like a two to four year outlook. And so I did ask, is that you're talking about mass adoption, right?

[00:38:00] Like you're talking about through the trough of disillusionment on their hype cycles, because a lot of those platforms have been around or have cropped up over the last couple of years. And this year is going to be a huge year for that.

[00:38:13] But the analyst's response was, yes, he does have an appreciation for that and the ecosystem that has developed, but it is still largely looking at individual systems or within the HR sort of domain and companies. Obviously, there's more departments than just HR. So he felt like, yeah, part of this is mass adoption.

[00:38:36] Part of it is potential market consolidation, and then the expansion of some of the responsible AI elements that go far beyond just the highest risk categories in HR. I mean, are your clients thinking about this in a broader sort of enterprise context? Sometimes. It depends on the size of the organization, for one.

[00:39:00] If they're multinational and very large, they're often moving on an AI ethics board or looking for that type of help in development on how they look at frameworks and responsible AI. And they're involving multiple stakeholders because so many have provoked that idea. If it's a smaller organization within the U.S. with 500 employees or 1,000 employees, not so much.

[00:39:27] And that's where a huge amount of risk is for these organizations. And I try to hit on where they're really going to care as far as the bottom line. And it's going to be with works councils and labor unions. It's going to be with defamation cases and slander or discrimination. There are many cases out there today in the AI incident directories. If you ever look at them, you can see every autonomous car accident out there, of every model.

[00:39:56] But you can also see, as you probably know, in the AI directory all the different failed cases, and notably in HR. And when you look at that, some of the smaller companies and some of the larger companies still have employees that are emailing or downloading files globally from directories. They're still emailing compensation files or performance evaluation information. Things that have been outlawed for years, before AI.

[00:40:26] And yet, as you said, the basic data privacy principles, those are not being adhered to. And I know some CIO or CTO is probably cringing right now hearing this. But it happens constantly. For the smaller organizations, it's a cost measure. It is really difficult for a lot of smaller organizations to upskill their organization for a full AI ethics board with purpose, business value that's defined, and an ROI, because there can be ROI on those pieces.

[00:40:56] But it's really difficult and complex and sometimes kludgy for them to do. And that's where outside advice and guidance can bring a lot of that in, to set up the mechanics and help them understand: here's how you have to be nimble. You've got to be agile. You have to be looking at these things. There's tremendous opportunity. There's tremendous risk, but there's tremendous opportunity. And the risk of doing nothing right now is also against you, just as in cybersecurity.

[00:41:24] So, again, you know, I will keep harping on it because it is the big amplification point in responsible AI that is often thought, oh, well, IT has that. It's everyone's job now as they're using AI models. And they've got to thread that all the way through with the cybersecurity and data privacy.

[00:41:41] That is usually what will get the biggest business case for organizations of all sizes to move the needle on why we need these frameworks, and why, beyond the misconceptions that it's just inclusivity or DE&I or bias, it's the right thing to do. Often the right thing to do doesn't win the business case on its own with organizations.

[00:41:59] But when you hit back on the Achilles heel of risk, slander, fraud, overrun costs, and the cybersecurity pieces and how those skills are now dispersed or need to be dispersed in order to be competitive, it's a different understanding and a different criticality that senior leaders are really prioritizing. Yeah, I harp on this all the time, as you probably know.

[00:42:26] So, I mean, when I say when it comes to responsible AI, we are all responsible. And I don't think that's like this grandiose, you know, earth shattering statement. It just follows what we've seen with cybersecurity. Just like anybody can be, you know, a weak link. Anyone can click on a spear phishing, you know, email or things like that.

[00:42:46] And so one of the things that I haven't seen enough of is in the same way you would do like compliance training with harassment and cyber and data privacy or whatever when you onboard and at some recurring frequency, usually annually. I haven't seen a lot of people adding to that, you know, a responsible AI module.

[00:43:12] And so I just wonder if we're still in the early days, where a lot of companies still haven't even set policies. Why would I train you on how to use something responsibly if I haven't told you you could use it at all? They're taking that, you know, head in the sand kind of approach.

[00:43:28] But it just seems like, I mean, the same way when everyone at IBM had to learn what cognitive computing was and what IBM Watson was capable of doing, you still had modules around data, around cloud, around cyber. Because we're not just, this isn't just a new set of toys that you're playing with. There are implications to the data that it's getting, the data that it's trained on, you know, the inputs and the outputs.

[00:43:57] And I think some of those fundamentals have not really changed. I 100% agree with you on that. Yay. I love hearing that. The responsible AI modules, whenever someone hears module or program and they haven't done it before, they can have that knee-jerk reaction of extra cost. And there's an ROI and a business value to it that is, you know, threaded on the organization's philosophy. But it also can be expanded.

[00:44:26] And that's where the teaching comes in. And at the HR tech sessions last year with Human Capital Institute, of which I'm on the board of, there is so much fever and excitement around what companies like Vero AI or FairNow can do with the governance and the regulations of that. That has to be expanded. And HR being able to sell that because it comes down to, again, is HR the priority of the company?

[00:44:54] And that is still an age-old question of balance, and I feel like HR needs to be more valued in terms of the impact they can bring. Right now, HR is feeling very threatened about keeping their jobs. HR has got to accelerate their skills. They've got to be able to articulate the business cases. And we've been saying that for years, whether it's to a CFO on a business process outsourcing model, global payroll, a worker time system, or learning and development.

[00:45:21] And quantifying the business case is more prevalent than ever. And those types of modules, programs, and thinking have value. And it's been proven by different research institutes, including IBM's. But it's also been put into practice. And it's funny, I was with a large financial tech firm in Europe last year.

[00:45:44] And they wanted me to come in and teach their HR and their HR managers on AI, from digital literacy to data. Maybe they hadn't done a lot with it; like you said earlier, they didn't have that workforce analytics team that was so skilled or really beefed up with resources or skilling. So they wanted me to look at the entire program and put a program together that we executed. But the first request was from the CHRO.

[00:46:12] She took me aside and she's like, our chairman of the board wants training for him, his team of shareholders, and for all of us in the C-suite. And I said, I understand. And it was really getting into the tactics that they needed to understand that, honestly, I won't get off on a soapbox, but any consumer needs to know on sharing data with what's listening around us, to how data is being consumed.

[00:46:38] Whether it's IP and copyright laws, whether it's your own data or your organization's data, that construct of digital learning is needed at every single level all the way down. And they've got to get their hands dirty into thinking through it.

[00:46:54] And the organizations that are best equipped, I talked about this financial firm, they are doing wonderfully for a company that never used tech and was very consumer-interacted to now using tech effectively while preserving the customer experience, really just doing marvelously. And the first step they had to do was train their teams. And that investment's required.

[00:47:17] It's for frontline managers at a different level, but for the data protection and data stewards more than ever. We are giving away data every single day, and our companies are one of the biggest exposure points between the hours of eight and five, wherever someone works. Yeah, absolutely. I think you are hitting on something I have spoken about before. So much great stuff, Bob. I'm hoping that we did. Yeah, no, absolutely.

[00:47:46] What I was thinking was when it comes to transformation, which is still on everybody's bingo card that's in this space. You can't escape talking about transformation and change. And this is a significant one. I mean, I spent my whole career doing transformation, but this one is bigger than ever. And so you're right, there are no populations that should be excluded from this or can be excluded from this.

[00:48:15] And the leadership teams in particular, it's not just so that they can build their own agents, which might be nice. And it's so that they have a full appreciation of the benefits, the opportunities and the risks.

[00:48:29] I mean, when people start bringing you ideas related to how they could use AI and automation, you want to have a full appreciation for what the implications of those ideas are, such that you can put some backing and build real business cases behind what you're proposing. And so I think they really need to get on board. I mean, IBM did this with Watson. They did it with design thinking. They made sure it wasn't just for product teams.

[00:48:56] It was leadership teams, because you could have a product mindset in HR. You could have a product mindset in supply chain. You've really got to understand some of these principles that are almost transferable across domains. So I think it's incredibly important. I mean, I'd like to see some enterprise sort of training and incubation, just like we did back in the day with Watson and the cognitive build program that I wound up leading.

[00:49:24] Because, you know, once you unleash that creativity and upskill people on what's possible, you're going to get a lot of ideas. What are you going to do with them? Do you have the capacity to embrace those ideas? And it also goes back to what we talked about with diversity, not just DEI, but, you know, the color of your skin and your gender, et cetera. It's like, you know, different backgrounds, different ways of thinking, neurodiversity.

[00:49:50] You've got to understand how to put all these things together into effective teams and, you know, impactful products and services. I think Bill Gates just came out in one of his interviews of saying that he would have definitely been on the spectrum from a neurodiversity piece had he been tested back then.

[00:50:09] And can you imagine having the equation set before you: whether you like him or not, we just failed to hire Bill Gates, somebody that we know, trajectory-wise, is going to change the world and the technology landscape. It's the Bill Gateses that are out there.

[00:50:50] We've got to be able to harness the ideas. Curiosity, creativity, and innovative thinking are still among the top three skills that every research firm is rating as needed for AI and technology thinking. And we have to find the right way to harness that feedback.

[00:51:07] I mean, when we're integrating ChatGPT into Slack systems and stating a 25% improvement in employee engagement so quickly from onboarding, there are so many different vehicles to listen effectively to what candidates and employees need, to be able to hire the best talent. And that's what responsible AI should really get into: the optimization of the best possible talent. Absolutely.

[00:51:37] Love it. Jen, we could probably talk for at least another hour. I want to be respectful of your time, but I appreciate the shout out at the end to all the square pegs out there. I certainly know what that's like. And the job market is not always very kind when you're trying to squeeze everyone into round holes.

[00:51:59] And by the way, IBM led an incredible neurodiversity program in recruiting candidates, and I was fortunate enough to help other government agencies that really needed IT help or HR IT help, and so many other organizations that were challenged in trying to find other talent pools, much like government is, that can work within jurisdictions.

[00:52:22] We found such tremendous success with these programs that we have to think broader when we're leveraging these technologies. So when we talk inclusivity to the CEO's heart, we're saying, I want to include the competition's talent because I want them, because I can upskill them. Or I have the skills in-house and I'm going to create on-demand learning all the time using generative AI, so I can push it out and it's maintained and it's fresh.

[00:52:48] There's so many different MOOCs and online learning programs that many employees, candidates, and individuals are going to get. And it's not all good now. So we really need to have the best possible in our talent. Yeah. You also had me thinking about performance reviews. If you look at statistics now about the neurodiverse population and the double-digit percentage of your existing workforce that is already amongst that population,

[00:53:17] if you really look back at some of those performance evaluations, I feel like you just didn't know enough about them to ask the questions about why they might be underperforming. And I know there's a million reasons, right? It could be different, you know, personal issues, mental health, not the right fit for the job or whatever. But I feel like everything just has a tendency to get sort of, you're just put into that bucket. And you talked about where is the organization actually investing?

[00:53:46] Are they just going to invest in technology and just get down to as few humans as possible? Or are you going to think more holistically about, you know, the human and AI, you know, sort of collaboration that's needed? You can't just cut everyone. I mean, even Elon Musk, you know, acknowledged that he over-automated, you know, his, excuse me, manufacturing. So there is a happy balance point between humans and AI, and it's different for different organizations.

[00:54:13] But as organizations think about what do they do when they do achieve cost savings or cost avoidance, how do they reinvest that? And you've got to find some balance. It can't all go back into more technology. It's got to be into upskilling and reskilling and making sure that you're getting the best of the human talent.

[00:54:31] Yes, that frontline manager, the first line leaders, second line leaders, are more coveted than ever for their skills to be able to work with those who are neurodiverse, or have seen or unseen disabilities, or are of different ages. That diversity component of somebody who is flexible, flexibility being the fourth characteristic that research firms are saying they need in skills, that flexibility of the manager to be able to work with different communication styles.

[00:55:00] And yet get great productivity yield, that is unique and a treasure. And some organizations have really done well and are having more success with it. We have enough tools out there for employee productivity and employee listening that it's a matter of which business cases we are going to put together. We can be agile without putting in massive transformation projects. The biggest transformation has to start with the innovation and the self-learning.

[00:55:28] The self-learning and then the skilled learning within the organization to be able to get everyone digital literate. Well said. Okay. I'm really going to give you back your time now. And thank you again for joining me, Jen. We'll have to have you back and get an update on how your solopreneurship and your new career is going. But thank you so much for spending so much time with me. And I think there's a lot of great insights for my listeners.

[00:55:58] I want to wish you well for a very prosperous 2025. Thank you, Bob. You as well. Look forward to seeing you soon. Sounds good. Thanks, everyone, for listening. We'll see you next time.