Bob Pulver is joined by Pauline James and David Creelman to discuss the transformative impact of AI on employee experience and organizational change. They discuss the evolving role of HR and leadership in navigating AI adoption, emphasizing the importance of continuous learning, data-driven decision-making, and fostering a culture of adaptability and innovation. The conversation highlights ethical considerations in AI, advocating for responsible governance, oversight committees, and transparent policies to ensure fairness and compliance. They reflect on lessons from the pandemic in driving rapid transformation and the necessity for agile workforce planning to balance cost, capability, and strategy. Pauline and David share their insights on experimenting with generative AI tools, building intuition through hands-on learning, and the critical role of foundational skills like AI literacy and design thinking. They leave you with advice for leaders on effectively managing change, along with details about Pauline and David’s upcoming masterclasses and events.
Key Topics Discussed:
- AI and Organizational Change: The role of AI in workforce transformation and how it’s reshaping leadership, employee roles, and collaboration.
- Upskilling for the Future: Why continuous learning and fostering AI literacy are essential for both leaders and employees.
- Breaking Down Silos: Strategies to remove barriers to collaboration and align organizational goals in the age of AI.
- Responsible AI: The importance of governance, transparency, and ethical practices in implementing AI solutions.
- AI Augmentation vs. Automation: Real-world examples of using AI to enhance human capabilities, from creating content to personalized learning.
- Strategic Workforce Planning: How organizations can rethink processes, roles, and incentives to align with AI-driven opportunities.
Takeaways for Listeners:
- AI adoption requires cultural shifts, leadership alignment, and trust-building across all organizational levels.
- Continuous upskilling, including AI literacy and data literacy, is critical for staying ahead of technological advancements.
- Leaders must move from managing tasks to managing outcomes, leveraging AI to empower teams rather than micromanage.
- Organizations should focus on responsible AI practices, ensuring compliance, transparency, and inclusivity at every stage.
- Experimentation and hands-on learning with AI tools can drive innovation and help organizations unlock new opportunities.
Notable Quotes:
- Pauline James: “The importance of transparency and training to mitigate risks cannot be overstated.”
- David Creelman: “We need to experiment hands-on to learn what AI can and can’t do—this is critical for its integration.”
- Bob Pulver: “Leaders need to understand AI’s impact to make informed, data-driven decisions for their organizations.”
Chapters
00:00 – Welcome and Introductions
03:00 – AI in Workforce Transformation
06:45 – Leadership in the AI Era
10:30 – Breaking Silos for Collaboration
14:20 – Upskilling and AI Literacy
18:10 – AI Augmentation vs. Automation
22:35 – Lessons from Rapid Change
27:15 – Responsible AI and Governance
32:40 – Experimentation and AI Tools
37:50 – AI’s Rapid Evolution
43:25 – Strategic Workforce Planning with AI
48:00 – Final Thoughts and Upcoming Projects
For more resources, visit:
- Pauline’s company website: Anchor-HR
- David Creelman’s publications and insights: David Creelman
- Upcoming events: HR Gazette
For advisory work and marketing inquiries:
Bob Pulver: https://linkedin.com/in/bobpulver
Elevate Your AIQ: https://elevateyouraiq.com
Thanks to Warden AI (https://warden-ai.com) for their sponsorship and support of the show! Warden is an AI assurance platform for HR technology to demonstrate AI-powered solutions are fair, compliant and trustworthy.
Powered by the WRKdefined Podcast Network.
[00:00:00] Welcome to Elevate Your AIQ, the podcast focused on the AI-powered yet human-centric future of work. Are you and your organization prepared? If not, let's get there together. The show is open to sponsorships from forward-thinking brands who are fellow advocates for responsible AI literacy and AI skills development to help ensure no individuals or organizations are left behind. I also facilitate expert panels, interviews, and offer advisory services to help shape your responsible AI journey. Go to elevateyouraiq.com to find out more.
[00:00:28] Hey everyone, welcome back to Elevate Your AIQ. On today's episode, I am joined by not one but two excellent guests. Pauline James is the founder and CEO of Anchor HR, among other hats that she wears, and David Creelman, a globally recognized management consultant and researcher. They are also the co-hosts of a podcast series on AI and HR in partnership with HR Gazette based in Canada. The three of us cover a lot of ground at the intersection of AI and workforce transformation, unpacking how organizations can prepare themselves and their talent to thrive in our AI-powered
[00:01:09] future. This includes understanding lessons we learned in prior transformations and disruptions like the pandemic, continuous learning, responsible AI literacy, and bold leadership to drive behavior change. Pauline and David have a wealth of experience and expertise, and I really enjoyed talking shop with them. I hope you'll find lots of great takeaways from each of them. As always, thanks for listening.
[00:01:30] Hello, everyone. Welcome back to another episode of Elevate Your AIQ. I'm your host, Bob Pulver, and with me today, I have two special guests, Pauline James and David Creelman. Thank you both very much for joining me. How are you guys doing today?
[00:01:43] Great.
[00:01:44] Thanks so much for doing this. By way of introducing you to my audience, I thought you guys could just give brief intros about your background. Pauline, why don't you start?
[00:01:55] Yes. I'm the founder and CEO of Anchor HR, and by way of background, prior to launching Anchor, I led the employee experience function for a few large organizations.
[00:02:09] And with that, you know, I got to do the fun part of setting the proactive strategy around how we engage and excite employees. And then I also got to roll up my sleeves when things were going sideways and not going as well as we would prefer.
[00:02:22] And also within that, supporting a lot of larger change management initiatives within organizations as well. I'll note that I started my career in training. I've also worked in operations, and that leads me to have a really strong focus on enablement.
[00:02:38] And I like to think in practical and scalable initiatives and programs as well. So at Anchor, I'm really focused on supporting organizations to assess and implement their HR and employee experience strategy and to support them with transformation.
[00:02:54] I work on various projects related to tech, so really looking forward to this conversation today. I'll note that those projects are often in the Canadian market and across North America.
[00:03:06] And the largest tech-related project I've worked on was with an EU agency on AI adoption.
[00:03:14] And it was a large change effort with a focus on organizational readiness. And interestingly, this was launched before the advent of GenAI.
[00:03:23] Awesome. Yeah, we'll definitely have to dig into that. And David, how about you? I know you've had a varied background as well.
[00:03:30] So yeah, now it's getting to be the point that I've had a long career and it gets scary where you say, well, this is what I did 20 years ago, and then this is what I did 40 years ago.
[00:03:37] And you think, what, 40 years ago? Is that possible? Anyway, a lot of it's management consulting and research and writing.
[00:03:44] In terms of AI, the most exotic thing that I did this year was a presentation for the Central Bank of Laos in Southeast Asia.
[00:03:51] But all my work's been fairly global and I've done things in China and Japan this year, US, Europe, and Middle East.
[00:03:58] And the writing stuff is some of the most fun, intellectually challenging stuff.
[00:04:01] And I've done four books with the Marshall School of Business at the University of Southern California.
[00:04:06] I did one with Peter Nabin, who's the chief people officer of the Olympic Committee in the US.
[00:04:12] And everyone knows Dave Ulrich. If you ask people which HR consultants they know, Dave Ulrich is always the name that comes first.
[00:04:19] So I did a piece for him in a recent book on workforce planning, along with Alexis Fink, who's a very deep thinker.
[00:04:26] So that was fun.
[00:04:29] Finally, on sort of the academic side, my hero has always been Henry Mintzberg out of McGill University.
[00:04:34] And I've got to work with him on learning programs.
[00:04:36] So I've had a great opportunity to stand beside greatness and hopefully pick up some tips from them.
[00:04:45] Yeah, for sure.
[00:04:46] No, I definitely recognize those names, certainly Dave Ulrich.
[00:04:49] And I understand you guys have done a whole bunch of projects together, right?
[00:04:53] So I wanted to just ask you about that.
[00:04:56] And Pauline already mentioned the biggest, most exciting and perhaps least expected one was several years ago with the EU Agency on AI.
[00:05:05] So she's mentioned that.
[00:05:07] I think the main thing we're doing now in terms of deliverables is AI-related workshops.
[00:05:12] And again, it's the practical management side of it, not the technical side of it.
[00:05:17] And we're also doing a lot more just sort of general education with podcasts and LinkedIn posts and so on.
[00:05:23] So I guess those would, it's funny, it all sort of relates around either change management, hands-on education, or just sort of broadcast education.
[00:05:32] Sure.
[00:05:33] I think on those workshops, totally appreciate staying on the business side because there's a lot to understand, like just in terms of what are the impacts, what are the risks, what are the implications to engagement, retention, even how our company is actually deploying it.
[00:05:50] Do you get a lot of leaders?
[00:05:54] What's the makeup of some of the courses that you're teaching?
[00:05:59] I mean, basically, the people who come tend to be close to the HR function.
[00:06:06] So simply because that's where Pauline and my background tends to be, although to be honest, the kind of information is relevant across the organization.
[00:06:15] And the sort of things we cover, you know, start from, I mean, you have to kind of understand what an AI is, what it's capable of now and might be capable of the future.
[00:06:27] So that's kind of building the intuition side.
[00:06:29] But then you get into issues, well, how do we develop capability?
[00:06:34] What are the risks?
[00:06:36] What are the barriers to moving forward?
[00:06:39] Those are the kinds of things we discuss.
[00:06:41] One of the things that's come up in previous conversations on this show and elsewhere is making sure that leaders really are part of this sort of upskilling process and really understanding the impacts of all of this.
[00:06:57] Because if they don't, it's not just a question of do they learn, you know, prompting, do they learn how to build agents and the technical, hands-to-keyboard kinds of stuff.
[00:07:06] If they don't appreciate some of the big, you know, sort of use cases beyond just automating things, and looking at augmentation, augmenting human capability and creativity, et cetera, then they're going to have a hard time appreciating, you know, the potential ROI and understanding why the juice is worth the squeeze, as they say.
[00:07:33] Right.
[00:07:33] And, you know, with that EU agency project that Pauline mentioned, one of the things that we heard at the start is a little bit along the lines of, oh, there's an IT team.
[00:07:44] You know, they're responsible for transformation.
[00:07:46] They've hired AI experts.
[00:07:48] They're going to do AI and then give us something, you know.
[00:07:52] And so there's a little bit of a sense that it's over there,
[00:07:55] that some really smart people are dealing with it.
[00:07:58] And I think one of our main messages was, no, this is a business thing.
[00:08:04] You're the business leaders.
[00:08:05] You need to figure out what your priorities are, how AI fits into those priorities, fits into where the organization is going.
[00:08:13] So that's, I think, a big side of it is AI is not something that gets delegated to some technical specialists in an ivory tower someplace.
[00:08:24] Yeah, for sure.
[00:08:25] Some interesting initiatives, too.
[00:08:26] We've supported HR in facilitating discussions around how roles are going to need to evolve and how leaders can support in that regard.
[00:08:34] And what is that going to look like for all of us in leadership positions? That, you know, in addition to helping inform HR on the learning path and how we build competency, has been a really important tool for engaging leaders in change management.
[00:08:49] And often we've seen that be the largest gain.
[00:08:52] And then we do have some workshops coming up in the new year that are much more leader-focused as well, which we're excited about.
[00:08:58] Awesome.
[00:08:59] So, Pauline, I wanted to ask you, when you're trying to transform any organization, I mean, these are big initiatives and in general, not necessarily in the talent space, but in general, organizational transformations are complicated and they don't always go as planned.
[00:09:20] And a lot of that obviously goes back to people and behavior change and things like that, as you know well.
[00:09:25] And so I was just curious about your perspective on talent transformation or workforce transformation specifically and, you know, what's sort of holding people back from finding success here.
[00:09:38] I think you actually highlighted it well in how you framed the question: when we think of transformation, our brain naturally goes to grand and impressive-sounding initiatives, which is what they're intended to be.
[00:09:52] We have broad, inspiring goals.
[00:09:53] And then there's the reality that it takes often a significant investment of time and upskilling and patience and encouragement for people to adopt new patterns and ways of working.
[00:10:06] So, first, I'd argue that it's most effective for us to consider how we enable more micro and continuous learning within an organization, how we ensure we have that strong cultural foundation, that we're aligning employees across roles, aligning them across divisions.
[00:10:23] We're fostering collaboration.
[00:10:25] There are often barriers set up in our incentive programs and communication programs that prevent that from working well.
[00:10:34] And how do we remove the barriers created by micromanagement and the silos that result?
[00:10:39] Often employees aren't able to collaborate effectively across divisions.
[00:10:43] Actually, they get penalized for doing so.
[00:10:46] And, you know, also making sure we're not mistaking communication, updated policies, and SOPs for change management.
[00:10:53] And then the second point would come back to what you were importantly raising as well is how do we support leaders?
[00:10:59] What does that look like?
[00:11:00] And we expect a lot from leaders and so does the rest of the organization.
[00:11:04] And too often, I would argue it's not with enough support.
[00:11:08] We want leaders to be personable.
[00:11:10] We want them to be authentic.
[00:11:10] But we also don't want them to take any resistance personally.
[00:11:13] We want them to be effective.
[00:11:22] We want them to do all of that in an authentic, effective way.
[00:11:24] We don't give them enough information in advance.
[00:11:26] We don't give them the same amount of time to absorb and assimilate to the new ways of working that, you know, those who have made the decisions potentially have had.
[00:11:34] We'll invest in training them on how to be a great coach.
[00:11:38] But then we give them performance management forms and directives through which we actually actively encourage subjective feedback on style, a transactional approach.
[00:11:45] We're going to stack rank employees.
[00:11:48] So, yeah, we often get in their way.
[00:11:51] Yeah, I feel like, I mean, some of it might just come down to how they're incentivized, right?
[00:11:56] Like what are you looking out for, you know, a broader part of the organization?
[00:12:02] I just had this vision of office space come into my head.
[00:12:07] Like, is this good for the company kind of thing?
[00:12:09] But it's true in some level.
[00:12:11] Like you've got to incentivize them to break down some of those silos and to look out for the sort of longer term success of, you know, this.
[00:12:20] These are the longer term benefits and not just look at what are my metrics going to look like this this month or being competitive with your managers or, you know, whatever it is.
[00:12:30] We've got to maybe think about different types of incentives when it comes to driving this transformation.
[00:12:35] Yeah.
[00:12:35] And it could be intrinsic as well.
[00:12:37] No one ever writes an incentive scheme which says, do not allow your staff to talk to other people in the organization.
[00:12:45] So it's an unintended side effect of the way the organization and incentives are set up, not an intentional action.
[00:12:54] Yeah.
[00:12:54] No, that's fair.
[00:12:55] That's fair.
[00:12:56] So, David, when we think about AI sort of entering the picture here and the need to sort of upskill and reskill, I mean, this is a huge undertaking.
[00:13:08] I mean, how do you see the role of HR sort of evolving to build this sort of new capability?
[00:13:17] Because it's pretty disruptive to the traditional model.
[00:13:21] Let me tell you where HR might start: they might start from the point of view of fear, which is not all wrong, because they say there are all these legal issues.
[00:13:32] We've heard these horror stories.
[00:13:34] We're responsible for a lot of labor law compliance and so on and protecting employee privacy.
[00:13:42] So as well as things like it may not be illegal, but there's certain kinds of surveillance we don't believe in doing.
[00:13:49] And so there's all that side of things.
[00:13:52] And so that is where HR may say we need to be involved in the governance of what are the rules?
[00:14:00] What are the guidelines?
[00:14:01] How do we make sure this doesn't go off the track?
[00:14:03] So that's one side of the activity.
[00:14:07] Then maybe the more positive thing and the obvious thing is we want to give AI training to individuals.
[00:14:14] And these days that mainly means using generative AI and generative AI tools.
[00:14:20] And as Pauline keeps mentioning, it's continuous learning, because it's not that, yeah, yeah, we'll send them on a course on how to use ChatGPT.
[00:14:27] And then maybe five years later, we'll update it.
[00:14:30] No, it's got to be ongoing.
[00:14:31] You give them something to introduce them to the tools, but there's got to be a continuous learning aspect.
[00:14:39] And so both sides: the initial introductory workshop on AI is something HR should be creating.
[00:14:47] And then the mechanism for continuous learning should also be a part of what HR understands how to orchestrate.
[00:14:54] Now, so we started at the very top, which we call the governance layers, you know, the guidelines.
[00:14:59] And let's make sure we don't get into trouble.
[00:15:00] We talked about the lower level, which is individuals using tools to increase their productivity.
[00:15:05] But there's also stuff going on where HR has to sort of address the question.
[00:15:12] And along with IT, and I think it may be more an IT issue than an HR issue, but HR can be involved in the training, which is at the departmental or organizational level: if we're going to introduce this new tool and the vendor is claiming this is AI.
[00:15:29] Do we know how to assess those tools?
[00:15:32] That's a whole new skill set.
[00:15:34] So, again, that's training, that's HR.
[00:15:36] And we're not done yet, because the next piece, again, begins to get beyond HR's regular competence, though it shouldn't be: understanding the strategic impact.
[00:15:48] Like, is the industry going to be disrupted?
[00:15:50] And this is obviously the most important thing.
[00:15:52] So we can have all our employees be more productive and we could be buying fancy new tools and implementing them effectively.
[00:15:58] But then something happens to the industry and people are no longer buying CDs.
[00:16:03] They're streaming music.
[00:16:04] And you're thinking, but we're the best CD company in the world.
[00:16:08] And you think that's just too bad because something's happened.
[00:16:12] And there's going to be a lot of that going on, where something big happens and suddenly your industry looks completely different.
[00:16:22] So, again, I can imagine HR helping to facilitate both the kind of strategic conversations.
[00:16:30] I've been going on a long time here.
[00:16:31] I told everyone I would keep these short, but this is a big one.
[00:16:36] So there will be a strategy committee.
[00:16:39] There'll be people doing foresight, all those kinds of things to try to figure it out.
[00:16:44] There is another side that just in terms of organizational design.
[00:16:49] So even if we don't have smart people seeing the future, we do want to have an organizational design that's agile.
[00:16:55] And lots of people say, oh, we're going to be a fast follower.
[00:16:58] OK, that's a reasonable strategy.
[00:17:00] Are you designed to be a fast follower?
[00:17:02] Are you, you know, can you do that?
[00:17:04] And they say, oh, no, actually, we can't do that at all.
[00:17:07] OK, well, that becomes an HR issue because it's an org design or culture issue.
[00:17:11] And I think that gets into some of the things we were talking about before in terms of you really got to get people to open the aperture to think about doing things in a different way.
[00:17:22] And people can get sucked into these traditional mindsets, whether it's the existing metrics or the existing org structure or the way roles are structured and, you know, the way you build.
[00:17:34] You know, these are the tasks.
[00:17:35] These tasks require these skills.
[00:17:37] These skills require these people.
[00:17:38] And the whole, you know, without going down the skills rabbit hole, there's just a lot of sort of friction that could prevent a team from, you know, recognizing that you do need to sort of reimagine how this might be structured and that things aren't going to stay the same.
[00:17:59] And this isn't, maybe we can't follow this sort of iterative transformation into this, you know, sort of AI powered future, if you will.
[00:18:08] Maybe we now need to potentially, you know, sort of leapfrog some of the things that we've been doing because we were that far behind before.
[00:18:18] Or I don't know.
[00:18:19] I just think there's, for a lot of organizations, they've really got to wrap their head around just thinking about things quite differently than they have before.
[00:18:28] We've seen where the approach to capability building can really help in that regard too.
[00:18:32] When you begin to implement knowledge circles across the organization, just the chance for employees to come together across divisions, to learn together, to think through how their work bumps into each other, how processes can be improved.
[00:18:48] And even more so, there are foundational skills required across the organization in ways we may not have imagined before.
[00:18:56] AI literacy, how that's important at every single level of the organization.
[00:19:02] AI literacy, data literacy, probably design thinking, right?
[00:19:06] We need many more employees to understand design thinking, across levels, than we have previously.
[00:19:12] Yeah, I think that that is underappreciated right now, whether it's design thinking or product mindset, but really thinking through, you know, some of the experiences that people are going to have.
[00:19:25] So, Pauline, some of the work you've done, I mean, with employee experience and engagement and things like that.
[00:19:32] I mean, if you were to put an employee experience sort of lens around some of this, this isn't just about, you know, reshuffling the deck and moving people around and giving these people these courses and doing some automation over there or whatever.
[00:19:49] I mean, you really need to take this sort of holistic look, like what are the ramifications of these changes on people?
[00:19:58] And even psychologically, you know, what is this going to do for them and what are the potential implications of that?
[00:20:05] And, you know, what I always sort of have in mind is sort of two cultures.
[00:20:11] And you imagine one is a rigid culture and everybody's scared of AI and they're used to doing a certain thing, it's all very process driven.
[00:20:20] And they're trying to manage this top down.
[00:20:23] But there's so much uncertainty and other priorities that they're really struggling with it.
[00:20:28] And there's another organization that's much more relaxed.
[00:20:31] And all their, you know, so many of their employees have been dabbling with AI and trying things.
[00:20:35] And they've taken courses and they're talking to each other and they're keeping their eyes open.
[00:20:39] And they have this, as you say, product mindset.
[00:20:42] And another one has a design thinking mindset.
[00:20:44] And, you know, they're coming up with ideas sort of obvious to them.
[00:20:48] Hey, we should be doing this or we should be trying that.
[00:20:50] And you don't have to have it all figured out by some, by the CEO or some strategy committee.
[00:20:56] You have this capability across the organization.
[00:20:59] And the organization is sufficiently laid back, in a way, to let them change the organization the way it needs to be changed.
[00:21:10] Hi there.
[00:21:10] I'm Peter Zollman.
[00:21:11] I'm a co-host of the Inside Job Boards and Recruitment Marketplaces podcast.
[00:21:16] And I'm Stephen Rothberg.
[00:21:17] And I guess that makes me the other co-host.
[00:21:19] Every other week, we're joined by guests from the world's leading job sites.
[00:21:23] Together, we analyze news about general niche and aggregator job board and recruitment marketplaces sites.
[00:21:30] Make sure you sign up and subscribe today.
[00:21:34] I'd add to that.
[00:21:35] We saw with COVID how many organizations rapidly adjusted to very different circumstances, and engagement went up in many sectors.
[00:21:47] Employees love being involved and they enjoy collaborating.
[00:21:52] And the situation was handed to organizations where suddenly there was a common context across the organization.
[00:21:58] There was alignment that we need to move quickly, that we need to change, we need to adapt, we need to move our offer online if we're going to manage through this, survive through this.
[00:22:07] We need to figure out how to work remotely immediately, even if it's something we've resisted culturally for decades.
[00:22:14] And with that common context, agreement on the need for change, alignment on objectives, and the fact that it couldn't be overly controlled from the center.
[00:22:23] And it was interesting to see how many articles popped up asking, is change management dead, right?
[00:22:27] Can we stop going through all this fancy stuff?
[00:22:30] Well, sure, if you can recreate without the trauma and drama of having a pandemic, that common context across the organization, that agreement on the need for change, alignment on an outcome with room for experimentation and involvement to achieve it, then yeah, we can move.
[00:22:47] We can move quite quickly.
[00:22:49] But I was really hoping that lesson would stick much more broadly than it feels like it has.
[00:22:54] I was a bit surprised at that and all the return to office momentum.
[00:23:01] I mean, that was, yeah, we really, a lot of companies, not all, but a lot of companies really snap back pretty quickly.
[00:23:10] IBM, I mean, I spent over two decades at IBM and I'm still an advocate on many levels.
[00:23:17] But I was surprised at their return to office, partly because of my personal experience way before COVID.
[00:23:27] I mean, I was working remotely for them for years, like from like 99 to 2004 or 5.
[00:23:36] And they had all their own technology to do that successfully.
[00:23:40] I mean, before Zoom, most people think Zoom was like this novelty.
[00:23:44] It wasn't.
[00:23:45] IBM had an entire suite of unified communications and collaboration tools way before that, that we used successfully.
[00:23:55] Well, to a degree.
[00:23:56] I mean, we didn't have the broadband that we have today and the Wi-Fi everywhere.
[00:24:00] But if you had a cable modem, you could access and have video collaboration all the time.
[00:24:07] So I was just surprised that they didn't, I guess it depends on, goes back to leadership, right?
[00:24:12] And what you think is working.
[00:24:14] And if you can get that much more, you know, either productivity or engagement or whatever by having people in the office,
[00:24:22] then you're going to make some of those difficult decisions.
[00:24:25] And I would argue it's both more effective and also more challenging to manage to outcome as opposed to managing tasks.
[00:24:34] But, you know, David, I welcome your reactions.
[00:24:36] I think AI is going to really instill the same requirements for leadership.
[00:24:42] We're not going to be able to manage tasks in the same way.
[00:24:44] Employees will have at their disposal all the expertise that we may need them to have in a role.
[00:24:51] And we'll need to manage differently in that circumstance as well.
[00:24:54] What I hope that leaders would do in these situations is not use, you know, their sort of gut, right?
[00:25:04] And this is what worked before.
[00:25:05] This is what's going to work going forward.
[00:25:08] But really think about making, you know, data-driven decisions.
[00:25:11] Like, show me the data that tells me, to your point, yeah, maybe it is about outcomes and value or whatever.
[00:25:20] And you've got to look at some of the more strategic sort of measures and not just look at, you know, tactically, is this, you know, is this stock up or down?
[00:25:28] Or, you know, what am I hearing anecdotally or whatever?
[00:25:31] But data, I mean, just sort of unintentionally sort of pivoting here to the data topic more generally, which is, you know, the fuel that powers all these AI algorithms.
[00:25:41] And some of those algorithms are involved in decision-making, you know, processes, right?
[00:25:46] So when we think about leveraging data, you know, for and about people, how are we doing in that regard?
[00:25:55] Because I feel like, I guess, David, this is probably a question for you.
[00:25:59] But I just feel like I get into all these conversations about, you know, talent intelligence and people analytics and look at all this data we have and look at all these insights.
[00:26:07] And, you know, we can predict attrition and we can do all these, you know, crazy things.
[00:26:12] And yet these same teams are getting smaller, it seems like.
[00:26:15] A lot of organizations are not investing.
[00:26:18] In fact, they're divesting in some of this.
[00:26:20] And it's like, wait a minute.
[00:26:21] These are the super smart people who are bringing you all this intelligence and all this data.
[00:26:29] Isn't this what you wanted?
[00:26:30] And yet we're still not having success.
[00:26:32] So I guess, I don't know if there's a question in there somewhere, but it's really about how are we leveraging that data?
[00:26:37] Are we having success or what are the challenges in not using it?
[00:26:40] And here we're at risk of going on for an hour.
[00:26:44] First of all, I want to talk about the return to work thing.
[00:26:46] And it just continually drives John Boudreau from the University of Southern California crazy that people are making these return-to-work decisions based on their own personal feeling.
[00:26:55] And the analogy he uses is, well, you know, let's imagine there's a situation where the company was thinking of doing something and most of the customers were saying we don't want this.
[00:27:06] Would you accept it when the chief marketing officer says, well, I know that most of the customers don't want this, but my feeling is it's a good idea anyway, so let's just do it.
[00:27:15] Or would you demand a certain amount of evidence before you made that decision?
[00:27:20] And he said, why is it that we would demand evidence and experimentation and data for that kind of decision affecting customers?
[00:27:29] But when it comes to employees, you know, we just say, no, we'll just do what we think.
[00:27:34] And there's so much opportunity for analysis and fine tuning, you know, which groups.
[00:27:42] There'll be different groups where it makes sense for them to be in the office and groups where it doesn't make sense.
[00:27:47] There may be different personalities.
[00:27:49] There may be different ways you manage people, as Pauline was saying about managing output versus tasks.
[00:27:54] There are so many variables that you could dig into and create, I'll say, an optimal solution of how you manage people.
[00:28:04] And there seems to be so little interest in doing that analysis.
[00:28:08] Now, more broadly on people analytics, which I did a program on today, the big problem was that people started saying, oh, we got a lot of data.
[00:28:19] Let's hire some data scientists and they'll do something.
[00:28:22] Like, we'll just, they've got PhDs from MIT and we'll let them loose on the data.
[00:28:27] And that was a mistake.
[00:28:29] You don't need data scientists as much as you need people who will start with saying, what are the business questions we need to investigate?
[00:28:36] Such as: who needs to return to work?
[00:28:40] Who's better at home?
[00:28:41] What management techniques work?
[00:28:43] And then go out as almost like a research scientist, you might say.
[00:28:48] But I want to step back from that statement, research scientist.
[00:28:53] Someone with a research scientist mindset.
[00:28:54] But they're not going to spend three years studying this.
[00:28:57] They're going to gather the available data and make a business decision in a timely way.
[00:29:02] But that's the kind of approach.
[00:29:06] So that kind of approach wasn't built into analytics originally because people were thinking in terms of hiring data scientists and statisticians and so on, as opposed to hiring people who were almost like consultants, you might say, to identify and solve business problems using data and other forms of evidence to answer those questions.
[00:29:27] I'm hopeful.
[00:29:27] Well, am I hopeful or not?
[00:29:29] I guess, like Polly, I'm always hopeful that people will figure it out.
[00:29:36] And I do hear, you know, there are some companies that do it well.
[00:29:40] How about that?
[00:29:41] Yeah, I was going to add, we think about upskilling leaders.
[00:29:45] And I like how, you know, David talks about the mindset, right?
[00:29:47] And these foundational mindsets we want to equip leaders with, and the training to ensure they have that foundational knowledge,
[00:29:55] so they can effectively leverage this new technology and adapt their leadership style as well.
[00:30:02] And it's equally important, I would say, that HR is leaning in to drive this upskilling.
[00:30:07] It builds their credibility and ensures that they are there to partner alongside operations.
[00:30:13] And really, HR folks and the pros we partner with have those essential skills that organizations need right now.
[00:30:22] We might think, we don't want to embarrass ourselves.
[00:30:24] This is, you know, this is a very technical topic.
[00:30:27] I don't feel comfortable.
[00:30:28] But, you know, we often absolutely feel comfortable when we're brought in to support change management and innovation and strategy and how we support learning.
[00:30:38] And these are all incredibly important right now.
[00:30:43] And how we overcome those organizational barriers that can inadvertently be created, which we've been discussing so far as well.
[00:30:52] And then if we're avoiding it because we're finding it intimidating, it seems like a black box.
[00:30:58] It seems magical.
[00:30:59] We're worried about the risks.
[00:31:01] It's just so important that we lean in, because organizations need us.
[00:31:07] And our ability to help our employees keep pace is incredibly important.
[00:31:11] To clarify, I think, I mean, you guys know this, but just for the audience, when I talk about, you know, leaders needing to upskill,
[00:31:18] I'm talking about all leaders.
[00:31:19] I'm not talking about just HR leaders, right?
[00:31:22] Anyone who's a people manager, everyone who's a decision maker in an organization.
[00:31:28] I mean, any people manager, anyone who's in charge of programs, you know, program directors, initiative directors, things like that.
[00:31:36] I mean, you're making decisions.
[00:31:38] So I said before, you know, you really need to understand so that when people bring you ideas, people bring you opportunities,
[00:31:45] you have an appreciation for, you know, the scope and size and the impact of what that is.
[00:31:51] But it's also, you know, AI is not going to sort of discriminate in terms of who it impacts.
[00:31:59] It's doing all kinds of things that even a year ago we did not anticipate.
[00:32:03] Definitely two years ago we did not anticipate.
[00:32:06] But now we start to see this trend towards not just, you know, advanced automation, but, you know, these AI agents getting strung together in workflows to create what people refer to as agentic workflows.
[00:32:22] And we can debate whether it's truly, you know, reasoning or at least it's doing some type of reasoning.
[00:32:30] It may not be reasoning like a human brain reasons, but nonetheless, it's taking in a lot of context and data and it's reaching conclusions.
[00:32:42] And a lot of people are trusting those conclusions, but leaders need to understand exactly how that type of capability is going to augment their own role as a leader, as a decision maker, as a coach, as a mentor.
[00:32:58] I mean, there's not that much from a knowledge work standpoint that's sort of, you know, off limits or that won't be touched in some way by what's coming or in some cases what's here already.
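As a rough illustration of the chaining Bob describes, here is a minimal Python sketch. The agent functions are hypothetical stand-ins for calls to an AI model, not any real agent framework; the point is only that one agent's output becomes the next agent's input.

```python
# A minimal sketch of "agents strung together in a workflow".
# Both agent functions below are made-up stand-ins for model calls;
# a real system would call an LLM API instead of this toy logic.

def summarize_agent(document: str) -> str:
    """Stand-in for an agent that condenses a document."""
    return document.split(".")[0] + "."  # naively keep the first sentence

def decision_agent(summary: str) -> str:
    """Stand-in for an agent that turns a summary into a recommendation."""
    return "escalate" if "urgent" in summary.lower() else "file"

def run_workflow(document: str) -> str:
    # Each agent's output feeds the next agent -- this chaining is what
    # makes the workflow "agentic" rather than a single prompt.
    summary = summarize_agent(document)
    return decision_agent(summary)

print(run_workflow("Urgent payroll discrepancy reported. Details follow."))
```

Even in this toy form, the design question Bob raises is visible: a human has to decide which steps are safe to hand to each agent, because an error in one step flows straight into the next.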
[00:33:10] And a lot of this is we don't know.
[00:33:13] So, for example, on using agents to run workflows, if you think, oh, well, this agent is going to just be able to replace my HR analyst or, you know, one of my financial analysts or one of my administrators, coordinators or so on.
[00:33:31] Yeah, maybe it can, but maybe it can't, because these are still quite different kinds of intelligence than human intelligence.
[00:33:39] But it will also surprise you.
[00:33:41] You know, for example, it surprised everybody about how well DeepMind's programs did in the Math Olympiad.
[00:33:49] And you say, there's no way it can solve these extremely difficult math problems that even the best human mathematicians struggle with.
[00:33:56] Well, actually, it can.
[00:33:59] Strangely enough, it can.
[00:34:01] But then again, there's a whole series of other math problems that you think that, again, are very difficult.
[00:34:06] And you think, well, if it solved the Math Olympiad ones, I bet it'll do well on this.
[00:34:10] But the FrontierMath ones, it's only getting 2% right on, so 98% wrong.
[00:34:14] Now, it's better than me.
[00:34:16] I'd get 0% right.
[00:34:17] So I don't want to cast too much shade on the AIs.
[00:34:20] But the point is, your intuition, you say, I don't think it can do math, and it does incredibly well on the International Math Olympiad.
[00:34:28] You say, oh, I think it's fantastic at math, superhuman level, and then it bombs in the FrontierMath tests.
[00:34:36] And you think, oh, you know, there's stuff we need to experiment with to learn what it can and can't do.
[00:34:42] And in your organization, you're probably not doing higher math, but you probably are having these workflows.
[00:34:47] And it's true that these AI agents can help with workflows.
[00:34:51] But you're going to have to spend time to figure that out hands-on so that you get to develop a good intuition of what you should or should not be turning over to these agents.
[00:35:03] That's an absolutely critical lesson that everyone needs to take to heart, right?
[00:35:09] Are we deploying these solutions where we should, not just where we can?
[00:35:16] And the other point I would make, based on what you were talking about, David, is even where it may fall short today, it's only going to get smarter and more capable.
[00:35:29] And that's before we consider how AI may get applied or intersect with other trends, right?
[00:35:37] Or other technologies, right?
[00:35:38] Like, we don't know what, you know, AI plus, you know, quantum might be able to do, right?
[00:35:43] We don't know what, you know, all this, we talk about all this AI stuff as sort of an extension of like the software arena.
[00:35:53] But what happens when you put these really advanced AI models into combining it with robotics, right?
[00:36:01] So that's the other thing that people need to think about is not just, you know, some of these things in isolation sitting on your device that you can fit in your pocket or your backpack.
[00:36:13] It's what happens when this stuff goes mobile.
[00:36:16] I mean, in a drone, in a car, you know, those kinds of things.
[00:36:21] There's a lot to think about.
[00:36:23] And I'm not trying to scare everyone.
[00:36:26] I just want people to recognize that AI is only going to get more capable.
[00:36:32] And therefore, if you haven't started exploring and reading and getting your hands dirty, it would be really helpful for you to have that as a baseline just to sort of whet your appetite and be aware.
[00:36:46] I want to highlight your point.
[00:36:47] So let's imagine you say, well, we're going to study AI and we're going to use our interface to ChatGPT to do something.
[00:36:54] And the finance guy goes, you know what?
[00:36:56] Like, it's all very well, but it's expensive and it's actually cheaper to use people.
[00:37:02] And so let's just put that to bed.
[00:37:04] We tried it.
[00:37:05] It works, but way too expensive.
[00:37:07] Forget it.
[00:37:08] And you lean back.
[00:37:09] And then six months later, you find, oh, did you know that the cost has come down from what it was when you studied it?
[00:37:16] And they go, well, it really was too expensive.
[00:37:18] No, but it's come down by a hundredfold.
[00:37:20] It would have cost a dollar, but it costs one cent now.
[00:37:22] And you're sitting there saying, oh, we tested it.
[00:37:25] It was too expensive.
[00:37:26] Let's forget about it.
[00:37:27] But to your point, Bob, it's getting better.
[00:37:30] And at times it's getting better really in exponential leaps.
[00:37:35] And again, it just shows you that, oh, my heavens, we're just not particularly used to things that used to cost a dollar costing one cent.
[00:37:43] And this gets back to sort of do we have the management capability, the organizational capability, so that if something was, you know, 50 times too expensive at the start of the year, are we able to recognize that maybe it'll be half the price of what we're doing manually by June?
[00:38:07] So that kind of issue makes it incredibly challenging for organizations to get the best out of AI.
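David's arithmetic can be made concrete with a short, illustrative calculation. The dollar, two-cent, and monthly-halving figures below are made-up assumptions for the sketch, not numbers from the conversation:

```python
# Illustrative only: how many cost halvings until a falling per-task
# AI cost undercuts a fixed manual cost. All figures are hypothetical.

def halvings_until_cheaper(ai_cost: float, manual_cost: float) -> int:
    """Count halvings of the AI cost until it drops below the manual cost."""
    halvings = 0
    while ai_cost >= manual_cost:
        ai_cost /= 2
        halvings += 1
    return halvings

# A task that costs $1.00 with AI today versus $0.02 done manually --
# 50x too expensive, as in David's example:
print(halvings_until_cheaper(1.00, 0.02))  # prints 6
```

If costs were halving monthly, six halvings is about half a year: "way too expensive in January" becomes "cheaper than doing it manually by June", which is exactly the recognition problem David is pointing at.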
[00:38:20] Finding a great career has always been a challenge.
[00:38:24] But today, with the massive changes underway in just about every sector of the economy, in just about every country in the world, finding a great new career is even more challenging.
[00:38:38] And if you're a student, recent graduate, or someone else early in their career, it's even harder for you because you just don't have the experience that those who might be 10, 20, 30 years older than you have.
[00:38:54] The answer?
[00:38:55] The podcast.
[00:38:57] From Dorms to Desks.
[00:38:59] A podcast by the College Recruiter job search site, where every week we take a deep dive into a topic specifically of interest to candidates who are early in their careers and looking for a great part-time, seasonal, internship, or other entry-level job.
[00:39:23] Listen today.
[00:39:27] Yeah, there's definitely some calculus there, right?
[00:39:29] And I think that's, I don't know if we have time for it, but I definitely wanted to talk about strategic workforce planning.
[00:39:35] Because I do think, David, that comes into it, right?
[00:39:37] You've got to look at cost-benefit analysis of a lot of things, whether it's build versus buy versus bot, or it's, yes, we could, this is an appropriate use case for AI.
[00:39:49] But maybe the COO has a different view than others in terms of, well, it's just cheaper to do it, to use humans for it.
[00:39:59] Or it's just too, this is a human-centric sort of process.
[00:40:04] And we don't want to lose that because there's qualitative, not just quantitative measures that we need to be careful about.
[00:40:14] And there's a trade-off, right?
[00:40:15] It is new technology.
[00:40:18] And to lean in, you can catch up pretty quickly to where things are at.
[00:40:23] And once you have an intuition and understanding of how the technology works, then it makes more sense why, well, that didn't work yesterday, but it does work today.
[00:40:31] And you've likely had the same experience David and I had.
[00:40:33] We'll record something, and by the time it plays or just after, it's out of date, right?
[00:40:38] Literally, it may not be able to do something today, and you check back and you try it again the next day, the next week, and the capability is there, which is, again, where it's so important we build our intuition and our understanding of how these tools work.
[00:40:51] So I have a perfect example.
[00:40:54] I have always had trouble, as you guys may have, using an image generation tool powered by AI, like Midjourney or one of those other tools.
[00:41:08] If you wanted to write words in the image, it could never do it.
[00:41:15] So even though it says it's multimodal, which I thought meant, well, it can both see and hear, or whatever.
[00:41:25] I thought, well, multimodal, you're just combining the modes, but that's not how it works.
[00:41:30] So literally, I read yesterday that one of the new Google Gemini models, a derivative of the Gemini 2 model,
[00:41:37] now does that successfully without issue, right?
[00:41:41] If you want words in your AI-generated image, you got it.
[00:41:45] So it didn't take long at all for that capability to become available.
[00:41:52] And we see that kind of advancement happening pretty quickly.
[00:41:56] And all the more reason why I don't want people to fall behind.
[00:41:59] But I also don't want people to think that they have to know everything.
[00:42:05] Because I have subscribed to way too many AI newsletters and updates.
[00:42:11] And I don't need to know what Claude 3.5, Alpha, Beta, whatever does today.
[00:42:18] Because those alphanumeric labels will be absolutely meaningless to 99% of the population.
[00:42:25] Just tell me what it can do.
[00:42:27] Tell me the benefits and tell me the risks.
[00:42:30] And then I'll decide whether it's worth it.
[00:42:33] One of the topics I always like to ask my guests about is just the whole responsible AI topic.
[00:42:38] So I guess I'm curious to get your input.
[00:42:42] David, I guess I'll start with you.
[00:42:44] Just how you're hearing HR think about the broad responsible AI topic.
[00:42:50] And when I say that, I'm talking about everything from compliance and governance and risk to bias mitigation and fairness and transparency and all of these good things.
[00:43:02] I have this strange, very negative reaction to the term responsible AI.
[00:43:07] And the reason is, I think, you're going to attract all the people with personal and political agendas who think this is a broad topic that will allow me to poke my nose in wherever I want.
[00:43:17] I'd have no one question me because I'm the person in charge of responsible AI.
[00:43:22] I would never set up a committee and call them that because it gives them too much power.
[00:43:27] And as I say, if you have a powerful committee in charge of a vague thing, it'll attract people who want power.
[00:43:33] But you already have many of the mechanisms in place.
[00:43:38] So you've got a whole lot of people with expertise in labor law.
[00:43:42] And this deals with things like bias.
[00:43:45] And so what you want, those people say, OK, I know all about all these labor laws about bias and about privacy and protecting employees in various ways.
[00:43:54] So all that knowledge about what is legally required and maybe what's ethically required, because an organization has ethical norms on top of the legal norms.
[00:44:03] We're just going to apply that expertise to AI.
[00:44:06] And we will need some technical understanding because these systems behave differently than the systems we're used to.
[00:44:15] But it's merely a matter of, I say merely, but you have to develop that technical understanding in conjunction with your IT people.
[00:44:26] In terms of things like data security, which, of course, HR is really concerned about the security of people data.
[00:44:33] Well, there are all kinds of threats that IT is dealing with.
[00:44:37] I mean, and the more you know about the threats, the worse it gets because you have state actors trying to crack your system.
[00:44:46] But and this is way before AI, you've got all these threats.
[00:44:50] But now AI presents a new set of threats, because it's a new technology that, in a way, nobody fully understands.
[00:44:59] You don't understand what it's going to do or why it's going to do it.
[00:45:02] It's not nearly as predictable as any of your traditional systems.
[00:45:05] So you do want, I'll call it the governance committee, which has lawyers and has people who are experts on security.
[00:45:16] And you have the HR people who understand the employee experience and see what's happening through the eyes of the employee.
[00:45:23] And so you work together to make sure that, you know, it stays on track.
[00:45:30] Again, it's kind of all the same issues we've always had.
[00:45:34] But it's just that now there's a new element in AI, new technology.
[00:45:39] And we have to make sure that all these things we were worried about and dealing with before we now deal with in light of this new technology.
[00:45:48] I totally understand your points and your perspective.
[00:45:51] I do think there's an advantage if you were to create a governance committee, an AI governance committee, an AI ethics committee, sort of net new.
[00:46:01] Right. You didn't just say, oh, let's just give it to the GRC folks or the legal folks or whatever.
[00:46:06] And honestly, I mean, I do recommend those, but but I recommend them constructed in a specific way, meaning they should have representation from all of these different disciplines to bring in that cognitive diversity and to make sure that you haven't overlooked, you know, specific things.
[00:46:26] Because AI is going to be used in all kinds of things.
[00:46:31] Beyond, you know, human impacting decisions and beyond, you know, just talent acquisition or talent management, et cetera.
[00:46:39] And so I just think it's important to think about those things, even the way we communicate, starting from maybe a job description that goes out to the public.
[00:46:49] And is that is that being inclusive?
[00:46:52] Right. Are we using inclusive language in that?
[00:46:54] Or are we when we use programmatic advertising?
[00:46:57] Are we pushing this out to, you know, diverse populations, or do we really wind up, you know, just attracting a particular demographic or whatever?
[00:47:08] And then, you know, even in performance management reviews, if you're going to have an AI powered, you know, performance management system or we look at, you know, pay equity and all of these areas.
[00:47:21] I mean, yes, some of them are definitely clearly in HR's domain, but I think when more people start to build things on their own and not necessarily rely on a third-party, you know, solution provider, then even being responsible by design is applicable to even the average, you know, worker in a lot of cases.
[00:47:47] Because I could go and create my own custom GPT or my own agent right now.
[00:47:53] And where are the guardrails?
[00:47:55] I mean, people are learning how to use this stuff.
[00:47:57] We talked a little bit before about like adoption and stuff like that.
[00:48:00] But, you know, even if the corporation hasn't, even if the organization hasn't sort of sanctioned, this is our policy, these are the specific tools we're going to use or whatever.
[00:48:09] Where the adoption is sometimes two or three times what the company says, you know, they've sanctioned, because everyone's using it on their own devices.
[00:48:20] So you have an AI version of the shadow IT kind of problem.
[00:48:24] You've got shadow AI, bring your own AI kind of situation.
[00:48:28] So I just think there's a lot of sort of arms and legs to this.
[00:48:32] And so you've got to bring in those diverse perspectives to do it right.
[00:48:37] And then obviously some of this gets into what legislation do I need to comply with?
[00:48:42] But being responsible and being responsible by design is much bigger than complying with the law.
[00:48:47] That's a pretty low bar in my opinion.
[00:48:50] Yeah, no, I absolutely agree.
[00:48:52] And I think David would agree too.
[00:48:57] Just principles of governance, absolutely important.
[00:49:00] Having governance committee oversight within the organization is important.
[00:49:04] And David, you speak to and recommend to HR that if there's not a governance committee in the organization, set up a shadow committee until there is one in place to tackle these important challenges.
[00:49:19] And I agree and align with the list that you shared, Bob.
[00:49:24] And I'll note that, and David had highlighted, these are challenges we're familiar with, you know, governance, what's the validity of the data that decisions are being based upon, the importance of transparency, being able to explain decisions as opposed to saying, well, this magic black box came up with this rating, so it's yours.
[00:49:42] Right, right.
[00:49:43] How we set cultural norms, how we address poor actors.
[00:49:46] Because sometimes, you know, when somebody works the system, those are situations we're accustomed to.
[00:49:54] And at the same time, it is new technology.
[00:49:56] It does behave in unpredictable ways.
[00:49:58] And we need guardrails automated as much as possible within the organization.
[00:50:03] And we need training to mitigate the risks that we can't control for across the organization too.
[00:50:10] So I would just underline all the points that you made, Bob.
[00:50:12] I think they're very important.
[00:50:14] I was curious, is there anything that you're playing with either in your work or in your personal life that you're particularly, you know, intrigued by or concerned by just in terms of generative AI?
[00:50:29] I just underline playing.
[00:50:32] We're doing a lot of playing and we try to fold it into whatever we're doing and how it relates to projects that we're involved in.
[00:50:40] And I will say, I do like Consensus, that's within ChatGPT, for being able to quickly pull research, and fun little tools like Napkin AI, which we found the other day, where you can put in text and it creates a pretty model.
[00:50:54] But, you know, I'll give an example.
[00:50:55] We talk about playing with the tech.
[00:50:57] Dave and I actually were working on a presentation.
[00:50:59] Like, can we liven up this section?
[00:51:01] It was, you know, going through a series of tools.
[00:51:04] This would be more efficient and potentially more engaging if we turned this into a video.
[00:51:09] And we have tremendous admin support from Casey on my team.
[00:51:13] We tasked her, gave her a couple hours.
[00:51:15] Can you play with this and see if you can make it into a video?
[00:51:18] And she went online.
[00:51:20] She went to ChatGPT to ask it for advice on what tool to use.
[00:51:23] It gave her steps to follow.
[00:51:25] And she came back within a few hours.
[00:51:27] And she'd used the videos that she'd pulled from the different demos.
[00:51:33] She liked the intro from ChatGPT.
[00:51:35] She liked the closing from Copilot.
[00:51:36] It all went in and, you know, it spat back out
[00:51:39] this video, and the script had been automated with an AI voice.
[00:51:44] And it was all quite slick.
[00:51:46] And something that would have taken us, I don't know how long previously, David.
[00:51:50] And, you know, underlined the point that this is not a tech expert, right?
[00:51:54] This is somebody who has an appetite and enjoys going to play.
[00:51:59] And what she came back with was really helpful to us.
[00:52:02] But more importantly, it just keeps allowing us to build our understanding of what the tech can do,
[00:52:07] what the capability is as well.
[00:52:09] Very cool.
[00:52:10] And I'm in a similar vein of, you know, just trying all kinds of things and saying,
[00:52:15] I wonder if it can do this.
[00:52:17] I wonder if it's going to happen.
[00:52:19] But one thing that I'm coming to terms with is what a spectacularly good teacher it is.
[00:52:25] So, for example, on very technical topics, I mean, you can ask it about category theory in mathematics.
[00:52:34] And you can do the classic, explain to me like a five-year-old.
[00:52:37] I don't get it.
[00:52:38] Can you give me an example?
[00:52:40] I was looking up graph databases the other day.
[00:52:42] And I say, you know, I'm used to traditional reporting tools, you know, SQL and so on.
[00:52:47] And how would I write a query for a graph database?
[00:52:53] How does that work?
[00:52:54] Can you give me an example?
[00:52:57] In what ways is this better?
[00:52:59] In what ways is it maybe not as effective?
[00:53:04] And so it's quite different from traditional book learning.
[00:53:10] It's much more like being able to sit down with somebody who is an expert,
[00:53:14] who is there to help you understand what you need to understand about the topic.
[00:53:19] And when you don't understand something, walk back and give you a simpler version to bring you along.
[00:53:27] And it may use 10 terms and you understand nine of them, but there was one you didn't understand.
[00:53:31] And you say, wait, wait, wait, wait.
[00:53:32] What was that again?
[00:53:35] And there's something astounding.
[00:53:36] And just before this call, I was talking to somebody in the machine learning space.
[00:53:41] And he was saying, you know, I'm so used nowadays to being able to go from zero to 100 on different skills I need or different technologies I need to understand.
[00:53:51] Because it will just walk me through this.
[00:53:54] And what might have taken, I think he didn't use any numbers, but you can imagine you would have thought,
[00:53:59] well, this is going to take me a long time to learn this stuff and work through the textbooks and such.
[00:54:04] And boom, you have this expert sitting right by your side, giving you customized advice on how to understand and take action with whatever area you're working on.
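For readers curious about the kind of answer David was asking for, here is an illustrative contrast between relational and graph thinking, in plain Python rather than any real graph database's query language. The tiny org chart is invented for the example:

```python
# Asking "who are the reports of Ada's reports?" the graph way.
# In SQL this needs a self-join (e.g. joining an employees table to
# itself on a manager_id column); in a graph store you follow edges.
# This in-memory dict is a stand-in, not actual graph-database syntax.

reports_to = {            # edges: manager -> direct reports
    "Ada":  ["Ben", "Cara"],
    "Ben":  ["Dev"],
    "Cara": ["Eli", "Fay"],
}

def second_level_reports(manager: str) -> list[str]:
    # Two hops along the "reports to" edge -- the graph-native phrasing
    # of what SQL would express as a two-table self-join.
    result = []
    for direct in reports_to.get(manager, []):
        result.extend(reports_to.get(direct, []))
    return result

print(second_level_reports("Ada"))  # the two-hop neighbourhood of Ada
```

The design difference is the one David was probing: relational queries join tables on shared keys, while graph queries describe a path to walk, which is why multi-hop questions tend to read more naturally in graph form.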
[00:54:17] I think those are all fantastic examples and also examples of why this is not just about automating things, right?
[00:54:26] Like in Pauline's case, you're talking about augmenting your own skill set and capability in addition to saving all that time to actually learn how to do all that and get it just right,
[00:54:40] bouncing back and forth between different tools.
[00:54:42] And then in your case, David, using it as sort of both a tutor and a thought partner as you're exploring and trying to learn about a particular subject matter.
[00:54:54] So those are awesome, awesome examples.
[00:54:57] As we kind of close up, any final thoughts from you guys?
[00:55:01] Any advice?
[00:55:03] Any upcoming sessions that the listeners should know about?
[00:55:06] My advice is to keep experimenting. And with vendors,
[00:55:11] don't be shy about asking as many questions as you need to ask until you understand the technology:
[00:55:19] what's in the database, whatever you need to understand.
[00:55:21] It's so important that you just, you know, not be shy about asking until you have that clarity.
[00:55:29] And as far as what we have coming up, David and I have masterclasses coming up in 2025 on how to build your AI capability,
[00:55:41] with Lernova out of the US and Keynotive out of the UK.
[00:55:44] We have a podcast series on the intersection of AI and HR, in partnership with the HR Gazette, that can be found on your favorite way to stream or compiled on the Anchor HR website.
[00:55:55] And we will be partnering with Bill Bannam at the HR Gazette on AI summits in the new year.
[00:56:01] So stay connected to us.
[00:56:04] Awesome. Yeah, that's how we met. Bill's the one who connected us in the first place. Awesome.
[00:56:06] Yes, yes.
[00:56:08] Pauline and David, it was a pleasure. Covered a lot of ground today. I really appreciate you taking all this time.
[00:56:14] This was really insightful for me and I'm sure it was for my listeners as well. So thank you both very much.
[00:56:19] Thank you, Bob.
[00:56:20] And thank you everyone for listening. We'll see you next time.


