Sarah Katherine Schmidt and Bob Pulver explore the intersection of AI and HR, focusing on performance management, trust in AI tools, and the evolving expectations of a multi-generational workforce. They discuss the importance of continuous feedback, continued human involvement in recruitment, and the need for data-driven insights to enhance talent management and workforce planning, emphasizing AI's potential to improve employee experiences while preserving human agency and trust. The conversation also covers hiring practices grounded in objective criteria rather than personal bias, the challenges of organizational transformation, and the emotional responses individuals have to change. They make the case for AI literacy and the responsible use of AI tools, highlighting the training and policies needed to mitigate bias, and explore the interplay between data literacy and AI literacy, advocating for continuous education and critical thinking in the use of AI technologies.
Keywords
AI, HR, performance management, customer experience, workforce planning, talent development, employee engagement, generational differences, data-driven insights, trust in AI, hiring, transformation, emotional response, AI literacy, responsible AI, data literacy, change management, bias training, organizational change
Takeaways
- AI enhances performance management by promoting agile processes.
- Trust in AI tools is essential for successful HR integration.
- Feedback frequency is shifting towards continuous rather than annual reviews.
- The candidate experience is crucial for attracting talent.
- Generational differences impact employee expectations and experiences.
- AI can help reduce biases in performance evaluations.
- Investing in talent development is key for organizational success.
- AI tools can streamline HR processes and improve efficiency.
- The future of workforce planning relies on comprehensive data integration.
- Emotional responses to change can impact engagement.
- Organizations must foster AI literacy among employees.
- Bias training must evolve to include AI considerations.
- Policies on AI use are essential for responsible implementation.
- Continuous evaluation of AI tools is necessary.
- Data literacy is foundational for effective AI use.
- Curiosity and education about AI should be ongoing.
Sound Bites
- "Trust in HR around AI still needs to be built"
- "Candidates desire feedback, not ghosting"
- "Understand the emotional response to change."
- "We need a whole new lens on bias training."
- "Responsibility by design is crucial."
- "You can't just grab any shiny object."
- "Let's not set the bar at just good enough."
Chapters
00:00 - Introduction to AI in HR
02:48 - The Evolution of Performance Management
05:47 - Building Trust in AI Tools
08:53 - The Human Touch in Recruitment
12:07 - Navigating Generational Differences in the Workforce
15:02 - Data-Driven Insights for Talent Management
17:55 - The Future of Workforce Planning
20:51 - Leveraging AI for Talent Development
25:52 - Hiring Beyond Bias: The Need for Objective Criteria
27:06 - Navigating Transformation: Challenges and Opportunities
29:51 - Understanding Emotional Responses to Change
33:30 - AI Literacy: Empowering Organizations and Individuals
35:49 - Responsible AI: Training and Policies for Ethical Use
38:53 - Evaluating AI Tools: The Importance of Critical Thinking
46:52 - The Interplay of Data Literacy and AI Literacy
Sarah Katherine Schmidt: https://www.linkedin.com/in/sarahkatherineschmidt
PeopleLogic: https://peoplelogic.ai
For advisory work and marketing inquiries:
Bob Pulver: https://linkedin.com/in/bobpulver
Elevate Your AIQ: https://elevateyouraiq.com
Thanks to Warden AI (https://warden-ai.com) for their sponsorship and support of the show! Warden is an AI assurance platform for HR technology to demonstrate AI-powered solutions are fair, compliant and trustworthy.
Powered by the WRKdefined Podcast Network.
[00:00:00] Welcome to Elevate Your AIQ, the podcast focused on the AI-powered yet human-centric future of work. Are you and your organization prepared? If not, let's get there together. The show is open to sponsorships from forward-thinking brands who are fellow advocates for responsible AI literacy and AI skills development to help ensure no individuals or organizations are left behind. I also facilitate expert panels, interviews, and offer advisory services to help shape your responsible AI journey. Go to elevateyouraiq.com to find out more.
[00:00:28] Hi there, welcome back to Elevate Your AIQ. In this episode, I'm joined by Sarah Katherine Schmidt, VP of Customer Experience at PeopleLogic, for a deep dive into how AI is being incorporated into agile performance management. Sarah Katherine and I explore the critical balance between AI-driven tools and human capabilities in performance management, recruitment, and workforce planning. We talk about building trust in AI, the evolving expectations of a multi-generational workforce, and the importance of data and AI literacy to drive better decisions.
[00:01:09] Whether you're an HR leader, a team manager, or someone curious about how AI is impacting talent management and talent development, this conversation offers practical insights on leveraging AI responsibly while keeping people at the center. Let's get into it. Hi, everyone. Welcome to another episode of Elevate Your AIQ. I'm your host, Bob Pulver. With me today, I have the pleasure of speaking with Sarah Katherine Schmidt. How are you today, Sarah Katherine?
[00:01:35] I'm well. How are you, Bob? I'm doing well. I am excited to dig into some AI and HR topics with you.
[00:01:45] Yeah, absolutely. We had a great first conversation, so I think this is going to be amazing.
[00:01:50] Yeah, absolutely. Just to kick things off, why don't you tell listeners a little bit about your background?
[00:01:56] Sure, absolutely. So my name is Sarah Katherine Schmidt. I am currently the VP of Customer Experience at PeopleLogic.
[00:02:04] So prior to joining PeopleLogic about a year ago, I was in people operations and HR roles for about 15 years.
[00:02:12] So director of people operations, more broadly, director of L&D. And what I love about bringing my experience to PeopleLogic is that I get to talk with people that have been in my seat and are currently in the seat that I was in and coach them, guide them, advise them.
[00:02:31] At PeopleLogic, we have an AI-enhanced performance management tool. And we ultimately believe that when individuals thrive, they excel. And when teams excel, organizations naturally unlock their true potential.
[00:02:47] And so we exist to empower companies to thrive in the new world of work through the creation of high-performing cultures.
[00:02:55] And we create a suite of AI-enabled and seamlessly integrated products that enhance the people experience and highlight opportunities for scale, which is so important right now, and drive organizational success.
[00:03:10] So that's a little bit about what we do. And we're very hands-on with our customers and helping them through that change journey.
[00:03:19] Awesome. Yeah, the change is never easy, right? I know we're going to dig into that.
[00:03:24] We're going to talk about all that.
[00:03:25] Yeah, I imagine it's especially sort of rewarding to see, because you've been in seat, just seeing how things have evolved.
[00:03:36] I mean, change seems constant, of course, but with everything going on, AI and automation and just different sort of expectations these days,
[00:03:48] I'm sure it's comforting to your clients and prospects that you know exactly what they're going through.
[00:03:56] It is. And the one thing that I'll also mention is that there are now five generations in the workforce, which is transformational in and of itself.
[00:04:06] So technology aside, there are higher expectations than ever on organizations when it comes to their people strategies.
[00:04:12] So it's AI, it's modern technology, but then it's also meeting five generations where they are, which is very, very different than where we've been before.
[00:04:24] Absolutely. So those experiences could feel quite different for those different generations.
[00:04:30] So just to unpack a little bit about, you know, how you're using AI within performance management, you know, I just thought we could dig into that a little bit, like how, how that sort of upgrades, you know, how companies have done performance management in the past.
[00:04:47] Absolutely. So when we think about performance management, it's traditionally, you know, your, your top down, your stagnant process, that's very bureaucratic, very hierarchical.
[00:04:58] And what we like to promote with our customers and more broadly is agile performance management.
[00:05:05] And so, again, it comes to meeting people where they are, but also acknowledging that the landscape has changed significantly and that once a year, twice a year process is very arduous on managers and employees.
[00:05:22] And they want more feedback.
[00:05:24] They want frequent feedback. I think I read recently, it's every seven days individuals would like feedback on their performance.
[00:05:32] And so agile performance management, we like to say is more simplified because it is more continuous and it becomes the fabric of your organization versus a point in time exercise.
[00:05:47] From a platform perspective, the way that we do that is through having everything in one space.
[00:05:53] I like to say that we take our customers from spreadsheets to superpowers because we have everything from objectives and key results to one-on-ones to feedback and praise and then surveys and performance reviews.
[00:06:08] And so each of those pieces has their own AI component that is generative and leans on best practices models such as ChatGPT or other Gen AI tools that we trust,
[00:06:24] but then encompasses a flavor of how we advise our customers in giving actionable feedback or how to set objectives and key results that are truly measurable and will help you see success.
[00:06:37] So we're using it in ways that don't take away agency and ownership of that input because ultimately they own the output of what that is.
[00:06:48] But there are places to help them just get started.
[00:06:51] I was going to ask you about that because, you know, you mentioned trust in there and, you know, if you lose that, you know, all bets are off, right?
[00:07:00] So you've got to design that with, you know, very intentionally and very carefully so that people realize that this is for everyone's benefit, that it's not monitoring.
[00:07:14] Oh, goodness.
[00:07:14] I cringe at that word.
[00:07:15] I know, right?
[00:07:17] You and I both do.
[00:07:18] Yeah, it's scary.
[00:07:20] I mean, I don't even like to talk about those things.
[00:07:23] Like if that's what you do, then this conversation is not going to go well.
[00:07:27] Right.
[00:07:28] There's a line to be drawn there, right?
[00:07:31] And so I just think you've got to look at this as what's in the best interest of, to the point you made earlier, someone's potential.
[00:07:40] How are we going to help them achieve that potential?
[00:07:42] And this isn't about, you know, efficiency gains to the point where we're, you know, super micromanaging, you know, people's time because just doing that gives the impression that you simply don't trust these people to do their job.
[00:07:58] Right. Absolutely.
[00:08:00] And utilizing a platform such as ours and carefully integrating AI tools or the capabilities within our platform helps to build that trust while not taking away, again, that agency and keeps it human focused.
[00:08:17] We'll talk, I think, about, you know, that human touch a little bit later.
[00:08:21] But I think ultimately, you know, the trust in HR around AI still needs to be built.
[00:08:29] And we will do that through education.
[00:08:31] We will do that through encouraging people to learn and be curious.
[00:08:35] But then ultimately emphasizing what the tools that we're using can and cannot do or do and do not do around utilizing AI.
[00:08:46] And it's not supplanting managers.
[00:08:48] It's not supplanting your own intuition and your own context.
[00:08:53] It's helping you in best practices and getting started.
[00:08:58] And it still gives you that user-driven decision making at the end of the day.
[00:09:04] So there's a couple different approaches that people are taking to this.
[00:09:08] And certainly there's different types of AI that could come to bear, you know, in some of these use cases and some of these scenarios.
[00:09:16] Right. So you've got things that might ease the process.
[00:09:19] Right. From an automation perspective, you know, whether that's, you know, nudges or just, you know, streamlining, you know, submission and or maybe data, you know, aggregation from multiple systems.
[00:09:32] Which I wanted to ask you about as well.
[00:09:34] But then I think to your point, this is about augmenting, you know, human decision making so that, you know, you've got all the information and you're thinking in a, I guess, a less perhaps biased way about some of the information that you're gathering so that you're not, you know, I guess you don't put too much emphasis on like maybe the personal relationship between the manager and the employee and things like that.
[00:10:01] So there's just a lot of factors to consider. And obviously we'll talk more about like responsible AI and the ethics involved and, you know, bias mitigation or whatever.
[00:10:09] But certainly, you know, there's a lot of ways that AI is helping us here.
[00:10:14] Absolutely. I know we'll get there. So I'm jumping ahead a little bit.
[00:10:18] But recency bias is the place where I appreciate our platform and I appreciate other platforms and approaches to minimizing that recency bias.
[00:10:31] Because I think as a manager, I had it. You know, I'm looking back at the last three months versus the last six months or a year.
[00:10:39] And being able to recall all of those really key moments in someone's development and achievements can be really taxing on our brains cognitively.
[00:10:50] And so you're correct. AI has the potential to reduce bias on a lot of other levels.
[00:10:57] But recency bias is the one that comes to mind most clearly for me, not only because it's mine, but because I've seen it with countless managers.
[00:11:06] Yeah, no doubt. I did want to hit on that human touch thing because I feel like there's, you know, it's sort of another type of bias around for the people that are concerned about their job, the longevity of a role or their specific job.
[00:11:26] And you can tell me, like on the talent management and talent development side, but my observations on the talent acquisition side, when it comes to like recruiters and folks that are, they say they care about the candidate experience.
[00:11:43] But then if we're improving the candidate experience to the detriment of a recruiter's experience or, you know, a recruiter's livelihood,
[00:11:53] I feel like recruiters can get really defensive about, you know, some of the ways in which AI might come after their job.
[00:12:02] And so they use that phrase all the time. Oh, it could never replace, you know, the human touch that I provide.
[00:12:08] Well, that might be true for certain recruiters who are actually, you know, walking that talk.
[00:12:16] But I think you need to be realistic about the experiences that can be streamlined and can actually give candidates, I guess, more confidence.
[00:12:30] You know, there's a morale issue, you know, like getting ghosted.
[00:12:34] Like if you give a candidate a choice, I can either be ghosted by technology or a human, or I can get a response that gives me some logical and constructive, you know, feedback about my application and maybe why I'm not moving forward.
[00:12:51] With my candidate hat on, why would I not want that feedback?
[00:12:57] I mean, you could argue that's, that's a form of performance management in a way, not performance, but you know what I mean?
[00:13:03] It's, you're being judged in some way by factors that you don't even know and you can't control them.
[00:13:11] So I just, again, going back to the point I want to make is I think people have this self-preservation bias when it comes to AI.
[00:13:23] Like AI could never do what I do.
[00:13:26] And like, have you, have you been paying attention?
[00:13:29] Because it's already doing a lot of amazing things that we thought were clearly in the domain of a human brain.
[00:13:37] Right.
[00:13:38] Absolutely.
[00:13:39] I appreciate all of your points because I started in talent acquisition.
[00:13:43] And so that candidate experience was incredibly important for me to maintain this experience that ultimately drove someone to want to join the organization and be part of that journey.
[00:13:56] And I do view performance feedback at that interview stage as really key and important because the candidate experience is just the precursor to the new hire experience.
[00:14:10] And so they're getting a sense of how you manage performance, how you give feedback.
[00:14:15] And I've had several managers come back to me and say, hiring managers in particular, come back to me and say, well, this candidate wants feedback.
[00:14:24] What kind of feedback do I give them?
[00:14:27] Do we even give them feedback?
[00:14:28] And resoundingly, yes, if they are eager to receive that feedback.
[00:14:34] Absolutely.
[00:14:36] And I think to your point around the human touch and that sort of self-preservation bias, we really have to ask ourselves as HR professionals and humans generally, we have to ask, do we feel there's a need for human touch because of our own bias?
[00:14:54] Or do we just not want to be replaced by machines?
[00:14:57] But the real question is, are candidates and employees desiring the human touch in their everyday interactions with HR and people teams?
[00:15:08] Or are they desiring something different?
[00:15:12] And so how can we actually improve the candidate and employee experience?
[00:15:17] Again, by meeting people where they are and utilizing AI to replace steps in a process that seem cumbersome on both sides and actually can negatively impact the employee experience, even for recruiters.
[00:15:33] Right.
[00:15:34] If you're in that space of doing manual follow-ups and you're constantly reaching out to candidates, getting no's or reaching out to candidates that are not necessarily qualified, your morale goes down as well because you're getting more no's than yeses.
[00:15:50] And that in and of itself can lead to burnout.
[00:15:54] So we have to ask that question of ourselves.
[00:15:57] Can we automate or can we utilize AI so that ultimately we can elevate strategically as professionals?
[00:16:05] And if we don't want to elevate, that's perfectly fine.
[00:16:09] But embracing those technologies that make our lives easier and allow us to continue to grow as professionals is really key as well.
[00:16:17] Yeah, no, absolutely.
[00:16:18] So I guess when I think about designing these experiences, you've got a lot of challenges in doing that, right?
[00:16:28] You mentioned like you now have four if not five generations in the workforce or soon we'll have five.
[00:16:36] You've got different roles whose experiences, you know, obviously intersect.
[00:16:42] That's a candidate and a recruiter and a hiring manager on the TA side or it's, you know, the people manager, the employee, maybe, you know, colleagues.
[00:17:00] Maybe you're working on multiple projects, and maybe you have multiple sort of bosses, or you're dotted-lined to a couple of people.
[00:17:00] There's a lot of factors that come into play.
[00:17:03] One of the things that I was thinking about in that regards is like all the other data that is all over the place.
[00:17:14] Yeah, there's so much data.
[00:17:15] I don't know that, you know, unless organizations are pretty mature in their not just general sort of data and analytics maturity, but specifically within HR.
[00:17:27] Like in your experience, are people gathering all of that to make a comprehensive sort of assessment about where someone is and, you know, potential trajectory as they navigate their career?
[00:17:44] I would say they want to. You know, if I go back to the origin story of PeopleLogic a little bit, the goal was to ultimately prove out the hypothesis that engaged employees equal happy customers equal revenue.
[00:17:59] But within each of those pieces, especially the employee satisfaction piece are all these other variables and data points that you have to synchronize.
[00:18:10] And even today, organizations at a larger level are doing this through spreadsheets.
[00:18:17] And I heard this in the AIHR Summit on Wednesday.
[00:18:23] You know, there are 2,600 person companies that are still trying to figure out how they piece together all of the employee data that they have to get a holistic picture of what engagement looks like.
[00:18:35] Hi, I'm Steven Rothberg.
[00:18:38] And I'm Jeanette Leeds.
[00:18:39] And together we're the co-hosts of the High Volume Hiring Podcast.
[00:18:43] Are you involved in hiring dozens or even hundreds of employees a year?
[00:18:46] If so, you know that the typical sourcing tools, tactics and strategies, they just don't scale.
[00:18:52] Yeah.
[00:18:53] Our bi-weekly podcast features news, tips, case studies and interviews with the world's leading experts about the good, the bad and the ugly when it comes to high volume hiring.
[00:19:04] Make sure to subscribe today.
[00:19:05] And I think performance management, objectives and key results, one-on-ones, feedback, reviews, surveys, all of those pieces of data are in disparate tools today.
[00:19:18] Unless you are using one that gives you superpowers and doesn't require you to download five different spreadsheets to put it together.
[00:19:27] And so performance management, reviews, individual development plans, it doesn't get talked enough about in the context of AI for HR.
[00:19:38] And I appreciate you bringing it up because it has tremendous power to not only reduce the amount of time that HR and managers and employees need to spend on lengthy reviews and working through some of these processes,
[00:19:56] but it has the ability, as you and I have talked to reduce that recency bias.
[00:20:01] And ultimately, I believe AI has the potential to become even more advanced with the agent-based AIs that are available.
[00:20:12] It creates this opportunity to bring together all of that disparate data that I ultimately would love to see happen for our customers and for organizations more broadly because right now it's bite-sized.
[00:20:27] And AI can pull that thread across talent management and allow HR teams to better understand what's actually going on in their organization.
[00:20:38] You kind of confirmed what I was thinking, that this is still early days, not just with the evolution of AI, but just even the data and analytics maturity and making sure we're understanding what the results that we're getting and the results that the managers reviewing with the employee.
[00:21:00] Have they considered all these other factors that may have influenced the performance itself?
[00:21:06] So those are all like all these different inputs, but then the outputs as well, right?
[00:21:10] Like once you, if you're able to do all that, then you have a lot more insight coming out the other side, which is, well, you know, this is not better than me, but I'm thinking like, you know, mentoring opportunities, coaching opportunities, learning and development opportunities.
[00:21:30] This isn't just about like, you know, you pass or fail or good or bad.
[00:21:34] And, you know, this, you know, these two people are going on a PIP and these three are going on to bigger and better things.
[00:21:41] It's much more nuanced and complicated than that.
[00:21:44] But again, if you were thinking about how do we guide everyone to their potential, you've got to have those outlets that says, you know, based on the performance, based on the factors within performance management.
[00:21:59] And we, and the potential that we still see, these are some things that can, you know, maybe help this person get back, you know, on track or maybe there's another path for them, right?
[00:22:10] Part of this is around upskilling for sure.
[00:22:12] But part of it is around like reskilling and internal mobility and other opportunities that they might be better suited for.
[00:22:17] And understanding what skills they have that are untapped as well.
[00:22:21] We can revolutionize workforce planning in a really wonderful way.
[00:22:27] If any of us are going through workforce planning right now, it's like the, it's the bane of existence for organizations because they have to pull together all of, all of the data points with the positive intent that they want people to grow.
[00:22:41] And they want to be able to incentivize them to reach new levels and new heights, but can't do that from a data-driven perspective just yet for a lot of organizations.
[00:22:51] And we mentioned talent development or, you know, learning paths and upskilling, you know, those investments in talent initiatives are primary for many organizations, according to Gallup, according to McKinsey.
[00:23:06] They want to invest in their current workforce because we're reaching a place where it's really, really difficult to hire for those specialists, especially.
[00:23:16] And investment in talent initiatives and proving out the ROI of those is one of the most difficult things to do on the learning and development side.
[00:23:26] And so when we think about AI and its capabilities, it can help you prove out that ROI when you bring together all of this data and you have greater insights.
[00:23:37] I was also thinking about the way that, whether it's in talent acquisition or talent development, the data or the insights you might get out of something like PeopleLogic, that's going to fuel better-quality data for any kind of algorithm you're using to assess someone's potential to succeed.
[00:24:07] In these other roles, because before you may pull in historical data about your past leaders or whatever.
[00:24:16] And you're trying to basically, as you hire, you might try to like recreate that person, but you know what I mean?
[00:24:23] The purple squirrels, you try to recreate them and find them.
[00:24:27] Right.
[00:24:28] The very data that was used to create those purple squirrel profiles is suspect, right?
[00:24:37] Because you have all of these biases, you have like, you know, there could have been, you know, nepotism.
[00:24:45] It could have been, you know, all kinds of favoritism.
[00:24:47] There could have been just anecdotal, you know, information that is not necessarily fact-based.
[00:24:55] So now you've got a much more sort of trustworthy, you know, set of data points to say, you know, now we're going to, I mean, you need to track this, I guess.
[00:25:07] But it seems like going forward, once you have this sort of upgraded, you know, system with that comes more trustworthy and reliable data.
[00:25:19] And now going forward, that influences your ability to make better quality decisions.
[00:25:26] Absolutely.
[00:25:28] Absolutely.
[00:25:28] So the one phrase that I heard over and over again is, we need to hire another Emily, or we need to hire another Enrique.
[00:25:37] And that in and of itself made me cringe because we're not naming their success factors and the qualities that made them successful in their roles.
[00:25:49] We are naming a name into a function.
[00:25:52] And that in and of itself sets us up for failure on the recruiting side of things.
[00:25:58] And so when we can look at the success factors and the data from what has Emily or what has Enrique done that has been above and beyond, how have they performed from the time that they joined to the time of now?
[00:26:13] You know, what, how have they grown?
[00:26:15] Do we need to hire another Emily or Enrique?
[00:26:18] Or do we need to adjust our job description and our expectations and make it separate from a person and separate from our bias and go out and recruit for key traits that a lot more individuals will exhibit in a hiring process than just your very small candidate profile that you're basing it on, which is your bias.
[00:26:40] Yeah, absolutely.
[00:26:41] I wanted to switch gears a little bit and just talk about overall transformation and change, right?
[00:26:51] This is a huge challenge.
[00:26:52] It always has been.
[00:26:53] I spent 25 years doing various transformation programs and projects and initiatives at IBM.
[00:27:01] I know you spent some time there as well.
[00:27:03] And at NBC Universal as well as just constant state of transformation.
[00:27:09] And those projects don't have a great track record.
[00:27:16] So where I get concerned and what I talk about a lot on this show and basically with anyone who will listen is if you weren't successful before, when those transformation projects were, you know, within a line of business, they were within a division.
[00:27:32] Maybe they're related to digital capabilities and things like that.
[00:27:37] But as big as those initiatives seemed, AI-driven transformation is bigger and more complex than any prior one, at least that I've been involved in.
[00:27:51] And I, you know, I'm no spring chicken.
[00:27:53] I've been around a while.
[00:27:55] So the question to you is, like, what are you seeing, like, that is helping people, you know, think about this size and scope of what's in front of them?
[00:28:08] Are they encouraged by what you're seeing?
[00:28:10] Are people still, you know, about to be caught flat-footed if they're not already?
[00:28:14] I wish I could say I was encouraged.
[00:28:18] I think there are pockets of encouragement.
[00:28:22] And I will admit that I'm in a little bit of a filter bubble with our customers who are very excited about the change, have not tackled a change such as modernizing their technology or even utilizing AI.
[00:28:35] But they're excited and they are initiating change from a place of acknowledging when there's too much else going on organizationally and change-related.
[00:28:47] And when they can actually, you know, phase in the change of a performance management tool or a new process.
[00:28:54] What I guide our customers on and what I encourage organizations to do is understand the emotional response to change and how that can fuel engagement and adoption or how it can just cost you a lot of money and not have the engagement or adoption that you desire.
[00:29:15] And I like to get a little bit sciency and brainy on this.
[00:29:21] But when our bodies encounter change, they initiate what we call the fight or flight response.
[00:29:28] It's that immediate physiological response that our bodies naturally go to.
[00:29:33] And so we can't judge it.
[00:29:34] We probably can't stop it.
[00:29:36] We can slow it if we're self-aware enough to know that that's what's happening.
[00:29:40] But it involves various systems and various parts of our brains from a sympathetic nervous system to, you know, the hormones and the endocrine system.
[00:29:51] All of that is involved in the change.
[00:29:54] And companies who can recognize that from a physiological level, individuals experience change very differently and then ultimately help them through those changes and those stages.
[00:30:08] I think you and I talked about the Elisabeth Kübler-Ross curve and those stages of grief and the stages of change.
[00:30:15] We have to acknowledge that all of those exist from denial to excitement to, you know, not knowing how to do something new and being afraid of that.
[00:30:26] Our bodies will move into an adaptation phase.
[00:30:30] But easing into that adaptation phase, it's on organizations to really understand the behavioral patterns that will ultimately be fueled by, you know, motivation and engaging the parts of the brain that are excited about change and can get excited about building new habits and building new skills.
[00:30:54] We've got to pay attention to those things.
[00:30:57] And, you know, we can't just accept or expect people to adapt because it's, again, physiological and it's a specific response to each person.
[00:31:07] So we need to consider that a little bit more when we're initiating change for sure.
[00:31:12] Yeah, I absolutely agree.
[00:31:14] And I think some of what you just described, like some of it's around your own sort of biases, some of it's around, you know, that emotional response, anxiety.
[00:31:27] There's a lot of things that sort of enter one's, you know, thoughts when we think about where this is going and the speed at which some of this change is coming, I think is scary unto itself, right?
[00:31:39] Like I'll never possibly, you know, keep up with all of this and how do I, you know, stay ahead of it, stay on top of it, et cetera.
[00:31:46] I do think the organizations, there are things that individuals can do absolutely on their own and their personal devices and things like that, even if the company hasn't really fully embraced this.
[00:31:56] But I do think the organization has a responsibility to have some AI literacy, you know, programs and, you know, reverse mentoring.
[00:32:07] I mean, there's all kinds of programs that you can put in place to sort of hold people's hands, as it were, along this journey.
[00:32:17] Because everyone is in this together and you really need to do what you can to, you know, mitigate some of that anxiety around what's happening.
[00:32:27] So that's, I guess, another area where I try to push, nudge, I'll say, people to really think about AI literacy, not just hands to keyboard, you know, basic prompting.
[00:32:42] But why is this coming?
[00:32:46] Why do we need to do this, not just from an efficiency perspective, but from an effectiveness perspective to improve the experiences like we were talking about before?
[00:32:54] Or that overall, as scary as this is, we're doing it because we're at least cautiously optimistic that this is the way that is going to be sort of a win-win for everyone as we move forward.
[00:33:11] And so part of it is just straight-up communications, right?
[00:33:15] Like, how are you empathizing with people in different roles in different departments?
[00:33:20] And how are you bringing them along to show them that this is something we're bringing to them and for them?
[00:33:28] It's not something that's there to replace them, or at least I hope not.
[00:33:53] Absolutely.
[00:33:53] And I think that willingness to equip individuals with the knowledge and the capabilities to utilize AI responsibly in their roles, and organizations saying, here's how we will utilize AI responsibly and minimize bias in doing so.
[00:34:12] I look at this as education, which you touched on a little bit.
[00:34:16] But we need a whole new lens on bias training.
[00:34:19] Like, that's what AI has pushed us to a place of, is that, yes, we need bias training on, you know, gender, equality, equity, ethnicity, race, religion, all of those components that make each individual who they are and special and unique.
[00:34:39] And then we need to layer on top of that what it means for AI to be part of our organization and training on how we engage with AI, how we engage with one another.
[00:34:53] And then policies.
[00:34:57] Organizations, they love and hate policies depending on where they are in their evolution.
[00:35:02] And I'm not a policy for the sake of policy kind of person or, you know, too much process for the sake of process.
[00:35:11] But policies at an organizational level on how they will utilize AI, to your point, being able to communicate effectively and transparently about those, what that means for roles and how those roles can be enhanced by AI, not supplanted or replaced by AI.
[00:35:29] And those types of policies and reducing that anxiety is up to organizations to do for individuals.
[00:35:39] And, yes, individuals should go out and educate themselves on their own and they should be curious.
[00:35:45] But organizations have that responsibility to their employees to actually say, here's what this means for you.
[00:35:51] Yeah, I think that's always been a key part of any sort of transformation or anything trying to drive behavior change, right?
[00:36:00] It's like what is in it for me.
[00:36:03] And just on the responsible AI piece, I mean, obviously that's one of the themes that I talk about consistently.
[00:36:11] And you're right.
[00:36:12] I mean, when we talk about AI literacy, it's not just the skills, or how do I create my own agent, or how do I use AI to write a job description or summarize a meeting or things like that.
[00:36:28] It's am I using it properly?
[00:36:31] Is it being designed appropriately with ethics and governance and all these things in mind, right?
[00:36:37] Because when you do have the ability, you can't just push it back on solution providers.
[00:36:44] Well, they built it.
[00:36:47] I just need to use it.
[00:36:48] So when you give it to me, it should be safe to use or whatever.
[00:36:51] It's not that simple.
[00:36:52] It's like a driver's license: somebody gave you one, so you can hop in any car.
[00:36:55] But it's not up to the manufacturer to teach you how to drive that car safely.
[00:37:00] And so I think it's inevitable.
[00:37:03] I'm curious to get your take.
[00:37:04] I think it's inevitable that organizations will have mandatory AI literacy training, just like they did for data privacy and cybersecurity.
[00:37:12] I don't see any reason why you wouldn't have a similar sort of course when you onboard and maybe ongoing to make sure that people are thinking about how to be responsible by design.
[00:37:26] Because technically, anybody could go create a co-pilot or a custom GPT or whatever.
[00:37:32] So you can't just say, hey, look what I did and not think about where you got the data from or what someone may do with the information that your new toy is going to present to them.
[00:37:47] So we're all responsible.
[00:37:49] I love responsibility by design.
[00:37:52] I'm going to steal that, because for any of us who are in the data world, interacting with models and AI tools that we're particularly fond of or concerned about, whatever the case may be, responsibility is the thing we need to keep top of mind.
[00:38:15] And so I'm a big fan of a few key tools that I trust right now.
[00:38:22] So Notion has been my right hand when I needed to get started.
[00:38:26] When I have brain fog and I need to write an e-book or I need to write a blog post, it's my just get started right hand person as I use my left hand to articulate this.
[00:38:39] Can I just ask you a question about that?
[00:38:40] Of course.
[00:38:41] I like Notion a lot too.
[00:38:42] Yeah.
[00:38:43] I'm trying to get everything out of all these other tools and stick it into Notion.
[00:38:47] So that's my source of truth.
[00:38:49] But I was curious if you're actually using the Notion AI capabilities or just Notion itself.
[00:38:55] Oh, I'm using Notion AI capabilities for sure.
[00:38:59] This is the tool that I have found and everyone has their favorite.
[00:39:03] So Claude, ChatGPT, whatever that may be.
[00:39:06] There's no judgment.
[00:39:07] I'll talk about concern in a minute.
[00:39:09] Sure.
[00:39:10] It's my place to continue particular threads, whether it's about brain science, performance management, change management, whatever it may be.
[00:39:20] It is my log and my continued ability to prompt what Notion knows about me and my voice and how I interact with it that I love.
[00:39:33] And so, no, it's not going to build me slides.
[00:39:36] I use Gamma for that.
[00:39:37] Love that tool as well.
[00:39:39] Yep.
[00:39:40] But Notion is going to keep that running thread of where my brain is going and how I'm prompting it so that I can continue talking to it and asking questions and building.
[00:39:51] And so, I love their AI capabilities and it meets me where I am, truly.
[00:39:58] Right.
[00:39:59] So, you mentioned something you're concerned about.
[00:40:01] You came across something.
[00:40:02] I think it's more so just being able to distinguish between AI tools and the models that power them.
[00:40:14] So, I've worked for data companies in the past.
[00:40:18] Our data scientists have built models that are phenomenal.
[00:40:24] But we look at the makeup of that data science team and the team that's building the models and it's not diverse.
[00:40:30] And there's perhaps the place for bias or there is bias within that.
[00:40:37] I'm not saying that, you know, ChatGPT or other providers are building their models with bias.
[00:40:45] But I think we have to use models that complement our desire for reducing bias in performance management processes specifically because they are HR processes and there's a level of risk and importance there.
[00:41:00] But then we as humans have that responsibility to use models that reduce bias and that we can trust.
[00:41:08] Yeah, I think you mentioned like agentic AI a little earlier and we definitely won't have time.
[00:41:16] That's a whole other conversation we'll have to get into.
[00:41:18] This is top of mind, not just because it's, you know, topic of the day, but I'm actually in the middle of an AI agent and agentic workflow sort of boot camp right now.
[00:41:30] It's challenging, but it's really important to understand how you get these agents, these copilots to actually, you know, think like you and, you know, give output that sounds like you.
[00:41:45] The example we're using is basically embedding ourselves into one particular agent.
[00:41:52] So we're building a bunch of them, but one of them is around like making sure that your brand voice really sounds like you and your company, right?
[00:42:01] So that takes continuous training, feeding it more and more examples of things that you've written, guiding it with both positive and negative examples and consistently evaluating it.
[00:42:15] So people here start to hear this term evaluations or evals.
[00:42:19] My course calls them evils because everyone seems to get stuck.
[00:42:22] But it's like, you know, if you're familiar with traditional, you know, software development, you know, cycle and some of the testing that gets done, evaluations are not just about what happens in, you know, pre-production.
[00:42:37] You're constantly evaluating the model and tweaking it so that it's consistently sort of learning and recalibrating.
[00:42:46] So I'm encouraged by that, but I know it's going to be quite a while before everyone really embraces that.
[00:42:52] And for now they're going out and they're grabbing these shiny objects and they're paying attention to, or they're basically, you know, buying what is coming out of these tools.
[00:43:04] And you just need to be careful.
[00:43:05] I don't want people to trust these like calculators.
[00:43:08] I mean, you have to still apply your critical thinking.
[00:43:12] I mean, if you ask a question that you already know the answer to just to test it, you'll see that, you know, you may not get 100% accurate answers.
[00:43:22] So I just I appreciate that you're finding some things that are concerning.
[00:43:28] And that's part of maybe the AI literacy is you can't just grab any shiny object, find out what core model this is actually built upon.
[00:43:37] And has this been tested for a variety of biases and other things?
[00:43:41] Yes, absolutely.
[00:43:44] I'm not sure how much I can add there because you're 100% on track and you have to keep iterating and updating and providing the context that you as a human have to train it, make it better, more accurate.
[00:44:00] But also in whatever way you can free of bias.
[00:44:05] Because if you're using this as a custom GPT or as an agent to supplement your HR function, then there's a huge potential for some missteps there.
[00:44:20] If you're not training it and feeding it with the right information or the information that's accurate and correct.
[00:44:27] If you're allowing it to make assumptions versus actually leaning on the data that you're feeding it, so on and so forth.
[00:44:34] And so I think you've mentioned AI literacy so many times there.
[00:44:41] And that is it's so incredibly important.
[00:44:44] My mind immediately goes to we haven't even scratched the surface on data literacy, much less AI literacy.
[00:44:50] And so I would encourage organizations to do both at this point.
[00:44:56] Because data literacy, I believe, is very closely tied to AI literacy because the data literacy is all of the inputs and understanding at the core what you're feeding a model and what you're prompting a model to do.
[00:45:12] And then AI literacy is how to use it responsibly and how to not just prompt, but think very critically about what you're doing and about what you're utilizing.
[00:45:23] So the curiosity and education on that front should be never ending, truly.
[00:45:29] It's not reading one article.
[00:45:31] It's not attending one learning module.
[00:45:35] It should be consistent because I do feel as though Gen AI and the tools that we're using today will continue to be even more impactful in our work and in the world as we go forward.
[00:45:50] I don't see it stopping.
[00:45:52] So maybe we'll come back to this in four years and I'll change my statement.
[00:45:56] Yeah, no, I think you're right on point.
[00:45:58] I mean, it's a fair statement to make, though, because I think as AI gets seemingly smarter, seemingly has the ability to reason, right?
[00:46:13] Then people start to get even more concerned because everyone's always looking for shortcuts.
[00:46:21] They're looking for easy ways to do things.
[00:46:23] If it sounds good enough, you know, is that truly where everyone sort of draws the line, and then they just...
[00:46:32] Right.
[00:46:33] That's the bar.
[00:46:34] Right.
[00:46:34] So it is concerning.
[00:46:36] I mean, I think it's already concerning for a lot of, you know, knowledge workers who thought, oh, well, you know, you can automate whatever you want.
[00:46:45] That's not going to really impact me at the end of the day.
[00:46:48] I get to still do all the, you know, higher order, you know, thinking and whatever.
[00:46:53] But, you know, you start to see that some of these cases that are involved now.
[00:46:58] And I think even people who thought they were sort of shielded from a lot of this are realizing that's not necessarily the case.
[00:47:07] But to your point about the literacy, I think people need to get their hands dirty and just start playing with it and testing it.
[00:47:16] Right.
[00:47:17] It's your support, you know, forum and your help desk agent and your technology all at the same time.
[00:47:27] So if you don't understand something, just ask it.
[00:47:30] You'll start to see where it's quite imperfect.
[00:47:34] Right.
[00:47:35] And that's where I say, you know, just get started.
[00:47:38] Just start asking questions and you will very quickly understand, you know, is this the right tool for me?
[00:47:49] Are these the right inputs?
[00:47:51] Are these the inputs and the outputs that I desire?
[00:47:55] Let's not set the bar at just good enough, please, for how we're utilizing these tools.
[00:48:03] And let's not forget that ultimately we are humans and we have the context that is so incredibly important to the work that we do, whether we're using an AI tool or not.
[00:48:15] Yeah, for sure.
[00:48:17] Sarah Katherine, we covered a lot today.
[00:48:19] I want to be respectful of your time.
[00:48:22] There's a lot more to talk about.
[00:48:23] I hope we stay in touch and continue these conversations anytime.
[00:48:29] Yeah.
[00:48:29] Thank you so much for the opportunity.
[00:48:31] It was absolutely really great.
[00:48:32] I hope everyone walks away with a nugget or five from this conversation, because I have.
[00:48:39] And that's wonderful inspiration for me.
[00:48:43] Awesome.
[00:48:43] No, I really appreciate you saying that.
[00:48:45] And thank you so much for joining me.
[00:48:48] Of course.
[00:48:49] Thank you again.
[00:48:50] Take care.
[00:48:51] You too.
[00:48:52] Thanks, everyone, for listening.
[00:48:53] And we'll see you next time.


