Bob Pulver speaks with Lydia Wu, an expert in all things HR tech and the future of work. They discuss Lydia's journey in the HR tech industry, her initiative 'Oops! Did I Think That Out Loud?', and the importance of transparency and accountability in HR technology. Bob and Lydia talk about the challenges of fast-paced innovation in HR tech and the need for a holistic understanding of the ecosystem to ensure responsible AI practices. The conversation gets into the complexities of the AI-in-HR-tech landscape, the challenges of separating fact from fiction, and the role of people analytics in talent transformation. They emphasize the need for better questions from buyers, the significance of storytelling with data, and the skills gap in HR. Bob and Lydia also touch on compliance and risk management in the age of AI, the evolving landscape of HR leadership, the impact of AI tools like Bersin's Galileo on HR market research, the importance of curiosity and play in the workplace, and the balance between innovation and trust in AI applications.
Keywords
HR Tech, AI, transparency, innovation, accountability, startup, ecosystem, DEI, Responsible AI, People Analytics, Data Storytelling, Skills Gap, Workforce Impact, Compliance, Risk Management, AI Tools, fractional leadership, HR research, Galileo, workplace innovation, curiosity, play, instant gratification, market research, technology in HR
Takeaways
- Lydia emphasizes the importance of transparency in HR discussions.
- Fast-paced innovation can lead to significant impacts on end users.
- HR tech designers need to consider the full ecosystem when launching solutions.
- Accountability is crucial in the HR tech industry.
- Lydia advocates for positive feedback rather than criticism in HR tech.
- Understanding deeper layers of innovation is essential for success.
- Engagement and people-centric approaches are vital in HR tech.
- Separating fact from fiction in AI is crucial.
- Buyers need to ask better questions about AI solutions.
- Responsible AI requires more than just compliance.
- People analytics is often seen as a luxury.
- Storytelling with data is essential for HR success.
- The skills gap in HR is widening with AI advancements.
- HR must understand the implications of AI on the workforce.
- Compliance and risk management are critical in AI adoption.
- New AI tools can bridge gaps in HR practices.
- HR must evolve to support business needs effectively.
- Attention spans are decreasing, necessitating quicker information access.
- Balancing theory and practice is crucial in HR research.
- AI tools should acknowledge their limitations to maintain trust.
- Curiosity and play are essential for innovation in the workplace.
- AI can help expand ideas but may struggle with summarization.
- The integration of AI in HR requires responsible innovation.
- Understanding AI's capabilities can lead to new ideas and applications.
Chapters
00:00 Introduction to HR Tech and Lydia Wu's Journey
03:04 The Real Talk in HR Tech: Transparency and Accountability
05:55 The Impact of Fast-Paced Innovation in HR Tech
08:00 Navigating the Landscape of AI in HR Tech
13:52 The Role of People Analytics
20:05 The Skills Gap in HR
26:05 Compliance and Risk in the Age of AI
30:49 The Rise of Fractional Leadership
38:32 Galileo: Transforming HR Research
44:12 AI in the Workplace: Balancing Innovation and Trust
50:37 Curiosity and Play: The Future of Work
Lydia Wu: https://www.linkedin.com/in/lydiaywu
Oops! Did I Think That Out Loud?: https://beacons.ai/oopsdidithinkthatoutloud
For advisory work and marketing inquiries:
Bob Pulver: https://linkedin.com/in/bobpulver
Elevate Your AIQ: https://elevateyouraiq.com
Powered by the WRKdefined Podcast Network.
[00:00:00] Welcome to Elevate Your AIQ, the podcast focused on the AI-powered yet human-centric future
[00:00:04] of work. Are you and your organization prepared? If not, let's get there together. The show
[00:00:09] is open to sponsorships from forward-thinking brands who are fellow advocates for responsible
[00:00:13] AI literacy and AI skills development to help ensure no individuals or organizations
[00:00:17] are left behind. I also facilitate expert panels, interviews, and offer advisory services
[00:00:22] to help shape your responsible AI journey. Go to ElevateYourAIQ.com to find out more.
[00:00:39] Thanks for listening to another episode of Elevate Your AIQ. Today I'm thrilled to sit
[00:00:43] down with Lydia Wu, a well-known expert in all things HR tech, talent acquisition, skills,
[00:00:48] people analytics, AI, you name it. I've been a follower of Lydia's for quite a while, not
[00:00:53] only for her expertise, but also she doesn't pull any punches when assessing solutions and
[00:00:57] the market more broadly. You should definitely check out her webinars and multimedia content,
[00:01:01] particularly her new Oops! Did I Think That Out Loud series. I'll put some links in the
[00:01:06] show notes. In this wide-ranging conversation, we'll dive deep into everything from the critical
[00:01:10] need for transparency and responsible AI practices to the exciting possibilities that generative
[00:01:15] AI tools are bringing to HR and talent research. Whether you're an HR professional navigating
[00:01:20] AI-driven transformation or somebody curious about how this technology is reshaping the future
[00:01:25] of work, you're in for a treat. Hello everyone, welcome to another episode of Elevate Your AIQ.
[00:01:31] I'm your host, Bob Pulver. Today I have the pleasure of speaking with Lydia Wu. How are you, Lydia?
[00:01:36] Doing very well. Thanks for having me here. Absolutely. I appreciate you being here.
[00:01:40] Just to kick things off, why don't you give the listeners just a little bit of detail around
[00:01:45] your background and what you're working on? Yeah, absolutely. So nice to meet everyone.
[00:01:49] Thanks for listening to this episode. My name is Lydia and I've been in HR tech for about 15 years.
[00:01:53] At this point, I call myself the full 360 of HR tech. So I started out as a consultant
[00:01:59] implementing HR technology. Went in-house to run an HR tech portfolio, HR tech buying strategy,
[00:02:05] which included people analytics and a little bit of AI as well. Got a little bored of that,
[00:02:09] so pivoted out, went on to the solution provider side. And now I'm working with a variety of startups
[00:02:16] in terms of what it's like launching a startup in the HR tech world. What's the ecosystem like?
[00:02:21] How do you create a solution that's best fit for the end users? And simultaneously,
[00:02:26] I'm also running my little channel called Oops! Did I Think That Out Loud?
[00:02:30] Yeah, I definitely want to hear a little bit more about that. So we were just talking in the green room
[00:02:37] about your Galileo review, Josh Bersin's review. Is that the kind of thing, we'll get into that as
[00:02:42] we talk, but is that the kind of thing that you're going to be doing? Just taking a firsthand look at
[00:02:50] some of these tools and calling people out if they're doing some not so great things?
[00:02:56] BSing? Yes. Absolutely. So I think the whole premise of how Oops! Did I Think That Out Loud even came
[00:03:02] about was, I distinctively recall sitting in a meeting room, and this was a pretty senior meeting
[00:03:07] room. And I'm listening to all these HR executives talk. And there was a moment where I'm like,
[00:03:11] oh my God, the stuff that you say in this room can never exit this room. Do people you work with
[00:03:17] actually know that this is what you think? And this is like what's going through your head as
[00:03:21] you're kind of going out there and championing it's about the people, it's about like the engagement,
[00:03:26] et cetera, et cetera. So that was when I kind of started it. I think that's when I launched the very
[00:03:30] first article, which is about DEI in the corporate world, essentially. And since then, I've just strived to
[00:03:36] be sort of like the real talk BFF, as I call myself in HR tech, which is essentially like,
[00:03:42] let's call things for what they are. If it's legit, let's say like, yep, it's legit, move on. If it's
[00:03:47] not legit, give some positive feedback. There's no need to chase after people with pitchforks, but there's
[00:03:52] also, I don't think, room for snake oil necessarily to be sold in the market anymore.
[00:03:58] Yeah, it's funny. I guess I kind of do that off the record, like whether it's Monday morning
[00:04:04] quarterbacking, as they say, or what have you. But this whole show is around responsible AI,
[00:04:11] and I'm always pushing people to think more deeply about the things that they're building,
[00:04:15] as well as just this concept of being responsible by design, just like we used to say,
[00:04:21] privacy by design and all these things. If you think about it from the beginning,
[00:04:25] from conception, then you have less to worry about. And you can, I guess, move fast,
[00:04:33] maybe break a few things, but make sure that you're not completely propagating these behaviors.
[00:04:42] And we've seen that a lot with some of the generative AI solutions, certainly.
[00:04:47] For sure. And I think this whole notion of moving fast and breaking things in the world of HR tech,
[00:04:51] it's almost like we took the surface without understanding the deeper layers of what that
[00:04:57] sentence was intended to mean. So we're like, oh yeah, let's just move fast, left, right,
[00:05:00] and center. And like, we'll just bounce off like a ping pong ball in a pinball machine, essentially.
[00:05:05] And it's one of those moments where you're kind of like, okay, yes, but let's think about the full
[00:05:10] circular ecosystem here: if you have investors or vendors or solution providers, who are kind of like
[00:05:16] pinballs in a pinball machine bouncing off, think about the end users, think about the HR team that
[00:05:21] are kind of chasing after those trends, not exactly sure where things land, not having the time to
[00:05:26] necessarily investigate. And then on top of that, think about the ultimate workforce that's on the
[00:05:31] receiving end of that technology. I think the impact is exponentially larger in that case. And I'm
[00:05:38] not sure if every HR tech designer out there thinks about that when they're launching new solutions.
[00:05:44] Yeah, no, I would absolutely agree. This is a huge challenge. I mean, we've seen it,
[00:05:48] you know, when I go to some of these events and walk around, you know, some of the booths,
[00:05:53] I'm always going in there with a very healthy dose of skepticism. And do the people manning,
[00:06:00] you know, the booth even know how to properly answer some of those questions?
[00:06:06] They don't.
[00:06:07] They don't.
[00:06:07] They absolutely don't. So here's the most iconic conversation I have had. I will not name the
[00:06:13] vendor necessarily, but it was basically demo, demo, demo, fun things, really cool UI, everything else.
[00:06:19] And then I asked a question. I'm like, okay, so what's powering this in the back? Is it pure LLM?
[00:06:24] Do you have a vector database or do you have a graph database? And they were like, yes, all of it.
[00:06:30] And that was my moment where I'm like, okay, I'm going to politely walk away now because that was
[00:06:35] an 'or' question. It was not an 'and' kind of question. So it was kind of that moment. I'm like, as a sort
[00:06:42] of person representing the company, and this person actually carried a title for the company as well.
[00:06:46] So, you know, they're not like a hired marketing kind of help. It was that moment where you're
[00:06:50] just like, do you even know what you've built? And if you don't know what you've built, how do I as a buyer
[00:06:57] trust that I can get what I need to do done? And you're actually there to help me solve my problem.
[00:07:02] And you're not just selling the buzzwords in the market.
[00:07:05] Yeah. I brought this up. Let's see. I was at HR tech 2022 and 2024. So before HR tech 2023,
[00:07:14] I knew I wasn't going, but I put out a post and I was like, you know, AI everywhere. You're going to
[00:07:19] see it in 500 booths. Like it's time to start asking better questions just as you did. And it's not like
[00:07:27] I gave you, here's your cheat sheet, you know, ask these, you know, 10 specific questions, but I did
[00:07:31] have some in mind because when I was at Talent Tech Labs, we helped our clients with the RFP
[00:07:37] processes and stuff like that. And so it was like, you know, these are some of the questions that you
[00:07:40] should ask. And then we had to basically insert, you know, we augmented that with a set of questions
[00:07:46] to start asking about the models and, you know, can we trust the data or, you know, basically are
[00:07:51] you being responsible by design before I was actually using that, that phrase, but, but anyway,
[00:07:56] I was giving people advice to start asking better questions and I, I missed that show. But then this
[00:08:01] year I went to quite a few events and it seemed like it's, it was like deja vu all over again.
[00:08:07] Like nobody listened to me, and we're still letting people get away
[00:08:12] with these things and you, you can't explain it. You can't just put out a statement that says
[00:08:16] you're, you support, you know, ethical AI and human centricity and all that stuff without putting
[00:08:22] your money where your mouth is and, and actually going through the steps and showing that, you know,
[00:08:27] you've either been voluntarily, you know, independently, you know, audited or whatever. But I just felt
[00:08:33] like in the, going forward, this is what it's going to mean to be a trusted partner,
[00:08:36] right? You've taken those steps before you just, you know, basically, you know, threw it over the
[00:08:41] fence and implemented it or sold it and implemented it at your, your clients. Like this is important.
[00:08:49] And legislation is always going to be trailing, but responsible AI is a lot more than just complying
[00:08:55] with a piece of legislation. For sure. And I think the thing about responsible AI, I was telling somebody
[00:09:00] this last week or maybe two weeks ago, I was like, you know, I'm even having whiplash with AI. And I
[00:09:06] literally spend all of my waking hours in the HR tech industry, looking at technology, looking at what's
[00:09:12] the next iteration of HR and work technology. The fact is, when I forecasted something to
[00:09:17] happen in about the next 12 to 16 months, in fact, it's happened in the last sort of six to
[00:09:22] eight months. And I'm sitting here going like, oh my gosh, we are moving so fast right now. Part of me
[00:09:28] is also thinking, what does that mean for your typical HR buyer, right? Like the person who's
[00:09:34] working 10, 12 hours a day, who's maybe buying technology on the side of their desk, trying to
[00:09:39] coordinate procurement, leverage their political capital and ultimately get to the right answer.
[00:09:43] What does the world look like for them right now, besides looking at the shiniest brand with the
[00:09:49] most marketing dollars and kind of like, are you really buying the solution or are you just buying
[00:09:54] the sales pitch of the solution? Yeah, I do think they should be put through obviously the same
[00:09:59] rigor as, as anything else. And you're right. I mean, I've definitely seen, well, that was, I guess
[00:10:05] part of that is just how the VC money was coming in and them having shiny object syndrome as well. I mean,
[00:10:11] certainly contemplated pivoting my advisory work a little bit to be like, look, VCs, this is part of
[00:10:19] your risk mitigation strategy. This is part of your due diligence. Like, do you just really like,
[00:10:23] oh, agentic workflows? Oh, these guys have a cool website. Like take my money, right? Come on.
[00:10:30] Right. Like you're part of the problem. I think there are a few smart ones out there who have already
[00:10:35] identified and recognized that, but I think there is a lot of sales pitch happening, right? Because
[00:10:41] here's the thing. I dabble a little bit with investing in HR tech as well. As an investor,
[00:10:47] you look for something entirely different than as a solution provider or as a founder.
[00:10:52] And as a buyer, that means something that's entirely different to you. So my struggle, and quite frankly,
[00:10:58] the ultimate goal for Oops! Did I Think That Out Loud is how do you bring all of these perspectives
[00:11:03] together so that you're delivering products that are not only generating positive returns on the
[00:11:08] upstream investment side, but also in terms of the downstream usability side, you're not selling
[00:11:13] a dream. You're actually selling a reality. And it's a reality that's actually going to help your
[00:11:18] end users instead of burning through their political capital, running like six months long
[00:11:23] implementations, and then delivering something that's maybe 25% of what they bought or they thought
[00:11:28] they bought. Absolutely. You know, I think the, as I think about some of the events I went to
[00:11:33] this year, I was at RecFest. I was at HR Tech. I was at UNLEASH in Las Vegas. And then there was
[00:11:41] People Analytics World, where we ran into each other in the city. And I don't know if it's just
[00:11:47] like the intimate nature of something like the people analytics world where people are just having
[00:11:53] more seemingly more like real, you know, conversations, but I thought it was a good event.
[00:11:58] I was only able to attend one of the two days in New York city, but it was just a good learning
[00:12:05] experience for me. I mean, I'm not, I don't have a people analytics background. In fact, I don't have an
[00:12:10] HR background. So I'm sort of Monday morning quarterbacking this whole thing. But my background
[00:12:16] is in enterprise transformation, but it's really interesting to see. And on some level frustrating
[00:12:22] to see where these incredibly smart people have the data, they have the insights,
[00:12:30] they should have the ear of senior leadership. And yet people analytics teams,
[00:12:38] talent intelligence teams, it's like, still seems like a luxury, especially like mid market and below.
[00:12:45] And I just, I guess, curious to get your impressions of what you heard there. I know you participated in
[00:12:50] one of the debate panels to kick off day two about whether AI in recruiting is good or bad.
[00:12:56] Let me bring out my little soapbox so I can stand on that for a moment. I have so many things to say
[00:13:02] about people analytics as an industry. I think starting with the events, so absolutely love
[00:13:07] people analytics world. And you're right in that. I think the intimate setting does bring out
[00:13:11] much better conversations and allowing sort of for that sort of closer transition time instead of
[00:13:16] running between expo halls and sessions and expo halls and sessions and so on and so forth.
[00:13:21] Yeah.
[00:13:22] One thing I have always, that's always challenged me with these types of events though,
[00:13:28] especially like when you're speaking at those types of events and in listening to those types of events,
[00:13:32] is that everybody highlights the 1% of the iceberg. You hear the success cases, you hear how amazing
[00:13:39] organizations have done and delivered, how they've created and delivered on those promises. But I think
[00:13:45] when you pull all of the speakers aside and we kind of go backstage a little bit, all of us acknowledge
[00:13:50] the fact that we didn't share the 99%. We didn't share the 99% of quagmire,
[00:13:56] of politicking, of hallway questions and those types of conversations that needed to happen.
[00:14:05] And part of me having heard sort of where the success cases are and what sort of everyday
[00:14:10] practitioners are struggling with does question, is it because we haven't taught people how to
[00:14:15] navigate corporate politics? Is it because we haven't taught analytics practitioners who are trained
[00:14:21] to leverage the data, trust the data, because the data means and speaks everything that when you share
[00:14:27] the data, you need to watch the room. You need to understand their context. You need to understand
[00:14:32] your stakeholders before you just let the data do its thing. So I think there's a little bit of that
[00:14:38] in there for me. And I think the other part of it too is, yes, it's a luxury, but it's almost a
[00:14:45] needed luxury. And I have always had this question, and this is my sort of hypothesis that's neither
[00:14:51] been validated or invalidated, if you will, is when you have a more junior level people analytics
[00:14:57] practitioner, because very rarely do you see VPs of people analytics in organizations,
[00:15:03] aren't we obscuring the data, or the true potential of the data, through different layers of org hierarchy
[00:15:09] so that by the time it actually makes it into the C-suite conversation,
[00:15:13] whether we like it or not, out of personal interest, out of technicality, and something
[00:15:17] else in between, the richness of the data, of the insights never carry itself to the C-suite. So you
[00:15:24] always have the parties who are wanting the data and the analysis, like the business executives,
[00:15:29] and you have the parties who are trying to deliver, but not quite understanding why it's not
[00:15:34] landing the way it needed to in the business. So a couple things come to mind as you describe
[00:15:42] that. One is that maybe, you know, part of this is about better storytelling with the data and making
[00:15:50] it relatable, understanding the breadth of the problem or the challenge, and that, you know, making
[00:15:58] sure that that hits home. Because just from some of the strategic planning work that I did at
[00:16:04] NBC Universal, that was really part of it, telling that story. And then if you're talking about,
[00:16:10] if you're applying numbers to that, you're talking about, you know, cost avoidance, cost savings,
[00:16:15] you know, things like that. Also, like sort of taking the proposed win, you know, based on whatever
[00:16:21] your projections are. And then also saying, now imagine, continuing the story, right?
[00:16:27] Like, now imagine if we reinvested 25% of that in upskilling, or, you know, this other
[00:16:34] solution that's going to get us from, you know, robotic process automation to intelligent automation,
[00:16:40] or it's going to get us from, you know, these pilot projects in this particular domain.
[00:16:45] And now we can start to, you know, propagate that and drive adoption in other areas and things like
[00:16:52] that. So I do think storytelling is a factor. The other thought I had was around, I guess,
[00:17:00] going back probably 15 years or so, like the advent of social media and collaboration platforms,
[00:17:07] you know, sort of the consumerization of that inside an enterprise, what IBM used to call working
[00:17:13] out loud, right? Like people were blogging, people were getting into forums, and they were talking
[00:17:17] about some of these things. And the whole point was to sort of level or flatten the organization so
[00:17:23] that, you know, you could be chatting back and forth with an SVP on a particular topic, and they
[00:17:30] would have the visibility to how conversations were really happening. And then the sort of social
[00:17:37] analytics tools to do that, going back to your point earlier, your comment earlier around the
[00:17:42] graph databases, I mean, IBM research gave us, you know, plenty of those as IBMers. And so it just
[00:17:49] seems like at 2024, now doing planning for 2025 and beyond, like, it's just like, I don't know,
[00:17:57] I guess I get anxious to see, we're moving fast on the AI development front. But obviously,
[00:18:05] the harder part is the behavior change and the actual transformation.
[00:18:09] Yeah.
[00:18:12] You know what you should know? You should know the You Should Know podcast. That's what you should know.
[00:18:19] Because then you'd be in the know on all things that are timely and topical. Subscribe to the You
[00:18:24] Should Know podcast. Thanks.
[00:18:27] And I think it's almost like an industry and a culture transformation that we're asking for,
[00:18:31] right? So what surprises me the most today is, once in a while, I kind of peruse people analytics
[00:18:37] job postings just to see what's in the market, what's everyone asking for. The rise of what I call
[00:18:43] the Excel spreadsheet expert requirement, like, must be an expert in Google Sheets. I am not joking.
[00:18:50] I read this last week, must be an expert in Google Sheets. It floors me a little bit because
[00:18:56] decades later, we still, for some reason, think that reporting equates to analytics. And the debate
[00:19:02] between a VLOOKUP and HLOOKUP and an XLOOKUP is still thriving. And it's a necessary step. I think
[00:19:09] it's a necessary step in every single organization's journey towards analytics. But the fact that
[00:19:14] some of what I would consider larger enterprises are still stuck in that stage really makes me question,
[00:19:22] how is HR in that particular organization having conversations around ROI, around value delivery of its
[00:19:30] workforce? Or have we just kind of given up and gone, like, yep, cut my budget, do whatever you want with me.
[00:19:35] I'll just sit back because it is what it is at this point.
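[Editor's aside: the lookup debate Lydia mentions can be made concrete. This is a rough sketch in plain Python, not anything demonstrated on the show; the employee records and field names are invented for illustration. The gist: VLOOKUP can only search the leftmost column of a range and return a column to its right, while XLOOKUP can search any column and return any other, which is why the "debate" keeps resurfacing in reporting-heavy teams.]

```python
# Invented sample data, purely for illustration.
employees = [
    {"id": "E3", "name": "Ana", "dept": "People Analytics"},
    {"id": "E7", "name": "Raj", "dept": "Compliance"},
]

def vlookup_style(key, rows, return_field):
    # Mimics VLOOKUP(key, range, col, FALSE): exact match,
    # searching only the first ("leftmost") field, here "id".
    for row in rows:
        if row["id"] == key:
            return row[return_field]
    return None  # a real VLOOKUP would show #N/A

def xlookup_style(key, rows, search_field, return_field):
    # Mimics XLOOKUP: search any field, return any field,
    # so you can look "left" as well as "right".
    for row in rows:
        if row[search_field] == key:
            return row[return_field]
    return None

print(vlookup_style("E7", employees, "dept"))         # Compliance
print(xlookup_style("Ana", employees, "name", "id"))  # E3
```

Either way, this is row-by-row reporting, which is exactly Lydia's point: a lookup answers "what is this value?", not the analytics question of "what should we do about it?"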
[00:19:38] Yeah, I've seen, you know, I poke around periodically as well, especially on the like responsible AI, AI
[00:19:45] governance kind of space because I'm really curious to see like the overall sort of adoption. Am I seeing
[00:19:54] increasingly more organizations are looking for people to fill those roles? If they're not, is it
[00:20:00] because they are just complete laggards or have they actually said, yeah, I think this is important,
[00:20:08] but, you know, Joe in legal and Sue in, you know, HR compliance, they, they got this,
[00:20:15] right. So I do pay attention to that too. And even these leadership positions overseeing some of these
[00:20:22] potential groups, they're still asking for the same sort of, I wouldn't say 20th century skills. I mean,
[00:20:30] Google sheets is not a 20th century skill, but the point is when you think about skills in general,
[00:20:37] and your skills gaps and your skills needs, you have to incorporate some of that trajectory of AI
[00:20:46] capabilities. I mean, it may not be great in certain areas now, and certainly you and I can pick apart
[00:20:53] all kinds of tools that are available today. But if you look at the pace at which these things are
[00:20:59] evolving, I mean, really, if these are the skills that are the must-haves in your
[00:21:07] job description, I think you should maybe rethink that.
[00:21:11] Yeah. And honestly, on the topic of skills, that's actually what scares me the most about HR in the age
[00:21:18] of AI. Quite frankly, it's not whether HR is using the appropriate technology, responsible AI,
[00:21:23] and the whole nine yards. I think, oddly enough, the HR tech industry, as it has done in the last
[00:21:28] decade, two decades, will eventually guide the HR department to the right path in that regard.
[00:21:33] What concerns me is in the age of AI, when you have the people parts of your organization,
[00:21:39] not understanding the skill sets, the possible depletion of skill sets that takes to run your
[00:21:47] organization. That's a very scary concept, because imagine if you were a HR business partner,
[00:21:54] a recruiter or sourcer, who got handed a job req that says, I need to hire a prompt engineer.
[00:22:00] Where do you even start? Where do you even start with a draft job description? Where do you even start
[00:22:04] with? Okay, what are they prompting? What kind of language model are they using? Like, what's the
[00:22:08] backend architecture? How do you even start asking the business and having those types of conversations
[00:22:14] with the business? And sure, one job description isn't much. But when your business starts to advance
[00:22:19] faster than HR departments do, you're going to have multiple of these conversations. And eventually,
[00:22:25] I do think it will become a credibility issue from an HR function standpoint of, are you mature enough
[00:22:32] to support the business? Or is this one of those times where the business will have to figure out how
[00:22:36] to support themselves again? That should be a serious warning to HR teams, right? To get their act
[00:22:43] together, because they're already fighting against, you know, reputational issues, right? Like,
[00:22:51] they're not, are they really helping to drive change? Are they really my talent advisors? Are they really
[00:22:56] helping me with, you know, strategic workforce planning and some of these longer term, you know,
[00:23:02] existential things? Or are they just, you know, firefighting and protecting the organization from
[00:23:08] all kinds of risk? I mean, you've got to, you've got to do it all, frankly, and you've got to stay
[00:23:14] on top of this, you can't relinquish control. This is not a technology, I mean, yes, it's AI, but
[00:23:19] it's not exclusively a technology challenge. I mean, HR knows this, right? So it's just a matter of
[00:23:26] how are we going to position ourselves? And as you pointed out before, like, this concept of,
[00:23:33] you know, social capital, political capital within the organization, how can you maintain and grow
[00:23:39] your influence? Because, because if you don't sort of embrace this opportunity, you're really going to
[00:23:45] be behind the eight ball, as they say. Yeah, exactly. And I think it's even like a lot of times I have
[00:23:51] conversations with HR professionals who are like, oh, you know, my job is compliance, my job is risk
[00:23:55] mitigation. Like, I don't need to worry about advancing technology or like, what is AI doing
[00:24:00] to the workforce? And I'm sitting here going like, okay, but if you don't understand what's
[00:24:04] happening, let's talk risk and compliance for a bit. The EU AI act is already in play. We know,
[00:24:09] given GDPR trajectories and the hoopla around that, it's a matter of time before it rolls out
[00:24:14] in North America, in what I call the 50 flavors of the United States, at a certain point.
[00:24:20] Yep.
[00:24:21] It's a matter of time that happens. Without understanding technology, how are you supposed
[00:24:25] to understand the regulation that's behind it? Number one. And on the whole risk mitigation side
[00:24:31] of it, if your employees are leveraging AI technology as part of their personal tool stack,
[00:24:36] but on your company equipment, how are you supposed to write policies around it to kind of help protect
[00:24:42] the company, keep its data private, keep your company data from training the large language
[00:24:47] models out there and looking at that sort of sandbox periphery? Because sure, IT can take some of the
[00:24:54] technical heavy lifting, but let's be realistic when it comes to policy drafting, enforcing the
[00:25:00] policies and kind of monitoring. A lot of times that ends up on HR's desk. And I think it's also a
[00:25:05] question of, even if we break HR down to its most foundational elements of compliance and risk
[00:25:11] mitigation, there is still a bit of, I think, a learning trajectory and maturity curve that needs
[00:25:16] to happen, especially when you start folding in the conversation of ethical and responsible AI amidst all
[00:25:22] of that.
[00:25:23] Yeah. No, it's definitely a lot. We are all responsible, right? And that's why we need to be
[00:25:28] responsible by design. That's why we need, I'd still contend it's inevitable that we'll have
[00:25:33] like sort of onboarding and regular compliance training around, you know, ethical AI, AI governance
[00:25:39] and things like that, just like we do with privacy and cybersecurity and harassment. I just think everyone
[00:25:45] needs to know those basics, whatever your familiarity or comfort level is with
[00:25:51] the technology itself. You've got to know when to trust the output and how much to rely on that when
[00:26:00] you use it as part of a decision-making process, especially in talent and HR where we're talking
[00:26:05] about people's livelihoods and, you know, there's human beings behind, you know, those rows in your
[00:26:10] ATS and your other systems. Exactly.
[00:26:13] Oh, I wanted to ask you about Bersin's Galileo.
[00:26:17] Yes.
[00:26:18] For people that didn't watch the video, I encourage everyone to go to YouTube and watch
[00:26:21] this video. It's not very long, but just, you know, your, your quick impressions. And then I was
[00:26:28] curious if you have also tried like Atlas Copilot or any other, you know, similar tools that are gen AI,
[00:26:35] but within a specific domain, you know, focused on, on talent, I guess.
[00:26:40] Yeah, for sure. So I dabble in AI tools quite a bit because ever since I kind of got into the
[00:26:46] solution provider side of the house, I realized that I have now become a firm believer that if I
[00:26:51] can't touch it, I can't see it. Your marketing language does not exist to me. I feel like I've
[00:26:56] read and seen too many things where I'm like, I don't think that's what you meant to say. Or like,
[00:27:00] I don't think that's what your product actually does. So that's how the whole Galileo thing
[00:27:04] started. I came across Galileo when it was actually launched in April, May this year at
[00:27:10] the enterprise level, essentially. I thought it was really cool, but at the same time, I'm like,
[00:27:14] oh, that's another, at least five figure enterprise contract. Like nobody else can take advantage of
[00:27:19] it. It's kind of one of those things where you're like, yep. And there's a delta between the larger
[00:27:23] companies and the smaller ones once again. So when they announced that Galileo was coming out with
[00:27:28] individual licensing, I think about three, four weeks ago at this point, I was like, oh,
[00:27:32] this is really interesting. Let me swipe my credit card, pay for it and dabble with it. Right. Because
[00:27:36] everyone gets to swipe a credit card these days. So I'm like, let me grab a license and see what
[00:27:40] this thing does. Yep. I think it's genuinely cool. And it's genuinely cool, not because it's AI. I think
[00:27:46] the AI to me is like that 5% cherry on top. It's the fact that Bersin's research is now available
[00:27:52] for public consumption in a very targeted format. And I think when you start looking at sort of the macro
[00:27:59] economic factors in our industry right now, it's almost a perfect storm and the perfect timing for
[00:28:06] that solution. Because more than ever, you're seeing VPs of HR, CHROs stepping out of their roles.
[00:28:11] There is a rise of the fractional that's happening in the market right now. Simultaneously, you're
[00:28:16] seeing internal to organizations where the average tenure it takes to get into a more senior role
[00:28:21] isn't that long anymore, which I think is a phenomenal thing. But it also means that there isn't
[00:28:26] a full suite of rotation, I think, across the HR function before someone is elevated into senior
[00:28:31] leadership position. So in a way, what Galileo does is it allows you to kind of access almost two decades,
[00:28:38] if not more than that, worth of HR research at your fingertips when you need it in digestible language
[00:28:48] that does not feel like you're reading an essay. And on top of that, it's with sort of like all the
[00:28:54] resources and all the supplemental documents attached in the background. So you can easily
[00:29:00] click into the next layer down if you need to. I think it's really playing into the fact that
[00:29:05] our attention spans are getting a lot shorter. We're just not reading 20 page research papers anymore
[00:29:10] in this day and age. And we just need an instant sort of gratification that when we have a question,
[00:29:16] we need an answer immediately. And I think we're going to see more and more of that, quite honestly,
[00:29:21] in the work tech market, where the sort of consumer market carryover or patterns, if you will,
[00:29:28] in terms of attention span, as well as the desire for instant gratification and possibly infotainment,
[00:29:33] as we now call it, is going to come into the work tech market.
[00:29:38] No, I think it's certainly conceptually, it's amazing because, you know, Josh and his team have been around
[00:29:44] for quite a while. And to have all of that, I'm assuming there's no, are there any particular
[00:29:51] limitations in terms of like the types of content or anything that they've researched is fair game?
[00:29:58] So I have yet to come across a sort of general interest topic that Galileo hasn't been able
[00:30:06] to give me an answer on. And I've been playing around with it for about a month at this point.
[00:30:10] There are some like more niche questions that you'll ask and I'll be like, oh, I'm sorry,
[00:30:14] I'm not trained to answer those types of questions. But I would say that as a new CHRO or VP of HR or even
[00:30:22] HR business partner, it will minimally get you 75% of the way there. And then like once you know the
[00:30:28] language and like the thinking pattern behind it, I think it's pretty easy to fill the remaining 25%
[00:30:33] gap either with Google, Perplexity, or pick your weapon of choice essentially to augment that level
[00:30:40] of research. Have you ever been to a webinar where the topic was great, but there wasn't enough time
[00:30:46] to ask questions or have a dialogue to learn more? Well, welcome to HR and Payroll 2.0, the podcast
[00:30:51] where those post-webinar questions become episodes. We feature HR practitioners, leaders, and founders
[00:30:56] of HR, payroll, and workplace innovation and transformation sharing their insights and lessons learned from
[00:31:02] the trenches. We dig in to share the knowledge and tips that can help modern HR and payroll leaders
[00:31:06] navigate the challenges and opportunities ahead. So join us for highly authentic, unscripted
[00:31:11] conversations and let's learn together. Yeah, no, that's pretty impressive. Well, first of all,
[00:31:17] I've seen a couple of different examples of where people have built Gen AI tools for market research
[00:31:25] more broadly. And I think that's an amazing use case. I think that if you have the attribution
[00:31:32] and understand where the sources are all coming from, I think that's really valuable. And you can only
[00:31:39] imagine what you would pay to have somebody go off and do that level of research, particularly with
[00:31:44] the accolades that Josh has earned over the years. Well, and the thing is, it also does part of your
[00:31:50] job for you, right? So they have these little templates that they've set up in there. And I think
[00:31:54] the one I've been playing around with, it's sort of like global PTO policy template or something like
[00:31:59] that. And essentially, it'll write you a PTO policy, like you enter the country, it will kind of get you
[00:32:04] like pretty much again, 75% of the way there in terms of what the government requirements are
[00:32:09] pertaining to a specific state, a specific province and a specific country from a PTO standpoint,
[00:32:14] from a leave of absence standpoint. And I thought that was pretty cool. It's not that you can't get
[00:32:19] that stuff off of Google, but just that when you keep it in the same place, it's almost like you can be
[00:32:24] like, okay, I'm going to go really deep and like really strategic. And now I got to go executional
[00:32:29] on the policies and it's just all there. That's interesting. Okay. Cause I thought maybe the main,
[00:32:34] not the main difference, but one of the differences between like, and I haven't played with Atlas
[00:32:39] Copilot to compare it necessarily, but I mean, just for the audience's sake, Atlas Copilot was released
[00:32:44] by HR Leaders, Chris Rainey and team. Matt Burns, I think, was involved. But I thought
[00:32:50] Bersin's was more about vendor and market research, whereas Atlas Copilot would be
[00:32:58] about like what you were describing, like policy-based stuff, like this Uber HR virtual assistant
[00:33:06] kind of thing, but it sounds like Galileo does.
[00:33:10] Galileo's got a bit of both. I'm actually going to go see if I can get a license to Atlas
[00:33:14] Copilot now, and we might just do a compare and contrast YouTube video at this stage to see who does
[00:33:20] leave of absence policies better. I haven't dabbled in Atlas Copilot yet, but I think so Galileo,
[00:33:26] if I were to kind of assign a percentage breakdown, it's probably about like 70, 75% theory. So what you
[00:33:33] would typically expect of the Josh Bersin Company, their research, that type of thing. And then 25%
[00:33:39] practice where you can actually search up benchmarks from different industries through the years.
[00:33:44] And I think there's a couple other skills taxonomy, skills mapping stuff, I believe from
[00:33:49] Lightcast that's in there as well. So Galileo is really combining, if you will, all of the
[00:33:54] industry's knowledge, um, and putting a practitioner's lens on it and then distributing it through that
[00:34:00] channel. Okay. So that's encouraging. Cause one of my concerns, and of course
[00:34:08] nothing against Bersin, I mean, they do amazing research. It was more just, is there
[00:34:14] inherent, you know, bias in the data that's now being summarized by the same people that had the
[00:34:23] same team. And now you have this one lens, right? But it sounds like that's not actually the case.
[00:34:30] It's incorporating other data to supplement that.
[00:34:33] So there are certain papers that the Josh Bersin Company publishes that kind of make me go,
[00:34:38] I see where you're coming from, but have you thought about...?
[00:34:41] So even in those types of topic areas, when you kind of look at the way, um, Galileo lists out the
[00:34:47] answers without going deep in the research paper, there's a level of neutrality in how those answers
[00:34:53] are shaped and from a natural language format that upon reading the paper, you might not necessarily
[00:35:00] get. So I think there's also a level of sort of tweak that was done in the backend as well. So I
[00:35:06] like it, to be honest. Like I think it would really benefit the fractional CHROs out
[00:35:11] there, and really any HR generalist or any HR business partner, cause it's like your
[00:35:17] brain's going to feel like it'll melt after 30 minutes reading that stuff, but it is also going
[00:35:22] to make you a lot smarter after 30 minutes of reading that stuff.
[00:35:25] No, that's, that sounds pretty compelling. The other thing I would just give them, you know,
[00:35:30] at least one bonus point for is to actually acknowledge that it has its limit, its own limitations,
[00:35:35] right. I, it drives me crazy when you ask some of these Gen AI tools, a question and it doesn't know,
[00:35:43] but it gives you an answer anyway and makes it sound like it knew. And if you don't, you don't know enough
[00:35:50] about the subject matter, you're more likely to trust its response. And I feel like Perplexity has
[00:35:56] gotten worse in that regard. Like, I don't understand what it's doing, but it's apologizing to
[00:36:03] me constantly because I keep calling it out. No, I shouldn't have said that. Oh no, that's right. I
[00:36:07] did. I did just completely make up that company that you were looking for that you couldn't find.
[00:36:13] I just completely made up what they did. Like what? Oh my God. Just say that you don't know,
[00:36:19] just stop the nonsense. Yeah, definitely. I think for me, Perplexity, if anything, is just a quicker
[00:36:28] redirecting tool at this point than a Google search. I wouldn't say like, I trust a hundred percent
[00:36:32] the answer that comes out of it or even 80%. Cause sometimes when you kind of start going
[00:36:36] through the financial figures and stuff like that, it'll give you data. And then you're kind
[00:36:39] of sitting there going like, okay, but like, are you actually interpreting the core source of that
[00:36:43] data as it was intended to be interpreted or just taking that one sentence and kind of going like,
[00:36:48] okay, yep, this is what I'm going to serve up for you. The other thing that comes up lately,
[00:36:53] not in terms of hallucination, though some might call it a hallucination, but like these
[00:37:00] summarization tools where it may not necessarily say something that's false, but it
[00:37:08] could draw conclusions that aren't grounded, or it could take a little bit of creative,
[00:37:17] you know, liberties. Like, if you didn't like the response and
[00:37:22] you say, expand on this, right? All of a sudden it starts getting a little too creative. Right.
[00:37:30] And it's like, well, where'd you, where'd you come up with that? I mean, it's happened to me just
[00:37:35] trying to, if I ask one of my go-to tools to do like a summarization of a podcast based on the
[00:37:43] transcript or the show notes or whatever. And it's like, not only at no point did we talk about
[00:37:49] that subject that you just wrote a paragraph on, but you overlooked and didn't include
[00:37:57] some of the best parts of the discussion. So that kind of stuff scares me, not necessarily. I mean,
[00:38:04] yes, it, it could impact you doing, you know, market research, but imagine if that was the case in
[00:38:12] an interview transcription tool or something like that. So I don't know, it just, it's concerning,
[00:38:19] right? It's concerning that there was an article the other day in the Associated Press.
[00:38:23] Hilke Schellmann coauthored a piece around how one of these
[00:38:29] healthcare transcribing tools completely made up stuff, right? In a
[00:38:36] doctor's summary of a patient interaction. That is unacceptable. So I just think it's still early
[00:38:44] days for a lot of these things. And, you know, if we're going to advance so quickly, can we not,
[00:38:51] can we not break the patient doctor, you know, trust that would be a good thing if we didn't,
[00:38:57] if we didn't do that.
[00:38:58] Yeah, for sure. I think it's fascinating because as you were saying that I just realized something
[00:39:02] and this is a completely personal preference, but I realized I don't use AI to summarize things.
[00:39:09] I make AI expand on things. So sometimes I'll just have an idea because apparently I've been told that
[00:39:15] I talk and think in bullet points. And as such, it's really hard for me to create paragraphs or like
[00:39:20] event descriptions and stuff like that. So I'll dump a bunch of bullet points and I'll get AI to
[00:39:24] put together a summary or a synopsis of sorts. And it does a really good job in expanding those ideas.
[00:39:31] But I can see where the challenge is when you're trying to get it to summarize concepts and ideas,
[00:39:35] because depending on the language model it's trained on and all of that fun stuff,
[00:39:40] I think the complexity of the human thought process has yet to be fully translated into AI format.
[00:39:50] So it's almost like it's good for repetition sake, for expansion sake, for like ideation sake,
[00:39:55] but not necessarily to like take over for part of the work. So like health record transcribing,
[00:40:01] for example, or like summarizing podcast and things along those lines.
[00:40:05] Yeah, no, that's fair. I do play around with a lot of different tools. I see a lot of
[00:40:14] consistency and not in a necessarily good way across some of these tools. They all, you know,
[00:40:20] operate, you know, similarly, and that's across different LLMs as well. I mean, I know people have
[00:40:27] been doing like you were doing with your video, like analyzing different tools and comparing results.
[00:40:34] I'm curious if you've played around with any of the ones. There's some new tools cropping up that claim to
[00:40:41] basically you put in your prompt once and it'll go and basically execute it across multiple models.
[00:40:50] Have you played with any of those?
[00:40:52] I've dabbled with this. So I call it daisy chain AI for lack of a better description. I think there's multiple tools
[00:40:59] that do that. And I think how every tech platform daisy chains the backend together significantly
[00:41:06] impacts the output that you get. So on the good ones, for example, I can basically prompt like draft
[00:41:13] me a presentation. Here's what I'm looking for. Here's how long I have, et cetera, et cetera.
[00:41:18] And I will get a presentation that's not necessarily camera quality, but something that you can dump
[00:41:23] into Canva and have Canva make look nicer. And then there are others where I think it was last
[00:41:30] week, one of the images came up with six thumbs and I'm kind of sitting there going like,
[00:41:36] not quite sure how we got the six thumbs here, but that is so not what I was looking for.
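[Editor's note: the "daisy chain" pattern Lydia describes, one prompt fanned out across several models at once, can be sketched roughly like this. The model backends, names, and prompt below are placeholders for illustration, not any specific product's API.]

```python
# Minimal sketch of a "prompt once, run across multiple models" fan-out.
# Real tools would swap the fake backends for provider API calls.
from concurrent.futures import ThreadPoolExecutor

def fake_model_a(prompt: str) -> str:
    # Placeholder backend standing in for one model provider.
    return f"[model-a] answer to: {prompt}"

def fake_model_b(prompt: str) -> str:
    # Placeholder backend standing in for another model provider.
    return f"[model-b] answer to: {prompt}"

def fan_out(prompt: str, models: dict) -> dict:
    """Send the same prompt to every model concurrently; collect results by name."""
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, prompt) for name, fn in models.items()}
        return {name: fut.result() for name, fut in futures.items()}

results = fan_out(
    "Draft a leave-of-absence policy outline.",
    {"model-a": fake_model_a, "model-b": fake_model_b},
)
for name, answer in results.items():
    print(name, "->", answer)
```

How the backend chains and reconciles those parallel answers is, as Lydia notes, where the quality differences between these tools show up.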
[00:41:41] Yeah. That's not good. Yeah. I mean, it's sometimes, you know, you just have to take a step back.
[00:41:47] You know, we're in this sort of echo chamber where everyone's like following every little
[00:41:51] point release in every development so closely. We've got to take a step back, maybe talk to,
[00:41:58] you know, non-techies, talk to people in other domains and really understand,
[00:42:01] you know, what, what they're seeing. I mean, the healthcare one came up sort of by accident
[00:42:07] across, across my path because I was doing research, but, you know, I talked to people in education.
[00:42:12] I'm going to be talking soon to my local, you know, school district and folks running curriculum
[00:42:18] and technology or whatever, because I feel like, you know, the sooner we can get people,
[00:42:22] you know, comfortable, then we can work together to carve out the right policies. But just sticking
[00:42:29] your head in the sand is not a policy and it's, it's not helping our kids be prepared for what's
[00:42:36] next. I mean, we're going to start to see these job descriptions instead of saying, you know,
[00:42:41] you need to be an Excel jockey and, you know, master. When I was growing up, it was, yeah, it was,
[00:42:48] it was, you know, do you know, Microsoft office? Do you know, whatever, how to use a computer?
[00:42:54] Right. I mean, I had a computer, I mean, I'm old now, but I mean, I was given my first
[00:43:00] computer in school when I was in middle school, when I was 11, 12 years old. So, yeah,
[00:43:06] I mean, some ways, you know, using these AI tools is similar to that. So, you know, my daughter's 16
[00:43:14] and she doesn't get to use it at all related to school. And so I want to change that, but, you know,
[00:43:22] going out into these other domains, talking to people who are not tech savvy and who are just
[00:43:28] experiencing frustration. I mean, we see the adoption rates, adoption rates are higher than,
[00:43:33] computers. It's like social media adoption, or maybe even higher. Right. So the train has long
[00:43:41] left the station. Right. And so we're already sort of backtracking and trying to make sure people,
[00:43:48] imagine if cars just came out and you didn't need a license and everyone was just
[00:43:52] driving around in the car. Yeah. I mean, I think that's a really good point because one thing I do
[00:43:57] appreciate the quick rise of AI for is bringing back the concept of what I call play, right? Because
[00:44:04] sometimes when we enter the working world, and I think it's true for a few of the generations that
[00:44:09] are currently working, it's like, you're told that this is a model. This is a mode. You have to do
[00:44:13] X, Y, Z step. I mean, we create a career path for a reason. You have to do X, Y, Z step to get to
[00:44:18] the next stage and then X, Y, Z step to get to the next stage, so on and so forth. But the thing about AI
[00:44:23] and the amount of data it takes to power AI, it's like what used to be linear is a bunch of zigzags
[00:44:29] and could entirely be circular at that point. And it's kind of encouraging everyone to go back to
[00:44:34] the concept of like play around with it, experiment with it. And it's almost that sort of lost sense
[00:44:40] of curiosity, I think, that we need to bring back to the workplace, quite frankly. I imagine that's part
[00:44:45] of your answer when I, if I were to ask you your advice to elevate your AIQ, right? Probably.
[00:44:53] Curiosity is definitely one of the top answers. So yeah, we covered a lot. Lydia, I mean,
[00:45:00] I was going to ask you about some of your favorite tools. So if there's anything you're particularly
[00:45:05] fond of or scared of that we didn't cover already, I'm curious.
[00:45:10] I don't think I'm scared of anything. At least I haven't come across anything that I'm scared of
[00:45:15] yet. Particularly fond of, weirdly, is actually not an AI tool, but I'm sure everyone's heard of
[00:45:23] this. If you're on YouTube, these guys sponsor like 10 bajillion YouTube creators, but it's called
[00:45:28] brilliant.org. So it's basically like a courseware training type of thing. It's a gamified learning,
[00:45:33] essentially on a monthly subscription model with like a 14 day free trial or something along those
[00:45:38] lines. But they do a really, really good job of simplifying concepts. Like how do you code with
[00:45:44] Python? What is an LLM? How does AI work? So on and so forth. And I think for anyone that's
[00:45:50] curious out there, definitely worth maybe even just a one month subscription to kind of check out
[00:45:55] and see what it's all about. Cause short of using the tools, I think it's also nice to kind of
[00:46:00] understand the backend of what's actually happening from a system perspective.
[00:46:05] Got it. No, I'll have to check that out. I'm not familiar with it. I have looked at a bunch of
[00:46:10] different, they all start to look the same, right? I think it's hard. I was just having this
[00:46:14] conversation on LinkedIn just yesterday. We were talking about hacking HR had created like,
[00:46:20] like a 20 question kind of AI readiness assessment. So I commented, like I've gone down this path. You
[00:46:26] can imagine, with AIQ we've done some iterations on this, for sure. And it really depends on how
[00:46:35] you ask the questions and who you're asking the questions of and, and the domain that you need to
[00:46:41] know as well, right? Like an HR business partner doesn't need to know the same things as, you know,
[00:46:46] maybe a product manager or, you know, people who aren't in technology at all and, and your level of
[00:46:52] proficiency in terms of actually working with, with data or, or having, you know, some data,
[00:46:59] you know, background or what aspects of responsible AI are most important for you to understand. I mean,
[00:47:06] that's, it's different depending on your, your role and perhaps your, your seniority or industry.
[00:47:11] But I do think that there's some basics like we were talking about before, like whether it's
[00:47:16] through compliance training or whatever, there are some basics that everyone needs to understand.
[00:47:20] And I do think once people have a good sense of what this can do, you're going to get a lot of
[00:47:28] ideas, right? And then what are you going to do with those ideas? Do you have a way to sort of capture
[00:47:33] them and potentially realize them by investing in them? So there's a lot. I'm optimistic that we'll
[00:47:41] start to go down that path, but we just, we need to, we need to innovate responsibly. It's sort of my
[00:47:46] new little tagline. Yeah, for sure. Lydia, this has been awesome. Thank you so much for, for nerding
[00:47:52] out with me on all these, these topics. We could probably talk for a few more hours, but I'm going
[00:47:57] to let you go. But again, thank you so much for, for sharing your insights with, with my audience and
[00:48:03] chatting with me. Yeah. Thanks for having me. Absolutely. Thanks everyone. That concludes
[00:48:08] another episode of Elevate Your AIQ. Thank you again, Lydia, and we'll see you next time.