Bob chats with veteran learning and development (L&D) expert Josh Cavalier about his background and journey in L&D and his transition into focusing on generative AI as both the subject of, and a copilot for, learning. They discuss the potential impact of AI on L&D professionals and the L&D space overall, and the need to adapt to new technologies. They explore the role of AI in content creation, coaching, and mentoring, and the importance of human oversight. Bob and Josh further discuss a variety of related topics, including the challenges and risks of adopting AI, the need for responsible design and use of AI, the lack of AI training in organizations, the rapid changes in AI tools and technologies, the role of AI as a creative co-pilot, the slow adoption of AI in education, and the need for individuals to develop their 'AIQ' through hands-on experience.
Keywords
AI, generative AI, learning and development, L&D professionals, content creation, human oversight, technology, job displacement, upskilling, human performance, efficiency, productivity, collaboration, analytics, AI applications, culture of innovation, risk mitigation, AI adoption, responsible AI, AI training, AI in education, AIQ
Takeaways
- L&D professionals need to adapt to the changing landscape of AI and embrace new technologies to enhance their work.
- The role of L&D professionals is evolving towards becoming human performance analysts who orchestrate learning journeys and measure their effectiveness.
- The adoption of AI in L&D is varied, with some professionals fully embracing it, while others have concerns about ethics and bias.
- Organizations need to have a culture of innovation and a willingness to assess and mitigate the risks associated with AI implementation.
- Adopting AI comes with challenges and risks that need to be addressed through responsible design and use of AI.
- Many organizations lack AI training programs, which may be due to the rapid changes in AI tools and technologies.
- AI can serve as a creative co-pilot in fields like animation, providing efficiency gains while allowing artists to showcase their creativity.
- The adoption of AI in education is slow, but there are efforts to incorporate AI into curriculums and support students in using AI as a learning tool.
- Individuals can elevate their AIQ by actively engaging with AI tools, experimenting with prompt design, and building a personal relationship with technology.
Sound Bites
- "Our job is to align the business to human performance."
- "Technology fails and you have to have a human in the loop, a human backup."
- "The next couple of years are absolutely going to be wild as these businesses realize that they're all going to one endpoint."
- "There's a lot of challenges and risks and that's probably a large reason why a lot of companies haven't set up actual AI strategy and policy yet."
- "Responsible design and use of AI should be part of compliance training, just like data privacy and cybersecurity."
Chapters
00:00 Introduction and Background
06:37 Josh's Journey in L&D and AI
08:19 Job Displacement and AI Applications
10:30 The Changing Role of L&D Professionals
13:15 Orchestrating Learning Journeys with AI
15:24 Convergence of AI Technologies and M&A Activity
17:21 Adoption of AI in L&D and the Importance of Culture
20:00 Challenges and Risks of AI Adoption
22:56 The Lack of AI Training in Organizations
26:30 The Rapid Changes in AI Tools and Technologies
29:23 AI as a Creative Co-Pilot
39:13 The Slow Adoption of AI in Education
47:15 Elevating Your AIQ: Hands-On Experience with AI
Josh Cavalier: https://www.linkedin.com/in/joshcavalier
Josh’s website: https://www.joshcavalier.com/
BRAINPOWER Weekly AI Training Show: https://www.youtube.com/joshcavalier
For advisory work and podcast sponsorship inquiries:
Bob Pulver: https://linkedin.com/in/bobpulver
Elevate Your AIQ: https://elevateyouraiq.com
Powered by the WRKdefined Podcast Network.
[00:00:09] In this episode of Elevate Your AIQ, I had the pleasure of speaking with Josh Cavalier, a veteran expert in learning and development who has transitioned into generative AI training and coaching. We explore the potential impact of AI on L&D professionals and discuss how AI is reshaping content creation and learning journeys. Josh and I discuss the challenges and risks of AI adoption, the importance of human oversight, and the need for organizations to develop a culture of innovation. There's tremendous opportunity for AI use cases in learning and development.
[00:00:49] Welcome to Elevate Your AIQ. I'm your host, Bob Pulver. Today, I am joined by Josh Cavalier, who is doing some great work in upskilling all of us
[00:01:11] on the power of AI and how to use generative AI. Josh, thank you so much for being here.
[00:01:14] Hey, great to be here, Bob.
[00:01:16] Why don't we just start with a little bit about your background? Maybe what you went to school for, your L&D roles, and how you pivoted into being the Gen AI guru.
[00:01:27] Wow. So we're going to go in the Wayback Machine. I actually have a degree in medical illustration.
[00:01:33] I took pre-med and art when I was in college. I took my pre-med over at the University of Rochester. There were just a few classes over there,
[00:01:43] and the majority were at RIT. So I think the catalyst of what's happening is this balance between art and science in my head.
[00:01:56] Right? You know, and that's been going on my whole career. Now, when I was at RIT, I got exposed to multimedia.
[00:02:03] HyperCard, the first version of Director, and I got the bug. So even though I went to school to be a medical illustrator, once I got my hands on the multimedia tools, that was in the back of my brain.
[00:02:18] And I was thinking to myself, my gosh, if I ever had the chance to use these tools, I'm going to just jump at it.
[00:02:25] And of course, that opportunity presented itself because just a few years after I graduated, I had an opportunity to become an art director for an e-learning firm based here in Charlotte, North Carolina, which is where I live.
[00:02:38] I got the position, and I was caught up in the bank mergers and acquisitions that were happening in the mid-90s as part of deregulation here in the States.
[00:02:50] And the company I worked for, Hanshaw & Associates, was led by Dick Hanshaw, who has been a leader in this industry for decades, specifically around performance consulting, and he jumped right in.
[00:03:03] And I can remember when I first started, I was doing mainframe training, and then eventually we got into CD-ROM, and that was my flywheel.
[00:03:13] Because with CD-ROM, we were able to do multimedia. Even if the video was 160 by 120, a postage-stamp-sized video, we still got video into those applications.
[00:03:25] And then eventually the internet happened, things changed, and my position was eliminated.
[00:03:31] But that was the start of Lodestone.
[00:03:36] Lodestone became one of the top authorized Adobe training partners in the United States.
[00:03:41] That was my company.
[00:03:42] Now, in that capacity, I would go into organizations, specifically L&D organizations, and spin them up on technology.
[00:03:50] So even though we were doing this broad range of Adobe training, my role was, one, to run the company, but two, go in and do the consulting work specific to L&D teams.
[00:04:03] And so over the years, I've been in so many different environments, Fortune 1000, corporate, government agencies, higher education institutions, mostly implementing software tools like Storyline, Captivate, Lectora, Adobe Tools, obviously.
[00:04:21] Even some programming stuff there for a little bit, especially when mobile kicked in.
[00:04:26] But Lodestone ran its course, and I had to change things up for myself personally.
[00:04:33] And so right before COVID, specifically, five weeks before COVID hit, I jumped back into corporate.
[00:04:40] And the way that showed up was I worked for a $5 billion supply chain company, American Tire Distributors, as an individual contributor, which is wild because running and owning a business for that long and then going back as an individual contributor was different.
[00:04:57] But it's what I needed to understand exactly what was happening in corporate.
[00:05:02] It was a great break for me, so to speak.
[00:05:05] And during that time, I was involved with large initiatives around data and analytics, training frontline sales with those products, also implementation of Microsoft Teams during COVID.
[00:05:21] And we also opened up a brand new facility.
[00:05:24] And so doing a 3D or 360 tour of the brand new corporate headquarters, those types of projects.
[00:05:31] But then AI showed up.
[00:05:33] And so the first week of December of 2022, I got my hands on ChatGPT.
[00:05:40] And of course, you know, like anyone else, I started doing limericks and songs and just goofiness.
[00:05:48] And things changed when I asked it to write a learning objective.
[00:05:51] And once that happened, I went down the rabbit hole and was thinking about my workflows, was thinking about, well, wait a second.
[00:06:00] If it can do a learning objective, how many steps for it to take to create a video script from this learning objective?
[00:06:07] And it didn't take long.
[00:06:08] And that was even ChatGPT or GPT 3.5.
[00:06:12] And so I went down this rabbit hole and I was like, this is going to have massive impact on our industry.
[00:06:19] I could just tell.
[00:06:20] And so I chased after it.
[00:06:22] And then in about April-ish time frame of 2023, I went full time.
[00:06:31] So I left the organization I was at, decided to do the AI thing full time.
[00:06:35] And the way that that shows up is I have an online course.
[00:06:38] I do workshops.
[00:06:39] I do webinars.
[00:06:41] I do a lot of public speaking.
[00:06:43] So I'll go and do speaking sessions.
[00:06:45] I just started doing keynote presentations.
[00:06:48] So there you go, Bob.
[00:06:50] That's 30 years.
[00:06:51] I tried to summarize it, but that's a lot.
[00:06:55] That is a lot.
[00:06:56] And that is a fascinating journey.
[00:06:59] I love the variety.
[00:07:01] I love that you were proactive and you sort of adapted to where you saw things moving.
[00:07:08] I love the balance of art and science, even back in school.
[00:07:12] I've never heard of a medical illustrator before.
[00:07:15] I'm not sure what exactly that would have entailed, but I'm impressed by where you ended up, for sure.
[00:07:22] Well, I have been published.
[00:07:25] So I've had illustrations in medical journals.
[00:07:28] And for a heartbeat there, I was actually doing demonstrative evidence for malpractice court cases.
[00:07:36] Oh, that is interesting.
[00:07:38] So basically showing what should have happened or would have happened or what this looks like to the layperson.
[00:07:45] Exactly.
[00:07:46] Yeah.
[00:07:47] So we were generating or creating very large illustrations and panels with overlays.
[00:07:53] So as the lawyers need to go ahead and tell a story about what happened or whatever the aspect was of the case,
[00:08:01] they would actually flip through the layers.
[00:08:03] And it was kind of a fun adventure doing that for a bit.
[00:08:08] But for someone just out of school, it was extremely high pressure.
[00:08:12] And so I tried to go a different direction, just because working with doctors and lawyers under those types of situations is
[00:08:23] not for everybody.
[00:08:25] Right.
[00:08:26] Well, let me ask you this.
[00:08:28] Knowing what you know now and all of the sort of tinkering that you've been doing and teaching,
[00:08:37] is that something that still exists today?
[00:08:40] And are there AI applications that could basically do that?
[00:08:45] Yeah.
[00:08:45] So in regards to L&D professionals, your job is going to change.
[00:08:50] There is no doubt about it.
[00:08:52] Having experienced working with these applications, working with ChatGPT or Anthropic's Claude,
[00:09:00] the way that you go about doing your work is inherently going to change.
[00:09:06] But at the end of the day, you know, our job is to align the business to human performance.
[00:09:16] Right.
[00:09:17] Measurable human performance.
[00:09:19] It's just going to show up different.
[00:09:22] I love working in a timeline.
[00:09:24] I love Camtasia, the video editing tools, Storyline, Captivate.
[00:09:30] But I believe you need to be okay with not doing that work as we progress,
[00:09:37] or with other forms of content creation being done by a different tool,
[00:09:45] or expedited, or even automated.
[00:09:49] Again, I don't have a timeline for when all of that shows up,
[00:09:52] but based upon the paths that we are headed on here,
[00:09:56] it seems to me that we will be doing more orchestration of learning journeys
[00:10:04] and measuring as opposed to creating the content itself.
[00:10:10] And that gives us so many different opportunities to support the organizations that we work for.
[00:10:15] Yeah.
[00:10:15] I think one of the themes I take away from that is that you're still focusing on the human
[00:10:22] at the center of this work.
[00:10:25] And then you're augmenting that human with all these new capabilities where,
[00:10:30] yes, you may have needed a specialist to come in and do that,
[00:10:35] or it may have required, you know, a lot of tweaking, maybe a lot of time.
[00:10:40] But at the end of the day, you've got to balance this big, you know,
[00:10:46] sort of efficiency and productivity push with what actually makes up organizations,
[00:10:51] which is the human talent that drives the strategy and delivers value.
[00:10:57] Correct.
[00:10:57] I think for as long as humans have been in front of other humans trying to transfer knowledge,
[00:11:05] skills, and even modify behaviors, we have always been evaluating the technology
[00:11:14] and how we can apply that technology to get to our end goal.
[00:11:19] Sometimes we never get to that goal.
[00:11:20] And it just happens that we are in a phase right now where the technology has massively accelerated
[00:11:28] at scale.
[00:11:29] And as knowledge workers, that can be scary, because, you know, with the internet,
[00:11:37] it's taken quite some time, at least for me personally, to have a one gigabit connection
[00:11:42] in my house.
[00:11:44] And so now that shows up different.
[00:11:47] There's different things I can do with that.
[00:11:49] But, you know, like any other technology, it fails, right?
[00:11:53] As we were just setting up this particular recording, we had challenges.
[00:11:57] And I guarantee that even with AI, we can premeditate all of these automations, all of these things
[00:12:06] that may be done for us.
[00:12:08] But you and I both know that technology fails and that you have to have a human in the loop,
[00:12:13] a human backup or a human that is going to review any content or any work that is produced automatically.
[00:12:24] So it's a maturation of the role.
[00:12:28] And I mean, I see it going towards becoming a business or a human performance analyst.
[00:12:36] It's very different.
[00:12:39] Like a human performance analyst is going to orchestrate learning journeys that are automated
[00:12:44] by AI based upon things that are occurring in the business.
[00:12:48] If there's an increase in safety incidents, if sales are down, if we have to sell a new product,
[00:12:54] we'll orchestrate that, verifying, hey, based upon the personalization or learning paths or the media that's created,
[00:13:04] did it work?
[00:13:05] And if not, you may have to go back to the AI and have a conversation and say,
[00:13:09] hey, you missed a segment of that population.
[00:13:12] They didn't take the training or whatever the case may be.
[00:13:16] You know, again, there's always human in the loop.
[00:13:19] So I don't know how fast or when that's going to show up.
[00:13:22] But, you know, what's interesting is, you know, I go to a lot of conferences
[00:13:27] and I get to see tons of products out there in the marketplace.
[00:13:30] And what's really fascinating is that all of these products seem to be converging.
[00:13:37] And what I mean by that is that we're getting tons of overlap in functionality
[00:13:42] because there are only a finite number of channels that we can deliver this content in.
[00:13:51] And when you're talking about the full automation of educational content,
[00:13:58] it's natural for all these vendors to move in the same direction.
[00:14:04] So the next couple of years are absolutely going to be wild as these businesses realize
[00:14:10] that they're all going to one endpoint.
[00:14:13] And there's going to be a lot of M&A, mergers and acquisition activity,
[00:14:17] that's going to happen because people are going to realize,
[00:14:21] oh, well, we're going to create an engine that's going to generate this content
[00:14:25] that's going to go to a mobile phone, to a desktop, to a laptop, to a TV.
[00:14:31] And that's about it.
[00:14:33] Or AR, VR, headset.
[00:14:35] Go.
[00:14:36] So it should be very interesting the next few years how that shakes out.
[00:14:40] Yeah, no, that's fascinating.
[00:14:42] There's a lot to unpack there.
[00:14:43] I mean, first is I agree with you because you've got these AI native applications
[00:14:49] coming into the picture.
[00:14:51] But then at the same time, you have all the incumbents, all the existing solutions,
[00:14:56] whether it's a collaboration platform, analytics platform.
[00:14:59] It doesn't even matter because everyone's now bolting on AI features and capabilities on top.
[00:15:06] So you're going to have redundancy in some of that capability.
[00:15:12] And I don't know, some of those AI native solutions may, well, if they grow up quick,
[00:15:16] then they could get adopted in even some of the larger enterprises.
[00:15:20] But otherwise, they do risk being sort of cannibalized by the incumbents just because they've got that footprint.
[00:15:28] And I guess it depends on who's calling the shots at the organization in terms of adopting this kind of thing.
[00:15:36] So L&D and other HR and talent leaders certainly need to work in collaboration with the CIO or chief data officer.
[00:15:45] I don't know that we're going to see consistency in who actually owns all of this.
[00:15:50] But I do think that L&D has an incredible opportunity to show just how valuable this sort of upskilling can be.
[00:16:01] And one of the questions I had for you was, as you talk to L&D leaders and hear from people in that space,
[00:16:07] I mean, are they embracing this opportunity?
[00:16:11] Do they feel like this is bigger than they can handle within a large organization?
[00:16:17] What's some of the feedback that you're getting?
[00:16:19] It's across the board.
[00:16:21] We typically see in organizations only 10% of individuals working in L&D teams getting after AI.
[00:16:31] And what I mean by that is, regardless of how AI shows up within the organization,
[00:16:35] whether it be exposed models that are securely accessible or they're working at home on their personal computer or laptop or phone or whatever,
[00:16:44] they're getting after it.
[00:16:45] And they're understanding how the technology works and they're prompting that whole thing.
[00:16:50] And then you have another segment that's probably about 25% or 35% that are casual users.
[00:16:57] They'll come in and, you know, use it when they need to once a week.
[00:17:02] And then the rest don't even care or it's not even on their radar unless it's a directive by leadership to begin implementing or placing it into their workflows.
[00:17:14] And that just shows up different.
[00:17:16] But even then, you know, I'm going back to this is a conversation about you and technology.
[00:17:23] So even though management may say, hey, you need to go ahead and use these large language models to do your work to increase productivity.
[00:17:33] It's still that relationship.
[00:17:35] I still have individuals in my workshops who are
[00:18:01] extremely opposed to using AI because they're artists, or because there are ethics and bias issues where the technology and the way it is trained doesn't align with their values.
[00:18:16] So there's a larger issue that's happening within organizations because people are just put off.
[00:18:24] Certain groups are put off by using this technology.
[00:18:27] And I totally get it as an artist and knowing other artists, there are concerns there and they're all valid concerns.
[00:18:35] And so, you know, you have these larger issues that are going on, you know, and it's different than the Internet.
[00:18:41] With the Internet.
[00:18:43] It's a connection.
[00:18:44] It's access to information.
[00:18:46] It's speed.
[00:18:47] It's e-com.
[00:18:48] It's security.
[00:18:50] Go.
[00:18:51] Like, you know, let's go ahead and expand it out.
[00:18:53] How does e-learning or how does training show up on this platform?
[00:18:56] Go.
[00:18:56] With AI, you have so many different risks that need to be mitigated and they're vast.
[00:19:05] And I believe that some organizations are not set up to approach that risk mitigation.
[00:19:11] You know, so for companies that have a culture of innovation, it typically shows up better, because they allow the time for leaders and associates to assess the technology and how it fits into their workflows.
[00:19:31] And I see that at various levels.
[00:19:33] Right.
[00:19:34] You would think that smaller businesses would be more nimble and they'd be able to act upon it faster.
[00:19:39] That doesn't really matter.
[00:19:40] Again, it's a culture that has to be in place to see this technology flourish.
[00:19:46] So that's what I'm currently seeing out there.
[00:19:49] In terms of adoption, and absolutely, I agree.
[00:19:52] There are a lot of challenges and risks, and that's probably a large reason why a lot of companies haven't set up an actual AI strategy and policy yet.
[00:20:03] But when I think about some of those risks, and they're different for different companies and different industries, etc.,
[00:20:09] I've suggested that responsible design and use of AI should be part of compliance training, just like data privacy and cybersecurity.
[00:20:24] I wanted to get your thoughts on that and whether or not you know of any organizations that have done that, because I honestly don't.
[00:20:31] Not that level.
[00:20:32] That doesn't mean it's not happening.
[00:20:34] I'm just not aware of organizations that have implemented AI usage training right alongside the phishing and cybersecurity training that they're constantly sending out.
[00:20:49] I'm wondering if it has to do with platforms like LinkedIn Learning or OpenSesame not having content available.
[00:20:59] Because typically, that type of topic is outsourced.
[00:21:07] It's contracted out,
[00:21:08] as opposed to created internally. And there may be so many changes happening so rapidly that it's difficult to keep that training current.
[00:21:21] I know for myself, I was going to put forth an effort to describe all the avatar vendors that are out there and their capabilities.
[00:21:30] And I couldn't complete that task because it changes so often and so frequently that if I were to just go ahead and put something out there showing the capabilities, it would be old next week.
[00:21:41] And that's the world that we live in.
[00:22:12] And I personally haven't seen it yet.
[00:22:14] Yeah, I think similar to data privacy and cybersecurity where essentially anyone can be the weak link.
[00:22:21] I feel like that's an important element of upskilling everyone, regardless of how much of your day is now augmented by AI or will be.
[00:22:32] You're right.
[00:22:32] It's changing incredibly fast.
[00:22:34] My last full-time role was doing market research and advisory work in the talent technology space.
[00:22:43] And I thought, well, because I've had this focus myself on responsible AI as well as sort of upskilling everyone on AI and the intersection of those things, how could I possibly keep up with all of what's changing?
[00:23:00] Because you're right.
[00:23:01] It seems like every week, if it's not new functionality within ChatGPT from OpenAI, it's, oh, have you checked out the latest from Perplexity or from Anthropic or all of these other much more specialized tools, which I know you talk about a lot in some of your brainpower YouTube training and other things.
[00:23:23] I mean, it's just dizzying.
[00:23:53] Going back to the point at the beginning, we've still got to use our critical thinking.
[00:23:59] I mean, this is not a calculator.
[00:24:01] It's not giving you a definitive answer, just like if you Google something, it's not necessarily giving you a definitive answer.
[00:24:09] So we've got to always trust but verify, I suppose, in some ways in terms of the output.
[00:24:15] Now on the creative side and artistic side, I know that's a little bit different.
[00:24:20] And I know there's nuances to that, right?
[00:24:22] I mean, there are certainly some tasks, I would imagine, even in the creative arts, that it makes sense to automate or to have AI assist with.
[00:24:34] But still as a sort of artistic co-pilot, if you will.
[00:24:39] Like I know there's some very detailed tasks that people do.
[00:24:42] I'm just thinking about animation, right?
[00:24:44] Like the models that do animation and all the steps that it takes and the cells and the stills within that.
[00:24:52] I mean, that's a pretty painstaking process.
[00:24:56] So I'm sure there's some efficiency gains while still allowing those artists to show their creativity.
[00:25:04] Yeah, exactly.
[00:25:05] And I think where we're at today is that there are a lot of tools out there that give you a multiplier.
[00:25:12] It just gives you an edge.
[00:25:14] And one example, and I bring this up frequently because it's such a multiplier for me, is a tool called CastMagic.io.
[00:25:22] It allows you to upload an MP3 or an MP4, get a transcript, get a closed caption file.
[00:25:27] But the magic happens when the transcript through generative AI is transformed into other forms of media to support the podcast, to support the SME interview, whatever it is.
[00:25:43] And that can show up as an email, an overview, YouTube keywords, a description of the video, and so on.
[00:25:51] And so for me, that is a multiplier because now I can take a look at my workflow.
[00:25:55] I can take a look at the outputs that need to be created and I can fit this Gen AI tool very nicely into that workflow.
[00:26:03] So after my live show is done, I can take that recording, get the transcript, transform it the way I need it, and then automate some processes to get that content into various channels to support my audience and to support the general community.
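To make the fan-out workflow described above concrete, here is a minimal Python sketch of the general pattern: one transcript prompted into several derivative assets. It is not CastMagic's actual API, just an illustrative sketch that assumes the official OpenAI Python client, an OPENAI_API_KEY in the environment, and placeholder file, model, and prompt names.

```python
# Illustrative sketch only: fan one transcript out into several derivative assets.
# The asset prompts, file name, and model are placeholders, not CastMagic's implementation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ASSETS = {
    "summary_email": "Write a short email summarizing this episode for subscribers.",
    "youtube_description": "Write a YouTube description with ten keywords for this episode.",
    "linkedin_post": "Write a LinkedIn post highlighting three takeaways from this episode.",
}

def generate_assets(transcript: str) -> dict:
    """Turn one transcript into several pieces of supporting content."""
    outputs = {}
    for name, instruction in ASSETS.items():
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system", "content": "You repurpose podcast transcripts into supporting content."},
                {"role": "user", "content": f"{instruction}\n\nTranscript:\n{transcript}"},
            ],
        )
        outputs[name] = response.choices[0].message.content
    return outputs

if __name__ == "__main__":
    with open("transcript.txt", encoding="utf-8") as f:  # hypothetical local transcript file
        for asset, text in generate_assets(f.read()).items():
            print(f"--- {asset} ---\n{text}\n")
```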
[00:26:18] So, but again, to your point, it's tough to keep your eye on the horizon and investigate those tools, test them, and then put that into your workflow.
[00:26:29] When you have thousands upon thousands of AI tools in the marketplace, it's numbing at times.
[00:26:35] Yeah, and I wonder from your experience or what you've seen from organizations that have embraced AI and they're doing their own experimentation, are they allowing people to create their own custom co-pilots or GPTs within an enterprise context?
[00:26:54] And if so, are they making sure that they do so responsibly?
[00:26:58] Because we're all builders now, right?
[00:27:01] These platforms are all opening up to allow you to create and to customize through various training methods, injecting proprietary knowledge and data and things like that.
[00:27:13] So this is where I get even more concerned. I mean, yes, it's a concern that you might input proprietary information and not know where it goes; it might be going back to the mothership or things like that.
[00:27:28] But also, you're not just a user and a consumer anymore; everyone's a builder now.
[00:27:35] Yeah, typically those environments are locked down in regards to security and access to models.
[00:27:41] It's few and far between.
[00:27:43] There are some organizations that are giving their associates access to multiple models through a portal and through those API calls, all of the prompts and all the responses can be tracked.
[00:27:54] There is maturation in some organizations around AI operations to where, let's say they're a Microsoft shop or a Google shop and they're leveraging the entire ecosystem that those vendors give them in regards to AI and how it shows up.
[00:28:11] So some organizations, not a lot, but some, are using, let's say, Microsoft Azure and the AI architecture behind that, using AutoGen to create agents.
[00:28:26] That activity is happening, but it's not at scale.
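As one concrete illustration of the agent work Josh mentions, here is a minimal sketch following AutoGen's documented two-agent pattern (an assistant plus a user proxy). The model name, system messages, and example task are placeholder assumptions, and the human_input_mode setting is simply one way to keep the human in the loop that both speakers keep returning to; treat it as a sketch, not a production recipe.

```python
# Minimal sketch of AutoGen's two-agent pattern; requires the `pyautogen` package.
# Model name, API key handling, and the example task are placeholders.
import os
import autogen

llm_config = {
    "config_list": [
        {"model": "gpt-4o", "api_key": os.environ["OPENAI_API_KEY"]},  # placeholder model
    ]
}

# The assistant drafts content; the user proxy stands in for the human operator.
assistant = autogen.AssistantAgent(
    name="ld_assistant",
    system_message="You draft learning objectives and outline training content.",
    llm_config=llm_config,
)

user_proxy = autogen.UserProxyAgent(
    name="human_reviewer",
    human_input_mode="ALWAYS",    # keep a human in the loop to review each turn
    code_execution_config=False,  # this sketch does not let the agent execute code
)

# Kick off a short exchange; the human reviewer approves or redirects each response.
user_proxy.initiate_chat(
    assistant,
    message="Draft three learning objectives for onboarding frontline sales reps on a new product line.",
)
```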
[00:28:30] And I think there are actually two things that are the current barriers to seeing that.
[00:28:38] One is that when you're talking about building agents, you've got to know Python.
[00:28:44] Well, it's getting easier,
[00:28:45] with low-code and no-code tools to help build those automations and agents.
[00:28:51] And then it comes back to the functionality of the large language model itself, not being able to reason.
[00:29:00] So, you know, when you have a tool that's going to scan for material, information, data, whatever the case may be, it's just going to do what you tell it.
[00:29:45] But it may or may not know whether what it has is good information, right?
[00:29:50] It's just going to kind of check the box.
[00:29:53] So until we have these models that can reason, I think it's going to get capped as far as functionality.
[00:30:02] So we may have no-code tools that can build agents, but they're going to be very substandard.
[00:30:09] I mean, yeah, we'll have some home runs.
[00:30:11] There's some agents that are going to do some amazing work, but we're not going to see it at scale.
[00:30:16] But once that reasoning capability comes in as we march towards artificial general intelligence, then it's going to change.
[00:30:24] And I don't know when that shows up.
[00:30:26] Yeah, I wonder, because I've seen some market research firms claiming that they just basically put a generative AI sort of interface on top of some of the market research that they're doing.
[00:30:40] But I haven't seen the latest to know.
[00:30:42] So it's almost like an advanced generative AI solution also takes some attributes from what was powerful about predictive AI.
[00:30:53] We have to remember there were whole generations of AI prior to generative AI.
[00:31:00] But when I think of some of the ways in which it's able to weigh different sources of information differently, I don't know how that gets brought forth.
[00:31:12] I mean, if you use Perplexity now as your search engine, it'll give you the summary and it'll give you, like, the top five sources it pulled that summary together from.
[00:31:23] I don't know how it orders those five, but I think it's treating all five of those sources the same.
[00:31:32] And maybe there's more behind the scenes, but the five that they list, I don't know that they're in any particular priority order or reputational order to say this one is the most trustworthy.
[00:31:44] So if you're going to use it for market research, I don't know.
[00:31:47] It seems like it still has some ways to go to say, well, I'm going to weigh, you know, these two sources the most because there have been scientific papers about them or, you know, whatever.
[00:31:57] So I think there's still a ways to go in terms of presenting you something like a confidence score, because that's what I saw when Watson came out of the labs.
[00:32:06] And we used to see demos of how it thinks, looking at its confidence score when responding to an answer on Jeopardy!, specifically.
[00:32:15] And so it wasn't going to buzz in if it wasn't at least 75 percent confident in the particular answer.
[00:32:22] So it's able to weigh different evidence from, you know, medical journals and, you know, take into account the more recent, you know, research on a particular medical condition and things like that.
[00:32:34] And then it knew what it was missing to get its confidence score higher.
[00:32:38] Well, if you could give me the patient's, you know, medical history, then my confidence score would probably go from 75 percent to 85 or 90 percent or whatever.
[00:32:47] So it's elements of predictive AI, you know, getting baked into some of the generative AI tools that is going to be incredibly powerful.
[00:32:56] And I think that's perhaps a step towards AGI, but I'm not an expert in that for sure.
[00:33:01] Yeah, there's a lot of upside to this and it's going to be messy along the way.
[00:33:06] It's going to be extremely uneven.
[00:33:07] Another thing to consider is, you know, we talk about automations or AI performing the content creation process automatically, the learning journey automatically.
[00:33:20] That's all software and that needs to be maintained and there needs to be consistency with that.
[00:33:26] Even with prompts, when I go and I work with my customers, they eventually get to a prompt library.
[00:33:33] But even that prompt library needs to be maintained.
[00:33:37] Who's going to change the prompt when a model changes and you're not getting the same results?
[00:33:44] Who's going to change the prompt when it's part of an agent or an automation?
[00:33:48] Who maintains that?
[00:33:50] Oh, Joe left the company.
[00:33:53] He wrote all those prompts.
[00:33:55] Well, maybe we should write some new ones.
[00:33:57] Sound familiar?
[00:33:58] So, you know, again, it's not all going to be smooth sailing, even at the prompt level.
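One lightweight way to get ahead of the maintenance problem Josh describes is to treat the prompt library as versioned data, with an owner and a target model recorded next to every prompt. The structure below is a hypothetical sketch of that idea, not a prescribed schema; the names and fields are illustrative.

```python
# Hypothetical prompt-library entry: the point is the metadata, not the exact schema.
# Recording an owner, target model, and review date turns "Joe left the company"
# into a handoff problem instead of a mystery.
from dataclasses import dataclass
from datetime import date

@dataclass
class PromptRecord:
    key: str            # stable identifier used by agents and automations
    text: str           # the prompt itself
    owner: str          # who is accountable for keeping it working
    target_model: str   # the model it was written and tested against
    version: int
    last_reviewed: date

LIBRARY = {
    "learning_objective": PromptRecord(
        key="learning_objective",
        text="Write one measurable learning objective for the topic: {topic}.",
        owner="jane.doe@example.com",  # placeholder owner
        target_model="gpt-4o",         # re-test and bump the version when this changes
        version=3,
        last_reviewed=date(2024, 6, 1),
    ),
}
```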
[00:34:05] So, you know, we're going to get all of these, again, really shiny and beautiful things that come out from these companies as far as like video or all these different functions.
[00:34:18] But in regards to scaling and maintenance, man, it's going to take a long time to get alignment on that and how it shows up and the way that it works for your organization and specifically your workflows.
[00:34:31] And that's going to be the next like five to 10 years is that type of work.
[00:34:36] Yeah, I think that's crucial inside an organization.
[00:34:39] I mean, you need a definitive answer in a lot of cases, whether that's, you know, trying to get, you know, insights from an analytics platform through a natural language interface or you're trying to get a definitive answer from HR through the HR conversational AI or whatever is being deployed there.
[00:34:59] You need an answer, and that answer should be consistent for the most part, other than incorporating your personally identifiable information and maybe your particular customized learning path.
[00:35:13] But if it's more of a like a policy question or whatever, these answers need to be, you know, definitive and consistent.
[00:35:20] I wanted to ask about the audience for your training, because I know it goes way beyond just people within L&D.
[00:35:33] And specifically, I was curious about training people on different tools and use cases and things like that.
[00:35:44] Is it learning about the AI tools themselves, or using AI to learn and do other things in a sort of co-pilot sense, or both? My hunch is it's both.
[00:36:00] There's this massive behavior change that needs to occur.
[00:36:05] Just because you have the co-pilot button doesn't mean you're going to press it or want to use it.
[00:36:09] It's that level of trust.
[00:36:11] It's that integrity.
[00:36:12] It's that personal relationship with AI.
[00:36:15] That 10%, they're going to go and use it.
[00:36:18] They're going to go use co-pilot.
[00:36:20] They're going to use ChatGPT.
[00:36:21] They're going to use multiple models.
[00:36:22] They're in it to win it.
[00:36:24] But for the 90%, eh, they're kind of on the fence, right?
[00:36:30] And that's how it shows up.
[00:36:32] It doesn't matter what type of business I go into because I've been getting into other types of domains beyond L&D when I go and I teach generative AI.
[00:36:43] And it always shows up the same.
[00:36:46] People are people.
[00:36:47] And the way they react to technology is the same as it always has been.
[00:36:51] And so there needs to be that cultural shift, that risk mitigation, and that culture of innovation.
[00:37:01] And when you have alignment on those, it begins to show up differently.
[00:37:05] You now have the confidence in using that AI tool, whatever it is, in your current workflow.
[00:37:12] For some types of businesses, the risk is so high, they don't even want to touch it.
[00:37:20] Legal, medical, insurance, scientific research, there's a lot of risk there.
[00:37:27] And so they're just kind of hands off and they're interested.
[00:37:31] Don't get me wrong.
[00:37:32] I mean, they want to investigate, but they're trying to figure out what are the tasks that I can use this tool for
[00:37:38] that will allow me to be more productive, to be more creative, to do problem solving, but not put my company in jeopardy or my job.
[00:37:48] No, that makes sense.
[00:37:50] In the education system, even before higher education, it seems like as the world is changing and embracing these technologies,
[00:38:01] I mean, AI is not going away and it is changing people's job prospects.
[00:38:08] It may change what people decide to study when they do go to college, if they go to college.
[00:38:15] But do you see, I don't know if organizations have sort of pipelines when they do recruiting or obviously a lot of these HR leaders
[00:38:25] presumably have children who are school age.
[00:38:29] And I guess, just from my own perspective, I don't see enough being done in the education system,
[00:38:36] starting with teenage users who are digital natives. It just seems like we're falling down by not giving them some of that training.
[00:38:47] I mean, I can send them to your YouTube page, certainly, but just in terms of the scalability of what we're seeing,
[00:38:53] I mean, any thoughts there?
[00:38:56] Yeah, humans are inherently lazy.
[00:38:59] And so, you know, when you're talking about a pedagogical structure that's been around for 150 years,
[00:39:09] that's a tough ship to change direction.
[00:39:13] Again, there's probably 5 or 10% of students out there that are into AI,
[00:39:22] and they're using it as a partner to study, to help write essays, and to do that type of work.
[00:39:31] But the environment to support AI and the student, again, from a pedagogical standpoint,
[00:39:42] it's there, but it's not at scale.
[00:39:44] I have seen it, though. My wife is a K-5 science teacher, and we have these conversations, and occasionally she will give me a link to lesson plans that involve AI.
[00:40:02] I'm like, whoa, this is amazing.
[00:40:04] Like they flip the script and they're allowing the student to use AI,
[00:40:08] but the way that they test them on the knowledge gain or skill acquisition is different, right?
[00:40:15] Because you got to go about it differently.
[00:40:16] If I'm going to go ahead and partner with AI on an essay,
[00:40:20] then the way that I need to test your knowledge is going to show up differently, right?
[00:40:24] So it's fascinating to see, you know, there are some instructors that are getting after it.
[00:40:32] They're trying to figure it out.
[00:40:33] There are some companies that create this material and incorporate AI into the curriculums.
[00:40:42] But again, it's a slow roll.
[00:40:44] Like it is very difficult to do this at scale because again, going back to how does it show up?
[00:40:52] We have equity issues.
[00:40:54] Some schools have access.
[00:40:56] Some schools don't have access.
[00:40:58] It's not at scale.
[00:40:59] It's the same thing with the internet.
[00:41:00] Like the way the internet showed up: some schools had internet and some didn't.
[00:41:04] Some still may not have wireless or internet connectivity today.
[00:41:08] So there's going to be a lot of work that's going to need to be done around supporting K through 12 here in the States,
[00:41:19] higher education, and the way that instructors are going to leverage AI within their courses.
[00:41:25] And one thing I do know, and this is from statistics, is that that group of students between 17 and 18 years old and 21 and 22,
[00:41:36] they're using AI more than any group out there age-wise, specifically the ones that are in college.
[00:41:44] I think usage level is somewhere between 25 and 30%, which is, again, very different than that 10% that we're seeing in corporate.
[00:41:54] Going back to talent management and how that shows up in organizations, yeah, that's going to be very interesting when the current generation or those individuals that are in college that are allowed to use AI,
[00:42:08] when they go and look for a job, they're going to consider whether they're going to get access to AI in their work.
[00:42:13] Oh, absolutely.
[00:42:14] I haven't even seen an article about that yet, but that's going to happen in the next few years.
[00:42:19] I've got nieces who are in college and internships or whatever.
[00:42:24] It's definitely coming up now, but six months ago it wasn't really coming up.
[00:42:29] And now it's like you can't enter an environment without it.
[00:42:33] I mean, it's going to be like automobiles, right?
[00:42:35] I don't need you to be a mechanic.
[00:42:36] I just need you to learn how to drive and drive safely.
[00:42:39] So, Josh, I have one final question for you that I ask all my guests, though it seems like we've already covered it over the course of this conversation.
[00:42:48] But when you hear or see the phrase, you know, elevate your AIQ, I mean, this is your bread and butter, but what would be the key takeaways for anyone embarking on this AI journey?
[00:43:01] As I mentioned before, it's that relationship between you and technology.
[00:43:05] I would get a personal learning plan together and whether it be five minutes a day or once a week for an hour, whatever the case may be.
[00:43:16] Get out there and get your hands dirty.
[00:43:18] Go into ChatGPT.
[00:43:20] It's free.
[00:43:20] You can get access to the 4.0 model today.
[00:43:24] And think about the work that you do, or it could be a hobby, something that you are an expert in or extremely confident within that domain of information and ask the AI to assist you in a task.
[00:43:41] And then you can judge based upon the results, whether you need to modify your prompt.
[00:43:47] And so you want to learn a little bit of prompt engineering.
[00:43:50] And I don't really even like that term.
[00:43:52] I typically like to say prompt design.
[00:43:54] Engineering sometimes is exclusive.
[00:43:57] And so design a prompt so that you start to see results.
[00:44:03] And I think at that moment, it's going to click.
[00:44:06] And then you'll realize, oh, wow, I've just saved some time or I didn't think about that problem that way.
[00:44:13] And I know for myself, that's when I knew things were different and I need to go ahead and figure out how this works.
[00:44:21] And so that's the way I'd go about it.
[00:44:24] Again, it's this relationship between you and the technology.
[00:44:27] And we're talking about behavior change.
[00:44:31] And so that's a journey.
[00:44:33] I mean, that takes a long time to make that jump.
[00:44:37] But with anything, it's always about that first step.
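To make the prompt-design advice concrete, here is a small hypothetical before-and-after: the same task asked once as a vague request and once with a role, audience, constraints, and output format spelled out, which is the kind of iteration Josh is describing.

```python
# Hypothetical example of iterating on a prompt instead of accepting the first result.
first_attempt = "Write a quiz about forklift safety."

# After judging the first output, add a role, an audience, constraints, and a format.
refined_prompt = (
    "You are an instructional designer at a distribution company. "
    "Write a five-question multiple-choice quiz on forklift safety for new warehouse associates. "
    "Each question needs four options, exactly one correct answer, and a one-sentence "
    "explanation of why that answer is correct. Return the quiz as a numbered list."
)
```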
[00:44:41] Perfect.
[00:44:42] Josh, thank you so much for sharing your wisdom with us.
[00:44:46] Really, really appreciate you being here.
[00:44:48] It was fun, Bob.
[00:44:49] I appreciate it.
[00:44:50] Absolutely.
[00:44:51] Thank you, everyone, for listening.
[00:44:53] We'll see you next time.