Bob Pulver catches up with advisor, executive coach, entrepreneur, and NY Times best-selling author Charlene Li about AI-driven transformation and the importance of remaining human-centric. Charlene has written numerous books on this subject (her seventh book is coming soon!) and is continuously educating leaders and organizations worldwide on how to get it right. They spoke about AI-driven transformation and how it differs from prior transformation initiatives. Charlene shared invaluable insights on strategic alignment and leadership buy-in to shrink the ‘knowing-doing-leading’ gap. Importantly, Charlene and Bob dig into the crucial role of Responsible AI in ensuring positive outcomes for all stakeholders. Charlene also shares examples of organizations that have successfully embraced AI and offers practical advice for gaining AI knowledge and skills. How are you bridging the knowing-doing-leading gap? How will AI help you ask better questions, and think differently? Charlene's extensive expertise in digital transformation and human-centric approaches to technology make this episode a must-listen for anyone interested in the future of work and how AI can augment and empower us (humans, that is).
Keywords
AI-driven transformation, human-centric, digital transformation, strategic alignment, leadership buy-in, responsible AI, data literacy, data governance
Takeaways
- AI-driven transformation is an opportunity to change the way organizations operate and upend the status quo.
- Strategic alignment and leadership buy-in are crucial for successful AI implementation.
- Responsible AI requires a focus on data literacy, data governance, and ethical use of AI.
- Customization and personalization of AI tools can enhance their effectiveness and impact.
- Building AI tools in-house can make organizations more knowledgeable buyers and ensure solutions that solve real problems.
Sound Bites
- "AI upends the way we work and changes the way we relate to each other."
- "AI transformation requires strategic alignment and thinking big."
- "Executives need AI literacy and must transform themselves before leading AI implementation."
- "Until you can see the power of AI directly benefiting you, you can't begin to think about how to lead an organization in using AI."
Chapters
00:00 Introduction and Background
02:21 Comparing AI-driven Transformation to Previous Transformations
04:39 Strategic Alignment and Leadership Buy-in
06:16 Becoming AI Literate as a Leader
09:14 Responsible AI: Data Literacy and Governance
13:18 Responsible AI: Speeding Things Up with Clear Guidelines
16:11 Example of Successful AI Implementation: AARP
18:05 Responsible AI: Setting Up AI Ethics Committees
20:24 Responsible AI: Responsible Use of Data and Access
22:25 Building AI Tools In-house: Customization and Personalization
26:04 Building AI Tools In-house: Minimal Viable Team
29:32 Responsible AI: Imagination and Curiosity
32:28 Responsible AI: Responsible Use of All AI Technologies
40:33 Customization and Personalization of AI Tools
46:25 Building AI Tools In-house: Becoming Knowledgeable Buyers
Charlene Li: https://charleneli.com/
Charlene’s books: https://charleneli.com/books/
Powered by the WRKdefined Podcast Network.
[00:00:09] This is Bob Pulver. I had the very good fortune to sit down with Charlene Li, a six-time New York Times best-selling
[00:00:15] author and leading voice in technology and transformation.
[00:00:19] I've long admired Charlene's work and perspective, which have consistently shaped how leaders
[00:00:23] worldwide think about technology's impact on organizations and leadership.
[00:00:28] Our conversation delved into the exciting world of AI-driven transformation, exploring its
[00:00:32] distinctions from previous transformation efforts and examining the crucial importance
[00:00:36] of responsible AI in maintaining a human-centric approach as work evolves.
[00:00:42] I'm deeply grateful for Charlene's time and insights which will undoubtedly provide immense
[00:00:46] value to many of you as you embark on your AI journey.
[00:00:50] Stick around until the end, where Charlene gives us a sneak peek at both her upcoming HR Tech Conference
[00:00:56] keynote address and her forthcoming book.
[00:00:59] This is a do-not-miss episode.
[00:01:01] Thanks for listening.
[00:01:03] Hello again everyone.
[00:01:05] Welcome to another episode of Elevate Your AIQ.
[00:01:07] I'm your host, Bob Pulver.
[00:01:09] Today, I have the pleasure of meeting with Charlene Li.
[00:01:12] How are you today, Charlene?
[00:01:14] I'm doing great, Bob.
[00:01:15] How are you doing today?
[00:01:16] I'm doing excellent.
[00:01:17] Thank you so much for joining me.
[00:01:19] Good to be here.
[00:01:20] I'll just give you the opportunity to do your own introduction about your background in
[00:01:25] the space, not just in AI of course, but digital transformation and your background over
[00:01:30] all that time, etc.
[00:01:32] I am a New York Times best-selling author of six books, and soon to be seven.
[00:01:36] And I started out as an analyst looking at internet media and marketing with Forrester
[00:01:43] Research, and in 2008 started my own company called Altimeter Group, focused on digital transformation,
[00:01:50] and then in 2015 sold that to Prophet, a brand strategy firm, and bounced around in corporate jobs
[00:01:55] for a little while, and then recently became independent again, going back to my entrepreneurial
[00:02:00] roots.
[00:02:01] And I graduated from Harvard College and Harvard Business School, and please don't hold that
[00:02:05] against me, because I do not have a technical degree, but I'm focused much more on business
[00:02:09] and strategy and leadership, and especially the disruptive impact of different technologies.
[00:02:15] Fantastic.
[00:02:16] I know you've been an avid blogger for quite a while.
[00:02:19] I won't date you, but I know it goes back quite a number of years, and certainly you built
[00:02:24] up quite an audience online alongside the books.
[00:02:27] Yeah I started blogging actually about 20 years ago this month.
[00:02:31] Oh my goodness.
[00:02:32] It's been exactly almost 20 years to the day.
[00:02:35] It's amazing.
[00:02:36] So one of the things that I thought would be interesting to kick things off and we talk
[00:02:42] about AI-driven transformation, we talk about, you know, remaining human centric as we do
[00:02:48] that.
[00:02:49] But as we think about transformation, just the concept, that word kind of gets overused.
[00:02:54] But as you think about what we're experiencing now or what people are setting out to
[00:03:00] do with this type of transformation, how do you see it comparing and contrasting to what
[00:03:06] you've seen before with digital transformation or social business transformation or
[00:03:11] maybe a transformation within a specific, you know, line of business or domain?
[00:03:15] Yeah, sure.
[00:03:16] I can do that.
[00:03:17] There have been so many types of social business, digital transformation and they
[00:03:21] all have something in common which is some new business trend or technology typically
[00:03:27] comes along and it's an opportunity to change the way you normally do things.
[00:03:32] So that's the status quo.
[00:03:34] And the reason why it feels so disruptive is that it upends the way you think about the
[00:03:38] world and how you fit into it, because the way you get work done is completely changed.
[00:03:44] So it's highly disruptive, and the most powerful transformations are unfortunately
[00:03:48] the ones that are also the most disruptive.
[00:03:51] And AI is a huge transformation because the way we work is based on how we know how to do
[00:03:58] things and AI can in many cases do a better job of doing that than we can as humans.
[00:04:06] So it upends the way that we do work, it changes the way we relate to each other,
[00:04:12] and it also calls into question, this existential question that if a machine can do it,
[00:04:18] what's my value?
[00:04:20] So this transformation is much bigger and we think of it as a technology transformation
[00:04:24] but actually like any other transformation that came before, it's always about people.
[00:04:31] It's about the digital which is the technology side but we forget that the transformation
[00:04:36] side is about the people and how we have to change.
[00:04:39] So in that regard when you think about what it takes to alter people's behavior and drive
[00:04:48] a cultural change and things like that, I guess I'm trying to think of how, in prior transformations,
[00:04:54] we sort of deputized people, right?
[00:04:56] We had champions and change agents and things like that.
[00:04:59] And there were a lot of grassroots efforts that sort of pushed the organization forward
[00:05:05] and created some inertia within individual teams.
[00:05:09] You know, it can't just be bottom up, right?
[00:05:12] No, it absolutely can't be, and for one simple reason: the potential for transformation
[00:05:17] and value creation and benefit to the organization and its customers and employees,
[00:05:23] all its various stakeholders, is huge, and if you just let little pockets of it spring up,
[00:05:29] you'll see little pockets and little bits of impact.
[00:05:33] So sure we can write better marketing copy with AI.
[00:05:36] But what if you were to rethink the entire marketing process, the way you actually engage
[00:05:41] with customers, that's the most strategic question and that can't be driven by grassroots.
[00:05:46] So the opportunities to completely rethink the way you get work done, how you engage
[00:05:51] with customers, how you deliver a product or even develop new products and services is possible
[00:05:57] because of AI.
[00:05:59] You can now create new companies, new products and services, with no new people,
[00:06:06] no additional costs, and in a fraction of the time.
[00:06:09] So unless you look at it strategically that way, you'll never get to that point by bubbling it up
[00:06:15] from the bottom up.
[00:06:16] So you would argue that now, as we're entering sort of the strategic planning time of year,
[00:06:21] this needs to be front and center, right? Your AI strategy aligned to your business,
[00:06:27] your technical, and your talent strategies.
[00:06:29] We have a saying: think big, start small, scale fast.
[00:06:34] And oftentimes we're forgetting the first part.
[00:06:36] We're just starting small and hoping we can figure out what to scale without that big thinking.
[00:06:42] And unless you align it to your strategic opportunities
[00:06:45] and what your strategy is overall as a business, you'll never get to a point where it becomes
[00:06:49] strategic.
[00:06:50] So it's a chicken-and-egg here, in that most executives don't feel comfortable with AI.
[00:06:55] They think of it as a technology rather than as a transformational strategy.
[00:07:00] And so they say, it's not my job, it's my CIO's job, or somebody over in digital.
[00:07:07] Instead of thinking, oh, this could completely transform our business.
[00:07:12] So when you prepare an organization as you talk to the leaders at different organizations
[00:07:18] and I'm sure you did this part of research for your new book and for all the great newsletters
[00:07:24] that you've been publishing, it just seems like everyone wants to make sure they've got some
[00:07:35] voice and some proverbial seat at the table.
[00:07:35] But they may not know what they're actually asking for and what it's going to take to be in that
[00:07:40] position and contribute meaningfully by getting their own hands dirty and upskilling themselves
[00:07:47] and what's possible.
[00:07:50] How do you see people embracing this at a leadership level, to make sure that they understand
[00:07:54] that they have the opportunity here to really get ahead of this and lead by example
[00:08:02] and learn these skills and become AI literate themselves?
[00:08:07] Yeah, I think you're pointing to a really important requirement which is the C-suite,
[00:08:13] the top leaders.
[00:08:14] I like to think about the top two levels of leadership in your organization.
[00:08:19] They need to have AI literacy, and that isn't just having gone in and used ChatGPT once or twice.
[00:08:28] I call it bridging the knowing, doing, leading gap.
[00:08:33] You know about it.
[00:08:34] Are you actually doing it, using it in a way that has transformed the way you actually work,
[00:08:41] because until you can see the power of AI directly benefiting you, you can't begin
[00:08:48] to think about how to lead an organization in using AI.
[00:08:53] So I oftentimes speak to audiences or executive teams and I ask,
[00:08:57] how many of you know about it?
[00:08:58] All the hands go up.
[00:08:59] How many of you have used it? Hands go up.
[00:09:01] How many of you use it every day?
[00:09:03] All the hands go down.
[00:09:05] You cannot begin to transform the organization unless you have transformed yourself
[00:09:11] in the way you work with it.
[00:09:12] And so I have training for executives and do workshops
[00:09:17] and get them to use AI to do the things they already know how to do.
[00:09:21] And that's primarily creating content and doing research.
[00:09:24] And very importantly, to use it as a sounding board.
[00:09:27] Because as a leader, one of the scarcest resources is a peer.
[00:09:32] Somebody, you can say, well what about this?
[00:09:34] What about that?
[00:09:35] Here's my plan.
[00:09:36] You evaluate it.
[00:09:37] How could it be better?
[00:09:38] And using it as a sounding board to just up-level your work
[00:09:42] is a hugely strategic use of AI for executives and leaders.
[00:09:48] Yeah, absolutely.
[00:09:49] I think back to some of the transformations that I went through at IBM
[00:09:53] where we made sure that, you know, as people bought into the concept of design thinking
[00:09:58] and that methodology that it wasn't just the designers that went through that,
[00:10:03] it was all the product managers and product teams that
[00:10:06] go through that.
[00:10:07] And all the executives had to go through that.
[00:10:10] So they could start to rethink how they can make longer lasting change
[00:10:15] and make better strategic decisions and not just worry about tactics
[00:10:19] and, you know, quarterly metrics and things like that.
[00:10:22] And then even product management and some of the principles
[00:10:25] that fundamentally support product management and agile
[00:10:28] were important for people who were not product managers to learn.
[00:10:32] So I feel like this is just maybe that cranked up to 11, at least learning how to use AI,
[00:10:38] just because AI, I don't think of it as a tool but more a toolkit
[00:10:43] that can do a lot of different things depending on the use case and the domain.
[00:10:47] The design thinking approach that you talk about at IBM,
[00:10:50] what I loved about it is that I got trained
[00:10:52] on design thinking by IBM as an analyst covering the company.
[00:10:58] And I think that's what's happening with AI now.
[00:11:01] The best companies are training people on how to use AI responsibly.
[00:11:07] And so because they want to ensure that everybody inside of their ecosystem
[00:11:10] is using AI in, you know, responsible ways, treating customer and employee data in a responsible way.
[00:11:16] So they can then with full conviction say to customers, yes, we are protecting everything;
[00:11:22] we're using AI in a responsible and ethical way up and down throughout the entire system.
[00:11:26] Absolutely. When we think about the opportunities with AI and I use AI more broadly than just
[00:11:34] generative AI, I know that's the top of mind for a lot of folks including many of my listeners
[00:11:40] but when you think of AI more broadly and the ways in which it can be used,
[00:11:46] but the people that you've spoken to, the leaders that you've spoken to around
[00:11:50] how they've adopted it, are they still starting too
[00:11:55] small and not thinking big enough and boldly enough about how to do this? Or are they scared that
[00:12:01] they're not going to know how to measure success? Before we move on, I need to let you know
[00:12:07] about my friend Mark Feffer and his show, People Tech. If you're looking for the latest on product
[00:12:14] development, marketing, funding, big deals happening in talent acquisition, HR, HCM, that's the show
[00:12:22] you need to listen to. Go to the WRKdefined network, search out People Tech, Mark Feffer;
[00:12:28] you can find them anywhere. All of the above. It's so easy looking at the productivity and
[00:12:36] efficiency gains from AI. So hey, if I can just take the productivity gains, you know, I get 10, 20,
[00:12:43] 30%, maybe even exponential levels of productivity, I'm going to use that low-hanging fruit. Why not do
[00:12:49] that? But I think you're missing an opportunity and this is the key question around where do you aim
[00:12:55] for and how do you measure the benefit. And I keep coming back to your existing business strategy
[00:13:01] and the reason why people don't think about it this way is because they typically are not thinking
[00:13:06] about business strategy on a daily basis. They have their heads down, they're doing their job,
[00:13:13] they have their OKRs that are very poorly written, very much department or team oriented,
[00:13:20] not business oriented. And so they don't really know how to think about the strategic
[00:13:25] opportunities because they don't think about strategy. And so it falls to a few people who are
[00:13:30] the least qualified to actually think about the AI, because they're just thinking about the technology
[00:13:36] and not necessarily the gaps that are between where you are today and where you want to be.
[00:13:43] And that's what strategy is all about. Like how are we going to get from where we are today
[00:13:46] to where we want to be in the future? And using AI very strategically is to say,
[00:13:52] how do we plug those gaps? How do we use it to accelerate or defend our strategy, our go-to-market?
[00:13:59] And then the way you measure it is, are we getting to our strategy better and faster? So you
[00:14:05] use your existing most important KPIs to measure the effectiveness of AI. And it's highly,
[00:14:12] highly relevant to the organization because it's strategic. And again, it just really highlights
[00:14:18] the fact that most organizations are not strategic in the way that they operate and the way they think.
[00:14:25] And so in order to use AI strategically, you have to be a strategically focused company.
[00:14:30] I'm guessing you've got some companies that you've spoken to that are just, you know,
[00:14:35] maybe not firing on all cylinders but they're at least doing better than most and have
[00:14:41] some best practices that perhaps you could share. Yes, one of my favorites is AARP.
[00:14:49] So it's really focused on serving its members. It's a membership nonprofit organization. That's
[00:14:55] focused on people age 50 and above. And then especially people who are in retirement thinking about
[00:15:01] that advanced age, health, financials, everything. And so they started thinking about how can we use
[00:15:07] AI to be more effective in our communications to them? How can we instead of having a couple different
[00:15:15] types of newsletters because it's a huge range of interests from 50 years old and onwards.
[00:15:22] Some people are still working. Other people have various different family situations. So how can we
[00:15:28] granularly communicate to people their needs and put in front of them all of the services
[00:15:33] and support that AARP offers? And so they're using AI. And in the beginning, it was a small group of
[00:15:41] just three people setting the strategy, but it all tied back to, again, their purpose. And their
[00:15:52] productivity gains and efficiency gains were focused internally, but it was always with the
[00:15:57] goal of serving its members better. So having that focus right from the very beginning directed all
[00:16:05] the other groundswell efforts from people, because they had an opportunity to say, here's an idea,
[00:16:10] fill out this form, and we'll review it and get back to you within a few days. And they would approve
[00:16:17] these things constantly, because people were given very, very clear instructions: use AI for this
[00:16:23] purpose. Use it to help us engage our customers better. Help us find ways to do things
[00:16:30] more efficiently so we can engage with our customers better. So everything was focused on that
[00:16:35] one goal that they had. Yeah, I guess I'm a little surprised that that's the organization
[00:16:42] that's you know, showing everyone how it's done but yeah that's a great example.
[00:16:47] I think a lot of what you just described actually falls into this umbrella
[00:16:53] of responsible innovation right so it's you know I spent a lot of time in responsible AI which is really
[00:16:59] about you know the governance it's almost in some ways sort of this like defensive strategy but
[00:17:07] it doesn't have to be that way. You can be responsible, adhere to, you know, legislation and
[00:17:15] guidelines and, you know, ethical expectations or whatever, and still innovate. Like, you can walk
[00:17:20] and chew gum at the same time; people have been doing it for a long time. There's more to think about
[00:17:26] but there's also a ton of of opportunity and if you educate part of this sort of upskilling
[00:17:34] and educating people to advance their ability to work with AI and understand how to use AI
[00:17:40] properly you're undoubtedly going to get a lot of ideas and now you got to figure out which ones
[00:17:47] of these you know make sense which ones are relevant within a particular domain or particular role
[00:17:53] and which ones could be adopted more broadly or even commercialized. You've got this
[00:17:59] dual benefit of not only mitigating risk within your own organization by having people use
[00:18:06] the technology properly for your own operations but you're also going to ignite a lot of
[00:18:13] promising ideas, and some of them might be far-fetched and some of them might be sort of incremental
[00:18:18] innovation, but there's a lot of value in every direction by making your people more aware of what's
[00:18:27] possible and guiding you in a strategic direction that makes sense and maintains some competitive
[00:18:35] edge. Yeah, I see a lot of companies banning the use of AI in the organization,
[00:18:42] and when I talk to people in those organizations, they're all using it. They're just not using it
[00:18:47] in a sanctioned way, which is a problem, because by just saying no to everything, presumably turning
[00:18:54] it off, you're pushing it underground and you have no visibility. You're much better off making
[00:18:59] it visible, making sure that people are using it in a responsible way, which means giving them training,
[00:19:05] giving them guidelines, and more importantly, using it in a strategic way, to say use AI to do this,
[00:19:11] don't use it to do that. Like, you may want to go do that because it's interesting, but it doesn't
[00:19:17] really help us. Focus instead on using it this way, and here's the training, here are the tools,
[00:19:22] like, focus on here. And I remember one organization that said, we're going to become AI ready,
[00:19:30] we're going to become AI first this is an imperative for us as a fairly large organization about
[00:19:35] 600 people, and the CEO came out and said, we're going to give you all training; here's a list of
[00:19:41] approved tools; if you want new tools, there's a process and we'll try to streamline that; and then we're
[00:19:46] going to have a dashboard with every single person's name on it and next to each name will have a
[00:19:51] light there'll be a green light if you're using these tools all the time there's a yellow light
[00:19:56] if you're using them, still trying to figure it out, but at least you're in the game, and a red
[00:20:01] light if you're not really using any of these tools. And he threw down the gauntlet at that point:
[00:20:07] he said in six months time there would be no more red lights employed at this company
[00:20:13] And he would say, look, you have to learn how to use these tools; it is a given for the future success
[00:20:19] of not only our company, but for you as an employee anywhere you work. And if you're not willing
[00:20:25] to do this and advance to become at least a yellow light, then we're going to have another conversation,
[00:20:30] that this may not be the right place for you. Right? But no matter what, there'll be no more red lights
[00:20:34] at this company in six months. Well, I guess after the six months, when there are no more
[00:20:40] red lights, there probably won't be that many yellow lights either, right? Because you don't want to
[00:20:45] be the one who falls behind, yeah, below that
[00:20:50] line. Yeah, well, I mean, the reality is there are some jobs where generative AI doesn't really help you that much,
[00:20:56] and it may be already built into the tools and you just use it but it's not something that
[00:21:01] everybody will be using all the time necessarily, but at least you're familiar with it, and when you
[00:21:10] encounter it, you know how it works; you can appreciate and know if something was created by AI and what to do
[00:21:17] with it as well. So it's just becoming AI literate. The way I like to think about it is,
[00:21:23] if you have a new employee you wouldn't just necessarily trust everything that they put out you'd want
[00:21:28] to review it and AI is like that you don't want to just necessarily take it at face value you
[00:21:33] want to review it and at some point you begin to trust it even more and just going back to your
[00:21:39] whole point about responsible AI: this is not about slowing things down. AI regulations, all these
[00:21:45] things, that and governance are sort of dirty words inside of organizations, but it really is about
[00:21:52] speeding things up because if you have clear guidelines on what quality means what safety and
[00:21:59] security means, what it means to build trust in this technology. Once you have that trust, then you
[00:22:07] can move much, much faster. If you're constantly doubting, does everyone know how to use this?
[00:22:13] Are there any hallucinations in here? Do people have the critical thinking skills to use this in the
[00:22:19] right way? Then you're not trusting either the people or the technology. But if you have a
[00:22:26] foundation of responsible and ethical use of AI then trust that's being built in the organization
[00:22:31] and you can move much, much faster. Absolutely. The organization needs to set the policy, but I just feel
[00:22:37] like it's a matter of time before this is just part of your onboarding and annual, or whatever
[00:22:44] frequency, you know, training that everyone should be going through, just like they went through with
[00:22:49] harassment and data privacy and cybersecurity. I mean, what's your take? Isn't it
[00:22:55] just a matter of time before this is added to that list? Yes, again, I know that one organization, HP,
[00:23:01] does a regular training for all its people on safety security use of data especially if you have
[00:23:07] access to information, and built into that training is the use of AI as well. So
[00:23:13] when you have a regular training cadence and education cadence for people, then this becomes an easy
[00:23:19] way to build it in. It isn't just when you onboard, but it's a regular part of your access:
[00:23:25] in order to be certified to use things like customer data and employee data, you have to be trained
[00:23:31] and reminded this is how you take care of things. So again the power of AI is very much based on
[00:23:38] what data you have access to, but also what you do with it. And so do you pass it off as
[00:23:44] is, or do we review it? What are the safeguards that you have to put in place to make sure it's
[00:23:49] being used well? And those are things, again, that can to some degree get created from the bottom
[00:23:55] up but it needs to have some sort of consistency and alignment across your organization and that
[00:24:01] can only happen from the top down. Absolutely. I know you've written about setting up, like, AI
[00:24:08] ethics committees and things like that, which of course I'm a huge proponent of. Could you just unpack
[00:24:14] that thought a little bit, and just talk about, like, who should be on that committee and,
[00:24:20] you know, what their responsibility should be? I mean, is it part of an AI center of excellence, or
[00:24:25] is it just completely, you know, independent? Because I think people will be concerned about, you know,
[00:24:31] yet another sort of gatekeeper to get something deployed, or what have you.
[00:24:37] I tend to stay away from an AI center of excellence, because it says that there's a
[00:24:43] single way to do things, and we're way too early for that in some ways. And so I separate it and call it
[00:24:49] the minimal viable team that you need to put together your AI strategy, and that's very
[00:24:55] different than when you're operating and executing. But to pull together that strategy, you want
[00:25:00] somebody clearly from the digital and technology side not necessarily IT because IT typically is
[00:25:07] keeping the lights on maintaining things not necessarily innovating with things so the most
[00:25:11] innovative technology thinkers in the organization you want somebody who is the most strategic
[00:25:16] person you can get to be on this team, ideally the CEO, the head of strategy, somebody who understands
[00:25:24] your strategic objectives you want somebody who can represent your customer and commercial interests
[00:25:30] so has a view for how customers needs and expectations are changing because of AI and what the
[00:25:36] impact of anything that you do you could include somebody from product so you can start
[00:25:40] understanding what's going to be required and very importantly you need somebody from HR
[00:25:46] because this is a people transformation you're talking about do you have the skills
[00:25:50] how are you going to be changing jobs, how are career ladders going to change? Now, notice
[00:25:55] who is not on this team: legal and risk. And the reason is because they play a very important role,
[00:26:03] but only if they can be in the room and temper their need to say no. Think:
[00:26:10] if they're in the room and can say, yeah, we can do that, and here are the things we're going to have
[00:26:16] to do and ensure to make sure it's safe, if they can be a proactive partner, then they can be in
[00:26:21] the room. Otherwise, for the most part, I encourage people to keep them off of that initial team,
[00:26:27] and to keep them absolutely informed. Get them to write the responsible AI and ethical approaches, because you
[00:26:34] absolutely need to do that. And as you come up with a strategy, keep them involved, bring them in
[00:26:42] to say, well, these are the safeguards we want to have, how do we include that and actually
[00:26:47] put that into execution. So the concern is that you have to get legal and risk and security
[00:26:55] involved from the very beginning, and the reality is, if you can demonstrate that you are approaching this
[00:27:01] in a responsible and ethical way, with some basic guidelines to say, we are not going to execute
[00:27:07] anything without always coming back, and we'll keep these guidelines in mind as we go through this,
[00:27:13] and we're constantly staying in touch with you, but we need to think very openly, very much
[00:27:19] in a brainstorming fashion, without saying no to opportunities. And your job is to say no,
[00:27:25] to put these safeguards in, and we don't need that in the room right now. Yeah. Now, as the strategy
[00:27:30] comes together then you absolutely have to include them but in that initial group give yourself some
[00:27:37] breathing room and instead of constantly thinking about all the things that can go wrong think about
[00:27:43] how you can be curious how you can have imagination brought into the process to imagine a very
[00:27:47] very different future yeah I really like your kind of thought there this is different
[00:27:54] and what we need here are to your point people from different domains different lines of business
[00:28:01] and with the cognitive diversity to understand where there might be some implications to
[00:28:08] protected categories of workers or whatever you need to do to make sure that you're thinking
[00:28:15] you know deeply and broadly about adverse impact and fairness and transparency and things of that
[00:28:23] nature so I think what I tell people and what you describe I think are complementary
[00:28:29] one of the things that has come up is around the phrase responsible AI it kind of entered my
[00:28:38] at least my vocabulary around the time that generative AI came in so people have those two things sort of
[00:28:45] linked together but responsible AI is much bigger than any particular legislation that is just
[00:28:52] now coming out and it's not only generative AI responsible AI is how you're using any
[00:28:59] autonomous algorithmic or artificial intelligence system to help make decisions and
[00:29:08] draw conclusions right so are you seeing a similar observation where people are sort of
[00:29:14] conflating you know or inappropriately directly associating responsible AI with generative AI
[00:29:20] yes and again it's it's completely understandable because it's the most visible, relatable,
[00:29:26] usable example of AI that we can actually use and access without being a technologist
[00:29:31] and so if you think about it the true power of AI is when you have generative AI as a front end
[00:29:37] the front end from the interaction point of view and then be able to access all this powerful traditional
[00:29:42] what I call traditional AI like machine learning or natural language processing all those things
[00:29:49] become much more accessible and even more powerful with generative AI layered on top of it so
[00:29:54] it's this melding of all these AIs together we don't say responsible generative AI we say responsible
[00:30:00] AI because it's encompassing everything but the real focus right now is because it is so much more
[00:30:06] accessible because it is now democratized it's a different level of responsibility and care that needs
[00:30:14] to be in place because anybody anybody with a mobile phone anybody with a browser can use it
[00:30:21] and it sort of reminds me of the early days of social media
[00:30:25] anybody can set up a Facebook account anybody could go out there and start a blog
[00:30:30] and you can do a lot of damage if you're not careful I'm glad you brought that up because I know
[00:30:36] people have been sort of bouncing around what is the proper sort of analogy you know I guess
[00:30:43] innovations from a technology standpoint I mean is it is generative AI like you know the internet
[00:30:50] is it like social media I happen to think social media is a really powerful one that sounds
[00:30:56] like you do too just because I feel like the guardrails that were set for social media use
[00:31:02] within organizations and the fact that some of them did the same thing then that they're now
[00:31:07] doing with AI which is you know nobody's going on Facebook or whatever at
[00:31:13] work right they just didn't even see the benefit of any use case for it back in the early days so
[00:31:20] I think social media is the more appropriate analogy yeah agreed agreed again people are
[00:31:27] asking me how do you tell that this is a technology trend that's going to be a big deal
[00:31:31] and I go well I can tell you that things like blockchain or augmented reality or web three
[00:31:39] were not going to be that big because they didn't have this component of transformation with them
[00:31:43] they're going to be very important underlying technologies but they don't fundamentally change
[00:31:48] the power dynamics and the relationships between each other and the technology that happened
[00:31:55] with the internet absolutely power structures were completely obliterated same thing with social
[00:32:01] media to some degree with mobile and then especially with AI the fact that we have this incredibly
[00:32:07] powerful tool now at our fingertips anybody can use it anybody can write programs now I can write
[00:32:16] apps and programs for the first time in my life it's an incredible powerful sense of capability
[00:32:23] and being able to have agency that just didn't exist before we can now set up companies and
[00:32:30] launch new products and services so much faster than we could before and the people who are benefiting
[00:32:37] from those are the ones who are agile enough to your point before agile agility being able to work
[00:32:43] in a very different way is probably the most important characteristic that drives success with AI
[00:32:49] if organizations have already gone through digital transformation and I mean not just digitized
[00:32:54] their transactions but actually work in a transformed way they're in a much better position
[00:32:59] to be able to take advantage of this new generative AI technology yeah absolutely um I also think
[00:33:05] the same is true when it comes to data and you talked about hallucinations before and being able to trust
[00:33:10] these systems I think that organizations that are more mature from a data and analytics
[00:33:16] perspective are also in a much better position especially as you build proprietary
[00:33:21] you know models or you know you bring something in and you build with it or it's just
[00:33:27] getting additional you know training with your you know proprietary data you're just going to
[00:33:32] be in a much better position if you have better data quality and data governance and things like that
[00:33:37] and it's not only data quality and data governance it's also do you have data literacy
[00:33:43] because data used to be the domain of an interesting team of pointy-headed
[00:33:49] analytics people right and if you wanted to get some analysis you were going knocking
[00:33:54] very quietly on the door can I bother you can I bother you to run this
[00:33:58] piece of report for us and now with these tools you have the potential of giving anybody access
[00:34:05] to these tools to be able to run the data but with that comes great responsibility also
[00:34:10] so who has access to what data to do what with it and make what decisions and if you already
[00:34:17] have a culture of responsible use of data and access a data and training on how to do that
[00:34:22] it's a lot easier than if your people have never been able to access data before
[00:34:28] and don't know what to do with it it's incredibly powerful and could be incredibly dangerous
[00:34:33] but again not having a PhD in data analytics or data science is not a requirement anymore
[00:34:42] to be able to draw insights instead the biggest requirement is that you have an interesting
[00:34:46] question that data could potentially answer so can we train people on how to ask these interesting
[00:34:53] questions that will make a difference in them making better decisions on behalf of the organization
[00:35:03] because I think the other thing about like when I think about the learning curve the journey of
[00:35:08] AI and like you said this is the first time people have actually been able to interact with it
[00:35:12] that like you know the UI is the AI right but it's also the user manual and you know the
[00:35:17] help section the community forum all rolled into one right so you're running out of excuses
[00:35:23] to start to get in there and get your hands dirty because if you're stuck you can
[00:35:29] immediately get unstuck right without leaving your keyboard and the screen and as you said at
[00:35:36] some point it's about being able to ask the right questions exactly so I was just curious just you know
[00:35:42] this could be you know for personal use or in your work are there any particular you know tools
[00:35:49] or use cases besides the usual you know chat GPT and or whatever that you're finding particularly
[00:36:04] useful or interesting yes I like using the custom GPT features within ChatGPT also Claude has
[00:36:11] something similar called Projects and the idea is that you can customize it and we oftentimes use
[00:36:18] this idea of prompt engineering to make the results better so there's all sorts of tutorials on how to
[00:36:24] get the most out of your LLMs large language models but if you find that you're doing a certain
[00:36:24] amount of tasks over and over again so for example I'm constantly writing content and I want the AI
[00:36:31] to understand my writing style my past writing content so I upload all of my content I upload my
[00:36:38] last book I give it all of the articles I've written every newsletter I've written over the past
[00:36:43] four years it's all in there and as a result it understands a full picture of me and the instructions
[00:36:51] I give it are very specific to what I'm looking for so I have a couple different versions of this to
[00:36:56] do different things and it's a nice way to customize the LLM but use its base trained model its
[00:37:05] very powerful model to do things very specific to you so sometimes I'll go in with an executive
[00:37:10] team and we'll create custom models for each member of that team with their writings with
[00:37:17] their plans the way that they talk and present again using publicly available information nothing
[00:37:24] proprietary and then it becomes instantly usable for them because it writes in their voice it thinks like
[00:37:30] them it understands all their prior thinking so it can be that great thought partner and so I think
[00:37:37] it's one of the most powerful things you can do with this don't settle for the same GPT that
[00:37:42] everyone else is using make your own yeah and I can attest to that advice I know I've tried to do
[00:37:49] that customization even just for my podcast episode summaries I certainly don't want to repeat
[00:37:55] titles or you know summaries and things like that even if we've touched on some of the topics
[00:38:00] with different guests I mean there's always a different perspective and a different angle to take
[00:38:06] but yeah I can imagine for someone who is such an avid you know blogger and writer that that would
[00:38:12] be extremely helpful to have that sort of writing copilot with you but even as an executive
[00:38:18] I think for you again assuming that you have a version that is not being used to train the model
[00:38:25] you have all those safety and privacy controls set up correctly then you might feel very comfortable
[00:38:30] uploading your strategic plan a competitive analysis your marketing plan your latest presentation
[00:38:36] to the team various correspondences or brand stories and guidelines so that you have
[00:38:43] the full knowledge of everything you would want somebody to know to help you be a better leader
[00:38:49] so it's almost like having this this sort of personal board of directors to help you do your job better
[00:38:55] and it's sitting right there and you can call on it to do anything from writing to researching to
[00:39:01] sounding off on various ideas you may have in full context of what you are trying to accomplish as a
[00:39:08] leader so I think the value here is for any leader who is constantly thinking strategically
[00:39:15] and it's not just to create content it's to help you think in new and better and different ways
[00:39:21] no totally great awesome so Charlene I know you have a couple exciting things coming up you've
[00:39:29] got a keynote coming up for HR Tech and then of course as you mentioned your seventh book which we
[00:39:35] are eagerly anticipating any inside info you're going to share for either of those
[00:39:42] I'm almost afraid to say because I'm going to jinx myself I'm trying to create a digital avatar
[00:39:48] digital twin of myself that will help me present at HR Tech and if all goes well I will connect it
[00:39:55] hopefully to my private database my private chatbot and it'll also be able to answer questions
[00:40:03] in real time on its own so just having that kind of technology and showcasing how
[00:40:10] it applies where you can now really create training for example inside an organization
[00:40:15] that's highly customized to each person we've all gone through just really long tedious training
[00:40:22] inside of our organizations and there's no reason why learning can't be more compelling
[00:40:27] and more engaging so if you could create customized training with a live instructor or even
[00:40:34] with a digital instructor and have it even tell jokes and add humor which is very very
[00:40:40] difficult for AI to do it would be a really interesting engagement and provide scale in ways that
[00:40:47] wouldn't otherwise be possible and the new book is much more workmanlike it's step-by-step taking you through
[00:40:54] what you do first second third fourth to have a compelling strategy for generative AI
[00:41:00] and we'll focus specifically on generative AI too because it's a newer thing that's out there
[00:41:05] it's a layer that sits on top of any existing AI it's the one that has the most transformational
[00:41:11] power inside of your organization because anybody can use it so it requires a strategic approach
[00:41:17] and we start with everything from just literacy and creating a foundation with responsible AI
[00:41:23] to auditing your strategy to nuts and bolts of how to set up your data and technology so
[00:41:28] that it can scale so nuts and bolts everything you need to know from beginning to end to at least
[00:41:33] get you started on the path to having a coherent strategy yeah I think a lot of people are
[00:41:40] struggling to figure it out they're already overwhelmed and they don't know where to begin
[00:41:44] and so it sounds like a really important you know practical guide that people will get a lot of
[00:41:50] value from so congratulations on that and looking forward to reading it and definitely looking
[00:41:56] forward to seeing you at HR Tech I did just book my trip so we'll be there in person then
[00:42:03] Charlene any other final thoughts about elevating your AIQ I think we covered
[00:42:09] that pretty well in this conversation we covered a lot of stuff today I think the only advice
[00:42:16] I give to people is that the biggest impediment to using AI and specifically generative AI
[00:42:23] is a lack of imagination so get really curious with it try a lot of different things with it
[00:42:30] and it is constantly changing and if you don't see the technology working in the way you want
[00:42:37] go and advocate and say I really want to do this and most likely somebody will show you
[00:42:43] how to do it that way or vendors will go oh I'm gonna start paying attention to this and
[00:42:48] actually deliver that way I think the smartest thing that organizations can do is to build to buy
[00:42:54] meaning they're gonna start building some of these custom tools with the intention of being able
[00:42:59] to buy it from a vendor in the future but because they know they're gonna buy and not build the
[00:43:03] ones from scratch they're being a more thoughtful thorough more knowledgeable buyer simply
[00:43:11] because they're starting to use these tools and build them too so I think it's fantastic
[00:43:16] advice I mean I think once the buyers are smarter about what they want they'll make sure that
[00:43:21] those you know the vendors you know build things that solve real problems and not just
[00:43:26] shiny objects out there to distract us all so that's a great closing thought
[00:43:31] Charlene thank you so much for taking the time to educate me and my listeners
[00:43:37] really really appreciate it and look forward to seeing you at HR Tech.
[00:43:42] Thank you so much. Thanks again Charlene thanks everyone for listening we'll see you next time


