Ep 30: AI Readiness Takes Effective Leadership and a Culture of Responsible Innovation with Andrew Whyatt-Sames
Elevate Your AIQ · October 24, 2024
Episode 30
00:51:59


In this episode of Elevate Your AIQ, host Bob Pulver speaks with Andrew Whyatt-Sames, Co-founder of Uptake AI, a boutique consultancy focused on AI literacy in organizations. Andrew has had a very interesting and certainly non-linear career path, from broadcaster to comedian to leadership development executive. Bob and Andrew discuss the transformative impact of generative AI on organizations, emphasizing the importance of leadership, culture, and compassion in successfully integrating AI into the workplace. Andrew shares insights on the necessity of AI literacy, responsible AI practices, and the five C's of effective AI leadership: communication, culture, capability, compassion, and a still-evolving fifth, collaboration. The conversation highlights the need for organizations to prepare for the future of work where AI and human collaboration will be essential.

Keywords

AI, organizational psychology, leadership, generative AI, responsible AI, culture change, AI literacy, compassion, technology adoption, workplace transformation

Takeaways

  • AI is revolutionizing productivity in organizations.
  • Leadership must embrace AI to drive adoption.
  • Compassion is crucial in managing AI transformation.
  • Organizations need to foster a culture of innovation.
  • AI literacy is essential for all employees.
  • Responsible AI practices must be prioritized.
  • The five C's of AI leadership are vital for success.
  • Understanding AI tools enhances ethical use.
  • Collaboration between HR and IT is necessary for AI integration.
  • Organizations should prepare for the future of work with AI.


Sound Bites

  • "There's a massive productivity increase here."
  • "If you're a leader and you go ‘get using AI!’, that's really lacking in compassion."
  • “There's a massive job to do…to get their data ducks in a row before they can even start.”
  • "We need to maintain empathy."


Chapters

00:00 Introduction to Andrew Whyatt-Sames and His Background

03:01 The Role of AI in Organizational Development

05:57 Leadership and AI Adoption

08:57 Cultural Readiness for AI Integration

11:55 The Importance of Compassion in AI Transformation

15:03 Navigating Responsible AI Practices

18:13 The Five C's of AI Leadership

20:53 The Future of AI and Human Collaboration

24:13 Practical Tips for AI Utilization

27:07 Conclusion and Final Thoughts


Andrew Whyatt-Sames: https://www.linkedin.com/in/andrewwhyattsames

Uptake AI: http://uptakeai.co.uk/



For advisory work and podcast sponsorship inquiries:

Bob Pulver: https://linkedin.com/in/bobpulver

Elevate Your AIQ: https://elevateyouraiq.com

Powered by the WRKdefined Podcast Network. 

[00:00:09] In this episode of Elevate Your AIQ, I'm joined by Andrew Whyatt-Sames, co-founder of Uptake AI.

[00:00:15] Andrew delves into the interplay between business, technology, and talent strategy and underscores the critical need for organizations to achieve alignment across these three pillars.

[00:00:24] If you are in the middle of strategic planning, hopefully you're doing exactly that, which will reveal ways to mitigate risk and capitalize on opportunities.

[00:00:32] We talk about the benefits of automation to free up human workers to focus on more strategic and creative endeavors, as well as augmentation to help humans make better decisions.

[00:00:42] The mission of Uptake AI is to increase AI skills and literacy across organizations, which clearly aligns with the theme of this show.

[00:00:50] Andrew highlights the importance of continuous learning and adaptation in the face of rapidly changing technology and business landscapes,

[00:00:56] and encourages organizations to foster a culture of innovation and experimentation, empowering employees to embrace new ideas and technologies.

[00:01:04] I always enjoy my chats with Andrew and there are plenty of valuable takeaways from this conversation.

[00:01:08] So thanks as always for listening. Let's get into it.

[00:01:12] Hello everyone. Welcome to another episode of Elevate Your AIQ. I'm your host, Bob Pulver.

[00:01:18] With me today, I have the pleasure of speaking with Andrew Whyatt-Sames. How are you doing, Andrew?

[00:01:22] Hey Bob, I'm really good. Thank you very much.

[00:01:24] Thank you so much for spending some time with me and providing some insight to my audience. Really appreciate it.

[00:01:29] Just to kick things off, why don't you just give us an introduction about who you are and a little bit about your background?

[00:01:35] Yeah, sure. So my background is a good mix. I was a professional broadcaster for a while. I was a comedian. Don't worry, I'm not going to try and be funny today.

[00:01:43] I retrained in 2008 as a, in the UK we call it an occupational psychologist, but I guess in the US you would call it an organizational psychologist.

[00:01:52] Industrial organizational psychologist. Yeah. Seems a bit dated as well over here.

[00:01:57] It does. And I was sort of busy building a boutique consultancy, after working for one of the Big Four, in leadership, team, and culture development.

[00:02:04] And then along came AI. And after about a thousand hours of playing with the tools, I thought, hmm, there's a massive productivity increase here.

[00:02:12] I just want to go out and share it with people. And so with my co-founder Lucy, I set up Uptake AI.

[00:02:17] And that's what we've been doing for the best part of the year is sort of helping organizations get their head around the human side of generative AI.

[00:02:24] And the skills and the AI literacy and the cultural conditions and the cultural plan that needs to be true in order for people to get the best out of the tools.

[00:02:33] It's a noble and important mission. So, you know, I loved hearing about your story and I love hearing about, you know, some of the ways in which you're engaging clients, big and small.

[00:02:44] So, you know, maybe we could just unpack that a little bit: you know, the service that you're providing, the types of clients that you're seeing, and how that's going.

[00:02:56] Well, we frustrated our marketing guy, because he sort of said, what size of organization can you serve?

[00:03:00] We went any size and it was like, what sector? We were like, any sector. I said to him, it's a bit like trying to get people to use PowerPoint.

[00:03:07] Like everybody needs it, you know. So in terms of the case studies that we're generating, there's a food packaging organization in the US; we've been helping their procurement function to rip through their RFIs, requests for information.

[00:03:22] You know, so we've been helping them to do that. And the minute they saw how easy it was, they were like, please train my team.

[00:03:28] We've helped a big brewery organization to get their high potentials to understand the massive potential of generative AI just as a tool that everyday workers can use.

[00:03:39] So they're really excited about it. And there's another organization, like an entertainment organization.

[00:03:44] And we are currently running an executive team version of our immersion event, which is like a two-hour weekly deep dive for eight weeks into Gen AI tools.

[00:03:54] And the big idea is that executives, they can't delegate it.

[00:03:58] You know, they need to be hands on with the tools in order to become evangelists and really understand where it fits in the strategy.

[00:04:04] So we're kind of working with loads of different sectors, loads of different countries, Portugal, we're all over the place, Bob, we're all over the place.

[00:04:12] No, that's awesome. You're right. You know, AI has become ubiquitous.

[00:04:16] And, you know, I think it's affecting different industries and sectors and perhaps roles at a different pace.

[00:04:23] But it is coming and it will change, you know, countless roles in the way that we work, the way that we use technology and perhaps the way that we interact with each other.

[00:04:36] And yeah, I mean, just nobody's spared.

[00:04:40] So I can't say that I'm surprised that, you know, you have such a broad, you know, audience.

[00:04:45] And, you know, we're seeing that here as well.

[00:04:47] I think one of the things that's interesting is, and I'm glad to hear that you have not just high potential folks who are anxious to, you know, learn more and continue to, you know, enhance their own, you know, capabilities and learn more, you know, skills and things.

[00:05:02] But leadership, I think it's so important for leaders to really understand and embrace.

[00:05:07] And I think to your point with the immersion program, get their hands dirty themselves so that they can fully appreciate what's possible.

[00:05:15] If you're a leader of any size organization and you don't understand generative AI and you're not close to your CIO or your CTO, you're already behind.

[00:05:24] So I couldn't agree more.

[00:05:26] Like the minute people see what the potential is, you know, then they start thinking, right, how do I get more of this?

[00:05:32] But you're absolutely right.

[00:05:33] Leaders need to be on this.

[00:05:35] You and I have spoken about the disparity in employee adoption despite, you know, maybe a corporate policy, or the lack thereof.

[00:05:45] Policy meaning they've said, we're concerned about the risks.

[00:05:50] Don't do anything until we figure it all out versus, you know, some type of policy that says, oh, we're going to use it, but we're going to use it in a controlled environment.

[00:06:01] We're only going to use it for these use cases or in this geography or whatever it is.

[00:06:06] But employees, you know, sort of the modern equivalent of shadow IT, right?

[00:06:11] They're using it whether you've sanctioned it or not.

[00:06:15] And they're probably using it beyond whatever it is that you've sanctioned.

[00:06:19] So I guess the question is, have these leaders in your immersion workshops and other programs, has their acceptance of all that's possible translated into we need everybody on board?

[00:06:34] What we're noticing is that the organizations that are doing it well are the ones where the CEO is just basically going, I believe in this.

[00:06:41] I'm using it.

[00:06:42] You need to get on board.

[00:06:43] So when it comes from the top desk, boom, like it just starts happening.

[00:06:47] So executive involvement is everything.

[00:06:51] You know, I speak to chief technology officers, you know, and I'll say to them, are you using Gen AI?

[00:06:54] And they'll go, yeah, yeah, I wrote my strategy on it.

[00:06:56] And you go, great.

[00:06:57] What about the rest of the workforce?

[00:06:58] They're like, I'm a bit busy over here with this work stack and this machine learning operations kind of thing.

[00:07:05] So there's a void, you know.

[00:07:07] So where leaders aren't evangelizing about it and getting on board and understanding it, there's a bit of a void.

[00:07:13] And I think there's an opportunity for us, you know, people who get it, who understand how people functions work, to build up people functions' understanding and literacy.

[00:07:23] So they know what to ask for.

[00:07:24] You know, they know where to put it in the strategy, how to talk to IT and what the business case is they need to take to the CEO.

[00:07:31] Yeah, no, that's great.

[00:07:32] You know, I think back to some of the experiences I had with other forms of, you know, technological advancement, including, you know, the last generation of AI.

[00:07:44] When I was at IBM, obviously, IBM Watson was a big hit.

[00:07:48] And that, you know, came out towards the end of my tenure there, 2014 or so.

[00:07:54] But once people are exposed to this technology and they get the sense of what they could do with it, they go into overdrive, right?

[00:08:05] They're thinking about all these ideas and all these use cases.

[00:08:08] What if we do it over here?

[00:08:09] What if we do it over there?

[00:08:11] And you could argue that some of that is, you know, hey, look, you know, I just found a shiny new hammer.

[00:08:17] Point me to the nails, right?

[00:08:19] But it's really getting these ideas from, you know, frontline employees, people interacting with clients, people talking to prospects.

[00:08:30] They're seeing these problems firsthand and they're coming up.

[00:08:33] They see now how technology can be applied, you know, hopefully responsibly and ethically, but could be applied to these particular use cases, whether that's sort of an automation example or an augmented intelligence kind of example.

[00:08:48] But I guess I wonder if organizations are ready for those floodgates to open and what to do with those ideas.

[00:08:58] So in the past, you know, we've had some conversations around, well, organizations that are mature from a data and analytics perspective are probably in a better position to, you know, adopt AI, you know, more quickly as an early adopter.

[00:09:15] But I also think your maturity in terms of innovation capacity and you mentioned before around like this culture change, behavior change.

[00:09:26] I just wonder if companies are really ready for what's to come with these new floodgates opening.

[00:09:33] Yes, I think you're right.

[00:09:34] There's two sort of things that I can pull out of what you just said.

[00:09:37] The first is, you know, if you're a leader and you go, 'get using AI!', that's really lacking in compassion and it's not really respectful.

[00:09:44] Your role is to put it in the strategy, start enthusing about it and then create an environment where that groundswell happens.

[00:09:51] You create an innovative learning mindset kind of environment.

[00:09:54] You give people the tools and then you facilitate that sort of collaboration.

[00:09:58] And then, you know, that's culture change, isn't it really?

[00:10:01] And then the second thing was, you know, yes, there are some companies who are really up for it.

[00:10:06] So, for example, one of my customers has got a U.S. owner and they are a kind of global retailer.

[00:10:13] And their vision is they kind of start with the customer experience and they work their way back.

[00:10:17] And they say, imagine going on a website and you say, I want a circular saw blade.

[00:10:21] And the AI on the site says, oh, I've noticed that you're probably cutting something.

[00:10:25] Why don't you buy this lubricant?

[00:10:26] Because it will make your blade last five times longer.

[00:10:28] And have you thought about eye protection?

[00:10:30] So it kind of pulls together and it's got a feedback loop, right?

[00:10:33] So it actually learns which suggestions get more revenue.

[00:10:37] Fantastic.

[00:10:38] So work that way to where they've got what they've got now.

[00:10:41] And they've got horrible, dirty, misaligned data.

[00:10:46] So they've got American stuff in imperial measurements, like quarts and inches and all that sort of business.

[00:10:54] And they've got metric stuff in centimeters and all that sort of business.

[00:10:57] So there's a massive job to do to clean up the data and, you know, as they call it, kind of get their data ducks in a row before they can even start.

[00:11:05] So, you know, I do feel sorry, you know, for people functions, they just go, yeah, start using it.

[00:11:09] And, you know, you create the culture quick.

[00:11:11] But for technical folks, there is a big job to do in terms of data cleansing and harmonization before you can start getting to the good stuff.

[00:11:20] And it's a bit of a slog.

[00:11:21] And I think that might be one of the reasons why CTOs aren't saying, right, we need to now work on the culture because they've got so much stuff to do just to get the business AI ready, I think.

[00:11:30] Yeah.

[00:12:00] We should take this.

[00:12:01] I mean, what are you hearing or what are you experiencing either in your workshops or as you talk to prospects?

[00:12:07] There's a bit of a void.

[00:12:09] Yeah.

[00:12:09] So the easy answer is to go, right, chief AI officer, you know, go.

[00:12:13] And, you know, that's a sort of blend of the head of the people function and the head of the technical function.

[00:12:19] But why not just facilitate a better conversation between HR and IT, starting with finding some budget, like putting someone's name on it, getting some budget and setting the ball rolling.

[00:12:31] So it often doesn't have an owner.

[00:12:32] And the sort of psychological side of it is, you know, I talk to people leaders who are like, yeah, this all sounds great.

[00:12:39] But I don't want to put another big project in front of the executive team because they've got a lot on at the moment.

[00:12:43] And I'm like, we have a thing over in the UK called lemon drizzle cake.

[00:12:48] Like, I don't know if you have that in the US.

[00:12:50] There's like a cake, vanilla sponge.

[00:12:53] And then you soak it with this sweet lemon sort of zesty juice thing.

[00:12:58] And it's really delicious.

[00:12:59] So that's the sort of analogy I draw.

[00:13:01] I go, we're not asking you, you know, to have another cake.

[00:13:04] We're just asking you to put AI lemon drizzle into your cake so your company tastes like AI.

[00:13:09] It's not a big new project.

[00:13:11] It's just this thing that if you get the culture right and the strategy and the sort of initiatives right, you can just basically build it from the ground up.

[00:13:19] So I think it's a mental model thing.

[00:13:22] Overloaded executive teams going, sounds great.

[00:13:24] Not now.

[00:13:25] Well, it sounds delicious.

[00:13:27] Oh, it's really nice.

[00:13:27] Sounds like a good plan.

[00:13:29] Yeah.

[00:13:30] I mean, I'm sure there's a "you can have your cake and eat it too," you know, analogy in there somewhere.

[00:13:35] But no, I think you're right.

[00:13:36] I mean, some of this is not necessarily, you know, replacing some of the things that you're doing.

[00:13:43] You've got to do both.

[00:13:44] And I think it's part of having the right people, you know, involved.

[00:13:48] But you've got to be able to think both tactically and strategically at the same time.

[00:13:53] I mean, you need the people in the trenches thinking about how this is going to change experiences

[00:13:58] from almost like a design thinking perspective.

[00:14:01] But you also need the systems thinking perspective that says, let's look at how we can reorganize

[00:14:06] and reimagine how this work is getting done.

[00:14:09] And I think it's one of the concerns that I have when people, you know, people think, you know, automation is the same as AI.

[00:14:16] And that's not the case at all.

[00:14:17] We're not talking about, you know, automation is generally for automating, you know, tasks.

[00:14:22] We're talking about like overhauling job design and how work actually occurs and how people engage with each other

[00:14:30] and look for the sort of cognitive diversity to make intelligent and comprehensive decisions

[00:14:39] when it comes to the potential impact to different, you know, audiences and things like that.

[00:14:44] And then, of course, what do you do with the people?

[00:14:47] If you do automate a good portion of someone's job, you need to be thinking ahead to what else could be.

[00:14:53] These are great people.

[00:14:54] They're loyal.

[00:14:55] They're hardworking.

[00:14:56] They're performing well.

[00:14:57] What else can they be doing for us?

[00:15:00] Because there's always more to do.

[00:15:01] There is.

[00:15:02] But the CEO of KPMG this week came out on LinkedIn saying, you know, my position is it's not going to take jobs away.

[00:15:09] Gen AI is going to create value and it's going to create jobs, obviously, because I sell AI literacy.

[00:15:15] I'm going to agree with that.

[00:15:17] You know, I think it's a great argument.

[00:15:20] And we'll wait and see.

[00:15:22] I think it's true.

[00:15:24] You know, if you think about one organization, if they start using this stuff, everyone's 20 percent more productive.

[00:15:29] They win more work.

[00:15:30] They grow.

[00:15:31] So you can't just say it's going to take jobs out.

[00:15:34] That's short sighted and unrealistic.

[00:15:36] Yeah.

[00:15:37] So one of the topics that, as you know, I'm sort of passionate about is, you know, responsible AI.

[00:15:45] Are we rebuilding and using things, you know, the right way?

[00:15:49] So, you know, some of those ideas I mentioned that come from frontline employees or anywhere, really, they may not be the most, you know, ethical.

[00:15:59] Maybe they would need to use, you know, proprietary data or personally identifiable information or, you know, whatever it is.

[00:16:07] I mean, you need to look at each of those ideas and concepts sort of on its own merit.

[00:16:13] But I guess one of the questions is, as you execute these workshops and these immersion programs, the concepts around responsible AI, with governance and compliance and fairness and, you know, ethics and things like that,

[00:16:28] how do those come into play?

[00:16:30] So, before we move on, I need to let you know about my friend, Mark Pfeffer and his show, People Tech.

[00:16:37] If you're looking for the latest on product development, marketing, funding, big deals happening in talent acquisition, HR, HCM, that's the show you need to listen to.

[00:16:50] Go to the WRKdefined Network.

[00:16:52] Search up People Tech.

[00:16:53] Mark Pfeffer.

[00:16:54] You can find them anywhere.

[00:16:58] So, two interesting things to share there.

[00:17:00] You know, the first thing is we look at people's responsible use policies.

[00:17:05] And after you've read it, you feel like you need to go and sit in a room and put a wet towel over your head, because it's been written in a kind of legalese; you know, the lawyer of the organization is happy with it because it's very comprehensive.

[00:17:19] But it's not easy to read, and the balance is towards regulation, not innovation.

[00:17:24] Really, an accountable use statement, as I would like to call it, talks about: how do you do great, cool stuff?

[00:17:32] How do you manage bias?

[00:17:33] How do you manage data security?

[00:17:35] How do you make yourself able to explain the decisions you've made along the way, and the human oversight?

[00:17:39] And, you know, how do you move the needle on performance?

[00:17:41] That's the purpose of it.

[00:17:42] So, how do we move the needle on performance safely?

[00:17:45] Whereas the emphasis seems to be just be safe, which is an innovation killer.

[00:17:49] And I get that, because as an executive team of a big business, it's a bit of a worry, right?

[00:17:54] But then, as you said before, there's this shadow ChatGPT usage where people are just using it on the side of their desk.

[00:18:00] So, it's happening anyway.

[00:18:01] So, why not do a more comprehensive thing?

[00:18:03] So, that's the first bit.

[00:18:04] Organizations need to spend more time embedding those policy documents, bringing them to life and having line managers hold meaningful conversations with their people.

[00:18:13] And they say, talk me through how you did this project safely.

[00:18:16] That's how you do that.

[00:18:17] The second thing that we've noticed in our immersion is we don't wag our finger at you and say, be ethical.

[00:18:24] Don't upload data.

[00:18:26] The more people understand about how LLMs, sorry, technical jargon, large language models and generative AI tools like ChatGPT or Copilot, how they work,

[00:18:35] and what the actual data principles and firewalls are, the more we see the scores moving on their ethical use.

[00:18:43] So, if I say to you, are you ethical?

[00:18:45] You would say yes.

[00:18:46] But if I say to you, do you know how to be ethical and accountable with AI tools?

[00:18:51] Your score is lower if you don't understand how they work and if you're not hands on and using them all the time.

[00:18:56] So, we've noticed this interesting correlation between understanding of AI and ethical use.

[00:19:02] If you don't know how it works, you can just blissfully put personally identifiable data in there and go, oh, sorry, I didn't realize.

[00:19:08] Right.

[00:19:08] So, people need to be educated and trusted and then that's when the good stuff happens.

[00:19:12] I guess I don't want people to just assume that, you know, that's someone else's job on the team or that, you know, it'll get caught because we've got this thing running, doing continuous monitoring.

[00:19:23] I'm sure it'll be fine.

[00:19:24] Like, you can't assume that and I keep harping on this concept that when it comes to responsible AI, we are all responsible.

[00:19:34] So, don't just dismiss it.

[00:19:36] Don't just assume that, you know, someone else has got it.

[00:19:39] If everyone is doing their part, then you're mitigating the weak links, which means you're mitigating, you know, the risk that something's going to go wrong.

[00:19:46] So, whatever phase of the product development lifecycle you're a part of, you should be thinking about those things.

[00:19:53] Absolutely.

[00:19:54] In most offices, there's a knife, you know, and the knife is used to cut birthday cakes.

[00:19:59] But everybody knows the knife is there for birthday cakes and you shouldn't use it to cut the seats up or carve your name in the desk.

[00:20:06] People know that, you know, so I sort of draw the same.

[00:20:09] I used to have a much worse analogy than that, Bob.

[00:20:11] But, you know, I'm working on my analogy. But, you know, knowing what a knife is there for, its purpose, and how you could use it badly is really, really useful in making sure that you don't use it inappropriately.

[00:20:22] And it's just the same with Gen AI.

[00:20:24] We just need to know how it works, what it's for and what the invitation is, and we will be more ethical.

[00:20:30] Yeah.

[00:20:31] Do you see an increase in organizations that are actually starting, like, compliance training in the sense that, well, you know, we've got compliance training, whether you're onboarding or annual or whatever the frequency is.

[00:20:47] You know, we had it for, you know, harassment and basic don't be a jerk policies.

[00:20:53] And then you had a module for, you know, data privacy and you had a module for cybersecurity.

[00:20:59] And so as technology has evolved, you've had to put that in place.

[00:21:03] I'm not saying that solves everything.

[00:21:06] I'm just saying at least whet people's appetite with, oh, I hadn't really actually thought about that, especially with like the scenario-based, you know, training where they've forced you to watch these cringeworthy, you know, videos of, you know, harassment and stuff like that.

[00:21:22] But you've got to make it hit home, right?

[00:21:25] You can't have it be so far-fetched that people go, that would never happen.

[00:21:28] That's ridiculous.

[00:21:29] You've got to have, you know, sort of tangible, realistic scenarios where this isn't about you maliciously, you know, just grabbing data and being a bad sort of corporate citizen.

[00:21:41] The point is, you could just inadvertently, you know, share something through a conversation with one of these, you know, agents or co-pilots and not realize.

[00:21:49] So do you see people moving in that direction?

[00:21:52] Because my stance is, it seems inevitable.

[00:21:55] I don't have much sight of it, because over in the UK, where most of my customer base is, there's usually a responsible use policy and there's a bit of training.

[00:22:06] Like the first bit of training I saw was a year ago and it was a US organization.

[00:22:09] And they did this great like kind of situational judgment test, which was about a tsunami alert.

[00:22:17] And should you put the thing in, you know, so it gave you this moral dilemma.

[00:22:21] Should I put the data into ChatGPT or should I, you know, do X, Y, Z?

[00:22:26] And it was, it was really thought provoking.

[00:22:28] And I thought, wow, it's really, you know, fast out the gate.

[00:22:30] I haven't seen anything else at all. But for my money, whatever training there is,

[00:22:37] none of it is as powerful as a line manager looking you in the eye and saying, this is a great piece of work.

[00:22:42] Explain to me, or please help me to understand, how you used it ethically and responsibly.

[00:22:47] How you managed bias, how you managed equality, and help me to understand why this is a great piece of work, you know?

[00:22:52] Yeah.

[00:22:52] So all the training in the world doesn't count for anything unless it's brought to life by teams.

[00:22:59] Yeah, this is a full team effort and we can all, you know, learn, hopefully not learn some of these lessons the hard way.

[00:23:06] But I think just to get ahead of it and really understand, it just seems like that should be part of training.

[00:23:12] At a minimum, it should be part of the training for anyone who's even thinking about building something, which leads to my next question,

[00:23:19] which is like we're moving past the point, or we have moved past the point, where it's only software engineers and software developers who are actually, you know, building things.

[00:23:28] I mean, you could say that started even before gen AI with some of these no code, low code, you know, platforms.

[00:23:34] But even those have evolved significantly with the advent of generative AI.

[00:23:39] So the average person is not just a user of these tools now.

[00:23:43] The average person could technically go in and build a custom GPT and, you know, go on Amazon PartyRock and create, you know, some little hack or whatever.

[00:23:53] We all kind of went through this with IBM Watson as well, as IBM was upskilling the entire organization and their business partners on what cognitive computing, or what we used to call, you know, sort of AI, was actually capable of.

[00:24:11] And so just like, you know, cloud or social media or whatever, you can't really just give everyone access to these very powerful, in some cases, very expensive tools and expect them to just know exactly what to do and how to do it right.

[00:24:26] And so I guess I just wonder if there's any kind of, not compliance training, like from an HR policy kind of standpoint to protect the organization, but more training in this domain.

[00:24:41] Let's make sure you're not going to break stuff before we actually give you permission to create your own, you know, agent or something like that.

[00:24:50] Do you see that happening?

[00:24:52] It's a sort of product versus homemade thing, you know. It's a really interesting space at the moment. Like, about six months ago, you know, an old colleague of mine says, I've got this great idea for an app.

[00:25:03] And I'm like, right.

[00:25:05] And he's like, yeah, it does this, it does that.

[00:25:06] It's a journaling app.

[00:25:07] And I'm like, apps are kind of dead because people are just building their own apps now, you know.

[00:25:13] So there's this sort of kind of paradigm, this sort of framework where we go, I've got this idea for this thing.

[00:25:18] And the flip side of that is like, maybe if I'm an organization, if I'm a buyer in people function or procurement or whatever, vendors are coming to me and they're going, hi, here's an app that does this job for you.

[00:25:28] I might buy it,

[00:25:29] if I didn't know I could actually have someone on my team make it.

[00:25:32] So we've got to build our awareness of what stuff we could be making, to sort of judge whether we actually need a proprietary product or not.

[00:25:42] But people need to be doing it safely.

[00:25:45] My favorite tool du jour at the moment is Claude, the Artifacts feature.

[00:25:50] So I'm sort of dazzling people by saying, look, I can create a website straight away.

[00:25:54] And in five minutes, you create a working game of Space Invaders.

[00:25:56] It's a real website.

[00:25:57] And people are like, no way.

[00:25:58] And I say, right, let's do the real thing, which is, you know, maybe all the line managers in your business don't know how to give feedback.

[00:26:04] I'm going to create a situational judgment test quiz, which is a website they can log on to, or just click on, and it asks them a bunch of questions and gives us some feedback on their style.

[00:26:13] So we can be starting.

[00:26:15] We can make those things.

[00:26:16] But we always have to have quality control, which I think is like the kind of golden thread in your question.

[00:26:23] Yes, let's make really cool stuff.

[00:26:27] And let's keep an eye on quality.

[00:26:28] But teams need to be managing their own quality.

[00:26:30] We don't want to be worried about mavericks creating garbage content.

[00:26:36] One of the very first sort of internal cloud deployments.

[00:26:40] I don't even think we called it that.

[00:26:41] That's how long ago it was.

[00:26:43] But it was basically a giant sandbox for anyone with any kind of hack.

[00:26:49] It could be, you know, alpha-level code.

[00:26:53] You know, they had the appropriate, you know, warnings if you wanted to go and install these extensions to Sametime, which was their unified communications platform way before Zoom.

[00:27:06] They would install extensions so you could see, you know, the time zone of all the participants in your conference call, or you could see who was immediately available

[00:27:12] if you wanted to just, you know, bounce around an idea or get an answer to a question or whatever.

[00:27:17] So some of the tools that we're fascinated by were available even back then.

[00:27:22] But the point is the environment, you had a safe environment where you could build things, and you were leveraging not just, you know, corporate policy, but you were almost dependent on the crowd to give you the type of feedback that says, you know, this does not integrate with Linux or this does not work well on, you know, Mac.

[00:27:45] Or if you have this installed, it's going to cause some resource, you know, conflict or whatever it was, right?

[00:27:52] Like that was the point of having stuff in that kind of environment.

[00:27:55] And so I guess I wonder if companies are setting up the same thing because otherwise you're going to have these, you know, rogue sort of agents all over the place just grabbing data and it's not well, you know, sort of orchestrated or controlled or whatever.

[00:28:13] But I definitely take your point as far as, you know, building a full-fledged app.

[00:28:19] You've got to now question the sort of longevity and sustainability of that in the age of AI.

[00:28:26] Oh, absolutely.

[00:28:27] I mean, you know, my friend who was sort of building this app, sort of built it.

[00:28:32] And then a week later, Apple went, hi, there's an AI journaling app on iOS.

[00:28:35] And he was like, oh yeah, I see what you mean.

[00:28:38] So, you know, it got washed away by Apple anyway.

[00:28:41] But yeah, like, you know, as I was sort of explaining before we started, I started learning how to be a paraglider a couple of years ago.

[00:28:49] And when you're on the hill environment, the rules are you can intervene.

[00:28:53] You can talk to people.

[00:28:54] You can say, excuse me, I see you're not wearing a helmet or that doesn't look safe.

[00:28:58] Or I noticed you landing a bit strangely.

[00:29:00] You know, here's a bit of advice next time.

[00:29:02] So the rules, the social rules in that setting are we all care about safety and therefore our behaviors are in line with that.

[00:29:11] You know, and I think we can't ignore the social side of this, you know, the AI sandbox.

[00:29:17] Everybody, it sounds like everybody knew the rules of the game.

[00:29:20] And as you said, people were kind of saying to each other, that's not going to work.

[00:29:23] That is going to work.

[00:29:24] And they were kind of helping each other as a collaborative effort.

[00:29:27] So that's going to be big.

[00:29:28] I think, you know, the three things that are going to be really big are the technology, obviously; people having to coach each other all the time on this stuff,

[00:29:37] so that kind of coaching culture; and also that kind of social learning and kind of social innovation thing.

[00:29:44] That's how you're going to get the goodness.

[00:29:46] So that's probably what helped the IBM products to flourish, I'm imagining.

[00:29:53] Yeah, for sure.

[00:29:54] I mean, and then just to be able to have access to sort of pre-release stuff, right?

[00:29:59] So Lotus, if people remember Lotus.

[00:30:02] Yeah.

[00:30:03] Oh, my goodness.

[00:30:03] Lotus 1, 2, 3.

[00:30:04] Exactly.

[00:30:05] As they evolved, I mean, they took advantage of this platform too, right?

[00:30:09] So we're about to go out with our next release of Lotus Notes or Lotus Sametime; 1-2-3, I think, was long dead at that point.

[00:30:16] They had replaced it with another productivity suite.

[00:30:19] I forget what that was called, but yes, we can put this out to a certain subset of our customers as a beta release and get their feedback.

[00:30:27] But I mean, talk about a captive audience, a big, both technical and non-technical audience.

[00:30:34] Why would we not take advantage of, you know, tens of thousands of IBMers who can kick the tires and will not be shy about providing feedback because they've got a vested interest in these products succeeding in the market.

[00:30:48] So it's not just a selfish, oh my God, look at me.

[00:30:51] I'm an early adopter.

[00:30:53] I've got early access to these tools and, you know, they're the best thing since sliced bread or whatever.

[00:30:59] But you also had IBM research saying, hey, I built this working prototype.

[00:31:04] I built it based on, you know, a patent that I filed or customers that I spoke to who said, oh, I wish there was something to do this or that.

[00:31:12] And now we have a preview of that.

[00:31:14] So everything from like, you know, social network analytics to, you know, Chrome extensions that scrape stuff and summarize stuff, whatever.

[00:31:21] So I had a pretty significant, you know, toy box to play with.

[00:31:27] And of course, the downside is when you leave IBM, all your toys are taken away.

[00:31:31] In my mind, I'm going to sort of think about the IBM example.

[00:31:34] It's like you were given a box of Lego pieces and, you know, invited to build something.

[00:31:39] The way that I view Gen AI tools, you know, and I think Copilot's a bit like that, you know, but ChatGPT is like being given a lump of clay and told, shape something, whatever you want.

[00:31:50] And you're like, oh, do I need an ashtray?

[00:31:52] Oh, no, I don't smoke, you know.

[00:31:53] So things have shifted on.

[00:31:55] So OpenAI had basically gone, we've made this unbelievable tool.

[00:31:58] We've no idea really what it's capable of in terms of use cases.

[00:32:01] Have a go.

[00:32:03] And that's just the most exciting thing ever.

[00:32:05] I love it.

[00:32:06] I love my job.

[00:32:07] Oh, that's fantastic.

[00:32:07] I wanted to circle back just on the sort of human-centric, you know, piece of this.

[00:32:13] You talked about the social aspects.

[00:32:14] We talked about behavior.

[00:32:16] But just leaning in a little bit to human centricity and like the cultural aspects of this sort of transformation.

[00:32:24] Talk to me a little bit about the five Cs that we've spoken about before and how important those are.

[00:32:30] Okay.

[00:32:30] So let me see if I can do it from memory.

[00:32:32] So we've got communication.

[00:32:34] So, you know, and that could be, you know, that starts with what's our organizational stance on generative AI.

[00:32:41] And, you know, so my advice, if you're a leader of an organization, is if you haven't written that down, if it's not known by your colleagues, just write something down, you know, along the lines of...

[00:32:50] Hi, I'm Steven Rothberg.

[00:32:52] And I'm Jeanette Leeds.

[00:32:54] And together, we're the co-hosts of the High Volume Hiring Podcast.

[00:32:58] Are you involved in hiring dozens or even hundreds of employees a year?

[00:33:01] If so, you know that the typical sourcing tools, tactics, and strategies, they just don't scale.

[00:33:07] Yeah.

[00:33:08] Our biweekly podcast features news, tips, case studies, and interviews with the world's leading experts about the good, the bad, and the ugly when it comes to high volume hiring.

[00:33:19] Make sure to subscribe today.

[00:33:21] We're interested.

[00:33:22] We're excited.

[00:33:22] We're investigating it.

[00:33:23] And we'll keep you posted.

[00:33:24] You know, just that to get going.

[00:33:27] Then you've got culture.

[00:33:29] Culture.

[00:33:29] So be thinking with your cultural toolkit about the things that you would do with other stuff.

[00:33:37] Like, so if you need an organization which is using Gen AI well,

[00:33:41] then there needs to be positive culture.

[00:33:44] There needs to be a learning mindset.

[00:33:46] There needs to be people giving each other rich feedback.

[00:33:48] There needs to be higher trust.

[00:33:50] The third C is about capability.

[00:33:52] You know, so by that, what I mean is AI literacy and AI fluency.

[00:33:58] Everyone talks about AI literacy.

[00:34:01] It's rare to find somebody who's defined it well.

[00:34:04] But we define literacy as basics.

[00:34:07] You know, do I understand what AI is?

[00:34:09] Real world application.

[00:34:10] Am I using it for real stuff?

[00:34:12] Accountable use.

[00:34:12] Am I using it responsibly?

[00:34:14] Innovation.

[00:34:15] Once I've got through productivity and, you know, efficiency, am I now using it to solve new problems?

[00:34:21] And then the last bit is the navigation piece.

[00:34:23] So it's not like Excel where you just learn a trick and that's there forever.

[00:34:27] However, you need to stay open-minded and that's a whole kind of social thing.

[00:34:30] So that's the literacy bit.

[00:34:33] And the fluency is how to talk to AI, which is different from literacy.

[00:34:37] Anyway, so that's the capability piece.

[00:34:40] And I'm still tinkering with Cs four and five, but I'll go for my number four.

[00:34:44] The fifth one, the jury's not in yet.

[00:34:46] But the fourth C is compassion.

[00:34:49] You know, anyone who's led any kind of change program, especially, you know, like so 10 years ago, I was helping organizations to implement D365.

[00:34:57] There was always an absence of compassion.

[00:34:59] You get a bone crushing chief technology officer in who's done it before.

[00:35:03] They know what to do and they kind of bash through and implement.

[00:35:06] That's not going to work, because in our research and in our immersions, we're coming up against these really gentle, nuanced human things like guilt: you know, using ChatGPT or Copilot is cheating.

[00:35:18] I don't want to do that.

[00:35:20] I want to do the hard work. Or, I wrote this myself, and it doesn't feel authentic.

[00:35:24] So those kind of like nuanced, sophisticated human things need compassion if you're going to lead the change.

[00:35:31] So I'm going to stop at four Cs.

[00:35:33] So don't overload.

[00:35:33] So we've got communication, culture, capability and compassion.

[00:35:38] Those are enough for a leader to really get their head around things and start making the good stuff happen.

[00:35:43] Yeah, I think that's fantastic.

[00:35:45] Yeah, because we need to maintain empathy.

[00:35:48] And I think that's core to that compassion.

[00:35:52] And we've got to really lean into some of these innately human skills.

[00:35:58] Right.

[00:35:58] Because AI is going to continue to do more of the technical tasks and things like that.

[00:36:04] But moving to the sort of augmented intelligence is where, you know, you take the best of human thought and creativity and critical thinking and empathy, of course, and combine that with the increasingly powerful reasoning capability.

[00:36:20] I know you've been using ChatGPT o1.

[00:36:25] Oh, yeah.

[00:36:26] And it sounds like you've been pretty impressed with that.

[00:36:29] But yeah, how do you take the best of humanity and the best of, you know, what ethical AI and responsible AI can do?

[00:36:38] And that's a really potent combination.

[00:36:40] It's really potent.

[00:36:42] And, you know, as a full-on evangelist, you know, my family banned me from talking about it around the kitchen table.

[00:36:48] I have to remember that there are some people who've never even heard of Gen AI, you know. And, you know, fast forward: the problem statement I put in front of senior people is, in a year's time, you've got a junior manager called Sandra.

[00:37:04] And she's got seven people in her team.

[00:37:06] Two people have used Copilot and they have absolutely smashed their targets.

[00:37:09] Two people haven't used Copilot, which you've given them, at all, because they're worried about it.

[00:37:14] And their performance is massively different.

[00:37:16] How is that manager going to compassionately manage performance in that scenario?

[00:37:22] If you don't know the answer, then you need a plan because it's coming.

[00:37:26] So, you know, compassion needs to be a big part of it.

[00:37:29] So back to the communication, the culture, the capability and the compassion: it's thinking about those real-life examples that really helps you to go,

[00:37:38] Yeah, I can't just allow this to be organic.

[00:37:41] No, that makes total sense.

[00:37:42] Yes. So, ChatGPT o1 is your favorite tool right now?

[00:37:47] Yeah.

[00:37:48] Yeah.

[00:37:48] I'm worried about talking about tools because it sort of dates this podcast, right?

[00:37:52] You know, people go, o1, that's so four weeks ago, you know.

[00:37:55] But what I love about o1 is that it sort of demonstrates its reasoning.

[00:38:00] And what was it I was doing with it yesterday?

[00:38:02] I was getting it to write multiple job descriptions.

[00:38:04] It was just blasting it out.

[00:38:06] Whereas 4, you know, sort of forgets what it's doing and gets a bit muddled sometimes if you're in a long sort of prompting chain.

[00:38:12] And o1, or 1-0, whatever it's called, was just absolutely cutting through the work.

[00:38:17] It's just fantastic.

[00:38:18] And, you know, I think that OpenAI, they're kind of like the Coca-Cola of the AI world, right?

[00:38:26] They just always seem to be one step ahead.

[00:38:29] And when the voice comes out at the end of fall, note my American use of words.

[00:38:34] I didn't say autumn.

[00:38:36] When the new voice comes out at the end of the fall, it's just going to absolutely smash everything else out of the water.

[00:38:41] I think just to your point about, you know, the timeliness of some of the tools.

[00:38:46] I mean, I do think things are moving rapidly.

[00:38:50] And, you know, we may have talked about this before.

[00:38:52] It just seemed like things are moving so fast that you're right.

[00:38:55] Even, you know, three months from now, we're going to be like, oh, my God, that was like archaic.

[00:39:01] Like that stuff, right?

[00:39:02] Oh, I can't believe I was cutting and pasting.

[00:39:04] Right, right.

[00:39:05] Well, I do think, you know, we're still early days of some of the, like, agentic workflows and the interoperability of some of those things.

[00:39:14] And from my perspective, not only is that incredibly powerful to think about, especially for anyone who does a lot of, like, context switching and task switching

[00:39:27] and is going back and forth between different, you know, applications, even if it's just for, you know, 10 seconds at a time.

[00:39:33] I was literally doing that this morning.

[00:39:35] It's not just that.

[00:39:36] I also think about it with my sort of responsible AI head on as well, because that almost itself necessitates this need for continuous monitoring of how, you know, data and algorithms are treating and managing all the data as it traverses, you know, behind the scenes across these agents and copilots and things like that.

[00:39:59] So it'll be really interesting to see, you know, how this evolves and can the monitoring, can the governance, can those things, we know legislation can't keep up.

[00:40:07] But, yeah, can the software platforms and these, you know, AI providers actually keep up with their own stuff such that we do feel, you know, confident.

[00:40:23] We might be cautiously optimistic, but at least we're optimistic that these things will, you know, catch up and we don't have just complete, you know, chaos out there.

[00:40:34] Yeah, exactly. So I think from that perspective, Microsoft has got us as a captive audience.

[00:40:38] If they could just raise their game and get the apps talking to each other, and like, you know, I want to say to Copilot, hi, can you book me a meeting with Lisa next Thursday at four o'clock, or find Lisa's availability?

[00:40:49] You know, I just want to do that fire and forget kind of agent stuff.

[00:40:52] I think they'll get there, you know, but at the moment, you know, the data thing, you're so right as well.

[00:40:57] So I'm in chat GPT and, you know, somebody's built an agent that does a job, you know, maybe it finds me flights or something like that.

[00:41:04] So I'm interacting with somebody else's GPT.

[00:41:08] I put my data in. Where's that going?

[00:41:10] Is that going to the person who created the GPT or is it going to open AI?

[00:41:15] Is it ring fenced in my chat GPT account?

[00:41:19] It's a daily problem.

[00:41:21] It's a daily head scratcher.

[00:41:22] And I think you put your finger on something really important.

[00:41:24] Where's the data going?

[00:41:26] I think that that has prevented many companies and many people from getting started.

[00:41:33] Right.

[00:41:34] Like I refuse to get shiny object syndrome.

[00:41:39] I refuse to buy into the hype.

[00:41:43] And there is certainly a lot of hype and there's certainly a lot of shiny objects out there.

[00:41:48] But if you can look past that and see the potential, you know, we've gone through this before.

[00:41:55] Right.

[00:41:55] This isn't new. AI didn't just paraglide or parachute out of thin air from the hilltops.

[00:42:02] You know, this has been around for a while.

[00:42:04] We've seen it evolve.

[00:42:06] And, you know, because some of the things that you're talking about, I mean, conversational AI was around, you know, before Gen AI entered the public, you know, consciousness almost two years ago.

[00:42:18] So some of that stuff was possible.

[00:42:20] But certainly it took, you know, programming and it took logic.

[00:42:24] But, you know, there was machine learning.

[00:42:26] There was natural language, you know, understanding.

[00:42:28] Some people still cannot tell if, you know, an email was written by an AI, or whether they're having what they think is a live chat with an AI.

[00:42:39] What I expect is for companies to be transparent when they're doing so.

[00:42:44] Whether that's putting it in front of a candidate moving through the recruiting funnel or it's from customer service or what have you.

[00:42:55] Because I can tell you, I mean, I can tell.

[00:42:57] Well, I'm not saying, you know, perfectly and 100% of the time.

[00:43:01] But, you know, sometimes when you think it might be a non-human, you might be right.

[00:43:07] But you've got to be transparent.

[00:43:09] I've personally had experiences where they said they were connecting me with a live agent and it was just another, you know, non-human agent.

[00:43:17] Yeah.

[00:43:18] Right.

[00:43:18] Most of the time they're just logic tree machines, aren't they?

[00:43:21] But to actually get a true AI one, I've yet to get one as a consumer in the UK.

[00:43:25] Yeah.

[00:43:26] It was one of these food delivery services that I was trying to get support for like an active delivery.

[00:43:32] And it's like, there's no way.

[00:43:34] Like they responded within like half a second with like multiple sentences.

[00:43:40] I could tell the words they were using were very robotic, and it was not just, you know, sort of outsourced, you know, overseas.

[00:43:47] It was clearly, I mean, they were very short.

[00:43:50] They weren't very friendly.

[00:43:52] And they said too much in too short a period of time for it to be a human.

[00:43:58] So this is wrong.

[00:43:59] You clearly said that you were connecting me with a live agent and live should equal human being.

[00:44:04] Think about it.

[00:44:05] So that's the customer experience.

[00:44:06] Imagine the employee experience.

[00:44:08] Like imagine having a conversation with your line manager where you say, I'm finding this really difficult.

[00:44:11] And the line manager says, I've reached my capabilities as a synthetic manager.

[00:44:15] I'm going to refer you to a human being now.

[00:44:17] You know, like what?

[00:44:18] That's coming.

[00:44:19] That's almost exactly what it said.

[00:44:22] I have resolved this to the best of my ability or something like that.

[00:44:26] I'm like, there's no way that you are a human being.

[00:44:29] No way.

[00:44:29] So, yeah, transparency is key.

[00:44:31] And that's one of the principles underneath, you know, responsible AI, right?

[00:44:36] Because if you're going to build confidence, if you're going to get people comfortable with AI's evolution, you've got to at least be transparent with them.

[00:44:45] So.

[00:44:45] Explainable. And savvy organizations are asking for it.

[00:44:48] You know, so one of the big pharma, global pharma companies, if they hire a consultant, they say, please, can we see your explainable AI document?

[00:44:56] Please, can you tell us how much money you're saving us and how the quality is better and what you're doing with our data?

[00:45:01] Thank you very much.

[00:45:02] So, you know, it's coming.

[00:45:04] And the biggest sort of commercial players will be driving that.

[00:45:07] And over in the European Union, which is, you know, if you're in the US, if you're trading with the EU, you need to care about this.

[00:45:14] The EU AI Act is putting into legislation things like forcing organizations to make people AI literate, which I'm delighted with.

[00:45:23] And also forcing organizations to, by law, to publish their explainable AI documentation, which is a really good thing.

[00:45:31] But nobody knows what the hell to do about it.

[00:45:33] It's tough.

[00:45:33] But I was just commenting on LinkedIn yesterday.

[00:45:37] I mean, you can't just look at the EU AI Act from here across the pond and say, oh, wow, good luck with that.

[00:45:44] Like, this is relevant to you, whether you're, you know, based outside the EU, but with aspirations to enter that market.

[00:45:53] Or you have, you know, clients and partners who work in that space.

[00:45:57] And you're sharing data, which is now subject to those laws.

[00:46:01] Or even just as a bellwether to, you know, at least here in the US, we're going to think about potential, you know, federal policy or additional state level, you know, policy.

[00:46:10] Because right now, as I'm sure you know, Andrew, like, it's just a patchwork of random pieces of legislation at a municipal level, like New York City in my backyard, or at different, you know, state levels.

[00:46:25] But they're all different.

[00:46:27] In Illinois, for example, it's about, you know, video interviewing and having AI interpret, you know, facial expressions and drawing conclusions from that.

[00:46:35] But everything is a little bit different, which is why, obviously, it would make life a bit easier if there was a sort of federal, you know, policy for us.

[00:46:44] But if you can comply with the EU AI Act and understand what the implications are to that, then you'll probably be in good shape for whatever the US government comes up with.

[00:46:56] At least that's my stance right now.

[00:46:59] Fast forward 10 years and countries will be collaborating and aligning on this stuff.

[00:47:03] Yeah.

[00:47:03] Yeah.

[00:47:04] So, yeah, it will grow locally.

[00:47:06] I don't know how big the EU is.

[00:47:07] It's like 30-odd countries.

[00:47:09] But, you know, like there's going to be international agreements on this stuff.

[00:47:12] So why not just get ready for it?

[00:47:13] You know, do the right thing now.

[00:47:15] Be explainable.

[00:47:16] Be high quality.

[00:47:17] Look after data.

[00:47:18] And just like, if you're doing the right thing, you can't go wrong, can you?

[00:47:21] Right.

[00:47:22] Exactly.

[00:47:22] Right.

[00:47:23] Don't wait for legislation.

[00:47:24] I mean, responsible AI is not just about complying with existing legislation.

[00:47:29] Because, frankly, as important as some of the existing, you know, civil rights legislation is here in the US.

[00:47:37] You know, that's a pretty low bar when it comes to being ethical and responsible and fair.

[00:47:43] That's grim.

[00:47:45] Andrew, any other?

[00:47:46] I know we kind of covered all the elements that would help people elevate their AIQ.

[00:47:52] Any other thoughts in that regard in terms of, you know, upskilling and learning?

[00:47:56] I'll share my favorite prompt.

[00:47:58] That's what I like to leave people with. I'm always told by people, oh, AI is killing original thought.

[00:48:05] I'm like, you're just not using it right.

[00:48:06] So my favorite prompt is you are an investigative journalist.

[00:48:12] You're researching an article about XYZ.

[00:48:14] Please interview me and ask me 10 great questions about this topic that I care about in order to ghostwrite an article for me.

[00:48:20] It's like get AI doing the work.

[00:48:22] So, yeah, I say to it, you're a ghostwriter, interview me and then write this amazing article.

[00:48:27] And by the way, pay attention to my vocabulary and my tone of voice so that when it's written, it actually sounds like me.

[00:48:32] Done.

[00:48:33] Click.

[00:48:33] Five minutes.

[00:48:34] Move on.

[00:48:34] You know, you've done your LinkedIn for the day.

[00:48:36] So, yeah, that's my little nugget that I would love to share.
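[For reference, here is Andrew's ghostwriter prompt assembled into a single reusable template. This is a paraphrase of what he describes above rather than his verbatim wording, and the bracketed topic is a placeholder to fill in:

"You are an investigative journalist researching an article about [a topic I care about]. Interview me: ask me 10 great questions about this topic. Then ghostwrite the article for me, paying attention to my vocabulary and tone of voice so that it actually sounds like me."]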

[00:48:40] Love it.

[00:48:40] No, that's awesome.

[00:48:42] Yeah, a lot of the output is getting too sort of homogenized.

[00:48:47] Right.

[00:48:47] Because it's eating itself.

[00:48:48] Right.

[00:48:48] It's now learning from AI-generated data and it doesn't even know it.

[00:48:53] In this fast moving world.

[00:48:54] Yeah.

[00:48:54] Rocket emoji.

[00:48:55] Cheesy call to action.

[00:48:57] Bullet points.

[00:48:58] Organized paragraphs.

[00:48:59] You know, my other top tip is if you write something like that, just don't bother.

[00:49:03] Yeah.

[00:49:04] Do something original.

[00:49:06] Yeah.

[00:49:06] I mean, even as I write show notes and, you know, social posts for this show, you know, they all start to sound the same.

[00:49:14] So I'm editing more and more, it seems like.

[00:49:18] So I feel it.

[00:49:19] I feel it on a daily basis.

[00:49:20] So, Andrew, I really want to thank you very much for spending some time with me.

[00:49:25] I think there's a lot of insights to take away for my audience.

[00:49:29] And I personally want to thank you for all the work that Uptake AI is doing, because that's the whole reason I started this podcast: to get more people exposed and literate, so that we just amplify and build sort of champions and change agents and people who have sort of woken up to what's possible.

[00:49:52] And they, you know, help move us in a positive direction.

[00:49:55] So thank you.

[00:49:56] Onwards and upwards.

[00:49:57] Thank you.

[00:49:57] I appreciate that.

[00:49:58] I really appreciate you having me on your show.

[00:50:00] Absolutely.

[00:50:01] Absolutely.

[00:50:02] Thank you again.

[00:50:03] Good luck.

[00:50:04] And be safe out there with your paragliding.

[00:50:08] And glad you made it back to the show.

[00:50:10] Thanks again, Andrew.

[00:50:12] Thanks, everyone, for listening.