Bob has an engaging discussion with Gary Bolles, Chair for the Future of Work at Singularity University, author of the book “The Next Rules of Work: The Mindset, Skillset and Toolset to Lead Your Organization through Uncertainty”, keynote speaker, lecturer, and holder of many other roles. They talk about his extensive background, including the ‘family business’ and his endeavors related to human potential, career development, and, of course, the future of work. Bob and Gary discuss the challenge of finding the right balance between innovation and regulation. They cover the automation maturity curve and the need for individuals and organizations to adapt and transform in the face of exponential change. Gary emphasizes the importance of mindset and culture in driving successful transformation. The conversation explores the concept of collective intelligence and the combination of human intelligence and AI, touching on the importance of asking better questions and the evolving role of leaders in guiding teams. Gary highlights the need for organizations to embrace new models of social cohesion and for education systems to empower individual learning. The episode closes with a discussion of the responsible and fair use of AI and the need to develop AI literacy in education.

Keywords

future of work, AI, trends, regulation, innovation, automation, transformation, mindset, culture, collective intelligence, human intelligence, adaptability, growth, team metrics, cognitive diversity, AIQ, education, AI literacy

Takeaways

  • Exponential technologies like AI have the potential to bring both tremendous benefits and concerns.
  • It is crucial to have a proactive approach to regulation and scenario planning to anticipate the impacts of new technologies.
  • Automation is a continuum. Rather than focusing on job displacement, organizations should re-architect job roles and empower humans with technology.
  • Successful transformation requires a growth mindset and a culture of continuous learning and adaptation.
  • AI has the potential to augment collective intelligence by providing insights and sparking new ideas.
  • Leaders should focus on asking better questions and empowering individuals to find solutions.
  • Organizations should embrace new models of social cohesion to leverage collective intelligence.
  • Education systems need to shift towards empowering individual learning and teaching AI literacy.

Sound Bites

  • "Exponential technologies often follow an exponential curve and accelerate each other along that curve."
  • "The concern with exponential technologies is that they tend to create outsized players and monocultures."
  • "Automation doesn't replace jobs, it automates tasks. The decision to eliminate jobs is a human decision."
  • "So long as we help them to be able to continually adapt and grow, then you're going to be much more likely to have the organization have that capacity."
  • "We're still in very early days, especially with regenerative AI and large language models."
  • "We will define the organization by its level of social cohesion."

Chapters

00:00 Introduction and Background

05:20 Exploring the Intersection of Trends and the Impact of AI

08:52 Finding the Balance Between Innovation and Regulation

16:28 Navigating the Automation Maturity Curve

25:10 The Importance of Mindset and Culture in Transformation

29:08 Adaptability and Growth

31:46 Augmenting Collective Intelligence with AI

35:12 Asking Better Questions for Insights

44:00 Rethinking Organizational Models

51:11 Empowering Individual Learning


Gary Bolles: https://www.linkedin.com/in/gbolles

The Next Rules of Work: https://www.koganpage.com/general-business-interest/the-next-rules-of-work-9781398601635

LinkedIn Learning: https://www.linkedin.com/learning/instructors/gary-bolles

Singularity University: https://su.org


For advisory work and marketing inquiries:

Bob Pulver: https://linkedin.com/in/bobpulver

Elevate Your AIQ: https://elevateyouraiq.com

Powered by the WRKdefined Podcast Network. 

[00:00:00] Feeling kind of left out at work on Monday morning? Check out The BARF: Breaking News, Acquisitions, Research, and Funding. It's a look back at the week that was, so you can prepare for the week that is. Subscribe on your favorite podcast app.

[00:00:25] Hey, it's Bob Pulver. In this episode of Elevate Your AIQ, I sat down with Gary Bolles, Global Leader of the Future of Work Initiatives at Singularity University, and author of the new book, The Next Rules of Work.

[00:00:36] Naturally, we discussed the future of work, augmented by AI, and how the core principles of his book, mindset, skillset, and toolset, happen to align very nicely with AIQ.

[00:00:49] Gary shares his insights on the impact of AI on various industries and the challenges of regulating this rapidly evolving technology.

[00:00:55] We explore the need for a balance between innovation and regulation, emphasizing the importance of proactive approaches and scenario planning.

[00:01:03] Enjoy this insightful episode with a world-renowned thought leader helping us reimagine the evolution of work. Thanks for listening.

[00:01:12] Hi, everyone. Welcome to another episode of Elevate Your AIQ. I'm your host, Bob Pulver.

[00:01:17] Today, I have the pleasure of talking to Mr. Gary Bolles. Gary, welcome to the show.

[00:01:23] Thank you. I've been looking forward to the conversation.

[00:01:26] Yeah, likewise. I probably can't do your intro justice, so I thought I would let you talk about your books and your LinkedIn Learning courses and your work at Singularity University, and just some of your background, how you got to this point.

[00:01:40] Absolutely. This is a great opportunity to catch up and explore some different ideas.

[00:01:46] So I have a number of hats that I spin constantly.

[00:01:51] I'm the author of a book called The Next Rules of Work, Mindset, Skill Set, and Toolset to Lead Your Organization Through Uncertainty.

[00:01:59] I have served for quite some time as the chair for the future of work with Singularity University.

[00:02:04] We're sort of mixing things up a little bit, and they've asked me to focus on issues related to transformation, which encompasses changes as individuals, as organizations, and systemically.

[00:02:18] So that's a great opportunity for me to expand my mandate.

[00:02:24] I have courses on LinkedIn Learning with one and a half million learners, courses like Learning Mindset, Learning Agility, and Leading Change.

[00:02:35] And I'll be updating those first two courses for AI.

[00:02:40] So what is learning mindset and learning agility in the age of AI?

[00:02:45] I helped to catalyze a group called Next CoLabs, which is an international group of innovators that all come together to workshop with the new AI tools and help organizations to be able to navigate that process.

[00:02:58] And then I have a small but mighty software company called eParachute.

[00:03:04] And it's built from the knowledge of a book called What Color Is Your Parachute? The World's Enduring Career Manual.

[00:03:10] Because I was trained as a career counselor when I was 19.

[00:03:13] So I've had a deep foot in the whole process of how each of us as individuals continually goes through career and life change.

[00:03:22] So it's a wide ranging portfolio.

[00:03:25] I lecture around the world through my consulting company, Charette.

[00:03:30] And it gives me an opportunity to be able to hear so many different perspectives from people.

[00:03:35] And I typically sort of organize those into individuals, organizations, communities, and countries.

[00:03:42] Wow.

[00:03:43] That is more hats than I can even count.

[00:03:47] So, just as a bit of history for listeners, eParachute was sort of derived from your father's work writing that book, right?

[00:04:00] Yeah.

[00:04:01] That's amazing.

[00:04:02] So is your parachute multiple colors or how did that work out?

[00:04:07] Yeah, that's a good question.

[00:04:08] So the irony of these things is that I actually barely escaped high school.

[00:04:14] I was never very much interested in college.

[00:04:16] And so I did a variety of odd jobs.

[00:04:18] And when your father's becoming the world's career counselor, you can imagine how that might have, you know, sort of catalyzed a conversation or two.

[00:04:28] And so I sort of fell into the family business when I was 19 and was trained as a career counselor.

[00:04:33] But I found what my passion was, was high tech.

[00:04:37] And then in the early 1980s, there was this brand new thing called the IBM PC.

[00:04:43] And I moved to Silicon Valley thinking, oh, you know, we've probably done everything you possibly could do with a computer.

[00:04:49] But why don't I just sort of check it out?

[00:04:51] And suffice it to say, the rest is history.

[00:04:53] And the rest is a whole bunch of phases of history that I've had the great opportunity to be able to be part of.

[00:05:00] Yeah, that's amazing.

[00:05:01] And so I think back to some of the early sort of comical statements at this point about the future of computers, right?

[00:05:09] Wasn't it the head of DEC?

[00:05:11] Yeah.

[00:05:12] Yeah, that said, who would need a computer, right?

[00:05:15] Let alone in their pocket.

[00:05:17] So, yeah, it's fascinating, the trajectory.

[00:05:20] One of the things that I also think about is, this is a really, really important chapter in your LinkedIn learning course around, you know, leadership for the future of work around exponential change.

[00:05:33] And I think a lot about the intersection of trends.

[00:05:39] So not just AI, but, you know, AI plus things that came before it or AI plus, you know, robotics, AI plus blockchain.

[00:05:48] Is there any particular intersection of trends that you see that are particularly intriguing to you or concerning to you or that will, that may trigger the next sort of exponential change?

[00:06:02] I mean, I know we're still on the first inning of AI, particularly, you know, generative AI.

[00:06:06] Yeah.

[00:06:06] So I'd say there's a number of intersections, as I often call them, of exponential technologies that are both tremendously inspiring and have a number of concerns associated with them.

[00:06:17] So exponential technologies give it and they take it away.

[00:06:21] There's always people that benefit and there's people that are disrupted, one of my least favorite Silicon Valley terms.

[00:06:29] So there's no question that generative AI, large language models, and even just the basic machine learning algorithms that have been around for quite some time have reached an inflection point where we suddenly find applications for them that we'd never thought of before.

[00:06:45] And one of the great insights that Ray Kurzweil, our co-founder at Singularity University, came up with years ago is that not only do each of these technologies often follow an exponential curve that is doubling in performance in a really short period of time and dropping by more than half in price, like originally with microprocessors, Moore's Law.

[00:07:07] Not only do many of these technologies follow that curve, but they accelerate each other along that curve.
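To make that dynamic concrete, here is a back-of-the-envelope sketch of the compounding pattern Gary describes, performance doubling while price halves each cycle; the ten-cycle horizon and the starting values are illustrative assumptions, not figures from the episode.

```python
# A toy illustration of the exponential dynamic described above: performance
# roughly doubles and price roughly halves each cycle (the Moore's Law
# pattern). The cycle count and starting values are hypothetical.
performance, price = 1.0, 1.0
for cycle in range(1, 11):
    performance *= 2  # performance doubles each cycle
    price /= 2        # price drops by half each cycle
    print(f"cycle {cycle:2d}: {performance:6.0f}x performance at {price:.4f}x the price")
```

After ten doublings the sketch shows roughly a 1,024x performance gain at about a thousandth of the original price, which is the scale of change Kurzweil's observation points to.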

[00:07:13] And if there's anything, any more recent technology that we can point to, then there's no question that a lot of advances that large language models and generative AI can catalyze are going to have profound impacts on a variety of different arenas.

[00:07:28] So for instance, you look at protein folding and the whole idea to be able to understand the dynamics of proteins so that they can be used in a variety of different solutions for human health problems.

[00:07:40] If you look at the process of trying to figure out how we're going to have enough food to feed 11 billion people, there's so many different arenas where those technologies, especially generative AI, will help a wide variety of people.

[00:07:55] It so democratizes the process of helping so many people to be able to solve problems that it's actually one of these really transformative inflection points.

[00:08:04] But it's also concerning because what's happened in many cases with exponential technologies is that the organizations, the companies, the innovators that create those technologies tend to be the ones that win.

[00:08:19] And so we get these outsized players that dominate the technology.

[00:08:24] There's six companies, only six companies that have invested in the top 50 generative AI startups.

[00:08:33] And so what that does is it creates these monocultures.

[00:08:36] There's one Google, one Facebook, one Amazon, one, you know, you just keep on going.

[00:08:40] And so that is one of the potential concerns is that these technologies, the power remains in a few hands when instead what we really want is to have really broadly distributed benefit from those technologies so that many people can continue to innovate.

[00:08:56] The regulations around some of the technologies, especially the ones we're seeing today, always seem like they're, you know, injected too late in the process.

[00:09:07] Right. And then we get into these debates of how much do we regulate without stifling innovation and how do you find that balance?

[00:09:14] And so I don't know.

[00:09:16] It's hard, right, because everyone's trying to protect their intellectual property and protect everything that they're working on in R&D.

[00:09:23] And they can just claim, look, we're just we're tinkering, we're experimenting, we're trying different things.

[00:09:30] So I don't need you coming here to supervise.

[00:09:33] I mean, I think when I got to IBM, I'll date myself, in '95.

[00:09:38] I got there right around when Lou Gerstner got there.

[00:09:42] And I remember hearing stories about him touring the research labs.

[00:09:46] And he's just like, why?

[00:09:48] Why don't people know about this?

[00:09:50] These are like, you know, your jewels here.

[00:09:53] And so I think there was a mentality like, look, we're going to be insular in that way to protect ourselves.

[00:10:00] But it almost seems like we'd be better off preemptively thinking about how these technologies might play out.

[00:10:09] And then we're going to do real scenario planning and then figure out where maybe legislation could come in early to say, look, let's make sure we're focused on the right use cases.

[00:10:19] Let's make sure we're understanding how someone might misuse this technology and start to frame it that way so that people don't go off on the wrong path.

[00:10:28] And then we can continue to innovate.

[00:10:31] And then I guess part of the whole open source movement was around how do we do this transparently and drive innovation?

[00:10:39] But what you were talking about there really kind of struck right at the core of that.

[00:10:45] How do we do this?

[00:10:46] Because otherwise you're going to get whiplash going back and forth.

[00:10:50] So I have the opportunity, great gift to be able to lecture around the world.

[00:10:55] And I was just recently lecturing to a group of legislators, people in the legal system of Brazil, in a city called Belém, sort of the gateway to the Amazon region.

[00:11:06] And I was talking to these people that were part of the legal system in the country.

[00:11:11] And I said, look, regulation is essentially just two things.

[00:11:16] It's either oil or glue.

[00:11:19] It's either lubricant or friction.

[00:11:22] It's either we want to accelerate and encourage you to do these things, or we want to discourage you and stop you from doing these things, right?

[00:11:29] So it's either lubricant or friction.

[00:11:32] And what happens in so many cases is countries have these layers after layers of laws and regulations and rules that are layers of glue and oil.

[00:11:42] And it's often very difficult to see what you're trying to encourage or discourage, right?

[00:11:46] And so along comes a new exponential technology.

[00:11:50] And typically, most legal systems are trying to do two things.

[00:11:53] They're trying to have lubricant.

[00:11:56] They're trying to encourage innovation and free markets.

[00:12:01] And they're trying to discourage monopolies and negative impacts on societies.

[00:12:08] And so what ends up happening is legislation almost always legislates the past.

[00:12:15] That is, the technologies move so quickly.

[00:12:18] Very few people outside of Ray Kurzweil and a few other brilliant minds were talking about this inflection point coming where there was going to be a big leap forward with AI software.

[00:12:29] Of course, if we had all been proactive, as you say, we'd been thinking about these use cases.

[00:12:33] If the legal systems in countries around the world had all anticipated, had the foresight to anticipate, that these technologies could happen,

[00:12:42] and therefore were thinking about the ways that we want to be able to create that balance of lubricant and friction,

[00:12:48] then we would have had the process in place to be able to respond and to be able to determine how are we going to be able to design this combination of what we want to encourage or what we want to discourage.

[00:13:01] The challenge with these technologies is that not only do they have these broad uses in so many different arenas that are very, very hard to predict,

[00:13:11] but they also affect markets in ways that don't meet the traditional definitions of markets.

[00:13:21] In the United States, one of the greatest things that we care about with anything that we try to regulate when it relates to capitalism and markets is consumer harm.

[00:13:31] Well, if Amazon keeps getting you the cheapest thing for the cheapest price, where is the consumer harm?

[00:13:38] Well, you can say your harm is because you don't have access to all these other platforms that you can go to.

[00:13:44] Well, you do have access, it's just you don't do that.

[00:13:46] So you create these de facto monopolies, and what ends up happening then is we don't yet have the language,

[00:13:52] the process to be able to determine what are the benefits and what are the negative aspects of impact on society and anticipate those ahead of time.

[00:14:00] So that's really what I think.

[00:14:02] If you look at the EU, you look at certain approaches around the world to try to get a process in place, as you were saying, to scenario plan.

[00:14:11] The country of Greece, for instance, is putting in a new function that's focused on foresight.

[00:14:15] And so how can you have all these agencies that are part of this process of trying to understand that exponential technologies are not going to slow down?

[00:14:23] That was one of Ray Kurzweil's greatest insights is it's just only going to accelerate.

[00:14:27] And so if we can put the kinds of processes in place to be able to coordinate how we want our markets to work, how we want our societies to work,

[00:14:35] then we can actually do better regulations and we can anticipate how we get that balance better.

[00:14:41] Yeah, no, that makes sense.

[00:14:43] And I know you've got a lot of stakeholders in a lot of these scenarios, right?

[00:14:48] So in your example with Amazon, I mean, yes, I love having things available where at this point things can be delivered in, you know, an hour.

[00:14:57] Some of these services.

[00:14:59] And yes, of course, that's convenient.

[00:15:00] But if I was...

[00:15:02] If you like swiping, then head over to Substack and search up WRKdefined.

[00:15:07] W-R-K defined and subscribe to the weekly newsletter.

[00:15:12] Competing with Amazon, if I had a small business, then I might not think that this is a very good idea at all.

[00:15:19] And we see that kind of example across industries, certainly.

[00:15:22] I'm going to kind of switch gears a little bit, get a little bit more grounded.

[00:15:30] I think a lot about what I'll call an automation maturity curve, right?

[00:15:34] So I think you would put this in that four box kind of matrix, right?

[00:15:39] Like the rote tasks, you know, in the bottom left, and then moving up to, you know, what we think of as more human tasks,

[00:15:45] more cognitive tasks that we thought only humans could handle.

[00:15:51] But I feel like there's a lot of back and forth around a work environment and the potential for people to be concerned about, you know, job displacement, etc.

[00:16:03] And we talk about productivity.

[00:16:04] We've got to scenario plan and think about what technology or, you know, digital labor, however you want to think about it, what tasks could be taken off our plate.

[00:16:13] You can't just look at, oh, well, look, it can schedule meetings automatically now.

[00:16:18] I don't need an assistant.

[00:16:20] Well, okay, that's one immediate, you know, tactical thing to think about.

[00:16:24] But now you've got to start to think about the trajectory of this and what happens when you string, you know, multiple automatable, you know, tasks together.

[00:16:32] At some point, if you've reached some threshold and that threshold could be different for every company or every industry or every role.

[00:16:40] At some point, when you reach a threshold, your leadership is going to be like, well, I can start thinking about that.

[00:16:47] I don't need, you know, 80 people in my call center.

[00:16:50] Maybe I only need, you know, 40.

[00:16:52] But then there's, you know, of course, the inevitable decision.

[00:16:56] Do I actually let go of half my staff or do I understand enough about what they're capable of doing mapped to the trajectory of those technological advancements, as well as all the other work that we know we would have loved to do if only there was more bandwidth.

[00:17:14] Now, now you have it.

[00:17:16] So I guess just, you know, get your thoughts around that automation maturity curve, you know, between robotic process automation all the way to essentially cognitive automation, which is really where automation meets AI.

[00:17:30] You're exactly right.

[00:17:31] So first off, I like people to have the mental picture of this sort of landscape.

[00:17:35] And the lower left is, you know, boring, repetitive tasks.

[00:17:37] And the upper right is really creative problem solving.

[00:17:40] And one of the chapters in my book is the history of the future of work.

[00:17:45] And the truth is that technology has always automated activities down in the lower left.

[00:17:50] Not many of us around the world till the ground by hand.

[00:17:55] We either have a mule or an ox or we have an autonomous tractor, right?

[00:18:01] And so there's always going to be automation down in the lower left-hand corner.

[00:18:06] But what ends up happening is, in most cases, that automation doesn't replace a job.

[00:18:12] What it does is it just automates tasks.

[00:18:15] And then it's a human's decision if a job goes away.

[00:18:19] So that really is the way to be thinking about the decision-making process that the people who lead an organization go through.

[00:18:26] So you can automate, let's say, 20% of the tasks.

[00:18:28] And initially, sure, that's likely going to be in places like call centers.

[00:18:33] And it's probably going to be in warehouses where you can use robots to pick boxes and so on.

[00:18:39] So you can automate 20% of the tasks in your company.

[00:18:42] And you have a set of decisions to make.

[00:18:44] Are you going to say, as you just mentioned, is one of the options to lay off 20% of your people?

[00:18:50] Are you going to give people 20% more free time to think of new opportunities?

[00:18:57] That's something Google did for years, called 20% time, where you would take a day a week and you'd be thinking about new products, new ways to solve problems for your customers.

[00:19:05] Are you going to give people a four-day work week?

[00:19:08] There's a lot of companies around the world that are experimenting and saying, you know, we probably can get a lot of stuff done in those four days.

[00:19:14] And we maybe don't need that.

[00:19:16] And so on.

[00:19:17] So there's the mindset that these new technologies, and especially, I agree with you, AI is not a thing.

[00:19:24] AI is just a label.

[00:19:26] It's a bucket of a whole bunch of different technologies.

[00:19:29] But what these technologies represent is the opportunity to automate a whole bunch of tasks.

[00:19:35] And so that's actually an opportunity for organizations to completely rethink.

[00:19:39] And rather than having the zero-sum world where you just lay off messy, expensive humans,

[00:19:46] where you just see them as discardable,

[00:19:47] I prefer a world where you say, no, no, these tools are helping people to be able to solve problems in completely new ways.

[00:19:54] We can re-architect job roles.

[00:19:56] We can have a completely different set of agreements about how teams come together to be able to solve problems.

[00:20:02] We can be much more dynamic organizations where we're helping people to continually solve the problems of tomorrow's customers.

[00:20:09] And if we have that much more positive perspective that we essentially want to be human-centric and we want to empower humans with these technologies,

[00:20:19] then that's just a completely different kind of process of leading an organization.

[00:20:24] I agree.

[00:20:25] And I mean, everything's about balance, right?

[00:20:27] I mean, we've talked about three or four examples already where you just have to find the right balance between either technology and humans or innovation and regulation.

[00:20:37] And I feel like that's part of where people struggle, right?

[00:20:42] Is to keep that balance.

[00:20:43] And you brought it up in one of your talks where you talked about almost like, I think you use a table as an example, but it's almost like a seesaw, right?

[00:20:52] Like how do you sort of maintain that equilibrium or something close to it so that you don't get caught flat-footed or have a rebellion amongst your workforce?

[00:21:03] And I think that's part of where people are running into challenges because people want to feel like they're in control in terms of the decision-making.

[00:21:15] They want to be decisive when they do so, but oftentimes that results in some backlash or they just haven't incorporated all available information or perspectives or things like that.

[00:21:28] So I think this is an important part of trying to execute this type of transformation successfully.

[00:21:35] So just tying back to your new focus or one of your new focus areas with Future of Work at Singularity, it seems like this is a big challenge.

[00:21:45] You know, transformations have historically not gone that great at organizations.

[00:21:51] And a lot of those, I mean, you could argue organizations should be continually transforming, so it's not a time-boxed exercise, or even a multi-year exercise.

[00:22:01] But a lot of the transformation work I did at IBM was domain-specific.

[00:22:06] You know, it was process transformation.

[00:22:08] It was supply chain transformation.

[00:22:10] I mean, this is probably the most disruptive of all just because of its complexity and its breadth.

[00:22:16] It's not, we're not just talking about, you know, knowledge workers in terms of the disruption and the widespread impact.

[00:22:24] First off, if we accept that Mr. Kurzweil is right and that exponential change is only going to accelerate, then we have to start at the individual level.

[00:22:34] And so for each one of us, we have an opportunity to be able to continually adapt.

[00:22:41] And we can have what Carol Dweck calls a fixed mindset in her book, Mindset.

[00:22:49] There's nothing wrong with a fixed mindset, but if a global pandemic or an AI tsunami dramatically changes our work roles, then we each must be able to adapt.

[00:23:02] And a fixed mindset is no longer quite as useful.

[00:23:04] And so if instead we all have a growth mindset and each of us realize that we need to continually adapt and change, it starts at the individual level.

[00:23:14] And so, so long as each of us is committed to continually growing, continually changing, continually learning new skills, continually solving new problems, at the individual level, transformation can continue.

[00:23:26] And that's why I talk so much in my LinkedIn courses about learning mindset and learning agility and leading change, because all of those I think are going to be requirements forever.

[00:23:37] As a matter of fact, there's a line in my course, Leading Change, about change management, which is the way we used to approach these things.

[00:23:44] Change management is dead and all that's left is leading change.

[00:23:48] Hey, it's Bob Pulver, host of the Elevate Your AIQ podcast.

[00:23:51] Human-centric AI, AI-driven transformation, hiring for skills and potential, dynamic workforce ecosystems, responsible innovation.

[00:24:00] These are some of the themes my expert guests and I chat about, and we certainly geek out on the details.

[00:24:05] Nothing too technical.

[00:24:06] I hope you check it out.

[00:24:08] I want to take a break real quick just to let you know about a new show we've just added to the network.

[00:24:15] Up Next at Work, hosted by Jeanne and Kate Achille of the Devon Group.

[00:24:21] Fantastic show.

[00:24:22] If you're looking for something that pushes the norm, pushes the boundaries, has some really spirited conversations, Google Up Next at Work, Jeanne and Kate Achille from the Devon Group.

[00:24:37] So that's at the individual level.

[00:24:40] So at the organizational level, now we want a whole bunch of humans to continually change.

[00:24:44] Sometimes we call that digital transformation, which is often what you may have been focused on at IBM.

[00:24:50] Sometimes it's called business process transformation because we used to solve problems.

[00:24:54] We used to do tasks in this way, and now we want to do them in another way.

[00:24:59] But the process of transformation is really cultural.

[00:25:02] That's why I begin with mindset.

[00:25:04] I talk about mindset, skill set, and tool set.

[00:25:07] But it's really a mindset shift.

[00:25:10] And if you want to get a mindset shift across, especially a large organization, you're right.

[00:25:17] The data is not very good.

[00:25:19] There's a book called Culture Transformation by the Institute for Corporate Productivity.

[00:25:25] And they did a study of hundreds of attempts at cultural transformation.

[00:25:31] Bad news, 85% fail.

[00:25:35] 85%.

[00:25:36] Would you take a job if you had an 85% chance of failure?

[00:25:40] However, good news, 15% succeed.

[00:25:43] And so the book Culture Transformation walks through what those steps are.

[00:25:48] I actually replicate them in my book.

[00:25:51] And it's not a forced march.

[00:25:55] It's a cookbook.

[00:25:57] It's a set of insights as to how humans can change at scale.

[00:26:04] And if we think about that not as change management, where we used to do things a certain way,

[00:26:09] now we do them a new way, and we're done.

[00:26:12] We have to think of it as a continuous process of transformation.

[00:26:16] And for some people, that's really exciting.

[00:26:19] Oh, boy, I get to learn new skills every year.

[00:26:21] And to some people, that just feels overwhelming.

[00:26:23] Like, won't it ever slow down?

[00:26:25] It's just moving way too quickly.

[00:26:27] And so that's what we need to help people to be able to do, is to be able to continually adapt and change and develop new skills and have the mindset that they can do those things.

[00:26:36] Yeah, I am not a fan of the change management phrase either.

[00:26:40] And after going through your course, now I feel like that's like a red flag, right?

[00:26:46] In terms of the mindset and the culture, right?

[00:26:49] Because, to your point, we're going to give you a specific recipe for change.

[00:26:57] And here's how it's going to work.

[00:26:59] And it's not really that simple.

[00:27:01] And calling it change enablement actually isn't that much better, right?

[00:27:06] You're just acknowledging that it doesn't get to the adaptability that I think you would advocate for.

[00:27:15] Absolutely.

[00:27:16] And in terms of that learning and growth mindset, right?

[00:27:19] Yeah.

[00:27:19] And it starts at the individual level, individuals and teams.

[00:27:21] As long as we help them to be able to continually adapt and grow, then you're going to be much more likely to have the organization have that capacity.

[00:27:31] Yeah.

[00:27:32] I wanted to ask you about the concept of collective intelligence.

[00:27:37] I did a lot of work in crowdsourcing and collective intelligence back at IBM before there was a gig economy.

[00:27:44] And it's sort of resurfaced like many of the things that I focused on at IBM, social analytics and other things.

[00:27:54] But this one in particular, I think is fascinating in the context of AI and the combination of not just an individual human and their intelligence and intuition and creativity, but AI plus collective intelligence.

[00:28:12] And I think about that in multiple ways.

[00:28:14] One is we've got to change individuals, but individuals are parts of teams and probably have team metrics and goals and objectives.

[00:28:22] And then, of course, you could go up to a department or organizational level.

[00:28:26] So there's at least sort of three tiers of human intelligence, I suppose.

[00:28:31] But when you think about collective intelligence and cognitive diversity in the age of AI, have people made any progress in terms of thinking about how AI works, not as, like, a co-pilot for an individual to help them make better decisions and, you know, aggregate and derive insights from, you know, a data set, for example, versus people collectively bringing that cognitive human diversity

[00:29:31] and then augmenting that with AI?

[00:29:33] That's kind of the problem I'm getting at.

[00:29:34] Well, I think we've proven with a lot of the social media tools that are out there that while we may not be all that great at sponsoring collective intelligence, we're really, really good at catalyzing collective stupidity.

[00:29:46] So we're still in very early days, especially with generative AI and large language models.

[00:29:53] But I'll give you a couple of quick examples.

[00:29:54] So the short answer to your question is yes.

[00:29:56] I think there's a tremendous amount of inspirational experimentation and trying to help understand because each of us has a unique set of experiences and perspectives and history.

[00:30:09] And we each bring different things when we're trying to solve problems.

[00:30:13] And if we have the right tool set in place, then what that gives us is an opportunity to be able to leverage what is most effective that each of us can bring and to be able to understand each other's differences and to be able to help each other to continually develop our skills and to be able to collectively solve problems.

[00:30:32] It's funny you use that phrase, because with several colleagues I used to run

[00:30:38] a nonprofit called Collective Intelligence.

[00:30:41] We used to own the URL, collectiveintelligence.net, back in the early 2000s.

[00:30:47] And that actually led to us founding a group called SOCAP, Social Capital Markets, which is a large conference that brings together social entrepreneurs and investors from around the planet, still running today.

[00:30:58] Right. So the whole idea of collective intelligence, the basic is that each of us has these unique characteristics and capacities.

[00:31:07] And if we can find the right kind of operating system, the right way that each of us can interact, then we have tremendous opportunity to solve problems that we never saw before.

[00:31:17] And also to maximize the potential of each human because we can bring what is unique to the party and we can have the tool set that allows us to be able to continually leverage those tremendous assets.

[00:31:30] So I'll just give you two quick examples.

[00:31:32] I was brought in to help moderate a group of a couple dozen people that wanted to think about emotional intelligence and how it might be better trained and sought after in the enterprise.

[00:31:45] And then we did a very long day of whiteboard sessions.

[00:31:49] I did all of the note-taking on easel pads.

[00:31:53] And then at the end of the day, we were all trying to figure out, okay, so what do we do next?

[00:31:56] And while everybody's sort of exhausted and trying to figure this out, I typed up every single note on the whiteboards into Claude, into Anthropic.

[00:32:07] And I asked Claude, what are the three things that this group needs to do next?

[00:32:12] And then I plugged in my computer to the screen.

[00:32:16] And one by one, people in the room just stopped talking.

[00:32:18] And they say, oh, well, number one is not that intelligent, but two and three, those are pretty good ideas.

[00:32:24] Let's use those as jumping off points.

[00:32:26] So this is one of the things that the tools do very well is synthesis, is you take all these different perspectives and then you start asking, what are we thinking about?

[00:32:35] What could we do?

[00:32:36] What did we miss?

[00:32:38] And it doesn't mean that they're right.

[00:32:40] They're often wrong, or rather, they're often not giving the answers that we want or that are most appropriate, but they can spark ideas that then can take us in a new direction.

[00:32:50] So that's one example.
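For readers who want to try the synthesis step Gary describes, here is a minimal sketch using Anthropic's Python SDK; the sample notes, prompt wording, and model name are illustrative assumptions, not the exact ones used in that session.

```python
# A minimal sketch of the whiteboard-synthesis step described above, using
# Anthropic's Python SDK (pip install anthropic). The notes, prompt wording,
# and model name are hypothetical stand-ins, not the session's actual inputs.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

whiteboard_notes = """
- EQ training should be scenario-based, not lecture-based
- Managers need a shared vocabulary for emotional intelligence
- (...every note captured during the day goes here...)
"""

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # assumed model name
    max_tokens=500,
    messages=[{
        "role": "user",
        "content": "Here are all the notes from a day of whiteboard sessions "
                   "on emotional intelligence in the enterprise:\n"
                   + whiteboard_notes
                   + "\nWhat are the three things this group needs to do next?",
    }],
)
print(response.content[0].text)
```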

[00:32:51] And another example is I'm collaborating with a company out of Brazil called Acaso, A-C-A dot S-O.

[00:32:57] And we are creating what I call the community operating system for the organization.

[00:33:02] Because what I believe is that this old model of the organization, and you were talking about sort of the context, the levels from the team up to people who lead the organization.

[00:33:12] And that construct literally goes back thousands of years to Alexander the Great.

[00:33:19] Like this whole idea of a hierarchy and structure, that goes back to much more militaristic applications of how we organize large groups of humans.

[00:33:29] And I believe we're shifting into a new era, a next era, where we will define the organization by its level of social cohesion.

[00:33:39] That is, we will have people within the organization continually collaborating and solving problems together.

[00:33:44] Before we move on, I need to let you know about my friend Mark Pfeffer and his show, People Tech.

[00:33:52] If you're looking for the latest on product development, marketing, funding, big deals happening in talent acquisition, HR, HCM, that's the show you need to listen to.

[00:34:04] Go to the Work Defined Network, search up People Tech, Mark Pfeffer.

[00:34:08] You can find them anywhere.

[00:34:12] The more we understand each individual's skills, the more we can maximize their human potential, and the more we can build the connections between people in the organization, the better we can leverage that collective intelligence.

[00:34:25] And one of the examples that I use is we just recently did an event in Sao Paulo in Brazil, 40 HR executives, and we had scraped their LinkedIn profiles.

[00:34:37] And we have some AI that guesses at skills that people have.

[00:34:42] And so we put this up on the screen, and it shows us a node graph, basically just a graph of people's connections to each other.

[00:34:50] And then we typed in, all right, what's a problem that these people might be asked to solve?

[00:34:54] And it could be a succession plan.

[00:34:56] And automatically, the software picked out the top five people in the room who have the skill set, the best skill set to be able to solve the problem of developing a succession plan.

[00:35:10] So you think of that collective intelligence operating system.

[00:35:13] You think of the ways to be able to surface.

[00:35:16] I often show a picture of an iceberg because we're sort of icebergs to ourselves.

[00:35:21] We only know this thin layer of skills above the waterline, and then there's all this hidden potential below the waterline that we don't even see that each of us has.

[00:35:30] And the organization is the same way.

[00:35:31] It doesn't know about all the human potential in the organization.

[00:35:34] So if we can surface all those skills, and then we can help them to be able to dynamically bind around new problems that they'd never even encountered before, that is the operating system for collective intelligence.
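The matching step Gary describes, surfacing the people whose skills best fit a newly posed problem, can be approximated with something as simple as set overlap. This toy sketch uses hypothetical names, skills, and scoring; the system described in the episode is presumably much richer, with graph structure and skill inference from profiles.

```python
# A toy sketch of skill-based matching: given people with inferred skill sets
# and a problem described by required skills, rank people by overlap. All
# names, skills, and the scoring rule here are hypothetical illustrations.
def rank_by_skill_overlap(people: dict[str, set[str]],
                          required: set[str],
                          top_n: int = 5) -> list[tuple[str, int]]:
    """Score each person by how many of the required skills they cover."""
    scores = {name: len(skills & required) for name, skills in people.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

people = {
    "Ana":   {"succession planning", "talent analytics", "coaching"},
    "Bruno": {"org design", "succession planning", "change management"},
    "Carla": {"recruiting", "employer branding"},
}
print(rank_by_skill_overlap(people, {"succession planning", "org design"}))
# -> [('Bruno', 2), ('Ana', 1), ('Carla', 0)]
```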

[00:35:47] There's so much to unpack there.

[00:35:50] And yeah, it's amazing to think about even going back to some of the things that I witnessed at IBM and my time at NBC.

[00:36:01] But I guess the first thing I would say is this goes back to one of the former HP leaders.

[00:36:09] I think it was the CEO, Lew Platt, who said, if only HP knew what HP knows.

[00:36:14] I remember specifically using that, or stealing it, in the context of IBM, because some of the things that I saw before Watson and with Watson tied directly to some of the examples and the potential value

[00:36:31] that you would get from incorporating that type of dynamic methodology, whether it's pulling together those diverse perspectives and that diverse insight for a particular decision.

[00:36:45] So that you have it in the room without needing more chairs.

[00:36:50] Because one of the things that I saw at the tail end of my IBM career, I spent a lot of time connecting clients to research.

[00:36:57] And of course, Watson was one of those areas.

[00:37:00] Outside their innovation lab, they had this special room that they called the Cognitive Environments Lab.

[00:37:06] And it had its own intelligent assistant.

[00:37:09] So it was called CELIA, Cognitive Environments Lab Intelligent Assistant.

[00:37:14] And it was basically this next generation Watson.

[00:37:16] So you just yell out a question, voice command, show me all the...

[00:37:21] They did a mergers and acquisitions example.

[00:37:24] Show me all the software companies between $25 and $50 million in revenue working in this space or whatever.

[00:37:34] And it immediately shows up, throws up a graph.

[00:37:36] Looks like a social network graph with nodes and things or whatever.

[00:37:41] And then you continually ask questions to narrow down that list.

[00:37:45] So that was 2015, right?

[00:37:48] That was almost a decade ago.

[00:37:50] And I was in an exercise the other day with this new technology where they split us up into groups kind of randomly.

[00:37:57] But what they did was they had each group brainstorm.

[00:37:59] And then they had AI assistants in each of the groups listening.

[00:38:04] And it would grab the nuggets dynamically that it thought were interesting and potentially relevant to the other groups.

[00:38:10] And it would basically automatically just show up in the other room and say, hey, this group came up with...

[00:38:16] This is where they're heading or these are some of the key ideas from there.

[00:38:20] And the other groups would be like, oh, that's interesting because we didn't have anyone in the group that brought that perspective or raised that question or whatever.

[00:38:28] So the ability to dynamically do that not only injects new ideas and maybe overcomes a potential blind spot, but you also potentially overcome a problem that even IBM Watson used to demonstrate, which is known unknowns.

[00:38:47] It knew what information it was missing to increase its confidence level and the probability of giving you the right answer.

[00:38:57] So I just think some of these things are really, really fascinating to think about, and how the organization at different levels can start to make better decisions by incorporating knowledge, not necessarily knowledge buried deep in a knowledge base that may not get surfaced even by AI because maybe it doesn't have access or whatever, but really leveraging that human intellect and intuition that otherwise would go unnoticed.

[00:39:25] And I think it also ties to your point that you have these hidden skills.

[00:39:29] Everyone's working on skills-based organizations and skills-based hiring or whatever.

[00:39:33] You have no idea, especially with the... Also going back to the job displacement scenario, you're going to get rid of people without even knowing what they're capable of.

[00:39:42] I mean, talk about a huge mistake in shooting yourself in the foot.

[00:39:46] So you're exactly right. So one of the things that I advocate for strongly in my book is a new way of thinking about what it is to lead in an organization, whether you're leading a team or you're leading a hundred-thousand-person corporation.

[00:40:02] The old rules, the old mindset, was that the leader, I don't even use the word leader, the person who leads, was in control, was able to issue commands.

[00:40:15] And my friend Esther Wojcicki, in her marvelous book, Moonshots in Education, she says the role of the teacher needs to shift from the sage on the stage, the one with all the answers, to the guide on the side, the one with the best questions.

[00:40:29] So one of the tremendous capacities of the large language models in generative AI is that it allows us to ask new questions.

[00:40:39] And so in that person who leads, whether it's a team or a large organization, I suggest that they become the team guide.

[00:40:47] They become the person who does not have to have all the answers, but instead asks the best questions.

[00:40:54] And one of the things that these tools are very, very good at is asking great questions.

[00:40:59] So you can ask, what are the questions I'm missing?

[00:41:02] How should I be thinking differently?

[00:41:05] We all just came up with these five answers.

[00:41:08] What are five other answers that we might have come up with?

[00:41:12] I've got these people in my team that each have these backgrounds.

[00:41:16] What are some of the skills they might have that I don't even know about?

[00:41:20] And so what that does is it opens up the aperture to a range of new insights,

[00:41:25] but it has to begin with asking better questions.

[00:41:29] And I really do believe that's what we need to train the people who will lead tomorrow

[00:41:33] to do: continually think about how they ask better questions and how they are empowering

[00:41:39] the people that they work with to come up with the answers.

[00:41:43] The themes of your book, The Next Rules of Work, mindset, skill set, tool set, map quite well to some of the things that I think about underneath this,

[00:41:53] what I'm calling your AIQ.

[00:41:55] When we're playing with how to think about this and frame it and say, well, it's not just about AI literacy.

[00:42:02] It's not just about AI skills.

[00:42:05] The readiness is partly mindset.

[00:42:06] It is partly hands to keyboard, playing with it.

[00:42:10] How do you get it to phrase things in different ways and sort of provoke it and try to, I guess,

[00:42:17] make sure that it's tuned in some way to how you like to communicate?

[00:42:22] I guess it depends on the task that you're trying to accomplish.

[00:42:25] But on the AI literacy part, we don't need everybody to know all of these.

[00:42:30] We're kind of going crazy.

[00:42:31] Well, maybe it's me because I'm in this echo chamber, but I don't care about Llama 3 8B.

[00:42:40] It's like in a couple of years, we're going to be like, I can't believe we were even going so far into the weeds with all of this stuff.

[00:42:46] But you have to know where it's getting its data from.

[00:42:49] You've got to know that you need to still think critically about the output that some of these generative AI tools are giving you.

[00:42:57] And you need to use technology, whether it's AI or anything else, you need to use it responsibly and fairly.

[00:43:06] And so certainly that's one of the areas where I focus on.

[00:43:10] So I guess as you think about those components or those attributes of what it takes to lead and be successful in the future of work,

[00:43:21] how do you think about the progress that we're making towards that?

[00:43:25] The short answer is we're way behind.

[00:43:27] But it's not unusual in that, and especially with exponential technologies.

[00:43:32] The new technology comes along, really rapid adoption rates.

[00:43:35] It's often a tool in search of a problem to solve.

[00:43:38] We then figure out some of the problems that it's good at and hopefully a lot of the problems that it's not good at.

[00:43:44] But a lot of the experimentation happens at the grassroots or at the team level, because enterprises move very slowly and they very often want to,

[00:43:52] appropriately so, make sure that the right technologies are applied to the right problems.

[00:43:58] The difference with these technologies is that what's happened in the eras that you and I have lived through is that there's these layers of technology that have been put in place

[00:44:08] from the very basic computers and operating systems to the internet.

[00:44:13] And then there's the cloud computing, and then along comes the AI tsunami.

[00:44:18] And so we keep on building new layers.

[00:44:20] What is under the covers?

[00:44:22] Where the technology comes from?

[00:44:25] The information it was trained on?

[00:44:27] All of that is opaque to us.

[00:44:29] So what's ended up happening is we've had this incredibly rapid adoption and widespread tinkering and usage.

[00:44:38] Only a few really, really clear business applications that are tremendously useful.

[00:44:45] A variety of applications we're still trying to figure out.

[00:44:49] And we're only now realizing, well, you know what, if we'd put the guardrails around them at the beginning, then a lot of the risks that the enterprise can be exposed to would have been mitigated.

[00:45:02] And so, you know, one of my favorite sort of acronyms for how to be thinking about these issues is RAFT.

[00:45:10] It's one that I think Dataiku has suggested: reliable, accountable, fair, and transparent, which are kind of, sort of, words to live by.

[00:45:22] So you want the technology to be accurate.

[00:45:25] You don't want it to hallucinate.

[00:45:27] You want to ask a question and know that what you're getting is reliable.

[00:45:30] You want to be able to query it.

[00:45:32] It has to be accountable.

[00:45:33] Where did you get this information from?

[00:45:35] Why did you say that?

[00:45:37] You want it to be fair.

[00:45:38] That is, if it's not inclusive, if it's built on models from the global north and has no data, almost literally no data from the global south, then it's not fair.

[00:45:53] It's not inclusive.

[00:45:55] And you want it to be transparent.

[00:45:57] That is, that the models, what it was trained on, how the algorithms work, we need to understand.

[00:46:03] We need to be able to ask it, you know, what's the process you went through to come up with this answer?

[00:46:09] And which algorithms, literally, were the ones that led you to give this response?
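One way to internalize the RAFT criteria is to write them down as a checklist you run against any tool you are evaluating. The sketch below is just one hedged way to operationalize the idea; the question framing and the pass/fail scoring are illustrative, not a formal Dataiku rubric.

```python
# A lightweight sketch of the RAFT criteria as an evaluation checklist.
# The question framing and the pass/fail scoring are illustrative only.
from dataclasses import dataclass

@dataclass
class RAFTAssessment:
    reliable: bool     # Does it answer accurately, without hallucinating?
    accountable: bool  # Can it say where its information came from?
    fair: bool         # Is its training data inclusive of the people it serves?
    transparent: bool  # Do we understand its training data and algorithms?

    def verdict(self) -> str:
        met = [name for name in ("reliable", "accountable", "fair", "transparent")
               if getattr(self, name)]
        return f"{len(met)}/4 RAFT criteria met: {', '.join(met) or 'none'}"

print(RAFTAssessment(reliable=True, accountable=False,
                     fair=False, transparent=False).verdict())
```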

[00:46:16] I get it.

[00:46:17] The podcast just isn't enough.

[00:46:20] That's all right.

[00:46:21] Head over to your favorite social app, search up WRKdefined, W-R-K defined, and connect with us.

[00:46:28] But if the technologies are only in a few hands, we're not going to get all four of those, and we may get only one or two of them.

[00:46:37] And so the way I encourage every end user to be thinking about it is the same way you'd think about, should I get in the gas-guzzling car or should I hop on my bike?

[00:46:48] Because your decisions matter.

[00:46:51] Think about the tools that you're using.

[00:46:53] Think about the sources.

[00:46:54] Think about whether or not they are reliable, accountable, fair, and transparent.

[00:46:58] And make intelligent decisions.

[00:47:00] Make equitable decisions about the tools that you use because that actually does have an impact.

[00:47:06] It actually does have results in how we think about what we want to encourage and the kinds of technologies that we want to support.

[00:47:14] Yeah, I think that's great.

[00:47:15] I like that framework.

[00:47:17] The other question I had for you, just in terms of AI and education, the next generation and what they're dealing with, they can handle this.

[00:47:26] They're dealing with how to behave properly on social media.

[00:47:30] They're digital natives.

[00:47:31] My daughter's been playing with touchscreens since she was basically born and could move her arms.

[00:47:39] So it's like they can handle that.

[00:47:42] I mean, I'm fascinated by the way that she handles her social media accounts or whatever.

[00:47:47] It's just, I mean, I've been using devices since middle school myself.

[00:47:51] And just, I'm still like fascinated.

[00:47:53] So they're pretty mature when they want to be and they can handle this if you teach them these, you know, your RAFT principles and some of these things.

[00:48:04] And yet, I don't know.

[00:48:06] I just feel like we're failing them as well.

[00:48:09] It's not just the people already in the workforce.

[00:48:11] It's the education system.

[00:48:13] Like you can't say, you can't just say do your own work.

[00:48:17] That's not a policy, right?

[00:48:19] And that's not a strategy.

[00:48:20] You know, so I don't know.

[00:48:21] How are you thinking about, you know, AI and education these days?

[00:48:25] So first of all, I have tremendous compassion for both educators and students, because when these inflection points hit, it's always the combination of pace and scale that overwhelms our social systems and keeps us, especially as parents, from being able to give the best advice.

[00:48:46] Or, as educators, from being able to leverage the tools in the most appropriate ways.

[00:48:50] And so it's hard to believe that it's only now been still a very short period of time since ChatGPT 3.5 was released, which we kind of all see as that inflection point.

[00:49:02] And I think educators are starting to get their arms around that this is not necessarily an enemy.

[00:49:07] These tools actually can be, you know, quite useful in a variety of different ways.

[00:49:12] Students are already experimenting with them.

[00:49:14] But we're at this stage where we don't yet know the best ways to be able to help coach young people, to be able to have these sorts of ethical guidelines around what they do.

[00:49:27] And we certainly, as educators, we often don't know how to be able to leverage the tools most appropriately in a classroom setting.

[00:49:34] And so in the next year or two, I think we'll find a lot more consistent approaches to the process.

[00:49:43] But it can require a couple of things.

[00:49:46] First off, as parents, you know, you were obviously, you grew up in the technology realm.

[00:49:51] And so you're able at least to understand these technologies, to explain them to your kid, and then to be able to try to help to hopefully encourage really useful applications of the technologies.

[00:50:03] That's really hard for parents that have never been exposed to these technologies before, don't know anything about them.

[00:50:08] And wouldn't be able to find them on the internet if they tried.

[00:50:11] And this has been true with each wave of technology.

[00:50:14] It's just because of the pace and scale, because it's moving faster and faster, and it's affecting more and more people in such a short period of time.

[00:50:22] That's one of the reasons it's so hard for these systems to adapt.

[00:50:25] What I believe is coming out of this era is, in the same way I was saying, I think the organization is going to need to change dramatically.

[00:50:35] We've got this operating system, you know, called hierarchy that we've used for thousands of years, and we need to come up with better approaches.

[00:50:42] We're going to do the same thing with our educational systems.

[00:50:45] The education system in the West really became solidified only in the past few hundred years, and in the United States, only in the past hundred years.

[00:50:52] And it does this same thing that I was talking about, that Esther Wojcicki was saying, which is we've encouraged the role of the teacher to be the sage on the stage and to have all the answers.

[00:51:02] And instead, now we need to completely change that environment where kids are responsible.

[00:51:08] They have the agency to actually be in charge of their own learning.

[00:51:13] And that is just a totally different model of education system.

[00:51:17] So it's a totally different way of thinking.

[00:51:18] It's a different mindset in how we learn.

[00:51:21] And so this is a process, and I believe that younger educators will probably be more able to adapt more rapidly.

[00:51:29] But we have to change these systems so that they encourage individual learning and group learning.

[00:51:36] It can be a group sport as well, but where the individual learner is the one that is responsible for continuing to learn the next thing or solve the next problem.

[00:51:45] So that's one aspect of it.

[00:51:48] And the other aspect of it is, especially when we think of higher education, and we think then of degrees and how that becomes sort of that young adult launchpad for getting into the world of work, we've got to rethink all that as well.

[00:52:01] Because we've used the degree as a calling card.

[00:52:06] We've used it as sort of a quality assurance stamp.

[00:52:10] And so companies have used it that way as well.

[00:52:11] Because this educational institution said this person got this degree, they must have learned these things.

[00:52:18] And therefore, a company will hire that person over somebody else who was what is often called skilled through alternative routes, or what Opportunity@Work, the nonprofit, calls STARs.

[00:52:27] And the current system is optimized for people with degrees.

[00:52:31] But as you know, IBM announced last year that more than 50% of its open jobs no longer required a degree.

[00:52:38] Now, the hiring process inside IBM still, unfortunately, has its incentives built around the old rules of hiring, because only 7% of the open jobs that didn't require a degree hired somebody without a degree.

[00:52:52] So we're still in the early days of this.

[00:52:56] But if we rethink the way that we help young minds to learn and then we rethink the way we use that learning system to be able to help people to learn the real world skills they can apply inside actual work, then that will have this sea change effect.

[00:53:15] And the underlying tools, especially generative AI, can help all of the steps in that process if they're applied correctly.

[00:53:24] Yeah, no, that's fantastic, Gary.

[00:53:26] I could keep talking to you for hours, but I think I'm going to be respectful of your time and let you get on with your day.

[00:53:33] Any parting piece of advice for our listeners?

[00:53:36] Well, since we're talking about AIQ, I want to make sure we're separating out all the different Qs.

[00:53:46] So human intelligence is each of our own knowledge.

[00:53:50] And I'm working on a series of courses with Dr. Evian Gordon on the brain and mindset, skill set, and tool set.

[00:53:59] But there's, I wouldn't call it intelligence, but there's certainly the capacity of the technology tool set itself, of this basket of technologies.

[00:54:08] But there's also EQ, emotional quotient, and all of the human aspects of our interaction with each other.

[00:54:17] And what I would encourage listeners to be thinking about is, well, how do I develop all those Qs?

[00:54:22] How do I develop a mindset, skill set, and leverage this tool set?

[00:54:25] But then how do I also double down on all of the ways that we as humans can continually solve problems together, so that the tool set becomes an enabler and the tool set doesn't become the actual task performer,

[00:54:41] where we just automate away human work?

[00:54:43] Let's focus on how we're going to help each human to be able to maximize their human potential.

[00:54:48] So that is a mic drop moment.

[00:54:51] Thank you, Gary.

[00:54:52] That was perfect.

[00:54:53] Well, thank you again, Gary, for joining me.

[00:54:55] I'll have to have you back to continue the conversation.

[00:54:58] Yeah, absolutely.

[00:54:59] I have like five more topics I could probably ask you about right now.

[00:55:03] So thank you again for spending some time with me and for our listeners.

[00:55:06] I think there's a lot of takeaways from this.

[00:55:08] And thank you, everyone, for listening.

[00:55:11] And we'll see you next time.

[00:55:12] All right.

[00:55:13] Be well.

[00:55:13] Thanks, Gary.