Is AI expertise more valuable than industry experience?
Andrea Iorio explores how AI literacy reshapes hiring and leadership. This episode focuses on generative AI, decision making, and workforce strategy—revealing how organizations future-proof talent through AI automation and digital transformation.
—
Subscribe to the LeaderbookAI Podcast (formerly CPO PLAYBOOK): https://leaderbook.ai/podcast
Want to create 10x the value of your business or investments? Let’s talk: https://leaderbook.ai/contact
Powered by the WRKdefined Podcast Network.
[00:00:00] I'm Felicia Shakiba, and this is CPO PLAYBOOK, where we solve a business challenge in every episode. There's one shift that is completely reshaping how we work: the rise of AI in the workplace.
[00:00:28] AI isn't just automating tasks, it's redefining what it means to be a skilled professional. If you're leading an organization, how do you evaluate talent in this AI-driven world? Should companies prioritize hiring people with deep industry knowledge, or is it more important to find those who know how to leverage AI effectively? To help us make sense of all this, I have an incredible guest today, Andrea Iorio.
[00:00:58] Andrea is a keynote speaker and author of the upcoming book, Me, Myself, and AI, which explores the necessary skill sets professionals need to thrive in a workplace dominated by AI. We're going to unpack AI's impact on leadership, recruitment, and professional development, and trust me, you're going to want to take notes. Andrea, welcome to the show.
[00:01:24] Thank you so much, Felicia. It's such a pleasure being here on the CPO PLAYBOOK. Andrea, I just want to pause for a moment and acknowledge your incredible background. You're a leading AI and leadership speaker delivering over 100 keynotes annually, an author, former host of NVIDIA Brazil's podcast, a columnist for MIT Technology Review,
[00:01:50] former Tinder and L'Oreal executive, and board member. Wow, that is an incredible achievement. Such a pleasure. I mean, I think, you know, in the age of AI, we need also to prove that humans can thrive in the workplace. And I think this is also part of the achievement is how we can leverage better the tools that we have available to make all of this happen.
[00:02:14] So I think some of the background and my experiences will prove that and eager to showcase some of the learnings I've had in my research and in my books. I love that. So, Andrea, AI is, as you know, rapidly reshaping this professional landscape. You've been researching this intersection. What's the single biggest mindset shift that professionals and organizations need to make when it comes to AI?
[00:02:42] There are definitely a lot of things that we have to change when it comes to the way we approach this transformative technology that, you know, different from the past, this time has the unique ability to learn and improve and evolve at a much more rapid pace than humans do. But the single biggest mindset shift that I think is very important to acknowledge is seeing it as an opportunity rather than a threat.
[00:03:11] And that's really the starting point, because whenever we've been faced as humans with transformative technologies that come up and are better than us at some of our skills, we tend to fight them off or at least see them as threats to our jobs, to our role, even to our, you know, existence, you know, because in a way, if they come on and they perform some of our tasks better than us,
[00:03:37] well, maybe am I going to be replaced or am I going to be still important in the workplace? So I think this mindset shift is important because whenever we understand it is an opportunity and not just a threat, we start collaborating with it and we start understanding where it can be used. And it can be used in most of our familiar, repetitive, mnemonic tasks. And the advantage we get from it is we gain time back, right?
[00:04:05] And so today an Accenture study shows that 40 to 60% of the tasks we do every day are either automatable or augmentable by AI. And so imagine having 40 to 60% of your time back, or your employees' time back. Well, that means we can focus on more creative work, more human work, having critical thinking, thinking strategically, and having a super assistant side by side.
[00:04:35] And again, not a menace or a threat. In your additional research, you asked a fascinating question. Would you rather hire a highly knowledgeable professional who doesn't use AI or someone less knowledgeable but highly skilled at leveraging AI? What have you found so far? The results we've had so far are, you know, really even shocking.
[00:05:00] We're running this survey for my next book and we've had more than 300 respondents so far. So it's pretty representative of the executives out there, especially because the respondents are from 18 countries. And believe it or not, 59% of respondents chose the latter, right? The professional who does not have sufficient knowledge or sufficient answers for that role but knows how to use AI well.
[00:05:27] And it is interesting because I've also asked these people why they would choose the second candidate. And if I should sum it up, they basically say, look, since AI is democratizing access to knowledge because of its, as I've said before, unique ability to learn based on a volume of data and information that is much higher than our human cognitive ability to absorb information.
[00:05:51] Well, because of this democratization, then we'd rather have professionals that are better at searching information or prompting AI tools to get better answers or feeding the right data to a predictive AI tool, rather than someone who is very knowledgeable but who, again, cannot compete when faced with AI.
[00:06:10] Although I'm the most expert in my sector, in my field, I'm the best lawyer, I cannot compete with an AI tool that updates its knowledge about cases in real time, with cases that might be happening in Japan or South Africa or Italy, right? We cannot compete with that. But 59% is a good number that shows that people are starting to understand that. But again, I'm even worried about the rest, the 41% who are saying, no, no, I still prefer the very knowledgeable one.
[00:06:40] Well, that's seeing only part of the picture, right? Which is still holding on tight to the old way of working. Right. And which brings up so many more questions. I think if AI skills are becoming just as critical as traditional expertise, what does that mean for hiring? How should recruiters be evaluating candidates differently?
[00:07:06] I mean, should they be asking candidates to have AI or ChatGPT available during the interview process to see how, you know, their answers differ from one question to the next, and compare those answers among candidates? How is this recruitment process going to change? It's tough, Felicia. And you've said some very good ideas.
[00:07:33] I think that overall, maybe stepping back, I think that recruiters and HR people and in the audience and not only should basically look at two very big sets of skills whenever recruiting or looking for, again, these professionals of the future. And the first big set of skills is exactly what you've mentioned, is the degree or at least the ability of using AI tools,
[00:08:01] which is a set of skills that I call AI literacy skills. So this is the first one. So AI literacy, though, is slightly different than a traditional hard skill, namely knowledge of a foreign language or analytical skills or, you know, remembering dates of Roman wars, as much as, you know, we are basically graded at school because of our knowledge and expertise and, you know, remembering things.
[00:08:30] Well, AI literacy skills are slightly different in the sense that they're slightly harder to measure. Right. How can you measure the ability of someone asking questions to an AI tool, which is actually one of the most important AI literacy skills, which I call prompting. Right. So prompting is this general term that defines the ability to, you know, provide good inputs to AI tools in order to get better outputs.
[00:08:58] Right. So the thing is that, as you've said, they are hard to measure. And so the important thing here, and maybe we can further dive deeper across the episode, is how can we measure those? And the way to go about it is defining behaviors that show this. And so whenever, for example, I want to test the ability of someone prompting an AI tool,
[00:09:25] maybe I want to see how clear the questions they frame are, or how much context they provide whenever they describe a situation, and so on. Right. And so this is the first big set of skills, AI literacy. An example, again, is prompting. Another one is what I call data sense-making, which is the ability of an employee or a worker to understand the data sets they're working on and that they're feeding to AI tools.
[00:09:51] They might be biased. They might be, you know, causing hallucinations and so on. So we need to better understand the data. We need to better ask questions. We need to, again, better understand how the tools work. And that's one set. The second big set is what I call the human literacy skills. And so basically it's understanding, OK, what are the skills in employees that, again, AI is not able to replicate?
[00:10:19] And that's where we're going more toward the soft skills domain, towards trust and empathy, adaptability, because these are things that still AI is not able to replicate. And I think these also should be, as much as AI literacy skills, top of mind, you know, for recruiters or for someone who's recruiting for their team. Hey, everybody, I'm Laurie Ruettimann. What are you doing, working? Nah.
[00:10:47] You're listening to a podcast about work, and that barely counts. So while you're at it, check out my show, Punk Rock HR, now on the WRKdefined Network. We chat with smart people about work, power, politics, and money. Are we succeeding? Are we fixing work? Probably not. Work still sucks. But tune in for some fun, a little nonsense, and a fresh take on how to fix work once and for all. We'll be right back. Back to the show.
[00:11:16] There's, that's so interesting because I feel like those recruiters or companies that are not doing this or don't have a path to this type of talent acquisition strategy, I think, are going to be in trouble if they don't. And to be able to really identify what it is, the skill that they are looking for and to properly know how to analyze it, right?
[00:11:44] Benchmark what is poor to good to great AI literacy skills, for example. What does that look like? And how do people behave during the interview process and so forth? And really, what are the outcomes, right? That's another piece is like, what is the best outcome you can get for a specific question, whether it be strategy or a mathematical equation and so forth. So I think there's a lot to unpack there.
[00:12:11] And I think it depends, obviously, on the role and the culture of the business, too. But let's say a company decides they want to upskill their workforce. How do organizations practically integrate AI training into their leadership development and learning and development programs? Actually, one of the first things to do is something you've already sort of mentioned, which is trying to make a diagnostic
[00:12:39] of what is the level of AI literacy skills and the human skills needed in the workplace? Because oftentimes we rush when it comes to learning and development programs, trainings, workshops, courses, corporate universities. We rush at implementing or at least rolling out educational content that is maybe general about AI, but we don't have personalized tracks.
[00:13:04] The second thing, or at least mistake that I see way too often is it is oftentimes way too technical for the business applications that people need to hear and learn, right? You've mentioned it really has to be tailor-made for each area, each role. And so far, it isn't, actually.
[00:13:26] And maybe a third big thing or at least issue is that way too often, again, we forget about the human skills that have to go in parallel to AI literacy skills. So we focus very much on the technical knowledge of AI, but we don't train or prepare people to, again, develop trust and empathy and so on, which we might further talk a little bit down the road.
[00:13:55] So whenever we look at the programs, the best way for them to actually be rolled out is first starting with a diagnostic. And it's a diagnostic, again, it's not too much about the level of technical knowledge of AI, but it is much more about the level of skills that are related to that. And the skills we try to measure through this diagnostic, through the behaviors related to that skill. Again, we've made some examples related to prompting.
[00:14:24] But again, data sense-making is, for example, is that employee or colleague critically thinking about the output that AI spits out or not? Because one of the big problems is that employees start using AI and then they do not critically evaluate the output. And maybe they copy and paste things that are biased, wrong, hard to explain, right?
[00:14:49] Way too often we, you know, copy and paste an AI decision and then we bring it to a meeting, and then our, you know, leaders or colleagues ask us, why or how did you come to that outcome? And we don't know how to explain that. Yeah. So if we don't train people on that, right, Felicia, it's tough. And it's also tough to follow up on the trainings and L&D programs. What do I mean by that?
[00:15:13] It's hard to evaluate the improvements if we don't try to tie them to the behaviors I explained. Let's make an example with a soft skill, for example the degree of empathy, which, I will stress very much, is a skill that is more and more important in the age of AI. Because although AI is very good at recognizing someone's feelings, it does not feel back. And therefore, the empathy mechanism is broken. And that's something only humans can do.
[00:15:43] That is, feel back and feel with the other person. And that's something very important in customer relationships, in people management, and so on. Imagine a situation where, basically, after we provided a workshop about empathy, we sit down with the people who are there and we set an objective for them to be more empathetic next semester. It's tough. They start to think, well, what does that mean? How can I measure that?
[00:16:13] How can I achieve that? That's the inherent problem with the soft skills that they are hard to measure. They're hard to develop. They're hard to transfer. And whenever we look at the L&D programs related to AI, they suffer from many of these problems. And so we need to really break it down to the behaviors that we want to see in people and not just relate to the skills.
[00:16:37] And we also have to change the format, make them more hands-on, make them more gamified, more engaging, more personalized, maybe even shorter. One of the biggest complaints around traditional L&D programs is that they're way too long.
[00:16:55] A person already spent eight hours in front of a desk, in front of a computer, and they still have to maybe absorb an hour-long video about the technicalities of AI, while instead they're used to watching one-minute TikToks that are as informative, or sometimes even more so. And so we need to, again, shape them in a way that makes them more absorbable. So it's about the content that must be related to the behaviors we want to see in people.
[00:17:22] But it's also about the format that has to be more engaging and more in line with the way we consume content nowadays, which is very different than in the past. I think that's fair. You know, our attention spans are getting shorter. Yes. With the technology, with, you know, social media and so forth. And to pack in greater amount of value in a shorter amount of time, I think, makes a lot of sense.
[00:17:49] And I love the fact that you emphasize the tie to behaviors. I mean, this is something that we practice in executive coaching a lot, which is we have a definition for, you know, what is dealing with ambiguity as a soft skill. We know what it looks like, but how are people actually showing up in that behavior, I think, is really important.
[00:18:11] And so to tie it to how people are behaving, I think, is really interesting because I see a lot of similarities of how people grow in that way. Exactly. I totally agree. And I think that's something that we have to change actually rapidly because more and more new generations are getting into the workforce and they expect something totally different than in the past. The other thing you mentioned was critical thinking skills and or like, you know, critical analytical thinking.
[00:18:40] I think that this is really important because oftentimes I will have a vision of, you know, maybe what I want to say or what I want to do for a project. And I'll say, you know, I'll input the information into ChatGPT and it will come out so wrong. And I think that's going to be a bit more important than what I had anticipated.
[00:19:00] You know, you're really having to think through what I, as someone who has a certain amount of experience in a particular subject, can bring to the table, and leverage that experience through, you know, critical thinking, and really steer the information toward where I think it should be going. Not necessarily just what is out on the table, right? And what's available.
[00:19:26] But I think that this is what pushes people and their knowledge forward: the ability to kind of blend their own experience with the outcome of what AI is producing. Totally agree. And there is this great quote by Po-Shen Loh.
[00:19:43] He's a mathematics professor at Carnegie Mellon University who always says that in the age of AI, we shouldn't teach kids just to solve their homework, but to review it, grade it, and evaluate it. And why? Because, again, we grow up in educational systems that reward the quality of our answers, but not of our questions.
[00:20:05] And, again, when we just extract answers from textbooks, we don't train our critical thinking, and then it trickles down into the workplace: we end up with employees who, especially with the use of AI, prefer to just copy and paste the outcome of an AI tool in order to be more efficient and faster. But then they maybe spread misinformation or wrong decisions, and eventually it can boil down to big business failures or problems.
[00:20:36] I think you've hit on something that's so important because I often think about this question about what are schools doing, right? I mean, they are the ones that are funneling people into the workplace and we're trying to figure out how to marry those new grads into the workplace. And it's often a very big jump.
[00:20:56] But what you're talking about is evaluating work based on how it gets done, not just the end product. And gosh, that makes so much sense to me. And the value in doing so, I think, is incredible. It's really night and day, right? It's how you come up with the answer or how you approach the problem, even if you don't get it right.
[00:21:22] Right. Because failing fast is also a value add to someone's work. So this, I think, is the appropriate shift that I would anticipate taking place, you know, as AI grows and develops and really gets woven into the workplace. One of the biggest barriers, I think, to AI adoption in the workplace is trust, which is what you've kind of been touching on.
[00:21:47] Right. The bias and, you know, being able to analyze if something is actually true or false or needs to be addressed in regards to an AI decision. What do leaders need to understand about employee skepticism toward AI? They have to understand a lot. And honestly, the first thing they have to do is understand that the level of trust in AI is still very low.
[00:22:12] And again, getting back to the survey I'm doing for my book, we also asked respondents to rate from zero to 10 the level of trust they'd have in an AI colleague. And honestly, Felicia, the results have not really been encouraging, because more than 50 percent of people graded it from one to five. So, you know, that's not even the starting point to actually collaborate with someone, if you have that low a level of trust. Right.
[00:22:39] Maybe I think that leaders beyond, of course, understanding that the level of trust is low, they have to do at least a second big thing that is at least trying to understand why is that low? And that's where some of the big issues related to, again, the way that AI works come to surface.
[00:23:01] And the first big problem that is related to the lack of trust in AI systems and tools is definitely what they call the explainability problem. So AI, in a way, is a black box.
[00:23:14] Even to its own developers, no one even within OpenAI or Google or Anthropic would be able to exactly explain to you the way that the AI tool would make a certain decision or generate a certain output. They know in general terms.
[00:23:34] But here's the problem with the fact that they lack the exact understanding, and especially that we as users lack even a general understanding. Let me make a customer-facing example first, and then an employee-facing one. Imagine that we're a financial institution and we roll out an AI system, as many financial institutions are doing now, that either approves or denies a line of credit to a customer.
[00:24:04] Right. So I'm the customer of the bank and I use this tool. I submit all my information and the bank rejects my application. So I'm denied credit. So I decide to go to my human, you know, like bank manager. I go to the bank agency or, you know, in physical terms, I just want to talk to someone. And I go there and I ask them, why was I rejected?
[00:24:30] And the problem is that the financial institution will not be able to tell me 100% transparently why the AI tool came to that decision. And imagine this then inside facing, like within the company, with our employees and so on. When we start to implement AI tools that, for example, make decisions on our behalf or help us crunch big volumes of data and so on.
[00:24:55] Well, then it goes beyond, for example, calculators in the past because calculators, you just input the data. But if you have good knowledge of mathematics, you can prove that that equation that the calculator made was right or wrong. With AI systems, we are not that able to do that.
[00:25:14] So we cannot have the same level of trust in AI that we have in calculators, because calculators are just static, easier-to-understand mechanisms that enhance our productivity but cannot really make decisions on our behalf. And so the explainability problem is really, really specific to AI.
[00:25:32] Because of the fact that AI operates as a black box, it generates a trust issue because within humans, again, within the company or, you know, in our relationship with the customer, if we're not really able to 100% transparently explain the decisions we outsource to AI, well, then we'll have a trust issue. Other trust issues are related to, you know, biases in data, right?
[00:25:59] You know, we don't exactly know which data sets the AI tools we use are being trained on. So we kind of cannot have much trust in them being fair, equal, up to date. Exactly. That's also one big issue, right? We navigate, but are we really sure that it is up to date with the latest information?
[00:26:22] And then, again, I think that trust between or among humans is created by the fact that, again, if I do something for you, I expect reciprocation, right? And reciprocation is something that has always taken place between, you know, among humans.
[00:26:44] It's, you know, that mechanism for which, if I go down the street and I'm just bumping into someone I don't know, what's the only thing, you know, that I can do that makes sure that person smiles at me? It's just if I smile first, right? And that's interesting because it does not really work with AI.
[00:27:07] If we use it in a certain way, but it is programmed for a different goal, let's say, for example, maximizing outcome for the company, and I'm a salesperson, and I want to provide a discount to that customer I have a very, very close relationship with, maybe the AI tool will not approve that discount for me and I won't be able to do it. So how can you trust a tool like that that does not really feel back the emotions that we humans do?
[00:27:36] And that's why we practice reciprocation. So overall, you know, there's a bunch of issues related to trust. Leaders have to acknowledge that, but also to understand what are the causes to not outsource too much to AI, but especially don't outsource the touch points that are very much needed for humans with humans to establish trust. Right, right.
[00:28:01] That human-human interaction is what's going to bolster the trust. Got it. Have you seen companies successfully roll out AI tools within their workforce? What made their approach work? And what can we learn from it? Yes, they are rare. But I mean, they're going to be more and more, surely. But still, we haven't seen so many success cases.
[00:28:29] I would maybe mention something, you know, that is related. For example, an announcement that Sundar Pichai, Google's CEO, made in their Q4 2024 earnings call. He said that already 25% of new software code written within Google has been generated by AI.
[00:28:52] So that's a great example within Google of the ability to, again, outsource, I wouldn't say the easiest code to develop, but maybe the code we have more historical information on. Because the more historical information there is, the better the AI systems will be able to replicate it and adjust it and tweak it. So what they did is map out which tasks of developers AI can do better.
[00:29:20] And now already 25% of code being written by AI means that their developers overall, as a company, have 25% of their time back in order to refine that code, tweak the code, think about the architecture, prompt better, and so on. So I think that's a great example, of course, coming from a tech company. If we look at more traditional businesses, I think there's a great example within John Deere.
[00:29:50] And maybe not only within, but especially in the way that John Deere, the traditional farming equipment manufacturer, serves their customer. And let me explain better. I was recently speaking at a John Deere event, and before stepping on stage, I spoke, you know, with some of the leaders and I praised them for being, to my eyes, the best farming equipment company in the world. And they told me, Andrea, please don't mention that on stage.
[00:30:18] And I'm like, what do you mean? You guys, you know, manufacture farming equipment. And they're like, no, no, no, we don't consider ourselves a farming equipment company anymore. We are now a company that sells intelligence to the farmer. They told me, we have a public objective of making 10% of our revenues come from software fees. So they sell software. They embed their farming equipment with IoT devices and sensors that generate data.
[00:30:46] And through AI, they basically provide intelligence to the farmer. And I was shocked. I was positively shocked, in a sense, because, first of all, John Deere's stock price has never been that high in its almost 200 years of history. But it also goes to show that AI, when well applied, can not only, of course, improve our internal processes, but can also pave the way to launching new business models, as in this case.
[00:31:14] And again, this is the consequence of the fact that John Deere has trained their internal teams. And I've seen that: the speaking engagement I made was part of one of these programs where they basically, again, change the mindset of people from AI is a threat to AI is an opportunity. Even for us, a 200-year-old traditional company, which is, you know, very hard to see. We'll be right back.
[00:31:45] Back to the show. Wow. That's so interesting. And it does make sense. And I think that, you know, there's so many ways we can really implement AI in the workplace. I don't think that there's one way. I think that the strategy in the appropriate environment with both leadership and culture and perhaps the particular function or department, I mean, all of those pieces really customized how to implement AI.
[00:32:15] And I don't think that there's really one way. But it's a fascinating story to hear from you. I know that you've identified three transformational pillars of skills. Yes. So those three are cognitive, behavioral and emotional. Which of these do you think will become even more critical as AI advances? Yes.
[00:32:38] In my work, I divide sort of the skills that we have to improve or refine in the age of AI across these three different pillars. And I would say that the three of them are critical in their own ways. But what I can say is that each one of them is impacted in different degrees by AI. I'll explain better. So the first pillar is what I call the cognitive transformation. So it is related to all of our cognitive skills.
[00:33:06] Decision-making, creativity, knowledge, learning. And all of this is very highly substitutable by AI. So our cognitive skills. So the cognitive pillar is the one that is, I wouldn't say more at risk of being substituted. It's actually more at an opportunity of being substituted, if I may.
[00:33:32] But as a consequence of that, we have to understand that that's where the AI literacy skills come in. So if we want better answers, we need to ask better. And that's where prompting is. If we want to access, you know, better quality knowledge, we have to understand the data behind it and critically refine the output. And that's where the skill I call data sense-making steps in.
[00:33:55] If AI thinks rationally, well, we as humans have to think out of the box, through a skill that I call reperception. And that's the cognitive pillar; it's really highly substitutable by AI. The behavioral pillar is basically all the skills related to our ability to transform ideas into execution. And that's partially substitutable by AI. I would say 50-50.
[00:34:21] So I would say that AI will be much better than us, and is already much better than us, at performing those execution tasks that are familiar, repetitive, on which we have lots of data. That's where AI will be better than us. It will make fewer mistakes. It will be more efficient. It will be more productive. But as a consequence, we humans will have to focus on the unfamiliar tasks.
[00:34:50] Innovation, experimentation, learning from mistakes, as you said, becomes more important. And so that's where AI substitutes, I would say, 50-50. The emotional pillar, which is related to all of our soft skills, namely trust, empathy, adaptability, well, that's where AI almost does not substitute, because it has issues with each one of them. Again, it's not really flexible and adaptable beyond the data set it's been trained on.
[00:35:21] It's very good at recognizing emotions in people, but it's not good at feeling back. So there's an empathy problem. There's the trust problem because of the explainability and so on. And so that's where humans will, for the foreseeable future, thrive. And so I would say that the three pillars are very critical, but each of them has very different degrees of susceptibility. And as a consequence, they need new skills to support them.
[00:35:50] Those soft skills are going to get you in the end. They're a recurring theme. I mean, they pop up all the time. Yeah. Right, right. If leadership has traditionally been about decision-making and expertise, but AI can now support or even replace some of these functions, what does the future of leadership look like? That's the $1 million question, Felicia, and I'm glad you asked it.
[00:36:17] To explain the way I see the future of leadership, I'll go back in time and do a very brief historical retrospective. Because if we go back thousands of years to the age of hunters and gatherers, the early ages of human beings out there trying to survive, who would be the leader of the tribe? Right. It would be the person or people with the best physical skills. Right.
[00:36:46] The most resistant, the fastest, the strongest. Right. And of course, cognitive skills were important, but they were more equalized across people, and because of the lack of technology, they were not very easily applicable. Therefore, for thousands of years we underwent what I call the physical leadership paradigm when it comes to the skill set of what makes a good leader.
[00:37:15] Then we get to the age of the Industrial Revolution, and something changes. Because of innovation, new technologies start to pop up. And these technologies are much stronger, much faster, and much more resistant than human beings. The consequence is that we enter a new skills paradigm that I call the cognitive leadership paradigm. And why?
[00:37:44] Because if machines were better than humans at physical skills, then humans started to thrive in the workplace by having the cognitive skills to do two things. First, use the machines better. And second, think where the machines couldn't. And these were not thinking machines.
[00:38:07] So anything that required thinking and making decisions was an advantage with respect to the machines. Great. And that's the paradigm that is basically still entrenched in leadership today. The big problem is that now we have thinking machines, namely AI systems, that are much better at cognitive skills.
[00:38:31] And so if in the past, or even up to today, we think that leaders are better because they have a higher IQ, they're smarter, they're more efficient, they make faster decisions, they have more experience and more knowledge, well, then we'll have a problem, because this is not what the future of leadership will look like. I'm not saying that these skills are not important. I'm just saying that they are being commoditized through the democratized access to AI tools that we're seeing today.
[00:39:00] So what will make a difference? It all revolves around the concept of skill scarcity. A skill is valuable and makes us thrive in the workplace if most other people don't have it, and if it's in high demand because it generates an impact. And now we no longer see a scarcity of cognitive skills, but we do see a scarcity of the soft skills, the emotional skills that you've mentioned.
[00:39:28] And therefore, just as in the past making good decisions helped us make the best use of machines and be smarter, now the soft skills help us make the best use of the machines and also connect better with others. And so the future of leadership will be one that is much more strategic, one that outsources to AI most of the familiar, routine tasks
[00:39:53] in order to have more time to focus on this new set of skills, in what I call the hybrid leadership paradigm. It's hybrid because it understands that AI is a tool to enhance the quality of our work, to enhance the quality of our decisions, and to expand the time we have available to better manage and lead people. So I think, again, that the past history of leadership teaches us a lot about what the future of leadership will look like.
[00:40:23] I couldn't agree with you more, Andrea. As I've grown in my own industry and as an executive coach, I've been sharing this, I feel like, for much longer than AI has been around: soft skills are so critical, especially at the leadership level. And now they matter even more. So I can see a lot of time, effort, and growth being put into these soft skills for the future.
[00:40:52] So we are out of time, and I'm actually quite sad, because I feel like we could talk a lot more about this subject. But for everyone listening, if today's episode made you rethink how AI is changing your industry, share this conversation. And don't forget to check out Andrea's upcoming book, Me, Myself, and AI. I know I will. Andrea, thank you so much for being here today. Thank you, Felicia, for the opportunity. And thanks, everyone here in the audience.
[00:41:21] If today's episode captured your interest, please consider sharing it with a friend and leaving a review. To learn more about how CPO Playbook can support you or a leader you know with executive coaching or organizational transformation, visit us at cpoplaybook.com. Your support as a subscriber means the world to us. So thank you for tuning in. I'm Felicia Shakiba. Let's connect on LinkedIn. See you next Wednesday.


