Bob Pulver and Torin Ellis, a prominent voice in diversity, equity, inclusion, and belonging (DEIB), discuss the evolving landscape of artificial intelligence (AI) and its implications for the workforce. They reflect on the challenges of 2024, including political pressures on DEI initiatives, the importance of cognitive diversity, and the need for responsible AI practices. The discussion emphasizes collaboration, accountability, and AI's potential to contribute positively to DEI efforts, while acknowledging the need for legislative frameworks to guide ethical AI use. Bob and Torin also explore how AI can uncover hidden talent, the distinction between augmented intelligence and automation, the importance of reskilling, and the necessity of human-centric approaches to AI. The conversation highlights the challenges organizations face in addressing bias and the critical role of responsible AI in promoting diversity and inclusion. Ultimately, they advocate for a future where AI is leveraged to create equitable opportunities and enhance organizational effectiveness.

Keywords

DEI, AI, bias mitigation, cognitive diversity, responsible AI, legislation, workforce, diversity strategy, workforce reskilling, augmented intelligence, human-centric AI, diversity and inclusion, ethical AI, organizational change

Takeaways

  • Political pressures have negatively impacted DEI initiatives.
  • The removal of 'equity' from DEI discussions is concerning.
  • Progress in AI and DEI is being made, but more is needed.
  • Cognitive diversity enhances decision-making in organizations.
  • Responsible AI requires accountability from vendors and organizations.
  • Legislation is lagging behind technological advancements.
  • The future of work will require reskilling to keep pace with AI.
  • Organizations must invest in reskilling to adapt to AI advancements.
  • AI can help uncover overlooked talent in hiring processes.
  • Augmented intelligence should enhance human capabilities, not replace them.
  • Human-centric AI is essential for ethical and responsible technology use.
  • Many organizations are hesitant to embrace AI due to concerns about bias.
  • AI can significantly improve diversity and inclusion efforts in hiring.
  • Responsible AI practices are crucial for maintaining ethical standards.

Sound Bites

  • "I'm chasing the word promise 365 days this year."
  • "I'm all things people."
  • "I believe that progress is being made."
  • "Cognitive diversity is incredibly important."
  • "You can use AI to unearth talent."
  • "AI is augmenting your own capabilities."
  • "AI is good for business."
  • "We're building a manager intelligence platform."

Chapters

00:00 New Year Reflections and Words of the Year

01:48 Torin Ellis: Background and DEI Work

03:07 Reflections on 2024: Political Climate and DEI Challenges

06:45 AI's Role in DEI and Bias Mitigation

13:08 Cognitive Diversity and Its Importance

19:54 Responsible AI and Legislative Challenges

24:45 The Future of AI and Workforce Implications

31:45 AI's Role in Uncovering Hidden Talent

33:46 Augmented Intelligence vs. Automation

35:39 Human-Centric AI: Ethics and Responsibility

38:03 Understanding AI's Limitations and Bias

42:25 The Impact of AI on Diversity and Inclusion

45:50 The Need for Responsible AI Practices

49:48 Addressing Systemic Inequities with AI

51:20 Building a Better Future with AI


Torin Ellis: https://www.linkedin.com/in/torinellis

Torin Ellis Brand: https://torinellis.com/

Reducing Bias in HR Using AI: https://www.plum.io/report-reducing-bias-in-hr


For advisory work and marketing inquiries:

Bob Pulver: https://linkedin.com/in/bobpulver

Elevate Your AIQ: https://elevateyouraiq.com


Thanks to Warden AI (https://warden-ai.com) for their sponsorship and support of the show! Warden is an AI assurance platform for HR technology to demonstrate that AI-powered solutions are fair, compliant, and trustworthy.

Powered by the WRKdefined Podcast Network. 

[00:00:09] Hey everyone, it's Bob. Welcome to another episode of Elevate Your AIQ. Today I'm thrilled to share my conversation with my friend Torin Ellis, founder and namesake of the Torin Ellis brand, a leading voice in diversity, equity, and inclusion. And certainly into the DEI and bias mitigation rabbit hole we went, including insights from a hot-off-the-press report that Torin and his team, in partnership with Aptitude Research and Plum, just released. You will find a link to the report in the show notes for this episode.

[00:00:39] We dive into how AI is reshaping the workplace and the opportunities it creates to drive progress in DEI. Torin and I talk about the challenges organizations face in mitigating bias, why cognitive diversity is crucial for better decision-making, and how responsible AI can uncover hidden talent. It's an inspiring and practical discussion about how we can use AI to build a more equitable future. I thoroughly enjoyed my conversation with Torin and I'm confident you will too. Thanks so much for listening.

[00:01:07] Hello everyone. Welcome to another episode of Elevate Your AIQ. I am your host, Bob Pulver. Happy New Year. Happy 2025. Today I am joined by my friend Torin Ellis. How are you doing today, Torin?

[00:01:20] Hey, so look, let me just tell you, it feels like I got on a muscle shirt, but I want you to know that I'm feeling extremely good. It's 2025. I got you on one screen smiling. I'm on the other screen smiling. Let's have a good conversation.

[00:01:34] Awesome. Sounds like a plan.

[00:01:36] So you have a word of the year already picked out. What do you got?

[00:01:42] Yeah, mine is promise. I'm chasing the word promise 365 days this year. I'm extending that wish upon everyone that I encounter, if you will. I want them to reap the benefit of all of the promise that their life has in store for them.

[00:02:02] So that is my word for 2025.

[00:02:06] Awesome.

[00:02:07] Awesome. Love it. My word is optimism. Just because the last 12 to 18 months have been kind of a roller coaster, and I'm just really optimistic that 2025 is going to be a breakout year in a lot of ways.

[00:02:23] That's what we're doing. And so I'm happy to be in conversation with you. Let's do this.

[00:02:28] Yeah. Yeah. So tell us a little bit about your background and the work that you're doing.

[00:02:35] Yeah. So for those that don't know, Torin Ellis, diversity strategist, risk mitigator.

[00:02:41] I've been a consultant for a bit more than a decade. I came into the work of diversity and inclusion consulting through being a recruiting practitioner.

[00:02:51] I was always external building high performing teams for a number of organizations.

[00:02:57] And what I decided to do is when I recognized that there was a lack of representation in these incredible organizations, I shifted from doing transactional recruiting to consulting, to speaking, to now doing analytical work and soon will be a founder of a manager intelligence platform.

[00:03:18] So as you can see, Bob, from that description, I'm all things people.

[00:03:25] Awesome. Let's dig into this a little bit.

[00:03:27] But before we get into some of these topics that you and I hold near and dear around like, you know, DEI and bias mitigation and how AI can help with all of that, you know, since we are kicking off the new year here, just wondering if you have any specific sort of reflections, highlights, lowlights.

[00:03:48] About what transpired last year.

[00:03:51] Yeah, I mean, I do. I have several. I think the two that really, really stand out for me.

[00:03:59] Number one was external to our space.

[00:04:02] It was the political pressure, the ideological attack on diversity and inclusion with that political slant and how it was weaponized against those of us that are doing incredible work.

[00:04:19] Listen, I'm one of those individuals who recognizes that there were a lot of people who got into the D&I space post George Floyd.

[00:04:27] I even hate referencing that.

[00:04:29] But the point is they got into the space because they were emotionally connected to the work,

[00:04:35] but they weren't necessarily grounded in the type of acumen required to do the work.

[00:04:42] And so I understand that there were just some, you know, ways that the work had been leveraged and it may not have been attractive,

[00:04:50] but that should not have been the green light, the signal for it to become weaponized, if you will.

[00:04:58] And then the second thing that was very disappointing for me, and some would say that it's just a matter of naming,

[00:05:06] that you are, I'll use one of your words, Bob, nitpicking.

[00:05:10] But what was really disappointing was when SHRM decided that they were going to drop equity from diversity, equity, inclusion, and belonging.

[00:05:21] And so I felt like that exercise of shuffling letters was a fool's errand on their part.

[00:05:28] I felt like that was the time, given all that we had seen in 2023, 2024, that they should have put a flag in the ground

[00:05:37] and they should have been, you know, the standard bearer.

[00:05:41] And no, it's required.

[00:05:43] As a matter of fact, if we really look at the way that demographics and everything else is shifting,

[00:05:51] not just here in the U.S., but across the globe, if we really are paying attention to what's happening,

[00:05:57] what SHRM should have done is said, we're going to advocate for adding more letters to the equation.

[00:06:04] So those were the two things that were the biggest, you know, I don't want to say biggest,

[00:06:10] but those were the two glaring disappointments for me in 2024.

[00:06:14] No, I think those are important call-outs.

[00:06:16] They had to know about the backlash that that would incite, and it just seemed like a pointless exercise.

[00:06:25] But it wasn't pointless, you know, just to stay there for 90 seconds.

[00:06:29] It wasn't pointless.

[00:06:31] And so while I don't have intimate knowledge around why the decision was made, who was talked to or consulted about the decision,

[00:06:40] who may have signed off on the decision, what I can surmise is that it too was based off of political implications.

[00:06:48] And I believe the leadership of the organization wanted to fashion themselves, fashion the entity as being neutral in this fight for humanity.

[00:07:00] I felt like the organization is trying to make sure that they had access to the incoming administration.

[00:07:07] They were playing both sides of the field, and I just felt like that was not necessarily where they should have been.

[00:07:14] I guess I just think about the sort of branding problem that DEI and DEIB, you know, have more generally.

[00:07:21] I just felt like that was, you know, you're not solving the broader sort of issue.

[00:07:27] You're just sort of shuffling the deck a little bit.

[00:07:30] But, you know, as we think about these topics, and I know we're going to talk about this report that you've got coming out,

[00:07:37] which I'm excited to get to see people's reactions to.

[00:07:40] But just before we get into the report about reducing bias through AI, I just wanted to, you know,

[00:07:46] take a step back more broadly and think about everything that you just talked about, but, you know,

[00:07:51] framing it in the context of everything that's going on with AI.

[00:07:55] And I know we'll talk about responsible AI and what that means, but just like perspectives of, you know,

[00:08:10] sort of AI adoption and they may be adopting it because it's the, they think it's the right thing to do,

[00:08:18] or it's the, you know, the timely thing to do, or it's going to take, you know, work off their plate or whatever.

[00:08:25] But, you know, as you think about DEI and bias mitigation and fairness and things like that,

[00:08:30] I mean, in the age of AI, are we moving in the right direction?

[00:08:36] Did we move the needle at all with what you've seen?

[00:08:39] I want to hit this with a bit of optimism.

[00:08:42] The very same tenor in which we approached the report, we wanted to do it in a positive slant.

[00:08:52] And we did that intentionally because we know that there are many reservations around the use of AI,

[00:09:00] the lack of representation, bias being built into the systems, so on and so forth.

[00:09:07] And so it was very intentional that we took a positive approach to the drafting of the report.

[00:09:15] To your question, do I feel like progress has been made?

[00:09:19] Absolutely.

[00:09:19] You know, listen, I would like for the ball to be further down the field.

[00:09:26] I would like for the race to be moving at a quicker pace, if you will.

[00:09:31] But I'm happy that the ball is moving.

[00:09:34] I'm happy that the runners are in the race and that the baton is moving from one person to another, so to speak.

[00:09:42] And so I do believe that we are making progress.

[00:09:44] I believe that organizations are becoming a bit more aware of not just the fact that the technology exists,

[00:09:51] but how can we implement and deploy the technology in our organization in a way that is good for our culture,

[00:10:00] good for engagement, good for productivity, good for retention,

[00:10:03] and all of the other things that really build a phenomenal organization.

[00:10:07] So my response to you is yes.

[00:10:10] But I really believe that the progress that we are making is because we are beating the drum.

[00:10:16] We're not just allowing the founders and the developers and the people that are crafting these solutions to have a say.

[00:10:25] I think the marketplace is also speaking up and influencing the direction that some of these solutions go.

[00:10:32] And so, yes, I believe we are making progress.

[00:10:34] I agree.

[00:10:35] I see progress being made as well.

[00:10:38] And some of it, I think, is intentional.

[00:10:42] And some of it is, I don't want to say unintentional, but some of it is sort of,

[00:10:47] we're getting some of the DEI and bias mitigation benefits as a result,

[00:10:53] even if it wasn't the original intent of deploying the tools, right?

[00:10:59] Like you and I talked about this the other day.

[00:11:01] Like, you know, you may have started down the path of deploying AI because it was going to make you,

[00:11:07] you know, more efficient or make your team more effective and things like that.

[00:11:13] But came along with that, even if it wasn't your original intent, is you're making fairer decisions as a result.

[00:11:23] It is a fair statement, but let's move away from AI for a moment and we'll go back to, you know,

[00:11:28] let's just say, and I forget how they say this, but if you design for the fringes, everyone will benefit.

[00:11:37] So let's look at ramps on sidewalks.

[00:11:40] Everyone uses ramps.

[00:11:42] It's not just people in the disability community.

[00:11:45] Everyone uses the ramps, bikers, runners, people walking with strollers.

[00:11:50] Everyone uses the ramps.

[00:11:52] Remote controls were designed for a particular audience group community.

[00:11:57] Everyone uses the remote control to control their TV.

[00:12:01] So when I think about the indirect implications of AI having some impact on bias or impacting the D&I effort or impacting the inclusion, the belonging, the equity, equality efforts inside of organizations, outside of organizations.

[00:12:21] I think that that's a natural evolution of our designing.

[00:12:56] And that's a natural evolution.

[00:12:57] And if you're looking at the indirect application and presence of AI in spaces like bias and others, I think that that's a good thing.

[00:13:05] And again, that contributes to the progress that I feel like we are making.

[00:13:09] Yeah, no, absolutely.

[00:13:10] And I think the more that we can recognize some of that through our own insight gathering within an organization or across a community, I mean, it shouldn't take legislation for people to recognize what's right.

[00:13:25] Listen, it's going to require legislation, you know, and so as we look at the top of the conversation, I wasn't happy with, you know, D&I being politicized.

[00:13:37] But I can tell you right now, I do believe that public policy is necessary and that we should not be guided only by an EU, you know, declaration on what ethical AI or policies around AI.

[00:13:53] That shouldn't come from across the waters.

[00:13:56] We need to have the wherewithal to design and develop that sort of policy and structure here.

[00:14:03] Because, I mean, listen, if we allow these folks to keep going and doing what they want to do, Bob, listen, I'm optimistic, but I'm also realistic.

[00:14:12] And they will operate in ways that are very nefarious.

[00:14:16] So I do believe that we need to have both public policy and personal responsibility with an asterisk as it relates to development and deployment of these solutions.

[00:14:27] I agree.

[00:14:29] I just think that part of it is legislation can't necessarily keep up with the innovation that's developed, right?

[00:14:37] And why not?

[00:14:39] Legislation and politics moves very slowly and they can't get their act together and they can't move as fast as technology companies can develop new capabilities.

[00:14:51] I mean, we see this every week.

[00:14:53] So let's enjoy an exercise for just a moment.

[00:14:55] You know, the very same people that are creating technology are the same people that could be in politics and policy.

[00:15:08] They could be lobbyists.

[00:15:10] They could be, you know, the people running the halls of Congress and whatnot.

[00:15:16] And so I think it's a matter of choice.

[00:15:19] And years ago, Bob, I used to say that people are people first before their title, before any of these other accolades.

[00:15:29] We're just people.

[00:15:30] And so I'm just sparring with you and having fun.

[00:15:33] And yeah, it is hard for politicians to keep up because oftentimes they are not wise enough to know where we're going with some of these solutions or where some of those, you know, developers, founders are going with some of these solutions.

[00:15:48] But I do believe in all sincerity that we could be better.

[00:15:53] And I think what demands and will push us to being better is that voice that I was speaking of, you know, five or so minutes ago, that voice has to say, no, I demand more of my politician, my elected official, both on the state and the federal level.

[00:16:09] I demand more.

[00:16:11] I want to hear you talk more about your awareness around technology and how you might attack and or approach some of these solutions.

[00:16:20] What sort of legislation might you put forth so that we don't find ourselves still chasing, you know, six months, 12 months, whatever the case may be.

[00:16:31] So I'm playing with you, but I do believe that we can do better.

[00:16:34] So I think no matter what domain we're talking about, one of the aspects of diversity that I always hone in on is cognitive diversity.

[00:16:48] Sure.

[00:16:49] Right.

[00:16:49] So forgetting about ethnicity and appearances and, you know, age and, you know, all of these, you know, physical sort of characteristics and factors.

[00:17:00] I think about cultural experiences.

[00:17:04] I think about, you know, life experiences, different types of expertise, different domain knowledge.

[00:17:10] So some of this, again, regardless of domain.

[00:17:15] So whether it's politics or it's technology or it's healthcare or whatever, better decisions are made when you have cognitive diversity in your decision-making population.

[00:17:27] Right.

[00:17:27] So I know you talk about this in your report around these multidisciplinary teams.

[00:17:35] Right.

[00:17:35] And so whether that's on an AI, you know, governance committee or AI ethics committee.

[00:17:41] But as you make decisions as a cohort, having all those diverse perspectives is incredibly important to make sure that all voices and all angles are considered.

[00:17:57] As you develop these policies, without getting too much into politics.

[00:18:03] I mean, a lot of that is driven by the loudest voices or the deepest pockets.

[00:18:07] And so that's where things go, you know, astray or they're not willing to, you know, concede particular points or make concessions and things like that.

[00:18:17] So there's a lot that goes into that.

[00:18:19] But when it comes to organizational decisions, I just think, you know, this is why we see statistics.

[00:18:27] I think David Green just posted something, I think it was from McKinsey around and some of the Insight 222 work they've done around, you know, the business success of companies that embrace, you know, diversity.

[00:18:44] And yes, some of it is because they have more women on their board or on their leadership team.

[00:18:50] And some of it's, you know, some of that typical, you know, sort of DEI, you know, characteristics.

[00:18:56] But a lot of it does go back to bringing those diverse perspectives, the cognitive diversity that gives you the sort of collective intelligence that you need to make better decisions.

[00:19:08] And that's why companies succeed.

[00:19:10] That's why they increase their innovation capacity.

[00:19:13] And that's why they're more, you know, empathetic to different, you know, populations.

[00:19:17] Yeah, shout out to Aptitude Research, Kyle and Madeline for partnering with our team to, you know, produce Reducing Bias in HR Using Artificial Intelligence.

[00:19:30] Absolutely shouting out Plum for their collaboration and, you know, coming on and saying that this was good enough work that they wanted to get behind and bring to the marketplace.

[00:19:40] So truly, truly, truly, truly adore and appreciate the good teams at Aptitude and Plum.

[00:19:48] Listen, Bob, when you talk about cognitive, I'm all for that because cognitive really is, it's so much.

[00:19:55] It's communication style.

[00:19:57] It's learning styles.

[00:19:58] It's IQ.

[00:19:59] It's being an introvert versus the extrovert.

[00:20:02] It's mental ability, emotional intelligence, awareness.

[00:20:06] You know, and when you going back to the piece that stood out for me when you said the loudest voice in the room, I want to just take a moment to insert.

[00:20:15] There's a book called Quiet by Susan Cain, and it focuses on introverts.

[00:20:21] I want people to read the book so that they can understand how to navigate and nurture those introverts that are on their team.

[00:20:29] But when you talk about cognitive, you're absolutely right.

[00:20:32] It really is the dimensions that we bring to the equation that help us to be better organizations.

[00:20:39] It's not just the race or the gender.

[00:20:47] Finding a great career has always been a challenge.

[00:20:51] But today, with the massive changes underway in just about every sector of the economy, in just about every country in the world, finding a great new career is even more challenging.

[00:21:05] And if you're a student, recent graduate, or someone else early in their career, it's even harder for you because you just don't have the experience that those who might be 10, 20, 30 years older than you have.

[00:21:20] The answer?

[00:21:22] The podcast.

[00:21:23] From Dorms to Desks.

[00:21:26] A podcast by College Recruiter, the job search site, where every week we take a deep dive into a topic specifically of interest to candidates who are early in their careers and looking for a great part-time, seasonal, internship, or other entry-level job.

[00:21:49] Listen today.

[00:21:51] It's not a zero-sum game on how we can put a target on certain communities and groups of individuals.

[00:22:03] It's not about excluding people.

[00:22:05] It's about how do we make sure that we bring in the various dimensions of separation, of variety, of disparity?

[00:22:15] How are we looking at our collective group of people so that we are developing products and services and we're entering into different communities and geographies to do business?

[00:22:27] It's headcount management and succession planning.

[00:22:30] It is so extremely nuanced.

[00:22:32] And that is the reason why, when I talk about diversity and inclusion, I'm absolutely an unapologetic.

[00:22:40] I am taking no pause.

[00:22:42] I'm taking no prisoners.

[00:22:43] I suffer fools.

[00:22:45] What do they say?

[00:22:47] Don't suffer fools lightly.

[00:22:48] I'm not an individual that you can really argue with as it relates to the efficacy and the efficiency and the value of diversity and inclusion because I know the power of this work.

[00:23:00] And so I love that you highlight the cognitive side.

[00:23:04] I look at the physical side, the relational side, occupational side, societal side, the value side.

[00:23:12] There's so many dimensions, and that's what I'm focused on when we're working with organizations.

[00:23:17] Yeah, excellent.

[00:23:19] When we talk about responsible AI, because I think this is going to be a key theme this year.

[00:23:29] As you and I have talked about, legislation has been progressing, not just in the EU, but in parts of Asia and the US.

[00:23:36] We'll see what happens in a couple of weeks in terms of AI policy and things like that.

[00:23:42] But the legislation is there, and people need to be paying attention to that.

[00:23:48] The requirements on talent acquisition teams in particular, anything related to HR and people's livelihoods is going to be under scrutiny.

[00:24:00] You would consider those high-risk use cases.

[00:24:04] I think there's going to be certainly attention paid in the US, again, whether that's a federal policy or there's state and municipality level policies like in New York City.

[00:24:19] But I also think, again, to the point earlier about legislation sort of lagging behind, we need to hold each other accountable.

[00:24:26] We need to hold vendors accountable for being responsible by design, because when AI is everywhere, we need to make sure that people are using it in the right way and for the right use cases.

[00:24:40] And so in some ways, it's got to be self-policing, at least until that legislation is in place.

[00:24:48] But I just think from a reputational standpoint, as an employer brand, and you know this not better than me, but as an employer brand, I mean, you need to really think about how you're treating not just your employees, but potentially your candidates.

[00:25:04] Well, you do.

[00:25:04] And again, we have struggled with the ethicalness.

[00:25:08] I took a positive approach, but the bottom line is there was a definitive starting point for how we fashioned the report.

[00:25:18] I purposely went back five years to 2019 because there was an article, and the title of the article is escaping me right now, but it came out from the AI Now Institute.

[00:25:35] And when I read that article, and I want to say it was March of 2019, it was one of those things that sort of opened up my eyes as it related to AI.

[00:25:44] And I began to pay a bit more attention to it, albeit late or later, because AI has certainly been around a lot longer than 2019.

[00:25:54] But that's when I opened my eyes up to it.

[00:25:56] And so when I made the decision in July of this year to write this report, I said, that's the point that I'm going to go to.

[00:26:05] I want to start in March of 2019.

[00:26:07] And I want to, you know, certainly not in an academic way, but I want to, in some way, give respect to the light bulb going off and see what growth and progress we've made up until this point.

[00:26:22] And so certainly this was the positive side.

[00:26:25] I can look at the negative side.

[00:26:28] I can look at where we're lagging.

[00:26:29] I can look at ways in which AI is not ethical.

[00:26:33] It's not technically proficient.

[00:26:35] It's not beneficial to society.

[00:26:37] We can certainly look at those ways.

[00:26:40] But what I will say is that I do appreciate the direction that some folks in our space are taking.

[00:26:46] Folks over at Career Crossroads and the collective that they have together and how they're looking at, you know, trying to evaluate vendors that we are bringing into our organizations.

[00:26:57] The work that Aptitude Research is beginning to do around evaluating and scoring and being an external, I guess, external body and looking at some of these AI solutions and other technical solutions.

[00:27:15] So, again, we, not just politicians, we as individuals, as practitioners, as leaders, as executives, we too are in some ways lagging.

[00:27:27] And so what we have to make sure that we are intentional about is that we are curious and we are applying ourselves to try to keep pace with these.

[00:27:38] Listen, we got some creative people.

[00:27:40] You know, I don't know where these folks come up with some of these ideas, but we got some extremely creative people.

[00:27:46] So we do have to ourselves also be intentional about trying to make sure that we are up on these technology solutions as they are being applied.

[00:27:56] And the last thing that I'll say is that, you know, again, and it's the pressing question that I kept going back to in 2024.

[00:28:05] And it was a question that Sam Altman put out in January, February of last year.

[00:28:11] And it asked, or it posed: how long is it going to take for us to have a one-person unicorn?

[00:28:19] Yep.

[00:28:19] So I'm asking myself, you know, is this something that is going to have a major disruptive impact on people?

[00:28:30] That is a guiding question that I am asking myself.

[00:28:34] Are we able to reskill enough to keep people employed and earning income and taking care of their family?

[00:28:43] Is HR doing enough to protect the workforce?

[00:28:48] Are leaders doing enough to protect their workforce because of the rate in which this technology is moving?

[00:28:56] So I do believe that we also have a responsibility to make sure that we, Bob, are keeping our finger on the pulse so that that frequency is not getting away from us.

[00:29:07] Yeah, that was a very provocative question.

[00:29:10] I know it's come up.

[00:29:11] It came up in December.

[00:29:12] I remember some people were asking, is this 2025 going to be the year that there could potentially be a one man or one entity unicorn?

[00:29:23] I don't know if 2025 will be it, but in theory, I mean, I could see it happening in 2026.

[00:29:31] I don't know.

[00:29:32] But you're right.

[00:29:32] I mean, what does that even mean?

[00:29:35] You can code all this stuff and you can have it play all these different roles.

[00:29:40] And maybe not one like AI agent, but you could have one sort of agentic system that handles a lot of that.

[00:29:46] And the rate at which, you know, the technology is progressing, you know, it could handle a lot of individual roles.

[00:29:52] I think the tricky part is the connective tissue, you know, that ties it all together.

[00:29:57] And then how does it make decisions incorporating all of those different, you know, simulated perspectives, I suppose.

[00:30:05] But it's definitely a thought provoking concept.

[00:30:08] But to your point, I think for the workforce, I'm concerned, right?

[00:30:14] I'm concerned that organizations aren't investing enough in that reskilling or they're not moving fast enough to recognize the disruption that's coming.

[00:30:25] Because, I mean, we talk a lot about, of course, what AI is doing.

[00:30:29] But there's a lot of other, you know, trends and technologies that could intersect with AI that in certain industries and certain roles could, you know, accelerate the adoption or accelerate the need for, you know, reskilling or upskilling workers.

[00:30:45] So, you know, I always encourage everyone, you know, whatever path you want to be on, whether you want to take a different trajectory and reskill to another role, another discipline, or you want to upskill yourself.

[00:30:58] Either way, you need to get yourself into that top, you know, sort of quartile of folks in that space to sort of stay ahead of what's coming.

[00:31:07] So this is a great place for me to insert, you know, a positive application of AI.

[00:31:13] To that point, you know, you can use AI the way that Plum has developed the solution.

[00:31:20] You can use it in ways that help organizations to unearth talent that had often been ignored, that had often been overlooked, if you will.

[00:31:30] You know, inside of the reducing-bias paper, we have a case study where Plum looks at a global financial services firm, 90,000 employees.

[00:31:42] I believe that they have a footprint in like 30 countries.

[00:31:45] And, you know, they were struggling with finding talent.

[00:31:49] They were going to, like most organizations, a handful of schools.

[00:31:56] I think it was like 10 to 12 academic institutions.

[00:32:00] Of course, they were using resumes.

[00:32:02] The resumes were kicking people out.

[00:32:04] And what they decided to do was partner with Plum.

[00:32:07] And in that partnering with Plum, they were able to leverage an AI assessment, if you will, remove the resume from the equation.

[00:32:16] And now they've shot up their results in terms of diversity hiring, expanded those dimensions that I talked about a moment ago, the efficacy, the efficiency, the retention.

[00:32:28] All of those things are through the roof.

[00:32:30] And so I do believe that if organizations are committed and serious, curious, asking the right questions, understanding exactly what it is that they're trying to accomplish in making this technological investment and what the ROI needs to look like in a variety of different ways,

[00:32:50] I absolutely believe that we can leverage AI in ways that are extremely ethical.

[00:32:56] They are ethical.

[00:32:57] They are technically sound.

[00:33:01] And they're beneficial to society.

[00:33:03] So I absolutely believe that we can do that the way that Plum is doing it.

[00:33:06] Yeah, that's a great example.

[00:33:08] And it causes me to just sort of caveat what I said before about moving yourself up to the top quartile or whatever.

[00:33:17] I don't mean to say that AI's primary function is to replace people.

[00:33:24] But the reality is that some companies approach it that way.

[00:33:29] Ideally, AI is augmented intelligence.

[00:33:32] It is augmenting your own capabilities.

[00:33:35] It is your co-pilot.

[00:33:37] It is helping you learn things faster, get things done faster, think more deeply about topics, help you develop content and other sort of assets for yourself and for your team and the organization.

[00:33:53] So ideally, that's the case.

[00:33:55] But we can't ignore the fact that some people are sort of lumping automation and AI together because in a lot of cases, AI has basically become just this sort of cognitive, intelligent automation capability to help with workflows and stuff like that.

[00:34:13] So we've just got to pay attention to all the things that are happening around us.

[00:34:17] And the circumstances and conditions are going to be different depending on your role, your industry, and even your organization and their adoption rate of a new technology and their ability to experiment.

[00:34:30] And when you use augmented, you know, the way that you described it 30 seconds ago, is that synonymous with human-centered AI or is that interchangeable?

[00:34:41] Do you see those two as being the same augmented the way that you described it, human-centered AI?

[00:34:47] How do you see that?

[00:34:48] Well, everything we do with AI should be human-centric.

[00:34:52] So I know there's folks in the responsible AI sort of camp and there's people in the human-centric AI camp.

[00:34:59] I see those two as very synonymous.

[00:35:03] I don't see how you can use it responsibly, which includes, you know, ethics and fairness and bias mitigation and all those things.

[00:35:11] I mean, by its nature, those efforts are human-centric.

[00:35:15] From an augmented intelligence perspective, it is a sort of variation of being human-centric.

[00:35:23] The point is, the human is the asset, is the focus, and we're layering artificial intelligence on top of that to make the human more effective.

[00:35:36] So in a way, you could think of AI as sort of this, you know, in physical terms, maybe it's like an exoskeleton, right, that helps you lift more or run faster or, you know, do things better.

[00:35:49] You're sort of enhancing your own capabilities.

[00:35:53] But in a way, it's also that sort of second brain.

[00:35:57] It's an extension of yourself to get you to think about things a little bit more deeply or, like I said, to help you, you know, create, you know, content or things like that to just be a sounding board in some ways for the work that you need to do.

[00:36:13] So I definitely see, you know, a significant overlap between augmented intelligence and human-centric AI.

[00:36:21] On the latter, I guess I think human-centric is incorporating all of these sensitivities to how people, you know, behave and interact and feel, right?

[00:36:33] So there's an emotional and empathic, you know, piece to it as well, where augmented intelligence is often more on the knowledge and cognitive side.

[00:36:46] Love that. Thank you. Love that.

[00:36:48] Yeah, yeah.

[00:36:49] I want to take a break real quick just to let you know about a new show we've just added to the network.

[00:36:55] Up Next at Work, hosted by Jeanne and Kate Achille of the Devon Group.

[00:37:01] Fantastic show.

[00:37:17] If you're looking for something that pushes the norm, pushes the boundaries, has some really spirited conversations, Google Up Next at Work, Jeanne and Kate Achille from the Devon Group.

[00:37:17] Were there any other interesting observations from, you know, the focus groups and the folks you talked to for the report?

[00:37:26] Because it seemed like just from the sneak peek that you gave me, it seemed like a lot of folks were still sort of trying to figure out exactly where AI's sort of limitations are and how much it's actually helping reduce bias or its potential to reduce bias.

[00:37:46] That is one way to say it.

[00:37:47] It also seems like a lot of folks are reticent.

[00:37:51] You know, they're still sitting on the pause button.

[00:37:53] They are, you know, as you pull up to the stadium, they haven't taken their seatbelt off and gotten out of the vehicle and started the revelry of being at the game.

[00:38:03] You know, a lot of people are still sitting in their vehicle watching all of the other people walk into the stadium.

[00:38:09] I think one of the things that jumped out for us was that, of the organizations that responded to our query, only 27% specifically made decisions around AI to reduce bias.

[00:38:30] So that leaves a large percentage of organizations that are making investments, or not, but they're not including bias, or the broader D&I umbrella, in that consideration.

[00:38:48] Like one of the things that was really, really disappointing for me, Bob, was that a lot of organizations clearly did not care whether or not the AI investment had any impact, preferably a positive impact, but had any impact on their DEIB effort.

[00:39:12] Wasn't even asked.

[00:39:13] Wasn't even asked.

[00:39:14] It wasn't even a consideration.

[00:39:16] So that was a factor.

[00:39:17] That was a data point that jumped out.

[00:39:19] It was very disappointing to myself and my team because I do believe, again, that diversity and inclusion includes everyone.

[00:39:31] Everyone.

[00:39:31] So I believe that when I'm making an investment, whether it be around training material, how we hire, employer branding, supplier diversity, what vendors we use, policy procedure, I can keep going.

[00:39:47] I look at every aspect of the organization as impacting everyone.

[00:39:53] And so if I'm making decisions around investing in technology, but I'm not including consideration around how it impacts everyone, that is a bit problematic for me.

[00:40:06] So that only 27% were making an AI-related decision because they specifically wanted it to reduce bias.

[00:40:14] That was a bit alarming.

[00:40:17] But I will say that one of the positives in the report was that in so many areas, Bob, the organizations that have deployed AI are seeing efficacy and efficiency increases of a very measurable nature.

[00:40:37] Like retention is better.

[00:40:40] Engagement is better.

[00:40:43] Diversity of slates are better.

[00:40:46] Response times to their candidates, their pipeline, their prospects, their applicants, better.

[00:40:52] So I will say that the data in our report, it did two things.

[00:40:57] One, it affirmed a great deal of the work that we do in consulting.

[00:41:02] It also lined up well with some of the other research papers that are out there by larger organizations.

[00:41:10] And that's a feather in our cap.

[00:41:13] We felt good.

[00:41:13] We felt like we put forth a decent mix of questions.

[00:41:18] We got a decent mix of responses.

[00:41:21] And so we felt like it's a good piece that, you know, we really want people to read and embrace in the market.

[00:41:27] Yeah, no, I think it's a great, it's a great piece of work.

[00:41:29] And I know there's going to be, you know, some follow on, you know, work to, you know, expand on some of the questions and some of the details.

[00:41:38] You know, it's frustrating, right?

[00:41:39] It's frustrating to see people not believe the evidence.

[00:41:44] I mean, there's literally no evidence to refute all the observations that you just cited, right?

[00:41:52] Like no one is showing me that AI is making things worse.

[00:41:57] People have to use these anecdotal citations of, oh, look at what happened to Amazon.

[00:42:04] This one program at Amazon, you know, six years ago where it was prioritizing, you know, males over females.

[00:42:12] Or look at this one, you know, instance or whatever.

[00:42:15] I'm like, you've got to be kidding me.

[00:42:17] It's like with self-driving cars where there's like one accident per, you know, two million miles of driving.

[00:42:27] Compare that to when you have a human being, you know, behind the wheel, there's an accident every two miles.

[00:42:33] You know, it's something ridiculous, right?

[00:42:35] So you have to recognize when progress is being made and when something is showing overwhelming evidence that this is contributing positively to organizations, not to mention to the morale of the employees who want this.

[00:42:53] So you can debate, you know, whether you need a chief diversity officer or whether you need dedicated funding for this particular group or whatever.

[00:43:02] I mean, overwhelmingly employees want this and they recognize the value of it and they appreciate it and they look for it in an employer brand.

[00:43:09] And it helps keep them engaged and keeps them loyal.

[00:43:14] So it helps with attrition.

[00:43:16] And it just seems like what other evidence do you need?

[00:43:20] And I think your report calls out, you know, some significant, you know, differences between the people who get it and the people who don't.

[00:43:28] I mean, some of the stats I saw were, it was like double or more, you know, the value that you're getting when AI is deployed.

[00:43:36] And I think, as we've talked about, I think responsible AI and DEI are, you know, inextricably linked.

[00:43:45] You know, AI is everywhere.

[00:43:47] And we push for responsible AI, you know, practices, which from a behavioral standpoint, you know, focuses on, you know, fairness and bias mitigation and things like that, that we can sort of control and monitor.

[00:44:03] Then how is DEI going to be successful when AI is everywhere?

[00:44:08] The only way is if AI is designed and used responsibly.

[00:44:12] Yeah, but you're one of Malcolm Gladwell's outliers, you know, and so fortunately, I appreciate having you in the foxhole.

[00:44:20] Like, I do want to go to battle with a person like you, but you're an outlier.

[00:44:25] And what we need is for more people in the general homogenous marketplace, the dominant, whatever adjective you want to use.

[00:44:36] We need more of them to think that AI and DEI are inextricable, inseparable.

[00:44:44] We need more of them to think that way.

[00:44:46] You are, right now, an outlier.

[00:44:48] But I will tell you, you know, again, we tried to massage into the report some of those examples like the Amazon or the medical profession, or we put some examples in around real estate, financial services.

[00:45:05] We did a beautiful job of creating narration and telling story to show the application of AI in people's personal and professional lives.

[00:45:16] But we wanted to do it in a way that kept you engaged rather than repulsed, if you will.

[00:45:24] We wanted you to see that this is what's happening.

[00:45:28] Very similar to, you know, the ramps in the sidewalk and the remote control portion of the conversation.

[00:45:36] We're moving in the right direction.

[00:45:38] And so we're not using this report to amplify the negative of Amazon or ageism or healthcare or some of the other references and examples that we place in there.

[00:45:50] We're using it so that people are, one, aware, and two, informed in how they question and set strategy in their organization, how they make determinations around making this investment, how they decide to evaluate the efficacy and the ROI.

[00:46:10] Because we found that too, Bob, we found that a number of organizations were not interested in making an investment in an AI solution to reduce bias.

[00:46:20] But what we learned was that for some, it was because they had already or recently made an investment.

[00:46:28] Not that they're opposed, but we want to make sure that we get a return on what we've already invested in.

[00:46:35] Fair position.

[00:46:36] Absolutely fair.

[00:46:37] So we wanted to make sure that we put those examples in so that they help inform how we move forward.

[00:46:43] I do recognize that there has been, you know, sort of widespread discrimination in financial services and healthcare or whatever.

[00:46:55] So I should not have implied that, you know, it's just these onesie-twosie things and everything else is great.

[00:47:01] Right.

[00:47:02] But it is because we have the traceability and the observability of these AI models that we can go back and actually fix it.

[00:47:15] Right.

[00:47:15] And then as it scales and as we propagate these solutions, you know, elsewhere, we can keep track of them.

[00:47:22] But it shouldn't take a complaint to the EEOC or a class action lawsuit against a financial services company for these things to see the light of day.

[00:47:34] Right.

[00:47:35] We should be able to design these things and fix them earlier in the cycle, like when it's being designed.

[00:47:42] And that takes responsible AI literacy and readiness for people to see that these are things that we could have fixed much earlier in the life cycle.

[00:47:57] Absolutely.

[00:47:59] Absolutely.

[00:47:59] I'm jumping in because I'm just sitting here thinking about some of the things that we are experiencing.

[00:48:05] We're still having conversations around equal pay for women.

[00:48:09] Why?

[00:48:10] We're still having conversations around the maternal death rate of African-American women in childbirth.

[00:48:17] Why?

[00:48:18] Why?

[00:48:18] Why?

[00:48:19] Why are we having that conversation still?

[00:48:22] We have all of these academic reports and studies and peer reviewed papers and all of that.

[00:48:29] And yet black women are going to hospitals and dying at an alarming rate compared to their white counterparts.

[00:48:37] That should not be the case.

[00:48:40] And so that's why I say, yeah, I love having you in the foxhole.

[00:48:44] You're an outlier.

[00:48:44] I would love to see, in 2025, the folks that are developing these solutions wake up and experience some rapid acceleration around how they bring equity to the equation.

[00:49:01] I would just love to see that.

[00:49:03] I would love to see the playing field be even in so many areas as it relates to our personal and professional lives.

[00:49:12] So, yes, I'm optimistic.

[00:49:15] I'm aspirational.

[00:49:17] I'm positive.

[00:49:18] I think it's beautiful and promising.

[00:49:20] My word.

[00:49:21] Yeah.

[00:49:22] But I also feel like there's room for us to remain focused and intentional in making it better.

[00:49:27] Yeah, I definitely see more vendors, you know, stepping up and trying to, you know, put these better equitable policies in place, try to at least put out statements about being responsible and being ethical and things like that.

[00:49:43] It does take more than just a sort of marketing statement.

[00:49:47] In my book, I want to see that they've gone through some independent, you know, audits and risk assessments and they're taking much more tangible steps to do what's right because I think it is in their best interest.

[00:50:00] People are going to start asking these types of questions in the RFP process.

[00:50:07] How are you being responsible by design?

[00:50:10] Where did the data come from?

[00:50:12] How are these models trained and da, da, da.

[00:50:14] So I think we're going to see more and more of that.

[00:50:18] Yes, part of it is in anticipation of legislation and potential financial risk and legal risk.

[00:50:25] But I think part of it is also they're recognizing that there will be reputational risk or they will be at a disadvantage.

[00:50:33] All else being equal in terms of functionality and things like that.

[00:50:36] If one company is being responsible by design and the other is just, you know, whipping up and selling another shiny object.

[00:50:44] No, I agree.

[00:50:44] And again, I just think about, you know, I'm smiling right now because as we ended the year of 2024, there was this back and forth around the BOI filing for business owners.

[00:50:58] Individuals that are entrepreneurs have a business.

[00:51:00] They had to go in to the federal government and fill out this form, the BOI form, and that's to address money laundering.

[00:51:10] I get it.

[00:51:10] There are some folks that like it, a whole bunch of folks that feel like it's going to be bad for business.

[00:51:15] But in the middle, it's to address money laundering.

[00:51:19] I'm willing to stand on that.

[00:51:20] I hope that it doesn't take us 50 years to figure out that we need to, in some ways, put our foot down as it relates to ethical AI, human-centered AI, building it in ways that are responsible, have awesome data governance, and everything else that applies to it.

[00:51:37] I just don't want it to take us, you know, I'm being funny, but I don't want it to take us a decade.

[00:51:43] Yeah.

[00:51:44] And back to your point around politicians, you know, not keeping up.

[00:51:49] I hope that they do keep up and that we do see legislation here in the U.S., oversight, that we see some external or a number of external bodies that are evaluating these technical solutions.

[00:52:01] And I hope we'll get to a point where we're like, nah, we ain't doing that.

[00:52:06] You know, you remember the Chia Pet when we were growing up, Bob?

[00:52:08] Of course.

[00:52:09] Or the Slinky?

[00:52:11] The Chia Pet or the Slinky had absolutely no purpose.

[00:52:15] They hit the marketplace and they made tons of money.

[00:52:21] This is a little bit different in the damage that it has the potential of making.

[00:52:26] I think that we need to make sure that we lasso that quickly, sooner rather than later.

[00:52:31] Fair point.

[00:52:32] I love the Slinky.

[00:52:33] Not a Chia Pet fan, but I love the Slinky.

[00:52:36] You love the Slinky.

[00:52:37] It's so simple.

[00:52:37] No problem.

[00:52:40] Yeah.

[00:52:41] So the report comes out when?

[00:52:45] It hit the marketplace on January 7th.

[00:52:48] And so you can go to Plum's website and you can download a full copy: 7,500-plus words, two case studies, one from Plum, one from Humanly, a number of stories, links to TED Talks, articles, you know, a number of charts, graphs.

[00:53:08] I mean, we put book references, we did everything that we could to try to hit the various modalities that people love to ingest content and information.

[00:53:18] We wanted to put something in for everyone because in the end, I want people off of the sideline and playing in the space because I do believe that AI is good for business.

[00:53:33] And so we wrote in a way that is inviting rather than exclusive.

[00:53:40] Excellent.

[00:53:41] I will, I'll grab that link and put it in the show notes for this.

[00:53:45] Thank you.

[00:53:45] Thank you.

[00:53:45] Thank you.

[00:53:47] Anything else?

[00:53:47] Big plan for 2025?

[00:53:49] I know you're doing a lot of speaking.

[00:53:52] I saw you on a couple of agendas already.

[00:53:55] Yeah.

[00:53:56] We're going to have some fun in 2025.

[00:53:58] You know, again, as I said at the top, as a founder, we're building a manager intelligence platform, which we believe is missing in a number of workplaces.

[00:54:09] I think it's been far too long that organizations have taken the position of plausible deniability.

[00:54:16] And to me, that's not a business strategy.

[00:54:18] I do believe that we can allow people to self-declare their dimensions, that we can align those dimensions with various KPIs of measurement and performance and engagement and culture and compliance.

[00:54:33] And that ultimately, we will place managers in a position to be better leaders.

[00:54:38] And when we have better leaders and better, more engaged employees, we have higher trust.

[00:54:44] And when we have higher trust, we have stronger organizations.

[00:54:47] So we are building a manager intelligence platform.

[00:54:51] Anybody out there listening, if you are in HR and you want to be a part of our beta, reach out to me.

[00:54:56] You can find me at Torin Ellis across all of social media, or on LinkedIn at Torin Ellis.

[00:55:03] But we're building NOMA to make sure that we become the drumbeat of organizations all across the globe.

[00:55:09] Awesome.

[00:55:10] Love it.

[00:55:11] And best of luck with that endeavor.

[00:55:13] Sounds awesome.

[00:55:14] Thank you, man.

[00:55:15] Thank you.

[00:55:16] Thank you.

[00:55:16] Torin, I want to thank you for spending so much time with me and sharing a lot of great insight with my listeners.

[00:55:25] As you know, this is a near and dear topic to me.

[00:55:29] And I love the work that you're doing.

[00:55:31] I love the work you're doing with Aptitude.

[00:55:33] And I look forward to collaborating however possible with you and Kyle and Madeline and anyone else who wants to support these human-centric AI initiatives.

[00:55:45] Yeah.

[00:55:46] Yeah, I look forward to that.

[00:55:47] Absolutely believe that that's going to happen.

[00:55:49] So you can lace up your boots.

[00:55:51] We got some running to do later on this year.

[00:55:53] Sounds good.

[00:55:54] Thank you, man.

[00:55:55] Thank you, Torin.

[00:55:55] And thanks, everyone, for listening.

[00:55:57] We'll see you next time.