Bob Pulver speaks with Jeff Pole, co-founder and CEO of Warden AI, about the critical issues surrounding trust in AI technology, particularly in the context of HR and recruitment. They discuss the importance of third-party assurance in AI systems, the fluidity of AI terminology, and the need for continuous monitoring to ensure compliance and fairness. Jeff shares insights on how AI can potentially enhance fairness in hiring practices and the implications of emerging AI legislation across the globe. Bob and Jeff discuss the widespread issue of age discrimination in hiring, and Warden AI’s newly announced capabilities to check for age bias in AI-powered hiring solutions. They explore future opportunities and challenges in Responsible AI governance, including the implications of existing discrimination laws, the need for comprehensive data to assess bias, and the evolving landscape of AI adoption in various sectors, particularly in the public domain. Bob and Jeff conclude by emphasizing the importance of balance between innovation and ethical considerations in AI to maintain trust across stakeholder communities.
Keywords
AI, trust, assurance, governance, HR technology, bias, compliance, monitoring, legislation, fairness, age bias, discrimination, responsible AI, technology, workforce, regulations, innovation, public sector
Takeaways
- AI technology can be a force for good if used correctly.
- Warden AI focuses on third-party assurance for AI systems.
- Continuous monitoring of AI is crucial for trustworthiness.
- The terminology around AI governance is fluid and evolving.
- Legislation is pushing for more transparency in AI processes.
- AI can help identify and correct bias in recruitment.
- The potential for AI to improve fairness in hiring is significant.
- Emerging laws will likely increase scrutiny on AI systems.
- AI can help unlock hidden talent pools in the workforce.
- The future of AI in HR is about enhancing diversity and inclusion.
- Age discrimination is a significant issue in hiring.
- AI systems must comply with existing discrimination laws.
- The first AI bias lawsuit was related to age discrimination.
- Employers can unintentionally produce discriminatory outcomes.
- Five generations will soon be part of the workforce.
- Data collection is crucial for assessing AI bias.
- Counterfactual analysis is a technique to test AI systems.
- Responsible AI practices can coexist with innovation.
- AI literacy is essential for effective adoption.
- AI adoption is a gradual process, not an immediate change.
Sound Bites
- "How can we trust and safely adopt AI?"
- "Technology can be a force for good in society."
- "We're working on age bias detection capability."
- "The first AI bias lawsuit was for age bias."
- "We have to keep in mind existing legislation."
- "Five generations will be in the workforce soon."
- "We bring our own data to test AI systems."
- "Counterfactual analysis helps assess AI bias."
- "AI can augment human processes effectively."
- "It's a marathon, not a sprint with AI adoption."
Chapters
00:00 Introduction to AI and Trust Issues
03:00 Warden AI's Mission and Assurance Role
05:50 Understanding AI Terminology and Governance
08:51 The Importance of Continuous Monitoring
12:13 AI in HR: Opportunities and Challenges
14:45 The Role of Legislation in AI Assurance
18:10 AI's Potential for Fairness in Hiring
21:05 The Future of AI and Workforce Diversity
28:59 Addressing Age Bias in AI Systems
41:24 Navigating Responsible AI and Governance
50:34 The Future of AI: Opportunities and Cautions
Jeff Pole: https://www.linkedin.com/in/jeffrey-pole-91887a44
Warden AI: https://www.warden-ai.com/
Addressing Age Discrimination: https://www.warden-ai.com/blog/age-bias-ai-hiring-age-discrimination-fairer-recruitment
For advisory work and marketing inquiries:
Bob Pulver: https://linkedin.com/in/bobpulver
Elevate Your AIQ: https://elevateyouraiq.com
Powered by the WRKdefined Podcast Network.
[00:00:00] Welcome to Elevate Your AIQ, the podcast focused on the AI-powered yet human-centric future of work.
[00:00:05] Are you and your organization prepared? If not, let's get there together. The show is open to
[00:00:09] sponsorships from forward-thinking brands who are fellow advocates for responsible AI literacy
[00:00:13] and AI skills development to help ensure no individuals or organizations are left behind.
[00:00:18] I also facilitate expert panels, interviews, and offer advisory services to help shape
[00:00:23] your responsible AI journey. Go to ElevateYourAIQ.com to find out more.
[00:00:39] Welcome back to another episode of Elevate Your AIQ. Jeff Pole is the CEO and co-founder of Warden AI,
[00:00:45] an AI assurance platform that looks for bias and adverse impact in HR and talent technology solutions
[00:00:50] where human impacting decisions are made every day. As many of you know, responsible AI is a
[00:00:55] focus area of this show as well as my advisory practice, so I was anxious to catch up with Jeff.
[00:01:00] We talk about what brought him into this important and evolving space, the latest on what Jeff and
[00:01:05] the Warden AI team are building, and I get Jeff's take on the promise of AI to mitigate bias and increase
[00:01:10] fairness and inclusivity if it's designed and used responsibly, of course. As I always say,
[00:01:15] when it comes to responsible AI, we are all responsible. So enjoy this conversation and please
[00:01:20] innovate responsibly. Hello everyone, welcome to another episode of Elevate Your AIQ. I'm your host,
[00:01:26] Bob Pulver. With me today is Jeff Pole, co-founder and CEO of Warden AI. How are you doing, Jeff?
[00:01:33] Hi, Bob. I'm well, thanks. Good to see you again.
[00:01:36] Good to see you. Thanks for doing this, first of all. Really appreciate it.
[00:01:40] Thanks for having me.
[00:01:42] Absolutely. My pleasure. Just to kick things off, why don't you tell my listeners a little bit about
[00:01:46] yourself and how you got exposed to AI and what was the impetus for Warden?
[00:01:54] Sure. So yeah, my background is in both AI and regulation technology. So I've been working in AI for the
[00:02:02] last just over a decade, studied ML and AI at Oxford quite a while ago before it was so popular and
[00:02:10] prevalent. And then I joined a startup called Onfido, which is an online digital identity provider,
[00:02:19] which helps bring trust to online platforms and marketplaces. And it uses AI to do that and to
[00:02:25] meet regulations like KYC and anti-money laundering regulations. And it became really successful.
[00:02:32] It became a unicorn, did really well, which was a great kind of career trajectory for me. But what struck me
[00:02:38] in that experience was the fact that we were using this advanced AI technology to
[00:02:45] do what we did, to look at biometric and identity verification, to meet regulatory requirements,
[00:02:53] to bring trust to online platforms. And yet nobody ever asked us, at least not for much of that
[00:03:01] journey, any questions about our AI and whether it was trustworthy, whether it might be fair and
[00:03:08] unbiased and that kind of thing. And without saying too much, we did look at ourselves and it wasn't
[00:03:15] perfect. We found there's issues that we worked on over time. And anyway, that was the kind of origin
[00:03:21] of me coming across this other trust issue with AI and has led me down this path of working on what,
[00:03:28] in my opinion, is the biggest trust challenge for humanity over the next decade and to come.
[00:03:33] How can we trust and safely adopt AI? And that's what we're working on at Warden.
[00:03:38] Yeah, that's really interesting. I mean, I think for those who aren't tech savvy by any stretch,
[00:03:44] some of the, that terminology gets jumbled around, even just privacy, cybersecurity, anything that relates
[00:03:52] to what goes on behind the scenes, I feel like all gets sort of jumbled together. But to your point,
[00:03:59] regulators and others who are supposed to be providing oversight should very well know the
[00:04:05] difference in those or not, just because you have privacy controls doesn't mean you have cybersecurity
[00:04:09] controls. And it certainly doesn't mean you can have trust in the way your algorithms and autonomous
[00:04:15] systems might work. So just give us like the, you know, the sort of elevator pitch for Warden AI,
[00:04:21] like where you're laser focused right now. Sure. Yeah. So our mission is to, in so many words,
[00:04:29] is to drive safe and compliant adoption of AI. And our role really is to shed a light on AI systems
[00:04:37] used in high risk use cases, for example, in HR and recruitment and enable the ones that are
[00:04:43] responsible and are fair to succeed and overcome the growing adoption barriers that both regulator and
[00:04:50] customers, ultimately buyers are asking about. So more concretely, what we do is bring third-party AI
[00:04:57] assurance to AI systems. So we have a product that becomes embedded into an AI system. Let's say it's a
[00:05:04] AI candidate matching system in a recruitment platform. And our product connects with that product and does
[00:05:11] kind of ongoing testing and monitoring, sometimes called auditing of the product, kind of overseeing it
[00:05:17] to test and assess, evaluate it for how trustworthy it is. For example, one of our big focuses is on
[00:05:25] bias and how biased or not the AI system is. And then based on that evaluation, we can give feedback
[00:05:31] and recommendations to the company that owns the AI. But assuming things are good, we help kind of
[00:05:37] demonstrate compliance with emerging AI regulations and communicate kind of transparency and trustworthiness
[00:05:43] of the system to customers of that product or to users who might be using it to kind of, you know,
[00:05:52] give this third-party perspective that, you know, here's how this works and hopefully this is why it's actually
[00:05:57] trustworthy to use.
[00:05:59] So I certainly understand the importance of going through that and doing it as a third-party, as an independent
[00:06:06] third-party to make sure there's neutrality. And obviously it would be hypocritical if there was some
[00:06:12] bias on the auditor side while conducting this exercise. Jeff mentioned the word assurance. And so
[00:06:20] just going back to the terminology that's getting thrown around, I see a lot of vendors in this space,
[00:06:28] you know, call themselves an AI governance platform or, you know, an AI
[00:06:34] model observability platform. Could you just give us a little clarity on some of those terms?
[00:06:43] Yeah, definitely. So first of all, the terms are, in many cases, very fluid, right? In this
[00:06:48] emerging field of AI trust and safety and compliance. So the way we see the landscape is
[00:06:56] you've got, on the one hand, those that are very close to being developer tools. So these are tools that
[00:07:02] the development teams will buy and configure and use as part of their process. So that's more like
[00:07:07] observability platforms or
[00:07:11] machine learning operations, MLOps, AIOps tools. And that's usually quite horizontal. It can be used
[00:07:17] in really any sector or use case, because
[00:07:22] they're just the tools that the developers then use and,
[00:07:25] you know, configure, and have to put quite a lot of work in to then help them assess, let's say, bias or whatever
[00:07:30] issue they're looking at. So that's kind of the dev tool side of things. You then have
[00:07:37] governance platforms, which can often do more than one thing, but typically, as we see it, it's about internal
[00:07:44] management of the AI. So if you're a developer who develops an AI tool or even a deployer,
[00:07:48] you want to put measures and processes in place that make sure you are checking things that you're making sure that AI
[00:07:55] is safe from your internal perspective and then also reporting that you're doing that in terms of
[00:08:01] upcoming regulations. And that, as we see, is really what governance is about. It's you internally governing this
[00:08:09] process as a whole. It's mostly about processes, not about
[00:08:12] you know, here are hard technical tests that the governance platform does of that system.
[00:08:17] And there's a third category, which is auditors, which can mean different things and different people.
[00:08:23] But we think of a traditional consultant type person who comes in, looks at how
[00:08:29] you know, a set of processes works and how a company operates
[00:08:32] in the context of AI, looking at the AI processes
[00:08:36] and ultimately saying, you know, here's how you can improve or basically well done.
[00:08:39] You know, it looks like your processes are quite good. And then finally, where we sit, what we call AI assurance
[00:08:46] is actually being a third party that does, you know, hard technical testing and monitoring of an AI system
[00:08:53] to test for trustworthiness. So we actually bring, you know, we have our own data sets.
[00:08:57] In many cases, we also do work with customer data too. And we are specialists in currently just one sector,
[00:09:04] which is HR technology, specializing in what we do rather than being a horizontal
[00:09:10] governance platform or, you know, consultant type auditor who can go in and kind of manually
[00:09:15] look at processes of any company. So that's our definition. But certainly, words assurance and auditing
[00:09:21] and AI are up for grabs, and I'm sure other people would have different definitions.
[00:09:26] Makes me think about talent intelligence and how people have used that phrase very loosely
[00:09:31] all over the place. If you look at all the vendors using that phrase, they'd be all over the map
[00:09:37] different across different categories. But this one's particularly interesting to me. So as you know, I
[00:09:42] have gone through a sort of independent auditor kind of certification and then a specific focus on New York City.
[00:09:50] But certainly there's a lot of other areas with much more well-defined, I'll call it, legislation that's
[00:09:59] hopefully enforceable and hopefully that triggers people to start being prepared for it as opposed to
[00:10:05] waiting for someone to get sued. And only then, you know, reacting and calling up an auditor to come in.
[00:10:12] But on the assurance side, I guess one of the things I'm just curious about is you said you're
[00:10:20] focusing on the HR and talent space, but you support anyone who's either building or using those
[00:10:28] technologies. Is that fair?
[00:10:30] That's right. Yeah. So we've got, we already worked with some of the leading talent platforms.
[00:10:36] And so we've got customers ranging from, well, startups that are growing fast.
[00:10:41] One of our first customers is a company called Pop AI based in London,
[00:10:46] who are growing like a rocket ship right now, actually.
[00:10:48] I regret passing on an angel investment opportunity with them.
[00:10:52] That's Pop, P-O-P-P?
[00:10:54] Pop, yeah, P-O-P-P AI.
[00:10:56] Yeah, I think I'm connected to them.
[00:10:58] Yeah. And they're on a few things, but they're quite strong on the kind of conversational sort
[00:11:03] of pre-screening type part of the process. And so that's one example. And then, but so,
[00:11:09] and so startups and even smaller ones than them, right up to, I think, our biggest platform that we
[00:11:14] have, which is Beamery, who many of your listeners will know, of course; they've been around for the best
[00:11:18] part of a decade, working with a lot of enterprises. So we work with them and we're also
[00:11:23] working with some of their end customers as well to look at assurance of the deployer side.
[00:11:28] Because there's, you know, just two kind of angles to AI risk and AI compliance. One is,
[00:11:34] you know, a platform that develops an AI tool has, of course, lots of responsibility and risk
[00:11:38] with getting this wrong or getting it right. And then anyone who deploys an AI tool,
[00:11:44] you know, usually has some customizations, they've got their own jobs, for example,
[00:11:48] if it's a recruitment context, which itself is its own system ultimately
[00:11:53] and has its own risks. And so those deployers also need, or can benefit from, assurance to help
[00:11:59] manage those and help demonstrate trust to other stakeholders. So yes, we're working with both
[00:12:04] sides of that equation.
[00:12:06] Got it. Okay. Yeah, certainly I knew about Beamery because I think, you know, Sultan was on the show.
[00:12:11] So basically, you've got observability that sits within a DevOps or MLOps right at the early part of
[00:12:18] the sort of product development cycle. And so hopefully, of course, those developers have
[00:12:24] the, you know, the literacy and, and understand how to use those tools, you know, properly so that
[00:12:30] they're being responsible by design, essentially. And then at least in your view, the governance stuff
[00:12:36] kind of fits into a governance risk compliance kind of framework for the organization where they're
[00:12:43] already looking at how, you know, data and privacy and sort of extension of data governance in a way
[00:12:50] within an organization, but then the assurance is, is sort of a broader view that might impact,
[00:12:55] you know, customer, you know, data or in case of talent, you know, candidate,
[00:13:00] you know, data and being fair with that. Yeah, that information.
[00:13:03] And the crucial element, at least from our definition of this is, so the third party
[00:13:08] assurance that we're doing, like our name is on the findings is on the outcomes that our product,
[00:13:13] you know, it's continuous, right? It's not like a one off thing, but it's like every week,
[00:13:17] month, whatever time frame we're, we're testing and reporting on this system. Our name is against
[00:13:22] our findings saying, you know, well, here's the results. Here's the metrics that we've tested.
[00:13:26] It gets quite technical to jump into and an indication of whether we find any issues or not.
[00:13:31] And our name is against that as, you know, and this is why you can trust it because this third party
[00:13:34] is doing this. And obviously our, our background as well, whereas the governance
[00:13:40] side can very readily just be, we internally as an organization have done our own processes mostly,
[00:13:47] and maybe even some technical tests. Great. But it's internal, like we're marking our own homework,
[00:13:51] essentially. And that's the, at least one of the fundamental differences with, with our,
[00:13:56] our version anyway, of assurance. We're assuring that this technology you might,
[00:14:02] you might want to use is trustworthy to use, touch wood. Yeah.
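For illustration only: here is a minimal sketch, not Warden AI's actual implementation, of the kind of recurring per-group metric Jeff describes, a selection-rate and impact-ratio check of the sort bias audits under rules like NYC Local Law 144 report. The column names, the sample batch, and the 0.8 threshold (the EEOC "four-fifths" rule of thumb) are assumptions for the example.

```python
# Illustrative sketch only -- not Warden AI's implementation.
# Computes per-group selection rates and impact ratios for a batch of
# AI-screened candidates, the kind of metric a recurring bias check reports.
import pandas as pd

def impact_ratios(df: pd.DataFrame, group_col: str, selected_col: str) -> pd.DataFrame:
    """Selection rate per group and its ratio to the highest-rate group."""
    rates = df.groupby(group_col)[selected_col].mean().rename("selection_rate")
    out = rates.to_frame()
    out["impact_ratio"] = out["selection_rate"] / out["selection_rate"].max()
    return out

# Hypothetical weekly batch of decisions pulled from the hiring system.
batch = pd.DataFrame({
    "age_group": ["under_40", "under_40", "over_40", "over_40", "over_40", "under_40"],
    "advanced":  [1, 1, 0, 1, 0, 1],
})

report = impact_ratios(batch, group_col="age_group", selected_col="advanced")
# A common rule of thumb (the EEOC four-fifths rule) flags ratios below 0.8.
flagged = report[report["impact_ratio"] < 0.8]
print(report)
print("Groups flagged for review:\n", flagged)
```

Run on each new batch, weekly or monthly, a check like this surfaces drift early rather than waiting for an annual audit to discover it.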
[00:14:06] Yeah. I would say if I were acting as an auditor or one of those consultants and advisors that goes in
[00:14:11] and does like a pre-audit assessment or something like that. I mean, I, I don't have a data science,
[00:14:15] you know, background. So I'd probably need a tool like Warden at my side to help with that exercise
[00:14:22] anyway, but certainly if that were the scenario, I think both our names would be on it, right? Like
[00:14:28] this is the platform and these are the, this is the way in which we checked these AI tools,
[00:14:34] you know, one by one, right? Because each one that you're using, which whatever the use case,
[00:14:39] right? Whether it's that upfront training or it's an interview intelligence tool, or it's an ATS,
[00:14:44] whatever, each one needs its own assurance and, and audit if that's what the law requires.
[00:14:51] Right.
[00:14:51] Yeah. So one of the things I wanted to ask you was related to sort of governance and, and assurance,
[00:14:57] the law generally, as I understand most of the laws that have been passed in some of the proposed
[00:15:03] legislation, it's like a periodic check, usually like an annual, you know, check for, for compliance.
[00:15:09] Yeah. Yeah.
[00:15:10] But as you know, better than me, I mean, these models are changing, you know, constantly,
[00:15:15] one batch of candidates could be under one algorithm and then the next batch, you know,
[00:15:20] two days later it could have gone through some tweaked, you know, version of that. So I guess the question is,
[00:15:27] just want to get your take on like the periodic, you know, point in time kind of assessment versus
[00:15:33] the continuous monitoring of, of these, you know, sort of very fluid evolving models.
[00:15:39] That's a really good question. And it's, it's really important. Yeah. I think that the challenge
[00:15:43] almost lies in the definition of the words which are being used, funnily enough, like audit, which is typically
[00:15:48] like a kind of process thing. You would do it with the finance and accounting, you know, coming in
[00:15:52] and checking the books, auditing the books. And there's only so much, you know, only so frequently
[00:15:56] that you, you kind of can or need to do that. And as you're referring to, in New York City, Local Law
[00:16:02] 144 in particular, there is a requirement of an annual audit to look at bias for, you know,
[00:16:09] automated decisions tools, which is great, you know, a great starting point to bring this third
[00:16:14] party perspective for sure. But when we think of some of these issues like, like bias, and you think
[00:16:20] of looking back over the last year and saying, you know, let's now find out if this system
[00:16:25] has been biased for the last year or not, and then publish the results. While it's admirable,
[00:16:30] it's, you know, it's, it's kind of too late, right, to find out at that point. But, you know,
[00:16:35] and worse even if you find out, and then you have to publish the results to the world,
[00:16:40] that is, you know, worst case, it's game over, right? It's like the last year we've
[00:16:44] processed X number of, of candidates, and we've done it in a biased way. And we now publish this to
[00:16:48] everybody, you know, the potential damage is pretty severe to reputation. And also, of course,
[00:16:53] just, well, the actual damage of, of applying a biased system is, is being done for the last year.
[00:16:59] And the other element here to highlight, of course, is that, as many people are aware,
[00:17:04] technology changes so fast. And AI in particular is, is super fast. It comes faster all the time with,
[00:17:11] with Gen AI and, you know.
[00:17:13] Yeah. You know what you should know? You should know the You Should Know podcast. That's what you
[00:17:21] should know. Because then you'd be in the know on all things that are timely and topical. Subscribe to
[00:17:27] the You Should Know podcast. Thanks.
[00:17:30] Yeah.
[00:17:31] If you're using a Gen AI model under the hood, for example, even if you don't change,
[00:17:36] the model itself can change. Or if you're using some sort of prompting on top of that,
[00:17:44] you know, we all know that the prompting, you know, essentially messaging ChatGPT to get it
[00:17:44] to do something different is so easy that people, developers are making changes even more frequently
[00:17:49] than in the past. Yeah. So, you know, a higher frequency of testing and monitoring, whether you
[00:17:56] call it auditing or not is a terminology thing, is, in our opinion, important to be a credible
[00:18:03] solution. So yeah, regular testing and, in our approach, ideally by a third party, not just internally.
[00:18:10] And so the credibility benefit, as well as the actual benefits of flagging anything early,
[00:18:17] are, I think, important, certainly as we look to the future and this evolves over the next few years.
[00:18:23] Do we think we'll just be in a place where people do once a year tests and audits? Or do you think
[00:18:27] will people be kind of continuously monitoring it? It's hard not to see it being the latter
[00:18:31] as time comes. Yeah, I agree. I think this is just going to go the way of privacy and
[00:18:38] cybersecurity. I mean, you wouldn't wait. Imagine if you waited the year to see if you had any privacy
[00:18:46] leaks or cybersecurity, you know, breaches. I mean, that's insane to think about. So yeah,
[00:18:51] no, that makes total sense. Yeah. And I think the audits, I mean, it's one piece. I mean,
[00:18:58] you can continuously monitor and then still obviously do the audits, but, but you're right, right now,
[00:19:03] companies are actually incentivized to essentially not post the results of the audit, right? The New
[00:19:11] York City law says you have to conduct the audit and post the results. It doesn't say you have to pass,
[00:19:17] but if your models are exhibiting, you know, adverse impact, and then you publish that to your point,
[00:19:23] I mean, you're just airing your dirty laundry and literally publicizing evidence that a candidate could
[00:19:29] use to file a claim with the EEOC in the case for the States. So it's a very awkward situation there.
[00:19:38] And we're seeing a lot more of that, right? There's more lawsuits in this, in this area
[00:19:43] that's happening. And it's likely that's only going to increase as AI is being used more in this type of
[00:19:50] process and recruitment. And also people are becoming more and more aware of it. And one of the interesting
[00:19:55] ramifications that many of the other regulations will also bring, you know, New York City is the one
[00:20:00] that's in effect at the moment. We think of Colorado, for example, a law that's coming up,
[00:20:07] and the EU AI Act, of course, which many in the US will also subscribe to even if they don't have to.
[00:20:14] The biggest outcome of those regulations is transparency. And so making sure that any consumers who
[00:20:21] undergo some sort of AI examination like that have to know about it, even if you don't have to do an audit
[00:20:25] or the other requirements are less prescriptive, they're going to be made aware. And once people are made aware of those,
[00:20:31] of these currently quite kind of hidden AI processes, then, you know, there's obviously a bigger invitation to
[00:20:38] challenge them, often rightly so. But of course, sometimes there will be spurious claims, I'm sure as well.
[00:20:45] So that's an interesting dimension. I think it's likely to become a bigger issue.
[00:20:50] Yeah.
[00:20:51] As space emerges.
[00:20:52] I wanted to go back up to, you know, 30,000 feet and just think about, you know, AI more broadly,
[00:21:00] the HR and talent space in particular, because I do think, to your point earlier,
[00:21:06] in the EU AI act and their, their sort of risk, risk pyramid, I think it's still referred to as, but,
[00:21:13] but anything related to talent decisions that impact someone's livelihood, whether that's to gain
[00:21:19] employment, to be promoted, I think it might include even into like learning and development,
[00:21:26] like if you're providing unfair, you know, recommendations to, you know, protected categories
[00:21:33] in some way, you know, it could fall into this high risk category, but, but just overall not
[00:21:39] thinking about any particular piece of legislation, just this macro view about AI in HR. I'm of the
[00:21:46] opinion that this is overall a good thing and that it will increase, you know, fairness as we can find,
[00:21:56] you know, the bias, we can find the adverse impact, and we can course correct that overall,
[00:22:01] this is a good thing that we can identify bias before that bias scales and causes far reaching,
[00:22:08] you know, negative impacts. I just want to get your, your perspective on what you're seeing,
[00:22:13] what you're hearing from your clients, etc.
[00:22:15] Yeah. And to clarify, you mean the potential for AI processes, replacing human ones or,
[00:22:20] you know, complementing them to actually improve on things like bias and fairness in recruitment?
[00:22:26] Yeah. I mean, just like it's potential to increase overall sort of fairness in hiring practices. Yeah.
[00:22:33] 100%. Yeah. So yeah, it's such a good point. I think with AI, it's like any technology,
[00:22:38] it's just a double-edged sword. And certainly, for all that we are, our role is to, you know,
[00:22:44] be the third party that does this technical testing and evaluating other systems. It's actually based on
[00:22:50] the premise, fundamentally, that AI can and hopefully ultimately will be a force for good here, right?
[00:22:56] It's not just that AI can, it's faster and more cost efficient. It can also improve, let's say, ethical
[00:23:03] considerations, can improve consumer outcomes, can improve fairness to your point, so long as it's done
[00:23:09] correctly, right? And that's why, you know, we're trying to play a role there to help with that.
[00:23:14] That's our strong belief. I think there's a really interesting parallel in history, and probably more than one,
[00:23:20] but one that comes to mind for me is the emancipation of women, particularly in the 20th century,
[00:23:25] which obviously just, I'm not saying is a complete process and there's lots more to do there.
[00:23:30] But, you know, significant strides were made, particularly in the 20th century.
[00:23:34] And a lot of that, of course, is down to all kinds of, you know, societal shifts and, you know,
[00:23:40] discourse and all kinds of changes and perceptions and things like that. But also, arguably,
[00:23:46] technology played a huge role because what happened is there was loads of technological
[00:23:51] advances, particularly in home appliances, from vacuum cleaners to microwaves to
[00:23:57] fridges and washing machines, that actually meant that managing the home became much,
[00:24:03] much more efficient time-wise. And that is, it drove a lot of, you know, women,
[00:24:07] enabled a lot of women to go out to work full time in a way that they didn't maybe as much before.
[00:24:13] And it's very interesting. And I don't think people maybe recognize that very often,
[00:24:16] but technology played a huge role in that process. And I think what we could be set up here for,
[00:24:22] thinking optimistically, is that this big step change in AI that we're experiencing and will over
[00:24:27] the next decade or two, I think could maybe be the next step change in emancipation, if you will,
[00:24:33] or increased fairness and diversity across, whether it's just gender or other fairness issues,
[00:24:41] whether it's race or age or all the types of issues that are prevalent in hiring and other decisions.
[00:24:46] Yeah, absolutely. Certainly, I think about gender. I think about minorities. I think about,
[00:24:52] I do think about ageism because I experienced that myself. I do think about hidden talent. We often
[00:24:59] refer to it as hidden talent pools or underutilized, you know, talent pools, whether that's, you know,
[00:25:06] the older generation of workers who, you know, we're, you and I have talked about this a little
[00:25:10] bit, like people are living longer, working longer, you know, someone who's in their fifties
[00:25:18] still could be working at least another, you know, 20 years. So it's not like you're a runway,
[00:25:23] you know, fashion model whose career, you know, ends, you know, in their mid-thirties or what have
[00:25:28] you, or you're a professional athlete, right? Like there's all these people with second chance,
[00:25:32] you know, hiring people who have previously been incarcerated and can't find the job because of it.
[00:25:36] People who are super smart, but didn't go to, you know, a four year university, but are still
[00:25:42] perfectly capable of doing, you know, most of the jobs that they are interested in that don't require
[00:25:49] an undergraduate degree, let alone an advanced degree. So I think about all of those things that,
[00:25:54] you know, if you really are hiring for skills and potential to succeed in the role and to, you know,
[00:26:00] be a good sort of corporate, you know, citizen, be a good team player, be a good leader, things like
[00:26:06] that. I mean, if we're evaluating those things independent of some of those other attributes,
[00:26:12] then I feel like we're all better off and creating, you know, that fairness will continue to increase.
[00:26:17] I guess just circling back on Warden in that respect, on the governance platforms, it's,
[00:26:22] it's generally just gender, race, ethnicity, but you guys are working on going beyond that.
[00:26:28] Right.
[00:26:29] Is that right?
[00:26:29] That's right. Yeah. Yeah. So in our, in our solution, we have, um, obviously,
[00:26:34] race and ethnicity as two of the key categories of bias or protected characteristics to, to look at
[00:26:41] in how we evaluate, and particularly the New York City law that you refer to explicitly
[00:26:44] mandates those, those two, um, for good reason. But we are essentially working
[00:26:50] through all the protected characteristics of various different regulations. So actually really
[00:26:55] pleased to be launching. And actually we already have done this work with a couple of our customers
[00:27:00] who've been piloting it, an age discrimination, uh, kind of age bias detection capability that we've added
[00:27:07] to our platform. Um, so the principle works much the same as the others. It's just, we have the data
[00:27:12] sets now that look at different age groups and particularly differentiating between older and
[00:27:17] younger age groups. Um, you know, in the US there's explicit regulation about over 40 being
[00:27:24] a protected characteristic, which, as you mentioned, is close to a number of us, and
[00:27:29] particularly as we all work older and longer now, it feels very, very young as a number. And, and that's
[00:27:35] something we're adding in, uh, we're launching at the moment, which is super exciting. And it's just really,
[00:27:41] and not, not here to cast judgment on which form of bias is like, you know, better or worse than
[00:27:46] others, but it's definitely a big one. You know, there were studies done, and this
[00:27:52] is not to do with AI, this is just about job seeking in general, showing that older job seekers,
[00:27:59] those over 40, uh, get up to a 36% lower callback rate than younger applicants, which is a really
[00:28:04] big number. It's like a third, you know, you're a third less likely to get a callback, uh, in some cases,
[00:28:09] just because of your age compared to younger, uh, applicants. A third off an already low number is
[00:28:16] a really low number. Exactly. And that's not good. Right. It's not good. Yeah. Um, and also,
[00:28:21] another interesting thing about the EEOC is the very first AI bias lawsuit they actually settled
[00:28:27] was not for, for sex and gender or race bias. It was for age bias. So, uh, a company called
[00:28:34] iTutorGroup, you know, had a settlement with the EEOC, uh, last year or the year before,
[00:28:40] because it was unintentionally or intentionally discriminating against, against older applicants and
[00:28:45] not giving them a job. So, you know, it's a real issue in the real world. That's a real issue
[00:28:49] in AI powered systems as well. So we're really glad to be bringing this new capability to market and,
[00:28:55] and helping and working with our customers to make sure those systems, their systems don't have this
[00:29:00] and to give assurance to their, their customers or users that the age bias doesn't feature in their AI system.
[00:29:05] Yeah. I'm really glad to hear that. I remember reading about that case and was just wondering
[00:29:11] like how, what's the suit against? Like, what did they violate? Cause I wasn't sure. I mean,
[00:29:16] I know, like you said, there, there's sort of a, uh, age discrimination. There is legislation around
[00:29:21] that, but the headline that I read implied that this was related to an AI based, you know, law. And I
[00:29:29] didn't, I didn't understand how they connected the two. If there's no AI legislation currently
[00:29:34] around a, around age discrimination. That's a great question. And my understanding, and I'd always
[00:29:38] still check these articles, but my understanding is that it wasn't an AI regulation that they were
[00:29:45] being sued against. It wasn't New York City law, for example. And as we know, most other AI regulations
[00:29:51] are not yet in force anyway. And it was just against kind of traditional discrimination law,
[00:29:57] whichever one it was, maybe it was the Civil Rights Act, or it may have been the age one. There's an explicit
[00:30:01] age act in the US, I forget the name of it, and the reference is over 40. And so
[00:30:07] that's a key point, because people forget this. You think of the AI regulations coming up
[00:30:12] or recently announced as being the only regulations that AI needs to follow. But actually,
[00:30:18] it doesn't matter what tool you use, whether it's an AI, a person or anything, if as an organization,
[00:30:25] you know, you lead to discriminatory outcomes based on protected characteristics, it doesn't
[00:30:31] matter. Like the law prohibits that. And as an employer, you have broken the law unintentionally
[00:30:36] or otherwise by doing that. And so I'm pretty sure in that case, those are the laws they
[00:30:41] were talking about. Not actually new AI ones that explicitly say AI can't be age biased.
[00:30:46] Yeah.
[00:30:46] And it's already true. And it's just harder to, it's hard to assess that often. I think that
[00:30:51] what's good about the new AI regulations is putting an additional kind of onus on this and,
[00:30:56] and, you know, more, more explicit guidelines that AI systems and developers have to follow
[00:31:00] before they can even get to market in many cases.
[00:31:04] Yeah, that's, that's an important point. And I know former EEOC commissioner,
[00:31:08] Keith Sonderling has talked about that countless times, um, about the fact that, you know, part of
[00:31:13] this to keep in mind is always that point is we've had this, this longstanding legislation and
[00:31:19] whether you're doing it manually or you're using an AI or, or some other.
[00:31:24] Have you ever been to a webinar where the topic was great, but there wasn't enough time to ask
[00:31:28] questions or have a dialogue to learn more? Well, welcome to HR and payroll 2.0, the podcast where
[00:31:34] those post webinar questions become episodes. We feature HR practitioners, leaders, and founders
[00:31:39] of HR payroll and workplace innovation and transformation, sharing their insights and
[00:31:43] lessons learned from the trenches. We dig in to share the knowledge and tips that can help
[00:31:47] modern HR and payroll leaders navigate the challenges and opportunities ahead. So join us
[00:31:52] for highly authentic unscripted conversations and let's learn together.
[00:31:56] You know, algorithmic or analytic process to help you make decisions, the same rules apply,
[00:32:02] right? So you've got to, you know, keep that in mind. And two things come to mind. One is,
[00:32:08] I do think that's really important because so many people are living longer, working longer,
[00:32:13] and you are potentially discriminating against a significant portion of the working population.
[00:32:19] I know we put a lot of focus on the upcoming generation millennials, well, not even millennials
[00:32:25] anymore, Gen Z. And yeah, we start to think about even Gen Alpha, who I think will be in at
[00:32:31] least middle school, if not high school, soon. So yeah, we've got many generations to think about in
[00:32:38] the workforce. I mean, pretty soon you'll have, I think within five years, we'll have five generations
[00:32:43] in the workforce, right? So that's a lot. You know, you and I have talked about disabilities as well.
[00:32:50] And obviously there's pre-existing legislation around, you know, bias against people with disabilities,
[00:32:55] whether that's a physical disability or a mental health issue, or just people that think differently,
[00:33:03] and, you know, thinking about, of course, the neurodivergent population, we've got to be,
[00:33:07] you know, sensitive to these things. And we've got to have some protections for these folks.
[00:33:12] But just on the age thing, all of this information that we're talking about is sometimes not necessarily
[00:33:22] disclosed by the candidate, right? Like I don't have to, it's optional for me to say whether I have
[00:33:27] a disability or whether I'm a veteran or whether, or to declare my race and my gender. So what do you do
[00:33:37] when you just don't have, or your client doesn't have a representative sample of that data to test?
[00:33:48] And I'm starting to think about the age thing as well, because that's not even one of those optional
[00:33:52] questions. That's something that you would have to deduce from either the chronology of a resume or
[00:33:59] what's on a LinkedIn profile, or just looking at somebody that's been doing some work and contributing and
[00:34:07] publicly disclosing some work that they've done, you know, outside, you know, beyond, you know, high school.
[00:34:13] Yeah, it's a good question. And maybe it gets a little bit more technical potential to answer
[00:34:18] this question technically. But so long story short, we bring our own data in many cases. So we can also,
[00:34:24] you know, work with customer data, like real historical data that's gone through the system, that we pull out
[00:34:29] through our integration with that system, which, as you say, often has the kind of equality monitoring
[00:34:36] form information, which has some characteristics, but not all of them. So what we do for things like
[00:34:41] age is, is we have our own data sets. So depending on the particular system that we're testing, if it's,
[00:34:47] for example, some resume basis, and which is our profile based in profiles of which many hiring tools
[00:34:54] use, we have our own real data, we've collected real examples of that type of data with the additional
[00:35:03] metadata of what their background is. Some ages one is characteristic, but we also do look at disability
[00:35:08] and sexual orientation, religion, and a whole host of them. And we haven't brought fully to market
[00:35:13] the ability to yet on those, but we will be very soon. So one way is that we have this still real data,
[00:35:19] but we've gone and got it ourselves. And then we can, with the permission to use it for, for testing
[00:35:24] and monitoring. And then we can run that data through the AI, the algorithm and see how it performs
[00:35:30] across those different age groups and give, give results that way. And there's another slightly more
[00:35:35] complicated technique that also uses your own data, which basically is called, it's called counterfactual
[00:35:40] analysis. And what we're doing there is taking input examples. Let's say it's a resume example again,
[00:35:45] you know, let's say it's my resume. And then we make changes to it, just small changes,
[00:35:51] keep most of it the same, that essentially stress test that variable. So in the case of age,
[00:35:56] we might change things like the education dates that someone has, or take them away or add them in.
[00:36:02] We also, well, name is the one we would definitely change for gender and race, but we can also do that
[00:36:07] for age as well. There's a bit of, you know, there's data on names based on different age groups,
[00:36:11] for example. And we would even do more explicit things that reference someone's age. Some people
[00:36:17] do put their date of birth or age in, not so common, in their resume or a letter or something
[00:36:22] like that. We would do things like that too, make all the changes, basically making sure we tell the
[00:36:27] AI in so many words, this person is of an older demographic or younger demographic, but otherwise
[00:36:32] keep everything else the same. And so we have these two examples, the kind of older version and the
[00:36:36] younger version. And we test the AI system with that and see how the score or whatever the outcome
[00:36:41] is from the AI system, how it compares across those examples and if it's different and how much
[00:36:45] different it is. And of course we do that across a large sample size data set. And then we get a
[00:36:51] picture of to what extent the AI is essentially implicitly using age-based information to make its
[00:37:05] prediction, make its outcome. So that's another technique. Like I said, that's called counterfactual analysis.
[00:37:05] It's a little bit, maybe sounds a bit technical. So yeah, basically a few techniques using
[00:37:09] principally our own data, unless we can also get it from the customer to, to test it.
[00:37:14] Right. So it's, it's not trivial, but it's, yeah, that's how we managed to overcome it and excited to,
[00:37:19] to bring this to market now.
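As a rough illustration of the counterfactual technique Jeff outlines, here is a toy sketch, not Warden AI's implementation: score near-identical candidate records that differ only in an age signal and look at the average score gap. The field names, graduation years, and the stand-in scoring function are hypothetical, chosen just to make the idea concrete.

```python
# Illustrative sketch only -- a toy version of counterfactual testing for age.
# score_fn stands in for the AI system under test (e.g. a resume-matching model).
from statistics import mean
from typing import Callable, Dict, List

def age_counterfactuals(resume: Dict) -> Dict[str, Dict]:
    """Return two near-identical resumes that differ only in an age signal."""
    older = dict(resume, graduation_year=1988)    # implies an older candidate
    younger = dict(resume, graduation_year=2018)  # implies a younger candidate
    return {"older": older, "younger": younger}

def counterfactual_gap(resumes: List[Dict], score_fn: Callable[[Dict], float]) -> float:
    """Average score difference (younger minus older) across the sample.
    A value near zero suggests the model is not keying on the age signal."""
    gaps = []
    for r in resumes:
        pair = age_counterfactuals(r)
        gaps.append(score_fn(pair["younger"]) - score_fn(pair["older"]))
    return mean(gaps)

# Hypothetical usage with a deliberately biased toy scoring function:
def toy_score(resume: Dict) -> float:
    return 0.7 if resume["graduation_year"] >= 2010 else 0.6

sample = [{"name": "A. Candidate", "skills": ["python", "sql"], "graduation_year": 2000}]
print(counterfactual_gap(sample, toy_score))  # ~0.10 gap -> age signal leaking into scores
```

A real test, as described above, would vary many age-correlated signals (education dates, names, explicit mentions) across a large sample and compare outcomes statistically.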
[00:37:21] Very cool. Yeah, no, I'm excited for that. I mean, certainly I, I don't apply to full-time jobs
[00:37:27] often, but it wasn't that long ago that I, that I did. And you always wonder, especially because you
[00:37:33] rarely hear back, you know, you always wonder, well, you wonder a lot of things, right? It's a,
[00:37:40] it's an anxious, uh, you know, process, right? Cause if you don't hear back, you, you wonder why,
[00:37:44] right? Was it because, you know, I'm an older worker and there's other talent that's just as
[00:37:49] competitive, if not more so that is, you know, 10 years younger, or is it because I don't, you know,
[00:37:56] you don't even know if your resume was properly parsed and scored or whatever,
[00:38:01] you're just completely in the dark. So it just leaves you with more questions than answers,
[00:38:05] certainly. So it's frustrating.
[00:38:07] A hundred percent.
[00:38:08] I guess I just wanted to ask about just your broader view, you know, as you pay attention to,
[00:38:13] to the market, you know, not just, you know, target customers and legislation and, and competition,
[00:38:20] but like, as you think about responsible AI, you know, more broadly and all the facets of that,
[00:38:27] uh, certainly governance and assurances is part of that privacy and cyber, which we've talked about
[00:38:33] as part of that as well. But just as, as people think about responsible AI and being responsible by,
[00:38:40] by design, your, your clients or your prospects express more, you know, broader concerns about
[00:38:48] where things are going, or do they think, are they more just, let's pay attention to what I need to
[00:38:55] comply with so that I can continue doing what I do? Uh, because most of them are not in this space
[00:39:01] themselves or like, I guess just, you know, in general observations about what you're,
[00:39:06] what you're seeing as people, as we think about creating, you know, it's sort of a trusted
[00:39:11] framework or across the talent life cycle.
[00:39:14] Yeah. It's, it's a very interesting time because it has in many ways been a wild west,
[00:39:18] right? You know, people have been using AI, or partly claiming to use it, claiming it's,
[00:39:23] it's, it's some advanced AI when it's maybe not, but there's not really been any scrutiny
[00:39:28] and, and real concern, I think, from businesses as well about the, about the use of AI. Hence my,
[00:39:34] my example at the beginning of where I worked and the lack of scrutiny we, we faced then.
[00:39:38] But obviously that's changing, right? It's got this kind of conflicting kind of pressures of
[00:39:44] motivation to use AI, adopt AI. Everyone's so excited about it now. And then businesses
[00:39:49] are therefore excited to, to develop it. At the same time, people are obviously heightening
[00:39:53] their awareness of the risks, but it's being backed up by regulations. So we see quite a range
[00:39:59] of responses to this. On the one hand, you get people who, you know, just with any, any new
[00:40:05] technology, right? There's the early adopters and the kind of mainstream and the laggards.
[00:40:09] We're seeing the same thing in, in terms of, yeah, trust and governance and, and assurance.
[00:40:15] So you've got those who embrace it, partly because they want to be really ethical. That's
[00:40:19] definitely, there are people who are just like that. And I put Beamery in that category, for example,
[00:40:23] like they, they first audited themselves, or had an auditor, way back in 2022, maybe even
[00:40:30] before there was anything requiring it. And then you find they just genuinely have a quite unique
[00:40:35] kind of, you know, you spoke to them, right, perspective on doing the right thing. But what we're
[00:40:40] also seeing some of these early adopters seeing the business benefits, and that's where
[00:40:44] we land best, where it's the people who see the business benefits as well as the ethical
[00:40:50] benefits of doing this right, and that will often drive it too, right? So it's like
[00:40:55] they will make a big promotion of it. Ultimately they will promote themselves, and it's also genuinely true,
[00:41:01] that they care about responsible AI and ethical AI, and here's an actual tangible commitment to that,
[00:41:06] which is powered by Warden, you know, who comes in as a third party and does all the stuff that we do.
[00:41:11] And it's a win-win, and so that's, that's good for them. On the other hand, as you've mentioned, there
[00:41:16] are those who are like, what do I actually need to do, what's the absolute minimum, you know, can I do,
[00:41:20] can I just do once a year rather than more frequently? And it's like, well, you know, legally that is
[00:41:26] okay, and so it becomes a conversation about, about the pros and cons there. So I would say there's a,
[00:41:31] there's a mix to it, as to be expected, but I think it's only going to change as this heats up in this
[00:41:37] market. Yeah, I do think as, as AI literacy and AI readiness improve, you know, responsible AI and
[00:41:46] like the continuous monitoring will sort of follow the data privacy and cybersecurity track of, you
[00:41:53] know, we've got to keep an eye on this at all times. I think people will realize that once you can bake
[00:41:59] responsible AI principles and practices into your, your workflow and your life cycle, you can
[00:42:07] be responsible and innovate at the same time, right? Right. No one's not innovating because there's just
[00:42:14] too much, you know, privacy and, and cybersecurity, you know, legislation and too many rules or whatever.
[00:42:20] It's not like people are just throwing their hands up and, and giving up. I mean, we've seen a lot of
[00:42:25] innovation happening, and so, so you can do both, and I think it's just a matter of people finding that
[00:42:30] balance and going and getting over some of these initial hurdles and humps. Definitely. And, and actually,
[00:42:35] one last point to add on to that, which is, in that spectrum, maybe the one that's the most cutting edge, if
[00:42:41] you will, is those that, well, like, AI is so powerful now. I'm not a complete, um, you know,
[00:42:47] evangelist saying it's all perfect and like it can do everything we ever want it to do yet, but it is still
[00:42:52] very powerful, and there are kind of human-based processes that it could partially replace or
[00:42:58] augment, but it doesn't, and the main reason not to is, is the trust and adoption concerns rather than the
[00:43:04] technology itself. And so what we're even starting to see a little bit with our
[00:43:08] customers and the market is those who didn't want it, it was too risky, like, do this and how will people
[00:43:14] respond, whether regulators or, you know, even potential customers, would they even be willing to use this
[00:43:20] feature? Let's say it's going further up the interview funnel, not just looking at maybe the input resume
[00:43:24] but looking at doing the interview, uh, a first interview, with an AI. But what we're seeing is
[00:43:28] people actually going up that, because they have more assistance, with assurance, with governance, to
[00:43:35] overcome some of those risks and overcome some of those adoption concerns. That's actually enabling
[00:43:40] them to say, okay, you know what, we're actually going to add this higher risk, even higher risk, AI feature
[00:43:45] to our roadmap and kind of go up into that and unlock this new use case that we didn't have before,
[00:43:51] because we think we've got appropriate safeguards in place. And hopefully that is a win-win, then,
[00:43:56] right? We could bring AI technology safely to these new, uh, use cases that maybe weren't being done before.
[00:44:02] And there's a long way to go, but that's, that's the hope and trajectory. And those aren't
[00:44:06] just, I don't just mean about us, but I mean even the regulations, right? By defining what's
[00:44:11] allowed and what's, what's okay, they should actually hopefully unlock some more high-risk use cases,
[00:44:16] but, but in a way that gives confidence to users and buyers that that's a use case that's actually okay,
[00:44:21] rather than being like, this is so risky and so scary, I don't want to go near it. And, you know, the education,
[00:44:26] the literacy, the readiness, I mean, it's on the, it's on the buyer side, on the user side too, right? So a more
[00:44:33] intelligent buyer will select, you know, technologies that have been able to find that balance between
[00:44:39] responsibility and, and innovation. And, you know, consumers need to follow the same thing. Don't
[00:44:45] just trust what you, what you see. I mean, as, as always, we've been saying that for decades, right? Um, don't
[00:44:51] trust everything you read, don't trust everything that you see, and nowadays we have to say, you know,
[00:44:56] the output of generative AI is not necessarily a fact. It's, you know, it's not a calculator, so just
[00:45:02] don't outsource your, your critical thinking. I guess this is more a curiosity question, like,
[00:45:07] I'm imagining that, since you do the work that you do, that you're skeptical of a lot of AI tools that
[00:45:14] you see. But are there any, uh, particular, like, use cases or, or tools, you know, outside of the
[00:45:21] HR talent domain, outside of the governance domain, that you're particularly, you know, fond of or
[00:45:27] scared of? Yeah, and you can maybe see in my answers that I, I like the duality of the higher-risk ones, where
[00:45:34] there's genuine risk and fear for me to be concerned about, but also the opportunity that those can maybe be,
[00:45:38] when done correctly, even better than, than the human way. So I kind of have that weird duality in me,
[00:45:43] so both come at the same time, of, of fond of and fearful of. I think for me the one that stands out
[00:45:49] as really interesting is public sector. In the public sector you've got bureaucratic processes, you've got,
[00:45:56] you know, scale, of people applying for things like visas or welfare benefits or marriage license
[00:46:05] applications, etc. They're essentially text-based in nature, which LLMs, as we know, um, in the world
[00:46:11] now, are very good at dealing with. There's huge bottlenecks, you know, you might, we could, you want to apply for a
[00:46:16] visa to go to the US, it'll take you a year to get it back, for a number of reasons, partly because they just
[00:46:20] can't deal with the volume, right? And anyway, that's the situation. So, obviously, a super high-risk scenario, so
[00:46:27] fearful of that as well. But I think there's quite interesting potential for AI, particularly large
[00:46:32] language models, I think, to, to get involved in public sector use cases. But there's also, a lot of this is
[00:46:38] particularly UK, I think less so in the US, there's been a number of kind of scandals around
[00:46:43] not even, like, advanced AI, just, like, simple software and algorithms being used in some of these processes
[00:46:49] before, that have been exposed for, uh, for error and discrimination, ultimately. So it's highly, highly,
[00:46:56] uh, risky as well, of course, and it's in the public domain too. So I find that use case both scary and,
[00:47:02] but I think, I think one day we'll get there, but we definitely need to take our time, right, before we
[00:47:06] slap ChatGPT into a visa application process. No, that makes sense. All right, very last question:
[00:47:13] any advice for the, for the laggards who have not jumped into, to AI yet? I don't know about specific
[00:47:21] practical advice, but, um, I don't know, for me it's, I think it's important to embrace the change
[00:47:27] on some level, but also be realistic about it, right? I think what can disillusion people, often,
[00:47:33] whether they're watching others or whether they're doing it themselves, is people who
[00:47:35] get super excited and are like, this new thing is going to change everything, and it's going to
[00:47:40] happen very, very, it's going to happen now, like, let's all do this. And the sad reality with all
[00:47:44] big technology changes, uh, when you look at the internet or even mobile phones and others, is it takes
[00:47:50] a while for the real benefits to come in, right? The first few years are relatively minor in any kind of
[00:47:55] real benefit to the world and so on. You know, a few people have it, people will use it, uh, it's maybe 10
[00:48:00] years later, after the advent of the internet, for example, that really it's mass adoption, and, like,
[00:48:05] the world is quite a different place. And I think that'll be the same with this, this change too, right?
[00:48:10] We're getting small benefits, nothing too exciting, easy to get disillusioned. It's a marathon, not a
[00:48:14] sprint. Just keep up, generally, learning about AI, be more literate, experimenting with it in your own
[00:48:18] time, using it where you can in business, and then in five years, 10 years, there'll be, I think, quite big
[00:48:24] changes from where we are today, but not so much in the next, like, year or two. It's not gonna, it's
[00:48:29] gonna disappoint rather than impress, probably, in the short term. No, that's, that's sound advice. Hype
[00:48:34] cycles are real. Jeff, this has been great. Thank you so much for your time. I think there's a lot of
[00:48:40] really important nuggets in here for people who aren't sure how to, you know, be responsible by
[00:48:47] design in their use of this technology. So I think we covered, we covered a lot of ground. Really, really
[00:48:52] appreciate it. Great, well, thanks for your time. Really enjoyed the chat. Absolutely, me too. All right,
[00:48:57] thank you again, Jeff, and thank you everyone for listening. We'll see you next time.