Powered by the WRKdefined Podcast Network.
[00:00:00] Welcome to PeopleTech, the podcast of Workforce AI.News, I'm Mark Feffer. Not long ago, the European Union enacted what's called the EU AI Act. Basically, it's a set of regulations governing privacy and reporting in relationship to AI.
[00:00:29] Joining me today is Jason Albert, the Global Chief Privacy Officer at ADP. We're going to talk about the act itself, what's driving it and what its impact will be on governments and employers around the world. All on this edition of PeopleTech. Hi Jason, welcome.
[00:00:50] We're going to talk about the EU AI Act which has been getting a lot of media play lately. It's the first legislation of its kind that's been adopted anywhere in the world. Broadly speaking, what's it do?
[00:01:04] So as you noted, the EU AI Act is sort of the world's first comprehensive AI legislation. And what it's really designed to do is to address what the EU saw as possible risks as AI deployment became more common.
[00:01:21] And so it's designed to do, I think really three things. First, it's designed to prohibit certain high-risk use of AI. Think of things like real-time biometric identification by public authorities in public places.
[00:01:37] Those were viewed as being so invasive, with so much risk of getting it wrong, that those types of things shouldn't be permitted. And so you have a set of things that are prohibited like that.
[00:01:49] Then you have this wide range of high-risk use cases, things that are viewed as sort of potentially impacting fundamental human rights or having such impact on people that there needs to be governance about how AI is deployed across those use cases.
[00:02:06] And then third, and this was added late in the process, there are transparency requirements around foundational models. Think large language models that power generative AI.
[00:02:16] The idea that there needs to be some transparency and information so people can properly understand and assess those models as they decide to use them.
[00:02:25] So really it covers those three things: certain things that are banned, certain things that are high-risk that have to go through a number of governance steps and monitoring steps both before and after they're placed on the market in the EU.
[00:02:40] And then transparency requirements around large language models. Now, what's the thinking behind it? Or, maybe another way to put it: what's the issue that it's trying to solve?
[00:02:52] Well, here's the issue it's trying to solve. We have generally applicable laws that apply to AI just like anything else, right? You can't discriminate, and the fact that you're using an AI tool doesn't relieve you of that obligation.
[00:03:04] You have to comply with privacy laws, and that's true whether the data is processed by an AI-enabled tool or processed in some other way. But there was a concern that there were certain inherent risks to AI that weren't covered by the existing legal structure.
[00:03:20] These range from things like how is the AI developed? Is the data of sufficient quality? Are we sure that the output is actually meaningful, right? Does the AI do what it claims to do?
[00:03:34] When you put it on the market are you just sort of putting it on the market or are you monitoring how it performs?
[00:03:40] Are you making sure that the results don't drift? Are you making sure that the steps you've taken to eliminate or mitigate bias continue to be effective? Is there human oversight, or are people just using this to make automated decisions?
[00:03:55] Are they using it as an input? Are people understanding how the tool works? When you have a tool that's developed by a company but it's deployed by a different company, does that second company understand how it works?
[00:04:06] Does it understand its instructions? All of these things are governed by the regulation, again for these high-risk use cases, to make sure that risks are accounted for as these AI systems are put on the market in Europe, in areas where they're going to have significant impact on the...
[00:04:27] Now, I said earlier that this is the first legislation of this kind in the world but it's a busy world right now and there's lots of people starting to use AI. So are there other jurisdictions that are working on this or do you think that's going to happen?
[00:04:45] Yeah, so there's similar legislation that's pending right now in Canada. I think it's in second reading in the House of Commons and so it's going through the amendment process there. The UK government has announced plans to do some regulation, perhaps a little bit more of a self-regulatory approach.
[00:05:03] If you look in the US, we've got a number of things. The most prominent is that the National Institute of Standards and Technology, which is part of the Department of Commerce, has come up with an AI risk management framework.
[00:05:15] So it's voluntary but it's meant to do much the same thing. It's meant to make sure that people who deploy AI establish a governance program that they monitor the risks, that they have a way of measuring the impact of those risks and managing them in order to address them.
[00:05:34] And again, it's not prescriptive, because AI can be used for a lot of different things. It could be used in the context of employment, as I'm sure we'll discuss. It could be used in your self-driving car, or for lower-risk things like giving you movie recommendations.
[00:05:48] But the idea is you have to think through and account for the risks in those various ways. And of course in the US, on top of this voluntary framework, we have Congress really trying to learn about AI and considering regulation.
[00:06:01] There were AI Insight Forums in the Senate. The House has just established a bipartisan task force on this.
[00:06:08] So this is going to continue; this is just the tip of the iceberg. We're going to see, I think, increasing regulation around the world, you know, just as we saw with privacy.
[00:06:19] It strikes me that since it's a global issue, it's going to be pretty tough on global corporations to make sure that they're in sync or in compliance with all of these different countries and jurisdictions.
[00:06:36] First of all, is that assumption correct? But also, where do you think they should be in preparing for it? So look, I think anytime you have regulation for things that are going to be used globally, like AI, you want to have commonality.
[00:06:55] Right. And I think it's a little early to tell, but we're seeing some signs of that. Right. I think that there's not a huge gap with the self-regulatory framework that NIST has established in terms of governance, because they both think about certain things like data quality.
[00:07:10] You know, are you measuring how your systems perform? Are you checking for drift? It's really more of a series of questions. The EU is taking a little bit more of a prescriptive approach, again in these high-risk fields.
[00:07:21] But, you know, Canada I think is sort of following something similar. So I don't think you're going to see the divergence that we've seen in the privacy space, where, you know, GDPR and rules like it cover a large part of the world,
[00:07:34] and then the US takes a somewhat different approach to regulating privacy. But I think the jury is still out on that. And, you know, look, all these things are going to evolve.
[00:07:44] Right. We think about this fundamentally, you know, technology is not an inexorable force that acts on society. We as society decide how technology is going to be used just as we set other rules of the road across all sorts of different areas.
[00:07:59] And so I think it's only natural that policymakers and others are going to be looking at this space. Right. They're going to see the potential of AI. There's just huge potential for what it can do for employers, for employees, for others.
[00:08:11] I mean, just think how amazing it is to be able to like stand outside your building and click a button and have your self-driving car back out of its parking space and come around just like it has an invisible valet.
[00:08:27] But along with those, you know, we have to make sure that the risks are addressed. And so I think that's really what's going on here. And what about employers? There must be... or, I'm sorry, let me ask it this way.
[00:08:42] Are there particular areas that you think are going to impact employers in particular, and what should employers be looking out for as this is all going on?
[00:08:54] Well, look, I think there are a number of different ways to think about this as an employer. Right. I think there's the area of what's called high risk.
[00:09:04] Another way of thinking about it is consequential decisions, where AI is a significant factor informing a human decision maker, on things like recruiting, hiring, compensation, work assignment, termination.
[00:09:20] Those are the areas where I think there's going to be a lot of attention paid. Whether in the self-regulatory frameworks or in the EU AI Act, that's a high-risk activity that is going to involve governance across the tool's lifecycle.
[00:09:36] You know, being sure that it's created using high-quality data, again making sure that it's monitored for its performance, testing it for bias. And we see that, frankly, in things like New York City's law on automated decision-making in employment.
[00:09:52] And so I think that's one set of things. But then the area of employment, and the opportunity for employers and employees, is so much broader. Think about an AI tool that helps me enhance my resume, because it looks at it and says, Jason, you've got skills in these areas.
[00:10:08] Maybe you've got this other skill, right? You've done public policy; maybe you have a skill in government affairs. Don't you want to add that?
[00:10:15] Maybe it says, hey, you said you do word processing, but that's not really how people talk about it; the standard thing is to talk about skills in Microsoft Word. Why don't we say that, so you fit in better when you go apply for jobs?
[00:10:27] What about AI that develops training? It says, all right, for your career path, here's your next role, here are the skills you're missing, here's the training that you need. Things like that; there's so much there.
[00:10:39] That, you know, poses very little risk and is of high benefit to employees, who get to grow their careers, and of benefit to employers, whose employees have more opportunities and gain skills. It can help find non-traditional talent that matches up well against jobs.
[00:10:54] Right, all these types of things. So I think there's no single way of thinking about it.
[00:11:00] What's your impression of employers', and I guess corporations', attitude toward all this? Are they fighting it? Are they just kind of resigned to it? Do they think it's necessary?
[00:11:14] I don't think any of those are really how I would describe it. You know, I have the opportunity to spend a lot of time with our clients.
[00:11:22] You know, I spent some time with our HC leadership council last fall, and I just came back from Meeting of the Minds with a lot of our clients. I think they're really excited about AI, I think they're really excited about its potential. Right, I think they view it as a way of,
[00:11:40] you know, increasing productivity, of taking away routine tasks to allow people to be more creative. Again, of finding new talent, of making it easier for people to advance, of being able to help find great candidates for jobs, all those types of things.
[00:11:57] So I really view it as one of embracing it. And as they embrace it, they want to understand functionality; they want to understand what it's going to do.
[00:12:06] I think they want to have, you know, some assurance that they understand the risks. And I think, like all of us, they're thinking about how do we implement a governance program to make sure we use it the right way, that we have that human oversight, that we understand how it works, that we
[00:12:19] understand what it's telling us and what it isn't, so we're not just defaulting to whatever the machine says. But I think overall there's just a huge amount of excitement and enthusiasm around it.
[00:12:30] Well, Jason, thanks very much for stopping by today. It was great to meet you, and we'll probably invite you back as things progress around the world. Well, thank you. It's been a delight to be here, and a great conversation.
[00:12:50] My guest today has been Jason Albert, the Global Chief Privacy Officer at ADP, and this has been PeopleTech, the podcast of WorkforceAI.news. To keep up with AI technology and HR, subscribe to WorkforceAI today.
[00:13:11] We're the most trusted source of news in the HR tech industry. Find us at www.workforceai.news. I'm Mark Feffer.


