Summary:
Jason Albert is the Global Chief Privacy Officer at ADP. He’s worked at the intersection of technology and law for over 25 years and has focused on privacy and AI.
In this episode, Jason unpacks the new EU AI Act, how it will affect companies around the globe, and what they can do to prepare for this new regulation.
Chapters
- Welcome, Jason!
- Today’s Topic: Unpacking the EU AI Act
[3:33 - 8:35] What are the implications of the new EU AI Act?
- The Act takes a horizontal approach to AI regulation that could affect companies globally
- The importance of thinking through usage of AI and developing ethical principles
[8:36 - 14:43] What are the key points of the EU AI Act?
- Breaking down the 3 main points of the Act
- Using employee data in AI models for HR
[14:44 - 26:56] What can companies around the world do to be compliant with this new regulation?
- Could GDPR provide a roadmap for compliance with the Act?
- How quickly will the EU (and the rest of the world) have to comply?
- Thanks for listening!
Quotes:
“It’s important to know that [the EU AI Act] applies to organizations outside the EU if they supply AI systems in the EU or if the outputs of their systems will be used in the EU.”
“The EU AI Act addresses three key risk areas. . . . and it has different timelines for implementing these provisions depending on the risk category.”
Contact:
Jason's LinkedIn
David's LinkedIn
Podcast Manager: Karissa Harris
Email us!
Production by Affogato Media
To schedule a meeting with us: https://salary.com/hrdlconsulting
For more HR Data Labs®, enjoy the HR Data Labs Brown Bag Lunch Hours every Friday at 2:00PM-2:30PM EST. Check it out here: https://hrdatalabs.com/brown-bag-lunch/
Powered by the WRKdefined Podcast Network.
[00:00:01] [SPEAKER_00]: The world of business is more complex than ever.
[00:00:04] [SPEAKER_00]: The world of human resources and compensation is also getting more complex.
[00:00:09] [SPEAKER_00]: Welcome to the HR Data Labs podcast,
[00:00:12] [SPEAKER_00]: your direct source for the latest trends from experts inside and outside the world of human resources.
[00:00:18] [SPEAKER_00]: Listen as we explore the impact that compensation strategy, data, and people analytics can have on your organization.
[00:00:25] [SPEAKER_00]: This podcast is sponsored by salary.com, your source for data, technology, and consulting for compensation and beyond.
[00:00:34] [SPEAKER_00]: Now here are your hosts, David Turetsky and Dwight Brown.
[00:00:38] [SPEAKER_01]: Hello and welcome to the HR Data Labs podcast.
[00:00:40] [SPEAKER_01]: I'm your host David Turetsky.
[00:00:42] [SPEAKER_01]: Like always we try and find brilliant people inside and outside the world of HR to tell you what's happening, what's going on, what's the latest.
[00:00:48] [SPEAKER_01]: Today we have with us Jason Albert from ADP.
[00:00:52] [SPEAKER_01]: Jason, how are you?
[00:00:53] [SPEAKER_02]: I'm doing well. It's great to be here.
[00:00:55] [SPEAKER_01]: It's great to have you. Jason, tell us a little bit about you and ADP.
[00:01:00] [SPEAKER_02]: Sure. Well, you know, I'm Jason Albert. I'm the Global Chief Privacy Officer at ADP.
[00:01:05] [SPEAKER_02]: I've worked for over 25 years at the intersection of technology and law with a focus on privacy and more recently AI.
[00:01:12] [SPEAKER_02]: Having worked in Europe and the US, at law firms and tech companies,
[00:01:16] [SPEAKER_02]: I'm inspired by the opportunity that new technology provides.
[00:01:19] [SPEAKER_01]: I've worked in Europe too when I worked at Morgan Stanley back in the early 1990s.
[00:01:25] [SPEAKER_01]: Isn't it fun to have like bi-continental experience?
[00:01:29] [SPEAKER_02]: Oh, absolutely. I spent about five years in Europe and it was great because you get to see a different culture,
[00:01:36] [SPEAKER_02]: experience a different legal system, see alternate approaches to regulation, and I learned a tremendous amount over there.
[00:01:46] [SPEAKER_01]: Yeah, it's so much fun. In some ways you had to relearn what you thought you knew when you came over, so that was even more fun.
[00:01:52] [SPEAKER_02]: Very true.
[00:01:53] [SPEAKER_01]: So Jason, what we ask every one of our guests to do, tell us one fun thing that no one knows about Jason Albert.
[00:02:03] [SPEAKER_02]: Well, my thought on that would be that at college I actually studied plate tectonics under the professor who developed the theory
[00:02:13] [SPEAKER_02]: and wrote the seminal paper on the topic.
[00:02:15] [SPEAKER_01]: Wow. I think you may have heard just recently the San Andreas Fault is moving again.
[00:02:20] [SPEAKER_02]: Well, we had an earthquake here in New Jersey a couple weeks ago.
[00:02:24] [SPEAKER_01]: Yeah. Yes, the ground shook.
[00:02:28] [SPEAKER_01]: And actually, I think we were supposed to feel it up here in Massachusetts if I'm not mistaken.
[00:02:33] [SPEAKER_02]: Yeah, I mean, definitely people in New York and elsewhere felt it, which was interesting.
[00:02:36] [SPEAKER_02]: I was on a call at the time, and I live pretty close to the epicenter.
[00:02:40] [SPEAKER_02]: So the house started shaking and then a few seconds later you can see my colleague in the office,
[00:02:46] [SPEAKER_02]: she started shaking and then we had somebody from New York City and they started shaking.
[00:02:51] [SPEAKER_01]: Wow.
[00:02:51] [SPEAKER_02]: You can see the propagation across.
[00:02:54] [SPEAKER_01]: Absolutely. Scary too.
[00:02:55] [SPEAKER_02]: Well, not that scary.
[00:02:58] [SPEAKER_01]: Well, I would have been a little bit freaked out.
[00:03:01] [SPEAKER_01]: I know my dogs would have been more freaked out but all right, well,
[00:03:04] [SPEAKER_01]: I'm going to be calling you next time we have an earthquake.
[00:03:06] [SPEAKER_01]: So I'll try and get the skinny on what it was and what I actually should have felt.
[00:03:11] [SPEAKER_01]: But today we're going to talk about something that does feel like an earthquake probably to many in the European Union
[00:03:18] [SPEAKER_01]: who are thinking about or using artificial intelligence, which is the EU AI Act.
[00:03:33] [SPEAKER_01]: So our first question: the EU has just passed an act governing AI.
[00:03:38] [SPEAKER_01]: Give us the 10,000-foot view first, and then let's discuss the implications and what companies are doing to be able to deal with this in the EU.
[00:03:48] [SPEAKER_02]: Absolutely.
[00:03:49] [SPEAKER_02]: Right?
[00:03:50] [SPEAKER_02]: You know, we see the policymakers around the world are really grappling with how to realize the benefits of AI while at the same time protecting individuals.
[00:03:58] [SPEAKER_02]: And as you mentioned, the EU is taking a first step with its Artificial Intelligence Act, which, when finalized,
[00:04:04] [SPEAKER_02]: we expect next month, will be the first comprehensive regulation of AI anywhere in the world.
[00:04:11] [SPEAKER_02]: You know, it was approved by the European Parliament on March 13th, and then, you know, it's gone through a legal-linguistic review,
[00:04:16] [SPEAKER_02]: and finally it has to be approved by the European Council.
[00:04:21] [SPEAKER_02]: You know, and the interesting thing about the EU AI Act is that it takes a horizontal approach.
[00:04:25] [SPEAKER_02]: It regulates AI whether it's a standalone offering in software or whether it's embedded in hardware like in a self-driving car.
[00:04:32] [SPEAKER_02]: And it takes a life cycle approach.
[00:04:34] [SPEAKER_02]: It runs from initial development, to the usage of the AI, to post-market monitoring, and it covers all the parties involved,
[00:04:42] [SPEAKER_02]: those that are developing the AI, those that are introducing it on the market, those who are selling it, distributing it, and then ultimately using it.
[00:04:51] [SPEAKER_02]: And, you know, it's important to know that it also applies to organizations outside the EU if they supply AI systems in the EU
[00:04:58] [SPEAKER_02]: or if the outputs of their systems will be used in the European Union.
[00:05:02] [SPEAKER_01]: So, it really is a comprehensive view of artificial intelligence.
[00:05:08] [SPEAKER_01]: Now, do you think the EU, and this might sound funny, but do you think the EU is worried because of some of those people
[00:05:14] [SPEAKER_01]: who are talking about the Doomsday, or who had been watching too many movies about AI,
[00:05:20] [SPEAKER_01]: maybe even the movie AI, about how this could escalate out of control?
[00:05:24] [SPEAKER_01]: Do you think they were kind of responding to that or is this just kind of setting the boundaries?
[00:05:30] [SPEAKER_02]: Look, I think really when you think about this it's a question of setting boundaries.
[00:05:35] [SPEAKER_02]: We all know that ultimately society decides how technology is going to be used.
[00:05:41] [SPEAKER_02]: And I mentioned at the outset how I'm personally excited and inspired by the potential of technology.
[00:05:49] [SPEAKER_02]: And AI offers so much potential: it's going to help make tasks faster.
[00:05:53] [SPEAKER_02]: It's going to give us new opportunities.
[00:05:55] [SPEAKER_02]: It's going to help us do things that we weren't able to do before.
[00:05:58] [SPEAKER_02]: It's going to give us insights that we weren't able to achieve before,
[00:06:02] [SPEAKER_02]: before the development of this type of programming and this type of computing power.
[00:06:06] [SPEAKER_02]: But at the same time obviously it's important to have guardrails.
[00:06:09] [SPEAKER_02]: It's important to protect against risks.
[00:06:11] [SPEAKER_02]: It's important to protect against potential bias.
[00:06:14] [SPEAKER_02]: It's important to protect against misuse.
[00:06:16] [SPEAKER_02]: And so I think really this regulatory structure was adopted to try to strike a balance between the realization of those benefits
[00:06:23] [SPEAKER_02]: and addressing possible risks.
[00:06:28] [SPEAKER_01]: Well, we've seen that there are risks, and there are actually real risks. Like, for example,
[00:06:33] [SPEAKER_01]: utilizing artificial intelligence to develop technology that can use people's voice patterns
[00:06:39] [SPEAKER_01]: to create, for example, the robocalls that just happened during the political process here in the US,
[00:06:47] [SPEAKER_01]: where they were faking a candidate's voice, or many candidates' voices,
[00:06:53] [SPEAKER_01]: and tricking people into either not showing up to the polls or making the wrong decision.
[00:06:58] [SPEAKER_01]: Do you think that's kind of one of those, quote unquote, foreseeable risks?
[00:07:03] [SPEAKER_01]: That's one of the scarier ones, where we're actually seeing rogue players use it to do the wrong thing right now.
[00:07:11] [SPEAKER_02]: I think when you really look at technology, right?
[00:07:14] [SPEAKER_02]: Technology is a tool, right?
[00:07:16] [SPEAKER_02]: And like any tool it can be used for various purposes, right?
[00:07:20] [SPEAKER_02]: I have a hammer here in the house.
[00:07:22] [SPEAKER_02]: I've used it to hang some pictures.
[00:07:22] [SPEAKER_02]: Back in my days studying geology, I was out in the field and I was using it to chip off rocks
[00:07:29] [SPEAKER_02]: so I could look at the grain pattern and look at how they were tilted relative to the landscape.
[00:07:36] [SPEAKER_02]: So I really think when you think about AI, it's important to think through sort of the usage.
[00:07:43] [SPEAKER_02]: It's important to adopt ethical principles that govern how you'll use AI.
[00:07:49] [SPEAKER_02]: We've done that at ADP.
[00:07:52] [SPEAKER_02]: We have our own set of ethical principles that cover things like explainability, transparency,
[00:07:58] [SPEAKER_02]: human oversight, addressing bias, having an inclusive development process,
[00:08:05] [SPEAKER_02]: training, all those types of things.
[00:08:09] [SPEAKER_01]: In essence, it's just like a hammer.
[00:08:11] [SPEAKER_01]: It can be used for good things or it can be used for bad things.
[00:08:14] [SPEAKER_02]: Well, yeah, although I don't think it probably can be used to chip off rocks.
[00:08:18] [SPEAKER_01]: Well, when the robots start being utilized by the AI, that might be true.
[00:08:25] [SPEAKER_00]: Like what you hear so far?
[00:08:27] [SPEAKER_00]: Make sure you never miss a show by clicking subscribe.
[00:08:30] [SPEAKER_00]: This podcast is made possible by salary.com.
[00:08:33] [SPEAKER_00]: Now, back to the show.
[00:08:36] [SPEAKER_01]: So let's get to question two.
[00:08:38] [SPEAKER_01]: Let's get a little more detailed.
[00:08:39] [SPEAKER_01]: How is the EU AI Act structured?
[00:08:42] [SPEAKER_01]: What are the key points, and when does it actually go into effect?
[00:08:46] [SPEAKER_02]: Look, I think that's a great question.
[00:08:48] [SPEAKER_02]: And if you think about the EU AI Act, it addresses three key risk areas.
[00:08:51] [SPEAKER_02]: First, it bans certain uses of AI that are seen as posing unacceptable risks.
[00:08:57] [SPEAKER_02]: You know, one example is real time biometric identification by law enforcement.
[00:09:02] [SPEAKER_02]: Another one is social scoring, things like that.
[00:09:05] [SPEAKER_02]: Second, it adopts a regulatory regime for so-called high risk use cases.
[00:09:10] [SPEAKER_02]: You know, those where the use of AI could impact the rights or opportunities
[00:09:14] [SPEAKER_02]: available to individuals, you know, whether in things like education,
[00:09:18] [SPEAKER_02]: employment, access to credit, things like that.
[00:09:21] [SPEAKER_02]: Third, for foundational models such as large language models,
[00:09:26] [SPEAKER_02]: it imposes transparency obligations.
[00:09:28] [SPEAKER_02]: So you get more information about how those models are developed
[00:09:31] [SPEAKER_02]: and when they're being used.
[00:09:33] [SPEAKER_02]: So you can see it covers sort of the gamut: the things that are banned,
[00:09:38] [SPEAKER_02]: certain things that are seen as high risk, and then additional protections,
[00:09:42] [SPEAKER_02]: you know, certain things for the LLMs that we've all come to know
[00:09:45] [SPEAKER_02]: and love in the age of generative AI.
[00:09:47] [SPEAKER_02]: And then it has different timelines for implementing these provisions
[00:09:51] [SPEAKER_02]: depending on the risk category.
[00:09:52] [SPEAKER_02]: So for those things that are banned, those will come into effect
[00:09:56] [SPEAKER_02]: six months after the act's adopted.
[00:10:00] [SPEAKER_02]: The provisions around LLMs and general purpose models will follow at 12 months,
[00:10:03] [SPEAKER_02]: and then for high risk it'll be 24 months.
[00:10:07] [SPEAKER_02]: And then there are a few small provisions, about AI embedded
[00:10:09] [SPEAKER_02]: in other products, that actually extend out to three years.
[00:10:14] [SPEAKER_02]: So that's essentially the way that it's structured.
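To make Jason's breakdown concrete, here is a minimal sketch in Python of the phase-in schedule he describes, assuming a hypothetical entry-into-force date; the tier names are illustrative labels, not terms from the act itself.

from datetime import date

# Illustrative only: risk tiers and phase-in periods as Jason describes them.
# The tier names and the entry-into-force date below are placeholders.
PHASE_IN_MONTHS = {
    "prohibited_practices": 6,     # banned uses, e.g. social scoring
    "general_purpose_models": 12,  # transparency duties for LLMs / foundation models
    "high_risk_systems": 24,       # e.g. AI used in employment, education, credit
    "ai_embedded_in_products": 36, # AI embedded in other regulated products
}

def compliance_deadline(entry_into_force: date, tier: str) -> date:
    """Add the tier's phase-in period, in whole months, to the start date."""
    year_carry, month_index = divmod(
        entry_into_force.month - 1 + PHASE_IN_MONTHS[tier], 12
    )
    return entry_into_force.replace(
        year=entry_into_force.year + year_carry, month=month_index + 1
    )

# With a hypothetical entry into force on 2024-08-01, high-risk duties land in 2026:
print(compliance_deadline(date(2024, 8, 1), "high_risk_systems"))  # 2026-08-01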
[00:10:18] [SPEAKER_01]: So I know that there's been a lot of talk in the U.S. about the problems with biometrics
[00:10:22] [SPEAKER_01]: and the training sets that AI has utilized in the past,
[00:10:27] [SPEAKER_01]: especially going through things like airports and the TSA.
[00:10:30] [SPEAKER_01]: Do you see the U.S. kind of taking on similar...
[00:10:34] [SPEAKER_01]: I don't necessarily know if they're going to be regulations.
[00:10:37] [SPEAKER_01]: They might turn out to be.
[00:10:39] [SPEAKER_01]: Regulations or laws based on where the EU is going with that?
[00:10:43] [SPEAKER_02]: Well, I don't know that it'll be necessarily based on where the EU is going.
[00:10:47] [SPEAKER_02]: But I think in the U.S., as elsewhere around the globe,
[00:10:50] [SPEAKER_02]: there's this concern about enabling the potential of AI while addressing the risks.
[00:10:56] [SPEAKER_02]: And so we see a couple of different things here in the U.S.
[00:10:58] [SPEAKER_02]: We see the National Institute of Standards and Technology,
[00:11:03] [SPEAKER_02]: which is part of the Department of Commerce, has adopted an AI risk management framework.
[00:11:07] [SPEAKER_02]: And it really provides a way of sort of thinking through how do you identify risks?
[00:11:11] [SPEAKER_02]: So it has govern: you have to have this governance regime. Then map:
[00:11:15] [SPEAKER_02]: you have to map your activities and the risks.
[00:11:17] [SPEAKER_02]: Measure: you have to then measure the risks and see whether you think they're large or small,
[00:11:23] [SPEAKER_02]: and then manage: you have to monitor how the steps you've taken to address those risks are performing.
[00:11:28] [SPEAKER_02]: So it's a self-regulatory system, but it's pretty detailed in terms of what it requires
[00:11:34] [SPEAKER_02]: looking across, again, different types of AI.
[00:11:37] [SPEAKER_02]: There's a bill pending in Connecticut, as we speak, that just passed part of the legislature.
[00:11:45] [SPEAKER_02]: That also would adopt sort of a risk framework, not quite as involved as what was adopted in Europe,
[00:11:52] [SPEAKER_02]: but again, this sort of similar life cycle approach about looking at how you develop
[00:11:56] [SPEAKER_02]: and deploy and use AI technology.
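As a rough illustration of the NIST AI risk management framework Jason summarizes, the sketch below walks one AI activity through the framework's four functions. The four function names (govern, map, measure, manage) come from NIST; the descriptions and data shapes are paraphrases of this conversation, not NIST's own text.

# A toy checklist generator; only the four function names are NIST's.
RMF_FUNCTIONS = [
    ("govern", "put a governance regime in place for how AI is used"),
    ("map", "map your AI activities and the risks they carry"),
    ("measure", "measure each risk and judge whether it is large or small"),
    ("manage", "monitor how the steps taken to address each risk are performing"),
]

def review_checklist(activity: str) -> list[str]:
    # Produce the questions to work through for one AI activity.
    return [f"{activity} -> {name}: {description}" for name, description in RMF_FUNCTIONS]

for item in review_checklist("AI-assisted resume screening"):
    print(item)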
[00:11:59] [SPEAKER_01]: I think there was legislation pending quite a long time ago from California,
[00:12:05] [SPEAKER_01]: which talked about the use of employee data in models,
[00:12:10] [SPEAKER_01]: and it was potentially going to be disastrous for HR
[00:12:14] [SPEAKER_01]: because it would have basically said that employees had to sign off on usage of their data
[00:12:20] [SPEAKER_01]: in any type of algorithm that included their data.
[00:12:24] [SPEAKER_01]: And that would have made it really hard to do a headcount analysis
[00:12:27] [SPEAKER_01]: and, you know, if Fred didn't want you to count him in the headcount,
[00:12:30] [SPEAKER_01]: well, then we couldn't use it.
[00:12:32] [SPEAKER_01]: So it would have made things like analytics and even doing things like bonus planning
[00:12:38] [SPEAKER_01]: or merit increase modeling pretty impossible if we didn't have access to that.
[00:12:42] [SPEAKER_01]: Do you see that where Connecticut may be going,
[00:12:45] [SPEAKER_01]: or having those kind of protections for using employee or HR data in that way?
[00:12:53] [SPEAKER_02]: So what we see, for example, are a couple of different things.
[00:12:57] [SPEAKER_02]: We see in the context of certain automated employment decision technologies,
[00:13:03] [SPEAKER_02]: things that are used to help with hiring or recruitment.
[00:13:05] [SPEAKER_02]: You see requirements to give applicants the ability to opt out of having that technology
[00:13:10] [SPEAKER_02]: act on them.
[00:13:11] [SPEAKER_02]: So that's one thing that you see.
[00:13:15] [SPEAKER_02]: But the AI laws really tend to be a little bit more focused on things like data quality,
[00:13:21] [SPEAKER_02]: things like, you know, making sure that the AI is accurate, that it has validity,
[00:13:26] [SPEAKER_02]: making sure that you provide transparency so somebody knows that AI is being used
[00:13:30] [SPEAKER_02]: in a certain context.
[00:13:32] [SPEAKER_02]: In terms of the data collection, right?
[00:13:36] [SPEAKER_02]: That's usually governed either by IP laws in terms of what you can do with
[00:13:41] [SPEAKER_02]: sort of existing materials or privacy laws in terms of the ability
[00:13:45] [SPEAKER_02]: and the rights that people have around their personal data
[00:13:49] [SPEAKER_02]: and what it can be used for.
[00:13:53] [SPEAKER_01]: So what you're saying is that it's possible that those things might come in,
[00:13:58] [SPEAKER_01]: but it's not really going to be necessarily around AI.
[00:14:03] [SPEAKER_01]: It may be around other things.
[00:14:04] [SPEAKER_02]: Right. Yeah.
[00:14:05] [SPEAKER_02]: I think what you see is you tend to see things that regulate AI
[00:14:08] [SPEAKER_02]: to really sort of be focused on that.
[00:14:10] [SPEAKER_02]: And then the controls on things that might be inputs tend to exist
[00:14:14] [SPEAKER_02]: in related legal areas.
[00:14:18] [SPEAKER_01]: Hey, are you listening to this and thinking to yourself,
[00:14:21] [SPEAKER_01]: man, I wish I could talk to David about this?
[00:14:23] [SPEAKER_01]: Well, you're in luck.
[00:14:25] [SPEAKER_01]: We have a special offer for listeners of the HR Data Labs podcast,
[00:14:29] [SPEAKER_01]: a free half hour call with me about any of the topics we cover on the podcast
[00:14:33] [SPEAKER_01]: or whatever is on your mind.
[00:14:36] [SPEAKER_01]: Go to salary.com forward slash H-R-D-L consulting
[00:14:40] [SPEAKER_01]: to schedule your free 30 minute call today.
[00:14:44] [SPEAKER_01]: So why don't we turn our attention back to the EU Act?
[00:14:48] [SPEAKER_01]: What can companies do pretty much around the world
[00:14:51] [SPEAKER_01]: to be compliant with this new regulation?
[00:14:54] [SPEAKER_02]: Look, I think there's strong alignment between good AI governance
[00:14:58] [SPEAKER_02]: and the requirements of the EU AI Act.
[00:15:01] [SPEAKER_02]: Right? When you're thinking about any sort of AI development,
[00:15:04] [SPEAKER_02]: it's important to have a process to identify and assess
[00:15:07] [SPEAKER_02]: and manage risks.
[00:15:08] [SPEAKER_02]: Similarly, you know, to make sure that you have good output,
[00:15:11] [SPEAKER_02]: you need to use high quality data.
[00:15:13] [SPEAKER_02]: You have to monitor performance of the model.
[00:15:15] [SPEAKER_02]: You want to make sure that it doesn't drift.
[00:15:17] [SPEAKER_02]: You want to make sure that you, you know, address any potential bias.
[00:15:21] [SPEAKER_02]: And obviously, AI that makes recommendations
[00:15:23] [SPEAKER_02]: needs to be subject to human oversight.
[00:15:26] [SPEAKER_02]: So, you know, we've talked a little bit about how, you know,
[00:15:28] [SPEAKER_02]: risk management is really at the core of the EU AI Act's approach.
[00:15:32] [SPEAKER_02]: You know, because of this, you know,
[00:15:33] [SPEAKER_02]: companies should identify who owns overall responsibility
[00:15:36] [SPEAKER_02]: for risk management.
[00:15:37] [SPEAKER_02]: And then it's going to be important for that person to work
[00:15:40] [SPEAKER_02]: with a cross-functional team with individuals from legal
[00:15:43] [SPEAKER_02]: and from privacy, from security and from the business
[00:15:46] [SPEAKER_02]: to map what AI systems the company is using
[00:15:50] [SPEAKER_02]: and developing or deploying, and for each of those
[00:15:52] [SPEAKER_02]: to evaluate potential risks.
[00:15:54] [SPEAKER_02]: And then to identify how to address
[00:15:56] [SPEAKER_02]: and manage those risks once you know what they are.
[00:16:00] [SPEAKER_02]: You know, and even where a company implements
[00:16:02] [SPEAKER_02]: AI systems that are developed by someone else,
[00:16:04] [SPEAKER_02]: they have to use those systems in accordance with the instructions
[00:16:08] [SPEAKER_02]: of the developer.
[00:16:09] [SPEAKER_02]: And they also have to make sure there's human oversight
[00:16:11] [SPEAKER_02]: over the use of system output.
[00:16:13] [SPEAKER_02]: So it's not just going to be about who builds the AI system.
[00:16:16] [SPEAKER_02]: It's going to be about the people who actually deploy them.
[00:16:19] [SPEAKER_02]: They're going to have some obligations as well.
[00:16:21] [SPEAKER_02]: But, you know, fundamentally at the end of the day,
[00:16:23] [SPEAKER_02]: building a strong AI governance program
[00:16:25] [SPEAKER_02]: will get you much of the way there.
[00:16:26] [SPEAKER_02]: You'll have to still account for the specifics of the EU AI Act.
[00:16:30] [SPEAKER_02]: But as with a strong privacy program,
[00:16:32] [SPEAKER_02]: those are adjustments that would be made
[00:16:34] [SPEAKER_02]: on top of a strong foundation.
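A minimal sketch of the cross-functional mapping exercise Jason describes: inventory each AI system, record an assessed risk level and an owner, and flag anything high-risk that lacks documented mitigations. All names and categories here are illustrative, not terms defined by the act.

from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str
    purpose: str
    risk_level: str                 # e.g. "minimal" or "high"; illustrative labels
    owner: str                      # who owns risk management for this system
    mitigations: list[str] = field(default_factory=list)

def needs_review(record: AISystemRecord) -> bool:
    """Flag high-risk systems that don't yet have documented mitigations."""
    return record.risk_level == "high" and not record.mitigations

inventory = [
    AISystemRecord("resume-screener", "shortlist applicants", "high", "privacy office"),
    AISystemRecord("ticket-router", "route IT tickets", "minimal", "IT ops"),
]
for record in inventory:
    if needs_review(record):
        print(f"{record.name}: evaluate risks and document mitigations")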
[00:16:36] [SPEAKER_01]: Yeah, I was just going to ask you,
[00:16:37] [SPEAKER_01]: because Jason, we remember GDPR happening.
[00:16:41] [SPEAKER_01]: And GDPR was a series of rules around data privacy
[00:16:44] [SPEAKER_01]: and, as you mentioned before, the right to be forgotten.
[00:16:47] [SPEAKER_01]: I guess you could kind of boil it down
[00:16:49] [SPEAKER_01]: from the employee's or the individual's perspective.
[00:16:52] [SPEAKER_01]: Do you see that as a roadmap for this EU AI Act?
[00:16:57] [SPEAKER_02]: Well, look, I think, you know,
[00:16:59] [SPEAKER_02]: the work that companies did to, you know,
[00:17:03] [SPEAKER_02]: come into compliance with GDPR
[00:17:04] [SPEAKER_02]: I think will be very instructive, right?
[00:17:06] [SPEAKER_02]: You'll need something that's programmatic.
[00:17:09] [SPEAKER_02]: You know, like the EU AI Act, GDPR was a horizontal regulation
[00:17:13] [SPEAKER_02]: that applied to the use of personal data,
[00:17:14] [SPEAKER_02]: whether as a consumer or as an employee
[00:17:17] [SPEAKER_02]: or, you know, things like that.
[00:17:19] [SPEAKER_02]: And so you had to think about this
[00:17:20] [SPEAKER_02]: across your business processes.
[00:17:23] [SPEAKER_02]: I think, you know, one thing that's a little bit different
[00:17:26] [SPEAKER_02]: is that GDPR really includes, you know,
[00:17:28] [SPEAKER_02]: privacy by design when you think about product development.
[00:17:31] [SPEAKER_02]: But a lot of it is about processes,
[00:17:32] [SPEAKER_02]: about, you know, allowing individuals to access
[00:17:35] [SPEAKER_02]: or delete their data.
[00:17:36] [SPEAKER_02]: You mentioned the right to be forgotten,
[00:17:38] [SPEAKER_02]: you know, rules around, you know,
[00:17:40] [SPEAKER_02]: data transfers.
[00:17:41] [SPEAKER_02]: The EU AI Act is going to bear
[00:17:42] [SPEAKER_02]: a lot more, I think, on, you know,
[00:17:44] [SPEAKER_02]: product and service development.
[00:17:45] [SPEAKER_02]: There are still elements that are after the fact,
[00:17:48] [SPEAKER_02]: like the instructions you have to give to people
[00:17:50] [SPEAKER_02]: and the human oversight and the post-market monitoring.
[00:17:53] [SPEAKER_02]: So I think it'll be a little bit different
[00:17:55] [SPEAKER_02]: in that aspect, but fundamentally the same sort of
[00:17:57] [SPEAKER_02]: horizontal, programmatic approach will be valuable.
[00:18:01] [SPEAKER_01]: Talk a little bit about the U.S.
[00:18:03] [SPEAKER_01]: and other countries having to now look at this
[00:18:07] [SPEAKER_01]: and say, are we compliant with it?
[00:18:09] [SPEAKER_01]: Do we have to be compliant with it?
[00:18:11] [SPEAKER_01]: Is there any understanding about, you know,
[00:18:13] [SPEAKER_01]: what footprint you have to have in the EU
[00:18:15] [SPEAKER_01]: to have to be, or to think about being, compliant with this?
[00:18:19] [SPEAKER_02]: Yeah, well, I think it's pretty straightforward, right?
[00:18:23] [SPEAKER_02]: If you are making a system that involves AI,
[00:18:27] [SPEAKER_02]: that will be used in the EU,
[00:18:29] [SPEAKER_02]: and that fits into one of these categories,
[00:18:31] [SPEAKER_02]: whether it's a large language model,
[00:18:32] [SPEAKER_02]: whether it's, you know, a high-risk system,
[00:18:36] [SPEAKER_02]: then you're going to need to comply with the Act.
[00:18:38] [SPEAKER_02]: So really, think about it more as
[00:18:41] [SPEAKER_02]: sort of a market approach.
[00:18:42] [SPEAKER_02]: Am I going to put a product on the market in the EU?
[00:18:43] [SPEAKER_02]: That's sort of one test, right?
[00:18:45] [SPEAKER_02]: And look, with hardware, it's very easy.
[00:18:47] [SPEAKER_02]: Like, you know, is the self-driving car,
[00:18:49] [SPEAKER_02]: you know, in Brussels or is it somewhere else?
[00:18:52] [SPEAKER_02]: But even with software, you have a pretty good sense.
[00:18:55] [SPEAKER_02]: But then the other thing is, if you have an AI system,
[00:18:59] [SPEAKER_02]: regardless of where it operates,
[00:19:02] [SPEAKER_02]: if the outputs of it are going to be used
[00:19:04] [SPEAKER_02]: in the EU, then it too has to comply with the Act.
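Jason's two-prong scope test lends itself to a tiny sketch: a covered system falls under the act if it is placed on the EU market or if its outputs are used in the EU, regardless of where the provider sits. The field and function names below are made up for illustration.

from dataclasses import dataclass

@dataclass
class CoveredAISystem:
    placed_on_eu_market: bool   # is the product offered in the EU?
    outputs_used_in_eu: bool    # are the system's outputs used in the EU?

def eu_ai_act_applies(system: CoveredAISystem) -> bool:
    # Either prong is enough; where the provider is established doesn't matter.
    return system.placed_on_eu_market or system.outputs_used_in_eu

# A model hosted outside the EU whose outputs feed an EU hiring workflow is in scope:
print(eu_ai_act_applies(CoveredAISystem(placed_on_eu_market=False,
                                        outputs_used_in_eu=True)))  # True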
[00:19:08] [SPEAKER_01]: So the answer is get ready, US,
[00:19:11] [SPEAKER_01]: get ready other countries, including the UK.
[00:19:13] [SPEAKER_01]: You now have to figure out how you're going
[00:19:15] [SPEAKER_01]: to be compliant with this within,
[00:19:17] [SPEAKER_01]: I imagine within the same timelines that the companies
[00:19:20] [SPEAKER_01]: that are headquartered in the EU do, right?
[00:19:22] [SPEAKER_02]: Yeah.
[00:19:23] [SPEAKER_02]: No, I think that would be right.
[00:19:24] [SPEAKER_02]: Like, the timing would be the same.
[00:19:26] [SPEAKER_01]: Yikes.
[00:19:27] [SPEAKER_01]: So I remember when we were working with GDPR regulations,
[00:19:31] [SPEAKER_01]: which are the general data protection regulations that came out of the EU,
[00:19:35] [SPEAKER_01]: that was really a big deal for us to try and figure out
[00:19:37] [SPEAKER_01]: how we were going to be compliant with that.
[00:19:39] [SPEAKER_01]: And it took us a while because you basically
[00:19:41] [SPEAKER_01]: were turning the Titanic in many different ways
[00:19:44] [SPEAKER_01]: for organizations and for...
[00:19:47] [SPEAKER_01]: Think about it as applications
[00:19:49] [SPEAKER_01]: that had never kind of considered that.
[00:19:51] [SPEAKER_01]: We had to create net new things
[00:19:53] [SPEAKER_01]: that were able to address this.
[00:19:57] [SPEAKER_01]: Now, with this regulation,
[00:19:59] [SPEAKER_01]: some companies have been using AI for years
[00:20:02] [SPEAKER_01]: and now have to figure out how they're going to be compliant
[00:20:05] [SPEAKER_01]: with those categories.
[00:20:06] [SPEAKER_01]: So I guess the question is,
[00:20:08] [SPEAKER_01]: this isn't just for new technology.
[00:20:11] [SPEAKER_01]: This is for existing technology in any company
[00:20:13] [SPEAKER_01]: who has any component of AI.
[00:20:16] [SPEAKER_01]: You said that before, right?
[00:20:17] [SPEAKER_02]: Right, yeah.
[00:20:18] [SPEAKER_02]: Yeah, exactly.
[00:20:19] [SPEAKER_02]: It's not just going forward, right?
[00:20:21] [SPEAKER_02]: If you're using some AI
[00:20:25] [SPEAKER_02]: that again falls within sort of the ambit of what the act regulates,
[00:20:28] [SPEAKER_02]: it's going to need to be compliant
[00:20:30] [SPEAKER_02]: as of the effective date of that portion of the act.
[00:20:34] [SPEAKER_01]: Well, then we have very little time.
[00:20:36] [SPEAKER_01]: We better...
[00:20:37] [SPEAKER_01]: We're going to have to get after it.
[00:20:39] [SPEAKER_02]: We have some time.
[00:20:40] [SPEAKER_01]: Right.
[00:20:40] [SPEAKER_01]: But as you said before, though,
[00:20:42] [SPEAKER_01]: there are certain pieces of this,
[00:20:44] [SPEAKER_01]: especially the ones that are considered most high-risk,
[00:20:46] [SPEAKER_01]: where there's a ticking clock on this.
[00:20:48] [SPEAKER_01]: We've got to get going, right?
[00:20:50] [SPEAKER_02]: Yeah, no, I think that's right.
[00:20:51] [SPEAKER_02]: But I think not many companies are really involved
[00:20:54] [SPEAKER_02]: in sort of the things that are banned.
[00:20:56] [SPEAKER_02]: And for the high-risk stuff,
[00:20:58] [SPEAKER_02]: which a lot of companies use,
[00:21:01] [SPEAKER_02]: you've got 24 months, right?
[00:21:02] [SPEAKER_02]: So you have time to understand the act's requirements
[00:21:05] [SPEAKER_02]: to be able to then figure out
[00:21:06] [SPEAKER_02]: what steps you're going to take to address that
[00:21:09] [SPEAKER_02]: so that the products are compliant.
[00:21:11] [SPEAKER_02]: So there's going to be obviously guidance along the way
[00:21:14] [SPEAKER_02]: as we all go on that journey.
[00:21:16] [SPEAKER_01]: I guess one question that I think would be burning in the ears
[00:21:20] [SPEAKER_01]: or the minds of the listeners would be,
[00:21:23] [SPEAKER_01]: what does HR have to do?
[00:21:25] [SPEAKER_01]: Is there anything that's kind of incumbent upon
[00:21:27] [SPEAKER_01]: the users of these systems,
[00:21:30] [SPEAKER_01]: whether they're in the EU or not,
[00:21:32] [SPEAKER_01]: what would they need to be able to deal with this, to respond?
[00:21:38] [SPEAKER_02]: So I think there really are three things
[00:21:40] [SPEAKER_02]: that HR has to think about, right?
[00:21:43] [SPEAKER_02]: The first, of course, is to make sure that for any system
[00:21:48] [SPEAKER_02]: that fits in this category,
[00:21:50] [SPEAKER_02]: that implicates some of the high-risk things in employment,
[00:21:52] [SPEAKER_02]: you know, that you understand how the system
[00:21:55] [SPEAKER_02]: you're using operates
[00:21:56] [SPEAKER_02]: and that you follow the instructions
[00:21:58] [SPEAKER_02]: of the developer of that system.
[00:22:00] [SPEAKER_02]: That's one clear obligation.
[00:22:02] [SPEAKER_02]: You know, the second one is to have human oversight, right?
[00:22:05] [SPEAKER_02]: We already see this a little bit with automated decision-making
[00:22:08] [SPEAKER_02]: under GDPR, but it's going to be important
[00:22:10] [SPEAKER_02]: to not just let the system run and follow it blindly;
[00:22:13] [SPEAKER_02]: you want to actually have human oversight:
[00:22:15] [SPEAKER_02]: how is it performing, what are the decisions,
[00:22:18] [SPEAKER_02]: how do you review them, how do you assess those?
[00:22:20] [SPEAKER_02]: And then third, to make sure that there's good transparency
[00:22:23] [SPEAKER_02]: to the end users where AI is involved
[00:22:26] [SPEAKER_02]: so that employees and others really understand
[00:22:28] [SPEAKER_02]: where they may be interacting with AI.
[00:22:33] [SPEAKER_01]: So to boil it down, you have a lot of work to do HR.
[00:22:37] [SPEAKER_01]: You have to look at your systems that may use AI.
[00:22:40] [SPEAKER_01]: You have to talk to them about what the AI portions are.
[00:22:45] [SPEAKER_01]: And internally, you probably need to get some governance
[00:22:48] [SPEAKER_01]: and some risk management people together
[00:22:50] [SPEAKER_01]: to talk about how are you going to enforce compliance
[00:22:54] [SPEAKER_01]: from your perspective to make sure,
[00:22:56] [SPEAKER_01]: as you say, there's human oversight
[00:22:57] [SPEAKER_01]: and that we're understanding what's being done
[00:23:00] [SPEAKER_01]: and then communicating that transparently
[00:23:03] [SPEAKER_01]: to the people who it affects.
[00:23:05] [SPEAKER_02]: Right. No, I think that's exactly right.
[00:23:07] [SPEAKER_01]: Oh boy, there's going to be a lot of work from this.
[00:23:11] [SPEAKER_01]: And you know, that means that if there's anybody
[00:23:14] [SPEAKER_01]: who has employees who are in the EU, right?
[00:23:18] [SPEAKER_01]: I mean, I guess let's boil it back to the HR people.
[00:23:21] [SPEAKER_01]: Is it just the people in the EU
[00:23:23] [SPEAKER_01]: that they're going to need to worry about these kind of regulations
[00:23:27] [SPEAKER_01]: or if they're like a US company
[00:23:30] [SPEAKER_01]: and they have some presence in the EU?
[00:23:33] [SPEAKER_01]: Is it just those employees or is it really all their employees?
[00:23:36] [SPEAKER_02]: Well, really again, I think it's...
[00:23:38] [SPEAKER_02]: Look, this is...
[00:23:39] [SPEAKER_02]: You know, while transparency really applies
[00:23:41] [SPEAKER_02]: to sort of the individuals who the AI acts upon,
[00:23:43] [SPEAKER_02]: you know, first, like we talked at the beginning
[00:23:46] [SPEAKER_02]: about how this all circles back to good governance.
[00:23:49] [SPEAKER_02]: You're probably going to want to be transparent anywhere, right?
[00:23:51] [SPEAKER_02]: You're going to want that under the NIST AI framework.
[00:23:52] [SPEAKER_02]: You're going to want that certainly under what we're seeing
[00:23:55] [SPEAKER_02]: in terms of existing and pending proposals in the US.
[00:23:59] [SPEAKER_02]: So, you know, I wouldn't view this sort of narrowly that way.
[00:24:02] [SPEAKER_02]: And then when you think about sort of the broader things,
[00:24:05] [SPEAKER_02]: whether it's sort of human oversight,
[00:24:06] [SPEAKER_02]: whether it's following the developer's instructions, you know,
[00:24:08] [SPEAKER_02]: those things apply really to systems.
[00:24:10] [SPEAKER_02]: And so unless you're really running a separate system in the EU,
[00:24:12] [SPEAKER_02]: those are things that
[00:24:14] [SPEAKER_02]: you're probably going to need to grapple with, you know,
[00:24:16] [SPEAKER_02]: on a company-wide basis.
[00:24:18] [SPEAKER_01]: So when we talk about things like transparency in the US,
[00:24:22] [SPEAKER_01]: we...
[00:24:23] [SPEAKER_01]: I mean, people here at salary.com,
[00:24:25] [SPEAKER_01]: we talk about having like a lowest common denominator
[00:24:28] [SPEAKER_01]: where whatever the most harsh regulation
[00:24:32] [SPEAKER_01]: or whatever the most thorough regulation is,
[00:24:34] [SPEAKER_01]: we suggest to companies that they use that across the board
[00:24:37] [SPEAKER_01]: for all of their entities, for all their employees.
[00:24:40] [SPEAKER_01]: And I think what you're saying is in the same vein,
[00:24:43] [SPEAKER_01]: since the systems may touch all of your employees,
[00:24:47] [SPEAKER_01]: it probably makes sense to be more transparent with all of them
[00:24:50] [SPEAKER_01]: and have the governance facilitated across all of them.
[00:24:54] [SPEAKER_02]: Look, I think, yeah, no, I think each company's going to ultimately,
[00:24:56] [SPEAKER_02]: you know, decide, you know, what requirements apply to it
[00:24:59] [SPEAKER_02]: and how it's going to address them.
[00:25:02] [SPEAKER_02]: But I do think, you know, with the act,
[00:25:05] [SPEAKER_02]: there are some benefits to taking a more holistic approach.
[00:25:08] [SPEAKER_02]: Right.
[00:25:09] [SPEAKER_01]: Well, Jason, I think what we're going to need to do
[00:25:12] [SPEAKER_01]: is come back and visit this maybe in about six to 12 months
[00:25:14] [SPEAKER_01]: and see how it has influenced
[00:25:17] [SPEAKER_01]: not only organizational compliance, but also
[00:25:21] [SPEAKER_01]: how it's influenced U.S. compliance or U.S. efforts
[00:25:24] [SPEAKER_01]: to regulate AI.
[00:25:27] [SPEAKER_01]: And I got to be honest with you,
[00:25:28] [SPEAKER_01]: I'm proud of them for doing something,
[00:25:31] [SPEAKER_01]: because some of the things we've heard out of Congress
[00:25:33] [SPEAKER_01]: have been kind of wackadoodle,
[00:25:35] [SPEAKER_01]: and I'll use that as a technical term.
[00:25:38] [SPEAKER_01]: Some of the things that the U.S. has said
[00:25:40] [SPEAKER_01]: seem kind of draconian
[00:25:42] [SPEAKER_01]: because I don't think they're actually thinking about this
[00:25:44] [SPEAKER_01]: from a realistic perspective.
[00:25:46] [SPEAKER_01]: I think they're watching too many movies.
[00:25:48] [SPEAKER_02]: Well, look, I think, you know, what we've seen in the U.S.
[00:25:51] [SPEAKER_02]: is we've seen a lot of desire by legislators to learn more.
[00:25:55] [SPEAKER_02]: You know, we've had the, you know,
[00:25:57] [SPEAKER_02]: the AI Insight Forums in the Senate.
[00:25:58] [SPEAKER_02]: We have a task force in the House.
[00:26:01] [SPEAKER_02]: Look, I think we all, you know,
[00:26:03] [SPEAKER_02]: AI is going to be a transformative technology.
[00:26:05] [SPEAKER_02]: You know, it may be, you know, perhaps in the end
[00:26:09] [SPEAKER_02]: even more transformative than the Internet.
[00:26:11] [SPEAKER_02]: And I think, you know, we all have to, you know,
[00:26:15] [SPEAKER_02]: figure out how we see that, how we see the opportunities,
[00:26:18] [SPEAKER_02]: how we enable those opportunities
[00:26:21] [SPEAKER_02]: in a way that also is cognizant of the risks
[00:26:26] [SPEAKER_02]: that we have to address.
[00:26:28] [SPEAKER_02]: I think a colleague of mine really put it well
[00:26:31] [SPEAKER_02]: in a sign-off for some review forms that we do around AI.
[00:26:36] [SPEAKER_02]: Responsible fun is the best fun.
[00:26:38] [SPEAKER_02]: Ha-ha-ha-ha-ha!
[00:26:42] [SPEAKER_01]: Spoken like a true ethical risk taker,
[00:26:45] [SPEAKER_01]: or risk not taker.
[00:26:48] [SPEAKER_01]: That's great.
[00:26:57] [SPEAKER_01]: Well, Jason, thank you very much for being on the HR Data Labs podcast.
[00:27:00] [SPEAKER_01]: We really appreciate it.
[00:27:01] [SPEAKER_01]: And as I said, I reserve the right to call you back
[00:27:03] [SPEAKER_01]: in six to 12 months to say how far has it gone
[00:27:06] [SPEAKER_01]: and has it gone well enough?
[00:27:08] [SPEAKER_02]: I look forward to it.
[00:27:09] [SPEAKER_01]: All right, great.
[00:27:10] [SPEAKER_01]: Well, thank you very much for being here.
[00:27:12] [SPEAKER_01]: We really appreciate you being on the podcast.
[00:27:14] [SPEAKER_01]: And thank you all for listening.
[00:27:15] [SPEAKER_01]: Take care and stay safe.
[00:27:18] [SPEAKER_00]: That was the HR Data Labs podcast.
[00:27:21] [SPEAKER_00]: If you liked the episode, please subscribe.
[00:27:24] [SPEAKER_00]: And if you know anyone that might like to hear it,
[00:27:26] [SPEAKER_00]: please send it their way.
[00:27:28] [SPEAKER_00]: Thank you for joining us this week
[00:27:29] [SPEAKER_00]: and stay tuned for our next episode.
[00:27:32] [SPEAKER_00]: Stay safe.