In this episode, Dr. Shari Simpson speaks with Jillian Ambrose, Counsel at Crowell, about the urgent legal and compliance risks associated with using AI in hiring processes. As organizations increasingly adopt AI tools for recruitment, understanding the evolving landscape of state laws and potential biases is crucial.


Listeners will gain insights into how to effectively engage with AI vendors, ensure compliance, and mitigate risks while leveraging AI for efficiency in hiring.

• Understand the evolving patchwork of state laws affecting AI in hiring.

• Learn key questions to ask AI vendors about their tools.

• Discover the importance of conducting bias audits before using AI tools.

• Recognize the need for human oversight in AI-driven hiring processes.

• Identify best practices for communicating AI use to candidates.


00:00 -- Introduction to the episode

00:35 -- Importance of AI in hiring today

01:44 -- Legal and compliance risks of AI tools

02:32 -- Engaging with AI vendors effectively

03:40 -- Liability concerns for employers using AI

06:17 -- Addressing bias and disparate impact in AI

08:02 -- Governance and oversight of AI tools

09:23 -- The complexity of state laws on AI

10:19 -- Starting points for AI policy development

12:10 -- Balancing efficiency and legal compliance

13:05 -- Managing human oversight in AI processes

14:30 -- Baby steps for implementing AI in hiring

17:31 -- Communicating AI use to candidates

19:57 -- Future of regulation in AI hiring


Guest(s): Jillian Ambrose, Counsel at Crowell. She advises employers on employment law, focusing on compliance and emerging workplace risks. Jillian has extensive experience navigating the complexities of AI in hiring and its legal implications.


Keywords: AI in hiring, legal risks of AI, compliance with AI tools, bias in AI recruitment, engaging AI vendors, human oversight in hiring, state laws on AI, AI policy development, communicating AI to candidates, future of AI regulation


Powered by the WRKdefined Podcast Network. 

[00:00:01] You're listening to the HR Mixtape, a podcast for leaders who want to understand people, strengthen culture and navigate change with clarity. Today's conversation starts now. Joining me today is Jillian Ambrose, counsel at Crowell. Jillian advises employers on employment law, including leave, accommodations, wage and hour compliance and emerging workplace risks.

[00:00:35] Jillian, thank you so much for jumping on the podcast with me. Thanks. It's great to be here. So you advise employers on employment law. That's where you really sit. And today we're going to talk about AI. And I'd really love to hear, you know, what made AI in hiring really feel like an urgent topic to you right now? Sure. It's the next frontier. It's the thing we've been hearing about for three to five years, but it's really gotten hot in the last year and a half or so.

[00:01:04] We know that sophisticated employers are looking for efficiencies and are using AI tools in a way that feels like it should be getting us to those efficiencies. And in many ways, AI tools were really billed as a way to reduce or eliminate bias in employment decision making, particularly in hiring. But we're seeing that that's not playing out as cleanly as maybe we had hoped. What about organizations that are using AI right now in recruiting or screening?

[00:01:32] Because I think screening is probably the more common way that it's been used in the past. I think obviously it's been leveling up. What are some of the biggest legal and compliance risks that you think leaders may be overlooking right now? Sure. So the most practical, I think, and salient thing is that we have this evolving patchwork of state laws. And many of those state laws have various compliance requirements, and those requirements aren't glamorous.

[00:01:57] But they're things like notice to applicants and record keeping and those sorts of things that just create pitfalls. As we moved into this post-COVID landscape where more employers have multi-state or even national presences, that patchwork of employment laws across many states, many cities even, has put an onus on employers to get that right.

[00:02:22] Just this year, we've had laws come online in Illinois, in Colorado, in Texas that all have specific compliance obligations in addition to everything else. It's so exciting and so hard right now, I think, in this space.

[00:02:37] You know, when I was speaking about this last year at conferences, one of the things I talked a lot about was how the EEOC looks at decisions related to hiring and AI, and that the onus really is on the employer, not potentially the vendor, you know?

[00:02:55] Which led me kind of down this rabbit hole, and I'd love your perspective on this, is what should HR leaders really be focusing on when they're talking to vendors about using AI before they deploy it into their hiring process? That's a really tricky conversation to have. And that's in part because vendors reasonably are protective of the trade secrets that go into these tools.

[00:03:22] And so the regulatory landscape is putting a lot of pressure on employers to know how the tool is being applied to their data. What is it doing? How are these decisions being informed? And by decision, I mean whatever the AI tool is spitting out, whether it's a ranking or score or go, no go, whatever it is. But when employers are trying to have that conversation with vendors, vendors are pushing back and saying, we can't give you the information about how the tool is being used.

[00:03:52] And that really highlights the challenge of applying the existing civil rights and employment law framework, Title VII, uniform guidelines, all of that that we are all so familiar with. That's the challenge of applying that framework to this new technology, which is that it's very hard for employers to look inside that black box and know how the tool is applying both their own employees' data and any training data, other data that the tool may be using.

[00:04:21] Are there any specific questions that employers should really push back on with vendors if they're not getting the answers? Because, I mean, I understand. Obviously, I work for a technology company. We don't want to give you the peek behind the curtain with our code. But if I'm held liable, like where can I really push? So the first thing I'd say about that on liability is that the different state laws have come up with different schemes, and some of these are still in development.

[00:04:48] Colorado, in particular, I'm thinking about for apportionment of liability. So the first thing is to understand across all the states where there may be laws, how is liability assigned? And is there any limitation on how liability can be assigned in an agreement between the employer and the vendor? So that's step one.

[00:05:07] Step two is thinking about what the applicable law says about the weight that can be given to the AI tool's decision, because that's going to determine the potential liability on the employer. Now, in terms of what questions to ask the vendor, one is what outside information the tool may be pulling in that is not the employee- or applicant-provided information.

[00:05:36] Another is how the tool is applying existing basic qualifications or preferred qualifications frameworks to the data that it's pulling from the applicant. So if you have a tool that looks at resumes, is it only looking at experience, or is it also looking at undergrad college? Is it potentially looking at zip code, which we know many state laws, at least a couple, have said you absolutely cannot do?

[00:06:04] That's information that most vendors should be able to provide. And if they can't, the employer just needs to understand that that may mean some potential exposure if the employer can't explain how these decisions are being made. How have organizations dealt with issues like potential bias or disparate impact, transparency, data privacy? Like, all those things are showing up, or we're talking about them, I guess I should say, in AI tools. Are they really showing up in the real world, though?

[00:06:33] Like, are we actually seeing that happen? Yeah. And I think any sophisticated employer that is thinking about bringing an AI tool online as part of the hiring process needs to at least think about the possibility of doing a disparate impact analysis, a bias audit, prior to using that tool on actual employee data. And this is where, again, the existing framework doesn't necessarily port cleanly onto this new technology.

[00:07:00] It is very challenging to validate an AI tool in the way we've often thought about validation when, A, we may not know exactly how the tool is functioning, and B, the tool may change. It may iterate. It may learn. And so if we validate today's version of the tool, which, again, is challenging, that validation may not be perpetually useful or defensible. And we've seen already litigation around potential bias in tools.

[00:07:30] There was the Mobley lawsuit two summers ago, I guess now, alleging an age bias component in an AI tool. We saw a suit over the summer against SiriusXM on similar grounds. So this is bubbling up.

[00:07:47] And I think it's going to take a while for courts to figure out how to allow employers and vendors to sort through the potential sources of bias, if there is any, in discovery. How do we then, as HR professionals, think about governance of these tools and potentially who in our organization really needs to sit at that table to have those conversations? So absolutely, HR and legal.

[00:08:16] I think those are probably the folks who both have the most applicable knowledge and the greatest stake. In my experience, it is often HR or talent acquisition that says this tool is going to improve efficiency greatly. And it's often legal that says, yes, but. And that shouldn't be a tension. That should be a collaboration. There is always the business interest of efficiency and the legal risk analysis from legal. And they need to work in tandem.

[00:08:43] The other thing I'd say about that is that HR needs to be very thoughtful about how a tool is being used and how much weight it's being given. And then documenting that decision making and that process and ensuring that it's being followed. Because, again, I have to speak in generalities about state laws because there are so many and none of them are the same. But many of them require a human in the loop.

[00:09:06] They require that some person actually review what the AI tool has done and decide if it's valid. And so we need to make sure that our talent acquisition folks are following those procedures once we have them. You know, it's interesting. You've brought it up a couple of times is the variety of state law. And for those listening, I'm sure they feel this.

[00:09:31] You know, that has increasingly become a more important and complex part of our roles, not just in things like AI. Something that used to be pretty simple, like sick time and how we manage sick time in our organizations, has drastically changed in my career, you know, with the complexities of state and sometimes county or city level requirements that we're seeing now.

[00:09:51] So as you've worked with people who are developing their own kind of policies and documentation around AI, where should they start to make sure that they can stay grounded and also have the ability to review those things as quickly as they need to? I guess without making it a full-time job, just managing AI governance and policy and documentation.

[00:10:16] Big sigh. I think in some organizations, it probably should be something like a full time job to manage AI policy and governance. I think it would behoove organizations that are using AI tools to have a compliance person or compliance committee or a good governance function, something like that, that is tasked with tracking these laws. That is also where outside counsel can be really helpful because that's our job.

[00:10:45] But the problem with this patchwork of state laws is not just that it's complex. It's that the penalty provisions are no joke, and they attach to things like notice and records retention that are so easy to get right if you know what the obligation is.

[00:11:03] And again, we're talking about tools that are at their highest and best use when they're being applied across lots of applicants or lots of employees. And that means the potential penalties ratchet up very quickly. So I don't mean to scare people, but I do think, and this is coming from the perspective of an employer-side litigator.

[00:11:31] But I do think that we are going to see plaintiff's firms become increasingly interested in finding employers that are just not tracking those compliance obligations, and we're going to see that develop a rhythm over the next few years. Which is scary for employers to think about, you know, how much governance and oversight we need. And you mentioned that these tools work the best.

[00:12:01] And we know this by, like, the volume of information that they have access to in our processes. And so I think of kind of two categories, right? You have the seasonal worker category where you just have to hire quickly. You know, I think a big golf course is going to hire seasonal workers really quickly. And then you have another category, potentially healthcare or education, where they have these heavy regulations and high stakes.

[00:12:27] In both scenarios, there's a level of caution that's needed to process these. And I want to go back to your comment about having a human in the loop. What does that look like from kind of the definition that, you know, the legal system is going to look at? Because I guess I'm thinking, you know, if you're using these tools and you have a human in the loop, have you just lost all your efficiency? Because now you have to hire a full-time person to review every application.

[00:12:54] It's like, okay, what's the point of it all, you know? Absolutely. And that's the tension. And that's where I said, you know, there's kind of the business-driven side of efficiency seeking. And there's the legal-driven side of risk mitigation. And I think it's not overstating the case to say that the more we rely on a tool, the more we lose control of our potential risk profile. I'm not saying that using an AI tool necessarily leads to risk, but it invites risk.

[00:13:24] And we need to understand that risk. So in terms of kind of practically how we manage that, we've advised clients, excuse me, that they can and should combine sort of traditional methods of managing high-volume hiring with the benefits of an AI tool. And that can look like we only apply the tool to the first 100 applicants we receive for a given requisition. And that limits our risk.

[00:13:50] If we are doing something wrong, we only have 100 applicants that we've, you know, that we've applied that potential risk to. But it also lets us take advantage of the efficiencies of the tool. I will also say from sort of the data analytics perspective of all of this, when we're looking at high-volume hiring, when we're looking at using a tool across a large volume, we always see an increased risk of disparate impact in the data.

[00:14:16] And so that's another reason to think about applying the tool in a more limited way. We will get pushback from the business about whether or not that makes it worth the cost. And it may. That's the balance. Yeah. What are some ways that you've seen organizations take baby steps to implement this that feel like a good progression? So I'm thinking, you know, you have basic screeners we've been using forever, right?

[00:14:43] Like if the job requires a bachelor's degree and you don't put a bachelor's degree on the resume and you're screened out, there's not a lot of risk in that, right? Like that's a very check-the-box. What's kind of like the next step that you're seeing that, you know, employers are a little bit more comfortable with? It maybe is always going to introduce a little bit of risk, but it's an okay, you know, and I'm not a lawyer, so I know I'm using language that I would use.

[00:15:09] It's an okay first step to kind of get into it without crossing some of these big lines that we've talked about.

[00:15:43] Yeah.

[00:16:14] But most of the time, that would only happen if the tool is sort of freelancing and figuring out, taking a wild guess at what we're looking for. If we tell it exactly what we're looking for and we tell it to only look for that, that helps. Yeah. Yeah. I think about this as, you know, we don't talk about this a ton in the HR world. Obviously, in the education space, we talk about it a lot, is like using rubrics, right? You use rubrics to grade papers and kind of know where you're at.

[00:16:40] But, you know, I kind of see that if you can create a job rubric for how you think the AI should be applying, you can use that as your audit documentation and go, okay, this is what I think it should be doing, and the quadrants it should be bucketing people into. Because I think there's just not enough conversation right now on how we should be auditing the tools beyond just, well, let's look at it. Who did it knock out? Who did it not knock out? And there's definitely nuance.

[00:17:07] Like you said, you know, it shouldn't only be looking for, did you have this very niche experience in your profile? Especially because we know that like that's not how skills work. That's not how people work. You know, you can learn something without doing the exact job before. And when I think about, you know, us using these tools to be more efficient, how are we communicating that or what's the best way to communicate that to candidates on the front end?

[00:17:31] I've heard a couple examples where, you know, organizations will have a disclaimer in their job posting, but it'll also say something like, you know, if you want to make sure that a human has reviewed your resume, send an email to this email address. What's some of the best kind of language that you've seen out there for notifying candidates of how these tools are being used? We would absolutely recommend giving candidates notice up front, in part because some state laws require it and more are going to, but also an opt-out.

[00:17:59] I think that transparency builds trust both in your applicant pool and also in your employee pool. I think we have seen that employee populations are nervous about how AI is being applied to them. And so more transparency is always better. That opt-out is also important because it allows folks who might need accommodations to seek them.

[00:18:24] We know that there have been instances where folks, for example, if an AI tool is being used to review a video interview recording, where folks have claimed disability disadvantages because of that, neuroatypicality, vision issues, whatever it may be. And so making sure we give people an opt-out as to that and that we have a process in place to ensure that that applicant is still given a fair shake in that application process.

[00:18:52] I love that you mentioned video because that was going to be my next question. As I see kind of the evolution of technology, there are some really great things coming and there are things to be excited about as a consumer, right? I love some of the stuff I'm seeing as a consumer. As an employer, as an employee, some of it is very scary because, again, you know, I am not neurotypical. I have ADHD. And so eye contact isn't the same for me as it is necessarily for other people.

[00:19:19] And the thought of getting knocked out because I recorded something and either had way too much eye contact or not enough is terrifying because you don't really know how to communicate that because maybe you don't need a full accommodation. You don't necessarily want to show that card because the way you work, it really won't be affected. So it's really fascinating all the kind of different pieces that we really need to start thinking about.

[00:19:42] As you look ahead, how do you think regulation and enforcement are going to evolve in this space? And how can organizations prepare for that? Oof. Honestly, it's really hard to say. We have seen some legal challenges to the use of AI tools that maybe were less expected. I think we all expected to see the bias, discrimination litigation come through, and we have.

[00:20:11] Just recently, we saw some litigation come up against one of the big developers of these tools, Eightfold, alleging Fair Credit Reporting Act claims, which is just a different way of thinking about the problems, potential problems with these kinds of tools. So it's hard to say. I think the most important thing is to keep track of the laws as they come out.

[00:20:35] I think the next important thing is for anyone in the HR space or in GC's offices to understand what tools are being used and how they're being used and how they're being weighted and how your talent development folks are using them practically day in and day out and making sure that that's documented and doing periodic reviews of that.

[00:20:58] And by periodic, I mean frequent: quarterly or biannually, to check in and say, is this still working the way we intended it to work? The other sort of ounce-of-prevention general advice I would give is that if we are going to think about bringing an AI tool into our process, that we do a pilot program, that we run it on our applicant population without actually relying on it at all and see how it's working.

[00:21:25] Have seasoned talent acquisition folks go back and check its work, see if they agree with the output that it's giving. And then if we can tweak it, either how it works, what it's considering, or how much weight it's given, we can do that. I love that. Jillian, this is such a good conversation and has given me a lot to think about as I talk about AI and the ethics of it and how organizations are applying it.

[00:21:52] And in a little way, it was scary in the best way possible, to keep us grounded and diligent on all the things that we need to consider when it comes to AI and hiring especially. So I really appreciate you jumping on and having this really important conversation about AI. Thank you very much for having me. It's fascinating. Thanks for tuning in to the HR Mixtape.

[00:22:19] Like, share, review, and subscribe to support the show and help more people discover these conversations. Until next time, keep the conversation going.