Looking for the candidates other methods miss.


Powered by the WRKdefined Podcast Network. 

[00:00:00] Welcome to PeopleTech, the podcast of WorkforceAI.News. I'm Mark Feffer. My guest today is Diana Tsai, the co-founder and CEO of Upwage. Their platform is built to uncover

[00:00:24] high-potential candidates who've been overlooked by traditional recruiting systems. We're going to talk about how that works, but also about other ways data can factor into talent acquisition and decision-making. All that and more on this edition of PeopleTech. Hey, Diana. Welcome. Can you give me, in just a few sentences, a little background on Upwage?

[00:00:50] Yes. Three years ago, my co-founder Greg and I started thinking about how to create an AI safety net for the workforce. And specifically, Greg was doing his PhD in AI and labor markets. We'd known each other from being in the recruiting tech space for quite a while in the past. And as we were talking, we were like, wow, there's unbelievable, unbelievable opportunity that will emerge from AI in the next couple of years. But there's also going to be a lot of devastation. And we wanted to figure out a way

[00:01:19] to leverage AI to combat the negative impacts of AI. And so one of our big thoughts was how do we use AI to help people find jobs faster in a world where the job market is going to transform so dramatically? That quest, that North Star led us to what we're building today, which is AI agents, highly advanced behavioral interviewing agents, AI analysts that are essentially helping companies decrease turnover, decrease time to hire, increase recruiter productivity, and on the other side, deliver

[00:01:49] an exceptional candidate experience. Now, before we get into some specifics of what you're doing, I have a general question, which is: how do you approach something like AI when it brings certain challenges with it, like embedded bias, for example? How do you balance that with the advantages it offers? Designed well, it's actually not a balancing or counterbalancing game. It's more an

[00:02:19] understanding of what the limitations and capabilities of AI are, and then being able to leverage it to do more than the bare minimum. Okay, so let me get into the whole bias question. In the employment space, what we're really talking about is New York City's Local Law 144, which covers automated employment decision tools. And specifically, the concern here for TA leaders is: how do I know if an AI tool is biased? The AI vendor I'm using, are they biased or not? So from a regulation perspective, what was created was this third-party

[00:02:47] bias auditing that's required. So essentially you have what meeting expectations looks like in the AI bias world, which is: is my AI treating candidates fairly across protected classes? But what we're not getting into is the nuance. It's not enough to just pass these bias auditing standards, which frankly wasn't very hard for us

[00:03:11] to do, to build an AI that is unbiased by all government requirements, or by any sort of regulatory compliance requirements. The harder thing is to imagine, above and beyond that, how AI can be used to not just meet bias requirements but actually significantly decrease bias within the hiring process. So from the very beginning, as we were designing the technology, our thought was: okay, where's the inherent human bias in the system? I've had a bad day. Maybe I'm a little

[00:03:39] impatient and tired. I even have this in meetings: I'm not the same person in all my meetings. Sometimes I'm really tired, I'm exhausted, and I'm not necessarily going to treat everyone the same way. And that's just being human. So one part of it is: how do you create a consistent experience for everyone, from an interviewing perspective, when recruiters are on phone screens sometimes 20 back to back? By the 20th person, are they as energetic or curious as they were at the beginning? Is every candidate getting the exact same experience? Not necessarily. And that's not the fault of the recruiter or the human being.

[00:04:08] So one element of AI that's really powerful is the ability to create a consistent, emotionally intelligent experience, where you're hyper-optimistic and curious about this person all the time. But on top of that, you also have the hiring manager side of things. Oftentimes, from a hiring manager perspective, when you're analyzing candidates: hiring managers are, for the most part, trained on their hiring manager guides or interview guides, whatever it happens to be, but their full-time job is not recruiting.

[00:04:38] They didn't necessarily come out of the TA profession. They have a day job, and interviewing is a part-time thing on top of it. So expecting exceptional interviewing skills from all your hiring managers is somewhat unreasonable as well. And that's where AI can be really helpful. On our part, we have an AI analyst that analyzes all the phone screens that happen. We have the AI interviewer that does the interviewing, and the AI analyst that analyzes those AI interview transcripts. So that way,

[00:05:06] when a hiring manager actually receives that transcript, it's been pre-analyzed. It has a high, medium, or low fit score, it explains where the fit of the candidate is, and they can dive deeper into understanding that candidate, go deeper into the interview process, and provide a more consistent interview at their level. So you can see that the new process upgrades the entire system; it essentially mitigates a lot of the preexisting bias that we're not even checking for currently. And that's what I mean by

[00:05:38] saying that adhering to the regulatory AI bias requirements is the bare minimum, and the easiest part. The harder part is reimagining how a product experience, an AI experience, can tear to shreds the preexisting bias structures within our hiring processes. Not because people have mal-intent, and I'm going to re-emphasize that, but because we're human, and it's impossible to provide a consistent experience to every candidate as a human being, unless you're just superhuman, you know? Yeah.

[00:06:08] So let me go back and ask you a nuts-and-bolts question about bias. How does your system address bias in a way that employers can feel comfortable with, knowing that it really is being addressed? So there are a couple of different layers. One is, like I mentioned, we have two different AI

[00:06:36] agents, right? We have the AI interviewer and the AI analyst, and they're actually two separate agents in two different silos. The AI interviewer is the agent that does the initial interview. It does STAR interviewing, and it's EEOC compliant. So that's one part of how you mitigate bias: you have to train the AI agent that's doing the interviewing to be EEOC compliant. Another layer of it is that the AI interviewer is set up based on competencies and questions that are

[00:07:05] co-created with the TA teams. And those competencies tend to be predictors of either high performers or high-retention candidates, so high-quality candidates. One thing we do to mitigate bias is actually bias-audit those questions, as well as the competencies. That really helps, from the foundational stage, to build an AI interviewer that's less biased. The other side of it is that the AI interviewer does the interview, and then the AI analyst is the one that

[00:07:31] reads all of the transcripts and does high, medium, or low fit scoring against those competencies for what we call the super screener PDF, the PDF version of the transcript that's assessed and given to hiring managers. With the AI analyst, what we actually do is this: the AI interviewer

[00:07:57] conducts the interview, and before the transcript reaches the AI analyst, we redact all PII, so that the analyst is reading blind; it's almost like a blind analysis. There are no names and no mentions of previous employers, and we also exclude schools. Because you know what happens: it's like, oh, they went to MIT, well, this must be a great candidate. But is that true? You know, is that true? Gender, mentions of children, all that sort of stuff. So the thing we look at that's unique, I think, when we talk about going above and beyond, is this:

[00:08:27] there's bias when it comes down to protected classes, but there's also bias around non-protected classes. Some of the things I mentioned before are non-protected classes. From a regulatory perspective, we are not held accountable to redact those, or to make our product care about things like reducing bias against single moms. However, we do it ourselves, self-policing and building in these checks and balances. And I think that's what, in our mind... okay, so if you look at our trust center, there's the third-party bias audit, and we also have an

[00:08:56] internal bias audit. Our internal bias audit takes more into account, looking into these non-protected classes, because we want to make sure we go above and beyond the regulatory standards around bias. So yeah, I think that's a key thing. The other part, just linking it all together, is that most of our clients are using our product to increase accessibility. When they look at their candidate pipeline, it's like: okay, I post a req, I get 500 applicants. From just a time

[00:09:22] constraint perspective, it's probably reasonable to expect my recruiter to interview 50 of those, if that. So in that world, the other 450 applicants never get a chance to be seen, and there's a gut decision that's typically made on reviewing a resume for 30 seconds. When you look at this whole equation and zoom out, you're like: wait, what if we had the ability to interview all 500 applicants and give them a chance to be seen

[00:09:51] beyond their resume, for their skills? Maybe they're not good at writing resumes. Maybe they didn't even submit a resume; sometimes half of our hourly applicants don't submit resumes. So then you have no cues, and you're making a decision almost blind on whether or not to pass on this person. The AI interviewer offers the opportunity to increase accessibility within the entire pool, which also naturally increases the diversity of your applicant pool. One other

[00:10:18] thing I'll mention, from a systems perspective, as we think about this question of accessibility, bias, and inclusivity: we see that, on average, about a third of our candidates complete their interviews off hours. They're doing it on the weekends. They're doing it at 2am after they've tucked their kids into bed. They're doing it after their shifts. Some are passive candidates; they have day jobs, or they're gig workers with multiple shifts they have to take care of

[00:10:41] before they're able to focus on an interview. In a world where you're relying only on nine-to-five interviewing options, with a team that can only interview during work hours, that's potentially a third of your pipeline excluded from the entire process of even being considered. And in many cases, these are folks who may be under circumstances such as, you know, I have

[00:11:08] kids to take care of. I have multiple jobs I need to do, et cetera, et cetera. So you can imagine the implications when you unlock the floodgates and actually make it possible for them to be interviewed, to be seen, to have their skills assessed on their own time.

[00:11:35] You know, a lot of what you're talking about sort of flies in the face of, you know, typical talent acquisition. You know, interviewing everybody as opposed to,

[00:12:00] you know, screening, deciding, interviewing. Do you run into resistance from recruiters or hiring managers who think this sounds squishy, that there needs to be a tighter process? Or do they basically look at the results and say: okay, this works? We are hyper data-driven, so usually our data is pretty bulletproof when it comes down to the ROI on this

[00:12:26] stuff. Look, I think if folks are worried about things being squishy, our approach is always to say: hey, let's do some benchmarking. First, there are a couple of metrics that I think are exceptionally compelling to recruiters, and then I'll talk about the one for hiring managers. For recruiters, it's really the phone-screen-to-hiring-manager pass-through ratio. It's really compelling when it's like: whoa, I just noticed the number of phone screens I need to do dropped. It was eight to one; I needed to do eight phone screens to find one great candidate

[00:12:52] I'm going to pass to a hiring manager, and now it's four to one. That means Upwage helped me save a lot of time by interviewing everyone and finding the best candidates for me to focus my time on, and now I'm only doing four calls for every person I pass over to a hiring manager. So what we're not saying is: everyone, drop your phones, stop doing phone screens and interviews. We're just saying: focus your time on highly qualified, skilled candidates who've already gone through an interviewing process. And you might be surprised; some of those may not have had great resumes, and you might have passed on them. We get this all the time from recruiters after

[00:13:22] we run a pilot. And this is the other thing: just run a pilot. Then we ask our recruiters to look out for this: here's somebody you talked to on the phone, Upwage ranked them high, you've gone through the process and advanced them. Now let's go backwards and look at their resume. Oh my gosh, I would have passed on this person. Whoa, that's a hidden-gem hire. That's actually the stuff we get most excited about. Who are we missing? Who are we missing within the pipeline, either by saying: well, you have to adhere to our hours, because we're only available nine to five, Mondays through Fridays, and if you can't do that,

[00:13:49] you're out of our process. Or: we just don't have time. If your resume is not good enough, or it didn't catch our eye, or you didn't submit one, you're just out of the process entirely. We literally don't have the physical time to do this. So what we see is that we have created an infinitely patient AI that can spend as much time with a candidate as needed. And oftentimes that's why we get this feedback from candidates: I loved talking to my AI interviewer. That was actually really nice, because I felt like I could think about my responses and take my time.

[00:14:19] I wasn't in a rush. I wasn't nervous. I wasn't anxious. I could make my kids dinner in the middle of my interview, come back, and finish. All these kinds of things actually make a lot of sense, and seem more emotionally attuned, not less, which is why I always find it interesting when people say: oh, isn't AI dehumanizing? I'm like: no, do I have stories for you on what is actually humanizing in this process. You know, I forgot one thing, Mark: the hiring manager interview, in terms of how hiring managers build trust with Upwage.

[00:14:47] We look at the hiring-manager-to-offer ratio. REI is a really good example of this. They were using our AI specifically for events, and the events were store openings, so live events. The recruiters started using the AI to help with the phone screenings, the initial screen, and their hiring managers didn't realize it; they were basically just getting scheduled with these high-fit, Upwage-interviewed candidates

[00:15:11] that had been phone screened. And the hiring managers actually said to the recruiting team: whoa, this is crazy, this is the best batch of candidates we've ever received, we want to hire them all. That's when we know we've hit it home with a hiring manager. Oftentimes, do they even need to know AI was involved? Not necessarily. They just all of a sudden notice that everyone who's passed to them for interview is exceptional, and they want to hire everyone. That's usually when we get a signal of: ooh, something's really,

[00:15:37] really going well with the tuning process and calibration of the AI. Yeah. What about softer things, you know, like cultural fit? Can AI be useful in that? Super useful, for consistency as well. The use case here is universal culture-fit interviewers. This is something that we actually get asked for a lot. It's: we have five cultural values in the company, we're growing, and we would really, really love to hire folks

[00:16:06] against our five cultural-fit values. So we take those cultural values and convert them into competencies, and then we take the competencies and generate interview questions based on them. So yeah, it's actually fairly straightforward. From that perspective, rather than saying, I want to build an AI interviewer focused on competencies that define high performance in this particular role, it's: we're looking at cultural cohesion. In addition to the skills-based or competency-based

[00:16:33] interviews, we also want a culture-fit interview. Actually, the coolest part of what AI can do here is the data we can capture. This is the question I always ask people leaders: how much conviction do you have that your cultural values actually predict a great hire? And they're like: we don't know, that's a really good question. It's gut instinct, or the executive team decided, or the founding team came up with these values, or these have always been our values. And then I ask: okay, would it be interesting for us to collect data

[00:17:03] over time? So we build a cultural-fit interviewer. We culturally assess folks. But what I'm saying is, on those hires, on the back end, we measure performance, we measure retention, we look at all the data, and we start looking at whether your cultural values are predictive of the people you actually want to hire. And then maybe we end up tuning your cultural values, and everyone's like: whoa, that's crazy. So that's the opportunity AI opens up: to capture new data sets and

[00:17:29] refine our approach. We almost think of every AI interviewer, every set of competencies that we develop, as a hypothesis or an experiment. We have a hypothesis that this is the interview process that creates a great hire, but can we collect the data on the back end, and along the entire process, so we can cross-reference and ask: is this true or not? Let's hold ourselves accountable. And the ability to hold ourselves accountable hasn't really existed in the past. Not when your interview notes are scribbled bullet points, where sometimes one person takes a lot

[00:17:56] of notes and sometimes someone takes no notes. What are you using as your data source, besides the conversation that was had? So you know what I'm saying? We have this whole new, rich data set, consistently collected intelligence from an AI interviewer, that can level up the entire team. And I think that's what makes this so exciting. In addition to all the time savings and automation and productivity enhancement, we have real data to calibrate on, you know? Yeah.
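
The accountability loop described here, scoring competencies at interview time and cross-referencing against retention later, could be sketched roughly as follows. This is an illustration only: the records, function name, and scoring scale are invented, not Upwage's pipeline.

```python
from statistics import mean

# Hypothetical records: each hire's culture-fit interview score (0-1 scale)
# and whether they were still employed at the 90-day check-in.
hires = [
    {"fit_score": 0.9, "retained": True},
    {"fit_score": 0.8, "retained": True},
    {"fit_score": 0.4, "retained": False},
    {"fit_score": 0.7, "retained": True},
    {"fit_score": 0.3, "retained": False},
]

def score_gap(records):
    """Mean fit score of retained hires minus mean fit score of departed hires.
    A gap near zero suggests the competency is not actually predictive."""
    retained = [r["fit_score"] for r in records if r["retained"]]
    departed = [r["fit_score"] for r in records if not r["retained"]]
    return mean(retained) - mean(departed)

print(round(score_gap(hires), 2))  # 0.45
```

A large positive gap supports keeping the competency; a negligible one is the signal, in Diana's framing, to tune or drop the value it encodes.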

[00:18:24] You know, there's a lot going on with this. Have we reached the point with AI where people aren't keeping it at arm's length, where you tell them what your product can do and they basically believe you, as opposed to being really cynical and looking for the holes? Well, okay. We spent a year gathering data to get to the point where we had the

[00:18:51] longitudinal case studies to prove out what we were saying. So prior to collecting that data, no, people were like: what? Everybody says AI can do all this stuff. But then we were like: oh, here, let us show you. We ran a longitudinal case study with a company that had 53% turnover in call center roles. We split-tested 64,000 applicants: half went through their existing assessment, half went through Upwage AI screening. We measured this over a year and found that we were able to decrease turnover by a third. Same population, same role,

[00:19:20] same classes, same timeframe, same everything. The only thing that changed was the top-of-funnel interviewing process and the assessment criteria. So that's the kind of data, and the rigor with which these experiments have to be conducted, that really gets you to the point where it's convincing. Otherwise, it's just empty promises. So from our perspective, we essentially say: what do we need to prove out? What's our level of conviction based on what we've proven in the past? And how do we track this? So we do monthly, usually monthly, ROI

[00:19:49] capture and reporting, and then quarterly for partners, since most of the ROI reporting we do ends up in the executive suite. It almost always does, because it becomes a CFO conversation: we invested $1, what's our ROI on the dollar? And from our perspective, we're not doing our job unless we can tell a TA leader: you invested $1, and you have recouped, you know, $2 in productivity enhancement, and you've recouped another $10 in turnover, et cetera.
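
Working through the numbers in these two examples (simple arithmetic framed as illustrative code; "decrease turnover by a third" is read here as a relative, not absolute, reduction, and the function names are invented):

```python
def turnover_after(baseline: float, relative_reduction: float) -> float:
    """Turnover rate after a relative reduction, e.g. 'down by a third'."""
    return baseline * (1 - relative_reduction)

def roi_multiple(invested: float, recouped_buckets: list[float]) -> float:
    """Dollars recouped per dollar invested, summed across ROI buckets."""
    return sum(recouped_buckets) / invested

# 53% baseline turnover in the call center case study, decreased by a third:
print(round(turnover_after(0.53, 1 / 3), 3))  # 0.353

# The CFO framing: $1 invested, $2 recouped in productivity, $10 in turnover:
print(roi_multiple(1.0, [2.0, 10.0]))  # 12.0
```

The bucketed form matters because, as Diana notes, the CFO conversation sums several independent recoupment streams against the same invested dollar.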

[00:20:18] So we literally capture all the ROI buckets along the way, in partnership, by working on the data together. That, I think, is absolutely key; otherwise there's nothing to prove this out. You have to anchor it in data. You know, you're grabbing so much data, and it seems like you're doing so much with it. Does that offer other benefits? I mean, can customers sort through the data, or pull out certain data, so they can identify,

[00:20:49] you know, other things besides whether this candidate is good or bad? Longer term? Well, you're pointing to our longer-term vision, right? Because the way we think about it, there are three waves of AI transformation. Right now we're in the spot where companies, where practitioners, are really drawn to the power of AI interviewers for a variety of reasons. At the most basic level, it's: this will help my team save time.

[00:21:17] It'll help extend the capacity of my recruiting team. When you go layers deeper, it goes into: this improves the quality of hires, this decreases turnover, this increases revenue in our high-performing roles, right? That's where things get really, really interesting from an ROI-for-the-business perspective. But when it comes down to the data, what can happen is this. Today at Upwage, as an example, we have two AI agents, our interviewer and our analyst. There are a couple of other AI agents coming along the way. One of those is a stay-interview

[00:21:44] agent that interviews candidates 30, 60, and 90 days after they've been hired, to understand retention risk and turnover risk. How's your relationship with your hiring manager? How's your relationship with your team? Is the job you interviewed for actually the job that you're in? A.k.a., were you sold a job that you actually dislike, and we ended up sending you into the wrong job, essentially poor job fit? All the reasons people leave. Now, where this data collides and becomes really interesting is if you understand the bottom of the funnel, post-hire, and you can bring

[00:22:13] that data to the top. Then you can build better interviewer agents, because the interviewer agents will better understand. They can learn from: oh, I did a really bad job there, that was not actually correct, we're sending candidates the wrong signals, we're finding folks who end up hating this job and aren't going to stick. So we can do a better job of figuring out which competencies and interview questions are actually working at the top of the funnel. That's where the data gets fascinating. The other use case you can imagine with the data is

[00:22:39] Gopuff. That's a good example. They have something like 962 AI agents that we've created with them over the last year and a half. And the idea now is: what happens if agents can capture in an interview, ooh, this person's not going to come out as a high fit for this role, but they're actually a great fit for these other 25 roles? And therefore, can we proactively match? That's really exciting, Mark. That's what you can do with this kind of data, you know?
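
The cross-role matching idea could be sketched as a best-profile lookup. This is hypothetical throughout: the competency names, role profiles, and scoring are invented for the example, and a real system would be far richer.

```python
# Hypothetical competency scores captured in one interview (0-1 scale).
candidate = {"customer_empathy": 0.9, "pace": 0.4, "detail": 0.8}

# Hypothetical competency weightings for other open roles.
roles = {
    "delivery_driver": {"customer_empathy": 0.2, "pace": 0.9, "detail": 0.3},
    "support_agent": {"customer_empathy": 0.9, "pace": 0.3, "detail": 0.7},
}

def fit(candidate_scores: dict, role_profile: dict) -> float:
    """Weighted overlap between a candidate's scores and a role's profile."""
    return sum(candidate_scores[k] * w for k, w in role_profile.items())

# Proactive match: score the one interview against every open role.
best_role = max(roles, key=lambda r: fit(candidate, roles[r]))
print(best_role)  # support_agent
```

The point of the sketch is that a single interview yields a reusable competency vector, so a low-fit outcome for one role can still surface a high-fit match elsewhere.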

[00:23:07] Diana, thanks very much. It was really great to talk with you. This was really fascinating, and I hope we can do it again. Awesome. Thanks, Mark. My guest today has been Diana Tsai, the co-founder and CEO of Upwage. And this has been PeopleTech,

[00:23:32] the podcast of WorkforceAI.News. We're part of the WRKdefined Podcast Network. Find them at www.wrkdefined.com. And to keep up with AI technology and HR, subscribe to WorkforceAI today. We're the most trusted source of news in the HR tech industry. Find us at www.workforceai.news. I'm Mark Feffer.