We speak with Matt Fischer, President and COO of Bullhorn, about AI in recruiting. We discuss the current state of AI in the industry, the expectations and challenges for clients, and the role of automation in improving data quality. Matt makes the case for experimentation and for keeping a human touch in the recruiting process. We talk all things automation, data quality, AI matching, candidate matching, skills classification, and fraud detection.
Takeaways
- AI in recruiting is currently at the peak of inflated expectations, but it is important to educate clients on how it can add value and be integrated into their workflow.
- Clients have different expectations and approaches to AI, with some focusing on efficiency and productivity, while others envision a future without recruiters.
- The decision of where to put the human touch in the recruiting process is crucial, and it varies based on factors such as job type, customer type, and talent type.
- Bullhorn's marketplace allows for integration with various AI and automation tools, and they prioritize an open ecosystem to serve their customers.
- Data quality and the willingness to experiment are key factors in leveraging AI effectively in recruiting.
- Automation can help improve data quality by engaging and nurturing candidates in the database, reducing the need to go back to external sources.
- AI matching is a hot topic in the corporate world, but the staffing industry has unique challenges and considerations in implementing effective matching algorithms. AI can improve candidate matching by focusing on outcomes rather than just resumes.
- AI can generate personalized job descriptions and pitch letters, saving time for recruiters.
- AI has the potential to detect fraud and reduce ghosting in the hiring process.
- Embracing AI technology can drive innovation in the staffing industry.
Chapters
00:00 Introduction and Background
07:41 Building an Open Ecosystem: Bullhorn's Marketplace
15:55 Creating Custom Language Models for Bullhorn
30:54 Skills Classification and Taxonomy with AI
39:26 Embracing AI for Innovation in the Staffing Industry
Learn more about your ad choices. Visit megaphone.fm/adchoices
Powered by the WRKdefined Podcast Network.
[00:00:00] Where are they making that decision today to put the human?
[00:00:04] And where do you, maybe as Bullhorn,
[00:00:08] where do you see that changing over the next year or so?
[00:00:12] Yeah, they're making that decision today in automation.
[00:00:16] And they're deciding where they want to inject the personal touch.
[00:00:20] And there's sort of two ways to think about it. One is you could go from
[00:00:24] you could start from a recruiter-less mindset and think, I'm going to automate
[00:00:28] everything and decide where to put the person. I think most of our customers are not doing that.
[00:00:32] Most of them are saying, I'm starting from how I've always done it and figuring out where I can automate instead.
[00:00:36] So you can come at that from both different angles.
[00:00:40] But they're doing it primarily in automation. And I think where the opportunity is
[00:00:44] for our customers is to unify all of
[00:00:48] that since it's all on one platform.
[00:00:50] All right. I want to talk to you for a moment about retaining and developing
[00:00:54] your workforce. It's hard. Recruiting is hard. Retaining top employees
[00:00:58] is hard. Then you've got onboarding, payroll, benefits,
[00:01:02] time and labor management. You need to take care of your workforce and you can
[00:01:06] only do this successfully if you commit to transforming your
[00:01:10] employee experience. This is where isolved comes in.
[00:01:14] They empower you to be successful. We've seen it with a number of companies
[00:01:18] that we've worked with. And this is why we partner with them here at
[00:01:22] WRKdefined. We trust them and you should too. Check them out
[00:01:26] at isolvedhcm.com.
[00:01:52] Hey, it's William Tincup and Ryan Leary. You're listening to the You Should Know Podcast.
[00:01:56] We're actually talking with Matt. He's from Bullhorn.
[00:02:02] We were just catching up on some history of Bullhorn, which is
[00:02:06] fascinating. There's probably books that are going to be written about this
[00:02:10] at some point. The history of Bullhorn.
[00:02:14] Matt, would you do us a favor before we get into the topic and
[00:02:18] introduce yourself? Yeah, absolutely. I'm Matt.
[00:02:22] I'm the president and COO of Bullhorn. I was the CTO
[00:02:26] of Bullhorn for 10 years prior to that and it's always exciting
[00:02:30] to talk about tech, any opportunity I get. And I've been at
[00:02:34] the organization for 20 years. And you had a full head
[00:02:38] of hair, full beard. I did. I had a full head of hair. I had a full
[00:02:42] beard. It's amazing what 20 years in recruiting will do to you, right?
[00:02:46] Yeah, I'm not going to blow your mind.
[00:02:50] I'm not going to blame that on private equity.
[00:02:54] However, I've heard stories.
[00:02:58] I've read stories. Just kidding. For the audience, we're going to be talking about
[00:03:02] AI and recruiting. So we're going to get Matt's take on what
[00:03:06] he sees. Bullhorn has a ton of customers
[00:03:10] probably trying different things. So I can't wait
[00:03:14] to get your take on it. So where do you want to start with AI and recruiting?
[00:03:18] From my view? Yeah. I thought you were the one asking
[00:03:22] the questions here.
[00:03:26] You're interviewing us.
[00:03:30] Wouldn't that be a great bit to have a
[00:03:34] guest on? But I was like, alright, let's go.
[00:03:38] Victoria didn't tell you that we're the experts
[00:03:42] today? Dude, that would be so great of a podcast.
[00:03:46] You almost pulled it off there.
[00:03:50] We tried. It can get more awkward.
[00:03:54] We could just pretend we can't hear you. Oh, yeah. We've done that before too.
[00:03:58] Or freeze on purpose.
[00:04:02] So what are you seeing with AI and recruiting?
[00:04:06] With your customers and even Bullhorn itself proper.
[00:04:10] In all seriousness, I mean, it's definitely an exciting topic for folks.
[00:04:14] I think we're somewhere on the hype cycle.
[00:04:18] There's a lot of confusion about how disruptive it will be
[00:04:22] and what will you even need recruiters in 20 years?
[00:04:26] What will their job be? And I think that how quickly
[00:04:30] ChatGPT burst onto the scene only 12 months ago put this in the spotlight
[00:04:34] in a way that there was a lot of catching up that people had to do to really understand
[00:04:38] how it would impact recruiting.
[00:04:42] What we talk about with our customers is really just educating them on
[00:04:46] how it will add value and where in the life cycle
[00:04:50] we'll be able to integrate it. We have a pretty strong position on it, which
[00:04:54] is yes, we're probably at the peak of inflated expectations
[00:04:58] right now, maybe coming down the other side of that. But in the end of the day
[00:05:02] your data really matters. Your
[00:05:06] ability to integrate AI into the workflow really matters because what you don't
[00:05:10] want people doing is going out to ChatGPT and trying to figure it out themselves. That's not a great
[00:05:14] use of a recruiter's time. And I think that's what a lot of people do because it happened so quickly
[00:05:18] and everybody was sort of playing catch up on it. So
[00:05:22] there's lots of interest in generative AI, in searching, and
[00:05:26] matching, and sort of what the revolution will look
[00:05:30] like, I guess. So Matt, let's unpack this a little bit.
[00:05:34] You mentioned that you're working with your clients to set some expectations.
[00:05:38] What are, as you're going into these conversations, and I'm interested here because
[00:05:42] we talk to a lot of people in recruiting, obviously.
[00:05:46] And we're not seeing, outside of the normal
[00:05:50] kind of trends that people talk about, we're not really seeing
[00:05:54] educated expectations. It's more the Jetsons, right?
[00:05:58] The futuristic flying cars, which at some point, got it.
[00:06:02] All good. Where are your clients' expectations baseline
[00:06:06] when you start to have these conversations? Yeah, I think baseline, there's
[00:06:10] two areas. One is how can I be more efficient?
[00:06:14] And how can I use AI to drive efficiency and productivity
[00:06:18] through my organization in a practical way? It's very easy to talk
[00:06:22] about it in the abstract, but in a practical way, how can I do that?
[00:06:26] That's really where we think that our product set can add
[00:06:30] value to our customers because it is integrated into the workflow. We are the domain experts.
[00:06:34] We can help translate AI and marry that to what our
[00:06:38] people do on a day-to-day basis and help them be
[00:06:42] more effective at just doing what they do. So that's part of it.
[00:06:46] And we've also seen, we've got an automation product we acquired, Herefish,
[00:06:50] five years ago, and our customers have adopted that
[00:06:54] to great effect. And in many ways,
[00:06:58] if you've already embraced the fact that you can automate a large part of your
[00:07:02] workflow, it's not that big of a leap to then say,
[00:07:06] I can use AI. You can do even that much more for me. How do I tighten it up
[00:07:10] and make it better? 100%. And so in some ways, it's like this is just
[00:07:14] another tool to make automation and getting to that model
[00:07:18] easier and more effective. And so that's
[00:07:22] a contingent of folks. And then there's another contingent of folks that are more like the Jetsons,
[00:07:26] like you talked about, which is we're not going to need recruiters in 20 years, and it's
[00:07:30] going to be a fully recruiter-less model. And I'm not of that
[00:07:34] opinion personally. I think that the option
[00:07:38] sure will be there if you want to automate fully end-to-end, but then I think you're discounting
[00:07:42] what our customers do really well and the human touch that they provide in the process.
[00:07:46] Those are the same people, William, that are saying, we said in 1992
[00:07:50] we're not going to have resumes coming in.
[00:07:54] Or job boards.
[00:07:58] You know, I get asked this question all the time in terms of augmentation
[00:08:02] versus automation. And companies that are going through
[00:08:06] usually it's corporates that are going through the transition
[00:08:10] to figure out where they are in that process.
[00:08:14] And what can be kind of low value tasks that can be
[00:08:18] automated and kind of where that human touch
[00:08:22] should be. Like they're not, the corporates I deal with, they're not trying to get rid of
[00:08:26] the humans. Not altogether. They're just saying the humans shouldn't be
[00:08:30] performing certain tasks, like scheduling or
[00:08:34] something like that. They should be doing other things.
[00:08:38] And if we say the new normal is that they're going to be carrying 40 reqs or something
[00:08:42] like that, okay, well they just need to be more efficient. And it's not they need
[00:08:46] to be more efficient, it's the things around them that need to create
[00:08:50] that efficiency. Do you see some of the same stuff play out in the staffing world?
[00:08:54] Oh, absolutely. 100%. And I think that
[00:08:58] concept of where you put the human, I think is a really important concept.
[00:09:02] Because that's also how our customers can differentiate.
[00:09:06] Because they can decide where they can use automation, where
[00:09:10] you don't need to put the human for high value. You know, and in some
[00:09:14] cases that may be, that was a job I was never going to fill anyway because that's
[00:09:18] a low margin job that came into the VMS and I'm okay with automating all of that.
[00:09:22] In other cases, I know I need high touch. So when you actually parse it apart,
[00:09:26] you realize that there's no one size fits all. You're going to
[00:09:30] decide based on a whole bunch of different criteria where you put the person in the process,
[00:09:34] the type of job, the type of customer, the type of talent, etc. And so
[00:09:38] everything we think about is how can you give the customers
[00:09:42] the flexibility to make that decision and use it when they want to use it.
[00:09:46] Where are they making that decision today to put the human?
[00:09:50] Where do you, maybe as Bullhorn,
[00:09:54] where do you see that changing over the next year or so?
[00:09:58] Yeah, they're making that decision today in automation. And they're
[00:10:02] deciding where they want to inject the personal touch.
[00:10:06] And there's sort of two ways to think about it. One is you could go from,
[00:10:10] you could start from a recruiter-less mindset and think I'm going to automate everything
[00:10:14] and decide where to put the person. I think most of our customers are not doing that. Most of them are saying
[00:10:18] I'm starting from how I've always done it and figuring out where I can automate instead.
[00:10:22] So you can come at that from both different angles, but they're doing it
[00:10:26] primarily in automation. And I think where the opportunity is for
[00:10:30] our customers is to unify all of that
[00:10:34] since it's all on one platform now in Bullhorn. You can pull all of that
[00:10:38] together and decide from end to end, including the talent experience,
[00:10:42] how you want to do that. And we're just in sort of the beginning phases of
[00:10:46] helping our customers explore this concept. So I think they'll continue
[00:10:50] to make those decisions. They'll continue to decide talent experience where they need the recruiter
[00:10:54] internally, what they want them doing as they go. And then
[00:10:58] as equally important, I think, is where do you put AI
[00:11:02] in that process? Because you have some
[00:11:06] customers that believe like I want to review every single communication
[00:11:10] that goes out to a candidate because I don't trust that generative AI is going to
[00:11:14] represent me the way I should be represented. And then you have
[00:11:18] others that are like there's no way generative AI is going to be worse than a canned
[00:11:22] message template that I use today.
[00:11:26] I feel like I need the Rocky Bell.
[00:11:30] I have some of the latter. So you know, you've got to give them the choice.
[00:11:34] Right. So one of the things I wanted to get your take on is evaluating
[00:11:38] technology. So Bullhorn proper, y'all are always innovating,
[00:11:42] you're going to be doing a check and you're going to be adding in a generative AI and AI
[00:11:46] and infusing it different places. But you also have a really robust marketplace
[00:11:50] and people that you're integrated with, etc. So how do you
[00:11:54] look at these companies that want to integrate with
[00:11:58] you? Like I understand it from their perspective. It's like you'll have thousands of customers.
[00:12:02] Of course we'd want to integrate with you. But how do
[00:12:06] y'all kind of make the determination on whether or not
[00:12:10] they should be in your marketplace? Love that question.
[00:12:14] Matt, before you answer that, let me add to that. Is it
[00:12:18] is that decision based on what they're doing today and what they're
[00:12:22] thinking about evaluating the future for your
[00:12:26] current clients? All of the above.
[00:12:30] So we've long sort of been the champions of the idea that an open
[00:12:34] ecosystem is the best way to serve the market. And so we
[00:12:38] have different tiers of marketplace. So we have new emerging companies
[00:12:42] that have got a cool idea that are like, hey, we want access to Bullhorn customers.
[00:12:46] And they can do that. We give them what's called a developer license and they can
[00:12:50] go build on top of the platform and they can work with our customers.
[00:12:54] But there is some sort of validation or proof point where the customers
[00:12:58] say this thing is awesome and this is
[00:13:02] you've got five or ten customers that are saying this actually works, it adds value,
[00:13:06] it does what they say it's going to do. OK, now you get bumped up to the next tier in the marketplace
[00:13:10] and you get more access to more customers and that kind
[00:13:14] of thing. And so that's how we think about it. We'll let anybody develop on the platform
[00:13:18] and as they get further proof points, they get deeper and deeper into the whole process.
[00:13:22] I like that a lot. First of all, again,
[00:13:26] one of the things I like is the recognition we don't need to build at all.
[00:13:30] It's similar to kind of Salesforce and Benioff
[00:13:34] strategy is like, OK, we're not going to build at all. There's going to be just a bunch of things that
[00:13:38] we just either don't want to, can't, won't, whatever.
[00:13:42] We're just not interested in letting other people build that, but build
[00:13:46] a really wonderful ecosystem around it. Build a great, I mean, you all have
[00:13:50] a great user conference. I've not been to
[00:13:54] it, but I've had a lot of friends that go to it and they always come back like that was
[00:13:58] a good use of time. That was a good use of money, et cetera.
[00:14:02] It reminds me of Dreamforce. I've been to Dreamforce and
[00:14:06] it is, if you love Salesforce, it is
[00:14:10] Disneyland. Like it is just, it's crazy because
[00:14:14] you can go booth to booth and go, oh my God, I had no idea I did this.
[00:14:18] Oh my God, I had no idea I did this. And I've had folks
[00:14:22] that have been to y'all's, I think it's called Engage. That's right. Coming up
[00:14:26] on the 15th. So it's right around the corner. Oh, very nice. Is it in Boston?
[00:14:30] It is in Boston. We do it in Boston every year. It's gotta be in Boston.
[00:14:34] Come on now. That was a dumb question. I'm sorry. I'm gonna do it in Philly.
[00:14:38] That's for sure, yeah.
[00:14:42] So what do you, I mean, you're probably not
[00:14:46] super involved in the content for the conference, but what do you think
[00:14:50] kind of the main themes are gonna be as it relates to AI and GenAI?
[00:14:54] What do you think you're gonna see? We're talking a lot about it. I do a
[00:14:58] session with, I don't know if you know Jason Heilman, but he was the CEO of
[00:15:02] Herefish. He runs all things automation and AI for Bullhorn and he and I do
[00:15:06] a Vision for Innovation session every year where we sort of set out
[00:15:10] the roadmap and talk about stuff like this and how
[00:15:14] we see the overall landscape of AI playing out and how we integrate
[00:15:18] that into our platform and our tech. And one of the things we're gonna talk about
[00:15:22] just to quickly tie it back to the marketplace is this concept of
[00:15:26] because we have such a large ecosystem and there is so much data that resides
[00:15:30] in our marketplace, getting all of that into our platform so that we can
[00:15:34] use it to train AI and machine learning models
[00:15:38] based on the entire ecosystem of data is a huge interest to everybody,
[00:15:42] to us, to our marketplace partners and to our customers. And so there's
[00:15:46] a product called Data Hub that we're announcing at Engage which does
[00:15:50] just this. It's sort of productized integrations for all of the ecosystem
[00:15:54] and we're going to funnel that back into Bullhorn so that we can use that to represent a full end-to-end to our
[00:15:58] customers, AI, analytics, automation, etc. all off of one data store.
[00:16:02] So that's just one of the many things we'll talk about but it's exciting.
[00:16:06] Dumb question on my part, but is there a way to create with metadata,
[00:16:10] is there a way to create your own large language model for Bullhorn
[00:16:14] or Bullhorn's customers? So the way we think about that
[00:16:18] and just sort of think about LLMs in general is that
[00:16:22] 12 months ago I got up on stage at Engage and said that these are
[00:16:26] going to be ubiquitous and commoditized by the next time I show up one year
[00:16:30] from now. And that was all of four months after
[00:16:34] GPT went public. And it's already turned out to be true.
[00:16:38] Right? Like you can pull Mistral off the shelf or Llama off the
[00:16:42] shelf and it's going to be arguably as good as something
[00:16:46] you have to pay Microsoft a lot of money for. So the way we think
[00:16:50] about it is over time we absolutely can train
[00:16:54] large language models on staffing specific data. Right? That's a huge
[00:16:58] value add of having all this data on our platform and so that's
[00:17:02] certainly where we're headed with that.
[00:17:06] It doesn't matter which one, you just be able, whether or not it's OpenAI or any
[00:17:10] of the ones that you mentioned, it doesn't
[00:17:14] matter. It'd be Swiss. Bring your own model. Yep, that's how we think about it.
[00:17:18] Justin. Matt, where do you see some of the
[00:17:22] pitfalls that clients or just recruiting in general,
[00:17:26] staffing in general needs to be aware of?
[00:17:30] As it relates to AI in particular, I think the
[00:17:34] quality and size and scope of data
[00:17:38] that you train models on is really important. Recruiting has not historically been known
[00:17:42] for excellent data quality.
[00:17:46] So as a result, you can only train a model as well as your data. I have to mark this spot.
[00:17:50] 100%. That's the feature right there. No, that's the title of
[00:17:54] the show. Yeah, it's probably
[00:17:58] not really debatable. I can't really think of anybody who needs another type of
[00:18:02] improvement. No one's going to take you on right there. You're good. Exactly.
[00:18:06] So I think that is an area that customers need to pay attention to.
[00:18:10] The cleaner the data, the more effective you'll be at actually
[00:18:14] training models that are producing solid outcomes.
[00:18:18] That's one. And I also think back to our
[00:18:22] previous conversation, customers' willingness to
[00:18:26] experiment is also really important because
[00:18:30] the customers that are doing well early doors are willing to throw incentives
[00:18:34] out the window. They're willing to throw process out the window and say, you know what, I'm going to
[00:18:38] start from scratch blank slate. And there's a lot of organizational inertia
[00:18:42] in many companies that struggle to embrace
[00:18:46] the fact that they have to think differently about it. And so I think
[00:18:50] if you can get data quality right, you can get people in the right mindset to experiment and try things
[00:18:54] differently and not get sort of overburdened by the processes of
[00:18:58] today or yesterday, I think customers can make a pretty quick impact.
[00:19:02] What's the kind of mindset to make their data better?
[00:19:06] I mean, Ryan and I have used tools in the past to kind of augment data.
[00:19:10] You know, tools that go in and kind of
[00:19:14] help you purify your data. But that's, you know, I think that's more
[00:19:18] I think that's more from a marketing perspective than anything else.
[00:19:22] I'm kind of rounding out the contact data and things like that. But where do you
[00:19:26] see when we say that, okay, they're sitting on dirty data or imperfect data, etc.
[00:19:30] One of the things is it's never going to be perfect.
[00:19:34] Correct. Check. But how do they make it better?
[00:19:38] How do they make it better? How do they make it better?
[00:19:42] I mean, I think a lot of it is, you know,
[00:19:46] institutionally prioritizing data.
[00:19:50] I could see that. But how do they do it with tech? How do they make their data
[00:19:54] better? Yes, there is augmentation. Like you mentioned, we have a couple more
[00:19:58] partners that do that. That's one thing. I would say
[00:20:02] I would substitute "better" also for "current." You have
[00:20:06] candidates in your database, but you probably have only talked to five percent of those in the last
[00:20:10] you know, 12 months. And so this is really where automation plays
[00:20:14] I think a really big role because it's about engaging and nurturing the candidates that you
[00:20:18] already paid for. You're already in your database and
[00:20:22] engaging them in campaigns, getting them to actually use your talent platform
[00:20:26] so that they are self-serving and keeping their data up to date.
[00:20:30] There's a lot of data hygiene automations and self-service
[00:20:34] around talent platform that can help with that problem.
[00:21:14] And we measure that in terms
[00:21:18] of what we call database utilization, right? How often are you going back to your own database
[00:21:22] versus having to go back out to LinkedIn or a board and pay again for somebody to get their most current information.
[00:21:26] And we see customers that use automation when they do it the right way.
[00:21:30] That number moves up dramatically because they can start serving from their own database
[00:21:34] because they've kept it current and they've kept it clean. There's also
[00:21:38] cost savings there because as you just mentioned, you've bought the data
[00:21:42] before or why go out and buy it again
[00:21:46] when you could just keep it up to date. I love that.
[00:21:50] Absolutely. My question there was going to be very similar, William, to your comment
[00:21:54] which is what is the cost? What's the
[00:21:58] cost that clients or customers are paying
[00:22:02] for by going back into their database? What's that
[00:22:06] dollar amount that you can assign to that? That you can assign for going
[00:22:10] back out to the boards? Yeah, I'm sorry, going back out
[00:22:14] to the boards, the revenue lost by doing that. Yeah, I don't have a specific
[00:22:18] number to be honest with you. I mean that's actually data
[00:22:22] that I think we wish we knew. Yeah, me too. Which is really
[00:22:26] expensive. It's expensive because they have to go back and buy
[00:22:30] it again. Yeah, especially if they're using, you know, like right, if they're on LinkedIn
[00:22:34] and they're paying again and they keep doing that over and over and over again.
[00:22:38] I think we wish we knew a little bit better how to calculate the ROI on that
[00:22:42] but it's definitely expensive and our customers have certainly
[00:22:46] been able to decrease the cost of external job boards by doing these
[00:22:50] things. I don't have an example for you though. Well in fact, if they're
[00:22:54] spending the money keeping their data current using your
[00:22:58] words, they shouldn't have to go back out to those sources
[00:23:02] if they built a big enough database. I love that.
[00:23:07] I would also say one more thing. I would also say it's about redeployment too.
[00:23:11] I mean it's very expensive to start from scratch every time and so when you've got
[00:23:15] your data current and you're like instrumented as an organization
[00:23:19] to go redeploy talent that you already spent time getting to want to work with your firm
[00:23:23] you see additional gross profit savings there too. So those two kind of go hand in hand.
[00:23:28] I love it. What do you see in AI matching? We see on the corporate side
[00:23:34] pretty much throw a rock and you can hit somebody talking about
[00:23:38] matching and using AI to match different candidates to jobs
[00:23:42] and jobs to candidates and things like that. But staffing world is a little bit different, built a little bit different.
[00:23:47] So what do you see? I think that there's a key difference here
[00:23:53] and I think for however long we've all been at this, there's always somebody claiming
[00:23:58] that their matching is the best. But from a technical perspective, everybody
[00:24:03] is trying to be very nice here. Oh man, I love it.
[00:24:08] We need to go have a beer or two and be on camera.
[00:24:13] We'll get the real story. But technically everybody's always done it the same way.
[00:24:18] It's all been looking at resumes, parsing out
[00:24:23] terms and doing semantic search and it's always been
[00:24:28] find me another resume that looks like this resume. That's been the tale of the tape for however long we've all been doing this.
[00:24:33] We think about that very differently because we have the outcomes
[00:24:38] which is something that not everybody has. We know who gets the job.
[00:24:43] We have five million placements just in the last year, people that successfully went to go work somewhere.
[00:24:48] And so our matching algorithms start from there and they say
[00:24:53] what do the people that actually were successful have in common?
[00:24:58] It's not about comparing resumes. It's about understanding what candidates have in common with other people who are successful at
[00:25:03] actually getting that job and doing that. And then the next layer is other people that were maybe silver medalists or
[00:25:08] runners up. And so our models look at this very differently.
[00:25:13] It's outcome based and everything we've trained, all our machine learning models are
[00:25:18] sort of making recommendations based on similar candidates who have been successful at similar jobs versus
[00:25:23] resumes that look the same. And that sounds nuanced but it's extremely different in terms of the
[00:25:29] accuracy that you get out of that because in the semantic world, you're basically just creating
[00:25:34] big Boolean strings and anybody can stack their resume. The more terms you put in there, the higher you show up on the list.
[00:25:39] That's how it's always been. This is less like that. This is looking at all of the other variables that go into whether or not somebody is
[00:25:44] successful, all their activity, their work history, how long they have actually
[00:25:49] practiced those skills in reality and all the insights that you can glean from the profiles that get
[00:25:55] cultivated and ultimately get placed. So we have a different twist on it.
[00:26:00] And when we surface the matches, we're surfacing them very similar to how Amazon is recommending books.
[00:26:08] Like, Will, you'll probably like these five people, but you may not. It's okay if you don't buy those five books.
[00:26:16] That's okay. And so the way that they get surfaced to the customer and the way we do the matching is also a little bit
[00:26:21] different. So we think it's differentiated. Our customers have been successful making, you know, we have some businesses that are doing
[00:26:27] 25% of their placements just from using that matching tech alone in an automated way.
[00:26:32] That's high. That's pretty high.
[00:26:35] Now there's not a lot that are doing that but there are enough to say that if you embrace that and you use it that way, then
[00:26:42] you can get a similar outcome.
[00:26:45] Exactly. And I love the comparison. I've always struggled with staffing versus corporate.
[00:26:51] Like, is it really that different? And I know it is. But when you say it that way: staffing, you have just last year 5 million placements, while
[00:26:58] a corporate entity, maybe they have a thousand hires.
[00:27:04] Exactly.
[00:27:05] 20,000, 40,000 hires. That is not the same. Right.
[00:27:09] And they're just saying, well, we hire Johnny and he came from, I don't know, UCLA and now he's a developer.
[00:27:18] Let's go get more UCLA grads like Johnny. It's not the same.
[00:27:23] It's not the same.
[00:27:24] Just more data equals better.
[00:27:26] I mean, it's kind of a war in the sense that the person with more weapons wins, and here the weapon is data.
[00:27:33] Well, it's just guarding the path. It's just there.
[00:27:38] One question I will ask you is either you bullhorn proper or through a partner, are you helping customers with job descriptions in
[00:27:48] the sense of making them better?
[00:27:51] Yes.
[00:27:52] You know what I'm saying? Because it's garbage in garbage out, right?
[00:27:55] So we've been talking mostly about candidate data. But on the front end of that is, you know,
[00:28:00] the marketing utility that they put out there as a job description. If it's crap, okay.
[00:28:09] Well, we're already starting at a deficit. So tell us a little bit about that and kind of its AI story.
[00:28:16] Yeah. So I've made it this far without talking about our co-pilot product, which is sort of our AI. Which, I know, is a really,
[00:28:25] really unique name. I know.
[00:28:27] But no, right. Actually, this is funny: this was prior to the Super Bowl and prior to Microsoft's release or
[00:28:35] whatever. Ryan was talking on the podcast about how everyone's going to have a co-pilot.
[00:28:40] Yeah, like everybody in every job and every corner of society, everyone's going to have one, right?
[00:28:44] You know, whatever, personal.
[00:28:46] Actually, there's one, I can't remember the name, I just saw it the other day. It's a little thing that clips on.
[00:28:51] Oh, yeah.
[00:28:52] It's a wearable. Yeah, yeah. It clips on, and I mean, I'm not paying for it yet.
[00:28:58] But it does foreign language. One of the things it does is foreign language, so you can hear a foreign language
[00:29:04] translated into your ears and then speak in your own language.
[00:29:07] Anyhow, so tell us about your co-pilot.
[00:29:10] So, well, yeah, I think co-pilot will become pilot and autopilot in time.
[00:29:13] Like I think that's where that works today is somebody who's assisting you.
[00:29:17] And so our co-pilot uses a lot of LLM, Gen AI stuff to assist in content creation across the board, across the whole workflow.
[00:29:26] That's a large part. Part of it's matching. The other part of it is this content generation stuff.
[00:29:30] And so the job description thing is a great use case for that, where that is certainly something that customers do not do well.
[00:29:39] Like we look at our job descriptions and go, I bet you only 20 percent of these things are actually any good.
[00:29:45] And it would be awesome if it were 100 percent, because that would seriously help us train our AI.
[00:29:50] And so, yes, co-pilot will be able to help build out job descriptions based on a variety of different factors.
[00:29:58] There's also other things like, you know, writing pitch letters, right?
[00:30:02] Decks for candidates like those are things that take people a lot of time to do today if they do it well.
[00:30:08] And if they don't do it well, they're all generic and there's no personalization.
[00:30:11] And so you have the job description and you have the resume.
[00:30:14] You can really very easily create great pitch letters and you can tailor them to, you know, whoever you're sending them to.
[00:30:20] We have a funny thing where you can pick the tone like are you sending it to a boomer or are you sending it to, you know, to Gen Z?
[00:30:26] We will change that.
[00:30:27] I love that.
[00:30:28] Right.
[00:30:29] So those are just kind of fun to play around with.
[00:30:31] What was your favorite Beatles album?
[00:30:35] You know, it would be even better if you could just put in the date of birth and see the difference between each person.
[00:30:44] Yeah, funny.
[00:30:45] Maybe where they live because we live in like 11 different parts of the United States.
[00:30:49] So like we actually do that.
[00:30:51] We take location into account when you do that.
[00:30:54] You can ask it to do something personal, like inject something funny about wherever the person lives, and it will.
[00:31:02] And here you go.
[00:31:03] You can put something funny.
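Under the hood, the tone and location personalization described here is largely prompt construction before the LLM call. A minimal sketch, where the function name, prompt wording, and fields are all invented and the actual model call is omitted:

```python
def build_pitch_prompt(job_description, resume_summary, tone, location=None):
    """Assemble an LLM prompt for a personalized pitch letter.
    `tone` might be 'boomer' or 'gen-z'; `location` lets the model
    inject a local reference. The exact wording is illustrative only."""
    parts = [
        "Write a short pitch letter introducing this candidate for this job.",
        f"Job description: {job_description}",
        f"Candidate summary: {resume_summary}",
        f"Write in a tone that lands well with a {tone} reader.",
    ]
    if location:
        parts.append(f"Work in a light, friendly reference to {location}.")
    return "\n".join(parts)

prompt = build_pitch_prompt(
    "Senior Java developer, fintech, hybrid.",
    "8 years Java, led a payments team.",
    tone="gen-z",
    location="Philadelphia",
)
print(prompt)
```

The same job description and resume go in every time; only the audience-facing knobs change, which is what makes this cheap to offer per recipient.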
[00:31:04] You sold me, Matt.
[00:31:05] I need to see a demo now.
[00:31:06] Fly Eagles.
[00:31:07] Fly Eagles fly.
[00:31:08] Yeah, exactly.
[00:31:11] Hey, we just re-signed DeVonta Smith.
[00:31:13] So back off.
[00:31:14] 100 percent.
[00:31:15] We're good.
[00:31:16] Hey, I cannot talk about this.
[00:31:18] Yeah.
[00:31:19] Alabama, Alabama, Alabama football.
[00:31:21] Not a problem.
[00:31:22] Love that.
[00:31:23] You're a Patriots fan.
[00:31:24] I am.
[00:31:25] Of course.
[00:31:26] Right.
[00:31:27] Yeah, that sucks.
[00:31:28] Hey, don't worry about it.
[00:31:29] I remember when I grew up in an era where they were not great, but we lived through a wonderful era where they were great.
[00:31:35] Same.
[00:31:36] And hopefully they can get it turned around pretty quickly.
[00:31:38] I agree.
[00:31:40] Let's talk a little bit about skills in our world on the corporate side in HR.
[00:31:44] Everything in HR is led with some type of skills discussion.
[00:31:49] Usually. I think Ryan and I are a little bit cynical because it kind of sounds a lot like competency models.
[00:31:58] And then we kind of went through that era where everyone talked about competency models.
[00:32:02] No one implemented them.
[00:32:03] Yeah.
[00:32:04] So what are you seeing in the staffing world as it relates to skills and A.I.
[00:32:11] And kind of that intersection between the two from a staffing perspective?
[00:32:15] Yeah, the holy grail for customers is to be able to know how to specifically classify a candidate or a job.
[00:32:21] Right.
[00:32:22] I mean, to be able to say show me anywhere in the taxonomy, right?
[00:32:26] Show me all the developers or show me all whatever.
[00:32:29] And historically, that's been pretty tricky because you've had to use parsers that are not awesome at weighting those skills when they pull them out.
[00:32:36] So you get this laundry list of skills and you're like, OK, those are your things.
[00:32:39] They can't be great at 100 things.
[00:32:41] Right. But this is really another area where I think LLMs have changed the game.
[00:32:45] And so what we've started to do, and we're early on this, is classify almost like a shadow record of every profile that comes into the system and parse out not only their skills, but put them into a standardized hierarchy.
[00:33:02] You can just ask the LLM to do it.
[00:33:04] You don't have to manage a hand-curated taxonomy anymore.
[00:33:07] That stuff's legacy stuff.
[00:33:09] You don't have to do that.
[00:33:10] Right. Right.
[00:33:11] And then you can start getting more specific about, like, how long have they actually had that skill, at what types of companies did they use that skill?
[00:33:18] Right. What's tertiary to those skills?
[00:33:21] If they know that they probably know this, this, this and this.
[00:33:23] Right. And so you're not doing it with this sort of semantic keyword expansion thing, which is how everybody's doing it.
[00:33:30] Right. You're doing it by like creating these profiles and having, you know, an LLM driven taxonomy in the classification for candidates and jobs.
[00:33:39] And then, like, think about that all the way through.
[00:33:41] Like if you're in analytics and you actually want to run a report about your margin profile for Java developers or whatever it might be, there's no more sort of, you know, gymnastics that you have to do to figure that out.
[00:33:53] You just go, I've already classified it.
[00:33:55] I want to go figure that out for whatever that is.
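The "shadow record" idea, parsing each profile's skills onto a standardized hierarchy so searches and reports become lookups, might look something like this. Here a tiny hand-made dictionary stands in for the LLM-derived taxonomy, and all names and data are illustrative:

```python
# A tiny hand-made stand-in for an LLM-derived taxonomy: raw skill strings
# mapped to a standardized path in the hierarchy (category, family, skill).
TAXONOMY = {
    "java":        ("Engineering", "Backend", "Java"),
    "j2ee":        ("Engineering", "Backend", "Java"),
    "spring boot": ("Engineering", "Backend", "Java"),
    "sql":         ("Engineering", "Data", "SQL"),
    "react":       ("Engineering", "Frontend", "React"),
}

def classify_profile(raw_skills):
    """Build a 'shadow record': each raw skill normalized onto the
    standardized hierarchy, so 'show me all the Java developers'
    becomes a lookup instead of a big Boolean string."""
    shadow = {}
    for skill in raw_skills:
        path = TAXONOMY.get(skill.strip().lower())
        if path:
            shadow.setdefault(path, []).append(skill)
    return shadow

record = classify_profile(["Java", "Spring Boot", "SQL", "basket weaving"])
# Both Java-family entries collapse onto one node; unknown skills fall out.
print(sorted(record))
```

Once every candidate and job carries this record, the analytics question in the conversation, margin profile for Java developers, is a group-by on one standardized key rather than keyword gymnastics.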
[00:33:57] Do you see verification or validation of those skills as a part of that process?
[00:34:03] So we, yeah.
[00:34:05] You know what I'm saying? Like we have a Java developer, we say they're a five-star, and there's some type of depth and breadth to whatever their knowledge is.
[00:34:12] Yeah.
[00:34:13] How do we know that?
[00:34:14] I know. I think that's a good question. And I think that can go many different ways.
[00:34:18] There's all sorts of, you know, blockchain verification, that kind of stuff.
[00:34:23] Right. Right.
[00:34:24] I'm not, I don't know. I'm sort of in the middle on that, like how successful you can actually be at doing that at scale, right?
[00:34:33] Is there a version of that that's internal only to the organization through feedback and assessment, all that stuff?
[00:34:40] I think that's what I was going to say. I think that's where that goes. I think that that becomes a competency of a staffing agency.
[00:34:46] And they have their own way of sort of figuring this out and determining the relative proficiency of these folks.
[00:34:52] And there's ways you can deduce it looking at data and you can do some external validation.
[00:34:56] But I think doing that reliably at scale is probably pretty tricky.
[00:35:02] But if a customer gets it right, it's certainly, you know, IP for them.
[00:35:07] So let me ask you this, Matt. This is something I've been looking for.
[00:35:12] And if it's out there and I just don't know it, if anyone's listening, please let me know.
[00:35:18] So when I was on a desk, and this is a long time ago, so this could have changed.
[00:35:23] But you had said something where you're building your taxonomy. You can go left and right.
[00:35:28] You can make deductions. You can deduce where or how good people are, how proficient, where they've been and all that stuff.
[00:35:36] Is there a way for you now or in the future?
[00:35:40] Is there a way to say that William is a developer at company A in 2023 or we'll say 2015?
[00:35:49] In 2015, he was a developer at company A.
[00:35:53] Here are the five other companies in our LLM that in 2015 also had this same skill set on other people.
[00:36:02] So we can now safely make the assumption that William has experience at this particular level or on specific projects if it's at that same company.
[00:36:12] So use case: I'm thinking when I was with Kinect, and again, a long time ago, a Java shop, we were looking for people that worked on very specific projects.
[00:36:23] We knew just because of people in the industry, Comcast had those very specific people.
[00:36:30] But we got lucky because we only knew because somebody knew personally and came from there.
[00:36:38] Is there a way now for you to just say, yes, these people at these four companies worked on a very similar project and would have these very similar skill sets?
[00:36:48] I mean, I love that idea in theory.
[00:36:51] I think this connection back to the customer is a big one just in general in terms of what that actually allows you to do because especially if you think about it from a placement perspective,
[00:37:03] if you know people that have been successfully placed at a certain company, you can go find other people that worked at that company and you can start to create all of these other things and stuff.
[00:37:11] And that's a new thing.
[00:37:12] And I don't think anybody is really doing that particularly well.
[00:37:14] I think that's an opportunity.
[00:37:16] Down to the project level, as you mentioned, I mean, you have to try to figure out, I guess, how to get that information off of a resume or a profile at that level of detail before you could do it.
[00:37:28] I mean, that sounds awesome.
[00:37:30] I'm not sure.
[00:37:31] It always sounded awesome in my head.
[00:37:33] Yeah.
[00:37:34] But I would sort of pick up on the company side of things, because I think what you're saying there is really powerful, because you know people that are working there, whether it's Amazon or whatever it is.
[00:37:45] I think that's really powerful and I think you can start to draw some conclusions that previously you wouldn't be able to connect the dots on.
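The company-level inference being discussed, people successfully placed out of company A had these skills, so others still at company A likely do too, amounts to aggregating skills over placement history. A rough sketch with invented data and a made-up threshold parameter:

```python
from collections import Counter

def company_skill_profile(placements, company, min_share=0.75):
    """Infer a company's likely skill profile from candidates previously
    placed out of it: keep skills that appear on at least `min_share`
    of those candidates' profiles. Threshold is illustrative."""
    staff = [p["skills"] for p in placements if p["company"] == company]
    if not staff:
        return set()
    counts = Counter(skill for skills in staff for skill in set(skills))
    return {s for s, n in counts.items() if n / len(staff) >= min_share}

# Invented placement history: two people placed out of Comcast, one out of Acme.
placements = [
    {"company": "Comcast", "skills": ["java", "kafka", "microservices"]},
    {"company": "Comcast", "skills": ["java", "kafka", "sql"]},
    {"company": "Acme",    "skills": ["php"]},
]
likely = company_skill_profile(placements, "Comcast")
print(sorted(likely))  # skills shared by everyone placed out of Comcast
```

Getting down to the project level, as the conversation notes, is the hard part: resumes rarely carry that detail, so company-level co-occurrence is where the signal realistically lives.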
[00:37:51] Yeah.
[00:37:52] So, I need you to solve.
[00:37:54] It's not a fight but a disagreement that Ryan and I have around candidates using AI.
[00:38:01] So now you start to see some tools out there.
[00:38:03] 100% use it.
[00:38:05] Where candidates can apply to 10,000 jobs.
[00:38:08] Sure.
[00:38:09] Using AI.
[00:38:10] And I'm not necessarily for that.
[00:38:13] I'm more kind of a buyer of detection of those types of things like detection software to find out who's using AI.
[00:38:20] Should I give my argument before he answers?
[00:38:22] Yeah, yeah, absolutely.
[00:38:23] Good.
[00:38:24] It's not really an argument.
[00:38:25] If recruiters are able to use AI or tech to screen out 400 resumes, why can't I screen myself into 400 jobs?
[00:38:36] Well, your most powerful part of your argument you didn't say is do you care?
[00:38:41] At the end of the day, if it placed somebody, do you care?
[00:38:46] And that's where I guess we get to a point where it's like, I think candidates should apply to the jobs that they have the best chance of landing, and probably ought to, if you will.
[00:38:58] Like I don't believe in the apply all, etc.
[00:39:01] I don't like those.
[00:39:02] I just think it creates noise.
[00:39:04] So I'm one of those types of people, which could be my age.
[00:39:08] But I don't necessarily like creating more noise.
[00:39:12] And if AI, from a candidate's perspective, helps you facilitate, through Gen AI or otherwise, just more resumes, personalized resumes, looking at the job description, looking at your own resume and personalizing it.
[00:39:26] I don't really like that.
[00:39:29] But I do get, at the end, to Ryan's argument, where it's like, OK, if it makes the placement.
[00:39:36] Well, I'm not saying hit the easy button and apply to 500 jobs.
[00:39:40] That's what they're doing.
[00:39:41] That's just yeah, that's what they're doing.
[00:39:43] That I don't like.
[00:39:44] I like the power of the candidate has the power to personalize the resume and just say, look, here's 40 jobs that I've saved.
[00:39:52] I'm going through job boards.
[00:39:54] They've saved, saved, saved, saved. Take those 40 jobs.
[00:39:58] Create me 40 cover letters.
[00:40:00] Create me 40 emails.
[00:40:02] Create me 40 variations of my resume that potentially fit for each of these jobs and give me something personalized.
[00:40:08] We're doing the same thing as recruiters in our reach out in our emails and our job descriptions.
[00:40:13] Why not allow the candidates to do the same?
[00:40:15] You're going to have to solve this for us.
[00:40:17] I mean, I'm sort of in the what's-good-for-the-goose-is-good-for-the-gander camp personally.
[00:40:21] Right.
[00:40:22] Like I've never seen any technology that adds value.
[00:40:30] I've never seen it be successful to try to stop people from using it right.
[00:40:34] Right.
[00:40:35] If it helps and if it helps a candidate, then they will figure out a way to do it because it is essentially the same thing that we're doing.
[00:40:42] On the recruiting side, you know, and I always make this joke: our co-pilot generates pre-screening questions using an LLM.
[00:40:49] And I always make the joke that can't the candidate just take those and answer them with an LLM?
[00:40:53] Like, like, of course they can.
[00:40:55] And so these are like ubiquitous tools that are publicly available.
[00:40:59] And I don't I don't think stopping them from doing it's the right answer.
[00:41:04] I think detecting it's the right answer.
[00:41:06] And I think that this also proves why the human is so important in this process. Now, of course, you don't want the recruiter having a thousand conversations because somebody made a bunch of stuff up on their resume.
[00:41:18] Right. You can screen that kind of stuff out.
[00:41:21] But this is why the person is so important, because it becomes really easy to kind of fake your way through this if you're a candidate.
[00:41:26] So I think you have to meet everybody where they are and figure out a way to embrace the fact that that's probably what people are going to do.
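Reliably detecting LLM-written text in general is an open problem, but one cheap signal against the "apply-all" noise discussed here is near-duplicate cover letters blasted across many applications. An illustrative sketch, with invented data and threshold, using word-shingle Jaccard similarity:

```python
def shingles(text, k=3):
    """Set of k-word shingles, lowercased, for near-duplicate comparison."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Jaccard similarity of two shingle sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def flag_mass_applies(letters, threshold=0.8):
    """Flag pairs of cover letters (by index) that are near-identical --
    a crude signal that one template was blasted at many jobs."""
    sh = [shingles(t) for t in letters]
    return [(i, j) for i in range(len(sh)) for j in range(i + 1, len(sh))
            if jaccard(sh[i], sh[j]) >= threshold]

letters = [
    "I am excited to apply for the Java developer role at your company today",
    "I am excited to apply for the Java developer role at your company now",
    "Dear team, my decade of payments experience maps directly onto this opening",
]
print(flag_mass_applies(letters))  # [(0, 1)]
```

A flag like this does not disqualify anyone; it just tells the recruiter where the human conversation is worth having, which is the division of labor argued for above.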
[00:41:33] Can AI help us? So thank you for solving that, Ryan.
[00:41:36] You get it. I don't know if I solved it, but I'm not going to humble brag, but I'll kick your ass.
[00:41:44] So staffing's kind of always had some things and we've seen it now in corporate as well where there's ghosting and not fraud, fraud, the way that people think of fraud, but people may be using other people's backgrounds and representing themselves in a different way.
[00:42:02] So can AI help us with those things? And do you see it helping your clients either reduce fraud, reduce ghosting and things like that, or candidates representing themselves incorrectly?
[00:42:17] Can AI, or do you see AI as a response to that?
[00:42:21] I do. I don't see it necessarily right the second. I mean, I think there's an argument for like an immutable blockchain type verification of a human and their resume for this right?
[00:42:31] Right. And I think that actually doing that in practice at scale and having all the authorities be able to validate everything they've ever done is probably going to be hard.
[00:42:39] Right. But that aside, yeah, I think generally that the reason I'm optimistic about AI in the future is I think that the more problems it solves, the more the more opportunity it has to offer.
[00:42:50] The more opportunities it creates. This is a perfect example of that, right? If AI is everywhere and people are able to use it to apply to jobs and okay, well, an unintended consequence of that could certainly be more fraud.
[00:43:02] Okay, so somebody's going to go figure out a technology solution to then go detect that fraud. Right. And so it just creates this sort of, in my opinion, positive snowball effect of, you know, other technology providers sort of responding to the things that come from this.
[00:43:18] It's just like hackers. Hackers figure out a way to hack something, then there's another group of hackers that go okay we need to solve that. Right. So it's just kind of the same type of thing. That's where innovation comes from.
[00:43:30] Absolutely. And I think I mean this is a little bit off topic but not really because you know people are still going to have to hire these skills but I think InfoSec and security is huge.
[00:43:39] It's going to be so much easier in the next few years to hack. You're going to have to have a bunch of response to that with new technologies and AI driven security products and all these different things and that creates employment opportunities then for people to go and build those things.
[00:43:52] I think you know there's a few areas that I think fit this really well and you know I would say security and fraud is one of them where I would expect there to be a marketplace of technology around.
[00:44:02] You know, it's funny, when we were talking about the candidate thing and candidates using AI to personalize cover letters and personalize a resume: not if but when you can detect that, you might flip the job on them and put them in a more AI-centric job.
[00:44:19] That's awesome.
[00:44:22] We're going to hire you for this bit over here but you know what you're pretty good at AI.
[00:44:27] I love that.
[00:44:28] That's great.
[00:44:29] I thought you were going to go the opposite way and say, psych, there's not really a job.
[00:44:34] Again, you're early on, and if somebody is really proving themselves to be adept at AI.
[00:44:40] You know what? There's probably another job for them at the company. Yeah, I love it. It might not be the one that was listed, but it's a pretty good company.
[00:44:48] This has been way more than I ever expected. So Matt, thank you so much for your time and coming on the show and kind of just talking with us about AI.
[00:44:56] Absolutely.
[00:44:57] We appreciate you guys. I really enjoyed the discussion. Thanks.


