We hear a lot about what AI can do, but not much about checking to make sure it’s actually doing it. I’m joined today by Eric Sydell. He’s CEO of a new company called Vero AI, which helps businesses understand and evaluate AI and other advanced technology. We’ll talk about keeping an eye on solutions and processes, and what role humans should play. All on this edition of PeopleTech.
Get full access to WorkforceAI at workforceai.substack.com/subscribe
Powered by the WRKdefined Podcast Network.
[00:00:00] Welcome to PeopleTech, the podcast of WorkforceAI.news. I'm Mark Feffer. We hear a lot about what AI can do, but not much about checking to make sure it's actually doing it. I'm joined today by Eric Sydell.
[00:00:25] He's CEO of a new company called Vero AI, which helps businesses understand and evaluate AI and other advanced technology. We'll talk about keeping an eye on solutions and processes, and what role humans should play. All on this edition of PeopleTech. Hi Eric, welcome.
[00:00:47] Now Vero AI is a new company, so tell me its genesis and tell me what it does. Yeah, sure. Absolutely.
[00:00:55] Well, the genesis of this company really goes back decades for me personally, as I explored and learned about science, and I became an industrial psychologist a couple of decades ago at this point.
[00:01:11] And throughout my consulting work for the last 20 years, I've analyzed a lot of data in HR tech, in hiring and assessments. And I've seen a lot of tools that are very powerful and that work very well and many that are not very powerful and don't work very well.
[00:01:29] So I've always been taken with this idea that there's tons of data out there and we're not doing a very good job of making sense of it.
[00:01:37] There's a lot of tools in HR technology and other places in the economy that sound good on paper that are marketed well, but that don't always do that much. And they don't always have the benefits that we want them to have.
[00:01:51] And that can mean effectiveness: do they actually predict what we want them to predict? Or fairness: are they fair for all classes of individuals, including protected classes? And so that has been a driving force for me throughout my career, even before AI.
[00:02:12] And now with AI: several years ago we had so many more AI tools come on the market, and then in November of 2022, the big one, generative AI, hit the market and ChatGPT came out.
[00:02:28] And since then, all of these types of issues that I've always dealt with and wanted to solve have become even more important because there's even more information. There's even more confusing technology out there that's very powerful but also difficult to understand and difficult to evaluate.
[00:02:50] And so what I wanted to do is create an entity that can help companies do that better and at scale.
[00:02:58] And so that's sort of the idea behind Vero AI is to help companies take control of complex processes that might involve AI or they might just be algorithmic or automation in general types of tools.
[00:03:11] They don't necessarily have to involve AI, but we can come in using our platform and we can collect information on those tools and then we can evaluate them at scale ongoing all the time and help these tools make sense for business users.
[00:03:29] So essentially, you know, that's sort of the big picture, right? Is taking in a bunch of information and then we can evaluate that at scale automatically.
[00:03:38] So can you bring that down a little bit and tell me, give me a real world example? What kind of question would somebody try to answer? How would they go about it?
[00:03:49] Yeah, I'll talk about assessments here, pre-hire assessments because that's the world that I lived in for 20 plus years of my previous life.
[00:03:58] So in that world, you know, you basically have an assessment that you give to candidates and then it scores them on some characteristics that are hopefully job relevant.
[00:04:08] And then you use that in the hiring decision to help figure out whether to hire them or not. And so throughout that process, the company is collecting a lot of data on how candidates score on the tool, right?
[00:04:23] And so: do those scores work? Are they meaningful? Are they biased? Those are questions a company should be asking. And so you hire a new person with that assessment, and then you see how they perform on the job down the road.
[00:04:38] Now, a lot of companies don't actually do the analytics work to see whether there's a stable relationship between that assessment and job performance over time. There should be. But a lot of companies don't do that work.
They don't have the tooling, the know-how, or the expertise to do that. Another big issue is bias: is the tool biased against protected classes?
[00:05:02] So, you know, it could generate higher scores for white people or for men or for some other class of individual and therefore, you know, have an overall bias in how it's implemented in the company, which could lead to a lack of diversity, a lack of fair hiring.
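One common way to monitor the kind of bias described here is the four-fifths rule from the US Uniform Guidelines on Employee Selection Procedures: a group's selection rate below 80% of the highest group's rate is taken as evidence of adverse impact. A sketch with purely hypothetical counts:

```python
# Hypothetical selection data for two candidate groups. The four-fifths
# rule flags adverse impact when one group's selection rate falls below
# 80% of the highest group's selection rate.
hired = {"group_a": 48, "group_b": 18}
applied = {"group_a": 100, "group_b": 60}

rates = {g: hired[g] / applied[g] for g in hired}
highest = max(rates.values())
impact_ratios = {g: rates[g] / highest for g in rates}

for group, ratio in impact_ratios.items():
    flag = "ADVERSE IMPACT" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rates[group]:.2f}, ratio {ratio:.2f} -> {flag}")
```

Run on real hiring data on an ongoing basis, this is exactly the kind of monitoring that most companies skip after implementation.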
[00:05:20] So, you know, these things are often not monitored ongoing. You know, typically what happens is a tool like that, an assessment like that, is implemented. And at that point in time, it's got documentation behind it that shows it's predictive of job success and it's not biased, right?
[00:05:38] But then it's implemented. And then oftentimes most of the time it's not really monitored. And you can expand that not just assessments but other types of hiring practices and other types of HR tools, development tools and so forth, you know, are often in that same category.
[00:05:55] They look good on paper. They've got a good technical report that showcases previous analytics and data that were used to create them, but then they're implemented and not really monitored in a stable ongoing way.
[00:06:08] And so we want to change that. We want to be able to audit and continually audit these types of tools to make sure that they're not generating bias at any given time and that they're actually effective.
[00:06:21] Now, we can also add AI. Let's say it's an AI assessment, and that then makes it subject to a whole raft of new AI legislation that's being created all over the world.
[00:06:33] And you know this. I mean, there's legislation coming everywhere. There's the European legislation. States in the United States are creating their own legislation.
[00:06:42] Cities are too, like New York City with Local Law 144. So everybody's struggling to get a handle on AI. So if a tool has AI in it, now all of a sudden it's subject to this growing body of legislation as well.
[00:06:57] So it's all the more important to harness these tools to make sure they're doing what we think they're doing and not going off the rails. It's kind of ironic, someone could say, that you're using AI to monitor the performance of AI.
[00:07:13] That's exactly right. I struggled with that early on. Early on, I thought, well, you know, like a lot of people, well, you can't use AI to harness AI. That doesn't even make sense.
[00:07:23] But it actually does make sense. And I address it head on now because, you know, AI, what is AI? It's basically just statistical techniques that we use to understand data.
[00:07:36] AI in and of itself is something that helps us understand the world around us. Right. That's good. We want to be able to study the world around us.
[00:07:45] And specifically what it allows us to do is study unstructured information, not just structured data. So not just numbers, but also text and any other sort of information that's out there.
[00:07:59] You could say video imagery, whatever. AI techniques can actually help us parse those sources of information and make sense of them at scale.
[00:08:10] And so, for example, the way we're using this in our platform is if you think of a typical business intelligence dashboard, Mark, where you're reading in numerical data and then you've got some charts and graphs that the user can explore to understand that information.
[00:08:27] Well, we can do that. But in addition to that, what we can do is read in unstructured information documents, primarily at this point. You can actually upload technical manuals, technical addenda, terms and conditions, release notes, blog entries, white papers, whatever you want, anything.
[00:08:47] Any documentation can actually be uploaded into our system and using AI, we can parse those documents and figure out what they mean and actually score them numerically on our output model.
[00:09:03] So, essentially, we can now study a vastly larger amount of the world around us using AI. And that's extraordinarily powerful because where it might take a human thousands of hours to wade through technical documentation and really understand it, we can do that in the click of a button and automatically pull out relevant factors.
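Vero AI's actual scoring uses generative models, but the overall shape described here, read unstructured documents and score them numerically against defined factors, can be illustrated with a deliberately simple keyword rubric. All factor names, terms, and scores below are hypothetical:

```python
# Toy stand-in for AI document scoring: count rubric-term mentions per
# factor as a crude numeric score. Real systems use generative models,
# not keyword matching; this only illustrates the input/output shape.
rubric = {
    "fairness": ["bias", "adverse impact", "protected class"],
    "effectiveness": ["validity", "predict", "job performance"],
}

def score_document(text: str) -> dict[str, int]:
    """Score one document numerically against each rubric factor."""
    lowered = text.lower()
    return {factor: sum(lowered.count(term) for term in terms)
            for factor, terms in rubric.items()}

doc = ("Our technical report documents validity evidence that scores "
       "predict job performance, and an adverse impact analysis by "
       "protected class showing minimal bias.")
print(score_document(doc))
```

The point of the design is the output: instead of handing the user raw documents, every upload becomes a row of numeric factor scores that can be tracked over time.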
[00:09:30] And we're using generative AI models to do this and it's baked into our system. So, instead of just spitting out some charts and tables, what we can do is combine numeric and non-numeric data, score it using these AI and other statistical tools and produce an output that's very easy to interpret so you understand whether and how a system is working.
[00:09:58] So, what's the human's role in that? Yeah. Well, it's certainly not the case that we can automate this 100%. So I think a lot of times the goal in software is a fully 100% SaaS model where everything is just the click of a button and it's all automated.
[00:10:20] And I think that at this stage of AI development, that's not an appropriate way for us to pursue this. So essentially what we've done is we've built an AI assistive engine that can help us understand data but there's a human in the loop.
[00:10:38] There's a PhD scientist in the loop, or multiple of them. And so when the AI model produces its results, its recommendations, its scoring, we're actually watching that, we're reviewing it, and making sure that it's accurate. And in a lot of cases we are actually testing it
[00:10:59] and doing the ratings ourselves as humans, to make sure that the AI model's ratings match up with our ratings. So we're in the loop. We're making sure that it works right. So there is a human aspect to what we're doing. It's not just software.
[00:11:17] And I think that works really well for where we are right now, because there are so many different use cases for this type of technology that it's really important to have a human in there, working with it, understanding it, and learning as we go.
[00:11:31] So do you think most people understand what's going on with all this? Are they buying, oh here's a solution. I don't know how it works but we're gonna buy it anyway or are they trying to get their arms around what exactly is going on under the hood?
[00:11:50] No, I would say a lot of people aren't trying to look into these tools in that level of detail. I think what's happened is that technology is complicated. It's hard to understand. Even data scientists that create AI tools don't really understand how they work a lot of times.
[00:12:13] And so then you have end users and companies trying to buy or make purchase decisions about these super complicated tools and it's next to impossible.
[00:12:23] Right? And so I think what happens a lot is companies just buy things that seem like they look professional and look like they're serious and look like they have a lot of money and funding behind them.
[00:12:36] And so there's sort of an assumption that it probably works and it probably does what it's supposed to do. And then the company pays money for it and then they don't monitor it ongoing in any sort of rigorous way.
[00:12:49] So they're just basically crossing their fingers. Now, the problem with this is that crossing your fingers and not looking at the data almost always leads to a lack of effectiveness, to bias, and increasingly to a lack of legal compliance
[00:13:06] and other negative outcomes. And so the whole bury-your-head-in-the-sand approach is very harmful to companies, to their bottom lines, to ROI, to fairness, to effectiveness.
[00:13:19] And this is a grand problem. I've been interested in this problem my entire career and I've seen it firsthand. And we now have tools with AI to change that and to change that model.
[00:13:33] And you can view what we're doing as a sort of next gen business intelligence dashboard where we study not just numbers but text information and other information as well.
[00:13:44] And make sense of it using science, using actual scientific techniques of data exploration and creating outputs that humans can easily understand and know, hey, this is not effective or hey, this is biased or hey, this might not be legally compliant.
[00:14:02] In a certain jurisdiction, we can do that at scale now. And that's tremendously powerful. It provides business users, leaders, a window into how these complex systems are working that they've never had before.
[00:14:19] So what's the human's role in all this? I mean, you know, the AI is monitoring things and examining the data. How does it, when I say the human's role, I mean the customer's role, your customer's role. Mm hmm. How are they supposed to work with the system?
[00:14:37] Yeah, it's simple, really. I mean, we basically have to work with the clients to zero in on their particular use case. And then from that, they help us to identify the sources of information that we need.
[00:14:54] They're able to upload documents and links to our platform. We can also establish, you know, data feeds through APIs, numerical data feeds and things of that nature. So they help us hook that up.
[00:15:09] They help us hook all that stuff up. And then we configure the scoring based on the use case and on the information they provide; our psychologists and scientists do that.
[00:15:22] And they create these scores; then the client goes into the admin view and they can see how the system scores on all of the relevant metrics that we're tracking. Mark, we created a model called VIOLET, which is six factors that comprehensively cover algorithmic and AI effectiveness.
[00:15:42] So if you want to know how to evaluate AI or how to evaluate an algorithm, that's confusing, because you might think: well, there are compliance needs. There's a need to be fair and not biased.
[00:15:55] You know, how well does it work, et cetera, and so forth. Everything you might want to know is included in our VIOLET model. And so we score that, and the end user can go in and see the scores on each letter of VIOLET.
[00:16:09] And they can drill down for more information to find out how to improve each of the scores as well. So it's very, very simple. It's not like a dashboard where you have to wade through a bunch of charts and tables and graphics and try to figure out what it means on your own.
[00:16:23] Instead, we've basically done the interpretation by creating these validated VIOLET scores. So it's super easy. It's super easy to understand where the limitations are and where the strengths are of your current system, and then to go and fix any issues.
[00:16:40] And so I, you know, I think of it as the difference between a dashboard and a scoreboard. So I like to use the analogy that if you're at a baseball game and there's no scoreboard for some reason, and you're just trying to figure out who won the game, who's winning the game based on, you know, how fast the runners are running and how far it looks like they're hitting the ball
[00:17:02] and whether it looks like they're playing in an energetic way or whatever you might, whatever information you might use to try to figure out who's winning without a scoreboard being there. That's what a dashboard is today.
[00:17:16] You're trying to look at a bunch of data to figure out what's going on. And as opposed to what we're doing, which is a scoreboard, we're actually telling you specifically in these numerical indicators that we validated what's going on.
[00:17:28] So I want to ask a question, to step back a little bit. Just looking at the business, you know, the industry that you're now a part of with Vero AI. What's your feeling about where we are right now?
[00:17:44] I mean, there's so much buzz out there. There's so much hype. How can a business person sort it all out and understand what's really valuable and what's just talk?
[00:17:53] Yeah. Well, obviously they can work with us. But you know, short of that, that's the challenge. You know, you talk to a lot of companies out there and you say like what types of AI tools are you using? And they'll say, none, we're not really using any or, you know, we're not sure we think there might be a little AI in this one tool that we're using or something like that.
[00:18:16] So for all the hype about AI, a lot of organizations, a lot of HR organizations are not really using it in any fundamental way. And that is because it's difficult to understand. And it's difficult to know if it actually works and it's difficult to know if it's legal, all these things, right?
[00:18:33] So a lot of companies aren't really adopting AI in the way that they might want to because here's the thing. I mean, AI technology can be enormously powerful, right? It can do a lot of amazing things for businesses.
[00:18:46] It can save money, it can make them run smoother, et cetera, et cetera. It can reduce bias, even though it can also increase bias if it's implemented incorrectly. But this is the type of thing that businesses are confused by.
[00:19:01] So I think, you know, first of all, there's no real experts on AI because it's changing so fast that nobody is really on top of every little aspect of it. So we're all trying to be experts on it as much as we can.
[00:19:16] And so that's sort of the function of our platform is to make that easy. Like we have all these scientists processing the nascent legislation in the world and all the other issues and things and encoding this in our scoring so that all you have to do is upload information and we will help you figure it out.
[00:19:36] So I mean, that's, there's no substitute for learning about the technology, learning about the tools. You know, when you talk to a vendor, if you're talking to a vendor that claims to have AI in its tool, you need to know the right questions to ask.
[00:19:52] Some of the time it's not even really AI. They're just saying that from a marketing perspective. Other times they might be using some AI, but it's used in a minor way, or it's not used in a way that really drives much of an impact, or it might be done in a way that's relatively unsafe and could lead to bias against protected classes and get you into legal hot water.
[00:20:12] So there's a lot of ways that you can make marketing claims about AI that sound very compelling. And companies that are looking to buy these tools, you've got to have a level of expertise to know what are the right questions to ask.
[00:20:26] You could ask, for example: how did you come up with this claim about how effective your tool is? And how much data did you use to calculate that result? Was it 10,000 people? Was it 50 people?
[00:20:42] If it was 50 people, it's not very reliable. It's not a very reliable claim. So, you know, you've got to question, you've got to dig into these claims that are being made to understand whether and how the AI that you're being marketed is really valuable or not.
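The point about sample size can be made concrete: under the common Fisher-z approximation, the standard error of a correlation coefficient is roughly 1/√(n−3), so a validity claim based on 50 people is roughly 15 times noisier than one based on 10,000. A quick illustration, with the two sample sizes from the conversation:

```python
import math

# Fisher-z approximation: the standard error of a correlation is about
# 1 / sqrt(n - 3). Smaller n means a much wider confidence interval
# around any claimed validity coefficient.
for n in (50, 10_000):
    se = 1 / math.sqrt(n - 3)
    # 1.96 * se approximates a 95% confidence half-width in z units
    print(f"n = {n:>6}: standard error ~ {se:.3f}, 95% CI half-width ~ {1.96 * se:.3f}")
```

So a vendor claim built on 50 people could easily be off by a large margin in either direction, which is exactly why the sample-size question matters.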
[00:21:00] Eric, thanks so much. It's great to talk to you again. I hope you'll come back sometime. Good luck. Thanks, Mark. Appreciate the time. My guest today has been Eric Sydell, the CEO of Vero AI. And this has been PeopleTech, the podcast of WorkforceAI.news.
[00:21:28] To keep up with AI technology and HR, subscribe to Workforce AI today. We're the most trusted source of news in the HR tech industry. Find us at www.workforceai.news. I'm Mark Feffer. Thank you.


