In this episode, we speak with Eric Sydell about the transformative potential of Vero AI in assessing tech investments. We look hard at the critical balance between value and risk, and explore how Vero AI helps you determine whether you bought what you were sold. Is your tech delivering on its promises? We look at data efficacy in evaluating AI tools and the current state of AI as compared to the boom of the internet.
Key Points:
1. Verify marketing claims with evidence and data before investing in AI tools to bridge the gap between promises and reality.
2. Assessing effectiveness data is essential for understanding the impact of AI on job performance and company outcomes.
3. Vero AI integrates automation and human expertise to impartially evaluate and monitor AI tools' effectiveness.
4. Recognize the limitations and ethical considerations of large language models, emphasizing critical evaluation and bias awareness.
5. AI's current stage resembles the internet's early days, marked by experimentation and uncertainty.
6. Prioritize outcome evaluation over code adjustments for AI models to gauge practical effectiveness.
7. Vero AI offers actionable insights for informed decision-making across diverse sectors like HR tech and healthcare.
Chapters
00:00 Introduction to Vero AI
01:33 Background of Eric and Vero AI
08:13 The Disconnect Between Marketing and Reality
10:38 The Degree of Lies in AI
15:07 Vetting AI Tools and Monitoring Effectiveness
22:29 Evaluation of Large Language Models
23:17 Morality and Limitations of Large Language Models
24:05 The Challenge of Bias in AI
25:01 The Wild West of AI
26:30 Focusing on Outcomes
27:16 Recommendation Engine for Improving Output
28:23 Vero AI's Target Market
29:38 Automating Certification Routines
30:54 Potential Customers for Vero AI
33:14 Using Violet as a Metric for AI
34:54 Creating Awards Based on AI Outputs
36:33 The Perception of AI in Different Fields
38:07 Educating and Explaining Vero AI
39:26 Targeting HR Tech and Expanding to Other Industries
42:31 Differentiation and Messaging
Learn more about your ad choices. Visit megaphone.fm/adchoices
Powered by the WRKdefined Podcast Network.
[00:00:00] How does it help me hire people better? And the thing that always was apparent to
[00:00:04] to me in my career is that the hardest data to get is effectiveness data. Does it actually work?
[00:00:11] Like you can calculate whether something's biased if you know the demographics of your
[00:00:15] hiring pool. You can do a little simple, you know, calculation, right? But is it effective?
[00:00:21] Well, you have to have criterion data. You have to have job performance or company
[00:00:24] performance data of some sort. You got to have supervisor ratings that are collected in an
[00:00:29] objective way, not part of the traditional rating process because those ratings are garbage.
[00:00:35] You've got to have, you know, if you're working in a call center, maybe you can access metrics like
[00:00:40] average handle times or number of units sold or customer service scores or whatever. Those
[00:00:46] are metrics you can use but you've got to be able to predict those things.
[00:00:50] Feeling kind of left out at work on Monday morning? Check out the BARF, Breaking News,
[00:00:55] Acquisitions, Research and Funding. It's a look back at the week that was so you can prepare for
[00:01:01] the week that is. Subscribe on your favorite podcast app.
[00:01:25] Kind of hear what your new venture is. And so why don't we start off with introductions?
[00:01:30] Why don't you give us an introduction to yourself and also Vero, yeah.
[00:01:34] Okay, great. Well, thank you both for having me on. It's great. Really appreciate the opportunity.
[00:01:40] So my name is Eric Sydell. I am an industrial organizational psychologist by training,
[00:01:46] which is, you know, basically business psychology. If you're in the UK
[00:01:50] or Europe, that's what they call it, which is a much better name.
[00:01:53] But we prefer the complicated long form name industrial organizational here. But
[00:01:58] essentially, you know, I studied statistics and tests and measures and things in school. And
[00:02:03] I spent most of my career doing pre-hire assessment. So large scale selection and
[00:02:10] assessment stuff where we would create tests and simulations, job simulations and collect a ton
[00:02:16] of data on candidates and use that to try to predict the likelihood of success in a role.
[00:02:22] So lots of statistics, lots of measurement and lots of, you know, calculations and using
[00:02:28] regressions and correlations and everything to try to make a difference for our clients,
[00:02:32] help them hire better people and then show the ROI of doing that. So that's kind of my
[00:02:39] background from a, you know, educational perspective. You've got a PhD? Yeah, PhD.
[00:02:46] What I know of I/O psychologists is I spoke at SIOP a couple years in a row.
[00:02:54] And most I/O psychologists like to drink. So I'm not, I'm not sure if I was just in the wrong
[00:03:00] crowd, which usually happens a lot. I was with Hogan and all the Hogan personality
[00:03:09] assessment people with Charles handler at different points and just like, yeah,
[00:03:13] it was insane. But they're fun. I call them propeller heads, but lovingly,
[00:03:21] because they're actually really fun people to be around. Yeah, I'm super drunk right now to be on
[00:03:30] Slurring your words, the big model in the back. Yeah, I like the effect that you just went
[00:03:34] with it. You just ran right with it. That's awesome. I think it was more shock that you
[00:03:38] could see it. He was shit. I got caught. So Eric, Eric, let's so you're with Vero, right?
[00:03:47] Like, so you said, let's talk about Vero, Vero, Vero AI. Yeah, we have to make sure the AI is
[00:03:54] 100%. Zero AI. So let's let's talk about Vero AI. I want to say Vero. It feels easier. But
[00:04:02] Vero will do this. Yeah, Pete. Yeah, they're gonna they're gonna change your name, of course.
[00:04:06] Yeah. What's the problem that you all are trying to solve? Let's talk about that. Yeah,
[00:04:11] let me go back to the beginning if I may for a second. So it's December 20th, 1996.
[00:04:18] Oh, yeah. I love the storytelling. I love it. And I turned 23 that day and my parents gave me a
[00:04:26] book called The Demon Haunted World by Carl Sagan. Now, this is a book that talks a lot
[00:04:33] about using science to shine a light on the world around us and to get rid of pseudoscience and
[00:04:40] bullshit and just all the stuff, right? And so I still go back to this book today. And, you know,
[00:04:48] Carl Sagan has so many great quotes in there, but one of them is spurious accounts that snare
[00:04:54] the gullible are readily available. Skeptical treatments are much harder to find. Skepticism
[00:05:00] does not sell well. And I would expand that to say science doesn't sell well. Oh, 100%.
[00:05:07] And so but that's that's always been the thing that has stuck with me is
[00:05:11] there's so much BS out there in the world. And I want to use science to help shine a light on it
[00:05:17] and find objective truth. Now, I will mention Carl Sagan died that same day, December 20th,
[00:05:23] 1996. So yeah, it's billions and billions and billions of stars. Yeah, the whole bit.
[00:05:30] But that's, you know, that has been my sort of guiding light, I think throughout my career. I went
[00:05:36] to, you know, school, got to, you know, a PhD in I/O psychology and all this stuff. And, you know,
[00:05:40] then spent 20 years doing assessment stuff where we really tried to focus on building
[00:05:45] tools that really work that we could demonstrate work. So lots of data, lots of
[00:05:49] statistics, lots of science. And, you know, sometimes you find that they don't work.
[00:05:53] Okay, great. Go back and fix it. As long as you're looking at the data, you're always
[00:05:57] making it better. You know, that was that was always sort of our mantra in that world. And
[00:06:02] along the way, what happened is I sort of became more and more a part of marketing.
[00:06:08] So I did more and more marketing as I grew in that role and was more and more, you know,
[00:06:13] facing the market and just seeing all the statements that vendors make that our competitors
[00:06:18] make or that other people in the space make and that we made, you know, and I realized,
[00:06:23] and it's always, it's this line, you know, there's like, there's marketing and then there's
[00:06:27] reality. And in the best companies, those messages are pretty close, but they're never
[00:06:33] identical, never intersecting. In fact, if they're identical, there's a problem.
[00:06:39] Yeah. Because you've got to lead, you've got to, in marketing, you've got to lead the
[00:06:44] buyer to a place. If you're just talking about what you are now, there's a book,
[00:06:48] it's about the lean startup movement, but it's called Lean Analytics.
[00:06:52] The first chapter of this book is all about how founders are psychopaths. Because they live,
[00:06:59] in their world, they live in the past, present and future tense. And they have, they can't
[00:07:05] delineate. So if someone says, Hey, is that feature? Is it, is it, is it live? Well,
[00:07:11] the founder knows that it will be live next Tuesday. But right now they're asking a question
[00:07:15] is it live? Yes. Right. Technically, it is live. It's a psychopath.
[00:07:23] In their world that they have to be able to kind of manage those things and be able to say,
[00:07:27] Oh yeah, absolutely. Yeah. You know, well, we set up a meeting for Wednesday
[00:07:32] so that we can show you everything and blah, blah, blah. But right. So, so marketing,
[00:07:37] I think has to lead. I think it's the, the veracity of that leading and also how far of it
[00:07:45] is outside of the actual roadmap. You know, like this is something I've seen with a lot of software
[00:07:51] companies where they sell the roadmap, not what's built. Yeah. And it's like the, the, the person
[00:07:58] that's buying, the prospect, the practitioner, they fall in love with the roadmap.
[00:08:02] Like, oh my God, this fixes all the problems that we have. And like in your world,
[00:08:07] the net world was pretty hard. This fixes everything. And they said it's integrated with
[00:08:10] all the stuff that we already use. Fantastic. But they don't tell them that's not going to be that
[00:08:15] way until nine months later. Right. Right. I mean, yeah, that is a huge, huge challenge. And I
[00:08:21] think it's, it's not just the, you know, founders or CEOs that are psychopaths, but,
[00:08:25] you know, the whole sales teams are psychopaths too. Well, they believe the mission.
[00:08:30] Yeah. That's right. They're paid, they're paid to, they're incentivized
[00:08:37] Yeah. I mean, you try to reduce that distance as much as possible, but a lot of companies,
[00:08:44] they don't, they're not even trying to reduce that distance. They're increasing the distance.
[00:08:47] And you know, sometimes the marketing messages are like the best case possible scenario. And
[00:08:52] then other times they're like completely made up and divorced from reality. And,
[00:08:57] and then the other thing I've seen a lot is that the messages will, this will be
[00:09:02] focused on one thing to the exclusion of other things. So like what happened in selection,
[00:09:07] in the selection space, I think is when I first started out, nobody was really,
[00:09:10] nobody was talking about DEI. And our clients barely cared about it.
[00:09:16] Yeah. I mean, well they, you know, they cared about adverse impact from a legal
[00:09:19] perspective. That's it. And they only really cared about the validity, the predictive power
[00:09:24] of the tool. And then, you know, DEI became a much more talked about topic and then
[00:09:32] everything switched. And then now it's, it's like that's the most important thing.
[00:09:36] And the validity is secondary, but a lot of times they don't even talk about validity anymore.
[00:09:41] So you'll see like a lot of tools out there that purport to do better
[00:09:45] selection, better hiring. And they don't talk about the effectiveness of the tool.
[00:09:50] They only talk about the lack of bias or the bias is minimized or something like that.
[00:09:54] So then you have to wonder, well, okay, well, but does it work? And that's where I always use
[00:09:59] this example of like, you could flip a coin, decide who to hire and you're not going to
[00:10:04] violate bias, any bias regulations. You're not going to violate data privacy regulations.
[00:10:10] So you're good to go from a legal perspective, but from an effectiveness perspective.
[00:10:14] Yeah, 50-50.
[00:10:16] You know, yeah.
[00:10:17] You got to have a chance.
[00:10:18] Yeah.
[00:10:19] Yeah, you're good.
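The "simple calculation" referenced here is typically the four-fifths (80%) rule used to flag adverse impact. Here is a minimal sketch; the hiring counts are invented purely for illustration and are not from the episode:

```python
# Four-fifths (80%) rule for adverse impact -- a sketch with invented
# hiring counts, not data from the episode.

def selection_rate(hired: int, applied: int) -> float:
    """Fraction of applicants from a group who were hired."""
    return hired / applied

def adverse_impact_ratio(group_rate: float, reference_rate: float) -> float:
    """Ratio of a group's selection rate to the reference (highest) rate.

    Under the four-fifths rule, a ratio below 0.80 flags potential
    adverse impact worth investigating.
    """
    return group_rate / reference_rate

# Hypothetical: 30 of 100 reference-group applicants hired vs.
# 18 of 90 applicants from another group.
ref_rate = selection_rate(30, 100)   # 0.30
grp_rate = selection_rate(18, 90)    # 0.20
ratio = adverse_impact_ratio(grp_rate, ref_rate)
print(round(ratio, 2))  # 0.67 -> below 0.80, so worth a closer look
```

As the coin-flip example makes clear, passing this check says nothing about effectiveness; a random process passes it too.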
[00:10:20] So the work that you did in pre-hire that led you to this place right now,
[00:10:28] you've seen the hype machine several times in SAS and or on premise software to SAS.
[00:10:36] You've seen mobile, you've seen social, you saw probably a four or five years ago, AI,
[00:10:41] general speaking, now more generative AI and the lies on all of those have been
[00:10:51] expanded exponentially. So when you walk like a floor at a conference or you just
[00:10:57] wander around looking at different firms, et cetera. And let's start with AI.
[00:11:01] You don't have to go way back. But what's the level?
[00:11:06] What's the degree of lies that you're dealing with, what a buyer would need to be aware of?
[00:11:11] Yeah.
[00:11:11] And I'm asking the question to help buyers understand like, what should they be asking
[00:11:17] of the vendors that they're dealing with? What is this?
[00:11:21] That's great. Yeah. I think that when I'm walking the floor of HR tech or something,
[00:11:26] and I'm looking at the claims that are on the booths and everything,
[00:11:30] my mind is constantly just in criticize mode. I'm just looking at what statement are
[00:11:35] they making? I wonder how they got to that statement. And what are they not saying
[00:11:40] that's missing from the message too? So you see so many different claims,
[00:11:44] like if you see a claim that says something like, hey, we reduce bias by 20% or something.
[00:11:51] It's like, okay, well, how did you calculate that? What data did you use to calculate that?
[00:11:57] And walk me through in depth what you did because probably what they did
[00:12:02] is in most cases what they did is they have like a small sample that they collected with one client
[00:12:07] and it's like 50 people or whatever. And they found that if you use their tool,
[00:12:12] then it helped you hire a couple more people from a protected class or something.
[00:12:18] And so they got this huge statement on their booth that makes it sound like it's going to
[00:12:23] reduce bias all over the world by 20% or whatever the case is. And that just came
[00:12:27] from this little tiny set of data. And so you really have to dig in to find out, okay,
[00:12:33] how did you actually calculate that? And what was the outcome metric you used?
[00:12:36] And you don't have to get a PhD in statistics. You just have to think through it
[00:12:41] a little bit and know that whatever that statement is, it's probably 80% bullshit.
[00:12:51] But then you have to wonder too like, okay, so again, going back to the coin flip analogy,
[00:12:55] okay, it reduces bias. What else does it do? How does it help me hire people better? And the thing
[00:13:00] that always was apparent to me in my career is that the hardest data to get is effectiveness data.
[00:13:07] Does it actually work? Like you can calculate whether something's biased if you know the
[00:13:12] demographics of your hiring pool. You can do a little simple calculation. But is it effective?
[00:13:18] Well, you have to have criterion data. You have to have job performance or company
[00:13:22] performance data of some sort. You got to have supervisor ratings that are collected in an objective
[00:13:27] way, not part of the traditional rating process because those ratings are garbage. You've got to
[00:13:33] have you know, if you're working in a call center, maybe you can access metrics like
[00:13:37] average handle times or number of units sold or customer service scores or whatever.
[00:13:43] Those are metrics you can use. But you've got to be able to predict those things
[00:13:48] in new hires. So you've got to be able to do this calculation that shows if you use this tool,
[00:13:54] then you're going to hire people who can perform better on these metrics and it's going to save you
[00:13:58] X amount of dollars. But it's really hard to get that data. It's much easier just to say, hey,
[00:14:04] we've got a tool, trust us it works. We tested it with a bunch of companies,
[00:14:08] pay us a million dollars and here it is. It's going to be great. And a lot of people
[00:14:11] are like, okay, yeah, that sounds good. But then it's in use and it's not being monitored.
[00:14:18] And it's not being tied to outcomes that matter, guaranteed it's not doing much at that point.
[00:14:24] And even if there was evidence that it reduced bias in the beginning or you know, with other
[00:14:31] clients, if you're not monitoring that in your own organization, you know, it's probably not
[00:14:38] doing anything. It may be helping a tiny bit. But these the validity and the effectiveness
[00:14:45] and the bias of these tools degrades over time. And if you're not monitoring them,
[00:14:52] you've got no idea what's going on. So that's, that's the problem.
[00:14:56] When do people bring you in? Are they doing this during the buying cycle or
[00:15:03] to do that due diligence? Or is this after the fact they've already made their decision?
[00:15:07] Or do they bring you in then to monitor, measure and make sure things are going as planned?
[00:15:13] Both, both we can do vendor type of analyses and just look at what is behind the marketing
[00:15:19] materials and really dig in to see whether and how these tools work. I mean, my
[00:15:25] company so Vero AI, we have seven people total right now and six of us are PhDs. So we have a
[00:15:31] lot of scientific expertise that can ask those hard questions and dig beyond the
[00:15:37] billboard statements and everything. But then also the what we're really building is this platform
[00:15:44] that can read in data and constantly and continually evaluate it to see whether and how
[00:15:51] a tool is or isn't working. And that's the crux of what we're building. It's not a
[00:15:57] consulting company per se, although we do more consulting now than software I would say,
[00:16:03] but that balance will change over time as we continue to take the scientific method and
[00:16:10] the statistical stuff that's in our brains and automate it in our software platform.
[00:16:15] That's right. At one point, it'll be AI as a service. And for me, if I'm telling someone
[00:16:22] else what do you do, I'm saying they're the bullshit detector.
[00:16:26] Yeah, that's exactly how I would love for you to position us. Yeah.
[00:16:28] Okay. So I think you should put this on T-shirts: bullshit detector.
[00:16:36] It just seems like what I love about it is it helps practitioners, no matter how much we arm them
[00:16:45] with data, they're not going to be able to ask the questions and be able to find out. But
[00:16:50] the larger clients are going to be able to say, especially if you're doing with another
[00:16:55] IO psychologist, that you're going to be able to actually talk to them and just go,
[00:17:00] okay, here's the deal. Here's what ultimately this is what we're all judged on. This is the
[00:17:06] message that they're saying, let's go, I mean, scientific theory, let's just go see if it's
[00:17:12] true. It could be true, but let's not believe it's true until we can prove it to ourselves.
[00:17:18] And I think that helps everybody, not just IO psychologists that you interact with,
[00:17:23] but I think it helps practitioners because they're overwhelmed. I mean, you know this because
[00:17:27] you've been interacting with them for years. They're overwhelmed with this stuff. They don't buy
[00:17:32] this stuff that often and they buy it. They're listening to all the hype like I've been,
[00:17:37] I've gotten in trouble for saying this, but I'll get some comments. Okay. The HR practitioners,
[00:17:42] TA practitioners, they don't buy software. They're sold software, which gets back to
[00:17:48] the point you were making earlier in the discussion. Like hiring managers aren't hiring managers.
[00:17:52] Exactly, they're not. They're managers that just so happen to hire. So what I love about it is you're
[00:17:59] actually, this is a real problem and a real solution because when someone says AI, they'll
[00:18:04] act like they, yeah, yeah, totally get it 100% algorithm spell that algorithm totally
[00:18:11] chat GPT. That's, you know, it's the hype cycle. Right. But they don't know,
[00:18:17] they don't know if that's true. Right. And I like the fact that someone's actually going to say,
[00:18:22] yes or no. Let me ask you a question here. Is it so
[00:18:30] you're on the front end, you're doing the due diligence for the company. We're looking to
[00:18:33] bring in a specific tool. It's going to cost us three or $4 million annually to run this
[00:18:39] platform. We bring you in. Is this to vet this? Is this always a people process or at some
[00:18:45] point does AI take over and you're able to for thousands of softwares that are in the marketplace
[00:18:53] pipe into it and anonymously or however, you know, securely pull their data to run the BS test on
[00:19:00] them and truly report that back without a human being involved or at least in the vetting process.
[00:19:06] Yeah. Great question. And certainly that is where we would like to get. So, you know, going
[00:19:11] back to when William called me a psychopath earlier, I mean, not directly but implied. Well,
[00:19:16] implied. Let's just agree you implied it. You know, I could say, hey, we built this platform
[00:19:24] and that's what it does. It automates everything 100%. That's not the case. And that would
[00:19:29] definitely be a lie. So, you know, for me, that's what we're building. That's what we have built
[00:19:34] is a platform that does that to some extent, but it also requires a human in the loop,
[00:19:39] a PhD in this case, in the loop and human hand holding and stuff. So there's that's why I say
[00:19:44] like there's a dose of consulting with some automation and over time, the ratio is going to
[00:19:49] change, you know, as we get better at it. But it is, it's not fully automated and
[00:19:54] it shouldn't be and it can't be at this point in, you know, technological development.
[00:19:59] I don't think that would be dishonest. So we're not doing that. But the core technical
[00:20:04] innovation that we created, and this is largely at the genius of our chief data scientist,
[00:20:11] Rachel King. And I didn't even realize we could do this, but she enlightened me to this and it
[00:20:18] continues to blow my mind still to this day. Gen AI, right? Gen AI comes out November 2022.
[00:20:26] And you know, everybody goes crazy, obviously, super exciting and interesting and,
[00:20:30] you know, problematic in a lot of ways, but also really exciting. So what we can do with Gen AI and
[00:20:37] the way we're using it is it's baked into our platform in a fundamental way. And essentially
[00:20:41] what it allows us to do is study not just numerical data like a, you know, like a business
[00:20:47] intelligence dashboard would do, right? You read in some data and you have some charts
[00:20:51] that you can look at great, you know, that's, you know, that's a business intelligence
[00:20:54] solution. We can read in the numerical data, but we can also read in unstructured information
[00:21:01] like text and essentially study it at scale. That's crazy.
[00:21:09] Think about, you know, if you just meditate on the possibilities of what you can do with that
[00:21:13] idea and come up with all kinds of things that run the gamut far outside of HR tech,
[00:21:19] other aspects of the world, we can study essentially all the data in the world now at scale
[00:21:26] and start to make sense of it in a way that we never could before Gen AI. And it's not perfect.
[00:21:31] And that's why we have the human in the loop and we can't read everything in, but we can
[00:21:36] certainly make a lot of progress doing that. And the way that we're about to demonstrate this
[00:21:41] to the market is we're going to release a report on the large language models, the top
[00:21:46] 10 large language models, where we rate them on our model, which is called Violet,
[00:21:53] and I can tell you about that, but it's basically like a holistic model to think about everything
[00:21:58] that's important in evaluating an AI or an algorithm. And so we apply that to the large
[00:22:04] language models and we read in all of the documentation hundreds, thousands of pages,
[00:22:09] hundreds of documents for all these models. And we scored it, you know, very, very quickly
[00:22:15] using our engine, which we call Iris, that's Gen AI based. So to do that manually with PhDs would have
[00:22:23] taken thousands of hours, and instead it took, you know, still a chunk of time, but probably 100 hours or something,
[00:22:29] you know, so vastly shortened the time scale to objectively evaluate these huge sets of information.
[00:22:37] Are the LLMs the public ones? Are these ones that are kind of industry specific or
[00:22:44] the foundation models? So like, you know, Google's Gemini and OpenAI's, you know,
[00:22:50] ChatGPT and that type of thing. So yeah. If you like swiping, then head over to
[00:22:56] Substack and search up WRKdefined and subscribe to the weekly newsletter.
[00:23:03] Here's the question, because you'd actually be one of the guests that might actually be able to
[00:23:08] answer this. I've been reading about the large language models that we have right now,
[00:23:13] most of them are based in morality. Like there's a moral explicit or implicit,
[00:23:21] but there's a moral to what's going on. Like I at one point when ChatGPT first came out,
[00:23:29] I went to it and I was playing around with it for about five seconds. I'm like,
[00:23:35] write the obituary of William Tincup in 500 words. And it's like with great sadness,
[00:23:41] you know, just does this bid. And then I'm like, Oh, that's cool. That's cool. Okay. Now write the
[00:23:46] same thing, but in Richard Pryor's voice and it wouldn't let me. It actually came back and said,
[00:23:54] we can't do that. With death and with humor, the two can't coexist. We can't do that.
[00:24:03] And so I kept edging around it, like Sam Kinison, Bill Cosby,
[00:24:06] whatever. So I kept trying different things and it still wouldn't do it. It wouldn't, at that time,
[00:24:13] it wouldn't put humor with death. And I'm like, how are you? Well, of course,
[00:24:19] right. Right. Makes fun of me. Like that's the first thing you did with ChatGPT.
[00:24:23] Dark. It is obituary. Really dark. But it's like, but again, I don't know if that's true or
[00:24:29] if that's just again, if that's just hype or people talking about it. Well, you see it
[00:24:34] in the Google Gemini situation where it released pictures of black Nazis and things.
[00:24:42] There's this thing that's happening at a lot of these companies where they try to interject
[00:24:48] a level of oversight into what these models are producing. And it's an experiment. They're
[00:24:55] trying to figure out, what should we be doing here? How should we fix this problem of bias?
[00:25:00] Because these models just read the internet and oh, guess what? There's a little bit of bias on the
[00:25:05] internet. So how do we control that? How do we harness that? How do we get rid of that? So I think
[00:25:11] there's probably some hamfisted sort of attempts going on that are not the best at this point.
[00:25:18] And I don't know that there's a perfect solution for it. I think they may never get there.
[00:25:22] Well, we liken it to a lot of guests. We liken to where we are right now,
[00:25:27] to back to your 1996 story of the beginning of the internet. In the beginning of the internet,
[00:25:32] there's a bunch of people that knew about it before the interfaces kind of came out.
[00:25:36] Once the interfaces came out, it was the Wild West. And so AI and it's going to happen a lot
[00:25:42] faster. But it's going to be, you remember that period where people were like, oh, and you can
[00:25:46] do this with the internet, and you do this with the World Wide Web, and you do this. They're
[00:25:50] doing the same thing with AI. And it's like, well, some of that will hit. Well, some of that
[00:25:55] will never happen. Right. But you know, one of the things that I'm really focused on and we are
[00:26:00] really focused on is outcomes. Right. There are companies that are approaching AI,
[00:26:07] auditing an algorithmic auditing, and trying to peer inside the algorithm and look at the code
[00:26:13] and fix problems with the code and stuff like that. Right. We're not doing that.
[00:26:17] We're looking at the outcomes that come out of these tools. And at the end of the day,
[00:26:22] that's what I feel like is realistic to focus on. Because it's a, you know, the black box of how these
[00:26:28] things work. We're never going to completely make the perfect LLM that says exactly the perfect
[00:26:36] thing at all times. But what we can do is we can evaluate how that tool is being used and
[00:26:40] the impact it's having on individuals and organizations. We can look at that data. We
[00:26:45] can collect that data. We can find that data. And, you know, we can see if you're using
[00:26:49] an LLM in some sort of hiring tool, well, it might be beyond any individual to really understand
[00:26:56] that LLM in depth. I mean, I would submit to you that even the engineers behind it don't understand
[00:27:01] how it works. In pre-hiring or in hiring for sure. But what we can look at is the outcomes.
[00:27:06] Who did you hire? How well did they perform? And that stuff. So let's harness what
[00:27:11] we can access and see. And if we find out that at the end of the day, it's not biased
[00:27:16] against protected classes or other groups of people and it works and it's effective. And,
[00:27:21] you know, there's messaging in place that shows the candidates, all the visibility behind the tool
[00:27:27] so they understand it's there. And, you know, there's a lot of pieces that we look at.
[00:27:31] But if we do all that stuff then, ah, you know. Right. Is there room for a recommendation
[00:27:39] engine after you determine the output to the client? Is that something that you're
[00:27:44] doing now? Here's where it's at. And here's how it could be fixed. Is that what you're thinking?
[00:27:51] Yeah, well, so let's take it on the enterprise side. So companies buying software, they have
[00:27:58] the software, you're in there and you're saying, okay, here's your output. It's not good.
[00:28:03] Here's our recommendation. The output of the program, of the solution is good.
[00:28:09] You're not leveraging it properly. I will follow up to that once he's done.
[00:28:13] Well, yeah. I mean, I think that's not something we've, you know, created at scale or anything like
[00:28:19] that. But certainly, like the stuff that we, the information that we provide needs to be actionable.
[00:28:25] Otherwise, you know, what's the point? Right. So there's got to be clear actions you can take
[00:28:29] to improve things from this. And so in our system, once you see a score, then you can drill
[00:28:33] down and you can see what's causing the score to be lower or higher. And you can drill down and
[00:28:38] assess specific, look at specific factors that you can fix.
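One common way practitioners turn Eric's "look at the outcomes, who did you hire" idea into a number is the four-fifths (80%) rule on selection rates. This is a hypothetical sketch of that check with made-up data, not Vero AI's actual scoring method:

```python
def selection_rates(outcomes):
    """Selection rate per group from {group: (hired, applicants)}."""
    return {group: hired / applicants
            for group, (hired, applicants) in outcomes.items()}

def adverse_impact_ratios(outcomes):
    """Each group's selection rate divided by the highest group's rate.

    Under the common four-fifths rule of thumb, a ratio below 0.8
    is treated as a flag for possible adverse impact.
    """
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {group: rate / top for group, rate in rates.items()}

# Hypothetical hiring outcomes: (hired, applicants) per group
outcomes = {"group_a": (50, 100), "group_b": (30, 100)}
ratios = adverse_impact_ratios(outcomes)
flags = [g for g, r in ratios.items() if r < 0.8]
print(ratios)  # group_b sits at 0.6, below the 0.8 threshold
print(flags)
```

The four-fifths rule is a screening heuristic, not proof of bias either way; real audits pair it with significance testing on larger samples.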
[00:28:43] And Ryan, in that question, it was assumed that they had already purchased the software.
[00:28:49] But in my mind, for some reason, I had y'all in before they buy the software so they could make
[00:28:54] it better. It's on both sides. Yeah. Could it be on both sides? Or where do you feel like
[00:29:00] Vero AI plays the best? Well, so what we have is we have an engine where you can,
[00:29:05] the client can essentially upload information and documents directly to it. So you could say,
[00:29:10] hey, we're trying to evaluate, you know, five different vendors. And we would say, okay, cool,
[00:29:16] here's an upload link. Just upload all the documents they sent you, ask them for technical
[00:29:21] reports, ask them for terms and conditions, ask them for anything that you want. Hundreds,
[00:29:25] thousands of pages doesn't matter, upload it and then hit a button and then the system can
[00:29:30] process it and score it on Violet or anything else. Like for example, another use case would be
[00:29:37] like ISO certifications, right? You know, any sort of certification routine, which is largely done
[00:29:44] manually and can be very time consuming and involve tons of people and costs and everything,
[00:29:49] upload all the documents and information and we can't automate it 100%, but we can automate
[00:29:54] it a lot and make it much, much faster. So any sort of use case like that. So, you know,
[00:30:01] before the software is purchased, we can analyze the documents; after, we can analyze
[00:30:07] the documents and the data outputs. Yeah. Yeah. Where do you believe that your first customers,
[00:30:14] maybe your first 50 or 100 customers, where do you think they're going to come from?
[00:30:18] What do you think they're going to utilize you for? We've had a lot of
[00:30:22] interest from procurement types of organizations, for sure. But then there's also
[00:30:28] governance risk compliance, GRC stuff. And that's where you get into the ISO stuff and the SOC
[00:30:33] certifications and all these things. There's a million of these different certification routines.
[00:30:38] And so, you know, this is a great, great use of generative AI to be able to process
[00:30:44] and understand and really attach numerical quantitative ratings to all the factors
[00:30:51] we pull out of these huge sets of information. So tons of stuff can be automated and sped up there.
[00:30:57] Again, we still have that human in the loop because you can't leave it on its own.
[00:31:01] We don't yet have a viable path to just clicking a button and instantly having it available. I think
[00:31:08] there's a little bit of a lag, which is appropriate, but it's still vastly sped up from,
[00:31:13] you know, what would happen before. Yeah. So how do you, you, okay. So,
[00:31:18] so a couple of questions here. One is in the world of instant gratification, and I'll use
[00:31:25] my girls that, you know, they take a test, they immediately get their score because it's done
[00:31:31] on Canvas or whatever. It immediately scores them, tells them what they got wrong. And all of this
[00:31:36] other stuff, where we had to wait a week to get our test grades back, right? Which is fantastic
[00:31:41] for me. They come home. How'd you do on your test? I don't know. Oh, I know how you did.
[00:31:46] It's right here. Like it scores you that quick. How do you, what's that conversation like with
[00:31:50] a client that says, okay, we're doing this, we need it next week. Right? Obviously, there's a path
[00:31:57] that you guys have there. And then secondly, in that process, what questions should they be asking
[00:32:04] you or what questions should they have in their mind that prompt them to reach out to a company
[00:32:11] like yours? Yeah. So, because the core technology can be used in so many ways, like one of our challenges is
[00:32:18] like narrowing in, rifle-shotting into specific use cases, because geez, there's a million things you
[00:32:24] can do with this type of thing. So it starts with a given client right now, it's going to
[00:32:29] start with a discussion about what they really need to know and find out and what their
[00:32:34] questions really are. And then we can say, okay, here's what we need from you. We need this
[00:32:39] documentation, this information uploaded here. And we can talk about the Violet model itself
[00:32:46] and whether that is the particular set of metrics you need and are interested in or
[00:32:50] if it's something else like an ISO, you know, certification or something like that,
[00:32:54] or maybe you're just concerned about a few of the elements from the Violet model or whatever.
[00:32:58] So we would have those discussions. And so once we have that type of clarity, then I think,
[00:33:04] you know, the clients in a position to provide us with the information we need.
[00:33:07] And essentially, we run the model and review it to make sure it's right. And
[00:33:11] they can have the output in, you know, very, very short order at that point.
[00:33:16] So what's the actual delay? How much time does it really take, that you have to review things?
[00:33:22] I think it depends on the use case. It can be from almost nothing to, you know,
[00:33:27] it could take a few days or something if it's really complicated information. But
[00:33:31] that's that's the type of thing that will evolve. And certainly we want to shorten
[00:33:35] that window down. But if it takes us a couple of days, then you can be sure that in, you know,
[00:33:40] without that, it would take probably half a year or something. Yeah.
[00:33:44] If ever. Yeah.
[00:33:47] Two things. One is on the output side. So let's say you're doing a lot of work on the output
[00:33:53] side with a lot of vendors understanding kind of the separation of the marketing message from
[00:33:59] reality and what you see in the data. It seems to me that at one point,
[00:34:03] you're going to be sitting on a rating system of people's claims, again, the bullshit detector. And
[00:34:13] so if I have that right, right, it seems like you could create awards
[00:34:18] based on who's actually... the outputs, again, getting to the outputs. Yeah. And if the outputs
[00:34:25] are truer to what their marketing message is, it seems like you'd be able to be in a position
[00:34:31] and then say, yeah, I'm not going to say names or companies or anything. But yes, this one,
[00:34:36] this one, their marketing message or sales message and the outputs that we see with our customers.
[00:34:43] Yeah. Pretty spot on pretty close. Yeah.
[00:34:47] I would love to be able to do that. I mean, just to provide sort of an objective measure of
[00:34:51] truthfulness, you know, I think that would be great. And, you know, I think of like the
[00:34:56] Violet model for us: any system we score on it now,
[00:35:00] like the large language models that we did score, they're not perfect.
[00:35:05] They're not great on all the factors. But it's like when the, you know, when the insurance
[00:35:09] institute comes up with a new crash test and all the cars fail on it, because it came up with something
[00:35:15] new. It takes a year or two for the automakers to catch up. So I want to put out this idea of
[00:35:20] Violet as a human-centered model to think about AI. And if you score well on it, then that means
[00:35:28] you're deploying AI in a way that's not only good for a company, but also good for an individual.
[00:35:32] And so it's a target and then hopefully it can become a metric that companies try to hit,
[00:35:36] that vendors try to hit and make their tools, you know, better on it.
[00:35:41] I love that. Yeah. I like, and I love the award.
[00:35:46] Yeah. Because there's, you know, as you're saying that I'm thinking,
[00:35:50] there's a ton of award programs out there. It's all about creativeness and how innovative it was
[00:35:56] and how funny it was. Very subjective. Very subjective. I mean, you got all these
[00:36:02] fact-checker sites and all this crap with the election. But nothing on the marketing side. This would
[00:36:07] actually true up their message versus reality. Yeah. And I think what's beautiful is that
[00:36:12] you said independent, objective, which is exactly what I was thinking as well. It's like,
[00:36:17] here's a firm that actually, we wouldn't have a dog in this race or whatever. Like,
[00:36:22] we don't care. What we care about is understanding the outputs as it relates to
[00:36:27] what you're saying the outputs should be. And I think that's going to help
[00:36:32] in doing that, that helps practitioners sleep at night. Understanding that what they bought
[00:36:38] actually does what they said it was going to do.
[00:36:42] The second question I had is just, I-O psychologists.
[00:36:46] I get it. The podcast just isn't enough. That's all right. Head over to your favorite social app,
[00:36:52] search up WRK Defined and connect with us. Have
[00:36:58] you talked to a lot of your friends and shown them kind of the things that you can do? Like, what
[00:37:03] is their take on this when they walk away? Yeah, I mean, no, that's a tin cup thing. You're good.
[00:37:11] I did but they were so drunk that I don't even know if they really were listening. But
[00:37:18] they love it. They're an easy target because they're scientists.
[00:37:22] You know, your trained scientists, like, they're the easy ones. I don't even need to explain
[00:37:26] it. You know, the challenge comes in when you're talking to
[00:37:31] people who don't have that scientific background, I think. And you know, one thing that's
[00:37:35] I've noticed that's interesting in the world today is you see a lot of this where somebody
[00:37:40] like, let's say Google Gemini is a fantastic and amazing LLM, the best one that's out there.
[00:37:46] But, right, they did this Black Nazi thing and now everybody thinks it sucks.
[00:37:50] Well, that's an anecdote. That's an n of one. So scientists, you know, don't think in terms
[00:37:56] of anecdotes. You don't care about the anecdote around here. Yeah,
[00:38:00] you care about overall is it trending in the right direction when you look at the aggregate data and
[00:38:04] you look at correlations and significance levels and things. That's what we're trained to care about.
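The contrast Eric draws between an anecdote and aggregate evidence is exactly what a correlation and its significance statistic capture. A minimal pure-Python sketch with made-up data, not anything from Vero AI:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / sqrt(sxx * syy)

def t_statistic(r, n):
    """t value for testing r against zero, with n - 2 degrees of freedom."""
    return r * sqrt((n - 2) / (1 - r * r))

# Made-up aggregate data: tool score vs. later job performance for 10 hires.
scores = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
performance = [2, 1, 4, 3, 6, 5, 8, 7, 10, 9]

r = pearson_r(scores, performance)
t = t_statistic(r, len(scores))
print(round(r, 3), round(t, 2))  # prints 0.939 7.75
```

Every adjacent pair here is reversed, so a cherry-picked anecdote could "show" the tool fails; the aggregate r and t still show a strong, unlikely-by-chance trend.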
[00:38:08] So but the media works almost exclusively on anecdotes. Because they're just they're trying
[00:38:15] to get clicks. They're trying to sell advertising. So to come out and do a story on yep,
[00:38:19] it's working. Nobody clicks on that story. Yeah, yeah, right. It's not fun. That's just
[00:38:25] that's just the world we live in. What's success for 2024? What is the goal? We're
[00:38:34] damn near at the end of Q1. When you look back in January of '25, what was success for you?
[00:38:41] Well, you know, I think that this takes time. This takes time. You know,
[00:38:49] it'd be so much easier if we just came out with this Violet model and started doing consulting
[00:38:53] around it. Easy, easy, right? You know, you can hit the ground tomorrow and start doing that.
[00:38:59] But architecting building refining a scaled engine that essentially can make sense of vast
[00:39:07] amounts of information using cutting edge technology like you know, it takes some time.
[00:39:11] So and there's also the barrier of just explaining what the heck we're talking about
[00:39:15] to people because it is new. You know, you could think of it as sort of like a next gen
[00:39:19] business intelligence dashboard as one way to think about it. I also talk a lot about
[00:39:25] Violet as a scoreboard, not a dashboard. So like if you think about going to a baseball
[00:39:32] game and there's no scoreboard and you're just trying to figure out who's winning based on how
[00:39:35] fast the people look like they're running. That's a business dashboard today. Right?
[00:39:41] So we're trying to spell things out in a way that's not typically done. So, you know,
[00:39:46] there's this sort of barrier. We have to do a lot of education around what it is that we're
[00:39:51] actually talking about. So I mean, basically, I think the rest of this year for us is focused on,
[00:39:56] you know, new client acquisition and working with them on their use cases. And it's going to be
[00:40:01] a little more consulting than it will ultimately be down the road. But that's great.
[00:40:06] That's fine. That's how we learn and improve and get better. And then probably, you know,
[00:40:11] some more time is going to be devoted to funding as well and just looking
[00:40:15] for some outside sources and partners and things like that. Where'd you do your PhD?
[00:40:19] The University of Akron. Yeah, it's funny because I-O psych, the
[00:40:26] industrial psychology, I-O psychologist, is one of the professions where schools, like, it's a
[00:40:33] thing. Yeah, like the teachers that you had there, all of that stuff, that's a thing. Yeah.
[00:40:38] One of my favorite moments at SIOP was I was with Dr. Hogan of Hogan Assessments, him and his wife.
[00:40:46] And I'm a marketer, or at least a recovering marketer, I should say. So I'm sitting there and we're
[00:40:52] drinking and having a good time. And I said, Listen, can I just be honest? He goes, Well,
[00:40:58] your personality tells me you're just going to be honest anyhow. But thanks for the
[00:41:03] heads up. I said, you're talking over people's heads. You're just, fucking, you know, I'm not
[00:41:09] saying you need to dumb shit down. But you're not using their words. You're using the
[00:41:16] words of I-O psychologists. So if that's your target, great. Keep doing what you're doing.
[00:41:21] But if you're trying to get more of a mass appeal, you just kind of change your words.
[00:41:26] And it was a great conversation for him to hear. And then actually, it took a little while,
[00:41:32] but for him to hear that, okay, for the I-O psychologists that we interact with,
[00:41:37] there is a different language. And we can get there faster for some other reasons, because
[00:41:42] we have a shared vernacular and all that other stuff. But with this other group of people,
[00:41:46] these HR people, these recruiting people, they don't use those words.
[00:41:51] They don't use those words. So it's almost like with this new business, it being new,
[00:41:59] and you have to explain what it does. And then also the words that you choose to then explain
[00:42:04] that is going to be dependent on the audience that you see. Yeah, absolutely. Absolutely. And
[00:42:10] it's been the story of my entire career so far. So it's like nothing new, but it is always
[00:42:14] challenging. I mean, that's why Sagan said, you know, skepticism doesn't sell.
[00:42:18] Science doesn't sell. People want an easy answer: hey, it's great, trust us, here it is.
[00:42:24] And that's it. You don't have to think about it. So there's a
[00:42:27] AI detector, right? Seventh-grade literacy, I mean. And nobody even knows what AI is, you
[00:42:32] know, it's no one I mean, you know, everybody's got a different idea of what it means. I was,
[00:42:39] I have a book that I wrote a couple years ago and it has AI in the subtitle and I was
[00:42:43] mailing it at UPS and the person behind the counter said, Oh, I'm scared of that stuff.
[00:42:47] And I said, what stuff? Robots? It doesn't say anything about robots on here. You know, like, but that's,
[00:42:54] you know, everybody thinks of AI. Everybody thinks AI is a scary robot from the future,
[00:42:57] sent back in time to kill you. That's basically, you know, because that's
[00:43:03] what sells. Yes, I get it, fear sells. It sells in the media. Your pricing model right now,
[00:43:10] obviously being consulting, and then obviously you're gonna be doing probably over time more
[00:43:16] and more tech in the back, and getting to probably maybe even a SaaS model. Yeah, or some sort of SaaS model.
[00:43:24] What, you know, Ryan's already asked you what questions practitioners should ask
[00:43:30] of you because it is so new. So I won't ask that question, but I am curious as to
[00:43:37] kind of you've been in this space for a long time. You've got a lot of folks that you've
[00:43:41] already interacted with historically. Are you going to go back out? This was
[00:43:48] when PeopleSoft was acquired, two years after that Dave went out and basically sold to 200 of
[00:43:55] the clients that he had with PeopleSoft. The first, I want to say the first 200 clients of
[00:44:02] Workday were PeopleSoft clients. So he just went back to the people he already knew and said,
[00:44:06] yeah, it's the same thing. Or more eloquently: it's the same thing, but it's in the cloud. Yeah.
[00:44:14] Is the plan to kind of go back out to the folks that you already know, or is
[00:44:20] it to build a new audience? A little bit of both. I mean, my co-founders and I,
[00:44:25] we certainly have a lot of contacts in the HR tech space. Certainly in SIOP and the I-O world,
[00:44:30] of course, but you know, it's really just our initial market, I think, because we know that space.
[00:44:37] A real goal is to get our feet on the ground in HR tech but then expand out of HR tech,
[00:44:44] healthcare, fintech. I mean, every space is ripe for this type of capability. So
[00:44:52] we want to be able to harness AI or complicated algorithms whatever they may be anywhere.
[00:44:58] But yeah so and it's obviously it's different. So it's not like anything that people are familiar
[00:45:04] with. So we have to explain even in our own space you know how it would work and what it
[00:45:09] would look like and everything. So I'll give you one last thing and I'll let you go.
[00:45:15] When people think about differentiation, the problem that they
[00:45:23] usually run into is they're so different. Like what you've built is different but for it to sink
[00:45:31] into someone's mind, you may have to make it like something that they know, the hook. So if you spend
[00:45:37] your time saying we're different, we're different, we're different, we're different all they hear
[00:45:41] is yeah I don't have budget for that or that's too experimental or whatever. But if you say
[00:45:48] well, think of Consumer Reports, right? What was Consumer Reports? Yeah, these little circles and
[00:45:55] you know this gave you kind of some insight into they did some testing and they gave you
[00:46:00] kind of a guidebook on, you know, what to avoid and what you should do. Yeah, now just fast forward
[00:46:08] 30, 40 years: AI meets Consumer Reports, and we give you a guide on what's working and what's
[00:46:16] not working. So it's like something that's where most people get differentiation wrong is they're
[00:46:24] they're focused on hey we're different. It's like yeah that's actually not helping your case. Right
[00:46:29] the more you talk about different, the more they hear FUD: fear, uncertainty, and doubt.
[00:46:36] So when you anchor this, Consumer Reports might not be the thing, but anchor it in something
[00:46:43] that they know. Yeah, no, that's great, that's super good feedback. I think, yeah, that's helpful because
[00:46:50] we're always in this continual process of trying to figure out how we can explain this in a way
[00:46:54] that people understand in as few words as possible. A hundred percent. So yeah, that's great, that's very good.
[00:47:00] Thank you so much for coming on. This has been fantastic to learn about. If you're still listening or
[00:47:06] watching, subscribe to us. If you see us out there, say hello. Eric, thank you again. We will see you
[00:47:12] next time.


