Powered by the WRKdefined Podcast Network.
[00:00:00] Hello and welcome to the latest episode of The Work.
[00:00:17] The Work is a podcast that I do with my longtime friend and colleague, John Sumser. And today we are speaking with Anoop Gupta. Anoop is
[00:00:27] the co-founder and CEO of SeekOut. And SeekOut, of course, has a very strong foothold in the AI
[00:00:36] space. So John is especially interested in chatting with Anoop today. Anoop, welcome to The Work. Please tell our listeners a bit about
[00:00:45] your background and what has brought you here today. Thank you, Gene. It's wonderful to be here
[00:00:53] and along with John. So my background, I'm a geek and an entrepreneur. I'll quickly go to my history
[00:01:01] and then more about SeekOut. So I came to this country in 1980. I got my PhD in
[00:01:07] computer science at Carnegie Mellon. In fact, you know, my PhD advisor was Allen Newell, who's one of
[00:01:16] the founders of artificial intelligence. It used to be done in a, you know, different way. Geoff Hinton, who later went to Google and was kind of
[00:01:30] the father of a lot of generative AI, was on the faculty at Carnegie Mellon, and I got to know him
[00:01:35] really well. In 1987, I went to Stanford University as a professor.
[00:01:45] I did a lot of work there,
[00:01:46] though I focused on high-performance computing.
[00:01:49] So if you think of NVIDIA and the underlying infrastructure,
[00:01:53] I spent a lot of time on those kinds of projects.
[00:01:59] Then I did my first startup in 95,
[00:02:02] which was in the streaming media business,
[00:02:04] which was the very early days of the internet, and which was acquired by Microsoft in '97.
[00:02:12] And then I had a wonderful and diverse journey at Microsoft: from natural user interfaces, to directly reporting to Bill Gates as his TA, to running the Skype and Exchange businesses, to global technology policy,
[00:02:28] which is very relevant to what is happening
[00:02:31] with AI today, because I always felt,
[00:02:37] and it was very important to Microsoft too,
[00:02:39] not just as a way to drive your own business,
[00:02:44] that we get better decisions for the world when
[00:02:48] our politicians, the decision makers, have a deeper, better understanding of what
[00:02:54] is happening in technology.
[00:02:56] So, got a chance to do that, a variety of other things.
[00:02:59] And then in late 2015, we founded SeekOut. And SeekOut is in the talent business, both on the recruitment
[00:03:10] side and on how talent, internal talent, how do you retain, grow, develop, redeploy, and
[00:03:22] how do people find fulfillment, right? Eventually, it is about,
[00:03:26] our saying is that companies and people,
[00:03:30] great companies and great people, grow together.
[00:03:32] Great companies cannot exist without great people.
[00:03:36] People are the asset and people can't really
[00:03:40] thrive if the companies are going down.
[00:03:43] So it is mutually tied together that we have great
[00:03:48] companies and great people growing together. So that's our mission and we support that.
[00:03:54] And you know, Gen AI, we believe, will have a transformative
[00:03:58] effect on this, and I can get into that later. But John, sorry for taking too long in the introduction,
[00:04:06] but excited to be with you here today. Oh, and I know John has lots of questions about the AI
[00:04:13] discussion. I want to put a pin in your policy reference and come back to that.
[00:04:19] John, I'm going to turn the floor over to you since this is clearly your sweet spot.
[00:04:29] Oh, let's talk about policy first, because Anoop and I are liable to run out the clock. So let's talk about policy.
[00:04:35] Yeah, I'm fascinated by this. You know, I work obviously in the public
[00:04:44] relations world
[00:04:45] and that enables me to touch a lot of organizations,
[00:04:49] including lobbyists, and it's very interesting
[00:04:52] what goes on in the policy space
[00:04:54] and what is taking place without most of our knowledge.
[00:04:58] So Anup, when you look at creating policy
[00:05:03] to, I'll use the word govern, but let's say guide something
[00:05:08] as transformative as AI.
[00:05:10] Who should be part of that policy discussion?
[00:05:14] Who's on that committee per se?
[00:05:17] I'd say it is really important, because the impact is so large. I think it is important to have the people creating the technology
[00:05:29] be there. Both the companies and the individuals who are really deep. And I think it is really
[00:05:36] important to have, you know, of course, the government officials, but also the people who are going to be impacted, the social scientists,
[00:05:47] you know, that are there, and representatives of society, and of different segments of society, and international and global.
[00:05:57] Right, so there are so many stakeholders in this case at this point in time. At the same time, I think we need to be thoughtful about
[00:06:12] who you have in the kitchen, because if you have a very wide kitchen, often nothing can happen.
[00:06:18] So it's not that you don't inform, I believe, but how you inform, who you inform and what is the discussion
[00:06:28] you have is really important because it is too easy to get too hyped, it's too easy
[00:06:37] to get too scared.
[00:06:39] It is difficult, oftentimes in these cases, to understand what we don't understand,
[00:06:46] because things are changing so rapidly.
[00:06:48] So I think it has to be a broad segment
[00:06:51] for something like what is happening
[00:06:53] with generative AI.
[00:06:55] It's a fascinating topic.
[00:06:57] John, jump in please.
[00:07:00] Anup has said the magic words generative AI.
[00:07:03] Well, so I want to dig below generative AI, as you might imagine.
[00:07:14] So when I look out at the HR technology vendor space, I can't find anybody who's not claiming
[00:07:24] to have a generative AI solution on the table or in process.
[00:07:29] There's a lot of twisted wreckage.
[00:07:49] Not that generative AI won't be transformative,
[00:07:52] but the hype
[00:07:58] overlooks the error rates.
[00:08:01] And one of the things that we don't know how to do yet,
[00:08:04] as far as I can tell
[00:08:06] is measure, understand, or account for the error rates that are implicit in the use of
[00:08:13] generative AI. And while you can exert some control over the input, and you can try to solve quality by having good control over the input,
[00:08:28] most people aren't really actually looking at that. And so you end up with an output
[00:08:32] that's variable, and because this is human resources technology, it's variable in ways that could damage my mother or your wife or my kid. And
[00:08:51] if you brush that off as a statistical problem, you miss the humanness in HR technology. Right?
[00:08:59] And so we've got this big bluster with zero control over error rates and zero capacity to measure
[00:09:07] or articulate the error rates. How do you solve that?
[00:09:12] I think it's a deep question. So let's parse it. So what are the things that I would say? So when you have a human assistant or person,
[00:09:30] they are not error-free either. Let me stop you right there. Let me stop you right there.
[00:09:36] That's the incredible shit that technologists always say. And this is a machine. This is not a person
[00:09:43] that we're talking about. This is a governable factory machine
[00:09:47] and to compare it to a person
[00:09:49] and hold it to the standards of a person is bogus.
[00:09:53] Okay.
[00:09:54] Okay.
[00:09:54] Deep breaths everyone, deep breaths.
[00:09:57] Yes, deep breaths.
[00:09:58] I know it is wonderful.
[00:10:00] Because what I'm saying is the first basic principle
[00:10:03] from our side: we say human-driven AI assistant. So where I was going with the assistant was,
[00:10:12] errors happen in lots of places in the process. It is really important that a human be there
[00:10:19] eventually who is feeling accountable. I will go on a total tangent for a second, John.
[00:10:26] Oh, please do, please do.
[00:10:28] To give you an analogy. So here is the challenge. It's not that human drivers don't have accidents.
[00:10:33] They do kill people while driving, you know, drunk driving. Lots of things happen.
[00:10:38] We live with it in some sense, and we have made rules around it. When the Tesla car, autopilot makes a bad decision
[00:10:51] and somebody's hurt, you know,
[00:10:53] we are suddenly very, very concerned and rightfully so.
[00:10:58] I think there is a difference in the outlook
[00:11:02] and in why we react the way we do.
[00:11:03] And the reason is we have laws written about
[00:11:07] when humans do bad things.
[00:11:12] They are drunk and they drive and they hurt somebody.
[00:11:15] When a Tesla car goes and hurts somebody
[00:11:20] because there was something wrong in the software and design,
[00:11:24] the question becomes,
[00:11:25] who has accountability? Are you going to put Elon Musk in jail? Are you going to take Tesla and charge
[00:11:34] them a billion dollars because they were wrongful? Our social systems, our
[00:11:46] legal systems, are not built for accountability when machines make an error.
[00:11:53] So I want to separate out that side, and that's why, if you put the human absolutely in the loop,
[00:12:02] you have accountability. If your finances are wrong for a corporation, you don't say I'm going to blame Excel. You say you are the human who is accountable, you know, if your formulas were wrong. So I think there is
[00:12:08] an element about where we need to keep the human in. And not only for the accountability reason;
[00:12:14] actually there are lots of good reasons to go and do that. That is one part of it.
[00:12:30] The second thing I would say is, generative AI can help in the following. So when you're trying to understand a resume, there are some phrases you have.
[00:12:36] You know, digital marketing, social marketing, you know, sales, enterprise sales.
[00:12:48] When a recruiter is looking at it, and a hiring manager is looking at it,
[00:12:51] they don't look at those phrases.
[00:12:53] They look at the English language surrounding it
[00:12:56] to say: this is the experience I had, this is the team we built.
[00:13:01] We designed this, it's used by a million people.
[00:13:04] We look at the English text around it
[00:13:06] right and we match based on that English text
[00:13:14] Today,
[00:13:17] machines can't do that.
[00:13:19] In the future, I believe machines will be able to do a much better match based on that English.
[00:13:31] To machines, code is the same as the English language or French or whatever else; code is just another language. So we can become better at doing everything: suggesting a mentor, suggesting a learning course, suggesting a job or a gig,
[00:13:54] because we can do the match in a richer way. We understand people better, we understand job descriptions better, we understand everything better, and we can do matching
[00:14:05] in those rich ways. We can understand teams better. What does a team do? So I think fundamentally,
[00:14:12] the fundamental change that's going to happen is we will have a better understanding of this wonderful, complex entity that a human being is, okay, and we will be able to make better
[00:14:29] suggestions, better recommendations, okay, for things which must be viewed by a human. You say,
[00:14:39] you know, evidence, what is it? Show it to me. You know, I'm going to look at it myself. But a person often in many
[00:14:46] cases won't be able to read the code, won't be able to understand it, won't be able to do a lot
[00:14:50] of things. And we can provide evidence and we can say, here are the artifacts. And then human
[00:14:57] may want to look at the artifacts to make sure better things happen.
[00:15:01] So those are both powerful arguments.
[00:15:05] Let me walk you through the match argument,
[00:15:08] because the accountability argument, the case
[00:15:14] you make, doesn't really account for human nature.
[00:15:18] And in the matching case, the example that I love
[00:15:22] is what distinguishes one nurse from another nurse at the
[00:15:27] fundamental level is whether or not they are good at giving shots. And you can look through all of the
[00:15:35] nursing resumes you can find and you won't find a single nursing resume that says good at giving shots. You won't find it.
[00:15:46] And so what most sorting and sifting processes do,
[00:15:52] what they attempt to do, is take the huge volume of stuff
[00:15:56] and winnow it down.
[00:15:58] But if you can't tell that what makes a good nurse
[00:16:01] or a bad nurse in a particular setting
[00:16:03] is their capacity to give shots, which you could glean from their history, they get sifted out because the words on the page
[00:16:15] don't indicate the things that are actually being looked for in the job. And so the biggest concern that I have about matching and the theory that it can all be found in language
[00:16:30] is that the elements just are going to get overlooked because they don't appear in language
[00:16:38] and you end up with a selection process that eliminates really, really good solid
[00:16:46] candidates because it's not possible to understand what they're good at by
[00:16:52] reading the language that they propose or a job description that somebody
[00:16:56] writes about them because the things that make a person great at their job
[00:17:02] generally you don't talk about them, because everybody takes them for granted.
[00:17:06] Let me ask you, digging in a little bit deeper, John: do you think that information shows up
[00:17:17] anywhere, in any form? That would be my question. It could be a performance review somewhere, or the discussion feedback, you know, the quarterly
[00:17:27] discussions that the nurse and her manager have.
[00:17:32] Could it be there in feedback that a patient writes for her?
[00:17:37] And again, you have to treat all this information very carefully.
[00:17:41] There are privacy aspects, lots of aspects. But I'm trying to think about actually curious about,
[00:17:48] where does this information come from
[00:17:51] when you're trying to make a hiring decision or say,
[00:17:54] hey, we want you to be a manager?
[00:17:57] Where does that information go?
[00:18:00] What form does that take?
[00:18:02] In the case of developers, it might be actually in the code.
[00:18:06] In some of the cases, you know, they wrote that code,
[00:18:08] or whatever it was they did.
[00:18:10] And I'm wondering in other cases,
[00:18:12] where it is in the language,
[00:18:14] in the case of a salesperson might be in a record
[00:18:17] which is in Salesforce or somewhere else
[00:18:19] and what they did and feedback.
[00:18:21] So the integrative possibilities of taking in a lot more information and helping
[00:18:31] people is certainly something to be looked at. Oh yeah, I think it's possible to imagine building
[00:18:41] systems that are smart enough to ferret this out. But I'm not saying that it is completely impossible ever.
[00:18:48] Yeah.
[00:18:49] But between now and Nirvana,
[00:18:52] where we have
[00:18:56] steampunk technology rather than ideal technology,
[00:19:01] and bad decisions get made because the technology is less effective than advertised,
[00:19:08] let's say. There's a social price to pay and there's a business price to pay. That's also
[00:19:15] going to be very, very difficult to detect. And I get concerned that we will have a classification system that doesn't actually
[00:19:27] know what it's doing on our way to getting to the perfect system that can consume all
[00:19:32] data and make all sorts of interesting nuanced judgments.
[00:19:36] So, I actually am going to take you in a slightly different direction.
[00:19:41] We can come back to this. So to start, there are easy cases and there are hard cases.
[00:19:49] And my fault, I think I've wandered into some of the harder cases.
[00:19:54] As you know, we have SeekOut Assist, which does two things.
[00:20:00] One is you give us a job description and, you know, we can automatically form great searches for you.
[00:20:08] We do have recruiters with all different kinds of strengths: some can do Booleans,
[00:20:15] some don't know how to do a great search but are amazing conversationalists, saying
[00:20:21] why this company would be interesting to you. So we can help with those things.
[00:20:27] The second thing: messaging is a very hard part.
[00:20:35] And so creating very nice, relevant,
[00:20:40] specific personalized messages.
[00:20:44] Gen AI can really help you with, you know,
[00:20:50] those tasks, having a conversational interface in the English language. I was at a
[00:20:55] Microsoft alumni meeting, and a technical fellow at Microsoft was speaking.
[00:21:01] And what she was saying is suddenly
[00:21:05] for the productivity systems,
[00:21:07] Microsoft was in the document space.
[00:21:10] We are moving from documents to dialogue.
[00:21:14] So the main activity and the main way to think about it,
[00:21:19] okay, is this transition to conversation, and capturing,
[00:21:25] and meeting summarization.
[00:21:29] To give, you know, simple uses that I can talk about:
[00:21:34] you just spent an hour, you know,
[00:21:37] talking to a candidate, and you want to send a summary
[00:21:40] to the hiring manager and why you liked them.
[00:21:44] It's a very hard thing to write down.
[00:21:47] And if you get an initial draft of a summary, you know, of the conversation from these
[00:21:56] technologies, that helps. You know, one of my very close colleagues is now the Chief Technology Officer at Zoom.
[00:22:02] Many of my friends are the chief scientists
[00:22:06] and such at Microsoft now, ex-colleagues.
[00:22:09] So there is a lot of applications being looked at
[00:22:13] where it is helpful.
[00:22:14] And I'll give you an example.
[00:22:16] It's a very personal example.
[00:22:20] So yesterday was Microsoft alumni summit.
[00:22:24] There were 600-plus people physically
[00:22:28] attending, and I was giving a talk on culture, workforce culture, my journey to move from head to heart.
[00:22:38] And I talked about gratitude, and how that has transformed us. You know, every one of our all-hands meetings
[00:22:45] starts with 15 minutes of gratitude
[00:22:48] where people share.
[00:22:52] So I was struggling with actually how to begin the talk
[00:22:55] and thinking I've written an article before.
[00:22:57] So I gave it to ChatGPT.
[00:22:59] I said, how should I start my conversation?
[00:23:05] And it came up with the idea: I know, this is what you should do.
[00:23:09] For the first two, three minutes,
[00:23:11] ask everyone to close their eyes.
[00:23:14] Think for a minute of something they are grateful for
[00:23:18] in this last week, open their eyes,
[00:23:21] have a conversation with their neighbor.
[00:23:24] And I actually did that.
[00:23:26] OK.
[00:23:27] And later people came up.
[00:23:28] It changed the energy, changed where
[00:23:31] people were coming from, because our mindset makes
[00:23:35] a difference.
[00:23:37] This was a suggestion from ChatGPT, seriously.
[00:23:41] So the thing is, if you view it as an assistant, where I was going with the assistant in the
[00:23:47] beginning, then you can use this as somebody who has a lot of world knowledge.
[00:23:54] If I'm going and looking up an industry or a candidate or something, I can do research
[00:23:59] much easier, much more deeply.
[00:24:03] And I have to be smart.
[00:24:04] So what we had to teach students about the internet
[00:24:08] applies here: how do you become a judge of what is bogus
[00:24:12] and what is not?
[00:24:14] How do you understand the quality of the source?
[00:24:18] How do you understand the quality of the results?
[00:24:20] Otherwise, you'll get fed all kinds of things.
[00:24:23] I think similarly now, with Gen AI,
[00:24:26] really the bar has gotten higher, and as both adults and as students we need to learn
[00:24:35] what is good and what is not, and that will matter even more. But we shouldn't throw out the baby with the bathwater, in some sense,
[00:24:46] whichever way it goes; you know, not leverage technology, not appreciate what it can
[00:24:53] offer.
[00:24:54] So I find it interesting that when you have a critique about the technology, technologists
[00:25:03] always assume that it's a binary critique,
[00:25:06] that the answer is either do it or don't do it, and that somehow that's what's at stake. When
[00:25:17] my question is, you just laid it out perfectly. The stuff is amazing. The stuff is transformative. The stuff is magic. And
[00:25:29] the stuff takes away the heart of a lot of things that people with white-collar jobs
[00:25:35] do. It eats out the heart and says what you need to be good at is asking the right question
[00:25:43] and editing the answer.
[00:25:48] But the people who are great white-collar workers are not good at either of those two things;
[00:25:51] they're good at the stuff that GPT does.
[00:25:54] And so when you put the tool in front of them
[00:25:56] and you say, oh, this is simple,
[00:25:58] all you gotta do is edit the answer.
[00:26:02] You forget that we live in a world where people have bosses. And so if the machine
[00:26:09] says it's green and I go to my boss and I say, look, it's perfectly clear that it's
[00:26:15] red, we have to disagree with the machine, I will have to justify that every single time
[00:26:24] I want to disagree with the machine, and
[00:26:26] eventually I'll get tired of going to my boss to justify the fact that I disagree with my
[00:26:31] machine and I will be just like a Tesla driver.
[00:26:35] Yeah, but how is this different than going to Google or Bing and asking a question?
[00:26:42] I mean, we have an entire generation that's come up through
[00:26:46] the ranks that doesn't, you know, I'm really gonna date myself and say, you know, we used to like
[00:26:53] sit with a volume of the encyclopedia and just randomly be reading things about places, you know,
[00:27:01] Madagascar or something. Like, if I don't know to go and ask that question,
[00:27:06] you know, that's where I'm not connecting these dots. So the difference is
[00:27:13] you can ask about Madagascar all night and all day. It's not a judgment about a person.
[00:27:20] And when you apply this technology to making judgments about people
[00:27:28] and
[00:27:29] You don't have a grasp of the depth of those judgments
[00:27:34] You're creating systems that contain biases that you can't control
[00:27:40] That's all I'm saying, and it's a caution. It's not a 'don't go there because there's monsters there.'
[00:27:47] It's a 'why isn't this the first question that's being asked?'
[00:27:53] Rather than the glories of the technology
[00:27:56] and how transformative it is, let's look at the actual work that
[00:28:00] comes out of this that people have to do
[00:28:02] and whether or not they're equipped to do it
[00:28:04] and how we get them equipped to do it. So we talk about a human machine partnership.
[00:28:12] But like business has done a lot over the last hundred years, we ignore the training that's
[00:28:17] necessary to get people to pay attention in very, very different ways than they used to have to pay attention. So John, I agree with you.
[00:28:29] I think just like as Gene was saying when search engines came out,
[00:28:35] people had to learn, and I'll tell you, there are good searches and bad searches.
[00:28:41] Yes, there are. Yes. Indeed.
[00:28:43] You know, yes, indeed. And similarly with this technology: you know, how to ask the question, how do you generate an explanation, why do you think so, explain your reasoning. The interesting thing is, you know, it is changing
[00:29:06] all the time.
[00:29:07] We thought it just mechanically regurgitated the next word and somehow it makes sense.
[00:29:12] And frankly, a lot of the deepest people who created the technology are totally surprised
[00:29:17] by what it can do.
[00:29:19] Oh my goodness.
[00:29:21] Listen, I know we're a little tight on time and there's one more thing I'd
[00:29:26] like to bring to the more popular discussion. You know, listen, all of our careers, we have
[00:29:33] watched technology as being game changing. And I think one of the paradigm shifts that
[00:29:41] I'm very excited about right now is this move to skills-based labor models.
[00:29:47] And Anoop, I know your company has an offering,
[00:29:52] SeekOut Assist, that is integral
[00:29:57] to this particular movement.
[00:30:00] Can we talk, is this for real?
[00:30:02] I mean, do we think that five years from now,
[00:30:04] we're going to be
[00:30:05] ditching the resume and talking about skills instead? Let's touch on that for a few moments.
[00:30:12] So, you know,
[00:30:15] the first thing I would say is I think there's confusion about skills. A lot of people talk about skills;
[00:30:23] you know, it's the buzzword that is out there.
[00:30:28] Also, skills have existed for a long time, long before the current emphasis on skills;
[00:30:36] it is not new. The thing I like to emphasize is: often the way people talk about skills, the two-word,
[00:30:48] three-word, one-word phrases that are there, are just labels.
[00:30:53] What goes behind that is what are your experiences?
[00:30:56] What did you do with that?
[00:30:59] What are the results and capabilities you achieved?
[00:31:04] That is the important part.
[00:31:07] So when you say, oh, you know, do you have leadership skills?
[00:31:11] What the heck does that mean?
[00:31:13] Right.
[00:31:14] You know, what it means to lead a team of five people, to lead a team of a hundred
[00:31:21] people, to lead a hundred thousand people, are very different things.
[00:31:26] Scale matters, experience matters, the nuance matters.
[00:31:30] So I think what is important in the approach
[00:31:33] when we think about skills is bringing that nuance
[00:31:39] to bear along with skills.
[00:31:42] And so if you say, somebody knows machine learning,
[00:31:46] what do you know? What can you do? Because the machine learning you need to build a chatbot
[00:31:52] is different from what you need to build the infrastructure that NVIDIA is building, which is different
[00:31:58] from what SeekOut needs to do with it. They are all very different and nuanced. And what we believe, and the infrastructure
[00:32:07] we are building, is letting you understand those nuances in your reasoning. It is about
[00:32:16] having a dynamic instead of a fixed ontology; you know, here are 30,000 skills, I think
[00:32:20] that's what they have, or something like that. It is much more dynamic. In some places,
[00:32:25] you need to go into much more detail, add a lot beyond what any of these, even with 30,000 things,
[00:32:31] can add, and in some cases subtract, and give color to what these phrases really mean in the context of the company and the work being done.
[00:32:46] So suppose AMD says, I want to be like NVIDIA.
[00:32:50] Right?
[00:32:51] They're at a billion-dollar valuation.
[00:32:53] We are here.
[00:32:53] What do we do?
[00:32:54] What are the kinds of engineers we need?
[00:32:57] You don't say, I need engineers with firmware skills.
[00:33:01] You want to say, I want engineers
[00:33:04] who know how to translate machine
[00:33:07] learning algorithms into firmware, based on the chips we are doing and how we
[00:33:14] want to do the chips. That's not a skill in any taxonomy, I can assure you of that.
[00:33:21] Yeah, yeah. That's a very interesting point you've made, Anoop. John, did you want to weigh in on that?
[00:33:31] Only if we have another hour, because it's a supremely interesting rabbit hole:
[00:33:51] the balance between a structured taxonomy and a dynamic ontology.
[00:33:56] Again, this is one of those arguments
[00:33:58] where people tend to treat it as binary.
[00:34:02] Mm-hmm.
[00:34:03] And the reality is there's truth in both places, and you sort of need, I believe,
[00:34:12] you need a model that starts with structure, that's sort of the ice cream cone, and then
[00:34:18] has dynamic capacity, that's the ice cream that goes in the cone, in order to get the most optimal results
[00:34:26] out of a skills-oriented system. I'm not seeing anybody who understands that exactly yet;
[00:34:36] the emphasis in AI these days is on the power of discovering relationships inside of the data.
[00:34:46] And the older-school view of AI,
[00:34:49] which is that you need structure on which to build that,
[00:34:54] and that you don't get repeatable results
[00:34:57] unless structure is left in place, is quiet right now.
[00:35:03] But, you know, at the beginning of the conversation, I said
[00:35:05] there's a crash coming, and the crash is going to come because the error rates will be out of control
[00:35:09] and there'll be Tesla-style crashes all over our industry. And we'll pull back, and at the moment
[00:35:19] that we pull back, we'll start talking about how you blend structure with dynamic tools to get a
[00:35:25] more fully fledged answer. Well, partly, John, again, I agree with you that
[00:35:34] you don't throw away structure, but you need to be dynamic. In terms of, you know, even if you
[00:35:42] look at the dictionary, they're continuously adding words,
[00:35:45] we're continuously adding; you know, it is not static,
[00:35:47] it is not an unchanging world.
[00:35:50] And how do you even add something, right?
[00:35:53] The way that humans understand is when we take something new,
[00:35:56] we connect it to a whole lot of the existing things
[00:35:59] that we have models and things we already understand.
[00:36:02] And now similarly, when you get one of the new skills,
[00:36:07] you know, in terms of, you know, for example,
[00:36:09] Andromeda is a system at Google
[00:36:11] that helps you manage a large number of virtual machines
[00:36:14] or networking.
[00:36:16] If you just add and you describe the skill, right,
[00:36:20] can it automatically connect it to everything else
[00:36:23] that is related?
[00:36:24] You could still refer to it as Andromeda. We believe a lot, actually, also in terms of custom
[00:36:30] things, because when you're looking for people inside the company, you might say, if they've worked
[00:36:36] on Andromeda, I know a lot of things about that. So these can't be static, where a company says,
[00:36:48] here are your 30,000. We believe extensibility is important,
[00:36:52] we believe dynamic connection is important.
[00:36:55] And we believe that the technology is there
[00:36:57] to build that and that is where we are going with it.
[00:37:01] Now, sometimes we call it beyond skills,
[00:37:03] sometimes we call it a generative skills platform.
[00:37:08] And as you're saying, we believe it will be structured
[00:37:12] or some structured, but those people who get stuck
[00:37:15] with structure or just with correlation,
[00:37:19] I think that is gonna be not serve the clients
[00:37:23] and the customers as well as they might be believing
[00:37:27] because of the hyper on skills.
[00:37:30] You just opened the door to an amazing conversation
[00:37:33] that we should have the next time we're together.
[00:37:36] And that is the view that you just described
[00:37:42] is fantastic in large organizations. It's exactly fantastic in large
[00:37:47] organizations. Most organizations have fewer than 200 people in them and a dynamic anything
[00:37:56] gets in the way of getting work done because it's more complicated than it needs to be.
[00:38:02] that it needs to be. And so I wonder if you're talking about
[00:38:07] scale differences that are so profound
[00:38:09] that they require radically different solutions.
[00:38:12] And that's an interesting question for another day, I think.
[00:38:16] We're gonna give a noop just one quick moment to respond
[00:38:21] while we have him in that thought.
[00:38:25] So I think that the dynamic, you know, the number of different kinds of skills you want to bring in, a 200 person organization.
[00:38:35] So if you say, you know, what to seek out is a 200 person organization that is there and we might say, you know, understanding of resumes and how do you parse,
[00:38:46] you know, certain things, is really important.
[00:38:48] So that could be a skill relevant to us.
[00:38:50] But it might be much more that the specific context, for example, when you say social
[00:38:56] marketing, are you doing it for B2B businesses or are you doing it for B2C businesses?
[00:39:03] So there might be a variety of nuance
[00:39:05] that can be actually inferred based on what is needed
[00:39:09] and so it gets customized a little, but it is simple.
[00:39:11] The goal is again, the challenge and the opportunity for us
[00:39:15] is how do we make it simple?
[00:39:17] And seek out assist is actually a pretty amazing example
[00:39:21] of where we have taken something, the JNAI,
[00:39:24] and just tried to simplify it.
[00:39:27] I think we have landed on a key phrase that neatly encapsulates today's discussion,
[00:39:33] and that is challenges and opportunities. A nuke we would like to have you come back in the future
[00:39:40] and discuss, continue this conversation. Actually, I can see that you and John
[00:39:46] could probably be locked in a room for several days
[00:39:49] and just create all kinds of wonderful things.
[00:39:52] Would you please tell us how our listeners
[00:39:56] can get in touch with you Anup
[00:39:58] and how they can learn more about Seekout?
[00:40:01] So you can go to our website and certainly seekout.com
[00:40:08] and connect with people.
[00:40:10] I always get people my own email,
[00:40:12] anewpatsicout.com.
[00:40:15] So if you like to get in touch, please send me
[00:40:18] and I will connect you to the right people
[00:40:20] or the right cases they're talking to yourself.
[00:40:23] I love the customer. But those
[00:40:26] are the two ways, you know, and best ways to connect with us.
[00:40:30] Well, thank you so much for being our guest today. This is the work podcast. And my colleague,
[00:40:37] John Sumster, looks like he wants to add one more thing. So I'm going to turn the phone
[00:40:41] over.
[00:40:42] Yeah, I just want to say thank you, Adoop. This was a
[00:40:47] rich, nuanced conversation with real points of view in it. That doesn't always happen when we
[00:40:55] talk to people who are in the industry. Thanks for being willing to go. Add it a little bit. I really appreciate that.
[00:41:06] Thank you, John.
[00:41:07] It is always a pleasure and insightful to talk to you.
[00:41:12] Thank you.


