Ep 18: Crowdsourcing and AI to Solve Data Challenges with Justin Strharsky
Elevate Your AIQ | September 12, 2024
18
00:48:56

Ep 18: Crowdsourcing and AI to Solve Data Challenges with Justin Strharsky

Justin Strharsky, co-founder of humyn.ai, joins Bob Pulver to discuss the power of collective intelligence and the role of AI in solving complex problems. He explains how humyn.ai uses competition and collaboration to bring together a global community of data scientists to solve data challenges. Humyn.ai's crowdsourcing approach has proven highly effective: multiple independent solutions can lead to higher confidence and the discovery of valuable outlier answers. Strharsky emphasizes the importance of constraints and the right incentives to maximize contributions and ensure the best outcomes. It's a wide-ranging and insightful discussion on myriad topics, each relevant to the future of work design and dynamic workforce ecosystems.

Keywords

collective intelligence, AI, data science, competition, collaboration, constraints, incentives, outcomes

Takeaways

  • Collective intelligence, where multiple independent solutions are combined, can lead to higher confidence and the discovery of valuable outlier answers.
  • Constraints and the right incentives are important in maximizing contributions and ensuring the best outcomes.
  • AI can function as a thought partner, stimulating creativity and enhancing human problem-solving abilities.
  • The power of AI lies in its ability to free up time and resources, solve mundane problems, and create a world of abundance.
  • It is important to have a clear understanding of the problem at hand and to critically evaluate AI outputs to avoid being overwhelmed or misled.

Sound Bites

  • "More answers are better than one."
  • "Constraints matter in maximizing contributions and outcomes."
  • "Generative AI is fantastic for finding commonalities and driving decisions."

Chapters

00:00 Introduction and Background

02:16 Competition and Collaboration in Solving Data Challenges

07:17 IP Ownership and Collaboration Opportunities

09:17 The Power of Cognitive Diversity and Subject Matter Expertise

13:59 Finding the Right Constraints for Optimal Solutions

22:15 Transparency, Observability, and Intellectual Property

25:04 The Need for Responsible and Ethical AI Use

28:33 The Potential of AI in Enhancing Human Thinking

31:39 The Importance of Engaging with AIQ and Avoiding Over-Optimism

36:05 The Value of Solving Mundane Problems and Creating a World of Abundance

39:01 The Future of AI and Collective Intelligence

42:10 Navigating the AI Landscape and Ensuring Clear Understanding

45:17 The Importance of Critical Evaluation and Avoiding Overwhelm

46:33 Elevating AIQ: Engaging with Tools Practically and Focusing on Problem Solving


Justin Strharsky: https://www.linkedin.com/in/justin-strharsky

https://humyn.ai

Powered by the WRKdefined Podcast Network. 

[00:00:00] [Pre-roll advertisement]

[00:00:39] Hi, everyone, it's Bob Pulver.

[00:00:41] In this episode, I'm joined by Justin Strharsky,

[00:00:44] co-founder of humyn.ai, an AI-powered crowdsourcing platform,

[00:00:47] to discuss the power of collective intelligence and AI

[00:00:50] in solving complex problems.

[00:00:52] Justin shares insights on how competition and collaboration

[00:00:55] can bring together global data scientists,

[00:00:57] to tackle data challenges,

[00:00:59] emphasizing the importance of constraints and incentives

[00:01:01] in maximizing contributions.

[00:01:03] We explore the potential of AI as a thought partner,

[00:01:06] its role in creating a world of abundance,

[00:01:08] and the critical need for clear,

[00:01:11] problem understanding and evaluation of AI outputs.

[00:01:14] As you'll hear in each episode,

[00:01:16] I ask all my esteemed guests what they recommend,

[00:01:18] to elevate one's AIQ,

[00:01:19] so stick around to hear what Justin has to say.

[00:01:22] I hope you enjoy the discussion. Thanks for tuning in.

[00:01:27] Hello, and welcome to another episode of Elevate Your AIQ.

[00:01:30] I'm your host, Bob Pulver. With me today

[00:01:32] is Justin Strharsky.

[00:01:35] He is a co-founder of humyn.ai.

[00:01:38] Welcome Justin.

[00:01:39] Hi, everyone.

[00:01:39] Thanks for having me.

[00:01:41] Absolutely. Appreciate your time.

[00:01:43] It's good to see you again.

[00:01:44] Been a while since we talked about all things

[00:01:48] gig economy and future work.

[00:01:52] So why don't you just start by

[00:01:53] giving a little bit of background about

[00:01:57] what you went to school for,

[00:01:59] what you've been doing, and how you wound up founding humyn.ai?

[00:02:03] Yeah, I suppose it's been a long and winding road.

[00:02:05] I have a background in tech.

[00:02:07] I was up in California and worked for a company called Sun Microsystems back in the day.

[00:02:14] And Sun was founded by a number of folks.

[00:02:17] One of whom is Bill Joy,

[00:02:19] and Bill's now famous for things that he said around the office,

[00:02:23] including something called Joy's Law,

[00:02:24] which is essentially that no matter the size of your company,

[00:02:29] most of the world's smartest people work for someone else.

[00:02:32] So, that's been a guiding principle of my career

[00:02:35] since being exposed to that.

[00:02:37] I think a lot about what the world looks like

[00:02:39] when smart people are distributed around the world,

[00:02:42] and how to take advantage of that.

[00:02:44] I started humyn.ai because customers in another business

[00:02:48] that I started 10 years ago were asking us to do more with data science

[00:02:52] and AI.

[00:02:52] This was before the gen AI hype that we're currently in.

[00:02:59] And those customers were trying to do more with their data.

[00:03:02] So they're trying to predict things,

[00:03:04] optimize things,

[00:03:05] understand when people are exposed to risk in their workplace.

[00:03:11] And do predictive maintenance and things like that

[00:03:14] in large industrial settings.

[00:03:17] So we built a community of people around the world

[00:03:19] that can help them solve these problems

[00:03:20] through challenge and competition.

[00:03:22] As for whether it's collaborative across these sort of freelance data scientists

[00:03:27] or, you know, competitive,

[00:03:29] I suppose it's kind of co-opetition.

[00:03:30] Most of it looks like a competition,

[00:03:35] but we're seeing lots of collaboration between the members of our community.

[00:03:39] They share methodologies,

[00:03:41] they like to help each other succeed.

[00:03:43] That's something that gets me super excited.

[00:03:45] But essentially what we've done is figure out what the right incentives are

[00:03:49] to get any group of people

[00:03:50] to contribute maximally to solving a problem.

[00:03:54] And that differs problem to problem.

[00:03:56] So if we're doing a small piece of analysis

[00:03:59] where the outcome is several reports,

[00:04:03] what we're interested in is what's the right number of people

[00:04:06] to contribute to solving that problem?

[00:04:10] A fundamental belief that we hold at humyn.ai is

[00:04:12] that more answers are better than one.

[00:04:14] In particular in data science it's very useful

[00:04:16] to have multiple independent answers

[00:04:18] and see where they converge.

[00:04:20] That's important both because

[00:04:22] the ways they converge give you higher confidence in the result,

[00:04:24] but also for discovering outlier answers that might be valid.
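[Editor's note: a minimal sketch in Python of the convergence-and-outliers idea described here. The function name, the robust-statistics approach (median absolute deviation), and the example numbers are illustrative assumptions, not humyn.ai's actual aggregation method.]

    import statistics

    def aggregate_independent_answers(predictions, outlier_k=3.0):
        """Combine independent solutions: answers that converge raise
        confidence in the result, while answers far from the pack are
        flagged for review, since an outlier may still be valid."""
        med = statistics.median(predictions)
        # Median absolute deviation: a robust spread for small answer sets.
        mad = statistics.median(abs(p - med) for p in predictions) or 1e-9
        outliers = [p for p in predictions if abs(p - med) / mad > outlier_k]
        consensus = [p for p in predictions if p not in outliers]
        return {
            "consensus_estimate": statistics.median(consensus),
            "spread": mad,
            "outliers_worth_reviewing": outliers,
        }

    # Four independent estimates of, say, days until a pump needs maintenance:
    print(aggregate_independent_answers([31.0, 29.5, 30.2, 55.0]))
    # -> consensus near 30.2, with 55.0 flagged for a closer look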

[00:04:27] So for something like that piece of analysis,

[00:04:30] We might get four people.

[00:04:32] Each of them is paid,

[00:04:33] but the one who submits, in our customer's view,

[00:04:36] the best piece of work gets paid more.

[00:04:39] So we're maintaining that competitive nature.

[00:04:43] Then we do model building competitions

[00:04:44] which involve up to say 200 people

[00:04:47] from around the world, and the top participants

[00:04:49] win a prize payout at the end.

[00:04:51] Really interesting.

[00:04:52] I mean, I have some familiarity with

[00:04:55] some of those types of models.

[00:04:57] I mean, I had spoken to Topcoder,

[00:04:59] and it's probably been seven years

[00:05:02] since I spoke to them on behalf of the companies

[00:05:05] that I was working for.

[00:05:08] At the time,

[00:05:10] it wasn't that my company didn't have

[00:05:12] the expertise, but it was more bandwidth,

[00:05:17] as well as something that wasn't

[00:05:21] core to their scope of responsibility.

[00:05:25] So it's either try to get their management

[00:05:29] to agree to let them contribute to this,

[00:05:32] and then, do you even have the structure

[00:05:36] to figure out how to do everything

[00:05:39] that you guys are doing?

[00:05:41] Or do we just,

[00:05:42] we know this is a sort of time-boxed

[00:05:45] exercise anyway,

[00:05:47] and just from a scalability

[00:05:50] and maybe even an efficiency standpoint

[00:05:53] why not just throw it out to a diverse crowd

[00:05:57] who would just, like you said,

[00:05:59] be incentivized to crank it out?

[00:06:01] Yeah, what we're seeing in our customers

[00:06:03] is that it's capability and capacity.

[00:06:07] So even in large enterprises that we deal with,

[00:06:09] they have an internal data science team.

[00:06:11] They may have two, three,

[00:06:13] maybe more data scientists on staff.

[00:06:15] They also have data engineers.

[00:06:17] But even those teams have this long list of projects

[00:06:20] to get after, and things don't even get on the list

[00:06:23] if they can't be proven to deliver value.

[00:06:26] So there's even some lost opportunity

[00:06:29] that happens before people are doing proofs of concept

[00:06:31] in data science,

[00:06:33] the science part of data science,

[00:06:36] which proves the value of taking the next step,

[00:06:40] and so those teams suffer from the capacity constraint

[00:06:44] that they have more work on than they can get done.

[00:06:47] But they also have a capability constraint,

[00:06:49] which is even in a team of three,

[00:06:51] how many of the different methodologies

[00:06:54] that make up data science and AI

[00:06:56] can you possibly bring to bear

[00:06:57] on the problems in your large enterprise?

[00:06:59] What that looks like is,

[00:07:01] you know, you may have an excellent person

[00:07:04] on staff who's good at time series data.

[00:07:08] But next week you want to do something

[00:07:10] with satellite data

[00:07:11] and they've never been exposed to working with satellite data,

[00:07:14] a computer vision problem.

[00:07:16] And so we're finding that we are very powerful

[00:07:19] in augmenting internal teams' capability

[00:07:21] and capacity to do more,

[00:07:23] and they do it with skills that they need now,

[00:07:27] and that allows their internal teams

[00:07:29] to focus on expanding their core competence,

[00:07:32] which really is becoming about knowing the business

[00:07:35] and that business's data better than anybody else.

[00:07:38] So they can be more effective as the stewards

[00:07:42] of awesome data science projects,

[00:07:44] having on-demand access

[00:07:47] to this distributed community of people

[00:07:48] with different skills and capabilities.

[00:07:51] That's fantastic.

[00:07:53] So when your crowd comes up with these

[00:07:56] solutions, whether it's the winning one or not,

[00:08:00] how does that work? Is there IP that's generated,

[00:08:03] and if so, like, who gets that?

[00:08:07] Who gets the rights to that?

[00:08:09] Great question.

[00:08:10] And it really depends on the nature of the project.

[00:08:11] For most of those projects,

[00:08:13] the IP goes to the person we're sponsoring the competition

[00:08:15] so our customers

[00:08:17] but there are some competitions that we run

[00:08:21] where the sponsor

[00:08:23] that large enterprise that we're dealing with

[00:08:26] understands that they don't have the internal capability

[00:08:29] or desire to carry an idea forward,

[00:08:33] and they might want to stimulate the innovation ecosystem

[00:08:36] to produce a commercial product

[00:08:38] that then they can buy.

[00:08:40] Or they may want to collaborate on the production of something novel.

[00:08:43] So we've seen that in a number of cases

[00:08:44] where a customer of ours wants to engage our community

[00:08:48] to prove a hypothesis,

[00:08:52] whether it's something like

[00:08:53] can we use machine learning to find

[00:08:55] exploration targets under cover?

[00:08:58] In that case, they may actually want to partner

[00:09:00] with the team or group of individuals who built the best solution

[00:09:04] to commercialize something

[00:09:05] or to take the next step in building a solution in a specified timeframe.

[00:09:09] I guess I wanted to drill down

[00:09:10] into like the sort of cognitive diversity

[00:09:14] of the crowd, not in a decision-making way

[00:09:20] which we'll get into in a minute.

[00:09:23] I'm sure when we talk about how people are using AI.

[00:09:26] But more just the expertise and exposure

[00:09:32] from people that are coming from different backgrounds,

[00:09:36] either different industries or different geographies.

[00:09:39] I mean, I know a lot of your projects

[00:09:43] originate in industrial manufacturing,

[00:09:47] mining, things like that.

[00:09:48] I mean, are there people from different specific geographies

[00:09:53] that happen to have been exposed

[00:09:55] because of where they reside

[00:09:57] and where those industries have some prominence

[00:10:00] that they seem to be coming

[00:10:02] forth with like novel approaches

[00:10:05] to solving some of these problems?

[00:10:07] Ah, such a big question.

[00:10:08] It really cuts both ways actually.

[00:10:11] What we find is that the large industrial businesses

[00:10:14] that we work with in,

[00:10:16] whether it's agriculture, manufacturing,

[00:10:19] mining, energy, they have access to people

[00:10:22] that have very detailed subject matter expertise.

[00:10:26] Some of them can walk into the room

[00:10:28] and know that a circuit breaker

[00:10:29] on an electrical substation isn't performing right

[00:10:32] and they can't say how they know that.

[00:10:34] Some of them know about the data

[00:10:36] that their company has been gathering.

[00:10:37] They understand what a null value in a data set

[00:10:39] means, because it might mean something was shut off

[00:10:41] on purpose, or failing.

[00:10:44] And some of that is required to build

[00:10:47] a great data science solution.

[00:10:49] And it is not necessarily the case

[00:10:51] that people around the world who have

[00:10:53] great data science skills have that subject matter expertise.

[00:10:58] And so sometimes the value is

[00:10:59] in bringing those people together in a project.

[00:11:03] Sometimes you're actually looking for outlier solutions

[00:11:07] and those can come from people

[00:11:08] who do bring a skill that the company

[00:11:11] that has the subject matter expertise does not have.

[00:11:14] So whether it's the latest techniques

[00:11:16] in machine learning applied to a particular problem

[00:11:18] that they have or something else.

[00:11:21] So we're trying to figure out what the sweet spot is

[00:11:23] in combining subject matter expertise

[00:11:25] with the best machine learning

[00:11:27] and data science skill sets.

[00:11:29] What we know for sure

[00:11:32] is that the broader our reach,

[00:11:35] the better our ability to make that match happen.

[00:11:40] And what's unique about our approach is

[00:11:41] that we're not trying to predict that match.

[00:11:44] So we're not trying to evaluate beforehand

[00:11:47] and make a prediction about whether

[00:11:49] Bob would be great at solving this particular problem.

[00:11:53] We're trying to measure as objectively as possible

[00:11:56] the outcomes produced by a field of people

[00:11:59] who bring a diversity of skill.

[00:12:01] So that merit-based assessment

[00:12:04] of outcomes is super powerful

[00:12:07] for getting results on these kinds of business challenges.

[00:12:12] Before we move on,

[00:12:13] I need to let you know about my friend Mark Feffer

[00:12:16] and his show People Tech.

[00:12:18] If you're looking for the latest

[00:12:20] on product development,

[00:12:21] marketing, funding,

[00:12:23] big deals happening in talent acquisition,

[00:12:26] HR, HCM,

[00:12:28] that's the show you need to listen to.

[00:12:31] Go to the WRKdefined network,

[00:12:33] search out People Tech,

[00:12:34] Mark Feffer,

[00:12:35] you can find them anywhere.

[00:12:39] So when you put out a challenge

[00:12:42] or one of your clients kicks this off,

[00:12:46] do they give you parameters in terms of who they might be looking for?

[00:12:51] Or do they basically trust humyn.ai to do that?

[00:12:55] Or is it just sort of, here, we're putting it up,

[00:12:58] it goes to the same place every other challenge goes to,

[00:13:01] and whoever starts to contribute,

[00:13:04] contributes? Or is there any sort of filtering

[00:13:07] in terms of the exposure you give to the challenge?

[00:13:12] I would say it's kind of like a slider.

[00:13:15] Imagine a slider between something where we don't know

[00:13:19] what the end solution is going to look like on one end,

[00:13:23] and on the other end,

[00:13:25] we know exactly what we want, down to the operating environment

[00:13:29] in which it must perform.

[00:13:30] So our job at humyn.ai is to understand

[00:13:33] where any particular project sits and move that slider

[00:13:36] back and forth, and that determines the methodology

[00:13:39] that we bring to the table.

[00:13:40] So in the highly specified case,

[00:13:43] where we've got projects that are producing machine learning models

[00:13:46] that go into production and a particular environment,

[00:13:50] then by the way that we still and describe that to our community,

[00:13:54] the right people are up in it.

[00:13:56] Similarly, if we move in the other direction,

[00:13:59] we may have to do some outreach of a particular nature

[00:14:01] to make sure that we are exposing that opportunity

[00:14:05] to people who are bringing a diversity of skill sets,

[00:14:07] without bias as to which of those skills is going to be

[00:14:10] most effective at solving the problem.

[00:14:12] On the one hand, we have things that look like innovation

[00:14:15] challenges for novel solutions, and on the other, things that look like projects

[00:14:19] for finding the best solution to a very specified opportunity.

[00:14:24] Okay, I guess it's similar to writing a good job description.

[00:14:29] Right? Like, we're going to be inclusive,

[00:14:32] we're going to attract the right people

[00:14:35] with whom this is going to resonate, or maybe candidates

[00:14:39] who think they've got enough relevant experience plus

[00:14:42] a unique perspective or angle on this that would make them

[00:14:47] potentially successful.

[00:14:49] Harkens back to some of the collective intelligence work

[00:14:54] that I was doing back at IBM.

[00:14:56] We probably talked about that on the time we spoke,

[00:15:00] but how do you, I guess it's a combination of

[00:15:03] collective intelligence and collaborative decision making.

[00:15:09] Do you have the right people in the room where you bring

[00:15:13] not just diverse perspectives,

[00:15:17] but it's like the opposite of an echo chamber

[00:15:19] you're bringing in these perspectives.

[00:15:22] You've haven't considered this angle or you're not being human

[00:15:27] centric or you're not,

[00:15:30] considering macroeconomic factors or whatever it is,

[00:15:34] but each of those contributes to an improvement

[00:15:40] even if it's a rounding error or kind of improvement.

[00:15:43] But each perspective gets you a slightly better

[00:15:48] prediction or slightly better decision in some ways.

[00:15:52] Yeah, I think that's right.

[00:15:54] What we have found is that constraints matter.

[00:15:58] So in our other open innovation business,

[00:16:02] we started out by running hackathons.

[00:16:04] We brought these incredibly interesting challenges

[00:16:06] to people around the world.

[00:16:09] And then we put them in a room and we told them

[00:16:11] you got the weekend to solve this hard problem.

[00:16:14] You can use any tools you like,

[00:16:16] but you've only got two and a half days or whatever.

[00:16:20] And so they're the constraints of time

[00:16:23] really forced incredible creativity.

[00:16:26] Now of course, most of the solutions at the end of the weekend

[00:16:29] weren't fully baked, but how powerful is it

[00:16:32] to see 50 or 100 different approaches

[00:16:35] to solving that problem in just a weekend,

[00:16:37] before you then double down on spending more time

[00:16:40] and resources on the most promising.

[00:16:43] And so at humyn.ai we do spend time thinking about,

[00:16:46] for any given data project,

[00:16:49] What are the right set of constraints for that project?

[00:16:52] And a lot of our work goes into engaging with customers

[00:16:56] to figure out those constraints.

[00:16:59] And as we get better at doing this,

[00:17:01] what we're discovering is that there are atomic units

[00:17:04] of their sites and that's how we think about that.

[00:17:07] And those are the basic building blocks of any data project

[00:17:10] and some of them can be thought of as independent projects.

[00:17:14] So whether you're doing a proof of concept

[00:17:17] on whether a data set might support a hypothesis that you have,

[00:17:21] that looks a particular way and needs a particular set of constraints

[00:17:24] and requires people of certain skills to advance that.

[00:17:27] Then if you're at another stage and you're actually building a model

[00:17:31] that has to be deployed into production,

[00:17:34] you've got different constraints.

[00:17:36] And so the more that we can figure out what those atomic units

[00:17:39] and project types look like,

[00:17:42] the better we are at spinning them up quickly

[00:17:45] with the right set of constraints to get the right outcome.
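[Editor's note: a purely illustrative sketch of the "atomic units with constraints" idea. The field names, values, and project types below are hypothetical assumptions; humyn.ai's actual project schema isn't described in the episode. The point is that a proof of concept and a production model build are different atomic units with different constraints.]

    from dataclasses import dataclass, field

    @dataclass
    class ChallengeConstraints:
        """One hypothetical 'atomic unit' of a data project, specified by
        its constraints rather than by who should solve it."""
        project_type: str       # e.g. "proof_of_concept" or "production_model"
        deliverable: str        # what participants must submit
        time_limit_days: int    # time-boxing forces creativity
        max_participants: int   # the right crowd size differs per unit
        success_metric: str     # merit-based and objectively measurable
        reproducible_code_required: bool = True
        allowed_tools: list = field(default_factory=lambda: ["any"])

    poc = ChallengeConstraints(
        project_type="proof_of_concept",
        deliverable="written analysis report",
        time_limit_days=14,
        max_participants=4,
        success_metric="customer-judged best report",
    )

    production = ChallengeConstraints(
        project_type="production_model",
        deliverable="deployable ML model",
        time_limit_days=60,
        max_participants=200,
        success_metric="held-out test-set score",
    )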

[00:17:47] It's very cool.

[00:17:49] I guess when I think about constraints,

[00:17:52] I also think about context.

[00:17:55] When people agree to join this challenge and contribute,

[00:18:02] perhaps this is in like the T's and C's

[00:18:05] and the agreement,

[00:18:07] but like,

[00:18:07] how do you know everyone's coming up with unique solutions

[00:18:11] or code or whatever,

[00:18:13] that, you know, what they're contributing is

[00:18:15] their own work?

[00:18:17] That is a great question and unfortunately it is the kind of detail

[00:18:22] that we have to get obsessed about

[00:18:25] and part of how we mitigate the risk of somebody doing something like

[00:18:29] stealing from somebody else

[00:18:32] is in the terms and conditions as you've suggested.

[00:18:36] There are also just a lot of boring work bits that go into

[00:18:40] doing that, like code reviews,

[00:18:42] and we're trying to automate that as much as possible,

[00:18:47] but there are just a series of work steps that have to happen

[00:18:51] to prevent that kind of thing.

[00:18:53] Thankfully we find that much of that can be avoided

[00:18:57] if we get the culture and the expectations right

[00:19:01] and we attract the right people based on the right incentives.

[00:19:05] That's not foolproof.

[00:19:07] Things always slip in at the edges,

[00:19:08] and that's why we have to have both contractual

[00:19:10] and operational controls.

[00:19:13] Yeah, no that's not easy.

[00:19:14] I mean I have a lot of conversations just in the context

[00:19:18] of responsible AI have a lot of those similar conversations

[00:19:24] because people talk a lot about responsible use of AI

[00:19:29] and certainly when we think about, you know,

[00:19:32] talent acquisition and talent management

[00:19:34] in some of those use cases, it's the average user.

[00:19:39] Well, the average person, I should say, is a user, not a builder,

[00:19:43] but if you're going to do these things right

[00:19:46] as you know, I mean it starts where you're collecting data

[00:19:50] and writing algorithms or using code that maybe came from a shared

[00:19:58] repository maybe it came from an accepted methodology

[00:20:02] and a researcher academic community but you're still making

[00:20:07] potentially an assumption that, you know,

[00:20:10] bear and transparent and it's okay to use that

[00:20:16] but you've really got to think about being responsible

[00:20:20] by design and that includes, you know,

[00:20:24] fairness and being ethical and responsible transparent

[00:20:27] explainable all these things.

[00:20:29] And so I think with genera they are that's even more

[00:20:32] of the case because now anyone can technically create

[00:20:35] a computer, co-pilot or agent themselves and we're all

[00:20:40] builders now but I think the same sort of concepts

[00:20:44] apply and honestly no I mean no one's completely

[00:20:47] figured it out right do you try to catch it at the point of

[00:20:53] decision when something is going about to be used

[00:20:57] and you know perhaps inappropriately trusting

[00:20:59] the system gave you an output that was correct

[00:21:04] and treating it like a calculator or do you go back

[00:21:07] and do some type of continuous, you know,

[00:21:10] monitoring and have the traceability and the

[00:21:14] observability all the way back beginning of the sort of data

[00:21:17] by chain.

[00:21:19] Sure. From our perspective, the transparency

[00:21:23] and observability pieces that you mentioned are very important.

[00:21:26] So we have people submit not just the solution but the source code

[00:21:30] that generates that solution and that allows us to go back and see

[00:21:35] every piece of code that was used to generate the solution has to be

[00:21:38] reproducible and we have to know where things came from.

[00:21:41] And so being able to inspect the libraries that people use,

[00:21:43] understand where those libraries came from, and attribute

[00:21:47] the intellectual property to the authors of that intellectual property

[00:21:50] where required is an important step for us.

[00:21:53] Just being able to really understand the solutions that are submitted

[00:21:56] and the building blocks from which they were created.
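[Editor's note: a minimal sketch of the kind of provenance check described above: listing a submitted solution's third-party dependencies with their declared licenses so upstream authors can be attributed. It uses Python's standard-library importlib.metadata; humyn.ai's actual tooling isn't specified, and the requirements file name is an assumption.]

    from importlib import metadata

    def dependency_licenses(requirements_file="requirements.txt"):
        """Report the installed version and declared license of each
        dependency a submission relies on, for attribution review."""
        report = {}
        for line in open(requirements_file):
            # Strip version pins and comments to get the package name.
            name = line.split("==")[0].split(">=")[0].strip()
            if not name or name.startswith("#"):
                continue
            try:
                dist = metadata.distribution(name)
                report[name] = {
                    "version": dist.version,
                    "license": dist.metadata.get("License", "unknown"),
                }
            except metadata.PackageNotFoundError:
                report[name] = {"version": None, "license": "not installed"}
        return report

    # Example shape of the output (values depend on the environment):
    # {"numpy": {"version": "1.26.4", "license": "BSD-3-Clause"}, ...}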

[00:22:00] We're all standing on the shoulders of giants and we've got to figure out

[00:22:03] how to recognize people's contributions where they make them.

[00:22:07] We're in the lucky position of being able to celebrate people

[00:22:10] who are genuinely making a lot of novel solutions and that's great.

[00:22:15] They're not doing that with entirely novel tools

[00:22:18] but using tools that are built by others of course,

[00:22:21] that's part of the game and figuring out exactly how to recognize that contribution

[00:22:26] from others is something that we're all putting best effort into.

[00:22:30] I think that's great. Yeah, there's been so much innovation happening

[00:22:35] and I think we just can't default to just assuming anything.

[00:22:39] And we can't lose our humanity and our critical thinking when we apply these things

[00:22:45] so I think it's a matter of everybody thinking more deeply about that.

[00:22:50] I think sometimes lately it seems like we've been focusing so much on productivity

[00:22:54] that is, doing something faster, whatever,

[00:22:57] so much so that I feel like we've got to realign in terms of quality over

[00:23:04] throughput, and I think that'll come with time.

[00:23:09] Honestly, I think sometimes if people had started automating things back when

[00:23:15] some of the earlier automation platforms came out, like almost a decade ago at this point,

[00:23:21] they wouldn't be so fascinated with this basic robotic process automation

[00:23:29] but we are a little bit unfortunate that everyone's sort of conflating automation

[00:23:34] with augmentation and AI, but that's okay.

[00:23:38] I'm just glad that people have kind of woken up.

[00:23:40] So it's sort of a double-edged sword, right? When generative AI came into the public consciousness,

[00:23:46] it was good to start to see where things could go wrong, but also get people exposed

[00:23:53] and, I guess, ready to start to upskill themselves and figure out where and how it can help.

[00:24:01] So in that regard, I guess I'm curious how often AI is used in some of your projects

[00:24:08] and where do you see AI coming into help in the whole process of executing these projects?

[00:24:16] Great question.

[00:24:18] Sure. The rise of generative AI, and the natural tendency that people have to refer

[00:24:25] to AI as a blanket technology, usually meaning generative AI in that case,

[00:24:32] has created some awesome opportunities and some challenges.

[00:24:35] I often feel like I'm using different language with different parts of my community.

[00:24:41] So when I'm engaged with senior leaders, boards and execs, they're frequently very concerned

[00:24:48] with how they do more AI and they have people in their businesses who have been building machine learning solutions

[00:24:55] for quite some time and doing data science and analytics for a lot longer than that.

[00:24:59] And understanding the background that each individual brings, and how they conceptualize the problem

[00:25:07] they're trying to solve, helps me to have those kinds of conversations and tease out what we're trying to do.

[00:25:13] As for the actual use of generative AI in our solutions, we're super excited to be a user ourselves of that technology.

[00:25:22] One simple and straightforward example is it's so powerful when you have multiple different solutions

[00:25:28] in different formats or similar formats.

[00:25:32] So let's take a piece of analysis that generates reports.

[00:25:36] We have four independent people produce four reports about when somebody is likely to be injured at work.

[00:25:44] So generative AI is fantastic for the consumer of that analysis to look across those four reports

[00:25:50] and find things that they all agree on, and that can drive a decision or an action in the organization.

[00:25:57] So we're excited to use generative AI in cases like that to help our customers consume the awesome work of human beings.
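[Editor's note: a sketch of the report-comparison use case Justin describes, assuming the OpenAI Python client; the model name, prompt wording, and function are illustrative assumptions, and any LLM client could play the same role.]

    from openai import OpenAI  # pip install openai

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def find_commonalities(reports):
        """Ask an LLM where independent analyses agree, so the consumer of
        several workplace-injury reports can act on convergent findings."""
        numbered = "\n\n".join(
            f"REPORT {i + 1}:\n{text}" for i, text in enumerate(reports)
        )
        prompt = (
            "Several analysts independently studied when workers are likely "
            "to be injured. List the findings ALL reports agree on, then any "
            "notable claims that appear in only one report.\n\n" + numbered
        )
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content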

[00:26:07] Some generative AI might be used in building those solutions as well

[00:26:11] and we're fairly agnostic outside of the constraints that we define for any project, what tools people use to get there.

[00:26:20] If we build constraints around the outcomes and we can be relatively agnostic about how people get to those outcomes

[00:26:26] that's fantastic for creativity and stimulating the best possible solutions.

[00:26:30] So people do use generative AI in coding and encoding their machine learning solutions

[00:26:35] and that has increased the power of our community to do work, and we're excited about that.

[00:26:41] I think there'd be an opportunity for AI, let's say you get like six people,

[00:26:47] could it look at the backgrounds of those people and, using this,

[00:26:54] I guess, known-unknowns concept, identify that there's something missing?

[00:26:59] Like there's another perspective that's missing, and you might want to leave the open call to contribute open for another couple of days, or,

[00:27:12] you know, help out, sort of recruiting in a way, for someone who might have some specialized expertise that would improve the ultimate solution.

[00:27:22] That's a great question, and to be honest, until you asked it, I hadn't thought about how we might apply it in that way,

[00:27:30] and I think it's something that I'll certainly look into.

[00:27:34] I have a hard time imagining the unknowns piece, and how generative AI or any other AI would understand that there is a missing puzzle piece that didn't come from one of the inputs into the problem.

[00:27:51] But I think it can certainly be very powerful for going, here are all the inputs.

[00:27:55] We were talking about constraints earlier; we know what the constraints of the end solution need to look like.

[00:28:01] And have we got the representation, in the people who are participating, that we think is required to get there?

[00:28:07] Firstly, I'm less excited about doing that work on the basis of predictors about the people, and more excited about actually comparing outcomes, because I'm motivated by thinking about merit-based approaches to doing this that allow us to

[00:28:25] blow the top off of the talent pool around the world and create opportunities for people that otherwise wouldn't get a look in.

[00:28:32] So we want to engage people regardless of where they went to school, or what they look like, what they do during the day, what their last name is, any of that kind of stuff.

[00:28:41] And so that means we have a strong focus on paying attention to the merit of the solution and to making sure that we specify the nature of the problem,

[00:28:56] so we get great outcomes, rather than trying to predict who, based on their characteristics, is going to be good at solving that problem.

[00:29:03] Yeah, I mean, obviously I was throwing it out as a hypothetical, but based on some of the things that I saw, like when I was at IBM, I was connecting, like, C-suite clients to IBM Research.

[00:29:15] So when Watson was coming out of the labs I saw a lot of the demonstrations and the use cases that the research team and the chief investigators of those projects were presenting.

[00:29:27] And so, like, one of them was a healthcare example: you know, a patient living in Arizona had this something on their skin, and they couldn't understand where it came from; nobody could figure it out.

[00:29:44] But part of it was because they weren't asking the right questions, and, I mean, it turned out all the doctors in Arizona had never practiced medicine elsewhere.

[00:29:56] And they never asked the question of where else this person had lived, and so none of the doctors in Arizona had ever been exposed to Lyme disease, right, which occurs mainly where deer reside, which is predominantly in the northeastern United States or whatever.

[00:30:11] So I mean, it's a simple example, but if you had the means to assemble a medical team, you wouldn't have all, you know, internists; you'd have different specialists, and maybe, if you could only have a few specialists, they'd be aligned to your own medical and family medical history.

[00:30:29] You wouldn't just pick, you know, random specialists; you'd assemble a team where you knew you had, you know, 95% of your possible future ailments sort of covered, right? So, you know, it's a risk mitigation strategy in some sense.

[00:30:47] Yeah, I mean, there's a sweet spot, isn't there, between going as wide as possible and getting an outcome in a reasonable amount of time, and, you know, one that can actually be implemented.

[00:31:00] And it's impossible, I think, to know exactly how to draw that curve.

[00:31:08] And rather than try to figure that out in advance, especially in running open innovation challenges, it was incredible to continually be surprised and impressed that somebody that had

[00:31:21] a skill that you wouldn't think would be applied to this problem

[00:31:25] could make a difference.

[00:31:29] So one of the experiences I had that gives me chills to this day, and keeps me involved in this kind of business, is that we were trying to solve a problem for a mining customer that wanted to discover where mineral sands are likely to be deposited.

[00:31:43] And it turns out that there are these ancient beaches where wave action historically has deposited mineral sands.

[00:31:51] But now they're covered by foliage and other things, so it was relatively hard, with the vision approaches at the time, to try to locate all of them.

[00:31:59] And there was a community member who was super keen on music, and they just asked themselves, could our experience in digital signal processing be applied to this problem?

[00:32:09] And that led them to convert a particular data set into another format that allowed them to use their skills in digital signal processing to identify all of these historic ancient beaches, around the world, of course.

[00:32:21] And that conceptual leap, from "this is a problem of looking at visual shapes" to one in which they were using waveforms and techniques that they developed from music, was brilliant.
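[Editor's note: the community member's actual method isn't detailed beyond the conceptual leap, so this is a toy sketch of the general idea: treat a terrain elevation transect as a 1-D signal and use a Fourier transform to surface periodic, ridge-like structure of the sort buried beach deposits can produce. All numbers are made up.]

    import numpy as np

    def dominant_wavelengths(elevation_profile, sample_spacing_m=10.0, top_k=3):
        """Return the strongest spatial wavelengths in an elevation
        transect; periodic ridge spacing can hint at buried, vegetation-
        covered beach deposits that image-based approaches miss."""
        signal = elevation_profile - np.mean(elevation_profile)  # remove mean
        spectrum = np.abs(np.fft.rfft(signal))
        freqs = np.fft.rfftfreq(len(signal), d=sample_spacing_m)
        order = np.argsort(spectrum[1:])[::-1][:top_k] + 1  # skip zero frequency
        return [(1.0 / freqs[i], spectrum[i]) for i in order]

    # Toy transect: ~200 m ridge spacing plus noise, sampled every 10 m.
    x = np.arange(0, 5000, 10.0)
    profile = 2.0 * np.sin(2 * np.pi * x / 200.0) + np.random.normal(0, 0.3, x.size)
    print(dominant_wavelengths(profile))  # strongest wavelength is ~200 m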

[00:32:33] And I couldn't have predicted that they would be successful; I couldn't have gone out and said, well, we need somebody with digital signal processing in the room.

[00:32:41] So what we weren't doing was trying to engineer a certain outcome; the whole nature of open innovation was creating the right mix, the right constraints on time but openness to different skill sets, to foster an environment in which magic like that happens.

[00:33:00] And when it does it's electrifying and people get super excited about it and that's kept me going for a very long time.

[00:33:07] Figuring out the right amount of that to pull into any data challenge is just something that I think about; not every problem requires innovation like that.

[00:33:17] And there's a tremendous long tail of work to be done and high-value problems to be solved where it's pretty clear: we need somebody with machine learning skills just to get this thing done, and the solution has to look like this for us to act on the recommendations of that model.

[00:33:32] That's also awesome, and what's great about that is there are people all over the world, and we don't necessarily have to hire somebody that goes into our local office to solve that problem.

[00:33:45] So I bring that same excitement to finding the right skills wherever they happen to be, and to trying to figure out how we measure the output of those types of projects on the basis of their ability to solve the problem.

[00:33:58] Yeah, that gets me excited as well. I mean, when you have experienced people who are systems thinkers and they can extrapolate, you know, concepts and ideas from one domain to another.

[00:34:13] It's hard to even put a value on that. But, you know, I've pivoted quite a few times in my career; I mean, even in my over two decades at IBM, I did a lot of very different things, but there was always not just transferable skills but knowledge that I carried from one to another. And, you know, I guess even in this scenario that I depicted, finding these unknown unknowns,

[00:34:41] I mean, you do have a constraint, and I think this touches your point: you're still subject to whatever

[00:34:48] data is available about people and their backgrounds, and there are so many things. I mean, I could have a whole second, alternative resume with

[00:34:58] all the things that were not part of my job descriptions over the course of my career. And so

[00:35:05] I don't know if AI will ever be able to, you know, figure that out, not necessarily mapped back to an individual, but

[00:35:13] where AI could actually go now that it's absorbed,

[00:35:18] you know, so much information and continues to ingest more data every day. But I do wonder about that kind of thinking and assimilation of all that

[00:35:32] knowledge and data, and being able to come up with

[00:35:35] novel solutions or novel projects based on everything it knows across different industries. I'm not sure, I mean, maybe

[00:35:44] it can do that today, I'm not sure.

[00:35:46] I don't know, but what I do know about today, and what it can do now, is that it can function as an incredible thought partner.

[00:35:53] Have you ever had that experience where you vibe with somebody else and you're talking about an idea,

[00:35:58] if you're working in AI like we are, and new information, new knowledge is created in the creative spark that happens between two minds?

[00:36:09] I think the promise of AI right now, that it can deliver on, is freeing us from being trapped in our own singular minds, and it can be a thought partner that stimulates that creativity.

[00:36:17] I think that it has tremendous power for helping us be more human, bring our best creativity and skills of ingenuity to the table and to the problems we care about.

[00:36:28] And I suspect that it can do that now for lots of different problems and for people, and that's something to be excited about, regardless of whether it can do that independently at some point in time.

[00:36:38] Fair point. I mean, I've been spending a lot of time, excuse me, with Ross Dawson, who's a futurist and speaker.

[00:36:47] He's got some amazing resources on these kinds of concepts, how AI is enhancing our own thinking and, you know, putting things in perspective with the human, sort of the human first,

[00:37:00] and then adding AI as needed to complement that. But one of the threads within these conversations and some of the resources that he's created is not just

[00:37:13] augmenting individual human creativity and ingenuity, but the pairing of collective human intelligence with

[00:37:24] artificial intelligence, because, you know, AI has not actually absorbed all

[00:37:31] human collective intelligence by any stretch. And so,

[00:37:35] how do you have it in the room, you know, with you? It would be interesting to see if an AI could actually contribute to some of your projects on its own, but

[00:37:46] I don't know, maybe that ties too closely to basically being the missing puzzle piece itself. But yeah, I just think it's really fascinating to think

[00:37:56] through some of these things and what previously intractable problems could be solved.

[00:38:02] So I also like to be a champion for the boring problems, because we're standing on the brink of being able to make

[00:38:15] fantastic shifts in the ability of our core industries to operate sustainably and effectively, to free up our time and resources for doing other things, and it might be that the majority of those gains come from solving lots of little problems.

[00:38:31] Adding 5% efficiency here, increasing safety there, and the sum total of all of those boring business problems that our industries face actually is a

[00:38:49] powerful force for us being better humans and being more creative and living in the world that we want to live in.

[00:38:55] A world of abundance, where no longer are we plagued by that annoying piece of equipment that poses a safety risk, or the fact that we can't get more performance from that transformer.

[00:39:09] On their own, perhaps they're not so sexy as some of the other problems we talk about solving; taken together,

[00:39:16] they create possibility for lifestyle and fulfillment.

[00:39:21] Yeah, I absolutely agree. I mean, there's plenty of daily, you know, challenges where you're just like, wait, we just kind of leaped over solving all these, you know, pervasive

[00:39:35] and fundamental, you know, challenges with how we engage with AI, where it can provide

[00:39:42] you, you know, guidance or whatever. I mean, even just

[00:39:47] driving around running errands, I can't tell it where I want to go and then just have it plot an optimal, you know, route I can't divert from once I've got my destination in Waze or whatever mapping app you're using, like to add a stop, an unexpected stop,

[00:40:05] and then, you know,

[00:40:07] have it reroute everything.

[00:40:09] That's a great one. Your time is valuable; it's the most precious resource you have, and

[00:40:16] freeing up some of that from being stuck in traffic or whatever it is, is a good, and we should be pursuing that good with the tools that we have at our disposal, and AI is one.

[00:40:26] Yeah, yeah, absolutely. Any particular tools that you've been playing with on the generative AI front that you're getting value from?

[00:40:33] Yeah, I mean, I've played with lots of them, and I like a lot of them; I don't have a particular favorite. As you can tell, I'm excited about the people part, and the name of the business says it all, so I try to spend my time engaging with the community of people that we've got, and our customers, and getting them excited about the potential for

[00:40:49] the application of technology.

[00:40:52] And I also get somewhat frustrated at seeing people captured by the sexiness of a tool rather than

[00:40:58] what it feels like to really solve a problem. And so I feel like I'm the boring guy in the corner, trying to bring us all back to the problem at hand and solving that, and painting the picture of what a great world it's going to be if we just make that thing more efficient,

[00:41:12] or that thing doesn't blow up, as a result of using the tool; who cares about tools?

[00:41:16] Let's get to that world where we're happy that the things aren't going to blow up and we've got time to spend with our loved ones.

[00:41:23] That's a great use of technology, regardless of what it is.

[00:41:26] Absolutely.

[00:41:28] So Justin, I have to ask you the question that I ask all my guests, and it ties to the title of the podcast: when you hear that phrase, elevate your AIQ, what comes to mind?

[00:41:41] What comes to mind is all of the discussions that I've been in about artificial intelligence where people's deeply rooted fears come to the surface.

[00:41:49] And how much capture there is of that fear, or, on the other hand, over-optimism about some future.

[00:41:58] So I'm excited about people developing the ability to engage with tools practically and understand how they can apply them in their own lives.

[00:42:07] and how we can apply them at work to create outcomes that we want, and to do that

[00:42:13] without being captured by other agendas and carried away by either fear or over-optimism.

[00:42:20] And that's me being the boring guy, but I think that requires a bit of an AIQ to pick apart the terminology that we use. What are we talking about when we talk about AI?

[00:42:30] Are we talking about something that has independent volition?

[00:42:35] Or are we talking about a system that optimizes something on the basis of data that we've got? And sometimes I find myself in conversations where we are confusing lots of terms and people are getting excited, and that's because of some fear that they're bringing to the table.

[00:42:55] And so really engaging with that, I think, is what AIQ means for me: going deep enough that you can come to the table with an informed perspective on what we're trying to achieve here and what we're talking about when we use the language that is so popular.

[00:43:10] Absolutely, fantastic. Now, I think it's easy to get overwhelmed, and it's easy to trust what you get

[00:43:17] as an output from some of these tools. So, as I mentioned before, keep thinking critically about what we're trying to accomplish, and does this sound right? Even if it sounds right,

[00:43:28] maybe I should double-check it, get a second opinion. And it's easy to get overwhelmed just because there's so much; it's dominating the headlines, there are so many new tools every week, and it's progressing really fast. But I think, focus on the things that are providing you value in your personal life and in your work.

[00:43:46] And sort of ignore a lot of the noise. Justin, thank you so much for your time and an awesome conversation, as always; really appreciate you taking the time.

[00:43:59] Likewise, I really enjoyed it. Thanks again, Justin.

[00:44:03] All right, that's a wrap for another episode. Thank you, everyone, for listening, and we'll see you next time. Bye-bye.