Ep 42: Gen AI Transforms Learning Experiences and Modernizes Content Management with Jack Houghton
Elevate Your AIQ | December 05, 2024 | 00:42:17

Bob chats with Jack Houghton, co-founder and Chief Product Officer of Mindset AI, about the evolution of AI and its transformative role in learning, development, and content management. Jack shares his journey in AI, insights into generative AI’s rapid advancements, and the challenges of ensuring accuracy and trust in these systems. The conversation explores how AI can serve as a tool to enhance human capability, how organizations can implement AI responsibly, and the disruptive potential of AI agents in reshaping user experiences. They also discuss the critical importance of data quality, accountability in AI workflows, and what the future holds for AI’s integration into our lives.

Keywords

Generative AI, AI in Learning and Development, Responsible AI, AI Agents, Data Quality, Trust in AI, Human-AI Interaction, Workflow Automation, Knowledge Management, Innovation in AI

Takeaways

  • Generative AI is reshaping user interactions with technology, but accuracy and trust are essential.
  • AI should enhance human capability, not replace it.
  • Data quality and transparency are critical for responsible AI workflows.
  • AI agents offer opportunities for efficiency but require careful design to avoid bias or errors.
  • Organizations must focus on verified knowledge and user-friendly AI applications.
  • AI tools can transform learning and development by curating personalized journeys.
  • Responsible AI implementation involves regular audits, observability, and human oversight.
  • Future AI applications may enable highly personalized support systems across various domains.

Sound Bites

  • "Can technology adapt to people instead of the other way around?"
  • "Generative AI makes enterprise content management faster, simpler, and more intuitive."
  • "Trust is critical — we need accuracy to avoid hallucinations in AI."
  • "Every single SaaS application will soon integrate conversational workflows."
  • "The future of AI is all about enhancing, not replacing, human potential."
  • "Data quality is the lifeblood of effective AI systems."
  • "Responsible AI requires accountability and transparency at every step."
  • "AI agents will eventually outnumber humans in multi-agent workflows."

Chapters

00:00 - Introduction to Jack Houghton and Mindset AI

02:21 - Early AI experiences and the evolution of machine learning

03:39 - The impact of generative AI on content management systems

05:28 - Challenges of trust, accuracy, and hallucinations in AI

07:28 - Target use cases for AI in learning and development

09:10 - AI coaches and performance support tools

11:19 - Measuring ROI and impact of AI tools in organizations

14:37 - Balancing efficiency, trust, and employee expectations with AI

19:04 - The integration of AI agents with legacy systems

26:12 - Rethinking software and SaaS in the era of AI agents

31:08 - Responsible AI: Addressing data quality, bias, and accountability

38:12 - Future scenarios for AI agents and their integration into everyday life

44:04 - Closing thoughts on advancing AI responsibly


Jack Houghton: https://www.linkedin.com/in/jack-houghton1

Mindset AI: https://www.mindset.ai/


For advisory work and marketing inquiries:

Bob Pulver: https://linkedin.com/in/bobpulver

Elevate Your AIQ: https://elevateyouraiq.com

Powered by the WRKdefined Podcast Network. 

[00:00:00] Welcome to Elevate Your AIQ, the podcast focused on the AI-powered yet human-centric future of work. Are you and your organization prepared? If not, let's get there together. The show is open to sponsorships from forward-thinking brands who are fellow advocates for responsible AI literacy and AI skills development to help ensure no individuals or organizations are left behind. I also facilitate expert panels, interviews, and offer advisory services to help shape your responsible AI journey. Go to ElevateYourAIQ.com to find out more.

[00:00:28] Hey everyone, it's Bob. Welcome back to Elevate Your AIQ. Today I'm excited to have Jack Houghton, co-founder and chief product officer at Mindset AI, join me for this episode. Jack and his team are doing amazing work in leveraging AI to transform learning and development and content management, all while keeping trust and responsibility at the forefront, of course. In our conversation, we explore how generative AI is reshaping user experiences, the challenges of ensuring accuracy and accountability, and the future potential of AI agents to enhance human

[00:01:08] capabilities. If you have not heard AI agents and agentic workflows are very hot topics. This is a fascinating discussion on how AI can truly elevate both individuals and organizations. I learned a lot myself, and admittedly, Jack scared me a little bit. But nonetheless, I think you're really going to enjoy this conversation. Thanks for listening.

[00:01:28] Hello, everyone. Welcome to another episode of Elevate Your AIQ. I'm your host, Bob Pulver. With me today is Jack Houghton from Mindset.ai. How are you today, Jack?

[00:01:37] Doing very well, thank you. How about yourself?

[00:01:40] I'm doing pretty well. Thank you so much for taking some time. I'm really interested in some of the work that you're doing as co-founder and chief product officer at Mindset AI. Tell me a little bit about the impetus for this and what led you to start this startup.

[00:01:55] That's a really good question. I mean, originally, we were in, I guess, old school, uncool AI, so machine learning. This is before, obviously, OpenAI and ChatGPT. We were building a platform which connected content and information to the right people at the right time using machine learning algorithms and tagging and all the things that now seem quite obviously uncool.

[00:02:18] So looking back now, it seems so much less cool and exciting than where we're at today.

[00:02:24] Yep.

[00:02:25] But we kind of started with the fundamental question, which was, can technology adapt to people instead of people adapting to technology?

[00:02:32] So we kind of, as an organization, went, okay, if that question was to be true, what would have to happen? And that kind of set us up on that journey. So we'd need a database and a machine learning algorithm that could capture every interaction of people in the same way as a social media platform would work, to be able to make connections between different events and circumstance and people and things.

[00:02:55] And we'd need to build a technology architecture on top of that that allowed the technology to move around and change. So not this rigid thing that could never change and adapt.

[00:03:05] Obviously, when OpenAI came about, we were day one in implementing different models into the platform.

[00:03:17] That's been the journey ever since. So we're very much still doing very similar stuff now.

[00:03:24] But we work with learning and media industries typically at the moment.

[00:03:29] Okay. So certainly I've been around a while. I saw that first wave.

[00:03:35] I was at IBM for a long time working closely with IBM Research to try to connect them to clients and let clients see what's coming next.

[00:03:45] And this predates a lot of even the machine learning and stuff we saw with eventually IBM Watson.

[00:03:51] But some of the expert knowledge systems, these unbelievably complicated sort of decision trees and things like that.

[00:03:58] And like you said, it was very rules-based, very structured, very time consuming.

[00:04:04] And for an individual organization, you'd basically have to go in and do a pretty significant project to even make that work at a reasonable level.

[00:04:14] And in hindsight, seeing what some of the AI agents and some of the more recent advancements stemming from generative AI.

[00:04:23] I mean, it's really incredible. And it's what I think you and I have been hoping for all these years, right?

[00:04:30] That we'd get to that level of intelligence, customization, personalization, et cetera.

[00:04:35] Yeah. 100%. I mean, I don't think many people in the world got the opportunity to foresee how fast things would grow and change, to be honest.

[00:04:44] I mean, it for the first time ever probably made enterprise content management sexy and cool and useful and important.

[00:04:50] But, yeah, 100%. I mean, our team, the other members of the co-founding team, their last business was one of the market-leading legal technology businesses in the enterprise content management space.

[00:05:06] And the sort of things we can do now would have only been dreamt of.

[00:05:08] But, I mean, the teams, that platform, that business was, you know, 39,000 global enterprise, you know, clients.

[00:05:16] And really interestingly doing the same things today, but now with language models that make it, to be honest, 10 times faster, simpler and easier.

[00:05:25] I think more recently, what builds on top of even those innovations is the accuracy, right?

[00:05:33] Like if you're going to all of a sudden give everyone, because we saw this with enterprise search, right?

[00:05:38] Like you've spent so much time searching for information as opposed to having it understand what you're looking for and being able to more quickly discover the right information that you're looking for.

[00:05:49] And so, as we've seen with a lot of the generative AI solutions, if it's not done properly, I mean, you're just going to continue to get, you know, what they call hallucinations.

[00:05:59] And you're going to have to be able to trust these systems, right?

[00:06:02] So the accuracy improvements, I know, as you said, you've been working with OpenAI, you know, some of the improvements with the newer models is pretty significant.

[00:06:11] So obviously when AI came out, and when I say AI, I mean large language models and ChatGPT and all of what came out, I think the perception was that,

[00:06:19] I think people initially went, this is a complete and finished product.

[00:06:22] And it wasn't just because we were getting value in some way.

[00:06:25] It wasn't.

[00:06:26] And I guess the fallacy that people have been probably coming to grips with, especially the people building in this space, and I guess it's a much slower learning curve for those that are purchasing, using, testing, is: just because it seems good at search doesn't mean it is.

[00:06:42] It understands natural language and then can query.

[00:06:44] But actually, as a fundamental baseline, the idea of obviously RAG has now become a major industry.

[00:06:51] Like RAG is now, you know, a massive topic with many different versions and implementations and middleware products and all sorts of stuff there.

[00:07:00] And this assumption that RAG plus large language models equals finished product that can search, as you were saying, is just a complete fallacy.

[00:07:07] It's so far from the truth to what is actually required to make something helpful versus give me an answer engine, essentially.

[00:07:15] Like give me an answer engine that's a vague collection of important information that is somewhere in a knowledge base.

[00:07:20] Right.

[00:07:22] So, I mean, I know you've been focusing on like learning and development and media.

[00:07:29] You said, I mean, certainly significant, you know, content management systems and things like that.

[00:07:34] But for a lot of organizations, I mean, I imagine you would target like consulting companies, you know, firms that are sort of based in, you know, the collective intelligence and knowledge of a lot of pretty smart people, a lot of client engagements or whatever.

[00:07:49] Or do you, is that a particular, you know, target for you?

[00:07:52] Is that just the drop in the bucket for compared to other use cases?

[00:07:56] It's a great question.

[00:07:57] I guess.

[00:07:59] Hey, I mean, as an organization, we don't want to play, I guess, anywhere that people at places like Microsoft are playing right now.

[00:08:05] You know, they can go fill their boots over there.

[00:08:08] Yeah.

[00:08:08] Very focused at the moment.

[00:08:09] I mean, ultimately, our mission is to make expert-verified knowledge accessible instantly to at least 100 million people.

[00:08:19] And until we achieve that, then, you know, we're very happy and focused on where we're at.

[00:08:24] And I guess to answer your question really directly, that means we get to work with some of the biggest ed tech educational technology providers, learning software, all the way to, I guess, school level and education level.

[00:08:38] Consultancy in the form of people who deliver e-learning and training online and information online.

[00:08:43] So often the people who are typically the ones that provide content and training to other organizations.

[00:08:50] So we get to work with that incredible market, which gives us a fantastic, I guess, oversight and vision on where the market is actually at.

[00:08:58] Yeah.

[00:08:58] Each one of our customers also works with hundreds of other customers as well.

[00:09:01] Right.

[00:09:01] But I think the really biggest, I guess, the encapsulation of what people typically want to, I guess, implement is an AI coach, AI tutor.

[00:09:09] I guess what you probably call that.

[00:09:12] That's a major area of innovation, I guess, over the next probably like 12 to 18 to 24 months, I think.

[00:09:17] Because what people think about as an AI coach or tutor scratches the very surface of what that could be.

[00:09:23] No, that's awesome.

[00:09:25] I've had a lot of conversations of late here in the U.S. with educators and just that subject in general, just because there's so much inefficiency.

[00:09:35] Some of this ties to public sector more broadly, right?

[00:09:39] Even in government.

[00:09:39] I mean, it's like this comical maze of clicks and loops trying to find information when it seems like you could just ask a natural language question.

[00:09:50] And then to your point, if you knew enough about the types of things that I might be interested in, knew enough about me, you could sort of work backwards and figure out the context of what I'm looking for.

[00:10:02] And I would say, if you think about the scale at which, if you can save me two hours, think about those 100 million people and how much overall time you're saving in aggregate.

[00:10:13] It's a pretty incredible opportunity.

[00:10:16] Yeah, 100%.

[00:10:17] I mean, most of, I guess, the use cases our customers work with us on involve what you'd probably call AI coaches; imagine that's the product.

[00:10:24] The use cases typically are launching an academy of some sorts that is driven by AI coaching and learning.

[00:10:30] Performance support is the other one.

[00:10:33] And search, I guess, with performance support.

[00:10:35] So turning their existing content and product information into a performance support tool in the flow of work for our customers.

[00:10:42] So they might, they build agents on our platform, put them inside their platforms or they have their own.

[00:10:47] They launch it via our web and mobile app or whatever.

[00:10:50] And then give it to employees, give it to their customers.

[00:10:53] They put it inside a Slack channel, for example, because we just put it in any way you want.

[00:10:58] Yeah.

[00:10:58] So people can access information.

[00:10:59] I guess from an ROI perspective, though, it's an interesting one you're discussing.

[00:11:05] Because I guess the easy story to tell is always saving time.

[00:11:10] I think organizations, buyers, CFOs or whatever, whoever is the person pulling the purse strings,

[00:11:19] hasn't got to grips with how we measure this thing yet, I don't think.

[00:11:23] Agree.

[00:11:24] I think the problem, you can tell me if I'm off base, but I think you're kind of a straight shooter

[00:11:29] and you will call out some of the nonsense that you see in the headlines and things like that, right?

[00:11:34] So I'm the same way.

[00:11:36] I feel like for 2023 and the first half of 2024, everything was around an individual's productivity, right?

[00:11:46] And an organization is made up of teams and divisions and departments and things like that.

[00:11:51] So if you're not doing things to think about how you improve the end-to-end experience, the metrics that matter,

[00:12:01] or the ROI like you were mentioning before, then what are you really doing?

[00:12:05] I mean, you can't just have one cog spinning at 10x and then everyone else is at 1x or 2x.

[00:12:11] I mean, things are going to break.

[00:12:12] So you've got to think more holistically about the impact and the scale and what matters.

[00:12:17] So there's a card company.

[00:12:19] I can't remember the name off the top of my head.

[00:12:20] I should be able to.

But I guess they're a technology platform provider that helps enterprise organizations manage their payments.

[00:12:30] So they have actually a lot of the data on what types of purchases you're making.

[00:12:44] If you're raising a procurement commitment versus a credit card spend, it gives people a signal of where enterprises are spending.

[00:12:44] Is it team leaders purchasing three or four licenses to essentially a POC or a test?

[00:12:48] Or are people committing to something for a long period of time?

[00:12:51] Now, that commitment curve is going up for most organizations as they've gone through pilots and the testing.

[00:12:56] But a lot of this is still in test phase for a lot of people.

[00:12:59] And I guess the challenge of justifying an ROI here, as you say, team-wide is really impactful.

[00:13:07] However, the idea of this sort of saving time thing, I'm always unconvinced by as a proposition, really.

[00:13:14] Because unless you can articulate to a CFO, you can hire less people three years into the future.

[00:13:20] That is saving time to save money.

[00:13:23] It's not saving any money if you're not hiring less people.

[00:13:27] So the only way that ROI comes true is also on the, I guess, how many less people do I need to hire in a forward-looking financial model?

[00:13:37] It's a difficult thing to predict at the moment.

[00:13:40] And it's an interesting one because I think typically most organizations try to grapple with this, how we're justifying ROI and impact.

[00:13:48] But that's the journey of most AI providers, though, as you perfectly, I think, summed up.

[00:13:52] Which is, how do we move into increasingly more valuable tasks, and more of those tasks, for a person or team?

[00:14:03] Because suddenly you've now moved from a task that would take X amount of time and therefore this cost to something that would now take away four or five steps to that process.

[00:14:13] And therefore the value, you're actually moving up the value chain there quite quickly.

[00:14:17] And that's the race.

[00:14:18] Right.

[00:14:19] I think you've also got, especially in the case of your customers, you're elevating their knowledge and their ability to improve their own support of their clients or get their job done more effectively.

[00:14:33] It's not all about efficiency.

[00:14:36] It's about how do you make the organization more effective?

[00:14:39] And what do you do with the time that you've freed up?

[00:14:42] How are you sort of reinvesting that cost avoidance to that cost savings, moving people, you know, freeing them up to work on other projects?

[00:14:50] I mean, maybe you avoid some of the anticipated hiring because you've got people already with the skills and now they have a little bit more bandwidth to take on more work.

[00:15:00] They may not love that.

[00:15:01] Like, here I am thinking I learned all these tools and now I can work more efficiently and have this work-life balance.

[00:15:07] But now I'm going to get more work instead to fill up some of that time.

[00:15:11] But I mean, these are tough, you know, situations that, you know, leadership needs to navigate.

[00:15:17] But you've got to understand what the expectations are from employees and make sure you maintain that trust.

[00:15:23] I guess trust is a two-way street, right, between employees and employers.

[00:15:27] But yeah, I just think there's a lot of value that maybe people just aren't tracking the right metrics to see it.

[00:15:35] Yeah, the classic innovator's dilemma, which is, you know, by the time you've justified it and gotten organized, it's probably a bit too late a lot of the time.

[00:15:45] Always be willing to rip up everything and just go for it.

[00:15:49] And the big tech companies have understood that really well.

[00:15:51] And in many ways, that's why there's a lot of noise around AI typically.

[00:15:56] You know, big bets are being placed by big companies, publicly traded companies.

[00:16:01] So Wall Street suddenly wants to have a view on what's the expected returns of this X billion dollar investment to say meta or whatever it might be.

[00:16:10] And actually, we're just in the innovation stage.

[00:16:12] And that innovation stage means all the way down to measurements, as you say.

[00:16:16] Yeah.

[00:16:16] How we measure impact and return.

[00:16:18] And therefore, that just creates a lot of unnecessary noise around it.

[00:16:20] And actually, it's such a baby technology that has been around for three years or less, you know, in terms of being widespread in the cultural zeitgeist.

[00:16:31] But there's a lot of attention on it all the time.

[00:16:33] Because it's exciting, or it feels scary, or it's easy for a newspaper to get clicks, you know, if they put up a headline saying something flashy.

[00:16:41] Right.

[00:16:42] But, yeah, a lot of the time people just need to relax a little bit.

[00:16:46] Yeah.

[00:16:47] I feel like some of it was sort of built up because 10 years ago, people were doing, they were moving beyond, maybe it wasn't 10 years, but people think of automation as simply robotic process automation.

[00:16:59] And that's not true at all.

[00:17:01] We could do, you know, more intelligent automation, you know, many years ago, even with, you know, machine learning based, you know, predictive AI and some, you know, back in that timeframe.

[00:17:14] But, I mean, I saw this when I was leaving, when I left IBM and went to NBC Universal, I had to put together an automation strategy.

[00:17:21] And, I mean, I remember very clearly there was a maturity curve that said you can start with robotic process automation, but then there's intelligent automation and even cognitive automation, which was trying to incorporate some of that, you know, machine learning based AI.

[00:17:36] And, you know, machine learning based AI still had rules, right?

[00:17:40] It still wasn't doing what, you know, an agentic workflow today would do, but it's not like this stuff didn't exist.

[00:17:47] Some of the things we're seeing now, some of the headlines we're seeing, didn't come, you know, out of thin air.

[00:17:53] Some of it seemed like just a natural sort of evolution of what we were talking about 10 years ago, but most people just weren't that far along the adoption curve at that time.

[00:18:04] Yeah, 100%.

[00:18:05] It's definitely that the ability to use it has moved from organizations, typically operations or IT teams within a big company, to now the everyday builder.

[00:18:20] And therefore it can spread as an idea or as a concept as if it's a new thing.

[00:18:24] I mean, a lot of this is building on the frameworks that were provided by RPAs or whatever other systems came first.

[00:18:32] And everything else just kind of slotted on top of that.

[00:18:34] So do you guys find that you're going in and just basically displacing people that have RPA, that tried to find some, you know, rough way to accomplish what you've given them a much better approach to do?

[00:18:52] I mean, are you displacing people that sort of went down that path already saying like, yeah, that's like old school or is it really sort of net new?

[00:19:01] These guys are just waking up and, you know, it's a, it's sort of a fresh, fresh start at all this.

[00:19:06] From an RPA perspective, we don't, that's not typically what we sell into and we work with on, with people on.

[00:19:12] I mean, the people who manage that in an organization is operations in IT.

[00:19:16] Yeah.

[00:19:16] They manage that type of activity.

[00:19:18] In terms of AI agents and large language models, all they're doing is pretty much just sitting across RPA processing.

[00:19:27] Hey, this is William Tincup, WRKdefined.

[00:19:29] Hey, listen, I'd like to talk to you a little bit about Inside the C-Suite, the podcast.

[00:19:34] It's a look into the journey of how one goes from high school, college, whatever, all the way to the C-Suite.

[00:19:40] All the ups and downs, failures, successes, all that stuff.

[00:19:44] Give it a listen.

[00:19:45] Subscribe wherever you get your podcasts.

[00:19:47] It's a much easier, faster way to extract data from this place, do something with that data, push it into a system, but no longer is it as rigid with so many decision trees.

[00:19:59] There's less decision tree need.

[00:20:01] So it's reduced the burden on the person.

[00:20:03] What is happening is a new application layer to make RPA and language models work better together.

[00:20:10] Obviously, if you're a legacy system of 10, 15 years, you've gone through many different iterations of thinking.

[00:20:16] And now this has come along and said, well, it's very frustrating because everything we've just been doing is kind of useful, but not that useful.

[00:20:21] Yeah.

[00:20:22] So I think there'll be, there's a displacement there.

[00:20:25] But to be honest, I think it's super early days for the idea of RPA being something that's everywhere for most organizations.

[00:20:32] It's super early because there's a great analogy.

[00:20:36] It's more of a product marketing kind of idea and product thing is you give someone a box of Lego.

[00:20:42] They go, what do I build?

[00:20:43] Now, if you give someone the Lego Star Wars pack, they know exactly what to build.

[00:20:48] And they go, I love Lego.

[00:20:49] That's really helpful.

[00:20:50] Right.

[00:20:50] And that is the process that the market is now going through is, okay, we've got this really powerful thing.

[00:20:56] How do we make this easy for most people to consume?

[00:21:00] Yeah.

[00:21:01] No, that makes sense.

[00:21:02] I will tell you that I just started this bootcamp to really, you know, learn by doing and build my own agentic workflow for myself.

[00:21:11] And already just with the first couple sessions, I'm already really excited at what it can do.

[00:21:19] Just even for me as a solopreneur, whether it's my podcast, you know, workflow or my sales marketing outreach, you know, workflow and things like that.

[00:21:29] And so, I mean, I'm excited for organizations that are ready to embrace, you know, platforms like Mindset because I just think it's eye-opening.

[00:21:43] And I don't think people necessarily appreciate, I guess I'm a process guy at heart, even though I'm a technologist.

[00:21:52] I started my career doing process, you know, re-engineering work and redesigning workflows and stuff like that.

[00:21:58] And so, I don't think people fully appreciate just how much value you can get out of this.

[00:22:04] And it's not just like cost savings.

[00:22:05] It's like even just the mental drain that some of these steps take.

[00:22:12] I don't think people are fully appreciating how much time they could be saving because they're so just used to this is how things are done.

[00:22:20] And I feel like there's like this awakening coming when people see, you probably get that when people see your demo.

[00:22:27] Yeah, 100%.

[00:22:27] I mean, I guess my view is that every single SaaS application will have what you're describing built in.

[00:22:35] Every single one of them.

[00:22:36] Almost every single one of them.

[00:22:37] The new interface is a conversation.

[00:22:40] And the thing that sits behind a conversation is out-of-the-box workflows that do things for you.

[00:22:46] In many ways.

[00:22:47] And so, for many people, you're probably on the innovator end, the "let me build and get stuck in" type.

[00:22:53] Many people, I'd say 80% of people will not want to go near that until it's ready.

[00:22:58] You know, it's the difference between someone in the 70s or 80s who built their own computer from a kit versus someone who waits for the Mac to come out because it's beautiful.

[00:23:11] But I certainly think for those people who don't want to touch that, it'll be in almost every SaaS application.

[00:23:16] And really, I think, could even eliminate the need for most SaaS.

[00:23:20] A good example is the computer use.

[00:23:24] If you saw Claude's new announcement, it's computer use, for those who are listening and haven't seen that.

[00:23:31] Using vision and stuff like that, it's able to perform actions across your computer for you.

[00:23:38] It helps you build things, do things, do whatever.

[00:23:41] The really powerful thing there is it actually built code to build a website.

[00:23:46] And then you say, no, no, I don't want it like that.

[00:23:48] Do it like this, using your laptop or PC.

[00:23:50] So the reason that's so interesting, if you abstract that away from the use case of build me this thing, currently you have a CRM.

[00:23:57] That CRM has to be built over many, many, many years.

[00:24:00] It's very complex.

[00:24:02] And technology is built on this.

[00:24:04] Now, in theory, all you really need is to say, here's the data from a deal.

[00:24:09] And it can just build you the UI you need and it's just stored in a database that's accessible.

[00:24:13] Organizations have literally been putting their data together in a way that's accessible for 10 years.

[00:24:18] It's a CIO role.

[00:24:20] Now, AI can just build custom UI on the fly for when it's needed.

[00:24:25] So suddenly this idea of process and experience of people could even eliminate much of the complex SaaS that we're used to.

[00:24:32] And it seems like a big thing.

[00:24:34] But I mean, if you abstract away and go, how should a user feel and experience technology?

[00:24:40] We don't care about UI and learning how to use something.

[00:24:43] We just, you know, Apple got it right.

[00:24:45] Simple.

[00:24:46] Right.

[00:24:46] That's a really thought-provoking and disruptive thought, right?

[00:24:52] Because these platforms are getting, trying to add more and more to these platforms.

[00:24:59] And for buyers, for CIOs, for CTOs, or even, you know, in your case, maybe it's a chief learning officer or CHRO or something like that.

[00:25:08] But for any leader who's involved in technology, you know, purchasing decisions and uses the technology themselves, you have to call into question why you need all these things.

[00:25:22] Now, as you mentioned, I mean, we're still very early days in a lot of this.

[00:25:28] But in theory, you have all this.

[00:25:28] I don't want to use the term fine-tuning, because I know that means something different in generative AI terms.

[00:25:36] But you have such flexibility in how you design all of this that's not rules-based, that's not rigid, that doesn't have to follow normal, you know, software development cycles necessarily.

[00:25:48] And so this is disruptive, I agree, to just the entire software, you know, SaaS market.

[00:25:56] So when TikTok and platforms like that came around, mainstream big media wasn't replaced by another big media.

[00:26:05] It was replaced by thousands of small media.

[00:26:08] So creators are small media publishers now.

[00:26:11] Now, I'm not trying to predict the end of SaaS, but an enterprise organization spends many millions of dollars, plus a huge overhead tax of time and effort and stress,

[00:26:26] gluing together different systems to perform many things that should be quite, in many ways, basic and simple.

[00:26:32] But actually, at the core of it, data is the new important asset.

[00:26:37] So as long as the data is in check, over time, there's no reason you can't start to explore those types of things.

[00:26:44] And I mean, it's obviously really difficult to abstract ourselves away from this world of something that we have a framework for.

[00:26:49] But if there is a SaaS that enabled you to build SaaS like that on the fly using data, who knows?

[00:26:55] It could very well be a new type of SaaS model, because agents are a new form of SaaS.

[00:26:59] That's the best way to think about an AI agent.

[00:27:01] It's just a SaaS application now.

[00:27:03] Right. Yeah. And I've heard people use this term.

[00:27:07] I don't know if it's going to catch on, but like instead of software as a service, but service as software.

[00:27:14] Is that how you think about it?

[00:27:16] Yeah, it's not a bad description. It's probably quite apt.

[00:27:19] Yeah. In terms of the customization, does the average user get to customize their own agent for learning or how does that work?

[00:27:30] It's a really good question at the moment.

[00:27:32] So we've got essentially, let's call it another module, another part of the product that we've been working through, which allows anyone to build their own in just a few steps.

[00:27:40] Very simple.

[00:27:41] The workflow part is something that kind of becomes more complex, you know, but not too complex, especially in the context of learning.

[00:27:49] But in most use cases, actually, people don't want to do that.

[00:27:54] People actually want to build, you know, they've got masses of expertise and content and information on, let's say, something very generic and simple, like leadership principles or leadership or whatever it might be.

[00:28:05] They want to create an agent that acts like a coach and give users access to it.

[00:28:11] So that's what we do. We work with LMSes.

[00:28:14] We work with different organizations across that spectrum.

[00:28:16] We provide that to enterprise organizations.

[00:28:19] In most of those use cases, it's the administrator or the team leader who gives it to their employees and their learners, versus learners building it on the fly like a GPT, mainly because it's about verified knowledge and information.

[00:28:33] Right.

[00:28:34] That's actually one of the core parts.

[00:28:35] It's almost like a cross between knowledge management and learning management, if you know what I mean.

[00:28:38] Okay, sure.

[00:28:39] Although it can help with both, the knowledge management part is an important piece of making something helpful, because you don't want to deliver training to people on random information.

[00:28:49] You need it to be verified, expert information.

[00:28:52] Yeah.

[00:28:53] It's consistent as well, right?

[00:28:54] Yeah.

[00:28:55] Make sure everyone's getting the same information.

[00:28:57] That could be a disaster.

[00:29:00] Otherwise that gets out of hand.

[00:29:01] One of the things that's come up recently, so one of the areas that I focus on is responsible AI.

[00:29:10] And I tell people that when it comes to responsible AI, we're all responsible in some fashion.

[00:29:16] So I know we haven't seen a lot of organizations put together required AI literacy training or anything like that yet.

[00:29:24] But one of the things as I try to keep tabs on legislation and things like that around the world and how people are thinking about it, governance and ethics and fairness, there have been some folks that I follow that are questioning whether agentic workflows kind of break some of this or make it more difficult to actually have the traceability or observability.

[00:29:52] Of whether there's bias in any of the data.

[00:29:56] I was just curious if you think about that, especially in your chief product officer role and how you're designing and building some of these things, just how you think about that.

[00:30:07] Is it about bias, auditability, and understanding of what data is passing through these multiple steps of a workflow?

[00:30:13] Do you lose track of the data and the algorithm?

[00:30:18] And how do you know that the output from one agent passes the right, I guess, fair information to the next agent or whatever?

[00:30:30] This is just sort of an open-ended question, however you want to interpret it.

[00:30:36] Hey, everybody.

[00:30:37] I'm Laurie Ruettimann.

[00:30:38] What are you doing?

[00:30:39] Working?

[00:30:39] Nah.

[00:30:40] You're listening to a podcast about work and that barely counts.

[00:30:44] So while you're at it, check out my show, Punk Rock HR, now on the Work Defined Network.

[00:30:49] We chat with smart people about work, power, politics, and money.

[00:30:53] Are we succeeding?

[00:30:54] Are we fixing work?

[00:30:55] Eh, probably not.

[00:30:56] Work still sucks, but tune in for some fun, a little nonsense, and a fresh take on how to fix work once and for all.

[00:31:04] It's a tough question.

[00:31:05] I mean, that particular example is quite painful, I guess, because that's one of the big areas that, I guess, even I am quite worried about.

[00:31:15] From a data in perspective, you should be able to understand what data is actually inside the knowledge that you're giving access to.

[00:31:23] So, I mean, for example, when, so we have clients who ingest, say, 100,000 pieces of content, huge amounts of information and data.

[00:31:32] So much that a mere human would never comprehend what's inside that stuff.

[00:31:38] So we had to build technology to surface it, and I guess what we have is a knowledge graph.

[00:31:45] So we use multiple different databases to map concepts, people, and information, and the interconnected nature of that per client.

[00:31:54] So each client has their own knowledge graph.

[00:31:57] And then, therefore, we can surface the actual data inside the content that's ingested.

[00:32:03] So you can, from our system, you integrate into your SharePoint, an AWS bucket, wherever it might be.

[00:32:10] And we actually surface all the expected and unexpected potential concepts and pieces of data inside it.

[00:32:15] So: this is expected, this is unexpected, this is what it's covering.

[00:32:18] So we use different AI to summarize all the data inside it and present that.

[00:32:24] And then equally a system for being able to go through and see, okay, where's the data coming from?

[00:32:28] So I always use an example as part of a demo, actually, from Matthew Syed's book.

[00:32:33] There are lots of mentions of the CIO.

[00:32:35] And this agent is supposed to be helping people with communication, let's say.

[00:32:38] Our agent is supposed to help with this particular thing.

[00:32:41] But why is the CIO in there?

[00:32:43] That's a weird concept.

[00:32:44] But actually, it's part of a book on leadership by Matthew Syed.

[00:32:48] Anyway, but we can then trace exactly where that piece of data came from, like the exact segments or chapters of a piece of content, information, knowledge, whatever it might be that that's come from.

[00:32:59] And you can snip that out.

[00:33:00] So that's a really important part, the data: garbage in, garbage out, bias in, bias out.

[00:33:05] That source, you need to be able to understand that.

[00:33:08] People panic about the quality of their data, what's inside it.

[00:33:11] But if you have a system that, like ours does, you should be able to access and understand what's inside it straight away when you ingest it before turning on access to any agentic kind of workflow.
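To make the ingest-then-trace idea concrete, here is a rough editorial sketch of data-in provenance: index which concepts appear in which segment of each document, so any surprising concept (like the CIO example above) can be traced back to its source and reviewed or snipped out before agents get access. All names and the toy extractor are assumptions, not Mindset AI's actual system:

```python
from collections import defaultdict

class KnowledgeIndex:
    def __init__(self):
        # concept -> list of (doc_id, segment_index) provenance records
        self.provenance = defaultdict(list)

    def ingest(self, doc_id, segments, extract_concepts):
        """Index each segment of a document under the concepts it mentions."""
        for i, segment in enumerate(segments):
            for concept in extract_concepts(segment):
                self.provenance[concept].append((doc_id, i))

    def surface_concepts(self):
        """Everything the ingested content actually covers, expected or not."""
        return sorted(self.provenance)

    def trace(self, concept):
        """Exactly which segments a concept came from."""
        return self.provenance.get(concept, [])

# Toy "extractor": in practice this would be an AI model, not a keyword set.
def extract_concepts(text):
    known = {"cio", "leadership", "communication"}
    return {w for w in text.lower().replace(".", "").split() if w in known}

index = KnowledgeIndex()
index.ingest("syed-book", ["A chapter on leadership.", "The CIO story."],
             extract_concepts)
print(index.surface_concepts())
print(index.trace("cio"))  # points at the exact segment the CIO came from
```

The point is only the shape of the workflow: surface what the content covers, then trace any concept back to its exact source segment.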

[00:33:23] So as soon as you switch that on, data flows, you run a process.

[00:33:28] That's where the problem is.

[00:33:30] That's your ingestion process?

[00:33:31] That's in terms of data in, yeah.

[00:33:33] Auditability throughout the process is also really important.

[00:33:36] So you should be able to track every single question, every single thing, every single interaction, all the data that's been processed.

[00:33:42] So, I mean, any system that's quite robust should have auditability built in, simple auditability.
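A minimal sketch of what that per-interaction auditability could look like, again purely illustrative (the field names and event types are the editor's assumptions, not any vendor's schema):

```python
import time

class AuditLog:
    """Append-only record of every question, retrieval, and answer
    an agent processes, so the full trail can be replayed later."""

    def __init__(self):
        self._records = []

    def record(self, agent, event, payload):
        entry = {
            "ts": time.time(),
            "agent": agent,
            "event": event,      # e.g. "question", "retrieval", "answer"
            "payload": payload,
        }
        self._records.append(entry)
        return entry

    def trail(self, agent=None):
        """Replay the trail, optionally filtered to a single agent."""
        return [r for r in self._records
                if agent is None or r["agent"] == agent]

log = AuditLog()
log.record("coach-agent", "question", {"user": "u1", "text": "How do I improve?"})
log.record("coach-agent", "retrieval", {"source": ("syed-book", 1)})
print(len(log.trail("coach-agent")))
```

In a multi-agent system the same idea applies at every hop, which is exactly why the scale question that follows gets uncomfortable.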

[00:33:48] Agreed.

[00:33:49] Agreed.

[00:33:50] Awesome.

[00:33:50] At scale it becomes quite weird and scary, I guess, because in a multi-agent system with, say, 50 agents, 60 agents, 100 agents, 3,000, 10,000, a million, very soon there will be more agents than people.

[00:34:04] I can guarantee that.

[00:34:06] And kind of a spider web of interactions and engagements.

[00:34:10] Yeah.

[00:34:11] No, it's crazy.

[00:34:12] Yeah.

[00:34:12] And I don't know, I guess I'm cautiously optimistic that those types of things will scale well.

[00:34:20] But, you know, I think as long as you have the right mechanisms in place and you've got humans in the right spots in the loop or multiple loops, I'm not even sure what that's going to look like.

[00:34:31] But I think that's what makes people nervous, right?

[00:34:34] When you give these agents agency and autonomy, like, what are the ramifications of that?

[00:34:41] But I like your approach in terms of the thoroughness of looking at it and the observability of all of that.

[00:34:49] Yeah, I'm more concerned about, say, an education system, when you apply for a school in an area, that also has access to other application data, insurance, location, other forms of data based on social media activity or other forms of activity.

[00:35:04] Because that's where this will start to go is just like people buy social media data to inform, you know, I think it was Netflix that did a, you know, almost billion dollar partnership with Meta to take data to inform their models for recommendations.

[00:35:21] When we get to that level, when AI agents are at a bit more scale, suddenly you have, you know, an application that knows you were dating a person.

[00:35:30] You know, it could be as simple as dating someone from an area that was deemed dangerous by an AI system.

[00:35:35] And suddenly you apply for a school or a health care system and you're rejected.

[00:35:40] That's what I'm actually, in many ways, more concerned about in the immediate term, over the next five years, because that will happen.

[00:35:48] Those are painful circumstances that people will actually have to go through.

[00:35:52] And in many ways, you can try and stop it at the source.

[00:35:55] You could try and stop innovation.

[00:35:56] But the big thing is, I think, as long as we don't replace every interaction with an organization or a business with more agents that make it very difficult for a human to say, look, what's happening to me here?

[00:36:09] I'm being unfairly treated.

[00:36:10] And that's that ability for a human to contact another human and say, please help fix this.

[00:36:16] I've been rejected from my school application because I interacted with the person in somewhere that was deemed unsafe, which is a ridiculous thing, but will happen.

[00:36:25] And that's going to be a really important practice and law that will have to be brought in.

[00:36:30] Yeah.

[00:36:31] I think you just scared the shit out of a lot of people.

[00:36:34] But I don't think that's far-fetched at all.

[00:36:38] I mean, this is why we have these controls.

[00:36:40] This is why we need to balance responsibility with innovation.

[00:36:44] And we've got to just mitigate some of these risks.

[00:36:47] We can't eliminate them.

[00:36:48] But you're absolutely right.

[00:36:50] I mean, we see this.

[00:36:51] We see hints of this.

[00:36:52] And anytime you're a victim of identity theft, like all of a sudden that data was linked to your credit report and all these things.

[00:37:01] And now you can't get a car or a house.

[00:37:03] You can't apply for a college loan.

[00:37:05] You can't do all these things.

[00:37:06] And so that's why we locked down medical records.

[00:37:09] And, you know, personally identifiable information.

[00:37:11] And that's why privacy and cybersecurity are part of responsible AI.

[00:37:16] So, yeah, I'm glad you brought those things up.

[00:37:19] Now, I think the most irresponsible thing is to try and hold back, because actually, in many ways, if OpenAI or another provider suddenly came out with what they defined as superintelligence, the world would just collapse.

[00:37:32] There's no adjustment.

[00:37:33] There's no small test that creates a huge, horrible crisis but affects only a small group of people in the grand scheme of the billions of people on Earth.

[00:37:42] So I think the most irresponsible thing to do is to not do, actually.

[00:37:46] You know, we need to go through the pain and realize what's real, what's not.

[00:37:49] Yeah, great.

[00:37:50] I mean, you're sort of at the forefront of a lot of this.

[00:37:53] I mean, is there anything that you're particularly fond of or scared of with the latest AI advancements?

[00:38:01] I'm really interested in the journey of AI agents from a typical application.

[00:38:09] So let's say a ChatGPT or whatever system you are interacting with these things in, to coming outside of our screens and into our lives now.

[00:38:18] You know, that's something that I'm really, I find very interesting.

[00:38:22] You know, I think a good example: Elon Musk accidentally bought Twitter for whatever reasons, but it actually turned out to be a fantastic data source.

[00:38:35] And I think that data source will start to be brought into Teslas and be used increasingly so.

[00:38:40] So I'm interested in how AI agents as a concept or someone or something that understands you and follows you everywhere suddenly starts to enter every part of your life.

[00:38:50] And I'm both worried and excited by that, you know.

[00:38:54] So this is something I'm looking forward to in the future.

[00:38:57] I mean, from a learning perspective, we talk about an AI coach.

[00:39:00] Yeah.

[00:39:01] An AI coach that can recommend information and understand you.

[00:39:04] Yeah.

[00:39:04] Cool.

[00:39:05] We're well past all that sort of stuff.

[00:39:06] But one, it's about if you imagine a role play and you go, Bob, you expressed interest in becoming better at X.

[00:39:15] Are you OK if I just follow you around for X period of time?

[00:39:19] You'll know you'll be notified.

[00:39:19] And I'm just going to watch you as a friend, because vision or other types of capabilities can actually go and identify all the areas that you can improve on based on a methodology.

[00:39:30] Yeah.

[00:39:31] It builds an action plan that feeds into a separate system, a learning management system or content, and curates a journey for you to improve.

[00:39:38] And you choose its personality and how it knows you and what it's going to help you with.

[00:39:42] And that there is what I look at now going, how do we take that next big leap in that area?

[00:39:48] Yeah.

[00:39:49] I think we can all think of many of our own real life scenarios where that would be extremely valuable as long as you understand the data is safe and we've got the right parameters around that.

[00:40:07] But that's where I'd like to see us go, like solving some of these everyday challenges that maybe people have just, like I said before, just accepted as reality.

[00:40:16] This is life, you know. But, you know, it actually doesn't have to be that way.

[00:40:22] And that's why we've got to move forward, you know, in a positive direction because people will get a ton of benefit.

[00:40:29] So awesome.

[00:40:30] Well, Jack, this has been a really, really intriguing conversation.

[00:40:34] I want to be respectful of your time.

[00:40:36] Thank you so much for coming on the show.

[00:40:39] And I think there's a lot for our listeners to take away from this.

[00:40:43] So thank you.

[00:40:44] Thank you so much.

[00:40:45] It was a pleasure.

[00:40:47] Excellent.

[00:40:48] Thanks again, Jack.

[00:40:49] And thanks everyone for listening.

[00:40:50] We'll see you next time.