Bob Pulver speaks with Tyler Fisk, who is currently the co-founder of two AI consultancies, Light Magic AI and Galactic Ranch, where they design, build, and implement customized AI solutions. Tyler has a knack for seeing what’s next and seizing opportunities, and his backstory is really interesting. Bob and Tyler talk about the evolution of AI in business, the importance of responsible AI practices, and AI’s potential to transform workflows and knowledge work with AI agents. They discuss the challenges and opportunities presented by AI, including its impact on employment and the future of education. Tyler shares insights from his experiences with various AI projects, emphasizing the need for customization and ethical considerations in AI development. The conversation highlights the importance of AI literacy and the potential for AI to help solve many challenges facing organizations of all sizes. It's a longer episode than usual, but well worth your time. And if you are doubting whether AI can handle some of your unique use cases, this episode is a must listen.
Keywords
AI, agents, agentic workflows, automation, entrepreneurship, education, responsible AI, employment, technology, innovation
Takeaways
- AI is transforming business workflows and knowledge work.
- Customization of AI solutions is crucial for effectiveness.
- Responsible AI practices are essential for ethical development.
- AI can help scale businesses and improve efficiency.
- Education systems need to adapt to incorporate AI.
- The future of work will involve collaboration with AI.
- Understanding AI literacy is vital for professionals.
- AI has the potential to address global challenges.
- Ethics in AI development must be prioritized.
- The pace of AI advancement requires continuous learning.
Sound Bites
- "I want to get as many people involved in this space."
- "AI can help us achieve some of those things."
- "They didn't think it could be done."
- "There's no time like right now to begin."
Chapters
00:00 Introduction and Background of Tyler Fisk
02:55 The Evolution of AI in Business
05:56 Understanding AI Workflows and Adoption
08:55 The Role of AI in Knowledge Work
11:50 Ethics and Responsibility in AI Development
14:57 AI's Impact on Employment and Workforce Dynamics
17:53 The Future of AI and Education
20:51 Agentic Workflows and Real-World Applications
24:11 The Importance of Customization in AI Solutions
26:56 Final Thoughts on AI's Potential and Challenges
Tyler Fisk: https://www.linkedin.com/in/tyfisk
Light Magic AI: https://www.lightmagic.ai/
Maven course: https://maven.com/sara-davison/scale-with-aiworkflows-foundations
For advisory work and marketing inquiries:
Bob Pulver: https://linkedin.com/in/bobpulver
Elevate Your AIQ: https://elevateyouraiq.com
Powered by the WRKdefined Podcast Network.
[00:00:00] Welcome to Elevate Your AIQ, the podcast focused on the AI-powered yet human-centric future of work.
[00:00:04] Are you and your organization prepared? If not, let's get there together. The show is open to
[00:00:09] sponsorships from forward-thinking brands who are fellow advocates for responsible AI literacy
[00:00:13] and AI skills development to help ensure no individuals or organizations are left behind.
[00:00:18] I also facilitate expert panels, interviews, and offer advisory services to help shape
[00:00:23] your responsible AI journey. Go to ElevateYourAIQ.com to find out more.
[00:00:39] Hey everyone, it's Bob. If you've been listening to the show for a while, thank you very much. I really
[00:00:43] appreciate it. Feel free to drop me a line if you have any feedback at all. If this is your first time,
[00:00:48] welcome to the show. Today's episode is with Tyler Fisk, who's the co-founder of not one but two
[00:00:53] AI consultancies, Light Magic AI and Galactic Ranch, where they design, build, and implement
[00:00:58] AI solutions for companies of all sizes. Tyler has always had an entrepreneurial spirit,
[00:01:03] whether that started with farming, believe it or not, before switching gears to build
[00:01:06] e-commerce businesses and later realizing the promise of AI. Tyler is designing some really
[00:01:11] innovative solutions that show what's possible with AI and agentic workflows, which is a hot topic,
[00:01:17] making AI accessible and practical for his clients. I always love chatting with and learning new things
[00:01:22] from Tyler, so let's get to it. Thanks for tuning in. Hey everyone, it's Bob Pulver. Welcome to another
[00:01:28] episode of Elevate Your AIQ. Today, I am joined by my friend Tyler Fisk. How you doing, Tyler?
[00:01:34] I'm doing awesome, man. I'm here with you. Thank you for having me.
[00:01:38] Absolutely. It's great to have you. You're down in the Carolinas?
[00:01:41] I'm actually in Middle Tennessee. I'm in Cookeville.
[00:01:44] Okay. So were you spared from the storm?
[00:01:48] For the most part, yeah. I had some friends that are over in western North Carolina that got hit really
[00:01:53] hard. Where I live is actually kind of up on a mountain, so we didn't really have any flooding
[00:01:58] issues, but the wind got really wild here for a little bit. So we had some trees come down, but
[00:02:04] that's nothing compared to what I've seen on the internet.
[00:02:06] Yeah, no, it's pretty bad. I have a friend in Charleston who also has a house in Nashville,
[00:02:11] and he's actually not sure what's going on with his property. So hopefully they'll get communication
[00:02:18] restored and they can figure that out. But fingers crossed that goes smoothly.
[00:02:24] Exactly. So I know you're involved in a couple things related to AI development, upskilling others.
[00:02:33] You're spreading yourself around a lot of interesting projects. So why don't we kick
[00:02:38] things off with just you giving a little bit about your background and some of the entities that you're
[00:02:44] involved with. Sure. Yeah. So, well, my previous 20-something year career was mainly in the tech space,
[00:02:52] but I always joke saying that I didn't really work in Silicon Valley, even though I worked with a lot of
[00:02:57] folks from there. I worked in the Tennessee Valley. So it was an e-commerce background mainly across a bunch of
[00:03:04] different brands. So my family has a greenhouse manufacturing and gardening supply company.
[00:03:11] And my dad started that in our spare bedroom with myself and my brother and my then girlfriend,
[00:03:18] now wife with just 72 bucks. It was very, very humble beginnings. And now that is one of the bigger
[00:03:25] names in the space and it's an eight figure like ARR type company. And I tell that story a lot to kind of
[00:03:32] set the stage that I got to go through an education process of what is it like to run a tech first, internet
[00:03:40] first business and to wear all of the hats to make all that stuff happen. And also in the later part of
[00:03:47] that career, I was able to start dabbling in AI products really before the ChatGPT boom. I was using
[00:03:54] Grammarly a lot in its early days. We had some products that we used with some of our Shopify stores
[00:04:00] that were using some machine learning algorithms. And also I think we were even using GPT-3, an early
[00:04:06] version of that, to be able to write email copy to our customers that sounded like us. Basically,
[00:04:14] it was decent. It was more templatized than it was completely generative, but it was still pretty
[00:04:19] solid. And that lit a fire in me to learn more about AI. So now fast forward, you're correct. Like
[00:04:27] I'm in a couple different things. I'm an entrepreneur at heart. And so I'm a part of Light Magic. Light
[00:04:35] Magic is six of us in total, and we all met in The AI Exchange. Shout out Rachel Woods and The AI
[00:04:41] Exchange. I believe that's even where we met, isn't it? That's what I was thinking.
[00:04:44] I think it might've been. Yeah.
[00:04:46] Yeah. And so that is a really cool community online. And so the six of us met there,
[00:04:54] we started as an accountability group, and then we turned that into what would it be like to work
[00:04:59] together. And so we've done several projects together. And since then, our personal brands
[00:05:04] have taken off and our personal lives have gotten really busy. I've got two young kids and a few of
[00:05:11] the others in the group have kids as well. So we have really had to focus more on our personal
[00:05:17] brands and we leave the Light Magic brand mainly for big projects that come through like more
[00:05:22] enterprise level or things that just ourselves and our teams can't do individually. So Galactic Ranch
[00:05:29] is my personal brand. And that is where I do AI development, AI consulting, even partnered up with
[00:05:38] Sara Davison and Hunter Canning who are also from Light Magic. And we've got this new Maven course.
[00:05:44] This is our first time doing teaching. We're in the middle of cohort one right now. It's been a lot of
[00:05:48] fun. And it's called How to Scale Your Business with AI Workflows. So I've got
[00:05:54] my hands or fingers in a lot of pies, I guess is the best way to put it. So we like to stay
[00:06:00] busy. Oh my God, that's awesome. No, I love hearing the sort of origin story and people that didn't
[00:06:06] necessarily have a tech background, but recognizing the opportunity and recognizing a way to either bolster
[00:06:14] your existing business and give that some competitive advantage in a sort of digital and technical edge,
[00:06:22] if you will, but also exploring other, deeper tech kinds of solutions. And also, you know,
[00:06:30] just upskilling, obviously the theme of this show is really how do we elevate people in all walks of life
[00:06:36] and get them to recognize some of these opportunities. So I think you're setting a good example and a
[00:06:43] relatively high bar. Thank you. Yeah. So, you know, you mentioned you're using AI before,
[00:06:50] you know, ChatGPT entered the public consciousness. And that's one of the elements that I talk about
[00:06:58] when I try to educate people. So when you start to teach people, maybe it's too early, but when you get
[00:07:07] feedback on either the Maven course or, you know, earlier, you know, more informal, you know, training,
[00:07:14] are people getting it? Are people anxious to tinker like you did when you first started? I mean,
[00:07:21] what's the audience sort of reception and their level of adoption, I guess, before they encountered your
[00:07:29] courses?
[00:07:30] For previous teachings that we've done, and even just some of the seminars and things that I've done, I
[00:07:37] always kind of start off and set the stage by saying, I'm about to tell you a lot that's going
[00:07:44] to really make your mind melt. You're going to be overwhelmed, I can guarantee you;
[00:07:48] I still feel overwhelmed. And I do this every single day. So I always kind of set that because that feeling
[00:07:53] never really goes away, at least for me, it doesn't. It really speaks to the pace of how fast,
[00:07:59] this stuff is moving. It's, you know, I've been in tech my entire life. And I'm used to this, like,
[00:08:04] this cycle of the new thing coming out. And anymore, that's not the case. But I really have
[00:08:10] been trying to focus my education and even more recently, a lot of my implementation work on small
[00:08:16] and medium sized businesses, for a couple of different reasons. And that's because so many people
[00:08:21] I know are interested, and they're like, I've got to do AI, I've got to learn AI. And that's literally
[00:08:26] how they'll describe it, or they'll call it ChatGPT. And those are kind of the nouns that
[00:08:31] they're throwing around right now. And just generally kind of like what you were talking
[00:08:34] about with the jargon or whatnot, which is totally cool. And that's where they are in their
[00:08:39] journey. What we've found is that people definitely have a hunger and a desire to learn. And at the same
[00:08:47] time, they don't fully know what's possible or what the state of the union of AI really is now.
[00:08:53] Because in those same seminars, I'll start off very similarly to like what you were just talking
[00:08:58] about of the background. I'll even go all the way back to like Alan Turing, you know, 1950s,
[00:09:03] work all the way up and kind of give this background just to be like, this is not something new.
[00:09:08] We are literally building on the backs of people that have come years and years,
[00:09:14] dedicating their whole lives to this, to get to the point that we are now. And then we've hit this
[00:09:18] instant boom and OpenAI's really good marketing scheme here. And that's where it's gotten y'all's
[00:09:26] attention. So it's, it is a very powerful tool. People are really wanting to learn it. They don't
[00:09:33] know where to start. They do feel overwhelmed. And if, if they have much experience with it at all,
[00:09:38] or what they would call experience, they've chatted with ChatGPT a bit. So, saying that, once you
[00:09:44] hit that point, then you kind of hit like this diverging path of folks that want to learn about
[00:09:49] it. So they have an understanding of it, but they don't want to do it. They want to either like empower
[00:09:55] people on their team to be the AI folks on their internal team, or they're wanting to hire people
[00:10:01] for it. Then you're also going to get a lot of the folks that are like us that are, you know,
[00:10:05] tech nerds and AI nerds, and they're going to go all the way down the rabbit hole. And they want to
[00:10:09] really learn this stuff inside and out. But that's, that's kind of where I see it. And that's
[00:10:14] the way it was even in The AI Exchange. When we had folks coming in there, some of them were
[00:10:19] coming just to really get a better grasp and understanding of what this is and how to use it
[00:10:24] and what the terminology is. And then also a lot of them are really taking the time to learn prompt
[00:10:30] engineering, automation, workflow engineering, and even coding. A lot of folks are learning how to get
[00:10:36] into like Python and stuff now, just because they recognize that as the default language
[00:10:43] that works really well alongside a lot of these LLMs. There's always going to be,
[00:10:48] you know, I'm not a learning and development professional by any stretch, but, you know,
[00:10:52] people have different approaches to learning. They learn through different means, right? They learn by
[00:10:58] doing, they learn through, you know, online courses, instructor led or self-guided and things like that.
[00:11:04] And so you've got that dynamic still at play, I think. And then obviously you've got some people
[00:11:10] with a learning mindset and people who are anxious to make sure that their own career,
[00:11:17] you know, trajectory and potential is not, you know, disrupted by technology. And they recognize that,
[00:11:25] you know, this is the future of work. I mean, not every role will be impacted at the same speed or to the
[00:11:32] same degree, but it's not something that's going to go away. I mean, the hype is there and for many
[00:11:40] situations it's legitimate. I mean, it is a trend, but it's
[00:11:47] not something that's going to just, you know, dip and go away. So I think we do need to embrace it.
[00:11:54] I think part of where people get confused is because there's so much to potentially learn.
[00:11:59] And they've got to at least figure out what are the specifics of what I need to know. Like I kind
[00:12:06] of use the automotive analogy: I need you to get your permit and your license before you
[00:12:13] drive. And you should probably learn how to change a tire, know what some of the warning lights are or
[00:12:19] whatever, but I'm not asking you to be a mechanic. Right. So I think you want to be
[00:12:26] self-sufficient to a point, right? You don't necessarily need to completely pivot and become,
[00:12:31] you know, an AI guru, but it can certainly help you sort of hedge your bets as your career may take
[00:12:38] some unexpected turns and hit some bumps. That is something I've seen a lot over the last year.
[00:12:45] I've made it part of my weekly routine to try and have zoom coffees with different folks that I've met
[00:12:50] on the internet. And a lot of those folks have been downsized or been given packages to retire
[00:12:59] early, all that sort of stuff. And part of that is because of just the current economic landscape.
[00:13:05] Part of that is because these companies aren't saying the quiet part out loud, they're actually
[00:13:09] rolling out AI and they're replacing people with it. And when we're doing our implementation and work,
[00:13:17] that's a vibe check that we really do on our first discovery call with folks that, you know,
[00:13:23] if you're going to take these new superpowers and you're going to send that straight to the bottom line,
[00:13:26] and you're going to just cut your workforce in half, instead of trying to help them learn,
[00:13:32] help empower your people so that literally everyone wins at the end of this, then
[00:13:37] we're not the team for you. And that's fine; 100%, there's plenty of teams out there
[00:13:41] that will go and do that. They won't think twice about it, but for us, it's just an
[00:13:47] ethical thing for me. And by no stretch of the imagination, do I think that every business that
[00:13:53] we work with at some point is not going to lay someone off for one reason or another, but we
[00:13:57] just want to do everything in the realm of possibility, in the ways that we can affect
[00:14:01] these relationships, to inspire them to have the conversations and build up the culture within
[00:14:06] their team, because folks are afraid of it. That is the number one thing that I've seen.
[00:14:11] It's almost, we could turn it into a drinking game. When I tell people what I do for a living over
[00:14:16] the last couple of years, they'll mention Skynet within the first five sentences. It almost
[00:14:21] always comes up. And at some point they're going to, in one way or another, ask me if
[00:14:28] I think that AI is going to be able to replace their job. Long story short is yes, it probably can,
[00:14:34] but at the same time, part of my job is that we try and figure out what's your secret sauce in your
[00:14:41] organization that the people do the best, or what is the thing that y'all really love doing and you
[00:14:46] want to do more of. That is what we can help you figure out: how to kind of build this harmony up
[00:14:52] between you and the AI systems, or agents, or AI team members, as we're even calling them now. And we
[00:14:57] stand up some of these agents so that y'all work in collaboration with each other instead of
[00:15:02] being fearful of if they're going to come and take your paycheck and then you're going to be out on
[00:15:07] the street because that, that is a real concern in organizations that we've been in for sure.
[00:15:12] Yeah. I think in a lot of cases people, and I'm not saying that's not a legitimate
[00:15:18] concern. I think like a lot of issues, people see that this could be a slippery slope, right? Like, well,
[00:15:25] if it starts taking on, you know, these couple tasks, and then at the pace in which it's gaining
[00:15:33] momentum and gaining other capabilities, like as we've seen now that OpenAI's ChatGPT has
[00:15:41] some better reasoning capabilities with the o1 release. People see you're basically agreeing with
[00:15:50] me that this is the tip of the iceberg, right? So if it can start to do this and it can start to
[00:15:56] narrow down these choices and, and be this effective in decision-making, not just generating content,
[00:16:02] but actually doing design work, you know, writing music, making images, and now the multimodal
[00:16:10] capabilities where it can do some of these things, you know, simultaneously, it's starting to creep up on
[00:16:17] things that we thought were clearly within the domain of, of a human, you know, brain and human
[00:16:23] capability. And so I totally understand why people are nervous to, you know, today it's automation
[00:16:31] taking, you know, these three tasks off my plate tomorrow. It's leaving me with only 25% of what my
[00:16:39] job was. And then the company has to take a look and say, what are we doing here? How are we going
[00:16:45] to redesign work? Are we thinking about the humans, not just like humans in the loop, but the human
[00:16:52] centric nature of work and what are the implications to employee engagement? What are the implications to,
[00:17:01] you know, how we contribute to our, you know, our community and, and, you know, the world at large
[00:17:07] with our products and services? What are the consumer, you know, client expectations for working with actual
[00:17:14] people or just trying to get their stuff done, however it needs to get done? So there's a lot of
[00:17:20] sort of existential kinds of questions that people are actually, you know, sort of bringing up. So,
[00:17:26] you know, I don't have all the answers. I do think, you know, the Skynet thing, it's kind of funny that
[00:17:31] people still, you know, think about that because I still think that's, that's pretty far away, but
[00:17:39] I don't know. I just think the more educated people are about what's, what's actually happening,
[00:17:43] the more unrealistic that seems, but yeah, I mean, I guess if, if you're doing some long-term
[00:17:50] thinking, you know, maybe Skynet won't affect us, but maybe our kids, I don't know. No, that's,
[00:17:57] that's fair. That's, you know, something that I think we've talked about in the past, and
[00:18:01] it's something I talk about with folks in the space a lot: my younger boy just turned three
[00:18:06] and my oldest is about to turn five. And the world that they are going to grow up in
[00:18:11] is just completely going to be different. Like, unimaginably different. I can't do a five-year
[00:18:18] plan. I don't feel like I can accurately do a five-year plan anymore because that's how fast
[00:18:22] the world is moving. It feels like at this point on a bunch of different fronts, not just because
[00:18:27] of AI, but definitely AI has something to do with that. And so one of my missions that I've kind of
[00:18:34] put upon myself is that I want to get as many people involved in this space, or at least aware of it so
[00:18:40] that they can bring their voice to the table for a bunch of different reasons to, because collectively
[00:18:45] we're the ones that need to work through, how do we deploy this stuff? What's our culture around this?
[00:18:51] Like all these big existential questions, how do we actually work with AI responsibly?
[00:18:57] And I know you've probably seen here in the news, even in the last like week or so,
[00:19:02] almost the rest of OpenAI's initial founding team has left.
[00:19:10] They're making the switch from a nonprofit to a for-profit company. What are the implications of
[00:19:16] that going to be? Because now they're going to have, you know, shareholders that they need to answer to.
[00:19:21] It's a whole different ball game on the level of responsibility that they're going to have on that.
[00:19:25] Same thing with the California law that just got vetoed by Gavin Newsom.
[00:19:32] I haven't fully made up my mind on how I feel about that yet, but essentially it was going to
[00:19:38] put some guardrails around how California deals with all these AI systems and the startups and how
[00:19:45] it can be rolled out. And Gavin Newsom vetoed that because he was afraid it would stifle innovation.
[00:19:50] And also, by the way, I can't remember the exact number. It's like close to 15 of the top 30 AI
[00:19:58] companies in the world are centered in California. So that might have a little something to do with
[00:20:04] that ruling there, but that's like very in-depth industry stuff. And that's very focused in closed
[00:20:11] rooms in these research labs or these organizations that are developing these things.
[00:20:16] And I personally think that this needs to be a more inclusive and democratic process,
[00:20:22] not only just because I think everyone can benefit from learning and using these tools in a big, big way,
[00:20:28] but they're not, like you said, they're not going away. And it's, they're going to be a big part of
[00:20:33] our future and our children's future. And if we decide that we're just going to stick our head in
[00:20:39] the sand, and if we're going to be like, well, I'm going to get to that someday at the pace that this
[00:20:44] stuff is moving, that someday is going to get here quicker than you realize. And maybe not,
[00:20:50] this is not like a fear thing, but you're going to see more and more people adopt it. So if you don't
[00:20:54] get on the learning train to really start at least get a basic level understanding of it quicker,
[00:21:01] I hope that it won't be that folks can't catch up, but that, that is a real concern of mine.
[00:21:05] Or even more so, if you go back to when we were younger in the earlier internet days,
[00:21:11] there were a lot of companies out there and a lot of people playing in that space. And now you don't
[00:21:16] see Ask Jeeves anymore. Yahoo is not the superpower that it used to be. MySpace is not the superpower it
[00:21:22] used to be. All of that power got consolidated down into the FAANG stocks, basically. And I don't want
[00:21:30] to see that happen in the AI space because now we're playing with super intelligence. So what
[00:21:35] happens when super intelligence gets consolidated to the hands of a few? I don't particularly like
[00:21:41] that future. Yeah, no, you're bringing up some really important points. I think the responsible AI piece
[00:21:47] is critically important. I think, you know, that's one of my focus areas. So, I mean, just if we think
[00:21:55] about those fears that you mentioned before with, you know, Skynet and things like that, responsible AI
[00:22:01] practices, governance, the appropriate, you know, legislation that's, you know, hopefully consistent,
[00:22:10] more consistent than the patchwork, you know, we see today. But we need all companies developing
[00:22:17] AI and all people who are building with some of these AI solutions, which could be any of our listeners at
[00:22:25] this moment, right? You could be whipping up a custom GPT, you know, an agent, maybe you're on AWS,
[00:22:31] you know, PartyRock, just, you know, having fun, but you're creating something from scratch where,
[00:22:38] you know, where are you getting the data that you're inputting? Is it copyrighted? Is it proprietary? I mean,
[00:22:44] you know, this is part of the education, right? Part of the upskilling is not just what it can do and how to
[00:22:50] better, you know, prompt it or whatever. It's not just about your AI skills, it's your AI literacy and
[00:22:56] being responsible by design. And what does that mean? So that we're making fair, you know, ethical
[00:23:02] decisions based on the data and algorithms that fed these AI tools. So all these things are going to
[00:23:08] mitigate, you know, risk in general, and in particular, some of the bigger, scarier things
[00:23:14] like you're talking about, like moving towards artificial general intelligence or artificial
[00:23:19] super intelligence, whatever it is, we've got to stay human centric as we do this.
[00:23:26] The California thing, I didn't read through the entire original bill as written, but I understood
[00:23:33] the premise. And I did read Gavin Newsom's letter of why he was vetoing it. And I thought it was very
[00:23:40] well thought out and appropriate. The problem was it was putting all the onus on the biggest
[00:23:49] large language models with the most compute power and, and what have you. And he, I thought rightly
[00:23:57] called out the fact that who's to say that tomorrow someone doesn't build an incredibly powerful,
[00:24:04] you know, large language model that, that uses a fraction of that compute, or what about the high risk
[00:24:11] use cases, as opposed to like you built that computer too big, you know, you're going to get
[00:24:16] regulated. But these guys who, you know, networked a bunch of smaller ones together can completely
[00:24:22] circumvent the legislation or someone could use a small language model or some other approach and
[00:24:29] still come up with something that is, like, basically open source, or, you know, on 4chan or
[00:24:38] some back channel where people can take it and apply it to do, you know, facial recognition
[00:24:45] on closed circuit TV at stadiums or airports or like, you just don't know. So it left too much
[00:24:52] to chance and gave the public a false sense of security. So I just predict there's going to be
[00:24:59] people who are going to take that knee jerk reaction to the headline that says, I can't
[00:25:03] believe Governor Newsom is, you know, putting us all at risk. But if you look at it in detail,
[00:25:11] I thought it was a very thoughtful and measured and logical response. He wants the legislation,
[00:25:18] but it's got to be more buttoned up. And it's got to think about high risk use cases,
[00:25:22] which is the way that the EU thought about the risk level in their sort of risk pyramid for
[00:25:28] the EU AI Act. What are the actual use cases that are going to be used,
[00:25:33] and how do we build upon, like, GDPR and protecting consumer rights and personal data and worker rights
[00:25:40] and things like that. So it's a good talking point, but we've got to...
[00:25:45] Before we move on, I need to let you know about my friend, Mark Pfeffer and his show,
[00:25:50] People Tech. If you're looking for the latest on product development, marketing, funding,
[00:25:56] big deals happening in talent acquisition, HR, HCM, that's the show you need to listen to.
[00:26:04] Go to the WRKdefined Network, search up People Tech, Mark Pfeffer, you can find them anywhere.
[00:26:11] We've got to go back and rethink how to draw that up a little bit better. And you're right,
[00:26:17] it's like two thirds of the big AI companies are headquartered in California. So he understands the
[00:26:25] importance of whatever gets passed there. And the last thing he needed was that to become like a New
[00:26:32] York City bias law that was unenforceable and largely ignored.
[00:26:39] Your point that you said there, about people being able to kind of circumvent the rules that do
[00:26:45] get put in place, is my reasoning behind why I really haven't seen much legislation passed anywhere
[00:26:54] that I think is going to be effective or really have much teeth to it over any long term.
[00:27:02] And the reason is because these models are changing at such a pace. And then also the techniques that
[00:27:08] we're able to use now, such as like the ensemble type technique or mixture of experts or the router LLM.
[00:27:16] So that's like very technical terms. But the idea of that is that now someone could take and run
[00:27:23] several different models on open sourced local on their own computers and devices at their home.
[00:27:31] You're not going to be able to regulate that in the same way that,
[00:27:34] you know, it's still really difficult now. And, you know, Napster is gone, LimeWire is gone,
[00:27:40] but torrenting still exists. They were never able to squash that out because it's out in the open source community.
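The "router LLM" technique mentioned here can be sketched in a few lines. This is a hypothetical illustration, not any particular product: the two models are stubbed as plain functions, and the routing heuristic is deliberately naive (real routers typically train a small classifier for this decision).

```python
# Minimal sketch of a "router LLM" setup: a cheap heuristic decides which
# locally hosted model handles each prompt. The models here are stubbed as
# plain functions; in practice they would be calls to local inference servers.

def small_model(prompt: str) -> str:
    # Stand-in for a fast, cheap local model.
    return f"[small] {prompt[:40]}"

def large_model(prompt: str) -> str:
    # Stand-in for a slower, more capable local model.
    return f"[large] {prompt[:40]}"

def route(prompt: str) -> str:
    """Route simple prompts to the small model, complex ones to the large one.

    The complexity heuristic (length plus a few trigger words) is purely
    illustrative; real routers usually learn this decision from data.
    """
    hard_markers = ("analyze", "compare", "reason", "summarize")
    is_hard = len(prompt.split()) > 30 or any(w in prompt.lower() for w in hard_markers)
    return large_model(prompt) if is_hard else small_model(prompt)

print(route("What time is it?"))
print(route("Please analyze the merger documents."))
```

The point of the sketch is the regulatory one made in the conversation: once routing and inference both run on local, open source models, there is no single hosted endpoint to regulate.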
[00:27:49] And whatever gets passed here in the United States is not necessarily going to be honored or
[00:27:56] mirrored back in other countries around the world as they're trying to really,
[00:28:00] you know, make a name for themselves in this space as well. So that's why I think that,
[00:28:06] you know, the next several years we're really going to be threading the needle and almost have to put this more on
[00:28:12] personal responsibility and education, because I think this is a really, really difficult problem
[00:28:17] for legislators to figure out anywhere, no matter how educated they are. It just
[00:28:23] changes too fast. Even by the time they get something passed today,
[00:28:27] GPT-5 or whatever the latest, greatest thing could come out next week. That's going to,
[00:28:31] you know, blow through some of these things that they might try and put into place.
[00:28:35] Yeah, I don't think it's all bad. I don't think it's all doom and gloom. It's just,
[00:28:39] that's the kind of stuff that keeps me up at night when I'm thinking about that, because I'm
[00:28:44] thinking about what's the future going to look like for my kids, basically. And I don't think I'm alone in that.
[00:28:49] I think most parents would be in that same boat.
[00:28:50] Absolutely. Yeah. And I think you and I have different circumstances just because of the
[00:28:57] different ages of our kids. My daughter's in high school, she's a junior already
[00:29:02] taking the PSATs later this month and starting to look at colleges and stuff like that. So
[00:29:09] that horse has definitely left the barn, but for your kids, I mean, who knows what the labor market is
[00:29:16] going to look like, and even what they're going to learn in school when they get to, you know,
[00:29:23] middle school and high school. I hope that the curriculum starts to adjust to
[00:29:31] accommodate that. I hope AI is incorporated into that, just like digital technology has been
[00:29:38] incorporated into the education system now. And, you know, they're using computers, they're digital
[00:29:44] natives, they're using, you know, smart boards instead of chalkboards. And I just think they can
[00:29:51] handle it, honestly. And it's a matter of how quickly we can apply some agility to the education
[00:29:58] system and get, to get teachers comfortable so that they can in turn teach the kids good habits,
[00:30:05] you know, using AI where you should, not where you shouldn't, right. As a mentor, as a tutor, you know,
[00:30:11] now everyone can basically afford a tutor. So are we going to deny that to kids because we don't
[00:30:20] trust them to not use it as a calculator, to just give them the answer? Just like, you know,
[00:30:26] AI being used by candidates and in the job search process, there are scenarios where it's appropriate
[00:30:33] to use it. And there are scenarios where it's not, and, you know, there's got to be some level of trust.
[00:30:38] Otherwise, it's going to be yet another cat and mouse game trying to catch people, you know,
[00:30:44] using it inappropriately. That's one of the most hopeful messages that I've seen in this space,
[00:30:49] like, period. It was about a year ago, there was a TED talk from the founder of Khan Academy. It's
[00:30:56] only like 15 minutes, and he's basically talking about what the future of education looks like.
[00:31:00] And I found that really inspiring. And also, I practice a lot of what he talked about in that
[00:31:08] on a daily basis with my boys. So as young as they are, they already know how to interact with them.
[00:31:14] If I say the assistants' names out loud, they might chime in here. But if I say, hello, G-name person,
[00:31:20] or hey, S-name lady, they know how to talk to her. And also when they're out learning, because
[00:31:25] they're kids and their curiosity is just alive and well. So they'll see a new leaf or a bug,
[00:31:32] and they'll go get that thing or come run and get mom and dad and say, come get your phone, come get
[00:31:36] your phone, take a picture and, you know, ask one of the... they call it "ask the robots"... to tell me more
[00:31:42] about it, because they want to learn. And we'll either fire up Perplexity or some of the other
[00:31:48] things to be able to take a picture. And I'll tell the system... actually, we use the voice on there
[00:31:53] a lot now. We'll tell the system, hey, you're talking to my son Jackson, he's about to turn five.
[00:31:59] So remember, when you're giving your response back... I'll kind of prime it with a vocal prompt like that,
[00:32:04] to explain to him what this moth is that he just caught, that he's a five-year-old,
[00:32:09] but he really, really wants to learn, you know, the details about this thing. So that's, that's
[00:32:14] beautiful, because I don't know the answers to all that stuff. I can go Google it, but I can't do
[00:32:18] that very fast, not fast enough to quench his appetite to learn about that moth. But Perplexity
[00:32:25] can. And one of the things that we've done in our Maven cohort is something that Amy,
[00:32:32] my wife and I have been dreaming about here for the last year or so is, could we potentially set up
[00:32:40] personalized tutors for our boys with AI agents? And we've even let them interact with Hume AI,
[00:32:48] that's a, that's a voice agent. We've done perplexity, they talk to chat GPT. I mean,
[00:32:54] I build voice agents for a living on different platforms, so they get to come in and talk to
[00:32:57] dad's robot sometimes. But connected to that, what if we could have an empathetic tutor that in its
[00:33:05] knowledge base or RAG retrieval knowledge base, it has the curriculum that we want to teach from.
[00:33:11] And it's adaptive and can teach based off of Socratic principles. So it's not just feeding them
[00:33:17] the answers, it's also trying to help them learn along the way. And it can also account for
[00:33:23] their personal learning styles, whether they're, you know, visual learners, they learn by doing,
[00:33:28] they learn by reading in further, or they learn just by going back and forth and talking to a system
[00:33:34] about it. I mean, now you have the, like you said, it is the best possible tutor. And we've built an
[00:33:41] MVP for that in our Maven cohort called the professor. And the professor is trained on all the coursework
[00:33:47] that we're going through. It's trained on a lot of, you know, stuff that we throw in its knowledge base
[00:33:52] about AI, about the tools that we're using, and even all the transcripts from our meetings and lessons
[00:33:59] get chucked in there as well. So that the students in the course can actually go and interact with the
[00:34:04] professor to get their answers. And it is adaptable to their learning and speaking style, like how they
[00:34:10] choose to learn, how they'd like to be talked to. So I see that it's, that that's been going really
[00:34:15] well in there. Now, not to say that it hasn't, you know, misfired here and there, because this is tech,
[00:34:20] and that's an MVP. But yeah, I think that figuring out that problem is really hopeful for kids of all ages,
[00:34:28] because, you know, even your child, that's going to be having to think about a career here soon of,
[00:34:34] what do they want to do when they get older? That's a tough
[00:34:40] question to answer now. And it's probably never been easier to be self-employed, if you have that
[00:34:48] drive at all than it is now. Because you're able to do things as a solopreneur or a very small agile
[00:34:55] team that just was not possible before. Yeah, no, absolutely. I can certainly appreciate that myself.
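A tutor like "the professor" described above — grounded in a curriculum knowledge base and nudged toward Socratic answers — can be sketched at its simplest as retrieval plus prompt construction. Everything here is illustrative: the word-overlap retrieval stands in for a real vector search, and the course snippets and prompt template are invented, not the actual Maven materials.

```python
# Toy sketch of a RAG-grounded tutor: retrieve the most relevant curriculum
# passage for a question, then build a Socratic prompt around it. Retrieval is
# a simple word-overlap score standing in for a vector-store lookup.

CURRICULUM = [
    "Agents are LLM systems that can take actions and use tools.",
    "RAG grounds model answers in documents retrieved from a knowledge base.",
    "Chain-of-density summarization rewrites a summary to be denser each pass.",
]

def retrieve(question: str, docs: list[str]) -> str:
    """Return the doc sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(docs, key=lambda d: len(q_words & set(d.lower().split())))

def socratic_prompt(question: str, context: str) -> str:
    # A real tutor would send this to an LLM; here we just build the prompt.
    return (
        f"Context: {context}\n"
        f"Student question: {question}\n"
        "Do not give the answer directly; ask a guiding question instead."
    )

prompt = socratic_prompt("How does RAG work?", retrieve("How does RAG work?", CURRICULUM))
print(prompt)
```

The instruction line at the end is what makes it a tutor rather than an answer machine: the retrieved context grounds the response, while the prompt steers the model away from simply handing over the answer.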
[00:35:02] So you answered some of my questions I had for you as we went and as we were talking, one of them
[00:35:09] being like some of the projects that you're working on sounds like you had some personal
[00:35:13] projects that were pretty cool for your kids and for the family. And then of course, we talked about
[00:35:20] responsible AI in depth. But you know, maybe some of the projects you've done for clients would be
[00:35:27] interesting to hear about, especially, you know, if it's around like the, like your Maven course talking
[00:35:33] about like, you know, workflows and how to connect some of these, you know, sort of disjointed pieces so
[00:35:39] that you're not just doing, you know, task by task, but you're, you're sort of stringing them
[00:35:43] together. Can we unpack one of those? Sure. Yeah. I have to talk about them a little bit,
[00:35:50] not all the way in the details, just because of NDAs. But I always love talking about
[00:35:54] this as best as we can, because I think that's what helps people have those light bulb moments of
[00:36:00] what you can do now. And recently, like you said, at the top of the podcast, agentic workflows are the,
[00:36:06] the flavor of the day right now, which is basically us building up AI agents or assistants.
[00:36:14] Because, you know, not all assistants are agents, and not all agents are assistants. It's part
[00:36:19] of the jargon that gets thrown around that gets kind of, you know, mixed up. But
[00:36:24] we are working with an organization that's a fractional C-suite team. And I'll just
[00:36:33] kind of leave it top level like that. They go into organizations, they're offering C-suite level
[00:36:38] services. And everyone on their team is later in their career, they're full of wisdom. They're really
[00:36:46] highly regarded folks in their industry. And even on their own team, they had some real doubt
[00:36:54] about whether it was possible for us to build an AI system that could emulate their processes to the same level
[00:37:01] of quality and standards that they could do themselves. There were some
[00:37:06] believers and non-believers on the team there. And essentially the workflow
[00:37:12] that they wanted to set up is, when they bring on a new client, they'll go through
[00:37:18] this discovery phase where their clients will send through all of these different files into a drop zone
[00:37:24] folder. And their team then has to go fine tooth comb through every single one of these files. They
[00:37:30] could be PDFs, documents, PowerPoints, financial statements, organizational charts, like literally
[00:37:37] anything that you can think of. They're going to chuck that into this drop zone folder. And this
[00:37:42] fractional team goes through all that. They're making their notes of what's important or what's relevant
[00:37:48] to their engagement. And that's extremely time consuming for them. It also requires
[00:37:55] this head knowledge that they've built up over their entire careers of what could potentially
[00:38:01] be relevant in this engagement, based on this just random array of different files that we're looking
[00:38:08] at. They wanted to see if we could set up an agentic workflow that could help them with that process.
[00:38:14] And we've been able to do that. Those files still hit the drop zone folder and then go
[00:38:21] through a team of agents that will review each of those files. We've got it set up right now for just
[00:38:27] five. That's enough. So five different agents will look at the same file. They all write their own
[00:38:34] summary or extraction of what they think is relevant or could be relevant for this fractional team.
[00:38:39] And then they pass along that summary to a different agent that's going to review all of that work.
[00:38:46] And then it makes a summary of the summaries. Beyond that, there will be an evaluator step.
[00:38:54] So this is kind of like, if you've heard of the chain-of-density framework for
[00:39:00] summarization, where it writes a denser and denser summary, that's definitely helped
[00:39:06] influence the way that this works. And for the end result, our goal is to have a one-page summary
[00:39:13] that is written that anyone on this team can pick up without having any other knowledge about this
[00:39:19] organization or the project at hand and be able to know all the details that are most pertinent for
[00:39:24] them and the type of work that they do related to this potential client. And I will say that we've
[00:39:32] done that and per their own evaluations from this team, they gave it marks of 11 out of 10. I think
[00:39:38] the worst evaluation that we got from them was a nine out of 10. And I'll take that. That's still a B.
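The pipeline described above — several reviewer agents reading the same file, a summary of summaries, then an evaluator pass — can be sketched structurally. This is a hedged illustration: the agent calls are stubbed string functions, and the condensing step is a crude placeholder for chain-of-density style rewriting; none of this is the client's actual code.

```python
# Illustrative sketch of the agentic review pipeline: several reviewer
# "agents" summarize the same file, a synthesizer merges their notes, and an
# evaluator pass condenses the result toward a one-pager. All agent calls are
# stubbed as simple string functions; the structure, not the intelligence,
# is the point.

def make_reviewer(focus: str):
    """Each reviewer agent reads the same document through a different lens."""
    def review(document: str) -> str:
        # A real agent would be an LLM call with `focus` in its system prompt.
        return f"{focus} notes: {document[:50]}"
    return review

def synthesize(notes: list[str]) -> str:
    """Merge the per-reviewer notes into a summary of summaries."""
    return " | ".join(notes)

def evaluate_and_condense(summary: str, max_len: int = 200) -> str:
    """Evaluator step: iteratively shorten until it fits one 'page'."""
    while len(summary) > max_len:
        summary = summary[: len(summary) // 2].rstrip()  # stand-in for densifying
    return summary

reviewers = [make_reviewer(f) for f in
             ("financial", "org-structure", "operations", "risk", "strategy")]

document = "Org chart: CEO, acting COO, three department heads..."
notes = [r(document) for r in reviewers]
one_pager = evaluate_and_condense(synthesize(notes))
print(one_pager)
```

In the real workflow each stage would be a separate model call with its own instructions, and the condensing loop would rewrite for density rather than truncate; the fan-out/fan-in shape is what carries over.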
[00:39:44] But they really didn't think it was possible. And it even went as far as being able to
[00:39:51] take things from these files that weren't even explicitly said on the page and do the reasoning and
[00:39:58] thought processes behind it that the human team would have done. And one example is there was an
[00:40:05] organizational chart that ran through this. And on this org chart, the number two person
[00:40:12] had "acting" next to their title. It looked like a regular organizational chart: it had
[00:40:16] everyone's name, their position, and the department that they worked in. And that's it, nothing else.
[00:40:21] But the number two in charge said "acting." And in the summary that got sent to the fractional team.
[00:40:28] It made a comment, and I'm not going to word this as eloquently as it did. But it said
[00:40:34] that since this person is now in the acting position, that shows that this organization that you're working
[00:40:41] with is in a transitional period. That means or could mean that they're going to be much more open to
[00:40:47] additional changes in the work that you all are going to be performing for them in their engagement.
[00:40:52] Nowhere on that page did it say that. But a seasoned person on this fractional team would have
[00:40:59] seen and recognized that same thing. In fact, the CEO of that organization was like, my lead person
[00:41:05] who would have done that, I don't even know if they would have caught that necessarily and written
[00:41:10] it into the summary. And all of these one-page summaries that are made during this process
[00:41:16] are then stored in a RAG retrieval database, where a final summary will happen right before they go back
[00:41:25] and do their proposal of, here's the fix, here's what we think that we're going to be
[00:41:30] doing for you in this engagement. Because that emulates what their actual team does. They
[00:41:36] go through this process, they have all their meetings, they do their discovery, they're basically
[00:41:40] doing their analysis of where y'all stand currently as we're doing this consultation for
[00:41:45] you. And where do we think that you can get to? And how do we think that we're going to get you there?
[00:41:50] So normally, their whole team would just be doing this. But now this agentic workflow does the process
[00:41:56] of going through all the files, they can still go in and read them themselves if they need to like
[00:42:00] double click into it. But it went from this process that takes weeks into, I mean, literal minutes,
[00:42:09] if you have all the files right out the gate, because it just runs through all of them, it creates that
[00:42:14] summary. And from there, it creates this plan for the fractional team's potential services that
[00:42:22] they could offer. And it even goes into depth on how they would go about doing that. So it's
[00:42:28] helping them formulate their pitch or project plan for their client.
[00:42:33] I know I'm having to speak vaguely about that. But that one was, I was particularly
[00:42:40] impressed with the reasoning that our team was able to program into that. Because these folks are top of
[00:42:49] their field. And I know that we can get these models and these agents to reason
[00:42:55] really well, but to be able to emulate someone that is, you know, 60-plus years old, that is just
[00:43:01] really, really full of not just industry knowledge but life wisdom, and to be able to
[00:43:08] deduce information out of a simple summarization flow at the same level that they could, if not
[00:43:14] better, that surprised me. And I think that goes to show that knowledge work is worth about the cost of
[00:43:22] whatever the average token rate is now from Anthropic or ChatGPT. Whatever it costs per
[00:43:30] million tokens is probably what your head knowledge is worth these days.
[00:43:33] Yeah, I definitely tell people that this is not just about, you know, rote tasks and drudgery.
[00:43:41] Knowledge work is affected as well. I'm not saying we remove knowledge workers
[00:43:49] as these things advance, but you've got to really understand how to best position yourself and
[00:43:54] your own human durable skills, right? Because AI is taking on some of these other
[00:44:01] tasks that were previously the domain of humans and human cognition, human creativity.
[00:44:08] So the more we can think about how to,
[00:44:12] you know, reposition ourselves and the value that we create, the better off we are. So it sounds like,
[00:44:19] I mean, in that particular case, you're saving a tremendous amount of hours because
[00:44:25] those resources, you know, fractional or not, are experienced and tenured,
[00:44:31] and they've got the domain knowledge, they've got the work experience, they've got a lot of value,
[00:44:37] but they're also expensive resources, right? So I don't know if you can share anything about the,
[00:44:44] you know, the outcomes, but I imagine it's saving them significant cost and time.
[00:44:50] Hi, I'm Steven Rothberg.
[00:44:52] And I'm Jeanette Leeds.
[00:44:53] And together we're the co-hosts of the High Volume Hiring Podcast.
[00:44:57] Are you involved in hiring dozens or even hundreds of employees a year? If so,
[00:45:01] you know that the typical sourcing tools, tactics, and strategies, they just don't scale.
[00:45:06] Yeah. Our bi-weekly podcast features news, tips, case studies, and interviews with the world's
[00:45:12] leading experts about the good, the bad, and the ugly when it comes to high volume hiring.
[00:45:18] Make sure to subscribe today.
[00:45:20] It definitely is. And it's helping them. Their reasoning behind this is that they're a small
[00:45:26] team and they wanted to be able to scale, because they're very much in a position where
[00:45:32] demand is much greater than what they can currently supply. And they were hoping that layering more
[00:45:39] AI into their work would be able to help them scale that. And it definitely has.
[00:45:45] They already had several large clients, but now they've taken on even
[00:45:50] more large clients. We even had a meeting with them yesterday just to go over some different
[00:45:55] things. And they were sharing that they had just brought on two additional clients.
[00:45:58] And they're feeling like they can do that because, you know, now they have
[00:46:04] this AI team in place with them as well to help them with these processes that, you know, before,
[00:46:10] they wouldn't have been able to do because the man hours just weren't possible. And I also want
[00:46:15] to like put in the caveat that when we design the system for them, in my opinion, every really good
[00:46:22] business has some sort of a secret sauce. And so part of our job, when we're designing this and writing
[00:46:28] the system instructions and the, the workflow and all that sort of stuff for these different processes
[00:46:34] is that we'll go through some really intensive discovery phases ourselves, where we're trying to
[00:46:40] understand who they are as people, what they're inspired by, what their personalities are like,
[00:46:46] how they, how they work through and think through problems related to their, their work. And also
[00:46:51] just personally as well. What's the type of information they have in their company
[00:46:56] knowledge base, not AI, just in general? What are the types of things they're trying to learn
[00:47:00] about? And then we'll go through this process-mapping exercise, because a lot of
[00:47:08] companies do not have SOPs, especially small and medium-sized businesses. Some of them don't even
[00:47:12] know what that acronym means. And so we just do this really fun Zoom call or in-person meeting and
[00:47:19] we'll record it. And if we can just talk to them conversationally, like most business owners or
[00:47:24] leaders in a team can walk you through a process verbally of like, oh yeah, we do this. And then
[00:47:29] Bob over in accounting, he does this part. And then Tyler over in marketing, he does this bit.
[00:47:33] And they're literally verbally writing the SOP for us. And we have a workflow on the backend that will
[00:47:39] take that transcript and turn it into SOPs. We also get a lot of information out of those
[00:47:46] transcripts that we use to influence these very personalized assistants in
[00:47:53] this workflow that emulate how they do the job. So it's not just bone-stock Claude or ChatGPT
[00:47:59] that's doing that, or even just some more generalized assistant at that task. It is that, but also
[00:48:05] personalized very much so to their business, the way that they do things and their thought processes
[00:48:11] behind it. And that's how I think we've been able to get the really high quality results out the other
[00:48:15] end that passed the sniff check of, did a person do that or not? Wait, so when you created
[00:48:21] those five agents that were going to look at those documents, was each one customized to a
[00:48:29] different expert, coming at it through their own digital
[00:48:35] lens? In other words, were they five sort of digital twins of the people that would
[00:48:41] otherwise be going in there themselves to review that? That's what I wanted to do. I personally wanted
[00:48:47] to do that to get very diverse backgrounds and points of view, because that's getting into like
[00:48:54] the, the mixture of experts and like chain of thought type reasoning that we could do with that.
[00:49:00] But our client wanted to emulate their CEO and leader, because he is the real serious brainpower
[00:49:10] over there. And he's the one that's taught a lot of his team members how to do these different
[00:49:14] things. So they wanted to make these different assistants really emulate him and his processes.
[00:49:20] So the assistants are modeled after him and the way that he does things, on top of just how they
[00:49:28] do things as an organization. All of that's kind of baked in. But you nailed it. I really
[00:49:33] wanted to look at, here's these five people on their team, we're going to make a digital
[00:49:39] assistant that knows exactly how they like to work and how they view things, so that you're really getting
[00:49:44] the same type of result that they might be able to produce.
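One way this kind of persona emulation is commonly built is by baking the details gathered in discovery into the assistant's system instructions. A minimal sketch, with hypothetical fields and a hypothetical template (not the project's actual prompts):

```python
# Hedged sketch of parameterizing a "digital twin" assistant: persona details
# from the discovery phase are baked into a system prompt. The fields and the
# template are illustrative assumptions, not the real system instructions.

from dataclasses import dataclass

@dataclass
class Persona:
    name: str
    role: str
    reasoning_style: str
    voice_notes: str

def build_system_prompt(p: Persona) -> str:
    # The resulting string would be passed as the system message to an LLM.
    return (
        f"You are an assistant modeled on {p.name}, {p.role}. "
        f"Reason the way they do: {p.reasoning_style}. "
        f"Match their voice: {p.voice_notes}."
    )

ceo = Persona(
    name="the CEO",
    role="lead consultant on a fractional C-suite team",
    reasoning_style="read between the lines, e.g. an 'acting' title signals transition",
    voice_notes="direct, plain New Zealand English",
)
print(build_system_prompt(ceo))
```

Swapping in a different `Persona` is what would distinguish five digital twins from five copies of the same generic assistant.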
[00:49:47] This could go one of two ways, right? One is, listen, this is one of the enormous benefits of
[00:49:55] creating custom GPTs, or letting one of these generative AI tools really understand
[00:50:03] you and how you think and how you write or whatever. Because otherwise you're going to continue to get
[00:50:07] generic-sounding responses. You could ask ChatGPT, you could ask Claude,
[00:50:12] you could ask, you know, Gemini or whatever, and a lot of the output starts to sound the
[00:50:17] same. Well, it won't. It'll sound more like you. It'll sound more personalized. It'll take on your
[00:50:24] writing style, your tone or whatever, if you customize these solutions, which you get vastly more
[00:50:31] value from. But we also have to acknowledge that the very fact that it can do that and basically create
[00:50:39] at least some subset of a digital twin kind of concept means that in some ways people are
[00:50:47] going to feel like, with some of this technology, they're training their AI
[00:50:53] replacement. And so I think we just need to acknowledge that sort of delicate, you know, balance,
[00:51:00] and also why being responsible by design as we use these tools is just crucial for positive
[00:51:09] developments in this area.
[00:51:11] A hundred percent. Their goals in this were that they wanted to be able to scale beyond
[00:51:17] their current capacity. And just like any business, it's really difficult to find team members that not
[00:51:25] only match your culture, but the educational background and the domain expertise, all that sort
[00:51:31] of stuff. And then you have to, once you find those really amazing people to bring into your team, keeping
[00:51:37] them on your team can be even more of a challenge, because those types of superstars are generally
[00:51:43] going to get poached by other companies, or they're going to go off and start their own businesses
[00:51:46] or something like that eventually. So knowing that, and where they are in their career, because a
[00:51:53] lot of these folks on this particular team are later in their career, they knew that they wanted a
[00:51:59] better work-life balance, but they also have a certain lifestyle that they wanted to maintain.
[00:52:04] And to be able to maintain that, you know, would require a lot of man hours. Now they're able
[00:52:09] to stay on pace with the growth that they've been experiencing without having to
[00:52:15] put in all those additional man hours, and still keep their workload manageable and the quality
[00:52:21] at the level that they would expect if they were doing it themselves. The other really interesting
[00:52:26] thing that was fun about this project is it wasn't in the United States. This was with a company in New
[00:52:31] Zealand. So thank God they spoke English, but they speak very different English than we do. And one of
[00:52:38] the fun things that I don't have the life experience to be able to gauge, when I was doing my evals on
[00:52:44] these, was: was this good or not? Because in the responses and in the way that it would talk,
[00:52:51] it would sound like them. We did a brand voice analysis on it so that it would sound more like
[00:52:59] them, or the tone and style that they wanted it to be. And it spoke New Zealand English, or the
[00:53:05] Queen's English or whatever. And it even used colloquialisms and phrases and vernacular that you'd
[00:53:13] only hear in New Zealand, where you'd ask, do you really speak this way? And they probably thought the same when they were
[00:53:17] talking to me during the project; I use a lot of phrases from the South. They're like, what the
[00:53:21] heck are you saying? We would literally stop and ask each other throughout this
[00:53:25] project: whoa, whoa, whoa, I don't know what you just meant. You're speaking English,
[00:53:29] but I don't understand it. Well, we had their assistants speaking New Zealand English, and I'm the one that
[00:53:36] wrote that from the United States. These LLMs are trained a lot on information and data from sources in
[00:53:44] the United States, because a lot of the companies are based here. And the fact that it was able to
[00:53:49] translate that and mimic their speaking style and use metaphors and verbiage that is common for them, I
[00:53:58] thought was also a really cool and fun win, that it did a really good job at that part of the task
[00:54:05] too. So that's where we're getting more towards this idea of a universal translator, which is
[00:54:13] also, I think, very helpful, because now it really does open up this international market even more than
[00:54:18] we already have had with the internet. No, absolutely. You know, one of the things I wanted to, you know,
[00:54:24] throw out there for the listeners: when we talk about, you know, agentic workflows, you mentioned
[00:54:31] this gives solopreneurs like myself and, you know, small businesses some superpowers, in
[00:54:39] the sense that I don't necessarily have to take it all on myself, right? There's only so many
[00:54:45] hours in the day and you need to maintain your sanity and your work-life balance to some
[00:54:52] degree, but you also don't necessarily have to go and find, you know, contract resources or even,
[00:55:00] you know, a virtual assistant or an intern. I mean, I still contend I could probably use one, but
[00:55:05] still there's AI out there that can help me if I take the time to sort of put the pieces
[00:55:15] together. So, I mean, it sounds like you, either through, you know, Light Magic or Galactic Ranch,
[00:55:21] you know, could help someone like me with some of the, what I think would be good agentic workflow
[00:55:27] use cases. You and I have talked once before about the production and publication of
[00:55:33] this podcast, right? I mean, I do research. I understand the guests. I think about
[00:55:40] how to frame the discussion. And then I've got show notes, I've got the transcript,
[00:55:47] I've got a summary of the transcript, I've got all those keywords. I need to create an introduction
[00:55:52] to the podcast based on the summary. I've got to create social posts to promote the episode.
[00:56:00] And I'm in and out of like 10 different tools. It's not just the number of tools, and perhaps
[00:56:07] some of those tools may not be free. It's not just that, but, you know, each time it takes
[00:56:15] me, as the human in this workflow and the loop, to go in and, you know, copy-paste
[00:56:22] or whatever. I'm in Notion, which has an AI, in Google Docs, I've got Google Gemini,
[00:56:27] I've got ChatGPT plus Perplexity that I use for research. I mean, I'm all over the place, right? So
[00:56:36] there's got to be a way, if I put in the time to structure it and work with someone
[00:56:42] like you to put those pieces together, to use something like Make or whatever, which is sort of
[00:56:49] like a Twilio or, you know, one of those API-connector kind of orchestrator things.
[00:56:56] How do you literally put all the pieces together? If you put in that work up front,
[00:57:00] it could save you a lot of time in task switching and application switching and things
[00:57:09] like that. So if that takes me, I don't know, let's just say it takes me, you know, eight hours to do
[00:57:15] all of that. How much time could I save? And that gives me time to talk to other prospects and do other
[00:57:21] work so that I'm not working 12, 15 hours a day or whatever.
[00:57:24] Yeah, for sure. It's not something that happens overnight. It takes time
[00:57:30] to identify all these different tasks and processes that we're doing as people in our work
[00:57:37] and then start to figure out where does it make most sense? Where am I going to get the best bang for my
[00:57:42] buck on starting to, you know, hand that off to these AI systems a bit more to be able to help. A couple of the
[00:57:49] the biggest things that I've seen to help me just as like really easy wins are if you can begin to
[00:57:56] create whatever you want to call it, a second brain system for yourself or your business or both,
[00:58:02] where you can consolidate all this data that you're creating. People don't even necessarily think of it
[00:58:07] as data. It could be, you know, we have transcripts from all these different zoom calls and team
[00:58:13] meetings and all these sorts of things that we're in on now. I have all this internal documentation
[00:58:18] that we create when we're working. We have that land back into the second brain system. Our CRM,
[00:58:24] depending on if you have that separate or in the same location, that's something that you
[00:58:29] would want to have in that second brain system. I personally use Notion for that. That's just a
[00:58:35] personal choice, but a lot of our clients, they don't use that. They'll be in the Microsoft or
[00:58:40] they'll be using things like HubSpot or Salesforce on top of a bunch of other usually disconnected and
[00:58:47] siloed tools that have all this really powerful data in it. And when you're able to start connecting
[00:58:53] and consolidating all that, or at least making the connectors so that these agents and your AI
[00:59:00] workflows or AI team members, whatever it is that you want to call them, can access some of that stuff.
[00:59:07] That's when you're really able to start connecting the dots there and see some really powerful results.
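Tyler's "connector" idea, made concrete: one way a meeting transcript could land in a Notion-based second brain is a single call to Notion's public REST API. The database ID, token, and the "Name" title property here are placeholders for whatever your own workspace uses; this is a sketch, not Tyler's actual setup.

```python
import json
import urllib.request

NOTION_VERSION = "2022-06-28"  # a stable Notion API version header

def transcript_page(database_id: str, title: str, transcript: str) -> dict:
    """Build the JSON body for Notion's POST /v1/pages: a new row in a
    'second brain' database whose title property is named 'Name'
    (adjust to your own schema). Notion caps a rich_text content
    string at 2,000 characters, so long transcripts are truncated here."""
    return {
        "parent": {"database_id": database_id},
        "properties": {"Name": {"title": [{"text": {"content": title}}]}},
        "children": [{
            "object": "block",
            "type": "paragraph",
            "paragraph": {"rich_text": [{"text": {"content": transcript[:2000]}}]},
        }],
    }

def push_to_notion(token: str, body: dict) -> None:
    """Send the page to Notion. Raises on a non-2xx response."""
    req = urllib.request.Request(
        "https://api.notion.com/v1/pages",
        data=json.dumps(body).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Notion-Version": NOTION_VERSION,
                 "Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
```

A real setup would chunk long transcripts across multiple paragraph blocks rather than truncating, but the shape of the call is the same.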
[00:59:12] Like as an example, to even talk about what you were going through there just for editing the podcast
[00:59:18] and all the different processes in that, us launching this cohort on Maven, Sara and I have been working,
[00:59:26] I've spent a lot of time in the studio here recently recording lessons, making the lesson plans,
[00:59:33] making the copy that's going to land on the Maven page, making the knowledge base items that need
[00:59:40] to go into the professor's brain, the RAG retrieval knowledge base, creating the projects and the
[00:59:46] homework, actually filming and editing this stuff. Like that's a lot of stuff. The other day I was joking
[00:59:52] that I had, I have a dual monitor set up and I had split apps in both of the monitors. So I had four
[01:00:00] screens up basically that I was looking at. And in two of them were Cassidy automations running in the
[01:00:07] background on client projects that I needed to get, you know, stuff moved forward on while at the same
[01:00:13] time on this other screen was completely dedicated to work that I was doing for this cohort. And I had
[01:00:19] Descript open and then I had Typing Mind open where I was in there just going back and forth, having it
[01:00:24] create these different assets like the lesson plan and all that kind of stuff. While Underlord was over
[01:00:29] here in Descript, you know, shortening word gaps and, and figuring out what are the hottest
[01:00:35] clips or takes that I could use to repurpose for, for marketing this thing. I would never be able to do
[01:00:41] all that stuff if I had not taken the time and, and really outsource the information from my head and
[01:00:51] into a system so that I can then begin to teach and give access to this knowledge that lives in my head
[01:00:58] or these, these, all these other independent places so that these AI assistants could do it mainly in Cassidy and
[01:01:05] typing mind is where those assistants were doing it. The Descript agent was just focusing, you know,
[01:01:09] obviously on the video that we were editing, but it's, it was a really cool moment. And also I had to
[01:01:16] like kind of zoom out on myself and be like, this is like a mad scientist moment here for me in my office,
[01:01:22] just trying to conduct the symphony of AI agents on the back end to get all this work across the finish
[01:01:29] line. And I was able to do it. That's how I'm able to pursue so many of these different
[01:01:35] projects and passions: because I have slowly but surely begun to look at what it is that
[01:01:42] I do on a daily basis and what bits and pieces of that make the most sense to be able to, to start
[01:01:48] offloading to my AI team versus me, or we do it collaboratively, one or the other. You know, this is about
[01:01:54] what else we could be doing when we know some percentage of our day, of everyone's day, is doing
[01:02:01] sort of administrative things. I mean, even if you're fortunate enough to have an
[01:02:07] actual assistant, there's always other things that could be more routine and could be automated. And then
[01:02:13] how do you use, move up this sort of value chain of some of the AI capabilities to augment what you do.
[01:02:22] And some of that is, is just summarizing things that you need where the volume of, of text and
[01:02:29] information is just overwhelming and you need a trusted sort of partner to summarize that. I like your
[01:02:35] approach that takes multiple agents and pulls that together, because a single summary may
[01:02:42] miss some context and overlook some things. So I think that's a really interesting
[01:02:47] approach. It's almost like applying some of the concepts of collective intelligence and cognitive
[01:02:53] diversity. But within, you know, the, the AI domain, I also just want to point out, like, I'm not trying
[01:03:02] to put my business on autopilot, right? Nothing. I don't want anything to be posted on social media or
[01:03:11] anywhere else without me having my eyes on it. I think AI has been great to draft things, but we're still
[01:03:20] at the point where, and part of this is about the customization and it learning my writing style and
[01:03:26] tone or what have you. But, you know, so far I've, I have edited everything that it has created for me,
[01:03:34] whether that's emails or social posts or, you know, the summaries of, of the podcasts or whatever. There's
[01:03:42] always a little bit of tweaking that needs to be done. And then just almost, you know, proofreading,
[01:03:47] you know, what, what it's done. So, but it still takes a lot of time away from things that I would
[01:03:55] otherwise have to do and, and time is your most precious resource. So what are you going to do
[01:04:02] with the time savings and how you're going to reinvest that in yourself and your organization
[01:04:08] and learning more about some of these topics that are influencing the future of work?
[01:04:13] A hundred percent. Very well said.
[01:04:15] So Tyler, I know we kind of covered a lot of ground over this conversation. Just in,
[01:04:21] in closing, I mean, any, any final thoughts and advice for, for listeners in terms of,
[01:04:26] as the title says, uh, Elevate Your AIQ?
[01:04:30] I mean, no matter where you are in your, your headspace of where you are in your AI journey,
[01:04:36] if you want to call it that there's no time like right now to begin just practicing and testing with
[01:04:42] this stuff and, and learning and finding trusted resources that you can really figure out what are
[01:04:49] the frameworks and the techniques that you need to do to get these AI systems to work even better
[01:04:54] than just going in and just talking in plain language to ChatGPT. A couple of the really key
[01:05:00] things that I think anyone can start with are as simple as using something like an Otter or a
[01:05:07] Fireflies or a Fathom. You want to make sure that, whatever it is, the security layer is in place,
[01:05:13] like SOC 2 Type 2. I know that's something we talk about a lot. All of those are AI note-taking apps.
[01:05:18] And being able to just that one simple thing of having an AI show up into your meetings so that
[01:05:24] you can be 100% present in the conversation and have everything captured. So you have perfect recall
[01:05:30] from that. And then the next step is that if you're able to have all that stuff re-sent into a
[01:05:37] repository, like a second brain system, just that one thing is incredibly powerful. So now as you
[01:05:44] begin to learn how to connect in, you know, ChatGPT, or even further down the road, you get into
[01:05:50] agents or assistants now can go back and look through all these different meetings that you have
[01:05:55] to be able to answer questions of like, Hey, Bob and I met, you know, six months ago. Uh, and we talked
[01:06:00] about this, that, and the other, can you refresh my memory on all that? Um, so that I'm up to speed or
[01:06:06] if this was a project, what was the thing that, uh, Bob and I went and talked to client ABC over here
[01:06:12] about, I was supposed to be doing something and I can't remember. Can you give me my complete to-do
[01:06:15] list? That is super helpful for everyone. And it's a really easy thing to set up and, and get into play.
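The "refresh my memory on a meeting from six months ago" trick is, underneath, retrieval over a transcript store. Here is a deliberately naive sketch using keyword overlap; a real assistant would use embeddings and an LLM to compose the answer, but the shape is the same.

```python
from dataclasses import dataclass

@dataclass
class Meeting:
    date: str
    attendees: list[str]
    transcript: str

def recall(meetings: list[Meeting], query: str) -> list[Meeting]:
    """Rank stored meetings by naive word overlap with the question,
    most relevant first; meetings with no overlap are dropped."""
    words = set(query.lower().split())
    scored = [(len(words & set(m.transcript.lower().split())), m) for m in meetings]
    return [m for score, m in sorted(scored, key=lambda t: -t[0]) if score > 0]
```

Once transcripts land in one repository, even this crude ranking answers "which meeting was that?"; swapping in semantic search is an upgrade to the scoring line, not a redesign.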
[01:06:24] Pretty much anyone can do that. The other thing that I like using a lot that is also
[01:06:30] just a really easy lift is using a tool such as AudioPen, or there's Voicenotes, or even the
[01:06:37] fireflies app can do this where you can just record a voice memo. So I have been embracing the second
[01:06:44] brain mentality for a while now before AI, where I'm trying to offload things out of my cognitive load.
[01:06:51] So I free up and can stay in flow state. And I need to remember this stuff and get it down,
[01:06:56] or I just won't remember it anymore. I've got dad brain now, literally. And so if I don't get it out
[01:07:01] of my head and into a system, then it'll be gone. And being able to use one of those type tools to
[01:07:07] just very quickly, just open it up, vocally brain dump into it, have that transcribed. You can also
[01:07:14] begin to set up automations to turn it into whatever downstream, like an email or blog posts.
[01:07:21] The world's your oyster there, but have that again, land into that second brain system. So now we have
[01:07:27] my thought process, whether it was just like a quick thing I needed to get down a to do list item
[01:07:32] coupled with also here's all my meetings that are now being recorded and added into there.
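The voice-memo habit reduces to transcribe, then route downstream. The routing rules below are invented for illustration; the transcription helper assumes an OpenAI-style client and its `whisper-1` speech-to-text model, which is one choice among several tools mentioned here.

```python
def route_note(text: str) -> str:
    """Crude downstream routing for a transcribed brain dump:
    to-do item, content idea, or plain note. Cue words are illustrative."""
    lowered = text.lower()
    if any(cue in lowered for cue in ("need to", "remember to", "to-do", "todo")):
        return "todo"
    if any(cue in lowered for cue in ("draft", "blog", "email")):
        return "content_idea"
    return "note"

def transcribe(client, path: str) -> str:
    """`client` is assumed to be an OpenAI client object; whisper-1 is
    one available transcription model."""
    with open(path, "rb") as f:
        return client.audio.transcriptions.create(model="whisper-1", file=f).text
```

In practice the routing step is where an automation platform or an LLM classifier takes over, turning the memo into an email draft, a blog outline, or just a row in the second brain.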
[01:07:36] That's a lot of your intelligence and what you're doing on a day-to-day basis. Pre-AI, trying to
[01:07:44] keep up with all that and remember all that kind of stuff, I'm not good enough to do it.
[01:07:48] I've tried all the different productivity systems, like the Eisenhower matrix, and then even
[01:07:55] Tiago Forte's second brain system and the PARA method, all those types of things, and I struggle with it. And now that
[01:08:03] I'm able to still keep the concept of centralizing that information, but then connecting AI to it, I can
[01:08:10] outsource that out of my head and into the system. And now I can literally have so much stuff in there
[01:08:17] now, both personally and professionally. It probably knows me better than I do. I can ask it questions
[01:08:22] about myself and it'd be able to tell me things back that I might not even recognize about myself
[01:08:26] or that's going on, which is, it's pretty cool when you can do that.
[01:08:30] Awesome and scary.
[01:08:32] Yes, awesome. Both can be true.
[01:08:36] Yeah, yeah. No, you've got me thinking about, I have this giant pile of post-it notes with ideas and
[01:08:43] follow ups and to do's and whatever. And it's like, do I just use one of those transcribing
[01:08:50] tools and just start reading them into it? Or do I take a picture of all of them and
[01:08:56] you know, upload it and will it recognize my handwriting? Like I gotta figure out something,
[01:09:02] but I'm tired of walking around with a two-inch-thick pile of post-its over the last couple.
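Whether a model can read photographed post-its depends on the handwriting, but the step after the vision call, turning the model's free-text reply into a clean list of items, is ordinary parsing. A minimal sketch, assuming the model was asked to return one item per line:

```python
import re

def parse_items(reply: str) -> list[str]:
    """Strip bullets and numbering from a vision model's line-per-item
    answer (e.g. extracted post-it notes or book titles)."""
    items = []
    for line in reply.splitlines():
        cleaned = re.sub(r"^\s*(?:[-*•]|\d+[.)])\s*", "", line).strip()
        if cleaned:
            items.append(cleaned)
    return items
```

The vision request itself would attach the photo to a multimodal chat call; the prompt ("list every item, one per line") is what makes this parser reliable.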
[01:09:06] Yeah, it depends on your chicken scratch. Like mine, mine's pretty rough. So that's why I just
[01:09:10] brain dump into it, but they can do that. You can just take pictures of it. I even learned a hack from
[01:09:16] one of my buddies, Matt Burton. He went and took a picture of his bookcase and it was able to extract
[01:09:22] the titles off of every single book in his personal library. And then he had all those listed out as
[01:09:29] books that he has read and added that sort of information into a second brain system. So now these
[01:09:34] agents kind of know like, okay, what else does Tyler know about? What is he interested in? What
[01:09:39] has he read? Because all these things, whether we recognize it or it's still just going on in our
[01:09:45] unconscious minds that affects us and our thought processes and our level of intelligence. So getting
[01:09:53] our AI systems up to speed on that kind of stuff is, that's the type of information that I'm throwing
[01:09:59] into my second brain system. That and recipes and social media posts and all that kind of stuff,
[01:10:05] I'm getting it all in there. You can have a fear for AI, but make it a healthy fear. Don't be so scared
[01:10:11] of it that you're worried about Skynet, that you're going to miss the boat on how much of a positive
[01:10:15] effect this can have on your life, your business's life and your family's life. Because it definitely
[01:10:21] does. And it definitely can. It's not all bad. In fact, I would argue that the net effect of all this
[01:10:27] is going to be that we're going to be able to solve, in the very near future, a lot of these very,
[01:10:32] you know, unsolvable problems, such as curing cancer. How do we feed the world so that no one
[01:10:38] goes hungry anymore? How do we make sure that everyone is taken care of with employment and
[01:10:44] housing? Like again, those sound like, you know, this sort of picturesque future that is not possible.
[01:10:51] But I really truly believe that this could be the technology that could help us achieve some of
[01:10:59] those things. I agree. I think to your point, these previously sort of insurmountable kind of challenges,
[01:11:04] these super complex, you know, global challenges, you know, between our collective intelligence and
[01:11:11] artificial intelligence, it seems like there's a lot of opportunity to address, you know, some of
[01:11:17] these, some of these global issues. So that is a whole nother conversation, the humans plus AI to solve
[01:11:25] world hunger and world peace. So Tyler, we're gonna, we're gonna have to leave it there. But this has been
[01:11:32] awesome to really get into the weeds with you on some of these projects you're working on. And
[01:11:37] you've given my listeners some amazing advice. I know I've got a bunch of things I need to now go
[01:11:43] play with and try to improve my own, my own workflows. Always good to talk to you. And thank
[01:11:50] you for spending so much time with me today. You as well. Hey, thank you so much. I really enjoyed it.
[01:11:55] Excellent. All right, everyone. That's gonna wrap it up for today. Thank you for listening to another
[01:12:00] episode of Elevate Your AIQ. And we'll see you next time.