Bob Pulver and Joe Lazer explore the evolution of AI, its implementation challenges in business, and the importance of storytelling in the age of AI. Joe shares his personal journey in the AI space, discussing his experiences at Contently and A-Team, and how companies are navigating the complexities of AI adoption. Bob and Joe delve into the significance of the human touch in content creation and the future of AI detection. They explore the importance of maintaining authenticity and unique human elements in quality storytelling. They discuss the potential dangers of relying too heavily on AI, particularly in the context of youth engagement and the development of communication skills. The conversation also touches on practical advice for writers on how to leverage AI as a thought partner while preserving their unique voice and ideas. Finally, they speculate on the future of storytelling in the age of AI, highlighting the need for responsible use and the potential for collective intelligence in creative processes.
Keywords
AI, storytelling, business strategy, generative AI, content marketing, innovation, technology, entrepreneurship, partnerships, implementation challenges, content creation, authenticity, digital twins, youth engagement, writing therapy, responsible AI, creativity
Takeaways
- Storytelling is becoming increasingly important in the age of AI.
- Understanding client needs is crucial for successful AI projects.
- Partnerships and ecosystems are essential for AI development.
- The last mile of AI implementation is often the hardest.
- Generative AI is reshaping business strategies.
- AI detection tools are still in development.
- The future of work will heavily rely on soft skills; AI can enhance productivity but shouldn't replace original thinking.
- Authenticity in storytelling is crucial; AI cannot replicate personal experiences.
- Digital twins may undermine trust and authenticity in content creation.
- Younger audiences are drawn to authentic, relatable content.
- AI tools can serve as thought partners for writers.
- Storytelling will be a vital skill in the AI age.
- AI can help connect disparate ideas and personal stories.
- Curiosity and hands-on experience with AI tools are essential.
- The future of storytelling may involve collective intelligence and co-creation.
- Responsible AI use is critical, especially concerning youth engagement.
Sound Bites
- "AI has created a groundswell of interest."
- "What's the actual high ROI use case?"
- "Every company wants to talk to their data."
- "We need better watermarking and AI detection."
- "AI content is often dull and soulless."
- "Digital twins defeat the point of authenticity."
- "Nobody wants personalized versions of TV shows."
- "AI can be a thought partner for writers."
- "Be curious; spend 10 hours using AI tools."
Chapters
00:00 Introduction to AI and Personal Background
03:14 The Evolution of AI in Business
05:59 Navigating AI Implementation Challenges
09:04 Understanding Client Needs and Market Dynamics
11:47 The Role of Storytelling in the Age of AI
15:01 Building Partnerships and Ecosystems
17:53 Personal Projects and Future Aspirations
20:59 The Importance of Human Touch in AI Content
23:57 AI Detection and the Future of Content Creation
26:58 The Role of AI in Content Creation
30:12 Authenticity vs. Digital Twins
32:36 The Human Element in Storytelling
34:36 Navigating AI's Impact on Youth
36:46 Writing Therapy and AI as a Thought Partner
43:41 Future Horizons of AI in Storytelling
Joe Lazer: https://www.linkedin.com/in/joe-lazer-lazauskas-8b442026
Joe’s blog: https://storytellingedge.substack.com/
Joe’s book, “The Storytelling Edge”: https://www.amazon.com/Storytelling-Edge-Transform-Business-Screaming/dp/1119483352/
A.Team: https://www.a.team/
For advisory work and marketing inquiries:
Bob Pulver: https://linkedin.com/in/bobpulver
Elevate Your AIQ: https://elevateyouraiq.com
Powered by the WRKdefined Podcast Network.
[00:00:00] Welcome to Elevate Your AIQ, the podcast focused on the AI-powered yet human-centric future of work.
[00:00:05] Are you and your organization prepared? If not, let's get there together. The show is open to
[00:00:09] sponsorships from forward-thinking brands who are fellow advocates for responsible AI literacy
[00:00:13] and AI skills development to help ensure no individuals or organizations are left behind.
[00:00:18] I also facilitate expert panels, interviews, and offer advisory services to help shape
[00:00:23] your responsible AI journey. Go to ElevateYourAIQ.com to find out more.
[00:00:28] Hey everyone, welcome back to the show. In this episode, I had the pleasure of sitting down with
[00:00:43] Joe Lazer, bestselling author of The Storytelling Edge and the creator of a popular newsletter by
[00:00:48] the same name on LinkedIn and recently launching in more depth on Substack. Joe and I dive into the
[00:00:54] ever-changing world of AI, discussing the challenges businesses face in implementing these technologies.
[00:00:59] Joe shares insights from his journey at Contently and then A-Team, where Joe is currently their fractional
[00:01:04] head of content, exploring how companies can effectively use AI without losing the human touch
[00:01:08] in content creation and staying authentic. Beyond that, Joe even gave me some personal
[00:01:12] tips on writing and storytelling that I'll be putting into practice. If you're curious about
[00:01:17] the future of AI and how it intersects with storytelling and creativity, you won't want
[00:01:21] to miss a minute of this conversation. Enjoy it. Hi everyone, welcome to another episode of Elevate
[00:01:28] Your AIQ. I'm your host, Bob Pulver. With me today is Mr. Joe Lazer. How are you doing today, Joe?
[00:01:34] Doing well. You know, you got me here on Halloween, so I'm about to take my kid out right after this and
[00:01:41] get to those mean streets of Park Slope, get some candy, show him off in his first Halloween costume. I'm excited.
[00:01:48] Awesome. Yeah, no, that'll be a lot of fun. So yeah, I appreciate you doing this and I was wondering if you could just give
[00:01:56] my audience a little bit about your background and how you were introduced to AI. I know you've spent a lot of time in this
[00:02:03] space trying to teach others about what's possible with AI and your work at A-Team and stuff like that.
[00:02:09] Yeah, sure. So I'd say that probably my first exposure to AI came, you know, close to a decade ago when I was at
[00:02:16] Contently before I was at A-Team. So Contently was a content marketing technology company and one of our bigger
[00:02:23] clients was IBM Watson, right? So we were working with them on a lot of their content around AI.
[00:02:30] We also ended up integrating IBM Watson into our platform to get a lot of AI-driven content insights,
[00:02:37] which is very early on, right? These are things more like on voice and tone analyzer,
[00:02:41] like here's the voice and tone your audience is most likely to respond to. Here's automated
[00:02:45] recommendations on how to tweak your voice to do that. Stuff that's baked into Grammarly Pro
[00:02:50] now that I think we just gloss over, but was really cool at the time a decade ago, you know,
[00:02:56] we'd show it to prospects and be like, wow, this is the most awesome thing ever.
[00:03:00] And then after when I left Contently, I went to be on the founding team running marketing for this
[00:03:07] future of work, AI startup called A-Team. And the idea behind A-Team was to build an AI team graph that
[00:03:15] would understand who worked well with whom on what in small innovation teams, right? So learning from
[00:03:22] the experience that people have working together, how fast they build, how well they rate their
[00:03:29] experience with one another, and do that over and over again. So you can figure out and optimize what is
[00:03:35] the best team to bring in to build any given product. You do it as quickly, as high quality, and as cheaply
[00:03:41] as possible. And I thought that was a really cool vision because it was building out a lot of what
[00:03:46] we'd done at Contently, which is bringing together creative teams, but doing that for product and
[00:03:51] engineering teams, which, you know, people are willing to spend a lot more money on and using AI to kind of
[00:03:57] master what had been, you know, a very manual process for us when we were at Contently.
[00:04:01] And through that, you know, it's also being at this company with a lot of very smart product and
[00:04:08] engineering folks, right? And even before ChatGPT came out, my team was experimenting a lot with
[00:04:16] different generative AI tools. You know, we were playing around with Jasper for ad copy, with Midjourney,
[00:04:23] with Runway, to figure out, okay, can we create really cool and interesting videos much more cheaply than
[00:04:29] we would otherwise. And then of course, once ChatGPT came out, that created this next wave, this groundswell, that
[00:04:37] both very much changed the way that we went to market at A-Team, because we'd been doing AI projects. And I was
[00:04:44] like, wow, this is the one thing that everyone's interested in. And I think like with everyone leading
[00:04:50] marketing at the time, there was increased pressure to figure out, okay, how do we really use this for
[00:04:55] ourselves to be, you know, leaders, innovators, whatever, you know, bullshit business buzzword
[00:05:02] you want to use on the topic of AI? Because that was a big pressure, I think, for a lot of marketing
[00:05:07] teams at the time. So first of all, Joe, you and I were basically introduced to, I would say modern AI
[00:05:15] going back decades at almost the same time, but from different perspectives. So I was at IBM,
[00:05:22] working on an IBM research campus where we had the mock Jeopardy studio set up and we first exposed
[00:05:31] clients to what IBM Watson was. So from that and understanding why cloud computing, big data,
[00:05:40] advanced analytics were all necessary to, you know, make this leap with AI. And then I wound up
[00:05:47] trying to sell some of those same solutions. So I remember the partnership with Contently. I was
[00:05:52] there when we formed a Twitter partnership to ingest all that social data and run sales and
[00:05:57] social analytics on it, and a partnership with The Weather Company, which they wound up acquiring to get all
[00:06:01] kinds of other, you know, sort of localized data that affects, you know, retail and all kinds
[00:06:06] of industries. But some of those capabilities that you were describing in terms of how to understand,
[00:06:13] you know, content, how to understand personality and tone and all those kinds of things were,
[00:06:19] I mean, they were just, you know, APIs at the time, right? It wasn't built into the platform and
[00:06:24] it certainly hadn't, they hadn't really built out the ecosystem that Contently was one of the first
[00:06:29] to really start to take advantage of it and play with it and see what was possible. The other sort of
[00:06:35] huge parallel is that the last program I was running at IBM had a lot of similarities to the A-Team
[00:06:44] model, which is now that people know what's possible with AI and what these APIs can do,
[00:06:52] what ideas do you have? How do you vet those ideas? How do you build a team around those ideas
[00:06:57] and move them from an idea to reality? And I say reality instead of commercialization,
[00:07:04] because some of those ideas were internal-facing, right? Maybe they're for, you know, cost savings or
[00:07:10] efficiency and things like that. But certainly there were a lot of commercialization, you know,
[00:07:14] opportunities as well. But all the work that I did in idea management, innovation management at IBM,
[00:07:20] and then seeing what AI could do, it's like, okay, well, what if you used AI to help, you know,
[00:07:27] strengthen the idea? What if you use AI to, you know, help you select the right people with the right
[00:07:32] skills to build that team? And then how do you go and use AI to do the market research on the viability
[00:07:39] of the idea and the SWOT analysis? And then how do you go and use AI to target specific potential sponsors
[00:07:46] and investors for those ideas? So it seems like in a lot of ways, A-Team, you guys just basically,
[00:07:53] you know, commercialized and hyper-accelerated, you know, what's possible in that sort of
[00:07:58] life cycle? Well, I think these are all the big questions that companies are still grappling with,
[00:08:03] right? It's like, what's the actual high ROI use case that I can have for this technology?
[00:08:08] There's been a little bit of that shiny object syndrome with AI over the last couple of years,
[00:08:13] where it's like, oh, like we can, let's just find some use case for it. There's pressure from the
[00:08:18] board top down for us to figure out what our strategy is. And I think the biggest thing that
[00:08:23] we've found is that with AI, it's really easy to get to like 70, 75% done to have a prototype that
[00:08:31] works well enough in certain contexts to kind of prove out an idea. But then it's very hard to do
[00:08:38] that last mile, that last 20 to 30%, especially when it comes to things like having a scalable
[00:08:43] infrastructure, minimizing hallucinations. So much of the work we're really starting to do now is in
[00:08:51] partnership with, you know, other larger frontier model type companies who have realized that you
[00:08:58] can't really just give most corporate clients all these APIs or just, you know, give them ChatGPT
[00:09:06] Enterprise and say, go have at it. You're not actually going to get any gains from that. And a
[00:09:12] lot of it, you know, requires having a knowledge of what these technologies can do. And I think one
[00:09:18] of the advantages of working with, you know, fractional specialized folks in this is that
[00:09:22] they've seen it across six to eight companies. They already know what giant pitfalls there are
[00:09:26] along the way. And the second part is that they have that zero to one startup mentality and experience
[00:09:34] for building new products, which, if we're being honest, quite frankly, just isn't the
[00:09:38] purview of most IT teams inside of large non-tech organizations. So it's a kind of
[00:09:45] different muscle and cultural mindset that you need to bring in. So, you know, as much as we're working
[00:09:51] on it, I think we're still figuring out right what the best model is for all of this. Like we're
[00:09:56] literally learning every day and with every engagement. Right, right. And so a lot of the folks that come to
[00:10:02] you, you know, to A-Team, I mean, are they, you know, budding entrepreneurs? Are they
[00:10:07] startups who are stuck, or are they, you know, companies who just want to, you know, do some
[00:10:13] kind of spinoff in a skunkworks thing that doesn't abide by the same rules and inhibitors to
[00:10:19] innovation within their own company?
[00:10:21] It ranges, right? So, you know, we have, we have customers in every market and startup and
[00:10:27] mid-market and enterprise with startups. I think you get two buckets. One is more of an AI native
[00:10:33] startup. They know what they want to build. They have the idea. They just need to move really quickly.
[00:10:38] So they want really good engineering or design or data science talent that they can get up and running
[00:10:44] in the next, you know, week from a team to fill in those holes. Cause they don't want to waste,
[00:10:48] you know, four-plus months on a hiring cycle. Yeah. So that's kind of bucket one. Bucket
[00:10:52] two is that it's more of a non-tech-native SMB, or they built their product, you know, like a decade
[00:11:00] ago. They want to bring in someone who can, you know, give them that team to build, say a new AI
[00:11:08] driven LOB to take their business to the next level. Cause they see the writing on the wall that
[00:11:13] if they just stick with their legacy product, it's probably not going to work out well in the long
[00:11:17] run. Then with mid-market and enterprise, it tends to happen much more on the strategy front first.
[00:11:23] So kind of help us figure out what's the right use case here, sell it in internally. And then
[00:11:29] bring in the right type of team with the right type of experience who's done this before to build it
[00:11:35] out. And again, I think that's maybe the most valuable thing right now that really good fractional
[00:11:42] folks in our network bring is the fact that they've just done it across, you know, a half dozen or a
[00:11:48] dozen different companies. They're very good at these very specific stages of the development process.
[00:11:54] They know what's going to go wrong, which if you just have built it at one company before, you just
[00:11:59] don't have as good of a breadth of understanding of what the potential pitfalls are. It's a case where
[00:12:06] I think companies are just getting there where they're ready to move past the prototyping and
[00:12:11] pilot phase and really adopt generative AI. Like all the hype two years ago where it felt like every
[00:12:17] analyst on earth was tripping over themselves to project how much GDP AI would add in the next year,
[00:12:24] how many jobs it would replace, what the pace of transformation would be. I mean, like,
[00:12:28] I think we all kind of knew in the back of our minds that that was likely bullshit and we were
[00:12:34] likely headed for the bit of a trough of disillusionment that we're in right now,
[00:12:38] according to Gartner. But that's a healthy part of the process, because now we actually figure out,
[00:12:42] you know, what's the right application of this technology and what works.
[00:12:45] Yeah, no, that makes total sense. And I think you're right. You guys have built a pretty impressive
[00:12:50] network of some of those fractional folks because I know a few of them and you're right. The exposure to
[00:12:55] different companies in different scenarios is really important, not just from maybe a data and analytics
[00:13:02] maturity standpoint. Do you even have the foundation to build something that can be,
[00:13:07] you know, scaled across an existing enterprise, maybe overcoming some of the legacy baggage,
[00:13:15] but also just culturally, you know, every company's different and you may have different,
[00:13:20] you know, approaches to how fast people want to move or how quickly you want to, you know, maybe
[00:13:26] move from experimentation in one domain and move on to others. So just different appetites, I suppose.
[00:13:32] Yeah. And when it comes down to it, like most companies want to do the same thing with generative
[00:13:36] AI right now. They want to be able to talk to their data and get insights out of it. Like when
[00:13:41] it comes down to it, that's what 90% of these ideas or implementations comes down to.
[00:13:47] What I've seen, you know, I've been spending a lot of time on different content and research
[00:13:52] pieces lately with some of the top AI folks in the network, and they have just figured out really
[00:13:58] novel ways to extract insight from semi-structured or unstructured data, which is the case inside of
[00:14:05] most companies. Whereas figuring that out the first time took them months, right? But now they can
[00:14:11] apply this novel approach they developed and apply it as almost like a ready-made template or building
[00:14:17] block to the development cycle and now do that in two or three weeks. And that's the thing that's
[00:14:22] been most mind blowing to me when I spend time really going deep with them and try and understand,
[00:14:27] you know, all of this better for our own content, for our own marketing is what a colossal advantage
[00:14:33] that becomes. The frameworks that you guys use, I guess I was curious about like,
[00:14:38] do you have partnerships with some of these like platforms that may connect some of the, you know,
[00:14:46] agents or conversational AI, you know, bots and stuff like that? Or do you guys build like
[00:14:51] proprietary sort of plumbing? Yeah. I mean, we have some partnerships in place,
[00:14:54] like for instance, with, you know, writer.com. I'm familiar.
[00:14:58] May Habib's company. Okay.
[00:14:59] Yeah. So they're building basically the end-to-end platform for generative AI. But the big idea of
[00:15:06] Writer Studio, which I think is very, very cool, and I'm a huge fan of May and her approach,
[00:15:12] I think she's one of the smartest and strongest, you know, leaders in this space,
[00:15:17] is to basically build no-code tools internally for building AI applications. But the truth is, inside
[00:15:22] of a lot of companies, you need more custom development that involves actually using,
[00:15:27] you know, the APIs with various tools to build it out. So having, you know, sort of writer certified
[00:15:33] A-teamers who can come in and execute against that is one example. And we're working to develop more
[00:15:39] and more of those relationships in the ecosystem because, you know, the truth is that every company
[00:15:44] is going to need custom development and most companies can't afford to go to an Accenture or
[00:15:51] McKinsey, you know, and it might also not always be necessary to do so. If what you really need is
[00:15:58] just a small agile tiger team to come in and help drive your development forward.
[00:16:04] Bob Pulver, host of the Elevate Your AIQ podcast. Human-centric AI, AI-driven transformation, hiring for skills and potential,
[00:16:12] dynamic workforce ecosystems, responsible innovation. These are some of the themes my
[00:16:17] expert guests and I chat about, and we certainly geek out on the details. Nothing too technical.
[00:16:22] I hope you check it out.
[00:16:24] Excellent. I want to talk, you know, more about, you know, some of the things that you're working on.
[00:16:28] And personally, I know your newsletter, which, like, has an incredible, you know, following already,
[00:16:34] I think you hit 150,000 subscribers, at least on LinkedIn. But congrats on the success of that.
[00:16:40] And I thought you could just unpack that because I'm really liking your candid, you know, writing.
[00:16:46] It just resonates with me quite a bit.
[00:16:48] That's really kind of you to say, I really appreciate it. Yeah. So,
[00:16:51] you know, I've, I started out my career as a bit of a freelancer, you know,
[00:16:56] I started a bootstrapped media company out of college called The Faster Times with a couple of my bosses
[00:17:02] from college. But often, to pay the rent and the bills there, I was picking up a lot of
[00:17:09] freelance journalism gigs and content strategy gigs. You know, we built a branded content studio
[00:17:15] there doing early work with brands before content marketing was maybe even a term at the time back
[00:17:20] in 2010, 2011. And then spent the last 11 years working full time at platforms that basically
[00:17:26] specialized in connecting highly skilled freelancers who are at the top of their game with brands
[00:17:34] willing to pay at the top of the market for high quality, whether that was in, you know, content,
[00:17:37] filmmaking, now product development.
[00:17:40] So it got to the point where, you know, my creative side projects were getting to a pace where
[00:17:45] I thought, huh, I've been marketing and evangelizing this lifestyle for a decade now. Maybe I should get
[00:17:51] back to it a little bit because yeah, I, you know, I wrote a pilot of a TV show that got made with my
[00:18:00] friend and partner, Shane Snow, called Resignation. It's about a group of freelancers trying to make it in
[00:18:05] the new world of work. So we're currently, you know, hoping to get that sold on a continuation deal.
[00:18:11] So I had that in the works, was helping out with some other TV shows that his studio is developing.
[00:18:16] You know, my newsletter, yeah, has continued to take off. So it's called the storytelling edge.
[00:18:21] It's about how to master the art and science of storytelling and use those lessons to
[00:18:27] build stronger relationships in your career and in your life. It's based off a book that I wrote
[00:18:33] that did pretty well a few years ago, also called the storytelling edge. And as AI hit,
[00:18:39] a lot of the big focus that I've had is really exploring what are the skills that are going to
[00:18:45] be most important in the age of AI. And as I dug into this for a lot of my future of work research
[00:18:51] at a team, the conclusion that I came to was that storytelling would actually be the super skill
[00:18:58] for this next stage. Because as we moved into an era in which machines could do the technical tasks
[00:19:04] and skills better than the vast majority of us, the thing that's going to be most valuable at work
[00:19:10] is the relationships we have with other people, our ability to rally folks together, to communicate
[00:19:16] a clear vision, to set that path forward. And these are all the skills that we've long derided as soft
[00:19:24] skills, right? Things like empathy, communication, leadership. And if you really look at the course
[00:19:31] of human history of the way that these skills develop throughout our lives, the way that our
[00:19:38] brains work at a fundamental level, storytelling is the super skill that powers all of these different
[00:19:45] soft skills. So I've been writing a lot about how to really think about the role that storytelling
[00:19:53] and your own creative work plays in this next age of AI and what the future of storytelling looks like
[00:20:01] in this next era that we're entering into. So that's essentially the basis for this newsletter,
[00:20:08] which I'm investing more and more time in. As you alluded to, it has over 150,000 subscribers on LinkedIn,
[00:20:13] which goes direct to inbox. But I'm always, honestly, a little bit skeptical that a social media platform
[00:20:20] will do right by creators over the long term. So I've been migrating those folks for a while over to
[00:20:27] a kind of monthly mailing list, and now moving that fully over to Substack and investing more time there.
[00:20:34] And that separate list has about 40,000 folks or so. And I'm working on a new book that's centered
[00:20:42] around a lot of these ideas, working on the TV projects and still involved with A-Team as a fractional
[00:20:49] exec, focused on a lot of our content and storytelling and PR efforts. But it's been two months now,
[00:20:56] two months today that I've been in this new role. And I have to say, I love it. I think as someone who
[00:21:02] always bites off a little bit more than they can chew, always has a bunch of side hustles going on.
[00:21:08] In addition to trying to raise a two-year-old, it's been nice to more formalize that as a portfolio of
[00:21:16] work, as opposed to trying to cram all those things in late at nights and on weekends and not sleeping
[00:21:23] enough. Yeah, for sure. I'm with you 100% on what I now call the durable skills. I give my friend
[00:21:30] Antonia Menaceri credit for that term. I know other people are starting to use it, but they really are the
[00:21:36] more durable skills going forward as you alluded. As AI takes on more and more, I was just having
[00:21:44] this conversation recently with some other guests and it's come up in a couple of different contexts
[00:21:49] about people's ability to tell, oh, I can tell when this is written by AI. Well, I know you think
[00:21:58] that, but let's be careful, especially if you're actually judging someone for college admission
[00:22:04] or for a job based on that writing sample, as opposed to someone that's, yeah, some people are writing,
[00:22:12] using AI to write their newsletters and stuff like that. And if you need to do that, that's fine,
[00:22:19] but you need to understand that if you're not the human in your own loop and content production before
[00:22:24] you hit publish, there's going to be plenty of people who just find that it's sort of dull. You
[00:22:29] just talked about this in your post the other day, right? Like if you think you're being authentic and
[00:22:35] expressing real human emotion and empathy and creativity, and you're talking about how you truly
[00:22:41] like resolved conflict or you've led an organization or whatever. I mean, you can sort of tell, but
[00:22:49] I just, again, I do caution people before judging too quickly because we do have false positives with
[00:22:57] some of this with people who maybe aren't, English isn't their first language or just people who need
[00:23:03] that thought partner and maybe just didn't do enough editing or whatever. But I think the points
[00:23:09] you made in that post, as I mentioned, they just really resonated. Yeah. I mean, I think one on the AI
[00:23:16] detection side, I think we have to accept that we do not have the tools yet to detect accurately when
[00:23:22] something is written by AI. So, you know, I've heard a lot of stories about, you know, professors
[00:23:28] that are using AI to check students' work and automatically failing them if they get a positive.
[00:23:34] I mean, these AI detection tools will tell you that the Constitution
[00:23:38] was written by AI. Yeah.
[00:23:40] We're not there yet. We need to get there because we're seeing a deluge of AI slop pollute the web
[00:23:48] and things are getting really messy really fast. So we need better watermarking. We need better
[00:23:53] AI detection. Google just came out with a paper last week that seems to have found a fairly
[00:23:59] novel approach. I won't go deep into it because I don't fully understand it, but it essentially uses
[00:24:05] like a tournament method of checking AIs against each other. That seems to be a decent solve.
[00:24:12] But when it comes to AI to create content, like it's one of my favorite topics to discuss right now,
[00:24:17] because I'm not against using AI in production or creative process at all. But where I think AI
[00:24:22] gets dangerous is when you outsource your thinking to it because writing is thinking,
[00:24:28] right? When writing is hard, it's because we're really trying to figure out
[00:24:31] what we want to say. So even when you go to AI for that first outline, you're outsourcing that
[00:24:39] thinking to the AI, and whatever output you're going to get from ChatGPT or Claude is really
[00:24:44] going to anchor your ideas. Inherently, the way that these frontier models work, both through the way
[00:24:50] that they're essentially an averaging out of all the texts on the web that has been scraped,
[00:24:55] copyright be damned in their training data, and also move towards safe outputs through the human
[00:25:01] reinforcement learning in, you know, essentially digital sweatshops in Africa and Southeast Asia.
[00:25:08] That all sucks the ideas and output into what I call this sort of vortex of mid, where the ideas, you know,
[00:25:15] even with the best prompting, aren't going to be truly, truly original. They're not going to be completely
[00:25:20] unoriginal, but they're going to be just kind of mid. And, you know, the quality, it's going to be better than,
[00:25:26] you know, even a lot of freelance writers you might be able to hire, certainly better than an intern.
[00:25:31] But it's a little bit dull and soulless. One of the best jokes I've heard is, you know, that AI art and
[00:25:38] writing has done what humans never could do and proven the existence of the soul, just by reading it and
[00:25:45] seeing the lack of it. But, you know, if you do really start by figuring out what are your own
[00:25:52] unique original ideas, if you do hook your audience with stories about your own life, because AI can
[00:25:58] never come up with stories about your own life for you, right? AI might be able to write, but it can
[00:26:02] never be a storyteller. If you do those things, and then you use AI to get feedback on your writing,
[00:26:08] to clean it up, to give you like alternate tones for specific paragraphs, use things like Grammarly Pro
[00:26:14] for, you know, copy editing, like, have that. Like, I think all those use cases are totally fine.
[00:26:20] But I do think that people are going to gravitate towards content that they feel like is created by
[00:26:27] a human. And that's what those stories and even some of those imperfections in human created content
[00:26:32] give you. Like, is your LinkedIn feed like mine flooded with people making those avatar videos
[00:26:39] and being like, this is crazy, this is going to change everything, you know, this was a fake video
[00:26:43] of me? Yeah, I've got a lot. And I've also talked to some folks who work at some of those, those
[00:26:49] companies who are making those, those avatars and have offered to use that. But my very last
[00:26:56] conversation was about, like, whether or not that's an appropriate use of the technology. Like, if you
[00:27:03] were, let's say you're not great at public speaking, and you just want to get something out, you want
[00:27:08] it to be as polished as it can be. Do you create a digital twin, feed it a script, and then just have
[00:27:16] this like super polished, you know, almost magical, like version of yourself? I mean, don't we do that
[00:27:23] with, if we're using AI to create a custom, you know, resume, so we can put our best foot forward? I
[00:27:28] don't know. I don't know where the line is drawn. I just don't know who wants that, right? I think
[00:27:33] it's a nobody wants this scenario. Yeah, the only type of content that works with those digital twins
[00:27:40] is the ones pointing out that this is novel, like, hey, this is, I created this video of my digital twin
[00:27:46] that's talking to you right now. Isn't this crazy, right? That's the one type of content with that,
[00:27:51] that works. I don't think anyone is going to follow a creator who's constantly creating uncanny
[00:27:59] valley content using their digital twin over and over again. Like you're just sort of losing that
[00:28:04] sense of trust and authenticity. The reason that vertical video has popped off so much and why,
[00:28:12] most people under the age of 25 spend way more time watching, you know, vertical video on TikTok and
[00:28:19] Instagram than more premium polished, you know, TV and video is because we're drawn to the authenticity
[00:28:28] of that format. Having an AI version of yourself doing those vertical videos completely defeats the
[00:28:35] point. And I think that's the thing that a lot of people in Silicon Valley just don't seem to get about
[00:28:41] the way that humans operate in terms of content is that we just don't necessarily want a lot of these
[00:28:48] things when it comes down to it. One of the dumbest ideas I feel like I've heard is the idea that we're
[00:28:54] all going to have personalized versions of our, of TV shows, uh, where I'm getting a version of,
[00:29:00] you know, Game of Thrones or House of Dragons. It's only made for me. Nobody wants that TV and stories are
[00:29:05] a communal experience, right? Like it would suck to see an episode of a TV show that I can never talk to
[00:29:12] anyone else about because they never experienced that same episode. It, there's just all these
[00:29:17] little things in the way that we think about AI and content that I think completely missed the point
[00:29:22] of how humans operate and what we're actually attracted to. Yeah. I think that's taking.
[00:29:27] Hi there. I'm Peter Zollman. I'm a cohost of the Inside Job Boards and Recruitment Marketplaces podcast.
[00:29:34] And I'm Steven Rothberg. And I guess that makes me the other cohost.
[00:29:37] Every other week we're joined by guests from the world's leading job sites.
[00:29:41] Together, we analyze news about general niche and aggregator job board and
[00:29:46] recruitment marketplaces sites. Make sure you sign up and subscribe today.
[00:29:52] The choose your own adventure concept just a little bit too.
[00:29:56] Yeah. I mean, like, listen, you know, like I read some Choose Your Own Adventures,
[00:29:59] Goosebumps, when I was a kid, like I enjoyed them. But, uh, even with that, like you're getting the same
[00:30:04] Choose Your Own Adventures as everyone else. Right. I can still talk to, you know, my best friend
[00:30:09] in first grade about like, oh, did you try, you know, number seven. And then, uh, it's still a
[00:30:14] common experience. I just don't think anyone wants it.
[00:30:17] I'm thinking about what you said about like what the younger generation really pays attention to.
[00:30:23] So, uh, obviously kids are spending too much time on, on social media and they're doing scrolling and
[00:30:28] they could probably be doing something more constructive. But to your point about the content itself,
[00:30:32] I mean, my daughter, my daughter's 16 now, but I think some of the stuff that she watches is just
[00:30:38] incredibly sophomoric and stupid, but at least it's authentic. And at least it's real people
[00:30:43] doing stupid shit, but you should maybe appreciate how people built those things and how,
[00:30:50] I don't know how it works or whatever.
[00:30:52] Everyone's always wanting an easy button
[00:30:54] for content, right? So digital twin of yourself, where you don't have to get good at delivering your
[00:30:59] own video and you can just have AI do it for you. Like that's an easy button, but I think in the end,
[00:31:04] it's not really going to work.
[00:31:05] Yeah.
[00:31:05] In terms of AI formats, you know, I think that's just a stupid idea. The ones that
[00:31:10] scare me are the character bots right now, especially from a parent perspective. Like,
[00:31:15] imagine you heard of that, you know, that kid, um, who killed himself after falling in love
[00:31:20] with his Character.AI bot and wanting to, you know, be with her. Um, and I think that's the type of stuff
[00:31:26] that's really scary moving forward.
[00:31:28] Yeah, for sure. And I mean,
[00:31:30] in some ways that's, um, you know, when we talk about responsible AI, I mean, that's just one of
[00:31:36] those things that we've got to really think about. I've been more focused obviously on organizational
[00:31:42] use cases, especially in talent where we're making decisions about people's livelihoods, right. But,
[00:31:49] you know, because, you know, we have, we have kids and because we see what's going on
[00:31:54] elsewhere in other domains. Um, we've got to think more broadly about, you know, responsible
[00:31:59] use and being responsible by design and having the right controls, you know, in place as well as
[00:32:04] the transparency to know AI is being used. So I think I definitely want to check out what Google
[00:32:11] just published. I saw that, um, as well about the watermarking.
[00:32:13] Yeah. Just on the content side and the storytelling,
[00:32:21] being a little, you know, selfish, but I'm sure there's listeners that are struggling with the
[00:32:25] same thing, but I know that there's ways that I can use AI to be my, my thought partner. But
[00:32:32] to your point before, like, I don't want it coming up with, you know, random
[00:32:37] shit that I try to like, you know, expand upon. I want it to learn enough about me and some of the,
[00:32:42] some of the ideas that I have and some of the things that I've done. So it at least knows,
[00:32:47] you know, how to sort of, uh, assess what I do right and, and strengthen it. But any advice for
[00:32:54] someone who's done and seen a lot of shit and just has trouble, uh, getting it, getting it out there.
[00:33:01] Yeah. I mean, let's do a little
[00:33:02] writing therapy right now. Like, what do you find most difficult? Is it like figuring out,
[00:33:07] you know, what that, that hook is for your post? Like what's a story to lead with? Like,
[00:33:12] where do you find it most difficult or just the act of writing itself, like sitting down and doing it?
[00:33:17] More of the latter,
[00:33:17] but a lot of it is like my brain kind of thinks of many things and the connection between those things
[00:33:23] all at once. And so it's really just like the focus and the logic of laying out like,
[00:33:29] And prioritizing maybe what's important. It's just my, you know,
[00:33:34] ADD brain trying to get it down. I mean, you know, my editor always has a lot of work to do because
[00:33:40] even through these conversations, I kind of go in different directions, even stopping mid-sentence
[00:33:46] sometimes and, and thinking of something else. Right. So it's like, I don't want to spend,
[00:33:50] shouldn't take me, you know, 10 hours to write a couple paragraphs. Right. So it's just,
[00:33:54] it's just getting it out there. And maybe there's a way to just have like a, like a repository of,
[00:34:00] of all of these thoughts. Maybe I just need to start writing them down and putting them away and
[00:34:04] then giving AI access to it to say, let's figure out how we can formulate some of these key thoughts in
[00:34:10] a logical fashion. So it flows like a, like a story. Right. I think that is one of the interesting
[00:34:15] applications right now of Google NotebookLM. So that was developed at Google with this writer,
[00:34:25] Steven Johnson, who's written, you know, a dozen books, maybe 14, you know, pretty popular books,
[00:34:30] a New York Times writer, who went in-house at Google, developed this tool, and basically was tasked with,
[00:34:35] like, help us develop an AI tool that will actually be useful for writers. I was really skeptical of Google
[00:34:41] NotebookLM when it came out, because I thought the automated podcast thing was just another example
[00:34:47] of like a hyped AI content format that would actually have very little utility in real life,
[00:34:53] especially because it makes things up. It changes ideas back to the most average version of them.
[00:34:58] It's a lot of garbage filler if you actually listen to a few of them. But I think what's really
[00:35:04] interesting about this tool is to have this space where they're not feeding your own content
[00:35:09] back into their own training data, which is important, but essentially where you could dump
[00:35:14] a lot of different ideas. And then you can talk to the notebook about connecting those different
[00:35:20] concepts together. Let me give an example. So I think that the simplest sort of formula for thought
[00:35:26] leadership writing is lead with a personal story about your life that hooks the reader in,
[00:35:34] and then that connects to a bigger idea that you might have, right? For example, take a popular
[00:35:41] newsletter I wrote a few weeks ago about why storytelling will be the super skill of the AI age.
[00:35:47] I open with my experience, like having my son a few weeks after ChatGPT comes out, spending my paternity
[00:35:55] leave going into an absolute doom loop rabbit hole over generative AI, worrying about his future,
[00:36:01] worrying about my future, and how that then led to these realizations about how storytelling actually
[00:36:07] will be the most important skill of the AI age, right? This is the sort of formula that just works
[00:36:12] really well for any sort of thought leadership content because it creates a relatable connection
[00:36:16] with the reader and then makes them more open to this big idea that you're presenting because they feel
[00:36:23] a connection to you, you've built some trust, and it brings them in as opposed to just starting with
[00:36:28] the idea which feels like you're doing homework or you're eating your vegetables off the bat.
[00:36:33] I think there's potentially interesting applications of using a notebook like that to connect sort of
[00:36:38] big ideas that you might have to personal stories that you might have because I think everyone does this
[00:36:44] in a different way. Like I have some writer friends who start with a little story and then they go and
[00:36:49] they're searching for what is an idea that could possibly connect to that story, right? As an opener,
[00:36:54] I'm someone more who has the idea that I want to write about and then I look back in the recesses of
[00:37:01] my brain for what is the story that I could connect in there. But that seems like the type of thing that
[00:37:06] AI could help if in your ADD brain, and I think one of my specialties as an editor is working with ADD
[00:37:13] writers if you ask some people on my team because I kind of can see the different disparate connections
[00:37:18] that they're making in their brain, but is to sort of like have two dumps, right? Like one notebook
[00:37:24] that's like all the kind of big ideas and concepts I want to write about. And then over here I have
[00:37:28] like a repository of different stories, like having my kid, like hitchhiking in Albania, jumping off this
[00:37:34] cliff in Hawaii, getting arrested in Costa Rica for doing some bad shit. And then figuring out how you can
[00:37:44] tie those opening leads to your big ideas. I think that could be like one really interesting
[00:37:50] application you have. Or I'm also bullish on just using AI as a little bit of a tutor or a
[00:37:55] thought partner. I think voice mode is really cool with chat GPT. I've had surprising results just like
[00:38:02] walking around with my dog and talking to it and trying to work out an idea, right? Like,
[00:38:07] does this make sense? Like how would I refine it? Yada, yada, yada. Like you're not always going to have
[00:38:13] someone that you can talk to on demand about that. So treating, you know, AI like an AI intern that you
[00:38:20] can just brainstorm with, I think is another way that we can do that.
[00:38:24] No, that's really helpful. Yeah. I mean, some of it's like, I've done a lot of stuff and a lot of those prior
[00:38:31] experiences, even if they were quite a while ago, like they're coming back full circle, like another wave of
[00:38:39] technology, right? Like everything that I was describing at the beginning that relates to like
[00:38:45] A-team and putting that, putting the teams together and stuff like that. I mean, all this extracurricular
[00:38:51] work I was doing at IBM around like crowdsourcing and collective intelligence before there was a gig
[00:38:55] economy. And before any of this really took off, it's like, it was all manual. And then, and then IBM
[00:39:02] research came along and started giving us these, you know, graph database powered solutions for social
[00:39:06] network analysis and knowledge graphs and all this stuff. And it's like, it keeps resurfacing in the
[00:39:11] context of AI, like everything I went through with building automation strategy at NBCUniversal,
[00:39:16] or, you know, early, the first wave of chatbots that completely sucked. So there's all these topics
[00:39:22] that have become interrelated. And then I've got to, like, put them in a logical fashion, if I'm going
[00:39:27] to incorporate them in a particular story at all. And then I guess the other aspect is, you know,
[00:39:33] I spent most of my career as an individual contributor, sometimes getting pretty deep
[00:39:37] in the weeds, but then other times I'm also like a chief of staff who's got this higher profile kind of,
[00:39:45] kind of audience. And I have no choice, but to balance both, both being in the weeds to do tactical
[00:39:52] stuff and get my hands dirty, as well as putting my systems thinking hat on and thinking big picture,
[00:39:57] like how is this going to evolve, or how could this evolve over time on an H2, well, probably not
[00:40:05] H3, that's just throwing darts, but moving from H1 to H2, like what happens when you get out of the
[00:40:10] trough of disillusionment, right? Like, so it was just always like 20 things going on in my head.
[00:40:16] And so, so maybe it's just the focus and the dedication and then a little bit of perfectionism.
[00:40:21] It kind of makes me really intrigued with the idea of building a sort of story editor or story teaser
[00:40:29] GPT like that, right? Because if I sat with you, I could ask a series of questions that gets to the
[00:40:34] core idea you'd want to do. Be like, okay, like thinking back to your time at IBM, like you're
[00:40:40] building those chatbots. Like, was there one moment where you realized, like, this just isn't going to work,
[00:40:44] right? And then could we connect that story to a development that you see right now where, you know,
[00:40:51] maybe it's Claude and the way and where we are with agents, right? Where, you know, there's a lot of hype
[00:40:57] around agents in this next era of work. But we saw earlier this year that Devin, the AI software developer,
[00:41:05] turned out to mostly be a scam. Even Claude's agent capabilities that came out two weeks ago,
[00:41:12] I mean, it can only actually operate an operating system successfully a little less than 15% of the
[00:41:18] time, you know, and I imagine you could, I could probably like program a GPT to do that somewhat
[00:41:24] successfully. So if that's something I'll try out, then I'll send to you to see if it helps at all.
[00:41:29] Yeah, that'd be awesome. Joe, I want to be respectful of your time. I know you've got a two-year-old
[00:41:34] waiting to get some candy. So any final thoughts on advice for folks that haven't gotten started to,
[00:41:41] you know, literally elevate their AIQ.
[00:41:44] So be curious. It's the most generic advice you're probably going to get. But it's the thing that
[00:41:49] Ethan Mollick writes about in his book, Co-Intelligence, which I highly recommend,
[00:41:54] by the way. I know we're not doing the Ezra Klein podcast thing of recommending three books at the
[00:41:58] end, but that's one that I'd recommend. He writes that you only really get an understanding of what
[00:42:03] AI tools are capable of when you spend at least 10 hours using them. And I think that's right.
[00:42:07] And that's in my experience with every tool that I've used, whether it's Claude, whether it's a
[00:42:11] video editing tool like Opus, whether it's, like, Midjourney. But figure out where these things can
[00:42:16] play into, you know, your productivity and your efficacy in your work with AI. I think that from
[00:42:24] a storytelling perspective, like we're looking at three horizons right now for how, you know,
[00:42:30] content and storytelling is going to change in the age of AI. For my creatives out there, I think the one
[00:42:35] we're at right now is using it in terms of, you know, enhancing productivity, using it like your
[00:42:41] team of interns. You can build your one person media operation. You couldn't before using it for
[00:42:45] tutoring, doing the sort of teasing out of story ideas like we just talked about or learning or
[00:42:50] researching about a topic more deeply. But I also think we're getting into a second wave where we're
[00:42:54] seeing some really creative artists and filmmakers use AI in a way to create something that is new and
[00:43:02] original and artistic. And, you know, this stuff could go down a really evil pathway in terms of
[00:43:09] just taking all of our jobs and having the world polluted with slop, which definitely is happening
[00:43:14] already. But there's also a world in which, you know, we see short films like Airhead, which is made
[00:43:20] using Sora that are new and different and interesting and things that we couldn't create without it.
[00:43:24] That's the path that most technology takes in storytelling. And, you know, the third horizon that I'm
[00:43:29] most interested in researching right now, so if anyone listening to this has seen a cool example
[00:43:33] of it, shoot me a DM on LinkedIn. My DMs are open. It's using AI for sort of collective intelligence
[00:43:43] and co-creating art in big groups. You see artists, you know, like Grimes and Holly Herndon
[00:43:49] doing this in the music space. And yeah, finally, like if you want to follow this more deeply,
[00:43:55] if you want to learn about the art and science of storytelling and how AI is changing content,
[00:43:59] give a plug for, you know, my LinkedIn newsletter called the Storytelling Edge. You can also find
[00:44:04] on Substack, the Storytelling Edge, go to my profile, you can find all those things and
[00:44:10] appreciate the opportunity to get to chat about this stuff. Always, always love to, Bob, and
[00:44:15] the opportunity to fit in a couple of plugs here.
[00:44:17] Yeah, of course. Of course. Yeah, Joe, I really appreciate your time. This was awesome.
[00:44:21] Thanks for some of the personal tips as well. And yeah, I want to wish you the best of luck with the
[00:44:27] new book and the newsletter and the movie and all these other things that you're working on. And
[00:44:34] I hope your son gets a lot of good candy. Appreciate it. Yeah, probably going to limit him to one or
[00:44:38] two pieces, but you know, I'll eat the rest. Nice. So yeah, and some of those links I'll make sure to
[00:44:45] include in the show notes so people have them. Thank you again, Joe. This was great. Thanks everyone for
[00:44:50] listening and that concludes another episode of Elevate Your AIQ. We'll see you next time.