Bob sits down with Giancarlo Erra, a creative tech entrepreneur with extensive experience in AI across numerous domains. They discuss the challenges small businesses face in adopting AI, the misconceptions surrounding the technology, and the importance of education and critical thinking in navigating the AI landscape. Giancarlo emphasizes the need for society to take responsibility for AI adoption and for the workforce to upskill as the job market changes. Bob and Giancarlo also explore the implications of AI for skills, education, and employment, stressing durable skills such as critical thinking and empathy, responsible AI usage, and the potential of custom GPTs to empower users without technical backgrounds. Giancarlo closes with practical approaches to AI training, thoughts on AI's role in creative processes, and the need for ongoing education to navigate the challenges these technologies pose.
Keywords
AI, small business, education, critical thinking, technology adoption, media influence, workforce skills, upskilling, societal responsibility, AI risks, durable skills, custom GPTs, creative processes, responsible AI, technology integration, talent management, automation
Takeaways
- Giancarlo Erra has been a creative tech entrepreneur for 25 years.
- Small businesses often struggle with AI adoption due to limited resources.
- Many businesses put technology first before understanding their actual needs.
- The media often creates confusion around AI advancements.
- Education is crucial for understanding and using AI effectively.
- Critical thinking is a vital skill in the age of AI.
- Society must take responsibility for how AI is used.
- Upskilling is essential for adapting to the future job market.
- Durable skills like critical thinking and empathy are essential in the AI era.
- Over-reliance on AI outputs can lead to negative consequences.
- Education and upskilling are crucial for adapting to AI integration.
- AI should enhance human capabilities rather than replace them.
- Experimentation with AI tools is key to understanding their potential.
- AI can serve as a valuable brainstorming tool for generating ideas.
- Responsible AI usage requires sensitivity and fairness in implementation.
- The responsibility for educating others about AI lies with us.
Sound Bites
- "You need to understand what you need."
- "AI is not a silver bullet."
- "We need more critical thinkers."
- "We will always need critical thinking and empathy."
- "Don't be overly reliant on AI outputs."
- "Education is key to understanding AI's impact."
Chapters
00:00 Introduction to Giancarlo Erra and His Background
03:09 AI in Small Businesses: Patterns and Challenges
05:52 Understanding AI: Misconceptions and Media Influence
09:12 The Importance of Education and Critical Thinking
12:12 The Future of Work: Human Skills vs. AI
14:53 Societal Responsibility in AI Adoption
18:13 Navigating AI Risks and Regulations
20:58 The Role of Education in AI Literacy
23:58 Empowering the Workforce: Upskilling and Adaptation
30:50 The Importance of Durable Skills in the Age of AI
32:14 Navigating AI's Impact on Talent and Employment
34:53 Education and Upskilling for AI Integration
38:23 Practical Approaches to AI Training and Tools
41:52 Custom GPTs: Empowering Users Without Technical Skills
44:11 Exploring AI's Role in Creative Processes
59:56 Final Thoughts on Getting Started with AI
Giancarlo Erra: https://www.linkedin.com/in/giancarloerra
Promethean Box Ltd: https://prometheanbox.com
Tweetify It: https://tweetify.it
CopyForge AI: https://copyforge.ai
Your Vault AI: https://yourvault.ai
For advisory work and marketing inquiries:
Bob Pulver: https://linkedin.com/in/bobpulver
Elevate Your AIQ: https://elevateyouraiq.com
Powered by the WRKdefined Podcast Network.
[00:00:00] Welcome to Elevate Your AIQ, the podcast focused on the AI-powered yet human-centric future of work.
[00:00:05] Are you and your organization prepared? If not, let's get there together. The show is open to
[00:00:09] sponsorships from forward-thinking brands who are fellow advocates for responsible AI literacy
[00:00:13] and AI skills development to help ensure no individuals or organizations are left behind.
[00:00:18] I also facilitate expert panels, interviews, and offer advisory services to help shape your
[00:00:23] responsible AI journey. Go to ElevateYourAIQ.com to find out more.
[00:00:28] Hey everyone, it's Bob Pulver. Welcome to another episode of Elevate Your AIQ.
[00:00:43] In this episode, I sit down with Giancarlo Erra, a creative tech entrepreneur with a wealth of
[00:00:48] experience in AI. Giancarlo is constantly building novel AI solutions or teaching others how to do so.
[00:00:54] You can check out his website and some of his AI solutions using the links in the show notes for
[00:00:58] this episode. Giancarlo and I delve into the challenges faced by small businesses in adopting AI,
[00:01:04] explore misconceptions about the technology, and emphasize the importance of education and
[00:01:09] critical thinking in navigating the AI landscape. Giancarlo also stresses the need for society to take
[00:01:14] responsibility for AI adoption more broadly and for the workforce to upskill in order to adapt to
[00:01:20] the evolving job market. Let's dive into this comprehensive and informative conversation.
[00:01:24] Hello everyone, welcome to another episode of Elevate Your AIQ. I'm your host, Bob Pulver. With me today,
[00:01:30] I have the pleasure of speaking with Giancarlo Erra. How are you this morning, Giancarlo?
[00:01:36] Everything is good. Thank you very much. Thank you for having me.
[00:01:38] I guess it's afternoon for you, I suppose, being in the UK. But thank you so much for joining me. I'm
[00:01:45] excited to get into the weeds with you because you're like an AI renaissance man, I would say. So by way of
[00:01:53] introduction, why don't you tell us a little bit about your background and some of the things you're
[00:01:59] working on? Yeah, I'm a creative tech entrepreneur. I've been doing that for now almost 25 years.
[00:02:07] And I'm a professional touring musician. So you could just split my life between
[00:02:13] those two things. Staying on the tech side of things, yeah, I started as a developer and then,
[00:02:20] you know, business tech consultant, technical director, and then obviously entrepreneur,
[00:02:24] started my own startups. And I've always been involved with mentoring and tutoring. And now I do
[00:02:31] a lot of that around obviously AI, because it is what every business wants to know about. And
[00:02:38] yeah, you know, I run the AI summits in the UK. We've done, you know, Liverpool,
[00:02:44] Cambridge, we're going to be in London, we've done Norwich, Peterborough, and basically going
[00:02:49] all around the country. And I do some AI workshops for businesses. And I do some mentoring and tutoring
[00:02:59] as well. And I'm an advisor as well about all of that. And my obsession, and the reason, obviously,
[00:03:06] why I run the AI summits, is simplification. So my obsession is getting all the
[00:03:10] people that are not tech, and possibly they are business owners, and helping them and explaining
[00:03:17] things to them. In particular with AI, though I've always done it with everything, trying to find the simplest and most
[00:03:23] efficient solution to their problem. In particular, when we're talking about businesses not having,
[00:03:30] you know, engineers or big resources to obviously embrace technologies.
[00:03:36] Yeah, that's amazing. I'm not sure how you juggle all of that. That's, that's amazing. And when you
[00:03:43] said, you know, business owners, so we're talking about small businesses, maybe medium sized business,
[00:03:48] what's your sort of average?
[00:03:49] Basically, I'm very much, you know, and if there are startups even better, you know, as a founder
[00:03:54] myself of quite a few companies, I get very excited about the early stages of creativity and madness.
[00:04:03] I've never been a big fan of the enterprise way, you know, the bigger the companies,
[00:04:09] the less interesting it becomes for me. So yeah, usually the small, the small to medium businesses. Yeah.
[00:04:16] No, I can totally appreciate that having spent quite a bit of time at large enterprises, but the last
[00:04:22] few years, you know, working with startups and, you know, talking regularly with, you know,
[00:04:27] entrepreneurs like yourself, as you help these businesses, I mean, what are some of the patterns that
[00:04:33] you see? I mean, are you seeing like specific things that are consistent or, or maybe differences across
[00:04:39] like industries?
[00:04:41] Understandably enough, the smaller the business, obviously the less resources
[00:04:45] you have, and probably not many employees. So everything is a bit all over the place in a
[00:04:51] way, and you're always struggling for time and efficiency. They are faster than the enterprise
[00:04:57] and corporate, but very often they are also struggling to understand how they can implement
[00:05:05] these things without the big investments. Because in particular with AI, basically everyone can use
[00:05:13] ChatGPT or whatever tools are available. The second level from it, it is all the big galaxy of apps built on
[00:05:22] LLMs, be it ChatGPT or whatever it is. And then there is the third element that is, you know, your own custom,
[00:05:31] you know, I don't know, developing your own custom model, or even using an open custom model and then, you know,
[00:05:36] make it your own. That is what most businesses think they want. And probably I would say 50% of the time they're right.
[00:05:47] But that is pretty much out of their reach, because first of all, they have no idea how to do it. And once you go
[00:05:54] and you see consultants and businesses and stuff, the pricing and the complication that they have out of it, it is
[00:06:01] unbelievable. And as always happens, you know, it happened with the blockchain, Web3 is happening now. And lots of people,
[00:06:10] they're obviously riding this wave, and they're charging incredibly stupid amounts of money, just playing on the fact that no one
[00:06:18] knows, and everyone is in a hurry to just adopt AI. And so, as I said, the pattern I see most of the time is they don't
[00:06:24] know what to do. And they are in a hurry to just do something, whatever it is. And very often, it means that they just put the
[00:06:33] technology in front of the problem. That is the main, main thing I do every single time, you know, they say, Oh, we want to use AI for this,
[00:06:39] this, this and that. And I'm always telling them, let's, you know, let's swap the table, let's turn this table, because
[00:06:46] you're putting it in the wrong order. What are these processes that you want? Let's talk about these internal
[00:06:51] processes, let's see what you want to do. And then let's talk about if AI is a viable solution, maybe it's not, you know,
[00:07:00] it's happened to me, probably the same with you, lots of times, that people think that AI is there, and they can skip on, I don't know,
[00:07:07] developing, they can skip on coding stuff, they can skip on paying their developers, they can skip on paying
[00:07:12] their marketing. That is always happening, that happens all the time. So that is the main thing. And
[00:07:18] the other one, as I said before, it is helping them understanding what they need, because it is true
[00:07:24] that ChatGPT, it is something everyone can use. But in my personal opinion, in my personal experience,
[00:07:31] no one is really using it as it could be used. You know, the custom GPTs, for example, people are always
[00:07:36] surprised when I tell them, actually, if you spend some time, I can make for you a custom GPT that is built with
[00:07:43] your stuff. And actually, it might solve probably a good 50% of the things that you want to do. And it's fine being in
[00:07:50] ChatGPT or whatever it is, if you don't need that extra layer of security. Some other times they need the personal
[00:07:56] custom solutions. So that is the main pattern I see. Putting the technology in front of the problem, and then, you
[00:08:05] know, not really understanding what they need. And yeah, that is obviously, part of the problem is the media narrative. You
[00:08:14] know, we can all see that we have the tech media, if you want, for the tech people, who are all into the
[00:08:23] details, the tech stuff that, obviously, normal people don't understand. And then you have the general media, the UK here, but I see the
[00:08:30] same in the US, that, you know, is basically either fear-mongering or trying to explain something and
[00:08:37] not being either accurate or simple enough for people to get it. So basically, there's this huge gap in the
[00:08:44] middle of people scrambling around, knowing by feeling that they need to somehow understand what this is, and not knowing
[00:08:52] what to do. That is the whole reason that I've done the AI summits. And yeah, and this is the whole reason why I
[00:08:59] usually do advise people and companies with AI, just because of all those reasons.
[00:09:04] Folks like you and I spend so much time, you know, trying to keep up with all of these advancements, and then
[00:09:11] it's going out and interacting with the people who have other things to do, right, who are trying to run their
[00:09:18] business, who are trying to, you know, meet, you know, deadlines and work on projects and things like
[00:09:23] that. They don't have time for all of this. And sometimes we, we have to maintain that balance so
[00:09:28] that we understand how to reframe, to your point, what this all means, because a lot of it is sort of
[00:09:35] noise and a lot of it is way too much in the technical, you know, weeds about, you know, sort of point
[00:09:42] release and every advancement in these, in these models and some of the technical details. It's,
[00:09:47] it's ridiculous. I mean, I'm not as technical as you are, but I mean, just seeing some of this,
[00:09:54] I mean, I think if we were to fast forward to, you know, a year or two from now, we'd look back and say,
[00:09:59] why was that even newsworthy? But I love your perspective, and those observations make
[00:10:06] complete sense, that everyone is, as we would say in the US, putting the cart before the horse.
[00:10:11] They see AI as this broad, you know, maybe set of technologies and these, this sort of silver
[00:10:17] bullet, however you want to think about it. And they're not focusing on where they need help
[00:10:23] with a particular business, you know, challenge. Right. And I'm wondering if you think some of this
[00:10:30] is because people didn't adopt the last sort of wave of technological, you know, advancement.
[00:10:38] Yeah, no, no, it is true. It is true. I think that obviously the fact that now it is so easily
[00:10:43] available, because before there was the, you know, the easy chat interface of ChatGPT, all of that was
[00:10:49] quite inaccessible. As usual, you know, that's what made it very accessible. And this is
[00:10:55] also a very good point of what you said about the perception of people, because I mean, in a way,
[00:11:00] it's not a new tech, I mean, in a way, it is not a new technology. You know, we're just
[00:11:06] throwing power and money at it, you know, to be more powerful processing, obviously optimized algorithms
[00:11:14] and, you know, more data, but we are, you know, forcing it through it. So basically, you know, very often,
[00:11:22] it's very interesting because this question always gets asked, they always tell me, oh,
[00:11:26] you know, it's going so fast. What's going to happen in, I don't know, just in a few months or
[00:11:32] next year or whatever. And I always tell everyone, I said, actually, yeah, there was an advancement, but
[00:11:38] we're still talking about the same technology. And I always tell them until someone comes up with
[00:11:45] either the hardware or the software next idea for generative AI, we're not going to see that,
[00:11:54] you know, you are going to see diminishing returns. And, you know, this is what's happening even now,
[00:11:58] you know, between GPT-4, GPT-4o, and then there was the o1 preview or
[00:12:05] whatever, that in theory is reasoning, and then in practice, it is not very good anyway,
[00:12:11] you know, basically the perception, oh, now it's reasoning. And obviously, the thing is,
[00:12:16] if you don't know in a very simple way how it works, you know,
[00:12:21] it's very understandable that you think, oh, now it can reason, which obviously is not true.
[00:12:27] You know, it's not, it's not what's happening at all. So yeah, that perception, I think it is going
[00:12:32] fast. It is going fast, that's for sure. But it's not really going to be anything major until we will
[00:12:39] see the next shift in terms of hardware or, or software. So I always tell people, probably,
[00:12:45] you know, the hype phase, very steep. We are now more in a plateau, and then we will see what's
[00:12:51] going to happen, you know. Because we can't keep
[00:12:55] just throwing energy and money at the same thing, you know, there will be GPT-5,
[00:13:00] six, seven, eight, whatever. I'm sure it will be better. But the difference between GPT-3 and
[00:13:07] four, and then the four and five, it will be always less and less. So I always try
[00:13:13] to give people a bit of perspective, a reality check,
[00:13:18] if you want. Because it's everywhere, and it's not always newsworthy. But that confirms the fact
[00:13:25] that even the general media, they don't know what they're talking about. That also doesn't really
[00:13:29] help a good constructive dialogue about it. Because, you know, AI, like the internet,
[00:13:34] like mobile phones, is going to be a disruptive technology. It is. So it's going to be disruptive
[00:13:38] with some jobs. It's going to create many other jobs. But the thing is, the responsibility is on us
[00:13:45] as a society in terms of what we do with it. And until we promote a constructive talk about it,
[00:13:52] we are not going to get out of it. We can't, you know, my main fear is when someone tells me,
[00:13:57] oh, I'm scared of losing my job. I'm scared of fake news, whatever. I'm telling them I'm not scared
[00:14:01] of any of that. I'm scared of us leaving the responsibility of a technology that works fairly,
[00:14:09] leaving this responsibility either to the tech giants or to the governments. And I believe that
[00:14:16] they're not going to do it because, you know, they have, you know, they're not the ones that are going
[00:14:20] to change things. I don't believe in a top-down approach. I think that education and us as a society,
[00:14:27] it is us who are going to drive how this is adopted and how this is going to play out.
[00:14:34] But that means that we need education, we need a lot of staff, you know, we need to solve the
[00:14:38] challenges that we have in front of us. I definitely agree with you. I spent a lot of time in the
[00:14:43] responsible AI space and I use that term just for the audience's sake as the sort of umbrella term over
[00:14:51] governance and ethics and fairness and transparency, explainability. And so there's a lot of facets to
[00:14:59] this. And, you know, in some ways, the small business probably has less risk exposure, I suppose,
[00:15:08] but it doesn't mean that, you know, something that goes awry won't have a sort of outsized, you know,
[00:15:14] impact to the way that they, you know, operate, right? Like if you put everything into like a,
[00:15:20] I don't know, some agentic AI, you know, workflow, and all of a sudden you went from a, you know,
[00:15:25] mom and pop, you know, old school paper, pencil, cash register thing. And all of a sudden now
[00:15:31] you've automated perhaps too much and something goes wrong. And now all your, you know, orders are
[00:15:39] screwed up and all that. Maybe they all shipped to the wrong place or whatever it is. Like, that's a
[00:15:45] huge, you know, problem. But I do agree with you that regulation, you know, relying on the government
[00:15:53] is, while it's important for them to have those conversations and to push forward, they won't be
[00:15:59] able to keep up. And we've certainly in the US here, we've seen that quite clearly that they can't keep
[00:16:05] up or they have a knee jerk reaction to certain things. I mean, even the California legislation
[00:16:11] that was vetoed by Governor Newsom was really putting too much onus on, to your point, the big
[00:16:18] technology vendors, the ones that can afford, you know, the compute costs and the energy costs to build
[00:16:25] these models in the first place. You've got to really look at the use case. I mean, I do like
[00:16:30] what the EU did in terms of looking at sort of risk pyramid and you've really got to look at the use
[00:16:36] case and how a technology is used, right? I mean, a car is, you know, a utility. It's, you know,
[00:16:43] a lot of things, but it could also be used as a weapon, right? It's also a dangerous thing. So you've
[00:16:48] got to put regulations around, you know, how a particular technology is used.
[00:16:55] It's very interesting to talk about the car because during my round tables or public speaking,
[00:17:01] I use the car as an example. When I see that everyone is very concerned about safety and stuff,
[00:17:06] the car is actually the example I always give them. I always tell everyone, I said, look,
[00:17:10] when you go and buy a car, the retailer, it's not going to spend
[00:17:16] like five hours with you telling you, oh, be careful with the car. You can kill someone.
[00:17:23] Or even more, the car manufacturer. We don't have cars that can go only five miles per
[00:17:29] hour and are surrounded by airbags everywhere so that we are sure nothing goes
[00:17:34] wrong. It is us as a society who are educated to it, and we take the responsibility of using a
[00:17:41] tool properly. You know, that is the main thing. Obviously cars have all the safety regulations,
[00:17:46] and all the things that are needed to enhance that. But I was thinking, why with AI, and it's not
[00:17:54] only AI in this case, obviously, why with AI is it all about the technology and stuff? We are not actually
[00:18:00] taking any responsibility as a society. Or, you know, companies as companies, or, you know, governments
[00:18:07] as the ones who pay for education. Because as you were saying before, one of the main problems is the
[00:18:14] lack of education. It is. I mean, schools, they have absolutely no idea. They are still,
[00:18:21] you know, they're still considering AI as cheating instead of teaching the students how to use it.
[00:18:26] Jobs, same thing. So, I mean, we're seeing it everywhere. I think, you know, that is the main
[00:18:29] problem. And I think that as long as we think that this can be solved by someone else and not us,
[00:18:35] then I think it's not going to change.
[00:18:37] This is why I've been pushing on organizations and individuals to really understand, just get,
[00:18:42] start to use it, start to see what it can do, test its limitations, see where it's hallucinating,
[00:18:51] as they say. Ask it a question that you already know the answer to and see what it says and see
[00:18:56] if you really think it's giving you 100%.
[00:19:00] You know what you should know? You should know the You Should Know podcast. That's what you should know.
[00:19:08] Because then you'd be in the know on all things that are timely and topical. Subscribe to the You
[00:19:13] Should Know podcast. Thanks.
[00:19:16] You know, accurate response. Because it happens to me all the time. I will, just in the course of my work,
[00:19:23] maybe I'm researching a particular company or technology. I mean, I've looked up companies that
[00:19:29] I know exactly what they do. And then I ask it, and then I have to, you know, nudge it a little bit,
[00:19:35] like, no, it's this type of company. And then it'll just completely make up a story
[00:19:40] about what this company does based on my initial and secondary prompts. I'm just trying to give you
[00:19:46] a nudge and give you a little bit of context so that you can go and find the right information.
[00:19:52] But I know that what you just returned to me is wrong. And it'll just leave it out there
[00:19:58] as if it was, you know, fact. And then until I call it out and challenge it, it'll just leave it that
[00:20:06] way. But if I challenge it, it's like, Oh, you know, I shouldn't have said that. Why did you say it?
[00:20:11] You know, this is, it's okay to not fully understand how everything works behind it. The
[00:20:17] point is to apply your own critical thinking. It's not a calculator, right? And so apply your own
[00:20:23] critical thinking and understand how you can use it as an aid to start to get you to the right place.
[00:20:31] But, you know, as the models evolve, I mean, hopefully hallucinations decrease. And if you build
[00:20:36] to your point before about the custom models, and as you sort of train it with your proprietary data,
[00:20:43] or the way that you write, if you're using it as a writing assistant, a writing,
[00:20:47] you know, copilot or something like that, it'll start to get better and better, but you can't
[00:20:53] completely rely on it, which is where I think people get into trouble. Yeah, yeah. I mean,
[00:20:58] you use the word critical thinking. This is always something else I use a lot when I speak to parents,
[00:21:03] or I've done some tutoring for college here. And, you know, they're obviously concerned about,
[00:21:10] you know, what jobs will be available for my kids. And I always tell everyone, I said, look,
[00:21:13] I mean, knowledge, it is something that AI can help with. But that means that we humans are left with
[00:21:19] what we are best at and what we should be best at. That is not remembering stuff. What you're best at
[00:21:25] is critical thinking. I always say critical thinking. I mean, we raised the generation,
[00:21:29] including mine, in the last 40 years, probably more, to think that everyone needs to be an engineer or a
[00:21:35] doctor or a lawyer or whatever. So we don't need any more thinkers. We don't need any more creatives.
[00:21:42] We don't need any more, I don't know, philosophers, whatever. And I'm always telling people, actually,
[00:21:47] what AI will likely do, it is that all this, I think we call it the humanistic thing,
[00:21:52] all these humanistic sides of you, of us, they are the things that we need. We need more people
[00:21:57] able to think critically. We need more creatives. We will need a lot of the soft skills, communication
[00:22:02] between humans, empathy, you know, all these skills, the ones that before they were considered
[00:22:09] secondary, because, you know, if you're an engineer, you're better. Actually,
[00:22:13] you know, this is finally being reversed. And, you know, we're going back to using
[00:22:19] our brains. You know, for me, it's the same when I see someone spending a whole day
[00:22:24] scanning items as a cashier in the supermarket, or I see someone spending their whole day on an Excel
[00:22:31] file doing this stuff. I'm really feeling pain because I'm really thinking you as a human being,
[00:22:38] I wouldn't say you're not being used, but I think you are honestly wasting your time on something
[00:22:44] that you don't like. And probably you have so much better that you can give. I don't know,
[00:22:50] the cashier in the supermarket, they are excellent people-people. They are lovely, you know, they talk
[00:22:54] to people. Maybe we can use them in something else. Maybe they could, instead of scanning items
[00:23:00] with barcodes, just do something like customer support, or maybe not even dealing
[00:23:05] with customers. Maybe they can be therapists, whatever. But you know what I mean? I think that all
[00:23:10] of us, we have such a big potential once we start exploring it and once we have the time for it.
[00:23:15] And that is the opportunity, I think that there is with AI. The opportunity is that it might free us
[00:23:21] from all these repetitive and boring and brain-killing jobs, and actually give everyone
[00:23:31] the time for the rest. But there is a big, big problem in there, that it will work only if we
[00:23:37] embrace it as a society so we protect the weaker people. We make sure that people that are in the
[00:23:41] jobs that will be replaced are upskilled, and that is a responsibility of the businesses.
[00:23:47] And then we need to be sure that governments jump on with education. That is the key factor.
[00:23:54] And they are very different, very difficult challenges. You know, asking governments to spend
[00:23:58] more on education instead of cutting, and asking businesses to spend more on upskilling instead of
[00:24:03] firing, they're very difficult things to do. But it's the elephant in the room,
[00:24:08] and we need to face it. If we do that, then AI might literally allow most of us to do what we like
[00:24:14] in terms of jobs, and to elevate all of us. I really believe in that.
[00:24:18] Yeah, I do too. There's so much to unpack there. I guess the first part I'll unpack is around education.
[00:24:26] I absolutely agree. But I don't understand why people don't see that starting to learn how to
[00:24:34] use AI and use AI properly is similar to, you know, having your first computer or, you know, all these
[00:24:41] Gen Z, to some degree millennials, and now Gen Alpha, I think they're calling it, but they were given
[00:24:50] digital devices before they could talk, right? So, I mean, they can handle this easily. They're already
[00:24:58] using it in various forms with the apps that they use on their smartphone, which is basically a
[00:25:05] computer. And so there's AI, there's all kinds of AI on those devices already. So why would you not
[00:25:13] teach them how to use it just like they might take computer science in high school now? But I mean,
[00:25:20] you've got to understand where things are going and how you're positioning this future workforce
[00:25:25] to be successful for the jobs that may not exist right now. So the other thing I wanted to just hit
[00:25:32] on was the skills piece, which to your point, it's those human, what we used to call soft skills.
[00:25:41] Yeah, I agree with you. It sort of switched. They're not soft at all. Those are the powerful
[00:25:45] skills. Those are the real human skills that are the hardest for AI to take on, including the reasoning
[00:25:53] that you mentioned before. But those are the durable skills. Those are the ones that are going to last,
[00:26:00] and those are the ones that are important to complement whatever AI can do today and whatever
[00:26:06] AI can do tomorrow. We will always need the critical thinking, the empathy, the creativity,
[00:26:11] and the ability to take in all this context and all this other information to make decisions.
[00:26:21] That's why, especially in the talent space, I get concerned when people are overly reliant on the output of any AI,
[00:26:30] whether that's in text form or it's a recommendation or it's an actual score or stack rank of a person.
[00:26:38] Behind that application there's a person, a human being, and we need to treat that with the right
[00:26:45] sensitivity in terms of fairness and potential adverse impact and things like that.
[00:26:52] And so there's just so much that we can do to position ourselves properly for this. And if we don't
[00:26:58] get ahead of it and push legislators to do the right things in the interest of us as human beings,
[00:27:07] this doesn't evolve in a positive direction.
[00:27:11] No, no, I totally agree. As I said, it's a tricky one. It's a difficult one. But what I see is that
[00:27:18] every single time that I'm asked to speak at an event, it is usually something
[00:27:24] like explaining to people how AI works in very simple terms, and then explaining to them
[00:27:29] a bit of the tools and some tips and tricks on how to use the tools that are available there.
[00:27:34] The thing is that once the floor opens to questions at the end, you can be sure that you always end up
[00:27:41] talking about these topics. So that is always something that actually comforts me, I think,
[00:27:47] because I can see that people, even before understanding, all right, okay, how can I use
[00:27:52] ChatGPT, whatever it is, for, I don't know, automating my emails, whatever it is, the technical detail,
[00:27:57] they all want to talk first about what we're talking about now. And that gives me hope that
[00:28:04] actually, society knows, we as a society, we know that we need to go deep into that as much as we like
[00:28:14] to go deep into the technology side of things. So I'm always hoping that as more people like me,
[00:28:19] like yourself, get out there and try to help people with education, basically trying to have people
[00:28:26] thinking about things. You know, all these summits that I do, people always tell me, oh, I arrived here
[00:28:31] with lots of questions. And I got all my questions answered, but actually now I have more questions,
[00:28:36] even more, and they are deeper. And most of the times they are about all these topics we're talking
[00:28:41] about, education, how it affects us. And I think that, you know, it's a fertile ground at the moment
[00:28:50] with society. And I think that it is the responsibility of, as I said, people like yourself, like me,
[00:28:56] like many others trying to do this, to basically try and help everyone start these sorts of
[00:29:02] constructive discussions. Because I think this is what will allow us to understand, you know,
[00:29:08] what is a hallucination and what is not. You know, how AI can actually enhance our
[00:29:13] productivity and make our life easier. And that doesn't mean that it's solving all the problems,
[00:29:19] and doesn't mean that, you know, it is stealing jobs or, you know, it is going to disrupt, as I said
[00:29:24] before, yes. But once again, you know, we have cars now. It's not that when the first cars
[00:29:29] arrived, all the people that were, I don't know, managing horses just stopped everything.
[00:29:35] It's normal; it has always happened.
[00:29:37] I do think, you know, whether it's, you know, the actual education system, or it's,
[00:29:41] you know, upskilling that you were mentioning before, I do think it's important for people to,
[00:29:46] you know, be given examples that can really hit home. And maybe even just some basic analogies,
[00:29:52] like we were talking about the car, right? So here in New York, at 16, you get your driver's permit,
[00:29:58] which you have to take a test for. And then, at 16 and a half, you get your actual license.
[00:30:04] That means you don't have to have another person in the car. But in that six months, you've got to
[00:30:09] practice, you've got to take driver's education, you've got to take an actual, you know, literal
[00:30:16] exam, where someone from the Department of Motor Vehicles takes you out and
[00:30:19] makes sure you actually know what you're doing or whatever. So the point is, with AI, we haven't done
[00:30:25] that, right? Like, OpenAI basically just announced ChatGPT and said, here you go, have fun. And it's just
[00:30:30] like you just handed the keys to essentially a 14- or 15-year-old that maybe had no business
[00:30:38] getting behind the wheel. And you just said, you know, go nuts, right? Enjoy, do whatever you want.
[00:30:44] Wow, right? Like, you know, from a technology standpoint, I was just thinking back to my time
[00:30:50] at NBCUniversal, when we were moving to cloud off of, you know, data centers and legacy infrastructure,
[00:30:57] you didn't just give all of those database administrators and all of those owners of servers
[00:31:03] or whatever, you didn't just say, Oh, forget that, you know, just go to AWS, go to Azure,
[00:31:08] and just log in and, you know, have fun, right? You had to go through a certain set of steps to
[00:31:15] understand what it means. How do I, you know, spin up a new, you know, instance? How do I tap into,
[00:31:21] you know, S3 and get more storage? You had to go through these steps, not to mention
[00:31:26] all the steps around, you know, privacy and cybersecurity and things like that, right?
[00:31:32] You don't want to just basically inadvertently, you know, leave the back door open,
[00:31:36] right? And cause all kinds of security incidents and, you know, maybe legal risk as well. So I just
[00:31:42] feel like, you know, AI, now that it's in everyone's hands, right? I think the big difference now with
[00:31:47] generative AI is that everyone has it: the AI is the UI, right? The user interface,
[00:31:53] you're interacting directly with it. Whereas, you know, predictive AI in the past and analytics
[00:31:58] or whatever, like not everybody had necessarily direct access to some of those technologies. So,
[00:32:04] so we've got to do some, some of those practical steps, but I wonder in some of the training that
[00:32:11] you've done in the workshops that, that you do, are there particular sort of exercises or,
[00:32:17] you know, scenarios that you describe that resonate, you know, better that help people say,
[00:32:22] oh, now I start to see why I can't just rely on it like a calculator.
[00:32:27] So I think that when it comes to tools, it can be, A, it can be very, very, very overwhelming.
[00:32:34] And B, if someone needs to understand the basics, I wouldn't really go into, oh, you know,
[00:32:42] you can use this tool for this, that tool for that, because I think that is too much.
[00:32:47] And these tools, you know, most of them are new, and some of them will close. So I'm always trying
[00:32:52] to go back to basics, basically. For me, it's an analogy:
[00:32:57] back in the day, when I was 14, 15, when I was learning how to
[00:33:01] program, I didn't start learning Visual Basic, I was learning something
[00:33:08] that wasn't visual, I was learning the basics, the real basics, to understand
[00:33:13] what happens. So what I do first of all, in the room, is I ask everyone to think about,
[00:33:19] forget about what you know about AI, I always tell them, what is one internal process?
[00:33:24] Very important, because they all want their support chatbots and stuff like that. And
[00:33:29] I just tell all of them, just forget about it, because you have absolutely no idea how complicated
[00:33:34] and risky that thing is. So just put it aside. They told you, you can have your chatbot and,
[00:33:39] you know, fire all your support people. And that is not true. So forget that.
[00:33:43] Let's go to internal processes, because they're easier and are less risky. So I always tell them,
[00:33:48] what is, in your group, in your team, in your company, depending on how big,
[00:33:52] the process, possibly repetitive, most likely it is going to be repetitive,
[00:33:58] that you don't know how, but you have a hunch that there must be a way to automate? There must be a
[00:34:05] way because it's so simple. And you can be sure that usually a good 80-90% of the times,
[00:34:10] they come up with something that you can immediately think, ah, yeah, obviously,
[00:34:14] ChatGPT can help with that. And I'm going back to ChatGPT because, I mean, I've used a lot of them,
[00:34:19] you know, OpenAI, Claude, Gemini, Copilot. I mean, they do the same thing. But anyway,
[00:34:28] in my experience, I mean, OpenAI are quite far ahead. You know, Claude is sometimes better at some
[00:34:34] things. But in my experience, OpenAI are much, much further ahead of everyone else in terms of,
[00:34:40] you know, text generation. So I always encourage everyone to literally stick with one
[00:34:47] tool. And I always tell them, if I were a business, I don't have time to try all the tools, so I would tell
[00:34:51] them just stick with ChatGPT. It's the same as the difference between Google and Bing.
[00:34:57] Bing is used, it is good. It's not as good as Google. So I would just say, for now, because we
[00:35:03] don't have time to go into the single nuances of the differences, just ChatGPT. And
[00:35:08] I think, to be honest, more than half of the time, I can have a business leave with a
[00:35:13] working solution that automates some of their internal processes just using custom GPTs.
[00:35:19] I think custom GPTs are the most powerful thing that you can imagine, you know, the thing
[00:35:25] where you can upload documents. So you can do RAG literally without writing a single line of code.
[00:35:31] It is very good at setting up its own prompt, because it will ask you, what is this bot about? It's quite
[00:35:37] good at that. And, you know, you can implement tools, actions, APIs. So, you know, if someone
[00:35:43] needs something quite specific, you can basically tell them, you can use ChatGPT, make it your own,
[00:35:51] put in your own documents, explaining to them, obviously, what it means in terms of privacy,
[00:35:55] disabling the training of the model on your data, but, you know, all the basics. Most of the time,
[00:36:03] I mean, you know, I've done two incredible apps just literally using custom GPTs. And people are
[00:36:10] always saying, wow. They just don't understand; they didn't know that. They think that they need to hire
[00:36:15] a developer, and I always tell them, no, this is why it is good. This is why it is interesting. You need your
[00:36:22] left side of the brain. You need your language to make it work. You don't need code. You need to
[00:36:28] understand how to speak to someone else in a weird way, because, you know, prompt engineering and all
[00:36:32] that sort of stuff. And they're always incredibly surprised. And funnily enough, when they ask me,
[00:36:38] what is, you know, when I tell them that I didn't study, I'm not an engineer, you know,
[00:36:42] I studied philosophy, you know, I come from, you know, I studied Latin, Greek history and philosophy.
[00:36:47] And they always ask me, how did you end up doing this? I always tell them
[00:36:49] how I ended up doing this. And the reason I love AI is because it is the ideal bridge between being
[00:36:55] humanistic and being technical. And I always tell them you can do it yourself.
[00:37:00] You just need to iterate. So I personally, I believe that custom GPTs are by far
[00:37:05] the most powerful tool that you can imagine out there. And once you have a developer that maybe
[00:37:10] can build an API for some external tool, I don't know, to have it interacting with your
[00:37:16] database, or you are a store and you want it to interact with your shipping. So whatever it is,
[00:37:21] it is incredibly, incredibly, incredibly powerful. So that is usually, that is the only thing I do.
[00:37:27] And the funny thing is that they all can do it for $20 a month. I tell them, you can have as many
[00:37:33] as you want. I think I have probably 50 or 60 custom GPTs just for me. It's incredible what they can do.
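[Editor's note: to make the RAG idea Giancarlo mentions concrete, here is a minimal sketch of what retrieval-augmented generation does under the hood: find the document chunk most relevant to a question and hand it to the model as context. A custom GPT does this for you when you upload files; the bag-of-words cosine similarity below is a deliberately simplified stand-in for the embedding search a real system would use, and the example "documents" are invented.]

```python
import math
from collections import Counter

def tokenize(text):
    # Lowercase and strip basic punctuation so "take?" matches "take"
    return [w.strip(".,!?").lower() for w in text.split()]

def cosine(a, b):
    # Cosine similarity between two word-count vectors
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def retrieve(question, chunks, k=1):
    # Return the k chunks most similar to the question
    q = Counter(tokenize(question))
    return sorted(chunks, key=lambda c: cosine(q, Counter(tokenize(c))), reverse=True)[:k]

# Invented "uploaded documents" for illustration
chunks = [
    "Refund requests must be filed within 30 days of purchase.",
    "Our office is closed on public holidays.",
    "Shipping to the EU takes five to seven business days.",
]
question = "How long does shipping to the EU take?"
context = retrieve(question, chunks)[0]
# The retrieved chunk is prepended to the prompt sent to the model
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```

That prepended context is the whole trick: the model answers from your documents rather than from memory, and the custom GPT interface hides all of it behind a file upload.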
[00:37:40] Well, we'll have to, we'll have to have a separate conversation where I could take a look at some of
[00:37:45] what you've built. I've built a few myself, but not for myself. I built them when I was just playing
[00:37:52] around when that ability, you know, first came out and I created one around responsible AI. I created one
[00:37:59] to recommend whiskey, but I haven't done, as you described, some of the custom ones that I know
[00:38:08] can probably help me. And I think you made a point in there.
[00:38:37] This is where I got concerned myself, which was, can this actually go in and out of, you know, my
[00:38:44] CRM or my Google Docs or my whatever. And so, to your point, just start asking it what it can do.
[00:38:52] It's not just the AI to help you, you know, through all these different tasks and to help you make
[00:38:58] decisions and research and things like that. It's also its own, you know, help section and user guide.
[00:39:05] And all of that, all at once. And so you've got to just, you know, spend the time to your point,
[00:39:13] iterating and pushing its limits and understanding what it can do.
[00:39:17] And it's available to everyone. That is the thing I tell everyone. It is literally out there. You don't
[00:39:21] need to hire anyone. I mean, you can start with it. Just do it. This is what I always tell people. Just
[00:39:28] start experimenting with it. I mean, there are people out there that literally charge thousands to
[00:39:33] deploy, to develop a custom GPT, and they tell people, oh, you know, this is your model and stuff.
[00:39:39] And people are always impressed. And this is why I'm always very disruptive as an entrepreneur. So I
[00:39:45] literally go to businesses and say, just please ignore all these people, just ignore all of them
[00:39:50] and just do it yourself. Because half of the time you don't need anyone else. Then once you start
[00:39:57] needing more, for example, oh, I don't know, I am a legal firm or I am an accountancy firm.
[00:40:03] So you have compliance, you have internal documents, you can't upload them on OpenAI. And then this is
[00:40:09] when I just come in with, you know, one of my projects, called YourVault,
[00:40:14] which is literally like a private AI for businesses. And then I start telling them,
[00:40:19] you can do that. You can have the same power as ChatGPT with your own model that runs on your
[00:40:26] own instance, only for you. And that data is not going out anywhere. So, you know,
[00:40:31] it's completely a hundred percent private. And the problem is there are not many solutions that
[00:40:36] do that for businesses, for small businesses in particular, in simple terms. And, you know,
[00:40:41] this is why my MVP for YourVault AI has literally already got two people that wanted to try it,
[00:40:47] because the market is literally asking for such simple solutions. But as I always say,
[00:40:53] you know, I wouldn't tell anyone to go on my service, YourVault, until they have spent some
[00:40:56] time on ChatGPT and understood how it works. It doesn't make any sense. It's like
[00:41:01] the 16-year-old you told me about: it's like giving them a Ferrari.
[00:41:05] You wouldn't do that. You first need to learn on your Ford Fiesta or whatever
[00:41:09] it is, the scrap car that goes slow, that you can destroy and make lots of errors with. Once you learn,
[00:41:16] then first of all, you will know if you really need a Ferrari, and then you will have a better
[00:41:20] ability to understand how to use it and what you need from it.
[00:41:25] I'm a big Ferrari fan. It's hard to come up with a scenario where someone needs
[00:41:29] a Ferrari, but I understand your point. One of the observations that I've noticed,
[00:41:35] I have spoken to a few folks like you who are doing, you know, training sessions. And one of the things
[00:41:43] that stood out to me was some of the exercises that, that they go through. Like one person has created
[00:41:48] like a card deck, right? Where people can say, okay, you're going to use this and it's really to help
[00:41:54] people understand the basics of prompting, as well as some of the responsible AI
[00:42:00] principles too, I will say. But, you know, here's your prompt, here's some context or whatever.
[00:42:05] And then you're working in small groups, right? So it's not just this, you know, fun exercise,
[00:42:11] but you're actually getting some different perspectives, right? Some cognitive diversity
[00:42:16] that is applied to some of these scenarios. And the response, the feedback, that I saw was that
[00:42:23] people really were much more comfortable. It reduced some of the fear that you were talking about
[00:42:28] earlier and drove some of that constructive dialogue that you were mentioning earlier, dialogue that
[00:42:33] can help people get over some of that fear. The other exercise was actually just yesterday:
[00:42:39] there's an AI ethicist that I follow and she created this game. It's called the AI ethics game.
[00:42:45] And so I participated in her pilot group just yesterday and it was really interesting. She had
[00:42:51] created a custom GPT to basically play out this scenario where she basically said, I've got this fictitious
[00:42:58] startup, this is what we're building. I've got four personas, we're going to ask these four people
[00:43:05] questions about how they've designed their solution. So the scenario was, this is a startup that built an AI
[00:43:10] powered learning tool for developing markets. So people that haven't necessarily been exposed to all of
[00:43:17] this technology, and won't necessarily apply, you know, critical thinking by default, because they'll just be so
[00:43:24] enamored with this fascinating new technology. But you got to ask questions of the CEO, the CTO,
[00:43:32] a junior data scientist, and the product manager. And everyone from all of these different, you know,
[00:43:38] the whole audience was basically coming from these different perspectives. There were at least 10
[00:43:42] different sort of roles that people were coming from across industries as well. So again, you had this
[00:43:48] cognitive diversity and people were basically asking probing questions. Well, how did you, where'd you get your
[00:43:53] data from? How are you accounting for, you know, personalization when, you know, you're supposed to
[00:43:59] be, you know, these are students, right? Like, so you need to keep that data private. And, you know,
[00:44:03] it was just a really interesting and engaging way to try to get people to think more deeply about not just
[00:44:09] the solutions that might be out there now, but how startups in this space who are building these solutions
[00:44:17] are thinking about it. And I think it goes back to one of the things we were talking about at the
[00:44:21] beginning of this conversation, which is, you know, this, these shiny objects and everyone just kind
[00:44:26] of latching on to this technology and then going to try to find a problem to solve it with. Like,
[00:44:33] this is my new shiny hammer, and let's go find some nails. I feel like it was an interesting
[00:44:40] example of where, again, maybe this is because I'm close to this and I talked to these startups and I
[00:44:46] see these startups in the space building these solutions and building more of these sort of shiny
[00:44:51] objects, right? They're building solutions. They're building those hammers in the hopes that
[00:44:57] people find nails, to use the hammer and buy the hammer. So it's interesting to see from the other
[00:45:03] side, like some of these startups that are building things also don't use that mindset that says,
[00:45:09] what problem are we actually solving? What is the pain? Or are we creating, you know, a vitamin? Like,
[00:45:16] what are we doing? And so it was just interesting to see all these people who are not
[00:45:21] AI experts by any stretch, actually asking intelligent probing questions. Like, did you really put ethics
[00:45:29] first? Are you really being human-centric, or is all you're worried about, you know, that you got VC money and now
[00:45:36] they want to see you moving fast and breaking things? But that's not necessarily
[00:45:43] appropriate, right? You need to be thinking about the human factor. You need to think about the ethics
[00:45:48] involved. You need to be thinking about, you know, the, the fairness and that these are people who are
[00:45:54] not technologists that you're targeting, and you need to take a step back, because there are
[00:46:00] red flags all over the place. Yeah. I totally agree. I totally agree. As I said, you know,
[00:46:05] we go back to what you were saying in terms of the responsibility being on us, and education is going
[00:46:11] to be a key element in all of that, you know, to just tackle exactly to do what you're saying.
[00:46:16] Yeah. So I know you sort of advise people not to, you know, go out on Product Hunt and the bunch
[00:46:22] of catalogs of all these AI solutions cropping up, and, you know, a lot of them are half-baked or maybe
[00:46:29] quarter-baked at best. And, you know, it can be both dangerous and time-consuming to try to go and
[00:46:37] play with some of those. You don't know what they're doing with your data, outside of some of the main,
[00:46:42] you know, LLM vendors. Is there anything that you've seen, aside from the things that you've built,
[00:46:49] where you think there's promise, like there's something that could be extremely useful? Or, on the
[00:46:56] other side, is there anything that you see that you think, this can't do anything good, this is
[00:47:00] dangerous. I don't know. I mean, to be honest, I tried many tools and in the end, I always go back
[00:47:06] to using and integrating the main players myself, but probably this is also because, obviously, as a tech
[00:47:13] person, you know, if I don't see something that I like, I build it. To be honest, I mean, apart from
[00:47:19] using tools like, I don't know, Grammarly, which is very useful. I mean,
[00:47:26] OpenAI is integrated everywhere in everything I do. Lots of good AI for coding, you know, MVP
[00:47:33] development, it is incredibly useful. And, you know, there are things like Cursor, which
[00:47:39] is like VS Code integrated with AI, and it can be incredibly powerful once you configure it.
[00:47:45] I imagine you've played around with some of the ones generating, like, multimedia content or,
[00:47:51] you know, music and stuff like that. I've used, obviously, you know, Midjourney for image generation,
[00:47:56] but, you know, it's always the same advice: most of the images it generates
[00:48:01] have problems, but it can be incredibly useful. So, you know, I definitely use Midjourney.
[00:48:05] In terms of videos, I mean, I didn't try Sora. I've seen some of the other video
[00:48:11] generation tools at the moment, but they still look very weird. I mean, they still look like a
[00:48:18] series of images stitched together. So we're still not there, but Sora is very promising,
[00:48:23] but obviously we have seen only the demos and I want to see, I want to be able to put my hands on it.
[00:48:28] So in terms of generation, possibly not. I mean, there are many tools that generate music.
[00:48:34] They produce elevator music at best. So I'm not really bothered by it because,
[00:48:41] I mean, there are reasons behind it, obviously. You know, to make music, you can't
[00:48:45] expect an AI to be trained on mixed music, as in a stereo file that contains all the
[00:48:52] instruments of a song, because there is no way for it to understand the single parts, apart from
[00:48:58] very simple cases. So you usually just get, even more obviously than with text, a mishmash of
[00:49:05] something existing that sounds horrible. And it is a copy of something else.
[00:49:09] I mean, where I see AI used very well is when it is integrated into the production process. So for
[00:49:15] example, I don't know, Adobe Premiere, I do a lot of video editing. They now have Firefly, which is
[00:49:21] literally an AI integrated into editing. Same thing with Photoshop, very, very useful, very powerful.
[00:49:27] So you don't necessarily use it to generate stuff, but you use it to edit, to correct things.
[00:49:35] Same thing in music, you know, there are lots of plugins for, you know, in-the-box
[00:49:40] mixing and mastering, and they all use AI. So, I don't know, things like polishing a vocal take,
[00:49:47] because, I don't know, I was recording at home and an ambulance passed by during my best
[00:49:52] take. Before, that was going to be quite tricky. Now there are many plugins that use neural networks,
[00:49:57] and they know how a voice sounds. So if there is a dog barking or an ambulance passing,
[00:50:02] they can literally filter it out intelligently. So that is where I can see the full power of it.
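[Editor's note: the "intelligent filtering" Giancarlo describes is done with learned models in commercial plugins, but the underlying idea can be sketched with a naive spectral gate: estimate a noise floor in the frequency domain and suppress the bins that sit below it. This is a toy illustration, not how those plugins actually work; the threshold factor and the synthetic "vocal" signal are arbitrary choices.]

```python
import numpy as np

def spectral_gate(signal, threshold_factor=4.0):
    """Suppress frequency bins whose magnitude falls below a multiple
    of the median magnitude (a crude estimate of the noise floor)."""
    spectrum = np.fft.rfft(signal)
    mags = np.abs(spectrum)
    spectrum[mags < threshold_factor * np.median(mags)] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

sr = 8000                                      # sample rate in Hz
t = np.arange(sr) / sr                         # one second of audio
rng = np.random.default_rng(0)
voice = np.sin(2 * np.pi * 440 * t)            # stand-in for the vocal take
noisy = voice + 0.2 * rng.standard_normal(sr)  # broadband "ambulance" noise
cleaned = spectral_gate(noisy)

err_before = np.mean((noisy - voice) ** 2)
err_after = np.mean((cleaned - voice) ** 2)    # should shrink after gating
```

A real denoiser replaces the median threshold with a neural network that has learned what a voice looks like spectrally, which is why it can remove a barking dog without touching the vocal.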
[00:50:09] But in terms of multimedia, I don't think I see anything really useful in terms
[00:50:14] of generation yet. That's really interesting. I hadn't thought about that
[00:50:18] angle, but it does tie to some of the other themes that we see, like AI should be augmenting,
[00:50:24] you know, human abilities. And to your point, a better use of AI is to clean up
[00:50:32] the audio and make you, the musician, the singer, or the podcaster, sound as good as possible,
[00:50:43] but not take that job away from that person.
[00:50:47] Yeah.
[00:50:47] Well, I mean, if you think about it, it's the same with text. You know, I always tell people
[00:50:51] never to use an image from ChatGPT, from DALL-E, or from Midjourney as it is without
[00:50:57] retouching it. And even more, I always tell them never to use a text as it is out of ChatGPT or whatever.
[00:51:03] Consider it at best a draft. And most of the time, it's just going to be an
[00:51:10] inspiration for them. You know, that is where AI is useful, as a sounding board, as a tool for
[00:51:17] firing up ideas and then for drafting. That is what it is. So that is where I can see, I see it being
[00:51:22] very useful.
[00:51:23] So any words of advice for people trying to get started who have not attended one of your workshops
[00:51:31] and just know that they need to carve out the time, but haven't yet?
[00:51:35] Yeah. I mean, as I said before, I think the best advice I can give them,
[00:51:40] personally speaking, is just go on ChatGPT, create an account, pay the $20, because it is what
[00:51:45] you need. And then just start experimenting. If you go on Google and you search for OpenAI
[00:51:51] custom GPTs, you will see a million and a half tutorials, and you don't need to be a developer.
[00:51:57] It's not a tech thing. It's just a tool to be used and just go and experiment with that. Because I
[00:52:03] think this is where you would really see the potential and also the pitfalls of AI.
[00:52:10] And I think you will have a much better idea of how it can be used for your business.
[00:52:14] Awesome. All right. Great advice. Giancarlo, thank you so much for joining me. I think
[00:52:20] this was a fascinating conversation. I think listeners have a lot of
[00:52:24] takeaways and a lot of things that they can go and play with themselves. I'm going to
[00:52:29] make sure we include some links to some of your projects and your startups in the show
[00:52:35] notes so people can access those and know how to reach you. So thank you again.
[00:52:40] You know, I'm visible, obviously on LinkedIn and stuff. I'm always happy to have a chat with people.
[00:52:43] Perfect. Perfect. Thank you very much for having me.
[00:52:46] Absolutely. Absolutely. Well, thank you again, Giancarlo. And thank you everyone for listening.
[00:52:51] We'll see you next time.


