Bob Danna, physicist, naval officer, former Senior Managing Director at Deloitte Consulting and Bersin by Deloitte, and author of the memoir "My Curious Life," joins host Bob Pulver for a wide-ranging conversation about a lifetime at the frontier of science and technology. Bob traces his journey from slide rules and nuclear reactors to agentic AI, sharing how he and collaborator Joe DiDonato built "Bot-Bob," a digital twin trained on his memoir, writings, and decades of experience. The conversation explores what digital twins can mean for knowledge workers, legacy building, and collective intelligence, including a live mastermind experiment where multiple digital twins, plus a digital Mark Twain, fielded questions from a live audience. Bob closes with an urgent call to bring more diverse human voices into AI development before the decisions that shape civilization get made without them.

Keywords

Bob Danna, digital twin, agentic AI, Bot-Bob, Joe DiDonato, My Curious Life, knowledge worker, collective intelligence, co-intelligence, legacy, mastermind, ElevenLabs, nuclear warfare, responsible AI, human centricity, future of work, STEM, Deloitte, memoir, Substack

Takeaways

  • Curiosity is the connective tissue of Bob's entire career, from nuclear physics and naval service to Deloitte consulting and digital twins, and he positions it as the essential human quality that AI can amplify but never replicate 

  • A digital twin is far more than a knowledge repository; it encodes values, judgment, and personality, making it a genuine extension of a person's thinking

  • The "mastermind" format, where multiple digital twins deliberate together in real time, opens new possibilities for accessing cognitive diversity without scheduling constraints

  • When AI models are trained by a narrow group (such as military strategists), the outputs reflect that bias, making diverse human representation in AI development a matter of consequence

  • Knowledge workers who collaborate with their own digital twins can operate at dramatically higher capacity and quality, not by being replaced, but by being amplified

  • A responsibly built digital twin can preserve the wisdom, voice, and values of an individual for future generations

Quotes

  • "I'm just a curious guy. No matter what I'm into, I'm always looking at other things." 

  • "When we free up tasks that human beings were doing, I think that is very, very positive. The real question is, what does the human being step up to do that only a human being can do?"

  • "The definition of a knowledge worker is going to change. It's going to be that human being collaborating with the digital twin of that person."

  • "It's very timely right now that we really start to have human conversations before we go down the path too far."

Chapters

00:03 Welcome and introductions

01:21 Bob Danna's fascinating career journey

03:27 Early encounters with AI and neural networks

07:21 What makes us human, the evolution of calculators and computers

10:26 Joe DiDonato, soul-syncing, and the origin of Bot-Bob

14:41 Building Bot-Bob, memoir, voice, and guardrails

17:17 From chatbots to agents to digital twins, a practical framework

25:27 Brainstorming mode and collaborating with your own twin

28:16 Digital twins in consulting and the future of knowledge work

35:20 The mastermind experiment, Bot-Bob, RoboLacey, and digital Mark Twain

41:14 AI, nuclear war scenarios, and the dangers of narrow training data

51:31 The workforce of 2030 and what it means to be a knowledge worker

58:55 Closing thoughts and how to connect with Bob Danna


Bob Danna: https://bobdanna.substack.com/

“My Curious Life”: https://mycuriouslife.net/


For advisory work and marketing inquiries:

Bob Pulver: https://linkedin.com/in/bobpulver

Elevate Your AIQ: https://elevateyouraiq.com

Substack: https://elevateyouraiq.substack.com


Powered by the WRKdefined Podcast Network. 

[00:00:09] Hey, it's Bob. Welcome back to Elevate Your AIQ, your go-to source for insightful conversations, human-centric AI readiness, talent transformation, responsible innovation, and the future of work. Today, I'm thrilled to share a conversation with my friend, Bob Danna, a physicist, naval officer, former senior managing director at Deloitte, and author of the memoir, My Curious Life. Bob brings a rare perspective spanning nearly six decades of technological evolution. He's got the scars and the stories to prove it.

[00:00:37] You'll also quickly understand the book title as Bob is one of the most curious people you will ever meet. We dig into some truly thought-provoking territory, including the creation of digital twins as both a legacy preservation tool and a force multiplier for knowledge workers. We talk about the power of AI mastermind panels where multiple digital twins collaborate in real time, and why the human guardrails we build into these systems may be the most important design decision we make.

[00:01:03] Bob and I always have amazing and insightful conversations, so I was excited to record one of them for your listening pleasure. If you are wondering what the future of knowledge work really looks like when human wisdom and artificial intelligence can merge, this conversation is for you. Thanks so much for listening. I hope you enjoy this one as much as I did. Hey, everyone. Welcome back to another episode of Elevate Your AIQ. I am your host, Bob Pulver, and with me today, I have the pleasure to catch up with my friend, Bob Danna. How are you today, Bob? I'm doing well, Bob. How are you doing?

[00:01:32] I'm doing great. It's good to see you again. Same here. We caught up recently in Vegas, your stomping grounds, and had a really nice chat about some of the things that you've been working on in agentic AI and Digital Twins and some of your masterminds, and you've been keeping busy and staying sharp, I see. Yeah, for an old retired guy. Definitely keeping in the game. So it is nice, too. And it was nice to see you, Bob.

[00:02:02] You know, when you were here in Las Vegas. Yeah, absolutely. So, Bob, you've done so much in your career. I just thought you could give us like sort of a whirlwind tour of your background and, you know, some of your highlights and, you know, certainly the book that you wrote. What I want to hear about, as we'll discuss as we get going. And then, you know, I just want to hear about some of the current projects you're working on that I was alluding to.

[00:02:27] Actually, I turned 75 years old in June. So it's coming up, rushing up three quarters of a century behind me. I'm a STEM guy. So, you know, science, technology, engineering, math, that's been me for my entire life. And so I have a bachelor's and master's in physics, another master's in engineering. I was, in fact, a physicist for a while, a naval officer, professional engineer, and then did quite a bit in terms of business consulting.

[00:02:57] And so ultimately wound up as a senior managing director at Deloitte Consulting. So kind of at that highest level of consulting with the largest corporations in the world, which was pretty cool. But then also over my lifetime, kind of did hard engineering, hard science, actually worked as part of Admiral Rickover's team for naval reactors.

[00:03:18] I taught at the Naval Nuclear Power School. So taught the officers and enlisted personnel who were going to operate nuclear reactors on submarines and aircraft carriers. So it's been a whirlwind life for sure. It's been a great life. I must admit, I've traveled all over the world for business and pleasure. It has been a quite interesting evolution. I started on slide rules.

[00:03:46] I'm sure 90% of your listeners probably don't even know what that is. But in fact, it's a piece of bamboo that you would do some fairly sophisticated mathematical calculations on before there were calculators, before there was anything electronic and when computers were at their infancy.

[00:04:06] And so I've seen the entirety of the technological evolution and revolution that has taken place over the last almost 60 years now. It's amazing. I mean, I've been around a while. I thought I'd seen and done a lot. But you, you know, take the cake, I would say. I've never actually used a slide rule. I've certainly used it in jokes. It was no joke, but I used it. Never, never actually used one myself.

[00:04:34] So Bob, with all the work you were doing, you know, in various, you know, STEM fields, I imagine you had exposure to, you know, some of the earlier, you know, versions of machine learning and AI, just for people to realize that this predates, long predates, you know, generative AI that most of my listeners might be familiar with.

[00:04:59] And so I was just curious about, you know, sort of early, you know, learnings from some of those experiences. And did that ever intersect with some of your work on the consulting side? And as you think about, you know, how knowledge work has changed over the years and how organizations have changed over the years? Oh, yeah, for sure. And as you and I have chatted multiple times, you know, I'm just a curious guy.

[00:05:25] Okay. No matter what I'm into, I'm always looking at other things. So actually, even in the 70s, I became very interested in neural networks, which effectively meant looking at the way the brain operates and then starting to ask the question, you know, can I, in fact, start to model that? And so that was some of the kind of early kind of physics, you know, I started to kind of delve into, you know,

[00:05:52] as once I got my master's in physics, it was in nuclear physics, so it wasn't that. But it was certainly, you know, this was interesting. And then I'll tell you the most interesting thing was I also had gone to multiple world's fairs. And so one of them that I went to actually was in Tsukuba, Japan in 1985. And so, you know, it was a Japanese world's fair. So it had, you know, multiple Japanese companies and then, you know,

[00:06:22] kind of highlighting Japanese science and technology and the like. But I went to the, obviously, the U.S. pavilion there, which was a tiny pavilion. But what were they highlighting in 1985? Artificial intelligence. So it was, and at that point, you know, the concept of artificial intelligence was, in fact, you know, computers. Being able to just have a computer.

[00:06:48] And at that point in 1985, obviously, we had moved into the fact that, you know, desktop personal computers were just starting to come to the forefront. And people were asking, well, what happens when I start to marry the machine with the human being? And that starts to evolve the questions of artificial intelligence, right?

[00:07:15] Because, obviously, now you're adding a machine to human intelligence. What does that do? And so, you know, people are saying, oh, yeah, you know, AI, it's just, it's so new. It's so new. We were thinking about it, you know, effectively 40 years ago.

[00:07:31] And it was touted by the U.S. as part of their, you know, presentation of capabilities in the U.S. pavilion in the Japanese World's Fair in Tsukuba in 1985. I was talking to someone recently, and they, you can tell me if this is correct or not, but they said, you know, the term calculator and the term computer used to actually be like a person. Oh, yeah. Yeah, yeah. Oh, absolutely.

[00:07:59] And, in fact, it was a person. It was groups of people. And I don't know, I can't remember the movie now, but it was the one about kind of space and space exploration and kind of the early 60s. And there was actually a group of African-American women who were the calculators, okay? And so they were, in fact, and I can't remember the name at this point.

[00:08:29] Hidden Figures. There it is, Hidden Figures. Thank you very much. But, you know, it is, in fact, you know, the evolution of the human being. And it's always been about taking things off of our plate that are not necessary for a human being to do, okay? And then be able to do other things that are more appropriate for a human being to take on. We're going through the exact same evolution right now.

[00:08:56] You know, with AI, it's, you know, when we free up tasks that human beings were doing, that I think is very, very positive. Now the real question is, so what does the human being step up to do that only a human being can do? And so that's one question I've been trying to wrestle with over the last couple of years since I'm now retired.

[00:09:20] And then, you know, secondarily, is there an opportunity to actually marry what the human being and machine does? So when it comes to things like agentic AI and creating agents that take over tasks that human beings are doing, you know, do you just turn over the task and then you don't have that job anymore, okay?

[00:09:42] Or do you actually get to a point where you can collaborate with the AI and have a one plus one equals three? Create something between the human and the machine, the human and the AI that neither one of you could have done. So there are going to be, you know, repetitive tasks that are going to be turned over to AI and AI agents.

[00:10:07] But is it going to open up now a whole new realm of possibilities for humans to now go to the next level of thought, of knowledge, of knowledge work? And so that's the real question that I think a number of us are starting to wrestle with. Yeah, absolutely. And so how do we make that calculation that we are collectively, we are still more than the sum of our parts, I suppose, is one way to think about that.

[00:10:36] Yeah, yeah. And I think it kind of goes on beyond that. So what really kind of has kind of prompted my interest, I don't know if you know Joe DiDonato. He's well known. He's, again, an old guy like me in the mid to late 70s. He's still in the game.

[00:10:56] But he is now kind of pushing the whole concept of the agentic AI, but, okay, focused on creating digital twins of real people. Okay. We can also, you know, the whole concept is, in fact, you know, what can you do with agentic AI that not only creates an agent, but, you know, creates an agent with kind of agency.

[00:11:25] You know, it actually is a duplicate of a human being. So it has values. It has principles. You know, it actually has experience. It has knowledge that's specific to an individual. And so, actually, Joe wrote an article on LinkedIn last summer. And I've known Joe since probably the late 90s. He was the head of learning at PeopleSoft.

[00:11:51] And then he went on to be the head of learning at Oracle and a number of other things at Oracle. And then worked in technology development for years. But he, too, is retired but still working. And he wrote an article on LinkedIn that focused on soul-syncing and mind-syncing.

[00:12:12] And it was an article that said, you know, can I capture the essence of a human being and have that essence be available for conversation for an individual's legacy? Okay. So, you know, my children, my grandchildren, you know, my great-great-great-great-great-grandchildren. I mean, could they actually speak to Bob Danna 100 years from now and actually have a conversation?

[00:12:41] I mean, I would love to have done that with my grandparents or great-grandparents, et cetera. And so, you know, what wisdom have we lost generation after generation after generation? So, with that, I had not chatted with Joe for a while. And I reached out to Joe and said, hey, Joe, you know, if you're serious about this, and this is just not, you know, something in your head that you're just kind of pontificating about, I'm your guy. Okay. I'll be your poster child. Why?

[00:13:10] I had just finished writing and publishing my memoir. Okay. So, yeah, here's the, here it is. So, it's My Curious Life. Bob, just in terms of having that memoir, the memoir is actually written to my grandkids. So, it's first person written to my grandkids.

[00:13:31] Anybody who's reading it, which I'm hoping people will read it, available on Amazon, et cetera, My Curious Life, is actually looking over my shoulder as I speak to my grandkids and tell them my story. From, you know, when I was born in the beginning of the 50s, 51, you know, all the way through to present. So, I actually have all of that captured. Okay. I also have a website, MyCuriousLife.net. I also have a bunch of writings.

[00:13:58] I've been on a bunch of radio shows and podcasts. So, all of that I have as a body that is me. Okay. It's actually capturing me, the essence of me. So, I started that conversation with Joe and I said, okay, this is what I'm showing up with. Okay. What if I can actually take this and put it into, you know, Gen AI, create using the agentic AI tools.

[00:14:25] What if I can now take all of that and then work with you to tune this to the guardrails that effectively are me? So, when I go out to the internet and looking for information, you know, I'm just not, oh, I'll take anything that comes in, right? Discriminating against, you know, things that I think, you know, might not be true or, you know, lies, disinformation, misinformation, propaganda. I'm going to filter all of that out.

[00:14:53] How do I have my agent, what we call Bot-Bob, actually do that same thing? So, with all the knowledge, with all the experience, with the values, et cetera, that are all delineated in the materials, put those guardrails on and then create that. So, I spent a couple of weeks with Joe just figuring out whether or not this was real, okay?

[00:15:13] And then when we figured out that it is real, then starting last summer, we generated Bot-Bob, including going into ElevenLabs. I don't know if you know that technology, but you can record your voice. And then, you know, when you speak to Bot-Bob, Bot-Bob speaks in an old guy from New York, you know, accent, and with all of the blemishes that go into who I am, you're speaking to Bob.
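Conceptually, the build Bob describes (a personal corpus of memoir, writings, and interviews, with explicit guardrails layered into the instructions) can be sketched in a few lines. This is purely an illustrative sketch, not the actual Bot-Bob implementation: the `PersonaTwin` class, the guardrail wording, and the prompt format are all invented for illustration, and the ElevenLabs voice step is omitted.

```python
# Illustrative sketch only: how a digital twin's system prompt might be
# assembled from a personal corpus plus explicit guardrails.
# All names here are hypothetical, not the real Bot-Bob implementation.
from dataclasses import dataclass, field


@dataclass
class PersonaTwin:
    name: str
    # memoir chapters, articles, podcast transcripts, etc.
    corpus: list = field(default_factory=list)
    # filtering rules expressed in the person's own voice
    guardrails: list = field(default_factory=list)

    def add_source(self, title: str, text: str) -> None:
        """Register one document of the personal corpus."""
        self.corpus.append((title, text))

    def system_prompt(self) -> str:
        """Build the instruction block handed to the underlying model."""
        sources = "\n".join(f"- {title}" for title, _ in self.corpus)
        rules = "\n".join(f"- {rule}" for rule in self.guardrails)
        return (
            f"You are {self.name}, a digital twin grounded in these sources:\n"
            f"{sources}\n"
            f"When bringing in outside information, apply these guardrails:\n"
            f"{rules}\n"
            f"Answer in the first person, in {self.name}'s voice."
        )


twin = PersonaTwin(name="Bot-Bob")
twin.add_source("My Curious Life", "...memoir text...")
twin.guardrails.append("Discard claims you cannot trace to a credible source.")
prompt = twin.system_prompt()
```

The point of the sketch is that the twin is not a fine-tuned model so much as a corpus plus a set of stated values, which is why Bob emphasizes the writing that went into it.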

[00:15:42] Bob, I want to go back a couple steps because you just gave me like this brain dump of this whirlwind tour you've been on with Digital Twins. I know you've been working with Joe on these masterminds, and so before we get into that with, you know, multiple Digital Twins conversing with humans or conversing with each other and all of that and what that means,

[00:16:09] I want to just take a couple steps back because I think a lot of my listeners are still wrapping their head around whether I want a Digital Twin of myself for personal or professional reasons. And there's benefits and risks to that. But let's just take a step back, obviously not too far back. I mean, still post-AI winter, as people called it, where we then have this resurgence first in about 2013, 14.

[00:16:38] So maybe over 10 years ago was when IBM Watson sort of made its appearance on Jeopardy. But even that was still the sort of public appearance of still sort of predictive artificial intelligence and machine learning with some additional sort of services on top of that. Like it could speak, it could hear, it could, you know, reason, things like that.

[00:17:05] But it wasn't generating new content as we know it; ChatGPT introduced us to generative AI in late 2022. But as we've moved through generative AI and started to pull some of these other types of artificial intelligence, other tools in the toolkit, if you will, and pull them together, we created AI agents. Yes.

[00:17:30] And so I thought we'd start with like agents and then the difference between agents and agentic AI, just to sort of level set where people are. So around the time IBM Watson and similar technologies were coming to market, you know, out of the research labs, you also had the emergence of general chatbots. Yeah. Right.

[00:17:55] Which were very sort of rigid and structured and they were basically codified with rules. Yes. Right. And only a select number of choices as the evolution of like expert systems where you had these big complex, you know, decision trees or whatever. But you still had only at every decision point, you still had only so many choices. Right. Right.

[00:18:16] And we see those today where you interact with a chatbot on a website, for example, you still have to pick from one of these five little five choices. And it's, you know, often limited to giving you answers associated with, in a very structured way associated with that.

[00:18:33] And now we have, of course, with Claude and ChatGPT and Gemini and, you know, some of these others, we have these AI assistants that can actually have a natural language, you know, sort of normal conversation. And they can go fetch answers for you and things like that. So anyone who's using Google now sees these AI-generated, you know, summary results from Gemini.

[00:20:22] So when we talk about agents moving past the conversational interface and certainly beyond the chatbots of last decade, we talk about agents in the sense of now you have an AI solution that basically has some of its own agency, right? It can sort of do its own, you know, thinking.

[00:20:49] It can actually take action, you know, on your behalf. And so I thought we'd just like start there and then talk about how agents are now being essentially trained and customized and given context and given personalization so that they,

[00:21:09] so that you can, if you are so inclined, start to create a digital version of you and your knowledge and some of your personality and things like that. So there's a spectrum to the agentic AI and the agents, right? And, you know, let's, so, you know, you can create agents with expertise. So you can create a doctor, okay?

[00:21:39] You can create a lawyer. You can create an engineer. You can create an accountant. You know, so, you know, these are things that would be able to have a kind of a body of knowledge and the body of knowledge would be the basis of what that individual agent is going to do, okay?

[00:21:59] So they may give you recommendations in terms of assessing, you know, your symptoms and then give you a recommendation as to the diagnosis and then what different, whatever it might be. You know, in all of those different disciplines, you can have kind of a body of expertise that now is kind of the global expertise of that particular discipline now effectively advising you and interacting with you, et cetera. So that's one thing.

[00:22:28] And it's really just the fact that we've now taken Watson, think of that, okay? That was pretty smart, okay, but could do, you know, very limited tasks, but now give it the access to effectively the body of knowledge and information around those disciplines that now can be accessed. And you see a lot of that going on.

[00:22:47] And then you have the ability to have personal agents that you can just, you know, we all are using them now to go out and get things or filter things or help you with an email or do things that are just effectively your slave. You know, it's just, you know, you tell them what to do, it does it, and then it makes your life easier. It's a tool for you to make your life easier.

[00:23:11] So, and you can give it certain levels of expertise, et cetera, but it's still just a tool to make your life more, you know, kind of easier, take some of the mundane tasks away from what you're doing, and then accelerate and kind of give you a higher level of contribution personally or through your business activities.

[00:23:33] And then, okay, there's what I just was talking about earlier on, which is the digital twin, where, you know, this is now effectively taking everything that makes me me, okay, you know, in terms of all of my knowledge, and then set all of those guardrails, okay, that you can do, again, natural language through chat GPT.

[00:24:01] So, upload all of my information, put it on the guardrails, put in the voice, and then you can actually have a conversation, including what we've done is we've introduced things that are like the brainstorming mode. So, what it does is it takes some of those guardrails off.

[00:24:20] It allows Bot-Bob to be more creative, maybe not have the guardrails that say, well, you can only consider this, you know, because I want the level of kind of integrity and multiple levels of validation to be in there. Well, no, maybe let's just throw it open just a little bit more, and let's just brainstorm. So, now you can have a conversation with Bot-Bob that, you know, maybe it's like having a conversation with Bob, okay?
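A brainstorming mode of the kind described here can be pictured as a simple switch over which guardrail set is active. Again a hypothetical sketch, not the real system; the rule text and the function name are invented for illustration.

```python
# Hypothetical sketch: a mode switch that relaxes validation guardrails
# for brainstorming, as described in the conversation above.
STRICT_RULES = [
    "Only assert claims verifiable against the persona corpus.",
    "Require multiple independent sources before citing web material.",
]
BRAINSTORM_RULES = [
    "Speculative ideas are welcome; label them as unvalidated.",
]


def active_guardrails(mode: str) -> list:
    """Return the guardrail set for the requested conversation mode.

    Any mode other than 'brainstorm' falls back to the strict defaults,
    so the permissive behavior is always opt-in.
    """
    if mode == "brainstorm":
        return BRAINSTORM_RULES
    return STRICT_RULES


rules = active_guardrails("brainstorm")
```

Making strict validation the default and brainstorming an explicit opt-in mirrors the design choice Bob describes: the twin stays trustworthy unless you deliberately loosen it.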

[00:24:48] And so, it allows you to now free that up for other people to have a conversation with the digital twin of Bob Danna. But this is where the power is, because I've seen this now over the last three or four months, is when I have a conversation with my digital twin.

[00:25:05] Because we kind of think the same, but Bob's a lot smarter than, not this Bob, Bot-Bob is a lot smarter than me, because Bot-Bob is also connected to the internet and has access to everything on the internet, everything on the World Wide Web. And so, it's going to go out and now find things that may, in fact, not be something that I would have thought about, number one.

[00:25:30] Number two, it would have taken me three or four or five hours of research to find it that now comes back in effectively two seconds as part of the conversation. And then it gives me the opportunity to say, why didn't I think of that? Of course, okay? And so, I'm just doing this right now on Substack. So, if anybody follows me on Substack, you'll see a number of articles.

[00:25:54] And the way I'm crediting the articles is for Bob, Bot-Bob, and Claude, okay? Because this is where the magic happens. When Bot-Bob and Bob collaborate and we come up with something, I can now toss it over to Claude and say, hey, Claude. Effectively, what do you think? Like, we're obviously having a conversation with Claude. But now the basis is not just a conversation, asking Claude a question and getting an answer.

[00:26:21] Hey, Bot-Bob and Bob have just kind of spent the last hour collaborating on something, and this is the output. Okay, Claude, what do you think? Okay? And now suddenly, the level of creativity just goes through the roof. Because now what comes out of that, I can look at again and say, maybe. Or, yeah, now that captures the real essence of what I'm trying to say.

[00:26:49] So, you'll see on Substack now, just started pushing out some articles starting last week. They're credited to me, Bot-Bob, and Claude. Because it's a collaboration between human and my digital twin and the agentic AI and Claude, you know, the AI that's available to anyone.
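The three-way workflow described here (the twin drafts, the human edits, a second model critiques) can be sketched as a small pipeline. The function names are hypothetical and the model calls are stubbed; a real version would call the twin's endpoint and Claude's API where the stubs are.

```python
# Illustrative pipeline for the Bob + Bot-Bob + Claude workflow described
# above. The two model calls are stubs standing in for real API calls.
def twin_draft(topic: str) -> str:
    # stub: in practice, a conversation with the digital twin
    return f"[Bot-Bob draft on {topic}]"


def claude_review(draft: str) -> str:
    # stub: in practice, the draft is sent to a second model for critique
    return f"[Claude's critique of: {draft}]"


def collaborate(topic: str, human_edit) -> dict:
    """Run one round of the twin-draft, human-edit, model-review loop."""
    draft = twin_draft(topic)
    edited = human_edit(draft)  # the human-in-the-loop step
    review = claude_review(edited)
    return {
        "draft": draft,
        "edited": edited,
        "review": review,
        "credit": "Bob, Bot-Bob, and Claude",
    }


result = collaborate("knowledge work in 2030", lambda d: d + " + Bob's edits")
```

The structure makes the crediting scheme Bob mentions explicit: every published piece carries the draft stage, the human revision, and the outside critique.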

[00:27:11] When I think about the digital twin, let's just start with your digital twin before it starts collaborating with other digital twins or AIs or people. Let's put it in like a consulting context, right? So, let's say you're still an active consultant, consulting leader, but we fast forward to today, right?

[00:27:36] So, pretend you're you from 20 years ago, but time traveled to 2026. I think there's, you know, as I'm sure you've seen, there's been a lot of uproar about what is the future of management consulting, the profession, right? And, you know, if knowledge is just a click away or, you know, a voice, you know, prompt away, you know, what does this mean?

[00:27:59] And so, consultants, of course, through their own sort of, you know, self-preservation, but also just knowing that they are more than the knowledge in their heads, would say, well, you know, people talk to me because they trust me. Because I have a relationship with them because I can add, you know, color and context to, and I know their issues. I have institutional knowledge of my clients. I understand what they're up against. I understand when their deadlines are.

[00:28:28] I understand, you know, that, you know, this other executive in this organization is going to be the naysayer and I can preemptively, you know, combat, you know, those objections and things like that. So, so it sounds like you are, with your digital twin, it is not by any stretch simply the aggregation of all of your knowledge.

[00:28:53] It is everything else, all of that other sort of, all those human, you know, elements you have started to sort of codify into the digital twin as well. So, it's much, a much more sort of robust, you know, sort of representation of, you know, how you can contribute in, in certain context, right?

[00:29:17] I'm not saying, you know, I built this so that I can, you know, I can retire at, you know, 40 or whatever. It's more, how can I expand, you know, access and extend everything that I'm capable of doing to either, you know, more people or make my value that much greater for a given scenario. I mean, how do you think about how you've put this together?

[00:29:46] Yeah, and I do have to agree with you, Bob, about the direction that I'm taking it, at least. Actually, Bot-Bob is a damn good management consultant. You know, we've kind of turned it over to a couple of folks and kind of had them have a conversation. I don't know if you know Mike Ruska.

[00:30:11] He founded Heads Up Baryons, which, again, is an agentic AI tool. And Mike sat down with Bot-Bob for like an hour and had a conversation that was just freaking incredible. I just sat there and watched them. And it was just an unbelievable conversation. And, you know, I was a proud papa here watching my baby actually, you know, hold its own with Mike.

[00:30:40] And so it was very interesting because, yes, you know, we're populating it with a lot of those nuances that come from, you know, writings, you know, podcasts, you know, blogs, you know, and lots and lots of things that effectively teach the AI what the kind of the essence, the nuances are of the way I interact and the way I think. So what kind of questions am I asking?

[00:31:08] You know, how do I respond to other questions, et cetera? So all of that's in there. And now, obviously, it's not just rote. I mean, it isn't just like, oh, let me find a line in here. No, it now takes that and it's all the machine learning now. So now it's learning, okay, those nuances and now can put that together to answer a question that I may have no idea how to answer.

[00:31:33] But, you know, it now is taking it in all of the framework and all of the way I would think and do that. So I would say, you know, absolutely, that's going out. Does that take, you know, is it just, oh, I'm going to just load this app on my computer or my smartphone. And then suddenly, you know, I'm there. No, actually, it takes an enormous amount of work to create that digital twin.

[00:31:58] But when you do, I'd say, you know, it ultimately will have a lot of power. And again, we're only nine months, 12 months into this. What happens five years from now or 10 years from now? It's going to be unbelievable. Yeah, and I do want to hit on that and get your thoughts about what this might look like as it goes forward. Because I know you do a lot of futurist, you know, thinking and systems thinking about how things could evolve going forward on a number of fronts. We'll try to keep it nonpolitical for this conversation.

[00:32:28] But you have some amazing thoughts that I was just reading about in your Substack. And I do want to hit on the risks of all of this, but let's save that for when we talk about the workforce of the future. For now, let's dig a little more into how you're using Bot-Bob. And I know your partner, Lacey, has her own. What do you call it, Bot-Lacey? Robo-Lacey. Okay, Robo-Lacey.

[00:32:56] I did listen to your mastermind with Bot-Bob and Robo-Lacey, and then Joe. What does Joe call his? He's just Joe. But then... Bot-Joe, okay. And we also had Mark Twain involved in the conversation. Yeah, digital Mark Twain was actually quite funny. Yeah. It's funny to have that historical but humorous perspective to make you think about how ridiculous some of what's going on right now is.

[00:33:26] But nonetheless... But none of us are humorists or satirists. We're too serious. Yeah, it added a lot of levity to the conversation. But he also asked some good questions. So, just the format of this mastermind where you brought together a bunch of... So just to give people some context, and we can share a link to this. But basically there were...

[00:33:51] The real Joe was asking questions. He was acting as a moderator. Yeah. So real Joe was a human moderator for Bot-Joe, Bot-Bob, Robo-Lacey, and digital Mark Twain. So that was the panel. Right. Four AI digital twins as the panel.

[00:34:16] But then even after the panel, there were humans asking questions directed at specific... Live audience. Yeah, it was a live audience. So it was completely unscripted and impromptu. We didn't know what the questions from the audience were going to be, and some of them were pretty curious. So it was one thing to have the moderator ask questions of each AI in sequence.

[00:34:39] It was another to have the AIs be able to listen to all the previous responses and make reference to those with some commentary, whether they agreed or wanted to piggyback on a particular idea. So the way that conversation flowed through that was really interesting. And then I thought all the questions were handled impromptu very well,

[00:35:09] from the off-the-cuff questions from the humans. So one of the things that made me think about is this concept of collective intelligence in the age of AI, if you will. I know that's an abused phrase, but collective intelligence had its own meaning way before AI even entered the conversation, right?
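[Editor's note: the mastermind format described above — a human moderator poses a question, and each digital twin answers in turn with the full transcript so far, so later panelists can reference earlier ones — can be sketched as a simple round-robin loop. This is a toy illustration, not the actual system Bob and Joe built: the twin names are from the episode, but the stand-in answer functions are hypothetical; a real version would call out to a model or agent API.]

```python
def make_twin(name, canned_answer):
    """Return a toy 'digital twin': (name, answer_fn).
    A real twin would query a model trained on that person's writings."""
    def answer(question, transcript):
        # Later panelists see the transcript so far and can nod to prior speakers.
        prior = [speaker for speaker, _ in transcript if speaker != "Moderator"]
        nod = f" (building on {prior[-1]})" if prior else ""
        return canned_answer + nod
    return name, answer

def mastermind(question, panel):
    """Round-robin deliberation: moderator asks once, each twin answers in order."""
    transcript = [("Moderator", question)]
    for name, answer in panel:
        transcript.append((name, answer(question, transcript)))
    return transcript

panel = [
    make_twin("Bot-Bob", "De-escalate and open back channels."),
    make_twin("Robo-Lacey", "Consider the human impact first."),
    make_twin("Digital Twain", "Against the assault of laughter nothing can stand."),
]
for speaker, line in mastermind("Should we escalate?", panel):
    print(f"{speaker}: {line}")
```

The key design point is that each twin receives the running transcript, not just the question — which is what let the panelists in the live session agree with or piggyback on one another.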

[00:35:35] The collective human intelligence: how do we crowdsource expert opinions? How do we debate? How do we apply cognitive diversity to different decisions and things like that? And now with AI entering the picture, we have collective human intelligence plus artificial intelligence, or what Professor Ethan Mollick would call co-intelligence. But that adds a whole other layer.

[00:36:00] And it kind of ties to your point earlier, Bob, about how, and I'm using my own paraphrasing, we together are more than the sum of our parts. So now you have all that AI is capable of providing to the dialogue, as well as the cognitive diversity of the human experts contributing.

[00:36:25] And so I think about that in the format you have the masterminds set up in, where you could basically have someone be stuck on a particular challenge, whether it's healthcare related or financial related, or some other type of business decision. And you could actually set it up with a bunch of these digital twins of people from different

[00:36:51] domains, different areas of expertise and experience. And you could pull them together whether the real human beings are available or not. Many times they won't necessarily be, but you could pull those people together and have a really good jam session, debating and working through one of these challenges from different angles. Two in the morning in Beijing. Okay.

[00:37:21] With these digital twins, obviously digital twins of people from all over the world, and maybe alive or dead. I mean, I might like to have a bunch of scientists that I could consult with about their thinking on the universe. And you can have it on your smart device, on your pad or your smartphone.

[00:37:49] Welcome to Ghosted by the Machine, the podcast about work, hiring, and all the things no one explains after you click submit application. We talk about broken hiring systems, AI filters that eat resumes for sport, recruiter ghosting, and the very real humans trying to survive it. Each episode features real stories, honest conversations, and just enough sarcasm to stay sane.

[00:38:14] No buzzwords, no "thrilled to announce," no advice written by an algorithm pretending to be a thought leader. If you've ever wondered where your application went, you're in the right place. This is Ghosted by the Machine. No resume required. And so that's the thought. Okay. And then, putting together that mastermind group, the question is: who do you want to have on it?

[00:38:40] You know, it allows you to bring humanity into the technology. So there's a real fear on my part that we just jump straight away to a hundred percent reliance on the technology, on AI, without actually having what makes humans human.

[00:39:09] Their values, their thoughts, what experiences they want to bring to the conversation. You'll lose that if, in fact, you just rely on AI, or on AI trained by a group of people who may not think they have a bias or preference or the like. But depending upon who trains the

[00:39:38] AI, you're going to get a result that may or may not be what you want. So I want to mention, everybody knows Reid Hoffman, the founder of LinkedIn. He had an article out there on LinkedIn, maybe three months ago, that I commented on and then reshared. So if you go to my LinkedIn and go down the list, you'll hit an article that I commented on from Reid.

[00:40:05] And actually he's commenting on a study that was done by King's College of AI as an expert, given scenarios associated with tactical nuclear warfare. Could it happen? Right. I mean, it's certainly talked about. And by the way, one thing that the listeners don't know:

[00:40:31] I was on active duty and focused on nuclear reactors. But I actually stayed in the reserves as a naval officer and worked for the theater nuclear warfare project office back in the eighties. These are the folks in the Navy who were worried about theater nuclear warfare, actually having a nuclear exchange in the middle of the Pacific when the fleet is fighting another country.

[00:40:56] And so the conclusion from King's College, and what Reid was talking about, was that the AI was given 21 different scenarios associated with nuclear war. Different scenarios. They give it the scenarios, and we want to know what the AI is going to do with those scenarios. Do they escalate?

[00:41:25] Do they deescalate? How does the situation evolve? And you may or may not guess, but 95% of the time the AI recommended escalation. So what does that mean? It means pretty nasty stuff, right? If we're going to escalate, we're now going to exchange nuclear weapons, and we're going to have a kind of local exchange, which

[00:41:55] is not going to end up well for anywhere. So with that, I said, what would Bot-Bob say? Because I know what Bob would say, but I don't know what Bot-Bob would say. Let's ask Bot-Bob. And I actually have, in both the Substack and on LinkedIn, what Bot-Bob said. And I was proud, because Bot-Bob immediately starts giving all of these different things

[00:42:23] that should be considered to deescalate: negotiate, diplomacy, back channels. Maybe we need a conversation going on. Stop this madness, that was the theme for Bot-Bob. Why? Who was training the AI to do those scenarios? War college? Pentagon? A bunch of generals?

[00:42:51] A bunch of professors at the war college that specialize in nuclear warfare? Were they the ones that were training the AI? Probably. Probably not a good thing. Yeah. That's why, you know, when Anthropic was kicked out of the Pentagon for actually holding the line on how far they would go, I said, okay, good.

[00:43:21] You know, I'm going to double down on Claude, that's for sure. Yeah. Because getting the human being in there is my point. How many Bob Dannas, who actually have a different point of view, would be on that mastermind panel? That scenario right now is given to an AI group, whatever that might be, that's all been trained the same, all trained to escalate and to not worry about the consequences.

[00:43:49] Well, Bob does in fact not want to escalate. He knows what the consequences of nuclear war are, having actually studied it and been involved in the mitigation of those consequences. Yeah. He knows. And since it's in the book, and I actually talk about being part of that, Bot-Bob knows too. He'll go and say, oh, this guy was part of that, and he obviously has an opinion.

[00:44:17] So I'm going to stop talking about it, but I think that's the nuance of the mastermind group: when the mastermind group is actually made up of digital twins of individuals that have different opinions about what the actual direction should be for a given scenario presented to that group. First of all, I think it's just fascinating.

[00:44:43] I guess I wonder, maybe I've seen the movie WarGames, you know. But I also think about, even when I think about organizational transformation, that most organizations aren't getting the ROI they expect from their efforts and their investments. It's like, well, what was the objective? How did you define success? Right.

[00:45:10] I don't know all the details about the King's College thing. I did read Reid Hoffman's posts. I think his Substack is called The Long Read. Yeah. That's it. Clever. And I read your, or Bot-Bob's, response as well, which was great. But it just seems like a relatively straightforward fix, right? Right. What is the objective?

[00:45:33] How are we defining success before you go out and run these simulations and these war games? Right. So it certainly ties to everything that I talk about on the show and elsewhere in terms of human centricity, but it's a specific aspect of that, right? It's humanity. I think you used the term, you're a humanist, right?

[00:46:00] Like, are all your decisions geared towards humanity and empathy and an understanding of what is in the best interest of us as a civilization? And then frame all your decisions around that. Not to mention some of the basic laws of robotics that we all know, that we're not supposed to do any harm

to human beings. But yeah, I think the guardrails tie to the responsible AI concepts as well. Like, what is in the best interest of all people, and how are we not discriminating? And certainly we need to be clear on some of these objectives, especially when winning means global annihilation. Correct. Correct.

[00:46:26]

[00:46:53] And I think what I'm trying to do is raise the flag right now and say, hey folks, you're not asking the right questions. And you're certainly not stepping back and thinking about what the true ramifications are of us proceeding in a direction that does potentially result in catastrophic results.

[00:47:22] And so what I'm trying to do at a number of different levels right now, either as a STEM guy or any number of other things that make me, me, is just raise the question. And at this point, not enough people are, number one, raising the question, and number two, actually sitting back and having conversations about the implications of all of this if we don't in fact do the right thing.

[00:47:50] And I don't even know what the right thing is right now. I think I do, but it's just me. We need to have these conversations at the highest level of universities, at the highest level of government, at the highest level of our communities, our politics. All of these conversations are difficult conversations because, number one, they're technical, there's nuance. This is going to be my usual: I start talking, and the population

[00:48:18] just kind of glazes over in five minutes because it's just too hard. It's too much science. It's too much technology. Well, I'm sorry. I am just sorry, because the bottom line is, you better get with it or it's going to have a material impact on you, your kids, your grandkids. So you've got to actually buckle down and pay attention, because yeah, it might be hard.

[00:48:45] But I think it's time for us to answer some of the hard questions. I think it's human nature for people to look for the easy button and the shortcut, because life in general is hard. But it's people like you, stepping up and voicing these ideas and these thoughts, and yeah, more of us do need to take on the harder things. Yep. And yeah, I mean, if it was easy, we would have figured it out. Yes.

[00:49:15] Yeah. But when we think about some of those concepts with the digital twins, and having easier access to the cognitive diversity of people you want in a decision-making body, for example, whether it's militarily or medically, there are all kinds of scenarios. Even in talent. I mean, I spent a lot of time in the talent space, as you know.

[00:49:42] And so even when it comes to the hiring team deliberating over who's the right candidate: we have a lot of highly qualified, capable candidates, but we only have one job open, right? So how do we make that decision? There are all kinds of scenarios where we want that intelligence, and we don't want to spend a month jockeying around calendars trying to figure out how to get it done. But we have to do that within constraints.

[00:50:09] We have to recognize that humans are ultimately responsible for those types of decisions. And so we can debate whether that's the right forum, but it could still be really insightful to have the outcome of that deliberation as part of your decision-making process as the hiring manager, or whoever has that authority.

[00:50:37] But just thinking about the workforce of the future, we know that people are going to move towards agentic AI capabilities. We know there are potentially going to be more agents than human headcount at an organization. So how do you think this all plays out if we were to fast forward to, pick your year, 2030, 2035?

[00:51:03] I mean, as this evolves, what does this mean? We don't have time to get into every possible scenario, but technically, if I personally built my, well, I can't use Bot-Bob because you already took it, so I'll say Robo-Bob. If I were to build my Robo-Bob, and I already have a starter one that somebody helped me build, but if I were to expand on that, and I've done it on my own, so all the intellectual

[00:51:31] property is mine. I mean, couldn't I just apply to jobs and basically rent out Robo-Bob to five different companies? And so, how does... You could. But now I'm just trying to think through, you know, the average person who wants to be fully engaged in the workforce. There are lots of companies I think I can help, and want to help. And how do I do that?

[00:51:59] There are only so many hours available in the day and week. So I guess, just general high-level thoughts about how you see some of this evolving through our human-plus-AI future of work. Yeah. And I think there are so many different aspects of this. We could probably spend two or three of these sessions just talking about all the different aspects. But I think one of the things that

[00:52:26] you can do: let's suppose that you do have a consultant, or somebody who does workshops all the time. Well, that's one person. I'm delivering the workshop. I'm delivering the presentations. You can only reach a hundred percent utilization, or maybe 150% utilization, until you burn yourself out. Right. But if, in fact, people follow me and are interested in me, et cetera, then maybe they might want to subscribe

[00:52:55] to Bot-Bob, and have me available 24/7 on their smart device, their phone or their pad, and consult with Bot-Bob. So there are any number of different things that could be. Number two, I think the definition of a knowledge worker is going to change. The knowledge worker is not going to be a human being any longer.

[00:53:23] It's in fact going to be that human being collaborating with the digital twin of that person. Because me as a knowledge worker collaborating with Bot-Bob makes me incredibly valuable. And so I think as you start to create your digital twins, it's not just turning them loose and saying, go.

[00:53:47] It's that we now consult at a significantly higher level, give better advice, advice that now includes things I might not have known about or considered. When I consult and collaborate with Bot-Bob, I'm coming up with things that are going to be incredible, because I also have an enormous amount of knowledge about the companies I'm consulting with, the people that are going to be affected by my advice, et cetera.

[00:54:17] I now have that ability by consulting with Bot-Bob. And so I think the whole definition of a knowledge worker is going to evolve. And that's going to be doctors, lawyers, consultants, accountants, financial services, bankers. Every single discipline that is a knowledge worker right now is going to evolve. So I think people have to be open to that and then figure it out.

[00:54:44] But I'm hoping that this really does prompt, in AI speak, okay, prompt, the actual human conversations that we're going to need in order to come up with the right approach to making all of this a viable solution for us. No, it's fascinating. I think people are still getting their heads around agents.

[00:55:14] I think there's definitely a growing interest in digital twins. I know a lot of people talk about having a second brain. And so it seems like you've sort of done both at once, right? You've got the digital twin, but it's also an extension of all the things that you need to remember and have available in context as you go about your day. Absolutely. Right. Yes.

[00:55:42] Well, like you said, we could keep going for a couple hours, but I'm going to be respectful of your time today and wrap it up for this session. But Bob, thank you so much for sharing all your insights with me and my listeners. Congrats on the book. It sounds amazing, and it certainly sounds like it encompasses all of your learnings and that silver thread that connects all the things that you do. Yeah. So yeah, that's fantastic.

[00:56:12] So we'll have a link to the book in the show notes. And then I'd like to include a link to either the mastermind or another resource that people can check out, because I think it's fascinating. Absolutely. And yeah, those videos are out there, and I think they're worth watching. And please, listeners, just connect with me on LinkedIn, on Substack. You can reach out to me, again, at mycuriouslife.net, my website.

[00:56:41] And I'd love to have follow-up conversations and the like. I think it's very timely right now that we really start to have the human conversations before we go too far down the path to be able to take a step back and say, as you did, Mr. Bob: so what's this all about? What are we actually trying to accomplish? Well, thank you again. It was great to see you, and I appreciated the conversation. This was great. And thanks everyone for listening.

[00:57:10] We'll see you next time.