Bob Pulver welcomes Andrew Gadomski, Founder and Managing Director of Aspen Analytics, to talk about how AI and data science are reshaping the future of HR, talent, and organizational decision-making. Andrew is a veteran workforce data strategist who shares candid, practical insights on what it really takes for companies to evolve their data maturity, why LLMs can’t be treated like magic wands or oracles, and how to make AI work with your people, not instead of them. From “decision gravity” to the fallacy of talent pipeline management, this episode is a masterclass in balancing technological possibility with human nuance.


Keywords

Andrew Gadomski, Aspen Analytics, workforce analytics, decision intelligence, data maturity, talent strategy, HR transformation, responsible AI, talent pipeline, future of work


Key Takeaways

  • The difference between using AI as a prediction tool vs. a decision-making tool—and why that matters

  • “Decision gravity” and how influence travels through an organization

  • Why most organizations aren’t “data mature” and how to assess where you really are

  • LLMs (like ChatGPT) aren’t ready to make decisions—they need guardrails, oversight, and smart humans

  • The myth of a linear talent pipeline and how hiring should actually work

  • Data-informed ≠ data-driven: what smart decision-making really looks like

  • How to frame AI adoption around people, not just tools


Sound Bites

  • “Data is a tool for influence—not control.”

  • “If you don't trust the decision, you won't trust the data.”

  • “AI will tell you what it would do. It won't tell you what you should do.”


Chapters

  • 00:00 – Welcome and Guest Intro. Overview of Andrew’s role at Aspen Analytics and his approach to data-driven transformation.

  • 05:10 – What “Data Maturity” Really Means. Why most organizations overestimate their data capabilities—and what a mature approach actually involves.

  • 12:40 – Decision Gravity and Influence Mapping. How organizational decisions really get made and why influence—not hierarchy—is what drives outcomes.

  • 21:25 – Prediction vs. Decision: The Role of AI. Understanding how AI fits into human workflows, and why relying on LLMs for decisions is risky.

  • 31:00 – The Limits of Large Language Models (LLMs). Where LLMs can be helpful, where they hallucinate, and how to set trust boundaries around their output.

  • 40:30 – Hiring Myths and the Talent Pipeline Fallacy. Why treating hiring like a “pipeline” misses the mark, and what a better model could look like.

  • 52:15 – Building Trust Through Responsible AI. How trust, transparency, and cultural readiness shape whether AI is embraced—or ignored.

  • 63:00 – Reframing Success: Learning, Not Just Automation. Closing reflections on how organizations can prioritize adaptability, curiosity, and practical value in the AI era.

  • 72:30 – Final Takeaways and Where to Learn More. Andrew’s parting thoughts on decision support, ethical data use, and leading with intentionality.


Andrew Gadomski: https://www.linkedin.com/in/andrewgadomski

Aspen Analytics: https://www.aspenanalytics.io/


For advisory work and marketing inquiries:

Bob Pulver: https://linkedin.com/in/bobpulver

Elevate Your AIQ: https://elevateyouraiq.com

Substack: https://elevateyouraiq.substack.com

What’s Your AIQ? Assessment interest form


Powered by the WRKdefined Podcast Network. 

[00:00:09] Hey everyone, it's Bob Pulver. Today on Elevate Your AIQ, I am joined by my friend Andrew Gadomski. Andrew is the founder and managing director of Aspen Analytics, and he is a trusted advisor to some of the world's largest employers on how to make better workforce decisions. Andrew is also one of my trusted advisors who I've really enjoyed getting to know over the last few years in the responsible AI space. He is one of my top resources to better understand the implications of AI legislation, governance, and risk.

[00:00:38] Andrew's perspective is one of my favorite things that I've ever heard on these topics. So having him on the show is long overdue, and it's a pretty long discussion. And even at that, this can be considered part one of a longer conversation. We get into a ton of important things like what it really means to be data mature, how AI can support decision making (support it, not make the decisions), and why it's time to reimagine our concept of a talent pipeline. It's a thoughtful, wide-ranging conversation, and I think you're going to take a lot away from it. Thanks for listening. Let's dive in.

[00:01:07] Hey, everyone. Welcome back to another episode of Elevate Your AIQ. I am your host, Bob Pulver. With me today, I have the pleasure of speaking with my friend, Andrew Gadomski. How are you doing today, Andrew? I'm doing great. Bob, thanks so much for having me on the show. I'm a long-time listener, but first-time guest, so I'm excited about that.

[00:01:29] Yeah, I would say this is overdue to finally have a recorded conversation. We've been talking quite a bit over the last year and a half or so, probably since we were both on Recruiting Brain Food, talking about a lot of the topics we're going to cover today.

[00:01:45] Yeah, I think that was probably our first back and forth. It's fascinating how, you know, if you were to listen to that now, there's an amount of prognostication that has come to be real.

[00:02:03] And so I'm excited to see that, but at the same time, not as excited to see that, you know, a year and a half ago, whenever we did it, people weren't taking it as seriously as they probably could have. To some degree, to some degree, it's human nature, I guess, from a people perspective and then organizations. I mean, you know this better than I do. Like they just, they wait until something really bad happens before they take action.

[00:02:33] Yeah, that's true. I think there has been a history of organizations, when regulations exist, that may not do anything until there's a formalized inspection or until there's been an incident.

[00:02:59] I think I've learned that over the years working with a lot of manufacturing organizations, running staffing at a manufacturing organization. And so there was this, you know, because there were inspections, there was this natural tendency to say, well, we have quality control, we have quality management, and that became its own function.

[00:03:21] Well, HR has had that in small amounts, but it wasn't necessarily technology-driven. It was outcome-driven. Do you have some sort of equal opportunity reporting, or is there a corporate sustainability and responsibility?

[00:03:44] The SEC added human capital disclosures to annual reports. And so there were a handful of things that I wouldn't call voluntary, but something that could get thrown over the wall relatively quickly and didn't necessarily have a complex patchwork.

[00:04:05] And you probably could have thrown it to legal professionals if you had to, and you could have also done it in such a way where you can retroactively investigate an audit.

[00:04:18] Well, a lot of that isn't what we're in right now. You need to progressively have a program. You need to constantly have safety and ethics. And that's just not programmed into talent acquisition and HR because it's so transactionally oriented.

[00:04:41] It's just, you know, it's just, you know, we're constantly doing things, but none of it necessarily from a natural safety perspective, natural security perspective. And that's really where we are now.

[00:04:58] Yeah. Yeah. I think there's going to be a bunch of examples that we can cite to talk about, you know, that transactional nature and what that means when we talk about experiences, when we talk about, you know, employer brand and reputation, not to mention the need for people to sort of think more deeply about their role in the organization and how do they drive more value?

[00:05:26] And how do they drive towards those outcomes that are more meaningful at a, you know, from a strategic standpoint, as opposed to, you know, being tactical, thinking about, you know, just your next hire or the specific transactions that support that.

[00:05:49] I think what might be a little bit hard for the talent acquisition part, you know, you have individuals who deal primarily with employees. And so that's the HR space, workforce planning space, you know, labor relations. And then there's individuals who, as controversial as this might sound, the majority of their job is not engaging with employees.

[00:06:18] The majority of their job is engaging with non-employees and who will never be employees. And they don't carry the decision on the offer. Someone else does in terms of either a manager makes a decision to extend it or an applicant makes the decision to refuse it or accept it.

[00:06:42] But talent acquisition doesn't carry as much weight in personnel actions as potentially their counterparts where we need to make a transfer, we need to make a pay increase. Those are different weights of decisions. Yeah, for sure. Andrew, we neglected to just get a little bit about your background. We jumped right in.

[00:07:10] Could you just give the listeners just a little bit about your background and then, you know, what you're working on these days? So it's funny when people ask me that question. I never know. Do you start with the most recent or do you start with the earliest? So I'll start with the early one. So I am, I have always been a mathlete.

[00:07:32] So from a, from a very young age, you know, it was, you know, can Andrew come up with, you know, what's, you know, 12,180 divided by 74, you know? So that's always kind of been something that's been part of me and I've been computer oriented.

[00:07:50] But I come from a long line of individuals who were first in their entrepreneurial nature. So my mother grew up in Kokomo, Indiana, which is known as the city of firsts, right? Um, and so there's always been this natural tendency to do things, uh, in my career that were first.

[00:08:20] I worked with, uh, real media as a distributor when Mark Cuban stood it up. And that was back in 1998. That's a long time ago to, um, put video and audio on the internet. You know, we, we take it for granted now. You and I are doing it now, but when you're doing that, uh, back then it was interesting.

[00:08:45] And then I moved on to executive search, uh, by accident. No one falls into it. No one goes to school for it. I was not enjoying my job in marketing and sales. And someone said, you know, I met with executive recruiters and they, well, one of them said, so you have a background in marketing. So you have degrees in marketing and logistics. You've got a chemical engineering background. You're science oriented.

[00:09:12] Well, why don't you go recruit for a living for us? I had no idea what that meant. Um, but after, after a little while, and this is CareerBuilder and Monster, just kind of jumping in. I said, this is, this is messed up. I mean, this is not a sustainable model. All of these systems are going to converge. And that probably went back to some work I did in broadband technology when I was in undergrad.

[00:09:43] And I had a pretty good internship, uh, that did a lot of engineering and marketing work around convergence theory, uh, with broadband and analog. And I said, this is just not sustainable. You can't do all these manual processes. It's all going to become automated and algorithmic. And no one's going to listen to me talk about data unless I go and work as a head of staffing for something.

[00:10:13] And so I ended up leaving, uh, that executive search practice to go and become the head of staffing for a $5 billion division at Honeywell, which was in chemicals. And so it was in my wheelhouse. Um, but that was really the start of this business back in 2006. I didn't think I was going to have any credibility talking about data and worldwide change if I had never done the work.

[00:10:39] So that was in 2006 and Aspen has been around since then and has gone through some evolutions, first in terms of consulting, then focusing on workforce analytics. And now it's focused on audits and de-risking from regulations and, and risk that organizations have. So companies, you know, they're usually deployers more than developers.

[00:11:06] So the employers will come to me and say, hey, we're, we're running a global organization. We're using AI more, but we also want to make sure we're doing pay equity fairly. We want to make sure that we're being transparent. We're concerned about the risks associated with using these new technologies, potentially cyber risk or usage risk.

[00:11:31] And I'll sit with them and use a lot of that work that I've done to help them do that. I probably should have mentioned that during, during the time, as I moved more into audit, I did a lot of workforce planning for the Department of Homeland Security.

[00:11:50] And, uh, what a ride that was. That was growing a cybersecurity workforce with very difficult positions from, uh, 1,500 to almost 4,000 people, um, with a workforce of 8,000 total with contract staff. And that was a very sophisticated workforce planning organization.

[00:12:14] And so between all of that, now I'm going to, I think I have a, a fun role every day in that the companies are coming to me with significant problems around AI and wages and cyber, and they ultimately have never faced them before. So I really enjoy the newness. Yeah, that's quite, that's quite a trajectory.

[00:12:38] And I feel like, you know, my career has not taken a linear path either. And I always sort of look back at different experiences and how can I bring some of those learnings and some of that expertise to bear in some of the things that we're talking about now, right? Around risk mitigation, you know, bias mitigation, fairness, um, looking for adverse impact.

[00:13:07] Um, thinking about governance beyond, beyond what a typical governance risk and compliance, you know, GRC, you know, team might typically handle. And that's one of the things, I mean, be curious to get your, you know, workforce planning perspective around that, because I do feel like a lot of companies, partly to the point we were making originally around that they're just kind of not necessarily thinking about this until something happens.

[00:13:36] They're also thinking that, oh, AI governance, you know, just the fact that the word governance is there. And I, I've got someone with that title. I've got one of those teams. I'm just going to, you know, oh, if you think we can, you know, automate a bunch of things and maybe we'll save them a little bit of time with the routine stuff. And now we can throw this at them and let them handle the AI governance too.

[00:13:56] And, you know, my experience and, and what I've learned through organizations like ForHumanity, et cetera, it's just like, that is, that's a terrible idea. Well, I, I think that with, as, as we're moving into this, you know, how many times a day can we say, you know, artificial intelligence, right?

[00:14:18] I mean, it's just, it's gotten to be almost, you know, I, I, you know, I almost kind of, you know, hate saying the words because you say them so often. But it is its own function and it is its own set of unusual or new methods.

[00:14:40] But if you look at it from a position of security, safety, and ethics, those are actually old concepts. And, and so when you, when you look at how we're going to transition people into the use of this, the question is, is do they have that safety and the security and the transparency built into their, their competencies?

[00:15:06] And I think, for example, diversity, equity, inclusion, you know, DEI. I'm actually a big fan of another phrase called ABIDE. So accessibility, belonging, inclusion, diversity, and equity. Meaning that, you know, accessibility includes language and individuals with disabilities and those types of things.

[00:15:33] But individuals who have done that type of work come from a safety and security background and have done different programs that would encourage that. I'm hesitant to say that they'll make a strong transition into, say, audit or artificial intelligence governance because it's so data heavy and so technical documentation heavy.

[00:16:00] But those are things that can be learned because ultimately you want safety and security. But I think that's a really good example of people who can probably make that transition to, you know, from one career to another. I think it's actually not even a career change as much as it is a focus change.

[00:16:21] I do pay attention to a lot of these folks who have put out, like, here are some roles of the future kind of thing. And some of them are kind of BS, right? Like, I don't, I don't know that all of those roles need to be dedicated roles. So some of them are logical, but maybe combining, you know, two or three of them.

[00:16:44] And, you know, there's, you know, an AI ethics leader, there's a governance lead. That one probably is a full-time role. But then there's all these little ones, like, you know, overseeing prompts, and some of these things just sound more like something that should be embedded in most people's roles. Or it's just one piece of, you know, what one of these new roles could be.

[00:17:12] But I do think about, as I think about not letting people fall behind or get complacent in, you know, the profession that they're in with a specific focus. And I think about people in talent acquisition, you know, pretty regularly when, when we talk about this, but do they have the capacity, the transferable skills, the desire to move into some of those roles?

[00:17:38] Do they have sort of, do they respect that, you know, that things are changing? That their role, as much as they provide, you know, maybe some level of, you know, advisory work, or they're providing a good candidate experience and those kinds of things. You know, you've still got to see the writing on the wall.

[00:18:02] And, and that's, that's more, that's partly what's coming just from a, from a market perspective. But also your organizational sort of appetite for, you know, evolving and the, the, the speed at which they're willing to do that and invest in, you know, better positions.

[00:18:27] Positioning themselves going forward by, you know, there's, there's going to be, there's going to be some job loss. And then what do you do with, with that? Do you, how much do you sort of reinvest in your own, your own people to assess them for the ability to make some of those transitions?

[00:18:48] And that's why when I, when I talk about, you know, this concept of, of AIQ or AI, you know, maturity and readiness and things like that, it's, it's very much, this isn't, we're not talking about just hands to keyboard, you know, prompting, you know, those types of, of skills.

[00:19:04] It's appropriate use, appropriate use cases, um, and understanding, um, you know, how to evolve that human plus AI kind of relationship. Cause I think that's going to be the most productive and constructive, um, approach to how we think about, you know, building the future of work that we all want.

[00:19:27] I think one of the things that talent acquisition has denied for a very long period of time. And maybe this is because I'm just older in the space is it's an outsourced function inside an organization.

[00:19:46] So, you know, we didn't have recruitment and talent acquisition as a function when organizations held on to personnel for a longer period of time and had, uh, promotion tracks. You know, when you don't have multiple vehicles in your family,

[00:20:13] when you're centralized around train stations, when manufacturing is set locally for you, and when prices are under control, there's a high risk to leaving your job. And you had wage increases that were normalized, where if I stay at my job, I will make more money.

[00:20:41] Well, over time that changed. And so all of a sudden, lots of people are leaving and lots of people, uh, have to be hired. Uh, workforce participation for 18 to 64 year olds has increased progressively since the 1960s.

[00:21:00] And, and now, you know, socially there's no stigma around staying with an organization for only two or three, four years. In fact, you will make more money by leaving an organization and moving into another one every three years than if you stayed, especially as span of control (how many people report to one person) increases. Yep.

[00:21:27] And so recruiting is here because all that has increased and it became a distraction. And there was a level of expertise where operating managers said, I don't want to be an expert at talking to people who likely aren't going to work for me. Thus, the function was born.

[00:21:53] And it was encouraged by human resource personnel saying, I also don't want to talk with people very often who don't work for us. And now here we are where we have people who are holding onto jobs as best they can.

[00:22:15] We have knowledge workers that are struggling to find jobs because we're naturally reducing the amount of knowledge work. And we've had a trade skills gap and an education gap for trade skills for the better part of two generations. And recruiting is saying, I don't know what to do. And, you know, oh, well, if you become more AI-oriented, well, you will have more value.

[00:22:45] I don't discount that. But if there is no demand for new people, there's no need for your job. Right. Because the cannibalization of recruiting isn't necessarily AI itself. It's AI at the hiring manager level. So they can just do it themselves. And there's enough automation where, why do I need to outsource this to another human?

[00:23:10] When we have another series of tools that allow me just to handle it and do my own work, similar to other administrative based tasks. You know, we no longer do some of those things and outsource some of those things. I think one of the best examples is probably how CPA engagement has gone down over the last 40 years.

[00:23:38] The majority, so many people use Quicken. So many people use those types of tools. We don't even think about it. And, okay, well, that's where we are right now. It will be interesting to see, you know, again, everyone's trying to build these AI tools and AI literacy and recruitment. But it's all dependent upon supply and demand. And I hate to break it to everybody.

[00:24:04] But if you think talent intelligence and, you know, special searching is going to find you radiology technicians, which are in high demand, mechanics, or CNC machinists, or people who work on, you know, roughnecks who work on oil rigs. Think again. Think again.

[00:24:58] DA Spencer. Paper on random cars in a parking lot of the competitor. I mean, it's... Right, right. You know, I'm definitely, you know, removed from that because I've been in the corporate world my whole time.

[00:25:26] I mean, yeah, back to, I'm a little older than you. So yeah, back to the mid-nineties, I've been sort of in corporate life. Well, I mean, I guess technically I left it a couple of years ago, but, but still obviously actively, you know, working with and talking about, you know, some of these big companies and thinking about, um, knowledge work. But I, you know, I think about the current workforce and the future workforce. Um, you know, if you're,

[00:25:53] my daughter's getting ready to apply to college. I think your, your kids are, you know, um, you know, older, younger. So yeah, my daughter is younger than yours. Only by like a year or two though. Right. 13. Okay. Almost nice. But I was talking to some folks yesterday at this, uh, NYU coaching and innovation, coaching technology summit. Big fan of NYU as an alumnus. Uh, there you

[00:26:19] go. So yeah, it was, it was a great event. And, um, as you might imagine, there were a lot of like, uh, you know, assessment, you know, kind of vendors, uh, there. And I was talking to some guys who had, who had younger kids, like, you know, one of them had kids like three and six and the other one's like, you know, not even in kindergarten yet, but, you know, so my daughter's older, she's already applying to college. I can't stop that train. But if my kids were younger, I'd be like, I, I'm not even sure what direction to sort of,

[00:26:48] you know, point them or what, not obviously early to start thinking about careers, but, but just like, how do we sort of, um, encourage, you know, specific interests, make sure you, um, obviously you think about creativity and curiosity as, at a young age. And so how do we sort of lean, lean into that and let them see what, what path that might take them, but it certainly won't be this sort of keeping up with the Joneses kind of, you know, oh,

[00:27:18] you know, my daughter's a good student. She's got to go to a good college to get a good job or whatever. Like that whole concept is, is already fracturing. Right. And so, um, so yeah, I mean, that, that decline, do we, do we try to reverse, you know, um, that trend that you described as far as like trades and, um, you know, how to, like, what do, what are the things that we encourage such that

[00:27:46] there are opportunities when they get to, you know, that, that stage where they have to make that type of decision and start working. Um, and so, you know, college, uh, it, it's even more expensive than I thought. So, you know, is the return on the investment there? I know a lot of parents are struggling to figure that out. Um, and so, um, you're right back to the, you know, recruiting

[00:28:15] thing. Like it is, it makes sense now why so many companies, you know, do outsource, you know, uh, recruiting to, you know, RPO firms and other, um, independent search firms and things like that. And I am a huge fan of outsourcing recruitment. And that's from someone who built a very large, you know, so full disclosure, one of the largest RPO engagements in history in 2006. I mean,

[00:28:45] so, you know, I'm a big fan of that. There's a strong story there. I built that at Honeywell and then seven or eight years later, I'm actually out on my own and I get hired by an RPO, and I've done work with six or seven of them over the years. And they said, somebody put in the service level agreements at this Honeywell account and they're so hard. And so they send them to me, I'm like,

[00:29:14] yeah, those are my service level agreements. And I bet you're not making as much money. Right. Um, but I'm a big fan of, of outsourcing recruitment. I think it reduces risk for employers because they don't have headcount variation. The criticality of those jobs as important as talent,

you know, talent acquisition is important when you need to acquire talent. But if you were fully staffed? Take a client I've worked with over time, what a cool customer, a small business in Pennsylvania, and they make something like 75% of the U.S. military's aviator and pilot helmets for the

[00:30:05] military. So all these like, you know, infrared, like the radar readouts are coming right up into the glasses and specialized audio. Um, and if they don't have electronic engineers and people who can execute assembly with high dexterity, they're dead in the water. I mean, it's, you know, HR is important

[00:30:31] and finance is important, but they could go ahead and probably outsource all of that. But if they don't have people who can work in these plants and know how to do the wiring by hand, you're just, they're just not going to survive. And so how do you, so you talked about trades. Okay. Well, where's the schooling and the

[00:30:56] apprenticeship programs that create that? We just got rid of them in the United States. Europe, you know, was better at it, but also started to kind of lose that as knowledge work increased in the nineties. And I'm, you know, my daughter, you know, we watch all kinds of movies. And one of the ones I, I got,

[00:31:22] uh, her early was actually one called The Guardian with Kevin Costner. And it's about, uh, Coast Guard rescue swimmers. And I said, kiddo, if you want to do this, you will always have a job. You will always have a job as a Coast Guard rescue swimmer because people will always be on boats,

[00:31:47] you know? And, you know, there will be military applications. There will be trade. We have tremendous infrastructure in the United States that needs to be built and rebuilt that requires standing jobs, not sitting jobs. Forgive me for saying standing, but you know, you're going to be using a level of dexterity with your hands that has nothing to do with a keyboard. Yeah. That's, that's where we're at.

[00:32:15] Yeah. I mean, if I think about AI's impact by itself, um, I definitely agree, but I also do think about the trajectory of, like, intersecting trends, right? Like AI and robotics, you know, that might start now. I mean, we see that now, you know, maybe in parts of, of Asia, um, and, uh, actually even Amazon's making huge investments, right? Robotics. Sure.

[00:32:41] Having AI embedded in, in robotics, but I still think, uh, to your point, it'll be a long time before, uh, you know, you stick some of those AI-powered robots on a vessel, uh, and they go and save, um, you know, people's lives in the water and stuff like that. Uh, I think we're at least a generation away from that. Well, and, and there's some, there's some actual structural, there's some work structure that has

[00:33:07] to change in order to allow automation, uh, and algorithms and robotics to work. So, um, I don't think people may know, as an example, how Amazon, uh, or more sophisticated consumer packaged goods distribution centers work. They don't store anything. They don't have shelves. They might have levels, you know, the upper produce level, they have a soft goods level,

a hard goods level. But the assumption is when something comes in, it's going to leave. So it's not supposed to go onto a shelf. It's supposed to be scattered in such a way that, as an order comes in, whoever the, uh, the order picker is, they find out who's closest and they put it inside a cart and put it on a conveyor. So it goes out the door. So they never really

[00:34:00] reorganize it or anything. Okay. When you look at human resources and you think about artificial intelligence, um, IQ, we're so linear as, as HR, we tend to think about things in a way that, well,

[00:34:21] we have to wait for something to happen so we can react to it. And where AI and HR really makes an impact is if recruitment is moving towards the ability to fill a job in 24 hours, 48 hours, seven days, or 30 days. Let's just say that's the case that like 97% of jobs, some are going to go away,

[00:34:49] but we're using AI to get the majority of people in the door, or at least identified, within a couple of weeks. Have we just completely changed how we've done succession planning and how we do pay? Knowing that, when Andrew walks out, you know, if Andrew's a supervisor

[00:35:12] and so-and-so mouths off in a meeting, you know, how quick am I to make the judgment of I'm just going to terminate so-and-so because they had a poor performance because I know I can get a replacement in a declining job market with high supply relatively quickly and know it's only a two-week gap. Okay. Until you and I just said that out loud, I haven't heard that dialogue that says, HR, you need

[00:35:41] to start helping people understand what the workforce plan can be. And if there's talent intelligence that says we cannot replace the electronic engineers, you cannot take that little "we'll just replace them" stance because AI is in our recruiting process. There aren't enough Coast Guard rescue swimmers. Right. Right. There just aren't.

[00:36:10] Yeah. No, you're bringing up an interesting point. I hadn't really thought of that. I think, you know, I've never had an official HR job. My goals were always more transformation- or innovation-focused. But certainly, you know, in the early days of crowdsourcing and collective intelligence, before there was a gig economy, I was thinking about these things, and built some internal projects and stuff at IBM where you said,

[00:36:38] like, let's get, you know, we've got a great idea. Let's vet it. Let's build a team around it. Let's crowdsource and get some seed funding and what have you. So very entrepreneurial efforts in that regard. And so ever since then, and especially at NBC, after my IBM tenure, I saw a lot of these scenarios

[00:37:04] where people weren't necessarily thinking two steps ahead to say, you know, what do we have upcoming? How can we build a talent strategy that's going to give us more success with our business strategy and our technology strategy? And unfortunately, because of that,

[00:37:35] they were behind in moving to cloud, because it wasn't just, oh, we need a new home, or who's going to babysit these servers. It was, well, we don't think our traditional infrastructure guys have the capacity to do that. Well, you should have thought of that a while ago, first of all. And now, like you said, if you're just

[00:38:02] thinking about talent acquisition as sort of an offshoot, my concern is you should have been thinking about this for your own survival. You should have empowered people to be constantly thinking about this. And the fact that you're only thinking about internal mobility opportunities after you've realized you're behind, and

[00:38:30] now you're going out and you're using third parties and you're trying to identify this talent that everyone's fighting for, top technical talent, and you haven't turned your chair around to ask where else amongst this enormous Comcast family of companies that talent might already exist. Which is

[00:38:56] why I get so frustrated when I hang out with folks like you and Toby Culshaw and the Talent Intelligence Collective, and all these people analytics gurus. These are some of the smartest people, who are bringing you data and bringing you insight. And yet these teams are scrambling for funding and they're just not being listened to. Like, I don't know how to make this any clearer for you. You need to be

[00:39:26] thinking about how we, as an organization, have the agility and the foresight to say: things are moving fast, things are changing fast. The skills dynamics, the labor market: it's really a complex optimization problem. And it just seems like

[00:39:52] until recently people weren't admitting it, or they were waiting until AI swoops in and saves the day. But AI doesn't have a magic wand. Yeah. I'm a little concerned about my LinkedIn feed, in that the algorithm creates the same echo over and over again, but it doesn't

[00:40:16] talk about how artificial intelligence, or better yet, how technology and infrastructure change because of new assets, changes how the workforce can work. Artificial intelligence is now part of the workforce. So if you think about workforce planning and you look at a concept known as

[00:40:41] workforce capacity or workforce modeling, the idea being: we have this many tasks to do, and there's a certain level of competence they have to get done at. There are reasons why we do this. And then, is this job something that inherently needs to be done by a human? And is that human,

[00:41:05] should they inherently be an employee, where we provide benefits and planning and succession planning and learning and development? And in some cases, like trades, the threat is robotics, right? Or the threat is outsourcing, because someone says we're going to get out of owning our own

[00:41:30] fleet. We've made the decision that our fleet of trucks is old, so rather than replacing that capital asset, we're going to naturally retire it and move to having third-party logistics providers do our transit. Okay, I didn't just come up with that today.
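The keep/automate/outsource calculus Andrew walks through (task volume, competence level, inherently human work) can be sketched as a toy capacity model. Everything here is illustrative: the hours-per-FTE figure, position names, and decision rules are hypothetical stand-ins, not Aspen Analytics' actual methodology.

```python
# Toy workforce-capacity sketch: convert annual task-hours into required
# FTEs, then apply the kind of keep/automate/outsource screen Andrew
# describes. All numbers and rules are hypothetical illustrations.

ANNUAL_HOURS_PER_FTE = 1800  # rough productive hours per person per year

positions = [
    # (position, annual task-hours, automatable?, inherently human & in-house?)
    ("payroll transactions", 9000,  True,  False),
    ("fleet driving",        36000, False, False),  # candidate for 3PL outsourcing
    ("rescue swimmer",       5400,  False, True),   # inherently human work
]

def plan_for(automatable, inherently_human):
    """First-pass sourcing decision for a position."""
    if automatable:
        return "automate (AI/robotics)"
    if not inherently_human:
        return "consider outsourcing"
    return "retain as employees (benefits, succession, L&D)"

for name, hours, automatable, inherently_human in positions:
    ftes = hours / ANNUAL_HOURS_PER_FTE
    print(f"{name}: {ftes:.1f} FTEs -> {plan_for(automatable, inherently_human)}")
```

A screen like this is only the first pass over a real position inventory; the four-year view Andrew mentions would re-run it against projected task volumes as automation and outsourcing take hold.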

[00:41:54] That's a 1960s, 1970s movement that created the J.B. Hunts, and then created FedEx, and all of these, to where FedEx is now a franchised-operator business. You know, you can have your own FedEx business. And I don't think that people understand that AI is now in the position of

[00:42:19] removing the transactions from any number of positions in a workforce. You know, Bob, I get really frustrated with the concept of "we go out and get the best talent" or "we have the best talent in our organization." I appreciate it's personal. I appreciate it's humanity.

[00:42:44] But as someone who's in workforce planning and strategy, I'm more concerned about having the best positions. I know that there's more than one person in the world who could do this job, whatever it is. Right. And I actually know that because they won't stay in that job forever

[00:43:10] because we don't pay as much, or we allow for succession, or people retire. We know that. So I look at: what position do you have right now? Can it be enabled by AI, robotics, training, outsourcing, or all four? And what does that look like

[00:43:35] four years from now? I had a conversation with an organization; they were saying, we want to go ahead and hire 3,500 electrical engineers, because our private equity firm wants to grow. And I said, impossible. That concept of you building a talent acquisition team to do that, when there isn't a

[00:44:04] digital footprint for the majority of that personnel, I'm advising you to go through a merger-and-acquisition approach rather than growing, similar to insurance brokerage in the healthcare industry: go buy smaller insurance brokers and add them to your portfolio of business rather than trying to train people. And okay, they're moving in that direction, right? It's,

[00:44:33] well, we can't build, we can't find them, we can't train them, and we may not be able to retain them. So we'll go through an M&A exercise. Well, then do that. That goes back to what you talked about, about thinking way ahead. You should have known better, right? If I'm Aetna right

[00:44:56] now, or CVS, I am thinking really hard about whether I'm going to move to 90% of drugs being delivered via mail to our consumers over the next so many years, because pharmacy techs are hard

[00:45:22] to find. Pharmacists are hard to find. The investment in pharmacy is changing, and manufacturing has increased. And so the question is, once something is automated, do you really need that personnel? And, you know, Amazon just got into the pharmacy business. And now the question is, and there's plenty,

[00:45:51] if you go and look, of really great videos about this. But now you can have robotics-based pharmacies, even in large-scale hospital networks, where no one pulls the actual drug. The order is just plugged in, and robots automatically go and get it. And I don't

[00:46:16] think that we're looking at AI literacy the right way and saying, okay, how far can we take whatever we have right now, not for this fiscal year, but the next fiscal year? And then, you know, good strategic workforce planning goes out at least four years, right? You know,

[00:46:40] one year out's too late. You already did your annualized operating budget. But right now, if I'm four years out and I'm in HR, I probably would say that you should be looking at an 80 to 85% reduction of headcount. And then ask yourself the question, do we want to do that first? And that's a result of

[00:47:08] the workforce itself. If your plan is to retain people, if your plan is to outsource non-critical positions, and you're automating personnel actions by using (insert human capital management system here), do you need to have as many human capital managers running around?

[00:47:37] Well, probably not. Now, you could be bold and show them all the door now, but I don't think that's the answer. People have asked me, Andrew, how much should an organization with knowledge workers reduce? And I said, go back to 2018, pre-pandemic, and look at your attrition rate. That's the attrition rate that you should focus on now. You should be getting to an operating model where

[00:48:05] if you had people who were leaving you naturally, at a voluntary rate of 12%, that's a very good target. It says, well, we were able to work with that before. Okay, if people leave at that same rate, we just won't replace them.
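Andrew's "hold at your pre-pandemic voluntary attrition rate and don't backfill" rule is simple compound decay. A quick sketch, where the 12% rate comes from the conversation and the 1,000-person starting headcount is purely hypothetical:

```python
# Headcount glide path if voluntary leavers are simply not replaced:
# no layoffs, just unreplaced attrition compounding year over year.

def headcount_after(start, attrition_rate, years):
    """Headcount remaining after `years` of unreplaced voluntary attrition."""
    return start * (1 - attrition_rate) ** years

start = 1000  # hypothetical starting headcount
for year in range(1, 5):
    print(f"year {year}: ~{headcount_after(start, 0.12, year):.0f} people")
```

Four years at 12% leaves roughly 600 of the original 1,000, about a 40% reduction, which is also why the much larger four-year headcount reductions Andrew floats can't come from attrition alone; they imply deliberate automation and outsourcing decisions on top.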

[00:48:31] Have you ever wondered what really makes a generation tick? Who gets to pick the name, and why the slang keeps changing? Don't worry, I can help. My name's Dr. Meghan Grace, and on #GenZ I share the voices and experiences of Generation Z: how they're different from other generations, what moves them, and why they do what they do. In each episode, we go beyond the buzzwords and the stereotypes to dive into real conversations and the insights that matter to making intergenerational

[00:48:58] collaboration a reality. You can catch #GenZ on the WRKdefined podcast network and wherever you listen to podcasts. You can't do that for all the jobs, by the way. Can't do that. Yeah, you can't do it for the Coast Guard rescue swimmers. You can't do it for the people who plug in the radar systems on the

[00:49:18] assembly line. But okay, if someone from human capital leaves, if someone from finance leaves, does the AI that we adopted pick up the slack? Does it mean, well, we can hire someone back, but maybe, with

[00:49:40] the right AI literacy, we don't have to pay them $140,000. We can pay them $75,000 and sign them like a baseball player for four years and say, we will guarantee you, over the next four years,

[00:50:03] $435,000. But you have to stay. That concept with AI and workforce right now is not even being discussed. Except for Zuckerberg trying to get people for $100 million a throw from open AI, where's the Juan Soto deals

[00:50:34] for AI personnel right now, saying, pretty please don't leave, we will pay you for four years to sit wherever you are, and we'll make sure you don't leave? It's not necessarily name-your-price, but sign up with a clawback. I've talked to my workforce planning colleagues; it's not even on the radar screen. And it should be, with AI. Yeah, I think people are still getting their arms around

[00:51:02] and their heads around the pace at which things are evolving. And people say, okay, well, AI can only do this and that. Well, they're also saying, and I agree, that it's only going to get better, smarter, more capable. Of course, the job for you and I is to make sure people do that in an ethical

[00:51:30] and responsible way. And we're insulated, right? Because there are rules that say humans need to intervene and watch this. So it's kind of like, if you want to have your job insulated, be in health care, or be in internal affairs, right? Right. Yeah. I mean, or get really good at supervising the automation and the agents and things like that.

[00:51:57] So that's one area where I think I agree. But at the same time, for most personnel managers, if you're managing knowledge-worker kinds of people, you're going to have a team of both human and digital labor, right? And so some of those skills are going to be

[00:52:23] necessary on a lot of people's plates. These are some of the things that I think people need to think about when it comes to AI literacy and readiness. It's certainly not just learning how to write better prompts. And it's certainly not just grabbing any agent. You mentioned your LinkedIn feed before; mine is,

[00:52:49] well, I'm not happy with it. It is a lot of echo-chamber stuff, a lot of shiny objects, and a lot of, hey, look what I did, I curated a list of stuff that's probably useful, or type "AI" and the number one and I'll send you my whatever. Right. Yeah. I mean, I tried to give away something of value,

[00:53:16] which is my AI assessment, and, like, crickets. So I guess I don't have the following that some of these other guys have. But when you've been giving away some of these cheat sheets, top-50 this, top-50 that, or, hey, I just built these 50 agents, go nuts. But you gotta tag a friend, like the post, be connected. Like, how many hoops do I have to jump through? And oh, by the

[00:53:41] way, once I get this stuff, I don't know what you were doing when you built it. I don't know what your background is. So how do I know that I can trust these things to incorporate into my operation, or even use myself? I need to know this isn't just another little baby black box. And you created 50 of them. Right. And so now, you know,

[00:54:08] my own reputation is on the line if I start deploying these just willy-nilly. And so there's a lot of noise that I think we're all struggling to get through, not to mention the nonsense, going back to talent acquisition, with the AI-versus-AI matching. My God, that doesn't end well either.

[00:54:35] Right. But candidates are desperate. They feel like they want to jump up and down if they even get an interview, because the odds are so low. It's really kind of a mess. Well, and that's where position management is different than job announcements. So again, this idea that you need to have the best positions. If your announcement

[00:54:57] is, we're looking for these basics, we're cool, we're hip, we'll pay you this, then, okay, the candidate's AI methodology, whether it's by hand or by service, goes ahead and automatically changes the resume. It uses computer vision and similarity matching

[00:55:20] to fill out all the applications. By the way, huge cybersecurity risk; I feel like I'm alone in talking about that. It's just ridiculous, but okay. Position management is where I think HR can lead. If you're in human capital, if you're in HR and you want to

[00:55:45] elevate your AIQ, right? Did you see how I did that? It's almost like a 1980s movie where you say the name of the movie. But you should not only know how to use HR agents. If you're supporting manufacturing, you should know how to use manufacturing AI agents. You should know what

[00:56:11] AI agents your quality teams are using, and understand their mechanisms and why that's part of their workforce, why they've decided to automate it or outsource it for one reason or another. In our little echo chambers, we've got all kinds of people who are like, oh, I'm a recruiter and here's some cool recruiting prompts. I'm like, no,

[00:56:39] no, no, no, no. When I was in executive search, my specialization was supply chain and chemical engineering, because I knew the space. So I wanted to understand: tell me about your intermodal transportation portfolio and why you made it the way that you did, and help me understand your operating expenses. Okay, those are pretty complex questions, and I would listen

[00:57:04] and I would understand it. HR right now should be able to do that for their customers with AI. Okay, you created a customer service AI. Tell me what that replaced. Why did you do it? How are you doing it? What's your advancement strategy for it? Do we have adoption across our customer service teams,

[00:57:28] and what's inhibiting your adoption level? If you're not asking those questions and you're an HR professional, that's a problem. If you're in recruiting, you should be asking people how they're driving adoption as leaders in AI and what their methodologies are, and you should know enough about customer service recruiting or sales recruiting or whatever to know whether or not those things make sense.

[00:57:52] That's what we should be doing right now. We should be elevating our AI, or our AIQ, trying to say it twice that way, not letting Workday and everybody else come up with your agents. They're going to do it anyway. And not only are they going to do it, they're going to trump your ability to have your cool little bespoke thing get installed.

[00:58:22] Because you may not be there, but I guarantee you the $25 million HR infrastructure investment you made will outlast you at the company. So it's just better to ask: what are my operators doing with AI? That's what I need to know.

[00:58:46] You raise an interesting point: by the time you train your folks and try to build your own agents, have your vendors already added that capability? I mean, that's something where you have to ask, do you trust your vendors to build those things, and to build them right, to make sure that it's a priority on their

[00:59:14] roadmap. I know the more legacy, established platforms are not sitting around. Workday's got a bunch of AI capability. Oracle has a whole marketplace of AI agents. So these guys aren't sitting around while the startups are getting a lot of VC money to build new stuff and plug it in.

[00:59:40] And I know implementation in theory should be streamlined with some of these things, but the proof is in the pudding and in how that's orchestrated. But as we think about AI literacy, I want people to think about what I'd describe as playing both offense and defense, right? And those things can coexist. This concept of responsible innovation is a thing.

[01:00:10] Responsibility, ethics, governance: these things don't have to slow us down. I know it's been a bumpy ride, and it will continue to be for a little bit, but we went through this with privacy regulations and data. We went through this with cybersecurity. And so I think this is just another structural layer that we need to get through.

[01:00:36] But the more we can hold people accountable and help them assess what potential adverse impact and risk may exist across the talent lifecycle (you and I have talked about that many times), the better off we're going to be. And then you can go faster once you know those things are in place. But if you don't have time to do it right, when are you going to have time to do it again?

[01:01:03] There was a time when it was okay to produce a vehicle without a seatbelt. And there was a time when we had locking brakes rather than anti-lock brakes. I've used that analogy more than once. And people say, well, all cars have got airbags now.

[01:01:26] I'm like, okay, but how many people actually know that the skeletal structure of a 12-year-old hasn't developed enough for them to sit in the front seat and sustain the impact of an airbag? They're like, Andrew, what are you talking about? I'm like, okay, you haven't evolved yet in terms of your thinking on safety and risk. So realize that every time we invent something, we have to look at the safety and security of the invention

[01:01:54] and say, what do we need to do that's different? And can you imagine having electric cars that go... okay, so I had a mechanic come over to do some work on one of the cars, and we were talking about another one of the cars. I've got this fancy, fast car. And he says to me, Andrew, you should take it out on the track.

[01:02:22] He says, I'll take the governor off and you take it out on the track. And I'm like, this is a bad idea, but I'll indulge the conversation. And I said, on a straightaway, how fast do you think I could go? And he says, 230 miles an hour. And I said, okay. So in my head, I'm now thinking about all the things that you need in order to go 230 miles an hour from a safety perspective.

[01:02:51] That car hasn't been set up with a roll cage. That car hasn't been set up with the braking mechanisms you would need to go 230, and all these other things. Just because it can doesn't mean you should. And we had that whole Malcolm, Jurassic Park moment: hey, let's go as fast as possible.

[01:03:17] And next thing you know... the thing that Malcolm was right about in Jurassic Park is that the problem never goes away. It's like the eighth movie of Jurassic Park. How many times do we have to realize that making the dinosaur keeps causing a problem? Over and over again: let's keep on building dinosaurs. Right. That's what's going to happen with AI.

[01:03:45] We're going to continue to build it. We're going to continue to struggle with the conflict. And we're going to continue to have to learn more about it because every time we come up with an invention, we think we've created a safety mechanism for something that's better and faster. That's ridiculous. And you can't make the safety mechanisms, all of them, until you know what you have. Yeah.

[01:04:10] There was definitely no governance or ethics committee helping him build the original park. And he obviously didn't establish one. Oh, totally. I love that. I know other people have used that clip when talking about AI, but then, of course, they produce AI. So I'm like, how'd you get away with using the quote when you yourself have no safety net at all? Yeah. I mean, it'll be interesting.

[01:04:39] It'll be an interesting next couple of years. I mean, the EU AI Act will go into full effect next year, right? Well, so there is no stop, right? It's: get compliant. If you're GPAI, you've got to get compliant fast. You've got a year if you're an employer in the space that I work in.

[01:05:06] You're going to have to be compliant by mid-2026, and then everything AI has got to be compliant by summer of 2027, right?

[01:05:18] And in the U.S. Senate, there was this discussion, on the recent spending bill that was passed, about an AI moratorium, saying, well, let's go ahead and have as much deregulation as possible. And, okay, that was ridiculous. It just wasn't going to happen. And there are 10th Amendment problems with that.

[01:05:45] But even if the U.S. had one way of doing it, I don't think people even know that inside the EU, some of the countries have their own additional artificial intelligence compliance rules. I don't think people know that.

[01:06:04] And I don't think people know that Australia has got it, and South Korea has got it, and the U.K. is talking, and Mexico has a plan, and that the Brazil AI Act is going to tell the EU AI Act to hold its beer. It is just as strict.

[01:06:30] But the difference will be, by the time that's enacted, AI will have advanced so much. So the whole concept that safety regulation is going to go away is ridiculous. And the idea that there's going to be one framework that does it? I mean, how many frameworks have to come out? There's a new framework coming out every day.

[01:06:54] Do you know how hard it is to have two ISO standards for artificial intelligence come out? They came out almost back to back, within a year. I worked on the ANSI and the ISO standards for human capital metrics. I think my child learned to walk, ride a bike, and be by herself during that time. It took something like five years.

[01:07:24] So you'd better believe we're going to have a patchwork of frameworks. We're going to have a patchwork of needs for security and safety. And that's where the AI literacy can be: understanding secure, safe, trustworthy, all those things. Yeah, I think organizations definitely need to get their arms around that.

[01:07:48] But as individuals, I just think the average individual needs to think about some level of that. Like, how does what you build or use impact others? Because the average person isn't necessarily making judgments about people's lives or livelihoods.

[01:08:10] But you can't just assume that everything under the covers is on the up and up and has been built with fairness in mind. And so I think as you look across the software development lifecycle, if we're even calling AI "software" at all.

[01:08:28] But as you think about that development arc, and you think about the talent lifecycle, and you think about your career trajectory, your school, your education: are you actually learning the things that you need to learn? I mentioned earlier, I just came from this coaching and technology summit.

[01:08:48] I asked a bunch of the vendors about the content of some of that coaching and that sort of awareness, whether responsible AI was part of that curriculum. And I got a lot of just kind of blank looks.

[01:09:09] So even startups that are using AI in the development of their solutions haven't necessarily also thought about the content that's actually being discussed between a coach and a coachee, or a mentor and mentee, and things like that.

[01:09:35] And so I think even some of these new technology companies focused on AI need to think more deeply about, you know, preparedness because a lot of this is centered on professional and career development.

[01:09:52] So if the future of work is around human-plus-AI collaboration and augmentation, I mean, it seems like a big mess, honestly. It is; so much has changed. If I was a SaaS developer, I would be so upset right now, in that I could have spent the last three years developing something that was unique.

[01:10:20] And now you can go ahead and point Anthropic and Perplexity at a site and say, I want you to build this and make it look like this other site, and I'll do the rest. And it's 80% there within however many minutes, hours, days.

[01:10:45] And then it's, okay, now all I need to do is go get a minimum viable product. The amount of time it takes to get a minimum viable product in anything knowledge-based and SaaS-based has gone from years to days. And if I was a private equity organization, I probably would be a little miffed, because they've spent so many millions on doing that.

[01:11:13] I was reading; somebody posted about this exact scenario yesterday in the context of trials, software trials, right? Not even a trial like, oh, a two-week trial, a month trial. I mean, they're licensing the technology, but basically they're licensing a couple of maybe similar technologies. And they'll pay for the first year; they'll negotiate down as best they can on the first-year license.

[01:11:42] But you're basically just going to say, hey guys, check this out. We're licensed for the next 12 months; what would it take for us to just build our own version of this? So as a VC, you've got to say, these guys seem pretty bright, but what's the moat? What's the differentiator that's going to be lasting? It might go back to the talent. It might go back to the data.

[01:12:12] Who's got a really trusted, powerful, reusable data set? But if you think it's just a couple of agents, or something that's basically a wrapper on a public frontier model, like ChatGPT or Gemini or whatever, your competitive advantage from that is quite short-lived.

[01:12:41] I've had people say to me, Andrew, I'm struggling with artificial intelligence and how it's going to relate to me. And I said, I want you to look at the Kardashians, believe it or not. And they're like, okay, you just lost me.

[01:13:03] I said, okay, you need to realize that 10 years ago, if I told you that someone could become a social media influencer and put themselves in a position to have a consumer packaged good, with a very low cost of acquisition, and then sell it off into a multi-billion-dollar business, you would have told me, right? Right. Right.

[01:13:30] So if you look at where private equity is right now, typically looking at things like user growth, annual run rate, your go-to-market strategies, your product-market fit, a lot of that is on its way to being thrown out or significantly altered.

[01:13:49] Because your go-to-market strategy may not cost you very much, or your product-market-fit investment may not be very much, such that the private equity organization will invest in you very differently. And then they'll also ask, well, how do our users get cannibalized?

[01:14:10] I mean, are we really in a position where we're going to have this hockey-stick moment, now that software development is so cheap and software imitation is so easy? Look, I'm not working for KKR, but if I was, I'd be investing in pipefitting right now, because we're going through a lot of infrastructure change.

[01:14:40] Yeah, for sure. All right, Andrew, I'm going to let you go, because we have been on for quite a while and we didn't even get into some of the topics I wanted to cover. So we'll have to get into more of the audits and the exposure and assessing risks throughout the talent lifecycle, because I know this is another area that's top of mind for both of us.

[01:15:08] So, yeah, my work isn't just a bunch of movie analogies. We can get into that next time. And, you know, Bob, thanks for all the work that you're doing to bring awareness around this, being a voice, having the pod, and candidly being out there on the edge a little bit. Being one of the first ones through the wall is always the bloodiest. So I appreciate you doing what you're doing.

[01:15:37] Yeah, I appreciate you mentioning that, Andrew. It's been kind of a crazy ride, but I am still very optimistic, or I should say cautiously optimistic, that we'll get through this. And the more people we can wake up to what's really happening, and the more impact we can have on our current and future workforce, the better I sleep. Yeah, I agree. Awesome. Well, Andrew, it's been great. To be continued. To be continued.

[01:16:06] And thanks, everyone, for listening. We will see you next time. Bye.