Ep 31: Revolutionizing Employee Experiences Using Human-Centric AI and a Foundation of Trust with Beth White
Elevate Your AIQ · October 29, 2024
31
00:56:47


In this episode of Elevate Your AIQ, host Bob Pulver speaks with Beth White, founder and CEO of MeBeBot, about the evolution of employee experience and the role of technology in improving workplace communication. Beth shares her journey from HR to tech, emphasizing the importance of understanding employee needs and creating a seamless experience through platforms like MeBeBot. The conversation covers feedback loops, trust in AI solutions, data governance, the foundational capabilities needed for successful AI projects, user permissions, and the evolving landscape of AI regulations. Beth and Bob stress the need for trust and responsibility in AI usage and for elevating AI literacy through curiosity and experimentation, highlighting the complexities of navigating AI in the workplace and the potential for significant change in how organizations operate.

Keywords

Employee Experience, AI Solutions, MeBeBot, HR Technology, Feedback Loops, Data Governance, Trust in AI, Employee Communication, Digital Transformation, Workplace Innovation, Responsible AI, User Permissions, AI Regulations, Ethical AI, AI Literacy, AI Tools, Innovation

Takeaways

  • Beth White transitioned from HR to tech due to burnout.
  • Employee experience is crucial for customer satisfaction.
  • Feedback loops are essential for trust in AI solutions.
  • AI must be accurate to build user trust.
  • Data governance is fundamental for AI integration.
  • Foundational capabilities around data maturity are essential for AI success.
  • User permissions and data access are critical in AI interactions.
  • The future of work may shift towards employee-controlled data.
  • AI regulations are emerging from societal needs and pressures.
  • Balancing innovation with responsibility is a key challenge in AI legislation.
  • Trust in AI is a two-way street between employers and employees.
  • Everyone has a role in ensuring responsible AI usage.
  • Navigating AI complexities requires structured support and guidance.
  • Curiosity and experimentation are vital for improving AI literacy.
  • Practical use cases can help individuals become more comfortable with AI tools.

Sound Bites

  • "I was burnt out."
  • "How do you remove the barriers?"
  • "It's about the employee experience."
  • "Foundational capabilities are key for AI success."
  • "User permissions are crucial in AI interactions."
  • "The future of work could be entirely different."

Chapters

00:00 Introduction to Employee Experience and Technology

03:01 The Evolution of Employee Experience

06:10 MeBeBot: Bridging Gaps in Employee Communication

08:57 Feedback Loops and Trust in AI Solutions

12:08 Navigating AI and Employee Needs

15:03 The Importance of Accuracy in AI Responses

18:07 Data Governance and AI Integration

21:12 Future of AI in Employee Experience

29:59 Foundational Capabilities for AI Success

36:04 Navigating AI Regulations and Legislation

41:59 Trust and Responsibility in AI Usage

49:58 Elevating AI Literacy and Curiosity


Beth White: https://www.linkedin.com/in/whitebeth

MeBeBot: http://www.mebebot.com



For advisory work and marketing inquiries:

Bob Pulver: https://linkedin.com/in/bobpulver

Elevate Your AIQ: https://elevateyouraiq.com

Powered by the WRKdefined Podcast Network. 

[00:00:00] Welcome to Elevate Your AIQ, the podcast focused on the AI-powered yet human-centric future of work. Are you and your organization prepared? If not, let's get there together. The show is open to sponsorships from forward-thinking brands who are fellow advocates for responsible AI literacy and AI skills development to help ensure no individuals or organizations are left behind. I also facilitate expert panels, interviews, and offer advisory services to help shape your responsible AI journey. Go to elevateyouraiq.com to find out more.

[00:00:28] Hi everyone, welcome to the Elevate Your AIQ podcast. Today I have the pleasure of chatting with Beth White, the founder and CEO of MeBeBot. Beth is considered an AI pioneer, and her journey from HR executive to tech founder has allowed her to parlay that first-hand experience into building a well-known AI-powered employee experience platform that is addressing real-world challenges. Beth has some amazing insights into how employee experience is changing and how technology can make huge improvements in workplace communication and engagement.

[00:01:09] We'll be discussing the latest trends in employee experience, AI in the workplace, and how companies can create a culture of innovation and collaboration. Beth is a responsible AI advocate like me. So of course, we'll get into the weeds a bit on some of that, including the importance of trust in each other and in our AI solutions, data governance, and the patchwork of AI legislation cropping up around the world. Always an insightful discussion with Beth, so I'm sure you'll get a lot of value from it as well. Thanks again for listening.

[00:01:36] Hi, everyone. Welcome to another episode of Elevate Your AIQ. I'm your host, Bob Pulver. With me today, I have the pleasure of speaking with Beth White, who is the founder and CEO of MeBeBot. How are you doing today, Beth?

[00:01:48] Hey, Bob. Good to be here. I'm doing fine. Thanks.

[00:01:52] Great to have you, and thanks so much for spending some time with me.

[00:01:56] Would love to continue conversations that we've had in the past. So this is just kind of us bringing some of the topics we've talked about to light to others.

[00:02:04] Exactly. Exactly. So just as we kick things off, I thought you could just give a brief introduction to yourself and your background and why you started MeBeBot.

[00:02:14] Yeah, absolutely. So I know I do get asked that origin story question often, and it's always interesting to think, well, how do I portray it to this audience?

[00:02:24] But the people that I know you're reaching out to and talking are probably most interested in the fact that, you know, I started my career in HR.

[00:02:32] You know, I worked, you know, in every facet of HR at a lot of different size companies and really got to a point where, I admit it today, I was burnt out.

[00:02:41] You know, I was burnt out. I felt like the profession had taken me through learnings of a lot of great businesses and industries, and I just wanted to do a little something else.

[00:02:55] And really why that was is I felt a little frustrated with the systems and the tools that I was able to access to do my job to be efficient, and I felt like it was very administratively heavy.

[00:03:06] And the company that I was working with gave me an opportunity to get involved in enterprise software sales, and we were in the retail industry.

[00:03:15] Selling technology into the retail industry, they have such a great concept of consumer experience and what that means.

[00:03:26] And in selling things to people like you and me, they want to know what do you want? When do you want it? How do you buy?

[00:03:33] You know, what are the things you've bought before? And everything they can do to get to know you to sell you more things.

[00:03:40] But really, it was when I had the light bulb epiphany of what I believe could be the future of how employees are treated and how work is done within the workplace,

[00:03:51] is if we can think more about the consumer experience and how brands and retailers work to get to know you better and flip that model inside organizations into what is now being coined as employee experience.

[00:04:06] And I know you've been in the profession.

[00:04:09] And I know you've been in the profession a long time, too, and you probably agree that, you know, is it the CX or the EX that comes first?

[00:04:15] Right.

[00:04:16] And ideally, it would be the EX produces a great CX, because if employees are engaged, directed, et cetera, then they're going to be their best selves to your customers.

[00:04:28] And they're going to have the mindset to help the customers and support the path of the business, which ultimately is how do we grow our customer base revenues and keep happy customers?

[00:04:39] Well, we all want to do that with inside the workplace, too.

[00:04:43] And so looking at the ways technology has evolved in the last, gosh, 10, 15 years since I began the journey in retail tech, we were in the area of mobile technologies.

[00:04:54] And mobile was a really exciting time where it brought a whole new interface to tie together different disparate sets of information that retailers and brands had in other systems.

[00:05:07] They grew up in the world of e-commerce separate, totally separate from the brick and mortar retail.

[00:05:14] And pulling it all together required mobile.

[00:05:16] And when you think about what would drive a more unique experience to employees inside organizations, it's how do you remove the barriers that are there today between a lot of the different systems with inside organizations that help us do our jobs, that we use as part of our roles.

[00:05:33] And so really that became the essence of MeBeBot is how do we drive a great, unique employee experience through a single, we'll call it user interface or digital front door, as we've called it at times, to provide employees access to information they need when they need it, as well as to help with workflow and business transactions that happen inside companies.

[00:05:59] So we remove layers of the friction and noise that are out there.

[00:06:03] So that's just a bit about that history of how I went from HR into selling solutions to bringing these solutions with inside the workplace.

[00:06:15] I love it.

[00:06:16] I could dissect each piece of that and we'll certainly get into some of it, but it gives me a good sense of why you have such empathy for your users,

[00:06:27] how you're trying to balance the experiences of multiple sort of personas or archetypes as you build this workflow, whether you're automating things.

[00:06:37] I mean, it's not just about efficiency.

[00:06:39] The point is to make a better experience for the users.

[00:06:43] And I agree with you on the employee, starting with the employee experience.

[00:06:48] I mean, Richard Branson used to talk about that all the time, right?

[00:06:52] If you take care of your employees, they will in turn take care of your shareholders and your customers, or in this case with talent, maybe your candidates and improving the candidate experience as well.

[00:07:05] So I love it.

[00:07:06] I love the backstory.

[00:07:07] So in the case of MeBeBot, I don't want to spend too much time talking about the platform itself, but as you think about those digital experiences, I mean, you're cutting across all different domains, right?

[00:07:21] It's not just for HR professionals.

[00:07:23] That's correct.

[00:07:24] I mean, out of the gate, we really realized that in order to get adoption and usage by employees of a solution that they could have as a trusted, truly assistant for themselves, you have to make it in their flow of work.

[00:07:41] And you have to also not keep it in silos.

[00:07:45] And so the world of an employee, let's just start with when they're onboarding at a company.

[00:07:50] When an employee is onboarding, that's a really intense time in our lives.

[00:07:54] Starting a new job, you don't know who to talk to, what kinds of things to do as far as how do you, you know, get a, submit an expense report?

[00:08:04] What's the process to enroll in benefits?

[00:08:06] All the kinds of questions that, how do I get a new badge for the office if you lost it?

[00:08:11] You know, it's all the types of questions that people have when they first start a role at a company, as well as when they evolve through the various, what we've coined for many years, the employee life cycle.

[00:08:23] There's different points of where employees have heightened needs to support them understanding, you know, how they enroll in, again, open enrollment for benefits or what the performance appraisal process is like, or how we're going through this compliance training.

[00:08:40] And it crosses across lots of different areas of the business.

[00:08:44] So today, what MeBeBot does is we're accessible to employees as an app in Microsoft Teams, Slack.

[00:08:52] We also have a web chat.

[00:08:54] And then for the frontline workers, we have SMS text messaging interface as well, so that employees can get answers to their employee needs 24-7 from wherever they are, right?

[00:09:07] And wherever their flow of work is.

[00:09:10] Layering on top of that, what we do is we also use it as a way to not only provide employees information and have a way to escalate to ticketing systems, etc., but we're pushing communications that are meaningful to employees in a personalized way.

[00:09:26] I remember the days of having to send out the group emails to, you know, hundreds of people to tell them about something because that was the active directory group for the specific location.

[00:09:39] Whether or not it was applicable to everybody, yes, still sent it out.

[00:09:42] Well, nowadays, people just start to ignore it because of information overload.

[00:09:48] And what MeBeBot's push messaging does is it allows you to be very targeted to give employees the information they need that's specific to them and allows them even opportunities to act upon it,

[00:10:00] even with the pulse surveying functionality, which collects feedback if that's also required as part of the communication effort.

[00:10:08] And then when we share the usage information and the dashboard of insights as to how employees are interacting with MeBeBot and what they gain,

[00:10:19] how they even appreciate the solution by rating answers as helpful or not,

[00:10:24] the business is able to learn more about their people, even if they don't physically see them.

[00:10:29] And we really did have a huge moment during the pandemic; MeBeBot had been in existence before that.

[00:10:37] And all of a sudden, everyone was working remote.

[00:10:40] Not everyone, but many people were.

[00:10:42] And there was this need to be able to share information to people very timely that was specific to them,

[00:10:49] a lot of it by locations, especially when it was returning to the office,

[00:10:53] whether you had to wear a mask, all the different protocols.

[00:10:56] We were very instrumental in that whole change management point of time.

[00:11:02] But if you think about it, there's all these other moments that matter to businesses where they need to have communications that are effective,

[00:11:10] as well as gathering feedback from employees on things and supporting not only their questions,

[00:11:18] but workflow automation as well.

[00:11:20] Well, that's a lot.

[00:11:22] It is a lot.

[00:11:23] It is a lot.

[00:11:24] It's a lot of capability, but it also has huge benefits for the organization as well as the individuals, right?

[00:11:30] Because you've got this constant feedback loop.

[00:11:33] I mean, in some ways, you're doing a little bit of social listening.

[00:11:38] And, I mean, surveys are great, but to be able to complement that with data that doesn't have to be sort of overtly and explicitly submitted,

[00:11:46] you've just got like this digital exhaust in a way that you can derive insights from.

[00:11:52] For sure.

[00:11:53] In fact, I'm a fan of Maslow's hierarchy of needs.

[00:11:57] All the years I was in HR was foundational.

[00:12:00] You have to help employees understand how their basic needs are being met, meaning how do they get paid?

[00:12:07] What are their benefits?

[00:12:08] And then you can kind of layer on top of it as they get to that point of self-actualization to us is where things become truly more self-service

[00:12:19] and where the market and our technology has been from day one, which is what everyone's coining now AI agents.

[00:12:27] You know, doing much more than just, you know, transactions, simple transactions.

[00:12:32] It can actually be something that is doing and performing work for you.

[00:12:37] When people are interacting with the system and they're getting those answers to their questions,

[00:12:43] do they get an acknowledgement, like their feedback based on the quality of those answers is going to be fed back into the system?

[00:12:52] And if so, is that sort of anonymized, like it's not necessarily attributed to you as an individual or how does that feedback loop work?

[00:13:02] Yeah, that's a great question.

[00:13:03] So from day one, MeBeBot, you know, is again a solution that can be delivered very easily.

[00:13:09] It's essentially a turnkey, you know, out-of-the-box type model because we have a curated knowledge base of hundreds of different questions and answers that companies can start with

[00:13:19] to address the needs that we know are commonly asked across businesses of five years of working with companies in this area.

[00:13:28] So when an employee interacts with us today, they can be anonymous from day one.

[00:13:34] If the employer chooses, which is what most do, is we don't know who they are.

[00:13:39] We authenticate them based on credentialing from the business, whether we choose it to be, you know, various ways, SSO or email address, et cetera, phone numbers.

[00:13:49] We have ways to credential knowing that that person's at that company.

[00:13:53] But when they're interacting and they're giving feedback, it is anonymous.

[00:13:57] It is anonymous.

[00:13:58] And we do that on purpose because we want the employee to have the ability to feel comfortable maybe asking questions that they may not want to go to HR for yet, right?

[00:14:08] Like, hey, I may be pregnant.

[00:14:10] What is the policy for maternity?

[00:14:12] And you're just kind of querying a little bit.

[00:14:15] You may not want to bring that forward, right?

[00:14:18] And there is a path to escalate to ticketing systems or inboxes.

[00:14:23] At that moment in time, they do become identified.

[00:14:26] But the system says, if you're submitting this to, you know, to your people team or your IT team, they have to know who you are so that they can respond and help you, right?

[00:14:39] Yeah.

[00:14:39] So at that point, they're given notification that they will be identified.

[00:14:43] So they don't have to choose to escalate if they don't want, right?

[00:14:47] Right.
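The flow Beth describes here, anonymous Q&A with identification required only at escalation, can be sketched roughly as follows. All class and function names are illustrative assumptions for this sketch, not MeBeBot's actual code or API:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Session:
    """An employee chat session: credentialed, but anonymous by default."""
    credential_ok: bool                      # verified via SSO, email, phone, etc.
    employee_id: Optional[str] = None        # stays None until the employee opts in
    log: list = field(default_factory=list)  # anonymized usage log

def answer(session, question, kb):
    """Answer from a curated knowledge base; log the question without identity."""
    if not session.credential_ok:
        raise PermissionError("employee must be credentialed first")
    session.log.append({"question": question})  # no employee identity recorded
    return kb.get(question, "I don't know -- would you like to open a ticket?")

def escalate(session, question, employee_id):
    """Escalating to a ticketing system requires explicit identification;
    the employee is notified beforehand and can decline by not escalating."""
    session.employee_id = employee_id
    return {"ticket_for": employee_id, "question": question}
```

The design choice is that identity attaches to the session only at the moment of escalation, so routine queries never carry it.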

[00:14:47] They have to start to trust, not start to trust.

[00:14:50] They have to continue to trust that, you know, they can share information that they need at a certain point in time or a certain life event, to your point.

[00:15:00] And they've got to know that, you know, only certain people and systems will capture that information.

[00:15:07] They've got to know that, you know, they've got to know that, you know, the next time this happens and some other life event comes up or there's a, you know, workplace, you know, issue or whatever it is that they can go with and get not just reliable answers, but they will continue to have trust in the system and systems in which they're engaging.

[00:15:57] And so when we first started, even though chatbots weren't as favorable at the time, we were a chatbot.

[00:16:05] But we were a conversational chatbot, which allowed more interaction and had the intelligence of natural language processing and machine learning, the core elements of AI.

[00:16:17] You know, in the 2018 timeframe, it seems like so many years ago.

[00:16:22] Right, right.

[00:16:23] I'm like being facetious because everything's moving so fast.

[00:16:27] When people get into the world that just started to learn about AI with ChatGPT and other types of large language model solutions, you know, they may not have known there was a world that existed before.

[00:16:41] A world that really did pay, because it had to, a lot more attention to the concept of guardrails, right?

[00:16:49] Which is getting very much specific answers to people based on what their information they're looking for, but in a very controlled manner.

[00:16:58] And that's what we've done from day one.

[00:17:00] But a lot of it may have been because of the technology constraints that we had at the time.

[00:17:05] But we forced and trained the solution to be very accurate because from day one, that's one of the measurements we share back to our customers.

[00:17:14] And they can see this in the dashboard is the level of accuracy of the answers to the employees by, you know, if they are, you know, rating it helpful or not helpful.

[00:17:24] I mean, employees don't rate every answer.

[00:17:26] And then we know the answers that may have been incorrect.

[00:17:29] And we surface that too, but we have to be at 95% or greater accuracy to be a trusted solution.
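The accuracy bar Beth describes, 95% or greater based on helpful/not-helpful ratings, can be expressed as a simple calculation. The exact formula MeBeBot uses is an assumption here; this sketch treats accuracy as the share of rated answers marked helpful:

```python
def accuracy(helpful, not_helpful):
    """Share of rated answers marked helpful; None if nothing was rated
    (employees don't rate every answer, so unrated ones are excluded)."""
    rated = helpful + not_helpful
    return helpful / rated if rated else None

def meets_trust_bar(helpful, not_helpful, threshold=0.95):
    """True if rated accuracy meets the 95% threshold mentioned in the episode."""
    acc = accuracy(helpful, not_helpful)
    return acc is not None and acc >= threshold
```

For example, 96 helpful ratings against 4 not-helpful clears the bar, while 90 against 10 does not.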

[00:17:35] And most AI solutions, unfortunately, that were being delivered to market, you know, just even a couple years ago and still today, they don't pay attention to the accuracy and they say, well, we'll get there over time.

[00:17:49] We're going to launch the solution.

[00:17:51] We're going to get employees to use it.

[00:17:53] We're going to take in all that training data and we're going to improve upon it.

[00:17:57] But guess what?

[00:17:58] You probably lost them already.

[00:18:00] If they couldn't trust in it from day one, what makes them want to come back to it, you know, 90 days from now when you are now ready for that?

[00:18:00] So that's the big difference in what we deliver: we want it, the moment our customers go live with MeBeBot, to be a solution that is trusted and accurate, because that helps you build the other things you can do with the solution.

[00:18:24] So employees want to use it.

[00:18:26] They see it as a beneficial, you know, assistant resource for them.

[00:18:31] And then the business gets the benefit because these are things that they, you know, operationally need to streamline so that they can free up their valuable time to focus on other initiatives that drive a lot of impact for the overall bottom line.

[00:18:46] Yeah, I think the, I mean, I know this continues the trust kind of theme, but that was a challenge because I remember being at NBCUniversal trying to evaluate some of these, you know, early conversational AI kind of chatbots.

[00:19:00] And, you know, which use cases do we start with, how much proprietary sort of data that we need to feed it and how reliable will the answers be?

[00:19:10] I mean, at least at that time, if it didn't know the answer, you very quickly knew that it didn't know the answer, as opposed to a generative AI powered solution where you have to worry about hallucinations.

[00:19:22] And it actually gives you an answer that sounds like it might be correct but might be completely off, right?

[00:19:30] Hallucinations are the focus now, whereas before...

[00:19:34] It just... Hey, this is William Tincup with WRKdefined.

[00:19:38] Hey, listen, I'd like to talk to you a little bit about inside the C-suite, the podcast.

[00:19:42] It's a look into the journey of how one goes from high school, college, whatever, all the way to the C-suite, all the ups and downs, failure, successes, all that stuff.

[00:19:52] Give it a listen, subscribe, wherever you get your podcast.

[00:19:55] It only had a certain corpus of information.

[00:19:58] And so there was a lot of stuff it just didn't know, but at least it told you that it didn't know.

[00:20:03] Right.

[00:20:03] Well, if you think about the whole basis of how large language models were created, it was on publicly accessible information.

[00:20:11] Yeah.

[00:20:11] Right.

[00:20:11] If you think about what happens inside business, that's not publicly accessible information that these models could learn from.

[00:20:19] And so when you talk about things or not you specifically, but when people in the industry talk about, oh, you can just take an employee handbook and throw it into a GPT model and employees are going to get all that support they need.

[00:20:34] Okay.

[00:20:35] Go ahead and try that.

[00:20:37] But the models have never been trained for that, nor do they really understand the nuances in how employee handbooks have been written.

[00:20:47] I have written a number of them.

[00:20:49] And they always get a review by an attorney and why certain language needs to be in certain ways.

[00:21:25] We also apply retrieval augmented generation, which is hopefully people are learning more and more about that.

[00:21:32] And it's a way to kind of synthesize the data to more of a specific use case.

[00:21:37] And then we even apply, you know, the human in the loop, which means the content we've ingested gets populated into our architecture, which is this curated library of all these different types of questions.

[00:21:51] And then we want the business users to review those answers, lock them in place, we call it lock it in place so that they know that consistently every time an employee is asking certain types of questions, they're going to get the same answers without the errors of hallucination.
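The "lock it in place" pattern Beth describes, retrieval-augmented generation drafting an answer once, a business user approving it, and the approved answer then being served verbatim, can be sketched like this. The function names and the toy retrieval step are illustrative assumptions, not MeBeBot's implementation:

```python
locked_answers = {}  # question -> human-approved answer, served verbatim

def draft_with_rag(question, documents):
    """Stand-in for retrieval + generation; in practice this would be an
    LLM call grounded in the retrieved passages."""
    words = question.lower().split()
    relevant = [d for d in documents if any(w in d.lower() for w in words)]
    return " ".join(relevant) or "No source found."

def approve(question, answer):
    """Human in the loop: a business user reviews the draft and locks it."""
    locked_answers[question] = answer

def respond(question, documents):
    """Serve the locked answer when one exists; otherwise return a draft
    flagged for review. Locked answers are deterministic: every employee
    asking the same question gets the same answer, with no hallucination."""
    if question in locked_answers:
        return locked_answers[question]
    return "DRAFT: " + draft_with_rag(question, documents)
```

The point of the lock step is consistency and auditability: generation happens once, review happens once, and everything after that is a lookup.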

[00:22:08] And this is a really important fact, and I hate to kind of dwell on it, but, you know, we submitted voluntarily to the EEOC.

[00:22:16] They will let you do a review of your technology.

[00:22:19] And we sat on a call.

[00:22:21] We showed what we do.

[00:22:22] What they really appreciated about our solution is that at any point in time, an employer absolutely knows what information was delivered to an employee as far as a specific answer.

[00:22:34] There's also the audit trail, as I mentioned, that if an answer was incorrect, we're surfacing that.

[00:22:40] Sometimes it means the business needs to update that information because things have changed with inside their environment or a link that's referenced.

[00:22:48] It has moved and that needs to be surfaced.

[00:22:52] But, you know, you look at the backlog of cases that are happening in the EEOC related to AI, and you really need to be more mindful about what you're doing because any information you give to an employee as an employer, you know, can be brought into any kind of conversation or claim an employee may have against the business.

[00:23:15] Okay.

[00:23:15] So, yeah.

[00:23:16] So, you've got that baked in to have that traceability.

[00:23:19] That's correct.

[00:23:20] Yeah.

[00:23:20] And does that mean in the future, everything will be behind what we call the guardrails, which is this way of keeping content contained so that the AI does not hallucinate?

[00:23:34] Well, AI is always going to hallucinate.

[00:23:36] Is it going to get better, faster, smarter?

[00:23:38] Yes.

[00:23:39] Are there also high consumption costs to leveraging large language models every time with, you know, original queries over and over?

[00:23:48] Yes.

[00:23:49] And I think that's been a big gotcha out there to some of the companies that have been dabbling in AI is they're writing really big checks or paying very big invoices to AWS or Microsoft that they weren't expecting because this cost of consumption of AI services is expensive.

[00:24:07] And there are ways to do it in a more mindful way that allow the ability to not have to make those active calls to consume and, again, to avoid the concepts of the hallucination.

[00:24:21] Got it.

[00:24:22] Okay.

[00:24:22] Yeah.

[00:24:23] Yeah.

[00:24:23] I think it'll be interesting to see how some of that evolves because I think right now the consumption costs, to your point, and the impact technically on the environment is significant.

[00:24:36] But I think we're going to see innovation in that space.

[00:24:41] I mean, even I just saw something from one of the spinoffs from MIT, I think it's called Liquid.ai, where they're looking at alternatives to LLMs on how the models get trained.

[00:24:52] And that doesn't require nearly as much consumption costs.

[00:24:56] And that's essentially what we've been doing without calling it a true small language model, but the concepts of more smaller purpose-built language models will start to be more of a thing because they won't require that consumption.

[00:25:12] And I'm glad they're looking at it because, hey, Microsoft bought the nuclear power plant probably not too far from where you live, Bob.

[00:25:23] And everyone's looking for new sources of energy.

[00:25:26] And I live in Texas.

[00:25:28] I live in Austin.

[00:25:30] Elon Musk is building his whole empire where there's lots of farm fields that can be easy data centers.

[00:25:37] But is that really what we want to do to our planet and our environment and just keep building more and more of these data centers instead of being more pragmatic and smart about what we're doing to solve the computational needs we need, but in a more mindful way?

[00:25:55] Yeah, absolutely.

[00:25:56] The other thing that comes to mind when I think about the fact that MeBeBot and everything that you're doing predates any of the generative AI and LLM advancements as we know them since 2022.

[00:26:11] But I think about the data aspect of all of this.

[00:26:15] So I know we're going to talk about AI governance and responsibility and things like that.

[00:26:19] But some of these core principles go back to sound foundational elements like data and data governance and data quality, data provenance, and how you really think about your maturity from a data and analytics perspective.

[00:26:35] And how that feeds into your ability to take on some of these AI projects where you'll see value, where you'll mitigate not just bias, but you'll mitigate some of the potential governance challenges.

[00:26:49] You may mitigate some of the hallucinations as well because your data is that much more trusted.

[00:26:54] So how do you think about that?

[00:26:56] When we essentially go through a process of bringing on a new customer, contracts, et cetera, first and foremost, we're SOC 2, Type 2 certified.

[00:27:05] So we go through a lot of steps to make sure that we meet the minimum requirements or more of the industry standards of best practices.

[00:27:16] And again, that's one of the fundamentals.

[00:27:20] The other piece of it is we have explainable AI.

[00:27:23] We can talk about where this data is, how it's stored, what data is purged immediately after usage and training.

[00:27:31] We only aggregate data back to the business.

[00:27:34] We don't show, to your point earlier, like an employer can't say to us, well, who asked that question?

[00:27:40] We can tell you it's been asked and it was part of the benefits category of questions asked.

[00:27:46] And it was specifically about the FSA plan, for example.

[00:27:51] But, you know, we can't tie it back to a person.

[00:27:54] And a lot of them don't want that.

[00:27:56] They don't want that information.

[00:27:58] Getting to data that lives in other systems that we want to deliver to employees in a more personalized manner would require us to go beyond where we are today,

[00:28:08] which means we would require the employee to identify who they are or allow us to see who they are so that we could then better serve them by, again,

[00:28:19] getting very granular and specific to information about them that may live in another system and transmitting that data back and forth.

[00:28:27] I mean, that's the information that's really more complex, right?

[00:28:31] Because we would be touching core systems of record.

[00:28:35] And at this point in time, there's a lot of companies that are, you know, dabbling a bit in it, meaning things like PTO accrual balance, right?

[00:28:43] What's my PTO balance?

[00:28:45] These are things that you can get access to and bring back.

[00:28:50] Where I was always hoping things would move faster (I had seen this happen on the retail tech side) was with the concept of data lakes.

[00:28:58] Companies that have invested in their own data lake can then make it accessible for companies like MeBeBot to tap into, so that,

[00:29:09] instead of going, say, directly to an ADP or a Workday, we go to the data lake.

[00:29:15] That's the better protocol to take.

[00:29:17] And companies are getting there where they're building the concepts of the data lakes or using data fabric, what have you, whatever they're calling it.

[00:29:27] And that's where we really want to start to accelerate the process of the interaction of the data to an individual with their permission, right?
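The data-lake pattern Beth describes, one curated access point instead of direct integrations with each system of record, can be sketched roughly like this. This is a hypothetical toy in Python; the employee IDs, field names, and question are illustrative, not MeBeBot's actual schema or API.

```python
# Toy "data lake": a read-only, pre-approved view of employee data,
# standing in for what would otherwise require calls into an HRIS,
# payroll system, benefits platform, etc.
DATA_LAKE = {
    "emp-001": {"pto_balance_days": 12.5, "fsa_plan": "Standard FSA"},
    "emp-002": {"pto_balance_days": 4.0, "fsa_plan": "Limited FSA"},
}

def answer_pto_question(employee_id: str) -> str:
    """Answer 'What's my PTO balance?' from the lake, not the source system."""
    record = DATA_LAKE.get(employee_id)
    if record is None:
        return "Sorry, I couldn't find your record."
    return f"You have {record['pto_balance_days']} PTO days remaining."

print(answer_pto_question("emp-001"))  # prints "You have 12.5 PTO days remaining."
```

The design point is that the assistant only ever talks to the curated view, so the company controls exactly which fields are exposed there.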

[00:29:38] Right.

[00:29:39] It just, I think we're getting closer to it.

[00:29:42] I don't know if you have any predictions yourself, Bob, but I would love to hear that.

[00:29:46] Well, you know, it's interesting.

[00:29:48] I'm hearing it from different angles, right?

[00:29:51] So I was at the People Analytics World Conference in New York City last week.

[00:29:55] So, you know, those things came up in that context.

[00:29:58] And as we think about, you know, gathering data across the talent lifecycle, not just gathering, you know, proprietary data within a particular enterprise, but just as you think about the data that you're collecting about candidates, the data that you store about employees.

[00:30:13] And then, of course, you know, all the resources that MeBeBot is helping to tap into to provide those answers in real time and in context to the user.

[00:30:23] I think there's still a ways to go, but, you know, as I alluded to before, data governance and sound data practices are so fundamental. The more experts I talk to in that space, whether analytics experts or data engineers and people of that ilk, the more I hear it.

[00:30:46] It just seems like that is really going to get you more success with AI projects out of the gate, maybe less need for fine-tuning.

[00:30:58] I mean, I think you still need fine tuning.

[00:31:00] I think you still need RAG or whatever comes after RAG and some of those things.

[00:31:04] But I do think some of those foundational capabilities around your maturity of data, understanding how people are using that data, who should have access to that data.

[00:31:14] Because I think one of the things that comes up often, you know, I mentioned agentic workflows, but you're probably the best person to ask this, actually: as you think about how the data flows across all these different systems,

[00:31:28] and we try to bring all the data that's necessary to respond to a user request, the source of some of those data elements may come from a lot of different places that come together in that interaction.

[00:31:43] And so you start to think about the traceability and the observability of those different data elements.

[00:31:49] And is there metadata on each of those that says who is supposed to have access to that?

[00:31:55] So how do we know that everything we're giving to a user is actually okay for them to see?
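One minimal way to sketch the idea Bob raises here, metadata on each data element saying who may see it, checked before anything reaches the user. This is a hypothetical Python illustration; the role names and fields are made up, not from any real product.

```python
# Each data element assembled for a response carries access metadata.
# Before the assistant shows anything, a filter checks the requester's
# roles against each element's allowed roles.

def filter_for_user(elements, user_roles):
    """Return only the elements this user's roles permit them to see."""
    allowed = []
    for element in elements:
        # Keep the element if any of the user's roles is permitted.
        if set(element["allowed_roles"]) & set(user_roles):
            allowed.append(element)
    return allowed

elements = [
    {"value": "PTO balance: 12 days", "allowed_roles": ["employee", "hr"]},
    {"value": "Team salary bands", "allowed_roles": ["hr"]},
]

visible = filter_for_user(elements, user_roles=["employee"])
print([e["value"] for e in visible])  # prints "['PTO balance: 12 days']"
```

In a real system the metadata would come from each source system's access-control lists rather than being hand-tagged, which is exactly the traceability problem the conversation is pointing at.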

[00:32:04] So these are just some sort of high level thoughts that I think about as I think about responsible AI and are people using all these tools responsibly?

[00:32:15] And now that anyone can create a copilot or create, you know, their own custom GPT or some kind of agent, like what does that mean?

[00:32:25] Do they have access to only the things that they should have access to as they build it?

[00:32:31] And have they tested it to make sure that it's not, you know, biased in its answers, let alone, you know, verified the veracity of the data that it's getting and using?

[00:32:39] It's a huge thing, Bob, right?

[00:32:41] I mean, it's really the concept of, you know, user permissions and who should have access to what.

[00:32:48] Hey, everybody.

[00:32:49] I'm Laurie Ruettimann.

[00:32:50] What are you doing?

[00:32:51] Working?

[00:32:52] Nah.

[00:32:52] You're listening to a podcast about work and that barely counts.

[00:32:56] So while you're at it, check out my show, Punk Rock HR, now on the WRKdefined Network.

[00:33:02] We chat with smart people about work, power, politics, and money.

[00:33:06] Are we succeeding?

[00:33:07] Are we fixing work?

[00:33:08] Eh, probably not.

[00:33:09] Work still sucks.

[00:33:10] But tune in for some fun, a little nonsense, and a fresh take on how to fix work once and for all.

[00:33:17] I think about salary and compensation information.

[00:33:19] I remember, in all my years of HR, it was like: salary information, never share what someone gets paid with anyone else.

[00:33:27] And then you live in New Jersey, New York.

[00:33:30] New York.

[00:33:31] New York.

[00:33:32] Okay.

[00:33:32] And in New York, you have to share it now.

[00:33:34] Yeah.

[00:33:35] Right?

[00:33:35] So it gets to be like, okay, so how much data is really that sensitive and how much of it should not be shared?

[00:33:43] And I start to question some of it.

[00:33:46] And does all data need to come from other systems?

[00:33:49] Or can we generate data in a much more organic way than we even are today?

[00:33:55] Right?

[00:33:56] Right.

[00:33:56] And if you think about, you know, the future of what I always saw is like if I was an employee applying for a job, maybe I really interact with an AI agent and I share a lot of information.

[00:34:08] Maybe I give it access to seeing some of my portfolio of work or some other things I do.

[00:34:14] And I'm feeding it.

[00:34:15] I'm feeding it with the information that it should have.

[00:34:18] And I'm controlling what it becomes instead of the employer controlling what my data as an employee looks like based on what they required of me for their business processes.

[00:34:30] I mean, the whole model could be flipped entirely.

[00:34:33] Right?

[00:34:34] Yeah.

[00:34:34] And frankly, the name MeBeBot came from the idea of "me be bot," you know: me be efficient, me be productive.

[00:34:43] And it's all about the ego or the id of the individual.

[00:34:46] Well, if the individual gets their needs satisfied, then the organization's needs should be satisfied, too.

[00:34:54] But today, the way organizations are structured, the organization needs their satisfaction first.

[00:35:00] Right?

[00:35:01] Right.

[00:35:01] If you think about it.

[00:35:02] And they need it satisfied because they're under constraints, too.

[00:35:07] You know, they have people looking to make sure that they're following sound hiring processes and have all the I-9 forms for individuals,

[00:35:14] and that all the payroll is done correctly.

[00:35:18] But, I mean, I'm just putting it out there because there is a world where it's going to be way different than what we're even seeing today.

[00:35:25] And, yes, traceability of all this information, how it's delivered, roles and permissions, that does need to be figured out.

[00:35:33] It's way more complex.

[00:35:34] What I find more interesting is just starting with some digestible things that make lives easier for people, like helping give them information at their fingertips of things that they would have had to pull a report on.

[00:35:46] and then sit and apply, you know, their own brainpower to understanding what that report meant.

[00:35:53] And instead, letting the AI take over some of those aspects.

[00:35:57] That's great.

[00:35:58] Let's just start there.

[00:35:59] And let's go from there.

[00:36:00] Right?

[00:36:01] No, I think that's a great way to think about it.

[00:36:04] As we think about AI regulation, I know we talked about it last time we spoke.

[00:36:10] But as you think about some of the legislation that you're seeing, obviously the EU AI Act is being absorbed.

[00:36:17] And, yeah, it's pretty much set.

[00:36:20] I don't think it's being enforced yet.

[00:36:22] But, you know, it's been finalized and people are absorbing that around the world.

[00:36:28] What are you seeing and hearing from your clients as you, you know, travel the world and talk to people?

[00:36:34] Yeah.

[00:36:35] I mean, it is interesting.

[00:36:36] It is very diverse in that sense, where there are some people saying, I don't understand it, so I don't want to use it.

[00:36:45] And so, no AI for us.

[00:36:48] That's one of the extremes.

[00:36:50] But it is happening.

[00:36:51] I wouldn't say it's that extreme.

[00:36:53] I would say a good 20 to 25 percent are in that camp.

[00:36:57] And then there's others that may be at the other end of it that's like, this is the most game-changing thing.

[00:37:03] The sooner we can understand it, absorb it, use it inside our business, the bigger competitive advantage we're going to get as a company against them in the market.

[00:37:11] So the spectrum is pretty broad right now.

[00:37:15] And I think where the AI regulations and litigation are coming from is out of, you know, societal needs to have some structure around this.

[00:37:27] And the questioning of the what ifs.

[00:37:30] We've been using AI as consumers for a while.

[00:37:33] You know, you've got the Alexa or the Siri device, you know, in your environment.

[00:37:37] And, hey, you've been feeding it information for quite a while.

[00:37:41] And that's the same type of thing that's happening in these large language models.

[00:37:45] So, you know, there's been some precursors to where we are.

[00:37:49] I think where the legislation is going to come into place, and I follow it a little bit obsessively because I think, you know, Bob, you're part of the ethical AI for HR group.

[00:37:59] I just see that this is a really interesting time that we're in and the world of work and the way that governments and business interact on AI.

[00:38:13] It will be telling as to how fast innovation can happen or where they do see risks.

[00:38:19] And where, you know, I've been following, for example, the bills in California.

[00:38:26] And anytime you write legislation, well, it's written by people who are used to writing that legislation.

[00:38:32] Yeah.

[00:38:32] They write it in certain forms.

[00:38:34] They may tap industry experts inside their, you know, knowledge areas and have them provide guidance.

[00:38:42] Do they always get a really highly representative view?

[00:38:46] Who knows, right?

[00:38:47] But it is, you know, going in front of the constituent, you know, whether it's a House, Senate, what have you.

[00:38:54] It is in front of people that represent the community.

[00:38:57] That's the whole goal.

[00:38:59] So when the veto was done on some aspects of it by Newsom, you know, the governor of California, you have to wonder how much of that was political pressure from businesses that are making, you know, their headquarters in California,

[00:39:16] That would be asked to do something different and may leave the state if those kind of regulations are in place.

[00:39:22] So is everything that's being done in the world of AI legislation coming to pass as quickly as we might need it?

[00:39:30] Maybe not, because of these other types of political pressures, or maybe so, because of the different ways that the community the state serves, for example, in the U.S.,

[00:39:41] is responding.

[00:39:44] I do think, you know, in the U.S., I see the powers being given to the states.

[00:39:50] There'll probably be some federal regulatory oversight, like the White House did with their, what was it called?

[00:39:59] It was basically like a "this is our recommendation" letter, right, years ago.

[00:40:04] I can't remember what the specific name of it was.

[00:40:07] And they've iterated on that.

[00:40:09] And that's kind of like an overarching framework.

[00:40:11] But other countries, they've leaned into it a lot more.

[00:40:16] You know, Switzerland is one.

[00:40:17] You know, they've always been very much about data privacy.

[00:40:21] And that serves their community well.

[00:40:24] So, you know, I do think that we're going to see more bills.

[00:40:28] I know they're happening at the state level inside the U.S.

[00:40:32] I think countries are going to start to hone in on what they want to do a little bit more.

[00:40:36] They're going to look at the people who came before them and see what's working and what's not.

[00:40:40] And then they're going to probably get scared every now and then with stories like the one that came out of Harvard.

[00:40:46] I don't know if you read about this one from this week, I think, where several students got a hold of the Meta glasses that have the camera.

[00:40:55] And they did an experiment where they could, you know, like I could be staring at you, Bob, and taking your image in the glasses and comparing it against Instagram.

[00:41:06] And then essentially figuring out a lot of your logins and passwords to different accounts.

[00:41:12] Right.

[00:41:12] So they were showing examples of things that are, you know, risks of AI, and kudos to students that want to do those kinds of projects at school, because that's great.

[00:41:24] Because I do believe there are people that are going very aggressively at AI in our market, you know, without much regard for societal impacts down the line.

[00:41:40] And, you know, it would be good to have some people start to check and balance a little bit of this activity.

[00:41:46] No, I absolutely agree.

[00:41:48] I'm all for, you know, ethical hacking and red teaming and trying to, you know, pressure test some of these systems because we've got to know what the exact dangers are for some of these use cases.

[00:42:00] But in the case of California, I mean, I read Governor Newsom's veto letter and I thought it was quite logical.

[00:42:09] I mean, I know there's fair arguments on both sides.

[00:42:12] It's not easy to balance responsibility with innovation.

[00:42:18] And I've had a lot of conversations with folks about this.

[00:42:22] I do think we were talking before about the LLMs and, you know, the energy consumption and how powerful they are.

[00:42:29] But to write legislation that only restricts the companies with the deepest pockets and not think about the actual use case,

[00:42:38] which ties more to how the EU looks at it in terms of their, like, risk tiers, right?

[00:42:45] Is this a high-risk, you know, use case or whatever?

[00:42:48] To me, that is a more, you know, logical approach.

[00:42:52] Let's figure out how this particular capability could be used for good or for bad.

[00:42:58] And let's make sure that we say this type of thing is you can use it this way but not this way.

[00:43:03] I mean, we do it with guns.

[00:43:04] We do it with all kinds of other safety measures, right?

[00:43:08] So I just think one of Governor Newsom's points was this would give people a false sense of security.

[00:43:17] Yeah.

[00:43:17] And I agreed with that point, because it wasn't 24 hours after I read that that I read about the MIT team, or ex-MIT team,

[00:43:28] the spinoff that was taking a completely different approach that could wind up being a very powerful model

[00:43:34] that maybe open source and other sorts of movements in this space, you know, take on.

[00:43:40] They would have completely circumvented that legislation, right?

[00:43:44] Yeah.

[00:43:44] And it could have been more harmful than, you know, plenty of other situations.

[00:43:50] But to your point, I do think you have to have the right voices with the right influence.

[00:43:55] And I do think, at least at a federal level, from what I've seen and some of the communities that I'm a part of,

[00:44:01] including not-for-profit organizations like, you know, All Tech is Human, AI for Good, For Humanity,

[00:44:08] and the people associated with For Humanity have been very vocal about, you know, NIST is looking for, you know, input.

[00:44:16] They're looking to talk to us about use cases that you're seeing.

[00:44:20] Talk to us about, you know, ethical challenges that you see with this particular wording, or whatever.

[00:44:26] I mean, some people are just incredibly passionate about making sure that we do this right and setting the right, you know, stakes in the ground so that we move forward in a positive direction.

[00:44:39] There's always going to be bad actors.

[00:44:41] For sure.

[00:44:41] This is about mitigation strategies.

[00:44:43] This isn't about elimination of risk.

[00:44:47] Absolutely.

[00:44:47] I think back to my days, one of my early career moments was working for a dot-com, back in the dot-com era.

[00:44:55] And so the company was Garden.com, fabulous organization.

[00:44:59] We went from zero to a couple hundred people and an IPO in like four years, you know, back in that day.

[00:45:07] And one of the things is it was one of the very first companies where you can see all these beautiful plants and gardening gloves and equipment online and buy them.

[00:45:17] But a lot of the clients were people who were older and used to buying from a catalog and placing an order over the phone.

[00:45:25] So literally a third of our staff was a call center, because people were afraid to put their credit card information into the e-commerce engine.

[00:45:34] So instead, they called someone to give them their information, you know, over the phone that they then just typed into the form.

[00:45:44] And then over the years, of course, it became like: you don't put your credit card information in unless you see the security, you know, padlock type thing.

[00:45:54] And now the payment card industry has its industry standards protecting, you know, all the transactions.

[00:46:01] I spent many years dealing with, you know, credit card transactions and encryption of that data.

[00:46:07] And it's the same thing.

[00:46:08] I mean, we're at that point where we just need to get a bit more structure in place, in my mind, so that people don't have to figure it all out on their own.

[00:46:18] I mean, it is incredibly complex to keep up with AI.

[00:46:22] And I think people need some help.

[00:46:24] I think they're going to need some help.

[00:46:25] Otherwise, they're not going to know what to trust.

[00:46:28] And then they may shut down and fear everything.

[00:46:31] And that's not where we need to be.

[00:46:33] So if there's a way we can strike a good balance, that's what I would be a fan of.

[00:46:38] But it is hard.

[00:46:41] It's hard.

[00:46:41] Yeah.

[00:46:42] I do think that this does go back to the trust factor, right?

[00:46:49] Trust is a two-way street.

[00:46:51] Employers need to trust that their employees are doing the right things.

[00:46:55] They're not misusing technology.

[00:46:57] And whether that's, you know, trying to access data that they shouldn't access or it's, you know, uploading proprietary content that is going and training a, you know, a public model and creating basically a security incident.

[00:47:14] And then the other point I was going to make was around the responsibility that people take on when you start using AI.

[00:47:23] I mean, I like to say that when it comes to responsible AI, we are all responsible because we're all using it.

[00:47:29] We're all going to be, you know, building with it, testing it.

[00:47:32] So, you know, you just don't want to be that weak link in this, you know, data, analytics, and AI kind of supply chain, because it's not like prior versions of AI, the predictive, analytical AI that was more the domain of data and analytics experts, machine learning experts, et cetera.

[00:47:57] You know, the AI wasn't the UI, right?

[00:48:01] And now it is.

[00:48:02] Yeah.

[00:48:03] That's a great point.

[00:48:04] Yeah.

[00:48:04] I think the role of the chief information security officer is really hard right now.

[00:48:09] I mean, because they're tasked to help evaluate a lot of the systems, or tools, or projects that the business wants to embark on.

[00:48:19] Yeah.

[00:48:20] And then the attorneys inside these companies as well, when we're speaking about the businesses, I mean, they're a little bit behind on their knowledge of AI and what to pay attention to and where their risk tolerances could be.

[00:48:36] Right.

[00:48:36] Right.

[00:48:37] So it's, it's an interesting time still.

[00:48:40] And I think, you know, hopefully people are embracing educating themselves and see this as fascinating and interesting.

[00:48:48] And I was a history major from college.

[00:48:51] So I see this as such a unique turning point in our society, you know, that I'm all over it, because I know how big of an impact this is going to be.

[00:49:01] Just like I can recall that fun story about being in the early days of dot-com, and I was in the early days of mobile.

[00:49:08] It's these turning points that are going to shape and change the way we live and operate inside workplaces.

[00:49:16] And hopefully it spurs enough natural curiosity in people that they're not sitting back, which I see happening today, waiting for their employer to train them on AI.

[00:49:28] You know, please, people, have some natural curiosity of your own and do things like: I need to build a grocery list based on low-calorie meals, or low-fat, heart-healthy, what have you.

[00:49:44] Have some fun with it.

[00:49:45] I watched my 85-year-old dad have a great time; he's had an Alexa and a Google Home, and the way he interacts and uses them is like a kid, you know. It's playful, and it makes him learn how to articulate and ask better questions.

[00:50:05] And then you even watch, you know, kids with a Siri device or the iPad; they do things much more natively and intuitively because they don't know what they don't know.

[00:50:19] And I really wish that people who have been in the workforce would just look at it as something they can use, in a way, playfully on their own time to get more comfortable with it.

[00:50:34] And then they'll start to see the light bulb moments of how this could help them in the workplace more.

[00:50:39] So you just gave a fantastic answer to the question I was about to ask you.

[00:50:45] So this is, this is a bit like, this is a bit like playing Jeopardy, right?

[00:50:48] You gave me the answer.

[00:50:48] Now I'm going to give you the clue, but yeah, I was going to ask you what advice do you have for people to elevate their quote unquote AIQ, you know, whether it's individually or for their team or their organization.

[00:51:00] And, but certainly, you know, the, the curiosity experimentation and just having fun, even with some personal use cases, I think is part of that, right?

[00:51:11] Anything else, anything else you would add?

[00:51:13] Yeah.

[00:51:13] And then just think about things that you're doing inside work that are time consuming.

[00:51:17] I don't think many of us journal about our day at work and what we spend time doing, but it's a really easy, quick way to start to observe things that are super time-consuming and may not be fulfilling to you,

[00:51:30] or, you know, adding value back to the business.

[00:51:34] And sometimes if it's even creating a PowerPoint, you know, or presentation on something, there are now a handful of tools.

[00:51:43] I was early in on a company called beautiful.ai as a user, and I love that product.

[00:51:49] And I think they've evolved it very naturally over time, where they've actually inserted Anthropic into their solution, where I can put in some prompts about,

[00:52:00] like, I want to create a presentation to explain, you know, to a group of, you know, HR leaders, how AI can be used within their day-to-day work.

[00:52:09] And here's some of the things I want to talk about and help me create visual representation.

[00:52:14] I mean, those are great use cases of things that may have taken you hours to do.

[00:52:18] And even analyzing spreadsheets, and just writing and composing, if that's not your thing, obviously that's an easy fix, with, you know, AI helping people get a little bit of that creative block removed.

[00:52:35] It makes me fear that we'll not have good writers anymore in the world, but I was very much an English, you know, nerd when I grew up and wrote a lot of, you know, papers and things like that.

[00:52:48] And I still think it's definitely a skill you don't want to lose, but if it can help get you on your way faster, do it, you know?

[00:52:56] So, yeah, no, those are great examples.

[00:52:59] Those are some of the ways that I use AI myself.

[00:53:02] If I'm struggling with a little writer's block, you know: create me a draft, and then I'll tweak it and edit it.

[00:53:09] Or if I need a second pair of eyes on something that I've, I've written, you know, help me refine this, help me condense it, help me, you know, just make it more formal, make it less formal, you know, those kinds of things.

[00:53:19] It's ideal.

[00:53:20] I think it's great.

[00:53:21] Understanding nuances in large language models?

[00:53:24] One way to quickly get to this is to get a subscription to ChatGPT, get one for Anthropic's Claude, get one for Perplexity, and compare what you can do in the different tools.

[00:53:38] Like, I frankly toggle between Claude and ChatGPT, because there are just different takes on things.

[00:53:46] And I've started to not use my Google search as much, and I really feed prompts in on just curiosities.

[00:53:56] Like I want to build a website.

[00:53:57] I want to do it in a more streamlined way that's faster, easier, cheaper.

[00:54:02] You know, what's the latest trend and how can, how much would it cost to build it?

[00:54:07] I mean, you can literally ask all these questions right inside these tools.

[00:54:11] I helped my dad and stepmom, they were wanting to buy solar panels for their home.

[00:54:16] And I said, well, you want to know more information?

[00:54:18] Let's go into ChatGPT, and I'll show you how you can get, like, a counseling session on what solar panels are and how they work and all that.

[00:54:27] And they went crazy.

[00:54:28] They loved it.

[00:54:29] So.

[00:54:30] Very cool.

[00:54:31] Well, Beth, I want to be respectful of your time.

[00:54:33] This has been fantastic as always.

[00:54:35] Great chatting with you, Bob.

[00:54:37] Yeah.

[00:54:37] Likewise.

[00:54:38] So I'm going to leave it there, but thank you so much for all this great insight on the things that you're doing personally and professionally and the advice that you've given my audience.

[00:54:48] So, so thank you again.

[00:54:50] Really appreciate it.

[00:54:51] Well, and thank you, Bob, for being out there, helping to educate and create a desire for learning and information.

[00:54:58] So it's awesome to know that there's someone so well educated in the area of AI out there, like yourself, that's sharing your views and inviting different people to give their perspectives.

[00:55:11] Absolutely.

[00:55:11] Well, thank you.

[00:55:12] Thank you for that, Beth.

[00:55:13] Yeah.

[00:55:14] Thank you.

[00:55:14] All right.

[00:55:15] We will leave it there.

[00:55:17] Thank you, Beth, again.

[00:55:18] And thank you, everyone, for listening.

[00:55:20] We'll see you next time.

[00:55:21] Bye-bye.

[00:55:21] Bye-bye.