Ep 29: Using Talent Lifecycle Intelligence and Responsible AI to Mitigate Talent Risk with Sultan Saidov
Elevate Your AIQ · October 22, 2024 · 00:52:54


Sultan Saidov, Co-founder and President of Beamery, sits down with Bob to discuss the impetus for starting the company and the evolution of AI in talent management. Sultan highlights the need for identifying and developing potential in employees, as well as the importance of transparency and information in making career choices. He also emphasizes the role of AI in talent risk management and the shift towards treating talent like customers. Bob and Sultan discuss the challenges of integrating talent intelligence and people analytics and the potential for generative AI in improving data accessibility and decision-making.

The conversation also explores the challenges and opportunities of using AI in HR and the importance of Responsible AI. Sultan discusses the need for AI models to fail safely and the importance of data safety and security. He highlights the legal review required for HR use cases and the slow adoption of AI in the HR industry. The conversation touches on the value of integrating AI into existing platforms and the potential for AI to provide guidance and insights. The discussion concludes with a focus on the importance of Responsible AI, including bias auditing and transparency.

Keywords

talent management, AI, potential, transparency, career choices, talent risk management, talent intelligence, people analytics, generative AI, AI in HR, responsible AI, data safety, legal review, AI adoption, integrating AI, guidance and insights, bias auditing

Takeaways

  • Identifying and developing potential in employees is crucial for talent management.
  • Transparency and information are essential for making informed career choices.
  • AI can play a significant role in talent risk management.
  • Integrating talent intelligence and people analytics can lead to better decision-making.
  • AI models in HR should be designed to fail safely and prioritize data safety and security.
  • Integrating AI into existing platforms can unlock the full potential of AI and provide a seamless user experience.
  • AI can provide guidance and insights, going beyond task execution to help users ask better questions and make more informed decisions.
  • Responsible AI practices, such as bias auditing and transparency, are crucial in ensuring fair and ethical outcomes in HR.

Sound Bites

  • "Making career choices available to people, not just simpler, but fairer."
  • "Creating more information transparency and solving asymmetries."
  • "Redeploying and training employees is financially more efficient than hiring new people."
  • "We're trying to make very particular interactions that our products and AI models already serve work in a much easier and more seamless way."
  • "What extra insights can we show that give you guidance? For example, can we tell you that before you post this role, consider removing these requirements in order to not be at risk."

Chapters

00:00 Introduction and Background

07:47 Transparency and Information

10:48 Talent Risk Management

16:16 Integrating Talent Intelligence and People Analytics

20:12 Generative AI and Data Accessibility

27:54 Challenges and Opportunities of AI in HR

31:13 AI as a Guide for Better Decision-Making

33:01 Injecting Nudges and Concepts with Digital Adoption Platforms

36:03 Building Trust in AI Platforms

38:55 The Role of Responsible AI in HR


Sultan Saidov: https://www.linkedin.com/in/sultanmurad/

Beamery: https://beamery.com



For advisory work and podcast sponsorship inquiries:

Bob Pulver: https://linkedin.com/in/bobpulver

Elevate Your AIQ: https://elevateyouraiq.com

Powered by the WRKdefined Podcast Network. 

[00:00:09] In this episode of Elevate Your AIQ, I had the pleasure of speaking with Sultan Saidov, co-founder and president of Beamery, a leading talent lifecycle management solution, about the evolution of AI in talent management. We delve into the critical importance of identifying and nurturing human potential, which means weighing the benefits and challenges of talent investments, particularly talent acquisition versus upskilling and reskilling. Sultan explains the concept of talent risk management, which sheds light on the challenges organizations face in retaining top talent, and the critical importance of trustworthiness.

[00:00:43] We also talk about AI governance, including AI legislation, ethics, and fairness. Beamery was one of the first solution providers to be voluntarily, independently audited for adverse impact. Of course, I had to give Sultan and the Beamery team kudos for that and their ongoing advocacy in the responsible AI space. Overall, it's a fascinating and thought-provoking conversation with Sultan. I really enjoyed it and found it truly insightful. I'm confident you'll find it very insightful as well. Thanks for listening.

[00:01:10] Hello, everyone. Welcome to another episode of Elevate Your AIQ. I'm your host, Bob Pulver. With me today, I have the pleasure of speaking with Sultan Saidov from Beamery. How are you doing, Sultan?

[00:01:21] I am very well. Thanks for having me, Bob.

[00:01:23] Absolutely. Thank you for being here. Just to kick us off, I thought maybe I'd just have you give a brief introduction of your background and the impetus for starting Beamery back in, what, 2013, was it?

[00:01:39] I think that context was a really important part of it. Beamery today is a leading AI platform for talent management. And certainly in 2013, when we started, AI meant something very different to what it means today. But the impetus has stayed the same.

[00:01:52] I was working in finance before starting Beamery. And part of the impetus came from the fact that I was working in finance during the last recession, you know, from 2008 to 2012, when a lot of people suddenly found themselves out of work.

[00:02:07] And I started a couple of technology experiments while working in that industry. And one of them was to see if there's a way to identify people's potential to help people get discovered for jobs outside of finance and outside of those industries.

[00:02:26] And as I was running those experiments, before it became a motivation for a company, it was really a motivation to see if there was a way of making those career choices available to people, not just simpler, but fairer.

[00:02:39] Because where I come from, it's an unusual part of the world. It's a place called Dagestan. And not many people from that part of the world get access to opportunities in any industry or any company.

[00:02:53] And so one of my questions outside of looking at is there a way for people to get found for work having lost their jobs was, are there things that go beyond people's credentials, e.g. people's skills or potential or interests, that you could derive from where people have worked or what they studied?

[00:03:09] And at the time, that experiment started with just looking at what kind of things could we capture? Are there data points about where people come from or go?

[00:03:18] What companies hire for that could reveal some alternative way for people to get discovered beyond just applying for jobs?

[00:03:24] And very quickly, that led to me spending time in speaking to people in the HR industry, speaking to talent leaders, anybody who'd talk or listen.

[00:03:31] And I started to realize that there's an interesting systemic problem for why this type of potential-based hiring and talent decision-making doesn't already happen, or certainly didn't in 2012, '13.

[00:03:42] And it comes down to the fact that the industry of people decisions and HR and talent has been largely built around the processes, whether it's payroll or onboarding or applications.

[00:03:55] And as a result, has never really been people-centric, either in the experience or the data.

[00:03:59] And so that initial experiment of, is there a fairer way to identify people for potential, turned into a, is there a way to actually treat talent like customers?

[00:04:08] Not only in a fair way, but in a people-centric way.

[00:04:12] And that led to us building the first AI-based CRM in the talent space.

[00:04:17] And then over the years, it's turned into what we do today, which includes helping companies do talent planning and risk management and so on.

[00:04:23] Yeah, I would say it's quite a comprehensive platform at this point.

[00:04:28] But I love the backstory.

[00:04:30] I mean, certainly I see a lot of those challenges, I'll call them, persisting today, among people that aren't using Beamery, of course.

[00:04:40] And then that definitely resonates with me on a personal level.

[00:04:45] For me, it was COVID rather than the financial crisis.

[00:04:49] But certainly I witnessed and had some personal experience where if you understood what some of these folks were capable of, you never would have let them go in the first place.

[00:04:59] So yeah, so that definitely hits home.

[00:05:02] When you think about that human potential throughout the talent lifecycle that I know you support and your platform supports,

[00:05:11] are there specific areas where you think we've got more opportunity, meaning there are more people being overlooked?

[00:05:20] I guess where do you see the most opportunity?

[00:05:24] That might be kind of a loaded question.

[00:05:26] But is it in the general sort of labor market, all these people who are just consistently sort of untapped and remain hidden?

[00:05:34] Or is it the people that are already working that now you don't even know, like you really don't know what else they're capable of doing as the organization evolves?

[00:05:42] It's a question you can look at through a few degrees of depth or complexity.

[00:05:46] I think at a high level, you can't validate potential in a vacuum.

[00:05:52] You can identify where it's likely, but there has to be human interaction to actually explore your own potential as part of that.

[00:06:01] So for example, today, a lot of the biggest opportunities lie in future-proofing where a business or an organization is heading or where you as an individual are heading,

[00:06:10] because the work we do is changing so rapidly.

[00:06:14] And oftentimes, there is a symmetry between the types of work that is most at risk and the types of people and work that actually can transition into high-demand work.

[00:06:26] For example, when we saw AI in its early wave starting to take over image generation, the early hypothesis was,

[00:06:36] well, now we won't need designers or creative people.

[00:06:39] But the new specialized roles in prompt engineering and designing 3D worlds and games with AI assistants have actually drawn in exactly the people who are being displaced by what the AI is doing in basic image generation.

[00:06:51] And similarly, when you look at the types of people who are told, well, you will no longer need work because of AI in other areas like customer support,

[00:07:01] the areas that companies who've been building new AI bots for customer support have needed to hire, like in content labeling and so on,

[00:07:09] have had to actually draw in people who used to work in customer support that's being replaced.

[00:07:12] And the symmetry means that the potential for the hard-to-fill new roles, whether it's in pharmaceuticals or in support or any other industry,

[00:07:22] often comes from the people whose roles are at risk because that is the types of things that technology is starting to displace in some ways.

[00:07:30] And that means that the two sides of the coin are not just how do you as a company or a manager or a recruiter look at,

[00:07:37] well, how do I identify potential?

[00:07:39] There's also a question of how do you as a human, no matter what work you do,

[00:07:43] consider where your skill sets to this point might need to evolve and what information can help you make choices of where you go next.

[00:07:50] And I think those two things have to meet in any company setting.

[00:07:55] Careers don't happen to you and people's progression doesn't happen by itself,

[00:07:59] but you can start sending the right signals to tell people, hey, you look like you have adjacent skills to this role.

[00:08:07] Is this something you'd find interesting?

[00:08:08] Or there is a workforce that you're thinking about adjusting, either restructuring or training.

[00:08:14] Here's signals that you may want to consider around what they would need to be trained in or how those roles are shifting.

[00:08:20] And so I think it really boils down to creating more information transparency and solving some of those asymmetries of people not having the context of how work is changing

[00:08:29] and what might make you likely to be good at something.

[00:08:32] And, you know, that applies in day to day areas like when you have new technologies like blockchain emerging.

[00:08:38] A lot of companies default to starting to just hire people with that experience, which for a while is going to be almost no one.

[00:08:44] And the easy thing to do is to say, well, people who are starting to move into these roles, where do they tend to come from?

[00:08:50] And therefore, rather than just defaulting to hiring, who could we train rather than just hire and then replace?

[00:08:57] What makes it easier to train somebody?

[00:08:58] What are the gaps and similarities?

[00:09:00] And I think that applies both to companies making decisions and to people trying to figure out their own paths.

[00:09:04] Yeah, I think for the internal population and not necessarily like working full time, but the people who are already actively participating in your projects, whatever the sort of legal relationship is,

[00:09:20] I feel like it's certainly a big miss if you haven't taken the time to figure out what people are capable of.

[00:09:28] And obviously, technology has advanced in some of these more modern, you know, application talent marketplaces and other things that people are still trying to get their arms around, right?

[00:09:38] Skills-based organization.

[00:09:40] What does this mean for us?

[00:09:41] How do we identify and keep track of all these skills?

[00:09:45] So to me, that's one of the biggest misses. If you were to actually let those people go, whether deliberately or just by showing a lack of empathy or interest in, you know, what their goals are, that's particularly egregious. As opposed to, you know, not necessarily sticking your head in the sand, but you can't solve everything necessarily.

[00:10:08] You've got a mission as a company, but just to optimize what you have seems like a much smarter play than letting these people go trying to say, okay, we're going to go fish again and spend, you know, 6x or whatever it is to try to bring in the right people with shrinking tenures and loyalty and still, you know, continually low engagement levels and things like that.

[00:10:32] It just seems like it's just not a good investment.

[00:10:36] For the majority of cases, it's not just an ethical benefit.

[00:10:39] Obviously, ethically, it's better to keep people and train them rather than letting people go and hiring new people to do similar jobs.

[00:10:45] But from a company outcomes perspective financially, I think the average varies depending on the type of role, but it's typically at least twice as fast to redeploy and train versus hire and onboard.

[00:10:58] And sometimes even faster than that.

[00:11:00] I mean, if you look at even just the hiring for potential without the consideration of whether you've let anybody go for a lot of the newer roles that have emerged in the last couple of years, where there aren't that many people with experience, whether it's an AI or whatever it might be.

[00:11:15] Obviously, the number of people that are available for you to hire increases if you didn't have all of the requirements.

[00:11:21] LinkedIn estimated it increases more than tenfold in terms of talent pools.

[00:11:24] But the impact it has on the business is usually it is more than twice as fast to hire and train for somebody who hasn't had every skill set or every experience than it is to hire for somebody who meets all of the experience requirements.

[00:11:37] So it isn't just in the case of existing employees and how do you avoid layoffs.

[00:11:42] It's in all cases.

[00:11:43] But obviously, that doesn't always work.

[00:11:45] You might not have the infrastructure to do those training programs.

[00:11:47] It may be hard.

[00:11:48] And so it can be simple on paper, more nuanced in practice.

[00:11:53] And so, you know, as an organization, you have to make responsible decisions of whether you have capacity to train people and so forth.

[00:11:59] But going back to the transparency of information, usually the issue is companies don't have the insights into, well, could we actually train people?

[00:12:08] Who could we train?

[00:12:09] Are there ways of identifying who has potential?

[00:12:12] And so the friction that's stopping companies often isn't the ability or the budgets to train, but not knowing which populations to hire or address, internally or externally.

[00:12:23] And that's the easy problem to solve, which is what the whole skills movement is really about.

[00:12:26] And in the process, you also end up having the tools to be more inclusive and fair in your hiring by looking at skills over credentials, which is a sort of a positive externality of taking that approach.

[00:12:38] In addition to the fact that it gives you more effective ways of filling roles quickly.

[00:12:42] I was reading about some of your recent solution updates.

[00:12:45] The talent risk management dashboard, is that part of that sort of understanding where the puck is going and where you might need to supplement existing staff?

[00:12:54] Sort of a build versus buy kind of thing?

[00:12:57] Yeah, it is.

[00:12:58] And I think there's a bigger movement that is happening in the way businesses look at talent that we're at the early stages of.

[00:13:09] Now, when we started the company 10 years ago, the shift that was happening in the HR space was that companies needed more sophisticated technologies to treat talent like customers because there was a war for talent.

[00:13:23] And there was an opportunity in terms of how technologies were evolving to start taking charge in that war by being more proactive and having better data.

[00:13:31] And in many ways, you know, where we started with a skills based AI CRM, it was bringing the kind of tools that sales teams had in systems like Salesforce into the context of what you needed as a talent in HR team.

[00:13:45] But what's happened and is happening today is a slightly bigger tectonic shift, not just in the HR industry, but in industry.

[00:13:53] You know, because of the speed of technological innovation, most companies are having to place existential bets on where their people are, what industries they operate in.

[00:14:03] You know, traditional construction firms are hiring chemists and researchers to reinvent materials.

[00:14:07] Car manufacturers like General Motors are building lunar rovers.

[00:14:11] All companies in all industries are transforming at a very accelerated rate, which means that the question of do we have the right people?

[00:14:20] Are we hiring well?

[00:14:21] Is no longer an efficiency, war-for-talent optimization question, but a much bigger question of what makes us survive and be competitive as a business.

[00:14:31] And in some ways, that's always been true.

[00:14:33] People have always been core to the business.

[00:14:35] But today it is much higher stakes for every business, which means that the question of is our talent strategy at risk is becoming more front of mind to many business leaders, whether it's CEOs, CFOs, CHROs too.

[00:14:53] And one of the interesting things about the word risk is most parts of the business are very risk conscious already and talent teams are not when it comes to people plans because they haven't really had data to model it.

[00:15:06] For example, most companies have a very sophisticated view of looking at whether their revenue targets are at risk.

[00:15:11] You have a pretty sophisticated view of knowing how likely customers are to stay to convert.

[00:15:16] The idea of is our people strategy at risk, how likely we are to hire for our roles, train for them hasn't been modeled very effectively in the past.

[00:15:24] And so a lot of what it comes down to is we now have the need for this kind of risk prediction for businesses to know that their overall strategies are on track.

[00:15:34] At the same time, as there's the possibility emerging to make those predictions because AI models that look at people data and labor market data are getting more sophisticated.

[00:15:41] And so you have this opportunity now to say we can, of course, keep making better employee experiences and fairer decisions about mobility and redeployment.

[00:15:50] But we can also help businesses consider their scenario planning for, you know, do we place these business bets?

[00:15:56] Do we go into these markets with a lot more informed information about whether the people strategy that supports that is realistic?

[00:16:02] Yeah, that makes sense.

[00:16:04] So one of the topics that has come up a lot over the past year or so is this talent intelligence versus people analytics kind of camps, right?

[00:16:14] And, you know, how people are defining those terms, particularly talent intelligence may vary.

[00:16:20] But the premise, I think, at least how I think about it is the way you're describing this complex, you know, labor market, making strategic business decisions, forming a talent strategy that aligns with your growth plans, your business strategy, your technology strategy.

[00:16:37] How is it that so many organizations are still like separating these teams, right?

[00:16:43] Like talent intelligence typically looks outward at the labor market and in-demand skills and things like that, whereas people analytics looks at just your people.

[00:16:52] And those two things don't necessarily meet.

[00:16:55] And I guess I'm curious to get your observations.

[00:16:58] Are people starting to collaborate more across those pre-existing silos?

[00:17:04] I don't know, maybe Beamery users are recognizing that and you're already helping them break down those barriers by bringing all that data together.

[00:17:12] I think it's an early stage in the lifecycle of how data intelligence is being used beyond centralized BI or intelligence functions.

[00:17:25] You know, the kinds of companies that have been more sophisticated in looking at talent intelligence have often come from areas where they have the hiring of people as a core internal technology or competency,

which is often, you know, high-volume hiring and tech firms and so forth, whether it's e-commerce or companies like Uber or Amazon and so forth.

[00:17:51] And the decisions around, you know, where they open warehouses or which markets they go to, from a consumer and labor perspective, are so hand in hand that they have to be sophisticated in areas of talent intelligence just as part of how they build their business.

[00:18:03] I think for many more traditional businesses, it's a more analytics-driven function of looking at, you know, are you retaining employees?

[00:18:12] What is employee satisfaction rather than trying to make these predictive bets?

[00:18:15] But in all cases, whether it's an HR intelligence or just general business intelligence, AI is slightly changing the landscape by making insights much more accessible to an unsophisticated user.

[00:18:27] Because historically, you know, using any form of...

[00:18:31] I want to take a break real quick just to let you know about a new show we've just added to the network.

[00:18:37] Up Next at Work, hosted by Gene and Kate Akil of The Devon Group.

[00:18:43] Fantastic show.

[00:18:44] If you're looking for something that pushes the norm, pushes the boundaries, has some really spirited conversations,

[00:18:52] Google Up Next at Work, Gene and Kate Akil from The Devon Group.

[00:18:59] Deep intelligence requires a certain intent and sophistication.

[00:19:05] You know, you log into your BI system and run certain queries.

[00:19:08] Increasingly, we don't need to be even in the headspace of I need to look at analytics in order to benefit from them.

[00:19:14] You could be doing a daily task like writing an email or opening a job description or any activity.

[00:19:22] And your tools can provide an adjacent insight.

[00:19:25] Like if you're opening a job and writing a description, it can say, hey, before you post this, did you know that you could look internally?

[00:19:29] There's some people who've already raised their hand and are relevant to it.

[00:19:32] If you are having a conversation with a particular colleague, similar insights can say, hey, in our internal employee system, this may be useful to your conversation.

[00:19:39] And so this topic of what is BI, whether in the talent intelligence space or analytics space or otherwise, is starting to become a bit more nuanced and embedded into people's day to day decisions and interactions.

[00:19:52] And I think that changes the landscape of both what's possible and where this is heading, because I think where it's heading as an industry and what it unlocks is a way for this to no longer be about the functions in the business,

[00:20:05] whether it's HR, whether it's HR analytics or intelligence or elsewhere, and much more about who's doing the deep analysis versus who benefits from the outcomes of that analysis being available in the flow of work and day to day decisions.

[00:20:19] And is there an element of quality control, which might be what BI and analytics teams start doing before those insights start getting pushed into people's day to day usage?

[00:20:28] Yeah, no, I think that makes sense. And I know it's an evolution. Sometimes I get anxious because I see where this is going and it just takes some people longer to get in that direction.

[00:20:39] It'll take time. The technology to do a lot of what we're talking about in terms of insights being more sophisticated is there.

[00:20:46] But just because, you know, generative AI is here doesn't mean everyone is using it for all of its use cases, even if it's easy.

[00:20:53] I think and it's not just a case of habits. It's a case of as a business user or as an employee, there is a web of tools and interactions that has to adjust before these things start to be adopted.

[00:21:03] It's much easier as a consumer to just log in and try something out.

[00:21:07] In the context of your business decisions, there has to be sort of an ecosystem and that hasn't come together yet.

[00:21:11] So I think we're not going to see, in my mind, you know, major change in how people work overnight.

[00:21:18] But if you look at it on the sort of five year horizon, I think it's going to start to come in just as it gets embedded into our Slack and our teams and other products that we use every day.

[00:21:25] It seems like some of the generative AI capabilities and the data that it's been trained on, both external trusted data and internal proprietary data,

[00:21:36] it seems like that should open people's eyes to some of these possibilities and the need to break down some of these silos and increase investment in these areas.

[00:21:47] I was talking to someone recently about the fact that finance and leadership wants people to bring strong business cases and bring data when they're seeking investment.

[00:21:59] And you've got people in analytics teams, workforce analytics teams who have the data, have the expertise, have everything you could need to support and bolster a business case for some of these investments.

[00:22:12] And yet they're still not getting it.

[00:22:14] So I don't know, maybe that's a cultural thing or just a myopic perspective on, look, we've got to tackle, you know, what's around the corner.

[00:22:23] We've got to worry about this quarter, not, you know, what's going to happen in 2026.

[00:22:27] Yeah, I mean, it can be, but it's also risky to go the other way, which is, you know, to go all in on a hypothetical technology innovation that hasn't yet had specific validations.

[00:22:42] I think each business and team ideally would have a sort of a validation strategy of which challenges do we have that some of these new technologies could meaningfully address.

[00:22:54] And so it doesn't need to become a, well, do we think about the next quarter?

[00:22:57] What do we think about this theoretical future?

[00:22:59] But there can be some overlap.

[00:23:01] Are there problems for the next quarter that if we were to anticipate happen again the following quarter, we could spin out some pilots and look at whether in the future some of this stuff can help.

[00:23:10] So, for example, you know, if one of your problems is the amount of time employees have to spend on internal support queries and tickets, it might be pretty quick to run a pilot and say, does creating an internal knowledge base or support bot provide a lot of value with very little effort?

[00:23:27] And that's what a lot of companies have done.

[00:23:58] You know, firms like McKinsey have ended up creating bots for their own consultants that only they could build because it looks at their own past consulting cases and it serves their own consultants and being able to work better.

[00:24:11] And they aren't a tech company, but they can hire a number of tech resources and build that suddenly much quicker than would have ever been possible.

[00:24:18] And I think there's a lot of those types of internal experiments that, you know, have a lifecycle of at least a year or two before they start to really get more traction.

[00:24:27] And I think we're in the middle of that for many cases.

[00:24:30] So we'll probably see more of it and start to see that maybe some companies that look more myopic outside in are in fact going to be a lot more creative than meets the eye in the next year or two.

[00:24:41] You know, you've always got to balance your tactical and your strategic, right?

[00:24:44] And you've got to play some bets.

[00:24:45] Your risk profile in this area may need to be different than it typically is.

[00:24:51] But you also need to pay attention to impending legislation, which I know you guys stay on top of and we'll get into that.

[00:24:59] But just while we're on the generative AI topic, I did want to ask about TalentGPT, how that's going and probe that a little bit in terms of who are the primary users and sort of what's next.

[00:25:12] Yeah, absolutely. So we announced TalentGPT, I think, March last year.

[00:25:17] And it's a generative AI interface built on top of the largely job architecture and skills-based AI models that we've built for years,

[00:25:29] but that weren't always easy to interact with in a consumable way.

[00:25:33] So, for instance, we could tell you through our own AI models, before generative AI, if you have a particular job title or experience,

[00:25:41] these are the most relevant skills, these are the skills relevant to a task.

[00:25:44] But that's not always the most consumable way of getting that data.

[00:25:48] You might just say, hey, can you help me find some job requirements?

[00:25:51] So in many ways, the power of generative AI in general is to make conversational interactions connect with other technologies and products that have already existed.

[00:26:02] And that's what TalentGPT was for us.

[00:26:05] Suddenly we could say, you don't have to put in or click a button.

[00:26:08] You can just say, hey, help me write a job description.

[00:26:10] And then we could call our AI models that find the right skills and so forth.

[00:26:13] And so the early use cases were to run this as an alpha program, to ensure control of what happens when you ask a question that might not fit our AI models,

[00:26:27] that the interaction can essentially fail safely, i.e. tell somebody this isn't what we do,

[00:26:32] because we're not trying to make a general purpose conversation bot.

[00:26:35] We're trying to make very particular interactions that our products and AI models already serve work in a much easier and more seamless way.
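A scoped, fail-safe assistant of the kind described here can be sketched roughly as follows. This is a hypothetical illustration in Python, not Beamery's actual implementation; the intent names and the keyword routing are invented for the example (a production system would use a trained intent classifier rather than keyword matching):

```python
# A scoped assistant: classify each request against a fixed set of
# supported intents and refuse anything outside them, rather than
# answering freely like a general-purpose chatbot.
SUPPORTED_INTENTS = {
    "write_job_description": ["job description", "job post", "role description"],
    "suggest_skills": ["skills", "requirements"],
}

REFUSAL = "Sorry, that's outside what this assistant supports."

def route(request: str) -> str:
    """Return the matching intent name, or 'refuse' if none matches."""
    text = request.lower()
    for intent, keywords in SUPPORTED_INTENTS.items():
        if any(k in text for k in keywords):
            return intent
    return "refuse"

def handle(request: str) -> str:
    intent = route(request)
    if intent == "refuse":
        return REFUSAL  # fail safely: tell the user this isn't what we do
    # Otherwise, hand off to the pre-existing, purpose-built AI model.
    return f"[dispatch to existing AI model for: {intent}]"
```

The key design choice is that the generative layer only routes and phrases; the actual answers come from the narrower, already-validated models behind it.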

[00:26:42] And there's also a question of data safety and security.

[00:26:45] We don't need to store any information, but we may still run into users sending sensitive data, even though there's no need to.

[00:26:54] So you have to be able to catch it, filter it out.
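The catch-and-filter step for sensitive data could look something like this minimal sketch. It is illustrative only; a real deployment would use a proper PII-detection service rather than a handful of regexes:

```python
import re

# Scrub obvious personally identifiable information from user input
# before it is sent on to a hosted model or stored anywhere.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b(?:\+?\d[\s-]?){9,14}\d\b"),  # 10-15 digit numbers
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each detected PII span with a bracketed type label."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

Redacting before the model call means nothing sensitive needs to be stored or transmitted even when users paste it in unnecessarily.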

[00:26:55] So the early couple of months, we didn't just launch the product.

[00:26:58] We started onboarding clients into this alpha program.

[00:27:02] And one of the things we found is because of the kind of companies we work with, which is largely the Fortune 500 or Fortune 2000,

[00:27:08] even for very basic use cases that were very controlled, like help me write a job description,

[00:27:14] there is a lot of legal review required, partly because the HR industry is under particular scrutiny, so anything that touches AI requires jumping through certain hoops.

[00:27:22] So we excluded certain use cases.

[00:27:24] We certainly don't do anything like applicant ranking or scoring with generative AI.

[00:27:28] There's many risks to that, but that wasn't even a use case for us.

[00:27:31] And we very much focused in on things like writing better job descriptions, running better interactions for finding reports, et cetera.

[00:27:39] But the legal reviews were still necessary.

[00:27:41] And it basically took maybe until the end of last year for many companies to really start approving both the launch of those pilots and some of the legal.

[00:27:50] Part of our TalentGPT launch was in partnership with Microsoft,

[00:27:54] as some of the generative AI components were running through Microsoft Azure and their use of OpenAI.

[00:28:00] And so in earnest, a lot of the validation of people actually starting to use the product started this year, almost a year later.

[00:28:06] And I think in that regard, it's still early doors for the true potential of this,

[00:28:12] because a lot of those use cases that we focused on make the product easier to use, et cetera,

[00:28:18] and I think what really brings them into maximal potential is if those interactions that are generative AI based

[00:28:27] don't require you to go into a different system.

[00:28:28] Because right now, all of TalentGPT has been built into the Beamery interface.

[00:28:33] You know, you can click a button or you can chat to the assistant or the copilot.

[00:28:37] And I think a lot of the real value comes in when you can make those interactions of talking to a copilot happen within a product that you are using all the time,

[00:28:47] like a Microsoft or a Slack or maybe a LinkedIn if you're a recruiter.

[00:28:50] And so what we're now starting to do is run the next iteration of our TalentGPT rollout program,

[00:28:56] which is to start to embed it into that day-to-day ecosystem,

[00:29:00] where I think it starts to unlock access to a lot more users in a much easier way and really achieves the goal,

[00:29:07] which is minimize not just context switching, but the cognitive load of what do I have to do?

[00:29:12] And in that context, you know, some of the experiments we're running are what I said in the conversation earlier.

[00:29:19] Beyond helping with the task, like writing a role description or showing some requirements,

[00:29:23] what extra insights can we show that give you guidance?

[00:29:27] For example, can we tell you that before you post this role, consider removing these requirements in order to not be at risk?

[00:29:34] And that's the other area of experimentation that's been so far proving to be a very valuable area.

[00:29:41] But we need to plug in the right level of insights to really maximize the potential of those products.

[00:29:46] Yeah, I like that approach just coming from, I've spent a lot of time in the broad sort of responsible AI space over the last year plus.

[00:29:56] And I think one of the elements of all of this is that, you know, people are going to take advantage of these interfaces.

[00:30:02] They're going to like the natural language, the conversational nature of it, whether it's, you know, in the future,

[00:30:08] it's mainly voice or you're typing through, you know, text interface.

[00:30:13] But just the fact that you're asking or you're sort of preempting people before they do something not intentionally bad,

[00:30:22] but just they're just in the moment, they're in the flow of their own work,

[00:30:26] and they're just not thinking critically at every moment.

[00:30:30] And you're almost injecting some nudges, like some of the concepts from the digital adoption platforms, to say,

[00:30:38] hey, let me just get you to pause for a second.

[00:30:42] Do you really need to, to your example before,

[00:30:46] it looks like you just sent me a document that has some personally identifiable information,

[00:30:51] or maybe even sensitive and confidential information.

[00:30:54] I actually don't need this to execute the request that you gave.

[00:30:59] So basically, you're preemptively trying to prevent some weak links, I guess, in the flow.

[00:31:06] Yeah. And again, I think that's where there's a unique opportunity for Gen AI in general,

[00:31:11] as guidance on what you have not thought about,

[00:31:16] and as a guide to helping you ask better questions, rather than just as a sort of alternative way to execute on a task.

[00:31:22] You know, I think the, the early experiences most people would have had with generative AI is likely,

[00:31:29] hey, write me a poem or draw me an image, which is kind of something that you may not have done before,

[00:31:34] but you were giving it a task.

[00:31:35] I think the sort of breadcrumb of, hey, you maybe didn't notice this, like you said, with sensitive data,

[00:31:41] or, hey, I think Microsoft's doing this, like, hey, you're about to have a meeting, here's some relevant information.

[00:31:46] I think it's starting to become more than just an assistant to something you want to do,

[00:31:50] and more of an assistant to things you might not have thought about.

[00:31:54] And that starts to open up a different window for just how technology helps us be more effective.

[00:31:59] What you just said reminded me of some of my IBM Watson experiences in the last generation of AI tools,

[00:32:06] but one of the concepts that it always showed when we were demoing it to, you know, the C-suite was,

[00:32:13] it knows what it doesn't know.

[00:32:39] Right?

[00:32:40] No unknowns.

[00:32:41] And so it seems like there's a lot of opportunity to incorporate,

[00:32:46] and maybe you guys have already done this because of the trajectory, you know,

[00:32:50] Beamery wasn't founded, you know, last year.

[00:32:53] So I guess the question is about incorporating, like, both predictive AI and generative AI.

[00:33:02] And some of those scenarios that we were just talking about could be,

[00:33:06] maybe it sort of knows where you are on the risk scale, or on providing, like, a really reliable,

[00:33:16] you know, output or response.

[00:33:19] Don't give me some BS, you know, answer.

[00:33:22] Don't try to sound like you know, because I'm going to take that and I'm going to run with it.

[00:33:28] So if you don't know, just maybe tell me or tell me what else you need to be more confident

[00:33:34] and give me a reliable answer.

[00:33:36] So is that part of the trajectory and the maturity of generative AI, to take some of those,

[00:33:44] you know, last-generation AI principles and concepts and incorporate them? Because to me, that's how you are going to grow to trust, you know, TalentGPT or whatever the platform is.

[00:33:58] That's how you're going to trust it, because otherwise you're just my know-it-all co-worker who's full of shit.

[00:34:05] I mean, that's how we're approaching it.

[00:34:07] But I think it's an optimistic view to assume that that's what all generative AI is being built to do, right?

[00:34:15] I think a lot of research into how to really create productivity out of this AI wave is looking at agents and automation

[00:34:26] rather than what we're describing, which is this kind of augmenting people with,

[00:34:30] I can help you here, I don't know this.

[00:33:58] So I think we are seeing a sort of bifurcation in which direction AI is being used: to clone people's work,

[00:34:42] whether it's a sort of digital twins or just alternative automations, versus to assist people's work.

[00:34:46] And I think it's a real interesting area of ethical conversation, you know, what's right.

[00:34:52] But for us, the goal is very much to give people transparency and insight rather than try to automate.

[00:35:01] Because, you know, automation exists and has already existed, but I don't think it's where these LLM type AIs are right to be used today.

[00:35:11] You know, when we talk about risks of hallucinations, it's very different if you are giving somebody information and saying,

[00:35:17] well, this is where I found it and so on, but it might be incorrect versus if you start to apply AI that could hallucinate to like large scale automation tasks.

[00:35:25] And that, especially in the HR domain, is pretty scary and at least for now being regulated, which is good to see.

[00:35:32] But I think the question is how do you create AI that isn't just truthful, taking some of those old principles of,

[00:35:40] I don't know this, I'll flag it to you, but gives people better ways of navigating information in general, because the world is full of information that you can't process or verify as accurate or not.

[00:35:51] So we have an opportunity for AI to help us zoom in on data sources and information better.

[00:35:57] And, you know, tools like Slack are doing a good job of this, I think.

[00:35:59] They've started to embed summaries of conversations, and the summary might not be perfect, but it will reference which bits it took them from.

[00:36:07] And so the user habit that's being formed isn't to make you trust and rely that the words that are summarized are always going to be right.

[00:36:14] It's as an alternative notification engine that gives you a quick read and a way of drilling into check.

[00:36:19] And I think that that sort of habit formation of how do you use AI to not assume it's always right, but, you know, Google searches aren't always right either,

[00:36:27] but to assume that it gives you a different way of inspecting things you might not have noticed in a more efficient pattern.

[00:36:33] And I think that's, you know, a pattern of responsible AI design that applies very well to the HR domain.
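The drill-in pattern being described here, where a generated summary always points back at its sources, can be sketched as a simple data structure. This is a hypothetical illustration, not Slack's actual format:

```python
from dataclasses import dataclass, field

# Each summary bullet carries pointers back to the source messages it
# was drawn from, so the user verifies rather than trusts the words.
@dataclass
class SummaryPoint:
    text: str                          # generated text; may be imperfect
    source_ids: list = field(default_factory=list)  # message IDs to drill into

def render(points):
    """Render bullets with their source references attached."""
    lines = []
    for p in points:
        refs = ", ".join(p.source_ids)
        lines.append(f"- {p.text} [sources: {refs}]")
    return "\n".join(lines)
```

The habit this trains is exactly the one mentioned above: treat the summary as a notification with a quick way to check, not as ground truth.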

[00:36:38] You're right.

[00:36:38] It doesn't necessarily have to be perfect if you're summarizing something, you know, as long as it got the gist, if you're summarizing a set of documents, or some of this legislation around AI, which is ridiculous, right?

[00:36:48] It's like 800, 900 pages.

[00:36:51] But then I think about like interview intelligence tools that are transcribing interviews and summarizing it, right?

[00:36:58] So, I mean, maybe it did a good job of summarizing it.

[00:37:02] Maybe it got the gist of it, but maybe it also missed some nuggets that a human being might find particularly, you know, insightful.

[00:37:10] So I think we need to be careful depending on the use case.

[00:37:12] This is a great example because it really boils down to there being a different risk when you're assisting someone in being more productive versus when you are potentially influencing an employment decision, right?

[00:37:23] You know, bias emerges in any attempt to summarize or change anything because any words you've removed create a different potential context, right?

[00:37:30] And if what you're summarizing is presidential speeches, then you run a risk of, you know, creating bias in how people vote.

[00:37:38] If you're summarizing interviews, you create a bias that somebody might not be hired based on how you summarized.

[00:37:42] If you're summarizing a newsfeed or a Slack for your own personal consumption, there's not necessarily the same risk.

[00:37:47] And so I think, you know, one of the tricky things is, as a user or consumer of these products, you're not necessarily aware that the same tool that's summarizing is suddenly super risky in one use case and not that risky in another.

[00:37:58] And I think signaling that element of, you really should dig into this and consider these risks, is part of what we have to start making key to these interfaces and how these things are served,

[00:38:10] being usable in a safe way without assuming everyone's very sophisticated and thoughtful about using them.

[00:38:16] Absolutely. Just staying on the responsible AI front, I have to give you kudos for being early. And still to this day, I feel like you're one of, I don't know, maybe a dozen or so companies that I've seen that have taken the initiative to be independently audited, you know, for your algorithms, to check for adverse impact and other potential risks.

[00:38:41] And so, you know, I did spend time getting certified by a nonprofit to conduct audits in New York City, and, you know, it just seems like everyone was sitting around and waiting; even the employers who were doing a lot of hiring weren't going through that audit.

[00:38:58] And so, you know, I know the law in New York City is imperfect and not necessarily enforceable as written.

[00:39:08] I'll just kind of say that. But I do think that it's important, and maybe you guys have seen this already in terms of prospects, but it just seems like it was inevitable for companies in technology evaluations to start asking some tough questions.

[00:39:26] I mean, I brought this up in a LinkedIn post before HR Tech last fall. And I could probably just repost it again, because what I saw at Unleash a couple months ago was the same thing: 90% of the conversations I had were about AI.

[00:39:40] But if it was about legislation and being responsible, which goes way beyond legislation, right, we're talking about responsibility by design, it just seemed like if we were having a responsible AI conversation,

[00:39:54] it was probably because I brought it up, right? So I think there's a long way to go in this industry.

[00:39:59] But I just think if as you're doing vendor selection, why would you not incorporate some risk mitigation by asking the right tough questions about, you know, whether you've been audited? Are you a trusted partner or not?

[00:40:12] Yeah, I think it's a really important conversation for a number of reasons.

[00:40:17] One of which is, when it comes to HR and bias, what really matters is how you as a company monitor your own decisions, not just what the technologies are.

[00:40:27] And it's actually something we've recently started to unlock because bias auditing technology, third party bias auditing technology as a space is maturing to enable companies to self audit AI models on their own data, not just rely on a vendor.

[00:40:49] And what that means is, you know, you can have, let's say, a skills-based AI model that statistically reduces bias on average across the world, because, one, in general, skills compare favorably to credentials, et cetera.

[00:41:01] But two, let's say the models have been tested. That doesn't mean that in a very specific company there couldn't be bias, because there could be a population set where the bias exists from the types of people that apply, and then the model perpetuates it, right?

[00:41:14] Like, you know, just like human decisions might create bias down the line. Same with the interview technology. It may be that the way that assessments are summarized doesn't create bias on average, but for a particular company it does.

[00:41:25] And so I think the maturity of recognizing that the thing to worry about is not just AI, but the fact that people make decisions within companies, and those decisions can be biased.

[00:41:38] And some of those people might already be using third party tools without them being formally approved in the company for various things.

[00:41:44] So how do you just monitor the state of fair and equitable decisions in the business, and then consider does adding a technology improve that or not?
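One concrete form of the self-auditing discussed here is an adverse-impact check over a company's own selection data, using the "four-fifths rule" convention from US employment-selection auditing. A minimal sketch, assuming simple per-group selected/total counts; real audits involve far more (statistical significance, intersectional groups, stage-by-stage funnels):

```python
def selection_rates(outcomes):
    """outcomes maps group name -> (selected, total); returns rates per group."""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

def adverse_impact(outcomes, threshold=0.8):
    """Compare each group's selection rate to the highest group's rate.

    A ratio under the threshold (0.8 by the four-fifths convention)
    flags potential adverse impact for that group.
    """
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {
        g: {"rate": r, "ratio": r / best, "flag": (r / best) < threshold}
        for g, r in rates.items()
    }
```

Run on the company's own decision data, whether or not an AI tool produced them, this is the kind of baseline monitoring against which "does adding a technology improve things" can actually be measured.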

[00:41:55] And I think that the challenge is, when you look at HR as an industry, it is a lot more risk averse than most other parts of the business.

[00:42:03] You know, I see many companies that have been launching AI to their consumers in their products, but have not touched it in their HR space.

[00:42:13] And eventually, that's going to change, right? Eventually, there's going to be an element of sort of competitive differentiation and recognizing that these things are being used.

[00:42:21] But I think we need more maturity in how to evaluate, not just the technologies, but these topics of how do we even look at bias?

[00:42:28] How do we look at ethics? How do we look at what people are being trained on? And I think it's been surprisingly slow as a maturation, which has actually led to, you gave the New York example, some of the regulators going out and saying, we are actually trying to encourage AI, not stop it.

[00:42:45] Keith Sonderling, you might have seen, has been at the EEOC for, I think, seven years, and is actually leaving, I think, at the end of this year.

[00:42:52] But he's been a big advocate in the HR tech industry that the EEOC is trying to encourage technology adoption and AI adoption.

[00:43:01] We see that it's on average having good effects. Here's how we want you to be thoughtful about not just trusting it, but reviewing it.

[00:43:07] But the biggest thing we're trying to stop is companies not auditing, doing traditional tap-on-the-shoulder decisions,

[00:43:14] not using systems to monitor what they do.

[00:43:16] And the fact that it requires the regulators to encourage people to not worry about the regulation is quite an interesting phenomenon.

[00:43:21] Yeah, I've had a few conversations with the Commissioner.

[00:43:24] I didn't realize he was leaving his post because I know he's going to be at HR tech and he's been making the rounds to a lot of different events.

[00:43:32] But yeah, I think it'll be interesting to see how regulations evolve.

[00:43:38] I mean, obviously, we could see some things at the federal level, I'm guessing maybe next year, but they're making slow progress in that regard.

[00:43:45] But I understand his position in terms of the need to obviously comply with longstanding civil rights legislation.

[00:43:54] And, you know, it's not necessarily legislation around AI.

[00:44:01] It's, let's not let AI scale while not mitigating the biases.

[00:44:08] Yeah.

[00:44:08] So, yeah, we'll we'll see how that evolves.

[00:44:10] But I just think in general, whatever the legislation is.

[00:44:14] First of all, we see the EU AI Act and its full, you know, very lengthy documentation.

[00:44:20] And so people have been analyzing that.

[00:44:22] And I just think if you can align yourself to some of the core principles of their sort of risk pyramid and some of the ways that they've outlined, you know, consumer protection and candidate protections and things like that, I think you can probably be in good shape for legislation elsewhere, including here in the States.

[00:44:39] I agree. I suspect that once, as you say, the EU's lengthy AI Act is fully rolled out, it's going to become the standard for other countries, just like GDPR did, because once you have a really comprehensive standard in a large jurisdiction, it tends to become easier to adopt it.

[00:44:57] But I think the implications of the EU AI Act may end up being more far-reaching, just like they were for GDPR, than people might expect.

[00:45:08] You know, one of the things that we're considering to your point around not just current regulation, but what's right and ethical is to what degree can there be full consent and transparency about AI across both consumer experiences and employee experiences and HR experiences?

[00:45:24] And I think the answer has to be, you know, it should be complete.

[00:45:27] Like you should be able to opt in or out of anything, any kind of data or technology use.

[00:45:31] But that isn't how current regulation has to date approached it.

[00:45:36] You know, you have like New York law talking about accommodations, but it's to date been quite vague and not really requiring any explicit opt in or out.

[00:45:43] Whereas I think the EU AI Act and what happens next might start to change that.

[00:45:47] We may see a lot more obligation to have not just transparency around how your data is used, but around what AI might do and how you opt in and out of it.

[00:45:56] And I think starting to prepare for what that might mean and how we adopt technology is going to be quite important.

[00:46:02] Yeah, absolutely. So I know you guys have focused on, you know, the ethics and, you know, all the aspects of what we call responsible AI.

[00:46:14] So human centric and ethical and fair, transparent.

[00:46:18] When you think about like the whole talent ecosystem and, you know, moving these pieces around and people coming from different talent pools, are you trying to apply the sort of the same principle?

[00:46:34] Like not everyone who might be interviewed for a job is going to come through a traditional, hey, the job is posted on, you know, Indeed or LinkedIn or whatever.

[00:46:42] And this is, you know, you don't have one door that these people are going to come through.

[00:46:47] Maybe they're, maybe you use, you know, some of the capabilities that you acquired from Flux and you're connecting to other, you know, talent pools and other, you know, ecosystems.

[00:46:58] So I guess one of the things that I think about is how do we apply consistency: it's about the decision and the humans being impacted.

[00:47:08] It's not about a particular channel where you identified the talent.

[00:47:13] There's a couple of threads to this.

[00:47:15] I think an important one is today in the HR universe, we often look at the AI wave and the skills wave as being these two parallel separate topics.

[00:47:26] But actually they are and have to be the same topic because for pretty much anything that involves AI in the world of people, decisions or experiences, skills are how you create consistency and fairness.

[00:47:42] They're sort of the zeros and ones of connected, consistent, fair AI for people decisions.

[00:47:49] And that's partly because it's how you can compare work in the lens of like tasks and things people have to do with capabilities, like what sets of things you might be good at to do that work.

[00:48:00] And it's also because that's what gives us a way of building AI models that can be connected to your decisions and performance management or job design and so on and learning.

[00:48:09] And so a key thing to ensuring that employees have a fair way of accessing career opportunities or mentors, et cetera, is to find a way of threading your universe of HR systems, which for many large companies is, you know, 100-plus different products, to speak some common language that can connect all of those AI products.

[00:48:30] So I think today that doesn't quite work because 100 plus HR systems don't all have a common skills language and they don't all use skills based AI models.

[00:48:37] But in an ideal world, they would.

[00:48:39] You could have a common currency, and if you change what we mean by the skills we need, and so on, it would cascade into the AI used in your learning systems, performance systems, and recruiting systems in a similar way.
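A common skills language of the kind described here boils down to mapping each system's local labels onto one canonical taxonomy. A toy sketch, where the taxonomy entries are invented for illustration:

```python
# A shared skills "currency": every HR system normalizes its local
# labels to canonical skill names, so a change to the canonical
# taxonomy cascades consistently to all connected systems.
CANONICAL = {
    "python": "Python",
    "py": "Python",
    "people management": "People Management",
    "managing teams": "People Management",
}

def normalize(skills):
    """Map raw labels to canonical skills, dropping unknowns and duplicates."""
    seen, out = set(), []
    for s in skills:
        canon = CANONICAL.get(s.strip().lower())
        if canon and canon not in seen:
            seen.add(canon)
            out.append(canon)
    return out
```

In practice this mapping is a large skills graph rather than a dictionary, but the point is the same: fairness and consistency depend on every system comparing people in the same vocabulary.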

[00:48:50] So I think one thread is how do you create data consistency and skills is a big part of it so that can be a fair experience.

[00:48:57] The other part of it is you can't create fairness in a vacuum just through recommendations, right?

[00:49:01] There needs to be some kind of action available afterwards.

[00:49:04] In the same way as, you know, 10 years ago when companies first started using talent communities, most companies didn't hook up those talent communities to any form of review or messaging systems.

[00:49:13] It became this sort of black hole of submitting your interest and then never being reviewed.

[00:49:18] I think there's a similar element of creating good outcomes and fair outcomes, whether it's for employees or for candidates.

[00:47:24] But you have to consider how we take any form of interaction, if an employee says I want to do this, into something that's actionable.

[00:49:32] And, you know, I think many companies that have rolled out talent marketplaces have found a lot of value and demand.

[00:49:37] People want to do gigs, but not necessarily the supply.

[00:49:40] Like not every company is ready to create lots of projects and so forth.

[00:49:43] And I think you have to consider what is the path to giving people an action.

[00:49:48] Now, in some cases, there's a lot of value to just giving transparency, i.e. using AI to show employees here's what careers you might take that are relevant to you.

[00:49:57] But the moment you make an action available, like apply for something, you need to have a way of learning from it, following through on it.

[00:50:03] So I think, you know, companies have to consider the range of what can you give people in a fair way, whether it's just showing employees here's what is possible or where people like you have gone for information versus what you might do with, you know, actions and programs for people who then have an interest that comes out of that information.

[00:50:22] These aren't easy problems to solve, but I think it's an important endeavor to make sure that we are making fair and consistent decisions.

[00:50:31] And we're doing what gives people a fair shot, because, like we said at the beginning, there's a lot of potential that we have not tapped.

[00:50:41] You've got to be on the lookout, right?

[00:50:43] You've got to know how to discover that and put some mechanisms in place to help you, you know, surface that talent.

[00:50:50] I want to be respectful of your time.

[00:50:51] I have like probably 10 more questions to ask you, but I'm going to let you go.

[00:50:55] Thank you so much for your time, Sultan.

[00:50:57] This was great.

[00:50:58] I appreciate it.

[00:50:59] Thanks so much, Bob.

[00:50:59] It's good to see you.

[00:51:00] You too.

[00:51:01] Thank you, everyone, for listening.

[00:51:03] We'll see you next time.