Recruiting industry veteran and conversational AI expert Martyn Redstone joins Bob Pulver to delve into the transformative impact of AI on talent acquisition. They discuss the evolution of conversational AI, the importance of responsible and human-centric AI, and the need for proper governance and education around AI ethics. The conversation highlights the challenges of AI regulation and the role of different stakeholders in ensuring compliance. Overall, Bob and Martyn emphasize the need for organizations to understand and operationalize AI in a responsible and efficient manner. They cover AI use cases and implications across talent acquisition, including candidate engagement and screening, and the importance of cognitive diversity in designing human-centric solutions and experiences. The conversation concludes with a discussion on the need for individuals to embrace AI as a standard part of their personal and professional lives.


In this episode we look at: AI, talent acquisition, chatbots, conversational AI, automation, AI ethics, AI regulation, compliance, AI literacy, systems thinking, design thinking, cognitive diversity, candidate experience, AI governance, upskilling. 


Key Takeaways

  1. Conversational AI, including chatbots and voicebots, is increasingly relevant in talent acquisition and business operations.
  2. Understanding and differentiating AI terminology is crucial for effective implementation, as all chatbots are essentially software-driven conversations.
  3. Chatbot technology has evolved from basic decision trees to advanced natural language understanding and large language models.
  4. Integrating large language models into chatbot technology stacks can improve conversational experiences.
  5. There is a growing recognition of the need for governance and responsible AI, but more action is needed to operationalize it. 
  6. Effective AI regulation and compliance require ongoing education and integration of responsible AI practices within organizations.
  7. Responsible AI governance involves diverse stakeholders and should be part of onboarding and training processes.
  8. Responsible AI requires a human-centric approach and should be an extension of existing data protection and cybersecurity processes.
  9. Transparency is crucial when using conversational AI to ensure ethical practices and manage user expectations.
  10. Automation in screening processes can help alleviate capacity issues and improve the candidate experience.
  11. Designing for a better candidate experience can lead to better outcomes for all parties involved in talent acquisition.
  12. Combining systems thinking and design thinking creates a hybrid approach that enhances both process efficiency and candidate experience.
  13. AIQ is not just about technical skills, but also about mindset, adaptability, and readiness to embrace AI as a standard part of life.


Chapters 

**00:00** Introduction and Background 

**01:13** The Evolution of Chatbots and Conversational AI 

**09:17** Navigating AI Regulation and Compliance 

**19:18** Educating Employees on AI Ethics and Compliance 

**25:47** Moving Towards Operationalizing Responsible AI 

**28:24** The Use of AI in Candidate Engagement 

**34:52** Ethics of AI in Recruitment 

**38:16** Automation in Screening Processes 

**41:55** Designing for a Better Candidate Experience 

**47:20** Thinking both Tactically and Strategically in Talent Acquisition 

**52:29** Embracing AI as a Standard Part of Life


PPLBOTS: https://www.pplbots.com/

Martyn Redstone: https://www.linkedin.com/in/mredstone/

H.A.I.R. - AI in HR Community: https://nas.io/hair

 

Powered by the WRKdefined Podcast Network. 

[00:00:00] Feeling kind of left out at work on Monday morning? Check out The BARF: Breaking news, Acquisitions, Research, and Funding. It's a look back at the week that was so you can prepare for the week that is. Subscribe on your favorite podcast app.

[00:00:25] Hello everyone. In this episode I am joined by Martyn Redstone, founder of PPLBots. Martyn and I discuss the evolution of conversational AI, which is Martyn's specialty; the importance of responsible AI, including governance and ethics; and the need for proper education as AI impacts

[00:00:42] talent acquisition and organizations more broadly. We highlight some of the challenges of AI regulation, the benefits of AI in candidate engagement and screening processes, and the significance of designing for a better candidate experience. Martyn and I also chat about balancing big-picture systems thinking and experience-based

[00:01:01] design thinking to optimize and modernize talent acquisition in a human-centric way. Martyn has practical advice for elevating one's AIQ, so stick around for that. I hope you enjoy my discussion with Martyn. Thanks again for tuning in.

[00:01:16] Welcome to the Elevate Your AIQ podcast. This is Bob Pulver. I am here with Martyn Redstone, who I met through, I think it was the AI for Talent event, maybe early in January. We were both talking about AI and talent acquisition and how it is affecting the

[00:01:33] future of work. Martyn, welcome. Thanks so much for being here, and I'll let you say hello and introduce yourself. Thanks, Bob. Thanks for inviting me onto the podcast. It's great to be here.

[00:01:44] So as you said, my name is Martyn Redstone. I have been in the recruitment industry for just over 18 years. For the last 12 years I've been building technology solutions for recruiters. But for the last almost

[00:01:58] seven years now, I've been specialising in conversational AI. So chatbots, voicebots, those kinds of things, and obviously that's picked up a lot of momentum over the last 18 months since the launch

[00:02:12] of ChatGPT, which, in my mind anyway, is a glorified chatbot. But it's great that everybody's getting really excited about AI now because of a chatbot. Yeah, for sure, and I think people are

[00:02:23] getting tripped up a lot by terminology, and I'm sure you've seen this over those last seven years. I think part of it is just people's gut reaction to any technology, anything that's

[00:02:35] not done by a human. They just lump it all into AI at this point, even if it's just automation or a chatbot. I remember when I was at NBC in 2016, 2017, we were talking about implementing

[00:02:51] chatbots for customer service and things like that, IT support, and people couldn't necessarily get their arms around it, or they didn't think it was something they could scale across the organization, or they thought they didn't have enough data to feed it to make it

[00:03:07] useful. And first impressions were important for the technology; if people were turned off by it right away, it was going to be hard to get them back. So I think today's chatbots,

[00:03:18] I guess I don't even know if we should call them chatbots. How do you differentiate the terminology? It's a great question. The terminology is a really interesting one because ultimately

[00:03:28] they're all chatbots. If we take that word for what it is, a chat and a bot, it's a conversation with a piece of software, and it's the underlying technology stack that ultimately defines the experience. So back in 2016, and earlier than that, we were very much used to decision-tree-

[00:03:49] based chatbots. So kind of choose your own adventure: you click here to go down this route, say yes to go down that route. Then we kind of went down the keyword-matching route, and then we

[00:03:59] started getting a little bit more intelligent around natural language understanding, intent mapping, those kinds of things. And that's where we are right now, really. So when it comes to integrating large language models into chatbot technology stacks, the best way to use them is to

[00:04:21] have generative AI, to have large language models, do that kind of intent mapping. So rather than a chatbot builder like myself spending weeks upon weeks training a natural language understanding engine to understand that there are 72 ways somebody could say hello to a chatbot,

[00:04:38] it's basically pumping it into a large language model and asking, what are they trying to say? And then doing that intent matching on the data that comes back. So that's kind of the basics of it in terms of the terminology.
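To make that intent-mapping idea concrete, here is a minimal sketch of the pattern described above, assuming the OpenAI Python client; the model name and intent labels are illustrative stand-ins, not PPLBots' actual implementation.

```python
# Minimal sketch: let an LLM do the intent mapping for a chatbot turn,
# instead of hand-training an NLU engine on dozens of ways to say hello.
# Assumes the OpenAI Python client; intent labels are hypothetical examples.
from openai import OpenAI

INTENTS = ["greeting", "apply_for_job", "check_application_status", "ask_salary_question", "other"]

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def map_intent(user_message: str) -> str:
    """Ask the model which intent the message expresses, then match on the answer."""
    prompt = (
        "Classify the user's message into exactly one of these intents: "
        + ", ".join(INTENTS)
        + f"\nMessage: {user_message}\nReply with the intent name only."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    label = response.choices[0].message.content.strip()
    return label if label in INTENTS else "other"

# The returned label then drives the existing dialogue flow, e.g.
# if map_intent("hiya!") == "greeting": send_welcome_message()  # hypothetical handler
```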

[00:04:48] But then, you're right, we've always been able to plug data in, certainly over the last several years that I've been building chatbots. But now the

[00:04:59] lovely thing is, again, the speed of getting that up and running. You can now plug that corpus of data in, have it vectorized immediately in a database, and then have something called RAG, retrieval-augmented generation, actually just be able to retrieve

[00:05:15] that relevant data straight out of that vectorized database. So what that's meant is that using brand-new generative AI, and the other technologies that wrap around it, has really sped up the process of getting really, really good conversational experiences into the public's hands quickly.
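And here is a similarly hedged sketch of the RAG pattern mentioned above: vectorize a small corpus, retrieve the most relevant pieces for a question, and pass them to the model as grounding context. The corpus text and model names are made up, and an in-memory NumPy search stands in for a real vector database.

```python
# Minimal RAG sketch: embed a corpus once, retrieve relevant chunks for a question,
# and ground the model's answer in them. Corpus content is illustrative only.
import numpy as np
from openai import OpenAI

client = OpenAI()

corpus = [
    "Warehouse roles in Leeds pay 12.50 GBP per hour with weekly pay.",
    "Night-shift positions require the right to work in the UK.",
    "Interviews for driver roles are booked through the careers chatbot.",
]

def embed(texts: list[str]) -> np.ndarray:
    result = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in result.data])

corpus_vectors = embed(corpus)  # "vectorize" the corpus up front

def retrieve(question: str, k: int = 2) -> list[str]:
    """Cosine-similarity search over the embedded corpus."""
    q = embed([question])[0]
    scores = corpus_vectors @ q / (
        np.linalg.norm(corpus_vectors, axis=1) * np.linalg.norm(q)
    )
    return [corpus[i] for i in np.argsort(scores)[::-1][:k]]

def answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": f"Answer using only this context:\n{context}\n\nQuestion: {question}",
        }],
    )
    return response.choices[0].message.content
```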

[00:05:34] And I say conversational experiences because, while we're very much used to chatbots, you can think of any time you have an automated conversation with a business: that could be chat, it could be voice, it could be multimodal,

[00:05:50] across avatars. Even yesterday I was in a session around immersive technology, so having automated conversations in virtual reality. There's loads going on, loads going on, but ultimately it's having a conversation. Yeah, I hadn't even thought about the VR

[00:06:08] situations, but yeah, that seems totally logical to have that intersection. So when you go in for your PPLBots clients, maybe you could just talk a little bit about who your typical

[00:06:19] client is, and is an engagement more about enabling them to build their own conversational AI solutions, or do you go in and actually help them scope out

[00:06:32] what it is and then you and your team actually build it, or is it a bit of both? Yeah, so it's a very good question. The way that PPLBots, which is my

[00:06:43] consultancy, is defined is as an AI and automation agency. And so what we're trying to do is demystify things. Like you said right at the beginning, a lot of the time people just throw this AI terminology at everything, and sometimes it's not AI,

[00:07:00] sometimes it's automation, and sometimes that's all people need. So we're trying to make sure that that's very, very clear and totally demystified in the recruitment process. The vast majority of the time there's a very defined split between the types of work that PPLBots

[00:07:17] does. So we have the recruiter side, and that's across both internal talent and recruitment agencies, and again that can become very split as well, because recruitment agencies are both

[00:07:30] B2B and B2C. So they're thinking about how they can use AI to not only speed up the recruitment process from their perspective but also speed up client acquisition, conversational sales, conversational marketing, and those kinds of things. On the internal talent side, absolutely, you think

[00:07:47] about all the high-volume, low-value conversations that you can automate in the process. And then we've got the HR side as well; we do a lot of work with HR teams, so we think about, again, service desk automation.

[00:08:01] You mentioned earlier the NBC kind of IT service desk, but then you've got the HR service desk, and when we talk to HR directors, one simple question is: how many tickets have you got

[00:08:11] open on your service desk, how many unread emails, how many phone calls are you getting? A lot of that could be automated pretty straightforwardly. And then obviously within HR we've got learning and development and those kinds of things as well. But then we also work with the

[00:08:26] recruitment and HR technology vendors who are looking to implement conversational AI or generative AI solutions within their systems, and we work with them to help build and

[00:08:38] implement that, or at least roadmap it for them. So there's lots and lots we do, but it always starts with a mix of feasibility and discovery. The shiny object syndrome is pretty bad

[00:08:52] right now, right? And I've had a couple of recent conversations actually around the need for human sensitivity around this, which can sometimes get lost. I mean, I know

[00:09:05] we talk about responsible AI, and I know you and I have both talked to solution providers and clients about what that concept means, acting as an umbrella term for ethics and transparency and fairness and things like that. But

[00:09:24] human sensitivity really needs to come into play, especially in a domain like talent acquisition, and even HR more broadly, where you have humans. Those aren't just rows in a table, right? Those are people, those are applications, those are people's

[00:09:41] livelihoods, and we need to make sure we're being extremely careful with how we incorporate AI into our decision making. I think these are really important ethical questions

[00:09:55] that we have to encompass. And so it is a bit scary that the horse is out of the barn, as they say, in terms of these solutions being out there and legislation not being able to keep up.

[00:10:08] So I know you talk about that on podcasts like this and at different events and things like that. Do you feel, from the feedback you're getting, that it's resonating? Because I know

[00:10:20] there in the UK, your government thinks a little bit differently than the EU, for example, about the risk and how heavy-handed legislation

[00:10:34] needs to be to make sure we're moving forward in a positive way. So yeah, we're in a really interesting time from a legislation perspective, because the entire world is trying to legislate for AI. Personally, I believe it's too early; I think we're trying to legislate something that

[00:10:50] is moving at such a rapid pace. It's like Yann LeCun from Meta said: it's like trying to legislate for the jet engine industry when you're still flying propeller planes. And so

[00:11:01] that's the challenge we have. And legislation has been put in place; we talk about Europe and the EU because they're the first real major area in the world to

[00:11:11] pass very, very thorough legislation, and they're putting massive barriers in the way, massive barriers in the way. The vast majority of the large model providers out there are not compliant with the legislation, so there's a lot of work to be done.

[00:11:28] I think the vast majority of people I speak to are concerned because they don't really understand the legislation, and they don't understand the requirements, or who's responsible for implementing compliance around the legislation. So I have employers who are implementing AI asking,

[00:11:45] is it us, is it our providers, is it the technology provider who's responsible? There are so many questions out there. I think the thing that a lot of people have to remember is that you have to plan

[00:11:56] for the worst possible case of where you are operating in the world. We used to say the same about data protection, and it's still very much the case: always try to put in place

[00:12:08] your data protection processes as if you were processing data in Germany, because it's the strictest country in the world when it comes to data compliance. So the same thing should

[00:12:20] be said for AI regulation. Unfortunately, there's a very big disparity across the world in terms of requirements for legislation, so just putting in place processes

[00:12:34] for the worst case is basically the way to go. Yeah, that's exactly what I tell folks as well. So right now the EU seems to be the most restrictive; if you can design for that,

[00:12:49] then you'll probably be okay as the patchwork of legislation here in the US evolves. And I mean, there's legislation happening all over the world, on every continent, so

[00:13:00] yeah, it just seems like you've got to go to the most restrictive common denominator and then you'll be fine. If you like swiping, then head over to Substack and search up WRK to find

[00:13:11] WRKdefined and subscribe to the weekly newsletter. There's also the one in New York City, which, you know, we don't have to get into the lack of enforceability and the sort of language it uses and

[00:13:23] all of that, but it's something that people need to keep an eye on. And some are also arguing that HR is already doing a lot of compliance work, but I would argue this is too big

[00:13:36] for those teams to handle. It's not an expertise thing; it's not that they can't get their heads around it. I just think this is much bigger than something that HR alone needs to handle.

[00:13:49] I mean, it's not limited to data privacy and cybersecurity; it's up to companies to figure out the governance and AI ethics. And if you're going to establish committees

[00:14:01] around those topics, which I encourage people to do, it really needs to have the cognitive diversity of a broad swath of the organization. It should not be constrained to any particular C-suite executive or any one particular department, and it

[00:14:18] certainly shouldn't just be added to the agenda of any existing committees. So people need to think about that. And if you're too small to actually pull those people

[00:14:31] together, like if you're a startup building these solutions, then I would encourage people to think about AI ethics in the form of almost a specialized advisory board, where you could get others

[00:14:44] from outside the company to keep tabs on those things for you, make sure you're moving in the right direction and being responsible by design. Yeah, I totally agree. It's something

[00:14:54] I was talking to somebody about yesterday, which is that I don't understand why TA, talent acquisition in particular, has taken such a massive interest in AI regulation. I get

[00:15:09] asked about it so many times in the work that I do. I personally believe that there should be people within the organization who should be managing that; it shouldn't be up to

[00:15:22] HR and talent acquisition to be ensuring compliance and becoming the regulation experts, because ultimately they've got a job to do, and there should be somebody else in the organization, like you said, whether it's the CTO, the CIO, or whoever,

[00:15:41] who should be responsible for ensuring that, A, the organization is compliant, but B, that the education piece is happening within the organization as well. I do find it a very bizarre situation right now where people who wouldn't normally be expected to be regulation experts, as it were, are screaming

[00:16:02] out for information. I've suggested in the past that this is so important that I think it's inevitable that it becomes part of your onboarding and regular compliance training, just like data privacy or harassment in the workplace or cybersecurity policies. Yeah, absolutely, I agree,

[00:16:20] and I actually see it as an extension of all of the above. There's been a bit of a cognitive reset that happened when AI hit the mainstream. When people talk to me about

[00:16:30] responsible AI and AI regulations, it's no different to all of the other regulation out there that we've had to deal with, whether it be data protection, cybersecurity, etc., and

[00:16:43] I actually see it as an extension of all of those. I see it as: if you're carrying out responsible data protection, if you're carrying out the correct cybersecurity processes,

[00:16:56] then you should be using the same mindset when it comes to your use and implementation of AI. Are you abusing data? Are you using data correctly and responsibly? Are you using the right data at the right time? Are you transferring data across borders when you're not

[00:17:13] supposed to? From a cybersecurity perspective, are you putting confidential data into the cloud or into a non-sanctioned, unsanctioned provider? Those kinds of things that we would normally be enacting from a good data protection

[00:17:31] and cybersecurity process all seem to have been, dare I say, forgotten. Like I say, there's been a bit of a weird cognitive reset on our cyber and data privacy responsibilities as people, employees, and corporates, purely because of

[00:17:50] generative AI. Yeah, I guess I feel like certainly our privacy protections and cyber protections have advanced considerably over the last couple of decades. But I feel like even if you had an AI governance platform

[00:18:07] installed, whether that's, you know, a FairNow or Holistic AI, those guys, or something that's checking even earlier in the cycle, right, to have the observability and the monitoring right from when you're designing the solutions,

[00:18:27] I still feel like there are still humans involved, right? We haven't fully automated anything, AI has not taken over, thank goodness. But as long as there are humans in the

[00:18:41] loop, there's fallibility in the end-to-end lifecycle of how data is flowing through our processes. Talent acquisition is a pretty important one just because of the protections that we expect and because of the

[00:19:01] ethics around human-sensitive decisions, the human impact of decisions. And so there's always the potential for someone to be the weak link

[00:19:11] in this workflow and this sort of supply chain of data. And so just like anyone in a company could give up their password through some sort of spearphishing

[00:19:23] campaign or something like that, once people start to assume that the systems are protecting everybody, that's where things can fall apart, right? Or if a recruiter just says, well, I'm being measured on throughput,

[00:19:45] on efficiency, and so I'm going to trust this matching algorithm as I would a calculator, and I'm just going to say here are my top candidates, these are the people that are

[00:19:56] going to move to the recruiter phone screen or first interview or whatever it is. And then for legislation to only come in if someone actually files a lawsuit,

[00:20:08] there's sort of a big gaping hole there of potential problems that we might not otherwise uncover if we're not continuously monitoring this. And I think

[00:20:20] part of that goes back to the education piece: how do we make people realize that this is not infallible and this is something where we're going to continue

[00:20:31] to need humans in the loop to make sure we're doing this properly? And I agree, I agree. I think we're quite lucky over here in Europe because we've had the GDPR since 2018,

[00:20:43] and although people think that this is purely data protection legislation, there are sections of the GDPR which are about automated decision making and how data is used there, how it needs to be explainable, and humans' ability to be involved in that decision making.

[00:20:59] So I think we're in a nice position over here in Europe to be able to continue being responsible when it comes to decision making, data processing, etc., but from an AI perspective. So

[00:21:13] I think we're quite lucky in that respect. But I agree, I think there's a lot more education needed, and I think it needs to be coming from the organisation, which needs to make sure that people are aware, are educated, and are practicing correctly.

[00:21:32] Yeah, I know you've gone to a lot of events, you've spoken at a lot of events recently. Any insightful feedback you're getting from practitioners or solution providers? Are they recognizing this, or are they taking a wait-and-see approach and hoping

[00:21:50] that they don't get a big fine for non-compliance? What are some general thoughts you're hearing? Well, I think last year was the year of running around panicking, trying to understand

[00:22:02] what AI is and what it can do. I think this year is the year where people are trying to understand how to operationalize it, and when it comes to operationalizing, there needs to be

[00:22:13] governance in place, and people are starting to recognise that. So a lot of the conversations and a lot of the questions that I'm getting from people out there are around the governance

[00:22:25] side of things, and so that's been a very, very interesting conversation point. I still think there's a lot more talking than doing right now when it comes to anything to do with AI.

[00:22:37] I think we're in a very, very strange place from a market perspective, from a talent and people perspective, where there's not much investment going on when it comes to people-related

[00:22:49] teams, departments, and budgets, because the world's in a bit of a mess when it comes to the economy. So people are wanting to talk about it now so that they're ready to go and they understand it,

[00:23:02] and when it comes to the proverbial stuff hitting the fan, people are ready to go, they understand it, and they can operationalize it correctly, responsibly, and efficiently.

[00:23:15] That's the kind of feel that I'm getting from the market right now when I'm out talking to people at events or online and what have you. I think people want to be all over it now so that they're ready to go.

[00:23:26] So I was at UNLEASH, and certainly probably 90-plus percent of the conversations were around AI, but a very, very small percentage of those conversations were around

[00:23:41] governance and responsibility and things like that. I mean, I had to nudge a few people to even go down that path, which was a bit disappointing, but not surprising, I suppose. But

[00:23:54] I guess it seemed like there was a wait-and-see kind of attitude about some of it, even from the established vendors, with a few exceptions. Greenhouse had talked about some kind of beta

[00:24:07] for something called Greenhouse Verified, which I thought was an interesting take, at least as it relates to them putting some checks on top of their marketplace. But whether that sticks or is something that others will think about reputationally, I think

[00:24:25] it was an interesting move. I don't know if they've actually deployed it yet, but I know the president announced it was going into beta. So that was an interesting thing. But I also

[00:24:35] think it's a big ecosystem, partnerships are really important, and there are going to be a ton of integrations with all these point solutions until we see more market

[00:24:47] consolidation. So it just adds to the you-don't-want-to-be-the-weak-link kind of situation. But it also means the monitoring you're doing throughout your process, as it navigates through multiple AI solutions, either from multiple vendors or

[00:25:07] their agents or copilots that are built on different LLMs, I mean, it just seems like it's going to get even messier as more people come up with easier ways to

[00:25:22] integrate all of this. So I guess I'm curious, just from your perspective, not just on how you see the market, but as you're trying to build things based on your clients' requirements,

[00:25:36] I imagine you have to navigate some of those complexities. Yeah, absolutely. It's a very, very tricky and difficult situation right now, because you're trying to build for efficiency but also trying to build for regulation as well, and that's the

[00:25:57] difficult bit, because, again, taking it back to my previous point, legislation is so disparate across the world, and most legislation hasn't even been enacted yet, and that's the challenging part. And it's interesting, to your point around UNLEASH America, because

[00:26:19] I sometimes think that may be a geographic difference, because there are a lot of conversations happening over here around responsible AI compared to possibly what's going on over in the US. We're very much used to a

[00:26:33] very strict regulatory environment over here in Europe, and obviously with the UK leaving the EU through Brexit, that gives us a piece to look at as well. But yeah,

[00:26:47] it's a really difficult, tricky situation. So what we're trying to do, taking it back to what I said earlier, is make sure that as long as we're being responsible when it comes to building the

[00:26:59] systems, as long as we're being open and transparent when it comes to communication, not only with the business but also with the users of those systems, then we're covering it,

[00:27:12] basically, and saying, look, right now this is how we build it, and the onus is on us and the organization to continue monitoring legislation and be able to

[00:27:26] make any changes necessary. Whether, like you said, it's down to which LLM provider you're using in the background, it could be down to anything. When it comes

[00:27:36] to building for regulation, my governing starting point is always transparency. So as long as you're being transparent with the user of the system on how you're doing things, how you're using their data, how you're making these decisions, etc., then you've hopefully

[00:27:57] got it covered, and you'll be fine. Anyway, on the topic of transparency, there have been some interesting conversations, offline and online, around candidate engagement with conversational AI and its use not just for a normal interview-scheduling kind of use case, but to actually

[00:28:18] have the AI essentially conduct the phone screen or even a first interview. What are your thoughts on that? I guess some people say, if you're talking to an

[00:28:34] AI through a conversational agent, whether it was over voice or via text, and assuming there were no technical challenges, no latency, no awkward pauses or whatever, and you had good natural language capabilities, if people didn't

[00:28:54] know they were talking to an AI, is that unethical, is that problematic? From my perspective, in any of the work I ever do, I get people asking me this: they'd like to have people think

[00:29:08] that they're talking to a real person, and I steer them away from it. Not only is it, in my opinion, unethical, you're ultimately not setting up a good relationship between

[00:29:21] you and your users if you're trying to deceive them. So there's that piece, and there's legislation in places like California where you have to tell people that they're talking to a

[00:29:31] bot. But I believe that that's the correct way to do things, and there are a couple of reasons. The first thing is, absolutely, you're setting up the relationship to be one of true transparency

[00:29:46] and a proper ethical relationship. But secondly, it changes people's expectations of the experience. So if I go back to an old example: you go to a supermarket and you've

[00:30:00] got a human-managed checkout counter and you've got an automated checkout counter. When you go to the automated one, you're not being told that there's somebody sitting inside going

[00:30:12] beep and scanning the items for you. And the way that you interact with that terminal is totally different to the way that you interact with a checkout counter that's got a human

[00:30:24] on it, because you're kind of expecting it to be, dare I say, a little bit dumber. And so when people talk to a chatbot, and we see that in the way that people are engaging with

[00:30:36] ChatGPT, the whole do-I-say-please-and-thank-you, those kinds of things, people's expectations change in the way that they interact with that experience but also how they expect

[00:30:48] that experience to interact with them. And that helps from an experience perspective, but also for designing that experience. So absolutely, you need to tell people that they're interacting with a piece of software, both from an ethical perspective but also from an

[00:31:07] experience perspective as well. Yeah, I think people get tripped up, like, oh well, we've seen surveys, or we talk to people and they say they want a human

[00:31:19] experience because it's going to be empathetic and all these things. But especially for a recruiter phone screen, I mean, they're really just gathering

[00:31:30] information to know whether to move you forward or not. I don't think that's a heavily human-necessitating kind of function, right? I mean, it's obviously more involved

[00:31:44] than interview scheduling, which is a solved problem, I would say, but still, there's not necessarily a lot of banter, and

[00:31:58] it just seems like something that could be taken off of humans' plates. Absolutely. I've been building initial screening conversations since about 2017, and the whole point is, especially when it comes to the high-

[00:32:16] volume, low-skilled, or hourly markets, that the screen is very binary. It's, you know, is this the position you thought you'd applied for, when are you available, is it the right location, is it the right salary, etc., and

[00:32:33] it's very, very binary. And why the heck does a person want to be doing that over and over again? Then, naturally, on to scalability, and this is the interesting bit:

[00:32:45] if all a recruiter was doing was telephone screening, and I did a kind of time-and-motion study of this several years ago, the average telephone screen, across thousands upon thousands of them, was eight to nine minutes, and that included updating the ATS

[00:33:06] afterwards as well. So the interesting thing is, when you lay that across the working day, a recruiter can do 53 of those in a day, if that's all they do. And if that's

[00:33:17] all they're doing, that's the key point, but if you scale that out across the week, five days a week, 53 a day, that's, what, 265 screening calls a week. If you think about the number of people that apply for your requisitions, the number

[00:33:30] of requisitions you have, and all the other activities you do in the week, it becomes a capacity issue, not even a capability issue, a capacity issue, and that's where automation totally ought to be, right?
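A quick back-of-the-envelope check of the capacity numbers quoted here; the minutes-per-screen and productive-hours figures are assumptions drawn from the conversation, not new data.

```python
# Rough capacity math behind the "53 screens a day, 265 a week" figures above.
minutes_per_screen = 8.5            # midpoint of the 8-9 minutes quoted, ATS update included
working_minutes_per_day = 7.5 * 60  # assumes roughly 7.5 productive hours per day

screens_per_day = round(working_minutes_per_day / minutes_per_screen)  # ~53
screens_per_week = screens_per_day * 5                                 # ~265

print(screens_per_day, screens_per_week)  # 53 265
```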

[00:33:45] You've got this finite amount of time that a human has in their working week; you don't have that when it comes to computers. You also don't have the issue of the working week. What we find most of the time

[00:33:53] is that people engage with chatbots outside working hours, at weekends. And we see lots of interesting patterns, like software developers tend to engage more on a Sunday evening,

[00:34:06] white-collar workers tend to engage more on a Monday evening, first day of the week, they're really annoyed at their job, they want to find something new. There are loads of trends

[00:34:14] out there, but actually most of the trends point towards people engaging with talent acquisition outside of working hours, and no recruiter wants to work outside of their own working hours.

[00:34:27] So again, it kind of adds to that capacity issue. And so, what do people want? They don't actually want that kind of human element in their experience, because they know what the normal recruitment experience is, and you mentioned it earlier: it's ghosting, ultimately. And ghosting not because

[00:34:46] recruiters are bad at their job; ghosting because they don't have the capacity. So what do people want? I'll go back to the supermarket experience. Do people want to queue up for 30 minutes

[00:34:58] to scan their groceries, to have a conversation with the person behind the till who they don't really want to have a conversation with, being polite? Or do they want

[00:35:07] immediacy, convenience, speed, and agency? That's exactly what they want, and that's what they want through all of their experiences. We see that with e-commerce, we see that with booking

[00:35:17] restaurants, we see that with actually everything. And so why not have that with other experiences like talent acquisition? So it makes perfect sense from my perspective.

[00:35:30] I'm a little bit biased, obviously, but when you put it like that, a lot of people kind of sit there and go, yeah, absolutely. Yeah, no, I think that makes sense. In your last

[00:35:39] newsletter you had a really interesting article, this piece around systems thinking versus design thinking when it comes to talent acquisition, which really caught my attention as a systems thinker who has done design thinking work.

[00:35:59] I hadn't really thought of them as two different approaches to a similar problem. And so I do think it's a complex domain; there are a lot of

[00:36:14] moving parts, there are a lot of pieces, and you can solve different things in different ways. Sometimes it's improving experiences for all parties; sometimes it's favoring the candidate,

[00:36:27] perhaps to the detriment of a recruiter or hiring manager. So I do think there's a lot to think about in the interconnectedness of some of those pieces, trying to almost solve multiple

[00:36:41] problems at once. But then there's the design thinking approach, where you have a little bit more of a linear pathway, where you're trying to remove the points of friction

[00:36:52] but also taking into account some of the other parties involved as you move through the talent lifecycle. So I'm curious if you could just expand a

[00:37:04] little bit on your thinking, because I think part of your conclusion was that sometimes you need sort of a hybrid approach, and I was just curious to get some additional color on that. Yeah, absolutely. So it's really interesting;

[00:37:22] really, the catalyst behind it is that I'm an engineer. Before I was a recruiter, I was a computer science graduate, a mobile communications engineer,

[00:37:37] an electronics engineer, etc., by trade, as my granddad used to say. And so a lot of the work that I've done over my career is very much systems thinking. But when I started to work more and more

[00:37:50] within the conversational experience side of building conversation automation, it had to flip over towards more of the design thinking, thinking about the end user and working backwards. And so my thought process was that, actually, I hear a lot of people talking about

[00:38:10] design thinking and bringing design thinking to recruitment, and I also work with a lot of technology vendors who are more around the systems thinking. And it was a very interesting question that catalyzed this thought process of putting this stuff down in my newsletter. Somebody

[00:38:26] asked me, with the advent of generative AI, what's changed from a candidate experience perspective? And I really struggled to answer that. I found myself thinking that the recruiter experience has probably got better because we've sped up content generation,

[00:38:45] the hiring manager experience may have got a little bit better because we've sped up the ability to get jobs out and sped up the time to hire and those kinds of things. But has anything

[00:38:53] got better from a candidate experience perspective? Well, if you look at LinkedIn and you look at Twitter and you look at Reddit and all those kinds of things (or am I not allowed to say Twitter anymore? It's X,

[00:39:02] or X, formerly Twitter, I think that's what people call it), the candidate experience is not any better. And so my almost call to action was that everything we do in

[00:39:16] recruitment should be focused on the end user, and the end user of recruitment is somebody looking for a job. It's a highly emotive, highly stressful situation, and you want to try and alleviate that kind of emotion. And so when you're thinking about implementing

[00:39:35] I get it, the podcast just isn't enough. That's all right, head over to your favorite social app, search WRK to find WRKdefined, and connect with us. ...a new way of doing things, a new piece of

[00:39:48] technology, then yes, we should be thinking about recruitment as this kind of interconnected set of processes that work in the background, but we should also be thinking about the design element, thinking about how that experience is designed, and working almost backwards from that.

[00:40:07] And that's why I started calling out this hybrid approach, because you can almost have this kind of gradient where it goes from design into systems as you move away

[00:40:20] from the experience, designing a system that enables and enhances that experience. Yeah, really interesting. I guess I always wondered, I'm a design thinking participant, I wouldn't say I'm an expert, I've never led those engagements, though I'm sure that'll happen

[00:40:40] in the future. But I did wonder, when you have multiple parties, you have multiple experiences that you're trying to design. Is there a way to do that? If you were doing candidate

[00:40:54] experience, but you also knew you had to do recruiter experience and hiring manager experience, could you somehow superimpose those on top of each other and then look for the points of friction? I don't know if any of those software tools

[00:41:13] that help you with that mapping do that, you know, Mural and others, if they allow you to do that, but I guess that's one way to put your systems thinking hat on while you're

[00:41:25] doing design thinking, though I know that's probably easier done in a workshop environment. I'll be honest with you, yeah, Miro is one of my

[00:41:35] tools of choice for mapping out processes but also then collaborating on the solution design around that. I think the one thing we need to remember is that when we get the end point

[00:41:51] right, it tends to make everything else better. So I go back to the example of the contact centre, customer service. You've got, again, various kinds of actors and operators involved in that process: you've got the customer, you've got the contact centre agent, you've got the business.

[00:42:10] When you make the experience better for the customer, and they finally get through to the contact centre agent and they're in a better mood, they're not going to take that out on the contact centre agent. So better customer experience creates better employee experience, which creates

[00:42:26] better business outcomes. And so when you start overlaying that onto the recruitment experience, you think, well, actually, a better candidate experience, in terms of not ghosting, in terms of better information, better communication, is a better recruiter experience, because they're not

[00:42:41] dealing with a backlog of applications, they're not having the business beating them up over poor Glassdoor reviews, etc. And a better hiring experience creates better business outcomes, because employer branding goes up, because time to productivity goes up, because they get better people

[00:43:00] coming in, they get faster speed to revenue generation for those people coming in. So when you overlay one onto the other, it tends to create an entire chain of designing for a better experience for everyone. Yeah, that makes a lot of sense. Last question for you:

[00:43:20] the title of this podcast is Elevate Your AIQ, and the AIQ is not intended to be a numeric score, like your IQ score, necessarily, but it's really about

[00:43:37] how we are preparing everyone, because AI is going to affect everyone, in varying degrees, in different ways, but it really does affect everyone. And even you cited, in your last newsletter, you pulled

[00:43:52] out some stats from the Microsoft Work Trend Index that was released a couple of weeks ago. But if everyone is going to start to look for people with AI skills, even that needs

[00:44:04] to be broken down. When we think about AIQ, I guess we think about not just your skills and your ability to put in the right prompts and your familiarity with

[00:44:18] technical terminology and things like that, but part of it is your mindset, your readiness to change, your adaptability, and things like that. I mean, if I hadn't

[00:44:31] said all those things and I just said, Martyn, how would you define AIQ, what would you say are some of the key things? My thought process is that we are putting too much pressure

[00:44:43] on ourselves as individuals, and also ourselves as workers, with this whole AIQ skilling thing. I think we've seen something that looks like it's going to be a paradigm shift,

[00:44:56] and we're at panic stations. So I've got two thoughts on this. First of all, one of the things that we know is that there's a paradigm shift when it comes to computing, so eventually the

[00:45:11] way that we engage with all of the systems around us, whether it be our mobile communication systems, whether it be our personal computing, etc., can be through natural language.

[00:45:27] And we don't have to panic about that, because as humans, that's the way that we interact with the world around us anyway. So as long as we can have a conversation with a person, we can have a

[00:45:38] conversation with a computer. So that's the first thing that I get people to think about: there's nothing to panic about. Prompting is just having a good conversation with a piece of software,

[00:45:50] at its basic level. But actually, it's going to be as ubiquitous as anything else. So 10, 20 years ago we'd see on job adverts, must have experience in Microsoft Office, and that's what we're seeing now: must have experience using ChatGPT, must have experience in AI.

[00:46:10] And actually, do we ever see job adverts these days saying must have experience in Microsoft Office? No, because, number one, there's more than just Microsoft Office; there's Apple, there's

[00:46:20] Google, etc. But the experience, the use of that tooling within your everyday life, becomes standardized and naturalised. And so I think that we're just going through the same shift that we did 20, 30 years ago when we had Office first coming out and

[00:46:40] everyone was thinking, panic stations, we have to get up and running on this pretty quickly. So I think we just need to look forward into the future and know that actually this is going to

[00:46:50] become a standard part of our lives, and as long as we can have a good conversation with the people around us, we can have a good conversation with a computer, and we can use

[00:46:59] this new paradigm in computing, which is natural language interaction, comfortably and naturally. Excellent, that makes a lot of sense. I think not everyone's a systems thinker, and they maybe can't look ahead or see how these different

[00:47:19] trends and patterns are going to evolve and manifest, so I think that's a great perspective. Martyn, thank you so much for spending some time with me. It was really great

[00:47:30] talking to you, as always. And everyone, please follow Martyn Redstone on LinkedIn. He's got some great resources; he's got a newsletter about all the things that we talked about today, as well as another newsletter around jobs in conversational AI, so check that out. And we'll talk

[00:47:49] again soon. Martyn, thank you again. Thanks, Bob, it's been an absolute pleasure. Thank you. Absolutely, bye.