Biometric identity verification is no longer optional—it's a necessity in a world dominated by remote interactions and generative AI advancements. Andrew Bud of iProov lays out the alarming reality of deepfakes and synthetic identities, warning that "you can't trust your eyes anymore." Organizations face mounting risks from this wave of technology, which enables impersonation and deception at unprecedented levels. Bud highlights the importance of proactive measures like identity verification, continuous monitoring, and robust threat analysis to combat this evolving threat.
In this episode, we explore biometric identity verification, deepfake risks, synthetic identities, generative AI, and strategies for mitigating security threats. Andrew Bud shares insights on safeguarding enterprises, governments, and individuals from the growing misuse of AI technology, ensuring authentication processes remain accessible and effective.
Key Takeaways
Biometric identity verification is vital for remote-first interactions and reducing fraud.
Deepfakes and synthetic identities are rising due to generative AI advancements.
Awareness and preparedness in enterprises are lacking despite escalating risks.
iProov offers accessible biometric authentication, ideal for governments and businesses.
Deepfake threats impact workforce, contractors, and supply chains, requiring tailored solutions.
Background checks, continuous monitoring, and proactive threat analysis can mitigate risks.
Chapters
00:00 Introduction and Background
01:15 The Role of iProov in Biometric Identity Verification
02:16 The Shift to Remote Interactions
03:10 The Risks of Deepfakes and Synthetic Identities
06:04 The Accessibility of Deepfake Creation Tools
09:14 The Need for Awareness and Preparedness in Enterprises
20:51 The Rising Threat of Deepfake Technology
22:19 Implementing Identity Verification to Mitigate Deepfake Risks
23:14 iProov: Easy-to-Implement Biometric Authentication for Governments and Organizations
25:38 Securing Employees and Addressing Vulnerabilities in the Age of Deepfakes
30:30 Continuous Monitoring and Proactive Measures to Protect Against Deepfake Attacks
Connect with Andrew Bud here: https://www.linkedin.com/in/andrewbud/ and learn more about iProov here https://www.iproov.com/
William Tincup LinkedIn: https://www.linkedin.com/in/tincup/
Ryan Leary LinkedIn: https://www.linkedin.com/in/ryanleary/
Connect with WRKdefined on your favorite social network
The Site | Substack | LinkedIn | Instagram | X | Facebook | TikTok
Share your brand across the WRKdefined Podcast Network
Learn more about your ad choices. Visit megaphone.fm/adchoices
Powered by the WRKdefined Podcast Network.
[00:00:00] All right, I want to talk to you for a moment about retaining and developing your workforce.
[00:00:05] It's hard. Recruiting is hard. Retaining top employees is hard. Then you've got onboarding,
[00:00:10] payroll, benefits, time and labor management. You need to take care of your workforce and
[00:00:15] you can only do this successfully if you commit to transforming your employee experience.
[00:00:21] This is where isolved comes in. They empower you to be successful. We've seen it with a number
[00:00:27] of companies that we've worked with, and this is why we partner with them here at WRKdefined.
[00:00:32] We trust them and you should too. Check them out at iSolvedHCM.com.
[00:00:38] Oh my goodness. Bad touching, harassment, sex, violence, fraud, threats, all things that could
[00:00:48] have been avoided if you had FAMA. Stop hiring dangerous people. FAMA.io.
[00:01:00] Hey, this is William Tincup and Ryan Leary. You are listening, hopefully watching the Use Case Podcast.
[00:01:16] Today we have Andrew on. We're going to learn all about his company. Ryan, let's check in on you first.
[00:01:21] How are you doing? How's your day?
[00:01:23] I am fantastic. I'm excited for this call because I looked up what they do and I will let Andrew introduce himself and I'm excited to talk about this because we cover a lot of this in a lot of our news stories.
[00:01:38] That's right.
[00:01:38] And a lot of this craziness that's happening in the world today. So I'm really excited to see kind of the inner workings, where we're going and learn a bit.
[00:01:47] All right. Andrew, would you do us a favor, the audience a favor and introduce yourself? Tell us a little bit about yourself.
[00:01:52] Thank you very much, William. I'm Andrew Bud. I am the founder and CEO of London-based iProov. We are the world's leading provider of biometric identity verification, which sounds like kind of a slightly boring technical thing.
[00:02:11] When a person is remote, sitting on their couch at home, we make sure they're the right person, a real person and present there right now. And we do this all over the world millions of times a day.
[00:02:22] And did this start after COVID or was this going on before COVID?
[00:02:27] I had the idea for iProov in 2011.
[00:02:32] Okay. Okay. And this is verifying you are you.
[00:02:36] So if we're hiring somebody in Sydney and we haven't met them for whatever reason, we've had calls or whatever, we don't know if they're doing the work.
[00:02:47] This is a way to then verify that that person is that person.
[00:02:54] Particularly after COVID, we live in a new hybrid kind of a world where actually meeting people has kind of gone out of fashion.
[00:03:03] Oh, that's so 2000.
[00:03:06] So increasingly, more and more in the corporate world is done by video conferencing.
[00:03:11] And increasingly, people never turn up to the office.
[00:03:15] They're interviewed remotely.
[00:03:17] They're onboarded remotely.
[00:03:19] They interact remotely.
[00:03:21] They resign remotely.
[00:03:23] And that's not just in the world of people and talent and enterprises, but right across the economy: more and more is being done by people on their couches at home instead of in a formal office space.
[00:03:36] And that creates, in some ways, it's great, but it also creates a whole raft of new and, in some ways, quite frightening challenges to be addressed.
[00:03:48] So I've never thought about this, Andrew, until just now when you said it.
[00:03:52] They resign remotely, which to me seems harmless.
[00:03:57] Have you come, well, probably, I know we're going to get deep into this.
[00:04:01] Oh, yeah.
[00:04:01] But have you come across a situation where someone who is not me or not that person actually does the resignation and leaves the job and the person's left without a job who really still wanted to be at the job?
[00:04:15] So I can't recount any anecdotes about that.
[00:04:18] I can well imagine it's happened.
[00:04:20] The truth is that all the horror stories today are about companies hiring people who never were.
[00:04:27] We'll get into that in a second, but there are real concrete stories about nonexistent people being hired and on the verge of being given access to mission-critical systems.
[00:04:41] Well, we did a news story a couple weeks ago, Ryan.
[00:04:44] No, it was this week.
[00:04:45] Is it this week?
[00:04:46] This week, yes.
[00:04:48] Where it was a person that basically would go out and get a job, and then they would subcontract the work to the people that would do the work.
[00:04:56] He had like six or seven jobs where he was the guy that was the face, but really he had a team of people that would do the work.
[00:05:04] One of the most striking examples that we came across is now about six, seven years ago where a major telecommunications company said, we have a thousand super users.
[00:05:16] These are top-level people, cleared and skilled to the maximum level, who have the ability to do absolutely anything to our systems.
[00:05:25] And we are convinced that at least half of them have given their credentials to their brother-in-law who then does this work and have gone out and got a second highly paid job doing the same thing.
[00:05:36] We know it's happening.
[00:05:38] So this concept of people getting jobs, giving their credentials to somebody else who they pay a pittance to do the job and going out and getting a second one, this is a known threat.
[00:05:49] And these are the cases in which people's credentials aren't stolen, but you've got complicit credential sharing.
[00:05:58] It's one of the – I think it's one of the dirty little secrets lurking in the world of –
[00:06:03] Do you see Gen AI and AI making that worse?
[00:06:08] Like I'm on TikTok and Instagram and all the social things and like I'll see something.
[00:06:14] And in their bio, it'll say AI – it'll actually say that they're an AI model or AI, whatever the bit is.
[00:06:22] And it's like I like that they tell you that.
[00:06:25] However, it's like this person looks legit real.
[00:06:29] Like we've gotten to a point where they're dancing, they're eating, they're talking.
[00:06:37] So I hate to say it, but my message in some ways for anybody who relies upon the evidence of their own eyes as a basis of trust when they're engaging online,
[00:06:49] the message is be afraid, be very afraid.
[00:06:52] Now, you kind of would say that, wouldn't you?
[00:06:54] But we – at iProov, we're unique because we operate something called the iProov Security Operations Centre.
[00:07:00] And what that means is that when anybody is verifying online using our system anywhere in the world,
[00:07:06] we analyze and assess that in real time and we extract the evidence of attempted fraud and we bring those frauds back to our analysts.
[00:07:15] So our analysts see what the bad guys are doing worldwide.
[00:07:19] And what we've seen is a 700% growth in attacks by deepfakes last year.
[00:07:27] But what we've also seen is a tremendous evolution in the quality of those deepfakes.
[00:07:33] A year ago, everyone went, yeah, yeah, I can spot a deepfake.
[00:07:36] I think like 52% of people thought that they could spot a deepfake.
[00:07:39] You know, your ears aren't the same shape or one of your eyes is upside down or something like that.
[00:07:43] But that was last year.
[00:07:44] This year, our analysts cannot spot the deepfakes by eye anymore.
[00:07:51] They have become indistinguishable to the naked eye.
[00:07:54] And that's not just our analysts.
[00:07:57] One of the great cases in this is it comes from a worldwide engineering consultancy company called Arup Consulting.
[00:08:07] Huge, 6,000 consultants worldwide.
[00:08:09] They design the most prestigious buildings in the world.
[00:08:12] They are privy to lots of secrets.
[00:08:14] They are extremely skilled in cybersecurity.
[00:08:17] So back at the beginning of this year, the financial controller of Arup in Hong Kong was told to set up new payments for $25 million.
[00:08:27] He said, I know what this is.
[00:08:28] This is a standard CEO fraud.
[00:08:30] You know, this is straight out of the playbook.
[00:08:32] So he demanded a video conference with his CEO who gave him permission, but he still wasn't satisfied.
[00:08:37] So he had a call with the board.
[00:08:40] And they authorized.
[00:08:41] He knew the board.
[00:08:42] And they authorized him to make the payments.
[00:08:45] And he made the payments.
[00:08:46] Everybody on the board had been deepfaked.
[00:08:48] And he couldn't tell.
[00:08:50] Wow.
[00:08:51] And if Arup are vulnerable to that, everybody is.
[00:08:54] Oh, yeah.
[00:08:55] Ryan, we reported.
[00:08:56] That was in Hong Kong?
[00:08:57] Yeah.
[00:08:58] Yeah.
[00:08:58] We reported on it.
[00:09:00] We reported on it because we're like, you know, this is getting to a point where, again, that's just fraud, right?
[00:09:09] But it's detecting that fraud and having a plan.
[00:09:12] So who do we work with?
[00:09:14] Like who's the – when you go into an organization, who do you tend to work with?
[00:09:19] Well, today – so what we do – let me step back for a second.
[00:09:23] Yeah, yeah.
[00:09:23] And let's say our job is to make sure that when a person appears remotely, they're the right person.
[00:09:30] So we use facial verification, facial matching technology because people are really not very good at making sure that person A looks like how person A should look.
[00:09:41] Surprising.
[00:09:42] I have to ask real quick.
[00:09:43] Are you Andrew Bud?
[00:09:46] So –
[00:09:47] I mean, it's a fair question.
[00:09:49] Yeah, it is.
[00:09:50] And in fact –
[00:09:51] Is he going to tear off a mask?
[00:09:53] Well, frankly, we did consider doing this podcast using a deepfake because you don't know the answer to that, Ryan.
[00:10:02] That's right.
[00:10:03] You literally – you think you do.
[00:10:04] You know what?
[00:10:05] I'm joking, but I really don't now.
[00:10:07] And I'm a little – it might not be you.
[00:10:10] Let me show you something which I can show you, which will obviously be visible to those of you – those of your viewers who go on to YouTube.
[00:10:21] Yeah, so if you're listening, there's a link somewhere in here.
[00:10:25] Go to the YouTube video.
[00:10:27] We'll timestamp it for you so you can kind of see what Andrew's going to pull up here.
[00:10:33] But I was joking when I said that.
[00:10:35] It's legit.
[00:10:36] It's legit.
[00:10:38] Yeah, it's legit.
[00:10:38] It's because, in a wide variety of circumstances, people need to look at the human being behind the name.
[00:10:52] You know, when you're on board – when you're interviewing someone, when you're onboarding them, when you're setting up their accounts, when you're – even when you're recovering their identity.
[00:11:04] You're always dealing with facts and names and dates of birth and social security numbers and these other great things.
[00:11:09] But actually, trust doesn't reside in those facts about people.
[00:11:13] Trust resides in human beings.
[00:11:14] You know, the people who are honest or who aren't honest, that happens between people's ears, not in a set of sterile facts.
[00:11:24] So at some point, you've got to bind those facts to a real human being.
[00:11:28] And that's a very dangerous moment, and that's what we solve.
[00:11:31] And we solve it by solving three problems for those of you online.
[00:11:36] If you want to be sure that a remote person is – if you want to verify a remote person, you've got to solve three problems.
[00:11:42] One is, are they the right person?
[00:11:45] And that's a question of face matching.
[00:11:47] And that's a solved problem.
[00:11:49] People get very excited about it.
[00:11:50] Biometrics, does it work?
[00:11:52] Does it discriminate?
[00:11:53] No, it doesn't.
[00:11:53] That was a problem from three years ago.
[00:11:55] It works fantastically well, about 10,000 times better than any human being can actually do that matching.
[00:12:01] So protecting against impersonation using face matching: solved problem.
[00:12:05] Second problem is, are you actually looking at a real person?
[00:12:12] Are you looking at an artifact like a photograph or an image on a screen or an iPad or a mask?
[00:12:22] For those of you who aren't on YouTube, this picture in the middle of the screen shows a 3D printed face – a coloured face of me that looks like nothing so much as a death mask.
[00:12:33] In fact, my wife absolutely refused to have that in the house.
[00:12:37] So you've got to make sure that you're not having an artifact, what is called a presentation attack, a copy of the person's face being put in front of the screen.
[00:12:45] And this can be done very, very well and very realistically.
[00:12:48] But actually, even that's not a particularly big problem.
[00:12:51] They're expensive to make.
[00:12:52] They don't scale.
[00:12:52] And there's a lot of trouble to attack one person.
[00:12:55] The real threat is the deepfake in which you use generative AI technology to produce an image of someone.
[00:13:02] And then you digitally inject it directly into the data stream.
[00:13:05] On the right, you can see – on the right is an image of me deepfaked as Tom Cruise.
[00:13:13] And it's very realistic.
[00:13:15] It's realistic.
[00:13:16] Can't tell the difference.
[00:13:18] Well, you could if you could see my bank balance.
[00:13:21] Right, right.
[00:13:22] Yeah, that's fair.
[00:13:23] That's fair.
[00:13:23] But, I mean, to the naked eye, to your point, you can't trust your eyes anymore.
[00:13:30] Correct.
[00:13:31] And that applies if you're doing video conferences.
[00:13:34] It applies if you're doing – it applies if somebody is being interviewed online.
[00:13:39] And it applies when they're onboarding and they've uploaded their photo ID.
[00:13:44] And now they have to make sure that they look like that photo ID.
[00:13:47] The classic use case was – it recently came out.
[00:13:52] It was a company called KnowBe4.
[00:13:56] KnowBe4 is quite a sophisticated cybersecurity awareness business.
[00:14:01] And they did a remote interview, several remote interviews and a remote onboarding and all the record checks.
[00:14:07] And they hired somebody.
[00:14:10] And it was only when they delivered the laptop to that somebody.
[00:14:13] And that somebody then started deliberately mounting malware, at which point their cyber protection raised the alarm and stopped him doing any harm.
[00:14:20] And they discovered that they had interviewed a deep fake.
[00:14:24] They had hired a deep fake.
[00:14:26] They had onboarded a synthetic identity.
[00:14:29] And what – this person actually was a North Korean hacker.
[00:14:32] I can see this being used in corporate espionage.
[00:14:36] Absolutely.
[00:14:38] Just having people infiltrate and work.
[00:14:41] And gaining trust and gaining – just being a part of the team and then just using all of that data and feeding it back to a competitor.
[00:14:49] Correct.
[00:14:50] The root of trust for people's identity integrity is a government-issued ID.
[00:14:56] Right.
[00:14:57] A government-issued ID has loads of interesting facts on it, but the only thing that ties it to the actual human being is the photo on the ID.
[00:15:05] So there comes a moment when you have to match the facts on the ID to the physical human being presenting it.
[00:15:10] If that ID has been, for example, created in the name of a synthetic ID and a photograph of a person who never existed, which is a very common attack method,
[00:15:22] and you can then deep fake the attacker so that they look like the person who's never existed.
[00:15:27] You've got a completely non-existent – you've onboarded somebody who's a completely non-existent person,
[00:15:32] and when they start to do wrong, you'll never be able to hold them to account because the person who perpetrated it never existed in any way at all.
[00:15:39] That's the way that frauds are currently being perpetrated in the financial services and against government tax authorities today.
[00:15:46] Andrew, so many questions here.
[00:15:48] Yeah.
[00:15:49] Where do we begin?
[00:15:50] So I want to get into types of companies that are using – that you're talking to, who you're selling to within the company, all of that good stuff.
[00:16:00] But I also want to know – I guess I come at life from the good side of things, which not everybody does.
[00:16:09] Man is basically evil or good?
[00:16:12] Yeah.
[00:16:13] Yeah.
[00:16:13] Like, is the majority of this, outside of a normal person myself, William, you would say, okay, there's a few bad actors in the world that do this.
[00:16:25] Where are we in – just for the audience's clarity, where are we in terms of maybe percentage of good versus evil in this space?
[00:16:37] In good meaning, I look at deepfakes as, hey, I could do something really funny and send it to my mom.
[00:16:46] Right?
[00:16:46] Or I can go have fun with William and he'll never know.
[00:16:49] These are bad actors doing bad things.
[00:16:51] Right.
[00:16:52] I mean, what's the percentage there?
[00:16:54] So the answer is it's impossible to know because most organizations are not required to disclose these sorts of things happening.
[00:17:05] I mean, Arup, for example – and also KnowBe4 – these guys weren't compelled.
[00:17:08] They did a public service by going public about this and going, look what happened to us.
[00:17:13] You guys should know.
[00:17:14] Most organizations don't necessarily talk about that.
[00:17:17] Because of the shame.
[00:17:19] They would hush that up; they don't want people to know that that happened to them.
[00:17:24] And potential liability.
[00:17:26] You know, there are lots of good business reasons why if you're not compelled to do it, you keep your mouth shut about this.
[00:17:30] But let's be clear.
[00:17:31] The majority of the world is good, but there are foreign adversaries – more than one – with very powerful and well-funded secret services with high levels of technical capability, for whom this can be a strategic objective.
[00:17:49] There are very large-scale criminal gangs with enormous resources – large-enterprise enormous resources – for whom this can be both a source of opportunity and also, in some cases, a necessity.
[00:18:05] So you don't need many bad guys.
[00:18:07] You just need them to be highly motivated.
[00:18:10] But in addition, what's happened is deepfakes used to be serious technology.
[00:18:15] I mean, you had to be a class A geek in order to be able to build or find and operate this kind of technology.
[00:18:23] What's happened over the last year is that there has been a vast proliferation of these tools.
[00:18:28] You go onto the Internet, there are over 100 tools that do this stuff really, really well, which you either get for free or you can pay a few hundred bucks.
[00:18:38] And suddenly, it's crime as a service.
[00:18:43] And so, therefore, there is a sophisticated ecosystem.
[00:18:47] In some cases, it's almost like an open source ecosystem in which loads of bad actors exchange knowledge.
[00:18:52] We found that there's been a doubling in the number of – sorry, a 50% rise in the number of threat groups that kind of chitter-chat amongst themselves.
[00:19:02] Average median number of members in those groups, over 1,000 people.
[00:19:06] So, there's a whole open source community busily figuring this stuff out, generating tools.
[00:19:12] And those tools have become not just incredibly good but incredibly easy and cheap.
[00:19:17] So, now, anybody can do this stuff.
[00:19:20] You don't need many bad people to cause mayhem.
[00:19:22] They just need to be motivated.
[00:19:24] It's interesting because years ago, these people would have congregated on the dark web.
[00:19:29] And now, they don't have to congregate.
[00:19:31] I mean, they still do, but they don't have to.
[00:19:34] They can just be out in the open because the technology is so accessible.
[00:19:39] That's exactly right.
[00:19:40] I mean, one of the ways we know about what they're doing is because at iProov, as well as our security operations center, we also have a sophisticated threat intelligence system.
[00:19:47] So, we are actually watching and engaging with what's going on in the dark web.
[00:19:52] So, we're aware just of what an interesting and lucrative problem the use of deep fakes to penetrate the identity systems of enterprises actually is.
[00:20:03] Look, it's not hard.
[00:20:04] If there's the technology, if there's means, opportunity, and motive, you've got a crime.
[00:20:07] And I think what's happened is that, generally speaking, enterprises haven't yet fully internalized the threat that this creates and the consequent damages that they can suffer.
[00:20:26] This is risk management, a component of what finance and HR kind of own.
[00:20:32] But they're way behind.
[00:20:35] I mean, Ryan and I talk to HR leaders every day.
[00:20:38] They're way behind.
[00:20:39] They don't even know this is a thing thing.
[00:20:42] I can just speak for 100 people, line them up.
[00:20:44] There might be two or three that think it's a thing thing.
[00:20:47] But they're way behind what you're talking about.
[00:20:50] I mean, these poor guys, they've got other challenges, which are in some cases more immediate and more direct.
[00:20:54] And they need another major threat like they need a hole in the head.
[00:21:00] Unfortunately, the technology and the people using this technology are not going to give them very much respite.
[00:21:10] And the number of ways of defending yourself against this is going to diminish.
[00:21:15] And the amount of threat is going to rise.
[00:21:19] What we have found is that when these threats arise, they don't grow gradually.
[00:21:25] They hit you – it's not like rising sea level from climate change.
[00:21:29] This is a tsunami.
[00:21:30] And the tsunami of this stuff is coming, driven by technology – because the technology of generative AI is moving so fast – and driven by the opportunities that this can provide.
[00:21:49] Not just within the workforce, but also within the extended workforce, because what we're also seeing is harm caused by contractors, by the staff of contracting companies, by temporary workers,
[00:22:05] by people with whom the people function is slightly more weakly coupled, over whom they have maybe less control, but who are in a position to do a great deal of harm.
[00:22:18] And when HR departments try to impose very heavy identity-checking processes further down their supply chain, you get pushback, because nobody likes to have their business processes fiddled with by a customer.
[00:22:35] The nice thing about our technology, for example, is that it's extremely easy to implement.
[00:22:40] It's extremely low impact.
[00:22:43] So you don't have to install software; you can just use a web browser and so on.
[00:22:48] And it's also terribly easy.
[00:22:49] Look, about half our revenue worldwide comes from governments who are serving citizens.
[00:22:55] One of the things that governments demand when they're serving the citizens is inclusion and accessibility.
[00:23:01] So our technology has to be incredibly easy to use, because otherwise governments end up on the front page of newspapers because this or that or the other social group has been excluded.
[00:23:12] And that's unacceptable.
[00:23:13] So we've worked extremely hard to make it accessible and inclusive.
[00:23:16] And that's great because it means that if you impose it on your supply chain, the supply chain hasn't got a great deal to complain about.
[00:23:22] So let's talk a little bit about who you are having conversations with in the governments or in the companies that you're working with.
[00:23:31] I'm assuming it's not the HR leader.
[00:23:33] I'm assuming this is a much broader or higher level conversation.
[00:23:38] Maybe talk to us a little bit about that.
[00:23:42] So the HR teams that you're talking to, who aren't yet fully on board with dealing with the deepfake threat, are not alone.
[00:23:52] That is true of many, many, many functions in many, many kinds of organizations.
[00:23:55] So today, the leading edge is actually governments who are establishing accounts for citizens who maybe want to pay their taxes or more importantly, receive tax credits or citizens who are seeking to get benefits, money benefits or the benefits maybe of residence or immigration or criminal records checks all over the world.
[00:24:20] Or maybe seeking to gain entry to their country.
[00:24:23] So the IRS, for example, through a partner, ID.me, uses iProov to ensure that crooks cannot fraudulently sign up as people to file bogus tax returns and get loads and loads of money back and then escape, leaving the victim and the IRS out of pocket.
[00:24:45] In the UK, when people were setting up their national health accounts so that they could get their COVID passports, that identity had to be protected.
[00:24:55] In Australia, there's a national digital identity used for paying taxes and other social activities called MyGovID.
[00:25:05] Onboarding to that, incredibly important.
[00:25:07] In Singapore, they have a national digital identity which is used for almost everything when you engage with the government called SingPass, which is protected by us.
[00:25:17] So in many cases, we're dealing with governments who are trying to serve citizens but at the same time protect both the citizens' identities and their own revenue.
[00:25:27] We work with governments and banks – we work with UBS worldwide, for example.
[00:25:31] UBS is a huge wealth management company.
[00:25:35] You really don't want customers being defrauded.
[00:25:39] So in most cases today, we're talking to organizations who are serving members of the public and have to protect themselves against attacks on themselves and on those members of the public.
[00:25:53] The conversations are beginning now, or in fact are moving rapidly towards, the new world of securing employees.
[00:26:04] Password resets is the cutting edge.
[00:26:07] One of the places – I mean, it's unbelievable.
[00:26:09] In 2024, people still have passwords.
[00:26:12] And oddly enough, they forget them.
[00:26:14] So oddly enough, they need them reset.
[00:26:15] And if you're an attacker, what do you do?
[00:26:17] You reset someone's password, get the new password, seize control of their account.
[00:26:20] It happened to the CEO of a major financial organization about a month ago.
[00:26:25] Password resets are one of those vulnerable spots where the pickings from attacking an employee's credentials are just stupendous.
[00:26:39] And the cost and sophistication required to do it is not very great.
[00:26:42] So we're being rapidly engaged in that sort of area.
[00:26:46] There are a number of other areas which are interesting.
[00:26:48] So, for example, where employees are not allowed to bring their personal devices.
[00:26:52] There are a range of applications where if you're in a pharmaceutical factory or a pet food factory or a food factory or a call center or a trading floor,
[00:27:04] there are loads of places where you're not allowed to bring your personal device.
[00:27:06] In a world in which people authenticate themselves using their personal devices, not being allowed to carry your personal device along with you?
[00:27:14] Bit of a problem.
[00:27:15] The other circumstance: it happened to me the other day in the center of London.
[00:27:22] Kid on a bike mounts the pavement.
[00:27:24] I'm looking at my phone trying to figure out where on Google Maps where I'm going to go next.
[00:27:28] Snatches my phone out of my hand.
[00:27:30] Luckily, he was incompetent.
[00:27:31] So the phone fell on the floor and I was able to retrieve it.
[00:27:33] Otherwise, he would have become me.
[00:27:36] You know, in a world in which your phone is the – the phone is the way in which you prove your selfhood,
[00:27:43] the worst – at the very minimum, I just lost my ability to be who I want to be.
[00:27:49] Account retrieval, great point of attack.
[00:27:51] Boom.
[00:27:51] Yeah, all of that.
[00:27:53] These are problems which are decades old and aren't better addressed now than they were 10 years ago,
[00:27:59] which is one of the reasons why I founded iProov.
[00:28:01] As companies start pushing more for return to office, which we're seeing a lot of now,
[00:28:09] where does – I guess how does this affect return to office versus not?
[00:28:14] Are companies going to – is this going to be a driving force in companies saying,
[00:28:18] no, we want you in the office because of verification or is that industry dependent?
[00:28:24] That's an interesting question.
[00:28:28] A number of – what I do know is the number of large governments, for example,
[00:28:32] are arguing that if you want to set up a national digital identity,
[00:28:34] you've got to come into an office to have it set up.
[00:28:37] And a number of other governments are saying, we tried that and it doesn't work.
[00:28:41] It manages – it just doesn't work.
[00:28:43] It's expensive.
[00:28:45] It is – people refuse to do it.
[00:28:48] And when they do turn up into the office, the employees many times are not well enough trained
[00:28:52] to spot the difference between a real person and an imposter.
[00:28:55] Right.
[00:28:56] Because actually –
[00:28:58] So you're still trusting your eyes.
[00:28:58] Right.
[00:28:59] We have a number of applications where our technology is used in a bank branch.
[00:29:03] What?
[00:29:04] Why would you do this in a bank branch?
[00:29:06] The answer is the poor bank employees are not super recognizers.
[00:29:10] They have not been trained.
[00:29:11] They don't have the skill to distinguish between a real match –
[00:29:19] between a 10-year-old passport photo and a person – and an imposter.
[00:29:24] In fact, even skilled passport officers – there was a study done in Australia about 10 years ago
[00:29:30] in which they measured the performance of skilled, experienced passport officers.
[00:29:34] And even they matched an imposter to a photograph about 10% of the time.
[00:29:41] So people are actually rubbish at this.
[00:29:43] So you can come into the office.
[00:29:44] But if the person who comes into the office has the gift – has blag and is able to say,
[00:29:51] yeah, this photo was taken quite a long time ago and I've grown my hair and I've grown a beard
[00:29:55] and I had a bicycle accident which slightly damaged my face, where are they going to go?
[00:30:01] You are an imposter.
[00:30:02] It takes a lot of moral courage to place that responsibility on a member of staff.
[00:30:08] If a highly reliable, proven piece of machinery says that, that's a lot easier.
[00:30:14] So we've actually had our technology mounted in bank branches to support the tellers,
[00:30:25] to give them a higher-reliability solution than they would otherwise have.
[00:30:25] In addition, you know, things happen in offices.
[00:30:28] You can get collusion.
[00:30:31] Yep.
[00:30:32] So again, let's give the technology to the staff which absolves them of any suspicion of wrongdoing.
[00:30:42] Right.
[00:30:42] Right.
[00:30:43] So take us into how the technology works, facial recognition and the biometrics.
[00:30:48] Like what – not the secret sauce, of course, but just like how does it actually work?
[00:30:54] Okay.
[00:30:55] So there are two problems you've got to solve here.
[00:30:57] First is you've got to make – you've got to do the face match.
[00:31:00] Right.
[00:31:01] Does the person look like they're supposed to look like?
[00:31:03] For decades, that was a really, really, really hard problem.
[00:31:06] And everyone went, oh, biometrics, this is so interesting.
[00:31:08] We talked about face matching and whether there was ethnic bias.
[00:31:11] Deep learning just solved that.
[00:31:13] That is a non-problem now.
[00:31:15] Right.
[00:31:15] It is just – it is done.
[00:31:18] So we don't need to talk about how does the face matching work because deep learning solved that problem,
[00:31:23] but like hundreds of thousands of times better than a human being.
[00:31:26] So matching a person or not matching a person, solved problem.
[00:31:29] It's not the hard one.
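[Editor's note: for a concrete picture of what "deep learning solved face matching" means, here is a minimal, hypothetical sketch – not iProov's actual system. Modern pipelines embed each face image into a vector with a trained network and compare the vectors; the embed() placeholder and the 0.6 threshold below are illustrative assumptions.]

```python
import numpy as np

# Hypothetical sketch: modern face matching compares learned embeddings,
# not raw pixels. embed() stands in for a trained deep network that maps
# a face image to a vector; images of the same person land close together.

def embed(face_image: np.ndarray) -> np.ndarray:
    """Placeholder for a trained deep face-embedding model."""
    raise NotImplementedError("requires a trained model")

def is_same_person(img_a: np.ndarray, img_b: np.ndarray,
                   threshold: float = 0.6) -> bool:
    """Decide whether two face images show the same person."""
    a, b = embed(img_a), embed(img_b)
    # Cosine similarity: same-person embeddings cluster tightly, so a
    # single threshold (illustrative, not tuned) separates a match
    # from a non-match.
    cos = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return cos >= threshold
```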
[00:31:31] The hard problem when you're dealing at least with a remote person is are they real?
[00:31:36] Kind of go, why does that matter?
[00:31:37] Well, look, many firms are – let me take you back a step.
[00:31:42] Many firms are worried about the privacy implications of biometrics.
[00:31:45] I'm sure you've heard this before.
[00:31:47] I find this absolutely fascinating because a biometric is nothing other than a heavily encrypted version of a person's face image.
[00:31:59] So what you're actually saying is I'm really worried about the privacy implications of storing a very heavily encrypted version of a person's face.
[00:32:07] Why?
[00:32:07] Well, a person's face is obviously a secret.
[00:32:09] No, it's not.
[00:32:11] A person's – people's faces are absolutely all over the internet.
[00:32:14] They're on Facebook.
[00:32:15] They're on LinkedIn.
[00:32:16] They're life.
[00:32:18] You know, today the one thing you can say is that a person's face is not a secret.
[00:32:22] You know, if I want to know what your face is, I'll stand outside your house and take a photograph of you.
[00:32:26] But it's frankly –
[00:32:27] Public domain.
[00:32:28] Simply just to go to your Instagram.
[00:32:31] Yeah.
[00:32:31] So, you know, the concept – I'm not disputing people's right to privacy, not for a second.
[00:32:37] Right.
[00:32:37] But the idea that a face is a secret is, to me, strange.
[00:32:41] The idea that a heavily encrypted version of that face is somehow deeply sensitive, and I'm kind of going, why?
[00:32:48] Why?
[00:32:49] If someone steals a heavily encrypted version of the face, what they've got is 128 – it's 256 bytes of garbage.
[00:32:54] They may as well just go to Instagram.
[00:32:57] And some people say, so surely then, what happens if someone steals my face?
[00:33:01] I have news for you.
[00:33:02] You can't steal someone's face, at least not without a scalpel and a lot of blood.
[00:33:06] That's a whole different episode.
[00:33:07] Correct.
[00:33:09] You can't steal a person's face.
[00:33:11] Because the face is public, however, you can copy it.
[00:33:14] And therefore, the key to relying on a face as a route of trust is to make sure that you're not looking at a copy.
[00:33:24] So that's why I say face matching, yeah, whatever.
[00:33:27] The real issue is, is this a copy?
[00:33:30] So the question is, how can I be sure that a person who is in an untrusted environment – like their living room, on their couch – using an untrusted device, which has untrusted software and an untrusted network, how can I be sure that I'm not looking at a copy? And it has to be any device, because you can't say to people, okay, I'm only going to trust people who have the iPhone 15 and above.
[00:33:50] Sorry, it doesn't work that way.
[00:33:51] You know, we work for many parts of the South African banking environment.
[00:33:57] There are 28,000 separate Android models that we've had to verify people on their database.
[00:34:04] So you can't trust the environment.
[00:34:06] You can't trust the network.
[00:34:07] And you can't control the device.
[00:34:09] How can you be sure that you're not looking at a copy?
[00:34:11] At a deepfake.
[00:34:12] The answer is, if you just look at the imagery, you can't.
[00:34:16] Sorry, impossible.
[00:34:17] It literally is an impossible problem.
[00:34:19] So what we do is we fiddle with the situation itself.
[00:34:27] We illuminate the user's face with a rapidly changing sequence of colors on their phone screen.
[00:34:34] So the screen flashes an unpredictable, ever-changing sequence of colors which illuminates their face.
[00:34:42] And while that's happening, we send video back to our servers where we study the light reflecting off their face and interacting with the ambient light.
[00:34:50] If the light's reflecting off their face and, I'll say again, interacting with the ambient light in the way that real human faces reflect,
[00:34:59] then we are probably looking at a 3D skin-covered live human face-shaped object.
[00:35:05] And if the sequence of colors, which the attacker can't predict, is correct, then we're looking at imagery that was created right there, right now.
[00:35:17] Right.
[00:35:17] And so just the simple fact of illuminating their face with this unpredictable sequence of colors creates for the deepfaker a really strange and unfamiliar problem to solve.
[00:35:28] Tens of thousands of times a day, they try to solve it.
[00:35:31] But because it's a strange and unfamiliar problem, they solve it imperfectly.
[00:35:36] And we spot that.
[00:35:38] Right.
[00:35:38] And because we've done this a billion times, we have an exquisitely precise idea of what all the signals should be in order for them to be real,
[00:35:49] we know what reality looks like and the attackers don't.
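[Editor's note: to make the mechanism Andrew describes concrete, here is a minimal, hypothetical sketch of the challenge-response idea: a server issues a one-time color sequence, the phone screen flashes it, and the server checks that the light reflecting off the face follows that sequence. Every name and step below is an illustrative assumption; iProov's actual system is far more sophisticated.]

```python
import secrets

# Hypothetical sketch of screen-flash liveness checking. The server issues
# an unpredictable one-time color sequence; the client's screen plays it
# while streaming video; the server verifies the reflections.

PALETTE = ["red", "green", "blue", "yellow", "cyan", "magenta"]

def issue_challenge(length: int = 8) -> list[str]:
    """Generate a one-time, unpredictable color sequence."""
    return [secrets.choice(PALETTE) for _ in range(length)]

def reflected_colors(video_frames) -> list[str]:
    """Placeholder: a real system would use computer vision to estimate
    the screen-light component reflecting off the face in each frame."""
    raise NotImplementedError("requires real video analysis")

def verify_liveness(challenge: list[str], video_frames) -> bool:
    """Check that the face reflected the one-time challenge sequence."""
    observed = reflected_colors(video_frames)
    # Replayed or pre-generated imagery cannot anticipate the one-time
    # sequence, so its reflections will not match the challenge.
    if observed != challenge:
        return False
    # A production system would also verify that the reflections behave
    # like light bouncing off a 3D, skin-covered face rather than a flat
    # screen; that analysis is beyond this sketch.
    return True
```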
[00:35:53] How do you stay ahead of this?
[00:35:56] By watching what the attackers are doing.
[00:36:00] By our iProov Security Operations Center, which receives data and metadata from every transaction that takes place worldwide,
[00:36:08] triages out the suspicious attacks and brings those attacks back to our analysts.
[00:36:14] We study what the attackers are doing in real time.
[00:36:17] And we learn from them.
[00:36:19] And so we go, oh, look, look what's going on here.
[00:36:21] This attacker, whoever they are, has tried this thing.
[00:36:25] That's really interesting.
[00:36:25] So we replicate it and then we amend our systems to cope with that.
[00:36:30] So we will do up to 100 modifications of our system in a month.
[00:36:34] Yeah.
[00:36:36] And we stay ahead of them because we have an information asymmetry that we learn more from them than they learn from us.
[00:36:44] Right.
[00:36:45] And therefore, we stay ahead of them by watching them and by studying what they're doing and learning from them.
[00:36:52] It's the way that all fraud protection systems stay ahead of the fraudsters.
[00:36:58] The difference is that this is a new form of fraud, which is biometric fraud.
[00:37:01] And at the moment, iProov is the only company in the world that does this.
[00:37:05] I'll say it again because it's one of those things that sounds like a piece of marketing fluff.
[00:37:09] It's literally true.
[00:37:10] iProov is the only company in the world that actually runs a biometric security operations center,
[00:37:16] watching what the attackers are doing and responding and adapting to it in real time.
[00:37:21] And that's the only way that we can stay ahead of them.
[00:37:24] Is there any relationship between some of the things we've seen in data leaks and identity verification?
[00:37:34] So have you seen any?
[00:37:35] Are there any parallels, or am I drawing dots that don't connect?
[00:37:39] No, no, no.
[00:37:39] There are some statistics that say that 70 percent of all data leaks and system compromises arise from weak access control.
[00:37:48] And weak access control exists because access is given to people using credentials that are dead easy to socially engineer, or to be shared or compromised.
[00:38:02] You can't share a face.
[00:38:05] Sorry.
[00:38:06] Not without a scalpel and a lot of blood.
[00:38:09] And our systems are resistant to social engineering.
[00:38:14] So if organizations had insisted that any kind of privileged access requires the user to prove themselves first, these compromises would not have happened.
[00:38:27] Oh, I'm terrified of this.
[00:38:30] Terrified.
[00:38:30] I'm terrified.
[00:38:32] I kind of.
[00:38:33] Yeah.
[00:38:34] Like I might have different dreams tonight.
[00:38:37] No, but kidding aside, it's actually the conversation we should be having.
[00:38:42] Yeah.
[00:38:43] Yes.
[00:38:43] Because this is an awareness thing, especially in the world of talent that we play in – they need to hear this.
[00:38:52] They need to understand how far the bad actors have come, and also the protections against that.
[00:39:02] So, you know, this is something that we've seen coming for a while. In 2022,
[00:39:06] the FBI issued an explicit note on the topic, warning people of an impending risk of the use of deepfakes to impersonate people for the purposes of onboarding.
[00:39:22] So the FBI issued a warning note as long ago as 2022.
[00:39:26] In 2018, I asked the FBI, at an event that took place here in the House of Commons in London, what their view about the impending deepfake threat was.
[00:39:39] And back in 2018, they said, we regard this as one of the coming existential threats to the stability of society.
[00:39:49] So this has been coming for a while.
[00:39:51] What has changed in the last year, as I said before, is the ease and cheapness of producing the stuff coupled with the quality of what is being produced.
[00:40:00] So a watershed has now been crossed.
[00:40:06] And that means that whereas you could read about this stuff a little bit like quantum computing – you know, one day it'll change the world, just not yet –
[00:40:13] the same was true of deepfakes for many years.
[00:40:16] But the future just arrived.
[00:40:18] So, Andrew, in a couple of minutes that we have left together, where do we go from here?
[00:40:23] And I know we can't solve the problem today.
[00:40:26] Right.
[00:40:26] I guess this is ongoing.
[00:40:29] For organizations that are thinking about this –
[00:40:32] Where do they go?
[00:40:33] How do they learn about it?
[00:40:35] What do they need to be aware of?
[00:40:37] So, you know, the answer is that when it comes to the restricted problem of ensuring that a person who presents themselves online is themselves and not a deepfake, we can solve it, because that's what we do.
[00:40:50] And we do it in a proven way.
[00:40:51] We do it millions of times a day.
[00:40:52] And our customers attest to the fact that we do it with extraordinary reliability and we intend to do so.
[00:40:59] So if your listeners need to protect themselves against the appearance of people pretending to be people – but who are actually deepfakes that could present a threat – then they need to contact us.
[00:41:13] They need to contact contact@iproov.com or come to our website.
[00:41:16] We can do this.
[00:41:18] And I tell you, we fully intend to continue doing this.
[00:41:21] People sometimes say, what's your strategy?
[00:41:23] And the answer is like asking a hamster on a wheel.
[00:41:25] What's the strategy?
[00:41:26] The answer is stay where I am.
[00:41:28] And the wheel accelerates.
[00:41:30] Well, you've got to stay ahead of the bad actors.
[00:41:33] So you've got to understand what they're doing.
[00:41:34] That's a full time job.
[00:41:37] I think iProov has 200 staff focused
[00:41:41] almost exclusively upon this particular problem.
[00:41:44] Occasionally, we come across competitors who have like five or 10 people.
[00:41:47] They go, we've solved it.
[00:41:48] I'm not paying 200 people because I enjoy payroll.
[00:41:53] That's what it takes.
[00:41:54] That's what it takes to be reliable.
[00:41:57] That's the show title.
[00:42:00] Yeah, that's the show title right there.
[00:42:01] Last question.
[00:42:03] Last question for me is.
[00:42:07] I've been monitoring something in the background screening world, which is –
[00:42:10] it's not where you're at at all.
[00:42:12] But one of the things we've seen that they do is – background checks used to be
[00:42:17] something you'd do pre-screen.
[00:42:18] So before someone got to a certain place in the hiring process, you'd do a background check.
[00:42:23] Look at criminal.
[00:42:24] Look at all this type of stuff.
[00:42:26] And all of those companies that do that still do background checks.
[00:42:30] But they do more employee monitoring.
[00:42:33] So they'll go and do those screens continuously throughout the employment.
[00:42:38] Is that what you do here?
[00:42:41] Is it something that is constant?
[00:42:46] Or is it something that you check in on periodically?
[00:42:49] I think if you've got it.
[00:42:51] Absolutely.
[00:42:51] If you've got a remote user, you have to authenticate them.
[00:42:56] And not necessarily every time, but periodically you have to authenticate that the real human being.
[00:43:01] Right.
[00:43:01] With whom you're interacting is who they claim to be.
[00:43:04] So a part of your authentication strategy should be to ensure biometric authentication along the course of the lifecycle journey.
[00:43:12] You know, Ryan asked earlier, what can companies do?
[00:43:14] The first thing is do a threat analysis.
[00:43:17] The first thing is find out.
[00:43:18] You say assume deep fakes.
[00:43:20] OK, what does that mean?
[00:43:21] How can those be used to attack us?
[00:43:23] What will be the consequences of those attacks?
[00:43:25] And how can we mitigate those?
[00:43:27] How can we mitigate those risks?
[00:43:28] The first thing to do is not to be insouciant.
[00:43:31] Look, I founded iProov after an incident took place.
[00:43:38] What is it now?
[00:43:40] 16 years ago, I was running the world's largest mobile payments business and we had risks that we hadn't understood.
[00:43:47] And because we had some vulnerabilities, when the criminals found that out, within a matter of a few months,
[00:43:52] millions of people had money stolen from them through our network, which I was very ashamed of.
[00:43:57] But also I got pulled up on television and a reporter thrust the microphone in my face and said, Mr.
[00:44:02] Bud, the role of you and your company in this scandal, were you complicit or just recklessly incompetent?
[00:44:09] Wow.
[00:44:11] Well, how do you how do you answer that one?
[00:44:12] Yeah.
[00:44:14] I said that's a trick question because there's no right answer.
[00:44:18] It's the old –
[00:44:20] But you know what?
[00:44:20] It's the right one.
[00:44:21] When you're dealing with risk,
[00:44:24] it is actually the right question.
[00:44:26] Yeah.
[00:44:26] Sure.
[00:44:27] If you are sitting on latent risk.
[00:44:32] Right.
[00:44:32] And you haven't looked for it and you haven't done that threat analysis, then in my view, you are recklessly incompetent.
[00:44:39] And if you saw that risk and you didn't make a reasoned assessment of it –
[00:44:48] if you knew the risk was there, but you were in denial –
[00:44:50] well, I'm sorry, but I think in my book, that is complicity.
[00:44:54] So any company – you know, you cannot ignore the fact that deepfakes are coming to visual communications and are coming to identity verification.
[00:45:02] So it is incumbent, I think, upon every HR department to say, what does this mean for us?
[00:45:07] What's the risk that this presents?
[00:45:09] To what degree do we have to mitigate it?
[00:45:12] And how should we mitigate it? And not engage in what I see a number of other parts of other organizations –
[00:45:18] not just HR departments in enterprises – engaging in, which is denial.
[00:45:22] We're a bit busy.
[00:45:22] Let's not think about it.
[00:45:24] Put their heads in the sand and be done.
[00:45:27] Andrew, this has been a fantastic episode and it's going to get Ryan and I to talk about a bunch of stuff afterwards.
[00:45:33] So thank you so much for carving out time for us in our audience and explaining this, because, again, like we said,
[00:45:39] we need to talk more about this so that people understand that this is now, not, you know, something that's going to be in the future.
[00:45:49] It's now.
[00:45:50] So we appreciate you, brother.
[00:45:52] Thank you.