
Banking on Information
Where we dive deep into the dynamic world of Financial Services and Technology. Discover the innovative solutions driving the industry forward, explore the latest trends, and uncover the strategies that are reshaping the future of finance.
Join us as we unravel the WHY, WHAT and HOW of solution providers in the Financial Services industry. Stay tuned for insights that will revolutionize the way you think about money and technology.
Each guest will engage with our host Rutger van Faassen in Futures Thinking and provide their view of a possible future and how we can get ready for that future today.
Banking on Information with Ben Colman, CEO of Reality Defender
In this episode of Banking on Information, Rutger van Faassen interviews Ben Colman, CEO and co-founder of Reality Defender, discussing the challenges posed by deepfakes and AI in the realm of cybersecurity. Colman emphasizes the importance of developing technology to combat fraud and impersonation, particularly in communications. He shares insights on how Reality Defender secures communications, the evolution of customer awareness regarding deepfakes, and the future of AI technology in everyday life. The conversation concludes with advice on how individuals can prepare for a future where deepfakes are prevalent.
Takeaways
- Technology evolves, and so do the tactics of bad actors.
- Fraud is an equal opportunity issue, affecting everyone.
- Reality Defender focuses on securing communications against impersonation.
- Deepfakes can be used for financial fraud and identity theft.
- Education on deepfakes has become crucial for clients.
- Demonstrating the problem of deepfakes has become more impactful recently.
- The future will see deepfake detection integrated into everyday devices.
- No single technology can solve the deepfake problem; a multi-faceted approach is needed.
- Consumers should question their banks about security measures against deepfakes.
- The technology to create deepfakes is becoming increasingly accessible.
Chapters
00:00 Introduction to Reality Defender
02:18 Understanding the Threat of Deepfakes
05:10 Customer Experiences and Education
07:57 Futures Thinking: The Next Decade
11:20 Preparing for the Future of Deepfakes
Keywords
Reality Defender, deepfakes, AI technology, fraud prevention, cybersecurity, identity protection, communication security, future technology, antivirus software, digital security
Rutger van Faassen (00:01.352)
Hello and welcome to another episode of Banking on Information. Today, my guest is Ben Colman, who is CEO and co-founder of Reality Defender. Welcome to the podcast, Ben.
Ben Colman (00:15.037)
Thank you for having me.
Rutger van Faassen (00:16.904)
Now, we always start with this very important question, which is, why do you do what you do?
Ben Colman (00:25.454)
The short answer is this is all that I've ever done. The longer answer is that with all things technology, the first, best, or I guess the worst, use case is for bad actors. And so, as I worked in different organizations, at Google, in grad school, at Goldman Sachs, and doing work for various government research spaces, what we've seen time and time again is that as technology evolves,
the first use case for the technology, unfortunately, is to commit fraud, because bad actors and hackers and fraudsters are the most motivated to leverage new tools to do fraud at scale. I think in fraud generally, but also in cybersecurity specifically, fraud is equal opportunity. It doesn't care what you look like, what you sound like, where you're based.
Rutger van Faassen (01:28.252)
Yeah. So you are passionate about technology, but also keeping it out of the hands of the bad guys, or at least making sure that it's hard for them to use it.
Ben Colman (01:39.362)
You know, I think, put a different way: we should all expect them to use it. Nothing can really stop them from using it, unfortunately. It's more about developing technology, developing AI in this space, deepfake detection, to detect AI, because humans just cannot tell the difference, even the PhDs on our team.
Rutger van Faassen (02:06.107)
Yeah, so you're passionate about defending against that, hence the name Reality Defender.
Ben Colman (02:13.474)
I'm absolutely passionate about defending against all types of bad actions, particularly, here, bad actions accelerated with AI.
Rutger van Faassen (02:29.777)
So let's talk a little bit about then what Reality Defender does. What is the number one use case that you solve for?
Ben Colman (02:38.19)
So at Reality Defender, we're focused on securing communications wherever they are. And really that comes down to impersonations that are used for fraud: scanning real-time audio, real-time phone calls, for example, for financial fraud with our bank clients; scanning real-time video, for example, this podcast, to see if the person you see on Zoom or Teams or Webex is indeed a real person or not. And one thing to
kind of say about what we don't do is that we don't touch any personal data. And so if you say, you know, my name is Ben, we will just say no AI detected. We're not going to know whether it's Ben or Rutger or, you know, anybody. And so if we think about it, anything tied to you or I, our face, our face print, our voice print, name, birthday, social security number, unfortunately, it's all available,
Rutger van Faassen (03:14.608)
Mm-hmm.
Rutger van Faassen (03:20.015)
Yeah. Yeah.
Ben Colman (03:35.564)
either online, on the dark web, or in different corners of the internet. And so, assuming all of our personal information is already available online, it doesn't really provide any type of fraud protection that someone who claims to be you or me knows that information. And so by detecting that your face or your voice is indicative of AI generation or manipulation, it's kind of the final
Rutger van Faassen (03:37.969)
Yep.
Rutger van Faassen (04:01.797)
Mm-hmm.
Ben Colman (04:05.332)
overlying protection to protect us from bad actors pretending to be us for financial fraud or identity fraud.
Rutger van Faassen (04:15.238)
So you're actually checking for personhood, for someone actually being real and not fake.
Ben Colman (04:24.062)
Right. Yeah, we're not checking for content or context or truthfulness or identity. All we're doing is saying, at the pixel layer within an image or a video, or within the waveform or spectrogram within audio, are there anomalies that indicate AI generation or manipulation? And from there, we make it really easy to understand for a non-technical user, perhaps somebody who works in a bank call center or, you know, you on a Zoom,
just to say immediately, here's a call to action: we're not sure, but the platform says this might be fake. Let me either ask you more questions, let me ask you to call me directly, or let me cancel the conversation or the wire transfer, because it just seems fishy.
Rutger van Faassen (05:11.385)
Yeah, I think it's a very, very important thing in this day and age, with so many fakes happening. Now, when you talk to your customers and they describe what value you deliver for them, what are the stories that they tell you?
Ben Colman (05:28.334)
You know, the first few years were a huge challenge, because we had to basically prove to people this was even a problem in the first place. You know, back when we started the company about five years ago, there were no such words as deepfake or generative AI. We called it digital humans and AI avatars, which is not quite as sexy a name as deepfake or generative AI. And so the first few years, it was just a tremendous amount of education, really saying, like, this could be a problem,
Rutger van Faassen (05:50.233)
Yeah.
Ben Colman (05:58.414)
but it had not reared its ugly head yet. And then toward the end of 2022, beginning of 2023, once OpenAI's ChatGPT started to really reach massive global success, the education we were doing turned a lot more into directly demonstrating the problem, to kind of get that visceral feeling. Kind of similar to when you find out that your email or your social media have been hacked: no longer is it, this will never happen to me. It's more like, wow, that is happening to me,
and here's how it's happening. And so your viewers, your listeners can Google us and see we were subpoenaed to give testimony in Congress, in the Senate, and we provided a deepfake on a permission basis. So we first asked Senator Blumenthal's office and Senator Hawley's office and Klobuchar and others who, you know, are way left or way right wing. But on this issue, they're truly bipartisan. So it's one of the areas where it's really exciting to watch our
Rutger van Faassen (06:36.888)
Yeah.
Ben Colman (06:57.228)
our government, our democracy, really work well. And when they heard Senator Blumenthal's deepfake voice, it was like, wow, this isn't just a theoretical thing you read about online. That's me. And I did not say that. So we're really trying to demonstrate to our clients, with their permission, obviously: we'll show up on a call to a top-10 global bank as their CEO, with him on the call. And he'll say, holy moly, that's not me.
Rutger van Faassen (06:58.979)
Yeah.
Rutger van Faassen (07:05.666)
Yeah.
Rutger van Faassen (07:11.694)
Right.
Rutger van Faassen (07:23.491)
Right. Yep. Yeah.
Ben Colman (07:27.192)
So anyhow, kind of summing it all up: deepfakes are here, we'll need to get used to them, and there's technology that can solve for it. We think of ourselves as antivirus software. It's going to get more exciting with the advent of AI agents, where you'll use a permissioned deepfake to call and make a reservation or buy an airline ticket or tell somebody that you're changing a meeting, which just increases the opportunity
Rutger van Faassen (07:30.904)
Yeah. Yeah.
Rutger van Faassen (07:48.439)
Yep. Yep.
Ben Colman (07:53.614)
to better understand the difference between real and fake.
Rutger van Faassen (07:57.25)
Yeah, it already feels very futuristic. But I like to do this thing called futures thinking, which is thinking 10 years out about what a possible future could look like. None of us knows what the future is going to hold, but I'd love to get your thoughts on what a possible future could look like in this space 10 years from now. What do you envision?
Ben Colman (08:16.813)
Mm-hmm.
Ben Colman (08:20.14)
Yeah. So, you know, again, I'm incredibly optimistic on this. I just think that right now technology is moving quicker than regulators can really respond to it. But in our space specifically, I think it's going to very much mature like antivirus software. You know, some of your viewers might remember that maybe 25 years ago, you picked a file and scanned it, because it was computationally expensive. And then maybe 15 years ago,
you got an email from your company or your school saying, you know, please log out at six o'clock, we're going to take over your computer and update it and check for viruses. And now it's all done locally. It's happening in real time. So you get an email. You know, my mother sends me these PowerPoint presentations of dogs and cats. Obviously there's probably a virus in there. Well, Gmail or Outlook finds it before you even open it. And you only know what happened because it says, hey, we caught something. We caught ransomware.
We caught an APT, we caught a Trojan horse. We're very much still in the first chapters in our space, where you're scanning specific files at certain times. And that's really just a challenge around compute and battery life. But what I'd expect is for it to be built into every device that you use within 10 years. And it becomes just as common as, you know, spam blocking: your phone says, hey, this call might be spam.
Rutger van Faassen (09:41.09)
Mm-hmm.
Ben Colman (09:48.398)
This caller might be spam. Are you sure you want to take it? And you're like, oh, it's probably United Airlines telling me that my flight's been delayed. Yes, I want to hear the robot voice. So in this case, it'll say, you know, you're on a call, Rutger, and it looks to be AI audio, and you'll get notified. Maybe I still wanted the call. Maybe it'll be your AI agent telling us that this podcast has been rescheduled to 10 minutes later. Either way, you know, our kids' generation is going to get used to it.
Rutger van Faassen (09:50.38)
Right?
Ben Colman (10:16.128)
Again, it's just an extension of general antivirus, applied to all media and all communications.
Rutger van Faassen (10:21.378)
Yeah, so it's going to be everywhere, on every device, letting us know what is actually real and what is AI or deepfake.
Ben Colman (10:33.358)
Yep. The one thing I'll really end on is that there's no silver bullet here. No single technology can solve all of this, in the same way that antivirus software is really an amalgamation of thousands of different models. Same thing with us. We're one of a number of tools. I would argue we're the most important one, because even if all the others say that Rutger's name, birthday, social, face, and voice all match, we say we don't care about any of that.
Rutger van Faassen (10:40.81)
Nope. Nope.
Ben Colman (11:00.686)
If what we're hearing or seeing seems manipulated, that gives you a moment to pause and decide: well, maybe it's his AI agent and I'm fine with that. Or maybe it's him calling because he needs money in an emergency. Or, wow, this is fraud. And unfortunately, that's the thing that's happening to a lot of families getting these, you know, AI ransom phone calls. Before, they used to say, we have your daughter. And now they're saying, we are your daughter.
Rutger van Faassen (11:09.154)
Yeah.
Rutger van Faassen (11:22.945)
Yeah.
Rutger van Faassen (11:26.86)
You're right.
Ben Colman (11:28.716)
So we're very optimistic, and we're really excited about some of our partners, whether it's large banks or large telecoms, who take our solution and expose it to the masses to help protect average people.
Rutger van Faassen (11:35.563)
Mm-hmm.
Rutger van Faassen (11:40.821)
Yeah. So how do people get ready for that future that you're describing? What can they do today to defend themselves against what is coming?
Ben Colman (11:51.374)
You know, if we spoke about this a year ago, I'd give you all the tips and tricks to notice when there's too much symmetry or pixelation or anti-aliasing or other artifacts. But unfortunately, over the last four or five months, the technology has gotten so good that you just can't tell the difference anymore. I'd really go back to the antivirus comment. You know, we don't expect, you know, my seven-year-old or my seventy-year-old parents to look at code and say, you know what, that looks like a virus.
Rutger van Faassen (12:03.276)
Yeah. Right.
Ben Colman (12:18.03)
It's more that we just use solutions that demonstrably protect us from these things. Same thing with our solution. Folks should ask their banks: what are they doing? If their bank says, your voice is your password, you've been authenticated, they should say, holy moly, that can be faked. That's not good enough. As for us, right now we've been careful about naming any banks that use us. But the ones that are thinking about this,
they don't use audio as authentication anymore, because it can be faked, again, by my seven-year-old with a few mouse clicks on the tablet that he uses to just watch cartoons. But he's making these funny videos of him or me as, you know, his new favorite actor. Are those deepfakes? They're not, you know, fraud in that sense, but again, the technology is out there and easy to use.
Rutger van Faassen (12:53.739)
Yup.
Rutger van Faassen (13:08.075)
Yeah. Right.
Rutger van Faassen (13:13.57)
Yeah. So the reality of deepfakes is here. You've got to get ready for it, and you've got to defend yourself against it. I think that's probably a great spot to wrap it up. Thank you very much, Ben, for being on the podcast.
Ben Colman (13:27.438)
Thank you for the opportunity.
Rutger van Faassen (13:29.227)
Great, and until next time, choose to be curious.