The Fraud Cast: Is AI driving a fraud revolution?


Transcript

Steve Buick

Good morning, everybody, and thank you all for joining us. My name is Steve Buick. I'm a partner in our restructuring and forensics practice, and I lead our forensic services team across the UK. So a very warm welcome to this fifth webcast in our broadcast series. A quick reminder: our most recent broadcast focused on fraud in a changing world, and you can watch that webcast and all of our previous sessions on our broadcast web hub. I'm delighted that on today's broadcast, I'm going to be joined by a panel of experts who are going to be talking to us about AI, and specifically how AI is impacting the fraud landscape, how businesses can respond, and what the future looks like. Throughout the webcast, we are going to be using the Webex polling system, so you can share your thoughts as we go through the session. We'd also love to hear any questions you might have for the panel, and you can ask these by typing them in the box on the screen. The broadcast today is going to be recorded, and there's a paper on the impact of AI on fraud and scams, prepared in collaboration with Stop Scams UK, that we'll be sharing. So please do share those with your colleagues. And with that, I'm delighted to introduce you to our panel members today. I'm joined by Rachel Joyce, Alex West, and Maria Axente, and I'll let the panel introduce themselves. So Rachel, shall I come to you first?

Rachel Joyce

Thanks, Steve. So yeah, I'm Rachel Joyce. I'm a director in our forensics team, and I focus on all things fraud, with a specific focus on our TMT sector.

Alex West

Great. Alex? Hi, Steve, I'm Alex West. I'm a director in our forensics practice, and I'm a banking and payments fraud specialist.

Maria Axente

Hi, Steve, I'm Maria Axente. I'm a responsible AI specialist, and I'm PwC's head of AI Public Policy and Ethics.

Steve Buick

Okay, that's great. Thanks very much for those introductions. So we'll start by going to our first polling question, which we'd like to get your thoughts on. You should be seeing that come up on the screen now. We have to be quite quick; you only have about 20 seconds to answer it. But the first question is: how concerned are you that AI will be used to perpetrate fraud and scams at your organization? And your options are: I'm not concerned, I'm somewhat concerned, and I'm very concerned. So if you can answer that, that would be great. Right. So whilst people are answering that question, I'm very conscious that, you know, there is a lot of opportunity to get concerned about AI, but there is also lots of potential for AI to have a very positive impact on the world. So, you know, perhaps we can start with that. Maria, maybe I can come to you to just talk through some examples of where AI is being used for good, and where it's exciting rather than scary.

Maria Axente

What we learned in 2023 is that, of course, AI has the potential to harm, to have negative impacts, but that's not the full story. We know there are some huge benefits we need to account for. And of course, in the rush to build those safeguards, we sometimes forget what an amazing world AI has given us already: the fact that online shopping, the social media experience, and all the big digital platforms are powered by AI, something that sometimes is being missed. If I were to choose three use cases I can talk about, I'll go first with what we experience at PwC: generative AI is about to change knowledge-based businesses like ours. A particular use case we're working on at the moment, and one I'm quite excited about, is horizon scanning for regulation. It has come to a point where it's quite difficult to keep track of the new regulatory rules in any sector, particularly those related to AI. So how can we use technology, AI itself, to help us track those new developments, being able to analyze them, derive insight, and also generate rules that will allow us to be much better in how we govern AI, or any other subject where we need to align ourselves with regulation? The second use case I'm very excited about is one from financial inclusion. We know that AI has made strides in financial services, but sometimes it's worth looking at how AI is being used to foster participation in the financial world. For example, there is an app called Tala, which has been rolled out in Southeast Asia, where data from smartphones is being used to assess the creditworthiness of those who are not in the financial system and grant them micro-credits. That has made, you know, a fantastic impact in Kenya, in India, in the Philippines,

Maria Axente

on the innovation side. And the last one, one of the use cases that probably will revolutionize science, is DeepMind, which was a UK company, a leading AI company in the world, and is now part of Google. It released in 2020 a model called AlphaFold that predicts the folding of proteins, and the shape of a protein actually determines its role in our bodies. Understanding the shape of the protein will tell us more about diseases and, with that, the cures. They were able to solve this 50-year-old problem in biology, being able to predict the folding of proteins, and they made that model available to all the scientists in the world, especially in the Global South, where scientists wouldn't have the resources. So imagine having models that are able to help us accelerate the development of new drugs, new materials. What that will do to our society is mind-blowing. I, for one, cannot wait to witness that.

Steve Buick

Yeah. No, really exciting stuff. And so we can all see that there's some great positive potential, but we're focusing as much on the threat as the potential today, and fraud is clearly one of the risk factors that AI poses. So, you know, Alex, perhaps I could turn to you to talk about what threat it poses from the fraud perspective, and is it being used by fraudsters today?

Alex West

Yep. Well, Steve, you referenced the research we've been doing recently with Stop Scams UK, and we were exploring exactly that issue. So we spoke to their members, so that's the biggest banks in the UK, some of the big international technology companies, and also telcos, and the companies specifically working on the development of AI tools. And we explored how AI is currently being used and how it might be used in the future. And I guess the headline is that there's consensus generally across the industry that AI will be used to increase the volume and sophistication of scam attempts. So, unfortunately, we're all going to have to get used to more of that coming our way, and it's also going to be increasingly personalized as well, because AI can help tailor material to individuals. Our report on that piece of research is coming out this week. We focused on six key threats, which span using generative AI to create smishing and phishing emails, voice cloning, and deepfake videos, amongst others. I think all of that, though, needs to be taken with the right kind of context. There's not a lot of evidence this is actually happening at scale now. The banks we spoke to aren't seeing huge volumes of this coming through at the moment. And that, I think, could be for one of two reasons. Either it's because scammers are being incredibly successful without using this kind of technology at the moment, so do they need to adopt AI at the moment? Perhaps not. But also, secondly, it's really hard to tell whether a scam or a fraud is generated by AI or a human. There's no good way of detecting that. So perhaps more of this is happening than we realize. I think the key thing that comes out of that research, though, is the question of the immediacy of the AI threat. You know, is this going to be a problem in weeks or months, or in years? I think those closest to the development of the technology measure timescales in the shortest possible windows.
So days, weeks, months, as this technology is moving really, really fast. So I guess the headline is that it absolutely is being used by fraudsters now, and it will be used more in the future.

Steve Buick

Yeah. Okay. So some excitement and some scary stuff, but we need to be sensible. Great. All right, it looks like we've got the results from our first poll. So: how concerned are you that AI will be used to perpetrate fraud and scams at your organization? By far the biggest response is "I am somewhat concerned", which is 73% of you. 21% are very concerned, and 5% are not concerned. That's quite interesting. I mean, any kind of reactions to that? Maybe, Alex, if I can come to you.

Alex West

I think that's probably the right answer, actually. I mean, it clearly is a big problem, and it is going to become a bigger problem in the future. And I think the time to act is now, to think about the risks posed by AI and to bolster defenses. But I think it's probably also right to cut through the hype and recognize that it's not a huge problem right now; we need to think about the future. Yeah.

Steve Buick

Yeah. Okay. Very good. Thank you very much. So let's come on to our second poll question, which should be coming up on your screen now. This is: within your organization, are you exploring how to mitigate the increased risk of fraud posed by AI? And the three options are: we have started, we have progressed our thinking, or we have not started. So, fastest fingers, get responding to those, and then we'll come back to the results in a few minutes. But while we're waiting for those results to come in, Rachel, perhaps I can turn to you. Could you just talk to us a little bit about what businesses can do to respond to the threat that AI poses from a fraud perspective?

Rachel Joyce

I think it's a difficult one, because of wanting to act now, as you say, but actually knowing what to do at the moment, given that things are moving so quickly. So I think the main thing for me is that agility piece: this is something that we're all going to have to keep really ahead of, in understanding how the threats evolve and how we can respond to those. In terms of practical steps, there are two things I would call out. The first is around the risk assessment side, something that businesses do and have been doing for years. But actually, this is changing, so the frequency with which businesses need to be updating those risk assessments, really thinking about tailoring them to their business: where is the biggest risk, where do we need to be most worried, and then focusing attention on how to pick that up. The other side I would call out is the behavioral and the technological combined. Those two things, the behavioral change and the technology change, have to happen together, and that's going to be really important. For businesses, that means training their staff, staff awareness. People will be starting to do this as individuals, but within the business context, it's about being a bit more skeptical, I guess, about everything they're seeing come through the business. And then on the technology side, we're already seeing lots of things coming through. Some examples: voice cloning, which you mentioned; there are tools coming about to start detecting some of the patterns and spot cases where that's happening. AI to combat AI, so interrupting and disrupting, I guess, the fraud threat. Watermarking: we talked a bit about documents that can be created, and how you can identify those, because they are so good and look ever more genuine; there's quite a bit of technology in that space. Then the last one, which has been around for a long time but is evolving, is machine learning in fraud detection. So plenty to go at for businesses, I would say.

Steve Buick

Yeah. And it's interesting, isn't it? Because when you started there, there are a lot of things that everybody needs to do, and the time to invest really is now. But a lot of what we're talking about here are things that have been around for a long time, like risk assessment; a lot of people listening to this will be very familiar with a risk assessment. And I think we were talking about this the other day: one of the key things is to be looking at that probably more frequently than you previously would have done, because of the pace of change. So, interesting times. Okay, well, thanks for that, Rachel. It looks like the results of our poll question are in. So: within your organization, are you exploring how to mitigate the increased risk of fraud posed by AI? We actually got a tie, a very precise tie. Amazing. I don't think we've had one of those before. So 42.86% of people said we have started, and 42.86% of people said we have progressed our thinking, with only 14% saying we have not started. It's very interesting that it's a draw. I mean, any perspectives on that that people would like to share? Rachel, maybe if you could start.

Rachel Joyce

I think that's great that the audience have started to do that thinking. It's going to be a continual journey and not a static assessment. People are going to have to keep on top of this, so great that people are starting to go on that journey. Yeah.

Maria Axente

Yeah. It is consistent with what we've learned from our clients in terms of AI governance: the fact that you need to start by understanding how special AI is, and with that, the need to deploy a governance that is adapted to this special technology.

Steve Buick

Yeah. Yeah. Very interesting. But like you say, very positive that people are thinking about it. I guess we'd be a bit surprised if the majority of people said they hadn't thought about this, because there's been a lot of media attention on it, hasn't there? So a lot of hype. Okay, great. Well, we'll come on to the next topic now, which is all about what the future looks like; we've talked a little about what the threat is. Maria, one of the things that I think we've seen come through quite clearly is a lot of the media focus on this. There was the recent AI summit, and there was certainly a period where it seemed like there was messaging coming out from governments around the world almost every few minutes in relation to how AI is going to be regulated. So, could you just give us your thoughts on what you're seeing in the AI regulatory landscape, and any perspective from the recent AI Safety Summit?

Maria Axente

It was great to be able to see, for the first time, leaders of the main countries in the world discussing the need to agree a path forward on guardrails and safety for AI. And the Bletchley Declaration that came out of the Safety Summit has put us on the right track. We now have full visibility and support from the different actors in society, and with that, you know, hopefully an accelerated journey in creating different interventions that will guarantee a safer use of AI. But it might not be enough. We welcome the fact that we have regulation popping up in the main jurisdictions driving AI. We have an executive order that specifically mentions the need for AI companies to prevent financial fraud, and to prepare citizens for a future where we'll see a lot of synthetic content being generated. So there'll be provisions for disclosure when AI is being used, chatbots and so on, where AI-generated content is being used. We'll see watermarking becoming a regulatory requirement. But despite the fact that we have those regulations in place, and we'll have more coming, we also need to make sure that we educate the wider public to understand how this content is generated and how it's being used, and for them to be on the lookout for such scams, because no matter how well we prepare to prevent some use cases, it's going to be difficult to prevent a lot of the misuses. So how do we balance the regulation that is coming with a set of interventions: from risk assessments, which we already use extensively in business, to standards, to educating the wider public, to a different narrative in the media that will allow us to see the power of deepfakes? We've seen already how some politicians' or celebrities' images have been misused. It's a powerful tool, and it will take us into a new world.
So making sure that all those different elements come into place, allowing us to prevent scams, prevent misuses, and address some of the risks, is going to be fundamental. So the message is: we should not rely solely on regulation. We should see this holistic, integrated AI governance, with different interventions, to give us confidence that we are on the right path.

Steve Buick

Yeah, it's absolutely one of those scenarios where everybody is driving in the same direction, and it's for everybody's benefit if organizations, governments, everybody is trying to make this really powerful technology safe, but also really, really effective. So it's going to be an interesting journey, I think. All right, so, Alex, just in terms of what the future looks like from a disruption and prevention and fraud perspective, could you just talk us through what you think that's going to look like?

Alex West

Happy to. I think some of the points you made earlier, Rachel, are absolutely spot on in terms of where we see really powerful use cases for AI in the prevention and detection of fraud and scams. I think for me, there are three really key areas to focus on. There's improved detection: that's taking the power of techniques like machine learning, which are already extensively used across banking, on social media, and in search platforms, to look for more subtle indicators of malicious content. So how do we train models to identify this and proactively remove bad content from platforms, so that we never see it in the first place? I think that's absolutely critical, and that's going to get better as companies expose different types of datasets to those models. I think the second, and perhaps the biggest bang for buck in the short term, is just driving process efficiency. So perhaps the less exciting use of AI, but, you know, if you can reduce the small percentage of queries that make it through to a call center, for example, where someone actually needs to have a long interaction with a customer to resolve a query, and if you can use AI to solve some of that while giving a good customer experience, you can potentially free up an awful lot of resource to focus on high-risk, more complex cases. So it's a potentially very, very powerful way of both improving cost efficiency and giving more focus to those high-risk cases. I think the third really good example is using AI to fight AI, but also to take the fight to the fraudsters. Generally, when we're talking about fraud, we're quite reactive; we're responding to what fraudsters are doing. But with AI, there's the potential to actually get ahead of that, to actually start taking the fight to them: using, for example, a chatbot to engage in conversation with a scammer, waste their time, collect their contact details and bank information, and pass that to law enforcement.
There are some potentially very powerful use cases there, on top of things like the tools to detect synthetic content and the tools to reduce spam messages coming through on our telephones. All of these are really positive. So AI is going to be enormously powerful to prevent and detect fraud and to protect us all. But at the same time, it's inevitable that there's going to be more AI-enabled fraud coming through to respond to.

Steve Buick

Yeah. Okay. Very interesting, and thank you very much. Right, so let's turn to some questions that we've got from our audience. I see that there are quite a few questions coming in. But the first question here is: the pace of change around AI is so fast, and you talked about updating risk assessments regularly. Does this mean we will be constantly having to update these? Rachel, I think that's probably one for you.

Rachel Joyce

I mean, my view on that: it's not a static risk assessment anyway, in terms of what we look back over. It shouldn't be something that's done once a year and sits on a shelf, not looked at. So regularly, yes. And I think the key is that it's not just the responsibility of a small number of individuals within a business to be identifying those risks, escalating them, and thinking about how to respond. The key for me is educating all the staff, no matter where they sit in the business, that that's part of their responsibility as well. So it's not about sitting down just to do an exercise; it's about engaging the business fully. So yes, it's a regularly updated assessment.

Alex West

Yeah, that's a really good point, I think, Rachel, because it's so easy to think of AI as a subject for AI specialists, one that others don't need to engage with or talk about. But certainly in a fraud world, and I think in any application, we all need to understand the basics of what the technology is and how it can affect our jobs. So we all need to become familiar with this tool.

Rachel Joyce

Yeah, I would liken it to, you know, bribery as it was a few years ago. It's that education piece: somebody may not necessarily come across bribery in their everyday role, but they're aware and can spot some of the things that might be a concern for the business.

Maria Axente

Yeah. There is an element of heavy lifting when it comes to risk management in AI right now, because a lot of the risks AI brings are rather new. So we need to make sure that we are prepared for those risks, and then be able to keep an eye on new ones that might emerge. So heavy lifting first, then a bit of updating; that's absolutely right. We have to go from a status quo where risk management is second line only to a mindset where everyone is risk aware, able to work as a unit, on the lookout for new risks, and also able to develop internal mechanisms that allow us to be quite agile.

Steve Buick

Yeah. Yeah. No, completely agree. Okay, let's take another question. So: what do you think the picture will look like in a year's time? Maybe, Maria, if I can come to you on that one.

Maria Axente

Prediction in AI becomes increasingly difficult. On one hand, because AI improvement has surprised its own makers: those who have been working on it for decades have been saying this again and again and again, that they are surprised how fast this technology is improving. That's on one hand. On the other hand, being able to have the foundations that will allow AI to really flourish in enterprises is critical: being able to look after digital transformation, data strategy, cloud strategy, to really be able to scale AI, so that we can see this value materialize in legacy businesses, not just technology-first companies, where this is not a problem. So in a year's time: next year we have elections, 48% of the population of the globe has an election, where we will more than likely see synthetic content being used one way or another in elections by, you know, malicious actors or by opponents. So we'll see the full strength of the potential of this technology, and with that, I expect a higher degree of awareness among the citizens of the planet of the potential, but also of the potential for misuse. So the elections next year will tell us how well prepared we are for the force of AI, for the positive impact and for the negative impact.

Steve Buick

I mean, the education piece is key, isn't it? Behavioral change: we've touched on it as we've been having this discussion here. Like some of you were telling us the other day that you were talking to your friends about how we protect ourselves against a voice clone phoning up and asking us to transfer money. So maybe we have a safe word, so that if somebody phones you and says, "Hi, it's your mom," you've got a safe word that you know the AI doesn't. So it's interesting times. But we've got a few more questions coming in, so we'll take as many of these as we can. So: how important is it for businesses to train their employees on the negative implications of AI, being able to successfully identify scams and fraud, and should this be our highest priority? Alex, I think I'm going to come to you for that one.

Alex West

I think it is really important to train staff, and I think some of that training does need to change to reflect the new technology, voice clones probably being the most obvious example that's made it to the public eye a lot. So, you know, there have been publicly reported cases where members of the finance team of a business have been called up by someone that sounds like the CEO. The "CEO" asks them to transfer some money, they transfer the money, and it turns out the CEO never made the call. So it's about being aware of those kinds of situations: just because someone sounds like the CEO doesn't mean it's them. I think absolutely that needs to be trained. And that also impacts our personal lives as well, to your point earlier, Steve. We all need to be aware that content might not be trustworthy, or that a phone call that sounds like someone we know might not be from them. So I think there's a general kind of behavioral awareness of the technology and what it can do. I think what's also important with all this kind of stuff is that actually, whether a scam is AI-generated or human-generated, the same kinds of protective techniques apply. So pausing, challenging the source of information, verifying that it is what you think it is; and that's no different for AI versus any other type of fraud. So it's about just having that process of making sure employees know that they shouldn't be rushing into doing these things, especially if that thing is making a payment. Okay.

Steve Buick

Very good. I think we're out of time for questions. So before I bring things to a close, I'm just going to come back to the panel and ask for some closing remarks and observations. Maybe, Maria, we could start with you.

Maria Axente

I'm very encouraged by the fact that public attitudes on AI have changed this year. Of course, it was everywhere in the news; we've seen quite a lot of reporting on various shenanigans that have happened in the AI industry, and with that, the fact that the public pays attention to the subject. They are educated, surprisingly, through those investigations that journalists have conducted into the misuses of AI, which in a certain way does a much better job than the formal education process on this subject. And with that, I expect people to be much better aware next year of how AI is being used, and if it's abused and they are impacted, they will ask questions. And I think this critical thinking cannot happen if we don't really understand what's going on. Those public attitudes will also put a bit of pressure on enterprises to really prioritize AI governance, AI risk management, responsible AI altogether, and to really come together as a collective. Irrespective of whether we are in our capacity as patients or citizens or customers, we all want the same thing: this technology to be safely used for our needs, not the other way around. Yeah.

Steve Buick

Okay. Very good. Alex?

Alex West

I would echo some of the points that Maria made, but also reflect on the fact that, to some extent, with the technology, the genie is out of the bottle, and open source models are available now. I think there's an inevitability that even with safeguards and regulation built into the system, we are going to see more fraud come through this, because fraudsters, we know, operate completely outside the regulatory and legal framework. So I think we need to be prepared for that. But I'm positive, I'm optimistic. I think this technology is going to drive enormous positive change; unfortunately, the fraud side is something that we're going to have to deal with. But I would encourage everyone to prepare themselves for that, prepare their families for what's coming, and start building their own defenses.

Steve Buick

Okay. Very good. Rachel.

Rachel Joyce

I think for me, it's probably around: yes, this is something that potentially is quite a scary concept and something to deal with, but actually, no one is starting from scratch. We've got the basics there, and it's about adjusting and adapting. So again, feeling positive.

Steve Buick

Okay. Very good. So we'll bring things to a close there. I'd echo what the panel has said and what we've heard: clearly there's huge potential for this technology for good, and potential for harm, so we all just need to stay alive to that fact. A huge thank you to all of our panelists for joining us today, and a big thank you to all of you for great input during the session and some great questions. Please do be sure to visit our Fraud Cast web hub, and don't forget to keep an eye out for our future broadcast sessions; there's another one coming up very soon. And let us know if there are any topics you're particularly interested in hearing about. So thank you very much, and we look forward to seeing you all again soon.
