Video transcript: GenAI - Building trust and managing risk


GenAI presents incredible opportunities, but there are risks associated with its use. PwC experts draw from their experience and share advice on implementing the right guardrails, protecting data and ensuring people are always involved to mitigate issues such as bias and hallucinations.

Transcript

Leigh Bates: I think generative AI is different, primarily because the scale and the opportunity around it are just massive, but it does come with some risks.

Chris Oxborough: Don't be afraid of the risks, but make sure you put the right frameworks in place to manage those risks.

Charlotte Byrne: My focus is mainly on how we apply generative AI technology to us as an organisation. So how can PwC roll out these technologies safely? What governance processes and guardrails do we put in place? And we're taking those insights out to clients.

Quentin Cole: Given the nature of GenAI and data, which is the lifeblood of it, governance is absolutely key. The public need confidence. Businesses need confidence. And we need to know that confidentiality is being maintained, but also security is being maintained.

Chris: If you don't understand the risks, you can't manage the risks. So our role is to really help them think about those risks, manage those risks, and help them to maintain that trust they need to build.

Tilly Harries: One area that has been of particular interest to me is bias in AI. As an employment lawyer, I focus very much on discrimination and so any form of bias in the work that I do is risky.

Colin Light: Generative AI particularly has the ability to have inbuilt bias or hallucinations as well. Hallucinations are where the output of the generative AI completely misinterprets the input to give a really strange, erroneous answer.

Tilly: This is where the human judgement really comes in to make sure that any output of the AI is fair and is balanced.

Chris: You still need somebody there with domain expertise, who can review the output and make sure that the decisions you take off the back of it are the right decisions.

Leigh: There are a number of risks that firms need to consider as they go on their AI journey, from the more obvious risks, which might be around legal IP protection or privacy and security, through to risks around cyber threats, resilience and third-party risks if you're using third-party models.

Chris: We've got experience around how to build the use cases, how to build the technology, how to secure that technology, but also how to help you manage the risks that sit around it.

Leigh: At PwC we have a breadth of capability across our firm, not just around the technology side of AI and the generative AI model itself, but also culture and change: how to embed this into the business for success, right the way through to the control and regulatory landscape.

Contact us

Johan Jegerajan

EMEA and UK Consulting CTO, PwC United Kingdom

Tel: +44 (0)7841 562026

Simon Perry

Risk Head of Markets and Services, PwC United Kingdom

Tel: +44 (0)7740 024957
