By Leigh Bates, Partner, and Trishia Ani, Manager, PwC UK
Artificial intelligence (AI) – generative AI (GenAI) in particular – is creating palpable excitement within boardrooms and businesses – but not without uncertainty and fear.
Nearly half (45%) of the UK business leaders taking part in PwC’s 27th Annual CEO Survey believe that GenAI will boost their revenues and returns. But 59% are worried that it will increase the spread of misinformation in their businesses, and nearly half (47%) are concerned that GenAI will increase their susceptibility to legal liabilities and reputational risks.
This troubling ‘trust gap’ is holding back the value-creating potential of AI, not just through the threat of reputationally damaging incidents and errors, but also by undermining your business’s confidence in the outputs.
Incomplete, outdated or invalid inputs can heighten the risk of so-called AI hallucinations (false answers). It’s therefore important to guard against overreliance on the outputs unless there is sufficient testing, governance and validation. Grounding the AI with context-specific data is also key.
A third of UK CEOs are worried that GenAI will heighten the risk of bias towards specific customer or employee groups. GenAI learns from the data it is trained on and will therefore inherit biases if they are not managed during the pre-processing phase of the AI lifecycle.
The ‘black box’ inner workings of AI can make it hard to explain the results. It’s therefore important to test and understand how AI models arrive at their decisions to ensure accountability. Human-led validation by domain experts is also key to maintaining trust.
GenAI introduces new threats ranging from the mistaken uploading of confidential information in response to prompts to deliberate ‘jailbreaking’ to get around AI guardrails.
There’s the risk that GenAI will use copyrighted images, text or graphs without authorisation – in a financial report, for example.
All these risks come together in the potential for lost opportunities. In particular, boards may be reluctant to use AI-generated analysis in their decision-making or to sign off on AI use cases because they don’t have sufficient confidence in the outputs.
As a CFO, your connections across the business, comfort with data and ability to provide critical challenge make you ideally suited to bridging this trust gap. In creating effective governance foundations and realising the potential of AI within your function and the wider business, five key priorities stand out:
This balance between awareness, responsibility, and value creation will help build confidence in AI and enable your business to move out in front.