Guardians of responsible AI: How CFOs can lead in building trust

Trust is critical in unlocking the full potential of AI. As a CFO, you’re ideally positioned to lead in building confidence in models and ensuring they’re used ethically and responsibly.

By Leigh Bates, Partner, and Trishia Ani, Manager, PwC UK

Artificial intelligence (AI) – generative AI (GenAI) in particular – is creating palpable excitement within boardrooms and businesses – but not without uncertainty and fear.

Nearly half of the UK business leaders taking part in PwC’s 27th Annual CEO Survey (45%) believe that GenAI will boost their revenues and returns. But 59% are worried that it will increase the spread of misinformation in their businesses. Nearly half (47%) are concerned that GenAI will increase their susceptibility to legal liabilities and reputational risks.

Failing guardrails

This troubling ‘trust gap’ is holding back the value-creating potential of AI, not just through the threat of reputationally damaging incidents and errors, but also by undermining your business’s confidence in the outputs.

Risk one: Overreliance

Incomplete, outdated or invalid inputs can heighten the risk of so-called AI hallucinations (false answers). It’s therefore important to guard against overreliance on the outputs without sufficient testing, governance and validation. Grounding the AI with context-specific data is also key.

Risk two: Bias

A third of UK CEOs are worried that GenAI will heighten the risk of bias towards specific customer or employee groups. GenAI learns from the data it is trained on and will therefore inherit biases unless these are managed during the pre-processing phase of the AI lifecycle.

Risk three: Explainability

The ‘black box’ inner workings of AI can make it hard to explain the results. It’s therefore important to test and understand how AI models arrive at their decisions to ensure accountability and maintain trust. Human-led validation by domain experts is also key.

Risk four: Cybersecurity

GenAI introduces new threats ranging from the mistaken uploading of confidential information in response to prompts to deliberate ‘jailbreaking’ to get around AI guardrails.

Risk five: Copyright infringement

There’s the risk that GenAI will use copyrighted images, text or graphs without authorisation – in a financial report, for example.

Risk six: Lost opportunity costs

All these risks come together in the potential for lost opportunities. In particular, boards may be reluctant to use AI-generated analysis in their decision-making or sign off on AI use cases because they lack sufficient confidence in the outputs.

Intelligent risk-taking

As a CFO, your connections across the business, comfort with data and ability to provide critical challenge make you ideally suited to bridging this trust gap. In laying effective governance foundations and realising AI’s potential within your function and the wider business, five key priorities stand out:

  • Identify openings
    Identify use cases and consider the benefits (direct and indirect), from customer and employee experience through to hours saved. You can then continuously track the return on investment.
  • Set the standard
    Establish policies and standards on safe AI use and deployment. You can then operationalise these across the business, taking account of evolving regulations and best practices.
  • Assign ownership
    Define clear roles and responsibilities for AI oversight – including your board, technical teams, risk management and external AI providers.
  • Be proportionate
    Governance requirements should be proportionate to business value and associated risk, considering your organisation’s AI architecture, regulatory requirements and risk appetite.
  • Create organisation-wide understanding
    Provide ongoing training, from executives to operational staff, to increase awareness of how to maximise the value from AI while managing the risks. This includes helping employees learn how to use AI responsibly, understand its limitations and apply human-led governance as part of a human-in-the-loop approach.

Clear path forward

This balance between awareness, responsibility and value creation will help build confidence in AI and enable your business to move out in front.

Contact us

Leigh Bates

UK FS Data Analytics Leader, PwC United Kingdom
