GenAI: Creating value through governance

Over a year of Generative AI (GenAI) excitement has translated into palpable momentum, with 60% of organisations seeing GenAI as an opportunity rather than a risk, and many launching GenAI-enabled capabilities, investing in GenAI skills and pursuing ambitious strategies.

The ecosystem is evolving too. Big Tech and venture capitalists are pouring investment into GenAI systems, employees are asking to use GenAI tools in daily workflows, and enterprise software vendors are augmenting products with GenAI features.

Yet organisations have also seen how things can go awry: confidential data leaked to public GenAI systems, struggles with inaccuracies or “hallucinations”, deepfakes, manipulated content, and bias.

The case for GenAI governance

Most business leaders recognise that thorough governance will be essential to scale up their GenAI proofs of concept (POCs) and generate sustainable value, while avoiding soaring technology costs, copyright liabilities, unmet expectations, and low-quality outputs.

They are augmenting risk management, compliance, security, and privacy functions to support governance, including vetting third-party GenAI tools and assessing specific risks associated with planned GenAI use cases.

These governance principles and practices are enabling enterprises to align on standard processes, manage risk, and clarify priority areas for investment.

The PwC CEO Survey 2024 reveals that 75% of CEOs whose companies have already adopted GenAI believe the technology will enhance their ability to build trust with stakeholders over the next 12 months, perhaps a reflection of the approach they took to deploying applications safely.

Aligning AI governance across business functions enables AI investments and projects to be prioritised consistently based on feasibility, complexity and potential risks. Teams can coordinate to bring appropriate and diverse perspectives to use cases at critical junctures during development.

We’ve also found that a proportion of proposed GenAI use cases are not use cases for GenAI at all. In fact, they are use cases for a different AI technology or for simpler robotic process automation (RPA). Governance program intake and prioritisation processes, when aligned with a consistent view of risk, help match the right technology infrastructure and the right data to the right use case.

Telecommunications companies with access to significant quantities of consumer data are showing interest in adopting responsible AI principles to guide their adoption of AI and GenAI. These companies are actively exploring tools, techniques, and processes to reduce the use of sensitive customer data while enabling more curated and customised experiences.

Adapting existing governance for the GenAI era

GenAI governance does not need to be built from scratch. Organisations with practices governing technology, from privacy and data governance to third-party risk management, and those with foundational components for AI governance, can adapt existing practices to address GenAI risks.

Risk functions can assess how they relate to GenAI governance objectives, identify which additional responsibilities they can take on and when they need to be involved in escalations and key decisions, and coordinate on leadership and broader roles and responsibilities within a holistic AI governance program.

GenAI’s broad applicability empowers creativity at scale, but it also poses new challenges for governance. Governance can't merely oversee specific teams; it should cater to a wide spectrum of users. With GenAI, organisations have less control over generated outputs than with narrow AI. In addition, the risk management process should consider how one tool can apply to many use cases with different risk profiles, rather than to one specific use.

When updating the AI governance operating model for GenAI, organisations should decide how much to centralise the management of governance resources, considering questions such as:

  • Is the overall enterprise operating structure highly centralised, and therefore conducive to centralised governance?
  • Which governance practices must remain centralised? (e.g., risk taxonomy)
  • Which practices should be federated? (e.g., localised testing)

Taking an approach that scales

Scale may be the most pressing challenge facing organisations. Vendors are launching new GenAI solutions almost daily, alongside abundant free or nearly free tools online and GenAI capabilities added to legacy software products.

Getting a handle on enterprise-wide GenAI use and maintaining an up-to-date inventory of systems can be a significant challenge.
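To make the inventory idea concrete, here is a minimal sketch of what a single inventory record might look like in Python. It is illustrative only: the class name, the fields and the “unassessed” default are assumptions for this sketch, not a prescribed schema.

```python
from dataclasses import dataclass, field
from datetime import date

# Minimal sketch of one entry in an enterprise GenAI system inventory.
# All field names are illustrative assumptions, not a standard schema.
@dataclass
class GenAISystemRecord:
    system_name: str               # e.g. an internal chatbot or a vendor copilot
    vendor: str                    # "internal" for in-house builds
    business_owner: str            # accountable person or team
    use_cases: list[str] = field(default_factory=list)
    data_categories: list[str] = field(default_factory=list)  # e.g. "customer PII"
    risk_tier: str = "unassessed"  # set later by the governance triage process
    last_reviewed: date | None = None

# Example: registering a hypothetical third-party coding assistant for review.
record = GenAISystemRecord(
    system_name="Acme Code Assistant",
    vendor="Acme (third party)",
    business_owner="Engineering Enablement",
    use_cases=["code completion", "documentation drafting"],
    data_categories=["source code", "internal documentation"],
)
print(record.risk_tier)  # "unassessed" until triaged against the risk taxonomy
```

An inventory built on records like this gives governance teams a single place to see who owns each system, what data it touches, and whether it has been triaged yet.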

One way to enable consistency is to align on and reinforce a common view of risk. The nature of GenAI models exacerbates some risks, such as those relating to data protection and intellectual property (IP), but these are largely consistent with risks seen with other AI and emerging technologies. A holistic, standardised AI risk taxonomy enables an enterprise to triage systems based on that view of risk and align on remediation strategies.

We see several main categories that can be used as the basis of an enterprise-wide risk taxonomy for AI.

  • Model risks: Risks related to the training, development, and performance of an AI system, including conceptual soundness, reliability of output and oversight.
  • Data risks: Risks related to the collection, processing, storage, management, and use of data during the training and operation of the AI system.
  • System and infrastructure risks: Risks related to the acquisition, implementation, and operation of an AI system in a broader software and technology environment, including third-party and open source risks.
  • Use risks: Risks related to intentional or unintentional misuse or manipulation of, or attacks against, an AI system.
  • Legal and compliance risks: Noncompliance with applicable laws, rules, and regulations, including privacy, sector-specific and function-specific guidance, and issues such as copyright and deep fakes.
  • Process impact risks: Unforeseen or unmitigated risks that arise from integrating the use of AI into an existing workflow.

It is important to educate governance personnel on how AI risks may manifest in the organisation’s expected GenAI use cases and to align on a shared view of risk and risk tolerance. This view can then be embedded into existing risk processes to flag AI systems warranting closer inspection and to denote which team needs to be involved in risk remediation.
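As a minimal sketch of how such a taxonomy could be embedded into an intake process, the Python example below scores a proposed system against the six categories above and flags any category that exceeds a tolerance threshold. The 1–5 scale, the single threshold and all names are illustrative assumptions; in practice, an organisation would calibrate a tolerance per category.

```python
from enum import Enum

class AIRiskCategory(Enum):
    """The six taxonomy categories described above."""
    MODEL = "model"
    DATA = "data"
    SYSTEM_AND_INFRASTRUCTURE = "system and infrastructure"
    USE = "use"
    LEGAL_AND_COMPLIANCE = "legal and compliance"
    PROCESS_IMPACT = "process impact"

# Illustrative tolerance on a 1-5 scale; a real program would set a
# tolerance per category, tied to the organisation's risk appetite.
RISK_TOLERANCE = 3

def triage(scores: dict[AIRiskCategory, int]) -> list[AIRiskCategory]:
    """Return categories scoring above tolerance, i.e. those warranting
    closer inspection and a named remediation owner."""
    return [category for category, score in scores.items()
            if score > RISK_TOLERANCE]

# Example intake assessment for a hypothetical customer-facing chatbot.
assessment = {
    AIRiskCategory.MODEL: 3,                 # hallucination risk, partly mitigated
    AIRiskCategory.DATA: 5,                  # touches customer personal data
    AIRiskCategory.SYSTEM_AND_INFRASTRUCTURE: 2,
    AIRiskCategory.USE: 4,                   # public-facing, open-ended prompts
    AIRiskCategory.LEGAL_AND_COMPLIANCE: 4,  # privacy and sector-specific rules
    AIRiskCategory.PROCESS_IMPACT: 2,
}
for category in triage(assessment):
    print(f"Escalate for closer inspection: {category.value}")
```

The natural next step, not shown here, is routing each flagged category to the risk function that owns the corresponding controls.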

Building GenAI governance

Organisations without an existing governance program have an opportunity to design one from scratch for all forms of AI including GenAI. Below are four steps to get started.

1. Align AI governance to AI strategy

The EU AI Act and NIST’s AI Risk Management Framework both advocate for AI governance to be proportional to risk. If AI governance is overly restrictive or inconsistent with the organisation’s plans to deploy AI, it may create too much friction. If governance is too loose, the organisation may be unable to mitigate risks or to adjust when regulations around AI come into effect.

This alignment also signals to the organisation that risk management is as important as the business goals it seeks to achieve with AI. Such alignment fast-tracks innovation by focusing on priority areas of investment where risk can be effectively managed.

2. Update enterprise perspectives on risk and firm values

Central to effective governance is a shared understanding of risk. The AI governance team should evolve beyond traditional operational risk management frameworks. This involves enhancing procurement, third-party risk management, security, privacy, data, and other associated risk and compliance measures to help address risks that AI use has either elevated or created.

Adopting a risk taxonomy for evaluating AI technologies and supervising established AI systems enables efficient oversight and resource usage. The risk taxonomy describes the nature of GenAI-specific risks, while a companion roles-and-responsibilities charter should formalise ownership of the associated risk controls. Internal codes of conduct should also be refreshed to address GenAI misuse.

Organisations may also revisit their approach to decision-making in accordance with firm values, such as customer safety or product transparency. These may be themes from the firm's mission and vision statements, or strategic priorities for the business. The organisation can apply existing capabilities, or set up new ones, that allow stakeholders to address such considerations, starting with updating codes of conduct and acceptable use policies. Organisations can also revisit their responsible AI principles and frameworks in light of GenAI. As new dilemmas emerge around what it means to develop AI responsibly, the organisation should identify channels to raise concerns about them, facilitate their analysis, and update policies accordingly.

3. Define roles and responsibilities

To formalise governance, organisations might define new structures such as an AI steering committee or governance board. These bodies, comprising members from existing governance functions and business units, will influence internal policy development, support use case prioritisation, and address escalated issues.

These roles and responsibilities may also be time-bound, or dialled up or down over time, considering the approvals, potential risks, decisions and remediations that may occur at different stages of AI development, implementation, use, and monitoring.

4. Develop a training and change management program

In the rush to embrace GenAI, staff may be tempted to use any available tool without understanding the potential risks. As a result, some organisations have chosen to block access to publicly available GenAI tools. If a company chooses to allow staff to explore GenAI capabilities, it should provide coaching on how GenAI works and how its risks manifest. Staff should also be clear on their responsibilities regarding data, systems, and processes.

Organisations that move forward methodically, invest in a solid foundation of governance, and unite as a cross-functional team to address difficult questions will be well positioned for AI success.

Find out more about PwC’s Responsible AI approach.

Contact us

Maria Axente

Head of AI Public Policy and Ethics, PwC United Kingdom

Tel: +44 (0)7711 562365

Chris Oxborough

Lead for Responsible AI, PwC United Kingdom

Tel: +44 (0)7711 473199

Ilana Golbin

Director and Responsible AI Lead, PwC US
