The ecosystem is evolving too. Big Tech and venture capitalists are pouring investment into GenAI systems, employees are asking to use GenAI tools in daily workflows, and enterprise software vendors are augmenting products with GenAI features.
Yet organisations have also seen how things can go awry: confidential data leaking into public GenAI systems, struggles with inaccuracies or “hallucinations”, deepfakes, manipulated content, and bias.
Most business leaders recognise that thorough governance will be essential to scale up their GenAI proofs of concept (POCs) and generate sustainable value while avoiding soaring technology costs, copyright liabilities, unmet expectations, and low-quality outputs.
They are augmenting risk management, compliance, security, and privacy functions to support governance, including vetting third-party GenAI tools and assessing specific risks associated with planned GenAI use cases.
These governance practices are enabling enterprises to align on standard processes, manage risk, and clarify priority areas for investment.
The PwC CEO Survey 2024 reveals that 75% of CEOs whose companies have already adopted GenAI believe the technology will enhance their ability to build trust with stakeholders over the next 12 months, perhaps because of the care they took to deploy applications safely.
Aligning AI governance across business functions enables AI investments and projects to be prioritised consistently based on feasibility, complexity, and potential risk. Teams can then coordinate to bring appropriate, diverse perspectives to use cases at critical junctures during development.
We’ve also found that a proportion of proposed GenAI use cases are not use cases for GenAI at all; they are use cases for a different AI technology or for simpler robotic process automation (RPA). Governance program intake and prioritisation processes, when aligned with a consistent view of risk, help map the right technology infrastructure and the right data to the right use case.
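As a simple illustration of how an intake process might route proposals to the technology family they actually call for, consider the minimal sketch below. The record fields, categories, and recommend_technology function are hypothetical illustrations, not a standard framework; real intake decisions involve far more nuance and human review.

```python
from dataclasses import dataclass

@dataclass
class UseCaseProposal:
    """Hypothetical intake record for a proposed automation use case."""
    name: str
    generates_novel_content: bool      # e.g. drafting text, images, or code
    follows_deterministic_rules: bool  # fixed steps over structured systems
    needs_predictions_from_data: bool  # e.g. classification or forecasting

def recommend_technology(p: UseCaseProposal) -> str:
    """Route a proposal to the technology family it actually calls for."""
    if p.generates_novel_content:
        return "GenAI"
    if p.needs_predictions_from_data:
        return "classical ML"          # predictive, not generative
    if p.follows_deterministic_rules:
        return "RPA"                   # rule-based automation suffices
    return "needs further analysis"

# Example: an invoice-routing proposal that generates no novel content
proposal = UseCaseProposal("invoice routing", False, False, True)
print(recommend_technology(proposal))  # -> "classical ML"
```

In practice, an intake questionnaire would feed records like these, and borderline proposals would go to human reviewers rather than a rule.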
Scale may be the most pressing challenge facing organisations. Companies are offering new GenAI solutions almost daily, alongside abundant free or nearly free tools online and GenAI capabilities built into legacy software products.
Getting a handle on enterprise-wide GenAI use and maintaining an up-to-date inventory of systems can be a significant challenge.
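One pragmatic starting point is a minimal, structured inventory record that every system, whether bought, built, or embedded in a vendor product, must have. The fields and registry below are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    """One entry in an enterprise AI inventory (illustrative fields only)."""
    system_id: str
    owner: str                 # accountable business owner
    vendor: str                # third-party supplier, or "internal"
    uses_genai: bool
    data_categories: list[str] = field(default_factory=list)  # e.g. ["PII", "IP"]
    last_reviewed: date | None = None

# A simple registry keyed by system_id keeps the inventory current.
inventory: dict[str, AISystemRecord] = {}

def register(record: AISystemRecord) -> None:
    """Add a new system or refresh an existing entry."""
    inventory[record.system_id] = record

register(AISystemRecord(
    system_id="crm-copilot-01",
    owner="Sales Operations",
    vendor="internal",
    uses_genai=True,
    data_categories=["PII"],
    last_reviewed=date(2024, 6, 1),
))
```

Even a registry this simple makes it possible to answer basic governance questions, such as which systems touch personal data or which have not been reviewed recently.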
One way to enable consistency is to align on and reinforce a common view of risk. The nature of GenAI models exacerbates some risks, such as those relating to data protection and intellectual property (IP), but these are largely consistent with the risks seen across AI and other emerging technologies. A holistic, standardised AI risk taxonomy lets an enterprise triage systems against that shared view of risk and align on remediation strategies.
We see several main categories that can be used as the basis of an enterprise-wide risk taxonomy for AI.
It is important to educate governance personnel on how AI risks may manifest in the organisation’s expected GenAI use cases and to align on a shared view of risk and risk tolerance. That view can then be embedded into existing risk processes to flag AI systems warranting closer inspection and to denote which team needs to be involved in remediation.
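To make the idea concrete, here is a minimal triage sketch: score each system against the taxonomy’s categories and flag anything that crosses the organisation’s tolerance threshold. The category names, scoring scale, and threshold are assumptions for illustration; an enterprise would substitute its own taxonomy and tolerances.

```python
# Hypothetical risk categories and threshold, for illustration only.
RISK_CATEGORIES = [
    "data_protection", "intellectual_property", "accuracy",
    "bias_and_fairness", "security", "transparency",
]

REVIEW_THRESHOLD = 3  # scores run from 1 (low) to 5 (high)

def triage(scores: dict[str, int]) -> tuple[str, list[str]]:
    """Flag a system for closer inspection and name the categories driving it."""
    flagged = [c for c in RISK_CATEGORIES if scores.get(c, 0) >= REVIEW_THRESHOLD]
    tier = "closer inspection" if flagged else "standard monitoring"
    return tier, flagged

# Example: a customer-facing chatbot scoring high on data protection
tier, drivers = triage({"data_protection": 4, "accuracy": 3, "bias_and_fairness": 2})
# -> ("closer inspection", ["data_protection", "accuracy"])
```

The output names the risk categories driving the flag, which is what tells the governance team whose expertise, such as privacy or legal, to bring in for remediation.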
Organisations without an existing governance program have an opportunity to design one from scratch for all forms of AI, including GenAI. Below are four steps to get started.
Organisations that move forward methodically, invest in a solid foundation of governance, and unite as a cross-functional team to address difficult questions will be well positioned for AI success.
Find out more about PwC’s Responsible AI approach.