Why Responsible AI is key to unlocking innovation

Moving from playing to scaling requires trust and governance

Conversations around how to harness the potential of GenAI continue to dominate boardroom agendas - yet wholesale adoption has yet to follow. As Responsible AI Lead Chris Oxborough explains, the issue isn’t the technology, but the approach and framework that needs to sit around it.

What are the trends we are seeing in the market?

I’ve spent much of the last 12 months speaking to boards, CIOs, CDOs and other stakeholders about the impact of GenAI on their businesses. They all recognise the potential benefits and often move quickly to creating proofs of concept. What I’m not yet seeing, however, is wholesale adoption of GenAI beyond a few use cases: customer care, document summarisation and, most commonly, engineering and coding augmentation.

“Unlike previous periods of excitement about AI, I don’t think this is down to a lack of senior interest or willingness to invest - in fact, it’s quite the opposite. Moving from “playing” to “scaling” requires a different approach, one that focuses on building trust and applying strong governance through Responsible AI. This is not an insurmountable problem; it can be addressed by implementing flexible governance processes that support innovation rather than stifle it.”

What are the challenges?

The issues can be summarised as three challenges.

First, it can be difficult to move AI use cases from proof of concept (PoC) to scale. The consistent issues are:

  • Lack of trust in managing risks: organisations hesitate to scale PoCs due to uncertainty about handling increased and more complex risks effectively. When stakeholders hold different views of the risks facing their organisation, deployment stalls.
  • Lack of enterprise-wide cloud infrastructure: only 16% of respondents to the PwC Cloud Survey 2023 are ‘all-in’ on cloud, so the infrastructure to support scaling is often not in place.
  • Data issues: poor data quality and availability, and a lack of capability in handling larger and more complex data sets, limit the ability to scale.
  • Lack of the necessary technical, project management and change management skills: 55% of respondents will look to hiring or contracting to close data and analytics skills gaps.

Secondly, the business case for scaling is more complex than for a PoC. PoCs allow you to focus on individual use cases, but scaling requires you to focus on patterns: repeatable applications of the technology that can be deployed multiple times across your business efficiently, building cumulative value across many processes, functions or customer experiences. I believe the desire for speed lulls people into moving to PoCs before they’ve actually thought about their long-term vision for AI. This is evident in:

  • The absence of a clear, long-term strategic vision for AI implementation, leading teams to focus on isolated use cases without considering how they fit into the overall business strategy or contribute to measurable objectives around top-line growth or cost efficiency.
  • Fragmented governance, with different group companies, functions or teams operating in silos, each developing use cases for their immediate needs without a unified approach or shared objectives. This hinders collaboration and communication across functions, and makes it harder to allocate scarce capital and highly skilled engineers to the most important opportunities.
  • No overall vision for managing risk: with increased awareness of AI risks and their reach, organisations prefer to play it safe, investing in smaller, less risky individual projects rather than embarking on larger, more complex and transformational initiatives.

Lastly, there is the desire to move quickly with a PoC by leveraging one of the hundreds of point solutions now available. This adds an extra layer of complexity to the technology supply chain, which can cause inertia while third-party risk is assessed:

  • As the market for AI tools has burst into life, organisations have been flooded with a wide array of niche solutions. The accessibility of the latest GenAI tools has tempted many front-line employees and executives to push for rapid adoption of multiple point solutions.
  • The pressure to innovate quickly with AI encourages companies to adopt a test-and-learn approach and to rely heavily on the roadmaps of technology partners, leaving organisations more at the mercy of AI vendors.
  • Without a centralised, medium-term AI strategy and governance, the pace of innovation with AI actually slows: the business is paralysed by too many competing choices and sees lower returns on its early experimentation.

What's the answer?

The 2024 PwC CEO Survey indicates strong interest in adopting AI across the business landscape, with substantial support and budget allocations signalling readiness to implement AI at scale.

“The time is right; the approach also needs to be. In my experience, organisations that approach AI integration strategically and holistically accelerate value creation with AI.”

Understand your readiness to scale AI - consider whether you already have a technology and data stack that can be leveraged for wider scaling, or whether a large-scale modernisation would be required. Based on our recent study, executives at ‘cloud-powered’ companies in the UK expect significant ROI within the next 12 months, but only 16% of UK organisations are considered ‘cloud-powered’, having scaled cloud adoption across all functions of their business to create greater value.

Assess the most viable solution, balancing future value, costs, complexity and risks - though building your own GenAI solution from scratch, or further training an existing model on proprietary datasets, may be ideal in some situations, an off-the-shelf tool may be more beneficial in others given speed to market, up-front investment and skill requirements. At this stage of the GenAI market’s development, with the pace of AI advancement increasing, you will frequently need to weigh testing new tools against the stability and confidence of enterprise standards. Irrespective of technology choices, Responsible AI needs to form part of your decision-making; otherwise it may become a roadblock to successfully scaling out your solution.

If they don’t already exist, build out your Responsible AI principles and align your strategic AI decisions with them. These principles should complement your ESG strategy and be treated as your north star for safe adoption. Operationalising them is key to responsible deployment and should guide decision-making when prioritising use cases.

GenAI can ‘hallucinate’. The best way to manage GenAI risks is a human-led, tech-powered approach. Upskilling your staff in ‘prompt engineering’, and applying sound domain expertise to validate the AI’s responses, are key to governing the solution and managing misinformation - the sketch below illustrates one simple shape this can take.
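As a minimal illustration of that human-led approach, the Python sketch below routes model outputs through a review gate before they are used. Everything in it is hypothetical: generate_draft stands in for whichever GenAI API you use, and the escalation rules are illustrative placeholders that would in practice come from your own Responsible AI principles.

```python
# Minimal sketch of a human-led review gate for GenAI output.
# Assumption: generate_draft is a placeholder for your GenAI provider's
# API, and the review rules below are illustrative, not a real policy.

from dataclasses import dataclass

@dataclass
class Draft:
    prompt: str
    answer: str
    citations: list[str]  # sources the model claims to rely on

def generate_draft(prompt: str) -> Draft:
    """Placeholder for a call to your GenAI provider."""
    return Draft(prompt=prompt, answer="...model output...", citations=[])

def requires_human_review(draft: Draft) -> bool:
    """Route risky outputs to a domain expert before they are used.

    The rules are deliberately simple: anything without citations, or
    touching an illustrative sensitive topic, goes to a human reviewer.
    """
    sensitive_terms = ("medical", "legal", "financial advice")
    if not draft.citations:
        return True
    return any(term in draft.prompt.lower() for term in sensitive_terms)

def answer_with_governance(prompt: str) -> str:
    draft = generate_draft(prompt)
    if requires_human_review(draft):
        # In production this would raise a queue item for a domain expert;
        # here we simply flag the output as unreviewed.
        return f"[PENDING EXPERT REVIEW] {draft.answer}"
    return draft.answer

if __name__ == "__main__":
    print(answer_with_governance("Summarise our Q3 customer complaints"))
```

The point is the shape rather than the rules: model output never reaches the business without passing an explicit, auditable check.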

Most GenAI solutions employed in organisations today are either off-the-shelf third-party products or are built on top of third-party models, further trained with relevant proprietary datasets. Third-party risk management therefore becomes increasingly important, and enhancing it, alongside your vendor risk assessments, will be key to responsible adoption.

GenAI will have an impact on your existing risk taxonomy - understand the new risks GenAI poses across the lifecycle, assess their impact against your existing risk domains, design governance and controls, and seek the right approvals before anything is launched into production. Employ a strategy to continuously monitor these risks and manage potential reputational implications; one way to make this actionable is a machine-readable risk register, sketched below.
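To make that concrete, here is one possible shape for such a register, mapping new GenAI risks onto existing risk domains with an owner, a control and a monitoring flag. The specific risks, domains, controls and owners shown are illustrative assumptions, not a complete taxonomy.

```python
# Illustrative sketch only: a tiny machine-readable register mapping
# hypothetical GenAI risks onto existing risk domains. The entries are
# examples, not a recommended or complete taxonomy.

from dataclasses import dataclass

@dataclass
class GenAIRisk:
    name: str
    existing_domain: str  # where it lands in your current taxonomy
    control: str          # the mitigating control to design and test
    owner: str            # who signs off before production launch
    monitored: bool       # is continuous monitoring in place?

RISK_REGISTER = [
    GenAIRisk("Hallucinated output", "Operational risk",
              "Human review gate on high-impact responses", "CDO", True),
    GenAIRisk("Training-data leakage", "Data privacy",
              "Vendor contract clauses and data loss prevention", "DPO", False),
    GenAIRisk("Model bias", "Conduct and reputational risk",
              "Pre-launch fairness testing and periodic re-testing", "CRO", True),
]

def launch_blockers(register: list[GenAIRisk]) -> list[str]:
    """Risks that should block production launch until monitoring exists."""
    return [r.name for r in register if not r.monitored]

if __name__ == "__main__":
    print("Launch blockers:", launch_blockers(RISK_REGISTER))
```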

Conclusion

Taking a ‘Responsible AI first’ approach in your strategy will accelerate the value of adoption and help manage potential future issues, safeguarding those who may be impacted. A well-architected set of governance principles and processes provides the foundation to guide and enhance the development and deployment of AI, ensuring it is responsible, sustainable and aligned with the organisation's vision and goals.


Author: Chris Oxborough

Contact us

Chris Oxborough

Lead for Responsible AI, PwC United Kingdom

Tel: +44 (0)7711 473199
