I’ve spent much of the last 12 months speaking to Boards, CIOs, CDOs and other stakeholders about the impact of GenAI on their businesses. They all recognise the potential benefits and often move quickly to creating proofs of concept. However, what I’m not yet seeing is wholesale adoption of GenAI across businesses, beyond a few use cases in customer care, document summarisation and, most commonly, engineering and coding augmentation.
“Unlike previous periods of excitement for AI, I don’t think this is down to a lack of senior interest or willingness to invest funds - in fact, it’s quite the opposite. Moving from “playing” to “scaling” requires a different approach, one that focuses on building trust and leveraging strong governance through Responsible AI. This is not an insurmountable problem, and it can be fixed by implementing flexible governance processes that support innovation rather than stifle it.”
The issues can be summarised as three challenges.
First, it can be difficult to move from proof of concept (PoC) to scale with AI use cases, and the same issues recur consistently.
Second, the business case for scaling is more complex than for a PoC. A PoC lets you focus on an individual use case, but scaling requires you to focus on patterns - repeatable applications of the technology that can be deployed multiple times across your business efficiently, building cumulative value across many processes, functions or customer experiences. I believe the desire for speed lulls people into starting PoCs before they have thought through their long-term vision for AI.
Finally, there is the desire to move quickly with a PoC by leveraging one of the hundreds of point solutions now available. This adds an extra layer of complexity to the technology supply chain, which can cause inertia while third-party risk is assessed.
The 2024 PwC CEO Survey indicates strong interest in adopting AI across the business landscape, with substantial support and budget allocations signalling readiness to implement AI at scale.
“The time is right; the approach needs to be too. My experience is that organisations that approach AI integration strategically and holistically manage to accelerate value creation with AI.”
Understand your readiness to scale AI - consider whether you already have a technology and data stack that can be leveraged for wider scaling, or whether large-scale modernisation would be required. Based on our recent study, executives at ‘cloud-powered’ companies in the UK expect significant ROI within the next 12 months, yet only 16% of UK organisations are considered ‘cloud-powered’, having scaled adoption throughout all functions of their business to create greater value.
Assess the most viable solution, balancing future value, costs, complexity and risks - building your own Generative AI solution from scratch, or further training an existing model on proprietary datasets, may be ideal in some situations; in others, an off-the-shelf tool may be more beneficial given speed to market, up-front investment and skill requirements. At this stage of the GenAI market, with the pace of AI advancement increasing, you will frequently need to weigh testing new tools against providing the stability and confidence of enterprise standards. Irrespective of technology choices, Responsible AI needs to form part of your decision making; otherwise it may become a roadblock to successfully scaling your solution.
If they don’t already exist, build out your Responsible AI principles and align your strategic AI decisions with them. These should complement your ESG strategy and serve as your north star for safe adoption. Operationalising these principles is key to responsible deployment and should guide decision making when prioritising use cases.
GenAI can ‘hallucinate’. The best way to manage GenAI risks is a human-led, tech-powered approach. Upskilling your staff in ‘prompt engineering’ and applying sound domain expertise to validate AI responses is key to governing the solution and managing misinformation.
The majority of GenAI solutions employed in organisations today are either off-the-shelf third-party solutions or are built on top of third-party models, further trained with relevant proprietary datasets. Third-party risk management therefore becomes increasingly important, and enhancing it, alongside your vendor risk assessments, will be key to responsible adoption.
GenAI will have an impact on your existing risk taxonomy - understand the new risks posed by GenAI across the lifecycle, assess their impact against your existing risk domains, design governance and controls, and seek the right approvals before solutions are launched into production. Employ a strategy to continuously monitor these risks and manage potential reputational implications.
Taking a ‘Responsible AI first’ approach in your strategy will accelerate value from adoption and head off potential issues, safeguarding those who may be impacted. A well-architected set of governance principles and processes provides the foundation to guide and enhance the development and deployment of AI, ensuring it is responsible, sustainable, and aligned with the organisation’s vision and goals.
Author: Chris Oxborough