By Zena Zenonos, Naomi Gaston, Haris Haider
Legal teams recognise the potential benefits of generative AI but have been slow to adopt new tools amid concerns about risk and security; however, a responsible and ethical approach to implementation will unlock a competitive edge.
Generative artificial intelligence (AI) tools such as ChatGPT could transform the way legal teams work – on both legal and non-legal tasks. But while many lawyers are enthusiastic and optimistic about the potential value of such technologies, their adoption of AI tools has so far been relatively slow.
There are some understandable reasons for that. For example, there is still anxiety about the accuracy and efficacy of AI, and nervousness about increased privacy and security risks. Some of that concern is a matter of perception, but there have been several very real examples where legal teams’ use of AI has caused significant problems. AI tools have cited fake cases, for example.
Other objections to AI focus on the value of legal expertise and experience – the skill of lawyers in understanding how to apply legal precedent, for example, and how to bring commerciality into their work. Will AI ever be sophisticated enough to replicate such qualities? There is also concern about the time required for legal teams to implement new AI tools. And even after a successful implementation, there may be an ongoing time commitment required to review AI-generated work.
Add in broader ethical considerations, as well as issues such as confidentiality in respect of data collected by AI models, and it is easy to see why adoption rates have proved slower than might have been expected. That said, it is important not to lose sight of the benefits that AI tools offer, particularly since many of these worries can be mitigated during and after the implementation process.
For example, it is possible to calibrate AI with a corpus of relevant documents so that models are built for specific legal tasks; this gives the user much greater control over data. AI models can then be tailored to suit specialised tasks or styles, resulting in more coherent and contextually relevant outputs.
The approach should be to programme an AI tool to take account of the firm’s internal policy documents, as well as specific industry norms and relevant legislation; this process can be fine-tuned to reflect the firm’s risk appetite in context. This type of AI model will require less ongoing human input, supervision and intervention, and users can place greater confidence in its accuracy and efficiency.
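One common way of grounding a model in a firm’s own documents is to retrieve the most relevant internal material and supply it as context alongside each query, so the model answers from firm policy rather than general knowledge. The sketch below illustrates the idea in minimal form; the corpus entries, the keyword-overlap scoring and the prompt format are illustrative assumptions, not any specific product’s API.

```python
# Minimal sketch of grounding a model in a firm's own documents.
# The corpus, scoring method and prompt format are illustrative
# assumptions for this example only.

CORPUS = {
    "data-retention-policy": "Client files must be retained for seven years after the matter closes.",
    "ai-usage-policy": "Confidential client data must not be entered into public AI tools.",
    "engagement-letter-template": "This letter sets out the terms of our engagement.",
}

def retrieve(query: str, corpus: dict[str, str], top_k: int = 2) -> list[str]:
    """Rank documents by simple keyword overlap with the query."""
    q_terms = set(query.lower().split())
    scored = sorted(
        corpus.items(),
        key=lambda kv: len(q_terms & set(kv[1].lower().split())),
        reverse=True,
    )
    return [name for name, _ in scored[:top_k]]

def build_prompt(query: str, corpus: dict[str, str]) -> str:
    """Prepend the most relevant internal documents as context for the model."""
    context = "\n".join(f"[{name}] {corpus[name]}" for name in retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}"

print(build_prompt("Can confidential client data go into a public AI tool?", CORPUS))
```

In practice the keyword match would be replaced by semantic search over embeddings, but the control point is the same: the user, not the model vendor, decides which documents the tool may draw on.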
With such safeguards in place, AI can be deployed as a powerful tool to assist your legal team with tasks such as due diligence, specialised legal research and advisory work. AI is a hugely effective technology for handling large data sets, and for efficiently breaking down information in document review at scale.
Moreover, new applications of AI within in-house legal teams are emerging at pace. One example is the field of contract analysis and compliance. AI can be used for a variety of contract support scenarios including summarisation, clause identification and tasks such as self-service contract production.
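To make the clause-identification use case concrete, the sketch below shows a rule-based first pass that flags sentences mentioning particular clause types, of the kind a legal team might run before (or alongside) AI review. The clause names and patterns are assumptions chosen for the example, not an exhaustive taxonomy.

```python
import re

# Illustrative rule-based clause identification: the clause types and
# regex patterns below are assumptions for this example only.
CLAUSE_PATTERNS = {
    "termination": r"\btermin\w+",
    "confidentiality": r"\bconfidential\w*",
    "liability": r"\bliab\w+",
}

def identify_clauses(contract_text: str) -> dict[str, list[str]]:
    """Return, for each clause type, the sentences that mention it."""
    sentences = re.split(r"(?<=[.!?])\s+", contract_text)
    found: dict[str, list[str]] = {name: [] for name in CLAUSE_PATTERNS}
    for sentence in sentences:
        for name, pattern in CLAUSE_PATTERNS.items():
            if re.search(pattern, sentence, re.IGNORECASE):
                found[name].append(sentence.strip())
    return found

sample = (
    "Either party may terminate this agreement on 30 days' notice. "
    "Each party shall keep the other's information confidential."
)
print(identify_clauses(sample))
```

A generative model adds value on top of this kind of pass by summarising the flagged clauses and spotting non-standard drafting, but a deterministic layer like this keeps the review auditable.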
The key is to focus on the work required rather than the technology itself. Your business priorities should always determine where AI can be used with the most impact. The focus should be on driving efficiency and value in tasks that are repetitive, low in complexity and high in volume.
The bottom line is that while it will be vital to identify risks and dependencies when considering the use of AI in legal workflows, these can be addressed. The right processes and controls, combined with appropriate security and guard-rail measures, will safeguard confidential information and preserve data integrity.
Any legal team planning to use an innovative product that involves AI should consider the wider ethical implications of the technology – and develop and use AI responsibly. Deployed responsibly, AI can drive competitive advantage by reducing bias (including unconscious bias) and improving the accuracy of output. With effective governance and control frameworks in place for its management and regulation, and strong security to protect data privacy, AI can be hugely powerful.
PwC can provide support to implement and custom-build AI tools, from tools for contract extraction, discovery and legal due diligence, through to proprietary AI models trained to suit the bespoke needs of the client.
PwC’s skills, interdisciplinary platform and resources can support firms to enhance and fine-tune AI models trained on a corpus of industry-specific documents.
PwC can help clients identify and explore the many opportunities and applications of AI, and can support a process to define successful AI implementation, ensuring that risks and dependencies are appropriately mitigated and future-proofed while balancing productivity gains with innovation.
PwC can embed AI within legal teams to ensure business needs are met in a way that is responsible, ethical and safeguarded.