Kriti Sharma, chief product officer, legal tech at Thomson Reuters, explores how the legal sector must respond to the AI revolution
Kriti Sharma is an overachiever, having held senior tech and advisory roles at HSBC, Sage, the UN and Barclays, and been named on Forbes’ 30 Under 30 list. She still heads up AI for Good, an NGO she founded. Now, she is also chief product officer, legal tech at Thomson Reuters.
Speaking to her, it’s easy to see why she is so in demand. She has a laser focus on what clients need and want, as well as clear views on the risks and opportunities of AI – for the legal market, regulators and society.
‘The pure definition of AI is how intelligence can be augmented, and how human skills can be supplemented or simulated in digital forms. Now, this also includes the application of AI and its impact on society,’ she says.
Sharma likens the status quo to the early, dial-up days of the internet, christening it ‘the 1.0 iteration’.
‘We’ve got a foundational technology now, so the opportunity is to build applications on top of it.’ To get it right, the industry should apply lessons from past iterations of technology such as social media, she says.
In her role at Thomson Reuters, Sharma focuses on three themes: high-quality content, a human-centred approach, and interoperability.
Legal research platform Westlaw, for example, provides answers to difficult questions in seconds, and so relies on AI trained on ‘good quality, trusted, up-to-date data’.
By human-centred, Sharma refers to applications that enable users to ‘save time, have control, spot and retrieve answers quickly, and do this with confidence. Efficiency and confidence are only possible if you can trust the technology, which is rooted in good data, information and facts.’
Increasingly, workers don’t realise they are using AI, because it’s engaging with them in everyday programmes such as Teams, Word and email, she points out.
Embedding AI into how people already work, rather than asking them to work differently, is the best way to avoid waste, which she defines as buying technology that is surplus to requirements or too difficult to use.
Another way to do this is to insist on interoperability, which means using open standards to allow different systems to work together, she says.
When AI is working well, it brings information to users, rather than forcing them to go looking for it, she explains.
For example, some clients integrate Salesforce into cloud platform HighQ to build contracts, which they can then share with colleagues using social sharing channels. But AI can also work for those who are not at the cutting edge.
‘People should be able to use our tools in whatever way that’s most useful to the way they work. We adapt to how people are working today, whether or not they are early adopters.’
Sharma is enthusiastic about interoperability among enterprise software vendors, because it offers users more choice, encourages innovation, and creates a level playing field.
‘Start-ups offering new applications can layer over Thomson Reuters because we have an open platform strategy that allows other companies to integrate with us.’
When a developer wants to add a new application, both Thomson Reuters and the customer vet it for security. Thomson Reuters then provides the customer with an API, allowing them to choose how to integrate the application into Teams, for example.
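The interoperability Sharma describes is easiest to picture from a developer’s side. The TypeScript sketch below is a minimal, hypothetical illustration of a vetted third-party app calling a platform over an open HTTP API; the endpoint, response shape and API key variable are invented for this example and do not describe Thomson Reuters’ actual interfaces.

```typescript
// Hypothetical sketch only: the endpoint, response shape and environment
// variable are invented for illustration, not Thomson Reuters' real API.

interface ContractSummary {
  documentId: string;
  summary: string;
}

// Fetch a contract summary from an open, documented HTTP interface,
// authenticating with a key issued after the app has been vetted.
async function summariseContract(documentId: string): Promise<ContractSummary> {
  const response = await fetch(
    `https://api.example.com/v1/contracts/${documentId}/summary`,
    { headers: { Authorization: `Bearer ${process.env.API_KEY}` } },
  );
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
  return (await response.json()) as ContractSummary;
}

// The same call could sit behind a Teams bot, a Word add-in or any other
// front end the customer already uses - the point of an open platform.
summariseContract('doc-123').then((result) => console.log(result.summary));
```

Run under Node 18 or later, which provides a global fetch, with API_KEY set in the environment.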
When law firms are considering adding external AI capability, Sharma advises them to think about user experience and adoption, how the tool will operate within existing systems, whether it is grounded in trusted content and trained by experts, and how they plan to keep integrating and improving it as the technology evolves.
She also flags the three areas where AI is most useful for lawyers: providing quick responses to legal questions; understanding natural language for tasks such as contract drafting; and adjusting to changing workflow patterns.
‘We are one of the very few companies that can help you be efficient, with confidence, because we are grounded in trusted content.’
An urgent need for regulation
Asked at what point policymakers should devise rules for emerging technologies such as AI, Sharma responds that most professions, such as law, medicine and auditing, are governed by rules and codes of ethical conduct.
‘Somehow in the world of computing, there are few broad-reaching codes of ethics,’ she says, going on to point out that Italy’s recent ban on ChatGPT was guided by privacy, not ethics.
Europe, she continues, has led on tech regulation, from data privacy rules such as the GDPR to the AI Act.
But the UK’s light-touch approach to regulating AI, detailed in a recent white paper, could leave individuals, companies and sector regulators to grapple with tough ethical questions.
It will therefore be the legal sector’s responsibility to agree sector-specific rules on how to apply AI, create safe AI and lead consumer awareness.
‘The legal industry will need to expand its existing code of ethics to include AI,’ she says.
For example, law firms must be transparent if consumers are receiving advice from a machine.
‘We need a proactive approach, with the legal industry leading the way.’
Sharma proposes a sector-wide effort that includes regulators, The Law Society, law firms and legal professionals, as well as tech providers. This consortium would need to look at topics such as the difference between task automation and job automation, as well as the changing nature of jobs and upskilling, she recommends.
For now, though, it’s up to individuals to learn next-generation skills.
‘AI won’t replace your job, but someone who knows how to use AI will. The question is what tech can solve, what humans can solve, and how to make AI work for you – rather than you working for it.’
AI for good
Sharma remains at the helm of AI for Good, which she founded in 2018 because ‘I didn’t want to be in a place where the best thing we did with AI was boosting adverts and encouraging doom scrolling.’
AI is growing, and more tasks will be automated. For this to be done well, she says, all industries need better controls and processes, freeing employees to be more creative.
She is concerned that the recent wave of Big Tech layoffs will reduce the number of ethics specialists, but says there are two ways to look at this.
‘There is an argument that ethics teams are being embedded into core teams. But really, I see this as an “and” rather than an “or”. Ethics should be at the top, and embedded.’
For this to take place, she continues, there must be central frameworks with teams that are asking tough questions. They will have to balance risks and opportunities – at least until there is more widespread agreement on controls and processes.
In law, Sharma sees ‘a huge untapped market for justice, so we need to be thinking of how we can bring access to legal justice to the broader community’.
‘As we build those applications as part of AI 1.0, we must ensure that access to justice becomes easier and more affordable. We have yet to reach that “WhatsApp moment”,’ she concludes.
Navigate through the complexity of your work with Thomson Reuters
Legal departments across the globe are embracing technology as never before to capture new insights and empower their problem solvers sooner. We combine technology, intelligence and automation to optimise workflows across spend, matter and contract management, research and legal guidance.
Learn how our legal technology solutions for corporate legal departments help reduce complexity and raise productivity in your legal team at tr.com/elm-uk