Jane Colston discusses how the legal world is wrestling with technology regulation
As technology becomes increasingly sophisticated, making every aspect of our lives more streamlined and efficient, the legal world is wrestling with how best to regulate it: in a way that does not stifle innovation, but that allows humans to understand and keep control of technology, and ensures it is applied with regard for human morals and ethics.
Current laws are having to adapt to a new asset class of property created by companies such as Google and Facebook. In The Age of Surveillance Capitalism, Shoshana Zuboff describes a ‘behavioural surplus’ – data about our behaviour – which is traded for profit. Zuboff argues that Google was unique in building a sustained billion-dollar business on insights into our future behaviour derived from our past searches. The law is in danger of being outrun and outspent by these tech giants.
Lord Sales, Justice of the UK Supreme Court, while delivering the Sir Henry Brooke Lecture for BAILII, ‘Algorithms, Artificial Intelligence and the Law’, likened this dilemma to the frog-in-hot-water effect, saying: ‘We need to think now about the implications of making human lives subject to these processes, for fear of the frog-in-hot-water effect. We, like the frog, sit pleasantly immersed in warm water with our lives made easier in various ways by information technology. But the water imperceptibly gets hotter and hotter until we find we have gone past a crisis point and our lives have changed irrevocably, in ways outside our control and for the worse, without us even noticing. The water becomes boiling and the frog is dead.’
In his lecture, Lord Sales examined the difficult question of how legal doctrine should adapt to this new environment, and discussed in depth some of the difficulties that go hand in hand with the use of sophisticated algorithmic processes. He called attention to two significant problems: first, the lack of detailed knowledge among parliament, lawyers and the general public about how coding works, its limitations and its capacity for error; and second, the commercial secrecy surrounding the code being used, which makes it hard to regulate, and hold to account, the entities and processes that rely on it.
One need only look at some of the laughter-inducing questions put to Facebook chief executive Mark Zuckerberg during his appearance before the US Senate to see a real-life illustration of politicians trying to hold tech companies to account without the necessary technical knowledge. Zuckerberg was asked, for example: ‘Is Twitter the same as what you do?’ and ‘How do you sustain a business model in which users don’t pay for your service?’
James Lovelock, the British scientist, argues in his new book, Novacene, that ‘our supremacy as the prime understanders of the cosmos is rapidly coming to an end. The understanders of the future will not be humans but what I choose to call “cyborgs” that will have designed and built themselves’.
With so much at stake, self-regulation of the tech world with its ‘move fast and break things’ mantra seems a non-starter.
In light of the public interest (albeit the public may not be interested), Lord Sales proposed a solution in his speech: the establishment of a new agency for the scrutiny of programs, which would constitute a public resource for government, parliament, the courts and the public generally. He proposes an expert commission, staffed by coding technicians assisted by lawyers and ethicists, which would be given access to commercially sensitive code on strict condition that its confidentiality is protected. It would invite representations from interested parties in society and publish reports of its reviews, providing transparency in relation to digital processes. The commission would carry out both pre-scrutiny of important algorithmic systems and after-the-fact testing and auditing of them. Its creation would meet the challenges Lord Sales identifies by supplying government, parliament, the courts and civil society with the expert understanding required for effective lawmaking, guidance and control in relation to digital systems – experts who understand the weaknesses and flexibilities of code, and who can constantly remind lawmakers and the courts of them.
Lord Chief Justice Burnett has set up an advisory board to the judiciary, made up of senior judges and artificial intelligence (AI) experts, to consider and advise on the issues arising from the use of AI in the courts. In setting up the group, the Lord Chief Justice seeks to ensure that judges are at the forefront of governing how increasingly capable AI technology may be used.
Of course, as part of the drive towards using machine learning itself, the civil courts are already encouraging litigants to use technology-assisted review (TAR). The disclosure pilot scheme for the Business and Property Courts in England and Wales (which has one more year to run) requires that parties seek to agree the use of machine-learning analytical tools, and provides that the court may give directions regarding the use of TAR. There can be no doubt that the use of increasingly sophisticated technology in the courts is here to stay.
Inevitably, it will be increasingly hard to justify not using such technology in litigation and investigations, given the time and cost efficiencies it produces. The Serious Fraud Office (SFO) piloted an AI robot, RAVN, to weed out legally privileged material during its investigation of Rolls-Royce – the first use of AI in a criminal investigation. Not only was RAVN much faster than its human counterparts, it was also more accurate, cutting out the inevitable inconsistencies that arise when teams of people assess documents and each make often finely balanced judgement calls. The use of RAVN in the Rolls-Royce case equated to an 80% cost saving compared with engaging outside counsel to conduct the review. Following this successful pilot, the SFO invested in AI-powered document-review technology to assist with the analysis of documents in its investigations, and has used it in all its cases since April 2018. The system does not remove the need for human review, but it enables the SFO to reach relevant documents more quickly, with the knock-on effect of speeding up its investigations.
The key takeaway from all of this is that knowledge is power, and there is a need for legal supervision. We as litigators have a duty to our clients and the courts to understand – or at least engage the necessary expertise to understand – use and, where necessary, challenge the machine-learning technology and systems that are increasingly being used to make our working and home lives easier and more efficient, at the expense of our privacy and dignity, before we find ourselves in the proverbial boiling water.
Jane Colston is a partner at Brown Rudnick.