Legal Intelligence

Artificial Intelligence (AI) focuses on the study and construction of intelligent agents that do the right thing.

Intelligence has been defined by AI researchers both in terms of fidelity to human performance and in terms of rationality.

It has also been considered both a property of internal thought processes and reasoning, and an external characterisation of intelligent behaviour.

From these two dimensions – human vs. rational and thought vs. behaviour – the rational agent approach to intelligence, with its emphasis on rational behaviour (doing the right thing), has prevailed in the history of AI.

(Russell & Norvig, 2020*)

There is good reason why computer scientists are preoccupied with machines doing the right thing. AI is the dominant technology of the future, and no one can predict exactly how the technology will develop or on what timeline (Russell, 2019**).

Thought and action, though connected, are clearly not synonymous. Any seasoned lawyer will decry any suggestion that legal theory and legal practice are the same thing!

If rationality is to be an important lynchpin of artificial general intelligence (AGI), then building lawbots encoded with legal intelligence that do the “right legal things” should aid the development of AGI.

Legal rationality may not equate perfectly with computational rationality in the mathematical or engineering sense, but it does bring us closer to building rational bots. Rationality in legal reasoning, argument and opinion or decision-making is, after all, a hallmark of both fine legal theory and legal practice.

The success of any AI tool can only be determined from empirical data on its adoption, on the internet or elsewhere. More empirical research is therefore needed to understand how lawyers (judges, practitioners and academics) think and practise today if we are to develop the right lawbots of the future.

As to what kind of “right” lawbots, we should aim further than current search bots that retrieve legal articles, cases and due diligence material, or those that spit out documents generated from form-filling or document automation. Lawbots of the future should seek to mirror human lawyers in legal acuity.

Acuity has been defined as an “ability to think, see or hear clearly”; a “keenness of perception or thought”; “sharpness” and “acuteness”; and it is associated in thesauruses with intelligence. In the context of legal acuity in lawyers, clarity in thought and a keenness or sharpness of perception must surely translate into legal behaviour (reasoning in arguments, drafting, and opinion or decision-making) that exhibits rationality.

That said, building lawbots (machines) that do the right legal things in the daily grind requires more than an abstract, general understanding of legal acuity. We need to pinpoint the nuts and bolts, the elements, of acute legal thought processes, reasoning and behaviour. This is possible only if we become more familiar with how lawyers think and behave in simple and certain legal environments, as well as in complex and uncertain legal situations.

We could begin by identifying proofs (manifestations) of legal acuity in lawyers of our time, and then decompose and reconstruct its meaning and significance for the right lawbots of tomorrow.

17 July 2020

* Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig, Fourth Edition, 2020 (Pearson); see pp. 1–5, “What is AI?”. AIMA site: http://aima.cs.berkeley.edu

** Human Compatible: Artificial Intelligence and the Problem of Control by Stuart Russell, 2019 (Viking); see Preface.