Garry Kasparov was the world’s top-ranked chess player for 255 months of his career. Seemingly unbeatable, he was not too worried when IBM suggested that he play a chess match against its computer. That match, known as “Deep Blue versus Kasparov” (1996), included the first game in history in which a computer defeated a reigning world champion under tournament conditions. Twenty years later, history repeated itself: this time, the AlphaGo computer program defeated Lee Sedol, one of the strongest players of Go, a strategy board game.

Go was the last classic game that a person played better than a computer.


Stagnation is over

AlphaGo’s victory is a part of the global history of the rapid progress of artificial intelligence (AI). Without going into too much detail, AI is the ability of an engineering system to acquire, process and apply knowledge and skills. With the help of pre-programmed algorithms, the computer simulates certain cognitive functions of the brain, which allows it to behave intelligently and solve the tasks assigned to it.

For a long time, real progress in the field of AI did not go much further than bold theoretical assumptions about how AI would gradually take over the world. The last 20 years, however, have seen a real revolution in the theory and practice of AI. This is not least because the computing power of computers has increased by several orders of magnitude (from 10³ to 10⁷), parallel computing has allowed complex problems to be solved quickly, and developers have gained access to large amounts of data, the storage price of which has decreased from USD 12.40 to USD 0.004 per GB.

AI is only at an initial stage of its development, but commercial giants like Google, Uber and Apple, as well as a plethora of other companies, are already actively using AI in their services. For example, Uber uses AI to determine the price and time of your trip, Google Translate recognises the translation context, and Apple Face ID projects a 3-D image of your face.

While the ultimate global impact of AI is difficult to predict, it is already clear that AI is not science fiction, but a flexible tool for transforming established practices and industries.

Justice is also not going to escape the influence of AI.


Digital Fairness

We traditionally perceive justice as a formal judicial process for the protection of rights and freedoms. In practice, justice is a continuous information process in which the parties transfer information (procedural documents) to the court, the court analyses the collected data (evidence), and as a result also produces information (court decision).

During the trial, an impartial judge must analyse the case materials and, on the basis of his or her own conviction and being guided by the law, make a decision. However, just like other people, judges have inherent weaknesses such as fatigue, prejudice or distracted attention.

At the same time, fuelled by the uninterrupted supply of electricity, the computer should not make any of the mistakes inherent in people. Similarly, it should not be a problem for the operating system to carefully and swiftly examine details of complex cases with dozens or even hundreds of volumes.

If justice is an information process in which AI can avoid human error and process significant amounts of information more quickly, how likely is it that AI will completely replace a judge?

A judge usually has three things that even the most powerful computer does not yet seem to have: legitimacy, hierarchical thinking, and empathy. The first manifests itself in the trust in court and the sustainable development of judicial institutions. The second guarantees an optimal balance between the heterogeneous goals of justice (tentatively speaking, “punish the guilty/acquit the innocent”). The third allows a judge to put him or herself in the other person’s shoes, feel their pain, and understand their motives.

AI is believed to have other drawbacks as well. In December 2020, the Secretariat of the Council of Europe published a 200-page study on global prospects for the development of AI legal regulation. In this document, the Secretariat concluded that AI was more of a tool for helping judges than a fully fledged replacement for them. The reason for this is the inability of modern AI to think legally, to administer justice as part of the social process, and the risks of large-scale design biases.

On the other hand, AI technologies are not static and evolve non-linearly. Exponential growth in computing power (Moore’s Law) and general scientific progress can eliminate these shortcomings, and society can change its own social norms. Even today, AI allows us to organise and manage knowledge, prepare draft procedural documents and provide legal advice. More importantly, AI is now able to cover entire areas of dispute resolution with minimal interference in these processes by advocates and judges:

Research. Collecting, verifying and evaluating evidence with the use of AI (eDiscovery) is already one of the key stages of the judicial process in the United States and Great Britain.

Forecasting. The adoption of any decision by a party in a lawsuit directly depends on how much that decision will affect the probability of winning the dispute. An example of AI’s work in the field of forecasting is the analysis of expected outcomes of cases in the European Court of Human Rights. In that study, Nikolaos Aletras, Professor at the University of Sheffield, and his team were able to use AI to predict the outcome of the ECHR’s consideration of a case on its merits with 79% accuracy.

Resolution. Statistically, most disputes are formal and uncomplicated. It is for such cases that an online court, the Civil Resolution Tribunal, has been created in British Columbia, Canada. With the help of artificial intelligence, this online service, which is part of the justice system, advises the parties and helps to resolve some disputes on the merits (small claims worth up to CAD 5,000, traffic accident disputes, and disputes involving strata (condominium) corporations). All communication with this court is online, and the court is open 24/7.
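As an illustration of the forecasting idea only (not of any of the systems mentioned above), a judgment-outcome classifier can be sketched as a bag-of-words Naive Bayes model; all case snippets and labels below are invented for the example:

```python
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

class NaiveBayesOutcomeModel:
    """Toy multinomial Naive Bayes over bag-of-words features.
    Illustrative only: published studies used richer n-gram and
    topic features and different models."""

    def fit(self, docs, labels):
        self.classes = set(labels)
        self.word_counts = {c: Counter() for c in self.classes}
        self.class_counts = Counter(labels)
        self.vocab = set()
        for doc, label in zip(docs, labels):
            tokens = tokenize(doc)
            self.word_counts[label].update(tokens)
            self.vocab.update(tokens)
        return self

    def predict(self, doc):
        scores = {}
        total_docs = sum(self.class_counts.values())
        for c in self.classes:
            # log prior + log likelihood with Laplace smoothing
            score = math.log(self.class_counts[c] / total_docs)
            denom = sum(self.word_counts[c].values()) + len(self.vocab)
            for tok in tokenize(doc):
                score += math.log((self.word_counts[c][tok] + 1) / denom)
            scores[c] = score
        return max(scores, key=scores.get)

# Hypothetical training snippets standing in for case-fact summaries
train_docs = [
    "applicant detained without judicial review for months",
    "prolonged detention no effective remedy provided",
    "complaint examined promptly remedy granted by domestic courts",
    "domestic courts provided effective review and compensation",
]
train_labels = ["violation", "violation", "no-violation", "no-violation"]

model = NaiveBayesOutcomeModel().fit(train_docs, train_labels)
print(model.predict("detained for months without review"))  # → violation
```

The point of the sketch is not the model itself but the workflow: past decisions become training data, and a new case description is scored against the patterns learned from them.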


These examples are further confirmation that unique capabilities are not exclusive to people: a machine can have them too. The uniqueness of AI lies in its ability to combine versatility, depth and rapid scaling, and modern justice systems urgently need these features.


Ukrainian Approach

Ukraine is also attempting to introduce AI into its justice system. By Order No. 1556-р dated 2 December 2020, the Cabinet of Ministers of Ukraine approved the Strategy for Artificial Intelligence Development in Ukraine (the “Strategy”).

According to the Strategy, justice is one of the priority areas of AI application. More specifically, AI is planned to be used to fully launch the Unified Judicial Information and Telecommunications System and the Electronic Court. In addition, AI is planned to be used for legal advice and the resolution of disputes of minor complexity. The High Council of Justice (HCJ) approved the latter by its decision dated 9 February 2021.

This HCJ decision, however, is also interesting for another reason. The action plan to implement the Strategy did include developing a programme that would allow AI to analyse the texts of court decisions for unfair judicial practice. However, the HCJ rejected this innovation on the grounds that analysing and generalising judicial practice is reserved exclusively for the courts.

In the above study, the Secretariat of the Council of Europe directly mentions the ability of AI to effectively detect fraud and corruption. Using natural language processing, AI can detect patterns and anomalies in documents. Predictive analytics is another tool against corruption. An example of its use is an early warning system developed by a team from the Higher School of Economics and the University of Valladolid. On the basis of real corruption cases in Spain, the researchers illustrated how AI analysis of certain political and economic factors can predict the emergence and spread of corruption in the public sphere.
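To illustrate the anomaly-detection idea in the simplest possible terms (this is not the Spanish early-warning system, which combines many political and economic indicators), one can flag statistical outliers, such as a suspiciously inflated contract value, with a basic z-score test; the figures below are invented:

```python
from statistics import mean, stdev

def flag_anomalies(values, threshold=2.0):
    """Return the indices of values whose z-score exceeds the threshold.
    A deliberately simple sketch: real anti-corruption analytics would
    combine many indicators, not a single column of numbers."""
    mu, sigma = mean(values), stdev(values)
    return [i for i, v in enumerate(values)
            if abs(v - mu) / sigma > threshold]

# Hypothetical contract amounts (thousands of UAH); one inflated outlier
contracts = [102, 98, 105, 99, 101, 97, 350, 103, 100]
print(flag_anomalies(contracts))  # → [6]
```

In a real system the flagged index would not be an accusation, only a signal that a human investigator should take a closer look.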

The Ministry of Justice of Ukraine is also considering using AI software as a decision-support tool in criminal justice. The Minister announced that the software being developed, dubbed “Cassandra” after the Trojan prophetess, will allow the authorities to assess the potential risk of recidivism and help judges make custodial decisions. No further information on who the software developer is or what kind of algorithms Cassandra will use is available yet. Previous attempts by other governments to introduce risk assessment technologies have faced significant obstacles (e.g. COMPAS in the USA, HART in the UK), but the Ministry promises to be careful and to use only best practices.

The national authorities responsible for developing and executing the Strategy may still have enough time for AI technologies to become an efficient tool for fighting corruption and strengthening the rule of law in Ukraine: according to the Strategy, the deadline for its implementation is 2030. Nevertheless, the Ukrainian government will have to address the key outstanding issues concerning AI (risks, ethics, funding, etc.) before the fully-fledged implementation of AI in Ukrainian legal institutions can take place.



Software is eating the world, cautioned Marc Andreessen, an American investor. In an essay published in 2011, he suggested that the world was in the middle of a dramatic technological and economic shift in which software companies were poised to take over large parts of the economy.

Not everyone believed him. Nor could Lee Sedol believe that he had lost to a machine. As he prepared for the match against AlphaGo, he acknowledged the strength of his digital opponent but did not expect it to win. He went on to lose four of the five games. Three years later, Lee announced his retirement, saying that computer algorithms had become far superior to humans and that a human being was no longer able to defeat them.

While many lawyers have already grasped the power and scale of AI, many more remain unconvinced, complacently hoping that the uniqueness of their skills will keep them relevant. The history of progress is full of stories of those who chose rejection over adaptation. We all know how those stories end.

What is most inspiring in the story of AI is that the key ultimate beneficiary of these technologies might be our society. There is hardly anyone in Ukraine who does not criticise the current justice system. And it would be naïve to think that AI could be a magic pill that would fully reform the legal system. Nevertheless, we strongly believe that with reinforced support of transparent and robust AI by all key stakeholders, the Ukrainian legal system will be more efficient, fair, and just.

Hopefully, AI can be a force for good.