Ana Toskić Cvetinović
Executive director of the organization Partners Serbia
At the New Year press conference in December 2020, President Vučić said that it is important for Serbia and its judiciary to introduce artificial intelligence (AI) and so-called predictive justice. The purpose of this innovation would be to help judges in making decisions, primarily in traffic violation cases. However, bearing in mind that the concept of predictive justice has a much broader meaning, it is not clear whether the initiative is limited to the use of AI in the field of traffic violations, or whether it would be applied more widely.
The announcement of the introduction of artificial intelligence into our judiciary is, on the one hand, not surprising: Serbia adopted the Strategy for the Development of Artificial Intelligence for the period 2020-2025, in which one of the planned measures is the improvement of public sector services using artificial intelligence. Since the Serbian judiciary has struggled for decades with a large case backlog and insufficient efficiency (and inconsistent judicial practice is also often highlighted), the use of AI could be an additional mechanism for addressing these problems. What is (again) surprising, however, is that we learn about such projects from officials' speeches, policy addresses or media releases, even though these are initiatives that, given their complexity and possible consequences for individuals, should be the subject of a wider public debate.
Predictive justice actually involves the analysis of a large number of court decisions by AI in order to predict the outcome of a particular case or a particular stage of the proceedings. With this in mind, it is natural that the mechanism is used more in common-law countries, where precedent plays a central role.
What are the experiences of other countries?
Comparative experience shows that AI in the judiciary can serve different purposes: it can be used for case management, or as support in making court decisions. The first purpose refers, for example, to the use of algorithms that, after a lawsuit is filed, assess the complexity of the case, on the basis of which a decision is made about the resources needed to resolve it (including the involvement of court staff). In the USA, AI is used in some states (California, Wisconsin) for case management, but also to support decision-making. The "COMPAS" software, for instance, is used to assess the probability of recidivism, on the basis of which alternatives to detention are decided. There are also European examples, such as the Netherlands and Estonia, which are leading the way in the use of AI in the judiciary. In 2019, a project was launched in Estonia to create a "robot judge" that would decide small-value disputes; an appeal could be filed against its decision, to be decided by a "living" judge. Various mechanisms of online dispute resolution are also widely used. So far, AI has not completely replaced judges anywhere, and such a replacement could potentially bring the biggest problems for citizens: not all disputes are simple enough for a single model or algorithm to resolve them fully, without additional review and analysis by a judge.
The problems that the application of AI entails are similar across social fields, from medicine, through banking, to the judiciary. They primarily involve a series of ethical issues, starting with the questionable neutrality of algorithms (several studies have so far pointed to possible discrimination against marginalized groups that fall outside the "average", i.e. to whom the bulk of the data on which machine learning is based does not apply) and their fairness, given that they do not take into account the context and specificity of each individual case. The issues of possible algorithm errors and their consequences, as well as the transparency of algorithms and responsibility for the decisions made, have also not yet been adequately resolved. Regarding the use of AI in the judiciary, a number of further issues arise, such as the impact on access to justice and the right to a fair trial, or on the judge's independent judgment, even when AI is used only to support decision-making, and especially if AI were eventually to replace judges.
European ethical charter on the use of artificial intelligence in judicial systems
For these reasons, the European Commission for the Efficiency of Justice (CEPEJ), a body of the Council of Europe, adopted the European Ethical Charter on the use of artificial intelligence in judicial systems.
The European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems defines five principles that should be taken into account before AI is used in the judiciary: fundamental rights must be respected during the design and implementation of AI; the development or encouragement of any discrimination against individuals or groups of individuals must be prevented; the quality and security of processes and data must be ensured; data processing methods must be accessible, understandable and open to external audit; and users of the system must have enough information and autonomy to understand the process and consider alternatives, including the possibility of having their case decided by a judge. This last principle ("under user control") implies that the introduction of an AI system in the judiciary should be accompanied by computer literacy programs, as well as a debate involving the professional public.
As a member of the Council of Europe, Serbia should take these principles into account when planning the introduction of AI in the judiciary.
In Serbia, the problems brought by AI could deepen further, given the otherwise insufficiently effective mechanisms for the protection of human rights. In addition, the protection of citizens' privacy has so far not been a focus either during the planning or during the implementation of projects that involve mass processing of personal data (such as the introduction of mass video surveillance, or the information systems created in response to Covid-19). Most worrying is the fact that the most flagrant violations of this right have come precisely from the authorities, so trust in similar new projects has, with reason, been shaken. There is no transparency in decision-making, nor any broader public debate about whether we need such projects and what their advantages and possible consequences are. A separate question is who manages these systems, how they are protected, whether they can be misused, and so on. Moreover, Serbia's Law on Personal Data Protection, adopted on the model of the EU General Data Protection Regulation (GDPR), provides special standards for the protection of individuals' rights in cases of automated individual decision-making, and any AI system would have to meet these requirements as well.
Bearing all this in mind, it is extremely important that the introduction of AI into our judiciary be approached with great caution, respecting the highest ethical, technical and human rights protection standards. The question remains whether Serbia has a system that can meet these standards.