AI in the courtroom and judicial independence: An EU perspective
The Court of Justice of the EU (CJEU) has affirmed judicial independence as an essential principle for national courts in the EU. Judicial systems which do not comply with this principle cannot ensure the protection of EU rights and ultimately violate the rule of law, one of the EU’s fundamental values. Yet the advancement of artificial intelligence (AI) technology in the courtroom raises questions about judicial independence under EU law.
AI offers potential benefits for the administration of justice, as it could help reduce litigation backlogs. AI-driven judicial systems range from algorithms used to support judicial decision-making and to allocate cases among judges, to AI employed to facilitate virtual hearings via holograms or even to replace judges in the decision-making process.
Both EU and national institutions are in the process of adopting legislation on AI, making these issues particularly timely. The Next Generation EU framework invests significant resources in enhancing digitisation in the Member States, including national judicial systems. Italy’s digitalisation measures, for example, include reforms concerning courts. As algorithms and AI increasingly shape judicial power in the EU constitutional landscape, how can they comply with the principle of judicial independence?
Judicial independence in the European legal landscape
The principle of judicial independence is a manifestation of the broader EU principle of effective judicial protection. Under this principle, Member States must provide effective remedies in the fields covered by EU law, including the guarantee of independent courts.
As clarified in the CJEU’s case law (para. 118, AK, C-824/18), judicial independence and impartiality presuppose
rules, particularly as regards the composition of the body and the appointment, length of service and grounds for abstention, rejection and dismissal of its members, that are such as to dispel any reasonable doubt in the minds of individuals as to the imperviousness of that body to external factors and its neutrality with respect to the interests before it.
Judicial independence has both an external and an internal dimension under EU law. The external dimension requires judges to be free from external interference, while the internal dimension requires them to maintain an impartial attitude towards the parties to the litigation. The presence of independent arbiters for disputes is essential to procedural fairness and contributes to public trust in courts and to their legitimacy.
The EU case law illustrates that judicial reforms introduced in the Member States must ultimately respect judicial independence in both its external and internal dimensions. In Commission v Poland, the Commission raised concerns about a reform empowering the Polish Minister for Justice to authorise judges to continue to carry out judicial duties beyond the newly introduced (lower) retirement age. The CJEU found that these procedural rules failed to protect judges from potential direct or indirect influences on their decision-making, and that Poland was therefore in breach of Article 19(1) of the Treaty on European Union and Article 47 of the Charter of Fundamental Rights.
In addition, the case law of the European Court of Human Rights, in the light of which Article 47 of the EU Charter should be interpreted, has clarified the importance of the context in which a judicial reform takes place, as that context may reveal undue intrusion into the realm of judicial independence. It has also reaffirmed the centrality of the requirement that tribunals be established by law, in so far as the law sets out clear procedures for the appointment of judges and thus prevents potential interference with the judiciary by the executive.
AI in the courtroom: Emerging issues
In evaluating how AI-assisted courts can comply with the EU principle of judicial independence, we should first consider how any future reforms should be carried out. As explained, judicial reforms are likely to affect the external dimension of independence, which means that judges must be protected from intervention or pressure, in particular from procedural rules that could affect their decisions, even indirectly. It follows that the introduction of AI into courts via judicial reforms should not facilitate the exercise of influence over national judiciaries.
Examples of such influence include liability regimes for judges operating in AI-assisted courts, which may exert pressure on judicial decisions; reductions in judges’ salaries, which could expose them to corruption; and cuts to judicial funding, which could increase the workload of individual judges.
The implementation of reforms involving AI in the courtroom raises additional issues for the external dimension of judicial independence. To begin with, any influence exercised by a state’s executive or legislature over, for instance, the data centres used to digitise judicial decisions, the selection of training data for neural systems, or the very design of the algorithms used in the courtroom would be liable to raise doubts about a court’s independence.
Moreover, given the complexity of AI, courts need expert knowledge of these systems in order to oversee their operation and spot potential problems. At least three scenarios are possible: training judges, relying on technicians to operate AI in the courtroom, or a combination of both. If judges are to be trained, what knowledge is required and which skills are sufficient to deal with AI issues? The answer is unclear at this stage. Under the second scenario, involving AI technicians, extending independence guarantees to them is crucial: any technical expert working in this field should be free from pressure and interference. A related question is the relationship between the technician and the judge, and how liability (for any damage or problems caused by AI) is distributed between them. A joint liability regime seems appropriate in the light of the shared responsibility of the judge and the expert. At the same time, as mentioned above, any liability regime for judges in AI-assisted courts should not entail forms of influence over the judicial function.
Proposed EU AI regulation and the risks to judicial independence
Issues of judicial independence become more tangible under the proposed EU Regulation on AI. Point 8 of Annex III classifies AI systems used in the administration of justice and in democratic processes as high-risk. The provision is broadly framed, covering tools intended to assist courts in researching, interpreting and applying the law to a set of circumstances. Depending on whether we adopt a narrow or a broad reading, it may cover not only systems used to support judicial decision-making but also tools used merely to conduct case-law research. These systems are subject to the requirements of Chapter 2 of Title III of the proposed AI Act. Among the many questions relating to judicial independence, two deserve particular attention: the role of the AI system provider, and the tangle of control entities.
The Act defines a provider as ‘a natural or legal person, public authority, agency or other body’ that develops an AI system or has one developed, with a view to marketing it or putting it into service ‘under its own name or trademark, whether for payment or free of charge’ (Article 3). The implication, with respect to judicial independence, is that the provider should not be the executive or legislative power.
If, for example, a national government were tasked with designing the algorithms used in courts, this could enable interference with decision-making and would undermine the external dimension of judicial independence. In the case of a private-sector provider, that entity should be free from influence by public authorities, in order to avoid indirect influence over the judiciary. Moreover, the AI Act envisages that the provider may be located in a third country, which raises issues of potential surveillance and control by foreign states (see recital 10 of the proposed AI Act). The Pegasus and PRISM scandals underline that this risk is far from hypothetical. Ultimately, any doubt concerning the independence of AI-assisted courts would affect public trust in the judiciary.
Furthermore, the proposed Act would place high-risk AI systems under the control of a web of entities: notified bodies (in charge of the conformity assessment of AI systems – Article 33 of the proposed AI Regulation), notifying bodies (responsible for carrying out the necessary procedures for the assessment, designation and notification of conformity assessment bodies – Article 30 of the proposed AI Regulation), the European Artificial Intelligence Board, national competent authorities (Article 59 of the proposed AI Regulation) and market surveillance authorities. While there are some independence requirements for the notifying bodies, this is not the case for notified bodies and national competent authorities. How would these entities, tasked with overseeing the functioning of AI, interact with the executive and the legislature? Here too, the risk of interference from state powers is real.
The way forward
The introduction of AI into the courtroom is politically desired by the EU and, given the pace of technological progress, nearly unstoppable. Nevertheless, any reforms of this type should be mindful of the principle of judicial independence, which is of the essence for the rule of law. In managing technological advances such as AI-assisted courts, Member States and EU institutions need to cooperate with a view to preserving the European way of life, which rests also on constitutional guarantees such as independent judiciaries.
Giulia Gentile is a Fellow in Law at the London School of Economics and Political Science and a CIVICA Visiting Scholar at the European University Institute.