Artificial Intelligence, Judicial Practice, and the Future of Due Diligence

In this episode of the IN-CJ podcast, Rob Watson speaks with Mateus Rocha about the growing use of artificial intelligence within judicial systems and the implications this has for justice, accountability, and professional practice. Drawing on Mateus’s postgraduate research and emerging doctoral work, the discussion explores how AI tools are being adopted by judges, how they are experienced in practice, and what risks and transformations accompany their integration into court environments.

The conversation begins with a practical observation. Courts across many jurisdictions are operating under sustained pressure. Heavy caseloads, expectations of productivity, and administrative complexity create strong incentives to adopt tools that promise efficiency. In this context, generative AI systems are increasingly being used to summarise documents, search case files, structure reasoning, and draft procedural texts. For some judges, these systems are understood as workflow tools that enable them to shift attention from routine administrative labour towards more qualitative aspects of decision-making.

Mateus explains that in his research with judges, many described themselves as conscientious users. They emphasised that they read and reviewed outputs carefully, checking references and ensuring that decisions remained grounded in their own legal reasoning. For these practitioners, AI is not a substitute for judgment but an aid to navigating volume. However, he also notes that this account comes from a small sample and cannot be assumed to represent wider practice. Some interviewees acknowledged that colleagues may not apply the same level of scrutiny, which introduces variability in how due diligence is exercised.

A central theme of the discussion concerns the distinction between procedural productivity and substantive justice. AI tools can generate text quickly and convincingly, producing formal language that appears authoritative and coherent. Yet the ability to produce well-structured prose is not equivalent to understanding legal principles or ensuring that natural justice is upheld. Rob raises the question of whether training judges to act as prompt engineers risks displacing attention from core judicial competencies. The issue is not technological competence in itself, but whether the introduction of AI subtly reshapes professional identity and priorities.

The conversation references examples where judicial decisions have required substantial correction due to unverified or inaccurate citations. Such incidents highlight the risk of fabricated authorities or invented references when AI outputs are insufficiently checked. While courts are accustomed to issuing minor corrections, systemic errors raise deeper concerns about trust, legitimacy, and appellate burden. Even when AI is deployed in so-called low-risk contexts, the cumulative effect of small inaccuracies can undermine confidence in institutional processes.

Mateus observes that, in some jurisdictions, AI is currently used primarily in administrative or collective cases rather than in high-stakes criminal matters. This creates a perception of limited direct harm. However, he argues that the framing of certain decisions as low-risk may itself be misleading. Institutional decisions can have broad implications for public administration, collective rights, and access to services. The indirect effects on citizens should not be underestimated.

Another layer of complexity relates to organisational structure and governance. In Brazil, for example, courts may develop or adopt systems independently, resulting in a heterogeneous technological landscape. This contrasts with more centralised European models, where implementation might be coordinated at higher institutional levels. Decentralised adoption can foster experimentation and innovation, but it may also lead to uneven standards, inconsistent training, and fragmented oversight. Centralised models, by contrast, can promote uniformity but risk imposing solutions that are insufficiently sensitive to local conditions.

The discussion also touches on the informal use of AI within court environments. Judges may receive some degree of guidance or instruction, but other court staff are often neither formally trained nor authorised to use these tools. Nonetheless, publicly accessible systems remain available, creating the possibility of unregulated use with limited transparency. Questions arise about data security, confidentiality, and accountability. Who is responsible if an AI-assisted document contains errors? How should courts document or disclose AI involvement in decision-making processes?

A recurring insight from Mateus’s literature review is that the dynamics observed in judicial contexts mirror broader patterns across public institutions. Digitalisation and modernisation initiatives often proceed with a strong narrative of inevitability. Technology is framed as necessary to remain contemporary, competitive, or efficient. Wealthier institutions may adopt tools more quickly, reinforcing status differentials, while resource-constrained organisations struggle to keep pace. The symbolic value of being technologically advanced can become intertwined with perceptions of legitimacy and institutional authority.

At the same time, there is a sense among many practitioners that AI is not a temporary phenomenon. It is regarded as a tool that has arrived and will remain, similar to earlier waves of digital transformation. The question, therefore, is not whether courts will engage with AI, but how they will do so responsibly. This includes developing training frameworks, ethical guidelines, and evaluation methodologies that recognise both benefits and risks.

Rob emphasises the importance of understanding these processes in comparative perspective. Mateus’s doctoral research aims to extend his initial study internationally, examining whether judges in different legal systems experience similar transformations in their work. Are there shared perceptions of pressure, productivity, and professional adaptation? Do institutional cultures shape how AI is interpreted and integrated? Comparative inquiry can illuminate whether certain risks are structural features of digitalisation, rather than local anomalies.

Throughout the conversation, the underlying concern remains the integrity of justice. Courts occupy a distinctive position within democratic systems. Their legitimacy depends not only on outcomes but on process. Transparency, reasoned judgment, and adherence to procedural fairness are foundational principles. If AI tools alter how decisions are produced, even indirectly, it becomes necessary to scrutinise their influence carefully.

This episode does not offer definitive conclusions. Instead, it opens a space for reflective engagement, inviting practitioners, researchers, and policy-makers to consider how technological adoption intersects with workload pressures, professional norms, and institutional design. The adoption of AI in judicial settings may yield efficiencies and enable better organisation of complex information, yet it also introduces new dependencies, new vulnerabilities, and new ethical questions.

As Mateus continues his research, the aim is to move beyond anecdote towards systematic evidence. For the IN-CJ network, the discussion forms part of a broader commitment to examining how innovation in criminal justice systems can be aligned with accountability, equity, and social value. The challenge is not to resist technology reflexively, nor to embrace it uncritically, but to situate it within the enduring principles of justice and public responsibility.

Rob Watson