AI and Rehabilitation – What Should Change, What Must Not?
The recent IN-CJ and Probation Institute roundtable on AI and rehabilitation brought together an unusually rich mix of voices from practice, research, service development, policy, design, and international justice settings. Dr Victoria Knight and Helen Schofield framed the discussion, while key contributions came from Carl Mumford on neurodiversity, David Raho on AI design and implementation, Edgar Kuhimbisa on digital governance in Uganda, Mateus Amorim on judicial uses of AI in Brazil, and Glyn Davies on service development and the practical barriers to digital delivery. The wider discussion was strengthened by comments from participants including Katy Savage, Jay Wood, Martina Feilzer, Liliana Lupsica, Rachel A. Wood, Mary Anne McFarlane, Simon Bonk, and others working across probation, research, training, ethics, and digital innovation.
What emerged was not a simple argument for or against AI. The discussion was more mature than that. It asked a more demanding question:
If rehabilitation is fundamentally relational, what kinds of technology can support that work without distorting it?
That question ran through almost every contribution. Helen Schofield set the tone by arguing that technology should serve the primary purposes of rehabilitation rather than subvert them. AI may help with administration, assessment, communication, and even therapeutic support, but rehabilitation itself remains rooted in human judgement, reflection, and relationship. Several contributors returned to the same point in different ways.
Carl Mumford stressed the importance of keeping a human in the loop, particularly where people’s needs, motivations, vulnerabilities, and learning styles are complex. AI, he suggested, may be useful as an aid, but it cannot substitute for the professional discretion and interpersonal skill that rehabilitative work depends on.
This became especially clear in discussion of neurodiversity. One of the strongest observations of the session was that the criminal justice system was not designed with neurodiversity in mind, and that any digital intervention introduced into it should therefore be neuroinclusive from the outset. That is not a marginal issue. It goes to the heart of whether people can understand, trust, and benefit from the systems intended to support them.
Carl Mumford and others raised practical questions about dyslexia, autism, communication preferences, theory of mind, and the design of interfaces and interactions. The implication was clear: if AI tools are introduced without serious attention to different cognitive and communicative needs, they may widen exclusion rather than reduce it.
There was also considerable interest in the more modest, low-risk uses of AI. Several participants argued that administrative support may be the most immediate and useful application. Liliana Lupsica, speaking from probation practice in Romania, made the point directly: if AI could reduce paperwork and release more time for one-to-one work, that would be valuable.
Mateus Amorim described similar low-risk benefits in legal settings, where AI is already helping to sort documents and organise tasks. David Raho noted promising examples in England and Wales, including AI transcription pilots that had been well received by practitioners. In this part of the discussion, AI was not imagined as a replacement for professional work, but as a tool that might remove friction from it.
Even here, however, the roundtable refused easy optimism. Several concerns were repeated throughout the session and in the chat. One was deskilling. If practitioners become too dependent on AI-generated summaries, prompts, or recommendations, what happens to the slow cultivation of judgement? Another was data. Who owns it? Who controls it? What is it trained on?
Carl Mumford, David Raho, Simon Bonk, and others all pointed in different ways to the importance of understanding the provenance, limits, and biases of data. Martina Feilzer made one of the sharpest interventions in this respect, warning that AI is not entering a neutral system. It is being developed from data shaped by existing inequalities, exclusions, and distortions. There is no reason to assume that AI will be less biased unless that claim is properly tested.
The discussion of chatbots sharpened these concerns further. Participants explored whether AI systems can genuinely support relational or empathetic work, or whether they simulate it in ways that may be misleading. David Raho raised the possibility that chatbots may be too affirmative or too emotionally persuasive, creating risks of over-dependence.
Yet the roundtable did not dismiss such tools outright. Jay Wood, drawing on work with prison leavers, argued that some people who are anxious, ashamed, distrustful, or neurodivergent may initially find it easier to disclose concerns to a chatbot than to a person. That is an important observation, and it complicates any straightforward defence of human-only contact. The real issue may not be whether AI can ever play a relational role, but under what conditions, with what safeguards, and with what pathways back to human support.
Trust was another major theme, especially in the chat. Jay Wood described a significant trust gap between prison leavers and anything associated with the probation system. If digital tools are to work, they must be shaped around what users actually want and can trust.
Katy Savage made a related point from the perspective of Revolving Doors, arguing that governance and design must include people with lived experience of the justice system. This was one of the strongest practical principles to emerge from the session. It is no longer enough to say that services should be user-centred in theory. In this area, service users need to be involved in design, testing, critique, and oversight.
The conversation also stayed grounded in the realities of infrastructure. Glyn Davies and others noted that prisons and probation services are not operating on a level digital playing field. Access remains uneven. Connectivity is inconsistent. Some settings still struggle with the most basic forms of digital implementation.
One striking point raised in the main discussion was that secure laptops had been rolled out in some settings but remained unused because staff lacked confidence in using them in front of those they support. This matters because it reminds us that debates about AI ethics can become abstract very quickly. In many places, the challenge is still basic digital capability, organisational confidence, and operational readiness.
What, then, should practitioners, managers, researchers, and policy developers take from this discussion? Perhaps the clearest message is that AI should not be treated as a question of technical adoption alone. It is a question about purpose, values, relationships, and institutional design. The comparison is not simply between AI and no AI.
As several contributors noted, the status quo also carries risks: fragmented systems, inconsistent support, excessive paperwork, weak transparency, and poor continuity of care. The right question is not whether AI is risky. It is whether we are willing to evaluate new risks against the harms and limitations already built into current practice.
That leaves a set of questions which the roundtable did not resolve, but rightly kept open.
- What should rehabilitation protect that technology must never erode?
- Which uses of AI are genuinely low-risk, and which only appear so?
- How can professional judgement be strengthened rather than hollowed out by automation?
- What forms of accountability, transparency, procurement discipline, and data stewardship are needed before systems are scaled?
- How should lived experience shape design from the beginning, rather than as an afterthought?
- And perhaps most importantly, if rehabilitation is about human change, belonging, and the rebuilding of agency, what kinds of digital systems can support that process without redefining it in purely managerial terms?
These are not peripheral questions. They are now central to the future of criminal justice. The roundtable did not offer a settled doctrine, but it did something more useful. It created a space in which practical experience, ethical caution, and institutional imagination could be held together. That may be the most rehabilitative approach to AI available to us at present.
