The future: AI and the erosion (or evolution) of judicial discretion
By Efrat Shimonov
The litigation team at Soroker Agmon Nordman Riba recently submitted a closing brief in a decade-long civil case that originated in the aftermath of the infamous Madoff affair and rests predominantly on the Israeli Unjust Enrichment Law, 5739-1979 (the “UE Law”). This landmark case brings to the forefront a critical tension between the rigid application of law and the need for dynamic judicial discretion.
Judicial discretion as a judicial safeguard
The UE Law, anchored in principles of fairness, justice, and integrity, contemplates a world defined by human interactions (e.g. erroneous payment, building inadvertently across property lines, receipt of stolen funds as donations, etc.). At its core, the law seeks to prevent individuals from benefiting at another’s expense without legal justification.
One of its most distinctive features lies in Section 2, which grants judges the discretion to exempt a party from the duty of restitution where circumstances would render restitution unjust. This discretion functions as a vital safety valve, ensuring that rigid application of the law does not lead to inequitable outcomes. It relies on a judge’s intuition, empathy, and moral judgment, which are uniquely human qualities and therefore cannot be easily replicated by algorithms.
Will AI rewrite the rules of justice?
Artificial Intelligence (AI) is rapidly transforming the legal landscape. Predictive analytics, automated drafting, and advanced evidentiary tools promise efficiency and consistency. AI can already analyse thousands of past rulings, identify patterns, and draw probabilistic inferences. Soon, “predictive justice” systems may even anticipate how a specific judge will rule on a Section 2 exemption based on precedents and historical data. These capabilities raise profound questions about the future of judicial discretion.
From safeguard to formality?
On one hand, AI could standardise the application of the UE Law, making outcomes more predictable and reducing uncertainty for litigants. This would enhance efficiency and minimise the risk of inconsistent rulings. On the other hand, such substantial standardisation risks eroding the very essence of judicial discretion. In jurisdictions influenced by common-law reasoning, the ability to weigh unique circumstances and deliver justice tailored to human realities is paramount. If judges begin to rely heavily on algorithmic predictions, they may feel an implicit pressure to conform to statistical norms, fearing reversal on appeal or reputational harm for deviating from data-driven expectations. In such a scenario, judicial discretion risks becoming a mere formality rather than a meaningful safeguard.
Can AI truly understand fairness?
The tension between technological progress and human judgment is not merely theoretical; it stands at the heart of what justice means in a digital age. I believe the UE Law was never intended to operate within a mechanised system. Its purpose was to protect fairness in unpredictable, deeply personal situations. While AI can calculate damages, draft opinions, and possibly even predict rulings, it cannot “feel” whether restitution is “unjust”. It cannot perceive the nuances of good faith, moral standing, or societal implications that often shape equitable outcomes.
Will courts follow code or conscience?
For attorneys, this evolving reality demands a strategic shift, as it may no longer suffice to argue the law alone. Attorneys might need to be prepared to challenge algorithmic recommendations and persuade judges to exercise the discretion granted by Israel’s parliament, the Knesset, in 1979. Arguments may need to extend beyond bare legal principles to preserve the human dimension of justice. The ultimate question is whether courts will embrace AI-driven uniformity or resist it in favour of judgment tailored to human realities.
Personal reflection
In my view, while I recognise the immense value AI brings to legal research and procedural efficiency, I am wary of a future in which we sacrifice the “soul” of the judiciary for the sake of statistical certainty. Justice is not a mathematical equation to be solved, but a human virtue to be practised. Delegating the concept of fairness to an algorithm means losing the very empathy that makes our legal system just.
To me, the preservation of the judge’s conscience over the machine’s code is not just a preference; it is a necessity for a society that values the individual, not only the average. In an era dominated by data, I believe we must fight to ensure that justice remains a profoundly human endeavour.
Efrat Shimonov is a senior litigation associate at Soroker Agmon Nordman Riba, Advocates & Patent Attorneys, where she specialises in high-stakes dispute resolution. Her practice focuses on navigating complex civil and intellectual property litigation, and she represents clients across both national and international jurisdictions.
