AI in Law: ‘Terrifying’ Impacts

Relying on artificial intelligence (AI) for judicial and legal decision-making could have devastating effects, according to the Commissioner for Human Rights at the Council of Europe. During a high-profile law conference in Dublin, Commissioner Michael O’Flaherty stressed the necessity of ensuring that human judgement remains central to the legal decision-making process.

Sharing a similar stance, Ireland’s Attorney General, Rossa Fanning, cautioned against letting sophisticated technology strip the justice system of its human core. While AI tools can help resolve disputes by collating data and uncovering documents, Fanning emphasised that human involvement is an inseparable part of judicial decision-making if the final outcome is to be accepted by the public and the parties to the litigation.

Fanning argued that feeding numerous sentencing decisions for various types of crime into a computer so that AI can calculate a likely sentence entirely disregards what people expect from the legal process. Individuals, he explained, expect a human to listen to their account, examine the evidence and deliver a judgement informed by humanity and discretion, an element that would be sorely missed if removed.

Marko Bošnjak, the President of the European Court of Human Rights (ECHR), added that AI brings its own set of challenges to human rights. Although the European Convention on Human Rights does not explicitly mention AI, Bošnjak pointed out that the court had overcome unforeseen challenges before, and expressed confidence that it would find ways to deal with this issue too.

A recent ECHR ruling on the use of AI and facial recognition technology in tracking a Russian activist emphasised the need for proper regulation of AI to protect against arbitrary violations of rights, suggested Bošnjak.

The three men discussed both the positive and negative aspects of artificial intelligence during a panel debate on AI and its implications for the legal domain, held at the annual summit of the European Law Institute (ELI) in Dublin last week. The ELI has nearly 1,700 individual members drawn from legal practice and academia across Europe, along with some 150 institutional members, including EU agencies and supreme courts.

The two-day conference, attended by around 400 people, focused on topics such as the impact of digitisation on law and social structures, and the ethical considerations and regulation of AI. Commissioner O’Flaherty remarked on the groundbreaking EU AI Act 2024, the world’s first comprehensive legal framework for AI, crafted to govern the market. He praised it up to a point, but identified gaps that need closing, particularly with regard to the private, security and defence sectors.

He accused tech firms of shirking accountability for spreading and promoting severely objectionable content on their platforms. While acknowledging the positive applications of AI, he underlined the substantial and multifaceted threat it poses to human rights.

He stressed that the primary incentive behind AI is not better results or higher quality but speed and productivity, a situation that demands serious consideration. AI, he argued, is only as reliable as the data it uses, and it has repeatedly been shown to be driven by incorrect data and consequently to yield incorrect outcomes. He expressed concern at the insufficient attention paid to its errors, emphasising the necessity for AI to be trustworthy.

He anticipates that rulings on the adequacy and suitability of European countries’ AI regulations will fall within the purview of the ECHR. He noted a disorderly patchwork of regulation across nations and suggested the need for greater uniformity.