Social Media’s Harmful Impact on Democracy

Over the past week, a TikTok video purportedly issuing death threats against Sinn Féin leader Mary Lou McDonald was widely shared, the latest case in an unsettling pattern of political threats and aggression in Ireland. A man alleged to have made the threats was charged under the Non-Fatal Offences Against the Person Act 1997 and refused bail. The episode underlines deep problems in our digital society.

Threats of political violence against public figures are becoming an increasingly dangerous feature of public life. After the Dublin riots, Leo Varadkar publicly admitted that he could not sleep in his own home because of concerns for his safety. The recent local and European elections saw physical attacks on campaigners, and public harassment recorded and shared online with the intention of embarrassing and intimidating candidates.

While it is crucial that individual threats of violence are treated as a criminal matter, we must also find ways to tackle the root causes. The newly implemented Digital Services Act aims to address some of these problems.

There are three stories we tell about how these threats are connected to the online world. One holds that they stem from coordinated digital campaigns designed to intimidate officials, shape public perception and vilify individuals and groups. McDonald described an “escalation of targeted digital maltreatment aimed at me over the past year” led by a “very organised section of people online against Sinn Féin and particularly against me.”

The second sees this as the extreme end of genuine frustration with our elected officials bubbling up online, or even as an authentic digital expression of resentment embedded in public attitudes. McDonald was careful to distinguish legitimate online debate from her ordeal, saying she is “not someone who is oversensitive about the criticism and remarks that being a public figure attracts, but I strictly refuse, and I will always refuse threats against my life.”

The third is a debate about how our growing dependence on online platforms is dragging us into segregated echo chambers, inflaming our darkest impulses and eroding our ability to see the humanity in those we disagree with. Untangling where each of these three narratives begins and ends, and understanding how they feed into one another, is increasingly difficult.

Coordinated actions online are often meticulously planned and well-funded attempts to fabricate a sense of momentum behind an idea, as seen in the Russian interference in the 2016 US elections. Vast influence networks can be mobilised by hostile states, political actors, extremist groups and others, with the aim of undermining, delegitimising and even dehumanising political figures.

It is hard to distinguish genuine public sentiment from algorithmically curated platform content and from these manoeuvres. Coordinated campaigns frequently latch on to pre-existing public discontent, intensifying and amplifying it, while exploiting social media algorithms to flood people with targeted messages.

The orchestration of these campaigns often happens in closed spaces and forums, which makes them harder to detect and break down. This underlines the need to engage with the social media systems that underpin our online information ecosystem. A fresh piece of European Union legislation may help us recognise the role these platforms play in shaping our increasingly combative, and potentially violent, political landscape.

Attempts to address the role of social media in polarisation have been hindered by the limited legal tools available and the absence of political will to enforce those that exist. This lack of regulation has been especially pronounced in Ireland, a country with an outsized role in implementing EU rules, yet the Irish Government continues to face criticism for its lenient enforcement of rules on the tech firms on which its economy strategically depends.

The Digital Services Act (DSA) came into force in February, with enforcement of its provisions overseen jointly by the European Commission and national regulators. The Act could address prevalent issues because it comes with a novel tool that allows individuals to lodge complaints against social media platforms they believe are causing harm to them or to others. This mechanism applies where systemic failures are observed, specifically where a platform fails to identify and curb risks that could damage public discourse, electoral processes and public safety.

In June, I used this mechanism to raise a complaint with the Irish regulator, Coimisiún na Meán, over TikTok’s failure to put safeguards in place to protect Irish democracy during the recent elections. Last week I was informed of progress on the complaint. The regulator has given the social media giant two weeks to address one aspect of it, and depending on the response, further investigation may follow. Another part of the complaint has been taken up with the European Commission.

The DSA’s enforcement powers are substantial. Penalties can reach up to 6 per cent of annual turnover, which in TikTok’s case could run into hundreds of millions of dollars. The Act also gives regulators the authority to investigate platforms, including their recommendation systems and other closely guarded processes. But the regulator can act only on the complaints it receives, and so far it has not been overwhelmed with them.

It remains to be seen where this effort will lead, and whether this newly appointed regulator will break the pattern of under-enforcement in technology regulation that has become a disappointing trait of Europe. What will determine the effectiveness of the regulator, and of the tool it wields, is whether those affected by escalating harassment come forward to lodge complaints.

Liz Carolan writes about democracy and technology issues at TheBriefing.ie.