Deepfakes: Threatening, But Not Dangerous

Artificial intelligence-generated deepfakes have prompted widespread alarm and fears that democracy itself could be undermined. This year alone, roughly half of the world’s population is expected to vote in elections across 70 countries. In a World Economic Forum survey of 1,500 experts conducted in late 2023, the spread of false information was ranked as the most pressing global risk over the next two years, ahead even of extreme weather and war between states.

Yet those fears may be exaggerated. It would not be the first time the Davos consensus proved wrong. Deception and dishonesty have been part of politics at least since the Trojan horse; The Daily Mail’s publication of the forged Zinoviev letter, to take one example, helped sway the 1924 British general election.

In the internet age, however, the worry is more acute. The fear is that AI will allow misinformation to be weaponised at scale: the internet has already driven down the cost of distributing content, and generative AI promises to drive down the cost of creating it. Steve Bannon, the US political strategist, has predicted a flood of meaningless information.

Moreover, the problem of deepfakes, AI-generated impersonations that look authentic, is growing. The late philosopher Daniel Dennett warned of a world full of “counterfeit people” in which we no longer know whom to trust online. The danger is not so much that people will trust the untrustworthy, but that they will withdraw trust from those who actually deserve it.

Despite this, the political havoc expected from deepfakes has not yet materialised. Some AI startups argue that the real problem lies in distribution rather than creation, shifting the blame to the big tech platforms. At a recent conference in Munich, twenty leading technology companies, including Google, Meta and TikTok, pledged to combat deceptive deepfakes. Whether they are living up to that pledge is hard to verify, although the relative scarcity of scandals so far offers some reassurance.

The drive to expose disinformation has been helped by the open-source intelligence community, a large and loose network of online investigators. One part of that effort is the Political Deepfakes Incidents Database, established by American scholars, which aims to record and publicise such episodes; it had documented 114 instances by the beginning of this year. The surge in everyday use of AI tools may also be fostering a broader understanding of the technology, inoculating people against deepfakes.

The world’s largest democratic election took place in India, a strikingly tech-savvy nation, where an estimated 642 million citizens voted. Amid the tumult of Indian democratic politics, AI tools were used extensively to imitate well-known figures and candidates, to fabricate endorsements from dead politicians and to attack political opponents. Yet the election did not appear to be marred by the digital distortions many had feared.

Bruce Schneier and Vandinika Shukla, two specialists at the Harvard Kennedy School, studied the use of AI in the campaign and found it to be largely constructive. Some politicians, for instance, used AI tools and the official Bhashini platform to translate their speeches into all 22 officially recognised Indian languages, forging a closer bond with their electorate. While deepfakes make it difficult to separate fact from fiction because they can create non-consensual representations of anyone, Shukla and Schneier wrote, the consensual use of the technology is likely to make democracy more accessible.

None of this means that deepfakes are intrinsically harmless. They have already been used for crime, causing considerable damage and personal anguish. Earlier this year the British engineering firm Arup was duped out of $25 million (€23 million) in Hong Kong, in a scam in which a digitally cloned video of a senior executive ordered a financial transfer. And explicit deepfake images of some 50 girls from Bacchus Marsh Grammar, an Australian school, were circulated online; their photographs are believed to have been taken from social media accounts and manipulated.

Criminals tend to adopt new technology before almost anyone else, and their malicious use of deepfakes, particularly against private individuals, is a genuine cause for concern. Public misuse of the technology, by contrast, is usually quicker to detect and counter. The harder challenge is telling real politicians spouting real nonsense apart from AI-generated avatars spouting counterfeit nonsense. – Copyright The Financial Times Limited 2024