When Russia meddled in the 2016 US presidential election, it relied on brash, provocative posts riddled with grammatical errors and awkward phrasing, designed to grab attention and stoke anger by any means available. A Russian-made Facebook post portraying Hillary Clinton as satanic is a case in point.
Eight years on, foreign interference in US elections has grown far more sophisticated and harder to detect. Disinformation originating chiefly from Russia, China, and Iran remains a persistent, serious threat, and US intelligence and defence officials, tech companies, and academics say those countries keep refining their tactics. Swaying even a small slice of American voters could have an outsized effect on a presidential race that most polls show is extremely tight.
US intelligence assessments say Russia favours former President Donald Trump, while Iran appears to back his rival, Vice President Kamala Harris. China does not seem to favour either outcome. The tactics have evolved, but the overarching goal has not: sowing confusion and discord and discrediting American democracy on the world stage.
By adapting to a shifting media landscape and exploiting new tools, these campaigns can more easily deceive unsuspecting audiences. Foreign disinformation, largely a Russian operation in 2016, now also involves Iran and China and reaches into American politics across a wide range of platforms, from small forums about local weather to special-interest groups on messaging apps. Whether the three countries coordinate directly is debated, but they are clearly learning from one another's tactics.
The evidence includes hundreds of Russian accounts on Telegram spreading divisive content along with election-related videos, memes, and articles. It also includes numerous Chinese accounts that posed as students to stoke discord on American campuses during this summer's conflict in the Gaza Strip, as well as a presence on smaller right-wing platforms such as Gab, where conspiracy theories are amplified.
Russian agents have also worked to burnish Trump's image on Reddit and right-leaning forums, focusing on voters in six battleground states as well as Hispanic Americans, video gamers, and other groups seen as potential Trump supporters, according to documents made public in September by the US Justice Department.
One operation linked to China's state-backed influence efforts, known as Spamouflage, ran accounts under the persona 'Harlan' to make conservative content appear to come from an American source. The persona was deployed across four platforms: YouTube, X, Instagram and TikTok.
The content itself has also become far more focused and tailored. Foreign disinformation now targets not just swing states but specific constituencies within them, and even particular ethnic and religious communities in those areas. Experts and academics who have analysed these influence operations say the more precisely targeted the misinformation, the more likely it is to spread.
"The more misinformation is tailored to appeal to a unique audience, the higher its effectiveness," said Melanie Smith, head of research at the Institute for Strategic Dialogue, a London-based research organisation. "In past elections, we were trying to predict the overarching false narrative. However, at present, it's understated polarised messaging that fuels division."
Iran in particular has poured resources into subtle disinformation campaigns aimed at niche groups. One website, Not Our War, designed to appeal to US military veterans, mixed articles about the lack of support for serving soldiers with intensely anti-American viewpoints and conspiracy theories.
Other websites included Afro Majority, which aimed content at African Americans, and Savannah Time, which sought to sway conservative voters in the battleground state of Georgia. In Michigan, another key swing state, Iran created a site called Westland Sun to cater to Arab Americans in the suburbs of Detroit.
“The fact that Iran has set its sights on Arab and Muslim communities in Michigan reveals a sophisticated understanding of America’s political landscape, and a finesse in captivating a pivotal demographic to swing the election,” remarked Max Lesser, a senior analyst at the Foundation for Defense of Democracies.
The use of artificial intelligence (AI) is driving much of this advancement.
Recent advances in AI have made disinformation far easier to produce and spread, enabling state actors to organise and amplify their campaigns with greater proficiency and subtlety.
OpenAI, the developer of the widely used ChatGPT tool, said earlier this month that it had disrupted more than 20 foreign operations using its services between June and September. These included attempts by Russia, Iran, and China to generate and populate websites, to push propaganda or disinformation on social media, and even to analyse and reply to specific posts.
Jen Easterly, director of the Cybersecurity and Infrastructure Security Agency, said in an interview that AI capabilities are amplifying threats that were already anticipated and widespread, and that they effectively lower the barrier for foreign actors to run more sophisticated influence campaigns.
China has also been expanding its range of tools and techniques, deploying AI-generated audio clips, inflammatory memes, and fake voter polls in its campaigns around the world. All three countries have grown more adept at concealing their involvement.
Russia was earlier found to have concealed efforts to influence Americans by covertly bankrolling a group of American conservative commentators through Tenet Media, a digital platform founded in Tennessee in 2023. The platform served as a seemingly legitimate front for publishing scores of videos laced with sharply political commentary and conspiracy theories about election fraud, Covid-19, immigration, and Russia's war with Ukraine. The influencers, who were secretly paid for their appearances on Tenet, said they had not known the money came from Russia.
Mirroring Russia's strategy, Chinese operatives have been cultivating a network of foreign influencers to push their narratives, building what has been described as a corps of "foreign mouths," "foreign pens," and "foreign brains," according to a report last year by the Australian Strategic Policy Institute.
These new tactics have made it harder for government agencies and technology firms to find and dismantle influence campaigns, and that in turn emboldens other hostile states, according to Graham Brookie, senior director of the Atlantic Council's Digital Forensic Research Lab. The more malicious foreign influence activity there is, he said, the more room other bad actors have to enter the fray; if everyone is doing it, the risk of being exposed is not as severe.
Technology companies' efforts to curb misinformation have meanwhile diminished, with major players such as Meta, Google, OpenAI, and Microsoft scaling back their activities since the previous presidential race. Some platforms have no strategy for tackling the problem at all.
The inconsistent approaches across these tech firms have undermined their ability to stand up collectively against foreign misinformation, according to security officials and executives at the companies themselves.
Lesser, of the Foundation for Defense of Democracies, said the smaller, alternative platforms lack the stringent content moderation and well-established trust and safety policies needed to combat these campaigns.
He added that even major platforms such as X, Facebook, and Instagram find themselves in an endless cycle of takedowns and returns, as foreign state operatives quickly rebuild influence campaigns that have been shut down. Alethea, a firm that tracks online threats, recently found that an Iranian disinformation operation using accounts named after hoopoes, a brightly coloured bird, had resurfaced on X despite having been banned twice before.