Scare tactics are a poor way to sell artificial intelligence

Eighteen months ago, I attended a party in San Francisco held to celebrate generative AI as the next industrial revolution. The mood was oddly cheerful yet nihilistic. AI is going to dismantle life as we know it, one guest declared. We were like farmers, oblivious to the machinery rolling towards us, about to be torn apart.

Yet generative AI has not wrought much havoc so far. Despite heightened expectations, jobs in accounting, design, software engineering, filmmaking and interpreting are holding up. Elections around the world have not been derailed. The world keeps turning. The early fearmongering is starting to look like a strange sort of marketing strategy.

Silicon Valley is usually associated with optimism. Its relentless conviction that the world is on an upward trajectory is one of its most endearing traits. When grand visions fail to materialise, such as Elon Musk’s prediction of crewed missions to Mars by 2024, the shortfall tends to be forgiven. Aiming too high is treated as a virtue.

But not every mindset cultivated in California is optimistic. Part of the tech industry is driven by fear.

Its most extreme expression is survivalism: fear of societal collapse. For some, that means buying rural land in New Zealand or stockpiling drinking water.

For others, fear is a commercial tactic. Software and services company Palantir is known for spooking investors with talk of global catastrophe in its quarterly earnings reports. Such existential speculation adds to its mystique. Although it is a listed company more than 20 years old, Palantir still retains an air of mystery.

Negative portrayals of tech products do not always harm them. Describing social media platforms as addictive and privacy-invading may alarm users, but it does not seem to deter advertisers.

Take Facebook. Its shares sank in 2018 after it emerged that Cambridge Analytica had exploited user data for experiments that purportedly influenced election results. Yet the share price recovered within a year, and Facebook’s market value has since doubled.

Being perceived as powerful enough to sway global politics lent the platform an air of importance, even if the perception was unfounded: the idea that “psychographic” data can swing elections remains largely unproven.

Many have latched on to AI as a vessel for all their fears. Sam Altman, co-founder of OpenAI, joined other executives and scientists last year in signing a letter stating that the risk of extinction from AI should be a global priority.

Prominent figures in tech called for a six-month pause in research, citing profound risks to society and humanity, while Goldman Sachs estimated that advances in AI could put 300 million full-time jobs at risk. The anxiety is probably genuine, but it primes us to be dazzled by the technology’s capabilities at first and disillusioned by them later.

When OpenAI unveiled Sora, its video-generating model, one critic described it as bringing us “one stride nearer to the demise of reality itself”, even though a filmmaker who used it found it fairly unremarkable. As the public encounters generative AI through their devices, Google Docs and various media platforms, the hype around the technology is facing more scrutiny.

Early consumer products such as Humane’s $699 AI clip-on pin are struggling to gain traction. According to tech news site The Verge, more of the pins have recently been returned than bought.

Meta’s AI Ray-Ban sunglasses have received more positive reviews. They can identify what you are looking at by taking a photo and recognising the object. The feature is impressive but not flawless. When I tested the glasses, I got the most use out of the built-in speakers, as did my colleagues in the San Francisco bureau.

Eventually the glasses may be able to translate street signs, give directions and assist people with visual impairments. But practical uses for new technologies take time to emerge.

We are still in the early stages, when ideas are being tried out and evaluated. That is hard to square with the claim that the technology is already terrifying. We might all be more patient while waiting for AI’s breakthrough application if we had not been repeatedly warned that it could wipe us out. – Copyright The Financial Times Limited 2024
