Is it possible for Nvidia to remain integral to the emerging AI market?

Adam, a 44-year-old hospitality professional, made a speculative investment in Nvidia last month on a friend’s recommendation. Though he knew little about the firm itself, he was drawn by its work in artificial intelligence, which he likened to the futuristic Cyberdyne Systems of the Terminator films. The Londoner asked to remain anonymous because his relatives do not know that he trades shares.

Nvidia, the 31-year-old chipmaker, has been attracting considerable interest, not just from retail investors like Adam but from Wall Street too. This week the company overtook tech giants Apple and Microsoft to become the world’s most valuable company, worth roughly $3.3 trillion (€3.1 trillion). That astonishing growth has been driven by surging demand for Nvidia’s graphics processing units, which leading tech firms such as Meta and Microsoft consider ideal for building vast AI systems. The share price has risen 700 per cent since OpenAI launched its popular ChatGPT chatbot in November 2022.

Nvidia’s rise reflects the dynamics of the evolving AI sector: rapid expansion, investor allure and unpredictability. Its trajectory is likely to track, and perhaps shape, the direction of the AI economy at large. The scenario recalls March 2000, when Cisco, then a brand as little known to the public as Nvidia, overtook Microsoft at the height of the dotcom bubble. As in that era, corporations are now investing billions to lay the groundwork for a projected digital revolution.

Much like Nvidia now, Cisco profited handsomely by supplying the necessary digital plumbing. Yet its share price never recovered after the dotcom crash of 2000, a pertinent reminder of the highs and lows of the technology market.

The immense capital that major tech companies are pouring into artificial intelligence is driven largely by expectations of future revenue rather than immediate returns. That has led some to worry about history repeating itself: Cisco built massive infrastructure for demand it hoped would materialise, and some of that fibre still lies unused underground today.

Despite these concerns, there are differences, according to Bernstein analyst Stacy Rasgon. Nvidia shares trade at a significantly lower multiple of expected earnings than Cisco’s did at the peak of the dotcom bubble.

Some companies, such as Microsoft, are already seeing a return on their AI chip investment, though others, such as Meta, have cautioned that returns will take longer to materialise. If an AI bubble is forming, Rasgon believes, it is not about to burst any time soon.

Cisco’s dotcom-era experience also differs greatly from that of Apple and Microsoft. Those tech behemoths have consistently vied for the top spot on Wall Street not solely by making successful products but by building platforms that sustain enormous business ecosystems. Apple’s App Store, with about two million apps, reportedly generates hundreds of billions of dollars in developer billings each year.

Nvidia’s economic position is very different from Apple’s. It is largely the popularity of a single app, ChatGPT, that has spurred the recent wave of investment and pushed Nvidia’s shares upwards. Nvidia says its ecosystem encompasses 40,000 companies and 3,700 “GPU-accelerated applications”.

Rather than selling millions of reasonably priced devices to consumers each year, Nvidia has become the world’s most valuable company by selling a comparatively small number of costly AI chips for data centres, primarily to a select group of companies.

Major cloud computing providers Microsoft, Amazon and Google accounted for nearly half of Nvidia’s data centre revenues, the company disclosed last month. Chip analyst group TechInsights estimates that Nvidia sold roughly 3.76 million of its data centre GPUs last year, giving it a 72 per cent share of that niche market and leaving competitors such as Intel and AMD far behind.

Despite the rapid expansion of Nvidia’s sales, which rose 262 per cent year on year to $26 billion in the quarter ending in April, questions are mounting about the profitability of AI technologies. Nvidia is growing faster than Apple did in the early iPhone era, but doubts persist about how AI’s potential can be harnessed successfully.

Tech firms are responding to those doubts by buying still more chips. In a race for a breakthrough in artificial intelligence, businesses including OpenAI, Microsoft, Meta and Elon Musk’s new venture xAI are competing to build data centres that pack as many as 100,000 AI chips into supercomputers three times larger than today’s biggest clusters. The hardware for each of these server farms costs around $4 billion, according to chip consultancy SemiAnalysis.

AI’s appetite for computational power appears unending. Jensen Huang, Nvidia’s chief executive, estimates that more than $1 trillion will be spent in the coming years to modernise existing data centres and build what he calls ‘AI factories’, as everyone from Big Tech firms to national governments builds its own AI models.

However, such intense investment can be sustained only if Nvidia’s clients manage to profit from AI. Its ascent to the top of the stock market has swelled the ranks of sceptics in Silicon Valley who question whether AI can live up to its lofty expectations.

David Cahn, a partner at Sequoia, one of Silicon Valley’s largest start-up investors, has warned of a ‘speculative frenzy’ surrounding AI and the illusion that fortunes can be made quickly by building advanced AI and stockpiling Nvidia chips.

Cahn expects AI to generate substantial economic value eventually, but estimates that Big Tech firms will need to earn hundreds of billions of dollars a year in new revenues to recoup their current, rapidly growing AI infrastructure investment. For Microsoft, Amazon Web Services and OpenAI, income from generative AI is estimated in the low billions of dollars this year.

Euro Beinat, global head of AI and data science at Prosus Group, one of the world’s leading tech investors, reckons the era of exorbitant claims about AI’s potential is drawing to a close. Over the next 16 to 18 months, he expects a more honest assessment of its capabilities and constraints.

For Nvidia to sustain its prosperity, industry experts say, it needs to emulate Apple’s approach, even though it is not a mass-market consumer company. The thinking is that Nvidia must intertwine its software platform with its hardware to retain corporate clients once the hardware frenzy subsides, says Ben Bajarin of Creative Strategies, a Silicon Valley consultancy.

Huang has always maintained that Nvidia’s offering extends beyond chips. He says the company supplies everything needed to build a complete supercomputer: chips, networking equipment and its proprietary Cuda software, which lets AI applications talk to its chips. Many regard this software as Nvidia’s secret weapon.

In March, Huang introduced Nvidia Inference Microservices (NIM), a suite of software tools designed to help businesses readily incorporate AI into specific sectors or domains. He likened the tools to an operating system for running large language models, such as the one that powers ChatGPT. He anticipates producing NIMs at massive scale and expects Nvidia AI Enterprise, the company’s software platform, to boom.

Having previously offered its software at no cost, Nvidia now intends to charge businesses $4,500 per GPU per year to deploy Nvidia AI Enterprise. The strategy is pivotal to attracting corporate and government clients that lack the in-house AI expertise of the tech giants.

Nvidia faces a challenge, however: many of its major clients want to own the developer relationship and build their own AI platforms. Microsoft encourages developers to rely on its Azure cloud platform; OpenAI has launched its own GPT Store, akin to an App Store, offering customised versions of ChatGPT; and Amazon, Google and AI start-ups such as Anthropic and Mistral have their own suites of developer tools.

Nvidia is also increasingly in direct competition with its largest clients. Google has developed its own AI accelerator chip, the Tensor Processing Unit, and Amazon and Microsoft have created their own variants. Although these efforts remain comparatively small in scale, they point to a future in which such firms reduce their reliance on Nvidia.

In response, Nvidia has begun courting potential future rivals to its Big Tech clients in an effort to diversify its ecosystem. It supplies chips to AI-focused cloud computing start-ups such as Lambda Labs and CoreWeave, which offer access to Nvidia GPUs, and to local players such as France’s Scaleway, rather than focusing solely on multinational giants.

These manoeuvres are part of Nvidia’s wider acceleration of investment across the blossoming AI ecosystem. It recently took part in funding rounds for Scale AI, a data-labelling firm that raised $1 billion, and Mistral, a Paris-based rival to OpenAI that raised €600mn.

Data from PitchBook shows that Nvidia has struck 116 such deals over the past five years. Beyond the financial upside, the deals give Nvidia early insight into where AI is heading, helping it plot its own product road map.

Huang is deeply engaged with the mechanics of AI trends and their implications, says Kanjun Qiu, chief executive of AI research lab Imbue, which Nvidia backed last year. Huang has built a large team that works directly with AI labs so he can fully understand their aims, even when they are not his direct customers.

That forward-looking strategy has placed Nvidia at the heart of the AI boom. The company survived several brushes with failure on its way to becoming the world’s most valuable company, but even in Silicon Valley’s fiercely competitive environment, no firm’s survival is guaranteed.