Elon Musk’s ventures, Tesla and xAI, have invested around $10 billion in AI training and computing capabilities in 2024, including a 29,000-unit Nvidia H100 cluster at Giga Texas. Tesla’s Cortex supercomputer and xAI’s Colossus supercomputer in Memphis are key projects aimed at advancing AI technology. Despite the significant spending, Musk’s efforts appear to lag behind competitors such as Microsoft and Google, which are investing far larger amounts in AI infrastructure.
—
Elon Musk’s ventures, Tesla and xAI, have reportedly invested around $10 billion this year in enhancing training and inference compute capabilities, according to a post by Tesla investor Sawyer Merritt on X (formerly Twitter).
Merritt highlighted that “Tesla is already ahead of schedule, utilizing a 29,000 unit Nvidia H100 cluster at Giga Texas, with plans to reach a capacity of 50,000 H100 units by the end of October, and approximately 85,000 equivalent capacity by December.”
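Merritt’s stated ramp implies roughly a tripling of Giga Texas H100 capacity in about three months. A minimal sketch of that arithmetic, using only the milestone figures quoted in his post (the labels are paraphrased, not his exact wording):

```python
# Giga Texas H100 ramp per Merritt's post. Units are H100 GPUs;
# the December figure is "equivalent capacity," not literal H100 count.
ramp = [
    ("current", 29_000),
    ("end of October", 50_000),
    ("December (equivalent)", 85_000),
]

start = ramp[0][1]
for label, units in ramp:
    # Growth factor relative to the current 29,000-unit cluster.
    print(f"{label:>22}: {units:>6,} units ({units / start:.1f}x current)")
```

The December milestone works out to roughly 2.9 times the current cluster size.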
By year’s end, Musk’s companies (Tesla and xAI) are expected to have deployed around $10 billion in training compute capacity for 2024 alone.
Tesla had already introduced its Cortex AI cluster in August, which will support the training of its Full Self-Driving technology using 50,000 Nvidia H100 GPUs alongside an additional 20,000 of Tesla’s own Dojo AI chips.
xAI, for its part, began building its Colossus supercomputer in July at the “Gigafactory of Compute,” a converted Electrolux facility in Memphis, Tennessee. Musk has claimed that the Memphis cluster is “the world’s most powerful AI training cluster,” running on 100,000 Nvidia H100 GPUs, with plans to roughly double that by adding another 50,000 H100 and 50,000 H200 GPUs in the coming months. Colossus went live in September and has been tasked with developing what is intended to be “the world’s most potent AI by every standard by December of this year,” likely Grok 3. xAI has yet to disclose the construction costs of the Memphis site, but Tom’s Hardware estimates at least $2 billion has been spent on GPUs alone.
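That $2 billion floor can be sanity-checked with a rough back-of-envelope calculation. The per-GPU price range below is an assumption based on widely reported H100 street prices, not a figure from the article or from Tom’s Hardware:

```python
# Back-of-envelope check on xAI's Memphis GPU spend.
# ASSUMPTION: H100 unit price of roughly $20,000-$30,000
# (commonly cited street-price range, not sourced from the article).

def gpu_spend(gpu_count: int, unit_price_usd: int) -> int:
    """Total hardware cost for a cluster, GPUs only.

    Excludes networking, power, cooling, and facility costs,
    so this is a lower bound on the real build-out cost.
    """
    return gpu_count * unit_price_usd

low = gpu_spend(100_000, 20_000)   # $2.0 billion
high = gpu_spend(100_000, 30_000)  # $3.0 billion
print(f"GPU-only spend: ${low / 1e9:.1f}B to ${high / 1e9:.1f}B")
```

At 100,000 GPUs, even the low end of the assumed price range lands at $2 billion, consistent with the estimate quoted above.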
The $10 billion figure also covers both companies, which suggests Tesla alone is spending well below Musk’s claim from April, when he stated that Tesla by itself would invest that amount in AI compute capacity for the year. At the time, he noted, “Tesla will spend around $10 billion this year on combined training and inference AI, primarily in vehicles. Any company not spending at this level effectively cannot compete.”
This suggests that Musk’s AI initiatives are lagging behind well-funded competitors such as Microsoft, OpenAI, and Google. For instance, analysts estimated in July that OpenAI would spend about $7 billion on AI compute while reporting losses of roughly $5 billion on other operating expenses. In early October, however, the company announced a new investment round totaling $6.6 billion at a post-money valuation of $157 billion, funding that will let it strengthen its frontier AI research, expand compute capacity, and continue building solutions for complex challenges.
According to a report by Reuters, both Microsoft and Meta are investing heavily in their AI compute capabilities. Microsoft is reportedly deploying as much capital each quarter as it spent annually before 2020: its capital expenditure rose over 5% in Q1 2024 to $20 billion, with even higher spending expected in Q2. Similarly, Meta’s quarterly capital spending in 2024 matches its annual spending levels from 2017.
Google, meanwhile, has reportedly spent $13 billion in capital expenditures during Q3 2024, a 63% increase over the same period last year. The company has also invested around $38 billion in compute infrastructure since the beginning of the year, an 80% increase from the first three quarters of 2023. Against those figures, Musk’s $10 billion investment across his two companies now appears relatively modest.