Meta Announces Ambitious Plan to Acquire 350,000 Nvidia H100 GPUs for AI Development by 2024

In a recent Instagram post, Meta CEO Mark Zuckerberg revealed the company’s plan to invest heavily in Nvidia hardware as part of its artificial intelligence (AI) initiatives before the close of 2024. Zuckerberg outlined Meta’s commitment to advancing open-source artificial general intelligence (AGI) capable of supporting a wide array of applications, from productivity and development to wearable technologies and AI-powered assistants.

The focal point of Meta’s infrastructure investment is the acquisition of approximately 350,000 Nvidia H100 units by the end of 2024, amounting to roughly 600,000 H100-equivalents of compute in total when factoring in other planned GPU purchases. This sizable investment positions Meta as one of Nvidia’s most prominent customers; last year’s H100 orders were already dominated by Meta and Microsoft.

According to market analysis company Omdia, Meta and Microsoft surpassed Google, Amazon, and Oracle in individual H100 purchases. With this new acquisition, Meta aims to solidify its position as a major player in the cutting-edge AI hardware market.

Zuckerberg emphasized that Meta’s AI infrastructure investment is pivotal for developing a comprehensive AGI capable of providing reasoning, planning, coding, and various other abilities to researchers, creators, and consumers globally. The company is committed to responsibly developing this new AI model as an open-source product, ensuring accessibility for users and organizations of all sizes.

However, the surge in demand for the H100 has resulted in extended lead times, potentially putting companies awaiting fulfillment at a disadvantage against their competitors. Omdia reports lead times for Nvidia H100 orders ranging from 36 to 52 weeks due to the escalating demand for advanced AI hardware.

The driving force behind Meta’s pursuit of these high-performance GPUs is the significant increase in computing power and processing speed. Nvidia claims that the H100 with InfiniBand Interconnect is up to 30 times faster than its predecessor, the Nvidia A100, in mainstream AI and high-performance computing (HPC) models.

Notably, the H100 units boast three times the IEEE FP64 and FP32 processing rates compared to the previous A100. This improvement is attributed to enhanced clock-for-clock performance per streaming multiprocessor (SM), additional SM counts, and higher clock speeds. As Meta embarks on this ambitious AI infrastructure investment, the tech world eagerly anticipates the implications of such a substantial commitment to advanced GPU technology.
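The roughly 3x figure can be understood as the product of the three factors named above: the per-SM clock-for-clock gain, the increase in SM count, and the higher clock speed. A minimal sketch of that arithmetic is below; the specific values (a 2x per-SM gain, 132 vs. 108 SMs, 1.98 vs. 1.41 GHz boost clocks) are illustrative assumptions for particular SKUs, not figures stated in this article.

```python
# Hedged back-of-the-envelope decomposition of the H100-vs-A100
# FP64/FP32 speedup. All factor values are illustrative assumptions.
per_sm_gain = 2.0            # assumed clock-for-clock FP rate gain per SM
sm_count_ratio = 132 / 108   # assumed H100 SXM5 vs. A100 SM counts
clock_ratio = 1.98 / 1.41    # assumed boost clocks, GHz

# Overall speedup is the product of the three contributing factors.
speedup = per_sm_gain * sm_count_ratio * clock_ratio
print(f"estimated FP throughput gain: {speedup:.1f}x")
```

Under these assumed numbers the product lands in the neighborhood of 3x, consistent with the claim above; different SKUs or clock configurations would shift the result somewhat.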
