Published 14:20 IST, October 30th 2024
OpenAI to build its first AI chip with Broadcom and TSMC
OpenAI is working with Broadcom and TSMC to build its first in-house chip designed to support its AI systems.
OpenAI is working with Broadcom and TSMC to build its first in-house chip designed to support its artificial intelligence systems, while adding AMD chips alongside Nvidia chips to meet its surging infrastructure demands, sources told Reuters.
OpenAI, the fast-growing company behind ChatGPT, has examined a range of options to diversify chip supply and reduce costs. OpenAI considered building everything in-house and raising capital for an expensive plan to build a network of factories known as "foundries" for chip manufacturing.
The company has dropped the ambitious foundry plans for now due to the costs and time needed to build a network, and plans instead to focus on in-house chip design efforts, according to sources, who requested anonymity as they were not authorized to discuss private matters.
The company's strategy, detailed here for the first time, highlights how the Silicon Valley startup is leveraging industry partnerships and a mix of internal and external approaches to secure chip supply and manage costs, much as larger rivals Amazon, Meta, Google and Microsoft do. As one of the largest buyers of chips, OpenAI could have broader implications for the tech sector with its decision to source from a diverse array of chipmakers while developing its own customized chip.
Broadcom stock jumped following the report, finishing Tuesday's trading up over 4.5%. AMD shares also extended their gains from the morning session, ending the day up 3.7%.
OpenAI, AMD and TSMC declined to comment. Broadcom did not immediately respond to a request for comment.
OpenAI, which helped commercialize generative AI that produces human-like responses to queries, relies on substantial computing power to train and run its systems. As one of the largest purchasers of Nvidia’s graphics processing units (GPUs), OpenAI uses AI chips both to train models where the AI learns from data and for inference, applying AI to make predictions or decisions based on new information. Reuters previously reported on OpenAI's chip design endeavors. The Information reported on talks with Broadcom and others.
OpenAI has been working for months with Broadcom to build its first AI chip, focusing on inference, according to sources. Demand right now is greater for training chips, but analysts have predicted that demand for inference chips could surpass it as more AI applications are deployed.
Broadcom helps companies including Alphabet unit Google fine-tune chip designs for manufacturing and also supplies parts of the design that help move information on and off the chips quickly. This is important in AI systems where tens of thousands of chips are strung together to work in tandem.
OpenAI is still determining whether to develop or acquire other elements for its chip design, and may engage additional partners, said two of the sources.
The company has assembled a chip team of about 20 people, led by top engineers who have previously built Tensor Processing Units (TPUs) at Google, including Thomas Norrie and Richard Ho.
Sources said that through Broadcom, OpenAI has secured manufacturing capacity with Taiwan Semiconductor Manufacturing Company to make its first custom-designed chip in 2026. They said the timeline could change.
Currently, Nvidia’s GPUs hold over 80% market share. But shortages and rising costs have led major customers like Microsoft, Meta, and now OpenAI, to explore in-house or external alternatives.
OpenAI’s planned use of AMD chips through Microsoft's Azure, first reported here, shows how AMD, with its new MI300X chips, is trying to gain a slice of a market dominated by Nvidia. AMD has projected $4.5 billion in 2024 AI chip sales, following the chip's launch in the fourth quarter of 2023.

Training AI models and operating services like ChatGPT are expensive. OpenAI has projected a $5 billion loss this year on $3.7 billion in revenue, according to sources. Compute costs, or expenses for hardware, electricity and cloud services needed to process large datasets and develop models, are the company's largest expense, prompting its efforts to optimize utilization and diversify suppliers.
OpenAI has been cautious about poaching talent from Nvidia because it wants to maintain a good rapport with the chipmaker, with which it remains committed to working, especially for access to its new generation of Blackwell chips, sources added.
Nvidia declined to comment. (Reporting by Krystal Hu in New York, Fanny Potkin in Singapore, Stephen Nellis in San Francisco, additional reporting by Anna Tong and Max Cherney in San Francisco; Editing by Kenneth Li and David Gregorio)