OpenAI, Broadcom plan major foray into custom AI computing systems
OpenAI and Broadcom will co-design custom AI chips and systems to boost efficiency and performance, marking the ChatGPT maker’s deeper move into hardware.
OpenAI and Broadcom have announced a partnership to design and deploy large quantities of custom computing hardware built specifically for running artificial intelligence (AI).
The companies said they will begin rolling out racks of the new machines in the second half of 2026 and aim to complete the programme by the end of 2029. The project will include both new types of processors and the networking equipment needed to connect them.
The collaboration will focus on designing custom AI accelerators rather than manufacturing them directly. OpenAI will create the designs and define how the chips should handle the kinds of calculations used in training and running its models.
Broadcom will then develop and engineer the chips and systems, working with external manufacturers to produce them. Broadcom will also supply the networking and connectivity components that allow many of these processors to work together at massive scale.
“Partnering with Broadcom is a critical step in building the infrastructure needed to unlock AI’s potential and deliver real benefits for people and businesses,” said Sam Altman, Co-founder and CEO of OpenAI. “Developing our own accelerators adds to the broader ecosystem of partners all building the capacity required to push the frontier of AI to provide benefits to all humanity.”
This arrangement follows a common pattern in the tech industry, where companies such as Google, Amazon and Meta design their own chips but rely on specialist foundries, such as TSMC, to build them. It lets the designers focus on performance and efficiency while leaving the complex process of physical chip production to established manufacturers.
Tailoring hardware
By designing its own chips, OpenAI hopes to create hardware that is better suited to its models than general-purpose graphics processors, which are widely used across the industry.
Custom accelerators can make AI systems run faster and use less energy, while also helping to manage costs as demand for computing power grows. Broadcom’s expertise in networking will also help ensure that these chips can communicate efficiently when working together in large clusters.
“Our collaboration with Broadcom will power breakthroughs in AI and bring the technology’s full potential closer to reality. By building our own chip, we can embed what we’ve learned from creating frontier models and products directly into the hardware, unlocking new levels of capability and intelligence,” OpenAI Co-founder and President Greg Brockman noted.
Several major tech firms have taken similar steps. Google developed its Tensor Processing Units (TPUs), which it now uses across its cloud services. Amazon Web Services (AWS) designed its Trainium and Inferentia chips to improve performance and lower costs for customers running AI workloads. Meta has been building its own training and inference chips to handle large-scale AI systems more efficiently.
These efforts reflect a broader industry shift towards tailoring hardware to specific AI needs rather than relying solely on off-the-shelf products.
AI development
When companies design their own hardware, they can align it more closely with their software, which can lead to gains in speed, energy efficiency and overall capability. It also gives them more control over their computing resources at a time when global demand for AI chips continues to outstrip supply.
For the wider industry, this trend means more diversity in hardware options and potentially new competition for large established suppliers such as NVIDIA.
OpenAI’s decision to design custom accelerators at such a scale is significant because of its central role in the AI ecosystem. As one of the largest users of computing power in the world, its choices can influence prices, supply chains and design priorities across the market.
By working with Broadcom, OpenAI is signalling a long-term commitment to shaping both the software and the hardware that underpin its AI systems.
The collaboration could make running large AI models faster and more affordable, benefiting OpenAI and potentially its users and partners. However, developing new chips and systems is complex, costly and time-consuming.
Charlie Kawwas, President of Broadcom’s Semiconductor Solutions Group, said the project would combine “custom accelerators with standards-based Ethernet scale-up and scale-out networking solutions to provide cost and performance optimised next generation AI infrastructure.”
Hock Tan, Broadcom’s President and CEO, described the deal as a landmark in the race toward more general forms of AI, saying, “OpenAI has been at the forefront of the AI revolution since the ChatGPT moment, and we are thrilled to co-develop and deploy 10 gigawatts of next generation accelerators and network systems to pave the way for the future of AI.”
The first deployments are planned for 2026, and the industry will watch closely to see how the performance compares with existing GPU-based systems.
Edited by Megha Reddy