Semiconductor giant Broadcom Inc. has unveiled the Thor Ultra networking chip, a new processor designed to help companies build and scale artificial intelligence (AI) computing systems. The launch marks a major step in Broadcom’s efforts to challenge Nvidia’s dominance in AI networking and data center infrastructure.
The Thor Ultra chip allows data center operators to connect hundreds of thousands of AI processors, making it possible to train and deploy massive machine learning models that power advanced AI applications like ChatGPT.
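How does a networking chip translate into clusters of that size? The scale comes from the multi-tier Ethernet (Clos) fabric that NICs like the Thor Ultra plug into, where each added switch tier multiplies the number of reachable endpoints. The sketch below illustrates the arithmetic; the switch radix and tier count are assumptions chosen for illustration, not Broadcom-published figures.

```python
# Rough back-of-envelope for how a multi-tier Clos (leaf-spine) fabric
# scales to hundreds of thousands of endpoints. The numbers below are
# illustrative assumptions, not Broadcom specifications.

def clos_endpoints(radix: int, tiers: int) -> int:
    """Maximum endpoints in a non-blocking folded Clos, given the switch
    radix (ports per switch) and the number of switch tiers."""
    # Half the ports on every switch face "up" toward the next tier,
    # so each added tier multiplies capacity by radix / 2.
    return 2 * (radix // 2) ** tiers

# Assume 128-port switches (hypothetical) arranged in a 3-tier fabric.
print(clos_endpoints(radix=128, tiers=3))  # 524288 endpoints
```

With those assumed values, a three-tier fabric already reaches roughly half a million endpoints, which is the order of magnitude the article describes.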
Broadcom Expands AI Infrastructure Ambitions
The launch of the Thor Ultra deepens Broadcom’s growing presence in the AI hardware ecosystem, an area where Nvidia has long led through its high-speed networking and GPU technologies. The new chip will compete directly with Nvidia’s network interface controllers (NICs) and is expected to strengthen Broadcom’s position in AI data center networking.
The announcement comes just a day after Broadcom revealed a landmark deal to supply 10 gigawatts of custom AI chips to OpenAI, the developer of ChatGPT. The rollout for this large-scale deployment is scheduled to begin in the second half of 2026, signaling Broadcom’s rising influence in the AI accelerator market.
Thor Ultra: Doubling Bandwidth and Boosting AI Network Efficiency
According to Broadcom, the Thor Ultra chip doubles the bandwidth compared with its predecessor, delivering faster data transfer speeds across distributed AI systems. This leap in performance helps enterprises build large, high-speed AI clusters that can handle the enormous data loads required for training advanced models.
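To put the doubling in concrete terms, the minimal sketch below shows how per-NIC bandwidth translates into the time needed to move a fixed volume of data between nodes. The 400 and 800 Gb/s line rates are assumptions for the example, not confirmed Thor Ultra specifications, and the calculation ignores protocol overhead and congestion.

```python
# Illustrative only: doubling per-NIC bandwidth halves the time to move
# a fixed amount of data. The 400/800 Gb/s figures are assumed for the
# example, not quoted Broadcom specifications.

def transfer_time_seconds(data_gigabytes: float, link_gbps: float) -> float:
    """Time to move `data_gigabytes` over a link running at `link_gbps`
    (gigabits per second), ignoring protocol overhead and congestion."""
    data_gigabits = data_gigabytes * 8
    return data_gigabits / link_gbps

payload_gb = 1_000  # e.g., one large parameter/gradient exchange per node
for nic_gbps in (400, 800):  # assumed previous-generation vs. doubled rate
    t = transfer_time_seconds(payload_gb, nic_gbps)
    print(f"{nic_gbps} Gb/s NIC: {t:.1f} s per {payload_gb} GB transfer")
```

Across thousands of nodes exchanging data many times per training step, halving per-transfer time compounds into substantially shorter training runs, which is why NIC bandwidth matters at cluster scale.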
“In distributed computing systems, the network plays an extremely important role in building these large clusters,” said Ram Velaga, Senior Vice President at Broadcom. “It’s no surprise that anyone in the GPU business wants to be deeply involved in networking as well.”
The Thor Ultra serves as the backbone connecting AI systems to the rest of the data center infrastructure, ensuring seamless communication between GPUs, CPUs, and storage systems.
Broadcom’s Growing AI Market Opportunity
AI continues to be one of the fastest-growing business segments for Broadcom. Chief Executive Officer Hock Tan had earlier estimated that the total market opportunity for Broadcom’s AI-focused chips — including both networking solutions and custom data center processors — could reach between $60 billion and $90 billion by 2027.
In fiscal 2024, Broadcom generated $12.2 billion in AI-related revenue, driven largely by demand for its networking and custom chips. In September, the company also announced a $10 billion deal with an unnamed hyperscaler for data center AI processors, highlighting growing interest from major tech players.
A Longstanding Partnership with Google
Broadcom has a strong history in custom AI chip design, particularly through its long-term collaboration with Alphabet’s Google. The semiconductor firm has worked on multiple generations of Google’s Tensor Processing Units (TPUs) — specialized chips designed to accelerate machine learning workloads. These TPUs have reportedly generated billions of dollars in revenue for Broadcom over the years.
Inside Broadcom’s Chip Design and Testing Labs
During a recent visit to Broadcom’s San Jose chip-testing labs, company executives shared insights into the extensive engineering process behind the Thor Ultra and the company’s flagship Tomahawk networking switch series. Engineers have designed the new chip to deliver higher performance, better energy efficiency, and improved heat management.
Before production, every chip undergoes rigorous testing and evaluation to ensure reliability in high-performance computing environments. Engineers collaborate closely with hardware system designers to finalize the chip’s packaging, power requirements, and cooling design.
“For every dollar we invest in our silicon, our ecosystem partners invest six to ten dollars,” Velaga explained. “That’s why our focus remains on advanced chip design — making sure our products are ready for seamless integration into customer systems.”
A New Phase in the AI Chip Rivalry
While Nvidia remains the dominant force in AI accelerators and networking chips, Broadcom’s Thor Ultra aims to carve a strong position in data center networking — a critical layer that connects and powers distributed AI infrastructure.
Broadcom does not manufacture or sell servers directly. Instead, it provides reference system designs and component blueprints to customers, allowing cloud providers and data center operators to build customized networking setups around Broadcom’s chips.
With its Thor Ultra networking processor and expanding portfolio of AI-centric semiconductor solutions, Broadcom is positioning itself as a formidable player in the next generation of AI infrastructure, intensifying competition with Nvidia and reshaping the global data center landscape.
