The AI race is accelerating, but OpenAI isn’t planning to put Google’s in-house chips into its production systems anytime soon. Though the lab is running early tests with Google’s tensor processing units (TPUs), a spokesperson confirmed on Sunday that there are no active plans to deploy them at scale.
Most of OpenAI’s compute power still comes from Nvidia’s graphics processing units (GPUs) and AMD’s AI chips. Building the right hardware stack for large-scale AI workloads can take months or even years, demanding custom architecture, software tuning, and extensive reliability checks. That’s why many labs test multiple chip families before making big shifts.
Balancing Performance and Integration
Using new hardware in pilot phases is one thing; running global AI services on it is quite another. Scale introduces challenges around networking, cooling, and code compatibility. OpenAI has chosen to stick with tried-and-tested platforms while it evaluates TPUs alongside its existing GPU fleet.
Crafting Its Own Roadmap
Beyond external partnerships, OpenAI is racing to finalize its own custom AI chip. The company expects to reach the tape-out milestone later this year, marking a key step toward in-house silicon tailored for next-generation models.
A Multi-Cloud Approach
To meet skyrocketing demand, OpenAI has signed up for Google Cloud services and leans heavily on CoreWeave, a neocloud provider that operates GPU servers around the world. This multi-cloud, multi-hardware strategy reflects a broader industry trend: AI labs diversifying their compute suppliers to boost flexibility and scale.
What Comes Next?
Google has been opening its TPUs to external customers, drawing clients like Apple and startups founded by former OpenAI leaders. For now, OpenAI remains cautious, weighing the lure of cutting-edge chips against the realities of large-scale deployment. Will TPUs find a permanent home at OpenAI, or will the lab rely on its own silicon roadmap? Let us know your take in the comments.
Reference(s):
cgtn.com