OpenAI CEO Sam Altman announced that the company is facing a GPU shortage, forcing it to roll out its latest model, GPT-4.5, in stages, as reported by TechCrunch. This hardware limitation is significantly impacting OpenAI’s ability to meet the growing demand for its AI services and delaying the release of new products.
OpenAI’s Plan for Custom AI Chips
OpenAI is actively developing custom AI chips to address the GPU shortage and reduce its reliance on Nvidia hardware. The company is collaborating with Broadcom to design an inference chip, which is expected to be manufactured by TSMC and released in 2026. This move aligns OpenAI with other tech giants like Amazon, Google, and Microsoft, which have also developed custom AI processors.
To support this initiative, OpenAI has assembled a team of about 20 engineers, including experts from Google’s Tensor Processing Unit (TPU) project. The company is also hiring for hardware/software co-design roles to work with vendors on future AI accelerators. Although OpenAI initially considered building its own network of chip fabrication plants (fabs), it has shelved that plan because of the high costs and long timelines involved. Instead, the focus remains on chip design and collaboration with established manufacturers to meet the growing demand for AI computation.
GPU Shortage Impacting GPT-4.5 Rollout
OpenAI’s rollout of GPT-4.5 has been significantly constrained by the GPU shortage, forcing the company to restrict initial access to its most expensive subscription tier. The new model, described by CEO Sam Altman as “giant” and “expensive,” requires substantial computational resources. OpenAI charges $75 per million input tokens and $150 per million output tokens—30 times and 15 times the respective prices of GPT-4o. To ease the shortage, Altman said OpenAI will add tens of thousands of GPUs the following week, gradually expanding access to ChatGPT Plus users and other tiers.
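To put those rates in perspective, the short sketch below works through the arithmetic for a single hypothetical request. The GPT-4.5 prices are the ones quoted above; the GPT-4o rates ($2.50 input / $10 output per million tokens) are the ones implied by the 30x and 15x multipliers, and the token counts are purely illustrative.

```python
# Illustrative cost arithmetic for the per-million-token prices cited above.
# GPT-4.5: $75 input / $150 output per 1M tokens (from the article).
# GPT-4o:  $2.50 input / $10 output per 1M tokens (implied by the 30x / 15x figures).

GPT45_INPUT_PER_M = 75.00
GPT45_OUTPUT_PER_M = 150.00
GPT4O_INPUT_PER_M = 2.50
GPT4O_OUTPUT_PER_M = 10.00

def request_cost(input_tokens: int, output_tokens: int,
                 in_rate: float, out_rate: float) -> float:
    """Cost in USD of one request, given per-million-token rates."""
    return (input_tokens / 1_000_000) * in_rate + (output_tokens / 1_000_000) * out_rate

# Hypothetical request: 10,000 input tokens, 2,000 output tokens.
print(f"GPT-4.5: ${request_cost(10_000, 2_000, GPT45_INPUT_PER_M, GPT45_OUTPUT_PER_M):.2f}")
print(f"GPT-4o:  ${request_cost(10_000, 2_000, GPT4O_INPUT_PER_M, GPT4O_OUTPUT_PER_M):.2f}")

# The multipliers cited in the article:
print(GPT45_INPUT_PER_M / GPT4O_INPUT_PER_M)    # 30.0x on input
print(GPT45_OUTPUT_PER_M / GPT4O_OUTPUT_PER_M)  # 15.0x on output
```

For the example request, GPT-4.5 comes out at roughly $1.05 versus a few cents for GPT-4o, which illustrates why OpenAI is gating access until more GPU capacity comes online.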
The GPU shortage highlights a broader challenge in the AI industry: demand for high-performance chips far outpaces supply. This scarcity has driven up costs and limited access to cutting-edge AI technologies, potentially stifling innovation and concentrating AI’s benefits among the largest, best-capitalized companies. In response, some companies are exploring alternatives, such as developing their own AI chips or optimizing existing hardware to reduce reliance on expensive GPUs.
TSMC Collaboration for AI Hardware
TSMC, a leading semiconductor manufacturer, has emerged as a key player in the AI chip manufacturing landscape. The company has formed strategic partnerships with major AI companies to address the growing demand for advanced AI hardware. Notably, OpenAI is collaborating with TSMC to develop its first generation of in-house AI chips, aiming to reduce its reliance on Nvidia and finalize the design in the coming months. This partnership leverages TSMC’s cutting-edge manufacturing processes, including 5nm and 3nm nodes, which are crucial for producing high-performance AI chips.
TSMC has also introduced technologies such as its A16 process and System-on-Wafer (TSMC-SoW) to further boost AI chip performance, while its Chip-on-Wafer-on-Substrate (CoWoS) packaging has been instrumental in enabling the AI revolution by fitting more processor cores and high-bandwidth memory stacks on a single interposer. TSMC’s collaborations extend beyond OpenAI, with partnerships involving other tech giants and AI-focused companies, positioning it as a central player in the evolving AI hardware ecosystem.