Data Center Power Revolution: NVIDIA (NVDA.US) Finds a New Growth Point in 800V DC Partnership with ON Semiconductor (ON.US)
We've learned from Zhongtong Finance APP that ON Semiconductor (ON.US), a chip giant focused on the automotive and industrial sectors, saw its stock price surge more than 6% in Tuesday morning trading after the company announced a partnership with "AI chip leader" NVIDIA (NVDA.US) to accelerate development of an 800V DC power solution for next-generation artificial intelligence data centers.
The core of this transformation is a new power architecture that minimizes energy loss at every voltage-conversion stage. ON Semiconductor has emphasized that its intelligent power solutions are a crucial link in the power-delivery chain of next-generation AI data centers, enabling high-efficiency, high-power-density energy conversion at each stage.
As ChatGPT, Claude, DeepSeek, and other artificial intelligence applications sweep across the globe, the power demands of large-scale AI data centers have become increasingly massive. The growth of these power-hungry facilities is driven by the exponential expansion of AI chipsets and AI algorithm infrastructure, with the International Energy Agency (IEA) predicting that global data center power demand will more than double to around 945 TWh by 2030, exceeding Japan's total electricity consumption.
Global AI data center power consumption is rapidly transitioning from the traditional 20-30 kW per rack to 500 kW or even 1 MW levels. To carry multiples of today's current without overheating copper busbars, cables, and converters, the industry is shifting from 48V/400V DC to an 800V high-voltage direct current (HVDC) architecture.
The 800V DC plan jointly announced by ON Semiconductor and NVIDIA embodies this evolution. By raising the power supply voltage of data center racks from today's common 48V or 380/400V DC to 800V DC, ON Semiconductor's intelligent power solution can reduce energy loss by a factor of 10, significantly cut the amount of copper busbar, cabling, and converter hardware required, and raise overall efficiency by at least 5 percentage points.
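The physics behind these claims can be sketched with simple arithmetic: for a fixed delivered power P, raising the bus voltage V lowers the current I = P/V, and conduction loss in the copper scales as I²R. The sketch below is illustrative only; the busbar resistance value is an arbitrary assumption for comparison, not a real hardware spec.

```python
# Conduction-loss scaling when raising the DC distribution voltage.
# R_BUS is an arbitrary illustrative resistance, not a real busbar spec.
R_BUS = 0.001        # ohms (assumed, for comparison only)
P_RACK = 1_000_000   # 1 MW rack, the scale NVIDIA targets for 2027

def bus_current(power_w, voltage_v):
    """Current the busbar must carry at a given distribution voltage."""
    return power_w / voltage_v

def conduction_loss(power_w, voltage_v, resistance_ohm):
    """I^2 * R conduction loss in the distribution path."""
    i = bus_current(power_w, voltage_v)
    return i * i * resistance_ohm

i48 = bus_current(P_RACK, 48)     # ~20,833 A
i800 = bus_current(P_RACK, 800)   # 1,250 A
loss_ratio = (conduction_loss(P_RACK, 48, R_BUS)
              / conduction_loss(P_RACK, 800, R_BUS))

print(f"Current at 48 V:  {i48:,.0f} A")
print(f"Current at 800 V: {i800:,.0f} A")
print(f"Conduction-loss reduction: {loss_ratio:.0f}x")  # (800/48)^2, ~278x
```

Note the quadratic scaling: (800/48)² ≈ 278× is the idealized reduction in busbar conduction loss alone for the same conductor. The "factor of 10" figure ON Semiconductor cites refers to the full end-to-end conversion chain, where converter losses, not just I²R in copper, dominate.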
Currently, hyperscale cloud computing and most AI training/inference clusters rely on 48V DC (OCP Open Rack, etc.) or 380/400V DC parallel power architectures; 800V is a new generation of high-voltage direct current (HVDC) solution, being driven by NVIDIA, ON Semiconductor, and others for deployment in the 2025-2027 timeframe.
The mainstream power architecture today remains 48V, with some 380V DC; 800V is still in the early stages of adoption, targeting a future paradigm of 1 MW racks. A single NVIDIA AI GPU now approaches 1 kW, and fully configured NVIDIA NVL rack-scale AI servers easily exceed 100 kW; NVIDIA plans to roll out 1 MW "AI Factory" server clusters starting in 2027, which will require 800V-class high-voltage power delivery to keep conductor size and heat within practical limits.
For NVIDIA, the world's most valuable tech company and "AI chip leader," the partnership may soon add a new revenue stream: 800V DC power distribution is likely to become the backbone of future gigawatt-scale AI data centers. This means that, following AI GPUs, the InfiniBand high-performance data center networking line, and the automotive-grade NVIDIA DRIVE Thor SoC platform, NVIDIA stands to gain another extremely strong growth channel.
For ON Semiconductor, the company may supply 800V-class SiC MOSFETs, solid-state transformers, and high-density DC-DC modules as the core power electronics of the overall solution. By leveraging NVIDIA's uniquely scaled ecosystem platform, ON Semiconductor can lock in long-term, large-volume deals and strengthen its competitive position against rivals such as Infineon.