
800V DC Link: Powering the Next Generation of AI Data Centers


As artificial intelligence continues to scale at an unprecedented pace, one challenge is becoming impossible to ignore: power.



With high-performance GPUs such as the NVIDIA B300 driving rack densities to 100kW–300kW and full deployments exceeding 10MW, traditional data center power infrastructure is reaching its limits. To meet these demands, a new architecture is emerging as the industry standard: the 800V DC Link.


At Amaryllo, where we focus on high-performance infrastructure and modular data center innovation, this shift represents more than just an upgrade; it's a fundamental transformation in how AI data centers are designed and powered.


The Problem with Traditional Power Architecture

Most legacy data centers rely on an AC-based power flow:


Grid (AC) → UPS → PDU → Server → GPU


While widely adopted, this architecture introduces several inefficiencies:

  • Multiple AC/DC conversions lead to energy loss

  • Increased heat generation requires more cooling

  • Complex systems reduce scalability and flexibility

As AI workloads continue to grow, these inefficiencies translate directly into higher operational costs and physical limitations.


Introducing the 800V DC Link

The 800V DC Link simplifies power delivery by converting electricity once and distributing it as high-voltage DC:


Grid (AC) → AC/DC → 800V DC Bus → DC/DC → GPU


This streamlined approach is purpose-built for high-density AI environments.


Why 800V DC Is a Game Changer

Higher Efficiency


By reducing the number of power conversions, overall system efficiency can reach 96%–98%, significantly lowering energy waste.
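The efficiency gain comes from multiplying fewer conversion stages together. A minimal sketch, using hypothetical per-stage efficiencies (illustrative round numbers, not vendor specifications), shows how a legacy AC chain and an 800V DC chain compare end to end:

```python
# Hypothetical per-stage efficiencies -- illustrative values, not vendor specs.
LEGACY_STAGES = {          # Grid (AC) -> UPS -> PDU -> Server PSU -> board DC/DC
    "UPS (AC/DC/AC)": 0.94,
    "PDU/transformer": 0.98,
    "Server PSU (AC/DC)": 0.95,
    "Board DC/DC": 0.97,
}
DC_LINK_STAGES = {         # Grid (AC) -> AC/DC rectifier -> 800V bus -> rack DC/DC
    "AC/DC rectifier": 0.985,
    "800V DC bus": 0.995,
    "Rack DC/DC": 0.98,
}

def chain_efficiency(stages):
    """End-to-end efficiency is the product of each stage's efficiency."""
    eff = 1.0
    for stage_eff in stages.values():
        eff *= stage_eff
    return eff

legacy = chain_efficiency(LEGACY_STAGES)
dc_link = chain_efficiency(DC_LINK_STAGES)
print(f"Legacy AC chain: {legacy:.1%}")
print(f"800V DC chain:   {dc_link:.1%}")
```

With these assumed numbers, the legacy chain lands in the mid-80% range while the DC link chain lands in the 96%–98% band cited above; the exact figures depend on the real converters used, but the multiplicative effect of removing stages holds regardless.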


Reduced Heat and Power Loss

Higher voltage means lower current for the same power output. This results in:

  • Less heat generation

  • Reduced cable losses

  • Improved system reliability
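The heat and cable-loss advantage follows directly from Ohm's law: resistive loss scales with the square of current (P_loss = I²R), and current falls as voltage rises for the same delivered power (I = P/V). A short sketch with an assumed cable resistance (a hypothetical round number, purely for illustration) makes the effect concrete:

```python
# Same delivered power at two bus voltages: higher voltage -> lower current -> less I^2*R loss.
def cable_loss_w(power_w, voltage_v, resistance_ohm):
    """Resistive loss in a feed carrying power_w at voltage_v through resistance_ohm."""
    current_a = power_w / voltage_v          # I = P / V
    return current_a ** 2 * resistance_ohm  # P_loss = I^2 * R

RACK_POWER_W = 100_000  # 100 kW rack (low end of the densities discussed above)
R_CABLE_OHM = 0.002     # 2 milliohm feed -- assumed value for illustration

loss_415 = cable_loss_w(RACK_POWER_W, 415, R_CABLE_OHM)
loss_800 = cable_loss_w(RACK_POWER_W, 800, R_CABLE_OHM)
print(f"Loss at 415 V: {loss_415:,.1f} W")
print(f"Loss at 800 V: {loss_800:,.1f} W")
print(f"Reduction:     {1 - loss_800 / loss_415:.0%}")
```

Because loss scales with I², moving from 415V to 800V cuts resistive loss by the factor (415/800)², roughly a 73% reduction, independent of the particular cable resistance assumed.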


Simplified Infrastructure

The DC-based architecture:

  • Eliminates the need for traditional UPS systems

  • Enables direct integration with battery storage

  • Supports cleaner, modular system design


Enabling the Architecture: Advanced Power Engineering

The transition to 800V DC is made possible by advancements in power electronics and energy conversion systems. These technologies enable:

  • Stable AC to DC conversion at scale

  • Efficient DC bus distribution

  • Seamless integration with energy storage

  • Precision control for high-load environments


This is not just a component upgrade; it's a complete rethinking of power infrastructure for AI.


How It Works in a Modular Data Center (MDC)

In a modern AI modular data center, the architecture is typically divided into two key components:


Power Container – The Energy Core

This unit is responsible for:

  • Converting incoming grid power into 800V DC

  • Managing energy storage systems

  • Distributing power through a DC bus


Compute Container – The AI Engine

This unit contains:

  • High-density GPU servers

  • Rack-level DC/DC converters

  • Direct liquid cooling (DLC) systems


Together, these components create a highly efficient, scalable, and modular infrastructure designed for AI workloads.


A Practical Example: 10MW AI Deployment

In a 10MW modular data center, the power flow typically follows:

  1. Utility power enters at medium voltage (e.g., 22.8kV)

  2. Voltage is stepped down to 415V

  3. Converted into 800V DC

  4. Distributed via a centralized DC bus

  5. Delivered to compute containers

  6. Converted to lower voltages for GPU operation


Each rack can support 300–500kW, making this architecture ideal for next-generation AI applications.
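The steps above lend themselves to a quick back-of-envelope calculation. The sketch below uses assumed values (300kW per rack, a 96% end-to-end efficiency from the range quoted earlier) to estimate rack count, total bus current, and grid draw for a 10MW site:

```python
# Back-of-envelope sizing for the 10 MW flow above -- illustrative assumptions only.
SITE_POWER_W = 10_000_000   # 10 MW total IT load
BUS_VOLTAGE_V = 800         # 800 V DC distribution bus
RACK_POWER_W = 300_000      # 300 kW per rack (low end of the stated range)
END_TO_END_EFF = 0.96       # assumed overall efficiency (96-98% range quoted above)

racks = SITE_POWER_W // RACK_POWER_W          # racks the IT load supports
bus_current_a = SITE_POWER_W / BUS_VOLTAGE_V  # aggregate current on the DC bus
grid_draw_w = SITE_POWER_W / END_TO_END_EFF   # power drawn from the utility

print(f"Racks supported:   ~{racks}")
print(f"Total bus current: {bus_current_a:,.0f} A at {BUS_VOLTAGE_V} V DC")
print(f"Grid draw:         {grid_draw_w / 1e6:.2f} MW at {END_TO_END_EFF:.0%} efficiency")
```

Even a rough pass like this shows why distribution voltage matters at this scale: at 800V DC the aggregate bus carries on the order of 12,500A, a figure that would roughly double at a 415V distribution level.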


Why This Matters Now

AI is fundamentally changing infrastructure requirements. Power density is increasing faster than traditional systems can handle.


Legacy AC-based designs struggle with:

  • UPS inefficiencies

  • Complex distribution layers

  • Excessive heat generation


In contrast, 800V DC architecture offers:

  • Greater energy efficiency

  • Simplified deployment

  • Seamless scalability

  • Lower total cost of ownership


At Amaryllo, we see the 800V DC Link not just as an innovation, but as a necessary evolution for AI infrastructure. As modular data centers become the preferred approach for rapid deployment and scalability, integrating DC-based power systems will be critical to unlocking their full potential.


The future of AI data centers will not be built on legacy power systems. For organizations planning 10MW+ AI deployments, the decision is clear: design for efficiency from the start, and build with DC, not legacy AC.

