LAYER 01
POWER ARCHITECTURE
AC Multi-Hop
415V / 480V 3-Phase AC
Grid AC → UPS (AC→DC→AC) → PDU → PSU (AC→DC) → VRM → chip. 4–5 conversions. 10–20% cumulative conversion loss.
→
800V HVDC
Single conversion at facility perimeter
Grid AC → rectifier (one conversion to 800V DC) → busway → DC-DC at rack → chip. Eliminates the in-rack PSU stage. ~2–5% losses vs 10–20% (worked example below).
INCOMING
−15pp loss
Rubin Ultra
2027
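A quick way to sanity-check the −15pp figure: compound per-stage efficiencies along each chain. A minimal sketch in Python, with illustrative stage efficiencies chosen to fall inside the loss ranges above (none are measured values):

```python
from math import prod

# Illustrative per-stage efficiencies (assumptions, not measured data).
ac_multi_hop = {
    "UPS (AC->DC->AC)": 0.92,
    "PDU": 0.99,
    "PSU (AC->DC)": 0.93,
    "VRM (DC->DC)": 0.97,
}
hvdc_800v = {
    "Perimeter rectifier (AC -> 800V DC)": 0.985,
    "Rack DC-DC": 0.98,
}

def chain_loss(stages: dict[str, float]) -> float:
    """End-to-end loss = 1 - product of per-stage efficiencies."""
    return 1 - prod(stages.values())

for name, stages in (("AC multi-hop", ac_multi_hop), ("800V HVDC", hvdc_800v)):
    print(f"{name}: {chain_loss(stages):.1%} lost grid-to-chip")
```

With these assumptions the two chains lose ~17.8% and ~3.5% respectively, roughly the 15-point gap on the card.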
Magnetic Core
Passive electromagnetic induction
Heavy, slow, AC-only. Multiple step-down stages (HV → MV → LV). Passive, no digital control. Can’t do bidirectional flow.
→
Solid State Transformers (SST)
SiC / GaN high-frequency switching
30–50% smaller. Bidirectional DC flow. Digital control. Supports DER sell-back to the grid. 150% more power through existing conductors (core-size sketch below).
PILOT
−45% copper
Amperesand
SolarEdge
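The size claim follows from the transformer EMF equation, V_rms = 4.44 · f · N · A_c · B_max: for a fixed voltage, the required core cross-section shrinks as switching frequency rises. A sketch with assumed turn counts and flux densities (high-frequency cores run lower B_max to limit core loss; none of these values come from the card):

```python
# Transformer EMF equation: V_rms = 4.44 * f * N * A_c * B_max
# => minimum core cross-section A_c = V_rms / (4.44 * f * N * B_max).

def core_area_m2(v_rms: float, f_hz: float, turns: int, b_max_t: float) -> float:
    """Minimum core cross-section (m^2) from the EMF equation."""
    return v_rms / (4.44 * f_hz * turns * b_max_t)

V = 415.0
legacy = core_area_m2(V, f_hz=50, turns=100, b_max_t=1.6)   # silicon-steel core
sst = core_area_m2(V, f_hz=20_000, turns=20, b_max_t=0.3)   # ferrite/nanocrystalline

print(f"50 Hz core:  {legacy * 1e4:.1f} cm^2")
print(f"20 kHz core: {sst * 1e4:.2f} cm^2 ({legacy / sst:.0f}x smaller cross-section)")
```

The system-level 30–50% shrink is smaller than the raw core ratio because the power electronics, cooling, and insulation don't scale down with frequency.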
VRLA UPS
Valve Regulated Lead Acid
10–20ms response. 200–500 cycles. 5–15min backup. Can't handle sub-millisecond GPU power spikes (up to 3× nominal draw within 1ms).
→
Supercapacitors + LFP BESS
Microsecond response + MWh-scale storage
Supercaps: <1ms response, 1M+ cycles, spike smoothing, demand-charge mitigation (sizing sketch below). LFP BESS: 95%+ efficiency, peak shaving, grid demand-response revenue.
PILOT
1M+ cycles
OCP 2025
demo done
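Sizing a supercap bank to ride through a spike is straightforward: bridge energy E = ΔP · t, then capacitance from the usable-energy relation E = ½ C (V₁² − V₂²). A sketch with assumed rack power and bus voltages (only the 3× / 1ms spike shape comes from the card):

```python
# Supercap sizing sketch: absorb a GPU power spike while the upstream supply ramps.
# Assumed scenario: 100 kW rack spiking to 3x nominal for 1 ms, on a 50 V
# in-rack bus allowed to sag to 45 V during discharge.
P_nominal_w = 100_000.0
spike_multiple = 3.0
spike_s = 1e-3
v_start, v_end = 50.0, 45.0

excess_power_w = P_nominal_w * (spike_multiple - 1)   # 200 kW to bridge
energy_j = excess_power_w * spike_s                   # E = P * t

# Usable energy between two voltages: E = 0.5 * C * (V1^2 - V2^2)
capacitance_f = 2 * energy_j / (v_start**2 - v_end**2)
peak_current_a = excess_power_w / v_end               # worst case at the sagged voltage

print(f"Energy to bridge:     {energy_j:.0f} J")
print(f"Required capacitance: {capacitance_f:.2f} F")
print(f"Peak discharge:       {peak_current_a:.0f} A")
```

Under 1 F covers the energy; the hard part is the multi-kA discharge current, which is why the banks sit electrically close to the load.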
PSU-Based AC Rack
10–50kW density, thick copper busbars
An AC-to-DC PSU inside each rack. Up to 200kg of copper busbar per rack. Dense cabling. Air cooling at best. PSU fans consume significant power.
→
PSU-Free 800V DC Rack
900kW+ density, SiC/GaN DC-DC conversion
PSU eliminated → 60% more compute space per rack. High-temp steel enclosure. Seismic bracing. Liquid-cooled busbars (busbar math below). >98% efficient DC-DC conversion.
INCOMING
+60% space
SuperMicro
Kyber rack
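Why the busbars change: dissipation per metre scales with I², so raising in-rack distribution from 48V-class to 800V cuts current ~17× and resistive heat ~280×. A sketch for a 900kW rack with an assumed 100mm² copper bar:

```python
# Busbar current and I^2*R heat per metre for a 900 kW rack.
# Conductor cross-section is an assumed example, not a Kyber spec.
RHO_CU = 1.68e-8  # copper resistivity, ohm*m

def busbar(power_w: float, volts: float, area_m2: float) -> tuple[float, float]:
    """Return (current in A, dissipation in W per metre of busbar)."""
    current = power_w / volts
    return current, current**2 * (RHO_CU / area_m2)

area = 100e-6  # 100 mm^2 copper cross-section (assumption)
for volts in (48.0, 800.0):
    amps, heat = busbar(900_000, volts, area)
    print(f"{volts:>5.0f} V bus: {amps:>8.0f} A, {heat:>10.0f} W/m dissipated")
```

Even at 800V the bar sheds ~200 W per metre at full load, hence liquid-cooled busbars.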
Air + Chilled Water
CRAC units, ~15–20kW rack limit
Air cooling physically maxes out at ~20kW/rack. Chillers run ~$2M/MW and account for 15–20% of datacenter capex. Each deployment still feels “custom.”
→
Direct-to-Chip (D2C) Liquid
45°C inlet, no chillers required (Nvidia RA)
Cold plates on CPUs/GPUs. 45°C warm water eliminates chiller cost. Row-based CDUs integrate into the rack row. 20–30% cooling energy reduction (flow-rate sketch below). Cooling caused 13% of datacenter failures in 2024.
LIVE
−$2M/MW
H200/B200
standard
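The flow side is simple heat balance: ṁ = Q / (c_p · ΔT). A sketch with an assumed 130kW rack and a 10K loop rise on the 45°C inlet (both values illustrative):

```python
# Water flow needed for direct-to-chip cooling: m_dot = Q / (c_p * dT).
C_P_WATER = 4186.0  # J/(kg*K)
RHO_WATER = 1000.0  # kg/m^3, close enough at these temperatures

def flow_lpm(heat_w: float, dt_k: float) -> float:
    """Litres per minute of water to remove heat_w with a dt_k temperature rise."""
    kg_per_s = heat_w / (C_P_WATER * dt_k)
    return kg_per_s / RHO_WATER * 1000 * 60

# 45 C inlet, 55 C return across an assumed 130 kW liquid-cooled rack
print(f"{flow_lpm(130_000, 10):.0f} L/min")  # ~186 L/min
```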
Legacy DCIM
Static dashboards, manual workflows
Built for low-density static environments. Multi-vendor protocol fragmentation. Alert fatigue. No GPU-native telemetry. Can’t support lender collateral tracking.
→
GPU-Native Real-Time DCIM
Sub-second telemetry, API-first, lender-grade
Asset API normalizes telemetry across all vendors. Sub-second alerts. GPU wear telemetry for collateral underwriting. SaaS delivery. Maintenance graph tracks failure rates vs. run parameters (normalization sketch below).
LIVE
GPU wear
Aravolta
Crucible
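What the normalization layer does, in miniature: map each vendor's payload onto one schema, then run alert rules against it. Everything here is a hypothetical illustration; the field names, vendor formats, and wear rule are not Aravolta or Crucible APIs:

```python
# Hypothetical sketch of a vendor-agnostic GPU telemetry layer.
from dataclasses import dataclass
import time

@dataclass
class GpuSample:
    asset_id: str
    timestamp: float
    power_w: float
    hotspot_c: float

def normalize(vendor: str, raw: dict) -> GpuSample:
    """Map per-vendor payloads onto one schema (field names are invented)."""
    if vendor == "vendor_a":
        return GpuSample(raw["serial"], raw["ts"], raw["pwr_watts"], raw["tj_c"])
    if vendor == "vendor_b":  # this vendor reports power in milliwatts
        return GpuSample(raw["id"], raw["time"], raw["power"] / 1000, raw["temp"])
    raise ValueError(f"unknown vendor: {vendor}")

def wear_alert(s: GpuSample, hotspot_limit_c: float = 90.0) -> str | None:
    """Toy alert rule: flag hotspot excursions that accelerate silicon wear."""
    if s.hotspot_c > hotspot_limit_c:
        return f"{s.asset_id}: hotspot {s.hotspot_c:.0f}C > {hotspot_limit_c:.0f}C limit"
    return None

sample = normalize("vendor_b", {"id": "gpu-042", "time": time.time(),
                                "power": 712_000, "temp": 93.5})
print(wear_alert(sample))  # gpu-042: hotspot 94C > 90C limit
```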