Part 2 of 9

Market Demand Foundation

AI model training niche, regional competition, and latency-tolerant workload targeting.

Key Metrics

Seattle Latency: 20ms
Target Workload: AI Training

[Chart: AI Compute Demand Growth, Flathead Valley vs Industry Average (MW)]
[Chart: Rack Density Requirements, Power per Rack (kW); AI workloads demand 4-6x higher rack density than legacy infrastructure]

Target Workload: AI Model Training & HPC

GlacierScale targets AI Model Training and High-Performance Computing (HPC) workloads. These applications draw massive, constant power (100% duty cycle) and tolerate latency, a profile well matched to Kalispell's industrial rate structure, which rewards high utilization.

Why Not General Cloud?

Latency to major population centers (Seattle ~20ms, Denver ~30ms, Salt Lake City ~25ms) is acceptable for async workloads but inferior to San Antonio for real-time applications serving the Texas Triangle. This is a precision play, not a general-purpose data center.
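The latency figures above can be sanity-checked against the physical floor set by fiber propagation. This is a minimal sketch; the great-circle distances from Kalispell and the route-inflation factor are assumptions for illustration, and measured latencies (e.g., ~20ms to Seattle) sit above this floor because of routing, switching, and equipment delay.

```python
# Lower-bound RTT estimate from fiber propagation speed (hypothetical inputs).
C_FIBER_KM_PER_MS = 204.19  # speed of light in fiber, ~c / 1.468
ROUTE_FACTOR = 1.5          # assumed fiber-path inflation over great-circle distance

def rtt_ms(great_circle_km, route_factor=ROUTE_FACTOR):
    """Round-trip propagation time in milliseconds (excludes hop/queuing delay)."""
    return 2 * great_circle_km * route_factor / C_FIBER_KM_PER_MS

# Approximate great-circle distances from Kalispell, MT (assumed values):
for city, km in {"Seattle": 600, "Salt Lake City": 850, "Denver": 1200}.items():
    print(f"{city}: >= {rtt_ms(km):.1f} ms RTT (propagation only)")
```

Even the propagation floor shows why these sites suit async training traffic rather than latency-sensitive serving: distance alone rules out single-digit-millisecond service to distant metros.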

The Training Niche Advantage

AI training clusters require massive, constant power loads and are latency-tolerant. Kalispell's industrial rate structure creates a "flat load" incentive, rewarding the high utilization typical of AI training, whereas Texas exposes operators to ERCOT scarcity-pricing volatility during heat waves.
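The volatility risk can be made concrete with a back-of-envelope comparison. All figures below (load size, tariff rates, scarcity price, and spike hours) are illustrative assumptions, not quoted rates:

```python
# Hypothetical annual energy cost for a constant (100% duty-cycle) AI-training
# load: flat industrial tariff vs. spot market with scarcity-price spikes.

LOAD_MW = 10   # assumed constant training-cluster load
HOURS = 8760   # hours per year

def annual_cost_flat(rate_per_mwh):
    """Cost under a flat industrial tariff."""
    return LOAD_MW * HOURS * rate_per_mwh

def annual_cost_spot(base_per_mwh, scarcity_per_mwh, scarcity_hours):
    """Cost under a spot market that spikes to a scarcity price for some hours."""
    return (LOAD_MW * (HOURS - scarcity_hours) * base_per_mwh
            + LOAD_MW * scarcity_hours * scarcity_per_mwh)

flat = annual_cost_flat(55)             # assumed $55/MWh flat rate
spot = annual_cost_spot(40, 2000, 100)  # assumed $40/MWh base, $2,000/MWh for 100 hours
print(f"Flat tariff: ${flat:,.0f}  Spot with scarcity: ${spot:,.0f}")
```

Under these assumed numbers, a spot market with a lower average price still costs more for a flat load, because a training cluster cannot curtail during the 100 scarcity-priced hours.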

Regional Competition

The region is seeing interest from hyperscalers, but capacity is limited. Unlike the 1GW+ deals in Yellowstone County (NorthWestern Energy territory), the Flathead Valley is constrained by transmission capacity into the Buckley substation, making GlacierScale a boutique, high-efficiency play rather than a gigawatt-scale campus.

Density Requirements

AI Training workloads require 50kW to 100kW per rack for massive synchronous clusters. AI Inference (production layer) requires 20-40kW/rack. Maximum efficient density with standard air cooling is only 15-20 kW/rack, necessitating liquid cooling solutions.
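These density figures translate directly into rack counts and white-space footprint. A minimal sketch, using the densities cited above; the 5 MW critical IT load is an assumption for illustration:

```python
# Back-of-envelope rack counts for a hypothetical critical IT load at the
# densities cited above.

DENSITIES_KW = {
    "air-cooled ceiling": 20,   # max efficient density with standard air cooling
    "AI inference":       40,   # upper end of the 20-40 kW/rack range
    "AI training":       100,   # upper end of the 50-100 kW/rack range
}
IT_LOAD_MW = 5  # assumed critical IT load

for label, kw_per_rack in DENSITIES_KW.items():
    racks = IT_LOAD_MW * 1000 / kw_per_rack
    print(f"{label:>18}: {racks:.0f} racks at {kw_per_rack} kW/rack")
```

The same 5 MW fits in 50 liquid-cooled training racks versus 250 air-cooled racks, which is why liquid cooling, not floor area, is the binding design decision for a training facility.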