Crusoe

  • What it is: Crusoe is the industry's first vertically integrated, purpose-built AI infrastructure provider, offering reliable, cost-effective, environmentally aligned cloud platforms powered by clean energy and AI-optimized data centers.
  • Rating: 88/100 (Very Good)
  • Expert's conclusion: Crusoe is best suited for large-scale GPU-based AI factories that prioritize performance, cost savings, and sustainability over the versatility of a general-purpose cloud provider.
Reviewed by Maxim Manylov · Web3 Engineer & Serial Founder

What Is Crusoe and What Does It Do?

Crusoe builds sustainable AI infrastructure with an energy-first approach, converting excess natural gas and renewable energy into power for high-performance computing and AI applications. The company develops and operates hyperscale AI data centers and offers Crusoe Cloud, a platform built to support AI development and deployment at scale. As a vertically integrated organization, Crusoe lets customers build AI solutions at scale with reliable performance on low-carbon energy.

Active
📍Denver, CO
📅Founded 2018
🏢Private
TARGET SEGMENTS
AI Developers · Hyperscalers · High-Performance Computing · Enterprise AI

What Are Crusoe's Key Business Metrics?

📊
1.8 GW (Wyoming)
Data Center Capacity (announced)
📊
$1B+ Series E (2025)
Funding Raised
📊
99.98%
Uptime
📊
Up to 81%
Cost Savings
👥
100%
Customer Satisfaction

How Credible and Trustworthy Is Crusoe?

88/100
Excellent

A well-capitalized private company with a track record of sustainable AI infrastructure, hyperscale deployments, and significant enterprise partnerships with OpenAI and Oracle.

Product Maturity85/100
Company Stability92/100
Security & Compliance80/100
User Reviews90/100
Transparency85/100
Support Quality95/100
  • Partners with OpenAI and Oracle
  • $1B+ Series E funding (2025)
  • 1.8 GW data center announced
  • 99.98% uptime SLA
  • 100% customer satisfaction
  • Renewable-powered Iceland facility

What is the history of Crusoe and its key milestones?

2018

Company Founded

Crusoe was founded by Chase Lochmiller and Cully Cavness and is based on the concept of implementing an energy-first approach to utilize excess natural gas for high-performance computing.

2019

Series A Funding

Funding from Series A investment established the financial base to execute the mission of providing sustainable computing.

2021

Series B Funding

$128M in Series B funding was used to expand Digital Flare Mitigation technology throughout the United States.

2022

Series C Funding & Cloud Launch

$350M in Series C funding accelerated expansion; Crusoe Cloud operations began in mid-2022 with an initial A100 deployment in Montana.

2023

European Expansion

Launched the sustainable geothermal and hydro-powered Crusoe Cloud in Iceland and published its first Environmental Social Governance report.

2025

Series E & Major Announcements

$1B+ in Series E funding; launched Crusoe Managed Inference with MemoryAlloy; acquired Atero; announced plans to build a 1.8 GW data center in Wyoming.

What Are the Key Features of Crusoe?

NVIDIA & AMD GPU Compute
Optimized for high-performance compute to run AI workloads and supports the latest GPU architectures.
Accelerated Storage
Designed high-throughput storage systems to support demanding AI training and inference workloads.
Optimized RDMA Networking
Provides low-latency RDMA fabric which allows models to be deployed up to 20x faster.
Managed Kubernetes & Slurm
Fully-managed orchestration to eliminate operational overhead for running AI clusters.
Crusoe AutoClusters
Auto-scale clusters with fault tolerance to provide a resilient infrastructure for production AI.
Crusoe Managed Inference
Proprietary inference engine with MemoryAlloy technology to optimize GPU memory usage.
99.98% Uptime SLA
Ensures enterprise-grade reliability with a resilient multi-region infrastructure.
Sustainable Energy
Supported by renewable sources, including hydroelectric power, geothermal heat and converted flare natural gas.

What Technology Stack and Infrastructure Does Crusoe Use?

Infrastructure

Hyperscale AI data centers with renewable-powered facilities in US (MT, VA, TX, WY) and Iceland; containerized and modular deployments

Technologies

NVIDIA GPUs · AMD GPUs · Kubernetes · Slurm · RDMA Networking

Integrations

AI Frameworks · HPC Workloads · Cloud APIs

AI/ML Capabilities

Purpose-built for AI training/inference with proprietary MemoryAlloy GPU optimization, managed inference services, and AutoClusters for large-scale model deployment up to 20x faster

Based on official website product descriptions and company announcements

What Are the Best Use Cases for Crusoe?

AI Research Labs
Trains next-generation frontier models on massive GPU clusters, up to 20x faster than traditional cloud environments and at up to 81% lower cost.
Enterprise AI Teams
Runs production inference at scale through managed services, backed by a 99.98% uptime SLA, 24/7 support, and MemoryAlloy optimization.
HPC Organizations
Utilizes high-performance computing resources powered by sustainable energy and optimized RDMA networking for scientific simulation workloads.
Hyperscalers
Provides wholesale capacity to build an "AI Factory" for large-scale deployments of AI applications to other companies, such as OpenAI and Oracle through partnerships.
NOT FOR: Individual Developers
Per-hour GPU pricing is lower than most public clouds, but Crusoe is designed as an enterprise-scale solution without a simple self-serve sign-up, which limits access for individuals looking for a cost-effective way to get GPU resources.
NOT FOR: Traditional Web Applications
The platform is built for GPU-intensive AI and high-performance computing (HPC) workloads, not average web or cloud workloads. It can run applications that need no GPUs, but it will not match the price-performance of general-purpose clouds for them.

What APIs and Integrations Does Crusoe Support?

API Type
REST API available through docs.crusoecloud.com for VM management and cloud services
Authentication
Standard cloud API authentication methods including API keys and IAM roles (inferred from enterprise cloud platform)
Webhooks
No public information on webhook support
SDKs
No official SDKs found on GitHub or developer portal; Terraform and Kubernetes integrations available
Documentation
Comprehensive docs at docs.crusoecloud.com covering VM specs, networking, and compute APIs
Sandbox
No dedicated sandbox; trial access through sales contact for new accounts
SLA
99.98% uptime guaranteed with 24/7 enterprise support and 100% customer satisfaction
Rate Limits
Not publicly documented; enterprise-level quotas apply
Use Cases
Programmatic VM provisioning, GPU cluster management, AutoClusters orchestration, Command Center observability
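For a sense of what programmatic provisioning might look like, here is a minimal sketch of building a create-VM request. The base URL, endpoint path, payload fields, and auth header below are assumptions for illustration, not Crusoe's actual API contract; consult docs.crusoecloud.com before relying on any of them.

```python
import json

# Hypothetical request builder for a REST API like the one documented
# at docs.crusoecloud.com. Every URL and field name here is an ASSUMPTION.
API_BASE = "https://api.crusoecloud.com/v1"  # assumed base URL

def build_vm_request(project_id: str, name: str, instance_type: str,
                     image: str, api_key: str) -> dict:
    """Assemble the URL, headers, and JSON body for a create-VM call."""
    return {
        "url": f"{API_BASE}/projects/{project_id}/vms",
        "headers": {
            "Authorization": f"Bearer {api_key}",  # API-key auth (assumed)
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "name": name,
            "type": instance_type,  # e.g. an l40s-48gb.1x instance type
            "image": image,
        }),
    }

req = build_vm_request("my-project", "train-node-0",
                       "l40s-48gb.1x", "ubuntu22.04", "example-key")
# A real client would POST req["url"] with these headers and body
# (e.g. via urllib.request), or manage the VM through Terraform instead.
```

In practice the documented Terraform provider and Kubernetes integrations are likely the sturdier automation path, since no official SDKs were found.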

What Are Common Questions About Crusoe?

Which GPUs does Crusoe support?

Crusoe supports NVIDIA H100, H200, A100 (40GB / 80GB), L40S, A40, GB200, and B200, plus AMD MI300X GPUs. Customers can select configurations ranging from one or two GPUs up to eight-GPU cluster nodes, with a total of 946K GPUs supported across all Crusoe data centers.

How much does Crusoe cost?

On-demand pricing begins at $0.90 per GPU-hour for the A40 and $1.45 for the A100 PCIe, rising to $3.90 per GPU-hour for the H100. Spot instances are discounted (e.g., $1.00 for the A100 and $0.50 for the L40S), and reserved pricing is available for customers that commit to usage over time.
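These rates make cost projections straightforward arithmetic. A minimal sketch using the on-demand figures above (the training-run parameters are illustrative):

```python
# On-demand rates in $/GPU-hour, as quoted above.
RATES = {"A40": 0.90, "A100-PCIe": 1.45, "H100": 3.90}

def run_cost(gpu: str, num_gpus: int, hours: float) -> float:
    """Total on-demand cost of holding num_gpus GPUs for hours."""
    return RATES[gpu] * num_gpus * hours

# Example: an 8x H100 node held for a 72-hour training run.
print(f"${run_cost('H100', 8, 72):,.2f}")  # $2,246.40
```

Spot or reserved rates would scale the same calculation down by the applicable discount.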

How does Crusoe compare to traditional hyperscalers?

Crusoe targets AI workloads with a specialized GPU-based architecture, a low carbon footprint, low operating expenses, and up to 81 percent cost savings over traditional hyperscalers. It also offers a 99.98 percent uptime SLA, optimized RDMA networking, and AutoClusters to streamline customer workflows.

Is Crusoe enterprise-ready?

Yes. Crusoe provides a full suite of enterprise-grade infrastructure features: a 99.98 percent uptime SLA, redundant systems, 24/7 support, advanced cooling in each of its data center locations, and AI-optimized networking. Crusoe is also SOC 2 compliant.

Does Crusoe offer managed orchestration?

Yes. Crusoe provides managed Kubernetes, Slurm clusters, and fault-tolerant AutoClusters, and integrates with Saturn Cloud, giving customers JupyterLab and RStudio with elastic scaling of GPU-based workloads.

What support does Crusoe offer?

24/7 enterprise support (with a reported 100% customer satisfaction score) is provided through Command Center, which combines orchestration, observability, and collaboration features. For custom service-level agreements (SLAs) or dedicated resources, contact sales.

Is there a free tier?

No public free tier was found; new customers contact sales for a trial license and/or proof of concept. Ongoing usage follows a pay-as-you-go model with spot and on-demand pricing.

Are there regional restrictions?

Some virtual machine (VM) configurations are limited to certain regions; contact sales for access to restricted configurations.

Is Crusoe Worth It?

Crusoe's core offering is specialized infrastructure for AI workloads: high-performance NVIDIA and AMD GPUs (with ephemeral storage on GPU instances) in purpose-built data centers that can cut costs by up to 81% versus hyperscalers. Strong reliability (99.98% uptime) and tools such as Command Center and AutoClusters make it an enterprise-grade platform for large-scale AI training and inference. The company's focus on renewable energy and its ability to rapidly expand capacity let it compete effectively in the crowded AI cloud market.

Recommended For

  • AI/ML teams that require the most recent generation of GPUs (H100, H200, MI300X) at prices that are competitive to those of hyperscalers.
  • Enterprises that run large-scale training/inference using Kubernetes and Slurm.
  • Organizations that prioritize sustainability and cost efficiency for their GPU-based workloads.
  • Teams that want AI clusters provisioned and managed for them, outside the hyperscalers.

!
Use With Caution

  • Customers that need general-purpose cloud services beyond GPU compute, which Crusoe does not offer.
  • Small teams without an enterprise sales relationship, since initial provisioning goes through sales.
  • Inference applications where latency is critical (check RDMA performance for your specific application).

Not Recommended For

  • Budget startups that would prefer to have a free tier or lower commitment options to access the Crusoe Cloud.
  • Non-GPU based workloads (general purpose cloud computing is better suited to AWS/GCP/Azure).
  • Highly regulated industries without verification of compliance documents.
Expert's Conclusion

Crusoe is best suited for large-scale GPU-based AI factories that prioritize performance, cost savings, and sustainability over the versatility of a general-purpose cloud provider.

Best For
  • AI/ML teams that require the most recent generation of GPUs (H100, H200, MI300X) at prices competitive with hyperscalers.
  • Enterprises that run large-scale training/inference using Kubernetes and Slurm.
  • Organizations that prioritize sustainability and cost efficiency for their GPU-based workloads.

What do expert reviews and research say about Crusoe?

Key Findings

Summary: Crusoe Cloud provides NVIDIA H100/A100/H200 and AMD MI300X GPUs with on-demand pricing from $0.90 to $3.90/GPU-hr, plus spot discounts of up to roughly 67%. Its infrastructure is built around purpose-built AI data centers spanning ~9.8M sq ft, delivering 99.98% uptime through automated hardware-level SSD and NIC failover, with renewable power and air cooling. Notable tooling includes Command Center, AutoClusters, and managed Kubernetes. The company sells mostly to enterprises: public pricing-API and developer-doc references are scant, and access typically requires emailing the sales team.

Data Quality

Good - detailed specs from official docs and pricing pages, third-party benchmarks. Limited API/integration documentation and no public FAQ/customer stories.

Risk Factors

!
Access typically requires going through an enterprise sales process, and some instance types are restricted.
!
GPU pricing is competitive, but spot availability varies with how many GPUs are free at any given time.
!
Pricing may shift significantly as the GPU hardware market evolves.
!
Offers far fewer non-GPU, general-purpose cloud capabilities than traditional cloud providers.
Last updated: February 2026

What Additional Information Is Available for Crusoe?

Data Center Scale

Their purpose-built AI cloud spans roughly 9.8M square feet and delivers 3.4 GW of electricity to up to 946K GPUs. Facilities feature air cooling, RDMA networking with rack densities of 1 to 6 GPUs, and a focus on renewable energy sources.

Command Center

“We recently launched a new unified operations platform, available to all AI workloads. We focus less on orchestration and more on deep observability, collaborative support experiences, and optimizing GPU uptime.”

Integrations

Partners with Saturn Cloud to offer GPU-powered JupyterLab, RStudio, and Dask clusters, alongside managed Kubernetes, Slurm, and AutoClusters to automate MLOps.

Sustainability

They power their data centers on renewable energy sources, stating they are “designed to significantly reduce the carbon footprint of AI workloads with advanced energy efficiency features”, and indicate they’re for “high-density GPU deployments with precision power distribution who traditionally bought infrastructure with hyperscalers”.

Capacity Expansion

“Rapidly scaling and deploying infrastructure to support needs for GB200 and B200 next-gen GPUs,” engineered solely for AI.

What Are the Best Alternatives to Crusoe?

  • CoreWeave: An AI GPU cloud similar to Crusoe, with Kubernetes-native tooling and comparable NVIDIA H100/A100 pricing. Its ML-oriented developer tools are considerably more established, though it puts less emphasis on sustainability; a strong option if you need access immediately. (coreweave.com)
  • Lambda Labs: A familiar name in GPU clouds, with H100s at roughly ~$2.50/hr, a heavy focus on ML workflows, and a simpler self-service, autoscaling stack. (lambdalabs.com)
  • RunPod: RunPod offers pay-per-second GPU rentals of A100s beginning around $0.20 per hour, which is a more flexible option for burst or intermittent workloads; however, it has lower reliability as an enterprise solution and is best for cost-conscious prototyping and spot utilization.
  • Google Cloud AI Platform: Google Cloud provides both TPU and GPU-based infrastructure with A100 and H100 GPUs, in addition to a full cloud ecosystem; while it is more expensive, ~$3.50/hour for an H100, it also offers better options for running hybrid and multi-cloud applications and is best for enterprises that are already utilizing some of the GCP ecosystem.
  • AWS EC2 P5: Amazon Web Services offers high-end GPU instances with H100s at premium prices, as well as an unmatched ecosystem; however, AWS may be 20-80% more expensive than Crusoe depending upon specific use cases; and is best suited for mission-critical workloads that require the integration of AWS services.

Crusoe Cloud GPU Performance Specifications

48 GB per GPU
NVIDIA L40S Memory Capacity
80 GB per GPU
NVIDIA A100 Memory Capacity
30% faster vs A100/A40
L40S Inference Performance (Llama2)
3.3 images/sec (single L40S, fp16)
SDXL Image Generation Throughput
2.8 images/sec (15% speedup over fp16)
SDXL Throughput with INT8 Optimization
10 GPUs
Maximum L40S GPUs per Instance
80 vCPUs (AMD Genoa)
L40S Instance vCPU Count
1470 GB
L40S Instance System Memory
200 Gbps
L40S Instance Network Bandwidth
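The throughput figures above imply simple per-image economics. A minimal sketch, combining the 3.3 images/sec fp16 SDXL figure with the $1.45/GPU-hour on-demand L40S rate quoted in the pricing section:

```python
# Per-image economics: a single L40S at 3.3 SDXL images/sec (fp16),
# priced at the $1.45/GPU-hour on-demand rate from this review.
IMAGES_PER_SEC = 3.3
RATE_PER_HOUR = 1.45

images_per_hour = IMAGES_PER_SEC * 3600
cost_per_image = RATE_PER_HOUR / images_per_hour
print(f"{images_per_hour:.0f} images/hour, ${cost_per_image:.6f}/image")
# 11880 images/hour, $0.000122/image
```

Real jobs add overhead (model loading, batching, idle time), so treat this as an upper bound on throughput per dollar.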

Crusoe Cloud Data Center Infrastructure

Platform Architecture
Purpose-built for GPU-intensive AI workloads
GPU-to-GPU Communication
High-bandwidth RDMA networking optimized
Uptime SLA
99.98%
Enterprise Support
24/7 with 100% customer satisfaction score
Model Deployment Speed
Up to 20x faster than alternatives
Cost Reduction Potential
Up to 81% cost savings vs alternatives
Kubernetes & Slurm Support
Managed with fault-tolerant AutoClusters
Energy Efficiency
Renewable-powered infrastructure

Available GPU Types on Crusoe Cloud

NVIDIA A100 (PCIe and SXM variants) · NVIDIA A40 · NVIDIA L40S 48GB · NVIDIA H100 · NVIDIA H200 · NVIDIA GB200 · NVIDIA B200 · AMD MI300X 192GB

Crusoe Cloud Workload-to-Hardware Optimization Guide

Workload Type | Recommended Hardware | Performance Advantage | Configuration Example | Use Case Notes
Generative AI Inference (Text) | NVIDIA L40S, A100 | 30% faster inference vs A40/A100 | l40s-48gb.1x to l40s-48gb.10x | LLM inference, chatbots, text summarization
Image Generation (SDXL, Stable Diffusion) | NVIDIA L40S | 3.3 images/sec fp16, 2.8 images/sec int8 | l40s-48gb.1x or higher | SDXL inference, native fp8/int8 support
Fine-tuning & Training | NVIDIA L40S clusters (10x GPU) | Completed 32k-context Mistral 7B finetune in 3 days | l40s-48gb.10x instance | Cost-effective training: ~$1000 for multi-GPU finetune
Multi-Model Serverless Deployment | NVIDIA L40S (10x GPU, high memory) | Fast model switching, multiple models in system memory | l40s-48gb.10x with 1470 GB total memory | Serverless providers, per-customer fine-tunes
Large-Scale Model Training | NVIDIA H100, H200, B200 | High-bandwidth interconnect, extreme scaling | Multi-node H100/H200 clusters | Trillion-parameter models, distributed training
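As a rough illustration, the guide above can be encoded as a lookup helper. The workload keys below are invented shorthand; the hardware and instance names come from the table's l40s-48gb.Nx naming:

```python
# Toy encoding of the workload-to-hardware guide. Keys are shorthand
# invented for this sketch; values are simplified from the table above.
GUIDE = {
    "text-inference":   ("NVIDIA L40S / A100",        "l40s-48gb.1x"),
    "image-generation": ("NVIDIA L40S",               "l40s-48gb.1x"),
    "fine-tuning":      ("NVIDIA L40S cluster",       "l40s-48gb.10x"),
    "serverless-multi": ("NVIDIA L40S (high memory)", "l40s-48gb.10x"),
    "large-training":   ("NVIDIA H100/H200/B200",     "multi-node cluster"),
}

def recommend(workload: str) -> str:
    hardware, config = GUIDE.get(workload, ("unknown", "contact sales"))
    return f"{workload}: {hardware} ({config})"

print(recommend("fine-tuning"))
# fine-tuning: NVIDIA L40S cluster (l40s-48gb.10x)
```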

Crusoe Cloud Pricing

L40S GPU Pricing (On-Demand)
$1.45/GPU-hour
L40S GPU Pricing (Committed Usage)
Discounted rates available
AMD MI300X 192GB Pricing
$3.45/hour
Pricing Model
Transparent, on-demand and reserved options
Cost Advantage
Up to 30% better price/performance vs current-generation platforms
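For memory-bound workloads, comparing rates per GB of GPU memory can be more informative than raw hourly price. A small sketch using the MI300X rate above and the H100 on-demand rate quoted in the FAQ section:

```python
# Memory-normalized comparison of two rates quoted in this review:
# AMD MI300X (192 GB at $3.45/hr) vs NVIDIA H100 (80 GB at $3.90/hr).
cards = {"MI300X": (192, 3.45), "H100": (80, 3.90)}

for name, (mem_gb, rate) in cards.items():
    print(f"{name}: ${rate / mem_gb:.5f} per GB of memory per hour")
# MI300X: $0.01797 per GB of memory per hour
# H100: $0.04875 per GB of memory per hour
```

By this metric the MI300X is far cheaper per GB, though the H100 may still win on raw compute or software-ecosystem grounds.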

Crusoe Cloud CPU Instance Specifications

Instance Type | vCPU Count | Memory | Network Bandwidth | Typical Use Case
c1a.2x | 2 | 8 GB | 1 Gbps | Development, testing
c1a.4x | 4 | 16 GB | 2 Gbps | Small workloads
c1a.8x | 8 | 32 GB | 5 Gbps | Medium compute
c1a.16x | 16 | 64 GB | 10 Gbps | General purpose
c1a.32x | 32 | 128 GB | 20 Gbps | High-memory workloads
c1a.64x | 64 | 256 GB | 35 Gbps | Large-scale processing
c1a.128x | 128 | 512 GB | 70 Gbps | Extreme-scale workloads
c1a.176x | 176 | 704 GB | 100 Gbps | Maximum throughput
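The c1a family follows a clean pattern: memory scales at a fixed 4 GB per vCPU, while network bandwidth follows irregular tiers. A small illustrative helper (not an official API) encoding that pattern:

```python
# Memory in the c1a family scales at 4 GB per vCPU; bandwidth tiers are
# irregular, so they are listed explicitly (values from the table above).
BANDWIDTH_GBPS = {2: 1, 4: 2, 8: 5, 16: 10, 32: 20, 64: 35, 128: 70, 176: 100}

def c1a_specs(vcpus: int) -> tuple:
    """Return (instance name, memory in GB, network in Gbps)."""
    return (f"c1a.{vcpus}x", 4 * vcpus, BANDWIDTH_GBPS[vcpus])

print(c1a_specs(64))  # ('c1a.64x', 256, 35)
```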

Crusoe Cloud Storage-Optimized Instance Specifications

Instance Type | vCPU Count | Memory | NVMe Storage | Network Bandwidth | Use Case
s1a.20x | 20 | 176 GB | 1x 12.8 TB | 25 Gbps | Data processing, storage-intensive AI
s1a.40x | 40 | 352 GB | 2x 12.8 TB | 50 Gbps | Large dataset handling
s1a.60x | 60 | 528 GB | 3x 12.8 TB | 75 Gbps | Multi-tier data workloads
s1a.80x | 80 | 704 GB | 4x 12.8 TB | 100 Gbps | Enterprise-scale storage
s1a.120x | 120 | 1056 GB | 6x 12.8 TB | 150 Gbps | Distributed training data
s1a.160x | 160 | 1408 GB | 8x 12.8 TB | 200 Gbps | Maximum storage density
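Unlike the c1a bandwidth tiers, the s1a family scales strictly linearly: each 20-vCPU step adds 176 GB of memory, one 12.8 TB NVMe drive, and 25 Gbps of network. An illustrative helper (not an official API) encoding that relationship:

```python
# The s1a family scales linearly per 20-vCPU step (from the table above).
def s1a_specs(vcpus: int) -> dict:
    step = vcpus // 20  # e.g. s1a.160x -> 8 steps
    return {"memory_gb": 176 * step,
            "nvme_tb": 12.8 * step,
            "network_gbps": 25 * step}

print(s1a_specs(160))
# {'memory_gb': 1408, 'nvme_tb': 102.4, 'network_gbps': 200}
```

Note the top-end s1a.160x aggregates 102.4 TB of local NVMe, which is what makes the family suitable for distributed training data.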

Crusoe Cloud Core Features

MemoryAlloy Inference Engine

Crusoe's proprietary technology enables ultra-low latency and scalable throughput for large-context AI workloads.

Managed Kubernetes

Crusoe’s simple container orchestration, based on automated scaling and fault tolerance, minimizes operational overhead.

Managed Slurm

Job scheduling for distributed computing workloads.

Crusoe AutoClusters

Automatic cluster management that eliminates operational overhead to provide fault-tolerance.

Optimized RDMA Networking

High-bandwidth, low-latency GPU-to-GPU communication.

Accelerated Storage

Integration with high-performance storage solutions for AI workloads.

Mixed Precision Support

Crusoe supports FP8, FP16, FP32, BF16, and INT8 quantization on L40S and later-generation GPU architectures.

Native Sparsity Support

Crusoe supports hardware accelerated sparse tensor operations on supported GPU types.

Crusoe Cloud Compliance & Support Status

Enterprise Support Availability: 24/7 enterprise-grade support
Customer Satisfaction Score: 100% customer satisfaction
Infrastructure Reliability: 99.98% uptime SLA
Renewable Energy Powered: Crusoe operates as a renewable-powered AI factory
Integration with Saturn Cloud MLOps: Full compatibility for simplified AI deployment
Elastic Scaling Capabilities: Automatic provisioning and resource management
Transparent Pricing: Clear on-demand and reserved pricing models
