Neo Cloud Providers Comparison

Compare GPU-first and AI-native cloud providers: GPU availability, pricing, performance, and specialized AI infrastructure.

Last updated: 2026-02-11

General

| Feature | CoreWeave | Lambda | Together AI | Crusoe Cloud | Voltage Park | RunPod |
|---|---|---|---|---|---|---|
| Headquarters | Livingston, NJ | San Francisco, CA | San Francisco, CA | San Francisco, CA | San Francisco, CA | Cherry Hill, NJ |
| Founded | 2017 | 2012 | 2022 | 2018 | 2023 | 2022 |
| Company Type | Private (~$35B valuation) | Private (~$1.5B valuation) | Private (~$3.3B valuation) | Private (~$10B valuation) | Private | Private (~$500M+ valuation) |
| Total Funding | ~$12B+ (debt + equity) | ~$800M+ | ~$400M+ | ~$1.4B+ (Series E) | ~$100M+ | ~$20M+ |
| Business Model | GPU cloud IaaS (bare metal & VMs) | GPU cloud + on-prem GPU servers | AI inference & training API platform | Clean-energy GPU cloud | GPU-as-a-service marketplace | Serverless GPU cloud for AI |
| Target Customers | AI labs, enterprises, hyperscalers | ML researchers, startups, enterprises | AI developers, startups, researchers | AI companies, enterprises | Startups, researchers, AI companies | Indie developers, startups, researchers |
GPU Availability

| Feature | CoreWeave | Lambda | Together AI | Crusoe Cloud | Voltage Park | RunPod |
|---|---|---|---|---|---|---|
| Nvidia H100 | | | | | | |
| Nvidia H200 | | | | | | Limited |
| Nvidia B200 (Blackwell) | Coming 2025-2026 | Coming 2026 | Coming 2026 | Coming 2026 (GB200) | TBD | TBD |
| Nvidia A100 | | | | | | |
| Nvidia L40S / L4 | | | | | | |
| Consumer GPUs (RTX 4090, etc.) | | | | | | |
| Total GPU Fleet Size | 100,000+ H100s (growing to 250K+) | 10,000+ H100s | Thousands (not disclosed exactly) | Planned 400,000 GB200s (Abilene) | Thousands of H100s | Thousands (community + owned) |
Pricing

| Feature | CoreWeave | Lambda | Together AI | Crusoe Cloud | Voltage Park | RunPod |
|---|---|---|---|---|---|---|
| H100 On-Demand (per GPU/hr) | ~$2.23 | ~$1.99 | ~$2.50 (dedicated) | ~$2.35 | ~$1.89 | ~$2.49 |
| A100 80GB On-Demand (per GPU/hr) | ~$1.28 | ~$1.10 | ~$1.25 | ~$1.20 | ~$0.99 | ~$1.64 |
| Reserved / Contract Pricing | 1-3 year contracts (significant discount) | Reserved instances available | Custom enterprise pricing | Long-term contracts available | Flexible commitments | Savings Plans available |
| Spot / Interruptible | | | | | | |
| Serverless (Pay-per-token/sec) | | | | | | |
| Price Competitiveness | ~50-70% cheaper than hyperscalers | ~60-75% cheaper than hyperscalers | Competitive for inference APIs | ~50-65% cheaper than hyperscalers | ~60-80% cheaper than hyperscalers | ~50-70% cheaper than hyperscalers |
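As a rough illustration of what the on-demand H100 rates above mean in monthly terms (a back-of-envelope sketch only: real bills add storage, egress, and networking, and list prices change frequently):

```python
# Approximate on-demand H100 rates (USD per GPU-hour) from the table above.
H100_RATES = {
    "CoreWeave": 2.23,
    "Lambda": 1.99,
    "Together AI": 2.50,
    "Crusoe Cloud": 2.35,
    "Voltage Park": 1.89,
    "RunPod": 2.49,
}

HOURS_PER_MONTH = 24 * 30  # assume a 720-hour month for simple comparison

def monthly_cost(rate_per_gpu_hour: float, num_gpus: int = 1) -> float:
    """Cost of running num_gpus continuously for one 720-hour month."""
    return rate_per_gpu_hour * HOURS_PER_MONTH * num_gpus

# Rank providers by what a continuously-running 8x H100 node would cost.
for provider, rate in sorted(H100_RATES.items(), key=lambda kv: kv[1]):
    print(f"{provider:>13}: ${monthly_cost(rate, num_gpus=8):>9,.0f}/mo for 8x H100")
```

At these list rates a single H100 running nonstop costs roughly $1,360-$1,800 per month, which is why the reserved and spot options in the table matter so much for sustained workloads.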
AI Services & Features

| Feature | CoreWeave | Lambda | Together AI | Crusoe Cloud | Voltage Park | RunPod |
|---|---|---|---|---|---|---|
| Managed Inference API | | | | | | |
| Model Fine-tuning Service | | | | | | |
| Pre-built Model Catalog | | | 100+ open-source models (Llama, Mistral, etc.) | | | Community templates |
| Bare Metal Servers | | | | | | |
| Virtual Machines | | | Dedicated endpoints | | | GPU Pods |
| Kubernetes Native | | | | | | |
| On-Prem GPU Servers (Purchase) | | Yes | | | | |
Networking & Storage

| Feature | CoreWeave | Lambda | Together AI | Crusoe Cloud | Voltage Park | RunPod |
|---|---|---|---|---|---|---|
| InfiniBand | | | | | | Limited |
| RDMA Support | | | | | | Limited |
| NVLink / NVSwitch | | | | | | |
| High-Performance Storage | NVMe SSD, shared filesystems | NVMe SSD, persistent storage | Managed (transparent to user) | NVMe SSD, object storage | NVMe SSD | Network volumes, NVMe pods |
| Object Storage | | | | | | |
Infrastructure & Regions

| Feature | CoreWeave | Lambda | Together AI | Crusoe Cloud | Voltage Park | RunPod |
|---|---|---|---|---|---|---|
| Data Center Locations | US (NJ, TX, IL, WA), Europe (London, Norway) | US (TX, UT, AZ) | US-based | US (TX, WY), Iceland | US-based | US, EU (distributed community + owned) |
| Number of Regions | 6+ | 3+ | 1-2 | 3+ | 1-2 | 10+ (incl. community) |
| Clean / Renewable Energy | Partial (varies by location) | Not emphasized | Not emphasized | Yes (clean-energy focus) | Not emphasized | Not emphasized |
| Uptime SLA | 99.9%+ | 99.9% | 99.9% (API) | 99.9%+ | Best effort | 99.9% (varies by tier) |
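The SLA percentages above are easier to compare when converted into a downtime budget; a 99.9% monthly SLA allows roughly 43 minutes of downtime per 30-day month. This is a sketch of the arithmetic only, not any provider's actual SLA terms (real agreements also define exclusions and credit schedules):

```python
MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200 minutes in a 30-day month

def allowed_downtime_minutes(uptime_sla: float) -> float:
    """Downtime budget per 30-day month for an uptime SLA given as a fraction."""
    return (1.0 - uptime_sla) * MINUTES_PER_MONTH

# Common SLA tiers and their implied monthly downtime budgets.
for sla in (0.999, 0.9995, 0.9999):
    print(f"{sla:.2%} uptime -> {allowed_downtime_minutes(sla):.1f} min/month of downtime")
```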
Notable Customers & Partnerships

| Feature | CoreWeave | Lambda | Together AI | Crusoe Cloud | Voltage Park | RunPod |
|---|---|---|---|---|---|---|
| Key Customers | Microsoft, Nvidia, Cohere, Stability AI | ML researchers, universities, startups | Scale AI, Stanford HAI, Hugging Face | AI labs, enterprise GPU tenants | AI startups, researchers | Indie AI developers, Stable Diffusion community |
| Strategic Investors | Nvidia, Microsoft, Magnetar, Coatue, Cisco | G Squared, Gradient Ventures (Google) | Nvidia, Salesforce, Kleiner Perkins | Nvidia, Fidelity, Mubadala | Not publicly disclosed | Dell Technologies Capital, Intel Capital |
| Nvidia Partnership | Preferred cloud partner, DGX Cloud | Cloud partner, hardware reseller | Investor + GPU partner | Strategic investor | GPU customer | GPU customer |
Key Differentiators

| Feature | CoreWeave | Lambda | Together AI | Crusoe Cloud | Voltage Park | RunPod |
|---|---|---|---|---|---|---|
| Primary Strength | Largest independent GPU cloud; Nvidia-preferred; enterprise-grade Kubernetes | Simplicity + best GPU cloud pricing; also sells on-prem servers | Best platform for open-source model inference + fine-tuning APIs | Clean energy powered; vertically integrated (energy + compute) | Lowest-cost H100 access; flexible commitments | Most accessible GPU cloud; serverless + community marketplace |
| Best For | Large-scale AI training, enterprise GPU infrastructure | ML researchers and teams wanting simple, affordable GPU access | Developers building AI apps with open-source models | AI companies wanting sustainable, clean-energy compute | Budget-conscious teams needing bulk H100 access | Individual developers, hobbyists, small teams, inference workloads |