What it is: TensorWave is a cloud infrastructure provider offering scalable compute powered by AMD Instinct GPUs for AI and high-performance computing workloads.
Rating: 72/100 (Good)
Expert's conclusion: TensorWave is best suited for organizations running memory-intensive AI training workloads on high-capacity AMD GPUs, without the price and availability concerns associated with NVIDIA GPUs.
Reviewed by Maxim Manylov · Web3 Engineer & Serial Founder
Company Overview
TensorWave is a deep-tech company that delivers scalable, memory-optimized AI and HPC cloud services powered by AMD Instinct GPUs, including the world's first deployments of MI300X accelerators. Established in 2023, TensorWave aims to enable AI innovation through accessible high-performance computing (HPC) infrastructure designed for demanding AI workloads. The company is currently building out its headquarters in Las Vegas, Nevada.
Active
📍Las Vegas, NV
📅Founded 2023
🏢Private
TARGET SEGMENTS
AI Developers · Enterprises · Research Institutions · Businesses with AI Workloads
Key Metrics
📊
2023
Founded
🏢
38-100
Employees
📊
$100M Series A
Funding Raised
📊
AMD Instinct MI300X (industry first)
GPU Deployment
📊
Las Vegas HQ + Silicon Valley satellites
Offices
Credibility Rating
72/100
Good
Early stage with strong technological foundations as one of the first adopters of new-generation AMD GPUs, backed by a major Series A investment; however, publicly available user reviews and overall visibility remain very limited.
BREAKDOWN
Product Maturity65/100
Company Stability78/100
Security & Compliance70/100
User Reviews50/100
Transparency68/100
Support Quality70/100
TRUST SIGNALS
$100M Series A led by Lux Capital
First deployment of AMD Instinct MI300X GPUs
Backing from established AI infrastructure investors
Nevada economic development partnership
Company History
2023
Company Founded
TensorWave Inc. was founded in November 2023 by Piotr Tomasik to provide scalable AI/HPC cloud infrastructure built on AMD GPUs.
2024
Series A Funding
Raised $100 million in Series A funding led by Lux Capital to build AI-specific cloud infrastructure (announced May 2024).
2025
HQ Expansion
In May 2025, TensorWave announced the expansion of its Las Vegas headquarters, supported by Nevada economic incentives, alongside its MI300X GPU deployment.
2025
Leadership Solidified
Appointed Darrick Horton as CEO, along with other executive team members, to lead development of TensorWave's AI compute platform.
API Integrations
API Type
No public API documentation found. Primarily bare metal and cloud infrastructure access via dashboard or direct server access.
Authentication
Likely SSH keys and cloud account credentials for bare metal access. Enterprise SSO possible but not publicly detailed.
Webhooks
No webhook support mentioned in public documentation.
SDKs
No official SDKs. Customers use the standard AMD ROCm software stack with frameworks such as JAX and PyTorch for direct GPU programming.
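Because PyTorch's ROCm builds reuse the `torch.cuda` namespace, framework code written for NVIDIA GPUs often runs unchanged on AMD hardware. A minimal device-selection sketch (an illustration, not TensorWave-specific tooling; it degrades to CPU when PyTorch or a GPU is absent):

```python
# Device-selection sketch for ROCm-backed PyTorch. ROCm builds of PyTorch
# map HIP onto the torch.cuda namespace, so the same check covers AMD and
# NVIDIA GPUs alike.

def pick_device() -> str:
    """Return "cuda" when a GPU (ROCm or CUDA) is visible, else "cpu"."""
    try:
        import torch
        if torch.cuda.is_available():
            # On ROCm builds, torch.version.hip is set instead of torch.version.cuda.
            return "cuda"
    except ImportError:
        pass  # PyTorch not installed; fall back to CPU
    return "cpu"

if __name__ == "__main__":
    print(f"selected device: {pick_device()}")
```

Because the result is always a valid device string, model code can pass it straight to `tensor.to(pick_device())` without branching on the vendor.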
Documentation
Limited - product-focused docs on website. Technical docs focus on hardware specs and deployment guides rather than programmatic APIs.
Sandbox
No public sandbox. On-demand GPU access starts within seconds via dashboard.
SLA
SOC 2 Type II certified and HIPAA compliant. Dedicated instances provide reserved capacity, but no public uptime SLAs are published.
Rate Limits
N/A - bare metal infrastructure, no API rate limits.
Use Cases
Programmatic AI workload orchestration via standard ML frameworks on bare metal GPUs.
FAQ
TensorWave offers AMD Instinct MI300X, MI325X, and MI355X GPUs with up to 288GB of HBM3E memory per GPU (on the MI355X), used to train and serve large language models with industry-leading memory bandwidth.
Dedicated MI355X pricing starts at $2.95/GPU-hour, MI325X at $2.25/GPU-hour, and MI300X at $1.71/GPU-hour. Custom pricing is available for enterprise clusters and storage, and three-year reservations are also possible.
TensorWave uses AMD Instinct GPUs exclusively to avoid NVIDIA supply chain constraints and price premiums. The large memory capacity of AMD GPUs (up to 288GB of HBM3E) is particularly well suited to memory-intensive LLM applications. Additionally, TensorWave's use of direct liquid cooling enables up to 51% energy cost savings.
Yes, TensorWave is both SOC 2 Type II certified and HIPAA compliant. TensorWave prioritizes security and operates enterprise-grade infrastructure in North American data centers.
Yes, TensorWave offers both bare-metal GPU nodes and fully managed Kubernetes clusters for running containerized AI applications; both options provide full root access to dedicated hardware.
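For the managed Kubernetes option, scheduling pods onto AMD GPUs typically goes through AMD's Kubernetes device plugin, which exposes the `amd.com/gpu` resource name. The manifest below is a hypothetical sketch; TensorWave's actual cluster configuration is not publicly documented, so treat it as a generic ROCm-on-Kubernetes example:

```yaml
# Hypothetical pod spec requesting AMD GPUs via the standard AMD device
# plugin (resource name amd.com/gpu). Image and command are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: rocm-train
spec:
  containers:
    - name: trainer
      image: rocm/pytorch:latest   # ROCm-enabled PyTorch image
      resources:
        limits:
          amd.com/gpu: 8           # request all 8 GPUs on one node
      command: ["python", "train.py"]
```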
TensorWave provides expert support through its Book a Call system, and its enterprise group can assign a dedicated team to help you optimize GPU clusters for your specific AI workloads.
No free trial is advertised, but you can deploy GPU access in under 10 seconds via the dashboard on a pay-per-use model starting at $1.71/GPU-hour for the MI300X.
TensorWave is an AMD-only provider, so your software must run on the ROCm stack. For memory-intensive workloads this is likely a good fit, but if you rely on heavily CUDA-optimized applications, verify their ROCm compatibility before purchasing. Enterprise pricing also requires a custom quote.
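A quick pre-migration check is to scan your dependency list for packages that are typically CUDA-only. The sketch below is illustrative: the package set is an assumption, not an official list, and some of these ship ROCm forks, so treat hits as prompts to verify rather than hard blockers.

```python
# Flag dependencies that are commonly CUDA-centric before moving a project
# to an AMD/ROCm provider. CUDA_ONLY_HINTS is illustrative, not exhaustive.

CUDA_ONLY_HINTS = {"cupy", "tensorrt", "apex", "nvidia-dali", "flash-attn"}

def flag_cuda_deps(requirements: list[str]) -> list[str]:
    """Return requirement names that match known CUDA-centric packages."""
    names = {line.split("==")[0].strip().lower()
             for line in requirements if line.strip()}
    return sorted(names & CUDA_ONLY_HINTS)

print(flag_cuda_deps(["torch==2.4.0", "cupy==13.0.0", "numpy"]))  # ['cupy']
```

In practice you would feed it the contents of `requirements.txt` and follow up on each hit against AMD's ROCm compatibility notes.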
Expert Verdict
TensorWave delivers a competitive AMD-based AI platform for organizations seeking higher memory capacity and lower energy consumption than comparable NVIDIA offerings, making it a viable alternative for many memory-intensive workloads. The ability to rapidly scale bare-metal environments is also attractive to organizations that want to avoid GPU shortages; however, the AMD software stack still lags behind CUDA in maturity.
Recommended For
AI teams training large language models (100B+ parameters).
Organizations looking for an NVIDIA alternative due to current supply chain constraints.
Organizations focused on lowering total cost of ownership (TCO) through liquid-cooled data center designs.
Research institutions requiring massive amounts of HBM3E memory capacity.
!
Use With Caution
Teams heavily invested in CUDA-only workflows -- verify ROCm compatibility.
Small teams without DevOps experience -- bare metal requires management.
Latency sensitive inference -- test AMD vs NVIDIA performance.
Not Recommended For
Budget-conscious startups unable to commit to $1.71+/GPU-hour pricing.
Teams that require mature plugin ecosystems -- AMD stack still developing.
Teams doing simple inference workloads -- typically less expensive to use consumer GPU cloud services.
Expert's Conclusion
TensorWave is best suited for organizations running memory-intensive AI training workloads on high-capacity AMD GPUs, without the price and availability concerns associated with NVIDIA GPUs.
Best For
AI teams training large language models (100B+ parameters) · Organizations looking for an NVIDIA alternative due to current supply chain constraints · Organizations focused on lowering total cost of ownership (TCO) through liquid-cooled data center designs.
Research Summary
Key Findings
TensorWave focuses exclusively on cloud infrastructure built on AMD Instinct GPUs (MI300X/MI325X/MI355X), with up to 288GB of HBM3E memory per GPU and liquid cooling that can reduce energy expenses by up to 51%. Bare-metal pricing begins at $1.71/GPU-hour, with SOC 2 Type II and HIPAA compliance. With over $166 million in funding and very rapid growth, the company has a significant enterprise focus.
Data Quality
Good - comprehensive specs from official website and AMD partnership announcements. Pricing transparent for dedicated instances, enterprise details require sales contact. Limited public info on customer metrics and API capabilities.
Risk Factors
!
The AMD ecosystem is less mature than the well-established NVIDIA CUDA stack.
!
The rapidly evolving state of the GPU market may significantly impact the competitive landscape.
!
Bare-metal offerings require substantial DevOps expertise from the customer.
!
Due to its exclusive focus on AMD, TensorWave offers limited flexibility in terms of hardware.
Last updated: February 2026
Additional Info
Strategic Partnerships
TensorWave's entire infrastructure is built on an exclusive partnership with AMD, using AMD Instinct GPUs throughout. The company is backed by AMD Ventures and Magnetar with $166M+ in funding. It also collaborates with Modular (the MAX inference engine) and Supermicro for hardware.
Infrastructure Scale
TensorWave operates one of the world's largest all-AMD GPU clouds, featuring an Arizona cluster of 8,192 GPUs as part of a roadmap toward more than 1 GW of computing capacity. Its North American data center locations use UEC-ready Ethernet networking.
Customer Success
Felafax (YC S24) optimized a 405-billion-parameter model to train on a single 8-GPU MI300X node, using JAX on bare metal, and demonstrated stable operation across multiple training iterations.
Compliance & Sustainability
SOC 2 Type II certified and HIPAA compliant. Liquid-cooled systems provide up to 51% lower energy costs than air-cooled systems.
AMD GPU Portfolio
Pricing for the three systems is: MI355X ($2.95/GPU-hour, 288GB HBM3E), MI325X ($2.25/GPU-hour, 256GB), and MI300X ($1.71/GPU-hour, 192GB). Three-year reservations are available for large enterprise organizations.
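Given these three list prices, a quick sketch for comparing node-hour cost and price per gigabyte of HBM (illustrative arithmetic only, using the dedicated rates quoted in this review; reserved and enterprise pricing differ):

```python
# Back-of-the-envelope calculator for TensorWave's published dedicated rates.

GPUS = {  # model: (usd_per_gpu_hour, hbm_capacity_gb)
    "MI355X": (2.95, 288),
    "MI325X": (2.25, 256),
    "MI300X": (1.71, 192),
}

def node_cost_per_hour(model: str, gpus_per_node: int = 8) -> float:
    """Hourly list price of a full node (8 GPUs is the typical node size)."""
    return GPUS[model][0] * gpus_per_node

def usd_per_gb_hour(model: str) -> float:
    """Price normalized by HBM capacity: a rough value-for-memory metric."""
    rate, gb = GPUS[model]
    return rate / gb

for model in GPUS:
    print(f"{model}: ${node_cost_per_hour(model):.2f}/node-hour, "
          f"${usd_per_gb_hour(model):.4f}/GB-hour")
```

Normalizing by memory capacity is a useful first filter for memory-bound LLM workloads, though it ignores compute throughput and interconnect differences between the three models.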
Alternatives
•
CoreWeave: The current leader in GPU cloud services, built on NVIDIA hardware with the broadest model support and the most mature CUDA ecosystem. It charges higher prices when supply is short, but its availability and software maturity are unmatched. Ideal for organizations requiring guaranteed access to the latest NVIDIA H100 or H200 GPUs (coreweave.com).
•
Lambda Labs: A GPU cloud provider offering both NVIDIA and AMD options with strong on-demand pricing (lambdalabs.com). It offers greater hardware flexibility than TensorWave's AMD-only approach and is ideal for organizations that want to take advantage of both ecosystems and scale simply.
•
Paperspace (DigitalOcean): A developer-friendly GPU cloud built on NVIDIA GPUs with managed services. Notebooks and API access give it a lower barrier to entry than bare-metal clouds. Best for small AI teams and prototyping (paperspace.com).
•
Together AI: An inference-focused platform with a decentralized GPU network spanning both AMD and NVIDIA hardware. Its pricing is pay-per-token, as opposed to TensorWave's hourly pricing. Best for cost-optimized inference at scale (together.ai).
•
RunPod: A spot market for renting GPUs, with AMD MI300X available. Significantly cheaper than TensorWave's dedicated pricing options, but instances may be interrupted at any time. Best for training workloads that tolerate interruption (runpod.io).