TensorWave deploys AMD Instinct MI355X GPUs in its cloud platform

TensorWave, a leader in AMD-powered AI infrastructure solutions, today announced the deployment of AMD Instinct MI355X GPUs in its high-performance cloud platform.
As one of the first cloud providers to bring the AMD Instinct MI355X to market, TensorWave enables customers to unlock next-level performance for the most demanding AI workloads—all with unmatched white-glove onboarding and support.
The new AMD Instinct MI355X GPU is built on the 4th Gen AMD CDNA architecture with 288GB of HBM3E memory and 8TB/s of memory bandwidth, and is optimized for generative AI training, inference, and high-performance computing (HPC).
TensorWave’s early adoption lets its customers benefit from the MI355X’s compact, scalable design, delivering high-density compute backed by advanced cooling infrastructure at scale.
“TensorWave’s deep specialization in AMD technology makes us a highly optimized environment for next-gen AI workloads,” said Piotr Tomasik, president at TensorWave, in a statement. “With the Instinct MI325X now deployed on our cloud and Instinct MI355X coming soon, we’re enabling startups and enterprises alike to achieve up to 25% efficiency gains and 40% cost reductions, results we’ve already seen with customers using our AMD-powered infrastructure.”
TensorWave’s exclusive use of AMD GPUs provides customers with an open, optimized AI software stack powered by AMD ROCm, avoiding vendor lock-in and reducing total cost of ownership. Its focus on scalability, developer-first onboarding, and enterprise-grade SLAs makes it the go-to partner for organizations prioritizing performance and choice.
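For developers weighing that claim, a minimal sketch along these lines (assuming a ROCm build of PyTorch is installed; the device names and tensor sizes below are illustrative, not TensorWave-specific) shows how CUDA-style PyTorch code runs on AMD Instinct hardware without source changes:

```python
# Minimal sketch (not TensorWave's actual setup): checking that AMD Instinct GPUs
# are visible to a ROCm build of PyTorch. Assumes the ROCm wheel of PyTorch is installed.
import torch

# ROCm's PyTorch backend exposes HIP devices through the familiar torch.cuda API,
# so existing CUDA-style code typically runs without modification.
if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        print(f"GPU {i}: {torch.cuda.get_device_name(i)}")
    x = torch.randn(4096, 4096, device="cuda")  # tensor allocated in GPU HBM
    y = x @ x                                    # matmul dispatched via ROCm/HIP kernels
    print("matmul OK:", tuple(y.shape))
else:
    print("No ROCm-visible GPU found")
```

Because the ROCm backend reuses the torch.cuda namespace, most existing training and inference code ports to AMD Instinct GPUs with little or no modification, which is the portability benefit an open stack is meant to provide.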
“AMD Instinct MI350 series GPUs deliver breakthrough performance for the most demanding AI and HPC workloads,” said Travis Karr, corporate vice president of business development, Data Center GPU Business, AMD, in a statement. “The AMD Instinct portfolio, together with our ROCm open software ecosystem, enables customers to develop cutting-edge platforms that power generative AI, AI-driven scientific discovery, and high-performance computing applications.”
TensorWave is also building the largest AMD-specific AI training cluster in North America, advancing its mission to democratize access to high-performance compute. By delivering end-to-end support for AMD-based AI workloads, TensorWave empowers customers to seamlessly transition, optimize, and scale within an open and rapidly evolving ecosystem.
For more information, please visit: