Reserve your NVIDIA® H100 GPU today with Civo

Be the first to bring NVIDIA H100 GPUs into your infrastructure.

Reserve your GPU today

Trusted by businesses of all sizes worldwide.

Docker Orbital Mercedes-Benz THG Red Hat

H100 High-performance GPU for computer vision, LLMs & generative modelling

  • H100 GPUs from just $1.99 per hour
  • PCIe or SXM H100 GPUs available
  • Support for compute and Kubernetes
  • Transparent pricing for predictable billing
  • Supported by a full ML ecosystem
  • Public, private, on-prem & hybrid

Maximum GPU power at the lowest possible price

Industry-leading pricing to make AI innovation accessible to everyone

Model     | Status       | From price      | On-demand price
H100 PCIe | In stock now | $1.99 per GPU/h | $2.49 per GPU/h
H100 SXM  | In stock now | $2.49 per GPU/h | $2.99 per GPU/h
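
As a quick sanity check on how these hourly rates translate into a monthly bill, here is a minimal Python sketch. The 730 hours-per-month figure and the utilisation parameter are illustrative assumptions, not part of Civo's billing model; the prices are taken from the table above.

```python
# Rough monthly cost estimate for a single H100, using the rates listed above.
# HOURS_PER_MONTH and utilisation are illustrative assumptions, not Civo
# billing parameters.
HOURS_PER_MONTH = 730

rates = {
    "H100 PCIe": {"committed": 1.99, "on_demand": 2.49},
    "H100 SXM": {"committed": 2.49, "on_demand": 2.99},
}

def monthly_cost(rate_per_gpu_hour: float, gpus: int = 1, utilisation: float = 1.0) -> float:
    """USD cost for `gpus` GPUs billed at `rate_per_gpu_hour` over one month."""
    return rate_per_gpu_hour * gpus * HOURS_PER_MONTH * utilisation

for model, plans in rates.items():
    for plan, rate in plans.items():
        print(f"{model} ({plan}): ${monthly_cost(rate):,.2f}/month per GPU")
```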

Take an in-depth look

NVIDIA H100 SXM and PCIe GPU specifications
Specification            | H100 SXM                                   | H100 PCIe
FP64                     | 34 teraFLOPS                               | 26 teraFLOPS
FP64 Tensor Core         | 67 teraFLOPS                               | 51 teraFLOPS
FP32                     | 67 teraFLOPS                               | 51 teraFLOPS
TF32 Tensor Core         | 989 teraFLOPS (with sparsity)              | 756 teraFLOPS (with sparsity)
BFLOAT16 Tensor Core     | 1,979 teraFLOPS (with sparsity)            | 1,513 teraFLOPS (with sparsity)
FP16 Tensor Core         | 1,979 teraFLOPS (with sparsity)            | 1,513 teraFLOPS (with sparsity)
FP8 Tensor Core          | 3,958 teraFLOPS (with sparsity)            | 3,026 teraFLOPS (with sparsity)
INT8 Tensor Core         | 3,958 TOPS (with sparsity)                 | 3,026 TOPS (with sparsity)
GPU Memory               | 80GB HBM3                                  | 80GB HBM2e
Memory Bandwidth         | 3.35 TB/s                                  | 2 TB/s
Decoders                 | 7 NVDEC, 7 JPEG                            | 7 NVDEC, 7 JPEG
TDP (Power Draw)         | 700W                                       | 300W - 350W
Interconnect             | NVLink: 900 GB/s, PCIe Gen5: 128 GB/s      | NVLink: 900 GB/s, PCIe Gen5: 128 GB/s
MIG (Multi-Instance GPU) | Supports up to 7 MIGs (each 10GB)          | Supports up to 7 MIGs (each 10GB)
Form Factor              | SXM (Liquid/Air Cooled)                    | PCIe Dual-Slot (Air Cooled)
Ideal For                | Large-scale AI Training, Conversational AI | AI Training, Inference, HPC
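
To give a sense of how the BFLOAT16 Tensor Core throughput listed above is exercised in practice, here is a minimal PyTorch sketch that routes its matrix multiplications through bfloat16 autocast on a single GPU. The model, shapes, and step count are illustrative assumptions, not a benchmark of the H100.

```python
# Minimal mixed-precision example: matmuls run in bfloat16 via torch.autocast,
# which is how the BFLOAT16 Tensor Core figures above get used in training.
# Model size and batch shape are illustrative assumptions only.
import torch
import torch.nn as nn

device = "cuda"  # assumes an NVIDIA GPU such as the H100 is visible

model = nn.Sequential(
    nn.Linear(4096, 4096),
    nn.GELU(),
    nn.Linear(4096, 4096),
).to(device)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
x = torch.randn(64, 4096, device=device)
target = torch.randn(64, 4096, device=device)

for step in range(10):
    optimizer.zero_grad(set_to_none=True)
    # Autocast keeps master weights in FP32 while running the heavy matmuls
    # in bfloat16 on the Tensor Cores.
    with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
        loss = nn.functional.mse_loss(model(x), target)
    loss.backward()
    optimizer.step()

print(f"final loss: {loss.item():.4f}")
```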

Trusted by global businesses

The first thing that drew us to Civo was the expertise they demonstrated in selecting the right GPUs for our needs. Their knowledge made all the difference.

Matt Butcher

Chief Executive Officer, Fermyon

For our PoC, Civo impressed us by setting up a Slack Connect channel for direct access to their SRE team. Their knowledge of cloud-native tech and GPUs gave us real confidence in their support.

Oliver Pinson-Roxburgh

Chief Executive Officer, Defense.com

Unlike other industry players, Civo doesn't bury hidden charges or surprise you with complex egress fees - their pricing is straightforward, and that makes all the difference for businesses trying to scale efficiently.

James Faure

Chief Executive Officer, Clairo.ai

With Civo, deployment speeds and efficiency have significantly improved. Their GPU expertise has been invaluable, boosting our service delivery and enhancing client conversion and retention.

Anuraag Gutgutia

Co-Founder, True Foundry


Discover Civo's AI/ML learning resources

Secure your GPU today


H100 Frequently Asked Questions

How do I reserve an NVIDIA H100 GPU?
To reserve the NVIDIA H100 GPU, simply fill out our form with details about your requirements and use case. Our team will get in touch to discuss next steps and answer any questions you may have.

Can I arrange a demo or trial?
If you’re interested in a demo or trial, let us know when you fill out our form. A Civo representative will then contact you to showcase how the chip performs in real-world AI and ML scenarios.

How much does the NVIDIA H100 cost on Civo?
Civo offers the NVIDIA H100 GPU in two configurations, with both committed and on-demand pricing options. For the H100 PCIe, 36-month commitment pricing starts at $1.99 per GPU/hour, while the on-demand rate is $2.49 per GPU/hour. For the H100 SXM, 36-month commitment pricing starts at $2.49 per GPU/hour, with an on-demand rate of $2.99 per GPU/hour. These configurations provide powerful performance for AI, machine learning, and high-performance computing workloads. For a complete breakdown of specifications and pricing, visit our pricing page.

What workloads is the NVIDIA H100 best suited for?
The NVIDIA H100 is designed for cutting-edge AI and high-performance computing (HPC) workloads. It excels in large-scale deep learning training, AI model inference, high-performance data analytics, and scientific computing. With its Transformer Engine and high-bandwidth memory, the H100 is particularly well-suited for training and fine-tuning large language models (LLMs), generative AI applications, and complex simulations.

Where are Civo’s H100 GPUs available?
Civo’s NVIDIA H100 GPUs are available in select data centers. To check the latest availability in your preferred region, review our Region availability documentation.

Can I upgrade to a larger GPU configuration later?
Yes, you can scale up your instance to a larger GPU setup based on your workload requirements. However, upgrading requires launching a new instance, so it's recommended to plan your resource needs in advance. If you need assistance choosing the right configuration, our team can guide you through the best options.
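
Once an instance is running, a quick way to confirm that the GPU you reserved is visible to your workload is a short PyTorch check such as the one below. It is purely illustrative and only inspects the local device; provisioning itself is handled through Civo.

```python
# Sanity check that an NVIDIA GPU (e.g. an H100) is visible to PyTorch on a
# freshly provisioned instance. Illustrative only; does not touch Civo's API.
import torch

if not torch.cuda.is_available():
    raise SystemExit("No CUDA device visible - check the NVIDIA driver install.")

for i in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(i)
    total_gb = props.total_memory / (1024 ** 3)
    print(f"GPU {i}: {props.name}, {total_gb:.0f} GB memory, "
          f"compute capability {props.major}.{props.minor}")
```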

Trusted compliance services from a certified provider

ISO 27001 G-Cloud CCSS Cyber Essentials SOC 2