H100 GPU for AI workloads

Run large-scale AI training and inference on H100 GPUs with scalable compute and Kubernetes




AI development, simplified


Build and scale AI workloads on NVIDIA H100 GPUs without the complexity or cost barriers.


Deploy H100 GPUs through flexible compute or Kubernetes, with fast provisioning and predictable performance. Run training, inference, and large-scale models with the power of a full ML ecosystem, all on infrastructure that scales with you.
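For the Kubernetes route, a GPU workload is scheduled the standard way: by requesting the `nvidia.com/gpu` resource. Below is a minimal, illustrative Pod manifest; it assumes your cluster has a GPU node pool with the NVIDIA device plugin installed, and the image and names are generic examples rather than Civo-specific values.

```yaml
# Minimal sketch: a Pod that claims one GPU and prints the driver status.
apiVersion: v1
kind: Pod
metadata:
  name: h100-smoke-test
spec:
  restartPolicy: Never
  containers:
    - name: cuda-check
      image: nvidia/cuda:12.4.1-base-ubuntu22.04
      command: ["nvidia-smi"]        # verifies the GPU is visible in-container
      resources:
        limits:
          nvidia.com/gpu: 1          # schedule onto a node with one H100
```

Apply it with `kubectl apply -f` and check the logs; if `nvidia-smi` lists an H100, the node pool and device plugin are wired up correctly.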


Everything to move you forward, nothing to slow you down.

Clients

Bright minds build with Civo

Pricing

Great power. No great cost.

We believe brilliant minds should have access to brilliant tools, without the baffling price tags.

Simple, transparent pricing means you can scale without fear.

NVIDIA H100
| Model | Status | On demand | Commitment |
|---|---|---|---|
| Small H100 SXM (1 x NVIDIA H100 - 80GB) | In stock | $2.99 per hour | $2.49 per hour |
| Small H100 PCIe (1 x NVIDIA H100 - 80GB) | In stock | $2.49 per hour | $1.99 per hour |
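As a quick sanity check on a budget, the listed rates are easy to turn into total run costs. The sketch below copies the hourly rates from the table above; the example workload (a week on four committed SXM GPUs) is purely illustrative.

```python
# Estimate H100 rental cost from the listed hourly rates.
# Rates are copied from the pricing table above (USD per GPU-hour).
RATES = {
    ("SXM", "on_demand"): 2.99,
    ("SXM", "commitment"): 2.49,
    ("PCIe", "on_demand"): 2.49,
    ("PCIe", "commitment"): 1.99,
}

def rental_cost(form_factor: str, plan: str, hours: float, gpus: int = 1) -> float:
    """Total cost in USD for `gpus` H100s running for `hours`."""
    return RATES[(form_factor, plan)] * hours * gpus

# Example: a week-long fine-tuning run on 4 committed SXM GPUs.
print(f"${rental_cost('SXM', 'commitment', hours=7 * 24, gpus=4):,.2f}")  # → $1,673.28
```

Because billing is hourly with no hidden charges on top, the estimate above is also the bill: scale `hours` and `gpus` to match your run and you have your budget.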

Build AI faster

Accelerate your AI journey with Civo

When you're ready to build with the industry's most powerful GPU, you need infrastructure that keeps up. We get you to the NVIDIA H100 fast, so you can focus on what you do best: groundbreaking AI. We manage the cloud so you can manage the breakthroughs.

01

Stop configuring. Start creating.

Let your team be brilliant. Our fully managed platform removes the burden of setup and maintenance, freeing your data scientists and developers to actually build. Deploy models, run experiments, and iterate at the speed of thought.

02

Pay for performance

Don't pay for GPUs sitting still. With Civo, every hour your H100 is running is an hour of value: training, inference, and data processing. Our transparent, on-demand pricing means you can scale your ambition without inflating your budget. This is compute without the cost surprises.

Get started today

NVIDIA H100 GPUs are in stock and ready for deployment. Contact our sales team to secure yours.

NVIDIA H100 specifications

Take an in-depth look

NVIDIA H100 SXM and PCIe GPU specifications
| Specification | H100 SXM | H100 PCIe |
|---|---|---|
| FP64 | 34 teraFLOPS | 26 teraFLOPS |
| FP64 Tensor Core | 67 teraFLOPS | 51 teraFLOPS |
| FP32 | 67 teraFLOPS | 51 teraFLOPS |
| TF32 Tensor Core | 989 teraFLOPS | 756 teraFLOPS |
| BFLOAT16 Tensor Core | 1,979 teraFLOPS | 1,513 teraFLOPS |
| FP16 Tensor Core | 1,979 teraFLOPS | 1,513 teraFLOPS |
| FP8 Tensor Core | 3,958 teraFLOPS | 3,026 teraFLOPS |
| INT8 Tensor Core | 3,958 TOPS | 3,026 TOPS |
| GPU memory | 80GB HBM3 | 80GB HBM2e |
| Memory bandwidth | 3.35 TB/s | 2 TB/s |
| Decoders | 7 NVDEC, 7 JPEG | 7 NVDEC, 7 JPEG |
| Ideal for | Large-scale AI Training, Conversational AI | AI Training, Inference, HPC |
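A pattern worth noting in these peak numbers: each halving of precision roughly doubles throughput, which is why mixed-precision and FP8 training matter so much on this hardware. A small Python sketch using the SXM figures from the table above makes the ratio explicit:

```python
# Peak throughput of the H100 SXM at each precision, in teraFLOPS
# (TOPS for INT8), copied from the specification table above.
H100_SXM_TFLOPS = {
    "FP64": 34,
    "FP64 Tensor Core": 67,
    "FP32": 67,
    "TF32 Tensor Core": 989,
    "BF16 Tensor Core": 1_979,
    "FP16 Tensor Core": 1_979,
    "FP8 Tensor Core": 3_958,
}

# Dropping from FP16 to FP8 doubles peak Tensor Core throughput.
speedup = H100_SXM_TFLOPS["FP8 Tensor Core"] / H100_SXM_TFLOPS["FP16 Tensor Core"]
print(f"FP8 vs FP16 peak speedup: {speedup:.2f}x")  # → 2.00x
```

These are peak figures; sustained throughput depends on the workload, but the relative ordering between precisions holds in practice.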

Trusted by top-tier teams worldwide

Serious power for serious workloads

We’re trusted by ambitious startups and major enterprises around the world to deliver scalable, sustainable infrastructure without the hidden costs or headaches of hyperscalers.

Regent Lee

Professor of Interdisciplinary Innovations, University of Oxford

"Civo gives us the flexibility and performance we need to train our AI models at scale... a powerful example of how secure, sovereign infrastructure can enable positive change in healthcare."

Daniel Miodovnik

Chief Operating Officer

"We're using Civo's GPUs to develop world-leading AI models to discover new materials and develop hardware solutions to the biggest challenges in data centres."

James Faure

Co-Founder & CEO

"Our experience with Civo has been outstanding. We're treated as a customer, not just a consumer, with a highly innovative and customer-centric service."

Anuraag Gutgutia

Founder

"The dedication and expertise of Civo's support team have been standout. They've been instrumental in helping us navigate any challenges, always with a focus on our long-term success."


Resources

Built for insight and innovation

Get in touch

No lock-ins. No let-downs.

Run AI Cloud workloads and save over 50% compared to the big 3 hyperscalers with Civo.


FAQs

Frequently asked questions

Find out more about H100 GPUs at Civo and how they can help you.

What workloads is the NVIDIA H100 best suited for?

The NVIDIA H100 is designed for cutting-edge AI and high-performance computing (HPC) workloads. It excels in large-scale deep learning training, AI model inference, high-performance data analytics, and scientific computing. With its Transformer Engine and high-bandwidth memory, the H100 is particularly well-suited for training and fine-tuning large language models (LLMs), generative AI applications, and complex simulations.
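For a rough sense of scale, LLM training time can be estimated with the widely used ~6 × parameters × tokens FLOPs heuristic for transformer training, combined with the H100 SXM BF16 peak from the specifications above. The 40% utilization figure below is an illustrative assumption, not a measured benchmark, and real runs vary with model, parallelism strategy, and interconnect.

```python
# Back-of-the-envelope LLM training-time estimate on H100s, using the
# common ~6 * params * tokens FLOPs heuristic for transformer training.
PEAK_BF16_TFLOPS = 1_979      # H100 SXM peak from the spec table
UTILIZATION = 0.40            # assumed sustained fraction of peak (illustrative)

def training_days(params: float, tokens: float, num_gpus: int) -> float:
    """Estimated wall-clock days to train a model of `params` parameters
    on `tokens` tokens across `num_gpus` H100 SXM GPUs."""
    total_flops = 6 * params * tokens
    flops_per_sec = num_gpus * PEAK_BF16_TFLOPS * 1e12 * UTILIZATION
    return total_flops / flops_per_sec / 86_400  # seconds per day

# Example: a 7B-parameter model trained on 1T tokens across 64 GPUs.
print(f"{training_days(7e9, 1e12, 64):.1f} days")  # ≈ 9.6 days
```

Since the estimate scales inversely with GPU count, the same calculation is a quick way to size a cluster for a target training deadline.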


Have doubts? Check our comparison of NVIDIA’s Next-Gen GPUs.