Cloud GPU-powered compute and Kubernetes

High-performance GPUs for machine learning, scientific computing, and generative AI

  • Powered by industry gold standard NVIDIA GPUs
  • Zero vendor lock-in, ensuring a seamless workflow
  • Transparent pricing for cost-optimized budgeting and fair access
  • Plug and play for smooth integration into your existing infrastructure
  • Scale effortlessly from startup to enterprise

Accelerate your ML and AI projects

From single GPU Compute Instances to scalable GPU Kubernetes clusters.

GPU Kubernetes clusters

The adaptability of Kubernetes meets GPU firepower. Designed for vast datasets and intricate models, our GPU clusters scale easily and deliver strong ROI by supporting multiple GPUs efficiently.

GPU Compute instances

Compute instances powered by our NVIDIA GPUs. Perfect for individual workloads and experiments, machine learning model testing, and graphics rendering.

Cloud GPUs the Civo way

At Civo we're cloud native “all the way down”. We don't rely on legacy infrastructure for our custom-built stack. This means you get a streamlined developer experience, with industry-leading hardware, at a fair price.

Civo NVIDIA GPUs

Choose from our range of highly performant NVIDIA GPUs, helping you redefine the boundaries of AI, HPC, and graphics performance.

NVIDIA A100 Tensor Core GPU

Setting the standard for AI and graphics excellence, the A100 empowers data centers with unparalleled performance and efficiency.

Find out more

NVIDIA L40S GPU

The ultimate fusion of AI and graphics performance, designed for complex data center workloads and built on the revolutionary Ada Lovelace architecture for multi-workload acceleration.

Find out more

NVIDIA GH200 Grace Hopper Superchip

A monumental leap in AI and HPC with its extraordinary HBM3e memory and processing prowess, tailored for generative AI and expansive high-performance tasks.

Reserve now

NVIDIA H100 Tensor Core GPU

A leader in AI acceleration, leveraging the advanced Hopper architecture for groundbreaking AI training and inference capabilities in data centers.

Reserve now

Cloud GPU pricing

You can launch NVIDIA GPUs on our Compute and Kubernetes services directly via our dashboard or CLI.
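As a rough sketch of what launching via the API looks like, the Python snippet below creates a single GPU instance with the requests library. The endpoint path, field names, region slug, and GPU size code are assumptions for illustration; check the Civo API documentation and the CLI for the exact values.

```python
# Illustrative sketch only: launching a GPU compute instance through the
# Civo API with Python's requests library. The endpoint path, field names,
# region slug, and GPU size code below are assumptions -- confirm them
# against the Civo API documentation before use.
import os
import requests

CIVO_API = "https://api.civo.com/v2"
token = os.environ["CIVO_TOKEN"]  # assumed: your API key exported as CIVO_TOKEN

payload = {
    "hostname": "ml-training-01",
    "size": "g4g.l40s.x1",  # hypothetical size code for 1 x NVIDIA L40S
    "region": "LON1",       # assumed region slug
}

resp = requests.post(
    f"{CIVO_API}/instances",
    headers={"Authorization": f"bearer {token}"},
    data=payload,
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # instance details, including ID and initial status
```

The dashboard and CLI reach the same result without any code.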

NVIDIA L40S 48GB GPU pricing

Size         GPU                       CPU        RAM      Storage      Data Transfer   Price
Small        1 x NVIDIA L40S - 48GB    8 Cores    64 GB    200GB NVMe   FREE            $1,200 per month
Medium       2 x NVIDIA L40S - 48GB    16 Cores   128 GB   400GB NVMe   FREE            $2,400 per month
Large        4 x NVIDIA L40S - 48GB    32 Cores   255 GB   400GB NVMe   FREE            $4,800 per month
Extra Large  8 x NVIDIA L40S - 48GB    64 Cores   512 GB   400GB NVMe   FREE            $9,600 per month

NVIDIA A100 40GB GPU pricing

Size         GPU                       CPU        RAM      Storage      Data Transfer   Price
Small        1 x NVIDIA A100 - 40GB    8 Cores    64 GB    200GB NVMe   FREE            $1,200 per month
Medium       2 x NVIDIA A100 - 40GB    16 Cores   128 GB   400GB NVMe   FREE            $2,400 per month
Large        4 x NVIDIA A100 - 40GB    32 Cores   255 GB   400GB NVMe   FREE            $4,800 per month
Extra Large  8 x NVIDIA A100 - 40GB    64 Cores   512 GB   400GB NVMe   FREE            $9,600 per month

NVIDIA A100 80GB GPU pricing

Size         GPU                       CPU        RAM       Storage      Data Transfer   Price
Small        1 x NVIDIA A100 - 80GB    12 Cores   128 GB    100GB NVMe   FREE            $1,600 per month
Medium       2 x NVIDIA A100 - 80GB    24 Cores   256 GB    100GB NVMe   FREE            $3,200 per month
Large        4 x NVIDIA A100 - 80GB    48 Cores   512 GB    100GB NVMe   FREE            $6,400 per month
Extra Large  8 x NVIDIA A100 - 80GB    96 Cores   1024 GB   100GB NVMe   FREE            $12,800 per month

Go green with our Deep Green GPUs

Leverage the power of eco-friendly GPUs. Civo, in collaboration with Deep Green, offers you a sustainable cloud computing solution.

  • Sustainable solution

    Run workloads on 100% renewable energy systems.

  • Zero carbon heat

    90% of server heat benefits community projects.

  • Competitive pricing

    Make the switch to green computing with no change in price.

  • Efficient use

    Dual-purpose waste heat for cloud and community.

Find out more about Deep Green GPUs

Frequently Asked Questions


How do I choose the right GPU?

Choosing the right GPU among the NVIDIA A100, L40S, H100, and GH200 Grace Hopper depends on your specific needs and use case:

  • NVIDIA A100: Delivering unmatched AI and graphics performance with cutting-edge technology, the NVIDIA A100 is the pinnacle of innovation for demanding workloads in AI, data analytics, and scientific computing.
  • NVIDIA L40S: Ideal for those who need a balanced blend of AI and graphics performance. Powered by the NVIDIA Ada Lovelace architecture with 48 GB GDDR6 memory, it's particularly well-suited for tasks that require both intricate AI computations and advanced graphics processing, like 3D graphics, rendering, and large language model training.
  • NVIDIA H100: Best suited for intensive AI training and inference tasks. It offers 80GB of HBM2e or HBM3 memory and is equipped with fourth-generation Tensor Cores, making it a superior choice for developing and deploying large AI models like chatbots and recommendation engines. The H100 is also designed for scalable multi-GPU systems, ensuring high performance and robust security for data center operations.
  • NVIDIA GH200 Grace Hopper™ Superchip: The go-to choice for cutting-edge AI and high-performance computing (HPC) workloads. It features an integrated CPU-GPU architecture with a high-bandwidth NVLink C2C interconnect, enhancing performance for AI and HPC tasks. With its 72-core Grace CPU, H100 Tensor Core GPU, and up to 624GB of memory, it's tailored for generative AI, large-scale AI inference, or scientific computing that demands the highest level of memory and processing power.

Your choice should be based on the specific balance of AI, graphics, and HPC needs in your projects, considering factors like memory requirements, processing power, and the nature of the tasks (AI training/inference, graphics rendering, HPC tasks, etc.).


Are your GPU products climate-friendly?

At Civo, we are dedicated to providing climate-friendly GPU products. Our sustainable cloud solution, powered by 100% renewable energy, runs on Deep Green's carbon-neutral data center. We effectively capture 90% of the heat generated by our servers, repurposing it for community-based projects, thus reducing our carbon footprint and contributing to a sustainable future.

As the exclusive cloud partner of Deep Green, we offer state-of-the-art GPU-powered servers that align with environmental responsibility. We believe in making sustainability accessible; hence, we've maintained competitive pricing across all regions to encourage more businesses to adopt eco-friendly computing solutions.


Can I scale these GPUs for larger projects?

Yes, the NVIDIA A100, L40S, H100, and GH200 Grace Hopper GPUs can be scaled for larger projects. These GPUs are designed with scalability in mind, allowing them to be integrated into multi-GPU systems. This capability is crucial for handling more complex or larger-scale computational tasks, such as extensive AI training, large language model processing, or high-performance computing. By leveraging scalable architectures and technologies, these GPUs can work in tandem, providing the increased processing power, memory bandwidth, and efficiency needed for expansive and resource-intensive projects.
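As a loose illustration of what that scaling looks like from inside a multi-GPU instance, the PyTorch sketch below uses every visible NVIDIA GPU; the model and batch are placeholders, and PyTorch is assumed to be installed with CUDA support.

```python
# Minimal PyTorch sketch: use every NVIDIA GPU visible on the instance.
# The model and data here are placeholders; swap in your own workload.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
num_gpus = torch.cuda.device_count()
print(f"Visible GPUs: {num_gpus}")

model = nn.Linear(1024, 10)        # placeholder model
if num_gpus > 1:
    model = nn.DataParallel(model)  # spread each batch across all GPUs
model = model.to(device)

batch = torch.randn(256, 1024, device=device)  # placeholder batch
output = model(batch)
print(output.shape)                # torch.Size([256, 10])
```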


What are the use cases for Civo Cloud GPUs?

Leverage the power of NVIDIA GPUs to excel in a variety of projects. Whether you need single GPU instances for tasks like AI model training and high-performance computing, or scalable GPU Kubernetes clusters for handling complex machine learning models and large datasets, Civo Cloud GPUs offer unparalleled performance and versatility.


How do I get started with Civo Cloud GPUs?

For new users, sign up and receive $250 in free credits. Existing users can simply launch their Cloud GPU via the dashboard or CLI.
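Once an instance is running, a quick sanity check like the sketch below (assuming PyTorch with CUDA support is installed on the instance; nvidia-smi works just as well) confirms the GPU is visible before you start a real workload.

```python
# Quick sanity check after launching a Cloud GPU instance: confirm the
# NVIDIA device is visible to your framework before starting real work.
import torch

if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        print(f"GPU {i}: {torch.cuda.get_device_name(i)}")
else:
    print("No GPU visible -- check the NVIDIA drivers on this instance.")
```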


How do I integrate Cloud GPUs into my existing workflow?

Civo offers plug-and-play integration and easy configuration, ensuring users familiar with our existing instance services feel right at home. With our setup, it's effortless to scale from startup to hyperscale without any vendor lock-in.
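For teams already on Kubernetes, integration usually comes down to the standard GPU resource request. The sketch below, which assumes your kubeconfig already points at a Civo GPU cluster and uses a placeholder image and pod name, submits a single-GPU pod with the official Kubernetes Python client.

```python
# Sketch: scheduling a single-GPU pod on an existing GPU Kubernetes cluster
# with the official Kubernetes Python client. Assumes your kubeconfig points
# at the cluster; the image and pod name are placeholders.
from kubernetes import client, config

config.load_kube_config()  # uses the current kubeconfig context
api = client.CoreV1Api()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="gpu-smoke-test"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="cuda",
                image="nvidia/cuda:12.2.0-base-ubuntu22.04",  # placeholder image
                command=["nvidia-smi"],
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "1"}  # request one NVIDIA GPU
                ),
            )
        ],
    ),
)

api.create_namespaced_pod(namespace="default", body=pod)
print("Pod submitted; check the pod logs for nvidia-smi output.")
```

The same request works from a plain YAML manifest applied with kubectl; the key part is the nvidia.com/gpu limit, which tells the scheduler to place the pod on a GPU node.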