Connecting workloads across clouds has become a common requirement for teams that rely on hybrid environments. Organizations migrating to Civo Cloud, or expanding into it, often need a stable way to reach private AWS resources without exposing anything publicly. But this is where challenges appear quickly.

Most teams discover that cross-cloud connectivity is often expensive, complicated, and filled with vendor-specific requirements. Options such as AWS Direct Connect, proprietary firewall appliances, or managed VPN services add cost and administrative overhead that small teams don't always want to take on.

Fortunately, there's a practical answer: a StrongSwan-powered site-to-site VPN running on a Civo VM that connects directly to an AWS VPC. This setup creates a secure tunnel where both cloud networks can communicate as if they belonged to the same private environment.

This tutorial provides a complete, end-to-end walkthrough—from planning IP address space and configuring AWS networking components, to deploying StrongSwan on Civo, establishing redundant IPsec tunnels, and extending connectivity to additional virtual machines.

Why cross-cloud private connectivity matters

Many organizations reach a point where hosting everything in one cloud is no longer viable. New products might be easier to run in a different provider, or teams may want to migrate workloads gradually while keeping backend systems in AWS.

In most of these scenarios, engineers face a similar challenge:

“How do we allow Civo workloads to reach AWS private resources without placing those resources on the public internet?”

A public endpoint is often unacceptable for:

  • Internal APIs
  • Databases
  • Legacy systems
  • Sensitive or regulated workloads

A site-to-site VPN addresses this problem by securely linking two private networks, so they behave as a single routed environment.

AWS supports IPsec-based VPNs via a Virtual Private Gateway (VGW). Civo, meanwhile, provides full control over virtual machines and networking, making StrongSwan a flexible and cost-effective VPN solution.

Prerequisites

To get started with this tutorial, you will need the following in place:

  • An AWS account with permissions to create VPC, subnet, and VPN resources
  • A Civo account with a virtual machine that has both a private address on your Civo network and a public IP
  • SSH access to that Civo VM with sudo privileges
  • Non-overlapping CIDR ranges planned for the AWS VPC and the Civo network

The result is a secure IPsec tunnel that routes private traffic between Civo and AWS.

Planning the network layout

A stable VPN configuration begins with clean network planning. The most important rule is simple:

Your AWS VPC CIDR and Civo network CIDR must not overlap.

The following reference layout is used throughout this tutorial:

Component | Example IP / CIDR | Notes
AWS VPC | 10.10.0.0/16 | Primary AWS private network
AWS resource subnet | 10.10.1.0/24 | EC2 and application workloads
AWS instance IP | 10.10.1.10 | Example private VM
AWS database subnet | 10.10.2.0/24 | Private database network
AWS database IP | 10.10.2.106 | Example private database
Civo network | 192.168.50.0/24 | Civo private network
Civo StrongSwan VM (private) | 192.168.50.5 | VPN gateway local address
Civo StrongSwan VM (public) | X.Y.Z.W | VPN gateway public address

These CIDRs must remain consistent across AWS configuration, StrongSwan setup, and static routing.
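
If you want to double-check before proceeding, the following commands confirm the CIDRs actually in use on each side (the aws invocation assumes the AWS CLI is installed and credentials are configured; it is optional for the rest of this tutorial):

# On the Civo VM: show the private address and routes
ip -4 addr show
ip route

# From any machine with AWS CLI access: list VPC CIDRs
aws ec2 describe-vpcs --query 'Vpcs[].{Id:VpcId,Cidr:CidrBlock}' --output table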

Preparing the AWS side

This entire setup depends on defining the AWS components correctly. You will need a VPC, private subnets, the private workloads themselves, and the VPN building blocks: a Customer Gateway, a Virtual Private Gateway, and a Site-to-Site VPN connection. Each is created in the steps below.

Create the AWS VPC

In the AWS console:

  • Navigate to VPC → Create VPC
  • Name: mumbai-main-vpc
  • IPv4 CIDR: 10.10.0.0/16
  • Tenancy: Default
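
If you prefer the AWS CLI, a roughly equivalent command looks like this (optional; the console steps above achieve the same result):

aws ec2 create-vpc \
    --cidr-block 10.10.0.0/16 \
    --tag-specifications 'ResourceType=vpc,Tags=[{Key=Name,Value=mumbai-main-vpc}]'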

Create subnets

You’ll need at least two subnets (the database subnet should not require internet access):

Subnet | CIDR | Purpose
Resource subnet | 10.10.1.0/24 | Application and EC2 workloads
Database subnet | 10.10.2.0/24 | Private database resources
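
A CLI sketch of the same step, using a placeholder for the VPC ID created above:

# Resource subnet
aws ec2 create-subnet --vpc-id <vpc-id> --cidr-block 10.10.1.0/24

# Database subnet (no internet access required)
aws ec2 create-subnet --vpc-id <vpc-id> --cidr-block 10.10.2.0/24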

(Optional) Internet gateway

An internet gateway is only required if some AWS resources need outbound internet access; it is not required for the VPN itself. For more detail, refer to the AWS internet gateway documentation.
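
If you do need one, the CLI equivalent is roughly (IDs are placeholders):

aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id <igw-id> --vpc-id <vpc-id>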

Launch AWS private resources

Example configuration:

  • Application instance
    • Subnet: 10.10.1.0/24
    • IP: 10.10.1.10
    • Security group: allow traffic only from 192.168.50.0/24
  • Private database
    • Subnet: 10.10.2.0/24
    • IP: 10.10.2.106
    • Security group: allow database port (for example, 3306 or 5432) from 192.168.50.0/24

This ensures private access exclusively through the VPN tunnel.
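
As a sketch, the database security group rule from the list above could be added via the CLI like this (the group ID is a placeholder, and 5432 assumes PostgreSQL; use 3306 for MySQL):

aws ec2 authorize-security-group-ingress \
    --group-id <sg-id> \
    --protocol tcp \
    --port 5432 \
    --cidr 192.168.50.0/24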

Creating AWS VPN components

AWS must now be configured to recognize the Civo VPN endpoint.

Create a Customer Gateway (CGW)

  • Console → VPC → Customer Gateways → Create
  • Name: civo-strongswan-cgw
  • IP address: Public IP of the Civo VM
  • Routing: Static
  • BGP ASN: Any valid value
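
The CLI equivalent looks roughly like this (the public IP is your Civo VM's address; 65000 is an example private ASN, unused with static routing but still required by the API):

aws ec2 create-customer-gateway \
    --type ipsec.1 \
    --public-ip <civo-vm-public-ip> \
    --bgp-asn 65000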

Create a Virtual Private Gateway (VGW)

  • Console → VPC → Virtual Private Gateways → Create
  • Name: mumbai-vgw
  • Attach it to the VPC
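
Or, roughly, via the CLI (IDs are placeholders):

aws ec2 create-vpn-gateway --type ipsec.1
aws ec2 attach-vpn-gateway --vpn-gateway-id <vgw-id> --vpc-id <vpc-id>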

Create the site-to-site VPN

  • Console → VPC → Site-to-Site VPN Connections → Create
  • Name: aws-civo-vpn
  • Target: mumbai-vgw
  • Customer Gateway: civo-strongswan-cgw
  • Routing: Static
  • Static Route: 192.168.50.0/24 (Civo CIDR)

Wait for it to enter the Available state. Also make sure the route tables associated with your VPC subnets can reach the Civo network: either enable route propagation from the VGW or add a static route for 192.168.50.0/24 pointing at the VGW.
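
A CLI sketch of the connection and the return route, with placeholder IDs:

# Create the VPN connection with static routing
aws ec2 create-vpn-connection \
    --type ipsec.1 \
    --customer-gateway-id <cgw-id> \
    --vpn-gateway-id <vgw-id> \
    --options '{"StaticRoutesOnly":true}'

# Tell AWS which remote CIDR lives behind the customer gateway
aws ec2 create-vpn-connection-route \
    --vpn-connection-id <vpn-id> \
    --destination-cidr-block 192.168.50.0/24

# Let the subnet route tables learn routes from the VGW
aws ec2 enable-vgw-route-propagation --route-table-id <rtb-id> --gateway-id <vgw-id>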

Download the VPN configuration file

Choose:

  • Vendor: strongSwan
  • Platform: Ubuntu
  • IKE version: IKEv2

This file contains tunnel endpoints, pre-shared keys, encryption parameters, and tunnel interface IPs used later in the configuration.
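
The vendor-specific strongSwan file is a console download, but the same tunnel endpoints and pre-shared keys are also available as generic XML through the CLI if you ever need to re-check them:

aws ec2 describe-vpn-connections \
    --vpn-connection-ids <vpn-id> \
    --query 'VpnConnections[0].CustomerGatewayConfiguration' \
    --output text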

Deploying StrongSwan on Civo

A single Civo virtual machine acts as the VPN gateway. StrongSwan supports route-based VPNs using VTI interfaces, which align well with AWS VGW requirements.

The provided script is a production-ready deployment tool that:

  • Installs StrongSwan
  • Enables IP forwarding
  • Configures two redundant tunnels
  • Creates VTI interfaces
  • Applies static routes and firewall rules
  • Starts and validates both tunnels

Rather than duplicating the entire script inline, this guide focuses on how it works and how to adapt it safely.

Key variables to update

Inside the script, update:

  • PUBLIC_IP
  • LOCAL_SUBNET
  • AWS_CIDR
  • AWS_T1_PUBLIC, AWS_T2_PUBLIC
  • AWS_T1_PSK, AWS_T2_PSK
  • T1_LOCAL_IP, T1_REMOTE_IP, T2_LOCAL_IP, T2_REMOTE_IP

These values all come directly from the AWS VPN configuration file.

Setup-vpn.sh

#!/bin/bash

##############################################
# AWS VPN TUNNEL SETUP SCRIPT (Route-Based)
# 
# This script sets up a route-based IPsec VPN tunnel between
# a Civo VM and AWS VPC using strongSwan.
#
# TO USE ON A NEW VM:
# 1. Update the variables below with your values
# 2. Run: sudo bash script.sh
# 3. The script is idempotent - safe to run multiple times
#
##############################################

##############################################
#  CHANGE ONLY THESE VALUES FOR NEW VM
##############################################

PUBLIC_IP="212.2.249.102"          # Your Civo VM PUBLIC IP (must match AWS Customer Gateway) `leftid`
LOCAL_SUBNET="192.168.50.0/24"     # Your Civo network CIDR `leftsubnet`
AWS_CIDR="10.10.0.0/16"            # AWS VPC CIDR `rightsubnet`

# AWS VPN Tunnel Endpoints (from AWS VPN config file)
AWS_T1_PUBLIC="13.235.15.233"      # AWS Tunnel 1 Public IP
AWS_T2_PUBLIC="65.0.248.54"        # AWS Tunnel 2 Public IP

# Pre-Shared Keys (from AWS VPN config file)
AWS_T1_PSK="xxxxx"
AWS_T2_PSK="xxxxx"

# Route-based VPN tunnel IPs (from AWS VPN config file)
# These are the 169.254.x.x addresses for the tunnel interfaces
T1_LOCAL_IP="169.254.26.210/30"    # Tunnel1 local IP
T1_REMOTE_IP="169.254.26.209/30"   # Tunnel1 remote IP
T2_LOCAL_IP="169.254.19.106/30"    # Tunnel2 local IP
T2_REMOTE_IP="169.254.19.105/30"   # Tunnel2 remote IP

# Get physical interface name (usually eth0 or ens5)
PHYS_INTERFACE=$(ip route | grep default | awk '{print $5}' | head -1)
if [ -z "$PHYS_INTERFACE" ]; then
    PHYS_INTERFACE="eth0"  # fallback
fi

##############################################
# INSTALL STRONGSWAN
##############################################
echo " Installing strongSwan..."
sudo apt update && sudo apt install -y strongswan netfilter-persistent

##############################################
#  ENABLE IP FORWARDING
##############################################
echo "Enabling IP forwarding..."
echo "net.ipv4.ip_forward=1" | sudo tee /etc/sysctl.d/99-ipforward.conf
echo "net.ipv4.conf.all.accept_redirects=0" | sudo tee -a /etc/sysctl.d/99-ipforward.conf
echo "net.ipv4.conf.all.send_redirects=0" | sudo tee -a /etc/sysctl.d/99-ipforward.conf
sudo sysctl -p

##############################################
#  CREATE /etc/ipsec.conf  (ROUTE-BASED VPN)
##############################################
echo " Writing /etc/ipsec.conf..."
sudo tee /etc/ipsec.conf > /dev/null << EOF
config setup
    charondebug="all"
    uniqueids=yes
    strictcrlpolicy=no

conn Tunnel1
    type=tunnel
    auto=add
    keyexchange=ikev2
    authby=psk

    left=%any
    leftid=$PUBLIC_IP
    leftsubnet=$LOCAL_SUBNET

    right=$AWS_T1_PUBLIC
    rightsubnet=$AWS_CIDR

    aggressive=no
    ikelifetime=28800s
    lifetime=3600s
    margintime=270s
    rekey=yes
    rekeyfuzz=100%
    fragmentation=yes
    replay_window=1024
    dpddelay=30s
    dpdtimeout=120s
    dpdaction=restart
    ike=aes128-sha1-modp1024
    esp=aes128-sha1-modp1024
    keyingtries=%forever
    mark=100
    leftupdown="/etc/ipsec.d/aws-updown.sh -ln Tunnel1 -ll $T1_LOCAL_IP -lr ${T1_REMOTE_IP%/*}/30 -m 100 -r $AWS_CIDR"

conn Tunnel2
    type=tunnel
    auto=add
    keyexchange=ikev2
    authby=psk

    left=%any
    leftid=$PUBLIC_IP
    leftsubnet=$LOCAL_SUBNET

    right=$AWS_T2_PUBLIC
    rightsubnet=$AWS_CIDR

    aggressive=no
    ikelifetime=28800s
    lifetime=3600s
    margintime=270s
    rekey=yes
    rekeyfuzz=100%
    fragmentation=yes
    replay_window=1024
    dpddelay=30s
    dpdtimeout=120s
    dpdaction=restart
    ike=aes128-sha1-modp1024
    esp=aes128-sha1-modp1024
    keyingtries=%forever
    mark=200
    leftupdown="/etc/ipsec.d/aws-updown.sh -ln Tunnel2 -ll $T2_LOCAL_IP -lr ${T2_REMOTE_IP%/*}/30 -m 200 -r $AWS_CIDR"
EOF

##############################################
#  CREATE PSK FILE
##############################################
echo " Writing /etc/ipsec.secrets..."
sudo tee /etc/ipsec.secrets > /dev/null << EOF
$PUBLIC_IP $AWS_T1_PUBLIC : PSK "$AWS_T1_PSK"
$PUBLIC_IP $AWS_T2_PUBLIC : PSK "$AWS_T2_PSK"
EOF

##############################################
# CREATE AWS UPDOWN SCRIPT
##############################################
echo " Creating AWS updown script..."
sudo mkdir -p /etc/ipsec.d
sudo tee /etc/ipsec.d/aws-updown.sh > /dev/null << 'UPSCRIPT'
#!/bin/bash

while [[ $# -gt 1 ]]; do
    case ${1} in
        -ln|--link-name)
            TUNNEL_NAME="${2}"
            TUNNEL_PHY_INTERFACE="${PLUTO_INTERFACE}"
            shift
            ;;
        -ll|--link-local)
            TUNNEL_LOCAL_ADDRESS="${2}"
            TUNNEL_LOCAL_ENDPOINT="${PLUTO_ME}"
            shift
            ;;
        -lr|--link-remote)
            TUNNEL_REMOTE_ADDRESS="${2}"
            TUNNEL_REMOTE_ENDPOINT="${PLUTO_PEER}"
            shift
            ;;
        -m|--mark)
            TUNNEL_MARK="${2}"
            shift
            ;;
        -r|--static-route)
            TUNNEL_STATIC_ROUTE="${2}"
            shift
            ;;
        *)
            echo "${0}: Unknown argument \"${1}\"" >&2
            ;;
    esac
    shift
done

command_exists() {
    type "$1" > /dev/null 2>&1
}

create_interface() {
    ip link add ${TUNNEL_NAME} type vti local ${TUNNEL_LOCAL_ENDPOINT} remote ${TUNNEL_REMOTE_ENDPOINT} key ${TUNNEL_MARK} 2>/dev/null || true
    ip addr add ${TUNNEL_LOCAL_ADDRESS} dev ${TUNNEL_NAME} 2>/dev/null || true
    ip link set ${TUNNEL_NAME} up mtu 1419
}

configure_sysctl() {
    sysctl -w net.ipv4.ip_forward=1
    sysctl -w net.ipv4.conf.${TUNNEL_NAME}.rp_filter=2
    sysctl -w net.ipv4.conf.${TUNNEL_NAME}.disable_policy=1
    sysctl -w net.ipv4.conf.${TUNNEL_PHY_INTERFACE}.disable_xfrm=1
    sysctl -w net.ipv4.conf.${TUNNEL_PHY_INTERFACE}.disable_policy=1
}

add_route() {
    # Get local IP from the physical interface
    PHYS_IF=$(ip route | grep default | awk '{print $5}' | head -1)
    LOCAL_IP=$(ip addr show ${PHYS_IF} 2>/dev/null | grep "inet " | awk '{print $2}' | cut -d/ -f1)
    if [ -z "$LOCAL_IP" ]; then
        LOCAL_IP="${TUNNEL_LOCAL_ENDPOINT}"
    fi
    
    IFS=',' read -ra route <<< "${TUNNEL_STATIC_ROUTE}"
    for i in "${route[@]}"; do
        # Remove existing route first, then add with source IP
        ip route del ${i} dev ${TUNNEL_NAME} 2>/dev/null || true
        ip route add ${i} dev ${TUNNEL_NAME} src ${LOCAL_IP} metric ${TUNNEL_MARK} 2>/dev/null || true
    done
    iptables -t mangle -C FORWARD -o ${TUNNEL_NAME} -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu 2>/dev/null || \
        iptables -t mangle -A FORWARD -o ${TUNNEL_NAME} -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
    iptables -t mangle -C INPUT -p esp -s ${TUNNEL_REMOTE_ENDPOINT} -d ${TUNNEL_LOCAL_ENDPOINT} -j MARK --set-xmark ${TUNNEL_MARK} 2>/dev/null || \
        iptables -t mangle -A INPUT -p esp -s ${TUNNEL_REMOTE_ENDPOINT} -d ${TUNNEL_LOCAL_ENDPOINT} -j MARK --set-xmark ${TUNNEL_MARK}
    # Mark outgoing traffic based on route
    iptables -t mangle -C OUTPUT -o ${TUNNEL_NAME} -j MARK --set-xmark ${TUNNEL_MARK} 2>/dev/null || \
        iptables -t mangle -A OUTPUT -o ${TUNNEL_NAME} -j MARK --set-xmark ${TUNNEL_MARK}
    ip route flush table 220 2>/dev/null || true
}

cleanup() {
    IFS=',' read -ra route <<< "${TUNNEL_STATIC_ROUTE}"
    for i in "${route[@]}"; do
        ip route del ${i} dev ${TUNNEL_NAME} metric ${TUNNEL_MARK} 2>/dev/null || true
    done
    iptables -t mangle -D FORWARD -o ${TUNNEL_NAME} -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu 2>/dev/null || true
    iptables -t mangle -D INPUT -p esp -s ${TUNNEL_REMOTE_ENDPOINT} -d ${TUNNEL_LOCAL_ENDPOINT} -j MARK --set-xmark ${TUNNEL_MARK} 2>/dev/null || true
    iptables -t mangle -D OUTPUT -o ${TUNNEL_NAME} -j MARK --set-xmark ${TUNNEL_MARK} 2>/dev/null || true
    ip route flush cache 2>/dev/null || true
}

delete_interface() {
    ip link set ${TUNNEL_NAME} down 2>/dev/null || true
    ip link del ${TUNNEL_NAME} 2>/dev/null || true
}

case "${PLUTO_VERB}" in
    up-client)
        create_interface
        configure_sysctl
        add_route
        ;;
    down-client)
        cleanup
        delete_interface
        ;;
esac
UPSCRIPT
sudo chmod 744 /etc/ipsec.d/aws-updown.sh

##############################################
# CONFIGURE CHARON (DISABLE AUTO ROUTES)
##############################################
echo "  Configuring charon.conf..."
if [ -f /etc/strongswan.d/charon.conf ]; then
    sudo sed -i 's/# install_routes = yes/install_routes = no/' /etc/strongswan.d/charon.conf
    if ! grep -q "install_routes = no" /etc/strongswan.d/charon.conf; then
        echo "install_routes = no" | sudo tee -a /etc/strongswan.d/charon.conf
    fi
fi

# Note: VTI interfaces will be created by the updown script when tunnels come up

##############################################
#  FLUSH OLD STATE & RESTART VPN
##############################################
echo "♻ Restarting VPN..."
# Clean up any existing routes to AWS CIDR
sudo ip route del $AWS_CIDR via 192.168.50.5 2>/dev/null || true
sudo ip route del $AWS_CIDR dev enp1s0 2>/dev/null || true
sudo ip route del $AWS_CIDR dev $PHYS_INTERFACE 2>/dev/null || true
sudo ip route del $AWS_CIDR dev Tunnel1 2>/dev/null || true
sudo ip route del $AWS_CIDR dev Tunnel2 2>/dev/null || true

sudo ip xfrm state flush
sudo ip xfrm policy flush
sudo ipsec restart
sleep 5

# Start the tunnels
echo " Starting tunnels..."
sudo ipsec up Tunnel1
sudo ipsec up Tunnel2
sleep 3

# Routes are added by the updown script, but verify they have correct source IP
echo "  Verifying routes..."
LOCAL_IP=$(ip addr show $PHYS_INTERFACE | grep "inet " | awk '{print $2}' | cut -d/ -f1)
if [ -n "$LOCAL_IP" ]; then
    # Ensure routes have correct source IP (updown script should have done this, but double-check)
    sudo ip route del $AWS_CIDR dev Tunnel1 2>/dev/null || true
    sudo ip route del $AWS_CIDR dev Tunnel2 2>/dev/null || true
    sudo ip route add $AWS_CIDR dev Tunnel1 src $LOCAL_IP metric 100 2>/dev/null || true
    sudo ip route add $AWS_CIDR dev Tunnel2 src $LOCAL_IP metric 200 2>/dev/null || true
fi

echo " Tunnel status:"
sudo ipsec statusall

##############################################
#  FORWARD TRAFFIC (REQUIRED FOR VM2,3,4...)
##############################################
echo "🔧 Allowing forwarded traffic..."
sudo iptables -C FORWARD -s $LOCAL_SUBNET -d $AWS_CIDR -j ACCEPT 2>/dev/null || \
    sudo iptables -A FORWARD -s $LOCAL_SUBNET -d $AWS_CIDR -j ACCEPT
sudo iptables -C FORWARD -s $AWS_CIDR -d $LOCAL_SUBNET -j ACCEPT 2>/dev/null || \
    sudo iptables -A FORWARD -s $AWS_CIDR -d $LOCAL_SUBNET -j ACCEPT
sudo netfilter-persistent save

##############################################
#  TESTING INSTRUCTIONS FOR YOU
##############################################
echo ""
echo " TESTING COMMANDS:"
echo "-------------------------------------------"
echo "sudo ipsec statusall                    # Check tunnel status"
echo "ip route show                           # Check routes"
echo "ip addr show Tunnel1                    # Check Tunnel1 interface"
echo "ip addr show Tunnel2                    # Check Tunnel2 interface"
echo "sudo tcpdump -n -i any proto esp        # Monitor ESP traffic"
echo "ping -c 4 10.10.2.106                   # Test ping to AWS"
echo "-------------------------------------------"
echo "If you have another VM (192.168.50.10+):"
echo "sudo ip route add 10.10.0.0/16 via <STRONGSWAN_LOCAL_IP>"
echo "-------------------------------------------"
echo " If tcpdump shows ESP packets — TUNNEL IS WORKING!"
echo ""

Running the script

SSH into your Civo VM, then run:

sudo bash Setup-vpn.sh

The script will:

  • Install packages
  • Create /etc/ipsec.conf
  • Create /etc/ipsec.secrets
  • Create /etc/ipsec.d/aws-updown.sh
  • Restart StrongSwan
  • Bring up Tunnel1 and Tunnel2
  • Add routes for AWS CIDR

You should see output indicating that both tunnels come up successfully.

Understanding what the script creates

  • IPsec tunnels: Two tunnels are created, Tunnel1 (mark 100) and Tunnel2 (mark 200). AWS provides dual tunnels for redundancy, and both should remain active.
  • VTI interfaces: Each tunnel creates a virtual interface with a local tunnel IP, a remote tunnel endpoint, and a route priority for failover.
  • Static routing: Routes are added for the entire AWS VPC (10.10.0.0/16 dev Tunnel1 metric 100 and 10.10.0.0/16 dev Tunnel2 metric 200), which enables automatic failover between tunnels.
  • Firewall configuration: The script adds TCP MSS clamping to prevent MTU issues, packet marking for routing consistency, and forwarding rules for bidirectional traffic.
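
To see what was actually applied, you can inspect the routes and firewall rules directly (a quick sketch using standard tooling; interface names assume the defaults above):

ip route show 10.10.0.0/16
sudo iptables -t mangle -S | grep -i tunnel
sudo iptables -S FORWARD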

Testing the VPN

Once the StrongSwan VM is configured, it’s time to verify connectivity.

  • Check tunnel status: run sudo ipsec statusall. Both tunnels should show “INSTALLED”.
  • Check VTI interfaces: run ip addr show Tunnel1 and ip addr show Tunnel2. You should see the 169.254.x.x addresses.
  • Check the routing table: run ip route show and look for 10.10.0.0/16 dev Tunnel1 and 10.10.0.0/16 dev Tunnel2.
  • Check for ESP traffic: run sudo tcpdump -n -i any proto esp. If ESP packets appear while pinging AWS, the tunnel is functioning.
  • Ping an AWS resource: run ping -c 4 10.10.1.10. A working ping indicates the tunnel is active, AWS security groups allow traffic, NAT is not interfering, and route tables are correct.
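
A small loop can exercise both example hosts in one go (addresses come from the reference layout; adjust to your own):

for host in 10.10.1.10 10.10.2.106; do
    ping -c 2 -W 2 "$host" >/dev/null && echo "OK:   $host reachable" || echo "FAIL: $host unreachable"
done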

Extending connectivity to other Civo VMs

Once the VPN gateway is operational, other VMs in the same Civo network can reach AWS resources by adding a single route:

sudo ip route add 10.10.0.0/16 via 192.168.50.5

Test it by running:

ping -c 4 10.10.1.10

If ping works, the routing path is correct.
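
If it does not, confirm the VM is actually using the new route (the address below is the example AWS instance):

ip route get 10.10.1.10

The output should show traffic going via 192.168.50.5.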

Making static routes persistent

Routes added manually disappear after reboot. To make them persistent, use one of the options below.

Option A: Netplan (Ubuntu 18.04+)

Edit:

sudo nano /etc/netplan/50-cloud-init.yaml

Add under your network interface:

network:
  version: 2
  ethernets:
    enp1s0:
      dhcp4: true
      routes:
        - to: 10.10.0.0/16
          via: 192.168.50.5

Apply:

sudo netplan apply

Option B: Older Ubuntu versions

Edit:

sudo nano /etc/network/interfaces

Add this line under the stanza for your private interface:

up ip route add 10.10.0.0/16 via 192.168.50.5

Now AWS routing persists across reboots.
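
After a reboot, you can confirm the route survived with:

ip route show 10.10.0.0/16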

How traffic moves across the VPN

Here’s the flow for a VM in the Civo network:

Civo VM (192.168.50.10)
        ↓
Route: 10.10.0.0/16 via 192.168.50.5
        ↓
StrongSwan VM (VPN Gateway)
        ↓
VTI Tunnel
        ↓
AWS Virtual Private Gateway
        ↓
AWS VPC Resource (10.10.x.x)

This architecture makes the two clouds behave like a single privately routed environment.

Summary

Cross-cloud networking is often associated with complexity, cost, and restrictive vendor tooling. With Civo, StrongSwan, and standard AWS VPN components, it is possible to build a secure, private, and production-ready cloud-to-cloud connection without unnecessary overhead.

This architecture enables:

  • Private database access
  • Secure service-to-service communication
  • Gradual migration from AWS to Civo
  • Hybrid and multi-cloud deployments without public exposure

If you're looking for a straightforward method to connect AWS and Civo in a way that keeps traffic private and under your control, this StrongSwan VPN architecture is a strong option. With proper planning and the script provided earlier, you can have both clouds communicating through a secure tunnel in minutes.