Every architectural decision starts the same way: someone describes a system architecture in plain English, everyone nods, and then someone spends the next hour drawing boxes on a whiteboard. What if we could skip that entirely? Type “React frontend, FastAPI backend, Postgres database, and Redis cache”, and within seconds a structured architecture diagram appears on screen. No drawing boxes. No connecting arrows. Just describe it, and it builds itself. That’s what we are building here.
We'll combine the power of large language models with Civo's sovereign AI platform, relaxAI, to create a full-stack application that turns natural language into architecture diagrams. The backend is FastAPI, the frontend is React, and the whole thing gets deployed to Civo Kubernetes from zero to a live, public URL.
Prerequisites
Before we start building, make sure you have the following tools and accounts ready to go:
- macOS, Linux, or Windows with a terminal
- Python 3.11+ installed
- Docker Desktop installed and running
- A relaxAI API key from Civo's relaxAI platform
- A Civo account with an API key from dashboard.civo.com/security
- A Docker Hub account from hub.docker.com
How it works
The app follows a straightforward flow. The user types an architecture description into the React frontend. Nginx receives the request and proxies it to the FastAPI backend. The backend wraps the description in a carefully crafted system prompt and sends it to relaxAI, which runs an LLM to interpret the text and return structured JSON. That JSON contains nodes (components like services, databases, and caches) and edges (the connections between them). The frontend then takes that data and renders it as a visual diagram.
User → React (Nginx) → FastAPI Backend → relaxAI LLM → Structured JSON → Diagram
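For the example from the introduction (“React frontend, FastAPI backend, Postgres database, and Redis cache”), the structured JSON might look like this. This is illustrative only; the exact IDs, labels, and edge types vary by model:

```json
{
  "nodes": [
    {"id": "react-frontend", "type": "service", "label": "React Frontend", "category": "frontend"},
    {"id": "fastapi-backend", "type": "service", "label": "FastAPI Backend", "category": "backend"},
    {"id": "postgres-db", "type": "database", "label": "PostgreSQL", "category": "data"},
    {"id": "redis-cache", "type": "database", "label": "Redis Cache", "category": "data"}
  ],
  "edges": [
    {"id": "e1", "source": "react-frontend", "target": "fastapi-backend", "label": "HTTP", "type": "sync"},
    {"id": "e2", "source": "fastapi-backend", "target": "postgres-db", "label": "SQL", "type": "sync"},
    {"id": "e3", "source": "fastapi-backend", "target": "redis-cache", "label": "Data Flow", "type": "sync"}
  ]
}
```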
Project structure
Here's how the project is organized: a clean separation between frontend, backend, and deployment config:
architecture-diagram-generator/
├── frontend/
│ ├── src/
│ ├── public/
│ ├── package.json
│ ├── Dockerfile
│ └── nginx.conf
├── backend/
│ ├── main.py
│ ├── requirements.txt
│ └── Dockerfile
├── docker-compose.yml
├── k8s.yaml
└── .env
Step-by-step walkthrough
Let's build this step by step, from setting up the project to deploying it live on Civo Kubernetes.
Step 1: Create and connect to your Civo Kubernetes cluster
- Log in to the Civo Dashboard
- Create a new Kubernetes cluster
- Add a node pool (one node is sufficient for this demo; a GPU pool isn't required, since inference runs on relaxAI's hosted models)
- Download your kubeconfig
- Connect to the cluster by running set KUBECONFIG=path-to-kubeconfig-file in your terminal (use export instead of set on Linux or macOS)
- Verify connectivity by running:
kubectl get nodes
Step 2: Create the project folder
We'll start by setting up the project structure with separate directories for the frontend and backend services.
mkdir architecture-diagram-generator
cd architecture-diagram-generator
mkdir frontend backend
Step 3: Backend
This is the brain of the app. We're using FastAPI to create an API that takes plain-text architecture descriptions, sends them to relaxAI, and returns structured JSON with nodes and edges.
3.1 Create backend/main.py
Start by importing the libraries and defining the data models. These Pydantic models ensure that both the input from the user and the output from the LLM follow a strict structure.
from fastapi import FastAPI, HTTPException
from fastapi.middleware.cors import CORSMiddleware
from pydantic import BaseModel
import json
import os
from openai import OpenAI
# Pydantic Models
class ArchitectureInput(BaseModel):
description: str
model: str = "llama-4-scout"
class DiagramNode(BaseModel):
id: str
type: str
label: str
category: str
class DiagramEdge(BaseModel):
id: str
source: str
target: str
label: str
type: str
class DiagramResponse(BaseModel):
nodes: list[DiagramNode]
edges: list[DiagramEdge]
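To see why these models matter, here's a minimal standalone sketch (separate from main.py, with the models reproduced so it runs on its own) showing how they accept well-formed diagram data and reject incomplete nodes:

```python
from pydantic import BaseModel

# Same models as in main.py, reproduced here so the sketch runs standalone.
class DiagramNode(BaseModel):
    id: str
    type: str
    label: str
    category: str

class DiagramEdge(BaseModel):
    id: str
    source: str
    target: str
    label: str
    type: str

class DiagramResponse(BaseModel):
    nodes: list[DiagramNode]
    edges: list[DiagramEdge]

# Well-formed LLM output parses cleanly...
ok = DiagramResponse(
    nodes=[{"id": "api", "type": "service", "label": "API", "category": "backend"}],
    edges=[],
)
print(ok.nodes[0].label)  # API

# ...while a node missing required fields raises a validation error,
# so malformed LLM output never reaches the frontend.
try:
    DiagramResponse(nodes=[{"id": "api"}], edges=[])
except Exception as exc:
    print(type(exc).__name__)  # ValidationError
```

This is the safety net for the whole pipeline: if the LLM drifts from the schema, the request fails loudly instead of rendering a broken diagram.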
Next, define the system prompt. This is the instruction set that tells the LLM exactly how to format its output. The more precise the prompt, the more consistent the diagrams.
SYSTEM_PROMPT = """You are an expert system architect. Convert user descriptions into structured architecture diagrams.
Output ONLY valid JSON with this exact structure:
{
"nodes": [
{
"id": "unique-id",
"type": "service|database|queue|storage|external|loadbalancer|gateway",
"label": "Short Name",
"category": "frontend|backend|data|infrastructure|external"
}
],
"edges": [
{
"id": "edge-id",
"source": "source-node-id",
"target": "target-node-id",
"label": "HTTP|gRPC|WebSocket|SQL|Message|Data Flow",
"type": "sync|async|bidirectional"
}
]
}
Rules:
1. Create clear, meaningful node IDs (lowercase-with-dashes)
2. Use appropriate types for each component
3. Label edges with the communication protocol or data flow type
4. Include all mentioned components and their relationships
5. Infer implicit connections (e.g., API Gateway connects to services)
6. Add load balancers, caches, or queues if architecture implies them
7. Return ONLY the JSON, no markdown, no explanation"""
Now set up the FastAPI app, enable CORS so the frontend can talk to it, and initialize the relaxAI client. The client uses the OpenAI SDK format since relaxAI's API is compatible with it.
app = FastAPI(title="Architecture Diagram Generator - Powered by relaxAI")
# CORS Middleware
app.add_middleware(
CORSMiddleware,
allow_origins=["*"],
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
)
# Initialize relaxAI client (Civo's Sovereign AI)
client = OpenAI(
api_key=os.getenv("RELAXAI_API_KEY", "not-set"),
base_url="https://api.relax-ai.com/v1"
)
Add a health check endpoint. Kubernetes uses this to know if your pod is alive and ready to receive traffic.
# Endpoints
@app.get("/health")
async def health_check():
"""Health check endpoint for Kubernetes"""
return {
"status": "healthy",
"service": "architecture-diagram-generator",
"ai_provider": "relaxAI (Civo)",
"data_sovereignty": "UK"
}
This is the core endpoint. It takes the user's description, sends it to relaxAI with our system prompt, parses the JSON response, validates it, and returns the structured diagram data.
@app.post("/generate-diagram", response_model=DiagramResponse)
async def generate_diagram(input_data: ArchitectureInput):
"""
Generate an architecture diagram from a text description.
Uses relaxAI - Civo's privacy-first, sovereign AI platform.
"""
try:
# Call relaxAI with structured output
response = client.chat.completions.create(
model=input_data.model,
messages=[
{"role": "system", "content": SYSTEM_PROMPT},
{"role": "user", "content": f"Convert this architecture description to a diagram:\n\n{input_data.description}"}
],
temperature=0.3,
response_format={"type": "json_object"}
)
# Parse the LLM response
diagram_data = json.loads(response.choices[0].message.content)
# Validate the structure
if "nodes" not in diagram_data or "edges" not in diagram_data:
raise ValueError("Invalid diagram structure from LLM")
return DiagramResponse(**diagram_data)
except json.JSONDecodeError as e:
raise HTTPException(status_code=500, detail=f"Failed to parse relaxAI response: {str(e)}")
except Exception as e:
raise HTTPException(status_code=500, detail=f"Error generating diagram with relaxAI: {str(e)}")
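One practical hardening step, not in the code above but a common tweak: some models wrap their JSON in markdown code fences despite the prompt's instructions, which makes a bare json.loads fail. A small helper (the name parse_llm_json is our own) can strip those fences before parsing, and could be called in place of json.loads in the endpoint:

```python
import json
import re

def parse_llm_json(text: str) -> dict:
    """Parse LLM output as JSON, tolerating markdown code fences.

    Some models return ```json ... ``` blocks even when told not to;
    strip a leading/trailing fence before handing the rest to json.loads.
    """
    cleaned = re.sub(r"^```(?:json)?\s*|\s*```$", "", text.strip())
    return json.loads(cleaned)

print(parse_llm_json('```json\n{"nodes": [], "edges": []}\n```'))
# {'nodes': [], 'edges': []}
```

Plain JSON without fences passes through untouched, so the helper is safe to use unconditionally.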
Finally, add the models endpoint and the entry point for local development.
@app.get("/models")
async def list_models():
"""List available relaxAI models"""
return {
"models": [
{"id": "llama-4-scout", "name": "Llama 4 Scout", "provider": "relaxAI"},
{"id": "llama-4-maverick", "name": "Llama 4 Maverick", "provider": "relaxAI"},
{"id": "llama-3.3-70b", "name": "Llama 3.3 70B", "provider": "relaxAI"}
]
}
if __name__ == "__main__":
import uvicorn
uvicorn.run(app, host="0.0.0.0", port=8000)
3.2 Create backend/requirements.txt
fastapi==0.115.0
uvicorn==0.30.6
openai==1.68.0
httpx==0.27.2
pydantic==2.9.2
3.3 Create backend/Dockerfile
This containerizes the backend into a lightweight Python image that runs the FastAPI server on port 8000.
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY main.py .
EXPOSE 8000
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
Step 4: Set up the React frontend
The frontend is a React app that users interact with in the browser. The key detail here is using a relative API URL so that Nginx can proxy requests to the backend seamlessly.
In your React code, make sure the API URL is set like this:
const API_URL = process.env.REACT_APP_API_URL || '';
4.1 Create frontend/Dockerfile
This is a multi-stage build; it compiles the React app with Node, then serves the production build using a lightweight Nginx image.
# Build stage
FROM node:18-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY public ./public
COPY src ./src
RUN npm run build
# Production stage
FROM nginx:alpine
COPY --from=build /app/build /usr/share/nginx/html
COPY nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
4.2 Create frontend/nginx.conf
This is where the magic happens. Nginx serves the React app and reverse-proxies API routes to the backend. The name backend resolves automatically in both Docker Compose and Kubernetes.
server {
listen 80;
server_name localhost;
root /usr/share/nginx/html;
index index.html;
# Proxy API requests to backend
location /generate-diagram {
proxy_pass http://backend:8000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
}
location /health {
proxy_pass http://backend:8000;
proxy_set_header Host $host;
}
location /models {
proxy_pass http://backend:8000;
proxy_set_header Host $host;
}
# Serve React app
location / {
try_files $uri $uri/ /index.html;
}
gzip on;
gzip_vary on;
gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/xml+rss application/json;
location ~* \.(jpg|jpeg|png|gif|ico|css|js|svg|woff|woff2)$ {
expires 1y;
add_header Cache-Control "public, immutable";
}
}
4.3 Create frontend/src/App.js
This is the main component. It handles user input, sends requests to the backend, and renders the architecture diagram using React Flow.
import React, { useState, useCallback } from 'react';
import ReactFlow, {
MiniMap,
Controls,
Background,
useNodesState,
useEdgesState,
addEdge,
MarkerType,
} from 'reactflow';
import 'reactflow/dist/style.css';
import './App.css';
const API_URL = process.env.REACT_APP_API_URL || '';
// Node type colors
const nodeColors = {
service: '#3b82f6',
database: '#8b5cf6',
queue: '#f59e0b',
storage: '#10b981',
external: '#ef4444',
loadbalancer: '#06b6d4',
gateway: '#ec4899',
};
// Category colors for borders
const categoryColors = {
frontend: '#3b82f6',
backend: '#8b5cf6',
data: '#10b981',
infrastructure: '#f59e0b',
external: '#ef4444',
};
function App() {
const [description, setDescription] = useState('');
const [loading, setLoading] = useState(false);
const [error, setError] = useState('');
const [nodes, setNodes, onNodesChange] = useNodesState([]);
const [edges, setEdges, onEdgesChange] = useEdgesState([]);
const [selectedModel, setSelectedModel] = useState('Llama-4-Maverick-17B-128E');
const onConnect = useCallback(
(params) => setEdges((eds) => addEdge(params, eds)),
[setEdges]
);
const generateDiagram = async () => {
if (!description.trim()) {
setError('Please enter an architecture description');
return;
}
setLoading(true);
setError('');
try {
const response = await fetch(`${API_URL}/generate-diagram`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({
description,
model: selectedModel,
}),
});
if (!response.ok) {
throw new Error(`HTTP error! status: ${response.status}`);
}
const data = await response.json();
const flowNodes = data.nodes.map((node, index) => ({
id: node.id,
type: 'default',
data: {
label: (
<div className="node-content">
<div className="node-type">{node.type}</div>
<div className="node-label">{node.label}</div>
</div>
),
},
position: calculatePosition(index, data.nodes.length),
style: {
background: nodeColors[node.type] || '#6b7280',
color: 'white',
border: `3px solid ${categoryColors[node.category] || '#6b7280'}`,
borderRadius: '8px',
padding: '10px',
minWidth: '150px',
},
}));
const flowEdges = data.edges.map((edge) => ({
id: edge.id,
source: edge.source,
target: edge.target,
label: edge.label,
type: edge.type === 'bidirectional' ? 'default' : 'smoothstep',
animated: edge.type === 'async',
markerEnd: {
type: MarkerType.ArrowClosed,
color: '#6b7280',
},
style: {
stroke: edge.type === 'async' ? '#f59e0b' : '#6b7280',
strokeWidth: 2,
},
labelStyle: {
fill: '#1f2937',
fontWeight: 600,
fontSize: '12px',
},
}));
setNodes(flowNodes);
setEdges(flowEdges);
} catch (err) {
setError(`Failed to generate diagram: ${err.message}`);
console.error('Error:', err);
} finally {
setLoading(false);
}
};
const calculatePosition = (index, total) => {
const cols = Math.ceil(Math.sqrt(total));
const row = Math.floor(index / cols);
const col = index % cols;
return {
x: col * 250 + 50,
y: row * 150 + 50,
};
};
const loadExample = () => {
setDescription(`E-commerce platform with:
- React frontend served by Nginx
- API Gateway (Kong) handling routing
- 3 microservices: User Service, Product Service, Order Service
- PostgreSQL database for users and products
- MongoDB for orders
- Redis cache for sessions
- RabbitMQ message queue for order processing
- S3-compatible storage for product images
- External payment gateway (Stripe)
- Load balancer in front of services`);
};
return (
<div className="App">
<div className="sidebar">
<h1>🏗️ Architecture Diagram Generator</h1>
<p className="subtitle">Powered by relaxAI (Civo's Sovereign AI)</p>
<div className="input-section">
<label>Architecture Description</label>
<textarea
value={description}
onChange={(e) => setDescription(e.target.value)}
placeholder="Describe your system architecture in plain text..."
rows={12}
/>
<div className="model-select">
<label>Model</label>
<select
value={selectedModel}
onChange={(e) => setSelectedModel(e.target.value)}
>
<option value="Llama-4-Maverick-17B-128E">Llama 4 Maverick</option>
<option value="DeepSeek-V31-Terminus">DeepSeek V3.1 Terminus</option>
<option value="GPT-OSS-120b">GPT-OSS 120B</option>
<option value="GLM-46">GLM 4.6</option>
</select>
</div>
<div className="button-group">
<button
onClick={generateDiagram}
disabled={loading}
className="generate-btn"
>
{loading ? 'Generating...' : '✨ Generate Diagram'}
</button>
<button onClick={loadExample} className="example-btn">
📝 Load Example
</button>
</div>
{error && <div className="error">{error}</div>}
</div>
<div className="legend">
<h3>Legend</h3>
<div className="legend-section">
<h4>Node Types</h4>
<div className="legend-item" style={{ background: nodeColors.service }}>Service</div>
<div className="legend-item" style={{ background: nodeColors.database }}>Database</div>
<div className="legend-item" style={{ background: nodeColors.queue }}>Queue</div>
<div className="legend-item" style={{ background: nodeColors.storage }}>Storage</div>
<div className="legend-item" style={{ background: nodeColors.external }}>External</div>
<div className="legend-item" style={{ background: nodeColors.loadbalancer }}>Load Balancer</div>
<div className="legend-item" style={{ background: nodeColors.gateway }}>Gateway</div>
</div>
<div className="legend-section">
<h4>Categories (Border)</h4>
<div className="legend-item" style={{ border: `3px solid ${categoryColors.frontend}`, background: 'transparent', color: '#333' }}>Frontend</div>
<div className="legend-item" style={{ border: `3px solid ${categoryColors.backend}`, background: 'transparent', color: '#333' }}>Backend</div>
<div className="legend-item" style={{ border: `3px solid ${categoryColors.data}`, background: 'transparent', color: '#333' }}>Data</div>
<div className="legend-item" style={{ border: `3px solid ${categoryColors.infrastructure}`, background: 'transparent', color: '#333' }}>Infrastructure</div>
</div>
</div>
</div>
<div className="diagram-container">
<ReactFlow
nodes={nodes}
edges={edges}
onNodesChange={onNodesChange}
onEdgesChange={onEdgesChange}
onConnect={onConnect}
fitView
>
<Controls />
<MiniMap />
<Background variant="dots" gap={12} size={1} />
</ReactFlow>
</div>
</div>
);
}
export default App;
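The calculatePosition helper above lays nodes out in a near-square grid: ceil(sqrt(total)) columns, with 250px horizontal and 150px vertical spacing. The same arithmetic, sketched in Python purely for illustration:

```python
import math

def calculate_position(index, total):
    # Arrange nodes in a near-square grid: ceil(sqrt(total)) columns,
    # 250px horizontal / 150px vertical spacing, 50px offset from origin.
    cols = math.ceil(math.sqrt(total))
    row, col = divmod(index, cols)
    return {"x": col * 250 + 50, "y": row * 150 + 50}

# With 5 nodes, cols = 3, so node index 4 lands in row 1, column 1.
print(calculate_position(4, 5))  # {'x': 300, 'y': 200}
```

It's a deliberately simple layout; for denser diagrams you could swap in a proper graph-layout library (dagre and elkjs are common companions to React Flow) without touching the rest of the component.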
Step 5: Configure Docker Compose
Docker Compose ties both services together for local development. It creates an internal network so the frontend container can reach the backend by name. Create docker-compose.yml in the project root:
services:
backend:
build:
context: ./backend
dockerfile: Dockerfile
ports:
- "8000:8000"
environment:
- RELAXAI_API_KEY=${RELAXAI_API_KEY}
networks:
- app-network
frontend:
build:
context: ./frontend
dockerfile: Dockerfile
ports:
- "3000:80"
depends_on:
- backend
networks:
- app-network
networks:
app-network:
driver: bridge
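Compose reads ${RELAXAI_API_KEY} from the .env file listed in the project structure. Create it in the project root (and keep it out of version control):

```
RELAXAI_API_KEY=your-relaxai-api-key-here
```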
Step 6: Test locally
Before deploying anywhere, make sure everything runs on your machine. This builds both containers and starts them together.
docker compose up --build
Step 7: Push Docker images
Civo's Kubernetes cluster needs to pull your images from a public registry. We'll build them for AMD64 (since Civo runs on x86 servers) and push to Docker Hub.
docker login
docker buildx build --platform linux/amd64 \
-t YOUR_DOCKERHUB_USERNAME/arch-diagram-backend:latest --push ./backend
docker buildx build --platform linux/amd64 \
-t YOUR_DOCKERHUB_USERNAME/arch-diagram-frontend:latest --push ./frontend
Replace YOUR_DOCKERHUB_USERNAME with your actual Docker Hub username. Make sure both repos are set to Public on Docker Hub.
Step 8: Create the Civo Kubernetes cluster
Now we move to production. If you already created and connected to a cluster from the dashboard in Step 1, you can reuse it and skip ahead to Step 9; otherwise, we'll use the Civo CLI to spin up a two-node Kubernetes cluster in under 3 minutes.
8.1 Install the Civo CLI
The Civo CLI lets you manage clusters, nodes, and resources directly from your terminal.
# macOS
brew tap civo/tools
brew install civo
# Linux
curl -sL https://civo.com/get | sh
8.2 Add your Civo API key
Grab your key from dashboard.civo.com/security and register it with the CLI.
civo apikey add my-key YOUR_CIVO_API_KEY
civo apikey current my-key
8.3 Create the cluster
This provisions a two-node k3s cluster in Civo's London region. The --wait flag blocks until the cluster is fully ready.
civo kubernetes create arch-diagram-cluster \
--size g4s.kube.medium \
--nodes 2 \
--region LON1 \
--wait
Step 9: Deploy to Kubernetes
Now for the deployment itself: we'll define all our Kubernetes resources in a single manifest and apply it.
9.1 Create k8s.yaml
This file contains everything: the API key secret, backend and frontend deployments with health checks, an internal service for the backend, and a public LoadBalancer service for the frontend.
apiVersion: v1
kind: Secret
metadata:
name: relaxai-secret
type: Opaque
stringData:
api-key: "your-relaxai-api-key-here"
---
# Backend Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
name: backend
labels:
app: backend
spec:
replicas: 2
selector:
matchLabels:
app: backend
template:
metadata:
labels:
app: backend
spec:
containers:
- name: backend
image: YOUR_DOCKERHUB_USERNAME/arch-diagram-backend:latest
ports:
- containerPort: 8000
env:
- name: RELAXAI_API_KEY
valueFrom:
secretKeyRef:
name: relaxai-secret
key: api-key
livenessProbe:
httpGet:
path: /health
port: 8000
initialDelaySeconds: 10
periodSeconds: 30
readinessProbe:
httpGet:
path: /health
port: 8000
initialDelaySeconds: 5
periodSeconds: 10
---
# Backend Service (internal only)
apiVersion: v1
kind: Service
metadata:
name: backend
spec:
type: ClusterIP
selector:
app: backend
ports:
- port: 8000
targetPort: 8000
---
# Frontend Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
name: frontend
labels:
app: frontend
spec:
replicas: 2
selector:
matchLabels:
app: frontend
template:
metadata:
labels:
app: frontend
spec:
containers:
- name: frontend
image: YOUR_DOCKERHUB_USERNAME/arch-diagram-frontend:latest
ports:
- containerPort: 80
---
# Frontend Service (public-facing)
apiVersion: v1
kind: Service
metadata:
name: frontend
spec:
type: LoadBalancer
selector:
app: frontend
ports:
- port: 80
targetPort: 80
Replace YOUR_DOCKERHUB_USERNAME with your actual Docker Hub username in both image fields.
9.2 Apply the manifest
One command deploys everything to your cluster.
kubectl apply -f k8s.yaml
9.3 Verify the deployment
Check that all pods are up and running. It may take 30–60 seconds for them to start.
kubectl get pods
Step 10: Test the live deployment
Once your pods are running and the frontend has an external IP, it's time to verify everything works end-to-end. Get your public IP first:
kubectl get svc frontend
Health check: make sure the backend is reachable through the Nginx proxy:
curl http://EXTERNAL_IP/health
Expected response:
{
"status": "healthy",
"service": "architecture-diagram-generator",
"ai_provider": "relaxAI (Civo)",
"data_sovereignty": "UK"
}
Generate a diagram: try this e-commerce architecture as a test:
curl -X POST http://EXTERNAL_IP/generate-diagram \
-H "Content-Type: application/json" \
-d '{"description": "An e-commerce platform with an API gateway, user authentication service, product catalog service, order processing service, payment service connected to Stripe, a PostgreSQL database for orders, MongoDB for the product catalog, RabbitMQ message queue between order and payment services, and a Redis cache in front of the catalog"}'
How the app looks
The app follows a simple design. Just type in your architecture description, hit Generate Diagram, and watch as the LLM turns your words into a structured, visual diagram in seconds. What used to take 20 minutes in a diagramming tool now takes one sentence.

Summary
And just like that, you've gone from an empty folder to a live, AI-powered app running on Civo Kubernetes. A user types a description, an LLM thinks about it, and a structured architecture diagram comes back, no drag-and-drop required.
The best part? This whole stack (FastAPI, React, Docker, Kubernetes) is a pattern you can reuse for any AI-powered tool you want to build next. Swap out the diagram logic for code generation, document analysis, or anything else an LLM can handle. The deployment pipeline stays the same.
Additional Resources
To learn more about the technologies used in this tutorial and explore next steps, check out these resources: