Kubernetes on VPS Guide 2026: K8s Setup on Single Node & Multi-Node


Complete guide to setting up Kubernetes on VPS servers. Single-node k3s setup, multi-node clusters, cost optimization, and production-ready configurations for any VPS provider.


Kubernetes on VPS: Complete Setup Guide for 2026

Running Kubernetes on VPS servers gives you container orchestration power without the complexity and cost of managed services. This guide covers everything from lightweight single-node k3s to production multi-node clusters.

Why Kubernetes on VPS?


| Approach | Cost | Control | Complexity | Scalability |
|---|---|---|---|---|
| K8s on VPS | Low | Full | Medium | High |
| Managed K8s (EKS/GKE) | High | Limited | Low | Very High |
| Docker Compose | Very Low | Full | Low | Low |
| Traditional VPS | Low | Full | Very Low | Medium |

When to use Kubernetes on VPS:

- You run 5+ services that need scaling, auto-healing, and rolling updates
- You want full control of the stack without managed-service pricing
- Your team can operate the cluster (upgrades, backups, monitoring)

When to stick with Docker Compose:

- You run 1-3 containers on a single host
- You don't need auto-healing, rolling updates, or multi-node scheduling

Best VPS Providers for Kubernetes

Single Node (Development/Small Production)

| Provider | Plan | Specs | Price | Best For |
|---|---|---|---|---|
| Hostinger | KVM4 | 4 vCPU, 8GB RAM | $14.99/mo | Best value |
| Hetzner | CX32 | 4 vCPU, 8GB RAM | €11.90/mo | EU deployment |
| Vultr | VC2-2C-4GB | 2 vCPU, 4GB RAM | $12/mo | Global locations |
| DigitalOcean | s-2vcpu-4gb | 2 vCPU, 4GB RAM | $24/mo | Simple networking |

Minimum requirements: 2 vCPU, 4GB RAM for k3s. 4 vCPU, 8GB RAM for full Kubernetes.

Multi-Node Cluster

| Setup | Total Cost | Nodes | Use Case |
|---|---|---|---|
| 3x Hostinger KVM2 | $19.47/mo | 3x 2 vCPU, 4GB | High availability |
| 1 Master + 2 Workers (Hetzner) | €15.58/mo | CX22 + 2x CAX11 | Cost-optimized |
| 2x Vultr VC2 + 1x Regular | $30/mo | Mixed workloads | Production-ready |

Quick Start: k3s on Single VPS (10 Minutes)

k3s is perfect for VPS deployment — it’s lightweight, batteries-included Kubernetes that runs great on a single node.

Step 1: Prepare Your VPS

# Update system
sudo apt update && sudo apt upgrade -y

# Install curl (if not installed)
sudo apt install curl -y

# Disable swap (Kubernetes requirement)
sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

# Load kernel modules
sudo modprobe br_netfilter
echo 'br_netfilter' | sudo tee -a /etc/modules

Step 2: Install k3s

# Single command install
curl -sfL https://get.k3s.io | sh -

# Or with custom options
curl -sfL https://get.k3s.io | sh -s - \
  --write-kubeconfig-mode 644 \
  --disable traefik \
  --disable servicelb

Options explained:

- --write-kubeconfig-mode 644: makes /etc/rancher/k3s/k3s.yaml readable without sudo
- --disable traefik: skips the bundled Traefik ingress controller (install your own later; see Add-ons)
- --disable servicelb: skips the built-in ServiceLB (Klipper) load balancer

Step 3: Verify Installation

# Check node status
sudo k3s kubectl get nodes

# Check system pods
sudo k3s kubectl get pods -A

# Get kubeconfig for external access
sudo cat /etc/rancher/k3s/k3s.yaml

Your single-node Kubernetes cluster is ready! Services will run on your VPS IP.

Step 4: Set Up kubectl Access

# Install kubectl
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

# Use k3s config
mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown $(id -u):$(id -g) ~/.kube/config

# Test access
kubectl get nodes
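Note that k3s writes the API server address as https://127.0.0.1:6443, which only works on the VPS itself. To run kubectl from your laptop, copy the kubeconfig down and rewrite the address — a sketch, where VPS_IP is a placeholder for your server's public IP (port 6443 must also be open in the firewall):

```shell
# On your local machine: fetch the kubeconfig from the VPS
scp root@VPS_IP:/etc/rancher/k3s/k3s.yaml ~/.kube/config

# Point it at the VPS public IP instead of localhost
sed -i 's/127\.0\.0\.1/VPS_IP/' ~/.kube/config

# Verify remote access
kubectl get nodes
```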

Full Kubernetes Setup (kubeadm)

For production workloads or learning “real” Kubernetes, use kubeadm instead of k3s.

System Requirements

Master Node:

- 4 vCPU, 8GB RAM recommended (2 vCPU, 4GB RAM is the bare minimum)

Worker Nodes:

- 2 vCPU, 4GB RAM minimum, sized to your workloads
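Before installing anything, kubeadm also expects swap disabled (same commands as the k3s section) plus a few kernel modules and sysctls on every node. A standard prep sketch:

```shell
# Load modules needed by containerd and pod networking, now and on boot
cat << EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter

# Sysctls Kubernetes networking relies on, persisted across reboots
cat << EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sudo sysctl --system
```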

Step 1: Install Container Runtime (All Nodes)

# Install containerd
sudo apt update
sudo apt install -y apt-transport-https ca-certificates curl gnupg lsb-release

# Add Docker repo (for containerd)
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list

sudo apt update
sudo apt install -y containerd.io

# Configure containerd
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd
sudo systemctl enable containerd

Step 2: Install Kubernetes Components (All Nodes)

# Add Kubernetes repo
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

# Install kubelet, kubeadm, kubectl
sudo apt update
sudo apt install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

# Enable kubelet
sudo systemctl enable kubelet

Step 3: Initialize Master Node

# Initialize cluster
sudo kubeadm init \
  --apiserver-advertise-address=YOUR_VPS_IP \
  --pod-network-cidr=10.244.0.0/16 \
  --service-cidr=10.96.0.0/12

# Set up kubeconfig
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Save the join command printed at the end — you’ll need it for worker nodes.

Step 4: Install Pod Network

# Install Flannel CNI
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

# Or use Calico instead
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/tigera-operator.yaml
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/custom-resources.yaml

Step 5: Allow Scheduling on Master (Single Node)

# Remove taint to allow pods on master node
kubectl taint nodes --all node-role.kubernetes.io/control-plane-

Skip this for multi-node clusters.

Multi-Node Cluster Setup

Architecture Options

Option 1: Single Master + Workers

Master Node (Hostinger KVM2): Control plane
Worker 1 (Hostinger KVM1): Workloads  
Worker 2 (Hostinger KVM1): Workloads

Option 2: HA Masters + Workers

Master 1, 2, 3 (Hetzner CX22): Control plane HA
Worker 1, 2 (Hetzner CX32): Heavy workloads

Joining Worker Nodes

On each worker VPS, run the join command from kubeadm init:

sudo kubeadm join YOUR_MASTER_IP:6443 \
  --token TOKEN \
  --discovery-token-ca-cert-hash sha256:HASH

If you lost the command:

# On master, create new token
kubeadm token create --print-join-command

Verify nodes:

kubectl get nodes

Load Balancer for HA Masters

For production multi-master, use a load balancer. On a separate VPS:

# Install HAProxy
sudo apt install haproxy -y

# Configure /etc/haproxy/haproxy.cfg
cat << EOF | sudo tee -a /etc/haproxy/haproxy.cfg
frontend k8s-api
  bind *:6443
  mode tcp
  default_backend k8s-masters

backend k8s-masters
  mode tcp
  balance roundrobin
  server master1 MASTER1_IP:6443 check
  server master2 MASTER2_IP:6443 check
  server master3 MASTER3_IP:6443 check
EOF

sudo systemctl restart haproxy

Then point kubeadm to the load balancer IP.
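Concretely, that means initializing the cluster against the load balancer rather than a single master. A sketch (LB_IP is a placeholder; --upload-certs lets the additional masters pull the control-plane certificates):

```shell
# On the first master
sudo kubeadm init \
  --control-plane-endpoint "LB_IP:6443" \
  --upload-certs \
  --pod-network-cidr=10.244.0.0/16

# Join the remaining masters with the control-plane join command
# kubeadm prints (it includes --control-plane and --certificate-key)
```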

Essential Add-ons

Ingress Controller

nginx-ingress (recommended):

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/baremetal/deploy.yaml

# Get the service port
kubectl get svc -n ingress-nginx

Traefik (if you disabled k3s built-in):

helm repo add traefik https://traefik.github.io/charts
helm repo update
helm install traefik traefik/traefik \
  --namespace traefik \
  --create-namespace \
  --set service.type=NodePort

Storage (Longhorn)

For persistent volumes across nodes:

# Install Longhorn
kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v1.6.0/deploy/longhorn.yaml

# Access UI (port-forward or ingress)
kubectl port-forward -n longhorn-system svc/longhorn-frontend 8080:80
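To have new PVCs land on Longhorn automatically, mark its StorageClass as the default (on k3s you may also want to remove the default annotation from the bundled local-path class):

```shell
kubectl patch storageclass longhorn \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'

# Confirm: longhorn should show "(default)"
kubectl get storageclass
```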

Metrics & Monitoring

k9s (cluster management):

curl -sL https://github.com/derailed/k9s/releases/latest/download/k9s_Linux_amd64.tar.gz | tar xvz -C /tmp
sudo mv /tmp/k9s /usr/local/bin/

Prometheus & Grafana:

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

helm install prometheus prometheus-community/kube-prometheus-stack \
  --namespace monitoring \
  --create-namespace

Certificate Management

# Install cert-manager
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.0/cert-manager.yaml

# Create Let's Encrypt issuer
cat << EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: your-email@example.com
    privateKeySecretRef:
      name: letsencrypt
    solvers:
    - http01:
        ingress:
          class: nginx
EOF

Deploying Your First App

Simple Web App

# app.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-demo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-demo
spec:
  selector:
    app: nginx-demo
  ports:
  - port: 80
    targetPort: 80
  type: ClusterIP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-demo
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    cert-manager.io/cluster-issuer: letsencrypt
spec:
  tls:
  - hosts:
    - demo.yourdomain.com
    secretName: demo-tls
  rules:
  - host: demo.yourdomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-demo
            port:
              number: 80

Deploy:

kubectl apply -f app.yaml
kubectl get pods,svc,ingress

Database with Persistent Storage

# postgres.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-storage
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:15
        env:
        - name: POSTGRES_PASSWORD
          value: "yourpassword"  # for real deployments, use a Secret (see Secrets Management)
        - name: POSTGRES_DB
          value: "myapp"
        ports:
        - containerPort: 5432
        volumeMounts:
        - name: storage
          mountPath: /var/lib/postgresql/data
      volumes:
      - name: storage
        persistentVolumeClaim:
          claimName: postgres-storage
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  selector:
    app: postgres
  ports:
  - port: 5432
    targetPort: 5432

Production Best Practices

Resource Limits

Always set resource requests and limits:

spec:
  containers:
  - name: myapp
    image: myapp:latest
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"

Health Checks

spec:
  containers:
  - name: myapp
    image: myapp:latest
    livenessProbe:
      httpGet:
        path: /health
        port: 8080
      initialDelaySeconds: 30
      periodSeconds: 10
    readinessProbe:
      httpGet:
        path: /ready
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 5

Security

# SecurityContext example
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    fsGroup: 2000
  containers:
  - name: myapp
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop:
        - ALL

Secrets Management

# Create secret
kubectl create secret generic app-secrets \
  --from-literal=db-password=secretpass \
  --from-literal=api-key=your-api-key

# Use in deployment
spec:
  containers:
  - name: myapp
    env:
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: app-secrets
          key: db-password

Cost Optimization

Resource Efficiency

k3s vs kubeadm resource usage:

- k3s: the control plane idles at roughly 512MB-1GB RAM, leaving most of a 4GB node for workloads
- kubeadm: expect the control plane to consume 1.5-2GB RAM before any workloads run

Node sizing strategy:

- Fewer, larger nodes waste less RAM on per-node overhead (kubelet, kube-proxy, CNI)
- More, smaller nodes improve failure isolation; high availability needs at least 3

Cluster Autoscaling Alternatives

Since VPS servers don't auto-scale like cloud instances, plan for peak capacity:

  1. Horizontal Pod Autoscaler: Scale pods based on CPU/memory
  2. Vertical Pod Autoscaler: Adjust resource limits automatically
  3. Manual scaling: Add/remove worker nodes as needed
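As a sketch of option 1: the Horizontal Pod Autoscaler needs metrics-server (bundled with k3s, installed separately on kubeadm) and only acts on pods that declare resource requests. Using the nginx-demo Deployment from earlier:

```shell
# Keep between 2 and 6 replicas, targeting 70% average CPU utilization
kubectl autoscale deployment nginx-demo --cpu-percent=70 --min=2 --max=6

# Watch it react to load
kubectl get hpa -w
```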

Cost Comparison

| Setup | Monthly Cost | Capability |
|---|---|---|
| k3s single node (Hostinger KVM2) | $6.49 | Development, small production |
| 3-node cluster (Hostinger) | $19.47 | High availability |
| 5-node cluster (Hetzner mix) | ~€25 | Production workloads |
| EKS equivalent | $100+ | Managed, no maintenance |

Common Issues & Solutions

Pod Networking Problems

# Check CNI status
kubectl get pods -n kube-system | grep -E "(flannel|calico)"

# Restart network pods
kubectl rollout restart daemonset/kube-flannel-ds -n kube-flannel

# Check node networking
kubectl describe node NODE_NAME

Storage Issues

# Check PV status
kubectl get pv,pvc

# Check Longhorn (if used)
kubectl get pods -n longhorn-system

# Manual PV cleanup
kubectl patch pv PV_NAME -p '{"metadata":{"finalizers":null}}'

DNS Resolution

# Test internal DNS
kubectl run dnstest --image=busybox --restart=Never -- nslookup kubernetes.default
kubectl logs dnstest
kubectl delete pod dnstest

# Check CoreDNS
kubectl get pods -n kube-system -l k8s-app=kube-dns
kubectl logs -n kube-system -l k8s-app=kube-dns

Resource Exhaustion

# Check node resources
kubectl top nodes
kubectl describe nodes

# Find resource hogs
kubectl top pods --all-namespaces --sort-by=memory
kubectl top pods --all-namespaces --sort-by=cpu

Kubernetes vs Alternatives on VPS

| Tool | Best For | Complexity | Resource Usage |
|---|---|---|---|
| k3s | Small teams, learning | Low | Very Light |
| MicroK8s | Local development | Low | Light |
| kubeadm | Production, learning full K8s | Medium | Medium |
| Docker Compose | Simple apps | Very Low | Minimal |
| Docker Swarm | Legacy migration | Low | Light |
| Nomad | Mixed workloads | Medium | Light |

Backup & Disaster Recovery

etcd Backup (kubeadm)

# Create backup script
cat << 'EOF' | sudo tee /usr/local/bin/etcd-backup.sh
#!/bin/bash
mkdir -p /backup
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  snapshot save /backup/etcd-$(date +%Y%m%d-%H%M%S).db

# Keep only last 7 days
find /backup -name "etcd-*.db" -mtime +7 -delete
EOF

sudo chmod +x /usr/local/bin/etcd-backup.sh

# Add to crontab (piping straight to `crontab -` would replace any existing entries)
(crontab -l 2>/dev/null; echo "0 2 * * * /usr/local/bin/etcd-backup.sh") | crontab -
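A backup you have never restored is only a hope. The restore path, as a sketch (the snapshot filename is a placeholder; do this with the control plane stopped):

```shell
# Restore the snapshot into a fresh data directory
ETCDCTL_API=3 etcdctl snapshot restore /backup/etcd-SNAPSHOT.db \
  --data-dir /var/lib/etcd-restored

# Edit /etc/kubernetes/manifests/etcd.yaml so the etcd hostPath volume
# points at /var/lib/etcd-restored, then restart kubelet
sudo systemctl restart kubelet
```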

k3s Backup

# k3s stores everything in SQLite by default;
# stop k3s briefly so you don't copy a mid-write database
sudo systemctl stop k3s
sudo cp /var/lib/rancher/k3s/server/db/state.db /backup/k3s-$(date +%Y%m%d).db
sudo systemctl start k3s
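Restoring is the reverse, assuming the same default SQLite backend (the backup filename is a placeholder):

```shell
sudo systemctl stop k3s
sudo cp /backup/k3s-YYYYMMDD.db /var/lib/rancher/k3s/server/db/state.db
sudo systemctl start k3s
```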

Application Data Backup

Use Velero for application-level backups:

# Install Velero CLI
curl -fsSL -o velero-v1.12.1-linux-amd64.tar.gz https://github.com/vmware-tanzu/velero/releases/download/v1.12.1/velero-v1.12.1-linux-amd64.tar.gz
tar -xzf velero-v1.12.1-linux-amd64.tar.gz
sudo mv velero-v1.12.1-linux-amd64/velero /usr/local/bin/

# Install in cluster (with MinIO backend)
velero install \
    --provider aws \
    --plugins velero/velero-plugin-for-aws:v1.8.0 \
    --bucket velero-backups \
    --secret-file ./credentials-velero \
    --use-volume-snapshots=false \
    --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://minio:9000

Monitoring & Logging

Prometheus Stack

# Install with Helm
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm install monitoring prometheus-community/kube-prometheus-stack \
  --namespace monitoring \
  --create-namespace \
  --set grafana.adminPassword=admin123

# Access Grafana
kubectl port-forward -n monitoring svc/monitoring-grafana 3000:80

Centralized Logging

# ELK Stack alternative: Loki
helm repo add grafana https://grafana.github.io/helm-charts
helm install loki grafana/loki-stack \
  --namespace logging \
  --create-namespace \
  --set grafana.enabled=true

Migration from Docker Compose

Kompose Tool

# Install kompose
curl -L https://github.com/kubernetes/kompose/releases/latest/download/kompose-linux-amd64 -o kompose
chmod +x kompose
sudo mv ./kompose /usr/local/bin/kompose

# Convert docker-compose.yml
kompose convert

# Review and apply
kubectl apply -f .

Manual Conversion Tips

Docker Compose → Kubernetes mapping:

- services → Deployments
- ports → Services (ClusterIP/NodePort) plus Ingress rules
- volumes → PersistentVolumeClaims
- environment → ConfigMaps and Secrets
- depends_on → initContainers or readiness probes
- restart: always → default Deployment behavior

FAQ

Is Kubernetes worth it on VPS?

For 1-3 containers: No, use Docker Compose. For 5+ microservices that need scaling, auto-healing, and rolling updates: Yes.

k3s vs full Kubernetes?

k3s for most VPS use cases — it’s production-ready and much lighter. Use full K8s if you need specific features or want to learn “real” Kubernetes.

How much RAM do I need?

4GB for a single-node k3s cluster, 8GB for full Kubernetes with kubeadm, plus headroom for your own workloads (see the minimum requirements above).

Can I run this in production?

Yes! Many companies run production Kubernetes on VPS. Just ensure you have:

- Multiple nodes, or at least automated etcd/k3s backups you have test-restored
- Monitoring and alerting (Prometheus + Grafana)
- Resource limits and health checks on every workload
- Secrets instead of hardcoded credentials
What about managed Kubernetes?

Managed K8s (EKS, GKE) costs 3-5x more but handles upgrades, security, and scaling automatically. Great for teams that want to focus on applications, not infrastructure.

Next Steps

  1. Start with k3s on Hostinger — best learning experience for $6.49/mo
  2. Deploy a real application — migrate something from Docker Compose
  3. Add monitoring — Prometheus + Grafana for visibility
  4. Set up CI/CD — GitLab or GitHub Actions deploying to your cluster
  5. Scale horizontally — add worker nodes as you grow

Running Kubernetes on VPS gives you container orchestration superpowers at a fraction of cloud costs. Start small with k3s, learn the concepts, then scale up as your needs grow.

The combination of Hostinger VPS + Kubernetes gives you enterprise-grade container infrastructure for less than $20/month. Perfect for startups, learning environments, and cost-conscious production workloads.


Ready to get started?

Get the best VPS hosting deal today. Hostinger offers 4GB RAM VPS starting at just $4.99/mo.

Get Hostinger VPS — $4.99/mo

// up to 75% off + free domain included


Andrius Putna

I am Andrius Putna. Geek. In love with tinkering with web technologies since the early 2000s, and now with AI. Bridging business and technology to drive meaningful impact. Combining expertise in customer experience, technology, and business strategy to deliver valuable insights. Father, open-source contributor, investor, 2x Ironman, MBA graduate.

// last updated: April 2, 2026. Disclosure: This article may contain affiliate links.