VPS Monitoring with Uptime Kuma and Grafana

Set up VPS monitoring with Uptime Kuma and Grafana. Track uptime, CPU, memory, and disk usage with alerts so you never miss downtime.

You can't fix what you can't see. This guide covers setting up comprehensive monitoring for your VPS—from simple uptime checks to full metrics dashboards.

Why This Matters

Without monitoring, you only learn about problems when users complain, or when everything is already broken. Good monitoring means you spot issues before users do, you see trends like a disk slowly filling up before they become outages, and you have data to diagnose incidents after the fact.

Prerequisites

A VPS with root or sudo access, Docker and Docker Compose installed, and some free RAM: roughly 100 MB for Uptime Kuma alone, up to about 1 GB for the full Prometheus and Grafana stack.

Quick Start: Choose Your Stack

Tool                   Best For                          Complexity
Uptime Kuma            Uptime monitoring, status pages   Easy
Grafana + Prometheus   Full metrics, dashboards          Medium
Netdata                Real-time system monitoring       Easy
Full stack             Production environments           Advanced

Part 1: Uptime Kuma (Simple & Effective)

Uptime Kuma is a self-hosted monitoring tool that's beautiful and easy to use.

Step 1: Deploy Uptime Kuma

# docker-compose.yml
services:
  uptime-kuma:
    image: louislam/uptime-kuma:1
    container_name: uptime-kuma
    volumes:
      - uptime-kuma-data:/app/data
    ports:
      - "3001:3001"
    restart: unless-stopped

volumes:
  uptime-kuma-data:

Then start it:

docker compose up -d

Access at http://your-server:3001
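Before moving on, you can confirm the container came up correctly from the server itself; the service name matches the compose file above:

# Check the container and that the web UI answers locally
docker compose ps uptime-kuma
curl -I http://localhost:3001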

Step 2: Configure Monitors

After setup, add monitors for:

HTTP(S) Monitoring: your websites and API endpoints; check for a 200 response and, optionally, a keyword in the page body.

TCP Port Monitoring: non-HTTP services such as SSH (22), PostgreSQL (5432), MySQL (3306), or SMTP (25/587).

Docker Container Monitoring: point Uptime Kuma at the Docker socket and it alerts when a container stops running.

DNS Monitoring: confirm your domains resolve to the expected records, so accidental DNS changes get caught quickly.
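Uptime Kuma also offers a Push monitor type, which works in reverse: a cron job or backup script calls a unique URL on a schedule, and you get alerted when the calls stop arriving. A sketch of a crontab entry, assuming a hypothetical backup script; the push token is a placeholder that Uptime Kuma generates when you create the monitor:

# Report success to Uptime Kuma after the nightly backup finishes
0 3 * * * /usr/local/bin/backup.sh && curl -fsS "http://your-server:3001/api/push/YOUR_TOKEN?status=up&msg=backup-ok"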

Step 3: Set Up Notifications

Uptime Kuma supports 90+ notification services, including email (SMTP), Telegram, Slack, Discord, Pushover, and generic webhooks.

Configure at: Settings → Notifications

Step 4: Create a Status Page

  1. Go to Status Pages
  2. Create new page
  3. Add your monitors
  4. Share the public URL with users

Part 2: Full Metrics Stack (Prometheus + Grafana)

For comprehensive metrics collection and visualization.

Step 1: Create the Stack

# docker-compose.monitoring.yml
services:
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    volumes:
      - ./prometheus/prometheus.yml:/etc/prometheus/prometheus.yml
      - prometheus_data:/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--storage.tsdb.retention.time=30d'
    ports:
      - "9090:9090"
    restart: unless-stopped

  grafana:
    image: grafana/grafana:latest
    container_name: grafana
    volumes:
      - grafana_data:/var/lib/grafana
    environment:
      - GF_SECURITY_ADMIN_USER=admin
      - GF_SECURITY_ADMIN_PASSWORD=${GRAFANA_PASSWORD}
      - GF_USERS_ALLOW_SIGN_UP=false
    ports:
      - "3000:3000"
    restart: unless-stopped

  node-exporter:
    image: prom/node-exporter:latest
    container_name: node-exporter
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /:/rootfs:ro
    command:
      - '--path.procfs=/host/proc'
      - '--path.sysfs=/host/sys'
      - '--path.rootfs=/rootfs'
      - '--collector.filesystem.mount-points-exclude=^/(sys|proc|dev|host|etc)($$|/)'
    ports:
      - "9100:9100"
    restart: unless-stopped

  cadvisor:
    image: gcr.io/cadvisor/cadvisor:latest
    container_name: cadvisor
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:ro
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
    ports:
      - "8080:8080"
    restart: unless-stopped

volumes:
  prometheus_data:
  grafana_data:

Step 2: Configure Prometheus

# prometheus/prometheus.yml
global:
  scrape_interval: 15s
  evaluation_interval: 15s

alerting:
  alertmanagers:
    - static_configs:
        - targets: []

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'node-exporter'
    static_configs:
      - targets: ['node-exporter:9100']

  - job_name: 'cadvisor'
    static_configs:
      - targets: ['cadvisor:8080']

  # Add your applications
  - job_name: 'myapp'
    static_configs:
      - targets: ['myapp:3000']
    metrics_path: '/metrics'

Step 3: Start the Stack

mkdir -p prometheus
# Create prometheus.yml as above
docker compose -f docker-compose.monitoring.yml up -d
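Once the containers are up, it's worth confirming Prometheus can actually reach its targets before touching Grafana:

# Liveness check, then verify every scrape target shows "UP"
curl http://localhost:9090/-/healthy
# or open http://your-server:9090/targets in a browser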

Step 4: Set Up Grafana Dashboards

  1. Access Grafana at http://your-server:3000

  2. Log in with admin / the GRAFANA_PASSWORD you set

  3. Add Prometheus data source:

    • Configuration → Data Sources → Add
    • Select Prometheus
    • URL: http://prometheus:9090
    • Save & Test
  4. Import pre-built dashboards:

    • Dashboards → Import
    • Popular dashboard IDs:
      • 1860 - Node Exporter Full
      • 893 - Docker and System Monitoring
      • 14282 - cAdvisor Dashboard

Step 5: Create Custom Alerts

In Grafana:

  1. Alerting → Alert Rules → New
  2. Create conditions based on metrics
  3. Set notification channels

Example alert: CPU > 80% for 5 minutes

100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100) > 80
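If you prefer to keep alerts in version control rather than the Grafana UI, the same condition can be written as a Prometheus alerting rule. This is only a sketch: the alerts.yml filename and how it's mounted are assumptions, and routing the notification also requires an Alertmanager, which the compose file above does not include.

# prometheus/alerts.yml - mount into the container and reference it from prometheus.yml under rule_files
groups:
  - name: vps-alerts
    rules:
      - alert: HighCpuUsage
        expr: 100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100) > 80
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "CPU above 80% on {{ $labels.instance }} for 5 minutes"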

Part 3: Netdata (Real-Time Monitoring)

For instant, zero-config monitoring:

services:
  netdata:
    image: netdata/netdata
    container_name: netdata
    ports:
      - "19999:19999"
    cap_add:
      - SYS_PTRACE
    security_opt:
      - apparmor:unconfined
    volumes:
      - netdataconfig:/etc/netdata
      - netdatalib:/var/lib/netdata
      - netdatacache:/var/cache/netdata
      - /etc/passwd:/host/etc/passwd:ro
      - /etc/group:/host/etc/group:ro
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /etc/os-release:/host/etc/os-release:ro
    restart: unless-stopped

volumes:
  netdataconfig:
  netdatalib:
  netdatacache:

Access at http://your-server:19999 - instant beautiful dashboards!

Part 4: Application-Level Monitoring

Add Metrics to Your Apps

Node.js with prom-client:

const client = require('prom-client');
const express = require('express');

const app = express();

// Collect default process metrics (CPU, memory, event loop lag, etc.)
client.collectDefaultMetrics();

// Custom metrics
const httpRequestsTotal = new client.Counter({
  name: 'http_requests_total',
  help: 'Total HTTP requests',
  labelNames: ['method', 'path', 'status']
});

app.use((req, res, next) => {
  res.on('finish', () => {
    httpRequestsTotal.inc({
      method: req.method,
      path: req.route?.path || req.path,
      status: res.statusCode
    });
  });
  next();
});

// Expose the metrics endpoint for Prometheus to scrape
app.get('/metrics', async (req, res) => {
  res.set('Content-Type', client.register.contentType);
  res.end(await client.register.metrics());
});

app.listen(3000);
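With the app running, you can sanity-check the endpoint locally; the port matches the 'myapp' scrape job from Part 2:

curl -s http://localhost:3000/metrics | head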

Python with prometheus-client:

from prometheus_client import Counter, Histogram, generate_latest
from flask import Flask, Response

app = Flask(__name__)

REQUEST_COUNT = Counter('requests_total', 'Total requests', ['method', 'endpoint'])
REQUEST_LATENCY = Histogram('request_latency_seconds', 'Request latency')

@app.route('/')
@REQUEST_LATENCY.time()  # record how long each request takes
def index():
    REQUEST_COUNT.labels(method='GET', endpoint='/').inc()
    return 'ok'

@app.route('/metrics')
def metrics():
    return Response(generate_latest(), mimetype='text/plain')

Part 5: Log Monitoring with Loki

Add centralized logging:

services:
  loki:
    image: grafana/loki:latest
    ports:
      - "3100:3100"
    command: -config.file=/etc/loki/local-config.yaml
    volumes:
      - loki_data:/loki
    restart: unless-stopped

  promtail:
    image: grafana/promtail:latest
    volumes:
      - /var/log:/var/log:ro
      # required for the docker scrape job in promtail-config.yml
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
      - ./promtail-config.yml:/etc/promtail/config.yml
    command: -config.file=/etc/promtail/config.yml
    restart: unless-stopped

volumes:
  loki_data:

Promtail config:

# promtail-config.yml
server:
  http_listen_port: 9080

positions:
  filename: /tmp/positions.yaml

clients:
  - url: http://loki:3100/loki/api/v1/push

scrape_configs:
  - job_name: system
    static_configs:
      - targets:
          - localhost
        labels:
          job: varlogs
          __path__: /var/log/*log

  - job_name: docker
    static_configs:
      - targets:
          - localhost
        labels:
          job: docker
          __path__: /var/lib/docker/containers/*/*log

Add Loki as a Grafana data source and query logs alongside metrics!
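In Grafana's Explore view, Loki is queried with LogQL. For example, to show only the system log lines containing "error" from the job defined above:

{job="varlogs"} |= "error"

And to count how often those lines appeared over the last 5 minutes:

count_over_time({job="varlogs"} |= "error" [5m])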

Recommended Monitoring Setup

For most VPS deployments:

# docker-compose.monitoring.yml - Complete recommended stack
services:
  uptime-kuma:
    image: louislam/uptime-kuma:1
    volumes:
      - uptime-kuma-data:/app/data
    ports:
      - "3001:3001"
    restart: unless-stopped

  grafana:
    image: grafana/grafana:latest
    volumes:
      - grafana_data:/var/lib/grafana
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=${GRAFANA_PASSWORD}
    ports:
      - "3000:3000"
    restart: unless-stopped

  prometheus:
    image: prom/prometheus:latest
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
      - prometheus_data:/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.retention.time=15d'
    restart: unless-stopped

  node-exporter:
    image: prom/node-exporter:latest
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /:/rootfs:ro
    command:
      - '--path.procfs=/host/proc'
      - '--path.sysfs=/host/sys'
      - '--path.rootfs=/rootfs'
    restart: unless-stopped

volumes:
  uptime-kuma-data:
  grafana_data:
  prometheus_data:
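This stack expects a prometheus.yml next to the compose file. A minimal version that only scrapes node-exporter could look like this; reuse the fuller config from Part 2 if you also run cAdvisor or application exporters:

# ./prometheus.yml - minimal config for the recommended stack
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'node-exporter'
    static_configs:
      - targets: ['node-exporter:9100']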

Best Practices

  1. Monitor from outside - External checks catch network issues
  2. Set reasonable thresholds - Avoid alert fatigue
  3. Layer your monitoring - Uptime + metrics + logs
  4. Retain data appropriately - 15-30 days for metrics, longer for aggregates
  5. Document runbooks - What to do when alert X fires
  6. Test your alerts - Ensure they actually reach you
  7. Monitor the monitors - Use external service to watch your monitoring
  8. Secure your dashboards - Metrics can reveal sensitive info (one approach is sketched below)
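One low-effort way to keep dashboards off the public internet is to publish their ports only on localhost and reach them through an SSH tunnel or an authenticated reverse proxy. A sketch of the compose change for Grafana; the same pattern works for Uptime Kuma and Prometheus:

services:
  grafana:
    ports:
      - "127.0.0.1:3000:3000"   # only reachable from the server itself

Then from your workstation, run ssh -L 3000:localhost:3000 user@your-server and browse http://localhost:3000.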

Common Mistakes to Avoid

Too many alerts - Alert fatigue means ignoring real issues

No external monitoring - If your server is down, so is your monitoring

Exposing metrics publicly - Use authentication or internal networks

Not setting retention - Disk fills up with old data

Monitoring without acting - Dashboards don't fix problems

Single notification channel - Email is down? No alerts

No baseline - You need to know what "normal" looks like

Over-monitoring - Start simple, add complexity as needed

Key Metrics to Watch

System Metrics: CPU usage, memory usage, disk space and I/O, network throughput, load average.

Application Metrics: request rate, error rate, response times (p50/p95/p99), queue depth, active connections.

Business Metrics: signups, orders, payments, active users; whatever shows the application is actually doing its job.

FAQ

How much overhead does monitoring add?

Minimal. Uptime Kuma: ~100MB RAM. Full Prometheus/Grafana stack: ~500MB-1GB. Worth it for the visibility.

Should I use cloud monitoring or self-hosted?

Self-hosted for cost control and data ownership. Cloud services (Datadog, New Relic) make sense if you have the budget and want a managed solution. A Hostinger VPS has enough resources to run self-hosted monitoring alongside your apps.

How do I monitor from outside my network?

Use external services such as UptimeRobot, Pingdom, StatusCake, or Better Stack, pointed at your public endpoints.

These catch issues your self-hosted monitoring can't see.

What alert thresholds should I set?

Start conservative and adjust based on experience: CPU above 80% for 5+ minutes, memory above 90%, disk above 85%, and any failed uptime check lasting more than a minute or two.

How long should I retain metrics?

Balance detail against storage cost. The configs in this guide keep 15-30 days of raw metrics, which is plenty for most VPS setups; if you need longer history, export or aggregate the data you care about.

Can I monitor multiple servers?

Yes! Prometheus + Node Exporter scale well. Just add new targets to your scrape config and you can monitor 100+ servers from one dashboard.
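For example, pulling metrics from node-exporter on three servers is just a matter of adding them to the scrape_configs section of prometheus.yml; the IPs below are placeholders, and each server needs node-exporter running with port 9100 reachable from the Prometheus host:

  - job_name: 'vps-fleet'
    static_configs:
      - targets:
          - '10.0.0.11:9100'
          - '10.0.0.12:9100'
          - '10.0.0.13:9100'
        labels:
          env: 'production'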


Your monitoring stack is ready! Combine this with our backup guide and security guide for a production-ready VPS.


Ready to get started?

Get the best VPS hosting deal today. Hostinger offers 4GB RAM VPS starting at just $4.99/mo.

Get Hostinger VPS — $4.99/mo

// up to 75% off + free domain included



// last updated: February 6, 2026. Disclosure: This article may contain affiliate links.