Docker Compose Deployment on VPS
A complete guide to deploying applications with Docker Compose on a VPS, from installation to multi-container setups with networking and volumes.
Docker Compose is the easiest way to deploy and manage applications on your VPS. Define your entire stack in one file, deploy with one command, and update with minimal downtime.
Why This Matters
Traditional deployments are painful:
- Dependencies conflict between applications
- "It works on my machine" syndrome
- Complex manual setup for each server
- Rollbacks require prayer and luck
Docker Compose solves this:
- Isolated environments - Apps can't break each other
- Reproducible deployments - Same config, same result, every time
- Version control - Your infrastructure is code
- Easy rollbacks - Previous version is one command away
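As a taste of that last point: if you pin image tags (covered under Best Practices below), a rollback is just reverting the tag and re-deploying. The tags here are stand-ins for your own:
# Roll back by editing docker-compose.yml to the previous tag
# e.g. image: myapp:1.4.2  ->  image: myapp:1.4.1
docker compose up -d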
Prerequisites
- A VPS with Ubuntu 22.04+ (we recommend Hostinger VPS for their Docker-optimized images)
- Basic command line knowledge
- SSH access to your server
Step 1: Install Docker
# Remove old versions
sudo apt remove docker docker-engine docker.io containerd runc
# Install dependencies
sudo apt update
sudo apt install ca-certificates curl gnupg -y
# Add Docker's GPG key
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
# Add repository
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# Install Docker
sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y
# Add your user to docker group (logout/login required)
sudo usermod -aG docker $USER
# Verify installation
docker --version
docker compose version
Log out and back in for group changes to take effect.
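If you'd rather not log out, newgrp applies the new group membership in the current shell; either way, a quick smoke test confirms the daemon is reachable without sudo:
# Apply the docker group to the current shell (alternative to re-login)
newgrp docker
# Smoke test: pulls and runs a tiny test image, then removes the container
docker run --rm hello-world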
Step 2: Create Your Project Structure
mkdir -p ~/apps/myapp
cd ~/apps/myapp
Recommended structure:
myapp/
├── docker-compose.yml # Main compose file
├── docker-compose.prod.yml # Production overrides
├── .env # Environment variables (never commit!)
├── .env.example # Template for env vars
├── nginx/
│ └── nginx.conf # Custom Nginx config
├── data/ # Persistent data (gitignored)
└── logs/ # Application logs (gitignored)
Step 3: Write Your First docker-compose.yml
Let's deploy a complete web stack:
# docker-compose.yml
services:
  app:
    image: node:20-alpine
    working_dir: /app
    volumes:
      - ./src:/app
      # Anonymous volume so host files don't shadow installed node_modules
      - /app/node_modules
    command: npm start
    environment:
      - NODE_ENV=production
      # Use the same secret the db service is configured with
      - DATABASE_URL=postgres://user:${DB_PASSWORD}@db:5432/myapp
    depends_on:
      db:
        condition: service_healthy
    restart: unless-stopped
    networks:
      - internal

  db:
    image: postgres:16-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_DB: myapp
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U user -d myapp"]
      interval: 5s
      timeout: 5s
      retries: 5
    restart: unless-stopped
    networks:
      - internal

  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./certbot/conf:/etc/letsencrypt:ro
      - ./certbot/www:/var/www/certbot:ro
    depends_on:
      - app
    restart: unless-stopped
    networks:
      - internal

  redis:
    image: redis:7-alpine
    volumes:
      - redis_data:/data
    command: redis-server --appendonly yes
    restart: unless-stopped
    networks:
      - internal

volumes:
  postgres_data:
  redis_data:

networks:
  internal:
    driver: bridge
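The compose file mounts ./nginx/nginx.conf, which we haven't written yet. A minimal sketch that proxies to the app service - it assumes the app listens on port 3000 and uses example.com as a stand-in for your domain (TLS is left for a later step):
# nginx/nginx.conf (minimal sketch - HTTP only, no TLS yet)
events {}

http {
  server {
    listen 80;
    server_name example.com;

    # Serve ACME challenges for certbot
    location /.well-known/acme-challenge/ {
      root /var/www/certbot;
    }

    location / {
      proxy_pass http://app:3000;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
    }
  }
}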
Step 4: Use Environment Variables Properly
Create your .env file:
# .env
DB_PASSWORD=your-super-secret-password-here
REDIS_PASSWORD=another-secret
API_KEY=your-api-key
Create .env.example for documentation:
# .env.example
DB_PASSWORD=
REDIS_PASSWORD=
API_KEY=
Never commit .env to git! Add to .gitignore:
echo ".env" >> .gitignore
echo "data/" >> .gitignore
echo "logs/" >> .gitignore
Step 5: Production Overrides
Create a production-specific file:
# docker-compose.prod.yml
services:
  app:
    deploy:
      resources:
        limits:
          cpus: '1'
          memory: 512M
        reservations:
          cpus: '0.5'
          memory: 256M
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"

  db:
    deploy:
      resources:
        limits:
          cpus: '2'
          memory: 1G
Deploy with:
docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d
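Before deploying, it's worth checking what the merged result actually looks like; compose can print the fully resolved configuration:
# Print the merged, interpolated config without starting anything
docker compose -f docker-compose.yml -f docker-compose.prod.yml config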
Step 6: Common Docker Compose Commands
# Start all services
docker compose up -d
# View logs
docker compose logs -f
# View specific service logs
docker compose logs -f app
# Stop all services
docker compose down
# Stop and remove volumes (CAREFUL - deletes data!)
docker compose down -v
# Rebuild and restart
docker compose up -d --build
# Restart a specific service
docker compose restart app
# View running containers
docker compose ps
# Execute command in container
docker compose exec app sh
# View resource usage
docker stats
Step 7: Zero-Downtime Deployments
For updates with as little interruption as possible:
# Pull new images
docker compose pull
# Recreate only the app container, leaving its dependencies running
docker compose up -d --no-deps app
Note that recreating a container still drops connections for a moment; true zero downtime needs multiple replicas behind a load balancer. Compose accepts the config below, but update_config is only honored by Docker Swarm (docker stack deploy), not by plain docker compose up:
services:
  app:
    deploy:
      replicas: 2
      update_config:
        parallelism: 1
        delay: 10s
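On a single server, a pragmatic middle ground is a small deploy script that pulls, recreates, and waits for the health check to pass before declaring success. A sketch - it assumes the app service defines a healthcheck, as in Step 8:
#!/bin/bash
# deploy.sh - pull, recreate, and wait for the app to report healthy
set -e
docker compose pull
docker compose up -d --no-deps app
for i in $(seq 1 30); do
  status=$(docker inspect --format='{{.State.Health.Status}}' "$(docker compose ps -q app)")
  [ "$status" = "healthy" ] && echo "Deploy OK" && exit 0
  sleep 2
done
echo "App failed to become healthy" >&2
exit 1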
Step 8: Health Checks
Always add health checks:
services:
  app:
    healthcheck:
      # curl must exist inside the image; node:20-alpine doesn't ship it, so
      # either install it or use busybox wget instead:
      # ["CMD", "wget", "-qO-", "http://localhost:3000/health"]
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
Check health status:
docker compose ps
docker inspect --format='{{json .State.Health}}' container_name
Step 9: Managing Secrets
For sensitive data, use Docker secrets or external secret managers:
services:
  app:
    secrets:
      - db_password
      - api_key
    environment:
      - DB_PASSWORD_FILE=/run/secrets/db_password

secrets:
  db_password:
    file: ./secrets/db_password.txt
  api_key:
    file: ./secrets/api_key.txt
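The _FILE suffix is a convention, not magic: the official postgres image honors it, but your own app has to read the file itself. A minimal entrypoint sketch (entrypoint.sh is a name chosen here; the variable names mirror the example above):
#!/bin/sh
# entrypoint.sh - export file-based secrets as regular env vars
if [ -n "$DB_PASSWORD_FILE" ] && [ -f "$DB_PASSWORD_FILE" ]; then
  DB_PASSWORD="$(cat "$DB_PASSWORD_FILE")"
  export DB_PASSWORD
fi
# Hand off to the container's main command
exec "$@"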
Step 10: Backup Strategy
Create a backup script:
#!/bin/bash
# backup.sh
set -euo pipefail

BACKUP_DIR="/backups/$(date +%Y-%m-%d)"
mkdir -p "$BACKUP_DIR"

# Backup PostgreSQL
docker compose exec -T db pg_dump -U user myapp > "$BACKUP_DIR/db.sql"

# Backup volumes (volume names are prefixed with the compose project name)
docker run --rm \
  -v myapp_postgres_data:/data:ro \
  -v "$BACKUP_DIR":/backup \
  alpine tar czf /backup/postgres_data.tar.gz /data

# Backup Redis (SAVE blocks until the dump is on disk; BGSAVE returns
# immediately, so copying right after it can grab a stale file)
docker compose exec -T redis redis-cli SAVE
docker cp "$(docker compose ps -q redis)":/data/dump.rdb "$BACKUP_DIR/"

echo "Backup completed: $BACKUP_DIR"
Real-World Examples
WordPress with Database
services:
  wordpress:
    image: wordpress:latest  # pin a specific version in production (see Best Practices)
    ports:
      - "8080:80"
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: ${WP_DB_PASSWORD}
      WORDPRESS_DB_NAME: wordpress
    volumes:
      - wordpress_data:/var/www/html
    depends_on:
      - db
    restart: unless-stopped

  db:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: ${WP_DB_PASSWORD}
    volumes:
      - db_data:/var/lib/mysql
    restart: unless-stopped

volumes:
  wordpress_data:
  db_data:
Full-Stack JavaScript App
services:
  frontend:
    build: ./frontend
    ports:
      - "3000:3000"
    environment:
      # NOTE: REACT_APP_* values are baked in at build time and used by the
      # browser, which cannot resolve the internal hostname "api" - point
      # this at the API's public URL in production
      - REACT_APP_API_URL=http://api:4000
    depends_on:
      - api

  api:
    build: ./api
    ports:
      - "4000:4000"
    environment:
      - DATABASE_URL=postgres://user:pass@db:5432/app
      - REDIS_URL=redis://redis:6379
    depends_on:
      - db
      - redis

  db:
    image: postgres:16-alpine
    volumes:
      - pgdata:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: pass
      POSTGRES_USER: user
      POSTGRES_DB: app

  redis:
    image: redis:7-alpine
    volumes:
      - redisdata:/data

volumes:
  pgdata:
  redisdata:
Self-Hosted Git with Gitea
services:
  gitea:
    image: gitea/gitea:latest
    ports:
      - "3000:3000"
      - "222:22"
    volumes:
      - gitea_data:/data
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    environment:
      - USER_UID=1000
      - USER_GID=1000
      - GITEA__database__DB_TYPE=postgres
      - GITEA__database__HOST=db:5432
      - GITEA__database__NAME=gitea
      - GITEA__database__USER=gitea
      - GITEA__database__PASSWD=${DB_PASSWORD}
    depends_on:
      - db
    restart: unless-stopped

  db:
    image: postgres:16-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: gitea
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_DB: gitea
    restart: unless-stopped

volumes:
  gitea_data:
  postgres_data:
Best Practices
- Pin image versions - Use postgres:16-alpine, not postgres:latest
- Use .env files - Keep secrets out of compose files
- Named volumes for data - Don't use bind mounts for databases
- Health checks everywhere - Know when services are actually ready
- Resource limits - Prevent runaway containers from killing your server
- Logging limits - Set max-size to prevent disk filling
- Use networks - Isolate services that don't need to talk
- depends_on with conditions - Wait for services to be healthy, not just started
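Several of these (restart policies, logging limits) repeat on every service. YAML anchors plus Compose's x- extension fields let you define them once - a sketch:
# x- keys are ignored by Compose itself, so they're safe for shared defaults
x-defaults: &defaults
  restart: unless-stopped
  logging:
    driver: "json-file"
    options:
      max-size: "10m"
      max-file: "3"

services:
  app:
    <<: *defaults
    image: node:20-alpine
  db:
    <<: *defaults
    image: postgres:16-alpine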
Common Mistakes to Avoid
❌ Using latest tag - Builds become unreproducible
❌ Storing data in containers - Data disappears when container is removed
❌ Committing .env files - Secrets end up in git history forever
❌ No health checks - depends_on doesn't wait for app readiness
❌ Ignoring logs - They'll fill your disk without limits
❌ Exposing database ports - Only expose what needs external access
❌ Running as root - Use USER directive in Dockerfiles
❌ No restart policy - Containers don't come back after crashes
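On the database-ports point: if you do need host access for debugging, bind the published port to localhost so only the VPS itself (e.g. through an SSH tunnel) can reach it:
services:
  db:
    ports:
      # Reachable from the server itself, not from the internet
      - "127.0.0.1:5432:5432"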
Debugging Tips
# See why a container is failing
docker compose logs app --tail=100
# Get a shell in a running container
docker compose exec app sh
# Start a one-off container for a service (useful when it won't stay running)
docker compose run app sh
# Inspect container details
docker inspect $(docker compose ps -q app)
# Check network connectivity
docker compose exec app ping db
# View environment variables
docker compose exec app env
FAQ
How much RAM do I need?
For small projects, 2GB is usually enough. Each container has overhead, so budget ~100MB per container plus actual app needs. Hostinger VPS plans start at 4GB which handles most stacks comfortably.
Should I use Docker Compose or Kubernetes?
Docker Compose for single-server deployments (most people). Kubernetes when you need multi-node clusters, auto-scaling, or have a dedicated DevOps team. Don't overcomplicate.
How do I update a running application?
# Pull latest images
docker compose pull
# Recreate changed containers
docker compose up -d
For custom builds: docker compose up -d --build
Can I use Docker Compose with Nginx Proxy Manager?
Yes! Don't expose ports directly, just put containers on the same network as NPM. See our reverse proxy guide.
How do I persist data?
Use named volumes (Docker manages location) or bind mounts (you specify path). Named volumes are recommended for databases.
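Side by side, the two forms look like this (the pg.conf bind mount is just an illustration):
services:
  db:
    volumes:
      # Named volume: Docker manages the host location
      - pgdata:/var/lib/postgresql/data
      # Bind mount: you choose the host path
      - ./config/pg.conf:/etc/postgresql/postgresql.conf:ro
volumes:
  pgdata: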
What's the difference between up and start?
up creates and starts containers. start only starts existing stopped containers. Always use up -d.
Next steps: Set up automated backups to protect your Docker data, and add monitoring to track container health.