
Deploy Open WebUI with Dokploy: Docker Compose Setup Guide

Step-by-step guide to deploying Open WebUI on your VPS using Dokploy and Docker Compose. Connect to Ollama or OpenAI APIs with persistent storage and SSL.


Deploy Open WebUI with Dokploy

Dokploy is an open-source server management platform that simplifies deploying Docker Compose applications on your VPS. It handles reverse proxy, SSL certificates, and deployment lifecycle — making it straightforward to run Open WebUI in production.

This guide covers deploying Open WebUI with optional Ollama integration for local AI models, persistent data storage, and automatic HTTPS.

Prerequisites

  1. A VPS with Dokploy installed and running
  2. A domain or subdomain with a DNS A record pointing to your server's IP
  3. Optional: an OpenAI-compatible API key, or enough RAM and disk space for local Ollama models

Docker Compose Configuration

Create a new Compose project in Dokploy and paste the following configuration:

services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    restart: unless-stopped
    ports:
      - "3000:8080"
    environment:
      # Ollama connection (if running Ollama on the same server)
      - OLLAMA_BASE_URL=http://ollama:11434
      # OpenAI-compatible API (optional — set your API key)
      - OPENAI_API_BASE_URLS=${OPENAI_API_BASE_URLS:-https://api.openai.com/v1}
      - OPENAI_API_KEYS=${OPENAI_API_KEYS:-}
      # Data directory
      - DATA_DIR=/app/backend/data
      # Disable signup after initial admin creation (optional)
      - ENABLE_SIGNUP=${ENABLE_SIGNUP:-true}
      # WebUI name
      - WEBUI_NAME=${WEBUI_NAME:-Open WebUI}
    volumes:
      - ../files/open-webui-data:/app/backend/data
    depends_on:
      ollama:
        condition: service_started
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:8080/health || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3

  ollama:
    image: ollama/ollama:latest
    restart: unless-stopped
    volumes:
      - ../files/ollama-data:/root/.ollama
    # Uncomment the following for NVIDIA GPU support:
    # deploy:
    #   resources:
    #     reservations:
    #       devices:
    #         - driver: nvidia
    #           count: all
    #           capabilities: [gpu]
    healthcheck:
      test: ["CMD-SHELL", "ollama list || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3

Note: If you only plan to use external APIs (OpenAI or other OpenAI-compatible providers) without local models, remove the ollama service and the depends_on block entirely and leave OLLAMA_BASE_URL empty. Removing only the service while keeping depends_on will make the compose file fail to deploy.
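An API-only variant of the compose file might look like the following minimal sketch (same Dokploy ../files volume convention as above; trimmed to the essentials, not tested against every Open WebUI release):

```yaml
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    restart: unless-stopped
    ports:
      - "3000:8080"
    environment:
      # No local Ollama: leave the base URL empty and use OpenAI-compatible APIs
      - OLLAMA_BASE_URL=
      - OPENAI_API_BASE_URLS=${OPENAI_API_BASE_URLS:-https://api.openai.com/v1}
      - OPENAI_API_KEYS=${OPENAI_API_KEYS:-}
      - DATA_DIR=/app/backend/data
    volumes:
      - ../files/open-webui-data:/app/backend/data
```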

Environment Variables

Set these in Dokploy's Environment tab for your compose project:

Variable              Purpose                                  Example
OPENAI_API_KEYS       OpenAI API key (or compatible provider)  sk-your-api-key
OPENAI_API_BASE_URLS  API endpoint URL                         https://api.openai.com/v1
ENABLE_SIGNUP         Allow new user registration              true or false
WEBUI_NAME            Custom name for the UI header            My AI Chat

In Dokploy, environment variables are set via the Environment editor in the project settings. Do not create a .env file manually — Dokploy manages this for you. To use multiple API providers, separate URLs and keys with semicolons (e.g., https://api.openai.com/v1;https://api.anthropic.com/v1).
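For example, a two-provider setup in the Environment editor might look like this (the keys are placeholders, and the second endpoint is just one illustration of an OpenAI-compatible provider):

```
OPENAI_API_BASE_URLS=https://api.openai.com/v1;https://openrouter.ai/api/v1
OPENAI_API_KEYS=sk-your-openai-key;sk-your-openrouter-key
ENABLE_SIGNUP=false
WEBUI_NAME=My AI Chat
```

The n-th key is paired with the n-th URL, so keep the two lists in the same order.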

Volumes & Data Persistence

This setup uses Dokploy's ../files convention for bind-mounted volumes:

The ../files path is relative to the compose file inside Dokploy's project directory. This ensures your data persists across redeployments. Avoid using absolute paths (e.g., /opt/open-webui) because Dokploy may clean them during redeployment.

If you need S3 backup support, consider using named Docker volumes instead. Named volumes can be backed up with Dokploy's built-in backup features.
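Switching to named volumes would change only the volume definitions; a sketch of the relevant parts (volume names here are illustrative):

```yaml
services:
  open-webui:
    volumes:
      - open-webui-data:/app/backend/data
  ollama:
    volumes:
      - ollama-data:/root/.ollama

volumes:
  open-webui-data:
  ollama-data:
```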

Domain & SSL Setup

  1. In your Dokploy project, navigate to the Domains tab
  2. Click Add Domain and enter your domain (e.g., chat.yourdomain.com)
  3. Set the container port to 8080 (Open WebUI's internal port; Traefik routes directly to the container, so the 3000 host mapping is only needed for direct IP access)
  4. Enable HTTPS — Dokploy automatically provisions a Let's Encrypt SSL certificate
  5. Save and wait for the certificate to be issued (usually under a minute)

Dokploy's built-in Traefik reverse proxy handles TLS termination and routes traffic to your Open WebUI container.
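Once the certificate is issued, you can confirm Traefik is terminating TLS and routing correctly from any shell (replace the domain with your own):

```shell
# Expect HTTP 200, or a redirect to the Open WebUI login page
curl -sI https://chat.yourdomain.com | head -n 1
```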

Verifying the Deployment

  1. In Dokploy, go to your project's Deployments tab and click Deploy
  2. Watch the build logs — you should see the open-webui container pull and start
  3. Check the Logs tab for the open-webui service. Look for the startup message indicating the server is listening
  4. Open https://chat.yourdomain.com in your browser — you should see the Open WebUI registration/login page
  5. Create your admin account (the first account created automatically gets admin privileges)
  6. To pull a local model, open a terminal in the Ollama container and run: ollama pull llama3.2
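Step 6 can also be done from the server shell instead of Dokploy's UI terminal. The container name below is a placeholder; check what docker ps reports, since Dokploy generates names per project:

```shell
# Find the Ollama container name (varies per Dokploy project)
docker ps --filter "name=ollama" --format "{{.Names}}"

# Pull a model and confirm it is available
docker exec -it <ollama-container-name> ollama pull llama3.2
docker exec -it <ollama-container-name> ollama list
```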

Troubleshooting

Open WebUI shows "Ollama not connected"

Check Dokploy's logs to verify the Ollama container is running. The containers must be on the same Docker network (Dokploy handles this automatically for services in the same compose file), and OLLAMA_BASE_URL must use the service name (http://ollama:11434), not localhost.
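A quick connectivity check from the server shell (the container name is a placeholder; adjust it to what docker ps reports):

```shell
# Query Ollama over the compose network from inside the Open WebUI container
docker exec <open-webui-container-name> curl -s http://ollama:11434/api/version
```

A healthy Ollama responds with a small JSON object containing its version.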

Models take forever to download

Ollama models are large (4-40 GB), and download speed depends on your VPS bandwidth. Check the Ollama container logs for download progress, and make sure your VPS has enough disk space: the ../files/ollama-data volume needs room for every model you pull.

Out of memory when running local models

Local LLM inference is memory-intensive: a 7B-parameter model requires roughly 4-8 GB of RAM. If your VPS doesn't have enough memory, use API-based models (OpenAI, Anthropic) instead of local Ollama models, or choose smaller quantized models.
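The 4-8 GB figure follows from a rough rule of thumb: quantized weight size is roughly parameters times bits per weight, divided by 8. A quick sanity check in Python (an approximation only; KV cache and runtime overhead add a few more GB on top):

```python
# Back-of-envelope RAM estimate for quantized LLM weights.
# Approximation: runtime overhead and KV cache are not included.
def weights_gb(params: float, bits_per_weight: int) -> float:
    return params * bits_per_weight / 8 / 1e9

print(round(weights_gb(7e9, 4), 1))   # 7B model at Q4 quantization -> 3.5
print(round(weights_gb(7e9, 16), 1))  # same model at fp16 -> 14.0
```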

Permission errors on startup

Open WebUI runs as a non-root user inside the container. If the data directory has the wrong owner, fix it from your server shell (the path is relative to the project's compose directory): chown -R 1000:1000 ../files/open-webui-data


Learn more about Open WebUI in our complete overview.

Need a VPS? Hostinger VPS starts at $4.99/mo — perfect for running Open WebUI.


For more on Docker Compose deployments in Dokploy, see the Dokploy Docker Compose documentation.

App data sourced from selfh.st open-source directory.




fordnox

Expert VPS reviews and hosting guides. We test every provider we recommend.

// last updated: February 12, 2026. Disclosure: This article may contain affiliate links.