TL;DR: Docker Compose is the cleanest way to run n8n on a Debian home server. This post walks through installing Docker on Debian from the official repository, configuring an n8n and nginx container stack, and getting the automation layer of a self-hosted LLM server operational and accessible across the network.
After some time away from this project, I am picking up where the hardware build left off. The core objective has not changed: a centralized automation hub that functions like the ship’s computer from Star Trek: TNG, serving as an intelligent assistant, tool orchestrator, and knowledge base on the home network. n8n is the automation platform running at the center of that setup, and this post covers getting it fully deployed.
One quality-of-life addition worth noting upfront: I configured pfSense with local DNS so each server on the network gets a friendly name rather than a raw IP address. This makes access throughout the setup significantly cleaner. You can follow along using IP addresses if local DNS is not yet configured.
What Do You Need Before Setting Up n8n on a Home Server?
Getting n8n running on a home server requires a few things in place before touching any Docker configuration. Missing one of these is the most common source of early setup friction.
- A Debian-based server (or similar Linux distribution) with SSH access
- Basic familiarity with command-line operations
- Approximately 2GB of free disk space for Docker images and data
- Network access to your server from client machines
- (Optional) Local DNS configured for friendly server names
This project assumes a fresh Debian installation. For other distributions, Docker installation steps may vary slightly. The official Docker documentation has OS-specific instructions for each supported platform.
How Do You Install Docker on Debian?
Installing Docker on Debian is best done through apt using Docker’s official repository rather than via Docker Desktop. The Desktop route has real limitations: it may not run correctly with certain X11 window managers, and keeping it updated after a DEB package install is cumbersome. Adding Docker’s apt source gives full CLI control and a clean upgrade path going forward.
Start by updating and upgrading existing packages:
sudo apt update
sudo apt upgrade
Next, install the prerequisite packages and add Docker’s official GPG key and apt repository, as detailed in Docker’s documentation:
# Add Docker's official GPG key:
sudo apt update
sudo apt install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/debian/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
# Add the repository to Apt sources:
sudo tee /etc/apt/sources.list.d/docker.sources <<EOF
Types: deb
URIs: https://download.docker.com/linux/debian
Suites: $(. /etc/os-release && echo "$VERSION_CODENAME")
Components: stable
Signed-By: /etc/apt/keyrings/docker.asc
EOF
sudo apt update
Now install the Docker Engine and all the plugins needed for this setup:
sudo apt install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
Docker’s official documentation is worth bookmarking here. It is the most reliable reference for keeping the installation current.
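Before moving on, it is worth confirming that the engine and daemon actually work. Docker ships a tiny `hello-world` image for exactly this sanity check:

```shell
# Confirm the CLI is installed and the daemon responds
docker --version
sudo docker run --rm hello-world
```

If hello-world prints its welcome message, the CLI and daemon are talking to each other. Optionally, adding your user to the docker group (`sudo usermod -aG docker $USER`, then log out and back in) drops the need for sudo, with the caveat that docker group membership is effectively root-equivalent on the host.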
How Does Docker Compose Simplify n8n Deployment?
Docker Compose reduces a multi-service setup to a single configuration file and one command. Rather than manually running and linking individual containers, you define the entire stack in a docker-compose.yaml file and Docker handles the orchestration.
Finding the Right Images
Docker Hub is the starting point. For this automation stack, two images are needed: the official n8n image from n8n.io and the official nginx image. Sticking to official images keeps the setup aligned with maintained documentation, which matters when you hit an edge case and need to troubleshoot.
- Official n8n image: https://hub.docker.com/r/n8nio/n8n
- Official nginx image: https://hub.docker.com/_/nginx
For each image, the key details are the image name and the tag. The tag functions as a version selector. For this initial setup, latest is sufficient for both.
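Pulling the images ahead of time is optional, since `docker compose up` fetches anything missing, but it surfaces typos in image names early and makes the first launch faster:

```shell
# Pre-fetch both images referenced in the compose file
docker pull n8nio/n8n:latest
docker pull nginx:latest
```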

The Docker Compose Configuration
Below is the docker-compose.yaml for this automation stack. It brings up nginx as a reverse proxy in front of n8n, with a shared Docker network connecting the two services:
services:
  nginx:
    image: nginx:latest
    container_name: nginx
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/conf.d:/etc/nginx/conf.d
      - ./ssl:/etc/nginx/ssl:ro
    depends_on:
      - n8n
    networks:
      - n8n_network

  n8n:
    image: n8nio/n8n:latest
    container_name: n8n
    restart: unless-stopped
    ports:
      - "5678:5678"
    environment:
      - N8N_BASIC_AUTH_ACTIVE=true
      - N8N_BASIC_AUTH_USER=admin
      - N8N_BASIC_AUTH_PASSWORD=YOUR_SECURE_PASSWORD_HERE
      - GENERIC_TIMEZONE=America/New_York
      - N8N_LOG_LEVEL=info
      - N8N_DEFAULT_BINARY_DATA_MODE=filesystem
      - N8N_METRICS=true
      - N8N_EXECUTIONS_DATA_PRUNE=true
      - N8N_EXECUTIONS_DATA_MAX_AGE=168
    volumes:
      - n8n_data:/home/node/.n8n
      - ./local-files:/home/node/local-files
    networks:
      - n8n_network

volumes:
  n8n_data:

networks:
  n8n_network:
Security Note: For production use, avoid hardcoding credentials in your compose file. Consider using Docker secrets or a separate .env file that is excluded from version control.
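As a concrete sketch of the .env approach: Compose substitutes `${VAR}` references in docker-compose.yaml from a .env file sitting next to it. The variable name N8N_PASSWORD below is just an example, not an n8n-defined name:

```shell
# Create a .env file next to docker-compose.yaml (and add it to .gitignore)
cat > .env <<'EOF'
N8N_PASSWORD=change-me-to-something-strong
EOF

# The environment line in docker-compose.yaml then becomes:
#   - N8N_BASIC_AUTH_PASSWORD=${N8N_PASSWORD}
```

Compose reads the .env file automatically at `docker compose up`, so nothing else changes in the launch workflow.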
Understanding the Key Configuration Parameters
For those newer to Docker Compose, here is what each section is doing:
Core Service Parameters:
- services: defines all containerized applications in this setup
- image: specifies which Docker image to use (format: name:tag)
- container_name: sets a friendly name for the running container
- restart: controls restart behavior (unless-stopped means the container restarts automatically, except when manually stopped)
- ports: maps host ports to container ports (format: host:container)
- volumes: mounts directories from the host into the container for persistent data
- networks: assigns the service to a specific Docker network
- depends_on: ensures services start in the correct order
n8n-Specific Environment Variables:
- N8N_BASIC_AUTH_ACTIVE enables username/password protection
- N8N_BASIC_AUTH_USER and N8N_BASIC_AUTH_PASSWORD set the login credentials
- GENERIC_TIMEZONE sets the timezone for scheduled workflows
- N8N_DEFAULT_BINARY_DATA_MODE stores file data on the filesystem rather than in the database
- N8N_METRICS enables performance metrics collection
- N8N_EXECUTIONS_DATA_PRUNE automatically cleans up old execution logs
- N8N_EXECUTIONS_DATA_MAX_AGE retains execution logs for 168 hours (7 days)
For a complete reference, see the official n8n environment variable documentation.
How Do You Configure the Directory Structure and nginx Reverse Proxy?
The directory structure needs to exist before the containers will start cleanly. In the project folder on the server, create the required directories:
mkdir -p nginx/conf.d ssl local-files n8n_data
This produces the following layout:
/my-project-folder
├── local-files/            # Files read and written by n8n workflows
├── n8n_data/               # n8n data (the compose file uses a named volume; this host directory is only needed if you switch to a bind mount)
├── nginx/
│   └── conf.d/             # nginx configuration files
├── ssl/                    # SSL certificates (if using HTTPS)
│   ├── YOUR_SSL_CERT.crt
│   └── YOUR_SSL_KEY.key
└── docker-compose.yaml     # The compose configuration
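If you do not already have certificates to place in ssl/, a self-signed one works for LAN-only HTTPS (browsers will show a warning you can accept once). The common name n8n.home.lan below is a placeholder for whatever local DNS name or IP the server uses:

```shell
mkdir -p ssl
# Self-signed certificate valid for one year; -nodes leaves the key
# unencrypted so nginx can read it without a passphrase prompt
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
  -keyout ssl/YOUR_SSL_KEY.key \
  -out ssl/YOUR_SSL_CERT.crt \
  -subj "/CN=n8n.home.lan"
```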
The nginx reverse proxy configuration goes in nginx/conf.d/default.conf. This configuration handles HTTP-to-HTTPS redirection and secure proxying of traffic to the n8n container:
# Files in conf.d are included inside nginx's main http block,
# so no events/http wrapper is needed here.
upstream n8n {
    server n8n:5678;
}

# HTTP server - redirects to HTTPS
server {
    listen 80;
    server_name _;
    return 301 https://$host$request_uri;
}

# HTTPS server
server {
    listen 443 ssl http2;
    server_name _;

    # SSL configuration using your generated certificates
    ssl_certificate /etc/nginx/ssl/YOUR_SSL_CERT.crt;
    ssl_certificate_key /etc/nginx/ssl/YOUR_SSL_KEY.key;

    # SSL settings for security
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA384;
    ssl_prefer_server_ciphers off;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;

    # Increase client max body size for file uploads
    client_max_body_size 100M;

    # Security headers
    add_header X-Frame-Options DENY;
    add_header X-Content-Type-Options nosniff;
    add_header X-XSS-Protection "1; mode=block";
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;

    location / {
        proxy_pass http://n8n;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_cache_bypass $http_upgrade;

        # WebSocket support
        proxy_set_header Sec-WebSocket-Extensions $http_sec_websocket_extensions;
        proxy_set_header Sec-WebSocket-Key $http_sec_websocket_key;
        proxy_set_header Sec-WebSocket-Version $http_sec_websocket_version;

        # Timeouts
        proxy_connect_timeout 60s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;
    }
}
Note: This is a full SSL/HTTPS setup. A simpler HTTP-only nginx configuration will work if you want to verify the reverse proxy is routing correctly before adding SSL.
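For that verification step, a minimal HTTP-only default.conf could look like the sketch below (same upstream, no TLS; swap it for the full configuration once routing is confirmed):

```nginx
upstream n8n {
    server n8n:5678;
}

server {
    listen 80;
    server_name _;

    location / {
        proxy_pass http://n8n;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
    }
}
```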
How Do You Launch and Verify the n8n Container?
Launching the stack is a single command from the project folder:
cd /path/to/my-project-folder
docker compose up -d
The -d flag runs the containers in detached mode, meaning they run in the background and persist after the terminal session closes.
Verify both containers are running:
docker compose ps
Both nginx and n8n should show as running. Navigate to the server in a browser using either the DNS name or the IP address, and the n8n login screen should appear.
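If either container shows as restarting or exited instead, the logs usually say why:

```shell
# Show the most recent output from each service
docker compose logs --tail=50 n8n
docker compose logs --tail=50 nginx

# Follow live output while testing in the browser
docker compose logs -f n8n
```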

The automation layer is operational. From here, the work shifts from infrastructure to building the workflows that make the server actually useful.
What Comes Next After Getting n8n Running?
The foundation is working. The next priorities are making the stack resilient and starting to build the automations that justify the project.
On the infrastructure side: configuring the containers to start automatically on server reboot using systemd, setting up monitoring for container health, and scheduling automated backups of workflow data.
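For the reboot piece, restart: unless-stopped already brings the containers back once the Docker daemon starts at boot, but an explicit systemd unit makes the stack a first-class service with clean start/stop semantics. A sketch, assuming the project lives at /path/to/my-project-folder and the compose plugin is at its usual path:

```shell
# Hypothetical unit name and project path -- adjust both to your setup
sudo tee /etc/systemd/system/n8n-stack.service <<'EOF'
[Unit]
Description=n8n automation stack (Docker Compose)
Requires=docker.service
After=docker.service

[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/path/to/my-project-folder
ExecStart=/usr/bin/docker compose up -d
ExecStop=/usr/bin/docker compose down

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now n8n-stack.service
```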
On the workflow side: automated file management for the torrent server, network monitoring that alerts on service failures, and early integration experiments to start testing the ship’s computer concept in practice.
Those workflows, and the infrastructure work that supports them, are covered in the next post.
Thanks for reading.
