Part 2: Getting Automation Layer Set Up

After some time away from this project, I’m picking up where I left off with the automation layer of my local LLM build. The core objective remains: building a centralized automation hub that functions like the ship’s computer in Star Trek: TNG, serving as an intelligent assistant, tool orchestrator, and knowledge base all in one.

The heart of this setup is n8n, a widely used workflow automation platform that I’ll be running locally on my network. I may experiment with Windmill at some point in the future, but for now n8n is my preferred application for this layer. As mentioned in the previous post, I’m using Docker containers to keep everything modular and maintainable rather than building web servers from scratch.

To make interaction with my network easier, I’ve configured my pfSense box with DNS to provide friendly internal names for my servers, instead of memorizing IP addresses. This makes access much cleaner, though you can follow along using IP addresses if you don’t have local DNS configured yet.

Prerequisites

Before diving into the Docker setup, you’ll need:

  • A Debian-based server (or similar Linux distribution) with SSH access
  • Basic familiarity with command-line operations
  • Approximately 2GB of free disk space for Docker images and data
  • Network access to your server from client machines
  • (Optional) Local DNS configured for friendly server names

This guide assumes you’re working with a fresh Debian installation. If you’re using a different distribution, the Docker installation steps may vary slightly; refer to the official Docker documentation for your specific OS.

Setting Up The Project

The server where this will live is set up with SSH access so I can control most things from any terminal app on my network. Once I SSH into the server, I need to install Docker.

This is not as straightforward on a Debian box as it would seem. The easiest way to do this is to install the Docker Desktop application. You can download the DEB package here if you are doing this via desktop. There are a couple of issues with this that you should be aware of:

  • Although Debian is supported, the Docker Desktop app may not run properly under certain window managers (X11). However, the Docker Engine is fully supported, and that is really what we need.
  • Updating Docker Desktop is not as easy when you install via the DEB package. Adding Docker’s repository to our apt sources instead makes updates easier from the CLI.

If you do decide to go the route of downloading the package, you can still use apt to install it by pointing apt to the download like so:

cd ~/Downloads  # or wherever you keep your downloads
sudo apt install ./docker-desktop-amd64.deb

Since I am not installing the prepackaged DEB file, I am going to add Docker’s repository and install via apt. This gives me more flexibility and lets me do everything I need from the command line. First, I update and upgrade all packages.

sudo apt update

sudo apt upgrade

Next I need to add a few prerequisite packages as detailed in Docker’s documentation. These include ca-certificates and curl, which are present by default on most Linux distros, but since this is barebones Debian, they need to be installed. Next, we add Docker’s official GPG key, and finally we add the Docker apt repo to our sources list.

# Add Docker's official GPG key:
sudo apt update
sudo apt install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/debian/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

# Add the repository to Apt sources:
sudo tee /etc/apt/sources.list.d/docker.sources <<EOF
Types: deb
URIs: https://download.docker.com/linux/debian
Suites: $(. /etc/os-release && echo "$VERSION_CODENAME")
Components: stable
Signed-By: /etc/apt/keyrings/docker.asc
EOF

sudo apt update

Now I can install the packages needed to run the Docker Engine and the features I want to work with. These are also the packages suggested for a default install.

sudo apt install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

This and more information can be found in Docker’s official documentation. It is always good to bookmark such things for future reference.
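Before moving on, it’s worth a quick sanity check. A minimal sketch, assuming you also want the optional convenience of running docker without sudo:

```shell
# Verify the engine works by running the official test container
sudo docker run hello-world

# Confirm the Compose plugin is present
sudo docker compose version

# Optional: add your user to the docker group so sudo isn't needed.
# Log out and back in (or run `newgrp docker`) for this to take effect.
sudo usermod -aG docker "$USER"
```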

Now that Docker is installed, I need to set up my images and container for the automation layer of the server.

Setting Up The Docker Images and Container

One of the wonderful things about using Docker is Docker Hub, a must-have resource for aspiring developers and experts alike. Within Docker Hub one can easily find premade images of popular libraries, databases, and applications, and reference those images in docker-compose.yaml files to utilize Docker Compose.

Retrieving The Necessary Images

Retrieving images via the Desktop app is pretty easy, but it is just as easy using the CLI, and Docker Compose makes it easier still by automatically grabbing the images we define in docker-compose.yaml. More on this later. For now, I head over to https://hub.docker.com/explore to see what images are available to me.

Docker Hub Explore Page

For this automation container I want a couple of things. First, the obvious, n8n. Then I want to grab nginx for setting up a reverse proxy. For the n8n image there were several options, but I stuck to the official n8n.io image. I did the same for nginx, mostly to keep things simple, but also so I can go to official documentation if I run into any roadblocks.

The important information we want is the name of the image and the tag. Think of the tag like a version or variant. For this initial setup, I am just going to use the latest tag for each of the images. You will see what I mean when I get to the docker-compose.yaml configuration.
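To make the name:tag idea concrete, here is how you could pull these same images by hand from the CLI. This step is optional, since Compose will pull them automatically later:

```shell
# Pull a specific image by name:tag from Docker Hub
docker pull n8nio/n8n:latest
docker pull nginx:latest

# List what is now cached locally, with tags and sizes
docker images
```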

Docker Compose Configuration file

This is where the rubber meets the road. A docker-compose.yaml file is a YAML document that contains the configuration and parameters for spinning up containerized environments with the Docker Engine. Despite the name, YAML is not a markup language (it stands for “YAML Ain’t Markup Language”), which can confuse some newcomers. The format of the file is fairly simple and easy to understand. Below is the docker-compose.yaml I constructed for this project.

services:
  nginx:
    image: nginx:latest
    container_name: nginx
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/conf.d:/etc/nginx/conf.d
      - ./ssl:/etc/nginx/ssl:ro
    depends_on:
      - n8n
    networks:
      - n8n_network
      
  n8n:
    image: n8nio/n8n:latest
    container_name: n8n
    restart: unless-stopped
    ports:
      - "5678:5678"
    environment:
      - N8N_BASIC_AUTH_ACTIVE=true
      - N8N_BASIC_AUTH_USER=admin
      - N8N_BASIC_AUTH_PASSWORD=YOUR_SECURE_PASSWORD_HERE
      - GENERIC_TIMEZONE=America/New_York
      - N8N_LOG_LEVEL=info
      - N8N_DEFAULT_BINARY_DATA_MODE=filesystem
      - N8N_METRICS=true
      - N8N_EXECUTIONS_DATA_PRUNE=true
      - N8N_EXECUTIONS_DATA_MAX_AGE=168
    volumes:
      - n8n_data:/home/node/.n8n
      - ./local-files:/home/node/local-files
    networks:
      - n8n_network
      
volumes:
  n8n_data:
  
networks:
  n8n_network:

Security Note: For production use, avoid hardcoding credentials in your compose file. Consider using Docker secrets or a separate .env file that’s excluded from version control.
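As a sketch of the .env approach: Compose automatically reads a file named .env sitting next to docker-compose.yaml and substitutes ${VAR} references at startup. Create a .env file containing a line like `N8N_PASSWORD=YOUR_SECURE_PASSWORD_HERE` (and add .env to .gitignore), then reference the variable in the compose file. Note that N8N_PASSWORD is a placeholder name of my own choosing, not an n8n setting:

```yaml
# docker-compose.yaml excerpt - Compose substitutes ${N8N_PASSWORD}
# from the .env file, keeping the secret out of version control
    environment:
      - N8N_BASIC_AUTH_PASSWORD=${N8N_PASSWORD}
```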

This setup is fairly simple and should be easy to understand if you are familiar with Docker. The first bit is setting up the nginx image. It contains parameters needed for a basic setup with simple SSL encryption. In the previous post I mentioned that this is running on my local network and when exposing a web server, even on your home network, it should be secured against MITM attacks and other shenanigans. While this does not cover all the security concerns, it will work for what I need.

Understanding the Docker Compose Configuration

For those new to Docker Compose, here’s a breakdown of the key parameters used in this setup:

Core Service Parameters:

  • services: - Defines all containerized applications in this setup
  • image: - Specifies which Docker image to use (format: name:tag)
  • container_name: - A friendly name for the running container
  • restart: - Controls restart behavior (unless-stopped means restart automatically except when manually stopped)
  • ports: - Maps host ports to container ports (format: host:container)
  • volumes: - Mounts directories from the host to the container for persistent data
  • networks: - Assigns the service to a specific Docker network
  • depends_on: - Ensures services start in the correct order

n8n-Specific Environment Variables:

  • N8N_BASIC_AUTH_ACTIVE - Enables username/password protection
  • N8N_BASIC_AUTH_USER / N8N_BASIC_AUTH_PASSWORD - Login credentials
  • GENERIC_TIMEZONE - Sets the timezone for scheduled workflows
  • N8N_DEFAULT_BINARY_DATA_MODE - Stores file data on the filesystem rather than in the database
  • N8N_METRICS - Enables performance metrics collection
  • N8N_EXECUTIONS_DATA_PRUNE - Automatically cleans up old execution logs
  • N8N_EXECUTIONS_DATA_MAX_AGE - Keeps execution logs for 168 hours (7 days)

For a complete list of n8n environment variables, refer to the official documentation.

Preparing the Directory Structure

Before launching the container, I need to create the required directory structure and configuration files. In my project folder on the server, I create the following:

mkdir -p nginx/conf.d ssl local-files

This gives me:

/my-project-folder
├── local-files/       # Files passed into and out of n8n workflows
├── nginx/
│   └── conf.d/        # nginx configuration files
├── ssl/               # SSL certificates (if using HTTPS)
│   ├── YOUR_SSL_CERT.crt
│   └── YOUR_SSL_KEY.key
└── docker-compose.yaml    # The compose configuration

Note that n8n’s own data (workflows, credentials, settings) lives in the Docker-managed named volume n8n_data declared in the compose file, so no local directory is needed for it.
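If you don’t already have certificates for the ssl/ directory, a self-signed one works fine on a home network (browsers will show a one-time warning you can accept). A minimal sketch; the common name n8n.local is a placeholder for whatever DNS name your pfSense box serves:

```shell
# Generate a self-signed certificate and key, valid for one year.
# File names match the placeholders in the directory tree above.
mkdir -p ssl
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout ssl/YOUR_SSL_KEY.key \
  -out ssl/YOUR_SSL_CERT.crt \
  -subj "/CN=n8n.local"
```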

Creating the nginx Configuration

Next, I need an nginx reverse proxy configuration to route traffic to n8n and handle SSL. I create nginx/conf.d/default.conf with the following:

# Files in conf.d/ are included inside the http block of the
# image's main nginx.conf, so this file must not contain its own
# events or http blocks.
upstream n8n {
    server n8n:5678;
}
# HTTP server - redirects to HTTPS
server {
    listen 80;
    server_name _;
    return 301 https://$host$request_uri;
}
# HTTPS server
server {
    listen 443 ssl http2;
    server_name _;
    # SSL configuration using your generated certificates
    ssl_certificate /etc/nginx/ssl/YOUR_SSL_CERT.crt;
    ssl_certificate_key /etc/nginx/ssl/YOUR_SSL_KEY.key;
    # SSL settings for security
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA384;
    ssl_prefer_server_ciphers off;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;
    # Increase client max body size for file uploads
    client_max_body_size 100M;
    # Security headers
    add_header X-Frame-Options DENY;
    add_header X-Content-Type-Options nosniff;
    add_header X-XSS-Protection "1; mode=block";
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
    location / {
        proxy_pass http://n8n;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_cache_bypass $http_upgrade;
        # WebSocket support
        proxy_set_header Sec-WebSocket-Extensions $http_sec_websocket_extensions;
        proxy_set_header Sec-WebSocket-Key $http_sec_websocket_key;
        proxy_set_header Sec-WebSocket-Version $http_sec_websocket_version;
        # Timeouts
        proxy_connect_timeout 60s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;
    }
}

Note: This is an SSL/HTTPS setup. You can get the reverse proxy going with a simpler HTTP-only configuration if you prefer.
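For reference, a minimal HTTP-only version of nginx/conf.d/default.conf might look like the sketch below: same upstream, no certificates. This is only reasonable on a trusted LAN:

```nginx
upstream n8n {
    server n8n:5678;
}

server {
    listen 80;
    server_name _;

    location / {
        proxy_pass http://n8n;
        proxy_http_version 1.1;
        # WebSocket support, needed for the n8n editor UI
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
    }
}
```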

Launching the Container

With the directory structure and configuration in place, I navigate to my project folder via SSH and launch the container:

cd /path/to/my-project-folder
docker compose up -d

The -d flag runs the container in detached mode, allowing it to run in the background.
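If something doesn’t come up cleanly, the container logs are the first place to look:

```shell
# Follow the logs of a single service (Ctrl+C to stop following)
docker compose logs -f n8n

# Stop and remove the containers; named volumes are preserved
docker compose down
```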

To verify everything is running I run the following command:

docker compose ps

You should see both nginx and n8n containers listed as “running”.

Now I can navigate to my server in a browser (using either the DNS name or IP address), and n8n’s login screen appears.

n8n Gateway on my local server

The setup is now operational and ready for my workflow developments.

Next Steps

With the automation server running successfully, there are several areas to tackle next:

Infrastructure Improvements:

  • Configure the container to start automatically on server reboot using systemd
  • Set up monitoring and alerting for container health
  • Configure automated backups of workflow data
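On the first bullet: since the compose file already uses restart: unless-stopped, the containers come back on their own as long as the Docker daemon itself starts at boot, which can be ensured with systemd:

```shell
# Make sure the Docker daemon starts on boot; containers with a
# restart policy will then be brought back up automatically.
sudo systemctl enable docker.service
sudo systemctl enable containerd.service
```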

First Workflows: The real value comes from building practical automations. My initial projects will include:

  • Automated file management for my torrent server, moving completed downloads to the NAS
  • Network monitoring workflows that alert me to server or service failures
  • Integration experiments to test the “ship’s computer” concept

I’ll document these workflows and infrastructure improvements in upcoming posts. The foundation is solid; now it’s time to make it useful.

Thanks for reading.
