Part 3: Claude Code on a Home Server: Building a Distributed AI Agent Network With n8n and Tailscale

TL;DR: Deploying Claude Code on a single machine is a productivity win. Deploying it across a home server network, orchestrated by n8n, and connected globally via Tailscale turns a home lab into a distributed AI agent platform you own entirely. This post covers how Shockwave evolved from a local automation server into a multi-machine agent network I can reach from anywhere with an internet connection.

There is a Van Morrison line that fits this moment perfectly: “My how you have grown.” That is exactly how I feel looking at Shockwave, the AI/LLM server I have been building and documenting over the past several months. What started as a local automation experiment has expanded into a multi-container, multi-machine setup running Claude Code agents across the entire home network, accessible anywhere I have a signal. This post is a status update on where Shockwave stands today and how Claude Code became the connective tissue that makes the whole stack work.

Why Expand a Home AI Server With More Docker Containers?

Adding more Docker containers to Shockwave was about giving the server visibility and reach across the network. A home AI server is only as useful as the data and services it can access. For Shockwave to function as a genuine network hub, it needed the infrastructure to match.

Here is what I added using Docker Compose:

Samba Server: This container maps two drives from Shockwave, ENERGON_01 and ENERGON_02, to the broader network. Other machines can drop files into these shared drives, and any agent running on Shockwave can pick them up. Running Samba as a container rather than on the host machine keeps the host clean and makes administration straightforward. Updating or restarting the SMB server is a container operation, not a host-level service management task.

PostgreSQL Server: A standalone Postgres container gives every container and every machine on the network access to a unified database. Information written from one container is immediately available to every other service on the network, which matters a great deal once you start chaining agents and automations together.

NodeJS Server 01 (Dashboard): A Nuxt and Vue container hosting a web dashboard for Shockwave. The goal is a real-time interface showing service status, agent activity, and automation state. A dedicated post on this is coming soon.

NodeJS Server 02 (Learning Container): A Nuxt and Svelte container running alongside the dashboard. No critical function yet. I have not built a Svelte project with Nuxt before, so this one exists to learn. Consider it a development sandbox.

Each of these containers exists because a networked AI server needs more than raw compute. It needs accessible storage, a shared data layer, and interfaces for monitoring what is actually happening.
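To make the shape of this concrete, here is a minimal docker-compose.yml sketch of the layout. The image choices, volume paths, ports, and credentials below are illustrative placeholders, not my actual configuration:

```yaml
services:
  samba:
    image: dperson/samba            # illustrative image choice
    volumes:
      - /mnt/ENERGON_01:/share/ENERGON_01
      - /mnt/ENERGON_02:/share/ENERGON_02
    ports:
      - "445:445"
    restart: unless-stopped

  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: change-me  # placeholder credential
    volumes:
      - pgdata:/var/lib/postgresql/data
    ports:
      - "5432:5432"
    restart: unless-stopped

  dashboard:
    build: ./dashboard              # Nuxt dashboard app
    ports:
      - "3000:3000"
    restart: unless-stopped

volumes:
  pgdata:
```

The useful property is that the whole stack comes up and down with one `docker compose up -d`, and each service can be updated independently.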

How Does Claude Code Transform a Home Server Into an Agent Network?

Claude Code on a single machine is capable. Claude Code across a network of machines, orchestrated through n8n, is a different category of tool entirely. Deploying Claude Code on Shockwave and connecting it to n8n’s SSH node is what shifted this project from “interesting experiment” to “genuinely useful infrastructure.”

The key discovery was that n8n’s SSH node can reach Shockwave’s host machine directly. Because n8n runs in its own Docker container, I can use Shockwave’s domain name to SSH into the host and have the full server environment as a workspace. That means I can trigger Claude Code commands from n8n automation workflows without touching an AI node in n8n at all. The practical benefit is significant: I am not burning API tokens through n8n’s AI integrations. My Claude Code subscription handles the compute, and the agents I have configured maintain their own memory in ~/.claude/**/memory.md files, persisting context between runs without any additional overhead.

Claude Code is now running on five machines in the network: Shockwave (the main server), Mirage (torrent server), Soundwave (music server), Rumble (a Raspberry Pi terminal), and Teletraan (the NAS). Each installation has agents configured specifically to its host machine. Every one of them is reachable through n8n’s SSH node.

The result is a chain of Claude Code agents that can hand off work across machines. One node completes a task and passes results to the next. Coupled with Tailscale, this chain is no longer limited to the local network.
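The handoff itself is just a pipeline: one agent's stdout becomes the next agent's stdin (Claude Code accepts piped input alongside `-p`). A local sketch with stub functions standing in for the SSH calls, all names illustrative:

```shell
# Stub sketch of a two-hop agent handoff. In the real chain each function
# is replaced by an SSH call, e.g. ssh mirage 'claude -p "..."'.
mirage_agent() {
  echo "mirage: 3 downloads completed"
}
soundwave_agent() {
  # Reads the previous agent's report from stdin and acts on it.
  while read -r line; do
    echo "soundwave handling: $line"
  done
}

RESULT=$(mirage_agent | soundwave_agent)
echo "$RESULT"
```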

What Is Tailscale and Why Does It Matter for a Self-Hosted AI Setup?

Tailscale is a mesh VPN built on WireGuard that makes all of your devices behave as if they are on the same local network, regardless of where they are physically. Each device gets a stable Tailscale IP and hostname. No port forwarding, no dynamic DNS management, no services exposed on the public internet. It connects and stays connected.

For a self-hosted AI server setup, Tailscale is a production multiplier in two concrete ways.

First, Shockwave is now reachable from anywhere I have a cell signal. My phone connects through Tailscale the same way it would on the local network. That means I can trigger automations, check server status, and interact with agents from a coffee shop, an airport, or anywhere with connectivity. The server is doing work for me whether I am in the same room or across the country.

Second, Tailscale keeps everything inside a private mesh rather than exposing services to the public internet. With so much infrastructure living in cloud environments by default these days, there is real value in controlling your own network topology and keeping your data on hardware you own. Tailscale makes that practical rather than painful. I have my own cloud, on my own devices, without anyone else dictating what I can do with it.

What Automations Does This Stack Actually Enable?

The combination of Claude Code and n8n automation enables real operational workflows, not just demos. Here are three that are running on Shockwave right now.

The first is a network health automation. It cycles through every access point on the network, tests connectivity, and sends a structured report to the firewall. The firewall machine runs a Claude Code agent that reads the report and optimizes traffic routing and access point configuration based on what it finds. This runs on a schedule without manual intervention.

The second is a network kill switch triggered by Discord. Sending a specific command to the server via Discord initiates a controlled network shutdown. Useful for situations where I need to cut access quickly and remotely, without opening a terminal.


The third is an event discovery automation. Shockwave pulls event and show listings for the Philadelphia area, passes them through an agent that filters by known preferences, and surfaces suggestions directly to our shared calendars. It is a practical example of how useful a personalized, self-hosted agent can be when it has access to both external data and internal context at the same time.

More automations are in development. The pattern is consistent: identify a recurring task, build an n8n workflow to gather or act on data, and hand the complex reasoning to a Claude Code agent that has the context to do something genuinely useful with it.

How Does VSCode Remote SSH Fit Into a Home Lab Development Workflow?

VSCode’s Remote SSH extension makes Shockwave feel like a local machine, and that matters more than it sounds. Working on server files through a terminal SSH session works, but it is not a sustainable development environment for anything beyond quick edits.

After configuring SSH keys and the remote connection, I can open any file or directory on Shockwave in VSCode as if it were sitting on my local machine. Extensions work. IntelliSense works. The integrated terminal connects to the remote environment. The development experience is indistinguishable from working locally.
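The setup reduces to one SSH config entry on the client. With Tailscale's MagicDNS, the hostname resolves from anywhere on the tailnet; the user name and key path below are placeholders:

```
Host shockwave
    HostName shockwave          # Tailscale MagicDNS name
    User youruser               # placeholder
    IdentityFile ~/.ssh/id_ed25519
```

VSCode's Remote SSH extension picks this entry up automatically, so connecting is one click from the remote explorer.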

Combined with Tailscale, this setup travels. I can open Shockwave’s codebase from my laptop at any location and work on it with my full local tooling intact. For anyone building out a home server or home lab, this combination of VSCode Remote SSH and a mesh VPN is worth setting up early. It removes one of the biggest friction points in remote server development and makes the whole project significantly more sustainable.

What Is Next for Shockwave?

Shockwave has grown into something more than an automation server, and the next phase is focused on natural language interaction. The priority is a voice interface, most likely built on top of the Svelte application already running on the server. The goal is a conversational interface to the agent network, one that does not require opening a terminal or manually triggering a workflow.

A Python application for deploying lightweight interfaces across other devices on the network, including voice and text prompt options, is also on the roadmap. Before any of that, the dashboard gets the attention it deserves. It is the operational face of the server, and it needs to be solid before adding more surface area.

More to come. Happy hacking.
