Part 1: Building a Self-Hosted LLM Server From Scratch

TL;DR: A repurposed gaming rig, Debian Linux, and Docker are all you need to get a self-hosted LLM and automation server running on a home network. This post covers the hardware choices, OS setup, and initial Docker configuration that started this build.

I had a thought that kept returning: a local server running its own LLM, connected to automation tools, sitting behind my own firewall, doing useful work around my home network. Not a cloud subscription, not someone else’s hardware. My own machine, under my own control, specialized for the way I actually work.

The case for a self-hosted LLM server comes down to two things. Privacy is the first. Keeping data, output, and context behind a local firewall means the information I feed into the system stays on hardware I own. Specialization is the second. A self-hosted server can be tuned to my specific working environment, projects, and preferences in ways a shared cloud service cannot match. Even on modest hardware, both of those advantages are worth building toward.

What Hardware Does a Self-Hosted LLM Server Actually Need?

You do not need expensive hardware to get started, but you do need to be realistic about what the machine can handle. I repurposed an old gaming rig that had been sitting idle since the COVID era. Underpowered by any serious LLM benchmark, but enough to be useful and a good foundation for learning how to deploy and maintain this kind of system.

Processor: AMD Ryzen 5 1600, 3.2 GHz, 6-core
Motherboard: ASRock B450M-HDV R4.0, Micro ATX, AM4
RAM: Corsair VENGEANCE LPX DDR4, 64 GB (2x32 GB), 3600 MHz
GPU: Sapphire PULSE Radeon RX 580, 8 GB

The GPU is the real limiting factor here. An RX 580 handles smaller models well, but anything in the 13B+ parameter range requires patience or heavier quantization. A GPU upgrade is on the list, most likely a refurb after prices settle. For now, the machine has slightly more than the minimum needed to run local models with reasonable efficiency.

The broader point for anyone starting a similar build: do not let imperfect hardware stop the project. A working modest server that you iterate on is more useful than waiting for the optimal moment to buy better parts.

Why Is Debian the Right OS for a Home LLM Server?

Debian is the right choice for a home server because it is stable, lightweight, and predictable. Other options were on the table. Ubuntu is popular and well-supported. Arch gives you maximum control. Mint is approachable for newcomers. But for a machine that needs to run reliably for months without constant attention, Debian’s conservative release cycle and minimal footprint are advantages, not limitations.

I installed Debian with the XFCE desktop environment, again prioritizing lightness over features. XFCE is customizable without being resource-heavy, which matters on a machine where most of the compute needs to go toward the LLM and services rather than the graphical layer.

One installation note worth flagging: when adding AMD drivers on Debian, install them manually from the AMD website rather than pulling them through sources.list. Adding third-party sources to your Debian apt configuration can cause breakage on future distro upgrades. Manual installation means manual updates, but a simple shell script handles that cleanly and keeps the upgrade path safe.
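As a sketch of what that update helper can look like, here is a minimal script. The package filename and download directory are placeholders, not the actual AMD release names; the real .deb has to be fetched from AMD's site first.

```shell
#!/bin/bash
# Hypothetical driver-update helper. DRIVER_PKG is a placeholder filename;
# download the current release from amd.com into DOWNLOAD_DIR before running.
DRIVER_PKG="amdgpu-install_VERSION_all.deb"
DOWNLOAD_DIR="$HOME/drivers"

mkdir -p "$DOWNLOAD_DIR"
cd "$DOWNLOAD_DIR" || exit 1

# Only (re)install when a package is actually present in the directory.
if [ -f "$DRIVER_PKG" ]; then
    # Installing the local .deb through apt lets apt resolve its dependencies.
    sudo apt install "./$DRIVER_PKG"
else
    echo "No driver package found in $DOWNLOAD_DIR -- download one first."
fi
```

Because the driver never enters sources.list, a routine dist-upgrade has nothing third-party to trip over; updating the driver is a deliberate, separate step.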

Why Use Docker to Run a Local LLM Server?

Docker makes a home server maintainable. Running services directly on the host means one misconfigured dependency or failed update can destabilize everything. Containers isolate each service so they can be updated, restarted, or replaced without touching anything else on the system.

For this server, the initial Docker setup consists of two main containers. The first runs n8n, the workflow automation platform that will serve as the central nervous system for the server’s automations. Alongside it, nginx runs as a reverse proxy, providing TLS-secured connections across the local network and reducing the risk of man-in-the-middle attacks, even within a home environment.
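A rough sketch of those two containers follows. The network name, ports, and volume paths here are illustrative assumptions, not the exact production configuration, and the commands are guarded so they no-op on a machine without Docker.

```shell
#!/bin/sh
# Sketch only: names, ports, and mount paths are assumptions for illustration.
if command -v docker >/dev/null 2>&1; then
    # Shared bridge network so the proxy can reach n8n by container name.
    docker network create llm-net 2>/dev/null || true

    # n8n: workflow automation, state persisted in a named volume.
    docker run -d --name n8n --network llm-net \
        -p 5678:5678 \
        -v n8n_data:/home/node/.n8n \
        n8nio/n8n

    # nginx: reverse proxy terminating TLS for the local network.
    # nginx.conf and the certs directory are assumed to exist locally.
    docker run -d --name proxy --network llm-net \
        -p 443:443 \
        -v "$PWD/nginx.conf:/etc/nginx/nginx.conf:ro" \
        -v "$PWD/certs:/etc/nginx/certs:ro" \
        nginx
else
    echo "Docker not found; install it first."
fi
```

With both containers on the same user-defined network, the nginx config can proxy to n8n by container name instead of an IP address, which survives container restarts.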

A PostgreSQL container handles persistent data storage. Automation output, workflow results, and anything else that needs to survive a container restart lives in this database. A dedicated database container also makes that data available to future projects and MCP (Model Context Protocol) servers without any additional configuration overhead.
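A sketch of the persistence container, with the same caveats: credentials and names are placeholders, the command is guarded so it no-ops without Docker, and the named volume is what actually keeps the data alive across restarts.

```shell
#!/bin/sh
# Sketch only: user, password, and database names are placeholders.
if command -v docker >/dev/null 2>&1; then
    # Join the same network as the other services (created if missing).
    docker network create llm-net 2>/dev/null || true

    docker run -d --name db --network llm-net \
        -e POSTGRES_USER=n8n \
        -e POSTGRES_PASSWORD=change-me \
        -e POSTGRES_DB=n8n \
        -v pg_data:/var/lib/postgresql/data \
        postgres:16
else
    echo "Docker not found; install it first."
fi
```

n8n itself can then be pointed at this database through its environment variables (DB_TYPE=postgresdb and the DB_POSTGRESDB_* settings), so workflow state lives in Postgres rather than the default SQLite file.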

What Comes Next After the Initial Build?

The foundation is working: Debian is installed, Docker is running, and the initial container stack is defined. The next step is configuring n8n for practical automation tasks and making the server accessible across the network using DNS routing through pfSense.

The goal from the beginning was a server that behaves like a ship’s computer: responding to requests, running automations, managing information across the network. That vision starts taking shape in the next post.

Happy making!
