Local LLM Development & Configuration

Building a standalone local LLM server to experiment with and use in a home office. The goal of the project is to get familiar with deploying local LLMs and configuring them to work as agents for teams, or to automate specific tasks.

Project Goal

Set up and configure a local server for experimenting and understanding local deployments of LLMs, AI agents, and their respective protocols.

Requirements

  • Server with OS installed
  • Ollama
  • Network settings for a static IP address
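For the static-address requirement, one common approach on Debian 12 is a static entry in `/etc/network/interfaces`. This is a sketch only: the interface name (`enp3s0`) and the addresses shown are placeholders, not values from this build; substitute the output of `ip link` and your own LAN addressing.

```
# /etc/network/interfaces -- static IP sketch (placeholder names/addresses)
auto enp3s0
iface enp3s0 inet static
    address 192.168.1.50/24
    gateway 192.168.1.1
    dns-nameservers 192.168.1.1
```

A fixed address matters here because other machines on the network will call the Ollama API at a known host.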

This project configures a local LLM using Ollama, installed on an old gaming machine that I wiped clean and set up with Debian 12. The intention is to leverage the API provided by Ollama and create local agents to assist Abby and me with our work and day-to-day tasks.
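As a sketch of what "leveraging the API" could look like: Ollama exposes an HTTP API on port 11434 by default, with a `/api/generate` endpoint. The model name below (`llama3`) is an assumption, not part of this build; substitute whatever model you have pulled.

```python
import json
import urllib.request

# Ollama's default local endpoint; adjust the host if the server
# sits on another machine with a static IP on your LAN.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(prompt: str, model: str = "llama3") -> dict:
    """Build the JSON body for a non-streaming /api/generate call.

    The model name is a placeholder -- use whatever `ollama pull` fetched.
    """
    return {"model": model, "prompt": prompt, "stream": False}

def ask_ollama(prompt: str, model: str = "llama3") -> str:
    """Send the request and return the model's text response."""
    body = json.dumps(build_generate_request(prompt, model)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires the Ollama server to be running):
# print(ask_ollama("Summarize today's release notes in three bullets."))
```

Setting `"stream": False` returns one complete JSON object instead of a stream of partial chunks, which keeps a first experiment simple.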

I am hoping to use this experience to show clients how they could set up a local LLM within their organization. These LLMs could handle specialized initiatives like drafting product documentation or automatically compiling consumable release notes.
