Project Goal
Set up and configure a local server for experimenting and understanding local deployments of LLMs, AI agents, and their respective protocols.
Requirements
- Server with OS installed
- Ollama
- Network settings for a static IP address
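As a sketch of the static-address requirement, Debian 12 can pin an address in `/etc/network/interfaces` when using the default ifupdown tooling. The interface name `enp3s0` and the addresses below are placeholders; substitute your own NIC name (from `ip link`) and LAN values:

```
# /etc/network/interfaces (fragment) — static address for the LAN interface
# Interface name and addresses are examples; adjust for your network.
auto enp3s0
iface enp3s0 inet static
    address 192.168.1.50/24
    gateway 192.168.1.1
    dns-nameservers 192.168.1.1
```

A static address keeps the server reachable at a predictable URL for the Ollama API, so clients and agents do not break when a DHCP lease changes.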
This project configures a local LLM using Ollama. It is being installed on an old gaming machine that I wiped clean and set up with Debian 12. The intention is to leverage the API provided by Ollama to create local agents that assist Abby and me with our work and day-to-day tasks.
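As a minimal sketch of using that API, Ollama exposes an HTTP endpoint at `http://localhost:11434/api/generate` by default. The helper below builds a non-streaming request payload and posts it with only the standard library; the model name `llama3` is an example and assumes that model has already been pulled on the server:

```python
import json
import urllib.request

# Ollama's default local endpoint; change the host once the
# server has its static LAN address.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    # Minimal payload for Ollama's /api/generate endpoint;
    # "stream": False asks for one complete JSON response
    # instead of a stream of chunks.
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    # POST the JSON payload and return the generated text.
    payload = json.dumps(build_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

An agent script on another machine would call `ask("llama3", "Summarize today's tasks")` against the server's static address, which is the pattern the later client demos would build on.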
I am hoping to use this experience to show clients how they could set up a local LLM within their own organization. These LLMs could handle specialized initiatives such as drafting product documentation or automatically compiling consumable release notes.