Getting Started with Ollama + Open WebUI for Local LLM Deployment

Ollama + Open WebUI = Local LLM Server

Motivation

This guide helps you deploy a local Large Language Model (LLM) server on your Apple MacBook (Intel or Apple Silicon M-series) with a user-friendly chat interface. By running LLMs locally, you ensure data privacy, gain offline reliability, and leverage modern open-source tooling for an efficient AI workflow.

Prerequisites

  • macOS 11.0 or later (Intel or Apple Silicon M-series)
  • At least 8 GB of RAM (16 GB recommended for optimal performance)
  • Admin privileges to install software

Tool Stack

  • Python
  • Ollama
  • Llama 3.2 1B
  • Open WebUI

Setting up

Homebrew

Homebrew is a free and open-source software package management system that simplifies the installation of software on Apple's operating system, macOS.

Installation

Follow steps from: https://brew.sh
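
The site provides a one-line installer. At the time of writing it is the command below, but verify it against https://brew.sh before running, since the script URL may change:

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"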

Python

Python is a high-level, general-purpose programming language.

Installation

brew install python@3.11

Note: Install Python 3.11, which is required to run Open WebUI. Use Homebrew for an easy installation process.
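
Before moving on, it's worth confirming that the interpreter is on your PATH:

python3.11 --version

This should print something along the lines of Python 3.11.x.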

Ollama

Ollama is a lightweight, extensible framework for building and running language models on the local machine.

Installation

brew install ollama

Note: At the time of writing, I am running version 0.5.4.
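
Open WebUI communicates with models through Ollama's local API, so the Ollama server must be running. With a Homebrew install, you can either run it in the foreground:

ollama serve

or let Homebrew manage it as a background service:

brew services start ollama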

Llama 3.2 1B

Llama (Large Language Model Meta AI, formerly stylized as LLaMA) is a family of autoregressive large language models (LLMs) released by Meta AI.

Installation

ollama run llama3.2:1b
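
The first run downloads the model weights before dropping you into an interactive chat prompt (type /bye to exit). As a quick sanity check that the model is also reachable through Ollama's HTTP API, which listens on port 11434 by default, you can send a one-off request:

curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2:1b",
  "prompt": "Why is the sky blue?",
  "stream": false
}'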

Open WebUI

Open WebUI is an extensible, self-hosted AI interface that adapts to your workflow, all while operating entirely offline.

Installation

python3.11 -m venv venv
source venv/bin/activate
python3 -m pip install open-webui

Note: At the time of writing, I am running version 0.5.2.
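
Since the virtual environment is activated, pip here is the venv's pip. To confirm which version actually got installed, you can ask it directly:

pip show open-webui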

Run

open-webui serve

Navigate to http://localhost:8080, create the local admin account when prompted on first launch, and select "llama3.2:1b" from the model drop-down.
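
By default, Open WebUI looks for Ollama at http://localhost:11434, so no extra configuration should be needed when both run on the same machine. For later sessions, the startup sequence is just:

brew services start ollama   # skip if Ollama is already running
source venv/bin/activate     # from the directory containing the venv
open-webui serve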

Closing Remarks

Congrats! You now have a local LLM deployment. Welcome to your own personal AI assistant.
