Getting Started with Ollama + Open WebUI for Local LLM Deployment

Ollama + Open WebUI = Local LLM Server

Motivation

This guide helps you deploy a local Large Language Model (LLM) server on your Apple MacBook (Intel or Apple Silicon M-series) with a user-friendly chat interface. By running LLMs locally, you keep your data private, gain offline reliability, and take advantage of modern tooling for efficient AI workflows.

Prerequisites

  • macOS 11.0 or later (Intel or Apple Silicon M-series)
  • At least 8 GB of RAM (16 GB recommended for optimal performance)