Ollama is a lightweight framework for running large language models (LLMs) locally on your machine. It simplifies the process of downloading, running, and interacting with AI models without requiring extensive setup.
brew install ollama
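If you installed Ollama with Homebrew, you may also need to start its background service before pulling anything. A minimal sketch (the `llama3.2` model tag is just an example; any model name from ollama.com works):

```bash
# Start the Ollama server as a background service
# (or run `ollama serve` in a separate terminal instead)
brew services start ollama

# Pull an example model from the Ollama registry
ollama pull llama3.2

# Chat with it in the terminal to confirm the install works
ollama run llama3.2 "Say hello in one short sentence."
```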
Open WebUI is an extensible, feature-rich, and user-friendly self-hosted AI platform designed to operate entirely offline. It supports various LLM runners like Ollama and OpenAI-compatible APIs, with a built-in inference engine for RAG, making it a powerful AI deployment solution.
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
This spins up an instance of Open WebUI in a local Docker container, available at port 3000. Change the host port (the `3000` in `-p 3000:8080`) if your local development workflow already relies on port 3000.
Open: http://localhost:3000
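If port 3000 is already taken, only the host side of the `-p` mapping needs to change; the container still listens on 8080 internally. A sketch using 8081 as an example:

```bash
# Same command as above, but published on host port 8081 instead of 3000
docker run -d -p 8081:8080 --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data --name open-webui --restart always \
  ghcr.io/open-webui/open-webui:main
```

Then open http://localhost:8081 instead.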
| When you have changed a few settings, like the theme, it should look like this | Use ollama.com to find models to use and add them by name | Chat to your local LLM |
|---|---|---|
| ![]() | ![]() ![]() | ![]() |
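Open WebUI talks to the Ollama server running on your machine (port 11434 by default), so you can also poke that API directly to check which models are available or to send a one-off prompt. A quick sketch, again using `llama3.2` as a stand-in model name:

```bash
# List the models Ollama has pulled locally
curl http://localhost:11434/api/tags

# Send a single prompt to the same backend Open WebUI uses
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```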
- Install Obsidian - Sharpen your thinking and let it become your favourite note-taking app.
- Spend an unreasonable amount of time adding extensions and organising your notes into a second brain.
- Install the Copilot extension: https://github.com/logancyang/obsidian-copilot
- Add your local Ollama model in the extension settings.
| Add your local Ollama model in the Obsidian Copilot extension settings | Chat to your notes! Reference them using the Obsidian markdown format |
|---|---|
| ![]() | ![]() |
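In the chat pane, notes are referenced with Obsidian's wiki-link syntax, e.g. `Summarise [[Meeting notes]] and list any open questions`, where `[[Meeting notes]]` is just a placeholder note title; the extension can then pull the referenced note into the prompt context.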