Run an LLM locally

Install the Ollama CLI (download page: https://ollama.com/download):

curl -fsSL https://ollama.com/install.sh | sh
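To confirm the install worked, you can print the CLI version (exact output varies by release):

ollama --version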

Test whether Ollama is running:

http://localhost:11434/
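Opening that URL in a browser, or hitting it with curl, should return the plain-text message "Ollama is running":

curl http://localhost:11434/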

Download and run an example model (ollama run pulls the model on first use, then opens an interactive prompt):

ollama run llama2-uncensored:7b
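Besides the interactive prompt, the model can be queried through Ollama's REST API on the same port. A minimal sketch of a one-shot request (the prompt is just an example; "stream": false makes the server return a single JSON object whose response field holds the answer):

curl http://localhost:11434/api/generate -d '{
  "model": "llama2-uncensored:7b",
  "prompt": "Why is the sky blue?",
  "stream": false
}'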

Run a local web UI with Docker to use the model (available at http://localhost:8080/):

docker run -d --network=host -v open-webui:/app/backend/data -e OLLAMA_BASE_URL=http://127.0.0.1:11434 --name open-webui --restart always ghcr.io/open-webui/open-webui:main
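To check that the container started correctly (standard Docker commands, nothing specific to Open WebUI):

docker ps --filter name=open-webui
docker logs open-webui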
