Install the Ollama CLI (https://ollama.com/download):
curl -fsSL https://ollama.com/install.sh | sh
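To confirm the install worked, print the CLI version (a quick sanity check; the exact output varies by release):
ollama --version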
Once installed, the Ollama server listens at http://localhost:11434/
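A quick way to verify the server is up: the root endpoint replies with "Ollama is running", and /api/tags lists the models pulled so far:
curl http://localhost:11434/
curl http://localhost:11434/api/tags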
Run the model interactively (it is downloaded automatically on first use):
ollama run llama2-uncensored:7b
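Besides the interactive prompt, the model can also be queried over Ollama's REST API; a minimal sketch, where the prompt text is just an example:
curl http://localhost:11434/api/generate -d '{"model": "llama2-uncensored:7b", "prompt": "Why is the sky blue?", "stream": false}'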
Install a local web UI with Docker to use the model (served at http://localhost:8080/):
docker run -d --network=host -v open-webui:/app/backend/data -e OLLAMA_BASE_URL=http://127.0.0.1:11434 --name open-webui --restart always ghcr.io/open-webui/open-webui:main
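The --network=host flag lets the container reach the Ollama server at the 127.0.0.1:11434 address given in OLLAMA_BASE_URL. To check that the container started correctly before opening http://localhost:8080/ in the browser:
docker logs -f open-webui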