Hello fellow Gophers!
In preparation for the workshop, you'll need to download and install some tools ahead of time to save bandwidth and have everything you need to follow along during our time together.
You'll need Docker Desktop installed: https://www.docker.com/products/docker-desktop/
Our LLM work will rely on running models locally using Ollama. You can either install it directly on your machine or pull a Docker image to run as a container.
$ docker pull ollama/ollama:latest
$ docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
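If you went the container route, you can check that the API is reachable before moving on. Assuming the container was started with the default port published (-p 11434:11434), the root endpoint simply reports that the server is up:

```shell
# The root endpoint responds with "Ollama is running" when the server is healthy.
$ curl http://localhost:11434/
```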
Ollama itself is a runner for models, which means we'll need to pull the actual LLM we'll rely on for our work. You can do so by having the ollama container pull the model itself (all-minilm is less than 50MB).
$ docker exec -it ollama ollama pull all-minilm
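Once the model is pulled, you can sanity-check it against Ollama's HTTP API. Keep in mind that all-minilm is an embedding model, so you query it through the embeddings endpoint rather than a chat session. The sample prompt below is just an example, and the port assumes the container was started with -p 11434:11434:

```shell
# Request an embedding for a sample sentence from the all-minilm model.
$ curl http://localhost:11434/api/embeddings -d '{
  "model": "all-minilm",
  "prompt": "Hello fellow Gophers!"
}'
```

A successful response is a JSON object with an "embedding" field containing a vector of floats.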
We'll rely on a PostgreSQL image that's been extended to support the pgvector extension out of the box. If you have PostgreSQL running locally already, you can look into installing the extension yourself. Otherwise, we'll use the Docker image below.
$ docker pull pgvector/pgvector:pg16
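Pulling the image only downloads it. Here's one way to start the container and confirm the extension loads; the container name and password below are placeholders, so adjust them to taste:

```shell
# Start PostgreSQL 16 with pgvector bundled. POSTGRES_PASSWORD is required
# by the image; the name and credentials here are just examples.
$ docker run -d --name pgvector -e POSTGRES_PASSWORD=postgres -p 5432:5432 pgvector/pgvector:pg16

# Enable the extension in the default database.
$ docker exec -it pgvector psql -U postgres -c "CREATE EXTENSION IF NOT EXISTS vector;"
```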
That's it! If you have any questions, reach out on X: @jboursiquot.
I had to start the ollama container using the following command:

$ docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
When I run the following:

$ docker exec -it ollama ollama run all-minilm

I am getting this message:

Error: embedding models do not support chat