@nitingupta910 · Created March 21, 2025 01:26
Interacting with Ollama using Elixir Livebook

ollama_dev

Section

Mix.install([
  {:ollama, "~> 0.8.0"},
  {:kino, "~> 0.15.3"},
])
# Point the client at the remote Ollama server (note the /api suffix in the base URL)
client = Ollama.init(
  base_url: "http://192.168.86.36:11434/api"
)
{:ok, resp} = Ollama.completion(client, [
  model: "gemma3:27b",
  prompt: "Why is the sky blue?",
])
# Unescape literal "\n" sequences so the Markdown renders with real line breaks
md = Map.get(resp, "response") |> String.replace("\\n", "\n")
Kino.Markdown.new(md)
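
To watch tokens appear as they are generated, the library can also stream responses. A minimal sketch, assuming that passing stream: true makes Ollama.completion/2 return {:ok, enumerable} of chunk maps (per the library's docs), rendered incrementally into a Kino frame:

# Render a frame we can update as chunks arrive
frame = Kino.Frame.new() |> Kino.render()

{:ok, stream} = Ollama.completion(client, [
  model: "gemma3:27b",
  prompt: "Why is the sky blue?",
  stream: true
])

# Accumulate the text and re-render the frame after each chunk
stream
|> Enum.reduce("", fn chunk, acc ->
  text = acc <> (chunk["response"] || "")
  Kino.Frame.render(frame, Kino.Markdown.new(text))
  text
end)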
# Get the current Livebook directory
IO.puts("Current Livebook directory: #{__DIR__}")

# Change to the Livebook directory
File.cd!(__DIR__)

# Verify the current directory
IO.puts("Working directory is now: #{File.cwd!()}")

# List files in this directory to confirm
{:ok, files} = File.ls(".")
files
# Read the image from disk
image_binary = File.read!("images/image.jpg")

# Create base64 encoding
base64_encoded = Base.encode64(image_binary)
# Send the base64-encoded image alongside the prompt
{:ok, resp} = Ollama.completion(client, [
  model: "gemma3:27b",
  prompt: "Describe this image",
  images: [base64_encoded]
])
resp
|> Map.get("response")
|> String.replace("\\n", "\n")
|> Kino.Markdown.new()
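
The same client can also call the chat endpoint, mirroring the Python example further down. A minimal sketch, assuming the library's Ollama.chat/2 takes a list of role/content message maps and that image attachments go in an :images key per message:

{:ok, resp} = Ollama.chat(client, [
  model: "gemma3:27b",
  messages: [
    # reuses the base64_encoded binding from the cell above
    %{role: "user", content: "Describe this image", images: [base64_encoded]}
  ]
])

resp
|> get_in(["message", "content"])
|> Kino.Markdown.new()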
@nitingupta910 (Author):

To set up an Ollama server on a remote Linux server:

Install Ollama:

curl -fsSL https://ollama.com/install.sh | sh

Setup

sudo -E systemctl edit ollama.service

The above opens a systemd override file for the service. Add these lines to it:

[Service]
Environment="OLLAMA_HOST=192.168.86.36:11434"

Change the IP address to your network interface's IP, or use 0.0.0.0 to listen on all interfaces.

Now reload systemd, restart the service, and verify that it started correctly:

sudo systemctl daemon-reload
sudo systemctl restart ollama.service
sudo systemctl status ollama.service

Testing

Now you can connect to Ollama at this address:

CLI:

IMG=$(base64 -w0 image.jpg)  # base64-encode an image; omit "images" for text-only prompts
curl -s http://192.168.86.36:11434/api/generate -d '{
  "model": "gemma3:27b",
  "prompt": "Why is the sky blue?",
  "stream": false,
  "images": ["'"$IMG"'"]
}'

Or use the Python API (convenient when sending image data to the model):

First create and activate a virtual environment:

python3 -m venv ollama_exp
cd ollama_exp
. bin/activate
pip3 install ollama

Then, with the environment activated, run the test script: python3 test.py

test.py:

import ollama

# Create client with custom host
client = ollama.Client(host='http://192.168.86.36:11434')

response = client.chat(
    model='gemma3:27b',
    messages=[{
        'role': 'user',
        'content': 'Describe this image',
        'images': ['image2.jpg']  # the Python client accepts image file paths directly
    }]
)

print(response['message']['content'])

Finally, you can use this Elixir Livebook if you prefer Elixir over Python. Elixir is especially helpful if you want to integrate Ollama-based inference into an Elixir LiveView application, as sketched below.
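
As a rough illustration, here is a minimal LiveView sketch. The app name (MyAppWeb), module, route, and model are hypothetical; the slow model call runs in a Task so the LiveView process stays responsive:

defmodule MyAppWeb.AskLive do
  use MyAppWeb, :live_view

  @ollama_url "http://192.168.86.36:11434/api"

  def mount(_params, _session, socket) do
    {:ok, assign(socket, answer: nil, loading: false)}
  end

  def handle_event("ask", %{"prompt" => prompt}, socket) do
    # Run the (potentially slow) model call outside the LiveView process
    pid = self()

    Task.start(fn ->
      client = Ollama.init(base_url: @ollama_url)
      {:ok, resp} = Ollama.completion(client, model: "gemma3:27b", prompt: prompt)
      send(pid, {:answer, Map.get(resp, "response")})
    end)

    {:noreply, assign(socket, loading: true)}
  end

  def handle_info({:answer, text}, socket) do
    {:noreply, assign(socket, answer: text, loading: false)}
  end

  def render(assigns) do
    ~H"""
    <form phx-submit="ask">
      <input type="text" name="prompt" placeholder="Ask the model..." />
      <button disabled={@loading}>Ask</button>
    </form>
    <pre :if={@answer}><%= @answer %></pre>
    """
  end
end

In a real application you would supervise the task (e.g. with Task.Supervisor) and handle error responses instead of asserting on {:ok, resp}.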
