Mix.install([
{:ollama, "~> 0.8.0"},
{:kino, "~> 0.15.3"},
])
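# Point the client at the remote Ollama server (change the IP to match your host)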
client = Ollama.init(
base_url: "http://192.168.86.36:11434/api"
)
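# Run a plain text completion against the model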
{:ok, resp} = Ollama.completion(client, [
model: "gemma3:27b",
prompt: "Why is the sky blue?",
])
md = Map.get(resp, "response")
|> String.replace("\\n", "\n")
# Get the current Livebook directory
IO.puts("Current Livebook directory: #{__DIR__}")
# Change to the Livebook directory
File.cd!(__DIR__)
# Verify the current directory
IO.puts("Working directory is now: #{File.cwd!()}")
# List files in this directory to confirm
{:ok, files} = File.ls(".")
files
image_binary = File.read!("images/image.jpg")
# Create base64 encoding
base64_encoded = Base.encode64(image_binary)
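# Send the prompt together with the Base64-encoded image to the multimodal model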
{:ok, resp} = Ollama.completion(client, [
model: "gemma3:27b",
prompt: "Describe this image",
images: [base64_encoded]
])
md = Map.get(resp, "response")
|> String.replace("\\n", "\n")
|> Kino.Markdown.new()
To set up an Ollama server on a remote Linux machine:
Install ollama
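The one-line installer from ollama.com sets up the binary and the systemd service:

```shell
curl -fsSL https://ollama.com/install.sh | sh
```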
Setup
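Ollama only listens on localhost by default, so edit the systemd unit to expose it on the network:

```shell
sudo systemctl edit ollama.service
```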
This opens a service override file. Add these lines to it:
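For example, binding to the LAN address used in the Livebook above:

```ini
[Service]
Environment="OLLAMA_HOST=192.168.86.36"
```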
Change the IP address to your network interface's IP, or just put 0.0.0.0 to listen on all interfaces. Now restart the daemon and verify that the service starts correctly after these changes:
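```shell
sudo systemctl daemon-reload     # pick up the override
sudo systemctl restart ollama
sudo systemctl status ollama     # should report active (running)
```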
Testing
Now you can connect to the Ollama server hosted on this IP using:
CLI:
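For example, point the ollama CLI at the remote host through the OLLAMA_HOST environment variable:

```shell
OLLAMA_HOST=192.168.86.36:11434 ollama run gemma3:27b "Why is the sky blue?"
```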
or using the Python API (useful when sending image data to the model):
First create and activate an ollama virtual environment:
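A minimal setup, assuming a venv named ollama:

```shell
python3 -m venv ollama
source ollama/bin/activate
pip install ollama
```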
Then run this test script with the env activated:
python3 test.py
test.py:
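A minimal sketch using the ollama Python package, mirroring the Elixir calls above; the host IP, model, and image path are the ones used earlier and should be adjusted to your setup:

```python
from ollama import Client

# Point the client at the remote Ollama server
client = Client(host="http://192.168.86.36:11434")

# Plain text prompt
resp = client.generate(model="gemma3:27b", prompt="Why is the sky blue?")
print(resp["response"])

# Image prompt: images can be file paths or raw bytes
resp = client.generate(
    model="gemma3:27b",
    prompt="Describe this image",
    images=["images/image.jpg"],
)
print(resp["response"])
```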
And finally, you can run this Elixir Livebook if you prefer using Elixir instead of Python. Elixir is especially helpful if you want to integrate Ollama-based inference into an Elixir LiveView application.