
@cagataycali
Last active March 12, 2024 20:19
Run a multimodal LLM (LLaVA via llamafile) and open a browser once the model server starts.

Install

[wget ... or download](https://gist.github.com/acaa476865821b02813b8a8e88e59c13.git)
chmod +x run-local-multimodal-llm-openai-compatible.sh
./run-local-multimodal-llm-openai-compatible.sh
#!/bin/bash
# Define the file path
FILE="llava-v1.5-7b-q4.llamafile"

# Check if the llamafile already exists
if [ -f "$FILE" ]; then
  echo "$FILE exists. Starting."
else
  echo "$FILE does not exist, downloading now."
  # Download the llamafile from Hugging Face (quote the URL so the shell
  # does not treat the '?' as a glob)
  wget "https://huggingface.co/jartine/llava-v1.5-7B-GGUF/resolve/main/llava-v1.5-7b-q4.llamafile?download=true" -O "$FILE"
fi

# Make the downloaded file executable
chmod +x "$FILE"

# Kill a previous instance if one is already running
killall "$FILE" 2>/dev/null

# Start the server in the background; -ngl 9999 offloads all layers to the GPU
./"$FILE" -ngl 9999 &

# Open the web UI (macOS's `open`; use xdg-open on Linux)
open http://localhost:8080
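Once the server is up, you can also query it over its OpenAI-compatible API instead of the web UI. A minimal sketch, assuming the llamafile server is listening on port 8080 and exposes the standard `/v1/chat/completions` endpoint (the `model` value is a placeholder; the server serves whichever model it was started with):

```shell
# Build an OpenAI-style chat request payload (model name is a placeholder)
PAYLOAD='{"model":"llava","messages":[{"role":"user","content":"Say hello."}]}'

# Send it to the local OpenAI-compatible endpoint
curl -s http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d "$PAYLOAD" || echo "Is the server running?"
```

Because the endpoint follows the OpenAI schema, the same request works with any OpenAI client library pointed at `http://localhost:8080/v1`.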