Local LLM implementation of the Grug speak translator using Ollama and the DSPy framework.
This project demonstrates how to build a translation system that converts plain English text into "Grug speak" (caveman-style language) using DSPy with Ollama for local model execution. No external API keys required.
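A minimal sketch of the underlying setup, assuming a recent DSPy release with the dspy.LM interface (the exact API may vary by version, and the signature shown is illustrative rather than the project's exact one):

```python
import dspy

# Sketch only: point DSPy at a local Ollama server (model pulled beforehand
# with `ollama pull llama3.2`; Ollama listens on port 11434 by default).
lm = dspy.LM(
    "ollama_chat/llama3.2",
    api_base="http://localhost:11434",
    api_key="",  # Ollama does not require an API key
)
dspy.configure(lm=lm)

# Illustrative signature, not necessarily the project's exact one.
translate = dspy.Predict("english_text -> grug_speak")
print(translate(english_text="You should not construct complex systems.").grug_speak)
```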
- ollama_grug_translator.py - CLI script that reads JSON input and outputs JSON results
- utils_ollama_grug.py - Utility functions using only Ollama
- README_OLLAMA.md - This file
- requirements_ollama.txt - Python dependencies
First, install Ollama on your system:
Linux/macOS:
curl -fsSL https://ollama.com/install.sh | sh
Windows: Download from https://ollama.com/download
Pull the default model (llama3.2) or another compatible model:
ollama pull llama3.2
Other supported models:
ollama pull llama3.1
ollama pull mistral
ollama pull codellama
Ensure Ollama service is running:
ollama serve
Install the Python dependencies:
pip install -r requirements_ollama.txt
Pipe JSON to the script via stdin:
echo '{"text": "You should not construct complex systems."}' | python ollama_grug_translator.py
Output:
{"status": "success", "result": "grug no make big complicated thing. big thing bad. make grug brain hurt."}
Read the input from a file instead of stdin:
python ollama_grug_translator.py --input-json input.json
where input.json contains:
{"text": "Modern software development practices are essential."}
Use a different model:
python ollama_grug_translator.py --model mistral --input-json input.json
Point to a remote Ollama instance:
python ollama_grug_translator.py --base-url http://192.168.1.100:11434 --input-json input.json
Input:
{"text": "Object-oriented programming is a programming paradigm."}
Output:
{
"status": "success",
"result": "grug make thing like thing. thing have properties. grug group similar things together. make code less confusing for grug brain."
}
Input (missing text field):
{"message": "hello world"}
Output:
{
"status": "error",
"error": "Missing 'text' field in input"
}
Input (invalid text type):
{"text": 123}
Output:
{
"status": "error",
"error": "'text' field must be a string"
}
Output when Ollama is not running:
{
"status": "error",
"error": "Connection to Ollama failed. Ensure Ollama is running and the model is available."
}
CLI Interface (ollama_grug_translator.py)
- Reads JSON from file or stdin
- Configures DSPy with Ollama
- Outputs structured JSON responses
- Never prints non-JSON content
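A rough sketch of that flow (hypothetical structure; the real script may differ, and translate_grug's signature is assumed here for illustration):

```python
import argparse
import json
import sys

from utils_ollama_grug import translate_grug  # signature assumed for illustration

def main() -> int:
    # Read JSON from a file or stdin, validate it, translate, and always
    # print a JSON envelope -- never plain-text errors.
    parser = argparse.ArgumentParser()
    parser.add_argument("--input-json", help="path to a JSON file; stdin if omitted")
    parser.add_argument("--model", default="llama3.2")
    parser.add_argument("--base-url", default="http://localhost:11434")
    args = parser.parse_args()

    try:
        raw = open(args.input_json).read() if args.input_json else sys.stdin.read()
        payload = json.loads(raw)
        if "text" not in payload:
            raise ValueError("Missing 'text' field in input")
        if not isinstance(payload["text"], str):
            raise ValueError("'text' field must be a string")
        result = translate_grug(payload["text"], model=args.model, base_url=args.base_url)
        print(json.dumps({"status": "success", "result": result}))
        return 0
    except Exception as exc:
        print(json.dumps({"status": "error", "error": str(exc)}))
        return 1

if __name__ == "__main__":
    sys.exit(main())
```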
Utility Functions (utils_ollama_grug.py)
- translate_grug() - Core translation using DSPy + Ollama
- automated_readability_index() - Text complexity measurement
- similarity_metric() - Embedding-based similarity using Ollama
- overall_metric() - Combined quality assessment
- retry() - Robust error handling wrapper
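automated_readability_index() presumably computes the standard ARI score; a minimal sketch with naive tokenization (the project's implementation may differ):

```python
import re

def automated_readability_index(text: str) -> float:
    # Standard ARI formula: 4.71*(characters/words) + 0.5*(words/sentences) - 21.43.
    words = text.split()
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    if not words or not sentences:
        return 0.0
    chars = sum(len(w) for w in words)
    return 4.71 * (chars / len(words)) + 0.5 * (len(words) / len(sentences)) - 21.43

# Grug speak should score noticeably lower (simpler) than the source text.
print(automated_readability_index("Modern software development practices are essential."))
print(automated_readability_index("grug no make big complicated thing. big thing bad."))
```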
Error Handling
- Exponential backoff retry mechanism
- Graceful degradation for embeddings
- Always outputs valid JSON
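The retry mechanism could look roughly like this exponential-backoff wrapper (a sketch, not the project's exact code):

```python
import time
from typing import Callable, TypeVar

T = TypeVar("T")

def retry(fn: Callable[[], T], attempts: int = 3, base_delay: float = 1.0) -> T:
    # Retry fn with delays of base_delay, 2*base_delay, 4*base_delay, ...
    # re-raising the last error once attempts are exhausted.
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** i))
    raise RuntimeError("unreachable")

# Example: wrap a flaky translation call.
# result = retry(lambda: translate_grug("Complex systems are risky."))
```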
"Connection refused" errors
- Ensure the Ollama service is running: ollama serve
- Check if port 11434 is available
- Verify firewall settings
"Model not found" errors
- Pull the required model: ollama pull llama3.2
- Check available models: ollama list
Slow responses
- Consider using smaller models (e.g., llama3.2 instead of llama3.1)
- Ensure adequate RAM (8GB+ recommended)
- Check CPU/GPU utilization
JSON parsing errors
- Ensure the input is valid JSON
- Check that the 'text' field is present and is a string
Performance tips
- Use GPU acceleration if available
- Consider quantized models for faster inference
- Adjust the max_tokens parameter in utils for longer/shorter outputs (see the sketch after this list)
- Use smaller models for development/testing
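For example, if the LM is configured as in the earlier sketch, output length can be capped along these lines (max_tokens is forwarded to Ollama by DSPy's LM wrapper; the exact plumbing depends on your DSPy version):

```python
import dspy

# Sketch: shorter generations tend to return faster on CPU-only machines.
lm = dspy.LM(
    "ollama_chat/llama3.2",
    api_base="http://localhost:11434",
    api_key="",
    max_tokens=200,
)
dspy.configure(lm=lm)
```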
Tested with:
- ✅ llama3.2 (recommended)
- ✅ llama3.1
- ✅ mistral
- ✅ codellama
⚠️ Larger models may require more RAM
- All processing is done locally - no data sent to external services
- Models run in isolated Ollama environment
- No persistent storage of sensitive data
- Suitable for offline/air-gapped environments
# 1. Start Ollama
ollama serve &
# 2. Ensure model is available
ollama pull llama3.2
# 3. Test the translator
echo '{"text": "Complex software architecture requires careful planning."}' | \
python ollama_grug_translator.py
# Expected output:
# {"status": "success", "result": "grug make big software thing need think hard first. plan good before grug start make thing."}
Educational example demonstrating local LLM integration with DSPy.