
Java:
String getDescription() {
    StringBuilder stringBuilder = new StringBuilder();
    if (titel != null && titel.length() > 0) {
        stringBuilder.append(titel).append(',');
    }
    if (vorname != null && vorname.length() > 0) {
        stringBuilder.append(vorname).append(',');
    }
    return stringBuilder.toString();
}
@olafgeibig
olafgeibig / nous-hermes-2-solar.ollama
Created January 3, 2024 09:35
Ollama modelfile for nous-hermes-2-solar-10.7b
FROM ./nous-hermes-2-solar-10.7b.Q5_K_M.gguf
PARAMETER num_ctx 4096
TEMPLATE """<|im_start|>system
{{ .System }}<|im_end|>
<|im_start|>user
{{ .Prompt }}<|im_end|>
<|im_start|>assistant
"""
PARAMETER stop "<|im_start|>"
PARAMETER stop "<|im_end|>"
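Once the GGUF weights sit next to this modelfile, the model can be registered and run with the standard Ollama CLI. A quick sketch; the model name `nous-hermes-2-solar` is an arbitrary choice:

```shell
# Build an Ollama model from the modelfile (run in the directory
# containing both the modelfile and the .gguf weights).
ollama create nous-hermes-2-solar -f nous-hermes-2-solar.ollama

# Chat with it interactively.
ollama run nous-hermes-2-solar
```

`ollama create` copies the weights into Ollama's model store, so the original directory can be cleaned up afterwards.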
@olafgeibig
olafgeibig / review_tool.py
Last active June 1, 2024 13:46
A review tool for CrewAI that uses an LLM to review whether a work result matches a given task or topic. I posted it here because it demonstrates Joao's way of making multi-argument tool calls. The tool is actually not very useful, since I couldn't get the agents to pass on their full work result…
from langchain.tools import Tool
from langchain_core.language_models.chat_models import BaseChatModel
from langchain_core.messages import HumanMessage, SystemMessage

class ReviewToolFactory:
    @staticmethod
    def get_review_tool(llm: BaseChatModel) -> Tool:
        """
        Returns a Tool object that can be used for reviewing a result from working on a task and giving constructive
        feedback.
        """
```
uv run --with mlx-vlm mlx_vlm.generate --model gg-hf-gm/gemma-3n-E4B-it --max-tokens 100 --temperature 0.7 --prompt "Transcribe the following speech segment in English:"
  × Failed to build `llvmlite==0.36.0`
  ├─▶ The build backend returned an error
  ╰─▶ Call to `setuptools.build_meta:__legacy__.build_wheel` failed (exit status: 1)

      [stderr]
      Traceback (most recent call last):
        File "<string>", line 14, in <module>
        File "/Users/olaf/.cache/uv/builds-v0/.tmpPmfozJ/lib/python3.12/site-packages/setuptools/build_meta.py", line 331, in
```
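The failure above is typical when a very old `llvmlite` pin meets a new interpreter: llvmlite 0.36 predates Python 3.12 and only builds on roughly Python 3.6–3.9, so building it from source under a 3.12 environment fails. Two hedged workaround sketches, assuming either an older interpreter or a newer llvmlite is acceptable for whatever pulled in the pin:

```shell
# Option 1: let uv resolve against an interpreter llvmlite 0.36 supports.
uv run --python 3.9 --with mlx-vlm mlx_vlm.generate \
  --model gg-hf-gm/gemma-3n-E4B-it --max-tokens 100 --temperature 0.7 \
  --prompt "Transcribe the following speech segment in English:"

# Option 2: add a constraint so uv can pick an llvmlite release
# that ships Python 3.12 wheels (if the dependency chain allows it).
uv run --with mlx-vlm --with "llvmlite>=0.42" mlx_vlm.generate ...
```

Which option applies depends on where the `llvmlite==0.36.0` pin comes from, which is not visible in the log above.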
@olafgeibig
olafgeibig / cc-proxy.sh
Last active August 12, 2025 16:27
A LiteLLM proxy solution to use Claude Code with models from the Weights and Biases inference service. You need to have LiteLLM installed, or use the Docker container. The easiest way is to install it with `uv tool install "litellm[proxy]"`. Don't worry about the fallback warnings. Either LiteLLM, W&B or the combo of both are not handling streaming respon…
#!/bin/bash
export WANDB_API_KEY=<your key>
export WANDB_PROJECT=<org/project>
litellm --port 4000 --debug --config cc-proxy.yaml
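With the proxy listening on port 4000, Claude Code can be pointed at it through its standard environment variables. A sketch; the auth token value is arbitrary, since a plain LiteLLM proxy without a master key accepts any token:

```shell
# Point Claude Code at the local LiteLLM proxy instead of the Anthropic API.
export ANTHROPIC_BASE_URL=http://localhost:4000
export ANTHROPIC_AUTH_TOKEN=dummy

# Start Claude Code as usual; requests now flow through the proxy.
claude
```

The model names available to Claude Code are whatever aliases `cc-proxy.yaml` defines, which is not shown in this preview.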
@olafgeibig
olafgeibig / README.md
Created August 6, 2025 10:00
Apple Containers 0.3.0 on macOS 15 Cheatsheet

Upgrading to 0.3.0

Before doing anything else, make sure you have properly upgraded to containers 0.3.0. Clean the old installation with `sudo uninstall-container.sh -k`, then install the new version. Then start the containers service with `container system start`. Check whether it downloaded the latest vminit image with `container i ls`. If not, try `sudo uninstall-container.sh -d`, which deletes everything including the user data.
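Collected as a script, the upgrade sequence above looks roughly like this (the installer package name is a placeholder, not the actual download name):

```shell
# 1. Remove the old installation, keeping user data (-k).
sudo uninstall-container.sh -k

# 2. Install the new 0.3.0 release (placeholder pkg name).
sudo installer -pkg container-0.3.0.pkg -target /

# 3. Start the containers service.
container system start

# 4. Verify the latest vminit image was downloaded.
container i ls

# 5. If it was not, wipe everything including user data (-d) and reinstall.
# sudo uninstall-container.sh -d
```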

Configure the Networking

Setting the container subnet seems to be necessary only on macOS 15. Look up the subnet of the bridge network with `ifconfig` and look for the bridge, e.g.

bridge100: flags=8a63<UP,BROADCAST,SMART,RUNNING,ALLMULTI,SIMPLEX,MULTICAST> mtu 1500
        options=63<RXCSUM,TXCSUM,TSO4,TSO6>
        ether be:d0:74:63:61:64
        inet 192.168.206.1 netmask 0xffffff00 broadcast 192.168.206.255
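The subnet can be read straight off the `inet` line. A small sketch that derives the /24 from the sample output above; on a live system, pipe `ifconfig bridge100` into the `awk` instead of the `echo`:

```shell
# Extract the network address (first three octets + .0/24) from the inet line.
echo "inet 192.168.206.1 netmask 0xffffff00 broadcast 192.168.206.255" \
  | awk '/inet /{split($2, o, "."); print o[1] "." o[2] "." o[3] ".0/24"}'
# -> 192.168.206.0/24
```

The `0xffffff00` netmask confirms the /24 prefix length assumed here; a different mask would need a different suffix.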