Adrian Cole codefromthecrypt

#!/bin/sh
set -e
# guess OS_TYPE if not provided
if [ -z "$OS_TYPE" ]; then
  case "$(uname -s | tr '[:upper:]' '[:lower:]')" in
    cygwin_nt*|mingw*|msys_nt*)
      OS_TYPE="windows"
      ;;
    linux*)
      OS_TYPE="linux"
      ;;
    darwin*)
      OS_TYPE="darwin"
      ;;
  esac
fi
@codefromthecrypt
codefromthecrypt / main.py
Last active March 13, 2025 03:21
OpenAI Agents SDK with LogFire
import os
import httpx
import logfire
from agents import Agent, ModelSettings, OpenAIProvider, RunConfig, Runner, function_tool
from agents.tracing import GLOBAL_TRACE_PROVIDER
# Shut down the global tracer as it sends to the OpenAI "/traces/ingest"
# endpoint, which we aren't using and doesn't exist on alternative backends
# like Ollama.
GLOBAL_TRACE_PROVIDER.shutdown()
@codefromthecrypt
codefromthecrypt / goose-mcp.md
Last active April 10, 2025 02:54
Goose MCP practice
@codefromthecrypt
codefromthecrypt / main.py
Created April 10, 2025 08:39
Minimum viable agent using google adk
import os
import httpx
from google.adk.agents import Agent
from google.adk.models.lite_llm import LiteLlm
from google.adk.runners import Runner
from google.adk.sessions import InMemorySessionService
from google.genai import types
# TODO: native openai https://github.com/google/adk-python/issues/27
@codefromthecrypt
codefromthecrypt / elastic-flow.md
Created April 17, 2025 03:54
Correlating OTLP traffic with Elasticsearch POSTs

Rename `elasticsearch` to `elasticsearch_real` and change its port to 9201. Rename `otel-collector` to `otel-collector_real` and change its port to 4319.

Add this container with the below `flow.py`. Then make sure you are sending with JavaScript, as Python doesn't support `OTEL_EXPORTER_OTLP_PROTOCOL=http/json` yet.

Then, you can run `docker compose logs elasticsearch` and see the POSTs.

  elasticsearch:
    image: mitmproxy/mitmproxy
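The `flow.py` referenced above is not shown in this preview. A minimal sketch of a mitmproxy addon that logs proxied POST bodies (hook name `request` is mitmproxy's addon API; everything else is assumed, not taken from the original gist) might look like:

```python
# flow.py: hypothetical mitmproxy addon that logs each proxied POST.
# mitmproxy calls request() for every request passing through the proxy,
# so printing here makes the body visible via `docker compose logs`.
def request(flow):
    if flow.request.method == "POST":
        print(f"POST {flow.request.path}")
        print(flow.request.get_text())
```

mitmproxy forwards the request upstream unchanged, so the real Elasticsearch (on 9201 in this setup) still receives the traffic.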
@codefromthecrypt
codefromthecrypt / a2a.py
Last active May 26, 2025 23:22
run A2A's latest python SDK
# e.g. to run with ollama, do this:
# OPENAI_BASE_URL=http://localhost:11434/v1 OPENAI_API_KEY=unused CHAT_MODEL=qwen3:1.7b uv run a2a.py
#
# /// script
# requires-python = ">=3.13"
# dependencies = [
# "a2a-sdk",
# "uvicorn",
# "openai",
# "httpx"
@codefromthecrypt
codefromthecrypt / hosted-mcp.sh
Created May 28, 2025 07:36
invoke openai with a hosted MCP server
curl -s -X POST https://api.openai.com/v1/responses \
-H "Content-Type: application/json" \
-H "Authorization: Bearer ${OPENAI_API_KEY}" \
-d '{
"input": [
{
"content": "Which language is this repo written in?",
"role": "user"
}
],
@codefromthecrypt
codefromthecrypt / main.py
Created May 28, 2025 11:43
openai-agents using OpenAI Responses API with MCP traced with OpenTelemetry
# Add OpenAI and OpenTelemetry ENV variables to .env and run like this:
# uv run -q --env-file .env main.py
#
# Note: Use a larger model like qwen3:14b if you are hosting your own models.
#
# /// script
# requires-python = ">=3.13"
# dependencies = [
# "openai-agents",
# "elastic-opentelemetry",