@jcheng5
Created August 21, 2025 17:27
LiteLLM otel
# .env — OpenTelemetry exporter configuration read by the "otel" callback
OTEL_SERVICE_NAME=litellm-test
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318/v1/traces
OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf
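The endpoint above assumes something is listening for OTLP traffic on localhost:4318. One way to get that locally is Jaeger's all-in-one image (an assumption here; any OTLP-capable collector works):

```shell
# Start Jaeger with its OTLP receiver enabled.
# 4318 = OTLP/HTTP ingest (matches OTEL_EXPORTER_OTLP_ENDPOINT above),
# 16686 = Jaeger web UI for browsing the received traces.
docker run --rm \
  -e COLLECTOR_OTLP_ENABLED=true \
  -p 16686:16686 \
  -p 4318:4318 \
  jaegertracing/all-in-one:latest
```

After running the script, traces should appear in the Jaeger UI at http://localhost:16686 under the service name litellm-test.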
from dotenv import load_dotenv
import litellm


def main():
    # Load environment variables from .env file
    load_dotenv()

    # Enable OpenTelemetry integration
    litellm.callbacks = ["otel"]

    # Set up the request to ollama
    messages = [
        {"role": "user", "content": "What is the capital of France?"}
    ]

    # Make the chat completion request
    response = litellm.completion(
        model="ollama/llama3.1:8b",
        messages=messages,
    )

    # Print the response
    print("Response from model:")
    print(response.choices[0].message.content)


if __name__ == "__main__":
    main()
# Python dependencies
backoff
litellm[proxy]
opentelemetry-api
opentelemetry-exporter-otlp
opentelemetry-sdk
python-dotenv