@samuelcolvin
Created October 24, 2024 12:33
Logfire docs JSON
[
{
"id": 0,
"parent": null,
"path": "roadmap.md",
"level": 1,
"title": "Roadmap",
"content": "Here is the roadmap for **Pydantic Logfire**. This is a living document, and it will be updated as we progress.\n\nIf you have any questions, or a feature request, **please join our [Slack][slack]**."
},
{
"id": 1,
"parent": 0,
"path": "roadmap.md",
"level": 2,
"title": "Features 💡",
"content": "There are a lot of features that we are planning to implement in Logfire. Here are some of them."
},
{
"id": 2,
"parent": 1,
"path": "roadmap.md",
"level": 3,
"title": "Server side Scrubbing of Sensitive Data",
"content": "The **Logfire** SDK scrubs sensitive data from logs on the client side before sending them to the server.\n\nWe are planning to implement similar scrubbing on the server side for other OpenTelemetry clients.\n\nWe'll also support adhoc scrubbing of rows."
},
{
"id": 3,
"parent": 1,
"path": "roadmap.md",
"level": 3,
"title": "Create Teams",
"content": "You'll be able to create **teams** with organization.\n\nSee [this GitHub issue][teams-gh-issue] for more information."
},
{
"id": 4,
"parent": 1,
"path": "roadmap.md",
"level": 3,
"title": "Alerts & Notifications",
"content": "The following features are planned for the alerts and notifications system:\n\n- [X] Slack integration\n- [ ] Email integration\n- [X] Webhook integration\n\nAlerts are based on SQL queries (with canned templates for common cases) that are run periodically, and decide if a\nnew event has occurred.\n\nSee [this GitHub issue][alerts-email-gh-issue] for more information."
},
{
"id": 5,
"parent": 1,
"path": "roadmap.md",
"level": 3,
"title": "Cross-Project Dashboards",
"content": "You'll be able to create dashboards with information from multiple projects.\n\nSee [this GitHub issue][cross-project-dashboards-gh-issue] for more information."
},
{
"id": 6,
"parent": 1,
"path": "roadmap.md",
"level": 3,
"title": "On-Premise Deployment",
"content": "We are planning to offer an on-premise deployment option for Logfire.\nThis will allow you to deploy Logfire on your own infrastructure.\n\nSee [this GitHub issue][on-prem-gh-issue] for more information."
},
{
"id": 7,
"parent": 1,
"path": "roadmap.md",
"level": 3,
"title": "Schema Catalog",
"content": "We want to build a catalog of Pydantic Models/Schemas as outlined\n[in our Roadmap article](https://blog.pydantic.dev/blog/2023/06/13/help-us-build-our-roadmap--pydantic/#4-schema-catalog)\nwith in Logfire.\n\nThe idea is that we'd use the SDK to upload the schema of Pydantic models to Logfire.\nThen allow you to watch how those schemas change as well as view metrics on how validation performed\nby a specific model is behaving.\n\nSee [this GitHub issue][schema-catalog-gh-issue] for more information."
},
{
"id": 8,
"parent": 1,
"path": "roadmap.md",
"level": 3,
"title": "Language Support",
"content": "Logfire is built on top of OpenTelemetry, which means that it supports all the languages that OpenTelemetry supports.\n\nStill, we are planning to create custom SDKs for JavaScript, TypeScript, and Rust, and make sure that the\nattributes are displayed in a nice way in the Logfire UI — as they are for Python.\n\nFor now, you can check our [Alternative Clients](guides/advanced/alternative-clients.md) section to see how\nyou can send data to Logfire from other languages.\n\nSee [this GitHub issue][language-support-gh-issue] for more information."
},
{
"id": 9,
"parent": 1,
"path": "roadmap.md",
"level": 3,
"title": "Automatic anomaly detection",
"content": "We are planning to implement an automatic anomaly detection system, which will be able to detect\nanomalies in the logs, and notify you without the need for you to define specific queries."
},
{
"id": 10,
"parent": null,
"path": "index.md",
"level": 1,
"title": "Pydantic Logfire",
"content": "From the team behind Pydantic, Logfire is a new type of observability platform built on the same belief as our open source library — that the most powerful tools can be easy to use.\n\nLogfire is built on OpenTelemetry, and supports monitoring your application from any language, with particularly great support for Python! [Read more](why-logfire/index.md)."
},
{
"id": 11,
"parent": 10,
"path": "index.md",
"level": 2,
"title": "Getting Started",
"content": "This page is a quick walk-through for setting up a Python app:\n\n1. [Set up Logfire](#logfire)\n2. [Install the SDK](#sdk)\n3. [Instrument your project](#instrument)"
},
{
"id": 12,
"parent": 10,
"path": "index.md",
"level": 2,
"title": "Set up Logfire {#logfire}",
"content": "1. [Log into Logfire :material-open-in-new:](https://logfire.pydantic.dev/login){:target=\"_blank\"}\n2. Follow the prompts to create your account\n3. From your Organisation, click **New project** to create your first project\n\n![Counting size of loaded files screenshot](images/logfire-screenshot-first-steps-first-project.png)\n\n!!! info \"\"\n The first time you use **Logfire** in a new environment, you'll need to set up a project. A **Logfire** project is a namespace for organizing your data. All data sent to **Logfire** must be associated with a project.\n\n??? tip \"You can also create a project via CLI...\"\n Check the [SDK CLI documentation](reference/cli.md#create-projects-new) for more information on how to create a project via CLI."
},
{
"id": 13,
"parent": 10,
"path": "index.md",
"level": 2,
"title": "Install the SDK {#sdk}",
"content": "1. In the terminal, install the **Logfire** SDK (Software Developer Kit):\n\n{{ install_logfire() }}\n\n2. Once installed, try it out!\n\n```bash\nlogfire -h\n```\n\n3. Next, authenticate your local environment:\n\n```bash\nlogfire auth\n```\n\n!!! info \"\"\n Upon successful authentication, credentials are stored in `~/.logfire/default.toml`."
},
{
"id": 14,
"parent": 10,
"path": "index.md",
"level": 2,
"title": "Instrument your project {#instrument}",
"content": "=== \":material-cog-outline: Development\"\n !!! tip \"Development setup\"\n During development, we recommend using the CLI to configure Logfire. You can also use a [write token](guides/advanced/creating-write-tokens.md).\n\n 1. Set your project\n\n ```bash title=\"in the terminal:\"\n logfire projects use <first-project>\n ```\n\n !!! info \"\"\n Run this command from the root directory of your app, e.g. `~/projects/first-project`\n\n 2. Write some basic logs in your Python app\n\n ```py title=\"hello_world.py\"\n import logfire\n\n logfire.configure() # (1)!\n logfire.info('Hello, {name}!', name='world') # (2)!\n ```\n\n 1. The `configure()` method should be called once before logging to initialize **Logfire**.\n 2. This will log `Hello world!` with `info` level.\n\n !!! info \"\"\n Other [log levels][logfire.Logfire] are also available to use, including `trace`, `debug`, `notice`, `warn`,\n `error`, and `fatal`.\n\n\n 3. See your logs in the **Live** view\n\n ![Hello world screenshot](images/logfire-screenshot-first-steps-hello-world.png)\n\n\n=== \":material-cloud-outline: Production\"\n !!! tip \"Production setup\"\n In production, we recommend you provide your write token to the Logfire SDK via environment variables.\n\n 1. Generate a new write token in the **Logfire** platform\n\n - Go to Project :material-chevron-right: Settings :material-chevron-right: Write Tokens\n - Follow the prompts to create a new token\n\n\n 2. Configure your **Logfire** environment\n\n ```bash title=\"in the terminal:\"\n LOGFIRE_TOKEN=<your-write-token>\n ```\n\n !!! info \"\"\n Running this command stores a Write Token used by the SDK to send data to a file in the current directory, at `.logfire/logfire_credentials.json`\n\n 3. Write some basic logs in your Python app\n\n ```py title=\"hello_world.py\"\n import logfire\n\n logfire.configure() # (1)!\n logfire.info('Hello, {name}!', name='world') # (2)!\n ```\n\n 1. The `configure()` method should be called once before logging to initialize **Logfire**.\n 2. This will log `Hello world!` with `info` level.\n\n !!! info \"\"\n Other [log levels][logfire.Logfire] are also available to use, including `trace`, `debug`, `notice`, `warn`,\n `error`, and `fatal`.\n\n 4. See your logs in the **Live** view\n\n ![Hello world screenshot](images/logfire-screenshot-first-steps-hello-world.png)\n\n---"
},
{
"id": 15,
"parent": 10,
"path": "index.md",
"level": 2,
"title": "Next steps",
"content": "Ready to keep going?\n\n- Read about [Tracing with Spans](get-started/traces.md)\n- Complete the [Onboarding Checklist](guides/onboarding-checklist/index.md)\n\nMore topics to explore\n\n- Logfire's real power comes from [integrations with many popular libraries](integrations/index.md)\n- As well as spans, you can [use Logfire to record metrics](guides/onboarding-checklist/add-metrics.md)\n- Logfire doesn't just work with Python, [read more about Language support](https://opentelemetry.io/docs/languages/){:target=\"_blank\"}"
},
{
"id": 16,
"parent": null,
"path": "integrations/structlog.md",
"level": 1,
"title": "Structlog",
"content": "**Logfire** has a built-in [structlog][structlog] processor that can be used to emit Logfire logs for every structlog event.\n\n```py title=\"main.py\" hl_lines=\"5 14\"\nfrom dataclasses import dataclass\n\nimport structlog\nimport logfire\n\nlogfire.configure()\n\nstructlog.configure(\n processors=[\n structlog.contextvars.merge_contextvars,\n structlog.processors.add_log_level,\n structlog.processors.StackInfoRenderer(),\n structlog.dev.set_exc_info,\n structlog.processors.TimeStamper(fmt='%Y-%m-%d %H:%M:%S', utc=False),\n logfire.StructlogProcessor(),\n structlog.dev.ConsoleRenderer(),\n ],\n)\nlogger = structlog.get_logger()\n\n\n@dataclass\nclass User:\n id: int\n name: str\n\n\nlogger.info('Login', user=User(id=42, name='Fred'))\n#> 2024-03-22 12:57:33 [info ] Login user=User(id=42, name='Fred')\n```\n\nThe **Logfire** processor **MUST** come before the last processor that renders the logs in the structlog configuration.\n\nBy default, [`LogfireProcessor`][logfire.integrations.structlog.LogfireProcessor] shown above\ndisables console logging by logfire so you can use the existing logger you have configured for structlog, if you\nwant to log with logfire, use [`LogfireProcessor(console_log=True)`][logfire.integrations.structlog.LogfireProcessor].\n\n!!! note\n Positional arguments aren't collected as attributes by the processor, since they are already part of the event\n message when the processor is called.\n\n If you have the following:\n\n ```py\n logger.error('Hello %s!', 'Fred')\n #> 2024-03-22 13:39:26 [error ] Hello Fred!\n ```\n\n The string `'Fred'` will not be collected by the processor as an attribute, just formatted with the message."
},
{
"id": 17,
"parent": null,
"path": "integrations/celery.md",
"level": 1,
"title": "Celery",
"content": "The [`logfire.instrument_celery()`][logfire.Logfire.instrument_celery] method will create a span for every task\nexecuted by your Celery workers."
},
{
"id": 18,
"parent": 17,
"path": "integrations/celery.md",
"level": 2,
"title": "Installation",
"content": "Install `logfire` with the `celery` extra:\n\n{{ install_logfire(extras=['celery']) }}"
},
{
"id": 19,
"parent": 17,
"path": "integrations/celery.md",
"level": 2,
"title": "Usage",
"content": "You'll need a message broker to run Celery. In this example, we'll run [RabbitMQ][rabbitmq-image] on a docker container.\nYou can run it as follows:\n\n```bash\ndocker run -d --hostname my-rabbit \\\n --name some-rabbit \\\n # -e RABBITMQ_DEFAULT_USER=user \\\n # -e RABBITMQ_DEFAULT_PASS=password \\\n rabbitmq:3-management\n```\n\nBelow we have a minimal example using Celery. You can run it with `celery -A tasks worker --loglevel=info`:\n\n```py title=\"tasks.py\"\nimport logfire\nfrom celery import Celery\nfrom celery.signals import worker_process_init\n\n\nlogfire.configure()\n\n@worker_process_init.connect(weak=False)\ndef init_celery_tracing(*args, **kwargs):\n logfire.instrument_celery()\n\napp = Celery(\"tasks\", broker=\"pyamqp://localhost//\") # (1)!\n\[email protected]\ndef add(x, y):\n return x + y\n\nadd.delay(42, 50)\n```\n\n1. Install `pyamqp` with `pip install pyamqp`.\n\nThe keyword arguments of [`logfire.instrument_celery()`][logfire.Logfire.instrument_celery] are passed to the\n[`CeleryInstrumentor().instrument()`][opentelemetry.instrumentation.celery.CeleryInstrumentor] method."
},
{
"id": 20,
"parent": null,
"path": "integrations/anthropic.md",
"level": 1,
"title": "Anthropic",
"content": ""
},
{
"id": 21,
"parent": 20,
"path": "integrations/anthropic.md",
"level": 2,
"title": "Introduction",
"content": "Logfire supports instrumenting calls to Anthropic with one extra line of code.\n\n```python hl_lines=\"6\"\nimport anthropic\nimport logfire\n\nclient = anthropic.Anthropic()\n\nlogfire.configure()\nlogfire.instrument_anthropic(client) # (1)!\n\nresponse = client.messages.create(\n max_tokens=1000,\n model='claude-3-haiku-20240307',\n system='You are a helpful assistant.',\n messages=[{'role': 'user', 'content': 'Please write me a limerick about Python logging.'}],\n)\nprint(response.content[0].text)\n```\n\n1. If you don't have access to the client instance, you can pass a class (e.g. `logfire.instrument_anthropic(anthropic.Anthropic)`), or just pass no arguments (i.e. `logfire.instrument_anthropic()`) to instrument both the `anthropic.Anthropic` and `anthropic.AsyncAnthropic` classes.\n\n_For more information, see the [`instrument_anthropic()` API reference][logfire.Logfire.instrument_anthropic]._\n\nWith that you get:\n\n* a span around the call to Anthropic which records duration and captures any exceptions that might occur\n* Human-readable display of the conversation with the agent\n* details of the response, including the number of tokens used\n\n<figure markdown=\"span\">\n ![Logfire Anthropic](../images/logfire-screenshot-anthropic.png){ width=\"500\" }\n <figcaption>Anthropic span and conversation</figcaption>\n</figure>\n\n<figure markdown=\"span\">\n ![Logfire Anthropic Arguments](../images/logfire-screenshot-anthropic-arguments.png){ width=\"500\" }\n <figcaption>Span arguments including response details</figcaption>\n</figure>"
},
{
"id": 22,
"parent": 20,
"path": "integrations/anthropic.md",
"level": 2,
"title": "Methods covered",
"content": "The following Anthropic methods are covered:\n\n- [`client.messages.create`](https://docs.anthropic.com/en/api/messages)\n- [`client.messages.stream`](https://docs.anthropic.com/en/api/messages-streaming)\n- [`client.beta.tools.messages.create`](https://docs.anthropic.com/en/docs/tool-use)\n\nAll methods are covered with both `anthropic.Anthropic` and `anthropic.AsyncAnthropic`."
},
{
"id": 23,
"parent": 20,
"path": "integrations/anthropic.md",
"level": 2,
"title": "Streaming Responses",
"content": "When instrumenting streaming responses, Logfire creates two spans — one around the initial request and one\naround the streamed response.\n\nHere we also use Rich's [`Live`][rich.live.Live] and [`Markdown`][rich.markdown.Markdown] types to render the response in the terminal in real-time. :dancer:\n\n```python\nimport anthropic\nimport logfire\nfrom rich.console import Console\nfrom rich.live import Live\nfrom rich.markdown import Markdown\n\nclient = anthropic.AsyncAnthropic()\nlogfire.configure()\nlogfire.instrument_anthropic(client)\n\n\nasync def main():\n console = Console()\n with logfire.span('Asking Anthropic to write some code'):\n response = client.messages.stream(\n max_tokens=1000,\n model='claude-3-haiku-20240307',\n system='Reply in markdown one.',\n messages=[{'role': 'user', 'content': 'Write Python to show a tree of files 🤞.'}],\n )\n content = ''\n with Live('', refresh_per_second=15, console=console) as live:\n async with response as stream:\n async for chunk in stream:\n if chunk.type == 'content_block_delta':\n content += chunk.delta.text\n live.update(Markdown(content))\n\n\nif __name__ == '__main__':\n import asyncio\n\n asyncio.run(main())\n```\n\nShows up like this in Logfire:\n\n<figure markdown=\"span\">\n ![Logfire Anthropic Streaming](../images/logfire-screenshot-anthropic-stream.png){ width=\"500\" }\n <figcaption>Anthropic streaming response</figcaption>\n</figure>"
},
{
"id": 24,
"parent": null,
"path": "integrations/sqlalchemy.md",
"level": 1,
"title": "SQLAlchemy",
"content": "The [`logfire.instrument_sqlalchemy()`][logfire.Logfire.instrument_sqlalchemy] method will create a span for every query executed by a [SQLAlchemy][sqlalchemy] engine."
},
{
"id": 25,
"parent": 24,
"path": "integrations/sqlalchemy.md",
"level": 2,
"title": "Installation",
"content": "Install `logfire` with the `sqlalchemy` extra:\n\n{{ install_logfire(extras=['sqlalchemy']) }}"
},
{
"id": 26,
"parent": 24,
"path": "integrations/sqlalchemy.md",
"level": 2,
"title": "Usage",
"content": "Let's see a minimal example below. You can run it with `python main.py`:\n\n```py title=\"main.py\"\nimport logfire\nfrom sqlalchemy import create_engine\n\nlogfire.configure()\n\nengine = create_engine(\"sqlite:///:memory:\")\nlogfire.instrument_sqlalchemy(engine=engine)\n```\n\nThe keyword arguments of `logfire.instrument_sqlalchemy()` are passed to the `SQLAlchemyInstrumentor().instrument()` method of the OpenTelemetry SQLAlchemy Instrumentation package, read more about it [here][opentelemetry-sqlalchemy].\n\n!!! tip\n If you use [SQLModel][sqlmodel], you can use the same `SQLAlchemyInstrumentor` to instrument it."
},
{
"id": 27,
"parent": null,
"path": "integrations/redis.md",
"level": 1,
"title": "Redis",
"content": "The [`logfire.instrument_redis()`][logfire.Logfire.instrument_redis] method will create a span for every command executed by your [Redis][redis] clients."
},
{
"id": 28,
"parent": 27,
"path": "integrations/redis.md",
"level": 2,
"title": "Installation",
"content": "Install `logfire` with the `redis` extra:\n\n{{ install_logfire(extras=['redis']) }}"
},
{
"id": 29,
"parent": 27,
"path": "integrations/redis.md",
"level": 2,
"title": "Usage",
"content": "Let's setup a container with Redis and run a Python script that connects to the Redis server to\ndemonstrate how to use **Logfire** with Redis."
},
{
"id": 30,
"parent": 29,
"path": "integrations/redis.md",
"level": 3,
"title": "Setup a Redis Server Using Docker",
"content": "First, we need to initialize a Redis server. This can be easily done using Docker with the following command:\n\n```bash\ndocker run --name redis -p 6379:6379 -d redis:latest\n```"
},
{
"id": 31,
"parent": 29,
"path": "integrations/redis.md",
"level": 3,
"title": "Run the Python script",
"content": "```py title=\"main.py\"\nimport logfire\nimport redis\n\n\nlogfire.configure()\nlogfire.instrument_redis()\n\nclient = redis.StrictRedis(host=\"localhost\", port=6379)\nclient.set(\"my-key\", \"my-value\")\n\nasync def main():\n client = redis.asyncio.Redis(host=\"localhost\", port=6379)\n await client.get(\"my-key\")\n\nif __name__ == \"__main__\":\n import asyncio\n\n asyncio.run(main())\n```\n\n!!! info\n You can pass `capture_statement=True` to `logfire.instrument_redis()` to capture the Redis command.\n\n By default, it is set to `False` given that Redis commands can contain sensitive information.\n\nThe keyword arguments of `logfire.instrument_redis()` are passed to the `RedisInstrumentor().instrument()` method of the OpenTelemetry Redis Instrumentation package, read more about it [here][opentelemetry-redis]."
},
{
"id": 32,
"parent": null,
"path": "integrations/openai.md",
"level": 1,
"title": "OpenAI",
"content": ""
},
{
"id": 33,
"parent": 32,
"path": "integrations/openai.md",
"level": 2,
"title": "Introduction",
"content": "Logfire supports instrumenting calls to OpenAI with one extra line of code.\n\n```python hl_lines=\"6\"\nimport openai\nimport logfire\n\nclient = openai.Client()\n\nlogfire.configure()\nlogfire.instrument_openai(client) # (1)!\n\nresponse = client.chat.completions.create(\n model='gpt-4',\n messages=[\n {'role': 'system', 'content': 'You are a helpful assistant.'},\n {'role': 'user', 'content': 'Please write me a limerick about Python logging.'},\n ],\n)\nprint(response.choices[0].message)\n```\n\n1. If you don't have access to the client instance, you can pass a class (e.g. `logfire.instrument_openai(openai.Client)`), or just pass no arguments (i.e. `logfire.instrument_openai()`) to instrument both the `openai.Client` and `openai.AsyncClient` classes.\n\n_For more information, see the [`instrument_openai()` API reference][logfire.Logfire.instrument_openai]._\n\nWith that you get:\n\n* a span around the call to OpenAI which records duration and captures any exceptions that might occur\n* Human-readable display of the conversation with the agent\n* details of the response, including the number of tokens used\n\n<figure markdown=\"span\">\n ![Logfire OpenAI](../images/logfire-screenshot-openai.png){ width=\"500\" }\n <figcaption>OpenAI span and conversation</figcaption>\n</figure>\n\n<figure markdown=\"span\">\n ![Logfire OpenAI Arguments](../images/logfire-screenshot-openai-arguments.png){ width=\"500\" }\n <figcaption>Span arguments including response details</figcaption>\n</figure>"
},
{
"id": 34,
"parent": 32,
"path": "integrations/openai.md",
"level": 2,
"title": "Methods covered",
"content": "The following OpenAI methods are covered:\n\n- [`client.chat.completions.create`](https://platform.openai.com/docs/guides/text-generation/chat-completions-api) — with and without `stream=True`\n- [`client.completions.create`](https://platform.openai.com/docs/guides/text-generation/completions-api) — with and without `stream=True`\n- [`client.embeddings.create`](https://platform.openai.com/docs/guides/embeddings/how-to-get-embeddings)\n- [`client.images.generate`](https://platform.openai.com/docs/guides/images/generations)\n\nAll methods are covered with both `openai.Client` and `openai.AsyncClient`.\n\nFor example, here's instrumentation of an image generation call:\n\n```python\nimport openai\nimport logfire\n\nasync def main():\n client = openai.AsyncClient()\n logfire.configure()\n logfire.instrument_openai(client)\n\n response = await client.images.generate(\n prompt='Image of R2D2 running through a desert in the style of cyberpunk.',\n model='dall-e-3',\n )\n url = response.data[0].url\n import webbrowser\n webbrowser.open(url)\n\nif __name__ == '__main__':\n import asyncio\n asyncio.run(main())\n```\n\nGives:\n\n<figure markdown=\"span\">\n ![Logfire OpenAI Image Generation](../images/logfire-screenshot-openai-image-gen.png){ width=\"500\" }\n <figcaption>OpenAI image generation span</figcaption>\n</figure>"
},
{
"id": 35,
"parent": 32,
"path": "integrations/openai.md",
"level": 2,
"title": "Streaming Responses",
"content": "When instrumenting streaming responses, Logfire creates two spans — one around the initial request and one\naround the streamed response.\n\nHere we also use Rich's [`Live`][rich.live.Live] and [`Markdown`][rich.markdown.Markdown] types to render the response in the terminal in real-time. :dancer:\n\n```python\nimport openai\nimport logfire\nfrom rich.console import Console\nfrom rich.live import Live\nfrom rich.markdown import Markdown\n\nclient = openai.AsyncClient()\nlogfire.configure()\nlogfire.instrument_openai(client)\n\nasync def main():\n console = Console()\n with logfire.span('Asking OpenAI to write some code'):\n response = await client.chat.completions.create(\n model='gpt-4',\n messages=[\n {'role': 'system', 'content': 'Reply in markdown one.'},\n {'role': 'user', 'content': 'Write Python to show a tree of files 🤞.'},\n ],\n stream=True\n )\n content = ''\n with Live('', refresh_per_second=15, console=console) as live:\n async for chunk in response:\n if chunk.choices[0].delta.content is not None:\n content += chunk.choices[0].delta.content\n live.update(Markdown(content))\n\nif __name__ == '__main__':\n import asyncio\n asyncio.run(main())\n```\n\nShows up like this in Logfire:\n\n<figure markdown=\"span\">\n ![Logfire OpenAI Streaming](../images/logfire-screenshot-openai-stream.png){ width=\"500\" }\n <figcaption>OpenAI streaming response</figcaption>\n</figure>"
},
{
"id": 36,
"parent": null,
"path": "integrations/loguru.md",
"level": 1,
"title": "Loguru",
"content": "**Logfire** can act as a sink for [Loguru][loguru] by emitting a **Logfire** log for every log record. For example:\n\n```py title=\"main.py\"\nimport logfire\nfrom loguru import logger\n\nlogfire.configure()\n\nlogger.configure(handlers=[logfire.loguru_handler()])\nlogger.info('Hello, {name}!', name='World')\n```\n\n!!! note\n Currently, **Logfire** will not scrub sensitive data from the message formatted by Loguru, e.g:\n\n ```python\n logger.info('Foo: {bar}', bar='secret_value')\n # > 14:58:26.085 Foo: secret_value\n ```"
},
{
"id": 37,
"parent": null,
"path": "integrations/wsgi.md",
"level": 1,
"title": "WSGI",
"content": "If the [WSGI][wsgi] framework doesn't have a dedicated OpenTelemetry package, you can use the\n[OpenTelemetry WSGI middleware][opentelemetry-wsgi]."
},
{
"id": 38,
"parent": 37,
"path": "integrations/wsgi.md",
"level": 2,
"title": "Installation",
"content": "You need to install the [`opentelemetry-instrumentation-wsgi`][pypi-otel-wsgi] package:\n\n```bash\npip install opentelemetry-instrumentation-wsgi\n```"
},
{
"id": 39,
"parent": 37,
"path": "integrations/wsgi.md",
"level": 2,
"title": "Usage",
"content": "Below we have a minimal example using the standard library [`wsgiref`][wsgiref]. You can run it with `python main.py`:\n\n```py title=\"main.py\"\nfrom wsgiref.simple_server import make_server\n\nfrom opentelemetry.instrumentation.wsgi import OpenTelemetryMiddleware\n\n\ndef app(env, start_response):\n start_response('200 OK', [('Content-Type','text/html')])\n return [b\"Hello World\"]\n\napp = OpenTelemetryMiddleware(app)\n\nwith make_server(\"\", 8000, app) as httpd:\n print(\"Serving on port 8000...\")\n\n # Serve until process is killed\n httpd.serve_forever()\n```\n\nYou can read more about the OpenTelemetry WSGI middleware [here][opentelemetry-wsgi]."
},
{
"id": 40,
"parent": 37,
"path": "integrations/wsgi.md",
"level": 2,
"title": "Capturing request and response headers",
"content": "<!-- note that this section is duplicated for different frameworks but with slightly different links -->\n\n- [Quick guide](use-cases/web-frameworks.md#capturing-http-server-request-and-response-headers)\n- [OpenTelemetry Documentation](https://opentelemetry-python-contrib.readthedocs.io/en/latest/instrumentation/wsgi/wsgi.html#capture-http-request-and-response-headers)"
},
{
"id": 41,
"parent": null,
"path": "integrations/mysql.md",
"level": 1,
"title": "MySQL",
"content": "The [`logfire.instrument_mysql()`][logfire.Logfire.instrument_mysql] method can be used to instrument the [MySQL Connector/Python][mysql-connector] database driver with **Logfire**, creating a span for every query."
},
{
"id": 42,
"parent": 41,
"path": "integrations/mysql.md",
"level": 2,
"title": "Installation",
"content": "Install `logfire` with the `mysql` extra:\n\n{{ install_logfire(extras=['mysql']) }}"
},
{
"id": 43,
"parent": 41,
"path": "integrations/mysql.md",
"level": 2,
"title": "Usage",
"content": "Let's setup a MySQL database using Docker and run a Python script that connects to the database using MySQL connector to\ndemonstrate how to use **Logfire** with MySQL."
},
{
"id": 44,
"parent": 43,
"path": "integrations/mysql.md",
"level": 3,
"title": "Setup a MySQL Database Using Docker",
"content": "First, we need to initialize a MySQL database. This can be easily done using Docker with the following command:\n\n```bash\ndocker run --name mysql \\ # (1)!\n -e MYSQL_ROOT_PASSWORD=secret \\ # (2)!\n -e MYSQL_DATABASE=database \\ # (3)!\n -e MYSQL_USER=user \\ # (4)!\n -e MYSQL_PASSWORD=secret \\ # (5)!\n -p 3306:3306 \\ # (6)!\n -d mysql # (7)!\n```\n\n1. `--name mysql`: This defines the name of the Docker container.\n2. `-e MYSQL_ROOT_PASSWORD=secret`: This sets a password for the MySQL root user.\n3. `-e MYSQL_DATABASE=database`: This creates a new database named \"database\", the same as the one used in your Python script.\n4. `-e MYSQL_USER=user`: This sets a user for the MySQL server.\n5. `-e MYSQL_PASSWORD=secret`: This sets a password for the MySQL server.\n6. `-p 3306:3306`: This makes the MySQL instance available on your local machine under port 3306.\n7. `-d mysql`: This denotes the Docker image to be used, in this case, \"mysql\", and starts the container in detached mode."
},
{
"id": 45,
"parent": 43,
"path": "integrations/mysql.md",
"level": 3,
"title": "Run the Python script",
"content": "The following Python script connects to the MySQL database and executes some SQL queries:\n\n```py\nimport logfire\nimport mysql.connector\n\nlogfire.configure()"
},
{
"id": 46,
"parent": null,
"path": "integrations/mysql.md",
"level": 1,
"title": "To instrument the whole module:",
"content": "logfire.instrument_mysql()\n\nconnection = mysql.connector.connect(\n host=\"localhost\",\n user=\"user\",\n password=\"secret\",\n database=\"database\",\n port=3306,\n use_pure=True,\n)"
},
{
"id": 47,
"parent": null,
"path": "integrations/mysql.md",
"level": 1,
"title": "Or instrument just the connection:",
"content": ""
},
{
"id": 48,
"parent": null,
"path": "integrations/mysql.md",
"level": 1,
"title": "connection = logfire.instrument_mysql(connection)",
"content": "with logfire.span('Create table and insert data'), connection.cursor() as cursor:\n cursor.execute(\n 'CREATE TABLE IF NOT EXISTS test (id INT AUTO_INCREMENT PRIMARY KEY, num integer, data varchar(255));'\n )\n\n # Insert some data\n cursor.execute('INSERT INTO test (num, data) VALUES (%s, %s)', (100, 'abc'))\n cursor.execute('INSERT INTO test (num, data) VALUES (%s, %s)', (200, 'def'))\n\n # Query the data\n cursor.execute('SELECT * FROM test')\n results = cursor.fetchall() # Fetch all rows\n for row in results:\n print(row) # Print each row\n```\n\n[`logfire.instrument_mysql()`][logfire.Logfire.instrument_mysql] uses the\n**OpenTelemetry MySQL Instrumentation** package,\nwhich you can find more information about [here][opentelemetry-mysql]."
},
{
"id": 49,
"parent": null,
"path": "integrations/stripe.md",
"level": 1,
"title": "Stripe",
"content": "[Stripe] is a popular payment gateway that allows businesses to accept payments online.\n\nThe stripe Python client has both synchronous and asynchronous methods for making requests to the Stripe API.\n\nBy default, the stripe client uses the `requests` package for making synchronous requests and\nthe `httpx` package for making asynchronous requests.\n\n```py\nfrom stripe import StripeClient\n\nclient = StripeClient(api_key='<your_secret_key>')"
},
{
"id": 50,
"parent": null,
"path": "integrations/stripe.md",
"level": 1,
"title": "Synchronous request",
"content": "client.customers.list() # uses `requests`"
},
{
"id": 51,
"parent": null,
"path": "integrations/stripe.md",
"level": 1,
"title": "Asynchronous request",
"content": "async def main():\n await client.customers.list_async() # uses `httpx`\n\nif __name__ == '__main__':\n import asyncio\n\n asyncio.run(main())\n```\n\nYou read more about this on the [Configuring an HTTP Client] section on the stripe repository."
},
{
"id": 52,
"parent": 51,
"path": "integrations/stripe.md",
"level": 2,
"title": "Synchronous Requests",
"content": "As mentioned, by default, `stripe` uses the `requests` package for making HTTP requests.\n\nIn this case, you'll need to call [`logfire.instrument_requests()`][requests-section].\n\n```py\nimport os\n\nimport logfire\nfrom stripe import StripeClient\n\nlogfire.configure()\nlogfire.instrument_requests()\n\nclient = StripeClient(api_key=os.getenv('STRIPE_SECRET_KEY'))\n\nclient.customers.list()\n```\n\n!!! note\n If you use the `http_client` parameter to configure the stripe client to use a different HTTP client,\n you'll need to call the appropriate instrumentation method."
},
{
"id": 53,
"parent": 51,
"path": "integrations/stripe.md",
"level": 2,
"title": "Asynchronous Requests",
"content": "As mentioned, by default, `stripe` uses the `httpx` package for making asynchronous HTTP requests.\n\nIn this case, you'll need to call [`logfire.instrument_httpx()`][httpx-section].\n\n```py\nimport asyncio\nimport os\n\nimport logfire\nfrom stripe import StripeClient\n\nlogfire.configure()\nlogfire.instrument_httpx() # for asynchronous requests\n\nclient = StripeClient(api_key=os.getenv('STRIPE_SECRET_KEY'))\n\nasync def main():\n with logfire.span('list async'):\n await client.customers.list_async()\n\nif __name__ == '__main__':\n asyncio.run(main())\n```\n\n!!! note\n If you use the `http_client` parameter to configure the stripe client to use a different HTTP client,\n you'll need to call the appropriate instrumentation method."
},
{
"id": 54,
"parent": 51,
"path": "integrations/stripe.md",
"level": 2,
"title": "Add logging instrumentation",
"content": "Stripe also has a logger (`logger = getLogger('stripe')`) that [you can instrument with **Logfire**][logging-section].\n\n```py hl_lines=\"8\"\nimport os\nfrom logging import basicConfig\n\nimport logfire\nfrom stripe import StripeClient\n\nlogfire.configure()\nbasicConfig(handlers=[logfire.LogfireLoggingHandler()], level='INFO')\n\nclient = StripeClient(api_key=os.getenv('STRIPE_SECRET_KEY'))\n\nclient.customers.list()\n```\n\nYou can change the `level=INFO` to `level=DEBUG` to see even more details, like the response body.\n\n\n\n[logging-section]: logging.md\n[requests-section]: requests.md"
},
{
"id": 55,
"parent": null,
"path": "integrations/requests.md",
"level": 1,
"title": "Requests",
"content": "The [`logfire.instrument_requests()`][logfire.Logfire.instrument_requests] method can be used to\ninstrument [`requests`][requests] with **Logfire**."
},
{
"id": 56,
"parent": 55,
"path": "integrations/requests.md",
"level": 2,
"title": "Installation",
"content": "Install `logfire` with the `requests` extra:\n\n{{ install_logfire(extras=['requests']) }}"
},
{
"id": 57,
"parent": 55,
"path": "integrations/requests.md",
"level": 2,
"title": "Usage",
"content": "```py title=\"main.py\"\nimport logfire\nimport requests\n\nlogfire.configure()\nlogfire.instrument_requests()\n\nrequests.get(\"https://httpbin.org/get\")\n```\n\n[`logfire.instrument_requests()`][logfire.Logfire.instrument_requests] uses the\n**OpenTelemetry requests Instrumentation** package,\nwhich you can find more information about [here][opentelemetry-requests]."
},
{
"id": 58,
"parent": null,
"path": "integrations/asyncpg.md",
"level": 1,
"title": "asyncpg",
"content": "The [`logfire.instrument_asyncpg()`][logfire.Logfire.instrument_asyncpg] function can be used to instrument the [asyncpg][asyncpg] PostgreSQL driver with **Logfire**."
},
{
"id": 59,
"parent": 58,
"path": "integrations/asyncpg.md",
"level": 2,
"title": "Installation",
"content": "Install `logfire` with the `asyncpg` extra:\n\n{{ install_logfire(extras=['asyncpg']) }}"
},
{
"id": 60,
"parent": 58,
"path": "integrations/asyncpg.md",
"level": 2,
"title": "Usage",
"content": "Let's setup a PostgreSQL database using Docker and run a Python script that connects to the database using asyncpg to\ndemonstrate how to use **Logfire** with asyncpg."
},
{
"id": 61,
"parent": 60,
"path": "integrations/asyncpg.md",
"level": 3,
"title": "Setup a PostgreSQL Database Using Docker",
"content": "First, we need to initialize a PostgreSQL database. This can be easily done using Docker with the following command:\n\n```bash\ndocker run --name postgres \\ # (1)!\n -e POSTGRES_USER=user \\ # (2)!\n -e POSTGRES_PASSWORD=secret \\ # (3)!\n -e POSTGRES_DB=database \\ # (4)!\n -p 5432:5432 \\ # (5)!\n -d postgres # (6)!\n```\n\n1. `--name postgres`: This defines the name of the Docker container.\n2. `-e POSTGRES_USER=user`: This sets a user for the PostgreSQL server.\n3. `-e POSTGRES_PASSWORD=secret`: This sets a password for the PostgreSQL server.\n4. `-e POSTGRES_DB=database`: This creates a new database named \"database\", the same as the one used in your Python script.\n5. `-p 5432:5432`: This makes the PostgreSQL instance available on your local machine under port 5432.\n6. `-d postgres`: This denotes the Docker image to be used, in this case, \"postgres\", and starts the container in detached mode."
},
{
"id": 62,
"parent": 60,
"path": "integrations/asyncpg.md",
"level": 3,
"title": "Run the Python script",
"content": "The following Python script connects to the PostgreSQL database and executes some SQL queries:\n\n```py\nimport asyncio\n\nimport asyncpg\n\nimport logfire\n\nlogfire.configure()\nlogfire.instrument_asyncpg()\n\n\nasync def main():\n connection: asyncpg.Connection = await asyncpg.connect(\n user='user', password='secret', database='database', host='0.0.0.0', port=5432\n )\n\n with logfire.span('Create table and insert data'):\n await connection.execute('CREATE TABLE IF NOT EXISTS test (id serial PRIMARY KEY, num integer, data varchar);')\n\n # Insert some data\n await connection.execute('INSERT INTO test (num, data) VALUES ($1, $2)', 100, 'abc')\n await connection.execute('INSERT INTO test (num, data) VALUES ($1, $2)', 200, 'def')\n\n # Query the data\n for record in await connection.fetch('SELECT * FROM test'):\n logfire.info('Retrieved {record=}', record=record)\n\n\nasyncio.run(main())\n```\n\nIf you go to your project on the UI, you will see the span created by the script."
},
{
"id": 63,
"parent": null,
"path": "integrations/system-metrics.md",
"level": 2,
"title": "Installation",
"content": "Install `logfire` with the `system-metrics` extra:\n\n{{ install_logfire(extras=['system-metrics']) }}"
},
{
"id": 64,
"parent": null,
"path": "integrations/system-metrics.md",
"level": 2,
"title": "Usage",
"content": "```py\nimport logfire\n\nlogfire.configure()\n\nlogfire.instrument_system_metrics()\n```\n\nThen in your project, click on 'Dashboards' in the top bar, click 'New Dashboard', and select 'Basic System Metrics (Logfire)' from the dropdown."
},
{
"id": 65,
"parent": null,
"path": "integrations/system-metrics.md",
"level": 2,
"title": "Configuration",
"content": "By default, `instrument_system_metrics` collects only the metrics it needs to display the 'Basic System Metrics (Logfire)' dashboard. You can choose exactly which metrics to collect and how much data to collect about each metric. The default is equivalent to this:\n\n```py\nlogfire.instrument_system_metrics({\n 'process.runtime.cpu.utilization': None, # (1)!\n 'system.cpu.simple_utilization': None, # (2)!\n 'system.memory.utilization': ['available'], # (3)!\n 'system.swap.utilization': ['used'], # (4)!\n})\n```\n\n1. `process.runtime.cpu.utilization` will lead to exporting a metric that is actually named `process.runtime.cpython.cpu.utilization` or a similar name depending on the Python implementation used. The `None` value means that there are no fields to configure for this metric. The value of this metric is `[psutil.Process().cpu_percent()](https://psutil.readthedocs.io/en/latest/#psutil.Process.cpu_percent) / 100`, i.e. the fraction of CPU time used by this process, where 1 means using 100% of a single CPU core. The value can be greater than 1 if the process uses multiple cores.\n2. The `None` value means that there are no fields to configure for this metric. The value of this metric is `[psutil.cpu_percent()](https://psutil.readthedocs.io/en/latest/#psutil.cpu_percent) / 100`, i.e. the fraction of CPU time used by the whole system, where 1 means using 100% of all CPU cores.\n3. The value here is a list of 'modes' of memory. The full list can be seen in the [`psutil` documentation](https://psutil.readthedocs.io/en/latest/#psutil.virtual_memory). `available` is \"the memory that can be given instantly to processes without the system going into swap. This is calculated by summing different memory metrics that vary depending on the platform. It is supposed to be used to monitor actual memory usage in a cross platform fashion.\" The value of the metric is a number between 0 and 1, and subtracting the value from 1 gives the fraction of memory used.\n4. This is the fraction of available swap used. The value is a number between 0 and 1.\n\nTo collect lots of detailed data about all available metrics, use `logfire.instrument_system_metrics(base='full')`.\n\n!!! warning\n The amount of data collected by `base='full'` can be expensive, especially if you have many servers,\n and this is easy to forget about. If you enable this, be sure to monitor your usage and costs.\n\n The most expensive metrics are `system.cpu.utilization/time` which collect data for each core and each mode,\n and `system.disk.*` which collect data for each disk device. 
The exact number depends on the machine hardware,\n but this can result in hundreds of data points per minute from each instrumented host.\n\n`logfire.instrument_system_metrics(base='full')` is equivalent to:\n\n```py\nlogfire.instrument_system_metrics({\n 'system.cpu.simple_utilization': None,\n 'system.cpu.time': ['idle', 'user', 'system', 'irq', 'softirq', 'nice', 'iowait', 'steal', 'interrupt', 'dpc'],\n 'system.cpu.utilization': ['idle', 'user', 'system', 'irq', 'softirq', 'nice', 'iowait', 'steal', 'interrupt', 'dpc'],\n 'system.memory.usage': ['available', 'used', 'free', 'active', 'inactive', 'buffers', 'cached', 'shared', 'wired', 'slab', 'total'],\n 'system.memory.utilization': ['available', 'used', 'free', 'active', 'inactive', 'buffers', 'cached', 'shared', 'wired', 'slab'],\n 'system.swap.usage': ['used', 'free'],\n 'system.swap.utilization': ['used'],\n 'system.disk.io': ['read', 'write'],\n 'system.disk.operations': ['read', 'write'],\n 'system.disk.time': ['read', 'write'],\n 'system.network.dropped.packets': ['transmit', 'receive'],\n 'system.network.packets': ['transmit', 'receive'],\n 'system.network.errors': ['transmit', 'receive'],\n 'system.network.io': ['transmit', 'receive'],\n 'system.thread_count': None,\n 'process.runtime.memory': ['rss', 'vms'],\n 'process.runtime.cpu.time': ['user', 'system'],\n 'process.runtime.gc_count': None,\n 'process.runtime.thread_count': None,\n 'process.runtime.cpu.utilization': None,\n 'process.runtime.context_switches': ['involuntary', 'voluntary'],\n 'process.open_file_descriptor.count': None,\n})\n```\n\nEach key here is a metric name. The values have different meanings for different metrics. For example, for `system.cpu.utilization`, the value is a list of CPU modes. So there will be a separate row for each CPU core saying what percentage of time it spent idle, another row for the time spent waiting for IO, etc. There are no fields to configure for `system.thread_count`, so the value is `None`.\n\nFor convenient customizability, the first dict argument is merged with the base. For example, if you want to collect disk read operations (but not writes) you can write:\n\n- `logfire.instrument_system_metrics({'system.disk.operations': ['read']})` to collect that data in addition to the basic defaults.\n- `logfire.instrument_system_metrics({'system.disk.operations': ['read']}, base='full')` to collect detailed data about all metrics, excluding disk write operations.\n- `logfire.instrument_system_metrics({'system.disk.operations': ['read']}, base=None)` to collect only disk read operations and nothing else."
},
{
"id": 66,
"parent": null,
"path": "integrations/index.md",
"level": 1,
"title": "Integrations",
"content": "If a package you are using is not listed here, please let us know on our [Slack][slack]!"
},
{
"id": 67,
"parent": 66,
"path": "integrations/index.md",
"level": 2,
"title": "OpenTelemetry Integrations",
"content": "Since **Pydantic Logfire** is [OpenTelemetry][opentelemetry] compatible, it can be used with any OpenTelemetry\ninstrumentation package. You can find the list of all OpenTelemetry instrumentation packages\n[here](https://opentelemetry-python-contrib.readthedocs.io/en/latest/).\n\nBelow you can see more details on how to use Logfire with some of the most popular Python packages.\n\n| Package | Type |\n|-------------------------------------|-------------------------|\n| [FastAPI](fastapi.md) | Web Framework |\n| [Django](django.md) | Web Framework |\n| [Flask](flask.md) | Web Framework |\n| [Starlette](starlette.md) | Web Framework |\n| [ASGI](asgi.md) | Web Framework Interface |\n| [WSGI](wsgi.md) | Web Framework Interface |\n| [HTTPX](httpx.md) | HTTP Client |\n| [Requests](requests.md) | HTTP Client |\n| [AIOHTTP](aiohttp.md) | HTTP Client |\n| [SQLAlchemy](sqlalchemy.md) | Databases |\n| [Asyncpg](asyncpg.md) | Databases |\n| [Psycopg](psycopg.md) | Databases |\n| [PyMongo](pymongo.md) | Databases |\n| [MySQL](mysql.md) | Databases |\n| [Redis](redis.md) | Databases |\n| [BigQuery](bigquery.md) | Databases |\n| [Celery](celery.md) | Task Queue |\n| [Stripe](stripe.md) | Payment Gateway |\n| [System Metrics](system-metrics.md) | System Metrics |\n\nIf you are using Logfire with a web application, we also recommend reviewing\nour [Web Frameworks](use-cases/web-frameworks.md)\ndocumentation."
},
{
"id": 68,
"parent": 66,
"path": "integrations/index.md",
"level": 2,
"title": "Custom Integrations",
"content": "We have special integration with the Pydantic library and the OpenAI SDK:\n\n| Package | Type |\n|-------------------------|-----------------|\n| [Pydantic](pydantic.md) | Data Validation |\n| [OpenAI](openai.md) | AI |"
},
{
"id": 69,
"parent": 66,
"path": "integrations/index.md",
"level": 2,
"title": "Logging Integrations",
"content": "Finally, we also have documentation for how to use Logfire with existing logging libraries:\n\n| Package | Type |\n|----------------------------------------|---------|\n| [Standard Library Logging](logging.md) | Logging |\n| [Loguru](loguru.md) | Logging |\n| [Structlog](structlog.md) | Logging |"
},
{
"id": 70,
"parent": null,
"path": "integrations/logging.md",
"level": 1,
"title": "Standard Library Logging",
"content": "**Logfire** can act as a sink for [standard library logging][logging] by emitting a **Logfire** log for\nevery standard library log record.\n\n```py title=\"main.py\"\nfrom logging import basicConfig, getLogger\n\nimport logfire\n\nlogfire.configure()\nbasicConfig(handlers=[logfire.LogfireLoggingHandler()])\n\nlogger = getLogger(__name__)\n\nlogger.error(\"Hello %s!\", \"Fred\")"
},
{
"id": 71,
"parent": null,
"path": "integrations/logging.md",
"level": 1,
"title": "10:05:06.855 Hello Fred!",
"content": "```"
},
{
"id": 72,
"parent": null,
"path": "integrations/starlette.md",
"level": 1,
"title": "Starlette",
"content": "The [`logfire.instrument_starlette()`][logfire.Logfire.instrument_starlette] method will create a span for every request to your [Starlette][starlette] application."
},
{
"id": 73,
"parent": 72,
"path": "integrations/starlette.md",
"level": 2,
"title": "Installation",
"content": "Install `logfire` with the `starlette` extra:\n\n{{ install_logfire(extras=['starlette']) }}"
},
{
"id": 74,
"parent": 72,
"path": "integrations/starlette.md",
"level": 2,
"title": "Usage",
"content": "We have a minimal example below. Please install [Uvicorn][uvicorn] to run it:\n\n```bash\npip install uvicorn\n```\n\nYou can run it with `python main.py`:\n\n```py title=\"main.py\"\nimport logfire\nfrom starlette.applications import Starlette\nfrom starlette.responses import PlainTextResponse\nfrom starlette.requests import Request\nfrom starlette.routing import Route\n\nlogfire.configure()\n\n\nasync def home(request: Request) -> PlainTextResponse:\n return PlainTextResponse(\"Hello, world!\")\n\n\napp = Starlette(routes=[Route(\"/\", home)])\nlogfire.instrument_starlette(app)\n\nif __name__ == \"__main__\":\n import uvicorn\n\n uvicorn.run(app)\n```\n\nThe keyword arguments of `logfire.instrument_starlette()` are passed to the `StarletteInstrumentor.instrument_app()` method of the OpenTelemetry Starlette Instrumentation package, read more about it [here][opentelemetry-starlette].\n\n!!! question \"What about the OpenTelemetry ASGI middleware?\"\n If you are a more experienced user, you might be wondering why we are not using\n the [OpenTelemetry ASGI middleware][opentelemetry-asgi]. The reason is that the\n `StarletteInstrumentor` actually wraps the ASGI middleware and adds some additional\n information related to the routes."
},
{
"id": 75,
"parent": 72,
"path": "integrations/starlette.md",
"level": 2,
"title": "Excluding URLs from instrumentation",
"content": "<!-- note that this section is duplicated for different frameworks but with slightly different links -->\n\n- [Quick guide](use-cases/web-frameworks.md#excluding-urls-from-instrumentation)\n- [OpenTelemetry Documentation](https://opentelemetry-python-contrib.readthedocs.io/en/latest/instrumentation/starlette/starlette.html#exclude-lists)"
},
{
"id": 76,
"parent": 72,
"path": "integrations/starlette.md",
"level": 2,
"title": "Capturing request and response headers",
"content": "<!-- note that this section is duplicated for different frameworks but with slightly different links -->\n\n- [Quick guide](use-cases/web-frameworks.md#capturing-http-server-request-and-response-headers)\n- [OpenTelemetry Documentation](https://opentelemetry-python-contrib.readthedocs.io/en/latest/instrumentation/starlette/starlette.html#capture-http-request-and-response-headers)"
},
{
"id": 77,
"parent": null,
"path": "integrations/django.md",
"level": 1,
"title": "Django",
"content": "The [`logfire.instrument_django()`][logfire.Logfire.instrument_django] method can be used to instrument the [Django][django] web framework with **Logfire**."
},
{
"id": 78,
"parent": 77,
"path": "integrations/django.md",
"level": 2,
"title": "Installation",
"content": "Install `logfire` with the `django` extra:\n\n{{ install_logfire(extras=['django']) }}"
},
{
"id": 79,
"parent": 77,
"path": "integrations/django.md",
"level": 2,
"title": "Usage",
"content": "In the `settings.py` file, add the following lines:\n\n```py\nimport logfire"
},
{
"id": 80,
"parent": null,
"path": "integrations/django.md",
"level": 1,
"title": "...All the other settings...",
"content": ""
},
{
"id": 81,
"parent": null,
"path": "integrations/django.md",
"level": 1,
"title": "Add the following lines at the end of the file",
"content": "logfire.configure()\nlogfire.instrument_django()\n```\n\n[`logfire.instrument_django()`][logfire.Logfire.instrument_django] uses the\n**OpenTelemetry Django Instrumentation** package,\nwhich you can find more information about [here][opentelemetry-django]."
},
{
"id": 82,
"parent": 81,
"path": "integrations/django.md",
"level": 2,
"title": "Excluding URLs from instrumentation",
"content": "<!-- note that this section is duplicated for different frameworks but with slightly different links -->\n\n- [Quick guide](use-cases/web-frameworks.md#excluding-urls-from-instrumentation)\n- [OpenTelemetry Documentation](https://opentelemetry-python-contrib.readthedocs.io/en/latest/instrumentation/django/django.html#exclude-lists)"
},
{
"id": 83,
"parent": 81,
"path": "integrations/django.md",
"level": 2,
"title": "Capturing request and response headers",
"content": "<!-- note that this section is duplicated for different frameworks but with slightly different links -->\n\n- [Quick guide](use-cases/web-frameworks.md#capturing-http-server-request-and-response-headers)\n- [OpenTelemetry Documentation](https://opentelemetry-python-contrib.readthedocs.io/en/latest/instrumentation/django/django.html#capture-http-request-and-response-headers)"
},
{
"id": 84,
"parent": null,
"path": "integrations/aiohttp.md",
"level": 1,
"title": "AIOHTTP Client",
"content": "[AIOHTTP][aiohttp] is an asynchronous HTTP client/server framework for asyncio and Python.\n\nThe [`logfire.instrument_aiohttp_client()`][logfire.Logfire.instrument_aiohttp_client] method will create a span for every request made by your AIOHTTP clients.\n\n!!! question \"What about AIOHTTP Server?\"\n The AIOHTTP server instrumentation is not supported yet. You can track the progress [here][aiohttp-server]."
},
{
"id": 85,
"parent": 84,
"path": "integrations/aiohttp.md",
"level": 2,
"title": "Installation",
"content": "Install `logfire` with the `aiohttp` extra:\n\n{{ install_logfire(extras=['aiohttp']) }}"
},
{
"id": 86,
"parent": 84,
"path": "integrations/aiohttp.md",
"level": 2,
"title": "Usage",
"content": "Let's see a minimal example below. You can run it with `python main.py`:\n\n```py title=\"main.py\"\nimport logfire\nimport aiohttp\n\n\nlogfire.configure()\nlogfire.instrument_aiohttp_client()\n\n\nasync def main():\n async with aiohttp.ClientSession() as session:\n await session.get(\"https://httpbin.org/get\")\n\n\nif __name__ == \"__main__\":\n import asyncio\n\n asyncio.run(main())\n```\n\nThe keyword arguments of `logfire.instrument_aiohttp_client()` are passed to the `AioHttpClientInstrumentor().instrument()` method of the OpenTelemetry aiohttp client Instrumentation package, read more about it [here][opentelemetry-aiohttp]."
},
{
"id": 87,
"parent": null,
"path": "integrations/psycopg.md",
"level": 1,
"title": "Psycopg",
"content": "The [`logfire.instrument_psycopg()`][logfire.Logfire.instrument_psycopg] function can be used to instrument the [Psycopg][psycopg] PostgreSQL driver with **Logfire**. It works with both the `psycopg2` and `psycopg` (i.e. Psycopg 3) packages.\n\nSee the documentation for the [OpenTelemetry Psycopg Instrumentation][opentelemetry-psycopg] or the [OpenTelemetry Psycopg2 Instrumentation][opentelemetry-psycopg2] package for more details."
},
{
"id": 88,
"parent": 87,
"path": "integrations/psycopg.md",
"level": 2,
"title": "Installation",
"content": "Install `logfire` with the `psycopg` extra:\n\n{{ install_logfire(extras=['psycopg']) }}\n\nOr with the `psycopg2` extra:\n\n{{ install_logfire(extras=['psycopg2']) }}"
},
{
"id": 89,
"parent": 87,
"path": "integrations/psycopg.md",
"level": 2,
"title": "Usage",
"content": "Let's setup a PostgreSQL database using Docker and run a Python script that connects to the database using Psycopg to\ndemonstrate how to use **Logfire** with Psycopg."
},
{
"id": 90,
"parent": 89,
"path": "integrations/psycopg.md",
"level": 3,
"title": "Setup a PostgreSQL Database Using Docker",
"content": "First, we need to initialize a PostgreSQL database. This can be easily done using Docker with the following command:\n\n```bash\ndocker run --name postgres \\ # (1)!\n -e POSTGRES_USER=user \\ # (2)!\n -e POSTGRES_PASSWORD=secret \\ # (3)!\n -e POSTGRES_DB=database \\ # (4)!\n -p 5432:5432 \\ # (5)!\n -d postgres # (6)!\n```\n\n1. `--name postgres`: This defines the name of the Docker container.\n2. `-e POSTGRES_USER=user`: This sets a user for the PostgreSQL server.\n3. `-e POSTGRES_PASSWORD=secret`: This sets a password for the PostgreSQL server.\n4. `-e POSTGRES_DB=database`: This creates a new database named \"database\", the same as the one used in your Python script.\n5. `-p 5432:5432`: This makes the PostgreSQL instance available on your local machine under port 5432.\n6. `-d postgres`: This denotes the Docker image to be used, in this case, \"postgres\", and starts the container in detached mode."
},
{
"id": 91,
"parent": 89,
"path": "integrations/psycopg.md",
"level": 3,
"title": "Run the Python script",
"content": "The following Python script connects to the PostgreSQL database and executes some SQL queries:\n\n```py\nimport logfire\nimport psycopg\n\nlogfire.configure()"
},
{
"id": 92,
"parent": null,
"path": "integrations/psycopg.md",
"level": 1,
"title": "To instrument the whole module:",
"content": "logfire.instrument_psycopg(psycopg)"
},
{
"id": 93,
"parent": null,
"path": "integrations/psycopg.md",
"level": 1,
"title": "or",
"content": "logfire.instrument_psycopg('psycopg')"
},
{
"id": 94,
"parent": null,
"path": "integrations/psycopg.md",
"level": 1,
"title": "or just instrument whichever modules (psycopg and/or psycopg2) are installed:",
"content": "logfire.instrument_psycopg()\n\nconnection = psycopg.connect(\n 'dbname=database user=user password=secret host=0.0.0.0 port=5432'\n)"
},
{
"id": 95,
"parent": null,
"path": "integrations/psycopg.md",
"level": 1,
"title": "Or instrument just the connection:",
"content": "logfire.instrument_psycopg(connection)\n\nwith logfire.span('Create table and insert data'), connection.cursor() as cursor:\n cursor.execute(\n 'CREATE TABLE IF NOT EXISTS test (id serial PRIMARY KEY, num integer, data varchar);'\n )\n\n # Insert some data\n cursor.execute('INSERT INTO test (num, data) VALUES (%s, %s)', (100, 'abc'))\n cursor.execute('INSERT INTO test (num, data) VALUES (%s, %s)', (200, 'def'))\n\n # Query the data\n cursor.execute('SELECT * FROM test')\n```\n\nIf you go to your project on the UI, you will see the span created by the script."
},
{
"id": 96,
"parent": 95,
"path": "integrations/psycopg.md",
"level": 2,
"title": "SQL Commenter",
"content": "To add SQL comments to the end of your queries to enrich your database logs with additional context, use the `enable_commenter` parameter:\n\n```python\nimport logfire\n\nlogfire.configure()\nlogfire.instrument_psycopg(enable_commenter=True)\n```\n\nThis can only be used when instrumenting the whole module, not individual connections.\n\nBy default the SQL comments will include values for the following keys:\n\n- `db_driver`\n- `dbapi_threadsafety`\n- `dbapi_level`\n- `libpq_version`\n- `driver_paramstyle`\n- `opentelemetry_values`\n\nYou can exclude any of these keys by passing a dictionary with those keys and the value `False` to `commenter_options`,\ne.g:\n\n```python\nimport logfire\n\nlogfire.configure()\nlogfire.instrument_psycopg(enable_commenter=True, commenter_options={'db_driver': False, 'dbapi_threadsafety': False})\n```"
},
{
"id": 97,
"parent": null,
"path": "integrations/bigquery.md",
"level": 1,
"title": "BigQuery",
"content": "The [Google Cloud BigQuery Python client library][bigquery-pypi] is instrumented with OpenTelemetry out of the box,\nand all the extra dependencies are already included with **Logfire** by default, so you only need to call `logfire.configure()`.\n\nLet's see an example:\n\n```python\nfrom google.cloud import bigquery\n\nimport logfire\n\nlogfire.configure()\n\nclient = bigquery.Client()\nquery = \"\"\"\nSELECT name\nFROM `bigquery-public-data.usa_names.usa_1910_2013`\nWHERE state = \"TX\"\nLIMIT 100\n\"\"\"\nquery_job = client.query(query)\nprint(list(query_job.result()))\n```\n\nYou can find more information about the BigQuery Python client library in the [official documentation][bigquery]."
},
{
"id": 98,
"parent": null,
"path": "integrations/pydantic.md",
"level": 1,
"title": "Pydantic",
"content": "Logfire has a Pydantic plugin to instrument [Pydantic][pydantic] models.\nThe plugin provides logs and metrics about model validation.\n\nTo enable the plugin, do one of the following:\n\n- Set the `LOGFIRE_PYDANTIC_PLUGIN_RECORD` environment variable to `all`.\n- Set `pydantic_plugin_record` in `pyproject.toml`, e.g:\n\n```toml\n[tool.logfire]\npydantic_plugin_record = \"all\"\n```\n\n- Call [`logfire.instrument_pydantic`][logfire.Logfire.instrument_pydantic] with the desired configuration, e.g:\n\n```py\nimport logfire\n\nlogfire.instrument_pydantic() # Defaults to record='all'\n```\n\nNote that if you only use the last option then only model classes defined and imported *after* calling `logfire.instrument_pydantic`\nwill be instrumented.\n\n!!! note\n Remember to call [`logfire.configure()`][logfire.configure] at some point, whether before or after\n calling `logfire.instrument_pydantic` and defining model classes.\n Model validations will only start being logged after calling `logfire.configure()`."
},
{
"id": 99,
"parent": 98,
"path": "integrations/pydantic.md",
"level": 2,
"title": "Third party modules",
"content": "By default, third party modules are not instrumented by the plugin to avoid noise. You can enable instrumentation for those\nusing the [`include`][logfire.PydanticPlugin.include] configuration.\n\n```py\nlogfire.instrument_pydantic(include={'openai'})\n```\n\nYou can also disable instrumentation for your own modules using the\n[`exclude`][logfire.PydanticPlugin.exclude] configuration.\n\n```py\nlogfire.instrument_pydantic(exclude={'app.api.v1'})\n```"
},
{
"id": 100,
"parent": 98,
"path": "integrations/pydantic.md",
"level": 2,
"title": "Model configuration",
"content": "If you want more granular control over the plugin, you can use the\n[`plugin_settings`][pydantic.config.ConfigDict.plugin_settings] class parameter in your Pydantic models.\n\n```py\nfrom logfire.integrations.pydantic import PluginSettings\nfrom pydantic import BaseModel\n\n\nclass Foo(BaseModel, plugin_settings=PluginSettings(logfire={'record': 'failure'})):\n ...\n```"
},
{
"id": 101,
"parent": 100,
"path": "integrations/pydantic.md",
"level": 3,
"title": "Record",
"content": "The [`record`][logfire.integrations.pydantic.LogfireSettings.record] argument is used to configure what to record.\nIt can be one of the following values:\n\n * `all`: Send traces and metrics for all events. This is default value for `logfire.instrument_pydantic`.\n * `failure`: Send metrics for all validations and traces only for validation failures.\n * `metrics`: Send only metrics.\n * `off`: Disable instrumentation.\n\n<!--\n[Sampling](../usage/sampling.md) can be configured by `trace_sample_rate` key in\n[`plugin_settings`][pydantic.config.ConfigDict.plugin_settings].\n\n```py\nfrom pydantic import BaseModel\n\n\nclass Foo(BaseModel, plugin_settings={'logfire': {'record': 'all', 'trace_sample_rate': 0.4}}):\n ...\n```\n-->"
},
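{
"id": 101.1,
"parent": 101,
"path": "integrations/pydantic.md",
"level": 4,
"title": "Example: recording only failures",
"content": "For illustration, here's a minimal sketch (assuming, as in the examples above, that `record` can be passed directly to `logfire.instrument_pydantic`) that records traces only for failed validations while still collecting metrics for all validations:\n\n```py\nimport logfire\nfrom pydantic import BaseModel, ValidationError\n\nlogfire.configure()\nlogfire.instrument_pydantic(record='failure')  # traces only for validation failures\n\n\nclass User(BaseModel):\n    name: str\n\n\nUser(name='Anne')  # valid: contributes to metrics only\ntry:\n    User(name=[])  # invalid: also recorded as a trace with the validation errors\nexcept ValidationError:\n    pass\n```"
},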
{
"id": 102,
"parent": 100,
"path": "integrations/pydantic.md",
"level": 3,
"title": "Tags",
"content": "Tags are used to add additional information to the traces, and metrics. They can be included by\nadding the [`tags`][logfire.integrations.pydantic.LogfireSettings.tags] key in\n[`plugin_settings`][pydantic.config.ConfigDict.plugin_settings].\n\n```py\nfrom pydantic import BaseModel\n\n\nclass Foo(\n BaseModel,\n plugin_settings={'logfire': {'record': 'all', 'tags': ('tag1', 'tag2')}}\n):\n```"
},
{
"id": 103,
"parent": null,
"path": "integrations/pymongo.md",
"level": 1,
"title": "PyMongo",
"content": "The [`logfire.instrument_pymongo()`][logfire.Logfire.instrument_pymongo] method will create a span for every operation performed using your [PyMongo][pymongo] clients.\n\n!!! success \"Also works with Motor... 🚗\"\n This integration also works with [`motor`](https://motor.readthedocs.io/en/stable/), the asynchronous driver for MongoDB."
},
{
"id": 104,
"parent": 103,
"path": "integrations/pymongo.md",
"level": 2,
"title": "Installation",
"content": "Install `logfire` with the `pymongo` extra:\n\n{{ install_logfire(extras=['pymongo']) }}"
},
{
"id": 105,
"parent": 103,
"path": "integrations/pymongo.md",
"level": 2,
"title": "Usage",
"content": "The following example demonstrates how to use **Logfire** with PyMongo."
},
{
"id": 106,
"parent": 105,
"path": "integrations/pymongo.md",
"level": 3,
"title": "Run Mongo on Docker (Optional)",
"content": "If you already have a MongoDB instance running, you can skip this step.\nOtherwise, you can start MongoDB using Docker with the following command:\n\n```bash\ndocker run --name mongo -p 27017:27017 -d mongo:latest\n```"
},
{
"id": 107,
"parent": 105,
"path": "integrations/pymongo.md",
"level": 3,
"title": "Run the Python script",
"content": "The following script connects to a MongoDB database, inserts a document, and queries it:\n\n=== \"Sync\"\n\n ```py\n import logfire\n from pymongo import MongoClient\n\n logfire.configure()\n logfire.instrument_pymongo()\n\n client = MongoClient()\n db = client[\"database\"]\n collection = db[\"collection\"]\n collection.insert_one({\"name\": \"MongoDB\"})\n collection.find_one()\n ```\n\n=== \"Async\"\n\n ```py\n import asyncio\n import logfire\n from motor.motor_asyncio import AsyncIOMotorClient\n\n logfire.configure()\n logfire.instrument_pymongo()\n\n async def main():\n client = AsyncIOMotorClient()\n db = client[\"database\"]\n collection = db[\"collection\"]\n await collection.insert_one({\"name\": \"MongoDB\"})\n await collection.find_one()\n\n asyncio.run(main())\n ```\n\n!!! info\n You can pass `capture_statement=True` to `logfire.instrument_pymongo()` to capture the queries.\n\n By default, it is set to `False` to avoid capturing sensitive information.\n\nThe keyword arguments of `logfire.instrument_pymongo()` are passed to the `PymongoInstrumentor().instrument()` method of the OpenTelemetry pymongo Instrumentation package, read more about it [here][opentelemetry-pymongo]."
},
{
"id": 108,
"parent": null,
"path": "integrations/httpx.md",
"level": 1,
"title": "HTTPX",
"content": "The [`logfire.instrument_httpx()`][logfire.Logfire.instrument_httpx] method can be used to instrument [HTTPX][httpx] with **Logfire**."
},
{
"id": 109,
"parent": 108,
"path": "integrations/httpx.md",
"level": 2,
"title": "Installation",
"content": "Install `logfire` with the `httpx` extra:\n\n{{ install_logfire(extras=['httpx']) }}"
},
{
"id": 110,
"parent": 108,
"path": "integrations/httpx.md",
"level": 2,
"title": "Usage",
"content": "Let's see a minimal example below. You can run it with `python main.py`:\n\n```py title=\"main.py\"\nimport logfire\nimport httpx\n\nlogfire.configure()\nlogfire.instrument_httpx()\n\nurl = \"https://httpbin.org/get\"\n\nwith httpx.Client() as client:\n client.get(url)\n\n\nasync def main():\n async with httpx.AsyncClient() as client:\n await client.get(url)\n\n\nif __name__ == \"__main__\":\n import asyncio\n\n asyncio.run(main())\n```\n\n[`logfire.instrument_httpx()`][logfire.Logfire.instrument_httpx] uses the\n**OpenTelemetry HTTPX Instrumentation** package,\nwhich you can find more information about [here][opentelemetry-httpx]."
},
{
"id": 111,
"parent": null,
"path": "integrations/fastapi.md",
"level": 1,
"title": "FastAPI",
"content": "**Logfire** combines custom and third-party instrumentation for [FastAPI][fastapi]\nwith the [`logfire.instrument_fastapi()`][logfire.Logfire.instrument_fastapi] method."
},
{
"id": 112,
"parent": 111,
"path": "integrations/fastapi.md",
"level": 2,
"title": "Installation",
"content": "Install `logfire` with the `fastapi` extra:\n\n{{ install_logfire(extras=['fastapi']) }}"
},
{
"id": 113,
"parent": 111,
"path": "integrations/fastapi.md",
"level": 2,
"title": "Usage",
"content": "We have a minimal example below. Please install [Uvicorn][uvicorn] to run it:\n\n```bash\npip install uvicorn\n```\n\nYou can run it with `python main.py`:\n\n```py title=\"main.py\"\nimport logfire\nfrom fastapi import FastAPI\n\napp = FastAPI()\n\nlogfire.configure()\nlogfire.instrument_fastapi(app)\n\n\[email protected](\"/hello\")\nasync def hello(name: str):\n return {\"message\": f\"hello {name}\"}\n\n\nif __name__ == \"__main__\":\n import uvicorn\n\n uvicorn.run(app)\n```\n\nThen visit http://localhost:8000/hello?name=world and check the logs."
},
{
"id": 114,
"parent": 111,
"path": "integrations/fastapi.md",
"level": 2,
"title": "OpenTelemetry FastAPI Instrumentation",
"content": "The third-party [OpenTelemetry FastAPI Instrumentation][opentelemetry-fastapi] package adds spans to every request with\ndetailed attributes about the HTTP request such as the full URL and the user agent. The start and end times let you see\nhow long it takes to process each request.\n\n[`logfire.instrument_fastapi()`][logfire.Logfire.instrument_fastapi] applies this instrumentation by default.\nYou can disable it by passing `use_opentelemetry_instrumentation=False`.\n\n[`logfire.instrument_fastapi()`][logfire.Logfire.instrument_fastapi] also accepts arbitrary additional keyword arguments\nand passes them to the OpenTelemetry `FastAPIInstrumentor.instrument_app()` method. See their documentation for more details."
},
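{
"id": 114.1,
"parent": 114,
"path": "integrations/fastapi.md",
"level": 3,
"title": "Example: disabling the OpenTelemetry instrumentation",
"content": "As a minimal sketch of the option described above, you can keep **Logfire**'s own FastAPI instrumentation while skipping the third-party OpenTelemetry middleware:\n\n```py\nimport logfire\nfrom fastapi import FastAPI\n\napp = FastAPI()\n\nlogfire.configure()\n# Disable the OpenTelemetry FastAPI instrumentation; Logfire's own\n# `FastAPI arguments` spans (see the next section) are still emitted.\nlogfire.instrument_fastapi(app, use_opentelemetry_instrumentation=False)\n```"
},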
{
"id": 115,
"parent": 111,
"path": "integrations/fastapi.md",
"level": 2,
"title": "Logfire instrumentation: logging endpoint arguments and validation errors",
"content": "[`logfire.instrument_fastapi()`][logfire.Logfire.instrument_fastapi] will emit a span for each request\ncalled `FastAPI arguments` which shows how long it takes FastAPI to parse and validate the endpoint function\narguments from the request and resolve any dependencies.\nBy default the span will also contain the following attributes:\n\n- `values`: A dictionary mapping argument names of the endpoint function to parsed and validated values.\n- `errors`: A list of validation errors for any invalid inputs.\n\nYou can customize this by passing an `request_attributes_mapper` function to `instrument_fastapi`. This function will be called\nwith the `Request` or `WebSocket` object and the default attributes dictionary. It should return a new dictionary of\nattributes, or `None` to set the span level to 'debug' so that it's hidden by default. For example:\n\n```py\nimport logfire\n\napp = ...\n\n\ndef request_attributes_mapper(request, attributes):\n if attributes[\"errors\"]:\n # Only log validation errors, not valid arguments\n return {\n \"errors\": attributes[\"errors\"],\n \"my_custom_attribute\": ...,\n }\n else:\n # Don't log anything for valid requests\n return None\n\n\nlogfire.configure()\nlogfire.instrument_fastapi(app, request_attributes_mapper=request_attributes_mapper)\n```\n\n!!! note\n The [`request_attributes_mapper`][logfire.Logfire.instrument_fastapi(request_attributes_mapper)] function mustn't mutate the\n contents of `values` or `errors`, but it can safely replace them with new values."
},
{
"id": 116,
"parent": 111,
"path": "integrations/fastapi.md",
"level": 2,
"title": "Excluding URLs from instrumentation",
"content": "<!-- note that this section is duplicated for different frameworks but with slightly different links -->\n\n- [Quick guide](use-cases/web-frameworks.md#excluding-urls-from-instrumentation)\n- [OpenTelemetry Documentation](https://opentelemetry-python-contrib.readthedocs.io/en/latest/instrumentation/fastapi/fastapi.html#exclude-lists)"
},
{
"id": 117,
"parent": 111,
"path": "integrations/fastapi.md",
"level": 2,
"title": "Capturing request and response headers",
"content": "<!-- note that this section is duplicated for different frameworks but with slightly different links -->\n\n- [Quick guide](use-cases/web-frameworks.md#capturing-http-server-request-and-response-headers)\n- [OpenTelemetry Documentation](https://opentelemetry-python-contrib.readthedocs.io/en/latest/instrumentation/fastapi/fastapi.html#capture-http-request-and-response-headers)"
},
{
"id": 118,
"parent": null,
"path": "integrations/asgi.md",
"level": 1,
"title": "ASGI",
"content": "If the [ASGI][asgi] framework doesn't have a dedicated OpenTelemetry package, you can use the\n[OpenTelemetry ASGI middleware][opentelemetry-asgi]."
},
{
"id": 119,
"parent": 118,
"path": "integrations/asgi.md",
"level": 2,
"title": "Installation",
"content": "You need to install the `opentelemetry-instrumentation-asgi` package:\n\n```bash\npip install opentelemetry-instrumentation-asgi\n```"
},
{
"id": 120,
"parent": 118,
"path": "integrations/asgi.md",
"level": 2,
"title": "Usage",
"content": "Below we have a minimal example using [Uvicorn][uvicorn]. You can run it with `python main.py`:\n\n```py title=\"main.py\"\nimport logfire\nfrom opentelemetry.instrumentation.asgi import OpenTelemetryMiddleware\n\n\nlogfire.configure()\n\n\nasync def app(scope, receive, send):\n assert scope[\"type\"] == \"http\"\n await send(\n {\n \"type\": \"http.response.start\",\n \"status\": 200,\n \"headers\": [(b\"content-type\", b\"text/plain\"), (b\"content-length\", b\"13\")],\n }\n )\n await send({\"type\": \"http.response.body\", \"body\": b\"Hello, world!\"})\n\napp = OpenTelemetryMiddleware(app)\n\nif __name__ == \"__main__\":\n import uvicorn\n\n uvicorn.run(app)\n```\n\nYou can read more about the OpenTelemetry ASGI middleware [here][opentelemetry-asgi]."
},
{
"id": 121,
"parent": 118,
"path": "integrations/asgi.md",
"level": 2,
"title": "Excluding URLs from instrumentation",
"content": "<!-- note that this section is duplicated for different frameworks but with slightly different links -->\n\n- [Quick guide](use-cases/web-frameworks.md#excluding-urls-from-instrumentation)\n\n!!! note\n `OpenTelemetryMiddleware` does accept an `excluded_urls` parameter, but does not support specifying said URLs via an environment variable,\n unlike other instrumentations."
},
{
"id": 122,
"parent": 118,
"path": "integrations/asgi.md",
"level": 2,
"title": "Capturing request and response headers",
"content": "<!-- note that this section is duplicated for different frameworks but with slightly different links -->\n\n- [Quick guide](use-cases/web-frameworks.md#capturing-http-server-request-and-response-headers)\n- [OpenTelemetry Documentation](https://opentelemetry-python-contrib.readthedocs.io/en/latest/instrumentation/asgi/asgi.html#capture-http-request-and-response-headers)"
},
{
"id": 123,
"parent": null,
"path": "integrations/flask.md",
"level": 1,
"title": "Flask",
"content": "The [`logfire.instrument_flask()`][logfire.Logfire.instrument_flask] method\nwill create a span for every request to your [Flask][flask] application."
},
{
"id": 124,
"parent": 123,
"path": "integrations/flask.md",
"level": 2,
"title": "Install",
"content": "Install `logfire` with the `flask` extra:\n\n{{ install_logfire(extras=['flask']) }}"
},
{
"id": 125,
"parent": 123,
"path": "integrations/flask.md",
"level": 2,
"title": "Usage",
"content": "Let's see a minimal example below. You can run it with `python main.py`:\n\n```py title=\"main.py\"\nimport logfire\nfrom flask import Flask\n\n\nlogfire.configure()\n\napp = Flask(__name__)\nlogfire.instrument_flask(app)\n\n\[email protected](\"/\")\ndef hello():\n return \"Hello!\"\n\n\nif __name__ == \"__main__\":\n app.run(debug=True)\n```\n\nThe keyword arguments of `logfire.instrument_flask()` are passed to the `FlaskInstrumentor().instrument_app()` method\nof the OpenTelemetry Flask Instrumentation package, read more about it [here][opentelemetry-flask]."
},
{
"id": 126,
"parent": 123,
"path": "integrations/flask.md",
"level": 2,
"title": "Excluding URLs from instrumentation",
"content": "<!-- note that this section is duplicated for different frameworks but with slightly different links -->\n\n- [Quick guide](use-cases/web-frameworks.md#excluding-urls-from-instrumentation)\n- [OpenTelemetry Documentation](https://opentelemetry-python-contrib.readthedocs.io/en/latest/instrumentation/flask/flask.html#exclude-lists)"
},
{
"id": 127,
"parent": 123,
"path": "integrations/flask.md",
"level": 2,
"title": "Capturing request and response headers",
"content": "<!-- note that this section is duplicated for different frameworks but with slightly different links -->\n\n- [Quick guide](use-cases/web-frameworks.md#capturing-http-server-request-and-response-headers)\n- [OpenTelemetry Documentation](https://opentelemetry-python-contrib.readthedocs.io/en/latest/instrumentation/flask/flask.html#capture-http-request-and-response-headers)"
},
{
"id": 128,
"parent": null,
"path": "guides/index.md",
"level": 2,
"title": "**Onboarding Checklist 📋**",
"content": "In [this guide](onboarding-checklist/index.md), we provide a checklist with step-by-step instructions to take an existing application and thoroughly\ninstrument it to send data to Logfire. In particular, we'll show you how to leverage Logfire's various\n[integrations](../integrations/index.md) to generate as much useful data with as little development effort as possible.\n\n**Following this checklist for your application is _critical_ to getting the most out of Logfire.**"
},
{
"id": 129,
"parent": null,
"path": "guides/index.md",
"level": 2,
"title": "**Intro to the Web UI**",
"content": "In [this guide](web-ui/index.md), we introduce the various views and features of the Logfire Web UI, and show you how to use them\nto investigate your projects' data.\n\n[//]: # (When we have more than one, I think it's worth adding the following section:)\n[//]: # (### Use cases)\n[//]: # ()\n[//]: # (We have special documentation for some common use cases:)\n[//]: # (* **[Web Frameworks]&#40;use-cases/web-frameworks.md&#41;:** Django, Flask, FastAPI, etc.)\n\n[//]: # (Once we have more content, I think this would also be a useful section, somewhat different than the previous:)\n[//]: # (### Case Studies)\n[//]: # (* **[Investigating database performance issues with the Live view]&#40;...&#41;** [autoexplain + pgmustard])\n[//]: # (* **[Monitoring deployment health]&#40;...&#41;** [dashboards + alerts])\n[//]: # (* **[Investigating your data with the Live and Explore views]&#40;...&#41;**)"
},
{
"id": 130,
"parent": null,
"path": "guides/index.md",
"level": 2,
"title": "**Advanced User Guide**",
"content": "We cover additional topics in the **[Advanced User Guide](advanced/index.md)**, including:\n\n* **[Sampling](advanced/sampling.md/#sampling):** Down-sample lower-priority traces to reduce costs.\n* **[Scrubbing](advanced/scrubbing.md):** Remove sensitive data from your logs and traces before sending them to Logfire.\n* **[Testing](advanced/testing.md):** Test your usage of Logfire.\n* ... and more."
},
{
"id": 131,
"parent": null,
"path": "guides/index.md",
"level": 2,
"title": "**Integrations and Reference**",
"content": "* **[Integrations](../integrations/index.md):**\nIn this section of the docs we explain what an OpenTelemetry instrumentation is, and offer detailed guidance about how\nto get the most out of them in combination with Logfire. We also document here how to send data to Logfire from other\nlogging libraries you might already be using, including `loguru`, `structlog`, and the Python standard library's\n`logging` module.\n* **[Configuration](../reference/configuration.md):**\nIn this section we document the various ways you can configure which Logfire project your deployment will send data to.\n* **[Organization Structure](../reference/organization-structure.md):**\nIn this section we document the organization, project, and permissions model in Logfire.\n* **[SDK CLI docs](../reference/cli.md):**\nDocumentation of the `logfire` command-line interface."
},
{
"id": 132,
"parent": null,
"path": "why-logfire/python-centric.md",
"level": 1,
"title": "Python-centric insights :material-snake:",
"content": "Pydantic Logfire automatically instruments your code for minimal manual effort, provides exceptional insights into async code, offers detailed performance analytics, and displays Python objects the same as the interpreter. Pydantic Logfire gives you a clearer view into how your Python is running than any other observability tool."
},
{
"id": 133,
"parent": 132,
"path": "why-logfire/python-centric.md",
"level": 2,
"title": "Rich display of Python objects",
"content": "![Logfire FastAPI screenshot](../images/logfire-screenshot-fastapi-arguments.png)\n\nIn this example, you can see the parameters passed to a FastAPI endpoint formatted as a Python object."
},
{
"id": 134,
"parent": 132,
"path": "why-logfire/python-centric.md",
"level": 2,
"title": "Profiling Python code",
"content": "![Logfire Auto-tracing screenshot](../images/logfire-screenshot-autotracing.png)\n\nIn this simple app example, you can see every interaction the user makes with the web app automatically traced to the Live view using the [Auto-tracing method](../guides/onboarding-checklist/add-auto-tracing.md)."
},
{
"id": 135,
"parent": null,
"path": "why-logfire/sql.md",
"level": 1,
"title": "Structured Data and SQL :abacus: {#sql}",
"content": "Query your data with pure, canonical PostgreSQL — all the control and (for many) nothing new to learn. We even provide direct access to the underlying Postgres database, which means that you can query Logfire using any Postgres-compatible tools you like.\n\nThis includes BI tools and dashboard-building platforms like\n\n- Superset\n- Grafana\n- Google Looker Studio\n\nAs well as data science tools like\n\n- Pandas\n- SQLAlchemy\n- `psql`\n\nUsing vanilla PostgreSQL as the querying language throughout the platform ensures a consistent, powerful, and flexible querying experience.\n\nAnother big advantage of using the most widely used SQL databases is that generative AI tools like ChatGPT are excellent at writing SQL for you.\n\nJust include your Python objects in **Logfire** calls (lists, dict, dataclasses, Pydantic models, DataFrames, and more),\nand it'll end up as structured data in our platform ready to be queried.\n\nFor example, using data from a `User` model, we could list users from the USA:\n\n```sql\nSELECT attributes->'result'->>'name' as name, extract(year from (attributes->'result'->>'dob')::date) as \"birth year\"\nFROM records\nWHERE attributes->'result'->>'country_code' = 'USA';\n```\n\n![Logfire explore query screenshot](../images/index/logfire-screenshot-explore-query.png)\n\nYou can also filter to show only traces related to users in the USA in the live view with\n\n```sql\nattributes->'result'->>'name' = 'Ben'\n```\n\n![Logfire search query screenshot](../images/index/logfire-screenshot-search-query.png)\n\n\nStructured Data and Direct SQL Access means you can use familiar tools like Pandas, SQLAlchemy, or `psql`\nfor querying, can integrate seamlessly with BI tools, and can even leverage AI for SQL generation, ensuring your Python\nobjects and structured data are query-ready."
},
{
"id": 136,
"parent": null,
"path": "why-logfire/simplicity.md",
"level": 1,
"title": "Simplicity and Power :rocket:",
"content": "Emulating the Pydantic library's philosophy, Pydantic Logfire offers an\nintuitive start for beginners while providing the depth experts desire. It's the same balance of ease, sophistication,\nand productivity, reimagined for observability.\n\nWithin a few minutes you'll have your first logs:\n\n![Logfire hello world screenshot](../images/index/logfire-screenshot-hello-world-age.png)\n\n\nThis might look similar to simple logging, but it's much more powerful — you get:\n\n- **Structured data** from your logs\n- **Nested logs &amp; traces** to contextualize what you're viewing\n- **Custom-built platform** to view your data, with no configuration required\n- **Pretty display** of Python objects\n\nReady to try Logfire? [Get Started](../index.md)! 🚀"
},
{
"id": 137,
"parent": null,
"path": "why-logfire/index.md",
"level": 1,
"title": "Introducing Pydantic Logfire",
"content": "From the team behind Pydantic, **Logfire** is an observability platform built on the same belief as our open source library — that the most powerful tools can be easy to use."
},
{
"id": 138,
"parent": 137,
"path": "why-logfire/index.md",
"level": 2,
"title": "What sets Logfire apart",
"content": "<div class=\"grid cards\" markdown>\n\n- :rocket:{ .lg .middle } __Simplicity and Power__\n\n ---\n\n Logfire's dashboard is simple relative to the power it provides, ensuring your entire engineering team will actually use it. Time-to-first-log should be less than 5 minutes.\n\n [:octicons-arrow-right-24: Read more](simplicity.md)\n\n- :snake:{ .lg .middle } __Python-centric Insights__\n\n ---\n\n From rich display of **Python objects**, to **event-loop telemetry**, to **profiling Python code &amp; database queries**, Logfire gives you unparalleled visibility into your Python application's behavior.\n\n [:octicons-arrow-right-24: Read more](python-centric.md)\n\n- :simple-pydantic:{ .lg .middle } __Pydantic Integration__\n\n ---\n\n Understand the data flowing through your Pydantic models and get built-in analytics on validations.\n\n Pydantic Logfire helps you instrument your applications with less code, less time, and better understanding.\n\n [:octicons-arrow-right-24: Read more](pydantic.md)\n\n- :telescope:{ .lg .middle } __OpenTelemetry__\n\n ---\n\n Logfire is an opinionated wrapper around OpenTelemetry, allowing you to leverage existing tooling, infrastructure, and instrumentation for many common Python packages, and enabling support for virtually any language.\n\n [:octicons-arrow-right-24: Read more](opentelemetry.md)\n\n- :simple-instructure:{ .lg .middle } __Structured Data__\n\n ---\n\n Include your Python objects in Logfire calls (lists, dict, dataclasses, Pydantic models, DataFrames, and more), and it'll end up as structured data in our platform ready to be queried.\n\n [:octicons-arrow-right-24: Read more](sql.md)\n\n- :abacus:{ .lg .middle } __SQL__\n\n ---\n\n Query your data using standard SQL — all the control and (for many) nothing new to learn. Using SQL also means you can query your data with existing BI tools and database querying libraries.\n\n [:octicons-arrow-right-24: Read more](sql.md)\n\n</div>"
},
{
"id": 139,
"parent": 137,
"path": "why-logfire/index.md",
"level": 2,
"title": "Find the needle in a _stack trace_",
"content": "We understand Python and its peculiarities. Pydantic Logfire was crafted by Python developers, for Python developers, addressing the unique challenges and opportunities of the Python environment. It's not just about having data; it's about having the *right* data, presented in ways that make sense for Python applications.\n\n![Logfire FastAPI screenshot](../images/index/logfire-screenshot-fastapi-200.png)"
},
{
"id": 140,
"parent": null,
"path": "why-logfire/pydantic.md",
"level": 1,
"title": "Pydantic integration",
"content": "Logfire has an out-of-the-box Pydantic integration that lets you understand the data passing through your Pydantic models and get analytics on validations. For existing Pydantic users, it delivers unparalleled insights into your usage of Pydantic models.\n\nWe can record Pydantic models directly:\n\n```py\nfrom datetime import date\nimport logfire\nfrom pydantic import BaseModel\n\nlogfire.configure()\n\nclass User(BaseModel):\n name: str\n country_code: str\n dob: date\n\nuser = User(name='Anne', country_code='USA', dob='2000-01-01')\nlogfire.info('user processed: {user!r}', user=user) # (1)!\n```\n\n1. This will show `user processed: User(name='Anne', country_code='US', dob=datetime.date(2000, 1, 1))`, but also allow you to see a \"pretty\" view of the model within the Logfire Platform.\n\n![Logfire pydantic manual screenshot](../images/index/logfire-screenshot-pydantic-manual.png)\n\nOr we can record information about validations automatically:\n\n```py\nfrom datetime import date\nimport logfire\nfrom pydantic import BaseModel\n\nlogfire.configure()\nlogfire.instrument_pydantic() # (1)!\n\nclass User(BaseModel):\n name: str\n country_code: str\n dob: date\n\nUser(name='Anne', country_code='USA', dob='2000-01-01') # (2)!\nUser(name='Ben', country_code='USA', dob='2000-02-02')\nUser(name='Charlie', country_code='GBR', dob='1990-03-03')\n```\n\n1. This configuration means details about all Pydantic model validations will be recorded. You can also record details about validation failures only, or just metrics; see the [pydantic plugin docs](../integrations/pydantic.md).\n2. Since we've enabled the Pydantic Plugin, all Pydantic validations will be recorded in Logfire.\n\nLearn more about the [Pydantic Plugin here](../integrations/pydantic.md).\n\n![Logfire pydantic plugin screenshot](../images/index/logfire-screenshot-pydantic-plugin.png)"
},
{
"id": 141,
"parent": null,
"path": "why-logfire/opentelemetry.md",
"level": 1,
"title": "OpenTelemetry under the hood :telescope:",
"content": "Because **Pydantic Logfire** is built on [OpenTelemetry](https://opentelemetry.io/), you can\nuse a wealth of existing tooling and infrastructure, including\n[instrumentation for many common Python packages](https://opentelemetry-python-contrib.readthedocs.io/en/latest/index.html). Logfire also supports cross-language data integration and data export to any OpenTelemetry-compatible backend or proxy.\n\nFor example, we can instrument a simple FastAPI app with just 2 lines of code:\n\n```py title=\"main.py\" hl_lines=\"8 9 10\"\nfrom datetime import date\nimport logfire\nfrom pydantic import BaseModel\nfrom fastapi import FastAPI\n\napp = FastAPI()\n\nlogfire.configure()\nlogfire.instrument_fastapi(app) # (1)!"
},
{
"id": 142,
"parent": null,
"path": "why-logfire/opentelemetry.md",
"level": 1,
"title": "Here you'd instrument any other library that you use. (2)",
"content": "class User(BaseModel):\n name: str\n country_code: str\n dob: date\n\n\[email protected]('/')\nasync def add_user(user: User):\n # we would store the user here\n return {'message': f'{user.name} added'}\n```\n\n1. In addition to [configuring logfire](../reference/configuration.md) this line is all you need to instrument a FastAPI app with Logfire. The same applies to most other popular Python web frameworks.\n2. The [integrations](../integrations/index.md) page has more information on how to instrument other parts of your app. Run the [inspect](../reference/cli.md#inspect-inspect) command for package suggestions.\n\nWe'll need the [FastAPI contrib package](../integrations/fastapi.md), FastAPI itself and uvicorn installed to run this:\n\n```bash\npip install 'logfire[fastapi]' fastapi uvicorn # (1)!\nuvicorn fastapi_example:app # (2)!\n```\n\n1. Install the `logfire` package with the `fastapi` extra, FastAPI, and uvicorn.\n2. Run the FastAPI app with uvicorn.\n\nThis will give you information on the HTTP request and details of results from successful input validations:\n\n![Logfire FastAPI 200 response screenshot](../images/index/logfire-screenshot-fastapi-200.png)\n\nAnd, importantly, details of failed input validations:\n\n![Logfire FastAPI 422 response screenshot](../images/index/logfire-screenshot-fastapi-422.png)\n\nIn the example above, we can see the FastAPI arguments failing (`user` is null when it should always be populated). This demonstrates type-checking from Pydantic used out-of-the-box in FastAPI."
},
{
"id": 143,
"parent": null,
"path": "reference/examples.md",
"level": 1,
"title": "Examples",
"content": "These are working, stand-alone apps and projects that you can clone, spin up locally and play around with to get a feel for the different capabilities of Logfire.\n\n**Got a suggestion?**\n\nIf you want to see an example of a particular language or library, [get in touch](../help.md)."
},
{
"id": 144,
"parent": 143,
"path": "reference/examples.md",
"level": 2,
"title": "Python",
"content": ""
},
{
"id": 145,
"parent": 144,
"path": "reference/examples.md",
"level": 3,
"title": "Flask and SQLAlchemy example",
"content": "This example is a simple Python financial calculator app using Flask and SQLAlchemy which is instrumented using the appropriate integrations as well as [auto-tracing](../guides/onboarding-checklist/add-auto-tracing.md). If you spin up the server locally and interact with the calculator app, you'll be able to see traces come in automatically:\n\n![Flask and SQLAlchemy example](../images/logfire-screenshot-examples-flask-sqlalchemy.png)\n\n[See it on GitHub :material-open-in-new:](https://github.com/pydantic/logfire/tree/main/examples/python/flask-sqlalchemy/){:target=\"_blank\"}"
},
{
"id": 146,
"parent": 143,
"path": "reference/examples.md",
"level": 2,
"title": "JavaScript",
"content": "Currently we only have a Python SDK, but the Logfire backend and UI support data sent by any OpenTelemetry client. See the [alternative clients guide](../guides/advanced/alternative-clients.md) for details on setting up OpenTelemetry in any language. We're working on a JavaScript SDK, but in the meantime here are some examples of using plain OpenTelemetry in JavaScript:"
},
{
"id": 147,
"parent": 146,
"path": "reference/examples.md",
"level": 3,
"title": "Cloudflare worker example",
"content": "This example is based on the scaffolding created from `npm create cloudflare@latest`, and uses the [otel-cf-workers package](https://github.com/evanderkoogh/otel-cf-workers) to instrument a Cloudflare Worker and send traces and metrics to Logfire.\n\n[See it on GitHub :material-open-in-new:](https://github.com/pydantic/logfire/tree/main/examples/javascript/cloudflare-worker/){:target=\"_blank\"}"
},
{
"id": 148,
"parent": 146,
"path": "reference/examples.md",
"level": 3,
"title": "Express example",
"content": "This example demonstrates how to use OpenTelemetry to instrument an Express application and send traces and metrics to Logfire.\n\n[See it on GitHub :material-open-in-new:](https://github.com/pydantic/logfire/tree/main/examples/javascript/express/){:target=\"_blank\"}"
},
{
"id": 149,
"parent": null,
"path": "reference/cli.md",
"level": 1,
"title": "SDK Command Line Interface",
"content": "**Logfire** comes with a CLI used for authentication and project management:\n\n```\n{{ logfire_help }}\n```"
},
{
"id": 150,
"parent": 149,
"path": "reference/cli.md",
"level": 2,
"title": "Authentication (`auth`)",
"content": "You need to be authenticated to use the **Logfire**.\n\n!!! abstract\n Read the [Terms of Service][terms-of-service] and [Privacy Policy][privacy_policy] if you want\n to know how we handle your data. :nerd_face:\n\nTo authenticate yourself, run the `auth` command in the terminal:\n\n```bash\nlogfire auth\n```\n\n![Terminal screenshot with Logfire auth command](../images/cli/terminal-screenshot-auth-1.png)\n\nAfter pressing `\"Enter\"`, you will be redirected to the browser to log in to your account.\n\n![Browser screenshot with Logfire login page](../images/cli/browser-screenshot-auth.png)\n\nThen, if you go back to the terminal, you'll see that you are authenticated! :tada:\n\n![Terminal screenshot with successful authentication](../images/cli/terminal-screenshot-auth-2.png)"
},
{
"id": 151,
"parent": 149,
"path": "reference/cli.md",
"level": 2,
"title": "Backfill (`backfill`)",
"content": "!!! warning \"🚧 Work in Progress 🚧\"\n This section is yet to be written, [contact us](../help.md) if you have any questions."
},
{
"id": 152,
"parent": 149,
"path": "reference/cli.md",
"level": 2,
"title": "Clean (`clean`)",
"content": "To clean _most_ the files created by **Logfire**, run the following command:\n\n```bash\nlogfire clean\n```\n\nThe clean command doesn't remove the logs, and the authentication information stored in the `~/.logfire` directory.\n\nTo also remove the logs, you can run the following command:\n\n```bash\nlogfire clean --logs\n```"
},
{
"id": 153,
"parent": 149,
"path": "reference/cli.md",
"level": 2,
"title": "Inspect (`inspect`)",
"content": "The inspect command is used to identify the missing OpenTelemetry instrumentation packages in your project.\n\nTo inspect your project, run the following command:\n\n```bash\nlogfire inspect\n```\n\nThis will output the projects you need to install to have optimal OpenTelemetry instrumentation.\n\n![Terminal screenshot with Logfire inspect command](../images/cli/terminal-screenshot-inspect.png)"
},
{
"id": 154,
"parent": 149,
"path": "reference/cli.md",
"level": 2,
"title": "Who Am I (`whoami`)",
"content": "!!! warning \"🚧 Work in Progress 🚧\"\n This section is yet to be written, [contact us](../help.md) if you have any questions."
},
{
"id": 155,
"parent": 149,
"path": "reference/cli.md",
"level": 2,
"title": "Projects",
"content": "<!-- TODO(Marcelo): We can add the `logfire projects --help` here. -->"
},
{
"id": 156,
"parent": 155,
"path": "reference/cli.md",
"level": 3,
"title": "List (`projects list`)",
"content": "To check the projects you have access to, run the following command:\n\n```bash\nlogfire projects list\n```\n\nYou'll see something like this:\n\n```bash\n❯ logfire projects list\n┏━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━┓\n┃ Organization ┃ Project ┃\n┡━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━┩\n│ Kludex │ backend │\n│ Kludex │ worker │\n└──────────────┴────────────────┘\n```"
},
{
"id": 157,
"parent": 155,
"path": "reference/cli.md",
"level": 3,
"title": "Use (`projects use`)",
"content": "To use an already created project, run the following command:\n\n```bash\nlogfire projects use <project-name>\n```\n\nFor example, to use the `backend` project, you can run:\n\n```bash\nlogfire projects use backend\n```"
},
{
"id": 158,
"parent": 155,
"path": "reference/cli.md",
"level": 3,
"title": "Create (`projects new`)",
"content": "To create a new project, run the following command:\n\n```bash\nlogfire projects new <project-name>\n```\n\nFollow the instructions, and you'll have a new project created in no time! :partying_face:\n\n[terms-of-service]: ../legal/terms-of-service.md\n[privacy_policy]: ../legal/privacy.md"
},
{
"id": 159,
"parent": null,
"path": "reference/configuration.md",
"level": 2,
"title": "Programmatically via `configure`",
"content": "For more details, please refer to our [API documentation][logfire.configure]."
},
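{
"id": 159.1,
"parent": 159,
"path": "reference/configuration.md",
"level": 3,
"title": "Example",
"content": "As a small illustrative sketch, you can pass configuration directly instead of relying on environment variables or `pyproject.toml`. The `project_name` parameter here is assumed from the configuration-file example below, which notes that its keys match the parameters of [`logfire.configure()`][logfire.configure]:\n\n```py\nimport logfire\n\n# Assumed parameter: mirrors the `project_name` key from the pyproject.toml example.\nlogfire.configure(project_name='My Project')\n```"
},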
{
"id": 160,
"parent": null,
"path": "reference/configuration.md",
"level": 2,
"title": "Using environment variables",
"content": "You can use the following environment variables to configure **Logfire**:\n\n{{ env_var_table }}\n\nWhen using environment variables, you still need to call [`logfire.configure()`][logfire.configure],\nbut you can leave out the arguments."
},
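{
"id": 160.1,
"parent": 160,
"path": "reference/configuration.md",
"level": 3,
"title": "Example",
"content": "For example, here's a minimal sketch using the `LOGFIRE_TOKEN` environment variable (normally you'd export it in your shell or deployment configuration rather than setting it in code; it's the same variable referenced by integrations such as LiteLLM):\n\n```py\nimport os\n\nimport logfire\n\n# Hypothetical placeholder token; set it via the environment in real deployments.\nos.environ['LOGFIRE_TOKEN'] = '<your-write-token>'\n\nlogfire.configure()  # no arguments needed; configuration is read from the environment\n```"
},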
{
"id": 161,
"parent": null,
"path": "reference/configuration.md",
"level": 2,
"title": "Using a configuration file (`pyproject.toml`)",
"content": "You can use the `pyproject.toml` to configure **Logfire**.\n\nHere's an example:\n\n```toml\n[tool.logfire]\nproject_name = \"My Project\"\nconsole_colors = \"never\"\n```\n\nThe keys are the same as the parameters of [`logfire.configure()`][logfire.configure]."
},
{
"id": 162,
"parent": null,
"path": "get-started/traces.md",
"level": 2,
"title": "Example #1",
"content": "In this example:\n\n1. The outer span measures the time to count the total size of files in the current directory (`cwd`).\n2. Inner spans measure the time to read each individual file.\n3. Finally, the total size is logged.\n\n```py\nfrom pathlib import Path\nimport logfire\n\nlogfire.configure()\n\ncwd = Path.cwd()\ntotal_size = 0\n\nwith logfire.span('counting size of {cwd=}', cwd=cwd):\n for path in cwd.iterdir():\n if path.is_file():\n with logfire.span('reading {path}', path=path.relative_to(cwd)):\n total_size += len(path.read_bytes())\n\n logfire.info('total size of {cwd} is {size} bytes', cwd=cwd, size=total_size)\n```\n\n![Counting size of loaded files screenshot](../images/logfire-screenshot-first-steps-load-files.png)\n\n---"
},
{
"id": 163,
"parent": null,
"path": "get-started/traces.md",
"level": 2,
"title": "Example #2",
"content": "In this example:\n\n1. The outer span sets the topic — the user's birthday\n2. The user input is captured in the terminal\n3. `dob` (date of birth) is displayed in the span\n3. Logfire calculates the age from the `dob` and displays age in the debug message\n\n```py\nlogfire.configure()\n\nwith logfire.span('Asking the user for their {question}', question='birthday'): # (1)!\n user_input = input('When were you born [YYYY-mm-dd]? ')\n dob = date.fromisoformat(user_input) # (2)!\n logfire.debug('{dob=} {age=!r}', dob=dob, age=date.today() - dob) # (3)!\n```\n\n1. Spans allow you to nest other Logfire calls, and also to measure how long code takes to run. They are the fundamental building block of traces!\n2. Attempt to extract a date from the user input. If any exception is raised, the outer span will include the details of the exception.\n3. This will log for example `dob=2000-01-01 age=datetime.timedelta(days=8838)` with `debug` level.\n\n![Logfire hello world screenshot](../images/index/logfire-screenshot-hello-world-age.png)\n\n---\n\nBy instrumenting your code with traces and spans, you can see how long operations take, identify bottlenecks,\nand get a high-level view of request flows in your system — all invaluable for maintaining the performance and\nreliability of your applications."
},
{
"id": 164,
"parent": null,
"path": "integrations/use-cases/web-frameworks.md",
"level": 1,
"title": "Web Frameworks",
"content": "Here are some tips for instrumenting your web applications."
},
{
"id": 165,
"parent": 164,
"path": "integrations/use-cases/web-frameworks.md",
"level": 2,
"title": "Integrations",
"content": "If you're using one of the following libraries, check out the integration docs:\n\n- [FastAPI](../fastapi.md)\n- [Starlette](../starlette.md)\n- [Django](../django.md)\n- [Flask](../flask.md)\n\nOtherwise, check if your server uses [WSGI](../wsgi.md) or [ASGI](../asgi.md) and check the corresponding integration."
},
{
"id": 166,
"parent": 164,
"path": "integrations/use-cases/web-frameworks.md",
"level": 2,
"title": "Capturing HTTP server request and response headers",
"content": "Some methods (e.g. `logfire.instrument_fastapi()`) allow you to pass `capture_headers=True` to record all request and response headers in the spans,\nand that's all you usually need.\n\nIf you want more control, there are three environment variables to tell the OpenTelemetry instrumentation libraries to capture request and response headers:\n\n- `OTEL_INSTRUMENTATION_HTTP_CAPTURE_HEADERS_SERVER_REQUEST`\n- `OTEL_INSTRUMENTATION_HTTP_CAPTURE_HEADERS_SERVER_RESPONSE`\n- `OTEL_INSTRUMENTATION_HTTP_CAPTURE_HEADERS_SANITIZE_FIELDS`\n\nEach accepts a comma-separated list of regexes which are checked case-insensitively against header names. The first two determine which request/response headers are captured and added to span attributes. The third determines which headers will have their values redacted.\n\nFor example, to capture _all_ headers, set the following:\n\n```\nOTEL_INSTRUMENTATION_HTTP_CAPTURE_HEADERS_SERVER_REQUEST=\".*\"\nOTEL_INSTRUMENTATION_HTTP_CAPTURE_HEADERS_SERVER_RESPONSE=\".*\"\n```\n\n(this is what `capture_headers=True` does)\n\nTo specifically capture the `content-type` request header and request headers starting with `X-`:\n\n```\nOTEL_INSTRUMENTATION_HTTP_CAPTURE_HEADERS_SERVER_REQUEST=\"content-type,X-.*\"\n```\n\nTo replace the `Authorization` header value with `[REDACTED]` to avoid leaking user credentials:\n\n```\nOTEL_INSTRUMENTATION_HTTP_CAPTURE_HEADERS_SANITIZE_FIELDS=\"Authorization\"\n```\n\n(although usually it's better to rely on **Logfire**'s [scrubbing](../../guides/advanced/scrubbing.md) feature)"
},
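{
"id": 166.1,
"parent": 166,
"path": "integrations/use-cases/web-frameworks.md",
"level": 3,
"title": "Example: capturing all headers",
"content": "Here's a minimal sketch using the `capture_headers` option mentioned above with FastAPI:\n\n```py\nimport logfire\nfrom fastapi import FastAPI\n\napp = FastAPI()\n\nlogfire.configure()\n# Record all request and response headers on the spans, equivalent to setting the\n# two OTEL_INSTRUMENTATION_HTTP_CAPTURE_HEADERS_SERVER_* variables to \".*\".\nlogfire.instrument_fastapi(app, capture_headers=True)\n```"
},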
{
"id": 167,
"parent": 164,
"path": "integrations/use-cases/web-frameworks.md",
"level": 2,
"title": "Query HTTP requests duration per percentile",
"content": "It's usually interesting to visualize HTTP requests duration per percentile. Instead of having an average, which may be influenced by extreme values, percentiles allow us know the maximum duration for 50%, 90%, 95% or 99% of the requests.\n\nHere is a sample query to compute those percentiles for HTTP requests duration:\n\n```sql\nWITH dataset AS (\n SELECT\n time_bucket('%time_bucket_duration%', start_timestamp) AS x,\n (extract(ms from end_timestamp - start_timestamp)) as duration_ms\n FROM records\n WHERE attributes ? 'http.method'\n)\nSELECT\n x,\n approx_percentile_cont(duration_ms, 0.50) as percentile_50,\n approx_percentile_cont(duration_ms, 0.90) as percentile_90,\n approx_percentile_cont(duration_ms, 0.95) as percentile_95,\n approx_percentile_cont(duration_ms, 0.99) as percentile_99\nFROM dataset\nGROUP BY x\nORDER BY x\n```\n\nNotice how we filtered on records that have the `http.method` attributes set. It's a good starting point to retrieve traces that are relevant for HTTP requests, but depending on your setup, you might need to add more filters.\n\nYou can use this query in a Time Series chart in a dashboard:\n\n![Requests duration per percentile as Time Series chart](../../images/integrations/use-cases/web-frameworks/logfire-screenshot-chart-percentiles.png)\n\nSee the [DataFusion documentation](https://datafusion.apache.org/user-guide/sql/aggregate_functions_new.html#approx-percentile-cont) for more information on the `approx_percentile_cont` function."
},
{
"id": 168,
"parent": 164,
"path": "integrations/use-cases/web-frameworks.md",
"level": 2,
"title": "Excluding URLs from instrumentation",
"content": "If you want to exclude certain URLs from tracing, you can either use Logfire's instrumentation methods or OpenTelemetry configuration.\nYou can specify said URLs using a string of comma-separated regexes which will be matched against the full request URL."
},
{
"id": 169,
"parent": 168,
"path": "integrations/use-cases/web-frameworks.md",
"level": 3,
"title": "Using Logfire",
"content": "Some methods (e.g. `logfire.instrument_fastapi()`) allow you to pass the argument `excluded_urls` as a string of comma-separated regexes."
},
{
"id": 170,
"parent": 168,
"path": "integrations/use-cases/web-frameworks.md",
"level": 3,
"title": "Using OpenTelemetry",
"content": "You can set one of two environment variables to exclude URLs from tracing:\n\n- `OTEL_PYTHON_EXCLUDED_URLS`, which will also apply to all instrumentations for which excluded URLs apply).\n- `OTEL_PYTHON_FASTAPI_EXCLUDED_URLS`, for example, which will only apply to FastAPI instrumentation. You can replace `FASTAPI` with the name of the framework you're using.\n\nIf you'd like to trace all URLs except the base `/` URL, you can use the following regex for `excluded_urls`: `^https?://[^/]+/$`\n\nBreaking it down:\n\n* `^` matches the start of the string\n* `https?` matches `http` or `https`\n* `://` matches `://`\n* `[^/]+` matches one or more characters that are not `/` (this will be the host part of the URL)\n* `/` matches `/`\n* `$` matches the end of the string\n\nSo this regex will only match routes that have no path after the host.\n\nThis instrumentation might look like:\n\n```py\nfrom fastapi import FastAPI\n\nimport logfire\n\napp = FastAPI()\n\nlogfire.configure()\nlogfire.instrument_fastapi(app, excluded_urls='^https?://[^/]+/$')\n\nif __name__ == '__main__':\n import uvicorn\n\n uvicorn.run(app)\n```\n\nIf you visit http://127.0.0.1:8000/, that matches the above regex, so no spans will be sent to Logfire.\nIf you visit http://127.0.0.1:8000/hello/ (or any other endpoint that's not `/`, for that matter), a trace will be started and sent to Logfire.\n\n!!! note\n Under the hood, the `opentelemetry` library is using `re.search` (not `re.match` or `re.fullmatch`) to check for a match between the route and the `excluded_urls` regex, which is why we need to include the `^` at the start and `$` at the end of the regex.\n\n!!! note\n Specifying excluded URLs for a given instrumentation only prevents that specific instrumentation from creating spans/metrics, it doesn't suppress other instrumentation within the excluded endpoints."
},
{
"id": 171,
"parent": null,
"path": "integrations/third-party/litellm.md",
"level": 1,
"title": "LiteLLM",
"content": "LiteLLM allows you to call over 100 Large Language Models (LLMs) using the same input/output format. It also supports Logfire for logging and monitoring.\n\nTo integrate Logfire with LiteLLM:\n1. Set the `LOGFIRE_TOKEN` environment variable.\n2. Add `logfire` to the callbacks of LiteLLM.\n\nFor more details, [check the official LiteLLM documentation.](https://docs.litellm.ai/docs/observability/logfire_integration)"
},
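{
"id": 171.1,
"parent": 171,
"path": "integrations/third-party/litellm.md",
"level": 2,
"title": "Example",
"content": "The two steps above might look like the following minimal sketch. The `success_callback` attribute and the `completion()` call follow common LiteLLM usage, but check the LiteLLM documentation linked above for the exact, supported configuration:\n\n```py\nimport os\n\nimport litellm\n\n# Step 1: hypothetical placeholder token; normally set in your shell or deployment config.\nos.environ['LOGFIRE_TOKEN'] = '<your-write-token>'\n\n# Step 2: add logfire to LiteLLM's callbacks.\nlitellm.success_callback = ['logfire']\n\nresponse = litellm.completion(\n    model='gpt-3.5-turbo',\n    messages=[{'role': 'user', 'content': 'Hello!'}],\n)\nprint(response)\n```"
},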
{
"id": 172,
"parent": null,
"path": "integrations/third-party/mirascope.md",
"level": 1,
"title": "> Certainly! Here are some popular and well-regarded fantasy books and series: ...",
"content": "```\n\nThis will give you:\n\n* A span around the `recommend_books` that captures items like the prompt template, templating properties and fields, and input/output attributes\n* Human-readable display of the conversation with the agent\n* Details of the response, including the number of tokens used\n\n<figure markdown=\"span\">\n ![Logfire Mirascope Anthropic call](../../images/logfire-screenshot-mirascope-anthropic-call.png){ width=\"500\" }\n <figcaption>Mirascope Anthropic call span and Anthropic span and conversation</figcaption>\n</figure>\n\nSince Mirascope is built on top of [Pydantic][pydantic], you can use the [Pydantic plugin](../pydantic.md) to track additional logs and metrics about model validation.\n\nThis can be particularly useful when [extracting structured information][mirascope-extracting-structured-information] using LLMs:\n\n```py hl_lines=\"3 5 8 17\"\nfrom typing import Literal, Type\n\nimport logfire\nfrom mirascope.core import openai, prompt_template\nfrom mirascope.integrations.logfire import with_logfire\nfrom pydantic import BaseModel\n\nlogfire.configure()\nlogfire.instrument_pydantic()\n\n\nclass TaskDetails(BaseModel):\n description: str\n due_date: str\n priority: Literal[\"low\", \"normal\", \"high\"]\n\n\n@with_logfire()\[email protected](\"gpt-4o-mini\", response_model=TaskDetails)\n@prompt_template(\"Extract the details from the following task: {task}\")\ndef extract_task_details(task: str): ...\n\n\ntask = \"Submit quarterly report by next Friday. Task is high priority.\"\ntask_details = extract_task_details(task) # this will be logged automatically with logfire\nassert isinstance(task_details, TaskDetails)\nprint(task_details)"
},
{
"id": 173,
"parent": null,
"path": "integrations/third-party/mirascope.md",
"level": 1,
"title": "> description='Submit quarterly report' due_date='next Friday' priority='high'",
"content": "```\n\nThis will give you:\n\n* Tracking for validation of Pydantic models\n* A span around the `extract_task_details` that captures items like the prompt template, templating properties and fields, and input/output attributes\n* Human-readable display of the conversation with the agent including the function call\n* Details of the response, including the number of tokens used\n\n<figure markdown=\"span\">\n ![Logfire Mirascope Anthropic call](../../images/logfire-screenshot-mirascope-openai-extractor.png){ width=\"500\" }\n <figcaption>Mirascope OpenAI Extractor span and OpenAI span and function call</figcaption>\n</figure>\n\nFor more information on Mirascope and what you can do with it, check out their [documentation][mirascope-documentation]."
},
{
"id": 174,
"parent": null,
"path": "guides/advanced/sampling.md",
"level": 1,
"title": "Sampling",
"content": "Sampling is the practice of discarding some traces or spans in order to reduce the amount of data that needs to be\nstored and analyzed. Sampling is a trade-off between cost and completeness of data.\n\n_Head sampling_ means the decision to sample is made at the beginning of a trace. This is simpler and more common.\n\n_Tail sampling_ means the decision to sample is delayed, possibly until the end of a trace. This means there is more\ninformation available to make the decision, but this adds complexity.\n\nSampling usually happens at the trace level, meaning entire traces are kept or discarded. This way the remaining traces\nare generally complete."
},
{
"id": 175,
"parent": 174,
"path": "guides/advanced/sampling.md",
"level": 2,
"title": "Random head sampling",
"content": "Here's an example of randomly sampling 50% of traces:\n\n```python\nimport logfire\n\nlogfire.configure(sampling=logfire.SamplingOptions(head=0.5))\n\nfor x in range(10):\n with logfire.span(f'span {x}'):\n logfire.info(f'log {x}')\n```\n\nThis outputs something like:\n\n```\n11:09:29.041 span 0\n11:09:29.041 log 0\n11:09:29.041 span 1\n11:09:29.042 log 1\n11:09:29.042 span 4\n11:09:29.042 log 4\n11:09:29.042 span 5\n11:09:29.042 log 5\n11:09:29.042 span 7\n11:09:29.042 log 7\n```\n\nNote that 5 out of 10 traces are kept, and that the child log is kept if and only if the parent span is kept."
},
{
"id": 176,
"parent": 174,
"path": "guides/advanced/sampling.md",
"level": 2,
"title": "Tail sampling by level and duration",
"content": "Random head sampling often works well, but you may not want to lose any traces which indicate problems. In this case,\nyou can use tail sampling. Here's a simple example:\n\n```python\nimport time\n\nimport logfire\n\nlogfire.configure(sampling=logfire.SamplingOptions.level_or_duration())\n\nfor x in range(3):\n # None of these are logged\n with logfire.span('excluded span'):\n logfire.info(f'info {x}')\n\n # All of these are logged\n with logfire.span('included span'):\n logfire.error(f'error {x}')\n\nfor t in range(1, 10, 2):\n with logfire.span(f'span with duration {t}'):\n time.sleep(t)\n```\n\nThis outputs something like:\n\n```\n11:37:45.484 included span\n11:37:45.484 error 0\n11:37:45.485 included span\n11:37:45.485 error 1\n11:37:45.485 included span\n11:37:45.485 error 2\n11:37:49.493 span with duration 5\n11:37:54.499 span with duration 7\n11:38:01.505 span with duration 9\n```\n\n[`logfire.SamplingOptions.level_or_duration()`][logfire.sampling.SamplingOptions.level_or_duration] creates an instance\nof [`logfire.SamplingOptions`][logfire.sampling.SamplingOptions] with simple tail sampling. With no arguments,\nit means that a trace will be included if and only if it has at least one span/log that:\n\n1. has a log level greater than `info` (the default of any span), or\n2. has a duration greater than 5 seconds.\n\nThis way you won't lose information about warnings/errors or long-running operations. You can customize what to keep\nwith the `level_threshold` and `duration_threshold` arguments."
},
{
"id": 177,
"parent": 174,
"path": "guides/advanced/sampling.md",
"level": 2,
"title": "Combining head and tail sampling",
"content": "You can combine head and tail sampling. For example:\n\n```python\nimport logfire\n\nlogfire.configure(sampling=logfire.SamplingOptions.level_or_duration(head=0.1))\n```\n\nThis will only keep 10% of traces, even if they have a high log level or duration. Traces that don't meet the tail\nsampling criteria will be discarded every time."
},
{
"id": 178,
"parent": 174,
"path": "guides/advanced/sampling.md",
"level": 2,
"title": "Keeping a fraction of all traces",
"content": "To keep some traces even if they don't meet the tail sampling criteria, you can use the `background_rate` argument. For\nexample, this script:\n\n```python\nimport logfire\n\nlogfire.configure(sampling=logfire.SamplingOptions.level_or_duration(background_rate=0.3))\n\nfor x in range(10):\n logfire.info(f'info {x}')\nfor x in range(5):\n logfire.error(f'error {x}')\n```\n\nwill output something like:\n\n```\n12:24:40.293 info 2\n12:24:40.293 info 3\n12:24:40.293 info 7\n12:24:40.294 error 0\n12:24:40.294 error 1\n12:24:40.294 error 2\n12:24:40.294 error 3\n12:24:40.295 error 4\n```\n\ni.e. about 30% of the info logs and 100% of the error logs are kept.\n\n(Technical note: the trace ID is compared against the head and background rates to determine inclusion, so the\nprobabilities don't depend on the number of spans in the trace, and the rates give the probabilities directly without\nneeding any further calculations. For example, with a head sample rate of `0.6` and a background rate of `0.3`, the\nchance of a non-notable trace being included is `0.3`, not `0.6 * 0.3`.)"
},
{
"id": 179,
"parent": 174,
"path": "guides/advanced/sampling.md",
"level": 2,
"title": "Caveats of tail sampling",
"content": ""
},
{
"id": 180,
"parent": 179,
"path": "guides/advanced/sampling.md",
"level": 3,
"title": "Memory usage",
"content": "For tail sampling to work, all the spans in a trace must be kept in memory until either the trace is included by\nsampling or the trace is completed and discarded. In the above example, the spans named `included span` don't have a\nhigh enough level to be included, so they are kept in memory until the error logs cause the entire trace to be included.\nThis means that traces with a large number of spans can consume a lot of memory, whereas without tail sampling the spans\nwould be regularly exported and freed from memory without waiting for the rest of the trace.\n\nIn practice this is usually OK, because such large traces will usually exceed the duration threshold, at which point the\ntrace will be included and the spans will be exported and freed. This works because the duration is measured as the time\nbetween the start of the trace and the start/end of the most recent span, so the tail sampler can know that a span will\nexceed the duration threshold even before it's complete. For example, running this script:\n\n```python\nimport time\n\nimport logfire\n\nlogfire.configure(sampling=logfire.SamplingOptions.level_or_duration())\n\nwith logfire.span('span'):\n for x in range(1, 10):\n time.sleep(1)\n logfire.info(f'info {x}')\n```\n\nwill do nothing for the first 5 seconds, before suddenly logging all this at once:\n\n```\n12:29:43.063 span\n12:29:44.065 info 1\n12:29:45.066 info 2\n12:29:46.072 info 3\n12:29:47.076 info 4\n12:29:48.082 info 5\n```\n\nfollowed by additional logs once per second. This is despite the fact that at this stage the outer span hasn't completed\nyet and the inner logs each have 0 duration.\n\nHowever, memory usage can still be a problem in any of the following cases:\n\n- The duration threshold is set to a high value\n- Spans are produced extremely rapidly\n- Spans contain large attributes"
},
{
"id": 181,
"parent": 179,
"path": "guides/advanced/sampling.md",
"level": 3,
"title": "Distributed tracing",
"content": "Logfire's tail sampling is implemented in the SDK and only works for traces within one process. If you need tail\nsampling with distributed tracing, consider deploying\nthe [Tail Sampling Processor in the OpenTelemetry Collector](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/processor/tailsamplingprocessor/README.md).\n\nIf a trace was started on another process and its context was propagated to the process using the Logfire SDK tail\nsampling, the whole trace will be included.\n\nIf you start a trace with the Logfire SDK with tail sampling, and then propagate the context to another process, the\nspans generated by the SDK may be discarded, while the spans generated by the other process may be included, leading to\nan incomplete trace."
},
{
"id": 182,
"parent": 174,
"path": "guides/advanced/sampling.md",
"level": 2,
"title": "Custom head sampling",
"content": "If you need more control than random sampling, you can pass an [OpenTelemetry\n`Sampler`](https://opentelemetry-python.readthedocs.io/en/latest/sdk/trace.sampling.html). For example:\n\n```python\nfrom opentelemetry.sdk.trace.sampling import (\n ALWAYS_OFF,\n ALWAYS_ON,\n ParentBased,\n Sampler,\n)\n\nimport logfire\n\n\nclass MySampler(Sampler):\n def should_sample(\n self,\n parent_context,\n trace_id,\n name,\n *args,\n **kwargs,\n ):\n if name == 'exclude me':\n sampler = ALWAYS_OFF\n else:\n sampler = ALWAYS_ON\n return sampler.should_sample(\n parent_context,\n trace_id,\n name,\n *args,\n **kwargs,\n )\n\n def get_description(self):\n return 'MySampler'\n\n\nlogfire.configure(\n sampling=logfire.SamplingOptions(\n head=ParentBased(\n MySampler(),\n )\n )\n)\n\nwith logfire.span('keep me'):\n logfire.info('kept child')\n\nwith logfire.span('exclude me'):\n logfire.info('excluded child')\n```\n\nThis will output something like:\n\n```\n10:37:30.897 keep me\n10:37:30.898 kept child\n```\n\nNote that the sampler explicitly excluded only the span named `exclude me`. The reason that the `excluded child` log is\nnot included is that `MySampler` was wrapped in a `ParentBased` sampler, which excludes spans whose parents are\nexcluded. If you remove that and simply pass `head=MySampler()`, the `excluded child` log will be included, resulting in\nan incomplete trace.\n\nYou can also pass a `Sampler` to the `head` argument of `SamplingOptions.level_or_duration` to combine tail sampling\nwith custom head sampling."
},
{
"id": 183,
"parent": 174,
"path": "guides/advanced/sampling.md",
"level": 2,
"title": "Custom tail sampling",
"content": "If you want tail sampling with more control than `level_or_duration`, you can pass a function to [\n`tail`][logfire.sampling.SamplingOptions.tail] which will accept an instance of [\n`TailSamplingSpanInfo`][logfire.sampling.TailSamplingSpanInfo] and return a float between 0 and 1 representing the\nprobability that the trace should be included. For example:\n\n```python\nimport logfire\n\n\ndef get_tail_sample_rate(span_info):\n if span_info.duration >= 1:\n return 0.5 # (1)!\n\n if span_info.level > 'warn': # (2)!\n return 0.3 # (3)!\n\n return 0.1 # (4)!\n\n\nlogfire.configure(\n sampling=logfire.SamplingOptions(\n head=0.5, # (5)!\n tail=get_tail_sample_rate,\n ),\n)\n```\n\n1. Keep 50% of traces with duration >= 1 second\n2. `span_info.level` is a [special object][logfire.sampling.SpanLevel] that can be compared to log level names\n3. Keep 30% of traces with a warning or error and with duration < 1 second\n4. Keep 10% of other traces\n5. Discard 50% of traces at the beginning to reduce the overhead of generating spans. This is optional, but improves\n performance, and we know that `get_tail_sample_rate` will always return at most 0.5 so the other 50% of traces will\n be discarded anyway. The probabilities are not independent - this will not discard traces that would otherwise have\n been kept by tail sampling."
},
{
"id": 184,
"parent": null,
"path": "guides/advanced/scrubbing.md",
"level": 1,
"title": "Scrubbing sensitive data",
"content": "The **Logfire** SDK scans for and redacts potentially sensitive data from logs and spans before exporting them."
},
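{
"id": null,
"parent": 184,
"path": "guides/advanced/scrubbing.md",
"level": 2,
"title": "Example",
"content": "As a quick illustration (the exact redaction text may vary between SDK versions), logging an attribute whose name matches one of the default patterns, such as `password`, results in the value being replaced before it leaves your process:\n\n```python\nimport logfire\n\nlogfire.configure()\n\n# 'user_password' matches the default 'password' pattern, so its value is\n# redacted before export, shown as something like [Scrubbed due to 'password'].\nlogfire.info('Created user', user_password='hunter2')\n```"
},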
{
"id": 185,
"parent": 184,
"path": "guides/advanced/scrubbing.md",
"level": 2,
"title": "Disabling scrubbing",
"content": "To disable scrubbing entirely, set [`scrubbing`][logfire.configure(scrubbing)] to `False`:\n\n```python\nimport logfire\n\nlogfire.configure(scrubbing=False)\n```"
},
{
"id": 186,
"parent": 184,
"path": "guides/advanced/scrubbing.md",
"level": 2,
"title": "Scrubbing more with custom patterns",
"content": "By default, the SDK looks for some sensitive regular expressions. To add your own patterns, set [`extra_patterns`][logfire.ScrubbingOptions.extra_patterns] to a list of regex strings:\n\n```python\nimport logfire\n\nlogfire.configure(scrubbing=logfire.ScrubbingOptions(extra_patterns=['my_pattern']))\n\nlogfire.info('Hello', data={\n 'key_matching_my_pattern': 'This string will be redacted because its key matches',\n 'other_key': 'This string will also be redacted because it matches MY_PATTERN case-insensitively',\n 'password': 'This will be redacted because custom patterns are combined with the default patterns',\n})\n```\n\nHere are the default scrubbing patterns:\n\n`'password'`, `'passwd'`, `'mysql_pwd'`, `'secret'`, `'auth'`, `'credential'`, `'private[._ -]?key'`, `'api[._ -]?key'`,\n`'session'`, `'cookie'`, `'csrf'`, `'xsrf'`, `'jwt'`, `'ssn'`, `'social[._ -]?security'`, `'credit[._ -]?card'`"
},
{
"id": 187,
"parent": 184,
"path": "guides/advanced/scrubbing.md",
"level": 2,
"title": "Scrubbing less with a callback",
"content": "On the other hand, if the scrubbing is to aggressive, you can pass a function to [`callback`][logfire.ScrubbingOptions.callback] to prevent certain data from being redacted.\n\nThe function will be called for each potential match found by the scrubber. If it returns `None`, the value is redacted. Otherwise, the returned value replaces the matched value. The function accepts a single argument of type [`logfire.ScrubMatch`][logfire.ScrubMatch].\n\nHere's an example:\n\n```python\nimport logfire\n\ndef scrubbing_callback(match: logfire.ScrubMatch):\n # `my_safe_value` often contains the string 'password' but it's not actually sensitive.\n if match.path == ('attributes', 'my_safe_value') and match.pattern_match.group(0) == 'password':\n # Return the original value to prevent redaction.\n return match.value\n\nlogfire.configure(scrubbing=logfire.ScrubbingOptions(callback=scrubbing_callback))\n```"
},
{
"id": 188,
"parent": 184,
"path": "guides/advanced/scrubbing.md",
"level": 2,
"title": "Security tips",
"content": ""
},
{
"id": 189,
"parent": 188,
"path": "guides/advanced/scrubbing.md",
"level": 3,
"title": "Use message templates",
"content": "The full span/log message is not scrubbed, only the fields within. For example, this:\n\n```python\nlogfire.info('User details: {user}', user=User(id=123, password='secret'))\n```\n\n...may log something like:\n\n```\nUser details: [Scrubbed due to 'password']\n```\n\n...but this:\n\n```python\nuser = User(id=123, password='secret')\nlogfire.info('User details: ' + str(user))\n```\n\nwill log:\n\n```\nUser details: User(id=123, password='secret')\n```\n\nThis is necessary so that safe messages such as 'Password is correct' are not redacted completely.\n\nUsing f-strings (e.g. `logfire.info(f'User details: {user}')`) *is* safe if `inspect_arguments` is enabled (the default in Python 3.11+) and working correctly.\n[See here](../onboarding-checklist/add-manual-tracing.md#f-strings) for more information.\n\nIn short, don't format the message yourself. This is also a good practice in general for [other reasons](../onboarding-checklist/add-manual-tracing.md#messages-and-span-names)."
},
{
"id": 190,
"parent": 188,
"path": "guides/advanced/scrubbing.md",
"level": 3,
"title": "Keep sensitive data out of URLs",
"content": "The attribute `\"http.url\"` which is recorded by OpenTelemetry instrumentation libraries is considered safe so that URLs like `\"http://example.com/users/123/authenticate\"` are not redacted.\n\nAs a general rule, not just for Logfire, assume that URLs (including query parameters) will be logged, so sensitive data should be put in the request body or headers instead."
},
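{
"id": null,
"parent": 190,
"path": "guides/advanced/scrubbing.md",
"level": 4,
"title": "Example",
"content": "A small sketch of this advice (the endpoint and token here are hypothetical):\n\n```python\nimport httpx\n\n# Risky: the token becomes part of the URL, and the \"http.url\" attribute\n# is treated as safe, so it is not scrubbed:\n# httpx.get('https://api.example.com/users', params={'api_token': 'abc123'})\n\n# Better: send the secret in a header (or the request body) instead,\n# so it never appears in the recorded URL.\nhttpx.get('https://api.example.com/users', headers={'Authorization': 'Bearer abc123'})\n```"
},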
{
"id": 191,
"parent": 188,
"path": "guides/advanced/scrubbing.md",
"level": 3,
"title": "Use parameterized database queries",
"content": "The `\"db.statement\"` attribute which is recorded by OpenTelemetry database instrumentation libraries is considered safe so that SQL queries like `\"SELECT secret_value FROM table WHERE ...\"` are not redacted.\n\nUse parameterized queries (e.g. prepared statements) so that sensitive data is not interpolated directly into the query string, even if\nyou use an interpolation method that's safe from SQL injection."
},
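{
"id": null,
"parent": 191,
"path": "guides/advanced/scrubbing.md",
"level": 4,
"title": "Example",
"content": "A minimal sketch of the difference, using the standard library's `sqlite3` driver (the table and values are made up; most drivers and ORMs offer an equivalent placeholder syntax):\n\n```python\nimport sqlite3\n\nconn = sqlite3.connect(':memory:')\nconn.execute('CREATE TABLE users (name TEXT, ssn TEXT)')\n\n# Risky: interpolating values puts the sensitive data into the statement text,\n# which is what ends up in the recorded \"db.statement\" attribute:\n#     conn.execute(\"INSERT INTO users (name, ssn) VALUES ('jane', '123-45-6789')\")\n\n# Better: placeholders keep the values out of the statement text.\nconn.execute('INSERT INTO users (name, ssn) VALUES (?, ?)', ('jane', '123-45-6789'))\n```"
},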
{
"id": 192,
"parent": null,
"path": "guides/advanced/generators.md",
"level": 1,
"title": "Generators",
"content": "The body of a `with logfire.span` statement or a function decorated with `@logfire.instrument` should not contain the `yield` keyword, except in functions decorated with `@contextlib.contextmanager` or `@contextlib.asynccontextmanager`. To see the problem, consider this example:\n\n```python\nimport logfire\n\nlogfire.configure()\n\n\ndef generate_items():\n with logfire.span('Generating items'):\n for i in range(3):\n yield i"
},
{
"id": 193,
"parent": null,
"path": "guides/advanced/generators.md",
"level": 1,
"title": "Or equivalently:",
"content": "@logfire.instrument('Generating items')\ndef generate_items():\n for i in range(3):\n yield i\n\n\ndef main():\n items = generate_items()\n for item in items:\n logfire.info(f'Got item {item}')\n # break\n logfire.info('After processing items')\n\n\nmain()\n```\n\nIf you run this, everything seems fine:\n\n![Generating items going fine](../../images/guide/generator-fine.png)\n\nThe `Got item` log lines are inside the `Generating items` span, and the `After processing items` log is outside it, as expected.\n\nBut if you uncomment the `break` line, you'll see that the `After processing items` log line is also inside the `Generating items` span:\n\n![Generating items going wrong](../../images/guide/generator-break.png)\n\nThis is because the `generate_items` generator is left suspended at the `yield` statement, and the `with logfire.span('Generating items'):` block is still active, so the `After processing items` log sees that span as its parent. This is confusing, and can happen anytime that iteration over a generator is interrupted, including by exceptions.\n\nIf you run the same code with async generators:\n\n```python\nimport asyncio\n\nimport logfire\n\nlogfire.configure()\n\n\nasync def generate_items():\n with logfire.span('Generating items'):\n for i in range(3):\n yield i\n\n\nasync def main():\n items = generate_items()\n async for item in items:\n logfire.info(f'Got item {item}')\n break\n logfire.info('After processing items')\n\n\nasyncio.run(main())\n```\n\nYou'll see the same problem, as well as an exception like this in the logs:\n\n```\nFailed to detach context\nTraceback (most recent call last):\n File \"async_generator_example.py\", line 11, in generate_items\n yield i\nasyncio.exceptions.CancelledError\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"opentelemetry/context/__init__.py\", line 154, in detach\n _RUNTIME_CONTEXT.detach(token)\n File \"opentelemetry/context/contextvars_context.py\", line 50, in detach\n self._current_context.reset(token)\nValueError: <Token var=<ContextVar name='current_context' default={} at 0x10afa3f60> at 0x10de034c0> was created in a different Context\n```\n\nThis is why generator functions are not traced by [`logfire.install_auto_tracing()`][logfire.Logfire.install_auto_tracing]."
},
{
"id": 194,
"parent": 193,
"path": "guides/advanced/generators.md",
"level": 2,
"title": "What you can do",
"content": ""
},
{
"id": 195,
"parent": 194,
"path": "guides/advanced/generators.md",
"level": 3,
"title": "Move the span outside the generator",
"content": "If you're looping over a generator, wrapping the loop in a span is safe, e.g:\n\n```python\nimport logfire\n\nlogfire.configure()\n\n\ndef generate_items():\n for i in range(3):\n yield i\n\n\ndef main():\n items = generate_items()\n with logfire.span('Generating items'):\n for item in items:\n logfire.info(f'Got item {item}')\n break\n logfire.info('After processing items')\n\n\nmain()\n```\n\nThis is fine because the `with logfire.span` block doesn't contain the `yield` directly in its body."
},
{
"id": 196,
"parent": 194,
"path": "guides/advanced/generators.md",
"level": 3,
"title": "Use a generator as a context manager",
"content": "`yield` is OK when used to implement a context manager, e.g:\n\n```python\nfrom contextlib import contextmanager\n\nimport logfire\n\nlogfire.configure()\n\n\n@contextmanager\ndef my_context():\n with logfire.span('Context manager span'):\n yield\n\n\ntry:\n with my_context():\n logfire.info('Inside context manager')\n raise ValueError()\nexcept Exception:\n logfire.exception('Error!')\nlogfire.info('After context manager')\n```\n\nThis is fine because even if there's an exception inside the context manager, the `with` statement will ensure that the `my_context` generator is promptly closed, and the span will be closed with it. This is in contrast to using a generator as an iterator, where the loop can be interrupted more easily."
},
{
"id": 197,
"parent": 194,
"path": "guides/advanced/generators.md",
"level": 3,
"title": "Create a context manager that closes the generator",
"content": "`with closing(generator)` can be used to ensure that the generator and thus the span within is closed even if the loop is interrupted, e.g:\n\n```python\nfrom contextlib import closing\n\nimport logfire\n\nlogfire.configure()\n\n\ndef generate_items():\n with logfire.span('Generating items'):\n for i in range(3):\n yield i\n\n\ndef main():\n with closing(generate_items()) as items:\n for item in items:\n logfire.info(f'Got item {item}')\n break\n logfire.info('After processing items')\n\n\nmain()\n```\n\nHowever this means that users of `generate_items` must always remember to use `with closing`. To ensure that they have no choice but to do so, you can make `generate_items` a context manager itself:\n\n```python\nfrom contextlib import closing, contextmanager\n\nimport logfire\n\nlogfire.configure()\n\n\n@contextmanager\ndef generate_items():\n def generator():\n with logfire.span('Generating items'):\n for i in range(3):\n yield i\n\n with closing(generator()) as items:\n yield items\n\n\ndef main():\n with generate_items() as items:\n for item in items:\n logfire.info(f'Got item {item}')\n break\n logfire.info('After processing items')\n\n\nmain()\n```"
},
{
"id": 198,
"parent": null,
"path": "guides/advanced/backfill.md",
"level": 1,
"title": "Backfilling data",
"content": "When Logfire fails to send a log to the server, it will dump data to the disk to avoid data loss.\n\nLogfire supports bulk loading data, either to import data from another system or to load data that\nwas dumped to disk.\n\nTo backfill data, you can use the `logfire backfill` command:\n\n```bash\n$ logfire backfill --help\n```\n\nBy default `logfire backfill` will read from the default fallback file so if you are just trying to upload data after a network failure you can just run:\n\n```bash\n$ logfire backfill\n```"
},
{
"id": 199,
"parent": 198,
"path": "guides/advanced/backfill.md",
"level": 2,
"title": "Bulk loading data",
"content": "This same mechanism can be used to bulk load data, for example if you are importing it from another system.\n\nFirst create a dump file:\n\n```py\nfrom datetime import datetime\n\nfrom logfire.backfill import Log, PrepareBackfill, StartSpan\n\nwith PrepareBackfill('logfire_spans123.bin') as backfill:\n span = StartSpan(\n start_timestamp=datetime(2023, 1, 1, 0, 0, 0),\n span_name='session',\n msg_template='session {user_id=} {path=}',\n service_name='docs.pydantic.dev',\n log_attributes={'user_id': '123', 'path': '/test'},\n )\n child = StartSpan(\n start_timestamp=datetime(2023, 1, 1, 0, 0, 1),\n span_name='query',\n msg_template='ran db query',\n service_name='docs.pydantic.dev',\n log_attributes={'query': 'SELECT * FROM users'},\n parent=span,\n )\n backfill.write(\n Log(\n timestamp=datetime(2023, 1, 1, 0, 0, 2),\n msg_template='GET {path=}',\n level='info',\n service_name='docs.pydantic.dev',\n attributes={'path': '/test'},\n )\n )\n backfill.write(child.end(end_timestamp=datetime(2023, 1, 1, 0, 0, 3)))\n backfill.write(span.end(end_timestamp=datetime(2023, 1, 1, 0, 0, 4)))\n```\n\nThis will create a `logfire_spans123.bin` file with the data.\n\nThen use the `backfill` command line tool to load it:\n\n```bash\n$ logfire backfill --file logfire_spans123.bin\n```"
},
{
"id": 200,
"parent": null,
"path": "guides/advanced/creating-write-tokens.md",
"level": 2,
"title": "Setting `send_to_logfire='if-token-present'`",
"content": "You may want to not send data to logfire during local development, but still have the option to send it in production without changing your code.\nTo do this we provide the parameter `send_to_logfire='if-token-present'` in the `logfire.configure()` function.\nIf you set it to `'if-token-present'`, logfire will only send data to logfire if a write token is present in the environment variable `LOGFIRE_TOKEN` or there is a token saved locally.\nIf you run tests in CI no data will be sent.\n\nYou can also set the environment variable `LOGFIRE_SEND_TO_LOGFIRE` to configure this option.\nFor example, you can set it to `LOGFIRE_SEND_TO_LOGFIRE=true` in your deployed application and `LOGFIRE_SEND_TO_LOGFIRE=false` in your tests setup."
},
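{
"id": null,
"parent": 200,
"path": "guides/advanced/creating-write-tokens.md",
"level": 3,
"title": "Example",
"content": "A minimal sketch of the pattern described above: the same `configure()` call works locally, where no token is set and nothing is sent, and in production, where `LOGFIRE_TOKEN` is set in the environment:\n\n```python\nimport logfire\n\n# Only sends data when a write token is available,\n# e.g. via the LOGFIRE_TOKEN environment variable.\nlogfire.configure(send_to_logfire='if-token-present')\n\nlogfire.info('Only sent if a token is present')\n```"
},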
{
"id": 201,
"parent": null,
"path": "guides/advanced/testing.md",
"level": 1,
"title": "Testing with Logfire",
"content": "!!! tip \"Running under Pytest... 🧪\"\n When running your test suite under Pytest, we set [`send_to_logfire=False`][logfire.configure(send_to_logfire)] by default.\n\nYou may want to check that your API is logging the data you expect, that spans correctly track the work they wrap, etc.\nThis can often be difficult, including with Python's built in logging and OpenTelemetry's SDKs.\n\nLogfire makes it very easy to test the emitted logs and spans using the utilities in the\n[`logfire.testing`][logfire.testing] module.\nThis is what Logfire uses internally to test itself as well."
},
{
"id": 202,
"parent": 201,
"path": "guides/advanced/testing.md",
"level": 2,
"title": "[`capfire`][logfire.testing.capfire] fixture",
"content": "This has two attributes [`exporter`][logfire.testing.CaptureLogfire.exporter] and\n[`metrics_reader`][logfire.testing.CaptureLogfire.metrics_reader]."
},
{
"id": 203,
"parent": 202,
"path": "guides/advanced/testing.md",
"level": 3,
"title": "[`exporter`][logfire.testing.CaptureLogfire.exporter]",
"content": "This is an instance of [`TestExporter`][logfire.testing.TestExporter] and is an OpenTelemetry SDK compatible\nspan exporter that keeps exported spans in memory.\n\nThe [`exporter.exported_spans_as_dict()`][logfire.testing.TestExporter.exported_spans_as_dict] method lets you get\na plain dict representation of the exported spans that you can easily assert against and get nice diffs from.\nThis method does some data massaging to make the output more readable and deterministic, e.g. replacing line\nnumbers with `123` and file paths with just the filename.\n\n```py title=\"test.py\"\nimport pytest\n\nimport logfire\nfrom logfire.testing import CaptureLogfire\n\n\ndef test_observability(capfire: CaptureLogfire) -> None:\n with pytest.raises(Exception):\n with logfire.span('a span!'):\n logfire.info('a log!')\n raise Exception('an exception!')\n\n exporter = capfire.exporter\n\n # insert_assert(exporter.exported_spans_as_dict()) (1)\n assert exporter.exported_spans_as_dict() == [\n {\n 'name': 'a log!',\n 'context': {'trace_id': 1, 'span_id': 3, 'is_remote': False},\n 'parent': {'trace_id': 1, 'span_id': 1, 'is_remote': False},\n 'start_time': 2000000000,\n 'end_time': 2000000000,\n 'attributes': {\n 'logfire.span_type': 'log',\n 'logfire.level_num': 9,\n 'logfire.msg_template': 'a log!',\n 'logfire.msg': 'a log!',\n 'code.filepath': 'test.py',\n 'code.lineno': 123,\n 'code.function': 'test_observability',\n },\n },\n {\n 'name': 'a span!',\n 'context': {'trace_id': 1, 'span_id': 1, 'is_remote': False},\n 'parent': None,\n 'start_time': 1000000000,\n 'end_time': 4000000000,\n 'attributes': {\n 'code.filepath': 'test.py',\n 'code.lineno': 123,\n 'code.function': 'test_observability',\n 'logfire.msg_template': 'a span!',\n 'logfire.span_type': 'span',\n 'logfire.msg': 'a span!',\n },\n 'events': [\n {\n 'name': 'exception',\n 'timestamp': 3000000000,\n 'attributes': {\n 'exception.type': 'Exception',\n 'exception.message': 'an exception!',\n 'exception.stacktrace': 'Exception: an exception!',\n 'exception.escaped': 'True',\n },\n }\n ],\n },\n ]\n```\n\n1. `insert_assert` is a utility function provided by [devtools](https://github.com/samuelcolvin/python-devtools).\n\n [See more about it below](#insert_assert).\n\nYou can access exported spans by `exporter.exported_spans`.\n\n```py\nimport logfire\nfrom logfire.testing import CaptureLogfire\n\n\ndef test_exported_spans(capfire: CaptureLogfire) -> None:\n with logfire.span('a span!'):\n logfire.info('a log!')\n\n exporter = capfire.exporter\n\n expected_span_names = ['a span! (pending)', 'a log!', 'a span!']\n span_names = [span.name for span in exporter.exported_spans]\n\n assert span_names == expected_span_names\n```\n\nYou can call [`exporter.clear()`][logfire.testing.TestExporter.clear] to reset the captured spans in a test.\n\n```py\nimport logfire\nfrom logfire.testing import CaptureLogfire\n\n\ndef test_reset_exported_spans(capfire: CaptureLogfire) -> None:\n exporter = capfire.exporter\n\n assert len(exporter.exported_spans) == 0\n\n logfire.info('First log!')\n assert len(exporter.exported_spans) == 1\n assert exporter.exported_spans[0].name == 'First log!'\n\n logfire.info('Second log!')\n assert len(exporter.exported_spans) == 2\n assert exporter.exported_spans[1].name == 'Second log!'\n\n exporter.clear()\n assert len(exporter.exported_spans) == 0\n\n logfire.info('Third log!')\n assert len(exporter.exported_spans) == 1\n assert exporter.exported_spans[0].name == 'Third log!'\n```"
},
{
"id": 204,
"parent": 202,
"path": "guides/advanced/testing.md",
"level": 3,
"title": "[`metrics_reader`][logfire.testing.CaptureLogfire.metrics_reader]",
"content": "This is an instance of [`InMemoryMetricReader`][in-memory-metric-reader] which reads metrics into memory.\n\n```py\nimport json\nfrom typing import cast\n\nfrom opentelemetry.sdk.metrics.export import MetricsData\n\nfrom logfire.testing import CaptureLogfire\n\n\ndef test_system_metrics_collection(capfire: CaptureLogfire) -> None:\n exported_metrics = json.loads(cast(MetricsData, capfire.metrics_reader.get_metrics_data()).to_json()) # type: ignore\n\n metrics_collected = {\n metric['name']\n for resource_metric in exported_metrics['resource_metrics']\n for scope_metric in resource_metric['scope_metrics']\n for metric in scope_metric['metrics']\n }\n\n # collected metrics vary by platform, etc.\n # assert that we at least collected _some_ of the metrics we expect\n assert metrics_collected.issuperset(\n {\n 'system.swap.usage',\n 'system.disk.operations',\n 'system.memory.usage',\n 'system.cpu.utilization',\n }\n ), metrics_collected\n```\n\nLet's walk through the utilities we used."
},
{
"id": 205,
"parent": 202,
"path": "guides/advanced/testing.md",
"level": 3,
"title": "[`IncrementalIdGenerator`][logfire.testing.IncrementalIdGenerator]",
"content": "One of the most complicated things about comparing log output to expected results are sources of non-determinism.\nFor OpenTelemetry spans the two biggest ones are the span & trace IDs and timestamps.\n\nThe [`IncrementalIdGenerator`][logfire.testing.IncrementalIdGenerator] generates sequentially increasing span\nand trace IDs so that test outputs are always the same."
},
{
"id": 206,
"parent": 202,
"path": "guides/advanced/testing.md",
"level": 3,
"title": "[`TimeGenerator`][logfire.testing.TimeGenerator]",
"content": "This class generates nanosecond timestamps that increment by 1s every time a timestamp is generated."
},
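{
"id": null,
"parent": 206,
"path": "guides/advanced/testing.md",
"level": 4,
"title": "Example",
"content": "A quick sketch of that behaviour. The instance is used as a callable, as when it's passed to `ns_timestamp_generator`; the exact starting value shown here is an assumption based on the span timestamps in the example above:\n\n```python\nfrom logfire.testing import TimeGenerator\n\nt = TimeGenerator()\nprint(t())  # e.g. 1000000000, i.e. 1 second in nanoseconds\nprint(t())  # e.g. 2000000000, one second later\n```"
},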
{
"id": 207,
"parent": 202,
"path": "guides/advanced/testing.md",
"level": 3,
"title": "[`logfire.configure`][logfire.configure]",
"content": "This is the same configuration function you'd use for production and where everything comes together.\n\nNote that we specifically configure:\n\n- `send_to_logfire=False` because we don't want to hit the actual production service\n- `id_generator=IncrementalIdGenerator()` to make the span IDs deterministic\n- `ns_timestamp_generator=TimeGenerator()` to make the timestamps deterministic\n- `processors=[SimpleSpanProcessor(exporter)]` to use our `TestExporter` to capture spans. We use `SimpleSpanProcessor` to export spans with no delay."
},
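{
"id": null,
"parent": 202,
"path": "guides/advanced/testing.md",
"level": 3,
"title": "Example test configuration",
"content": "The [`capfire`][logfire.testing.capfire] fixture applies this configuration for you. As a rough sketch, using the argument names listed above (check the [`logfire.configure`][logfire.configure] reference for the exact signature in your SDK version), a manual equivalent looks something like this:\n\n```python\nfrom opentelemetry.sdk.trace.export import SimpleSpanProcessor\n\nimport logfire\nfrom logfire.testing import IncrementalIdGenerator, TestExporter, TimeGenerator\n\nexporter = TestExporter()\n\nlogfire.configure(\n    send_to_logfire=False,  # don't hit the production service from tests\n    id_generator=IncrementalIdGenerator(),  # deterministic span/trace IDs\n    ns_timestamp_generator=TimeGenerator(),  # deterministic timestamps\n    processors=[SimpleSpanProcessor(exporter)],  # export spans immediately into memory\n)\n```"
},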
{
"id": 208,
"parent": 202,
"path": "guides/advanced/testing.md",
"level": 3,
"title": "`insert_assert`",
"content": "This is a utility function provided by [devtools](https://github.com/samuelcolvin/python-devtools) that will\nautomatically insert the output of the code it is called with into the test file when run via pytest.\nThat is, if you comment that line out you'll see that the `assert capfire.exported_spans_as_dict() == [...]`\nline is replaced with the current output of `capfire.exported_spans_as_dict()`, which should\nbe exactly the same given that our test is deterministic!"
},
{
"id": 209,
"parent": null,
"path": "guides/advanced/query-api.md",
"level": 2,
"title": "How to Create a Read Token",
"content": "If you've set up Logfire following the [getting started guide](../../index.md), you can generate read tokens from\nthe Logfire web interface, for use accessing the Logfire Query API.\n\nTo create a read token:\n\n1. Open the **Logfire** web interface at [logfire.pydantic.dev](https://logfire.pydantic.dev).\n2. Select your project from the **Projects** section on the left-hand side of the page.\n3. Click on the ⚙️ **Settings** tab in the top right corner of the page.\n4. Select the **Read tokens** tab from the left-hand menu.\n5. Click on the **Create read token** button.\n\nAfter creating the read token, you'll see a dialog with the token value.\n**Copy this value and store it securely, it will not be shown again.**"
},
{
"id": 210,
"parent": null,
"path": "guides/advanced/query-api.md",
"level": 2,
"title": "Using the Read Clients",
"content": "While you can [make direct HTTP requests](#making-direct-http-requests) to Logfire's querying API,\nwe provide Python clients to simplify the process of interacting with the API from Python.\n\nLogfire provides both synchronous and asynchronous clients.\nThese clients are currently experimental, meaning we might introduce breaking changes in the future.\nTo use these clients, you can import them from the `experimental` namespace:\n\n```python\nfrom logfire.experimental.query_client import AsyncLogfireQueryClient, LogfireQueryClient\n```\n\n!!! note \"Additional required dependencies\"\n\n To use the query clients provided in `logfire.experimental.query_client`, you need to install `httpx`.\n\n If you want to retrieve Arrow-format responses, you will also need to install `pyarrow`."
},
{
"id": 211,
"parent": 210,
"path": "guides/advanced/query-api.md",
"level": 3,
"title": "Client Usage Examples",
"content": "The `AsyncLogfireQueryClient` allows for asynchronous interaction with the Logfire API.\nIf blocking I/O is acceptable and you want to avoid the complexities of asynchronous programming,\nyou can use the plain `LogfireQueryClient`.\n\nHere's an example of how to use these clients:\n\n=== \"Async\"\n\n ```python\n from io import StringIO\n\n import polars as pl\n from logfire.experimental.query_client import AsyncLogfireQueryClient\n\n\n async def main():\n query = \"\"\"\n SELECT start_timestamp\n FROM records\n LIMIT 1\n \"\"\"\n\n async with AsyncLogfireQueryClient(read_token='<your_read_token>') as client:\n # Load data as JSON, in column-oriented format\n json_cols = await client.query_json(sql=query)\n print(json_cols)\n\n # Load data as JSON, in row-oriented format\n json_rows = await client.query_json_rows(sql=query)\n print(json_rows)\n\n # Retrieve data in arrow format, and load into a polars DataFrame\n # Note that JSON columns such as `attributes` will be returned as\n # JSON-serialized strings\n df_from_arrow = pl.from_arrow(await client.query_arrow(sql=query))\n print(df_from_arrow)\n\n # Retrieve data in CSV format, and load into a polars DataFrame\n # Note that JSON columns such as `attributes` will be returned as\n # JSON-serialized strings\n df_from_csv = pl.read_csv(StringIO(await client.query_csv(sql=query)))\n print(df_from_csv)\n\n\n if __name__ == '__main__':\n import asyncio\n\n asyncio.run(main())\n ```\n\n=== \"Sync\"\n\n ```python\n from io import StringIO\n\n import polars as pl\n from logfire.experimental.query_client import LogfireQueryClient\n\n\n def main():\n query = \"\"\"\n SELECT start_timestamp\n FROM records\n LIMIT 1\n \"\"\"\n\n with LogfireQueryClient(read_token='<your_read_token>') as client:\n # Load data as JSON, in column-oriented format\n json_cols = client.query_json(sql=query)\n print(json_cols)\n\n # Load data as JSON, in row-oriented format\n json_rows = client.query_json_rows(sql=query)\n print(json_rows)\n\n # Retrieve data in arrow format, and load into a polars DataFrame\n # Note that JSON columns such as `attributes` will be returned as\n # JSON-serialized strings\n df_from_arrow = pl.from_arrow(client.query_arrow(sql=query))\n print(df_from_arrow)\n\n # Retrieve data in CSV format, and load into a polars DataFrame\n # Note that JSON columns such as `attributes` will be returned as\n # JSON-serialized strings\n df_from_csv = pl.read_csv(StringIO(client.query_csv(sql=query)))\n print(df_from_csv)\n\n\n if __name__ == '__main__':\n main()\n ```"
},
{
"id": 212,
"parent": null,
"path": "guides/advanced/query-api.md",
"level": 2,
"title": "Making Direct HTTP Requests",
"content": "If you prefer not to use the provided clients, you can make direct HTTP requests to the Logfire API using any HTTP\nclient library, such as `requests` in Python. Below are the general steps and an example to guide you:"
},
{
"id": 213,
"parent": 212,
"path": "guides/advanced/query-api.md",
"level": 3,
"title": "General Steps to Make a Direct HTTP Request",
"content": "1. **Set the Endpoint URL**: The base URL for the Logfire API is `https://logfire-api.pydantic.dev`.\n\n2. **Add Authentication**: Include the read token in your request headers to authenticate.\n The header key should be `Authorization` with the value `Bearer <your_read_token_here>`.\n\n3. **Define the SQL Query**: Write the SQL query you want to execute.\n\n4. **Send the Request**: Use an HTTP GET request to the `/v1/query` endpoint with the SQL query as a query parameter.\n\n**Note:** You can provide additional query parameters to control the behavior of your requests.\nYou can also use the `Accept` header to specify the desired format for the response data (JSON, Arrow, or CSV)."
},
{
"id": 214,
"parent": 212,
"path": "guides/advanced/query-api.md",
"level": 3,
"title": "Example: Using Python `requests` Library",
"content": "```python\nimport requests"
},
{
"id": 215,
"parent": null,
"path": "guides/advanced/query-api.md",
"level": 1,
"title": "Define the base URL and your read token",
"content": "base_url = 'https://logfire-api.pydantic.dev'\nread_token = '<your_read_token_here>'"
},
{
"id": 216,
"parent": null,
"path": "guides/advanced/query-api.md",
"level": 1,
"title": "Set the headers for authentication",
"content": "headers = {'Authorization': f'Bearer {read_token}'}"
},
{
"id": 217,
"parent": null,
"path": "guides/advanced/query-api.md",
"level": 1,
"title": "Define your SQL query",
"content": "query = \"\"\"\nSELECT start_timestamp\nFROM records\nLIMIT 1\n\"\"\""
},
{
"id": 218,
"parent": null,
"path": "guides/advanced/query-api.md",
"level": 1,
"title": "Prepare the query parameters for the GET request",
"content": "params = {\n 'sql': query\n}"
},
{
"id": 219,
"parent": null,
"path": "guides/advanced/query-api.md",
"level": 1,
"title": "Send the GET request to the Logfire API",
"content": "response = requests.get(f'{base_url}/v1/query', params=params, headers=headers)"
},
{
"id": 220,
"parent": null,
"path": "guides/advanced/query-api.md",
"level": 1,
"title": "Check the response status",
"content": "if response.status_code == 200:\n print(\"Query Successful!\")\n print(response.json())\nelse:\n print(f\"Failed to execute query. Status code: {response.status_code}\")\n print(response.text)\n```"
},
{
"id": 221,
"parent": 220,
"path": "guides/advanced/query-api.md",
"level": 3,
"title": "Additional Configuration",
"content": "The Logfire API supports various response formats and query parameters to give you flexibility in how you retrieve your data:\n\n- **Response Format**: Use the `Accept` header to specify the response format. Supported values include:\n - `application/json`: Returns the data in JSON format. By default, this will be column-oriented unless specified otherwise with the `json_rows` parameter.\n - `application/vnd.apache.arrow.stream`: Returns the data in Apache Arrow format, suitable for high-performance data processing.\n - `text/csv`: Returns the data in CSV format, which is easy to use with many data tools.\n - If no `Accept` header is provided, the default response format is JSON.\n- **Query Parameters**:\n - **`sql`**: The SQL query to execute. This is the only required query parameter.\n - **`min_timestamp`**: An optional ISO-format timestamp to filter records with `start_timestamp` greater than this value for the `records` table or `recorded_timestamp` greater than this value for the `metrics` table. The same filtering can also be done manually within the query itself.\n - **`max_timestamp`**: Similar to `min_timestamp`, but serves as an upper bound for filtering `start_timestamp` in the `records` table or `recorded_timestamp` in the `metrics` table. The same filtering can also be done manually within the query itself.\n - **`limit`**: An optional parameter to limit the number of rows returned by the query. If not specified, **the default limit is 500**. The maximum allowed value is 10,000.\n - **`row_oriented`**: Only affects JSON responses. If set to `true`, the JSON response will be row-oriented; otherwise, it will be column-oriented.\n\nAll query parameters besides `sql` are optional and can be used in any combination to tailor the API response to your needs."
},
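{
"id": null,
"parent": 221,
"path": "guides/advanced/query-api.md",
"level": 4,
"title": "Example",
"content": "Building on the `requests` example above, here is a sketch of how the `Accept` header and the optional query parameters might be combined (the token is a placeholder and the parameter values are purely illustrative):\n\n```python\nfrom datetime import datetime, timedelta, timezone\n\nimport requests\n\nbase_url = 'https://logfire-api.pydantic.dev'\nheaders = {\n    'Authorization': 'Bearer <your_read_token_here>',\n    'Accept': 'text/csv',  # ask for CSV instead of the default JSON\n}\nparams = {\n    'sql': 'SELECT start_timestamp, message FROM records',\n    # ISO-format lower bound on start_timestamp: only the last hour\n    'min_timestamp': (datetime.now(timezone.utc) - timedelta(hours=1)).isoformat(),\n    'limit': 100,  # cap the number of rows returned\n}\n\nresponse = requests.get(f'{base_url}/v1/query', params=params, headers=headers)\nresponse.raise_for_status()\nprint(response.text)  # the CSV payload\n```"
},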
{
"id": 222,
"parent": 220,
"path": "guides/advanced/query-api.md",
"level": 3,
"title": "Important Notes",
"content": "- **Experimental Feature**: The query clients are under the `experimental` namespace, indicating that the API may change in future versions.\n- **Environment Configuration**: Remember to securely store your read token in environment variables or a secure vault for production use.\n\nWith read tokens, you have the flexibility to integrate Logfire into your workflow, whether using Python scripts, data analysis tools, or other systems."
},
{
"id": 223,
"parent": null,
"path": "guides/advanced/link-to-code-source.md",
"level": 2,
"title": "Usage",
"content": "Here's an example:\n\n```python\nimport logfire\n\nlogfire.configure(\n code_source=logfire.CodeSource(\n repository='https://github.com/pydantic/logfire', #(1)!\n revision='<hash of commit used on release>', #(2)!\n root_path='.', #(3)!\n )\n)\n```\n\n1. The URL of the repository e.g. `https://github.com/pydantic/logfire`.\n2. The specific branch, tag, or commit hash to link to e.g. `main`.\n3. The path to the root of the repository. If your code is in a subdirectory, you can specify it here.\n\nYou can learn more in our [`logfire.CodeSource`][logfire.CodeSource] API reference."
},
{
"id": 224,
"parent": null,
"path": "guides/advanced/link-to-code-source.md",
"level": 2,
"title": "Alternative Configuration",
"content": "For other OpenTelemetry SDKs, you can configure these settings using resource attributes, e.g. by setting the\n[`OTEL_RESOURCE_ATTRIBUTES`][otel-resource-attributes] environment variable:\n\n```\nOTEL_RESOURCE_ATTRIBUTES=vcs.repository.url.full=https://github.com/pydantic/platform\nOTEL_RESOURCE_ATTRIBUTES=${OTEL_RESOURCE_ATTRIBUTES},vcs.repository.ref.revision=main\nOTEL_RESOURCE_ATTRIBUTES=${OTEL_RESOURCE_ATTRIBUTES},vcs.root.path=.\n```\n\n[help]: ../../help.md"
},
{
"id": 225,
"parent": null,
"path": "guides/advanced/alternative-backends.md",
"level": 1,
"title": "Alternative backends",
"content": "**Logfire** uses the OpenTelemetry standard. This means that you can configure the SDK to export to any backend that supports OpenTelemetry.\n\nThe easiest way is to set the `OTEL_EXPORTER_OTLP_ENDPOINT` environment variable to a URL that points to your backend.\nThis will be used as a base, and the SDK will append `/v1/traces` and `/v1/metrics` to the URL to send traces and metrics, respectively.\n\nAlternatively, you can use the `OTEL_EXPORTER_OTLP_TRACES_ENDPOINT` and `OTEL_EXPORTER_OTLP_METRICS_ENDPOINT` environment variables to specify the URLs for traces and metrics separately. These URLs should include the full path, including `/v1/traces` and `/v1/metrics`.\n\n!!! note\n The data will be encoded using **Protobuf** (not JSON) and sent over **HTTP** (not gRPC).\n\n Make sure that your backend supports this! :nerd_face:"
},
{
"id": 226,
"parent": 225,
"path": "guides/advanced/alternative-backends.md",
"level": 2,
"title": "Example with Jaeger",
"content": "Run this minimal command to start a [Jaeger](https://www.jaegertracing.io/) container:\n\n```\ndocker run --rm \\\n -p 16686:16686 \\\n -p 4318:4318 \\\n jaegertracing/all-in-one:latest\n```\n\nThen run this code:\n\n```python\nimport os\n\nimport logfire"
},
{
"id": 227,
"parent": null,
"path": "guides/advanced/alternative-backends.md",
"level": 1,
"title": "Jaeger only supports traces, not metrics, so only set the traces endpoint",
"content": ""
},
{
"id": 228,
"parent": null,
"path": "guides/advanced/alternative-backends.md",
"level": 1,
"title": "to avoid errors about failing to export metrics.",
"content": ""
},
{
"id": 229,
"parent": null,
"path": "guides/advanced/alternative-backends.md",
"level": 1,
"title": "Use port 4318 for HTTP, not 4317 for gRPC.",
"content": "traces_endpoint = 'http://localhost:4318/v1/traces'\nos.environ['OTEL_EXPORTER_OTLP_TRACES_ENDPOINT'] = traces_endpoint\n\nlogfire.configure(\n # Setting a service name is good practice in general, but especially\n # important for Jaeger, otherwise spans will be labeled as 'unknown_service'\n service_name='my_logfire_service',\n\n # Sending to Logfire is on by default regardless of the OTEL env vars.\n # Keep this line here if you don't want to send to both Jaeger and Logfire.\n send_to_logfire=False,\n)\n\nwith logfire.span('This is a span'):\n logfire.info('Logfire logs are also actually just spans!')\n```\n\nFinally open [http://localhost:16686/search?service=my_logfire_service](http://localhost:16686/search?service=my_logfire_service) to see the traces in the Jaeger UI."
},
{
"id": 230,
"parent": 229,
"path": "guides/advanced/alternative-backends.md",
"level": 2,
"title": "Other environment variables",
"content": "If `OTEL_TRACES_EXPORTER` and/or `OTEL_METRICS_EXPORTER` are set to any non-empty value other than `otlp`, then **Logfire** will ignore the corresponding `OTEL_EXPORTER_OTLP_*` variables. This is because **Logfire** doesn't support other exporters, so we assume that the environment variables are intended to be used by something else. Normally you don't need to worry about this, and you don't need to set these variables at all unless you want to prevent **Logfire** from setting up these exporters.\n\nSee the [OpenTelemetry documentation](https://opentelemetry-python.readthedocs.io/en/latest/exporter/otlp/otlp.html) for information about the other headers you can set, such as `OTEL_EXPORTER_OTLP_HEADERS`."
},
{
"id": 231,
"parent": null,
"path": "guides/advanced/alternative-clients.md",
"level": 1,
"title": "Alternative clients",
"content": "**Logfire** uses the OpenTelemetry standard. This means that you can configure standard OpenTelemetry SDKs in many languages to export to the **Logfire** backend. Depending on your SDK, you may need to set only these [environment variables](https://opentelemetry.io/docs/languages/sdk-configuration/otlp-exporter/):\n\n- `OTEL_EXPORTER_OTLP_ENDPOINT=https://logfire-api.pydantic.dev` for both traces and metrics, or:\n - `OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=https://logfire-api.pydantic.dev/v1/traces` for just traces\n - `OTEL_EXPORTER_OTLP_METRICS_ENDPOINT=https://logfire-api.pydantic.dev/v1/metrics` for just metrics\n- `OTEL_EXPORTER_OTLP_HEADERS='Authorization=your-write-token'` - see [Creating Write Tokens](./creating-write-tokens.md) to obtain a write token and replace `your-write-token` with it.\n- `OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf` to export in Protobuf format over HTTP (not gRPC). The **Logfire** backend supports both Protobuf and JSON, but only over HTTP for now. Some SDKs (such as Python) already use this value as the default so setting this isn't required, but other SDKs use `grpc` as the default."
},
{
"id": 232,
"parent": 231,
"path": "guides/advanced/alternative-clients.md",
"level": 2,
"title": "Example with Python",
"content": "First, run these commands:\n\n```sh\npip install opentelemetry-exporter-otlp\nexport OTEL_EXPORTER_OTLP_ENDPOINT=https://logfire-api.pydantic.dev\nexport OTEL_EXPORTER_OTLP_HEADERS='Authorization=your-write-token'\n```\n\nThen run this script with `python`:\n\n```python\nfrom opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter\nfrom opentelemetry.sdk.trace import TracerProvider\nfrom opentelemetry.sdk.trace.export import BatchSpanProcessor\n\nexporter = OTLPSpanExporter()\nspan_processor = BatchSpanProcessor(exporter)\ntracer_provider = TracerProvider()\ntracer_provider.add_span_processor(span_processor)\ntracer = tracer_provider.get_tracer('my_tracer')\n\ntracer.start_span('Hello World').end()\n```\n\nThen navigate to the Live view for your project in your browser. You should see a trace with a single span named `Hello World`.\n\nTo configure the exporter without environment variables:\n\n```python\nexporter = OTLPSpanExporter(\n endpoint='https://logfire-api.pydantic.dev/v1/traces',\n headers={'Authorization': 'your-write-token'},\n)\n```"
},
{
"id": 233,
"parent": 231,
"path": "guides/advanced/alternative-clients.md",
"level": 2,
"title": "Example with Rust",
"content": "First, set up a new Cargo project:\n\n```sh\ncargo new --bin otel-example && cd otel-example\nexport OTEL_EXPORTER_OTLP_ENDPOINT=https://logfire-api.pydantic.dev\nexport OTEL_EXPORTER_OTLP_HEADERS='Authorization=your-write-token'\n```\n\nUpdate the `Cargo.toml` and `main.rs` files with the following contents:\n\n```toml title=\"Cargo.toml\"\n[package]\nname = \"otel-example\"\nversion = \"0.1.0\"\nedition = \"2021\"\n\n[dependencies]\nopentelemetry = { version = \"*\", default-features = false, features = [\"trace\"] }"
},
{
"id": 234,
"parent": null,
"path": "guides/advanced/alternative-clients.md",
"level": 1,
"title": "Note: `reqwest-rustls` feature is necessary else you'll have a cryptic failure to export;",
"content": ""
},
{
"id": 235,
"parent": null,
"path": "guides/advanced/alternative-clients.md",
"level": 1,
"title": "see https://github.com/open-telemetry/opentelemetry-rust/issues/2169",
"content": "opentelemetry-otlp = { version = \"*\", default-features = false, features = [\"trace\", \"http-proto\", \"reqwest-blocking-client\", \"reqwest-rustls\"] }\n```\n\n```rust title=\"src/main.rs\"\nuse opentelemetry::{\n global::ObjectSafeSpan,\n trace::{Tracer, TracerProvider},\n};\n\nfn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {\n let otlp_exporter = opentelemetry_otlp::new_exporter()\n .http()\n .with_protocol(opentelemetry_otlp::Protocol::HttpBinary)\n // If you don't want to export environment variables, you can also configure\n // programmatically like so:\n //\n // (You'll need to add `use opentelemetry_otlp::WithExportConfig;` to the top of the\n // file to access the `.with_endpoint` method.)\n //\n // .with_endpoint(\"https://logfire-api.pydantic.dev/v1/traces\")\n // .with_headers({\n // let mut headers = std::collections::HashMap::new();\n // headers.insert(\n // \"Authorization\".into(),\n // \"your-write-token\".into(),\n // );\n // headers\n // })\n ;\n\n let tracer_provider = opentelemetry_otlp::new_pipeline()\n .tracing()\n .with_exporter(otlp_exporter)\n .install_simple()?;\n let tracer = tracer_provider.tracer(\"my_tracer\");\n\n tracer.span_builder(\"Hello World\").start(&tracer).end();\n\n Ok(())\n}\n\n```\n\nFinally, use `cargo run` to execute."
},
{
"id": 236,
"parent": null,
"path": "guides/web-ui/explore.md",
"level": 2,
"title": "Querying Traces",
"content": "The primary table you will query is the `records` table, which contains all the spans/logs from your traced requests.\n\nTo query the records, simply start your query with `SELECT ... FROM records` and add a `WHERE` clause to filter the\nspans you want.\n\nFor example, here is a query that returns the message, start_timestamp, duration, and attributes for all spans that\nhave exceptions:\n\n```sql\nSELECT\n message,\n start_timestamp,\n EXTRACT(EPOCH FROM (end_timestamp - start_timestamp)) * 1000 AS duration_ms,\n attributes\nFROM records\nWHERE is_exception\n```\n\nYou can run more complex queries as well, using subqueries, CTEs, joins, aggregations, custom expressions,\nand any other standard SQL."
},
{
"id": 237,
"parent": null,
"path": "guides/web-ui/explore.md",
"level": 2,
"title": "Records Schema",
"content": "The schema of the `records` table is:\n\n```sql\nCREATE TABLE records AS (\n start_timestamp timestamp with time zone,\n created_at timestamp with time zone,\n trace_id text,\n span_id text,\n parent_span_id text,\n kind span_kind,\n end_timestamp timestamp with time zone,\n level smallint,\n span_name text,\n message text,\n attributes_json_schema text,\n attributes jsonb,\n tags text[],\n otel_links jsonb,\n otel_events jsonb,\n is_exception boolean,\n otel_status_code status_code,\n otel_status_message text,\n otel_scope_name text,\n otel_scope_version text,\n otel_scope_attributes jsonb,\n service_namespace text,\n service_name text,\n service_version text,\n service_instance_id text,\n process_pid integer\n)\n```"
},
{
"id": 238,
"parent": null,
"path": "guides/web-ui/explore.md",
"level": 2,
"title": "Cross-linking with Live View",
"content": "After running a query, you can take any `trace_id` and/or `span_id` and use it to look up data shown as traces\nin the Live View.\n\nSimply go to the Live View and enter a query like:\n\n```\ntrace_id = '7bda3ddf6e6d4a0c8386093209eb0bfc' -- replace with a real trace_id of your own\n```\n\nThis will show all the spans with that specific trace ID."
},
{
"id": 239,
"parent": null,
"path": "guides/web-ui/explore.md",
"level": 2,
"title": "Metrics Schema",
"content": "In addition to traces, you can also query your metrics data using the `metrics` table.\n\nThe schema of the `metrics` table is:\n\n```sql\nCREATE TABLE metrics AS (\n recorded_timestamp timestamp with time zone,\n metric_name text,\n metric_type text,\n unit text,\n start_timestamp timestamp with time zone,\n aggregation_temporality public.aggregation_temporality,\n is_monotonic boolean,\n metric_description text,\n scalar_value double precision,\n histogram_min double precision,\n histogram_max double precision,\n histogram_count integer,\n histogram_sum double precision,\n exp_histogram_scale integer,\n exp_histogram_zero_count integer,\n exp_histogram_zero_threshold double precision,\n exp_histogram_positive_bucket_counts integer[],\n exp_histogram_positive_bucket_counts_offset integer,\n exp_histogram_negative_bucket_counts integer[],\n exp_histogram_negative_bucket_counts_offset integer,\n histogram_bucket_counts integer[],\n histogram_explicit_bounds double precision[],\n attributes jsonb,\n tags text[],\n otel_scope_name text,\n otel_scope_version text,\n otel_scope_attributes jsonb,\n service_namespace text,\n service_name text,\n service_version text,\n service_instance_id text,\n process_pid integer\n)\n```\n\nYou can query metrics using standard SQL, just like traces. For example:\n\n```sql\nSELECT *\nFROM metrics\nWHERE metric_name = 'system.cpu.time'\n AND recorded_timestamp > now() - interval '1 hour'\n```"
},
{
"id": 240,
"parent": null,
"path": "guides/web-ui/explore.md",
"level": 2,
"title": "Executing Queries",
"content": "To execute a query, type or paste it into the query editor and click the \"Run Query\" button.\n\n![Logfire explore screen](../../images/guide/browser-explore-run-query.png)\n\nYou can modify the time range of the query using the dropdown next to the button. There is also a \"Limit\" dropdown that\ncontrols the maximum number of result rows returned.\n\nThe Explore page provides a flexible interface to query your traces and metrics using standard SQL.\n\nHappy querying! :rocket:"
},
{
"id": 241,
"parent": null,
"path": "guides/web-ui/explore.md",
"level": 2,
"title": "SQL Reference",
"content": "The primary reference for the SQL syntax can be found in the [SQL Language Reference for Apache DataFusion](https://datafusion.apache.org/user-guide/sql/index.html). For specific information on json related functions, see the [DataFusion JSON functions](https://github.com/datafusion-contrib/datafusion-functions-json)."
},
{
"id": 242,
"parent": null,
"path": "guides/web-ui/live.md",
"level": 1,
"title": "Live View",
"content": "The live view is the main view of Logfire, where you can see traces in real-time.\n\nThe live view is useful (as the name suggests) for watching what's going on within your application in real-time, but it can also be used to explore historical data."
},
{
"id": 243,
"parent": 242,
"path": "guides/web-ui/live.md",
"level": 2,
"title": "The Live View SQL Box",
"content": "The live view has a query box at the top. Here you can enter the `WHERE` clause of a SQL query.\n\n![Logfire Live View SQL query box](../../images/guide/live-view-sql-box.png)\n\nNote: you can run more complex queries on the [explore screen](explore.md)\n\n\nThe schema for the records table is:\n\n```sql\nCREATE TABLE records AS (\n start_timestamp timestamp with time zone,\n created_at timestamp with time zone,\n trace_id text,\n span_id text,\n parent_span_id text,\n kind span_kind,\n end_timestamp timestamp with time zone,\n level smallint,\n span_name text,\n message text,\n attributes_json_schema text,\n attributes jsonb,\n tags text[],\n otel_links jsonb,\n otel_events jsonb,\n is_exception boolean,\n otel_status_code status_code,\n otel_status_message text,\n otel_scope_name text,\n otel_scope_version text,\n otel_scope_attributes jsonb,\n service_namespace text,\n service_name text,\n service_version text,\n service_instance_id text,\n process_pid integer\n)\n```\n\nSome basic examples to get started:\n\n- To view your warnings and errors type: `level > 'info'`\n- To see just exceptions type: `is_exception`\n- To filter by service name (which you can find on the detail panel of any given trace): `service_name = 'crud-api'`"
},
{
"id": 244,
"parent": 242,
"path": "guides/web-ui/live.md",
"level": 2,
"title": "Details panel closed",
"content": "![Logfire OpenAI Image Generation](../../images/logfire-screenshot-live-view.png)\n\nThis is what you'll see when you come to the live view of a project with some data.\n\n1. **Organization and project labels:** In this example, the organization is `samuelcolvin`, and the project is `logfire-demo-spider`. You can click the organization name to go to the organization overview page; the project name is a link to this page.\n\n2. **Project pages:** These are links to the various project-specific pages, including the Live, [Dashboards](./dashboards.md), [Alerts](./alerts.md), [Explore](./explore.md), and Settings pages.\n\n3. **Feedback button:** Click the feedback button to provide us feedback.\n\n4. **Light/Dark mode toggle:** Cycles between light, dark, and system — because everyone seems to have an opinion on this :smile:\n\n5. **Link to the current view:** Clicking this copies a link to the page you are on, with the same query etc.\n\n6. **Organization selection panel:** Opens a drawer with links to the different organizations you are a member of, and also has links to the Terms and Conditions, Support, Documentation, and a Log Out button.\n\n7. **Query text input:** Enter a SQL query here to find spans that match the query. The query should be in the form of a Postgres-compatible `WHERE` clause on the records table (e.g. to find warnings, enter `level >= level_num('error')`). See the [Explore docs](./explore.md) for more detail about the schema here.\n\n8. **Search button:** You can click here to run the query after you've entered it, or just press cmd+enter (or ctrl+enter on windows/linux).\n\n9. **Extra query menu:** Here you can find quick selections for adding filters on various fields to your query. There is also a link to a natural language query entry option, which uses an LLM to generate a query based on a natural language description of what you are looking for.\n\n10. **Toggle timeline position button:** Click here to switch the timeline (see the next item for more info) between vertical and horizontal orientation.\n\n11. **Timeline:** This shows a histogram of the counts of spans matching your query over time. The blue-highlighted section corresponds to the time range currently visible in the scrollable list of traces below. You can click at points on this line to move to viewing logs from that point in time.\n\n12. **Traces scroll settings:** This menu contains some settings related to what is displayed in the traces scroll view.\n\n13. **Status label:** This should show \"Connected\" if your query is successful and you are receiving live data. If you have a syntax error in your query or run into other issues, you should see details about the problem here.\n\n14. **Service, scope, and tags visibility filters:** Here you can control whether certain spans are displayed based on their service, scope, or tags.\n\n15. **Level visibility filter:** Here you can control which log levels are displayed. By default, 'debug' and 'trace' level spans are hidden from view, but you can change the value here to display them, or you can toggle the visibility of spans of other levels as well.\n\n16. **Time window selection:** Here, you can toggle between \"Live tail\", which shows live logs as they are received, and a historical time range of varying sizes. 
When a specific time range is selected, the timeline from item 11 will match that range.\n\nBelow item 16, we have the \"Traces Scroll View\", which shows traces matching your current query and visibility filters.\n\n[//]: # (note we rely on the sane_lists markdown extension to \"start\" a list from 17!)\n\n17. **Start timestamp label:** This timestamp is the `start_timestamp` of the span. Hover this to see its age in human-readable format.\n\n18. **Service label:** This pill contains the `service_name` of the span. This is the name of the service that produced the span. You can hover to see version info.\n\n19. **Message:** Here you can see the `message` of this span (which is actually the root span of its trace). You can also click here to see more details. Note that the smaller diamond means that this span has no children\n\n20. **A collapsed trace:** The larger diamond to the left of the span message, with a `+` in it, indicates that this span has child spans, and can be expanded to view them by clicking on the `+`-diamond.\n\n21. **Scope label:** This pill contains the `otel_scope_name` of the span. This is the name of the OpenTelemetry scope that produced the span. Generally, OpenTelemetry scopes correspond to instrumentations, so this generally gives you a sense of what library's instrumentation produced the span. This will be logfire when producing spans using the logfire APIs, but will be the name of the OpenTelemetry instrumentation package if the span was produced by another instrumentation. You can hover to see version info.\n\n22. **Trace duration line:** When the root span of a trace is collapsed, the line on the right will be thicker and rounded, and start at the far left. When this is the case, the length of the line represents the log-scale duration of the trace. See item 25 for contrast.\n\n23. **Trace duration label:** Shows the duration of the trace.\n\n24. **An expanded trace:** Here we can see what it looks like if you expand a trace down a couple levels. You can click any row within the trace to see more details about the span.\n\n25. **Span duration line:** When a trace is expanded, the shape of the lines change, representing a transition to a linear scale where you can see each span's start and end timestamp within the overall trace."
},
{
"id": 245,
"parent": 242,
"path": "guides/web-ui/live.md",
"level": 2,
"title": "Details panel open",
"content": "![Logfire OpenAI Image Generation](../../images/logfire-screenshot-details-panel.png)\n\nWhen you click on a span in the Traces Scroll, it will open the details panel, which you can see here.\n\n1. **Timeline tooltip:** Here you can see the tooltip shown when you hover the timeline. It shows the count of records in the hovered histogram bar, the duration of the bar, the time range that the bar represents, and the exact timestamp you are hovering (and at which you'll retrieve records when you click on the timeline)\n\n2. **Level icon:** This icon represents the highest level of this span and any of its descendants.\n\n3. **Span message:** Here you can see whether the item is a Span or Log, and its message.\n\n4. **Details panel orientation toggle, and other buttons:** The second button copies a link to view this specific span. The X closes the details panel for this span.\n\n5. **Exception warning:** This exception indicator is present because an exception bubbled through this span. You can see more details in the Exception Traceback details tab.\n\n6. **Pinned span attributes:** This section contains some details about the span. The link icons on the \"Trace ID\" and \"Span ID\" pills can be clicked to take you to a view of the trace or span, respectively.\n\n7. **Details tabs:** These tabs include more detailed information about the span. Some tabs, such as the Exception Details tab, will only be present for spans with data relevant to that tab.\n\n8. **Arguments panel:** If a span was created with one of the logfire span/logging APIs, and some arguments were present, those arguments will be shown here, displayed as a Python dictionary.\n\n9. **Code details panel:** When attributes about the source line are present on a span, this panel will be present, and that information displayed here.\n\n10. **Full span attributes panel:** When any attributes are present, this panel will show the full list of OpenTelemetry attributes on the span. This panel is collapsed by default, but you can click on its name to show it."
},
{
"id": 246,
"parent": 242,
"path": "guides/web-ui/live.md",
"level": 2,
"title": "Live view variant",
"content": "![Logfire OpenAI Image Generation](../../images/logfire-screenshot-details-panel-variant.png)\n\n1. This is what the timeline looks like in vertical orientation. You can toggle this orientation at any time.\n2. This is what the details panel looks like in horizontal orientation. You can toggle this orientation whenever the details panel is open."
},
{
"id": 247,
"parent": null,
"path": "guides/web-ui/alerts.md",
"level": 2,
"title": "Create an alert",
"content": "Let's see in practice how to create an alert.\n\n1. Go to the **Alerts** tab in the left sidebar.\n2. Click the **Create alert** button.\n\nThen you'll see the following form:\n\n![Create alert form](../../images/guide/browser-alerts-create.png)\n\nThe **Query** field is where you define the conditions that will trigger the alert.\nFor example, you can set up an alert to notify you when the number of errors in your logs exceeds a certain threshold.\n\nOn our example, we're going to set up an alert that will trigger when an exception occurs in the `api` service\nand the route is `/members/{user_id}`.\n\n```sql\nSELECT * FROM records -- (1)!\nWHERE\n is_exception and -- (2)!\n service_name = 'api' and -- (3)!\n attributes->>'http.route' = '/members/{user_id}' -- (4)!\n```\n\n1. The `SELECT * FROM records` statement is the base query that will be executed. The **records** table contains the spans and logs data.\n\n You can use this table to filter the data you want to analyze.\n\n2. The `is_exception` field is a boolean field that indicates whether the record is an exception.\n3. The `service_name` field contains the name of the service that generated the record.\n4. The `attributes` field is a [JSONB] field that contains additional information about the record.\n5. In this case, we're using the `http.route` attribute to filter the records by route.\n\nThe **Time window** field allows you to specify the time range over which the query will be executed.\n\nThe **Webhook URL** field is where you can specify a URL to which the alert will send a POST request when triggered.\nFor now, **Logfire** alerts only send the requests in [Slack format].\n\n??? tip \"Get a Slack webhook URL\"\n To get a Slack webhook URL, follow the instructions in the [Slack documentation](https://api.slack.com/messaging/webhooks).\n\nAfter filling in the form, click the **Create alert** button. And... Alert created! :tada:"
},
{
"id": 248,
"parent": null,
"path": "guides/web-ui/alerts.md",
"level": 2,
"title": "Alert History",
"content": "After creating an alert, you'll be redirected to the alerts' list. There you can see the alerts you've created and their status.\n\nIf the query was not matched in the last time window, you'll see a 0 in the **Matches** column, and a green circle next to the alert name.\n\n![Alerts list](../../images/guide/browser-alerts-no-error.png)\n\nOtherwise, you'll see the number of matches and a red circle.\n\n![Alerts list with error](../../images/guide/browser-alerts-error.png)\n\nIn this case, you'll also receive a notification in the Webhook URL you've set up."
},
{
"id": 249,
"parent": null,
"path": "guides/web-ui/alerts.md",
"level": 2,
"title": "Edit an alert",
"content": "You can configure an alert by clicking on the **Configuration** button on the right side of the alert.\n\n![Edit alert](../../images/guide/browser-alerts-edit.png)\n\nYou can update the alert, or delete it by clicking the **Delete** button. If instead of deleting the alert, you want to disable it, you can click on the **Active** switch."
},
{
"id": 250,
"parent": null,
"path": "guides/web-ui/dashboards.md",
"level": 1,
"title": "Dashboards",
"content": "This guide illustrates how to create and customize dashboards within the **Logfire UI**, thereby enabling effective\nmonitoring of services and system metrics.\n\n![Logfire Dashboard](../../images/guide/browser-dashboard.png)"
},
{
"id": 251,
"parent": 250,
"path": "guides/web-ui/dashboards.md",
"level": 2,
"title": "Get started",
"content": "**Logfire** provides several pre-built dashboards as a convenient starting point."
},
{
"id": 252,
"parent": 250,
"path": "guides/web-ui/dashboards.md",
"level": 2,
"title": "Web Service Dashboard",
"content": "This dashboard offers a high-level view of your web services' well-being. It likely displays key metrics like:\n\n* **Requests:** Total number of requests received by your web service.\n* **Exceptions:** Number of exceptions encountered during request processing.\n* **Trend Routes:** Visualize the most frequently accessed routes or APIs over time.\n* **Percent of 2XX Requests:** Percentage of requests that resulted in successful responses (status codes in the 200 range).\n* **Percent of 5XX Requests:** Percentage of requests that resulted in server errors (status codes in the 500 range).\n* **Log Type Ratio**: Breakdown of the different log types generated by your web service (e.g., info, warning, error)."
},
{
"id": 253,
"parent": 250,
"path": "guides/web-ui/dashboards.md",
"level": 2,
"title": "Basic System Metrics",
"content": "This dashboard shows essential system resource utilization metrics. It comes in two variants:\n\n- **Basic System Metrics (Logfire):** Uses the data exported by [`logfire.instrument_system_metrics()`](../../integrations/system-metrics.md).\n- **Basic System Metrics (OpenTelemetry):** Uses data exported by any OpenTelemetry-based instrumentation following the standard semantic conventions.\n\nBoth variants include the following metrics:\n\n* **Number of Processes:** Total number of running processes on the system.\n* **System CPU usage %:** Percentage of total available processing power utilized by the whole system, i.e. the average across all CPU cores.\n* **Process CPU usage %:** CPU used by a single process, where e.g. using 2 CPU cores to full capacity would result in a value of 200%.\n* **Memory Usage %:** Percentage of memory currently in use by the system.\n* **Swap Usage %:** Percentage of swap space currently in use by the system."
},
{
"id": 254,
"parent": 250,
"path": "guides/web-ui/dashboards.md",
"level": 2,
"title": "Custom Dashboards",
"content": "To create a custom dashboard, follow these steps:\n\n1. From the dashboard page, click on the \"Start From Scratch\" button.\n3. Once your dashboard is created, you can start rename it and adding charts and blocks to it.\n4. To add a chart, click on the \"Add Chart\" button.\n5. Choose the type of block you want to add.\n6. Configure the block by providing the necessary data and settings (check the next section).\n7. Repeat steps 4-6 to add more blocks to your dashboard.\n8. To rearrange the blocks, enable the \"Edit Mode\" in the dashboard setting and simply drag and drop them to the desired position.\n\nFeel free to experiment with different block types and configurations to create a dashboard that suits your monitoring needs."
},
{
"id": 255,
"parent": 250,
"path": "guides/web-ui/dashboards.md",
"level": 2,
"title": "Choosing and Configuring Dashboard's Charts",
"content": "When creating a custom dashboard or modifying them in Logfire, you can choose from different chart types to visualize your data.\n\n![Logfire Dashboard chart types](../../images/guide/browser-dashboard-chart-types.png)"
},
{
"id": 256,
"parent": 255,
"path": "guides/web-ui/dashboards.md",
"level": 3,
"title": "Define Your Query",
"content": "In the second step of creating a chart, you need to input your SQL query. The Logfire dashboard's charts grab data based on this query. You can see the live result of the query on the table behind your query input. You can use the full power of PostgreSQL to retrieve your records.\n\n![Logfire Dashboard chart query](../../images/guide/browser-dashboard-chart-sql-query.png)"
},
{
"id": 257,
"parent": 255,
"path": "guides/web-ui/dashboards.md",
"level": 3,
"title": "Chart Preview and configuration",
"content": "Based on your need and query, you need to configure the chart to visualize and display your data:"
},
{
"id": 258,
"parent": 257,
"path": "guides/web-ui/dashboards.md",
"level": 4,
"title": "Time Series Chart",
"content": "A time series chart displays data points over a specific time period."
},
{
"id": 259,
"parent": 257,
"path": "guides/web-ui/dashboards.md",
"level": 4,
"title": "Pie Chart",
"content": "A pie chart represents data as slices of a circle, where each slice represents a category or value."
},
{
"id": 260,
"parent": 257,
"path": "guides/web-ui/dashboards.md",
"level": 4,
"title": "Table",
"content": "A table displays data in rows and columns, allowing you to present tabular data."
},
{
"id": 261,
"parent": 257,
"path": "guides/web-ui/dashboards.md",
"level": 4,
"title": "Values",
"content": "A values chart displays a single value or multiple values as a card or panel."
},
{
"id": 262,
"parent": 257,
"path": "guides/web-ui/dashboards.md",
"level": 4,
"title": "Categories",
"content": "A categories chart represents data as categories or groups, allowing you to compare different groups."
},
{
"id": 263,
"parent": 250,
"path": "guides/web-ui/dashboards.md",
"level": 2,
"title": "Tips and Tricks",
"content": ""
},
{
"id": 264,
"parent": 263,
"path": "guides/web-ui/dashboards.md",
"level": 3,
"title": "Enhanced Viewing with Synchronized Tooltips and Zoom",
"content": "For dashboards containing multiple time-series charts, consider enabling \"Sync Tooltip and Zoom.\" This powerful feature provides a more cohesive viewing experience:\n\n**Hover in Sync:** When you hover over a data point on any time-series chart, corresponding data points on all synchronized charts will be highlighted simultaneously. This allows you to easily compare values across different metrics at the same time point.\n**Zooming Together:** Zooming in or out on a single chart will automatically apply the same zoom level to all synchronized charts. This helps you maintain focus on a specific time range across all metrics, ensuring a consistent analysis.\nActivating Sync\n\nTo enable synchronized tooltips and zoom for your dashboard:\n\n* Open your dashboard in Logfire.\n* Click on Dashboard Setting\n* activate \"Sync Tooltip and Zoom\" option."
},
{
"id": 265,
"parent": 263,
"path": "guides/web-ui/dashboards.md",
"level": 3,
"title": "Customizing Your Charts",
"content": "**Logfire** empowers you to personalize the appearance and behavior of your charts to better suit your needs.\nHere's an overview of the available options:\n\n* **Rename Chart:** Assign a clear and descriptive title to your chart for improved readability.\n* **Edit Chart**: Change the chart query to better represent your data.\n* **Duplicate Chart:** Quickly create a copy of an existing chart for further modifications, saving you time and effort.\n* **Delete Chart:** Remove a chart from your dashboard if it's no longer relevant."
},
{
"id": 266,
"parent": null,
"path": "guides/onboarding-checklist/add-auto-tracing.md",
"level": 1,
"title": "Auto-tracing",
"content": "The [`logfire.install_auto_tracing()`][logfire.Logfire.install_auto_tracing] method\nwill trace all function calls in the specified modules.\n\nThis works by changing how those modules are imported,\nso the function MUST be called before importing the modules you want to trace.\n\nFor example, suppose all your code lives in the `app` package, e.g. `app.main`, `app.server`, `app.db`, etc.\nInstead of starting your application with `python app/main.py`,\nyou could create another file outside of the `app` package, e.g:\n\n```py title=\"main.py\"\nimport logfire\n\nlogfire.configure()\nlogfire.install_auto_tracing(modules=['app'], min_duration=0.01)\n\nfrom app.main import main\n\nmain()\n```\n\n!!! note\n Generator functions will not be traced for reasons explained [here](../advanced/generators.md)."
},
{
"id": 267,
"parent": 266,
"path": "guides/onboarding-checklist/add-auto-tracing.md",
"level": 2,
"title": "Only tracing functions above a minimum duration",
"content": "In most situations you don't want to trace every single function call in your application.\nThe most convenient way to exclude functions from tracing is with the [`min_duration`][logfire.Logfire.install_auto_tracing(min_duration)] argument. For example, the code snippet above will only trace functions that take longer than 0.01 seconds.\nThis means you automatically get observability for the heavier parts of your application without too much overhead or data. Note that there are some caveats:\n\n- A function will only start being traced after it runs longer than `min_duration` once. This means that:\n - If it runs faster than `min_duration` the first few times, you won't get data about those first calls.\n - The first time that it runs longer than `min_duration`, you also won't get data about that call.\n- After a function runs longer than `min_duration` once, it will be traced every time it's called afterwards, regardless of how long it takes.\n- Measuring the duration of a function call still adds a small overhead. For tiny functions that are called very frequently, it's best to still use the `@no_auto_trace` decorator to avoid any overhead. Auto-tracing with `min_duration` will still work for other undecorated functions.\n\nIf you want to trace all function calls from the beginning, set `min_duration=0`."
},
{
"id": 268,
"parent": 266,
"path": "guides/onboarding-checklist/add-auto-tracing.md",
"level": 2,
"title": "Filtering modules to trace",
"content": "The `modules` argument can be a list of module names.\nAny submodule within a given module will also be traced, e.g. `app.main` and `app.server`.\nOther modules whose names start with the same prefix will not be traced, e.g. `apples`.\n\nIf one of the strings in the list isn't a valid module name, it will be treated as a regex,\nso e.g. `modules=['app.*']` *will* trace `apples` in addition to `app.main` etc.\n\nFor even more control, the `modules` argument can be a function which returns `True` for modules that should be traced.\nThis function will be called with an [`AutoTraceModule`][logfire.AutoTraceModule] object, which has `name` and\n`filename` attributes. For example, this should trace all modules that aren't part of the standard library or\nthird-party packages in a typical Python installation:\n\n```py\nimport pathlib\n\nimport logfire\n\nPYTHON_LIB_ROOT = str(pathlib.Path(pathlib.__file__).parent)\n\n\ndef should_trace(module: logfire.AutoTraceModule) -> bool:\n return not module.filename.startswith(PYTHON_LIB_ROOT)\n\n\nlogfire.install_auto_tracing(should_trace)\n```"
},
{
"id": 269,
"parent": 266,
"path": "guides/onboarding-checklist/add-auto-tracing.md",
"level": 2,
"title": "Excluding functions from tracing",
"content": "Once you've selected which modules to trace, you probably don't want to trace *every* function in those modules.\nTo exclude a function from auto-tracing, add the [`no_auto_trace`][logfire.no_auto_trace] decorator to it:\n\n```py\nimport logfire\n\[email protected]_auto_trace\ndef my_function():\n # Nested functions will also be excluded\n def inner_function():\n ...\n\n return other_function()"
},
{
"id": 270,
"parent": null,
"path": "guides/onboarding-checklist/add-auto-tracing.md",
"level": 1,
"title": "This function is *not* excluded from auto-tracing.",
"content": ""
},
{
"id": 271,
"parent": null,
"path": "guides/onboarding-checklist/add-auto-tracing.md",
"level": 1,
"title": "It will still be traced even when called from the excluded `my_function` above.",
"content": "def other_function():\n ..."
},
{
"id": 272,
"parent": null,
"path": "guides/onboarding-checklist/add-auto-tracing.md",
"level": 1,
"title": "All methods of a decorated class will also be excluded",
"content": "@no_auto_trace\nclass MyClass:\n def my_method(self):\n ...\n```\n\nThe decorator is detected at import time.\nOnly `@no_auto_trace` or `@logfire.no_auto_trace` are supported.\nRenaming/aliasing either the function or module won't work.\nNeither will calling this indirectly via another function.\n\nThis decorator simply returns the argument unchanged, so there is zero runtime overhead."
},
{
"id": 273,
"parent": null,
"path": "guides/onboarding-checklist/add-metrics.md",
"level": 2,
"title": "System Metrics",
"content": "The easiest way to start using metrics is to enable system metrics.\nSee the [System Metrics][system-metrics] documentation to learn more."
},
{
"id": 274,
"parent": null,
"path": "guides/onboarding-checklist/add-metrics.md",
"level": 2,
"title": "Manual Metrics",
"content": "Let's see how to create and use custom metrics in your application.\n\n```py\nimport logfire"
},
{
"id": 275,
"parent": null,
"path": "guides/onboarding-checklist/add-metrics.md",
"level": 1,
"title": "Create a counter metric",
"content": "messages_sent = logfire.metric_counter('messages_sent')"
},
{
"id": 276,
"parent": null,
"path": "guides/onboarding-checklist/add-metrics.md",
"level": 1,
"title": "Increment the counter",
"content": "def send_message():\n messages_sent.add(1)\n```"
},
{
"id": 277,
"parent": 276,
"path": "guides/onboarding-checklist/add-metrics.md",
"level": 3,
"title": "Counter",
"content": "The Counter metric is particularly useful when you want to measure the frequency or occurrence of a certain\nevent or state in your application.\n\nYou can use this metric for counting things like:\n\n* The number of exceptions caught.\n* The number of requests received.\n* The number of items processed.\n\nTo create a counter metric, use the [`logfire.metric_counter`][logfire.Logfire.metric_counter] function:\n\n```py\nimport logfire\n\ncounter = logfire.metric_counter(\n 'exceptions',\n unit='1', # (1)!\n description='Number of exceptions caught'\n)\n\ntry:\n raise Exception('oops')\nexcept Exception:\n counter.add(1)\n```\n\n1. The `unit` parameter is optional, but it's a good practice to specify it.\n It should be a string that represents the unit of the counter.\n If the metric is _unitless_, you can use `'1'`.\n\nYou can read more about the Counter metric in the [OpenTelemetry documentation][counter-metric]."
},
{
"id": 278,
"parent": 276,
"path": "guides/onboarding-checklist/add-metrics.md",
"level": 3,
"title": "Histogram",
"content": "The Histogram metric is particularly useful when you want to measure the distribution of a set of values.\n\nYou can use this metric for measuring things like:\n\n* The duration of a request.\n* The size of a file.\n* The number of items in a list.\n\nTo create a histogram metric, use the [`logfire.metric_histogram`][logfire.Logfire.metric_histogram] function:\n\n```py\nimport logfire\n\nhistogram = logfire.metric_histogram(\n 'request_duration',\n unit='ms', # (1)!\n description='Duration of requests'\n)\n\nfor duration in [10, 20, 30, 40, 50]:\n histogram.record(duration)\n```\n\n1. The `unit` parameter is optional, but it's a good practice to specify it.\n It should be a string that represents the unit of the histogram.\n\nYou can read more about the Histogram metric in the [OpenTelemetry documentation][histogram-metric]."
},
{
"id": 279,
"parent": 276,
"path": "guides/onboarding-checklist/add-metrics.md",
"level": 3,
"title": "Up-Down Counter",
"content": "The \"Up-Down Counter\" is a type of counter metric that allows both incrementing (up) and decrementing (down) operations.\nUnlike a regular counter that only allows increments, an up-down counter can be increased or decreased based on\nthe events or states you want to track.\n\nYou can use this metric for measuring things like:\n\n* The number of active connections.\n* The number of items in a queue.\n* The number of users online.\n\nTo create an up-down counter metric, use the [`logfire.metric_up_down_counter`][logfire.Logfire.metric_up_down_counter] function:\n\n```py\nimport logfire\n\nactive_users = logfire.metric_up_down_counter(\n 'active_users',\n unit='1', # (1)!\n description='Number of active users'\n)\n\ndef user_logged_in():\n active_users.add(1)\n\ndef user_logged_out():\n active_users.add(-1)\n```\n\n1. The `unit` parameter is optional, but it's a good practice to specify it.\n It should be a string that represents the unit of the up-down counter.\n If the metric is _unitless_, you can use `'1'`.\n\nYou can read more about the Up-Down Counter metric in the [OpenTelemetry documentation][up-down-counter-metric]."
},
{
"id": 280,
"parent": 276,
"path": "guides/onboarding-checklist/add-metrics.md",
"level": 3,
"title": "Gauge",
"content": "The Gauge metric is particularly useful when you want to measure the current value of a certain state\nor event in your application. Unlike the counter metric, the gauge metric does not accumulate values over time.\n\nYou can use this metric for measuring things like:\n\n* The current temperature.\n* The current memory usage.\n* The current number of active connections.\n* The current number of users online.\n\nTo create a gauge metric, use the [`logfire.metric_gauge`][logfire.Logfire.metric_gauge] function:\n\n```py\nimport logfire\n\ntemperature = logfire.metric_gauge(\n 'temperature',\n unit='°C',\n description='Temperature'\n)\n\ndef set_temperature(value: float):\n temperature.set(value)\n```\n\nYou can read more about the Gauge metric in the [OpenTelemetry documentation][gauge-metric]."
},
{
"id": 281,
"parent": 276,
"path": "guides/onboarding-checklist/add-metrics.md",
"level": 3,
"title": "Callback Metrics",
"content": "Callback metrics, or observable metrics, are a way to create metrics that are automatically updated based on a time interval."
},
{
"id": 282,
"parent": 281,
"path": "guides/onboarding-checklist/add-metrics.md",
"level": 4,
"title": "Counter Callback",
"content": "To create a counter callback metric, use the [`logfire.metric_counter_callback`][logfire.Logfire.metric_counter_callback] function:\n\n```py\nimport logfire\nfrom opentelemetry.metrics import CallbackOptions, Observable\n\n\ndef cpu_time_callback(options: CallbackOptions) -> Iterable[Observation]:\n observations = []\n with open(\"/proc/stat\") as procstat:\n procstat.readline() # skip the first line\n for line in procstat:\n if not line.startswith(\"cpu\"):\n break\n cpu, user_time, nice_time, system_time = line.split()\n observations.append(\n Observation(int(user_time) // 100, {\"cpu\": cpu, \"state\": \"user\"})\n )\n observations.append(\n Observation(int(nice_time) // 100, {\"cpu\": cpu, \"state\": \"nice\"})\n )\n observations.append(\n Observation(int(system_time) // 100, {\"cpu\": cpu, \"state\": \"system\"})\n )\n return observations\n\nlogfire.metric_counter_callback(\n 'system.cpu.time',\n unit='s',\n callbacks=[cpu_time_callback],\n description='CPU time',\n)\n```\n\nYou can read more about the Counter metric in the [OpenTelemetry documentation][counter-callback-metric]."
},
{
"id": 283,
"parent": 281,
"path": "guides/onboarding-checklist/add-metrics.md",
"level": 4,
"title": "Gauge Callback",
"content": "The gauge metric is particularly useful when you want to measure the current value of a certain state\nor event in your application. Unlike the counter metric, the gauge metric does not accumulate values over time.\n\nTo create a gauge callback metric, use the [`logfire.metric_gauge_callback`][logfire.Logfire.metric_gauge_callback] function:\n\n```py\nimport logfire\n\n\ndef get_temperature(room: str) -> float:\n ...\n\n\ndef temperature_callback(options: CallbackOptions) -> Iterable[Observation]:\n for room in [\"kitchen\", \"living_room\", \"bedroom\"]:\n temperature = get_temperature(room)\n yield Observation(temperature, {\"room\": room})\n\n\nlogfire.metric_gauge_callback(\n 'temperature',\n unit='°C',\n callbacks=[temperature_callback],\n description='Temperature',\n)\n```\n\nYou can read more about the Gauge metric in the [OpenTelemetry documentation][gauge-callback-metric]."
},
{
"id": 284,
"parent": 281,
"path": "guides/onboarding-checklist/add-metrics.md",
"level": 4,
"title": "Up-Down Counter Callback",
"content": "This is the callback version of the [up-down counter metric](#up-down-counter).\n\nTo create an up-down counter callback metric, use the\n[`logfire.metric_up_down_counter_callback`][logfire.Logfire.metric_up_down_counter_callback] function:\n\n```py\nimport logfire\n\n\ndef get_active_users() -> int:\n ...\n\n\ndef active_users_callback(options: CallbackOptions) -> Iterable[Observation]:\n active_users = get_active_users()\n yield Observation(active_users, {})\n\n\nlogfire.metric_up_down_counter_callback(\n 'active_users',\n unit='1',\n callbacks=[active_users_callback],\n description='Number of active users',\n)\n```\n\nYou can read more about the Up-Down Counter metric in the [OpenTelemetry documentation][up-down-counter-callback-metric].\n\n\n\n\n\n\n\n\n[system-metrics]: ../../integrations/system-metrics.md"
},
{
"id": 285,
"parent": null,
"path": "guides/onboarding-checklist/index.md",
"level": 4,
"title": "Logfire Onboarding Checklist",
"content": "* [ ] **[Integrate Logfire](integrate.md)**: Fully integrate Logfire with your logging system and the packages you are\n using.\n\n* [ ] **[Add Logfire manual tracing](add-manual-tracing.md)**: Enhance your tracing data by manually adding custom\n spans and logs to your code for more targeted data collection.\n\n* [ ] **[Add Logfire auto-tracing](add-auto-tracing.md)**: Discover how to leverage Logfire's auto-tracing\n capabilities to automatically instrument your application with minimal code changes.\n\n* [ ] **[Add Logfire metrics](add-metrics.md)**: Learn how to create and use metrics to track and measure important\n aspects of your application's performance and behavior.\n\nWe'll walk you through the checklist step by step, introducing relevant features and concepts as we go. While the main\nfocus of this guide is on getting data into Logfire so you can leverage it in the future, we'll also provide an\nintroduction to the Logfire Web UI and show you how to interact with the data you're generating.\n\n!!! note\n\n For a more comprehensive walkthrough of the Logfire Web UI and its features, you may be interested in our\n [Logfire Web UI Guide](../web-ui/index.md).\n\nLet's get started! :rocket:"
},
{
"id": 286,
"parent": null,
"path": "guides/onboarding-checklist/integrate.md",
"level": 2,
"title": "OpenTelemetry Instrumentation",
"content": "Harnessing the power of [OpenTelemetry], **Logfire** not only offers broad compatibility with any [OpenTelemetry]\ninstrumentation package, but also includes a user-friendly CLI command that effortlessly highlights any\nmissing components in your project.\n\nTo inspect your project, run the following command:\n\n```bash\nlogfire inspect\n```\n\nThis will output the projects you need to install to have optimal OpenTelemetry instrumentation:\n\n![Logfire inspect command](../../images/cli/terminal-screenshot-inspect.png)\n\nTo install the missing packages, copy the command provided by the `inspect` command, and run it in your terminal.\n\nEach instrumentation package has its own way to be configured. Check our [Integrations][integrations] page to\nlearn how to configure them."
},
{
"id": 287,
"parent": null,
"path": "guides/onboarding-checklist/integrate.md",
"level": 2,
"title": "Logging Integration (Optional)",
"content": "!!! warning \"Attention\"\n If you are creating a new application or are not using a logging system, you can skip this step.\n\n You should use **Logfire** itself to collect logs from your application.\n\n All the standard logging methods are supported e.g. [`logfire.info()`][logfire.Logfire.info].\n\nThere are many logging systems within the Python ecosystem, and **Logfire** provides integrations for the most popular ones:\n[Standard Library Logging](../../integrations/logging.md), [Loguru](../../integrations/loguru.md), and\n[Structlog](../../integrations/structlog.md)."
},
{
"id": 288,
"parent": 287,
"path": "guides/onboarding-checklist/integrate.md",
"level": 3,
"title": "Standard Library",
"content": "To integrate **Logfire** with the standard library logging module, you can use the\n[`LogfireLoggingHandler`][logfire.integrations.logging.LogfireLoggingHandler] class.\n\nThe minimal configuration would be the following:\n\n```py hl_lines=\"5\"\nfrom logging import basicConfig\n\nimport logfire\n\nlogfire.configure()\nbasicConfig(handlers=[logfire.LogfireLoggingHandler()])\n```\n\nNow imagine, that you have a logger in your application:\n\n```py hl_lines=\"7-8\" title=\"main.py\"\nfrom logging import basicConfig, getLogger\n\nimport logfire\n\nlogfire.configure()\nbasicConfig(handlers=[logfire.LogfireLoggingHandler()])\n\nlogger = getLogger(__name__)\nlogger.error(\"Hello %s!\", \"Fred\")\n```\n\nIf we run the above code, with `python main.py`, we will see the following output:\n\n![Terminal with Logfire output](../../images/guide/terminal-integrate-logging.png)\n\nIf you go to the link, you will see the `\"Hello Fred!\"` log in the Web UI:\n\n![Logfire Web UI with logs](../../images/guide/browser-integrate.png)\n\nIt is simple as that! Cool, right? 🤘"
},
{
"id": 289,
"parent": 287,
"path": "guides/onboarding-checklist/integrate.md",
"level": 3,
"title": "Loguru",
"content": "To integrate with Loguru, check out the [Loguru] page."
},
{
"id": 290,
"parent": 287,
"path": "guides/onboarding-checklist/integrate.md",
"level": 3,
"title": "Structlog",
"content": "To integrate with Structlog, check out the [Structlog] page.\n\n[inspect-command]: ../../reference/cli.md#inspect-inspect\n[integrations]: ../../integrations/index.md\n\n[Loguru]: ../../integrations/loguru.md\n[Structlog]: ../../integrations/structlog.md"
},
{
"id": 291,
"parent": null,
"path": "guides/onboarding-checklist/add-manual-tracing.md",
"level": 2,
"title": "Spans, logs, and traces",
"content": "Here's a simple example of using Logfire:\n\n```python\nimport time\n\nimport logfire\n\nlogfire.configure()\n\nwith logfire.span('This is a span'):\n time.sleep(1)\n logfire.info('This is an info log')\n time.sleep(2)\n```\n\nIf you run this it should print something like:\n\n```\nLogfire project URL: https://logfire.pydantic.dev/my_username/my_project_name\n21:02:55.078 This is a span\n21:02:56.084 This is an info log\n```\n\nOpening the project URL should show something like this in the Live view:\n\n![Simple example in Live view](../../images/guide/manual-tracing-basic-closed-span.png)\n\nThe blue box with `1+` means that the span contains 1 direct child. Clicking on that box expands the span to reveal its children:\n\n![Simple example in Live view with span opened](../../images/guide/manual-tracing-basic.png)\n\nNote that:\n\n1. Any spans or logs created inside the `with logfire.span(...):` block will be children of that span. This lets you organize your logs nicely in a structured tree. You can also see this parent-child relationship in the console logs based on the indentation.\n2. Spans have a start and an end time, and thus a duration. This span took 3 seconds to complete.\n3. For logs, the start and end time are the same, so they don't have a duration. But you can still see in the UI that the log was created 1 second after the span started and 2 seconds before it ended.\n\nIf you click on the 'Explore' link in the top navbar, you can write SQL to explore further, e.g:\n\n![Query in Explore view: select extract('seconds' from end_timestamp - start_timestamp) as duration, kind, message, trace_id, span_id, parent_span_id from records order by start_timestamp ](../../images/guide/manual-tracing-explore-basic.png)\n\nNote:\n\n1. Spans and logs are stored together in the same `records` table.\n2. The `parent_span_id` of the log is the `span_id` of the span.\n3. Both have the same `trace_id`. You can click on it to open a new tab in the Live view filtered to that _trace_.\n\nA _trace_ is a tree of spans/logs sharing the same root. Whenever you create a new span/log when there's no active span, a new trace is created. If it's a span, any descendants of that span will be part of the same trace. To keep your logs organized nicely into traces, it's best to create spans at the top level representing high level operations such as handling web server requests."
},
{
"id": 292,
"parent": null,
"path": "guides/onboarding-checklist/add-manual-tracing.md",
"level": 2,
"title": "Attributes",
"content": "Spans and logs can have structured data attached to them, e.g:\n\n```python\nlogfire.info('Hello', name='world')\n```\n\nIf you click on the 'Hello' log in the Live view, you should see this in the details panel on the right:\n\n![name attribute in Live view](../../images/guide/manual-tracing-attribute-hello-world.png)\n\nThis data is stored in the `attributes` column in the `records` table as JSON. You can use e.g. `attributes->>'name' = 'world'` in the SQL filter at the top of the Live view to show only this log. This is used as the `WHERE` clause of a SQL query on the `records` table.\n\nBoth spans and logs can have attributes containing arbitrary values which will be intelligently serialized to JSON as needed. You can pass any keyword arguments to set attributes as long as they don't start with an underscore (`_`). That namespace is reserved for other keyword arguments with logfire-specific meanings.\n\nSometimes it's useful to attach an attribute to a span after it's been created but before it's finished. You can do this by calling the `span.set_attribute` method:\n\n```python\nwith logfire.span('Calculating...') as span:\n result = 1 + 2\n span.set_attribute('result', result)\n```"
},
{
"id": 293,
"parent": null,
"path": "guides/onboarding-checklist/add-manual-tracing.md",
"level": 2,
"title": "Messages and span names",
"content": "If you run this code:\n\n```python\nimport logfire\n\nlogfire.configure()\n\nfor name in ['Alice', 'Bob', 'Carol']:\n logfire.info('Hello {name}', name=name)\n```\n\n![Query in Explore view: select span_name, attributes->>'name' as name, message from records order by start_timestamp](../../images/guide/manual-tracing-span-names.png)\n\nHere you can see that:\n\n1. The first argument `'Hello {name}'` becomes the value of the `span_name` column. You can use this to find all records coming from the same code even if the messages are different, e.g. with the SQL filter `span_name = 'Hello {name}'`.\n2. The span name is also used as a `str.format`-style template which is formatted with the attributes to produce the `message` column. The message is what's shown in the console logs and the Live view.\n\nYou can also set `span.message` after a span is started but before it's finished, e.g:\n\n```python\nwith logfire.span('Calculating...') as span:\n result = 1 + 2\n span.message = f'Calculated: {result}'\n```\n\nYou could use `message` to filter for related records, e.g. `message like 'Hello%'`, but filtering on the `span_name` column is more efficient because it's indexed. Similarly, it's better to use `span_name = 'Hello {name}' and attributes->>'name' = 'Alice'` than `message = 'Hello Alice'`.\n\nTo allow efficiently filtering for related records, span names should be _low cardinality_, meaning they shouldn't vary too much. For example, this would be bad:\n\n```python\nname = get_username()\nlogfire.info('Hello ' + name, name=name)\n```\n\nbecause now the `span_name` column will have a different value for every username. But this would be fine:\n\n```python\nword = 'Goodbye' if leaving else 'Hello'\nlogfire.info(word + ' {name}', name=name)\n```\n\nbecause now the `span_name` column will only have two values (`'Goodbye {name}'` and `'Hello {name}'`) and it's both easier and more efficient to filter on `span_name = 'Hello {name}'` than `span_name = '{word} {name}' and attributes->>'word' = 'Hello'`.\n\nYou can use the `_span_name` argument when you want the span name to be different from the message template, e.g:\n\n```python\nlogfire.info('Hello {name}', name='world', _span_name='Hello')\n```\n\nThis will set the `span_name` to `'Hello'` and the `message` to `'Hello world'`. Note that the `_span_name` argument starts with an underscore to distinguish it from attributes."
},
{
"id": 294,
"parent": null,
"path": "guides/onboarding-checklist/add-manual-tracing.md",
"level": 2,
"title": "f-strings",
"content": "Instead of this:\n\n```python\nlogfire.info('Hello {name}', name=name)\n```\n\nit's much more convenient to use an f-string to avoid repeating `name` three times:\n\n```python\nlogfire.info(f'Hello {name}')\n```\n\nContrary to the previous section, this _will_ work well in Python 3.11+ because Logfire will use special magic to both set the `span_name` to `'Hello {name}'` and set the `name` attribute to the value of the `name` variable, so it's equivalent to the previous snippet. Here's what you need to know about this:\n\n- The feature is enabled by default in Python 3.11+. You can disable it with [`logfire.configure(inspect_arguments=False)`][logfire.configure(inspect_arguments)]. You can also enable it in Python 3.9 and 3.10, but it's more likely to not work correctly.\n- Inspecting arguments is expected to always work under normal circumstances. The main caveat is that the source code must be available, so e.g. deploying only `.pyc` files will cause it to fail.\n- If inspecting arguments fails, you will get a warning, and the f-string argument will be used as a formatting template. This means you will get high-cardinality span names such as `'Hello Alice'` and no `name` attribute, but the information won't be completely lost.\n- If inspecting arguments is enabled, then arguments will be inspected regardless of whether f-strings are being used. So if you write `logfire.info('Hello {name}', name=name)` and inspecting arguments fails, then you will still get a warning.\n- The values inside f-strings are evaluated and formatted by Logfire a second time. This means you should avoid code like `logfire.info(f'Hello {get_username()}')` if `get_username()` (or the string conversion of whatever it returns) is expensive or has side effects.\n- The first argument must be an actual f-string. `logfire.info(f'Hello {name}')` will work, but `message = f'Hello {name}'; logfire.info(message)` will not, nor will `logfire.info('Hello ' + name)`.\n- Inspecting arguments is cached so that the performance overhead of repeatedly inspecting the same f-string is minimal. However, there is a non-negligible overhead of parsing a large source file the first time arguments need to be inspected inside it. Either way, avoiding this overhead requires disabling inspecting arguments entirely, not merely avoiding f-strings."
},
{
"id": 295,
"parent": null,
"path": "guides/onboarding-checklist/add-manual-tracing.md",
"level": 2,
"title": "Exceptions",
"content": "The `logfire.span` context manager will automatically record any exceptions that cause it to exit, e.g:\n\n```python\nimport logfire\n\nlogfire.configure()\n\nwith logfire.span('This is a span'):\n raise ValueError('This is an error')\n```\n\nIf you click on the span in the Live view, the panel on the right will have an 'Exception Traceback' tab:\n\n![Traceback in UI](../../images/guide/manual-tracing-traceback.png)\n\nExceptions which are caught and not re-raised will not be recorded, e.g:\n\n```python\nwith logfire.span('This is a span'):\n try:\n raise ValueError('This is an acceptable error not worth recording')\n except ValueError:\n pass\n```\n\nIf you want to record a handled exception, use the [`span.record_exception`][logfire.LogfireSpan.record_exception] method:\n\n```python\nwith logfire.span('This is a span') as span:\n try:\n raise ValueError('Catch this error, but record it')\n except ValueError as e:\n span.record_exception(e)\n```\n\nAlternatively, if you only want to log exceptions without creating a span for the normal case, you can use [`logfire.exception`][logfire.Logfire.exception]:\n\n```python\ntry:\n raise ValueError('This is an error')\nexcept ValueError:\n logfire.exception('Something went wrong')\n```\n\n`logfire.exception(...)` is equivalent to `logfire.error(..., _exc_info=True)`. You can also use `_exc_info` with the other logging methods if you want to record a traceback in a log with a non-error level. You can set `_exc_info` to a specific exception object if it's not the one being handled. Don't forget the leading underscore!"
},
{
"id": 296,
"parent": null,
"path": "guides/onboarding-checklist/add-manual-tracing.md",
"level": 2,
"title": "Convenient function spans with `@logfire.instrument`",
"content": "Often you want to wrap a whole function in a span. Instead of doing this:\n\n```python\ndef my_function(x, y):\n with logfire.span('my_function', x=x, y=y):\n ...\n```\n\nyou can use the [`@logfire.instrument`][logfire.Logfire.instrument] decorator:\n\n```python\[email protected]()\ndef my_function(x, y):\n ...\n```\n\nBy default, this will add the function arguments to the span as attributes.\nTo disable this (e.g. if the arguments are large objects not worth collecting), use `instrument(extract_args=False)`.\n\nThe default span name will be something like `Calling module_name.my_function`.\nYou can pass an alternative span name as the first argument to `instrument`, and it can even be a template\ninto which arguments will be formatted, e.g:\n\n```python\[email protected]('Applying my_function to {x=} and {y=}')\ndef my_function(x, y):\n ...\n\nmy_function(3, 4)"
},
{
"id": 297,
"parent": null,
"path": "guides/onboarding-checklist/add-manual-tracing.md",
"level": 1,
"title": "Logs: Applying my_function to x=3 and y=4",
"content": "```\n\n!!! note\n\n - The [`@logfire.instrument`][logfire.Logfire.instrument] decorator MUST be applied first, i.e., UNDER any other decorators.\n - The source code of the function MUST be accessible."
},
{
"id": 298,
"parent": 297,
"path": "guides/onboarding-checklist/add-manual-tracing.md",
"level": 2,
"title": "Log levels",
"content": "The following methods exist for creating logs with different levels:\n\n- `logfire.trace`\n- `logfire.debug`\n- `logfire.info`\n- `logfire.notice`\n- `logfire.warn`\n- `logfire.error`\n- `logfire.fatal`\n\nBy default, `trace` and `debug` logs are hidden. You can change this by clicking the 'Default levels' dropdown in the Live view:\n\n![Default levels dropdown](../../images/guide/manual-tracing-default-levels.png)\n\nYou can also set the minimum level used for console logging with [`logfire.configure`][logfire.configure], e.g:\n\n```python\nimport logfire\n\nlogfire.configure(console=logfire.ConsoleOptions(min_log_level='debug'))\n```\n\nTo log a message with a variable level you can use `logfire.log`, e.g. `logfire.log('info', 'This is an info log')` is equivalent to `logfire.info('This is an info log')`.\n\nSpans are level `info` by default. You can change this with the `_level` argument, e.g. `with logfire.span('This is a debug span', _level='debug'):`. You can also change the level after the span has started but before it's finished with [`span.set_level`][logfire.LogfireSpan.set_level], e.g:\n\n```python\nwith logfire.span('Doing a thing') as span:\n success = do_thing()\n if not success:\n span.set_level('error')\n```\n\nIn the Live view, **spans are colored based on the highest level of them and their descendants**. So e.g. this code:\n\n```python\nimport logfire\n\nlogfire.configure()\n\nwith logfire.span('Outer span'):\n with logfire.span('Inner span'):\n logfire.info('This is an info message')\n logfire.error('This is an error message')\n```\n\nwill be displayed like this:\n\n![Spans colored by level](../../images/guide/manual-tracing-level-colors.png)\n\nHere the spans themselves still have their level set to `info` as is the default, but they're colored red instead of blue because they contain an error log.\n\nIf a span finishes with an unhandled exception, then in addition to recording a traceback as described above, the span's log level will be set to `error`. This will not happen when using the [`span.record_exception`][logfire.LogfireSpan.record_exception] method.\n\nIn the database, the log level is stored as a number in the `level` column. The values are based on OpenTelemetry, e.g. `info` is `9`. You can convert level names to numbers using the `level_num` SQL function, e.g. `level > level_num('info')` will find all 'unusual' records. You can also use the `level_name` SQL function to convert numbers to names, e.g. `SELECT level_name(level), ...` to see a human-readable level in the Explore view. Note that the `level` column is indexed so that filtering on `level = level_num('error')` is efficient, but filtering on `level_name(level) = 'error'` is not."
}
]
"""Run with `uv run --with pydantic python logfire_docs_gen.py`"""
from __future__ import annotations as _annotations

import re
from dataclasses import dataclass
from pathlib import Path

from pydantic import TypeAdapter

THIS_DIR = Path(__file__).parent
DOCS_DIR = THIS_DIR / 'docs'
ignored_files = 'release-notes.md', 'help.md', '/api/', '/legal/'


@dataclass
class DocsSection:
    id: int
    parent: int | None
    path: str
    level: int
    title: str
    content: str


sections_ta = TypeAdapter(list[DocsSection])


def main():
    all_sections: list[DocsSection] = []
    sid = 0
    for file in DOCS_DIR.rglob('*.md'):
        if any(ignored in str(file) for ignored in ignored_files):
            continue
        rel_path = file.relative_to(DOCS_DIR)
        sections, sid = extract_sections(file, str(rel_path), sid)
        all_sections.extend(sections)
    store_path = Path('logfire_docs.json')
    print(f'Extracted details of {len(all_sections)} sections, saving to {store_path}')
    store_path.write_bytes(sections_ta.dump_json(all_sections, indent=2))


def extract_sections(file: Path, rel_path: str, sid: int) -> tuple[list[DocsSection], int]:
    content = file.read_text()
    # remove trailing markdown link definitions
    content = re.sub(r'^\[.*]: http.+', '', content, flags=re.MULTILINE)
    sections: list[DocsSection] = []
    # level and title of the section currently being collected
    section: tuple[int, str] | None = None
    # stack of (level, section id) pairs used to find each section's parent
    stack: list[tuple[int, int]] = []
    while True:
        m = re.search(r'^(#+) (.+)', content, flags=re.MULTILINE)
        if section is not None:
            level, title = section
            section_content = content[:m.start()] if m else content
            # pop sections of the same or deeper level, e.g. if we're in an H2 section, any open H2/H3 sections are closed
            while stack and stack[-1][0] >= level:
                stack.pop()
            parent = stack[-1][1] if stack else None
            stack.append((level, sid))
            sections.append(DocsSection(sid, parent, rel_path, level, title, section_content.strip()))
            sid += 1
        if m is None:
            break
        else:
            section = len(m.group(1)), m.group(2)
            content = content[m.end():]
    return sections, sid
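

# A minimal sketch (not part of the original script) of how the generated JSON could be
# loaded back for downstream use; `load_sections` and the default path are assumptions
# added here for illustration, not part of the Logfire SDK or docs.
def load_sections(path: Path = Path('logfire_docs.json')) -> list[DocsSection]:
    # sections_ta validates the same schema that main() serialized with dump_json
    return sections_ta.validate_json(path.read_bytes())
    # example usage (assumes main() has already produced logfire_docs.json):
    #     sections = load_sections()
    #     print(f'loaded {len(sections)} sections')
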
if __name__ == '__main__':
    main()