Inspired By: grafana/loki#333
- docker-compose.yml
version: "3"
networks:
  loki:
# open PyCharm settings -> Build,Execution,Deployment -> Console -> Django Console
# 1. add env var: DJANGO_SETTINGS_MODULE -> settings.settings
# 2. copy below into Starting script
# open preferences -> keymap -> search `Python Console` -> create new shortcut `cmd + 3`
from pprint import pprint
from django_extensions.management.shells import import_objects
from django.core.management.color import no_style
from datetime import datetime
# auto-import models into the console namespace
# (assumes the django_extensions import_objects(options, style) signature)
globals().update(import_objects({"dont_load": []}, no_style()))
'''
Extracted and modified from django-model-logging.
It used its own LogEntry model, but since Django
has its own LogEntry, someone might want to
register entries in that model instead of creating
a new one.
'''
from django.contrib.admin.models import LogEntry, ADDITION, CHANGE, ContentType, DELETION
from django.utils.translation import gettext as _
# Uses the Python Imaging Library
# `pip install Pillow` works too
from PIL import Image
image_filename = "picture_with_EXIF.jpg"
image_file = open(image_filename, 'rb')
image = Image.open(image_file)
# next 3 lines strip exif
image_data = list(image.getdata())
image_without_exif = Image.new(image.mode, image.size)
image_without_exif.putdata(image_data)
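A self-contained round-trip sketch of the same technique, assuming Pillow >= 6 for the public `Image.getexif()` API; the tag value ("TestCam") and image size here are invented for illustration:

```python
from io import BytesIO
from PIL import Image

def strip_exif(image):
    """Return a copy of `image` that carries pixel data only, no metadata."""
    stripped = Image.new(image.mode, image.size)
    stripped.putdata(list(image.getdata()))
    return stripped

# Build a tiny JPEG in memory with one EXIF tag (271 = Make), then strip it.
src = Image.new("RGB", (8, 8), "red")
exif = src.getexif()
exif[271] = "TestCam"  # invented camera make, for demonstration only
buf = BytesIO()
src.save(buf, format="JPEG", exif=exif.tobytes())
buf.seek(0)

with_exif = Image.open(buf)
without_exif = strip_exif(with_exif)
```

Because the stripped copy is built from `getdata()` alone, every metadata block (EXIF, ICC, comments) is dropped, not just EXIF.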
_template/template_nginx_access_log
{
  "index_patterns" : "*-nginx-access*",
  "order" : 1,
  "settings" : {
    "number_of_shards" : 2,
    "number_of_replicas" : 0,
    "codec" : "best_compression"
  },
  "mappings" : {
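The template above can be installed with a single PUT against the legacy `_template` endpoint (Elasticsearch pre-7.8); this sketch assumes ES listens on localhost:9200 and that the JSON is saved as `template_nginx_access_log.json` (filename invented):

```
curl -XPUT 'localhost:9200/_template/template_nginx_access_log' \
  -H 'Content-Type: application/json' \
  -d @template_nginx_access_log.json
```

With `"order" : 1`, this template overrides any lower-order template whose pattern also matches `*-nginx-access*` indices.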
/var/log/traefik.log
{
  compress
  create 0640 root root
  daily
  delaycompress
  missingok
  notifempty
  rotate 5
}
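A dry run with logrotate's debug flag shows what the config above would do without actually rotating anything; the path assumes the snippet is saved as /etc/logrotate.d/traefik:

```
logrotate -d /etc/logrotate.d/traefik
```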
# coding: utf-8
from collections import OrderedDict
from copy import deepcopy
from django.core.exceptions import ValidationError
from django.utils.encoding import force_str
from import_export import resources, widgets
from . import models as m
This logging setup configures Structlog to output pretty logs in development, and JSON log lines in production.
Then, you can use Structlog loggers or standard logging loggers, and both will be processed by the Structlog pipeline (see the hello() endpoint for reference). That way, any log generated by your dependencies will also be processed and enriched, even if they know nothing about Structlog!
Requests are assigned a correlation ID with the asgi-correlation-id middleware (either captured from incoming request or generated on the fly).
All logs are linked to the correlation ID, and to the Datadog trace/span if instrumented.
This data "global to the request" is stored in context vars, and automatically added to all logs produced during the request thanks to Structlog.
You can add to these "global local variables" at any point in an endpoint with `structlog.contextvars.bind_contextvars(custom