This logging setup configures Structlog to output pretty logs in development and JSON log lines in production. You can then use Structlog loggers or standard `logging` loggers interchangeably; both are processed by the Structlog pipeline (see the `hello()` endpoint for reference). That way, any log generated by your dependencies is also processed and enriched, even if those dependencies know nothing about Structlog!

Requests are assigned a correlation ID by the asgi-correlation-id middleware (either captured from the incoming request or generated on the fly). All logs are linked to the correlation ID, and to the Datadog trace/span if instrumented. This request-global data is stored in context vars, and Structlog automatically adds it to every log produced during the request. You can bind additional request-global values at any point in an endpoint with `structlog.contextvars.bind_contextvars(custom_key=custom_value)`.
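For instance, a minimal sketch of such an endpoint (the route path, logger setup, and bound field name are illustrative, not taken from the original project):

```python
import structlog
from fastapi import FastAPI

app = FastAPI()
logger = structlog.stdlib.get_logger()

@app.get("/hello")
async def hello(name: str = "world"):
    # Bind a request-scoped value: Structlog merges it (plus the
    # correlation ID already bound by the middleware) into every
    # log line emitted for the rest of this request.
    structlog.contextvars.bind_contextvars(greeted_name=name)
    logger.info("saying hello")
    return {"hello": name}
```

This assumes the Structlog pipeline includes the `merge_contextvars` processor, which is what copies the bound context vars into each event dict.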
```python
# --- my_functions.py
import pandas as pd


def avg_3wk_spend(spend: pd.Series) -> pd.Series:
    """Rolling 3 week average spend."""
    return spend.rolling(3).mean()


def spend_per_signup(spend: pd.Series, signups: pd.Series) -> pd.Series:
    """Spend per signup (body assumed; the snippet was truncated here)."""
    return spend / signups
```
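A quick usage sketch (the module name and sample values are mine, for illustration):

```python
import pandas as pd
from my_functions import avg_3wk_spend, spend_per_signup

spend = pd.Series([10.0, 12.0, 11.0, 13.0])
signups = pd.Series([1, 2, 2, 4])

print(avg_3wk_spend(spend))              # NaN, NaN, 11.0, 12.0
print(spend_per_signup(spend, signups))  # 10.0, 6.0, 5.5, 3.25
```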
Reproduced from https://thecodinginterface.com/blog/kafka-source-sink-with-apache-flink-table-api/

PyFlink is compatible with Python >= 3.5 and < 3.9.

Process:
- Produce events and send them to a Kafka topic
- Set up a streaming service via the PyFlink DataStream API
- Read from the Kafka source via the PyFlink Table API (see the sketch below)
- Process the data
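A minimal sketch of the Kafka-source and processing steps (the table schema, topic, and broker address are assumptions; running it also requires the Flink SQL Kafka connector JAR on the classpath):

```python
from pyflink.datastream import StreamExecutionEnvironment
from pyflink.table import StreamTableEnvironment

env = StreamExecutionEnvironment.get_execution_environment()
t_env = StreamTableEnvironment.create(env)

# Register a Kafka-backed source table (schema/topic/broker are placeholders).
t_env.execute_sql("""
    CREATE TABLE sales_events (
        seller_id STRING,
        amount DOUBLE
    ) WITH (
        'connector' = 'kafka',
        'topic' = 'sales-events',
        'properties.bootstrap.servers' = 'localhost:9092',
        'properties.group.id' = 'pyflink-demo',
        'scan.startup.mode' = 'earliest-offset',
        'format' = 'json'
    )
""")

# Process the data: e.g. total amount per seller.
t_env.execute_sql(
    "SELECT seller_id, SUM(amount) AS total_amount FROM sales_events GROUP BY seller_id"
).print()
```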
```
❯ rm out.csv
❯ cat 1.py
from glob import glob
import mmap

files = glob("data/*")
files.sort(key=lambda x: int(x.split("/")[-1].split(".")[0]))
write_f = open("out.csv", "w+b")
# (assumed continuation: memory-map each input file and append it)
for path in files:
    with open(path, "r+b") as f:
        write_f.write(mmap.mmap(f.fileno(), 0))
write_f.close()
```
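Design note: passing the `mmap` object straight to `write()` writes from the mapped pages directly, so each input file is paged in by the OS on demand rather than first being read into an intermediate Python `bytes` object.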
I liked the way Grokking the Coding Interview organized problems into learnable patterns. However, the course is expensive, and most of its problems are copied straight from LeetCode. Since the explanations on LeetCode are usually just as good, the course really boils down to a glorified curated list of LeetCode problems.
So below I made a list of LeetCode problems that map as closely as possible to the Grokking patterns.
```python
import logging
from multiprocessing import Pool, TimeoutError

logging.basicConfig(level=logging.INFO)


def timeout(func, args=None, kwds=None, timeout=10):
    """Call function and wait until timeout (the body is an assumed completion)."""
    with Pool(processes=1) as pool:
        result = pool.apply_async(func, args or (), kwds or {})
        try:
            return result.get(timeout=timeout)
        except TimeoutError:
            logging.info("Timed out after %s seconds", timeout)
```
```python
#!/usr/bin/env python
import logging
import os
import threading
import time
from multiprocessing import Process
from queue import Queue

from confluent_kafka import Consumer

# (assumed continuation: a minimal poll loop; broker and topic are placeholders)
consumer = Consumer({"bootstrap.servers": "localhost:9092", "group.id": "demo"})
consumer.subscribe(["my-topic"])
while True:
    msg = consumer.poll(1.0)
    if msg is not None and msg.error() is None:
        logging.info("received: %s", msg.value())
```
```python
import torch
import io
import pandas as pd
import gc
import numpy as np
import transformers

'''
Original Code Author [@dlibenzi](https://github.com/dlibenzi)
'''
```