Ryan Wheeler (r-wheeler)
# Pipeline driver imports: Luigi's scheduler and worker are wired up
# programmatically rather than through the `luigi` command-line tool.
import luigi
import luigi.scheduler
import luigi.worker

import logging as log
import socket
from datetime import datetime as dt
from ConfigParser import ConfigParser  # Python 2 standard-library config parser

# Task classes defining the pipeline stages.
from components import AssessSVMRegression
from components import CreateProteinList
from components import CreateReport
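
A minimal sketch of how these imports are typically wired together. The config file name, the logging setup, and the choice of CreateReport as the terminal task (constructed with no parameters) are illustrative assumptions, not taken from the gist; the in-process scheduler-plus-worker pattern is the usual way these Luigi modules are used.

if __name__ == "__main__":
    config = ConfigParser()
    config.read("pipeline.cfg")  # hypothetical config file name

    log.basicConfig(level=log.INFO)
    log.info("starting pipeline on %s at %s", socket.gethostname(), dt.now())

    # In-process scheduler plus a single worker running the final task;
    # Luigi resolves and runs its upstream requirements first.
    scheduler = luigi.scheduler.CentralPlannerScheduler()
    worker = luigi.worker.Worker(scheduler=scheduler)
    worker.add(CreateReport())  # assumes CreateReport needs no required parameters
    worker.run()
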
@djspiewak
djspiewak / streams-tutorial.md
Created March 22, 2015 19:55
Introduction to scalaz-stream

Every application ever written can be viewed as some sort of transformation on data. Data can come from different sources, such as a network or a file or user input or the Large Hadron Collider. It can come from many sources all at once to be merged and aggregated in interesting ways, and it can be written to many different output sinks, such as a network or files or graphical user interfaces. You might produce your output all at once, as a big data dump at the end of the world (right before your program shuts down), or you might produce it more incrementally. Every application fits into this model.

The scalaz-stream project is an attempt to make it easy to construct, test and scale programs that fit within this model (which is to say, everything). It does this by providing an abstraction around a "stream" of data, which is really just this notion of some number of data being sequentially pulled out of some unspecified data source. On top of this abstraction, sca

@mmellison
mmellison / grpc_asyncio.py
Last active August 6, 2024 01:23
gRPC Servicer with Asyncio (Python 3.6+)
import asyncio
from concurrent import futures
import functools
import inspect
import threading
from grpc import _server  # private grpc server internals used further down in the gist

def _loop_mgr(loop: asyncio.AbstractEventLoop):
    # Runs on a dedicated background thread: adopt the loop and spin it
    # until the server shuts it down.
    asyncio.set_event_loop(loop)
    loop.run_forever()
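
A self-contained sketch of the pattern these imports set up, assuming only what the snippet shows: gRPC's synchronous, thread-pool-based server cannot await coroutines directly, so an event loop runs on a daemon thread (reusing `_loop_mgr` from above) and handler threads hand coroutines to it with `asyncio.run_coroutine_threadsafe`. The `unary_call` coroutine is a hypothetical stand-in for an async servicer method.

async def unary_call(request: str) -> str:
    # Hypothetical async handler standing in for a servicer method.
    await asyncio.sleep(0.1)
    return "echo: " + request

loop = asyncio.new_event_loop()
threading.Thread(target=_loop_mgr, args=(loop,), daemon=True).start()

# From a synchronous gRPC handler thread: schedule the coroutine on the
# background loop and block until its result is ready.
fut = asyncio.run_coroutine_threadsafe(unary_call("ping"), loop)
print(fut.result(timeout=5))

loop.call_soon_threadsafe(loop.stop)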