... or Why Pipelining Is Not That Easy
Golang Concurrency Patterns for brave and smart.
By @kachayev
#!/usr/bin/env python3
"""Parses and encodes the result of `docker ps` in JSON format."""
import json
import sys
from collections import namedtuple
from subprocess import Popen, PIPE
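# A hypothetical continuation: the body of the original gist is cut off above,
# so this is only a minimal sketch of one way the parsing could work. It assumes
# `docker ps` columns are separated by runs of two or more spaces; the namedtuple
# import above could give rows a richer type, but a plain dict keeps the sketch short.
import re

def docker_ps():
    out, _ = Popen(["docker", "ps"], stdout=PIPE).communicate()
    header, *rows = out.decode().splitlines()
    keys = re.split(r"\s{2,}", header.strip())
    return [dict(zip(keys, re.split(r"\s{2,}", row.strip()))) for row in rows]

if __name__ == "__main__":
    json.dump(docker_ps(), sys.stdout, indent=2)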
#!/bin/bash
# Here are some embedded Python examples using Python 3.
# They are put into functions for separation and clarity.
# Simple usage, only using python to print the date.
# This is not really a good example, because the `date`
# command works just as well.
function date_time {
    # Feed an inline Python snippet to the interpreter via a heredoc.
    python3 <<'EOF'
from datetime import datetime
print(datetime.now())
EOF
}
// This script will boot app.js with the number of workers
// specified in WORKER_COUNT.
//
// The master will respond to SIGHUP, which will trigger
// restarting all the workers and reloading the app.
var cluster = require('cluster');
var workerCount = process.env.WORKER_COUNT || 2;

// Defines what each worker needs to run
As of version 3.2, Python includes the very promising concurrent.futures
module, with elegant context managers for running tasks concurrently. Thanks to the simple and consistent interface, you can use both threads and processes with minimal effort.
For most CPU-bound tasks - anything that is heavy number crunching - you want your program to use all the CPUs in your PC. The simplest way to get a CPU-bound task to run in parallel is to use the ProcessPoolExecutor, which will create enough sub-processes to keep all your CPUs busy.
We use the context manager like this:
with concurrent.futures.ProcessPoolExecutor() as executor:
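Filling that in with a complete, runnable example (a minimal sketch; `is_prime` and the number range are illustrative names, not taken from the text above):

import concurrent.futures

def is_prime(n):
    # Deliberately CPU-bound: trial division up to sqrt(n).
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

if __name__ == "__main__":
    numbers = range(10000000, 10000100)
    # By default ProcessPoolExecutor starts one worker per CPU,
    # and executor.map fans the calls out across those processes.
    with concurrent.futures.ProcessPoolExecutor() as executor:
        for n, prime in zip(numbers, executor.map(is_prime, numbers)):
            print(n, prime)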
;; This is at: https://gist.github.com/8655399
;; So we want a rhyming dictionary in Clojure. Jack Rusher put up
;; this code here:
;;
;; https://gist.github.com/jackrusher/8640437
;;
;; I'm going to study this code and learn as I go.
;;
;; First I put it in a namespace.
Numbers Everyone Should Know
L1 cache reference 0.5 ns | |
Branch mispredict 5 ns | |
L2 cache reference 7 ns | |
Mutex lock/unlock 25 ns | |
Main memory reference 100 ns | |
Compress 1K bytes with Zippy 3,000 ns | |
Send 2K bytes over 1 Gbps network 20,000 ns | |
Read 1 MB sequentially from memory 250,000 ns | |
Round trip within same datacenter 500,000 ns |
Slightly disorganized but reasonably complete notes on the algorithms, strategies and optimizations of the Akka Cluster implementation. Could use a lot more links and context etc., but was just written for my own understanding. Might be expanded later.
Links to papers and talks that have inspired the implementation can be found on the last 10 pages of this presentation.
This is the Gossip state representation:
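(The Scala source itself isn't included in these notes; below is only a rough, hypothetical sketch in Python, with field names paraphrased from the Akka Cluster documentation.)

from dataclasses import dataclass, field

@dataclass
class VectorClock:
    # node -> logical timestamp, used to order and merge concurrent gossips
    versions: dict = field(default_factory=dict)

@dataclass
class Gossip:
    members: set = field(default_factory=set)      # the membership ring, each member with its status
    seen: set = field(default_factory=set)         # nodes that have already seen this gossip version
    unreachable: set = field(default_factory=set)  # members currently marked unreachable
    version: VectorClock = field(default_factory=VectorClock)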
This Gist contains instructions to set up an Ubuntu server with Nginx, uWSGI and Node.js that can serve any Python app (Django, for instance) and allow its automated deployment.
The content is as follows:
01-ubuntu.md – A basic Ubuntu server setup.
02-nginx-uwsgi-nodejs.md – Nginx, uWSGI and Node.js installation and setup.
03-deployment.md – A server setup for automated deployment.
04-mariadb.md – MariaDB installation and setup.
If you ever need to download an entire website, perhaps for off-line viewing, wget can do the job. For example:
$ wget --recursive --no-clobber --page-requisites --html-extension --convert-links --restrict-file-names=windows --domains website.org --no-parent www.website.org/tutorials/html/
This command downloads the website www.website.org/tutorials/html/.
The options are:
--recursive
: download the entire website
--domains website.org
: don't follow links outside website.org