Since Twitter doesn't have an edit button, it's a suitable host for JavaScript modules.
Source tweet: https://twitter.com/rauchg/status/712799807073419264
const leftPad = await requireFromTwitter('712799807073419264');
Picking the right architecture = Picking the right battles + Managing trade-offs
function certchain() {
    # Usage: certchain <ip|domain[:port]>
    # Display PKI chain-of-trust for a given domain
    # GistID: https://gist.github.com/joshenders/cda916797665de69ebcd
    if [[ "$#" -ne 1 ]]; then
        echo "Usage: ${FUNCNAME} <ip|domain[:port]>"
        return 1
    fi
    local host_port="$1"
(ns firstshot.chessknightmove | |
(:refer-clojure :exclude [== >= <= > < =]) | |
(:use clojure.core.logic | |
clojure.core.logic.arithmetic)) | |
(defn knight-moves | |
"Returns the available moves for a knight (on a 8x8 grid) given its current position." | |
[x y] | |
(let [xmax 8 ymax 8] | |
    (run* [q]
These are the Kickstarter Engineering and Data role definitions for both teams.
""" | |
Minimal character-level Vanilla RNN model. Written by Andrej Karpathy (@karpathy) | |
BSD License | |
""" | |
import numpy as np | |
# data I/O | |
data = open('input.txt', 'r').read() # should be simple plain text file | |
chars = list(set(data)) | |
data_size, vocab_size = len(data), len(chars)
The standard way of understanding the HTTP protocol is via the request-reply pattern. Each HTTP transaction consists of a finitely bounded HTTP request and a finitely bounded HTTP response.
However, it's also possible for both sides of an HTTP 1.1 transaction to stream their possibly unbounded data. The advantage is that the sender can send data larger than its own memory limit, and the receiver can act on the data as it arrives, without waiting for the entire message.
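To make this concrete, here is a minimal sketch (not from the gist above) of a sender streaming a request body of unknown length over HTTP 1.1, written in Scala against java.net.HttpURLConnection; the endpoint URL and the generated records are illustrative placeholders.

import java.net.{HttpURLConnection, URL}

object ChunkedPost {
  def main(args: Array[String]): Unit = {
    // Hypothetical endpoint; any HTTP/1.1 server that accepts a chunked POST will do.
    val conn = new URL("http://localhost:8080/ingest")
      .openConnection().asInstanceOf[HttpURLConnection]
    conn.setRequestMethod("POST")
    conn.setDoOutput(true)
    // 0 = default chunk size. This switches the request body to
    // Transfer-Encoding: chunked, so its total size never has to be
    // known up front or held in memory all at once.
    conn.setChunkedStreamingMode(0)

    val out = conn.getOutputStream
    Iterator.from(1).take(1000000).foreach { i =>
      out.write(s"record $i\n".getBytes("UTF-8"))
    }
    out.close()

    // The response can likewise be consumed incrementally as it arrives.
    val in = scala.io.Source.fromInputStream(conn.getInputStream)
    in.getLines().take(5).foreach(println)
    in.close()
  }
}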
Every application ever written can be viewed as some sort of transformation on data. Data can come from different sources, such as a network or a file or user input or the Large Hadron Collider. It can come from many sources all at once to be merged and aggregated in interesting ways, and it can be produced into many different output sinks, such as a network or files or graphical user interfaces. You might produce your output all at once, as a big data dump at the end of the world (right before your program shuts down), or you might produce it more incrementally. Every application fits into this model.
The scalaz-stream project is an attempt to make it easy to construct, test, and scale programs that fit within this model (which is to say, everything). It does this by providing an abstraction around a "stream" of data, which is really just the notion of a sequence of values being pulled, one at a time, out of some unspecified data source. On top of this abstraction, scalaz-stream provides combinators for transforming, merging, and running these streams.
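As a rough sketch of what programming against this abstraction looks like, here is a small Scala example in the spirit of the project's README: it describes a pipeline that reads a file line by line, transforms each line, and writes the results out, all incrementally. The file paths and the fahrenheitToCelsius helper are illustrative assumptions, not something stated above.

import scalaz.concurrent.Task
import scalaz.stream._

object Converter {
  // Placeholder transformation used purely for illustration.
  def fahrenheitToCelsius(f: Double): Double = (f - 32.0) * 5.0 / 9.0

  // A description of a streaming pipeline: read lines lazily from one file,
  // transform them, and write the results to another file. Nothing runs yet.
  val converter: Task[Unit] =
    io.linesR("testdata/fahrenheit.txt")
      .filter(line => line.trim.nonEmpty && !line.startsWith("//"))
      .map(line => fahrenheitToCelsius(line.toDouble).toString)
      .intersperse("\n")
      .pipe(text.utf8Encode)
      .to(io.fileChunkW("testdata/celsius.txt"))
      .run

  // Only this call performs any I/O; the input is processed incrementally,
  // so it never has to fit in memory.
  def main(args: Array[String]): Unit = converter.run
}

Because the pipeline is just a value until it is run, it can be composed, tested, and reused like any other piece of data.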