... or Why Pipelining Is Not That Easy
Golang Concurrency Patterns for brave and smart.
By @kachayev
(comment ; Fun with transducers, v2
;; Still haven't found a brief + approachable overview of Clojure 1.7's new
;; transducers in the particular way I would have preferred myself - so here goes:

;;;; Definitions

;; Looking at the `reduce` docstring, we can define a 'reducing-fn' as:
(fn reducing-fn ([]) ([accumulation next-input])) -> new-accumulation
;; (The `[]` arity is actually optional; it's only used when calling
;; `reduce` w/o an init-accumulator).
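To make the definition concrete, here is a minimal sketch of the same idea in Python (an illustration only, not Clojure's API; the name `mapping` is hypothetical): a reducing-fn folds each input into the accumulation, and a transducer is just a function from reducing-fn to reducing-fn.

from functools import reduce

# A reducing-fn: (accumulation, next-input) -> new-accumulation.
def add(acc, x):
    return acc + x

# A transducer: takes a reducing-fn, returns a transformed reducing-fn.
# mapping(f) applies f to every input before it reaches the inner reducing-fn.
def mapping(f):
    def transducer(rf):
        def new_rf(acc, x):
            return rf(acc, f(x))
        return new_rf
    return transducer

inc_then_add = mapping(lambda x: x + 1)(add)
print(reduce(inc_then_add, [1, 2, 3], 0))  # prints 9, i.e. (2 + 3 + 4)

Composing such wrappers with ordinary function composition applies the leftmost transformation to each input first, which is why transducer pipelines read left to right.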
Rich Hickey • 3 years ago
Sorry, I have to disagree with the entire premise here.
A wide variety of experiences might lead to well-roundedness, but not to greatness, nor even goodness. By constantly switching from one thing to another you are always reaching above your comfort zone, yes, but doing so by resetting your skill and knowledge level to zero.
Mastery comes from a combination of at least several of the following:
Every application ever written can be viewed as some sort of transformation on data. Data can come from different sources, such as a network or a file or user input or the Large Hadron Collider. It can come from many sources all at once to be merged and aggregated in interesting ways, and it can be produced into many different output sinks, such as a network or files or graphical user interfaces. You might produce your output all at once, as a big data dump at the end of the world (right before your program shuts down), or you might produce it more incrementally. Every application fits into this model.
The scalaz-stream project is an attempt to make it easy to construct, test, and scale programs that fit within this model (which is to say, everything). It does this by providing an abstraction around a "stream" of data, which is really just the notion of some number of data values being sequentially pulled out of some unspecified data source. On top of this abstraction, scalaz-stream provides combinators for constructing, transforming, and consuming such streams.
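That pull-based model is easy to sketch outside Scala. Below is a small Python analogy (assumed for illustration; this is not scalaz-stream's API): values are pulled lazily from a source, transformed in flight, and consumed by a sink, so the whole stream is never materialized at once.

def source(path):
    # The 'unspecified data source': pull lines lazily from a file.
    with open(path) as f:
        for line in f:
            yield line.rstrip('\n')

def transform(stream):
    # A transformation applied as each element is pulled through.
    for line in stream:
        yield line.upper()

def sink(stream, out_path):
    # An output sink consuming the stream incrementally.
    with open(out_path, 'w') as out:
        for line in stream:
            out.write(line + '\n')

# Nothing runs until the sink pulls:
# sink(transform(source('in.txt')), 'out.txt')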
The standard way of understanding the HTTP protocol is via the request/reply pattern: each HTTP transaction consists of a finitely bounded HTTP request and a finitely bounded HTTP response.
However, it's also possible for both parts of an HTTP/1.1 transaction to stream their possibly unbounded data. The advantage is that the sender can send data that exceeds its own memory limit, and the receiver can act on the data incrementally, as it arrives.
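As a concrete sketch using Python's standard http.client (the host and chunk size here are placeholder assumptions): reading the response body in fixed-size chunks lets the receiver act on data far larger than any single in-memory buffer.

import http.client

conn = http.client.HTTPSConnection('example.com')
conn.request('GET', '/')
resp = conn.getresponse()

total = 0
while True:
    chunk = resp.read(8192)   # pull at most 8 KiB of the body at a time
    if not chunk:             # an empty bytes object signals end of body
        break
    total += len(chunk)       # stand-in for acting on each chunk as it arrives
print(total, 'bytes streamed')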
""" | |
Minimal character-level Vanilla RNN model. Written by Andrej Karpathy (@karpathy) | |
BSD License | |
""" | |
import numpy as np | |
# data I/O | |
data = open('input.txt', 'r').read() # should be simple plain text file | |
chars = list(set(data)) | |
data_size, vocab_size = len(data), len(chars) |
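For context, the script's natural next step (sketched here following the gist's setup, in Python 3 print syntax) builds the character/index lookup tables the RNN trains over:

print('data has %d characters, %d unique.' % (data_size, vocab_size))
char_to_ix = { ch: i for i, ch in enumerate(chars) }  # character -> integer index
ix_to_char = { i: ch for i, ch in enumerate(chars) }  # integer index -> character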
These are the role definitions for both the Engineering and Data teams at Kickstarter.