This gist shows how to create a GIF screencast using only free OS X tools: QuickTime, ffmpeg, and gifsicle.
To capture the video (file size: 19 MB), use the free "QuickTime Player" application (File > New Screen Recording):
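The conversion step then boils down to something like the following (a sketch: in.mov and out.gif are placeholder names, and the size, frame rate, and delay values are illustrative and worth tuning):

ffmpeg -i in.mov -s 600x400 -pix_fmt rgb24 -r 10 -f gif - | gifsicle --optimize=3 --delay=3 > out.gif

ffmpeg decodes the QuickTime movie and emits GIF frames on stdout; gifsicle then recompresses them (--delay is in hundredths of a second per frame).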
# Greatest common divisor of more than 2 numbers. Am I terrible for doing it this way?

def gcd(*numbers):
    """Return the greatest common divisor of the given integers"""
    from fractions import gcd  # Python 2; use math.gcd on Python 3.5+
    return reduce(gcd, numbers)  # reduce is a builtin in Python 2

# Least common multiple is not in standard libraries? It's in gmpy, but this is simple enough:

def lcm(*numbers):
    """Return the least common multiple of the given integers"""
    def lcm(a, b):
        return (a * b) // gcd(a, b)
    return reduce(lcm, numbers)
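# A quick sanity check of both helpers:
# >>> gcd(12, 18, 24)
# 6
# >>> lcm(4, 6, 8)
# 24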
#!/usr/bin/env python

"""
xlsx2tsv filename.xlsx [sheet number or name]

Parse a .xlsx (Excel OOXML, which is not OpenOffice) into tab-separated values.
If it has multiple sheets, need to give a sheet number or name.
Outputs honest-to-goodness tsv, no quoting or embedded \\n\\r\\t.

One reason I wrote this is because Mac Excel 2008 export to csv or tsv messes
up encodings, converting everything to something that's not utf8 (macroman
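# The core trick (a sketch, not the full script): a .xlsx file is a zip archive
# of XML, so the standard library is enough. The member paths below follow the
# standard OOXML layout.
import zipfile
from xml.etree import ElementTree

z = zipfile.ZipFile("filename.xlsx")
# Cell strings are stored once in sharedStrings.xml and referenced by index
# from each worksheet.
strings = ElementTree.fromstring(z.read("xl/sharedStrings.xml"))
sheet1 = ElementTree.fromstring(z.read("xl/worksheets/sheet1.xml"))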
# Add field
echo '{"hello": "world"}' | jq --arg foo bar '. + {foo: $foo}'
# {
#   "hello": "world",
#   "foo": "bar"
# }

# Override field value
echo '{"hello": "world"}' | jq --arg foo bar '. + {hello: $foo}'
# {
#   "hello": "bar"
# }
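# Remove field (a small symmetric example, using jq's del builtin)
echo '{"hello": "world", "foo": "bar"}' | jq 'del(.foo)'
# {
#   "hello": "world"
# }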
// Pipe creates a synchronous in-memory pipe.
// It can be used to connect code expecting an io.Reader
// with code expecting an io.Writer.
// Reads on one end are matched with writes on the other,
// copying data directly between the two; there is no internal buffering.
// It is safe to call Read and Write in parallel with each other or with
// Close. Close will complete once pending I/O is done. Parallel calls to
// Read, and parallel calls to Write, are also safe:
// the individual calls will be gated sequentially.
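A minimal runnable sketch of the contract described above, wiring a goroutine that writes to code that reads (standard library only):

package main

import (
	"fmt"
	"io"
	"os"
)

func main() {
	r, w := io.Pipe()
	go func() {
		// Each write blocks until the reader consumes it; no internal buffering.
		fmt.Fprint(w, "hello from the writer\n")
		w.Close() // unblocks the reader with io.EOF once pending I/O is done
	}()
	if _, err := io.Copy(os.Stdout, r); err != nil {
		panic(err)
	}
}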
package main

import (
	"bufio"
	"fmt"
	"io"
	"io/ioutil"
	"log"
	"net/http"
	"sync"
package main

import (
	"io"
	"log"
	"mime/multipart"
	"net/http"
	"os"
	"path/filepath"
	"runtime"
Latency Comparison Numbers
--------------------------
L1 cache reference                         0.5 ns
Branch mispredict                            5 ns
L2 cache reference                           7 ns             14x L1 cache
Mutex lock/unlock                           25 ns
Main memory reference                      100 ns             20x L2 cache, 200x L1 cache
Compress 1K bytes with Zippy             3,000 ns    3 us
Send 1K bytes over 1 Gbps network       10,000 ns   10 us
Read 4K randomly from SSD*             150,000 ns  150 us     ~1GB/sec SSD
Either copy the aliases from the .gitconfig or run the commands in add-pr-alias.sh.

Easily check out local copies of pull requests from remotes:

git pr 4
- creates local branch pr/4 from the github upstream (if it exists) or origin remote and checks it out

git pr 4 someremote
- creates local branch pr/4 from someremote remote and checks it out
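A sketch of what such an alias boils down to (assuming GitHub's refs/pull/<number>/head convention; the gist's own .gitconfig is the authoritative version):

[alias]
    pr = "!f() { git fetch -fu ${2:-origin} refs/pull/$1/head:pr/$1 && git checkout pr/$1; }; f"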
## ngram following http://tidytextmining.com/ngrams.html
library(dplyr)
library(tidytext)
library(tidyr)
# load manually downloaded Web of Science data dump
tt <- jsonlite::stream_in(file("data/wos_total.json"), verbose = FALSE) %>%
filter(!is.na(AB))
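
# Following the linked chapter, a sketch of the next step: bigram tokenization
# (assumes the abstracts live in the AB column, as filtered above).
bigrams <- tt %>%
  unnest_tokens(bigram, AB, token = "ngrams", n = 2) %>%
  separate(bigram, c("word1", "word2"), sep = " ") %>%
  count(word1, word2, sort = TRUE)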