(C-x means ctrl+x, M-x means alt+x)
The default prefix is C-b. If you (or your muscle memory) prefer C-a, you need to add this to ~/.tmux.conf:
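The snippet itself is missing above; the widely used rebinding (standard tmux options, not copied from the original note) looks like this:

unbind C-b
set-option -g prefix C-a
bind-key C-a send-prefix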
Latency Comparison Numbers (~2012)
----------------------------------
L1 cache reference                           0.5 ns
Branch mispredict                            5   ns
L2 cache reference                           7   ns                      14x L1 cache
Mutex lock/unlock                           25   ns
Main memory reference                      100   ns                      20x L2 cache, 200x L1 cache
Compress 1K bytes with Zippy             3,000   ns        3 us
Send 1K bytes over 1 Gbps network       10,000   ns       10 us
Read 4K randomly from SSD*             150,000   ns      150 us          ~1GB/sec SSD
Title | Have It? | Transcribed | Content
--- | --- | --- | ---
What Lies Ahead | X | X | Whizzard |
Trace Amount | X | Not transcribing | Excerpt from Android: Free Fall |
Cyber Exodus | X | X | Chaos Theory |
A Study in Static | X | Not transcribing | Excerpt from Android: Golem (The Identity Trilogy) |
Humanity's Shadow | X | X | Andromeda |
Future Proof | X | Not transcribing | Excerpt from Android: Strange Flesh |
Creation and Control | X | X / X | Thomas Haas / Rielle "Kit" Peddler |
Opening Moves | X | X | Press Release / For Immediate Release |
//==================================================================
// SPARK INSTRUMENTATION
//==================================================================
import com.codahale.metrics.{MetricRegistry, Meter, Gauge}
import org.apache.spark.{SparkEnv, Accumulator}
import org.apache.spark.metrics.source.Source
import org.joda.time.DateTime
import scala.collection.mutable
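The imports above point at a custom Spark metrics Source. A minimal sketch, with illustrative class and metric names (note that on some Spark versions Source is private[spark], in which case the class must live under an org.apache.spark package):

// Expose a Spark accumulator and a rate meter through the Codahale registry.
class JobMetricsSource(records: Accumulator[Long]) extends Source {
  override val sourceName: String = "jobMetrics"
  override val metricRegistry: MetricRegistry = new MetricRegistry

  // Meter: mark() this on each processed batch to track throughput.
  val batchMeter: Meter = metricRegistry.meter(MetricRegistry.name("batches"))

  // Gauge: reports the accumulator's current value at every metrics poll.
  metricRegistry.register(MetricRegistry.name("records"), new Gauge[Long] {
    override def getValue: Long = records.value
  })
}

// Driver-side registration:
// SparkEnv.get.metricsSystem.registerSource(new JobMetricsSource(acc))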
#!/usr/bin/env bash
#
# Generate a set of TLS credentials that can be used to run development mode.
#
# Based on script by Ash Wilson (@smashwilson)
# https://github.com/cloudpipe/cloudpipe/pull/45/files#diff-15
#
# usage: sh ./genkeys.sh NAME HOSTNAME IP

set -o errexit
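The body of the script is not captured above; a hedged sketch of the kind of OpenSSL calls such a script makes for development TLS credentials (file names and the 365-day validity are illustrative, not from the original):

NAME="$1"; HOST="$2"; IP="$3"

# Minimal OpenSSL config so the certificate carries a SAN covering both
# the hostname and the IP address.
cat > "${NAME}.cnf" <<EOF
[req]
distinguished_name = dn
[dn]
[san]
subjectAltName = DNS:${HOST},IP:${IP}
EOF

# Self-signed key/certificate pair, for development use only.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout "${NAME}-key.pem" -out "${NAME}-cert.pem" \
  -subj "/CN=${HOST}" -extensions san -config "${NAME}.cnf"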
{
  "metadata" : {
    "name" : "Plot graph with D3",
    "user_save_timestamp" : "2014-12-15T00:55:09.510Z",
    "auto_save_timestamp" : "2014-12-15T00:50:41.883Z",
    "language_info" : {
      "name" : "scala",
      "file_extension" : "scala",
      "codemirror_mode" : "text/x-scala"
    },
docker run -d --name es elasticsearch
docker run -d --name logstash --link es:elasticsearch -v /tmp/logstash.conf:/config-dir/logstash.conf logstash logstash -f /config-dir/logstash.conf
docker run --link es:elasticsearch -d kibana
LOGSTASH_ADDRESS=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' logstash)
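The logstash container mounts /tmp/logstash.conf from the host; a minimal sketch of such a config, assuming a TCP input on port 5000 (on Logstash 1.x the output option is host rather than hosts):

input { tcp { port => 5000 } }
output { elasticsearch { hosts => ["elasticsearch:9200"] } }

The --link es:elasticsearch flag is what makes the hostname elasticsearch resolve inside the logstash and kibana containers.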
To use custom S3 endpoints with the latest Spark distribution, you need to add an external package (hadoop-aws). Custom endpoints can then be configured according to the docs.
bin/spark-shell --packages org.apache.hadoop:hadoop-aws:2.7.2
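For example, to point s3a at a non-AWS endpoint (the URL below is a placeholder for whatever S3-compatible service you run):

bin/spark-shell --packages org.apache.hadoop:hadoop-aws:2.7.2 \
  --conf spark.hadoop.fs.s3a.endpoint=http://localhost:9000

The spark.hadoop. prefix forwards the property to the underlying Hadoop configuration; setting sc.hadoopConfiguration.set("fs.s3a.endpoint", ...) inside the shell has the same effect.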
vault.barrier.*
name="vault_barrier"
method="$1"

vault.consul.*
name="vault_consul"
method="$1"

vault.route.*.*
name="vault_route"