@eevans
eevans / Makefile
Created January 13, 2011 00:58
pipe the console output of junit tests into the script to colorize based on success/failure.
PREFIX ?= /usr/local

all:

install:
	install -m 755 color-junit $(PREFIX)/bin

# uninstall recipe assumed from context (the preview appears truncated): remove the installed script
uninstall:
	rm -f $(PREFIX)/bin/color-junit
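
A usage sketch, assuming color-junit simply filters stdin as the description says (the ant target here is only an example; any command that prints JUnit results on the console works):

ant test 2>&1 | color-junit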
@jboner
jboner / latency.txt
Last active November 18, 2024 08:23
Latency Numbers Every Programmer Should Know
Latency Comparison Numbers (~2012)
----------------------------------
L1 cache reference                       0.5 ns
Branch mispredict                          5 ns
L2 cache reference                         7 ns                      14x L1 cache
Mutex lock/unlock                         25 ns
Main memory reference                    100 ns                      20x L2 cache, 200x L1 cache
Compress 1K bytes with Zippy           3,000 ns        3 us
Send 1K bytes over 1 Gbps network     10,000 ns       10 us
Read 4K randomly from SSD*           150,000 ns      150 us          ~1GB/sec SSD
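
The derived columns follow directly from the raw ns figures; a quick shell check of two of them (not part of the original list):

awk 'BEGIN { printf "3,000 ns = %g us\n", 3000 / 1000; printf "main memory vs L1 cache: %gx\n", 100 / 0.5 }'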
@piscisaureus
piscisaureus / pr.md
Created August 13, 2012 16:12
Checkout github pull requests locally

Locate the section for your github remote in the .git/config file. It looks like this:

[remote "origin"]
	fetch = +refs/heads/*:refs/remotes/origin/*
	url = git@github.com:joyent/node.git

Now add the line fetch = +refs/pull/*/head:refs/remotes/origin/pr/* to this section. Obviously, change the github url to match your project's URL. It ends up looking like this:
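
[remote "origin"]
	fetch = +refs/heads/*:refs/remotes/origin/*
	url = git@github.com:joyent/node.git
	fetch = +refs/pull/*/head:refs/remotes/origin/pr/*

With that in place, git fetch origin pulls each open pull request down as origin/pr/N, and you can check one out by number, e.g. (999 is only a placeholder):

git fetch origin
git checkout pr/999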

@stackedsax
stackedsax / add_metrics_endpoints.sh
Last active August 29, 2015 13:57
cloud metrics staging endpoints info
## get a token
curl -X POST https://staging.identity.api.rackspacecloud.com/v2.0/tokens -d '{"auth":{"passwordCredentials":{"username":"autoscale","password":"....."}}}' -H "Content-type: application/json" | python -m json.tool
export TOKEN=...

## which tenant?
export TENANT_ID=....

## find the endpoints' id from the list of available endpoints
curl -H "X-Auth-Token: $TOKEN" https://staging.identity.api.rackspacecloud.com/v2.0/OS-KSCATALOG/endpointTemplates | python -m json.tool
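
The preview ends here, but the file name suggests the remaining step associates a chosen endpoint template with the tenant. A sketch of what that call might look like, assuming the stock Keystone v2 OS-KSCATALOG endpoint-association extension and a placeholder template id of 123:

## add the chosen endpoint template to the tenant
curl -X POST -H "X-Auth-Token: $TOKEN" -H "Content-type: application/json" -d '{"OS-KSCATALOG:endpointTemplate":{"id":123}}' https://staging.identity.api.rackspacecloud.com/v2.0/tenants/$TENANT_ID/OS-KSCATALOG/endpoints | python -m json.tool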
@rbranson
rbranson / gist:038afa9ad7af3693efd0
Last active September 29, 2016 17:44
Disaggregated Proxy & Storage Nodes

The point of this is to use cheap machines with small/slow storage to coordinate client requests, while dedicating the machines with big, fast storage to doing what they do best. I found that request coordination was contributing to about half the CPU usage on our Cassandra nodes, on average. Solid state storage is quite expensive, nearly doubling the cost of typical hardware. Splitting the roles also means that if people have control over hardware placement within the network, they can place proxy nodes closer to the client without impacting their storage footprint or fault tolerance characteristics.

This is accomplished in Cassandra by passing the -Dcassandra.join_ring=false option when the process is started. These nodes will connect to the seeds, cache the gossip data, load the schema, and begin listening for client requests. Messages like "/x.x.x.x is now UP!" will appear on the other nodes.
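
A minimal sketch of bringing such a coordinator-only node up, assuming a package install where JVM options live in /etc/cassandra/cassandra-env.sh (paths and service names vary by install):

# persist the flag so restarts keep the node out of the ring
echo 'JVM_OPTS="$JVM_OPTS -Dcassandra.join_ring=false"' | sudo tee -a /etc/cassandra/cassandra-env.sh
sudo service cassandra restart

# or pass it directly for a one-off foreground start
cassandra -f -Dcassandra.join_ring=false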

There are also some more practical benefits to this. Handling client requests caused us to push the NewSize of the heap up