Simon Morley (simonmorley) · London
curl https://api.stripe.com/v1/customers \
  -u [apikey]: \
  -d "card[number]=4000000000000002" \
  -d "card[exp_month]=12" \
  -d "card[exp_year]=2015" \
  -d "card[cvc]=123" \
  -d validate=false
{
"object": "customer",
"created": 1388967139,
@rynbyjn
rynbyjn / google_maps_stub.coffee
Created January 4, 2014 00:28
Create spy objects for jasmine testing.
eventMethods = [
'Ga',
'Ke',
'Nj',
'Og',
'T',
'addDomListener',
'addDomListenerOnce',
'addListener',
'addListenerOnce',
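A list of method names like this is typically handed to jasmine.createSpyObj to fake the google.maps.event namespace. The usage sketch below is an assumption about how the stub gets wired up, not part of the gist preview above:

# Hedged usage sketch: replace google.maps.event with a spy object
window.google ?= {}
window.google.maps ?= {}
window.google.maps.event = jasmine.createSpyObj('google.maps.event', eventMethods)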
@blmarket
blmarket / README.mkd
Last active December 3, 2017 16:23
HBase with thrift using node.js


Install (on Mac OS X)

homebrew

Trivial

hbase
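Spelled out as commands, that outline roughly amounts to the following; the formula and package names are my assumptions about what the README intends, not a quote from it:

# Homebrew itself is a one-line install from https://brew.sh (the "trivial" step)
brew install hbase      # local HBase; it includes a Thrift gateway (`hbase thrift start`)
brew install thrift     # Thrift compiler, only needed to regenerate bindings
npm install thrift      # node.js Thrift client used to talk to HBase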

RabbitMQ Basics

As demonstrated by the tutorials on the website, RabbitMQ can be used for everything from queuing background work to building RPC systems. To use RabbitMQ effectively, it is important to understand the core concepts: queues, exchanges, bindings, and messages. The documentation on rabbitmq.com is excellent, so I won't go into much depth, but it's worth briefly mentioning the core ideas.

Basic Terminology

Exchanges are where messages are sent. Every time a message is pushed into RabbitMQ, it goes through an exchange.
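To make those terms concrete, here is a minimal publish example in Go using the streadway/amqp client; the library choice and the exchange, queue, and routing-key names are mine, not something this gist prescribes:

package main

import (
	"log"

	"github.com/streadway/amqp"
)

func main() {
	// Connect and open a channel; all protocol operations happen on channels.
	conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ch, err := conn.Channel()
	if err != nil {
		log.Fatal(err)
	}

	// Declare an exchange, a queue, and a binding between them.
	if err := ch.ExchangeDeclare("events", "topic", true, false, false, false, nil); err != nil {
		log.Fatal(err)
	}
	q, err := ch.QueueDeclare("worker-queue", true, false, false, false, nil)
	if err != nil {
		log.Fatal(err)
	}
	if err := ch.QueueBind(q.Name, "events.#", "events", false, nil); err != nil {
		log.Fatal(err)
	}

	// Publish to the exchange; the binding routes the message to the queue.
	err = ch.Publish("events", "events.signup", false, false, amqp.Publishing{
		ContentType: "text/plain",
		Body:        []byte("hello"),
	})
	if err != nil {
		log.Fatal(err)
	}
}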

@bgentry
bgentry / lock.go
Last active December 13, 2022 08:51
Redis locking in Go with redigo #golang
package main

import (
	"errors"

	"github.com/garyburd/redigo/redis"
)

var ErrLockMismatch = errors.New("key is locked with a different secret")

const lockScript = `
local v = redis.call("GET", KEYS[1])
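The preview cuts the script off, but a script like this is typically wrapped with redigo's redis.NewScript and run per lock attempt. The helper below is a sketch under that assumption; the argument order and the convention that the script returns 1 on success are guesses, not taken from the gist:

// Hedged sketch: assumes lockScript takes one key (the lock name) plus a
// secret and TTL, and returns 1 when the lock is acquired.
var lockCmd = redis.NewScript(1, lockScript)

func tryLock(conn redis.Conn, name, secret string, ttlMillis int) error {
	ok, err := redis.Bool(lockCmd.Do(conn, name, secret, ttlMillis))
	if err != nil {
		return err
	}
	if !ok {
		return ErrLockMismatch
	}
	return nil
}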

Setting up Flume NG, listening to syslog over UDP, with an S3 Sink

My goal was to set up Flume on my web instances and write all events into S3, so I could easily use other tools like Amazon Elastic MapReduce and Amazon Redshift.

I didn't want to have to deal with log rotation myself, so I set up Flume to read from a syslog UDP source. In this case, Flume NG acts as a syslog server, so as long as Flume is running, my web application can simply write to it in syslog format on the specified port. Most languages have plugins for this.

At the time of this writing, I've been able to get Flume NG up and running on three EC2 instances, all writing to the same bucket.
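For reference, a minimal agent configuration along these lines might look like the sketch below; the agent name, port, channel sizing, and bucket path are illustrative assumptions rather than the configuration actually used here:

# assumed agent name "a1"; all values are illustrative
a1.sources = syslog
a1.channels = mem
a1.sinks = s3

a1.sources.syslog.type = syslogudp
a1.sources.syslog.host = 0.0.0.0
a1.sources.syslog.port = 5140
a1.sources.syslog.channels = mem

a1.channels.mem.type = memory
a1.channels.mem.capacity = 10000

# the HDFS sink can write to an s3n:// path when the Hadoop S3 jars are on the classpath
a1.sinks.s3.type = hdfs
a1.sinks.s3.channel = mem
a1.sinks.s3.hdfs.path = s3n://ACCESS_KEY:SECRET_KEY@your-bucket/flume/%Y-%m-%d
a1.sinks.s3.hdfs.fileType = DataStream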

Install Flume NG on instances

@bgentry
bgentry / rated.go
Created October 11, 2012 23:48
Go token bucket rate limiter #golang
package main

import (
	"fmt"
	"time"
)

func main() {
	ticker := rateLimit(4, 10)
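The preview stops before rateLimit is defined. Below is a self-contained token-bucket sketch with the same shape, assuming rateLimit(burst, perSecond) returns a channel you receive from before each operation; this is my reconstruction, not the gist's code:

package main

import (
	"fmt"
	"time"
)

// rateLimit allows up to burst immediate operations, then refills tokens at
// perSecond per second. Receive from the returned channel before each
// rate-limited operation.
func rateLimit(burst, perSecond int) <-chan time.Time {
	tokens := make(chan time.Time, burst)
	// Pre-fill the bucket so the first burst of receives does not block.
	for i := 0; i < burst; i++ {
		tokens <- time.Now()
	}
	// Refill one token per interval; drop tokens when the bucket is full.
	go func() {
		for t := range time.Tick(time.Second / time.Duration(perSecond)) {
			select {
			case tokens <- t:
			default:
			}
		}
	}()
	return tokens
}

func main() {
	limiter := rateLimit(4, 10)
	for i := 0; i < 20; i++ {
		<-limiter // blocks until a token is available
		fmt.Println("request", i, time.Now().Format("15:04:05.000"))
	}
}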
@nblumoe
nblumoe / angularjs_resource_tokenhandler.js
Created July 5, 2012 07:34
AngularJS service to send auth token with $resource requests
.factory('TokenHandler', function() {
  var tokenHandler = {};
  var token = "none";

  tokenHandler.set = function( newToken ) {
    token = newToken;
  };

  tokenHandler.get = function() {
    return token;
  };

  return tokenHandler;
})
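One hedged way to attach the token to $resource requests (in AngularJS builds where a parameter default may be a function, evaluated per request) is sketched below; the resource name, URL, and parameter name are made-up examples, not part of the original gist:

// Hedged usage sketch: 'Todo' and '/api/todos/:id' are placeholders.
.factory('Todo', ['$resource', 'TokenHandler', function($resource, tokenHandler) {
  return $resource('/api/todos/:id', {
    id: '@id',
    // evaluated for every request, so the current token is always sent
    token: function() { return tokenHandler.get(); }
  });
}])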
@andreyvit
andreyvit / tmux.md
Created June 13, 2012 03:41
tmux cheatsheet

tmux cheat sheet

(C-x means ctrl+x, M-x means alt+x)

Prefix key

The default prefix is C-b. If you (or your muscle memory) prefer C-a, you need to add this to ~/.tmux.conf:

remap prefix to Control + a
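The preview keeps only that comment; the lines that usually go with it in ~/.tmux.conf are the standard recipe below (not a quote from the gist itself):

set -g prefix C-a
unbind C-b
bind C-a send-prefix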

@duydo
duydo / elasticsearch_best_practices.txt
Last active June 20, 2024 09:59
Elasticsearch - Index best practices from Shay Banon
If you want, I can try and help with pointers as to how to improve the indexing speed you get. It's quite easy to really increase it by using some simple guidelines, for example:
- Use create in the index API (assuming you can).
- Relax the real-time aspect from 1 second to something a bit higher (index.engine.robin.refresh_interval).
- Increase the indexing buffer size (indices.memory.index_buffer_size); it defaults to 10% of the heap.
- Increase the number of dirty operations that trigger an automatic flush (so the translog won't get really big, even though it's FS based) by setting index.translog.flush_threshold (defaults to 5000).
- Increase the memory allocated to the elasticsearch node. By default it's 1g.
- Start with a lower replica count (even 0), and then once the bulk loading is done, increase it to the value you want using the update_settings API. This will improve things as fewer shards will possibly be allocated to each machine.
- Increase the number of machines you have so
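Several of these settings can be applied to a live index through the update_settings API mentioned above. A hedged example follows; the index name and values are placeholders, and the exact setting keys varied across early Elasticsearch releases:

curl -XPUT 'http://localhost:9200/my_index/_settings' -d '{
  "index": {
    "refresh_interval": "30s",
    "number_of_replicas": 1
  }
}'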