Johan Eckerström (jage)

@dentarg
dentarg / .zshrc
Created April 19, 2010 17:45
My .zshrc file
#!/usr/local/bin/zsh
#
# jage <[email protected]> http://jage.se
# dentarg <[email protected]> http://dentarg.net
#
# Settings
#OPENBSD_CVS='[email protected]:/cvs'
OPENBSD_CVS='[email protected]:/cvs'
NETBSD_CVS='[email protected]:/cvsroot'
@mislav
mislav / pagination.md
Created October 12, 2010 17:20
"Pagination 101" by Faruk Ateş

Pagination 101

Article by Faruk Ateş, [originally on KuraFire.net][original], which is currently down

One of the most commonly overlooked and under-refined elements of a website is its pagination controls. In many cases, these are treated as an afterthought. I rarely come across a website that has decent pagination, and it always makes me wonder why so few manage to get it right. After all, I'd say that pagination is pretty easy to get right. Alas, that doesn't seem to be the case, so after encouragement from Chris Messina on Flickr I decided to write my Pagination 101; hopefully it'll give you some clues as to what makes good pagination.

Before going into analyzing good and bad pagination, I want to explain just what I consider to be pagination: Pagination is any kind of control system that lets the user browse through pages of search results, archives, or any other kind of continued content. Search results are the o

@burke
burke / 0-readme.md
Created January 27, 2012 13:44 — forked from funny-falcon/cumulative_performance.patch
ruby-1.9.3-p327 cumulative performance patch for rbenv

This installs a patched ruby 1.9.3-p327 with various performance improvements and a backported COW-friendly GC, all courtesy of funny-falcon.

Requirements

You will also need a C compiler. If you're on Linux, you probably already have one or know how to install one. On OS X, you should install Xcode and install autoconf with Homebrew (brew install autoconf).
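
The preview cuts off before the actual install steps. As a hedged sketch only (not the gist's exact commands), a patched MRI can be installed with ruby-build's --patch flag, which applies a diff read from stdin; the patch URL below is a placeholder:

# Hedged sketch, not the original instructions; PATCH_URL is a placeholder.
PATCH_URL="https://example.com/ruby-1.9.3-p327-perf.patch"
curl -fsSL "$PATCH_URL" | rbenv install --patch 1.9.3-p327
rbenv global 1.9.3-p327
ruby -v   # confirm the freshly built ruby is active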

@nherment
nherment / backup.sh
Created February 29, 2012 10:42
Back up and restore an Elasticsearch index (shamelessly copied from http://tech.superhappykittymeow.com/?p=296)
#!/bin/bash
# herein we back up our indexes! this script should run at like 6pm or something, after logstash
# rotates to a new ES index and there's no new data coming in to the old one. we grab metadata,
# compress the data files, create a restore script, and push it all up to S3.
TODAY=`date +"%Y.%m.%d"`
INDEXNAME="logstash-$TODAY" # this had better match the index name in ES
INDEXDIR="/usr/local/elasticsearch/data/logstash/nodes/0/indices/"
BACKUPCMD="/usr/local/backupTools/s3cmd --config=/usr/local/backupTools/s3cfg put"
BACKUPDIR="/mnt/es-backups/"
YEARMONTH=`date +"%Y-%m"`
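
The preview stops at the variable setup. A minimal sketch of the remaining steps the comments describe (compress the index directory and push it to S3) could look like the following; the bucket name and exact layout are assumptions, not the original script:

# Hedged sketch of the steps described above; bucket name and layout are assumptions.
mkdir -p "$BACKUPDIR/$YEARMONTH"
tar czf "$BACKUPDIR/$YEARMONTH/$INDEXNAME.tar.gz" -C "$INDEXDIR" "$INDEXNAME"
$BACKUPCMD "$BACKUPDIR/$YEARMONTH/$INDEXNAME.tar.gz" "s3://my-es-backups/$YEARMONTH/$INDEXNAME.tar.gz"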
class PostsController < ActionController::Base
  def create
    Post.create(post_params)
  end

  def update
    Post.find(params[:id]).update_attributes!(post_params)
  end

  private
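  # Hedged sketch: the gist preview is truncated here. post_params presumably
  # uses Rails strong parameters; the permitted attributes below are assumptions.
  def post_params
    params.require(:post).permit(:title, :body)
  end
end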
@erikh
erikh / hack.sh
Created March 31, 2012 07:02 — forked from DAddYE/hack.sh
OSX For Hackers
#!/usr/bin/env sh
##
# This is a script with useful tips taken from:
# https://github.com/mathiasbynens/dotfiles/blob/master/.osx
#
# install it:
# curl -sL https://raw.github.com/gist/2108403/hack.sh | sh
#
@duydo
duydo / elasticsearch_best_practices.txt
Last active June 20, 2024 09:59
Elasticsearch - Index best practices from Shay Banon
If you want, I can try and help with pointers as to how to improve the indexing speed you get. It's quite easy to really increase it by using some simple guidelines, for example:
- Use create in the index API (assuming you can).
- Relax the real-time aspect from 1 second to something a bit higher (index.engine.robin.refresh_interval).
- Increase the indexing buffer size (indices.memory.index_buffer_size); it defaults to 10% of the heap.
- Increase the number of dirty operations that trigger an automatic flush (so the translog won't get really big, even though it's FS based) by setting index.translog.flush_threshold (defaults to 5000).
- Increase the memory allocated to the Elasticsearch node. By default it's 1g.
- Start with a lower replica count (even 0), and then once the bulk loading is done, increase it to the value you want using the update_settings API. This will improve things, as possibly fewer shards will be allocated to each machine.
- Increase the number of machines you have so
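
A hedged illustration (not part of the original message) of applying a couple of these ideas at bulk-load time through the index update-settings API; the URL, index name and values are placeholders, and setting names vary between Elasticsearch versions:

# Hedged illustration; index name and values are placeholders.
curl -XPUT "http://localhost:9200/my_index/_settings" -d '
{ "index": { "refresh_interval": "30s", "number_of_replicas": 0 } }'
# ... run the bulk load, then restore the settings:
curl -XPUT "http://localhost:9200/my_index/_settings" -d '
{ "index": { "refresh_interval": "1s", "number_of_replicas": 1 } }'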
@jboner
jboner / latency.txt
Last active June 2, 2025 03:01
Latency Numbers Every Programmer Should Know
Latency Comparison Numbers (~2012)
----------------------------------
L1 cache reference                           0.5 ns
Branch mispredict                            5   ns
L2 cache reference                           7   ns                      14x L1 cache
Mutex lock/unlock                           25   ns
Main memory reference                      100   ns                      20x L2 cache, 200x L1 cache
Compress 1K bytes with Zippy             3,000   ns        3 us
Send 1K bytes over 1 Gbps network       10,000   ns       10 us
Read 4K randomly from SSD*             150,000   ns      150 us          ~1GB/sec SSD
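
A rough sanity check on the network figure above: sending 1 KB over a 1 Gbps link is about 8,000 bits / 10^9 bits per second ≈ 8 us of raw serialization time, which lines up with the ~10 us quoted once protocol and switching overhead are added.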
@langalex
langalex / login.sh
Created June 15, 2012 14:37
Logging in to the hotel wifi at Nordic Ruby 2012
# take one of the wifi cards from the bar and fill in the blanks
# this will log you into the hotel wifi without having to type in the username/password every time
curl -XPOST -d "username=<username>&password=<password>&operator=homerun.telia.com" https://login.homerun.telia.com/sd/login
@dentarg
dentarg / mk_symlinks.sh
Created June 19, 2012 20:42
Symlink stuff from home.git
#!/bin/sh
add_dir_link() {
    DIR=$1
    cd "$HOME"
    if [ -L "$DIR" ]; then
        echo "Link exists: $DIR"
    elif [ -d "$DIR" ]; then
        echo "Dir exists: $DIR"
    else
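        # Hedged sketch: the gist preview is truncated here. Presumably the missing
        # branch creates the symlink from the home.git checkout; the source path
        # below is an assumption, not the original script.
        ln -s "$HOME/home.git/$DIR" "$DIR"
        echo "Created link: $DIR"
    fi
}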