Consumer key: IQKbtAYlXLripLGPWd0HUA
Consumer secret: GgDYlkSvaPxGxC4X8liwpUoqKwwr3lCADbz8A7ADU
Consumer key: 3nVuSoBZnx6U4vzUxf5w
Consumer secret: Bcs59EFbbsdF6Sl9Ng71smgStWEGwXXKSjYvPVt7qys
Consumer key: CjulERsDeqhhjSme66ECg
// minimal Printer definition assumed here so the snippet compiles;
// the original gist omitted it
class Printer[T] {
  private var codeName = "unknown"
  private def printCodeName = println(codeName)
}

object Main extends App {
  val printer = new Printer[String]()
  val break = true
  val text = "access granted"
  // cannot be accessed: codeName and printCodeName are private to Printer
  // printer.printCodeName
  // printer.codeName = "Rejewski"
  // printer.printCodeName
}
Yesterday I upgraded our running elasticsearch cluster, on a site that serves a few million search requests a day, with zero downtime. I've been asked to describe the process, hence this blog post.
To make it more complicated, the cluster was running elasticsearch version 0.17.8 (released 6 Oct 2011) and I upgraded it to the latest, 0.19.10. There have been 21 releases between those two versions, with a lot of functional changes, so I needed to be ready to roll back if necessary.
We run elasticsearch on two biggish boxes, each with 16 cores and 32GB of RAM. All indices have 1 replica, so all data (about 45GB) is stored on both boxes. The primary data for our main indices also lives in our database. We have a few other indices whose data is stored only in elasticsearch, but those are updated only once daily. Finally, we store our sessions in elasticsearch, with active sessions cached in memcached.
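That one-replica-per-index setup is what makes a rolling restart safe: either box holds a full copy of the data while the other is down. As a minimal illustration (the index name here is hypothetical, not from the post), the replica count is just an index setting:

# illustrative only: keep one replica so either node can serve all data
# while the other is being upgraded (index name is made up)
curl -XPUT 'http://localhost:9200/main/_settings' -d '{
  "index": { "number_of_replicas": 1 }
}'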
unicorn.rb
-----------------------------------
application = "jarvis"
remote_user = "vagrant"
env = ENV["RAILS_ENV"] || "development"
RAILS_ROOT = File.join("/home", remote_user, application)

worker_processes 8
timeout 30
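For context, a config like this is handed to unicorn at boot. A typical invocation, sketched below, uses unicorn's standard flags; the working directory, config path, and environment are assumptions, not part of the original gist:

# illustrative: boot the app with the config above, daemonized
cd /home/vagrant/jarvis
bundle exec unicorn_rails -c config/unicorn.rb -E production -D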
L1 cache reference ......................... 0.5 ns
Branch mispredict ............................ 5 ns
L2 cache reference ........................... 7 ns
Mutex lock/unlock ........................... 25 ns
Main memory reference ...................... 100 ns
Compress 1K bytes with Zippy ............. 3,000 ns = 3 µs
Send 2K bytes over 1 Gbps network ....... 20,000 ns = 20 µs
SSD random read ........................ 150,000 ns = 150 µs
Read 1 MB sequentially from memory ..... 250,000 ns = 250 µs
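For scale, derived from the numbers above: a main memory reference costs 200x an L1 cache hit, and reading 1 MB sequentially from memory in 250 µs works out to roughly 4 GB/s.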
### blogged @ http://dojo4.com/blog/easy-cheasy-realtime-log-tailing-in-a-rails-admin-view
### the su controller action
def logs
  log = File.join(Rails.root, "log", "#{ Rails.env }.log")
  @lines = `tail -1024 #{ log }`.split(/\n/).reverse
end
#!/bin/bash
# herein we backup our indexes! this script should run at like 6pm or something, after logstash
# rotates to a new ES index and there's no new data coming in to the old one. we grab metadata,
# compress the data files, create a restore script, and push it all up to S3.
TODAY=`date +"%Y.%m.%d"`
INDEXNAME="logstash-$TODAY" # this had better match the index name in ES
INDEXDIR="/usr/local/elasticsearch/data/logstash/nodes/0/indices/"
BACKUPCMD="/usr/local/backupTools/s3cmd --config=/usr/local/backupTools/s3cfg put"
BACKUPDIR="/mnt/es-backups/"
YEARMONTH=`date +"%Y-%m"`
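The gist stops after the variable definitions. A minimal sketch of the compress-and-upload steps the header comment describes, reusing those variables; the S3 bucket name and archive layout are assumptions:

# sketch only: tar up the day's index and push it to S3 (bucket name assumed)
mkdir -p $BACKUPDIR
tar czf $BACKUPDIR/$INDEXNAME.tar.gz -C $INDEXDIR $INDEXNAME
$BACKUPCMD $BACKUPDIR/$INDEXNAME.tar.gz s3://es-backups/$YEARMONTH/$INDEXNAME.tar.gz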
# Ubuntu upstart file at /etc/init/mongodb.conf
limit nofile 32768 32768
kill timeout 300 # wait 300s between SIGTERM and SIGKILL.

pre-start script
    mkdir -p /data/mongodb &> /dev/null
    mkdir -p /data/logs/mongo &> /dev/null
    chown mongodb:nogroup /data/mongodb &> /dev/null
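The fragment cuts off inside the pre-start block. For completeness, the remainder of such an upstart job typically looks roughly like this; the log-directory chown, run levels, and mongod flags are assumptions, not part of the original:

    chown mongodb:nogroup /data/logs/mongo &> /dev/null  # assumed to mirror the data dir
end script

start on runlevel [2345]
stop on runlevel [06]

# assumed invocation: run mongod as the mongodb user against the dirs above
exec start-stop-daemon --start --chuid mongodb --exec /usr/bin/mongod -- \
    --dbpath /data/mongodb --logpath /data/logs/mongo/mongodb.log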
## Intro
Janus is a "basic distribution of vim plugins and tools intended to be run on top of the latest MacVIM snapshot." It is maintained by Yehuda Katz and Carl Lerche.
I recently whinged on Twitter that all I really want is vim with TextMate's Command-T go-to-file functionality and some efficient visual tab/buffer navigation. It turns out that MacVim + Janus, with just a few tweaks, is all that and more.
Follow the installation instructions on the GitHub project page.
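The README's installer has been a single curl-pipe-bash bootstrap; the URL below is the one published there, and may not match the exact snapshot this post refers to:

# installer one-liner as published in the Janus README (may have changed since)
curl -L https://bit.ly/janus-bootstrap | bash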