Adam Greene (@skippy)

  • San Juan County, WA
skippy / gist:1037128
Created June 21, 2011 02:38
chef oddness with ssh_known_hosts
skippy / middleware.rb
Created June 6, 2011 17:19 — forked from xdissent/middleware.rb
delete vagrant vm's chef client and node from chef server on destroy
class OnDestroyMiddleware
  def initialize(app, env)
    @app = app
  end

  def call(env)
    env["config"].vm.provisioners.each do |provisioner|
      env.ui.info "Attempting to remove client #{provisioner.config.node_name}"
      `knife client show #{provisioner.config.node_name}`
      if $?.to_i == 0
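The preview cuts off at the exit-status check; a minimal sketch of how the middleware might continue, assuming the standard knife client/node delete commands and that control is handed back to the chain via @app.call(env):

        # hypothetical continuation (not from the original gist): the client exists,
        # so remove both the client and the node from the Chef server, then resume
        # the middleware chain
        env.ui.info "Removing client #{provisioner.config.node_name}"
        `knife client delete #{provisioner.config.node_name} -y`
        `knife node delete #{provisioner.config.node_name} -y`
      end
    end
    @app.call(env)
  end
end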
> db.data_points.find({ sensor_id: 247, faulty: { $ne: true }, orig_relative_time: { $lte: new Date(1299305991280) }, deleted_at: { $exists: false } }).sort({ orig_relative_time: -1 }).limit(-1).explain()
{
  "cursor" : "BtreeCursor orig_relative_time_-1_sensor_id_-1",
  "nscanned" : 11859,
  "nscannedObjects" : 11859,
  "n" : 11859,
  "millis" : 1896,
  "indexBounds" : {
    "orig_relative_time" : [
      [
Map/reduce (m/r) overview:
map: break each document's date/value into the various time buckets (multiple emits per document), reading the values from collectionA
reduce: merge the emitted items for each key into key => array of values
finalize: check whether a stored calculation already exists in collectionB; if so, reuse some of its metadata. Populate a hash, run the calculations for each key, and then insert/upsert the result into collectionB
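A rough sketch of how a job along those lines could be wired up with the 1.x Ruby driver; the bucket sizes, the average calculation, and the collectionA/collectionB names are placeholders rather than the gist's actual code:

require 'rubygems'
require 'mongo'

db = Mongo::Connection.new('localhost', 27017).db('mapp-staging')

# map: emit this document's value once per time bucket (hour and day here,
# purely to illustrate the "multiple emits" described above)
map = <<-JS
  function() {
    var ms = this.orig_relative_time.getTime();
    var buckets = {
      hour: new Date(ms - (ms % (1000 * 60 * 60))),
      day:  new Date(ms - (ms % (1000 * 60 * 60 * 24)))
    };
    for (var granularity in buckets) {
      emit({ sensor_id: this.sensor_id, granularity: granularity, bucket: buckets[granularity] },
           { values: [this.value] });
    }
  }
JS

# reduce: merge every emitted item for a key into one array of values
reduce = <<-JS
  function(key, items) {
    var merged = { values: [] };
    items.forEach(function(item) { merged.values = merged.values.concat(item.values); });
    return merged;
  }
JS

# finalize: run the per-key calculation (just an average here) before the result
# is written out; the real finalize also consults any existing doc in collectionB
finalize = <<-JS
  function(key, reduced) {
    var sum = 0;
    reduced.values.forEach(function(v) { sum += v; });
    reduced.avg = reduced.values.length > 0 ? sum / reduced.values.length : 0;
    return reduced;
  }
JS

db['collectionA'].map_reduce(map, reduce,
  :finalize => finalize,
  :out      => { 'merge' => 'collectionB' })  # merge/upsert the output into collectionB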
Failing test: a Ruby acceptance test run with 6 threads:
db.currentOp()
{
  "inprog" : [
    {
      "opid" : 242886,
      "active" : true,
      "lockType" : "write",
      "waitingForLock" : false,
      "secs_running" : 893,
      "op" : "query",
> db.currentOp()
{
  "inprog" : [
    {
      "opid" : 101690,
      "active" : false,
      "lockType" : "read",
      "waitingForLock" : true,
      "op" : "getmore",
      "ns" : "local.oplog.$main",
working_directory '/data/myapp/current/'
worker_processes 16
listen '/var/run/engineyard/unicorn_myapp.sock', :backlog => 1024
timeout 60
pid "/var/run/engineyard/unicorn_myapp.pid"
# Based on http://gist.github.com/206253
logger Logger.new("log/unicorn.log")
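The snippet stops after the logger line; the gist it is based on continues with the usual preload/fork hooks for zero-downtime restarts, roughly along these lines (an assumption about the omitted part, not the actual file):

preload_app true

before_fork do |server, worker|
  # once a new master is running, ask the old one (compare the "master (old)"
  # process in the ps output below) to quit so its workers are replaced cleanly
  old_pid = "#{server.config[:pid]}.oldbin"
  if File.exists?(old_pid) && server.pid != old_pid
    begin
      Process.kill("QUIT", File.read(old_pid).to_i)
    rescue Errno::ENOENT, Errno::ESRCH
      # the old master is already gone
    end
  end
end

after_fork do |server, worker|
  # connections must not be shared across forked workers; re-establish per worker
  defined?(ActiveRecord::Base) and ActiveRecord::Base.establish_connection
end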
Thu Apr 1 08:03:54 runQuery: mapp-staging.entries{ orig_relative_time: new Date(1263657120000), owner_id: 22, owner_type: "Place::Visit", value: 83.0 }
Thu Apr 1 08:03:54 query mapp-staging.entries ntoreturn:1 reslen:289 nreturned:1 1ms
Thu Apr 1 08:03:54 runQuery: mapp-staging.entries{ orig_relative_time: new Date(1263646920000), owner_id: 22, owner_type: "Place::Visit", value: 119.0 }
Thu Apr 1 08:03:54 query mapp-staging.entries ntoreturn:1 reslen:289 nreturned:1 1ms
Thu Apr 1 08:03:54 runQuery: mapp-staging.entries{ orig_relative_time: new Date(1263639900000), owner_id: 22, owner_type: "Place::Visit", value: 129.0 }
Thu Apr 1 08:03:54 query mapp-staging.entries ntoreturn:1 reslen:289 nreturned:1 1ms
Thu Apr 1 08:03:54 runQuery: mapp-staging.entries{ orig_relative_time: new Date(1263627420000), owner_id: 22, owner_type: "Place::Visit", value: 107.0 }
Thu Apr 1 08:03:54 query mapp-staging.entries ntoreturn:1 reslen:289 nreturned:1 1ms
Thu Apr 1 08:03:54 runQuery: mapp-staging.downloads{ owner_id: 2
deploy@ip-nope /data/my_app/current $ ps auwx | grep unicorn
deploy 26724 0.0 1.8 235780 147988 ? S 01:15 0:06 unicorn_rails master (old) -c /data/my_app/shared/config/unicorn.rb -E staging -D
deploy 26753 0.0 1.5 211012 121636 ? S 01:16 0:00 unicorn_rails worker[0] -c /data/my_app/shared/config/unicorn.rb -E staging -D
deploy 26754 0.0 1.4 202880 112512 ? S 01:16 0:00 unicorn_rails worker[1] -c /data/my_app/shared/config/unicorn.rb -E staging -D
deploy 26755 0.0 1.4 205668 116276 ? S 01:16 0:00 unicorn_rails worker[2] -c /data/my_app/shared/config/unicorn.rb -E staging -D
deploy 26756 0.0 1.7 227096 137524 ? S 01:16 0:00 unicorn_rails worker[3] -c /data/my_app/shared/config/unicorn.rb -E staging -D
deploy 28472 0.0 1.8 235436 147920 ? S 02:19 0:06 unicorn_rails master -c /data/my_app/shared/config/unicorn.rb -E staging -D
deploy
# wrapping the new GridFileSystem so it has some of the nice helpers of the old GridFS class
# HACK: I set MongoMapper.database to the database I need for the particular call, so I can
#       get away with simpler method calls....
module Mongo
  class GridFSExt
    def self.read(file_loc)
      @gridfs = GridFileSystem.new(MongoMapper.database)
      @gridfs.open(file_loc, "r") { |f| f.read } rescue nil
    end
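The preview ends at the read helper; a companion write helper in the same style might look like this (a sketch under the same MongoMapper.database assumption, not the rest of the actual gist):

    # hypothetical companion to self.read: store data in GridFS under file_loc
    def self.write(file_loc, data)
      gridfs = GridFileSystem.new(MongoMapper.database)
      gridfs.open(file_loc, "w") { |f| f.write(data) }
    end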