nicholas a. evans (nevans)

nevans / gist:1314531
Created October 25, 2011 22:20
premature micro-optimizations are the root of all evil

ruby 1.8.7 (REE)

irb(main):031:0> start = Time.now; 10_000_000.times { 'foo' }; puts Time.now - start
=> 2.154165
irb(main):032:0> start = Time.now; 10_000_000.times { "foo" }; puts Time.now - start
=> 2.177029

Remind me again, why do I care about 23 milliseconds for every 10 million string loads? Methinks that just one instance of needing to change single quotes to double quotes so you can do some interpolation that you hadn't originally anticipated will cost more time than those 23 milliseconds ever save.
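For comparison, here is the same measurement expressed with Ruby's standard Benchmark library; this is just a sketch of the equivalent timing and not part of the original gist (absolute numbers will vary by Ruby version and machine).

require "benchmark"

# Roughly the same comparison as the irb session above.
Benchmark.bmbm do |bm|
  bm.report("single quotes") { 10_000_000.times { 'foo' } }
  bm.report("double quotes") { 10_000_000.times { "foo" } }
end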

nevans / resque_enqueue.rb
Created November 21, 2011 16:32
resque: queue.enqueue(job_selector, arg1, arg2)
# in response to https://twitter.com/#!/avdi/status/138513455622791168
# Yes, resque's enqueue API is wacky, but in practice it's not a problem,
# because it's trivial to route around.
# This is untested, and it skips any resque enqueue hooks you might have set up,
# but those aren't major hurdles to fix.
class ResqueQueueWrapper
  def initialize(queue, resque=Resque)
    @queue = queue
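The preview cuts off inside initialize. A minimal sketch of where the wrapper appears to be heading, consistent with the comments above (the enqueue method and its use of Resque.push are my guess, not the gist's code):

class ResqueQueueWrapper
  def initialize(queue, resque=Resque)
    @queue  = queue
    @resque = resque
  end

  # Route every job through this wrapper's queue, regardless of the job
  # class's own @queue. Resque.push skips the enqueue hooks, which matches
  # the caveat in the comments above.
  def enqueue(job_class, *args)
    @resque.push(@queue, "class" => job_class.to_s, "args" => args)
  end
end

Usage would then look like queue.enqueue(job_selector, arg1, arg2), as in the description above.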
nevans / sugared_resque_job.rb
Created November 29, 2011 20:20
Maybe a good way to make resque job definition just a tad simpler.
module SugaredResqueJob
  def self.new(queue, resque=Resque, &perform)
    Module.new do
      extend self
      define_method :queue   do queue end
      define_method :enqueue do |*args| resque.enqueue(self, *args) end
      define_method :perform do |*args| perform.call(*args) end
    end
  end
end
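A usage sketch (the queue name, job name, and block body here are made up for illustration):

# Defines a job object that responds to queue, enqueue, and perform.
ThumbnailJob = SugaredResqueJob.new(:thumbnails) do |image_id|
  Thumbnailer.generate(image_id)   # placeholder for the real work
end

ThumbnailJob.queue        # => :thumbnails
ThumbnailJob.enqueue(42)  # calls Resque.enqueue(ThumbnailJob, 42)
# A resque worker would later call ThumbnailJob.perform(42).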
nevans / imap_astring.treetop
Created December 12, 2011 18:00
Treetop and IMAP astrings
# See http://tools.ietf.org/html/rfc3501#section-9
# INTERNET MESSAGE ACCESS PROTOCOL - VERSION 4rev1 - Formal Syntax
module IMAP
  grammar Astring
    # astring = 1*ASTRING-CHAR / string
    rule astring
      ASTRING_CHAR+ / string
    end
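The grammar preview is truncated above (the ASTRING_CHAR and string rules are cut off). Assuming those rules are filled in per RFC 3501, using the grammar from Ruby would look roughly like this; the file name and input string are illustrative:

require "treetop"

# Compiles imap_astring.treetop and defines IMAP::AstringParser.
Treetop.load "imap_astring"

parser = IMAP::AstringParser.new
node   = parser.parse("INBOX")
puts node ? "parsed: #{node.text_value}" : parser.failure_reason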
nevans / foo_task.rb
Created March 8, 2012 15:42
good or bad ruby on rails OO?
# I have a new feature that primarily deals with a single class. It's a
# relatively small and self contained feature; I don't expect that other
# features will ever develop dependencies on it, but it will be highly
# dependent on the single class that it deals with. For various reasons, it
# isn't appropriate to make an Observer class. I'd like to keep all of the
# code pertaining to this feature highly cohesive (localized, in one file if
# possible). But it does make some specific demands of the class that it's
# coupled with.
module FooTask
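The preview stops at the module declaration. Purely as a guess at the shape being discussed, the feature might end up as a single module whose methods reach into the coupled class (Foo, its predicate, and the column name below are placeholders, not the gist's code):

module FooTask
  def self.perform_for(foo)
    # This is the coupling in question: the task demands that Foo expose
    # an eligibility check and a timestamp column.
    return unless foo.eligible_for_task?
    foo.update_attribute(:task_performed_at, Time.now)
  end
end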
nevans / bm.rb
Created March 21, 2012 18:00
Why is EM.epoll slowing down my connections in other threads?
#!/usr/bin/env ruby
# copied from https://gist.github.com/939696, and
# edited to add redis (which is where I was experiencing the issue)
require 'rubygems'
require 'net/http'
require 'hiredis'
require "redis/connection/hiredis"
require 'redis'
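The preview stops at the requires. A rough sketch of the kind of measurement the script performs, reconstructed from the description rather than from the gist itself: run the EventMachine reactor (with epoll enabled) in one thread and time plain Redis calls from another, building on the requires above.

require "eventmachine"
require "benchmark"

EM.epoll  # toggle this to compare epoll against the default select reactor

reactor = Thread.new { EM.run }
Thread.pass until EM.reactor_running?

redis = Redis.new
elapsed = Benchmark.realtime do
  1_000.times { redis.get("some-key") }   # key name is arbitrary
end
puts "1000 GETs took #{elapsed}s"

EM.schedule { EM.stop_event_loop }
reactor.join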
nevans / rails_class_level_memoization.md
Created April 6, 2012 17:26
seen during a code review

The following code came up during a code review:

def self.account_for(email)
  @account_for ||= {}
  return @account_for[email] if @account_for.include?(email)
  @account_for[email] = Account.find(email)
end
nevans / celluloid-retrying_supervisor_proxy.rb
Created September 11, 2012 20:09
celluloid supervisor that attempts to run the called method no matter what
#!/usr/bin/env ruby
# encoding: UTF-8
require "celluloid"
class BustedActor
  include Celluloid
  include Celluloid::Logger
  def works_great; 42 end
  def broken; raise "hell" end
end
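The preview ends with the busted actor. A minimal sketch of the retrying-proxy idea named in the title, assuming the Celluloid registry API of that era (supervise_as and Celluloid::Actor[]); this is my reconstruction, not the gist's code:

# Keep calling through to whatever actor is currently registered under the
# name, retrying if the actor died out from under us mid-call.
class RetryingSupervisorProxy
  def initialize(name)
    @name = name
  end

  def method_missing(meth, *args, &blk)
    Celluloid::Actor[@name].send(meth, *args, &blk)
  rescue Celluloid::DeadActorError
    sleep 0.1   # give the supervisor a moment to restart the actor
    retry
  end
end

BustedActor.supervise_as :busted
proxy = RetryingSupervisorProxy.new(:busted)
proxy.works_great   # => 42, even if the actor has crashed and been restarted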
nevans / README.md
Last active August 28, 2020 12:48
Improving speed on slow CouchDB reduce functions

A common pattern in my CouchDB view reductions is to "merge" together the objects generated by the map function while preserving the original form, and only query that view with group=true. It's easiest to write the view reductions naively, so they continue merging the data all the way up to the top-level reduction (group=false).

But because CouchDB stores the reduction for each b+tree node inside that node, moderately sized objects can result in a lot of extra writes to disk, and moderately complicated functions can waste a lot of indexer CPU time running merge JavaScript whose results are never queried. Re-balancing the b+tree compounds this problem. It can slow the initial creation of large indexes tremendously, and if the index becomes badly fragmented, it will affect query speed as well.

One solution: once the reduction is beyond the keys at the group level I care about, stop running the merge code and return the simplest data that works (e.g. null or an empty object).

nevans / gist:9374041
Last active February 16, 2023 23:12
simple ruby console histogram generator
# Pass in an enumeration of data and
# (optionally) a block to extract the grouping aspect of the data.
#
# Optional: sort_by lambda (operates on group key and count)
#
def puts_hist(data, sort_by: nil, &blk)
  data   = data.map(&blk) if blk
  counts = data.each_with_object(Hash.new(0)) { |k, h| h[k] += 1 }
  max    = counts.values.max
  width  = Pry::Terminal.size!.last
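The preview stops at the terminal-width lookup. The first lines below repeat what is visible above; the bar drawing after them is my guess at how the rest of the method renders, not the gist's actual continuation:

require "pry"   # for Pry::Terminal

def puts_hist(data, sort_by: nil, &blk)
  data   = data.map(&blk) if blk
  counts = data.each_with_object(Hash.new(0)) { |k, h| h[k] += 1 }
  max    = counts.values.max
  width  = Pry::Terminal.size!.last
  # Hypothetical continuation: scale each count to the remaining width and
  # draw a bar of "#" characters next to its label and count.
  label_width = counts.keys.map { |k| k.to_s.length }.max
  bar_width   = [width - label_width - 10, 10].max
  sorted      = counts.sort_by { |k, c| sort_by ? sort_by.call(k, c) : -c }
  sorted.each do |key, count|
    bar = "#" * (count * bar_width / max)
    puts format("%-#{label_width}s %6d %s", key, count, bar)
  end
end

# Example: histogram of word lengths
puts_hist(%w[foo bar bazaar quux x], &:length)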