knewter / .rvmrc
Last active December 18, 2015 17:19
Simultaneous read/write hanging with SSLSocket in celluloid-io on JRuby
rvm jruby-1.7.4@celluloid_jruby_ssl_repro --create
#rvm ruby-1.9.3@celluloid_jruby_ssl_repro --create
knewter / unexpected.md
Created May 29, 2013 17:04
Can't make second command die when parent shell dies

When I execute the following:

su - deployer -c "ls; sleep 250"

It prints the ls output and then sleeps for a few minutes. If I hit Ctrl+C, I get this output:

Session terminated, terminating shell... ...killed.
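
The title above says the second command can't be made to die with the parent shell. One common workaround is to run the command in its own process group and kill the whole group when the launching process is interrupted. Below is a minimal sketch of that idea in Ruby; the su invocation is the one from above, but the wrapper itself is an assumption (not something from the gist), and whether it helps depends on how su sets up the child's session.

pid = Process.spawn('su - deployer -c "ls; sleep 250"', pgroup: true)

kill_group = proc do
  begin
    # With pgroup: true, the child's process-group id equals its pid, so the
    # negative pid signals su and anything it spawned (including the sleep).
    Process.kill("TERM", -pid)
  rescue Errno::ESRCH
    # the group is already gone
  end
end

trap("INT")  { kill_group.call; exit 130 }
trap("TERM") { kill_group.call; exit }
at_exit(&kill_group)

Process.wait(pid)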
trucoin_rails_development=# explain analyze
select to_timestamp(start_time * 86400) start_time_tz,
       to_timestamp((start_time + 1) * 86400) end_time_tz,
       sum(amount) as volume,
       min(price) as low,
       max(price) as high,
       coalesce(lag(max(last_price)) over (order by start_time), max(first_price)) as open,
       max(last_price) as close
from (
  select date/86400 as start_time, amount, price,
         first_value(price) over w as first_price,
         last_value(price) over w as last_price
  from trades
  where date/86400 >= ((1369176488/86400) - 120)
    and exchange_id = 1
  window w as (partition by date/86400 order by date rows between unbounded preceding and unbounded following)
) s
group by start_time
order by start_time;
def valid_score_method
  if score_as_multi?
    unless valid_scoring_for_multi
      self.errors[:base] << "Score Method must be None, Sum or Max for Checkbox or Select-multiple tags"
      return false
    end
  elsif score_as_text?
    unless valid_scoring_for_text?
      self.errors[:base] << "Score Method must be None, TextCompletion or TextNumeric for Text or Textarea tags"
      return false
    end
  end
  true # assumed return for the valid case
end
knewter / foo.sql
Last active December 17, 2015 00:00
-- 5-minute candles: each candle's "first" falls back to the previous candle's close via lag()
select to_timestamp(start_time * 300) start_time_tz,
       to_timestamp((start_time + 1) * 300) end_time_tz,
       sum(amount) as volume,
       min(price) as low,
       max(price) as high,
       coalesce(lag(max(last_price)) over (order by start_time), max(first_price)) as first,
       max(last_price) as last
from (
  select date/300 as start_time, amount, price,
         first_value(price) over w as first_price,
         last_value(price) over w as last_price
  from trades
  where date/300 >= ((1367153251/300) - 120)
  window w as (partition by date/300 order by date rows between unbounded preceding and unbounded following)
) s
group by start_time
order by start_time desc;

-- Same candles, but "first" is taken directly from the first trade inside each candle
select to_timestamp(start_time * 300) start_time_tz,
       to_timestamp((start_time + 1) * 300) end_time_tz,
       sum(amount) as volume,
       min(price) as low,
       max(price) as high,
       max(first_price) as first,
       max(last_price) as last
from (
  select date/300 as start_time, amount, price,
         first_value(price) over w as first_price,
         last_value(price) over w as last_price
  from trades
  where date > (1367153251 - (120 * 300))
  window w as (partition by date/300 order by date rows between unbounded preceding and unbounded following)
) s
group by start_time
order by start_time desc;
knewter / output
Last active December 16, 2015 23:48
start_time_tz | end_time_tz | volume | low | high | first | last
------------------------+------------------------+-----------------+---------------+---------------+---------------+---------------
2011-06-26 12:15:00-05 | 2011-06-26 12:20:00-05 | 2.0000000000 | 17.5100100000 | 17.5100100000 | 17.5100100000 | 17.5100100000
2011-06-26 12:30:00-05 | 2011-06-26 12:35:00-05 | 2.0000000000 | 17.5100100000 | 17.5100100000 | 17.5100100000 | 17.5100100000
2011-06-26 12:40:00-05 | 2011-06-26 12:45:00-05 | 1.0000000000 | 17.5100100000 | 17.5100100000 | 17.5100100000 | 17.5100100000
2011-06-26 12:45:00-05 | 2011-06-26 12:50:00-05 | 5.0699297300 | 15.0000000000 | 15.0000000000 | 15.0000000000 | 15.0000000000
2011-06-26 12:50:00-05 | 2011-06-26 12:55:00-05 | 5.0000000000 | 16.5000000000 | 16.5000000000 | 16.5000000000 | 16.5000000000
2011-06-26 12:55:00-05 | 2011-06-26 13:00:00-05 | 50.0000000000 | 16.5000000000 | 17.0000000000 | 16.5
knewter / gist:5515951
Last active December 16, 2015 23:48
discussion on #postgresql where people are trying to help me out with the candle data query
--- Log opened Fri May 03 21:44:08 2013
21:44 --> | jadams [[email protected]] has joined #postgresql
21:44 --- | Users 663 nicks [0 ops, 0 halfops, 0 voices, 663 normal]
21:44 --- | Channel #postgresql was synced in 2 seconds
21:44 < jadams> | hey hey
21:44 < supplicant> | hello
21:45 <-- | trbs2 [~trbs@2001:470:d2ad:1:4a5b:39ff:fe7d:1624] has quit (Remote host closed the connection)
21:46 < jadams> | are there any freelance dbas around these parts? Once I've spent another day or so on a query I might could use a dba to help make it go fast :D
21:46 <-- | nicholasf_ [[email protected]] has quit (Remote host closed the connection)
21:46 < jadams> | at present, I've got this working and it's ok-ish https://gist.github.com/knewter/5515761
knewter / explain
Last active December 16, 2015 23:39
candlestick data function
QUERY PLAN
---------------------------------------------------------------------------------------------------------------
 Limit  (cost=2669841.24..4708620.21 rows=120 width=37)
   CTE five_minute_intervals
     ->  Function Scan on generate_series n  (cost=4901.85..4944.35 rows=1000 width=4)
           InitPlan 1 (returns $0)
             ->  Aggregate  (cost=1584.69..1584.70 rows=1 width=8)
                   ->  Seq Scan on trades  (cost=0.00..1436.95 rows=59095 width=8)
           InitPlan 2 (returns $1)
             ->  Aggregate  (cost=1584.69..1584.70 rows=1 width=8)
# Log to a file relative to this script, create a publisher bound to a local
# TCP endpoint, supervise both proxy actors under Celluloid, start them, and
# then sleep forever so the main thread keeps the actors alive.
logger = MessageLogger.new(File.expand_path('../../log/logger.log', __FILE__))
publisher = Publisher.new('tcp://127.0.0.1:41125')

first_proxy = FirstProxy.supervise_as(:first_proxy, 'some_url')
second_proxy = SecondProxy.supervise_as(:second_proxy, 'some_url')

Celluloid::Actor[:first_proxy].start
Celluloid::Actor[:second_proxy].start

sleep
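
For context, here is a minimal sketch of what an actor registered with supervise_as, like the FirstProxy above, might look like. The class and method names mirror the snippet, but the body is an assumption rather than the gist's actual code, and it uses a plain TCPSocket where the gist title refers to an SSLSocket; it only illustrates the simultaneous read/write pattern: one task reading while another writes on the same socket.

require 'celluloid/io'

class FirstProxy
  include Celluloid::IO

  def initialize(url)
    @url = url  # e.g. the 'some_url' argument passed to supervise_as above
  end

  # Invoked via Celluloid::Actor[:first_proxy].start in the snippet above.
  def start
    @socket = Celluloid::IO::TCPSocket.new('127.0.0.1', 41125) # hypothetical endpoint
    async.read_loop   # reader task
    async.write_loop  # writer task runs concurrently on the same socket
  end

  def read_loop
    loop do
      data = @socket.readpartial(4096)
      puts "received #{data.bytesize} bytes"
    end
  end

  def write_loop
    loop do
      @socket.write("ping\n")
      sleep 1  # inside an actor this suspends the task, not the whole thread
    end
  end
end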