@cfedde
Last active December 27, 2015 04:19
How can I think about making this kind of query faster?
explain analyze
select to_char(count(bucket)/60.00, '999.99') as cps,
max(max_loss) as max_loss, bucket from summary_5_minute_device_sn
group by bucket
order by bucket;
                                                                       QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------
 Sort  (cost=2063766.71..2063793.82 rows=10844 width=12) (actual time=31076.144..31079.901 rows=24264 loops=1)
   Sort Key: bucket
   Sort Method: external merge  Disk: 800kB
   ->  HashAggregate  (cost=2062850.14..2063039.91 rows=10844 width=12) (actual time=31036.721..31061.935 rows=24264 loops=1)
         ->  Seq Scan on summary_5_minute_device_sn  (cost=0.00..1662281.08 rows=53409208 width=12) (actual time=0.018..7667.843 rows=53409326 loops=1)
 Total runtime: 31081.654 ms
(6 rows)
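
One direction to consider, sketched under assumptions rather than tested against this schema: nearly all of the time goes into the sequential scan of ~53M rows feeding the HashAggregate, so maintaining a pre-aggregated per-bucket rollup (for example a materialized view, available in PostgreSQL 9.3+) would let the report read only the ~24k aggregated rows. The rollup name below is hypothetical; the table and column names come from the query above.

-- Hypothetical rollup: pay the big aggregation once, then query the small result.
create materialized view summary_5_minute_by_bucket as
select bucket,
       count(bucket) as n,          -- rows per bucket, divided by 60.00 at report time
       max(max_loss) as max_loss
from summary_5_minute_device_sn
group by bucket;

create index on summary_5_minute_by_bucket (bucket);

-- The original report then scans ~24k pre-aggregated rows instead of ~53M:
select to_char(n / 60.00, '999.99') as cps,
       max_loss,
       bucket
from summary_5_minute_by_bucket
order by bucket;

-- Refresh as often as the data needs to be current:
refresh materialized view summary_5_minute_by_bucket;

Separately, the "Sort Method: external merge  Disk: 800kB" line suggests work_mem is too small for the final sort; raising it for the session (e.g. set work_mem = '16MB';) should keep that sort in memory, though that only trims the last few tens of milliseconds, not the scan itself.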