(C-x means ctrl+x, M-x means alt+x)
The default prefix is C-b. If you (or your muscle memory) prefer C-a, you need to add this to ~/.tmux.conf:
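# remap the prefix from C-b to C-a (the usual remap; double-check option names against your tmux version's man page)
set-option -g prefix C-a
unbind-key C-b
bind-key C-a send-prefix

Reload the configuration with tmux source-file ~/.tmux.conf (or restart the tmux server) for the new prefix to take effect.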
package main

// broadcaster fans each message received on listen out to every registered
// listener: channels arriving on in are registered, channels arriving on out
// are removed.
func broadcaster(in chan chan string, out chan chan string, listen chan string) {
    chans := map[chan string]bool{}
    for {
        select {
        case c := <-in:
            chans[c] = true
        case c := <-out:
            delete(chans, c)
        case msg := <-listen:
            for c := range chans {
                c <- msg
            }
        }
    }
}

package main

import (
    "fmt"
    "time"

    "labix.org/v2/mgo"
    "labix.org/v2/mgo/bson"
)

// Person is the document stored in MongoDB through mgo.
type Person struct {
    Id        bson.ObjectId `bson:"_id,omitempty"`
    Name      string
    Timestamp time.Time
}

func main() {
    session, err := mgo.Dial("localhost") // assumes a local mongod
    if err != nil {
        panic(err)
    }
    defer session.Close()

    p := &Person{Id: bson.NewObjectId(), Name: "Alice", Timestamp: time.Now()}
    if err := session.DB("test").C("people").Insert(p); err != nil {
        panic(err)
    }
    fmt.Println("inserted", p.Id.Hex())
}

type InitFunction func() (interface{}, error)

type ConnectionPoolWrapper struct {
    size int
    conn chan interface{}
}

// InitPool calls the init function size times and stores each connection in a
// buffered channel. If the init function fails during any call, then the
// creation of the pool is considered a failure.
func (p *ConnectionPoolWrapper) InitPool(size int, initfn InitFunction) error {
    p.conn = make(chan interface{}, size)
    for i := 0; i < size; i++ {
        conn, err := initfn()
        if err != nil {
            return err
        }
        p.conn <- conn
    }
    p.size = size
    return nil
}

-- show running queries (pre 9.2)
SELECT procpid, age(clock_timestamp(), query_start), usename, current_query
FROM pg_stat_activity
WHERE current_query != '<IDLE>' AND current_query NOT ILIKE '%pg_stat_activity%'
ORDER BY query_start desc;

-- show running queries (9.2)
SELECT pid, age(clock_timestamp(), query_start), usename, query
FROM pg_stat_activity
WHERE query != '<IDLE>' AND query NOT ILIKE '%pg_stat_activity%'
ORDER BY query_start desc;
Asynchronous programming can be tricky for beginners, so I think it's useful to iron out some basic concepts and avoid common pitfalls.
For an explanation of asynchronous programming in general, I recommend one of the [many][2] [resources][3] [online][4].
I will focus solely on asynchronous programming in [Tornado][1]. From Tornado's homepage:
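To make the callback model concrete, here is a minimal sketch (not from the original post) of an asynchronous handler, assuming the callback-style API of Tornado 2.x/3.x; the @tornado.web.asynchronous decorator and the callback argument to fetch() were removed in later Tornado releases.

import tornado.ioloop
import tornado.web
from tornado.httpclient import AsyncHTTPClient

class MainHandler(tornado.web.RequestHandler):
    @tornado.web.asynchronous
    def get(self):
        # fetch() returns immediately; the IOLoop calls on_response when the
        # HTTP response arrives, so the process is never blocked while waiting.
        AsyncHTTPClient().fetch("http://example.com/", callback=self.on_response)

    def on_response(self, response):
        self.write(response.body)
        self.finish()  # with @asynchronous, the request stays open until finish()

if __name__ == "__main__":
    application = tornado.web.Application([(r"/", MainHandler)])
    application.listen(8888)
    tornado.ioloop.IOLoop.instance().start()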
Yesterday I upgraded our running elasticsearch cluster on a site that serves a few million search requests a day, with zero downtime. I've been asked to describe the process, hence this blog post.
To make it more complicated, the cluster was running elasticsearch version 0.17.8 (released 6 Oct 2011) and I upgraded it to the latest 0.19.10. There have been 21 releases between those two versions, with a lot of functional changes, so I needed to be ready to roll back if necessary.
We run elasticsearch on two biggish boxes: 16 cores plus 32GB of RAM. All indices have 1 replica, so all data (about 45GB) is stored on both boxes. The primary data for our main indices is also stored in our database. We have a few other indices whose data lives only in elasticsearch, but those are updated only once a day. Finally, we store our sessions in elasticsearch, with active sessions cached in memcached.
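Not from the post, but useful context for the setup above: with one replica per index, you can watch the cluster return to green between node restarts during a rolling upgrade. A minimal sketch against the _cluster/health endpoint (available in both 0.17 and 0.19), assuming the REST API is reachable on localhost:9200:

import json
from urllib.request import urlopen

def cluster_health(host="http://localhost:9200"):
    # _cluster/health reports the overall status plus node and shard counts.
    with urlopen(host + "/_cluster/health") as resp:
        return json.load(resp)

health = cluster_health()
# "green" means every primary and replica shard is allocated again.
print(health["status"], health["number_of_nodes"])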
_ = (
255,
lambda
V ,B,c
:c and Y(V*V+B,B, c
-1)if(abs(V)<6)else
( 2+c-4*abs(V)**-0.4)/i
) ;v, x=1500,1000;C=range(v*x
);import struct;P=struct.pack;M,\
j ='<QIIHHHH',open('M.bmp','wb').write