Using pg.connect is the way to go in a web environment.
The PostgreSQL server can only handle one query at a time per connection. That means if you have a single global new pg.Client() connected to your backend, your entire app is bottlenecked by how fast postgres can respond to queries: it will literally line everything up, queuing each query behind the last. Yeah, it's async, so that's alright... but wouldn't you rather multiply your throughput by 10x? Use pg.connect and set pg.defaults.poolSize to something sane (we do 25-100, not sure of the right number yet).
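For illustration, here's a minimal sketch of the pooled approach, assuming the older node-postgres API that exposes pg.connect and pg.defaults.poolSize; the connection string and query are made up:

    var pg = require('pg');

    // hypothetical connection string; adjust for your environment
    var conString = 'postgres://user:password@localhost/mydb';

    // size of the shared connection pool (the 25-100 range suggested above)
    pg.defaults.poolSize = 25;

    // pg.connect checks a client out of the pool for this request;
    // done() must be called to return it when you're finished
    pg.connect(conString, function (err, client, done) {
      if (err) {
        return console.error('error fetching client from pool', err);
      }
      client.query('SELECT $1::int AS number', [1], function (err, result) {
        done(); // release the client back to the pool
        if (err) {
          return console.error('error running query', err);
        }
        console.log(result.rows[0].number);
      });
    });

Because each request checks out its own client, up to poolSize queries can be in flight at once instead of queuing behind a single connection.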
new pg.Client is for when you know what you're doing: when you need a single long-lived client for some reason, or need to very carefully control its life-cycle. A good example of this is when using LISTEN/NOTIFY. The listening client needs to stay around, stay connected, and not be shared so it can properly handle NOTIFY messages. Another example would be opening a one-off client to kill some hung queries, or in command-line scripts.
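As a rough sketch of the LISTEN/NOTIFY case, a dedicated long-lived client might look like this (the channel name and connection string are made up):

    var pg = require('pg');

    var conString = 'postgres://user:password@localhost/mydb'; // hypothetical

    // one long-lived client dedicated to listening; it is never shared
    // or returned to a pool, so NOTIFY messages aren't missed
    var listener = new pg.Client(conString);

    listener.connect(function (err) {
      if (err) {
        return console.error('could not connect listener', err);
      }
      listener.query('LISTEN my_channel'); // hypothetical channel name
    });

    // node-postgres emits a 'notification' event for each NOTIFY received
    listener.on('notification', function (msg) {
      console.log('channel:', msg.channel, 'payload:', msg.payload);
    });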
In the case where I do want queries executed sequentially (e.g. database setup and provisioning), can I rely on this "1 query at a time per connection" property to ensure a long list of SQL commands is executed in order? And in such a case, should I set up and then destroy a single pg.Client, or use pg.connect?
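For reference, here's a minimal sketch of the one-off-client setup being asked about, chaining each command's callback so the ordering is explicit rather than relying only on the driver's internal queue; the statements and connection string are made up:

    var pg = require('pg');

    var conString = 'postgres://user:password@localhost/mydb'; // hypothetical

    // hypothetical provisioning statements to run in order
    var statements = [
      'CREATE TABLE IF NOT EXISTS users (id serial PRIMARY KEY, name text)',
      "INSERT INTO users (name) VALUES ('admin')"
    ];

    var client = new pg.Client(conString);

    client.connect(function (err) {
      if (err) throw err;

      // run each statement only after the previous one has completed
      function runNext(i) {
        if (i >= statements.length) {
          client.end(); // tear the client down once setup is done
          return;
        }
        client.query(statements[i], function (err) {
          if (err) throw err;
          runNext(i + 1);
        });
      }

      runNext(0);
    });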