Ask PostgreSQL where its configuration file is
$ psql postgres
psql (9.2.2)
Type "help" for help.
#!/usr/bin/php
<?php
/*
 * Convert a JSON file to CSV and print it.
 *
 * The JSON should be an array of objects (dictionaries) with a flat
 * structure and the same keys in every object.
 * The column order is taken from the keys of the first element.
 *
 * Example:
 *   input:  [{"a": 1, "b": 2}, {"a": 3, "b": 4}]
 *   output: a,b
 *           1,2
 *           3,4
 */
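// The script body was cut off at this point; below is a minimal sketch,
// assuming the usual json_decode()/fputcsv() approach rather than the
// original (unknown) implementation.
$input = isset($argv[1]) ? $argv[1] : 'php://stdin';
$rows = json_decode(file_get_contents($input), true);
if (!is_array($rows) || count($rows) === 0) {
    fwrite(STDERR, "Expected a non-empty JSON array of objects\n");
    exit(1);
}
$out = fopen('php://stdout', 'w');
fputcsv($out, array_keys($rows[0])); // header row: key order from the first element
foreach ($rows as $row) {
    fputcsv($out, array_values($row));
}
fclose($out);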
-- show running queries (pre 9.2)
SELECT procpid, age(clock_timestamp(), query_start), usename, current_query
FROM pg_stat_activity
WHERE current_query != '<IDLE>' AND current_query NOT ILIKE '%pg_stat_activity%'
ORDER BY query_start desc;
-- show running queries (9.2+; current_query was split into query and state,
-- so filter on state instead of the old '<IDLE>' marker)
SELECT pid, age(clock_timestamp(), query_start), usename, query
FROM pg_stat_activity
WHERE state != 'idle' AND query NOT ILIKE '%pg_stat_activity%'
ORDER BY query_start desc;
#!/usr/bin/env python
import sys
import boto
import pprint

del_flag = ''
if len(sys.argv) > 1:
    del_flag = sys.argv[1]
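# The script is truncated here. What follows is a purely illustrative
# continuation, assuming it enumerates S3 buckets with boto and deletes
# them when a delete flag is passed; the original intent is unknown.
conn = boto.connect_s3()
for bucket in conn.get_all_buckets():
    pprint.pprint({'name': bucket.name, 'created': bucket.creation_date})
    if del_flag == '--delete':
        conn.delete_bucket(bucket.name)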
The web is full of benchmarks showing the supernatural speed of Git even with very big repositories, but unfortunately they measure the wrong variable. Size is not what matters; the number of files in the repository really is!
Why is that? Well, it's because Git works very differently from Synergy. You don't have to check out a file in order to edit it; Git will detect the change for you automatically. But at what price?
The price is that every Git operation that needs to know which files changed (git status, git commit, and so on) executes an lstat() call for every single file in the working tree.
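A quick way to see this yourself on Linux is to count the syscalls of a git status with strace (assuming strace is installed; depending on the platform and libc, the calls may show up as stat or newfstatat instead of lstat):

$ strace -c -e trace=lstat git status

The summary table strace prints at the end shows the call count, which comes out to roughly one lstat() per tracked file.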
Wow! So how does that perform on a fairly large repository? Let's find out! For this test I will use a project with 19384 files in 1326 folders.