The web is full of benchmarks showing the supernatural speed of Git even with very big repositories, but unfortunately they tend to measure the wrong variable. Repository size is not what matters; the number of files in the repository really is!
Why is that? Well, it's because Git works in a very different way from Synergy. You don't have to check out a file in order to edit it; Git detects your changes automatically. But at what price?
The price is that every Git operation that needs to know which files changed (git status, git commit, and so on) executes an lstat() call for every single file in the working tree.
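You can watch this happen yourself by tracing Git's syscalls. Here is a minimal sketch, assuming a Linux machine with strace installed; the exact syscall names in the summary vary by platform and Git version (newer builds may show statx or fstatat instead of plain lstat):

```sh
# Summarize syscall counts made by `git status` in the current repository.
# Expect roughly one lstat()/stat()-family call per tracked file.
strace -f -c -e trace=lstat,stat,statx git status > /dev/null
```

The `-c` flag prints a count per syscall instead of a line per call, which makes the "one stat per file" pattern easy to spot.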
Wow! So how does that perform on a fairly large repository? Let's find out! For this test I will use an example project with 19384 files in 1326 folders.
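If you want a comparable playground without my example project, here is a hypothetical sketch that builds a synthetic repository of a similar shape; the folder and file counts are only rough approximations of the layout above:

```sh
#!/bin/sh
# Build a throwaway repo with ~1300 folders and ~19500 files,
# roughly matching the shape of the example project.
mkdir perftest && cd perftest && git init -q
for d in $(seq 1 1300); do
    mkdir -p "dir$d"
    for f in $(seq 1 15); do
        echo "content $d/$f" > "dir$d/file$f.txt"
    done
done
git add . && git commit -qm "initial import"
```

Running git status in a repository like this exercises one lstat() per tracked file, which is exactly the cost we want to measure.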