This is the reference point. All the other options are based on this.
|-- app
| |-- controllers
| | |-- admin
Latency Comparison Numbers (~2012)
----------------------------------
L1 cache reference                           0.5 ns
Branch mispredict                            5   ns
L2 cache reference                           7   ns                      14x L1 cache
Mutex lock/unlock                           25   ns
Main memory reference                      100   ns                      20x L2 cache, 200x L1 cache
Compress 1K bytes with Zippy             3,000   ns        3 us
Send 1K bytes over 1 Gbps network       10,000   ns       10 us
Read 4K randomly from SSD*             150,000   ns      150 us          ~1GB/sec SSD
# to generate your dhparam.pem file, run in the terminal
openssl dhparam -out /etc/nginx/ssl/dhparam.pem 2048
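The generated file can then be wired into the TLS server block with the ssl_dhparam directive. A minimal sketch, assuming the /etc/nginx/ssl/dhparam.pem path from the command above; the server name and certificate file names are placeholders, not part of the original configuration:

# example server block referencing the generated dhparam file (paths are placeholders)
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/nginx/ssl/example.com.crt;   # placeholder certificate
    ssl_certificate_key /etc/nginx/ssl/example.com.key;   # placeholder private key
    ssl_dhparam         /etc/nginx/ssl/dhparam.pem;       # file created by openssl above
}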
For this configuration you can use any web server you like; I decided to use nginx because it is what I work with most.
Generally, a properly configured nginx can handle up to 400K-500K requests per second (clustered); the most I have seen is 50K-80K requests per second (non-clustered) at around 30% CPU load. Of course, that was on 2x Intel Xeon
with Hyper-Threading enabled, but it will also work without problems on slower machines.
Keep in mind that this config is used in a testing environment, not in production, so you will need to find the best way to implement most of these features for your own servers.
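As a rough illustration of the kind of tuning meant above, here is a minimal sketch of performance-related nginx directives. The values are illustrative assumptions, not the measured setup described, and should be adjusted per server:

# illustrative tuning sketch - adjust values for your own hardware
worker_processes auto;          # one worker per CPU core

events {
    worker_connections 4096;    # connections handled per worker
    multi_accept on;            # accept as many new connections as possible at once
}

http {
    sendfile on;                # serve static files from the kernel, skipping user space
    tcp_nopush on;              # send response headers and file start in one packet
    keepalive_timeout 30;       # drop idle keep-alive connections sooner
    gzip on;                    # compress responses
}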
{
  "env": {
    "node": true,
    "es6": true
  },
  "ecmaFeatures": {
    "arrowFunctions": true,
    "blockBindings": true,
    "classes": true,
    "defaultParameters": true,