This gist is part of a blog post. Check it out at:
http://jasonrudolph.com/blog/2011/08/09/programming-achievements-how-to-level-up-as-a-developer
Installing Arch:
sudo vim /etc/pacman.conf
Update the package list: sudo pacman -Syy
Run sudo pacman -Syu before installing any software (to refresh the repositories and upgrade installed packages first).
* Timing issue:
  - Change the hardware clock to use UTC time:
    sudo timedatectl set-local-rtc 0
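For example, the refresh-and-install can be done in one step (the package name here is just a placeholder):

# refresh the package databases, upgrade the system, then install the requested package
sudo pacman -Syu htop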
#
# Working with branches
#

# Get the current branch name (not so useful in itself, but used in
# other aliases)
branch-name = "!git rev-parse --abbrev-ref HEAD"

# Push the current branch to the remote "origin", and set it to track
# the upstream branch
publish = "!git push -u origin $(git branch-name)"
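Assuming these live in the [alias] section of your ~/.gitconfig, usage looks like this (the leading "!" makes publish run as a shell command, which is what lets it reuse branch-name):

git branch-name   # prints the current branch, e.g. feature/login
git publish       # pushes that branch to origin and sets it as the upstream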
Set httpOnly (and secure to true if running over SSL) when setting cookies.
Use csrf for preventing Cross-Site Request Forgery: http://expressjs.com/api.html#csrf
Avoid bodyParser() and only use multipart explicitly. To avoid multipart's vulnerability to 'temp file' bloat, use the defer property and pipe() the multipart upload stream to the intended destination.
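A rough sketch of that advice with the Express 3-era middleware this gist refers to (the secret and cookie values are placeholders; these bundled helpers were removed in Express 4):

var express = require('express');
var app = express();

app.use(express.cookieParser());
app.use(express.session({ secret: 'replace-me' }));  // csrf needs a session

// instead of the catch-all bodyParser(), register only the parsers you need
app.use(express.json());
app.use(express.urlencoded());

app.use(express.csrf());  // Cross-Site Request Forgery protection

app.get('/', function (req, res) {
  // set cookie flags explicitly: httpOnly keeps the cookie away from client-side
  // JavaScript, secure means it is only ever sent over HTTPS
  res.cookie('prefs', 'dark', { httpOnly: true, secure: true });
  res.send('ok');
});

app.listen(3000);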
Secure sessions are easy, but they're not very well documented, so I'm changing that.
Here's a recipe for secure sessions in Node.js when NginX is used as an SSL proxy:

The desired configuration for using NginX as an SSL proxy is to offload SSL processing
and to put a hardened web server in front of your Node.js application, like:

[NODE.JS APP] <- HTTP -> [NginX] <- HTTPS -> [CLIENT]

To do this, here's what you need to do:
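On the Node.js side, a minimal sketch (assuming Express 3-era middleware; the secret is a placeholder) is to tell Express to trust the proxy headers NginX sets and to mark the session cookie secure so it only ever travels over HTTPS:

var express = require('express');
var app = express();

// NginX terminates SSL, so trust the X-Forwarded-* headers it sets
app.enable('trust proxy');

app.use(express.cookieParser());
app.use(express.session({
  proxy: true,                                // trust the reverse proxy for secure cookies
  secret: 'replace-with-a-real-secret',       // placeholder
  cookie: { httpOnly: true, secure: true }    // session cookie only sent over HTTPS
}));

app.listen(3000);  // NginX proxies the HTTPS traffic here over plain HTTP

For this to work, the NginX HTTPS server block has to forward the original protocol (proxy_set_header X-Forwarded-Proto $scheme;); otherwise Express never sees the request as secure and the session cookie will not be set.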
Let's have some command-line fun with curl, [jq][1], and the [new GitHub Search API][2].
Today we're looking for:
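Whatever the queries end up being, the basic curl + jq pattern looks roughly like this (the search terms and jq filter below are placeholders, not the actual searches from this post):

# search repositories and pull a couple of fields out of each hit with jq
curl -s "https://api.github.com/search/repositories?q=language:javascript&sort=stars&order=desc" \
  | jq '.items[] | {name: .full_name, stars: .stargazers_count}'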
/**
 * Module dependencies
 */
var express = require('express');
var fs = require('fs');
var mongoose = require('mongoose');
var Schema = mongoose.Schema;

// img path
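// (sketch) the gist presumably goes on to define an image path and a schema;
// the fields below are hypothetical placeholders, not the original code
var ImageSchema = new Schema({
  name: String,
  path: String,                               // e.g. where the uploaded image lives on disk
  created: { type: Date, default: Date.now }
});
var Image = mongoose.model('Image', ImageSchema);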
# install needed libraries
sudo yum install texinfo libXpm-devel giflib-devel libtiff-devel libotf-devel

# compile autoconf
cd /tmp
wget ftp://ftp.gnu.org/gnu/autoconf/autoconf-2.68.tar.bz2
tar xjvf autoconf-2.68.tar.bz2
cd autoconf-2.68/
./configure && make && sudo make install
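To check that the freshly built autoconf is the one your shell picks up (assuming the default /usr/local prefix):

# should report version 2.68 and /usr/local/bin/autoconf
autoconf --version
which autoconf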
Magic words:
psql -U postgres
Some interesting flags (to see all, use -h or --help depending on your psql version):

-E : will describe the underlying queries of the \ commands (cool for learning!)
-l : psql will list all databases and then exit (useful if the user you connect with doesn't have a default database, like at AWS RDS)
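For example:

psql -U postgres -l    # list all databases and exit
psql -U postgres -E    # interactive session that also echoes the SQL behind \ commands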
For this configuration you can use any web server you like; I decided to use nginx because it's the one I work with most.

Generally, a properly configured nginx can handle up to 400K-500K requests per second (clustered); the most I have seen is 50K-80K requests per second (non-clustered) at around 30% CPU load. Granted, that was on 2 x Intel Xeon with HyperThreading enabled, but it can work without problems on slower machines.

Keep in mind that this config is used in a testing environment, not in production, so you will need to find the best way to implement most of these features for your own servers.
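As a rough illustration of the kind of knobs such a config touches (the values below are placeholders, not tuned recommendations):

# nginx.conf sketch - illustrative values only
worker_processes auto;         # one worker per CPU core

events {
    worker_connections 4096;   # max simultaneous connections per worker
    multi_accept on;
}

http {
    sendfile on;
    tcp_nopush on;
    keepalive_timeout 30;
}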