This is a detailed runbook for setting up a production server on an Ubuntu 16.04 cloud VPS, for automated deployment of static web content served by NGINX and of a Node.js web application behind an NGINX reverse proxy, both with SSL/TLS (HTTPS) support.
The steps are as follows:
- Create a VPS instance
- Set Up Server Security
- Install Software
- Deploy Web Content and Web Application
- Configure Nginx
- Future Enhancements
Reference
This guide uses Digital Ocean to create the VPS.
- Log in to Digital Ocean
- Skip to step 3 if your SSH key has already been added to Digital Ocean. Under Settings > Security, add your SSH public key. If the public key was generated using PuTTYGen on Windows, then its format needs two tweaks before adding: first, remove all newline characters to yield a single-line string; second, prefix "ssh-rsa " to the key string, if not present already. Without these tweaks, Digital Ocean will treat the string as having an invalid format.
- Navigate to Droplets from the main menu, and click on the Create Droplet button. Select the desired configuration for the new droplet, and make sure you add your SSH key to it. Hit Create to create the new droplet.
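The PuTTYGen format tweak described above can be scripted. Below is a minimal sketch implementing exactly the two tweaks the text mentions (the helper name is our own; the key file path is a placeholder):

```shell
# Convert a PuTTYGen-exported public key into the single-line format
# Digital Ocean expects: strip newlines, ensure the "ssh-rsa " prefix.
# Usage: fix_putty_key <key-file>
fix_putty_key() {
    # remove all carriage-return/newline characters to get one line
    key=$(tr -d '\r\n' < "$1")
    # prefix "ssh-rsa " only if it is not present already
    case "$key" in
        ssh-rsa*) printf '%s\n' "$key" ;;
        *)        printf 'ssh-rsa %s\n' "$key" ;;
    esac
}
```

Run it against the exported key file and paste the single-line output into the Digital Ocean settings page.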
Reference
Open two new SSH sessions to the new server. One is a backup: switch to it periodically to make sure it does not get logged out due to inactivity. Using the other terminal session, follow the steps below to set up basic server security.
It is important to update the server's software packages to make sure you have all the latest security patches and system libraries. For more details about package management on Ubuntu refer to this.
- Update to the latest Linux kernel
Reference
- Update the list of available packages in the repositories
$ apt-get update
- Upgrade packages without package removal
$ apt-get -y upgrade
- Upgrade packages, removing packages as necessary (check whether this step is needed)
$ apt-get -y dist-upgrade
- Install fail2ban, a tool that scans log files and bans IPs that show malicious signs. Refer to the guide below for details on how to configure it. For this runbook we'll leave it with its default config, which should provide basic cover (for SSH and Nginx) out of the box.
$ sudo apt-get install fail2ban
- Create a non-root user (remember its password) and grant it sudo privileges.
$ adduser <username>
$ usermod -aG sudo <username>
- Set up SSH for the new user. This is done by installing the user's SSH public key in the user's home directory.
-
Switch to the new (non-root) user.
$ su - <username>
- Create a .ssh directory in the home directory and give only the user read, write, and execute permissions on it.
$ mkdir .ssh
$ chmod 700 .ssh
- Create the authorized_keys file in the .ssh directory and give only the user read and write permissions on it.
$ cd .ssh
$ touch authorized_keys
$ chmod 600 authorized_keys
- Open the authorized_keys file in a text editor and copy the user's public key into it.
- Start a new terminal session as the new user and make sure login and sudo privileges work as expected.
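The directory and permission steps above can be combined into one small sketch, run as the new user (it assumes $HOME is that user's home directory; you still copy the public key in afterwards):

```shell
# Set up ~/.ssh and authorized_keys with the permissions described above.
set -e
SSH_DIR="$HOME/.ssh"
AUTH_KEYS="$SSH_DIR/authorized_keys"

mkdir -p "$SSH_DIR"
chmod 700 "$SSH_DIR"      # only the user may read/write/enter the directory

touch "$AUTH_KEYS"
chmod 600 "$AUTH_KEYS"    # only the user may read/write the key file
```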
Change your SSH configuration to make it more secure.
-
$ vim /etc/ssh/sshd_config
-
An example of security through obscurity: changing the SSH port from the default 22 to some other random port number (> 1024) reduces the risk of being hit by a random scanner attack. It will, however, still not protect against an attack specifically targeted at your server, although those are usually not a problem unless you are a highly popular service or valuable brand that someone might want to specifically target.
Port <port-number>
-
Root login should be disabled, and all subsequent server administration should be done under the new non-root user (with sudo rights) created in the previous step. Be very careful while doing this. Make sure another active terminal is open as root, as a fall-back.
PermitRootLogin no
-
Disable password authentication. This is necessary to mitigate brute force attacks that try to guess passwords.
PasswordAuthentication no
PubkeyAuthentication yes
ChallengeResponseAuthentication no
-
Reload the SSH daemon for the config changes to take effect.
$ systemctl reload sshd
Note: Any open SSH connections will be closed upon restart. If you have a root session open for fall-back, reopen it immediately after this step.
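Before reloading, you can sanity-check that the edits took effect. A small helper sketch (the function name is our own; it simply greps the config file for an uncommented key/value pair):

```shell
# Check that an sshd config file sets <key> to <value> on an uncommented line.
# Usage: sshd_opt_is <config-file> <key> <value>
sshd_opt_is() {
    grep -Eq "^[[:space:]]*$2[[:space:]]+$3[[:space:]]*$" "$1"
}

# Example against the real config:
#   sshd_opt_is /etc/ssh/sshd_config PermitRootLogin no && echo "root login disabled"
```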
Reference
-
Check the current status of the firewall
$ sudo ufw status verbose
-
Allow SSH on our configured SSH port
$ sudo ufw allow <ssh-port-number>/tcp
-
Allow HTTPS on 443
$ sudo ufw allow https
Note: This can also be done later, after installing NGINX. In that case, we allow the app 'Nginx HTTPS' instead of the actual https port 443.
-
Enable UFW
$ sudo ufw enable
-
Check the current status of the firewall
$ sudo ufw status verbose
-
Install essential build tools
$ apt-get install -y build-essential
-
Install vim
$ apt-get install -y vim
-
Install git
$ apt-get install -y git
Reference
-
Install Nginx
$ sudo apt-get update
$ sudo apt-get install -y nginx
-
Add Nginx to firewall
$ sudo ufw app list             # check that the Nginx app was successfully registered with ufw
Available applications:
  Nginx Full                    # this profile opens both port 80 and port 443
  Nginx HTTP                    # this profile opens only port 80
  Nginx HTTPS                   # this profile opens only port 443
  OpenSSH
$ sudo ufw allow 'Nginx HTTPS'  # allow https via Nginx
$ sudo ufw delete allow https   # if required, delete the rule that directly allows port 443
$ sudo ufw status               # check that the firewall rule updated successfully
-
Commands to manage the Nginx Process
$ sudo systemctl stop nginx
$ sudo systemctl start nginx
$ sudo systemctl restart nginx  # restart the Nginx process
$ sudo systemctl reload nginx   # reload changes to config without restarting the process
$ sudo systemctl disable nginx  # disable the process from auto start-up at boot
$ sudo systemctl enable nginx   # enable the service to start up at boot
-
Reference
$ curl -sL https://deb.nodesource.com/setup_4.x | sudo -E bash -
$ sudo apt-get update
$ sudo apt-get install -y nodejs
-
Reference
$ curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | sudo apt-key add -
$ echo "deb https://dl.yarnpkg.com/debian/ stable main" | sudo tee /etc/apt/sources.list.d/yarn.list
$ sudo apt-get update
$ sudo apt-get install -y yarn
-
Reference
-
Install PM2
$ sudo npm install pm2@latest -g
-
Make PM2 start up at boot
$ pm2 startup [systemd] #helper script that will produce a command which can be run to make PM2 start up at boot
NOTE: When updating Node.js, the pm2 binary path might change (it will necessarily change if you are using nvm). Therefore, we advise running the startup command again after any update.
-
Save current processes
$ pm2 save #It will save the process list with the corresponding environments into the dump file $PM2_HOME/.pm2/dump.pm2
-
Manually resurrect processes
$ pm2 resurrect #This brings back previously saved processes (via pm2 save)
-
Using PM2
Reference
$ cd <app dir>
$ pm2 start server   # where the entry point file is server.js; look up how to supply env vars
$ pm2 ls             # list all PM2-managed processes
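Instead of ad-hoc start commands, PM2 can also read app definitions from an ecosystem file. A sketch (the app name, entry point, and env values below are placeholders to adapt to your app):

```js
// ecosystem.config.js - hypothetical example
module.exports = {
  apps: [{
    name: "my-app",          // placeholder app name
    script: "server.js",     // entry point, as in the example above
    env: {
      NODE_ENV: "production",
      PORT: 3000             // placeholder port
    }
  }]
};
```

Then start it with `$ pm2 start ecosystem.config.js`, and `pm2 save` captures it for resurrection as described above.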
-
Reference
-
Install MongoDB
$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv EA312927
$ echo "deb http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.2 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.2.list
$ sudo apt-get update
$ sudo apt-get install -y mongodb-org
-
Configure systemd to launch MongoDB at boot
$ sudo vim /etc/systemd/system/mongodb.service
# paste the below config ####
[Unit]
Description=High-performance, schema-free document-oriented database
After=network.target

[Service]
User=mongodb
ExecStart=/usr/bin/mongod --quiet --config /etc/mongod.conf

[Install]
WantedBy=multi-user.target
# end of config to paste ####
-
Start the MongoDB service
$ sudo systemctl start mongodb
-
Check its status
$ sudo systemctl status mongodb
-
Enable newly configured mongodb service
$ sudo systemctl enable mongodb
The following steps make use of this deploy script. Please go through its documentation to learn more about it.
-
Pick a suitable location on the server to deploy the app. This is usually /var/www/ or ~/apps/. Subdirectories can be created per-app or per-platform/per-app; give some thought to this organization depending on what is likely to be deployed and run on this server.
-
Once a suitable app specific location is identified all subsequent steps should be done in that location.
-
Create an application-specific file that will be shared across deployments, containing application configuration parameters to be supplied to the app via the environment - e.g. application secrets, API tokens, login credentials, etc.
-
Run the deploy script with the appropriate command-line arguments in setup mode.
-
For subsequent deployments of the same app, just run the deploy script in deploy mode, supplying the release tag to be deployed.
- A network enabled computer system (like a server) can have as many network interfaces as it needs. For example, a typical laptop has two - ethernet and wifi.
- Each network interface allows the computer system to be part of a distinct network.
- Within each such network, it will have a unique IP address.
- These network interfaces could be public, i.e. on the Internet, or private, i.e. on a LAN.
- In addition to these network interfaces, all computers have a special network interface called the loopback interface. A special IP address, 127.0.0.1, is reserved for its identity on this interface, aliased as localhost.
- Any networking software/program/application (e.g. a node.js app) needs to be configured to communicate using exactly one of these network interfaces. It can then establish connections only with other nodes in the network associated with that interface.
- Actually, there is a special IP address, 0.0.0.0. If a network program is configured to listen on this IP, it accepts connections arriving on any of the machine's network interfaces.
- Use the netstat -tln command to see a list of listening sockets, including which ports and network interfaces they are bound to.
All this is important to understand because it lets you decide how you want your node.js app to run, i.e. which network interface you want it listening on - public, private, or loopback.
This will in turn decide whether the app is accessible directly from the internet, only within a cluster of servers on a private LAN, or only from other processes running on that very machine.
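As an illustration of that distinction, a tiny helper sketch (our own; it just classifies a bind address the way the text describes):

```shell
# Classify a bind address: loopback-only, all interfaces, or one specific interface.
bind_scope() {
    case "$1" in
        127.*|localhost) echo "loopback only - reachable from this machine only" ;;
        0.0.0.0)         echo "all interfaces - reachable on every attached network" ;;
        *)               echo "single interface - reachable on the network of $1" ;;
    esac
}
```

For example, an app bound to 127.0.0.1:3000 is only reachable by an Nginx reverse proxy (or other processes) on the same machine, which is usually what you want behind a proxy.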
Reference
- How To Set Up Nginx Server Blocks (Virtual Hosts) on Ubuntu 16.04
- Understanding the Nginx Configuration File Structure and Configuration Contexts
Note: Configuring Nginx server blocks is the basic necessity for having Nginx serve a static website i.e. operating as a pure web server. For serving a webapp, it needs to be additionally configured as a reverse proxy server fronting the application server.
TODO: Add config to server block to enable logging for blocks serving static web content. TODO: Review content to make instruction for deploying static web content distinct from being just a precursor to setting up reverse proxy.
- Nginx server block is a configuration mechanism to allow Nginx to serve multiple websites, against multiple domain-names from the same server setup.
- All Nginx configuration lives in the /etc/nginx/ directory.
- The main config file is /etc/nginx/nginx.conf. This file further includes configuration blocks from files in the /etc/nginx/sites-enabled/ directory.
- It is recommended to use this file only for configuration parameters that apply universally across all sites being served by a single Nginx server.
- Site-specific configuration must be maintained in site-specific files within the /etc/nginx/sites-enabled/ directory.
- Files in the /etc/nginx/sites-enabled/ directory are symbolic links to files in the /etc/nginx/sites-available/ directory. This is essentially a convenient mechanism to enable/disable inclusion of configuration blocks without actually having to move files around.
- So to add a new server block, create a new config file in /etc/nginx/sites-available/. In this example the config file name has been chosen to be the same as the domain name.
$ sudo vim /etc/nginx/sites-available/example.com
- The minimal configuration needed for a new server block is:
server {
    listen 80;          # IPv4 port number
    listen [::]:80;     # IPv6 port number

    server_name example.com www.example.com;   # domain name(s) to serve

    root /var/www/html;           # web document root; static content is served from here
    index index.htm index.html;   # default document(s) to serve

    location / {                  # path to document/response mapping
        try_files $uri $uri/ =404;    # try requested URI as a file,
                                      # then as a dir, else return 404
    }

    # custom error pages
    error_page 404 /404.html;
    location = /404.html {
        root /var/www/html;
        internal;
    }

    # custom error pages
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /var/www/html;
        internal;
    }
}
- Create a symbolic link to enable the new server block that's now available
$ sudo ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/example.com
- Test validity of config changes
$ sudo nginx -t
- Restart Nginx to make it pickup new configuration
$ sudo systemctl restart nginx
# can also use the below command to make Nginx pick up the new config without having to stop the server process
$ sudo systemctl reload nginx
Reference
Below is the Nginx configuration required to set it up as a reverse proxy. It is advisable to keep these config parameters in an independent Nginx config snippet that can be included in any server block as required. The only exception is the proxy_pass directive, which needs to be specific to each server block.
location / {
# the main directive defining the target server to be proxied
proxy_pass "http://localhost:3000/";
# http 1.1 between Nginx and app server is OK, even if Nginx itself is
# serving http 2 for the server block in question.
# Read above reference to know more about this.
proxy_http_version 1.1;
# marker for the proxied server that this connection came from an Nginx
# (reverse) Proxy, this may not be particularly useful
proxy_set_header X-NginX-Proxy true;
# directive to restore original hostname from the client request,
# recommended to use $host instead of $http_host
proxy_set_header Host $host;
# directive to restore original IP that sent the request
proxy_set_header X-Real-IP $remote_addr;
# directive to restore chain of original IPs that sent/proxied the request
# along the way before reaching the proxied server
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# don't rewrite any Location header URLs that the proxied server returns
proxy_redirect off;
# don't reuse SSL session
proxy_ssl_session_reuse off;
# websocket enabling directives
# the hop-by-hop headers need to be restored while proxying to support
# hop-by-hop protocols like websockets
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
# directive to not use cache for websocket connections
proxy_cache_bypass $http_upgrade;
}
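Putting it together: if the shared directives above (everything inside the location block except proxy_pass) were saved as a snippet, a server block would include it and supply its own target. A sketch (the snippet file name is assumed):

```nginx
# /etc/nginx/sites-available/example.com - sketch
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass "http://localhost:3000/";   # per-server target
        include snippets/reverse-proxy.conf;   # the common directives above, minus proxy_pass
    }
}
```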
Reference
Reference
We will use the Let's Encrypt service to obtain a free Domain Validation SSL certificate.
-
Install Certbot client for Let's Encrypt
# Add certbot Ubuntu software repository
$ sudo add-apt-repository ppa:certbot/certbot
# Update packages
$ sudo apt-get update
# Install Certbot
$ sudo apt-get install certbot
-
Prepare for Domain Validation
We'll use the webroot plugin for domain validation, because this option allows certificate issue/renewal while the Nginx web server is running.
Create an Nginx configuration snippet file
$ sudo vim /etc/nginx/snippets/well-known-location.conf
and add the following configuration parameters to it.
location ~ /\.well-known {
    root /var/www;
}
Now include this snippet in the server block of each domain for which an SSL certificate is to be obtained.
server {
    include snippets/well-known-location.conf;
}
-
Register a Let's Encrypt ACME Account
This needs to be done just once. This account is then used for all certificate issue and renew requests.
$ sudo certbot register --email <your-email> --no-eff-email --agree-tos
-
Create config file
We'll create one Certbot config file for each nginx server context that we wish to obtain an SSL cert for. Identify a location on the server for these config files; we recommend keeping them at /etc/letsencrypt/certbot-conf so that they reside with the other Let's Encrypt files. Open a new file in this location for editing; we recommend naming it <main-domain>.certbot.conf. Add the following config entries to this file and save.
cert-name = <main-domain>
domains = <main-domain>[,<sub-domain-1>,<sub-domain-2>]
rsa-key-size = 4096
authenticator = webroot
webroot-path = /var/www/
staple-ocsp = true
strict-permissions = true
-
Obtain Certificate
Run the below commands each time a new SSL cert is to be obtained for a new nginx server context. Make sure the proper Certbot config is in place.
$ sudo touch /var/log/letsencrypt/<main-domain>.certbot.log
$ date | sudo tee -a /var/log/letsencrypt/<main-domain>.certbot.log
$ sudo certbot certonly -c /etc/letsencrypt/certbot-conf/<main-domain>.certbot.conf | sudo tee -a /var/log/letsencrypt/<main-domain>.certbot.log
-
Renew Certificates
To set up automatic renewal of all certs for which we have a certbot.conf file, we create the below bash script (named renew, located in /etc/letsencrypt/bin).
#!/bin/bash
set -e
for conf in /etc/letsencrypt/certbot-conf/*.certbot.conf; do
    LOG_FILE="/var/log/letsencrypt/"$(basename "$conf" | sed s/conf$/log/)
    echo $'\n' >> "$LOG_FILE"
    echo "-------------------------------------------------------------------------------" >> "$LOG_FILE"
    date >> "$LOG_FILE"
    echo "-------------------------------------------------------------------------------" >> "$LOG_FILE"
    certbot renew -n -c "$conf" >> "$LOG_FILE" 2>&1
    sleep 2
done
systemctl reload nginx
exit 0
and schedule it in the root user's crontab to run every month.
$ sudo crontab -e
# and add the following entry
00 04 1 * * /etc/letsencrypt/bin/renew
-
Backup
Backup SSL Certs and Nginx Config files.
Reference
Key points to cover when adding SSL configuration to Nginx
-
Modularization: Extract all common SSL configuration parameters (which should be everything except the cert and private-key file locations) and put them in a separate config snippet that can simply be included in any server block that needs it.
-
Redirect all HTTP to HTTPS: Have a global server { listen 80; } block to redirect all http traffic to https. There should be no server block configured to listen on port 80 other than this one.
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    # Redirect all HTTP requests to HTTPS with a 301 Moved Permanently response.
    return 301 https://$host$request_uri;
}
-
Main SSL Config: Point to the respective certificate-chain and private-key files for each of the server blocks in question.
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    # server specific parameters
    ssl_certificate /path/to/<full-chain.pem>;
    ssl_certificate_key /path/to/<prikey.pem>;
    ssl_trusted_certificate /path/to/<full-chain.pem>;
    ...
-
All Other Settings: Make sure all other ssl_ config parameters are set up as recommended by web security authorities, e.g. Cipherli.st and Mozilla.
-
SSL Session Parameters: Use recommended values.
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:10m;
ssl_session_tickets off;
-
SSL Protocols: Use recommended values only as this is a very critical security setting.
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
-
Cipher Suite: Use recommended values only as this is a very critical security setting.
ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4';
ssl_prefer_server_ciphers on;
-
HSTS: The HTTP Strict Transport Security setting basically adds a header to all server responses that tells the browser to always use HTTPS when communicating with our server.
add_header Strict-Transport-Security max-age=15768000;
# Stricter version - use with care - max-age of 2 years
add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload";
Note: If this header is set and served to some web clients, and we were then to downgrade to http, those web clients would no longer be able to communicate with the server until the max-age expires. So this parameter must be used with care!
-
OCSP Stapling: The Online Certificate Status Protocol is an internet protocol that allows a web client to have a server's SSL certificate validated by a recognized OCSP responder, in addition to the signing CA. This is a way to prevent false validation of certificates signed by a malicious CA that has somehow obtained a revoked signing key of a different, genuine CA.
This however means that the web client needs to make an additional request to validate an SSL cert - which has both performance and privacy implications. To avoid this, a web server can proactively obtain a relevant OCSP response, cache it, and staple it as part of the initial SSL/TLS handshake data that it exchanges with the web client, which thus does not have to request it separately on its own.
ssl_stapling on; ssl_stapling_verify on;
OCSP DNS Resolvers: This needs to be added to allow Nginx to resolve OCSP server IPs. The valid= parameter lets us tune how long DNS responses are cached (the default is 5 minutes). The related resolver_timeout parameter sets a timeout for outbound DNS resolution requests.
resolver 8.8.8.8 8.8.4.4 valid=300s;
resolver_timeout 5s;
Note: This is not a security specific directive per se. It is a general directive to specify a DNS server that Nginx should use to resolve domain names in any upstream requests.
Trusted CA Certificate Chain: Ensure that the ssl_trusted_certificate parameter is properly set up.
-
Diffie-Hellman Group: Set up this parameter pointing to a file generated using this command
$ sudo openssl dhparam -out /etc/ssl/certs/dhparam.pem 2048
ssl_dhparam /etc/ssl/certs/dhparam.pem;
-
Disallow embedding this site into <frame>, <iframe>, or <object> elements, in order to prevent clickjacking attacks.
add_header X-Frame-Options DENY;
-
Prevent browsers from 'content sniffing', i.e. changing the MIME type specified in the Content-Type server response header. This stops browsers from transforming non-executable MIME types into executable MIME types, which could be a security vulnerability.
add_header X-Content-Type-Options nosniff;
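Per the modularization point above, all of these shared hardening directives can live in a single snippet, leaving only the cert/key paths in each server block. A sketch (the snippet path and file name are assumed):

```nginx
# /etc/nginx/snippets/ssl-params.conf - shared SSL parameters from this section
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:10m;
ssl_session_tickets off;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_stapling on;
ssl_stapling_verify on;
resolver 8.8.8.8 8.8.4.4 valid=300s;
resolver_timeout 5s;
ssl_dhparam /etc/ssl/certs/dhparam.pem;
add_header Strict-Transport-Security max-age=15768000;
add_header X-Frame-Options DENY;
add_header X-Content-Type-Options nosniff;
```

Each HTTPS server block then just adds `include snippets/ssl-params.conf;` alongside its ssl_certificate, ssl_certificate_key, and ssl_trusted_certificate directives.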
- Setup Automatic, Unattended Upgrades for Ubuntu. Read more here.
- Setup server snapshotting and mechanism to restore server from such a snapshot.
- Setup DB backup and restore mechanism.
- Setup (automated) schedule for app/server restart for - maintenance, patches etc.
- Setup server monitoring, alerting and reporting.
- Setup application log monitoring, alerting and reporting, log rotation and compression/archival.
- Introduce an application cache component to the deployment architecture (memcached, redis, other?)
- Introduce a content indexing and search component in the deployment architecture (solr, elastic search, other?)