This is a complete walkthrough on setting up hosting for a web app and a deployment pipeline. Some things can be improved, but the setup is usable as-is.
Our app is a monolithic JS app: a React frontend and an Express backend, managed by yarn and run with the PM2 process manager. The database is out of scope for now, but some notes will be added later. We use nginx as a reverse proxy, and we will set up free SSL via Let's Encrypt.
All of this is hosted on DigitalOcean (that can also be changed). In DigitalOcean's terms, we will set up two droplets (i.e. two machines) that will be used for staging and production. We also added a Space (object storage) that is used to store large files (this is out of scope and doesn't change any portion of this setup).
We will use GitHub and GitHub Actions to hook into events that happen within our code repository.
The flow is as follows: once this setup is complete, our repo will have Actions enabled. On each push to the dev branch, the newest version of that branch will be pulled and transferred into a specific folder on our staging machine. There, dependencies will be installed, things cleaned as needed, and a fresh version built and run. The same happens with the master branch and the production droplet.
The staging version will be available at staging.domain.com and the production version at domain.com.
After the setup is complete, there won't be any need to access the remote machines manually, as everything will be done automatically. However, for the setup and future use, I recommend the Visual Studio Code Remote Development Extension Pack. It enables you to connect to a remote machine via SSH and open one of its folders within VS Code, with the entire VS Code feature set at your disposal to edit the files. Development can be done the usual way you're used to working locally, so there's really no difference. Also, activating the console opens a remote console, so you've got everything you need, even though everything you do is actually executed on the remote machine over SSH. Sweet!
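If you have the code CLI available locally, the extension pack can also be installed straight from a terminal (the extension ID below is correct at the time of writing; verify it if the install fails):
$ code --install-extension ms-vscode-remote.vscode-remote-extensionpack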
This will enable you to access the remote machines from your computer without requiring any passwords, and it will be more secure.
based on: https://www.digitalocean.com/community/tutorials/how-to-set-up-ssh-keys--2
on your local machine:
# generate a key named '__KEYNAME__' (with an empty passphrase)
$ cd ~/.ssh
$ ssh-keygen -t rsa -f __KEYNAME__
Paste the contents of ~/.ssh/__KEYNAME__.pub into the remote's /root/.ssh/authorized_keys.
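Alternatively, if password login to the droplet is still enabled at this point, ssh-copy-id can append the key for you:
$ ssh-copy-id -i ~/.ssh/__KEYNAME__.pub root@__DROPLET_IP__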
On your local machine, edit your ~/.ssh/config and add the following:
Host __MY_FANCY_NAME__-[staging|prod]
    HostName __DROPLET_IP__
    IdentityFile PATH/TO/.ssh/__KEYNAME__
    User root
Do this for both the staging and the production machine. You may use the same or different keys; it's up to you. (Obviously, different keys are more secure.)
Now, when using VS Code's Remote-SSH, you will be able to simply pick which machine to connect to, and that's it.
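You can also test the alias straight from a terminal, using the host name defined above:
$ ssh __MY_FANCY_NAME__-staging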
Everything described here should be repeated for both the production and the staging server. There are alternative setups where production and staging live on the same machine, and some other things could be easier... ultimately, it's up to you.
# installing nvm
# https://www.digitalocean.com/community/tutorials/how-to-install-node-js-on-ubuntu-18-04
$ curl -sL https://raw.githubusercontent.com/creationix/nvm/v0.33.11/install.sh -o install_nvm.sh
$ bash install_nvm.sh
$ source ~/.profile
$ nvm install 12.14.1
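To confirm the install worked:
$ node -v # should print v12.14.1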
# installing yarn
# https://yarnpkg.com/lang/en/docs/install/#debian-stable
$ curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | sudo apt-key add -
$ echo "deb https://dl.yarnpkg.com/debian/ stable main" | sudo tee /etc/apt/sources.list.d/yarn.list
$ sudo apt update && sudo apt install --no-install-recommends yarn
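And again, confirm:
$ yarn --version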
# installing nginx
# https://www.digitalocean.com/community/tutorials/how-to-install-nginx-on-ubuntu-18-04
$ sudo apt update
$ sudo apt install nginx
# setup firewall
# https://www.digitalocean.com/community/tutorials/how-to-set-up-a-firewall-with-ufw-on-ubuntu-18-04
# set the default policies
$ sudo ufw default deny incoming
$ sudo ufw default allow outgoing
# allow ports 22, 80 and 443
$ sudo ufw allow ssh
$ sudo ufw allow 'Nginx Full' # opens both 80 and 443; port 80 is needed later for the Let's Encrypt validation and the HTTP -> HTTPS redirect
$ sudo ufw enable
Our Express server will be running on port 1234, so you may need to add:
$ sudo ufw allow from 127.0.0.1 to any port 1234
Also, if you're using a database, don't forget to add additional rules so localhost applications can communicate with it. MySQL takes port 3306, Redis takes 6379 (by default; both can be changed).
The goal of the firewall is to allow only the bare minimum required for your app to function. Ideally, be as restrictive as possible, and expose only what's necessary.
If you haven't set up your firewall correctly, use application logs to detect the problem (see below for nginx, or use the log files of your database or API server, or whatever it may be...).
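To see which rules are currently active, ufw can print its state:
$ sudo ufw status verbose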
First, some useful Nginx commands to know beforehand.
$ service nginx start
$ service nginx restart
$ service nginx stop
$ service nginx status
# if the status says that something is wrong, check log files (see below)
Nginx uses configuration files to know how to serve and redirect requests. Using one config file per domain is recommended (although you could bundle them all into one).
Nginx is able to set request and response headers, proxy requests to other ports, servers and/or domains, include fallback/backup servers, and do basic load balancing.
Since our app is a React single-page app (SPA), we will redirect all calls to index.html, and React will internally figure out what to display based on history and location information. We will also proxy requests that start with /api/* to a local API server running on a specific port (1234 in our example). Nginx is smart enough to figure out most other statically delivered content (images, fonts, etc.).
Nginx uses /etc/nginx/sites-available and /etc/nginx/sites-enabled to figure out the complete configuration to run. You can create one file (in sites-available, e.g.) and then link to it from the other location (sites-enabled in this example).
Make a directory that we will use to store our logs from nginx:
$ mkdir /var/log/nginx/__DOMAIN__
Create the file /etc/nginx/sites-available/__DOMAIN__, changing the placeholder "__DOMAIN__" to your domain name (e.g. example.com). Edit that file to:
server {
    server_name __DOMAIN__;

    # logs
    error_log /var/log/nginx/__DOMAIN__/error.log;
    access_log /var/log/nginx/__DOMAIN__/access.log;

    location ~ ^/api {
        proxy_connect_timeout 600;
        proxy_send_timeout 600;
        proxy_read_timeout 600;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        # we proxy calls from `domain.com/api/*` to our Express backend app
        # we picked the port that the Express app is actually using
        proxy_pass http://localhost:1234;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header X-Address $remote_addr;
        break;
    }

    location / {
        # this is the folder where our frontend code lives.
        # pick the folder with the fully built app code.
        # usually, website content is at `/var/www/`
        root /var/www/PATH/TO/HTML;
        try_files $uri $uri/ /index.html;
    }
}
To create a link/shortcut between the files, use:
$ ln -s /etc/nginx/sites-available/__DOMAIN__ /etc/nginx/sites-enabled/__DOMAIN__
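Before restarting, it's worth validating the configuration; nginx has a built-in syntax check:
$ sudo nginx -t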
Restart Nginx.
You should now be able to access your website by visiting your domain name. Test the API too, but make sure you have valid endpoint handlers implemented already.
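A quick way to smoke-test from the command line (the /api/health endpoint below is hypothetical; substitute an endpoint of your own):
$ curl -i http://__DOMAIN__/ # should return the index.html content
$ curl -i http://__DOMAIN__/api/health # should reach the Express app through the proxy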
(see the note below these commands first)
# based on: https://certbot.eff.org/lets-encrypt/ubuntubionic-nginx
# add certbot PPA
$ sudo apt-get update
$ sudo apt-get install software-properties-common
$ sudo add-apt-repository universe
$ sudo add-apt-repository ppa:certbot/certbot
$ sudo apt-get update
# install certbot
$ sudo apt-get install certbot python-certbot-nginx
# get a certificate
$ sudo certbot certonly --nginx
# test automatic renewal
$ sudo certbot renew --dry-run
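You can also list the certificates Certbot manages at any time:
$ sudo certbot certificates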
Notes: The Certbot setup should be pretty straightforward. When prompted to select a domain name, just pick __DOMAIN__. Certbot should automatically inject all the necessary Nginx configuration into your files. The final result should look something like this:
server {
    server_name __DOMAIN__;

    # --------- new stuff from Certbot ----------
    listen 443 ssl;
    ssl_certificate /etc/letsencrypt/live/__DOMAIN__/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/__DOMAIN__/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
    # --------- /new stuff from Certbot ----------

    error_log /var/log/nginx/__DOMAIN__/error.log;
    access_log /var/log/nginx/__DOMAIN__/access.log;

    location ~ ^/api {
        proxy_connect_timeout 600;
        proxy_send_timeout 600;
        proxy_read_timeout 600;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_pass http://localhost:1234;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header X-Address $remote_addr;
        break;
    }

    location / {
        root /var/www/PATH/TO/HTML;
        try_files $uri $uri/ /index.html;
    }
}

# --------- new stuff from Certbot ----------
server {
    if ($host = __DOMAIN__) {
        return 301 https://$host$request_uri;
    }

    server_name __DOMAIN__;
    listen 80;
    return 404;
}
# --------- /new stuff from Certbot ----------
Restart Nginx.
Now, when accessing your domain in the browser, you should see the index.html content, but now with an HTTPS icon.
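You can verify the redirect and the certificate from the command line as well:
$ curl -I http://__DOMAIN__ # should answer with a 301 redirect to https
$ curl -I https://__DOMAIN__ # should answer over TLS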
GitHub Actions are tasks that get executed on some remote machine (provided by GitHub, or you can provide it yourself). You specify the triggers for actions and, for each trigger, a list of jobs to execute.
With Actions, you can do things such as:
- pull the latest code, run automated tests and report the results.
- pull the latest code, run tests and, if they pass, build complete images, send them to the hosting server, and run them
- pull the latest code, copy it onto the hosting server, build a fresh version and run it (this is what we will do)
- many, many others...
Actions enable you to define and execute things automatically, so you can automate and simplify your workflow and deployment. This kind of thing falls under CI/CD (continuous integration / continuous delivery).
Concretely, GitHub Actions are defined in YAML files sitting at __REPO__/.github/workflows.
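If that folder doesn't exist yet, create it at the repo root:
$ mkdir -p .github/workflows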
As described at the beginning, we will basically have two flows: one for staging, one for production. While we could bundle all the actions into a single config file, we will split them into two for easier maintenance.
The names of the files don't matter (in our case dev-master.yml and prod-staging.yml). The configuration in both files is very similar, so we will show only one file, the staging one.
Additional parameters can be provided to actions via Action "Secrets". You set those up on the GitHub page of your repository, in the Settings > Secrets panel. You create key-value pairs that are safely stored and passed into actions for them to use. In our case, we store HOST_STAGING, SSH_KEY_STAGING, ... the things that will be used to establish an SSH pipe to our staging hosting server. This enables you not to write all the sensitive information within the YAML files, but rather to set it up in a secure and manageable way.
# this is the name of our action.
# this will be displayed in the 'Actions' tab on your repo's GitHub page when running.
name: deploy staging

# the trigger for the action.
# we are interested only in pushes to the 'dev' branch
on:
  push:
    branches:
      - dev

# jobs to be done.
# each job has multiple steps.
# jobs can depend on other jobs, but in our case it's very linear.
jobs:
  build:
    # name of the job (you will see this in the Actions view on GitHub)
    name: build and deploy of staging
    # specifies what kind of remote machine the Action should use to execute the jobs
    runs-on: ubuntu-18.04
    # steps are what you see in the logs in Actions
    steps:
      # we use a premade action that gets the latest version of the code from a specific branch
      - name: checkout 'dev'
        uses: actions/checkout@master
        with:
          ref: dev
      # we also use a premade action that copies files from the machine the Action is
      # currently running on to another machine, via SSH
      - name: copy the files
        uses: appleboy/scp-action@master
        with:
          host: ${{ secrets.HOST_STAGING }}
          username: ${{ secrets.USERNAME_STAGING }}
          port: ${{ secrets.PORT_STAGING }}
          key: ${{ secrets.SSH_KEY_STAGING }}
          source: "."
          target: "/var/www/__APP__"
      # clean build of frontend
      - name: build clean frontend
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.HOST_STAGING }}
          username: ${{ secrets.USERNAME_STAGING }}
          port: ${{ secrets.PORT_STAGING }}
          key: ${{ secrets.SSH_KEY_STAGING }}
          script: |
            cd /var/www/__APP__/client
            yarn
            yarn build
      # clean build of backend
      - name: build clean backend
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.HOST_STAGING }}
          username: ${{ secrets.USERNAME_STAGING }}
          port: ${{ secrets.PORT_STAGING }}
          key: ${{ secrets.SSH_KEY_STAGING }}
          script: |
            cd /var/www/__APP__/server
            yarn
            yarn build
            pm2 restart all
      # finally, we communicate with our staging hosting server via SSH using a premade action
      # and tell pm2 to run a fresh instance of the API server.
      - name: restart pm2 for backend
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.HOST_STAGING }}
          username: ${{ secrets.USERNAME_STAGING }}
          port: ${{ secrets.PORT_STAGING }}
          key: ${{ secrets.SSH_KEY_STAGING }}
          script: |
            pm2 stop all
            pm2 delete all
            pm2 start /var/www/__APP__/server/index.js --name tourbillon-staging
Now, whenever we push something to the dev branch, the Action will fire. It will take the latest version of the code and copy it over to our staging server via SSH. Then, also via SSH, it will execute commands on our machine that create a clean build of the React app and our Express API server, and then restart the API server so it can serve the requests proxied by Nginx.
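If you want to inspect the deployment afterwards, or have PM2 bring the API server back up after a droplet reboot, the standard PM2 commands cover it (the process name matches the one given in the workflow above):
$ pm2 status
$ pm2 logs tourbillon-staging
$ pm2 startup # prints a command that enables the PM2 startup service
$ pm2 save # saves the current process list so it's restored on boot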
That's it.