Blue/Green deployments with NGINX

Blue/Green deployment is nothing new; the benefits of the methodology are well documented and have been discussed extensively. As a quick description: we keep two identical copies of the production environment (one called “Blue”, the other “Green”) and only one of them is visible to customers at any given time. We deploy to the inactive copy and, once we are happy the deployment is OK, we switch the two.

Config files

First we have the nginx config, which includes server blocks for both the active and the inactive version of the app:

##################################
# Configuration for active site
##################################
server {
  listen 80;
  listen [::]:80;
  server_name site.com www.site.com;
  return 301 https://$host$request_uri;
}
server {
  listen 443 ssl http2;
  listen [::]:443 ssl http2;
  server_name site.com www.site.com;
  access_log /var/log/nginx/site.com_access.log;
  error_log /var/log/nginx/site.com_error.log;
  root /var/www/site.com;
  index index.html;
  …
}
####################################
# Configuration for inactive site
####################################
server {
  listen 80;
  listen [::]:80;
  server_name inactive-site.com www.inactive-site.com;
  return 301 https://$host$request_uri;
}
server {
  listen 443 ssl http2;
  listen [::]:443 ssl http2;
  server_name inactive-site.com www.inactive-site.com;
  access_log /var/log/nginx/inactive-site.com_access.log;
  error_log /var/log/nginx/inactive-site.com_error.log;
  
  root /var/www/inactive-site.com;
  index index.html;
  …
}

This configuration never changes. The trick is that the paths in the root directives above are symbolic links (symlinks) rather than actual directories on the file system. We can see the setup with a directory listing:

$ ls -l  /var/www
total 4
lrwxrwxr-x  1 ... inactive-site.com -> /var/www/site.com-blue
lrwxrwxr-x  1 ... site.com -> /var/www/site.com-green
drwxrwxr-x  3 ... site.com-blue
drwxrwxr-x  3 ... site.com-green
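
For completeness, the one-time setup of this layout is straightforward. The following is only a sketch, assuming the site.com-blue and site.com-green directories already contain (or will receive) a deployable build; the -sfn flags make the commands safe to re-run:

mkdir -p /var/www/site.com-blue /var/www/site.com-green

# Point the public hostname at green and the inactive hostname at blue.
# -s: symbolic link, -f: replace an existing link, -n: do not follow
# an existing link into the directory it points to.
ln -sfn /var/www/site.com-green /var/www/site.com
ln -sfn /var/www/site.com-blue  /var/www/inactive-site.com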

This way the only thing we need to do to switch versions is change where the links point, so our first problem is solved. As for determining which version is currently inactive, all we need to do is check where inactive-site.com points: no stored variables are needed and the result always reflects the true state of our deployment. We use a simple bash script:

#!/bin/bash

# Find out which color the inactive hostname currently points to.
inactive_now=$(ls -l /var/www/ | grep inactive)

if [[ "$inactive_now" == *blue ]]
then
  inactive="blue"
  active="green"
else
  inactive="green"
  active="blue"
fi

# Quote the messages so "->" is not treated as an output redirection.
echo "$inactive is inactive -> will be active"
echo "$active is active -> will be inactive"

# Remove the current links
rm /var/www/site.com
rm /var/www/inactive-site.com

# Create new links with the active/inactive reversed
ln -s "/var/www/site.com-$inactive" /var/www/site.com
ln -s "/var/www/site.com-$active" /var/www/inactive-site.com

# Reload the HTTP server so NGINX serves from the new link targets
service nginx reload
echo "swap completed"
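
One caveat: between removing a link and recreating it there is a brief moment when no link exists. A sketch of a slightly tighter variant of the swap, assuming an ln that supports the -f and -n flags (GNU and BSD both do), repoints each link in a single command and validates the configuration before reloading:

# Repoint each link in one step (same $active/$inactive variables as above).
ln -sfn "/var/www/site.com-$inactive" /var/www/site.com
ln -sfn "/var/www/site.com-$active" /var/www/inactive-site.com

# Only reload if the configuration still parses cleanly.
nginx -t && service nginx reload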

Of course the configs and scripts are source controlled, and the CI pipeline makes sure they are all in place before they are run. But apart from placing them and calling them, the CI has no other involvement and holds no state, which makes it easier to maintain: it never needs to know which color is currently inactive when deploying to production. It always deploys to /var/www/inactive-site.com and the host's file system takes care of the rest.
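
As an illustration only, the production deploy step in the pipeline can be as small as the sketch below; the ./build directory and the swap.sh name are hypothetical stand-ins for whatever the pipeline actually produces and for the swap script shown above:

# Copy the new build into whichever directory the inactive hostname
# currently points to (the trailing slash follows the symlink).
rsync -a --delete ./build/ /var/www/inactive-site.com/

# Smoke-test the inactive site before switching it live.
curl -fsS https://inactive-site.com/ > /dev/null

# Flip the symlinks and reload NGINX.
./swap.sh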
