Deploying Web Services on Docker with nginx-proxy

Note: This is an effort at keeping better notes on things I've set up and what works; please don't expect anything in this doc to be "right" or best practice. It's better to keep this than dig back through zsh history. :)

Project

I recently deployed mkwords, a web application built fully in Clojure/ClojureScript for selecting random words from a high quality default wordlist; it's built around hazard, the Clojure version of my old node-xkcd-password library. Seemed fitting for my first Clojure lib of any real substance to mirror my first node lib.

Additionally, I wanted a reason to try out Let's Encrypt since they were giving out beta invites. To throw another wrench in, I opted to deploy with docker, another first.

Building mkwords

mkwords was built using the default Reagent template; it was an easy place to start, since the Leiningen project was basically all set up for me. To scaffold it, I just ran:

lein new reagent mkpass

Originally, the project was called mkpass rather than mkwords; when I decided to change this later it was a very easy project-wide search/replace; no other changes necessary.

The scaffold came with a few simple out-of-the-box views so you could get a feel for how a Reagent project was set up. I've had no experience with Facebook's React (of which Reagent is a ClojureScript wrapper), but we use Ractive at Urban Airship, and many of the concepts we use there are analogous to Facebook's Flux architecture. I felt very at home with Reagent almost immediately (and its use of atoms for its state).
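If you haven't used Reagent, its atom-based state model looks roughly like the following. This is a hedged sketch of the pattern, not actual mkwords code; the namespace and names are made up.

;; a minimal Reagent sketch: application state lives in a Reagent atom
(ns example.core
  (:require [reagent.core :as r]))

(defonce app-state (r/atom {:words []}))

;; components are plain functions returning hiccup; dereferencing the atom
;; inside the component means it re-renders whenever the atom changes
(defn word-list []
  [:ul
   (for [w (:words @app-state)]
     ^{:key w} [:li w])])

;; updating state is an ordinary swap!
(swap! app-state assoc :words ["correct" "horse" "battery" "staple"])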

Getting a development server up and running was easy; in two separate terminals I ran lein run and lein figwheel which lifted the live-rebuilding ring and frontend build servers, respectively.

I found the backend auto-rebuilding to be more than adequate; it never got itself into an undefined state throughout the whole project. I only had to stop it when I added a new dependency to the Leiningen project. The frontend server was another story entirely:

  • When the rebuilding worked, I still had to do a hard-refresh of the page to get it back into a usable state. The auto-reload would function, but for some reason it would fail to re-execute the initial XHR that retrieves the wordlist, getting things stuck in a non-working state.
  • When updating deps, restarting the build server wasn't enough. I needed to run a (reset-autobuild) from within the REPL for changes to get picked up; I assume it was running an "only rebuild what you need" that wasn't catching these dep changes, even after a restart of the process.
  • There were many other times where I got into undefined states, necessitating more (reset-autobuild) steps.

All in all though: pretty smooth, especially for a Clojure beginner.

Introducing Node.js

In the end, I broke from a fully-Clojure setup. For reasons detailed later, I was unable to use the out-of-the-box minified version of bijou—the very-lightweight responsive framework I chose—so I needed to build SCSS myself. All of the options I found for doing this from Clojure were not well maintained and needed other dependencies (either Ruby or JRuby), so in the end I added a package.json and just used node-sass, which is highly reliable and bundles libsass for compilation. That does require a build step if there isn't a redistributable for your environment, but there is for most you'll run into.
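For reference, the node-sass side is just a dev dependency plus a CLI invocation; the paths below are my guesses, not necessarily what the project actually uses:

npm install --save-dev node-sass
# compile the SCSS entry point to the static CSS the app serves
./node_modules/.bin/node-sass src/scss/site.scss resources/public/css/site.css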

I ended up including this step by using lein-shell, a Leiningen plugin which can run shell commands as part of build steps. This worked immediately and perfectly, and I was on my way. I did remove a number of things from my Leiningen project that weren't necessary anymore because of this (mostly lein-asset-minifier).
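In project.clj that amounts to something like the following; this is a sketch based on lein-shell's documented :prep-tasks usage, with the plugin version and file paths assumed rather than copied from the project:

;; project.clj (sketch; plugin version and file paths are assumptions)
(defproject mkwords "0.1.0-SNAPSHOT"
  ;; ...the rest of the reagent-template configuration...
  :plugins [[lein-shell "0.4.1"]]
  ;; build the CSS before compiling, so `lein run` and `lein uberjar` pick it up
  :prep-tasks [["shell" "./node_modules/.bin/node-sass"
                "src/scss/site.scss" "resources/public/css/site.css"]
               "compile"])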

Building for distribution

The Leiningen project I was using was already set up to bundle an "uberjar", which is just a jar with all dependencies bundled up and ready to be put into production. This process is really painless; just run:

lein uberjar

…and everything is built up into a single redistributable, including all of your static assets. I'm really impressed by how easy that process was.

Once bundled, the jar sat at target/mkwords.jar; I could do a test-run of this with:

java -cp target/mkwords.jar clojure.main -m mkwords.server

This spun up my complete service on http://localhost:3000, ready to test before going into production.

Building the Docker Image

I found several workflows, but the one I tried out came directly from the official clojure Docker image. That page details a few different ways to run your app on the image with Leiningen, or to build it directly on the image.

In the end I opted for something even simpler: building on my local machine and then just copying the jar into the Docker image. This wouldn't be a good idea for some sort of CI, but for my purposes it worked great.

Since we're just deploying an uberjar, I ended up switching to just the official java:8 Docker image; the Clojure tools aren't necessary if I'm building locally.

You can see the build script and the Dockerfile; they're very boring. More on the Dockerfile later, though.
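For the curious, here's a minimal sketch of roughly what those two files amount to; the image name, port, and jar path come from commands elsewhere in this doc, and everything else is an approximation rather than the real files:

# Dockerfile (sketch)
FROM java:8
COPY target/mkwords.jar /srv/mkwords.jar
EXPOSE 3000
CMD ["java", "-cp", "/srv/mkwords.jar", "clojure.main", "-m", "mkwords.server"]

# build-docker-image.sh (sketch)
lein uberjar
docker build -t fardog/mkwords .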

Deploying with Docker

Once the image is built, you can run it locally (assuming Docker is installed) with:

docker run --name=mkwords -p 3000:3000 -i -t fardog/mkwords

To work with Docker images and containers, there are some commands worth knowing:

docker ps -a  # show all docker containers, running or not
docker stop <container_name_or_hash>  # stop a running container
docker images  # show available images
docker rm <container_name_or_hash>  # remove a container
docker rmi <image_name_or_hash>  # remove an image

Satisfied with that, I decided to move to actual deployment. I have a Digital Ocean account (which runs my twitterbot primes and some other services), so I opted to spin up a new box using their "Ubuntu Docker 1.9.0 on 14.04" image; as the name implies, it already has Docker installed.

From my local machine, I pushed up my newly created Docker image (after creating the repo through the Docker Hub UI):

docker login  # enter your credentials
docker push fardog/mkwords

On the Digital Ocean box, I then did the following:

docker pull fardog/mkwords
docker run -p 3000:3000 --name=mkwords fardog/mkwords

Huge success: the app was available on my remote server at http://mkwords.fardog.io:3000.

Now, obviously I didn't want to expose the Jetty server to the world just like that: first, it should sit behind a reliable webserver like nginx, and second, it should be served over SSL.

Generating Certificates with Let's Encrypt

Let's Encrypt is still in beta, so I'm censoring a few things in these commands; but wow, it is dead simple. I'm really impressed with their work here:

git clone https://github.com/letsencrypt/letsencrypt  # clone the repo
cd letsencrypt/
./letsencrypt-auto --server <directory_server_url> --help  # showed some help
# now let's generate the certificate
./letsencrypt-auto certonly -a standalone -d mkwords.fardog.io --server <directory_server_url>

That was it; it spun up a webserver automatically to prove I controlled the domain I said I did (DNS had to be pointing at the box first, obviously) and then generated the certificates. Done and amazingly done.

Running a Dockerized nginx proxy

At this point I just really wanted to see the thing work! I decided on the nginx-proxy Docker image, because it does a lot of out-of-the-box magic to get things up and running without requiring additional configuration; I plan to revisit this someday to better understand how it actually works, but for now it got me running very quickly.

First off, letsencrypt generates all of its certificates with a .pem extension; this is fine: they're already in the format you need, they just need to be renamed.

The nginx-proxy image matches everything up by name: the names passed around must match the domain it'll be serving, so you'll see the string mkwords.fardog.io all over in the commands setting it up.

So I copied the certificate and private key to their resting place on the filesystem:

cp fullchain.pem /etc/web-certs/mkwords.fardog.io.crt
cp privkey.pem /etc/web-certs/mkwords.fardog.io.key

Once that was done, I ran the nginx-proxy docker image, passing those certificate paths:

docker pull jwilder/nginx-proxy
docker run -d -p 80:80 -p 443:443 -v /etc/web-certs:/etc/nginx/certs \
  -v /var/run/docker.sock:/tmp/docker.sock:ro --name nginx-proxy jwilder/nginx-proxy

That got the proxy up and running. Then I ran my mkwords Docker image, passing the configuration parameter that identifies it:

docker run -e VIRTUAL_HOST=mkwords.fardog.io --name mkwords fardog/mkwords

That was it; I visited https://mkwords.fardog.io and there it was!

n.b. There's a notable thing in how all this works: when running the mkwords container, I'm not exposing any ports via the CLI; if you check the Dockerfile you'll see an EXPOSE directive, and the port exposed there is only available over Docker's private internal network. It's via this port that the nginx proxy reaches your service, so the service isn't exposed to the world except through nginx.
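Since this image exposes only that one port, nginx-proxy picks it up automatically. Per the nginx-proxy README, a container exposing multiple ports would also need a VIRTUAL_PORT variable to say which one to proxy; something like:

docker run -e VIRTUAL_HOST=mkwords.fardog.io -e VIRTUAL_PORT=3000 --name mkwords fardog/mkwords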

One Problem…

Whoops, broke my fonts in Chrome. Turns out that the SCSS framework I chose was loading fonts from Google over HTTP, not HTTPS. This is what drove me to build SCSS rather than using the already-created minified version.

Surviving a Restart

Now that I had everything running, I wanted to ensure that things could be started more easily. Given that my two Docker containers have sensible names, it's straightforward to create upstart scripts to run them:

# file /etc/init/nginx-proxy.conf
description "nginx proxy"
author "Nathan Wittstock"
start on filesystem and started docker
stop on runlevel [!2345]
respawn
script
  /usr/bin/docker start -a nginx-proxy
end script

# file /etc/init/mkwords.conf
description "mkwords"
author "Nathan Wittstock"
start on filesystem and started docker and started nginx-proxy
stop on runlevel [!2345]
respawn
script
  /usr/bin/docker start -a mkwords
end script

Now you could (ideally) do the following (assuming your containers weren't already running):

start nginx-proxy
start mkwords
stop mkwords
stop nginx-proxy

In my case starting works, but stopping doesn't. I still need to do a docker stop <container_name> to stop things; I haven't had the chance to look into this yet.

Updating your Application Container

I haven't figured out how to do this in a way that seems clean yet. My current process has been:

  • Run the build script in my repo and push it to docker hub:
./build-docker-image.sh
docker push fardog/mkwords
  • Then update the image on the server by stopping/pulling/starting:
docker pull fardog/mkwords  # pull the updated image
stop mkwords  # stop the upstart script
docker stop mkwords  # since my upstart script doesn't kill it yet :/
docker rm mkwords  # remove the current container
# then start a new container which will use the new image
docker run -e VIRTUAL_HOST=mkwords.fardog.io --name mkwords fardog/mkwords

Emphatic "bleh". There has to be a more elegant way to do that.
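Until that more elegant way shows up, the server-side steps at least collapse into a small script. This is a sketch; the filename is mine and isn't something that exists in the repo:

#!/bin/sh
# deploy-mkwords.sh (sketch): run on the server after `docker push` from my machine
set -e
docker pull fardog/mkwords    # pull the updated image
stop mkwords || true          # stop the upstart job if it's running
docker stop mkwords || true   # ...and the container, since the job doesn't stop it yet
docker rm mkwords             # remove the old container
# start a new container from the new image, detached; upstart's
# `docker start -a` takes over on future boots
docker run -d -e VIRTUAL_HOST=mkwords.fardog.io --name mkwords fardog/mkwords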


Conclusion

It works! There could be improvements to the process, but all told, I'm becoming much more familiar with Clojure. I'm feeling a lot of parallels with when I started learning Node.js several years ago: lots of getting things done without knowing if you're doing it even remotely right. Learning!

jwilder commented Nov 19, 2015

Nice writeup.

You could probably use docker run --rm -e VIRTUAL_HOST=.... so that the container is removed automatically when stopped. That should eliminate the docker rm mkwords step.

Adding this to your upstart script might fix the docker stop mkwords issue as well.

pre-stop script
  /usr/bin/docker stop mkwords
end script
