An explanation of a full-stack deployment of Wagtail in a Dockerized environment with Nginx, Elasticsearch, Postgres and Memcached
Required skills:
- Docker
- docker-compose
- Getting a local Wagtail site running

What we will do:
- Put Wagtail into a container
- Configure the Wagtail settings
- Set up Elasticsearch, Postgres & Memcached
- Run Nginx in Docker
- Connect everything with compose files
This will be just a small example Wagtail project - the stock example site is all we need.
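If Wagtail is not installed on your machine yet, install it first (a sketch; the version is pinned to match the requirements.txt we create below):
pip install wagtail==2.1.1
Then create the project: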
wagtail start mysite
In the first step we will just build a simple Dockerfile. Locate this file at the root of "mysite":
FROM python:3.6
ENV PYTHONUNBUFFERED 1
RUN apt-get update && apt-get install -q -y libjpeg-dev dos2unix gettext
RUN mkdir -p /app
WORKDIR /app
COPY . .
RUN pip install -r requirements.txt
RUN dos2unix docker-entrypoint.sh
RUN chmod +x docker-entrypoint.sh
RUN chown -R www-data:www-data /app
USER www-data:www-data
This file will do the following steps:
- Pull a python:3.6 image
- Install some dependencies for Wagtail (Windows users: I advise getting dos2unix too, because Docker dislikes \r\n line endings)
- Create our root directory /app inside the container
- Copy everything from the current directory into the container
- Install all requirements
- Convert the docker-entrypoint.sh to Unix line endings and make it executable
- Fix the permissions on the whole app folder to www-data
- Set the user inside the container to www-data
As you see, we need three more files to get this Dockerfile running - create them in the mysite root directory.
requirements.txt
wagtail==2.1.1
elasticsearch>=6.0.0,<7.0.0
python-memcached
uwsgi
psycopg2-binary
This installs the requirements for Wagtail plus the Elasticsearch and Memcached bindings.
uWSGI is an alternative to gunicorn, and it comes with some nice features out of the box, like cron jobs - I'll tell you more about those in the uwsgi.ini.
docker-entrypoint.sh
#!/usr/bin/env bash
python manage.py collectstatic --noinput
python manage.py compilemessages
python manage.py migrate
python manage.py update_index
uwsgi --ini uwsgi.ini
This is the script that gets fired up inside the container. DON'T run python manage.py makemigrations here: you should never create new migrations in a deployment environment. If you need new migrations, create them before firing up the container - see the sketch below. We also call update_index once to get a fresh search index on startup.
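On your development machine that could look like this (a sketch; adjust the paths to match your apps):
python manage.py makemigrations   # create the migration files locally
git add */migrations/             # ship them together with the code
git commit -m "add migrations"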
uwsgi.ini
[uwsgi]
http = 0.0.0.0:8000
master = True
processes = 8
threads = 2
module = mysite.wsgi
gid = www-data
uid = www-data
# Once an hour
cron = -0 -1 -1 -1 -1 python manage.py update_index
cron = -0 -1 -1 -1 -1 python manage.py publish_scheduled_pages
# Once a day
cron = -0 -0 -1 -1 -1 python manage.py search_garbage_collect
This file will do some simple things:
- Runs the Wagtail instance on port 8000 inside the container. Keep in mind: as long as we don't publish the port in Docker, it will not be accessible by anyone except other containers in the same network.
- Creates 8 processes with 2 threads each - please fit this to your needs.
- Runs the service as www-data:www-data - makes sense, right? All files inside the container belong to www-data:www-data, and Nginx will also run as www-data.
- Creates 3 cron jobs (the field order is minute hour day month weekday, with -1 meaning "any") - see the Wagtail documentation for details on the management commands.
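With the Dockerfile and these three files in place, you can already check that the image builds (the tag mysite is just an example name):
docker build -t mysite .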
We already installed the dependencies for Elasticsearch and Memcached inside the container - but we also have to wire them up in the settings to get them working with mysite.
Edit mysite/mysite/settings/production.py:
MIDDLEWARE = [
    ...
    'django.middleware.cache.UpdateCacheMiddleware',  # <---- Before CommonMiddleware
    'django.middleware.common.CommonMiddleware',
    'django.middleware.cache.FetchFromCacheMiddleware',  # <---- After CommonMiddleware
    ...
    'wagtail.core.middleware.SiteMiddleware',
    'wagtail.contrib.redirects.middleware.RedirectMiddleware',
]

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',  # <---- We will use postgres
        'NAME': 'mysite',  # <---- Set a db name
        'USER': 'mysite',  # <---- Set a db username
        'PASSWORD': 'mysite',  # <---- Set a db password
        'HOST': 'mysite_db',  # <---- This will be the postgres docker-compose service
    }
}

WAGTAILSEARCH_BACKENDS = {
    'default': {
        'BACKEND': 'wagtail.search.backends.elasticsearch6',  # <---- We will use elasticsearch as backend
        'URLS': ['http://mysite_elasticsearch:9200'],  # <---- This is the elasticsearch docker-compose service
        'INDEX': 'wagtail',  # <---- Elasticsearch will index it as wagtail
        'TIMEOUT': 5,  # <---- I don't know if you need more settings - I am not an expert in Elasticsearch
        'OPTIONS': {},
        'INDEX_SETTINGS': {},
    }
}

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',  # <---- Using the memcached backend from django
        'LOCATION': 'mysite_cache:11211',  # <---- This is the memcached docker-compose service
        'TIMEOUT': 600,
    }
}

STATIC_ROOT = '/data/static'  # <---- We will store the static files in an external docker volume shared with nginx
MEDIA_ROOT = '/data/media'  # <---- We will store the media files in an external docker volume shared with nginx
That's it - your settings now connect Wagtail with the database, Elasticsearch and Memcached. (Django's per-site cache middleware also honours settings like CACHE_MIDDLEWARE_SECONDS and CACHE_MIDDLEWARE_KEY_PREFIX if you want to tune the caching behaviour.)
Since we are building a fully Dockerized environment and want all the advantages of this setup, we also run nginx in a container:
- We get access to the other docker containers by name
- We can separate the network stacks (backend/frontend)
Create an nginx folder anywhere (but not inside the mysite tree):
mkdir nginx
cd nginx
touch docker-compose.yml
Edit the docker-compose.yml:
version: "3"
services:
nginx:
image: nginx:latest
container_name: nginx
restart: always
volumes:
# - ./etc/nginx/ssl:/etc/nginx/ssl:ro
# - ./var/log/nginx:/var/log/nginx
- ./etc/nginx/nginx.conf:/etc/nginx/nginx.conf:ro
- ./etc/nginx/sites-enabled/:/etc/nginx/sites-enabled:ro
- ./etc/nginx/sites-available/:/etc/nginx/sites-available:ro
- mysite_static:/var/www/mysite/static:ro # We will make this readonly
- mysite_media:/var/www/mysite/media # Media needs write access (e.g. upload files)
- /etc/localtime:/etc/localtime:ro
networks:
- frontend
ports:
- "0.0.0.0:80:80"
- "0.0.0.0:443:443"
networks:
frontend:
external: true
volumes:
mysite_static:
external: true
mysite_media:
external: true
This file will:
- Mount a local nginx.conf file
- Mount a local sites-enabled folder
- Mount a local sites-available folder. Please note: I don't know a proper way to mirror symlinks from the host into the container. This means: if you want to enable a site, you have to copy or move it - see the example below.
- Mount the external docker volumes with the static and media files. Create them with docker volume create mysite_static and docker volume create mysite_media.
- Mount the localtime to get correct timestamps in the logs.
- Connect nginx to an external frontend network. Create this network with docker network create frontend.
- Publish ports 80 and 443 on the HOST; 0.0.0.0 opens the ports on all host interfaces.
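The one-time setup commands together, plus enabling a site by copying instead of symlinking (a sketch, assuming you keep your configs in sites-available; paths are relative to the nginx folder):
docker volume create mysite_static
docker volume create mysite_media
docker network create frontend
cp etc/nginx/sites-available/mysite.conf etc/nginx/sites-enabled/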
Create the following folders in the root of your nginx folder:
mkdir -p etc/nginx/sites-enabled
mkdir -p etc/nginx/sites-available
Create the etc/nginx/nginx.conf file:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
events {
    worker_connections 1024;
    # multi_accept on;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    gzip on;
    gzip_disable "msie6";
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*; # <---- Will include our config files
}
Please edit this config to fit your needs.
Create the etc/nginx/sites-enabled/mysite.conf file:
#server {
#    listen 80;
#    server_name mysite.com;
#    return 301 https://$host$request_uri;
#}

server {
    listen 80;
    # listen 443 ssl;
    # ssl_certificate path/to/your/cert;
    # ssl_certificate_key path/to/your/key;
    server_name mysite.com;
    client_max_body_size 10M;

    location /static/ { # <---- Serving static files with 1 day cache
        expires 1d;
        add_header Cache-Control "public";
        access_log off;
        root /var/www/mysite; # <---- nginx appends the URI (/static/...), so the root must be one level above
    }

    location /media/ { # <---- Serving media files with 1 day cache
        expires 1d;
        add_header Cache-Control "public";
        access_log off;
        root /var/www/mysite; # <---- Same here: /media/... is appended to this path
    }

    location / {
        proxy_pass http://mysite:8000; # <---- This will be the docker-compose service of mysite
        proxy_set_header Host $host;
        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $server_name;
    }
}
Please read about SSL in the nginx docs to get the encryption working.
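Once the nginx container is running (we start it at the end of this article), you can check the configuration from inside it:
docker-compose exec nginx nginx -t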
Go back to your mysite folder and create a docker-compose.yml file:
version: "3"
services:
mysite_db:
container_name: mysite_db
image: postgres:10.3
restart: always
volumes:
- mysite_db_data:/var/lib/postgresql/data
environment:
- POSTGRES_USER=mysite
- POSTGRES_PASSWORD=mysite
- POSTGRES_DB=mysite
networks:
- backend
mysite_elasticsearch:
container_name: mysite_elasticsearch
image: docker.elastic.co/elasticsearch/elasticsearch:6.0.1
restart: always
networks:
- backend
mysite_cache:
container_name: mysite_cache
image: memcached:1.5.6
restart: always
command: memcached -m 1024
networks:
- backend
mysite:
container_name: mysite
build: .
command: ./docker-entrypoint.sh
restart: always
volumes:
- mysite_static:/data/static
- mysite_media:/data/media
depends_on:
- mysite_cache
- mysite_elasticsearch
- mysite_db
expose:
- "8000" # This port can be accessed by the nginx container, because both servies share the frontend network
environment:
- DJANGO_SETTINGS_MODULE=mysite.settings.production
networks:
- backend
- frontend
networks:
frontend:
external: true
backend:
# No need external - this network won't be need by any other containers
volumes:
mysite_db_data:
external: true
mysite_static:
external: true
mysite_media:
external: true
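Note that mysite_db_data is declared as an external volume as well, so create it once before the first start:
docker volume create mysite_db_data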
This compose file will create all the services we need. We also separate the networks and don't publish any ports; the mysite service only exposes port 8000 to other containers on the shared frontend network.
Inside your mysite directory run:
docker-compose up
If everything is fine, close it with ctrl + c and start it daemonized with docker-compose up -d (if you change files, append --force-recreate --build to force a rebuild of the container).
Inside your nginx folder, run:
docker-compose up
If everything is fine, repeat the step from above and start it daemonized with docker-compose up -d.
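A quick smoke test once both stacks are up (a sketch; mysite.com must match the server_name above and be covered by your Django ALLOWED_HOSTS setting):
docker ps                                      # all five containers should be "Up"
curl -H "Host: mysite.com" http://localhost/   # should return the Wagtail welcome page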
Thanks!