This walkthrough is broken up into two parts: a quick-start guide to running Superalgos with Docker and a deeper, more theoretical dive into containers, in hopes of explaining some of the design decisions.
Superalgos, at its core, is a web application, which means it can be deployed inside a container like many other web applications. One of the leading platforms for operating containers is Docker. Docker can run on many different operating systems and compute platforms. Containers provide an easy, fast, repeatable, and secure way to deploy and distribute applications. While it doesn't take much experience to run containers or Docker, there are some basics that any user should learn in order to use the technology effectively.
Before getting started, be aware that Docker is not the originally intended method of running the Superalgos application, and as such there are some drawbacks to doing so. Namely, you won't be able to contribute back to the project or configure your Governance profile. We'll discuss this more later. Therefore, you should only consider using the Docker container for production deployments.
It is also worth noting that Superalgos is very resource intensive, so it is best to get acquainted with the software by running it on a fast computer with a good amount of RAM as opposed to a tiny, slow Raspberry Pi. But once you are ready to run a production deployment, a more efficient, inexpensive server is preferred.
First, Docker Desktop needs to be installed on a Mac or Windows computer in order to run the containers. Linux and BSD users will need Docker Engine. Docker Compose is also recommended; it is included with Docker Desktop, but it may need to be installed separately.
On macOS, Docker can be downloaded from the Docker Store following the directions in the official Docker Docs. It can also be installed from the command line using Homebrew.
brew install --cask docker
brew install docker-compose
Then, make sure Docker Desktop is running. An easy way to check is to press `Cmd + Space` and type `docker.app` in the search bar. You should also see the little whale icon in the top menu bar near the clock.
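You can also confirm from a terminal that the Docker daemon is responding; `docker version` is a safe, read-only check (if the daemon isn't running, the Server section will show an error):

# confirm both the client and the daemon are reachable
docker version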
Windows requires a few more steps than macOS, but Docker can also be downloaded from the Docker Store following the directions in the official Docker Docs. It can also be installed from the command line using Chocolatey.
choco install docker-desktop
Docker should automatically start after installation. If you don't see the whale icon in the tray by the clock, then find Docker in the start menu.
On Linux and BSD systems, Docker can be installed using the preferred package manager. Step-by-step instructions for many different distributions of Linux can be found in the official Docker Docs. Since Superalgos runs well on Raspberry Pi 4 single-board computers, I'm going to illustrate the commands necessary to run Superalgos on the Raspbian Linux distribution.
# install the requirements
sudo apt-get update
sudo apt-get install \
apt-transport-https \
ca-certificates \
curl \
gnupg \
lsb-release
# add the gpg keys to verify the packages
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
# configure the official repository
echo \
"deb [arch=arm64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# install docker engine
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose
Docker should automatically start when it is done installing. You can check its status with `systemctl status docker`. To make things easier, ensure your user is added to the `docker` group. Type `id` at the command prompt and you should see something like `groups=...999(docker)`. If you don't see it listed, you can add yourself with `sudo usermod -a -G docker <username>`.
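Note that group changes only take effect in new login sessions. If you'd rather not log out and back in, something like the following should confirm everything works:

# pick up the new group membership in the current shell
newgrp docker
# verify that Docker runs without sudo
docker run --rm hello-world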
Now that we have Docker installed and running, we can run the container. The option `--rm` will remove the instance of the container when it exits.
docker run --rm ghcr.io/superalgos/superalgos:latest
You'll see some messages showing the container being downloaded and then some messages from the application itself showing that it is starting up and ready to accept connections. We'll stop the container now and add a few more options to make it more useful, so press `Ctrl + C` to exit.
We'll need to expose some ports so we can actually connect to it from our browser. Also, we'll want to create a few directories on our host file system so we can save, or persist, data between container reboots.
- `-d` to run the container daemonized, in the background
- `-p` to map ports to the host system to allow connections
- `-v` to mount local directories inside the container to persist data
- `--name` to assign a name to the container for easier reference
# create the data directories
mkdir -p Data-Storage Log-Files My-Workspaces
# run the container with the extra options
docker run \
-d \
--rm \
--name superalgos \
-p 18041:18041 \
-p 34248:34248 \
-v $(pwd)/Data-Storage:/app/Data-Storage \
-v $(pwd)/Log-Files:/app/Log-Files \
-v $(pwd)/My-Workspaces:/app/My-Workspaces \
ghcr.io/superalgos/superalgos:latest
You can check the status of the container with `docker ps -a` and view the logs with `docker logs superalgos`.
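If you want to watch the output live rather than a one-off snapshot, `docker logs` can follow the stream:

# follow the log output, starting from the last 100 lines
docker logs -f --tail 100 superalgos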
Note that you will see one error in the log output informing you that `git` is not installed. This is intentional; `git` is not required for the bot to function properly.
Now you can open your browser and load the application front end. Try loading http://localhost:34248 if you are on the same computer that is running the container; otherwise use the host IP, for example http://192.168.1.10:34248.
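If you'd rather check from the command line first (for example on a headless server), a simple request to the frontend port is enough to confirm it is reachable; the exact response doesn't matter:

# any HTTP response means the frontend port is reachable
curl -I http://localhost:34248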
If you want to stop the container, you can run `docker stop superalgos`. When you want to upgrade the container, use:
docker pull ghcr.io/superalgos/superalgos:latest
docker stop superalgos
docker run \
-d \
--rm \
--name superalgos \
-p 18041:18041 \
-p 34248:34248 \
-v $(pwd)/Data-Storage:/app/Data-Storage \
-v $(pwd)/Log-Files:/app/Log-Files \
-v $(pwd)/My-Workspaces:/app/My-Workspaces \
ghcr.io/superalgos/superalgos:latest
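Pulling a new `latest` image leaves the previous one behind as a dangling image, so disk usage grows over time. If you'd like to reclaim that space, Docker can remove unused images:

# remove dangling images left behind by previous pulls
docker image prune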
Let's stop the container for now (`docker stop superalgos`) and run it again using `docker-compose`.
Continue reading the Docker Command Line documentation for more details.
Docker-compose is a wrapper for the Docker API which makes it a little easier to maintain a declarative configuration for an application instead of using direct command line commands. There is a sample docker-compose configuration included in the Superalgos repository which you can use as the basis for your own configuration.
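Depending on how Docker was installed, you may have the standalone `docker-compose` binary, the newer `docker compose` plugin, or both. A quick version check tells you which one is available:

# either of these should print a version if compose is installed
docker-compose version
docker compose version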
Let's download the sample and edit it. If you don't use `vim`, swap that command for your preferred editor.
wget https://raw.githubusercontent.com/Superalgos/Superalgos/master/Docker/docker-compose.yml
vim docker-compose.yml
Now, let's change some of the settings:
version: "3"
services:
  superalgos:
    image: ghcr.io/superalgos/superalgos:latest
    command: ["minMemo"]
    ports:
      - '34248:34248'
      - '18041:18041'
    volumes:
      - ./Data-Storage:/app/Data-Storage
      - ./Log-Files:/app/Log-Files
      - ./My-Workspaces:/app/My-Workspaces
    environment:
      PUID: '1000'
      PGID: '1000'
    restart: on-failure
You can see we have the same ports and volumes mapped. We also get a few extra settings: `command`, `environment`, and `restart`.
- `command` adds an extra setting to the main startup, so the application ends up running with the full command `node platform noBrowser minMemo`. The base of that command is defined as the `ENTRYPOINT` in the `Dockerfile` that builds the container.
- `environment` defines environment variables. The `PUID` and `PGID` settings are intended to help with the permissions on the mounted volumes to ensure the files and directories are writable. The values should map to your local user UID and GID on the host system. Use the `id` command to confirm (a quick check follows this list).
- `restart` tells Docker to restart the container if an error occurs.
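As that quick check of the `PUID`/`PGID` values, print your own IDs on the host and plug them into the compose file if they differ from 1000:

# numeric user and group IDs of the current host user
id -u
id -g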
Save the file and exit the editor. If you are using `vim`, hit Escape and then type `:wq!` or press `Shift + Z + Z` (two capital Z's).
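Before starting anything, it's worth validating the file; `docker-compose config` parses the YAML and prints the resolved configuration, or an error if something is off:

# validate and print the resolved compose configuration
docker-compose config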
Now, start the container with `docker-compose up`. This will start the container in the foreground and you'll connect to the log output. Hit `Ctrl + C` to exit. Going forward, you'll want to add the `-d` option to start the container and keep it running in the background.
docker-compose up -d
Now you have several `docker-compose` commands at your disposal to interact with the container:
- `docker-compose ps` to see details of the container
- `docker-compose logs` to see the logs (example below)
- `docker-compose down` to stop and remove the instance of the container
- `docker-compose pull` to pull the container, or a new version of the `latest` tag
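For example, the logs command accepts the same follow flag as the plain Docker CLI, and there is also a restart subcommand for bouncing the service without recreating it:

# follow the log output of the superalgos service
docker-compose logs -f superalgos
# restart the service in place
docker-compose restart superalgos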
Putting some of those concepts together, you can keep the running container up to date with this one-liner:
docker-compose pull && docker-compose down && docker-compose up -d
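If you want to automate that, a cron entry is one option. This is just a sketch: the path is a placeholder and it assumes `docker-compose` is on cron's PATH, so adjust it to wherever your docker-compose.yml actually lives:

# example crontab entry (edit with crontab -e): update nightly at 03:00
# /home/pi/superalgos is a hypothetical path
0 3 * * * cd /home/pi/superalgos && docker-compose pull && docker-compose down && docker-compose up -d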
Explore the full Docker Compose documentation for more information.
As with any technology, there are lots of ways of doing things. Many of them will give the same end result, but some may have fewer steps, or be more secure, or be more maintainable. That is where "best practices" come in. They aren't necessarily there to tell you what to do and what not to do; they're there to help guide you in the right direction. A lot of the time, they help you avoid making architectural mistakes that could come back and bite you further down the road.
Docker has many advantages which can aid in every step from development through production deployment. There are some useful tips located directly in the Docker documentation. Concisely: keep build images small, be deliberate about when and how to persist data, and use CI/CD for testing and deployment. All of these tips are used in one way or another as part of the Superalgos project.
- Use a minimal base image and multi-stage builds. Superalgos is currently using an Alpine base image, but distroless images are becoming more common, and sometimes a base image like python-slim is the best that can be achieved.
- The images are quite large (around 2 GB), unfortunately, because the codebase itself is large. We've recently put in a lot of effort to make these images as small as possible (you can check the size locally, as shown after this list).
- The images are built automatically using GitHub Actions when pull requests are merged.
- With the correct configuration, data, like data mine and trading bot information, is persisted to the host using volume mounts.
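If you're curious how large the image actually is on your machine, `docker image ls` will show it (sizes vary a bit by architecture):

# show locally pulled Superalgos images and their sizes
docker image ls ghcr.io/superalgos/superalgos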
Docker maintains a list of Dockerfile best practices in their main documentation. These tips dictate many of the decisions around the Superalgos Dockerfile. Some of the tips that are used by Superalgos are listed below.
- Use `.dockerignore` to prevent unwanted data from entering the container. This keeps the container small and also enhances security by preventing secrets from leaking into a container by accident.
- Don't install unnecessary packages. In lieu of multi-stage builds, build caches and build dependencies are removed before finalizing the container. Only packages that are necessary for the execution of the application are installed. This is why `git` is not present.
- Minimize the number of layers. Many `RUN` commands are combined into one using `&&` in order to reduce the number of layers (see the `docker history` example after this list).
- Sort multiline arguments. This mostly enhances readability and maintainability.
- Leverage build cache. When testing locally, several of the tips above help leverage the build cache and speed up build times.
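You can see the effect of the layer-related tips for yourself by inspecting the image's layers:

# list each layer of the image along with the instruction that created it and its size
docker history ghcr.io/superalgos/superalgos:latest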
Superalgos is a web app but it was originally designed to be run on a local workstation or server on a trusted network. However, I don't think it would be prudent to talk about Docker best practices without mentioning the Twelve Factor App manifesto. Over time, Superalgos itself may iterate and implement more and more of these principles as more people get involved and more people deploy production environments using Docker.
The Twelve Factors:
- Codebase: One codebase tracked in revision control, many deploys
  - Superalgos utilizes one GitHub repository
- Dependencies: Explicitly declare and isolate dependencies
  - Superalgos is a node.js app and uses npm to track and install dependencies
- Config: Store config in the environment
  - Superalgos does not use environment variables at the moment; options are passed as command-line arguments instead
- Backing services: Treat backing services as attached resources
  - Superalgos connects to external services via API calls over HTTP (TCP)
- Build, release, run: Strictly separate build and run stages
  - Superalgos is deployed by end users in their own local environments
- Processes: Execute the app as one or more stateless processes
  - Superalgos is one application
- Port binding: Export services via port binding
  - Superalgos exposes an HTTP frontend port and a WebSocket backend port (TCP)
- Concurrency: Scale out via the process model
  - Superalgos can be clustered, but the front end and back end do not necessarily scale out independently from each other
- Disposability: Maximize robustness with fast startup and graceful shutdown
  - Superalgos is stateless in that the data files are persisted outside the container through the volume mounts
- Dev/prod parity: Keep development, staging, and production as similar as possible
  - Superalgos doesn't have its own production environment, but the container makes it easy to ensure development environments are the same as what a user will see when they deploy it for themselves in their own production environment
- Logs: Treat logs as event streams
  - Superalgos sends its own application logs to standard out; the trading mine logs are sent to flat files (persisted through volume mounts). See the example after this list.
- Admin processes: Run admin/management tasks as one-off processes
  - Superalgos does not have any admin or management tasks
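As that logs-as-event-streams example: because the application writes to standard out, anything that can read a stream can consume the logs; `grep` here is just a stand-in for a real log shipper:

# pipe the container's stdout stream into any consumer
docker-compose logs -f --no-color superalgos | grep -i error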
A few years after the initial publication of the original Twelve Factor App manifesto, a revision and expansion was created. Beyond the Twelve Factor App was written by Kevin Hoffman and published by O'Reilly Media. I highly recommend reviewing both the original and this modification if you are a developer or system administrator who is deploying web applications.
For me, the main addition in Beyond is Telemetry. Logs, metrics, and traces are the three pillars of observability, and every production system should have them to get a full view of what is happening in the system at any time.