This guide will cover how to set up a private Docker registry on DigitalOcean. The registry will connect to an S3-compatible storage provider (DigitalOcean Spaces in this case) and will be secured behind TLS using Let's Encrypt. All of the components have been abstracted into a single docker-compose manifest so that the registry itself requires minimal setup and can be reconstructed with ease in the event of infrastructure failure.
The registry setup itself is straightforward; however, there are a couple of items to address beforehand, namely the cloud infrastructure that will host and support the registry. With this being the case, it is assumed that basic knowledge exists surrounding cloud infrastructure in general, and that things such as domain names are already registered. For this guide, we'll be covering how to set up the registry using DigitalOcean resources. This includes a droplet for the server itself, along with DigitalOcean's bucket storage (called "Spaces") to persist the Docker images.
While these steps are tailored for DigitalOcean, they can be adapted to other cloud providers, as the principles are roughly the same across platforms.
In order to host the registry, a droplet must exist within DigitalOcean. Navigate to the DigitalOcean control panel and create a droplet with the desired specs. The size of the droplet doesn't matter too much, since the Docker images themselves will be offloaded to S3-compatible storage. The operating system should be Ubuntu 20.04, and the data center may be chosen freely. Keep in mind that traffic going in and out of the data center will be subject to the bandwidth caps outlined by DigitalOcean. If other infrastructure is in place that will utilize this registry internally, it's best to keep everything in the same data center.
Finally, choose either to install an SSH key or to use a password, and then create the droplet.
Note: This guide does not cover how to set up an SSH key as this is readily available elsewhere.
Navigate to DigitalOcean's control panel and select the networking dashboard. From here, click on the domains tab and select the domain that is going to govern the registry. Bind a new A record to the droplet that was just created so that the domain name resolves to the droplet.
Now that the droplet has been created, it's time to configure everything. Start by opening a connection to the droplet via SSH.
ssh root@SERVER_IP_ADDRESS
This will open a connection to the server as the root user. If SSH keys were not configured, the command will prompt for a password before connecting.
Securing the server should be the first priority. It is not advisable to use the root user for everyday tasks, so start by creating a new user to handle general administration and maintenance.
adduser registry
Follow the prompts and then assign the new user sudo privileges so that administrative tasks may be performed when necessary.
usermod -aG sudo registry
If the initial login to the root user was made via an SSH key, then password login is disabled by default. In order to authenticate as the new user, simply copy the authorized_keys file from the root user to the new user and modify the ownership of the file.
mkdir /home/registry/.ssh
cp /root/.ssh/authorized_keys /home/registry/.ssh/authorized_keys
chown -R registry:registry /home/registry/.ssh
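The commands above copy the key material, but it is good practice to also tighten the permissions on the new user's .ssh directory, since sshd (with its default StrictModes setting) can refuse keys whose files are too permissive. A quick sketch of the expected permissions, demonstrated here on a scratch directory (in practice, substitute /home/registry):

```shell
# Demonstrated on a temporary directory; in practice these chmod calls
# target /home/registry/.ssh and its authorized_keys file.
DEMO=$(mktemp -d)
mkdir -p "$DEMO/.ssh"
touch "$DEMO/.ssh/authorized_keys"

chmod 700 "$DEMO/.ssh"                  # only the owner may enter the directory
chmod 600 "$DEMO/.ssh/authorized_keys"  # only the owner may read/write the keys

stat -c '%a %n' "$DEMO/.ssh" "$DEMO/.ssh/authorized_keys"
```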
From here, close the connection and authenticate again as the newly created user.
ssh registry@SERVER_IP_ADDRESS
Setting up the firewall is technically optional, but it is highly recommended.
Ubuntu 20.04 comes with UFW by default. Applications can register their profiles with UFW upon installation, which allows UFW to manage those applications by name. The applications that have registered profiles can be viewed by running:
sudo ufw app list
Note: Since UFW requires administrative privileges, if ufw is run as any user aside from root, sudo must prefix the ufw command.
This will return the following list on a fresh Ubuntu 20.04 installation.
Available applications:
OpenSSH
Since a remote connection is necessary to interact with the server, set UFW to allow connections to OpenSSH prior to enabling the firewall.
sudo ufw allow OpenSSH
Once this has been done, the firewall can be enabled.
sudo ufw enable
Type y and then hit Enter. All incoming connections to the server will now be blocked, with the exception of SSH traffic. To verify the change, simply run:
sudo ufw status
This will show that OpenSSH connections are allowed while everything else remains blocked. Note that Docker, installed later in this guide, publishes container ports by writing iptables rules directly, which bypasses UFW; the registry's ports 80 and 443 will therefore be reachable without any additional UFW rules.
Status: active
To Action From
-- ------ ----
OpenSSH ALLOW Anywhere
OpenSSH (v6) ALLOW Anywhere (v6)
Next up is Docker. Ubuntu doesn't include Docker's repository by default, so adding it will be necessary before an installation is possible.
sudo apt update && sudo apt upgrade
sudo apt install apache2-utils
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable"
sudo apt install docker-ce
These commands will:
- Update the existing packages included with the system
- Install apache2-utils (used for generating credentials later)
- Add the GPG key for the Docker repository, followed by the repository itself
- Install Docker
The Docker daemon is configured to start immediately after installation and will restart with the machine. This can be verified by running:
sudo systemctl status docker
The output from this command should look similar to the following.
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: active (running) since Sat 2022-04-16 01:14:17 UTC; 30s ago
TriggeredBy: ● docker.socket
Docs: https://docs.docker.com
Main PID: 753 (dockerd)
Tasks: 39
Memory: 144.0M
CGroup: /system.slice/docker.service
├─ 753 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
Once Docker is installed, there are a few more steps to perform in order to get everything up and running properly. By default, the docker command can only be run by a user with administrative privileges (sudo), or by a user in the docker group. Add the non-root user to the docker group and reauthenticate with the server (or just switch users) for the changes to take effect.
sudo usermod -aG docker registry
su - registry
The changes may be confirmed by running:
groups
This should output the current groups that the authenticated user is part of.
registry sudo docker
The last step in getting Docker configured is to install the compose plugin. This will allow the use of a docker-compose manifest to spin up all of the scaffolding necessary for the registry with ease. The latest version of the plugin can be found on the releases page on GitHub. Substitute the version used here for the latest version available at the time of installation.
mkdir -p ~/.docker/cli-plugins/
curl -SL https://github.com/docker/compose/releases/download/v2.4.1/docker-compose-linux-x86_64 -o ~/.docker/cli-plugins/docker-compose
chmod +x ~/.docker/cli-plugins/docker-compose
Note: There is a version of docker-compose available via Ubuntu's default package manager. This is not the correct version, as it is v1 of docker-compose. Downloading the plugin from the project's GitHub page will provide the latest version.
Once this is complete, the compose subcommand may be used with docker, allowing the docker-compose.yaml manifest to be spun up later in this guide.
With the server set up, the next step is making sure that a bucket exists to store the images. Navigate to the DigitalOcean control panel and create a new Spaces bucket. The creation page will ask for a region and a unique name. It will also give options for enabling a CDN and for either public or private file listings. Choose the region that is physically closest to the rest of the infrastructure, and then give the bucket a unique name. Ignore the CDN option and keep the file listing private. This bucket will be used exclusively for the registry, so public options are not necessary.
Once the bucket is created, all that's left is to generate the access keys.
To generate the access keys for the Spaces bucket, navigate to DigitalOcean's API section and look for the "Spaces access keys" section. Simply generate a new key and give it a name. Copy both the key and the secret somewhere safe for now, as they will be used when configuring the registry. Note that the secret is only displayed once; after navigating away from this page, it will no longer be available to view.
Now that the infrastructure is in place, the registry itself can be set up. Open an SSH connection to the non-root user that was created.
ssh registry@SERVER_IP_ADDRESS
Once connected to the server, create a new directory that will contain the docker-compose manifest and cd into it.
mkdir /home/registry/manifest && cd /home/registry/manifest
Create a new docker-compose.yaml file here and paste the following contents into the file.
nano docker-compose.yaml
version: '3'

networks:
  registry:
    driver: bridge

services:
  # Nginx Proxy
  nginx-proxy:
    image: nginxproxy/nginx-proxy:1.0.1-alpine
    container_name: nginx-proxy
    ports:
      - 80:80
      - 443:443
    restart: always
    volumes:
      - conf:/etc/nginx/conf.d
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - dhparam:/etc/nginx/dhparam
      - certs:/etc/nginx/certs:ro
      - /var/run/docker.sock:/tmp/docker.sock:ro
    networks:
      - registry

  # Certbot Companion
  acme-companion:
    image: nginxproxy/acme-companion:2.2.1
    container_name: acme-companion
    restart: always
    environment:
      - NGINX_PROXY_CONTAINER=nginx-proxy
      - DEFAULT_EMAIL=YOUR_EMAIL_HERE
      # Uncomment the line below to fetch a staging certificate instead of a production certificate.
      # - ACME_CA_URI=https://acme-staging-v02.api.letsencrypt.org/directory
    volumes:
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - dhparam:/etc/nginx/dhparam:ro
      - certs:/etc/nginx/certs
      - acme:/etc/acme.sh
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - registry

  # Docker Registry
  docker-registry:
    image: registry:2.8.1
    container_name: docker-registry
    restart: always
    env_file:
      - ./registry-config.env
    environment:
      - VIRTUAL_PORT=5000
      - VIRTUAL_HOST=YOUR_DOMAIN_NAME_HERE
      - LETSENCRYPT_HOST=YOUR_DOMAIN_NAME_HERE
    volumes:
      - ./registry-config.yaml:/etc/docker/registry/config.yml
      - ./registry-htpasswd:/auth/.htpasswd
    networks:
      - registry

volumes:
  conf:
  vhost:
  html:
  certs:
  acme:
  dhparam:
If using nano to edit the file, the contents can be saved by using ctrl + o and then hitting enter. Exiting the editor can be done with ctrl + x.
This manifest will create a number of resources on the system when spun up.
The first resource that the manifest will create is the network. This network is necessary to route traffic between the various Docker containers that are part of the services block. In addition to facilitating the connections between containers on the same network, it is also necessary in order to bind traffic to the host machine.
The reverse proxy service enables Docker containers to be accessible via a specified hostname. Whenever a container is spun up on the same network as the proxy container, the reverse proxy will take note of the VIRTUAL_HOST environment variable on that container. The reverse proxy will then match the hostname of an incoming request to the container with the same VIRTUAL_HOST value and route all of the traffic to the matching container. This makes setting up the webserver portion of the registry incredibly easy and free of complicated Nginx configurations. In the event that a custom configuration is desired, this can be overridden on a per-hostname basis. Read more about the reverse proxy container on the nginx-proxy/nginx-proxy GitHub page.
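To illustrate, here is a hypothetical fragment (not part of this guide's manifest) for an additional service; joining the registry network and setting VIRTUAL_HOST is all the proxy needs to start routing requests for that hostname to the container:

```yaml
# Hypothetical service for illustration only; whoami.example.com stands in
# for a real domain. Placed under the same top-level services block.
whoami:
  image: containous/whoami
  restart: always
  environment:
    - VIRTUAL_HOST=whoami.example.com
  networks:
    - registry
```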
The acme companion service is intended to be used alongside the reverse proxy service and is responsible for requesting and refreshing TLS certificates for any container on the same Docker network as the reverse proxy. This service shares resources with the reverse proxy so that acme challenges and certificates are reachable by Nginx.
Before utilizing this service, be sure to update the DEFAULT_EMAIL environment variable with the correct email so that certificates are registered properly with Let's Encrypt. While testing the setup, it is also advisable to uncomment the ACME_CA_URI environment variable so that staging certificates are requested instead of production certificates. Production certificate requests are rate limited and should only be made once it is confirmed that everything is working properly.
The last resource is the heart of the configuration: the Docker registry itself. Docker provides an official registry container that requires minimal configuration. The only configuration that's needed at the manifest level is the VIRTUAL_HOST and LETSENCRYPT_HOST. These values should both be the domain name that was configured for the registry. The rest of the configuration will be covered below.
Configuring the registry doesn't take too much effort. The container is set up to accept three different configuration files:
- registry-config.yaml - This file will contain all of the generic registry configuration that is commonly used.
- registry-config.env - This file will contain the registry's secret key, along with all of the S3 connection information.
- registry-htpasswd - This file will contain the credentials used for authentication.
The container also accepts environment variables directly within the docker-compose.yaml manifest that specify the domain where the registry will be hosted, as well as the port number to use for connections.
Start by filling in the VIRTUAL_HOST and LETSENCRYPT_HOST variables in the docker-compose.yaml manifest with the appropriate domain. Both of these values should be the same and will be used to determine when traffic should be routed to the container, and what domain should be used for the TLS certificate.
Next, create the registry-config.yaml file and paste the following configuration as the contents.
nano registry-config.yaml
version: 0.1
log:
  fields:
    service: registry
storage:
  cache:
    blobdescriptor: inmemory
http:
  addr: :5000
  headers:
    X-Content-Type-Options: [nosniff]
auth:
  htpasswd:
    realm: basic-realm
    path: /auth/.htpasswd
health:
  storagedriver:
    enabled: true
    interval: 10s
    threshold: 3
Save the contents using ctrl + o and then exit the editor with ctrl + x.
There's not much to explain here, as this is just a basic config to get started with, but it is important to note that the auth key is set to htpasswd and that the path is set to /auth/.htpasswd on the container's filesystem.
Once the first config file has been created, move on to creating the second config file. This second config file will be used to store the registry secret, along with the S3 configuration. This is abstracted to a .env file instead of being stored with the .yaml config, as these values are typically secrets and should not be exposed publicly. Make sure to have the keys that were generated earlier handy to fill in the blanks.
nano registry-config.env
REGISTRY_HTTP_SECRET={SOME_RANDOM_STRING}
REGISTRY_STORAGE=s3
REGISTRY_STORAGE_S3_ACCESSKEY={DO_SPACES_ACCESS_KEY}
REGISTRY_STORAGE_S3_SECRETKEY={DO_SPACES_SECRET_KEY}
REGISTRY_STORAGE_S3_BUCKET={DO_SPACES_BUCKET_NAME}
REGISTRY_STORAGE_S3_REGION={DO_SPACES_BUCKET_REGION}
REGISTRY_STORAGE_S3_REGIONENDPOINT=https://{DO_SPACES_BUCKET_REGION}.digitaloceanspaces.com
REGISTRY_STORAGE_S3_ROOTDIRECTORY={PATH/TO/IMAGES}
An example of this file when completed would look like the following:
REGISTRY_HTTP_SECRET=INuoF6QdgcbOpJIsjMlSVLhej
REGISTRY_STORAGE=s3
REGISTRY_STORAGE_S3_ACCESSKEY=4PMQ42R2BR8QPL5J9HPK
REGISTRY_STORAGE_S3_SECRETKEY=YgwjECIdGblwh0jRAg5aYV1Po6Wvo6JM2oYHH7GWykg
REGISTRY_STORAGE_S3_BUCKET=my-docker-registry
REGISTRY_STORAGE_S3_REGION=nyc3
REGISTRY_STORAGE_S3_REGIONENDPOINT=https://nyc3.digitaloceanspaces.com
REGISTRY_STORAGE_S3_ROOTDIRECTORY=images
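The value for REGISTRY_HTTP_SECRET can be any sufficiently random string. One convenient way to generate one (assuming openssl is available, as it is by default on Ubuntu) is:

```shell
# Generate 16 random bytes and hex-encode them, yielding a 32-character secret
# suitable for REGISTRY_HTTP_SECRET.
SECRET=$(openssl rand -hex 16)
echo "REGISTRY_HTTP_SECRET=${SECRET}"
```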
Once the registry-config.env has been filled in, save the contents using ctrl + o and then exit the editor with ctrl + x.
The last step in configuring the registry is to create credentials. This can be done using the apache2-utils package that was installed earlier in this guide. The apache2-utils package comes with the htpasswd command, which will generate credentials in the format that the registry expects. Note that the registry only supports bcrypt-hashed passwords, so the -B flag must be included. To generate a set of credentials, run the following command (replacing username with the desired username).
sudo htpasswd -cB /home/registry/manifest/registry-htpasswd username
The command will prompt for a password to associate with the user and then store the credentials within the registry-htpasswd file.
If multiple users need to be added, simply run the command again, this time without the -c argument and with a different username.
sudo htpasswd -B /home/registry/manifest/registry-htpasswd another-username
Now that the registry has been configured, it can be started! Start the registry with docker compose and let the containers work their magic.
docker compose up
This will start the registry in the foreground, allowing visibility into the logs to ensure that the containers are functioning properly. The first time this command is run, there will be a lot of output to the console. The proxy container will create all of the necessary configurations, the acme companion will request TLS certificates, and the registry will go through the necessary steps to bootstrap itself. Verify that there are no errors.
Once the containers have finished the bootstrapping process, verify that the server is reachable via a browser.
https://YOUR_DOMAIN/v2/
Visiting this address in a browser will prompt for a username and password -- the same ones that were just configured as part of the registry-htpasswd file. After successfully authenticating, a blank page with an empty JSON object should be presented to the browser. This means that everything worked properly!
Now it's time to kill the process running the docker-compose manifest with ctrl + c. Running the registry in the foreground is great for debugging the containers and making sure that everything works correctly, but disconnecting the SSH session will kill the process, and the registry along with it. Instead, the registry should be run in the background so that it can be maintained as a daemon. This can be done by passing the -d flag to the compose command.
docker compose up -d
Starting the registry this way will allow the containers to run continuously in the background and allow the SSH session to be terminated without killing the registry.
With the registry now up and running, keeping it running is the next thing to consider. The services in the manifest are configured with restart: always, so Docker will normally bring the containers back up on its own after a reboot (updates, power loss, etc.). As an extra guarantee, the compose command can also be added to the user's crontab so that the registry is recreated automatically when the machine has rebooted, even if the containers were removed. Edit the crontab file to enable this.
crontab -e
The first time this command is run, the terminal will request that the default editor be set. This can be any editor desired; however, Nano has been the editor of choice thus far, so it would likely be wise to select this option.
Once the editor is open, paste the following contents at the bottom of the file.
@reboot docker compose -f /home/registry/manifest/docker-compose.yaml up -d
Save the contents using ctrl + o and then exit the editor with ctrl + x. The next time the server is restarted, the registry will be started without requiring a user to manually authenticate with the server to do so.
Congratulations! Setup is now complete, and Docker images can be pushed to and pulled from the registry! This private registry may now be used in the same manner as any other hosted solution.
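To sketch day-to-day usage: images are addressed by prefixing their names with the registry's hostname. In the snippet below, registry.example.com and my-app:1.0.0 are stand-ins for the configured domain and a local image; the docker commands are shown as comments since they require a live daemon and a reachable registry.

```shell
# registry.example.com and my-app:1.0.0 are hypothetical stand-ins.
REGISTRY=registry.example.com
TARGET="$REGISTRY/my-app:1.0.0"

# Authenticate once per machine with the htpasswd credentials:
#   docker login "$REGISTRY"
# Retag a local image so its name points at the private registry, then push:
#   docker tag my-app:1.0.0 "$TARGET"
#   docker push "$TARGET"
# Any authenticated machine can then pull it back:
#   docker pull "$TARGET"
echo "$TARGET"
```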