- General
- Docker
- Python
Useful links:
- Caching: https://docs.gitlab.com/ce/ci/caching/
- .gitlab-ci.yml: https://docs.gitlab.com/ce/ci/yaml/
- Examples of .gitlab-ci.yml files: https://docs.gitlab.com/ce/ci/examples/
- Available Variables: https://docs.gitlab.com/ce/ci/variables/
When you define your stages, all jobs of the same stage are executed in parallel.
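For example (a minimal sketch; the job names and scripts are made up), `unit_tests` and `lint` run in parallel because they share the Test stage, while `build` waits for both to finish:

```yaml
stages:
  - Test
  - Build

unit_tests:
  stage: Test
  script:
    - echo "running unit tests"

lint:
  stage: Test
  script:
    - echo "running linters"

build:
  stage: Build
  script:
    - echo "building"
```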
- Gitlab.com doesn't support interactive web terminals for now (last I checked: 2019/02/20); follow the related issue for more.
When you're templating/extending, keep in mind that it's better to avoid some of the simplified syntaxes, because when merging values GitLab CI will not merge lists, for example.
Let's say you have something like:

```yaml
deploy:
  only:
    - master
```
Now you want to extend it and add:

```yaml
only:
  # ...
  changes:
    - ./**/*.py
```
To avoid having to repeat the first bit in the extended job, use the expanded form from the beginning, like this:
```yaml
deploy:
  only:
    refs:
      - master
```
Then when you extend, you'll have the result you expect.
```yaml
deploy:
  only:
    refs:
      - master
```

+

```yaml
deploy:
  only:
    changes:
      - ./**/*.py
```

=

```yaml
deploy:
  only:
    refs:
      - master
    changes:
      - ./**/*.py
```
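With `extends`, that merge could look like this (a sketch; the hidden-job name `.deploy_template` is made up). Hashes merge, so `refs` and `changes` both survive; if both jobs set the *same* list key, the extending job's list simply overwrites the template's:

```yaml
.deploy_template:
  only:
    refs:
      - master

deploy:
  extends: .deploy_template
  only:
    changes:
      - ./**/*.py
```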
Run your jobs locally to avoid committing and pushing just to see if you're writing valid "CI code".
There are some limitations, but for basic checks it's good enough.
So, install gitlab-runner: https://docs.gitlab.com/runner/
And you'll be running something like:

```bash
gitlab-runner exec docker my_awesome_job
```
I faced a problem with recent versions (19.*) of Docker when using DinD.
It turns out Docker now generates certificates and enforces TLS connections for DinD.
This is security by default, so people don't make the mistake of deploying Docker-in-Docker open to the world without authentication.
In GitLab CI, I think that may not be a problem (please correct me if I'm wrong).
Try it for yourself:

```yaml
stages:
  - Test

testing:
  stage: Test
  image: docker:19
  services:
    - docker:19-dind
    - postgres:11-alpine
  variables:
    DOCKER_TLS_CERTDIR: ""
  script:
    - docker version
    - docker info
```
```bash
gitlab-runner exec docker --docker-privileged testing
```
- https://github.com/docker-library/docker/blob/487a0ba15be708af420c13e9f0d787c89d8be372/19.03/dind/dockerd-entrypoint.sh#L128
- https://gitlab.com/gitlab-com/support-forum/issues/4416#note_216039772
A service runs in its own container and is available to your job, but it's not automatically reachable from containers you start yourself (with DinD, for example).
My solution at the moment is:
```yaml
stages:
  - Test

testing:
  stage: Test
  image: docker:19
  services:
    - docker:19-dind
    - name: postgres:11-alpine
      alias: postgres
  variables:
    # https://gist.github.com/douglasmiranda/9b899c748e915173c8f19d948bbdc69c#docker-in-docker-doesnt-work-in-gitlab-runner-exec-docker
    DOCKER_TLS_CERTDIR: ""
  script:
    # Let's get the IP for the postgres service.
    # We need it in order to add it as a known host inside our container.
    - POSTGRES_IP=$(awk '$2 == "postgres" {print $1}' /etc/hosts)
    # Just checking that the IP is reachable from outside the container
    - ping -w 2 $POSTGRES_IP
    # Now we add/map our Postgres service IP inside the container.
    # The hostname will be "postgres".
    - docker run --rm --add-host="postgres:$POSTGRES_IP" alpine sh -c "ping -w 5 postgres"
```
Real world example:
```yaml
stages:
  - Build/Test

django:
  stage: Build/Test
  image: docker:19
  services:
    - docker:19-dind
    - name: postgres:11-alpine
      alias: postgres
  variables:
    # https://gist.github.com/douglasmiranda/9b899c748e915173c8f19d948bbdc69c#docker-in-docker-doesnt-work-in-gitlab-runner-exec-docker
    DOCKER_TLS_CERTDIR: ""
  script:
    # Let's get the IP for the postgres service
    - POSTGRES_IP=$(awk '$2 == "postgres" {print $1}' /etc/hosts)
    # Build
    - docker build --target=production -t ubit/django .
    - docker run --rm --add-host="postgres:$POSTGRES_IP" --env="DJANGO_SETTINGS_MODULE=ubit_ads.config.test" --entrypoint="" ubit/django sh -c "pip install --user -r requirements/test.txt && pytest"
```
Note: it may be better to do build/test/release as separate jobs, like I do here.
You can fail a job early if a required variable is not set:

```yaml
job:
  script:
    - '[[ -z "$MY_PASSWORD" ]] && echo "You must set the variable: MY_PASSWORD" && exit 1;'
```
Of course, you also have a built-in way of executing jobs only if a variable equals a given value:
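A minimal sketch using `only: variables:` (the `$DEPLOY_ENV` variable here is made up; define it yourself in your CI settings):

```yaml
deploy_staging:
  script:
    - echo "deploying to staging"
  only:
    variables:
      - $DEPLOY_ENV == "staging"
```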
This can be useful for testing, like in a Build > Test > Release scenario.
Let's see a complete example of how that would look:
```yaml
services:
  - docker:dind

stages:
  - Build
  - Test
  - Release

variables:
  DJANGO_IMAGE_TEST: $CI_REGISTRY_IMAGE/django:$CI_COMMIT_REF_SLUG
  DJANGO_IMAGE: $CI_REGISTRY_IMAGE/django:$CI_COMMIT_SHA

django_build:
  image: docker:stable
  stage: Build
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
    # So we can use it as cache (`|| true` means that even if the pull fails, we'll try to build it)
    - docker pull $DJANGO_IMAGE_TEST || true
    # Using --cache-from we make sure that if nothing changed here we use what's cached
    # BUILD TEST IMAGE:
    - docker build --target=production --cache-from=$DJANGO_IMAGE_TEST -t $DJANGO_IMAGE_TEST .
    # Push so we can use it in subsequent jobs
    - docker push $DJANGO_IMAGE_TEST

django_test:
  image: $DJANGO_IMAGE_TEST
  stage: Test
  services:
    - postgres:11-alpine
  variables:
    POSTGRES_DB: postgres
    POSTGRES_USER: postgres
    POSTGRES_PASSWORD: ""
    POSTGRES_PORT: "5432"
    # Using the test settings, instead of actual production
    DJANGO_SETTINGS_MODULE: myapp.config.test
  script:
    # Install some packages to run tests, then execute pytest
    - pip install --user -r requirements/test.txt
    - pytest

django_release:
  image: docker:stable
  stage: Release
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker pull $DJANGO_IMAGE_TEST
    - docker tag $DJANGO_IMAGE_TEST $DJANGO_IMAGE
    - docker push $DJANGO_IMAGE
```
Services are Docker containers running long-lived processes that you can access from your jobs.
For example, Postgres: https://docs.gitlab.com/ce/ci/services/postgres.html
- The host address will be available to connect at `postgres` (not `localhost`).
- The default `database`, `username` and `password` are the defaults from the official image.
- You can customize some things.
IMPORTANT:
You may want to export the variables so you can see what variables GitLab CI will inject by default.
This can cause some weird behavior: maybe you're expecting `POSTGRES_PORT` to be `5432`, but if you export the variables you'll see that it's actually something like `tcp://172.17.0.3:5432`.
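To see exactly what gets injected, you can dump the environment in a throwaway job (a minimal sketch; the job name is made up):

```yaml
debug_env:
  services:
    - postgres:11-alpine
  script:
    # Prints all exported environment variables in the job log
    - export
```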
So you probably want to define some variables, like:
```yaml
variables:
  POSTGRES_DB: postgres
  POSTGRES_USER: postgres
  POSTGRES_PASSWORD: ""
  POSTGRES_PORT: "5432"
```
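With those set, your job can reach the service at the `postgres` hostname. A minimal sketch of a connectivity check (assuming the job image ships `psql`; with an empty `POSTGRES_PASSWORD`, the official image allows passwordless connections):

```yaml
db_check:
  image: postgres:11-alpine
  services:
    - postgres:11-alpine
  variables:
    POSTGRES_DB: postgres
    POSTGRES_USER: postgres
    POSTGRES_PASSWORD: ""
  script:
    # Connect to the service container by its hostname, not localhost
    - psql -h postgres -U postgres -d postgres -c "SELECT 1;"
```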
If every job needs to push/pull from the GitLab registry, you can log in once in a global `before_script`:

```yaml
before_script:
  - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
```
```yaml
image: docker:stable

services:
  - docker:dind

stages:
  - Linters

test_docker_compose_files:
  stage: Linters
  script:
    # Download and install docker-compose
    - wget https://github.com/docker/compose/releases/download/1.23.2/run.sh -O /usr/local/bin/docker-compose
    - chmod +x /usr/local/bin/docker-compose
    # Validate the main Docker Compose file used in the development environment
    - docker-compose -f docker-compose.yml config
    # Validate the deployment docker stack files
    - docker-compose -f deployment/docker-stack.django.yml config
```
```yaml
deploy:
  image: docker:latest
  stage: Deployment
  script:
    # First let's check if our variable exists:
    - '[[ -z "$MY_SECRET" ]] && echo "You must set the variable: MY_SECRET" && exit 1;'
    # Step two is to check if MY_SECRET is stored in Docker Secrets;
    # if not, we create one
    - docker secret inspect MY_SECRET || echo $MY_SECRET | docker secret create MY_SECRET -
    # And then we deploy to our swarm:
    - docker stack deploy --with-registry-auth -c deployment/docker-stack.yml my_stack
  when: manual
```
Or, if you'd rather generate a random secret on the fly:

```yaml
deploy:
  image: docker:latest
  stage: Deployment
  script:
    - apk add --no-cache openssl
    - docker secret inspect MY_SECRET || openssl rand -base64 50 | docker secret create MY_SECRET -
    # And then we deploy to our swarm:
    - docker stack deploy --with-registry-auth -c deployment/docker-stack.yml my_stack
  when: manual
```
```yaml
validate_stack_files:
  stage: Validate
  image: docker:stable
  script:
    - wget https://github.com/docker/compose/releases/download/1.23.2/run.sh -O /usr/local/bin/docker-compose
    - chmod +x /usr/local/bin/docker-compose
    # Validate the main Docker Compose file used in the development environment
    - docker-compose -f docker-compose.yml config
    # Validate the deployment docker stack files
    - docker-compose -f deployment/docker-stack.django.yml config
  only:
    changes:
      - docker-compose.*
      - deployment/docker-stack.*
```
- Configure your Docker host to accept remote connections with TLS.
- Generate your client certificates.
- In your GitLab environment variables, store: `$TLSCACERT`, `$TLSCERT`, `$TLSKEY`.
```yaml
remote-docker-template-job:
  image: docker:stable
  variables:
    DOCKER_HOST: tcp://YOUR-DOCKER-HOST-IP-HERE:2376
    DOCKER_TLS_VERIFY: 1
  before_script:
    - mkdir -p ~/.docker
    - echo "$TLSCACERT" > ~/.docker/ca.pem
    - echo "$TLSCERT" > ~/.docker/cert.pem
    - echo "$TLSKEY" > ~/.docker/key.pem
    - docker login -u $DEPLOY_USER -p $DEPLOY_TOKEN $CI_REGISTRY
  script:
    # Now you are able to run commands against your remote Docker host from GitLab CI.
    - docker stack deploy ...
```
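The job below merges these settings with `<<: *remote_docker_template`. A minimal sketch of how such a reusable template could be declared, using a hidden job plus a YAML anchor (the name matches the usage below, but the exact layout is an assumption):

```yaml
.remote_docker_template: &remote_docker_template
  image: docker:stable
  variables:
    DOCKER_HOST: tcp://YOUR-DOCKER-HOST-IP-HERE:2376
    DOCKER_TLS_VERIFY: 1
  before_script:
    # Write the client certificates from CI variables so the Docker CLI can use TLS
    - mkdir -p ~/.docker
    - echo "$TLSCACERT" > ~/.docker/ca.pem
    - echo "$TLSCERT" > ~/.docker/cert.pem
    - echo "$TLSKEY" > ~/.docker/key.pem
```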
Let's say you want to run a one-off command inside a replicated (service) container, for example a DB migration job.
Django DB migration example:

```bash
docker exec $(docker ps -q -f name=mystack_django -f health=healthy -n 1) django-admin migrate
```
```yaml
django_dbmigrate:
  # You probably have some configuration for remote Docker here
  <<: *remote_docker_template
  stage: Deployment
  script:
    # $(docker ps -q -f name=${STACK_NAME}_${DJANGO_SERVICE_NAME} -f health=healthy -n 1): get the ID of ONE
    # container from the ${STACK_NAME}_django service that is running and is healthy.
    - DJANGO_CONTAINER_ID=$(docker ps -q -f name=${STACK_NAME}_${DJANGO_SERVICE_NAME} -f health=healthy -n 1)
    - DJANGO_MIGRATE_CMD="django-admin migrate"
    # Sometimes you have an additional step before the migrate command, like exporting environment variables.
    # For example, docker-secrets-to-env-var.sh gets the postgres credentials available in Docker Secrets
    # and exposes them as environment variables:
    # - DJANGO_MIGRATE_CMD="source export-secrets.sh && django-admin migrate"
    - docker exec $DJANGO_CONTAINER_ID sh -c "$DJANGO_MIGRATE_CMD"
  when: manual
```
```yaml
code_style:
  stage: Quality
  # It is simply the official Python image + Black
  image: douglasmiranda/black
  script:
    - black --check --diff my_project/
  only:
    changes:
      - ./**/*.py
  allow_failure: true
  when: on_success
```
Your doc is perfect! Thanks for sharing!
In my case, searching for POSTGRES_IP doesn't work with your bash script.
After some searching, I'd like to share a possibly more stable solution:

```bash
POSTGRES_IP=$(cat /etc/hosts | grep postgres | grep -o '[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}')
```