- Diego Lemos @dlresende, Derik Evangelista @kirederik
- this talk is about pipelines and how you can use best practices to write better pipelines
- who here works with concourse? [lots of hands]
- CI is the practice of building and testing your application on every checkin
- CD is an extension of CI that enables teams to release to customers quickly and sustainably
- goal: make release process boring
- automatable build
- build, test and deploy via command line
- version control
- team buy-in
- CI is a practice, not a tool
- configuration as code
- pipeline is under version control
- isolated, reproducible, debuggable builds
- anatomy of a pipeline (sketched below)
- inputs: version, source-code
- job: build
- output: binary
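- a rough sketch of that shape in Concourse terms (resource and task names here are illustrative; the full pipeline YAML appears below):

jobs:
- name: build
  plan:
  - get: version          # input: a version number resource
  - get: source-code      # input: the application source
  - task: package         # the job itself: compile and test
    file: source-code/ci/package.yml   # hypothetical task definition kept in the repo
  - put: binary           # output: publish the built artifact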
- we’re explaining some concepts, but our goal is to share continuous delivery best practices
- reference to the red & black “Continuous Delivery” book by Jez Humble and Dave Farley
- start from an empty concourse - no pipelines configured
- i have some yaml here
- first thing we want is a job
- consume some resources, process, output
fly -t local set-pipeline -p petclinic -l secrets.yml -c 0_pipeline__blank_page
fly -t local unpause-pipeline -p petclinic
- calling this pipeline “petclinic” after the Spring example app I’m using
- pushing stuff through minio - S3-compatible API
- concourse docker-compose to run locally
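- a minimal sketch of that local setup: start from Concourse’s published quickstart docker-compose.yml and add a minio service alongside it (service name and credentials are illustrative and should match secrets.yml):

services:
  minio:
    image: minio/minio
    command: server /data          # serve an S3-compatible API backed by /data
    ports: ["9000:9000"]
    environment:
      MINIO_ACCESS_KEY: concourse
      MINIO_SECRET_KEY: concourse-minio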
- let’s create a plan for this job
resources:
- name: source-code
  type: git
  source:
    uri: https://.../foo.git
    branch: master

containers:
  maven: &maven        # shared task config, reused below via *maven
    ...

jobs:
- name: build
  plan:
  - get: source-code
    trigger: true      # without this, we won't automatically build when source-code changes
  - task: package
    config:
      <<: *maven
      inputs:
      - name: source-code
      run:
        path: /bin/bash
        args:
        - -c
        - |
          cd source-code
          mvn package
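- to make the binary an explicit output, the built jar can be pushed to minio via the s3 resource type; a sketch (bucket name, regexp, jar path and var names are illustrative):

resources:
- name: binary
  type: s3
  source:
    endpoint: ((minio-endpoint))           # minio's S3-compatible API
    bucket: petclinic-builds
    regexp: petclinic-(.*).jar
    access_key_id: ((minio-access-key))
    secret_access_key: ((minio-secret-key))

- and then, at the end of the build job’s plan:

  - put: binary
    params:
      file: source-code/target/petclinic-*.jar   # illustrative path to the built jar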
- you can do a simple, dumb smoke test with curl --fail $URL
- all changes need to be propagated through the pipeline
- when you have a lot of inputs, complex pipelines, you need to ensure that your pipeline reacts to all the possible changes
- ensure your builds have appropriate triggers
- if you deploy, try to deploy in an environment that is as close as possible to production
- smoke tests are important to tell you if a deployment has succeeded
- when you deploy, multiple things are coming together at once
- you want to know if they are working together
- build your binaries once, and deploy the same binary everywhere
- don’t rebuild when you deploy to a new environment
- if a build step fails, stop the whole pipeline
- there is a passed attribute on job inputs for this
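- a sketch tying those last two points together (job names are illustrative): the same binary resource is promoted through the environments, and passed makes each stage consume only versions that got through the previous one

jobs:
- name: deploy-staging
  plan:
  - get: binary
    trigger: true
    passed: [build]            # only binaries the build job has published
  # ... deploy + smoke-test tasks ...
- name: deploy-production
  plan:
  - get: binary
    trigger: true
    passed: [deploy-staging]   # only binaries that survived staging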
- use the same method to deploy to each environment
- in some environments, you have ops teams and dev teams who use different tools
- looking at CD from a lean/kanban point of view: “stop the line” from factory pipelines. in our teams, people tend to push extra commits onto broken builds. do you have that problem?
- I’d like to say we immediately revert but maybe not. it’s hard
- how do you build your binaries once in cloud foundry when it’s not java and not go, and you’re doing the build inside cloud foundry? you do a build on every cf push, don’t you?
- there are two dimensions here: compiled/not-compiled language, and paas/not-paas environment
- you can kind of hack artefacts for a language like ruby
- with cf, you can use a container if you want
- the v3 endpoints let you download the droplet, and push that somewhere else
- in cf you can at least specify a buildpack - sets ruby version
- in your pipeline, you’re running cf push as a task in a job. what are your thoughts on using the cf resource instead?
- my opinion is: pipelines should be complicated not complex
- lots of small things that are easy to reason about
- when i design pipelines, i try to ensure that there’s a resource, but if a step is important i make a script that a dev could run locally so that they could understand it
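- a sketch of that idea (task name and script path are hypothetical): the task is just a thin wrapper around a script in the repo, so a developer can run exactly the same thing locally

  - task: integration-tests
    config:
      <<: *maven
      inputs:
      - name: source-code
      run:
        path: source-code/ci/integration-tests.sh   # same script a dev runs on their machine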
- Federico Figus
- Tessian is a machine intelligent email security platform
- we moved from travis to concourse for CI a year ago
- we have ~45 pipelines now
- a resource is an object that is going to be used by the jobs in the pipeline
- git repo
- docker image
- every time something changes in the resource, there’s a new version
- this can trigger jobs
- for git resources, the version is the hash
- for docker images, the image id
- a resource is a docker image with three executables:
  - /opt/resource/check: discover new versions of the resource
  - /opt/resource/in: fetch the actual resource
  - /opt/resource/out: create a new version of the resource
- check:
- input: JSON with the resource configuration (source) and the last version we knew about
- output: JSON array with a list of new versions
- in (get):
- input: a JSON object with the resource configuration (source), the version of the resource to get (version), some specific parameters (params), and the directory to store the data ($1)
- output: JSON object with fetched version and some metadata describing the version
- out (create):
- input: a JSON object with the resource configuration (source), some params, and the path to the build directory ($1)
- output: a JSON object with the new version and some metadata describing the new version
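- a concrete illustration using the git resource from earlier (the hashes are made up): check reads something like this on stdin

{
  "source": { "uri": "https://.../foo.git", "branch": "master" },
  "version": { "ref": "abc123" }
}

- and writes the versions it found (for git, the given commit plus anything newer) as a JSON array on stdout

[
  { "ref": "abc123" },
  { "ref": "def456" }
]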
- we deploy docker containers and images to amazon ECS
- we like concourse, but we wanted fine-grained permissions and a lot of auditing
- 2FA around auditing and releasing
- we built a resource type driven by S3
- S3 is permissioned with IAM, can turn on MFA
- drive concourse through those S3 files
- CloudTrail on S3 for audit
- https://github.com/Tessian/catapult
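- for reference, a custom resource type like that is registered in a pipeline under resource_types and then used like any built-in resource; a sketch (image name and source fields are illustrative, see the catapult repo for the real configuration):

resource_types:
- name: catapult
  type: docker-image
  source:
    repository: tessian/catapult          # hypothetical image location

resources:
- name: release-gate
  type: catapult
  source:
    bucket: my-release-bucket             # illustrative: the real resource is driven by files in S3
    region: eu-west-1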