-
Basic Concepts
- Continuous Delivery
Delivering software with confidence: small increments, frequent releases. Requires automated testing, automated deployment, automated infrastructure
- Pipelines
Pipelines as a basic concept of CI/CD: they break the software delivery process down into stages; stages prove the quality of the software, each from a different perspective
- Infrastructure / Pipelines as Code
Pipelines easily become complex and should stay in sync with the corresponding version of the software. Therefore we want to treat them like any other code: version them, maybe test them
- Feedback Loops
goal: speed up feedback loops and improve the quality of feedback. The earlier the better, the more specific the better
-
MOTIVATION: What do we want?
What do the people using / providing Jenkins (developers & operators) expect? Take this as a basis to later assess whether Jenkins fulfills those demands / requirements or not
- Operators
Those who operate / provide Jenkins, e.g. toolsmiths, service providers, sometimes developers who operate it on their own
- (Jenkins) Configuration as Code
set up Jenkins with some kind of config mgmt tool (Chef, Ansible, Docker Compose...)
- immutability
Immutable infrastructure has proven to be a valid approach for managing services. Instead of repairing / patching a running instance, we throw it away and provision a new one from scratch (repave)
- Scalable build agents
Operators want to have an easy way to provide scalable build agents, such that Jenkins adapts to the workload
- customizable build agents
it must be possible to provide any tool stack that the developer needs
- Developers
The people who are running their jobs on Jenkins
- (Continuous Delivery) Pipelines
Pipelines provide a powerful way to describe / implement the CI/CD workflow, so they should be supported by the CI/CD tool
- Pipelines as Code
see above: Pipelines easily become complex and have to stay in sync with the code base, hence we want to version them the same way as our code base / with our code base
- Expressive DSL
a pipeline DSL can help to make pipeline code understandable for everyone
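To illustrate what such an expressive pipeline definition looks like, here is a minimal declarative Jenkinsfile sketch (the build commands and the `deploy.sh` script are hypothetical placeholders, not from a specific project):

```groovy
// Jenkinsfile — minimal declarative pipeline (illustrative sketch)
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh './mvnw -B package'   // assumes a Maven wrapper in the repo
            }
        }
        stage('Test') {
            steps {
                sh './mvnw -B verify'
            }
        }
        stage('Deploy') {
            when { branch 'master' }     // only deploy from the main branch
            steps {
                sh './deploy.sh qa'      // hypothetical deployment script
            }
        }
    }
}
```

Because the stages and steps read almost like a checklist, even developers who do not know Jenkins internals can follow what the pipeline does.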
- Reusability of Pipeline tasks / steps
keeping pipelines DRY makes them maintainable and allows for easy refactoring
- do not want to "configure" Jenkins
devs do not want to dig down into the internals of Jenkins configuration, they simply want to run their pipelines
- Conventions and standards for pipelines
- at the end: deliver with confidence
-
What does Jenkins offer?
What are the good things about Jenkins, why do so many people use Jenkins?
- Groovy Pipeline DSL
Offers a "real" programming language and not just declarative YAML files. Can be easily extended.
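Since scripted pipelines are plain Groovy, ordinary language constructs such as variables, loops, and methods are available — something declarative YAML formats cannot offer. A small sketch (region names and the deploy script are hypothetical):

```groovy
// Scripted pipeline: plain Groovy, so loops and conditionals work as expected
node {
    def targets = ['eu-west', 'us-east']   // hypothetical deployment regions

    stage('Build') {
        sh './gradlew build'
    }

    // generate one deploy stage per region instead of copy-pasting stages
    targets.each { region ->
        stage("Deploy ${region}") {
            sh "./deploy.sh ${region}"     // hypothetical deploy script
        }
    }
}
```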
- Multibranch builds
Feature branches are a common version-control pattern; Jenkins supports this very well
- Broad (community) support in Internet (Stackoverflow...)
You will find an answer to almost any Jenkins problem / error message on Stack Overflow etc.
- Well proven integration in many tools (Git[Hub|Lab], Artifactory, Vault(?)...)
There are dozens of blog posts and tutorials on the internet on how to integrate Jenkins with whatever tooling
-
What are Jenkins pains?
Why do people not like Jenkins?
- Day 2 operations (patching, updates, zero downtime)
with many plugins you easily break your setup with an upgrade / update. An HA setup is not supported out of the box, hence no "rolling upgrades"
- Configuration Management
Configuration is spread across many files, mostly XML, which is a burden for config mgmt. Configs are changed at runtime and cannot easily be versioned and replicated
- Pipelines are no 1st class citizens
Pipelines have only been added recently and still feel like an add-on rather than fully integrated
- Pipelines are restricted to linear workflows
no real fan-in and fan-out; only a single trigger at the beginning of a pipeline
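A partial workaround is to combine several trigger directives on one job — e.g. an `upstream` trigger on another job plus SCM polling — which approximates fan-in but still leaves a single linear pipeline behind the triggers. A sketch (job name and image tag are hypothetical):

```groovy
pipeline {
    agent any
    triggers {
        // rebuild when the hypothetical base-image job finishes successfully...
        upstream(upstreamProjects: 'base-image-build',
                 threshold: hudson.model.Result.SUCCESS)
        // ...while changes to the Dockerfile are picked up via SCM polling
        pollSCM('H/5 * * * *')
    }
    stages {
        stage('Build image') {
            steps {
                sh 'docker build -t myapp .'
            }
        }
    }
}
```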
-
Jenkins concepts for build agents
what options do we have to set up build agents with Jenkins?
- full blown VMs
we provide a VM to run our jobs on. Dev stack has to be provided by installing tools on the VM
- Docker containers for build jobs
we run our build jobs in containers. Dev stack has to be provided by building container images with required tools
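With the Docker Pipeline plugin this is expressible directly in the Jenkinsfile — the tool stack comes from the image, not from the agent VM. A minimal sketch (the image tag is only an example):

```groovy
pipeline {
    // run the whole build inside a container; the agent VM only needs Docker
    agent {
        docker { image 'maven:3-eclipse-temurin-17' }
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B package'   // mvn comes from the container image
            }
        }
    }
}
```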
- PROs and CONs
- Jenkins provides both ways (and even more?) but what about configuration management for such setups?
Configuring complex master / agent setups requires lots of clicking in the UI and cannot easily be managed by config mgmt tools
-
Implementing more complex pipelines
The Jenkins Pipeline DSL can handle "linear" pipelines very well. It currently has no support for DAGs though, hence more complex pipelines are hard to implement
- Fan in - fan out
E.g. multiple triggers for a pipeline: building a Docker image might be triggered by a change to the Dockerfile (trigger 1) as well as by a change to the base image (trigger 2). Example: Spring Boot app:
- build from source, triggered by code changes (i.e. multi-branch)
- run (complex) integration tests upon every commit
- upload artifact to artifact repository (as SNAPSHOT)
- deploy to qa stage (user acceptance tests), triggered by either "new artifact in repository" or "manually"
- once artifact is regarded as "stable", mark as a release, upload to artifact repository (as RELEASE)
- deploy to prod stage triggered manually
- run smoke tests
- introducing further Jenkinsfiles, e.g. a Deployfile
one approach to overcome those limitations is splitting the pipeline into several files, which results in multiple pipelines that can each be triggered individually
- complexity with branching
what happens when each of those "distributed pipelines" is built per branch (multi-branch)?
- project organization / pipeline overview image
provide a graphic that shows a CI/CD pipeline for such an app
-
Parametrized Pipelines
- what kind of parameters are we talking about?
- selection of a version of an artifact that should be deployed
- next version of an artifact to be uploaded to an artifact repository (semver)
- credentials that we don't want to add to the Jenkins credentials store
- selection of deployment targets for multi-region deployments
have an array of destinations and select to which destination you want to deploy via a parameter
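The parameter kinds above map directly onto the declarative `parameters` directive. A sketch covering a version string, a target selection, and a runtime-provided secret (names, choices, and the deploy script are hypothetical):

```groovy
pipeline {
    agent any
    parameters {
        string(name: 'VERSION', defaultValue: '1.0.0-SNAPSHOT',
               description: 'artifact version to deploy')
        choice(name: 'TARGET', choices: ['qa', 'prod-eu', 'prod-us'],
               description: 'deployment target (multi-region)')
        password(name: 'DEPLOY_TOKEN', defaultValue: '',
                 description: 'secret provided at trigger time, not stored in Jenkins')
    }
    stages {
        stage('Deploy') {
            steps {
                sh "./deploy.sh ${params.TARGET} ${params.VERSION}"  // hypothetical script
            }
        }
    }
}
```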
- properties in Jenkinsfile
configure the job's Jenkins properties in the Jenkinsfile
- parameters as pipeline step
ask operator for a parameter from within the pipeline UI
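This is what the `input` step does: the pipeline pauses and waits for a value from the Jenkins UI. With a single parameter, `input` returns that value directly (stage name and deploy script are hypothetical):

```groovy
stage('Promote') {
    steps {
        script {
            // pauses the pipeline until an operator answers in the Jenkins UI
            def target = input(message: 'Deploy where?',
                               parameters: [choice(name: 'TARGET',
                                                   choices: ['qa', 'prod'])])
            sh "./deploy.sh ${target}"   // hypothetical deploy script
        }
    }
}
```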
- providing parameters with Jenkins CLI
trigger parametrized jobs from your command line, e.g. read credentials from a local Vault
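A sketch of such a CLI invocation — the server URL, job name, and Vault path are placeholders; the secret never touches the Jenkins credentials store:

```
# read a secret from a local Vault instead of the Jenkins credentials store
DEPLOY_TOKEN=$(vault kv get -field=token secret/deploy)

# trigger the parametrized job; -f follows the build output
java -jar jenkins-cli.jar -s http://jenkins.example.com:8080/ \
  build my-deploy-job -p TARGET=qa -p DEPLOY_TOKEN="$DEPLOY_TOKEN" -f
```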
-
Jenkins CLI
- using parameters in CLI
- using (local) credential stores in CLI
-
reusable pipelines / shared libraries
idea: provide opinionated pipeline blueprints for your developers. E.g. each Spring Boot app is built and deployed more or less the same way. Only parametrize such a default pipeline in your Jenkinsfile
- stick to conventions across multiple projects
make it easier for developers from different projects to understand each others' pipelines / CI/CD setup
- 2 design approaches
- full fledged pipelines (easier to use, less flexible)
this is the opinionated approach: developer has to stick to the conventions behind such a pipeline blueprint
- helpers (only fulfilling small tasks, more flexible)
developers get small building blocks (e.g. helper methods) that they can assemble into their own pipelines
- using closures for customization
whenever you need to parametrize behavior instead of only providing parameter values
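A shared-library step can accept a closure alongside plain parameter values, so callers can inject behavior into an otherwise opinionated blueprint. A sketch — `standardBuild` and all names in it are hypothetical:

```groovy
// vars/standardBuild.groovy in a shared library (hypothetical step name)
//
// Usage in a Jenkinsfile:
//   standardBuild(image: 'maven:3-eclipse-temurin-17') {
//       sh './run-extra-checks.sh'   // project-specific customization
//   }
def call(Map config = [:], Closure customSteps = null) {
    node {
        checkout scm
        docker.image(config.image ?: 'maven:3-eclipse-temurin-17').inside {
            stage('Build') {
                sh 'mvn -B package'
            }
            if (customSteps) {
                stage('Custom') {
                    customSteps()    // behavior, not just values, is parametrized
                }
            }
        }
    }
}
```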
- Pipeline DSL hacks
- sanitizing logs
we do not want to have our credentials written to our logfiles
- CRON configuration of the Jenkins job in Jenkinsfile
bring as much of your job's configuration / properties into your Jenkinsfile --> less manual effort to set up pipelines
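A sketch showing a CRON trigger and a job property defined in code instead of the UI (the nightly script is hypothetical):

```groovy
pipeline {
    agent any
    triggers {
        // 'H' spreads the load: run once per night between 2:00 and 2:59
        cron('H 2 * * *')
    }
    options {
        // job property ("discard old builds") set in code, not clicked in the UI
        buildDiscarder(logRotator(numToKeepStr: '20'))
    }
    stages {
        stage('Nightly') {
            steps {
                sh './nightly-checks.sh'   // hypothetical script
            }
        }
    }
}
```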
-
handling credentials
- Jenkins credentials store
- external PAMs (Vault)
- providing credentials "at runtime" instead of putting them into credentials store
- masking credentials
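The `withCredentials` step from the Credentials Binding plugin covers both concerns: secrets are injected only for the enclosed block, and their values are masked if they show up in the build log. A sketch — the credentials ID and upload URL are hypothetical:

```groovy
stage('Publish') {
    steps {
        // 'artifactory-deploy' is a hypothetical credentials ID in the Jenkins store;
        // USER / PASS are masked in the build log for the duration of the block
        withCredentials([usernamePassword(credentialsId: 'artifactory-deploy',
                                          usernameVariable: 'USER',
                                          passwordVariable: 'PASS')]) {
            sh 'curl -u "$USER:$PASS" -T app.jar https://artifactory.example.com/repo/'
        }
    }
}
```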
-
integration in other tools
- build merge requests
- show build status in commits / merge requests
-
Summary
- which requirements are met
- what is still a pain in Jenkins
- Link to "Requirements of modern CI/CD tools" https://github.com/cicd-hackathon-stgt/docs
- Jenkins HA setup https://endocode.com/blog/2018/08/17/jenkins-high-availability-setup/
- Stackoverflow trends for different CI/CD tools - see https://towardsdatascience.com/these-are-the-real-stack-overflow-trends-use-the-pageviews-c439903cd1a
- explain our (Git) development workflow
- why and how do we use branches
- how does this affect our CI/CD requirements
- short overview of the "new kids on the block"
- Concourse
- UI is not used for management
- configuration as code is very well implemented
- pipelines as first class citizens
- pipelines as code
- fully managed via CLI, hence easy to be integrated in deployment workflows
- GitLab CI
- combines VCS and CI
- only one tool
- pipelines as code
- GitLab runner is very flexible
- building and testing of merge requests
- GitHub Actions
- ???
- Jenkins X
- Kubernetes integration
- https://www.thoughtworks.com/insights/blog/modernizing-your-build-pipelines
- https://github.com/sirech/talks/raw/master/2019-01-tw-concourse_ci.pdf
- https://github.com/sirech/talks/raw/master/2019-04-tw-build_pipelines.pdf
- https://technologyconversations.com/2019/05/09/the-evolution-of-jenkins-jobs-and-how-we-got-to-the-yaml-based-jenkins-x-yml-format/
- https://jenkins.io/doc/book/pipeline/development/
- https://github.com/CloudPipelines/