-
Deployment Pipeline (re-instate)
- CI (Shippable or Semaphore) triggers the pipeline on a successful master build
- Probably 'Go' (GoCD) - some misgivings, but it does the job for now
- Builds and pushes a Docker container artifact
- deploy to staging
- smoke test
- acceptance test
- Deploy to prod (auto-deploy can be optional e.g. supporter)
- smoke test
- Rollback on failure
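The smoke-test steps above usually reduce to "retry a health check; fail the stage (and roll back) if it never passes". A minimal POSIX sh helper; the health URL and rollback hook in the usage comment are hypothetical:

```shell
# retry N CMD...: run CMD until it succeeds, giving up after N attempts.
retry() {
  max=$1; shift
  n=0
  until "$@"; do
    n=$((n + 1))
    [ "$n" -ge "$max" ] && return 1
    sleep "${RETRY_DELAY:-5}"
  done
}

# Hypothetical usage in a deploy stage:
#   retry 10 curl -fsS http://staging.example.com/healthcheck || roll_back
```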
-
Common flow for all services - adding a new one should be trivial
-
Security updates (e.g. base images) can have their own pipelines that are dependencies of the deployment ones
- if the update passes acceptance tests, it can be shipped
- traceability of which base image was used for a given version
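One way to get that traceability: pin the base image by digest in each service's Dockerfile, so the security-update pipeline just bumps the digest and re-runs the acceptance tests. The image name and digest placeholder are illustrative:

```dockerfile
# Pinning by digest records exactly which base image a version was built from.
# The security-update pipeline bumps this line when a patched base is approved.
FROM ubuntu@sha256:<digest-of-approved-base>

# ...rest of the service image as usual
```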
-
New Go server on staging-build - CoreOS running Go in Docker. The same host can be used as a Shippable host.
- Each job runs in a container, so dependencies can be encapsulated. The old Go hosts required every dependency to be installed on the host itself.
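Concretely, a Go job's task can then be little more than a single docker run against the checked-out working copy; the image and command here are illustrative:

```
# The agent host only needs Docker installed; the build's own dependencies
# (compiler, libraries) live in the image.
docker run --rm -v "$PWD:/src" -w /src golang:1.4 go test ./...
```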
-
Dynamic Routing
-
Registrator to roll service locations to nginx
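One common shape for this, assuming Registrator is registering containers into Consul and consul-template rewrites the nginx config when services move; the service name is illustrative:

```
# consul-template fragment: rendered and nginx reloaded whenever the set of
# "myservice" instances in Consul changes.
upstream myservice {
  {{range service "myservice"}}server {{.Address}}:{{.Port}};
  {{end}}
}
```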
-
Support internal-only services (e.g. rous)
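For internal-only services, one option at the nginx layer is a server block that listens only on the private interface and rejects external sources; the addresses and upstream name are illustrative:

```nginx
server {
  listen 10.0.0.5:80;        # private interface only, no public binding
  allow 10.0.0.0/8;          # internal network
  deny  all;

  location / {
    proxy_pass http://rous;  # internal-only upstream
  }
}
```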
-
GliderGun style deployments
- new version in parallel to current version - private route?
- (only enter into routing pool after smoke test?)
- clean up old version
- Instance tags for service locations
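The flow above can be sketched as a deploy script whose four hooks are placeholders; real implementations would docker-run the new image on a spare port, curl its private route, rewrite the nginx upstream, and stop the old container:

```shell
# GliderGun-style rollout: run the new version next to the old one, smoke test
# it on a private route, and only swap it into the routing pool on success.

start_new()  { echo "start new version";   }  # e.g. docker run on a spare port
smoke_new()  { echo "smoke test";          }  # e.g. curl the private route
enter_pool() { echo "enter routing pool";  }  # e.g. re-render nginx upstream
remove_old() { echo "remove old version";  }  # e.g. stop the old container
stop_new()   { echo "discard new version"; }  # rollback path

deploy_parallel() {
  start_new || return 1
  if smoke_new; then
    enter_pool && remove_old   # only route to it after the smoke test passes
  else
    stop_new                   # failed smoke test: old version keeps serving
    return 1
  fi
}
```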
-
Build resilience into AWS architecture
- Auto-scaling groups around every host group - even with a static size, for resilience
- Launch configuration/user data bootstraps a node enough to join its pool automatically
-
Staging and Production uniformity
- All changes rolled through staging
- Terraform?
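If Terraform is adopted, the ASG-around-every-host-group pattern above is only a couple of resources; the AMI, sizes, and bootstrap script are illustrative:

```hcl
# Even a fixed-size group sits behind an ASG so a lost node is replaced
# automatically; user_data bootstraps it enough to join its pool.
resource "aws_launch_configuration" "service" {
  image_id      = "ami-xxxxxxxx"
  instance_type = "m3.medium"
  user_data     = "${file("bootstrap.sh")}"   # e.g. install Docker, join Consul
}

resource "aws_autoscaling_group" "service" {
  launch_configuration = "${aws_launch_configuration.service.name}"
  min_size             = 2
  max_size             = 2   # static size, but failed instances get replaced
  desired_capacity     = 2
  availability_zones   = ["us-east-1a", "us-east-1b"]
}
```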
-
Standard Health checks across services
- Consul health checks
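A standard check could be as small as one HTTP check definition per service (Consul 0.5+ syntax); the endpoint, port, and interval are illustrative:

```json
{
  "check": {
    "id": "myservice-http",
    "name": "myservice HTTP health",
    "http": "http://localhost:8080/healthcheck",
    "interval": "10s"
  }
}
```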
-
Continue general monitoring/alerting
-
Continue general log shipping/aggregation
Created April 1, 2015 02:43