- Creates a user and adds keys
- Deploys buildpack, Dockerfile, and `deis pull` apps
- Uninstalls the workflow-dev chart
- Installs the workflow-dev chart
- Tries to log in with the existing user
- Runs `deis apps:list`
- Curls through all the apps
- Deployments are not available in v1 or server version 1.2; they live in the `extensions/v1beta1` API group, so basically we can't use them right now.
- Replication controllers are not compatible with Deployments; only replica sets are.
- Deployments support releases and rollbacks, canary deploys, and rolling updates.
- The horizontal pod autoscaler again works on replica sets, scaling the pods according to CPU utilization.
- Deployments are good only if you are thinking of doing a rollback.
- For our use case, where everything is an RC and we're just thinking about updating the existing image for an RC, `kubectl apply`/`kubectl edit` on the RC is sufficient, unless we are looking for something like `kubectl rollout undo`.
- Coming to Apps: theoretically, Deployments add features like releases to an App. Also, if we can treat every build as a release, then we could basically let the builder deploy an App as a Deployment.
```go
package main

import (
	"fmt"
	"os"
	"strings"
	"text/template"
)

// generate holds the inputs for template rendering; the struct fields
// were cut off in the original notes.
type generate struct {
	// ...
}
```
A variety of Deis Workflow components rely on an object storage system to do their work, including storing application slugs, Docker images, and database logs.
Deis Workflow ships with [Minio][minio] by default, which provides in-cluster, ephemeral object storage. This means that if the Minio server crashes, all data will be lost. Therefore, Minio should be used for development or testing only.
Every component that relies on object storage uses two inputs for configuration:
```go
// Sketch of the cluster/node model from the notes. The original was
// shorthand, so the concrete field types here are assumptions.
type cluster struct {
	nodes    []node
	metadata map[string]string // any metadata
}

// getCluster returns the current cluster (stub).
func getCluster() cluster { return cluster{} }

type node struct {
	platform string
	metrics  map[string]float64 // cpu, memory, etc.
	events   []string           // events for the last n time window
}
```
- 4ea46e7 (builder) - registry: use registry proxy to talk to the internal registry
- b59bbbc (fluentd) - fluentd: Adding sumologic plugin support
- b23f272 (dockerbuilder) - registry: use registry proxy to talk to the internal registry
- 424523c (logger) - storage: Add redis storage adapter
- 2da72a5 (logger) - redis: Optimize with more aggressive pipelining
After working on the test suite for a good amount of time and getting help from fellow folks, these are some issues I'm thinking of proposing to make the suite better. During our last retro, we already talked about running smoke tests.
Some proposals for proceeding further with the current CI/CD infrastructure:
The control plane:
- Controller, Builder, Registry, Database, Minio. Any changes made to these components, the Deis CLI, or the controller-sdk-go repository will affect workflow functionality and should trigger the full test suite.
```
t=2016-08-09T22:41:26+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_api_key_org_id - v2"
t=2016-08-09T22:41:27+0000 lvl=info msg="Executing migration" logger=migrator id="create index UQE_api_key_key - v2"
t=2016-08-09T22:41:27+0000 lvl=info msg="Executing migration" logger=migrator id="create index UQE_api_key_org_id_name - v2"
t=2016-08-09T22:41:27+0000 lvl=info msg="Executing migration" logger=migrator id="copy api_key v1 to v2"
t=2016-08-09T22:41:27+0000 lvl=info msg="Executing migration" logger=migrator id="Drop old table api_key_v1"
t=2016-08-09T22:41:27+0000 lvl=info msg="Executing migration" logger=migrator id="create dashboard_snapshot table v4"
t=2016-08-09T22:41:27+0000 lvl=info msg="Executing migration" logger=migrator id="drop table dashboard_snapshot_v4 #1"
t=2016-08-09T22:41:27+0000 lvl=info msg="Executing migration" logger=migrator id="create dashboard_snapshot table v5 #2"
t=2016-08-09T22:41:27+0000 lvl=info msg="Executing migration" logger=migrator id="crea
```
buildpacks:
When we do a `git push deis master` for a buildpack-type app, the code goes to the builder, which compresses it into a tar file and uploads it to the configured object storage. Once this is done, the builder schedules a slugbuilder pod, providing the bucket and the credentials needed to access the code. The slugbuilder pod, once started, fetches the tar file from object storage, extracts it, and compiles the code according to the specified buildpack, defaulting to the language-specific buildpack if none is specified. Once the code is compiled, it generates a slug file, which it again uploads to the same bucket. Once the upload is done and the slugbuilder pod finishes, the builder checks for the file's existence and sends a build hook to the controller, which launches a slugrunner pod, encapsulating it in an RC or Deployment. The slugrunner pod gets the slug file from the bucket, extracts it, and runs the code according to the slug file.
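The compress-to-tar step above can be sketched with only the standard library. This is a minimal sketch: the in-memory file map and `tarSource` name are illustrative, and the actual upload to object storage is omitted.

```go
package main

import (
	"archive/tar"
	"bytes"
	"compress/gzip"
	"fmt"
)

// tarSource packs the given files into a gzipped tar blob, the way the
// builder packs pushed code before uploading it to object storage.
func tarSource(files map[string][]byte) ([]byte, error) {
	var buf bytes.Buffer
	gz := gzip.NewWriter(&buf)
	tw := tar.NewWriter(gz)
	for name, body := range files {
		hdr := &tar.Header{Name: name, Mode: 0644, Size: int64(len(body))}
		if err := tw.WriteHeader(hdr); err != nil {
			return nil, err
		}
		if _, err := tw.Write(body); err != nil {
			return nil, err
		}
	}
	if err := tw.Close(); err != nil {
		return nil, err
	}
	if err := gz.Close(); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}

func main() {
	blob, err := tarSource(map[string][]byte{"Procfile": []byte("web: ./run\n")})
	if err != nil {
		panic(err)
	}
	fmt.Printf("packed %d bytes\n", len(blob))
}
```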
This sprint there are two issues in the product backlog about dashboards and metrics.
The current dashboard for deis-router has response time, status code, requests per second, CPU, and memory. We are getting CPU and memory from the Kubernetes Prometheus endpoint. The reason the router has additional metrics beyond CPU and memory is its access logs:
[2016-08-11T19:50:32+00:00] - deis/deis-monitor-grafana - 10.240.0.23 - - - 200 - "GET /api/datasources/proxy/1/query?db=kubernetes&q=SELECT%20last(%22gauge%22)%20FROM%20%22container_memory_usage_bytes%22%20WHERE%20%22kubernetes_container_name%22%20%3D%20%27deis-logger-redis%27%20AND%20time%20%3E%20now()%20-%205m%20GROUP%20BY%20time(2s)%20fill(null)&epoch=ms HTTP/1.1" - 772 - "http://grafana.104.154.18.233.nip.io/dashboard/db/redis" - "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.11; rv:47.0) Gecko/20100101 Firefox/47.0" - "~^grafana\x5C.(?<domain>.+)$" - 10.135.243.27:80 - grafana.104.154.18.233.nip.io - 0.088 - 0.088
If you observe the above log, it has a timestamp and status code.
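Pulling the timestamp, status code, and response time out of a router log line like the one above could look like this. The regular expression is an assumption fitted to this one sample, not the router's documented log format.

```go
package main

import (
	"fmt"
	"regexp"
)

// routerLog pulls out the bracketed timestamp, the status code after
// the "- - -" field run, and the trailing response-time float.
var routerLog = regexp.MustCompile(`^\[([^\]]+)\] .* - - - (\d{3}) - .* - ([0-9.]+)$`)

// parse returns the timestamp, status code, and response time from a
// deis-router access log line, or ok=false if the line doesn't match.
func parse(line string) (ts, status, respTime string, ok bool) {
	m := routerLog.FindStringSubmatch(line)
	if m == nil {
		return "", "", "", false
	}
	return m[1], m[2], m[3], true
}

func main() {
	line := `[2016-08-11T19:50:32+00:00] - deis/deis-monitor-grafana - 10.240.0.23 - - - 200 - "GET /ping HTTP/1.1" - 772 - "-" - "curl" - "-" - 10.0.0.1:80 - example.com - 0.088 - 0.088`
	if ts, code, rt, ok := parse(line); ok {
		fmt.Println(ts, code, rt)
	}
}
```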