minikube start
kubectl apply -f crd-foo.example.com.yaml -f crd-bar.example.com.yaml
kubectl apply -f cr-foo.yaml -f cr-bar.yaml
go run main.go --custom-resource-state-only --custom-resource-state-config-file custom-resource-config-file.yaml --kubeconfig ~/.kube/config
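Once it is running, the generated metrics can be spot-checked on the kube-state-metrics HTTP endpoint. A minimal check, assuming the default --port=8080 and that custom-resource-config-file.yaml derives its metric names from the Foo and Bar kinds (adjust the grep pattern to whatever names the config actually defines):

# hypothetical spot-check of the exposed custom-resource metrics
curl -s http://localhost:8080/metrics | grep -iE 'foo|bar'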
[cluster-bekir-test:]/tmp $ cat <<EOF > helmsman.yaml
> namespaces:
>   default:
> apps:
>   testing-helmsman:
>     chart: testing-chart
>     version: 0.1.0
>     namespace: default
>     enabled: true
> EOF
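With the desired state file written, it can be handed to helmsman. A minimal sketch, assuming helmsman is installed on the box and testing-chart is resolvable (e.g. from a configured helm repo or a local chart path):

[cluster-bekir-test:]/tmp $ helmsman --dry-run -f helmsman.yaml   # preview the planned changes
[cluster-bekir-test:]/tmp $ helmsman --apply -f helmsman.yaml     # deploy testing-helmsman into default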
kube-bench: (master u=) $ kubectl node-shell aks-cpuworkers-18754171-vmss000000
spawning "nsenter-kru76j" on "aks-cpuworkers-18754171-vmss000000"
If you don't see a command prompt, try pressing enter.
root@aks-cpuworkers-18754171-vmss000000:/#
root@aks-cpuworkers-18754171-vmss000000:/# docker run --rm -v `pwd`:/host aquasec/kube-bench:latest install
===============================================
kube-bench is now installed on your host
Run ./kube-bench to perform a security check
===============================================
root@aks-cpuworkers-18754171-vmss000000:/# ./kube-bench node
I hereby claim:
- I am bergerx on github.
- I am bergerx (https://keybase.io/bergerx) on keybase.
- I have a public key ASDOFqkSX0xxXCA1uPCCcSyaU-4aAQVdJoskMG0FPGGfCgo
To claim this, I am signing this object:
# removed some "Broken pipe" error messages for clarity, which are caused by "head; kill" | |
[dcos-infra]sensu-plugins-sensu: (master *+ u=) $ set -x; for i in json graphite statsd dogstatsd influxdb; do { head; kill "$$"; } < <(bundle exec bin/metrics-aggregate.rb --metric_format $i); done; set +x | |
+ for i in json graphite statsd dogstatsd influxdb | |
+ head | |
++ bundle exec bin/metrics-aggregate.rb --metric_format json | |
{"metric_name":"clients","value":4,"tags":{"check":"dcos.cluster-management.mesos.master","host":"Bekirs-MacBook-Pro-2.local"},"timestamp":1518300677} | |
{"metric_name":"checks","value":6,"tags":{"check":"dcos.cluster-management.mesos.master","host":"Bekirs-MacBook-Pro-2.local"},"timestamp":1518300677} | |
{"metric_name":"ok","value":2,"tags":{"check":"dcos.cluster-management.mesos.master","host":"Bekirs-MacBook-Pro-2.local"},"timestamp":1518300677} | |
{"metric_name":"warning","value":0,"tags":{"check":"dcos.cluster-management.mesos.master","host":"Bekirs-MacBook-Pro-2.local"},"timestamp":1518300677} | |
{"metric_n |
We use this service to populate Mesos agent attributes on DC/OS agent nodes during first boot.
The service sets the attributes only during first boot and doesn't change them afterwards, since a Mesos agent config change forces the agent to be re-bootstrapped.
Shut Mesos down gracefully only when the node is shutting down for good, NOT when it is rebooting.
A clean shutdown causes the tasks scheduled on this node to be rescheduled onto other nodes. Since a rebooting node returns to the cluster shortly, it's better to leave it in an unhealthy state so that its tasks keep running on it when it rejoins the cluster.
This is particularly important when you manage your nodes in AWS Auto Scaling groups; without the graceful shutdown on terminating instances, scaling an ASG down leaves stale agents around.
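A minimal sketch of how the reboot-vs-shutdown distinction can be made, assuming a systemd host and the DC/OS dcos-mesos-slave unit name (both are assumptions, adjust to your environment). During shutdown, a queued job for reboot.target is what tells a reboot apart from a plain poweroff/halt:

#!/bin/bash
# Hypothetical shutdown hook: drain Mesos only on a real poweroff/halt.
if systemctl list-jobs | grep -q 'reboot.target'; then
  # Rebooting: leave the agent untouched so its tasks survive the reboot.
  exit 0
fi
# Shutting down for good: stop the agent gracefully so Mesos reschedules
# its tasks onto other nodes.
systemctl stop dcos-mesos-slave.service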
1. cluster-level metrics and health (mesos-master, mesos-slave,
   marathon, marathon-lb, mesos-dns, kafka, ...)
   Metrics for cluster components like mesos-master, mesos-slave and
   frameworks (DC/OS services like zookeeper, marathon, marathon-lb,
   mesos-dns, kafka, ...).
   These will be used to troubleshoot any problems at the cluster level.
   Having each component's version as a metric label could help with
items=[1,2,3,4]
items --> [1,2,3,4]
iter(items) --> <listiterator at 0x7f4d642273d0>
(iter(items) ) --> <listiterator at 0x7f4d64227690>
(iter(items), ) --> (<listiterator at 0x7f4d64227690>,)
(iter(items), ) * 2 --> (<listiterator at 0x7f4d64227590>, <listiterator at 0x7f4d64227590>)
zip(* (iter(items),)*2 ) --> zip(<listiterator at 0x7f4d64227590>, <listiterator at 0x7f4d64227590>)
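Since both elements of the repeated tuple are the same iterator, zip() consumes two items per output tuple and chunks the list into pairs (result shown for Python 2, where zip() returns a list):

zip(* (iter(items),)*2 ) --> [(1, 2), (3, 4)]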
https://access.redhat.com/support/policy/updates/openshift
https://github.com/openshift/origin/releases
https://access.redhat.com/documentation/en/openshift-enterprise/3.2/single/release-notes/
2016-07-20 enterprise -- RHBA-2016:1466 - OpenShift Enterprise 3.2.1.9 security and bug fix update
2016-07-14 origin v1.2.1
2016-07-05 enterprise -- RHBA-2016:1383 - OpenShift Enterprise 3.2.1.4 bug fix and enhancement update
2016-06-27 enterprise -- RHBA-2016:1343 - OpenShift Enterprise 3.2.1.1 bug fix and enhancement update (Docker 1.10 Now Supported); only manual upgrades are possible from 3.2.x
2016-06-21 origin v1.3.0-alpha.2
2016-06-07 enterprise -- RHBA-2016:1208 - atomic-openshift-utils Bug Fix Update