Let's look at some basic kubectl output options.
Our intention is to list nodes (with their AWS InstanceId) and Pods (sorted by node).
We can start with:
kubectl get no
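Starting from plain `kubectl get no`, both goals can be reached with output flags. A sketch, assuming the cluster runs on AWS and each node exposes its instance ID in `.spec.providerID` (which typically looks like `aws:///<az>/<instance-id>`):

```shell
# nodes with their AWS instance IDs, via custom columns
kubectl get nodes -o custom-columns='NAME:.metadata.name,INSTANCE:.spec.providerID'

# pods in all namespaces, sorted by the node they run on
kubectl get pods --all-namespaces -o wide --sort-by='.spec.nodeName'
```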
#!/bin/bash
JENKINS_URL=$1
NODE_NAME=$2
NODE_SLAVE_HOME='/home/build/slave'
EXECUTORS=1
SSH_PORT=22
CRED_ID=$3
LABELS=build
USERID=${USER}
# ===================================================================
# COMMON SPRING BOOT PROPERTIES
#
# This sample file is provided as a guideline. Do NOT copy it in its
# entirety to your own application.            ^^^
# ===================================================================
# ----------------------------------------
# CORE PROPERTIES
# ----------------------------------------
# Non-root account is recommended for this process
# CentOS-specific preparation
sudo yum -y update && sudo yum -y groupinstall 'Development Tools' && sudo yum -y install curl irb m4 ruby
# Sanitize the environment
PATH=~/.linuxbrew/bin:/usr/local/bin:/usr/bin:/bin
unset LD_LIBRARY_PATH PKG_CONFIG_PATH
# install linuxbrew
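The install command itself is cut off above. One plausible form of the step the comment refers to, assuming the era when Linuxbrew shipped a standalone Ruby installer (the project has since been merged into Homebrew, so the URL below is historical):

```shell
# historical Linuxbrew installer; uses the ruby installed in the yum step above
ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Linuxbrew/install/master/install)"
```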
// dependencies
var async = require('async');
var AWS = require('aws-sdk');
// Enable ImageMagick integration.
var gm = require('gm').subClass({ imageMagick: true });
var util = require('util');
var pdf2png = require('pdf2png');
pdf2png.ghostscriptPath = "/usr/bin";
// constants
# copy/import data from heroku postgres to localhost pg database
# useful for copying an admin's work on the live site into a local database to reproduce errors
# https://devcenter.heroku.com/articles/heroku-postgres-import-export
# take a heroku pg snapshot and download it
heroku pg:backups:capture
heroku pg:backups:download
# load the dump into the local postgres database, assuming $DATABASE_URL is set locally
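The restore command itself is not shown; following the Heroku article linked above, a sketch (`myuser` and `mydb` are placeholders for the local role and database):

```shell
# heroku pg:backups:download saves the snapshot as latest.dump
pg_restore --verbose --clean --no-acl --no-owner -h localhost -U myuser -d mydb latest.dump
```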
This is a non-XA pattern involving a synchronized single-phase commit of a number of resources. Because two-phase commit (2PC) is not used, it can never be as safe as an XA transaction, but it is often good enough if the participants are aware of the compromises. The basic idea is to delay the commit of all resources until as late as possible in the transaction, so that the only thing that can go wrong is an infrastructure failure (not a business-processing error). Systems that rely on Best Efforts 1PC reason that infrastructure failures are rare enough that they can afford to take the risk in return for higher throughput. If the business-processing services are also designed to be idempotent, then little can go wrong in practice.
Consider a JMS-based service with an inbound queue manager (QM1), an outbound queue manager (QM2), and a database (DB). Here are the scenarios I would like to cover using the Best Efforts 1PC commit process:
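As a rough illustration of the commit ordering (not the real JMS/JDBC wiring — the `Resource` class below is a hypothetical stand-in for a JMS session or JDBC connection), the sequence for one message can be simulated: do all the business processing first, then commit the outbound send (QM2) and the database, and acknowledge the inbound message (QM1) last. A crash mid-sequence then causes a redelivery (duplicate work, harmless if the processing is idempotent) rather than a lost message.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical simulation of Best Efforts 1PC commit ordering for
// an inbound queue manager (QM1), a database (DB) and an outbound
// queue manager (QM2). Names and classes are illustrative only.
public class BestEffortsOnePC {

    /** Stand-in for a transactional resource; records its commit in a log. */
    static class Resource {
        final String name;
        Resource(String name) { this.name = name; }
        void commit(List<String> log) { log.add("commit " + name); }
    }

    /** Process one message, committing every resource as late as possible. */
    static List<String> process() {
        List<String> log = new ArrayList<>();
        Resource outbound = new Resource("QM2"); // outbound send
        Resource db = new Resource("DB");        // database update
        Resource inbound = new Resource("QM1");  // inbound receive

        // business processing happens first, with nothing committed yet
        log.add("receive from QM1");
        log.add("update DB");
        log.add("send to QM2");

        // best-efforts ordering: the inbound ack is committed last, so a
        // failure between commits triggers redelivery instead of message loss
        outbound.commit(log);
        db.commit(log);
        inbound.commit(log);
        return log;
    }

    public static void main(String[] args) {
        process().forEach(System.out::println);
    }
}
```

The only window for inconsistency is an infrastructure failure between the three commit calls, which is exactly the trade-off the pattern accepts.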
Ondrej Sika <[email protected]>
use DATABASE_NAME
Clean up resources (containers, volumes, images, networks) ...
// see: https://github.com/chadoe/docker-cleanup-volumes
$ docker volume rm $(docker volume ls -qf dangling=true)
$ docker volume ls -qf dangling=true | xargs -r docker volume rm
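The two commands above are alternatives for the same job (the `xargs -r` form simply tolerates an empty list). For the containers, images, and networks mentioned in the heading, a broader sweep, assuming Docker 17.06.1+ for the `--volumes` flag:

```shell
# removes stopped containers, dangling images, unused networks,
# and (with --volumes) unused local volumes; asks for confirmation first
docker system prune --volumes
```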