I hereby claim:
- I am ajohnstone on github.
- I am ajohnstone (https://keybase.io/ajohnstone) on keybase.
- I have a public key whose fingerprint is 3657 5228 12EA F1FF 9A2E B738 1B09 88E9 DD22 D552
To claim this, I am signing this object:
@NonCPS
def getAllCauses() {
    currentBuild.rawBuild.getCauses().toString()
}

@NonCPS
def isIssueCommentCause() {
    def triggerCause = currentBuild.rawBuild.getCause(org.jenkinsci.plugins.pipeline.github.trigger.IssueCommentCause)
    if (triggerCause) {
        env.TRIGGER_COMMENT = triggerCause.comment
        return true
    }
    return false
}
if (pullRequest.title.toString() =~ /^([A-Z]+\-[0-9]+|NO_JIRA)/) {
    echo 'valid-jira - ' + pullRequest.title
    pullRequest.createStatus(status: 'success',
        context: 'pull-request-fmt',
        description: 'pull request contains valid jira ticket',
        targetUrl: "${env.JOB_URL}/testResults")
} else {
    echo 'invalid-jira - ' + pullRequest.title
    pullRequest.createStatus(status: 'failure',
        context: 'pull-request-fmt',
        description: 'pull request title does not reference a jira ticket',
        targetUrl: "${env.JOB_URL}/testResults")
}
kubectl proxy &
kubectl delete cluster my-cluster --ignore-not-found=true
kubectl delete crd clusters.stable.example.com --ignore-not-found=true
cat <<EOF | kubectl apply -f -
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: clusters.stable.example.com
spec:
  group: stable.example.com
  version: v1
  scope: Namespaced
  names:
    plural: clusters
    singular: cluster
    kind: Cluster
EOF
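A minimal sketch of creating an instance of the CRD above, assuming a controller watches kind Cluster in group stable.example.com; the spec fields here are hypothetical and depend on that controller's schema:

cat <<EOF | kubectl apply -f -
apiVersion: stable.example.com/v1
kind: Cluster
metadata:
  name: my-cluster
spec:
  # hypothetical field; the real schema depends on the controller consuming this CRD
  size: 3
EOF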
$ curl -s https://api.cqc.org.uk/public/v1/providers/1-101614751 | jq '.';
{
  "providerId": "1-101614751",
  "locationIds": [
    "1-107338589",
    "1-107338606"
  ],
  "organisationType": "Provider",
  "ownershipType": "Organisation",
  "type": "Social Care Org",
START="$(date +'%Y-%m-%dT%H:%M:%S' --date '-5 minutes')"; | |
END="$(date +'%Y-%m-%dT%H:%M:%S')"; | |
aws elb describe-load-balancers | jq -r '.LoadBalancerDescriptions[].LoadBalancerName' | while read LB; do | |
DATA=$(aws cloudwatch get-metric-statistics \ | |
--namespace AWS/ELB \ | |
--metric-name "RequestCount" \ | |
--dimensions '[{"Name":"LoadBalancerName","Value":"'$LB'"}]' \ | |
--start-time "$START" \ | |
--end-time "$END" \ | |
--period 60 \ |
The Kubernetes cluster runs on m3.medium nodes, which only have an ephemeral storage capacity of 4 GB.
This fills up quickly, so the total size needs to be increased.
I've allocated the following, so technically the LVM volume group has the extra capacity available; however, it's unclear how to get it to expand the volume automatically. A manual approach is sketched after the exports below.
export MASTER_DISK_TYPE='gp2';
export MASTER_DISK_SIZE=250
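A minimal sketch of expanding the volume by hand, assuming the extra capacity is visible to LVM as a resized physical volume; the device and volume group/logical volume names below are hypothetical:

# grow the physical volume so the volume group sees the new capacity (device name is hypothetical)
sudo pvresize /dev/xvdb
# extend the logical volume over all free extents in the group (VG/LV names are hypothetical)
sudo lvextend -l +100%FREE /dev/vg-ephemeral/lv-data
# grow the filesystem online (assumes ext4)
sudo resize2fs /dev/vg-ephemeral/lv-data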
function kubernetes::deployment::wait {
  deployment=$1;
  ns=${2:-'default'};
  k_cmd="kubectl --namespace=$ns get deployments $deployment";
  while true; do
    observed=$($k_cmd -o 'jsonpath={.status.observedGeneration}');
    generated=$($k_cmd -o 'jsonpath={.metadata.generation}');
    [ "$?" -ne 0 ] && break;
    [ "${observed}" -ge "${generated}" ] && {
      updated_replicas=$($k_cmd -o 'jsonpath={.status.updatedReplicas}');
      desired_replicas=$($k_cmd -o 'jsonpath={.spec.replicas}');
      # stop waiting once the controller has rolled out the desired number of updated replicas
      [ -n "${updated_replicas}" ] && [ "${updated_replicas}" -ge "${desired_replicas}" ] && break;
    }
    sleep 5;
  done
}
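A usage sketch for the function above; the deployment and namespace names are hypothetical:

kubectl --namespace=staging apply -f deployment.yaml
kubernetes::deployment::wait my-app staging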
$ kubectl exec --tty -i nginx-ingress-controller-9xccu -- ls -alh --color
total 6.2M
drwxr-xr-x  46 root root 4.0K May  1 12:47 .
drwxr-xr-x  46 root root 4.0K May  1 12:47 ..
-rwxr-xr-x   1 root root    0 May  1 12:46 .dockerenv
-rwxr-xr-x   1 root root    0 May  1 12:46 .dockerinit
drwxr-xr-x   2 root root 4.0K Apr 28 00:50 bin
drwxr-xr-x   2 root root 4.0K Nov 27 13:59 boot
drwxr-xr-x   5 root root  380 May  1 12:46 dev
drwxr-xr-x  45 root root 4.0K May  1 12:46 etc
import boto3

r53_client = boto3.client('route53')
hosted_zone = 'alias.photobox.com.'

def lambda_handler(event={}, context={}):
    # the region comes from the CloudWatch Events / CloudTrail payload that invokes this function
    aws_region = event['detail']['awsRegion']
    elb_client = boto3.client('elb', region_name=aws_region)
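A sketch of exercising the handler once deployed, assuming a hypothetical function name; the payload only needs the detail.awsRegion field read above (AWS CLI v2 also requires --cli-binary-format raw-in-base64-out for a raw JSON payload):

aws lambda invoke \
  --function-name elb-route53-sync \
  --payload '{"detail": {"awsRegion": "eu-west-1"}}' \
  output.json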