If you don't have dos2unix installed, use sed:
dos2unix: sed 's/\r//'
unix2dos: sed 's/$/\r/'
import gnupg

gpg = gnupg.GPG()
gpg.encoding = 'utf-8'
me = {'name_real': 'alice',
      'name_email': '[email protected]',
      'expire_date': '2024-04-01',
      'passphrase': 'alicespassword'}
# Generate a key pair from the parameters above
key = gpg.gen_key(gpg.gen_key_input(**me))
Businesses are machines producing mountains of data about sales, usage, customers, costs, and so on. Traditionally, data processing is highly centralised, with teams of staff and computers running hot and whirring, ready to process. We can do better than moving the mountain of data into the corporate data machine - so long as that machinery is light enough to be moved to the data.

We've had this problem: a huge directory of files in CSV format, containing vital information for our business. But it's in CSV, it requires analysis, and you don't feel like learning sed/grep/awk today - besides, it's 2017 and no one thinks those tools are easy to use.
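One piece of machinery light enough to bring to the data is sqlite3, which can load a CSV straight into a table and answer questions in SQL. A minimal sketch, assuming a hypothetical sales.csv (the file name, columns, and values below are made up for illustration):

```shell
# Hypothetical CSV data for illustration
cat > /tmp/sales.csv <<'EOF'
region,amount
east,100
west,250
east,50
EOF

# Import the CSV into an in-memory SQLite table (the header row becomes
# the column names) and aggregate with plain SQL, no sed/grep/awk needed
totals=$(sqlite3 :memory: <<'SQL'
.mode csv
.import /tmp/sales.csv sales
SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region;
SQL
)
echo "${totals}"
```

With `.mode csv` the output comes back as CSV too, so the result pipes cleanly into the next tool.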
See netstat detail of a running docker container:
nsenter -t $(docker inspect -f '{{.State.Pid}}' _container_id_) -n netstat -tunapp
CLUSTER=$1
sts aws ecs list-services --cluster ${CLUSTER} | \
  jq -r '.[] | .[]' | xargs -J % -n10 sts \
  aws ecs describe-services --cluster ${CLUSTER} \
  --services % | jq -r '.services[] | select(.desiredCount!=.runningCount) | .serviceArn' > services
while read r; do
  sts aws ecs update-service \
    --cluster ${CLUSTER} \
    --service ${r} \
    --force-new-deployment   # assumption: the truncated original forced a redeploy; adjust to taste
done < services
CLUSTER=$1
aws ecs list-services --cluster ${CLUSTER} | jq -r '.[] | .[]' | \
  xargs -J % -n10 aws ecs describe-services --cluster ${CLUSTER} --services % | jq -r '.services[] | .serviceArn' | \
  xargs -J % -n10 aws ecs describe-services --cluster ${CLUSTER} --services % | jq -r '.services[]|.taskDefinition' | \
  xargs -J % -n1 aws ecs describe-task-definition --task-definition % | jq -r '.taskDefinition|.containerDefinitions[]|.image'
set -exu
SPOTFLEETREQ=$1
CLUSTER=$2
AZ=$3
fleet_sz=$(aws ec2 describe-spot-fleet-instances --spot-fleet-request-id ${SPOTFLEETREQ} | jq -r '.ActiveInstances|length')
container_sz=$(aws ecs list-container-instances --cluster ${CLUSTER} | jq -r '.containerInstanceArns[]' | \
  xargs -n10 -J % aws ecs describe-container-instances --cluster ${CLUSTER} --container-instances % | \
  jq -r ".containerInstances[].attributes[]|select(.name==\"ecs.availability-zone\" and .value==\"${AZ}\")|.value" | wc -l | xargs)
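The script above stops after gathering the two counts; a plausible follow-on (an assumption, not part of the original, with stand-in values so it runs without AWS credentials) is to compare them and flag drift between the spot fleet and the instances actually registered with ECS in that AZ:

```shell
# Stand-in values; in the real script these come from the AWS CLI calls above
fleet_sz=3
container_sz=3

# Flag when active spot instances have not all registered with the ECS cluster
if [ "${fleet_sz}" -eq "${container_sz}" ]; then
  echo "in sync"
else
  echo "drift: fleet=${fleet_sz} ecs=${container_sz}"
fi
```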
aws kms create-key
# Encrypt the contents of the file (the CiphertextBlob comes back base64-encoded)
aws kms encrypt \
  --key-id ${key_id_from_create_key_step} \
  --plaintext fileb://super_secret_file \
  --output text \
  --query CiphertextBlob > super_secret_file.enc.b64
# Decrypt the contents of the file (decode the base64 first; the Plaintext
# in the response is also base64-encoded)
base64 --decode super_secret_file.enc.b64 > super_secret_file.enc
aws kms decrypt \
  --ciphertext-blob fileb://super_secret_file.enc \
  --output text \
  --query Plaintext | base64 --decode
Run command lives here:
/var/lib/cloud/instance/scripts/runcmd
Config is in:
/var/lib/cloud/instance/cloud-config.txt
Check (powershell):
# Remove any volumes that are not attached to an AWS instance
aws ec2 describe-volumes | \
  jq -r '.Volumes[] | select( (.Attachments|length)==0 ) | .VolumeId ' | \
  xargs -J % -n 1 aws ec2 delete-volume --volume-id %
# Remove DB snapshots that are older than a month
aws rds describe-db-snapshots | \
  jq -r ".DBSnapshots[] | select( (.SnapshotCreateTime<\"$(date -v-1m -u +%Y-%m-%dT%H:%M:%S.000Z)\") and (.SnapshotType==\"manual\")) | .DBSnapshotIdentifier" | \
  xargs -J % -n 1 sts aws rds delete-db-snapshot --db-snapshot-identifier %
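Note that `date -v-1m` and `xargs -J %` are the BSD/macOS spellings; on GNU/Linux the same one-liners need the coreutils and findutils forms. A sketch of the equivalents (the `vol-123` value is just a placeholder):

```shell
# GNU date equivalent of BSD "date -v-1m": one month ago, in the same format
cutoff=$(date -u -d '1 month ago' +%Y-%m-%dT%H:%M:%S.000Z)
echo "${cutoff}"

# GNU xargs has no -J; use -I to place the argument in the command line
# (echo stands in for the real aws call so this runs anywhere)
echo vol-123 | xargs -n1 -I % echo aws ec2 delete-volume --volume-id %
```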