Usage
./s3_remove_delete_markers_recursive.sh --bucket <bucket_name>
Dependencies
#!/usr/bin/env bash
set -eEo pipefail
shopt -s inherit_errexit >/dev/null 2>&1 || true

if [[ ! "$#" -eq 2 || "$1" != --bucket ]]; then
  echo "USAGE: $(basename "$0") --bucket <bucket>"
  exit 2
fi
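The excerpt above stops after argument parsing. A minimal sketch of the removal loop, assuming the AWS CLI is installed and "$2" holds the bucket name; it ignores pagination and the no-markers edge case:

# Sketch only: list all delete markers, then remove them one by one
bucket="$2"
aws s3api list-object-versions --bucket "$bucket" \
  --query 'DeleteMarkers[].[Key,VersionId]' --output text |
while read -r key version_id; do
  echo "Removing delete marker for ${key} (${version_id})"
  aws s3api delete-object --bucket "$bucket" --key "$key" --version-id "$version_id"
done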
git clone https://github.com/grpc/grpc-web
cd grpc-web
bazel build --compilation_mode=opt //javascript/net/grpc/web:protoc-gen-grpc-web
cp bazel-bin/javascript/net/grpc/web/protoc-gen-grpc-web /usr/local/bin/
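Once the plugin is on PATH, client stubs are generated by passing it to protoc; a typical invocation (the .proto file name here is a placeholder):

# Generate CommonJS messages and a grpc-web client from echo.proto
protoc -I=. echo.proto \
  --js_out=import_style=commonjs:. \
  --grpc-web_out=import_style=commonjs,mode=grpcwebtext:.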
#!/bin/bash
# Usage: ./get_kubeconfig_custom_cluster_rancher2.sh <cluster_name>
# Needs to be run on the server running the `rancher/rancher` container

# Check that jq is installed
command -v jq >/dev/null 2>&1 || { echo "jq is not installed. Exiting." >&2; exit 1; }

# Check that a cluster name was given
if [ -z "$1" ]; then
  echo "Usage: $0 <cluster_name>" >&2
  exit 1
fi
#!/bin/sh
# OUTDATED: please refer to the link below for the latest version:
# https://github.com/rancherlabs/support-tools/blob/master/extended-rancher-2-cleanup/extended-cleanup-rancher2.sh

# Remove all containers and volumes
docker rm -f $(docker ps -qa)
docker volume rm $(docker volume ls -q)

# Delete Kubernetes/RKE state directories
cleanupdirs="/var/lib/etcd /etc/kubernetes /etc/cni /opt/cni /var/lib/cni /var/run/calico /opt/rke"
for dir in $cleanupdirs; do
  echo "Removing $dir"
  rm -rf "$dir"
done
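If kubelet was running on the node, some of these directories may still hold tmpfs mounts that block deletion; a hedged pre-step (the path pattern is an assumption, verify with `mount` first):

# Unmount any leftover kubelet tmpfs mounts before removing the directories
for m in $(mount | grep tmpfs | grep '/var/lib/kubelet' | awk '{ print $3 }'); do
  umount "$m"
done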
#!/bin/bash -e
yum update -y
yum install -y aws-cfn-bootstrap git aws-cli

# Install the files and packages from the metadata
/opt/aws/bin/cfn-init -v --stack "{{ aws_stack_name }}" \
    --resource ECSInstanceLaunchConfiguration \
    --configsets ConfigCluster \
    --region "{{ ref('AWS::Region') }}"
This snippet is a sample showing how to implement CloudWatch Logs streaming to Elasticsearch using Terraform.
I wrote this gist because I couldn't find a clear, end-to-end example of how to achieve this task. In particular,
I only understood the resource "aws_lambda_permission" "cloudwatch_allow" part by reading a couple of bug reports plus
this Stack Overflow post.
The js file is the Lambda function that AWS creates automatically when you build this pipeline through the
web console; I only added endpoint variable handling so that it is configurable from Terraform.
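For reference, the permission that resource grants is the same one you could attach by hand with the CLI; a sketch with placeholder function, statement, and log-group names:

# CLI equivalent of the aws_lambda_permission grant (all names are placeholders)
aws lambda add-permission \
  --function-name cwl_to_es \
  --statement-id cloudwatch_allow \
  --principal logs.amazonaws.com \
  --action lambda:InvokeFunction \
  --source-arn "arn:aws:logs:us-east-1:123456789012:log-group:my-log-group:*"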
# Run a command with the same user ID as the current user
# -v "$(pwd)":/tmp/mount  - mount the current directory to /tmp/mount/
# --env HOME="/tmp/"      - some commands need a writable home, so point it at a temporary folder
docker run -ti --rm -v "$(pwd)":/tmp/mount --user=$(id -u) --env HOME="/tmp/" debian:jessie

# Mount the current user and group databases so the container can resolve them
# /etc/group and /etc/passwd are mounted read-only
# the user is taken from $USER
docker run -ti --rm -v "$(pwd)":/tmp/mount -w /tmp/mount -v /etc/group:/etc/group:ro -v /etc/passwd:/etc/passwd:ro --user=$USER debian:jessie
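An alternative that avoids mounting /etc/passwd and /etc/group entirely is to pass both the UID and GID numerically, assuming nothing in the container needs to resolve the user to a name:

# Run as the current uid:gid without touching the container's user database
docker run -ti --rm -v "$(pwd)":/tmp/mount -w /tmp/mount \
  --user "$(id -u):$(id -g)" --env HOME="/tmp/" debian:jessie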
const a = new Set([1, 2, 3, 4, 4, 4]) // duplicates collapse: Set {1, 2, 3, 4}
const b = new Set([3, 4, 5, 6])

// Intersection, difference, and union built from spread plus Array.prototype.filter
const intersect = (set1, set2) => [...set1].filter(num => set2.has(num))
const differ = (set1, set2) => [...set1].filter(num => !set2.has(num))
const joinSet = (set1, set2) => [...set1, ...set2]

const myIntersectedSet = new Set(intersect(a, b))
console.log('myIntersectedSet', myIntersectedSet) // Set {3, 4}
console.log('myDifferencedSet', new Set(differ(a, b))) // Set {1, 2}
console.log('myJoinedSet', new Set(joinSet(a, b))) // Set {1, 2, 3, 4, 5, 6}