fizz / Jenkinsfile.groovy
Created October 26, 2024 16:38 — forked from Faheetah/Jenkinsfile.groovy
Jenkinsfile idiosyncrasies with escaping and quotes
node {
    echo 'Results included as an inline comment exactly how they are returned as of Jenkins 2.121, with $BUILD_NUMBER = 1'
    echo 'No quotes, pipeline command in single quotes'
    sh 'echo $BUILD_NUMBER' // 1
    echo 'Double quotes are silently dropped'
    sh 'echo "$BUILD_NUMBER"' // 1
    echo 'Even escaped with a single backslash they are dropped'
    sh 'echo \"$BUILD_NUMBER\"' // 1
    echo 'Using two backslashes, the quotes are preserved'
    sh 'echo \\"$BUILD_NUMBER\\"' // "1"
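Why this happens: Groovy single-quoted strings still process backslash escapes, so \" reaches the shell as a bare " and \\" as \". Replaying what the shell actually receives in plain bash (a sketch outside Jenkins, with BUILD_NUMBER=1 assumed) reproduces the results:

# What the shell receives for each step above; BUILD_NUMBER=1 is assumed
BUILD_NUMBER=1
echo $BUILD_NUMBER        # 1   -- no quotes at all
echo "$BUILD_NUMBER"      # 1   -- shell strips the quotes after expanding
echo \"$BUILD_NUMBER\"    # "1" -- escaped quotes survive into the output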
fizz / EKS_RDS_VPC_Peering_script.sh
Created July 12, 2024 19:48 — forked from imranity/EKS_RDS_VPC_Peering_script.sh
VPC Peering between EKS and RDS Postgres
#!/bin/bash -xe
## Create a VPC Peering connection between EKS and RDS Postgres
echo """ run this script as:
./eks-rds-peering.sh
+ read -p 'Enter name of EKS Cluster: ' EKS_CLUSTER
Enter name of EKS Cluster: xolv-dev-cluster
+ EKS_VPC=eksctl-xolv-dev-cluster-cluster/VPC
+ EKS_PUBLIC_ROUTING_TABLE=eksctl-xolv-dev-cluster-cluster/PublicRouteTable
+ read -p 'Enter name of RDS: ' RDS_DB_NAME
Enter name of RDS: sfstackuat
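The preview cuts off before the peering calls themselves; a minimal sketch of the core AWS CLI steps such a script performs (the *_ID and *_CIDR variables here are hypothetical placeholders, not from the gist):

# Request and accept the peering connection between the two VPCs (sketch)
PEERING_ID=$(aws ec2 create-vpc-peering-connection \
    --vpc-id "$EKS_VPC_ID" --peer-vpc-id "$RDS_VPC_ID" \
    --query 'VpcPeeringConnection.VpcPeeringConnectionId' --output text)
aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id "$PEERING_ID"
# Route each VPC's traffic for the other's CIDR through the peering connection
aws ec2 create-route --route-table-id "$EKS_ROUTE_TABLE_ID" \
    --destination-cidr-block "$RDS_VPC_CIDR" --vpc-peering-connection-id "$PEERING_ID"
aws ec2 create-route --route-table-id "$RDS_ROUTE_TABLE_ID" \
    --destination-cidr-block "$EKS_VPC_CIDR" --vpc-peering-connection-id "$PEERING_ID"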
fizz / csv-to-json.sqlite.md
Last active March 9, 2023 07:40
convert csv to json the sqlite way

Try sqlite3 for CSV wrangling! It's a powerhouse for turning your CSV files into a queryable in-memory database, and from there it's trivial to emit JSON, pipe a SQL query's output into jq, or write everything out to a .db file. Here are some aliases I wrote; put them in your dotfiles somewhere:

alias csvq="sqlite3 :memory: -cmd '.mode csv' -cmd '.import /dev/stdin s3' '.mode json'"

You can see that I generically name the table "s3", because it's an alias I use with any CSV file I'm streaming from an S3 bucket, so the table name doesn't need to be too specific. I use it like this:

aws s3 cp s3://$bucket/$key - | csvq "select * from s3" | jq '.[]' -c

"select * from table" means "give me a bunch of rows", so jq '.[]' -c turns those rows into pretty-printed compressed jsonlines. I also have an alias tsvq for tsv files. It's the same thing pretty much, except with .mode tabs instead of .mode csv alias tsvq="sqlite3 :memory: -cmd '.mode tabs' -cmd '.import /dev/stdin s3' '.mode json'"

https://giphy.com/gifs/roosterteeth-lol-barbara-dunkelman-rt-podcast-lPMBfPCJnhX9kwXxc8
fizz / migrate-pv-to-zone.sh
Created May 21, 2021 14:01 — forked from caruccio/ebs-pvc-migrate.sh
Migrate EBS Volume based PVs across AWS availability zones
#!/bin/bash
set -eu
## Usage: migrate-pv-to-zone.sh <namespace> <pvc-name> <target-zone> <deployment>
NAMESPACE=$1
PVCNAME=$2
TARGETZONE=$3
DEPLOYMENTOBJ=$4
## Resolve the PV behind the PVC, then pull the EBS volume ID out of its spec
PVNAME=$(oc -n $NAMESPACE get pvc $PVCNAME --template={{.spec.volumeName}})
VOLUMEID=$(oc -n $NAMESPACE get pv $PVNAME --template={{.spec.awsElasticBlockStore.volumeID}} | cut -d/ -f 4)
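The preview stops before the migration steps; the usual approach (sketched here under assumptions, not the gist's verbatim code) is to snapshot the EBS volume and recreate it in the target zone:

# Snapshot the source volume, wait for it, then create a copy in the target AZ (sketch)
SNAPSHOTID=$(aws ec2 create-snapshot --volume-id "$VOLUMEID" --query 'SnapshotId' --output text)
aws ec2 wait snapshot-completed --snapshot-ids "$SNAPSHOTID"
NEWVOLUMEID=$(aws ec2 create-volume --snapshot-id "$SNAPSHOTID" \
    --availability-zone "$TARGETZONE" --query 'VolumeId' --output text)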
fizz / update_lambda_env_vars.bash
Last active January 16, 2021 07:55 — forked from monkut/update_awslambda_envars.bash
update aws lambda environment variables via awscli and jq
#!/usr/bin/env bash
# `update-function-configuration` overwrites the existing set of envars.
# In order to *ADD* variables we need to read the existing envars and add to them.
# This command uses `jq` to transform a KEY=VALUE dotenv file into the JSON object
# Lambda expects, then updates the lambda configuration.
# create the updated envar set
FUNCTION_NAME=trip-analytics-prod-app
UPDATED=.env.prod
UPDATE=$(jq --raw-input --slurp 'split("\n")[:-1] | map(split("=") as [$key, $value] | {$key, $value}) | from_entries' $UPDATED)
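The preview ends before the update is applied; a hedged sketch of the merge-and-apply step (jq's + merges the new envars over the existing ones):

# Fetch current envars, merge in the new set, and push the result (sketch)
EXISTING=$(aws lambda get-function-configuration --function-name "$FUNCTION_NAME" \
    --query 'Environment.Variables' --output json)
MERGED=$(jq -n --argjson old "$EXISTING" --argjson new "$UPDATE" '($old // {}) + $new')
aws lambda update-function-configuration --function-name "$FUNCTION_NAME" \
    --environment "{\"Variables\": $MERGED}"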
fizz / script-template.sh
Created December 29, 2020 07:21 — forked from m-radzikowski/script-template.sh
Minimal safe Bash script template - see the article with full description: https://betterdev.blog/minimal-safe-bash-script-template/
#!/usr/bin/env bash
set -Eeuo pipefail
trap cleanup SIGINT SIGTERM ERR EXIT
script_dir=$(cd "$(dirname "${BASH_SOURCE[0]}")" &>/dev/null && pwd -P)
usage() {
cat <<EOF
Usage: $(basename "${BASH_SOURCE[0]}") [-h] [-v] [-f] -p param_value arg1 [arg2...]
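The preview cuts off inside the usage() heredoc. The heart of the template is the cleanup trap registered above; a minimal sketch of what the handler looks like (the body is an illustrative assumption, see the linked article for the full version):

cleanup() {
    trap - SIGINT SIGTERM ERR EXIT  # deregister so cleanup runs only once
    # script cleanup here -- e.g. remove temp files the script created (assumed example):
    # rm -rf "${tmpdir:-}"
}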
fizz / create-stack.sh
Created November 13, 2020 17:57 — forked from lantrix/create-stack.sh
Find yourself creating the same AWS CFN stack a lot during testing? Wasting too much time repeating tags? Put your tags and params in JSON files and use this bash wrapper script to create the CloudFormation stack. Provide your own template file, and modify the stackParams and stackTags files to your own needs.
# Create AWS CFN Stack - wrapper for `aws cloudformation create-stack`
# Example usage is printed when run without arguments:
[ $# -eq 0 ] && { echo -e "\nUsage `basename $0` <stack name> <CFN template file> <JSON parameters file> <JSON tags file>\n\nExample:\n`basename $0` mystackname MyCfn.template stackParams.json stackTags.json\n"; exit 1; }
#Inputs
stackName=${1?param missing - Stack Name}
templateFile=${2?param missing - Template File}
paramsFile=${3?param missing - Json Parameters file}
tagsFile=${4?param missing - Json Tags file}
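The preview ends before the stack creation itself; given the variables above, the call presumably looks something like this (a sketch, not necessarily the gist's exact line):

aws cloudformation create-stack \
    --stack-name "$stackName" \
    --template-body "file://$templateFile" \
    --parameters "file://$paramsFile" \
    --tags "file://$tagsFile"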
fizz / README.md
Created July 2, 2020 18:23 — forked from ktmud/README.md
Enable Okta Login for Superset

This Gist demonstrates how to enable Okta/OpenID Connect login for Superset (and with slight modifications, other Flask-Appbuilder apps).

All you need to do is update superset_config.py as follows and make sure your Client ID (OKTA_KEY) and Secret (OKTA_SECRET) are set in the env.
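For the env part, a minimal sketch (the values are placeholders for your Okta app's credentials):

export OKTA_KEY='<your-okta-client-id>'        # placeholder
export OKTA_SECRET='<your-okta-client-secret>' # placeholder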