Johannes Nicolai (jonico)

@jonico
jonico / search-for-pattern-in-all-repos.sh
Last active March 3, 2021 22:43
Search through all reachable commits in all branches and tags in all repos on GitHub Enterprise Server for a pattern (run it on a backup system)
# Replace the string "Hyrule" with your search pattern - only run on backup system - will take a LOOOONG time to run through
ghe-console -y <<'ENDSCRIPT'
Repository.find_each do |repository|
if repository.active && !repository.empty? then
host,path = GitHub::Routing.lookup_route(repository.nwo)
puts "Processing repo #{repository.nwo} ..."
refs=`git --git-dir=#{path} rev-list --branches --tags | sort | uniq`
refs.each_line do |ref|
ref = ref.chomp
if system("git --git-dir=#{path} grep 'Hyrule' #{ref}")
puts "Found pattern in ref #{ref} in repo #{repository.nwo}: "
@jonico
jonico / dind-github-actions-k8s-example.yaml
Last active April 29, 2021 15:45
Example of a GitHub Actions runner pod managed by summerwind/actions-runner-controller that was successfully tested with the container and services keywords in Actions workflow files
apiVersion: v1
items:
- apiVersion: v1
kind: Pod
metadata:
creationTimestamp: "2021-04-29T15:09:09Z"
labels:
pod-template-hash: 65f67dcb66
runner-deployment-name: actions-runner-deployment-epona
runner-template-hash: 6788f4694c
@jonico
jonico / sync-github-actions.sh
Created June 21, 2021 15:14
Use actions-sync to sync actions/ org and some custom actions
#!/usr/bin/bash
cd
mkdir -p /tmp/actions
git clone https://github.com/github/platform-samples.git
sdk install groovy
cd platform-samples/api/groovy/
groovy ListReposInOrg.groovy -t $GH_PUBLIC actions > ~/repo-list
cd
wget https://github.com/actions/actions-sync/releases/download/v202009231612/gh_202009231612_linux_amd64.tar.gz
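
The preview stops after downloading the actions-sync release. A rough sketch of how the sync itself might then be invoked (flag names as I recall them from the actions-sync README; the destination URL/token variables, the extracted binary path, and the assumption that ~/repo-list holds one owner/name per line are not part of the gist):

tar xzf gh_202009231612_linux_amd64.tar.gz
# push every repo listed in ~/repo-list into the destination GitHub Enterprise Server instance,
# using the /tmp/actions directory created above as the local cache
./bin/actions-sync sync \
  --cache-dir /tmp/actions \
  --destination-url "$GHES_URL" \
  --destination-token "$GHES_TOKEN" \
  --repo-name-list "$(paste -sd, ~/repo-list)"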
@jonico
jonico / migrate_from_azure_blob_to_s3.sh
Last active August 20, 2021 03:08
Copilot examples (only the comments have been written by me)
# copy Azure blob storage files to S3 buckets
#
# Usage:
# ./copy-azure-blob-storage-to-s3.sh <blob storage container> <s3 bucket>
#
# Example:
# ./copy-azure-blob-storage-to-s3.sh my-container s3://my-bucket
#
# Note:
# This script requires a working Azure CLI.
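
The gist preview ends with the comments that prompted Copilot. A minimal sketch of what the generated body could look like under those comments (Azure and AWS credentials are expected in the environment; the actual Copilot output is not shown here):

CONTAINER="$1"
BUCKET="$2"
WORKDIR="$(mktemp -d)"

# list every blob in the container, download it locally, then upload it to the S3 bucket
az storage blob list --container-name "$CONTAINER" --query "[].name" --output tsv |
while read -r blob; do
  mkdir -p "$WORKDIR/$(dirname "$blob")"
  az storage blob download --container-name "$CONTAINER" --name "$blob" --file "$WORKDIR/$blob"
  aws s3 cp "$WORKDIR/$blob" "$BUCKET/$blob"
done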
@jonico
jonico / wait-for-ps-branch-readiness.sh
Created October 4, 2021 21:36
shell script that waits for a PlanetScale branch to be ready for use and increases retry times exponentially
#!/bin/bash
# shell script that waits for a PlanetScale branch to be ready for use and increases retry times exponentially
# usage: wait-for-ps-branch-readiness.sh <db name> <branch name> <max retries>
function wait_for_branch_readiness {
local retries=$1
local db=$2
local branch=$3
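
The preview cuts off inside the function. A sketch of how the exponential backoff loop might continue (the pscale branch show call, its --format json flag, and the ready field are assumptions about the pscale CLI, not lines taken from the gist):

local count=0
local wait=1
# keep polling until the branch reports ready, doubling the sleep time after every attempt
until pscale branch show "$db" "$branch" --format json | grep -q '"ready": *true'; do
  count=$((count + 1))
  if [ "$count" -gt "$retries" ]; then
    echo "branch $branch of $db did not become ready after $retries attempts" >&2
    return 1
  fi
  echo "branch $branch not ready yet, retrying in $wait seconds ($count/$retries)"
  sleep "$wait"
  wait=$((wait * 2))
done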
@jonico
jonico / .env
Last active June 1, 2023 09:04
How to run PlanetScale alongside a MySQL-enabled app that does not have any other internet access
PLANETSCALE_DB=brandnewdb
PLANETSCALE_BRANCH=mybranch
PLANETSCALE_ORG=jonico
PLANETSCALE_SERVICE_TOKEN=pscale_tkn_loCzIH7NktDK-GWJ71eX97Qr5D3a9iEO_pgHCSHUtw
PLANETSCALE_SERVICE_TOKEN_NAME=69xrlIwgs4ms
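
These variables would typically feed a pscale proxy running next to the app. A minimal sketch of the idea (the --host/--port binding is an assumption; the service-token authentication flags are omitted here because their exact names vary between pscale CLI versions):

# open a local MySQL proxy for $PLANETSCALE_DB/$PLANETSCALE_BRANCH that the app can use as its
# MySQL host, so the app itself never needs direct internet access
pscale connect "$PLANETSCALE_DB" "$PLANETSCALE_BRANCH" --org "$PLANETSCALE_ORG" --host 0.0.0.0 --port 3306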
@jonico
jonico / README.md
Last active September 14, 2023 15:13
How to create an RDS database that is suitable for PlanetScale's import feature

MySQL on RDS configured for import to PlanetScale example

This folder contains an example Terraform configuration that deploys a MySQL database - using RDS in an Amazon Web Services (AWS) account - that is properly configured for PlanetScale's DB import feature.

It will make sure that the RDS database has the binlog exposed, gtid-mode set to ON, and is using ROW-based replication. All of these are prerequisites for imports to work with zero downtime.

If you are going to write a lot of data in your database in a very short amount of time, don't forget the only manual step after the Terraform setup.
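
For reference, the same settings can also be applied by hand with a custom DB parameter group; a sketch using the AWS CLI (the parameter group and instance names are placeholders, and the Terraform configuration in this gist remains the actual source of truth):

aws rds create-db-parameter-group \
  --db-parameter-group-name planetscale-import \
  --db-parameter-group-family mysql8.0 \
  --description "MySQL settings required for PlanetScale import"

# ROW-based replication with GTIDs enabled; the static parameters only take effect after a reboot
aws rds modify-db-parameter-group \
  --db-parameter-group-name planetscale-import \
  --parameters "ParameterName=binlog_format,ParameterValue=ROW,ApplyMethod=immediate" \
               "ParameterName=gtid-mode,ParameterValue=ON,ApplyMethod=pending-reboot" \
               "ParameterName=enforce_gtid_consistency,ParameterValue=ON,ApplyMethod=pending-reboot"

# attach the parameter group; automated backups must be enabled for RDS to expose the binlog at all
aws rds modify-db-instance \
  --db-instance-identifier my-import-source \
  --db-parameter-group-name planetscale-import \
  --backup-retention-period 3 \
  --apply-immediately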

@jonico
jonico / counting-affected-rows-in-potsgresql.md
Last active June 7, 2022 17:30
Counting processed rows (read/written) in PostgreSQL

Scripts to determine PostgreSQL database size and rows processed

Differences between PostgreSQL and MySQL storage format and why this matters for billing estimations

tl;dr Rows-processed numbers and database size may differ between Postgres and MySQL due to different index and row storage formats

PostgreSQL and MySQL are both relational databases with strong transactional capabilities. The way their storage engines store rows and corresponding indexes and how those indexes are used during queries differs significantly though. Check out this article from Uber Engineering for the technical details behind those differences.

Due to those index and row storage format differences, any numbers about rows read / written and database size from PostgreSQL will differ from the numbers you can expect once migrated to MySQL. If you are using similar indexes for your queries, the numbers should be pretty similar but depending on your exact queries and read/write pa
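
A minimal sketch of the kind of numbers such scripts can pull on the PostgreSQL side, run through psql (the connection string is a placeholder):

# total size of the current database
psql "$DATABASE_URL" -c "SELECT pg_size_pretty(pg_database_size(current_database()));"

# rows read and written since the statistics were last reset, from pg_stat_database
psql "$DATABASE_URL" -c "
  SELECT tup_returned + tup_fetched                 AS rows_read,
         tup_inserted + tup_updated + tup_deleted   AS rows_written
  FROM pg_stat_database
  WHERE datname = current_database();"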

@jonico
jonico / MySQLBYOD.py
Last active July 12, 2023 21:20
Copying from one PlanetScale table to another using AWS Glue (and MySQL 8.0 JDBC driver)
import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext, SparkConf
from awsglue.context import GlueContext
from awsglue.job import Job
import time
from pyspark.sql.types import StructType, StructField, IntegerType, StringType
sc = SparkContext()
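
Once the script is deployed as a Glue job, a run can be started from the CLI; a short sketch (the job name and argument keys are placeholders, not taken from the gist):

aws glue start-job-run \
  --job-name MySQLBYOD \
  --arguments '{"--source_table":"source_db.my_table","--target_table":"target_db.my_table"}'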
@jonico
jonico / README.md
Last active July 5, 2022 11:01
Docker compose files for temporal.io with external MySQL databases for temporal and temporal_visibility tables (using PlanetScale as example)

Docker compose files for temporal.io with external MySQL databases for temporal and temporal_visibility tables (using PlanetScale as an example)

As the docker-compose files 👇 are using PlanetScale's MySQL-compatible Vitess database as an example, each database (temporal and temporal_visibility) uses different keyspaces and connection strings. Unfortunately, temporalio/auto-setup does not seem to support multiple connection strings for database creation and schema updates (using temporal-sql-tool), so the following commands would need to be run manually before starting up docker-compose:

./temporal-sql-tool --ep $TEMPORAL_PSCALE_HOSTSTRING --user $TEMPORAL_PSCALE_USER --tls --password $TEMPORAL_PASSWORD -p 3306 --plugin mysql --db temporal setup-schema -v 0.0
./temporal-sql-tool --ep $TEMPORAL_PSCALE_HOSTSTRING --user $TEMPORAL_PSCALE_USER --tls --password $TEMPORAL_PASSWORD -p 3306 --plugin mysql --db temporal update-schema -d ./schema/mysql/v57/temporal/versioned
./temporal