Heath Dutton ☕ heathdutton

🕴️
🐈🐈🐈🐈🐈🐈🐈🐈🐈🐈🐈🐈🐈🐈🐈🐈🐈🐈🐈🐈🐈🐈🐈🐈🐈🐈🐈🐈🐈🐈🐈🐈🐈🐈🐈🐈🐈🐈🐈🐈🐈🐈🐈🐈🐈🐈🐈🐈🐈🐈🐈🐈🐈🐈🐈🐈🐈🐈🐈🐈🐈🐈🐈🐈🐈🐈🐈🐈🐈🐈🐈 oh hello
  • Tampa Florida
  • 18:20 (UTC -05:00)
  • https://orcid.org/0009-0005-9397-422X
@heathdutton
heathdutton / recover_commit_latency.sql
Last active December 6, 2021 19:18
MySQL - Automatically recover from a big commit latency bottleneck
-- Note: this will CANCEL commits if they take more than 20s to complete.
-- It is assumed that if your commit latency is greater than 20s, something is terribly wrong
-- and you are already losing data due to a lack of throughput.
-- This will help resume operation at the cost of potentially dropping the locking changes.
-- Creates a stored procedure that can recover from a commit latency lock-up.
-- Do not run more than one of these at once.
-- When running, loop and kill until there is nothing left to kill.
DROP PROCEDURE IF EXISTS `recover_commit_latency`;
DELIMITER ;;
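The preview stops at the DELIMITER change. A minimal sketch of how such a procedure body could look follows: it kills connections that information_schema.processlist shows stuck in a commit for more than 20 seconds. The threshold, state matching, and structure here are assumptions for illustration, not the author's original body.

CREATE PROCEDURE `recover_commit_latency`()
BEGIN
    DECLARE done INT DEFAULT FALSE;
    DECLARE victim_id BIGINT;
    -- Connections that have been stuck committing for more than 20 seconds.
    DECLARE cur CURSOR FOR
        SELECT id
        FROM information_schema.processlist
        WHERE state LIKE '%commit%'
          AND time > 20;
    DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE;
    OPEN cur;
    kill_loop: LOOP
        FETCH cur INTO victim_id;
        IF done THEN
            LEAVE kill_loop;
        END IF;
        -- KILL does not accept a variable directly, so use a prepared statement.
        SET @kill_sql = CONCAT('KILL ', victim_id);
        PREPARE stmt FROM @kill_sql;
        EXECUTE stmt;
        DEALLOCATE PREPARE stmt;
    END LOOP;
    CLOSE cur;
END;;
DELIMITER ;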
@p4535992
p4535992 / ua.md
Created May 27, 2020 16:32 — forked from jwodder/ua.md
D&D 5e Unearthed Arcana Index

D&D 5e Unearthed Arcana Index

Because finding anything in this page is harder than it should be

Date       | Article                   | PDF                 | Contents
2015-02-02 | Unearthed Arcana: Eberron | UA_Eberron_v1.1.pdf | Changelings, shifters, warforged, Wizard (Artificer), rules for action points, dragonmarks
@heathdutton
heathdutton / TimeZoneOptionCollection.php
Created January 28, 2020 20:52
Dynamic collection of supported PHP Timezones. Grouped by country, with dynamic abbreviations, DST indicators, and future-proofing. For making useful and clean select2 (or similar) dropdown selectors. Better than all the hard-coded solutions I could find.
<?php
/**
* Class TimeZoneOptionCollection
*
* Dynamic collection of supported PHP Timezones:
* - Grouped by country
* - Commonly used abbreviations
* - DST indicators
* - Future-proof (leans on PHP for timezones).
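The class body is cut off in the preview. The core idea, letting PHP itself enumerate timezones and deriving country grouping, abbreviations, and DST flags at runtime, might look roughly like this (a sketch assuming PHP 7+, not the author's actual class):

<?php
// Illustrative sketch only: group PHP's timezone identifiers by country and
// derive abbreviation / DST / offset data dynamically rather than hard-coding it.
$options = [];
foreach (DateTimeZone::listIdentifiers() as $identifier) {
    $zone     = new DateTimeZone($identifier);
    $location = $zone->getLocation() ?: [];                   // country_code, lat, lng
    $country  = $location['country_code'] ?? '??';
    $now      = new DateTime('now', $zone);
    $options[$country][] = [
        'value'        => $identifier,
        'label'        => str_replace(['_', '/'], [' ', ' / '], $identifier),
        'abbreviation' => $now->format('T'),                   // e.g. EST, CET
        'dst'          => (bool) $now->format('I'),            // observing DST right now?
        'offset'       => $now->format('P'),                   // e.g. -05:00
    ];
}
ksort($options); // ready to feed into a select2-style grouped dropdown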
@heathdutton
heathdutton / mautic-campaign-delays-lite.sql
Last active February 25, 2019 21:54
List all Mautic campaign delays
-- All Mautic campaign delays merged. Two queries (the first is the important one). Takes under 10s.
-- Depends on the soft-deleted campaign events PR.
SET @@group_concat_max_len = 10000000000000;
SELECT *
FROM (
SELECT NULL as campaign_id,
NULL as campaign_name,
NULL as event_id,
NULL as event_name,
NULL as lead_count,
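The full query is cut off above. A much simpler query captures the core idea, counting scheduled campaign events whose trigger_date has already passed, using the standard Mautic tables; this is a sketch of the concept, not the author's merged query:

-- Sketch: contacts whose scheduled campaign events are overdue, per event.
SELECT c.id                AS campaign_id,
       c.name              AS campaign_name,
       e.id                AS event_id,
       e.name              AS event_name,
       COUNT(l.lead_id)    AS lead_count,
       MIN(l.trigger_date) AS oldest_trigger_date
FROM campaign_lead_event_log l
JOIN campaign_events e ON e.id = l.event_id
JOIN campaigns c       ON c.id = l.campaign_id
WHERE l.is_scheduled = 1
  AND l.trigger_date < NOW()
GROUP BY c.id, c.name, e.id, e.name
ORDER BY lead_count DESC;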
@heathdutton
heathdutton / backlog.sql
Last active November 1, 2018 19:05
mautic:campaign:trigger backlog
/* Lead ingestion behind - Leads waiting to be processed by a campaign */
SELECT cl.campaign_id as campaign_id, count(cl.lead_id) as lead_count FROM campaign_leads cl WHERE (cl.manually_removed = 0) AND (NOT EXISTS (SELECT null FROM campaign_lead_event_log e WHERE (cl.lead_id = e.lead_id) AND (e.campaign_id = cl.campaign_id))) GROUP BY cl.campaign_id LIMIT 100;
-- Faster:
SELECT cl.campaign_id AS campaign_id, c.name as campaign_name, count(cl.lead_id) AS lead_count, c.is_published AS published
FROM campaign_leads cl
LEFT JOIN campaigns c
ON c.id = cl.campaign_id
WHERE (NOT EXISTS (
SELECT null FROM campaign_lead_event_log e
WHERE
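The NOT EXISTS clause is truncated above; based on the first query, the remainder presumably continues along these lines (a guess at the shape, not the author's exact text):

            (cl.lead_id = e.lead_id) AND (e.campaign_id = cl.campaign_id)
    ))
    AND (cl.manually_removed = 0)
GROUP BY cl.campaign_id, c.name, c.is_published
ORDER BY lead_count DESC
LIMIT 100;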
@heathdutton
heathdutton / UnpackCSVJSON.php
Last active December 5, 2017 20:28
Takes a CSV with JSON in it, and makes it more usable as a CSV with columns extracted.
<?php
/**
* Automatically extrapolate JSON keys/values from a CSV into their own columns.
* JSON is flattened, and null values/keys are ignored.
*
* Usage:
*
* php UnpackCSVJSON.php original.csv destination.csv
*/
$file = $argv[1];
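The preview stops after reading the input path. The core of such a script, decoding JSON cells, flattening nested keys with dot notation, and writing the union of all keys out as columns, could look roughly like this (a sketch assuming PHP 7.4+, with illustrative names, not the author's original code):

<?php
// Illustrative sketch only: flatten JSON found in CSV cells into extra columns.
function flattenJson(array $data, string $prefix = ''): array
{
    $flat = [];
    foreach ($data as $key => $value) {
        if ($value === null) {
            continue; // null values are ignored, per the gist description
        }
        $column = $prefix === '' ? (string) $key : $prefix . '.' . $key;
        $flat   = is_array($value)
            ? $flat + flattenJson($value, $column)
            : $flat + [$column => $value];
    }
    return $flat;
}

$in      = fopen($argv[1], 'r');
$header  = fgetcsv($in);
$rows    = [];
$columns = [];
while (($row = fgetcsv($in)) !== false) {
    $flat = [];
    foreach ($row as $i => $cell) {
        $decoded = json_decode((string) $cell, true);
        $flat   += is_array($decoded) ? flattenJson($decoded, $header[$i]) : [$header[$i] => $cell];
    }
    $rows[]   = $flat;
    $columns += array_fill_keys(array_keys($flat), true);
}
fclose($in);

$out = fopen($argv[2], 'w');
fputcsv($out, array_keys($columns));
foreach ($rows as $row) {
    fputcsv($out, array_map(fn ($c) => $row[$c] ?? '', array_keys($columns)));
}
fclose($out);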
@heathdutton
heathdutton / aws-eb-cron.sh
Last active August 16, 2020 09:45
Execute cron tasks on only one instance in Elastic Beanstalk.
#!/usr/bin/env bash
# Prevent cron tasks from being run by multiple instances in Elastic Beanstalk.
# Automatically adjusts to the new "leading instance" when scaling up or down.
# Stores the result in an environment variable, AWS_EB_CRON, for other uses.
#
# This must be run by cron as the root user to function correctly.
# Anything after this file will be executed as webapp for security.
#
# Example Cron (should be created by .ebextensions):
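The example cron entry itself is cut off in the preview. The usual way scripts like this pick a single "leading instance" is to compare the local instance ID against the Auto Scaling group's instance list; below is a sketch of that approach, assuming the AWS CLI and IMDSv1 instance metadata are available, not the author's original code:

# Sketch: detect whether this instance is the leader of its Auto Scaling group.
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
REGION=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone | sed 's/[a-z]$//')

# Find the Auto Scaling group this instance belongs to.
ASG_NAME=$(aws autoscaling describe-auto-scaling-instances \
    --instance-ids "$INSTANCE_ID" --region "$REGION" \
    --query 'AutoScalingInstances[0].AutoScalingGroupName' --output text)

# Treat the alphabetically-first InService instance as the leader.
LEADER_ID=$(aws autoscaling describe-auto-scaling-groups \
    --auto-scaling-group-names "$ASG_NAME" --region "$REGION" \
    --query 'AutoScalingGroups[0].Instances[?LifecycleState==`InService`].InstanceId' \
    --output text | tr '\t' '\n' | sort | head -n 1)

if [ "$INSTANCE_ID" = "$LEADER_ID" ]; then
    export AWS_EB_CRON=true
    # Only the leader runs the actual task, dropped to the webapp user.
    exec sudo -u webapp "$@"
else
    export AWS_EB_CRON=false
fi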