Patrick Tully (mrpatrick)
VARNISH_VERSION=3.0.3
VMOD_REPOSITORY=https://github.com/nand2/libvmod-throttle.git
VMOD_NAME=libvmod-throttle
VMOD_VERSION='0.1'
VMOD_LICENSE='Simplified BSD License'
VMOD_VENDOR='Nicolas Deschildre'
VMOD_DESCRIPTION='Rate limiting in Varnish, on different time windows and keys'
sudo yum install -y pcre-devel python-docutils
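These variables read like the inputs to a vmod build-and-package script (the license, vendor, and description fields match fpm's packaging flags). A minimal sketch of the steps they might drive, assuming the Varnish source tree has already been built under /usr/src and that the vmod installs to the stock vmods directory:

git clone ${VMOD_REPOSITORY} && cd ${VMOD_NAME}
./autogen.sh
./configure VARNISHSRC=/usr/src/varnish-${VARNISH_VERSION}
make && sudo make install
# Package the installed vmod as an RPM (the vmods path is an assumption):
fpm -s dir -t rpm -n ${VMOD_NAME} -v ${VMOD_VERSION} \
    --license "${VMOD_LICENSE}" --vendor "${VMOD_VENDOR}" \
    --description "${VMOD_DESCRIPTION}" /usr/lib64/varnish/vmods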
@eurica
eurica / PagerDutyWebhookToEmail.php
Last active November 22, 2019 02:12
Simple example of using PagerDuty webhooks and PHP to forward all incident state changes to an email address.
Sample PHP code to accept PagerDuty webhooks and send out notifications by email on state changes.
For more information, see http://developer.pagerduty.com/documentation/rest/webhooks
This example threads emails based on "$status: $description on $service" so each update to each incident would start a new thread.
This code is unsupported by PagerDuty.
<?php
$emailAddress = "[email protected]";
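// Continuation sketch (the gist preview is truncated above). The payload
// field names below follow PagerDuty's v1 webhook format and are
// assumptions, not the original gist's code:
$payload = json_decode(file_get_contents('php://input'), true);
foreach ($payload['messages'] as $message) {
    $incident = $message['data']['incident'];
    $status = $incident['status'];
    $service = $incident['service']['name'];
    $description = $incident['trigger_summary_data']['description'];
    // Subject of the form "$status: $description on $service", as described above.
    $subject = "$status: $description on $service";
    mail($emailAddress, $subject, print_r($incident, true));
}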
@louy
louy / .htaccess
Created July 25, 2013 22:31
Apache .htaccess geographical redirect based on CloudFlare's geo-ip headers
# add as many as you need...
SetEnvIf CF-IPCountry SY RedirectSubdomain=syria
SetEnvIf CF-IPCountry AE RedirectSubdomain=uae
SetEnvIf CF-IPCountry EG RedirectSubdomain=egypt
# Only redirect if Host is not a subdomain
SetEnvIfNoCase Host ^.+\.example\.com$ !RedirectSubdomain
# Only redirect if cookie "noredirect" doesn't exist
SetEnvIfNoCase ^Cookie$ noredirect=true !RedirectSubdomain
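# Continuation sketch (not shown in the preview): perform the actual
# redirect using the env var set above. The subdomain URL scheme and the
# 302 status are assumptions.
RewriteEngine On
RewriteCond %{ENV:RedirectSubdomain} !^$
RewriteRule ^(.*)$ http://%{ENV:RedirectSubdomain}.example.com/$1 [R=302,L]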
@mrpatrick
mrpatrick / s3_bucket_public_policy.json
Last active December 23, 2015 22:19
Amazon S3 Public Website Bucket Policy - ensures anything added or new will be publicly readable
{ "Version": "2008-10-17", "Id": "http referer policy", "Statement": [ { "Sid": "readonly policy", "Effect": "Allow", "Principal": "*", "Action": "s3:GetObject", "Resource": "arn:aws:s3:::BUCKET_NAME_HERE/*" } ] }
@wsargent
wsargent / docker_cheat.md
Last active June 29, 2024 19:32
Docker cheat sheet
@kevin-smets
kevin-smets / iterm2-solarized.md
Last active May 16, 2025 13:41
iTerm2 + Oh My Zsh + Solarized color scheme + Source Code Pro Powerline + Font Awesome + [Powerlevel10k] - (macOS)

(Screenshots in the original gist: the Default theme and the Powerlevel10k theme.)

@jfexyz
jfexyz / GitHub Wiki Subtree Storage.markdown
Created January 22, 2014 23:10
Store and edit GitHub wikis within the main project repository.

Project documentation

The project documentation (stored in the docs directory) is a git subtree of the project wiki. This allows for the documentation to be referenced and edited from within the main project.

Initial local setup

When cloning the main project repository for the first time, the wiki repository must be added as a remote.

git remote add wiki https://github.com//.wiki.git
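With the remote added, subtree commands keep the docs directory and the wiki in sync (a sketch; the docs prefix and master branch are taken from the description above, and --squash is optional):

git subtree pull --prefix docs wiki master --squash   # bring in wiki-side edits
git subtree push --prefix docs wiki master            # publish local doc edits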

@aschmidt75
aschmidt75 / specinfra_nsenter_prototype
Created September 6, 2014 20:17
Serverspec/Specinfra backend for nsenter
require 'specinfra/backend/exec'
require 'open3'

module SpecInfra
  module Backend
    class Nsenter < Exec
      def run_command(cmd, opt={})
        cmd = build_command(cmd)
        cmd = add_pre_command(cmd)
        ret = nsenter_exec!(cmd)
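        # Continuation sketch: the gist preview truncates here. The helper
        # below and the return shape are assumptions, not the original code.
        ret
      end

      private

      # Run the command inside the target's namespaces via nsenter(1).
      # NSENTER_PID naming the target PID is an assumed convention.
      def nsenter_exec!(cmd)
        pid = ENV['NSENTER_PID']
        out, err, status = Open3.capture3(
          "nsenter -t #{pid} -m -u -i -n -p -- /bin/sh -c '#{cmd}'")
        { :stdout => out, :stderr => err, :exit_status => status.exitstatus }
      end
    end
  end
end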
@gane5h
gane5h / datadog-nginx
Created October 22, 2014 04:06
Nginx log parsing with datadog
"""
Custom parser for nginx log suitable for use by Datadog 'dogstreams'.
To use, add to datadog.conf as follows:
dogstreams: [path to nginx log (e.g. "/var/log/nginx/access.log")]:[path to this python script (e.g. "/usr/share/datadog/agent/dogstream/nginx.py")]:[name of the parsing method in this file ("parse")]
so, an example line would be:
dogstreams: /var/log/nginx/access.log:/usr/share/datadog/agent/dogstream/nginx.py:parse
The nginx log format should be defined like this:
log_format time_log '$time_local "$request" S=$status $bytes_sent T=$request_time R=$http_x_forwarded_for';
When starting dd-agent, check collector.log to verify that the dogstream initialized successfully.
"""
@mrpatrick
mrpatrick / largest_tables.mysql
Created August 12, 2015 14:21
Find the 20 largest tables by size (GB) in MySQL 5.x - taken from: https://www.percona.com/blog/2008/02/04/finding-out-largest-tables-on-mysql-server/
SELECT CONCAT(table_schema, '.', table_name),
CONCAT(ROUND(table_rows / 1000000, 2), 'M') rows,
CONCAT(ROUND(data_length / ( 1024 * 1024 * 1024 ), 2), 'G') DATA,
CONCAT(ROUND(index_length / ( 1024 * 1024 * 1024 ), 2), 'G') idx,
CONCAT(ROUND(( data_length + index_length ) / ( 1024 * 1024 * 1024 ), 2), 'G') total_size,
ROUND(index_length / data_length, 2) idxfrac
FROM information_schema.TABLES
ORDER BY data_length + index_length DESC
LIMIT 20;