
dan mackinlay (danmackinlay)


Git pre-commit hook for large files

This hook warns you before you accidentally commit large files to git. It's very hard to reverse such an accidental commit, so it's better to prevent it in advance.

Since you will likely want this hook to run in all your git repos, a script is attached that installs it into every repo you create or clone in the future.

Of course, you can also download it directly into the hooks directory of an existing git repo.
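For a rough idea of what such a hook does, here is a minimal sketch in Python (this is not the attached script: the 10 MB threshold, the use of git diff --cached to list staged files, and the hard abort are all assumptions of mine):

#!/usr/bin/env python3
# Minimal pre-commit sketch: refuse to commit staged files above a size limit.
import os
import subprocess
import sys

LIMIT_BYTES = 10 * 1024 * 1024  # hypothetical threshold: 10 MB

def staged_files():
    # list files that are added, copied, or modified in the index
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line]

def main():
    too_big = [p for p in staged_files()
               if os.path.isfile(p) and os.path.getsize(p) > LIMIT_BYTES]
    if too_big:
        print("Refusing to commit files larger than %d bytes:" % LIMIT_BYTES)
        for p in too_big:
            print("  " + p)
        sys.exit(1)  # abort the commit; override with `git commit --no-verify`

if __name__ == "__main__":
    main()

Saved as .git/hooks/pre-commit and made executable, a hook like this runs before every commit in that repository.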

@danmackinlay
danmackinlay / .Rprofile
Created February 7, 2021 07:42
.Rprofile that stops R from using linuxbrew paths, which break compilation.
# R should not use the linuxbrew paths.
# this laborious workaround prevents that
.pth = Sys.getenv("PATH")
.pths = unlist(strsplit(.pth, ":"))
.nbrewpthi = !grepl("brew", .pths)  # TRUE for PATH entries that don't mention brew
Sys.setenv(PATH=paste(.pths[.nbrewpthi], collapse=":"))
print("Changed PATH")
print(.pth)
print("to")
print(Sys.getenv("PATH"))
@danmackinlay
danmackinlay / find-pis
Last active February 7, 2024 09:09 — forked from chr15m/find-pis
Find Raspberry Pi devices on your local networks.
#!/bin/sh
# get broadcast addresses for each network
net=`ifconfig | grep -o -E "Bcast:(.*?) " | cut -f2 -d":"`
# loop over networks running the scan
for n in $net;
do
# first find SSH machines silently to prime the arp table
nmap -T4 -n -p 22 --open --min-parallelism 100 "$n/24" | grep -e "scan report for" -e "ssh" > /dev/null

Keybase proof

I hereby claim:

  • I am danmackinlay on github.
  • I am danmackinlay (https://keybase.io/danmackinlay) on keybase.
  • I have a public key ASBW9iWVglwJj-mtImYHR5uYigzQc-aCyVHThu_sQN9umAo

To claim this, I am signing this object:

@danmackinlay
danmackinlay / README.md
Created October 18, 2016 23:52
Fork of Alfred Klomp's excellent shrinkpdf.sh

Usage

Download the script by clicking the filename at the top of the box and make it executable. If you run it with no arguments, it prints a usage summary. If you run it with a single argument – the name of the PDF to shrink – it writes the result to stdout:

./shrinkpdf.sh in.pdf > out.pdf

You can also provide a second filename for the output:
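Presumably something like this (the exact argument order is an assumption on my part; the script's own usage summary is authoritative):

./shrinkpdf.sh in.pdf out.pdf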

@danmackinlay
danmackinlay / gpd.py
Created October 5, 2016 06:19
Generalized Poisson Distribution for scipy.
import numpy as np
from scipy.stats import rv_discrete
from scipy.special import gamma, gammaln
class gpd_gen(rv_discrete):
"""
A Lagrangian Generalised Poisson-Poisson distribution.
``eta`` is the branching ratio,
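The class body is cut off in the preview; once it is fully defined, the usual scipy pattern is to instantiate the generator and call the generic rv_discrete methods. A usage sketch, assuming the distribution takes (mu, eta) shape parameters in that order:

gpd = gpd_gen(name="gpd", a=0)     # support starts at k = 0
k = np.arange(5)
print(gpd.pmf(k, 10.0, 0.3))       # probability mass at k = 0..4
print(gpd.rvs(10.0, 0.3, size=5))  # random variates via scipy's generic sampler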
@danmackinlay
danmackinlay / serialization.R
Created February 1, 2015 01:12
Pass sparse matrices between R and Python.
library(rhdf5)
library(Matrix)  # provides sparseMatrix(), used below
load.sparse.hdf = function (filename, path) {
idx = as.vector(h5read(filename, paste(path, "v_indices", sep="/")))
idxptr = as.vector(h5read(filename, paste(path, "v_indptr", sep="/")))
vals = as.vector(h5read(filename, paste(path, "v_data", sep="/")))
dims = as.vector(h5read(filename, paste(path, "v_datadims", sep="/")))
col.names = h5read(filename, paste(path, "v_col_names", sep="/"))
data = sparseMatrix(
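The Python half of the exchange is not shown in the preview. A minimal h5py sketch that writes a scipy sparse matrix under the dataset names the loader above reads (the group layout, the CSC orientation, and the function name save_sparse_hdf are my assumptions, not the gist's):

import h5py
import numpy as np
import scipy.sparse as sp

def save_sparse_hdf(filename, path, mat, col_names):
    # store a CSC matrix under the names load.sparse.hdf expects
    csc = sp.csc_matrix(mat)
    with h5py.File(filename, "a") as f:
        grp = f.require_group(path)
        grp.create_dataset("v_indices", data=csc.indices)  # 0-based row indices
        grp.create_dataset("v_indptr", data=csc.indptr)    # column pointers
        grp.create_dataset("v_data", data=csc.data)        # nonzero values
        grp.create_dataset("v_datadims", data=np.asarray(csc.shape))
        # fixed-length byte strings are straightforward for rhdf5 to read
        grp.create_dataset("v_col_names", data=np.array(col_names, dtype="S"))

Note that Matrix::sparseMatrix() assumes 1-based indices unless called with index1 = FALSE, so the (truncated) R call above presumably adjusts for the 0-based indices written here.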
@danmackinlay
danmackinlay / long_exposure.py
Last active August 29, 2015 14:12
Dirty hack to fake long exposure from a quicktime movie
import os
import tempfile
import subprocess
import skimage
from skimage import exposure
import skimage.io as io
import argparse
import shutil
import numpy as np
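The preview stops at the imports. The underlying idea is simple: dump the movie's frames to disk with ffmpeg and average them. A sketch of that core step (the ffmpeg invocation, the frame filename pattern, and the function name are assumptions of mine, not the gist's code):

import glob
import os
import shutil
import subprocess
import tempfile

import numpy as np
import skimage.io as io
from skimage import exposure, img_as_float, img_as_ubyte

def fake_long_exposure(movie_path, out_path):
    tmpdir = tempfile.mkdtemp()
    try:
        # dump every frame of the movie as a PNG
        subprocess.check_call([
            "ffmpeg", "-i", movie_path,
            os.path.join(tmpdir, "frame_%05d.png"),
        ])
        frames = sorted(glob.glob(os.path.join(tmpdir, "frame_*.png")))
        if not frames:
            raise RuntimeError("ffmpeg produced no frames")
        # averaging the frames approximates a long exposure
        acc = np.zeros_like(img_as_float(io.imread(frames[0])))
        for frame in frames:
            acc += img_as_float(io.imread(frame))
        mean = acc / len(frames)
        # stretch the contrast a little, as the skimage.exposure import suggests
        io.imsave(out_path, img_as_ubyte(exposure.rescale_intensity(mean)))
    finally:
        shutil.rmtree(tmpdir)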
#!/bin/sh
# Converts a mysqldump file into a Sqlite 3 compatible file. It also extracts the MySQL `KEY xxxxx` from the
# CREATE block and creates them in separate commands _after_ all the INSERTs.
# Awk is chosen because it's fast and portable. You can use gawk, original awk or even the lightning-fast mawk.
# The mysqldump file is traversed only once.
# Usage: $ ./mysql2sqlite mysqldump-opts db-name | sqlite3 database.sqlite
# Example: $ ./mysql2sqlite --no-data -u root -pMySecretPassWord myDbase | sqlite3 database.sqlite
import numpy as np
from math import pi, log
import pylab
from scipy import fft, ifft
from scipy.optimize import curve_fit
i = 10000  # number of samples
x = np.linspace(0, 3.5 * pi, i)
# noisy superposition of three sinusoids
y = (0.3 * np.sin(x) + np.sin(1.3 * x) + 0.9 * np.sin(4.2 * x) +
     0.06 * np.random.randn(i))