Elias Kunnas (eliask) — GitHub gists
eliask / randomize_mac_periodically.sh
Created June 9, 2017 14:33
Randomize MAC address periodically with spoof-mac (brew install spoof-mac) -- Useful for time-limited wifi hotspots
#! /bin/sh
if test $# = 0; then
    echo "Usage: $0 <sleep interval like 15m>"
    exit 1
fi
while true; do
    echo "Randomizing MAC now"
    spoof-mac randomize en0
    sleep "$1"
done
eliask / factorio-recipe-parser.lua
Last active September 10, 2021 15:41 — forked from pfmoore/factorio-recipe-parser.lua
Parse the Factorio recipe files to create a CSV of recipes
--[[ Usage
Windows:
lua factorio-recipe-parser.lua "C:/Apps/Factorio/data/base/prototypes/recipe/"
Steam on macOS:
lua factorio-recipe-parser.lua ~/"Library/Application Support/Steam/steamapps/common/Factorio/factorio.app/Contents/"
NB: json.lua is from https://gist.github.com/tylerneylon/59f4bcf316be525b30ab
]]--
eliask / .bashrc
Created July 13, 2017 15:21
bash: tmp() instead of rm
# tmp: move files to /tmp instead of removing them outright.
# /tmp is cleaned up on reboot or so, so there is time to rectify mistakes.
#
# Usage: tmp [files or directories...]
# batch "do stuff to a file" files...
function batch() {
    p="$1"
    shift
    for x in "$@"; do
        $p "$x"
    done
}
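The excerpt defines the batch helper but tmp() itself isn't shown. A minimal sketch matching the description above (the exact flags and error handling are my own, not the gist's):

```shell
# tmp: move files/directories to /tmp instead of deleting them.
# /tmp is cleared on reboot, leaving a window to undo mistakes.
tmp() {
    if [ $# -eq 0 ]; then
        echo "Usage: tmp [files or directories...]" >&2
        return 1
    fi
    mv -- "$@" /tmp/
}
```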
eliask / strace_grammar.py
Last active February 3, 2018 17:07
Parsing strace (network related) output with Python and parsy
'''
Warning: Turns out parsy is very slow :(

Usage:

import strace_grammar as grammar

for line in strace_output:
    # not interested in this strace metadata:
    if line.startswith('strace:'):
        continue
eliask / fetch_edenred.sh
Last active May 28, 2022 14:10
Fetch data on places which accept Edenred.fi cards/vouchers
#! /usr/bin/env bash
set -eu
city=helsinki
for type in restaurant sport culture; do
    for page in {0..5000}; do # limit to 5000 queries; set -e stops the script on the first failed fetch.
        url="https://search.edenred.fi/affiliates?page=${page}&count=30&city=${city}&type=${type}"
        file=edenred_${type}_${city}_page_${page}.json
        curl -fL "$url" > "$file"
    done
done
eliask / gist:7bee0c42da9027979112601740fffbdd
Created March 25, 2018 12:42
My uBlock / adblock rules. Mostly annoying cookie notices
![Adblock Plus 1.1]
! NB: Also add https://gitlab.com/isaakm/Custom-Prebake/raw/master/filterlist.txt
! 04/03/2018, 13:08:43 https://sway.com/
sway.com###msccBanner
! 04/03/2018, 14:01:30 https://lumo.fi/
lumo.fi###cookie-disclaimer
! 11/03/2018, 12:04:06 https://pretix.eu/about/en/
eliask / create_feed.py
Last active May 29, 2018 19:44
Create a (podcast) RSS feed from given audio files (using Python, feedgen)
#! /usr/bin/env python
'''Create a simple RSS feed for a podcast
Usage: python create_feed.py *.m4a
Assumes .m4a audio files. Edit MIME type if needed.
Depends on feedgen: pip install feedgen
'''
import sys
from feedgen.feed import FeedGenerator
eliask / create_ngc_vm_on_gcp.sh
Created June 8, 2018 10:40
gcloud CLI: Creating a (pre-emptible) VM instance with the NVIDIA GPU Cloud (NGC) image
# This creates a pre-emptible VM on Google Cloud with a
# more or less NVIDIA approved configuration and drivers
# using the public NVIDIA GPU Cloud (NGC) image on GCP.
# The VM has nvidia-docker installed so anything like
# this will work out of the box:
#
# $ docker run --runtime=nvidia --rm paperspace/fastai:cuda9_pytorch0.3.1
#
# None of the public scripts and docs I found had a working configuration;
# even the "official" sample scripts don't work with their default configuration:
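The excerpt above shows only the commentary; the command itself isn't included. A sketch of the kind of invocation described — every concrete name below (instance name, zone, machine type, accelerator, image family and project, disk size) is an assumption to verify with `gcloud compute images list` and `gcloud compute accelerator-types list`, not copied from the gist:

```shell
# Sketch only: assumes the public NGC image is published in the
# nvidia-ngc-public project under the nvidia-gpu-cloud-image family.
gcloud compute instances create my-ngc-vm \
    --zone=us-west1-b \
    --machine-type=n1-standard-8 \
    --accelerator=type=nvidia-tesla-v100,count=1 \
    --maintenance-policy=TERMINATE \
    --preemptible \
    --image-family=nvidia-gpu-cloud-image \
    --image-project=nvidia-ngc-public \
    --boot-disk-size=128GB
```

GPU instances require `--maintenance-policy=TERMINATE` because VMs with accelerators cannot live-migrate; `--preemptible` is what makes the instance cheap but reclaimable.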
eliask / ndjson_to_csv.py
Created June 9, 2018 20:56
Convert newline-delimited JSON (ND-JSON) to CSV
#! /usr/bin/env python3
# Usage: ndjson_to_csv <files... or stdin> > output.csv
# NB: Assumes that each line is a simple JSON object with no nested arrays or objects
import csv
import json
import sys
import fileinput
from collections import namedtuple, OrderedDict
lines = fileinput.input()
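The listing cuts off after the imports. A self-contained sketch of the conversion (the function name and the key-union strategy are my own; the gist's actual body may differ):

```python
import csv
import io
import json


def ndjson_to_csv(lines):
    """Convert an iterable of ND-JSON lines to a CSV string.

    Assumes each line is a flat JSON object with no nested
    arrays or objects, as the gist's comment notes.
    """
    rows = [json.loads(line) for line in lines if line.strip()]
    # Union of keys across all rows, preserving first-seen order,
    # since later lines may introduce fields the first line lacks.
    fields = []
    for row in rows:
        for key in row:
            if key not in fields:
                fields.append(key)
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=fields, restval='')
    writer.writeheader()
    writer.writerows(rows)
    return out.getvalue()
```

Collecting the field names before writing is what makes the header stable even when objects have heterogeneous keys; `restval=''` fills in the columns a given row is missing.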
eliask / using_zstd.py
Created June 9, 2018 21:38
Using zstd in Python (decompression)
#
# pip install zstandard
#
# The zstandard bindings are a little off, compared to gzip, etc.
# So small tricks like this are needed to fully decompress a file in-memory:
import zstandard as zstd

dctx = zstd.ZstdDecompressor()
data = b''.join(dctx.read_to_iter(open('foo.zst', 'rb')))