@kanekv
kanekv / gcrgc.sh
Created February 16, 2019 10:24
cleanup gcr images older than date
#!/bin/bash
# Copyright © 2017 Google Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
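The listing cuts off before the script body. A minimal sketch of what the title describes, assuming the gcloud CLI and Google Container Registry (the image path, date, and usage line are placeholders, not the gist's actual code):

# Hypothetical sketch: delete all images in a GCR repository uploaded before a given date.
# Assumed usage: ./gcrgc.sh gcr.io/my-project/my-image 2019-01-01
IMAGE="${1}"
DATE="${2}"
for digest in $(gcloud container images list-tags "${IMAGE}" --limit=999999 \
    --sort-by=TIMESTAMP --filter="timestamp.datetime < '${DATE}'" \
    --format='get(digest)'); do
  gcloud container images delete -q --force-delete-tags "${IMAGE}@${digest}"
done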
@kanekv
kanekv / .gdbinit
Created May 2, 2017 04:27
.gdbinit
# -*- ksh -*-
#
# If you use the GNU debugger gdb to debug the Python C runtime, you
# might find some of the following commands useful. Copy this to your
# ~/.gdbinit file and it'll get loaded into gdb automatically when you
# start it up. Then, at the gdb prompt you can do things like:
#
# (gdb) pyo apyobjectptr
# <module 'foobar' (built-in)>
# refcounts: 1
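For reference, the pyo helper that produces the session above is defined in CPython's Misc/gdbinit roughly as follows (gdb script; it relies on the interpreter's _PyObject_Dump being available in the binary being debugged):

define pyo
# side effect of calling _PyObject_Dump is to dump the object's
# info - assigning just prevents gdb from printing the
# NUL-terminated output
set $_unused_void = _PyObject_Dump($arg0)
end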
# bash/zsh completion support for core Git.
#
# Copyright (C) 2006,2007 Shawn O. Pearce <[email protected]>
# Conceptually based on gitcompletion (http://gitweb.hawaga.org.uk/).
# Distributed under the GNU General Public License, version 2.0.
#
# The contained completion routines provide support for completing:
#
# *) local and remote branch names
# *) local and remote tag names
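To use these routines, the script is typically sourced from a shell startup file; the save path below is an assumption:

# in ~/.bashrc or ~/.zshrc, assuming the script was saved as ~/.git-completion.bash
source ~/.git-completion.bash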
http://jeffreystedfast.blogspot.com/2013/08/why-decoding-rfc2047-encoded-headers-is.html
http://jeffreystedfast.blogspot.com/2013/09/time-for-rant-on-mime-parsers.html
https://www.youtube.com/watch?v=JENdgiAPD6c&authuser=1
@kanekv
kanekv / tc
Created July 1, 2015 22:57
tc qdisc
# add a fixed 1000 ms delay to every packet leaving eth0
tc qdisc add dev eth0 root netem delay 1000ms
# remove the netem qdisc from eth0, restoring normal behavior
tc qdisc del dev eth0 root
# drop 25% of outgoing packets at random (delete the previous root qdisc first)
tc qdisc add dev eth0 root netem loss 25%
# duplicate 50% of outgoing packets
tc qdisc add dev eth0 root netem duplicate 50%
# show the qdiscs currently installed
tc qdisc show
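netem can also vary the delay instead of keeping it fixed; a one-line sketch with a 100 ms base delay, ±20 ms jitter, and 25% correlation between successive packets (the values are arbitrary examples):

# after deleting any existing root qdisc on eth0
tc qdisc add dev eth0 root netem delay 100ms 20ms 25%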
@kanekv
kanekv / spark config
Last active June 20, 2018 05:17
spark config
My initial configuration is:
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf() // master and app name assumed set elsewhere, e.g. via spark-submit
conf.set("spark.cores.max", "16") // 16 map workers, that is 2 workers per machine (see my cluster config below)
conf.set("spark.akka.frameSize", "100000")
conf.set("spark.executor.memory", "120g")
conf.set("spark.reducer.maxMbInFlight", "100000")
conf.set("spark.storage.memoryFraction", "0.9")
conf.set("spark.shuffle.file.buffer.kb", "1000")
conf.set("spark.broadcast.factory", "org.apache.spark.broadcast.HttpBroadcastFactory")
conf.set("spark.driver.maxResultSize", "120g")
val sc = new SparkContext(conf)
Disk throughput is monitored separately with dstat: timestamps (-t), disk stats (-d) for sda, sdb, and the total (-D), sampled every 60 seconds:
dstat -tdD total,sda,sdb 60