@stjordanis
stjordanis / tmux.md
Created August 2, 2018 01:54 — forked from andreyvit/tmux.md
tmux cheatsheet

tmux cheat sheet

(C-x means ctrl+x, M-x means alt+x)

Prefix key

The default prefix is C-b. If you (or your muscle memory) prefer C-a, you need to add this to ~/.tmux.conf:

# remap prefix to Control + a
set -g prefix C-a

@stjordanis
stjordanis / attrgetter_sort.ipynb
Created September 21, 2018 14:24 — forked from karlafej/attrgetter_sort.ipynb
Sort objects of the same class that don’t natively support comparison operations using operator.attrgetter
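The notebook itself is not rendered here, so below is a minimal self-contained sketch of the technique the description refers to; the Point class and its attributes are illustrative and not taken from the notebook.

from operator import attrgetter

class Point:
    """A class that defines no comparison methods."""
    def __init__(self, x, y):
        self.x, self.y = x, y
    def __repr__(self):
        return f"Point({self.x}, {self.y})"

points = [Point(3, 1), Point(1, 2), Point(2, 0)]
# Sort by the 'x' attribute without implementing __lt__ on the class
print(sorted(points, key=attrgetter("x")))
# Sort by 'y' first, breaking ties on 'x'
print(sorted(points, key=attrgetter("y", "x")))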
@stjordanis
stjordanis / add_intellij_launcer
Created September 23, 2018 02:10 — forked from rob-murray/add_intellij_launcer
Add IntelliJ launcher shortcut and icon for Ubuntu
// create file:
sudo vim /usr/share/applications/intellij.desktop
// add the following
[Desktop Entry]
Version=13.0
Type=Application
Terminal=false
Icon[en_US]=/home/rob/.intellij-13/bin/idea.png
Name[en_US]=IntelliJ
@stjordanis
stjordanis / thread_pool.py
Created September 23, 2018 22:02 — forked from heavywatal/thread_pool.py
Example of thread pool in Python
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""Example of thread pool
https://docs.python.org/3/library/concurrent.futures.html
https://docs.python.org/3/library/multiprocessing.html
"""
import concurrent.futures as confu
import multiprocessing.pool as mpp
import time
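The preview ends at the imports; the snippet below is a minimal usage sketch of concurrent.futures.ThreadPoolExecutor built on those imports (the task function and worker count are illustrative, not taken from the gist).

def slow_square(x):
    """Simulate an I/O-bound task by sleeping briefly."""
    time.sleep(0.1)
    return x * x

if __name__ == "__main__":
    # Map the task over a pool of four worker threads
    with confu.ThreadPoolExecutor(max_workers=4) as executor:
        print(list(executor.map(slow_square, range(8))))  # [0, 1, 4, 9, 16, 25, 36, 49]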
@stjordanis
stjordanis / pit.R
Created October 2, 2018 19:54 — forked from ericnovik/pit.R
library(ggplot2)
library(cowplot)
n <- 1e4
X <- rlogis(n)
Y <- plogis(X)
plot_dens <- function(data, ...) {
  qplot(
    data,
    geom = "histogram",
    alpha = I(1 / 2),
    ...
  )
}
@stjordanis
stjordanis / 0-startup-overview.md
Created November 11, 2018 22:53 — forked from dideler/0-startup-overview.md
Startup Engineering notes
@stjordanis
stjordanis / parallel.py
Created November 12, 2018 23:08 — forked from thomwolf/parallel.py
Data Parallelism in PyTorch for modules and losses
##+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
## Created by: Hang Zhang, Rutgers University, Email: [email protected]
## Modified by Thomas Wolf, HuggingFace Inc., Email: [email protected]
## Copyright (c) 2017-2018
##
## This source code is licensed under the MIT-style license found in the
## LICENSE file in the root directory of this source tree
##+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
"""Encoding Data Parallel"""
'use strict';
// original: https://gist.github.com/indutny/8d0f5376ee643962a9f0
const BN = require('bn.js');
const elliptic = require('elliptic');
const bcoin = require('bcoin');
const ecdsa = new elliptic.ec('secp256k1');
import numpy as np

# Load in embeddings
glove_vectors = '/home/ubuntu/.keras/datasets/glove.6B.100d.txt'
glove = np.loadtxt(glove_vectors, dtype='str', comments=None)
# Extract the vectors and words
vectors = glove[:, 1:].astype('float')
words = glove[:, 0]
# Create lookup of words to vectors
word_lookup = {word: vector for word, vector in zip(words, vectors)}
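A quick sanity check on the lookup built above; it assumes the GloVe file loaded correctly and that a common token such as 'the' is in the vocabulary.

vec = word_lookup['the']  # 100-dimensional embedding (token assumed present)
print(vec.shape)          # (100,)
print(vec[:5])            # first few components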