Vikas Raunak (vyraun)

@vyraun
vyraun / beautiful_idiomatic_python.md
Created December 6, 2016 12:30 — forked from JeffPaine/beautiful_idiomatic_python.md
Transforming Code into Beautiful, Idiomatic Python: notes from Raymond Hettinger's talk at PyCon US 2013. The code examples and direct quotes are all from Raymond's talk. I've reproduced them here for my own edification and in the hope that others will find them as handy as I have!

Transforming Code into Beautiful, Idiomatic Python

Notes from Raymond Hettinger's talk at PyCon US 2013 (video, slides).

The code examples and direct quotes are all from Raymond's talk. I've reproduced them here for my own edification and in the hope that others will find them as handy as I have!

Looping over a range of numbers

for i in [0, 1, 2, 3, 4, 5]:
    print(i**2)
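
The preview stops before the idiomatic rewrite; as the talk goes on to show, the better form loops over range (written here in Python 3 syntax, where range is already lazy):

# idiomatic: loop over a range instead of a hand-written list
for i in range(6):
    print(i**2)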
@vyraun
vyraun / readme.md
Created December 5, 2016 20:05 — forked from baraldilorenzo/readme.md
Deep Scene

A Deep Siamese Network for Scene Detection

This is a model from the paper:

A Deep Siamese Network for Scene Detection in Broadcast Videos
Lorenzo Baraldi, Costantino Grana, Rita Cucchiara
Proceedings of the 23rd ACM International Conference on Multimedia, 2015

Please cite the paper if you use the models.

@vyraun
vyraun / readme.md
Created December 5, 2016 20:04 — forked from baraldilorenzo/readme.md
VGG-16 pre-trained model for Keras

## VGG16 model for Keras

This is the Keras model of the 16-layer network used by the VGG team in the ILSVRC-2014 competition.

It has been obtained by directly converting the Caffe model provided by the authors.

Details about the network architecture can be found in the following arXiv paper:

Very Deep Convolutional Networks for Large-Scale Image Recognition

K. Simonyan, A. Zisserman
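
Not part of the gist: a minimal sketch of loading a pre-trained VGG16 with the stock keras.applications API (assumes a recent TensorFlow/Keras install), as an alternative to loading the gist's converted weights into a hand-built model:

# illustrative only, not the gist's code: ImageNet-pretrained VGG16 via keras.applications
from tensorflow.keras.applications import VGG16

model = VGG16(weights="imagenet")  # downloads the standard pretrained weights
model.summary()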

@vyraun
vyraun / process_word2vec.lua
Created December 5, 2016 08:52 — forked from ili3p/process_word2vec.lua
Reading 5.3GB text file with LuaJIT
local words = torch.load(opt.words) -- it's a tds.Hash
local word2vec = torch.FloatTensor(opt.vocabsz, opt.dim)
local buffsz = 2^13 -- == 8k
local f = io.input(opt.input)
local done = 0
local unk
-- read huge word2vec file with 2,196,017 lines
while true do
  local lines, leftover = f:read(buffsz, '*line')
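
The Lua preview stops mid-loop. For comparison, a hedged Python sketch of the same chunked-read idea — read the file in fixed-size buffers and carry the trailing partial line over to the next chunk (file path and buffer size are illustrative, not from the gist):

# hedged Python analogue of the buffered read above (not part of the gist)
BUFFSZ = 2 ** 13  # 8 KB per read

def iter_lines(path, buffsz=BUFFSZ):
    leftover = ""
    with open(path, "r", encoding="utf-8", errors="replace") as f:
        while True:
            chunk = f.read(buffsz)
            if not chunk:
                if leftover:
                    yield leftover
                return
            chunk = leftover + chunk
            lines = chunk.split("\n")
            leftover = lines.pop()  # last piece may be a partial line
            for line in lines:
                yield line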
@vyraun
vyraun / _Instructions.md
Created December 1, 2016 13:51 — forked from genekogan/_Instructions.md
instructions for generating a style transfer animation from a video

Instructions for making a Neural-Style movie

The following instructions are for creating your own animations using the style transfer technique described by Gatys, Ecker, and Bethge, and implemented by Justin Johnson. For an example of such an animation, see this video of Alice in Wonderland re-styled by 17 paintings.

Setting up the environment

The easiest way to set up the environment is simply to load Samim's pre-built Terminal.com snap or to use another cloud service like Amazon EC2. Unfortunately, g2.2xlarge GPU instances cost $0.99 per hour, and depending on the parameters selected it may take 10-15 minutes to produce a 512px-wide image, so generating 1 sec of video at 12 fps can cost $2-3.

If you do load the
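
A quick back-of-envelope check of that cost estimate, using the per-frame time and hourly rate quoted above:

# rough check of the $2-3 per second of video estimate above
fps = 12
rate_per_hour = 0.99                 # g2.2xlarge on-demand price quoted above
for minutes_per_frame in (10, 15):   # quoted range for one 512px-wide image
    hours = fps * minutes_per_frame / 60.0
    print(f"{minutes_per_frame} min/frame -> {hours:.1f} GPU-hours -> "
          f"${hours * rate_per_hour:.2f} per second of video")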

@vyraun
vyraun / notebook-xkcd-style-plot
Created November 25, 2016 07:15 — forked from juhasch/notebook-xkcd-style-plot
IPython notebook containing an xkcd-style plot
{
 "metadata": {
  "name": "xkcd-style-plot"
 },
 "nbformat": 3,
 "nbformat_minor": 0,
 "worksheets": [
  {
   "cells": [
    {
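
The notebook preview above shows only metadata. For reference, a minimal standalone sketch of an xkcd-style plot with matplotlib's built-in plt.xkcd() (illustrative only; the notebook itself may hand-roll the styling):

# minimal xkcd-style plot using matplotlib's plt.xkcd() context manager
import numpy as np
import matplotlib.pyplot as plt

with plt.xkcd():
    x = np.linspace(0, 10, 200)
    plt.plot(x, np.sin(x), label="sin(x)")
    plt.xlabel("x")
    plt.ylabel("y")
    plt.legend()
    plt.savefig("xkcd_sin.png")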
@vyraun
vyraun / min-char-rnn.py
Created October 24, 2016 09:27 — forked from karpathy/min-char-rnn.py
Minimal character-level language model with a Vanilla Recurrent Neural Network, in Python/numpy
"""
Minimal character-level Vanilla RNN model. Written by Andrej Karpathy (@karpathy)
BSD License
"""
import numpy as np
# data I/O
data = open('input.txt', 'r').read() # should be simple plain text file
chars = list(set(data))
data_size, vocab_size = len(data), len(chars)
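
The preview cuts off right after building the vocabulary; a hedged sketch of the usual next step (not necessarily the gist's exact lines) is to map characters to integer ids and back:

# hedged sketch of the typical follow-up to the vocabulary built above
char_to_ix = {ch: i for i, ch in enumerate(chars)}
ix_to_char = {i: ch for i, ch in enumerate(chars)}
print(f"data has {data_size} characters, {vocab_size} unique.")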
@vyraun
vyraun / dynamic_tsp.py
Created October 12, 2016 16:12 — forked from mlalevic/dynamic_tsp.py
Simple Python implementation of dynamic programming algorithm for the Traveling salesman problem
import itertools  # needed for itertools.combinations below

def solve_tsp_dynamic(points):
    # calc all pairwise lengths
    all_distances = [[length(x, y) for y in points] for x in points]
    # initial value - just distance from 0 to every other point + keep track of the edges
    A = {(frozenset([0, idx + 1]), idx + 1): (dist, [0, idx + 1])
         for idx, dist in enumerate(all_distances[0][1:])}
    cnt = len(points)
    for m in range(2, cnt):
        B = {}
        for S in [frozenset(C) | {0} for C in itertools.combinations(range(1, cnt), m)]:
            for j in S - {0}:
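
The preview also stops before the length helper it calls; a hedged guess is a plain Euclidean distance over (x, y) points (the full gist may define it differently):

import math

def length(p, q):
    # hypothetical Euclidean distance between 2-D points; an assumption, not the gist's code
    return math.hypot(p[0] - q[0], p[1] - q[1])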
@vyraun
vyraun / tf_lstm.py
Created October 4, 2016 15:04 — forked from siemanko/tf_lstm.py
Simple implementation of LSTM in Tensorflow in 50 lines (+ 130 lines of data generation and comments)
"""Short and sweet LSTM implementation in Tensorflow.
Motivation:
When Tensorflow was released, adding RNNs was a bit of a hack - it required
building separate graphs for every number of timesteps and was a bit obscure
to use. Since then TF devs added things like `dynamic_rnn`, `scan` and `map_fn`.
Currently the APIs are decent, but none of the tutorials that I am aware of
make the best use of the new APIs.
Advantages of this implementation:
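
The preview cuts off here. As a pointer to the APIs the docstring mentions, a minimal TF1-era sketch of running an LSTM over a batch with tf.nn.dynamic_rnn (illustrative only, not the gist's implementation; the sizes are hypothetical):

# minimal TF1-style dynamic_rnn usage (illustrative, not the gist's code)
import tensorflow as tf

NUM_UNITS, INPUT_SIZE = 64, 10  # hypothetical sizes
inputs = tf.placeholder(tf.float32, [None, None, INPUT_SIZE])  # batch x time x features
cell = tf.nn.rnn_cell.BasicLSTMCell(NUM_UNITS)
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)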
@vyraun
vyraun / ec2_caffe
Created October 1, 2016 10:56 — forked from baraldilorenzo/ec2_caffe
Install Caffe on Amazon EC2 g2.2xlarge instance
#! /bin/bash
# Upgrade
sudo aptitude update
sudo aptitude full-upgrade -y
# Install CUDA
wget http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1404/x86_64/cuda-repo-ubuntu1404_6.5-14_amd64.deb
sudo dpkg -i cuda-repo-ubuntu1404_6.5-14_amd64.deb
sudo aptitude update