Built with blockbuilder.org
forked from fogcity89's block: fresh block
import numpy as np
import scipy
import scipy.ndimage
# scipy.ndimage.filters / .interpolation are deprecated namespaces;
# on modern SciPy import these directly from scipy.ndimage.
from scipy.ndimage import gaussian_filter, map_coordinates
import collections
from PIL import Image
import numbers
__author__ = "Wei OUYANG"
# coding: utf-8
# Imports
import os
try:
    import cPickle  # Python 2
except ImportError:
    import pickle as cPickle  # Python 3: pickle replaces cPickle
import numpy as np
import theano
import theano.tensor as T
After watching Bryan Cantrill's presentation on [Running Aground: Debugging Docker in Production][aground] I got all excited (and strangely nostalgic) about the possibility of core-dumping server-side Python apps whenever they go awry. This would theoretically let me inspect the full state of the program at the point it exploded, rather than relying solely on the information in a stack trace.
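The workflow that gets the author excited can be sketched on a Linux box. This is an illustrative minimal sequence, not the post's own script; where the core file lands depends on your system's `core_pattern` setting:

```shell
ulimit -c unlimited                         # allow this shell to write core files
python3 -c 'import os; os.abort()' || true  # abort() raises SIGABRT -> core dump
# Where the core lands depends on /proc/sys/kernel/core_pattern; once you
# have it, load it alongside the interpreter binary, e.g.:
#   gdb python3 core
```

From inside gdb you can then walk the interpreter's C frames (and, with python-gdb extensions installed, the Python frames) at the exact moment of death.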
# "Colorizing B/W Movies with Neural Nets",
# Network/Code created by Ryan Dahl, hacked by samim.io to work with movies
# BACKGROUND: http://tinyclouds.org/colorize/
# DEMO: https://www.youtube.com/watch?v=_MJU8VK2PI4
# USAGE:
# 1. Download the TensorFlow model from: http://tinyclouds.org/colorize/
# 2. Use FFMPEG or similar to extract frames from the video.
# 3. Make sure your images are 224x224 pixels. You can use ImageMagick's "mogrify"; here are some useful commands:
# mogrify -resize 224x224 *.jpg
# mogrify -gravity center -background black -extent 224x224 *.jpg
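Step 3 can also be done in Python with PIL, which the snippets above already import. A minimal sketch; `resize_frames` and its directory arguments are names I've made up for illustration:

```python
import os
from PIL import Image

def resize_frames(src_dir, dst_dir, size=(224, 224)):
    """Resize every .jpg frame in src_dir to `size`, writing to dst_dir."""
    os.makedirs(dst_dir, exist_ok=True)
    for name in sorted(os.listdir(src_dir)):
        if name.lower().endswith(".jpg"):
            with Image.open(os.path.join(src_dir, name)) as im:
                im.resize(size).save(os.path.join(dst_dir, name))
```

Note that `resize` stretches to exactly 224x224; the second mogrify command above instead pads with a black background, which preserves aspect ratio.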
#!/bin/bash
# Run this on this AMI on AWS:
# https://console.aws.amazon.com/ec2/v2/home?region=us-east-1#LaunchInstanceWizard:ami=ami-b36981d8
# You should end up with a fully working GPU-enabled TensorFlow installation.
cd ~
# grab CUDA 7.0
# Note: this is not a bash script (some of the steps require a reboot).
# I named it .sh just so GitHub does correct syntax highlighting.
#
# This is also available as an AMI in us-east-1 (Virginia): ami-cf5028a5
#
# The CUDA part is mostly based on this excellent blog post:
# http://tleyden.github.io/blog/2014/10/25/cuda-6-dot-5-on-aws-gpu-instance-running-ubuntu-14-dot-04/
# Install various packages
sudo apt-get update
#include <math.h>
#include <stdio.h>
#include <unistd.h>
// Each character encodes an angle of a plane we are checking
const char plane_angles[] = "O:85!fI,wfO8!yZfO8!f*hXK3&fO;:O;#hP;\"i[";
// and these encode an offset from the origin s.t. (x, y) dot (cos(a), sin(a)) < offset
const char plane_offsets[] = "<[\\]O=IKNAL;KNRbF8EbGEROQ@BSXXtG!#t3!^";
// this table encodes the offsets within the above tables of each polygon |
// In languages like Python and Haskell, we can write list comprehension syntax like | |
// super simply to generate complex lists. Below, for example, we find all numbers that are | |
// the product of two sides of a triangle. | |
// [a * b | a <- [1..10], b <- [1..10], c <- [1..10], a * a + b * b == c * c] | |
// Why is this called nondeterministic computation? Because we essentially try ALL possible combinations | |
// of these values (a,b) and--you can imagine--run them all simultaneously and get the result that matches the predicate.
// Now, obviously, this doesn't all happen at the same time, but that's the idea behind the nondeterminism. | |
// Let's examine how we can get a similar result in Swift! We'll start super simple and work our way up.
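One direct Swift translation of the comprehension above: nested `flatMap` calls play the role of the generators (`a <- [1..10]`, and so on), and a `compactMap` that returns `nil` for failing tuples plays the role of the predicate. This is a sketch of the idea, not the only way to phrase it:

```swift
// All products a*b where (a, b, c) in 1...10 form a Pythagorean triple.
let result = (1...10).flatMap { a in
    (1...10).flatMap { b in
        (1...10).compactMap { c in
            a * a + b * b == c * c ? a * b : nil
        }
    }
}
print(result) // [12, 12, 48, 48]
```

The triples found are (3,4,5), (4,3,5), (6,8,10), (8,6,10), whose leg products are 12, 12, 48, 48 — matching what the Haskell comprehension would produce.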