Erfan Noury (erfannoury)

@kastnerkyle
kastnerkyle / threaded_image_reader.py
Last active December 15, 2019 12:28
Threaded image reader in Python
# Author: Kyle Kastner
# License: BSD 3-Clause
# For a reference on parallel processing in Python see tutorial by David Beazley
# http://www.slideshare.net/dabeaz/an-introduction-to-python-concurrency
# Loosely based on IBM example
# http://www.ibm.com/developerworks/aix/library/au-threadingpython/
try:
    import Queue
except ImportError:
    import queue as Queue
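The preview stops at the imports. As a hedged sketch only (worker count, sentinel handling, and the function names below are illustrative, not taken from the gist), the producer/consumer pattern such a threaded reader builds on might look like:

import threading
import numpy as np
from PIL import Image
try:
    import Queue
except ImportError:
    import queue as Queue

def read_worker(path_queue, image_queue):
    # Worker thread: pull a path, decode the image, push the array downstream.
    while True:
        path = path_queue.get()
        if path is None:           # sentinel: no more work for this worker
            path_queue.task_done()
            break
        image_queue.put((path, np.asarray(Image.open(path))))
        path_queue.task_done()

def threaded_read(paths, n_threads=4):
    path_queue = Queue.Queue()
    image_queue = Queue.Queue()
    for p in paths:
        path_queue.put(p)
    for _ in range(n_threads):
        path_queue.put(None)
        t = threading.Thread(target=read_worker, args=(path_queue, image_queue))
        t.daemon = True
        t.start()
    path_queue.join()
    return [image_queue.get() for _ in range(len(paths))]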
@kastnerkyle
kastnerkyle / mscoco_dataset.py
Last active July 27, 2016 15:19
MSCOCO threaded iterator/reader
# Authors: Kyle Kastner, Francesco Visin
# License: BSD 3-Clause
try:
    import Queue
except ImportError:
    import queue as Queue
import threading
import time
from matplotlib.path import Path
from PIL import Image
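The preview again cuts off at the imports. One plausible reason for the matplotlib.path.Path import, sketched here as an assumption rather than taken from the gist, is rasterizing a COCO polygon annotation into a binary mask:

import numpy as np
from matplotlib.path import Path

def polygon_to_mask(polygon_xy, height, width):
    # polygon_xy is a flat [x1, y1, x2, y2, ...] list, as in COCO "segmentation" entries.
    vertices = np.asarray(polygon_xy, dtype=np.float32).reshape(-1, 2)
    yy, xx = np.mgrid[:height, :width]
    pixel_centers = np.vstack([xx.ravel(), yy.ravel()]).T
    inside = Path(vertices).contains_points(pixel_centers)
    return inside.reshape(height, width)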
@twiecki
twiecki / bayesian_neural_network.ipynb
Last active August 30, 2025 08:45
Bayesian Neural Network in PyMC3
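The notebook itself cannot be previewed here. As a minimal sketch only (the toy data, layer size, and variable names are assumptions, not the notebook's contents), a small Bayesian neural network in PyMC3 fit with ADVI might look like:

import numpy as np
import pymc3 as pm

# Toy binary-classification data, illustrative only.
rng = np.random.RandomState(0)
X_train = rng.randn(200, 2)
y_train = (X_train[:, 0] * X_train[:, 1] > 0).astype(int)

n_hidden = 5
with pm.Model() as bnn:
    # Gaussian priors over both weight matrices.
    w_in = pm.Normal('w_in', 0., sd=1., shape=(2, n_hidden))
    w_out = pm.Normal('w_out', 0., sd=1., shape=(n_hidden,))
    hidden = pm.math.tanh(pm.math.dot(X_train, w_in))
    p = pm.math.sigmoid(pm.math.dot(hidden, w_out))
    out = pm.Bernoulli('out', p, observed=y_train)
    approx = pm.fit(n=30000, method='advi')   # variational inference
    trace = approx.sample(500)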
import lasagne
from lasagne.nonlinearities import rectify, softmax
from lasagne.layers import InputLayer, DenseLayer, DropoutLayer, batch_norm, BatchNormLayer
from lasagne.layers import ElemwiseSumLayer, NonlinearityLayer, GlobalPoolLayer
from lasagne.layers.dnn import Conv2DDNNLayer as ConvLayer
from lasagne.init import HeNormal
def ResNet_FullPre_Wide(input_var=None, n=3, k=2):
    '''
    Adapted from https://github.com/Lasagne/Recipes/tree/master/papers/deep_residual_learning.
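The preview ends inside the docstring. As an illustrative sketch (the block structure and argument choices are assumptions, not the gist's code), a single full pre-activation residual block using the imports above might be written as:

# Uses the lasagne imports shown in the preview above.
def residual_block(incoming, num_filters, stride=1):
    # BN -> ReLU -> 3x3 conv, twice, then add the (possibly projected) shortcut.
    pre = NonlinearityLayer(BatchNormLayer(incoming), rectify)
    conv1 = ConvLayer(pre, num_filters, 3, stride=stride, pad='same',
                      W=HeNormal(gain='relu'), nonlinearity=None)
    conv2 = ConvLayer(NonlinearityLayer(BatchNormLayer(conv1), rectify),
                      num_filters, 3, pad='same',
                      W=HeNormal(gain='relu'), nonlinearity=None)
    if stride != 1 or incoming.output_shape[1] != num_filters:
        # Project the shortcut when the spatial or channel shape changes.
        shortcut = ConvLayer(incoming, num_filters, 1, stride=stride, pad='same',
                             nonlinearity=None, b=None)
    else:
        shortcut = incoming
    return ElemwiseSumLayer([conv2, shortcut])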
@karpathy
karpathy / pg-pong.py
Created May 30, 2016 22:50
Training a Neural Network ATARI Pong agent with Policy Gradients from raw pixels
""" Trains an agent with (stochastic) Policy Gradients on Pong. Uses OpenAI Gym. """
import numpy as np
import cPickle as pickle
import gym
# hyperparameters
H = 200 # number of hidden layer neurons
batch_size = 10 # every how many episodes to do a param update?
learning_rate = 1e-4
gamma = 0.99 # discount factor for reward
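The preview stops at the hyperparameters. The full script computes discounted returns before the policy-gradient update; the helper below is a sketch of that step (not the verbatim gist code), reusing numpy and gamma from above:

def discount_rewards(r):
    """Discounted returns over an episode; Pong gives a nonzero reward when a point ends."""
    discounted_r = np.zeros_like(r, dtype=np.float64)
    running_add = 0.0
    for t in reversed(range(len(r))):
        if r[t] != 0:
            running_add = 0.0  # reset the running sum at a game (point) boundary
        running_add = running_add * gamma + r[t]
        discounted_r[t] = running_add
    return discounted_r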
@udibr
udibr / gruln.py
Last active November 7, 2020 02:34
Keras GRU with Layer Normalization
import numpy as np
from keras.layers import GRU, initializations, K
from collections import OrderedDict
class GRULN(GRU):
    '''Gated Recurrent Unit with Layer Normalization
    The current implementation only works with consume_less = 'gpu', which is already
    set.
    # Arguments
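The preview cuts off at the argument list, so layer normalization itself is not shown. A minimal sketch in the Keras 1.x backend style the gist imports (the function and parameter names here are illustrative) would be:

def layer_norm(x, gamma, beta, epsilon=1e-5):
    # Normalize across the feature axis, then rescale (gamma) and shift (beta).
    mean = K.mean(x, axis=-1, keepdims=True)
    std = K.std(x, axis=-1, keepdims=True)
    return gamma * (x - mean) / (std + epsilon) + beta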
import datetime
import pytz
from tensorflow.contrib.session_bundle import exporter
from tensorflow.python.client import timeline
import glob
import json
import time
import math
import numpy as np
import os
@braingineer
braingineer / fnc.ipynb
Created February 1, 2017 19:29 — forked from anonymous/fnc.ipynb
FNC1 - Data Handling Resources
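The forked notebook cannot be previewed. As a hedged illustration only (the file and column names follow the public FNC-1 data release, not this notebook), loading and joining the stance and body CSVs might look like:

import pandas as pd

bodies = pd.read_csv("train_bodies.csv")    # columns: Body ID, articleBody
stances = pd.read_csv("train_stances.csv")  # columns: Headline, Body ID, Stance

# Attach each headline/stance pair to its article body for feature extraction.
data = stances.merge(bodies, on="Body ID", how="left")
print(data[["Headline", "Stance"]].head())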