Erfan Noury erfannoury

# Now available here: https://github.com/y0ast/pytorch-snippets/tree/main/minimal_cifar
Efficient Algorithms for Non-convex Isotonic Regression through Submodular Optimization. Francis Bach. https://papers.nips.cc/paper/7286-efficient-algorithms-for-non-convex-isotonic-regression-through-submodular-optimization
Structure-Aware Convolutional Neural Networks. Jianlong Chang. https://papers.nips.cc/paper/7287-structure-aware-convolutional-neural-networks
Kalman Normalization: Normalizing Internal Representations Across Network Layers. Guangrun Wang. https://papers.nips.cc/paper/7288-kalman-normalization-normalizing-internal-representations-across-network-layers
HOGWILD!-Gibbs can be PanAccurate. Constantinos Daskalakis. https://papers.nips.cc/paper/7289-hogwild-gibbs-can-be-panaccurate
Text-Adaptive Generative Adversarial Networks: Manipulating Images with Natural Language. Seonghyeon Nam. https://papers.nips.cc/paper/7290-text-adaptive-generative-adversarial-networks-manipulating-images-with-natural-language
IntroVAE: Introspective Variational Autoencoders for Photographic Image Synthesis. Huaibo Huang. https
$ python script.py ps
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE3 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:200] Initialize GrpcChannelCache for job ps -> {0 -> localhost:9000}
I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:200] Initialize GrpcChannelCache
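A hedged sketch of what a script.py producing this log might look like, using the legacy TF 1.x distributed API; only the ps job at localhost:9000 appears in the log, so the worker address and overall cluster layout below are assumptions:

import sys
import tensorflow as tf

cluster = tf.train.ClusterSpec({
    "ps": ["localhost:9000"],      # matches the GrpcChannelCache log line above
    "worker": ["localhost:9001"],  # assumed second job, not shown in the log
})

job_name = sys.argv[1]  # "ps" in the invocation above
server = tf.train.Server(cluster, job_name=job_name, task_index=0)

if job_name == "ps":
    server.join()  # parameter servers block here and serve variables to workers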
@braingineer
braingineer / fnc.ipynb
Created February 1, 2017 19:29 — forked from anonymous/fnc.ipynb
FNC1 - Data Handling Resources
import datetime
import pytz
from tensorflow.contrib.session_bundle import exporter  # legacy TF Serving model export
from tensorflow.python.client import timeline  # Chrome-trace profiling of session runs
import glob
import json
import time
import math
import numpy as np
import os
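The preview ends at the imports. As a hedged illustration of the standard TF 1.x pattern the timeline import supports (the toy matmul graph below is an assumption, not the gist's code), a session step can be traced and dumped as a Chrome trace:

import tensorflow as tf
from tensorflow.python.client import timeline

# Minimal graph so the snippet is self-contained.
a = tf.random_normal([1000, 1000])
b = tf.matmul(a, a)

run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
run_metadata = tf.RunMetadata()
with tf.Session() as sess:
    sess.run(b, options=run_options, run_metadata=run_metadata)

# Write a JSON trace that chrome://tracing can open.
tl = timeline.Timeline(run_metadata.step_stats)
with open('timeline.json', 'w') as f:
    f.write(tl.generate_chrome_trace_format())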
@udibr
udibr / gruln.py
Last active November 7, 2020 02:34
Keras GRU with Layer Normalization
import numpy as np
from keras.layers import GRU, initializations, K
from collections import OrderedDict
class GRULN(GRU):
'''Gated Recurrent Unit with Layer Normalization
Current implementation only works with consume_less = 'gpu', which is already set.
# Arguments
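The preview cuts off inside the docstring. For orientation, here is a minimal sketch of the layer-normalization transform such a class applies at each step, written against the keras backend K imported above (g, b, and eps are the usual per-feature gain, bias, and numerical stabilizer; illustrative, not the gist's exact code):

def layer_norm(x, g, b, eps=1e-5):
    # normalize across the feature axis, then rescale and shift
    m = K.mean(x, axis=-1, keepdims=True)
    s = K.std(x, axis=-1, keepdims=True)
    return g * (x - m) / (s + eps) + b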
@karpathy
karpathy / pg-pong.py
Created May 30, 2016 22:50
Training a Neural Network ATARI Pong agent with Policy Gradients from raw pixels
""" Trains an agent with (stochastic) Policy Gradients on Pong. Uses OpenAI Gym. """
import numpy as np
import cPickle as pickle  # Python 2; use `import pickle` on Python 3
import gym
# hyperparameters
H = 200 # number of hidden layer neurons
batch_size = 10 # every how many episodes to do a param update?
learning_rate = 1e-4
gamma = 0.99 # discount factor for reward
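The excerpt stops at the hyperparameters. As a hedged sketch of how gamma above is typically used in this setting, discounted returns are accumulated backward over an episode; the reset at nonzero rewards is Pong-specific, since Pong only scores at game boundaries (illustrative, not necessarily the gist's exact code):

def discount_rewards(r, gamma=0.99):
    # walk backward so each reward is added to the discounted future sum
    discounted = np.zeros_like(r, dtype=np.float64)
    running = 0.0
    for t in reversed(range(len(r))):
        if r[t] != 0:
            running = 0.0  # Pong gives +/-1 only at game boundaries; reset there
        running = running * gamma + r[t]
        discounted[t] = running
    return discounted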
import lasagne
from lasagne.nonlinearities import rectify, softmax
from lasagne.layers import InputLayer, DenseLayer, DropoutLayer, batch_norm, BatchNormLayer
from lasagne.layers import ElemwiseSumLayer, NonlinearityLayer, GlobalPoolLayer
from lasagne.layers.dnn import Conv2DDNNLayer as ConvLayer
from lasagne.init import HeNormal
def ResNet_FullPre_Wide(input_var=None, n=3, k=2):
'''
Adapted from https://github.com/Lasagne/Recipes/tree/master/papers/deep_residual_learning.
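The preview ends in the docstring. A hedged sketch of the full pre-activation residual block such a network stacks, built only from the layers imported above (filter sizes and shortcut handling are assumptions, not the gist's exact code):

def residual_block(l, num_filters, stride=1):
    # full pre-activation: BN -> ReLU precedes each convolution
    bn1 = NonlinearityLayer(BatchNormLayer(l), rectify)
    conv1 = ConvLayer(bn1, num_filters, 3, stride=stride, pad='same',
                      W=HeNormal(gain='relu'), nonlinearity=None)
    bn2 = NonlinearityLayer(BatchNormLayer(conv1), rectify)
    conv2 = ConvLayer(bn2, num_filters, 3, stride=1, pad='same',
                      W=HeNormal(gain='relu'), nonlinearity=None)
    shortcut = l
    if stride != 1 or l.output_shape[1] != num_filters:
        # 1x1 projection when spatial size or channel count changes
        shortcut = ConvLayer(l, num_filters, 1, stride=stride, pad=0,
                             W=HeNormal(), nonlinearity=None)
    return ElemwiseSumLayer([conv2, shortcut])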