```
$ python script.py ps
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE3 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:200] Initialize GrpcChannelCache for job ps -> {0 -> localhost:9000}
I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:200] Initialize GrpcChannelCache
```
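This is the startup output of a TensorFlow 1.x parameter-server ("ps") task: the GrpcChannelCache lines show the cluster addresses the task will talk to. A minimal sketch of a launcher script that would produce a log like this is below; the worker address, script layout, and port numbers are assumptions for illustration, not taken from the log.

```python
# Minimal sketch of a TF 1.x distributed launcher (run as: python script.py ps).
# The worker address below is an assumption; only the ps address appears in the log.
import sys
import tensorflow as tf

cluster = tf.train.ClusterSpec({
    "ps": ["localhost:9000"],       # parameter-server task, as in the log
    "worker": ["localhost:9001"],   # illustrative worker address
})

job_name = sys.argv[1]  # "ps" or "worker"
server = tf.train.Server(cluster, job_name=job_name, task_index=0)

if job_name == "ps":
    # The parameter server just hosts variables and blocks here.
    server.join()
```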
```python
# Now available here: https://github.com/y0ast/pytorch-snippets/tree/main/minimal_cifar
```
| Title | Author | Link |
|---|---|---|
| Efficient Algorithms for Non-convex Isotonic Regression through Submodular Optimization | Francis Bach | https://papers.nips.cc/paper/7286-efficient-algorithms-for-non-convex-isotonic-regression-through-submodular-optimization |
| Structure-Aware Convolutional Neural Networks | Jianlong Chang | https://papers.nips.cc/paper/7287-structure-aware-convolutional-neural-networks |
| Kalman Normalization: Normalizing Internal Representations Across Network Layers | Guangrun Wang | https://papers.nips.cc/paper/7288-kalman-normalization-normalizing-internal-representations-across-network-layers |
| HOGWILD!-Gibbs can be PanAccurate | Constantinos Daskalakis | https://papers.nips.cc/paper/7289-hogwild-gibbs-can-be-panaccurate |
| Text-Adaptive Generative Adversarial Networks: Manipulating Images with Natural Language | Seonghyeon Nam | https://papers.nips.cc/paper/7290-text-adaptive-generative-adversarial-networks-manipulating-images-with-natural-language |
| IntroVAE: Introspective Variational Autoencoders for Photographic Image Synthesis | Huaibo Huang | https |
```python
import datetime
import pytz
from tensorflow.contrib.session_bundle import exporter
from tensorflow.python.client import timeline
import glob
import json
import time
import math
import numpy as np
import os
```
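The `timeline` import above is the TF 1.x profiling helper that converts per-step run statistics into a Chrome trace. A minimal sketch of how it is typically used is below, assuming TF 1.x graph mode; the toy graph and output file name are illustrative and not part of the original snippet.

```python
import tensorflow as tf
from tensorflow.python.client import timeline

# Illustrative graph; the snippet's actual model is not shown in the gist.
x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
y = tf.matmul(x, x)

with tf.Session() as sess:
    run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
    run_metadata = tf.RunMetadata()
    sess.run(y, options=run_options, run_metadata=run_metadata)

    # Convert the collected step stats into a Chrome trace (open in chrome://tracing).
    tl = timeline.Timeline(run_metadata.step_stats)
    with open("timeline.json", "w") as f:
        f.write(tl.generate_chrome_trace_format())
```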
```python
import numpy as np
from keras.layers import GRU, initializations, K
from collections import OrderedDict

class GRULN(GRU):
    '''Gated Recurrent Unit with Layer Normalization.

    The current implementation only works with consume_less = 'gpu',
    which is already set.

    # Arguments
```
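Since `GRULN` subclasses `GRU`, it should accept the same constructor arguments and drop into a model wherever a `GRU` layer would go. A minimal usage sketch under that assumption, written against the Keras 1.x API the snippet targets (output size, input shape, and loss are illustrative):

```python
# Hypothetical usage of the GRULN layer defined above; shapes and
# hyperparameters are illustrative only.
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
# Drop-in replacement for GRU with layer normalization; consume_less='gpu'
# is set inside the class, per its docstring.
model.add(GRULN(64, input_shape=(20, 32)))  # 20 timesteps, 32 features
model.add(Dense(10, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy')
```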
| """ Trains an agent with (stochastic) Policy Gradients on Pong. Uses OpenAI Gym. """ | |
| import numpy as np | |
| import cPickle as pickle | |
| import gym | |
| # hyperparameters | |
| H = 200 # number of hidden layer neurons | |
| batch_size = 10 # every how many episodes to do a param update? | |
| learning_rate = 1e-4 | |
| gamma = 0.99 # discount factor for reward |
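The `gamma` hyperparameter is the discount factor applied to each episode's rewards before the policy-gradient update. The gist is truncated above, so the sketch below is written from the standard definition of discounted returns rather than copied from it; the function name and the Pong-specific reset are the usual convention for this script.

```python
import numpy as np

def discount_rewards(r, gamma=0.99):
    """Take a 1D float array of per-step rewards and return discounted returns."""
    discounted_r = np.zeros_like(r)
    running_add = 0.0
    for t in reversed(range(r.size)):
        if r[t] != 0:
            running_add = 0.0  # Pong-specific: a nonzero reward marks the end of a point
        running_add = running_add * gamma + r[t]
        discounted_r[t] = running_add
    return discounted_r

# Example: a 4-step episode ending in a win (+1).
print(discount_rewards(np.array([0.0, 0.0, 0.0, 1.0])))
# ≈ [0.9703, 0.9801, 0.99, 1.0]
```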
```python
import lasagne
from lasagne.nonlinearities import rectify, softmax
from lasagne.layers import InputLayer, DenseLayer, DropoutLayer, batch_norm, BatchNormLayer
from lasagne.layers import ElemwiseSumLayer, NonlinearityLayer, GlobalPoolLayer
from lasagne.layers.dnn import Conv2DDNNLayer as ConvLayer
from lasagne.init import HeNormal

def ResNet_FullPre_Wide(input_var=None, n=3, k=2):
    '''
    Adapted from https://github.com/Lasagne/Recipes/tree/master/papers/deep_residual_learning.
```
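The function body is truncated above, so here is a sketch of the building block a full pre-activation wide ResNet is composed of, following the Lasagne Recipes pattern the docstring cites (BN -> ReLU -> conv, twice, plus a shortcut). The filter count, stride, padding, and the assumption of an identity shortcut are illustrative, not taken from the gist.

```python
# Sketch of a full pre-activation residual block; hyperparameters are illustrative.
from lasagne.layers import BatchNormLayer, NonlinearityLayer, ElemwiseSumLayer
from lasagne.layers.dnn import Conv2DDNNLayer as ConvLayer
from lasagne.nonlinearities import rectify
from lasagne.init import HeNormal

def preactivation_block(incoming, num_filters):
    # First BN -> ReLU -> 3x3 conv (no bias or nonlinearity on the conv itself).
    pre1 = NonlinearityLayer(BatchNormLayer(incoming), rectify)
    conv1 = ConvLayer(pre1, num_filters=num_filters, filter_size=(3, 3),
                      stride=(1, 1), pad='same',
                      W=HeNormal(gain='relu'), nonlinearity=None, b=None)
    # Second BN -> ReLU -> 3x3 conv.
    pre2 = NonlinearityLayer(BatchNormLayer(conv1), rectify)
    conv2 = ConvLayer(pre2, num_filters=num_filters, filter_size=(3, 3),
                      stride=(1, 1), pad='same',
                      W=HeNormal(gain='relu'), nonlinearity=None, b=None)
    # Identity shortcut: assumes `incoming` already has num_filters channels.
    return ElemwiseSumLayer([conv2, incoming])
```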