- 1x1 convolutions have been used extensively to reduce the number of parameters without affecting accuracy much (sketch below)
- Deep Mutual Learning: unlike bagging/boosting, the models are trained jointly and help each other fit well (sketch below)
- Skip connections: help solve the degradation problem without adding parameters (sketch below)
- Hard Sample Mining
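A minimal numpy sketch of the 1x1-conv parameter savings (my own illustration; the layer sizes are arbitrary):

```python
import numpy as np

C_in, C_mid, C_out, H, W = 256, 64, 256, 32, 32
x = np.random.randn(C_in, H, W)

# a 1x1 convolution is just a per-pixel linear map across channels
W_1x1 = np.random.randn(C_mid, C_in)
y = np.einsum('oc,chw->ohw', W_1x1, x)  # shape: (C_mid, H, W)

# weight count: a plain 3x3 conv vs. a 1x1 bottleneck followed by a 3x3
plain = C_in * C_out * 3 * 3                       # 589,824
bottleneck = C_in * C_mid + C_mid * C_out * 3 * 3  # 163,840, ~3.6x fewer
```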
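A toy sketch of the mutual-learning objective (my own illustration, assuming the usual formulation: each network minimizes its own cross-entropy plus a KL term pulling it toward its peer's predictions):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def kl(p, q):  # D_KL(p || q)
    return np.sum(p * np.log(p / q))

# logits of two peer networks for the same input (toy values)
z1, z2 = np.array([2.0, 0.5, -1.0]), np.array([1.5, 1.0, -0.5])
p1, p2 = softmax(z1), softmax(z2)

label = 0  # true class
loss1 = -np.log(p1[label]) + kl(p2, p1)  # network 1 also mimics network 2
loss2 = -np.log(p2[label]) + kl(p1, p2)  # network 2 also mimics network 1
```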
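And a toy residual block showing why the identity shortcut adds no parameters (again just a sketch):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0)

def residual_block(x, W1, W2):
    # y = x + F(x): the shortcut is the identity, so it contributes
    # zero extra parameters while giving gradients a direct path
    return x + W2 @ relu(W1 @ x)

d = 64
x = np.random.randn(d)
W1, W2 = np.random.randn(d, d) * 0.1, np.random.randn(d, d) * 0.1
y = residual_block(x, W1, W2)  # same shape as x
```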
I take no responsibility for any problems a user might run into by following this gist. This includes university problems.
The motivation for this is to document, in as dummy-oriented a way as possible, how to set up Minix OS and add a system call to it. This is a classic assignment in Operating Systems classes (and is pretty cool tbh)
ISO used: minix_R3.3.0-588a35b.iso
""" Trains an agent with (stochastic) Policy Gradients on Pong. Uses OpenAI Gym. """ | |
import numpy as np | |
import cPickle as pickle | |
import gym | |
# hyperparameters | |
H = 200 # number of hidden layer neurons | |
batch_size = 10 # every how many episodes to do a param update? | |
learning_rate = 1e-4 | |
gamma = 0.99 # discount factor for reward |
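For context on `gamma`: the full gist defines a helper along these lines (a paraphrased sketch) that discounts rewards back-to-front, resetting the running sum at nonzero rewards since those mark game boundaries in Pong:

```python
def discount_rewards(r, gamma=0.99):
    """Return discounted returns for a reward sequence r."""
    discounted = np.zeros_like(r, dtype=float)
    running = 0.0
    for t in reversed(range(len(r))):
        if r[t] != 0:
            running = 0.0  # a point was scored: reset (Pong-specific)
        running = running * gamma + r[t]
        discounted[t] = running
    return discounted
```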
- Install Atom on your computer (https://atom.io)
- Launch Atom, go to Preferences, then choose Install, search for "remote-atom", and install it
- On the distant machine:

```bash
cd ~
wget https://raw.githubusercontent.com/aurora/rmate/master/rmate
chmod +x rmate
```
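From there, the usual rmate workflow (a sketch, assuming remote-atom listens on the default rmate port 52698 and that you start its server via Atom's command palette, "Remote Atom: Start Server"; `user@distant-machine` and `some_file.txt` are placeholders):

```bash
# on your computer: tunnel the distant machine's port 52698 back to Atom
ssh -R 52698:localhost:52698 user@distant-machine

# on the distant machine: the file opens in your local Atom for editing
./rmate some_file.txt
```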