I used the cross-entropy method (an evolutionary algorithm / derivative-free optimization method) to optimize small two-layer neural networks.
Code used to obtain these results can be found at https://github.com/joschu/modular_rl, commit 3324639f82a81288e9d21ddcb6c2a37957cdd361. The exact command-line invocations used for all the environments are listed in the text file below; note that exactly the same parameters were used for all tasks. The important parameters are:
- `hid_sizes=10,5`: hidden layer sizes of the MLP (see the sketch after this list)
- `extra_std=0.01`: noise added to the variance, see [1]
- `batch_size=200`: number of episodes per batch
- `seed=0`: random seed
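To make concrete what it means to optimize a small two-layer network with a derivative-free method, here is a minimal sketch of how an MLP with hidden sizes (10, 5) can be treated as a single flat parameter vector. This is an illustration only, not the code in modular_rl; the tanh nonlinearity and the input/output dimensions are assumptions.

```python
import numpy as np

def layer_shapes(n_in, hid_sizes, n_out):
    """(fan_in, fan_out) pairs for each weight matrix of the MLP."""
    dims = [n_in] + list(hid_sizes) + [n_out]
    return list(zip(dims[:-1], dims[1:]))

def mlp_forward(theta, x, n_in, hid_sizes, n_out):
    """Evaluate the MLP whose weights and biases are packed into the flat vector theta."""
    shapes = layer_shapes(n_in, hid_sizes, n_out)
    i, h = 0, x
    for k, (fan_in, fan_out) in enumerate(shapes):
        W = theta[i:i + fan_in * fan_out].reshape(fan_in, fan_out)
        i += fan_in * fan_out
        b = theta[i:i + fan_out]
        i += fan_out
        h = h @ W + b
        if k < len(shapes) - 1:
            h = np.tanh(h)  # assumed hidden nonlinearity
    return h

# With hid_sizes=10,5 and, say, a 4-dim observation and 2 outputs,
# the whole policy is just a vector of this many numbers:
n_params = sum(fi * fo + fo for fi, fo in layer_shapes(4, (10, 5), 2))  # = 117
```

The cross-entropy method then searches directly over this flat vector, so no gradients of the policy are needed.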
The program is single-threaded and deterministic. I used float32 precision, with `THEANO_FLAGS=floatX=float32`.
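For reference, here is a minimal sketch of the noisy cross-entropy method of [1], with parameter names chosen to echo the flags above. It is an illustration under assumptions, not the modular_rl implementation: the elite fraction, the number of iterations, and the exact way the extra noise enters the variance are guesses.

```python
import numpy as np

def noisy_cem(score_fn, theta_mean, batch_size=200, n_iters=100,
              elite_frac=0.2, extra_std=0.01):
    """Noisy cross-entropy method (sketch).

    Maintains a diagonal Gaussian over flat parameter vectors, samples a
    batch of candidates, scores each one (e.g. by the return of one episode),
    and refits the Gaussian to the top-scoring fraction.
    """
    theta_std = np.ones_like(theta_mean)
    n_elite = int(round(batch_size * elite_frac))
    for _ in range(n_iters):
        # "Noisy" trick from [1]: add extra noise to the variance so the
        # search distribution does not collapse prematurely.
        sample_std = np.sqrt(theta_std ** 2 + extra_std ** 2)
        thetas = theta_mean + sample_std * np.random.randn(batch_size, theta_mean.size)
        scores = np.array([score_fn(th) for th in thetas])
        # Keep the elites and refit the mean and std to them.
        elite = thetas[np.argsort(scores)[-n_elite:]]
        theta_mean, theta_std = elite.mean(axis=0), elite.std(axis=0)
    return theta_mean
```

With `batch_size=200`, each iteration costs 200 episodes of environment interaction.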
The following instructions will let you conveniently run all of the experiments at once.
- Find a computer with many CPUs.
- If it's a headless computer, `sudo apt-get install xvfb`. Then type `xvfb-run /bin/bash -s "-screen 0 1400x900x24"` to enter a shell where all your commands will benefit from a fake monitor provided by xvfb.
- Navigate into the `modular_rl` directory, then `export THEANO_FLAGS=floatX=float32; export outdir=/YOUR/PATH/HERE; export NUM_CPUS=YOUR_NUMBER_OF_CPUS`.
- Run all experiments with `cat experiments/2-cem-scripts.txt | xargs -n 1 -P $NUM_CPUS bash -c`. This gives each entry of the script file to its own `bash -c` process, with up to $NUM_CPUS of them running in parallel.
You can also set `--video=0` in these scripts to disable video recording. If video is disabled, you won't need the xvfb commands.
[1] Szita, István, and András Lőrincz. "Learning Tetris using the noisy cross-entropy method." Neural Computation 18.12 (2006): 2936-2941.
Reproduced your results here, and marked as reviewed. To make the continuous control environments work, I needed to run it as `xvfb-run -s "-screen 0 1400x900x24" bash`.