""" | |
Almost direct copy from https://github.com/openai/baselines/blob/master/baselines/common/vec_env/subproc_vec_env.py | |
""" | |
from multiprocessing import Process, Pipe | |
from pysc2.env import sc2_env, available_actions_printer | |
def worker(remote, env_fn_wrapper): | |
""" |
""" | |
Pekka Aalto 2017 | |
This snippet tries to explain by example what deepmind means | |
in https://arxiv.org/abs/1708.04782 | |
about embedding on channel axis being equivalent to | |
one-hot-encoding followed by 1x1 conv. | |
They write: |
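As a minimal sketch of the equivalence described above (assuming TensorFlow, as in the
last snippet; this is not the original gist's code): an embedding lookup over a
categorical feature layer produces the same tensor as one-hot encoding it along the
channel axis and applying a 1x1 convolution whose kernel is the embedding matrix.

import numpy as np
import tensorflow as tf

num_categories, embed_dim = 5, 3

# A categorical screen feature: one 4x4 spatial map whose cells hold category ids.
feature = np.random.randint(0, num_categories, size=(1, 4, 4))

# A single shared weight matrix plays both roles: embedding table and 1x1 conv kernel.
embedding = np.random.randn(num_categories, embed_dim).astype(np.float32)

# Path 1: embed each cell's id directly, putting the embedding on the channel axis.
embedded = tf.nn.embedding_lookup(embedding, feature)                 # (1, 4, 4, 3)

# Path 2: one-hot encode on the channel axis, then apply a 1x1 convolution whose
# kernel is the same matrix reshaped to (1, 1, in_channels, out_channels).
one_hot = tf.one_hot(feature, depth=num_categories)                   # (1, 4, 4, 5)
kernel = embedding.reshape(1, 1, num_categories, embed_dim)           # (1, 1, 5, 3)
convolved = tf.nn.conv2d(one_hot, kernel, strides=1, padding='SAME')  # (1, 4, 4, 3)

print(np.allclose(embedded.numpy(), convolved.numpy()))               # True

With a 1x1 kernel and unit stride, the convolution is just a per-pixel matrix multiply
of the one-hot vector by the embedding matrix, which selects the matching embedding
row, hence the equivalence.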
""" | |
Investigate keras-lstm inputs, outputs and weights. | |
Needs tensorflow 2.0 | |
Note: The explanation of weights matches the CPU-implementation of LSTM-layer. | |
In GPU-implementation the weights are organized slightly differently | |
""" | |
import numpy as np | |
import tensorflow as tf |
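A minimal sketch of the kind of inspection the docstring describes might look like the
following; the i, f, c, o gate ordering of the concatenated weight arrays is the
standard Keras LSTM layout, while everything else here is an assumption rather than the
original gist's code.

units, features, timesteps = 4, 3, 1

lstm = tf.keras.layers.LSTM(units)
x = np.random.randn(1, timesteps, features).astype(np.float32)
y = lstm(x).numpy()                                   # builds the layer and runs it once

# Three weight arrays, each holding the four gates concatenated along the last axis
# in the order i (input), f (forget), c (cell candidate), o (output).
kernel, recurrent_kernel, bias = lstm.get_weights()   # shapes (3, 16), (4, 16), (16,)
W_i, W_f, W_c, W_o = np.split(kernel, 4, axis=1)
U_i, U_f, U_c, U_o = np.split(recurrent_kernel, 4, axis=1)
b_i, b_f, b_c, b_o = np.split(bias, 4)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Reproduce a single timestep by hand, starting from zero hidden and cell states.
h = np.zeros((1, units), dtype=np.float32)
c = np.zeros((1, units), dtype=np.float32)
x_t = x[:, 0, :]
i = sigmoid(x_t @ W_i + h @ U_i + b_i)                # input gate
f = sigmoid(x_t @ W_f + h @ U_f + b_f)                # forget gate
o = sigmoid(x_t @ W_o + h @ U_o + b_o)                # output gate
c = f * c + i * np.tanh(x_t @ W_c + h @ U_c + b_c)    # new cell state
h = o * np.tanh(c)                                    # new hidden state = layer output

print(np.allclose(h, y, atol=1e-5))                   # True on the CPU implementation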