@Riotpiaole I've re-implemented the tutorial code here; you may take a look at it.
Can anyone explain to me why we don't use softmax instead of sigmoid? And also why we don't use a bias? (I tried both and it wouldn't work.)
@lipixun do you know the answer to my question? It would really help me, thanks.
@pooriaPoorsarvi As seen above, we already have the `responsible_weight` variable; now we take its negative log-likelihood so that maximizing the likelihood becomes a minimization problem (TensorFlow optimizers can only minimize). There is no need to consider every other class.
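For reference, the loss being described is just the reward-weighted negative log-probability of the action taken. A minimal sketch in the tutorial's TF1 style (the placeholder names are my guess at the gist's, so treat the exact lines as an assumption rather than a quote):

```python
# Sketch: negative log-likelihood of the chosen action, scaled by its reward.
# TF optimizers only minimize, so minimizing -log(responsible_weight) * reward
# maximizes the probability of actions that earned high reward.
self.reward_holder = tf.placeholder(shape=[1], dtype=tf.float32)
self.loss = -(tf.log(self.responsible_weight) * self.reward_holder)
```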
Instead of using slim, you can use plain tf like this:

```python
state_in_OH = tf.one_hot(self.state_in, s_size)
output = tf.layers.dense(state_in_OH, a_size, tf.nn.sigmoid,
                         use_bias=False, kernel_initializer=tf.ones_initializer())
```
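If I remember the gist correctly, `use_bias=False` here plays the same role as `biases_initializer=None` in the slim call, and `tf.one_hot` replaces `slim.one_hot_encoding`, so the two versions define the same network.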
Thanks Arthur! This is a helpful tutorial for beginners like me. Here is a TensorFlow 2 implementation that may be helpful for someone.
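For anyone curious what such a port can look like, here is a minimal TF2 sketch (not the linked implementation; the sizes `s_size`/`a_size`, the learning rate, and the `train_step` helper are all my own assumptions for illustration):

```python
import tensorflow as tf  # TensorFlow 2.x

s_size, a_size = 3, 4  # assumed state/arm counts, for illustration only

# One dense layer without bias, mirroring the tutorial's tiny policy network.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(a_size, activation="sigmoid", use_bias=False,
                          kernel_initializer="ones", input_shape=(s_size,))
])
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)

def train_step(state, action, reward):
    state_oh = tf.one_hot([state], s_size)          # shape (1, s_size)
    with tf.GradientTape() as tape:
        output = model(state_oh)[0]                 # shape (a_size,)
        responsible_weight = output[action]
        # Reward-weighted negative log-likelihood of the chosen action.
        loss = -tf.math.log(responsible_weight) * reward
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```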
Thanks for the implementation. I wonder, how is this implementation a policy network? I don't see where a policy gradient is used.
According to my experiments (TensorFlow 1.3), I suggest using `AdamOptimizer` instead of `GradientDescentOptimizer`, since `GradientDescentOptimizer` suffers from training stability issues.
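Concretely, that's a one-line swap where the update op is built (the learning rate below is just an assumed value, not taken from the gist):

```python
# Before (less stable in my runs):
# optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001)

# After: Adam adapts per-parameter step sizes, which usually trains more stably.
optimizer = tf.train.AdamOptimizer(learning_rate=0.001)
update = optimizer.minimize(loss)
```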