
@poolio
Last active October 19, 2024 08:20
@poolio (Author) commented Jan 29, 2017

That's a typo; it should be p(x) close to q(x).

@JulienSiems

Thanks for the great notebook! I have two questions though:

  • I don't understand why you make the output of the generative network a stochastic tensor. In Figure 1 of the paper, eps just gets added/multiplied. Shouldn't a deterministic output be sufficient?
  • Shouldn't the weights of q and p be updated separately? The algorithm in the paper states two different loss terms for them. Or is updating them jointly possible because of Proposition 2?

@poolio (Author) commented Mar 7, 2017

  • The StochasticTensor in the generative model is used to keep track of both the sample x ~ p(x|z) and the density p(x|z), so the density can be evaluated when computing the log probability of the data given the sampled latent state z.

  • In practice we don't have access to T* and use the current discriminator T as a replacement. T does not directly depend on the parameters of p, so d/dp -T(x, z) is 0, and the gradients are identical to the gradients from the separate losses in the paper.

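(Not the author's code; the StochasticTensor API lived in tf.contrib and was later removed. A rough sketch of the idea in plain NumPy/SciPy, with a made-up decoder and noise scale purely for illustration — the point is that the sample and its distribution are kept together so log p(x|z) can be evaluated for that same sample:)

```python
# Sketch: pair the sample x ~ p(x|z) with its density so that the
# reconstruction term log p(x|z) can be evaluated later for that sample.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def decoder_mean(z):
    # stand-in for the generative network's output (hypothetical)
    return 2.0 * z + 1.0

z = rng.standard_normal(5)              # sampled latent state z
mu = decoder_mean(z)
x = mu + rng.standard_normal(mu.shape)  # x ~ p(x|z) = Normal(mu, 1)

# sample and distribution travel together, so the log probability of the
# data given the sampled latent state is evaluated at the *same* x
log_px_given_z = norm.logpdf(x, loc=mu, scale=1.0).sum()
print(log_px_given_z)
```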
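(A toy numeric check of that second point, with all numbers invented: when T is held fixed, the -T(x, z) term is constant with respect to the generative parameters, so the joint loss and the paper's separate losses give identical gradients.)

```python
# Finite-difference check that a constant -T term contributes zero gradient.
import numpy as np

x, z = 0.7, -0.3
T_fixed = 1.234  # current discriminator output; constant w.r.t. theta

def recon_loss(theta):
    # toy reconstruction term depending on the generative parameter theta
    return (x - theta * z) ** 2

def joint_loss(theta):
    # joint objective: the -T term does not depend on theta
    return recon_loss(theta) - T_fixed

def grad(f, theta, eps=1e-6):
    # central finite difference
    return (f(theta + eps) - f(theta - eps)) / (2 * eps)

g_joint = grad(joint_loss, 0.5)
g_separate = grad(recon_loss, 0.5)
print(abs(g_joint - g_separate))  # ~0: both losses yield the same update
```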
@JulienSiems

Makes sense! Thank you for taking the time to reply!

@clarken92

Thank you for your interesting post. Please forgive me if I ask a dumb question: how can I find the "stochastic_tensor" module in TensorFlow? I'm using version 1.8 installed with pip, but it says there is no such module. Thank you.
