Seq2Seq
It consists of an encoder and a decoder.
Encoders (autoencoders) are feed-forward networks, trained with unsupervised learning, that try to reconstruct their own input
(reproduce the input as output). You construct the network so that it reduces the input size through one or more hidden layers,
until it reaches a reasonably small hidden layer in the middle. As a result, your data has been compressed (encoded) into
a few variables. From this hidden representation the network tries to reconstruct (decode) the input again.
To do a good job at reconstructing the input, the network has to learn a good representation of the data in
the middle hidden layer. This can be useful for dimensionality reduction, or for generating new "synthetic" data
from a given hidden representation.
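The encode-then-decode structure above can be sketched in a few lines. This is a minimal illustration, not a trained model: the dimensions (a 6-d input squeezed to a 2-d code) and the linear encode/decode maps are hypothetical choices; in practice the weights would be learned by minimizing the reconstruction error between the input and the decoded output.

```python
import random

random.seed(0)

def matvec(W, x):
    # multiply matrix W (rows x cols) by vector x
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def rand_matrix(rows, cols):
    # untrained placeholder weights; real autoencoders learn these
    return [[random.uniform(-1, 1) for _ in range(cols)] for _ in range(rows)]

IN_DIM, CODE_DIM = 6, 2          # hypothetical sizes: 6-d input, 2-d middle layer

W_enc = rand_matrix(CODE_DIM, IN_DIM)   # encoder weights (compress)
W_dec = rand_matrix(IN_DIM, CODE_DIM)   # decoder weights (reconstruct)

def encode(x):
    # compress the input into the small middle ("hidden") representation
    return matvec(W_enc, x)

def decode(code):
    # reconstruct an input-sized vector from the hidden representation
    return matvec(W_dec, code)

x = [0.5, -1.0, 0.25, 0.0, 1.0, -0.5]
code = encode(x)       # 2 variables instead of 6: the encoded data
x_hat = decode(code)   # the network's reconstruction of x
print(len(code), len(x_hat))  # 2 6
```

Training would adjust `W_enc` and `W_dec` so that `x_hat` matches `x` as closely as possible, forcing the 2-d code to capture the structure of the data.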