Seq2Seq
It consists of an encoder and a decoder.

Autoencoders are unsupervised, feed-forward networks that try to reconstruct their own input, i.e. reproduce the input as the output. You construct the network so that it reduces the input size through one or more hidden layers, until it reaches a reasonably small hidden layer in the middle. As a result, your data has been compressed (encoded) into a few variables. From this hidden representation the network then tries to reconstruct (decode) the input. To do a good job at reconstructing the input, the network has to learn a good representation of the data in the middle hidden layer. This can be useful for dimensionality reduction, or for generating new "synthetic" data from a given hidden representation.
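Below is a minimal sketch of that idea in PyTorch: a feed-forward autoencoder that squeezes the input through a small middle layer and is trained to reproduce its own input. The layer sizes, optimizer, and dummy data are illustrative assumptions, not part of the original note.

import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, input_dim=784, code_dim=32):
        super().__init__()
        # Encoder: progressively reduce the input down to a small code.
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128),
            nn.ReLU(),
            nn.Linear(128, code_dim),
        )
        # Decoder: try to reconstruct the original input from the code.
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 128),
            nn.ReLU(),
            nn.Linear(128, input_dim),
        )

    def forward(self, x):
        code = self.encoder(x)       # compressed (encoded) representation
        return self.decoder(code)    # reconstruction of the input

model = Autoencoder()
loss_fn = nn.MSELoss()               # reconstruction error
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.rand(64, 784)              # dummy batch standing in for real data
reconstruction = model(x)
loss = loss_fn(reconstruction, x)    # the target is the input itself
loss.backward()
optimizer.step()

Because the loss compares the output against the input itself, no labels are needed, which is why the note calls this unsupervised learning; the learned code vector is the compressed representation mentioned above.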