(Regarding Petr's last plot):
Take the leftmost point on the graph. It says that with a (I guess 1-D?) autoencoder we get more than 80% error in representing the EEG signal. The second point then says that with a 6-D (or 5-D?) autoencoder we get 45% error, and so on, until the 29-D one explains 70% of the variance in the data, leaving 30% unexplained. Do I understand this correctly?
If my understanding is correct, then I don't see why the conv-net-based encoder is worse than PCA, which with 29 dimensions explains 78% of the variance, leaving only 22% unexplained.
Why do we still get 30% error even when using all the dimensions in the autoencoder?
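To make sure we are comparing the two methods on the same scale, here is a minimal sketch of what I assume the metric is: the "error" as the fraction of unexplained variance, i.e. residual sum of squares over total centered variance. The data and the `autoencoder` model below are placeholders, not the actual pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA

def unexplained_variance(X, X_hat):
    """Fraction of variance NOT captured by the reconstruction X_hat."""
    residual = X - X_hat
    total = X - X.mean(axis=0)
    return np.sum(residual ** 2) / np.sum(total ** 2)

# X: (n_samples, n_features) EEG matrix -- placeholder random data here
X = np.random.randn(1000, 64)

# PCA with 29 components: reconstruct and measure the unexplained fraction
pca = PCA(n_components=29).fit(X)
X_pca = pca.inverse_transform(pca.transform(X))
print("PCA (29-D) unexplained variance:", unexplained_variance(X, X_pca))

# For the autoencoder, X_ae would be the decoder output on the same data:
# X_ae = autoencoder.predict(X)   # hypothetical model object
# print("AE unexplained variance:", unexplained_variance(X, X_ae))
```

If the plotted autoencoder "error" is computed differently (e.g. per-sample MSE not normalized by the signal variance), the two curves would not be directly comparable, which might explain the apparent gap.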
Suggestion: it would also be better to plot the curves from all 3 methods on a single graph with an appropriate legend.
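Something along these lines (a matplotlib sketch; the arrays and the method labels are placeholders, since I don't have the actual numbers or the name of the third method):

```python
import numpy as np
import matplotlib.pyplot as plt

dims = np.arange(1, 30)                        # number of latent dimensions
err_pca = np.linspace(0.80, 0.22, len(dims))   # placeholder PCA error curve
err_ae = np.linspace(0.85, 0.30, len(dims))    # placeholder conv-AE error curve
err_other = np.linspace(0.90, 0.35, len(dims)) # placeholder third-method curve

plt.figure(figsize=(6, 4))
plt.plot(dims, err_pca, marker="o", label="PCA")
plt.plot(dims, err_ae, marker="s", label="Conv autoencoder")
plt.plot(dims, err_other, marker="^", label="Method 3 (placeholder)")
plt.xlabel("Number of dimensions")
plt.ylabel("Unexplained variance (error)")
plt.legend()
plt.tight_layout()
plt.show()
```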