@Radi4
Last active April 17, 2018 05:21
@ossadtchi commented Apr 16, 2018

(Regarding Petr's last plot):

  1. Take the leftmost point on the graph. It says that with a (I guess 1-D?) autoencoder we get more than 80% error in representing the EEG signal. The second point then states that with a 6-D (or 5-D?) autoencoder we get 45% error, and so on, until the 29-D model explains 70% of the variance in the data, leaving 30% unexplained. Do I understand it correctly?

  2. If my understanding is correct, then I do not see why the conv-net-based encoder is worse than PCA, which with 29 dimensions explains 78% of the variance, leaving only 22% unexplained (see the sketch after this list for how both methods could be scored with the same metric).

  3. Why do we still get 30% error even when using all the dimensions in the autoencoder?
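
A minimal sketch of what I mean by scoring both methods with the same metric, assuming the plotted "% error" is the fraction of unexplained variance (relative reconstruction MSE). `X` (a samples × channels EEG matrix) and `autoencoder` are hypothetical placeholders, not names from the gist:

```python
import numpy as np
from sklearn.decomposition import PCA

def unexplained_variance(X, X_hat):
    """Fraction of the total (centered) variance missed by a reconstruction."""
    X_centered = X - X.mean(axis=0)
    return np.sum((X - X_hat) ** 2) / np.sum(X_centered ** 2)

# PCA baseline: with k components the unexplained fraction is simply
# 1 minus the cumulative sum of explained_variance_ratio_.
pca = PCA(n_components=29).fit(X)
pca_error = 1.0 - np.cumsum(pca.explained_variance_ratio_)  # one value per k

# Autoencoder: reconstruct with the trained model (hypothetical here) and
# score it with the same metric, so the two curves are directly comparable.
X_hat = autoencoder.predict(X)
ae_error = unexplained_variance(X, X_hat)
```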

Suggestion: it would be better to plot the curves from all three methods on a single graph with an appropriate legend.
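
For example, a minimal matplotlib sketch of such a combined plot; `dims` and the three error arrays are hypothetical placeholders for the per-dimension error values of each method:

```python
import matplotlib.pyplot as plt

plt.figure(figsize=(6, 4))
plt.plot(dims, pca_error, marker='o', label='PCA')
plt.plot(dims, ae_error, marker='s', label='Dense autoencoder')
plt.plot(dims, conv_ae_error, marker='^', label='Conv autoencoder')
plt.xlabel('Latent dimensionality')
plt.ylabel('Unexplained variance (%)')
plt.legend()
plt.tight_layout()
plt.show()
```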
