My third neural network experiment (the second was an FIR filter). The DFT output is just a linear combination of the inputs, so it should be implementable by a single layer with no activation function.
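As a minimal sketch of the idea (this assumes NumPy and Keras; the actual code and layer sizes in the gist may differ): each DFT bin X_k = Σ_n x_n·exp(−2πikn/N) is a fixed linear combination of the samples, so a single bias-free linear layer trained against the concatenated real and imaginary parts of np.fft.fft can learn the DFT matrix directly.

```python
# Minimal sketch, assuming NumPy + Keras (the gist's code and sizes may differ).
import numpy as np
from tensorflow import keras

N = 64                                    # signal length (hypothetical choice)
x = np.random.randn(10000, N)             # random real-valued training signals
X = np.fft.fft(x, axis=-1)                # their DFTs
y = np.hstack([X.real, X.imag])           # 2N real-valued targets per example

# One linear layer, no bias, no activation: y_hat = x @ W
model = keras.Sequential([
    keras.Input(shape=(N,)),
    keras.layers.Dense(2 * N, use_bias=False, activation=None),
])
model.compile(optimizer="adam", loss="mse")
model.fit(x, y, epochs=50, batch_size=128, verbose=0)

# The learned weight matrix (N x 2N) should approach the stacked cos / -sin DFT basis.
W = model.layers[0].get_weights()[0]
```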
Animation of weights being trained:
Red are positive, blue are negative. The black squares (2336 out of 4096) are unused, and could be pruned out to save computation time (if I knew how to do that).
Even with pruning, it would still be less efficient than an FFT, so if frequency content is useful to the problem, it's probably best to compute the FFT externally and provide its output as separate inputs?
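For the pruning mentioned above, one simple option (a sketch, not from the original gist) is magnitude pruning: zero out the weights that stay near zero and store the matrix in a sparse format so the multiply only pays for the nonzero entries.

```python
# Rough magnitude-pruning sketch (hypothetical threshold; not the gist's method).
import numpy as np
from scipy import sparse

def prune(W, threshold=1e-3):
    """Zero out weights below `threshold` in magnitude; return dense and CSR copies."""
    mask = np.abs(W) >= threshold
    W_pruned = W * mask
    return W_pruned, sparse.csr_matrix(W_pruned)

# Usage: y = x @ W_pruned still works as a dense multiply (no speedup, but shows the idea);
# the CSR form skips the zeroed weights during the multiplication.
```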
Still, this at least demonstrates that neural networks can figure out frequency content on their own if it's useful to the problem.
The loss goes down for a while but then goes up. I don't know why:
Thanks for this very nice Gist! The increase in loss after around 100 epochs may come from floating-point error accumulation. However, I am still surprised by the pattern of the weights; any idea why they look like that? I just ran it again for different N, and the pattern stays very similar.