@endolith
Last active October 11, 2024 07:04
Training neural network to implement discrete Fourier transform (DFT/FFT)

My third neural network experiment (second was FIR filter). DFT output is just a linear combination of inputs, so it should be implementable by a single layer with no activation function.
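As a quick sanity check on that claim (a minimal sketch, separate from the training code below), the DFT is just a matrix multiplication, so a single bias-free linear layer has enough capacity to represent it exactly:

import numpy as np

N = 32
# DFT matrix: W[k, n] = exp(-2j*pi*k*n/N)
W = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N)
x = np.random.randn(N) + 1j*np.random.randn(N)
# Multiplying by W matches the FFT up to rounding error
print(np.allclose(W @ x, np.fft.fft(x)))  # True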

Topology of a 4-point complex DFT

Animation of weights being trained:

Neural network weights heatmap

Red are positive, blue are negative. The black squares (2336 out of 4096) are unused, and could be pruned out to save computation time (if I knew how to do that).
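For what it's worth, a crude form of pruning is just magnitude thresholding: zero out the near-zero weights and write them back into the layer. A minimal sketch, assuming the trained model from the script below (the 0.01 threshold is an arbitrary choice):

W = model.get_weights()[0]
mask = np.abs(W) > 0.01        # keep only weights with non-negligible magnitude
print(f'{np.sum(~mask)} of {W.size} weights zeroed')
model.set_weights([W * mask])  # write the sparsified matrix back into the layer

Note that this only zeroes the weights; a dense layer still multiplies by them, so it wouldn't actually save any computation without a sparse implementation.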

Even with pruning, it would be less efficient than an FFT, so if the FFT output is useful, it's probably best to compute it externally and provide it as separate inputs.

This at least demonstrates that neural networks can figure out frequency content on their own, though, if it's useful to the problem.

The loss goes down for a while but then goes up. I don't know why:

loss vs epoch

"""
Train a neural network to implement the discrete Fourier transform
"""
import matplotlib.pyplot as plt
import numpy as np
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential
N = 32
batch = 10000
# Generate random input data and desired output data
sig = np.random.randn(batch, N) + 1j*np.random.randn(batch, N)
F = np.fft.fft(sig, axis=-1)
# First half of inputs/outputs is real part, second half is imaginary part
X = np.hstack([sig.real, sig.imag])
Y = np.hstack([F.real, F.imag])
# Create model with no hidden layers, same number of outputs as inputs.
# No bias needed. No activation function, since DFT is linear.
model = Sequential([Dense(N*2, input_dim=N*2, use_bias=False)])
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(X, Y, epochs=100, batch_size=100)
# Confirm that it works
data = np.arange(N)
def ANN_DFT(x):
    if len(x) != N:
        raise ValueError(f'Input must be length {N}')
    pred = model.predict(np.hstack([x.real, x.imag])[np.newaxis])[0]
    result = pred[:N] + 1j*pred[N:]
    return result
ANN = ANN_DFT(data)
FFT = np.fft.fft(data)
print(f'ANN matches FFT: {np.allclose(ANN, FFT)}')
# Heat map of neuron weights
plt.imshow(model.get_weights()[0], vmin=-1, vmax=1, cmap='coolwarm')
plt.show()
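A further check one could add (not in the original script, and assuming training has converged reasonably well): since Re(F) = Re(x)·Re(W) − Im(x)·Im(W) and Im(F) = Re(x)·Im(W) + Im(x)·Re(W), the learned kernel should approach a real-valued block arrangement of the true DFT matrix:

# Compare learned weights to the exact DFT matrix
W_dft = np.fft.fft(np.eye(N))             # N x N complex DFT matrix
K_exact = np.block([[W_dft.real, W_dft.imag],
                    [-W_dft.imag, W_dft.real]])
print('Max weight error:', np.abs(model.get_weights()[0] - K_exact).max())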
@kasperjunge

@KasperJuunge I'd like to see that :D

I'll send it when it's done 🤘

@endolith
Author

@CYHSM Here's a side view of some of the rows:


@technolojin

A quick solution would be to scale down the input data:
sig = sig * 0.01

Another option is to adjust the Adam optimizer's beta_2 parameter:
import tensorflow as tf
adam = tf.keras.optimizers.Adam(learning_rate=0.001, beta_1=0.9, beta_2=0.9998, epsilon=1e-07, amsgrad=False)
model.compile(loss='mean_squared_error', optimizer=adam)

@endolith
Author

@technolojin solution to what?

@technolojin

For the rise of the loss.

@endolith
Author

@technolojin Oh OK, it's more that I don't understand why it happens, though. This was just a learning exercise.

@technolojin

@endolith I was able to learn this basic idea from your work. Thank you.
It also inspired me to find other related studies:

https://reality.ai/ffts-and-stupid-deep-learning-tricks/
https://dawn.cs.stanford.edu/2019/06/13/butterfly/

@masip85

masip85 commented Sep 9, 2020

I guess this DNN is not faster than the original FFT, is it?

@endolith
Author

endolith commented Sep 9, 2020

@masip85 No, not at all. It's inaccurate and highly inefficient, and you need to train it on white data if you want it to be correct at all points of the spectrum. It was just an experiment. I'm a signal processing engineer and was trying to learn the basics of ANNs.

If you think that your neural network would benefit from FFT features, then I would suggest calculating the FFT and feeding it to the network separately.

I also tested whether it could learn the absolute value of the FFT output, and it did not do very well: the ReLUs were not good at approximating the absolute value function.

Again, I would feed that information directly to the network if I thought it would benefit from it.
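For concreteness, a hypothetical sketch of that suggestion (not part of the gist): compute the spectral features with numpy up front and concatenate them with the raw inputs, rather than asking the network to learn the transform:

import numpy as np

def with_fft_features(x):
    # Precompute the one-sided FFT magnitude and append it to the raw samples
    mag = np.abs(np.fft.rfft(x, axis=-1))
    return np.hstack([x, mag])

batch = np.random.randn(100, 32)
X = with_fft_features(batch)   # shape (100, 32 + 17), fed to the network as usual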

@rajb245

rajb245 commented Sep 21, 2020

Very nice experiment. For people interested in ML for "fast transforms" of this type, let me leave a link to this work here:
https://dawn.cs.stanford.edu/2019/06/13/butterfly/

In the arXiv paper, they show that you can indeed learn fast-transforms that get to the same order of efficiency as a hand-tuned fast implementation. And the results go beyond that, implying that all matrices admit a representation as a composition of only sparse butterfly matrices and permutations.
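To make the "sparse butterfly factors" idea concrete, here is a small numpy sketch (not from the paper) of the radix-2 split for N = 4: the dense DFT matrix factors exactly into a sparse butterfly stage, two half-size DFTs, and an even/odd permutation:

import numpy as np

N = 4
W = np.fft.fft(np.eye(N))                      # dense 4x4 DFT matrix
F2 = np.fft.fft(np.eye(N // 2))                # half-size DFT blocks
I2 = np.eye(N // 2)
D = np.diag(np.exp(-2j * np.pi * np.arange(N // 2) / N))  # twiddle factors
B = np.block([[I2, D], [I2, -D]])              # sparse butterfly stage
P = np.eye(N)[[0, 2, 1, 3]]                    # even/odd permutation
print(np.allclose(W, B @ np.kron(np.eye(2), F2) @ P))     # True

Repeating the same split recursively on each half-size block is what gives the FFT its O(N log N) cost.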

@endolith
Author

@rajb245 So are they replacing the dense connection layer with a butterfly connection layer?

@rajb245

rajb245 commented Sep 21, 2020

@endolith That's one idea they explored in the paper, yes. They showed you can learn some underlying sparse structure of otherwise "fully connected" layers using their method. Using it like that is a new kind of matrix/model compression, and it contrasts a lot with the other approach that's popular in DL, pruning.

They also use the ideas of learning the weights of matrices with butterfly sparsity patterns and learning a permutation out of a discrete set of choices to show that Adam does indeed find solutions that are the usual "fast" linear transforms from signal processing (FFT, fast DCT, fast DST, etc.).

@msaad1311

Hello. I tried your code and compared computing the FFT with numpy versus with the neural network, and there was a big difference. Can you provide some insight into how to solve that? I have seen multiple published papers in which the authors mention that using a neural network as an FFT greatly reduces the time complexity.

@endolith
Author

endolith commented Nov 19, 2020

@msaad1311

Yes, it's highly inefficient, as I said in the description and the comments. Even more so than a direct DFT because of all the zero weights being calculated unnecessarily. This isn't something that you should be doing. It was just an experiment while teaching myself neural nets.  If your neural net would benefit from frequency domain information, it's better to just do a numpy FFT and pass the output to the net (possibly adding a magnitude function afterward, since that nonlinearity was much harder to learn in my tests).

If you need the net to learn FFT-like transforms in general, look at the comment above about butterfly networks: https://gist.github.com/endolith/98863221204541bf017b6cae71cb0a89#gistcomment-3461984

@sebraun-msr

Can you shed some light on the motivation for this? Why do you want to approximate an existing algorithm with a (likely less accurate and less efficient) black-box approach?

@endolith
Author

endolith commented Jul 8, 2021

@sebraun-msr It was just an experiment. I thought that ANN nodes could be equivalent to signal processing nodes and tried implementing some things that way. I've said multiple times that you shouldn't actually do this. :)

For example, I learned that:

@ShriAka

ShriAka commented Dec 22, 2022

This work is cited here at 20:21: https://youtu.be/mYy4Zv80tXQ

@endolith
Author

@ShriAka That's cool, thanks for pointing it out :)

@Chenhubget

The demo uses random noise data. If specific data is used instead, such as time-series or image data, is it still easy to learn the appropriate weights?

@endolith
Author

@Chenhubget It only learns the transform at the frequencies present in the training data.
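To illustrate (a hypothetical sketch, not part of the gist): if the training signals are band-limited, only the occupied bins produce any error signal, so the weights for the empty bins are never constrained:

import numpy as np

N = 32
batch = 10000
spectrum = np.zeros((batch, N), dtype=complex)
# Put energy only in the lowest 8 bins
spectrum[:, :8] = np.random.randn(batch, 8) + 1j*np.random.randn(batch, 8)
sig = np.fft.ifft(spectrum, axis=-1)   # band-limited training signals
# Training on `sig` as in the script above would leave the weights for bins 8..N-1 untrained.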
