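The training loop below assumes that model, data_x, data_y, criterion, and optimizer were defined earlier in the article. A minimal, self-contained setup that is compatible with the loop (it exposes the model.linear attribute the logging code reads) might look like the following sketch; the data and learning rate are illustrative and will not reproduce the exact loss values printed further down.

import torch
import torch.nn as nn

# Illustrative data: a noisy line y = 3x + 2 (not the article's actual data)
data_x = torch.linspace(0, 10, 100).unsqueeze(1)
data_y = 3 * data_x + 2 + torch.randn(100, 1)

class LinearRegression(nn.Module):
    def __init__(self):
        super().__init__()
        # One input feature, one output: y = w*x + b
        self.linear = nn.Linear(1, 1)

    def forward(self, x):
        return self.linear(x)

model = LinearRegression()
criterion = nn.MSELoss()                                  # mean-squared-error loss
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # lr is an illustrative choice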
losses = []         # to keep track of the epoch losses
slope_list = []     # to keep track of the slope learnt by the model
intercept_list = [] # to keep track of the intercept learnt by the model

EPOCHS = 2500

print('\nTRAINING...')
for epoch in range(EPOCHS):
    # In PyTorch we need to clear the accumulated gradients before running back-propagation
    optimizer.zero_grad()

    # Feed the input data to the model to get the predictions
    pred_y = model(data_x)

    # Calculate the loss from the model's predictions and the real y values
    loss = criterion(pred_y, data_y)

    # Back-propagation
    loss.backward()

    # Update all the trainable parameters
    optimizer.step()

    # Append loss.item() (a scalar value)
    losses.append(loss.item())

    # Append the learnt slope and intercept
    slope_list.append(model.linear.weight.item())
    intercept_list.append(model.linear.bias.item())

    # Print the loss every 100 epochs
    if epoch % 100 == 0:
        print('loss: ', loss.item())
TRAINING...
loss: 290066.9375
loss: 233668.875
loss: 186075.625
loss: 146441.828125
loss: 113859.40625
loss: 87455.7265625
loss: 66397.1875
loss: 49895.44140625
loss: 37213.265625
loss: 27672.890625
loss: 20661.083984375
loss: 15636.8427734375
loss: 12134.056640625
loss: 9762.8232421875
loss: 8207.25390625
loss: 7220.21630859375
loss: 6615.59326171875
loss: 6258.6826171875
loss: 6055.97216796875
loss: 5945.3681640625
loss: 5887.44677734375
loss: 5858.3232421875
loss: 5844.21875
loss: 5837.568359375
loss: 5834.42822265625
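The losses, slope_list, and intercept_list collected above are presumably meant for visualization; a minimal sketch with matplotlib could look like this.

import matplotlib.pyplot as plt

fig, axes = plt.subplots(1, 3, figsize=(15, 4))
axes[0].plot(losses)
axes[0].set(title='Training loss', xlabel='epoch')
axes[1].plot(slope_list)
axes[1].set(title='Learnt slope', xlabel='epoch')
axes[2].plot(intercept_list)
axes[2].set(title='Learnt intercept', xlabel='epoch')
plt.tight_layout()
plt.show()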
Thanks! It was a very interesting article. But in practice, while this approach works, there are faster ways to fit than training a DNN.
My favorite for now is np.polyfit(), which gives you the best polynomial coefficients for a polynomial of a given degree.
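For comparison, fitting a line (a degree-1 polynomial) with np.polyfit takes a single call; the data below is illustrative. np.polyfit returns the coefficients in order of decreasing degree, so for degree 1 that is (slope, intercept).

import numpy as np

# Illustrative data: a noisy line y = 3x + 2
x = np.linspace(0, 10, 100)
y = 3.0 * x + 2.0 + np.random.randn(100)

# Least-squares fit of a degree-1 polynomial
slope, intercept = np.polyfit(x, y, deg=1)
print('slope:', slope, 'intercept:', intercept)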