#!/usr/bin/env python3
#exec tail -n +3 $0
# Be careful not to change
# the 'exec tail' line above.
# This script lives in /etc/grub.d/
# 2015,2022 Ralph Versteegen
# The menuentry template was originally generated by /etc/grub.d/10_linux
# Uses the Forward-Forward algorithm to train a neural network to classify positive and negative data.
# The positive data is real data; the negative data is generated by the network itself.
# The network is trained to have high goodness for positive data and low goodness for negative data.
# Goodness is measured as the sum of the squared activities in a layer.
# The network is trained to correctly classify input vectors as positive or negative data:
# the probability that an input vector is positive is given by applying the logistic function σ
# to the goodness minus some threshold θ.
# The negative data may be predicted by the neural net using top-down connections, or it may be supplied externally.
import numpy as np
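The goodness and probability described in the comments above can be sketched as follows. This is an illustrative fragment, not the file's actual implementation; the function names and the threshold value `theta=2.0` are assumptions.

```python
import numpy as np

def goodness(layer_activity):
    # Goodness of a layer: the sum of its squared activities.
    return np.sum(np.square(layer_activity))

def p_positive(layer_activity, theta=2.0):
    # Probability the input is positive data: the logistic function sigma
    # applied to (goodness - theta). theta is an illustrative threshold.
    return 1.0 / (1.0 + np.exp(-(goodness(layer_activity) - theta)))

acts = np.array([1.0, 0.5, 1.5])
print(goodness(acts))    # 3.5
print(p_positive(acts))  # sigma(3.5 - 2.0) ~ 0.8176
```

A high-goodness layer thus pushes the logistic output toward 1 (positive), and a low-goodness layer toward 0 (negative).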
/////////////////////// Put x87 FPU in double-precision mode //////////////////
// For cross-platform portability, force x87 floating-point calculations to be
// done with intermediate results stored in double precision (53 bit mantissa)
// instead of extended double precision (64 bit mantissa) registers. We change
// the x87 control register to accomplish this. But it only affects the
// mantissa, not the exponent, so does not remove all inconsistencies.
//
// See http://yosefk.com/blog/consistency-how-to-defeat-the-purpose-of-ieee-floating-point.html
// and http://christian-seiler.de/projekte/fpmath/
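A minimal sketch of the control-word change described above, assuming glibc's `<fpu_control.h>` macros on x86 (on MSVC the equivalent call is `_controlfp(_PC_53, _MCW_PC)`). The function name is illustrative, not the file's actual code.

```c
// Sketch: put the x87 FPU into 53-bit (double) precision mode.
// Assumes glibc's <fpu_control.h> on an x86 target.
#include <fpu_control.h>

void set_x87_double_precision(void)
{
    fpu_control_t cw;
    _FPU_GETCW(cw);        // read the x87 control word
    cw &= ~_FPU_EXTENDED;  // clear the precision-control bits (bits 8-9)
    cw |= _FPU_DOUBLE;     // select the 53-bit mantissa mode
    _FPU_SETCW(cw);        // write the control word back
}
```

As the comments above note, this narrows only the mantissa of intermediate results; the x87 registers keep their wider exponent range, so double-rounding and overflow behavior can still differ from strict IEEE double arithmetic.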