Weighted log loss for Keras: binary cross-entropy with adaptive up-weighting of false-negative and false-positive errors
from keras import backend as K

def wlogloss(yt, yp):
    '''Weighted log loss for each example in the batch.

    yt: ground-truth tensor with binary values, yp: predicted probabilities.
    '''
    # Weight false-negative errors. This weight should decrease as recall increases.
    # Mean activation of the outputs that should be positive.
    meanpos = K.sum(yp * yt) / (K.sum(yt) + K.epsilon())
    # Maximum up-weighting value for false-negative errors.
    wfnmax = 20.
    # False-negative multiplier in [0, wfnmax]. Clipping it to (1, wfnmax) might be smarter.
    wfnmult = (1. - meanpos) * wfnmax
    # Weight false-positive errors the same way, using the inverted targets.
    ytinv = 1. - yt
    meanneg = K.sum(yp * ytinv) / (K.sum(ytinv) + K.epsilon())
    wfpmax = 20.
    wfpmult = meanneg * wfpmax
    # Weight matrix: one weight for every value of yp. Positive targets get the
    # false-negative multiplier, negative targets get the false-positive multiplier.
    wmat = (yt * wfnmult) + (ytinv * wfpmult)
    # Element-wise log loss (negative log-likelihood) for every value of yp.
    errmat = -(yt * K.log(yp + K.epsilon()) + ytinv * K.log(1. - yp + K.epsilon()))
    # Scale each element's loss by its corresponding weight, then average.
    return K.mean(errmat * wmat)
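
A minimal usage sketch: Keras accepts any function of (y_true, y_pred) as a loss, so wlogloss can be passed directly to model.compile. The model architecture, input shape, and x_train/y_train names below are hypothetical placeholders; any model with sigmoid outputs and binary targets should work.

from keras.models import Sequential
from keras.layers import Dense

# Hypothetical binary classifier with a sigmoid output, matching the
# assumption that yp holds probabilities and yt holds binary targets.
model = Sequential([
    Dense(64, activation='relu', input_shape=(10,)),
    Dense(1, activation='sigmoid'),
])

# Pass the function itself; Keras calls it as loss(y_true, y_pred).
model.compile(optimizer='adam', loss=wlogloss, metrics=['accuracy'])

# model.fit(x_train, y_train, batch_size=32, epochs=10)  # placeholder data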