import torch


def f1_loss(y_true: torch.Tensor, y_pred: torch.Tensor, is_training=False) -> torch.Tensor:
    '''Calculate F1 score. Can work with GPU tensors.

    The original implementation is written by Michal Haltuf on Kaggle.

    Returns
    -------
    torch.Tensor
        0-dim scalar. 0 <= val <= 1

    Reference
    ---------
    - https://www.kaggle.com/rejpalcz/best-loss-function-for-f1-score-metric
    - https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html#sklearn.metrics.f1_score
    - https://discuss.pytorch.org/t/calculating-precision-recall-and-f1-score-in-case-of-multi-label-classification/28265/6
    '''
    assert y_true.ndim == 1
    assert y_pred.ndim == 1 or y_pred.ndim == 2

    if y_pred.ndim == 2:
        # Reduce per-class scores to hard label indices. Note that argmax is not
        # differentiable, so in this form the value behaves as a metric rather
        # than a trainable loss.
        y_pred = y_pred.argmax(dim=1)

    # Confusion-matrix counts for binary labels in {0, 1}.
    tp = (y_true * y_pred).sum().to(torch.float32)
    tn = ((1 - y_true) * (1 - y_pred)).sum().to(torch.float32)
    fp = ((1 - y_true) * y_pred).sum().to(torch.float32)
    fn = (y_true * (1 - y_pred)).sum().to(torch.float32)

    epsilon = 1e-7

    precision = tp / (tp + fp + epsilon)
    recall = tp / (tp + fn + epsilon)

    f1 = 2 * (precision * recall) / (precision + recall + epsilon)
    f1.requires_grad = is_training
    return f1
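As a quick sanity check, here is a usage sketch of my own (assuming scikit-learn is available; the variable names are illustrative, not part of the gist) comparing the value returned by f1_loss against sklearn.metrics.f1_score on random binary data:

import torch
from sklearn.metrics import f1_score

torch.manual_seed(0)
y_true = torch.randint(0, 2, (8,))       # binary ground-truth labels, shape (8,)
logits = torch.randn(8, 2)               # raw two-class scores, shape (8, 2)

f1_torch = f1_loss(y_true, logits)       # argmax over dim=1 happens inside
f1_sklearn = f1_score(y_true.numpy(), logits.argmax(dim=1).numpy())

print(float(f1_torch), f1_sklearn)       # should agree up to the epsilon smoothing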
SuperShinyEyes commented Oct 15, 2019
Thank you for sharing!
nice!
It works!
@SuperShinyEyes, in your code, you wrote assert y_true.ndim == 1, so this code doesn't accept the batch size axis? Thank you.
> @SuperShinyEyes, in your code, you wrote assert y_true.ndim == 1, so this code doesn't accept the batch size axis?

I believe it is because the code expects the target for each sample to be the index of its label, so a flat 1-D vector already covers the whole batch. This explains the line: y_true = F.one_hot(y_true, 2).to(torch.float32)
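If it helps to see what that would look like, here is a rough sketch, my own assumption rather than the gist author's code, of a soft micro-averaged F1 that one-hot encodes integer class targets and uses softmax probabilities instead of argmax; the name soft_f1_multiclass and the num_classes/logits parameters are illustrative:

import torch
import torch.nn.functional as F

def soft_f1_multiclass(y_true: torch.Tensor, logits: torch.Tensor, num_classes: int = 2) -> torch.Tensor:
    # One-hot encode integer class indices (assumed int64), as in the quoted line.
    y_true_oh = F.one_hot(y_true, num_classes).to(torch.float32)  # (N, C)
    y_prob = torch.softmax(logits, dim=1)                         # soft predictions, (N, C)

    # Soft confusion-matrix counts, summed over samples and classes (micro averaging).
    tp = (y_true_oh * y_prob).sum()
    fp = ((1 - y_true_oh) * y_prob).sum()
    fn = (y_true_oh * (1 - y_prob)).sum()

    epsilon = 1e-7
    precision = tp / (tp + fp + epsilon)
    recall = tp / (tp + fn + epsilon)
    return 2 * precision * recall / (precision + recall + epsilon)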
In this F1 "Loss", can this be backpropagated or is this just an eval metric?
Why do you never use the "tn" variable?
> In this F1 "Loss", can this be backpropagated or is this just an eval metric?

I second this question. Does it act as a loss function?
It took some time to integrate this f1_loss into my solution, but here are some notes:
- targets and predictions must be flat (1-D)
- predictions must be within the 0..1 range (probabilities, not logits)
- you want to return the negative F1 result, so that minimizing the loss maximizes F1
Here's what I came up with:
import torch

def f1_loss(y_true: torch.Tensor, y_pred: torch.Tensor, is_training=True) -> torch.Tensor:
    # Expects flat (1-D) tensors; y_pred should already be probabilities in 0..1.
    tp = (y_true * y_pred).sum().to(torch.float32)
    tn = ((1 - y_true) * (1 - y_pred)).sum().to(torch.float32)
    fp = ((1 - y_true) * y_pred).sum().to(torch.float32)
    fn = (y_true * (1 - y_pred)).sum().to(torch.float32)

    epsilon = 1e-7

    precision = tp / (tp + fp + epsilon)
    recall = tp / (tp + fn + epsilon)

    f1 = 2 * (precision * recall) / (precision + recall + epsilon)
    return -f1  # negated so that minimizing the loss maximizes F1
Usage:
f1_loss(target.to(torch.float32).flatten(), F.sigmoid(preds.flatten()))
Hope it will save you some time and nerves 😅
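For completeness, here is a minimal, purely illustrative training-loop sketch using the negated-F1 version above; the toy model, data, and hyperparameters are placeholders, and torch.sigmoid stands in for the deprecated F.sigmoid:

import torch

model = torch.nn.Linear(16, 1)                   # toy binary classifier (placeholder)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(32, 16)                          # dummy features
target = torch.randint(0, 2, (32,))              # dummy binary labels

for _ in range(10):
    optimizer.zero_grad()
    preds = model(x)
    # torch.sigmoid is the current equivalent of the deprecated F.sigmoid
    loss = f1_loss(target.to(torch.float32).flatten(),
                   torch.sigmoid(preds.flatten()))
    loss.backward()                              # gradients flow through the soft F1
    optimizer.step()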