F1 score in PyTorch
import torch


def f1_loss(y_true: torch.Tensor, y_pred: torch.Tensor, is_training=False) -> torch.Tensor:
    '''Calculate F1 score. Can work with GPU tensors.

    The original implementation is written by Michal Haltuf on Kaggle.

    Returns
    -------
    torch.Tensor
        A 0-dim (scalar) tensor with 0 <= val <= 1.

    Reference
    ---------
    - https://www.kaggle.com/rejpalcz/best-loss-function-for-f1-score-metric
    - https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html#sklearn.metrics.f1_score
    - https://discuss.pytorch.org/t/calculating-precision-recall-and-f1-score-in-case-of-multi-label-classification/28265/6
    '''
    assert y_true.ndim == 1
    assert y_pred.ndim == 1 or y_pred.ndim == 2

    if y_pred.ndim == 2:
        y_pred = y_pred.argmax(dim=1)

    # Confusion-matrix counts, assuming binary labels in {0, 1}
    tp = (y_true * y_pred).sum().to(torch.float32)
    tn = ((1 - y_true) * (1 - y_pred)).sum().to(torch.float32)
    fp = ((1 - y_true) * y_pred).sum().to(torch.float32)
    fn = (y_true * (1 - y_pred)).sum().to(torch.float32)

    epsilon = 1e-7  # avoid division by zero

    precision = tp / (tp + fp + epsilon)
    recall = tp / (tp + fn + epsilon)

    f1 = 2 * (precision * recall) / (precision + recall + epsilon)
    # Note: this flag does not make the score differentiable; argmax above
    # already cut the autograd graph, so this version is an eval metric.
    f1.requires_grad = is_training
    return f1
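For reference, a small usage sketch as an eval metric (the tensors here are made-up examples, not from the gist); with 2-D inputs the argmax is taken internally:

import torch

y_true = torch.tensor([1, 0, 1, 1])    # ground-truth labels, shape (N,)
logits = torch.tensor([[0.2, 0.8],     # per-class scores, shape (N, 2)
                       [0.9, 0.1],
                       [0.4, 0.6],
                       [0.7, 0.3]])

score = f1_loss(y_true, logits)        # argmax over dim=1 happens inside
print(score.item())                    # scalar in [0, 1], here ~0.8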
In this F1 "loss", can this be backpropagated, or is it just an eval metric?
I second this question. Does it act as a loss function?
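One way to see why the version above cannot act as a loss: argmax is not differentiable, so the autograd graph is already cut before f1 is computed. A minimal check (a sketch, not part of the gist):

import torch

logits = torch.randn(4, 2, requires_grad=True)
preds = logits.argmax(dim=1)   # integer labels; argmax has no gradient
print(preds.grad_fn)           # None -> nothing to backpropagate through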
It took some time to integrate this f1_loss into my solution, but here are some notes:
- inputs must be flat (1-D tensors)
- predictions must be within the 0..1 range (probabilities, not logits)
- you want to return the negative F1 so that minimizing the loss maximizes the score
Here's what I came up with:
def f1_loss(y_true: torch.Tensor, y_pred: torch.Tensor, is_training=True) -> torch.Tensor:
    # y_pred is expected to hold probabilities here, so there is no argmax:
    # the whole expression stays differentiable.
    tp = (y_true * y_pred).sum().to(torch.float32)
    tn = ((1 - y_true) * (1 - y_pred)).sum().to(torch.float32)  # unused, kept for symmetry
    fp = ((1 - y_true) * y_pred).sum().to(torch.float32)
    fn = (y_true * (1 - y_pred)).sum().to(torch.float32)

    epsilon = 1e-7

    precision = tp / (tp + fp + epsilon)
    recall = tp / (tp + fn + epsilon)

    f1 = 2 * (precision * recall) / (precision + recall + epsilon)
    return -f1  # negated so that the optimizer maximizes F1 by minimizing the loss
Usage:
f1_loss(target.to(torch.float32).flatten(), torch.sigmoid(preds.flatten()))  # torch.sigmoid, since F.sigmoid is deprecated
Hope it will save you some time and nerves 😅
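To make the usage concrete, here is a sketch of one training step built on the snippet above; the model, optimizer, and batch are assumed names for illustration, not from the thread:

import torch
import torch.nn as nn

model = nn.Linear(10, 1)                  # hypothetical binary classifier
optimizer = torch.optim.Adam(model.parameters())

x = torch.randn(8, 10)                    # hypothetical batch
target = torch.randint(0, 2, (8, 1))

preds = model(x)                          # raw logits, shape (8, 1)
loss = f1_loss(target.to(torch.float32).flatten(),
               torch.sigmoid(preds.flatten()))   # flat probabilities in 0..1
loss.backward()                           # -f1 is differentiable end-to-end
optimizer.step()
optimizer.zero_grad()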
Why do you never use the "tn" variable?