Created October 25, 2018 11:24
Metrics removed from Keras in 2.0.
"""Keras 1.0 metrics.

This file contains the precision, recall, and f1_score metrics which were
removed from Keras by commit: a56b1a55182acf061b1eb2e2c86b48193a0e88f7
"""
from keras import backend as K


def precision(y_true, y_pred):
    """Precision metric.

    Only computes a batch-wise average of precision. Computes the precision, a
    metric for multi-label classification of how many selected items are
    relevant.
    """
    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
    return true_positives / (predicted_positives + K.epsilon())


def recall(y_true, y_pred):
    """Recall metric.

    Only computes a batch-wise average of recall. Computes the recall, a metric
    for multi-label classification of how many relevant items are selected.
    """
    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
    return true_positives / (possible_positives + K.epsilon())


def f1_score(y_true, y_pred):
    """F1 score metric.

    Only computes a batch-wise average of the F1 score, the harmonic mean of
    precision and recall.
    """
    p = precision(y_true, y_pred)
    r = recall(y_true, y_pred)
    return (2 * p * r) / (p + r + K.epsilon())
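As a sanity check, the same batch-wise formulas can be reproduced with plain NumPy (a sketch; `precision_np`, `recall_np`, and the hard-coded `eps` are stand-ins for the Keras backend versions above):

```python
import numpy as np

def precision_np(y_true, y_pred, eps=1e-7):
    # Round predictions to {0, 1}, then count true and predicted positives.
    tp = np.sum(np.round(np.clip(y_true * y_pred, 0, 1)))
    pp = np.sum(np.round(np.clip(y_pred, 0, 1)))
    return tp / (pp + eps)

def recall_np(y_true, y_pred, eps=1e-7):
    tp = np.sum(np.round(np.clip(y_true * y_pred, 0, 1)))
    ap = np.sum(np.round(np.clip(y_true, 0, 1)))
    return tp / (ap + eps)

y_true = np.array([1, 1, 0, 0])
y_pred = np.array([0.9, 0.2, 0.8, 0.1])
p = precision_np(y_true, y_pred)  # 1 TP out of 2 predicted positives -> 0.5
r = recall_np(y_true, y_pred)     # 1 TP out of 2 actual positives -> 0.5
```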
@FrancescaAlf Please post code using code tags, instead of screenshots. I don't know where your `precision_recall_curve` or `auc` functions are coming from. Are they numpy functions?
```python
from sklearn.metrics import auc, precision_recall_curve
import keras.backend as K

def precision(y_true, y_pred):
    true_positives = K.sum(K.round(K.clip(y_true[:,:,1] * y_pred[:,:,1], 0, 1)))
    predicted_positives = K.sum(K.round(K.clip(y_pred[:,:,1], 0, 1)))
    return true_positives / (predicted_positives + K.epsilon())

def auc_pcr_1(y_true, y_pred):
    precision, recall, _ = precision_recall_curve(y_true[:,:,1], y_pred[:,:,1])
    area_under_curve_p_r = auc(recall, precision)
    return area_under_curve_p_r
```
@dgrahn no, they are sklearn functions
@FrancescaAlf Ah -- that's what I meant! Those methods accept numpy matrices, not tensors. If you are using TensorFlow as the backend, you could use `tf.keras.metrics.AUC` and `tf.keras.metrics.PrecisionAtRecall`. If not, you might have to implement those functions with tensors.
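For reference, a minimal sketch of that TensorFlow route (assumes TensorFlow 2.x; the toy labels and scores are only illustrative):

```python
import tensorflow as tf

# Streaming PR-AUC metric; curve="PR" switches from the default ROC-AUC
# to precision-recall AUC.
pr_auc = tf.keras.metrics.AUC(curve="PR", name="pr_auc")
pr_auc.update_state([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])
score = float(pr_auc.result())  # a value in (0, 1]

# The metric object can also be passed straight to compile, e.g.:
# model.compile(optimizer="adam", loss="binary_crossentropy",
#               metrics=[tf.keras.metrics.AUC(curve="PR")])
```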
@dgrahn Oh, ok. Thanks for your help!
@dgrahn Hi, I am using the same code for a multiclass classification problem, with a small modification because I want to pay more attention to class 1.
Here is the code:
I would now like to create a new custom metric to monitor the AUC of the precision-recall curve for the same class. Using the following code, I get the error: Cannot convert a symbolic Tensor (metrics_19/auc_pcr_1/strided_slice:0) to a numpy array.
Could you help me find a solution?
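One way around that error (a sketch, not from the thread: `pr_auc_for_class` is a name I am introducing, and the `[:, :, 1]` indexing assumes the same 3-D `(batch, timesteps, classes)` shape as the code above) is to keep the sklearn functions out of the symbolic graph and call them on NumPy arrays, for example on `model.predict` output from a Keras `Callback`:

```python
import numpy as np
from sklearn.metrics import auc, precision_recall_curve

def pr_auc_for_class(y_true, y_prob, class_idx=1):
    """PR-AUC for one class, on NumPy arrays (e.g. model.predict output)."""
    t = y_true[:, :, class_idx].ravel()
    s = y_prob[:, :, class_idx].ravel()
    precision, recall, _ = precision_recall_curve(t, s)
    return auc(recall, precision)

# Toy arrays shaped (batch, timesteps, classes); only class 1 is filled in.
y_true = np.zeros((2, 2, 2)); y_true[..., 1] = [[0, 1], [1, 0]]
y_prob = np.zeros((2, 2, 2)); y_prob[..., 1] = [[0.2, 0.9], [0.7, 0.3]]
score = pr_auc_for_class(y_true, y_prob)  # perfect ranking -> 1.0
```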