| """Keras 1.0 metrics. | |
| This file contains the precision, recall, and f1_score metrics which were | |
| removed from Keras by commit: a56b1a55182acf061b1eb2e2c86b48193a0e88f7 | |
| """ | |
| from keras import backend as K | |
| def precision(y_true, y_pred): | |
| """Precision metric. | |
| Only computes a batch-wise average of precision. Computes the precision, a | |
| metric for multi-label classification of how many selected items are | |
| relevant. | |
| """ | |
| true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1))) | |
| predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1))) | |
| precision = true_positives / (predicted_positives + K.epsilon()) | |
| return precision | |
| def recall(y_true, y_pred): | |
| """Recall metric. | |
| Only computes a batch-wise average of recall. Computes the recall, a metric | |
| for multi-label classification of how many relevant items are selected. | |
| """ | |
| true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1))) | |
| possible_positives = K.sum(K.round(K.clip(y_true, 0, 1))) | |
| recall = true_positives / (possible_positives + K.epsilon()) | |
| return recall | |
| def f1_score(y_true, y_pred): | |
| """Computes the F1 Score | |
| Only computes a batch-wise average of recall. Computes the recall, a metric | |
| for multi-label classification of how many relevant items are selected. | |
| """ | |
| p = precision(y_true, y_pred) | |
| r = recall(y_true, y_pred) | |
| return (2 * p * r) / (p + r + K.epsilon()) | 
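To use these in training, pass the functions directly to `model.compile` as custom metrics. A minimal usage sketch (the architecture, input shape, and data are placeholders, not part of the gist):

```python
from keras.models import Sequential
from keras.layers import Dense

# Placeholder binary-classification model; swap in your own architecture.
model = Sequential([Dense(1, activation='sigmoid', input_dim=10)])

# Custom metric functions are passed by reference, just like built-in names.
model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy', precision, recall, f1_score])
```

Keras then reports batch-wise `precision`, `recall`, and `f1_score` alongside the loss during `model.fit`.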
@thusinh1969 I'd have to see your data. I can't debug it otherwise.
Can we use the same code for a multi-class classification problem?
Thanks in anticipation
@Mariyamimtiaz If you're using tensorflow as a backend, I would recommend using tf.metrics
@dgrahn No, I am using Keras.
@Mariyamimtiaz Keras is a frontend. What's your backend?
I don't know. Debug it? Or use something built into your backend.
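For the built-in route with a TensorFlow backend, a minimal sketch might look like this (the model and layer sizes are placeholders; `tf.keras.metrics.Precision` and `tf.keras.metrics.Recall` are the stateful counterparts of the functions above):

```python
import tensorflow as tf

# Placeholder model; the point here is the metrics argument below.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=(20,)),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])

# Stateful metrics accumulate over the whole epoch instead of per batch.
model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=[tf.keras.metrics.Precision(name='precision'),
                       tf.keras.metrics.Recall(name='recall')])
```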
@dgrahn Hi, I am using the same code for a multi-class classification problem, with a small modification because I want to pay more attention to class 1.
Here is the code:
I would now like to create a new custom metric to monitor the AUC of the precision-recall curve for the same class. Using the following code, I get the error: Cannot convert a symbolic Tensor (metrics_19/auc_pcr_1/strided_slice:0) to a numpy array.
Could you help me to find a solution?
@FrancescaAlf Please post code using code tags, instead of screenshots. I don't know where your precision_recall_curve or auc functions are coming from. Are they numpy functions?
```python
from sklearn.metrics import auc, precision_recall_curve
import keras.backend as K


def precision(y_true, y_pred):
    true_positives = K.sum(K.round(K.clip(y_true[:, :, 1] * y_pred[:, :, 1], 0, 1)))
    predicted_positives = K.sum(K.round(K.clip(y_pred[:, :, 1], 0, 1)))
    return true_positives / (predicted_positives + K.epsilon())


def auc_pcr_1(y_true, y_pred):
    p, r, _ = precision_recall_curve(y_true[:, :, 1], y_pred[:, :, 1])
    area_under_curve_p_r = auc(r, p)
    return area_under_curve_p_r
```
@dgrahn no, they are sklearn functions
@FrancescaAlf Ah, that's what I meant! Those methods accept numpy matrices, not tensors. If you are using TensorFlow as the backend, you could use tf.keras.metrics.AUC and tf.keras.metrics.PrecisionAtRecall. If not, you might have to implement those functions with tensors.
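One way around the symbolic tensor error, since sklearn needs numpy arrays, is to compute the PR-curve AUC outside the graph, e.g. once per epoch in a callback. This is only a sketch under that assumption; `x_val` and `y_val` are placeholder validation arrays, and the slicing mirrors the snippet above:

```python
from keras.callbacks import Callback
from sklearn.metrics import auc, precision_recall_curve


class PrAucClass1(Callback):
    """Logs the area under the precision-recall curve for class 1 each epoch."""

    def __init__(self, x_val, y_val):
        super().__init__()
        self.x_val = x_val
        self.y_val = y_val

    def on_epoch_end(self, epoch, logs=None):
        # Real predictions as numpy arrays, so sklearn accepts them.
        y_pred = self.model.predict(self.x_val)
        p, r, _ = precision_recall_curve(self.y_val[:, :, 1].ravel(),
                                         y_pred[:, :, 1].ravel())
        score = auc(r, p)
        print(' - val_pr_auc_class_1: %.4f' % score)
        if logs is not None:
            logs['val_pr_auc_class_1'] = score
```

Pass an instance to `model.fit(..., callbacks=[PrAucClass1(x_val, y_val)])`.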
@dgrahn Oh, OK. Thanks for your help!

F1 returns 1.9?! What could be going wrong?
Thanks,
Steve