It appears Precision, Recall and F1 metrics have been removed from metrics.py as of today but I couldn't find any reference to their removal in the commit logs. Was this intentional?
Yes, it was intentional. See https://github.com/fchollet/keras/wiki/Keras-2.0-release-notes
Ah missed that, thank you very much.
What was the reason behind removing them?
Basically, these are all global metrics that were approximated batch-wise, which is more misleading than helpful. This was mentioned in the docs, but it's much cleaner to remove them altogether. It was a mistake to merge them in the first place.
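To make the pitfall concrete, here is a tiny illustration (an assumed example, not from the thread) of why batch-wise approximation misleads: the mean of per-batch precision is generally not the precision computed over the full dataset.
```
import numpy as np
from sklearn.metrics import precision_score

y_true = np.array([1, 0, 0, 0, 1, 1, 1, 1])
y_pred = np.array([1, 1, 0, 0, 0, 1, 1, 1])

# Global precision: 4 true positives / 5 predicted positives = 0.80.
global_precision = precision_score(y_true, y_pred)

# Per-batch precision over two batches of four: 0.50 and 1.00.
batch_precision = [precision_score(y_true[i:i + 4], y_pred[i:i + 4])
                   for i in (0, 4)]

print(global_precision, np.mean(batch_precision))  # 0.8 vs. 0.75
```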
@fchollet Are there any plans to add another implementation of these metrics (evaluated globally)?
@karimpedia What I did was create a Callback and compute them at the end of each epoch on the validation data.
```
import numpy as np
import keras

class Metrics(keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs=None):
        # Predict on the full validation set once per epoch, then score globally.
        predict = np.asarray(self.model.predict(self.validation_data[0]))
        targ = self.validation_data[1]
        self.f1s = f1(targ, predict)

metrics = Metrics()
model.fit(X_train, y_train, epochs=epochs, batch_size=batch_size,
          validation_data=(X_test, y_test),
          verbose=1, callbacks=[metrics])
```
Then implement a function that computes the F1 score, or simply use scikit-learn's F-score function.
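For reference, a minimal sketch of such an f1() helper delegating to scikit-learn; the argmax binarization and 'macro' averaging are illustrative assumptions, not part of the comment above.
```
import numpy as np
from sklearn.metrics import f1_score

def f1(y_true, y_pred):
    # Collapse one-hot targets / class probabilities to label indices,
    # then score globally over the whole validation set.
    return f1_score(np.argmax(y_true, axis=-1),
                    np.argmax(y_pred, axis=-1),
                    average='macro')
```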
@imanoluria, I was using your code (thanks for posting it, BTW) with sklearn's F1 in my model. The model has three inputs and one output. I used two callbacks: callbacks=[checkpointer, metrics]. Alas, I get this error:
File "build/bdist.linux-x86_64/egg/keras/engine/training.py", line 1631, in fit
File "build/bdist.linux-x86_64/egg/keras/engine/training.py", line 1233, in _fit_loop
File "build/bdist.linux-x86_64/egg/keras/callbacks.py", line 73, in on_epoch_end
File "rafael.py", line 29, in on_epoch_end
predict = np.asarray(self.model.predict(self.validation_data[0]))
File "build/bdist.linux-x86_64/egg/keras/engine/training.py", line 1730, in predict
File "build/bdist.linux-x86_64/egg/keras/engine/training.py", line 121, in _standardize_input_data
ValueError: The model expects 3 arrays, but only received one array. Found: array with shape (34374, 15, 9)
The fit portion:
```
history = model.fit([T_train, S_train, P_train], y_train,
                    batch_size=size_batch,
                    nb_epoch=epochs,
                    verbose=1,
                    callbacks=[checkpointer, metrics],
                    class_weight=weights_dict,
                    validation_data=[[T_validation, S_validation, P_validation], y_validation],
                    shuffle=True)
```
My validation inputs are (the first dim is the number of samples):
(34374, 15, 9) - temporal samples
(34374, 3) - 1D vector samples
(34374, 7) - 1D vector samples
My model's output is a one-hot 25-category vector. Can you see what the problem is? Thanks a lot!
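As a hedged aside: with a multi-input model, the callback's self.validation_data is a flat list (one entry per input array, then the targets, then sample weights), so predict() needs all three input arrays rather than just the first. Roughly:
```
def on_epoch_end(self, epoch, logs=None):
    # First three entries are the model's inputs; the fourth is the targets.
    val_inputs = list(self.validation_data[:3])  # [T_validation, S_validation, P_validation]
    val_targets = self.validation_data[3]
    predict = np.asarray(self.model.predict(val_inputs))
    self.f1s = f1(val_targets, predict)
```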
@fchollet - will there be a future built-in API for these metrics in the checkpointer or a similar mechanism?
Hi @basque21, I tried to extend your example to work with predict_generator, but it did not work. Any ideas? The error I got was AttributeError: 'Metrics' object has no attribute 'validation_data'. My code snippet is below:
```
class Metrics(keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs=None):
        predict = self.model.predict_generator(
            self.validation_data,
            steps=self.validation_steps,
            workers=6
        )
        targ = self.targ
        self.f1s = f1(targ, predict)
```
@ShiangYong Did you set validation_data when you called fit()?
@dakl No, I need to use fit_generator or predict_generator for my applications; those expect generators, not validation_data.
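One hedged workaround, since Keras does not populate self.validation_data when training via fit_generator: pass the generator, step count, and true labels to the callback yourself. GeneratorMetrics and its parameter names below are hypothetical, and f1() is the helper sketched earlier.
```
import keras

class GeneratorMetrics(keras.callbacks.Callback):
    def __init__(self, val_generator, val_steps, val_targets):
        super(GeneratorMetrics, self).__init__()
        self.val_generator = val_generator  # yields input batches
        self.val_steps = val_steps          # number of batches to draw per epoch
        self.val_targets = val_targets      # ground-truth labels, in generator order

    def on_epoch_end(self, epoch, logs=None):
        predict = self.model.predict_generator(self.val_generator,
                                               steps=self.val_steps)
        self.f1s = f1(self.val_targets, predict)
```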
If someone needs to implement it, I suggest this workaround: call model.fit(epochs=1, ...) inside a for loop, taking advantage of the precision/recall metrics reported after every epoch. Something like this:
```
for epoch in range(epochs):
    model_hist = model.fit(X_train, Y_train, batch_size=batch_size, epochs=1,
                           verbose=2, validation_data=(X_val, Y_val))
    precision = model_hist.history['val_precision'][0]
    recall = model_hist.history['val_recall'][0]
    f_score = (2.0 * precision * recall) / (precision + recall)
```
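Note that val_precision and val_recall only appear in history if the compiled metrics still report them, which Keras 2.0 no longer does by default. A hedged variant of the same loop, computing the scores directly with scikit-learn (the 0.5 threshold and binary averaging are illustrative assumptions):
```
from sklearn.metrics import precision_recall_fscore_support

for epoch in range(epochs):
    model.fit(X_train, Y_train, batch_size=batch_size, epochs=1, verbose=2)
    y_pred = (model.predict(X_val) > 0.5).astype(int)
    precision, recall, f_score, _ = precision_recall_fscore_support(
        Y_val, y_pred, average='binary')
```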