Keras: EarlyStopping not working properly

Created on 1 Apr 2016 · 8 comments · Source: keras-team/keras

from keras.callbacks import EarlyStopping
early_stopping = EarlyStopping(monitor='value_loss', patience=100)

history = model.fit(X_train, Y_train, nb_epoch=2000, batch_size=4, show_accuracy=True, verbose=2,
                     callbacks=[early_stopping])

It always stops when the patience count is reached. I went debugging and found that, in callbacks.py, `logs` is always empty when `on_epoch_end` is called, so `current` is always None and training always stops once the patience count is reached. But `logs` is not empty when `on_batch_begin`/`on_batch_end` is called. Does `logs` get reset before `on_epoch_end` is called?

Am I missing something?

    def on_epoch_end(self, epoch, logs={}):
        current = logs.get(self.monitor)
        if current is None:
            warnings.warn('Early stopping requires %s available!' %
                          (self.monitor), RuntimeWarning)

        if self.monitor_op(current, self.best):   # current is None here, so self.wait is never reset to 0
            self.best = current
            self.wait = 0
        else:
            if self.wait >= self.patience:
                if self.verbose > 0:
                    print('Epoch %05d: early stopping' % (epoch))
                self.model.stop_training = True
            self.wait += 1
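The failure mode can be reproduced without Keras. Below is a minimal sketch (not the actual Keras code) of the patience logic above: when the monitored key is absent from `logs`, `current` is always None, `wait` is never reset, and training stops once the patience count is exhausted, no matter how well the loss is improving.

```python
# Minimal sketch of the EarlyStopping patience logic, assuming the
# monitored key ('value_loss') is never present in `logs`.
class TinyEarlyStopping:
    def __init__(self, monitor='value_loss', patience=3):
        self.monitor = monitor
        self.patience = patience
        self.best = float('inf')
        self.wait = 0
        self.stop_training = False

    def on_epoch_end(self, epoch, logs):
        current = logs.get(self.monitor)  # None if the key is missing
        improved = current is not None and current < self.best
        if improved:
            self.best = current
            self.wait = 0
        else:
            if self.wait >= self.patience:
                self.stop_training = True
            self.wait += 1

cb = TinyEarlyStopping(monitor='value_loss', patience=3)
for epoch in range(100):
    # fit() only ever reports 'loss', so 'value_loss' is never found
    cb.on_epoch_end(epoch, logs={'loss': 1.0 / (epoch + 1)})
    if cb.stop_training:
        break
print(epoch)  # prints 3: stops after patience is exhausted, despite improving loss
```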



Most helpful comment

What's the purpose of using an EarlyStopping without a validation set?

All 8 comments

Shouldn't it be val_loss for validation loss?

I've tried a lot: neither val_loss, value, nor anything else works. Does this early stopping work in your code? The issue is that `logs` is empty when this callback is called. It seems `logs` gets reset before `on_epoch_end`.

Yes, it worked, IIRC.
Actually, have you also tried loss (or equivalently acc)? Since there is no validation dataset in your case, that seems to be the only thing left to try.

    max_features = X_train.shape[1]
    m = Sequential()
    m.add(Dense(20, input_shape=(max_features,)))
    m.add(Activation('relu'))
    m.add(Dense(20))
    m.add(Activation('relu'))
    m.add(Dense(3))
    m.add(Activation('linear'))
    m.add(Round())
    m.compile(loss='mean_absolute_error', optimizer='adam')

    early_stopping = EarlyStopping(monitor='val_loss', patience=20, verbose=verbose, mode='auto')
    m.fit(X_train,
          y_train,
          batch_size=batch_size,
          nb_epoch=nb_epoch, verbose=verbose,
          validation_data=(X_test, y_test),
          callbacks=[early_stopping])

This code works. Might help.

What's the purpose of using an EarlyStopping without a validation set?

val_loss monitors the loss on the validation dataset, I think. Since you are not feeding validation_data to model.fit, there is no way to monitor the val_loss metric.
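This can be illustrated without running Keras. The keys EarlyStopping can monitor are exactly the keys of the `logs` dict at epoch end, and the `val_*` keys only appear when validation data is supplied. A simplified model of that behavior (not Keras's actual code):

```python
def epoch_end_logs(metrics=('loss',), has_validation=False):
    """Mimic, in simplified form, the logs dict passed to on_epoch_end."""
    logs = {m: 0.0 for m in metrics}
    if has_validation:
        # Keras prefixes validation metrics with 'val_'
        logs.update({'val_' + m: 0.0 for m in metrics})
    return logs

print(sorted(epoch_end_logs()))                     # ['loss']
print(sorted(epoch_end_logs(has_validation=True)))  # ['loss', 'val_loss']
# Without validation data, monitor='val_loss' finds nothing:
print(epoch_end_logs().get('val_loss'))             # None
```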

I have had the same issue. My mistake was omitting mode='auto' when defining the EarlyStopping object!

@yanje03
It does not work for me either using "val_loss" (on a regression task with KerasRegressor and scikit-learn's cross_val_score).
The only keyword that works is "loss".
But loss does not seem to mean the MSE on the validation data, because it stops too early! (Turning off early stopping and forcing longer runs yields better results; the point where performance deteriorates due to overfitting occurs later.)
Any idea which keyword gives access to the metric that really matters: the MSE on the validation data?

