Keras: Early stopping options for Keras

Created on 11 May 2015 · 7 Comments · Source: keras-team/keras

Hi all,

Is there an early stopping option for Keras training based on any criterion (validation log loss, etc.)?
Appreciate any help.

Thanks.
Dr Chan

All 7 comments

You would have to implement it in your own code. One easy solution is to run epochs one at a time, for instance with .fit(nb_epoch=1); this returns a training history that includes the training and validation loss. You can then decide to stop training based on any criterion you want, as in the sketch below.
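A minimal sketch of that manual loop, written against the newer model.fit(epochs=...) / History.history API; the data and model here are placeholders for illustration only:

import numpy as np
from tensorflow import keras

# placeholder data and model for illustration only
X = np.random.rand(1000, 20)
y = np.random.randint(0, 2, size=(1000,))
model = keras.Sequential([
    keras.layers.Dense(32, activation='relu', input_shape=(20,)),
    keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy')

best_val_loss = float('inf')
patience, wait = 3, 0
for epoch in range(100):
    # run a single epoch and read the validation loss from its history
    hist = model.fit(X, y, epochs=1, batch_size=128,
                     validation_split=0.1, verbose=0)
    val_loss = hist.history['val_loss'][0]
    if val_loss < best_val_loss:
        best_val_loss, wait = val_loss, 0
    else:
        wait += 1
        if wait >= patience:
            print("Stopping early after epoch %d" % (epoch + 1))
            break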

earlyStopping=keras.callbacks.EarlyStopping(monitor='val_loss', patience=0, verbose=0, mode='auto')
model.fit(X, y, batch_size=128, nb_epoch=100, verbose=1, callbacks=[earlyStopping], validation_split=0.0, validation_data=None, shuffle=True, show_accuracy=False, class_weight=None, sample_weight=None)

@Tokukawa I believe you mean to have validation_split greater than 0. Perhaps 0.1.

Sure, it is a typo.
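With that fixed, the same call would look something like the following (keeping the data X, y from above and a validation_split of 0.1 so that val_loss is actually computed):

earlyStopping = keras.callbacks.EarlyStopping(monitor='val_loss', patience=0, verbose=0, mode='auto')
model.fit(X, y, batch_size=128, nb_epoch=100, verbose=1, callbacks=[earlyStopping], validation_split=0.1, shuffle=True)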

Great! It is so useful for small datasets, which are common in my area.

Such a callback can be useful:

from keras.callbacks import Callback
import warnings


class EarlyStoppingByLossVal(Callback):
    """Stop training once the monitored quantity drops below a fixed threshold."""

    def __init__(self, monitor='loss', value=0.01, verbose=0):
        super(EarlyStoppingByLossVal, self).__init__()
        self.monitor = monitor
        self.value = value
        self.verbose = verbose

    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        current = logs.get(self.monitor)
        if current is None:
            warnings.warn("Early stopping requires %s to be available!" % self.monitor,
                          RuntimeWarning)
            return

        if current < self.value:
            if self.verbose > 0:
                print("Epoch %05d: early stopping, threshold reached" % epoch)
            self.model.stop_training = True

P.S. It is not entirely mine; I found it on Stack Overflow and changed it a little.
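For reference, such a callback is passed to fit just like the built-in one; a hypothetical usage, reusing the model and data from the sketch above:

stop_on_loss = EarlyStoppingByLossVal(monitor='val_loss', value=0.05, verbose=1)
model.fit(X, y, validation_split=0.1, epochs=100, callbacks=[stop_on_loss])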

This code snippet shows how to stop training early and save the best model. Hope it helps.

filepath="weights.best.hdf5"
checkpoint = ModelCheckpoint(filepath, monitor='val_acc', verbose=1, save_best_only=True, mode='max')

# check 5 epochs
early_stop = EarlyStopping(monitor='val_acc', patience=5, mode='max') 

callbacks_list = [checkpoint, early_stop]

history = model.fit(x, y, validation_data=(x_test, y_test), epochs=100, callbacks=callbacks_list)