Keras: val_acc KeyError using ModelCheckpoint.

Created on 1 Apr 2017  ·  25 Comments  ·  Source: keras-team/keras

Hi, I am using the ModelCheckpoint callback to monitor 'val_acc' and I get this error:
File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 1485, in fit
initial_epoch=initial_epoch)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 1160, in _fit_loop
callbacks.on_epoch_end(epoch, epoch_logs)
File "/usr/local/lib/python2.7/dist-packages/keras/callbacks.py", line 75, in on_epoch_end
callback.on_epoch_end(epoch, logs)
File "/usr/local/lib/python2.7/dist-packages/keras/callbacks.py", line 383, in on_epoch_end
filepath = self.filepath.format(epoch=epoch, **logs)
KeyError: 'val_acc'
I looked at the Keras codebase but could not debug it. Why is this error occurring?

stale

Most helpful comment

Use val_loss in place of val_acc

filepath="weights/weights-improvement-{epoch:02d}-{val_loss:.2f}.hdf5"
keras.callbacks.ModelCheckpoint(filepath, monitor='val_loss', verbose=0, save_best_only=False, save_weights_only=False, mode='auto', period=1)

Otherwise, for accuracy:

Use val_accuracy in place of val_acc

filepath="weights/weights-improvement-{epoch:02d}-{val_accuracy:.2f}.hdf5"
keras.callbacks.ModelCheckpoint(filepath, monitor='val_accuracy', verbose=0, save_best_only=False, save_weights_only=False, mode='auto', period=1)

All 25 comments

bump

Just found out why this is happening: it seems you have to register the accuracy metric in the compile method, like this:

model.compile(loss='mse', optimizer='adam', metrics=['accuracy'])

hope this helps

I am doing the same, but that did not help.


@badjano is there anything else you could suggest?

Did you run model.fit() with validation_split or validation_data?
I met the same error, and it went away when I set validation_split=0.1.
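
To illustrate the two suggestions above: the 'val_acc' key only appears in the epoch logs when an accuracy metric is registered at compile time and fit() is given some validation data (via validation_split or validation_data). A minimal sketch, assuming the Keras 2.x-era API used in this thread; the model, X and Y are placeholders, not from this thread:

from keras.models import Sequential
from keras.layers import Dense
from keras.callbacks import ModelCheckpoint

model = Sequential()
model.add(Dense(1, input_dim=8, activation='sigmoid'))

# 1) register an accuracy metric at compile time ...
model.compile(loss='mse', optimizer='adam', metrics=['accuracy'])

# 2) ... and give fit() validation data, otherwise no val_* keys exist at all
checkpoint = ModelCheckpoint('weights-{epoch:02d}-{val_acc:.2f}.hdf5', monitor='val_acc')
model.fit(X, Y, epochs=10, batch_size=10, validation_split=0.1, callbacks=[checkpoint])

# note: on Keras >= 2.3.0 the key is 'val_accuracy' instead of 'val_acc' (see later comments)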

This is my code, it is working, hope it helps:

self.model = Sequential()

self.model.add(Dense(input_dim=_in, output_dim=_out, init="glorot_uniform"))
self.model.add(Activation("sigmoid"))

self.filepath = "weights.hdf5"

if os.path.isfile(self.filepath):
self.model.load_weights(self.filepath)

self.model.compile(loss='mse', optimizer='adam', metrics=['accuracy'])

checkpoint = ModelCheckpoint(self.filepath, monitor='val_acc', verbose=0, save_best_only=False, mode='max')
self.model.fit(X, Y, nb_epoch=10000, batch_size=10, verbose=1, callbacks=[checkpoint], shuffle=True)

One other thing: make sure you install the h5py module, which is required for checkpoints.

What is your Keras version?


keras.__version__ = 1.2.2

updated to 2.0.2, still working ;)

Fixed this issue ... just remove the 'accuracy' keyword from your checkpoint file path.

This issue has been automatically marked as stale because it has not had recent activity. It will be closed after 30 days if no further activity occurs, but feel free to re-open a closed issue if needed.

I ran into exactly the same error, but my problem was totally different. In case anyone has the same problem I did, here are the details.

I had passed the validation directory to my_validation_iterator incorrectly, so it found no validation files. When I called fit_generator with validation_data=my_validation_iterator, my program raised KeyError: 'val_loss'.
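
A quick way to catch that situation is to check the iterator before training. A sketch, assuming a Keras 2.x ImageDataGenerator; the directory path, image size, and batch size are placeholders:

from keras.preprocessing.image import ImageDataGenerator

val_datagen = ImageDataGenerator(rescale=1. / 255)
my_validation_iterator = val_datagen.flow_from_directory(
    'data/validation', target_size=(224, 224), batch_size=32)

# flow_from_directory prints "Found N images belonging to K classes";
# if N is 0, fit_generator logs no val_* metrics and the callback raises KeyError.
assert my_validation_iterator.samples > 0, 'validation directory is empty or the path is wrong'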

I solved the problem by changing my callback. My code was:

checkpoint = ModelCheckpoint('deep-learning-model-full-v0.03.01.weights-improvement-{epoch:02d}-{val_acc:.2f}.hdf5' , monitor='val_loss' , verbose=1 , save_best_only=True , period=3)

Just remove "{val_acc:.2f}" from the checkpoint file name and it will work fine.
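
For example, the fixed filepath would look like this (same callback as above, with only the val_acc placeholder dropped):

checkpoint = ModelCheckpoint('deep-learning-model-full-v0.03.01.weights-improvement-{epoch:02d}.hdf5', monitor='val_loss', verbose=1, save_best_only=True, period=3)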

Use val_loss in place of val_acc

filepath="weights/weights-improvement-{epoch:02d}-{val_loss:.2f}.hdf5"
keras.callbacks.ModelCheckpoint(filepath, monitor='val_loss', verbose=0, save_best_only=False, save_weights_only=False, mode='auto', period=1)

Otherwise, for accuracy:

Use val_accuracy in place of val_acc

filepath="weights/weights-improvement-{epoch:02d}-{val_accuracy:.2f}.hdf5"
keras.callbacks.ModelCheckpoint(filepath, monitor='val_accuracy', verbose=0, save_best_only=False, save_weights_only=False, mode='auto', period=1)

I had the same problem, and it was because I set validation_freq > 1 (so the callback did not always have access to the "val_acc" value).
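
The simplest fix is validation_freq=1. If you want to keep a larger validation_freq, one workaround is to only save checkpoints on the epochs that actually run validation. A sketch, assuming tf.keras where fit() accepts validation_freq; the model and data variables are placeholders:

from tensorflow.keras.callbacks import ModelCheckpoint

# validation runs every 5 epochs, so only save on those epochs
checkpoint = ModelCheckpoint('model-{epoch:02d}-{val_accuracy:.2f}.hdf5',
                             monitor='val_accuracy', period=5)
model.fit(x_train, y_train, epochs=50,
          validation_data=(x_val, y_val), validation_freq=5,
          callbacks=[checkpoint])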

change "{val_acc:.2f}" to "{val_accuracy:.2f}" It will work fine.
as no variable named "val_acc" is defined but what you are trying to get is accuracy value after each epoch, which is probably defined as "val_accuracy" :) +1:

I had the same problem, but I had named the metric correctly (val_accuracy).
After a restart of my PC it worked.

Please help me to resolve this error:
TypeError: __init__() got an unexpected keyword argument 'restore_best_weights'

This is my code for training the model:

from keras.optimizers import RMSprop, SGD, Adam
from keras.callbacks import ModelCheckpoint, EarlyStopping, ReduceLROnPlateau

checkpoint = ModelCheckpoint("emotion_little_vgg_3.h5",
                             monitor="val_loss",
                             mode="min",
                             save_best_only=True,
                             verbose=1)

earlystop = EarlyStopping(monitor="val_loss",
                          min_delta=0,
                          patience=3,
                          verbose=1,
                          restore_best_weights=True)

reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.2, patience=3, verbose=1, min_delta=0.0001)

# We put our callbacks into a callback list
callbacks = [earlystop, checkpoint, reduce_lr]

# We use a very small learning rate
model.compile(loss='categorical_crossentropy',
              optimizer=Adam(lr=0.001),
              metrics=['accuracy'])

nb_train_samples = 28273
nb_validation_samples = 3534
epochs = 10

history = model.fit_generator(
    train_generator,
    steps_per_epoch=nb_train_samples // batch_size,
    epochs=epochs,
    callbacks=callbacks,
    validation_data=validation_generator,
    validation_steps=nb_validation_samples // batch_size)
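
This TypeError is unrelated to the val_acc issue: it usually means the installed Keras predates the restore_best_weights argument of EarlyStopping (it was added around Keras 2.2.3), so either upgrade Keras or drop that argument. A quick check, as a sketch:

import keras
print(keras.__version__)   # restore_best_weights needs roughly Keras >= 2.2.3
# pip install --upgrade keras   (or remove restore_best_weights=True on older versions)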

Just rename val_acc to val_accuracy and it should work fine

https://github.com/keras-team/keras/issues/6104#issuecomment-449293676

I solved this problem by temporarily using the checkpoint code.

Just rename val_acc to val_accuracy and it should work fine

It's still not working for me.

Did you check to make sure your validation_generator is actually finding validation files? If it doesn't find any files, for whatever reason, this error will also be thrown.

The answer has already been posted above: changing val_acc to val_accuracy will fix it in most cases. The reason is a change in Keras 2.3.0:

  • Metrics and losses are now reported under the exact name specified by the user (e.g. if you pass metrics=['acc'], your metric will be reported under the string "acc", not "accuracy", and inversely metrics=['accuracy'] will be reported under the string "accuracy").

https://github.com/keras-team/keras/releases/tag/2.3.0
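
In other words, on Keras >= 2.3.0 the monitor string (and any {...} placeholder in the checkpoint filepath) has to match exactly what was passed to compile(). A sketch; the model, X and Y are placeholders:

model.compile(loss='mse', optimizer='adam', metrics=['accuracy'])
history = model.fit(X, Y, epochs=2, validation_split=0.1)
print(history.history.keys())   # includes 'accuracy' and 'val_accuracy'

# with metrics=['acc'] the keys would instead be 'acc' and 'val_acc',
# and the checkpoint would have to monitor 'val_acc'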

In my case, I was using BinaryAccuracy, so my callback had to match that:

model.compile(optimizer=keras.optimizers.Adam(learning_rate),
              loss=keras.losses.BinaryCrossentropy(from_logits=True),
              metrics=[keras.metrics.BinaryAccuracy()])
mcp_save_acc = keras.callbacks.ModelCheckpoint(
    'saved_model_' + datetime.datetime.now().strftime('%Y%m%d_%H%M')
    + '-epoch{epoch:02d}-val_binary_accuracy{val_binary_accuracy:.2f}.hdf5',
    save_best_only=True, monitor='val_binary_accuracy', mode='max')