Keras: UnboundLocalError: local variable 'epoch_logs' referenced before assignment

Created on 27 Jul 2015 · 6 comments · Source: keras-team/keras

Traceback (most recent call last):
  File "pretrained_mlp.py", line 102, in <module>
    batch_size=1);
  File ".../keras/keras/models.py", line 465, in fit
    shuffle=shuffle, metrics=metrics)
  File ".../keras/keras/models.py", line 228, in _fit
    callbacks.on_epoch_end(epoch, epoch_logs)
UnboundLocalError: local variable 'epoch_logs' referenced before assignment

Has anybody had this issue before? It comes up intermittently for me when training autoencoders. I'm using Python 2.7.8 on Ubuntu, if that matters.

All 6 comments

I actually had the same issue yesterday, while reconstructing a model from a saved YAML file. I didn't want to post anything here before I had a better idea of what could have caused it, and under which circumstances it did or didn't appear.

I was training an RNN (no autoencoders) on 64-bit Python 3.4 on Windows 7.

From what I've gleaned from the source, epoch_logs is only assigned on the final batch of the epoch's batch loop, so the error is thrown (as shown in your stack trace) when the end-of-epoch callbacks fire without any batch having run.
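To make that concrete, here is a self-contained toy that mirrors the control flow just described (my own sketch; run_epochs and its arguments are hypothetical stand-ins, not actual Keras code):

def run_epochs(batches, nb_epoch, on_epoch_end):
    # Mirrors the bug pattern: epoch_logs is assigned only on the
    # final batch, so an empty batch list leaves it unbound.
    for epoch in range(nb_epoch):
        for batch_index, batch in enumerate(batches):
            # ... per-batch training would happen here ...
            if batch_index == len(batches) - 1:
                epoch_logs = {'loss': 0.0}  # only assigned on the final batch
        on_epoch_end(epoch, epoch_logs)  # UnboundLocalError if batches is empty

run_epochs(batches=[], nb_epoch=1, on_epoch_end=print)  # reproduces the error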

It can happen when the model is trained on empty data (which won't work anyway). I can turn that into a clearer error message.
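For illustration, a minimal reproduction on empty data might look like this (a sketch assuming a Keras 2-style API; the model and shapes here are made up):

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(1, input_dim=4))
model.compile(loss='mse', optimizer='sgd')

X = np.empty((0, 4))  # zero training samples -> no batches at all
y = np.empty((0, 1))
model.fit(X, y, batch_size=1)  # UnboundLocalError on affected versions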

Ah-hah! That was indeed my error, silly mistake. Thanks for the update!

This can also be due to having a negative batch size.

Hello! I wanted to report that I got this error too, and I've found that in my case it was (at least partially) caused by having steps_per_epoch = 0. In that case, line 1852 of "training.py" never enters the while loop that defines 'epoch_logs'.
[Screenshot attached: epochs_log_error_cause]
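In pure Python, the failure mode looks like this (a toy mirroring the loop structure described above, not the real training.py code):

def one_epoch(steps_per_epoch, on_epoch_end, epoch=0):
    steps_done = 0
    while steps_done < steps_per_epoch:  # never entered when steps_per_epoch == 0
        epoch_logs = {'loss': 0.0}       # assigned only inside this loop
        steps_done += 1
    on_epoch_end(epoch, epoch_logs)      # UnboundLocalError if the loop never ran

one_epoch(steps_per_epoch=0, on_epoch_end=print)  # reproduces the error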

For me, the solution was a smaller batch size. Here's why:

history = model.fit_generator(train_generator,
                              steps_per_epoch=nb_train_samples//batch_size,
                              ...
                              )

Since I had mistakenly used a batch_size larger than the number of training samples, nb_train_samples // batch_size rounded down to 0. And as @luis-i-reyes-castro explains above, steps_per_epoch must be greater than 0.
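A simple guard against this (my own suggestion, not from this thread) is to clamp the computed value so integer division can never yield zero steps:

# max(1, ...) is a defensive choice, not official Keras advice:
steps_per_epoch = max(1, nb_train_samples // batch_size)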
