Keras: model.to_json() does not work with tensorflow.

Created on 8 Dec 2015 · 7 comments · Source: keras-team/keras

I'm trying to save a model with the TensorFlow backend, but model.to_json() does not work.

Please check this log and let me know if you have any information about this problem:

$ cat dump_example.py
from keras.models import Sequential
from keras.layers.core import Dense

model = Sequential()
model.add(Dense(5, input_dim=5))
model.compile(loss='categorical_crossentropy', optimizer='adadelta')
model.to_json()
$ python dump_example.py
Using TensorFlow backend.
I tensorflow/core/common_runtime/local_device.cc:25] Local device intra op parallelism threads: 1
I tensorflow/core/common_runtime/local_session.cc:45] Local session inter op parallelism threads: 1
Traceback (most recent call last):
  File "dump_example.py", line 7, in <module>
    model.to_json()
  File "/home/m/miniconda/lib/python2.7/site-packages/keras/models.py", line 342, in to_json
    config = self.get_config()
  File "/home/m/miniconda/lib/python2.7/site-packages/keras/models.py", line 321, in get_config
    config['optimizer'] = self.optimizer.get_config()
  File "/home/m/miniconda/lib/python2.7/site-packages/keras/optimizers.py", line 171, in get_config
    "rho": float(K.get_value(self.rho)),
  File "/home/m/miniconda/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py", line 304, in get_value
    return x.eval(session=_get_session())
AttributeError: 'float' object has no attribute 'eval'
$

My environment is here:

$ conda --version
conda 3.18.8
$ python --version
Python 2.7.10 :: Continuum Analytics, Inc.
$ conda list | grep -E '(tensorflow|keras)'
keras                     0.3.0                     <pip>
tensorflow                0.5.0                     <pip>
$


All 7 comments

Fixed.

Thanks François! Was just trying to debug this on my side and happened to notice the new commit.

@fchollet Thanks for your quick fix.
Have a good day!

I get the same error, but I'm saving through the ModelCheckpoint callback:

Epoch 1/5
135/135 [==============================] - 122s 907ms/step - loss: 1.7039 - acc: 0.4417 - val_loss: 1.9791 - val_acc: 0.3419

Epoch 00001: val_loss improved from 2.17152 to 1.97914, saving model to ./kerasmodels/checkpoint-01-1.9791.hdf5
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-95-9fd96bd62ab8> in <module>()
      4                                   callbacks=[c1,c2],
      5                                   validation_steps=STEP_SIZE_VALID,
----> 6                                   epochs=5)

/usr/local/lib/python3.6/dist-packages/keras/legacy/interfaces.py in wrapper(*args, **kwargs)
     89                 warnings.warn('Update your `' + object_name +
     90                               '` call to the Keras 2 API: ' + signature, stacklevel=2)
---> 91             return func(*args, **kwargs)
     92         wrapper._original_function = func
     93         return wrapper

/usr/local/lib/python3.6/dist-packages/keras/engine/training.py in fit_generator(self, generator, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, class_weight, max_queue_size, workers, use_multiprocessing, shuffle, initial_epoch)
   2260                         break
   2261 
-> 2262                 callbacks.on_epoch_end(epoch, epoch_logs)
   2263                 epoch += 1
   2264                 if callback_model.stop_training:

/usr/local/lib/python3.6/dist-packages/keras/callbacks.py in on_epoch_end(self, epoch, logs)
     75         logs = logs or {}
     76         for callback in self.callbacks:
---> 77             callback.on_epoch_end(epoch, logs)
     78 
     79     def on_batch_begin(self, batch, logs=None):

/usr/local/lib/python3.6/dist-packages/keras/callbacks.py in on_epoch_end(self, epoch, logs)
    445                             self.model.save_weights(filepath, overwrite=True)
    446                         else:
--> 447                             self.model.save(filepath, overwrite=True)
    448                     else:
    449                         if self.verbose > 0:

/usr/local/lib/python3.6/dist-packages/keras/engine/topology.py in save(self, filepath, overwrite, include_optimizer)
   2578         """
   2579         from ..models import save_model
-> 2580         save_model(self, filepath, overwrite, include_optimizer)
   2581 
   2582     def save_weights(self, filepath, overwrite=True):

/usr/local/lib/python3.6/dist-packages/keras/models.py in save_model(model, filepath, overwrite, include_optimizer)
    136                     'optimizer_config': {
    137                         'class_name': model.optimizer.__class__.__name__,
--> 138                         'config': model.optimizer.get_config()
    139                     },
    140                     'loss': model.loss,

/usr/local/lib/python3.6/dist-packages/keras/optimizers.py in get_config(self)
    485 
    486     def get_config(self):
--> 487         config = {'lr': float(K.get_value(self.lr)),
    488                   'beta_1': float(K.get_value(self.beta_1)),
    489                   'beta_2': float(K.get_value(self.beta_2)),

/usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py in get_value(x)
   2308         A Numpy array.
   2309     """
-> 2310     return x.eval(session=get_session())
   2311 
   2312 

AttributeError: 'float' object has no attribute 'eval'

Getting the same error here using save().

Just in case someone finds this issue through Google (like I did), it happened when I altered the learning rate doing:

myModel.optimizer.lr = someValue

I had trained the model for a few epochs with a certain learning rate and wanted to try training for a few more with a different learning rate. After that, the model would no longer save, failing with the error @Vijayabhaskar96 mentioned.

The workaround was instantiating another optimizer entirely.

myModel.optimizer = Adam(lr=someValue)

@GabrielSiq the correct way to update the learning rate is:

from keras import backend as K
K.set_value(myModel.optimizer.lr, somevalue)
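The difference between the two approaches can be illustrated with a minimal, self-contained sketch. FakeVariable, FakeOptimizer, get_value, and set_value below are stand-ins invented for illustration, not the real Keras classes; they only mimic the shape of the code in the tracebacks above, where get_config calls K.get_value, which in turn calls .eval() on what it assumes is a backend variable:

```python
class FakeVariable:
    """Stands in for a TensorFlow variable held by the optimizer."""
    def __init__(self, value):
        self._value = value

    def eval(self, session=None):
        return self._value


def get_value(x):
    # Mirrors the backend get_value in the traceback: assumes x is a
    # variable with an .eval() method, not a plain Python float.
    return x.eval(session=None)


def set_value(x, value):
    # Mirrors K.set_value: updates the variable's contents in place,
    # so the attribute keeps its variable type.
    x._value = value


class FakeOptimizer:
    def __init__(self, lr):
        self.lr = FakeVariable(lr)

    def get_config(self):
        return {'lr': float(get_value(self.lr))}


opt = FakeOptimizer(0.001)
assert opt.get_config() == {'lr': 0.001}

# Wrong: rebinding the attribute replaces the variable with a float,
# as in the reports above, so saving later fails.
opt.lr = 0.0001
try:
    opt.get_config()
except AttributeError as e:
    print(e)  # 'float' object has no attribute 'eval'

# Right: update the variable's value in place; get_config keeps working.
opt = FakeOptimizer(0.001)
set_value(opt.lr, 0.0001)
assert opt.get_config() == {'lr': 0.0001}
```

The same logic explains why replacing the whole optimizer (myModel.optimizer = Adam(lr=someValue)) also works: the new optimizer creates its lr as a proper backend variable again.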
