Keras: Tensor conversion requested dtype int32 for Tensor with dtype float32

Created on 28 Jul 2017 · 19 Comments · Source: keras-team/keras

When I try to load a model (keras.models.load_model()) that was saved by Keras on TensorFlow 1.1, using Keras with a TensorFlow 1.2 backend, I get the following error:

/usr/local/lib/python3.5/dist-packages/keras/models.py in load_model(filepath, custom_objects, compile)
    231             raise ValueError('No model found in config file.')
    232         model_config = json.loads(model_config.decode('utf-8'))
--> 233         model = model_from_config(model_config, custom_objects=custom_objects)
    234 
    235         # set weights

/usr/local/lib/python3.5/dist-packages/keras/models.py in model_from_config(config, custom_objects)
    305                         'Maybe you meant to use '
    306                         '`Sequential.from_config(config)`?')
--> 307     return layer_module.deserialize(config, custom_objects=custom_objects)
    308 
    309 

/usr/local/lib/python3.5/dist-packages/keras/layers/__init__.py in deserialize(config, custom_objects)
     52                                     module_objects=globs,
     53                                     custom_objects=custom_objects,
---> 54                                     printable_module_name='layer')

/usr/local/lib/python3.5/dist-packages/keras/utils/generic_utils.py in deserialize_keras_object(identifier, module_objects, custom_objects, printable_module_name)
    137                 return cls.from_config(config['config'],
    138                                        custom_objects=dict(list(_GLOBAL_CUSTOM_OBJECTS.items()) +
--> 139                                                            list(custom_objects.items())))
    140             with CustomObjectScope(custom_objects):
    141                 return cls.from_config(config['config'])

/usr/local/lib/python3.5/dist-packages/keras/engine/topology.py in from_config(cls, config, custom_objects)
   2448 
   2449         for layer_data in config['layers']:
-> 2450             process_layer(layer_data)
   2451 
   2452         name = config.get('name')

/usr/local/lib/python3.5/dist-packages/keras/engine/topology.py in process_layer(layer_data)
   2443                 if input_tensors:
   2444                     if len(input_tensors) == 1:
-> 2445                         layer(input_tensors[0], **kwargs)
   2446                     else:
   2447                         layer(input_tensors, **kwargs)

/usr/local/lib/python3.5/dist-packages/keras/engine/topology.py in __call__(self, inputs, **kwargs)
    567                                          '`layer.build(batch_input_shape)`')
    568                 if len(input_shapes) == 1:
--> 569                     self.build(input_shapes[0])
    570                 else:
    571                     self.build(input_shapes)

/usr/local/lib/python3.5/dist-packages/keras/layers/embeddings.py in build(self, input_shape)
     99             regularizer=self.embeddings_regularizer,
    100             constraint=self.embeddings_constraint,
--> 101             dtype=self.dtype)
    102         self.built = True
    103 

/usr/local/lib/python3.5/dist-packages/keras/legacy/interfaces.py in wrapper(*args, **kwargs)
     85                 warnings.warn('Update your `' + object_name +
     86                               '` call to the Keras 2 API: ' + signature, stacklevel=2)
---> 87             return func(*args, **kwargs)
     88         wrapper._original_function = func
     89         return wrapper

/usr/local/lib/python3.5/dist-packages/keras/engine/topology.py in add_weight(self, name, shape, dtype, initializer, regularizer, trainable, constraint)
    389         if dtype is None:
    390             dtype = K.floatx()
--> 391         weight = K.variable(initializer(shape), dtype=dtype, name=name)
    392         if regularizer is not None:
    393             self.add_loss(regularizer(weight))

/usr/local/lib/python3.5/dist-packages/keras/backend/tensorflow_backend.py in variable(value, dtype, name)
    319         v._uses_learning_phase = False
    320         return v
--> 321     v = tf.Variable(value, dtype=_convert_string_dtype(dtype), name=name)
    322     if isinstance(value, np.ndarray):
    323         v._keras_shape = value.shape

/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/variables.py in __init__(self, initial_value, trainable, collections, validate_shape, caching_device, name, variable_def, dtype, expected_shape, import_scope)
    198           name=name,
    199           dtype=dtype,
--> 200           expected_shape=expected_shape)
    201 
    202   def __repr__(self):

/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/variables.py in _init_from_args(self, initial_value, trainable, collections, validate_shape, caching_device, name, dtype, expected_shape)
    287         else:
    288           self._initial_value = ops.convert_to_tensor(
--> 289               initial_value, name="initial_value", dtype=dtype)
    290           shape = (self._initial_value.get_shape()
    291                    if validate_shape else tensor_shape.unknown_shape())

/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py in convert_to_tensor(value, dtype, name, preferred_dtype)
    674       name=name,
    675       preferred_dtype=preferred_dtype,
--> 676       as_ref=False)
    677 
    678 

/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py in internal_convert_to_tensor(value, dtype, name, as_ref, preferred_dtype)
    739 
    740         if ret is None:
--> 741           ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
    742 
    743         if ret is NotImplemented:

/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py in _TensorTensorConversionFunction(t, dtype, name, as_ref)
    612     raise ValueError(
    613         "Tensor conversion requested dtype %s for Tensor with dtype %s: %r"
--> 614         % (dtype.name, t.dtype.name, str(t)))
    615   return t
    616 

ValueError: Tensor conversion requested dtype int32 for Tensor with dtype float32: 'Tensor("embedding_1/random_uniform:0", shape=(5000, 60), dtype=float32)'
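
For reference, the failure is triggered by the plain loading call itself; a minimal sketch (the file name is a placeholder for the HDF5 file written by model.save() under the TensorFlow 1.1 setup):

from keras.models import load_model

# 'my_model.h5' is a placeholder for the model file saved under Keras/TensorFlow 1.1.
model = load_model('my_model.h5')  # raises the ValueError above when loaded under TensorFlow 1.2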

All 19 comments

Same problem.

Not sure if this is the general case, but I observed this error when training models using Keras 2.0.5 and loading in Keras 2.0.6. If I retrain the model in 2.0.6, I can successfully reload the model using keras.models.load_model().

Same problem.

Same problem.

I get the same issue when using a model saved in Keras 2.0.5 and loading it in Keras 2.0.6.

The workaround I found is to save only the weights, recreate the model in code, and load just the weights.
It seems that the problem is caused by the TF upgrade (from 1.1 to 1.2) rather than by Keras, though I might be wrong.
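
A minimal sketch of that workaround (build_model() and the file names are placeholders for your own model-construction code and paths):

# On the setup that trained the model: keep only the weights.
model.save_weights('model_weights.h5')

# On the upgraded setup: rebuild the same architecture in code,
# then load just the weights, skipping the failing config deserialization.
new_model = build_model()                  # placeholder for your own architecture code
new_model.load_weights('model_weights.h5')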

Is there a workaround for this issue?

Switching to Keras 2.0.5 worked for me.

What is the actual solution to this issue? I am facing the same problem.

Update and retrain.

@ft512835 What do you mean by "update and retrain"? How can I fix the error?

@HangsunKim which versions of Keras and TensorFlow are you using?

@ft512835
I'm using Keras 2.1.2 with TensorFlow 1.1.0 as the backend.

@HangsunKim Is it possible that you trained the model using a different version of Keras and TensorFlow? I observed this problem after switching to a new version of Keras.

Hey,
The error seems to be caused by the Keras version and not the backend, as I have tried switching the backend from TensorFlow to Theano and different versions of both.
One-line reason: if you have trained a model on Keras <= 2.0.5 and try to load it on Keras > 2.0.5, it is going to throw an error.
One-line solution: if you still have the data, retrain the model on Keras > 2.0.5; otherwise install Keras <= 2.0.5.
Thanks,
Aditya

I think @adityac8 is right; rolling back to the lower Keras version worked for me.

If anyone is still facing this issue even after training and loading on the same versions of Keras and TensorFlow (which I did), manually casting the tensor to dtype float32 worked for me.

Here is a sample code snippet resembling my original problem (using the functional API):

import tensorflow as tf
from keras.layers import Input, Lambda, Dense
from keras.models import Model

var = tf.constant(1, shape=[64, 512])                 # int32 tensor fed as the model input
inp = Input(tensor=var)
inp1 = Lambda(lambda x: tf.cast(x, tf.float32))(inp)  # cast to float32 before the Dense layer
dense = Dense(1, activation='sigmoid')(inp1)

model = Model(inp, dense)
model.compile(...)
model.fit(...)

The Lambda layer is used to convert the TensorFlow tensor into a Keras one (and here it also performs the cast to float32).

Any workarounds to this problem yet? I have the entire model saved instead of just the weights.
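
If you can rebuild the architecture in code, one possible escape hatch is that Model.load_weights() in Keras 2.x usually also reads the weights out of an HDF5 file written by model.save(). A hedged sketch, assuming your Keras build supports this; build_model() and the file name are placeholders:

# build_model() is a placeholder for code that recreates the original architecture.
model = build_model()

# In many Keras 2.x releases, load_weights() also accepts a full-model HDF5 file,
# bypassing the config deserialization that raises the dtype error here.
model.load_weights('full_model.h5')  # placeholder path for the saved full model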

I'm trying to train on images with train.py using TensorFlow and I'm getting the same error. Can someone tell me when and why this error occurs?
Thanks.
